Search results for: estimated model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2284

364 3D GIS Participatory Mapping and Conflict LADM: Comparative Analysis of Land Policies and Survey Procedures Applied by the Igorots, NCIP, and DENR to Itogon Ancestral Domain Boundaries

Authors: Deniz A. Apostol, Denyl A. Apostol, Oliver T. Macapinlac, George S. Katigbak

Abstract:

Ang lupa ay buhay at ang buhay ay lupa (land is life and life is land). Based on the 2015 census, the Indigenous Peoples (IPs) population in the Philippines is estimated at 11.3-20.2 million. They hail from various regions and possess distinct cultures, but encounter shared struggles in territorial disputes. Itogon, the largest Benguet municipality, is home to the Ibaloi, Kankanaey, and other Igorot tribes. Despite having three (3) Ancestral Domains (ADs), Itogon is predominantly labeled as timberland or forest. These overlapping land classifications highlight inconsistencies in national laws and jurisdictions. This study aims to analyze the surveying procedures used by the Igorots, NCIP, and DENR in mapping the Itogon AD boundaries, show land boundary delineation conflicts, propose surveying guidelines, and recommend 3D Participatory Mapping as a geomatics solution for updated AD reference maps. Interpretative Phenomenological Analysis (IPA), Comparative Legal Analysis (CLA), and Map Overlay Analysis (MOA) were utilized to examine the interviews, compare land policies and surveying procedures, and identify differences and overlaps in conflicting land boundaries. In the IPA, the master themes identified were AD Definition (rights, responsibilities, restrictions), AD Overlaps (land classifications, political boundaries, ancestral domains, land laws/policies), and Other Conflicts (with other agencies, misinterpretations, suggestions), as considerations for mapping ADs. The CLA focused on conflicting surveying procedures: AD definitions, surveying equipment, surveying methods, map projections, order of accuracy, monuments, survey parties, and pre-survey, survey proper, and post-survey procedures. The MOA quantified the land area percentage of conflicting areas, showcasing the impact of misaligned surveying procedures. The findings are summarized through a Land Administration Domain Model (LADM) conflict representation for AD versus AD and AD versus political boundary overlaps.
The products of this study are the identification of land conflict factors, survey guideline recommendations, and contested land area computations. These can serve as references for revising survey manuals, updating AD Sustainable Development and Protection Plans, and amending laws.
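
The map overlay step can be sketched numerically. The following is a minimal, hypothetical illustration of how a contested-area percentage can be computed from two overlapping boundary extents; real AD boundaries are irregular polygons handled in GIS software, so the axis-aligned rectangles and all coordinates here are purely a stand-in.

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

# Hypothetical extents (map units) of one AD boundary as delineated by two
# parties; the area outside the shared region is the contested portion.
ad_igorot = (0, 0, 10, 8)   # 80 square units
ad_denr   = (3, 2, 13, 10)  # 80 square units

shared = overlap_area(ad_igorot, ad_denr)
area_igorot = (ad_igorot[2] - ad_igorot[0]) * (ad_igorot[3] - ad_igorot[1])
contested_pct = 100 * (1 - shared / area_igorot)
print(f"shared: {shared} sq units, contested: {contested_pct:.1f}% of the AD")
```

The same agreement/disagreement bookkeeping, applied to the actual survey polygons, yields the contested land area computations reported by the study.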

Keywords: ancestral domain, gis, indigenous people, land policies, participatory mapping, surveying, survey procedures

Procedia PDF Downloads 64
363 Role of Grey Scale Ultrasound Including Elastography in Grading the Severity of Carpal Tunnel Syndrome - A Comparative Cross-sectional Study

Authors: Arjun Prakash, Vinutha H., Karthik N.

Abstract:

BACKGROUND: Carpal tunnel syndrome (CTS) is a common entrapment neuropathy with an estimated prevalence of 0.6-5.8% in the general adult population. It is caused by compression of the Median Nerve (MN) at the wrist as it passes through a narrow osteofibrous canal. Presently, the diagnosis is established by clinical symptoms and physical examination, and a Nerve Conduction Study (NCS) is used to assess severity. However, NCS is considered painful, time-consuming and expensive, with a false-negative rate between 16% and 34%. Ultrasonography (USG) is now increasingly used as a diagnostic tool in CTS due to its non-invasive nature, greater accessibility and relatively low cost. Elastography is a newer USG modality that helps assess the stiffness of tissues; however, there is limited literature on its applications in peripheral nerves. OBJECTIVES: Our objectives were to measure the Cross-Sectional Area (CSA) and elasticity of the MN at the carpal tunnel using grey scale Ultrasonography (USG), Strain Elastography (SE) and Shear Wave Elastography (SWE). We also attempted to independently evaluate the role of grey scale USG, SE and SWE in grading the severity of CTS, keeping NCS as the gold standard. MATERIALS AND METHODS: After approval from the Institutional Ethics Review Board, we conducted a comparative cross-sectional study over a period of 18 months. The participants were divided into two groups. Group A consisted of 54 patients with clinically diagnosed CTS who underwent NCS, and Group B consisted of 50 controls without any clinical symptoms of CTS. All ultrasound examinations were performed on a SAMSUNG RS 80 EVO ultrasound machine with a 2-9 MHz linear probe. In both groups, the CSA of the MN was measured on grey scale USG, and its elasticity was measured at the carpal tunnel (in terms of strain ratio and shear modulus).
The variables were compared between the two groups using the independent t-test, and subgroup analyses were performed using one-way analysis of variance. Receiver operating characteristic (ROC) curves were used to evaluate the diagnostic performance of each variable. RESULTS: The mean CSA of the MN was 13.60 ± 3.201 mm² and 9.17 ± 1.665 mm² in Group A and Group B, respectively (p < 0.001). The mean SWE was 30.65 ± 12.996 kPa and 17.33 ± 2.919 kPa in Group A and Group B, respectively (p < 0.001), and the mean strain ratio was 7.545 ± 2.017 and 5.802 ± 1.153 in Group A and Group B, respectively (p < 0.001). CONCLUSION: The combined use of grey scale USG, SE and SWE is extremely useful in grading the severity of CTS and can be used as a painless and cost-effective alternative to NCS. Early diagnosis and grading of CTS and effective treatment are essential to avoid permanent nerve damage and functional disability.
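
As a toy illustration of the ROC analysis used here, the area under the curve for a single measurement (for example, MN cross-sectional area) can be computed directly from the two groups via the rank (Mann-Whitney) formulation. The values below are invented, not the study data.

```python
def roc_auc(cases, controls):
    """AUC = probability a random case scores above a random control (ties count 0.5)."""
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical CSA values (mm²) for CTS patients vs. healthy controls
csa_cts      = [13.6, 15.2, 11.8, 14.1, 10.9]
csa_controls = [9.2, 8.7, 10.1, 9.8, 11.0]

auc = roc_auc(csa_cts, csa_controls)
print(f"AUC = {auc:.2f}")
```

An AUC near 1.0 indicates the measurement separates patients from controls almost perfectly; 0.5 would indicate no discrimination.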

Keywords: carpal tunnel, ultrasound, elastography, nerve conduction study

Procedia PDF Downloads 74
362 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan targeted conservation programs for soil and water resources.
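
The GLUE procedure can be sketched as follows: sample many candidate source-contribution vectors, score each against the tracer mass balance, retain the "behavioral" ones, and summarize their spread. The tracer concentrations, threshold, and true proportions below are invented for illustration and do not come from the Mehran catchment data.

```python
import random

random.seed(1)

# Hypothetical mean concentrations of two geochemical tracers for three
# sub-basin sources, and a synthetic target mixture with known contributions.
sources = [(10.0, 50.0), (20.0, 30.0), (40.0, 10.0)]
true_p = (0.2, 0.1, 0.7)
mixture = tuple(sum(p * s[j] for p, s in zip(true_p, sources)) for j in range(2))

def predict(p):
    """Tracer concentrations implied by contribution vector p (linear mixing)."""
    return tuple(sum(pi * s[j] for pi, s in zip(p, sources)) for j in range(2))

behavioral = []
for _ in range(20000):
    raw = [random.random() for _ in sources]
    total = sum(raw)
    p = [r / total for r in raw]               # candidate proportions, sum to 1
    err = sum((m - q) ** 2 for m, q in zip(mixture, predict(p)))
    if err < 1.0:                              # retain only "behavioral" candidates
        behavioral.append(p)

means = [sum(p[i] for p in behavioral) / len(behavioral) for i in range(3)]
print("posterior mean contributions:", [round(m, 2) for m in means])
```

The spread of the retained candidates (here summarized only by the mean) is what gives GLUE its uncertainty ranges, such as the 1-42% band reported for the western sub-basin.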

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 54
361 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria, the most important of which are the accuracy of skew angle detection, the range of detectable skew angles, processing speed, computational complexity and, consequently, the memory space used. The standard Hough Transform has been successfully implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angle step size is; a finer step consumes more time and memory space for increased accuracy, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the tradeoff between memory space, running time and accuracy. Our algorithm starts with a first angle estimate, accurate to zero decimal places, using the standard Hough Transform algorithm, achieving minimal running time and space but lacking relative accuracy.
Then, to increase accuracy, supposing the estimated angle found using the basic Hough algorithm is x degrees, we re-run the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented using MATLAB. The memory space estimates and processing times are also tabulated, with skew angles assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithms, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 132
360 Tuning of Indirect Exchange Coupling in FePt/Al₂O₃/Fe₃Pt System

Authors: Rajan Goyal, S. Lamba, S. Annapoorni

Abstract:

An indirect exchange coupled system consists of two ferromagnetic layers separated by a non-magnetic spacer layer. The exchange coupling may be either ferromagnetic or antiferromagnetic depending on the thickness of the spacer layer. In the present work, the strength of exchange coupling in FePt/Al₂O₃/Fe₃Pt has been investigated by varying the thickness of the spacer layer Al₂O₃. The FePt/Al₂O₃/Fe₃Pt trilayer structure was fabricated on a Si <100> single crystal substrate using the sputtering technique. The thicknesses of FePt and Fe₃Pt were fixed at 60 nm and 2 nm, respectively, while the thickness of the Al₂O₃ spacer layer was varied from 0 to 16 nm. The normalized hysteresis loops recorded at room temperature, both in the in-plane and out-of-plane configurations, reveal that the orientation of the easy axis lies along the plane of the film. It is observed that the hysteresis loop for ts = 0 nm does not exhibit any knee around H = 0, indicating that the hard FePt layer and soft Fe₃Pt layer are strongly exchange coupled. However, the insertion of an Al₂O₃ spacer layer of thickness ts = 0.7 nm results in the appearance of a minor knee around H = 0, suggesting a weakening of the exchange coupling between FePt and Fe₃Pt. The disappearance of the knee with further increase in spacer thickness up to 8 nm suggests the co-existence of ferromagnetic (FM) and antiferromagnetic (AFM) exchange interactions between FePt and Fe₃Pt. In addition, the out-of-plane hysteresis loop shows an asymmetry around H = 0. The exchange field Hex = (Hc↑ - Hc↓)/2, where Hc↑ and Hc↓ are the coercivities estimated from the lower and upper branches of the hysteresis loop, increases from ~150 Oe to ~700 Oe. This behavior may be attributed to uncompensated moments in the hard FePt layer and soft Fe₃Pt layer at the interface. A better insight into the variation in indirect exchange coupling has been obtained using recoil curves.
Almost closed recoil curves are obtained for ts = 0 nm up to a reverse field of ~5 kOe. On the other hand, the appearance of appreciably open recoil curves at a lower reverse field of ~4 kOe for ts = 0.7 nm indicates that the uncoupled soft phase undergoes irreversible magnetization reversal at a lower reverse field, again suggesting a weakening of the exchange coupling. The openness of the recoil curves decreases with increasing spacer thickness up to 8 nm. This behavior may be attributed to the competition between FM and AFM exchange interactions: the FM exchange coupling between FePt and Fe₃Pt due to the porous nature of Al₂O₃ decreases much more slowly than the weak AFM coupling due to interaction between Fe ions of FePt and Fe₃Pt via O ions of Al₂O₃. The hysteresis loop has been simulated using a Monte Carlo method based on the Metropolis algorithm to investigate the variation in the strength of exchange coupling in the FePt/Al₂O₃/Fe₃Pt trilayer system.
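
As a schematic of Metropolis-based loop simulation, the toy model below sweeps the field down over a periodic 1-D Ising chain and records the magnetization. It reproduces remanence and coercivity qualitatively, but it is far simpler than the coupled hard/soft trilayer model used in the study; all parameter values are illustrative.

```python
import math
import random

random.seed(42)

N, J, T = 50, 1.0, 0.2   # spins, exchange coupling, temperature (k_B = 1)
spins = [1] * N          # start saturated; the field is swept down from +3

def sweep(H):
    """One Metropolis pass over a periodic Ising chain at applied field H."""
    for i in range(N):
        nb = spins[(i - 1) % N] + spins[(i + 1) % N]
        dE = 2 * spins[i] * (J * nb + H)   # energy cost of flipping spin i
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i] = -spins[i]

def magnetization(H, passes=20):
    for _ in range(passes):
        sweep(H)
    return sum(spins) / N

# Descending branch of the loop: metastability produces remanence and coercivity.
fields = [h / 10 for h in range(30, -31, -1)]
branch_down = [(H, magnetization(H)) for H in fields]
m_remanent = dict(branch_down)[0.0]
print(f"M(H=0) on the descending branch: {m_remanent:+.2f}")
```

Because flipping a spin against its aligned neighbors costs energy, the magnetization stays positive well past H = 0 and only reverses once the field overcomes the exchange barrier, which is the mechanism behind the simulated loops.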

Keywords: indirect exchange coupling, MH loop, Monte Carlo simulation, recoil curve

Procedia PDF Downloads 171
359 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure and groundwater pore pressure all influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and obtain markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
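
As an illustration of the Monte-Carlo approach to PF (not the case-study model), the sketch below samples shear-strength parameters for a simplified dry planar failure mechanism and counts the fraction of realizations with FS < 1. All geometry and input statistics are invented; a real assessment would use a calibrated limit equilibrium or numerical model.

```python
import math
import random

random.seed(0)

# Hypothetical planar-failure geometry and material statistics (illustrative only)
gamma = 26.0             # unit weight, kN/m³
height = 60.0            # slope height, m
psi = math.radians(35)   # dip of the failure plane

def factor_of_safety(c, phi_deg):
    """Simplified planar-slide FS: shear resistance over driving shear stress."""
    normal = gamma * height * math.cos(psi)   # normal stress term, kPa (simplified)
    driving = gamma * height * math.sin(psi)  # driving shear term, kPa
    return (c + normal * math.tan(math.radians(phi_deg))) / driving

n = 100_000
failures = sum(
    1
    for _ in range(n)
    if factor_of_safety(max(random.gauss(300.0, 60.0), 0.0),  # cohesion, kPa
                        random.gauss(30.0, 4.0)) < 1.0        # friction angle, deg
)
pf = failures / n
print(f"probability of failure ≈ {pf:.3f}")
```

Note that the deterministic FS at the mean inputs exceeds 1, yet a non-trivial fraction of realizations still fail, which is exactly why PF and FS can tell different stories, and why the choice of which inputs are treated probabilistically drives the result.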

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 198
358 Consumer Reactions to Hospitality Social Robots Across Cultures

Authors: Lisa C. Wan

Abstract:

To address customers’ safety concerns, more and more hospitality companies are using service robots to provide contactless services. For many companies, the switch from human employees to service robots to lower the contagion risk during and after the pandemic may be permanent. The market size for hospitality service robots is estimated to reach US$3,083 million by 2030, registering a CAGR of 25.5% from 2021 to 2030. While service robots may effectively reduce interpersonal contact and health risk, they also eliminate the social interactions desired by customers. A recent survey revealed that more than 60% of Americans feel lonely during the pandemic. People who are traveling can also feel isolated when they are at a hotel far away from home. It is therefore important for hospitality companies to understand whether and how social robots can remedy deprived social connection, not only during a pandemic but also on trips away from home in the post-pandemic future. This study complements the extant hospitality literature on service robots by examining how service robots can forge social connections with customers. The service robots we are concerned with are those that can interact and communicate with humans; we broadly refer to them as social robots. We define a social robot as one that is equipped with interaction capabilities: it can either be one that directly interacts with the consumer or one through which the consumer can interact with other humans. Drawing on theories of mind perception, we propose that service robots can foster social connectedness and increase the perceived social competence of the robot, but that these effects will vary across cultures. By applying theories of mind perception and cultural dimensions to the hospitality setting, this study shows that service robots equipped with a social connection function receive a more favorable evaluation from consumers and enhance their intention to visit a hotel.
The more favorable reaction to social robots is stronger for collectivists (i.e., Asians) than individualists (i.e., Westerners). To our knowledge, this is among the first studies to investigate the impact of culture on consumer reactions to social robots in the hospitality and tourism context. Moreover, this research extends the literature by examining whether people imbue non-human entities (i.e., telepresence social robots) with social competence. Because social robots that foster social connection with humans are still rare in hospitality and tourism, this is an underexplored research area. Our study is the first to propose that, just like their human counterparts who possess relevant social skills, social robots’ interaction capabilities (e.g., telepresence robots) are used to infer social competence. More studies will be conducted to examine consumer reactions to humanoid (vs. non-humanoid) robots in hospitality settings to generalize our research findings.

Keywords: service robots, COVID-19, social connection, cultures

Procedia PDF Downloads 86
357 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag, because the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the earth travels at about 368 km/s relative to the CMB (the Local Group at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = aT⁴/3, where Eγ is the energy density of the photon gas and a is the radiation constant (a = 4σ/c, with σ the Stefan-Boltzmann constant). The observed CMB dipole therefore implies a pressure difference between the two sides of the earth, resulting in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the standard attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 distance modulus (µ) versus redshift (z) data points compiled from 643 supernova and 105 γ-ray burst observations, with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
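
The photon-gas pressure invoked above is easy to check numerically. The sketch below evaluates Pγ = aT⁴/3 for the CMB (with a = 4σ/c) and the front-to-back pressure difference implied by the ±0.35 mK dipole; it is a units check on the stated formula, not part of the model fit.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m⁻² K⁻⁴
C = 2.99792458e8         # speed of light, m/s
A_RAD = 4 * SIGMA / C    # radiation constant a, J m⁻³ K⁻⁴

def photon_pressure(T):
    """Pressure of a thermalized photon gas: P = a T⁴ / 3 (Pa)."""
    return A_RAD * T**4 / 3

T_cmb, dT = 2.725, 0.35e-3
p = photon_pressure(T_cmb)
dp = photon_pressure(T_cmb + dT) - photon_pressure(T_cmb - dT)  # dipole asymmetry
print(f"P_CMB ≈ {p:.3e} Pa, front-back ΔP ≈ {dp:.3e} Pa")
```

The base pressure comes out near 1.4×10⁻¹⁴ Pa and the dipole asymmetry roughly three orders of magnitude smaller, consistent with the claim that the drag on a macroscopic body like the earth is tiny.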

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 110
356 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking

Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel

Abstract:

Divergent thinking, a cognitive component of creativity, and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles: positive schizotypal traits with reactive cognitive control and attentional flexibility, and autistic traits with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating greater context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained/anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42).
Generalized linear models revealed a three-way interaction of autistic traits, positive schizotypal traits, and proactive cognitive control on context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who are able to tap both systematic and flexible processing modes within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.
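
The context-adaptability measure described above can be made concrete: compute the proportion of unique to total responses per context, then take the variance of those proportions across the four contexts (lower variance means uniqueness is spread evenly, i.e., higher adaptability). The response counts below are invented, not participant data.

```python
from statistics import pvariance

# Hypothetical (unique, total) response counts in four contexts for two profiles
participant_even  = [(6, 10), (5, 10), (6, 10), (5, 10)]   # uniqueness spread evenly
participant_spiky = [(9, 10), (1, 10), (8, 10), (2, 10)]   # uniqueness concentrated

def context_adaptability(counts):
    """Population variance of per-context unique/total proportions (lower = more adaptable)."""
    return pvariance([u / t for u, t in counts])

v_even = context_adaptability(participant_even)
v_spiky = context_adaptability(participant_spiky)
print(f"even: {v_even:.4f}, spiky: {v_spiky:.4f}")
```

Both hypothetical participants produce unique responses overall, but only the first does so consistently across contexts, which is the distinction the variance measure captures.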

Keywords: autism, schizotypy, creativity, cognitive control

Procedia PDF Downloads 121
355 Calcium Release-Activated Calcium Channels as a Target in Treatment of Allergic Asthma

Authors: Martina Šutovská, Marta Jošková, Ivana Kazimierová, Lenka Pappová, Maroš Adamkov, Soňa Fraňová

Abstract:

Bronchial asthma is characterized by increased bronchoconstrictor responses to provoking agonists, airway inflammation and remodeling. All these processes involve Ca²⁺ influx through Ca²⁺-release-activated Ca²⁺ channels (CRAC), which are widely expressed in immune cells, respiratory epithelium and airway smooth muscle (ASM) cells. Our previous study pointed to the possible therapeutic potency of CRAC blockers in an experimental guinea pig asthma model. The present work analyzed the complex anti-asthmatic effect of a long-term administered CRAC blocker, including its impact on allergic inflammation, airway hyperreactivity, remodeling and mucociliary clearance. Ovalbumin-induced allergic inflammation of the airways according to Franova et al. was followed by 14 days of administration of the CRAC blocker 3-fluoropyridine-4-carboxylic acid (FPCA) at a dose of 1.5 mg/kg bw. For comparative purposes, salbutamol, budesonide and saline were applied to control groups. The anti-inflammatory effect of FPCA was estimated by changes in IL-4, IL-5, IL-13 and TNF-α in serum and bronchoalveolar lavage fluid (BALF), analyzed by Bio-Plex® assay, as well as by immunohistochemical staining for tryptase and c-Fos positivity in pulmonary samples. Airway hyperreactivity was evaluated in vivo by the method of Pennock et al. and in vitro by the organ tissue bath method. Immunohistochemical changes in the ASM actin and collagen III layers, as well as mucin secretion, were used to evaluate the anti-remodeling effect of FPCA. The measurement of ciliary beat frequency (CBF) in vitro using LabVIEW™ software determined the impact on mucociliary clearance. Long-term administration of FPCA to sensitized animals resulted in: i. Significant decrease in cytokine levels and in tryptase and c-Fos positivity, similar to the effect of budesonide; ii. Meaningful decrease in basal and bronchoconstrictor-induced airway hyperreactivity in vivo and in vitro, comparable to salbutamol; iii. Significant inhibition of airway remodeling parameters; iv.
Insignificant changes in CBF. All these findings confirmed the complex anti-asthmatic effect of the CRAC channel blocker and evidenced these channels as a rational target in the treatment of allergic bronchial asthma.

Keywords: allergic asthma, CRAC channels, cytokines, respiratory epithelium

Procedia PDF Downloads 505
354 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)

Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar

Abstract:

Erosion modeling equations are typically derived from controlled experimental trials of solid particles entrained in single-phase or multi-phase flows. These equations are later employed to predict the erosion damage caused by the continuous impact of solid particles entrained in a stream flow. It is well known that the particle impact angle and velocity do not change drastically in gas-sand flows; hence erosion can be predicted accurately there. On the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles’ trajectories and consequently the erosion rate predictions. Particle tracking models and erosion equations are therefore frequently applied together to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), at a 5% flow concentration, was tracked with the discrete phase model (DPM). To accomplish successful erosion modeling, three erosion equations from the literature were introduced into ANSYS Fluent to predict the velocity surge at the screen wire slots and estimate the maximum erosion rates on the screen surface. Results for turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress, and flow velocity vectors are presented and discussed. Moreover, the particle tracks and path-lines are also demonstrated based on their residence time, velocity magnitude, and flow turbulence. On one hand, results from the three erosion equations show similarities in screen erosion patterns, locations, and DPM concentrations.
On the other hand, the model equations estimated slightly different values for the maximum erosion rate of the wire-wrapped screen. This is because each erosion equation was developed under assumptions controlled by its own experimental lab conditions.
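
Erosion equations of the kind compared here generally share the form ER = K·Vⁿ·f(θ): an erosion ratio proportional to a power of impact velocity times an impact-angle function. The sketch below uses invented constants and a deliberately simple illustrative angle function, not any of the three literature equations used in the study, to show why the predicted wire-slot velocity surge dominates the result.

```python
import math

def erosion_ratio(v, angle_deg, K=2.0e-9, n=2.6):
    """Generic power-law erosion model: mass eroded per mass of impacting sand.
    K, n, and the angle function are illustrative placeholders."""
    theta = math.radians(angle_deg)
    f = math.sin(theta) * (2 - math.sin(theta))   # illustrative angle function
    return K * v**n * f

# Velocity sensitivity: doubling the impact velocity multiplies ER by 2**n ≈ 6.
for v in (5.0, 10.0, 20.0):
    print(f"v = {v:4.1f} m/s -> ER = {erosion_ratio(v, 45):.3e}")
```

Because the velocity exponent n typically lies between about 2 and 3, small differences in how each model resolves the slot velocity surge translate into noticeably different maximum erosion rates, matching the divergence reported above.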

Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow

Procedia PDF Downloads 142
353 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) at reduced cost compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data were collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data were processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy.
Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from the commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and differences among orientation angles were insignificant, within less than three arc-seconds. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
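The DEM differencing step can be sketched as a cell-by-cell comparison of two co-registered elevation grids, reporting the mean vertical shift and RMSE; the grid values below are illustrative, not the study's data:

```python
import math

def dem_difference_stats(dem_a, dem_b):
    """Cell-by-cell difference between two co-registered DEM grids
    (lists of rows). Returns (mean vertical shift, RMSE)."""
    diffs = [a - b
             for row_a, row_b in zip(dem_a, dem_b)
             for a, b in zip(row_a, row_b)]
    mean = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean, rmse
```

A near-constant mean shift with an RMSE of the same magnitude, as reported here, indicates a systematic vertical offset rather than random surface noise.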

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 268
352 Delhi Metro: A Race towards Zero Emission

Authors: Pramit Garg, Vikas Kumar

Abstract:

In December 2015, the members of the United Nations Framework Convention on Climate Change (UNFCCC) unanimously adopted the historic Paris Agreement. Under the convention, 197 countries have followed the guidelines of the agreement and have agreed to reduce the use of fossil fuels, cut carbon emissions to reach net carbon neutrality by 2050, and limit the rise in global temperature to 2°C by the year 2100. Globally, transport accounts for 23% of the energy-related CO2 that feeds global warming. Decarbonization of the transport sector is an essential step towards achieving India's nationally determined contributions and net zero emissions by 2050. Metro rail systems are playing a vital role in the decarbonization of the transport sector as they create metro cities for the "21st-century world" that could ensure "mobility, connectivity, productivity, safety and sustainability" for the populace. Metro rail was introduced in Delhi in 2002 to decarbonize the Delhi-National Capital Region and to provide a sustainable mode of public transportation. Metro rail projects significantly contribute to pollution reduction and are thus a prerequisite for sustainable development. The Delhi Metro is the first metro system in the world to earn carbon credits from Clean Development Mechanism (CDM) projects registered under the UNFCCC. A good metro project with reasonable network coverage attracts a modal shift from various private modes, putting fewer vehicles on the road and thus restraining pollution at the source. The absence of greenhouse gas emissions from the vehicles of modal-shift passengers and lower emissions from decongested roads contribute to the overall reduction in atmospheric pollution. The reduction in emissions over the horizon years 2002 to 2019 has been estimated using emission standards and deterioration factors for different categories of vehicles.
Our results indicate that the Delhi Metro system has reduced motorized road trips by approximately 17.3%, resulting in a significant emission reduction. Overall, Delhi Metro, with an immediate catchment area covering 17% of the National Capital Territory of Delhi (NCTD), is currently helping to avoid 387 tonnes of emissions per day and 141.2 kilotonnes of emissions yearly. The findings indicate that the metro rail system is driving cities towards a more livable environment.
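The avoided-emissions arithmetic described above (emission standards combined with vehicle deterioration factors) can be sketched as trips × trip length × an age-adjusted emission factor; all numbers below are illustrative placeholders, not the study's values:

```python
def avoided_emissions_tonnes(shifted_trips, avg_trip_km, ef_g_per_km,
                             annual_deterioration, age_years):
    """Emissions avoided by modal shift, in tonnes.
    The base emission factor (g/km) is aged by a compounding
    deterioration rate, mimicking category-wise deterioration factors.
    All parameter values are illustrative, not from the study."""
    ef_aged = ef_g_per_km * (1 + annual_deterioration) ** age_years
    return shifted_trips * avg_trip_km * ef_aged / 1e6  # g -> tonnes
```

In a full estimate this would be summed over vehicle categories (car, two-wheeler, bus) with category-specific factors and occupancy rates.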

Keywords: Delhi metro, GHG emission, sustainable public transport, urban transport

Procedia PDF Downloads 104
351 Integrating Geographic Information into Diabetes Disease Management

Authors: Tsu-Yun Chiu, Tsung-Hsueh Lu, Tain-Junn Cheng

Abstract:

Background: Traditional chronic disease management paid little attention to the effects of geographic factors on compliance with treatment regimes, which resulted in geographic inequality in the outcomes of chronic disease management. This study aims to examine the geographic distribution and clustering of quality indicators of diabetes care. Method: We first extracted addresses, demographic information and quality-of-care indicators (number of visits, complications, prescription and laboratory records) of patients with diabetes for 2014 from the medical information system of a medical center in Tainan City, Taiwan, and the patients' addresses were transformed into district- and village-level data. We then compared the differences in geographic distribution and clustering of quality-of-care indicators between districts and villages. In addition to the descriptive results, rate ratios and 95% confidence intervals (CI) were estimated for the indices of care in order to compare the quality of diabetes care among different areas. Results: A total of 23,588 patients with diabetes were extracted from the hospital data system, of whom 12,716 patients' information and medical records were included in the following analysis. More than half of the subjects in this study were male and between 60-79 years old. Furthermore, the quality of diabetes care did indeed vary by geographic level: at the smaller (village) level, clustered areas could be pointed out more specifically. Fuguo Village (of Yongkang District) and Zhiyi Village (of Sinhua District) were found to be "hotspots" for nephropathy and cerebrovascular disease, while Wangliau Village and Erwang Village (of Yongkang District) were "coldspots" with the lowest proportion of ≥80% compliance with blood lipid examination. On the other hand, Yuping Village (in Anping District) was the area with the lowest proportion of ≥80% compliance with all laboratory examinations.
Conclusion: Beyond examining the geographic distribution, calculating rate ratios and their 95% CIs is a useful and consistent method to test these associations. This information is useful for health planners, diabetes case managers and other affiliated practitioners in directing care resources to the areas most in need.
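The rate ratio with its 95% CI, as used here to compare areas, can be sketched assuming Poisson event counts (the standard error of the log rate ratio is then √(1/a + 1/b)); the counts below are made up for illustration:

```python
import math

def rate_ratio_ci(events_a, pop_a, events_b, pop_b, z=1.96):
    """Rate ratio between two areas with a 95% CI on the log scale,
    assuming Poisson-distributed event counts. Counts are illustrative."""
    rr = (events_a / pop_a) / (events_b / pop_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = rr * math.exp(-z * se_log)
    hi = rr * math.exp(z * se_log)
    return rr, lo, hi
```

A CI that excludes 1 flags a village whose complication rate differs significantly from the comparison area.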

Keywords: catchment area of healthcare, chronic disease management, Geographic information system, quality of diabetes care

Procedia PDF Downloads 263
350 Investigating the Effect of Plant Root Exudates and of Saponin on Polycyclic Aromatic Hydrocarbons Solubilization in Brownfield Contaminated Soils

Authors: Marie Davin, Marie-Laure Fauconnier, Gilles Colinet

Abstract:

In Wallonia, there are 6,000 estimated brownfields (rising to over 3.5 million in Europe) that require remediation. Polycyclic Aromatic Hydrocarbons (PAHs) are a class of recalcitrant carcinogenic/mutagenic organic compounds of major concern as they accumulate in the environment and represent 17% of all encountered pollutants. As an alternative to environmentally aggressive, expensive and often disruptive soil remediation strategies, a lot of research has been directed to developing techniques targeting organic pollutants. The following experiment, based on the observation that PAHs soil content decreases in the presence of plants, aimed at improving our understanding of the underlying mechanisms involved in phytoremediation. It focusses on plant root exudates and whether they improve PAHs solubilization, which would make them more available for bioremediation by soil microorganisms. The effect of saponin, a natural surfactant found in some plant roots such as members of the Fabaceae family, on PAHs solubilization was also investigated as part of the implementation of the experimental protocol. The experiments were conducted on soil collected from a brownfield in Saint-Ghislain (Belgium) and presenting weathered PAHs contamination. Samples of soil were extracted with different solutions containing either plant root exudates or commercial saponin. Extracted PAHs were determined in the different aqueous solutions using High-Performance Liquid Chromatography and Fluorimetric Detection (HPLC-FLD). Both root exudates of alfalfa (Medicago sativa L.) or red clover (Trifolium pratense L.) and commercial saponin were tested in different concentrations. Distilled water was used as a control. First of all, results show that PAHs are more extracted using saponin solutions than distilled water and that the amounts generally rise with the saponin concentration. However, the amount of each extracted compound diminishes as its molecular weight rises. 
It also appears that past a certain surfactant concentration, PAHs are less extracted. This suggests that saponin might be investigated as a washing agent in polluted soil remediation techniques, either for ex-situ or in-situ treatments, as an alternative to synthetic surfactants. On the other hand, preliminary results of experiments using plant root exudates also show differences in PAH solubilization compared to the control solution. Further results will allow discussion as to whether there are differences according to the exudates' provenance and concentrations.

Keywords: brownfield, Medicago sativa, phytoremediation, polycyclic aromatic hydrocarbons, root exudates, saponin, solubilization, Trifolium pratense

Procedia PDF Downloads 232
349 Impact of Unconditional Cash Transfer Scheme on the Food Security Status of the Elderly in Ekiti State, Nigeria

Authors: R. O. Babatunde, O. M. Igbalajobi, F. Matambalya

Abstract:

Moderate economic growth in developing and emerging countries has led to improvement in the food consumption and nutrition situation in the last two decades. Nevertheless, about 870 million people, with a quarter of them from Sub-Saharan Africa, are still suffering from hunger worldwide. As part of measures to reduce the widespread poverty and hunger, cash transfer programmes are now being implemented in many countries of the world. While nationwide cash transfer schemes are few in Sub-Saharan Africa generally, the available ones are more concentrated in East and Southern Africa. Much of the available literature on social protection had focused on the poverty impact of cash transfer schemes at the household level, with the larger proportion originating from Latin America. On the contrary, much less empirical studies have been conducted on the poverty impact of cash transfer in Sub-Saharan Africa, let alone on the food security and nutrition impact. To fill this gap in knowledge, this paper examines the impact of cash transfer on food security in Nigeria. As a case study, the paper analysed the Ekiti State Cash Transfer Scheme (ECTS). ECTS is an unconditional transfer scheme which was established in 2011 to directly provide cash transfer to elderly persons aged 65 years and above in Ekiti State of Nigeria. Using survey data collected in 2013, we analysed the impact of the scheme on food availability and dietary diversity of the beneficiary households. Descriptive and Propensity Score Matching (PSM) techniques were used to estimate the Average Treatment Effect (ATE) and Average Treatment Effect on the Treated (ATT) among the beneficiary and control groups. Thereafter, a model to test for the impact of participation in the cash transfer scheme on calorie availability and dietary diversity was estimated. 
The results indicate that while the households in the sample are clearly vulnerable, there were statistically significant differences between the beneficiary and control groups. For instance, monthly expenditure, calorie availability and dietary diversity were significantly larger among the beneficiary group, and consequently, the prevalence and depth of hunger were lower in that group. Econometric results indicate that the cash transfer has a positive and significant effect on food availability and dietary diversity in the households. Expanding the coverage of the present scheme to all eligible households in the country and incorporating cash transfers into a comprehensive hunger reduction policy would give it a greater impact in improving food security among the most vulnerable households in the country.
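The ATT step of the propensity score matching described above can be sketched as nearest-neighbor matching with replacement on pre-estimated propensity scores, then averaging the matched outcome differences; the scores and calorie outcomes below are hypothetical:

```python
def att_nearest_neighbor(treated, control):
    """treated/control: lists of (propensity_score, outcome) pairs.
    Match each treated unit to the control unit with the closest
    propensity score (with replacement) and average the outcome
    differences: the Average Treatment Effect on the Treated (ATT).
    Scores and outcomes are hypothetical illustrations."""
    diffs = []
    for ps_t, y_t in treated:
        ps_c, y_c = min(control, key=lambda c: abs(c[0] - ps_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)
```

In practice the scores would come from a logistic regression of scheme participation on household covariates, and a common-support restriction would be applied before matching.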

Keywords: calorie availability, cash transfers, dietary diversity, propensity score matching

Procedia PDF Downloads 360
348 Maternal Exposure to Bisphenol A and Its Association with Birth Outcomes

Authors: Yi-Ting Chen, Yu-Fang Huang, Pei-Wei Wang, Hai-Wei Liang, Chun-Hao Lai, Mei-Lien Chen

Abstract:

Background: Bisphenol A (BPA) is commonly used in consumer products, such as the inner coatings of cans and polycarbonate bottles. BPA is considered an endocrine-disrupting substance (EDS) that interferes with normal human hormones and may cause adverse effects on human health. Pregnant women and fetuses are groups susceptible to endocrine-disrupting substances, and prenatal exposure to BPA has been shown to affect the fetus through the placenta. Therefore, it is important to evaluate the potential health risk of fetal exposure to BPA during pregnancy. The aims of this study were (1) to determine the urinary concentration of BPA in pregnant women, and (2) to investigate the association between BPA exposure during pregnancy and birth outcomes. Methods: This study recruited 117 pregnant women and their fetuses from 2012 to 2014 from the Taiwan Maternal-Infant Cohort Study (TMICS). Maternal urine samples were collected in the third trimester, and questionnaires were used to collect socio-demographic characteristics, eating habits and medical conditions of the participants. Information about the birth outcomes of the fetus was obtained from medical records. For chemical analysis, BPA concentrations in urine were determined by off-line solid-phase extraction and ultra-performance liquid chromatography coupled with a Q-TOF mass spectrometer. The urinary concentrations were adjusted for creatinine. The association between maternal concentrations of BPA and birth outcomes was estimated using a logistic regression model. Results: The detection rate of BPA is 99%; the concentration ranges from 0.16 to 46.90 μg/g. The mean (SD) BPA level is 5.37 (6.42) μg/g creatinine. The mean ± SD of body weight, body length, head circumference, chest circumference and gestational age at birth are 3105.18 ± 339.53 g, 49.33 ± 1.90 cm, 34.16 ± 1.06 cm, 32.34 ± 1.37 cm and 38.58 ± 1.37 weeks, respectively.
After stratifying the exposure levels into two groups at the median, pregnant women in the higher-exposure group showed odds ratios for lower birth weight (OR=0.57, 95%CI=0.271-1.193), smaller chest circumference (OR=0.70, 95%CI=0.335-1.47) and shorter gestational age at birth (OR=0.46, 95%CI=0.191-1.114). However, none of the associations between BPA concentration and birth outcomes reached statistical significance (p < 0.05). Conclusions: This study presents prenatal BPA profiles of pregnant women and infants in northern Taiwan. Women with higher BPA concentrations tend to give birth to newborns with lower body weight, smaller chest circumference or shorter gestational age. More data will be included to verify the results. This report will also present the predictors of BPA concentrations for pregnant women.
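Odds ratios with Woolf-type 95% confidence intervals of the kind reported above can be reproduced from a 2x2 exposure-outcome table; the cell counts below are illustrative, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Returns (OR, CI lower, CI upper) via the Woolf logit method.
    Counts are illustrative."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, or_ * math.exp(-z * se_log), or_ * math.exp(z * se_log)
```

As in the abstract, a confidence interval that spans 1 means the association does not reach statistical significance.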

Keywords: bisphenol A, birth outcomes, biomonitoring, prenatal exposure

Procedia PDF Downloads 120
347 Adaption to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning

Authors: Jan Schmitt, Sophie Fischer

Abstract:

After the Corona pandemic, climate change is a further, long-lasting challenge that society must deal with. Ongoing climate change needs to be prevented; nevertheless, adaptation to the already changed climate conditions has to be addressed in many sectors. The Corona crisis has recently shown the decisive role of economic sectors with high value added. Hence, the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage- and cost-efficient manner within a network of global value creation can create vulnerability to climate-related disruptions. For example, the total damage costs after the 2011 flood disaster in Thailand, including costs for delivery failures, were estimated at 45 billion US dollars worldwide. German car manufacturers were also affected by supply bottlenecks and had to close their plants in Thailand for a short time; another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented, which should enable manufacturing companies to derive their own strategies for climate adaptation out of a mix of different actions. The game-based learning approach is designed using data from a regional study of small, medium and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany). From this, the actual state of climate adaptation efforts is evaluated. First, the results are used to collect individual actions for manufacturing companies; second, further actions can be identified. Then, a variety of climate adaptation activities can be clustered according to the company's scope of activity. The combination of different actions, e.g.
the renewal of the building envelope with regard to thermal insulation, with its benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and the characteristics of each action taken into their climate adaptation strategy. Indicators such as economic performance, ecology and stakeholder satisfaction compare the success of the respective measures in a competitive format with other virtual companies deriving their own strategies. "Playing through" climate change scenarios with targeted adaptation actions illustrates the impact of different actions and their combinations on the fictional company.

Keywords: business strategy, climate change, climate adaption, game-based learning

Procedia PDF Downloads 187
346 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based super alloys, such as commercial alloy IN 738 LC, are widely used in manufacturing of industrial gas turbine blades. With carefully designed microstructure and the existence of alloying elements, the blades show improved mechanical properties at high operating temperatures and corrosive environment. The aim of this work is to model and estimate these mechanical properties of IN 738 LC alloy solely based on simulations for projected heat treatment conditions or service conditions. The microstructure (size, fraction and frequency of gamma prime- γ′ and carbide phases in gamma- γ matrix, and grain size) of IN 738 LC needs to be optimized to improve the high temperature mechanical properties by heat treatment process. This process can be performed at different soaking temperature, time and cooling rates. In this work, micro-structural evolution studies were performed experimentally at various heat treatment process conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate provided by experimental heat treatment procedures were used as micro-structural simulation input. The results of this simulation were compared with the size, fraction and frequency of γ′ and carbide phases, and grain size provided by SEM (EDS module and mapping), EPMA (WDS module) and optical microscope for before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to fit the real time and theoretical findings. Thereby, it was possible to estimate the final micro-structure without any necessity to carry out the heat treatment experiment. The output of this microstructure simulation based on heat treatment was used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of precipitation, solid solution and grain boundary strengthening contributors in microstructure. 
Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the heat treatment conditions that best achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
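The yield-stress estimate described above (precipitation, solid-solution and grain-boundary strengthening contributions) is commonly modeled as a linear superposition with a Hall-Petch grain-size term. A minimal sketch with illustrative inputs, not the paper's calibrated model:

```python
import math

def yield_stress_mpa(sigma_0, k_hp, d_mm, d_sigma_ss, d_sigma_ppt):
    """Linear superposition of strengthening contributions (MPa):
    sigma_y = sigma_0 + k_HP / sqrt(d) + solid-solution + precipitate terms.
    k_hp in MPa*mm^0.5, grain size d_mm in mm. All values illustrative."""
    hall_petch = k_hp / math.sqrt(d_mm)
    return sigma_0 + hall_petch + d_sigma_ss + d_sigma_ppt
```

The precipitate term would itself be a function of the simulated γ′ size and fraction (e.g., an Orowan or shearing relation), which is how the microstructure simulation output feeds the property estimate.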

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 230
345 Assessing the Contribution of Informal Buildings to Energy Inefficiency in Kenya: A Case of Mukuru Slums

Authors: Bessy Thuranira

Abstract:

Buildings, as they are designed and used, may contribute to serious environmental problems because of excessive consumption of energy and other natural resources. Buildings in informal settlements in particular, due to their unplanned physical structure and design, have contributed significantly to the global energy problem, typified by high-level inefficiencies. Buildings are estimated to account for the largest share of total national electricity consumption in Africa. Over the last decade, assessments of energy consumption and efficiency have focused on formal and modern buildings. This study goes off the beaten path by focusing on energy use in informal settlements. Operationally, it sought to establish the contribution of informal buildings to overall energy consumption in the city and the country at large. The study was carried out in the Mukuru kwa Reuben informal settlement, where different settlement morphologies are distinctly manifested within a small locality. The research narrowed down to three villages (Mombasa, Kosovo and Railway villages) within the settlement that were representative of the different slum housing typologies. Due to the unpredictable and informal nature of slums, this study takes a multi-methodology approach. Detailed energy audits and measurements are carried out to estimate total building consumption and to document building design and envelope, typology, materials and occupancy levels. Moreover, the study uses semi-structured interviews to assess energy supply, cost, access and consumption patterns. Observations and photographs are also used to shed more light on these parameters. The study reveals high energy inefficiencies in slum buildings, mainly related to sub-standard equipment and appliances, building design and settlement layout, and poor access to and utilization/consumption patterns of energy.
The impacts of this inefficiency are high economic burden to the poor, high levels of pollution, lack of thermal comfort and emissions to the environment. The study highlights a set of urban planning and building design principles that can be used to retrofit slums into more energy efficient settlements. The study explores principles of responsive settlement layouts/plans and appropriate building designs that use the beneficial elements of nature to achieve natural lighting, natural ventilation, and solar control to create thermally comfortable, energy efficient, and environmentally responsive buildings/settlements. As energy efficiency in informal settlements is a relatively less explored area of efficiency, it requires further research and policy recommendations, for which this paper will set a background.

Keywords: energy efficiency, informal settlements, renewable energy, settlement layout

Procedia PDF Downloads 106
344 Multicollinearity and MRA in Sustainability: Application of the Raise Regression

Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez

Abstract:

Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one independent variable is moderated by a second independent variable, by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the moderating term is often highly correlated with the constitutive terms, so severe multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: inflated variances of the estimators; a tendency for regressors to appear non-significant even when they probably are significant, together with a very high coefficient of determination; incorrect coefficient signs; and high sensitivity of the results to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Carried over to moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with a methodology that allows obtaining reliable results. A review of the works applying MRA in the ten top journals of the field makes clear that multicollinearity is mostly disregarded: less than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the possible application of recent methodologies to MRA. Particularly, raise regression is analyzed.
This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables, the problem can be mitigated. Raise regression retains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test and prediction). The proposal is applied to data from countries of the European Union for the last available year, regarding greenhouse gas emissions, per capita GDP and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called "subgroup regression analysis." The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. Particularly, the use of raise regression mitigates severe multicollinearity problems, so the researcher can rely on the interaction term when interpreting the results of a particular study.
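The raising step itself can be sketched for two collinear regressors: the problematic variable x2 is replaced by x2 + λe, where e is the OLS residual of x2 regressed on x1 (with intercept), so the raised variable moves geometrically away from x1 while keeping its information. The data below are made up:

```python
def raise_variable(x1, x2, lam):
    """Raise x2 with respect to x1: x2_raised = x2 + lam * e,
    where e is the residual of the OLS regression of x2 on x1
    (with intercept). lam >= 0 is the raising parameter."""
    n = len(x1)
    mx1, mx2 = sum(x1) / n, sum(x2) / n
    beta = (sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
            / sum((a - mx1) ** 2 for a in x1))
    alpha = mx2 - beta * mx1
    e = [b - (alpha + beta * a) for a, b in zip(x1, x2)]
    return [b + lam * ei for b, ei in zip(x2, e)]
```

Because e is orthogonal to x1, increasing λ lowers the correlation between x1 and the raised x2 without discarding observations, which is the geometric separation the abstract describes.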

Keywords: multicollinearity, MRA, interaction, raise

Procedia PDF Downloads 81
343 Magnitude of Transactional Sex and Its Determinant Factors among Women in Sub-Saharan Africa: Systematic Review and Meta-Analysis

Authors: Gedefaye Nibret Mihretie

Abstract:

Background: Transactional sex is casual sex between two people in which material incentives are given in exchange for sexual favors. Transactional sex is associated with negative consequences, increasing the risk of sexually transmitted diseases, including HIV/AIDS, unintended pregnancy, unsafe abortion, and psychological trauma. Many primary studies in Sub-Saharan Africa have assessed the prevalence and associated factors of transactional sex among women, but these studies had great discrepancies and inconsistent results. Hence, this systematic review and meta-analysis aimed to synthesize the pooled prevalence of the practice of transactional sex among women and its associated factors in Sub-Saharan Africa. Method: Cross-sectional studies were systematically searched from March 6, 2022, to April 24, 2022, using PubMed, Google Scholar, HINARI, the Cochrane Library, and grey literature. The pooled prevalence of transactional sex and associated factors was estimated using the DerSimonian-Laird random-effects model. Stata (version 16.0) was used to analyze the data. The I-squared statistic was used to assess the studies' heterogeneity. A funnel plot and Egger's test were used to check for publication bias. A subgroup analysis was performed to minimize the underlying heterogeneity by study year, source of data, sample size and geographical location. Results: Four thousand one hundred thirty articles were retrieved from the various databases. Thirty-two studies were included in the final systematic review, comprising 108,075 participants. The pooled prevalence of transactional sex among women in Sub-Saharan Africa was 12.55%, with a confidence interval of 9.59% to 15.52%.
Educational status (OR = 0.48, 95%CI: 0.27, 0.69) was a protective factor against transactional sex, whereas alcohol use (OR = 1.85, 95% CI: 1.19, 2.52), early sexual debut (OR = 2.57, 95%CI: 1.17, 3.98), substance abuse (OR = 4.21, 95% CI: 2.05, 6.37), a history of sexual experience (OR = 4.08, 95% CI: 1.38, 6.78), physical violence (OR = 6.59, 95% CI: 1.17, 12.02), and sexual violence (OR = 3.56, 95% CI: 1.15, 8.27) were risk factors. Conclusion: The prevalence of transactional sex among women in Sub-Saharan Africa was high. Educational status, alcohol use, substance abuse, early sexual debut, a history of sexual experience, physical violence, and sexual violence were predictors of transactional sex. Governmental and other stakeholders should design interventions to reduce alcohol utilization, provide health information about the negative consequences of early sexual debut and substance abuse, and reduce sexual violence while ensuring gender equality through mass media; these measures should be included in state policy.
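The DerSimonian-Laird random-effects pooling used for the prevalence estimate can be sketched as follows; the per-study effect sizes and variances below are illustrative (in practice they would be transformed prevalences, e.g. logits):

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.
    Returns (pooled effect, 95% CI lower, 95% CI upper, tau^2).
    Inputs are illustrative per-study effects and within-study variances."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    wr = [1 / (v + tau2) for v in variances]             # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2
```

The Q statistic computed on the way to τ² is also the basis of the I² heterogeneity measure reported in such reviews, via I² = max(0, (Q − df)/Q).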

Keywords: women’s health, child health, reproductive health, midwifery

Procedia PDF Downloads 70
342 Comprehensive Approach to Control Virus Infection and Energy Consumption in An Occupant Classroom

Authors: SeyedKeivan Nateghi, Jan Kaczmarczyk

Abstract:

People nowadays spend most of their time in buildings, so maintaining good indoor air quality is very important. The global prevalence of Covid-19 has further highlighted the role of indoor air conditioning in reducing the risk of virus infection. Heating and cooling provide residents with a comfortable indoor temperature, but buildings are a major driver of energy demand: the building sector accounts for more than 30% of the world's primary energy requirement. Rising energy demand increases greenhouse gas emissions, which cause global warming. Beyond damaging ecosystems, global warming can spread infectious diseases such as malaria, cholera, or dengue to many other parts of the world. With the emergence of Covid-19, earlier guidance on reducing energy consumption is no longer adequate, because it increases the risk of virus infection among occupants of a room. High energy consumption and coronavirus infection are thus opposing problems. A classroom in Katowice, Poland, with 30 students and one teacher was studied to control both objectives simultaneously. The probability of disease transmission is calculated from the carbon dioxide concentration generated by the occupants, and energy consumption over a given period is estimated with EnergyPlus. The effects of three parameters (the number, angle, and schedule of window openings) on the probability of infection transmission and the energy consumption of the class were investigated. The parameters were examined over wide ranges to determine the best possible conditions for simultaneous control of infection spread and energy consumption. The number of open windows is discrete (0 to 3), while the other two parameters are continuous (0° to 180° and 8 AM to 2 PM).
Preliminary results show that the number, angle, and timing of window openings significantly affect both the likelihood of virus transmission and the energy consumption of the class. The more windows are opened, the wider their angle, and the longer they stay open, the lower the probability of virus transmission, but the higher the energy consumption. When all windows were closed throughout the class, the energy consumption on the first day of January was only 0.2 megajoules, yet the probability of transmitting the virus per person in the classroom exceeded 45%. When all windows were open at the maximum angle during class, the probability of infection dropped to 0.35%, but energy consumption rose to 36 megajoules. School classrooms therefore need an optimal schedule that controls both objectives. This article presents a suitable plan for a classroom with natural window ventilation that controls energy consumption and the possibility of infection transmission at the same time.
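The abstract does not state its exact risk formula. A common way to derive infection probability from occupant-generated CO2 is the Rudnick-Milton extension of the Wells-Riley model, sketched here; the quanta emission rate, occupancy, and CO2 levels are illustrative assumptions, not values from the paper:

```python
import math

def rudnick_milton_risk(c_indoor_ppm, c_outdoor_ppm=420.0,
                        c_exhaled_ppm=38000.0, n_occupants=31,
                        n_infectors=1, quanta_per_hour=10.0,
                        exposure_hours=6.0):
    """Wells-Riley infection risk driven by indoor CO2 (Rudnick-Milton form).

    The rebreathed fraction f = (C - C_out) / C_exhaled measures how much
    of each inhaled breath was already exhaled by someone else; the risk
    of infection rises with f, i.e. with poorer ventilation.
    """
    f = (c_indoor_ppm - c_outdoor_ppm) / c_exhaled_ppm
    # expected quanta inhaled per susceptible over the exposure period
    dose = f * n_infectors * quanta_per_hour * exposure_hours / n_occupants
    return 1.0 - math.exp(-dose)
```

A closed-window scenario with high indoor CO2 yields a markedly higher risk than a well-ventilated one, which is the trade-off the study's window schedule tries to balance against heating energy.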

Keywords: Covid-19, energy consumption, building, carbon dioxide, energyplus

Procedia PDF Downloads 79
341 The Effects of Collaborative Videogame Play on Flow Experience and Mood

Authors: Eva Nolan, Timothy Mcnichols

Abstract:

Gamers collectively spend over 3 billion hours a week playing video games, which is arguably not nearly enough time to indulge in the many benefits gaming has to offer. Much of the previous research on video gaming centers on the effects of playing violent video games and their negative impacts on the individual. However, there is a dearth of research on non-violent video games, specifically the emotional and cognitive benefits they can offer. Current research suggests that playing offers many benefits to an individual, such as decreasing symptoms of depression, stress, and anxiety, increasing positive emotions, inducing relaxation, and particularly improving mood. One suggestion as to why video games may offer such benefits is that they possess ideal characteristics for creating and maintaining flow experiences: the subjective experience in which an individual attains a heightened and improved state of mind while engaged in a task that balances challenge and skill. Many video games offer a platform for collaborative gameplay, which can enhance the emotional experience of gaming through feelings of social support and social inclusion. The present study was designed to examine the effects of collaborative gameplay and flow experience on participants' perceived mood. To investigate this phenomenon, a between-subjects design was employed: forty participants were randomly divided into two groups engaging in either solo or collaborative gameplay, each group containing an equal number of frequent and non-frequent gamers. Each participant played 'The Lego Movie Videogame' on the PlayStation 4 console. Levels of flow experience and perceived mood were measured with the Flow State Scale (FSS) and the Positive and Negative Affect Schedule (PANAS). The following research hypotheses were investigated: (i.) participants in the collaborative gameplay condition will experience higher levels of flow experience and mood than those in the solo gameplay condition; (ii.) frequent gamers will experience higher levels of flow experience and mood than non-frequent gamers; and (iii.) there will be a significant positive relationship between flow experience and mood. If these hypotheses are supported, the findings would suggest that engaging in collaborative gameplay can benefit an individual's mood and that experiencing a state of flow can also enhance mood. Hence, collaborative gaming can help promote positive emotions (higher levels of mood) by engaging an individual's flow state.

Keywords: collaborative gameplay, flow experience, mood, games, positive emotions

Procedia PDF Downloads 316
340 Analytical Validity Of A Tech Transfer Solution To Internalize Genetic Testing

Authors: Lesley Northrop, Justin DeGrazia, Jessica Greenwood

Abstract:

ASPIRA Labs now offers a turnkey, ready-to-implement technology transfer solution that enables labs and hospitals lacking the resources to build their own pipelines to offer in-house genetic testing. This unique platform employs a patented Molecular Inversion Probe (MIP) technology that combines the specificity of a hybrid-capture protocol with the ease of an amplicon-based protocol, and utilizes an advanced bioinformatics analysis pipeline based on machine learning. To demonstrate its efficacy, two independent genetic tests were validated on this platform: expanded carrier screening (ECS) and hereditary cancer testing (HC). The analytical performance of ECS and HC was validated separately, in a blinded manner, for calling three types of variants: SNVs, short indels (typically <50 bp), and large indels/CNVs, defined as multi-exonic del/dup events. The reference set was constructed using samples from the Coriell Institute, an external clinical genetic testing laboratory, Maine Molecular Quality Controls Inc. (MMQCI), SeraCare, and the GIAB Consortium. Overall, both ECS and HC showed a sensitivity and specificity of >99.4% for detecting SNVs. For indels, both tests reported a specificity of 100%; ECS demonstrated a sensitivity of 100%, whereas HC exhibited a sensitivity of 96.5%. The bioinformatics pipeline also correctly called all reference CNV events, for a sensitivity of 100% in both tests. No additional calls were made in the HC panel, a perfect performance (specificity and F-measure of 100%). In the carrier panel, however, three additional positive calls were made outside the reference set. Two of these calls were confirmed using an orthogonal method and re-classified as true positives, leaving only one false positive.
The pipeline also correctly identified all challenging carrier statuses, such as positive cases of spinal muscular atrophy and alpha-thalassemia, for 100% sensitivity. After confirmation of the additional positive calls via long-range PCR and MLPA, specificity for such cases was estimated at 99%. These performance metrics demonstrate that this tech-transfer solution can be confidently internalized by clinical labs and hospitals to offer mainstream ECS and HC as part of their test catalog, substantially increasing access to quality germline genetic testing for labs of all sizes and resource levels.
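The sensitivity, specificity, and F-measure figures quoted above all derive from confusion-matrix counts; a minimal sketch of the computation follows, with illustrative counts (the actual call totals are not given in the abstract):

```python
def performance(tp, fp, tn, fn):
    """Sensitivity, specificity and F-measure from confusion-matrix counts."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall on true variants
    specificity = tn / (tn + fp) if tn + fp else 0.0   # correct negative calls
    precision = tp / (tp + fp) if tp + fp else 0.0
    # F-measure: harmonic mean of precision and sensitivity
    f_measure = (2 * precision * sensitivity / (precision + sensitivity)
                 if precision + sensitivity else 0.0)
    return sensitivity, specificity, f_measure
```

Re-classifying a confirmed extra call from false positive to true positive, as done for the carrier panel, raises both precision and specificity without changing sensitivity.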

Keywords: clinical genetics, genetic testing, molecular genetics, technology transfer

Procedia PDF Downloads 158
339 Changes in Heavy Metals Bioavailability in Manure-Derived Digestates and Subsequent Hydrochars to Be Used as Soil Amendments

Authors: Hellen L. De Castro e Silva, Ana A. Robles Aguilar, Erik Meers

Abstract:

Digestates are residual by-products, rich in nutrients and trace elements, that can be used as organic fertilisers on soils. However, because these elements are not digested and dry matter is reduced during anaerobic digestion, metal concentrations are higher in digestates than in the feedstocks, which might hamper their use as fertilisers under the threshold values of some national policies. Furthermore, there is uncertainty about how much of these elements some crops assimilate, which might result in bioaccumulation. Therefore, further processing of the digestate to obtain safe fertilising products has been recommended. This research analyzes the effect of hydrothermal carbonization, applied as a thermal treatment, on the bioavailability of heavy metals in mono- and co-digestates derived from pig manure and from maize grown on contaminated land in France. The pig manure was collected from a novel stable system (VeDoWs, province of East Flanders, Belgium) that separates the collection of pig urine and feces, producing a solid manure fraction with a high up-concentration of heavy metals and nutrients. Mono-digestion and co-digestion were conducted in semi-continuous reactors for 45 days under mesophilic conditions, after which the digestates were dried at 105 °C for 24 hours. Hydrothermal carbonization was then applied at a 1:10 solid/water ratio, to guarantee controlled experimental conditions, at different temperatures (180, 200, and 220 °C) and residence times (2 h and 4 h). Pressure was generated autogenously during the process, and the reactor was cooled down after each treatment. The solid and liquid phases were separated by vacuum filtration, and the solid phase of each treatment (the hydrochar) was dried and ground for chemical characterization.
Different fractions (exchangeable/adsorbed fraction, F1; carbonate-bound fraction, F2; organic matter-bound fraction, F3; and residual fraction, F4) of the heavy metals (Cd, Cr, Ni, and Cr) were determined in the digestates and derived hydrochars using the modified Community Bureau of Reference (BCR) sequential extraction procedure. The main results indicated a difference in heavy metal fractionation between the digestates and their derived hydrochars; however, the hydrothermal carbonization operating conditions did not have notable effects on heavy metal partitioning among the hydrochars of the proposed treatments. Based on the estimated potential ecological risk assessment, the risk level decreased by one category (from considerable to moderate) when comparing the heavy metal partitioning in the digestates with that in the derived hydrochars.

Keywords: heavy metals, bioavailability, hydrothermal treatment, bio-based fertilisers, agriculture

Procedia PDF Downloads 87
338 Participation of Women in the Brazilian Paralympic Sports

Authors: Ana Carolina Felizardo Da Silva

Abstract:

People with disabilities are those who have limitations of a physical, mental, intellectual, or sensory nature and who, therefore, should not be excluded or marginalized. In Brazil, the Brazilian Law for the Inclusion of People with Disabilities establishes that people with disabilities have the right to culture, sport, tourism, and leisure on an equal basis with others. Sport for people with disabilities was, in its genesis, aimed at rehabilitating soldiers: men who returned wounded from war and needed care. As the practice gained participants, commercial interest emerged and, in turn, high-performance competition, what we now call Paralympic sport. Sport for people with disabilities was thus designed for men, corroborating the social idea that sport is a masculine and masculinizing environment. The inclusion of women with disabilities in sports therefore becomes a double challenge: because they are women and because they have a disability. Data collected from official documents of the International Paralympic Committee show that the first report of women's participation was in 1948, in Stoke Mandeville, England, at a championship considered the forerunner of what later became known as the 'Paralympic Games'. Due to a lack of information, however, women participating in the Paralympics reappear in the records only 40 years later, in 1984, demonstrating a large gap in the official website's records of women in the games. Despite this great challenge, the number of women has grown substantially. Across all 16 editions of the Paralympic Games, the most recent, held in Tokyo, had 1,853 women among its 4,400 competing athletes, 42% of the total. The same edition also had the largest delegation of Brazilian women, represented by 96 athletes out of a total of 260 Brazilian athletes.
It is estimated that in the next edition, to take place in Paris in 2024, women's participation will equal or surpass men's. A certain invisibility of women participating in the Paralympic Games becomes apparent in the database of the Brazilian Paralympic Committee website: all women medalists of a given edition can be identified, but participating female athletes who did not win medals are not registered on the site. Brazilian women's participation in the Paralympics grew considerably over the last two editions, from only 69 women in 2012 to 102 in 2016 and 96 in 2021. The same happened with medalists, from 8 Brazilians in 2012 to 33 in 2016 and 27 in 2021. In this sense, the present study aims to analyze how Brazilian women participate in the Paralympics, giving visibility and voice to female athletes. Structured interviews are being carried out with participants in the games to identify, together with the athletes, the difficulties and potentialities of participating in the competition. The analysis will be carried out through Bardin's content analysis.

Keywords: paralympics, sport for people with disabilities, woman, woman in sport

Procedia PDF Downloads 55
337 Detection of Glyphosate Using Disposable Sensors for Fast, Inexpensive and Reliable Measurements by Electrochemical Technique

Authors: Jafar S. Noori, Jan Romano-deGea, Maria Dimaki, John Mortensen, Winnie E. Svendsen

Abstract:

Pesticides have been used intensively in agriculture to control weeds, insects, fungi, and pests. One of the most commonly used pesticides is glyphosate. Glyphosate can attach to soil colloids and be degraded by soil microorganisms. As glyphosate led to the appearance of resistant species, the pesticide came to be used even more intensively. As a consequence of this heavy use, residues of the compound are increasingly observed in food and water. Recent studies reported a direct link between glyphosate and chronic effects such as teratogenic, tumorigenic, and hepatorenal effects, even though exposure was below the lowest regulatory limit. Today, pesticides are detected in water by complicated and costly manual procedures conducted by highly skilled personnel, and it can take up to several days to get an answer regarding the pesticide content of water. An alternative to this demanding procedure is offered by electrochemical measuring techniques. Electrochemistry is an emerging technology with the potential to identify and quantify several compounds in a few minutes. Direct detection of glyphosate in water samples is currently not possible, and intensive research is underway to enable direct, selective, and quantitative detection of glyphosate in water. This study focuses on developing and modifying a sensor chip that can selectively measure glyphosate while minimizing signal interference from other compounds. The sensor is a silicon-based chip, fabricated in a cleanroom facility, with dimensions of 10×20 mm. The chip comprises a three-electrode configuration. The deposited electrodes consist of a 20 nm chromium layer and 200 nm of gold. The working electrode is 4 mm in diameter and is modified by creating molecularly imprinted polymers (MIP) using an electrodeposition technique, which allows the chip to selectively measure glyphosate at low concentrations.
The modification used gold nanoparticles, 10 nm in diameter, functionalized with 4-aminothiophenol. This configuration allows the nanoparticles to bind to the working electrode surface and create the template for glyphosate. An initial potential for the identification of glyphosate was estimated at around -0.2 V. The developed sensor was tested on six different concentrations and was able to detect glyphosate down to 0.5 mgL⁻¹. This value is below the accepted pesticide limit of 0.7 mgL⁻¹ set by US regulation. The current focus is on optimizing the functionalization procedure to achieve glyphosate detection at the EU regulatory limit of 0.1 µgL⁻¹. To the best of our knowledge, this is the first attempt to modify miniaturized sensor electrodes with functionalized nanoparticles for glyphosate detection.

Keywords: pesticides, glyphosate, rapid, detection, modified, sensor

Procedia PDF Downloads 163
336 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell carcinoma (HNSCC) survival, but the causal evidence is inconsistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients, stratified by human papillomavirus (HPV) status, using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A, and apolipoprotein B) were also derived from the GLGC data. Genetic associations between these SNPs and cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) patients and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted (IVW) method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW) = 2.64 [95% CI, 1.28-5.43]; P = 0.01) but with better OS in HPV-positive OPSCC patients (HR IVW = 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS both in all HNSCC patients (HR IVW = 4.17 [95% CI, 1.06-16.36]; P = 0.04) and in non-HPV-driven HNSCC patients (HR IVW = 7.33 [95% CI, 1.63-32.97]; P = 0.01).
Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW = 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and with increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition were associated with worse OS in all and in non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
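The inverse-variance weighted method used in the Results combines per-SNP Wald ratios into one causal estimate; a minimal fixed-effect sketch follows, with invented effect sizes (the study's actual SNP-level data are not given in the abstract):

```python
import math

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance weighted MR estimate across instrument SNPs.

    Each SNP contributes a Wald ratio beta_outcome / beta_exposure;
    ratios are combined with weights 1/se^2 (first-order, fixed-effect IVW).
    """
    ratios = [bo / bx for bx, bo in zip(beta_exposure, beta_outcome)]
    # first-order standard error of each Wald ratio
    ses = [so / abs(bx) for bx, so in zip(beta_exposure, se_outcome)]
    w = [1.0 / s ** 2 for s in ses]
    est = sum(wi * r for wi, r in zip(w, ratios)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    # for a survival outcome the pooled estimate is a log hazard ratio
    hr = math.exp(est)
    return est, se, hr
```

A positive pooled log hazard ratio (HR > 1) corresponds to the "worse OS" direction reported for NPC1L1 and PCSK9.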

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 81
335 Predicting Resistance of Commonly Used Antimicrobials in Urinary Tract Infections: A Decision Tree Analysis

Authors: Meera Tandan, Mohan Timilsina, Martin Cormican, Akke Vellinga

Abstract:

Background: In general practice, many infections are treated empirically, without microbiological confirmation. Understanding the susceptibility of antimicrobials during empirical prescribing can help reduce inappropriate prescribing. This study applies a prediction model using a decision tree approach to predict the antimicrobial resistance (AMR) of urinary tract infections (UTI) based on non-clinical features of patients over 65 years; decision tree models are a novel way to predict the AMR outcome at an early stage. Method: Data were extracted from the database of the microbiological laboratory of the University Hospitals Galway on all antimicrobial susceptibility testing (AST) of urine specimens from patients over the age of 65 from January 2011 to December 2014. The primary endpoint was resistance to the common antimicrobials used to treat UTI (nitrofurantoin, trimethoprim, ciprofloxacin, co-amoxiclav, and amoxicillin). A classification and regression tree (CART) model was generated with the outcome 'resistant infection'. The importance of each predictor (the number of previous samples, age, gender, location (nursing home, hospital, community), and causative agent) for antimicrobial resistance was estimated. Sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) were used to evaluate the performance of the model. Seventy-five percent (75%) of the data were used as a training set, and the model was validated on the remaining 25% of the dataset. Results: A total of 9,805 UTI patients over 65 years had a urine sample submitted for AST at least once over the four years. E. coli, Klebsiella, and Proteus species were the most commonly identified pathogens among UTI patients without a catheter, whereas Serratia, Staphylococcus aureus, and Enterobacter were common among those with a catheter.
The validated CART models show slight differences in sensitivity, specificity, PPV, and NPV between the models with and without the causative organisms. For the model with non-clinical predictors, the sensitivity, specificity, PPV, and NPV were between 74% and 88%, depending on the antimicrobial. Conclusion: The CART models developed using non-clinical predictors perform well when predicting antimicrobial resistance. These models predict which antimicrobial may be the most appropriate based on non-clinical factors. Additional CART models, prospective data collection and validation, and a larger number of non-clinical factors may further improve model performance. The presented model provides an alternative approach to decision-making on antimicrobial prescribing for UTIs in older patients.
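At each node, a CART model like the one described above chooses the split that minimises the weighted Gini impurity of the child nodes. A minimal sketch of that split search on a single numeric predictor follows; the age values and resistance labels are illustrative, not from the Galway dataset:

```python
def gini(labels):
    """Gini impurity of a set of binary labels (1 = resistant infection)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(feature, labels):
    """Find the CART split on one numeric feature minimising weighted Gini.

    Returns (threshold, weighted impurity); threshold is None when no
    split improves on the impurity of the unsplit node.
    """
    best = (None, gini(labels))
    for t in sorted(set(feature)):
        left = [y for x, y in zip(feature, labels) if x <= t]
        right = [y for x, y in zip(feature, labels) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best
```

A full CART implementation applies this search recursively over all predictors (age, gender, location, previous samples) and prunes the resulting tree; libraries such as scikit-learn's DecisionTreeClassifier do this out of the box.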

Keywords: antimicrobial resistance, urinary tract infection, prediction, decision tree

Procedia PDF Downloads 234