Search results for: location estimate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3933

1773 The Long-Term Impact of Health Conditions on Social Mobility Outcomes: A Modelling Study

Authors: Lise Retat, Maria Carmen Huerta, Laura Webber, Franco Sassi

Abstract:

Background: Intra-generational social mobility (ISM) can be defined as the extent to which individuals change their socio-economic position over a period of time or during their entire life course. The relationship between poor health and downward ISM is well established. Therefore, quantifying the impact that potential health policies would have on ISM, now and into the future, would provide evidence for how social inequality could be reduced. This paper takes overweight and obesity as an example condition and estimates the mean change in earnings per individual if the UK were to introduce policies that effectively reduce overweight and obesity. Methods: The HealthLumen (HL) individual-based model was used to estimate the impact of obesity on social mobility measures such as earnings, occupation, and wealth. The HL tool models each individual's probability of experiencing downward ISM as a result of their overweight and obesity status. For example, one outcome of interest was the cumulative mean earnings per person of implementing a policy that would reduce adult overweight and obesity by 1% each year between 2020 and 2030 in the UK. Results: Preliminary analysis showed that by reducing adult overweight and obesity by 1% each year between 2020 and 2030, the cumulative additional mean earnings would be ~1,000 Euro per adult by 2030. Additional analysis will include other social mobility indicators. Conclusions: These projections are important for illustrating the role of health in social mobility and for providing evidence for how health policy can make a difference to social mobility outcomes and, in turn, help to reduce inequality.
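The logic of an individual-based model of this kind can be illustrated with a toy Monte Carlo sketch. This is not HealthLumen: the prevalence, downward-mobility probabilities, earnings penalty and policy effect below are all hypothetical placeholders, chosen only to show how a per-adult cumulative earnings gain can be derived by comparing a policy run against a baseline run.

```python
import random

def simulate_cumulative_earnings(n_adults=10_000, years=11, seed=42,
                                 p_downward_obese=0.06, p_downward_healthy=0.01,
                                 annual_penalty=900.0, policy_reduction=0.02):
    """Toy individual-based sketch: each adult may experience downward
    mobility each year with a probability that depends on obesity status;
    the policy lowers obesity prevalence by `policy_reduction` per year.
    All parameter values are illustrative, not HealthLumen's."""
    rng = random.Random(seed)

    def run(policy):
        prevalence = 0.30            # hypothetical baseline obesity prevalence
        total_loss = 0.0
        for _ in range(years):       # e.g. 2020..2030 inclusive
            for _ in range(n_adults):
                obese = rng.random() < prevalence
                p = p_downward_obese if obese else p_downward_healthy
                if rng.random() < p:
                    total_loss += annual_penalty
            if policy:
                prevalence = max(0.0, prevalence - policy_reduction)
        return total_loss / n_adults

    # cumulative mean gain per adult = earnings loss avoided under the policy
    return run(policy=False) - run(policy=True)
```

The headline figure of the abstract corresponds to this kind of difference between a "no policy" and a "policy" run of the microsimulation.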

Keywords: modelling, social mobility, obesity, health

Procedia PDF Downloads 109
1772 The Effect of Damper Attachment on Tennis Racket Vibration: A Simulation Study

Authors: Kuangyou B. Cheng

Abstract:

Tennis is among the most popular sports worldwide. During ball-racket impact, substantial vibration transmitted to the hand/arm may be a cause of “tennis elbow”. Although it is common for players to attach a “vibration damper” to the string bed, its effect remains unclear. To avoid subjective factors and errors in data recording, the effect of damper attachment on vibration at the racket handle end was investigated with computer simulation. The tennis racket was modeled as a beam with free-free ends (similar to loosely holding the racket). The finite difference method with 40 segments was used to simulate the ball-racket impact response. Attaching a damper was modeled as increasing the mass of one segment. It was found that the damper has the largest effect when installed at the string-bed center. However, this is not a practical location because it interferes with ball-racket impact. Vibration amplitude changed very slightly when the damper was near the top or bottom of the string bed, with the damper working only slightly better at the bottom than at the top. In addition, heavier dampers work better than lighter ones. These simulation results were comparable with experimental recordings in which the choice of damper locations was restricted by ball impact locations. It was concluded that mathematical model simulations can objectively investigate the effect of damper attachment on racket vibration. Moreover, given the very slight difference in grip-end vibration amplitude between dampers attached at the top and bottom of the string bed, whether the effect can really be felt by players is questionable.
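The modelling idea (a finite-difference discretisation in which the damper is simply a segment with extra mass) can be sketched with a much simpler toy: a taut string with fixed ends rather than the authors' free-free Euler-Bernoulli beam, and arbitrary illustrative parameters. The point is only to show how "damper = heavier node" enters the update equations and changes the vibration recorded near one end.

```python
def handle_end_vibration(damper_segment=None, damper_mass=3.0,
                         n=40, steps=2000, c2=0.2):
    """Explicit finite-difference sketch: a string of n segments is given
    an impulse at mid-span; an attached damper is modelled as extra mass
    on one node (cf. the paper's increased-mass segment). Toy model with
    fixed ends and dimensionless parameters, not the paper's beam."""
    mass = [1.0] * (n + 1)
    if damper_segment is not None:
        mass[damper_segment] += damper_mass      # heavier node = damper
    w_prev = [0.0] * (n + 1)
    w = [0.0] * (n + 1)
    w[n // 2] = w_prev[n // 2] = 1.0             # impulse at the centre
    energy_at_handle = 0.0
    for _ in range(steps):
        w_next = [0.0] * (n + 1)                 # ends stay pinned at 0
        for i in range(1, n):
            acc = c2 * (w[i + 1] - 2.0 * w[i] + w[i - 1]) / mass[i]
            w_next[i] = 2.0 * w[i] - w_prev[i] + acc
        energy_at_handle += w_next[1] ** 2       # proxy for grip-end vibration
        w_prev, w = w, w_next
    return energy_at_handle
```

Comparing `handle_end_vibration()` with `handle_end_vibration(damper_segment=20)` shows how an added mass at the string-bed centre alters the response seen at the handle end in this simplified setting.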

Keywords: finite difference, impact, modeling, vibration amplitude

Procedia PDF Downloads 244
1771 Climate Change, Agriculture and Food Security in Sub-Saharan Africa: What Effects and What Answers?

Authors: Abdoulahad Allamine

Abstract:

The objective of this study is to assess the impact of climate variability on agriculture and food security in 43 countries of sub-Saharan Africa. For this purpose, we use data from the BADC, UNCTAD, WDI and FAOSTAT databases to estimate a VAR model on panel data. The sample is divided into three agro-climatic zones: the equatorial zone, the Sahel region and the semi-arid zone. This makes it possible to highlight the differential impacts sustained by the countries and the responses appropriate to each group of countries. The results show that sharp fluctuations in rainfall volume negatively affect agriculture and food security in the countries of the equatorial zone, with its heavy rainfall, and in the high-temperature Sahel region. However, countries with low temperatures and low rainfall are the least affected. Hedging policies against the risks of climate variability must be more active in the first two groups of countries. On this basis, and in general, we recommend that agricultural policies be integrated across countries to reduce the effects of climate variability on agriculture and food security. It would be logical to encourage closer regional and international collaboration on the development and dissemination of improved varieties, on ecological intensification, and on the management of the biotic and abiotic stresses arising from climate variability, in order to sustainably increase food production. Small farmers also need training in techniques for hedging agricultural risks related to climate variations; this requires an increase in the state budgets allocated to agriculture.
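The core estimation step of a VAR can be sketched compactly: each equation of the system is an ordinary least-squares regression of one variable on the lags of all variables. The sketch below fits a two-variable VAR(1) on simulated data with known coefficients (a minimal stand-in for the study's panel VAR, which involves more variables, lags and the panel dimension).

```python
import random

def simulate_var1(T=1000, seed=1):
    """Generate a toy two-variable VAR(1) series with known coefficients
    (illustrative data, not the study's panel)."""
    rng = random.Random(seed)
    y = [[0.0, 0.0]]
    for _ in range(T):
        y1, y2 = y[-1]
        y.append([0.1 + 0.5 * y1 + 0.1 * y2 + rng.gauss(0.0, 0.2),
                  -0.2 - 0.2 * y1 + 0.4 * y2 + rng.gauss(0.0, 0.2)])
    return y

def fit_var1(y):
    """Estimate y_t = c + A y_{t-1} + e_t equation by equation with OLS
    (normal equations solved by Gauss-Jordan elimination)."""
    def solve(M, b):
        n = len(M)
        A = [row[:] + [bi] for row, bi in zip(M, b)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            for r in range(n):
                if r != col:
                    f = A[r][col] / A[col][col]
                    A[r] = [a - f * p for a, p in zip(A[r], A[col])]
        return [A[i][n] / A[i][i] for i in range(n)]

    X = [[1.0, y1, y2] for y1, y2 in y[:-1]]       # intercept + both lags
    coefs = []
    for k in range(2):                             # one OLS fit per equation
        t = [row[k] for row in y[1:]]
        XtX = [[sum(x[i] * x[j] for x in X) for j in range(3)] for i in range(3)]
        XtY = [sum(x[i] * ti for x, ti in zip(X, t)) for i in range(3)]
        coefs.append(solve(XtX, XtY))
    return coefs   # [[c1, a11, a12], [c2, a21, a22]]
```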

Keywords: agro-climatic zones, climate variability, food security, Sub-Saharan Africa, VAR on panel data

Procedia PDF Downloads 363
1770 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis

Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal

Abstract:

Background subtraction is a widely used technique for detecting moving objects in video surveillance by extracting foreground objects from a reference background image. A good background subtraction algorithm must cope with many challenges, such as changes in illumination, dynamic backgrounds (e.g., swinging leaves, rain, snow), and changes in the background itself, for example, vehicles moving and stopping. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a refined fast independent component analysis (fast-ICA) algorithm to separate the background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted through its matrix calculations and a search for an optimal non-quadratic contrast function, making it faster and more robust. Moreover, to estimate the de-mixing matrix and the denoising de-mixing matrix parameters, we propose converting all images to the YCrCb color space, where the luma component Y (the brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available datasets CDnet 2012 and CDnet 2014, and experimental results show that our algorithm detects moving objects competently and accurately in challenging conditions compared to other methods in the literature, in terms of quantitative and qualitative evaluations, at a real-time frame rate.
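The fast-ICA machinery the abstract builds on can be illustrated on a two-signal toy problem: centre, whiten, then iterate the one-unit fixed-point update with a non-quadratic contrast (log-cosh, so g = tanh). This is the textbook one-unit FastICA, not the authors' adapted multi-image version, and the mixed signals below are synthetic.

```python
import math

def fastica_one_unit(x1, x2, iters=200):
    """One-unit FastICA sketch (log-cosh contrast) on two mixed signals:
    centre, whiten via the 2x2 covariance eigendecomposition, then apply
    the fixed-point update  w <- E[z g(w.z)] - E[g'(w.z)] w,  normalising
    each step. Returns one recovered independent component."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    x1 = [v - m1 for v in x1]
    x2 = [v - m2 for v in x2]
    # 2x2 covariance and its eigendecomposition
    cxx = sum(a * a for a in x1) / n
    cyy = sum(b * b for b in x2) / n
    cxy = sum(a * b for a, b in zip(x1, x2)) / n
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    l1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
    l2 = tr / 2 - math.sqrt(tr * tr / 4 - det)
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    ct, st = math.cos(theta), math.sin(theta)
    # whitened data z = D^(-1/2) E^T x
    z1 = [(ct * a + st * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
    z2 = [(-st * a + ct * b) / math.sqrt(l2) for a, b in zip(x1, x2)]
    w = [1.0, 0.5]
    for _ in range(iters):
        u = [w[0] * a + w[1] * b for a, b in zip(z1, z2)]
        g = [math.tanh(v) for v in u]            # g = tanh, g' = 1 - tanh^2
        gp = sum(1.0 - v * v for v in g) / n
        w = [sum(a * gv for a, gv in zip(z1, g)) / n - gp * w[0],
             sum(b * gv for b, gv in zip(z2, g)) / n - gp * w[1]]
        norm = math.hypot(w[0], w[1])
        w = [w[0] / norm, w[1] / norm]
    return [w[0] * a + w[1] * b for a, b in zip(z1, z2)]
```

In the paper's setting the "signals" are image sequences and the search is over non-quadratic contrast functions; the fixed-point structure is the same.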

Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix

Procedia PDF Downloads 82
1769 Response Delay Model: Bridging the Gap in Urban Fire Disaster Response System

Authors: Sulaiman Yunus

Abstract:

The need for modeling the response to urban fire disasters cannot be overemphasized, as recurrent fire outbreaks have gutted most cities of the world. This calls for a prompt and efficient response system in order to mitigate the impact of such disasters. Promptness, as a function of time, is the fundamental determinant of the efficiency of a response system and of the magnitude of a fire disaster. Delay, resulting from several factors, is one of the major determinants of the promptness of a response system and, likewise, of the magnitude of a fire disaster. The Response Delay Model (RDM) intends to bridge the gap in urban fire disaster response systems by incorporating and synchronizing the delay moments when measuring the overall efficiency of a response system and determining the magnitude of a fire disaster. The model identifies two delay moments (pre-notification delay and intra-reflex-sequence delay) that can be elastic and that collectively play a significant role in the efficiency of a response system. Because the elasticity of the delay moments varies, the model provides for measuring the length of delays in order to arrive at a standard average delay moment for different parts of the world, taking into consideration geographic location, level of preparedness and awareness, technological advancement, and socio-economic and environmental factors. It is recommended that participatory research be undertaken locally and globally to determine standard average delay moments within each phase of the system, so as to enable determining the efficiency of response systems and predicting fire disaster magnitudes.

Keywords: delay moment, fire disaster, reflex sequence, response, response delay moment

Procedia PDF Downloads 189
1768 Long Term Changes of Aerosols and Their Radiative Forcing over the Tropical Urban Station Pune, India

Authors: M. P. Raju, P. D. Safai, P. S. P. Rao, P. C. S. Devara, C. V. Naidu

Abstract:

In order to study the physical and chemical characteristics of aerosols, samples of Total Suspended Particulates (TSP) were collected using a high-volume sampler at Pune, a semi-urban location in SW India, from March 2009 to February 2010. TSP samples were analyzed for water-soluble components (F, Cl, NO3, SO4, NH4, Na, K, Ca, and Mg) and acid-soluble components (Al, Zn, Fe and Cu) using an ion chromatograph and an atomic absorption spectrometer. Analysis of the data revealed that the monthly mean TSP concentrations varied between 471.3 µg/m3 and 30.5 µg/m3, with an annual mean value of 159.8 µg/m3. TSP concentrations were found to be lower during post-monsoon and winter (October through February) than in summer and monsoon (March through September). Anthropogenic activities such as vehicular emissions, together with dust particles originating from urban activities, were the major sources of TSP. TSP correlated well with all the major ionic components, especially with SO4 (R = 0.62) and NO3 (R = 0.67), indicating the impact of anthropogenic sources on the aerosols at Pune. However, the overall aerosol nature was alkaline (average pH = 6.17), mainly due to the neutralizing effects of Ca and NH4. SO4 contributed more (58.8%) to the total acidity than NO3 (41.1%), whereas Ca contributed more (66.5%) to the total alkalinity than NH4 (33.5%). The seasonality of the acid-soluble components Al, Fe and Cu showed a remarkable increase, indicating the dominance of the soil source over man-made activities. Overall, the study indicated that the aerosols at Pune were mainly affected by local sources.
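The percentage contributions to acidity and alkalinity quoted above are typically computed on an equivalents basis (mass concentration divided by equivalent weight). A minimal sketch of that arithmetic, with hypothetical concentrations rather than the Pune measurements:

```python
def acidity_alkalinity_fractions(so4, no3, ca, nh4):
    """Fractional contributions, in equivalents, of the acidic ions
    (SO4, NO3) and the neutralising ions (Ca, NH4). Equivalent weights:
    SO4 96/2 = 48, NO3 62/1 = 62, Ca 40/2 = 20, NH4 18/1 = 18.
    Mass concentrations are assumed to be in the same units."""
    so4_eq, no3_eq = so4 / 48.0, no3 / 62.0
    ca_eq, nh4_eq = ca / 20.0, nh4 / 18.0
    acid = so4_eq + no3_eq
    alk = ca_eq + nh4_eq
    return (so4_eq / acid, no3_eq / acid, ca_eq / alk, nh4_eq / alk)
```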

Keywords: chemical composition, acidic and neutralization potential, radiative forcing, urban station

Procedia PDF Downloads 230
1767 Parallel Self Organizing Neural Network Based Estimation of Archie’s Parameters and Water Saturation in Sandstone Reservoir

Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi

Abstract:

Determination of water saturation in sandstone is vital for determining the initial oil or gas in place in reservoir rocks. Water saturation determination from electrical measurements is based mainly on Archie’s formula, so the accuracy of Archie’s parameters strongly affects the computed water saturation values. Archie’s parameters a, m, and n can be determined by three techniques: the conventional technique, Core Archie-Parameter Estimation (CAPE), and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accurate values of Archie’s parameters and, consequently, reliable water saturation values. The work first applies the three Archie’s-parameter determination techniques (conventional, CAPE, and 3-D) and calculates water saturation from each. Using the same data, the hybrid PSONN algorithm is then used to estimate Archie’s parameters and predict water saturation. Results have shown that the estimated Archie’s parameters m, a, and n are statistically well accepted, the PSONN model having a lower statistical error and a higher correlation coefficient. This study was conducted using a large number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques for determining Archie’s parameters and thereby calculating water saturation profiles.
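Archie's formula itself, which all of these techniques ultimately feed, relates water saturation Sw to true formation resistivity Rt, formation water resistivity Rw and porosity phi through the parameters a, m and n. A one-line sketch (the defaults a = 1, m = 2, n = 2 are common textbook values, not the reservoir's fitted parameters):

```python
def archie_water_saturation(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Archie's equation:  Sw = ( (a * Rw) / (phi**m * Rt) ) ** (1/n).
    Defaults a=1, m=2, n=2 are illustrative textbook values; the paper's
    point is precisely that a, m, n must be estimated carefully."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)
```

For example, with Rt = 10 ohm-m, Rw = 0.05 ohm-m and phi = 0.2, the formula gives Sw = (0.125)**0.5, about 35% water saturation.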

Keywords: water saturation, Archie’s parameters, artificial intelligence, PSONN, sandstone reservoir

Procedia PDF Downloads 116
1766 Variability in Saturation Flow and Traffic Performance at Urban Signalized Intersection

Authors: P. N. Salini, B. Anish Kini, R. Ashalatha

Abstract:

At signalized intersections with heterogeneous traffic, the percentage share of different vehicle categories has a bearing on inter-vehicle space utilization, which eventually impacts the saturation flow. This paper analyzed the impact of the percentage share of various vehicle categories in the traffic stream on the saturation flow at signalized intersections by videographing major intersections with varying geometry in Kerala, India. It was found that the saturation flow at signalized intersections increases as the percentage share of two-wheelers increases, and decreases as the percentage share of cars increases. The effects of bus blockage and parking maneuvers on the saturation flow were also studied. As the distance of the bus blockage from the stop line increases, its effect on the saturation flow decreases, while with more buses stopping at the same bus stop, the saturation flow reduces further. The study revealed that more kerbside parking maneuvers upstream reduce the saturation flow, and that this effect decreases with increasing distance of the parking maneuver from the stop line. Adjustment factors for bus blockage due to bus stops within 75 m downstream, and for parking maneuvers within 75 m upstream, of the intersection have been established for mixed traffic conditions. These adjustment factors could help urban planners, enforcement personnel and decision-makers estimate the reduction in the capacity of signalized intersections and suggest improvements, in the form of parking restrictions or bus-stop relocation for existing intersections, or design changes for planned intersections.
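Adjustment factors of this kind are normally applied multiplicatively to a base saturation flow, and capacity then follows from the effective green ratio g/C. A minimal sketch of that chain, with hypothetical factor values (the paper's calibrated factors for Kerala are not reproduced here):

```python
def adjusted_saturation_flow(base_flow, factors):
    """Adjusted saturation flow (veh/h): base rate scaled by multiplicative
    adjustment factors, e.g. for bus blockage and parking maneuvers."""
    s = base_flow
    for f in factors.values():
        s *= f
    return s

def approach_capacity(sat_flow, green, cycle):
    """Capacity (veh/h) = saturation flow x effective green ratio g/C."""
    return sat_flow * green / cycle

# hypothetical example: base 1900 veh/h, bus stop and parking nearby
s_adj = adjusted_saturation_flow(1900.0, {"bus_blockage": 0.95, "parking": 0.90})
cap = approach_capacity(s_adj, green=27.0, cycle=60.0)
```

With these illustrative numbers the two factors cut the saturation flow from 1900 to about 1625 veh/h, and a 27 s green in a 60 s cycle yields a capacity of roughly 731 veh/h, which is the kind of reduction the abstract proposes to quantify.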

Keywords: signalized intersection, saturation flow, adjustment factors, capacity

Procedia PDF Downloads 107
1765 Computational Approach for Grp78–Nf-ΚB Binding Interactions in the Context of Neuroprotective Pathway in Brain Injuries

Authors: Janneth Gonzalez, Marco Avila, George Barreto

Abstract:

GRP78 performs multiple functions in the cell under normal and pathological conditions, controlling calcium homeostasis, protein folding and the unfolded protein response. GRP78 is located in the endoplasmic reticulum, but it can change its location under stress, hypoxic and apoptotic conditions. NF-κB represents the keystone of the inflammatory process and regulates the transcription of several genes related to apoptosis, differentiation, and cell growth. A possible relationship between GRP78 and NF-κB could support and explain several mechanisms that may regulate a variety of cell functions, especially following brain injuries. Although several reports show interactions between NF-κB and members of the heat shock protein family, there is a lack of information on how GRP78 may interact with NF-κB and possibly regulate its downstream activation. Therefore, we computationally predicted the protein-protein interactions between GRP78 (chain A) and the NF-κB complex (IκB-alpha and p65). The interaction interface of the docking model showed that amino acids ASN 47, GLU 215 and GLY 403 of GRP78 and THR 54, ASN 182 and HIS 184 of NF-κB are key residues involved in the docking. The electrostatic field between the GRP78 and NF-κB interfaces and molecular dynamics simulations support the possible interaction between the proteins. In conclusion, this work sheds some light on the possible GRP78-NF-κB complex, indicating key residues in this crosstalk, which may be used as input for better drug design strategies targeting NF-κB downstream signaling as a new therapeutic approach following brain injuries.

Keywords: computational biology, protein interactions, Grp78, bioinformatics, molecular dynamics

Procedia PDF Downloads 330
1764 Assessing the Effects of Land Use Spatial Structure on Urban Heat Island Using New Launched Remote Sensing in Shenzhen, China

Authors: Kai Liu, Hongbo Su, Weimin Wang, Hong Liang

Abstract:

Urban heat islands (UHI) have attracted attention around the world since they profoundly affect human life and the local climate. A better understanding of the effects of landscape pattern on UHI is crucial for improving the ecological security and sustainability of cities. This study investigates how landscape composition and configuration affect the UHI in Shenzhen, China, based on the analysis of land surface temperature (LST) in relation to landscape metrics, mainly with the aid of three satellite sensors newly launched by China. The HJ-1B satellite system was utilized to estimate surface temperature and comprehensively explore the urban thermal spatial pattern. The landscape metrics derived from the high-spatial-resolution remote sensing satellites GF-1 and ZY-3 were compared and analyzed to validate the performance of the newly launched satellite sensors. Results show that the mean LST is correlated with the main landscape metrics, both class-based and landscape-based, suggesting that landscape composition and spatial configuration both influence the UHI. These relationships also reveal that urban green space has a significant effect in mitigating the UHI in Shenzhen owing to its homogeneous spatial distribution and large spatial extent. Overall, our study not only confirms the applicability and effectiveness of the HJ-1B, GF-1 and ZY-3 satellite systems for studying the UHI but also reveals the impacts of urban spatial structure on the UHI, which is meaningful for the planning and management of the urban environment.

Keywords: urban heat island, Shenzhen, new remote sensing sensor, remote sensing satellites

Procedia PDF Downloads 392
1763 Structural Modeling and Experimental-Numerical Correlation of the Dynamic Behavior of the Portuguese Guitar by Using a Structural-Fluid Coupled Model

Authors: M. Vieira, V. Infante, P. Serrão, A. Ribeiro

Abstract:

The Portuguese guitar is a pear-shaped plucked chordophone particularly known for its role in Fado, the most distinctive traditional Portuguese musical style. The dynamic behavior of the Portuguese guitar, specifically its modal and mode-shape response, has been the focus of different authors. In this research, previously obtained experimental results on the dynamic behavior of the guitar are correlated with a vibro-acoustic finite element model of the instrument. The modelling of the guitar posed several challenges, which are presented in this work. The results of the correlation between experimental and numerical data are presented and indicate good correspondence for the studied mode shapes. The influence of the air inside the chamber is shown to be crucial for understanding the low-frequency modes of the Portuguese guitar in the finite element analysis, while for higher-frequency modes the geometry of the guitar assumes greater relevance. A comparison with the classical guitar provides relevant information about the intrinsic differences between the two instruments, such as their tones and other acoustical properties. These results form a sustained base for future work on the influence of the location and geometry of various components of the Portuguese guitar; they are also an asset to the comprehension of its musical properties and qualities and may, furthermore, benefit its players and luthiers.

Keywords: dynamic behavior of guitars, instrument acoustics, modal analysis, Portuguese guitar

Procedia PDF Downloads 387
1762 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix

Authors: Tsui-Tsai Lin

Abstract:

Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to a significant degradation in performance. In addition, inserting a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference reduces spectral efficiency. The design of CFO estimation for cooperative OFDM systems without a CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without CP over multipath fading channels. Specifically, using a block-type pilot, the CFO estimate is first derived according to the least-squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the penalty of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimation.
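For intuition about what a pilot-based CFO estimator does, the classic repeated-pilot (Moose-style) estimator is a useful baseline: when a pilot block is transmitted twice, the CFO appears as a fixed phase rotation between the two halves, and correlating them recovers it. This sketch is that textbook baseline on a noiseless toy signal, not the authors' CP-free least-squares/iterative design.

```python
import cmath
import math

def estimate_cfo(received, half_len):
    """Correlate the two identical halves of a repeated pilot block;
    with r[n] = s[n] * exp(j*2*pi*eps*n/N), N = 2*half_len, the product
    r[n+half_len] * conj(r[n]) has phase pi*eps, so eps = phase / pi."""
    acc = sum(received[n + half_len] * received[n].conjugate()
              for n in range(half_len))
    return cmath.phase(acc) / math.pi

# toy example: an arbitrary pilot repeated twice, distorted by CFO eps = 0.3
half = [cmath.exp(2j * math.pi * k * 7 / 32) for k in range(32)]
pilot = half + half
eps = 0.3
rx = [s * cmath.exp(2j * math.pi * eps * n / 64) for n, s in enumerate(pilot)]
```

On this noiseless example `estimate_cfo(rx, 32)` returns the true normalised offset 0.3; the estimator is unambiguous for |eps| < 1 in these units.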

Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration

Procedia PDF Downloads 253
1761 Multi-Release Software Reliability Growth Models Incorporating Imperfect Debugging and Change-Point under a Simulated Testing Environment and Software Release Time

Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar

Abstract:

The testing process during software development is a crucial step, as it makes the software more efficient and dependable. To estimate a software product’s reliability through the mean value function, many software reliability growth models (SRGMs) were developed under the assumption that the operating and testing environments are the same. In practice this is not true, because the reliability of software differs when it operates in its natural field environment. This article discusses an SRGM comprising a change-point and imperfect debugging in a simulated testing environment, and then extends it in a multi-release direction. Initially, software is released to the market with few features; according to market demand, the software company upgrades the current version by adding new features as time passes. We have therefore proposed a generalized multi-release SRGM in which the change-point and imperfect-debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept is adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets, and the results demonstrate that it fits the datasets better. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
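The building block of such models is an NHPP mean value function m(t), the expected cumulative number of faults detected by time t, with a change-point where the detection rate switches. A minimal sketch in the Goel-Okumoto style (an illustrative form, not the paper's exact multi-release model), built so that m(t) stays continuous at the change-point tau:

```python
import math

def mean_value_function(t, a, b1, b2, tau):
    """NHPP mean value function with a change-point at tau: the fault
    detection rate switches from b1 to b2 at tau while m(t) remains
    continuous. Goel-Okumoto-style illustrative form:
      m(t) = a * (1 - exp(-b1*t))                    for t <= tau
      m(t) = a * (1 - exp(-b1*tau - b2*(t - tau)))   for t >  tau
    `a` is the expected total fault content."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    return a * (1.0 - math.exp(-b1 * tau - b2 * (t - tau)))
```

Fitting such a model means choosing a, b1, b2 (and tau per release) so that m(t) tracks the observed cumulative failure counts; the release-time cost model then trades testing cost against expected post-release failures.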

Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors

Procedia PDF Downloads 61
1760 Modelling the Effect of Physical Environment Factors on Child Pedestrian Severity Collisions in Malaysia: A Multinomial Logistic Regression Analysis

Authors: Muhamad N. Borhan, Nur S. Darus, Siti Z. Ishak, Rozmi Ismail, Siti F. M. Razali

Abstract:

Children are at greater risk of being involved in road traffic collisions due to the complex interaction of various elements in our transportation system, which encompasses interactions between child and driver behavior along with physical and social environment factors. The present study examined the relationship between collision severity and physical environment factors in child pedestrian collisions. The severity of collisions is categorized into four injury outcomes: fatal, serious injury, slight injury, and damage. The sample comprised 2,487 cases of child pedestrian-vehicle collisions in Malaysia for the years 2006-2015 involving children aged 7 to 12 years. A multinomial logistic regression was applied to establish the effect of physical environment factors on the severity levels. The results showed that eight contributing factors influence the probability of an injury: road surface material, traffic system, road marking, control type, lighting condition, type of location, land use, and road surface condition. Understanding the effect of physical environment factors may contribute to improved physical environment design and decreased collision involvement.
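A multinomial logistic regression turns one linear score per non-reference severity level into class probabilities via a softmax. The sketch below shows only that probability step; the coefficient values and the feature encoding are hypothetical, not the fitted Malaysian model.

```python
import math

def severity_probabilities(features, coefs):
    """Multinomial-logit class probabilities: one linear score (intercept
    + coefficients times features) per non-reference severity level, then
    a softmax with the reference category fixed at score 0."""
    scores = [sum(c * f for c, f in zip(row, [1.0] + features)) for row in coefs]
    scores.append(0.0)                    # reference category, e.g. 'damage'
    mx = max(scores)                      # subtract max for numeric stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]      # [P(fatal), P(serious), P(slight), P(ref)]
```

A fitted model of this form gives, for each collision's environment factors, a probability over the four outcomes; the estimated coefficients indicate which factors shift probability toward the more severe levels.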

Keywords: child pedestrian, collisions, primary school, road injuries

Procedia PDF Downloads 151
1759 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test

Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman

Abstract:

At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can relatively quickly compare a number of commonly adopted probability distributions and parameter estimation methods using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution with the original Grubbs and Beck test. Between these two methods, differences in flood quantile estimates of up to 61% have been found for the six study catchments. It has also been found that the GEV distribution (with L-moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most cases; however, a difference of up to 38% was noted in flood quantiles at an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.
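The single-pass Grubbs-Beck screen that underlies both tests can be sketched directly: work in log10 space and flag values below mean - K_N * sd, where K_N comes from the commonly cited Bulletin 17B 10%-significance approximation. The multiple Grubbs-Beck test extends this by repeating the screen after dropping the k smallest values for increasing k; only the single-pass version is shown here.

```python
import math
import statistics

def grubbs_beck_low_outliers(flows):
    """Single-pass Grubbs-Beck low-outlier screen on log10 flows, using
    the Bulletin 17B 10%-level approximation
        K_N = -0.9043 + 3.345 * sqrt(log10 N) - 0.4046 * log10 N;
    flows whose log10 value falls below  mean - K_N * sd  are flagged.
    (The multiple Grubbs-Beck test repeats this screen after removing
    the k smallest observations for increasing k.)"""
    logs = [math.log10(q) for q in flows]
    n = len(logs)
    k_n = -0.9043 + 3.345 * math.sqrt(math.log10(n)) - 0.4046 * math.log10(n)
    threshold = statistics.mean(logs) - k_n * statistics.stdev(logs)
    return [q for q in flows if math.log10(q) < threshold]
```

In FLIKE the flagged low flows are then censored before the LP3 (or GEV) parameters are fitted, which is what drives the quantile differences reported above.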

Keywords: floods, FLIKE, probability distributions, flood frequency, outlier

Procedia PDF Downloads 434
1758 Evaluation of Residual Stresses in Human Face as a Function of Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

The growth and remodeling of biological structures have attracted considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting a body organ’s shape from medical imaging does not produce any information regarding the existing residual stresses in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth, and ignoring these residual stresses can cause erroneous results in numerical simulations; accounting for the residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in the facial tissues is computed; it can counteract the effect of gravity and maintain tissue firmness. Our assumption is that the tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery, as it would help surgeons to estimate tissue changes after surgery.

Keywords: finite element method, growth, residual stress, soft tissue

Procedia PDF Downloads 255
1757 The Effects of Time and Cyclic Loading to the Axial Capacity for Offshore Pile in Shallow Gas

Authors: Christian H. Girsang, M. Razi B. Mansoor, Noorizal N. Huang

Abstract:

An offshore platform was installed in 1977 about 260 km offshore West Malaysia in a water depth of 73.6 m. Twelve (12) piles were installed, of which four (4) are skirt piles. The piles have an outside diameter of 1.219 m and a wall thickness of 31 mm and were driven to 109 m below the seabed. Deterministic analyses of the pile capacity under axial loading were conducted using the current API (American Petroleum Institute) method and four (4) CPT-based methods: the ICP (Imperial College Pile) method, the NGI (Norwegian Geotechnical Institute) method, the UWA (University of Western Australia) method and the Fugro method. A statistical analysis of the model uncertainty associated with each pile capacity method was performed. Two cases were analysed: Pile 1, the pile most affected by shallow gas problems, and the piles other than Pile 1. Using the mean estimates of the soil properties, the five (5) methods used for deterministic estimation of axial pile capacity in compression predict an axial capacity of 28 to 42 MN for Pile 1 and 32 to 49 MN for the other piles. These values refer to the static capacity shortly after pile installation; they do not include the effects of cyclic loading during the design storm or of time after installation on the axial pile capacity. On average, the axial pile capacity is expected to have increased by about 40% because of ageing since the installation of the platform in 1977. On the other hand, cyclic loading effects during the design storm may reduce the axial capacity of the piles by around 25%. The study concluded that all piles have a sufficient safety factor when the pile ageing and cyclic loading effects are considered, as all safety factors are above 2.0 for maximum operating and storm loads.
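The arithmetic behind such a check can be sketched simply: static axial capacity is the sum of shaft friction over the embedded layers plus end bearing, then scaled by the ageing gain (+40%) and the cyclic-loading reduction (-25%) quoted in the abstract. The layer frictions, areas and design load below are hypothetical placeholders, not the platform's values.

```python
def pile_axial_capacity(shaft_frictions, shaft_areas, end_bearing, tip_area,
                        ageing_gain=0.40, cyclic_loss=0.25):
    """Static axial capacity  Q = sum(f_s * A_s) + q_p * A_p  (N), scaled
    for ageing (+40%) and design-storm cyclic loading (-25%) per the
    abstract; the layer values passed in are illustrative only."""
    q_static = sum(f * a for f, a in zip(shaft_frictions, shaft_areas))
    q_static += end_bearing * tip_area
    return q_static * (1.0 + ageing_gain) * (1.0 - cyclic_loss)

def safety_factor(capacity, load):
    """Ratio of factored capacity to the applied axial load."""
    return capacity / load

# hypothetical two-layer example: frictions in Pa, areas in m^2
q = pile_axial_capacity([50e3, 80e3], [100.0, 150.0],
                        end_bearing=5e6, tip_area=1.17)
```

With these placeholder numbers the unfactored capacity is 22.85 MN; ageing and cyclic effects net out to a factor of 1.05, and a 10 MN design load would give a safety factor of about 2.4, above the 2.0 criterion cited in the study.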

Keywords: axial capacity, cyclic loading, pile ageing, shallow gas

Procedia PDF Downloads 325
1756 Climate Teleconnections and Their Influence on the Spread of Dengue

Authors: Edilene Machado, Carolina Karoly, Amanda Britz, Luciane Salvi, Claudineia Brazil

Abstract:

Climate teleconnections refer to climatic relationships between geographically distant regions, where changes in one location can influence weather patterns in another. These connections can occur through atmospheric and oceanic processes, leading to variations in temperature, precipitation, and other climatic elements. Studying teleconnections is crucial for better understanding the mechanisms that govern global climate and the potential consequences of climate change. A notable example of a teleconnection is the El Niño-Southern Oscillation (ENSO), which involves the interaction between the Equatorial Pacific Ocean and the atmosphere. During El Niño episodes, there is anomalous warming of the surface waters in the Equatorial Pacific, resulting in significant changes in global climate patterns. These changes can affect rainfall distribution, wind patterns, and temperatures in different parts of the world. The cold phase of ENSO, known as La Niña, is often associated with reduced precipitation and below-average temperatures in the state of Rio Grande do Sul, Brazil. The objective of this research is therefore to identify patterns linking ENSO events, in their different phases, to dengue transmission. Meteorological data and dengue case records for the city of Porto Alegre, in the southern region of Brazil, were used in this research. The study found that the highest incidence of dengue cases occurred during the cold phase of ENSO (La Niña).
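The comparison underlying the study's finding (incidence per ENSO phase) can be sketched with a simple phase-wise grouping. The monthly counts below are invented placeholders, not the Porto Alegre records:

```python
from collections import defaultdict

def mean_cases_by_phase(records):
    """Average monthly dengue counts per ENSO phase.
    records: iterable of (phase, cases) pairs, phase being a label such as
    'La Nina', 'El Nino' or 'Neutral'."""
    by_phase = defaultdict(list)
    for phase, cases in records:
        by_phase[phase].append(cases)
    return {phase: sum(v) / len(v) for phase, v in by_phase.items()}

# Illustrative (made-up) monthly counts
data = [("La Nina", 120), ("La Nina", 90), ("El Nino", 40), ("Neutral", 60)]
print(mean_cases_by_phase(data))
```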

Keywords: climate patterns, climate teleconnections, climate variability, dengue, El Niño-Southern oscillation

Procedia PDF Downloads 74
1755 Asymptotic Analysis of the Viscous Flow through a Pipe and the Derivation of the Darcy-Weisbach Law

Authors: Eduard Marusic-Paloka

Abstract:

The Darcy-Weisbach formula is used to compute the pressure drop of the fluid in a pipe due to friction against the wall. Because of its simplicity, the Darcy-Weisbach formula became widely accepted by engineers and is used for laminar as well as turbulent flows through pipes, once methods to compute the friction coefficient were derived, particularly in the second half of the 20th century. The formula is empirical, and our goal is to derive it from the basic conservation laws via rigorous asymptotic analysis. We consider the case of laminar flow, but with a significant Reynolds number. In the case of a perfectly smooth pipe, the situation is trivial, as the Navier-Stokes system can be solved explicitly via the Poiseuille formula, leading to the friction coefficient in the form 64/Re. For a rough pipe, the situation is more complicated, and some effects of the roughness appear in the friction coefficient. We start from the Navier-Stokes system in a pipe with a periodically corrugated wall and derive an asymptotic expansion for the pressure and the velocity. We use homogenization techniques and boundary layer analysis. The approximation derived by formal analysis is then justified by a rigorous error estimate in the norm of the appropriate Sobolev space, using the energy formulation and classical a priori estimates for the Navier-Stokes system. Our method leads to a formula for the friction coefficient. The formula involves the resolution of appropriate boundary layer problems, namely boundary value problems for the Stokes system in an infinite band, which need to be solved numerically. However, a theoretical analysis characterising their nature can be done without solving them.
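For reference, the smooth-pipe laminar case mentioned above can be checked numerically: with f = 64/Re, the Darcy-Weisbach formula Δp = f (L/D) ρv²/2 reproduces the Poiseuille pressure drop. The fluid properties and pipe dimensions below are illustrative choices, not values from the paper:

```python
def darcy_weisbach_laminar(length_m, diameter_m, velocity_ms, density, kinematic_visc):
    """Pressure drop (Pa) for laminar pipe flow using the smooth-pipe
    friction factor f = 64/Re."""
    re = velocity_ms * diameter_m / kinematic_visc   # Reynolds number
    assert re < 2300, "laminar correlation only"
    f = 64.0 / re                                    # friction coefficient
    return f * (length_m / diameter_m) * density * velocity_ms**2 / 2.0

# Water (~20 C) in a 5 cm pipe, 10 m long, at 2 cm/s (Re = 1000)
dp = darcy_weisbach_laminar(10.0, 0.05, 0.02, 1000.0, 1e-6)
print(round(dp, 2))  # 2.56 Pa
```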

Keywords: Darcy-Weisbach law, pipe flow, rough boundary, Navier law

Procedia PDF Downloads 342
1754 Co-Alignment of Comfort and Energy Saving Objectives for U.S. Office Buildings and Restaurants

Authors: Lourdes Gutierrez, Eric Williams

Abstract:

Post-occupancy research shows that only 11% of commercial buildings meet the ASHRAE thermal comfort standard. Many buildings are too warm in winter and/or too cool in summer, wasting energy and not providing comfort. In this paper, potential energy savings in U.S. offices and restaurants are estimated for thermostat settings calculated according to the updated ASHRAE 55-2013 comfort model, which accounts for outdoor temperature and clothing choice in different climate zones. eQUEST building models are calibrated to reproduce aggregate energy consumption as reported in the U.S. Commercial Building Energy Consumption Survey. Changes in energy consumption due to the new settings are analyzed for 14 cities in different climate zones, and the results are then extrapolated to estimate potential national savings. It is found that, depending on the climate zone, each degree increase in the summer setting saves 0.6 to 1.0% of total building electricity consumption. Each degree the winter setting is lowered saves 1.2% to 8.7% of total building natural gas consumption. With the new thermostat settings, national savings are 2.5% of the total energy consumed in all office buildings and restaurants, summing to national savings of 69.6 million GJ annually, comparable to total 2015 solar PV generation in the US. The goals of improved comfort and energy/economic savings are thus co-aligned, raising the importance of thermostat management as an energy efficiency strategy.
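The per-degree savings figures quoted above can be turned into a back-of-the-envelope estimator. The 0.8% mid-range electricity value is an assumption within the paper's 0.6-1.0% band, and the baseline consumptions below are arbitrary illustrations:

```python
def summer_electricity_savings(baseline_gj, degrees_raised, pct_per_degree=0.008):
    """Electricity saved (GJ) from raising the summer setpoint; the paper reports
    0.6-1.0% per degree, so 0.8% is used here as an assumed mid-range value."""
    return baseline_gj * degrees_raised * pct_per_degree

def winter_gas_savings(baseline_gj, degrees_lowered, pct_per_degree=0.012):
    """Natural gas saved (GJ) from lowering the winter setpoint; 1.2% per degree
    is the paper's lower bound."""
    return baseline_gj * degrees_lowered * pct_per_degree

# A building using 1000 GJ of electricity, setpoint raised 2 degrees in summer
print(round(summer_electricity_savings(1000.0, 2), 1))  # 16.0 GJ
```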

Keywords: energy savings quantifications, commercial building stocks, dynamic clothing insulation model, operation-focused interventions, energy management, thermal comfort, thermostat settings

Procedia PDF Downloads 294
1753 Assessment of Water Availability and Quality in the Climate Change Context in Urban Areas

Authors: Rose-Michelle Smith, Musandji Fuamba, Salomon Salumu

Abstract:

Water is vital for life. Access to drinking water and sanitation is one of the Sustainable Development Goals (specifically the sixth) approved by United Nations Member States in September 2015. Various problems relating to water have been identified: insufficient fresh water, inequitable distribution of water resources, poor water management in certain places on the planet, water-borne diseases due to poor water quality, and the negative impacts of climate change on water. One of the major challenges in the world is finding ways to ensure that people and the environment have enough water resources to sustain and support their existence. Thus, this research project aims to develop a tool to assess the availability, quality and needs of water in current and future situations with regard to climate change. This tool was tested using threshold values for three regions in three countries: the Metropolitan Community of Montreal (Canada), the Normandie Region (France) and the North Department (Haiti). The WEAP software was used to evaluate the available quantity of water resources. For water quality, two index models were applied: the Canadian Council of Ministers of the Environment (CCME) index and the Malaysian Water Quality Index (WQI). Preliminary results showed that the water needs could be estimated at 155, 308 and 644 m3/capita in 2023 for Normandie, Cap-Haitian and the CMM, respectively. The Water Quality Index (WQI) varied from one country to another. Other simulations regarding water availability and quality are still in progress. This tool will be very useful in decision-making on projects relating to water use in the future; it will make it possible to estimate whether the available resources will be able to satisfy the needs.

Keywords: climate change, water needs, balance sheet, water quality

Procedia PDF Downloads 48
1752 Higher Education Quality Culture: Case Study: Georgia

Authors: Pikria Vardosanidze

Abstract:

This presentation, entitled "Higher Education Quality Culture – Case Study: Georgia", is concerned with an urgent and crucial issue. Located at the crossroads of Europe and Asia, Georgia is a transnational, post-Soviet country, and this conditions the peculiarities of its education system. Higher education in Georgia has an extensive history and a challenging period of development consisting of several phases, of which 1918 and 1991, the latter marking the restoration of Georgia's independence, are especially noteworthy. Georgia joined the Bologna Process in 2005. Given its geopolitical location, Georgian culture has developed, and still pursues its path of development, against the background of both Western and Eastern cultures. Furthermore, socio-politically and culturally, it represents part of Europe. It is of particular interest how post-Soviet states develop in terms of education. What is the path to European integration for Georgia as a post-Soviet country? How developed is the higher education quality culture in Georgia? And what should be done in the future? It is important to answer these questions. Research carried out in the field of education is characterized by a certain specificity, as is post-colonial research. The field of education contributes to the development of democratic society as well as to European integration, the Eastern Partnership, and so on. What is crucial for the educational system, apart from transparency and democratization, is the improvement of the quality of education, which is one of the most powerful tools available and dictates the need for doctoral research such as this. As for the research method, the comparative method and qualitative research are applied.

Keywords: internationalization, higher education, policies, Georgia

Procedia PDF Downloads 83
1751 Comparative Study between Inertial Navigation System and GPS in Flight Management System Application

Authors: Othman Maklouf, Matouk Elamari, M. Rgeai, Fateh Alej

Abstract:

In modern avionics, the main fundamental component is the flight management system (FMS). An FMS is a specialized computer system that automates a wide variety of in-flight tasks, reducing the workload on the flight crew to the point that modern civilian aircraft no longer carry flight engineers or navigators. The main function of the FMS is in-flight management of the flight plan, using sensors such as the Global Positioning System (GPS) and the Inertial Navigation System (INS) to determine the aircraft's position and guide the aircraft along the flight plan. GPS is a satellite-based navigation system, while an INS generally consists of inertial sensors (accelerometers and gyroscopes). GPS is used to locate positions anywhere on Earth; it consists of satellites, control stations, and receivers. GPS receivers take information transmitted from the satellites and use trilateration to calculate the user's exact location. The basic principle of an INS is the integration of the accelerations observed by the accelerometers on board the moving platform; the system accomplishes this task through appropriate processing of the data obtained from the specific force and angular velocity measurements. Thus, an appropriately initialized inertial navigation system is capable of continuous determination of vehicle position, velocity and attitude without the use of external information. The main objective of this article is to introduce a comparative study between the two systems under different conditions and scenarios using MATLAB with SIMULINK software.
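The INS principle described above (integrating measured accelerations into velocity and position) can be sketched in one dimension with a simple Euler integration. This is an illustrative sketch only, not the article's MATLAB/Simulink implementation, and it ignores attitude and sensor bias:

```python
def dead_reckon_1d(accels, dt, v0=0.0, x0=0.0):
    """Integrate accelerometer samples (m/s^2) into velocity and position
    histories, starting from initial velocity v0 and position x0."""
    v, x = v0, x0
    velocities, positions = [], []
    for a in accels:
        v += a * dt          # velocity from acceleration
        x += v * dt          # position from velocity
        velocities.append(v)
        positions.append(x)
    return velocities, positions

# Constant 1 m/s^2 acceleration sampled once per second for 3 s
v, x = dead_reckon_1d([1.0, 1.0, 1.0], dt=1.0)
print(v)  # [1.0, 2.0, 3.0]
print(x)  # [1.0, 3.0, 6.0]
```

Because position comes from a double integration, any accelerometer bias grows quadratically with time, which is why an INS drifts without external aiding such as GPS.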

Keywords: flight management system, GPS, IMU, inertial navigation system

Procedia PDF Downloads 281
1750 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemical software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature-masking approach for input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared to a single network.
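The median-rule decision fusion mentioned at the end can be illustrated with a minimal sketch; the coefficient values below are arbitrary placeholders, not real integral data:

```python
import statistics

def median_fusion(network_outputs):
    """Combine per-network coefficient predictions column-wise with a median
    rule. network_outputs: one list of predicted coefficients per ensemble
    member; all lists have the same length."""
    return [statistics.median(column) for column in zip(*network_outputs)]

ensemble = [
    [0.12, -0.40, 0.05],   # network 1
    [0.10, -0.35, 0.07],   # network 2
    [0.30, -0.38, 0.06],   # network 3
]
print(median_fusion(ensemble))  # [0.12, -0.38, 0.06]
```

The median is robust to a single network producing an outlying prediction, which is one common motivation for this fusion rule over a plain average.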

Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction

Procedia PDF Downloads 90
1749 Modeling Battery Degradation for Electric Buses: Assessment of Lifespan Reduction from In-Depot Charging

Authors: Anaissia Franca, Julian Fernandez, Curran Crawford, Ned Djilali

Abstract:

A methodology to estimate the state-of-charge (SOC) of battery electric buses, including degradation effects, for a given driving cycle is presented to support long-term techno-economic analysis integrating electric buses and charging infrastructure. The degradation mechanisms, characterized by both capacity and power fade over time, have been modeled using an electrochemical model for Li-ion batteries. Iterative changes in the negative electrode film resistance and the decrease in available lithium as a function of utilization are simulated for every cycle. The cycles are formulated to follow typical transit bus driving patterns. The power and capacity decay resulting from the degradation model are introduced as inputs to a longitudinal chassis dynamics analysis that calculates the power consumption of the bus for a given driving cycle to find the state-of-charge of the battery as a function of time. The method is applied to an in-depot charging scenario, in which the bus is charged exclusively at the depot, overnight and to its full capacity. This scenario is run both with and without degradation effects over time to illustrate the significant impact of degradation mechanisms on bus performance when conducting feasibility studies for a fleet of electric buses. The impact of battery degradation on battery lifetime is also assessed. The modeling tool can be further used to optimize component sizing and charging locations for electric bus deployment projects.
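A highly simplified version of coupling capacity fade to the in-depot charging pattern can be sketched as follows. The fixed per-cycle fade rate and the energy demands are illustrative assumptions, not outputs of the electrochemical model used in the study:

```python
def end_of_day_soc(daily_demand_kwh, initial_capacity_kwh, fade_per_cycle=5e-4):
    """End-of-route SOC for each day, assuming a full overnight depot charge.
    Usable capacity shrinks by a fixed fraction per charge cycle (capacity
    fade), so the same route leaves less margin as the pack ages."""
    capacity = initial_capacity_kwh
    socs = []
    for demand in daily_demand_kwh:
        socs.append(max(0.0, (capacity - demand) / capacity))
        capacity *= 1.0 - fade_per_cycle   # degrade after each cycle
    return socs

# 300 kWh pack, 180 kWh consumed per day
print(end_of_day_soc([180.0, 180.0], 300.0))  # day-2 SOC is slightly below day-1
```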

Keywords: battery electric bus, E-bus, in-depot charging, lithium-ion battery, battery degradation, capacity fade, power fade, electric vehicle, SEI, electrochemical models

Procedia PDF Downloads 307
1748 Evaluation of Storage Stability and Quality Parameters in Biscuit Made from Blends of Wheat, Cassava (Manihot esculenta) and Carrot (Daucus carota) Flour

Authors: Aminat. O Adelekan, Olawale T. Gbadebo

Abstract:

Biscuit is one of the most consumed cereal foods in Nigeria, and research has shown that locally available tropical crops like cassava and sweet potato can be made into flour and used in the production of biscuits and other pastries. This study investigates some quality parameters in biscuits made from blends of wheat, cassava and carrot flour. The values of some quality parameters, such as fibre, ash, gluten content, and carbohydrate, increased with increasing percentage substitution of cassava and carrot flour. The protein content reduced significantly (P < 0.05) with increasing percentage substitution of cassava and carrot flour, ranging from 14.80% to 11.80%, compared with the control sample, which had 15.60%. There was a significant increase (P < 0.05) in some mineral compositions, such as calcium, magnesium, sodium, iron and phosphorus, and in vitamin A and C composition, as the percentage substitution of cassava and carrot flour increased. During the storage stability test, the fridge and freezer were found to be the best storage locations for preserving the sensory attributes and inhibiting microbial growth, compared with storage under the sun and on the shelf. Biscuits made with blends of wheat, cassava and carrot flour can therefore serve as an alternative to biscuits made from 100% wheat flour, as they are richer in vitamin A, vitamin C, carbohydrate, dietary fiber and some essential minerals.

Keywords: biscuit, carrot, flour blends, storage

Procedia PDF Downloads 120
1747 Imputing the Minimum Social Value of Public Healthcare: A General Equilibrium Model of Israel

Authors: Erez Yerushalmi, Sani Ziv

Abstract:

The rising demand for healthcare services, without a corresponding rise in public supply, has led to a debate on whether to increase private healthcare provision, especially in hospital services and second-tier healthcare. Proponents of increasing private healthcare highlight gains in efficiency, while opponents highlight its risk to social welfare. Neither side, however, provides a measure of the social value and its impact on the economy in monetary terms. In this paper, we impute a minimum social value of public healthcare that corresponds to indifference between gains in efficiency and losses to social welfare. Our approach resembles contingent valuation methods that introduce a hypothetical market for non-commodities, but differs from them because we use numerical simulation techniques to exploit certain market failure conditions. We develop a general equilibrium model that distinguishes between public-private healthcare services and public-private financing. Furthermore, the social value is modelled as a by-product of healthcare services. The model is then calibrated to our unique health-focused Social Accounting Matrix of Israel and simulates the introduction of a hypothetical health-labour market, given that it is heavily regulated in the baseline (i.e., the true situation in Israel today). For baseline parameters, we estimate the minimum social value at around 18% of public healthcare financing. The intuition is that the gain in economic welfare from improved efficiency is offset by the loss in social welfare due to a reduction in available social value. We furthermore simulate a deregulated healthcare scenario that internalizes the imputed social value and searches for the optimal weight of public and private healthcare provision.

Keywords: contingent valuation method (CVM), general equilibrium model, hypothetical market, private-public healthcare, social value of public healthcare

Procedia PDF Downloads 128
1746 Association of Alcohol Consumption with Active Tuberculosis in Taiwanese Adults: A Nationwide Population-Based Cohort Study

Authors: Yung-Feng Yen, Yun-Ju Lai

Abstract:

Background: Animal studies have shown that alcohol exposure may cause immunosuppression and increase susceptibility to tuberculosis (TB) infection. However, the temporality of alcohol consumption and subsequent TB development remains unclear. This nationwide population-based cohort study aimed to investigate the impact of alcohol exposure on TB development in Taiwanese adults. Methods: We included 46,196 adult participants from three rounds (2001, 2005, 2009) of the Taiwan National Health Interview Survey. Alcohol consumption was classified as heavy, regular, social, or never. Heavy alcohol consumption was defined as intoxication at least once per week. Alcohol consumption and other covariates were collected by in-person interviews at baseline. Incident cases of active TB were identified from the National Health Insurance database. Multivariate logistic regression was used to estimate the association between alcohol consumption and active TB, with adjustment for age, sex, smoking, socioeconomic status, and other covariates. Results: A total of 279 new cases of active TB occurred during the study follow-up period. Heavy (adjusted odds ratio [AOR], 5.21; 95% confidence interval [CI], 2.41-11.26) and regular alcohol use (AOR, 1.73; 95% CI, 1.26-2.38) were associated with higher risks of incident TB after adjusting for subject demographics and comorbidities. Moreover, a strong dose-response effect was observed between increasing alcohol consumption and incident TB (AOR, 2.26; 95% CI, 1.59-3.21; P < .001). Conclusion: Heavy and regular alcohol consumption were associated with higher risks of active TB. Future TB control programs should consider strategies to lower the overall level of alcohol consumption to reduce the TB disease burden.
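The adjusted odds ratios above come from multivariate logistic regression; as a simplified illustration, a crude (unadjusted) odds ratio with a Wald 95% confidence interval can be computed from a 2x2 table. The counts below are invented for demonstration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/90 among drinkers, 5/95 among never-drinkers
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```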

Keywords: alcohol consumption, tuberculosis, risk factor, cohort study

Procedia PDF Downloads 218
1745 Method for Targeting Small Volume in Rat Brain by Gamma Knife and Dosimetric Control: Towards a Standardization

Authors: J. Constanzo, B. Paquette, G. Charest, L. Masson-Côté, M. Guillot

Abstract:

Targeted and whole-brain irradiation in humans can result in significant side effects causing decreased patient quality of life. To adequately investigate structural and functional alterations after stereotactic radiosurgery, preclinical studies are needed. The first step is to establish a robust, standardized method of targeted irradiation of small regions of the rat brain. Eleven euthanized male Fischer rats were imaged in a stereotactic bed by computed tomography (CT) to estimate positioning variations relative to the bregma skull reference point. Using a rat brain atlas and the stereotactic bregma coordinates assessed from the CT images, various regions of the brain were delimited and a treatment plan was generated. A dose of 37 Gy at the 30% isodose, which corresponds to 100 Gy in 100% of the target volume (X = 98.1; Y = 109.1; Z = 100.0), was set by Leksell GammaPlan using sectors 4, 5, 7, and 8 of the Gamma Knife unit with the 4-mm diameter collimators. The effects of the positioning accuracy of the rat brain on the dose deposition were simulated in GammaPlan and validated with dosimetric measurements. Our results showed that 90% of the target volume received 110 ± 4.7 Gy and the maximum deposited dose was 124 ± 0.6 Gy, which corresponds to an excellent relative standard deviation of 0.5%. The dose deposition calculated with GammaPlan was validated with the dosimetric films, resulting in a dose-profile agreement within 2% in both the X- and Z-axes. Our results demonstrate the feasibility of standardizing the irradiation procedure for a small volume in the rat brain using a Gamma Knife.
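The relation between a prescription at an isodose line and the implied maximum dose is a simple quotient, and it checks out against the measured values above: 37 Gy prescribed at the 30% isodose implies a maximum of about 123 Gy, consistent with the reported 124 ± 0.6 Gy.

```python
def max_dose_from_isodose(prescribed_gy, isodose_fraction):
    """Maximum (100%) dose implied by a prescription at a given isodose line."""
    return prescribed_gy / isodose_fraction

# 37 Gy prescribed at the 30% isodose line
print(round(max_dose_from_isodose(37.0, 0.30), 1))  # 123.3 Gy
```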

Keywords: brain irradiation, dosimetry, gamma knife, small-animal irradiation, stereotactic radiosurgery (SRS)

Procedia PDF Downloads 396
1744 Augmented Reality for Children Vocabulary Learning: Case Study in a Macau Kindergarten

Authors: R. W. Chan, Kan Kan Chan

Abstract:

Augmented Reality (AR), with the affordance of bridging the real and virtual worlds, brings users an immersive experience. It has gradually been applied in education and has even come into practice in daily student learning. However, a systematic review shows that there is limited research in the area of vocabulary acquisition in early childhood education. Since kindergarten is a key stage in which children acquire language, and AR is an emerging and potential technology to support vocabulary acquisition, this study aims to explore its value in a real classroom, including the teacher's view. Participants were a class of 5- to 6-year-old children studying in a Macau school that follows a Cambridge curriculum and emphasizes a multicultural ethos. There were 11 boys and 13 girls, 24 children in total. They learnt animal vocabulary using mobile devices (iPads) to scan AR flashcards and interact with the pop-up virtual objects. In order to estimate the effectiveness of using Augmented Reality, the children took a vocabulary pre-test and post-test. In addition, a teacher interview was administered after the learning activity to seek a practitioner's opinion of this technology. For data analysis, a paired-samples t-test was used to measure the instructional effect based on the pre- and post-test data. The results show that Augmented Reality significantly enhanced the children's vocabulary learning, with a large effect size. The teacher indicated that the children enjoyed the AR learning activity but that clear instruction is needed. Suggestions for future implementations of vocabulary acquisition using AR are provided.
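The paired-samples t-test used for the pre-/post-test comparison can be sketched with standard-library Python; the scores below are fabricated for illustration and are not the Macau classroom data:

```python
import math
import statistics

def paired_t_and_effect(pre, post):
    """Paired t statistic and a paired-data Cohen's d for pre/post scores.
    Both statistics are computed on the per-child score differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd = statistics.stdev(diffs)           # sample SD of the differences
    t = mean_d / (sd / math.sqrt(n))       # t statistic with n-1 df
    d = mean_d / sd                        # effect size on difference scores
    return t, d

pre = [2, 3, 1, 4]      # vocabulary items known before the AR activity
post = [3, 5, 3, 7]     # items known afterwards
t, d = paired_t_and_effect(pre, post)
print(round(t, 2), round(d, 2))
```

A conventional rule of thumb treats d above 0.8 as a large effect, which is the kind of result the abstract reports.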

Keywords: augmented reality, kindergarten children, vocabulary learning, Macau

Procedia PDF Downloads 130