Search results for: pressure-robust error estimate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3559

859 Prevalence and Risk Factors of Metabolic Syndrome in Adults of Terai Region of Nepal

Authors: Birendra Kumar Jha, Mingma L. Sherpa, Binod Kumar Dahal

Abstract:

Background: The metabolic syndrome is emerging as a major public health concern in the world. Urbanization, surplus energy intake, compounded by decreased physical activity, and increasing obesity are the major factors contributing to the epidemic of metabolic syndrome worldwide. However, the prevalence of metabolic syndrome and its risk factors have been little studied in the Terai region of Nepal. The objectives of this research were to estimate the prevalence and to identify the risk factors of metabolic syndrome among adults in the Terai region of Nepal. Method: We used a community-based cross-sectional study design. A total of 225 adults (age: 18 to 80 years) were selected from three districts of the Terai region of Nepal using cluster sampling by camp approach. IDF criteria (central obesity with any two of the following four factors: triglycerides ≥ 150 mg/dl or specific treatment for lipid abnormality, reduced HDL, raised blood pressure, and raised fasting plasma glucose or previously diagnosed type 2 diabetes) were used to assess metabolic syndrome. Interview, physical and clinical examination, and measurement of fasting blood glucose and lipid profile were conducted for all participants. Chi-square tests and multivariable logistic regression were employed to explore the risk factors of metabolic syndrome. Result: The overall prevalence of metabolic syndrome was 70.7%. Hypertension, increased fasting blood sugar, increased triglycerides and decreased HDL were observed in 50.7%, 32.4%, 41.8% and 79.1% of the subjects, respectively. Socio-economic and behavioral risk factors significantly associated with metabolic syndrome were male gender (OR=2.56, 95% CI: 1.42-4.63; p=0.002), being in service or retired from service (OR=3.72, 95% CI: 1.72-8.03; p=0.001) and smoking (OR=4.10, 95% CI: 1.19-14.07; p=0.016). Conclusion: The high prevalence of metabolic syndrome, along with the presence of behavioral risk factors, in the Terai region of Nepal likely suggests a lack of awareness and health promotion activities for metabolic syndrome, and indicates the need to promote public health programs in this region to maintain quality of life.
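
The adjusted odds ratios above come from the multivariable logistic regression. As a hedged illustration of the underlying arithmetic only (the counts below are invented, not the study's data), the sketch computes a crude, unadjusted odds ratio and its 95% confidence interval for one binary risk factor from a 2x2 table; the multivariable model additionally adjusts for the other covariates.

```python
import math

# Hypothetical 2x2 table for one binary risk factor (e.g. smoking) vs. metabolic syndrome
a, b = 32, 8      # exposed:   with MetS, without MetS
c, d = 127, 58    # unexposed: with MetS, without MetS

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)            # Woolf standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"crude OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```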

Keywords: metabolic syndrome, Nepal, prevalence, risk factors, Terai

Procedia PDF Downloads 138
858 Shear Strength and Consolidation Behavior of Clayey Soil with Vertical and Radial Drainage

Authors: R. Pillai Aparna, S. R. Gandhi

Abstract:

Soft clay deposits having low strength and high compressibility are found all over the world. Preloading with vertical drains is a widely used method for improving such soils. The coefficient of consolidation, irrespective of the drainage type, plays an important role in the design of vertical drains, and it controls the accurate prediction of the rate of consolidation of the soil. The increase in shear strength of the soil with consolidation is another important factor considered in preloading or staged construction. To the best of our knowledge, no clear guidelines are available to estimate the increase in shear strength for a particular degree of consolidation (U) at various stages during construction. Various methods are available for finding the consolidation coefficient. This study mainly focuses on the variation of the consolidation coefficient, determined using different methods, and of shear strength with pressure intensity. The variation of shear strength with the degree of consolidation was also studied. Consolidation tests were done using two types of highly compressible clays with vertical and radial drainage, and a few with combined drainage. The tests were carried out at different pressure intensities and, for each pressure intensity, once the target degree of consolidation was achieved, a vane shear test was done at different locations in the sample in order to determine the shear strength. The shear strength of clayey soils under the application of vertical stress with vertical and radial drainage was studied for target U values of 70% and 90%. It was found that there is not much variation in the cv or cr values beyond a pressure intensity of 80 kPa. Correlations were developed between the shear strength ratio and the consolidation pressure based on laboratory testing under controlled conditions. It was observed that the shear strength of samples with a target U value of 90% is about 1.4 to 2 times that of 70% consolidated samples. Settlement analysis was done using Asaoka's and the hyperbolic methods. The variation of strength with the depth of the sample was also studied using a large-scale consolidation test. It was found, based on the present study, that the gain in strength is greater in the top half of the clay layer, and that the shear strength of samples with radial drainage is slightly higher than that of samples with vertical drainage.
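
As a hedged sketch of the Asaoka settlement analysis mentioned above (the readings below are invented for illustration, not the study's data), the standard recursion s_i = β0 + β1·s_(i-1) is fitted to settlements observed at equal time intervals, and the ultimate settlement is taken as the intersection with the 45-degree line.

```python
import numpy as np

# Hypothetical settlement readings (mm) taken at equal time intervals during preloading
s = np.array([12.0, 18.5, 23.4, 27.1, 29.9, 32.0, 33.6, 34.8])

beta1, beta0 = np.polyfit(s[:-1], s[1:], 1)     # fit s_i = beta0 + beta1 * s_(i-1)
s_ult = beta0 / (1.0 - beta1)                   # Asaoka ultimate settlement
U = s / s_ult                                   # degree of consolidation at each reading

print(f"predicted ultimate settlement ≈ {s_ult:.1f} mm")
print("degree of consolidation U:", np.round(U, 2))
```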

Keywords: consolidation coefficient, degree of consolidation, PVDs, shear strength

Procedia PDF Downloads 221
857 Establishing a Surrogate Approach to Assess the Exposure Concentrations during Coating Process

Authors: Shan-Hong Ying, Ying-Fang Wang

Abstract:

A surrogate approach was deployed for assessing exposures to multiple chemicals in the selected working area of coating processes and applied to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios. For the selected area, 6 to 12 portable photoionization detectors (PIDs) were placed uniformly in the workplace to measure its total VOC concentrations (CT-VOCs) for 6 randomly selected workshifts. Simultaneously, one sampling train was placed beside one of these portable PIDs, and the collected air sample was analyzed for the individual concentrations (CVOCi) of 5 VOCs (xylene, butanone, toluene, butyl acetate, and dimethylformamide). Predictive models were established by relating the CT-VOCs to the CVOCi of each individual compound via simple regression analysis. The established predictive models were employed to predict each CVOCi based on the measured CT-VOC for each similar working area using the same portable PID. The results show that the predictive models obtained from simple linear regression analyses had R² = 0.83~0.99, indicating that CT-VOC was adequate for predicting CVOCi. In order to verify the validity of the exposure prediction models, sampling and analysis of the above chemical substances were further carried out, and the correlation between the measured value (Cm) and the predicted value (Cp) was analyzed. It was found that there is a good correlation between the predicted and measured values of each measured chemical substance (R²=0.83~0.98). Therefore, the surrogate approach could be used to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios. However, it is recommended to establish the prediction model between the chemical substances belonging to each coater and the direct-reading PID, which is more representative of the real exposure situation and estimates the long-term exposure concentration of operators more accurately.
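
A minimal sketch of the surrogate idea, assuming invented paired readings (not the study's measurements): the lab-analysed concentration of one compound is regressed on the PID's total-VOC reading, and the fitted line is then used to predict that compound's concentration from a new PID reading in a similar work area.

```python
import numpy as np

# Hypothetical paired data from one workshift: PID total-VOC readings (ppm)
# and lab-analysed toluene concentrations (ppm) from the co-located sampling train
ct_voc = np.array([3.2, 5.1, 6.8, 8.4, 10.2, 12.5])
c_toluene = np.array([0.9, 1.4, 1.9, 2.4, 2.9, 3.6])

slope, intercept = np.polyfit(ct_voc, c_toluene, 1)
pred = slope * ct_voc + intercept
r2 = 1 - np.sum((c_toluene - pred) ** 2) / np.sum((c_toluene - c_toluene.mean()) ** 2)
print(f"C_toluene ≈ {slope:.3f} * C_T-VOC + {intercept:.3f}   (R² = {r2:.2f})")

# Predict the toluene concentration in a similar exposure group from a new PID reading
print("predicted C_toluene at C_T-VOC = 9.0 ppm:", round(slope * 9.0 + intercept, 2), "ppm")
```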

Keywords: exposure assessment, exposure prediction model, surrogate approach, TVOC

Procedia PDF Downloads 140
856 Rationale of Eye Pupillary Diameter for UV Protection in Sunglasses

Authors: Liliane Ventura, Mauro Masili

Abstract:

Ultraviolet (UV) protection is critical for sunglasses, and mydriasis, as well as miosis, are relevant parameters to consider. The literature reports that UV protection is critical for sunglasses because lenses that do not provide adequate UV protection can cause the opposite of the intended effect, due to the greater dilation of the pupil when wearing sunglasses. However, the scientific literature does not properly quantify this effect to support the rationale. The reasoning may be misleading because it ignores not only the inherent UV absorption of the sunglass lens materials but also the absorption of the anterior structures of the eye, i.e., the cornea and aqueous humor. Therefore, we estimate the pupil diameter and calculate the solar ultraviolet influx through the pupil of the human eye for two situations: an individual wearing sunglasses and the eyes free of shade. We quantify the dilation of the pupil as a function of the luminance of the surroundings. A typical boundary condition for the calculation is an individual in an upright position wearing sunglasses, staring at the horizon as if the sun were at the zenith. The calculation was done for the latitude of the geographic center of the state of São Paulo (-22º04'11.8'' S) from sunrise to sunset. A model from the literature is used for determining the sky luminance. The initial approach is to obtain the pupil diameter as a function of luminance. Therefore, as a preliminary result, we calculate the pupil diameter as a function of the time of day, as the sun moves, for a particular day of the year. The working range for luminance is daylight (10⁻⁴ – 10⁵ cd/m²). We are able to show how the pupil adjusts to brightness changes (~2 - ~7.8 mm). At noon, with the sun higher, the direct incidence of light on the pupil is lower compared to mid-morning or mid-afternoon, when the sun strikes more directly into the eye; thus, the pupil is larger at midday. As expected, the two situations have opposite behaviors, since higher luminance implies a smaller pupil. With these results, we can progress in the short term to obtain the transmittance spectra of sunglasses samples and quantify how the light attenuation provided by the spectacles affects pupil diameter.
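
The abstract does not state which luminance-to-pupil model was adopted; as one hedged possibility, the sketch below evaluates the classical Moon and Spencer (1944) fit over the daylight luminance range quoted above, which yields pupil diameters of roughly 2 to 8 mm.

```python
import numpy as np

def pupil_diameter_mm(luminance_cd_m2):
    """Moon & Spencer (1944) fit: d = 4.9 - 3*tanh(0.4*log10(L)), with L in cd/m^2."""
    return 4.9 - 3.0 * np.tanh(0.4 * np.log10(luminance_cd_m2))

# Sweep the luminance range quoted in the abstract (1e-4 to 1e5 cd/m^2)
for L in [1e-4, 1e-2, 1.0, 1e2, 1e4, 1e5]:
    print(f"L = {L:10.4f} cd/m^2  ->  pupil ≈ {pupil_diameter_mm(L):.2f} mm")
```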

Keywords: sunglasses, UV protection, pupil diameter, solar irradiance, luminance

Procedia PDF Downloads 66
855 Measurement of Intermediate Slip Rate of Sabzpushan Fault Zone in Southwestern Iran, Using Optically Stimulated Luminescence (OSL) Dating

Authors: Iman Nezamzadeh, Ali Faghih, Behnam Oveisi

Abstract:

In order to reduce earthquake hazards in urban areas, it is necessary to perform comprehensive studies to understand the dynamics of active faults and identify potentially high-risk areas. Fault slip rates in Late Quaternary sediments are critical indicators of seismic hazard and also provide valuable data for recognizing young crustal deformations. Accurate measurement of slip rates requires both the displacement of geomorphic markers and the ages of Quaternary alluvial deposits deformed by movements on the fault. In this study, we produced information on the intermediate-term slip rate of the Sabzpushan Fault Zone (SPF) within the central part of the Zagros Mountains of Iran using the OSL dating technique, to enable better seismic hazard analysis and seismic risk reduction for the city of Shiraz. For this purpose, identifiable fluvial geomorphic surfaces provide a reference frame to determine differential or absolute horizontal and vertical deformation. Optically stimulated luminescence (OSL) is an alternative and independent method of determining the burial age of mineral grains in Quaternary sediments. Field observations and satellite imagery show geomorphic markers that are deformed horizontally along the Sabzpushan Fault. Here, drag folds are forming because of the evaporitic material of the Miocene formations. We estimate a horizontal slip rate of 2.8±0.5 mm/yr along the Sabzpushan fault zone, where ongoing deformation involves drag folding. The Soltan synclinal structure, close to the Sabzpushan fault, shows a slight uplift rate due to active core extrusion.
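
A slip rate of this kind is simply the marker offset divided by the OSL burial age; the sketch below propagates both uncertainties by Monte Carlo. The offset and age values are illustrative placeholders chosen to be of the same order as the quoted result, not the study's measurements.

```python
import numpy as np

offset_m, offset_sigma = 28.0, 3.0   # hypothetical horizontal offset of a fluvial marker (m)
age_ka, age_sigma = 10.0, 1.2        # hypothetical OSL burial age of the deposit (ka)

# Monte Carlo propagation of both uncertainties (note: m/ka is numerically identical to mm/yr)
rng = np.random.default_rng(0)
rates = rng.normal(offset_m, offset_sigma, 100_000) / rng.normal(age_ka, age_sigma, 100_000)
print(f"slip rate ≈ {rates.mean():.2f} ± {rates.std():.2f} mm/yr")
```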

Keywords: slip rate, active tectonics, OSL, geomorphic markers, Sabzpushan Fault Zone, Zagros, Iran

Procedia PDF Downloads 343
854 The Effect of Foundation on the Earth Fill Dam Settlement

Authors: Masoud Ghaemi, Mohammadjafar Hedayati, Faezeh Yousefzadeh, Hoseinali Heydarzadeh

Abstract:

Careful monitoring of earth dams to measure deformation caused by settlement and movement has always been a concern for engineers in the field. In order to measure the settlement and deformation of earth dams, precision settlement cells combined with inclinometers, commonly referred to as IS instruments, are usually used. In some dams, because the thickness of the alluvium is large and there is no possibility of alluvium removal (technically, economically, and in terms of performance), there is no possibility of placing the end of the IS instrument (precision inclinometer-settlement set) in the rock foundation. Inevitably, the pipes have to be installed in the weak and deformable alluvial foundation, which leads to errors in the calculation of the actual (absolute) settlement in different parts of the dam body. The purpose of this paper is to present new and refined criteria for predicting settlement and deformation in earth dams. The study is based on conditions in three dams with quite deformable alluvial foundations (Agh Chai, Narmashir and Gilan-e Gharb) in order to provide settlement criteria affected by the alluvial foundation. To achieve this goal, the settlement of the dams was simulated using the finite difference method with the FLAC3D software, and the modeling results were then compared with the IS instrument readings. Finally, to calibrate the model and validate the results, regression analysis techniques were used and the modeling parameters were scrutinized against real situations; then, using MATLAB and its Curve Fitting Toolbox, new settlement criteria based on the elasticity modulus, cohesion, friction angle, and density of the earth dam and the alluvial foundation were obtained. The results of these studies show that, by using the new criteria, the settlement and deformation of dams with alluvial foundations can be corrected after instrument readings, and the error in the IS instrument readings can be greatly reduced.

Keywords: earth-fill dam, foundation, settlement, finite difference, MATLAB, curve fitting

Procedia PDF Downloads 180
853 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. The earthquake is a natural hazard of random nature that can cause significant financial damage and casualties. This is a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment via probabilistic and deterministic methods has been done for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a radius of 200 km around the north of Tehran (35.74° N, 51.37° E) to identify the seismic sources and seismicity parameters of the study region. In order to identify the seismic sources, geological maps at the scale of 1:250,000 were used. In this study, we used the Kijko-Sellevoll method (1992) to estimate the seismicity parameters. The maximum likelihood estimation of earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files is extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that, with a specified probability, may occur during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies performed for the project, and applying proper weights to the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2475 years were calculated. The horizontal peak ground acceleration on the seismic bedrock for the 50, 475, 950 and 2475-year return periods is 0.12g, 0.30g, 0.37g and 0.50g, and the vertical peak ground acceleration for the same return periods is 0.08g, 0.21g, 0.27g and 0.36g.
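
The return periods quoted above map onto exceedance probabilities through the usual Poisson occurrence model; as a hedged aside (standard practice rather than anything specific to this paper), the sketch below shows that 475-year and 2475-year return periods correspond to 10% and 2% probabilities of exceedance in a 50-year design life.

```python
import numpy as np

def return_period(design_life_yr, exceedance_prob):
    """Poisson model: T = -t / ln(1 - P)."""
    return -design_life_yr / np.log(1.0 - exceedance_prob)

def prob_of_exceedance(design_life_yr, T):
    return 1.0 - np.exp(-design_life_yr / T)

print(round(return_period(50, 0.10)))                                   # ~475 years
print(round(return_period(50, 0.02)))                                   # ~2475 years
print(f"P(exceedance in 50 yr | T = 950 yr) = {prob_of_exceedance(50, 950):.3f}")
```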

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 57
852 Stability of Pump Station Cavern in Chagrin Shale with Time

Authors: Mohammad Moridzadeh, Mohammad Djavid, Barry Doyle

Abstract:

An assessment of the long-term stability of a cavern in Chagrin shale excavated by the sequential excavation method was performed during and after construction. During the excavation of the cavern, deformations of the rock mass were measured at the surface of the excavation and within the rock mass by surface and deep measurement instruments. Rock deformations measured during construction appeared to result from the as-built excavation sequence, which had potentially disturbed the rock and its behavior. Some additional time-dependent rock deformations were also observed during and after excavation. Several opinions have been expressed to explain this time-dependent deformation, including stress changes induced by excavation, strain softening (or creep) in the beddings with and without clay, and creep of the shaly rock under compressive stresses. In order to analyze and replicate the rock behavior observed during excavation, including current and post-excavation elastic, plastic, and time-dependent deformation, Finite Element Analysis (FEA) was performed. The analysis was also intended to estimate the long-term deformation of the rock mass around the excavation. Rock mass behavior, including time-dependent deformation, was measured by means of rock surface convergence points, MPBXs, extended creep testing on the long anchors, and load history data from load cells attached to several long anchors. Direct creep testing of Chagrin shale was performed on core samples from the wall of the Pump Room. The results of these measurements were used to calibrate the FEA of the excavation. These analyses incorporate time-dependent constitutive modeling of the rock to evaluate the potential long-term movement in the roof, walls, and invert of the cavern. The modeling was performed due to concerns regarding the unanticipated behavior of the rock mass, as well as to forecast the long-term deformation and stability of the rock around the excavation.

Keywords: cavern, Chagrin shale, creep, finite element

Procedia PDF Downloads 342
851 Effects of Ensiled Mulberry Leaves and Sun-Dried Mulberry Fruit Pomace on the Composition of Bacteria in Feces of Finishing Steers

Authors: Yan Li, Qingxiang Meng, Bo Zhou, Zhenming Zhou

Abstract:

The objective of this study was to compare the effects of ensiled mulberry leaves (EML) and sun-dried mulberry fruit pomace (SMFP) on fecal bacterial communities in Simmental crossbred finishing steers fed the following 3 diets: a standard TMR diet (CON), a standard diet containing EML, and a standard diet containing SMFP; the diets had similar protein and energy levels. Bacterial communities in the fecal content were analyzed using Illumina MiSeq sequencing of amplicons of the V4 region of the 16S rRNA gene. Quantitative real-time PCR was used to detect selected bacterial species in the feces. Most of the sequences were assigned to the phyla Firmicutes (56.67%) and Bacteroidetes (35.90%), followed by Proteobacteria (1.86%), Verrucomicrobia (1.80%) and Tenericutes (1.37%). The predominant genera included 5-7N15 (5.91%), CF231 (2.49%), Oscillospira (2.33%), Paludibacter (1.23%) and Akkermansia (1.11%). As for the treatments, no significant differences were observed in Firmicutes (p = 0.28), Bacteroidetes (p = 0.63), Proteobacteria (p = 0.46), Verrucomicrobia (p = 0.17) or Tenericutes (p = 0.75). At the genus level, classified genera with high abundance (more than 0.1%) mainly came from two phyla: Bacteroidetes and Firmicutes. No differences were observed for most genera, including 5-7N15 (p = 0.21), CF231 (p = 0.62), Oscillospira (p = 0.9), Paludibacter (p = 0.33) and Akkermansia (p = 0.37), except that rc4-4 was lower in the CON and SMFP groups compared to the EML group (p = 0.02). Additionally, there were no differences in richness estimates or diversity indices (p > 0.16), and the treatments had no significant effect on most selected bacterial species in the feces (p > 0.06), except that Ruminococcus albus was higher in the EML group (p < 0.01) and Streptococcus bovis was lower in the CON group (p < 0.01). In conclusion, diets supplemented with EML and SMFP have little influence on the fecal bacterial community composition of finishing steers.
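
The richness and diversity indices mentioned above are not specified in detail; as a hedged sketch, the snippet below computes two commonly used summaries (Shannon and Simpson indices) from the phylum-level relative abundances quoted in the abstract, purely for illustration.

```python
import numpy as np

# Relative abundances (%) of the dominant phyla reported above
abundance = np.array([56.67, 35.90, 1.86, 1.80, 1.37])   # Firmicutes, Bacteroidetes, ...
p = abundance / abundance.sum()                           # renormalise to proportions

shannon = -np.sum(p * np.log(p))      # Shannon diversity H'
simpson = 1.0 - np.sum(p ** 2)        # Simpson diversity (1 - D)
print(f"Shannon H' = {shannon:.2f}, Simpson 1-D = {simpson:.2f}")
```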

Keywords: fecal bacteria community composition, sequencing, ensiled mulberry leaves (EML), sun-dried mulberry fruit pomace (SMFP)

Procedia PDF Downloads 309
850 Impact of Drainage Defect on the Railway Track Surface Deflections: A Numerical Investigation

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

The railway transportation network in the UK is over 100 years old and is known as one of the oldest mass transit systems in the world. This aged track network requires frequent closure for maintenance. One of the main reasons for closure is inadequate drainage due to leakage in the buried drainage pipes. The leaking water can cause localised subgrade weakness, which subsequently can lead to major ground/substructure failure. Different condition assessment methods are available to assess the railway substructure. However, the existing condition assessment methods are not able to detect local ground weakness/damage or provide details of the damage (e.g., size and location). To tackle this issue, a hybrid back-analysis technique based on an artificial neural network (ANN) and a genetic algorithm (GA) has been developed to predict the substructure layers' moduli and identify any soil weaknesses. At first, a finite element (FE) model of a railway track section under Falling Weight Deflectometer (FWD) testing was developed and validated against a field trial. Then a drainage pipe and various scenarios of the local defect/soil weakness around the buried pipe, with various geometries and physical properties, were modelled. The impact of the local soil weakness on the track surface deflection was also studied. The FE simulation results were used to generate a database for ANN training, and then a GA was employed as an optimisation tool to optimise and back-calculate the layers' moduli and the soil weakness moduli (the ANN's inputs). The hybrid ANN-GA back-analysis technique is a computationally efficient method with no dependency on seed modulus values. The model can estimate the substructure layer moduli and the presence of any localised foundation weakness.
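
A minimal sketch of the back-calculation step, under heavy assumptions: the placeholder forward model below stands in for the trained ANN surrogate, and a small real-valued genetic algorithm searches for the layer moduli whose predicted deflection bowl best matches the "measured" one. The forward model, search ranges and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_deflections(moduli):
    """Placeholder for the FE/ANN surrogate: maps (layer 1 modulus, layer 2 modulus)
    in MPa to deflections at three sensor offsets (arbitrary units)."""
    e1, e2 = moduli
    return np.array([900 / e1 + 2200 / e2, 400 / e1 + 1500 / e2, 150 / e1 + 900 / e2])

target = np.array([180.0, 45.0])                 # "true" moduli used to fake a measurement
measured = forward_deflections(target)

def fitness(pop):
    return np.array([-np.mean((forward_deflections(p) - measured) ** 2) for p in pop])

bounds = np.array([[50.0, 500.0], [10.0, 150.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for _ in range(150):
    fit = fitness(pop)
    # tournament selection of parents
    pairs = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fit[pairs[:, 0]] > fit[pairs[:, 1]], pairs[:, 0], pairs[:, 1])]
    # blend crossover with a shuffled partner, then Gaussian mutation
    partners = parents[rng.permutation(len(parents))]
    alpha = rng.random((len(parents), 1))
    children = alpha * parents + (1 - alpha) * partners
    children += rng.normal(0.0, 0.02 * (bounds[:, 1] - bounds[:, 0]), children.shape)
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax(fitness(pop))]
print("back-calculated moduli (MPa):", np.round(best, 1), "  target:", target)
```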

Keywords: finite element (FE) model, drainage defect, falling weight deflectometer (FWD), hybrid ANN-GA

Procedia PDF Downloads 143
849 Addressing Public Concerns about Radiation Impacts by Looking Back at Nuclear Accidents Worldwide

Authors: Du Kim, Nelson Baro

Abstract:

According to a report of the International Atomic Energy Agency (IAEA), approximately 437 nuclear power stations are currently in operation around the world in order to meet increasing energy demands. Indeed, nearly a third of the world's energy demands are met through nuclear power, because it is one of the most efficient and long-lasting sources of energy. However, there are also consequences when a major event takes place at a nuclear power station. Over the past years, a few major nuclear accidents have occurred around the world. According to the International Nuclear and Radiological Event Scale (INES), six nuclear accidents are considered high-level (high-risk) events: Fukushima Dai-ichi (Level 7), Chernobyl (Level 7), Three Mile Island (Level 5), Windscale (Level 5), Kyshtym (Level 6) and Chalk River (Level 5). Today, many people still have doubts about using nuclear power, and there is a growing number of people who are against nuclear power after the serious accident that occurred at the Fukushima Dai-ichi nuclear power plant in Japan. In other words, there are public concerns about radiation impacts, which emphasize Linear-No-Threshold (LNT) issues, radiation health effects, radiation protection and social impacts. This paper addresses those keywords by looking back at the history of these major nuclear accidents worldwide, based on INES. The paper concludes that major nuclear accidents are largely preventable, since most of them were caused by human error; in other words, the human factor has played a huge role in the malfunctions and occurrence of most of those events. The correct handling of a crisis is determined by having a good radiation protection program in place; this is what has a large impact on society and determines how acceptable nuclear power is to the public.

Keywords: linear-no-threshold (LNT) issues, radiation health effects, radiation protection, social impacts

Procedia PDF Downloads 236
848 Development of Scenarios for Sustainable Next Generation Nuclear System

Authors: Muhammad Minhaj Khan, Jaemin Lee, Suhong Lee, Jinyoung Chung, Johoo Whang

Abstract:

The Republic of Korea has been facing a storage crisis from nuclear waste generation, as At-Reactor (AR) temporary storage sites are about to reach saturation. Since the country is densely populated, with 491.78 persons per square kilometer, the construction of a high-level waste repository will not be a feasible option. In order to tackle the waste generation problem, which is increasing at rates of 350 tHM/yr and 380 tHM/yr for the 20 PWRs and 4 PHWRs respectively, the study focuses strongly on the advancement of current nuclear power plants to GEN-IV sustainable and ecological nuclear systems by burning TRUs (Pu, MAs). First, calculations were made to estimate the generation of SNF, including Pu and MA, from PWR and PHWR NPPs by using the IAEA Nuclear Fuel Cycle Simulation System (NFCSS) code for the years 2016, 2030 (including the saturation period of each site from 2024 to 2028), 2089 and 2109, as the number of NPPs will increase due to the high import cost of non-nuclear energy sources. Secondly, in order to produce environmentally sustainable nuclear energy systems, four scenarios for burning the plutonium and MAs are analyzed, concentrating on burning MA only, or MA and Pu together, by utilizing SFR, LFR and KALIMER-600 burner reactors after recycling the spent oxide fuel from PWRs through the pyro-processing technology developed by the Korea Atomic Energy Research Institute (KAERI), which shows promising and sustainable future benefits by minimizing HLW generation with regard to waste amount, decay heat, and activity. Finally, concentrating on the front-end and back-end fuel cycles for the open and closed fuel cycles of the PWR and Pyro-SFR respectively, an overall assessment has been made which evaluates the quantitative as well as economic competitiveness of SFR metallic fuel against the PWR once-through nuclear fuel cycle.

Keywords: GEN IV nuclear fuel cycle, nuclear waste, waste sustainability, transmutation

Procedia PDF Downloads 346
847 Impact of a Structured Antimicrobial Stewardship Program in a North-East Italian Hospital

Authors: Antonio Marco Miotti, Antonella Ruffatto, Giampaola Basso, Antonio Madia, Giulia Zavatta, Emanuela Salvatico, Emanuela Zilli

Abstract:

A National Action Plan to fight antimicrobial resistance was launched in Italy in 2017. In order to reduce inappropriate exposure to antibiotics and infections from multi-drug resistant bacteria, it is essential to set up a structured system of surveillance and monitoring of the implementation of the National Action Plan standards, including antimicrobial consumption, with a special focus on quinolones, third-generation cephalosporins and carbapenems. A quantitative estimate of antibiotic consumption (defined daily dose - DDD - consumption per 100 days of hospitalization) has been provided by the Pharmaceutical Service to the Hospital of Cittadella, ULSS 6 Euganea – Health Trust (District of Padua) for the years 2019 (before the pandemic), 2020 and 2021 for all classes of antibiotics. Multidisciplinary meetings have been organized monthly by the local Antimicrobial Stewardship Group. Between 2019 and 2021, an increase in the consumption of carbapenems in the Intensive Care Unit (from 12.2 to 18.2 DDD, +49.2%) and a decrease in Medical wards (from 5.3 to 2.6 DDD, -50.9%) were reported; a decrease in the consumption of quinolones in the Intensive Care Unit (from 17.2 to 10.8 DDD, -37.2%), Medical wards (from 10.5 to 6.6 DDD, -37.1%) and Surgical wards (from 10.2 to 9.3 DDD, -8.8%) was highlighted; and an increase in the consumption of third-generation cephalosporins in Medical wards (from 18.1 to 22.6 DDD, +24.1%) was reported. Finally, after an increase in the consumption of macrolides between 2019 and 2020, a decrease was reported in 2021 in the Intensive Care Unit (DDD: 8.0 in 2019, 18.0 in 2020, 6.4 in 2021) and Medical wards (DDD: 9.0 in 2019, 13.7 in 2020, 10.9 in 2021). Constant monitoring of antimicrobial consumption and timely identification of warning situations that may need a specific intervention are the cornerstones of Antimicrobial Stewardship programs, together with analysing data on bacterial resistance rates and infections from multi-drug resistant bacteria.
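
The DDD-per-100-days indicator used above follows from a simple ratio; the sketch below shows the arithmetic with invented figures (the 2 g defined daily dose shown is the standard WHO value for parenteral ceftriaxone, used here only as an example).

```python
# Antibiotic consumption expressed as defined daily doses per 100 days of hospitalisation
grams_dispensed = 750.0    # hypothetical grams of ceftriaxone dispensed on a ward in one year
who_ddd_grams = 2.0        # WHO defined daily dose for parenteral ceftriaxone (g)
patient_days = 12_500      # hypothetical days of hospitalisation on that ward, same period

ddd_per_100 = (grams_dispensed / who_ddd_grams) / patient_days * 100
print(f"{ddd_per_100:.1f} DDD per 100 days of hospitalisation")
```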

Keywords: carbapenems, quinolones, antimicrobial, stewardship

Procedia PDF Downloads 144
846 Supply, Trade-offs, and Synergies Estimation for Regulating Ecosystem Services of a Local Forest

Authors: Jang-Hwan Jo

Abstract:

The supply management of the ecosystem services of local forests is an essential issue, as it is linked to the ecological welfare of local residents. This study aims to estimate the supply, trade-offs, and synergies of local forest regulating ecosystem services using a land cover classification map (LCCM) and a forest types map (FTM). Rigorous literature reviews and an expert Delphi analysis were conducted using the detailed variables of the 1:5,000 LCCM and FTM. The land-use scoring method and Getis-Ord Gi* analysis were applied to the detailed variables to propose a method for estimating the supply, trade-offs, and synergies of the local forest regulating ecosystem services. The analysis revealed that the rank order (1st to 5th) of the supply of regulating ecosystem services was erosion prevention, air quality regulation, heat island mitigation, water quality regulation, and carbon storage. When analyzing the correlations between the defined services for the entire city, almost all services showed a synergistic effect. However, when analyzing locally, trade-off effects (heat island mitigation – air quality regulation, water quality regulation – air quality regulation) appeared in the eastern and northwestern forest areas. This suggests the need to consider not only the synergies and trade-offs between specific ecosystem services across the entire forest but also the synergies and trade-offs of local areas when managing the regulating ecosystem services of local forests. The study result can provide primary data for stakeholders to determine the initial conditions of the planning stage when discussing the establishment of policies related to adjusting the supply of regulating ecosystem services of forests with limited access. Moreover, the study result can also help refine the estimation of the supply of regulating ecosystem services as other forms of data become available.

Keywords: ecosystem service, getis ord gi* analysis, land use scoring method, regional forest, regulating service, synergies, trade-offs

Procedia PDF Downloads 73
845 Improving Urban Mobility: Analyzing Impacts of Connected and Automated Vehicles on Traffic and Emissions

Authors: Saad Roustom, Hajo Ribberink

Abstract:

In most cities in the world, traffic has increased strongly over the last decades, causing high levels of congestion and deteriorating inner-city air quality. This study analyzes the impact of connected and automated vehicles (CAVs) on traffic performance and greenhouse gas (GHG) emissions under different CAV penetration rates in mixed fleet environments of CAVs and driver-operated vehicles (DOVs) and under three different traffic demand levels. Utilizing meso-scale traffic simulations of the City of Ottawa, Canada, the research evaluates the traffic performance of three distinct CAV driving behaviors—Cautious, Normal, and Aggressive—at penetration rates of 25%, 50%, 75%, and 100%, across three different traffic demand levels. The study employs advanced correlation models to estimate GHG emissions. The results reveal that Aggressive and Normal CAVs generally reduce traffic congestion and GHG emissions, with their benefits being more pronounced at higher penetration rates (50% to 100%) and elevated traffic demand levels. On the other hand, Cautious CAVs exhibit an increase in both traffic congestion and GHG emissions. However, the results also show deteriorated traffic flow conditions when introducing a 25% penetration rate of any type of CAV. Aggressive CAVs outperform all other driving behaviors at improving traffic flow conditions and reducing GHG emissions. The findings of this study highlight the crucial role CAVs can play in enhancing urban traffic performance and mitigating the adverse impact of transportation on the environment. This research advocates for the adoption of effective CAV-related policies by regulatory bodies to optimize traffic flow and reduce GHG emissions. By providing insights into the impact of CAVs, this study aims to inform strategic decision-making and stimulate the development of sustainable urban mobility solutions.

Keywords: connected and automated vehicles, congestion, GHG emissions, mixed fleet environment, traffic performance, traffic simulations

Procedia PDF Downloads 79
844 A Quantitative Model for Replacement of Medical Equipment Based on Technical and Environmental Factors

Authors: Ghadeer Mohammad Said El-Sheikh, Samer Mohamad Shalhoob

Abstract:

The operational state of medical equipment is a valid reflection of a healthcare organization's performance, as such equipment contributes greatly to the quality of healthcare services on several levels, and quality improvement has become an intrinsic part of the discourse and activities of healthcare services. In healthcare organizations, clinical and biomedical engineering departments play an essential role in maintaining the safety and efficiency of such equipment. One of the most challenging topics when it comes to such sophisticated equipment is its lifespan, since many factors impact this characteristic of medical equipment throughout its life cycle. So far, many attempts have been made to address this issue, but most of the approaches are somewhat arbitrary, and one criticism of existing approaches to estimating and understanding the lifetime of medical equipment is that they do not ask which environmental factors affect this critical characteristic. To address this shortcoming, our study adds the dimension of environmental factors to the standard technical factors taken into consideration by a clinical engineer in the decision-making process in case of medical equipment failure. The investigations and studies carried out to support the decision-making process of clinical engineers and to assess the lifespan of healthcare equipment in Lebanese society have depended highly on the identification of technical criteria that impact the lifespan of medical equipment, while the relevant environmental factors did not receive proper attention. The objective of our study is therefore to introduce a new, well-designed plan for evaluating medical equipment along two dimensions. According to this approach, the equipment that should be replaced or repaired will be classified using a systematic method that takes into account two essential sets of criteria: the standard identified technical criteria and the added environmental criteria.

Keywords: technical, environmental, healthcare, characteristic of medical equipment

Procedia PDF Downloads 147
843 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder

Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa

Abstract:

Earthquakes are natural hazards that can trigger the most dangerous hazard, a tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami that struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia and Singapore, and more than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but this hazard can be mitigated by earthquake forecasting. Early preparation is the key factor in reducing its damages and consequences. We aim to investigate the pattern of earthquakes quantitatively, so that the trend can be known. We study the earthquakes that have occurred in Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity; more than a thousand events occur in a year, because Andalas Island lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed for mitigation action; thus, a tsunami forecasting method is presented in this work. Neural networks have been used widely in much research to estimate earthquakes, and it is believed that by using the backpropagation method, earthquakes can be predicted. At first, an ANN is trained to predict the tsunami of 26 December 2004 by using earthquake data preceding it. Then, after obtaining the trained ANN, we apply it to predict the next earthquake. Not all earthquakes will trigger a tsunami; there are certain characteristics of an earthquake that can cause one. A wrong decision can cause other problems in society, so we need a method to reduce the possibility of a wrong decision. Fuzzy TOPSIS is a statistical method that is widely used for decision support with respect to given parameters, and it can make the best decision on whether the earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with fuzzy TOPSIS to decide whether the earthquake triggers a tsunami wave or not. The neural network model is capable of capturing non-linear relationships, and fuzzy TOPSIS can determine the best decision better than other statistical methods in tsunami prediction.
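
As a hedged sketch of the decision step, the snippet below runs a crisp (non-fuzzy) TOPSIS ranking over invented candidate earthquakes and criteria; the fuzzy variant used in the paper additionally represents ratings and weights as fuzzy numbers, but the ideal-solution logic is the same.

```python
import numpy as np

# Rows: candidate earthquakes flagged by the ANN; columns: decision criteria
# (magnitude, focal depth in km, epicentral distance to the coast in km) -- invented values
X = np.array([[8.9,  30.0,  90.0],
              [7.1,  80.0, 250.0],
              [6.4, 150.0, 400.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])   # larger magnitude raises tsunami risk; depth and distance lower it

R = X / np.linalg.norm(X, axis=0)          # vector normalisation
V = R * weights                            # weighted normalised decision matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)        # 1 = most tsunami-prone, 0 = least
print("closeness scores:", np.round(closeness, 3))
```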

Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami

Procedia PDF Downloads 479
842 Disease Trajectories in Relation to Poor Sleep Health in the UK Biobank

Authors: Jiajia Peng, Jianqing Qiu, Jianjun Ren, Yu Zhao

Abstract:

Background: Insufficient sleep has come into focus as a public health epidemic. However, a comprehensive analysis of the disease trajectories associated with unhealthy sleep habits is still lacking. Objective: This study sought to comprehensively clarify the disease trajectories in relation to an overall poor sleep pattern and to individual unhealthy sleep behaviors separately. Methods: 410,682 participants with available information on sleep behaviors were collected from the UK Biobank at the baseline visit (2006-2010). These participants were classified as having high or low risk for each sleep behavior and were followed from 2006 to 2020 to identify the increased risks of diseases. We used Cox regression to estimate the associations of high-risk sleep behaviors with elevated risks of diseases, and further established disease trajectories using the significant diseases. The low-risk sleep behaviors were defined as the reference. Thereafter, we also examined the trajectory of diseases linked with the overall poor sleep pattern by combining all of these unhealthy sleep behaviors. To visualize the disease trajectories, network analysis was used to present them. Results: During a median follow-up of 12.2 years, we noted 12 medical conditions in relation to unhealthy sleep behaviors and the overall poor sleep pattern among 410,682 participants with a median age of 58.0 years. The majority of participants had unhealthy sleep behaviors; in particular, 75.62% had frequent sleeplessness and 72.12% had abnormal sleep durations. Besides, a total of 16,032 individuals with an overall poor sleep pattern were identified. In general, three major disease clusters were associated with overall poor sleep status and unhealthy sleep behaviors according to the disease trajectory and network analyses, mainly in the digestive, musculoskeletal and connective tissue, and cardiometabolic systems. Of note, two circulatory disease pairs (I25→I20 and I48→I50) showed the highest risks following these unhealthy sleep habits. Additionally, significant differences in disease trajectories were observed in relation to sex and sleep medication among individuals with poor sleep status. Conclusions: We identified the major disease clusters and high-risk diseases following participants with overall poor sleep health and unhealthy sleep behaviors, respectively. This may suggest the need to investigate potential interventions targeting these key pathways.
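
A minimal sketch of the Cox regression step, assuming the lifelines library and a toy long-format table (the rows below are invented, not UK Biobank data): the exponentiated coefficient of the sleep exposure is the hazard ratio used to flag significantly elevated disease risks.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up table: time to event or censoring (years), event indicator,
# and a binary high-risk sleep exposure
df = pd.DataFrame({
    "time":       [8.2, 12.0, 5.4, 12.0, 9.7, 3.1, 11.5, 7.8],
    "event":      [  1,    0,   1,    0,   0,   1,    0,   1],
    "poor_sleep": [  1,    0,   1,    1,   1,   0,    0,   1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()    # exp(coef) of poor_sleep is the hazard ratio
```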

Keywords: sleep, poor sleep, unhealthy sleep behaviors, disease trajectory, UK Biobank

Procedia PDF Downloads 76
841 Simulation Study on Effects of Surfactant Properties on Surfactant Enhanced Oil Recovery from Fractured Reservoirs

Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsaeter

Abstract:

One objective of this work is to analyze the effects of surfactant properties (viscosity, concentration, and adsorption) on surfactant enhanced oil recovery at laboratory scale. The other objective is to obtain the functional relationships between surfactant properties and the ultimate oil recovery and oil recovery rate. A core is cut into two parts from the middle to imitate the matrix with a horizontal fracture. An injector and a producer are located at the left and right sides of the fracture, respectively. The middle slice of the core is used as the model in this paper; its size is 4 cm x 0.1 cm x 4.1 cm, and the aperture of the fracture in the middle is 0.1 cm. The original properties of the matrix, brine, and oil in the base case are from the Ekofisk Field. The properties of the surfactant are from the literature. Eclipse is used as the simulator. The results are as follows: 1) The viscosity of the surfactant solution has a positive linear relationship with the surfactant oil recovery time, and the relationship between viscosity and oil production rate is an inverse function. The viscosity of the surfactant solution has no obvious effect on the ultimate oil recovery. Since most surfactants have no large effect on the viscosity of brine, the viscosity of the surfactant solution is not a key parameter in surfactant screening for surfactant flooding in fractured reservoirs. 2) An increase in surfactant concentration results in a decrease in the oil recovery rate and an increase in the ultimate oil recovery. However, no functions could be found to describe these relationships. An economic study should be conducted because of the prices of surfactant and oil. 3) In the study of surfactant adsorption, it is assumed that the matrix wettability is changed to water-wet when surfactant adsorption reaches its maximum in all cases, and the ratio of surfactant adsorption to surfactant concentration (Cads/Csurf) is used to estimate the functional relationship. The results show that the relationship between ultimate oil recovery and Cads/Csurf is a logarithmic function, while the oil production rate has a positive linear relationship with exp(Cads/Csurf). The work here could be used as a reference for surfactant screening in surfactant enhanced oil recovery from fractured reservoirs, and the functional relationships between surfactant properties and the oil recovery rate and ultimate oil recovery help to improve upscaling methods.

Keywords: fractured reservoirs, surfactant adsorption, surfactant concentration, surfactant EOR, surfactant viscosity

Procedia PDF Downloads 164
840 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter

Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar

Abstract:

Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solid removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes, the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper, a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h⁻¹. The results were analysed according to the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and variance σ² are two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer looks better at HLRs of 3.8 to 7.6 m h⁻¹ for limestone and 3.8 m h⁻¹ for white dolomite. At these HLRs, the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than in the white dolomite: all the tracer took 8 minutes to leave the white dolomite at 3.8 m h⁻¹, whereas the same amount of tracer took 10 minutes to leave the limestone at the same HLR. In conclusion, determining the optimal hydraulic loading rate, which achieves a better influent distribution over the filtration system, helps to identify the applicability of a material as a filter medium. Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h⁻¹.
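
A hedged sketch of how the RTD moments are obtained from the tracer response (the outlet concentrations below are invented, not the measured data): the exit-age distribution E(t) is the normalised concentration curve, F(t) is its running integral, and the MRT and variance are its first moment and second central moment.

```python
import numpy as np

# Hypothetical tracer response at the filter outlet: time (min) vs. dye concentration (mg/L)
t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
c = np.array([0.0, 0.2, 1.1, 2.4, 3.0, 2.6, 1.7, 0.9, 0.4, 0.1, 0.0])

E = c / np.trapz(c, t)                                                     # exit-age distribution E(t)
F = np.array([np.trapz(E[: i + 1], t[: i + 1]) for i in range(len(t))])    # cumulative F(t)

mrt = np.trapz(t * E, t)                 # mean residence time (first moment)
var = np.trapz((t - mrt) ** 2 * E, t)    # variance (second central moment)
print(f"MRT = {mrt:.2f} min, variance = {var:.2f} min^2")
print("F(t) =", np.round(F, 2))
```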

Keywords: filter media, hydraulic loading rate, residence time distribution, tracer

Procedia PDF Downloads 267
839 Hedonic Price Analysis of Consumer Preference for Musa spp in Northern Nigeria

Authors: Yakubu Suleiman, S. A. Musa

Abstract:

The research was conducted to determine the physical characteristics of banana fruits that influence consumer preferences for the fruit in Northern Nigeria. Socio-economic characteristics of the respondents were also identified. Simple descriptive statistics and a hedonic price model were used to analyze the socio-economic and consumer preference data, respectively, collected with the aid of 1,000 structured questionnaires. The results revealed an R² value of 0.633, meaning that 63.3% of the variation in banana price was brought about by the explanatory variables included in the model, namely: colour, size, degree of ripeness, softness, surface blemish, cleanliness of the fruits, weight, length, and cluster size of the fruits. The remaining 36.7% could be attributed to the error term or random disturbance in the model. It could also be seen from the calculated results that the intercept was 1886.5 and was statistically significant (P < 0.01), meaning that about N1,886.5 worth of banana fruits could be bought by consumers without considering the banana variables included in the model. Moreover, consumers showed significant preferences for the colour, size, degree of ripeness, softness, weight, length and cluster size of banana fruits, which were significant at P < 0.01, P < 0.05, or P < 0.1. The results also show that consumers did not show significant preferences for surface blemish, cleanliness, or variety of the banana fruit, as all of these were non-significant and carried negative signs. Based on the findings of the research, it is recommended that plant breeders and research institutes concentrate on the production of banana fruits that have the physical characteristics found to be statistically significant, such as cluster size, degree of ripeness, softness, length, size, and skin colour.
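
A minimal sketch of a hedonic price regression, assuming invented survey rows (not the study's data): the price paid is regressed on the attribute scores, and each coefficient is read as the implicit price of that attribute.

```python
import numpy as np

# Hypothetical rows: [colour, size, ripeness, softness scores (1-5), cluster size] and price (Naira)
X = np.array([[4, 3, 4, 3, 6],
              [2, 4, 3, 2, 8],
              [5, 5, 5, 4, 10],
              [3, 2, 2, 2, 5],
              [4, 4, 3, 3, 7],
              [1, 3, 2, 1, 6],
              [5, 4, 4, 4, 9],
              [2, 2, 3, 2, 4]], dtype=float)
price = np.array([2300, 2450, 2950, 2050, 2500, 2100, 2800, 2000], dtype=float)

A = np.column_stack([np.ones(len(X)), X])        # add the intercept column
coef, *_ = np.linalg.lstsq(A, price, rcond=None)

fitted = A @ coef
r2 = 1 - np.sum((price - fitted) ** 2) / np.sum((price - price.mean()) ** 2)
print("intercept and implicit attribute prices:", np.round(coef, 1))
print("R^2 =", round(r2, 3))
```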

Keywords: analysis, consumers, preference, variables

Procedia PDF Downloads 331
838 Simulation-Based Validation of Safe Human-Robot-Collaboration

Authors: Titanilla Komenda

Abstract:

Human-machine collaboration is defined as a direct interaction between humans and machines to fulfil specific tasks. These so-called collaborative machines are used without fencing and interact with humans in predefined workspaces. Even though human-machine collaboration enables flexible adaptation to variable degrees of freedom, industrial applications are rarely found. The reasons for this are not a lack of technical progress but rather limitations in the planning processes that must ensure safety for operators. Until now, humans and machines were mainly considered separately in the planning process, focusing on ergonomics and system performance, respectively. Within human-machine collaboration, those aspects must not be seen in isolation from each other but rather need to be analysed in interaction. Furthermore, a simulation model is needed that can validate the system performance and ensure the safety of the operator at any given time. Following on from this, a holistic simulation model is presented, enabling a simulative representation of collaborative tasks, including both humans and machines. The presented model not only includes geometry and motion models of the interacting humans and machines but also a numerical behaviour model of humans as well as a Boolean probabilistic sensor model. With this, error scenarios can be simulated by validating system behaviour in unplanned situations. As these models can be defined on the basis of Failure Mode and Effects Analysis as well as probabilities of errors, the implementation in a collaborative model is discussed and evaluated regarding limitations and simulation times. The functionality of the model is demonstrated on industrial applications by comparing simulation results with video data. The analysis shows the impact of considering human factors in the planning process, in contrast to only meeting system performance. In this sense, an optimisation function is presented that addresses the trade-off between human and machine factors and aids in a successful and safe realisation of collaborative scenarios.

Keywords: human-machine-system, human-robot-collaboration, safety, simulation

Procedia PDF Downloads 355
837 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks

Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee

Abstract:

Anomalies such as leakages and bursts in water, hydraulic, or petrochemical pipeline networks have significant implications for economic conditions and the environment. In order to ensure that pipeline systems are reliable, they must be efficiently controlled. Wireless Sensor Networks (WSNs) have become a powerful means of monitoring critical infrastructure such as water, oil and gas pipelines. The loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to savings of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks. These methodologies include, among others, the use of acoustic sensors, measurements, statistical analysis of abrupt changes, etc. The issue of leak quantification is to estimate, given some observations about the network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies, since their output is not so reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. The pressure data were collected with acoustic sensors located at various node points spaced a predetermined distance apart. Using the correlation difference, we were able to determine the leakage point, which was introduced locally at a predetermined point between two consecutive nodes and caused a substantial pressure difference in the pipeline network. After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where we introduced the local leakage using the correlation difference model we developed.
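
A hedged sketch of the correlation step on synthetic signals (the sampling rate, wave speed, sensor spacing and delay are all invented): the lag of the cross-correlation peak estimates the arrival-time difference between two sensors, from which the leak position between them follows.

```python
import numpy as np

fs = 2000.0   # assumed sensor sampling rate (Hz)
c = 1200.0    # assumed propagation speed of the leak noise (m/s)
D = 100.0     # assumed spacing between the two sensor nodes (m)

# Synthetic leak noise: sensor 1 receives it 20 ms before sensor 2 (leak nearer sensor 1)
rng = np.random.default_rng(0)
leak, delay = rng.normal(size=4000), 40
s1 = np.concatenate([leak, np.zeros(delay)]) + 0.2 * rng.normal(size=4040)
s2 = np.concatenate([np.zeros(delay), leak]) + 0.2 * rng.normal(size=4040)

# Cross-correlate the de-noised signals; the peak lag estimates the arrival-time difference t1 - t2
xcorr = np.correlate(s1 - s1.mean(), s2 - s2.mean(), mode="full")
lags = np.arange(-(s2.size - 1), s1.size)
tau = lags[np.argmax(xcorr)] / fs          # seconds; negative here since sensor 1 heard the leak first

d1 = (D + c * tau) / 2.0                   # distance of the leak from sensor 1
print(f"t1 - t2 = {tau * 1000:.1f} ms  ->  leak ≈ {d1:.1f} m from sensor 1")
```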

Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)

Procedia PDF Downloads 89
836 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward back-propagation network. The results obtained revealed, generally, that the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both the training and validation phases compared to the LM and Br algorithms, though learning may not be fully consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, on average, the model performance efficiencies using the coefficient of efficiency (CE) statistic were Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANNs for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering the overall time expenditure and quality of the forecast as well as the mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions for the overall network forecast performance.
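
The CE statistic quoted above is the Nash-Sutcliffe coefficient of efficiency; as a hedged sketch with invented flows (not the study's data), it is computed as one minus the ratio of the squared forecast errors to the variance of the observations.

```python
import numpy as np

def coefficient_of_efficiency(observed, simulated):
    """Nash-Sutcliffe CE: 1 = perfect fit, 0 = no better than the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative daily streamflow (m^3/s): observations vs. one network's 1-day-ahead forecasts
obs = [12.1, 15.4, 30.2, 58.7, 41.3, 22.0, 18.5]
sim = [11.0, 16.8, 27.5, 55.1, 44.0, 24.2, 17.9]
print(f"CE = {coefficient_of_efficiency(obs, sim):.3f}")
```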

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 144
835 Statistical and Artificial Neural Network Modeling of Suspended Sediment in the Mina River Watershed at the Wadi El-Abtal Gauging Station (Northern Algeria)

Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini

Abstract:

Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions of the world, as is the case in the Maghreb and more particularly in Algeria. It continues to take on disturbing proportions in Northern Algeria due to the variability of rainfall in time and space and the constant deterioration of vegetation. Its prediction is essential in order to identify its intensity and define the necessary actions for its reduction. The purpose of this study is to analyze the suspended sediment concentration data measured at the Wadi El-Abtal hydrometric station. It also aims to find and highlight regressive power relationships that can explain the suspended solid discharge in terms of the measured liquid discharge. The study strives to find artificial neural network models linking the flow, month, and precipitation parameters with the solid discharge. The results obtained show that the power function of the sediment transport rating curve and the artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at the Wadi El-Abtal hydrometric station. They made it possible to identify, in a fairly conclusive manner, the neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate (interpolate and extrapolate) the daily solid discharges even beyond the period of observation of solid discharges (1985/86 to 1999/00), given the availability of the average daily liquid flows and daily precipitation since 1953/1954.
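
A minimal sketch of the power-function rating curve mentioned above, fitted to invented paired records (not the station's data): regressing log Qs on log Q gives the exponent and prefactor of Qs = a·Q^b, which can then interpolate or extrapolate the solid discharge for days when only the liquid discharge is known.

```python
import numpy as np

# Hypothetical paired daily records: liquid discharge Q (m^3/s) and solid discharge Qs (kg/s)
Q = np.array([1.2, 3.5, 8.0, 15.0, 42.0, 110.0])
Qs = np.array([0.6, 4.1, 18.0, 55.0, 380.0, 2100.0])

b, log_a = np.polyfit(np.log10(Q), np.log10(Qs), 1)   # fit log Qs = log a + b log Q
a = 10 ** log_a
print(f"rating curve: Qs ≈ {a:.3f} * Q^{b:.2f}")

# Estimate the solid discharge for a day where only the liquid discharge was measured
print("estimated Qs at Q = 60 m^3/s:", round(a * 60 ** b, 1), "kg/s")
```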

Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria

Procedia PDF Downloads 92
834 Combining Ability for Maize Grain Yield and Yield Components for Resistance to Striga hermonthica (Del.) Benth. in the Southern Guinea Savannah of Nigeria

Authors: Terkimbi Vange, Obed Abimiku, Lateef Lekan Bello, Lucky Omoigui

Abstract:

In 2014 and 2015, eight maize inbred lines resistant to Striga hermonthica (Del.) Benth. were crossed in an 8 x 8 half diallel (Griffing Method II, Model 1). The eight parent inbred lines were planted out in a Randomized Complete Block Design (RCBD) with three replications at two different Striga-infested environments (Lafia and Makurdi) during the late cropping season. The objectives were to determine the combining ability of the Striga-resistant maize inbred lines and to identify suitable inbreds for hybrid development. The lines were used to estimate general combining ability (GCA) and specific combining ability (SCA) effects for Striga-related parameters such as Striga shoot counts, Striga damage rating (SDR), plant height, grain yield, and other agronomic traits. The combined ANOVA revealed that mean squares were highly significant for all traits except Striga damage rating (SDR1) at 8 WAS and Striga emergence count (STECOI) at 8 WAS. Mean squares for SCA were significantly low for all traits. TZSTR190 was the highest yielding parent, and TZSTR166xTZSTR190 was the highest yielding hybrid (cross). Parents TZSTR166, TZEI188, TZSTR190, and TZSTR193 showed significant (p < 0.05) positive GCA effects for grain yield, while the rest had negative GCA effects for grain yield; these four parents could therefore be used to initiate hybrid development. The TZSTR166xTZSTR190 cross was the best specific combiner, followed by TZEI188xTZSTR193, TZEI80xTZSTR193, and TZSTR190xTZSTR193. TZSTR166xTZSTR190 and TZSTR190xTZSTR193 had the highest SCA effects. TZEI80 and TZSTR190 also manifested high positive SCA effects with TZSTR166, indicating that these two inbreds combined better with TZSTR166.
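
A simplified way to think about GCA and SCA from a half-diallel table of cross means is sketched below: each parent's GCA is taken as the deviation of its array mean from the grand mean, and SCA as the residual of a cross after removing both parental GCAs. This is an illustrative deviation-based decomposition on made-up data, not Griffing's exact Method II least-squares estimators used in the study.

```python
import numpy as np

def gca_sca(cross_means):
    """Deviation-based GCA/SCA decomposition of a symmetric p x p table
    of cross means (parents on the diagonal, F1 crosses off-diagonal)."""
    x = np.asarray(cross_means, dtype=float)
    grand = x.mean()
    gca = x.mean(axis=1) - grand                    # parental array deviations
    sca = x - grand - gca[:, None] - gca[None, :]   # cross-specific residuals
    return gca, sca

# Hypothetical grain-yield means (t/ha) for a 4 x 4 half diallel -- not study data.
yields = np.array([
    [2.1, 3.0, 2.8, 3.4],
    [3.0, 2.4, 3.1, 3.6],
    [2.8, 3.1, 2.2, 3.2],
    [3.4, 3.6, 3.2, 2.9],
])
gca, sca = gca_sca(yields)
print("GCA effects:", np.round(gca, 3))
print("SCA of cross 1x4:", round(sca[0, 3], 3))
```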

Keywords: combining ability, Striga hermonthica, resistance, grain yield

Procedia PDF Downloads 233
833 Correlation between Cephalometric Measurements and Visual Perception of Facial Profile in Skeletal Type II Patients

Authors: Choki, Supatchai Boonpratham, Suwannee Luppanapornlarp

Abstract:

The objective of this study was to find a correlation between cephalometric measurements and the visual perception of the facial profile in skeletal Type II patients. In this study, 250 lateral cephalograms of female patients aged 20 to 22 years were analyzed. The profile outlines of all the samples were hand traced and transformed into silhouettes by the principal investigator. Profile ratings were done by 9 orthodontists on a Visual Analogue Scale from one to ten (increasing level of convexity). Thirty-seven hard tissue and soft tissue cephalometric measurements were analyzed by the principal investigator, and all measurements were repeated after a 2-week interval for error assessment. Finally, the rankings of visual perception were correlated with the cephalometric measurements using the Spearman correlation coefficient (P < 0.05). The results show that an increase in facial convexity was correlated with higher values of ANB (A point, nasion and B point), AF-BF (distance from A point to B point in mm), L1-NB (distance from lower incisor to the NB line in mm), anterior maxillary alveolar height, posterior maxillary alveolar height, overjet, H angle (hard tissue), H angle (soft tissue), and lower lip to E plane (absolute correlation values from 0.277 to 0.711). In contrast, an increase in facial convexity was correlated with lower values of Pg to N perpendicular and Pg to NB (mm) (absolute correlation values -0.302 and -0.294, respectively). Among the soft tissue measurements, the H angles had a higher correlation with visual perception than the facial contour angle, nasolabial angle, and lower lip to E plane. In conclusion, the findings of this study indicate that the correlation of cephalometric measurements with visual perception was weaker than expected: only 29% of the cephalometric measurements had a significant correlation with visual perception. Therefore, diagnosis based solely on cephalometric analysis can result in failure to meet the patient's esthetic expectations.
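
For readers unfamiliar with the statistic, the sketch below shows how a Spearman rank correlation between profile ratings and a single cephalometric measurement would be computed. The rating and ANB values are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical mean VAS convexity ratings and ANB angles (degrees) for 8 profiles.
vas_rating = np.array([2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 8.0, 9.0])
anb_angle = np.array([2.1, 3.0, 3.8, 4.5, 5.2, 6.0, 6.8, 7.5])

# Spearman correlation works on the ranks, so it captures monotonic association.
rho, p_value = spearmanr(vas_rating, anb_angle)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```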

Keywords: cephalometric measurements, facial profile, skeletal type II, visual perception

Procedia PDF Downloads 127
832 Numerical Simulation of Convective and Transport Processes in the Nocturnal Atmospheric Surface Layer

Authors: K. R. Sreenivas, Shaurya Kaushal

Abstract:

After sunset, under calm and clear-sky nocturnal conditions, the aerosol-laden air layer near the surface cools through radiative exchange with the upper atmosphere. Due to this cooling, the surface air-layer temperature can fall 2-6 °C below the ground-surface temperature. This unstable convection layer is capped, on top, by a stable inversion boundary layer. Radiative divergence, along with convection within the surface layer, governs the vertical transport of heat and moisture. The microphysics of this layer has implications for the occurrence and growth of the fog layer. This configuration, featuring a convective mixed layer beneath a stably stratified inversion layer, is a classic case of penetrative convection. In this study, we conduct numerical simulations of penetrative convection within the nocturnal atmospheric surface layer and elucidate its relevance to the dynamics of fog layers. We employ field and laboratory measurements of aerosol number density to model the strength of the radiative cooling. Our analysis encompasses horizontally averaged vertical profiles of temperature, density, and heat flux. The energetic incursion of air from the mixed layer into the stable inversion layer across the interface results in entrainment and growth of the mixed layer, the modeling of which is the key focus of our investigation. We ascertain the appropriate length scale to employ in the Richardson number correlation, which allows us to estimate the entrainment rate and model the growth of the mixed layer. Our analysis of the mixed layer and the entrainment zone shows close agreement with previously reported laboratory experiments on penetrative convection. Additionally, we demonstrate how aerosol number density influences the growth or decay of the mixed layer. Furthermore, our study suggests that the presence of fog near the ground surface can induce extensive vertical mixing, a phenomenon observed in field experiments.
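
A common way to express the entrainment closure described above is w_e/w_* = A/Ri_*, where w_* is the convective velocity scale and Ri_* a bulk Richardson number across the interface. The sketch below integrates the resulting mixed-layer growth with assumed values for the surface heat flux, inversion jump, and the constant A; none of these numbers come from the study.

```python
import numpy as np

# Assumed parameters (illustrative, not from the study).
g, theta0 = 9.81, 290.0       # gravity (m/s^2), reference potential temperature (K)
Q0 = 0.01                     # kinematic surface heat flux (K m/s)
dtheta = 1.0                  # potential-temperature jump across the inversion (K)
A = 0.2                       # entrainment constant in w_e / w_* = A / Ri_*
dt, t_end = 60.0, 6 * 3600.0  # time step and total integration time (s)

h = 20.0                      # initial mixed-layer depth (m)
for _ in np.arange(0.0, t_end, dt):
    w_star = (g / theta0 * Q0 * h) ** (1.0 / 3.0)   # convective velocity scale
    ri = g * dtheta * h / (theta0 * w_star**2)      # bulk Richardson number
    w_e = A * w_star / ri                           # entrainment velocity
    h += w_e * dt                                   # mixed-layer growth

print(f"Mixed-layer depth after {t_end / 3600:.0f} h: {h:.1f} m")
```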

Keywords: inversion layer, penetrative convection, radiative cooling, fog occurrence

Procedia PDF Downloads 60
831 The Extent of Land Use Externalities in the Fringe of Jakarta Metropolitan: An Application of Spatial Panel Dynamic Land Value Model

Authors: Rahma Fitriani, Eni Sumarminingsih, Suci Astutik

Abstract:

In a fast-growing region, the conversion of agricultural land surrounded by new development sites occurs sooner than expected. This phenomenon has been experienced by many regions in Indonesia, especially the fringe of Jakarta (BoDeTaBek). Because the area surrounds Indonesia's capital city, rapid land conversion there is an unavoidable process. The conversion expands spatially into the fringe regions, which were initially dominated by agricultural land or conservation sites. Without proper control or growth management, this activity will invite greater costs than benefits. The current land use is the use that maximizes the land's value; to keep land in agricultural or conservation use, efforts are needed to keep the land value of that use as high as possible. In this case, knowledge of the functional relationship between land value and its driving forces is necessary. In a fast-growing region, development externalities are assumed to be the dominant driving force. Land value is the product of past decisions about its use; it is also affected by local characteristics and by the surrounding land use (externalities) observed in the previous period. The effect of each factor on land value has dynamic and spatial dimensions, so an empirical spatial dynamic land value model is better suited to capture them. The model is useful for testing and estimating the extent of land use externalities on land value in the short run as well as in the long run, and it serves as a basis for formulating an effective urban growth management policy. This study applies the model to the case of land values in the fringe of the Jakarta metropolitan area. The model is then used to predict the effect of externalities on land value in the form of a prediction map. For the case of Jakarta's fringe, there is evidence of the significance of neighborhood urban activity (a negative externality), previous land value, and local accessibility for land value. These effects accumulate dynamically over the years, but they fully affect land value only after six years.
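
One conventional way to write such a spatial dynamic panel specification, given here only as a hedged illustration of the structure the abstract describes and not as the exact estimated equation, is:

```latex
% Illustrative spatial dynamic panel land value model (notation assumed):
% V_{it} is the land value of parcel i at time t, w_{ij} are spatial weights,
% X_{it} are local characteristics, and the lagged neighbour term carries the
% land use externalities observed in the previous period.
V_{it} = \tau V_{i,t-1}
       + \rho \sum_{j} w_{ij} V_{jt}
       + \eta \sum_{j} w_{ij} V_{j,t-1}
       + X_{it}\beta + \mu_i + \varepsilon_{it}
```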

Keywords: growth management, land use externalities, land value, spatial panel dynamic

Procedia PDF Downloads 247
830 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. Better predictions nevertheless remain the foundation of all science, so the development of accurate, robust, and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major methods still dominate: Box-Jenkins ARIMA and exponential smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing remains one of the most practically relevant forecasting methods available, owing to its simplicity, robustness, and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite this success and widespread use in many areas, ES models have shortcomings that negatively affect forecast accuracy. A new forecasting method, called the ATA method, is therefore proposed in this study to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters, so the two approaches have similar structural forms and ATA can easily be adapted to each individual ES model; ATA nevertheless has many advantages due to its innovative weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by means of classical decomposition. The ATA method is therefore expanded to higher-order ES methods for additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models, called ATA trended models, are compared with their counterpart ES models on the M3-Competition data set, since it is still the most recent and comprehensive time-series data collection available. It is shown that the models outperform their counterparts in almost all settings, and when a model selection is carried out among these trended models, ATA outperforms all competitors in the M3-Competition for both short-term and long-term forecasting horizons when forecasting accuracy is compared using popular error metrics.
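
The defining feature of ATA is that the fixed smoothing constants of ES are replaced by time-varying weights of the form p/t and q/t. The sketch below implements an additive-trend ATA recursion under that reading, with integer parameters p and q chosen arbitrarily; it is a plausible rendering of the published recursions on made-up data, not the authors' reference code.

```python
import numpy as np

def ata_additive(y, p, q):
    """One-step-ahead forecasts from an additive-trend ATA recursion, assuming
    S_t = (p/t) y_t + (1 - p/t)(S_{t-1} + T_{t-1})          for t > p,
    T_t = (q/t)(S_t - S_{t-1}) + (1 - q/t) T_{t-1}          for t > q,
    with S_t = y_t for t <= p and T_t = y_t - y_{t-1} for t <= q."""
    y = np.asarray(y, dtype=float)
    s, trend = y[0], 0.0
    forecasts = []
    for t in range(2, len(y) + 1):
        obs, s_prev = y[t - 1], s
        s = obs if t <= p else (p / t) * obs + (1 - p / t) * (s_prev + trend)
        trend = (obs - y[t - 2]) if t <= q else \
            (q / t) * (s - s_prev) + (1 - q / t) * trend
        forecasts.append(s + trend)      # forecast for period t + 1
    return np.array(forecasts), s, trend

# Illustrative series -- not from the M3 data set.
series = np.array([10.0, 12.0, 13.5, 15.2, 16.8, 18.1, 20.0])
fitted, level, trend = ata_additive(series, p=3, q=1)
print("Next-period forecast:", round(level + trend, 2))
```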

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 169