Search results for: probabilistic scoring distribution

4343 Calculation of Detection Efficiency of Horizontal Large Volume Source Using Exvol Code

Authors: M. Y. Kang, Euntaek Yoon, H. D. Choi

Abstract:

To calculate the full energy (FE) absorption peak efficiency for an arbitrary volume sample, we developed and verified the EXVol (Efficiency calculator for EXtended Voluminous source) code, which is based on the effective solid angle method. EXVol can describe the source as a non-uniform three-dimensional (x, y, z) source and decompose it into several sets of volume units. Users can equally divide the (x, y, z) coordinate system to calculate the detection efficiency at a specific position of a cylindrical volume source. By determining the detection efficiency for differential volume units, the total radiative absolute distribution and the correction factor of the detection efficiency can be obtained from the nondestructive measurement of the source. In order to check the performance of the EXVol code, a Si ingot of 20 cm in diameter and 50 cm in height was used as the source. The detector was moved in the collimation geometry to calculate the detection efficiency at a specific position, and the results were compared with the experimental values. In this study, the performance of the EXVol code was extended to obtain the detection efficiency distribution at a specific position in a large volume source.

Keywords: attenuation, EXVol, detection efficiency, volume source

Procedia PDF Downloads 170
4342 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in the estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are compared because they iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller root mean squared errors than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all cases under the progressive type-II censoring scheme.
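A minimal numerical sketch of the Newton-Raphson step is given below for the simplest setting: a complete (uncensored) sample and a known location parameter, so that only the scale λ is iterated. The progressively censored likelihood and the EM variant used in the paper are not reproduced, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a complete two-parameter Rayleigh sample: X = mu + lambda*R,
# where R is a standard Rayleigh variate.
mu_true, lam_true, n = 2.0, 1.5, 500
x = mu_true + lam_true * np.sqrt(-2.0 * np.log(rng.uniform(size=n)))

# With mu known, the log-likelihood in lambda is
#   l(lam) = sum(log z) - 2n*log(lam) - S/(2*lam^2),   z = x - mu, S = sum(z^2)
# score:      s(lam)  = -2n/lam + S/lam^3
# derivative: s'(lam) =  2n/lam^2 - 3S/lam^4
z = x - mu_true
S = np.sum(z**2)

lam = z.std()                       # starting value
for _ in range(50):                 # Newton-Raphson iterations on the score
    score = -2*n/lam + S/lam**3
    dscore = 2*n/lam**2 - 3*S/lam**4
    step = score / dscore
    lam -= step
    if abs(step) < 1e-10:
        break

print(f"NR estimate of lambda: {lam:.4f}  (closed form: {np.sqrt(S/(2*n)):.4f})")
```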

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 146
4331 An Experimental Study on the Temperature Reduction of Exhaust Gas during Snorkeling of a Submarine

Authors: Seok-Tae Yoon, Jae-Yeong Choi, Gyu-Mok Jeon, Yong-Jin Cho, Jong-Chun Park

Abstract:

Conventional submarines obtain propulsive force by using an electric propulsion system consisting of a diesel generator, battery, motor, and propeller. Underwater, the submarine uses the electric power stored in the battery. When a certain amount of electric power has been consumed, the submarine floats near the sea water surface and recharges the battery by using the diesel generator. The voyage carried out while charging the power is called snorkeling, and the high-temperature exhaust gas from the diesel generator forms a heat distribution on the sea water surface. This heat distribution is detected by weapon systems equipped with thermo-detectors and is a main cause of reduced survivability of the submarine. In this paper, an experimental study was carried out to establish optimal operating conditions of a submarine for reducing the infrared signature radiated from the sea water surface. For this, a hot gas generating system and a round acrylic water tank with adjustable water level were made. The control variables of the experiment were set as the mass flow rate, the temperature difference between the water and the hot gas in the water tank, and the water level difference between the air outlet and the water surface. The experimental instrumentation used a T-type thermocouple to measure the released air temperature at the surface of the water, and a thermography system to measure the thermal energy distribution on the water surface. As a result of the experimental study, we analyzed the correlation between the final released temperature at the exhaust pipe exit of a submarine and the depth of the snorkel, and presented reasonable operating conditions for the infrared signature reduction of a submarine.

Keywords: experimental study, flow rate, infrared signature, snorkeling, thermography

Procedia PDF Downloads 338
4340 Towards Integrating Statistical Color Features for Human Skin Detection

Authors: Mohd Zamri Osman, Mohd Aizaini Maarof, Mohd Foad Rohani

Abstract:

Human skin detection is recognized as the primary step in most applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection application relies greatly on two components: feature extraction and the classification method. Skin color is the most vital information used for skin detection. However, color features alone sometimes cannot handle images whose background shares the same color distribution as skin. Pixel-based color features do not eliminate skin-like colors because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color analysis, such as the mean and standard deviation, is exploited as an additional feature to increase the reliability of the skin detector. In this paper, we studied the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzed the integrated color and texture features using eight classifiers with three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical feature using a Random Forest classifier achieved a significant performance with an F1-score of 0.969.
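The kind of feature vector described above, per-pixel colour values augmented with the local mean and standard deviation and fed to a Random Forest, can be sketched as follows. The image and skin mask here are random stand-ins, not the benchmark data of the paper, and only the RGB colour space is shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def color_stat_features(img, size=5):
    """Per-pixel RGB values plus the local mean and standard deviation per channel."""
    f = img.astype(float)
    mean = uniform_filter(f, size=(size, size, 1))
    sq_mean = uniform_filter(f**2, size=(size, size, 1))
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return np.dstack([f, mean, std]).reshape(-1, 9)

# random stand-ins for a labelled training image and its skin mask
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64, 3))
mask = rng.integers(0, 2, (64, 64))

X, y = color_stat_features(img), mask.ravel()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("F1 on the toy training data:", round(f1_score(y, clf.predict(X)), 3))
```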

Keywords: color space, neural network, random forest, skin detection, statistical feature

Procedia PDF Downloads 445
4339 Optimization of Line Loss Minimization Using Distributed Generation

Authors: S. Sambath, P. Palanivel

Abstract:

Research conducted in the last few decades has proven that the inclusion of Distributed Generation (DG) in distribution systems considerably lowers power losses and improves power quality. Moreover, the choice of DG is even more attractive since it provides not only benefits in power loss minimisation but also a wide range of other advantages covering environmental, economic, power quality and technical issues. This paper intends to quantify and analyse the impact of distributed generation (DG) in Tamil Nadu, India, to examine what the benefits of decentralized generation would be for meeting rural loads. We used load flow analysis to simulate and quantify the loss reduction and power quality enhancement achieved by making decentralized generation available under actual line conditions for rural feeders in Tamil Nadu, India. Reactive power and voltage profiles were considered. This helps utilities to better plan their systems in rural areas to meet dispersed loads, while optimizing renewable and decentralised generation sources.
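A back-of-the-envelope illustration of the loss mechanism, line loss growing with the square of the current drawn through the feeder, is sketched below for a single two-bus feeder; the feeder data and the full load flow analysis of the paper are not reproduced, and all values are illustrative.

```python
# Two-bus illustration: line loss is I^2 * R, so supplying part of the load
# from local (distributed) generation reduces the current drawn through the
# feeder and the loss falls quadratically.
V = 11_000.0           # feeder voltage [V] (assumed)
R = 2.0                # line resistance [ohm] (assumed)
P_load = 1.0e6         # load at the far bus [W] (assumed)

def line_loss(p_from_grid):
    i = p_from_grid / V            # current magnitude at unity power factor
    return i**2 * R

for dg_share in (0.0, 0.25, 0.5):
    loss = line_loss(P_load * (1 - dg_share))
    print(f"DG supplying {dg_share:4.0%} of the load -> line loss {loss/1e3:6.2f} kW")
```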

Keywords: distributed generation, distribution system, load flow analysis, optimal location, power quality

Procedia PDF Downloads 388
4338 Multi-Objective Random Drift Particle Swarm Optimization Algorithm Based on RDPSO and Crowding Distance Sorting

Authors: Yiqiong Yuan, Jun Sun, Dongmei Zhou, Jianan Sun

Abstract:

In this paper, we present a Multi-Objective Random Drift Particle Swarm Optimization algorithm (MORDPSO-CD) based on RDPSO and crowding distance sorting to improve convergence and distribution with less computational cost. MORDPSO-CD makes full use of RDPSO to approach the true Pareto-optimal solutions quickly. We adopt the crowding distance sorting technique to update and maintain the archived optimal solutions. Introducing the crowding distance technique into MORDPSO enables the leader particles to ultimately find the true Pareto solutions. The simulation results reveal that the proposed algorithm has better convergence and distribution.
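The crowding distance sorting used to maintain the archive can be sketched in its standard NSGA-II form as below; the RDPSO particle update itself is not shown, and the example front is illustrative.

```python
import numpy as np

def crowding_distance(objectives):
    """NSGA-II style crowding distance for a set of non-dominated solutions.

    objectives: (n_solutions, n_objectives) array; a larger value means the
    solution sits in a less crowded part of the front and should be kept.
    """
    n, m = objectives.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objectives[:, k])
        f = objectives[order, k]
        dist[order[0]] = dist[order[-1]] = np.inf     # keep boundary solutions
        span = f[-1] - f[0]
        if span > 0:
            dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist

front = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 5.5], [5.0, 3.0], [8.0, 1.0]])
print(crowding_distance(front))    # sort descending to truncate the archive
```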

Keywords: multi-objective optimization, random drift particle swarm optimization, crowding distance sorting, Pareto optimal solution

Procedia PDF Downloads 241
4337 A Two Tailed Secretary Problem with Multiple Criteria

Authors: Alaka Padhye, S. P. Kane

Abstract:

The following study considers some variations of the secretary problem (SP). In a multiple criteria secretary problem (MCSP), the selection of a unit is based on two independent characteristics. The number of units that appear before an observer is known, say N, the best rank of a unit being N. A unit is selected if it is better with respect to either the first or the second or both characteristics. When the number of units is large, and due to constraints like time and cost, the observer might want to stop earlier instead of inspecting all the available units. Let the process terminate at the r2-th unit, where r1

Keywords: joint distribution, marginal distribution, real ranks, secretary problem, selection criterion, two tailed secretary problem

Procedia PDF Downloads 262
4336 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open die forging and a quench and temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging in the sense that the temperature inside the material before loading is not uniform, yet a constant temperature is used in the simulation because it is assumed that the temperature is homogenized after some holding time. Therefore, to be close to the experiment, the real distribution of the temperature through the specimen is needed before the mechanical loading. We present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, thus representing a realistic temperature distribution within the specimen before deformation. Indeed, most numerical simulations consider a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, like upsetting, cogging, etc. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena can be observed, like recrystallization, which requires in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to improve the mechanical properties of the final product. Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so the development of a technique allowing the prediction of the initiation of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm providing the temperature distribution before the loading stage, an analytical model to determine the initiation of this recrystallization. These two techniques are implemented into the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. The Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
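The idea of supplying a realistic through-thickness temperature field before the mechanical loading can be illustrated with a one-dimensional explicit finite-difference sketch; the UAMP/VUHARD implementation in Abaqus, the actual block geometry and the material data are not reproduced, and the temperatures below are assumptions.

```python
import numpy as np

# 1-D explicit finite-difference estimate of the pre-loading temperature
# gradient between the surface and the core of a thick block.
L, n = 0.5, 51                  # half-thickness [m], grid points (assumed)
alpha = 1.2e-5                  # assumed thermal diffusivity of steel [m^2/s]
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha        # stable explicit time step
T = np.full(n, 1200.0)          # assumed initial uniform temperature [deg C]
T_surface = 900.0               # assumed surface temperature during transfer

for _ in range(2000):           # march in time until a gradient develops
    T[0] = T_surface            # prescribed surface temperature
    T[-1] = T[-2]               # symmetry (zero-flux) condition at the core
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])

print("surface %.0f degC, core %.0f degC" % (T[0], T[-1]))
```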

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 67
4335 A Periodogram-Based Spectral Method Approach: The Relationship between Tourism and Economic Growth in Turkey

Authors: Mesut Balibey, Serpil Türkyılmaz

Abstract:

A popular topic in the econometrics and time series area is the cointegrating relationship among the components of a nonstationary time series. Engle and Granger's least squares method and Johansen's conditional maximum likelihood method are the most widely used methods to determine the relationships among variables. Furthermore, a method proposed to test for a unit root based on the periodogram ordinates has certain advantages over conventional tests. Periodograms can be calculated without any model specification, and the exact distribution under the assumption of a unit root is obtained. For higher-order processes the distribution remains the same asymptotically. In this study, in order to illustrate these advantages over conventional tests, we examine a possible relationship between tourism and economic growth in Turkey during the period 1999:01-2010:12 by using the periodogram method, Johansen's conditional maximum likelihood method, and Engle and Granger's ordinary least squares method.
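The periodogram ordinates on which such a unit-root test is based can be computed without any model specification, as the sketch below illustrates on a simulated random walk standing in for the Turkish tourism and GDP series; only the concentration of spectral mass at low ordinates is shown, not the full test.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(2)
n = 144                                   # 12 years of monthly observations
y = np.cumsum(rng.normal(size=n))         # random walk, i.e. a unit-root process

freqs, pxx = periodogram(y, detrend=False)
# A unit-root series concentrates its spectral mass at the lowest ordinates;
# periodogram-based tests compare the first ordinates with the remainder.
low_share = pxx[1:4].sum() / pxx[1:].sum()
print(f"share of power in the 3 lowest non-zero frequencies: {low_share:.2f}")
```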

Keywords: cointegration, economic growth, periodogram ordinate, tourism

Procedia PDF Downloads 251
4334 Sliding Mode Control and Its Application in Custom Power Device: A Comprehensive Overview

Authors: Pankaj Negi

Abstract:

Nowadays, the demand for high-quality electrical energy is increasing, as consumers want not only reliable but also quality power. Custom power devices are among the most well-known power quality compensators in distribution networks. This paper presents a comprehensive review of compensating custom power devices, mainly the DSTATCOM (distribution static compensator), the DVR (dynamic voltage restorer), and the UPQC (unified power quality conditioner), and also deals with sliding mode control and its applications to custom power devices. The sliding mode control strategy provides robustness to custom power devices and enhances the dynamic response for compensating voltage sag, swell, voltage flicker, and voltage harmonics. The aim of this paper is to provide a broad perspective on the status of compensating devices in electric power distribution systems and on sliding mode control strategies to researchers and application engineers who are dealing with power quality and stability issues.

Keywords: active power filters (APF), custom power device (CPD), DSTATCOM, DVR, UPQC, sliding mode control (SMC), power quality

Procedia PDF Downloads 424
4333 Young’s Modulus Variability: Influence on Masonry Vault Behavior

Authors: Abdelmounaim Zanaz, Sylvie Yotte, Fazia Fouchal, Alaa Chateauneuf

Abstract:

This paper presents a methodology for the probabilistic assessment of the bearing capacity and the prediction of the failure mechanism of masonry vaults at the ultimate state, with consideration of the natural variability of the Young's modulus of the stones. First, the computation model is explained. The failure mode considered is the most commonly reported one, i.e. the four-hinge mechanism. Based on this assumption, the study of a vault composed of 16 segments is presented. The Young's modulus of the segments is considered as a random variable defined by a mean value and a coefficient of variation CV. A relationship linking the vault bearing capacity to the modulus variation of the voussoirs is proposed. The failure mechanisms, in addition to the one observed in the deterministic case, are identified for each CV value as well as their probability of occurrence. The results show that the mechanism observed in the deterministic case has a decreasing probability of occurrence as CV increases, while the number of other mechanisms and their probability of occurrence increase with the coefficient of variation of the Young's modulus. This means that if significant variability of the Young's modulus of the segments is proven, taking it into account in computations becomes mandatory, both for determining the vault bearing capacity and for predicting its failure mechanism.
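The probabilistic side of such a methodology can be sketched as a Monte Carlo loop over randomly sampled segment moduli with a prescribed mean and CV; a lognormal model is assumed here, and the limit-analysis model that maps a set of moduli to a bearing capacity and a hinge mechanism is replaced by a hypothetical placeholder function.

```python
import numpy as np

rng = np.random.default_rng(3)
n_segments, n_samples = 16, 10_000
E_mean, cv = 20e9, 0.25                 # assumed mean modulus [Pa] and CV

# lognormal segment moduli matching the requested mean and coefficient of variation
sigma = np.sqrt(np.log(1 + cv**2))
mu = np.log(E_mean) - 0.5 * sigma**2
E = rng.lognormal(mu, sigma, size=(n_samples, n_segments))

def capacity(E_row):
    """Hypothetical placeholder: capacity penalised by the stiffness contrast
    between voussoirs (the real model is the four-hinge limit analysis)."""
    return E_row.min() / E_row.mean()

caps = np.apply_along_axis(capacity, 1, E)
print("mean normalised capacity %.3f, 5%% fractile %.3f"
      % (caps.mean(), np.quantile(caps, 0.05)))
```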

Keywords: masonry, mechanism, probability, variability, vault

Procedia PDF Downloads 431
4332 Design of Impedance Box to Study Fluid Parameters

Authors: K. AlJimaz, A. Abdullah, A. Abdulsalam, K. Ebdah, A. Abdalrasheed

Abstract:

Understanding flow distribution and head losses is essential to design and calculate thermo-fluid parameters in order to reduce the pressure to a certain required value. This paper discusses the design and simulation approach used to create an impedance box that reduces pressure. The box is governed by specific scientific principles such as Bernoulli's principle and the conservation of mass. In this paper, the design is made using SOLIDWORKS, and the simulation is done using ANSYS software to solve the differential equations and study the parameters in the 3D model, and also to understand how the design of this box reduces the pressure. The design was made so that fluid enters a single inlet at a pressure of 3000 Pa and exits from six outlets at a pressure of 300 Pa, with respect to the conservation of mass principle. The distribution of flow and the head losses were observed to have an impact on reducing the pressure, since other factors, such as friction, were neglected and the temperature was kept constant. The design showed that the increase in length and diameter of the pipe helped to reduce the pressure, and the head losses contributed significantly to reducing the pressure to 10% of the original value (from 3000 Pa to 300 Pa) at the outlets.

Keywords: box, pressure, thermodynamics, 3D

Procedia PDF Downloads 72
4331 Change Point Detection Using Random Matrix Theory with Application to Frailty in Elderly Individuals

Authors: Malika Kharouf, Aly Chkeir, Khac Tuan Huynh

Abstract:

Detecting change points in time series data is a challenging problem, especially in scenarios where there is limited prior knowledge regarding the data’s distribution and the nature of the transitions. We present a method designed for detecting changes in the covariance structure of high-dimensional time series data, where the number of variables closely matches the data length. Our objective is to achieve unbiased test statistic estimation under the null hypothesis. We delve into the utilization of Random Matrix Theory to analyze the behavior of our test statistic within a high-dimensional context. Specifically, we illustrate that our test statistic converges pointwise to a normal distribution under the null hypothesis. To assess the effectiveness of our proposed approach, we conduct evaluations on a simulated dataset. Furthermore, we employ our method to examine changes aimed at detecting frailty in the elderly.
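A generic version of the idea, monitoring how the covariance structure differs before and after each candidate point, is sketched below using the spectral norm of the difference of two windowed sample covariances; this is not the RMT-calibrated statistic of the paper, and the data are simulated.

```python
import numpy as np

def cov_change_score(X, window=100):
    """Spectral norm of the difference between the sample covariances of the
    windows immediately before and after each candidate change point."""
    n, p = X.shape
    scores = np.full(n, np.nan)
    for t in range(window, n - window):
        c_before = np.cov(X[t - window:t].T)
        c_after = np.cov(X[t:t + window].T)
        scores[t] = np.linalg.norm(c_before - c_after, ord=2)
    return scores

rng = np.random.default_rng(4)
p, n = 20, 600
X = rng.normal(size=(n, p))
cov2 = 0.5 * np.eye(p) + 0.5                    # correlated regime after t = 300
X[300:] = X[300:] @ np.linalg.cholesky(cov2).T
print("estimated change point:", np.nanargmax(cov_change_score(X)))
```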

Keywords: change point detection, hypothesis tests, random matrix theory, frailty in elderly

Procedia PDF Downloads 21
4330 CT Image-Based Dense Facial Soft Tissue Thickness Measurement by Open-Source Tools in a Chinese Population

Authors: Ye Xue, Zhenhua Deng

Abstract:

Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring the face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement at dense points on 3D facial and skull models using open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed and densely calculated FSTT database is crucial in enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized to show the Hausdorff distance value of each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI and birthplace. Statistical methods employed included multiple regression analysis, ANOVA and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA analysis. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in regions such as the forehead, orbital, mandibular and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between the northern and southern groups, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that the open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
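The dense face-to-skull measurement amounts to a nearest-neighbour distance query from every face vertex to the skull surface, which is essentially what the per-vertex Hausdorff filter samples; the sketch below reproduces that step on synthetic point clouds standing in for the reconstructed CT surfaces.

```python
import numpy as np
from scipy.spatial import cKDTree

def face_to_skull_distances(face_pts, skull_pts):
    """Distance from every face vertex to its nearest skull vertex (dense FSTT map)."""
    tree = cKDTree(skull_pts)
    d, _ = tree.query(face_pts)
    return d

# synthetic stand-ins for the reconstructed face and skull surfaces [mm]
rng = np.random.default_rng(5)
skull = rng.normal(size=(5000, 3)) * 80.0
face = skull + rng.normal(5.0, 1.5, size=skull.shape)   # face offset by roughly 5 mm

fstt = face_to_skull_distances(face, skull)
print("mean %.1f mm, sd %.1f mm, max %.1f mm" % (fstt.mean(), fstt.std(), fstt.max()))
```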

Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool

Procedia PDF Downloads 48
4329 Characterization of Nanoemulsion Incorporating Crude Cocoa Polyphenol

Authors: Suzannah Sharif, Aznie Aida Ahmad, Maznah Ismail

Abstract:

Cocoa beans are the raw material for products such as cocoa powder and chocolate. Cocoa beans contain polyphenols, which have been shown in several clinical studies to confer beneficial health effects. However, studies have shown that cocoa polyphenol absorption in the human intestinal tract is very low. Nanoemulsion may therefore be one way to increase the bioavailability of cocoa polyphenol. This study aims to characterize a nanoemulsion incorporating crude cocoa polyphenol produced using a high-energy technique. Cocoa polyphenol was extracted from fresh freeze-dried cocoa beans from Malaysia. The particle distribution, particle size, and zeta potential were determined. The emulsion was also analysed using a transmission electron microscope to visualize the particles. A solubilization study was conducted by titrating the nanoemulsion into distilled water or 1% surfactant solution. Results showed that the nanoemulsion contains particles with a narrow size distribution, with an average particle size of 112 nm and a zeta potential of -45 mV. The nanoemulsions behave differently in distilled water and in surfactant solution.

Keywords: cocoa, nanoemulsion, cocoa polyphenol, solubilisation study

Procedia PDF Downloads 453
4328 Regional Disparities in Microfinance Distribution: Evidence from Indian States

Authors: Sunil Sangwan, Narayan Chandra Nayak

Abstract:

Over the last few decades, the Indian banking system has achieved remarkable growth in its credit volume. However, one of the most disturbing facts about this growth is the uneven distribution of financial services across regions. Having witnessed limited success from all the earlier efforts towards financial inclusion targeting the rural poor and the underprivileged, the provision of microfinance, of late, has emerged as a supplementary mechanism. There are two prominent modes of microfinance distribution in India, namely bank-SHG linkage (SBLP) and private Microfinance Institutions (MFIs). Ironically, such efforts also seem to have failed to achieve the desired targets, as microfinance services have witnessed a skewed distribution across the states of the country. This study attempts to make a comparative analysis of the geographical skew of SBLP and MFIs in India and to examine the factors influencing their regional distribution. The results indicate that microfinance services are largely concentrated in the southern region, accounting for about 50% of all microfinance clients and 49% of all microfinance loan portfolios. This is distantly followed by the eastern region, where client outreach is close to 25% only. The north-eastern, northern, central, and western regions lag far behind in the microfinance sector, accounting for only 4%, 4%, 10%, and 7% of client outreach, respectively. The penetration of SHGs is equally skewed, with the southern region accounting for 46% of client outreach and 70% of loan portfolios, followed by the eastern region with 21% of client outreach and 13% of the loan portfolio. Contrarily, the north-eastern, northern, central, and western regions account for 5%, 5%, 10%, and 13% of client outreach and 3%, 3%, 7%, and 4% of loan portfolios, respectively. The study examines the impact of literacy rate, rural poverty, population density, primary sector share, non-farm activities, loan default behavior and bank penetration on microfinance penetration. The study is limited to 17 major states of the country over the period 2008-2014. The results of the GMM estimation indicate a significant positive impact of literacy rate, non-farm activities and population density on microfinance penetration across the states, while a rise in loan default seems to deter it. Rural poverty shows a significant negative impact on the spread of SBLP, while it has a positive impact on MFI penetration, hence indicating the policy of exclusion being adhered to by the formal financial system, especially towards the poor. However, MFIs seem to be working as substitute mechanisms to banks to fill the gap. The findings of the study point towards enhancing financial literacy, non-farm activities and rural bank penetration, and containing loan default, for achieving greater microfinance prevalence.

Keywords: bank penetration, literacy rate, microfinance, primary sector share, rural non-farm activities, rural poverty

Procedia PDF Downloads 217
4327 Effect of Core Puncture Diameter on Bio-Char Kiln Efficiency

Authors: W. Intagun, T. Khamdaeng, P. Prom-ngarm, N. Panyoyai

Abstract:

Biochar has been used as a soil amendment since it has a highly porous structure and suitable nutrients and chemical properties for plants. The product yields from a biochar kiln depend on the process parameters and the kiln type used. The objective of this research is to investigate the effect of core puncture diameter on biochar kiln efficiency, i.e., the yields of biochar and produced gas. Corncobs were used as the raw material to produce biochar. Briquettes made from agricultural wastes were used as fuel. Each treatment was performed by changing the core puncture diameter. The experiments revealed that the yields of biochar at core puncture diameters of 3.18 mm, 4.76 mm, and 6.35 mm were 10.62 wt.%, 24.12 wt.%, and 12.24 wt.% of the total solid yields, respectively. The yield of produced gas increased with increasing core puncture diameter. The maximum yield of produced gas was 81.53 wt.%, found at the core puncture diameter of 6.35 mm. The core puncture diameter was furthermore found to affect the temperature distribution inside the kiln and its thermal efficiency. In conclusion, a highly efficient biochar kiln can be designed and constructed by using the proper core puncture diameter.

Keywords: anila stove, bio-char, soil conditioning materials, temperature distribution

Procedia PDF Downloads 216
4326 Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia

Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca

Abstract:

This paper presents the design of a model for planning the distribution logistics operation. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) of dry freight in Bogotá. Two stages constitute this implementation: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which considers the transshipment operation based on a combined load allocation model as a classic transshipment model; the second is the specific routing of that operation through the Clarke and Wright heuristic. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimum assignments are established by utilizing transshipment centers, with the purpose of determining the specific routing based on the shortest distance traveled.
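A compact version of the Clarke and Wright savings heuristic used in the routing stage is sketched below (parallel merge rule, uncapacitated); the transshipment assignment solved by the mixed integer program is not included, and the coordinates are illustrative.

```python
import numpy as np

def clarke_wright(dist, depot=0):
    """Clarke and Wright savings heuristic (parallel version, uncapacitated)."""
    n = len(dist)
    routes = {i: [i] for i in range(n) if i != depot}   # one route per customer
    route_of = {i: i for i in routes}                   # customer -> route key
    savings = sorted(((dist[depot][i] + dist[depot][j] - dist[i][j], i, j)
                      for i in routes for j in routes if i < j), reverse=True)
    for s, i, j in savings:
        if s <= 0:
            break
        ri, rj = route_of[i], route_of[j]
        if ri == rj:
            continue
        if routes[ri][-1] == i and routes[rj][0] == j:     # ...-i joined to j-...
            keep, absorb = ri, rj
        elif routes[rj][-1] == j and routes[ri][0] == i:   # ...-j joined to i-...
            keep, absorb = rj, ri
        else:
            continue
        routes[keep].extend(routes[absorb])
        for node in routes[absorb]:
            route_of[node] = keep
        del routes[absorb]
    return [[depot] + r + [depot] for r in routes.values()]

coords = np.array([[0, 0], [2, 9], [5, 8], [9, 4], [8, 1], [3, 2]])
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
print(clarke_wright(dist))
```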

Keywords: transshipment model, mixed integer programming, saving algorithm, dry freight transportation

Procedia PDF Downloads 207
4325 Human Leukocyte Antigen Class 1 Phenotype Distribution and Analysis in Persons from Central Uganda with Active Tuberculosis and Latent Mycobacterium tuberculosis Infection

Authors: Helen K. Buteme, Rebecca Axelsson-Robertson, Moses L. Joloba, Henry W. Boom, Gunilla Kallenius, Markus Maeurer

Abstract:

Background: The Ugandan population is heavily affected by infectious diseases, and human leukocyte antigen (HLA) diversity plays a crucial role in the host-pathogen interaction and affects the rates of disease acquisition and outcome. The identification of HLA class 1 alleles and the determination of which alleles are associated with tuberculosis (TB) outcomes would help in screening individuals in TB endemic areas for susceptibility to TB and in predicting resistance or progression to TB, which would inevitably lead to better clinical management of TB. Aims: To determine the HLA class 1 phenotype distribution in a Ugandan TB cohort and to establish the relationship between these phenotypes and active and latent TB. Methods: Blood samples were drawn from 32 HIV-negative individuals with active TB and 45 HIV-negative individuals with latent MTB infection. DNA was extracted from the blood samples, and the DNA samples were HLA typed by the polymerase chain reaction-sequence-specific primer method. The allelic frequencies were determined by direct count. Results: HLA-A*02, A*01, A*74, A*30, B*15, B*58, C*07, C*03 and C*04 were the dominant phenotypes in this Ugandan cohort. There were differences in the distribution of HLA types between the individuals with active TB and the individuals with LTBI, with only the HLA-A*03 allele showing a statistically significant difference (p=0.0136). However, after FDR computation, the corresponding q-value is above the expected proportion of false discoveries (q-value 0.2176). Key findings: We identified a number of HLA class 1 alleles in a population from Central Uganda, which will enable us to carry out a functional characterization of CD8+ T-cell mediated immune responses to MTB. Our results also suggest that there may be a positive association between the HLA-A*03 allele and TB, implying that individuals with the HLA-A*03 allele are at a higher risk of developing active TB.

Keywords: HLA, phenotype, tuberculosis, Uganda

Procedia PDF Downloads 390
4324 Associations of Vitamin D Receptor Polymorphisms with Coronary Artery Diseases

Authors: Elham Sharif, Nasser Rizk, Sirin Abu Aqel, Ofelia Masoud

Abstract:

Background: Previous studies have investigated the association of the rs1544410, rs7975232 and rs731236 polymorphisms in the vitamin D receptor gene and their impact on diseases such as cancer, diabetes and hypertension in different ethnic backgrounds. Aim: The aim of this study is to investigate the association between VDR polymorphisms, using three SNPs (rs1544410, rs7975232 and rs731236), and the severity of the significant lesion in the coronary arteries of angiographically diagnosed CAD patients. Methods: A prospective-retrospective study was conducted on 192 CAD patients enrolled from the cardiology department, Heart Hospital HMC, grouped into 96 subjects with significant stenosis and 96 with non-significant stenosis, aged between 30 and 75 years. Genotyping was performed for the following SNPs: rs1544410, rs7975232 and rs731236, using a TaqMan assay on the ABI 7500 real-time PCR system in the Health Sciences Labs at the Qatar University Biomedical Research Center. Results: The results showed that both groups had matched age and gender distributions, but patients with significant stenosis had significantly higher BMI (p=0.047), smoking status (p=0.039), FBS (p=0.031), CK-MB (p=0.025) and troponin (p=0.002) than the patients with non-significant lesions. Among the traditional risk factors, smoking increased the odds of a severe stenotic lesion in CAD patients by 1.984, with a 95% CI between 1.024 and 7.063, p=0.042. HWE showed deviations for rs1544410 and rs731236 among the study subjects. The most frequent rs7975232 genotype was AA among the significant stenosis patients, while the heterozygous AC genotype was the most frequent in the non-significant stenosis group. Carriers of the rs7975232 CC genotype had an increased risk of a significant coronary artery stenotic lesion of 1.83, with a 95% CI of 1.020-3.280, p=0.043. No association was found between rs7975232 and vitamin D or VDBP. Conclusion: There is a significant association between rs7975232 and the severity of the CAD lesion. Carriers of the rs7975232 CC genotype had an increased risk of a significant coronary artery atherosclerotic lesion, especially in patients with a smoking history, independent of vitamin D.

Keywords: vitamin D, vitamin D receptor, polymorphism, coronary heart disease

Procedia PDF Downloads 301
4323 Reducing Uncertainty of Monte Carlo Estimated Fatigue Damage in Offshore Wind Turbines Using FORM

Authors: Jan-Tore H. Horn, Jørgen Juncher Jensen

Abstract:

Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue estimations may be improved for the same computational efforts. The method is applied to a bottom-fixed, monopile-supported large offshore wind turbine, which is a non-linear and dynamically sensitive system. Different curve fitting techniques to the fatigue damage distribution have been used depending on the sea-state dependent response characteristics, and the effect of a bi-linear S-N curve is discussed. Finally, analyses are performed on several environmental conditions to investigate the long-term applicability of this multistep method. Wave loads are calculated using state-of-the-art theory, while wind loads are applied with a simplified model based on rotor thrust coefficients.
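The crude Monte Carlo estimator whose variance the FORM step is meant to reduce can be sketched as follows: sample stress ranges, convert them to damage through an S-N curve and the Palmgren-Miner rule, and average. The Weibull stress-range model and the single-slope S-N constants are assumptions, and the FORM tail refinement is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

shape, scale = 1.1, 25.0          # assumed Weibull stress-range model [MPa]
n_cycles = 1.0e6                  # stress cycles in the reference period (assumed)
log_a, m = 12.164, 3.0            # assumed single-slope S-N curve: N = 10^log_a * S^-m

def miner_damage(n_samples):
    s = scale * rng.weibull(shape, n_samples)       # sampled stress ranges
    n_allow = 10**log_a * s**(-m)                   # allowable cycles per stress range
    return n_cycles * np.mean(1.0 / n_allow)        # expected Palmgren-Miner damage

for n in (10**3, 10**4, 10**5):
    print(f"{n:>7d} samples -> fatigue damage {miner_damage(n):.4f}")
```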

Keywords: fatigue damage, FORM, monopile, Monte Carlo, simulation, wind turbine

Procedia PDF Downloads 243
4322 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison

Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes

Abstract:

Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to get enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but as yet there is neither a reference method nor an estimation of the impact of these different techniques on the colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly their size distribution and elemental composition. Ultrafiltration methods (frontal, tangential and centrifugal) have been considered since they are widely used for the extraction of colloids from natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICPMS) and Total Organic Carbon (TOC) analysis) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the colloidal phase composition more efficiently, while others are easier or faster to use. The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.

Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution

Procedia PDF Downloads 202
4321 The Distribution of HLA-B*15:01 and HLA-B*51:01 Alleles in Thai Population: Clinical Implementation and Diagnostic Process of COVID-19 Severity

Authors: Aleena Rena Onozuka, Patompong Satapornpong

Abstract:

Introduction: In the immune response, human leukocyte antigen (HLA) alleles (HLA class I and class II) play a crucial role in fighting against pathogens. The HLA-B*15:01 allele has shown a significant association with asymptomatic COVID-19 infection (p-value = 5.67 × 10⁻⁵; OR = 2.40; 95% CI = 1.54-3.64). There is also a notable linkage between the HLA-B*51:01 allele and critically ill patients with COVID-19 (p-value = 0.007; OR = 3.38). This study describes the distribution of these HLA marker alleles in Thais and their sub-groups. Objective: To investigate the prevalence of the HLA-B*15:01 and HLA-B*51:01 alleles in the Thai population. Materials and Methods: 200 healthy Thai individuals from the College of Pharmacy, Rangsit University, were included in this study. HLA-B alleles were genotyped using the polymerase chain reaction with sequence-specific oligonucleotides (PCR-SSO). Results: We found that the HLA-B*46:01 (12.00%), HLA-B*15:02 (9.25%), HLA-B*40:01 (6.50%), HLA-B*13:01 (6.25%), and HLA-B*38:02 (5.50%) alleles were more common than other alleles in the Thai population. HLA-B*46:01 and HLA-B*15:02 were the most common alleles found across the four regions. Moreover, the HLA-B*15:01 and HLA-B*51:01 alleles were distributed in the Thai population at frequencies of 0.50% and 5.25%, respectively (p-value > 0.05). The frequencies of the HLA-B*15:01 and HLA-B*51:01 alleles were not significantly different between the Thai population and other populations. Conclusions: Since HLA-B*15:01 and HLA-B*51:01 are universal HLA-B markers, screening for these alleles may help anticipate the course of COVID-19. Importantly, the database of HLA markers indicates the association between HLA frequency and populations. However, further research on larger numbers of COVID-19 patients and in different populations is needed.

Keywords: HLA-B*15:01, HLA-B*51:01, COVID-19, HLA-B alleles

Procedia PDF Downloads 104
4320 Second Order Statistics of Dynamic Response of Structures Using Gamma Distributed Damping Parameters

Authors: Badreddine Chemali, Boualem Tiliouine

Abstract:

This article presents the main results of a numerical investigation of the uncertainty of the dynamic response of structures with statistically correlated, Gamma-distributed random damping. A computational method based on a Linear Statistical Model (LSM) is implemented to predict second order statistics for the response of a typical industrial building structure. The significance of random damping with correlated parameters and its implications for the sensitivity of the structural peak response in the neighborhood of a resonant frequency are discussed in light of considerable ranges of damping uncertainties and correlation coefficients. The results are compared to those generated using Monte Carlo simulation techniques. The numerical results obtained show the importance of damping uncertainty and of the statistical correlation of damping coefficients when obtaining accurate probabilistic estimates of the dynamic response of structures. Furthermore, the effectiveness of the LSM model in efficiently predicting uncertainty propagation for structural dynamic problems with correlated damping parameters is demonstrated.
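The Monte Carlo reference against which the LSM is compared can be sketched, in the simplest uncorrelated single-mode case, by sampling a Gamma-distributed damping ratio and evaluating the resonant amplification of a single-degree-of-freedom oscillator; the correlated multi-parameter model and the building structure of the paper are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

omega_n = 2 * np.pi * 2.0              # assumed natural frequency (2 Hz structure)
r = 1.0                                # excitation at resonance (omega / omega_n)
mean_zeta, cv_zeta = 0.05, 0.4         # assumed mean damping ratio and its CV

k = 1.0 / cv_zeta**2                   # Gamma shape and scale matching the mean and CV
theta = mean_zeta / k
zeta = rng.gamma(k, theta, size=100_000)

# steady-state displacement amplification of a SDOF oscillator
H = 1.0 / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

print("mean amplification %.2f, std %.2f, 95th percentile %.2f"
      % (H.mean(), H.std(), np.quantile(H, 0.95)))
```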

Keywords: correlated random damping, linear statistical model, Monte Carlo simulation, uncertainty of dynamic response

Procedia PDF Downloads 268
4319 Altered TP53 Mutations in de Novo Acute Myeloid Leukemia Patients in Iran

Authors: Naser Shagerdi Esmaeli, Mohsen Hamidpour, Parisa Hasankhani Tehrani

Abstract:

Background: The TP53 mutation is frequently detected in acute myeloid leukemia (AML) patients with a complex karyotype (CK), but the stability of this mutation during the clinical course remains unclear. Material and Methods: In this study, TP53 mutations were identified in 7% of 500 patients with de novo AML and in 58.8% of patients with CK in Tabriz, Iran. TP53 mutations were closely associated with older age, lower white blood cell (WBC) and platelet counts, the FAB M6 subtype, unfavorable-risk cytogenetics, and CK, but negatively associated with NPM1 mutation, FLT3/ITD and DNMT3A mutation. Results: Multivariate analysis demonstrated that TP53 mutation was an independent poor prognostic factor for overall survival and disease-free survival in the total cohort and in the subgroup of patients with CK. A scoring system incorporating TP53 mutation and nine other prognostic factors, including age, WBC count, cytogenetics, and gene mutations, into the survival analysis proved to be very useful for stratifying AML patients. A sequential study of 420 samples showed that TP53 mutations were stable during AML evolution, whereas the mutation was acquired in only 1 of the 126 TP53 wild-type patients, in whom a therapy-related AML originating from a different clone emerged. Conclusion: TP53 mutations are associated with distinct clinico-biological features and poor prognosis in de novo AML patients and are rather stable during disease progression.

Keywords: acute myeloblastic leukemia, TP53, FLT3/ITD, Iran

Procedia PDF Downloads 94
4318 A Methodology for Seismic Performance Enhancement of RC Structures Equipped with Friction Energy Dissipation Devices

Authors: Neda Nabid

Abstract:

Friction-based supplemental devices have been extensively used for the seismic protection and strengthening of structures; however, the conventional use of these dampers may not necessarily lead to efficient structural performance. Conventionally designed friction dampers follow a uniform height-wise distribution pattern of slip load values for practical simplicity. This can localize structural damage in certain story levels, while the other stories accommodate a negligible amount of relative displacement demand. A practical performance-based optimization methodology is developed to tackle structural damage localization in RC frame buildings with friction energy dissipation devices under severe earthquakes. The proposed methodology is based on the concept of the uniform damage distribution theory. According to this theory, the slip load values of the friction dampers are redistributed and shifted from stories with lower relative displacement demand to stories with higher inter-story drifts, to narrow the discrepancy between the structural damage levels in different stories. In this study, the efficacy of the proposed design methodology is evaluated through the seismic performance of five different low- to high-rise RC frames equipped with friction wall dampers under six real spectrum-compatible design earthquakes. The results indicate that, compared to the conventional design, using the suggested methodology to design friction wall systems can lead, on average, to up to a 40% reduction in maximum inter-story drift and to a considerably more uniform height-wise distribution of relative displacement demands under the design earthquakes.
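The redistribution idea can be sketched as a simple fixed-point iteration that shifts slip load towards the stories with the largest drifts while keeping the total constant; the drift_analysis function below is a hypothetical stand-in for the nonlinear time-history analyses used in the paper, and all numbers are illustrative.

```python
import numpy as np

def drift_analysis(slip_loads):
    """Hypothetical stand-in for the nonlinear analysis: storeys with a higher
    slip load drift less."""
    base = np.array([1.2, 1.5, 1.8, 1.4, 1.0])       # unprotected drift pattern [%]
    return base / (1.0 + slip_loads / slip_loads.mean())

n_storeys = 5
slip = np.full(n_storeys, 100.0)       # kN, conventional uniform distribution
relax = 0.5                            # relaxation exponent of the update

for _ in range(20):
    drifts = drift_analysis(slip)
    slip *= (drifts / drifts.mean()) ** relax        # shift capacity to high-drift storeys
    slip *= n_storeys * 100.0 / slip.sum()           # keep the total slip load constant

print("final slip loads [kN]:", np.round(slip, 1))
print("drift spread after redistribution:", round(float(np.ptp(drift_analysis(slip))), 3))
```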

Keywords: friction damper, nonlinear dynamic analysis, RC structures, seismic performance, structural damage

Procedia PDF Downloads 213
4317 Biopsy or Biomarkers: Which Is the Sample of Choice in Assessment of Liver Fibrosis?

Authors: S. H. Atef, N. H. Mahmoud, S. Abdrahman, A. Fattoh

Abstract:

Background: The aim of the study is to assess the diagnostic value of the fibrotest and hyaluronic acid in discriminating between insignificant and significant fibrosis, and to find out whether these parameters could replace liver biopsy, which is currently used for the selection of chronic hepatitis C patients eligible for antiviral therapy. Study design: This study was conducted on 52 patients with HCV RNA detected by polymerase chain reaction (PCR) who had undergone liver biopsy and were attending the internal medicine clinic at Ain Shams University Hospital. Liver fibrosis was evaluated according to the METAVIR scoring system on a scale of F0 to F4. The biochemical markers assessed were: alpha-2 macroglobulin (α2-MG), apolipoprotein A1 (Apo-A1), haptoglobin, gamma-glutamyl transferase (GGT), total bilirubin (TB) and hyaluronic acid (HA). The fibrotest score was computed after adjusting for age and gender. Predictive values and ROC curves were used to assess the accuracy of the fibrotest and HA results. Results: For the fibrotest, the observed area under the curve for the discrimination between minimal or no fibrosis (F0-F1) and significant fibrosis (F2-F4) was 0.6736 at a cutoff value of 0.19, with a sensitivity of 84.2% and a specificity of 85.7%. For HA, the sensitivity was 89.5%, the specificity was 85.7% and the area under the curve was 0.540 at the best cutoff value of 71 mg/dL. The combined use of both parameters, HA at 71 mg/dL and a fibrotest score of 0.22, gives a sensitivity of 89.5%, a specificity of 100% and an efficacy of 92.3% (AUC 0.895). Conclusion: The use of both the fibrotest score and HA could be an alternative to biopsy in most patients with chronic hepatitis C, taking into consideration some limitations of the proposed markers in evaluating liver fibrosis.

Keywords: fibrotest, liver fibrosis, HCV RNA, biochemical markers

Procedia PDF Downloads 269
4316 Investigating a Deterrence Function for Work Trips for Perth Metropolitan Area

Authors: Ali Raouli, Amin Chegenizadeh, Hamid Nikraz

Abstract:

The Perth metropolitan area and its surrounding regions have been expanding rapidly in recent decades, and it is expected that this growth will continue in the years to come. With this rapid growth and the resulting increase in population, consideration should be given to strategic planning and modelling for the future expansion of Perth. The accurate estimation of projected traffic volumes has always been a major concern for transport modellers and planners. The development of a reliable strategic transport model depends significantly on the input data and on the calibration of the model parameters to reflect the existing situation. Trip distribution is the second step in four-step modelling (FSM), and it is complex due to its behavioural nature. The gravity model is the most common method for trip distribution. The spatial separation between origin and destination (OD) zones is reflected in the gravity model by applying deterrence functions, which provide an opportunity to include people's behaviour in choosing their destinations based on the distance, time and cost of their journeys. Deterrence functions play an important role in the distribution of trips within a study area and simulate trip distances; they should therefore be calibrated for any particular strategic transport model to correctly reflect trip behaviour within the modelling area. This paper aims to review the most common deterrence functions and to propose a calibrated deterrence function for work trips within the Perth Metropolitan Area based on information obtained from the latest available household data and Perth and Region Travel Survey (PARTS) data. As part of this study, a four-step transport model using EMME software has been developed for the Perth Metropolitan Area to assist with the analysis and findings.
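A doubly-constrained gravity model with a negative-exponential deterrence function, the form most commonly calibrated in practice, can be sketched as below; the zone totals, costs and the beta parameter are illustrative, not the calibrated Perth values.

```python
import numpy as np

def gravity_model(productions, attractions, cost, beta=0.1, n_iter=50):
    """Doubly-constrained gravity model with a negative-exponential
    deterrence function f(c) = exp(-beta * c).

    Returns the trip matrix T_ij balanced to the given row and column totals.
    """
    f = np.exp(-beta * cost)
    a = np.ones_like(productions, dtype=float)
    b = np.ones_like(attractions, dtype=float)
    for _ in range(n_iter):                     # Furness / bi-proportional balancing
        a = productions / (f * b).sum(axis=1)
        b = attractions / (f.T * a).sum(axis=1)
    return (a[:, None] * b[None, :]) * f

# toy 3-zone example; costs in minutes of travel time
P = np.array([1000.0, 800.0, 600.0])
A = np.array([900.0, 900.0, 600.0])
c = np.array([[5.0, 15.0, 25.0],
              [15.0, 5.0, 10.0],
              [25.0, 10.0, 5.0]])

T = gravity_model(P, A, c, beta=0.08)
print(np.round(T, 1))
```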

Keywords: deterrence function, four-step modelling, origin destination, transport model

Procedia PDF Downloads 153
4315 The Use of Nano-Crystalline Starch in Probiotic Yogurt and Its Effects on the Physicochemical and Biological Properties

Authors: Ali Seirafi

Abstract:

The purpose of this study was to investigate the effect and application of starch nanocrystals on the physicochemical and microbial properties of industrially produced probiotic yogurt. In this study, probiotic yogurt was manufactured by an industrial method with optimization and control of the technological factors affecting the probiotic biomass, using the probiotic bacteria Lactobacillus acidophilus and Bifidobacterium bifidum together with commonly used yogurt starter cultures. Afterwards, the effects of different levels of fat (1.3%, 2.5% and 4%), as well as the effects of various prebiotic compounds, including starch nanocrystals (0.5%, 1% and 1.5%), galactooligosaccharide (0.5%, 1% and 1.5%) and fructooligosaccharide (0.5%, 1% and 1.5%), were evaluated. In addition, the effect of packaging (polyethylene and glass) was studied, while the effects of pH changes and final acidity were studied at each stage. In this research, all experiments were performed in three replications, and the results were analyzed in a completely randomized design with SAS version 9.1 software. The results of this study showed that the addition of starch nanocrystal compounds as well as the use of glass packaging had the most positive effects on the survival of Lactobacillus acidophilus, while the addition of nanocrystals and an increase in the cooling rate of the product had the most positive effects on the survival of Bifidobacterium bifidum.

Keywords: Bifidobacterium bifidum, Lactobacillus acidophilus, prebiotics, probiotic yogurt

Procedia PDF Downloads 144
4314 Sustainability Rating System for Infrastructure Projects in UAE

Authors: Amrutha Venugopal, Rabee Rustum

Abstract:

In spite of huge investments and the vital role infrastructure plays in the economy of the UAE, the country has not yet developed an assessment scheme to measure the sustainability of infrastructure projects and development. The aim of this study was to develop a sustainability rating system for infrastructure projects in the UAE using weighted indicator scoring. The identification of the list of 66 indicators was done by content analysis. The sources for the content analysis were government guidelines, research literature and sustainability rating systems for infrastructure projects, namely BCA Greenmark for Infrastructure (Singapore), ISCA (Australia) and Envision (USA). These indicators were shortlisted based on their relevance in the UAE. A mixture of qualitative and quantitative research methods was utilized to find the weighting to be applied to the indicators and to find suggestive measures to improve infrastructure sustainability in this region. Interviews and surveys were conducted with a good mix of experts from the industry. The data collected from the interviews were collated to provide suggestive measures for improving infrastructure sustainability. The collected survey data were analyzed using statistical analysis techniques to find the indicator weightings. The list of indicators was shortened by 75% to minimize the effort and investment in the process. The weighting of the deleted indicators was distributed among the critical clusters identified by Pareto analysis. Finally, a simple Microsoft Excel tool was developed as the rating tool, using the calculated weightings for the indicators.
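A weighted indicator scoring calculation of the kind implemented in the Excel tool can be sketched as follows; the indicator names, weights and scores below are illustrative, not the shortlisted set or the weightings derived from the survey.

```python
# Weighted indicator scoring: each indicator gets a weight and a 0-5 performance
# score; the project rating is the weighted sum normalised to a percentage.
indicators = {
    "energy efficiency":    {"weight": 0.20, "score": 4},
    "water management":     {"weight": 0.15, "score": 3},
    "materials and waste":  {"weight": 0.15, "score": 5},
    "ecology and land use": {"weight": 0.20, "score": 2},
    "community engagement": {"weight": 0.10, "score": 4},
    "whole-life cost":      {"weight": 0.20, "score": 3},
}

max_score = 5
weighted_total = sum(v["weight"] * v["score"] for v in indicators.values())
total_weight = sum(v["weight"] for v in indicators.values())
rating = 100 * weighted_total / (max_score * total_weight)
print(f"project sustainability rating: {rating:.1f}%")
```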

Keywords: infrastructure, rating system, suggestive measures, sustainability, UAE

Procedia PDF Downloads 288