Search results for: causal estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2238

738 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of progressive Type-I right censoring on the design of simple step-stress accelerated life testing, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion is to minimize the expected pre-posterior variance of the pth percentile of the failure-time distribution. The design variables are the stress changing time and the stress level of the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. They also show that using strong priors or a large sample size reduces the sensitivity of test precision to the censoring proportion; progressive Type-I right censoring is therefore recommended in these cases, as it reduces the cost of the test with little loss of precision. Finally, the results show that the choice between direct and indirect priors affects the precision of the test.
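As context for the optimization criterion, the Weibull pth percentile has a closed form, and its pre-posterior variance can be approximated by Monte Carlo draws of the Weibull parameters from the prior. The sketch below is illustrative only: the uniform prior ranges and the choice p = 0.10 are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_percentile(p, shape, scale):
    # Closed form: t_p = scale * (-ln(1 - p))**(1/shape)
    return scale * (-np.log(1.0 - p)) ** (1.0 / shape)

# Hypothetical priors on the Weibull shape and scale (illustrative only)
shapes = rng.uniform(1.5, 2.5, size=10_000)
scales = rng.uniform(900.0, 1100.0, size=10_000)

# Monte Carlo approximation of the percentile's prior-induced variance,
# the kind of quantity the design criterion seeks to minimize
draws = weibull_percentile(0.10, shapes, scales)
print(draws.var())
```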

Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution

Procedia PDF Downloads 492
737 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

The compression index is essential in foundation settlement calculations. The traditional method for determining it is the consolidation test, which is expensive and time-consuming. Many researchers have therefore used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of several popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations and underestimated by others. Sensitivity analyses of soil parameters including water content, liquid limit, and void ratio were performed. The results indicate that the compression index predicted from void ratio is the most accurate. An ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters do not provide better predictions than equations with a single soil parameter; in other words, it is not necessary to relate the compression index to multiple soil parameters. It was also noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations developed using a power regression technique are proposed, which yield more accurate predictions than existing equations.
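A power-regression prediction of the kind proposed can be sketched by fitting Cc = a * e0^b as a straight line in log-log space. The data values below are hypothetical, for illustration only; they are not the consolidation-test data used in the study.

```python
import numpy as np

# Hypothetical consolidation-test data: initial void ratio e0 and
# measured primary compression index Cc (illustrative values only)
e0 = np.array([0.6, 0.8, 1.0, 1.2, 1.5])
cc = np.array([0.15, 0.22, 0.30, 0.38, 0.51])

# Power model Cc = a * e0**b becomes linear after taking logarithms:
# ln(Cc) = ln(a) + b * ln(e0)
b, log_a = np.polyfit(np.log(e0), np.log(cc), 1)
a = np.exp(log_a)

def predict_cc(e0_value):
    # Predict the primary compression index from the void ratio
    return a * e0_value ** b
```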

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 144
736 Investigation of the IL23R Psoriasis/PsA Susceptibility Locus

Authors: Shraddha Rane, Richard Warren, Stephen Eyre

Abstract:

IL-23 is a pro-inflammatory cytokine that signals T cells to release cytokines such as IL-17A and IL-22. Psoriasis is driven by a dysregulated immune response, within which IL-23 is now thought to play a key role. Genome-wide association studies (GWAS) have identified a number of genetic risk loci that support the involvement of IL-23 signalling in psoriasis, in particular a robust susceptibility locus at a gene encoding a subunit of the IL-23 receptor (IL23R) (Stuart et al., 2015; Tsoi et al., 2012). The lead psoriasis-associated SNP rs9988642 is located approximately 500 bp downstream of IL23R but is in tight linkage disequilibrium (LD) with a missense SNP rs11209026 (R381Q) within IL23R (r2 = 0.85). The minor (G) allele of rs11209026 is present in approximately 7% of the population and is protective for psoriasis and several other autoimmune diseases, including IBD, ankylosing spondylitis, RA and asthma. The psoriasis-associated missense SNP R381Q causes an arginine-to-glutamine substitution in a region of the IL23R protein between the transmembrane domain and the putative JAK2 binding site in the cytoplasmic portion. This substitution is expected to affect the receptor’s surface localisation or signalling ability rather than IL23R expression. Recent studies have also identified a psoriatic arthritis (PsA)-specific signal at IL23R, thought to be independent of the psoriasis association (Bowes et al., 2015; Budu-Aggrey et al., 2016). The lead PsA-associated SNP rs12044149 is intronic to IL23R and is in LD with likely causal SNPs intersecting promoter and enhancer marks in memory CD8+ T cells (Budu-Aggrey et al., 2016). It is therefore likely that the PsA-specific SNPs affect IL23R function via a different mechanism than the psoriasis-specific SNPs. It can be hypothesised that the PsA risk allele located within the IL23R promoter causes an increase in IL23R expression relative to the protective allele.
Increased expression of IL23R might then lead to an exaggerated immune response. The independent genetic signals identified for psoriasis and PsA at this locus indicate that different mechanisms underlie the two conditions, although both likely affect the function of IL23R. It is very important to characterise these mechanisms further in order to better understand how the IL-23 receptor and its downstream signalling are affected in both diseases. This will help to determine how psoriasis and PsA patients might respond differently to therapies, particularly IL-23 biologics. To investigate this further, we have developed an in vitro model using CD4 T cells that express either wild-type IL23R and IL12Rβ1 or mutant IL23R (R381Q) and IL12Rβ1. A model expressing different isoforms of IL23R is also under development to investigate the effects on IL23R expression. We propose to investigate the psoriasis and PsA variants further and to characterise the key intracellular processes related to them.

Keywords: IL23R, psoriasis, psoriatic arthritis, SNP

Procedia PDF Downloads 151
735 The Cartometric-Geographical Analysis of Ivane Javakhishvili 1922: The Map of the Republic of Georgia

Authors: Manana Kvetenadze, Dali Nikolaishvili

Abstract:

The study reveals the territorial changes of Georgia during the Soviet and post-Soviet periods, including the estimation of the country's borders, changes in its administrative-territorial arrangement, and the establishment of territorial losses. Georgia’s old and new borders, both marked on the map, are of great interest. The new boundary shows the situation in 1922, during the Soviet period. Neither on this map nor in his other works does Ivane Javakhishvili specify what he means by the old borders, though it is evident that this is the pre-Soviet boundary before 1921, i.e., the period when historical Tao, Zaqatala, Lore, and Karaia were parts of Georgia. The work presents a detailed cartometric-geographical analysis of Georgia’s borders, and the results were compared 1) with the boundary line on Soviet topographic maps at 1:100,000, 1:50,000 and 1:25,000 scales, and 2) with Ivane Javakhishvili’s work 'The Borders of Georgia in Terms of Historical and Contemporary Issues'. During this research, we used a multidisciplinary methodology and software: maps were georeferenced in ArcGIS, and all post-Soviet maps were then compared in order to determine how the borders have changed, drawing also on numerous historical sources. The features of the spatial distribution of the administrative-territorial units of Georgia, as well as of the objects depicted on the map, have been established. The results are presented in the form of thematic maps and diagrams.

Keywords: border, GIS, Georgia, historical cartography, old maps

Procedia PDF Downloads 225
734 Investigating the Dimensions of Perceived Attributions in Making Sense of Failure: An Exploratory Study of Lebanese Entrepreneurs

Authors: Ghiwa Dandach

Abstract:

By challenging the anti-failure bias and contributing to the theoretical territory of attribution theory, this thesis develops a comprehensive process for entrepreneurial learning from failure. The practical implication of the findings is to assist entrepreneurs (current, failing, and nascent) in effectively anticipating and reflecting upon failure. Additionally, the process is suggested to enhance the level of institutional and private (accelerator and financer) support provided to entrepreneurs, which may improve future opportunities for entrepreneurial success. Hence, exploring learning from failure is argued to affect the potential survival of future ventures, subsequently revitalizing the economic contribution of entrepreneurship. This learning process can be enhanced by the cognitive development of causal ascriptions for failure, which ultimately shape learning outcomes. However, the mechanism by which entrepreneurs make sense of failure, reflect on the journey, and transform experience into knowledge is still under-researched. More specifically, the cognitive process of failure attribution is under-explored, particularly in the context of developing economies, calling for a more insightful understanding of how entrepreneurs ascribe failure. Responding to the call for more thorough research in such cultural contexts, this study expands the understanding of the dimensions of failure attributions as perceived by entrepreneurs, and of the impact of these dimensions on learning outcomes, in the Lebanese context. The research adopted an exploratory interpretivist paradigm and collected data first from interviews with industry experts, followed by narratives of entrepreneurs, using a qualitative multimethod approach.
The holistic and categorical content analysis of the narratives, preceded by a thematic analysis of the interviews, unveiled how entrepreneurs ascribe failure by developing minor and major dimensions of each failure attribution. The findings also revealed how each dimension affects learning from failure when accompanied by emotional resilience. The thesis concludes that exploring the dimensions of failure attributions in depth significantly determines the level of learning generated. Moving beyond the simple categorisation of ascriptions as primarily internal or external unveiled how learning may occur with each attribution at the individual, venture, and ecosystem levels. This further accentuated that a major internal attribution of failure combined with a minor external attribution generated the highest levels of transformative and double-loop learning, emphasizing the role of personal blame and responsibility in enhancing learning outcomes.

Keywords: attribution, entrepreneurship, reflection, sense-making, emotions, learning outcomes, failure, exit

Procedia PDF Downloads 201
733 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks

Authors: Michael Josef Schwerer

Abstract:

Background: The identification of unknown victims from mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification or at least the characterization of an unknown perpetrator from criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints, which had been previously stored in criminal records. In scenarios that result in high levels of destruction of the perpetrator’s corpse, for instance, blast or fire events, the chance for a positive identification using standard techniques is further impaired. Objectives: This study shows the forensic genetic procedures in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including such cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the Institution’s essential missions. Further, casework by military police or military intelligence is supported based on administrative cooperation. In the talk, data from study projects, as well as examples from real casework, will be demonstrated and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of Short Tandem Repeats and Single Nucleotide Polymorphisms in nuclear DNA along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color together with the investigation of a person’s biogeographic ancestry. 
Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic Investigative Genealogy assessment allows the detection of an unknown person’s blood relatives in reference databases. Technically, end-point PCR, real-time PCR, capillary electrophoresis, and pyrosequencing, as well as next-generation sequencing using flow-cell-based and chip-based systems, are used. Results and Discussion: Optimization of DNA extraction from various sources, including difficult matrices like formalin-fixed, paraffin-embedded tissues and degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, provides the basis for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole-genome preamplification protocols are successfully used, particularly for genetic phenotyping. Improved primer design for CpG-methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals, or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by family-tree searches employing Forensic Investigative Genealogy, can be provided in specialized laboratories. However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.

Keywords: identification, forensic genetics, phenotypic markers, CpG methylation, biological age estimation, forensic investigative genealogy

Procedia PDF Downloads 33
732 Enhanced Calibration Map for a Four-Hole Probe for Measuring High Flow Angles

Authors: Jafar Mortadha, Imran Qureshi

Abstract:

This research explains the modern techniques used for measuring the flow angles of a flowing fluid and compares them with the traditional technique of using multi-hole pressure probes. In particular, the focus of the study is on four-hole probes, which offer great reliability and benefits in several applications where the use of modern measurement techniques is either inconvenient or impractical. Thanks to modern advancements in manufacturing, small multi-hole pressure probes can be made with high precision, which eliminates the need to calibrate every manufactured probe. This study aims to extend the range of calibration maps for a four-hole probe so that high flow angles can be measured accurately. The research methodology comprises a literature review of the calibration definitions that have been implemented successfully on five-hole probes. These definitions are then adapted and applied to a four-hole probe using a set of raw pressure data. A comparison of the different definitions is carried out in Matlab, and the results are analyzed to determine the best calibration definition. Taking into account both simplicity of implementation and the reliability of the flow-angle estimates, a technique adapted from a research paper written in 2002 offered the most promising outcome. Consequently, the method is seen as a good enhancement for four-hole probes and can replace existing, less accurate calibration definitions.
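For illustration, non-dimensional calibration coefficients of the kind adapted from five-hole-probe practice can be computed as below. The hole numbering and the exact coefficient definitions are assumptions made for this sketch; the adapted 2002 definitions are not reproduced here.

```python
def calib_coefficients(p1, p2, p3, p4):
    """Non-dimensional calibration coefficients for a four-hole probe.

    Assumed layout (hypothetical): p1 is the central (quasi-total)
    pressure hole, p2/p3 are the lateral (yaw) holes, and p4 is the
    pitch hole. The definitions follow the common five-hole-probe
    convention adapted to four holes; actual published definitions vary.
    """
    p_avg = (p2 + p3 + p4) / 3.0   # average of the side-hole pressures
    denom = p1 - p_avg             # pseudo-dynamic-pressure normalizer
    c_yaw = (p2 - p3) / denom      # antisymmetric in the yaw holes
    c_pitch = (p1 - p4) / denom    # illustrative pitch definition
    return c_yaw, c_pitch
```

A calibration map is then built by recording (c_yaw, c_pitch) over a grid of known flow angles and inverting the map during measurement.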

Keywords: calibration definitions, calibration maps, flow measurement techniques, four-hole probes, multi-hole pressure probes

Procedia PDF Downloads 279
731 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioral Economics and Econometrics

Authors: Okay Gunes

Abstract:

In this article, we propose, for the first time in the literature on this subject, a new method for measuring well-being inequality through a model composed of superimposing satisfaction waves. The displacement of a household’s satisfactory state (i.e., satisfaction) is defined on a satisfaction string. The duration of the satisfactory state over a given period of time is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Individual cardinal total satisfaction values are then computed using a one-dimensional scalar sinusoidal (harmonic) travelling-wave function, with satisfaction waves of varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence it is impossible to measure or know simultaneously the values of these observables from the dataset. We therefore crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the observed uncertainties with households’ perceived uncertainties (i.e., corrected standard deviations) obtained through the logarithmic psychophysical law of Weber and Fechner.

Keywords: Heisenberg uncertainty principle, superimposing satisfaction waves, Weber–Fechner law, well-being inequality

Procedia PDF Downloads 425
730 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France

Authors: Aiman Mazhar Qureshi, Ahmed Rachid

Abstract:

Extreme heat events are an emerging environmental health concern in dense urban areas due to anthropogenic activities. High-spatial- and temporal-resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat-stress mitigation strategies. Amiens is a medium-sized French city where the average temperature has increased by +1°C since the year 2000. Extreme heat events were recorded in the month of July in three consecutive years, 2018, 2019 and 2020, and poor air quality, especially ground-level ozone, was observed mainly during the same hot periods. In this study, we evaluated the HVI in Amiens during the extreme heat days of those three years. The Principal Component Analysis (PCA) technique was used for fine-scale vulnerability mapping. The main data considered for developing the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) heat-related illness among the elderly; (e) social vulnerability; and (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resultant map shows that high HVI exists in three typical areas: (1) where the population density is high and the vegetation cover is sparse; (2) artificial (built-up) surfaces; and (3) industrial zones that release thermal energy and ground-level ozone. Areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates the system theory with a causal diagram after data analysis, in which anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city.
Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation interventions.
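A generic PCA-based composite index of the kind described can be sketched as follows. The standardization step and the explained-variance weighting are illustrative assumptions, not necessarily the exact scheme used in the study, and the indicator matrix is hypothetical.

```python
import numpy as np

def heat_vulnerability_index(X):
    """Composite index from a (zones x indicators) data matrix via PCA.

    Indicators might include population density, LST, NDVI, etc.
    This is a generic sketch: indicators are standardized, principal
    components are extracted via SVD, and component scores are combined
    using explained-variance weights to yield one HVI value per zone.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize indicators
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt.T                            # principal-component scores
    weights = s**2 / (s**2).sum()                # explained-variance weights
    return scores @ weights                      # one HVI value per zone
```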

Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation

Procedia PDF Downloads 128
729 Adaptive Environmental Control System Strategy for Cabin Air Quality in Commercial Aircrafts

Authors: Paolo Grasso, Sai Kalyan Yelike, Federico Benzi, Mathieu Le Cam

Abstract:

The cabin air quality (CAQ) in commercial aircraft is of prime interest, especially in the context of the COVID-19 pandemic. Current Environmental Control Systems (ECS) rely on a prescribed fresh airflow per passenger to dilute contaminants. An adaptive ECS strategy is proposed, leveraging air sensing and filtration technologies to ensure better CAQ. This paper investigates the CAQ level achieved in commercial aircraft cabins during various flight scenarios. The modeling and simulation analysis is performed in a Modelica-based environment describing the dynamic behavior of the system. The model includes three main systems: the cabin, the recirculation loop, and the air-conditioning pack. The cabin model evaluates the thermo-hygrometric conditions and the air quality in the cabin depending on the number of passengers and crew members, the outdoor conditions, and the conditions of the air supplied to the cabin. The recirculation loop includes models of the recirculation fan, ordinary and novel filtration technology, the mixing chamber, and the outflow valve. The air-conditioning pack includes models of the heat exchangers and turbomachinery needed to condition the hot pressurized air bled from the engine, as well as selected contaminants originating from the outside or bled from the engine. Different ventilation control strategies are modeled and simulated. Currently, a limited understanding of contaminant concentrations in the cabin and the lack of standardized, systematic methods to collect and record data make it challenging to establish a causal relationship between CAQ and passengers' comfort. As a result, contaminants are neither measured nor filtered during flight, and the current sub-optimal way to avoid their accumulation is dilution with fresh air flow. However, the use of a prescribed amount of fresh air comes at a cost, making the ECS the most energy-demanding non-propulsive system within an aircraft.
In this context, the study shows that an ECS based on a reduced, adaptive fresh air flow, relying on air sensing and filtration technologies, provides promising results in terms of CAQ control. The comparative simulation results demonstrate that the proposed adaptive ECS brings substantial improvements to the CAQ, both in controlling the asymptotic values of contaminant concentrations and in mitigating hazardous scenarios such as fume events. Novel architectures allowing adaptive control of the inlet air flow rate based on monitored CAQ will change the requirements for filtration systems and redefine ECS operation.
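The dilution-versus-filtration trade-off described above can be illustrated with a minimal well-mixed cabin mass balance. The function below is a sketch under simplifying assumptions (perfect mixing, constant flows, explicit Euler integration); all parameter names and values are hypothetical, not taken from the paper's Modelica model.

```python
def cabin_concentration_step(c, dt, volume, q_fresh, q_rec, eta_filter,
                             c_ambient, source):
    """One explicit-Euler step of a well-mixed cabin contaminant balance.

    c: current cabin concentration; volume [m^3]; q_fresh/q_rec [m^3/s]
    are fresh and recirculated flows; eta_filter is the filter removal
    efficiency on the recirculated stream; source is in-cabin generation.
    """
    inflow = q_fresh * c_ambient + q_rec * (1.0 - eta_filter) * c
    outflow = (q_fresh + q_rec) * c
    dcdt = (inflow - outflow + source) / volume
    return c + dt * dcdt
```

At steady state, c settles at (q_fresh * c_ambient + source) / (q_fresh + q_rec * eta_filter), showing how filtration efficiency can substitute for fresh airflow in holding the asymptotic concentration down.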

Keywords: cabin air quality, commercial aircraft, environmental control system, ventilation

Procedia PDF Downloads 84
728 Simulation of Improving the Efficiency of a Fire-Tube Steam Boiler

Authors: Roudane Mohamed

Abstract:

In this study, we are interested in improving the efficiency of a 4.5 t/h steam boiler and minimizing the flue-gas discharge temperature by adding a counter-flow heat exchanger at the boiler outlet. The mathematical approach to the problem is based on the equations of heat transfer by convection and conduction, chosen because of their extensive use in a wide range of applications. Software was developed to solve the equations governing these phenomena and to estimate the thermal characteristics of the boiler through a study of the thermal characteristics of the heat exchanger by both the LMTD and NTU methods. Subsequently, an analysis of the thermal performance of the steam boiler was carried out by studying the influence of different operating parameters on heat flux densities, temperatures, exchanged power, and performance. The study showed that the behavior of the boiler is strongly influenced by these parameters. In the first regime (P = 3.5 bar), the boiler efficiency improved significantly, from 93.03% to 99.43%, i.e., by 6.47% and 4.5%; at maximum speed, the change is smaller, of the order of 1.06%. The results obtained in this study are of great interest to industrial utilities equipped with fire-tube boilers: preheating the combustion air raises the actual gas temperature, so the heat exchanged is increased and the flue-gas discharge temperature is minimized. This work could also serve as a computational model in the design process.
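The two heat-exchanger sizing methods mentioned (LMTD and effectiveness-NTU) reduce to short standard formulas; a minimal sketch for a counter-flow exchanger follows. These are the textbook relations, not the full boiler model.

```python
import math

def lmtd(dt1, dt2):
    # Log-mean temperature difference between the two ends of the exchanger
    return (dt1 - dt2) / math.log(dt1 / dt2)

def effectiveness_counterflow(ntu, cr):
    """Effectiveness-NTU relation for a counter-flow heat exchanger.

    ntu = UA / C_min; cr = C_min / C_max (heat-capacity-rate ratio).
    """
    if abs(cr - 1.0) < 1e-12:          # balanced-flow limiting case
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)
```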

Keywords: numerical simulation, efficiency, fire tube, heat exchanger, convection and conduction

Procedia PDF Downloads 206
727 Sidelobe Free Inverse Synthetic Aperture Radar Imaging of Non Cooperative Moving Targets Using WiFi

Authors: Jiamin Huang, Shuliang Gui, Zengshan Tian, Fei Yan, Xiaodong Wu

Abstract:

In recent years, with the rapid development of radio-frequency technology, the differences between radar sensing and wireless communication in terms of receiving and transmitting channels, signal processing, and data management and control have been gradually shrinking, and there is a trend toward integrated communication and radar sensing. However, most existing radar imaging technologies based on communication signals are combined with synthetic aperture radar (SAR) imaging, which does not match the practical application case of integrated communication and radar. Therefore, this paper proposes a high-precision imaging method using communication signals based on the mechanism of inverse synthetic aperture radar (ISAR) imaging. The method makes full use of the structural characteristics of the orthogonal frequency division multiplexing (OFDM) signal, so the sidelobe effect in range compression is removed, and it combines the Radon transform with fractional Fourier transform (FrFT) parameter estimation to achieve ISAR imaging of non-cooperative targets. Simulation experiments and measured results verify the feasibility and effectiveness of the method and demonstrate its broad application prospects in the field of intelligent transportation.
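The sidelobe-free range compression enabled by the OFDM structure can be sketched as follows: dividing the received subcarrier samples by the known transmitted symbols removes the data modulation exactly, so a point target compresses to a delta with no range sidelobes. The toy scenario (64 subcarriers, one noiseless scatterer) is an assumption for illustration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                    # number of OFDM subcarriers

# Random QPSK symbols on each subcarrier (the known transmitted signal)
tx = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))

# A single point scatterer at range bin 10 appears at the receiver as
# the transmitted symbols multiplied by a linear phase ramp
delay_bin = 10
rx = tx * np.exp(-2j * np.pi * np.arange(n) * delay_bin / n)

# Element-wise division cancels the data modulation entirely, so the
# IDFT of the resulting pure phase ramp is a delta: no range sidelobes
profile = np.abs(np.fft.ifft(rx / tx))
print(profile.argmax())   # -> 10
```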

Keywords: integration of communication and radar, OFDM, Radon transform, FrFT, ISAR

Procedia PDF Downloads 102
726 The Characteristics of Quantity Operation for 2nd and 3rd Grade Mathematics Slow Learners

Authors: Pi-Hsia Hung

Abstract:

The development of mathematical competency benefits individuals as well as the wider society. Children who begin school behind their peers in their understanding of number, counting, and simple arithmetic are at high risk of staying behind throughout their schooling. The development of effective strategies for improving the educational trajectory of these individuals will be contingent on identifying the areas of early quantitative knowledge that influence later mathematics achievement. A computer-based quantity assessment was developed in this study to investigate the characteristics of 2nd and 3rd grade slow learners in quantity. The concept of quantification involves understanding measurements, counts, magnitudes, units, indicators, relative size, and numerical trends and patterns. Fifty-five tasks of quantitative reasoning—such as number sense, mental calculation, estimation, and assessment of the reasonableness of results—are included as quantity problem solving. Quantity is thus defined in this study as applying knowledge of number and number operations in a wide variety of authentic settings. Around 1,000 students were tested and categorized into four performance levels. Students’ quantity ability correlated more strongly with their school mathematics grades than with other subjects. Around 20% of the students are below the basic level. The implications of the preliminary item map for intervention design are discussed.

Keywords: mathematics assessment, mathematical cognition, quantity, number sense, validity

Procedia PDF Downloads 230
725 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
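The weighted loss function mentioned can be sketched as a weighted sum of mean-squared residuals over the PDE collocation, initial, and boundary points. The residual arrays and weight names below are placeholders for illustration; they are not the exact formulation used for the FLE.

```python
import numpy as np

def weighted_pinn_loss(res_pde, res_ic, res_bc, w_pde=1.0, w_ic=1.0, w_bc=1.0):
    """Weighted composite PINN loss (mean-squared residuals).

    res_pde, res_ic, res_bc are residual arrays evaluated at collocation,
    initial-condition, and boundary-condition points respectively; the
    adjustable weights w_* correspond to the weighted-loss idea in the
    abstract (residuals may be complex-valued for optical fields).
    """
    mse = lambda r: float(np.mean(np.abs(r) ** 2))
    return w_pde * mse(res_pde) + w_ic * mse(res_ic) + w_bc * mse(res_bc)
```

In training, an optimizer such as ADAM (then L-BFGS) would minimize this scalar with respect to the network parameters, tuning the weights to balance the three terms.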

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 59
724 Assessment of Indoor Air Pollution in Naturally Ventilated Dwellings of Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as “the air quality within and around buildings, especially as it relates to the health and comfort of building occupants”. According to the 2021 report by the Energy Policy Institute at Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. With the urban population currently spending 90% of its time indoors, this scenario raises concerns for occupant health and well-being. This study attempts to demonstrate the causal relationship between indoor air pollution and its determining aspects. Detailed indoor air pollution audits were conducted in residential buildings located in Kolkata, India, in the months of December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are most critical due to unfavourable environmental conditions: while emissions remain roughly constant throughout the year, cold air is denser and moves more slowly than warm air, trapping pollution in place for much longer, so it is breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors such as traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the dominant middle-income group in the urban area of the metropolitan region.
The data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses the relationship between indoor air pollution levels and the factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window to floor area ratio, windward or leeward side openings, the type of natural ventilation in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.

Keywords: indoor air quality, occupant health, air pollution, architecture, urban environment

Procedia PDF Downloads 91
723 Quantitative Assessment of Soft Tissues by Statistical Analysis of Ultrasound Backscattered Signals

Authors: Da-Ming Huang, Ya-Ting Tsai, Shyh-Hau Wang

Abstract:

Ultrasound signals backscattered from soft tissues depend mainly on the size, density, distribution, and other elastic properties of the scatterers in the interrogated sample volume. The quantitative analysis of ultrasonic backscattering is frequently implemented using a statistical approach because backscattered signals tend to behave as random variables. Thus, statistical analysis, such as Nakagami statistics, has been applied to characterize the density and distribution of the scatterers in a sample. Yet, the accuracy of statistical analysis can be readily affected by the received signals, which depend on the nature of the incident ultrasound wave and the acoustic properties of the sample. In the present study, therefore, efforts were made to explore the effects of ultrasound operational modes and biological tissue attenuation on the estimation of the corresponding Nakagami statistical parameter (the m parameter). In vitro measurements were performed on healthy and pathological fibrotic porcine livers using single-element ultrasound transducers with frequencies of 3.5 to 7.5 MHz and incident tone-burst duty cycles of 10 to 50%. Results demonstrated that the estimated m parameter tends to be sensitively affected by the ultrasound operational mode as well as by tissue attenuation. Healthy and pathological tissues may be characterized quantitatively by the m parameter under fixed measurement conditions and proper calibration.
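
The Nakagami m parameter mentioned above has a simple moment-based estimator; a minimal sketch on a synthetic Rayleigh envelope (for which m should be close to 1), not the paper's measured data:

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based estimate of the Nakagami shape parameter:
    m = (E[R^2])^2 / Var(R^2), where R is the backscattered envelope.
    m < 1 suggests pre-Rayleigh (sparse/clustered scatterers),
    m ~ 1 Rayleigh, m > 1 post-Rayleigh statistics."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()

# Synthetic Rayleigh envelope: magnitude of complex Gaussian scattering
rng = np.random.default_rng(0)
env = np.hypot(rng.normal(size=100_000), rng.normal(size=100_000))
m_hat = nakagami_m(env)
```

Under fixed measurement conditions, shifts of m_hat away from the calibrated value would then reflect changes in scatterer density or distribution.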

Keywords: ultrasound backscattering, statistical analysis, operational mode, attenuation

Procedia PDF Downloads 305
722 Managerial Overconfidence, Payout Policy, and Corporate Governance: Evidence from UK Companies

Authors: Abdullah AlGhazali, Richard Fairchild, Yilmaz Guney

Abstract:

We examine the effect of managerial overconfidence on UK firms’ payout policy for the period 2000 to 2012. The analysis incorporates, in addition to common firm-specific factors, a wide range of corporate governance factors and managerial characteristics that have been documented to affect the relationship between overconfidence and payout policy. Our results are robust to several estimation considerations. The findings show that the influence of overconfident CEOs on the amount of, and the propensity to pay, dividends is significant within the UK context. Specifically, we detect a reduction in dividend payments in firms managed by overconfident managers compared to their non-overconfident counterparts. Moreover, we affirm that cash flows, firm size, and profitability are positively correlated, while leverage, firm growth, and investment are negatively correlated, with both the amount of and the propensity to pay dividends. Interestingly, we demonstrate that firms with the potential for undervaluation reduce dividend payments. Some of the corporate governance factors are shown to motivate firms to pay more dividends, while these factors seem to have no influence on the propensity to pay dividends. The results also show that, in general, higher overconfidence leads to more share repurchases but a lower total payout. Overall, managerial overconfidence should be considered an important factor influencing payout policy, in addition to other known factors.

Keywords: dividends, repurchases, UK firms, overconfidence, corporate governance, undervaluation

Procedia PDF Downloads 254
721 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database

Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan

Abstract:

Applying anthropometric dimensions is one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions was improved by developing an artificial neural network that predicts the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males aged 6 to 60 participated in measuring twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set as the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions of the population of Saudi Arabia significantly well. The network mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model. The ANN model performed better and yielded excellent correlation coefficients between the predicted and actual dimensions.
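
The two error metrics reported (MAPE and RMSE) can be sketched directly; the stature values below are hypothetical placeholders, not entries from the Saudi database:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, as a fraction (e.g. 0.0348)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual))

def rmse(actual, predicted):
    """Root mean squared error, in the units of the measurement."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Hypothetical stature predictions (mm) against measured values
measured  = [1702.0, 1654.0, 1780.0, 1698.0]
predicted = [1690.0, 1660.0, 1775.0, 1705.0]
err_pct = mape(measured, predicted)
err_rms = rmse(measured, predicted)
```

Comparing these two figures against a multiple regression baseline, as the study does, is then a matter of computing them for both sets of predictions.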

Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database

Procedia PDF Downloads 555
720 Estimation of Thermal Conductivity of Nanofluids Using MD-Stochastic Simulation-Based Approach

Authors: Sujoy Das, M. M. Ghosh

Abstract:

The thermal conductivity of a fluid can be significantly enhanced by dispersing nano-sized particles in it; the resultant fluid is termed a "nanofluid". A theoretical model for estimating the thermal conductivity of a nanofluid is proposed here. It is based on the mechanism that evenly dispersed nanoparticles within a nanofluid undergo Brownian motion, in the course of which the nanoparticles repeatedly collide with the heat source. During each collision a rapid heat transfer occurs owing to the solid-solid contact. Molecular dynamics (MD) simulation of the collision of nanoparticles with the heat source has shown that there is a pulse-like pickup of heat by the nanoparticles within 20-100 ps, the extent of which depends not only on the thermal conductivity of the nanoparticles but also on their elastic and other physical properties. After the collision, the nanoparticles undergo Brownian motion in the base fluid and release the excess heat to the surrounding base fluid within 2-10 ms. The Brownian motion and associated temperature variation of the nanoparticles have been modeled by stochastic analysis. The repeated occurrence of these events by the suspended nanoparticles contributes significantly to the characteristic thermal conductivity of nanofluids, which has been estimated by the present model for an ethylene-glycol-based nanofluid containing Cu nanoparticles of size ranging from 8 to 20 nm with a Gaussian size distribution. The prediction of the present model shows reasonable agreement with the experimental data available in the literature.
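
The Brownian-motion stage can be sketched with a Stokes-Einstein diffusivity and Gaussian displacements; the viscosity value and time scale below are illustrative assumptions, not the paper's MD inputs:

```python
import numpy as np

# Stokes-Einstein diffusivity of a nanoparticle in the base fluid.
# All numbers are illustrative: ethylene glycol near 300 K, 20 nm Cu.
K_B = 1.380649e-23         # Boltzmann constant, J/K
T = 300.0                  # temperature, K
MU = 1.61e-2               # dynamic viscosity of ethylene glycol, Pa.s (approx.)
D_P = 20e-9                # nanoparticle diameter, m
D = K_B * T / (3.0 * np.pi * MU * D_P)   # diffusivity, m^2/s

# Gaussian Brownian displacement over one heat-release interval:
# each coordinate has variance 2*D*t, so the mean squared
# displacement averaged over many particles approaches 6*D*t.
rng = np.random.default_rng(1)
t_release = 5e-3           # s, within the 2-10 ms release window above
disp = rng.normal(0.0, np.sqrt(2.0 * D * t_release), size=(2000, 3))
msd = np.mean((disp ** 2).sum(axis=1))
```

The distance covered between collisions with the heat source follows from this diffusivity, which is the quantity the stochastic part of the model tracks alongside the particle temperature.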

Keywords: brownian dynamics, molecular dynamics, nanofluid, thermal conductivity

Procedia PDF Downloads 363
719 The Role of Temporary Migration as Coping Mechanism of Weather Shock: Evidence from Selected Semi-Arid Tropic Villages in India

Authors: Kalandi Charan Pradhan

Abstract:

In this study, we investigate whether weather variation determines temporary labour migration, using 210 sample households from six Semi-Arid Tropic (SAT) villages in India for the period 2005-2014. The study attempts to examine how households use temporary labour migration as a coping mechanism to minimise risk rather than to maximize household utility. The study employs a panel logit regression model to predict the probability of a household having at least one temporary labour migrant. The econometric results show that, along with demographic and socioeconomic factors, weather variation plays an important role in determining migration decisions at the household level. To capture weather variation, the study uses the mean crop yield deviation over the study period. Based on the random effects logit regression results, the study finds a concave relationship between weather variation and the decision to migrate temporarily. This argument supports the theory of the New Economics of Labour Migration (NELM), which holds that labour migration decisions not only maximise household utility but also help to minimise risk.
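
The household-level decision model can be sketched as a logit with the yield deviation and its square as regressors, the square capturing the concave relationship reported; the data below are synthetic, and the plain Newton-Raphson fit stands in for the panel estimator used in the study:

```python
import numpy as np

def fit_logit(X, y, n_iter=30):
    """Newton-Raphson ML fit of a logit model P(migrate=1|x) = sigma(Xb)."""
    X = np.column_stack([np.ones(len(X)), X])        # add intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                            # IRLS weights
        H = X.T @ (X * W[:, None])                   # information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))    # score step
    return beta

# Synthetic households: migration probability is an inverted-U
# (concave) function of the mean crop-yield deviation.
rng = np.random.default_rng(2)
dev = rng.normal(size=5000)
true_logit = 0.5 + 1.0 * dev - 0.8 * dev**2
y = (rng.random(5000) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
beta = fit_logit(np.column_stack([dev, dev**2]), y)
```

A negative coefficient on the squared deviation is what the concavity finding corresponds to in this parameterization.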

Keywords: temporary migration, socioeconomic factors, weather variation, crop yield, logit estimation

Procedia PDF Downloads 206
718 Variation of Airfoil Pressure Profile Due to Confined Air Streams: Application in Gas-Oil Separators

Authors: Amir Hossein Haji, Nabeel Al-Rawahi, Gholamreza Vakili-Nezhaad

Abstract:

An innovative design is examined for a gas-oil separator based on pressure reduction over an airfoil surface. The primary motivations are to shorten the release trajectory of the bubbles by minimizing the thickness of the oil layer and to improve the uniformity of the pressure reduction zones. Restricted airflow over an airfoil is investigated for its effect on pressure drop enhancement and on the maximum attainable angle of attack prior to the stall condition. Aerodynamic separation is delayed, based on numerical simulation of the Wortmann FX 63-137 airfoil in a confined domain using FLUENT 6.3.26. The proposed setup results in a higher pressure drop compared with the free-stream case. With the aim of minimizing power consumption, we pursued further restriction to an air jet over the airfoil. A curved strip model is then suggested for the air jet, which can be applied as an analysis/design tool for finding the best performance conditions. Pressure reduction is shown to be inversely proportional to the curvature of the upper airfoil profile. This reduction occurs within the tracking zones, where the air jet is effectively attached to the airfoil surface. The zero-slope condition is suggested to estimate the onset of these zones, after which the minimum curvature should be searched for. The corresponding zero-slope curvature is applied for estimation of the maximum pressure drop, which shows satisfactory agreement with the simulation results.
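
The zero-slope onset and minimum-curvature search can be sketched on a sampled profile; the arc below is an illustrative stand-in, not the actual FX 63-137 coordinates:

```python
import numpy as np

def curvature(x, y):
    """Signed curvature k = y'' / (1 + y'^2)^(3/2) of a sampled profile."""
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    return d2y / (1.0 + dy**2) ** 1.5

# Illustrative upper-surface profile: a cambered arc with a crest.
x = np.linspace(0.0, 1.0, 501)
y = 0.12 * np.sin(np.pi * x)            # zero slope at the crest, x = 0.5

slope = np.gradient(y, x)
i_crest = np.argmin(np.abs(slope))      # zero-slope condition: zone onset
k = curvature(x, y)
k_min = np.abs(k[i_crest:]).min()       # minimum curvature after the crest
```

In the analysis above, the location of `i_crest` marks where the tracking zone begins, and the minimum curvature downstream of it sets the estimate of the maximum attainable pressure drop.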

Keywords: airfoil, air jet, curved fluid flow, gas-oil separator

Procedia PDF Downloads 444
717 Carbon Stock Estimation of Urban Forests in Selected Public Parks in Addis Ababa

Authors: Meseret Habtamu, Mekuria Argaw

Abstract:

Urban forests can help to improve the microclimate and air quality. Urban forests in Addis Ababa are important sinks for GHGs as the number of vehicles and traffic congestion are steadily increasing. The objective of this study was to characterize the vegetation types in selected public parks and to estimate the carbon stock potential of urban forests by assessing carbon in the above- and below-ground biomass, in the litter, and in the soil. Vegetation samples were taken using systematic transect sampling; species with DBH ≥ 5 cm were recorded to measure the above-ground biomass, the below-ground biomass, and the amount of C stored. An allometric model (Y = 34.4703 - 8.0671(DBH) + 0.6589(DBH²)) was used to calculate the above-ground biomass, the below-ground biomass was taken as BGB = AGB × 0.2, and sampling of soil and litter was based on quadrats. There were 5038 trees with DBH ≥ 5 cm recorded from the selected study sites. Most of the parks had a large number of indigenous species, but the number of exotic trees is much larger than that of the indigenous trees. The mean above-ground and below-ground biomass is 305.7 ± 168.3 and 61.1 ± 33.7, respectively, and the mean carbon in the above-ground and below-ground biomass is 143.3 ± 74.2 and 28.1 ± 14.4, respectively. The mean CO2 in the above-ground and below-ground biomass is 525.9 ± 272.2 and 103.1 ± 52.9, respectively. The mean carbon in dead litter and soil carbon were 10.5 ± 2.4 and 69.2 t ha-1, respectively. Urban trees reduce atmospheric carbon dioxide (CO2) through sequestration, which is important for climate change mitigation; they are also important for recreation, medicinal value, aesthetics, and biodiversity conservation.
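
The allometric relationships quoted above can be sketched as a short helper; the 0.47 carbon fraction and the 44/12 CO2 conversion are assumptions consistent with the ratios of the reported means, not values stated explicitly in the abstract:

```python
def tree_carbon(dbh_cm):
    """Biomass, carbon, and CO2 equivalent for one tree, per the
    allometric model quoted in the study:
        AGB = 34.4703 - 8.0671*DBH + 0.6589*DBH^2   (DBH in cm)
        BGB = 0.2 * AGB
    A carbon fraction of 0.47 and the standard 44/12 molar ratio for
    converting C mass to CO2 are assumed here for illustration."""
    agb = 34.4703 - 8.0671 * dbh_cm + 0.6589 * dbh_cm**2
    bgb = 0.2 * agb
    carbon = 0.47 * (agb + bgb)
    co2 = carbon * 44.0 / 12.0
    return agb, bgb, carbon, co2

agb, bgb, carbon, co2 = tree_carbon(25.0)   # a hypothetical 25 cm DBH tree
```

Summing these per-tree values over the inventoried stems, plus the litter and soil pools, gives the park-level carbon stock.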

Keywords: biodiversity, carbon sequestration, climate change, urban forests

Procedia PDF Downloads 210
716 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, high-precision road maps are increasingly needed in many applications. Automatic lane-line extraction and modeling are the most essential steps in the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which selects the laser intensity value that maximizes the variance between background and road markings. The extracted road marking points are then projected onto a raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangles. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results show that our method achieves excellent extraction and clustering performance, and the fitted lines reach a high positional accuracy, with an error of less than 10 cm.
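
The between-class-variance criterion at the heart of the thresholding step can be sketched for a single region; the multi-region variant applies the same computation per tile, and the synthetic intensities below are illustrative rather than the paper's laser data:

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance, separating bright road-marking returns
    from the darker asphalt background."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                       # background class weight
    w1 = 1.0 - w0                           # marking class weight
    csum = np.cumsum(p * centers)
    mu_t = csum[-1]
    mu0 = csum / np.where(w0 > 0, w0, 1.0)
    mu1 = (mu_t - csum) / np.where(w1 > 0, w1, 1.0)
    between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
    return centers[np.argmax(between)]

# Bimodal synthetic intensities: dark asphalt plus bright paint returns
rng = np.random.default_rng(3)
intensity = np.concatenate([rng.normal(40, 8, 9000), rng.normal(180, 10, 1000)])
t = otsu_threshold(intensity)
```

Points with intensity above `t` would then be kept as candidate road-marking returns for the projection and clustering stages.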

Keywords: curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering

Procedia PDF Downloads 116
715 Estimation of Pressure Profile and Boundary Layer Characteristics over NACA 4412 Airfoil

Authors: Anwar Ul Haque, Waqar Asrar, Erwin Sulaeman, Jaffar S. M. Ali

Abstract:

Pressure distribution data for standard airfoils are usually used for calibration purposes in subsonic wind tunnels. The results of such experiments are quite old and were obtained with the model oriented in the spanwise direction. In this manuscript, the pressure distribution over a NACA 4412 airfoil model is presented with the 3D model placed in the lateral direction. The model is made of metal, with pressure ports distributed longitudinally as well as laterally. The pressure model was attached to the floor of the tunnel with the help of a base plate to set the specified angle of attack. Before the start of the experiments, the pressure tubes of the respective ports of the 128-port pressure scanner were checked for leakage, and the losses due to the length of the pipes were incorporated in the results for the specified pressure range. Growth-rate maps of the boundary layer thickness were also plotted. It was found that with increasing velocity, the dynamic pressure also increased over the alpha sweep. The pressure distributions so obtained were overlaid on those obtained using XFLR, a low-fidelity tool. It was found that at moderate and high angles of attack, the pressure coefficients obtained from the experiments are higher than the XFLR results obtained along the span of the wing. This under-prediction by XFLR is more obvious on the windward side than on the leeward side.
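
Port readings of this kind are normally compared as pressure coefficients; a minimal sketch of that normalization, with illustrative numbers rather than measured ones:

```python
def pressure_coefficient(p, p_inf, rho, v_inf):
    """Cp = (p - p_inf) / (0.5 * rho * v_inf^2): static pressure at a
    port, normalized by the freestream dynamic pressure."""
    q_inf = 0.5 * rho * v_inf**2
    return (p - p_inf) / q_inf

# Illustrative case: sea-level air at 30 m/s, a suction-side port
# reading slightly below ambient (hypothetical values).
cp = pressure_coefficient(p=100_800.0, p_inf=101_325.0, rho=1.225, v_inf=30.0)
```

Because Cp divides out the dynamic pressure, distributions measured at different tunnel velocities and those computed by a panel code such as XFLR can be overlaid directly.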

Keywords: subsonic flow, boundary layer, wind tunnel, pressure testing

Procedia PDF Downloads 307
714 Estimation of Residual Stresses in Thick Walled Cylinder by Radial Basis Artificial Neural Network

Authors: Mohammad Heidari

Abstract:

In this paper, a method for estimating the residual stresses in autofrettaged high-strength steel tubes using artificial neural networks is presented. Many thick-walled cylinders subjected to different conditions were studied. First, the residual stress is calculated by an analytical solution. Then, by varying the parameters that influence residual stress, such as the autofrettage percentage, internal pressure, wall ratio of the cylinder, material properties of the cylinder, and the Bauschinger and hardening effect factors, a neural network is created. These parameters are the inputs of the network; the output is the residual stress. Numerical data were employed for training the network, and the capability of the model in predicting residual stress has been verified. The output obtained from the neural network model is compared with the numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 2.75% in predicting the residual stress of a thick-walled cylinder. Further analysis of the residual stress of thick-walled cylinders under different input conditions has been investigated, and comparison of the modeling results with numerical considerations shows good agreement, which also proves the feasibility and effectiveness of the adopted approach.
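
A minimal sketch of a Gaussian radial-basis-function fit of the kind described; the one-input surrogate target below stands in for the analytical residual-stress solution and is purely illustrative:

```python
import numpy as np

def rbf_fit(X, y, centers, gamma):
    """Least-squares fit of a Gaussian RBF network:
    y ~ sum_j w_j * exp(-gamma * ||x - c_j||^2)."""
    Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, gamma, w):
    Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    return Phi @ w

# Toy stand-in for the residual-stress map: one normalized input
# (e.g. autofrettage percentage) to one normalized output.
rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, (200, 1))
y = np.sin(2.0 * np.pi * x[:, 0])        # surrogate analytical solution
centers = np.linspace(0.0, 1.0, 15)[:, None]
w = rbf_fit(x, y, centers, gamma=50.0)
pred = rbf_predict(x, centers, gamma=50.0, w=w)
rel_err = np.abs(pred - y).mean() / np.abs(y).mean()
```

The actual model in the paper takes several inputs (autofrettage percentage, pressure, wall ratio, material factors) and is trained against the analytical solution; the structure of the fit is the same.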

Keywords: thick walled cylinder, residual stress, radial basis, artificial neural network

Procedia PDF Downloads 400
713 Coping with Incompatible Identities in Russia: Case of Orthodox Gays

Authors: Siuzan Uorner

Abstract:

The era of late modernity is characterized, on the one hand, by social disintegration and the values of personal freedom, tolerance, and self-expression. Boundaries between the accessible and the elitist, the normal and the abnormal, are blurring. On the other hand, traditional social institutions, such as religion (especially the Russian Orthodox Church), persist, criticizing lifestyles and worldviews other than the conventionally structured canons. Despite the declared values and opportunities of late modern society, people's freedom is ambivalent. Personal identity and its aspects are becoming a subject of choice; hence, combinations of identity aspects can be incompatible. Our theoretical framework is based on P. Ricoeur's concept of narrative identity and hermeneutics, E. Goffman’s theory of social stigma, self-presentation, and discrepant roles, and W. James's lectures on the varieties of religious experience. This paper aims to reconstruct the ways in which Orthodox gays cope with incompatible identities (an extreme sampling of a combination of sexual orientation and religious identity in a heteronormative society). This study focuses on the discourse of Orthodox gay parishioners and gay priests of the ROC in Russia (a ‘hard to reach’ population because of the secrecy of the gay community in the ROC and the sensitivity of the topic itself). We used a qualitative research design based on in-depth, semi-structured personal online interviews. Informants were recruited on the ‘Nuntiare et Recreare’ (a Russian movement of religious LGBT people) page on VKontakte through a post inviting participation in the research. In this work, we analyzed the interview transcripts using axial coding. We chose the grounded theory methodology to construct a theory from empirical data and contribute to the growing body of knowledge on ways of harmonizing incompatible identities in late modern societies. 
The research found that Orthodox gays encounter two types of conflict: canonical contradictions (the postulates of Scripture and its interpretations) and problems in social interaction, mainly with ROC priests and Orthodox parishioners. We have revealed the semantic meanings of the most commonly used words that appear in the narratives (words such as ‘love’, ‘sin’, ‘religion’, etc.). Finally, we have reconstructed biographical patterns of involvement in LGBT social movements. This paper argues that all incompatibilities are harmonized in the narrative itself. As Ricoeur suggested, the narrative configuration allows the speaker to gather facts and events together and to compose causal relationships between them. Sexual orientation and religious identity get along and are harmonized in the narrative.

Keywords: gay priests, incompatible identities, narrative identity, Orthodox gays, religious identity, ROC, sexual orientation

Procedia PDF Downloads 120
712 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density

Authors: Jill Round

Abstract:

In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions that satisfy the demands of all participants by keeping abreast of regulatory change. In recent years, progress has been made on frameworks and on the development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance, and organizational culture is of special interest, as it requires well-resourced risk controlling, compliance, and internal audit functions. In the past years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the backroom to the boardroom. The implementation of the model can vary with organizational characteristics. Due to intense regulatory requirements, organizations operating in the financial sector have more mature models; in less regulated industries there is more cloudiness about which tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. This research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural equation modeling (SEM) is applied, making use of surveys conducted in the research field. 
It aims to describe (i) the theoretical model regarding the applicable linearity relationships, (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent (endogenous) variables, (iii) the unobservable latent variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of (1) the measurement model and (2) the structural model. There is a detectable correlation regarding the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation: supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.

Keywords: risk management, structural equation model, supervisory practice, three lines of defense

Procedia PDF Downloads 202
711 A Bayesian Hierarchical Poisson Model with an Underlying Cluster Structure for the Analysis of Measles in Colombia

Authors: Ana Corberan-Vallet, Karen C. Florez, Ingrid C. Marino, Jose D. Bermudez

Abstract:

In 2016, the Region of the Americas was declared free of measles, a viral disease that can cause severe health problems. However, since 2017, measles has reemerged in Venezuela and has subsequently reached neighboring countries. In 2018, twelve American countries reported confirmed cases of measles. Governmental and health authorities in Colombia, the country that shares the longest land boundary with Venezuela, are aware of the need for a strong response to restrict the spread of the epidemic. In this work, we apply a Bayesian hierarchical Poisson model with an underlying cluster structure to describe disease incidence in Colombia. Concretely, the proposed methodology provides relative risk estimates at the department level and identifies clusters of disease, which facilitates the implementation of targeted public health interventions. Socio-demographic factors, such as the percentage of migrants, gross domestic product, and entry routes, are included in the model to better describe the incidence of disease. Since the model does not impose any spatial correlation at any level of the model hierarchy, it avoids the spatial confounding problem and provides a suitable framework to estimate the fixed-effect coefficients associated with spatially-structured covariates.
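
The department-level relative-risk idea can be sketched with the conjugate Poisson-gamma core; the full model in the study adds the cluster structure and covariates, and the case counts below are hypothetical:

```python
from scipy import stats

def relative_risk_posterior(observed, expected, a=1.0, b=1.0):
    """Conjugate Poisson-gamma sketch of area-level relative risk:
        y_i ~ Poisson(E_i * r_i),  r_i ~ Gamma(a, b)
    gives the closed-form posterior  r_i | y_i ~ Gamma(a + y_i, b + E_i).
    Returns the posterior mean and a 95% credible interval for r_i."""
    post = stats.gamma(a + observed, scale=1.0 / (b + expected))
    return post.mean(), post.ppf([0.025, 0.975])

# Hypothetical department: 12 confirmed cases against 4.0 expected
mean_rr, ci = relative_risk_posterior(observed=12, expected=4.0)
```

A posterior relative risk well above 1, with a credible interval excluding 1, is the kind of signal that would flag a department for targeted intervention.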

Keywords: Bayesian analysis, cluster identification, disease mapping, risk estimation

Procedia PDF Downloads 136
710 Statistical Data Analysis of Migration Impact on the Spread of HIV Epidemic Model Using Markov Monte Carlo Method

Authors: Ofosuhene O. Apenteng, Noor Azina Ismail

Abstract:

Over the last several years, concern has developed over how to minimize the spread of the HIV/AIDS epidemic in many countries. The AIDS epidemic has tremendously stimulated the development of mathematical models of infectious diseases. The transmission dynamics of HIV infection, which eventually develops into AIDS, has played a pivotal role in the building of mathematical models. Since the initial HIV and AIDS models introduced in the 1980s, various improvements have been made in how HIV/AIDS frameworks are modelled. In this paper, we present the impact of migration on the spread of HIV/AIDS. The epidemic model is formulated as a system of nonlinear differential equations to supplement the statistical approach. The model is calibrated using HIV incidence data from Malaysia between 1986 and 2011. Bayesian inference based on Markov chain Monte Carlo is used to validate the model by fitting it to the data and to estimate the unknown model parameters. The results suggest that migrants who stay for a long time contribute to the spread of HIV. The model also indicates that susceptible individuals become infected and move to the HIV compartment at a rate that is greater than the removal rate from the HIV compartment to the AIDS compartment. The disease-free steady state is unstable since the basic reproduction number, 1.627309, is greater than one. This is a big concern and not a good indicator from the public health point of view, since the aim is to stabilize the epidemic at the disease-free equilibrium.
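
The MCMC calibration rests on a Metropolis-type sampler; a minimal random-walk sketch against a toy Gaussian stand-in for the posterior, since the paper's actual likelihood comes from the ODE system:

```python
import numpy as np

def metropolis(log_post, theta0, n_samples=5000, step=0.1, seed=5):
    """Random-walk Metropolis: propose theta' = theta + N(0, step^2),
    accept with probability min(1, post(theta')/post(theta))."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        out[i] = theta
    return out

# Toy target: a N(1.6, 0.1^2) stand-in for the posterior of a
# transmission-rate parameter (hypothetical, for illustration only).
samples = metropolis(lambda t: -0.5 * ((t - 1.6) / 0.1) ** 2, theta0=1.0)
post_mean = samples[1000:].mean()        # discard burn-in
```

In the actual calibration, `log_post` would combine the prior with the likelihood of the Malaysian incidence data under the solved ODE system, and quantities such as the basic reproduction number follow from the sampled parameters.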

Keywords: epidemic model, HIV, MCMC, parameter estimation

Procedia PDF Downloads 585
709 Experimental Investigation on Freeze-Concentration Process Desalting for Highly Saline Brines

Authors: H. Al-Jabli

Abstract:

The aim of this paper was to evaluate the freeze-melting process for the disposal of high-saline brines by confirming the performance estimates of the treatment system. A laboratory bench-scale freezing test unit was designed, constructed, and tested at the Doha Research Plant (DRP) in Kuwait. The principal unit operations considered for the laboratory study are ice crystallization, separation, washing, and melting. The applied process is characterized as "secondary-refrigerant indirect freezing", which utilizes the normal freezing concept. High-saline brine from Kuwait desalination plants, with an average TDS of 250,000 ppm, was used as the feed water in the experimental study to measure the performance of the proposed treatment system. The experimental analysis shows that the freeze-melting process is capable of reducing the TDS of the feed water from 249,482 ppm to 56,880 ppm over the two phases of the process, with an overall recovery of 31.11%, a salt passage of 19.05%, and a salt rejection of 80.95%. The freeze-melting process is therefore encouraging for the proposed application, as the results confirm the capability of the process to remove a major portion of the dissolved salts from the high-saline brine with a reasonable recovery. This process may be competitive with other brine disposal processes.

Keywords: high saline brine, freeze-melting process, ice crystallization, brine disposal process

Procedia PDF Downloads 253