Search results for: averaging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 98

98 Model Averaging for Poisson Regression

Authors: Zhou Jianhong

Abstract:

Model averaging is a desirable approach for dealing with model uncertainty, but it has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for Poisson regression. A simulation study shows that the proposed model average estimator outperforms some commonly used model selection and model average estimators in some situations. The proposed method is further applied to a real data example, where its advantage is demonstrated again.
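As an illustrative sketch (not the paper's exact estimator): once each candidate model has an unbiased estimate of its expected Kullback-Leibler distance, Akaike-type weights can be formed from those scores and used to average the candidate Poisson mean predictions. The scores and candidate means below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical per-model scores: unbiased estimates of the expected
# Kullback-Leibler distance (playing the role of an AIC-type criterion).
kl_scores = np.array([102.3, 100.1, 105.7])   # illustrative values

# Akaike-type weights: smaller KL estimate -> larger weight.
deltas = kl_scores - kl_scores.min()
weights = np.exp(-0.5 * deltas)
weights /= weights.sum()

# Combine per-model Poisson mean predictions for one observation.
mu_hat = np.array([3.2, 3.5, 2.9])            # candidate means (assumed)
mu_avg = float(weights @ mu_hat)              # model-averaged prediction
```

The averaged prediction is pulled toward the best-scoring model but retains a contribution from every candidate, which is the hedge against model uncertainty that motivates model averaging.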

Keywords: model averaging, poission regression, Kullback-Leibler distance, statistics

Procedia PDF Downloads 483
97 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.
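A toy version of the weight-choice idea, under simplifying assumptions: two candidate mean models are fit to heteroscedastic data, and the combining weight is chosen to minimise a plug-in estimate of squared prediction risk on held-out data (a simple stand-in for the paper's criterion), which has a closed form clipped to [0, 1]. All data and models here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + x)    # error s.d. grows with x

# Fit two candidate mean models on a training half.
tr, va = np.arange(0, n, 2), np.arange(1, n, 2)
X = np.column_stack([np.ones(n), x])
f_small = np.full(n, y[tr].mean())                         # intercept only
coef = np.linalg.lstsq(X[tr], y[tr], rcond=None)[0]
f_big = X @ coef                                           # linear model

# Weight on the larger model minimising the plug-in squared prediction
# risk ||y - (w f_big + (1-w) f_small)||^2 on the validation half.
d = f_big[va] - f_small[va]
w = float(np.clip(d @ (y[va] - f_small[va]) / (d @ d), 0.0, 1.0))
y_avg = w * f_big + (1 - w) * f_small
```

By construction the averaged predictor's validation risk is never worse than the poorer candidate's, which is the "hedging against very bad models" property the abstract emphasises.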

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 331
96 Exact Solutions for Steady Response of Nonlinear Systems under Non-White Excitation

Authors: Yaping Zhao

Abstract:

In the present study, exact solutions for the steady response of quasi-linear systems under non-white, wide-band random excitation are derived by means of the stochastic averaging method. The nonlinearity of the systems contains power-law damping and a cross-product term of the power-law damping and the displacement. The drift and diffusion coefficients of the Fokker-Planck-Kolmogorov (FPK) equation after averaging are obtained by a succinct approach. After solving the averaged FPK equation, the joint probability density function and the marginal probability density function in the steady state are obtained. In the solution process, the eigenvalue problem of the ordinary differential equation is handled by an integral equation method. Several new results are acquired, and a novel method for dealing with problems in nonlinear random vibration is proposed.
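For orientation, the stationary problem the abstract refers to has the following generic form. Writing the averaged amplitude equation with drift m(a) and diffusion σ²(a) (symbols generic, not the paper's specific coefficients), the stationary FPK equation and its classical solution are:

```latex
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial a}\bigl[m(a)\,p(a)\bigr]
    + \frac{1}{2}\frac{\partial^{2}}{\partial a^{2}}\bigl[\sigma^{2}(a)\,p(a)\bigr] = 0,
\qquad
p(a) = \frac{C}{\sigma^{2}(a)}\,
       \exp\!\left(\int^{a} \frac{2\,m(u)}{\sigma^{2}(u)}\,du\right),
```

where C is a normalisation constant. The paper's contribution lies in obtaining m and σ² for power-law damping and in handling the associated eigenvalue problem.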

Keywords: random vibration, stochastic averaging method, FPK equation, transition probability density

Procedia PDF Downloads 468
95 Factors Controlling Marine Shale Porosity: A Case Study between Lower Cambrian and Lower Silurian of Upper Yangtze Area, South China

Authors: Xin Li, Zhenxue Jiang, Zhuo Li

Abstract:

Generally, shale gas is trapped within shale systems of low porosity and ultralow permeability, in free and adsorbed states. Its production is controlled by reservoir properties, in terms of occurrence phases, gas contents, and percolation characteristics, all of which are influenced by pore features. In this paper, porosity differences were explored between the Lower Cambrian and Lower Silurian marine shales of the Sichuan Basin, South China. Both are marine shales with abundant oil-prone kerogen and rich siliceous minerals, but the Lower Cambrian shale (3.56% Ro) has a higher thermal maturity than the Lower Silurian shale (2.31% Ro). Samples were characterized by a combination of organic-geochemical measurements, organic matter (OM) isolation, X-ray diffraction (XRD), N2 adsorption, and focused ion beam milling with scanning electron microscopy (FIB-SEM). The Lower Cambrian shale showed relatively low pore properties, with an average pore volume (PV) of 0.008 ml/g, an average pore surface area (PSA) of 7.99 m²/g, and an average pore diameter (APD) of 5.94 nm. The Lower Silurian shale showed relatively high pore properties, with an average PV of 0.015 ml/g, an average PSA of 10.53 m²/g, and an average APD of 18.60 nm. Additionally, fractal analysis indicated that the two shales present different pore morphologies, mainly caused by differences in their combinations of pore types. More specifically, OM-hosted pores with pin-hole shapes and dissolved pores with dead-end openings are the main types in the Lower Cambrian shale, while OM-hosted pores with a cellular structure are the main type in the Lower Silurian shale. Moreover, the porous characteristics of the isolated OM suggest that the OM of the Lower Silurian shale contributes more to porosity than that of the Lower Cambrian shale: the PV of the isolated OM in the Lower Silurian shale is almost 6.6 times higher than that in the Lower Cambrian shale, and the PSA is almost 4.3 times higher. However, no apparent differences existed among samples with various matrix compositions. At the late diagenetic or metamorphic stage, extensive diagenesis overprints the effects of minerals on pore properties, and OM plays the dominant role in pore development. Hence, the differences in pore features between the two marine shales highlight the effect of diagenetic degree on OM-hosted pore development. Consequently, the distinctive pore characteristics may be caused by different degrees of diagenetic evolution, even with similar matrix compositions.

Keywords: marine shale, lower Cambrian, lower Silurian, OM isolation, pore properties, OM-hosted pore

Procedia PDF Downloads 108
94 Application of Bayesian Model Averaging and Geostatistical Output Perturbation to Generate Calibrated Ensemble Weather Forecast

Authors: Muhammad Luthfi, Sutikno Sutikno, Purhadi Purhadi

Abstract:

Weather forecasts must continually improve to provide communities with accurate and objective predictions. Numerical weather forecasting was developed extensively to reduce the subjectivity of forecasts. Yet Numerical Weather Prediction (NWP) outputs are issued without taking dynamical weather behavior and local terrain features into account, so they cannot accurately forecast weather quantities, particularly at medium and long ranges. The aim of this research is to aid and extend the development of ensemble forecasting for the Meteorology, Climatology, and Geophysics Agency of Indonesia. The ensemble method combines various deterministic forecasts to produce a more reliable one. However, such a forecast is biased and uncalibrated due to its underdispersive or overdispersive nature. As a parametric method, Bayesian Model Averaging (BMA) generates a calibrated ensemble forecast and constructs a predictive PDF for a specified period. BMA can utilize an ensemble of any size but does not take spatial correlation into account, even though the dependence between the site of interest and nearby sites is influenced by dynamic weather behavior. Geostatistical Output Perturbation (GOP), by contrast, accounts for the spatial correlation when generating future weather quantities; although built from a single deterministic forecast, it too can generate an ensemble of any size. This research applies both BMA and GOP to generate calibrated ensemble forecasts of daily temperature at a few meteorological sites near an Indonesian international airport.
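The core BMA object is the predictive PDF: a weighted mixture of kernels, each centred on one (bias-corrected) ensemble member. The sketch below evaluates such a mixture with normal kernels; the member forecasts, weights, and kernel spread are placeholders, whereas in practice the weights and spread are estimated by EM on training data.

```python
import numpy as np

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density at x: a weighted mixture of normal kernels,
    each centred on an ensemble member forecast."""
    x = np.asarray(x, dtype=float)
    return sum(
        w * np.exp(-0.5 * ((x - f) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        for w, f in zip(weights, forecasts)
    )

# Illustrative members and weights (real values would come from EM training).
members = [26.1, 27.3, 25.8]           # deterministic temperature forecasts, deg C
weights = [0.5, 0.3, 0.2]
grid = np.linspace(20, 33, 1301)
pdf = bma_pdf(grid, members, weights, sigma=1.0)
area = float(pdf.sum() * (grid[1] - grid[0]))   # should be close to 1
```

Because the weights sum to one and each kernel is a density, the mixture integrates to one, giving a calibrated probabilistic forecast rather than a single point value.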

Keywords: Bayesian Model Averaging, ensemble forecast, geostatistical output perturbation, numerical weather prediction, temperature

Procedia PDF Downloads 243
93 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain relatively few samples compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidate values of the tuning parameter first, then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. Using real and simulated data sets, we show that the value selected by the suggested methods often leads to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
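One simple way to realise "average the candidates" is sketched below for ridge regression: compute a K-fold cross-validated error for each candidate penalty, then average the candidates with weights inversely related to their CV score. The data, penalty grid, and inverse-error weighting are all illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(0, 1.0, n)

def cv_error(lam, k=5):
    """K-fold cross-validated squared error of ridge regression at penalty lam."""
    idx = np.arange(n)
    err = 0.0
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        A = X[tr].T @ X[tr] + lam * np.eye(p)
        b_hat = np.linalg.solve(A, X[tr].T @ y[tr])
        err += np.sum((y[fold] - X[fold] @ b_hat) ** 2)
    return err / n

lams = np.array([0.1, 1.0, 10.0, 100.0])
scores = np.array([cv_error(l) for l in lams])

# Instead of the single best lambda, average the candidates with weights
# inversely proportional to their CV score (one simple weighting choice).
w = 1.0 / scores
w /= w.sum()
lam_avg = float(w @ lams)
```

The averaged penalty varies less from one resampling to another than the single argmin does, which is the stability gain the abstract describes.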

Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search

Procedia PDF Downloads 381
92 Denoising Transient Electromagnetic Data

Authors: Lingerew Nebere Kassie, Ping-Yu Chang, Hsin-Hua Huang, Chaw-Son Chen

Abstract:

Transient electromagnetic (TEM) data plays a crucial role in hydrogeological and environmental applications, providing valuable insights into geological structures and resistivity variations. However, the presence of noise often hinders the interpretation and reliability of these data. Our study addresses this issue by utilizing a FASTSNAP system for the TEM survey, which operates at different modes (low, medium, and high) with continuous adjustments to discretization, gain, and current. We employ a denoising approach that processes the raw data obtained from each acquisition mode to improve signal quality and enhance data reliability. We use a signal-averaging technique for each mode, increasing the signal-to-noise ratio. Additionally, we utilize wavelet transform to suppress noise further while preserving the integrity of the underlying signals. This approach significantly improves the data quality, notably suppressing severe noise at late times. The resulting denoised data exhibits a substantially improved signal-to-noise ratio, leading to increased accuracy in parameter estimation. By effectively denoising TEM data, our study contributes to a more reliable interpretation and analysis of underground structures. Moreover, the proposed denoising approach can be seamlessly integrated into existing ground-based TEM data processing workflows, facilitating the extraction of meaningful information from noisy measurements and enhancing the overall quality and reliability of the acquired data.
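The signal-averaging step rests on a standard fact: stacking N repeated acquisitions reduces incoherent noise by roughly √N while the coherent decay is preserved. A minimal numeric sketch, with an idealised exponential decay standing in for a TEM transient and purely synthetic noise:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 500)
signal = np.exp(-5 * t)                   # idealised TEM decay (illustrative)

# Stack N repeated noisy acquisitions and average them.
N = 64
shots = signal + rng.normal(0, 0.2, size=(N, len(t)))
stacked = shots.mean(axis=0)

noise_single = float(np.std(shots[0] - signal))
noise_stacked = float(np.std(stacked - signal))
# Averaging N traces reduces incoherent noise by roughly sqrt(N) = 8 here.
```

In the study's workflow this stacking is followed by wavelet thresholding, which targets the residual noise (especially at late times) that averaging alone cannot remove.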

Keywords: data quality, signal averaging, transient electromagnetic, wavelet transform

Procedia PDF Downloads 54
91 Vulnerability of People to Climate Change: Influence of Methods and Computation Approaches on Assessment Outcomes

Authors: Adandé Belarmain Fandohan

Abstract:

Climate change has become a major concern globally, particularly in rural communities that must find rapid coping solutions. Several vulnerability assessment approaches have been developed in recent decades. This comes with a higher risk that different methods reach different conclusions, making comparisons difficult and decision-making inconsistent across areas. The effect of methods and computational approaches on estimates of people's vulnerability was assessed using data collected from the Gambia. Twenty-four indicators reflecting the vulnerability components (exposure, sensitivity, and adaptive capacity) were selected for this purpose. Data were collected through household surveys and key informant interviews. One hundred and fifteen respondents were surveyed across six communities and two administrative districts. Results were compared across three computational approaches: maximum value transformation normalization, z-score transformation normalization, and simple averaging. Regardless of the approach used, communities with high exposure to climate change and extreme events were the most vulnerable. Furthermore, vulnerability was strongly related to the socio-economic characteristics of farmers. The survey evidenced variability in vulnerability among communities and administrative districts. Comparing outputs across approaches, people in the study area were found to be highly vulnerable overall under simple averaging and maximum value transformation, whereas they were only moderately vulnerable under the z-score transformation approach. It is suggested that such approach-induced discrepancies be accounted for in international debates, so that assessment approaches can be harmonized and outputs made comparable across regions. This will also likely increase the relevance of decision-making for adaptation policies.
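The three computational approaches compared in the study can be sketched side by side. The indicator matrix below is invented for illustration, and the max-value transformation is taken here as simple division by the column maximum (variants exist, e.g. min-max rescaling):

```python
import numpy as np

# Illustrative indicator matrix: rows = communities, columns = indicators.
X = np.array([[0.2, 30.0, 5.0],
              [0.8, 45.0, 2.0],
              [0.5, 10.0, 9.0]])

def max_value(X):
    """Maximum value transformation: divide each indicator by its maximum."""
    return X / X.max(axis=0)

def z_score(X):
    """Z-score transformation: centre and scale each indicator."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

vuln_max = max_value(X).mean(axis=1)     # composite index, max-value scaling
vuln_z = z_score(X).mean(axis=1)         # composite index, z-score scaling
vuln_simple = X.mean(axis=1)             # simple averaging of raw values
```

Because the three transformations put indicators on different scales, the resulting composite indices (and sometimes the community rankings) differ, which is exactly the methodological discrepancy the abstract highlights.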

Keywords: maximum value transformation, simple averaging, vulnerability assessment, West Africa, z-score transformation

Procedia PDF Downloads 75
90 Assessment of Air Quality Around Western Refinery in Libya: Mobile Monitoring

Authors: A. Elmethnani, A. Jroud

Abstract:

This coastal crude oil refinery is situated north of a large city west of Tripoli; the city is therefore highly prone to downwind refinery emissions, as the NNE wind direction prevails through most seasons of the year. Furthermore, given the absence of an air quality monitoring network and the scarcity of emission data available for the neighboring community, nearby residents have serious worries about the impact of the oil refining operations on local air quality. In response to these concerns, a short-term survey was performed over three consecutive days, for which a semi-continuous mobile monitoring approach was developed: the monitoring station (Aeroqual AQM 65 compact station) was mounted on a vehicle to move quickly between locations, and 10-minute averages of 60-second measurements were taken at each fixed sampling point. Downwind ambient concentrations of CO, H₂S, NOₓ, NO₂, SO₂, PM₁, PM₂.₅, PM₁₀, and TSP were measured at carefully chosen sampling locations, ranging from 200 m near the fence line, through the city center, up to 4.7 km east, to attain the best spatial coverage. Results showed worrying levels of PM₂.₅, PM₁₀, and TSP at one sampling location in the city center, southeast of the refinery site, with mean concentrations of 16.395 μg/m³, 33.021 μg/m³, and 42.426 μg/m³ respectively, which could be attributed to road traffic. No significant concentrations were detected for the other pollutants of interest over the study area: the levels observed for CO, SO₂, H₂S, NOₓ, and NO₂ did not exceed 1.707 ppm, 0.021 ppm, 0.134 ppm, 0.4582 ppm, and 0.0018 ppm respectively, again at the same sampling location. Although it was not possible to compare the results with the Libyan air quality standards due to the difference in averaging time periods, the technique was adequate as a baseline air quality screening procedure. Overall, the findings primarily suggest dispersion modeling of the refinery emissions to assess the likely impact and spatial-temporal distribution of air pollutants.
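The survey's sampling design (10-minute averages built from 60-second readings) reduces to a simple block average. A sketch with synthetic PM₂.₅ values standing in for real instrument readings:

```python
import numpy as np

# One hour of 60-second PM2.5 readings at a fixed point (synthetic values).
rng = np.random.default_rng(7)
pm25_60s = 16.0 + rng.normal(0, 1.5, 60)    # ug/m^3, illustrative

# 10-minute averages of 60-second samples, as in the survey design:
pm25_10min = pm25_60s.reshape(6, 10).mean(axis=1)
```

Block averaging smooths out short-lived fluctuations while preserving the overall mean, which is why averaging time must match the reference standard before concentrations can be compared against regulatory limits.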

Keywords: air quality, mobile monitoring, oil refinery

Procedia PDF Downloads 65
89 Pilot Scale Investigation on the Removal of Pollutants from Secondary Effluent to Meet Botswana Irrigation Standards Using Roughing and Slow Sand Filters

Authors: Moatlhodi Wise Letshwenyo, Lesedi Lebogang

Abstract:

Botswana is an arid country that needs to start reusing wastewater as part of its water security plan. Pilot scale slow sand filtration in combination with a roughing filter was investigated for the treatment of effluent from Botswana International University of Science and Technology to meet Botswana irrigation standards. The system was operated at hydraulic loading rates of 0.04 m/hr and 0.12 m/hr. The results show that the system reduced turbidity from 262 Nephelometric Turbidity Units to between 18 and 0 Nephelometric Turbidity Units, below the threshold limit of 30 Nephelometric Turbidity Units. The overall removal efficiency ranged between 61% and 100%. Suspended solids, Biochemical Oxygen Demand, and Chemical Oxygen Demand removal efficiencies averaged 42.6%, 45.5%, and 77% respectively, all within irrigation standards. Other physicochemical parameters were within irrigation standards except for the bicarbonate ion, which averaged 297.7±44 mg/L in the influent and 196.22±50 mg/L in the effluent, above the limit of 92 mg/L; the system thus achieved an average reduction of 34.1%. Total coliforms, fecal coliforms, and Escherichia coli in the effluent initially averaged 1.1 log counts, 0.5 log counts, and 1.3 log counts respectively, compared to corresponding influent log counts of 3.4, 2.7, and 4.1. As time passed, only the roughing filter was able to reach reductions of 97.5%, 86%, and 100% for fecal coliforms, Escherichia coli, and total coliforms respectively; these organism numbers were observed to increase in the slow sand filter effluent, suggesting multiplication in the tank. A water quality index value of 22.79 for the physicochemical parameters suggests that the effluent is of excellent quality and can be used for irrigation purposes. However, the water quality index value for the microbial parameters (1820) renders the quality unsuitable for irrigation. It is concluded that slow sand filtration in combination with a roughing filter is a viable option for the treatment of secondary effluent for reuse purposes. However, further studies should be conducted, especially on the removal of microbial parameters by the system.
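The removal figures quoted in the abstract all come from the same simple calculation, shown here with the bicarbonate numbers from the study as a worked check:

```python
def removal_efficiency(influent, effluent):
    """Percent removal of a pollutant across the filter system."""
    return 100.0 * (influent - effluent) / influent

# Bicarbonate example from the study: ~297.7 mg/L in, ~196.22 mg/L out.
eff = removal_efficiency(297.7, 196.22)   # about 34%, matching the abstract
```

A negative value from this formula would indicate the constituent increased across the system, which is what the study observed for the microbial counts in the slow sand filter effluent.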

Keywords: irrigation, slow sand filter, turbidity, wastewater reuse

Procedia PDF Downloads 122
88 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. 
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
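Conflation (the normalized product of the input densities) has a particularly transparent closed form for normal inputs, so the sketch below uses normals purely to show the mechanics rather than the exponential distributions discussed in the abstract; the recovery-time numbers are hypothetical.

```python
def conflate_normals(mu1, var1, mu2, var2):
    """Conflation of two normal densities (their normalized product) is
    again normal, with precision-weighted mean and combined precision."""
    prec = 1.0 / var1 + 1.0 / var2
    mu = (mu1 / var1 + mu2 / var2) / prec
    return mu, 1.0 / prec

# Hypothetical recovery-time estimates (days): severe vs. nuisance events.
mu, var = conflate_normals(30.0, 16.0, 20.0, 4.0)
```

As the abstract states, the conflated distribution sits between its parents and is pulled toward the lower-variance (more informative) input, and its variance is smaller than either parent's, which is why conflation favors the distribution with minimum variation.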

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 56
87 Implementation of 4-Bit Direct Charge Transfer Switched Capacitor DAC with Mismatch Shaping Technique

Authors: Anuja Askhedkar, G. H. Agrawal, Madhu Gudgunti

Abstract:

The Direct Charge Transfer Switched Capacitor (DCT-SC) DAC is the internal DAC used in a Delta-Sigma (ΔΣ) DAC, which works on the oversampling principle. Switched capacitor DACs mainly suffer from mismatch among capacitors, which causes nonlinearity between output and input. Dynamic Element Matching (DEM) techniques are used to compensate for capacitor mismatch; they differ in their element-selection logic. In this paper, the Data Weighted Averaging (DWA) technique is used for mismatch shaping. A 4-bit DCT-SC DAC with the DWA DEM technique is implemented in 180 nm CMOS technology using WINSPICE simulation software. The DNL of the DAC with DWA is ±0.03 LSB and the INL is ±0.02 LSB.
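The element-selection logic of DWA can be sketched in a few lines: a rotating pointer walks through the unit elements so that, over time, every capacitor is used equally often and the mismatch error is first-order noise-shaped. The 15-element array and the input code sequence below are illustrative.

```python
def dwa_select(code, n_elements, pointer):
    """Data Weighted Averaging: pick `code` unit elements starting at the
    rotating pointer, wrapping around, so elements are used equally often."""
    selected = [(pointer + i) % n_elements for i in range(code)]
    new_pointer = (pointer + code) % n_elements
    return selected, new_pointer

# 4-bit thermometer-coded DAC with 15 unit capacitors (illustrative).
ptr = 0
usage = [0] * 15
for code in [5, 7, 3, 8, 9, 4, 6, 3]:     # sample input codes, sum = 45
    sel, ptr = dwa_select(code, 15, ptr)
    for e in sel:
        usage[e] += 1
```

Because the codes here sum to an exact multiple of 15, every element ends up used exactly three times, illustrating the equal-usage property that turns static mismatch into shaped noise.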

Keywords: Σ-Δ DAC, DCT-SC-DAC, mismatch shaping, DWA, DEM

Procedia PDF Downloads 320
86 Computational Fluid Dynamics Analysis of Sit-Ski Aerodynamics in Crosswind Conditions

Authors: Lev Chernyshev, Ekaterina Lieshout, Natalia Kabaliuk

Abstract:

Sit-skis enable individuals with limited lower limb or core movement to ski unassisted and confidently. The rise in popularity of the Winter Paralympics has seen an influx of engineering innovation, especially for the Downhill and Super-Giant Slalom events, where athletes reach speeds as high as 160 km/h. The growth of the sport has inspired recent research into sit-ski aerodynamics. Crosswinds are expected in mountain climates and can greatly affect a skier's maneuverability and aerodynamics. This research investigates the impact of crosswinds on the drag force of a Paralympic sit-ski using Computational Fluid Dynamics (CFD). A Paralympic sit-ski with a model of a skier, a leg cover, a bucket seat, and a simplified suspension system was analyzed in ANSYS Fluent. The hybrid initialisation tool and the SST k-ω turbulence model were used with two tetrahedral mesh bodies of influence. Crosswinds of 10, 30, and 50 km/h acting perpendicular to the sit-ski's direction of travel were simulated, corresponding to straight-line skiing speeds of 60, 80, and 100 km/h. Following initialisation, 150 iterations of first- and then second-order steady-state solvers were run before switching to a transient solver with a computational time of 1.5 s and a time step of 0.02 s, to allow the solution to converge. CFD results were validated against wind tunnel data. The results suggested that for all crosswind and sit-ski speeds, on average, 64% of the total drag on the ski was due to the athlete's torso. The suspension made the second largest contribution to overall sit-ski drag, averaging 27%, followed by the leg cover at 10%. The seat contributed a negligible 0.5% of the total drag force, averaging 1.2 N across the conditions studied. The crosswind increased the total drag force at all skiing speeds studied, with the drag on the athlete's torso and suspension being the most sensitive to changes in crosswind magnitude. The effect of the crosswind on the ski drag diminished as the simulated skiing speed increased: at 60 km/h, the drag force on the torso increased by 154% as the crosswind rose from 10 km/h to 50 km/h, whereas at 100 km/h the corresponding increase was roughly halved (75%). Analysis of the flow and pressure fields for a sit-ski in crosswind conditions indicated that the flow separation localisation and wake size correlated with the magnitude and direction of the crosswind relative to straight-line skiing. The findings can inform aerodynamic improvements in sit-ski design and increase skiers' medalling chances.

Keywords: sit-ski, aerodynamics, CFD, crosswind effects

Procedia PDF Downloads 42
85 Melnikov Analysis for the Chaos of the Nonlocal Nanobeam Resting on Fractional-Order Softening Nonlinear Viscoelastic Foundations

Authors: Guy Joseph Eyebe, Gambo Betchewe, Alidou Mohamadou, Timoleon Crepin Kofane

Abstract:

In the present study, the dynamics of a nanobeam resting on a fractional-order softening nonlinear viscoelastic Pasternak foundation is studied. The Hamilton principle is used to derive the nonlinear equation of motion. An approximate analytical solution is obtained by applying the standard averaging method. The Melnikov method is used to investigate the chaotic behavior of the device, and the critical curve separating the chaotic and non-chaotic regions is found. It is shown that the appearance of chaos in the system depends strongly on the fractional-order parameter.

Keywords: chaos, fractional-order, Melnikov method, nanobeam

Procedia PDF Downloads 122
84 An Adaptive CFAR Algorithm Based on Automatic Censoring in Heterogeneous Environments

Authors: Naime Boudemagh

Abstract:

In this work, we aim to improve the detection performance of radar systems. To this end, we propose and analyze a novel technique for censoring undesirable samples, of a priori unknown positions, that may be present in the environment under investigation. We therefore consider heterogeneous backgrounds characterized by the presence of irregularities such as clutter edge transitions and/or interfering targets. The proposed detector, termed the automatic censoring constant false alarm rate (AC-CFAR) detector, operates exclusively in a Gaussian background. It is built to segment the environment into regions and switch automatically to the appropriate detector, namely the cell averaging CFAR (CA-CFAR), the censored mean level detector CFAR (CMLD-CFAR), or the order statistic CFAR (OS-CFAR). Monte Carlo simulations show that the AC-CFAR detector performs like the CA-CFAR in a homogeneous background. Moreover, the proposed processor exhibits considerable robustness in heterogeneous backgrounds.
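A minimal sketch of the cell-averaging branch (CA-CFAR) that the AC-CFAR falls back to in homogeneous regions: each cell under test is compared against the mean of the surrounding training cells (guard cells excluded) times a scale factor. The window sizes, scale factor, and injected target below are illustrative assumptions.

```python
import numpy as np

def ca_cfar(x, n_train, n_guard, scale):
    """Cell-averaging CFAR: threshold each cell by the mean of the training
    cells on both sides (guard cells excluded), times a scale factor."""
    detections = []
    k = n_train + n_guard
    for i in range(k, len(x) - k):
        leading = x[i - k : i - n_guard]            # n_train cells before
        trailing = x[i + n_guard + 1 : i + k + 1]   # n_train cells after
        noise = np.mean(np.concatenate([leading, trailing]))
        if x[i] > scale * noise:
            detections.append(i)
    return detections

rng = np.random.default_rng(3)
x = rng.exponential(1.0, 200)   # homogeneous square-law noise, power ~ 1
x[100] = 40.0                   # strong synthetic target
hits = ca_cfar(x, n_train=16, n_guard=2, scale=8.0)
```

The automatic censoring idea in the abstract extends this by first detecting outliers (interferers, clutter edges) in the training window and switching to CMLD- or OS-style statistics when the window is not homogeneous.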

Keywords: CFAR, automatic censoring, heterogeneous environments, radar systems

Procedia PDF Downloads 569
83 Scattering Operator and Spectral Clustering for Ultrasound Images: Application on Deep Venous Thrombi

Authors: Thibaud Berthomier, Ali Mansour, Luc Bressollette, Frédéric Le Roy, Dominique Mottier, Léo Fréchier, Barthélémy Hermenault

Abstract:

Deep Venous Thrombosis (DVT) occurs when a thrombus is formed within a deep vein (most often in the legs). This disease can be deadly if a part or the whole thrombus reaches the lung and causes a Pulmonary Embolism (PE). This disorder, often asymptomatic, has multifactorial causes: immobilization, surgery, pregnancy, age, cancers, and genetic variations. Our project aims to relate the thrombus epidemiology (origins, patient predispositions, PE) to its structure using ultrasound images. Ultrasonography and elastography were collected using Toshiba Aplio 500 at Brest Hospital. This manuscript compares two classification approaches: spectral clustering and scattering operator. The former is based on the graph and matrix theories while the latter cascades wavelet convolutions with nonlinear modulus and averaging operators.
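One layer of the scattering operator the abstract describes (wavelet convolution, complex modulus, then an averaging operator) can be sketched as follows; the Morlet-like wavelets and mean pooling used here are simplified stand-ins for the full scattering cascade.

```python
import numpy as np

def scattering_1d(x, wavelets, pool):
    """One layer of a 1-D scattering transform: wavelet convolution,
    complex modulus, then local averaging (here, mean pooling)."""
    coeffs = []
    for w in wavelets:
        u = np.abs(np.convolve(x, w, mode="same"))       # modulus of convolution
        coeffs.append(u.reshape(-1, pool).mean(axis=1))  # averaging operator
    return np.stack(coeffs)

# Toy signal standing in for a 1-D slice of an ultrasound image.
rng = np.random.default_rng(2)
x = rng.normal(size=256)

# Morlet-like complex wavelets at three frequencies (illustrative).
t = np.arange(-8, 8)
wavelets = [np.exp(1j * f * t) * np.exp(-t**2 / 8) for f in (0.5, 1.0, 2.0)]
S = scattering_1d(x, wavelets, pool=16)   # shape: (n_wavelets, len(x)//pool)
```

The modulus-then-average structure makes the coefficients stable to small deformations, which is what makes them usable as features for the clustering comparison in the paper.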

Keywords: deep venous thrombosis, ultrasonography, elastography, scattering operator, wavelet, spectral clustering

Procedia PDF Downloads 449
82 The Effect of Spatial Variability on Axial Pile Design of Closed Ended Piles in Sand

Authors: Cormac Reale, Luke J. Prendergast, Kenneth Gavin

Abstract:

While significant improvements have been made in axial pile design methods over recent years, the influence of soil's natural variability has not been adequately accounted for within them. Soil variability is a crucial parameter to consider, as it can account for large variations in pile capacity across the same site. This paper seeks to address this knowledge deficit by demonstrating how soil spatial variability can be accommodated in existing cone penetration test (CPT) based pile design methods, in the form of layered non-homogeneous random fields. These random fields model the scope of a given property's variance and define how it varies spatially. A Monte Carlo analysis of the pile will be performed, taking into account parameter uncertainty and spatial variability described using the measured scales of fluctuation. The results will be discussed in light of Eurocode 7, and the effect of spatial averaging on design capacities will be analysed.
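Generating one realisation of such a random field is the core Monte Carlo building block. The sketch below draws a 1-D stationary log-normal field with an exponential autocorrelation via Cholesky factorisation; the correlation model, scale of fluctuation, and log-parameters are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# 1-D stationary log-normal random field for, e.g., CPT cone resistance.
rng = np.random.default_rng(5)
z = np.linspace(0, 10, 101)                 # depth, m
theta = 1.0                                 # scale of fluctuation, m (assumed)

# Exponential autocorrelation and its Cholesky factor (jitter for stability).
C = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / theta)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))

mean_ln, sd_ln = np.log(10.0), 0.3          # illustrative log-parameters
field = np.exp(mean_ln + sd_ln * (L @ rng.standard_normal(len(z))))
```

Repeating this draw inside a Monte Carlo loop, and averaging the field over the pile shaft for each realisation, is what produces the variance reduction that spatial averaging contributes to the design capacity distribution.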

Keywords: pile axial design, reliability, spatial variability, CPT

Procedia PDF Downloads 201
81 Modeling of Enthalpy and Heat Capacity of Phase-Change Materials

Authors: Igor Medved, Anton Trnik, Libor Vozar

Abstract:

Phase-change materials (PCMs) are of great interest in the applications where a temperature level needs to be maintained and/or where there is demand for thermal energy storage. Examples are storage of solar energy, cold, and space heating/cooling of buildings. During a phase change, the enthalpy vs. temperature plot of PCMs shows a jump and there is a distinct peak in the heat capacity plot. We present a theoretical description from which these jumps and peaks can be obtained. We apply our theoretical results to fit experimental data with very good accuracy for selected materials and changes between two phases. The development is based on the observation that PCMs are polycrystalline; i.e., composed of many single-crystalline grains. The enthalpy and heat capacity are thus interpreted as averages of the contributions from the individual grains. We also show how to determine the baseline and excess part of the heat capacity and thus the latent heat corresponding to the phase change.
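The grain-averaging picture can be made concrete with a toy model: if each grain melts sharply near its own transition temperature and the grain transition temperatures are spread around a mean, the effective heat capacity is the baseline plus an excess term whose integral over temperature recovers the latent heat. The Gaussian spread and all material numbers below are illustrative assumptions.

```python
import numpy as np

T = np.linspace(300, 340, 401)              # temperature grid, K
T_m, spread = 320.0, 2.0                    # mean transition temp and grain spread (assumed)
latent, cp_base = 200.0, 2.0                # J/g and J/(g K), illustrative

# Gaussian distribution of grain transition temperatures -> excess cp term;
# the excess integrates to the latent heat of the transition.
excess = latent * np.exp(-0.5 * ((T - T_m) / spread) ** 2) / (spread * np.sqrt(2 * np.pi))
cp_eff = cp_base + excess

dT = T[1] - T[0]
recovered_latent = float(excess.sum() * dT)   # should be close to `latent`
```

This is the sense in which the enthalpy jump and the heat capacity peak are two views of the same averaged grain contribution: integrating the cp peak gives back the enthalpy step.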

Keywords: averaging, enthalpy jump, heat capacity peak, phase change

Procedia PDF Downloads 430
80 Hunger and Health: The Acceptability and Development of Health Coaching in the Food Pantry Environment

Authors: Kelsey Fortin, Susan Harvey

Abstract:

The intersection between hunger and health outcomes is beginning to gain traction among the research community. With new interventions focusing on collaborations between the medical and social service sectors, this study aimed to understand the acceptability and approach of a health coaching intervention within a county-wide Midwest food pantry. Through formative research, the study used mixed methods to review secondary data and conduct surveys and semi-structured interviews with food pantry clients (n=30), staff (n=7), and volunteers (n=10). Supplemental secondary data collected and provided by pantry staff were reviewed to understand the broader pantry context of clientele health and health behaviors, annual food donations, and current pantry programming. Results from secondary data showed that the broader pantry client population reported high rates of chronic disease, low consumption of fruits and vegetables, and poor self-reported health, while annual donation data showed increases in produce availability on pantry shelves. This disconnect between produce availability, client health status, and behaviors was supported in the current study, with pantry staff and volunteers reporting lack of knowledge in produce selection and preparation being amongst the most common client inquiries and barriers to healthy food selection. Additional supports to secondary data came from pantry clients in the current study through self-reported high rates of both individual (60%, n=18) and household (43%, n=13 ) disease diagnosis, low consumption of fruits and vegetables averaging zero to one servings of vegetables (67%, n=20) and fruits (47%, n=14) per day, and low levels of physical activity averaging zero to 120 minutes per week (67%, n=20). 
Further, pantry clients provided health coaching programmatic recommendations through interviews, offering feedback such as non-judgmental coaching, accountability measures, and participant incentives as considerations for future program design and approach. Volunteers and staff reported the need for client education in food preparation, basic nutrition, and physical activity, and the need for additional health expertise to educate clients and respond to diet-related nutrition recommendations. All three stakeholder groups supported hosting a health coach within the pantry to focus on nutrition, physical activity, and health programming, with one client stating, 'I am hoping it really works out [the health coaching program]. I think it would be great for something like this to be offered for someone that isn’t knowledgeable like me.' In conclusion, high rates of chronic disease, coupled with low food, nutrition, and physical activity literacy among pantry clients, demonstrate the need to address health behaviors. With all three stakeholder groups showing acceptability of a health coaching program, and with existing literature showing the success of health coaching as a behavior change intervention, further research should be conducted to pilot the design and implementation of such a program in the pantry setting.

Keywords: food insecurity, formative research, food pantries, health coaching, hunger and health

Procedia PDF Downloads 100
79 Steady State Creep Behavior of Functionally Graded Thick Cylinder

Authors: Tejeet Singh, Harmanjit Singh

Abstract:

The creep behavior of a thick-walled functionally graded cylinder made of an Al-SiC composite and subjected to internal pressure and high temperature has been analyzed. The functional relationship between strain rate and stress can be described by the well-known threshold-stress-based creep law with a stress exponent of five. The effect of imposing a non-linear particle gradient on the distribution of creep stresses in the thick-walled functionally graded composite cylinder has been investigated. The study revealed that for the assumed non-linear particle distribution, the radial stress decreases throughout the cylinder, whereas the tangential, axial, and effective stresses show an averaging effect. The strain rates in the functionally graded composite cylinder could be reduced to a significant extent by employing a non-linear gradient in the distribution of the reinforcement.

Keywords: functionally graded material, pressure, steady state creep, thick-cylinder

Procedia PDF Downloads 447
78 The Relevance of Environmental, Social, and Governance in Sustainable Supplier Selection

Authors: Christoph Koester

Abstract:

Supplier selection is one of the key issues in supply chain management, with a growing emphasis on sustainability driven by increasing stakeholder expectations and proactivity. In addition, new regulations, such as the German Supply Chain Act, have fostered the inclusion of sustainability criteria, including governance, in the selection process. In order to provide a systematic approach to selecting the most suitable sustainable suppliers, this study quantifies the importance of and prioritizes the relevant selection criteria across 17 German industries using the Fuzzy Analytical Hierarchy Process. Results show that economic criteria are still the most important in the selection decision, averaging a global weight of 51%. However, the environmental, social, and governance (ESG) criteria combined are, on average, almost equally important, with global weights of 22%, 16%, and 11%, respectively. While the type of industry influences criteria weights, other factors, such as the type of purchasing or demographic factors, appear to have little impact.
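As a simplified illustration of how global criteria weights like those above can be derived from pairwise comparisons, here is a minimal crisp-AHP sketch using the row geometric mean method; the study itself uses the fuzzy variant, and the comparison matrix values below are purely illustrative, not the paper's data:

```python
def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix via the
    row geometric mean method (crisp AHP; the study uses fuzzy AHP).
    M[i][j] states how much more important criterion i is than j."""
    n = len(M)
    geo_means = []
    for row in M:
        prod = 1.0
        for x in row:
            prod *= x
        geo_means.append(prod ** (1.0 / n))  # geometric mean of the row
    total = sum(geo_means)
    return [g / total for g in geo_means]    # normalise to sum to 1

# Illustrative 3-criteria comparison: economic vs. environmental vs. social
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
weights = ahp_weights(M)  # highest priority for the 'economic' row
```

The fuzzy variant replaces each crisp judgment with a fuzzy number (e.g. triangular) and defuzzifies after aggregation, but the normalisation logic is analogous.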

Keywords: ESG, fuzzy analytical hierarchy process, sustainable supplier selection, sustainability

Procedia PDF Downloads 53
77 An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains

Authors: Qiu Chen, Koji Kotani, Feifei Lee, Tadahiro Ohmi

Abstract:

In this paper, we propose an improved face recognition algorithm using histogram-based features in the spatial and frequency domains. To add spatial information of the face and improve recognition performance, a region-division (RD) method is utilized. The facial area is first divided into several regions; then feature vectors of each facial part are generated by a Binary Vector Quantization (BVQ) histogram using DCT coefficients in the low-frequency domain, as well as a Local Binary Pattern (LBP) histogram in the spatial domain. Recognition results for the different regions are first obtained separately and then fused by weighted averaging. The publicly available ORL database, which consists of 40 subjects with 10 images per subject containing variations in lighting, pose, and expression, is used for the evaluation of our proposed algorithm. It is demonstrated that face recognition using the RD method can achieve a much higher recognition rate.
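The weighted-averaging fusion step can be sketched as follows; this is a hedged illustration assuming per-region similarity scores against a gallery of subjects, with illustrative weights and values rather than the authors' actual ones:

```python
import numpy as np

def fuse_region_scores(region_scores, weights):
    """Fuse per-region similarity scores for each gallery subject by
    weighted averaging (weights need not be pre-normalised)."""
    region_scores = np.asarray(region_scores, dtype=float)  # (regions, subjects)
    weights = np.asarray(weights, dtype=float)
    return weights @ region_scores / weights.sum()          # weighted mean per subject

# Similarity scores of 3 facial regions against 4 gallery subjects (illustrative)
scores = [[0.9, 0.2, 0.1, 0.3],
          [0.7, 0.4, 0.2, 0.1],
          [0.8, 0.3, 0.3, 0.2]]
fused = fuse_region_scores(scores, weights=[0.5, 0.25, 0.25])
best_match = int(np.argmax(fused))  # identity with the highest fused score
```

Weighting lets more discriminative regions (e.g. the eye area) contribute more to the final decision than regions that vary with expression.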

Keywords: binary vector quantization (BVQ), DCT coefficients, face recognition, local binary patterns (LBP)

Procedia PDF Downloads 310
76 Group Decision Making through Interval-Valued Intuitionistic Fuzzy Soft Set TOPSIS Method Using New Hybrid Score Function

Authors: Syed Talib Abbas Raza, Tahseen Ahmed Jilani, Saleem Abdullah

Abstract:

This paper presents an interval-valued intuitionistic fuzzy soft set (IVIFSS) based TOPSIS method for group decision making. The interval-valued intuitionistic fuzzy soft set is a hybrid of an interval-valued intuitionistic fuzzy set and a soft set. In group decision making problems, the IVIFSS makes the process much more algebraically elegant. We have used a weighted arithmetic averaging operator for aggregating the information and define a new Hybrid Score Function as a metric tool for comparison between interval-valued intuitionistic fuzzy values. In an illustrative example, we have applied the developed method to a criminological problem. We have developed a group decision making model for integrating the imprecise and hesitant evaluations of multiple law enforcement agencies working on target killing cases in the country.
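For concreteness, the aggregation step can be sketched with the standard interval-valued intuitionistic fuzzy weighted averaging form (membership aggregated as 1 minus a weighted product of complements, non-membership as a weighted product), applied endpoint-wise; this is a generic sketch of that operator family, not necessarily the paper's exact definition, and the values are illustrative:

```python
def ivifwa(values, weights):
    """Interval-valued intuitionistic fuzzy weighted averaging.
    Each value is ([mu_lo, mu_hi], [nu_lo, nu_hi]); weights sum to 1.
    Standard IFWA form applied to each interval endpoint:
      membership:     1 - prod (1 - mu)^w
      non-membership:     prod nu^w"""
    mu_lo = mu_hi = 1.0
    nu_lo = nu_hi = 1.0
    for (mu, nu), w in zip(values, weights):
        mu_lo *= (1 - mu[0]) ** w
        mu_hi *= (1 - mu[1]) ** w
        nu_lo *= nu[0] ** w
        nu_hi *= nu[1] ** w
    return ([1 - mu_lo, 1 - mu_hi], [nu_lo, nu_hi])

# Two agencies' interval-valued evaluations of one alternative (illustrative)
vals = [([0.4, 0.5], [0.2, 0.3]),
        ([0.6, 0.7], [0.1, 0.2])]
agg = ivifwa(vals, weights=[0.5, 0.5])
```

A score function (such as the paper's Hybrid Score Function) is then applied to the aggregated value to rank alternatives in the TOPSIS procedure.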

Keywords: group decision making, interval-valued intuitionistic fuzzy soft set, TOPSIS, score function, criminology

Procedia PDF Downloads 557
75 Co-Seismic Gravity Gradient Changes of the 2006–2007 Great Earthquakes in the Central Kuril Islands from GRACE Observations

Authors: Armin Rahimi

Abstract:

In this study, we reveal the co-seismic signals of two combined earthquakes, the 2006 Mw 8.3 thrust and 2007 Mw 8.1 normal-fault earthquakes of the central Kuril Islands, from GRACE observations. We compute the monthly full gravitational gradient tensor in the local north-east-down frame for the Kuril Islands earthquakes without spatial averaging or de-striping filters. Some of the gravitational gradient components (e.g. ΔVxx, ΔVxz) enhance the high-frequency components of the Earth's gravity field and reveal more detail in the spatial and temporal domains, so that pre-seismic activity can be better illustrated. We show that the positive-negative-positive co-seismic ΔVxx due to the Kuril Islands earthquakes ranges from −0.13 to +0.11 milli-Eötvös, and ΔVxz shows a positive-negative-positive pattern ranging from −0.16 to +0.13 milli-Eötvös, in good agreement with seismic model predictions.

Keywords: GRACE observation, gravitational gradient changes, Kuril island earthquakes, PSGRN/PSCMP

Procedia PDF Downloads 234
74 Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork

Authors: Toni Maristela C. Estabillo, Michaela V. Matienzo, Mikaela L. Sabangan, Rosette M. Tienzo, Justine L. Bahinting

Abstract:

This study concerns blind watermarking of images with different categories and properties using two algorithms, namely the Discrete Wavelet Transform (DWT) and the Patchwork Algorithm. A program was created to perform watermark embedding, extraction, and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency, and security. Image quality is measured by comparing the original properties with those of the processed image. Perceptual transparency is measured by visual inspection in a survey. Security is measured by implementing geometrical and non-geometrical attacks through pass-or-fail testing. The values used to measure these criteria are mostly based on the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The results were collected and interpreted using statistical methods such as averaging, the z-test, and surveys. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone.
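The two quality metrics named above have standard definitions; a minimal sketch with illustrative pixel values (not the study's test images):

```python
import math

def mse(original, processed):
    """Mean squared error between two equally sized grayscale images,
    given as nested lists of pixel intensities (0-255)."""
    n = 0
    total = 0.0
    for row_o, row_p in zip(original, processed):
        for o, p in zip(row_o, row_p):
            total += (o - p) ** 2
            n += 1
    return total / n

def psnr(original, processed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less degradation
    of the watermarked image relative to the original."""
    err = mse(original, processed)
    if err == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / err)

img = [[100, 120], [130, 140]]       # original (illustrative 2x2 image)
marked = [[101, 119], [130, 142]]    # after watermark embedding
quality = psnr(img, marked)          # roughly 46 dB here
```

PSNR values above about 40 dB are commonly taken to indicate that the embedding is perceptually transparent.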

Keywords: blind watermarking, discrete wavelet transform algorithm, patchwork algorithm, digital watermark

Procedia PDF Downloads 241
73 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while keeping the solution's matrix structure, and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can detect outliers exactly when there is little or no noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of outliers; the filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered “clean” data from the proposed method can give much better parameter estimates than those based on the raw data.
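The under-sampling-with-averaging idea can be sketched as block averaging of the output record; this is a hedged illustration of the general principle (zero-mean noise cancels within a block while a large outlier still dominates its block), not the authors' exact scheme:

```python
def undersample_average(signal, factor):
    """Denoise by replacing each block of `factor` consecutive samples
    with its average. Zero-mean noise is attenuated by the averaging,
    while an outlier large relative to its block remains salient.
    A trailing partial block is averaged over its actual length."""
    out = []
    for i in range(0, len(signal), factor):
        block = signal[i:i + factor]
        out.append(sum(block) / len(block))
    return out

# Noisy record around 1.0 with a pair of outlier samples near 10 (illustrative)
noisy = [1.1, 0.9, 1.0, 1.0, 9.8, 10.2, 1.0, 1.1]
clean = undersample_average(noisy, factor=2)  # outlier block stays prominent
```

The down-sampled record is then fed to the low-rank/sparse decomposition, which flags the remaining salient entries as outliers.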

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 281
72 Intelligent Rheumatoid Arthritis Identification System Based Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Rheumatoid arthritis is a chronic inflammatory disorder that affects the joints by damaging body tissues. Therefore, there is an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first is the image processing stage, in which the images are processed using techniques such as RGB-to-grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs for the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) based on a backpropagation learning algorithm, with the network trained on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%.
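The pattern-averaging step reduces a segmented image to a compact feature vector by averaging non-overlapping blocks; a minimal sketch, with an illustrative toy image and block size rather than the paper's actual dimensions:

```python
def pattern_averaging(image, block):
    """Reduce a 2D grayscale image to a feature vector by averaging
    non-overlapping block x block segments, a common dimensionality
    reduction before feeding a neural classifier."""
    rows, cols = len(image), len(image[0])
    features = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            vals = [image[i][j]
                    for i in range(r, min(r + block, rows))
                    for j in range(c, min(c + block, cols))]
            features.append(sum(vals) / len(vals))  # mean intensity of the block
    return features

img = [[10, 20, 30, 40],
       [10, 20, 30, 40],
       [50, 60, 70, 80],
       [50, 60, 70, 80]]
vec = pattern_averaging(img, block=2)  # 4x4 image -> 4 averaged features
```

Each feature summarizes one image region, so the classifier's input size is fixed and far smaller than the raw pixel count.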

Keywords: rheumatoid arthritis, intelligent identification, neural classifier, segmentation, backpropagation

Procedia PDF Downloads 505
71 Digital Material Characterization Using the Quantum Fourier Transform

Authors: Felix Givois, Nicolas R. Gauger, Matthias Kabel

Abstract:

Efficient digital material characterization is of great interest to many fields of application. It consists of the following three steps. First, a 3D reconstruction of 2D scans must be performed. Then, the resulting gray-value image of the material sample is enhanced by image processing methods. Finally, partial differential equations (PDEs) are solved on the segmented image, and by averaging the resulting solution fields, effective properties like stiffness or conductivity can be computed. Due to the high resolution of current CT images, the latter is typically performed with matrix-free solvers. Among them, a solver that uses the explicit formula of the Green-Eshelby operator in Fourier space has been proposed by Moulinec and Suquet. Its most complex algorithmic part is the Fast Fourier Transform (FFT). In our talk, we will discuss the potential quantum advantage that can be obtained by replacing the FFT with the Quantum Fourier Transform (QFT). We will especially show that the data transfer for noisy intermediate-scale quantum (NISQ) devices can be improved by using appropriate boundary conditions for the PDE, which also allows using semi-classical versions of the QFT. In the end, we will compare the results of the QFT-based algorithm for simple geometries with the results of the FFT-based homogenization method.

Keywords: maximum likelihood quantum amplitude estimation (MLQAE), numerical homogenization, quantum Fourier transform (QFT), NISQ devices

Procedia PDF Downloads 42
70 Analysis of Two Phase Hydrodynamics in a Column Flotation by Particle Image Velocimetry

Authors: Balraju Vadlakonda, Narasimha Mangadoddy

Abstract:

The hydrodynamic behavior in a laboratory flotation column was analyzed using particle image velocimetry (PIV). For complete characterization of column flotation, it is necessary to determine the flow velocity induced by the bubbles in the liquid phase, the bubble velocity, and the bubble characteristics: diameter, shape, and bubble size distribution. An experimental procedure for simultaneous, phase-separated velocity measurements in two-phase flows was introduced. The non-invasive PIV technique was used to quantify the instantaneous flow field, as well as the time-averaged flow patterns, in selected planes of the column. Using the PIV technique with a combination of fluorescent tracer particles, shadowgraphy, and digital phase separation with masking, the bubble velocity as well as the Reynolds stresses in the column were measured. Axial and radial mean velocities as well as fluctuating components were determined for both phases by averaging a sufficient number of double images. The bubble size distribution was cross-validated with a high-speed video camera, the average turbulent kinetic energy of the bubbles was analyzed, and different air flow rates were considered in the experiments.
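The averaging underlying the mean velocities, fluctuating components, and Reynolds stresses is the Reynolds decomposition u = ⟨u⟩ + u′ applied to the stack of instantaneous PIV fields; a minimal sketch with a tiny illustrative record (three snapshots, two measurement points) rather than real PIV data:

```python
import numpy as np

# Each row is one instantaneous velocity field (flattened); illustrative values.
u = np.array([[0.9, 1.1], [1.1, 0.9], [1.0, 1.0]])   # axial velocity samples
v = np.array([[0.1, -0.1], [-0.1, 0.1], [0.0, 0.0]]) # radial velocity samples

u_mean = u.mean(axis=0)   # time-averaged axial velocity at each point
v_mean = v.mean(axis=0)   # time-averaged radial velocity at each point
u_fluct = u - u_mean      # fluctuating components u'
v_fluct = v - v_mean      # fluctuating components v'

# Reynolds shear stress <u'v'> per unit density, and the two-component
# turbulent kinetic energy 0.5 * <u'^2 + v'^2>
reynolds_uv = (u_fluct * v_fluct).mean(axis=0)
tke_2d = 0.5 * (u_fluct ** 2 + v_fluct ** 2).mean(axis=0)
```

With planar PIV only two velocity components are available, so the out-of-plane contribution to the turbulent kinetic energy must be estimated or neglected.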

Keywords: particle image velocimetry (PIV), bubble velocity, bubble diameter, turbulent kinetic energy

Procedia PDF Downloads 468
69 Gait Biometric for Person Re-Identification

Authors: Lavanya Srinivasan

Abstract:

Biometric identification identifies unique features of a person, such as fingerprints, iris, ear, and voice, and requires the subject's permission and physical contact. Gait biometrics identify a person by their unique gait, extracting motion features; their main advantage is identification at a distance, without any physical contact. In this work, gait biometrics are used for person re-identification. A person walking naturally is compared with the same person walking with a bag, coat, and case, recorded using long-wave infrared, short-wave infrared, medium-wave infrared, and visible cameras. The videos are recorded in rural and in urban environments. The pre-processing pipeline includes person detection using YOLO, background subtraction, silhouette extraction, and synthesis of the Gait Entropy Image by averaging the silhouettes. The motion features are extracted from the Gait Entropy Image, reduced in dimensionality by principal component analysis, and recognised using different classifiers. The comparative results with the different classifiers show that linear discriminant analysis outperforms the others, with 95.8% for visible in the rural dataset and 94.8% for long-wave infrared in the urban dataset.
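A common construction of the Gait Entropy Image averages the aligned binary silhouettes of a gait cycle into a per-pixel probability and takes the Shannon entropy, so static body parts score low and moving parts score high; a minimal sketch of that construction under this assumption, with toy 2x2 silhouettes:

```python
import numpy as np

def gait_entropy_image(silhouettes):
    """Gait Entropy Image: average the binary silhouettes of a gait
    cycle to a per-pixel probability p, then take the Shannon entropy
    -p*log2(p) - (1-p)*log2(1-p). Pixels that are always 0 or always 1
    (static background/body) get entropy 0; pixels covering moving
    limbs get entropy near 1."""
    p = np.mean(np.asarray(silhouettes, dtype=float), axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return np.nan_to_num(ent)  # define 0*log(0) as 0

# Four toy silhouettes: left column static, right column alternating
sils = [[[1, 1], [0, 0]],
        [[1, 0], [0, 1]],
        [[1, 1], [0, 0]],
        [[1, 0], [0, 1]]]
geni = gait_entropy_image(sils)  # high entropy only at the moving pixels
```

The resulting entropy map emphasises the dynamic regions of the silhouette, which makes it relatively robust to appearance changes such as carrying a bag or wearing a coat.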

Keywords: biometric, gait, silhouettes, YOLO

Procedia PDF Downloads 144