Search results for: lateral velocity estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4126

16 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study

Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre

Abstract:

Precise landslide monitoring with differential global navigation satellite system (GNSS) positioning is well established, but technical and economic constraints limit its adoption by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone, cost-effective rover network. The system permits deploying up to 15 rovers plus one reference station for differential GNSS. A dedicated radio link connects all modules to a base station, where an embedded computer automatically computes all relative positions (L1 phase, open-source RTKLIB software) and populates an Internet server. Each measurement also includes readings from an internal inclinometer, the battery level, and position quality indices. Unlike standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM coverage, Geomon offers greater flexibility and provides a genuine overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, enabling autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, deploying up to 7 rovers per site for an 18-month survey. The aim was to assess the robustness and accuracy of the system under different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. Performance (percentage of aborted solutions, outliers) degraded with sessions shorter than 30 minutes.
The environment also had a strong influence on the percentage of aborted solutions (ambiguity resolution failures), due to multipath reflections or satellites obstructed by trees and mountains. The baseline length (reference-rover distance, single-baseline processing) reduced accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system's robustness was limited: snow, avalanches, and frost covered some rovers, including the antenna and vertically oriented solar panels, leading to data interruptions, and strong wind damaged a reference station. The ability to change session parameters remotely was very useful. In conclusion, the tested rover network provided the expected sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey saved considerable time. Performance strongly depends on surrounding conditions, but short preliminary measurements should allow moving a rover to a better final placement. The system offers a promising hazard-mitigation technique. Improvements could include data post-processing for alerts and automatic adjustment of the duration and number of sessions based on battery level and rover displacement velocity.

Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter.

Procedia PDF Downloads 109
15 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes, and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞ controls) can only control a small part of the mechanical modes, namely only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. A further problem is unknown process forces, such as cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a single-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical, and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes, and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation, and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and its mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced via substructure coupling to a mass-damper system that models the most important modes of the axes. This reduced system is modelled with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors.
In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes, and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based, and evolutionary) are tested on the case. The chosen objective is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically so that the overshoot is less than 1%. It is found that the ordering of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation (DFG) for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
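The component-selection loop described above can be sketched in miniature. The catalogue values, the toy second-order closed-loop plant, and the fixed damping ratio standing in for the automatic controller tuning below are all illustrative assumptions, not data from the paper:

```python
import itertools
import math

# Hypothetical component catalogues (motor inertia in kg*m^2, gear ratio,
# screw pitch in m/rev) -- illustrative values, not real supplier data.
motors = {"M1": 1.2e-4, "M2": 2.9e-4, "M3": 6.5e-4}
gears  = {"G1": 3.0, "G2": 5.0}
screws = {"S1": 0.005, "S2": 0.010}

TABLE_MASS = 120.0  # kg, moved axis mass (assumed)

def objective(j_motor, ratio, pitch):
    """Integral of |1 - y(t)| for a unit position step on a toy
    second-order closed loop (a stand-in for the Modelica model)."""
    # Reflected inertia at the motor shaft
    j_total = j_motor + TABLE_MASS * (pitch / (2 * math.pi * ratio)) ** 2
    wn = math.sqrt(1.0 / j_total)  # toy loop stiffness of 1 N*m/rad
    zeta = 0.826                   # damping chosen for < 1% overshoot
    dt, t, acc, y, v = 1e-4, 0.0, 0.0, 0.0, 0.0
    while t < 0.5:                 # semi-implicit Euler integration
        a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v
        v += a * dt
        y += v * dt
        acc += abs(1.0 - y) * dt
        t += dt
    return acc

# Exhaustive black-box search over the discrete component combinations
best = min(itertools.product(motors, gears, screws),
           key=lambda c: objective(motors[c[0]], gears[c[1]], screws[c[2]]))
print("best combination:", best)
```

With catalogues this small an exhaustive sweep suffices; the surrogate-based and evolutionary methods named in the abstract matter once the combinatorial space grows.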

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 283
14 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive-feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement of the 'xkl' legacy software. The integration incorporates re-assigned spectrogram methodologies, enabling detailed acoustic analysis, while the proposed combined CNN-RNN model demonstrates high precision and robustness in landmark detection. The addition of re-assigned spectrogram fusion to the 'xkl' software particularly improves the precision of vowel formant estimation, which in turn yields a substantial performance gain in landmark detection compared to conventional methods. The proposed model thus represents a state-of-the-art solution for distinctive-feature-based speech recognition systems. On the deep learning side, the combined CNNs and RNNs are equipped with specialized temporal embeddings, self-attention mechanisms, and positional embeddings. This design allows the model to capture intricate dependencies within Italian speech vowels, making it highly adaptable in the distinctive-feature domain. Furthermore, an advanced temporal modeling approach employing Bayesian temporal encoding refines the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology.
In rigorous testing on the LaMIT database, consisting of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates strong performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding represents a significant advancement in Italian speech vowel landmark detection, offering accuracy and adaptability that position the proposed model among the leaders in the field. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels, and it establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across domains requiring robust speech recognition.
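Detection-rate figures like the 95%/10% reported above follow from a matching rule between detected and ground-truth landmark times. A minimal scoring sketch; the tolerance window and nearest-match rule are illustrative assumptions, since the paper does not specify its scoring procedure:

```python
def landmark_scores(detected, truth, tol=0.02):
    """True and false detection rates for landmark times (seconds).
    A detection within `tol` of an unmatched ground-truth landmark
    counts as a true hit; anything else is a false detection."""
    matched = set()
    true_hits = 0
    for d in sorted(detected):
        # nearest unmatched ground-truth landmark within tolerance
        cands = [t for t in truth if t not in matched and abs(d - t) <= tol]
        if cands:
            hit = min(cands, key=lambda t: abs(d - t))
            matched.add(hit)
            true_hits += 1
    tdr = true_hits / len(truth)                       # true detection rate
    fdr = (len(detected) - true_hits) / len(detected)  # false detection rate
    return tdr, fdr

truth    = [0.10, 0.35, 0.62, 0.90]   # made-up landmark times
detected = [0.11, 0.36, 0.63, 0.75]
print(landmark_scores(detected, truth))  # 3 of 4 found, 1 spurious
```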

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 59
13 Aquaporin-1 as a Differential Marker in Toxicant-Induced Lung Injury

Authors: Ekta Yadav, Sukanta Bhattacharya, Brijesh Yadav, Ariel Hus, Jagjit Yadav

Abstract:

Background and Significance: Respiratory exposure to toxicants (chemicals or particulates) disrupts lung homeostasis, leading to lung toxicity/injury manifested as pulmonary inflammation, edema, and/or other effects depending on the type and extent of exposure. This emphasizes the need to investigate toxicant-type-specific mechanisms in order to identify therapeutic targets. Aquaporins, also known as water channels, are known to play a role in lung homeostasis. In particular, the two major lung aquaporins, AQP5 and AQP1, expressed in the alveolar epithelium and vascular endothelium respectively, allow movement of fluid between the alveolar air space and the associated vasculature. In view of this, the current study focuses on the regulation of lung aquaporins and other targets during inhalation exposure to toxic chemicals (cigarette smoke chemicals) versus toxic particles (carbon nanoparticles), or co-exposures, to understand their relevance as markers of injury and intervention. Methodologies: C57BL/6 mice (5-7 weeks old) were used in this study following a protocol approved by the University of Cincinnati Institutional Animal Care and Use Committee (IACUC). The mice were exposed via oropharyngeal aspiration to a multiwall carbon nanotube (MWCNT) particle suspension once (33 µg/mouse) followed by housing for four weeks, or to cigarette smoke extract (CSE) at a daily dose of 30 µl/mouse for four weeks, or to co-exposure using the combined regimen. Control groups received vehicles on the same dosing schedule. Lung toxicity/injury was assessed in terms of homeostasis changes in the lung tissue and lumen. Exposed lungs were analyzed for transcriptional expression of specific targets (AQPs, surfactant protein A, Mucin 5b) in relation to tissue homeostasis. Total RNA extracted from lungs using the TRI Reagent kit was analyzed by qRT-PCR using gene-specific primers.
Total protein in bronchoalveolar lavage (BAL) fluid was determined with the DC protein assay kit (Bio-Rad). GraphPad Prism 5.0 (La Jolla, CA, USA) was used for all analyses. Major findings: CNT exposure, alone or as co-exposure with CSE, increased the total protein content in the BAL fluid (lung lumen rinse), implying compromised membrane integrity and cellular infiltration into the lung alveoli. In contrast, CSE showed no significant effect. AQP1, required for water transport across the membranes of endothelial cells in the lungs, was significantly upregulated by CNT exposure but downregulated by CSE exposure, and showed an intermediate level of expression in the co-exposure group. Both CNT and CSE exposures significantly downregulated Muc5b and SP-A expression, and the co-exposure showed either no significant effect (Muc5b) or a significant downregulating effect (SP-A), suggesting an increased propensity for infection in the exposed lungs. Conclusions: The current study, based on a lung toxicity mouse model, showed that both toxicant types, particles (CNT) versus chemicals (CSE), cause similar downregulation of lung innate defense targets (SP-A, Muc5b), with a mostly additive effect under co-exposure. However, the two toxicant types show differential induction of aquaporin-1, coinciding with correspondingly differential damage to alveolar integrity (vascular permeability). This implies the potential of AQP1 as a differential marker of toxicant-type-specific lung injury.
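qRT-PCR expression changes such as the AQP1 up- and downregulation reported here are conventionally quantified with the 2^-ΔΔCt method. A minimal sketch, with made-up Ct values and a housekeeping reference gene (e.g., GAPDH) assumed, since the abstract does not list the actual Ct data:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method (Livak & Schmittgen)."""
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# e.g. a target gene in exposed vs vehicle lungs, normalized to a
# housekeeping gene; all Ct values here are illustrative
fc = fold_change(ct_target_treated=22.0, ct_ref_treated=18.0,
                 ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(fc)  # 4.0 -> ~4-fold upregulation
```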

Keywords: aquaporin, gene expression, lung injury, toxicant exposure

Procedia PDF Downloads 179
12 Influence of Thermal Annealing on Phase Composition and Structure of Quartz-Sericite Mineral

Authors: Atabaev I. G., Fayziev Sh. A., Irmatova Sh. K.

Abstract:

Raw materials with a high content of potassium oxide are widely used in ceramic technology to prevent or reduce deformation of ceramic goods during drying and thermal annealing. Because of its low melting temperature, such material is also used to lower the annealing temperature during fabrication of ceramic goods [1,2]. So-called "porcelain stones" (quartz-sericite (muscovite) minerals) can also be used to prevent deformation, as the potassium oxide content of muscovite is rather high (SiO2 + KAl2[AlSi3O10](OH)2) [3]. To assess the possibility of using this mineral for ceramic manufacture, the present article investigates the influence of thermal processing on the phase and chemical composition of this raw material. As with other ceramic raw materials (kaolin, white-burning clays), the basic industrial requirements for the quality of a "porcelain stone" are: small particle size, relatively high uniformity of the distribution of components and phases, white color after firing, and a small content of colorant oxides or chromophores (Fe2O3, FeO, TiO2, etc.) [4,5]. In the present work, a natural mineral from the Boynaksay deposit (Uzbekistan) is investigated. The samples were mechanically polished for examination by scanning electron microscopy. Powder with a particle size up to 63 μm was used for X-ray diffractometry and chemical analysis. Annealing of the samples was performed at 900, 1120, and 1350 °C for 1 hour. The chemical composition of the Boynaksay raw material according to chemical analysis is presented in Table 1; for comparison, the compositions of raw materials from Russia and the USA are also given. In the Boynaksay quartz-sericite, the average proportions of quartz and sericite are 55-60% and 30-35%, respectively. The distribution of the quartz and sericite phases in the raw material was investigated using a JEOL JXA-8800R electron probe microanalyzer.
Figure 1 presents scanning electron microscope (SEM) micrographs of the surface and the distributions of Al, Si, and K atoms in the sample. As can be seen, the fine-grained, white, dense mineral comprises quartz, sericite, and a small content of impurity minerals. The quartz crystals are mostly 80-500 μm in size. Between the quartz crystals lie sericite inclusions of tabular form with radiant structure; the sericite crystals are ~40-250 μm in size. Using data on interplanar distances [6,7] and ASTM powder X-ray diffraction data, it is shown that the natural "porcelain stone" quartz-sericite consists of quartz (SiO2), sericite (muscovite-type, KAl2[AlSi3O10](OH)2), and kaolinite (Al2O3·2SiO2·2H2O) (see Figure 2 and Table 2). As seen in Figure 3 and Table 3a, after annealing at 900 °C the quartz-sericite contains quartz (SiO2) and muscovite (KAl2[AlSi3O10](OH)2); the peaks related to kaolinite are absent. After annealing at 1120 °C, full decomposition of muscovite and formation of a mullite phase (3Al2O3·2SiO2) is observed (weak mullite peaks appear in Figure 3b and Table 3b). After annealing at 1350 °C, the samples contain crystalline quartz and mullite phases (Figure 3c and Table 3c). Mullite is well known to give ceramics high density and abrasive and chemical stability. Thus, the experimental data obtained on the formation of various phases during thermal annealing can be used to develop fabrication technology for advanced materials. Conclusion: The influence of thermal annealing in the interval 900-1350 °C on the phase composition and structure of the quartz-sericite mineral was investigated. It is shown that annealing changes the phase content of the raw material: after annealing at 1350 °C, the samples contain crystalline quartz and mullite, which gives ceramics high density and abrasive and chemical stability.
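The phase identification from interplanar distances rests on Bragg's law, n·λ = 2·d·sin θ. A minimal sketch, assuming Cu Kα radiation (the study does not state its X-ray source, so the wavelength here is an assumption):

```python
import math

LAMBDA = 1.5406  # angstrom, Cu K-alpha1 (assumed radiation)

def d_spacing(two_theta_deg, n=1):
    """Interplanar spacing d from a powder-diffraction peak position
    via Bragg's law, n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * LAMBDA / (2.0 * math.sin(theta))

# Quartz's strongest reflection (101) appears near 2-theta = 26.64 deg
print(round(d_spacing(26.64), 3))  # ~3.34 angstrom
```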

Keywords: quartz-sericite, kaolinite, mullite, thermal processing

Procedia PDF Downloads 410
11 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by phenomena such as photon emission, absorption, and scattering. Solving the governing integro-differential equation of radiative transfer is a complex process, especially when the effects of a participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of cases with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the model. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand but increases the computational cost on the other. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better alternative to uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
These sequences possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. A supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of randomly sampled photon bundles were recorded to train an artificial neural network (ANN) back-propagation model. The flux calculated with the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be explored so that the problem setting can be fully represented by the ANN model, and better results can be achieved in this unexplored domain.
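The variance advantage of low-discrepancy sequences over pseudo-random sampling can be seen on a toy integrand. This sketch uses the base-2 van der Corput (one-dimensional Halton) radical inverse, with a simple u² integrand standing in for the photon-bundle spectrum sampling of the paper:

```python
import random

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# Estimate E[u^2] for u uniform on [0,1) (exact value 1/3) with the
# quasi-random sequence versus plain pseudo-random sampling.
random.seed(0)
N = 2048
qmc = sum(halton(i, 2) ** 2 for i in range(1, N + 1)) / N
mc  = sum(random.random() ** 2 for _ in range(N)) / N
print(abs(qmc - 1 / 3), abs(mc - 1 / 3))  # QMC error is typically far smaller
```

The QMC error shrinks roughly as O(log N / N) versus the O(1/√N) of plain Monte Carlo, which is the convergence gain the abstract reports.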

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 219
10 Correlation of Unsuited and Suited 5ᵗʰ Female Hybrid III Anthropometric Test Device Model under Multi-Axial Simulated Orion Abort and Landing Conditions

Authors: Christian J. Kennett, Mark A. Baldwin

Abstract:

As several companies are working towards returning American astronauts back to space on US-made spacecraft, NASA developed a human flight certification-by-test and analysis approach due to the cost-prohibitive nature of extensive testing. This process relies heavily on the quality of analytical models to accurately predict crew injury potential specific to each spacecraft and under dynamic environments not tested. As the prime contractor on the Orion spacecraft, Lockheed Martin was tasked with quantifying the correlation of analytical anthropometric test devices (ATDs), also known as crash test dummies, against test measurements under representative impact conditions. Multiple dynamic impact sled tests were conducted to characterize Hybrid III 5th ATD lumbar, head, and neck responses with and without a modified shuttle-era advanced crew escape suit (ACES) under simulated Orion landing and abort conditions. Each ATD was restrained via a 5-point harness in a mockup Orion seat fixed to a dynamic impact sled at the Wright Patterson Air Force Base (WPAFB) Biodynamics Laboratory in the horizontal impact accelerator (HIA). ATDs were subject to multiple impact magnitudes, half-sine pulse rise times, and XZ - ‘eyeballs out/down’ or Z-axis ‘eyeballs down’ orientations for landing or an X-axis ‘eyeballs in’ orientation for abort. Several helmet constraint devices were evaluated during suited testing. Unique finite element models (FEMs) were developed of the unsuited and suited sled test configurations using an analytical 5th ATD model developed by LSTC (Livermore, CA) and deformable representations of the seat, suit, helmet constraint countermeasures, and body restraints. Explicit FE analyses were conducted using the non-linear solver LS-DYNA. 
Head linear and rotational accelerations, head rotational velocity, upper neck force and moment, and lumbar force time histories were compared between test and analysis using the enhanced error assessment of response time histories (EEARTH) composite score index. The EEARTH rating, paired with the correlation and analysis (CORA) corridor rating, provided a composite ISO score that was used to assess model correlation accuracy. NASA occupant protection subject matter experts established an ISO score of 0.5 or greater as the minimum expectation for correlating analytical and experimental ATD responses. Unsuited 5th ATD head X, Z, and resultant linear accelerations, head Y rotational accelerations and velocities, neck X and Z forces, and lumbar Z forces all showed consistent ISO scores above 0.5 in the XZ impact orientation, regardless of peak g-level or rise time. Upper neck Y moments were near or above the 0.5 score for most of the XZ cases. Similar trends were found in the XZ and Z-axis suited tests despite the addition of several different countermeasures for restraining the helmet. For the X-axis ‘eyeballs in’ loading direction, only resultant head linear acceleration and lumbar Z-axis force produced ISO scores above 0.5, whether unsuited or suited. The analytical LSTC 5th ATD model showed good correlation across multiple head, neck, and lumbar responses in both the unsuited and suited configurations when loaded in the XZ ‘eyeballs out/down’ direction. Upper neck moments were consistently the most difficult to predict, regardless of impact direction or test configuration.
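The half-sine impact pulses used to drive the sled can be sketched directly. The 10 g peak and 50 ms rise time (100 ms total duration) below are illustrative placeholders, not values from the actual test matrix:

```python
import math

def half_sine_pulse(peak_g, duration_s, n=9):
    """Sampled half-sine acceleration pulse a(t) = A*sin(pi*t/T), the
    pulse shape named in the text; the rise time to peak is T/2."""
    times = [k * duration_s / (n - 1) for k in range(n)]
    accels = [peak_g * math.sin(math.pi * t / duration_s) for t in times]
    return times, accels

# e.g. a 10 g landing-like pulse with a 50 ms rise time (100 ms total)
times, accels = half_sine_pulse(10.0, 0.100)
print(max(accels))  # peak occurs at mid-pulse
```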

Keywords: impact biomechanics, manned spaceflight, model correlation, multi-axial loading

Procedia PDF Downloads 111
9 CLOUD Japan: Prospective Multi-Hospital Study to Determine the Population-Based Incidence of Hospitalized Clostridium difficile Infections

Authors: Kazuhiro Tateda, Elisa Gonzalez, Shuhei Ito, Kirstin Heinrich, Kevin Sweetland, Pingping Zhang, Catia Ferreira, Michael Pride, Jennifer Moisi, Sharon Gray, Bennett Lee, Fred Angulo

Abstract:

Clostridium difficile (C. difficile) is the most common cause of antibiotic-associated diarrhea and infectious diarrhea in healthcare settings. Japan has an aging population; the elderly are at increased risk of hospitalization, antibiotic use, and C. difficile infection (CDI). Little is known about the population-based incidence and disease burden of CDI in Japan, although limited hospital-based studies have reported a lower incidence than in the United States. To understand the CDI disease burden in Japan, CLOUD (Clostridium difficile Infection Burden of Disease in Adults in Japan) was developed. CLOUD will derive population-based incidence estimates of the number of CDI cases per 100,000 population per year in Ota-ku (population 723,341), one of the districts of Tokyo, Japan. CLOUD will include approximately 14 of the 28 Ota-ku hospitals, including Toho University Hospital, a 1,000-bed tertiary care teaching hospital. During the 12-month patient enrollment period, scheduled to begin in November 2018, Ota-ku residents > 50 years of age who are hospitalized at a participating hospital with diarrhea (> 3 unformed stools (Bristol Stool Chart 5-7) in 24 hours) will be actively ascertained, consented, and enrolled by study surveillance staff. A stool specimen will be collected from enrolled patients and tested at a local reference laboratory (LSI Medience, Tokyo) using QUIK CHEK COMPLETE® (Abbott Laboratories), which simultaneously tests specimens for the presence of glutamate dehydrogenase (GDH) and C. difficile toxins A and B. A frozen stool specimen will also be sent to the Pfizer Laboratory (Pearl River, United States) for analysis using a two-step diagnostic testing algorithm based on detection of C. difficile strains/spores harboring the toxin B gene by PCR, followed by detection of free toxins (A and B) using a proprietary cell cytotoxicity neutralization assay (CCNA) developed by Pfizer. Positive specimens will be anaerobically cultured, and C. difficile isolates will be characterized by ribotyping and whole-genome sequencing. CDI patients enrolled in CLOUD will be contacted weekly for 90 days following diarrhea onset to describe clinical outcomes, including recurrence, reinfection, and mortality, as well as patient-reported economic, clinical, and humanistic outcomes (e.g., health-related quality of life, worsening of comorbidities, and patient and caregiver work absenteeism). Studies will also be undertaken to fully characterize the catchment area to enable population-based estimates. The 12-month active ascertainment of CDI cases among hospitalized Ota-ku residents with diarrhea, together with the characterization of the Ota-ku catchment area, including estimation of the proportion of all hospitalizations of Ota-ku residents that occur in the CLOUD-participating hospitals, will yield CDI population-based incidence estimates, which can be stratified by age group, risk group, and source (hospital-acquired or community-acquired). These incidence estimates will be extrapolated, following age standardization using national census data, to yield CDI disease burden estimates for Japan. CLOUD also serves as a model for studies in other countries that can use the CLOUD protocol to estimate CDI disease burden.
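The extrapolation from hospital-ascertained cases to a population-based rate reduces to a simple calculation once the catchment capture fraction is estimated. The case count and 50% capture fraction below are illustrative placeholders, not CLOUD results; only the Ota-ku population comes from the text:

```python
def cdi_incidence(cases_observed, capture_fraction, population, years=1.0):
    """Crude incidence per 100,000 population per year, scaling observed
    cases by the fraction of resident hospitalizations the participating
    hospitals capture."""
    estimated_cases = cases_observed / capture_fraction
    return estimated_cases / population / years * 100_000

# Hypothetical: 180 cases ascertained, participating hospitals assumed to
# capture half of all Ota-ku resident hospitalizations
rate = cdi_incidence(cases_observed=180, capture_fraction=0.5,
                     population=723_341)
print(round(rate, 1))  # cases per 100,000 per year
```

Age standardization, as planned in the protocol, would then reweight age-stratified rates by national census age-group weights.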

Keywords: Clostridium difficile, disease burden, epidemiology, study protocol

Procedia PDF Downloads 257
8 Computational Fluid Dynamics Simulation of a Nanofluid-Based Annular Solar Collector with Different Metallic Nano-Particles

Authors: Sireetorn Kuharat, Anwar Beg

Abstract:

Motivation: Solar energy constitutes the most promising renewable energy source on earth. Nanofluids are a very successful family of engineered fluids containing well-dispersed nanoparticles suspended in a stable base fluid. The presence of metallic nanoparticles (e.g., gold, silver, copper, aluminum) significantly improves the thermo-physical properties of the host fluid, generally producing a considerable boost in the thermal conductivity, density, and viscosity of the nanofluid compared with the original base (host) fluid. This modification of fundamental thermal properties has profound implications for the convective heat transfer process in solar collectors. The potential for improving direct-absorber solar collector efficiency is immense, and to gain deeper insight into the impact of different metallic nanoparticles on efficiency and temperature enhancement, the present work describes recent computational fluid dynamics simulations of an annular solar collector system, studying several different metallic nanoparticles and comparing their performance. Methodologies: A numerical study of convective heat transfer in an annular pipe solar collector system is conducted. The inner tube contains pure water and the annular region contains nanofluid. Three-dimensional, steady-state, incompressible laminar flow of water- (and other) based nanofluids containing a variety of metallic nanoparticles (copper oxide, aluminum oxide, and titanium oxide) is examined. The Tiwari-Das model is deployed, for which the thermal conductivity, specific heat capacity, and viscosity of the nanofluid suspensions are evaluated as functions of the solid nanoparticle volume fraction. Radiative heat transfer is also incorporated using the ANSYS solar flux and Rosseland radiative models. The ANSYS FLUENT finite volume code (version 18.1) is employed to simulate the thermo-fluid characteristics via the SIMPLE algorithm.
Mesh-independence tests are conducted. Validation of the simulations is also performed against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, with excellent correlation achieved. The influence of volume fraction on temperature, velocity, and pressure contours is computed and visualized. Main findings: The best overall performance is achieved with copper oxide nanoparticles. Thermal enhancement is generally maximized when water is utilized as the base fluid, although in certain cases ethylene glycol also performs very efficiently. Increasing the nanoparticle solid volume fraction elevates temperatures, although the effects are less prominent in aluminum and titanium oxide nanofluids. Significant improvement in temperature distributions is achieved with copper oxide nanofluid, attributed to the superior thermal conductivity of copper compared to the other metallic nanoparticles studied. Important fluid dynamic characteristics are also visualized, including circulation and temperature overshoots near the upper region of the annulus. Radiative flux is observed to enhance temperatures significantly via energization of the nanofluid, although again the best elevation in performance is attained consistently with copper oxide. Conclusions: The current study generalizes previous investigations by considering multiple metallic nanoparticles and furthermore provides a good benchmark against which to calibrate experimental tests on a new solar collector configuration currently being designed at Salford University. Detailed insights into the thermal conductivity and viscosity of metallic nanofluid suspensions are also provided. The analysis is also extendable to other metallic nanoparticles, including gold and zinc.
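As a rough illustration of the Tiwari-Das property-evaluation step, the sketch below computes effective nanofluid properties as functions of the solid volume fraction. The Maxwell model for conductivity, the Brinkman model for viscosity, and simple mixture rules for density and heat capacity are assumed here; the abstract does not state which correlations the simulations actually use, and the property values are nominal room-temperature figures.

```python
def nanofluid_properties(phi, base, particle):
    """Effective properties of a nanofluid at solid volume fraction phi.

    Mixture rules for density and heat capacity, the Brinkman model for
    viscosity, and the Maxwell model for thermal conductivity - common
    choices for the Tiwari-Das formulation, assumed here for illustration.
    """
    rho = (1 - phi) * base["rho"] + phi * particle["rho"]
    rho_cp = (1 - phi) * base["rho"] * base["cp"] + phi * particle["rho"] * particle["cp"]
    cp = rho_cp / rho
    mu = base["mu"] / (1 - phi) ** 2.5  # Brinkman viscosity model
    kb, kp = base["k"], particle["k"]
    # Maxwell model for the effective thermal conductivity
    k = kb * (kp + 2 * kb - 2 * phi * (kb - kp)) / (kp + 2 * kb + phi * (kb - kp))
    return {"rho": rho, "cp": cp, "mu": mu, "k": k}

# Nominal properties: water as base fluid, copper oxide nanoparticles
water = {"rho": 997.1, "cp": 4179.0, "mu": 8.9e-4, "k": 0.613}
cuo = {"rho": 6500.0, "cp": 535.6, "k": 20.0}
props = nanofluid_properties(0.04, water, cuo)
```

For example, a 4% copper oxide loading in water raises both the effective conductivity and the viscosity relative to the base fluid, which is exactly the trade-off a collector simulation must resolve.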

Keywords: heat transfer, annular nanofluid solar collector, ANSYS FLUENT, metallic nanoparticles

Procedia PDF Downloads 141
7 Ensemble Sampler For Infinite-Dimensional Inverse Problems

Authors: Jeremie Coullon, Robert J. Webber

Abstract:

We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC) methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of the functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal.
This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen–Loève (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler that is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters. We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
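The interpolate/extrapolate proposal at the heart of AIES is the Goodman-Weare stretch move. The sketch below shows that building block on a finite-dimensional target (in FES it would act only on the low-wavenumber KL coefficients); the stretch parameter a=2 and the ensemble size are conventional defaults, not values from this work.

```python
import numpy as np

def stretch_move(walkers, log_prob, a=2.0, rng=None):
    """One serial sweep of the Goodman-Weare affine-invariant stretch move.

    walkers: (n, d) array of ensemble positions; log_prob maps a point to
    its log target density. Minimal sketch of the AIES building block.
    """
    rng = np.random.default_rng(rng)
    n, d = walkers.shape
    out = walkers.copy()
    for j in range(n):
        # pick a complementary walker k != j from the current ensemble
        k = rng.integers(n - 1)
        if k >= j:
            k += 1
        # stretch factor z ~ g(z) proportional to 1/sqrt(z) on [1/a, a]
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
        # interpolate/extrapolate along the line through walker k
        proposal = out[k] + z * (out[j] - out[k])
        # accept with probability min(1, z^(d-1) * pi(proposal) / pi(current))
        log_accept = (d - 1) * np.log(z) + log_prob(proposal) - log_prob(out[j])
        if np.log(rng.random()) < log_accept:
            out[j] = proposal
    return out

# Demo: sample a 2D standard Gaussian with 8 walkers
g = np.random.default_rng(42)
ensemble = g.normal(size=(8, 2))
for _ in range(200):
    ensemble = stretch_move(ensemble, lambda x: -0.5 * float(x @ x), rng=g)
```

Because each proposal is drawn along the line through a randomly chosen partner walker, the move is invariant under affine transformations of the target, which is what lets it adapt to the covariance structure without estimating it.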

Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction

Procedia PDF Downloads 152
6 A Multi-Scale Approach to Space Use: Habitat Disturbance Alters Behavior, Movement and Energy Budgets in Sloths (Bradypus variegatus)

Authors: Heather E. Ewart, Keith Jensen, Rebecca N. Cliffe

Abstract:

Fragmentation and changes in the structural composition of tropical forests – as a result of intensifying anthropogenic disturbance – are increasing pressures on local biodiversity. Species with low dispersal abilities have some of the highest extinction risks in response to environmental change, as even small-scale environmental variation can substantially impact their space use and energetic balance. Understanding the implications of forest disturbance is therefore essential, ultimately allowing for more effective and targeted conservation initiatives. Here, the impact of different levels of forest disturbance on the space use, energetics, movement and behavior of 18 brown-throated sloths (Bradypus variegatus) was assessed in the South Caribbean of Costa Rica. A multi-scale framework was used to measure forest disturbance, including large-scale (landscape-level classifications) and fine-scale (within and surrounding individual home ranges) forest composition. Three landscape-level classifications were identified: primary forests (undisturbed), secondary forests (some disturbance, regenerating) and urban forests (high levels of disturbance and fragmentation). Finer-scale forest composition was determined using measurements of habitat structure and quality within and surrounding individual home ranges for each sloth (home range estimates were calculated using autocorrelated kernel density estimation [AKDE]). Measurements of forest quality included tree connectivity, density, diameter and height, species richness, and percentage of canopy cover. To determine space use, energetics, movement and behavior, six sloths in urban forests, seven sloths in secondary forests and five sloths in primary forests were tracked using a combination of Very High Frequency (VHF) radio transmitters and Global Positioning System (GPS) technology over an average period of 120 days.
All sloths were also fitted with micro data-loggers (containing tri-axial accelerometers and pressure loggers) for an average of 30 days to allow for behavior-specific movement analyses (data analysis ongoing for data-loggers and primary forest sloths). Data-logger analyses included the determination of activity budgets, circadian rhythms of activity, and energy expenditure (using the vector of the dynamic body acceleration [VeDBA] as a proxy). Analyses to date indicate that home range size significantly increased with the level of forest disturbance. Female sloths inhabiting secondary forests averaged 0.67-hectare home ranges, while female sloths inhabiting urban forests averaged 1.93-hectare home ranges (estimates are represented by median values to account for the individual variation in home range size in sloths). Likewise, home range estimates for male sloths were 2.35 hectares in secondary forests and 4.83 hectares in urban forests. Sloths in urban forests also used nearly double (median = 22.5) the number of trees as sloths in secondary forests (median = 12). These preliminary data indicate that forest disturbance likely heightens the energetic requirements of sloths, a species already critically limited by low dispersal ability and rates of energy acquisition. Energetic and behavioral analyses from the data-loggers will be considered in the context of fine-scale forest composition measurements (i.e., habitat quality and structure) and are expected to reflect the observed home range and movement constraints. The implications of these results are far-reaching, presenting an opportunity to define a critical index of habitat connectivity for low dispersal species such as sloths.
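The VeDBA proxy mentioned above has a standard construction: subtract a running-mean estimate of the static (gravitational) acceleration from each accelerometer axis, then take the vector norm of the remaining dynamic component. The sketch below assumes a simple moving-average smoother and an arbitrary window length; the study's exact smoothing choice is not stated in the abstract.

```python
import numpy as np

def vedba(ax, ay, az, window=25):
    """Vector of the dynamic body acceleration (VeDBA).

    Static (gravitational) acceleration is estimated per axis with a
    running mean over `window` samples and subtracted; VeDBA is the
    vector norm of the remaining dynamic component. The window length
    (commonly ~2 s of samples) is an illustrative assumption.
    """
    acc = np.vstack([np.asarray(ax), np.asarray(ay), np.asarray(az)]).astype(float)
    kernel = np.ones(window) / window
    static = np.vstack([np.convolve(a, kernel, mode="same") for a in acc])
    dynamic = acc - static
    return np.sqrt((dynamic ** 2).sum(axis=0))
```

As a sanity check, a stationary logger (a constant 1 g on one axis) yields a VeDBA of essentially zero away from the window edges, since all acceleration is then static.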

Keywords: biodiversity conservation, forest disturbance, movement ecology, sloths

Procedia PDF Downloads 104
5 Catastrophic Health Expenditures: Evaluating the Effectiveness of Nepal's National Health Insurance Program Using Propensity Score Matching and Doubly Robust Methodology

Authors: Simrin Kafle, Ulrika Enemark

Abstract:

Catastrophic health expenditure (CHE) is a critical issue in low- and middle-income countries like Nepal, exacerbating financial hardship among vulnerable households. This study assesses the effectiveness of Nepal’s National Health Insurance Program (NHIP), launched in 2015, in reducing out-of-pocket (OOP) healthcare costs and mitigating CHE. Conducted in Pokhara Metropolitan City, the study used an analytical cross-sectional design, sampling 1276 households through a two-stage random sampling method. Data were collected via face-to-face interviews between May and October 2023. The analysis was conducted using SPSS version 29, incorporating propensity score matching (PSM) to minimize biases and create comparable groups of enrolled and non-enrolled households in the NHIP. PSM helped reduce confounding effects by matching households with similar baseline characteristics. Additionally, a doubly robust methodology was employed, combining propensity score adjustment with regression modeling to enhance the reliability of the results. This comprehensive approach ensured a more accurate estimation of the impact of NHIP enrollment on CHE. Among the 1276 sampled households, 534 (41.8%) were enrolled in the NHIP. Of them, 84.3% renewed their insurance card, though some cited long waiting times, lack of medications, and complex procedures as barriers to renewal. Approximately 57.3% of households reported known diseases before enrollment, with 49.8% attending routine health check-ups in the past year. The primary motivation for enrollment was encouragement from insurance employees (50.2%). The data indicate that 12.5% of enrolled households experienced CHE versus 7.5% among the non-enrolled. Enrollment in the NHIP was thus not associated with lower CHE; rather, enrolled households had higher odds of experiencing it (AOR: 1.98, 95% CI: 1.21-3.24).
Key factors associated with increased CHE risk were presence of non-communicable diseases (NCDs) (AOR: 3.94, 95% CI: 2.10-7.39), acute illnesses/injuries (AOR: 6.70, 95% CI: 3.97-11.30), larger household size (AOR: 3.09, 95% CI: 1.81-5.28), and households below the poverty line (AOR: 5.82, 95% CI: 3.05-11.09). Other factors such as gender, education level, caste/ethnicity, presence of elderly members, and under-five children also showed varying associations with CHE, though not all were statistically significant. The study concludes that enrollment in the NHIP does not significantly reduce the risk of CHE. The reason for this could be inadequate coverage, where high-cost medicines, treatments, and transportation costs are not fully included in the insurance package, leading to significant out-of-pocket expenses. We also considered the long waiting time, lack of medicines, and complex procedures for the utilization of NHIP benefits, which might result in the underuse of covered services. Finally, gaps in enrollment and retention might leave certain households vulnerable to CHE despite the existence of NHIP. Key factors contributing to increased CHE include NCDs, acute illnesses, larger household sizes, and poverty. To improve the program’s effectiveness, it is recommended that NHIP benefits and coverage be expanded to better protect against high healthcare costs. Additionally, simplifying the renewal process, addressing long waiting times, and enhancing the availability of services could improve member satisfaction and retention. Targeted financial protection measures should be implemented for high-risk groups, and efforts should be made to increase awareness and encourage routine health check-ups to prevent severe health issues that contribute to CHE.
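The doubly robust step described above can be sketched as an augmented inverse-probability-weighting (AIPW) estimator: a propensity model for enrollment combined with outcome models for CHE in each arm, giving a consistent estimate if either model is correctly specified. The study ran its analysis in SPSS with matched groups; the Python sketch below, using simulated covariates and hypothetical variable roles, only illustrates the estimator's logic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aipw_effect(X, treat, y):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    X: covariates, treat: 0/1 enrollment indicator, y: 0/1 CHE outcome.
    Combines a logistic propensity model with per-arm logistic outcome
    models; consistent if either component model is correct.
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)  # trim extreme weights for stability
    mu1 = LogisticRegression(max_iter=1000).fit(X[treat == 1], y[treat == 1]).predict_proba(X)[:, 1]
    mu0 = LogisticRegression(max_iter=1000).fit(X[treat == 0], y[treat == 0]).predict_proba(X)[:, 1]
    # augmented estimates of E[Y(1)] and E[Y(0)]
    dr1 = mu1 + treat * (y - mu1) / ps
    dr0 = mu0 + (1 - treat) * (y - mu0) / (1 - ps)
    return float(np.mean(dr1 - dr0))
```

On data where enrollment has no true effect on the outcome, the estimate should hover near zero even when enrollment and outcome share confounders, which is the property that motivates the doubly robust design.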

Keywords: catastrophic health expenditure, effectiveness, national health insurance program, Nepal

Procedia PDF Downloads 18
4 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales

Authors: Philipp Sommer, Amgad Agoub

Abstract:

The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and Wales, along with several European Union (EU) countries, have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated.
These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
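To make the feature-importance step concrete, the sketch below fits a gradient-boosted model on synthetic certificate-style features and reads off impurity-based importances. It uses scikit-learn's GradientBoostingRegressor as a stand-in for the XGBoost model in the study, and both the feature names and the data-generating process are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical tabular features mimicking a certificate-derived schema:
# year of construction, living space, insulation level, wall type (encoded).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1900, 2020, n),   # year_built
    rng.uniform(40, 250, n),       # living_space_m2
    rng.integers(0, 4, n),         # insulation_level
    rng.integers(0, 5, n),         # wall_type
])
# Synthetic efficiency score on the 1-100 (G-A) scale, driven mostly by
# insulation and construction year, plus noise - purely illustrative.
y = 30 + 10 * X[:, 2] + 0.1 * (X[:, 0] - 1900) + rng.normal(0, 3, n)
y = np.clip(y, 1, 100)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
importance = dict(zip(
    ["year_built", "living_space_m2", "insulation_level", "wall_type"],
    model.feature_importances_,
))
```

Ranking features this way yields the kind of minimal feature set the study proposes; SHAP values refine the picture by attributing each individual prediction to its features rather than scoring features globally.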

Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning

Procedia PDF Downloads 54
3 Modelling Farmer’s Perception and Intention to Join Cashew Marketing Cooperatives: An Expanded Version of the Theory of Planned Behaviour

Authors: Gospel Iyioku, Jana Mazancova, Jiri Hejkrlik

Abstract:

The “Agricultural Promotion Policy (2016–2020)” represents a strategic initiative by the Nigerian government to address domestic food shortages and the challenges in exporting products at the required quality standards. Hindered by an inefficient system for setting and enforcing food quality standards, coupled with a lack of market knowledge, the Federal Ministry of Agriculture and Rural Development (FMARD) aims to enhance support for the production and activities of key crops like cashew. By collaborating with farmers, processors, investors, and stakeholders in the cashew sector, the policy seeks to define and uphold high-quality standards across the cashew value chain. Given the challenges and opportunities faced by Nigerian cashew farmers, active participation in cashew marketing groups becomes imperative. These groups serve as essential platforms for farmers to collectively navigate market intricacies, access resources, share knowledge, improve output quality, and bolster their overall bargaining power. Through engagement in these cooperative initiatives, farmers not only boost their economic prospects but can also contribute significantly to the sustainable growth of the cashew industry, fostering resilience and community development. This study explores the perceptions and intentions of farmers regarding their involvement in cashew marketing cooperatives, utilizing an expanded version of the Theory of Planned Behaviour. Drawing insights from a diverse sample of 321 cashew farmers in Southwest Nigeria, the research sheds light on the factors influencing decision-making in cooperative participation. The demographic analysis reveals a diverse landscape, with middle-aged individuals making up a substantial share of respondents and cashew-related activities serving as the primary income source for a notable proportion (23.99%).
Employing Structural Equation Modelling (SEM) with Maximum Likelihood Robust (MLR) estimation in R, the research elucidates the associations among latent variables. Despite the model’s complexity, the goodness-of-fit indices attest to the validity of the structural model, explaining approximately 40% of the variance in the intention to join cooperatives. Moral norms emerge as a pivotal construct, highlighting the profound influence of ethical considerations in decision-making processes, while perceived behavioural control presents potential challenges in active participation. Attitudes toward joining cooperatives reveal nuanced perspectives, with strong beliefs in enhanced connections with other farmers but varying perceptions on improved access to essential information. The SEM analysis establishes positive and significant effects of moral norms, perceived behavioural control, subjective norms, and attitudes on farmers’ intention to join cooperatives. The knowledge construct positively affects key factors influencing intention, emphasizing the importance of informed decision-making. A supplementary analysis using partial least squares (PLS) SEM corroborates the robustness of our findings, aligning with covariance-based SEM results. This research unveils the determinants of cooperative participation and provides valuable insights for policymakers and practitioners aiming to empower and support this vital demographic in the cashew industry.

Keywords: marketing cooperatives, theory of planned behaviour, structural equation modelling, cashew farmers

Procedia PDF Downloads 80
2 Supply Side Readiness for Universal Health Coverage: Assessing the Availability and Depth of Essential Health Package in Rural, Remote and Conflict Prone District

Authors: Veenapani Rajeev Verma

Abstract:

Context: Assessing facility readiness is paramount, as it indicates the capacity of facilities to provide essential care and remain resilient to health challenges. In the context of decentralization, the estimation of supply-side readiness indices at the subnational level is imperative for effective evidence-based policy but remains a colossal challenge due to the lack of dependable and representative data sources. Setting: District Poonch of Jammu and Kashmir was selected for this study. It is a remote, rural district with unprecedented topographical barriers and is identified as high priority by the government. It is also a fragile area, as it is bounded by the Line of Control with Pakistan and bears the brunt of ceasefire violations, military skirmishes, and sporadic militant attacks. Hilly geographical terrain, a rudimentary or absent road network, and impoverishment are quintessential to this area. Objectives: The objectives of the study are to a) evaluate the service readiness of health facilities and create a concise index subsuming a plethora of discrete indicators and b) ascertain supply-side barriers in service provisioning via stakeholder analysis. The study also strives to expand the analytical domain by unravelling context- and area-specific intricacies associated with service delivery. Methodology: A mixed-method approach was employed to triangulate quantitative analysis with qualitative nuances. A facility survey encompassing 90 subcentres, 44 primary health centres, 3 community health centres, and 1 district hospital was conducted to gauge general service availability and service-specific availability (depth of coverage). A compendium of checklists was designed using the Indian Public Health Standards (IPHS) in the form of a standard core questionnaire, and a scorecard was generated for each facility. Information was collected across the dimensions of amenities, equipment, medicines, laboratory, and infection control protocols, as proposed in WHO’s Service Availability and Readiness Assessment (SARA).
A two-stage polychoric principal component analysis was employed to generate a parsimonious index by coalescing an array of tracer indicators. An OLS regression was used to determine the factors explaining the composite index generated from the PCA. A stakeholder analysis was conducted to discern qualitative information: a myriad of techniques, such as observations, key informant interviews, and focus group discussions using semi-structured questionnaires, was administered to both leaders and laggards for the critical stakeholder analysis. Results: The general readiness score of health facilities was found to be 0.48. Results indicated the poorest readiness for subcentres and PHCs (the first points of contact), with composite scores of 0.47 and 0.41, respectively. For primary care facilities, the principal component was characterized by basic newborn care as well as preparedness for delivery. Results revealed that availability of equipment and surgical preparedness had the lowest scores (0.46 and 0.47) for facilities providing secondary care. The presence of contractual staff, a walk of more than 1 hour to the facility, location in zone A (most vulnerable to cross-border shelling), and inaccessibility due to snowfall and thick jungle were negatively associated with the readiness index. A nonchalant staff attitude, unavailability of staff quarters, and leakages and constraints in the supply chain of drugs and consumables were other impediments identified. Conclusions/Policy Implications: It is pertinent to first strengthen primary care facilities in this setting. Complex dimensions such as geographic barriers and user and provider behavior are not within the purview of this methodology.
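The index-construction step can be illustrated with ordinary PCA on binary tracer indicators: score each facility on the first principal component and rescale to [0, 1]. This is a simplified stand-in for the two-stage polychoric PCA used in the study, which handles the categorical nature of the items more rigorously.

```python
import numpy as np

def readiness_index(indicators):
    """Composite readiness score from binary tracer indicators via PCA.

    indicators: (facilities x items) 0/1 matrix. The first principal
    component is rescaled to [0, 1] and oriented so that higher means
    more ready. Plain-PCA sketch of the polychoric-PCA approach.
    """
    Z = indicators - indicators.mean(axis=0)
    # first eigenvector of the covariance matrix = first principal axis
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, -1]
    scores = Z @ pc1
    # orient so facilities with more items available score higher
    if np.corrcoef(scores, indicators.sum(axis=1))[0, 1] < 0:
        scores = -scores
    return (scores - scores.min()) / (scores.max() - scores.min())
```

On a graded availability pattern (facilities with progressively fewer tracer items), the index orders facilities exactly by depth of service availability.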

Keywords: effective coverage, principal component analysis, readiness index, universal health coverage

Procedia PDF Downloads 117
1 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduce significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often struggle to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021) under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
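The parallel 2D transformation described above can be sketched in two channels: one from a short-time Fourier transform (periodicity in the frequency domain) and one from stacked first and second differences (sharp fluctuations and turning points in the time domain). The window length and the exact layout of the derivative heatmap are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.signal import stft

def to_2d(series, nperseg=32):
    """Two parallel 2D views of a 1D series, in the spirit of Times2D.

    Channel 1: magnitude spectrogram (frequency content over time).
    Channel 2: a 'derivative heatmap' stacking first and second
    differences, emphasizing sharp fluctuations and turning points.
    """
    series = np.asarray(series, dtype=float)
    _, _, Z = stft(series, nperseg=nperseg)
    spectrogram = np.abs(Z)
    # prepend edge values so the difference maps keep the series length
    d1 = np.diff(series, n=1, prepend=series[0])
    d2 = np.diff(series, n=2, prepend=series[:2])
    derivative_map = np.vstack([d1, d2])
    return spectrogram, derivative_map
```

Once the series is in this image-like form, 2D convolutional backbones from computer vision can be applied directly, which is the leverage the framework is built around.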

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 37