Search results for: Gaussian kernel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 539


89 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent, i.e., it depends only on the bulk properties of the disordered granular sample: the sound wave velocity of the sample and hence its bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. These attenuation and broadening effects are affected by disorder (polydispersity; contrast in the size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of the particles have been selected randomly from a Gaussian distribution, whose standard deviation is the relevant parameter quantifying the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging has been used to improve the signal quality of the coherent pulse and to remove the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
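The chain simulation described above can be sketched in a few lines of Python: grains with Gaussian-distributed radii interact through the Hertzian contact law F = k·δ^(3/2), and a velocity pulse is imparted at one end. All numerical values (stiffness k, mean radius, time step) are illustrative assumptions, not the authors' actual settings.

```python
import numpy as np

def hertz_force(overlap, k=1.0):
    # Hertzian contact: repulsion ~ overlap^(3/2); zero once grains separate
    return k * np.maximum(overlap, 0.0) ** 1.5

def simulate_chain(n=50, sigma=0.05, v0=0.1, dt=1e-3, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    radii = rng.normal(0.5, sigma, n)      # Gaussian polydispersity; sigma = disorder
    mass = radii ** 3                      # mass ~ r^3 (constant density, scaled units)
    x = np.zeros(n)
    x[1:] = np.cumsum(radii[:-1] + radii[1:])   # grains initially just touching
    v = np.zeros(n)
    v[0] = v0                              # pulse imparted at the left end
    for _ in range(steps):
        overlap = (radii[:-1] + radii[1:]) - np.diff(x)
        f = hertz_force(overlap)           # one force per neighbour contact
        a = np.zeros(n)
        a[:-1] -= f / mass[:-1]            # equal and opposite on the two grains
        a[1:] += f / mass[1:]
        v += a * dt                        # symplectic Euler step
        x += v * dt
    return mass, v

mass, v = simulate_chain()
```

Repeating the run over many seeds and averaging the recorded signals mimics the ensemble averaging used above to isolate the coherent pulse from the coda.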

Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation

Procedia PDF Downloads 182
88 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis

Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro

Abstract:

If risk management of company-owned assets, risk assessment of real estate portfolios, and risk identification for an entire region are to be implemented, it is necessary to consider simultaneous damage to multiple buildings. This research focuses on the Sagami Trough earthquake tsunami, which could have a significant effect on the Japanese capital region, and proposes a method for simultaneous damage assessment using copulas that can take into consideration the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at two sites were simulated by using a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depths (five cases) for each of 10 sources of the Sagami Trough. For each source, the frequency distributions of the tsunami inundation depth were evaluated by using the response surface method. Then, Monte-Carlo simulation was conducted, and frequency distributions of tsunami inundation depth were evaluated at the target sites for all sources of the Sagami Trough. These are the marginal distributions. Kendall’s tau for the tsunami inundation simulation at the two sites was 0.83. Based on this value, the Gaussian copula, t-copula, Clayton copula, and Gumbel copula (n = 10,000) were generated. Then, the simultaneous distributions of the damage rate were evaluated using the marginal distributions and the copulas. When the correlation of the tsunami inundation depth at the two sites was taken into account, the expected value hardly changed compared with the uncorrelated case, but the ninety-ninth percentile value of the damage rate was approximately 2%, and the maximum value approximately 6%, when using the Gumbel copula.
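A minimal sketch of the copula step: Kendall's tau of 0.83 is converted to the Gaussian-copula correlation via ρ = sin(πτ/2), dependent uniform pairs are generated, and then pushed through inverse CDFs of the marginals. The log-normal marginals here are invented placeholders for the simulated inundation-depth distributions, not the study's fitted curves.

```python
import numpy as np
from scipy import stats

def gaussian_copula_samples(tau=0.83, n=10_000, seed=1):
    # Kendall's tau -> Pearson correlation for the Gaussian copula
    rho = np.sin(np.pi * tau / 2)
    cov = [[1.0, rho], [rho, 1.0]]
    z = np.random.default_rng(seed).multivariate_normal([0.0, 0.0], cov, size=n)
    return stats.norm.cdf(z)   # uniform marginals, dependence preserved

u = gaussian_copula_samples()
# hypothetical inundation-depth marginals at the two sites (log-normal placeholders)
depth_site1 = stats.lognorm(s=0.5, scale=2.0).ppf(u[:, 0])
depth_site2 = stats.lognorm(s=0.5, scale=2.0).ppf(u[:, 1])
```

A Gumbel copula would be sampled analogously; its upper-tail dependence is what drives the higher ninety-ninth percentile of joint damage reported above.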

Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis

Procedia PDF Downloads 121
87 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time and, therefore, cost of applying these methods in the drug development decision-making cycle. To the best of the authors' knowledge, the optimal approach varies depending on the dataset, and engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by Trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
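To make the comparison concrete, here is a minimal Gaussian Process regressor with a squared-exponential ("Gaussian") kernel, the kind of classical method typically applied to engineered QSAR features. This is a generic textbook sketch on synthetic 1-D data, not the study's benchmarked implementation.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # squared-exponential ("Gaussian") kernel on 1-D inputs
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    # standard GP regression equations with unit prior variance
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

x_train = np.linspace(0, 2 * np.pi, 20)   # stand-in for an engineered feature
y_train = np.sin(x_train)                 # synthetic property values
mean, var = gp_predict(x_train, y_train, x_train)
```

The predictive variance is exactly the kind of uncertainty quantification the study emphasises for decision-making.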

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 53
86 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video

Authors: Nidhal Azawi

Abstract:

The colonoscopy procedure is widely used around the world to detect abnormalities. Early diagnosis can help to heal many patients. Because of the unavoidable artifacts that exist in colon images, doctors cannot inspect the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos to provide better information for physicians by removing some artifacts. This work complements a series of three previously published papers. In this paper, optic flow is used for motion compensation, and consecutive images are then aligned/registered to integrate information and create a new image that reveals more information than the original one. Colon images have been classified into informative and non-informative images by using a deep neural network, and the two classes were then treated with different strategies. Informative images were treated by using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas non-informative images were treated by using Lucas-Kanade with a derivative of Gaussian (LKDOG) with an adaptive temporal median filter. A comparison showed that this work achieved better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The new proposed algorithm reduced the alignment error by about a factor of 0.3, with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing the informative images, as shown in the results section; it also succeeded in converting non-informative images, which have few or no details because of blurriness, poor focus, or specular highlights dominating a significant portion of the image, into informative images.
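The registration step can be illustrated with a single global-translation Lucas-Kanade solve followed by a temporal median across aligned frames. This is a toy version under stated assumptions: the paper's pipeline works on real colonoscopy frames with adaptive filtering, while the Gaussian test image below is purely synthetic.

```python
import numpy as np

def lk_translation(ref, moved):
    # One Lucas-Kanade step for a single global translation (dx, dy):
    # solves the 2x2 normal equations built from image gradients
    Iy, Ix = np.gradient(ref.astype(float))
    It = moved.astype(float) - ref.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy

def temporal_median(stack):
    # pixel-wise median across aligned frames suppresses transient artifacts
    return np.median(np.stack(stack), axis=0)

# synthetic smooth frame and a copy shifted by 1 pixel in x
yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2 * 8.0 ** 2))
moved = np.roll(ref, 1, axis=1)
dx, dy = lk_translation(ref, moved)
```

In the full method, the estimated flow warps each frame onto a common reference before the temporal mean/median is taken.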

Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter

Procedia PDF Downloads 95
85 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli

Authors: Joanna Maj

Abstract:

Body-machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, benefiting patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibrational discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs using vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probabilities that probe stimuli felt more intense than the standard stimulus were computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially vs. simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
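The threshold estimation can be sketched directly: fit a cumulative Gaussian to the probability that each probe felt more intense than the 186 Hz standard, then read off one standard deviation. The response probabilities below are synthetic, generated from an assumed underlying distribution rather than the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(f, mu, sigma):
    # psychometric function: P(probe judged more intense) vs probe frequency
    return norm.cdf(f, loc=mu, scale=sigma)

probe_hz = np.array([100, 115, 130, 145, 160, 175, 186, 200, 210, 225, 235], float)
# synthetic response probabilities (assumed true mu = 186 Hz, sigma = 25 Hz)
p_probe_stronger = cum_gauss(probe_hz, 186.0, 25.0)

(mu_hat, sigma_hat), _ = curve_fit(cum_gauss, probe_hz, p_probe_stronger, p0=[180, 30])
threshold = sigma_hat   # one SD of the underlying distribution, as defined above
```

With real data the probabilities would come from response counts per probe level, and a smaller fitted sigma means a better (finer) discrimination threshold.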

Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering

Procedia PDF Downloads 43
84 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can further be associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the famous German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place-prediction root-mean-square errors than linear regression, mord ordinal regression, and Gaussian process regression.
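The Fenton-Wilkinson step approximates a sum of independent log-normal leg times by a single log-normal that matches the exact mean and variance of the sum. The leg-time parameters below are invented for illustration.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    # Approximate sum of independent log-normals by one log-normal,
    # matching the exact mean M and variance V of the sum.
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    m = np.exp(mus + sigmas ** 2 / 2)          # mean of each leg time
    v = (np.exp(sigmas ** 2) - 1) * m ** 2     # variance of each leg time
    M, V = m.sum(), v.sum()                    # moments of the sum (independence)
    sigma2 = np.log(1 + V / M ** 2)
    mu = np.log(M) - sigma2 / 2
    return mu, np.sqrt(sigma2)

# hypothetical per-leg log-normal parameters for a three-leg relay
mu, sigma = fenton_wilkinson([4.0, 4.2, 3.9], [0.2, 0.25, 0.15])
```

The resulting (mu, sigma) parameterise the sigmoidal place-regression function, while the German-tank-style estimate of the total number of teams sets its scale.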

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 108
83 A Location-Based Search Approach According to Users’ Application Scenario

Authors: Shih-Ting Yang, Chih-Yun Lin, Ming-Yu Li, Jhong-Ting Syue, Wei-Ming Huang

Abstract:

The global positioning system (GPS) has become increasingly precise in recent years, and location-based services (LBS) have developed rapidly. Take the example of finding a parking lot (e.g., with a parking app). A location-based service can offer immediate information about a nearby parking lot, including the number of remaining parking spaces. However, it cannot provide search results tailored to users’ situational requirements. For that reason, this paper develops a “Location-based Search Approach according to Users’ Application Scenario,” combining location-based search with demand determination to help users obtain information consistent with their requirements. The approach consists of one mechanism and three kernel modules. First, in the Information Pre-processing Mechanism (IPM), the cosine theorem is used to categorize the locations of users. Then, in the Information Category Evaluation Module (ICEM), kNN (k-Nearest Neighbor) is employed to classify the browsing records of users. After that, in the Information Volume Level Determination Module (IVLDM), a comparison is made between the number of users clicking the information at different locations and the average number of users clicking the information at a specific location, so as to evaluate the urgency of demand; the two-dimensional space is then used to estimate the application scenarios of users. In the last step, the Location-based Search Module (LBSM) compares all search results and the average number of characters of the search results, categorizes the search results with the Manhattan distance, and selects the results according to the application scenario of users. Additionally, a Web-based system is developed according to the methodology to demonstrate the practical application of this paper.
The scenario-based estimate and the location-based search are used to evaluate the type and abundance of the information expected by the public at a specific location, so that information demanders can obtain information consistent with their application scenarios at that location.
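The ICEM's kNN classification and the Manhattan-distance grouping used in the LBSM can be sketched together; the browsing-record feature vectors and category labels below are invented for illustration.

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=3, metric="manhattan"):
    # classify one query point by majority vote among its k nearest neighbours
    if metric == "manhattan":
        d = np.abs(X_train - x).sum(axis=1)          # L1 / Manhattan distance
    else:
        d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# toy browsing-record features and hypothetical information categories
records = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
labels = np.array(["parking", "parking", "food", "food"])
guess = knn_classify(np.array([0.2, 0.3]), records, labels)
```

In the actual system the features would encode click counts and result lengths, but the distance-and-vote mechanics are the same.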

Keywords: data mining, knowledge management, location-based service, user application scenario

Procedia PDF Downloads 103
82 The Richtmyer-Meshkov Instability Impacted by the Interface with Different Components Distribution

Authors: Sheng-Bo Zhang, Huan-Hao Zhang, Zhi-Hua Chen, Chun Zheng

Abstract:

In this paper, the Richtmyer-Meshkov instability caused by the interaction between a shock wave and a circular helium (light-gas) cylinder with different component distributions has been studied numerically by using the high-resolution Roe scheme based on the two-dimensional unsteady Euler equations. The numerical results describe the deformation process of the gas cylinder and the wave structure of the flow field, and quantitatively analyze the characteristic dimensions (length, height, and central axial width) of the gas cylinder and the volume compression ratio of the cylinder over time. In addition, the flow mechanism of shock-driven interface gas mixing is analyzed from multiple perspectives by combining it with the flow field pressure, velocity, circulation, and gas mixing rate. The effects of different initial component distribution conditions on interface instability are then investigated. The results show that as the diffuse interface transitions to a sharp interface, the reflection coefficient gradually increases on both sides of the interface. When the incident shock wave interacts with the cylinder, the transmission of the shock wave transitions from conventional to unconventional transmission. At the same time, the reflected shock wave is gradually strengthened and the transmitted shock wave gradually weakened, which leads to an increase in the Richtmyer-Meshkov instability. Moreover, the Atwood number on both sides of the interface also increases as the diffuse interface transitions to a sharp interface, which leads to an increase in the Rayleigh-Taylor and Kelvin-Helmholtz instabilities. The increased instability in turn increases the circulation, resulting in a higher growth rate of the gas mixing rate.

Keywords: shock wave, He light cylinder, Richtmyer-Meshkov instability, Gaussian distribution

Procedia PDF Downloads 56
81 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization

Authors: Subhajit Das, Nirjhar Dhang

Abstract:

Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods focus mainly on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in an ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimised with a suitable algorithm, and the finite element model is updated accordingly to match the measured response. The damage location and severity can thus be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC). The MAC is derived from eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling. The error due to experimental measurement is introduced into the synthetic ‘experimental’ data by adding random noise following a Gaussian distribution. The efficiency and robustness of this method are demonstrated through three examples: a truss, a beam, and a frame problem. The results show that the TLBO algorithm efficiently detects the damage location as well as the severity of damage using modal data.
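A compact TLBO sketch: a teacher phase pulls the population toward the best learner, and a learner phase lets individuals learn pairwise from random peers. Here the "error function" is a toy quadratic in two stiffness-reduction factors, standing in for the frequency/MAC error minimized in the paper.

```python
import numpy as np

def tlbo(f, bounds, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    X = rng.uniform(lo, hi, (pop, len(lo)))
    F = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # teacher phase: move everyone toward the best learner
        teacher = X[np.argmin(F)]
        Tf = rng.integers(1, 3)                        # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random(X.shape) * (teacher - Tf * X.mean(axis=0)), lo, hi)
        Fn = np.apply_along_axis(f, 1, Xn)
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
        # learner phase: learn from a randomly paired peer
        j = rng.permutation(pop)
        step = np.where((F < F[j])[:, None], X - X[j], X[j] - X)
        Xn = np.clip(X + rng.random(X.shape) * step, lo, hi)
        Fn = np.apply_along_axis(f, 1, Xn)
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
    return X[np.argmin(F)], F.min()

# toy error function: recover assumed stiffness factors [0.7, 1.0] of two members
target = np.array([0.7, 1.0])
best, err = tlbo(lambda a: np.sum((a - target) ** 2), bounds=[(0.1, 1.0), (0.1, 1.0)])
```

In the actual method the objective would rebuild the finite element model for each candidate and compare its frequencies and MAC values against the noisy 'experimental' modal data.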

Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization

Procedia PDF Downloads 196
80 The Hidden Role of Interest Rate Risks in Carry Trades

Authors: Jingwen Shi, Qi Wu

Abstract:

We study the role played by interest rate risk in carry trade returns in order to understand the forward premium puzzle. Our goal is to investigate to what extent the carry trade return is indeed compensation for risk taking and, more importantly, to reveal the nature of these risks. Using option data not only on exchange rates but also on interest rate swaps (swaptions), our first finding is that, besides the consensus currency risks, interest rate risks also contribute a non-negligible portion of the carry trade return. What strikes us is our second finding: large downside risks of future exchange rate movements are, in fact, priced significantly in the option market on interest rates. The role played by interest rate risk differs structurally from the currency risk. There is a unique premium associated with interest rate risk, though seemingly small in size, which compensates for tail risks, the left tail to be precise. On the technical front, our study relies on accurately retrieving implied distributions from currency options and interest rate swaptions simultaneously, especially their tail components. For this purpose, our major modeling work is to build a new international asset pricing model with an orthogonal setup for pricing kernels and non-Gaussian dynamics, in order to capture three sets of option skews accurately and consistently across currency options and interest rate swaptions, domestic and foreign, within one model. Our results open a door for studying the forward premium anomaly through implied information from the interest rate derivative market.

Keywords: carry trade, forward premium anomaly, FX option, interest rate swaption, implied volatility skew, uncovered interest rate parity

Procedia PDF Downloads 423
79 Comparison of Inexpensive Cell Disruption Techniques for an Oleaginous Yeast

Authors: Scott Nielsen, Luca Longanesi, Chris Chuck

Abstract:

Palm oil is obtained from the flesh and kernel of the fruit of oil palms and is the most productive and inexpensive oil crop. Global demand for palm oil is approximately 75 million metric tonnes, with global production having increased by 29% since 2016. This expansion of oil palm cultivation has resulted in mass deforestation, vast biodiversity destruction, and increasing net greenhouse gas emissions. One possible alternative is to produce a saturated oil, similar to palm, from microbes such as oleaginous yeast. The yeasts can be cultured on sugars derived from second-generation sources and do not compete with tropical forests for land. One highly promising oleaginous yeast for this application is Metschnikowia pulcherrima. However, recent techno-economic modeling has shown that cell lysis and standard lipid extraction are major contributors to the cost of the oil. Typical cell disruption techniques to extract either single-cell oils or proteins have been based on bead-beating, homogenization, and acid lysis. However, these can have a detrimental effect on lipid quality and are energy-intensive. In this study, a vortex separator, which produces high shear with minimal energy input, was investigated as a potential low-energy method of lysing cells. It was compared to four more traditional methods (thermal lysis, acid lysis, alkaline lysis, and osmotic lysis). For each method, yeast loadings of 1 g/L, 10 g/L and 100 g/L were also examined. The quality of the cell disruption was measured by optical cell density, cell counting, and comparison of the particle size distribution profile over a 2-hour period. This study demonstrates that the vortex separator is highly effective at lysing the cells and could potentially be used as a simple apparatus for lipid recovery in an oleaginous yeast process. Further development of this technology could reduce the overall cost of microbial lipids in the future.

Keywords: palm oil substitute, metschnikowia pulcherrima, cell disruption, cell lysis

Procedia PDF Downloads 177
78 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data

Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali

Abstract:

This paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into training, test, and validation sets in three different proportions, and kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R², RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
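The training loop can be sketched with scikit-learn's SVR: split the data into training, validation, and test sets, grid over RBF-kernel hyperparameters, select on validation RMSE, and report test RMSE. The three synthetic features standing in for Rs, T and RH, and the linear target, are assumptions for illustration only, not the study's sensor data.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (300, 3))   # stand-ins for Rs, T, RH (assumed)
y = 0.4 * X[:, 0] + 0.1 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 0.1, 300)

# train / validation / test split
i = rng.permutation(300)
tr, va, te = i[:180], i[180:240], i[240:]

best = None
for C in (1.0, 10.0):
    for gamma in ("scale", 0.1):   # RBF-kernel hyperparameters to tune
        m = SVR(kernel="rbf", C=C, gamma=gamma).fit(X[tr], y[tr])
        rmse = np.sqrt(np.mean((m.predict(X[va]) - y[va]) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, m)

rmse_test = np.sqrt(np.mean((best[1].predict(X[te]) - y[te]) ** 2))
```

In practice the grid would be wider and the selection repeated over the several split proportions mentioned above.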

Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors

Procedia PDF Downloads 43
77 The Impact of Adopting Cross Breed Dairy Cows on Households’ Income and Food Security in the Case of Dejen Woreda, Amhara Region, Ethiopia

Authors: Misganaw Chere Siferih

Abstract:

This study assessed the impact of crossbreed dairy cows on household income and food security. The study area is in Dejen Woreda, East Gojam Zone, Amhara region of Ethiopia. A random sampling technique was used to obtain a sample of 80 crossbreed dairy cow owners and 176 indigenous dairy cow owners. The study employed the food consumption score analytical framework to measure the food security status of households; no statistically significant mean difference was found between crossbreed owners and indigenous owners. Logistic regression was employed to investigate the determinants of crossbreed dairy cow adoption. The results indicate that gender, education, labor number, cultivated land size, dairy cooperative membership, net income, and household food security status are statistically significant independent variables explaining the binary dependent variable, crossbreed dairy cow adoption. Propensity score matching (PSM) was employed to analyze the impact of crossbreed dairy cow ownership on farmers’ income and food security. The average net income of crossbreed dairy cow owners was found to be significantly higher than that of indigenous dairy cow owners. Estimates of the average treatment effect on the treated (ATT) indicated that crossbreed dairy cows raise households’ net income by 42%, 38.5%, 30.8% and 44.5% in the kernel, radius, nearest neighborhood, and stratification matching algorithms, respectively, as compared to indigenous dairy cow owners. However, the ATT estimates suggest that owning a crossbreed dairy cow does not affect food security significantly. Thus, crossbreed dairy cows enable farmers to increase their income but not their food security in the study area. Finally, the study recommends establishing dairy cooperatives and advising farmers to become members, paying attention to promoting the impact of crossbreed dairy cows, and promoting nutrition-focused projects.
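The PSM step can be sketched as: estimate propensity scores with a logistic regression, match each treated unit to its nearest-neighbour control on the score, and average the outcome differences (the ATT). The covariate, treatment rule, and true effect of 3 below are synthetic stand-ins, not the survey data.

```python
import numpy as np

def logistic_ps(X, treated, iters=500, lr=0.1):
    # propensity scores via a plain logistic regression (gradient ascent)
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(X)
    return 1 / (1 + np.exp(-Xb @ w))

def att_nearest_neighbour(ps, treated, outcome):
    # ATT: each treated unit matched to the control with the closest score
    t, c = np.where(treated == 1)[0], np.where(treated == 0)[0]
    matches = c[np.abs(ps[c][None, :] - ps[t][:, None]).argmin(axis=1)]
    return np.mean(outcome[t] - outcome[matches])

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(0, 1, (n, 1))                       # hypothetical covariate
treated = (rng.random(n) < 1 / (1 + np.exp(-0.8 * x[:, 0]))).astype(float)
outcome = 2.0 * x[:, 0] + 3.0 * treated + rng.normal(0, 0.1, n)  # true ATT = 3
ps = logistic_ps(x, treated)
att = att_nearest_neighbour(ps, treated, outcome)
```

Kernel, radius, and stratification matching differ only in how controls are weighted around each treated unit's score.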

Keywords: crossbreed dairy cow, net income, food security, propensity score matching

Procedia PDF Downloads 34
76 Chemical Composition and Antifungal Activity of Selected Essential Oils against Toxigenic Fungi Associated with Maize (Zea mays L.)

Authors: Birhane Atnafu, Chemeda Abedeta Garbaba, Fikre Lemessa, Abdi Mohammed, Alemayehu Chala

Abstract:

Essential oils are bio-pesticide plant products used as an alternative to synthetic pesticides in managing plant pests, including fungal pathogens. The current study aims to investigate the chemical composition and antifungal activities of essential oils (EOs) extracted from three aromatic plants, i.e., Thymus vulgaris, Coriandrum sativum, and Cymbopogon martinii. The leaves of the selected plants were collected from the Jimma area, and their essential oils were extracted by hydro-distillation in a Clevenger apparatus. The chemical composition of each essential oil was analyzed by gas chromatography-mass spectrometry (GC/MS), and their inhibitory effects were tested in vitro on toxigenic fungi isolated from maize kernels. Chemical analysis revealed the presence of 32 compounds in C. sativum, with hexanedioic acid, bis(2-ethylhexyl) ester (46.9%), (E)-2-decenal (12.6%), and linalool (8.3%) being the dominant ones. The T. vulgaris essential oil comprised 25 compounds, of which thymol (34.4%), o-cymene (17.5%), and gamma-terpinene (16.8%) were the major components. Twenty-five compounds were detected in C. martinii, of which geraniol (51.4%), geranyl acetate (14.5%), and trans-ß-ocimene (11.7%) were dominant. The EOs of the tested plants had very high antifungal activity (up to 100% efficacy) against Aspergillus flavus, Aspergillus niger, Fusarium graminearum and Fusarium verticillioides in vitro and on maize grains. The antifungal activities of these essential oils were attributable to the major components such as thymol, hexanedioic acid, bis(2-ethylhexyl) ester, and geraniol. The study affirmed the potential of these essential oils as bio-fungicides to manage potentially toxigenic fungi associated with maize during post-harvest stages, which can reduce the health impacts of mold and the toxigenic compounds produced in maize.

Keywords: bio-activity, bio-pesticides, maize, mycotoxin

Procedia PDF Downloads 49
75 Advanced Electron Microscopy Study of Fission Products in a TRISO Coated Particle Neutron Irradiated to 3.6 × 10²¹ n/cm² Fast Fluence at 1040 ⁰C

Authors: Haiming Wen, Isabella J. Van Rooyen

Abstract:

Tristructural isotropic (TRISO)-coated fuel particles are designed as nuclear fuel for high-temperature gas reactors. The TRISO coating consists of layers of carbon buffer, inner pyrolytic carbon (IPyC), SiC, and outer pyrolytic carbon. The TRISO coating, especially the SiC layer, acts as a containment system for fission products produced in the kernel. However, release of certain metallic fission products across intact TRISO coatings has been observed for decades. Despite numerous studies, the mechanisms by which fission products migrate across the coating layers remain poorly understood. In this study, scanning transmission electron microscopy (STEM), energy dispersive X-ray spectroscopy (EDS), high-resolution transmission electron microscopy (HRTEM) and electron energy loss spectroscopy (EELS) were used to examine the distribution, composition and structure of fission products in a TRISO coated particle neutron irradiated to 3.6 × 10²¹ n/cm² fast fluence at 1040 ⁰C. Precession electron diffraction was used to investigate the characters of grain boundaries where specific fission product precipitates are located. The retention fraction of ¹¹⁰ᵐAg in the investigated TRISO particle was estimated to be 0.19. A high density of nanoscale fission product precipitates was observed in the SiC layer close to the SiC-IPyC interface, most of which are rich in Pd, while Ag was not identified. Some Pd-rich precipitates contain U. Precipitates tend to have complex structure and composition. Although a precipitate may appear to have uniform contrast in STEM, EDS indicated that there may be composition variations throughout the precipitate, and HRTEM suggested that the precipitate may have several parts differing in crystal structure or orientation. Attempts were made to measure the charge states of precipitates using EELS and to study their possible effect on precipitate transport.

Keywords: TRISO particle, fission product, nuclear fuel, electron microscopy, neutron irradiation

Procedia PDF Downloads 241
74 Insecticidal Effect of Nanoparticles against Helicoverpa armigera Infesting Chickpea

Authors: Shabistana Nisar, Parvez Qamar Rizvi, Sheeraz Malik

Abstract:

The potential advantage of nanotechnology is comparably marginal due to its unclear benefits in agriculture and insufficiency in public opinion. The nanotech products might solve the pesticide problems of societal concern fairly at acceptable or low risk for consumers and environmental applications. The deleterious effect of chemicals used on crops can be compacted either by reducing the existing active ingredient to nanosize or by plummeting the metals into nanoform. Considering the above facts, an attempt was made to determine the efficacy of nanoelements viz., Silver, Copper Manganese and Neem seed kernel extract (NSKE) for effective management of gram pod borer, Helicoverpa armigera infesting chickpea, being the most damaging pest of large number of crops, gram pod borer was selected as test insect to ascertain the impact of nanoparticles under controlled conditions (25-27 ˚C, 60-80% RH). The respective nanoformulations (0.01, 0.005, 0.003, 0.0025, 0.002, 0.001) were topically applied on 4th instar larvae of pod borer. In general, nanochemicals (silver, copper, manganese, NSKE) produced relatively high mortality at low dilutions (0.01, 0.005, 0.003). The least mortality was however recorded at 0.001 concentration. Nanosilver proved most efficient producing significantly highest (f₄,₂₄=129.56, p < 0.05) mortality 63.13±1.77, 83.21±2.02 and 96.10±1.25 % at 0.01 concentration after 2nd, 4th and 6th day, respectively. The least mortality was however recorded with nanoNSKE. The mortality values obtained at respective days were 21.25±1.50%, 25.20±2.00%, and 56.20±2.25%. Nanocopper and nanomanganese showed slow rate of killing on 2nd day of exposure, but increased (79.20±3.25 and 65.33±1.25) at 0.01 dilution on 3rd day, followed by 83.00±3.50% and 70.20±2.20% mortality on 6thday. The sluggishness coupled with antifeedancy was noticed at early stage of exposure. 
A change in body colour to brown, due to additional melanisation in copper-, manganese-, and silver-treated larvae and demelanisation in nanoNSKE-exposed larvae, was observed at a later stage of treatment. Thus, all the nanochemicals applied produced a significant lethal impact on Helicoverpa armigera and can be used as valuable tools for its effective management.

Keywords: chickpea, helicoverpa armigera, management, nanoparticles

Procedia PDF Downloads 335
73 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe

Authors: Vipul M. Patel, Hemantkumar B. Mehta

Abstract:

Technological innovations in the electronic world demand novel, compact, simple, less costly and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, heat input, working fluid and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature, in terms of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with a MARD within ±1.81%.
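The MARD metric used above to score the ANN predictions can be sketched as follows; the averaging convention (mean absolute relative deviation expressed as a percentage) is a common definition assumed here, since the abstract does not spell out the formula.

```python
import numpy as np

def mard(experimental, predicted):
    """Mean Absolute Relative Deviation (%), assumed here as the mean of
    |predicted - experimental| / |experimental|, expressed as a percentage."""
    e = np.asarray(experimental, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return 100.0 * float(np.mean(np.abs((p - e) / e)))
```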

Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant

Procedia PDF Downloads 268
72 Status of Reintroduced Houbara Bustard Chlamydotis macqueeni in Saudi Arabia

Authors: Mohammad Zafar-ul Islam

Abstract:

The breeding programme of the Houbara bustard was started in Saudi Arabia in 1986 to undertake the restoration of native species such as the Houbara through a programme of re-introduction, involving the release of captive-bred birds into the wild. Two sites were selected for Houbara re-introduction, i.e., the Mahazat as-Sayd and Saja Umm Ar-Rimth protected areas, in 1988 and 1998 respectively. Both areas are fenced, fairly level sandy plains with a few rock outcrops. Captive-bred Houbara have been released in Mahazat by the NWRC since 1992, and those birds have been breeding successfully since then. The nesting season of the Houbara at Mahazat runs from February to May, and on average 20-25 nests are located each year, but no nesting has been recorded in Saja. Houbara are monitored using radio transmitters, through aerial tracking and also from a vehicle for terrestrial tracking. The total population of Houbara in Mahazat is roughly estimated at 300-400 birds, using the following: N = n1+n2+n3+n4+n5 (n1 = released or wild-born, radio-tagged, regularly monitored/checked; n2 = radio-tagged, missing; n3 = wild-born chicks not recorded; n4 = wild-born chicks, recorded but not tagged; n5 = immigrants). However, in Saja only 4-7 individual Houbara have survived since 2001, because most of the birds were predated immediately after release. The mean annual home range was also calculated using kernel and convex polygon methods with the Range VII software, and the minimum density of Houbara was also calculated. In order to understand Houbara movement and migration to other regions, two captive-reared male Houbara that were released into the wild and one wild-born female were fitted with Platform Transmitter Terminals (PTTs). The home ranges show that the wild-born female moved over a larger area than the two males. More areas need to be selected for the re-introduction programme, to establish a network of sites providing easy access for these birds to move and mingle with wild Houbara. 
Some potential sites have been proposed, which require further surveys to assess habitat suitability.
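The kernel home-range calculation (95% isopleth for the home range, 50% for a core area) can be sketched with a grid-based kernel density estimate; the grid resolution, padding, and use of SciPy's `gaussian_kde` are assumptions for illustration, not the settings of the Range VII software used in the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_home_range(xy, isopleth=0.95, grid=120):
    """Area (in squared input units) of the smallest set of grid cells holding
    `isopleth` of the estimated utilisation distribution (e.g. 0.95 for the
    home range, 0.50 for the core area), from an (n, 2) array of locations."""
    xy = np.asarray(xy, dtype=float)
    kde = gaussian_kde(xy.T)
    pad = 0.25 * (xy.max(axis=0) - xy.min(axis=0))
    lo, hi = xy.min(axis=0) - pad, xy.max(axis=0) + pad
    xs = np.linspace(lo[0], hi[0], grid)
    ys = np.linspace(lo[1], hi[1], grid)
    X, Y = np.meshgrid(xs, ys)
    dens = kde(np.vstack([X.ravel(), Y.ravel()]))
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    # accumulate density from the highest cells down until `isopleth` is reached
    mass = np.cumsum(np.sort(dens)[::-1]) * cell
    n_cells = int(np.searchsorted(mass, isopleth * mass[-1])) + 1
    return n_cells * cell
```

By construction, the 50% core area is always contained within (and smaller than) the 95% home range.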

Keywords: re-introduction, survival rate, home range, Saudi Arabia

Procedia PDF Downloads 387
71 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung, in the form of a tumor, can be either benign (non-cancerous) or malignant (cancerous). Provided with timely diagnosis, detection and prediction, patients with Lung Cancer (LC) have an average life expectancy of five years, and early detection reduces reliance on risky invasive surgery, increasing the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region properties (area, perimeter, diameter, centroid and eccentricity) are measured for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifiers. Two levels of classification are employed: K-Nearest Neighbor (KNN) determines whether the patient's condition is normal or abnormal, while an Artificial Neural Network (ANN) identifies the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
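The GLCM texture step can be sketched for a single offset (distance 1, 0°); the number of gray levels and the two Haralick-style features shown are illustrative choices, not the paper's full feature set.

```python
import numpy as np

def glcm_features(image, levels):
    """Normalized Gray-Level Co-occurrence Matrix for the horizontal
    distance-1 offset, plus two classic texture features derived from it."""
    img = np.asarray(image)
    m = np.zeros((levels, levels), dtype=float)
    # count co-occurrences of gray levels in horizontally adjacent pixels
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1.0
    m /= m.sum()
    i, j = np.indices(m.shape)
    contrast = float(np.sum(m * (i - j) ** 2))
    energy = float(np.sum(m ** 2))
    return m, contrast, energy
```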

Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI

Procedia PDF Downloads 127
70 150 kVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter

Authors: Bartosz Kedra, Robert Malkowski

Abstract:

This paper provides a description and presentation of a laboratory test unit built around a 150 kVA power-frequency converter and the Simulink Real-Time platform. The assumptions, based on criteria determining which load and generator types may be simulated using the discussed device, are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is provided, together with a list and description of all measurements. The potential for modifications of the laboratory setup is evaluated. For the purposes of Rapid Control Prototyping, a dedicated environment, Simulink Real-Time, was used; the load model Functional Unit Controller is therefore based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded on a target computer connected to physical devices, which provided the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the mentioned Rapid Control Prototyping process. With Simulink Real-Time, the Simulink models were extended with I/O card driver blocks that made possible the automatic generation of real-time applications and interactive or automated runs on a dedicated target computer equipped with a real-time kernel, a multicore CPU, and I/O cards. Results of the performed laboratory tests are presented: different load configurations are described and experimental results are given. These include simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.
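The under-frequency load-shedding simulation can be illustrated with a simple staged scheme; the thresholds and shed fractions below are hypothetical values for illustration, not the settings used in the laboratory tests.

```python
def ufls_shed_fraction(frequency_hz,
                       stages=((49.0, 0.10), (48.8, 0.10), (48.6, 0.15))):
    """Cumulative fraction of load shed for a measured frequency, with one
    (threshold_Hz, fraction) pair per shedding stage; all stages whose
    threshold has been crossed contribute their fraction."""
    return sum(frac for threshold, frac in stages if frequency_hz <= threshold)
```

At nominal frequency nothing is shed; as frequency falls through successive thresholds, the shed fractions accumulate.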

Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer

Procedia PDF Downloads 300
69 Optimization of Process Parameters and Modeling of Mass Transport during Hybrid Solar Drying of Paddy

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying is one of the most critical unit operations for prolonging the shelf-life of food grains in order to ensure global food security. Photovoltaic-integrated solar dryers can be a sustainable solution for replacing energy-intensive thermal dryers, as they are capable of drying during off-sunshine hours and provide better control over drying conditions. However, the performance and reliability of PV-based solar dryers depend hugely on climatic conditions, thereby drastically affecting the process parameters. Therefore, to ensure the quality and prolonged shelf-life of paddy, optimization of the process parameters for solar dryers is critical. Proper moisture distribution within the grains is the most decisive factor for enhancing the shelf-life of paddy; hence, modeling of mass transport can provide better insight into moisture migration. The present work therefore aims at optimizing the process parameters and developing a 3D finite element model (FEM) for predicting the moisture profile in paddy during solar drying. Optimization of the process parameters (power level, air velocity and moisture content) was done using the Box-Behnken design in Design-Expert software. Furthermore, COMSOL Multiphysics was employed to develop a 3D finite element model for predicting the moisture profile. The optimized conditions for drying paddy were found to be 700 W, 2.75 m/s and 13% (wb), with an optimum temperature, milling yield and drying time of 42˚C, 62% and 86 min, respectively, and a desirability of 0.905. Furthermore, a 3D finite element model for predicting moisture migration in a single kernel at every time step was developed. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement between the model and the experimental results. The above optimized conditions can be successfully used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of the product and to help achieve global food and energy security.
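The fit statistics reported above (MAE, MRE, SE) can be computed as below; the exact definitions used in the paper are not stated, so common textbook forms are assumed.

```python
import numpy as np

def fit_errors(observed, predicted):
    """Mean absolute error, mean relative error, and standard error of the
    prediction residuals (sample std of residuals / sqrt(n))."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    r = p - o
    mae = float(np.mean(np.abs(r)))
    mre = float(np.mean(np.abs(r / o)))
    se = float(np.std(r, ddof=1) / np.sqrt(o.size))
    return mae, mre, se
```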

Keywords: finite element modeling, hybrid solar drying, mass transport, paddy, process optimization

Procedia PDF Downloads 119
68 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from a video image sequence is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most used approach for moving-object detection and tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in the monitoring of complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the contrast enhancement of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to evaluate the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
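A per-pixel running Gaussian background model illustrates the idea in simplified form (a single Gaussian per pixel rather than the K=5 mixture used above); the learning rate and match threshold are assumptions for illustration.

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running Gaussian background model: a simplified (K=1)
    stand-in for the per-channel K=5 GMM described in the abstract."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 25.0)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.k ** 2) * self.var
        # update background statistics only where the pixel matched the model
        a = np.where(foreground, 0.0, self.alpha)
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        return foreground
```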

Keywords: video surveillance, background subtraction, contrast limited adaptive histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 237
67 Habitat Preference of Lepidoptera (Butterflies), Using Geospatial Analysis in Diyasaru Wetland Park, Western Province, Sri Lanka

Authors: Hiripurage Mallika Sandamali Dissanayaka

Abstract:

Butterflies are found almost everywhere on Earth and help flowering plants reproduce through pollination. Wetlands perform many valuable functions, such as providing wildlife habitat. Diyasaru Wetland Park, located in a highly urbanized area of Sri Jayawardenepura Kotte, Sri Lanka, was chosen as the study site. Research was conducted to determine the most suitable sections of the park for butterflies, and a distribution map was prepared to help increase butterfly habitat in the urbanized area. As this wetland has footpaths for walking, line-transect surveys were used to mark species within the sampling area, and directly observed species were recorded. All data collection was done from 0900 to 1200 hours and from 1300 to 1600 hours, and fieldwork ran from 11 February 2020 to 20 January 2021. ED binoculars (10.5x45), a DSLR camera (Canon EOS/EFS5 mm 3.5-5.6), and a Garmin GPS (Etrex 10) were used to observe butterfly species, identify locations, and take photographs as evidence. Habitats were analyzed using GIS (ArcGIS Pro) to identify the distribution within the park premises: the distribution density of the known population was calculated at each point by kernel density estimation, local similarity values were calculated for each pair of corresponding features through hotspot analysis, and cell values were determined by inverse distance weighting (IDW) using a linearly weighted combination of a set of sample points. According to the maps prepared to predict the distribution of butterflies in this park, the areas of high distribution, or favorable areas, were near flower gardens and meadows, but some individual species prefer habitats more suited to their life activities and so live in other areas. Sixty-six (66) species belonging to six (6) families have been recorded on the premises. 
Sixty (60) species of least concern (LC), two (2) near threatened (NT), and four (4) vulnerable (VU) species have been recorded, and several new records, such as the Plum Judy (Abisara echerius), were reported. The outcome of the study will form the basis for decision-making by the Sri Lanka Land Development (SLLD) Corporation for the future development and maintenance of the park.
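The IDW interpolation step can be sketched as a linearly weighted combination of sample values, with weights inversely proportional to a power of the distance; the power parameter and function shape are assumptions for illustration, not ArcGIS Pro's implementation.

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from (x, y) sample
    points: a linearly weighted combination with weights 1 / d**power."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(points - np.asarray(query, dtype=float), axis=1)
    if np.any(d == 0):  # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))
```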

Keywords: wetland, Lepidoptera, habitat, urban, west

Procedia PDF Downloads 27
66 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. For scene classification, there are scattered objects of different sizes, categories, layouts, numbers and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of the different CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene occur at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is followed by a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. 
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets can boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which indicates that the representation can be applied to other visual recognition tasks.
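One plausible reading of the scale-wise normalization and merging step (normalize the Fisher vector from each scale, then average-pool into a single representation) can be sketched as follows; the choice of the L2 norm is an assumption, since the abstract does not name the norm used.

```python
import numpy as np

def scale_wise_merge(per_scale_vectors):
    """L2-normalize the Fisher vector extracted at each scale, then
    average-pool across scales into a single representation, so that no
    scale dominates purely by contributing more features."""
    normed = [np.asarray(v, dtype=float) / np.linalg.norm(v)
              for v in per_scale_vectors]
    return np.mean(normed, axis=0)
```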

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 309
65 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures

Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk

Abstract:

In the recent past, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concern about the volatility transmission among them. The problem is further exacerbated by commodity volatility caused by other commodities' price fluctuations, so deciding on a hedging strategy has become both costly and often ineffective. This paper therefore analyzes the volatility spillover effects among major agricultural commodities, including corn, soybeans, wheat and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to analyzing the issue of volatility spillovers under different economic conditions, namely economic upturns and downturns. In particular, we investigate the relationships and volatility transmissions between these commodities under these conditions. We propose a copula-based multivariate Markov-switching GARCH model with two regimes that depend on the economic conditions, and we perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, weekly data from 4 January 2005 to 1 September 2016 are used, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to the economic conditions. 
In addition, the results on hedge effectiveness suggest the optimal cross-hedge strategies under different economic conditions, especially economic upturns and downturns.
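The cross-hedge ratio whose correlation term the copulas supply can be illustrated with its static sample version; this textbook minimum-variance formula (h* = rho * sigma_spot / sigma_futures) is a simplified stand-in for the paper's regime-dependent, copula-based estimate.

```python
import numpy as np

def cross_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance cross-hedge ratio h* = rho * sigma_s / sigma_f,
    estimated from return samples with the ordinary Pearson correlation."""
    s = np.asarray(spot_returns, dtype=float)
    f = np.asarray(futures_returns, dtype=float)
    rho = np.corrcoef(s, f)[0, 1]
    return rho * s.std(ddof=1) / f.std(ddof=1)
```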

Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach

Procedia PDF Downloads 182
64 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact

Authors: Shivdayal Patel, Suhail Ahmad

Abstract:

Safety assurance and failure prediction for the composite material components of an offshore structure under low-velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with the material properties and the load due to an impact. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario is driven by the large uncertainties arising in the system. A probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering the uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon owing to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates further into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT) to overcome this problem. The limit state function g(x) is accordingly established from the stresses in the lamina, with g(x) > 0 denoting the safe state. The Gaussian process response surface method is presently adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. The chain of failure events due to the different failure modes is considered to estimate the consequences of the failure scenario. 
Frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
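A crude Monte Carlo estimate of the probability of failure for a simple limit state illustrates the quantity being computed; the strength and stress distributions below are invented for illustration only, and the paper's actual estimator is the Gaussian process response surface method, not direct sampling.

```python
import numpy as np

def probability_of_failure(n=100_000, seed=0):
    """Monte Carlo estimate of P(g(X) <= 0) for an illustrative limit state
    g = strength - stress with assumed Gaussian uncertainties (in MPa)."""
    rng = np.random.default_rng(seed)
    strength = rng.normal(600.0, 40.0, n)  # assumed laminate strength
    stress = rng.normal(450.0, 50.0, n)    # assumed impact-induced stress
    g = strength - stress                  # g > 0 denotes the safe state
    return float(np.mean(g <= 0))
```

With these assumed distributions, g is Gaussian with mean 150 and standard deviation sqrt(40² + 50²) ≈ 64, so the failure probability is about 1%.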

Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling

Procedia PDF Downloads 259
63 Comparison of Home Ranges of Radio Collared Jaguars (Panthera onca L.) in the Dry Chaco and Wet Chaco of Paraguay

Authors: Juan Facetti, Rocky McBride, Karina Loup

Abstract:

The Chaco region of Paraguay is a key biodiverse area for the conservation of the jaguar (Panthera onca), the largest feline of the Americas. It comprises five eco-regions, which hold important but decreasing populations of this species. In recent decades, the expansion of soybean farming over the Atlantic Forest has forced the translocation of cattle ranches towards the Chaco. Few studies of jaguar population densities in the American hemisphere have been done until now. In the region, the species is listed as vulnerable or threatened, and more information is needed to implement any conservation policy. Among the factors that threaten the populations are land-use change, habitat fragmentation, prey depletion and illegal hunting. The two largest eco-regions were studied: the Wet Chaco and the Dry Chaco. From 2002, more than 20 jaguars were captured and fitted with GPS collars. Data collected from 11 GPS collars were processed, transformed numerically and finally converted into maps for analysis. Over 1,867 days, 8,092 locations were determined for four adult females (AF) and one adult male (AM) in the Wet Chaco, and one AF, one juvenile male (JM) and four AM in the Dry Chaco. GIS and kernel methodology were used to calculate the daily distance of movement, the home range (HR, 95% isopleth), and the core area (considered as the 50% isopleth). In the Wet Chaco, HR were 56 km² and 238 km² for females and males respectively, while in the Dry Chaco HR were 685 km² and 844.5 km² for females and males respectively, and 172 km² for a juvenile. The core areas of individual activity were on average 11.5 km² and 33.55 km² for AF and AM respectively in the Wet Chaco, while in the Dry Chaco they were larger: 115 km² for five AM, 225 km² for an AF and 32.4 km² for a JM. In both eco-regions, only one relevant overlap of adult HR was reported: during the reproduction season, the HR (95% kernel) of one AM overlapped 49.83% with that of one AF. 
In the Wet Chaco, the maximum daily distance moved was 14.5 km for an AF and 11.6 km for the AM, while the Maximum Mean Daily Movement (MMDM) distance was 5.6 km for an AF and 3.1 km for an AM. In the Dry Chaco, the maximum daily distance was 61.7 km for an AF, 50.9 km for the AM and 6.6 km for the JM, while the MMDM distance was 13.2 km for an AM and 8.4 km for an AF. This study confirmed that increasing encroachment on jaguar habitat results in fragmented landscapes that influence the spacing patterns of jaguars. Males used larger home ranges than the smaller females and covered larger daily distances. There appeared to be important spatial segregation not only between females but also between males. It is likely that the larger areas used by males are partly caused by the sexual dimorphism in body size, which entails differences in prey requirements; this could also explain the larger distances travelled daily by males.
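The convex-polygon half of the home-range calculation can be sketched with SciPy; for a 2-D hull, `ConvexHull.volume` is the enclosed area, so the function below returns the 100% Minimum Convex Polygon area from a set of GPS fixes.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_home_range(locations_km):
    """100% Minimum Convex Polygon home-range area (km²) from an (n, 2)
    set of GPS fixes in km; in 2-D, ConvexHull.volume is the polygon area."""
    return float(ConvexHull(np.asarray(locations_km, dtype=float)).volume)
```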

Keywords: Chaco ecoregions, Jaguar, home range, Panthera onca, Paraguay

Procedia PDF Downloads 286
62 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual-energy/spectral CT scanning permits the simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual-energy scanning (i.e., dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned in Dual Energy scan mode on a Siemens Somatom Force at the 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairings. The same vials were scanned in Spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on the vendor-specific Siemens Syngo VIA (VB40) for the Dual Energy data and Philips Intellispace Portal (Ver. 12) for the Spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the Dual Layer CT scanner. Although the Dual Source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80kV/Sn150kV had higher accuracy. 
Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique.
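The intraclass correlation analysis can be sketched as below; ICC(2,1) (two-way random effects, absolute agreement, single measurement) is assumed, since the abstract does not state which ICC form was used.

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n_subjects, k_raters) matrix, e.g. vials x measurement modes."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # mean squares for subjects (rows), raters (columns), and residual error
    msr = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)
    msc = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)
    sse = np.sum((Y - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

When all raters agree exactly, the residual and rater variances vanish and the ICC is 1.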

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 101
61 A Descriptive Study of the Mineral Content of Conserved Forage Fed to Horses in the United Kingdom, Ireland, and France

Authors: Louise Jones, Rafael De Andrade Moral, John C. Stephens

Abstract:

Background: Minerals are an essential component of correct nutrition, and conserved hay/haylage is an important component of many horses' diets. Variations in the mineral content of conserved forage should be considered when assessing dietary intake. Objectives: This study describes the levels of, and differences in, 15 commonly analysed minerals in conserved forage fed to horses in the United Kingdom (UK), Ireland (IRL), and France (FRA). Methods: Hay (FRA n=92, IRL n=168, UK n=152) and haylage samples (UK n=287, IRL n=49) were collected during 2017-2020. Mineral analysis was undertaken using inductively coupled plasma mass spectrometry (ICP-MS). Statistical analysis was performed using beta regression, Gaussian, or gamma models, depending on the nature of the response variable. Results: There are significant differences in the mineral content of the UK, IRL, and FRA conserved forage samples. FRA hay samples had significantly higher (p < 0.05) Sulphur (0.16 ± 0.0051%), Calcium (0.56 ± 0.0342%), Magnesium (0.16 ± 0.0069 mg/kg DM), Iron (194 ± 23.0 mg/kg DM), Cobalt (0.21 ± 0.0244 mg/kg DM) and Copper (4.94 ± 0.196 mg/kg DM) content compared to hay from the other two countries. UK hay samples had significantly less (p < 0.05) Selenium (0.07 ± 0.0084 mg/kg DM), whilst IRL hay samples were significantly (p < 0.05) higher in Chloride (0.9 ± 0.026 mg/kg DM) compared to hay from the other two countries. IRL haylage samples were significantly (p < 0.05) higher in Phosphorus (0.26 ± 0.0102%), Sulphur (0.17 ± 0.0052%), Chloride (1.01 ± 0.0519%), Calcium (0.54 ± 0.0257%), Selenium (0.17 ± 0.0322 mg/kg DM) and Molybdenum (1.47 ± 0.137 mg/kg DM) compared to haylage from the UK. Main limitations: Forage samples were obtained from professional yards and may not be reflective of the forages fed by most horse owners. Information regarding soil type, grass species, fertiliser treatment, harvest, or storage conditions was not included in this study. 
Conclusions: At a DM intake of 2% of body weight, conserved forage as sampled in this study will be insufficient to meet the NRC maintenance requirements for Zinc, Iodine, and Copper, and Se intake will also be insufficient for horses fed the UK conserved forage. Many horses receive hay/haylage as the main component of their diet; this study highlights the need to consider forage analysis when making dietary recommendations.

Keywords: conserved forage, hay, haylage, minerals

Procedia PDF Downloads 203
60 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has a broad impact on realistic problems encountered in the aerospace, ship, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate way to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach for studying the transient response of structures interacting with a fluid medium. Since beams and plates are considered to be the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems in order to demonstrate its efficiency. The formulation is a combination of various structural theories and the solid-fluid interface boundary condition, which is used to represent the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equations are used as the theoretical equations governing the fluid domain. For each theory, a coupled set of equations is derived, where the element matrices of both regimes are calculated by Gaussian quadrature integration. The main feature of the proposed methodology is to model the fluid domain as an added mass, i.e., an external distributed force due to the presence of the fluid. We validate the accuracy of the formulation by means of numerical examples. Since the formulation presented in this study covers several theories in the literature, the applicability of the proposed approach is independent of the structure geometry. 
The effect of varying parameters such as the structure thickness ratio, fluid density and immersion depth is studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
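The Gaussian quadrature step used to form the element matrices can be illustrated on the simplest case: the consistent mass matrix of a 2-node element with linear shape functions on the reference interval [-1, 1]. The element choice is illustrative; the paper assembles matrices for the various beam and plate theories in the same way.

```python
import numpy as np

def element_mass_matrix(n_gauss=2):
    """Consistent mass matrix M_ij = integral of N_i * N_j over [-1, 1] for a
    2-node element with linear shape functions, evaluated by Gauss-Legendre
    quadrature (2 points integrate the quadratic integrand exactly)."""
    xi, w = np.polynomial.legendre.leggauss(n_gauss)
    M = np.zeros((2, 2))
    for x, wt in zip(xi, w):
        N = np.array([(1 - x) / 2, (1 + x) / 2])  # linear shape functions
        M += wt * np.outer(N, N)
    return M
```

The exact result is [[2/3, 1/3], [1/3, 2/3]], which the 2-point rule reproduces to machine precision.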

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 544