Search results for: atomic models
6030 Promoting Biofuels in India: Assessing Land Use Shifts Using Econometric Acreage Response Models
Authors: Y. Bhatt, N. Ghosh, N. Tiwari
Abstract:
Acreage response functions are modeled taking into account expected harvest prices, weather-related variables, and other non-price variables, allowing for partial adjustment. At the outset, based on the literature on price expectation formation, we explored suitable formulations for estimating the farmers' expected prices. Assuming that farmers form expectations rationally, the prices of food and biofuel crops are modeled using time-series methods, testing for possible ARCH/GARCH effects to account for volatility. The prices projected by these models are then inserted as proxies for the expected prices in the acreage response functions. Food crop acreages in different growing states are found to be sensitive to their prices relative to those of one or more of the biofuel crops considered. The percentage improvement in food crop yields required to offset the acreage loss is worked out.
Keywords: acreage response function, biofuel, food security, sustainable development
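The expectation-plus-partial-adjustment mechanism the abstract describes can be sketched in toy form. This is not the paper's estimated model: a naive AR(1) forecast stands in for the ARCH/GARCH price models, and every coefficient and price below is hypothetical.

```python
# Illustrative sketch (hypothetical numbers): an AR(1) expected-price proxy
# feeding a Nerlovian partial-adjustment acreage response.
def expected_price(prices, phi=0.7):
    """One-step-ahead AR(1) forecast: phi * last price + (1 - phi) * mean."""
    mean = sum(prices) / len(prices)
    return phi * prices[-1] + (1 - phi) * mean

def acreage_response(prev_acreage, exp_price, alpha=10.0, beta=2.0, gamma=0.5):
    """Partial adjustment: A_t = A_{t-1} + gamma * (A*_t - A_{t-1}),
    with desired acreage A*_t = alpha + beta * expected price."""
    desired = alpha + beta * exp_price
    return prev_acreage + gamma * (desired - prev_acreage)

prices = [4.0, 4.2, 4.5, 4.1]   # past harvest prices (hypothetical units)
p_exp = expected_price(prices)
a_next = acreage_response(prev_acreage=15.0, exp_price=p_exp)
```

With these numbers the forecast price is 4.13 and next-season acreage adjusts halfway toward the desired level, reaching 16.63.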
Procedia PDF Downloads 299
6029 Soil Degradation Resulting from Migration of Ion Leachate in Gosa Dumpsite, Abuja
Authors: S. Ebisintei, M. A. Olutoye, A. S. Kovo, U. G. Akpan
Abstract:
The effect of soil degradation due to ion leachate migration was investigated using a dumpsite located in the Idu industrial area of Abuja. The aim was to assess the health and environmental pollution consequences of heavy metal concentrations in the soil for inhabitants of the surrounding settlement. Soil samples collected from four cardinal points and at the center of the site during the dry and wet seasons were pretreated and digested, and the heavy metal concentrations present were analyzed using an Atomic Absorption Spectrophotometer. The concentrations of Pb, Cu, Mn, Ni, and Cr were determined, including for a control sample obtained 300 m away from the dumpsite. Water samples were collected from three wells to test for the physicochemical properties of pH, COD, BOD, DO, hardness, conductivity, and alkalinity. The results showed a significant difference in the concentrations of toxic heavy metals at the dumpsite compared to the control sample. A mathematical model was developed to predict heavy metal concentrations beyond the sampling points. The results indicate that metal concentrations in both the dry and wet seasons were above the standards set by the WHO and SON. The trend, if unrestrained, portends danger to human life and reduces agricultural productivity and sustainability.
Keywords: soil degradation, ion leachate, productivity, environment, sustainability
Procedia PDF Downloads 345
6028 The Use of Empirical Models to Estimate Soil Erosion in Arid Ecosystems and the Importance of Native Vegetation
Authors: Meshal M. Abdullah, Rusty A. Feagin, Layla Musawi
Abstract:
When humans mismanage arid landscapes, soil erosion can become a primary mechanism leading to desertification. This study focuses on applying soil erosion models to a disturbed landscape in Umm Nigga, Kuwait, and identifying its predicted change under restoration plans. The northern portion of Umm Nigga, containing both coastal and desert ecosystems, falls within the boundaries of the Demilitarized Zone (DMZ) adjacent to Iraq and has been fenced off to restrict public access since 1994. The central objective of this project was to utilize GIS and remote sensing to compare the MPSIAC (Modified Pacific South West Inter Agency Committee), EMP (Erosion Potential Method), and USLE (Universal Soil Loss Equation) soil erosion models and determine their applicability for arid regions such as Kuwait. Spatial analysis was used to develop the necessary datasets for factors such as soil characteristics, vegetation cover, runoff, climate, and topography. Results showed that the MPSIAC and EMP models produced a similar spatial distribution of erosion, though the MPSIAC had more variability. For the MPSIAC model, approximately 45% of the land surface ranged from moderate to high soil loss, while 35% ranged from moderate to high for the EMP model. The USLE model had contrasting results and a different spatial distribution of soil loss, with 25% of the area ranging from moderate to high erosion and 75% ranging from low to very low. We concluded that MPSIAC and EMP were the most suitable models for arid regions in general, with the MPSIAC model performing best. We then applied the MPSIAC model to identify the amount of soil loss between coastal and desert areas, and between fenced and unfenced sites. In the desert area, soil loss differed between fenced and unfenced sites. In the desert fenced sites, 88% of the surface was covered with vegetation and soil loss was very low, while at the desert unfenced sites vegetation cover was only 3% and soil loss correspondingly higher.
In the coastal areas, the amount of soil loss was nearly the same between fenced and unfenced sites. These results imply that vegetation cover plays an important role in reducing soil erosion, and that fencing is much more important in desert ecosystems to protect against overgrazing. When applying the MPSIAC model predictively, we found that vegetation cover could be increased from 3% to 37% in unfenced areas, and soil erosion would then decrease by 39%. We conclude that the MPSIAC model is best suited to predict soil erosion for arid regions such as Kuwait.
Keywords: soil erosion, GIS, Modified Pacific South West Inter Agency Committee model (MPSIAC), erosion potential method (EMP), universal soil loss equation (USLE)
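For readers unfamiliar with the USLE model compared above, its core is a simple product of factors. The sketch below uses hypothetical factor values (not values from this study) to show how the cover factor C, which vegetation restoration reduces, drives predicted soil loss.

```python
# Minimal sketch of the Universal Soil Loss Equation, A = R * K * LS * C * P,
# with purely hypothetical factor values for one grid cell.
def usle(R, K, LS, C, P):
    """Annual soil loss A as the product of rainfall erosivity R, soil
    erodibility K, slope length-steepness LS, cover C, and practice P."""
    return R * K * LS * C * P

# Sparse cover (high C) vs. restored native vegetation (low C):
loss_bare = usle(R=80.0, K=0.4, LS=1.2, C=0.45, P=1.0)
loss_vegetated = usle(R=80.0, K=0.4, LS=1.2, C=0.05, P=1.0)
```

Holding the other factors fixed, cutting C by an order of magnitude cuts predicted loss by the same factor, which is the qualitative effect the fenced/unfenced comparison illustrates.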
Procedia PDF Downloads 297
6027 Removal of Heavy Metal from Wastewater using Bio-Adsorbent
Authors: Rakesh Namdeti
Abstract:
Liquid waste, or wastewater, is essentially the water supply of a community after it has been used in a variety of applications. In recent years, concentrations of heavy metals, among other pollutants, have increased to levels dangerous for the living environment in many regions. Among the heavy metals, lead has the most damaging effects on human health. It can enter the human body through the uptake of food (65%), water (20%), and air (15%). Against this background, a low-cost and easily available biosorbent was used and is reported in this study. The scope of the present study is to remove lead from aqueous solution using Olea europaea resin as a biosorbent. The results showed that the biosorption capacity of the Olea europaea resin biosorbent was high for lead removal. The Langmuir, Freundlich, Tempkin, and Dubinin-Radushkevich (D-R) models were used to describe the biosorption equilibrium of lead on the Olea europaea resin biosorbent, and the biosorption followed the Langmuir isotherm. The kinetic models showed that the pseudo-second-order rate expression represented the biosorption data well for this biosorbent.
Keywords: novel biosorbent, central composite design, lead, isotherms, kinetics
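The Langmuir isotherm and pseudo-second-order kinetics the abstract reports can be written down compactly. The parameter values below are illustrative only, not fitted values from the study.

```python
# Hedged sketch of the two models the abstract fits; qmax, KL, k2 are
# hypothetical parameters, not results from the paper.
def langmuir(Ce, qmax, KL):
    """Langmuir equilibrium uptake: q_e = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, qe, k2):
    """Integrated pseudo-second-order: q_t = k2 * qe^2 * t / (1 + k2 * qe * t)."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

q_eq = langmuir(Ce=50.0, qmax=30.0, KL=0.1)        # mg/g at Ce = 50 mg/L
q_60 = pseudo_second_order(t=60.0, qe=q_eq, k2=0.01)  # uptake after 60 min
```

The kinetic curve approaches the equilibrium uptake q_e from below as t grows, which is the behavior a pseudo-second-order fit captures.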
Procedia PDF Downloads 76
6026 Refitting Equations for Peak Ground Acceleration in Light of the PF-L Database
Authors: Matevž Breška, Iztok Peruš, Vlado Stankovski
Abstract:
A systematic overview of existing Ground Motion Prediction Equations (GMPEs) has been published by Douglas. The number of earthquake recordings used for fitting these equations has increased in the past decades. The current PF-L database contains 3550 recordings. Since GMPEs frequently model the peak ground acceleration (PGA), the goal of the present study was to refit a selection of 44 of the existing equation models for PGA in light of the latest data. The Levenberg-Marquardt algorithm was used for fitting the coefficients of the equations, and the results are evaluated both quantitatively, by presenting the root mean squared error (RMSE), and qualitatively, by drawing graphs of the five best-fitted equations. The RMSE was found to be as low as 0.08 for the best equation models. The newly estimated coefficients vary from the values published in the original works.
Keywords: ground motion prediction equations, Levenberg-Marquardt algorithm, refitting, PF-L database, peak ground acceleration
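As a simplified stand-in for the refitting step: when a GMPE is linear in its coefficients, e.g. log10(PGA) = a + b·M, ordinary least squares recovers them directly, and RMSE summarizes the fit (the study uses Levenberg-Marquardt for general nonlinear forms). The magnitude/PGA records below are synthetic, not from the PF-L database.

```python
# Synthetic records: (moment magnitude, PGA in g). Fit log10(PGA) = a + b*M
# by closed-form simple linear regression, then report RMSE in log10 units.
import math

records = [(5.0, 0.05), (5.5, 0.08), (6.0, 0.14), (6.5, 0.22), (7.0, 0.36)]
xs = [m for m, _ in records]
ys = [math.log10(pga) for _, pga in records]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

rmse = math.sqrt(sum((a + b * x - y) ** 2 for x, y in zip(xs, ys)) / n)
```

Distance and site terms would enter the same way as extra regressors; a genuinely nonlinear functional form is what motivates the iterative Levenberg-Marquardt fit.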
Procedia PDF Downloads 460
6025 Finite Element Modeling Techniques of Concrete in Steel and Concrete Composite Members
Authors: J. Bartus, J. Odrobinak
Abstract:
The paper presents a nonlinear 3D analysis model of composite steel and concrete beams with web openings using the Finite Element Method (FEM). The core of the study is the introduction of basic modeling techniques, comprising the description of material behavior, appropriate element selection, and recommendations for overcoming problems with convergence. Results from various finite element models are compared in the study. The main objective is to observe the concrete failure mechanism and its influence on the structural performance of the numerical models of the beams at particular load stages. The bearing capacity of the beams and the corresponding deformations, stresses, strains, and fracture patterns were determined. The results show how load-bearing elements consisting of concrete parts can be analyzed using FEM software with various options to create the most suitable numerical model. The paper demonstrates the versatility of the Ansys software for structural simulations.
Keywords: Ansys, concrete, modeling, steel
Procedia PDF Downloads 121
6024 Generalization of Zhou Fixed Point Theorem
Authors: Yu Lu
Abstract:
Fixed point theory is a basic tool for the study of the existence of Nash equilibria in game theory. This paper presents a significant generalization of the Veinott-Zhou fixed point theorem for increasing correspondences, which serves as an essential framework for investigating the existence of Nash equilibria in supermodular and quasisupermodular games. To establish our proofs, we explore different conceptions of multivalued increasingness and provide comprehensive results concerning the existence of the largest/least fixed point. We provide two distinct approaches to the proof, each offering unique insights and advantages. These advancements not only extend the applicability of the Veinott-Zhou theorem to a broader range of economic scenarios but also enhance the theoretical framework for analyzing equilibrium behavior in complex game-theoretic models. Our findings pave the way for future research in the development of more sophisticated models of economic behavior and strategic interaction.
Keywords: fixed-point, Tarski’s fixed-point theorem, Nash equilibrium, supermodular game
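The constructive side of such lattice fixed-point results can be illustrated on a finite lattice: iterating a monotone map from the bottom element climbs to the least fixed point. The map below is a toy example on the powerset of {0, 1, 2}, not a construction from the paper.

```python
# Kleene-style iteration to the least fixed point of a monotone map on the
# powerset lattice of {0, 1, 2}, ordered by inclusion.
def least_fixed_point(f, bottom):
    """Iterate bottom, f(bottom), f(f(bottom)), ... until the value stabilises."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

def f(s):
    """A monotone map: always add 0; add 1 once 0 is already present."""
    out = set(s) | {0}
    if 0 in s:
        out.add(1)
    return frozenset(out)

lfp = least_fixed_point(f, frozenset())   # {} -> {0} -> {0, 1} -> fixed
```

Monotone correspondences on complete lattices, as in the Veinott-Zhou setting, generalize this single-valued picture; the least/largest fixed points the abstract mentions play the role that `lfp` plays here.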
Procedia PDF Downloads 52
6023 Physical Properties of Nano-Sized Poly-N-Isopropylacrylamide Hydrogels
Authors: Esra Alveroglu Durucu, Kenan Koc
Abstract:
In this study, we synthesized and characterized nano-sized poly(N-isopropylacrylamide) (PNIPAM) hydrogels. N-isopropylacrylamide (NIPAM) micro- and macrogels are known as thermosensitive colloidal structures, responding to changes in environmental conditions such as temperature and pH. Here, nano-sized gels were synthesized via a precipitation copolymerization method. N,N-methylenebisacrylamide (BIS) and ammonium persulfate (APS) were used as crosslinker and initiator, respectively. 8-Hydroxypyrene-1,3,6-trisulfonic acid (pyranine, Py) molecules were used to tune the particle size, and thus the physical properties, of the nano-sized hydrogels. Fluorescence spectroscopy, atomic force microscopy, and light scattering methods were used to characterize the synthesized hydrogels. The results show that the gel size decreased from 550 to 140 nm with increasing amount of the ionic molecule, due to the electrostatic behavior of the ionic side groups of pyranine. Light scattering experiments demonstrate that the lower critical solution temperature (LCST) of the gels shifts to lower temperature with decreasing gel size due to the hydrophobicity-hydrophilicity balance of the polymer chains.
Keywords: hydrogels, lower critical solution temperature, nanogels, poly(N-isopropylacrylamide)
Procedia PDF Downloads 243
6022 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad S. Daba, J. P. Dubois
Abstract:
Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke's model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.
Keywords: Doppler shift, filtered Poisson process, generalized Clarke's model, non-isotropic scattering, partially developed scattering, Rician distribution
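As a point of reference for the generalized models derived in the paper, the classical isotropic Clarke model gives a closed-form Doppler-shift PDF, f(v) = 1 / (pi * fd * sqrt(1 - (v/fd)^2)) for |v| < fd. The sketch below evaluates it and checks numerically that it integrates to one; the maximum Doppler frequency is a hypothetical value.

```python
# Clarke (isotropic scattering) Doppler-shift PDF and a midpoint-rule check
# that it integrates to 1 over (-fd, fd) despite the endpoint singularities.
import math

def clarke_pdf(v, fd):
    if abs(v) >= fd:
        return 0.0
    return 1.0 / (math.pi * fd * math.sqrt(1.0 - (v / fd) ** 2))

fd = 100.0            # maximum Doppler shift in Hz (hypothetical)
n = 100000            # midpoints never touch the singular endpoints
h = 2 * fd / n
total = sum(clarke_pdf(-fd + (i + 0.5) * h, fd) for i in range(n)) * h
```

The characteristic "bathtub" shape (mass piling up at ±fd) is what the non-isotropic and partially developed scattering generalizations reshape.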
Procedia PDF Downloads 370
6021 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called ensembling further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients, and it builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
Procedia PDF Downloads 132
6020 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Authors: Analise Borg, Paul Micallef
Abstract:
Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying different ways to organize this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market which are capable of identifying a piece of music in a short time. Different audio effects and degradations make it much harder to identify the unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric based algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel Spectrum Coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that the non-parametric analysis offers results comparable to those reported in the literature.
Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7
Procedia PDF Downloads 419
6019 Using Confirmatory Factor Analysis to Test the Dimensional Structure of Tourism Service Quality
Authors: Ibrahim A. Elshaer, Alaa M. Shaker
Abstract:
Several previous empirical studies have operationalized service quality as either a multidimensional or a unidimensional construct. While a few earlier studies investigated some aspects of the assumed dimensional structure of service quality, no study has been found to have tested the construct's dimensionality using confirmatory factor analysis (CFA). To gain better insight into the dimensional structure of the service quality construct, this paper tests its dimensionality using three CFA models (a higher-order factor model, an oblique factor model, and a one-factor model) on a set of data collected from 390 British tourists who visited Egypt. The results of the three tested models indicate that the service quality construct is multidimensional. This result helps resolve the problems that might arise from the lack of clarity concerning the dimensional structure of service quality, as without testing the dimensional structure of a measure, researchers cannot assume that a significant correlation is the result of factors measuring the same construct.
Keywords: service quality, dimensionality, confirmatory factor analysis, Egypt
Procedia PDF Downloads 590
6018 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, and the comparison showed that MNIST performed more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better they are than classical approaches; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale and resized to 28 × 28 pixels, and 10,000 test and 50,000 training images were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
Procedia PDF Downloads 128
6017 Dynamical Models for Environmental Effect Depuration for Structural Health Monitoring of Bridges
Authors: Francesco Morgan Bono, Simone Cinquemani
Abstract:
This research aims to enhance bridge monitoring by employing innovative techniques that incorporate exogenous factors into the modeling of sensor signals, thereby improving long-term predictability beyond traditional static methods. Using real datasets from two different bridges equipped with Linear Variable Displacement Transducer (LVDT) sensors, the study investigates the fundamental principles governing sensor behavior for more precise long-term forecasts. Additionally, the research evaluates performance on noisy and synthetically damaged data, proposing a residual-based alarm system to detect anomalies in the bridge. In summary, this novel approach combines advanced modeling, exogenous factors, and anomaly detection to extend prediction horizons and improve preemptive damage recognition, significantly advancing structural health monitoring practices.
Keywords: structural health monitoring, dynamic models, SINDy, railway bridges
Procedia PDF Downloads 37
6016 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KGs) and their relation to Graph Embeddings (GEs) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15k benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant here as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
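For context on the TransE baseline the abstract cites, its scoring rule is a translation distance in embedding space: a triple (head, relation, tail) is plausible when head + relation lands near tail. The two-dimensional vectors below are toy values, not trained embeddings.

```python
# TransE-style scoring sketch: score(h, r, t) = -|| h + r - t ||_2,
# so higher (closer to 0) means a more plausible triple. Toy vectors only.
def transe_score(h, r, t):
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

paris = [1.0, 0.0]
france = [1.0, 1.0]
germany = [3.0, 4.0]
capital_of = [0.0, 1.0]   # hypothetical relation vector

good = transe_score(paris, capital_of, france)    # h + r equals t exactly here
bad = transe_score(paris, capital_of, germany)
```

Link prediction then amounts to ranking candidate tails by this score, which is exactly the "next node/link prediction" scope the abstract says such models are limited to.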
Procedia PDF Downloads 67
6015 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
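The first step of the Diffusion Map pipeline described above (a Gaussian kernel on the data, row-normalized into a Markov transition matrix whose eigenvectors give the embedding) can be sketched on a toy one-dimensional dataset; the eigendecomposition itself is omitted here.

```python
# Build the Markov (diffusion) matrix of a diffusion map on toy 1-D data.
import math

points = [0.0, 0.1, 0.2, 5.0, 5.1]   # two well-separated clusters (toy data)
eps = 0.5                             # kernel bandwidth (hypothetical choice)

kernel = [[math.exp(-(a - b) ** 2 / eps) for b in points] for a in points]
markov = [[k / sum(row) for k in row] for row in kernel]   # rows sum to 1

row_sums = [sum(row) for row in markov]
# Diffusion stays almost entirely within a cluster: transition mass from the
# first point to the far cluster is negligible.
cross_mass = markov[0][3] + markov[0][4]
```

The nontrivial eigenvectors of `markov` (in practice computed with a sparse eigensolver) supply the low-dimensional coordinates that separate the two clusters.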
Procedia PDF Downloads 105
6014 DUSP16 Inhibition Rescues Neurogenic and Cognitive Deficits in Alzheimer's Disease Mice Models
Authors: Huimin Zhao, Xiaoquan Liu, Haochen Liu
Abstract:
The major challenge facing Alzheimer's disease (AD) drug development is how to effectively improve cognitive function in clinical practice. Growing evidence indicates that stimulating hippocampal neurogenesis is a strategy for restoring cognition in animal models of AD. The mitogen-activated protein kinase (MAPK) pathway is a crucial factor in neurogenesis, and it is negatively regulated by dual-specificity phosphatase 16 (DUSP16). Transcriptome analysis of post-mortem brain tissue revealed up-regulation of DUSP16 expression in AD patients. Additionally, DUSP16 was involved in regulating the proliferation and neural differentiation of neural progenitor cells (NPCs). Nevertheless, whether DUSP16 ameliorates cognitive disorders by influencing NPC differentiation in AD mice remained unclear. Our study demonstrates an association between DUSP16 SNPs and clinical progression in individuals with mild cognitive impairment (MCI). In addition, we found that increased DUSP16 expression in both the 3×Tg and SAMP8 models of AD led to NPC differentiation impairments. By silencing DUSP16, cognitive benefits, including the induction of adult hippocampal neurogenesis (AHN) and synaptic plasticity, were observed in AD mice. Furthermore, we found that DUSP16 is involved in the process of NPC differentiation by regulating c-Jun N-terminal kinase (JNK) phosphorylation. Moreover, the increase in DUSP16 may be regulated by the ETS transcription factor ELK1, which binds to the promoter region of DUSP16. Loss of ELK1 resulted in decreased DUSP16 mRNA and protein levels. Our data uncover a potential regulatory role for DUSP16 in adult hippocampal neurogenesis and suggest a possible target for AD intervention.
Keywords: Alzheimer's disease, cognitive function, DUSP16, hippocampal neurogenesis
Procedia PDF Downloads 71
6013 Morphological Characteristic of Hybrid Thin Films
Authors: Azyuni Aziz, Syed A. Malik, Shahrul Kadri Ayop, Fatin Hana Naning
Abstract:
Organic-inorganic hybrid thin films, which consist of inorganic and organic materials, have recently attracted researchers because they can offer many benefits. Combining inorganic and organic materials can give high efficiency and low manufacturing cost in applications such as solar cells; furthermore, organic materials are environment-friendly. In this study, poly(3-hexylthiophene) was chosen as the organic material and combined with inorganic nanoparticles, Cadmium Sulfide (CdS) quantum dots. Samples were prepared at different weight percentages using a new technique, Angle Lifting Deposition (ALD). All prepared samples were then characterized by Field Emission Scanning Electron Microscopy (FESEM) with Energy-Dispersive X-ray Spectroscopy (EDX) and Atomic Force Microscopy (AFM) to study the sample surfaces and determine their surface roughness. The results show that the inorganic nanoparticles affected the surface of the samples and that the surface roughness increased with increasing weight percentage of CdS in the thin film samples.
Keywords: AFM, CdS, FESEM-EDX, hybrid thin films, P3HT
Procedia PDF Downloads 500
6012 Static and Dynamic Behaviors of Sandwich Structures With Metallic Connections
Authors: Shidokht Rashiddadash, Mojtaba Sadighi, Soheil Dariushi
Abstract:
Since sandwich structures are used in many areas ranging from ships, trains, automobiles, and aircraft to bridges and buildings, joining sandwich structures is necessary in almost all industries, and so the application of metallic joints between sandwich panels is increasing. Various joining methods are available, such as mechanically fastened joints (riveting or bolting) or adhesively bonded joints, and the choice between them depends on the application. In this research, sandwich specimens were fabricated with two types of metallic connections with dissimilar geometries. These specimens included beams and plates and were manufactured using glass-epoxy skins and an aluminum honeycomb core. After construction of the specimens, bending and low-velocity impact tests were performed on them, and the behaviors of the specimens are discussed. Numerical models were developed using the LS-DYNA software and validated with the test results. Finally, parametric studies were performed on the thicknesses and lengths of the two connections using the numerical models.
Keywords: connection, honeycomb, low velocity impact, sandwich panel, static test
Procedia PDF Downloads 54
6011 A Guide for Using Viscoelasticity in ANSYS
Authors: A. Fettahoglu
Abstract:
The theory of viscoelasticity is used by many researchers to represent the behavior of materials such as pavements on roads or bridges. Several researchers have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of unusual geometry and domain can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely the generalized Maxwell model and Prony series, which are the two methods used by ANSYS to represent viscoelastic material behavior, are presented explicitly. Afterwards, a guide is illustrated to ease the use of the viscoelasticity tool in ANSYS.
Keywords: ANSYS, generalized Maxwell model, finite element method, Prony series, viscoelasticity, viscoelastic material curve fitting
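The Prony series mentioned above has a simple closed form for the relaxation modulus of a generalized Maxwell model: G(t) = G_inf + sum of G_i * exp(-t / tau_i) over the Maxwell branches. The two-branch parameters below are hypothetical, chosen only to show the instantaneous and long-time limits.

```python
# Prony-series relaxation modulus of a generalized Maxwell model.
import math

def prony_modulus(t, G_inf, terms):
    """terms is a list of (G_i, tau_i) pairs, one per Maxwell branch."""
    return G_inf + sum(G_i * math.exp(-t / tau_i) for G_i, tau_i in terms)

terms = [(40.0, 0.1), (20.0, 10.0)]   # (branch modulus, relaxation time s)
G0 = prony_modulus(0.0, G_inf=10.0, terms=terms)       # instantaneous modulus
G_long = prony_modulus(1e6, G_inf=10.0, terms=terms)   # long-time plateau
```

Curve fitting the (G_i, tau_i) pairs to measured relaxation data is exactly the "viscoelastic material curve fitting" step the ANSYS workflow automates.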
Procedia PDF Downloads 601
6010 Bayesian Meta-Analysis to Account for Heterogeneity in Studies Relating Life Events to Disease
Authors: Elizabeth Stojanovski
Abstract:
Associations between life events and various forms of cancer have been identified. The purpose of a recent random-effects meta-analysis was to identify studies that examined the association between adverse events associated with changes to financial status, including decreased income, and breast cancer risk. The same association was studied in four separate studies whose traits, such as study design, location, and time frame, were not consistent between studies. It was of interest to pool information from the various studies to help identify characteristics that differentiated the study results. Two random-effects Bayesian meta-analysis models are proposed to combine the reported estimates of the described studies. The proposed models allow major sources of variation to be taken into account, including study-level characteristics, between-study variance, and within-study variance, and illustrate the ease with which uncertainty can be incorporated using a hierarchical Bayesian modelling approach.
Keywords: random-effects, meta-analysis, Bayesian, variation
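As a classical counterpart to the Bayesian hierarchical models proposed above, the DerSimonian-Laird estimator pools study-level estimates while explicitly separating within-study and between-study variance. The four effect estimates and variances below are synthetic, used only to show the mechanics.

```python
# DerSimonian-Laird random-effects pooling of study estimates y_i with
# within-study variances v_i (synthetic numbers, e.g. log relative risks).
def dersimonian_laird(ys, vs):
    w = [1.0 / v for v in vs]                      # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
    k = len(ys)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)             # between-study variance
    w_star = [1.0 / (v + tau2) for v in vs]        # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, ys)) / sum(w_star)
    return mu, tau2

mu, tau2 = dersimonian_laird([0.30, 0.10, 0.55, -0.05], [0.04, 0.02, 0.09, 0.05])
```

A Bayesian hierarchical model replaces the plug-in tau2 with a full posterior, which is what lets study-level covariates and all sources of uncertainty propagate coherently.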
Procedia PDF Downloads 159
6009 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification
Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong
Abstract:
It remains challenging to measure the reliability of classification results obtained from different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in breast ultrasound classification models. Experimental results on clinical breast ultrasound datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials. The results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibits low multicenter correlation.
Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization
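The Model Soup weight-fusion step the proposed algorithm builds on can be sketched in a few lines; the `score` callback below is a stand-in for the paper's dual-channel reliability objective, which is not specified here, so this is an illustration rather than the authors' implementation:

```python
def uniform_soup(state_dicts):
    """Average corresponding parameters across model checkpoints
    ('model soup' style weight fusion). Each state dict maps a
    parameter name to a flat list of floats."""
    n = len(state_dicts)
    return {k: [sum(sd[k][i] for sd in state_dicts) / n
                for i in range(len(state_dicts[0][k]))]
            for k in state_dicts[0]}

def greedy_soup(state_dicts, score):
    """Greedily add checkpoints to the soup only when the averaged
    weights do not decrease `score` (a stand-in objective here)."""
    soup = [state_dicts[0]]
    best = score(uniform_soup(soup))
    for sd in state_dicts[1:]:
        candidate = soup + [sd]
        s = score(uniform_soup(candidate))
        if s >= best:
            soup, best = candidate, s
    return uniform_soup(soup)

# Toy usage: two one-parameter checkpoints, scored by closeness to 1.0.
fused = greedy_soup([{"w": [0.0]}, {"w": [2.0]}],
                    score=lambda sd: -abs(sd["w"][0] - 1.0))
```

Replacing the accuracy-based greedy criterion with a reliability objective, as the abstract describes, only changes the `score` callback; the fusion itself stays a parameter-wise average.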
Procedia PDF Downloads 83
6008 Simulation of Wind Solar Hybrid Power Generation for Pumping Station
Authors: Masoud Taghavi, Gholamreza Salehi, Ali Lohrasbi Nichkoohi
Abstract:
Despite the growing use of renewable energies in many fields, the application of this technology to water supply has received less attention. The photovoltaic-wind hybrid system is a new topic in renewable energy; it comprises photovoltaic arrays, wind turbines, a bank of batteries as a storage system, and a diesel generator as a backup system. In this investigation, climate data, including average wind speed and solar radiation at each time of the year, are first collected and the energy analysis is performed. Wind turbines in four models, photovoltaic panels at six levels of relative power, and battery and diesel generator capacities in seven states are then combined in two models; the hours of operation with the renewables, the diesel generator, and the battery bank are checked, and a hybrid wind-solar power generation system under optimized conditions is presented.
Keywords: renewable energy, wind and solar energy, hybrid systems, pumping station
Procedia PDF Downloads 396
6007 On Differential Growth Equation to Stochastic Growth Model Using Hyperbolic Sine Function in Height/Diameter Modeling of Pines
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
Richard's growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution to this was called the hyperbolic Richard's growth model, having transformed the solution from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richard's growth model, an approach which mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R2), Mean Absolute Error (MAE) and Mean Square Error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to examine the behavior of the error term for possible violations. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Richard's nonlinear growth model than under the classical Richard's growth model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Richard's, stochastic
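The classical Richard's (generalized logistic) baseline and the three comparison criteria the study uses can be sketched as follows; the hyperbolic-sine modification itself is not reproduced, and the parameter values are illustrative:

```python
import math

def richards(t, a, b, k, m):
    """Classical Richards (generalized logistic) growth curve:
        H(t) = A / (1 + b * exp(-k * t)) ** (1 / m)
    A is the asymptote (mature height), k the growth rate, m the shape."""
    return a / (1.0 + b * math.exp(-k * t)) ** (1.0 / m)

def fit_metrics(observed, predicted):
    """R^2, mean absolute error and mean squared error, the three
    criteria used to compare the candidate growth models."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    mae = sum(abs(o - p) for o, p in zip(observed, predicted)) / n
    return 1.0 - ss_res / ss_tot, mae, ss_res / n
```

The stochastic version described in the abstract would add a random error term around this mean function; the comparison then reduces to computing these metrics for each model's predictions against the observed height/Dbh series.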
Procedia PDF Downloads 478
6006 Forecasting Model to Predict Dengue Incidence in Malaysia
Authors: W. H. Wan Zakiyatussariroh, A. A. Nasuhar, W. Y. Wan Fairos, Z. A. Nazatul Shahreen
Abstract:
Forecasting dengue incidence in a population can provide useful information to facilitate the planning of public health interventions. Many studies on dengue cases in Malaysia have been conducted, but few model the outbreak or forecast incidence. This article attempts to propose the most appropriate time series model to explain the behavior of dengue incidence in Malaysia for the purpose of forecasting future dengue outbreaks. Several seasonal auto-regressive integrated moving average (SARIMA) models were developed to model Malaysia’s dengue incidence using weekly data collected from January 2001 to December 2011. The SARIMA (2,1,1)(1,1,1)52 model was found to be the most suitable for Malaysia’s dengue incidence, with the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC) values for in-sample fitting. The models were further evaluated for out-of-sample forecast accuracy using four different accuracy measures. The results indicate that SARIMA (2,1,1)(1,1,1)52 performed well in both in-sample fitting and out-of-sample evaluation.
Keywords: time series modeling, Box-Jenkins, SARIMA, forecasting
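The model-selection step, choosing among the fitted SARIMA specifications by AIC and BIC, can be sketched as follows; the log-likelihoods and parameter counts below are invented for illustration, not taken from the study:

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k ln(n) - 2 ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

def best_model(candidates):
    """Pick the candidate with the lowest AIC. `candidates` maps a
    model label to (log_likelihood, number_of_parameters)."""
    return min(candidates,
               key=lambda name: aic(*candidates[name]))

# Two hypothetical fitted SARIMA specifications:
cands = {"SARIMA(1,1,1)(1,1,1)52": (-100.0, 4),
         "SARIMA(2,1,1)(1,1,1)52": (-95.0, 6)}
winner = best_model(cands)
```

Here the second specification wins despite having more parameters, because its higher log-likelihood outweighs the AIC complexity penalty; BIC, with its stronger ln(n) penalty, can prefer the smaller model on long series.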
Procedia PDF Downloads 483
6005 Heavy Metal Distribution in Tissues of Two Commercially Important Fish Species, Euryglossa orientalis and Psettodes erumei
Authors: Reza Khoshnood, Zahra Khoshnood, Ali Hajinajaf, Farzad Fahim, Behdokht Hajinajaf, Farhad Fahim
Abstract:
In 2013, 24 fish samples were taken from two fishery regions in Bandar-Abbas and Bandar-Lengeh, the fishing grounds north of Hormoz Strait (Persian Gulf) near the Iranian coastline. The two flat fishes were oriental sole (Euryglossa orientalis) and deep flounder (Psettodes erumei). Using the ROPME method (MOOPAM) for chemical digestion, Cd concentration was measured with a non-flame atomic absorption spectrophotometry technique. The average concentration of Cd in the edible muscle tissue of deep flounder was 0.15±0.06 µg g-1 in Bandar-Abbas and 0.1±0.05 µg g-1 in Bandar-Lengeh. The corresponding values for oriental sole were 0.2±0.13 and 0.13±0.11 µg g-1. The average concentration of Cd in the liver tissue of deep flounder was 0.22±0.05 µg g-1 in Bandar-Abbas and 0.2±0.04 µg g-1 in Bandar-Lengeh. The values for oriental sole were 0.31±0.09 and 0.24±0.13 µg g-1 in Bandar-Abbas and Bandar-Lengeh, respectively.
Keywords: trace metal, Euryglossa orientalis, Psettodes erumei, Persian Gulf
Procedia PDF Downloads 667
6004 A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of the various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided that identifies patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights provided by synthesizing established information.
Fire spread models provide insights into potential fire behavior, helping authorities make informed decisions about evacuation activities, allocate resources for firefighting efforts, and plan preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and advances our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics that these models consider include weather (factors such as wind speed and direction), topography (factors like landscape elevation), and fuel availability (factors like vegetation type), among others. The models discussed are physics-based, data-driven, or hybrid, some also utilizing ML techniques like attention-based neural networks to enhance performance. In order to lessen the destructive effects of forest fires, this work aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
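Of the families surveyed, the cellular-automata approach is the simplest to sketch; the toy version below (three cell states, a uniform spread probability, and a fixed wind bonus) is an illustrative construction, not any specific published model:

```python
import random

def step(grid, p_spread, wind=(0, 0)):
    """One step of a toy cellular-automata fire spread model.
    Cell states: 0 = unburnt fuel, 1 = burning, 2 = burnt out.
    Each burning cell ignites each unburnt neighbour with probability
    p_spread, nudged upward for the neighbour in the wind direction."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                new[r][c] = 2  # a cell burns out after one step
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 0:
                            p = p_spread + (0.2 if (dr, dc) == wind else 0.0)
                            if random.random() < min(p, 1.0):
                                new[rr][cc] = 1
    return new
```

Topography and fuel heterogeneity, which real spread models such as those surveyed here account for, would enter as per-cell modifiers on the ignition probability.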
Procedia PDF Downloads 81
6003 Identification and Characterization of Heavy Metal Resistant Bacteria from the Klip River
Authors: P. Chihomvu, P. Stegmann, M. Pillay
Abstract:
Pollution of the Klip River has caused the microorganisms inhabiting it to develop protective survival mechanisms. This study isolated and characterized the heavy-metal-resistant bacteria in the Klip River. Water and sediment samples were collected from six sites along the course of the river. The pH, turbidity, salinity, temperature and dissolved oxygen were measured in situ. The concentrations of six heavy metals (Cd, Cu, Fe, Ni, Pb, and Zn) in the water samples were determined by atomic absorption spectroscopy. Biochemical and antibiotic profiles of the isolates were assessed using the API 20E® kit and the Kirby-Bauer method. Growth studies were carried out using spectrophotometric methods. The isolates were identified using 16S rDNA sequencing. The uppermost part of the Klip River, with the lowest pH, had the highest levels of heavy metals. Turbidity, salinity and specific conductivity increased measurably at Site 4 (Henley on Klip Weir). MIC tests showed that 16 isolates exhibited high iron and lead resistance. Antibiotic susceptibility tests revealed that the isolates exhibited multiple tolerances to drugs such as tetracycline, ampicillin, and amoxicillin.
Keywords: Klip River, heavy metals, resistance, 16S rDNA
Procedia PDF Downloads 324
6002 Models of Start-Ups Created in Cooperation with a State University
Authors: Roman Knizek, Denisa Knizkova, Ludmila Fridrichova
Abstract:
The academic environment in Central Europe has recently been transforming itself and is trying to link its research and development with the private sector. However, compared to Western countries, there is a lack of history and continuity because of the centrally controlled economy from the end of the Second World War until the early 1990s. There are two basic models of how to carry out technology transfer between the academic and business worlds. The first is to develop something new and then find a suitable private-sector partner; the second is to find a partner who has the basic idea and then develop something new in collaboration. This study, unlike some others, describes two specific cases that took place in cooperation with the Technical University of Liberec, Faculty of Textiles. In one case, a product was first developed and an investor was then sought; in the other, an investor wanted a specific product and wanted to help with its development. The study describes the various advantages and disadvantages, including a practical example of the creation of a subsequent start-up.
Keywords: start-up, state university, academic environment, licensing agreement
Procedia PDF Downloads 13
6001 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology
Authors: Ugwu O. C., Mamah R. O., Awudu W. S.
Abstract:
This work is aimed at enhancing signal reception and minimizing outage probability in a mobile radio network using adaptive beamforming antenna arrays. In this research work, an empirical real-time drive measurement was done in a cellular network of Globalcom Nigeria Limited located at Ikeja, the headquarters of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement included the Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of this research. The Received Signal Strength and Bit Error Rate were measured with a spectrum-monitoring van with the help of a ray tracer, at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were made with the help of a Global Positioning System (GPS). The other equipment used included transmitting-equipment measurement software (Temsoftware), laptops and log files, which showed received signal strength with distance from the base station. Results of about 11% were obtained from the real-time experiment, which showed that mobile radio networks are prone to signal failure; this can be minimized using an Adaptive Beamforming Antenna Array through a significant reduction in Bit Error Rate, implying improved performance of the mobile radio network. In addition, this work did not only include experiments through empirical measurement; enhanced mathematical models were also developed and implemented as reference models for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, among other assumptions. These enhanced (proposed) models were validated using a MATLAB (version 7.6.3.35) program and compared with the conventional antenna for accuracy.
These outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented in the wireless mobile radio network.
Keywords: beamforming algorithm, adaptive beamforming, simulink, reception
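The conventional (delay-and-sum) steering step that adaptive beamforming generalizes can be sketched for a uniform linear array; the element count, spacing, and look angles below are illustrative, and the paper's adaptive weight update is not reproduced:

```python
import cmath
import math

def steering_vector(n_elements, spacing_wl, angle_deg):
    """Steering vector of a uniform linear array; spacing_wl is the
    element spacing in wavelengths, angle measured from broadside."""
    phi = 2 * math.pi * spacing_wl * math.sin(math.radians(angle_deg))
    return [cmath.exp(1j * k * phi) for k in range(n_elements)]

def array_gain(weights, angle_deg, spacing_wl=0.5):
    """Magnitude of the array response |w^H a(theta)|."""
    a = steering_vector(len(weights), spacing_wl, angle_deg)
    return abs(sum(w.conjugate() * ai for w, ai in zip(weights, a)))

# Conventional (delay-and-sum) weights pointed at a 30-degree user:
n = 8
w = [x / n for x in steering_vector(n, 0.5, 30.0)]
```

Steering the main lobe toward the wanted user while the response falls off elsewhere is what reduces interference and, in turn, the Bit Error Rate; an adaptive beamformer updates `w` continuously from the received data instead of fixing it.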
Procedia PDF Downloads 40