Search results for: predicting model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17007


16617 Identification of Classes of Bilinear Time Series Models

Authors: Anthony Usoro

Abstract:

In this paper, two classes of bilinear time series models are obtained under certain conditions from the general bilinear autoregressive moving average model. The Bilinear Autoregressive (BAR) and Bilinear Moving Average (BMA) models have been identified. From the general bilinear model, the BAR and BMA models are shown to exist for q = Q = 0 (implying j = 0) and for p = P = 0 (implying i = 0), respectively. These models are useful for modelling many economic and financial data series.
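
A minimal simulation sketch of a first-order bilinear autoregressive process may help make the model class concrete; the BL(1,0,1,1) form, the coefficients, and the noise level below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bar1(n, a=0.4, b=0.2, sigma=1.0, burn_in=200):
    """Simulate a simple first-order bilinear autoregressive process
    X_t = a*X_{t-1} + b*X_{t-1}*e_{t-1} + e_t, with e_t ~ N(0, sigma^2).
    Coefficients are illustrative and satisfy a^2 + b^2*sigma^2 < 1."""
    e = rng.normal(0.0, sigma, size=n + burn_in)
    x = np.zeros(n + burn_in)
    for t in range(1, n + burn_in):
        x[t] = a * x[t - 1] + b * x[t - 1] * e[t - 1] + e[t]
    return x[burn_in:]  # discard the burn-in so the returned series is near-stationary

series = simulate_bar1(500)
print(series[:5])
```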

Keywords: autoregressive model, bilinear autoregressive model, bilinear moving average model, moving average model

Procedia PDF Downloads 394
16616 A Nonlinear Visco-Hyper Elastic Constitutive Model for Modelling Behavior of Polyurea at Large Deformations

Authors: Shank Kulkarni, Alireza Tabarraei

Abstract:

The remarkable properties of polyurea, such as flexibility, durability, and chemical resistance, have given it a wide range of applications in various industries. Effective prediction of the response of polyurea under different loading and environmental conditions necessitates the development of an accurate constitutive model. Similar to most polymers, the behavior of polyurea depends on both strain and strain rate. Therefore, the constitutive model should be able to capture both of these effects on the response of polyurea. To achieve this objective, in this paper, a nonlinear hyper-viscoelastic constitutive model is developed by the superposition of a hyperelastic and a viscoelastic model. The proposed constitutive model can capture the behavior of polyurea under compressive loading conditions at various strain rates. A four-parameter Ogden model and a Mooney-Rivlin model are used to model the hyperelastic behavior of polyurea. The viscoelastic behavior is modeled using both a three-parameter standard linear solid (SLS) model and a K-BKZ model. Comparison of the modeling results with experiments shows that the Ogden and SLS models predict the behavior of polyurea more accurately. The material parameters of the model are found by curve fitting the proposed model to the uniaxial compression test data. The proposed model can closely reproduce the stress-strain behavior of polyurea for strain rates up to 6500 s⁻¹.
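
As a concrete illustration of the hyperelastic part, the sketch below evaluates the uniaxial Cauchy stress of an incompressible two-term (four-parameter) Ogden model; the moduli and exponents are placeholder values, not the polyurea constants fitted in the paper.

```python
import numpy as np

def ogden_uniaxial_cauchy(lam, mu, alpha):
    """Uniaxial Cauchy stress of an incompressible two-term Ogden model,
    with lambda1 = lam and lambda2 = lambda3 = lam**-0.5:
        sigma = sum_i mu_i * (lam**alpha_i - lam**(-alpha_i / 2))
    mu (Pa) and alpha are illustrative placeholders."""
    lam = np.asarray(lam, dtype=float)
    return sum(m * (lam**a - lam**(-a / 2.0)) for m, a in zip(mu, alpha))

stretch = np.linspace(0.6, 1.0, 5)  # compressive stretches
sigma = ogden_uniaxial_cauchy(stretch, mu=[1.0e6, 0.2e6], alpha=[2.0, -2.0])
print(sigma)  # negative values indicate compression
```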

Keywords: constitutive modelling, Ogden model, polyurea, SLS model, uniaxial compression test

Procedia PDF Downloads 230
16615 Finite Element Analysis of Raft Foundation on Various Soil Types under Earthquake Loading

Authors: Qassun S. Mohammed Shafiqu, Murtadha A. Abdulrasool

Abstract:

The design of shallow foundations to withstand different dynamic loads has received considerable attention in recent years. Dynamic loads may be due to earthquakes, pile driving, blasting, water waves, and machine vibrations. However, predicting the behavior of shallow foundations during earthquakes remains a difficult task for geotechnical engineers. A database of dynamic and static parameters for different soils in seismically active zones in Iraq, collected from geophysical and geotechnical investigation works, was prepared. Using this database, a typical 3-D soil-raft foundation system was analyzed under earthquake loading, and a parametric study was carried out to consider the influence of several parameters on the dynamic behavior of the raft foundation, such as raft stiffness and damping ratio, as well as the influence of the earthquake acceleration-time records. The results of the parametric study show that the settlement caused by the earthquake can be decreased by about 72% by increasing the raft thickness from 0.5 m to 1.5 m, whereas the maximum bending moment is reduced by about 82% when the raft thickness is decreased from 1.5 m to 0.5 m in all site models. It is also observed that the maximum lateral displacement, the maximum vertical settlement, and the maximum bending moment for a damping ratio of 0% are about 14%, 20%, and 18% higher, respectively, than those for a damping ratio of 7.5% in all site models.

Keywords: shallow foundation, seismic behavior, raft thickness, damping ratio

Procedia PDF Downloads 142
16614 OmniDrive Model of a Holonomic Mobile Robot

Authors: Hussein Altartouri

Abstract:

In this paper, the kinematic and kinetic models of an omnidirectional holonomic mobile robot are presented. Together, the kinematic and kinetic models form the OmniDrive model. A mathematical model is therefore derived for a robot equipped with three omnidirectional wheels. This model, which takes the kinematics and kinetics of the robot into consideration, is developed into a state-space representation. Relative analysis of the velocities and displacements is used for the kinematics of the robot. Lagrange’s approach is used in this study for deriving the equation of motion. Only the drive train and the mechanical assembly of the Festo Robotino® are considered in this model. The model is developed mainly for motion control. Furthermore, the model can be used for simulation purposes in different virtual environments, not only Robotino® View. The model can also be used in mechatronics research and for teaching and learning advanced control theory.
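
A small kinematics sketch for a generic three-wheel omnidirectional base is given below; the wheel placement angles and the wheel-to-centre distance are assumed illustrative values, not the exact Robotino® geometry, and the full kinetic (Lagrangian) part is not reproduced here.

```python
import numpy as np

# Assumed geometry for a generic three-wheel omnidirectional base.
WHEEL_ANGLES = np.deg2rad([60.0, 180.0, 300.0])  # wheel placement angles (rad)
R = 0.135                                        # wheel-to-centre distance (m)

def wheel_jacobian():
    """Rows map the body twist (vx, vy, omega) to each wheel's rim speed:
    v_i = -sin(theta_i) * vx + cos(theta_i) * vy + R * omega."""
    return np.column_stack([-np.sin(WHEEL_ANGLES),
                            np.cos(WHEEL_ANGLES),
                            np.full(3, R)])

def inverse_kinematics(vx, vy, omega):
    """Body twist -> three wheel rim speeds."""
    return wheel_jacobian() @ np.array([vx, vy, omega])

def forward_kinematics(wheel_speeds):
    """Wheel rim speeds -> body twist, via the pseudo-inverse."""
    return np.linalg.pinv(wheel_jacobian()) @ np.asarray(wheel_speeds)

print(inverse_kinematics(0.2, 0.0, 0.5))  # wheel speeds for x-translation plus spin
```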

Keywords: mobile robot, omni-directional wheel, mathematical model, holonomic mobile robot

Procedia PDF Downloads 583
16613 Numerical Modeling of Timber Structures under Varying Humidity Conditions

Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan

Abstract:

Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The wood material response to this load case appears as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and, consequently, shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain already at low stress levels. Therefore, analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is of high importance. Relatively many one-dimensional rheological models for wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. Variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the non-uniform moisture content field within a timber member over time. The calculated moisture content in timber members is used as an input to the mechanical analysis. In the mechanical analysis, the total strain is assumed to be the sum of the elastic strain, viscoelastic strain, mechano-sorptive strain, and strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model, which has proved suitable for describing the creep of wood. The mechano-sorptive strain depends on the change in moisture content. The model includes mechano-sorptive material parameters that have to be calibrated against experimental results. The calibration is made against experiments carried out on wooden blocks subjected to uniaxial compressive loading in the tangential direction under varying humidity conditions. The moisture and mechanical models are implemented in finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observation of the real temporal and spatial distribution of the moisture-induced strains and stresses in timber members. Since the model's suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections, subjected to constant load and varying humidity is possible.

Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber

Procedia PDF Downloads 235
16612 Predicting Reading Comprehension in Spanish: The Evidence for the Simple View Model

Authors: Gabriela Silva-Maceda, Silvia Romero-Contreras

Abstract:

Spanish is a more transparent language than English, given that it has more direct correspondences between sounds and letters. It has therefore become important to understand how decoding and linguistic comprehension contribute to reading comprehension within the framework of the widely known Simple View Model. This study aimed to identify the level of prediction by these two components in a sample of 1st to 4th grade children attending two schools in central Mexico (one public and one private). Within each school, ten children were randomly selected at each grade level, and their parents were asked about reading habits and socioeconomic information. In total, 79 children completed three standardized tests measuring decoding (pseudo-word reading), linguistic comprehension (understanding of paragraphs), and reading comprehension, using subtests from the Clinical Evaluation of Language Fundamentals-Spanish, Fourth Edition, and the Test de Lectura y Escritura en Español (LEE). The data were analyzed using hierarchical regression, with decoding as a first step and linguistic comprehension as a second step. Results showed that decoding accounted for 19.2% of the variance in reading comprehension, while linguistic comprehension accounted for an additional 10%, adding up to 29.2% of variance explained: F(2, 75) = 15.45, p < .001. Socioeconomic status derived from parental questionnaires showed a statistically significant association with the type of school attended, χ²(3, N = 79) = 14.33, p = .002. Nonetheless, when analyzing the Simple View components, only decoding differences were statistically significant (t = -6.92, df = 76.81, p < .001, two-tailed); reading comprehension differences were also significant (t = -3.44, df = 76, p = .001, two-tailed). When socioeconomic status was included in the model, it explained 5.9% of unique variance even after accounting for the Simple View components, bringing the total variance explained to 35.1%. This three-predictor model was also significant: F(3, 72) = 12.99, p < .001. In addition, socioeconomic status was significantly correlated with the number of non-textbook books parents reported having at home, both for adults (rho = .61, p < .001) and for children (rho = .47, p < .001). The results converge with a large body of literature finding socioeconomic differences in reading comprehension; in addition, this study suggests that these differences were also present in decoding skills. Although linguistic comprehension differences between schools were expected, it is argued that the test used to measure this variable was not sensitive to linguistic differences, since it came from a test designed to diagnose clinical language disabilities. Even with this caveat, the results show that the components of the Simple View Model predict less than a third of the variance in reading comprehension in Spanish. However, the results also suggest that a fuller model of reading comprehension is obtained when the family's socioeconomic status is considered, given the differences shown by its association with books at home, a factor that is particularly important in countries where inequality gaps are relatively large.
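
The two-step hierarchical regression described above can be sketched as follows; the data here are synthetic stand-ins generated only to show the procedure, not the study's test scores.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data; the real study used 79 children's standardized scores.
rng = np.random.default_rng(1)
n = 79
decoding = rng.normal(size=n)
ling_comp = 0.3 * decoding + rng.normal(size=n)
reading = 0.45 * decoding + 0.35 * ling_comp + rng.normal(size=n)

# Step 1: decoding only
step1 = sm.OLS(reading, sm.add_constant(decoding)).fit()
# Step 2: decoding + linguistic comprehension
X2 = sm.add_constant(np.column_stack([decoding, ling_comp]))
step2 = sm.OLS(reading, X2).fit()

print(f"Step 1 R^2 = {step1.rsquared:.3f}")
print(f"Step 2 R^2 = {step2.rsquared:.3f}  (Delta R^2 = {step2.rsquared - step1.rsquared:.3f})")
```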

Keywords: decoding, linguistic comprehension, reading comprehension, simple view model, socioeconomic status, Spanish

Procedia PDF Downloads 316
16611 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of the time-dependent behavior of clay. The proposed model is based on bounding surface plasticity and the viscoplastic consistency framework to establish a continuous transition from plasticity to rate-dependent viscoplasticity. Unlike overstress-based models, this model satisfies the consistency condition in formulating the EVP constitutive equation. The procedure for deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: bounding surface, consistency theory, constitutive model, viscosity

Procedia PDF Downloads 479
16610 Rocket Launch Simulation for a Multi-Mode Failure Prediction Analysis

Authors: Mennatallah M. Hussein, Olivier de Weck

Abstract:

The advancement of space exploration demands a robust space launch services program capable of reliably propelling payloads into orbit. Despite rigorous testing and quality assurance, launch failures still occur, leading to significant financial losses and jeopardizing mission objectives. Traditional failure prediction methods often lack the sophistication to account for multi-mode failure scenarios, as well as the predictive capability needed for complex dynamic systems. Traditional approaches also rely on expert judgment, leading to variability in risk prioritization and mitigation strategies. Hence, there is a pressing need for robust approaches that enhance launch vehicle reliability, from lift-off until the vehicle reaches its parking orbit, through comprehensive simulation techniques. In this study, a multi-mode launch vehicle simulation framework is proposed for predicting failure scenarios when new technologies, such as new propulsion systems or advanced stage-separation mechanisms, are incorporated into the launch system. To this end, the model combines 6-DOF system dynamics with comprehensive data analysis to simulate multiple failure modes impacting launch performance. The simulator utilizes high-fidelity physics-based simulations to capture the complex interactions between different subsystems and environmental conditions.

Keywords: launch vehicle, failure prediction, propulsion anomalies, rocket launch simulation, rocket dynamics

Procedia PDF Downloads 15
16609 Effect of Quenching Medium on the Hardness of Dual Phase Steel Heat Treated at a High Temperature

Authors: Tebogo Mabotsa, Tamba Jamiru, David Ibrahim

Abstract:

Dual phase (DP) steel consists essentially of fine-grained equiaxed ferrite and a dispersion of martensite. Martensite is the primary precipitate in DP steels and provides the main resistance to dislocation motion within the material. The objective of this paper is to present a relation between the intercritical annealing holding time and the hardness of a dual phase steel. The initial heat treatment involved heating the specimens to 1000 °C and holding the samples at that temperature for 30 minutes. After the initial heat treatment, the samples were heated to 770 °C and held at constant temperature for 30, 60, or 90 minutes. After heating and holding the samples in the austenite-ferrite phase field, the samples were quenched in water, brine, or oil for each holding time. The experimental results show that an equation for predicting the hardness of a dual phase steel as a function of the intercritical holding time can be obtained. The relation between intercritical annealing holding time and hardness of a dual phase steel heat treated at high temperatures is parabolic in nature. Theoretically, the model is dependent on the cooling rate, because the model differs for each quenching medium; therefore, a universal hardness equation can be derived in which the cooling rate is a variable factor.
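
The parabolic hardness-versus-holding-time relation for a single quenchant can be fitted with a simple polynomial regression, as sketched below; the hardness values are hypothetical placeholders, and with only three holding times the parabola passes exactly through the data.

```python
import numpy as np

# Hypothetical hardness readings (HV) for a water quench at each intercritical
# holding time (min); values are placeholders, not the measured data.
holding_time = np.array([30.0, 60.0, 90.0])
hardness_hv = np.array([310.0, 345.0, 330.0])

# Fit the parabolic relation HV(t) = a*t^2 + b*t + c for this quenching medium.
a, b, c = np.polyfit(holding_time, hardness_hv, deg=2)
predict = np.poly1d([a, b, c])

print(f"HV(t) = {a:.4f} t^2 + {b:.3f} t + {c:.1f}")
print("Predicted hardness at 75 min:", round(float(predict(75.0)), 1), "HV")
```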

Keywords: quenching medium, annealing temperature, dual phase steel, martensite

Procedia PDF Downloads 73
16608 Kinetic and Thermodynamic Study of Nitrates Removal by Sorption on Biochar

Authors: Amira Touil, Achouak Arfaoui, Ibtissem Mannaii

Abstract:

The aim of this work is to monitor the adsorption of nitrates by biochar by studying the influence of various parameters on the removal of this pollutant from a synthetic aqueous solution. The results obtained indicate that a biochar dose of 4 g/L is the most efficient in terms of nitrate removal from aqueous solution. The biochar exhibited a good affinity for nitrates after 1 hour of contact. The nitrate removal yield decreases with increasing solution pH and increases with increasing temperature (60 °C > 40 °C > 20 °C). The best removal yield, about 80% of the initial concentration of 25 mg/L, is obtained at pH = 2, T = 60 °C, and a biochar dose of 4 g/L. The pseudo-second-order model fits the nitrate adsorption kinetics of biochar with a high coefficient of determination (R² ≥ 0.997), and a new equation correlating the rate constant of the reaction with temperature and pH was built. The Freundlich isotherm fits the nitrate adsorption data well (R² > 0.96) compared with the Langmuir isotherm. The thermodynamic parameters (ΔH°, ΔG°, ΔS°) were calculated to predict the nature of the adsorption.
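
The pseudo-second-order kinetic law and the Freundlich isotherm mentioned above can be fitted by nonlinear least squares, as in the sketch below; the concentration and uptake values are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce**(1.0 / n)

def pseudo_second_order(t, k2, qe_eq):
    """Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe_eq**2 * t / (1.0 + k2 * qe_eq * t)

# Illustrative equilibrium data (Ce in mg/L, qe in mg/g).
Ce = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
qe = np.array([1.8, 3.1, 4.6, 5.7, 6.5])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
print(f"Freundlich fit: KF = {KF:.2f}, n = {n:.2f}")

# Illustrative kinetic data (t in min, qt in mg/g).
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
qt = np.array([2.1, 3.4, 4.6, 5.4, 5.8])
(k2, qe_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[0.01, 6.0])
print(f"Second-order fit: k2 = {k2:.4f} g/(mg.min), qe = {qe_fit:.2f} mg/g")
```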

Keywords: pollution, biochar, nitrate, adsorption

Procedia PDF Downloads 85
16607 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks

Authors: Khalid Ali, Manar Jammal

Abstract:

In order to meet the stringent latency and reliability requirements of the upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN has allowed it to be treated as a Network Function Virtualization (NFV) architecture, while its components are considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques to O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastically allocating resources can provide a higher level of flexibility in the network, in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most existing elastic solutions are reactive in nature, despite the fact that proactive approaches are more agile, since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms. The algorithms aim to predict future O-RAN traffic by using previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. Two ML models were then proposed and evaluated based on their prediction capabilities.
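
As a minimal proactive-scaling sketch, the example below fits an ARIMA model to a synthetic traffic trace and forecasts the next 24 hours; the trace, the (2, 0, 1) order, and the hourly resolution are assumptions, not the paper's dataset or tuned models.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic hourly traffic trace standing in for O-RAN VNF load.
rng = np.random.default_rng(7)
hours = np.arange(24 * 14)
traffic = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

train, test = traffic[:-24], traffic[-24:]
model = ARIMA(train, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=24)  # scale VNF resources ahead of time

mae = np.mean(np.abs(forecast - test))
print(f"24-hour-ahead MAE: {mae:.2f}")
```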

Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity

Procedia PDF Downloads 206
16606 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network

Authors: Yasaman Sanayei, Alireza Bahiraie

Abstract:

This paper presents a systematic methodology based on the application of artificial neural networks to a sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater treatment technology that is especially suited for nutrient removal. Removing reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs, and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R² > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of the dye-containing wastewater treated by the SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with constant, predefined stage durations, resulting in low-efficiency operation. Data obtained from the on-line electronic sensors installed in the SBR and from the quality control laboratory analyses have been used to develop the optimal architectures of two different ANNs. The results have shown that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
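
A minimal sketch of such an input-output ANN is shown below on synthetic stand-in data; scikit-learn does not provide the Levenberg-Marquardt optimizer used in the paper, so the sketch falls back on the L-BFGS solver, and the feature ranges and target relations are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: inputs are influent COD, MLVSS, reaction time;
# outputs are effluent COD and BOD.
rng = np.random.default_rng(3)
X = rng.uniform([400, 1500, 4], [1200, 3500, 12], size=(200, 3))
y = np.column_stack([0.08 * X[:, 0] - 0.005 * X[:, 1] + rng.normal(0, 3, 200),
                     0.05 * X[:, 0] - 0.003 * X[:, 1] + rng.normal(0, 2, 200)])

Xs = StandardScaler().fit_transform(X)
ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(Xs, y)
print("R^2 on training data:", round(ann.score(Xs, y), 3))
```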

Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis

Procedia PDF Downloads 398
16605 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 129
16604 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model

Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu

Abstract:

Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed-parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component. As such, the model assumes constant values for the vegetation and hydraulic parameters throughout the duration of the hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model that has the capability of simulating all crops using unique parameter values for each crop. Simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model was employed for evaluating vegetative resistance from the hydraulic and vegetative parameters incorporated into the WRM model. The improved WRM model will have the ability to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow and will further enhance the hydrologic model's capability for accurate hydrologic studies.

Keywords: crop yield, roughness coefficient, PAR, WRM model

Procedia PDF Downloads 397
16603 Machine Learning in Gravity Models: An Application to International Recycling Trade Flow

Authors: Shan Zhang, Peter Suechting

Abstract:

Predicting trade patterns is critical to decision-making in the public and private domains, especially in the current context of trade disputes among major economies. In the past, U.S. recycling has relied heavily on strong demand for recyclable materials overseas. However, starting in 2017, a series of new recycling policies (bans and higher inspection standards) was enacted by multiple countries that had been the primary importers of recyclables from the U.S. prior to that point. As the global trade flow of recycling shifts, some new importers, mostly developing countries in South and Southeast Asia, have been overwhelmed by the sheer quantities of scrap materials they have received. As the leading exporter of recyclable materials, the U.S. now has a pressing need to build its recycling industry domestically. With respect to the global trade in scrap materials used for recycling, the interest in this paper is (1) predicting how the export of recyclable materials from the U.S. might vary over time, and (2) predicting how international trade flows for recyclables might change in the future. Focusing on three major recyclable materials with a history of trade, this study uses data-driven and machine learning (ML) algorithms, both supervised (shrinkage and tree methods) and unsupervised (a neural network method), to decipher the international trade pattern of recycling. Forecasting the potential trade values of recyclables in the future could help the importing countries to which those materials will shift next to prepare related trade policies. Such policies can assist policymakers in minimizing negative environmental externalities and in finding the optimal amount of recyclables needed by each country. Such forecasts can also help exporting countries, like the U.S., understand the importance of a healthy domestic recycling industry. The preliminary results suggest that gravity models, in addition to a particular selection of macroeconomic predictor variables, are appropriate predictors of the total export value of recyclables. With the inclusion of variables measuring aspects of the political conditions (trade tariffs and bans), predictions show that recyclable materials are shifting from more policy-restricted countries to less policy-restricted countries in international recycling trade. Those countries also tend to have high manufacturing activity as a percentage of their GDP.
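
A textbook log-linear gravity specification, estimated on synthetic bilateral flows, is sketched below to make the modelling idea concrete; the variables (GDPs, distance, an import-restriction dummy) and their coefficients are assumptions, not the paper's predictor set or estimates.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic bilateral-flow data mirroring a textbook gravity specification.
rng = np.random.default_rng(5)
n = 300
gdp_o = rng.lognormal(10, 1, n)   # exporter GDP
gdp_d = rng.lognormal(10, 1, n)   # importer GDP
dist = rng.lognormal(7, 0.5, n)   # bilateral distance
ban = rng.integers(0, 2, n)       # import-restriction dummy
trade = np.exp(1.0 + 0.8 * np.log(gdp_o) + 0.7 * np.log(gdp_d)
               - 1.1 * np.log(dist) - 0.9 * ban + rng.normal(0, 0.3, n))

X = sm.add_constant(np.column_stack([np.log(gdp_o), np.log(gdp_d),
                                     np.log(dist), ban]))
gravity = sm.OLS(np.log(trade), X).fit()
print(gravity.params)  # recovered elasticities and the policy-restriction effect
```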

Keywords: environmental economics, machine learning, recycling, international trade

Procedia PDF Downloads 159
16602 Numerical Modeling of the Depth-Averaged Flow over a Hill

Authors: Anna Avramenko, Heikki Haario

Abstract:

This paper reports the development and application of a 2D depth-averaged model. The main goal of this contribution is to apply the depth-averaged equations to a wind park model in which the geometry is introduced into the mathematical model through mass and momentum source terms. The depth-averaged model will be used in the future to find the optimal position of wind turbines in the wind park. k-ε and 2D LES turbulence models were considered in this article. 2D CFD simulations of flow over a single hill were carried out to check the depth-averaged model in practice.

Keywords: depth-averaged equations, numerical modeling, CFD, wind park model

Procedia PDF Downloads 595
16601 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation

Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam

Abstract:

Fashion is a human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant. It subconsciously explains a lot about one's mindset and mood. Analyzing colours extracted from outfit images is a critical step in examining individual/consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, no studies have extracted colours from specific apparel and identified colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images. Second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India's general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion's evolving colour preferences.
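
The second stage, extracting dominant colours from the segmented T-shirt pixels only, can be sketched with k-means clustering as below; the toy image, the mask, and k = 3 are assumptions standing in for the U-Net output and the framework's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colours(image_rgb, mask, k=3):
    """Cluster only the pixels the segmentation marked as T-shirt (mask == 1)
    and return the k cluster centres sorted by pixel count."""
    pixels = image_rgb[mask.astype(bool)].astype(float)  # (N, 3) garment pixels
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]
    return km.cluster_centers_[order].astype(int), counts[order]

# Toy example: a white background with a noisy dark-grey "T-shirt" square
# standing in for a real image and its segmentation mask.
rng = np.random.default_rng(0)
img = np.full((64, 64, 3), 255, dtype=np.uint8)
img[16:48, 16:48] = rng.normal([40, 40, 45], 6, (32, 32, 3)).clip(0, 255).astype(np.uint8)
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1

centres, counts = dominant_colours(img, mask)
print("dominant RGB:", centres[0], "covering", counts[0], "pixels")
```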

Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model

Procedia PDF Downloads 97
16600 UBCSAND Model Calibration for Generic Liquefaction Triggering Curves

Authors: Jui-Ching Chou

Abstract:

Numerical simulation is a popular method used to evaluate the effects of soil liquefaction on a structure or the effectiveness of a mitigation plan. Many constitutive models (the UBCSAND model, PM4 model, SANISAND model, etc.) have been presented to model the liquefaction phenomenon. In general, the inputs of a constitutive model need to be calibrated against the soil's cyclic resistance before being applied in a numerical simulation. Then, simulation results can be compared with results from simplified liquefaction potential assessment methods. In this article, the inputs of the UBCSAND model, a simple elastic-plastic stress-strain model, are calibrated against several popular generic liquefaction triggering curves of simplified liquefaction potential assessment methods via the FLAC program. The calibrated inputs enable engineers to perform a preliminary evaluation of an existing structure or a new design project.

Keywords: calibration, liquefaction, numerical simulation, UBCSAND Model

Procedia PDF Downloads 154
16599 An Alternative Approach for Assessing the Impact of Cutting Conditions on Surface Roughness Using Single Decision Tree

Authors: S. Ghorbani, N. I. Polushin

Abstract:

In this study, an approach to identify the factors affecting surface roughness in a machining process is presented. The study is based on 81 surface roughness measurements over a wide range of cutting tools (conventional, cutting tool with holes, cutting tool with composite material), workpiece materials (AISI 1045 steel, AA2024 aluminum alloy, A48-class 30 gray cast iron), spindle speed (630-1000 rpm), feed rate (0.05-0.075 mm/rev), depth of cut (0.05-0.15 mm), and tool overhang (41-65 mm). A single decision tree (SDT) analysis was carried out to identify the factors for predicting a surface roughness model, and the CART algorithm was employed for building and evaluating the regression tree. The results show that the single decision tree performs better than traditional regression models, with higher forecast accuracy.
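
A regression-tree sketch in the spirit of the CART analysis is shown below; the cutting-condition records are synthetic and the feature ranges merely mirror those quoted in the abstract, so the resulting importances are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic cutting-condition records: spindle speed (rpm), feed (mm/rev),
# depth of cut (mm), tool overhang (mm), and a roughness response Ra.
rng = np.random.default_rng(11)
X = rng.uniform([630, 0.05, 0.05, 41], [1000, 0.075, 0.15, 65], size=(81, 4))
Ra = 0.5 + 0.002 * X[:, 3] + 8 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 0.05, 81)

# scikit-learn's regression tree is a CART-style implementation.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=5,
                             random_state=0).fit(X, Ra)
importances = dict(zip(["speed", "feed", "depth", "overhang"],
                       tree.feature_importances_.round(3)))
print(importances)  # which cutting conditions drive roughness in the fitted tree
```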

Keywords: cutting condition, surface roughness, decision tree, CART algorithm

Procedia PDF Downloads 364
16598 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories

Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan

Abstract:

In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, this approach, as recent work in deep learning has shown, is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use “fading.” We find that this approach is superior to a traditional feed-forward network (FFN) model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Lastly, we find that a combined FFN+CNN is the best-performing network, with an error rate of 39%.

Keywords: basketball, computer vision, image processing, convolutional neural network

Procedia PDF Downloads 141
16597 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model 1: Description

Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu

Abstract:

Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed-parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component. As such, the model assumes constant values for the vegetation and hydraulic parameters throughout the duration of the hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model that has the capability of simulating all crops using unique parameter values for each crop. Simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model was employed for evaluating vegetative resistance from the hydraulic and vegetative parameters incorporated into the WRM model. The improved WRM model will have the ability to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow and will further enhance the hydrologic model's capability for accurate hydrologic studies.

Keywords: runoff, roughness coefficient, PAR, WRM model

Procedia PDF Downloads 366
16596 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher education institutions can benefit from their social and academic programs, higher education is facing challenges in the area of character development. This is manifested in college completion rates and in rates of teen pregnancy, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus, colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions, and more so in developed countries. However, with the complexity of today's student body in a changing world, a multidimensional approach combining the multiple factors that enhance character development outcomes is needed. The main purpose of this study was to identify such opportunities in colleges and to develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales, with in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable, was administered to a total of 501 third- and fourth-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. The analysis showed that in-classroom interactions have a substantial direct influence on students' character development outcomes (r = .75, p < .05). In addition, out-of-classroom interaction, school climate, and home environment contributed to students' character development outcomes, but in an indirect way. The study concluded that the classroom offers many opportunities for teachers to teach, model, and integrate character development among their students. Thus, public colleges and universities are encouraged to deliberately implement experiences that cultivate character within the classroom. These may contribute tremendously to students' character development outcomes and hence yield effective models of behaviour analysis in higher education.

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 125
16595 Development of a Model for Predicting Radiological Risks in Interventional Cardiology

Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an Interventional Radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose is important for improving the post-operative care given to the patient. The objective of this study is to estimate, before the intervention, the patient dose for Chronic Total Occlusion (CTO) procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the Interventional Cardiology (IC) department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. The Peak Skin Dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index (J-CTO score), specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators allowed their influence on the dose to be quantified. A predictive model of the dose was created using multiple linear regression. Results: Out of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 because the radiochromic films were placed outside the exposure field. 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score. The predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy. The mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the patient's skin dose is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma.
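
A multiple-linear-regression sketch of this kind of pre-procedure estimate is given below; the patient diameters, J-CTO scores, and dose coefficients are synthetic placeholders, not the study's 96-procedure dataset.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the CTO procedures: predictors are the patient's
# diameter (cm) and the J-CTO complexity score; the response is the peak
# skin dose (Gy). Coefficients below are illustrative only.
rng = np.random.default_rng(9)
n = 96
diameter = rng.normal(28, 4, n)
jcto = rng.integers(0, 4, n)
psd = 0.12 * diameter + 0.6 * jcto + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([diameter, jcto]))
model = sm.OLS(psd, X).fit()

new_patient = np.array([[1.0, 30.0, 2.0]])  # constant, diameter, J-CTO score
print("Pre-procedure PSD estimate (Gy):", round(model.predict(new_patient)[0], 2))
```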

Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose

Procedia PDF Downloads 127
16594 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks, or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell, and surface elements; this choice of building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as the limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model gives confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure that combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability of predicting the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 185
16593 Stock Market Prediction by Regression Model with Social Moods

Authors: Masahiro Ohmura, Koh Kakusho, Takeshi Okadome

Abstract:

This paper presents a regression model with autocorrelated errors in which the inputs are social moods obtained by analyzing the adjectives in Twitter posts using a document topic model. The regression model predicts the Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.
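
A regression with AR(1) errors of the kind described can be sketched with statsmodels' GLSAR, as below; the two "mood" series and the synthetic return series are stand-ins invented for illustration, not the paper's Twitter-derived data or its exact specification.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic mood scores and a synthetic market-return series with AR(1) errors.
rng = np.random.default_rng(2)
n = 250
calm, anxious = rng.normal(size=n), rng.normal(size=n)
err = np.zeros(n)
for t in range(1, n):                    # AR(1) error process
    err[t] = 0.5 * err[t - 1] + rng.normal(0, 0.2)
ret = 0.3 * calm - 0.2 * anxious + err

X = sm.add_constant(np.column_stack([calm, anxious]))
model = sm.GLSAR(ret, X, rho=1).iterative_fit(maxiter=10)
print("coefficients:", model.params, "estimated rho:", model.model.rho)
```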

Keywords: stock market prediction, social moods, regression model, DJIA

Procedia PDF Downloads 537
16592 Structural Equation Modeling Semiparametric Truncated Spline Using Simulation Data

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

SEM analysis is a complex multivariate analysis because it involves a number of exogenous and endogenous variables that are interconnected to form a model. The measurement model is divided into two types, namely the reflective model and the formative model. Before carrying out further tests on an SEM, there are assumptions that must be met, namely the linearity assumption, which determines the form of the relationship. There are three modeling approaches to path analysis: parametric, nonparametric, and semiparametric. The aim of this research is to develop semiparametric SEM and to obtain the best model. The data used in the research are secondary data that serve as the basis for generating simulation data. Simulation data were generated with sample sizes of 100, 300, and 500. In the semiparametric SEM analysis, the forms of relationship studied were linear and quadratic, with one and two knot points and various levels of error variance (EV = 0.5, 1, 5). Three levels of closeness of relationship were used in the analysis of the measurement model: low (0.1-0.3), medium (0.4-0.6), and high (0.7-0.9). The best model is obtained for the linear form of the relationship between X1 and Y1. In the measurement model, a characteristic of the reflective model is obtained, namely that the higher the closeness of the relationship, the better the model obtained. The originality of this research is the development of semiparametric SEM, which has not been widely studied by researchers.

Keywords: semiparametric SEM, measurement model, structural model, reflective model, formative model

Procedia PDF Downloads 23
16591 Modelling the Tensile Behavior of Plasma Sprayed Freestanding Yttria Stabilized Zirconia Coatings

Authors: Supriya Patibanda, Xiaopeng Gong, Krishna N. Jonnalagadda, Ralph Abrahams

Abstract:

Yttria-stabilized zirconia (YSZ) is used as a top coat in thermal barrier coatings for high-temperature turbine/jet engine applications. The mechanical behaviour of YSZ depends on microstructural features like crack density and porosity, which result from the coating method. However, experimentally ascertaining their individual effects is difficult due to the inherent challenges involved, such as material synthesis and handling. The current work deals with the development of a phenomenological model to replicate the tensile behavior of air plasma sprayed YSZ obtained from experiments. Initially, uniaxial tensile experiments were performed on freestanding YSZ coatings approximately 300 µm thick with different crack densities and porosities. The coatings exhibited nonlinear behavior and also a large variation in strength values. With the obtained experimental tensile curve as a base and crack density and porosity as the prime variables, a phenomenological model was developed through the ABAQUS interface with a new user material defined using a VUMAT subroutine. The relation between the tensile stress and the crack density was established empirically. Further, a parametric study was carried out to investigate the effect of the individual features on the nonlinearity of these coatings. This work enables new coating designs to be generated by varying the key parameters and predicting the mechanical properties with the help of simulation, thereby minimizing experiments.

Keywords: crack density, finite element method, plasma sprayed coatings, VUMAT

Procedia PDF Downloads 140
16590 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy is still lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. First, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined for the LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
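
To illustrate the deconvolution idea in one dimension, the sketch below applies a truncated-series (van Cittert) approximate inversion of a Gaussian filter to a filtered signal; this is only a simplified stand-in for the paper's direct deconvolution models with invertible, possibly anisotropic, LES filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def van_cittert_deconvolve(u_bar, sigma, n_iter=5):
    """Approximate inverse filtering by the truncated series
    u ~ sum_k (I - G)^k u_bar, iterated as u <- u + (u_bar - G u).
    A 1-D periodic Gaussian filter G stands in for the LES filter."""
    u = u_bar.copy()
    for _ in range(n_iter):
        u = u + (u_bar - gaussian_filter1d(u, sigma, mode="wrap"))
    return u

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u_true = np.sin(x) + 0.5 * np.sin(5 * x) + 0.2 * np.sin(11 * x)
u_bar = gaussian_filter1d(u_true, sigma=2.0, mode="wrap")  # "filtered" field
u_rec = van_cittert_deconvolve(u_bar, sigma=2.0)

print("L2 error before deconvolution:", round(float(np.linalg.norm(u_bar - u_true)), 3),
      "after:", round(float(np.linalg.norm(u_rec - u_true)), 3))
```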

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 60
16589 Metabolic Predictive Model for PMV Control Based on Deep Learning

Authors: Eunji Choi, Borang Park, Youngjae Choi, Jinwoo Moon

Abstract:

In this study, a predictive model for estimating the metabolism (MET) of the human body was developed for optimal control of the indoor thermal environment. Human body images for indoor activities and human body joint coordinate values were collected as data sets, which are used in the predictive model. A deep learning algorithm was used in an initial model, and its numbers of hidden layers and hidden neurons were optimized. Lastly, the model's prediction performance was analyzed after the model was trained on the collected data. In conclusion, the possibility of MET prediction was confirmed, and the direction of future work was proposed as developing more varied data and refining the predictive model.

Keywords: deep learning, indoor quality, metabolism, predictive model

Procedia PDF Downloads 248
16588 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out consideration of the variance. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of the unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulation demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy for variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 364