Search results for: surgical models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7346

6596 Persian Pistachio Nut (Pistacia vera L.) Dehydration in Natural and Industrial Conditions

Authors: Hamid Tavakolipour, Mohsen Mokhtarian, Ahmad Kalbasi Ashtari

Abstract:

In this study, the effect of various drying methods (sun drying, shade drying, and industrial drying) on final moisture content, shell splitting degree, shrinkage, and color change was studied. Sun drying resulted in a higher degree of shell splitting relative to the other drying methods. The ANOVA results showed that the different drying methods did not significantly affect the color change of the dried pistachio nuts. The results illustrated that pistachio nuts dried by industrial drying had the lowest moisture content. After the drying process ended, the experimental drying data were fitted with five well-known drying models, namely Newton, Page, Silva et al., Peleg, and Henderson and Pabis. The results indicated that the Peleg and Page models gave better results than the other models for monitoring the moisture ratio of pistachio nuts in the industrial drying and open sun (or shade) drying methods, respectively.
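
As an illustration of the model-fitting step described above, the sketch below fits the Page thin-layer drying model, MR = exp(-k*t^n), to a drying curve with SciPy; the time points, moisture ratios, and starting guesses are placeholders, not the study's measurements.

```python
# Minimal sketch: fit the Page drying model MR = exp(-k * t**n) to
# moisture-ratio data. All data values below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page model: moisture ratio as a function of drying time t (hours)."""
    return np.exp(-k * t**n)

# Hypothetical drying-curve data (time in hours, dimensionless moisture ratio).
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 24.0])
mr = np.array([1.00, 0.82, 0.68, 0.48, 0.25, 0.08, 0.03])

(k, n), _ = curve_fit(page_model, t, mr, p0=[0.1, 1.0])
residuals = mr - page_model(t, k, n)
rmse = np.sqrt(np.mean(residuals**2))
print(f"k = {k:.4f}, n = {n:.4f}, RMSE = {rmse:.4f}")
```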

Keywords: industrial drying, pistachio, quality properties, traditional drying

Procedia PDF Downloads 319
6595 Credit Risk Evaluation Using Genetic Programming

Authors: Ines Gasmi, Salima Smiti, Makram Soui, Khaled Ghedira

Abstract:

Credit risk is considered one of the most important issues for financial institutions, as it causes great losses for banks. To this end, numerous methods for credit risk evaluation have been proposed. Many evaluation methods are black-box models that cannot adequately reveal the information hidden in the data. However, several works have focused on building transparent rule-based models. For credit risk assessment, the generated rules must be not only highly accurate but also highly interpretable. In this paper, we aim to build a credit risk evaluation model that is both accurate and transparent and that produces a set of classification rules. In fact, we treat credit risk evaluation as an optimization problem solved with a genetic programming (GP) algorithm, where the goal is to maximize the accuracy of the generated rules. We evaluate our proposed approach on the German and Australian credit datasets. We compared our findings with some existing works; the results show that the proposed GP outperforms the other models.
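
To make the optimization target concrete, here is a minimal sketch of the fitness idea behind GP-based rule generation: a candidate rule is a boolean predicate over applicant features, and its fitness is classification accuracy on labelled credit data. The features, thresholds, and data are invented for illustration; a full GP loop would evolve such predicates through crossover and mutation.

```python
# Sketch of a rule-accuracy fitness function, the quantity a GP loop maximizes.
# Feature names, thresholds, and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dataset: columns = [income, debt_ratio], label 1 = good credit.
X = rng.normal(loc=[3.0, 0.4], scale=[1.0, 0.2], size=(200, 2))
y = ((X[:, 0] > 2.5) & (X[:, 1] < 0.5)).astype(int)  # synthetic ground truth

def rule(x):
    """One candidate rule an evolved GP tree might encode."""
    return (x[:, 0] > 2.4) & (x[:, 1] < 0.55)

def fitness(rule_fn, X, y):
    """Fitness = accuracy of the rule's predictions on labelled data."""
    return np.mean(rule_fn(X).astype(int) == y)

print(f"rule accuracy (fitness): {fitness(rule, X, y):.3f}")
```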

Keywords: credit risk assessment, rule generation, genetic programming, feature selection

Procedia PDF Downloads 336
6594 Non-melanoma Nasal Skin Cancer: Literature Review

Authors: Geovanna dos Santos Romeiro, Polintia Rayza Brito da Silva, Luis Henrique Moura, Izadora Moreira Do Amaral, Marília Vitória Pinto Milhomem

Abstract:

Introduction: The nose is one of the most likely sites for the appearance of malignancy on the face. This can be associated with its uniquely exposed position, lack of photoprotection, and susceptibility to greater sun exposure. It is already known that the most common type of nasal tumor is basal cell carcinoma. Squamous cell carcinoma is less common but considerably more aggressive, with a tendency to grow rapidly and metastasize. Nasal skin cancer can have a good prognosis, regardless of the type of treatment chosen, i.e., surgery, radiotherapy, or electrodissection. However, tumors that are not diagnosed and treated quickly can be harmful and have a greater chance of metastasizing. When curative surgery is performed, reconstructive therapies and surgical procedures are usually required. Objective: The objective is to review the literature on nasal skin tumors, their types, and their specific locations. Forty-four articles published in PubMed related to the location of skin cancer in specific nasal regions were analyzed. Twelve were excluded for predating the year 2000, three for inconclusive results, and one for biased conclusions. Results and Conclusion: Regarding the prevalence of types of nasal tumors, basal cell carcinoma comprises the majority, occurring predominantly in the ala, tip, and root; squamous cell carcinoma, on the other hand, is more common in the lateral borders and columella. Even so, two articles report that metastasis has a higher incidence in squamous cell carcinomas. All of this points to the importance of early localization, including in regions that are often overlooked during examination if the patient is wearing glasses. This topic needs further investigation to establish a stronger correlation between anatomy and clinical-surgical implications.

Keywords: skin cancer, melanoma, non-melanoma, surgery

Procedia PDF Downloads 39
6593 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its usefulness in human identification, and the statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not correspond to an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make significantly better use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology for distinguishing between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in a finite mixture model, however, is practically difficult even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components; instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, deals with the choice of the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used to construct Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
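
As a sketch of the Dirichlet-process prior mentioned above, the following implements the Chinese restaurant process: customer i joins an existing table with probability proportional to its occupancy and opens a new table with probability proportional to the concentration parameter alpha. The values of n and alpha are illustrative.

```python
# Minimal Chinese restaurant process (CRP) sampler: tables play the role of
# mixture components, so the number of components is not fixed in advance.
import numpy as np

def crp(n, alpha, rng):
    """Return table assignments and table sizes for n customers under CRP(alpha)."""
    tables = []            # tables[k] = number of customers at table k
    assignments = []
    for i in range(n):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= i + alpha                      # i customers seated so far, plus alpha
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)                    # open a new table (new component)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

rng = np.random.default_rng(42)
assign, sizes = crp(100, alpha=1.0, rng=rng)
print(f"number of components: {len(sizes)}, sizes: {sizes}")
```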

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 316
6592 Adaptive Neuro Fuzzy Inference System Model Based on Support Vector Regression for Stock Time Series Forecasting

Authors: Anita Setianingrum, Oki S. Jaya, Zuherman Rustam

Abstract:

Forecasting stock prices is a challenging task due to the complexity of the time series data, which arises from the many variables that affect the stock market. Many time series models have been proposed before, but those previous models still have some problems: 1) they depend on a subjective choice of technical indicators, and 2) they rely on assumptions about the variables, which limits their applicability across datasets. Therefore, this paper studies a novel Adaptive Neuro-Fuzzy Inference System (ANFIS) time series model based on Support Vector Regression (SVR) for forecasting the stock market. In order to evaluate the performance of the proposed models, stock market transaction data for the TAIEX and HSI from January to December 2015 were collected as experimental datasets. As a result, the method outperformed its counterparts in terms of accuracy.
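
A minimal sketch of the SVR component, assuming a simple lagged-window formulation: a univariate price series is turned into fixed-length feature vectors and a one-step-ahead regressor is fitted. The random-walk series, window length, and hyperparameters are assumptions, not the TAIEX/HSI setup.

```python
# Sketch: one-step-ahead SVR forecasting from lagged price windows.
# Synthetic random-walk data stands in for real index prices.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(0, 1, 300)) + 100.0   # synthetic price series

window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

split = 250
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out tail: {rmse:.3f}")
```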

Keywords: ANFIS, fuzzy time series, stock forecasting, SVR

Procedia PDF Downloads 231
6591 Comparison of Fundamental Frequency Model and PWM Based Model for UPFC

Authors: S. A. Al-Qallaf, S. A. Al-Mawsawi, A. Haider

Abstract:

Among all FACTS devices, the unified power flow controller (UPFC) is considered to be the most versatile, owing to its capability to control all the transmission system parameters (impedance, voltage magnitude, and phase angle). With the growing interest in the UPFC, attention to developing a mathematical model has increased, and several models have been introduced in the literature for different types of power system studies. This paper presents a comparative study of two dynamic models of the UPFC, a fundamental frequency model and a PWM-based model, together with their proposed control strategies.

Keywords: FACTS, UPFC, dynamic modeling, PWM, fundamental frequency

Procedia PDF Downloads 337
6590 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height over large areas is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they do not carry height information. Hence, integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate prediction performance, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs are then compared with the two airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that for 2018 the mean absolute error (MAE) of the RFR model is 2.93 m and that of the CNN model is 1.71 m, while for 2021 the MAE of the RFR model is 3.35 m and that of the CNN model is 3.78 m. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
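
A hedged sketch of the RFR pathway: each 10 m pixel is a sample whose features are Sentinel-2 band reflectances and whose target is the LiDAR-derived canopy height. The band count, sample size, and value ranges below are invented for illustration.

```python
# Sketch: random forest regression of canopy height from per-pixel band values.
# Synthetic reflectances and heights stand in for Sentinel-2 / LiDAR rasters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_pixels, n_bands = 5000, 10                            # e.g. 10 Sentinel-2 bands
X = rng.uniform(0.0, 0.4, size=(n_pixels, n_bands))     # surface reflectance
height = 25 * X[:, 7] - 10 * X[:, 3] + rng.normal(0, 1.5, n_pixels)  # synthetic CHM

X_tr, X_te, y_tr, y_te = train_test_split(X, height, test_size=0.3, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mae = np.mean(np.abs(rfr.predict(X_te) - y_te))
print(f"MAE on held-out pixels: {mae:.2f} m")
```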

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 75
6589 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training. Complete and accurate labeled data, i.e., a 'gold standard', is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments demonstrating that models trained on imperfect data can (but do not always) exceed the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
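
The simulation idea generalizes as sketched below: corrupt a fraction of the training labels, train a linear model, and score it against clean test labels. The 40% flip rate echoes the abstract's extreme case; the data and model settings are assumptions.

```python
# Sketch: can a linear SVM trained on 40%-corrupted labels beat its own
# training-label accuracy on clean test labels? Data is synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n = 4000
X = rng.normal(size=(n, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(int)                  # clean, linearly separable labels

X_tr, y_tr, X_te, y_te = X[:3000], y[:3000].copy(), X[3000:], y[3000:]
flip = rng.random(len(y_tr)) < 0.40               # 40% random label noise
y_tr[flip] = 1 - y_tr[flip]

clf = LinearSVC().fit(X_tr, y_tr)
print(f"training-label accuracy ceiling: {1 - flip.mean():.2f}")
print(f"accuracy on clean test labels:  {clf.score(X_te, y_te):.2f}")
```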

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 180
6588 A Systematic Review of the Methodological and Reporting Quality of Case Series in Surgery

Authors: Riaz A. Agha, Alexander J. Fowler, Seon-Young Lee, Buket Gundogan, Katharine Whitehurst, Harkiran K. Sagoo, Kyung Jin Lee Jeong, Douglas G. Altman, Dennis P. Orgill

Abstract:

Introduction: Case series are an important and common study type. Currently, no guideline exists for reporting case series, and there is evidence of key data being missed from such reports. We propose to develop a reporting guideline for case series using a methodologically robust technique. The first step in this process is a systematic review of literature relevant to the reporting deficiencies of case series. Methods: A systematic review of methodological and reporting quality in surgical case series was performed. The electronic search strategy was developed by an information specialist and included MEDLINE, EMBASE, the Cochrane Methods Register, the Science Citation Index, and the Conference Proceedings Citation Index, from the start of indexing until 5th November 2014. Independent screening, eligibility assessments, and data extraction were performed. Included articles were analyzed for five areas of deficiency: failure to use standardized definitions; missing or selective data; transparency or incomplete reporting; whether alternate study designs were considered; and other issues. Results: The database search identified 2,205 records. Through the process of screening and eligibility assessments, 92 articles met the inclusion criteria. The frequencies of methodological and reporting issues identified were: failure to use standardized definitions (57%), missing or selective data (66%), transparency or incomplete reporting (70%), whether alternate study designs were considered (11%), and other issues (52%). Conclusion: The methodological and reporting quality of surgical case series needs improvement. Our data show that clear evidence-based guidelines for the conduct and reporting of case series may be useful to those planning or conducting them.

Keywords: case series, reporting quality, surgery, systematic review

Procedia PDF Downloads 349
6587 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are to be created. Research objectives: The aim is to generate the two latest Eras with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially small and medium-sized enterprises (SMEs), plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all the supply chain factors (i.e., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: The data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback from the first round, and so on. Preliminary findings: Both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy of Basic SC, Lean, Agile, or Leagile SC that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: The problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years. This research simplifies this by putting most definitions in a template and most models in the matrix of Era Six. This research is original in that the division of SC into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and heuristic learning in the AMM of Era Seven provide a synergy of tools that have not been combined before in the area of SC. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 380
6586 Deciphering the Action of Neuraminidase in Glioblastoma Models

Authors: Nathalie Baeza-Kallee, Raphaël Bergès, Victoria Hein, Stéphanie Cabaret, Jeremy Garcia, Abigaëlle Gros, Emeline Tabouret, Aurélie Tchoghandjian, Carole Colin, Dominique Figarella-Branger

Abstract:

Glioblastoma (GBM) contains cancer stem cells that are resistant to treatment. GBM cancer stem cells express glycolipids recognized by the A2B5 antibody. A2B5, induced by the enzyme ST8 alpha-N-acetyl-neuraminide alpha-2,8-sialyltransferase 3 (ST8Sia3), plays a crucial role in the proliferation, migration, clonogenicity, and tumorigenesis of GBM cancer stem cells. Our aim was to characterize the effects of neuraminidase, which removes A2B5, in order to target GBM cancer stem cells. To this end, we set up a GBM organotypic slice model; quantified A2B5 expression by flow cytometry in U87-MG, U87-ST8Sia3, and GBM cancer stem cell lines, treated or not with neuraminidase; performed RNAseq and DNA methylation profiling; and analyzed ganglioside expression by liquid chromatography-mass spectrometry in these cell lines, treated or not with neuraminidase. The results demonstrated that neuraminidase decreased A2B5 expression, tumor size, and regrowth after surgical removal in the organotypic slice model but did not induce a distinct transcriptomic or epigenetic signature in GBM cancer stem cell lines. RNAseq analysis revealed that OLIG2, CHI3L1, TIMP3, TNFAIP2, and TNFAIP6 transcripts were significantly overexpressed in U87-ST8Sia3 compared to U87-MG. RT-qPCR confirmed these results and demonstrated that neuraminidase decreased gene expression in GBM cancer stem cell lines. Moreover, neuraminidase drastically reduced ganglioside expression in GBM cancer stem cell lines. Neuraminidase, by its pleiotropic action, is an attractive local treatment against GBM.

Keywords: cancer stem cell, ganglioside, glioblastoma, targeted treatment

Procedia PDF Downloads 62
6585 Numerical Investigation of Two Turbulence Models for Predicting the Temperature Separation in Conical Vortex Tube

Authors: M. Guen

Abstract:

A three-dimensional numerical study is used to analyze the behavior of the flow inside a vortex tube. The vortex tube, or Ranque-Hilsch vortex tube, is a simple device capable of dividing the compressed air entering tangentially through the inlet nozzle into two streams with different temperatures, one warm and one cold; this phenomenon is known in the literature as temperature separation. The k-ω SST and k-ε turbulence models are used to predict the turbulent flow behavior inside the tube. The vortex tube is a commercial Exair 708 slpm (25 scfm) tube. The cold and hot exit areas are 30.2 and 95 mm², respectively. The vortex nozzle consists of 6 straight slots; the height and width of each slot are 0.97 mm and 1.41 mm, so the total area normal to the flow associated with the six nozzles is 8.15 mm². The present study focuses on a comparison between the two turbulence models, k-ω SST and k-ε, using a new configuration of vortex tube (a conical vortex tube). The performance curves of temperature separation versus cold outlet mass fraction were calculated and compared with the experimental and numerical studies of other researchers.
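
For reference, the performance quantities named above are conventionally defined as follows (standard vortex-tube notation, not equations quoted from the paper):

```latex
% Standard vortex-tube performance definitions: cold mass fraction and
% the cold and hot temperature separations.
\[
  \mu_c = \frac{\dot{m}_c}{\dot{m}_{in}}, \qquad
  \Delta T_c = T_{in} - T_c, \qquad
  \Delta T_h = T_h - T_{in},
\]
% where \dot{m}_c is the cold-outlet mass flow, \dot{m}_{in} the inlet mass
% flow, and T_{in}, T_c, T_h the inlet, cold-outlet and hot-outlet temperatures.
```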

Keywords: conical vortex tube, temperature separation, cold mass fraction, turbulence

Procedia PDF Downloads 239
6584 Kinetics, Equilibrium and Thermodynamics of the Adsorption of Triphenyltin onto NanoSiO₂/Fly Ash/Activated Carbon Composite

Authors: Olushola S. Ayanda, Olalekan S. Fatoki, Folahan A. Adekola, Bhekumusa J. Ximba, Cecilia O. Akintayo

Abstract:

In the present study, the kinetics, equilibrium, and thermodynamics of the adsorption of triphenyltin (TPT) from TPT-contaminated water onto a nanoSiO₂/fly ash/activated carbon composite were investigated in a batch adsorption system. Equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich (D-R) isotherm models. Pseudo first- and second-order, Elovich, and fractional power models were applied to the kinetic data, and, in order to understand the mechanism of adsorption, thermodynamic parameters such as ΔG°, ΔS°, and ΔH° were also calculated. The results showed very good compliance with the pseudo second-order equation, while the Freundlich and D-R models fit the experimental data. Approximately 99.999% of TPT was removed from an initial concentration of 100 mg/L at 80°C, a contact time of 60 min, pH 8, and a stirring speed of 200 rpm. Thus, the nanoSiO₂/fly ash/activated carbon composite could be used as an effective adsorbent for the removal of TPT from contaminated water and wastewater.
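
As an illustration of the kinetic analysis, the sketch below fits the pseudo second-order model in its common linearised form t/qt = 1/(k2*qe²) + t/qe; the uptake data are placeholders, not the study's TPT measurements.

```python
# Sketch: pseudo second-order kinetic fit via linear regression of t/qt on t.
# slope = 1/qe and intercept = 1/(k2*qe**2). Data values are illustrative.
import numpy as np

t = np.array([5.0, 10, 15, 20, 30, 45, 60])                 # contact time, min
qt = np.array([12.0, 18.5, 22.0, 24.0, 26.5, 28.0, 28.8])   # uptake, mg/g

slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                       # equilibrium uptake, mg/g
k2 = slope**2 / intercept              # rate constant, g/(mg*min)
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```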

Keywords: isotherm, kinetics, nanoSiO₂/fly ash/activated carbon composite, triphenyltin

Procedia PDF Downloads 284
6583 The Effectiveness of Laser In situ Keratomileusis for Correction Various Types of Refractive Anomalies

Authors: Yuliya Markava

Abstract:

Laser in situ keratomileusis (LASIK) is a widely used surgical procedure that has become an alternative for patients who are not satisfied with other correction methods. The high level of patient satisfaction with functional outcomes after refractive surgery confirms the high reliability and safety of LASIK and provides a significant improvement in quality of life and social adaptation. Purpose: To perform a clinical analysis of the results of correction performed on the SCHWIND AMARIS 500E excimer laser system in patients with different types of refractive anomalies. Materials and Methods: This was a retrospective analysis of 1581 operations (812 patients): 413 males (50.86%) and 399 females (49.14%), aged from 18 to 47 years, with different types of ametropia. All operations were performed on the SCHWIND AMARIS 500E excimer laser using the LASIK procedure. The corneal flap was formed with a mechanical SCHWIND microkeratome. Results: Analyzing the structure of refractive anomalies, the largest number of interventions was for myopia: 1505 eyes (95.2%), comprising low myopia, 706 eyes (44.7%); moderate myopia, 562 eyes (35.5%); high myopia, 217 eyes (13.7%); and supermyopia, 20 eyes (1.3%). Hyperopia accounted for 0.7% (11 eyes) and mixed astigmatism for 4.1% (65 eyes). The efficiency ranged from 80% (in patients with supermyopia) to 91.6% and 95.4% (in the groups with low and moderate myopia, respectively). Mean uncorrected visual acuity before and after the laser operation was, by group: low myopia, 0.18 (range 0.05 to 0.31) and 0.80 (range 0.60 to 1.0); moderate myopia, 0.08 (range 0.03 to 0.13) and 0.87 (range 0.74 to 1.0); high myopia, 0.05 (range 0.02 to 0.08) and 0.83 (range 0.66 to 1.0); supermyopia, 0.03 (range 0.02 to 0.04) and 0.59 (range 0.34 to 0.84); hyperopia, 0.27 (range 0.16 to 0.38) and 0.57 (range 0.27 to 0.87); mixed astigmatism, 0.35 (range 0.19 to 0.51) and 0.69 (range 0.44 to 0.94). In all cases, uncorrected visual acuity increased significantly after LASIK. The reoperation rate was 4.43%. Significance: The clinical results of refractive surgery on the SCHWIND AMARIS 500E excimer laser system in the correction of different ametropias are characterized by high efficiency.

Keywords: effectiveness of laser correction, LASIK, refractive anomalies, surgical treatment

Procedia PDF Downloads 242
6582 Challenges of Management of Subaortic Membrane in a Young Adult Patient: A Case Review and Literature Review

Authors: Talal Asif, Maya Kosinska, Lucas Georger, Krish Sardesai, Muhammad Shah Miran

Abstract:

This article presents a case review and literature review focused on the challenges of managing subaortic membranes (SAM) in young adult patients with mild aortic regurgitation (AR) or aortic stenosis (AS). The study aims to discuss the diagnosis of SAM, imaging studies used for assessment, management strategies in young patients, the risk of valvular damage, and the controversy surrounding prophylactic resection in mild AR. The management of SAM in adults poses challenges due to limited treatment options and potential complications, necessitating further investigation into the progression of AS and AR in asymptomatic SAM patients. The case presentation describes a 40-year-old male with muscular dystrophy who presented with symptoms and was diagnosed with SAM. Various imaging techniques, including CT chest, transthoracic echocardiogram (TTE), and transesophageal echocardiogram (TEE), were used to confirm the presence and severity of SAM. Based on the patient's clinical profile and the absence of surgical indications, medical therapy was initiated, and regular outpatient follow-up was recommended to monitor disease progression. The discussion highlights the challenges in diagnosing SAM, the importance of imaging studies, and the potential complications associated with SAM in young patients. The article also explores the management options for SAM, emphasizing surgical resection as the definitive treatment while acknowledging the limited success rates of alternative approaches. Close monitoring and prompt intervention for complications are crucial in the management of SAM. The concluding statement emphasizes the need for further research to explore alternative treatments for SAM in young patients.

Keywords: subaortic membrane, management, case report, literature review, aortic regurgitation, aortic stenosis, left ventricular outflow obstruction, guidelines, heart failure

Procedia PDF Downloads 82
6581 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model

Authors: Aminah Muchdar, Nuraeni, Eddy

Abstract:

The aims of the research are: (1) to verify the CropSyst crop model against experimental field data for soybean and (2) to predict the planting time and potential yield of soybean using the CropSyst model. The research was divided into several stages: (1) a calibration stage, conducted in the field from June until September 2015, and (2) a model application stage, in which the data obtained from the field calibration were entered into the CropSyst model. The data required by the model are climate data, soil data, and crop genetic data. The agreement between the field observations and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, showing that the CropSyst model performs well. The calculated RRMSE of 1.922% shows that the prediction error of the simulation relative to the field results is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybeans, mainly on rain-fed land, is at the end of the rainy season; in this study, the first planting time (June 2, 2015) gave the highest production because some rain still fell at that time. The Tanggamus variety is more resistant to delayed planting, as its percentage decrease in yield per decade is lower than the average of all varieties.
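
For clarity, the two goodness-of-fit statistics quoted above can be computed from paired observed and simulated values as sketched below; the arrays are placeholders, and the EF formula shown is the common Nash-Sutcliffe-style efficiency index, which is an assumption about the exact definition the authors used.

```python
# Sketch: Efficiency Index (EF, Nash-Sutcliffe form) and RRMSE from paired
# observed/simulated values. Data values are illustrative placeholders.
import numpy as np

obs = np.array([2.10, 2.35, 1.90, 2.60, 2.45])   # observed yields (illustrative)
sim = np.array([2.05, 2.40, 1.95, 2.55, 2.50])   # CropSyst-simulated yields

ef = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
rrmse = np.sqrt(np.mean((obs - sim) ** 2)) / obs.mean() * 100
print(f"EF = {ef:.3f}, RRMSE = {rrmse:.2f}%")
```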

Keywords: soybean, Cropsyst, calibration, efficiency Index, RRMSE

Procedia PDF Downloads 168
6580 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models

Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti

Abstract:

This work is the first step in a rather wide research activity, in collaboration with the Euro-Mediterranean Center on Climate Change, aimed at introducing scalable approaches in Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model, previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
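
For orientation, here is a minimal serial sketch of the 3DVar cost function that the DD-DA model decomposes, J(x) = (x-xb)ᵀB⁻¹(x-xb) + (Hx-y)ᵀR⁻¹(Hx-y); the sizes, covariances, and observation operator are toy choices, and the paper's actual contribution, partitioning this problem across subdomains and GPUs, is not reproduced here.

```python
# Sketch: serial 3DVar analysis by direct minimisation of the cost function.
# All matrices and dimensions are toy assumptions.
import numpy as np
from scipy.optimize import minimize

n, m = 50, 20
rng = np.random.default_rng(0)
xb = rng.normal(size=n)                    # background (first-guess) state
H = rng.normal(size=(m, n)) / np.sqrt(n)   # observation operator
y = H @ (xb + 0.5 * rng.normal(size=n))    # synthetic observations
Binv = np.eye(n) / 0.5**2                  # inverse background covariance
Rinv = np.eye(m) / 0.1**2                  # inverse observation covariance

def cost(x):
    db, do = x - xb, H @ x - y
    return db @ Binv @ db + do @ Rinv @ do

xa = minimize(cost, xb, method="L-BFGS-B").x   # analysis state
print(f"J(xb) = {cost(xb):.2f}  ->  J(xa) = {cost(xa):.2f}")
```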

Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm

Procedia PDF Downloads 400
6579 Kalman Filter for Bilinear Systems with Application

Authors: Abdullah E. Al-Mazrooei

Abstract:

In this paper, we present a new kind of bilinear system in state-space form, whose evolution depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter suitable for working with the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
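
To illustrate the kind of filtering involved, here is a hedged sketch of an extended-Kalman-style filter for a simple bilinear system whose transition includes the elementwise product of the state with itself; the model, noise levels, and linearisation are illustrative assumptions, not the paper's generalised filter.

```python
# Sketch: EKF-style filtering for x_{k+1} = A x_k + d * (x_k * x_k) + w_k,
# y_k = H x_k + v_k. Model and noise values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.9, 0.05], [0.0, 0.85]])
d = np.array([0.01, -0.02])              # strength of the quadratic (bilinear) term
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])

f = lambda x: A @ x + d * x * x          # state transition
F = lambda x: A + np.diag(2 * d * x)     # Jacobian of f at x

x_true, x_est, P = np.array([1.0, 0.5]), np.array([0.8, 0.3]), np.eye(2)
for _ in range(50):
    x_true = f(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    Fk = F(x_est)                        # linearise at current estimate
    x_est, P = f(x_est), Fk @ P @ Fk.T + Q           # predict
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x_est = x_est + K @ (y - H @ x_est)              # update
    P = (np.eye(2) - K @ H) @ P
print(f"true state: {x_true}, estimate: {x_est}")
```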

Keywords: bilinear systems, state space model, Kalman filter, application, models

Procedia PDF Downloads 418
6578 Hemodynamics of a Cerebral Aneurysm under Rest and Exercise Conditions

Authors: Shivam Patel, Abdullah Y. Usmani

Abstract:

Physiological flow under rest and exercise conditions in patient-specific cerebral aneurysm models is numerically investigated. A finite-volume based code with BiCGStab as the linear equation solver is used to simulate the unsteady three-dimensional flow field through the incompressible Navier-Stokes equations. Flow characteristics are first established in a healthy cerebral artery for both physiological conditions. The effect of a saccular aneurysm on cerebral hemodynamics is then explored through a comparative analysis of the velocity distribution, flow patterns, wall pressure, and wall shear stress (WSS) against the reference configuration. The efficacy of coil embolization as a potential strategy for surgical intervention is also examined by modelling the coil as a homogeneous and isotropic porous medium to which the extended Darcy's law, including the Forchheimer and Brinkman terms, applies. The Carreau-Yasuda non-Newtonian blood model is incorporated to capture the shear-thinning behavior of blood. Rest and exercise conditions correspond to normotensive and hypertensive blood pressures, respectively. The results indicate that fluid impingement on the outer wall of the arterial bend leads to abnormality in the distribution of wall pressure and WSS, which is expected to be the primary cause of the localized aneurysm. Exercise correlates with elevated flow velocity, vortex strength, wall pressure, and WSS inside the aneurysm sac. With the insertion of coils in the aneurysm cavity, the flow bypasses the dilatation, leading to a decline in flow velocities and WSS. Particle residence time is observed to be lower under exercise conditions, a factor favorable for arresting plaque deposition and combating atherosclerosis.
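
A small sketch of the Carreau-Yasuda law mentioned above; the parameter values are commonly cited literature values for blood and are not necessarily those used in this study.

```python
# Sketch: Carreau-Yasuda apparent viscosity,
# mu(g) = mu_inf + (mu_0 - mu_inf) * (1 + (lam*g)**a)**((n-1)/a).
# Parameter values are common literature values for blood (an assumption).
import numpy as np

def carreau_yasuda(gamma_dot, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568, a=2.0):
    """Apparent viscosity (Pa*s) at shear rate gamma_dot (1/s)."""
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * gamma_dot) ** a) ** ((n - 1) / a)

for g in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(f"shear rate {g:7.1f} 1/s -> viscosity {carreau_yasuda(g):.5f} Pa*s")
```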

Keywords: 3D FVM, Cerebral aneurysm, hypertension, coil embolization, non-Newtonian fluid

Procedia PDF Downloads 221
6577 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area

Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos

Abstract:

We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of Oregon city with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method, based on matching Eulerian spatial information to experimental Eulerian time series measurements of water height. The water movement is followed in time, which enables the pressure distribution on every surface of each building to be followed in a temporal manner. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand how well reduced models, including the 2D shallow water model and reduced 1D models, predict water heights, flow velocity, and forces; this matters because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside, Oregon experiment.

Keywords: computational fluid dynamics, extreme events, loading, tsunami

Procedia PDF Downloads 104
6576 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia

Authors: The Danh Phan

Abstract:

House price forecasting is a main topic in real estate market research. Effective house price prediction models not only allow home buyers and real estate agents to make better data-driven decisions but may also benefit the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models that could be deployed as an application for house buyers and sellers. Data analytics show a high discrepancy between house prices in the most expensive and the most affordable suburbs in the city of Melbourne. In addition, experiments demonstrate that the combination of Stepwise feature selection and a Support Vector Machine (SVM), evaluated by the Mean Squared Error (MSE), consistently outperforms other models in terms of prediction accuracy.
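
A hedged sketch of the "Stepwise + SVM" pipeline: greedy forward feature selection followed by an SVR fit scored by MSE. The synthetic data and hyperparameters are assumptions, not the Melbourne sales records.

```python
# Sketch: forward stepwise feature selection wrapped around support vector
# regression, scored by MSE. Data is synthetic, not Melbourne transactions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=800, n_features=12, n_informative=5,
                       noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svr = SVR(kernel="rbf", C=100.0)
selector = SequentialFeatureSelector(svr, n_features_to_select=5,
                                     direction="forward").fit(X_tr, y_tr)
svr.fit(selector.transform(X_tr), y_tr)
mse = np.mean((svr.predict(selector.transform(X_te)) - y_te) ** 2)
print(f"selected features: {np.flatnonzero(selector.get_support())}, MSE: {mse:.1f}")
```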

Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise

Procedia PDF Downloads 210
6575 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
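
The fixed-length look-back window works as sketched below: a multivariate hourly series is converted into supervised (window, future target) pairs. The window and horizon values mirror the discussion above; the array itself is a placeholder for the Beijing dataset.

```python
# Sketch: build (look-back window, future target) pairs from a multivariate
# hourly series. The series here is random noise standing in for real data.
import numpy as np

def make_windows(series, look_back, horizon):
    """series: (T, n_features); returns X of shape (N, look_back, n_features)
    and y of shape (N, n_features) for the point `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back + horizon - 1])
    return np.array(X), np.array(y)

hourly = np.random.default_rng(0).normal(size=(1000, 6))  # e.g. 6 pollutant channels
X, y = make_windows(hourly, look_back=24, horizon=1)      # one day back, 1 h ahead
print(X.shape, y.shape)   # (976, 24, 6) (976, 6)
```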

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 143
6574 Generalized Hyperbolic Functions: Exponential-Type Quantum Interactions

Authors: Jose Juan Peña, J. Morales, J. García-Ravelo

Abstract:

In the search for potential models applied in the theoretical treatment of diatomic molecules, some have been constructed using standard hyperbolic functions as well as the so-called q-deformed hyperbolic functions (sc q-dhf), which displace and modify the shape of the potential under study. In order to transcend the scope of hyperbolic functions, this work presents a kind of generalized q-deformed hyperbolic functions (g q-dhf). By a suitable transformation through the deformation parameter q, it is shown that these g q-dhf can be expressed in terms of their corresponding standard ones and can also be reduced to the sc q-dhf. As a useful application of the proposed approach, and considering a class of exactly solvable multi-parameter exponential-type potentials, some new q-deformed quantum interaction models are presented that can serve as interesting alternatives in quantum physics and quantum states. Furthermore, because quantum potential models are conditioned on the q-dependence of the parameters that characterize the exponential-type potentials, it is shown that many specific q-deformed potentials are obtained as particular cases of the proposal.
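
For orientation, the standard q-deformed hyperbolic functions the abstract builds on are usually defined as follows (the paper's generalised g q-dhf are not reproduced here):

```latex
% Standard (Arai-type) q-deformed hyperbolic functions; the ordinary
% hyperbolic functions are recovered at q = 1.
\[
  \sinh_q(x) = \frac{e^{x} - q\,e^{-x}}{2}, \qquad
  \cosh_q(x) = \frac{e^{x} + q\,e^{-x}}{2}, \qquad
  \tanh_q(x) = \frac{\sinh_q(x)}{\cosh_q(x)},
\]
% with the deformed identity \cosh_q^2(x) - \sinh_q^2(x) = q.
```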

Keywords: diatomic molecules, exponential-type potentials, hyperbolic functions, q-deformed potentials

Procedia PDF Downloads 171
6573 Model-Based Process Development for the Comparison of a Radial Riveting and Roller Burnishing Process in Mechanical Joining Technology

Authors: Tobias Beyer, Christoph Friedrich

Abstract:

Modern simulation methodology using finite element models is nowadays a recognized tool for product design and optimization. Likewise, manufacturing process design is increasingly becoming a focus of simulation methodology in order to deliver reliable results based on fewer real-life tests. In this article, two process simulations, radial riveting and roller burnishing, used for the mechanical joining of components are explained. In the first step, the required boundary conditions are developed and implemented in the respective simulation models. This is followed by process space validation. With the help of the validated models, the interdependencies of the input parameters are investigated and evaluated by means of sensitivity analyses. Limit case investigations are carried out and evaluated with the aid of the process simulations. Likewise, a comparison of the two joining methods with each other becomes possible.

Keywords: FEM, model-based process development, process simulation, radial riveting, roller burnishing, sensitivity analysis

Procedia PDF Downloads 95
6572 A Study of Two Disease Models: With and Without Incubation Period

Authors: H. C. Chinwenyi, H. D. Ibrahim, J. O. Adekunle

Abstract:

The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models, one with an incubation period and another without, were studied, using a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and endemic equilibrium states were carried out for both models. The fourth-order Runge-Kutta method was used to solve both models numerically, and a computer program was developed in MATLAB to run the numerical experiments. From the results, we show that the endemic equilibrium state of the model with an incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without an incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of the models with and without an incubation period are locally asymptotically stable. Furthermore, numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall infected population for the model with an incubation period is higher than for the model without one. We also established that as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with an incubation period decrease and are always less than those for the model without an incubation period.
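
The study solved its models in MATLAB; as a hedged illustration of the same numerical scheme, here is a minimal Python sketch of fourth-order Runge-Kutta integration applied to an SIR system (no incubation) and an SEIR system (incubation rate sigma). The parameter values are illustrative, not the NCDC-fitted ones.

```python
# Sketch: RK4 integration of SIR (no incubation) vs SEIR (incubation 1/sigma).
# beta, gamma, sigma values are illustrative placeholders.
import numpy as np

beta, gamma, sigma = 0.4, 0.1, 0.2   # transmission, recovery, incubation rates

def sir(t, y):
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

def seir(t, y):
    S, E, I, R = y
    return np.array([-beta * S * I, beta * S * I - sigma * E,
                     sigma * E - gamma * I, gamma * I])

def rk4(f, y0, t0, t1, h):
    """Classical fourth-order Runge-Kutta integration from t0 to t1."""
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t1:
        k1 = f(t, y); k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2); k4 = f(t + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

print("SIR  at t=100:", rk4(sir,  [0.99, 0.01, 0.0],      0, 100, 0.1))
print("SEIR at t=100:", rk4(seir, [0.99, 0.0, 0.01, 0.0], 0, 100, 0.1))
```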

Keywords: asymptotic stability, Hartman-Grobman stability criterion, incubation period, Routh-Hurwitz criterion, Runge-Kutta method

Procedia PDF Downloads 162
6571 Demand for Domestic Marine and Coastal Tourism and Day Trips on an Island Nation

Authors: John Deely, Stephen Hynes, Mary Cawley, Sarah Hogan

Abstract:

Domestic marine and coastal tourism has increased in importance in recent years due to the impacts of international travel, environmental concerns, associated health benefits, and COVID-19 related travel restrictions. Consequently, this paper conceptualizes domestic marine and coastal tourism within an economic framework. Two logit models examine the factors that influence participation in the coastal day trip and overnight stay markets, respectively. Two truncated travel cost models are employed to explore trip duration, one analyzing the number of day trips taken and the other examining the number of nights spent in marine and coastal areas. Although a range of variables predicts participation, no single variable had a significant and consistent effect on every model. A division in access to domestic marine and coastal tourism is also observed, based on variation in household income. The results also indicate a vibrant day trip market and large consumer surpluses.

Keywords: domestic marine and coastal tourism, day tripper, participation models, truncated travel cost model

Procedia PDF Downloads 122
6570 AI-Driven Forecasting Models for Anticipating Oil Market Trends and Demand

Authors: Gaurav Kumar Sinha

Abstract:

The volatility of the oil market, influenced by geopolitical, economic, and environmental factors, presents significant challenges for stakeholders in predicting trends and demand. This article explores the application of artificial intelligence (AI) in developing robust forecasting models to anticipate changes in the oil market more accurately. We delve into various AI techniques, including machine learning, deep learning, and time series analysis, that have been adapted to analyze historical data and current market conditions to forecast future trends. The study evaluates the effectiveness of these models in capturing complex patterns and dependencies in market data, which traditional forecasting methods often miss. Additionally, the paper discusses the integration of external variables such as political events, economic policies, and technological advancements that influence oil prices and demand. By leveraging AI, stakeholders can achieve a more nuanced understanding of market dynamics, enabling better strategic planning and risk management. The article concludes with a discussion on the potential of AI-driven models in enhancing the predictive accuracy of oil market forecasts and their implications for global economic planning and strategic resource allocation.

Keywords: AI forecasting, oil market trends, machine learning, deep learning, time series analysis, predictive analytics, economic factors, geopolitical influence, technological advancements, strategic planning

Procedia PDF Downloads 21
6569 Kinetic Modeling of Transesterification of Triacetin Using Synthesized Ion Exchange Resin (SIERs)

Authors: Hafizuddin W. Yussof, Syamsutajri S. Bahri, Adam P. Harvey

Abstract:

Strong anion exchange resins with QN+OH- functionality have the potential to be developed and employed as heterogeneous catalysts for transesterification, as they are chemically stable against leaching of the functional group. Nine different SIERs (SIER1-9) with QN+OH- were prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene (VBC-DVB) copolymers in the presence of n-heptane (a pore-forming agent). The amine group was successfully grafted onto the polymeric resin beads through functionalization with trimethylamine. These SIERs were then used as catalysts for the transesterification of triacetin with methanol. Sets of differential equations representing the Langmuir-Hinshelwood-Hougen-Watson (LHHW) and Eley-Rideal (ER) models of the transesterification reaction were developed, and these kinetic models were fitted to the experimental data. Overall, the synthesized ion exchange resin-catalyzed reaction was better described by the Eley-Rideal model than by the LHHW model, with sums of squared errors (SSE) of 0.742 and 0.996, respectively.
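
As a sketch of how such candidate mechanisms are typically scored, the code below integrates a generic Eley-Rideal-type rate law and computes its SSE against batch concentration data; the rate-law form, constants, and data are textbook-style assumptions, not the paper's fitted model.

```python
# Sketch: integrate a generic Eley-Rideal rate law (adsorbed methanol reacting
# with fluid-phase triacetin) and score it by SSE. All numbers are assumed.
import numpy as np
from scipy.integrate import solve_ivp

k, KA = 0.05, 2.0                      # rate and adsorption constants (assumed)

def er_rate(t, c):
    cA, cB = c                         # cA: methanol, cB: triacetin
    r = k * KA * cA * cB / (1 + KA * cA)   # Eley-Rideal form
    return [-3 * r, -r]                # 3:1 methanol:triacetin stoichiometry

t_data = np.array([0, 10, 20, 40, 60])               # min
cB_data = np.array([1.00, 0.78, 0.63, 0.45, 0.34])   # mol/L (illustrative)

sol = solve_ivp(er_rate, (0, 60), [6.0, 1.0], t_eval=t_data)
sse = np.sum((sol.y[1] - cB_data) ** 2)
print(f"SSE of candidate ER model: {sse:.4f}")
```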

Keywords: anion exchange resin, Eley-Rideal, Langmuir-Hinshelwood-Hougen-Watson, transesterification

Procedia PDF Downloads 349
6568 Simulation of the Large Hadrons Collisions Using Monte Carlo Tools

Authors: E. Al Daoud

Abstract:

In many cases, theoretical treatments are available for models for which there is no perfect physical realization. In this situation, the only possible test of an approximate theoretical solution is comparison with data generated from a computer simulation. In this paper, Monte Carlo tools are used to study and compare elementary particle models. All the experiments are implemented using 10,000 events, and the simulated energy is 13 TeV. The means and distributions of several variables are calculated for each model using MadAnalysis 5. Anomalies in the results can be seen in the muon masses of the minimal supersymmetric standard model and the two-Higgs-doublet model.

Keywords: Feynman rules, hadrons, Lagrangian, Monte Carlo, simulation

Procedia PDF Downloads 307
6567 Using Machine Learning to Predict Answers to Big-Five Personality Questions

Authors: Aadityaa Singla

Abstract:

The big five personality traits are openness, conscientiousness, extraversion, agreeableness, and neuroticism. To gain insight into their personality, many people turn to these categories, each of which has a different meaning and set of characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this remains a rather novel development. Various AI classification models can accurately predict the answer to a personality question from ten input questions. This contrasts with the roughly one hundred questions that people normally have to answer to gain a complete picture of their five personality traits. To approach this problem, various AI classification models were applied to a dataset to predict what a user would answer, and each model's prediction was compared to the actual response. Normally, there are five answer choices (a 20% chance of a correct guess), and the models exceed that value to different degrees, proving their significance. The MLP classifier, decision tree, linear model, and K-nearest neighbors obtained test accuracies of 86.643%, 54.625%, 47.875%, and 52.125%, respectively. These approaches show that there is potential in the future for more nuanced predictions regarding personality.
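
A minimal sketch of the prediction setup, under the assumption of Likert-type (1-5) answers: predict a held-out eleventh answer from the first ten and compare against the 20% chance baseline. The synthetic response data is invented for illustration.

```python
# Sketch: predict one 1-5 questionnaire answer from ten others with an MLP,
# versus the 20% chance baseline. Responses are synthetic, driven by one
# latent trait so the answers are correlated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
trait = rng.normal(size=n)                          # latent trait driving answers
answers = np.clip(np.round(3 + trait[:, None] + rng.normal(0, 0.8, (n, 11))),
                  1, 5).astype(int)                 # 11 correlated 1-5 answers
X, y = answers[:, :10], answers[:, 10]              # predict the 11th from the first 10

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f} vs. 0.200 chance baseline")
```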

Keywords: machine learning, personality, big five personality traits, cognitive science

Procedia PDF Downloads 138