Search results for: models error comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12664

10564 Development of Interaction Factors Charts for Piled Raft Foundation

Authors: Abdelazim Makki Ibrahim, Esamaldeen Ali

Abstract:

This study aims to analyse the load-settlement behavior and predict the bearing capacity of piled raft foundations. A series of finite element models with different foundation configurations and stiffnesses were established. Numerical modeling is used to study the behavior of the piled raft foundation because of the complexity of the pile, raft, and soil interaction and the lack of a reliable analytical method that can predict the behavior of the piled raft foundation system. Simple analytical models are developed to predict the average settlement and the load sharing between the piles and the raft in a piled raft foundation system. A simple example demonstrating the application of these charts is included.

Keywords: finite element, pile-raft foundation, method, PLAXIS software, settlement

Procedia PDF Downloads 557
10563 Determining the Effects of Wind-Aided Midge Movement on the Probability of Coexistence of Multiple Bluetongue Virus Serotypes in Patchy Environments

Authors: Francis Mugabi, Kevin Duffy, Joseph J. Y. T Mugisha, Obiora Collins

Abstract:

Bluetongue virus (BTV) has 27 serotypes, with some of them coexisting in patchy (different) environments, which makes its control difficult. Wind-aided midge movement is a known mechanism in the spread of BTV. However, its effects on the probability of coexistence of multiple BTV serotypes are not clear. Deterministic and stochastic models for r BTV serotypes in n discrete patches connected by midge and/or cattle movement are formulated and analyzed. For the deterministic model without midge and cattle movement, using the comparison principle, it is shown that if the patch reproduction numbers satisfy R^j_i0 < 1 for all i = 1, 2, ..., n and j = 1, 2, ..., r, all serotypes go extinct. If R^j_i0 > 1, competitive exclusion takes place. Using numerical simulations, it is shown that when the n patches are connected by midge movement, coexistence takes place. To account for demographic and movement variability, the deterministic model is transformed into a continuous-time Markov chain stochastic model. Utilizing a multitype branching process, it is shown that midge movement can have a large effect on the probability of coexistence of multiple BTV serotypes. The probability of coexistence can be brought to zero when control interventions that directly kill the adult midges are applied. These results indicate the significance of wind-aided midge movement and vector control interventions for the coexistence and control of multiple BTV serotypes in patchy environments.
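
As an illustration of the branching-process step only (not the authors' model), the sketch below estimates extinction and coexistence probabilities for a two-type process by fixed-point iteration of the offspring probability generating functions; the Poisson offspring assumption and the mean matrix values are hypothetical placeholders.

```python
import numpy as np

# Hypothetical mean offspring matrix M[i, j]: expected number of type-j "offspring"
# produced by a type-i individual (e.g., two serotype lineages); values are illustrative.
M = np.array([[1.4, 0.2],
              [0.3, 1.2]])

def pgf(s, M):
    """Multitype PGF assuming independent Poisson offspring counts per type."""
    # f_i(s) = exp(sum_j M[i, j] * (s_j - 1))
    return np.exp(M @ (s - 1.0))

# Extinction probabilities q solve q = f(q); iterate from 0 to reach the minimal fixed point.
q = np.zeros(M.shape[0])
for _ in range(10_000):
    q_new = pgf(q, M)
    if np.max(np.abs(q_new - q)) < 1e-12:
        q = q_new
        break
    q = q_new

# Simple proxy for coexistence: both lineages survive, treated as independent events.
p_coexist = np.prod(1.0 - q)
print("extinction probabilities:", q, " approx. coexistence probability:", p_coexist)
```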

Keywords: bluetongue virus, coexistence, multiple serotypes, midge movement, branching process

Procedia PDF Downloads 150
10562 Modelling Export Dynamics in the CSEE Countries Using GVAR Model

Authors: S. Jakšić, B. Žmuk

Abstract:

The paper investigates the key factors of export dynamics for a set of Central and Southeast European (CSEE) countries in the context of the recent economic and financial crisis. In order to model the export dynamics, a Global Vector Autoregressive (GVAR) model is defined. As opposed to models which treat each country separately, the GVAR combines all country models in a global model, which enables obtaining important information on spill-over effects in the context of globalization and rising international linkages. The results of the study indicate that for most of the CSEE countries, exports are mainly driven by domestic shocks, both in the short run and in the long run. This study is the first application of the GVAR model to the export dynamics of the CSEE countries, and therefore its results present an important empirical contribution.

Keywords: export, GFEVD, global VAR, international trade, weak exogeneity

Procedia PDF Downloads 301
10561 Effect of a Synthetic Platinum-Based Complex on Autophagy Induction in Leydig TM3 Cells

Authors: Ezzati Givi M., Hoveizi E., Nezhad Marani N.

Abstract:

Platinum-based anticancer therapeutics are the most widely used drugs in clinical chemotherapy but have major limitations and various side effects in clinical applications. Gonadotoxicity and sterility are among the most common complications for cancer survivors and appear to be drug-specific and dose-related. Therefore, many efforts have been dedicated to discovering new structures of platinum-based anticancer agents with an improved therapeutic index and fewer side effects. In this regard, new Pt(II)-phosphane complexes containing heterocyclic thionate ligands (PCTL) have been synthesized, which show more potent antitumor activities in comparison to cisplatin. Cisplatin, the leading metal-based antitumor drug in the field, induces testicular toxicity in Leydig and Sertoli cells, leading to serious side effects such as azoospermia and infertility. Therefore, in the present study, we aimed to investigate the cytotoxic effect of PCTL on mouse TM3 Leydig cells, with particular emphasis on the role of autophagy, in comparison to cisplatin. An MTT assay was performed to evaluate the IC50 of PCTL and to analyze TM3 Leydig cell viability. Cell morphology was evaluated with an inverted microscope, and morphological changes such as nuclear swelling and autophagic vacuole formation were assessed by DAPI and MDC staining. Testosterone production in the culture medium was measured using an ELISA kit. Finally, the expression of the autophagy-related genes Atg5, Beclin1 and p62 was analyzed by qPCR. Based on the MTT results, the IC50 value of PCTL was 50 μM in TM3 cells, and the cytotoxic effects were dose- and time-dependent. Morphological changes investigated by inverted microscopy, DAPI, and MDC staining showed that the cytotoxic concentration of PCTL was significantly higher than that of cisplatin in the treated TM3 Leydig cells. The qPCR results showed a lack of expression of the p62, Atg5 and Beclin1 genes in TM3 cells treated with PCTL in comparison to the cisplatin and control groups. It should be noted that a 25 μM PCTL concentration was associated with increased testosterone production and secretion in TM3 cells, which requires further study to explain the possible causes and the molecular mechanisms involved. The results of the study showed that PCTL had less lethal effects on TM3 cells in comparison to cisplatin and probably did not induce autophagy in TM3 cells.
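
For readers unfamiliar with how an IC50 is extracted from MTT viability data, here is a minimal sketch using a four-parameter logistic fit; the concentrations and viabilities below are synthetic placeholders, not the PCTL measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic dose-response data (concentration in uM, viability in % of control).
conc = np.array([1, 5, 10, 25, 50, 100, 200], dtype=float)
viab = np.array([98, 92, 85, 68, 50, 30, 15], dtype=float)

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

p0 = [0.0, 100.0, 50.0, 1.0]                      # initial guesses for the fit
params, _ = curve_fit(four_pl, conc, viab, p0=p0, maxfev=10_000)
bottom, top, ic50, hill = params
print(f"estimated IC50 = {ic50:.1f} uM, Hill slope = {hill:.2f}")
```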

Keywords: platinum-based anticancer agents, cisplatin, Leydig TM3 cells, autophagy

Procedia PDF Downloads 128
10560 Numerical Solution of Steady Magnetohydrodynamic Boundary Layer Flow Due to Gyrotactic Microorganism for Williamson Nanofluid over Stretched Surface in the Presence of Exponential Internal Heat Generation

Authors: M. A. Talha, M. Osman Gani, M. Ferdows

Abstract:

This paper focuses on the study of a two-dimensional magnetohydrodynamic (MHD) steady incompressible viscous Williamson nanofluid flow with exponential internal heat generation containing gyrotactic microorganisms over a stretching sheet. The governing equations and auxiliary conditions are reduced to a set of non-linear coupled differential equations with the appropriate boundary conditions using a similarity transformation. The transformed equations are solved numerically with the spectral relaxation method. The influences of various parameters such as the Williamson parameter γ, power constant λ, Prandtl number Pr, magnetic field parameter M, Peclet number Pe, Lewis number Le, bioconvection Lewis number Lb, Brownian motion parameter Nb, thermophoresis parameter Nt, and bioconvection constant σ are studied to obtain the momentum, heat, mass and microorganism distributions. Momentum, heat, mass and gyrotactic microorganism profiles are explored through graphs and tables. We computed the heat transfer rate, mass flux rate and the density number of motile microorganisms near the surface. Our numerical results are in good agreement with existing calculations. The residual error of the obtained solutions is determined in order to see the convergence rate against iteration. Faster convergence is achieved when internal heat generation is absent. The magnetic parameter M decreases the momentum boundary layer thickness but increases the thermal boundary layer thickness. It is apparent that the bioconvection Lewis number and the bioconvection parameter have a pronounced effect on the microorganism boundary layer. Increasing the Brownian motion parameter and the Lewis number decreases the thermal boundary layer thickness. Furthermore, the magnetic field parameter and the thermophoresis parameter have a noticeable effect on the concentration profiles.

Keywords: convection flow, similarity, numerical analysis, spectral method, Williamson nanofluid, internal heat generation

Procedia PDF Downloads 183
10559 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the function exertion of proteins. The elastic network model (ENM), a harmonic potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models. In recent years, many ENM variants have been proposed. Here, we propose a small but effective modification (denoted as the modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available in http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, it is noted that with the weights of the multiscale Kirchhoff/Hessian matrices modified, interestingly, the modified mGNM/mANM still has a much better performance than the corresponding traditional ENM and pfENM models. As to the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs except for the mENM. Thus, pfANM and the modified mANM, especially the former, have an excellent performance in the dynamical cross-correlation calculation. Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for the residue pairs that are nearly the largest distances apart, which may be due to the anisotropy consideration in the ANMs. Furthermore, encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on the analyses, the modified mENM is a promising method for capturing multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers to utilize the model to explore protein dynamics.
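
To make the ENM machinery concrete, the following sketch builds a conventional Gaussian network model (not the modified mENM of the paper) from Cα coordinates and predicts B-factors from the diagonal of the Kirchhoff pseudo-inverse; the coordinates and cutoff are placeholders.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Conventional GNM: Kirchhoff matrix from Calpha contacts within `cutoff` (angstrom);
    B-factors are proportional to the diagonal of its Moore-Penrose pseudo-inverse."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(dist < cutoff).astype(float)      # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = contact degree
    k_inv = np.linalg.pinv(kirchhoff)                # pseudo-inverse drops the zero mode
    return np.diag(k_inv)                            # B_i proportional to [K^-1]_ii

# Usage with random placeholder coordinates (replace with real Calpha positions):
coords = np.random.rand(100, 3) * 30.0
b_pred = gnm_bfactors(coords)
print(b_pred[:5])
```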

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 121
10558 Efficient Neural and Fuzzy Models for the Identification of Dynamical Systems

Authors: Aouiche Abdelaziz, Soudani Mouhamed Salah, Aouiche El Moundhe

Abstract:

The present paper addresses the use of Artificial Neural Networks (ANNs) and Fuzzy Inference Systems (FISs) for the identification and control of dynamical systems with some degree of uncertainty. Because ANNs and FISs have an inherent ability to approximate functions and to adapt to changes in inputs and parameters, they can be used to control systems that are too complex for linear controllers. In this work, we show how ANNs and FISs can be combined to form networks that learn from external data. We then present input structures that can be used along with ANNs and FISs to model non-linear systems. Four systems were used to test the identification and control of the proposed structures. The results show that the ANNs and FISs used, trained with the back-propagation algorithm, were efficient in modeling and controlling the non-linear plants.
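
Input structures for identification of this kind typically feed lagged inputs and outputs (a NARX-style regressor) to the learner; the sketch below illustrates the idea on a hypothetical nonlinear plant with a small neural network, and is not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical nonlinear plant: y[k] = 0.6*y[k-1] + u[k-1] / (1 + y[k-2]**2)
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 0.6 * y[k - 1] + u[k - 1] / (1.0 + y[k - 2] ** 2)

# NARX-style regressor: [y[k-1], y[k-2], u[k-1], u[k-2]] -> y[k]
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
t = y[2:]

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X[:1500], t[:1500])
print("one-step-ahead R^2 on held-out data:", model.score(X[1500:], t[1500:]))
```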

Keywords: non-linear systems, fuzzy set Models, neural network, control law

Procedia PDF Downloads 212
10557 Orbit Determination from Two Position Vectors Using Finite Difference Method

Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.

Abstract:

An unusual approach is developed to determine the orbits of satellites/space objects. Orbit determination is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellites/space objects at two end times, taken as boundary conditions, are known. The finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is defined as the satellite's equation of motion with a perturbing acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized. The resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology has been tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Twelve two-hour arcs were considered, and only the positions at the end times of each arc were taken as boundary conditions. The algorithm was applied to all GPS satellites, and the results achieved using the FDM were compared with the NGA precise orbits. The maximum RSS error is 0.48 m for position and 0.43 mm/s for velocity. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023; the maximum RSS error is 0.49 m for position and 0.28 mm/s for velocity. Next, a simulation was carried out for a highly elliptical orbit for DOY 63, 2023, over a duration of 6 hours. The RSS of the difference is 0.92 m in position and 1.58 mm/s in velocity for orbital speeds above 5 km/s, whereas it is 0.13 m in position and 0.12 mm/s in velocity for orbital speeds below 5 km/s. The results show that the newly developed method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
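
For reference, a minimal implementation of the Tri-Diagonal Matrix Algorithm (Thomas algorithm) used to solve the discretized system is sketched below; the sample system is arbitrary, not the orbital equations of motion.

```python
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused), b = diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a second-difference operator, as arises from discretizing acceleration terms.
n = 5
a = np.full(n, 1.0); b = np.full(n, -2.0); c = np.full(n, 1.0)
d = np.array([-1.0, 0.0, 0.0, 0.0, -1.0])
print(tdma(a, b, c, d))   # expected solution: all ones
```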

Keywords: finite difference method, grid generation, NavIC system, orbit perturbation

Procedia PDF Downloads 85
10556 Heat Transfer Enhancement by Turbulent Impinging Jet with Jet's Velocity Field Excitations Using OpenFOAM

Authors: Naseem Uddin

Abstract:

Impinging jets are used in a variety of engineering and industrial applications. This paper is based on numerical simulations of heat transfer by a turbulent impinging jet with velocity field excitations, using different Reynolds-averaged Navier-Stokes (RANS) models. Detached eddy simulations are also conducted to investigate the differences in the prediction capabilities of these two simulation approaches. The excited jet is simulated in the non-commercial CFD code OpenFOAM with the goal of understanding the influence of the dynamics of the impinging jet on heat transfer. The jet's frequencies are altered keeping in view the preferred mode of the jet. The Reynolds number based on the mean velocity and diameter is 23,000, and the jet's outlet-to-target wall distance is two diameters. It is found that heat transfer at the target wall can be influenced by a judicious selection of amplitude and frequencies.

Keywords: excitation, impinging jet, natural frequency, turbulence models

Procedia PDF Downloads 274
10555 Performance Evaluation of Grid Connected Photovoltaic System

Authors: Abdulkadir Magaji

Abstract:

This study analyzes and compares the actual measured and simulated performance of a 3.2 kWp grid-connected photovoltaic system. The system is located at the outdoor facility of Government Day Secondary School, Katsina State, at approximately 12°15′N, 7°30′E. The system consists of 14 monocrystalline silicon modules connected in two strings of 7 series-connected modules, each facing north at a fixed tilt of 34°. The data presented in this study were measured in the year 2015, during which the system supplied a total of 4628 kWh to the local electric utility grid. The performance of the system was simulated using PVsyst software with measured and Meteonorm-derived climate data sets (solar radiation, ambient temperature and wind speed). The comparison between the measured and simulated energy yields is discussed. Although both simulation results were similar, a better match between measured and predicted monthly energy yield is observed for the simulation performed with the weather data measured at the site. The measured performance ratio in the present study, 58.4%, is higher than values reported elsewhere, as compared in the study.
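
As a reminder of how the performance ratio quoted above is typically computed (the IEC 61724 definition), a minimal sketch follows; the annual in-plane irradiation is a hypothetical placeholder, not the Katsina measurement.

```python
# Performance ratio PR = final yield / reference yield
#   final yield     Yf = E_AC / P_STC        [kWh per kWp]
#   reference yield Yr = H_POA / G_STC       [equivalent full-sun hours]
E_AC = 4628.0        # energy delivered to the grid over the year [kWh] (from the abstract)
P_STC = 3.2          # installed capacity at standard test conditions [kWp]
H_POA = 2100.0       # hypothetical annual in-plane irradiation [kWh/m^2]
G_STC = 1.0          # reference irradiance [kW/m^2]

Yf = E_AC / P_STC
Yr = H_POA / G_STC
PR = Yf / Yr
print(f"final yield = {Yf:.0f} kWh/kWp, reference yield = {Yr:.0f} h, PR = {PR:.1%}")
```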

Keywords: performance, evaluation, grid connection, photovoltaic system

Procedia PDF Downloads 181
10554 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural networks. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling, an operation that overlooks details of great concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with the known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results supported that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, and forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
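
To illustrate the score-level fusion described above (not the authors' exact procedure), the sketch below converts algorithm and examiner scores into likelihood ratios via kernel density estimates of genuine and impostor score distributions and fuses them by multiplication under an independence assumption; all scores are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def lr_function(genuine, impostor):
    """Return a function mapping a score to a likelihood ratio P(s|same) / P(s|different)."""
    f_gen, f_imp = gaussian_kde(genuine), gaussian_kde(impostor)
    return lambda s: f_gen(s) / np.maximum(f_imp(s), 1e-12)

# Synthetic calibration scores for an automated system and for human examiners.
lr_system = lr_function(rng.normal(0.8, 0.10, 500), rng.normal(0.3, 0.10, 500))
lr_human = lr_function(rng.normal(0.7, 0.15, 500), rng.normal(0.4, 0.15, 500))

# Fusion at the score level: assuming conditional independence, multiply the LRs.
s_sys, s_hum = 0.75, 0.65          # scores for one face-pair comparison
fused_lr = lr_system(s_sys) * lr_human(s_hum)
print("fused likelihood ratio:", fused_lr.item())
```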

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 130
10553 Measuring Housing Quality Using Geographic Information System (GIS)

Authors: Silvija Šiljeg, Ante Šiljeg, Ivan Marić

Abstract:

Housing quality is measured at both the objective and subjective level using different indicators. In this research, five urban and housing indicators, formed from 58 variables across different housing domains, were used. The aims of the research were to measure housing quality using a GIS-based approach and to detect critical points of housing in the example of the Croatian coastal town of Zadar. The purposes of GIS in the research are to generate models of housing quality indices by standardisation and aggregation of variables and to examine the accuracy of the housing quality index model. The accuracy analysis was carried out on the variable referring to the availability of educational facilities. By defining weighting coefficients and using different GIS methods, zones of high, medium and low housing quality were determined. The obtained results can be of use to town planners, spatial planners and town authorities in the process of making decisions, guidelines, and spatial interventions.
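
A minimal sketch of the standardisation-and-aggregation step, with made-up variables and weights rather than the 58 Zadar variables, is given below.

```python
import numpy as np

# Rows = spatial units (e.g., city blocks), columns = housing variables; values are made up.
raw = np.array([[2.1, 300, 0.8],
                [0.5, 950, 0.2],
                [1.4, 620, 0.5]])
weights = np.array([0.5, 0.3, 0.2])      # hypothetical weighting coefficients (sum to 1)
direction = np.array([1, -1, 1])         # +1 if "more is better", -1 if "more is worse"

# Min-max standardisation to [0, 1], flipping "more is worse" variables.
z = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
z = np.where(direction > 0, z, 1.0 - z)

# Weighted aggregation into a housing quality index per spatial unit, then zoning.
hqi = z @ weights
zones = np.digitize(hqi, bins=[1 / 3, 2 / 3])    # 0 = low, 1 = middle, 2 = high quality zone
print(hqi, zones)
```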

Keywords: housing quality, GIS, housing quality index, indicators, models of housing quality

Procedia PDF Downloads 299
10552 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from the branches give priority to the circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of the entrance itself. Modern roundabout capacity models generally also lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method in order to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each single roundabout entrance in order to apply the common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The following section summarizes the TRRL old rotary capacity model and the most recent HCM-7th modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage a collection of existing rotary intersections operating with the priority-to-circle rule has already started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and a discussion of some further research developments is opened.
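
For reference, a hedged sketch of an HCM-style single-entry computation (exponential entry-capacity form followed by the control-delay equation) is given below; the capacity coefficients are illustrative defaults and should be taken from the HCM edition actually used, not from this sketch.

```python
import math

def entry_capacity(conflicting_flow_pcph, A=1380.0, B=1.02e-3):
    """Single-lane roundabout entry capacity [pc/h], exponential HCM-style form.
    A and B are illustrative placeholder coefficients, not the paper's calibration."""
    return A * math.exp(-B * conflicting_flow_pcph)

def control_delay(entry_flow_pcph, capacity_pcph, T=0.25):
    """HCM-style average control delay [s/veh] for an analysis period T [h]."""
    x = entry_flow_pcph / capacity_pcph          # volume-to-capacity ratio
    return (3600.0 / capacity_pcph
            + 900.0 * T * ((x - 1.0)
                           + math.sqrt((x - 1.0) ** 2
                                       + (3600.0 / capacity_pcph) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

c = entry_capacity(conflicting_flow_pcph=600.0)
print(f"capacity = {c:.0f} pc/h, average delay = {control_delay(450.0, c):.1f} s/veh")
```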

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 86
10551 Comparison of an Anthropomorphic PRESAGE® Dosimeter and Radiochromic Film with a Commercial Radiation Treatment Planning System for Breast IMRT: A Feasibility Study

Authors: Khalid Iqbal

Abstract:

This work presents a comparison of anthropomorphic PRESAGE® dosimeter and radiochromic film measurements with a commercial treatment planning system to determine the feasibility of PRESAGE® for 3D dosimetry in breast IMRT. An anthropomorphic PRESAGE® phantom was created in the shape of a breast phantom. A five-field IMRT plan was generated with a commercially available treatment planning system and delivered to the PRESAGE® phantom. The anthropomorphic PRESAGE® was scanned with the Duke midsized optical CT scanner (DMOS-RPC), and the OD distribution was converted to dose. Comparisons were performed between the dose distributions calculated with the Pinnacle3 treatment planning system, PRESAGE®, and EBT2 film measurements. DVHs, gamma maps, and line profiles were used to evaluate the agreement. Gamma map comparisons showed that Pinnacle3 agreed with PRESAGE®, as greater than 95% of comparison points for the PTV passed a ±3%/±3 mm criterion when the outer 8 mm of phantom data were excluded. Edge artifacts were observed in the optical CT reconstruction, from the surface to approximately 8 mm depth. These artifacts resulted in dose differences between Pinnacle3 and PRESAGE® of up to 5% between the surface and a depth of 8 mm and decreased with increasing depth in the phantom. Line profile comparisons between all three independent measurements yielded a maximum difference of 2% within the central 80% of the field width. For the breast IMRT plan studied, the Pinnacle3 calculations agreed with PRESAGE® measurements to within the ±3%/±3 mm gamma criterion. This work demonstrates the feasibility of fashioning PRESAGE® into an anthropomorphic shape and establishes the accuracy of Pinnacle3 for breast IMRT. Furthermore, these data have established the groundwork for future investigations into 3D dosimetry with more complex anthropomorphic phantoms.
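
For readers unfamiliar with the ±3%/±3 mm test, a brute-force 1D gamma-index sketch over hypothetical dose profiles is shown below; real analyses use 2D or 3D grids but apply the same idea.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """Global gamma index: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over all evaluated points."""
    d_max = ref_dose.max()
    gamma = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        dd = (eval_dose - d_r) / (dose_tol * d_max)     # normalised dose difference
        dx = (positions - x_r) / dist_tol               # normalised spatial distance
        gamma[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gamma

# Hypothetical 1D profiles (mm, arbitrary dose units), e.g. TPS vs dosimeter readout.
x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50.0) / 20.0) ** 2)
ev = 1.02 * np.exp(-((x - 50.5) / 20.0) ** 2)           # 2% scaling and 0.5 mm shift
g = gamma_1d(ref, ev, x)
print(f"pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```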

Keywords: 3D dosimetry, PRESAGE®, IMRT, QA, EBT2 GAFCHROMIC film

Procedia PDF Downloads 416
10550 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience

Authors: Nkwenti Mbelli Njah

Abstract:

This research was motivated by a project requested by AXA on the topic of pensions payable under the workers' compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when the disability is less than 30% and the pension amount per year is less than six times the minimum national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated by applying the following bases: mortality table TD88/90 and a rate of interest of 5.25% (possibly with a management loading). Managing pensions which are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the discounted amount of payments, reflecting the mortality effect, for all pensioners (this task is monitored monthly in AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of workers' compensation pensioners, both for the Portuguese market and for the AXA portfolio. Not only is past mortality modeled, but projections of future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model is split into two parts: a stochastic model for population mortality which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox Proportional, Brass Linear and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute different life table functions as well as the not compulsorily recoverable reserves for each of the required models for the pensioners, their spouses and children under 21. The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) that is currently used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables.
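
To show how one-year death probabilities feed the reserve calculation, here is a minimal sketch of the expected present value of a unit annual pension for a single life at the legal 5.25% rate; the q_x vector and the pension amount are placeholders, not the projected Portuguese or AXA mortality.

```python
import numpy as np

def annuity_epv(qx, age, i=0.0525, max_age=110):
    """Expected present value of 1 paid at the end of each year of survival,
    for a life now aged `age`, given one-year death probabilities qx[0..max_age]."""
    v = 1.0 / (1.0 + i)
    epv, surv = 0.0, 1.0
    for t, x in enumerate(range(age, max_age), start=1):
        surv *= 1.0 - qx[x]          # probability of surviving to age x + 1
        epv += surv * v ** t         # discounted expected payment at time t
    return epv

# Placeholder Gompertz-like mortality curve for ages 0..110 -- illustrative only.
ages = np.arange(0, 111)
qx = np.clip(0.0002 * np.exp(0.09 * ages), 0.0, 1.0)

pension = 6000.0                     # hypothetical annual pension amount
print("reserve ≈", round(pension * annuity_epv(qx, age=55), 2))
```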

Keywords: compulsorily recoverable, life table functions, relational models, worker’s compensation pensioners

Procedia PDF Downloads 164
10549 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. The models are evaluated using an external dataset for validation, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
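
A minimal Keras sketch of a DenseNet201 transfer-learning classifier for the three labels is shown below; the autoencoder stage and the exact head sizes of the paper are not reproduced, and the head layers here are assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 3   # COVID-19, normal, pneumonia

# Pre-trained DenseNet201 backbone used as a frozen feature extractor (transfer learning).
backbone = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets prepared elsewhere
```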

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 130
10548 Emerging Technologies in Distance Education

Authors: Eunice H. Li

Abstract:

This paper discusses and analyses a small portion of the literature that has been reviewed for research work in Distance Education (DE) pedagogies that I am currently undertaking. It begins by presenting a brief overview of Taylor's (2001) five-generation models of Distance Education. The focus of the discussion is on the 5th generation, the Intelligent Flexible Learning Model. For this generation, educational and other institutions make portal access and interactive multi-media (IMM) an integral part of their operations. The paper then takes a brief look at current trends in technologies – for example, smart-watch wearable technology such as the Apple Watch. These emergent trends in technologies carry many new features, which are compared to the features of former DE generations. Also compared is the time span that has elapsed between the generations referred to in Taylor's model. This paper is a work in progress and therefore welcomes new insights, comparisons and critiques of the issues discussed.

Keywords: distance education, e-learning technologies, pedagogy, generational models

Procedia PDF Downloads 462
10547 Surviral: An Agent-Based Simulation Framework for Sars-Cov-2 Outcome Prediction

Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer

Abstract:

History and the current outbreak of Covid-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life. These areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modeling can support policy and decision-makers with adequate information regarding the dynamics of the pandemic and therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “survival” for simulating infectious diseases is presented. A special focus is put on SARS-Cov-2. The presented simulation package was used in Austria to model the SARS-Cov-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach. Since our world is getting more and more complex, the complexity of the underlying systems is also increasing. The development of tools and frameworks and increasing computational power advance the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants and the used capacity for hospitalization, as well as the availability of medical materials like ventilators, were integrated with a database system and used. The simulation result of the model was used for predicting the dynamics and the possible outcomes and was used by the health authorities to decide on the measures to be taken in order to control the pandemic situation. The survival package was implemented in the programming language Java and the analytics were performed with R Studio. During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an accuracy of about 1.5% average error. They were calculated to provide 10-days forecasts. The state government and the hospitals were provided with the 10-days models to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced. Among other things, communities were quarantined based on the calculations while, in accordance with the calculations, the curfews for the entire population were reduced. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created on the federal state level as well as on the district and municipal level, which was able to provide decision-makers with a solid information basis. This framework can be transferred to various infectious diseases and thus can be used as a basis for future monitoring.
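
The framework itself is written in Java; purely as an illustration of the agent-based idea (random daily contacts, probabilistic infection and recovery), a toy Python sketch with made-up parameters is shown below. It is not the survival framework and omits all of the data integration described above.

```python
import random

# Toy agent-based SIR model -- illustrative parameters only.
N, P_INFECT, P_RECOVER, CONTACTS, DAYS = 10_000, 0.05, 0.1, 8, 120
state = ["S"] * N
for seed in random.sample(range(N), 10):
    state[seed] = "I"

history = []
for day in range(DAYS):
    infected = [i for i, s in enumerate(state) if s == "I"]
    for i in infected:
        for j in random.sample(range(N), CONTACTS):     # random daily contacts
            if state[j] == "S" and random.random() < P_INFECT:
                state[j] = "I"
        if random.random() < P_RECOVER:
            state[i] = "R"
    history.append(state.count("I"))

print("peak active infections:", max(history))
```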

Keywords: modelling, simulation, agent-based, SARS-Cov-2, COVID-19

Procedia PDF Downloads 174
10546 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a sensor network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at the building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings' features are implemented according to the building thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features regarding each end-use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap in an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, the building envelope properties, but also domestic hot water usage or heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
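
Calibration quality in such studies is usually judged with the ASHRAE Guideline 14 indices; a short sketch of their computation on hypothetical monthly consumption data follows (the meter values are made up).

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalised mean bias error (%) per ASHRAE Guideline 14."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * (m - s).sum() / (len(m) * m.mean())

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE (%) per ASHRAE Guideline 14."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sqrt(((m - s) ** 2).mean()) / m.mean()

# Hypothetical monthly heating consumption (kWh): metered vs simulated.
metered = [4100, 3800, 3200, 2100, 900, 300, 200, 250, 800, 1900, 3100, 3900]
simulated = [4400, 3600, 3000, 2300, 1000, 350, 180, 300, 700, 2100, 3300, 3700]
print(f"NMBE = {nmbe(metered, simulated):.1f} %, CV(RMSE) = {cv_rmse(metered, simulated):.1f} %")
# Typical monthly calibration targets are |NMBE| <= 5 % and CV(RMSE) <= 15 %.
```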

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 160
10545 Scalable Learning of Tree-Based Models on Sparsely Representable Data

Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou

Abstract:

Many machine learning tasks, such as text annotation, usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated in the current version of scikit-learn.org, the most popular open source Python machine learning library.
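
As a quick illustration of training a tree ensemble directly on sparsely representable data (using the public scikit-learn API rather than the paper's internal splitter), consider the following sketch with randomly generated CSR input.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
# 5,000 samples in a 20,000-dimensional space with ~0.1% non-zero entries (CSR format),
# as would arise from bag-of-words features in a text annotation task.
X = sparse_random(5_000, 20_000, density=0.001, format="csr", random_state=rng)
y = rng.randint(0, 2, size=5_000)

clf = RandomForestClassifier(n_estimators=50, max_depth=8, random_state=0)
clf.fit(X, y)          # scikit-learn's tree-based models accept sparse CSR/CSC input
print("training accuracy:", clf.score(X, y))
```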

Keywords: big data, sparsely representable data, tree-based models, scalable learning

Procedia PDF Downloads 263
10544 Numerical Simulation and Experimental Validation of the Tire-Road Separation in Quarter-car Model

Authors: Quy Dang Nguyen, Reza Nakhaie Jazar

Abstract:

The paper investigates the vibration dynamics of tire-road separation for a quarter-car model; the separation model is developed to be close to the real situation by considering that the tire is able to separate from the ground plane. A set of piecewise linear mathematical models is developed, matching the in-contact and no-contact states, to be considered as mother models for further investigations. The bound dynamics are numerically simulated in the time response and phase portraits. The separation analysis may determine which values of the suspension parameters can delay and avoid the no-contact phenomenon, which improves ride comfort and eliminates the potentially dangerous oscillation. Finally, model verification is carried out in the MSC-ADAMS environment.
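
A minimal numerical sketch of the piecewise in-contact / no-contact idea, using a compression-only tyre spring and illustrative parameters rather than the paper's values (static preload and gravity are deliberately omitted), is given below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative quarter-car parameters (not the paper's values).
ms, mu = 300.0, 40.0          # sprung / unsprung mass [kg]
ks, cs = 20_000.0, 1_500.0    # suspension stiffness [N/m] and damping [N s/m]
kt = 200_000.0                # tyre stiffness [N/m]

def road(t):
    """Hypothetical harmonic road input [m]."""
    return 0.02 * np.sin(2 * np.pi * 6.0 * t)

def rhs(t, z):
    xs, vs, xu, vu = z
    f_susp = ks * (xu - xs) + cs * (vu - vs)
    # Compression-only tyre force: zero when the tyre leaves the road (separation).
    f_tyre = max(0.0, kt * (road(t) - xu))
    return [vs, f_susp / ms, vu, (-f_susp + f_tyre) / mu]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
separated = kt * (road(sol.t) - sol.y[2]) < 0.0
print("fraction of time with tyre-road separation:", separated.mean())
```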

Keywords: quarter-car vibrations, tire-road separation, separation analysis, separation dynamics, ride comfort, ADAMS validation

Procedia PDF Downloads 92
10543 Empirical and Indian Automotive Equity Portfolio Decision Support

Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu

Abstract:

A brief review of the empirical studies on stock market decision support methodology indicates that they are at the threshold of validating the accuracy of traditional models as well as fuzzy, artificial neural network and decision tree models. Many researchers have been attempting to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support of stock market decisions. The study identifies the significant variables and their lags which affect the price of the stocks, using OLS analysis and decision tree classifiers.
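
A simplified sketch of the lag-variable / decision-tree step, on synthetic prices rather than NSE data, could look like the following.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Synthetic daily closing prices for two automotive stocks (placeholders for NSE series).
prices = pd.DataFrame({"stock_a": 100 + rng.normal(0, 1, 500).cumsum(),
                       "stock_b": 80 + rng.normal(0, 1, 500).cumsum()})

# Lagged returns of both stocks as explanatory variables (intra-sectoral support).
rets = prices.pct_change()
X = pd.concat([rets.shift(lag).add_suffix(f"_lag{lag}") for lag in (1, 2, 3)], axis=1)
y = (rets["stock_a"] > 0).astype(int)          # target: up/down movement of stock A
data = pd.concat([X, y.rename("up")], axis=1).dropna()

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(data.drop(columns="up")[:-100], data["up"][:-100])
print("hold-out accuracy:", clf.score(data.drop(columns="up")[-100:], data["up"][-100:]))
```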

Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis

Procedia PDF Downloads 485
10542 Impact of Integrated Signals for Doing Human Activity Recognition Using Deep Learning Models

Authors: Milagros Jaén-Vargas, Javier García Martínez, Karla Miriam Reyes Leiva, María Fernanda Trujillo-Guerrero, Francisco Fernandes, Sérgio Barroso Gonçalves, Miguel Tavares Silva, Daniel Simões Lopes, José Javier Serrano Olmedo

Abstract:

Human Activity Recognition (HAR) is having a growing impact on the creation of new applications and is responsible for emerging new technologies. The use of wearable sensors is also an important key to exploring the human body's behavior when performing activities, and the use of these devices is less invasive and more comfortable for the person. In this study, a database that includes three activities is used. The activities were acquired from inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. The main objective is to differentiate the performance of four Deep Learning (DL) models: Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and the hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model, when considering acceleration, velocity and position, and to evaluate whether integrating the IMU acceleration to obtain velocity and position increases performance when used as input to the DL models, also in comparison with the same type of data provided by the MOCAP system. Although the acceleration data are cleaned before integration, the results show only a minimal increase in accuracy for the integrated signals.
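
The integration step mentioned above can be done, for example, with cumulative trapezoidal integration after removing the bias component; a short sketch on a synthetic one-axis acceleration trace follows (real pipelines also filter out gravity and drift more carefully).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

fs = 100.0                                   # IMU sampling rate [Hz]
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic linear acceleration of one axis [m/s^2] (placeholder for a real IMU channel).
acc = 0.5 * np.sin(2 * np.pi * 1.0 * t) + 0.01 * np.random.randn(t.size)

acc = acc - acc.mean()                               # crude bias removal before integrating
vel = cumulative_trapezoid(acc, t, initial=0.0)      # velocity [m/s]
vel = vel - vel.mean()                               # limit drift before the second integration
pos = cumulative_trapezoid(vel, t, initial=0.0)      # position [m]

print("velocity RMS:", vel.std(), " position range:", pos.max() - pos.min())
```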

Keywords: HAR, IMU, MOCAP, acceleration, velocity, position, feature maps

Procedia PDF Downloads 98
10541 Saltwater Intrusion Studies in the Cai River in the Khanh Hoa Province, Vietnam

Authors: B. Van Kessel, P. T. Kockelkorn, T. R. Speelman, T. C. Wierikx, C. Mai Van, T. A. Bogaard

Abstract:

Saltwater intrusion is a common problem in estuaries around the world, as it can hinder the freshwater supply of coastal zones. This problem is likely to grow due to climate change and sea-level rise. The influence of these factors on saltwater intrusion was investigated for the Cai River in the Khanh Hoa province in Vietnam. In addition, the Cai River has high seasonal fluctuations in discharge, leading to increased saltwater intrusion during the dry season. Sea-level rise, river discharge changes, river mouth widening and a proposed saltwater intrusion prevention dam can all influence the saltwater intrusion, but their effects have not been quantified for the Cai River estuary. This research used both an analytical and a numerical model to investigate the effect of the aforementioned factors. The analytical model was based on a model proposed by Savenije and was calibrated using limited in situ data. The numerical model was a 3D hydrodynamic model built with the Delft3D4 software. The analytical and numerical models agreed with in situ data, mostly for tidally averaged data. Both models indicated a roughly similar dependence on discharge, and agreed that this parameter had the most severe influence on the modeled saltwater intrusion. Especially for discharges below 10 m³/s, the saltwater was predicted to reach further than 10 km. In the models, both sea-level rise and river widening mainly resulted in salinity increments of up to 3 kg/m³ in the middle part of the river. The predicted sea-level rise in 2070 was simulated to lead to an increase of 0.5 km in saltwater intrusion length. Furthermore, the effect of the saltwater intrusion dam seemed significant in the model used, but only for the highest position of the gate.

Keywords: Cai River, hydraulic models, river discharge, saltwater intrusion, tidal barriers

Procedia PDF Downloads 112
10540 Nanomechanical Devices Vibrating at Microwave Frequencies in Simple Liquids

Authors: Debadi Chakraborty, John E. Sader

Abstract:

Nanomechanical devices have emerged as a versatile platform for a host of applications due to their extreme sensitivity to environmental conditions. For example, mass measurements with sensitivity at the atomic level have recently been demonstrated. Ultrafast laser spectroscopy coherently excites the vibrational modes of metal nanoparticles and permits precise measurement of the vibration characteristics as a function of nanoparticle shape, size and surrounding environment. This study reports that the vibration of metal nanoparticles in simple liquids, such as water and glycerol, is not described by conventional fluid mechanics, i.e., the Navier-Stokes equations. The intrinsic molecular relaxation processes in the surrounding liquid are found to have a profound effect on the fluid-structure interaction of mechanical devices at nanometre scales. Theoretical models have been developed based on non-Newtonian viscoelastic fluid-structure interaction theory to investigate the vibration of nanoparticles immersed in simple fluids. The utility of this theoretical framework is demonstrated by comparison to measurements on single nanowires and ensembles of metal rods. This study provides a rigorous foundation for the use of metal nanoparticles as ultrasensitive mechanical sensors in fluids and opens a new paradigm for understanding extremely high frequency fluid mechanics, nanoscale sensing technologies, and biophysical processes.

Keywords: fluid-structure interaction, nanoparticle vibration, ultrafast laser spectroscopy, viscoelastic damping

Procedia PDF Downloads 274
10539 Review of the Model-Based Supply Chain Management Research in the Construction Industry

Authors: Aspasia Koutsokosta, Stefanos Katsavounis

Abstract:

This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. It has also been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. However, over the last decade there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations and improves the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling accommodates conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models depending on their scope, mathematical formulation, structure, objectives, solution approach, software used and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we identify that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without the consequential reform of generic concepts to match the unique characteristics of the construction industry.

Keywords: construction supply chain management, modeling, operations research, optimization, simulation

Procedia PDF Downloads 503
10538 Free Vibration Analysis of Composite Beam with Non-Uniform Section Using Analytical, Numerical and Experimental Method

Authors: Kadda Boumediene, Mohamed Ziani

Abstract:

Mainly because of their good stiffness-to-mass ratio, and in addition to their adjustable mechanical properties, composite materials are more and more often used as an alternative to traditional materials in several domains. Before using these materials in practical applications, a detailed and precise characterization of their mechanical properties is necessary. The present work provides a dynamic analysis of a composite beam (natural frequencies and mode shapes). An experimental vibration technique, which is a powerful tool for the estimation of mechanical characteristics, is used to characterize a non-uniform mortar/natural mineral fiber beam. The study is completed by analytical (Rayleigh and Rayleigh-Ritz), experimental and numerical applications for the non-uniform mortar/natural mineral fiber composite beam, and is supported by a comparison between numerical and analytical results as well as between experimental and numerical results.
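
As an illustration of the Rayleigh step for a non-uniform beam, the sketch below estimates the fundamental frequency of a cantilever from an assumed mode shape and numerically integrated stiffness and mass distributions; the geometry and material data are placeholders, not the mortar/fiber beam's.

```python
import numpy as np

# Placeholder cantilever with a linearly tapering rectangular section (not the tested beam).
L, E, rho = 1.0, 10e9, 2000.0                 # length [m], modulus [Pa], density [kg/m^3]
x = np.linspace(0.0, L, 2001)
b, h = 0.05, 0.04 * (1.0 - 0.5 * x / L)       # width [m], tapering height [m]
A, I = b * h, b * h ** 3 / 12.0               # area [m^2], second moment of area [m^4]

# Assumed mode shape satisfying the cantilever geometric boundary conditions.
phi = 1.0 - np.cos(np.pi * x / (2.0 * L))
phi_dd = (np.pi / (2.0 * L)) ** 2 * np.cos(np.pi * x / (2.0 * L))   # second derivative

# Rayleigh quotient: omega^2 = integral(E I (phi'')^2 dx) / integral(rho A phi^2 dx)
omega2 = np.trapz(E * I * phi_dd ** 2, x) / np.trapz(rho * A * phi ** 2, x)
print(f"estimated fundamental frequency: {np.sqrt(omega2) / (2 * np.pi):.1f} Hz")
```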

Keywords: composite beam, mortar/ natural mineral fiber, mechanical characteristics, natural frequencies, mode shape

Procedia PDF Downloads 353
10537 Monte Carlo Simulation of X-Ray Spectra in Diagnostic Radiology and Mammography Using MCNP4C

Authors: Sahar Heidary, Ramin Ghasemi Shayan

Abstract:

The Monte Carlo N-Particle radiation transport code (MCNP4C) was used to generate x-ray spectra in diagnostic radiology and mammography. Electrons were transported until they slowed down and stopped in the target, and both bremsstrahlung and characteristic x-ray production were considered in this study. The x-ray spectra predicted by several computational models used in the diagnostic radiology and mammography energy range were calculated and assessed by comparison with measured spectra, and their effect on the calculation of the absorbed dose and the effective dose (ED) delivered to the adult ORNL hermaphroditic phantom was quantified. This comprises empirical models (TASMIP and MASMIP), semi-empirical models (X-rayb&m, X-raytbc, XCOMP, IPEM, Tucker et al., and Blough et al.), and Monte Carlo modeling (EGS4, ITS3.0, and MCNP4C). Images obtained using synchrotron radiation (SR), with both a screen-film and a CR system, were compared with images of the same samples obtained with digital mammography equipment. In view of the good quality of the results obtained, the CR system was used in two mammographic examinations with SR. For each mammography unit, bilateral mediolateral oblique (MLO) and craniocaudal (CC) mammograms were obtained in a woman with fatty breasts and a woman with dense breasts. Readers evaluated the resulting image sets and noted the common findings and specific omissions that informed the clinical assessments.

Keywords: mammography, monte carlo, effective dose, radiology

Procedia PDF Downloads 131
10536 The Co-Existence of Multidominance and Movement in the Syntax of Chinese Bi-Comparatives

Authors: Yaqing Hu

Abstract:

This paper puts forward a syntactic analysis involving multidominance and rightward movement in Chinese bi-comparatives, as in 'Yuehan bi Mali gao (John is taller than Mary).' It is argued here that the predicate of comparison is a shared constituent in two small clauses, namely one for the target and one for the standard; it then moves rightward to form a degree phrase with the comparative morpheme. This proposal rests on four aspects. First, the example above can also be expressed in this way, 'A: Yuehan he Mali, shui gao? (John and Mary, who is taller?) B: Yuehan gao./Yuehan geng gao. (John is taller).' This shows that the gradable adjective is predicated of the target. In addition, according to a constraint on Chinese bi-comparatives, namely that the target and the standard must be arguments of the predicate simultaneously, it is not unreasonable to assume that the gradable adjective may also be predicated of the standard. Second, subcomparatives are totally disallowed in Chinese, as in '*zhe-zhang zhuozi bi zhe-zhang yizi kuan chang. (This table is longer than this chair is wide.)' In order to save it from ungrammaticality, the target and the standard should be compared along the same dimension denoted by the gradable adjective. It may follow that in Chinese comparatives, having equal roles in the same eventuality, the target and the standard bear the same thematic relationship with the predicate of comparison. Third, verb-copy can appear in Chinese bi-comparatives, as in 'Yuehan qi ma bi Mali qi ma qi de kuai. (John rides horses faster than Mary does.)' The predicate qi seems to form a small clause with both the target and the standard. This might be supporting evidence that both the target and the standard share the predicate of comparison. Fourth, Chinese comparatives do have comparative morphemes, as in 'Yuehan bi Mali geng gao. (John is taller than Mary)', which is semantically equivalent to the first example above. Thus, it follows that one feature of Chinese comparative morphemes is that they can remain overt or covert in the syntax, which does not affect semantics. This further shows that comparative morphemes in bi-comparatives may not be able to saturate the degree argument denoted by the predicate of comparison, due to their optionality in the structure. These four aspects present a challenge to the Direct Analysis used in Chinese comparatives, since this approach would presume that the target and the standard somehow show independence from the predicate in the syntax. Meanwhile, this study also rejects the previous analysis of multidominance in bi-comparatives in which the degree phrase, comprised of the comparative morpheme and the gradable adjective, may be shared by the standard when the comparative morpheme is covert. The syntactic analysis proposed in this study therefore offers a different perspective on how to treat the degree phrase in Chinese comparatives and may provide evidence for whether there is degree phrase movement in bi-comparatives, as in their English counterparts.

Keywords: Chinese comparatives, degree phrase, movement, multidominance, syntactic analysis

Procedia PDF Downloads 330
10535 Design and Simulation of All Optical Fiber to the Home Network

Authors: Rahul Malhotra

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network. This paper demonstrates the simultaneous delivery of triple-play services (data, voice and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
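
The bit-error-rate behaviour referred to above follows the usual Gaussian-noise relation BER = ½·erfc(Q/√2); a short sketch of how BER degrades as the receiver Q-factor falls (for example, when more users share the PON power budget) is given below with illustrative Q values.

```python
import numpy as np
from scipy.special import erfc

def ber_from_q(q):
    """Bit error rate for a binary receiver with Gaussian noise statistics."""
    return 0.5 * erfc(q / np.sqrt(2.0))

# Illustrative Q-factors (linear) as the split ratio or data rate increases.
for q in (7.0, 6.0, 5.0, 4.0):
    print(f"Q = {q:.1f}  ->  BER = {ber_from_q(q):.2e}")
```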

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 556