Search results for: CAR models

2802 Feature Extraction and Impact Analysis for Solid Mechanics Using Supervised Finite Element Analysis

Authors: Edward Schwalb, Matthias Dehmer, Michael Schlenkrich, Farzaneh Taslimi, Ketron Mitchell-Wynne, Horen Kuecuekyan

Abstract:

We present a generalized feature extraction approach for supporting Machine Learning (ML) algorithms which perform tasks similar to Finite-Element Analysis (FEA). We report results for estimating the Head Injury Categorization (HIC) of vehicle engine compartments across various impact scenarios. Our experiments demonstrate that models learned using features derived with a simple discretization approach provide a reasonable approximation of a full simulation. We observe that Decision Trees could be as effective as Neural Networks for the HIC task. The simplicity and performance of the learned Decision Trees could offer a trade-off: speed and cost improvements of multiple orders of magnitude over full simulation, in exchange for a reasonable approximation. When used as a complement to full simulation, the approach enables rapid approximate feedback to engineering teams before submission for full analysis. The approach produces mesh-independent features and is agnostic of the assembly structure.
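
As a rough illustration of the surrogate idea only (not the authors' pipeline), the sketch below trains a decision-tree regressor on synthetic, discretization-style features; the feature construction and the HIC-like target are placeholder assumptions.

```python
# Hypothetical sketch: a decision-tree surrogate trained on discretized FEA features.
# Feature layout, data shapes, and the target are illustrative, not taken from the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assume each row holds mesh-independent features from a coarse spatial discretization
# (e.g., per-cell mass, stiffness, clearance) and the target is the simulated HIC value.
X = rng.random((500, 32))
y = X[:, :4].sum(axis=1) + 0.1 * rng.standard_normal(500)  # stand-in for the FEA output

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = DecisionTreeRegressor(max_depth=6).fit(X_train, y_train)
print("R^2 on held-out impact scenarios:", surrogate.score(X_test, y_test))
```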

Keywords: mechanical design validation, FEA, supervised decision tree, convolutional neural network

Procedia PDF Downloads 139
2801 How Polarization and Ideological Divisiveness Increase the Likelihood of Executive Action: Evidence from the Italian Case

Authors: Umberto Platini

Abstract:

This paper analyses the role of government fragmentation as a predictor of the use of emergency decrees in parliamentary democracies. In particular, it focuses on the relationship between ideological divisiveness within cabinets and the choice by executives to issue emergency decrees rather than initiating ordinary legislative procedures. A Bayesian multilevel analysis conducted on the population of government-initiated legislation in Italy between 1996 and 2018 finds significant evidence that legislative proposals which are further away from the ideological centre of gravity of the executive are around three times more likely to be issued as emergency decrees. Likewise, legislative projects regulating more contentious policy areas are significantly more likely to be issued by decree. However, for more contentious issues the importance of ideological distance as a predictor diminishes. This evidence suggests that cabinets prefer decrees to ordinary legislative procedures when they expect the bargaining environment in Parliament to be more hostile. These results persist regardless of fluctuations in the political-economic cycle. Their robustness is also tested against a battery of controls and against fixed effects at both the government level and the legislature level.
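
As a heavily simplified, non-hierarchical stand-in for the paper's Bayesian multilevel logit, the sketch below fits a plain logit of "issued as decree" on ideological distance and a contentiousness indicator; all variable names and data are invented assumptions.

```python
# Simplified illustration only: a single-level logit instead of the paper's
# Bayesian multilevel model; covariates and data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
distance = rng.random(n)             # ideological distance of the proposal from the cabinet
contentious = rng.integers(0, 2, n)  # 1 if the policy area is contentious
logit_p = -1.0 + 2.0 * distance + 0.8 * contentious
decree = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # 1 = issued as emergency decree

X = sm.add_constant(np.column_stack([distance, contentious]))
fit = sm.Logit(decree, X).fit(disp=False)
print(np.exp(fit.params))   # odds ratios; the paper reports roughly 3x for distant proposals
```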

Keywords: Bayesian multilevel logit models, executive action, executive decrees, ideology, legislative studies, polarization

Procedia PDF Downloads 105
2800 Maackiain Attenuates Alpha-Synuclein Accumulation and Improves 6-OHDA-Induced Dopaminergic Neuron Degeneration in Parkinson's Disease Animal Model

Authors: Shao-Hsuan Chien, Ju-Hui Fu

Abstract:

Parkinson’s disease (PD) is a degenerative disorder of the central nervous system that is characterized by progressive loss of dopaminergic neurons in the substantia nigra pars compacta and by motor impairment. Aggregation of α-synuclein in neuronal cells plays a key role in this disease. At present, therapeutics for PD provide moderate symptomatic benefit but are not able to delay the development of the disease. Current efforts in the treatment of PD aim to identify new drugs that slow or arrest the progressive course of PD by interfering with a disease-specific pathogenetic process in PD patients. Maackiain is a bioactive compound isolated from the roots of the Chinese herb Sophora flavescens. The purpose of the present study was to assess the potential of maackiain to ameliorate PD in Caenorhabditis elegans models. Our data reveal that maackiain prevents α-synuclein accumulation in the transgenic Caenorhabditis elegans model and also improves dopaminergic neuron degeneration, food-sensing behavior, and life-span in the 6-hydroxydopamine-induced Caenorhabditis elegans model, thus indicating its potential as a candidate antiparkinsonian drug.

Keywords: maackiain, Parkinson’s disease, dopaminergic neurons, α-Synuclein

Procedia PDF Downloads 199
2799 Market Illiquidity and Pricing Errors in the Term Structure of CDS

Authors: Lidia Sanchis-Marco, Antonio Rubia, Pedro Serrano

Abstract:

This paper studies the informational content of pricing errors in the term structure of sovereign CDS spreads. The residuals from a no-arbitrage model are employed to construct a price discrepancy estimate, or noise measure. The noise estimate is understood as an indicator of market distress and reflects frictions such as illiquidity. Empirically, the noise measure is computed for an extensive panel of CDS spreads. Our results reveal that an important fraction of systematic risk is not priced in default swap contracts. When projecting the noise measure onto a set of financial variables, the panel-data estimates show that greater price discrepancies are systematically related to a higher level of offsetting transactions of CDS contracts. This evidence suggests that arbitrage capital flows exit the marketplace during times of distress, which is consistent with market segmentation between investors and arbitrageurs in which professional arbitrageurs are particularly ineffective at bringing prices to their fundamental values during turbulent periods. Our empirical findings are robust across the most common CDS pricing models employed in the industry.
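
One common way to turn pricing errors into a noise proxy, sketched below with invented figures, is the cross-sectional root-mean-square deviation between observed and model-implied spreads; the paper builds its measure from a no-arbitrage term-structure model, which this sketch does not reproduce.

```python
# Illustrative noise proxy: RMS of pricing errors across contracts on one date.
# The spreads below are made-up numbers in basis points.
import numpy as np

observed = np.array([102.0, 98.5, 110.2, 95.0, 130.4])   # observed CDS spreads, bps
model    = np.array([100.0, 99.0, 108.0, 97.5, 125.0])   # model-implied spreads, bps

noise = np.sqrt(np.mean((observed - model) ** 2))        # RMS pricing error, bps
print(f"noise measure ~ {noise:.2f} bps")
```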

Keywords: credit default swaps, noise measure, illiquidity, capital arbitrage

Procedia PDF Downloads 569
2798 Pharmacokinetics of First-Line Tuberculosis Drugs in South African Patients from KwaZulu-Natal: Effects of Pharmacogenetic Variation on Rifampicin and Isoniazid Concentrations

Authors: Anushka Naidoo, Veron Ramsuran, Maxwell Chirehwa, Paolo Denti, Kogieleum Naidoo, Helen McIlleron, Nonhlanhla Yende-Zuma, Ravesh Singh, Sinaye Ngcapu, Nesri Padayatachi

Abstract:

Background: Despite efforts to introduce new drugs and shorter drug regimens for drug-susceptible tuberculosis (TB), the standard first-line treatment has not changed in over 50 years. Rifampicin, isoniazid, and pyrazinamide are critical components of the current standard treatment regimens. Some studies suggest that microbiologic failure and acquired drug resistance are primarily driven by low drug concentrations that result from pharmacokinetic (PK) variability independent of adherence to treatment. Wide between-patient pharmacokinetic variability for rifampicin, isoniazid, and pyrazinamide has been reported in prior studies. There may be several reasons for this variability; however, genetic variability in genes coding for drug-metabolizing and transporter enzymes has been shown to be a contributing factor to variable tuberculosis drug exposures. Objective: We describe the pharmacokinetics of the first-line TB drugs rifampicin, isoniazid, and pyrazinamide and assess the effect of genetic variability in relevant selected drug-metabolizing and transporter enzymes on the pharmacokinetic parameters of isoniazid and rifampicin. Methods: We conducted the randomized controlled Improving Retreatment Success TB trial in Durban, South Africa. The drug regimen included rifampicin, isoniazid, and pyrazinamide. Drug concentrations were measured in plasma, and concentration-time data were analysed using nonlinear mixed-effects models to quantify the effects of relevant covariates and single nucleotide polymorphisms (SNPs) of drug-metabolizing and transporter genes on rifampicin, isoniazid, and pyrazinamide exposure. A total of 25 SNPs were selected for analysis in this study: four NAT2 (used to determine acetylator status), four SLCO1B1, three Pregnane X receptor (NR1), six ABCB1, and eight UGT1A. Genotypes were determined for each of the SNPs using a TaqMan® Genotyping OpenArray™. Results: Among the fifty-eight patients studied, 41 (70.7%) were male, 97% were black African, 42 (72.4%) were HIV co-infected, and 40 (95%) were on efavirenz-based ART. Median weight, fat-free mass (FFM), and age at baseline were 56.9 kg (interquartile range, IQR: 51.1-65.2), 46.8 kg (IQR: 42.5-50.3), and 37 years (IQR: 31-42), respectively. The pharmacokinetics of rifampicin and pyrazinamide were best described using one-compartment models with first-order absorption and elimination, while for isoniazid a two-compartment disposition model was used. The median (IQR) AUC (h·mg/L) and Cmax (mg/L) were 25.62 (23.01-28.53) and 4.85 (4.36-5.40) for rifampicin, 10.62 (9.20-12.25) and 2.79 (2.61-2.97) for isoniazid, and 345.74 (312.03-383.10) and 28.06 (25.01-31.52) for pyrazinamide, respectively. Eighteen percent of patients were classified as rapid acetylators, and 34% and 43% as slow and intermediate acetylators, respectively. Rapid and intermediate acetylator status based on NAT2 genotype resulted in 2.3 and 1.6 times higher isoniazid clearance, respectively, than in slow acetylators. We found no effect of the SLCO1B1 genotypes on rifampicin pharmacokinetics. Conclusion: Plasma concentrations of rifampicin, isoniazid, and pyrazinamide were low overall in our patients. Isoniazid clearance was high overall and, as expected, higher in rapid and intermediate acetylators, resulting in lower drug exposures. In contrast to reports from previous South African and Ugandan studies, we did not find any effect of the SLCO1B1 or other genotypes tested on rifampicin PK. However, our findings are in keeping with more recent studies from Malawi and India, emphasizing the need for geographically diverse and adequately powered studies. The clinical relevance of the low tuberculosis drug concentrations warrants further investigation.
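
For readers unfamiliar with the model family, the sketch below evaluates a generic one-compartment profile with first-order absorption and elimination and a trapezoidal AUC; the dose and parameter values are arbitrary placeholders, not the study's population estimates.

```python
# Generic one-compartment PK profile (first-order absorption and elimination).
# Parameter values are illustrative assumptions, not the trial's estimates.
import numpy as np

def one_compartment(t, dose, ka, CL, V, F=1.0):
    """Plasma concentration after an oral dose, for ka != ke."""
    ke = CL / V
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 97)                                   # hours post-dose
conc = one_compartment(t, dose=600, ka=1.2, CL=10.0, V=50.0)  # placeholder values
auc = np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t))         # AUC0-24, trapezoidal rule
print(f"Cmax ~ {conc.max():.2f} mg/L, AUC0-24 ~ {auc:.1f} h*mg/L")
```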

Keywords: rifampicin, isoniazid pharmacokinetics, genetics, NAT2, SLCO1B1, tuberculosis

Procedia PDF Downloads 187
2797 Modeling User Departure Time Choice for Trips in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Modeling users’ decisions on departure time choice is the main motivation for this research. In particular, it examines the impact of socio-demographic features, household and job characteristics, and trip qualities on individuals’ departure time choice. Departure time alternatives are presented as adjacent discrete time periods, and the choice between these alternatives is made using a discrete choice model. Since a great deal of early-morning trips, and the traffic congestion at that time of the day, comprise work trips, the focus of this study is on the work trip over the entire day. Using a stated-preference questionnaire, the study therefore models users’ departure time choice as affected by the congestion pricing plan in downtown Tehran. Experimental results demonstrate a significant socio-demographic impact on work-trip departure time. These findings have substantial implications for transportation planning analysis. In particular, the analysis shows that ignoring the effects of these variables could result in erroneous information; consequently, decisions in the fields of transportation planning and air quality would fail and cause the loss of financial resources.
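
A minimal sketch of the discrete-choice step follows: multinomial-logit probabilities over adjacent departure-time periods. The periods, attributes, and taste coefficients are invented for illustration, not estimates from the Tehran survey.

```python
# Illustrative multinomial-logit choice probabilities over departure-time periods.
# Attributes and coefficients are assumed values, not the study's estimates.
import numpy as np

periods = ["6:00-6:30", "6:30-7:00", "7:00-7:30", "7:30-8:00"]
travel_time = np.array([28.0, 35.0, 45.0, 40.0])   # expected in-vehicle minutes
toll = np.array([0.0, 1.5, 2.5, 1.0])              # congestion charge per period

beta_time, beta_toll = -0.08, -0.5                 # assumed taste parameters
utility = beta_time * travel_time + beta_toll * toll
prob = np.exp(utility - utility.max())             # softmax (shifted for stability)
prob /= prob.sum()

for p, pr in zip(periods, prob):
    print(f"{p}: {pr:.2f}")
```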

Keywords: modeling, departure time, travel timing, time of the day, congestion pricing, transportation planning

Procedia PDF Downloads 433
2796 T-S Fuzzy Modeling Based on Power Coefficient Limit Nonlinearity Applied to an Isolated Single Machine Load Frequency Deviation Control

Authors: R. S. Sheu, H. Usman, M. S. Lawal

Abstract:

Takagi-Sugeno (T-S) fuzzy model based control of load frequency deviation in a single machine with limit nonlinearity on the power coefficient is presented in this paper. Two T-S fuzzy rules, with only the rotor angle variable as input in the premise part and linear state-space models in the consequent part, involving characteristic matrices determined from the limits set on the power coefficient constant, are formulated. State feedback control gains for closed-loop control were determined from the formulated Linear Matrix Inequality (LMI) with an eigenvalue optimization scheme for asymptotic and exponential stability (speed of response). Numerical evaluation of the closed-loop system was carried out in Matlab. Simulation results for both the open- and closed-loop systems showed the effectiveness of the control scheme in maintaining load frequency stability.
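
To illustrate the T-S blending idea (not the paper's actual matrices or LMI design), the sketch below mixes two linear consequent models with rotor-angle-dependent membership weights; all numerical values are placeholders.

```python
# Two-rule Takagi-Sugeno blend: each rule carries a linear model, and the rotor
# angle sets the membership weights. Matrices and breakpoints are placeholders.
import numpy as np

A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])   # consequent model at the lower power-coefficient limit
A2 = np.array([[0.0, 1.0], [-2.0, -0.5]])   # consequent model at the upper power-coefficient limit

def memberships(delta, lo=-np.pi / 3, hi=np.pi / 3):
    """Complementary weights for the two rules as functions of rotor angle."""
    w1 = np.clip((hi - delta) / (hi - lo), 0.0, 1.0)
    return np.array([w1, 1.0 - w1])

x = np.array([0.1, 0.0])          # [rotor angle deviation, frequency deviation]
w = memberships(x[0])
A_blend = w[0] * A1 + w[1] * A2   # fuzzy-blended dynamics
print("blended state derivative:", A_blend @ x)
```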

Keywords: T-S fuzzy model, state feedback control, linear matrix inequality (LMI), frequency deviation control

Procedia PDF Downloads 397
2795 Applying the Crystal Model to Different Nuclear Systems

Authors: A. Amar

Abstract:

The angular distributions of the nuclear systems under consideration have been analyzed in the framework of the optical model (OM), where the real part was taken in the crystal model form. A crystal model (CM) has been applied to deuterons elastically scattered by ⁶,⁷Li and ⁹Be. A crystal model (CM) + distorted-wave Born approximation (DWBA) + dynamic polarization potential (DPP) has also been applied to deuterons elastically scattered by ⁶,⁷Li and ⁹Be. In addition, a crystal model has been applied to ⁶Li elastically scattered by ¹⁶O and ²⁸Sn, as well as to the ⁷Li+⁷Li system and the ¹²C(α,⁸Be)⁸Be reaction. The continuum-discretized coupled-channels (CDCC) method has been applied to the ⁷Li+⁷Li system, and agreement between the crystal model and the CDCC method has been observed. In general, the models succeeded in reproducing the differential cross sections over the full angular range and for all the energies under consideration.

Keywords: optical model (OM), crystal model (CM), distorted-wave Born approximation (DWBA), dynamic polarization potential (DPP), continuum-discretized coupled-channels (CDCC) method, deuteron elastic scattering by ⁶,⁷Li and ⁹Be

Procedia PDF Downloads 79
2794 Component-Based Approach in Assessing Sewer Manholes

Authors: Khalid Kaddoura, Tarek Zayed

Abstract:

Sewer networks are constructed to protect the communities and the environment from any contact with the sewer mediums. Pipelines, being laterals or sewer mains, and manholes form the huge underground infrastructure in every urban city. Due to the importance of sewer networks, the infrastructure asset management field has seen extensive advancement in condition assessment and rehabilitation decision models. However, most of the focus has been devoted to pipelines, giving little attention to manhole condition assessment. In fact, recent studies have started to emerge in this area to preserve manholes from any malfunction. Therefore, the main objective of this study is to propose a condition assessment model for sewer manholes. The model divides the manhole into several components and determines the relative importance weight of each component using the Analytic Network Process (ANP) decision-making method. Later, the condition of the manhole is computed by aggregating the condition of each component with its corresponding weight. Accordingly, the proposed assessment model will enable decision-makers to have a final index suggesting the overall condition of the manhole and a backward analysis to check the condition of each component. Consequently, better decisions can be made pertaining to maintenance, rehabilitation, and replacement actions.
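
The final aggregation step can be sketched very simply: multiply each component's condition score by its weight and sum. The components, scores, and weights below are illustrative; the paper derives the weights with a full ANP analysis.

```python
# Minimal weighted aggregation of component conditions into a manhole index.
# Component list, condition scores, and weights are placeholder assumptions.
component_condition = {"cover": 2, "frame": 3, "chimney": 4, "wall": 3, "bench": 2}
anp_weight = {"cover": 0.15, "frame": 0.10, "chimney": 0.25, "wall": 0.35, "bench": 0.15}

overall = sum(component_condition[c] * anp_weight[c] for c in component_condition)
print(f"Overall manhole condition index: {overall:.2f} (1 = excellent, 5 = failed)")
```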

Keywords: Analytic Network Process (ANP), condition assessment, decision-making, manholes

Procedia PDF Downloads 355
2793 Meta-Learning for Hierarchical Classification and Applications in Bioinformatics

Authors: Fabio Fabris, Alex A. Freitas

Abstract:

Hierarchical classification is a special type of classification task where the class labels are organised into a hierarchy, with more generic class labels being ancestors of more specific ones. Meta-learning for classification-algorithm recommendation consists of recommending to the user a classification algorithm, from a pool of candidate algorithms, for a dataset, based on the past performance of the candidate algorithms on other datasets. Meta-learning is normally used in conventional, non-hierarchical classification. By contrast, this paper proposes a meta-learning approach for the more challenging task of hierarchical classification and evaluates it on a large number of bioinformatics datasets. Hierarchical classification is especially relevant for bioinformatics problems, as protein and gene functions tend to be organised into a hierarchy of class labels. This work proposes a meta-learning approach for recommending the best hierarchical classification algorithm for a hierarchical classification dataset. This work’s contributions are: 1) proposing an algorithm for splitting hierarchical datasets into new datasets to increase the number of meta-instances, 2) proposing meta-features for hierarchical classification, and 3) interpreting decision-tree meta-models for hierarchical classification algorithm recommendation.
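
A toy version of the meta-level step is sketched below: a decision tree maps dataset meta-features to the algorithm that performed best on past datasets. The meta-features and algorithm labels are invented placeholders, not the paper's.

```python
# Sketch of a decision-tree meta-model for algorithm recommendation.
# Meta-features and candidate-algorithm labels are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# columns: [n_instances, n_features, hierarchy depth, mean children per class]
meta_X = rng.random((60, 4)) * [5000, 200, 6, 4]
best_algo = rng.choice(["local-per-node", "local-per-level", "global"], size=60)

meta_model = DecisionTreeClassifier(max_depth=3).fit(meta_X, best_algo)
new_dataset = [[1200, 80, 4, 2.5]]          # meta-features of an unseen dataset
print("recommended algorithm:", meta_model.predict(new_dataset)[0])
```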

Keywords: algorithm recommendation, meta-learning, bioinformatics, hierarchical classification

Procedia PDF Downloads 314
2792 Removal of Nickel and Zinc Ions from Aqueous Solution by Graphene Oxide and Graphene Oxide Functionalized Glycine

Authors: M. Rajabi, O. Moradi

Abstract:

In this study, the removal of nickel and zinc by graphene oxide and glycine-functionalized graphene oxide surfaces was examined. An amino group was added to the surface of graphene oxide to produce the functionalized graphene oxide–glycine material. The effects of contact time and initial concentration of Ni(II) and Zn(II) ions were studied. Results showed that the adsorption capacity increased with increasing initial concentration of Ni(II) and Zn(II). After 50 min there was no large change in adsorption capacity; therefore, 50 min was selected as the optimal contact time. Scanning electron microscopy (SEM) and Fourier transform infrared (FT-IR) spectroscopy confirmed the successful functionalization of the graphene oxide surface. Adsorption experiments of Ni(II) and Zn(II) ions on the graphene oxide and functionalized graphene oxide–glycine surfaces were carried out at 298 K and pH = 6. The pseudo-first-order and pseudo-second-order (types I, II, III, and IV) kinetic models were tested for the adsorption process, and the results showed that the kinetics best fit the type (I) pseudo-second-order model, which presented low χ² values as well as high R² values.
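
For reference, the type-I (linear) pseudo-second-order form is t/q_t = 1/(k2·qe²) + t/qe, fitted below on synthetic placeholder data (not the study's measurements).

```python
# Fitting the linearized (type I) pseudo-second-order kinetic model.
# The contact-time and uptake data are synthetic placeholders.
import numpy as np

t = np.array([5, 10, 20, 30, 40, 50], dtype=float)       # contact time, min
q_t = np.array([12.0, 18.5, 24.0, 26.5, 27.8, 28.3])      # adsorbed amount, mg/g

slope, intercept = np.polyfit(t, t / q_t, 1)              # t/q_t vs. t is linear
qe = 1.0 / slope                                          # equilibrium capacity, mg/g
k2 = slope**2 / intercept                                 # rate constant, g/(mg*min)
r2 = np.corrcoef(t, t / q_t)[0, 1] ** 2
print(f"qe ~ {qe:.1f} mg/g, k2 ~ {k2:.4f} g/(mg*min), R^2 = {r2:.4f}")
```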

Keywords: graphene oxide, glycine, nickel, zinc, adsorption, kinetics

Procedia PDF Downloads 308
2791 Effects of Initial State on Opinion Formation in Complex Social Networks with Noises

Authors: Yi Yu, Vu Xuan Nguyen, Gaoxi Xiao

Abstract:

Opinion formation in complex social networks may exhibit complex system dynamics even when based on some of the simplest system evolution models. An interesting and important issue is the effect of the initial state on the final steady-state opinion distribution. By carrying out extensive simulations and providing the necessary discussion, we show that, while different initial opinion distributions certainly make a difference to opinion evolution in social systems without noise, in systems with noise, given enough time, different initial states basically do not make any significant difference to the final steady state. Instead, it is the basal distribution of the preferred opinions that decides the final state of the system. We briefly explain the reasons leading to the observed conclusions. Such an observation contradicts a long-held belief on the role of the system's initial state in opinion formation, demonstrating the dominating role that opinion mutation can play in opinion formation given enough time. The observation may help to better understand certain observations of opinion evolution dynamics in real-life social networks.
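
A minimal Deffuant-style simulation with opinion mutation (noise) is sketched below; the parameter values and the uniform "preferred opinion" distribution are illustrative choices, not the paper's exact setup.

```python
# Minimal bounded-confidence (Deffuant-style) dynamics with random opinion mutation.
# Parameters are illustrative; changing the initial state barely moves the noisy steady state.
import numpy as np

rng = np.random.default_rng(3)
N, eps, mu, p_mut, steps = 500, 0.3, 0.5, 0.01, 200_000
opinions = rng.random(N)                            # initial opinions in [0, 1]

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if abs(opinions[i] - opinions[j]) < eps:        # interact only if opinions are close
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift
    if rng.random() < p_mut:                        # noise: mutate one random agent's opinion
        opinions[rng.integers(0, N)] = rng.random()

print("final opinion mean/std:", opinions.mean().round(3), opinions.std().round(3))
```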

Keywords: opinion formation, Deffuant model, opinion mutation, consensus making

Procedia PDF Downloads 178
2790 Numerical Prediction of Crack Width of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been utilized to study the prediction of cracking in concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks that exceed the allowable widths are unacceptable in an environment that is aggressive for reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two cases of dapped ends were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers a reinforcement based on hangers as well as on vertical and horizontal rings; in the second case, 50% of the vertical stirrups running from the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements SOLID65, capable of cracking in tension and crushing in compression. The Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between the tractions and the separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is called the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode I dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was taken into consideration according to the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the crack re-entrant corner. To validate the proposed approach, the results obtained with the previous procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed obtaining the load-crack width curves. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding the crack width can be considered good from the practical point of view; the load-displacement curve of the test and the location of the crack were also reproduced with favorable results.
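
As a small aside on the interface law, the sketch below evaluates a generic Mode-I bilinear traction-separation curve of the kind used in cohesive zone models; the strength and critical-opening values are placeholders, not those calibrated in the paper or ANSYS defaults.

```python
# Generic Mode-I bilinear cohesive (traction-separation) law: linear rise to the
# peak traction, then linear softening to zero at full debonding. Values are placeholders.
import numpy as np

def bilinear_traction(delta, sigma_max=3.0, delta_0=0.01, delta_c=0.15):
    """Normal traction (MPa) vs. crack opening (mm)."""
    delta = np.asarray(delta, dtype=float)
    rise = sigma_max * delta / delta_0
    soften = sigma_max * (delta_c - delta) / (delta_c - delta_0)
    return np.where(delta <= delta_0, rise, np.clip(soften, 0.0, None))

opening = np.linspace(0.0, 0.2, 5)
print(bilinear_traction(opening))     # traction drops to zero once debonding completes
fracture_energy = 0.5 * 3.0 * 0.15    # area under the curve = critical fracture energy
print("G_c ~", fracture_energy, "N/mm")
```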

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 168
2789 Decision Analysis Module for Excel

Authors: Radomir Perzina, Jaroslav Ramik

Abstract:

The Analytic Hierarchy Process is a frequently used approach for solving decision-making problems. There exists a wide range of software programs utilizing that approach. Their main disadvantage is that they are relatively expensive and do not show intermediate calculations. This work introduces a Microsoft Excel add-in called DAME – Decision Analysis Module for Excel. Compared to other computer programs, DAME is free, can work with scenarios or multiple decision makers, and displays intermediate calculations. Users can structure their decision models into three levels: scenarios/users, criteria, and variants. Items on all levels can be evaluated either by weights or by pair-wise comparisons. Three different methods are provided for the evaluation of the weights of the criteria, the variants, and the scenarios: Saaty’s Method, the Geometric Mean Method, and Fuller’s Triangle Method. Multiplicative and additive syntheses are supported. The proposed software package is demonstrated on a couple of illustrative examples of real-life decision problems.
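
Two of the weighting options mentioned above can be sketched in a few lines outside Excel: Saaty's eigenvector method and the geometric-mean method applied to a pairwise comparison matrix. The comparison values below are an invented example.

```python
# Deriving priority weights from a 3x3 pairwise comparison matrix in two ways.
# The comparison judgments are an illustrative example, not from the paper.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # pairwise comparisons of three criteria

# Saaty's method: principal eigenvector, normalised to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
w_saaty = np.real(eigvecs[:, np.argmax(eigvals.real)])
w_saaty /= w_saaty.sum()

# Geometric-mean method: row geometric means, normalised
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
w_geo = gm / gm.sum()

print("Saaty weights:         ", w_saaty.round(3))
print("Geometric-mean weights:", w_geo.round(3))
```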

Keywords: analytic hierarchy process, multi-criteria decision making, pair-wise comparisons, Microsoft Excel, scenarios

Procedia PDF Downloads 452
2788 Collaborative Writing Online with Apps during the Time of Pandemic: A Systematic Literature Review

Authors: Giuseppe Liverano

Abstract:

Today’s school is called upon to take the lead role in supporting students towards the formation of a conscious identity and a sense of responsible citizenship, through the development of key competencies for lifelong learning. This role requires it to be ready for change and to respond to the ever new needs of students, by adopting new pedagogical and didactic models and new didactic devices. Information and Communication Technologies, in this sense, reveal themselves to be useful resources that permit focusing attention on the learning of each individual student, understood as a dynamic and relational process of constructing shared and participated meanings. The use of collaborative writing apps represents a democratic and shared way of constructing knowledge through ICTs. It promotes the learning of reading and writing, literacy, and the development of transversal competencies in an inclusive perspective. Peer-to-peer comparison and reflection stimulate the transfer of thought into speech and writing, and the transformation of knowledge through a trialogical approach to learning generates enthusiasm and strengthens motivation. It represents a “different” way of expressing the training needs which come from several disciplinary fields and from subjects with different cultures. The contribution aims to reflect on the formative value of collaborative writing through apps and to analyse some online proposals carried out at school during the time of the pandemic, in order to highlight their critical aspects and pedagogical perspectives.

Keywords: collaborative writing, formative value, online, apps, pandemic

Procedia PDF Downloads 157
2787 Drinking Water Quality Assessment Using Fuzzy Inference System Method: A Case Study of Rome, Italy

Authors: Yas Barzegar, Atrin Barzegar

Abstract:

Drinking water quality assessment is a major issue today; technology and practices are continuously improving, and Artificial Intelligence (AI) methods prove their efficiency in this domain. The current research seeks a hierarchical fuzzy model for predicting drinking water quality in Rome (Italy). The Mamdani fuzzy inference system (FIS) is applied with different defuzzification methods. The proposed model includes three intermediate fuzzy models and one final fuzzy model. Each fuzzy model consists of three input parameters and 27 fuzzy rules. The model is developed for water quality assessment with a dataset considering nine parameters (alkalinity, hardness, pH, Ca, Mg, fluoride, sulphate, nitrates, and iron). Fuzzy-logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in drinking water quality assessment; they are an effective means of managing complicated, uncertain water systems and predicting drinking water quality. The FIS method can provide an effective solution for complex systems, and it can be modified easily to improve performance.
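
The mechanics of a single Mamdani inference step can be shown with a deliberately tiny example: two rules, one input, min implication, max aggregation, and centroid defuzzification. The membership breakpoints, rule set, and output scale below are invented, far smaller than the paper's three-input, 27-rule sub-models.

```python
# Toy Mamdani-style inference: clip consequents by rule firing strengths (min),
# aggregate (max), and defuzzify by centroid. All numbers are illustrative.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

ph = 6.2                                      # crisp input: measured pH
y = np.linspace(0.0, 100.0, 501)              # output universe: quality score

w1 = tri(ph, 6.0, 7.2, 8.5)                   # Rule 1: IF pH is "acceptable" THEN quality is "good"
w2 = tri(ph, 4.5, 5.5, 6.8)                   # Rule 2: IF pH is "low"        THEN quality is "poor"

good = np.minimum(w1, tri(y, 50, 75, 100))    # clip each consequent fuzzy set
poor = np.minimum(w2, tri(y, 0, 25, 50))
aggregate = np.maximum(good, poor)            # max aggregation of the clipped sets

quality = np.sum(y * aggregate) / np.sum(aggregate)   # centroid defuzzification
print(f"inferred quality score ~ {quality:.1f}")
```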

Keywords: water quality, fuzzy logic, smart cities, water attribute, fuzzy inference system, membership function

Procedia PDF Downloads 75
2786 Estimation of Reservoir Capacity and Sediment Deposition Using Remote Sensing Data

Authors: Odai Ibrahim Mohammed Al Balasmeh, Tapas Karmaker, Richa Babbar

Abstract:

In this study, the reservoir capacity and sediment deposition were estimated using remote sensing data. The satellite images were synchronized with water level and storage capacity to find out the change in sediment deposition due to soil erosion and transport by streamflow. The spread area of the water bodies was estimated using vegetation indices, e.g., the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI). The 3D reservoir bathymetry was modeled by integrating water level, storage capacity, and area. From the models for different time spans, the change in reservoir storage capacity was estimated. Another reservoir with known water level, storage capacity, area, and sediment deposition was used to validate the estimation technique. The t-test was used to assess the agreement between the observed and estimated reservoir capacity and sediment deposition.
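
The band-ratio indices mentioned above reduce to simple arithmetic on reflectance bands, sketched below on tiny synthetic arrays (the zero-threshold water mask is a common simple choice, not necessarily the one used in the study).

```python
# NDVI and NDWI from reflectance bands; the 2x2 arrays are synthetic stand-ins.
import numpy as np

red   = np.array([[0.10, 0.08], [0.30, 0.28]])
nir   = np.array([[0.05, 0.04], [0.45, 0.40]])
green = np.array([[0.12, 0.11], [0.20, 0.18]])

ndvi = (nir - red) / (nir + red)        # vegetation index: high over vegetation
ndwi = (green - nir) / (green + nir)    # water index: positive over open water

water_mask = ndwi > 0                   # simple threshold to delineate the water surface
print("NDWI:\n", ndwi.round(2))
print("water pixels:", int(water_mask.sum()))
```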

Keywords: satellite data, normalized difference vegetation index (NDVI), normalized difference water index (NDWI), reservoir capacity, sedimentation, t-test hypothesis

Procedia PDF Downloads 168
2785 A Microwave Heating Model for Endothermic Reaction in the Cement Industry

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Microwave technology has been gaining importance in contributing to decarbonization processes in high energy demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue concerns impedance matching, which is an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a certain temperature that will prompt a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between both heat transfer phenomena and the limestone endothermic reaction. The 2D model was used to study and evaluate the required numerical procedure, also serving as a benchmark test that allows other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and validation of the coupled model was carried out separately from the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data. A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with a numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error diverges for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity to thermal runaways, and the thermal efficiency showed a tendency to stabilize for the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated an inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies up to 75% were accomplished. The 80% fill ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.

Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing

Procedia PDF Downloads 140
2784 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as the loess is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with regard to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
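
To make the selection procedure concrete, the sketch below runs an AIC-based backward elimination using plain linear least-squares fits as the working model; the paper's version computes AIC from additive or nonparametric fits, which this simplified stand-in does not reproduce.

```python
# Simplified AIC-based backward elimination with OLS working models.
# The data-generating process and covariate count are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * rng.standard_normal(n)  # only x0, x1 matter

active = list(range(p))
while len(active) > 1:
    current_aic = sm.OLS(y, sm.add_constant(X[:, active])).fit().aic
    # AIC of each candidate model that drops one remaining covariate
    drop_aics = [sm.OLS(y, sm.add_constant(X[:, [j for j in active if j != k]])).fit().aic
                 for k in active]
    best = int(np.argmin(drop_aics))
    if drop_aics[best] >= current_aic:   # no single drop improves AIC: stop
        break
    active.pop(best)

print("selected covariates:", active)    # ideally the informative ones, [0, 1]
```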

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion

Procedia PDF Downloads 265
2783 Modelling a Distribution Network with a Hybrid Solar-Hydro Power Plant in Rural Cameroon

Authors: Contimi Kenfack Mouafo, Sebastian Klick

Abstract:

In the rural and remote areas of Cameroon, access to electricity is very limited, since most of the population is not connected to the main utility grid. Throughout the country, efforts are underway not only to expand the utility grid to these regions but also to provide reliable off-grid access to electricity. The Cameroonian company Solahydrowatt is currently working on the design and planning of one of the first hybrid solar-hydropower plants of Cameroon in Fotetsa, in the western region of the country, to provide the population with reliable access to electricity. This paper models and proposes a design for the low-voltage network with a hybrid solar-hydropower plant in Fotetsa. The modelling takes into consideration the voltage compliance of the distribution network, the maximum load of the operating equipment, and, most importantly, the ability of the network to operate as an off-grid system. The resulting modelled distribution network not only complies with the Cameroonian voltage deviation standard but is also capable of being operated as a stand-alone network independent of the main utility grid.

Keywords: Cameroon, rural electrification, hybrid solar-hydro, off-grid electricity supply, network simulation

Procedia PDF Downloads 124
2782 Combined Proteomic and Metabolomic Analysis Approaches to Investigate the Modification in the Proteome and Metabolome of in vitro Models Treated with Gold Nanoparticles (AuNPs)

Authors: H. Chassaigne, S. Gioria, J. Lobo Vicente, D. Carpi, P. Barboro, G. Tomasi, A. Kinsner-Ovaskainen, F. Rossi

Abstract:

Emerging approaches in the area of exposure to nanomaterials and assessment of human health effects combine the use of in vitro systems and analytical techniques to study the perturbation of the proteome and/or the metabolome. We investigated the modifications in the cytoplasmic compartment of the Balb/3T3 cell line exposed to gold nanoparticles. On the one hand, the proteomic approach is quite standardized, even if it requires precautions when dealing with in vitro systems. On the other hand, metabolomic analysis is challenging due to the chemical diversity of cellular metabolites, which complicates data elaboration and interpretation. Differentially expressed proteins were found to cover a range of functions including stress response, cell metabolism, cell growth, and cytoskeleton organization. In addition, de-regulated metabolites were annotated using the HMDB database. The "omics" fields hold huge promise for studying the interaction of nanoparticles with biological systems. The combination of proteomics and metabolomics data is possible, however challenging.

Keywords: data processing, gold nanoparticles, in vitro systems, metabolomics, proteomics

Procedia PDF Downloads 503
2781 Hemostasis Poly Vinyl Alcohol Gauze Coated with Chitosan Encapsulated with Polymer and Drug

Authors: Abhishekkumar Ramasamy, Parameshwari

Abstract:

Chitosan is the deacetylated derivative of chitin, the second most abundant biopolymer just after cellulose. Without doubt, its biomedical uses have gained the most importance among the vast variety of chitosan applications, owing to its good biocompatibility and biodegradability. In recent years, particular interest has been devoted to chitosan hydrogels as a promising alternative in competition with conventional sutures or bioadhesives. Different parameters, such as acid type and concentration and the degree of deacetylation (DD%) of chitosan, were altered to modify hydrogel properties including viscosity, pH, cohesive strength, and tissue bioadhesiveness. In the current work, we have investigated the effectiveness of chitosan hydrogel encapsulated with tranexamic acid to stop bleeding. Chitosan film was obtained by solubilization of chitosan powder in aqueous acidic media. In vivo experiments have been conducted on rat and rabbit models, which provide a convenient way to evaluate the efficacy of the prepared samples. The artery on the hind limb of the rat was punctured, and the gauze was applied to the punctured area. Bioadhesive strength as well as irritant effects were discussed. Samples with a higher degree of deacetylation, including Chs-16 and Chs-19, which were dissolved in lactic acid media, showed the best sealing effect.

Keywords: chitosan, biocompatibility, biodegradability, bioadhesive, deacetylation

Procedia PDF Downloads 349
2780 Evaluation of Geomechanical and Geometrical Parameters’ Effects on Hydro-Mechanical Estimation of Water Inflow into Underground Excavations

Authors: M. Mazraehli, F. Mehrabani, S. Zare

Abstract:

In general, mechanical and hydraulic processes are not independent of each other in jointed rock masses. Therefore, the study of the hydro-mechanical coupling of geomaterials should be a center of attention in rock mechanics. Rocks by their nature contain discontinuities whose presence strongly influences the mechanical and hydraulic characteristics of the medium. Given this effect, experimental investigations on intact rock cannot help to identify jointed rock mass behavior. Hence, numerical methods are being used for this purpose. In this paper, water inflow into a tunnel under a significant water table has been estimated using the hydro-mechanical discrete element method (HM-DEM). Besides, the effects of geomechanical and geometrical parameters, including the constitutive model, friction angle, joint spacing, dip of joint sets, and stress factor, on the estimated inflow rate have been studied. Results demonstrate that inflow rates are not identical for different constitutive models. Also, the inflow rate decreases with increased joint spacing and stress factor.

Keywords: distinct element method, fluid flow, hydro-mechanical coupling, jointed rock mass, underground excavations

Procedia PDF Downloads 166
2779 The Comparison of Backward and Forward Running Program on Balance Development and Plantar Flexion Force in Pre Seniors: Healthy Approach

Authors: Neda Dekamei, Mostafa Sarabzadeh, Masoumeh Bigdeli

Abstract:

Backward running is commonly used for different sports conditioning, motor learning, and neurological purposes, and even more commonly in physical rehabilitation. The present study evaluated the effects of six weeks of backward and forward running methods on balance promotion adaptation in pre-seniors. Twelve male and female pre-seniors in the age range of 45-60 years participated and were randomly classified into two groups of backward running (n: 6) and forward running (n: 6) training interventions. For six weeks, 3 sessions per week, all subjects underwent the stated models of backward or forward running training on a treadmill (65-80% of HR max). Pre- and post-tests were performed with a force plate and electromyography before and after the intervention. Data were analyzed using the t-test. On the basis of the obtained data, significant differences were recorded in balance and plantar flexion force for backward running (BR), and no difference was found for forward running (FR). It seems the backward running training model can generate a greater stimulus to achieve better plantar flexion force and strengthen the ankle protectors, which leads to balance improvement in the pre-aging period. It can be recommended as an effective method to promote seniors' quality of life, especially in neuromuscular balance parameters.

Keywords: backward running, balance, plantar flexion, pre seniors

Procedia PDF Downloads 165
2778 Identifying Mitigation Plans in Reducing Usability Risk Using Delphi Method

Authors: Jayaletchumi T. Sambantha Moorthy, Suhaimi bin Ibrahim, Mohd Naz’ri Mahrin

Abstract:

Most quality models have defined usability as a significant factor that leads to improving product acceptability, increasing user satisfaction, improving product reliability, and also financially benefiting companies. Usability is also the factor that best balances the technical and human aspects of a software product, which is an important aspect in defining quality during the software development process. A usability risk can be defined as a potential usability risk factor whereby a chosen action or activity may lead to a possible loss or an undesirable outcome. This could impact the usability of a software product, thereby contributing to negative user experiences and causing possible software product failure. Hence, it is important to mitigate and reduce usability risks in the software development process itself. By managing the usability risks involved in the software development process, failure of the software product could be reduced. Therefore, this research uses the Delphi method to identify mitigation plans to reduce potential usability risks. The Delphi method is conducted with seven experts from the fields of risk management and software development.

Keywords: usability, usability risk, risk management, risk mitigation, Delphi study

Procedia PDF Downloads 467
2777 Artificial Neural Network Based Approach for Estimation of Individual Vehicle Speed under Mixed Traffic Condition

Authors: Subhadip Biswas, Shivendra Maurya, Satish Chandra, Indrajit Ghosh

Abstract:

Developing a speed model is a challenging task, particularly under mixed traffic conditions, where the traffic composition plays a significant role in determining vehicular speed. The present research has been conducted to model individual vehicular speed in the context of mixed traffic on an urban arterial. Traffic speed and volume data have been collected from three midblock arterial road sections in New Delhi. Using the field data, a volume-based speed prediction model has been developed adopting the Artificial Neural Network (ANN) methodology. The model developed in this work is capable of estimating the speed of each individual vehicle category. Validation results show a great deal of agreement between the observed speeds and the values predicted by the developed model. Also, it has been observed that the ANN-based model performs better than other existing models in terms of accuracy. Finally, a sensitivity analysis has been performed utilizing the model in order to examine the effects of traffic volume and its composition on individual speeds.
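
An illustrative stand-in for such a volume-based ANN is sketched below: a small multilayer perceptron mapping classified traffic volumes to the speed of one vehicle category. The input classes, units, network size, and synthetic data are assumptions, not the paper's configuration.

```python
# Sketch of a volume-based speed model with a small MLP; all data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
# columns: volumes of cars, two-wheelers, buses/trucks (veh per 5 min), assumed classes
volumes = rng.integers(10, 200, size=(300, 3)).astype(float)
car_speed = 55 - 0.05 * volumes.sum(axis=1) + rng.normal(0, 2, 300)   # synthetic target, km/h

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0))
model.fit(volumes, car_speed)
print("predicted car speed (km/h):", model.predict([[120, 60, 15]]).round(1))
```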

Keywords: speed model, artificial neural network, arterial, mixed traffic

Procedia PDF Downloads 388
2776 Effects of Knowledge on Fruit Diets by Integrating Posters and Actual-Sized Fruit Models in Health Education for Elderly Patients with Type 2 Diabetes Mellitus

Authors: Suchada Wongsawat

Abstract:

The objectives of this quasi-experimental study were: 1) to compare the pretest and posttest scores of the experimental group, who were given health education on “Fruit Diets for Elderly Patients with Type 2 Diabetes Mellitus”; and 2) to compare the posttest scores between the experimental group and the control group. The samples of this study were elderly patients with type 2 diabetes mellitus at Tambon Kanai Health Promoting Hospital, Thailand. The samples were randomly assigned to the experimental and control groups, with 30 patients in each group. Statistics used in the data analysis included frequency, percentage, average, standard deviation, the paired t-test, and the independent t-test. The study revealed that the patients in the experimental group had significantly higher posttest scores than pretest scores at the .05 level of significance. The posttest scores of the experimental group were also significantly higher than those of the control group at the .05 level of significance.
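
The two comparisons reported above correspond to a paired t-test (within group) and an independent t-test (between groups), sketched below on made-up score data rather than the study's measurements.

```python
# Paired and independent t-tests on synthetic knowledge scores (30 per group).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
pre  = rng.normal(12, 3, 30)           # experimental group, pretest scores
post = pre + rng.normal(4, 2, 30)      # experimental group, posttest scores
control_post = rng.normal(13, 3, 30)   # control group, posttest scores

t_paired, p_paired = stats.ttest_rel(post, pre)        # within-group change
t_ind, p_ind = stats.ttest_ind(post, control_post)     # between-group comparison
print(f"paired t = {t_paired:.2f} (p = {p_paired:.3f}); "
      f"independent t = {t_ind:.2f} (p = {p_ind:.3f})")
```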

Keywords: fruit, health education, elderly, diabetes

Procedia PDF Downloads 284
2775 Statistical Inferences for the GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: one is the continuous-time jump-diffusion used to model high-frequency data, and the other is the discrete-time GQARCH employed to model low-frequency financial data, obtained by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model.” We adopt the realized range-based threshold estimation for high-frequency financial data rather than the realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theories are mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how our proposed approaches can be practically used on some financial data.
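
A hedged sketch of the underlying realized range idea follows: sum the squared intraday log ranges over fixed blocks and scale by 1/(4 ln 2) to offset the bias of the range of a Brownian increment. The simulated prices and block sizes are placeholders, and the paper's threshold correction for jump-affected intervals is omitted here.

```python
# Daily realized range-based variance from simulated intraday log prices.
# Block layout and price simulation are illustrative; no jump thresholding is applied.
import numpy as np

rng = np.random.default_rng(7)
n_intervals, steps_per_interval = 78, 60          # e.g., 5-minute blocks of finer ticks
log_price = np.cumsum(rng.normal(0, 0.0002, n_intervals * steps_per_interval))
blocks = log_price.reshape(n_intervals, steps_per_interval)

log_range = blocks.max(axis=1) - blocks.min(axis=1)
realized_range_var = np.sum(log_range**2) / (4.0 * np.log(2.0))
print(f"daily realized range variance ~ {realized_range_var:.3e}")
```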

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 158
2774 Geometric Contrast of a 3D Model Obtained by Means of Digital Photogrammetry with a Quasimetric Camera on UAV Classical Methods

Authors: Julio Manuel de Luis Ruiz, Javier Sedano Cibrián, Rubén Pérez Álvarez, Raúl Pereda García, Cristina Diego Soroa

Abstract:

Nowadays, the use of drones has been extended to practically any human activity. One of the main applications is focused on the surveying field. In this regard, software programs that process the images captured by the drone's sensor in an almost automatic way have been developed and commercialized, but they only allow contrasting the results through control points. This work proposes the contrast of a 3D model obtained from a flight carried out with a drone and a non-metric camera (due to its low cost) against a second model that is obtained by means of the historically endorsed classical methods. In addition, the contrast is developed over a territory with significant unevenness, so as to test the model generated with photogrammetry, considering that photogrammetry with drones encounters more difficulties in terms of accuracy in this kind of situation. Distances, heights, surfaces, and volumes are measured on the basis of the 3D models generated, and the results are contrasted. The differences are about 0.2% for the measurement of distances and heights, 0.3% for surfaces, and 0.6% when measuring volumes. Although they are not large, they do not meet the order of magnitude that is claimed by salespeople.

Keywords: accuracy, classical topography, three-dimensional model, photogrammetry, UAV

Procedia PDF Downloads 136
2773 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units

Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani

Abstract:

There are many computationally demanding applications in science and engineering which need efficient algorithms implemented on high-performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new avenues for improvement in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a C++ serial code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computation on the GPU, although the Eulerian formulation shows significant speed-up too.
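
To make the contrast between the two formulations concrete, the sketch below shows (in NumPy, not the paper's CUDA/C++ codes) one Eulerian stencil sweep over a grid and one Lagrangian particle-advection step; on a GPU, each grid cell or particle update would map naturally to one thread. Grid size, particle count, and the velocity field are illustrative.

```python
# Schematic contrast of the two update patterns that parallelize well on GPUs.
import numpy as np

# Eulerian: one Jacobi-style diffusion sweep over a 2D grid (vectorized stencil)
u = np.random.default_rng(8).random((128, 128))
u_new = u.copy()
u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

# Lagrangian: advect particles through a fixed velocity field for one time step
pos = np.random.default_rng(9).random((10_000, 2))
vel = np.stack([-(pos[:, 1] - 0.5), pos[:, 0] - 0.5], axis=1)   # solid-body rotation
pos += 0.01 * vel                                               # explicit Euler step

print("max grid change:", float(np.abs(u_new - u).max()), "| particles advected:", len(pos))
```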

Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation

Procedia PDF Downloads 418