Search results for: nonlinear PDEs
69 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System
Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu
Abstract:
Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within specified static and dynamic voltage windows and temperature ranges, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never be a prognostic model that predicts battery state-of-health and averts safety risks before they occur. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of higher computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the calculation to use the average current density. The SPM is appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here.
The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model
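As a toy illustration of why multiple particles matter, the sketch below splits a total applied current among a handful of representative particles using linearized Butler-Volmer kinetics with a common overpotential. The particle count, exchange current densities, and areas are assumed values for illustration only, not parameters from the paper.

```python
import numpy as np

# Assumed, illustrative parameters -- not taken from the paper.
F, R, T = 96485.0, 8.314, 298.15          # Faraday constant, gas constant, temperature (K)
i0 = np.array([2.0, 1.8, 1.5, 1.2, 1.0])  # exchange current densities, A/m^2
area = np.ones_like(i0)                   # active area per particle (equal here)

def distribute_current(i_total):
    """Split a total applied current among representative particles using
    linearized Butler-Volmer kinetics: i_k = g_k * eta with a common
    overpotential eta, so each particle carries a share proportional to g_k."""
    g = i0 * area * F / (R * T)           # linearized kinetic conductances
    eta = i_total / g.sum()               # overpotential satisfying the current balance
    return g * eta                        # per-particle currents, summing to i_total

currents = distribute_current(10.0)
```

Under a nonlinear kinetic law, the closed-form share above would be replaced by a root-finding step, but the same current balance applies.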
Procedia PDF Downloads 111
68 Ho-Doped Lithium Niobate Thin Films: Raman Spectroscopy, Structure and Luminescence
Authors: Edvard Kokanyan, Narine Babajanyan, Ninel Kokanyan, Marco Bazzan
Abstract:
Lithium niobate (LN) crystals, renowned for their exceptional nonlinear optical, electro-optical, piezoelectric, and photorefractive properties, stand as foundational materials in diverse fields of study and application. While they have long been utilized in frequency converters of laser radiation, electro-optical modulators, and holographic information recording media, LN crystals doped with rare earth ions represent a compelling frontier for modern compact devices. These materials exhibit immense potential as key components in infrared lasers, optical sensors, self-cooling systems, and radiation balanced laser setups. In this study, we present the successful synthesis of Ho-doped lithium niobate (LN:Ho) thin films on sapphire substrates employing the Sol-Gel technique. The films exhibit a strong crystallographic orientation along the direction perpendicular to the substrate surface, with X-ray diffraction analysis confirming the predominant alignment of the film's "c" axis, notably evidenced by the intense (006) reflection peak. Further characterization through Raman spectroscopy, employing a confocal Raman microscope (LabRAM HR Evolution) with excitation wavelengths of 532 nm and 785 nm, unraveled intriguing insights. Under excitation with the 785 nm laser, Raman scattering obeyed the selection rules, while the 532 nm laser unveiled additional forbidden lines reminiscent of behaviors observed in bulk LN:Ho crystals. These supplementary lines were attributed to luminescence induced by excitation at 532 nm. Leveraging data from the anti-Stokes Raman lines facilitated the disentanglement of the luminescence spectra from the investigated samples. Surface scanning affirmed the uniformity of both structure and luminescence across the thin films.
Notably, despite the robust orientation of the "c" axis perpendicular to the substrate surface, Raman signals indicated a stochastic distribution of "a" and "b" axes, validating the mosaic structure of the films along the mentioned axis. This study offers valuable insights into the structural properties of Ho-doped lithium niobate thin films, with the observed luminescence behavior holding significant promise for potential applications in optoelectronic devices.
Keywords: lithium niobate, Sol-Gel, luminescence, Raman spectroscopy
Procedia PDF Downloads 60
67 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics can be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to optimally combine the observation data in order to obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to solve for different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form and, on the other hand, the strong stability it achieves for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil
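The analysis step of a stochastic (perturbed-observation) Ensemble Kalman Filter, the building block of the second stage above, can be sketched as follows. The three-node bed, the observation operator, and the noise levels are illustrative assumptions; the paper's adaptive-control variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, H, obs_std):
    """One stochastic EnKF analysis step. 'ensemble' is (n_members, n_state),
    H maps state to observation space, obs_std is the observation error std."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = X @ H.T                                           # observation-space anomalies
    Pyy = Y.T @ Y / (n - 1) + obs_std**2 * np.eye(H.shape[0])
    Pxy = X.T @ Y / (n - 1)
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    perturbed = obs + obs_std * rng.standard_normal((n, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Toy example: estimate a 3-node bed elevation from a noisy observation of node 1.
truth = np.array([1.0, 2.0, 3.0])
H = np.array([[1.0, 0.0, 0.0]])                           # observe only node 1
ens = truth + rng.standard_normal((50, 3))                # prior ensemble
analysis = enkf_update(ens, truth[:1], H, obs_std=0.1)
```

In the topography problem, the state vector would hold the bed elevations and the observation operator would come from the shallow-water forward solve.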
Procedia PDF Downloads 130
66 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review
Authors: Anicet Dansou
Abstract:
Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative to the traditional Fiber Reinforced Polymer (FRP) composite. It has good mechanical performance and better temperature stability, and it also better meets the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as one of the most suitable methods for their simulation. They aim to incorporate information pertaining to microscale constitutive behavior, mesoscale behavior, and macro-scale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to a standard simulation at the fine scale. The fine-scale information can be introduced implicitly in the macro-scale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. In classical approaches, on the other hand, the fine-scale information is introduced explicitly in the macro-scale model. Such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches.
Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.
Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete
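As a minimal example of the analytical homogenization family mentioned above, the Voigt and Reuss rules of mixtures bound the effective stiffness of a two-phase composite. The moduli and volume fraction below are assumed values, not data from the review.

```python
def voigt_reuss(E_fiber, E_matrix, vf):
    """Analytical homogenization bounds (rules of mixtures) for the effective
    Young's modulus of a two-phase composite. Moduli in GPa; vf is the fiber
    (textile) volume fraction. Values used below are illustrative only."""
    E_voigt = vf * E_fiber + (1 - vf) * E_matrix          # iso-strain upper bound
    E_reuss = 1.0 / (vf / E_fiber + (1 - vf) / E_matrix)  # iso-stress lower bound
    return E_voigt, E_reuss

# Assumed glass-textile / mortar values: 72 GPa fibers, 30 GPa matrix, 5% textile.
upper, lower = voigt_reuss(E_fiber=72.0, E_matrix=30.0, vf=0.05)
```

Full mean-field or computational homogenization refines these bounds by accounting for reinforcement geometry, which is what the reviewed RVE-based approaches do.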
Procedia PDF Downloads 108
65 Analytical and Numerical Studies on the Behavior of a Freezing Soil Layer
Authors: X. Li, Y. Liu, H. Wong, B. Pardoen, A. Fabbri, F. McGregor, E. Liu
Abstract:
The target of this paper is to investigate how saturated poroelastic soils behave when subjected to freezing temperatures and how different boundary conditions intervene in and affect the thermo-hydro-mechanical (THM) responses, based on a particular but classical configuration of a finite homogeneous soil layer studied by Terzaghi. The essential relations governing the constitutive behavior of a freezing soil are first recalled: the ice crystal - liquid water thermodynamic equilibrium, the hydromechanical constitutive equations, momentum balance, water mass balance, and the thermal diffusion equation, in the general non-linear case where material parameters are state-dependent. The system of equations is first linearized, assuming all material parameters to be constants, in particular the permeability to liquid water, which in reality depends on the ice content. Two analytical solutions are then developed using the classic Laplace transform, accounting for two different sets of boundary conditions. Afterward, the general non-linear equations with state-dependent parameters are solved using the commercial finite element code COMSOL to obtain numerical results. The validity of this numerical modeling is partially verified using the analytical solution in the limiting case of state-independent parameters. Comparison between the results given by the linearized analytical solutions and the non-linear numerical model reveals that the above-mentioned linear computation will always underestimate the liquid pore pressure and displacement, whatever the hydraulic boundary conditions are.
In the nonlinear model, the faster growth of ice crystals, accompanied by the subsequent reduction of the permeability of the freezing soil layer, leads to a longer depressurization of the liquid water and slower settlement in the case where the ground surface is swiftly covered by a thin layer of ice, as well as a higher overall liquid pressure and greater swelling in the case of an impermeable ground surface. Nonetheless, the analytical solutions based on the linearized equations give a correct order-of-magnitude estimate, especially at moderate temperature variations, and remain a useful tool for preliminary design checks.
Keywords: chemical potential, cryosuction, Laplace transform, multiphysics coupling, phase transformation, thermodynamic equilibrium
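The linearized limit of the thermal part of the problem can be sketched with an explicit finite-difference solver for 1D heat diffusion in the soil layer, with a frozen surface and an insulated base. This ignores all hydro-mechanical coupling and phase change, and all parameters are illustrative, not the paper's.

```python
import numpy as np

def diffuse(T0, alpha, dz, dt, steps, T_surface):
    """Explicit finite differences for the linearized 1D heat equation
    dT/dt = alpha * d2T/dz2 over a soil layer: Dirichlet (frozen) surface
    at node 0, zero-flux base at the last node. Sketch only."""
    T = T0.copy()
    r = alpha * dt / dz**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    for _ in range(steps):
        Tn = T.copy()
        T[0] = T_surface                                   # frozen surface
        T[1:-1] = Tn[1:-1] + r * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        T[-1] = T[-2]                                      # insulated base
    return T

# 1 m layer, 21 nodes, initial 5 C, surface held at -5 C, ~3.5 days of cooling.
T = diffuse(np.full(21, 5.0), alpha=1e-6, dz=0.05, dt=600.0, steps=500, T_surface=-5.0)
```

In the full THM problem this equation is coupled to the water mass balance through the state-dependent permeability, which is what the Laplace-transform solutions linearize away.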
Procedia PDF Downloads 80
64 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model represents the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This leads to a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the times at which data are available to be the same for all the different individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models
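The Metropolis step for a single individual parameter, as used inside a hybrid Gibbs-Metropolis sampler, can be sketched as follows. A normal observation model stands in for the nonlinear growth model, and all numbers (population mean and sd, observation sd, step size) are illustrative assumptions.

```python
import math
import random

random.seed(1)

def log_post(theta, y, pop_mean, pop_sd, obs_sd):
    """Log posterior of one individual parameter: normal population prior
    times a normal observation model. In the real sampler, 'theta' would
    feed the nonlinear growth model to produce the predicted leaf areas."""
    lp = -0.5 * ((theta - pop_mean) / pop_sd) ** 2
    lp += -0.5 * ((y - theta) / obs_sd) ** 2
    return lp

def metropolis_step(theta, y, pop_mean, pop_sd, obs_sd, step=0.5):
    """One random-walk Metropolis step for the non-conjugate individual level."""
    prop = theta + random.gauss(0.0, step)
    accept = (log_post(prop, y, pop_mean, pop_sd, obs_sd)
              - log_post(theta, y, pop_mean, pop_sd, obs_sd))
    if math.log(random.random()) < accept:
        return prop
    return theta

# Chain for one individual's parameter given a single observation y = 2.0.
chain = [0.0]
for _ in range(5000):
    chain.append(metropolis_step(chain[-1], y=2.0, pop_mean=0.0, pop_sd=1.0, obs_sd=1.0))
```

With this toy conjugate setup the posterior is exactly N(1, 0.5), which makes the step easy to validate; the Gibbs half of the sampler would then update pop_mean and pop_sd from their analytical full conditionals.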
Procedia PDF Downloads 303
63 Finite Element Molecular Modeling: A Structural Method for Large Deformations
Authors: A. Rezaei, M. Huisman, W. Van Paepegem
Abstract:
Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics. For ease of reference, the approach is here called Structural Finite Element Molecular Modeling (SFEMM). The SFEMM method improves on the available structural approaches for large deformations, without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, this method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond-length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond-stretch, bond-angle, and bond-torsion of carbon atoms.
Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds up a framework that offers more flexible features than the conventional molecular finite element models, serving structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, and thereby towards concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
Keywords: finite element, large deformation, molecular mechanics, structural method
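The kind of constitutive law placed in a truss-element material model can be sketched with a harmonic bond-stretch potential. The stiffness and equilibrium length below are indicative carbon-carbon values under assumed units (nm, nN), not the paper's calibrated parameters, and the real implementation lives in Fortran subroutines rather than Python.

```python
def bond_stretch_energy(r, r0=0.142, k=652.0):
    """Harmonic bond-stretch potential E = 0.5 * k * (r - r0)^2, the type of
    constitutive law a truss element can carry. r, r0 in nm (assumed C-C
    equilibrium length); k is an assumed stiffness in nN/nm."""
    return 0.5 * k * (r - r0) ** 2

def bond_force(r, r0=0.142, k=652.0):
    """Restoring force dE/dr along the truss axis."""
    return k * (r - r0)
```

Bond-angle and bond-torsion terms follow the same pattern but couple several nodes, which is where the nonlinear multipoint constraints come in.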
Procedia PDF Downloads 152
62 Rheological Study of Chitosan/Montmorillonite Nanocomposites: The Effect of Chemical Crosslinking
Authors: K. Khouzami, J. Brassinne, C. Branca, E. Van Ruymbeke, B. Nysten, G. D’Angelo
Abstract:
The development of hybrid organic-inorganic nanocomposites has recently attracted great interest. Typically, polymer silicates represent an emerging class of polymeric nanocomposites that offer superior material properties compared to each compound alone. Among these materials, complexes based on silicate clay and polysaccharides are some of the most promising nanocomposites. The strong electrostatic interaction between chitosan and montmorillonite can induce a so-called physical hydrogel, in which the coordination bonds or physical crosslinks may associate and dissociate reversibly and in a short time. These mechanisms could be the main origin of the uniqueness of their rheological behavior. However, owing to their intrinsically heterogeneous structure and/or the lack of dissipated energy, they are usually brittle, possess poor toughness, and may not have sufficient mechanical strength. Consequently, the properties of these nanocomposites cannot meet the requirements of many applications in several fields. To address the issue of weak mechanical properties, covalent chemical crosslinks can be introduced into the physical hydrogel. In this way, quite homogeneous dually crosslinked microstructures with high dissipated energy and enhanced mechanical strength can be engineered. In this work, we have prepared a series of chitosan-montmorillonite nanocomposites chemically crosslinked by the addition of poly(ethylene glycol) diglycidyl ether. This study aims to provide a better understanding of the mechanical behavior of dually crosslinked chitosan-based nanocomposites by relating it to their microstructures. In these systems, the variety of microstructures is obtained by modifying the number of crosslinks. Subsequently, distinctive rheological properties of the chemically crosslinked chitosan-montmorillonite nanocomposites are achieved, especially at the highest percentage of clay.
Their rheological behavior depends on the clay/chitosan ratio and the crosslinking. All specimens exhibit a viscous rheological behavior over the frequency range investigated. The flow curves of the nanocomposites show a Newtonian plateau at very low shear rates accompanied by a quite complicated nonlinear decrease with increasing shear rate. Crosslinking induces a shear-thinning behavior revealing the formation of network-like structures. Fitting the shear viscosity curves via the Ostwald-de Waele equation disclosed that crosslinking and clay addition strongly affect the pseudoplasticity of the nanocomposites for shear rates γ̇ > 20.
Keywords: chitosan, crosslinking, nanocomposites, rheological properties
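A power-law (Ostwald-de Waele) fit of a shear-viscosity curve, of the kind used above, reduces to a linear fit in log-log space. The data below are synthetic, generated from assumed K and n values rather than measured on these nanocomposites.

```python
import numpy as np

# Synthetic shear-thinning data (illustrative, not the paper's measurements):
# eta = K * gamma_dot**(n - 1) with K = 5.0 Pa.s^n and n = 0.6.
gamma_dot = np.array([25.0, 50.0, 100.0, 200.0, 400.0])   # shear rates, 1/s
eta = 5.0 * gamma_dot ** (0.6 - 1.0)                      # apparent viscosity

# Ostwald-de Waele fit: log(eta) = log(K) + (n - 1) * log(gamma_dot)
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n_fit = slope + 1.0        # flow behavior index (n < 1 means shear thinning)
K_fit = np.exp(intercept)  # consistency index
```

A smaller fitted n indicates stronger pseudoplasticity, which is the quantity the crosslinking and clay content are reported to affect.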
Procedia PDF Downloads 147
61 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures in the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm, and 350 mm) are designed by following two different bridge design code specifications, namely, Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. A nonlinear time history analysis is performed. Artificial ground motion sets, with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a higher vulnerability than the more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states; they then show a higher fragility compared to the other curves at the larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis
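A component fragility curve of the usual lognormal form can be sketched as follows; the median capacity and log-standard deviation are placeholder values, not results from this study.

```python
import math

def fragility(pga, median, beta):
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state at a given PGA (in g). 'median' is the median PGA capacity
    and 'beta' the lognormal standard deviation -- both illustrative here."""
    z = (math.log(pga) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Exceedance probability at the median capacity is 0.5 by construction.
p = fragility(pga=0.5, median=0.5, beta=0.6)
```

In the study, the medians and betas would be fitted from the demand-to-capacity ratios produced by the nonlinear time-history runs, and the system curve combined from such component curves.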
Procedia PDF Downloads 435
60 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, the existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized to reduce training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels.
The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects, imbalance, and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
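One common FRSVM-style construction of the kernel-based membership function, from the distance of a sample to its class center in feature space, is sketched below. A Gaussian RBF kernel is substituted for the paper's hyperbolic tangent kernel to keep the sketch self-contained and positive definite, and the radius-relative clipping rule is an assumption rather than the paper's exact definition.

```python
import math
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian RBF kernel (stand-in for the paper's tanh kernel)."""
    return math.exp(-gamma * float(np.sum((x - y) ** 2)))

def center_dist2(x, samples, gamma):
    """Squared distance from x to the class mean in kernel feature space,
    expanded entirely in terms of kernel evaluations."""
    n = len(samples)
    d2 = rbf(x, x, gamma) - 2.0 / n * sum(rbf(x, s, gamma) for s in samples)
    d2 += sum(rbf(s, t, gamma) for s in samples for t in samples) / n ** 2
    return max(d2, 0.0)

def membership(x, samples, gamma=0.5, delta=1e-3):
    """Fuzzy membership of x to a class: 1 at the center, decaying with the
    kernel distance relative to the class radius, floored at delta."""
    r2 = max(center_dist2(s, samples, gamma) for s in samples)
    d2 = center_dist2(x, samples, gamma)
    return max(delta, 1.0 - math.sqrt(d2) / (math.sqrt(r2) + delta))

cls = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (0, 1), (1, 1)]]
m_in = membership(np.array([0.5, 0.5]), cls)   # near the class center
m_out = membership(np.array([5.0, 5.0]), cls)  # far outlier
```

These memberships then weight each sample's slack penalty in the SVM objective, which is how noisy samples and outliers get down-weighted.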
Procedia PDF Downloads 490
59 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application to classify gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Feature extraction by a neural network is nonlinear, involving a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show the ability to improve the classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectrums of different isotopes.
Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
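The spectrogram preprocessing step, a windowed short-time Fourier transform, can be sketched as follows. A Hann window is assumed here, since the abstract states only that the windowing function was optimized; the test signal is a synthetic tone, not a simulated isotope spectrum.

```python
import numpy as np

def spectrogram(signal, nperseg=128, noverlap=64):
    """Magnitude spectrogram via a windowed short-time Fourier transform.
    Returns an array of shape (freq_bins, time_frames). Hann window is an
    assumed choice, not necessarily the one optimized in the paper."""
    window = np.hanning(nperseg)
    hop = nperseg - noverlap
    frames = []
    for start in range(0, len(signal) - nperseg + 1, hop):
        seg = signal[start:start + nperseg] * window  # taper to reduce leakage
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames).T

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 100.0 * t)   # 100 Hz tone standing in for a detector trace
S = spectrogram(x)
```

The resulting time-frequency image is what gets fed to the convolutional network in place of the raw 1D spectrum.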
Procedia PDF Downloads 150
58 Reading Strategies of Generation X and Y: A Survey on Learners' Skills and Preferences
Authors: Kateriina Rannula, Elle Sõrmus, Siret Piirsalu
Abstract:
The mixed-generation classroom is a phenomenon that current higher education establishments are faced with daily while trying to meet the needs of a modern labor market with its emphasis on lifelong learning and retraining. Having representatives of mainly generations X and Y acquire higher education in one classroom is a challenge to lecturers, considering all the characteristics that differ from one generation to another. The importance of outlining different strategies and considering the needs of the students lies in the necessity for everyone to acquire the maximum of the provided knowledge, as well as to understand each other in order to study together in one classroom and successfully cooperate in future workplaces. In addition to different generations, there are also learners with different native languages, which has an impact on reading and understanding texts in third languages, including possible translation. The current research aims to investigate, describe, and compare reading strategies among the representatives of generations X and Y. Hypotheses were formulated: representatives of generations X and Y use different reading strategies, which also differ between first- and third-year students of the aforementioned generations. The current study is an empirical, qualitative study. To achieve the aim of the research, relevant literature was analyzed and a semi-structured questionnaire conducted among the first- and third-year students of Tallinn Health Care College. The questionnaire consisted of 25 statements on text reading strategies, 3 multiple-choice questions on preferences considering the design and medium of the text, and three open questions on the translation process when working with a text in the student's third language. The results of the questionnaire were categorized, analyzed, and compared. Both generation X and generation Y described their reading strategies as 'scanning' and 'surfing'.
Compared to generation X, first-year generation Y learners valued interactivity and nonlinear texts. Students frequently used the strategies of skimming, scanning, translating, and highlighting together with relevant-thinking and assistance-seeking. Meanwhile, the third-year generation Y students no longer frequently used translating, resourcing, and highlighting, while generation X learners still incorporated these strategies. Knowing about the different needs of the generations currently inside the classrooms and on the labor market equips us with tools to provide sustainable education and grants society a workforce that is more flexible and able to move between professions. Future research should be conducted in order to investigate the amount of learning and strategy adoption between generations. As for reading, the main suggestions arising from the research are as follows: make a variety of materials available to students; allow them to select what they want to read; and try to make those materials visually attractive, relevant, and appropriately challenging for learners, considering the differences between generations.
Keywords: generation X, generation Y, learning strategies, reading strategies
Procedia PDF Downloads 180
57 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the predictive accuracy of breast cancer progression using an original mathematical model referred to as CoMPaS and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for the secondary distant metastases; 3) the 'visible period' for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others rely on additional statistical data.
The CoMPaS model and predictive software: a) fit clinical trial data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than other tools; e) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. CoMPaS calculates the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (in days) for those periods. CoMPaS enables, for the first time, prediction of the 'whole natural history' of the growth of the primary tumor and the secondary distant metastases at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor size. Summarizing: a) CoMPaS correctly describes the primary tumor growth of stages IA, IIA, IIB, IIIB (T1-4N0M0) without metastases in lymph nodes (N0); b) it facilitates understanding of the period and inception of the appearance of the secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
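The exponential-growth bookkeeping described above (number of doublings, tumor volume doubling time) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' CoMPaS code; the function name and the spherical-tumor assumption (volume proportional to diameter cubed) are ours:

```python
import math

def doublings_and_doubling_time(d1_mm, d2_mm, elapsed_days):
    """Number of volume doublings between two diameter measurements and the
    implied tumor volume doubling time, assuming exponential growth and
    volume proportional to diameter**3 (spherical tumor)."""
    n = 3.0 * math.log2(d2_mm / d1_mm)   # V2/V1 = (d2/d1)**3
    return n, elapsed_days / n

# A tumor growing from 10 mm to 20 mm in diameter doubles its volume 3 times.
n_doublings, dt_days = doublings_and_doubling_time(10.0, 20.0, 300.0)
```

Under these assumptions, 300 days for three doublings gives a doubling time of 100 days.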
Procedia PDF Downloads 341
56 Investigation of Rehabilitation Effects on Fire Damaged High Strength Concrete Beams
Authors: Eun Mi Ryu, Ah Young An, Ji Yeon Kang, Yeong Soo Shin, Hee Sun Kim
Abstract:
As the number of fire incidents has increased, the damage they cause to the economy and human lives has grown significantly. In particular, when high strength reinforced concrete is exposed to the high temperatures of a fire, deterioration occurs in the form of loss of strength and elastic modulus, cracking, and spalling of the concrete. It is therefore important to understand the structural safety risk in buildings by studying the structural behavior and rehabilitation of fire damaged high strength concrete structures. This paper investigates the rehabilitation effect on fire damaged high strength concrete beams using experimental and analytical methods. In the experiments, flexural specimens with high strength concrete are exposed to high temperatures according to the ISO 834 standard time-temperature curve. After heating, the fire damaged reinforced concrete (RC) beams, having different cover thicknesses and fire exposure times, are rehabilitated by removing the damaged part of the cover thickness and filling the removed part with polymeric mortar. Four-point loading tests show that the maximum loads of the rehabilitated RC beams are 1.8~20.9% higher than those of the non-fire-damaged RC beam. On the other hand, the ductility ratios of the rehabilitated RC beams are lower than that of the non-fire-damaged RC beam. In addition, structural analyses are performed using ABAQUS 6.10-3 under the same conditions as the experiments to provide accurate predictions of the structural and mechanical behavior of rehabilitated RC beams. For the rehabilitated RC beam models, integrated temperature-structural analyses are performed first to obtain the geometries of the fire damaged RC beams. After the spalled and damaged parts are removed, the rehabilitated part is added to the damaged model with the material properties of polymeric mortar. Three-dimensional continuum brick elements are used for both the temperature and structural analyses.
The same loading and boundary conditions as in the experiments are applied to the rehabilitated beam models, and nonlinear geometric analyses are performed. The analytical results show good rehabilitation effects when the response predicted by the rehabilitated models is compared to the structural behavior of the non-damaged RC beams. In this study, fire damaged high strength concrete beams are rehabilitated using polymeric mortar. The four-point loading tests show that such rehabilitation can make the structural performance of fire damaged beams similar to that of non-damaged RC beams. The predictions from the finite element models show good agreement with the experimental results, and the modeling approach can be used to investigate the applicability of various rehabilitation methods in further studies.
Keywords: fire, high strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 445
55 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with an MPPT (Maximum Power Point Tracking) technique. An efficient grid synchronization technique offers reliable detection of the components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to keep the system compliant with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid connected SPV systems. Because the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature, and wind, MPPT control is required to track the maximum power point of the PV array and maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter is used in the first stage of the system. The algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1, and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, maintaining the amplitude, phase, and frequency parameters as well as improving power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed, and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB.
The performance of the three-phase grid connected solar photovoltaic system has been analyzed on the basis of various parameters, such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
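The variable-step P&O logic described above can be sketched as a single iteration rule. The slope threshold and the fine/coarse scaling factors below are illustrative stand-ins, not the paper's tuned three-zone values:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step):
    """One variable-step P&O iteration: returns the next reference voltage.
    The |dP/dV| threshold and the scaling factors are illustrative
    stand-ins for the paper's three-zone scheme, not its tuned values."""
    dp, dv = p - p_prev, v - v_prev
    if dv == 0.0:
        return v + step            # no observable perturbation yet; keep moving
    slope = dp / dv
    # Fine step near the MPP (flat slope), full step far from it (steep slope).
    scaled = step * (0.1 if abs(slope) < 1.0 else 1.0)
    # Move in the direction that increases power.
    return v + scaled if slope > 0 else v - scaled
```

Far from the maximum power point the slope of the P-V curve is steep and the full step is taken; near the peak the slope flattens and the fine step reduces oscillation around the operating point.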
Procedia PDF Downloads 594
54 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator
Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase
Abstract:
In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. It is therefore reasonable to investigate non-destructive mail inspection techniques that use THz waves. For this reason, in this work, we tried to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system using the is-TPG consisted of two nonlinear crystals for the emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external cavity diode laser were used as the pump and seed sources, respectively. The pump and seed beams were injected into a LiNbO₃ crystal satisfying the noncollinear phase-matching condition in order to generate a high-power THz wave. The emitted THz wave was irradiated onto the sample, which was raster scanned by an x-z stage while the frequency was changed, yielding multispectral images. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam through the nonlinear optical parametric effect, and the detection beam intensity was measured using an infrared pyroelectric detector. Identifying reagents in a cardboard box was initially difficult because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification, so that the intensity of the near-infrared detection beam is converted correctly to the intensity of the THz wave. A Gaussian spatial filter is also introduced for a clearer THz image.
Through these improvements in the analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides, and it may also be applied to cases where illicit drugs are hidden in the box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging can be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity.
Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging
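A Gaussian spatial filter of the kind mentioned above can be implemented as a separable blur over the raster-scanned image. This is a generic sketch (the kernel radius and reflective padding are our choices), not the authors' processing chain:

```python
import numpy as np

def gaussian_spatial_filter(img, sigma=1.0):
    """Separable Gaussian blur for a 2D raster-scanned image.
    Kernel radius = 3*sigma; reflective padding at the borders."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()                              # normalize so flat regions are preserved
    pad = np.pad(img, r, mode="reflect")
    # Convolve rows, then columns (separability of the Gaussian kernel).
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, rows)
```

Because the kernel is normalized, uniform regions pass through unchanged while pixel-level noise is averaged out.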
Procedia PDF Downloads 177
53 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice for performing real-time orbit determination without the need to add sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models can, in fact, grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimations, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the model can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden.
Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune its parameters on the actual orbit onboard in real time, and it works autonomously during GPS outages. In this way, the module is versatile, as it can be applied to any mission operating in SSO, while the training, and eventually the fine-tuning, is completed on the specific orbit, increasing performance and reliability. The results of this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
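A single predict/update cycle of the EKF backbone described above looks like the following generic sketch. The dynamics f, measurement model h, and their Jacobians F, H are placeholders, not the authors' orbital and IGRF models:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic EKF cycle (predict + update). f/h are the dynamics and
    measurement models, F/H their Jacobians at the current estimate,
    Q/R the process and measurement noise covariances."""
    # Predict: propagate the state and covariance through the dynamics.
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q
    # Update: correct with the measurement z (e.g., a magnetometer reading).
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R            # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```

In the hybrid scheme described in the abstract, the RNN would then refine `x_new` toward the true state rather than replace the filter.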
Procedia PDF Downloads 105
52 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence
Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang
Abstract:
Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including the vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics
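The core operation that FNO-based models share, including the IU-FNO, is a spectral convolution: transform to Fourier space, truncate to the lowest modes, multiply by learned weights, and transform back. A minimal 1D NumPy sketch (the real model operates in 3D with learned weights and adds the implicit recurrence and U-Net branch):

```python
import numpy as np

def fourier_layer_1d(u, weights, k_max):
    """Spectral convolution on a 1D field u: FFT, keep the lowest k_max
    modes, multiply by (learned) complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:k_max] = weights[:k_max] * u_hat[:k_max]   # spectral truncation
    return np.fft.irfft(out_hat, n=len(u))
```

With identity weights and all modes kept, the layer is the identity map; shrinking `k_max` discards small-scale content, which is why the U-Net branch is added in the IU-FNO to recover small-scale flow structures.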
Procedia PDF Downloads 74
51 Molecular Dynamics Study of Ferrocene in Low and Room Temperatures
Authors: Feng Wang, Vladislav Vasilyev
Abstract:
Ferrocene (Fe(C₅H₅)₂, i.e., di-cyclopentadienyl iron (FeCp₂) or Fc) is a unique example of 'wrong but seminal' in chemistry history. It has significant applications in a number of areas, such as homogeneous catalysis, polymer chemistry, molecular sensing, and nonlinear optical materials. However, the 'molecular carousel' has been a 'notoriously difficult example' and subject to long debate over its conformation and properties. Ferrocene is a dynamic molecule; as a result, understanding its dynamical properties is very important for understanding the conformational properties of Fc. In the present study, molecular dynamics (MD) simulations are performed. In the simulations, we use five geometrical parameters to define the overall conformation of Fc and treat everything else as thermal noise. The five parameters are: d, the distance between the two Cp planes; α and δ, which define the relative positions of the Cp planes, with α the angle of the Cp tilt and δ the angle of the carousel-like rotation of the two Cp planes; and d1 and d2, the Fe-Cp1 and Fe-Cp2 distances, which position the Fe atom between the two Cps. Our preliminary MD simulations show that the five parameters behave differently. The distances of Fe to the two Cp planes are independent and practically identical, without correlation. The relative position of the two Cp rings, α, indicates that the two Cp planes are most likely not parallel; rather, they tilt at a small angle (α ≠ 0°). The mean plane dihedral angle δ ≠ 0°; moreover, δ is neither 0° nor 36°, indicating that under those conditions Fc is neither in a perfect eclipsed structure nor a perfect staggered structure. The simulations show that when the temperature is above 80 K, the conformers are virtually in free rotation. A very interesting result from the MD simulation concerns the five C-Fe bond distances within the same Cp ring: they are surprisingly not identical but fall into three groups of 2, 2, and 1.
We describe the motion of the Cp rings of Fc, pentagons formed by five carbon atoms, as 'turtle swimming', as shown in their dynamical animation video: Fe-C(1) and Fe-C(2), which are identical, are 'the turtle's back legs'; Fe-C(3) and Fe-C(4), also identical, are 'the turtle's front paws'; and Fe-C(5) is 'the turtle's head'. Such a 'turtle swimming' analogy may be able to explain the singly substituted derivatives of Fc. Again, the mean Fe-C distance obtained from the MD simulation is larger than the quantum mechanically calculated Fe-C distances for eclipsed and staggered Fc, with a larger deviation with respect to eclipsed Fc than staggered Fc. The same trend is obtained for the five Fe-C-H angles of the same Cp ring of Fc. The simulated mean IR spectrum at 7 K shows split spectral peaks at approximately 470 cm⁻¹ and 488 cm⁻¹, in excellent agreement with the quantum mechanically calculated gas-phase IR spectrum of eclipsed Fc. As the temperature increases above 80 K, the clearly split IR spectrum becomes a very broad single peak. Preliminary MD results will be presented.
Keywords: ferrocene conformation, molecular dynamics simulation, conformer orientation, eclipsed and staggered ferrocene
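Two of the five conformational parameters above, the inter-plane distance d and the tilt angle α, can be recovered from the Cp carbon coordinates by least-squares plane fitting. This is our own illustrative reconstruction, not the authors' analysis code:

```python
import numpy as np

def cp_plane(points):
    """Least-squares plane through a Cp ring: returns (centroid, unit normal).
    The normal is the smallest right singular vector of the centered coords."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def ring_parameters(cp1, cp2):
    """d (inter-plane distance along the first ring's normal) and alpha
    (tilt angle between the ring planes, in degrees) from two 5x3 arrays
    of Cp carbon coordinates."""
    c1, n1 = cp_plane(cp1)
    c2, n2 = cp_plane(cp2)
    cos_a = abs(np.dot(n1, n2))                      # sign of normals is arbitrary
    alpha = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    d = abs(np.dot(c2 - c1, n1))
    return d, alpha
```

For two parallel rings, α evaluates to zero and d to their separation; a nonzero α signals the small Cp tilt discussed in the abstract.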
Procedia PDF Downloads 218
50 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound, and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording the uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors including the root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency, and median frequency; wavelet packet coefficients; autoregressive (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy, and correlation dimension. Their statistical significance for recognizing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term.
The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term and preterm labor groups. Any future work on classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent, and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators of preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
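A few of the features listed above (root mean square, zero-crossing count, and median frequency) can be computed from a preprocessed EHG segment as follows. This is a generic sketch of the standard definitions, not the study's exact pipeline:

```python
import numpy as np

def ehg_features(x, fs):
    """RMS, zero-crossing count, and median frequency of a signal segment x
    sampled at fs Hz. Median frequency is the frequency that splits the
    power spectrum into two halves of equal power."""
    rms = np.sqrt(np.mean(x**2))
    zero_crossings = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    psd = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(psd)
    median_freq = freqs[np.searchsorted(cum, cum[-1] / 2.0)]
    return rms, zero_crossings, median_freq
```

On a pure 1 Hz sine sampled at 100 Hz, the RMS comes out near 1/sqrt(2) and the median frequency at the tone itself, which is a quick sanity check before applying such features to real recordings.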
Procedia PDF Downloads 571
49 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering
Authors: Hamza Benzerrouk, Alexander Nebylov
Abstract:
In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of handling GNSS outliers and outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudoranges and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches, namely Cubature and High-Degree Cubature Kalman Filtering. On the basis of previous results on state estimation for INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature-rule-based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, offering more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF). The Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system dynamics, an EKF is used with transition matrix factorization, together with GNSS block processing, which is well described in the paper; the presented approach assumes that the intermediate frequency (IF) is available, using correlator samples at a rate of 500 Hz.
GNSS (GPS+GLONASS) measurements are assumed available, and a modern SPKF with the Cubature Kalman Filter (CKF) is compared with new versions of the CKF, called high-order CKF, based on spherical-radial cubature rules developed at the fifth degree in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; results of state estimation are then observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High-Degree Cubature Kalman Filter.
Keywords: GNSS, INS, Kalman filtering, ultra tight integration
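The third-degree spherical-radial cubature rule at the heart of the CKF replaces Jacobians with 2n equally weighted evaluation points. A minimal sketch of the point generation (the fifth-degree rule used in the paper adds further points with unequal weights):

```python
import numpy as np

def cubature_points(x, P):
    """Third-degree spherical-radial cubature points for mean x and
    covariance P: 2n points at x +/- sqrt(n)*S_i, where S is a Cholesky
    square root of P. Each point carries equal weight 1/(2n)."""
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # unit directions
    return x[:, None] + S @ xi                              # shape (n, 2n)
```

Propagating these points through the nonlinear dynamics and re-computing their mean and covariance reproduces the prediction step without any linearization, which is the CKF's advantage over the EKF.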
Procedia PDF Downloads 280
48 Generalized Synchronization in Systems with a Complex Topology of Attractor
Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov
Abstract:
Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual types of coupling, including complex networks. This phenomenon has a number of practical applications, for example, secure information transmission through a communication channel with a high level of noise. Known methods for secure information transmission require increased privacy of data transmission, which raises the question of observing this phenomenon in systems with a complex topology of the chaotic attractor, possessing two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in two unidirectionally and mutually coupled dynamical systems in chaotic (one positive Lyapunov exponent) and hyperchaotic (two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We have shown that in both cases the generalized synchronization regime can be detected by means of the calculation of Lyapunov exponents and the phase tube approach, whereas, due to the complex topology of the attractor, the nearest neighbor method is misleading. Moreover, the auxiliary system approach, the standard method for observing the synchronous regime, gives incorrect results for the mutual type of coupling. To calculate the Lyapunov exponents in time-delayed systems, we have proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure in the context of the time-delayed system. We have studied in detail the mechanisms resulting in the onset of the generalized synchronization regime, paying particular attention to the region where one positive Lyapunov exponent has already become negative while the second one is still positive.
We have found intermittency here and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed. The efficiency of the method has been verified using the example of two unidirectionally coupled Rössler systems in the band chaos regime. We have revealed the main characteristics of intermittency, i.e., the distribution of the laminar phase lengths and the dependence of the mean laminar phase length on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).
Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents
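Lyapunov-exponent estimation of the kind used throughout the report reduces, for a 1D map, to averaging log|f'(x)| along the orbit. The sketch below uses the logistic map as a simple stand-in; the report's systems are flows, where the full Benettin/Gram-Schmidt procedure is required instead:

```python
import numpy as np

def largest_lyapunov_logistic(r, x0=0.3, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| with f'(x) = r*(1-2x)."""
    x = x0
    for _ in range(burn):                # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n
```

For r = 4 the exact value is ln 2, so the estimate serves as a self-check of the averaging procedure before it is generalized to coupled flows.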
Procedia PDF Downloads 275
47 Seismic Behavior of Existing Reinforced Concrete Buildings in California under Mainshock-Aftershock Scenarios
Authors: Ahmed Mantawy, James C. Anderson
Abstract:
Numerous cases of earthquakes (main-shocks) followed by aftershocks have been recorded in California. In 1992, a pair of strong earthquakes occurred within three hours of each other in Southern California. The first shock occurred near the community of Landers and was assigned a magnitude of 7.3; the second occurred near the city of Big Bear, about 20 miles west of the initial shock, and was assigned a magnitude of 6.2. In the same year, a series of three earthquakes occurred over two days in the Cape Mendocino area of Northern California. The main-shock was assigned a magnitude of 7.0, while the second and third shocks were both assigned a value of 6.6. This paper investigates the effect of a main-shock accompanied by aftershocks of significant intensity on reinforced concrete (RC) frame buildings, evaluating their nonlinear behavior using the PERFORM-3D software. A 6-story building in San Bruno and a 20-story building in North Hollywood were selected for the study, as both have RC moment-resisting frame systems. The buildings are also instrumented at multiple floor levels as part of the California Strong Motion Instrumentation Program (CSMIP). Both buildings have recorded responses from past events, such as the Loma Prieta and Northridge earthquakes, which were used to verify the response parameters of the numerical models in PERFORM-3D. The verification of the numerical models shows good agreement between the calculated and recorded response values. Different scenarios of a main-shock followed by a series of aftershocks from real cases in California were then applied to the building models in order to investigate the structural behavior of the moment-resisting frame system. The behavior was evaluated in terms of the lateral floor displacements, the ductility demands, and the inelastic behavior at critical locations.
The analysis results showed that permanent displacements may occur due to plastic deformation during the main-shock, which can lead to larger displacements during aftershocks. Also, the inelastic response at plastic hinges during the main-shock can change the hysteretic behavior during the aftershocks. Higher ductility demands can also occur when buildings are subjected to trains of ground motions rather than individual ground motions. A general conclusion is that the occurrence of aftershocks following an earthquake can increase the damage within the elements of RC frame buildings. Current code provisions for seismic design do not consider the probability of significant aftershocks when designing a new building in zones of high seismic activity.
Keywords: reinforced concrete, existing buildings, aftershocks, damage accumulation
Procedia PDF Downloads 280
46 Shameful Heroes of Queer Cinema: A Critique of Mumbai Police (2013) and My Life Partner (2014)
Authors: Payal Sudhan
Abstract:
Popular film industries in India, Bollywood and other local industries, make a range of commercial films that attract vast viewership. Love, heroism, action, adventure, revenge, etc., are some of the dearest themes chosen by filmmakers of popular film industries across the world. However, sexuality has remained a difficult issue to address within cinema, and such films appear in small numbers compared to those on other themes. One can easily assume that homosexuality is unlikely to be a favorite theme of Indian popular cinema. This does not mean that no films have been made on the issues of homosexuality; there have been several attempts. Earlier, some movies depicted homosexual (gay) characters as comedians, a practice that continued until the beginning of the 21st century. The study aims to explore how modern homophobia and stereotypes are represented in recent Malayalam cinema and how they affect the portrayal of homosexuality. The study will primarily focus on Mumbai Police (2013) and My Life Partner (2014), and tries to explain social space, the idea of a cure, and criminality. The first film selected for analysis, Mumbai Police (2013), is a crime thriller. The nonlinear narration of the movie reveals, towards the end, the murderer of ACP Aryan IPS, who was shot dead in a public meeting. The culprit turns out to be the investigating officer, ACP Antony Moses, himself a close friend and colleague of the victim. Much to one's curiosity, the primary cause turns out to be Antony's sexual relationship. My Life Partner can generically be classified as a drama. The movie puts forth male bonding and visibly riddles the notions of love and sex between Kiran and his roommate Richard. Running along the same track, the film deals with a different 'event': the exclusive celebration of male bonding. The socio-cultural background of the cinema is heterosexual.
The elements of the heterosexual social setup meet the ends of the diplomacy of Malayalam queer visual culture. The film reveals the life of two gay men who are humiliated by the larger heterosexual society; in the end, Kiran dies because of extreme humiliation. The paper is a comparative and cultural analysis of the two movies, My Life Partner and Mumbai Police. I bring the points of comparison together and explain the similarities and differences between the two films. Thus, this paper explains how stereotypes, homophobia, and other related issues are represented in these two movies. Keywords: queer cinema, homophobia, Malayalam cinema, queer films
Procedia PDF Downloads 233
45 Welfare Dynamics and Food Prices' Changes: Evidence from Landholding Groups in Rural Pakistan
Authors: Lubna Naz, Munir Ahmad, G. M. Arif
Abstract:
This study analyzes the static and dynamic welfare impacts of food price changes for various landholding groups in Pakistan. The study uses three classifications of land ownership, landless, small landowners, and large landowners, for the analysis. It uses a panel survey, the Pakistan Rural Household Survey (PRHS) of the Pakistan Institute of Development Economics, Islamabad, of rural households from the two largest provinces (Sindh and Punjab) of Pakistan, covering all three waves (2001, 2004, and 2010) of the PRHS. This research makes three important contributions to the literature. First, it uses the Quadratic Almost Ideal Demand System (QUAIDS) to estimate demand functions for eight food groups: cereals, meat, milk and milk products, vegetables, cooking oil, pulses, and other food. The study estimates the food demand functions with Nonlinear Seemingly Unrelated Regression (NLSUR) and employs a Lagrange Multiplier test on the coefficient of the squared expenditure term to determine whether that term should be included. The test results support the inclusion of the squared expenditure term in the food demand model for each of the landholding groups (landless, small landowners, and large landowners). The study also tests for endogeneity and uses a control function for its correction, while the problem of observed zero expenditure is dealt with through a two-step procedure. Second, it defines low-price and high-price periods based on a literature review and uses the elasticity coefficients from QUAIDS to analyze the static and dynamic welfare effects of food price changes across periods (first- and second-order Taylor approximations of the expenditure function are used). The study estimates the compensating variation (CV), the money-metric loss from food price changes, for landless, small, and large landowners. Third, this study compares the findings on the welfare implications of food price changes based on QUAIDS with earlier research in Pakistan that used other specifications of the demand system.
The findings indicate that the dynamic welfare impacts of food price changes are lower than the static welfare impacts for all landholding groups, and both are highest for the landless. The study suggests that the government should extend social safety nets to the landless poor, and particularly to the vulnerable landless (those without livestock), to redress the short-term impact of food price increases. In addition, the government should stabilize food prices, particularly cereal prices, in the long run. Keywords: QUAIDS, Lagrange multiplier, NLSUR, Taylor approximation
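The first- and second-order welfare calculation described above can be sketched with the standard Taylor expansion of the expenditure function: the first-order CV is the share-weighted sum of log price changes, and the second-order term adds substitution effects through the compensated price elasticities. The snippet below is a minimal illustration with hypothetical budget shares, elasticities, and price changes; none of these numbers come from the PRHS estimates.

```python
import math

def compensating_variation(shares, elasticities, dlnp):
    """First- and second-order Taylor approximations of the compensating
    variation (as a share of total expenditure) after a price change.
    shares: budget shares w_i; elasticities: compensated price elasticities
    e_ij; dlnp: log price changes d ln p_i."""
    n = len(shares)
    first = sum(shares[i] * dlnp[i] for i in range(n))
    second = first + 0.5 * sum(
        shares[i] * elasticities[i][j] * dlnp[i] * dlnp[j]
        for i in range(n) for j in range(n)
    )
    return first, second

# Hypothetical two-good illustration (numbers are not from the paper):
w = [0.4, 0.2]                              # budget shares, e.g. cereals and milk
e = [[-0.5, 0.1], [0.1, -0.7]]              # compensated own/cross-price elasticities
dlnp = [math.log(1.20), math.log(1.10)]     # 20% and 10% price increases
cv1, cv2 = compensating_variation(w, e, dlnp)
print(cv1, cv2)  # roughly 0.092 and 0.089 of total expenditure
```

With negative own-price elasticities, the second-order CV lies below the first-order CV, mirroring the paper's finding that allowing for demand responses (the dynamic impact) lowers the measured welfare loss relative to the static, first-order calculation.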
Procedia PDF Downloads 364
44 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study aims to develop a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects assigned a conditional autoregressive (CAR) prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating a neighborhood matrix that represents the spatial correlation more directly, by treating the areal units as the vertices of a graph and the neighbor relations as its edges. We used aggregated malaria counts from southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria risk in the area, whereas the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on the risk of malaria of either species. An elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risk is less sensitive to environmental factors than that of P. falciparum.
Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Different covariates were identified, including delayed effects, and elevated risks for both species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible. Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix
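The link between the graph and the CAR prior can be made concrete: once the optimizer selects a set of edges, the resulting binary neighborhood matrix W determines the precision matrix of a proper CAR prior, Q = τ(D − ρW), where D carries the vertex degrees. The sketch below is not the authors' MSTCAR implementation; it only shows, for a hypothetical 4-district graph, how a precision matrix follows from an edge list rather than from border sharing.

```python
def car_precision(n_areas, edges, rho=0.9, tau=1.0):
    """Precision matrix Q = tau * (D - rho * W) of a proper CAR prior,
    where W is the binary neighborhood matrix built from graph edges
    (areal units = vertices, neighbor relations = edges)."""
    W = [[0.0] * n_areas for _ in range(n_areas)]
    for i, j in edges:
        W[i][j] = W[j][i] = 1.0
    # Off-diagonal entries: -tau * rho * W_ij; diagonal: tau * degree_i.
    Q = [[-tau * rho * W[i][j] for j in range(n_areas)] for i in range(n_areas)]
    for i in range(n_areas):
        Q[i][i] = tau * sum(W[i])
    return Q

# Hypothetical 4-district graph (edges as a graph optimizer might retain them):
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
Q = car_precision(4, edges)
print(Q[0][0], Q[0][1])  # degree of district 0 on the diagonal, -rho off-diagonal
```

Replacing the border-sharing W with an optimized edge set changes only this matrix; the rest of the hierarchical model (likelihood, covariate effects, temporal terms) is untouched, which is what makes the graph-based neighborhood a drop-in refinement.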
Procedia PDF Downloads 77
43 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures in order to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. An earthquake that occurs near a fault line can be categorized as a near-field earthquake; by contrast, a far-field earthquake occurs when the region is farther from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may result in serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% increase in story drift and a 1% increase in absolute acceleration when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining the hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions; therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions. Keywords: near-field, pulse, pushover, time-history
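One common way to scale a record to design-level ground motion, as the abstract describes, is to choose the factor that minimizes the squared difference between the log target spectrum and the log scaled-record spectrum over the matching period range, which reduces to the exponential of the mean log ratio. The snippet below is a hedged sketch of that standard procedure, not necessarily the exact scaling used by the authors, and the spectral ordinates are hypothetical.

```python
import math

def spectrum_scale_factor(sa_record, sa_target):
    """Scale factor minimizing sum((ln Sa_target - ln(SF * Sa_record))^2)
    over the matching periods: SF = exp(mean(ln Sa_target - ln Sa_record))."""
    log_ratios = [math.log(t) - math.log(r) for r, t in zip(sa_record, sa_target)]
    return math.exp(sum(log_ratios) / len(log_ratios))

# Hypothetical 5%-damped spectral ordinates (in g) at a few matching periods:
sa_record = [0.80, 0.60, 0.35, 0.20]   # unscaled record spectrum
sa_target = [1.00, 0.75, 0.45, 0.24]   # design-level target spectrum
sf = spectrum_scale_factor(sa_record, sa_target)
scaled = [sf * s for s in sa_record]   # record spectrum after amplitude scaling
print(round(sf, 3))
```

Amplitude scaling of this kind preserves the record's frequency content and pulse character, which matters for near-field records where the velocity pulse, not just the spectral amplitude, drives the structural demand.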
Procedia PDF Downloads 146
42 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and peak ground velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and in the subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques, such as Artificial Neural Networks, Random Forests, and Support Vector Machines, as statistical methods in ground motion prediction. The algorithms are adjusted to quantify the event-to-event and site-to-site variability of the ground motions by implementing these terms as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes of 3 to 5.8, recorded over hypocentral distances of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was chosen because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states.
The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method; in particular, Random Forest outperforms the other algorithms. However, the conventional method is the better tool when limited data are available. Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
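The random-effects treatment of event-to-event variability mentioned above can be illustrated with its simplest moment-based analogue: partition each total residual into a between-event term (the mean residual of its event) and a within-event remainder. This sketch is not the authors' exact mixed-effects implementation, and the residual values are hypothetical.

```python
from collections import defaultdict

def partition_residuals(residuals, event_ids):
    """Split total ln-intensity residuals into between-event terms (mean
    residual per event) and within-event residuals, a simple moment-based
    analogue of modeling event-to-event variability as a random effect."""
    groups = defaultdict(list)
    for r, e in zip(residuals, event_ids):
        groups[e].append(r)
    event_terms = {e: sum(rs) / len(rs) for e, rs in groups.items()}
    within = [r - event_terms[e] for r, e in zip(residuals, event_ids)]
    return event_terms, within

# Hypothetical ln(PGA) residuals: two events, each recorded at three stations.
res = [0.30, 0.10, 0.20, -0.25, -0.15, -0.20]
ev = ["eq1", "eq1", "eq1", "eq2", "eq2", "eq2"]
terms, within = partition_residuals(res, ev)
print(terms["eq1"], terms["eq2"])  # between-event terms of the two events
```

Separating the two components matters because the between-event variance and the within-event (site-to-site plus record-to-record) variance enter a probabilistic hazard calculation differently; lumping them together overstates the independent scatter.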
Procedia PDF Downloads 122
41 Bilingual Books in British Sign Language and English: The Development of an E-Book
Authors: Katherine O'Grady-Bray
Abstract:
For some deaf children, reading books can be a challenge. Frank Barnes School (FBS) provides guided reading time with Teachers of the Deaf, in which they read books with deaf children using a bilingual approach. The vocabulary and context of the story are explained to deaf children in BSL so that they develop skills bridging the English and BSL languages. However, the success of this practice is only achieved if the person is fluent in both languages. FBS piloted a scheme to convert an Oxford Reading Tree (ORT) book into an e-book that can be read on tablets, giving deaf readers at FBS access to both languages (BSL and English) during lessons and outside the classroom. The pupils receive guided reading sessions with a Teacher of the Deaf every morning; these one-to-one sessions give pupils the opportunity to learn how to bridge both languages, e.g., how to translate English into BSL and vice versa. Generally, due to our pupils' limited access to incidental learning, their opportunities to gain new information about the world around them are limited, which highlights the importance of quality time to scaffold their language development. In some cases, there is a shortfall of parental support at home due to poor communication skills or an unawareness of how to interact with deaf children. Some families have limited knowledge of sign language or simply do not have the learning environment and strategies needed for language development with deaf children. As the majority of our pupils' preferred language is BSL, we use it to teach reading and writing in English; if this is not mirrored at home, there is limited opportunity for joint reading sessions. Development of the e-book required planning and technical work. The overall production took time, as video footage needed to be shot and then edited individually for each page. There were various technical considerations, such as choosing an appropriate background colour so as not to draw attention away from the signer.
Appointing a signer with the required high level of BSL was essential. The language and pace of the signing were important considerations, as they needed to match the age and reading level of the book. When translating the English text into BSL, careful consideration was given to the nonlinear nature of BSL and the differences in language structure and syntax. The e-book was produced using Apple's 'iBooks Author' software, which allowed video footage of the signer to be embedded on pages opposite the text and illustration. This enabled BSL translation of the content of the text and the inferences of the story. An interpreter was used to directly 'voice over' the signer rather than read the actual text. The aim behind the structure and layout of the e-book is to allow parents to 'read' with their deaf child, which helps to develop both languages. From observations, the use of e-books has given pupils confidence and motivation in their reading, developing skills that bridge the BSL and English languages and making reading time with parents more effective. Keywords: bilingual book, e-book, BSL and English, bilingual e-book
Procedia PDF Downloads 169
40 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer
Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz
Abstract:
Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) has derived primarily from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchange, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding a proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and to characterizing the obtained Pb(II) compounds in terms of their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of the MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing a wide range of coordination numbers; ii) the stereoactivity of the 6s² lone electron pair, leading to a hemidirected or holodirected geometry; iii) a flexible coordination environment; and iv) the possibility of forming secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate-ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties that point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new tetranuclear Pb(II) polymer, [Pb₄(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL, and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb3 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries.
The topology of the strongest Pb–O bonds was determined to be the (4·8²) fes topology. When the secondary Pb–O bonds are taken into account, the coordination numbers of the Pb centres increase: Pb1 exhibits a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibit holodirected tricapped trigonal prismatic geometries, and Pb3 exhibits a holodirected bicapped trigonal prismatic geometry. Moreover, the stereoactivity of the Pb(II) lone pair was confirmed by DFT calculations. The 2D structure is expanded into 3D by non-covalent O/C–H···π and Pb···π interactions, which was confirmed by Hirshfeld surface analysis. The above-mentioned interactions improve the rigidity of the structure and facilitate charge and energy transfer between metal centres, making the polymer a promising luminescent compound. Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions
Procedia PDF Downloads 145