Search results for: extended Kantorovich method.
19287 Architectural Robotics in Micro Living Spaces: An Approach to Enhancing Wellbeing
Authors: Timothy Antoniuk
Abstract:
This paper will demonstrate why the most successful and livable cities in the future will require multi-disciplinary designers to develop a deep understanding of people's changing lifestyles, and why new generations of deeply integrated products, services and experiences need to be created. Drawing on research from the UNEP Creative Economy Reports and on a variety of other consumption and economic statistics, a compelling argument will be made that it is people's living spaces that offer the easiest and most significant affordances for inducing positive changes to their wellbeing and to a city's economic and environmental prosperity. This approach, which leverages happiness, wellbeing and prosperity through creating new concepts and typologies of 'home', puts people and their needs, wants, desires, aspirations and lifestyles at the beginning of the design process, not at the end, as so often occurs with current-day multi-unit housing construction. As an important part of the creative-reflective and statistical comparisons that are necessary for this on-going body of research and practice, Professor Antoniuk created the Micro Habitation Lab (mHabLab) in 2016. By focusing on testing the functional and economic feasibility of activating small spaces with different types of architectural robotics, a variety of movable, expandable and interactive objects have been hybridized and integrated into the architectural structure of the Lab. Allowing the team to test new ideas continually and accumulate thousands of points of feedback from everyday consumers, a series of on-going open houses is allowing the public-at-large to see, physically engage with, and give feedback on the items they find most and least valuable. This iterative approach to testing has exposed two key findings: firstly, that there is a clear opportunity to improve the macro and micro functionality of small living spaces; and secondly, that allowing people to physically alter smaller elements of their living space lessens feelings of frustration and enhances feelings of pride and a deeper perception of "home". Equally interesting to these findings is a grouping of new research questions that are being exposed, relating to the duality of space, how people can be in two living spaces at one time, and how small living spaces are moving the Extended Home into the public realm.
Keywords: architectural robotics, extended home, interactivity, micro living spaces
Procedia PDF Downloads 170
19286 Critical Activity Effect on Project Duration in Precedence Diagram Method
Authors: Salman Ali Nisar, Koshi Suzuki
Abstract:
The Precedence Diagram Method (PDM), with its additional relationships between activities, i.e., start-to-start, finish-to-finish, and start-to-finish, provides a more flexible schedule than the traditional Critical Path Method (CPM). However, changing the duration of critical activities in a PDM network can have an anomalous effect on the critical path. Researchers have proposed some classifications of critical activity effects. In this paper, we study the classifications of critical activity effects further and provide more detailed information. Furthermore, we determine the maximum amount of time for each class of critical activity effect by which project managers can control the dynamic feature (shortening/lengthening) of critical activities and the project duration more efficiently.
Keywords: construction project management, critical path method, project scheduling, precedence diagram method
Procedia PDF Downloads 509
19285 In-Fun-Mation: Putting the Fun in Information Retrieval at the Linnaeus University, Sweden
Authors: Aagesson, Ekstrand, Persson, Sallander
Abstract:
A description of how a team of librarians at Linnaeus University Library in Sweden utilizes a pedagogical approach to deliver engaging digital workshops on information retrieval. The team consists of four librarians supporting three different faculties. The paper discusses the challenges faced in engaging students who may perceive information retrieval as a boring and difficult subject. The paper emphasizes the importance of motivation, inclusivity, constructive feedback, and collaborative learning in enhancing student engagement. By employing a two-librarian teaching model, maintaining a lighthearted approach, and relating information retrieval to everyday experiences, the team aimed to create an enjoyable and meaningful learning experience. The authors describe their approach to increasing student engagement and learning outcomes through a three-phase workshop structure: before, during, and after the workshops. The "flipped classroom" method was used, where students were provided with pre-workshop materials, including a short film on information searching, and were encouraged to reflect on the topic using a digital collaboration tool. During the workshops, interactive elements such as quizzes, live demonstrations, and practical training were incorporated, along with opportunities for students to ask questions and provide feedback. The paper concludes by highlighting the benefits of the flipped classroom approach and the extended learning opportunities provided by the before and after workshop phases. The authors believe that their approach offers a sustainable alternative for enhancing information retrieval knowledge among students at Linnaeus University.
Keywords: digital workshop, flipped classroom, information retrieval, interactivity, LIS practitioner, student engagement
Procedia PDF Downloads 64
19284 Fluid–Structure Interaction Modeling of Wind Turbines
Authors: Andre F. A. Cyrino
Abstract:
Knowing that technological advance is focused on the efficient extraction of energy from wind, and therefore on the design of wind turbine structures, this work aims to study the fluid-structure interaction of an idealized wind turbine. The blade was studied as a beam attached to a cylindrical hub whose rotation axis points along the air flow passing through the rotor. Using the calculus of variations and the finite difference method, the blade is represented by a discrete number of nodes and the aerodynamic forces are evaluated at them. The study presented here was implemented in Matlab and performs a numeric simulation of a simplified model of a windmill containing a hub and three blades modeled as Euler-Bernoulli beams for small strains under constant and uniform wind. The mathematical approach is based on Hamilton's Extended Principle, with the aerodynamic loads applied at the nodes considering the local relative wind speed, angle of attack, and aerodynamic lift and drag coefficients. Due to the wide range of angles of attack at which a wind turbine blade operates, the airfoil used in the model was the NREL SERI S809, which allowed obtaining equations for Cl and Cd as functions of the angle of attack, based on a NASA study. Three-dimensional flow effects were not taken into account, nor was torsion of the beam, which only bends. The results show the dynamic response of the system in terms of displacement and rotational speed as the turbine reaches its final speed. Although the results were not compared to real windmills or more complete models, the resulting values were consistent with the size of the system and the wind speed.
Keywords: blade aerodynamics, fluid–structure interaction, wind turbine aerodynamics, wind turbine blade
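As a rough illustration of the nodal aerodynamic loading described above, the following Python sketch computes quasi-steady lift and drag at a single blade node from the local relative wind speed and inflow angle. The chord, air density and the Cl/Cd fits are placeholders, not the NREL S809 data or the values used in the paper.

```python
import numpy as np

def nodal_aero_load(r, omega, v_wind, chord, rho=1.225):
    """Quasi-steady lift/drag per unit span at one blade node.
    r: radial position [m], omega: rotor speed [rad/s], v_wind: wind speed [m/s].
    The Cl/Cd fits below are placeholders, not the NREL S809 fits from the paper."""
    v_tan = omega * r                      # tangential speed seen by the node
    v_rel = np.hypot(v_wind, v_tan)        # local relative wind speed
    phi = np.arctan2(v_wind, v_tan)        # inflow angle; equals angle of attack with zero twist/pitch
    cl = 2.0 * np.pi * phi                 # placeholder thin-airfoil lift slope
    cd = 0.01 + 0.05 * phi ** 2            # placeholder drag polar
    q = 0.5 * rho * v_rel ** 2 * chord     # dynamic pressure times chord
    lift, drag = q * cl, q * cd
    # resolve into flapwise (out-of-plane) and edgewise (in-plane) components
    f_flap = lift * np.cos(phi) + drag * np.sin(phi)
    f_edge = lift * np.sin(phi) - drag * np.cos(phi)
    return f_flap, f_edge

# example: load at mid-span of a 20 m blade spinning at 2 rad/s in 10 m/s wind
print(nodal_aero_load(r=10.0, omega=2.0, v_wind=10.0, chord=1.2))
```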
Procedia PDF Downloads 266
19283 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme data in an observation can occur due to unusual circumstances in the observation. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution. The extreme value distribution considered here is the Gumbel distribution with two parameters. The parameter estimation of the Gumbel distribution with the maximum likelihood method (ML) does not yield an exact closed-form solution, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of the Newton method. The Newton method uses the second derivative to calculate the parameter value changes in each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating both derivatives in each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that make the likelihood maximal. In this method, we need the gradient vector and the Hessian matrix. This research is a theoretical and applied study based on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The results indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
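As an illustration of the approach described in the abstract, the following Python sketch maximizes the Gumbel log-likelihood with a BFGS quasi-Newton optimizer (here via scipy.optimize.minimize). The rainfall values are synthetic placeholders, not the Purworejo data.

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_neg_loglik(params, x):
    """Negative log-likelihood of the Gumbel distribution.
    params = (mu, log_beta); the scale is optimised on a log scale to keep it positive."""
    mu, log_beta = params
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return np.sum(np.log(beta) + z + np.exp(-z))

# block maxima of daily rainfall (synthetic numbers, not the Purworejo data)
x = np.array([92.0, 110.5, 87.3, 130.2, 101.8, 95.4, 121.7, 88.9])

start = np.array([x.mean(), np.log(x.std())])     # moment-based starting point
res = minimize(gumbel_neg_loglik, start, args=(x,), method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(f"location mu = {mu_hat:.2f}, scale beta = {beta_hat:.2f}")
```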
Procedia PDF Downloads 323
19282 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes
Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi
Abstract:
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element's domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells that are clipped to accommodate the domain geometry must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking the analytical limit of this element's stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretization, since they only rely on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme, which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which elevates the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees
Procedia PDF Downloads 145
19281 Effect of Spatially Correlated Disorder on Electronic Transport Properties of Aperiodic Superlattices (GaAs/AlxGa1-xAs)
Authors: F. Bendahma, S. Bentata, S. Cherid, A. Zitouni, S. Terkhi, T. Lantri, Y. Sefir, Z. F. Meghoufel
Abstract:
We examine the electronic transport properties of AlxGa1-xAs/GaAs superlattices. Using the transfer-matrix technique and the exact Airy function formalism, we investigate theoretically the effect of structural parameters on the electronic energy spectra of the trimer thickness barrier (TTB). Our numerical calculations show that the localization length of the states becomes more extended when the disorder is correlated (trimer case). We have also found that the resonant tunneling time (RTT) is of the order of several femtoseconds.
Keywords: electronic transport properties, structural parameters, superlattices, transfer-matrix technique
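For orientation, the following Python sketch shows the transfer-matrix idea for a generic 1D stack of rectangular barriers, using a single effective mass and plane-wave (rather than Airy-function) solutions in each layer. The layer widths, barrier heights and effective mass are illustrative, not the AlxGa1-xAs/GaAs TTB parameters of the paper.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
M0   = 9.1093837015e-31  # electron rest mass [kg]
EV   = 1.602176634e-19   # J per eV

def transmission(E_eV, layers, m_eff=0.067):
    """Transmission through a stack of (width_nm, barrier_height_eV) layers between
    identical zero-potential leads, via 2x2 transfer matrices.
    m_eff: effective mass in units of m0 (same value in every layer for simplicity)."""
    m, E = m_eff * M0, E_eV * EV
    k = lambda V_eV: np.sqrt(2 * m * (E - V_eV * EV) + 0j) / HBAR   # complex inside barriers
    ks = [k(0.0)] + [k(V) for _, V in layers] + [k(0.0)]
    ds = [0.0] + [w * 1e-9 for w, _ in layers]      # leads carry no propagation phase
    M = np.eye(2, dtype=complex)
    for j in range(len(ks) - 1):
        kj, kn, d = ks[j], ks[j + 1], ds[j]
        p, q = np.exp(1j * kj * d), np.exp(-1j * kj * d)
        r = kj / kn
        Mj = 0.5 * np.array([[(1 + r) * p, (1 - r) * q],
                             [(1 - r) * p, (1 + r) * q]])
        M = Mj @ M                                   # matching psi and psi' at each interface
    r_amp = -M[1, 0] / M[1, 1]                       # no incoming wave from the right
    t_amp = M[0, 0] + M[0, 1] * r_amp
    return abs(t_amp) ** 2                           # identical leads: T = |t|^2

# illustrative double barrier (widths in nm, heights in eV); not the paper's TTB stack
stack = [(3.0, 0.3), (5.0, 0.0), (3.0, 0.3)]
for E in (0.05, 0.10, 0.15, 0.20):
    print(f"E = {E:.2f} eV  ->  T = {transmission(E, stack):.3e}")
```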
Procedia PDF Downloads 283
19280 Notes on Frames in Weighted Hardy Spaces and Generalized Weighted Composition Operators
Authors: Shams Alyusof
Abstract:
This work is intended to enrich the study of frames, given their prominent role in pure and applied mathematics and their many applications in computer science and engineering. Recently, there have been remarkable studies of operators that preserve frames on some spaces, and this research can be considered an extension of such studies. Indeed, in this paper we characterize weighted composition operators that preserve frames in weighted Hardy spaces on the open unit disk. Moreover, we show that this characterization does not apply to generalized weighted composition operators on such spaces. Nevertheless, this study could be extended to provide more specific characterizations.
Keywords: frames, generalized weighted composition operators, weighted Hardy spaces, analytic functions
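For reference, the standard frame condition in a separable Hilbert space, which the paper specialises to weighted Hardy spaces, reads as follows.

```latex
% A sequence {f_n} in a separable Hilbert space H is a frame if there exist
% constants 0 < A <= B < infinity (the frame bounds) such that
A \|f\|^{2} \;\le\; \sum_{n} \bigl|\langle f, f_n \rangle\bigr|^{2} \;\le\; B \|f\|^{2}
\qquad \text{for all } f \in H .
```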
Procedia PDF Downloads 120
19279 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA
Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method of crater detection from images of the lunar surface captured by a small space probe. We use principal component analysis (PCA) to detect craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares the FPGA and a generic computer in terms of the processing time of the crater detection method based on principal component analysis.
Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time
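As a rough sketch of the PCA-based detection step (independent of any FPGA implementation), the following Python code builds a crater subspace from training patches and scores a candidate patch by its reconstruction error. The patch size, number of components and the name `strength_value` are assumptions echoing the keywords, not the paper's actual pipeline.

```python
import numpy as np

def fit_pca(patches, n_components=8):
    """patches: (N, D) array of flattened training patches containing craters.
    Returns the patch mean and the top principal directions (D, n_components)."""
    mean = patches.mean(axis=0)
    cov = np.cov(patches - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return mean, eigvecs[:, ::-1][:, :n_components]

def strength_value(patch, mean, components):
    """Score a candidate patch by how well it is reconstructed from the crater
    subspace; 'strength value' here is an assumed name echoing the keywords."""
    x = patch.ravel() - mean
    proj = components @ (components.T @ x)          # projection onto the subspace
    return -np.linalg.norm(x - proj)                # higher = more crater-like

# toy usage with random data standing in for lunar-surface patches
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 16 * 16))
mean, comps = fit_pca(train, n_components=8)
print(strength_value(rng.normal(size=(16, 16)), mean, comps))
```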
Procedia PDF Downloads 552
19278 Co-Authorship Networks of Scientific Collaboration
Authors: Juha Kettunen
Abstract:
This study analyzes collaborative and networked academic authorship in higher education. The literature review shows evidence that single authorship has made a gradual paradigm shift to joint authorship. The empirical evidence from the Turku University of Applied Sciences indicates that collaborative authorship has notably increased in the last few years. Co-authorship has extended outside the institution to other domestic and international academic organizations. Co-authorship not only increases the merits of academic scholars but also builds and maintains networks of research and development. The results of this study help authors, editors and partners of research and development projects to gain a more concrete understanding of how co-authorship has developed and spread beyond higher education institutions.
Keywords: co-authorship, social networking, higher education, research and development
Procedia PDF Downloads 240
19277 An Investigation of the Relationship between Organizational Culture and Innovation Type: A Mixed Method Study Using the OCAI in a Telecommunication Company in Saudi Arabia
Authors: A. Almubrad, R. Clouse, A. Aljlaoud
Abstract:
Organizational culture (OC) is recognized to have an influence on the propensity of organizations to innovate. It is also presumed that it may impede the innovation process from thriving within the organization. Investigating the role organizational culture plays in enabling or inhibiting innovation merits exploration, in order to identify the organizational cultural attributes necessary to reach innovation goals. This study aims to investigate a preliminary matching heuristic of OC attributes to the type of innovation that has the potential to thrive within those attributes. A mixed methods research approach was adopted to achieve the research aims. Accordingly, participants from a national telecom company in Saudi Arabia took the Organizational Culture Assessment Instrument (OCAI). A further sample, selected from the respondents' pool and holding the role of managing director, was interviewed in the qualitative phase. Our findings reveal that the market culture type has a tendency to adopt radical innovations to disrupt the market and to preserve its market position. In contrast, we find that the adhocracy culture type tends to adopt the incremental innovation type, which tends to be more convenient for employees due to its low levels of uncertainty. Our results are an encouraging indication that matching organizational culture attributes to the type of innovation aids innovation management. This study carries limitations, as it draws its findings from a limited sample of OC attributes that identify with the adhocracy and market culture types. An extended investigation is merited to explore other types of organizational cultures and their optimal innovation types.
Keywords: incremental innovation, radical innovation, organization culture, market culture, adhocracy culture, OCAI
Procedia PDF Downloads 103
19276 Life Time Improvement of Clamp Structural by Using Fatigue Analysis
Authors: Pisut Boonkaew, Jatuporn Thongsri
Abstract:
In the hard disk drive manufacturing industry, the process of reducing unnecessary parts and qualifying the quality of parts before assembly is important. Thus, a clamp was designed and fabricated as a fixture for holding parts in the testing process. Testing by trial and error consumes a long time, so simulation was introduced to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp's life expectancy, considering all probable designs, which amount to 27 designs excluding repeated ones. The probabilities were calculated following the full fractional rules of the six sigma methodology. The six sigma methodology is a well-structured method for improving the quality level by detecting and reducing the variability of the process; defects therefore decrease while process capability increases. This research focuses on the methodology of stress and fatigue reduction while the compressive force still remains in the acceptable range set by the company. In the simulation, ANSYS simulates the 3D CAD model under the same conditions as the experiment. Then the force at each displacement, from 0.01 to 0.1 mm, is recorded. The setting in ANSYS was verified by a mesh convergence study and by comparing the percentage error with the experimental result; the error must not exceed the acceptable range. The improvement therefore focuses on the angle, radius, and length that will reduce stress while the force still remains within the acceptable range. Fatigue analysis is then carried out through the ANSYS simulation program to guarantee that the lifetime will be extended, and the setting is confirmed by comparison with the actual clamp in order to observe the difference in fatigue between both designs. This brings a lifetime improvement of up to 57% compared with the actual clamp used in manufacturing. This study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Because of the combination and adaptation of the six sigma method, finite element analysis, fatigue analysis and linear regression analysis, which lead to accurate calculation, this project will be able to save up to 60 million dollars annually.
Keywords: clamp, finite element analysis, structural, six sigma, linear regressive analysis, fatigue analysis, probability
Procedia PDF Downloads 233
19275 MapReduce Logistic Regression Algorithms with RHadoop
Authors: Byung Ho Jung, Dong Hoon Lim
Abstract:
Logistic regression is a statistical method for analyzing a dataset in which there are one or more independent variables that determine an outcome. Logistic regression is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating the parameters of logistic regression based on the MapReduce framework with RHadoop, which integrates R and the Hadoop environment and is applicable to large-scale data. There exist three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods. The results showed that our Newton method appeared to be the most robust across all data tested.
Keywords: big data, logistic regression, MapReduce, RHadoop
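A minimal single-machine sketch of the Newton-Raphson update for logistic regression is shown below; the distribution of the per-record sums over MapReduce/RHadoop, which is the subject of the paper, is omitted, and the data are synthetic.

```python
import numpy as np

def logistic_newton(X, y, n_iter=25, tol=1e-8):
    """Newton-Raphson (IRLS) fit of a logistic regression; X is (n, p) with an
    intercept column already appended, y is a 0/1 vector. This is a single-machine
    sketch of the update that the paper distributes with MapReduce/RHadoop."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # predicted probabilities
        W = p * (1.0 - p)                            # diagonal of the weight matrix
        grad = X.T @ (y - p)                         # gradient of the log-likelihood
        hess = X.T @ (X * W[:, None])                # Fisher information matrix
        step = np.linalg.solve(hess, grad)
        beta += step                                 # no learning rate needed
        if np.linalg.norm(step) < tol:
            break
    return beta

# toy usage with synthetic data
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
y = (rng.random(500) < 1 / (1 + np.exp(-(0.5 + X[:, 1] - 2 * X[:, 2])))).astype(float)
print(logistic_newton(X, y))
```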
Procedia PDF Downloads 280
19274 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries
Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis
Abstract:
In the present work, a numerical method for the estimation of the appropriate gradient magnetic fields for the optimal driving of particles into a desired area inside the human body is presented. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM) and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that intends to eliminate the deviation of the nanoparticles from a desired path. Hence, the gradient magnetic field is constantly adjusted in a suitable way so that the particles follow the desired trajectory as closely as possible. Using the proposed method, it becomes clear that the particle diameter is a crucial parameter for efficient navigation; increasing the particles' diameter decreases their deviation from the desired path. Moreover, the navigation method can drive nanoparticles into the desired areas with an efficiency of approximately 99%.
Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles
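As a toy illustration of the optimization loop, the following Python sketch uses the third-party `cma` package to adjust a handful of gradient-field amplitudes so as to minimize a placeholder deviation function; in the paper this objective would be evaluated by the coupled CFD-DEM particle simulation, which is not reproduced here.

```python
import numpy as np
import cma   # pip install cma

def trajectory_deviation(gradients):
    """Placeholder objective: mean squared deviation of simulated particle positions
    from the desired path for a given set of gradient-field amplitudes. In the paper
    this would be evaluated by the coupled CFD-DEM simulation."""
    desired = np.linspace(0.0, 1.0, gradients.size)
    simulated = desired + 0.1 * np.sin(5.0 * gradients - 1.0)   # dummy particle response
    return float(np.mean((simulated - desired) ** 2))

x0 = np.zeros(6)                            # initial gradient amplitudes for 6 time windows
es = cma.CMAEvolutionStrategy(x0, 0.5)      # initial step size 0.5
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [trajectory_deviation(np.asarray(c)) for c in candidates])
print("best gradient schedule:", es.result.xbest)
```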
Procedia PDF Downloads 139
19273 Meta-Instruction Theory in Mathematics Education and Critique of Bloom’s Theory
Authors: Abdollah Aliesmaeili
Abstract:
The purpose of this research is to present a different perspective on basic mathematics teaching called meta-instruction, which reverses the learning path. Meta-instruction is a method of teaching in which the teaching trajectory starts from brain education and moves into learning. This research focuses on the behavior of the mind during learning. In this method, students are not instructed in mathematics, but they are educated. Another goal of the research is to criticize Bloom's classification in the cognitive domain and reverse it, because it cannot meet the educational and instructional needs of the new generation, and to substitute mathematics education for mathematics teaching. This is an indirect method of teaching. The research method is longitudinal over four years. The statistical samples included students aged 6 to 11. The research focuses on improving the mental abilities of children to explore mathematical rules and operations through play alone, with eight measurements (two examinations each year). The results showed that there is a significant difference between groups in remembering, understanding, and applying. Moreover, educating in mathematics is more effective than instructing in overall learning abilities.
Keywords: applying, Bloom's taxonomy, brain education, mathematics teaching method, meta-instruction, remembering, starmath method, understanding
Procedia PDF Downloads 20
19272 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel
Authors: Seyed Abolhasan Naeini, M. Mortezaee
Abstract:
Various factors, such as the method of installation, the pile type, the pile material and the pile shape, can affect the final bearing capacity of a pile executed in the soil; among them, the method of installation is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. Therefore, the current paper first presents and reviews the frustum confining vessel (FCV) as a suitable tool for physical modeling of deep foundations. Then, by describing loading tests of two open-ended and closed-ended steel piles, each of which was installed by two methods, "with displacement" and "without displacement", the effect of end conditions and installation method on the final bearing capacity of the pile is investigated. The soil used in the current paper is silty sand from Firoozkooh. The results of the experiments show that, in general, the without-displacement installation method yields a larger bearing capacity for both piles, and for a given installation method the closed-ended pile shows a slightly higher bearing capacity.
Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method
Procedia PDF Downloads 151
19271 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses
Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim
Abstract:
The capacity spectrum method (CSM), one of the methodologies used to evaluate the seismic fragility of building structures, has long been recognized as the most convenient method, even though it has several limitations in predicting the seismic response of the structures of interest. This paper proposes a procedure to estimate seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To achieve the research purpose, this study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods, the IDA method and the CSM. Both seismic fragility curves are similar in the slight and moderate damage states, whereas the fragility curve obtained from the IDA method presents less variation (or uncertainty) in the extensive and complete damage states. This is due to the fact that the IDA method can properly capture the structural response beyond yielding, unlike the CSM, and can directly account for higher-mode effects. From these observations, the CSM could overestimate the seismic vulnerability of the studied structure in the extensive or complete damage states.
Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame
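A minimal sketch of how a lognormal fragility curve can be fitted from IDA results is given below, assuming the common form P(DS >= ds | IM) = Phi(ln(IM/theta)/beta); the capacity values are illustrative, not the data of the studied 5-story frame.

```python
import numpy as np
from scipy.stats import norm

def fit_lognormal_fragility(im_capacities):
    """Fit a lognormal fragility curve from IDA results: im_capacities holds, for
    each ground-motion record, the intensity measure at which the damage state is
    first reached. Returns the median theta and log-standard deviation beta."""
    ln_im = np.log(im_capacities)
    return np.exp(ln_im.mean()), ln_im.std(ddof=1)

def fragility(im, theta, beta):
    """Probability of reaching the damage state at intensity im."""
    return norm.cdf(np.log(im / theta) / beta)

# illustrative IDA capacities in units of Sa(T1) [g]; not the paper's frame data
caps = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.35, 0.58])
theta, beta = fit_lognormal_fragility(caps)
for im in (0.2, 0.4, 0.6, 0.8):
    print(f"Sa = {im:.1f} g  ->  P(exceed) = {fragility(im, theta, beta):.2f}")
```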
Procedia PDF Downloads 421
19270 Approximations of Fractional Derivatives and Its Applications in Solving Non-Linear Fractional Variational Problems
Authors: Harendra Singh, Rajesh Pandey
Abstract:
The paper presents a numerical method based on an operational matrix of integration and the Rayleigh-Ritz method for the solution of a class of non-linear fractional variational problems (NLFVPs). Chebyshev polynomials of the first kind are used for the construction of the operational matrix. Using the operational matrix and the Rayleigh-Ritz method, the NLFVP is converted into a system of non-linear algebraic equations, and by solving these equations we obtain an approximate solution for NLFVPs. A convergence analysis of the proposed method is provided. Numerical experiments are done to show the applicability of the proposed numerical method. The obtained numerical results are compared with the exact solution and with the solution obtained from Chebyshev polynomials of the third kind. Further, the results are shown graphically for the different fractional orders involved in the problems.
Keywords: non-linear fractional variational problems, Rayleigh-Ritz method, convergence analysis, error analysis
Procedia PDF Downloads 296
19269 An Approximation Method for Exact Boundary Controllability of Euler-Bernoulli
Authors: A. Khernane, N. Khelil, L. Djerou
Abstract:
The aim of this work is to study the numerical implementation of the Hilbert uniqueness method for the exact boundary controllability of the Euler-Bernoulli beam equation. This study may be difficult, depending on the problem under consideration (geometry, control, and dimension) and the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation. This idea is developed in this study. As a first step, we characterize the solution by a minimization principle, and secondly we propose a method for its resolution to approximate the control steering the considered system to rest at time T.
Keywords: boundary control, exact controllability, finite difference methods, functional optimization
Procedia PDF Downloads 346
19268 Analysis of Chatterjea Type F-Contraction in F-Metric Space and Application
Authors: Awais Asif
Abstract:
This article investigates fixed point theorems for Chatterjea type F-contractions in the setting of F-metric spaces. We relax the conditions of the F-contraction and define a modified F-contraction for two mappings. The study provides fixed point results for both single-valued and multivalued mappings. The results are further extended to common fixed point theorems for two mappings. Moreover, to discuss the applicability of our results, an application is provided, which shows the role of our results in finding the solution of functional equations in dynamic programming. Our results generalize and extend the existing results in the literature.
Keywords: Chatterjea type F-contraction, F-Cauchy sequence, F-convergent, multivalued mappings
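For orientation, the classical Chatterjea contraction condition on a metric space (X, d), which the paper adapts to the F-contraction setting, is the following.

```latex
% Classical Chatterjea contraction: T : X -> X on a metric space (X, d) satisfies
d(Tx, Ty) \;\le\; k \,\bigl[\, d(x, Ty) + d(y, Tx) \,\bigr]
\qquad \text{for all } x, y \in X, \ \text{with } k \in [0, \tfrac{1}{2}).
```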
Procedia PDF Downloads 141
19267 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method
Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage
Abstract:
Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a way of online ECM parameter identification using a continuous time (CT) estimation method. The CT estimation method has several advantages over discrete time (DT) estimation methods for ECM parameter identification due to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy compared with the conventional DT recursive least square method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
Keywords: electric circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square
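As background, a minimal Python sketch of the kind of first-order RC equivalent circuit model the paper identifies is given below; the OCV curve, resistances and capacitance are illustrative placeholders, not values identified from the test cell, and the identification algorithm itself is not shown.

```python
import numpy as np

def simulate_ecm(current, dt, r0=2e-3, r1=1e-3, c1=5e3, capacity_ah=5.0, soc0=0.9):
    """Simulate a first-order RC equivalent circuit model (the type of ECM the paper
    identifies). current: discharge-positive current [A]; dt: sample time [s].
    The OCV curve and RC values are illustrative placeholders, not identified values."""
    ocv = lambda soc: 3.0 + 1.2 * soc            # crude linear OCV(SOC) placeholder
    a = np.exp(-dt / (r1 * c1))                  # discrete-time decay of the RC branch
    soc, v1 = soc0, 0.0
    v_out = []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)   # coulomb counting
        v1 = a * v1 + r1 * (1.0 - a) * i         # polarisation voltage state
        v_out.append(ocv(soc) - i * r0 - v1)     # terminal voltage
    return np.array(v_out)

# 5 minutes of a 2 A discharge pulse followed by 5 minutes of rest, sampled at 1 s
i_profile = np.concatenate([2.0 * np.ones(300), np.zeros(300)])
print(simulate_ecm(i_profile, dt=1.0)[:5])
```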
Procedia PDF Downloads 381
19266 Numerical Investigation of Embankment Settlement Improved by Method of Preloading by Vertical Drains
Authors: Seyed Abolhasan Naeini, Saeideh Mohammadi
Abstract:
Time-dependent settlement due to loading on soft saturated soils produces many problems, such as high consolidation settlements and low consolidation rates. Moreover, long-term consolidation settlement of the soft soil underlying an embankment leads to unpredicted settlements and cracks on the soil surface. The preloading method is an effective improvement method to solve this problem, and using vertical drains with preloading is an effective way of improving soft soils. Applying the deep soil mixing method to soft soils is another effective improvement method. There are few studies on using the two methods of preloading and deep soil mixing simultaneously. In this paper, the concurrent effect of preloading with vertical drains and deep soil mixing is investigated through a finite element code, Plaxis2D. The influence of parameters such as the spacing of the deep soil mixing columns, the existence of vertical drains and the distance between them, on the settlement and the stability factor of safety of an embankment founded on soft soil is investigated in this research.
Keywords: preloading, soft soil, vertical drains, deep soil mixing, consolidation settlement
Procedia PDF Downloads 215
19265 Prediction Fluid Properties of Iranian Oil Field with Using of Radial Based Neural Network
Authors: Abdolreza Memari
Abstract:
In this article, a numerical method has been used in order to estimate the viscosity of crude oil. We use this method to measure the crude oil's viscosity in three states: the saturated oil's viscosity, the viscosity above the bubble point, and the viscosity under the saturation pressure. The crude oil's viscosity is then estimated by using the KHAN model and the roller ball method. After that, using these data, which include the conditions affecting the viscosity measurements and the viscosity estimated by the presented method, a radial basis neural network is trained. This network is a kind of two-layer artificial neural network whose hidden-layer activation function is the Gaussian function, and training algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence method are compared. Having trained this network, we are able to estimate the crude oil's viscosity without using the KHAN model or the experimental conditions, and under any other condition, with acceptable accuracy. The results show that the radial basis neural network has a high capability of estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model
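A generic sketch of a radial basis function network of the type described, with a Gaussian hidden layer and a least-squares output layer, is shown below; the inputs, targets and training procedure are placeholders rather than the paper's data or algorithm.

```python
import numpy as np

class RBFNetwork:
    """Two-layer radial basis function network with Gaussian hidden units, trained
    by least squares on the output weights. A generic sketch, not the exact
    architecture or training algorithm used in the paper."""
    def __init__(self, n_centers=10, width=1.0):
        self.n_centers, self.width = n_centers, width

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))        # Gaussian activations

    def fit(self, X, y):
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), self.n_centers, replace=False)
        self.centers = X[idx]                                # centers taken from training data
        Phi = self._phi(X)
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # linear output layer
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# toy usage: inputs could stand in for (pressure, temperature, oil gravity), target viscosity
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))
y = 5.0 * np.exp(-X[:, 0]) + X[:, 1]                         # synthetic target
model = RBFNetwork(n_centers=20, width=0.5).fit(X, y)
print(model.predict(X[:3]))
```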
Procedia PDF Downloads 499
19264 A Hybrid Normalized Gradient Correlation Based Thermal Image Registration for Morphoea
Authors: L. I. Izhar, T. Stathaki, K. Howell
Abstract:
The analysis and interpretation of thermograms has been increasingly employed in the diagnosis and monitoring of diseases thanks to its non-invasive, non-harmful nature and low cost. In this paper, a novel system is proposed to improve the diagnosis and monitoring of the morphoea skin disorder based on integration with the published lines of Blaschko. In the proposed system, image registration based on both global and local registration methods is essential. For the global registration approach, a modified normalized gradient cross-correlation (NGC) method is proposed to reduce large geometrical differences between two multimodal images that are represented by smooth gray edge maps. This method is improved further by incorporating an iterative normalized cross-correlation coefficient (NCC) method. It is found that by replacing the final registration part of the NGC method, where translational differences are solved in the spatial Fourier domain, with the NCC method performed in the spatial domain, the performance and robustness of the NGC method can be greatly improved. It is shown in this paper that the hybrid NGC method not only outperforms the phase correlation (PC) method but also reduces the misregistration due to translation suffered by the modified NGC method alone for thermograms with an ill-defined jawline. This also demonstrates that by using the gradients of the gray edge maps and a hybrid technique, the performance of the PC-based image registration method can be greatly improved.
Keywords: Blaschko's lines, image registration, morphoea, thermal imaging
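For reference, a minimal sketch of FFT-based phase correlation, the baseline (PC) method mentioned above for recovering translational differences, is given below; it is not the proposed hybrid NGC/NCC scheme.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel translation of image a relative to image b by
    phase correlation: the normalised cross-power spectrum peaks at the shift."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)                 # keep only phase information
    corr = np.fft.ifft2(R).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(a.shape, dtype=float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]    # wrap to signed shifts
    return peak                                       # (row_shift, col_shift)

# toy usage: shift a random "edge map" by (5, -3) and recover the translation
rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))
moved = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation_shift(moved, img))            # expect approximately [ 5., -3.]
```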
Procedia PDF Downloads 309
19263 Comparison of Allowable Stress Method and Time History Response Analysis for Seismic Design of Buildings
Authors: Sayuri Inoue, Naohiro Nakamura, Tsubasa Hamada
Abstract:
The seismic design methods for buildings are classified into two types: static design and dynamic design. Static design is a method that applies a static force as the seismic force; it is a relatively simple design method created based on the experience of seismic motion over the past 100 years. At present, static design is used for most Japanese buildings. Dynamic design mainly refers to time history response analysis. It is a comparatively difficult design method in which the assumed earthquake motion is input into the building model and the response is examined. Currently, it is only used for skyscrapers and specific buildings. Under the present design standard in Japan, either the static or the dynamic design method may be used for medium- and high-rise buildings. However, when middle- and high-rise buildings are actually designed by the two methods, the relatively simple static design method satisfies the criteria, but the somewhat more difficult dynamic design method often does not. This is because the dynamic design method was built with the intention of designing super high-rise buildings; in short, higher safety is required compared with general buildings, and the criteria become stricter. The authors consider applying the dynamic design method to general buildings that have so far been designed by the static design method. The reason is that applying the dynamic design method is reasonable for buildings that fall outside the conventional standard structural forms, such as design-oriented buildings. For this purpose, it is important to compare the design results when the criteria of both design methods are arranged side by side. In this study, we performed time history response analysis on medium-rise buildings that were actually designed with the allowable stress method. A quantitative comparison between static design and dynamic design was conducted, and the characteristics of both design methods were examined.
Keywords: buildings, seismic design, allowable stress design, time history response analysis, Japanese seismic code
Procedia PDF Downloads 153
19262 Second Order Analysis of Frames Using Modified Newmark Method
Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi
Abstract:
The main purpose of this paper is to present the modified Newmark method as a method of non-linear frame analysis that considers the effect of the axial load (second-order analysis). The discussion is restricted to plane frameworks with a constant cross-section for each element. In addition, it is assumed that the frames are prevented from out-of-plane deflection. This part of the investigation is performed to generalize the established method to assemblage structures such as frameworks. As explained, the governing differential equations are non-linear and cannot be formulated easily due to the unknown axial loads of the struts in the frame. Under the assumption of constant axial load, the governing equations are changed to linear ones in most methods. Since the modeling and the solution of the non-linear form of the governing equations are cumbersome, the linear form of the equations is used in the established method. However, owing to the ability of the method to reconsider, during the solution procedure, the minor parameters omitted in modeling, the axial load in the elements at each stage of the iteration can be computed and applied in the next stage. Therefore, the ability of the method to present an accurate approach to the solution of non-linear equations is demonstrated again in this paper.
Keywords: nonlinear, stability, buckling, modified Newmark method
Procedia PDF Downloads 424
19261 Reliability-Based Method for Assessing Liquefaction Potential of Soils
Authors: Mehran Naghizaderokni, Asscar Janalizadechobbasty
Abstract:
This paper explores a probabilistic method for assessing the liquefaction potential of sandy soils. The current simplified methods for assessing soil liquefaction potential use a deterministic safety factor in order to determine whether liquefaction will occur or not. However, these methods are unable to determine the liquefaction probability related to a safety factor. A solution to this problem can be found by reliability analysis. This paper presents a reliability analysis method based on a widely used deterministic liquefaction analysis method. The proposed probabilistic method is formulated based on the results of reliability analyses of 190 field records and observations of soil performance against liquefaction. The results of the present study show that a safety factor greater or smaller than 1 does not by itself indicate safety or the occurrence of liquefaction, and that to assess the liquefaction probability a reliability-based analysis should be used. This reliability method uses the empirical acceleration attenuation law in the Chalos area to derive the probability density distribution function and the statistics of the earthquake-induced cyclic shear stress ratio (CSR). The CSR and CRR statistics are then used with the first-order second-moment method to calculate the relation between the liquefaction probability, the safety factor and the reliability index. Based on the proposed method, the liquefaction probability related to a safety factor can be easily calculated, and the influence of some of the soil parameters on the liquefaction probability can be quantitatively evaluated.
Keywords: liquefaction, reliability analysis, Chalos area, civil and structural engineering
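A minimal sketch of a first-order second-moment style calculation is shown below, assuming CSR and CRR are independent lognormal variables described by their means and coefficients of variation; the numbers are illustrative, and this is not the calibrated model of the paper.

```python
import numpy as np
from scipy.stats import norm

def liquefaction_probability(mu_crr, cov_crr, mu_csr, cov_csr):
    """First-order second-moment estimate of the liquefaction probability, assuming
    CRR (capacity) and CSR (demand) are independent lognormal variables described
    by their means and coefficients of variation. A generic sketch of the approach,
    not the calibrated model of the paper."""
    zeta = lambda cov: np.sqrt(np.log(1.0 + cov ** 2))                # std of ln X
    lam = lambda mu, cov: np.log(mu) - 0.5 * np.log(1.0 + cov ** 2)   # mean of ln X
    beta = (lam(mu_crr, cov_crr) - lam(mu_csr, cov_csr)) / \
           np.hypot(zeta(cov_crr), zeta(cov_csr))                     # reliability index
    return beta, norm.cdf(-beta)                                      # P(CRR < CSR)

# illustrative numbers: nominal safety factor mu_crr / mu_csr = 1.2
beta, p_liq = liquefaction_probability(mu_crr=0.30, cov_crr=0.25, mu_csr=0.25, cov_csr=0.20)
print(f"reliability index = {beta:.2f}, liquefaction probability = {p_liq:.2%}")
```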
Procedia PDF Downloads 469
19260 Parallelizing the Hybrid Pseudo-Spectral Time Domain/Finite Difference Time Domain Algorithms for the Large-Scale Electromagnetic Simulations Using Message Passing Interface Library
Authors: Donggun Lee, Q-Han Park
Abstract:
Due to its coarse grid, the Pseudo-Spectral Time Domain (PSTD) method has advantages over the Finite Difference Time Domain (FDTD) method in terms of memory requirement and operation time. However, since its parallelization efficiency is much lower than that of FDTD, PSTD is not a useful method for large-scale electromagnetic simulation on a parallel platform. In this paper, we propose a parallelization technique for the hybrid PSTD-FDTD (HPF) method, which simultaneously possesses the efficient parallelizability of FDTD and the speed and low memory requirement of PSTD. The parallelization cost of the HPF method is exactly the same as that of parallel FDTD, yet it occupies much less memory and has a faster operation speed than parallel FDTD. Experiments on distributed memory systems have shown that the parallel HPF method saves up to 96% of the operation time and reduces the memory requirement by 84%. Also, by combining the OpenMP library with the MPI library, we further reduced the operation time of the parallel HPF method by 50%.
Keywords: FDTD, hybrid, MPI, OpenMP, PSTD, parallelization
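As a rough illustration of the parallelization pattern, the following mpi4py sketch performs the halo (ghost-cell) exchange for a plain 1D FDTD update with MPI domain decomposition; the PSTD part of the hybrid method and the actual 3D field updates are not reproduced, and all parameters are illustrative.

```python
import numpy as np
from mpi4py import MPI   # run with: mpirun -n 4 python halo_demo.py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                          # cells owned by this rank
ez = np.zeros(n_local + 2)             # Ez field with ghost cells at [0] and [-1]
hy = np.zeros(n_local + 2)

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(200):
    # right ghost of Ez comes from the right neighbour's first interior cell
    comm.Sendrecv(ez[1:2], dest=left, recvbuf=ez[-1:], source=right)
    hy[1:-1] += 0.5 * (ez[2:] - ez[1:-1])        # 1D leapfrog, Courant number 0.5
    # left ghost of Hy comes from the left neighbour's last interior cell
    comm.Sendrecv(hy[-2:-1], dest=right, recvbuf=hy[0:1], source=left)
    ez[1:-1] += 0.5 * (hy[1:-1] - hy[0:-2])
    if rank == 0 and step == 50:
        ez[n_local // 2] += 1.0                  # hard source injected on rank 0

print(f"rank {rank}: max |Ez| = {np.abs(ez).max():.3f}")
```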
Procedia PDF Downloads 146
19259 Modeling and Simulation of InAs/GaAs and GaSb/GaAs Quantum Dot Solar Cells in SILVACO TCAD
Authors: Fethi Benyettou, Abdelkader Aissat, M. A. Benammar
Abstract:
In this work, we use the Silvaco TCAD software for the modeling and simulation of a standard GaAs solar cell and of InAs/GaAs and GaSb/GaAs p-i-n quantum dot solar cells. When comparing the 20-layer InAs/GaAs and GaSb/GaAs quantum dot solar cells with the standard GaAs solar cell, the conversion efficiency in the simulation results increased from 16.48% to 22.6% and from 16.48% to 22.42%, respectively. Also, the absorption edge for photons with low energies extended from 900 nm to 1200 nm.
Keywords: SILVACO TCAD, quantum dot, simulation, materials engineering
Procedia PDF Downloads 500
19258 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method
Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent
Abstract:
A method of modelling the topography used in the simulation of riverbeds is proposed in this paper, which removes the need for datapoints and measurements of the physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools; the proposed method overcomes this by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces the resources spent on modelling bed topography. The method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground, by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the bed topography generated by the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
Keywords: bed topography, FBM, LBM, shallow water, simulations
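A minimal sketch of generating an FBM-like bed-elevation profile by spectral synthesis is shown below, assuming a power-law Fourier spectrum controlled by a Hurst exponent; the paper's exact algorithm, parameters and margin-of-error calibration are not reproduced.

```python
import numpy as np

def fbm_profile(n=512, hurst=0.7, amplitude=1.0, seed=0):
    """Generate a 1D bed-elevation profile by spectral synthesis of fractional
    Brownian motion: Fourier amplitudes follow f^(-(2H+1)/2) with random phases.
    A generic sketch of the FBM idea; the paper's exact algorithm and parameters
    (Hurst exponent, scaling) are not reproduced here."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    mag = np.zeros(freqs.size)
    mag[1:] = freqs[1:] ** (-(2.0 * hurst + 1.0) / 2.0)     # power-law spectrum
    profile = np.fft.irfft(mag * np.exp(1j * phases), n=n)
    profile -= profile.mean()
    return amplitude * profile / np.abs(profile).max()      # normalised elevation

# example: regenerate the bed with a new seed to mimic topography change over time
bed_t0 = fbm_profile(seed=0)
bed_t1 = fbm_profile(seed=1)
print(bed_t0[:5], bed_t1[:5])
```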
Procedia PDF Downloads 97