Search results for: likelihood estimation method
18121 SIF Computation of Cracked Plate by FEM
Authors: Sari Elkahina, Zergoug Mourad, Benachenhou Kamel
Abstract:
The main purpose of this paper is to compare computations of the stress intensity factor (SIF) for a cracked thin plate made of the aluminum alloys 7075-T6 and 2024-T3 used in aeronautical structures, under uniaxial loading. The evaluation is based on the finite element method with the virtual power principle, through two techniques: displacement extrapolation and the G−θ method. The first consists of extrapolating the nodal displacements near the crack tip using a refined triangular mesh with T3 and T6 special elements, while the second determines the energy release rate G through the G−θ method by potential energy derivation, which corresponds numerically to post-processing the elastic solution of a cracked solid through a contour integration computed via Gauss points. The SIF results obtained from the extrapolation and G−θ methods are compared to an analytical solution in a particular case. To illustrate the influence of the mesh type and of the integration contour position, simulations are presented and analyzed.
Keywords: crack tip, SIF, finite element method, concentration technique, displacement extrapolation, aluminum alloy 7075-T6 and 2024-T3, energy release rate G, G-θ method, Gauss point numerical integration
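For orientation, the displacement-extrapolation step relies on the classical linear-elastic near-tip field. The relations below are the standard textbook LEFM formulas, not reproduced from the paper; μ denotes the shear modulus so as not to clash with the energy release rate G:

```latex
% Mode-I SIF from the crack-face opening displacement u_y(r) at distance r
% behind the tip, extrapolated to r -> 0:
\[
K_I \approx \frac{2\mu}{\kappa + 1}\,\sqrt{\frac{2\pi}{r}}\; u_y(r),
\qquad
\kappa =
\begin{cases}
3 - 4\nu & \text{plane strain},\\
(3-\nu)/(1+\nu) & \text{plane stress}.
\end{cases}
\]
% The energy route recovers the SIF from the release rate:
\[
G = \frac{K_I^{2}}{E'}, \qquad E' = E \ \text{(plane stress)}, \qquad
E' = \frac{E}{1-\nu^{2}} \ \text{(plane strain)}.
\]
```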
Procedia PDF Downloads 337
18120 Using the Yield-SAFE Model to Assess the Impacts of Climate Change on Yield of Coffee (Coffea arabica L.) Under Agroforestry and Monoculture Systems
Authors: Tesfay Gidey Bezabeh, Tânia Sofia Oliveira, Josep Crous-Duran, João H. N. Palma
Abstract:
Ethiopia's economy depends strongly on Coffea arabica production. Coffee, like many other crops, is sensitive to climate change. The urgent development and application of strategies against the negative impacts of climate change on coffee production are therefore important. An agroforestry-based system is one strategy that may ensure sustainable coffee production amidst the likely future impacts of climate change. This system combines coffee with trees that buffer climatic extremes, thereby modifying microclimate conditions. This paper assessed coffee production under 1) coffee monoculture and 2) coffee grown in an agroforestry system, under a) the current climate and b) two different future climate change scenarios. The study focused on two representative coffee-growing regions of Ethiopia with different soil, climate, and elevation conditions. A process-based growth model (Yield-SAFE) was used to simulate coffee production over a time horizon of 40 years. The climate change scenarios considered were representative concentration pathways (RCP) 4.5 and 8.5. The results revealed that in monoculture systems, current coffee yields are between 1200-1250 kg ha⁻¹ yr⁻¹, with an expected decrease of 4-38% and 20-60% under scenarios RCP 4.5 and 8.5, respectively. In agroforestry systems, however, current yields are between 1600-2200 kg ha⁻¹ yr⁻¹, and the decrease was lower, ranging between 4-13% and 16-25% under the RCP 4.5 and 8.5 scenarios, respectively. From the results, it can be concluded that coffee production under agroforestry systems has a higher level of resilience to future climate change, which reinforces the idea of using this type of management in the near future to adapt to the negative impacts of climate change on coffee production.
Keywords: Albizia gummifera, CORDEX, Ethiopia, HADCM3 model, process-based model
Procedia PDF Downloads 120
18119 Polynomial Chaos Expansion Combined with Exponential Spline for Singularly Perturbed Boundary Value Problems with Random Parameter
Authors: W. K. Zahra, M. A. El-Beltagy, R. R. Elkhadrawy
Abstract:
Many practical problems in science and technology have emerged over the past decades, for instance, in mathematical boundary layer theory or in the approximation of solutions of problems described by differential equations. When such problems involve very large or very small parameters, they become increasingly complex and therefore require the use of asymptotic methods. In this work, we consider singularly perturbed boundary value problems containing very small parameters. Moreover, we treat these perturbation parameters as random variables. We propose a numerical method to solve this kind of problem. The proposed method is based on an exponential spline, a Shishkin mesh discretization, and a polynomial chaos expansion. The polynomial chaos expansion is used to handle the randomness in the perturbation parameter. Furthermore, Monte Carlo simulations (MCS) are used to validate the solution and the accuracy of the proposed method. Numerical results are provided to show the applicability and efficiency of the proposed method, which maintains remarkably high accuracy and exhibits ε-uniform convergence of almost second order.
Keywords: singular perturbation problem, polynomial chaos expansion, Shishkin mesh, two small parameters, exponential spline
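As an illustration of the mesh ingredient, the sketch below builds a standard piecewise-uniform Shishkin mesh for a convection-diffusion problem with a single boundary layer at x = 1. This is a minimal textbook construction, not the paper's exponential-spline scheme; the two-small-parameter case would place refined regions at both ends.

```python
import numpy as np

def shishkin_mesh(N, eps, beta=1.0, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for -eps*u'' + b*u' = f
    with b >= beta > 0 and a boundary layer at x = 1. N must be even;
    half the mesh points resolve the layer region [1 - tau, 1]."""
    assert N % 2 == 0
    tau = min(0.5, sigma * eps / beta * np.log(N))   # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])

mesh = shishkin_mesh(64, eps=1e-4)
print(mesh[:3], mesh[-3:])   # coarse spacing near 0, fine spacing near 1
```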
Procedia PDF Downloads 162
18118 Numerical Computation of Specific Absorption Rate and Induced Current for Workers Exposed to Static Magnetic Fields of MRI Scanners
Authors: Sherine Farrag
Abstract:
MRI scanners currently used in Cairo possess static magnetic fields (SMFs) varying from 0.25 T up to 3 T; more than half of them have an SMF of 1.5 T. The SMF of the magnet determines the diagnostic power of a scanner, but not the worker's exposure profile. This research paper presents an approach for the numerical computation of induced electric fields and SAR values by estimating the fringe static magnetic field. The iso-gauss lines of the MR scanner were mapped, and a 7th-degree polynomial function was generated and tested. The induced current fields due to worker motion in the SMF and the SAR values for organs and tissues were then calculated. The results illustrate that the computation tool used permits quick, accurate MRI iso-gauss mapping and calculation of SAR values, which can then be used to assess the occupational exposure profile of MRI operators.
Keywords: MRI occupational exposure, MRI safety, induced current density, specific absorption rate, static magnetic fields
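The following sketch illustrates the kind of computation described: fitting a 7th-degree polynomial to hypothetical fringe-field measurements and estimating the induced electric field and current density for a worker moving through the gradient, using the simple quasi-static loop model E = (r/2) dB/dt and J = σE. All numbers are illustrative stand-ins, not the paper's data.

```python
import numpy as np

# Hypothetical fringe-field measurements: distance from bore (m) vs |B| (T).
d = np.linspace(0.5, 5.0, 20)
B = 1.5 * np.exp(-1.2 * d)            # stand-in for mapped iso-gauss data

p = np.polyfit(d, B, 7)               # 7th-degree polynomial fit, as in the paper
dBdd = np.polyder(p)                  # spatial gradient dB/dd

v = 1.0       # walking speed (m/s), illustrative
r = 0.10      # radius of an equivalent tissue loop (m), illustrative
sigma = 0.2   # tissue conductivity (S/m), illustrative

d_eval = 1.0                          # evaluate 1 m from the bore
dBdt = v * np.polyval(dBdd, d_eval)   # dB/dt seen by the moving worker
E = 0.5 * r * abs(dBdt)               # Faraday's law for a circular loop
J = sigma * E                         # induced current density (A/m^2)
print(f"E = {E:.3e} V/m, J = {J:.3e} A/m^2")
```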
Procedia PDF Downloads 430
18117 Introduction of the Harmfulness of the Seismic Signal in the Assessment of the Performance of Reinforced Concrete Frame Structures
Authors: Kahil Amar, Boukais Said, Kezmane Ali, Hannachi Naceur Eddine, Hamizi Mohand
Abstract:
The principle of seismic performance evaluation methods is to provide a measure of how likely a building, or set of buildings, is to be damaged by an earthquake. The common objective of many of these methods is to supply classification criteria. The purpose of this study is to present a method for assessing the seismic performance of structures based on the pushover method. We are particularly interested in reinforced concrete frame structures, which represent a significant percentage of the structures damaged after a seismic event. The work is based on the characterization of the seismic motion of the various earthquake zones in terms of PGA and PGD, obtained by means of the SIMQK_GR and PRISM software, and the correlation between the performance points and the scalars characterizing the earthquakes is developed.
Keywords: seismic performance, pushover method, characterization of seismic motion, harmfulness of the seismic signal
Procedia PDF Downloads 383
18116 On the System of Split Equilibrium and Fixed Point Problems in Real Hilbert Spaces
Authors: Francis O. Nwawuru, Jeremiah N. Ezeora
Abstract:
In this paper, a new algorithm for solving the system of split equilibrium and fixed point problems in real Hilbert spaces is considered. The equilibrium bifunction involves a finite family of pseudo-monotone mappings, which is an improvement over monotone operators. Moreover, the solution sought is also a common fixed point of a finite family of nonexpansive mappings. The regularization parameters do not depend on Lipschitz constants. Also, the computations of the stepsize, which plays a crucial role in the convergence analysis of the proposed method, do not require prior knowledge of the norm of the involved bounded linear map. Furthermore, to speed up the rate of convergence, an inertial term technique is introduced in the proposed method. Under standard assumptions on the operators and the control sequences, using a modified Halpern iteration method, we establish strong convergence, a desired result in applications. Finally, the proposed scheme is applied to solve some optimization problems. The result obtained improves numerous results announced earlier in this direction.
Keywords: equilibrium, Hilbert spaces, fixed point, nonexpansive mapping, extragradient method, regularized equilibrium
Procedia PDF Downloads 50
18115 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms
Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li
Abstract:
High-precision measurement of a target's position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts, DAN and NMPE: specifically, DAN is a picture-sequence algorithm, and NMPE is a relative-position optimization algorithm, which greatly improves the measurement accuracy of the target's position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that, with the laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48 ± 0.3 m, respectively. In addition, we also compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements compared to existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
Keywords: monocular camera, GPS, positioning, measurement
Procedia PDF Downloads 144
18114 An Automated R-Peak Detection Method Using Common Vector Approach
Authors: Ali Kirkbas
Abstract:
R-peaks in an electrocardiogram (ECG) are signs of cardiac activity that reveal valuable information about cardiac abnormalities, which can lead to mortality in some cases. This paper examines the problem of detecting R-peaks in ECG signals, which is, in fact, a two-class pattern classification problem. To handle this problem with reliably high accuracy, we propose to use the common vector approach, a successful machine learning algorithm. The dataset used in the proposed method is obtained from MIT-BIH, which is publicly available. The results are compared with other popular methods under the same performance metrics. The obtained results show that the proposed method performs better than the compared methods in terms of diagnostic accuracy and simplicity, and it can be operated on wearable devices.
Keywords: ECG, R-peak classification, common vector approach, machine learning
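A minimal sketch of the common vector approach follows: each class is summarized by its common vector, the component shared by all training samples after removing the class difference subspace, and a test segment is assigned to the class with the nearest common vector. Feature extraction from raw ECG and the MIT-BIH data handling are omitted; the random vectors below are placeholders.

```python
import numpy as np

def class_model(A):
    """A: (m, n) matrix of training vectors for one class (m - 1 < n).
    Returns the projector onto the orthogonal complement of the class
    difference subspace and the class common vector."""
    diffs = A[1:] - A[0]                     # difference vectors
    Q, _ = np.linalg.qr(diffs.T)             # orthonormal basis of the subspace
    P = np.eye(A.shape[1]) - Q @ Q.T         # projector onto the complement
    return P, P @ A[0]                       # common vector (same for every row)

def classify(x, models):
    """Assign x to the class whose common vector is nearest after projection."""
    return min(models, key=lambda c: np.linalg.norm(models[c][0] @ x - models[c][1]))

rng = np.random.default_rng(0)
n = 50                                       # length of one feature vector
train = {"peak": rng.normal(1.0, 1.0, (10, n)),
         "non-peak": rng.normal(0.0, 1.0, (10, n))}
models = {c: class_model(A) for c, A in train.items()}
print(classify(rng.normal(1.0, 1.0, n), models))   # expected: "peak"
```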
Procedia PDF Downloads 65
18113 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro Pulling down Method
Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa
Abstract:
Energy harvesting techniques have been widely developed in recent years due to the high demand for power supplies for 'Internet of Things' devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have attracted attention. Among magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength, and a high magnetostriction constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman-Stockbarger method or the Czochralski method. Using these methods, big bulk crystals up to 2-3 inches in diameter can be grown. However, non-uniformity of the chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduction of the production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals in different shapes such as rods, plates, tubes, thin fibers, etc. The advantages of this method are low segregation, due to the high growth rate and small diffusion of the melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the shaped growth of long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown by the μ-PD method using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 × 1 mm² in size. A <100>-oriented iron-based alloy was used as a seed crystal. Alloy crystal plates of 5 × 1 × 320 mm³ were successfully grown. The results of crystal growth, chemical composition analysis, magnetostrictive properties, and a prototype vibration energy harvester are reported. Furthermore, continuous crystal growth using a powder supply system will be reported as a way to minimize the chemical composition non-uniformity along the growth direction.
Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al
Procedia PDF Downloads 335
18112 Spatial Econometric Approaches for Count Data: An Overview and New Directions
Authors: Paula Simões, Isabel Natário
Abstract:
This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available to model data that are collected with reference to location in space, from the classical spatial econometrics approaches to recent developments in spatial econometrics for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures for different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data
Procedia PDF Downloads 596
18111 Exact Soliton Solutions of the Integrable (2+1)-Dimensional Fokas-Lenells Equation
Authors: Meruyert Zhassybayeva, Kuralay Yesmukhanova, Ratbay Myrzakulov
Abstract:
Integrable nonlinear differential equations are an important class of nonlinear wave equations that admit exact soliton solutions. All these equations have the remarkable property that their soliton waves collide elastically. One such equation is the (1+1)-dimensional Fokas-Lenells equation. In this paper, we construct an integrable (2+1)-dimensional Fokas-Lenells equation. The integrability of this equation is ensured by the existence of a Lax representation for it. We obtained its bilinear form via the Hirota method. Using the Hirota method, exact one-soliton and two-soliton solutions of the (2+1)-dimensional Fokas-Lenells equation were found.
Keywords: Fokas-Lenells equation, integrability, soliton, the Hirota bilinear method
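Schematically, the Hirota method proceeds by expanding the tau function perturbatively; the specific bilinear operators and the dispersion relation for the (2+1)-dimensional Fokas-Lenells equation are those derived in the paper, so the expansion below is only the generic template:

```latex
% Generic Hirota perturbation expansion of a tau function:
\[
f = 1 + \epsilon f_1 + \epsilon^2 f_2 + \cdots, \qquad
f_1 = e^{\eta_1}, \quad \eta_1 = k_1 x + l_1 y + \omega_1 t + \eta_1^{0},
\]
with $\omega_1(k_1, l_1)$ fixed by the dispersion relation of the bilinear form.
The two-soliton ansatz is
\[
f = 1 + e^{\eta_1} + e^{\eta_2} + A_{12}\, e^{\eta_1 + \eta_2},
\]
where the interaction coefficient $A_{12}$ encodes the elastic collision of the
two solitons (a phase shift only, with no change of shape).
```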
Procedia PDF Downloads 226
18110 Anti-Scale Magnetic Method as a Prevention Method for Calcium Carbonate Scaling
Authors: Maha Salman, Gada Al-Nuwaibit
Abstract:
The effect of the anti-scale magnetic method (AMM) in retarding scale deposition has been confirmed by many researchers: it results in a new crystal morphology, one that tends to remain suspended rather than precipitate. The AMM is considered an economical method compared to other common methods used for scale prevention in desalination plants, such as acid treatment and the addition of antiscalants. The current project was initiated to evaluate the effectiveness of the AMM in preventing calcium carbonate scaling. The AMM was tested at different flow velocities (1.0, 0.5, 0.3, 0.1, and 0.003 m/s), different operating temperatures (50, 70, and 90°C), different feed pH values, and different magnetic field strengths. The results showed that the AMM was effective in retarding calcium carbonate scale deposition, and its performance depends strongly on the flow velocity. The scale retention time was found to be affected by the operating temperature, flow velocity, and magnetic strength (MS); in general, as the operating temperature increased, the effectiveness of the AMM in retarding calcium carbonate (CaCO₃) scaling increased.
Keywords: magnetic treatment, field strength, flow velocity, magnetic scale retention time
Procedia PDF Downloads 380
18109 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform
Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail
Abstract:
The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important factor in the generation of these data, as the driver is the one who dictates the trajectory of the vehicle. Like any trajectory, it is characterized by specific parameters: position, speed, and acceleration. Constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it, by tracking the events reported by the DiaMOTO device over time, will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database that will be used for complex AI/DLM analysis is built. The central element of this description is the data string in CODEC-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage by equipping 50 vehicles with unique-code DiaMOTO devices, integrating ADAS and GPS functions, through which vehicle trajectories can be monitored 24 hours a day.
Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring
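As an illustration of the data string, the sketch below parses the header and the GPS element of the first AVL record from a Codec-8 frame, assuming the commonly documented Teltonika Codec-8 layout; the variable-length IO element and the CRC check are omitted, and the demo packet is synthetic.

```python
import struct
from datetime import datetime, timezone

def parse_codec8_gps(packet: bytes):
    """Parse the header and the GPS element of the first AVL record.
    Field layout assumed per the commonly documented Teltonika Codec-8
    format; the variable-length IO element and the CRC check are omitted."""
    preamble, length = struct.unpack_from(">II", packet, 0)
    codec_id, n_records = struct.unpack_from(">BB", packet, 8)
    assert preamble == 0 and codec_id == 0x08
    ts_ms, priority = struct.unpack_from(">QB", packet, 10)
    lon, lat, alt, angle, sats, speed = struct.unpack_from(">iihHBH", packet, 19)
    return {"time": datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc),
            "priority": priority,
            "lon_deg": lon / 1e7, "lat_deg": lat / 1e7,   # assumed 1e-7 deg scaling
            "altitude_m": alt, "angle_deg": angle,
            "satellites": sats, "speed_kmh": speed,
            "records_in_frame": n_records}

# Synthetic demo frame built with the same layout (values are arbitrary).
demo = struct.pack(">IIBBQBiihHBH", 0, 40, 0x08, 1,
                   1700000000000, 1,
                   int(110.42e7), int(-6.99e7), 12, 90, 11, 54)
print(parse_codec8_gps(demo))
```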
Procedia PDF Downloads 81
18108 Land Subsidence Monitoring in Semarang and Demak Coastal Area Using Persistent Scatterer Interferometric Synthetic Aperture Radar
Authors: Reyhan Azeriansyah, Yudo Prasetyo, Bambang Darmo Yuwono
Abstract:
Land subsidence is one of the problems that occur in the coastal areas of Java Island, among them the Semarang and Demak areas located in the northern region of Central Java. The impacts of sea erosion, rising sea levels, vulnerable soil structure, and economic development activities mean that land subsidence often occurs in both these areas. Knowing how much land subsidence has occurred in the region requires monitoring, which can be carried out by remote sensing methods such as the PS-InSAR method. PS-InSAR is a remote sensing technique, developed from the DInSAR method, that can monitor the movement of the ground surface, allowing users to perform regular measurements and monitoring of fixed objects on the surface of the earth. PS-InSAR processing is done using the Stanford Method for Persistent Scatterers (StaMPS). Like other recent analysis techniques, Persistent Scatterer (PS) InSAR addresses both the decorrelation and atmospheric problems of conventional InSAR. StaMPS identifies and extracts the deformation signal even in the absence of bright scatterers. StaMPS is also applicable in areas undergoing non-steady deformation, with no prior knowledge of the variations in deformation rate. In addition, this method can cover a large area, so that the measured land subsidence can cover all coastal areas of Semarang and Demak. The PS-InSAR method thus yields the yearly subsidence affecting the Semarang and Demak regions. The PS-InSAR results will also be compared with GPS monitoring data to determine the difference in land subsidence measured by the two methods. By utilizing remote sensing methods such as PS-InSAR, it is hoped that land subsidence monitoring can be carried out, that other survey methods such as GPS surveys can be assisted, and that the results can be used in policy determination in the affected coastal areas of Semarang and Demak.
Keywords: coastal area, Demak, land subsidence, PS-InSAR, Semarang, StaMPS
Procedia PDF Downloads 269
18107 Rating and Generating Sudoku Puzzles Based on Constraint Satisfaction Problems
Authors: Bahare Fatemi, Seyed Mehran Kazemi, Nazanin Mehrasa
Abstract:
Sudoku is a logic-based combinatorial puzzle game that people of all ages enjoy playing. The challenging and addictive nature of this game has made it ubiquitous. Most magazines, newspapers, puzzle books, etc. publish lots of Sudoku puzzles every day. These puzzles often come in different levels of difficulty so that all people, from beginner to expert, can play and enjoy the game. Generating puzzles with different levels of difficulty is a major concern of Sudoku designers. There are several works in the literature which propose ways of generating puzzles having a desirable level of difficulty. In this paper, we propose a method based on constraint satisfaction problems to evaluate the difficulty of Sudoku puzzles. Then, we propose a hill climbing method to generate puzzles with different levels of difficulty. Whereas other methods are usually capable of generating puzzles with only a few difficulty levels, our method can be used to generate puzzles with an arbitrary number of difficulty levels. We test our method by generating puzzles of different difficulty levels, having a group of 15 people solve all the puzzles, and recording the time they spend on each puzzle.
Keywords: constraint satisfaction problem, generating Sudoku puzzles, hill climbing
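A minimal sketch of the generate-and-rate loop is given below, using the backtrack count of a plain backtracking solver as a stand-in difficulty score (the paper's rating is CSP-based) and hill climbing over the set of givens. A production generator would additionally verify uniqueness of the solution.

```python
import copy
import random

def candidates(grid, r, c):
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def solve(grid, stats, shuffle=False):
    """Plain backtracking solver; stats['backtracks'] is the difficulty proxy."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                vals = candidates(grid, r, c)
                if shuffle:
                    random.shuffle(vals)
                for v in vals:
                    grid[r][c] = v
                    if solve(grid, stats, shuffle):
                        return True
                    grid[r][c] = 0
                stats["backtracks"] += 1      # dead end reached
                return False
    return True

def difficulty(puzzle):
    stats = {"backtracks": 0}
    solve(copy.deepcopy(puzzle), stats)
    return stats["backtracks"]

def generate(target, iters=500):
    """Hill climbing: toggle givens to push the difficulty proxy toward target."""
    full = [[0] * 9 for _ in range(9)]
    solve(full, {"backtracks": 0}, shuffle=True)     # random complete grid
    puzzle = copy.deepcopy(full)
    best = abs(difficulty(puzzle) - target)
    for _ in range(iters):
        r, c = random.randrange(9), random.randrange(9)
        old = puzzle[r][c]
        puzzle[r][c] = 0 if old else full[r][c]      # remove or restore a given
        err = abs(difficulty(puzzle) - target)
        if err <= best:
            best = err
        else:
            puzzle[r][c] = old                       # reject the move
    return puzzle, best

puzzle, err = generate(target=200)
print("difficulty gap:", err)
```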
Procedia PDF Downloads 402
18106 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems
Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu
Abstract:
In research on automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph- and GNN-enhanced RAG method performs excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP
Procedia PDF Downloads 42
18105 Is the Okun's Law Valid in Tunisia?
Authors: El Andari Chifaa, Bouaziz Rached
Abstract:
The central focus of this paper is to check whether Okun's law is valid in Tunisia. For this purpose, we used quarterly time series data covering the period 1990Q1-2014Q1. Firstly, we applied the error correction model instead of the difference version of Okun's law: the Engle-Granger and Johansen tests are employed to find the long-run association between unemployment and production, and an error correction mechanism (ECM) is used for the short-run dynamics. Secondly, we used the gap version of Okun's law, where the estimation is done with three band-pass filters, mathematical tools used in macroeconomics and especially in business cycle theory. The findings of the study indicate that the inverse relationship between unemployment and output is verified in the short and long term, and that Okun's law holds for the Tunisian economy, but with an Okun's coefficient lower than required. Therefore, our empirical results have important implications for structural and cyclical policymakers in Tunisia seeking to promote economic growth in a context of lower unemployment growth.
Keywords: Okun's law, validity, unit root, cointegration, error correction model, bandpass filters
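A sketch of the gap-version estimation on synthetic quarterly data is shown below, using the Baxter-King band-pass filter from statsmodels as one of the possible filters; the series are simulated stand-ins, not the Tunisian data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.filters.bk_filter import bkfilter

# Synthetic quarterly series standing in for output (log GDP) and unemployment.
rng = np.random.default_rng(1)
n = 97                                      # 1990Q1 .. 2014Q1
cycle = 0.02 * np.sin(np.arange(n) / 4.0)   # a shared business-cycle component
log_gdp = 4.0 + 0.01 * np.arange(n) + cycle + rng.normal(0, 0.004, n)
unemp = 15.0 - 60.0 * cycle + rng.normal(0, 0.2, n)

# Gap version of Okun's law: (u_t - u*_t) = beta * (y_t - y*_t) + e_t,
# with the cyclical gaps extracted by the Baxter-King band-pass filter.
y_gap = bkfilter(log_gdp, low=6, high=32, K=12)   # quarterly defaults
u_gap = bkfilter(unemp, low=6, high=32, K=12)

fit = sm.OLS(u_gap, sm.add_constant(y_gap)).fit()
print(fit.params)   # the slope is the Okun coefficient (expected negative)
```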
Procedia PDF Downloads 318
18104 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can then be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature (to the best of our knowledge, accurate numerical methods for risk allocation are lacking) but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors instead of the number of obligors, as is the case in Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method have a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
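To illustrate the inversion step, the sketch below applies the standard COS (Fang-Oosterlee) expansion to recover a density from its characteristic function, using the standard normal as a test case; the portfolio-loss characteristic function of the factor-copula model would simply take the place of the test characteristic function.

```python
import numpy as np
from scipy.stats import norm

def cos_density(cf, x, a=-10.0, b=10.0, N=128):
    """Fang-Oosterlee COS method: recover a density supported on [a, b]
    from its characteristic function cf. The same expansion, integrated
    once, yields the CDF used for loss distributions and risk metrics."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(cf(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                                   # first term halved
    return F @ np.cos(np.outer(u, x - a))

x = np.linspace(-4, 4, 9)
approx = cos_density(lambda u: np.exp(-0.5 * u**2), x)   # standard normal cf
print(np.max(np.abs(approx - norm.pdf(x))))              # near machine precision
```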
Procedia PDF Downloads 169
18103 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for stacking sequence blending of composite structures. The computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method found in the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to the cellular automata scheme that has been developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
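As a toy illustration of a cellular-automaton blending pass (not the authors' scheme, which operates on stacking sequences and lamination-parameter targets), the sketch below enforces a ply-drop continuity rule on a grid of locally optimal ply counts by purely local updates:

```python
import numpy as np

def ca_blend(thickness, max_step=1):
    """Toy cellular-automaton pass enforcing ply-drop continuity:
    adjacent design regions may differ by at most `max_step` plies.
    Cells are only ever thickened, so sizing requirements are preserved."""
    t = thickness.astype(int).copy()
    while True:
        p = np.pad(t, 1, mode="edge")
        nbr_max = np.maximum.reduce([p[:-2, 1:-1], p[2:, 1:-1],
                                     p[1:-1, :-2], p[1:-1, 2:]])
        new = np.maximum(t, nbr_max - max_step)   # raise cells that violate the rule
        if np.array_equal(new, t):
            return t
        t = new

# 18-panel horseshoe-like layout as a 3x6 grid of locally optimal ply counts
panels = np.array([[34, 30, 24, 24, 30, 34],
                   [28, 22, 18, 18, 22, 28],
                   [26, 20, 16, 16, 20, 26]])
print(ca_blend(panels, max_step=2))
```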
Procedia PDF Downloads 230
18102 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events by using ST/T-derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by using the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain optimal classification performance. Implementing the developed classification technique on real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performances of the SVM- and GMM-based classifications showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
Keywords: ECG classification, Gaussian mixture model, Neyman-Pearson approach, support vector machine
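A compact sketch of the second, GMM-based detector follows: a Gaussian mixture is fitted to synthetic stand-ins for non-ischemic ST/T features only, and the Neyman-Pearson-style threshold is set as a quantile of the healthy log-likelihoods so that the false-alarm rate is fixed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-ins for ST/T-derived feature vectors (the real ones come from ECG).
non_ischemic = rng.normal(0.0, 1.0, size=(500, 4))
ischemic = rng.normal(2.0, 1.5, size=(200, 4))

# Model the non-ischemic state only; ischemia is detected as an outlier.
gmm = GaussianMixture(n_components=3, random_state=0).fit(non_ischemic)

# Neyman-Pearson-style threshold: fix the false-alarm rate on healthy data.
alpha = 0.05
threshold = np.quantile(gmm.score_samples(non_ischemic), alpha)

detected = gmm.score_samples(ischemic) < threshold
print(f"detection rate: {detected.mean():.2f} at false-alarm rate ~{alpha}")
```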
Procedia PDF Downloads 163
18101 A New Conjugate Gradient Method with Guaranteed Descent
Authors: B. Sellami, M. Belloufi
Abstract:
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have recently been much studied. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family not only includes three already existing practical nonlinear conjugate gradient methods but also contains other families of conjugate gradient methods as subfamilies. The two-parameter family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the two-parameter family of methods. The numerical results show that this method is efficient for the given test problems. In addition, the methods related to this family are uniformly discussed.
Keywords: unconstrained optimization, conjugate gradient method, line search, global convergence
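For orientation, the sketch below shows a generic nonlinear conjugate gradient iteration with a Wolfe line search, using the Fletcher-Reeves beta; the paper's two-parameter family generalizes exactly this beta update.

```python
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Fletcher-Reeves nonlinear CG with a (strong) Wolfe line search.
    Illustrative only: the two-parameter family of the paper generalizes
    the beta update below."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        alpha, *_ = line_search(f, grad, x, d, gfk=g)
        if alpha is None:                 # line search failed: restart
            alpha, d = 1e-4, -g
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            return x
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves beta
        d = -g_new + beta * d
        g = g_new
    return x

# Rosenbrock test problem
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))
```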
Procedia PDF Downloads 456
18100 Periodic Topology and Size Optimization Design of Tower Crane Boom
Authors: Wu Qinglong, Zhou Qicai, Xiong Xiaolei, Zhang Richeng
Abstract:
In order to achieve the layout and size optimization of the web members of a tower crane boom, a truss topology and cross-section size optimization method based on a continuum is proposed, considering three typical working conditions. Firstly, the optimization model is established by replacing the web members with web plates, and the web plates are divided into several sub-domains so that the periodic soft kill option (SKO) method can be carried out for the topology optimization of the slender boom. After obtaining the optimized topology of the web plates, the optimized layout of the web members is formed by extracting the principal stress distribution. Finally, using the web member radius as the design variable, the boom compliance as the objective, and the material volume of the boom as the constraint, the cross-section size optimization mathematical model is established. The size optimization criterion is deduced from the mathematical model by the Lagrange multiplier method and the Kuhn-Tucker condition. Comparing the original boom with the optimized boom shows that this optimization method can effectively lighten the boom and improve its performance.
Keywords: tower crane boom, topology optimization, size optimization, periodic, SKO, optimization criterion
Procedia PDF Downloads 555
18099 Redefining Solar Generation Estimation: A Comprehensive Analysis of Real Utility Advanced Metering Infrastructure (AMI) Data from Various Projects in New York
Authors: Haowei Lu, Anaya Aaron
Abstract:
Understanding historical solar generation and forecasting future solar generation from interconnected Distributed Energy Resources (DER) is crucial for utility planning and interconnection studies. The existing methodology, which relies on solar radiation, weather data, and common inverter models, is becoming less accurate. Rapid advancements in DER technologies have resulted in more diverse project sites that deviate from common patterns due to factors such as the DC/AC ratio, solar panel performance, tilt angle, and the presence of DC-coupled battery energy storage systems. In this paper, the authors review 10,000 DER projects within the system and analyze the Advanced Metering Infrastructure (AMI) data for various project types to demonstrate the impact of different parameters. An updated methodology is proposed for redefining historical and future solar generation in distribution feeders.
Keywords: photovoltaic system, solar energy, fluctuations, energy storage, uncertainty
Procedia PDF Downloads 35
18098 Seismic Vulnerability of Structures Designed in Accordance with the Allowable Stress Design and Load Resistant Factor Design Methods
Authors: Mohammadreza Vafaei, Amirali Moradi, Sophia C. Alih
Abstract:
The method selected for the design of structures can affect not only their seismic vulnerability but also their construction cost. For the design of steel structures, two distinct methods have been introduced by existing codes, namely allowable stress design (ASD) and load resistant factor design (LRFD). This study investigates the effect of using these design methods on the seismic vulnerability and construction cost of steel structures. Specifically, a 20-story building equipped with a special moment-resisting frame and an eccentrically braced system was selected for this study. The building was designed for three different intensities of peak ground acceleration, namely 0.2 g, 0.25 g, and 0.3 g, using the ASD and LRFD methods. The required sizes of the beams, columns, and braces were obtained using response spectrum analysis. Then, the designed frames were subjected to nine natural earthquake records scaled to the design response spectrum. For each frame, the base shear, story shears, and inter-story drifts were calculated and compared. The results indicated that the LRFD method led to a more economical design for the frames. In addition, the LRFD method resulted in lower base shears and larger inter-story drifts when compared with the ASD method. It was concluded that the application of the LRFD method not only reduced the weights of the structural elements but also provided a higher safety margin against seismic actions when compared with the ASD method.
Keywords: allowable stress design, load resistant factor design, nonlinear time history analysis, seismic vulnerability, steel structures
Procedia PDF Downloads 270
18097 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data
Authors: Salam Khalifa, Naveed Ahmed
Abstract:
We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of the temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation
Procedia PDF Downloads 374
18096 Mine Project Evaluations in the Rising of Uncertainty: Real Options Analysis
Authors: I. Inthanongsone, C. Drebenstedt, J. C. Bongaerts, P. Sontamino
Abstract:
A major concern in evaluating the value of mining projects relates to the deficiency of the traditional discounted cash flow (DCF) method. This method does not take uncertainties into account and hence does not allow for an economic assessment of managerial flexibility and operational adaptability, which are increasingly determining long-term corporate success. Such an assessment can be performed with the real options valuation (ROV) approach, since it allows for a comparative evaluation of unforeseen uncertainties over a project's life cycle. This paper presents an economic evaluation model for open pit mining projects based on the real options valuation approach. The uncertainties in the model are metal price and cost uncertainties, and the system dynamics (SD) modeling method is used to structure and solve the real options model. The model is applied to a case study. It can be shown that managerial flexibility reacting to uncertainties may create additional value for a mining project in comparison to the outcomes of the DCF method. One important insight for management dealing with uncertainty lies in choosing the optimal time to exercise strategic options.
Keywords: DCF methods, ROV approach, system dynamics modeling methods, uncertainty
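A minimal sketch of the flexibility premium is given below, using a binomial lattice with an option to abandon the project for a fixed salvage value; the numbers are illustrative, and the paper itself uses system dynamics rather than a lattice.

```python
import numpy as np

def abandonment_option(V0, salvage, r, sigma, T, steps=200):
    """Binomial-lattice value of a project with the flexibility to abandon it
    for a fixed salvage value at any step. Illustrative only; the inputs are
    not calibrated to any real mining project."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)            # risk-neutral up-probability
    j = np.arange(steps + 1)
    V = V0 * u**j * d**(steps - j)                # terminal project values
    W = np.maximum(V, salvage)                    # abandon if below salvage
    for _ in range(steps):                        # backward induction
        cont = np.exp(-r * dt) * (p * W[1:] + (1 - p) * W[:-1])
        W = np.maximum(cont, salvage)             # keep the option alive or quit
    return W[0]

flexible = abandonment_option(V0=100.0, salvage=70.0, r=0.05, sigma=0.35, T=10.0)
print(f"static DCF value: 100.00, value with abandonment option: {flexible:.2f}")
```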
Procedia PDF Downloads 502
18095 Optimization of Process Parameters by Using Taguchi Method for Bainitic Steel Machining
Authors: Vinay Patil, Swapnil Kekade, Ashish Supare, Vinayak Pawar, Shital Jadhav, Rajkumar Singh
Abstract:
In recent times, bainitic steel has been used in automobile and non-automobile sectors due to its high strength. Bainitic steel is difficult to machine because of its high hardness; hence, in this paper, the machinability of bainitic steel is studied using the Taguchi design of experiments (DOE) approach. Conventional turning experiments were done using an L16 orthogonal array for three input parameters, viz. cutting speed, depth of cut, and feed. The Taguchi method is applied to study the effect of the machining parameters on the performance characteristics: surface roughness (Ra), cutting force, and tool wear rate. Using Taguchi analysis, the process parameters optimized for the best surface finish and minimum cutting forces were determined.
Keywords: conventional turning, Taguchi method, S/N ratio, bainitic steel machining
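The core Taguchi computation can be sketched as follows: compute the smaller-the-better signal-to-noise ratio S/N = -10 log10(mean(y²)) for each run and average it per factor level. The balanced design and the Ra values below are synthetic stand-ins for the L16 experiment.

```python
import numpy as np

# Balanced 16-run design for three 4-level factors (a Latin-square construction),
# coded 0-3: speed = i // 4, depth = i % 4, feed = (speed + depth) % 4.
design = np.array([(i // 4, i % 4, (i // 4 + i % 4) % 4) for i in range(16)])

# Hypothetical surface roughness Ra (um), two replicates per run.
rng = np.random.default_rng(3)
ra = 1.2 - 0.15 * design[:, 0] + 0.1 * design[:, 2] + rng.normal(0, 0.03, 16)
ra = np.column_stack([ra, ra + rng.normal(0, 0.03, 16)])

# Smaller-the-better signal-to-noise ratio: S/N = -10 log10(mean(y^2)).
sn = -10 * np.log10((ra**2).mean(axis=1))

for j, name in enumerate(["cutting speed", "depth of cut", "feed"]):
    means = np.array([sn[design[:, j] == lvl].mean() for lvl in range(4)])
    print(f"{name}: level S/N means {np.round(means, 2)}, best level {means.argmax()}")
```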
Procedia PDF Downloads 332
18094 Cascaded Neural Network for Internal Temperature Forecasting in Induction Motor
Authors: Hidir S. Nogay
Abstract:
In this study, two systems were created to predict the interior temperature of an induction motor. One of them consisted of a simple ANN model with two layers, ten input parameters, and one output parameter. The other consisted of eight ANN models connected to each other in a cascade; the cascaded ANN system has 17 inputs. The main reason for using a cascaded system in this study is to accomplish a more accurate estimation by increasing the number of inputs in the ANN system. The cascaded ANN system is compared with the simple conventional ANN model to demonstrate the mentioned advantages. The dataset was obtained from experimental applications; a small part of it was used to obtain more understandable graphs. The number of data points is 329, and 30% of the data was used for testing and validation. Test and validation data were determined for each ANN model separately, and the reliability of each model was tested. As a result of this study, it was found that the cascaded ANN system produced more accurate estimates than the conventional ANN model.
Keywords: cascaded neural network, internal temperature, inverter, three-phase induction motor
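A two-stage sketch of the cascading idea is shown below (the study uses eight models and experimental data; here everything is synthetic): the first network's prediction is appended to the inputs of the next network, increasing the number of inputs as described.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(329, 10))                    # ten motor/inverter inputs
inter = X[:, :5].sum(axis=1) + rng.normal(0, 0.05, 329)  # intermediate quantity
temp = 40 + 8 * inter + 3 * X[:, 5] + rng.normal(0, 0.5, 329)  # interior temp

X_tr, X_te, i_tr, i_te, y_tr, y_te = train_test_split(
    X, inter, temp, test_size=0.3, random_state=0)

# Stage 1: predict the intermediate quantity from the raw inputs.
m1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                  random_state=0).fit(X_tr, i_tr)

# Stage 2: raw inputs + stage-1 prediction -> interior temperature.
Z_tr = np.column_stack([X_tr, m1.predict(X_tr)])
Z_te = np.column_stack([X_te, m1.predict(X_te)])
m2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                  random_state=0).fit(Z_tr, y_tr)
print("cascaded R^2:", m2.score(Z_te, y_te))
```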
Procedia PDF Downloads 345
18093 Attachment Systems and Psychotherapy: An Internal Secure Caregiver to Heal and Protect the Parts of Our Clients: InCorporer Method
Authors: Julien Baillet
Abstract:
In light of 30 years of scientific research, the InCorporer Method was created in 2019 as a new approach to healing traumatic, developmental, and dissociative injuries. Following natural nervous system functions, InCorporer aims to heal, develop, and update the different defensive mammalian subsystems: fight, flight, freeze, feign death, cry for help, and energy regulator. The dimensions taken into account are: (i) healing the traumatic injuries that are still bleeding; (ii) developing the systems that never received the security, attention, and affection they needed; (iii) updating the parts that stayed stuck in the past, ignoring for too long that they are now out of danger. Through the Present Part and its caregiving skills, the InCorporer method enables a balanced, soothed, and collaborative personality system. To be as integrative as possible, the InCorporer method has been designed according to several fields of research, such as structural dissociation theory, attachment theory, and information processing theory. In this paper, the author presents how the internal caregiver is developed and trained to heal all the different parts/subsystems of our clients through mindful attention and reflex movement integration.
Keywords: PTSD, attachment, dissociation, part work
Procedia PDF Downloads 81
18092 Efficient Method for Inducing Embryos from Isolated Microspores of Durum Wheat
Authors: Zelikha Labbani
Abstract:
Durum wheat is an attractive species for studying androgenesis via isolated microspore culture, with the aim of increasing the androgenic yield of embryo induction in recalcitrant species. We describe here an efficient method for inducing embryos from isolated microspores of durum wheat. It is shown that this method, associated with pretreatment of the spikes (kept within their sheath leaves) for different durations with cold alone, cold plus mannitol, or mannitol alone, has significant positive effects on embryo production. The aim of this study was therefore to test the effect of 0.3 M mannitol and cold pretreatments on the quality and quantity of embryos produced from microspore culture of wheat cultivars.
Keywords: in vitro embryogenesis, isolated microspore culture, durum wheat, pretreatments, mannitol 0.3 M, cold pretreatment
Procedia PDF Downloads 58