Search results for: lattice discrete element method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21303

18033 Dissimilarity Measure for General Histogram Data and Its Application to Hierarchical Clustering

Authors: K. Umbleja, M. Ichino

Abstract:

Symbolic data mining has been developed to analyze very large datasets. It is also useful when entry-specific details must remain hidden, and it is quickly gaining popularity as the datasets in need of analysis grow ever larger. One type of symbolic data is the histogram, which stores a huge amount of information in a single variable with a high level of granularity. Other types of symbolic data can also be described as histograms, which makes the histogram a very important and general symbolic data type: a method developed for histograms can also be applied to other types of symbolic data. Because of their complex structure, however, histograms are complicated to analyze. This paper proposes a method for comparing two histogram-valued variables and thereby finding the dissimilarity between two histograms. The proposed method takes the Ichino-Yaguchi dissimilarity measure for mixed feature-type data analysis as a base and develops a dissimilarity measure specifically for histogram data, one that can compare histograms with different numbers of bins and different bin widths (so-called general histograms). The proposed dissimilarity measure is then used as a measure for clustering. Furthermore, a linkage method based on weighted averages is proposed, together with a concept of cluster compactness for measuring the quality of the clustering. The method is validated by application to real datasets. As a result, the proposed dissimilarity measure is found to produce adequate and comparable results on general histograms, without loss of detail and without the need to transform the data.
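The idea of comparing histograms with unequal bins can be illustrated with a small sketch. This is not the paper's Ichino-Yaguchi-based measure; it is a hypothetical stand-in that compares the two piecewise-linear CDFs on a common grid, which likewise handles different bin counts and bin widths:

```python
import numpy as np

def histogram_dissimilarity(bins_a, probs_a, bins_b, probs_b, grid_size=256):
    """Dissimilarity between two 'general' histograms (different numbers of
    bins and bin widths). bins_*: bin edges (length n+1); probs_*: bin
    probabilities (length n, summing to 1)."""
    lo = min(bins_a[0], bins_b[0])
    hi = max(bins_a[-1], bins_b[-1])
    grid = np.linspace(lo, hi, grid_size)

    def cdf(bins, probs, x):
        # piecewise-linear CDF of a histogram, evaluated at the grid points
        cum = np.concatenate([[0.0], np.cumsum(probs)])
        return np.interp(x, bins, cum)

    dx = grid[1] - grid[0]
    return float(np.sum(np.abs(cdf(bins_a, probs_a, grid)
                               - cdf(bins_b, probs_b, grid))) * dx)
```

Identical histograms score zero, and the score grows with the separation between the distributions, regardless of how the bins are laid out.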

Keywords: dissimilarity measure, hierarchical clustering, histograms, symbolic data analysis

Procedia PDF Downloads 162
18032 A Problem with IFOC and a New PWM Based 180 Degree Conduction Mode

Authors: Usman Nasir, Minxiao Han, S. M. R. Kazmi

Abstract:

Three-phase inverters in use today are based on field orientation control (FOC) and sine wave PWM (SPWM) techniques, because the 120-degree and 180-degree conduction methods produce a high total harmonic distortion (THD) in the power system. The indirect field orientation control (IFOC) method is difficult to implement in real systems due to speed sensor accuracy issues. This paper discusses the problem with IFOC and presents a PWM-based 180-degree conduction mode for the three-phase inverter. The modified control method improves the THD, and the paper compares the results obtained with the modified method against the conventional 180-degree conduction mode.

Keywords: three phase inverters, IFOC, THD, sine wave PWM (SPWM)

Procedia PDF Downloads 428
18031 Thermal Resistance of Special Garments Exposed to a Radiant Heat

Authors: Jana Pichova, Lubos Hes, Vladimir Bajzik

Abstract:

Protective clothing is designed to keep the wearer safe in hazardous conditions, or to enable short working operations to be performed without injury or discomfort. Firefighters and other related workers are exposed to abnormal heat, which can be of conductive, convective, or radiant type. Their garments are designed to resist these conditions and to prevent burn injuries or death. However, the thermal comfort of a firefighter exposed to a high heat source has not been studied yet. Thermal resistance is the parameter that best represents thermal comfort. In this study, a new method for testing the thermal resistance of special clothing exposed to a high-radiation heat source was designed. The method simulates a human body wearing a single- or multi-layered garment exposed to radiant heat. The setup enables the radiant heat flow to be measured over time without the effect of convection. The new testing method is verified on a chosen group of textiles for firefighters.

Keywords: protective clothing, radiative heat, thermal comfort of firefighters, thermal resistance of special garments

Procedia PDF Downloads 380
18030 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

Currently, analyzing large-scale recurrent event data poses many challenges, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method using parametric frailty models is proposed. Specifically, the data are randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual subset. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method, and the approach is applied to a large real dataset of repeated heart failure hospitalizations.
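The split-estimate-combine step can be sketched as follows. A simple exponential-rate MLE stands in for the paper's parametric frailty model (an assumption made for illustration); the combination is the usual inverse-variance weighting:

```python
import numpy as np

def divide_and_conquer_mle(data, n_subsets, rng=None):
    """Divide-and-conquer estimation sketch.

    1. Randomly split the data into subsets.
    2. Compute the MLE and its estimated asymptotic variance on each subset.
    3. Combine the subset estimates with inverse-variance weights.
    """
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(data))
    estimates, variances = [], []
    for chunk in np.array_split(idx, n_subsets):
        x = data[chunk]
        lam = 1.0 / x.mean()            # MLE of the exponential rate
        var = lam ** 2 / len(x)         # asymptotic variance of that MLE
        estimates.append(lam)
        variances.append(var)
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))
```

With well-behaved subset estimators, the weighted combination lands very close to the full-data MLE, which is the asymptotic-equivalence property the abstract describes.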

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 166
18029 A New Center of Motion in Cabling Robots

Authors: Alireza Abbasi Moshaii, Farshid Najafi

Abstract:

In this paper, a new model for creating a centre of motion is proposed. The method uses cables, which makes it light and easy to assemble and therefore very useful in robots. It is especially suitable for robots that need to stay in contact with objects. The accuracy of the idea is proved by an experiment. The system can be used in robots that need a fixed point of contact while making a circular motion, such as dancing, medical, or repair robots.

Keywords: centre of motion, robotic cables, permanent touching, mechatronics engineering

Procedia PDF Downloads 443
18028 Stability Analysis and Controller Design of Further Development of Miniaturized Mössbauer Spectrometer II for Space Applications with Focus on the Extended Lyapunov Method – Part I –

Authors: Mohammad Beyki, Justus Pawlak, Robert Patzke, Franz Renz

Abstract:

In the context of planetary exploration, the MIMOS II (miniaturized Mössbauer spectrometer) serves as a proven and reliable measuring instrument. The transmission behaviour of the electronics in the Mössbauer spectroscopy is newly developed and optimized. For this purpose, the overall electronics is split into three parts. This elaboration deals exclusively with the first part of the signal chain, for the evaluation of photons in experiments with gamma radiation. In parallel to the analysis of the electronics, a new method for the stability analysis of linear and non-linear systems is presented: the extended method of Lyapunov's stability criteria. The design helps to weigh advantages and disadvantages against other simulated circuits in order to optimize the MIMOS II for terrestrial and extraterrestrial measurement. Finally, after the stability analysis, the controller design according to Ackermann is performed, achieving the best possible optimization of the output variable through a skillful pole assignment.
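The final pole-placement step (Ackermann's formula) can be illustrated in isolation. This is a generic single-input sketch applied to a toy double integrator, not the actual MIMOS II electronics model:

```python
import numpy as np

def ackermann_gain(A, B, poles):
    """Ackermann's formula: state-feedback gain K placing the closed-loop
    poles of x' = Ax + Bu at the given locations (single-input systems)."""
    A = np.atleast_2d(A)
    B = np.atleast_2d(B).reshape(-1, 1)
    n = A.shape[0]
    # controllability matrix [B, AB, ..., A^{n-1}B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # desired characteristic polynomial evaluated at the matrix A
    coeffs = np.poly(poles)                       # [1, a1, ..., an]
    phiA = sum(c * np.linalg.matrix_power(A, n - i)
               for i, c in enumerate(coeffs))
    e_n = np.zeros((1, n))
    e_n[0, -1] = 1.0                              # last row selector
    return e_n @ np.linalg.inv(C) @ phiA          # K = e_n C^{-1} phi(A)
```

For a double integrator with poles placed at -1 and -2, this yields K = [2, 3], and the closed-loop matrix A - BK has exactly the requested eigenvalues.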

Keywords: Mössbauer spectroscopy, electronic signal amplifier, light processing technology, photocurrent, trans-impedance amplifier, extended Lyapunov method

Procedia PDF Downloads 100
18027 An Online Adaptive Thresholding Method to Classify Google Trends Data Anomalies for Investor Sentiment Analysis

Authors: Duygu Dere, Mert Ergeneci, Kaan Gokcesu

Abstract:

Google Trends data has gained increasing popularity in applications of behavioral finance, decision science, and risk management. Because of Google's wide range of use, the Trends statistics provide significant information about investor sentiment and intention, which can be used as decisive factors in corporate and risk management. However, an anomaly, i.e., a significant increase or decrease in a certain query, cannot be detected by state-of-the-art computational applications due to the random baseline noise of the Trends data, which is modelled as additive white Gaussian noise (AWGN). Since the baseline noise power changes gradually over time, an adaptive thresholding method is required to track and learn the baseline noise for correct classification. To this end, we introduce an online method to classify meaningful deviations in Google Trends data. Through extensive experiments, we demonstrate that our method can successfully classify various anomalies across many different datasets.
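A minimal online tracker conveys the idea. This sketch uses a generic EWMA estimate of the baseline mean and noise power with a k-sigma decision rule; the paper's soft-minimum thresholding and convex-optimization machinery are not reproduced here:

```python
class AdaptiveThreshold:
    """Online anomaly classifier sketch for a series whose AWGN baseline
    drifts slowly. Samples flagged as anomalies do not update the baseline,
    so the tracker does not adapt to the very deviations it should detect."""

    def __init__(self, alpha=0.05, k=4.0):
        self.alpha, self.k = alpha, k
        self.mean, self.var = 0.0, 1.0

    def update(self, x):
        """Return True if x is classified as an anomaly; otherwise adapt."""
        sigma = self.var ** 0.5
        if abs(x - self.mean) > self.k * sigma:
            return True                          # anomaly: skip adaptation
        d = x - self.mean
        self.mean += self.alpha * d              # EWMA of the baseline level
        self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return False
```

After a warm-up on ordinary samples, the threshold k·sigma has shrunk to the scale of the baseline noise, so a genuine spike stands out while baseline-level values pass through.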

Keywords: adaptive data processing, behavioral finance, convex optimization, online learning, soft minimum thresholding

Procedia PDF Downloads 167
18026 The Influence of High Temperatures on HVFA Concrete Columns by NDT Methods

Authors: D. Jagath Kumari, K. Srinivasa Rao

Abstract:

Quality assurance of structures subjected to high temperatures is now an enforced measure for structural engineers. The existing relations between strength and nondestructive measurements, which were established under normal conditions, are not suitable for concretes that have been exposed to high temperatures. The scope of this work is to investigate, by non-destructive tests (NDT), the influence of high temperatures of short duration on the residual properties of reinforced HVFA concrete columns that affect their strength. Fly ash concrete is increasingly used in the design of normal-strength, high-strength, and high-performance concretes. In this paper, the authors reveal the influence of high temperatures on HVFA concrete columns. The columns were heated from 100°C to 800°C in increments of 100°C and allowed to cool to room temperature by two methods: air cooling and immediate water quenching. All specimens were tested identically, before and after heating, for compressive strength and material integrity, using a rebound hammer and an ultrasonic pulse velocity (UPV) meter respectively. HVFA concrete retained more residual strength with the water quenching method than with the air cooling method.

Keywords: HVFA concrete, NDT methods, residual strength, non-destructive tests

Procedia PDF Downloads 457
18025 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model

Authors: Nicolae Bold, Daniel Nijloveanu

Abstract:

The cropping-system idea is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutritive substances) while increasing production at the same time, taking particular crop characteristics into account. Combining this powerful method with genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has proven efficient at solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize the usage of resources, in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed. The algorithm finds several optimized cropping-system possibilities that have the highest profit and thus minimize costs. It uses genetic methods (mutation, crossover) and structures (genes, chromosomes): a cropping-system possibility is treated as a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method are presented in a dedicated section. Implementing this method would benefit farmers by giving them hints and helping them use their resources efficiently.
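A compact sketch of the chromosome/gene encoding described above. The crop names, per-crop profits, and the repeat penalty below are illustrative placeholders, not the paper's data:

```python
import random

def evolve_rotation(crops, profit, penalty, length=6, pop_size=40,
                    generations=200, seed=1):
    """GA sketch: a chromosome is a crop rotation (a sequence of crop genes);
    fitness = summed per-crop profit minus a penalty whenever the same crop
    is planted in two consecutive seasons."""
    rng = random.Random(seed)

    def fitness(chrom):
        score = sum(profit[c] for c in chrom)
        score -= sum(penalty for a, b in zip(chrom, chrom[1:]) if a == b)
        return score

    pop = [[rng.choice(crops) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                   # mutation of one gene
                child[rng.randrange(length)] = rng.choice(crops)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

With repeat penalties, the evolved rotation avoids planting the same crop in consecutive seasons while favoring the most profitable crops, which mirrors the rotation constraints the abstract describes.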

Keywords: chromosomes, cropping, genetic algorithm, genes

Procedia PDF Downloads 428
18024 Polynomial Chaos Expansion Combined with Exponential Spline for Singularly Perturbed Boundary Value Problems with Random Parameter

Authors: W. K. Zahra, M. A. El-Beltagy, R. R. Elkhadrawy

Abstract:

Many practical problems in science and technology have developed over the past decades, for instance in mathematical boundary layer theory or in the approximation of solutions to problems described by differential equations. When such problems involve very large or very small parameters, they become increasingly complex and therefore require the use of asymptotic methods. In this work, we consider singularly perturbed boundary value problems which contain very small parameters, and we moreover treat these perturbation parameters as random variables. We propose a numerical method to solve this kind of problem, based on an exponential spline, a Shishkin mesh discretization, and polynomial chaos expansion. The polynomial chaos expansion handles the randomness in the perturbation parameter, while Monte Carlo simulations (MCS) are used to validate the solution and the accuracy of the proposed method. Numerical results are provided to show the applicability and efficiency of the proposed method, which maintains remarkably high accuracy and ε-uniform convergence of almost second order.
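The Shishkin mesh ingredient is easy to show concretely. The sketch below uses the standard piecewise-uniform construction for a single boundary layer at x = 1, with the usual transition point τ = min(1/2, σ·ε·ln N); the paper's two-parameter setting and exponential-spline discretization are not reproduced:

```python
import numpy as np

def shishkin_mesh(n, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a problem with a
    boundary layer of width O(eps) at x = 1. Half of the n intervals are
    spent resolving the layer region [1 - tau, 1]."""
    tau = min(0.5, sigma * eps * np.log(n))
    coarse = np.linspace(0.0, 1.0 - tau, n // 2 + 1)   # outer region
    fine = np.linspace(1.0 - tau, 1.0, n // 2 + 1)     # layer region
    return np.concatenate([coarse, fine[1:]])
```

For small ε the layer-region spacing is orders of magnitude finer than the outer spacing, which is what delivers the ε-uniform convergence mentioned in the abstract.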

Keywords: singular perturbation problem, polynomial chaos expansion, Shishkin mesh, two small parameters, exponential spline

Procedia PDF Downloads 160
18023 Port Governance in Santos, Brazil: A Qualitative Approach

Authors: Guilherme B. B. Vieira, Rafael M. da Silva, Eliana T. P. Senna, Luiz A. S. Senna, Francisco J. Kliemann Neto

Abstract:

Given the importance of ports as links in global supply chains and as key elements for inducing competitiveness in their hinterlands, the number of studies devoted to port governance, management, and operations has increased in recent decades. Some of these studies address the port governance model as an element for improving coordination among the actors of the port logistics chain and for generating better port performance. In this context, the present study analyzes the governance of the Port of Santos through individual interviews with port managers, based on a conceptual model that considers the key dimensions associated with port governance. The results reinforce the usefulness of the applied model and highlight some existing improvement opportunities in the port studied.

Keywords: port governance, model, Port of Santos, managers’ perception

Procedia PDF Downloads 536
18022 Introduction of the Harmfulness of the Seismic Signal in the Assessment of the Performance of Reinforced Concrete Frame Structures

Authors: Kahil Amar, Boukais Said, Kezmane Ali, Hannachi Naceur Eddine, Hamizi Mohand

Abstract:

The principle of seismic performance evaluation methods is to provide a measure of how likely a building, or a set of buildings, is to be damaged by an earthquake. The common objective of many of these methods is to supply classification criteria. The purpose of this study is to present a method for assessing the seismic performance of structures based on the pushover method. We are particularly interested in reinforced concrete frame structures, which represent a significant percentage of the structures damaged after a seismic event. The work is based on characterizing the seismic motion of the various earthquake zones in terms of PGA and PGD, obtained by means of the SIMQK_GR and PRISM software, and a correlation between the performance points and the scalars characterizing the earthquakes is developed.

Keywords: seismic performance, pushover method, characterization of seismic motion, harmfulness of the seismic signal

Procedia PDF Downloads 383
18021 On the System of Split Equilibrium and Fixed Point Problems in Real Hilbert Spaces

Authors: Francis O. Nwawuru, Jeremiah N. Ezeora

Abstract:

In this paper, a new algorithm for solving the system of split equilibrium and fixed point problems in real Hilbert spaces is considered. The equilibrium bifunction involves a finite family of pseudo-monotone mappings, which is an improvement over monotone operators. Moreover, the sought solution is also a common fixed point of a finite family of nonexpansive mappings. The regularization parameters do not depend on Lipschitz constants, and the computation of the stepsize, which plays a crucial role in the convergence analysis of the proposed method, does not require prior knowledge of the norm of the involved bounded linear map. Furthermore, to speed up the rate of convergence, an inertial term is introduced into the proposed method. Under standard assumptions on the operators and the control sequences, using a modified Halpern iteration method, we establish strong convergence, a desired result in applications. Finally, the proposed scheme is applied to solve some optimization problems. The results obtained improve numerous results announced earlier in this direction.
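The backbone iteration can be sketched on a toy nonexpansive map. This shows only an inertial Halpern scheme with anchor u and the illustrative stepsize choice a_n = 1/(n+2), not the paper's full split-equilibrium algorithm:

```python
import numpy as np

def halpern_inertial(T, u, x0, iters=2000, theta=0.3):
    """Inertial Halpern iteration sketch for a nonexpansive map T:
        w_n     = x_n + theta * (x_n - x_{n-1})      # inertial extrapolation
        x_{n+1} = a_n * u + (1 - a_n) * T(w_n)       # Halpern anchor step
    with a_n = 1/(n+2) -> 0, sum a_n = infinity (standard conditions)."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for n in range(iters):
        w = x + theta * (x - x_prev)
        a = 1.0 / (n + 2)
        x_prev, x = x, a * u + (1 - a) * T(w)
    return x
```

For a map with a unique fixed point, the iterates converge strongly to that point; the inertial term typically accelerates the early iterations.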

Keywords: equilibrium, Hilbert spaces, fixed point, nonexpansive mapping, extragradient method, regularized equilibrium

Procedia PDF Downloads 49
18020 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms

Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li

Abstract:

High-precision measurement of a target's position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts: DAN, a picture-sequence algorithm, and NMPE, an optimization algorithm that minimizes the projection error; together they greatly improve the measurement accuracy of the target's position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that, with a laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48 ± 0.3 m, respectively. In addition, we compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs: DAN-NMPE attains significant improvements over existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.

Keywords: monocular camera, GPS, positioning, measurement

Procedia PDF Downloads 144
18019 An Automated R-Peak Detection Method Using Common Vector Approach

Authors: Ali Kirkbas

Abstract:

R-peaks in an electrocardiogram (ECG) are signs of cardiac activity that reveal valuable information about cardiac abnormalities, which in some cases can lead to death. This paper examines the problem of detecting R-peaks in ECG signals, which is in fact a two-class pattern classification problem. To handle this problem with reliably high accuracy, we propose to use the common vector approach, a successful machine learning algorithm. The dataset used in the proposed method is obtained from MIT-BIH and is publicly available. The results are compared with those of other popular methods under standard performance metrics. They show that the proposed method performs better than the compared methods in terms of diagnostic accuracy, with a simplicity that allows it to be operated on wearable devices.
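The classification core of the common vector approach can be sketched generically. For each class, the "common vector" is what remains of a training sample after removing its projection onto the class's difference subspace span{x_i - x_0}; a test sample is assigned to the class whose common vector its own residual matches best. The synthetic windows in the usage below are stand-ins; real inputs would be fixed-length ECG segments labeled peak/non-peak:

```python
import numpy as np

def common_vector_classifier(train_classes):
    """Common vector approach (CVA) sketch for a two-class problem.
    train_classes: list of (n_samples, dim) arrays, one per class.
    Returns a classify(x) function mapping a sample to a class index."""
    models = []
    for X in train_classes:
        diffs = X[1:] - X[0]                # difference vectors of the class
        Q, _ = np.linalg.qr(diffs.T)        # basis of the difference subspace
        common = X[0] - Q @ (Q.T @ X[0])    # the class's common vector
        models.append((Q, common))

    def classify(x):
        # distance between x's residual and each class's common vector
        dists = [np.linalg.norm((x - Q @ (Q.T @ x)) - c) for Q, c in models]
        return int(np.argmin(dists))

    return classify
```

When the within-class variation of a new sample lies in the training difference subspace, its residual reproduces the class's common vector almost exactly, which is what makes the decision rule so simple.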

Keywords: ECG, R-peak classification, common vector approach, machine learning

Procedia PDF Downloads 64
18018 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites

Authors: Sarra Haouala, Issam Doghri

Abstract:

In this work, a multiscale computational strategy is proposed for the analysis of structures which are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long-time response while reducing the computational cost considerably. The proposed computational framework is a combination of mean-field space homogenization, based on the generalized incrementally affine formulation for VE-VP composites, and the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro- and macro-chronological time scales and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time scale) problems. The former is purely VE and is solved once for each macro time step, whereas the latter problem is nonlinear and is solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both the micro- and macro-chronological problems to determine the micro- and macro-chronological effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory, assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic predictor and plastic correction. The proposal is implemented for an extended Mori-Tanaka scheme and verified against finite element simulations of representative volume elements, for a number of polymer composite materials subjected to large numbers of cycles.

Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization

Procedia PDF Downloads 369
18017 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro Pulling down Method

Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa

Abstract:

Energy harvesting techniques have been widely developed in recent years, due to the high demand for power supplies for 'Internet of Things' devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have attracted attention. Among magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength, and a high magnetostriction constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman-Stockbarger method or the Czochralski method. With these methods, big bulk crystals up to 2-3 inches in diameter can be grown. However, non-uniformity of the chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduced production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals in different shapes such as rods, plates, tubes, and thin fibers. The advantages of this method are low segregation, due to the high growth rate and the small diffusion of the melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the shaped growth of long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown by the μ-PD method using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 x 1 mm² in size. A <100>-oriented iron-based alloy was used as a seed crystal. Alloy crystal plates of 5 x 1 x 320 mm³ were successfully grown. The results of crystal growth, chemical composition analysis, magnetostrictive properties, and a prototype vibration energy harvester are reported.
Furthermore, continuous crystal growth using a powder supply system will be reported, which minimizes the non-uniformity of the chemical composition along the growth direction.

Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al

Procedia PDF Downloads 335
18016 State Estimation Based on Unscented Kalman Filter for Burgers’ Equation

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

Controlling the flow of fluids is a challenging problem that arises in many fields. Burgers' equation is a fundamental equation for several flow phenomena such as traffic, shock waves, and turbulence. The optimal feedback control method known as model predictive control has been proposed for Burgers' equation. However, model predictive control is inapplicable to systems whose state variables are not all exactly known. From a practical point of view, it is unusual for all the state variables of a system to be exactly known, because the state variables are measured through output sensors and only limited parts of them are available. In fact, the flow velocities of fluid systems usually cannot be measured over the whole spatial domain. Hence, any practical feedback controller for fluid systems must incorporate some type of state estimator. To apply model predictive control to fluid systems described by Burgers' equation, a state estimation method for Burgers' equation with limited measurable state variables must be established. To this end, we apply the unscented Kalman filter to estimate the state variables of fluid systems described by Burgers' equation. The objective of this study is to establish a state estimation method based on the unscented Kalman filter for Burgers' equation. The effectiveness of the proposed method is verified by numerical simulations.
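A minimal version of the setup can be sketched: a finite-difference model of Burgers' equation plus one predict/update cycle of a basic UKF, observing only a few grid nodes. Grid sizes, noise levels, and the sigma-point weights (Julier form with kappa = 0) are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def burgers_step(u, dt=1e-3, dx=0.125, nu=0.1):
    """One explicit finite-difference step of viscous Burgers' equation
    u_t + u u_x = nu u_xx, with the boundary values held fixed."""
    un = u.copy()
    un[1:-1] = (u[1:-1]
                - dt * u[1:-1] * (u[2:] - u[:-2]) / (2 * dx)
                + nu * dt * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2)
    return un

def ukf_step(x, P, z, f, H, Q, R):
    """One unscented Kalman filter predict/update cycle (Julier sigma points,
    kappa = 0; H is a linear observation matrix selecting measured nodes)."""
    n = len(x)
    S = np.linalg.cholesky(n * P)
    sigmas = np.vstack([x[None, :], x + S.T, x - S.T])   # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * n))
    w[0] = 0.0                                           # center weight for kappa = 0
    fx = np.array([f(s) for s in sigmas])                # propagate through the model
    xp = w @ fx                                          # predicted mean
    Pp = Q + (w * (fx - xp).T) @ (fx - xp)               # predicted covariance
    zs = fx @ H.T                                        # predicted measurements
    zm = w @ zs
    Pzz = R + (w * (zs - zm).T) @ (zs - zm)              # innovation covariance
    Pxz = (w * (fx - xp).T) @ (zs - zm)                  # cross covariance
    K = Pxz @ np.linalg.inv(Pzz)                         # Kalman gain
    return xp + K @ (z - zm), Pp - K @ Pzz @ K.T
```

Running the filter against a simulated "truth" that is only partially observed shows the measured nodes being tracked closely while the overall state error shrinks, which is exactly the limited-measurement situation the abstract targets.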

Keywords: observer systems, unscented Kalman filter, nonlinear systems, Burgers' equation

Procedia PDF Downloads 153
18015 Determination of Cadmium, Lead, Nickel, and Zinc in Some Green Tea Samples Collected from Libyan Markets

Authors: Jamal A. Mayouf, Hashim Salih Al Bayati

Abstract:

Green tea is one of the most common drinks in all cities of Libya. Heavy metal contents, namely cadmium (Cd), lead (Pb), nickel (Ni), and zinc (Zn), were determined in four green tea samples collected from the Libyan market and in their infusions, using atomic emission spectrophotometry after acid digestion. The results indicate that the concentrations of Cd, Pb, Ni, and Zn in the tea infusion samples ranged over 0.07-0.12, 0.19-0.28, 0.09-0.15, and 0.18-0.43 mg/l after boiling for 5 min; 0.06-0.08, 0.18-0.23, 0.08-0.14, and 0.17-0.27 mg/l after boiling for 10 min; and 0.07-0.11, 0.18-0.24, 0.08-0.14, and 0.21-0.34 mg/l after boiling for 15 min, respectively. On the other hand, the concentrations of the same elements in the tea leaves ranged over 6.0-18.0, 36.0-42.0, 16.0-20.0, and 44.0-132.0 mg/kg, respectively. The concentrations of Cd, Pb, Ni, and Zn in the tea leaf samples were higher than the Prevention of Food Adulteration (PFA) limit and the World Health Organization (WHO) permissible limit.

Keywords: tea, infusion, metals, Libya

Procedia PDF Downloads 411
18014 Exact Soliton Solutions of the Integrable (2+1)-Dimensional Fokas-Lenells Equation

Authors: Meruyert Zhassybayeva, Kuralay Yesmukhanova, Ratbay Myrzakulov

Abstract:

Integrable nonlinear differential equations are an important class of nonlinear wave equations that admit exact soliton solutions. All these equations have the remarkable property that their soliton waves collide elastically. One such equation is the (1+1)-dimensional Fokas-Lenells equation. In this paper, we construct an integrable (2+1)-dimensional Fokas-Lenells equation, whose integrability is ensured by the existence of a Lax representation for it. We obtain its bilinear form from the Hirota method and, using this method, find exact one-soliton and two-soliton solutions of the (2+1)-dimensional Fokas-Lenells equation.

Keywords: Fokas-Lenells equation, integrability, soliton, the Hirota bilinear method

Procedia PDF Downloads 224
18013 Anti-Scale Magnetic Method as a Prevention Method for Calcium Carbonate Scaling

Authors: Maha Salman, Gada Al-Nuwaibit

Abstract:

The effect of the anti-scale magnetic method (AMM) in retarding scale deposition is confirmed by many researchers: it results in a new crystal morphology, one with a tendency to remain suspended rather than precipitate. AMM is considered an economical method compared with other common scale prevention methods in desalination plants, such as acid treatment and the addition of antiscalant. The current project was initiated to evaluate the effectiveness of AMM in preventing calcium carbonate scaling. The AMM was tested at different flow velocities (1.0, 0.5, 0.3, 0.1, and 0.003 m/s), different operating temperatures (50, 70, and 90°C), different feed pH values, and different magnetic field strengths. The results showed that AMM was effective in retarding calcium carbonate scale deposition and that its performance depends strongly on the flow velocity. The scale retention time was found to be affected by the operating temperature, flow velocity, and magnetic strength (MS); in general, as the operating temperature increased, the effectiveness of the AMM in retarding calcium carbonate (CaCO₃) scaling increased.

Keywords: magnetic treatment, field strength, flow velocity, magnetic scale retention time

Procedia PDF Downloads 377
18012 Mechanical Properties of Biological Tissues

Authors: Young June Yoon

Abstract:

We will present four different topics in estimating the mechanical properties of biological tissues. First, we elucidate the viscoelastic behavior of collagen molecules, whose diameter is a couple of nanometers: using molecular dynamics simulation, we observed the viscoelastic behavior at different pulling velocities. Second, the protein layer in the enamel microstructure, the so-called 'sheath', reduces the stress concentration in the enamel minerals; we examined this result using the finite element method. Third, the anisotropic elastic constants of dentin are estimated by micromechanical analysis, and the estimates are close to experimentally measured data. Last, a new formulation relating the fabric tensor to the wave velocity is established for the calcaneus by employing poroelasticity. This formulation can readily be used in future experiments.

Keywords: tissues, mechanics, mechanical properties, wave propagation

Procedia PDF Downloads 374
18011 Business Continuity Risk Review for a Large Petrochemical Complex

Authors: Michel A. Thomet

Abstract:

A discrete-event simulation model was used to perform a reliability-availability-maintainability (RAM) study of a large petrochemical complex which included sixteen process units and seven feed and intermediate streams. All the feed and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feed and intermediate streams has a probability of failure with an associated distribution and a mean time between failures (MTBF), as well as a distribution of the time to restore and a mean time to restore (MTTR). The utilities supporting the process units can also fail, with their own distributions and specific MTBF and MTTR. The model runs cover ten years or more, and the runs are repeated several times to obtain statistically relevant results. One of the main results is the on-stream factor (OSF) of each process unit (the percentage of hours in a year when the unit is running in nominal conditions). One of the objectives of the study was to investigate whether the storage capacity for each of the feed and intermediate streams was adequate. This was done by increasing the storage capacities in several steps and running the simulation to see whether, and by how much, the OSFs improved. Other objectives were to see whether the failure of the utilities was an important factor in the overall OSF, and what could be done to reduce their failure rates through redundant equipment.
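The core availability calculation behind an OSF can be illustrated for a single unit. The sketch below assumes exponential up-times and repair times (the study's actual distributions are not specified here), and the simulated value should agree with the steady-state result MTBF / (MTBF + MTTR):

```python
import random

def on_stream_factor(mtbf, mttr, horizon_hours=10 * 8760, seed=7):
    """Monte Carlo sketch of the on-stream factor (OSF) of one process unit:
    alternate exponentially distributed up-times (mean MTBF) and repair
    times (mean MTTR) over the horizon, and report the fraction of time up.
    A one-unit toy; the study's model couples sixteen units plus storage."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon_hours:
        run = rng.expovariate(1.0 / mtbf)            # time to the next failure
        up_time += min(run, horizon_hours - t)       # clip at the horizon
        t += run
        if t < horizon_hours:
            t += rng.expovariate(1.0 / mttr)         # time to restore
    return up_time / horizon_hours
```

With MTBF = 1000 h and MTTR = 50 h, the analytic availability is 1000/1050 ≈ 0.952, and a long enough simulated horizon reproduces that figure closely.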

Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF

Procedia PDF Downloads 219
18010 Land Subsidence Monitoring in Semarang and Demak Coastal Area Using Persistent Scatterer Interferometric Synthetic Aperture Radar

Authors: Reyhan Azeriansyah, Yudo Prasetyo, Bambang Darmo Yuwono

Abstract:

Land subsidence is one of the problems occurring in the coastal areas of Java Island, including the Semarang and Demak areas in the northern part of Central Java. Coastal erosion, rising sea levels, a vulnerable soil structure, and economic development activities mean that land subsidence occurs frequently in both areas. To determine how much subsidence has occurred, monitoring must be carried out, for example by remote sensing techniques such as the PS-InSAR method. PS-InSAR is a development of the DInSAR method that monitors ground-surface movement, allowing users to perform regular measurements and monitoring of fixed objects on the Earth's surface. PS-InSAR processing is done using the Stanford Method for Persistent Scatterers (StaMPS). Like other recent analysis techniques, Persistent Scatterer (PS) InSAR addresses both the decorrelation and atmospheric problems of conventional InSAR. StaMPS identifies and extracts the deformation signal even in the absence of bright scatterers, and it is also applicable in areas undergoing non-steady deformation, with no prior knowledge of the variations in deformation rate. In addition, the method can cover a large area, so that the measured subsidence can span the entire coastal areas of Semarang and Demak. The PS-InSAR method yields the annual subsidence impact on the Semarang and Demak regions. The PS-InSAR results are also compared with GPS monitoring data to determine the difference in the subsidence detected by the two methods. It is hoped that the PS-InSAR method can be used to monitor land subsidence, assist other survey methods such as GPS surveys, and inform policy in the affected coastal areas of Semarang and Demak.
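At the heart of any InSAR deformation estimate is the conversion of an interferometric phase change into a line-of-sight (LOS) displacement, d = -lambda * dphi / (4 * pi). A minimal sketch follows; the C-band wavelength is an assumption for illustration (the abstract does not state which sensor was used):

```python
import math

C_BAND_WAVELENGTH_M = 0.0555  # ~5.55 cm, typical C-band SAR; assumed value

def los_displacement_m(delta_phase_rad, wavelength_m=C_BAND_WAVELENGTH_M):
    """Convert a differential interferometric phase (radians) to a
    line-of-sight displacement in metres: d = -lambda * dphi / (4 * pi)."""
    return -wavelength_m * delta_phase_rad / (4.0 * math.pi)

# One full fringe (2*pi radians) corresponds to half a wavelength of LOS
# motion -- here about -2.8 cm (movement away from the sensor).
d = los_displacement_m(2.0 * math.pi)
```

Subsidence appears as motion away from the satellite, i.e. a negative LOS displacement, which is what StaMPS time series track per persistent scatterer.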

Keywords: coastal area, Demak, land subsidence, PS-InSAR, Semarang, StaMPS

Procedia PDF Downloads 267
18009 Rating and Generating Sudoku Puzzles Based on Constraint Satisfaction Problems

Authors: Bahare Fatemi, Seyed Mehran Kazemi, Nazanin Mehrasa

Abstract:

Sudoku is a logic-based combinatorial puzzle game that people of all ages enjoy playing. The challenging and addictive nature of this game has made it ubiquitous. Most magazines, newspapers, puzzle books, etc., publish many Sudoku puzzles every day. These puzzles often come in different levels of difficulty so that everyone, from beginner to expert, can play and enjoy the game. Generating puzzles with different levels of difficulty is a major concern of Sudoku designers. Several works in the literature propose ways of generating puzzles with a desirable level of difficulty. In this paper, we propose a method based on constraint satisfaction problems to evaluate the difficulty of Sudoku puzzles. We then propose a hill climbing method to generate puzzles with different levels of difficulty. Whereas other methods are usually capable of generating puzzles with only a few difficulty levels, our method can generate puzzles with an arbitrary number of difficulty levels. We test our method by generating puzzles of different difficulty levels and having a group of 15 people solve all the puzzles while we record the time they spend on each puzzle.
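The CSP view of Sudoku can be sketched by computing each empty cell's domain under the row/column/box all-different constraints and using how far "naked singles" propagation gets as a crude difficulty proxy. This is an illustrative toy on a 4x4 Shidoku grid for brevity, not the paper's rating method:

```python
def candidates(grid, r, c, n=4, box=2):
    """Values allowed at (r, c) under the row/column/box all-different
    constraints -- the domain of that cell in the CSP formulation."""
    used = set(grid[r]) | {grid[i][c] for i in range(n)}
    br, bc = r - r % box, c - c % box
    used |= {grid[i][j] for i in range(br, br + box) for j in range(bc, bc + box)}
    return {v for v in range(1, n + 1)} - used

def singles_difficulty(grid, n=4, box=2):
    """Crude difficulty proxy: repeatedly fill 'naked singles' (cells whose
    domain has one value); return the fraction of blanks NOT solved that way."""
    g = [row[:] for row in grid]
    blanks = sum(row.count(0) for row in g)
    progress = True
    while progress:
        progress = False
        for r in range(n):
            for c in range(n):
                if g[r][c] == 0:
                    cand = candidates(g, r, c, n, box)
                    if len(cand) == 1:
                        g[r][c] = cand.pop()
                        progress = True
    unsolved = sum(row.count(0) for row in g)
    return unsolved / blanks if blanks else 0.0

easy = [[1, 0, 3, 0],
        [0, 4, 0, 2],
        [2, 0, 4, 0],
        [0, 3, 0, 1]]
score = singles_difficulty(easy)  # 0.0: every blank falls to singles propagation
```

A hill climbing generator could then perturb clues and keep the change whenever this score moves toward the target difficulty.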

Keywords: constraint satisfaction problem, generating Sudoku puzzles, hill climbing

Procedia PDF Downloads 402
18008 A Low-Area Fully-Reconfigurable Hardware Design of Fast Fourier Transform System for 3GPP-LTE Standard

Authors: Xin-Yu Shih, Yue-Qu Liu, Hong-Ru Chou

Abstract:

This paper presents a low-area, fully-reconfigurable Fast Fourier Transform (FFT) hardware design for the 3GPP-LTE communication standard. It fully supports 32 different FFT sizes, up to 2048 FFT points. In addition, a special processing element is developed to enable reconfigurable computing, and a first-in first-out (FIFO) scheduling scheme is proposed for hardware-friendly FIFO resource arrangement. In a chip realization synthesized in TSMC 40 nm CMOS technology, the hardware circuit occupies a core area of only 0.2325 mm² and dissipates 233.5 mW at a maximal operating frequency of 250 MHz.
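The hardware pipeline itself is not reproduced here, but the transform it computes for power-of-two sizes can be sketched with a textbook radix-2 decimation-in-time FFT (a software reference only; the paper's design also covers LTE sizes that are not powers of two):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT for power-of-two input lengths."""
    n = len(x)
    if n == 1:
        return list(x)
    assert n % 2 == 0, "length must be a power of two"
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

spectrum = fft([1, 0, 0, 0, 0, 0, 0, 0])  # impulse in -> flat spectrum out
```

A single-path delay feedback (SDF) architecture evaluates the same butterfly recursion serially, reusing one butterfly and a feedback FIFO per stage, which is what makes the small core area possible.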

Keywords: reconfigurable, fast Fourier transform (FFT), single-path delay feedback (SDF), 3GPP-LTE

Procedia PDF Downloads 278
18007 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems

Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu

Abstract:

In research on automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to query both text databases and the knowledge graph simultaneously, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and the GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph- and GNN-enhanced RAG method performs excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions for future research and practical applications.
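The expanded retrieval step can be sketched as a hybrid lookup that merges hits from a text index with facts pulled from a knowledge graph before prompting the generator. Everything below (the toy corpus, triples, and overlap scoring) is illustrative, not the authors' system:

```python
def text_retrieve(query, corpus, top_k=2):
    """Rank documents by naive token overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:top_k]

def graph_retrieve(query, triples):
    """Return every (head, relation, tail) fact whose head or tail
    matches a query token -- a stand-in for graph/GNN retrieval."""
    q = set(query.lower().split())
    return [t for t in triples if {t[0].lower(), t[2].lower()} & q]

def build_context(query, corpus, triples):
    """Merge text hits and graph facts into one context block for generation."""
    docs = text_retrieve(query, corpus)
    facts = ["{} --{}--> {}".format(*t) for t in graph_retrieve(query, triples)]
    return "\n".join(docs + facts)

corpus = ["GNNs aggregate information from graph neighbourhoods.",
          "RAG retrieves documents before generating an answer."]
triples = [("RAG", "uses", "retrieval"), ("GNN", "operates_on", "graphs")]
ctx = build_context("how does RAG use retrieval", corpus, triples)
```

In the full method, a GNN would embed the matched subgraph instead of serializing raw triples, but the merge-then-generate flow is the same.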

Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP

Procedia PDF Downloads 41
18006 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single step, semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, various risk metrics can easily be calculated. The proposed method not only fills a niche in the literature of accurate numerical methods for risk allocation, to the best of our knowledge, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate with examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. By design, the computational complexity is primarily driven by the number of factors rather than the number of obligors, as is the case with Monte Carlo simulation.
The limitation of this method lies in the "curse of dimension" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even risk types other than credit risk.
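The Fourier-cosine recovery step at the core of the COS method can be sketched for any distribution whose characteristic function is known in closed form. Below, a standard normal stands in for the portfolio-loss distribution, and the truncation range [a, b] and term count N are illustrative choices; this shows only the inversion step, not the factor-copula model:

```python
import math

def cos_cdf(phi, x, a=-10.0, b=10.0, N=200):
    """Recover F(x) from the characteristic function phi via the COS method:
    expand the density in cosines on [a, b], with coefficients
    A_k = (2/(b-a)) * Re[phi(u_k) * exp(-i * u_k * a)], u_k = k*pi/(b-a),
    then integrate the series term by term up to x."""
    width = b - a
    # k = 0 term carries half weight in the cosine expansion.
    total = (1.0 / width) * phi(0.0).real * (x - a)
    for k in range(1, N):
        u = k * math.pi / width
        A_k = (2.0 / width) * (phi(u) * complex(math.cos(u * a), -math.sin(u * a))).real
        total += A_k * (width / (k * math.pi)) * math.sin(u * (x - a))
    return total

# Characteristic function of the standard normal: phi(u) = exp(-u^2 / 2).
phi_normal = lambda u: complex(math.exp(-0.5 * u * u), 0.0)
p = cos_cdf(phi_normal, 0.0)  # ~0.5 for the standard normal
```

The exponential coefficient decay is what gives the fast error convergence claimed above; in the credit setting, phi would be the (conditional) characteristic function of the portfolio loss implied by the factor-copula model.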

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 168
18005 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but a number of areas remain where further work is required. These relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the properties of the excitation load. The selection of earthquake input data for nonlinear analysis and the method of analysis are still challenging issues. Realistic artificial ground motion input data must therefore be developed to certify that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system, and that the characteristics of these parameters are coherent with those of the target parameters. Conversely, ignoring the significance of attributes such as frequency content, soil site properties and earthquake parameters may lead to misleading results, owing to the misinterpretation of the required input data and an incorrect synthesis of the analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the structure's initial geometry due to the residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements that entered the plastic range of deformation.
Consequently, the structure presumably experiences partial structural damage and is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario is more complicated still if SSI is also considered. Indeed, most earthquake design codes ignore the probability of PEF, as well as the effect of SSI on the behaviour of structures, in order to simplify the analysis procedure. Therefore, designing structures to existing codes that neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software, with the effects of SSI included. Both geometrical and mechanical damage are taken into account after the earthquake analysis step. For comparison, an identical model is also created that does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is found in the case where SSI is included in the scenario. The results are validated against the literature.

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 122
18004 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata

Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen

Abstract:

This article introduces a computationally efficient method for stacking sequence blending of composite structures. Its computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another; blending constraints can be enforced to achieve structural continuity. Many methods can be found in the literature that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes blending a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitively expensive. In this work, the local stacking sequence optimization is preconditioned using a method from the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular-automaton scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
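The cellular-automaton idea can be sketched as a local update rule that sweeps a row of panels and thickens the thinner neighbour until adjacent design regions differ by at most a fixed number of plies. This is a generic blending-constraint toy on ply counts only, not the authors' scheme, which operates on full stacking sequences:

```python
def blend_ply_counts(counts, max_step=1):
    """Iteratively raise thin panels until every adjacent pair of design
    regions differs by at most max_step plies (a simple blending rule)."""
    cells = list(counts)
    changed = True
    while changed:
        changed = False
        for i in range(len(cells) - 1):
            gap = cells[i] - cells[i + 1]
            if gap > max_step:           # left neighbour too thick: pad right
                cells[i + 1] = cells[i] - max_step
                changed = True
            elif -gap > max_step:        # right neighbour too thick: pad left
                cells[i] = cells[i + 1] - max_step
                changed = True
    return cells

blended = blend_ply_counts([10, 4, 9, 3])  # -> adjacent gaps of at most 1 ply
```

Because each cell only looks at its immediate neighbours, the cost per sweep grows linearly with the number of design regions, which is the property that makes the approach attractive for large structures.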

Keywords: composite, blending, optimization, lamination parameters

Procedia PDF Downloads 228