Search results for: linear acceleration method
20165 Comparative Study on Daily Discharge Estimation of Soolegan River
Authors: Redvan Ghasemlounia, Elham Ansari, Hikmet Kerem Cigizoglu
Abstract:
Hydrological modeling in arid and semi-arid regions is very important. Iran has many regions with such climate conditions, including Chaharmahal and Bakhtiari province, which requires considerable attention and appropriate management. Forecasting hydrological parameters and estimating hydrological events of catchments provide important information that is widely used for the design, management, and operation of water resources such as river systems and dams. River discharge is one of these parameters. This study presents the application and comparison of several estimation methods, namely Feed-Forward Back Propagation Neural Network (FFBPNN), Multi Linear Regression (MLR), Gene Expression Programming (GEP), and Bayesian Network (BN), to predict the daily flow discharge of the Soolegan River, located in Chaharmahal and Bakhtiari province, Iran. The Soolegan gauging station was considered in this study. This station is located on the Soolegan River in the North Karoon basin, at latitude 31° 38′ and longitude 51° 14′, 2086 meters above sea level. The data used in this study are daily discharge and daily precipitation records of the Soolegan station. The FFBPNN, MLR, GEP, and BN models were developed using the same input parameters for estimating the Soolegan River's daily discharge. The results of the estimation models were compared with observed discharge values to evaluate the performance of the developed models. The results of all methods were compared and are shown in tables and charts.
Keywords: ANN, multi linear regression, Bayesian network, forecasting, discharge, gene expression programming
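Of the four methods compared, MLR is the simplest to illustrate. The sketch below fits a multi linear regression of daily discharge on precipitation and previous-day discharge by least squares; the predictor choice and all numbers are illustrative assumptions, not the Soolegan data or the authors' exact model.

```python
import numpy as np

# Hypothetical daily records: precipitation P(t) and previous-day discharge
# Q(t-1) as predictors of discharge Q(t). Values are illustrative only.
precip = np.array([0.0, 5.2, 12.1, 3.3, 0.0, 8.7, 1.1, 0.0])
q_lag1 = np.array([4.0, 3.8, 5.6, 9.2, 7.1, 5.0, 6.9, 5.5])
q_obs = np.array([3.8, 5.6, 9.2, 7.1, 5.0, 6.9, 5.5, 4.7])

# Multi linear regression: Q(t) = b0 + b1*P(t) + b2*Q(t-1), fitted by
# ordinary least squares via the normal-equations solver in NumPy.
X = np.column_stack([np.ones_like(precip), precip, q_lag1])
coef, *_ = np.linalg.lstsq(X, q_obs, rcond=None)
q_pred = X @ coef

# Root-mean-square error against observed discharge, a common skill measure
rmse = float(np.sqrt(np.mean((q_pred - q_obs) ** 2)))
print(coef, rmse)
```

The same input/target arrays could feed the neural network and GEP models, which is the point of using identical input parameters across methods.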
Procedia PDF Downloads 561
20164 A Method of the Semantic on Image Auto-Annotation
Authors: Lin Huo, Xianwei Liu, Jingxiong Zhou
Abstract:
Recently, owing to the semantic gap between image visual features and human concepts, semantic image auto-annotation has become an important topic. Annotation by search is a popular approach: low-level visual features are first extracted from the image, mapped into Hash codes by a corresponding Hash method, and finally transformed into binary strings and stored. We use this scheme to design and implement a method of semantic image auto-annotation. Finally, tests based on the Corel image set show that this method is effective.
Keywords: image auto-annotation, color correlograms, Hash code, image retrieval
Procedia PDF Downloads 497
20163 Non-Population Search Algorithms for Capacitated Material Requirement Planning in Multi-Stage Assembly Flow Shop with Alternative Machines
Authors: Watcharapan Sukkerd, Teeradej Wuttipornpun
Abstract:
This paper aims to present non-population search algorithms, namely tabu search (TS), simulated annealing (SA), and variable neighborhood search (VNS), to minimize the total cost of the capacitated MRP problem in a multi-stage assembly flow shop with two alternative machines. The algorithm has three main steps. Firstly, an initial sequence of orders is constructed by a simple due date-based dispatching rule. Secondly, the sequence of orders is repeatedly improved to reduce the total cost by applying TS, SA, and VNS separately. Finally, the total cost is further reduced by optimizing the start time of each operation using a linear programming (LP) model. Parameters of the algorithm are tuned using real data from automotive companies. The results show that VNS significantly outperforms TS, SA, and the existing algorithm.
Keywords: capacitated MRP, tabu search, simulated annealing, variable neighborhood search, linear programming, assembly flow shop, application in industry
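The improvement step can be sketched with simulated annealing on a toy order-sequencing instance. The cost here is total tardiness with hypothetical durations and due dates, not the paper's full capacitated MRP cost, and the swap neighbourhood and cooling schedule are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Hypothetical order data: processing durations and due dates
dur = [3, 5, 2, 7, 4]
due = [4, 12, 6, 20, 9]

def total_tardiness(seq):
    # Illustrative stand-in for the MRP total cost of a sequence
    t, cost = 0, 0
    for i in seq:
        t += dur[i]
        cost += max(0, t - due[i])
    return cost

def anneal(seq, temp=10.0, cooling=0.95, steps=500):
    best = cur = list(seq)
    for _ in range(steps):
        cand = cur[:]
        a, b = random.sample(range(len(cand)), 2)  # swap-two neighbourhood
        cand[a], cand[b] = cand[b], cand[a]
        delta = total_tardiness(cand) - total_tardiness(cur)
        # accept improvements always, worse moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cur = cand
        if total_tardiness(cur) < total_tardiness(best):
            best = cur[:]
        temp *= cooling  # geometric cooling schedule
    return best

start = [0, 1, 2, 3, 4]
best = anneal(start)
print(best, total_tardiness(best))
```

TS and VNS would plug into the same loop by replacing the acceptance rule (tabu list, neighbourhood switching) while keeping the cost function unchanged.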
Procedia PDF Downloads 234
20162 Flow Analysis for Different Pelton Turbine Bucket by Applying Computation Fluid Dynamic
Authors: Sedat Yayla, Azhin Abdullah
Abstract:
In the construction of hydroelectric power plants, the Pelton turbine, which is characterized by its simple manufacturing and construction, is used under high head and low water flow. The parameters of the turbine have to be considered in the design process in order to obtain a hydraulic turbine with the highest efficiency under different operating conditions. The present investigation applied three-dimensional computational fluid dynamics (CFD). Bucket models of the Pelton turbine with different splitter angles and inlet velocity values were examined to determine the force on the bucket and visualize the flow pattern over it. The study utilized two diverse bucket models at various inlet velocities (20, 25, 30, 35, and 40 m/s) and four different splitter angles (55, 75, 90, and 115 degrees) to find out the impact of each parameter on the effective force on the bucket. The acquired outcomes revealed that there is a linear relationship between force and inlet velocity on the bucket. Furthermore, the results also uncovered that the relationship between splitter angle and force on the bucket is linear up to 90 degrees.
Keywords: bucket design, computational fluid dynamics (CFD), free surface flow, two-phase flow, volume of fluid (VOF)
Procedia PDF Downloads 271
20161 Transition from Linear to Circular Economy in Gypsum in India
Authors: Shanti Swaroop Gupta, Bibekananda Mohapatra, S. K. Chaturvedi, Anand Bohra
Abstract:
For sustainable development in India, there is an urgent need to follow the principles of industrial symbiosis in industrial processes, under which the scraps, wastes, or by-products of one industry become the raw materials for another. This will not only reduce the dependence on natural resources but also bring economic advantage to industry. Gypsum is one such area in India, where the linear economy model of by-product gypsum utilization had resulted in an unutilized legacy phosphogypsum stock of 64.65 million tonnes (mt) at phosphoric acid plants by 2020-21. In the future, this unutilized gypsum stock will increase further due to the expected generation of Flue Gas Desulphurization (FGD) gypsum in huge quantities from thermal power plants. Therefore, it is essential to transition from a linear to a circular economy in gypsum in India, which will yield huge environmental as well as ecological benefits. Gypsum is required in many sectors: construction (the cement industry, gypsum boards, glass fiber reinforced gypsum panels, gypsum plaster, fly ash lime bricks, floor screeds, road construction), agriculture, and the manufacture of Plaster of Paris, pottery, ceramics, water treatment processes, ammonium sulphate, paints, textiles, etc. The challenges faced in areas of quality, policy, logistics, lack of infrastructure, promotion, etc., for complete utilization of by-product gypsum are discussed. The untapped potential of by-product gypsum utilization in various sectors has been identified, such as the use of gypsum in agriculture for sodic soil reclamation, utilization of the legacy stock in the cement industry in mission mode, and improvement of by-product gypsum quality through standardization and its usage in the building materials industry.
Based on the measures required to tackle these challenges and to exploit the untapped potential of gypsum, a comprehensive action plan for the transition from a linear to a circular economy in gypsum in India has been formulated. The strategies and policy measures required to implement the action plan and achieve a circular economy in gypsum are recommended for various government departments. It is estimated that focused implementation of the proposed action plan would significantly decrease the unutilized gypsum legacy stock over the next five years, and that the stock would cease to exist by 2027-28 if the plan is effectively implemented.
Keywords: circular economy, FGD gypsum, India, phosphogypsum
Procedia PDF Downloads 268
20160 Q-Efficient Solutions of Vector Optimization via Algebraic Concepts
Authors: Elham Kiyani
Abstract:
In this paper, we first introduce the concept of Q-efficient solutions in a real linear space not necessarily endowed with a topology, where Q is some nonempty (not necessarily convex) set. We also use the scalarization technique, including the Gerstewitz function generated by a nonconvex set, to characterize these Q-efficient solutions. The algebraic concepts of interior and closure are useful for studying optimization problems without topology. Studying nonconvex vector optimization is valuable since the topological interior equals the algebraic interior for a convex cone. We therefore use the algebraic concepts of interior and closure to define Q-weak efficient solutions and Q-Henig proper efficient solutions of set-valued optimization problems, where Q is not a convex cone. Optimization problems with set-valued maps have a wide range of applications, so a useful analytical tool in optimization theory for set-valued maps is to be expected. Such optimization problems are closely related to stochastic programming, control theory, and economic theory. The paper focuses on nonconvex problems; the results are obtained under generalized non-convexity assumptions on the data of the problem. In convex problems, the main mathematical tools are convex separation theorems, alternative theorems, and algebraic counterparts of some usual topological concepts, while in nonconvex problems a nonconvex separation function is needed. Thus, we consider the Gerstewitz function generated by a general set in a real linear space and re-examine its properties in this more general setting. A useful approach for solving a vector problem is to reduce it to a scalar problem. In general, scalarization means the replacement of a vector optimization problem by a suitable scalar problem, which tends to be an optimization problem with a real-valued objective function. The Gerstewitz function is well known and widely used in optimization as the basis of scalarization.
The essential properties of the Gerstewitz function, which are well known in the topological framework, are studied using algebraic counterparts rather than the topological concepts of interior and closure. Thus, the properties of the Gerstewitz function when it takes values just in a real linear space are studied, and we use it to characterize Q-efficient solutions of vector problems whose image space is not endowed with any particular topology. We therefore deal with a constrained vector optimization problem in a real linear space without assuming any topology, and Q-weak efficient and Q-proper efficient solutions in the sense of Henig are defined. Moreover, by means of the Gerstewitz function, we provide some necessary and sufficient optimality conditions for set-valued vector optimization problems.
Keywords: algebraic interior, Gerstewitz function, vector closure, vector optimization
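For orientation, the standard topological form of the Gerstewitz (Tammer) scalarizing functional generated by a set C and a direction q is usually written as below; the abstract's contribution is to re-examine this functional when interior and closure are replaced by their algebraic (vectorial) counterparts. The notation here is the conventional one, not necessarily the paper's.

```latex
% Gerstewitz scalarizing functional generated by a set C and direction q:
% the smallest shift t of C along q that covers the point y.
\varphi_{C,q}(y) \;=\; \inf\{\, t \in \mathbb{R} \;:\; y \in t\,q - C \,\}
```

Minimizing this real-valued functional over the feasible image set is what "reducing the vector problem to a scalar problem" means in the abstract.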
Procedia PDF Downloads 216
20159 Radiochemical Purity of 68Ga-BCA-Peptides: Separation of All 68Ga Species with a Single iTLC Strip
Authors: Anton A. Larenkov, Alesya Ya Maruk
Abstract:
In the present study, a highly effective single-strip iTLC method for the determination of radiochemical purity (RCP) of 68Ga-BCA-peptides was developed (with no double development, change of eluents, or other additional manipulation). The method uses iTLC-SG strips and the commonly used eluent TFAaq. (3-5 % (v/v)). It allows each of the key radiochemical forms of 68Ga (colloidal, bound, ionic) to be determined separately, with peak separation of no less than 4 σ: Rf = 0.0-0.1 for 68Ga-colloid; Rf = 0.5-0.6 for 68Ga-BCA-peptides; Rf = 0.9-1.0 for ionic 68Ga. The method is simple and fast: for a developing length of 75 mm, only 4-6 min is required (versus 18-20 min for the pharmacopoeial method). The method has been tested on various compounds (including 68Ga-DOTA-TOC, 68Ga-DOTA-TATE, 68Ga-NODAGA-RGD2, etc.). Cross-validation for every specific form of 68Ga showed good correlation between the developed method and the control (pharmacopoeial) methods. The method can become a convenient and much more informative replacement for pharmacopoeial methods, including HPLC.
Keywords: DOTA-TATE, 68Ga, quality control, radiochemical purity, radiopharmaceuticals, TLC
Procedia PDF Downloads 290
20158 Comparing Numerical Accuracy of Solutions of Ordinary Differential Equations (ODE) Using Taylor's Series Method, Euler's Method and Runge-Kutta (RK) Method
Authors: Palwinder Singh, Munish Sandhir, Tejinder Singh
Abstract:
Ordinary differential equations (ODE) represent a natural framework for the mathematical modeling of many real-life situations in engineering, control systems, physics, chemistry, astronomy, etc. Such differential equations can be solved by analytical or numerical methods. If the solution is calculated analytically, it is done through calculus theories and thus requires a longer time to solve. In this paper, we compare the numerical accuracy of the solutions given by the three main types of one-step initial value solvers: Taylor's Series Method, Euler's Method, and the Runge-Kutta Fourth Order Method (RK4). The comparison of accuracy is obtained by comparing the solutions of an ordinary differential equation given by these three methods. Furthermore, to verify the accuracy, we compare these numerical solutions with the exact solutions.
Keywords: ordinary differential equations (ODE), Taylor's Series Method, Euler's Method, Runge-Kutta Fourth Order Method
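The kind of comparison described can be reproduced in a few lines. The sketch below applies Euler's method and RK4 to a test problem with a known exact solution, y' = y with y(0) = 1, and compares the errors at t = 1; the test equation and step count are illustrative choices, not necessarily the paper's.

```python
import math

# Test problem with a known solution: y' = y, y(0) = 1, so y(t) = e^t.
def f(t, y):
    return y

def euler(f, y0, t0, t1, n):
    # First-order explicit Euler: y_{k+1} = y_k + h f(t_k, y_k)
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    # Classical fourth-order Runge-Kutta with four stage evaluations per step
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

exact = math.e
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 20) - exact)
err_rk4 = abs(rk4(f, 1.0, 0.0, 1.0, 20) - exact)
print(err_euler, err_rk4)
```

With the same step size, the Euler error is O(h) while the RK4 error is O(h⁴), which is the gap such comparisons exhibit.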
Procedia PDF Downloads 358
20157 Two-Stage Approach for Solving the Multi-Objective Optimization Problem on Combinatorial Configurations
Authors: Liudmyla Koliechkina, Olena Dvirna
Abstract:
The statement of the multi-objective optimization problem on combinatorial configurations is formulated, and an approach to its solution is proposed. The problem is of interest as a combinatorial optimization problem with many criteria, which models many applied tasks. The approach consists of two stages: the first reduces the multi-objective problem to a single criterion based on existing multi-objective optimization methods; the second solves the resulting single-criterion combinatorial optimization problem by the horizontal combinatorial method. This approach provides the optimal solution to the multi-objective optimization problem on combinatorial configurations, taking into account additional restrictions, in a finite number of steps.
Keywords: discrete set, linear combinatorial optimization, multi-objective optimization, Pareto solutions, partial permutation set, structural graph
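The first stage can be illustrated with the simplest reduction, a weighted sum of criteria over a small permutation set; the two criteria, the weights, and exhaustive search standing in for the horizontal combinatorial method are all illustrative assumptions.

```python
from itertools import permutations

# Two illustrative criteria on permutations of (0, 1, 2, 3):
# f1 = position-weighted sum of values, f2 = number of inversions.
vals = [3, 1, 4, 2]

def f1(p):
    return sum((i + 1) * vals[x] for i, x in enumerate(p))

def f2(p):
    return sum(1 for i in range(len(p))
               for j in range(i + 1, len(p)) if p[i] > p[j])

def scalarized_best(w1, w2):
    # Stage one: replace (f1, f2) by the single criterion w1*f1 + w2*f2;
    # stage two here is brute force over the (tiny) combinatorial set.
    return min(permutations(range(4)), key=lambda p: w1 * f1(p) + w2 * f2(p))

best = scalarized_best(0.5, 0.5)
print(best, f1(best), f2(best))
```

A minimizer of a positively weighted sum is Pareto optimal, which is why this reduction is a legitimate first stage; varying the weights traces out different Pareto solutions.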
Procedia PDF Downloads 167
20156 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique, centered on the flashing characteristics of fireflies. In this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable when applied even to complex grayscale images. The validation was performed using several standard measures, more precisely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results have strongly confirmed the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of maxima of responsibilities for pixel assignment, which implies a consistent reduction of the computational costs.
Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
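The Bayes-rule assignment step described above can be sketched compactly. The snippet assumes the firefly stage has already supplied cluster means and that EM has fitted variances and mixing weights; all parameter values and pixel intensities are illustrative.

```python
import numpy as np

# Assumed outputs of the firefly + EM stages (illustrative values):
means = np.array([40.0, 120.0, 210.0])   # cluster means on the gray axis
stds = np.array([12.0, 20.0, 15.0])      # fitted component std deviations
weights = np.array([0.3, 0.5, 0.2])      # mixing coefficients (priors)

def gaussian(x, mu, sigma):
    # Univariate Gaussian density
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

pixels = np.array([35, 50, 118, 130, 200, 220])  # gray-level values

# Bayes rule: posterior (responsibility) of each component for each pixel
like = gaussian(pixels[:, None], means[None, :], stds[None, :])
post = weights * like
post /= post.sum(axis=1, keepdims=True)

# Assign each pixel to the cluster with maximum responsibility
labels = post.argmax(axis=1)
print(labels)
```

Because the posterior depends only on the gray level, the assignment can be precomputed once per intensity (0-255) rather than per pixel, which is the computational saving the abstract mentions.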
Procedia PDF Downloads 217
20155 Path Integrals and Effective Field Theory of Large Scale Structure
Authors: Revant Nayar
Abstract:
In this work, we recast the equations describing large-scale structure, and by extension all nonlinear fluids, in the path integral formalism. We first calculate the well-known two- and three-point functions using the Schwinger-Keldysh formalism, commonly used to perturbatively solve path integrals in non-equilibrium systems. Then we include EFT corrections due to pressure, viscosity, and noise as effects on the time-dependent propagator. We are able to express results for arbitrary two- and three-point correlation functions in LSS in terms of differential operators acting on a triple-K master integral. We also, for the first time, obtain analytical results for more general initial conditions deviating from the usual power law P∝kⁿ by introducing a mass scale into the initial conditions. This robust field-theoretic formalism empowers us with tools from strongly coupled QFT, such as the OPE and holographic duals, to study the strongly non-linear regime of LSS and turbulent fluid dynamics. These could be used to fully capture the strongly non-linear dynamics of fluids and move towards solving the open problem of classical turbulence.
Keywords: quantum field theory, cosmology, effective field theory, renormalisation
Procedia PDF Downloads 135
20154 A Linear Autoregressive and Non-Linear Regime Switching Approach in Identifying the Structural Breaks Caused by Anti-Speculation Measures: The Case of Hong Kong
Authors: Mengna Hu
Abstract:
This paper examines the impact of an anti-speculation tax policy on trading activities and home price movements in the housing market in Hong Kong. The study focuses on the secondary residential property market, where transactions dominate. The policy intervention substantially raised the transaction cost for speculators as well as genuine homeowners who dispose of their homes within a certain period. Through the demonstration of structural breaks, our empirical results show that the rise in transaction cost effectively reduced speculative trading activities. However, it accelerated price increases in the small-sized segment by vastly demotivating existing homeowners from trading up to better homes, causing congestion in the lower-end market, where demand from first-time buyers is still strong. Apart from that, by employing a regime switching approach, we further show that the unintended consequences are likely to be persistent due to this policy together with other strengthened cooling measures.
Keywords: transaction costs, housing market, structural breaks, regime switching
Procedia PDF Downloads 263
20153 A Study of Effective Stereo Matching Method for Long-Wave Infrared Camera Module
Authors: Hyun-Koo Kim, Yonghun Kim, Yong-Hoon Kim, Ju Hee Lee, Myungho Song
Abstract:
In this paper, we describe an efficient stereo matching method and a pedestrian detection method using a stereo LWIR camera. We compared three stereo matching algorithms: block matching, ELAS, and SGM. For pedestrian detection with the stereo LWIR camera, we used the SGM stereo matching method, a free-space detection method using u/v-disparity, and HOG-feature-based pedestrian detection. According to the test results, the SGM method performs better than the block matching and ELAS algorithms. The combination of SGM, free-space detection, and pedestrian detection using HOG features and SVM classification can detect pedestrians at a distance of 30 m with a distance error of about 30 cm.
Keywords: advanced driver assistance system, pedestrian detection, stereo matching method, stereo long-wave IR camera
Procedia PDF Downloads 415
20152 Identifying Factors Contributing to the Spread of Lyme Disease: A Regression Analysis of Virginia’s Data
Authors: Fatemeh Valizadeh Gamchi, Edward L. Boone
Abstract:
This research focuses on Lyme disease, a widespread infectious condition in the United States caused by the bacterium Borrelia burgdorferi sensu stricto. It is critical to identify the environmental and economic elements that contribute to the spread of the disease. This study examined data from Virginia to identify a subset of explanatory variables significant for Lyme disease case numbers. To identify relevant variables and avoid overfitting, linear Poisson regression and regularization methods with ridge, lasso, and elastic net penalties were employed. Cross-validation was performed to acquire tuning parameters. The proposed methods can automatically identify relevant disease-count covariates. The efficacy of the techniques was assessed using four criteria on three simulated datasets. Finally, using the Virginia Department of Health's Lyme disease data set, the study successfully identified key factors, and the results were consistent with previous studies.
Keywords: Lyme disease, Poisson generalized linear model, ridge regression, lasso regression, elastic net regression
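A minimal sketch of one of the methods named above, ridge-penalized Poisson regression, fitted by gradient descent on simulated count data; the data-generating process, penalty weight, and optimizer settings are illustrative assumptions, not the study's actual setup (which used cross-validation to tune the penalty).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated county-level data: two relevant covariates, one irrelevant one
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([0.8, -0.5, 0.0])
y = rng.poisson(np.exp(0.2 + X @ beta_true))

def fit_poisson_ridge(X, y, lam=1.0, lr=1e-3, steps=5000):
    # Minimize the negative Poisson log-likelihood plus an L2 penalty
    # (intercept left unpenalized), by plain gradient descent.
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(steps):
        mu = np.exp(Xd @ beta)            # Poisson mean under log link
        grad = Xd.T @ (mu - y)            # gradient of -loglik
        grad[1:] += 2 * lam * beta[1:]    # ridge penalty gradient
        beta -= lr * grad / len(y)        # averaged step for stability
    return beta

beta_hat = fit_poisson_ridge(X, y)
print(beta_hat)
```

Swapping the L2 penalty for an L1 (lasso) or a mix of both (elastic net) changes only the penalty term, at the cost of needing a subgradient or coordinate-descent update; the lasso variant is what drives irrelevant coefficients exactly to zero.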
Procedia PDF Downloads 138
20151 Spatially Downscaling Land Surface Temperature with a Non-Linear Model
Authors: Kai Liu
Abstract:
Remote sensing-derived land surface temperature (LST) can indicate the temporal and spatial patterns of surface evapotranspiration (ET). However, the spatial resolution of most existing satellite products is ~1 km, which remains too coarse for ET estimation. This paper proposes a model that can disaggregate coarse-resolution MODIS LST at the 1 km scale to a finer spatial resolution of 250 m. Our approach attempts to weaken the impacts of soil moisture and vegetation growing status on LST variations. The proposed model spatially disaggregates the coarse thermal data using a non-linear model involving the Bowen ratio, the normalized difference vegetation index (NDVI), and the photochemical reflectance index (PRI). This LST disaggregation model was tested on two heterogeneous landscapes, in central Iowa, USA and the Heihe River, China, during the growing seasons. Statistical results demonstrated that our model performed better than two classical methods (DisTrad and TsHARP). Furthermore, using a surface energy balance model, it was observed that the ET estimates using the disaggregated LST from our model were more accurate than those using the disaggregated LST from DisTrad and TsHARP.
Keywords: Bowen ratio, downscaling, evapotranspiration, land surface temperature
Procedia PDF Downloads 329
20150 Generic Model for Timetabling Problems by Integer Linear Programmimg Approach
Authors: Nur Aidya Hanum Aizam, Vikneswary Uvaraja
Abstract:
A timetable shows the scheduled times for performing certain tasks; timetabling is widely used in many departments, such as transportation, education, and production. Difficulties arise in ensuring that all tasks happen at the time and place allocated. Therefore, many researchers have invented various programming models to solve scheduling problems in several fields. However, studies developing a general integer programming model for many timetabling problems remain scarce. This study describes the creation of a general model that solves different types of timetabling problems by considering their basic constraints. Initially, the common basic constraints of five different fields were selected and analyzed. A general basic integer programming model was created and then verified using a medium-sized set of randomly generated data closely resembling realistic data. The mathematical software AIMMS, with CPLEX as the solver, was used to solve the model. The model obtained is significant in solving many timetabling problems easily, since it is modifiable to all types of scheduling problems that share the same basic constraints.
Keywords: AIMMS mathematical software, integer linear programming, scheduling problems, timetabling
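The "basic constraints" shared across timetabling fields can be made concrete on a toy instance. The sketch below enumerates feasible assignments of classes to (timeslot, room) pairs under two such constraints, no teacher clash and no room clash; the instance and constraint choice are illustrative, not the paper's AIMMS/CPLEX model, which would state the same constraints as integer linear inequalities over 0-1 variables.

```python
from itertools import product

# Toy instance: 3 classes, 2 timeslots, 2 rooms
classes = ["C1", "C2", "C3"]
teacher = {"C1": "T1", "C2": "T1", "C3": "T2"}
slots = [0, 1]
rooms = ["R1", "R2"]

def feasible(assign):
    # Basic constraints: a room holds one class per slot,
    # and a teacher gives one class per slot.
    seen_room, seen_teacher = set(), set()
    for c, (s, r) in assign.items():
        if (s, r) in seen_room or (s, teacher[c]) in seen_teacher:
            return False
        seen_room.add((s, r))
        seen_teacher.add((s, teacher[c]))
    return True

# Brute-force enumeration of all (slot, room) choices per class
solutions = [dict(zip(classes, combo))
             for combo in product(product(slots, rooms), repeat=len(classes))
             if feasible(dict(zip(classes, combo)))]
print(len(solutions))
```

In the ILP form, each constraint becomes a sum of binary assignment variables bounded by 1 (e.g. sum over classes of x[c, s, r] ≤ 1 for every slot s and room r), which is exactly what makes one generic model reusable across fields.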
Procedia PDF Downloads 436
20149 Impact of ICT on Efficient Services Providing to Users by LIPs in NCR India
Authors: Mani Gupta
Abstract:
This study deals with two questions: i) whether ICT plays a positive role in improving the efficiency of LIPs in terms of providing efficient services to users in LICs; and ii) the role of finance in providing the technological logistics and infrastructure required for comfortable ICT-based access to databases by users in LICs. The study is based on primary data collected from various libraries and information centers of NCR Delhi. The survey was conducted between December 15 and 31, 2010 on 496 respondents across 96 libraries and information centers in NCR Delhi through an electronic data collection method. There is a positive and emphatic relationship between ICT and the improvement in the level of efficient services provided by LIPs in LICs in NCR Delhi. The paper is divided into six sub-sections, followed by the outcomes.
Keywords: modern globalization, linear correlation, efficient service, internet revolution, logistics
Procedia PDF Downloads 357
20148 The Role of the Stud’s Configuration in the Structural Response of Composite Bridges
Authors: Mohammad Mahdi Mohammadi Dehnavi, Alessandra De Angelis, Maria Rosaria Pecce
Abstract:
This paper deals with the role of studs in the structural response of steel-concrete composite beams. A tri-linear slip-shear strength law is assumed, according to the literature and code provisions, in developing a finite element (FE) model of a case-study composite deck. The variation of the strength and ductility of the connection is implemented in the numerical model, and nonlinear analyses are carried out. The results confirm the utility of the model for evaluating the importance of the studs' capacity, ductility, and strength for the global response (ductility and strength) of the structure, and also for analyzing the trend of slip and shear at the interface along the beams.
Keywords: stud connectors, finite element method, slip, shear load, steel-concrete composite bridge
Procedia PDF Downloads 153
20147 Investigation of the Composition and Structure of Tar by Lignite Pyrolysis Using Thermogravimetry, Gas Chromatography and Mass Spectrum Coupled Instrument System
Authors: Li Feng, Cheng Zhang, Chuanzhou Yuang
Abstract:
Understanding the macromolecular structure of low-rank coal is very important for its gasification and liquefaction. Pyrolysis is one of the methods for analyzing the macromolecular structure of coal. In this paper, the gaseous products obtained directly from pyrolysis of the raw lignite at 500 °C, and indirectly from the tar produced by raw lignite pyrolysis at 500 °C, were investigated and compared using a coupled thermogravimetry, gas chromatography, and mass spectrometry instrument system (TG/GC/MS). The results show that 52 kinds of products were found from the raw lignite and 70 kinds from the tar. Compared with the products from the tar, the pyrolysis products obtained directly from the lignite contain more monocyclic aromatic hydrocarbons and fewer substituent groups or branched chains. There are fewer linear-chain and double-bond structures in the tar, from which it can be speculated that linear chains and double bonds took part in the generation of condensed rings and other reactions. There are more kinds of phenols and furans in the tar, which indicates that these products may be generated by secondary reactions. The formation processes of phenol, naphthalene, naphthene, and furan are discussed.
Keywords: composition and structure, lignite, pyrolysis of coal, tar, TG/GC/MS
Procedia PDF Downloads 141
20146 Switched System Diagnosis Based on Intelligent State Filtering with Unknown Models
Authors: Nada Slimane, Foued Theljani, Faouzi Bouani
Abstract:
The paper addresses the problem of fault diagnosis for systems operating in several modes (normal or faulty) based on state assessment. For this purpose, we use a methodology consisting of three main processes: 1) sequential data clustering, 2) linear model regression, and 3) state filtering. Typically, the Kalman Filter (KF) is an algorithm that provides estimates of unknown states using a sequence of I/O measurements. Although it is an efficient technique for state estimation, it presents two main weaknesses. First, it merely predicts states without being able to isolate/classify them according to their different operating modes, whether normal or faulty. To deal with this dilemma, the KF is endowed with an extra clustering step based on a sequential version of the k-means algorithm. Second, to provide state estimates, the KF requires state-space models, which may be unknown. A linear regularized regression is used to identify the required models. To prove its effectiveness, the proposed approach is assessed on a simulated benchmark.
Keywords: clustering, diagnosis, Kalman filtering, k-means, regularized regression
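The state-filtering process can be sketched with the simplest case, a scalar Kalman filter tracking a constant state from noisy measurements. The model coefficients and noise variances below are assumed known for illustration; in the paper's setting they would come from the regularized-regression identification step, and the clustering step would then label the filtered states by mode.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (identified) scalar state-space model and noise variances
a, c = 1.0, 1.0        # state transition and observation coefficients
q, r = 1e-4, 0.5 ** 2  # process and measurement noise variances

true_state = 2.0
z = true_state + rng.normal(0.0, 0.5, size=100)  # noisy measurements

x, p = 0.0, 1.0        # initial state estimate and covariance
for zk in z:
    # Predict step: propagate estimate and covariance through the model
    x, p = a * x, a * p * a + q
    # Update step: blend prediction with the new measurement
    k = p * c / (c * p * c + r)   # Kalman gain
    x = x + k * (zk - c * x)
    p = (1 - k * c) * p

print(x, p)
```

After enough measurements, the estimate settles near the true state and the covariance p shrinks toward its steady-state value, which is what makes the filtered states usable as features for the sequential k-means classification.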
Procedia PDF Downloads 182
20145 Analysis and Modeling of Stresses and Creeps Resulting from Soil Mechanics in Southern Plains of Kerman Province
Authors: Kourosh Nazarian
Abstract:
Many engineering materials, such as metals, exhibit at least a certain range of linear behavior: if the stresses are doubled, the deformations are also doubled. In other words, these materials have linear elastic properties. Soils do not follow this law; for example, when compressed, soils become gradually tighter. At the ground surface, sand can easily be deformed with a finger, but under high compressive stresses it gains considerable hardness and strength, mainly due to the increase in the forces among the separate particles. Creep also deforms soils under a constant load over time. Clay and peat soils exhibit creep behavior. As a result of this phenomenon, structures constructed on such soils will continue to settle over time. In this paper, the researchers analyzed and modeled the stresses and creep in the southern plains of Kerman province in Iran through library-documentary, quantitative, and software techniques, as well as a field survey. The results of the modeling showed that these plains experienced severe stresses, with a subsidence of about 26 cm over the last 15 years, and creep evidence was also discovered in an area with a gradient of 3-6 degrees.
Keywords: stress, creep, Faryab, surface runoff
Procedia PDF Downloads 179
20144 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer
Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang
Abstract:
In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate in a home-built hot-wall CVD reactor. Defect-selective etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are recorded to verify the IGSF's type. The CL spectrum shows the maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV, and 2.410 eV, respectively. The Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx- lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. The steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers in height.
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, the upper two bilayers of the four-bilayer step grow down and block the lower two at one intersection, generating an IGSF. Second, the step-flow growth successively overgrows the IGSF, forming an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD, and a model for 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.
Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide
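The phonon energies implied by the LTPL peak positions above can be checked directly: each replica sits one phonon energy below the estimated exciton bandgap Egx. The arithmetic below uses only the values reported in the abstract; the inferred phonon energies are our illustration, not figures stated by the authors.

```python
# Phonon-replica analysis for the LTPL spectrum reported above.
E_GX = 2.512  # eV, estimated exciton bandgap (from the abstract)
replicas = [2.468, 2.438, 2.420, 2.410]  # eV, phonon replica peaks

# Each replica lies one phonon energy below Egx: E_phonon = Egx - E_replica
phonon_energies_meV = [round((E_GX - e) * 1000) for e in replicas]
print(phonon_energies_meV)  # [44, 74, 92, 102]
```

Energies in this range are broadly consistent with the acoustic and optical phonon branches commonly cited for 4H-SiC, which supports the 2SSF assignment.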
Procedia PDF Downloads 316
20143 Characterization of Bacteria by a Nondestructive Sample Preparation Method in a TEM System
Authors: J. Shiue, I. H. Chen, S. W. Y. Chiu, Y. L. Wang
Abstract:
In this work, we present a nondestructive method to characterize bacteria in a TEM system. Unlike the conventional TEM specimen preparation method, which either thins the specimen destructively or spreads the sample on a tiny, millimeter-sized carbon grid, our method is easy to operate and requires no sample pretreatment. With a specially designed transparent chip that allows the electron beam to pass through, and a custom-made chip holder that fits into a standard TEM sample holder, the bacteria specimen can be prepared on the chip without any pretreatment and then observed under TEM. The centimeter-sized chip is covered with Au nanoparticles on its surface as markers that allow the bacteria to be located easily on the chip. We demonstrate the success of our method using E. coli as an example, and show that high-resolution TEM images of E. coli can be obtained with the method presented. Some E. coli morphology characteristics imaged using this method are also presented.
Keywords: bacteria, chip, nanoparticles, TEM
Procedia PDF Downloads 314
20142 Detection of Heroin and Its Metabolites in Urine Samples: A Chemiluminescence Approach
Authors: Sonu Gandhi, Neena Capalash, Prince Sharma, C. Raman Suri
Abstract:
A sensitive chemiluminescence immunoassay (CIA) for heroin and its major metabolites is reported. The method is based on the competitive reaction between horseradish peroxidase (HRP)-labeled anti-MAM antibody and free drug in spiked urine samples. A hapten-protein conjugate was synthesized by coupling an acidic derivative of monoacetylmorphine (MAM) to the carrier protein BSA, and was used as an immunogen for the generation of anti-MAM antibody. A high antibody titer (1:64,0000) was obtained, and the relative affinity constant (Kaff) of the antibody was 3.1×10^7 L/mol. Under optimal conditions, the linear range and reactivity for heroin, monoacetylmorphine (MAM), morphine and codeine were 0.08, 0.09, 0.095 and 0.092 ng/mL, respectively. The developed chemiluminescence inhibition assay could detect heroin and its metabolites in standard and urine samples down to 0.01 ng/mL.
Keywords: heroin, metabolites, chemiluminescence immunoassay, horseradish peroxidase
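In a competitive inhibition assay like the one described, the chemiluminescence signal falls as free drug displaces the labeled antibody, and sensitivity is often summarized by the concentration giving 50% inhibition. The sketch below shows that data reduction on entirely hypothetical calibration points (the standards, signals, and blank value are illustrative, not the paper's data), using simple log-linear interpolation.

```python
import math

# Hypothetical competitive-CIA calibration: signal decreases with analyte.
standards = [0.01, 0.1, 1.0, 10.0, 100.0]   # heroin, ng/mL (hypothetical)
signal    = [980, 850, 520, 190, 60]        # relative light units (hypothetical)
b0 = 1000.0                                 # blank signal at zero analyte (hypothetical)

inhibition = [100.0 * (1 - s / b0) for s in signal]  # percent inhibition

def ic50(concs, inhib):
    """Log-linear interpolation of the concentration at 50% inhibition."""
    for (c1, i1), (c2, i2) in zip(zip(concs, inhib), zip(concs[1:], inhib[1:])):
        if i1 <= 50.0 <= i2:
            f = (50.0 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    return None

print(ic50(standards, inhibition))  # concentration at 50% inhibition, ng/mL
```

In practice a four-parameter logistic fit is usually preferred over interpolation, but the interpolated IC50 conveys the same idea with no fitting library.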
Procedia PDF Downloads 270
20141 Comparison of Prognostic Models in Different Scenarios of Shoreline Position on Ponta Negra Beach in Northeastern Brazil
Authors: Débora V. Busman, Venerando E. Amaro, Mattheus da C. Prudêncio
Abstract:
Prognostic studies of shoreline position are of utmost importance for Ponta Negra Beach, located in Natal, Northeastern Brazil, where infrastructure recently built along the shoreline is severely affected by flooding and erosion. This study compares shoreline predictions using three linear regression methods (LMS, LRR and WLR) and seeks the best method for different shoreline position scenarios. All methods indicated erosion on the beach in each scenario tested, even under less intense dynamic conditions. The WLR model with a 95% confidence interval was the best-adjusted model and calculated a retreat of -1.25 m/yr to -2.0 m/yr in hot-spot areas. The change of the shoreline at Ponta Negra Beach can be described by a negative exponential curve. Analysis of these methods showed a correlation with the morphodynamic stage of the beach.
Keywords: coastal erosion, prognostic model, DSAS, environmental safety
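The LRR and WLR rates compared above are both least-squares slopes of shoreline position against time; WLR simply down-weights less certain surveys. A minimal sketch of both calculations, using hypothetical cross-shore positions at one transect (not Ponta Negra data):

```python
# Hypothetical cross-shore positions (m) at one transect, one per survey year.
years     = [1990, 1995, 2000, 2005, 2010, 2015]
positions = [52.0, 45.5, 39.8, 33.1, 27.0, 20.4]                # m (hypothetical)
weights   = [1 / e**2 for e in (2.0, 2.0, 1.0, 1.0, 0.5, 0.5)]  # 1/uncertainty^2

def weighted_slope(x, y, w):
    """Weighted least-squares slope (m/yr); all-ones weights give plain LRR."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    return num / den

lrr = weighted_slope(years, positions, [1.0] * len(years))
wlr = weighted_slope(years, positions, weights)
print(lrr, wlr)  # negative slope = shoreline retreat, m/yr
```

This is the same rate-of-change logic DSAS applies per transect; a negative slope of around -1.3 m/yr for the synthetic data falls in the retreat range the study reports for hot-spot areas.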
Procedia PDF Downloads 335
20140 Seismic Analysis of Adjacent Buildings Connected with Dampers
Authors: Devyani D. Samarth, Sachin V. Bakre, Ratnesh Kumar
Abstract:
This work deals with two adjacent buildings connected with dampers. The Imperial Valley earthquake (El Centro, May 18, 1940) time history is used for dynamic analysis of the system in the time domain. The effectiveness of fluid joint dampers is then investigated in terms of the reduction in displacement, acceleration and base shear responses of the adjacent buildings. Finally, an extensive parametric study is carried out to find the optimum damper properties, stiffness (Kd) and damping coefficient (Cd), for the adjacent buildings. Results show that using fluid dampers to connect adjacent buildings of different fundamental frequencies can effectively reduce earthquake-induced responses of either building if the optimum damper properties are selected.
Keywords: energy dissipation devices, time history analysis, viscous damper, optimum parameters
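The mechanism at work can be sketched with two single-degree-of-freedom "buildings" of different fundamental frequencies coupled by a Kelvin-type connector with stiffness Kd and damping coefficient Cd. All numbers below are hypothetical illustration values, not the paper's buildings or its optimum parameters; the point is only that the connecting damper dissipates the coupled response.

```python
# Free-vibration sketch: building 1 is released from an initial displacement,
# and the late-time peak response is compared with and without the damper.
def simulate(Kd, Cd, t_end=30.0, dt=0.002):
    m1 = m2 = 1.0                          # unit masses (hypothetical)
    k1, k2 = 1.0, 2.5                      # different stiffnesses (hypothetical)
    x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0   # building 1 released from rest
    peak_late = 0.0
    for i in range(int(t_end / dt)):
        fd = Kd * (x1 - x2) + Cd * (v1 - v2)   # connector force on building 1
        a1 = (-k1 * x1 - fd) / m1
        a2 = (-k2 * x2 + fd) / m2
        v1 += a1 * dt; v2 += a2 * dt           # semi-implicit Euler step
        x1 += v1 * dt; x2 += v2 * dt
        if i * dt > 20.0:                      # track the peak late in the record
            peak_late = max(peak_late, abs(x1), abs(x2))
    return peak_late

free = simulate(Kd=0.0, Cd=0.0)     # unconnected: the motion persists
damped = simulate(Kd=0.5, Cd=0.3)   # connected: the damper dissipates it
print(free, damped)
```

Because the two buildings have different frequencies, their relative motion never vanishes, so the connector always has velocity across it to dissipate, which is why this scheme works best for dissimilar adjacent structures.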
Procedia PDF Downloads 493
20139 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers constantly study methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs and ultimately provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. An earthquake occurring near these fault lines is categorized as a near-field earthquake; in contrast, a far-field earthquake occurs when the region is farther from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field ground motion. These larger responses may cause serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10 the design response spectrum is based mostly on far-field design-level earthquakes, which may result in catastrophic damage to structures not properly designed for near-field earthquakes. This research investigates the effect near-field earthquakes have on the response of structures. To examine this topic fully, a structure was designed following current seismic design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, using the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% increase in story drift and a 1% increase in absolute acceleration when subjected to the near-field ground motions. The pushover analysis was run to aid in properly defining hinge formation in the structure for the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unlike other earthquake ground motions; therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be used to generate a design spectrum for estimating design forces for buildings subjected to near-field ground motions.
Keywords: near-field, pulse, pushover, time-history
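A standard way to carry out the linear time history analysis mentioned above is direct integration with Newmark's method; with gamma = 1/2 and beta = 1/6 it becomes the classical linear acceleration method. The sketch below is a generic single-degree-of-freedom implementation for illustration, not a claim about SAP2000's internal solver, verified on a free-vibration case whose exact answer is known.

```python
import math

def newmark_sdof(m, c, k, p, dt, x0=0.0, v0=0.0, gamma=0.5, beta=1/6):
    """Newmark time stepping for an SDOF system; beta = 1/6 gives the
    linear acceleration method (conditionally stable, dt < 0.551*Tn)."""
    x, v = x0, v0
    a = (p[0] - c * v - k * x) / m          # initial acceleration from equilibrium
    a1 = m / (beta * dt**2) + gamma * c / (beta * dt)
    a2 = m / (beta * dt) + (gamma / beta - 1) * c
    a3 = (1 / (2 * beta) - 1) * m + dt * (gamma / (2 * beta) - 1) * c
    khat = k + a1                           # effective stiffness
    xs = [x]
    for pi in p[1:]:
        phat = pi + a1 * x + a2 * v + a3 * a
        xn = phat / khat
        vn = (gamma / (beta * dt)) * (xn - x) + (1 - gamma / beta) * v \
             + dt * (1 - gamma / (2 * beta)) * a
        an = (xn - x) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        x, v, a = xn, vn, an
        xs.append(x)
    return xs

# Check: undamped SDOF with Tn = 1 s released from x0 = 1 should return
# to x close to 1 after one full period (200 steps of dt = 1/200 s).
k = (2 * math.pi) ** 2   # with m = 1, Tn = 2*pi/sqrt(k) = 1 s
xs = newmark_sdof(m=1.0, c=0.0, k=k, p=[0.0] * 201, dt=1 / 200, x0=1.0)
print(xs[-1])
```

For an earthquake record, `p` would hold the effective load history -m·üg(t); story drifts then follow from the relative displacements of a multi-degree-of-freedom version of the same recursion.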
Procedia PDF Downloads 146
20138 Evaluation of Hand Grip Strength and EMG Signal on Visual Reaction
Authors: Sung-Wook Shin, Sung-Taek Chung
Abstract:
Hand grip strength has been utilized as an indicator to evaluate the motor ability of the hands, which perform multiple body functions. It is, however, difficult to evaluate factors other than hand muscular strength using grip strength alone. In this study, we analyzed the motor ability of the hands using EMG and hand grip strength simultaneously, in order to evaluate concentration, muscular strength reaction time, instantaneous muscular strength change, and agility in response to a visual stimulus. The average reaction times (± standard deviation) of the EMG signal and of hand grip strength were found to be 209.6 ± 56.2 ms and 354.3 ± 54.6 ms, respectively. In addition, the onset time, which represents the time to reach 90% of maximum hand grip strength, was 382.9 ± 129.9 ms.
Keywords: hand grip strength, EMG, visual reaction, endurance
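The two timing measures above can be read off a sampled force trace with simple threshold crossings: reaction time from the visual cue to the first force rise, and onset time from that rise to 90% of the maximum force. The signal, sampling rate, and thresholds below are hypothetical illustration values, not the study's data.

```python
# Synthetic grip-force trace: rest, then a linear rise to a 400 N maximum.
FS = 1000                      # sampling rate, Hz (hypothetical)
stimulus_idx = 100             # sample at which the visual cue appears
force = [0.0] * 300 + [400.0 * min(1.0, i / 500) for i in range(700)]

f_max = max(force)
react_idx = next(i for i, f in enumerate(force) if f > 0.05 * f_max)   # force onset
onset_idx = next(i for i, f in enumerate(force) if f >= 0.9 * f_max)   # 90% of max

reaction_ms = (react_idx - stimulus_idx) * 1000 / FS   # cue -> force onset
onset_ms    = (onset_idx - react_idx) * 1000 / FS      # onset -> 90% of max
print(reaction_ms, onset_ms)
```

Real EMG/force data would first be rectified and smoothed before thresholding, but the extraction of the two intervals follows the same pattern.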
Procedia PDF Downloads 463
20137 Evidence of Climate Change from Statistical Analysis of Temperature and Rainfall Data of Kaduna State, Nigeria
Authors: Iliya Bitrus Abaje
Abstract:
This study examines the evidence of climate change in Kaduna State through the analysis of temperature and rainfall data (1976-2015) from three meteorological stations along a geographic transect from the southern to the northern part of the State. Different statistical methods were used to determine changes in both the temperature and rainfall series. The linear trend lines revealed a mean increase in average temperature of 0.73 °C over the 40-year study period in the State. The plotted standard deviation of the temperature anomalies revealed that, in the last two decades (1996-2005 and 2006-2015), years with temperatures above the mean standard deviation (hotter than normal conditions) outnumbered those below it (colder than normal conditions). The Cramer's test and Student's t-test generally revealed an increasing temperature trend in recent decades; this increase is evidence that the atmosphere over the area is getting warmer. The linear trend line of annual rainfall for the study period showed a mean increase of 316.25 mm for the State. The plotted standard deviation of the rainfall anomalies, and the 10-year non-overlapping and 30-year overlapping sub-period analyses at all three stations, also generally showed an increasing trend from the beginning of the record to recent years, evidence that the study area is now experiencing wetter conditions and hence climate change. The study recommends diversifying the economic base of the populace, with emphasis on moving away from activities that are sensitive to temperature and rainfall extremes. Appropriate strategies to ameliorate the effects of climate change at all levels and sectors should also take into account the recent changes in temperature and rainfall amount in the area.
Keywords: anomalies, linear trend, rainfall, temperature
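The "mean increase over the study period" figures above come from a fitted linear trend line: the ordinary least-squares slope times the length of the record. The sketch below applies that calculation to a synthetic series (not the Kaduna State records); a slope of about 0.0187 °C/yr is chosen so the implied total change matches the reported 0.73 °C rise over 40 years.

```python
# Ordinary least-squares trend of an annual temperature series.
years = list(range(1976, 2016))                      # 40-year study window
temps = [26.0 + 0.0187 * (y - 1976) for y in years]  # synthetic warming series

n = len(years)
xb, yb = sum(years) / n, sum(temps) / n
slope = sum((x - xb) * (y - yb) for x, y in zip(years, temps)) \
        / sum((x - xb) ** 2 for x in years)          # °C per year

# Total change over the study period implied by the fitted trend line:
total_change = slope * (years[-1] - years[0])
print(round(slope, 4), round(total_change, 2))
```

The same slope-times-span calculation applied to the rainfall series yields the 316.25 mm figure reported in the abstract.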
Procedia PDF Downloads 319
20136 The Analysis of the Two Dimensional Huxley Equation Using the Galerkin Method
Authors: Pius W. Molo Chin
Abstract:
Real-life problems such as the Huxley equation are often modeled as nonlinear differential equations, and such problems need accurate and reliable solution methods. In this paper, we propose a nonstandard finite difference method in time combined with the Galerkin and compactness methods in the space variables. This coupled method is used to analyze a two-dimensional Huxley equation for the existence and uniqueness of a continuous solution in appropriate spaces to be defined. We proceed to design a numerical scheme based on this method and show that the scheme is stable. We further show that the stable scheme converges at a rate that is optimal in both the L2- and H1-norms. Furthermore, we show that the scheme replicates the decaying qualities of the exact solution. Numerical experiments on an example are presented to justify the validity of the designed scheme.
Keywords: Huxley equations, non-standard finite difference method, Galerkin method, optimal rate of convergence
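The paper's full scheme couples the nonstandard finite difference (NSFD) time discretization with a Galerkin spatial discretization; as a minimal one-dimensional sketch of the NSFD idea alone (our illustration, not the authors' scheme), consider the logistic reaction u' = u(1 - u), the core nonlinearity in Huxley-type equations. Using a nonstandard denominator phi(dt) = e^dt - 1 and a nonlocal treatment of the nonlinear term, (u_{n+1} - u_n)/phi = u_n(1 - u_{n+1}), the update keeps u in (0, 1) and preserves both fixed points for any step size, mirroring the qualitative-fidelity claims above.

```python
import math

def nsfd_logistic(u0, dt, steps):
    """NSFD scheme for u' = u(1 - u) with denominator phi = e^dt - 1.
    Solving (u_next - u)/phi = u*(1 - u_next) for u_next gives the update
    below, which is positivity-preserving and stable for any dt."""
    phi = math.exp(dt) - 1.0
    u = u0
    out = [u]
    for _ in range(steps):
        u = u * (1.0 + phi) / (1.0 + phi * u)
        out.append(u)
    return out

us = nsfd_logistic(u0=0.1, dt=0.5, steps=40)   # large time step, still stable
print(us[-1])                                  # approaches the fixed point u = 1
```

For this particular reaction term the NSFD update happens to reproduce the exact solution at the grid points, which is the strongest form of the dynamical consistency that NSFD schemes are designed for.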
Procedia PDF Downloads 216