Search results for: linear congruential algorithm
5523 Experimental Analysis of Tuned Liquid Damper (TLD) with Embossments Subject to Random Excitation
Authors: Mohamad Saberi, Arash Sohrabi
Abstract:
The tuned liquid damper (TLD) is a passive structural control device that has been used for seismic control in civil engineering since the mid-1980s. The system consists of one or more tanks filled with fluid, usually water, installed on top of a high-rise structure to suppress structural vibration. In this article we show how to build a seismic table containing a TLD system and analyse the results of using this system on our structure. The results imply that when the frequency ratio approaches 1, the system performs at its best in both dissipating energy and increasing structural damping. The results of this series of experiments also agree with Housner's linear theory of behaviour.
Keywords: TLD, seismic table, structural system, Housner linear behaviour
5522 The Predictors of Student Engagement: Instructional Support vs Emotional Support
Authors: Tahani Salman Alangari
Abstract:
Student success can be affected by internal factors such as emotional well-being and by external factors such as organizational support and instructional support in the classroom. This study seeks to identify at least one factor that forecasts student engagement. It is a cross-sectional study conducted on 6206 teachers, encompassing three years of data collection and observations of math instruction in approximately 50 schools and 300 classrooms. A multiple linear regression revealed that a model predicting student engagement from emotional support, classroom organization, and instructional support was significant. Four linear regression models were tested using hierarchical regression to examine the effects of the independent variables: emotional support was the strongest predictor of student engagement, while instructional support was the weakest.
Keywords: student engagement, emotional support, organizational support, instructional support, well-being
5521 Efficient Reconstruction of DNA Distance Matrices Using an Inverse Problem Approach
Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii
Abstract:
We continue our study of cybernetic methods in computational biology related to the analysis of DNA chains. Specifically, we consider the problem of reconstructing a distance matrix of DNA chains that is not completely filled in. When this is attempted in practice, it turns out that with a modern computer of average capability, creating even a small distance matrix for mitochondrial DNA sequences is quite time-consuming with standard algorithms. As the size of the matrix grows, the computational effort required increases significantly, potentially spanning several weeks to months of non-stop processing. Hence, calculating the distance matrix on conventional computers is hardly feasible, and supercomputers are usually not available. We have therefore been publishing our variants of algorithms for calculating the distance between two DNA chains and, subsequently, algorithms for restoring partially filled matrices, i.e., the inverse problem of matrix processing. In this paper, we propose an algorithm for restoring the distance matrix for DNA chains, and the primary focus is on enhancing the algorithms that shape the greedy function within the branch-and-bound framework.
Keywords: DNA chains, distance matrix, optimization problem, restoring algorithm, greedy algorithm, heuristics
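The abstract does not spell out the greedy function used inside the branch-and-bound search, so the sketch below is only a rough illustration of the matrix-restoration idea: missing entries of a partially filled distance matrix (hypothetical toy data) are filled greedily, tightest triangle-inequality bounds first. It is not the authors' algorithm.

```python
import numpy as np

def fill_distance_matrix(D):
    """Greedily fill NaN entries of a symmetric distance matrix D using
    triangle-inequality bounds derived from the entries already known.
    This is an illustrative heuristic, not the authors' exact algorithm."""
    D = D.copy()
    n = D.shape[0]
    while np.isnan(D).any():
        best = None  # (tightness, i, j, value)
        for i in range(n):
            for j in range(i + 1, n):
                if not np.isnan(D[i, j]):
                    continue
                lows, highs = [], []
                for k in range(n):
                    if k in (i, j) or np.isnan(D[i, k]) or np.isnan(D[j, k]):
                        continue
                    highs.append(D[i, k] + D[j, k])        # triangle upper bound
                    lows.append(abs(D[i, k] - D[j, k]))    # triangle lower bound
                if not highs:
                    continue
                lo, hi = max(lows), min(highs)
                cand = (hi - lo, i, j, 0.5 * (lo + hi))
                if best is None or cand[0] < best[0]:
                    best = cand                            # tightest bounds first
        if best is None:
            break  # remaining entries cannot be bounded yet
        _, i, j, val = best
        D[i, j] = D[j, i] = val
    return D

# Toy example: a 4x4 matrix with two unknown distances (NaN).
nan = np.nan
D = np.array([[0., 3., nan, 7.],
              [3., 0., 4., nan],
              [nan, 4., 0., 5.],
              [7., nan, 5., 0.]])
print(fill_distance_matrix(D))
```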
5520 Spectral Clustering for Manufacturing Cell Formation
Authors: Yessica Nataliani, Miin-Shen Yang
Abstract:
Cell formation (CF) is an important step in group technology. It is used in designing cellular manufacturing systems by exploiting similarities between parts in relation to machines, so that part families and machine groups can be identified. There are many CF methods in the literature, but spectral clustering has rarely been used for CF. In this paper, we propose a spectral clustering algorithm for machine-part CF. Some experimental examples are used to illustrate its efficiency. Overall, the spectral clustering algorithm can be used for CF with a wide variety of machine/part matrices.
Keywords: group technology, cell formation, spectral clustering, grouping efficiency
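As a rough sketch of how spectral clustering can be applied to a machine-part incidence matrix (the similarity measure and the data below are assumptions, not taken from the paper), one can build a machine-machine similarity graph, embed it with the eigenvectors of its normalized Laplacian, and cluster the embedding:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical machine-part incidence matrix (rows = machines, cols = parts);
# a 1 means the machine processes that part.
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1]])

# Machine-machine similarity (Jaccard index over the parts they share).
inter = A @ A.T
union = A.sum(1)[:, None] + A.sum(1)[None, :] - inter
W = inter / union
np.fill_diagonal(W, 0.0)

# Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt

# Spectral embedding: eigenvectors of the k smallest eigenvalues, then k-means.
k = 2
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, :k]
machine_groups = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
print("machine groups:", machine_groups)

# Each part joins the group whose machines use it most -> part families.
part_families = np.array([np.bincount(machine_groups, weights=A[:, p]).argmax()
                          for p in range(A.shape[1])])
print("part families:", part_families)
```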
5519 Impact of Depreciation Technique on Taxable Income and Financial Performance of Quoted Consumer Goods Company in Nigeria
Authors: Ibrahim Ali, Adamu Danlami Ahmed
Abstract:
This study examines the impact of depreciation on the taxable income and financial performance of consumer goods companies quoted on the Nigerian Stock Exchange. The study adopts an ex-post facto research design, and data were collected from a secondary source. The findings suggest that the method of depreciation adopted in an organization influences its taxable profit. Depreciation techniques can be depressive, accelerated, or linear. It is recommended that consumer goods companies review their method of depreciation to ensure an appropriate method is adopted, which will go a long way towards revitalizing their taxable profit.
Keywords: accelerated, linear, depressive, depreciation
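As a hypothetical numerical illustration of why the depreciation technique changes taxable profit (all figures are invented for the example), compare a linear (straight-line) charge with an accelerated declining-balance charge on the same asset:

```python
# Hypothetical figures: an asset costing 1,000,000 with a 5-year life,
# and earnings before depreciation and tax of 400,000 per year.
cost, life, ebdt, rate = 1_000_000, 5, 400_000, 0.40  # 40% declining-balance rate

straight_line = cost / life  # same charge every year
book_value = cost
for year in range(1, life + 1):
    declining = book_value * rate          # accelerated charge, larger in early years
    book_value -= declining
    profit_sl = ebdt - straight_line       # taxable profit under the linear method
    profit_db = ebdt - declining           # taxable profit under the accelerated method
    print(f"Year {year}: straight-line profit {profit_sl:,.0f}, "
          f"declining-balance profit {profit_db:,.0f}")
```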
5518 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland
Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli
Abstract:
This work assesses the performance of an analytical model framework for generating daily flow duration curves (FDCs) based on the climatic characteristics of the catchments and on their streamflow recession coefficients. In the analytical framework, precipitation is treated as a stochastic process, modeled as a marked Poisson process, while recession is treated as deterministic, with parameters that can be computed from different models. The framework was tested for three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated, and glacier. Five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. These developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and the nonlinear models. For the pluvial catchment, the inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model is able to describe FDCs. For the snow-dominated and glacier catchments, the seasonal results are better than the annual ones, suggesting that the model can describe streamflows under those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
Keywords: analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges
5517 Optimal Portfolio Selection under Treynor Ratio Using Genetic Algorithms
Authors: Imad Zeyad Ramadan
Abstract:
In this paper, a genetic algorithm is developed to construct the optimal portfolio based on the Treynor method. The GA maximizes the Treynor ratio under a budget constraint in order to select the best allocation of the budget among the companies in the portfolio. The results show that the GA was able to construct a conservative portfolio which includes companies from the three sectors. This indicates that the GA reduced the risk borne by the investor, as it chose some companies with positive risk (moving with the market) and some with negative risk (moving against the market).
Keywords: optimization, genetic algorithm, portfolio selection, Treynor method
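A minimal sketch of the idea, with invented expected returns, betas, and GA settings (the paper's actual data, operators, and constraint handling are not given in the abstract): the Treynor ratio (Rp - Rf) / beta_p is used as the fitness of a weight vector constrained to sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: expected returns and betas of 6 companies, risk-free rate.
mu = np.array([0.12, 0.09, 0.15, 0.07, 0.11, 0.05])
beta = np.array([1.2, 0.8, 1.5, -0.3, 1.0, -0.1])   # some negative (move against the market)
rf = 0.04

def treynor(w):
    """Treynor ratio of a portfolio with weights w; a guard keeps the portfolio
    beta away from zero so the ratio stays meaningful."""
    pb = w @ beta
    if pb <= 0.05:
        return -1e9
    return (w @ mu - rf) / pb

def normalize(pop):
    """Enforce the budget constraint: non-negative weights summing to 1."""
    pop = np.abs(pop)
    return pop / pop.sum(axis=1, keepdims=True)

# A minimal genetic algorithm: keep the fitter half, uniform crossover, Gaussian mutation.
pop = normalize(rng.random((40, len(mu))))
for generation in range(200):
    fitness = np.array([treynor(w) for w in pop])
    parents = pop[np.argsort(fitness)[-20:]]
    mates = parents[rng.integers(0, 20, size=20)]
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, mates)        # uniform crossover
    children += rng.normal(0, 0.05, children.shape)  # mutation
    pop = normalize(np.vstack([parents, children]))

best = pop[np.argmax([treynor(w) for w in pop])]
print("best weights:", np.round(best, 3), "Treynor ratio:", round(treynor(best), 4))
```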
5516 Liquefaction Susceptibility of Tailing Storage Facility-Comparison of National Centre for Earthquake Engineering Research and Finite Element Methods
Authors: Mehdi Ghatei, Masoomeh Lorestani
Abstract:
Upstream tailings storage facilities (TSFs) may experience slope instabilities due to soil liquefaction, especially in regions known to be seismically active. In this study, the liquefaction susceptibility of an upstream-raised TSF in Western Australia was assessed using two different approaches. The first approach assessed liquefaction susceptibility using cone penetration tests with pore pressure measurement (CPTu), as described by the National Centre for Earthquake Engineering Research (NCEER). This assessment was based on four CPTu tests conducted on the perimeter embankment of the TSF. The second approach used the finite element (FE) method with an equivalent linear model to predict the undrained cyclic behaviour, the pore water pressure, and the liquefaction of the materials. The tailings parameters were estimated from the CPTu profiles and from laboratory tests, and the cyclic parameters were taken from the literature where test results for similar material were available. The results showed good agreement between the NCEER and FE (equivalent linear) methods regarding the liquefaction susceptibility of the tailings material.
Keywords: liquefaction, CPTu, NCEER, finite element method, equivalent linear model
5515 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and its natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine, so to meet the demands of this ever-growing data there is a need for a multi-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework, and Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computation. Earlier we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing, and benchmarking Apriori, Reduced-Apriori, and our new algorithm, ReducedAll-Apriori, on Apache Flink, and comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottleneck of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows a new iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
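For readers unfamiliar with the kernel being distributed, the following plain-Python sketch shows the level-wise passes of the classic Apriori algorithm on a toy transaction set; it is a single-machine illustration only and says nothing about the Flink/Spark implementations or the Reduced-Apriori improvements.

```python
from itertools import combinations

# Toy transaction database; in the paper these would be distributed across Flink workers.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
min_support = 3  # absolute support threshold

def frequent_itemsets(transactions, min_support):
    """Classic level-wise Apriori: generate candidates of size k from frequent
    itemsets of size k-1, then count their support in one pass over the data."""
    items = {item for t in transactions for item in t}
    frequent = {frozenset([i]) for i in items
                if sum(i in t for t in transactions) >= min_support}
    all_frequent = set(frequent)
    k = 2
    while frequent:
        # Candidate generation: unions of two frequent (k-1)-itemsets of size k.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # Prune candidates having an infrequent (k-1)-subset (the Apriori property).
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        # Support counting.
        frequent = {c for c in candidates
                    if sum(c <= t for t in transactions) >= min_support}
        all_frequent |= frequent
        k += 1
    return all_frequent

for itemset in sorted(frequent_itemsets(transactions, min_support), key=len):
    print(set(itemset))
```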
5514 Modelling Fluoride Pollution of Groundwater Using Artificial Neural Network in the Western Parts of Jharkhand
Authors: Neeta Kumari, Gopal Pathak
Abstract:
Artificial neural networks have proved to be an efficient tool for non-parametric modelling of data in various applications where the output is non-linearly associated with the input. They are a preferred tool for many predictive data-mining applications because of their power, flexibility, and ease of use. A standard feed-forward network (FFN) is used here to predict the groundwater fluoride content. The ANN model is trained using the backpropagation algorithm with tansig and logsig activation functions and varying numbers of neurons. The models are evaluated on the basis of statistical performance criteria: root mean squared error (RMSE), regression coefficient (R²), bias (mean error), coefficient of variation (CV), Nash-Sutcliffe efficiency (NSE), and the index of agreement (IOA). The results of the study indicate that an artificial neural network (ANN) can be used for groundwater fluoride prediction in limited-data situations in hard-rock regions like the western parts of Jharkhand with sufficiently good accuracy.
Keywords: artificial neural network (ANN), FFN (feed-forward network), backpropagation algorithm, Levenberg-Marquardt algorithm, groundwater fluoride contamination
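A minimal sketch of this kind of model, with synthetic stand-in data (the real hydro-chemical predictors and the tansig/logsig Levenberg-Marquardt network of the paper are not reproduced here; scikit-learn's MLPRegressor with a tanh hidden layer is used instead):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)

# Synthetic stand-in data: four hypothetical hydro-chemical predictors and a
# fluoride concentration that depends on them nonlinearly plus noise.
X = rng.uniform(0, 1, size=(200, 4))
y = 0.5 + 1.5 * X[:, 0] * X[:, 1] + np.sin(3 * X[:, 2]) - 0.8 * X[:, 3] \
    + rng.normal(0, 0.1, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One hidden layer with a sigmoidal (tanh) activation, trained by backpropagation;
# sklearn's solvers stand in for the Levenberg-Marquardt training used in the paper.
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"RMSE = {rmse:.3f}, R^2 = {r2_score(y_test, pred):.3f}")
```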
5513 Probabilistic Gathering of Agents with Simple Sensors: Distributed Algorithm for Aggregation of Robots Equipped with Binary On-Board Detectors
Authors: Ariel Barel, Rotem Manor, Alfred M. Bruckstein
Abstract:
We present a probabilistic gathering algorithm for agents that can only detect the presence of other agents in front of or behind them. The agents act in the plane and are identical and indistinguishable, oblivious, and lack any means of direct communication. They do not have a common frame of reference in the plane and choose their orientation (direction of possible motion) at random. The analysis of the gathering process assumes that the agents act synchronously in selecting random orientations that remain fixed during each unit time-interval. Two algorithms are discussed. The first one assumes discrete jumps based on the sensing results given the randomly selected motion direction, and in this case, extensive experimental results exhibit probabilistic clustering into a circular region with radius equal to the step-size in time proportional to the number of agents. The second algorithm assumes agents with continuous sensing and motion, and in this case, we can prove gathering into a very small circular region in finite expected time.
Keywords: control, decentralized, gathering, multi-agent, simple sensors
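The jump rule below is an assumption made purely for illustration (the abstract only says that agents sense presence ahead and behind and make discrete jumps); the sketch simulates synchronous rounds in which each agent draws a random orientation, senses the two half-planes, and steps toward the side where it detects other agents:

```python
import numpy as np

rng = np.random.default_rng(42)
n, step, T = 20, 1.0, 3000
pos = rng.uniform(-50, 50, size=(n, 2))   # agents scattered in the plane

for t in range(T):
    theta = rng.uniform(0, 2 * np.pi, n)                 # random orientation per agent
    heading = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    moves = np.zeros_like(pos)
    for i in range(n):
        # Binary sensing: is any other agent in the half-plane ahead? behind?
        others = np.arange(n) != i
        proj = (pos - pos[i]) @ heading[i]
        ahead = np.any(proj[others] > 0)
        behind = np.any(proj[others] < 0)
        # Assumed jump rule: step toward the side where agents are sensed,
        # stay put when agents are sensed on both sides (or on neither).
        if ahead and not behind:
            moves[i] = step * heading[i]
        elif behind and not ahead:
            moves[i] = -step * heading[i]
    pos += moves  # synchronous update of the whole swarm

spread = np.max(np.linalg.norm(pos - pos.mean(axis=0), axis=1))
print(f"max distance from the swarm centre after {T} steps: {spread:.2f}")
```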
5512 Approximate Solution to Non-Linear Schrödinger Equation with Harmonic Oscillator by Elzaki Decomposition Method
Authors: Emad K. Jaradat, Ala’a Al-Faqih
Abstract:
Nonlinear Schrödinger equations are regularly encountered in numerous areas of science and engineering, and a variety of analytical methods have been proposed for solving them. In this work, we construct an approximate solution for the nonlinear Schrödinger equation with a harmonic oscillator potential by the Elzaki decomposition method (EDM). To illustrate the effect of the harmonic oscillator on the behaviour of the wave function, the nonlinear Schrödinger equation in one and two dimensions is considered. The results show that the EDM is convenient and easy to apply to the one- and two-dimensional Schrödinger equation.
Keywords: non-linear Schrödinger equation, Elzaki decomposition method, harmonic oscillator, one and two-dimensional Schrödinger equation
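For reference, the equation being approximated is of the following standard form (the abstract does not fix the coefficients, so generic constants are assumed here):

```latex
% One-dimensional nonlinear Schrödinger equation with a harmonic oscillator
% potential, in the standard form assumed here:
\[
  i\,\frac{\partial \psi}{\partial t}
  = -\frac{1}{2}\,\frac{\partial^{2}\psi}{\partial x^{2}}
    + \frac{1}{2}\,\omega^{2}x^{2}\,\psi
    + g\,|\psi|^{2}\psi ,
\]
% with the two-dimensional version obtained by replacing the second x-derivative
% with the Laplacian \nabla^{2} and x^{2} with x^{2}+y^{2}.
```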
5511 Hypergeometric Solutions to Linear Nonhomogeneous Fractional Equations with Spherical Bessel Functions of the First Kind
Authors: Pablo Martin, Jorge Olivares, Fernando Maass
Abstract:
The use of fractional derivatives in different problems of engineering and physics has been increasing over the last decade. For this reason, we consider here fractional differential equations whose nonhomogeneous term is a spherical Bessel function of the first kind, in both its regular and modified forms, with simple initial conditions. In this way, the solution is found as a combination of hypergeometric functions. The case of a general rational value of the order α of the fractional derivative has been solved in a general way for α between zero and two. The modified spherical Bessel functions of the first kind have also been considered, and it will be shown how to pass from the regular case to the modified one.
Keywords: Caputo fractional derivatives, hypergeometric functions, linear differential equations, spherical Bessel functions
5510 High Harmonics Generation in Hexagonal Graphene Quantum Dots
Authors: Armenuhi Ghazaryan, Qnarik Poghosyan, Tadevos Markosyan
Abstract:
We have considered high-order harmonic generation in planar graphene quantum dots of hexagonal shape within the independent-quasiparticle, tight-binding model. We have investigated how this nonlinear effect is affected by a strong optical wave field, by the quantum dot's typical band gap and lateral size, and by dephasing processes. The equation of motion for the density matrix is solved by performing the time integration with an eighth-order Runge-Kutta algorithm. If the optical wave frequency is much less than the intrinsic band gap of the quantum dot, the main aspects of multiphoton high-harmonic emission in quantum dots are revealed; in that case, the dependence of the cutoff photon energy on the strength of the optical pump wave is almost linear. But when the wave frequency is comparable to the band gap of the quantum dot, the cutoff photon energy shows saturation behaviour with increasing wave field strength.
Keywords: strong wave field, multiphoton, bandgap, wave field strength, nanostructure
5509 Frequent Itemset Mining Using Rough-Sets
Authors: Usman Qamar, Younus Javed
Abstract:
Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining, and it is used to find inherent regularities in data, for example which products are often purchased together; its applications include basket data analysis, cross-marketing, catalogue design, sale campaign analysis, web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that, as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. As a pre-processor for frequent itemset mining, FASTER can produce a speed-up of 3.1 times compared to the original algorithm while maintaining an accuracy of 71%.
Keywords: rough sets, classification, feature selection, entropy, outliers, frequent itemset mining
5508 Seamless Mobility in Heterogeneous Mobile Networks
Authors: Mohab Magdy Mostafa Mohamed
Abstract:
The objective of this paper is to introduce a vertical handover (VHO) algorithm between wireless LANs (WLANs) and LTE mobile networks. The proposed algorithm is based on fuzzy control theory and takes into consideration the power level, subscriber velocity, and target cell load, instead of only the power level as in traditional algorithms. Simulation results show that network performance, in terms of the number of handovers and the handover occurrence distance, is improved.
Keywords: vertical handover, fuzzy control theory, power level, speed, target cell load
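A toy sketch of a fuzzy-style handover decision combining the three inputs (the membership functions, weights, and threshold below are illustrative assumptions, not the paper's rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b and vanishing outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def handover_score(rss_dbm, speed_kmh, target_load):
    """Toy fuzzy-style score in [0, 1]: high when the WLAN signal is weak, the user
    is fast, or the target LTE cell is lightly loaded."""
    weak_signal = tri(rss_dbm, -95, -85, -70)      # WLAN received signal strength (dBm)
    high_speed = tri(speed_kmh, 20, 60, 120)       # subscriber velocity
    low_load = tri(target_load, -0.2, 0.0, 0.6)    # target cell load fraction
    # Weighted aggregation of the rule strengths (a simple defuzzification stand-in).
    return min(1.0, 0.5 * weak_signal + 0.3 * high_speed + 0.3 * low_load)

for rss, v, load in [(-88, 80, 0.2), (-72, 10, 0.7), (-90, 30, 0.5)]:
    decision = "hand over to LTE" if handover_score(rss, v, load) > 0.5 else "stay on WLAN"
    print(f"RSS={rss} dBm, speed={v} km/h, load={load:.1f} -> {decision}")
```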
5507 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis is an exceedingly time-consuming, complex task. In particular, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scan, holistic evaluation of the image. Digital pathology, low-level image manipulation algorithms, and machine learning therefore offer significant advances in the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region-of-interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task through Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation by adding the convolved outputs in the inception unit to the proceeding input data. Moreover, to augment the performance of the transfer-learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (consisting primarily of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, combined with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
5506 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
In this paper, we develop a multi-objective supplier selection and order allocation model in a stochastic environment in which the purchasing cost, the percentage of items delivered late, and the percentage of rejected items provided by each supplier are stochastic parameters following arbitrary probability distributions. To do so, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, the total number of items delivered late, and the total number of rejected items are less than or equal to pre-determined values given by the decision maker. After transforming this stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the resulting single-objective problem. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, we explore the impact of the stochastic parameters on the obtained solution via a sensitivity analysis based on the coefficient of variation. The results show that as the stochastic parameters take greater coefficients of variation, the value of the objective function in the stochastic single-objective programming problem worsens.
Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection
5505 An Efficient Process Analysis and Control Method for Tire Mixing Operation
Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park
Abstract:
Since the tire production process is very complicated, company-wide management of it is difficult and requires considerable capital and labor; productivity therefore has to be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing (mixing for component preparation, building, and curing), the mixing process is an essential and important step because the main component of the tire, called the compound, is formed there. The compound, a rubber synthesis with various characteristics, plays its own role in the finished tire. Scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own orders of operations and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives; in other words, each operation of the mixing process requires a different setup time depending on the previous one. This feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling. Despite its importance, however, few research works deal with the tire mixing process. Thus, in this paper we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs of the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine-allocation information for each job operation, together with a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocity of the particles are updated, and the current solution is compared with the new solution; this procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing, and to extend the current work by considering other performance measures, such as weighted makespan or processing times affected by aging or learning effects.
Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process
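The particle encoding in the paper carries both a compound sequence and machine-allocation information; the sketch below strips this down to a much simpler illustration (invented data, a single mixer, random-key decoding of continuous particles into a job order) of how PSO can minimize a makespan with sequence-dependent setup times:

```python
import numpy as np

rng = np.random.default_rng(7)
n_jobs = 8
proc = rng.uniform(5, 15, n_jobs)            # processing time of each compound batch
setup = rng.uniform(1, 6, (n_jobs, n_jobs))  # sequence-dependent setup times

def makespan(order):
    """Completion time of one mixer processing the jobs in the given order,
    paying a sequence-dependent setup between consecutive compounds."""
    total, prev = 0.0, None
    for j in order:
        total += (setup[prev, j] if prev is not None else 0.0) + proc[j]
        prev = j
    return total

def decode(keys):
    """Random-key decoding: sorting a particle's continuous keys gives a job order."""
    return np.argsort(keys)

# Plain PSO over the continuous keys (a simplification of the particle encoding
# described in the abstract, which also carries machine-allocation information).
n_particles, iters, w, c1, c2 = 30, 300, 0.7, 1.5, 1.5
x = rng.random((n_particles, n_jobs))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([makespan(decode(p)) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([makespan(decode(p)) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best order:", decode(gbest), "makespan:", round(pbest_val.min(), 2))
```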
5504 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems
Authors: Riadh Zorgati, Thomas Triboulet
Abstract:
In quite diverse application areas, such as astronomy, medical imaging, geophysics, or nondestructive evaluation, many problems related to calibration, fitting, or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data, insufficient data, and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness, and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of the data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where interior-point algorithms for solving linear programs lead to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed here for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to any complex rectangular matrix. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required by iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices; theoretical results are derived on both the characterization of the type of generalized inverse obtained and the convergence. 2) Thanks to its properties, this matrix can be efficiently used in different solution schemes, such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices; we also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the known classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters of the solution of a linear system (such as the extreme values, the mean, the variance, …) prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix
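For context, the classical randomized Kaczmarz iteration mentioned in the abstract can be sketched as follows (a textbook version, not the authors' stochastic preconditioner); run on a well-conditioned random system and on a 6x6 Hilbert matrix, it also illustrates how ill-conditioning stalls such iterations, which is the difficulty the proposed preconditioning targets:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=20_000, seed=0):
    """Classical randomized Kaczmarz: at each step project the iterate onto the
    hyperplane of one equation, picking rows with probability ~ ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.linalg.norm(A, axis=1) ** 2
    p = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(3)

# Well-conditioned consistent system: Kaczmarz converges quickly.
A1 = rng.standard_normal((20, 10))
x1 = rng.standard_normal(10)
err1 = np.linalg.norm(randomized_kaczmarz(A1, A1 @ x1) - x1) / np.linalg.norm(x1)

# Ill-conditioned system (6x6 Hilbert matrix, condition number ~1.5e7):
# the same iteration stalls, which is what preconditioning aims to cure.
A2 = np.array([[1.0 / (i + j + 1) for j in range(6)] for i in range(6)])
x2 = rng.standard_normal(6)
err2 = np.linalg.norm(randomized_kaczmarz(A2, A2 @ x2) - x2) / np.linalg.norm(x2)

print(f"cond(A1)={np.linalg.cond(A1):.1e}, relative error {err1:.1e}")
print(f"cond(A2)={np.linalg.cond(A2):.1e}, relative error {err2:.1e}")
```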
5503 Photoreflectance Anisotropy Spectroscopy of Coupled Quantum Wells
Authors: J. V. Gonzalez Fernandez, T. Mozume, S. Gozu, A. Lastras Martinez, L. F. Lastras Martinez, J. Ortega Gallegos, R. E. Balderas Navarro
Abstract:
We report on a theoretical-experimental study of photoreflectance anisotropy (PRA) spectroscopy of coupled double quantum wells. By probing the in-plane interfacial optical anisotropies, we demonstrate that PRA spectroscopy has the capacity to detect and distinguish layers with quantum dimensions. In order to account for the experimental PRA spectra, we have used a theoretical model at k=0 based on a linear electro-optic effect through a piezoelectric shear strain.
Keywords: coupled double quantum well (CDQW), linear electro-optic (LEO) effect, photoreflectance anisotropy (PRA), piezoelectric shear strain
5502 Series-Parallel Systems Reliability Optimization Using Genetic Algorithm and Statistical Analysis
Authors: Essa Abrahim Abdulgader Saleem, Thien-My Dao
Abstract:
The main objective of this paper is to optimize series-parallel system reliability using a genetic algorithm (GA) and statistical analysis, considering system reliability constraints which involve the redundancy numbers of the selected components, the total cost, and the total weight. To perform this work, firstly the mathematical model which maximizes system reliability subject to maximum system cost and maximum system weight constraints is presented; secondly, a statistical analysis is used to optimize the GA parameters; and thirdly, the GA is used to optimize series-parallel system reliability. The objective is to determine the strategy for choosing the redundancy level of each subsystem so as to maximize the overall system reliability subject to the total cost and total weight constraints. Finally, the reliability optimization results for the series-parallel system case study are shown, and comparisons with previous results are presented to demonstrate the performance of our GA.
Keywords: reliability, optimization, meta-heuristic, genetic algorithm, redundancy
5501 A Comparative Study of k-NN and MLP-NN Classifiers Using GA-kNN Based Feature Selection Method for Wood Recognition System
Authors: Uswah Khairuddin, Rubiyah Yusof, Nenny Ruthfalydia Rosli
Abstract:
This paper presents a comparative study between the k-nearest neighbour (k-NN) and multi-layer perceptron neural network (MLP-NN) classifiers, using a genetic algorithm (GA) as the feature selector, for a wood recognition system. The features have been extracted from the images using the grey level co-occurrence matrix (GLCM). GA-based feature selection is used mainly to ensure that the database used for training the wood species pattern classifier consists only of optimized features. The feature selection process aims to select only the most discriminating features of the wood species in order to reduce confusion for the pattern classifier; this approach keeps the 'good' features, those that maximize the inter-class distance and minimize the intra-class distance. Wrapper GA is used with the k-NN classifier as the fitness evaluator (GA-kNN). The results show that k-NN is the best choice of classifier because it uses a very simple distance-calculation algorithm and classification tasks can be done in a short time with good classification accuracy.
Keywords: feature selection, genetic algorithm, optimization, wood recognition system
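A compact sketch of the wrapper GA-kNN idea on synthetic stand-in features (the real GLCM features, GA operators, and parameter settings of the paper are not reproduced): each individual is a binary feature mask whose fitness is the cross-validated k-NN accuracy on the selected columns.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for GLCM texture features of wood images: 20 features,
# of which only a handful are informative for the species label.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=5, random_state=0)

def fitness(mask):
    """Wrapper fitness: cross-validated k-NN accuracy on the selected features."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

# A small GA over binary feature masks: selection, uniform crossover, bit-flip mutation.
pop = rng.random((20, X.shape[1])) < 0.5
for generation in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                    # keep the better half
    mates = parents[rng.integers(0, 10, size=10)]
    cross = rng.random(parents.shape) < 0.5
    children = np.where(cross, parents, mates)                 # uniform crossover
    children ^= rng.random(children.shape) < 0.05              # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```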
5500 Experimental Analysis of Tuned Liquid Damper (TLD) for High Raised Structures
Authors: Mohamad Saberi, Arash Sohrabi
Abstract:
The tuned liquid damper (TLD) is a passive structural control device that has been used for seismic control in civil engineering since the mid-1980s. The system consists of one or more tanks filled with fluid, usually water, installed on top of a high-rise structure to suppress structural vibration. In this article, we show how to build a seismic table containing a TLD system and analyse the results of using this system on our structure. The results imply that when the frequency ratio approaches 1, the system performs at its best in both dissipating energy and increasing structural damping. The results of this series of experiments also agree with Housner's linear theory of behaviour.
Keywords: TLD, seismic table, structural system, Housner linear behaviour
5499 Optimal Design of Friction Dampers for Seismic Retrofit of a Moment Frame
Authors: Hyungoo Kang, Jinkoo Kim
Abstract:
This study investigated the determination of the optimal location and friction force of friction dampers to effectively reduce the seismic response of a reinforced concrete structure designed without considering seismic load. To this end, a genetic algorithm was applied and the results were compared with those obtained by simplified methods, such as distributing the dampers based on the story shear or the inter-story drift ratio. The seismic performance of the model structure with optimally positioned friction dampers was evaluated by nonlinear static and dynamic analyses. The analysis results showed that, compared with the system without friction dampers, the maximum roof displacement and the inter-story drift ratio were reduced by about 30% and 40%, respectively. After installation of the dampers, about 70% of the earthquake input energy was dissipated by the dampers, and the energy dissipated in the structural elements was reduced by about 50%. In comparison with the simplified methods of installation, the genetic algorithm provided more efficient solutions for seismic retrofit of the model structure.
Keywords: friction dampers, genetic algorithm, optimal design, RC buildings
5498 Non-Linear Finite Element Analysis of Bonded Single Lap Joint in Composite Material
Authors: A. Benhamena, L. Aminallah, A. Aid, M. Benguediab, A. Amrouche
Abstract:
The goal of this work is to analyze the severity of the interfacial stress distribution in a single lap adhesive joint under tensile loading. A three-dimensional, non-linear finite element method based on the computation of the peel and shear stresses was used to analyze the fracture behaviour of the single lap adhesive joint. The effect of the loading magnitude and the overlap length on the distribution of peel and shear stresses was highlighted. A good correlation was found between the FEM simulations and the analytical results.
Keywords: aluminum 2024-T3 alloy, single-lap adhesive joints, interface stress distributions, material nonlinear analysis, adhesive, bending moment, finite element method
5497 Growth Performance, Body Linear Measurements and Body Condition Score of Savanna Brown Goats Fed Enzyme Treated Sawdust Diets as Replacement for Maize Offal and Managed Semi-intensively
Authors: Alabi Olushola John, Ogbiko Anthonia, Tsado Daniel Nma, Mbajiorgu Ejike Felix, Adama Theophilus Zubairu
Abstract:
A total of thirty (30) goats weighing between 5.8 and 7.3 kg were used to determine the growth performance, linear body measurements, and body condition score of semi-intensively managed Savanna Brown goats fed enzyme-treated sawdust diets (ETSD). They were divided into five dietary treatment (T) groups with three replications, using a completely randomized design. Treatment 1 (T1) comprised animals fed a diet containing 0% enzyme-treated sawdust, while Treatments 2 (T2), 3 (T3), 4 (T4), and 5 (T5) comprised animals fed diets containing 10, 20, 30, and 40% enzyme-treated sawdust, respectively. The study lasted 16 weeks. Data on growth performance parameters, linear body measurements (height at withers, body length, chest girth, hind leg length, foreleg length, facial length), and body condition score were collected and analyzed using one-way analysis of variance. No significant difference (p>0.05) was observed in any of the growth performance parameters or linear body measurements. However, a significant difference was observed in body length and daily body length gain, with the highest values observed in animals fed the control diet (7.38 and 0.08 cm, respectively) and in animals on 30% ETSD (7.25 and 0.07 cm, respectively), and the lowest values (4.75 and 0.05 cm, respectively) observed in animals fed 10% ETSD. It was therefore concluded that enzyme-treated sawdust can be used in the diets of Savanna Brown goats at up to 40% replacement for maize offal, since this treatment improved body length and daily body length gain.
Keywords: performance, sawdust, enzyme treated, semi-intensively, replacement
5496 Survey of Methods for Solutions of Spatial Covariance Structures and Their Limitations
Authors: Joseph Thomas Eghwerido, Julian I. Mbegbu
Abstract:
In modelling environmental processes, we apply multidisciplinary knowledge to explain, explore, and predict the Earth's response to natural and human-induced environmental changes. In the analysis of space-time ecological and environmental studies, the spatial parameters of interest are usually heterogeneous, which often negates the assumption of stationarity. Hence, describing the dispersion of transported atmospheric pollutants, landscape or topographic effects, and weather patterns depends on a good estimate of the spatial covariance. The generalized linear mixed model, although linear in the expected-value parameters, has a likelihood that varies nonlinearly as a function of the covariance parameters; as a consequence, computing estimates for a linear mixed model requires the iterative solution of a system of simultaneous nonlinear equations. In order to predict the variables at unsampled locations, we need good estimates of the variables at the sampled locations. The geostatistical methods for solving this spatial problem assume a stationary (locally defined) covariance that is uniform in space, which is not always valid because spatial processes often exhibit nonstationary covariance; hence, these methods effectively impose a globally defined covariance. We consider different existing methods for obtaining the spatial covariance of space-time processes at unsampled locations, where the stationary covariance changes with location across multiple time sets, together with some of their asymptotic properties and limitations.
Keywords: parametric, nonstationary, kernel, kriging
5495 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation
Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu
Abstract:
Compressed sensing (CS) based computed tomographic (CT) reconstruction algorithms utilize total variation (TV) to transform the CT image into a sparse domain and minimize the L1-norm of the sparse image for reconstruction. Unlike traditional CS-based reconstruction, which only calculates the x-coordinate and y-coordinate TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use an eight-directional TV to transform the CT image into the sparse domain; furthermore, we use a 26-directional TV for 3D reconstruction. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful in reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform, we use both the Shepp-Logan phantom and a head phantom as the targets for reconstruction, with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). The results show that the multi-directional TV method can reconstruct images with relatively fewer artifacts compared with the traditional CS-based reconstruction algorithm which only calculates the x-coordinate and y-coordinate TV. We also choose RMSE, PSNR, and UQI as the metrics for quantitative analysis; no matter which metric is calculated, the proposed multi-directional TV method is better.
Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator
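One plausible reading of the eight-directional transform (the paper's exact weighting of the diagonal differences is not given, so the definition below is an assumption) is to accumulate absolute differences to all eight neighbours of each pixel; the sketch compares it with the standard two-direction TV on a toy phantom:

```python
import numpy as np

def tv_2dir(img):
    """Standard anisotropic TV: absolute differences along x and y only."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

def tv_8dir(img):
    """Eight-directional TV: absolute differences to all 8 neighbours of each pixel
    (one plausible reading of the multi-directional gradient operator; the paper's
    exact weighting of the diagonal terms may differ)."""
    total = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = np.roll(np.roll(img, di, axis=0), dj, axis=1)
            diff = np.abs(img - shifted)
            # discard the wrapped-around borders introduced by np.roll
            if di == 1:
                diff[0, :] = 0
            if di == -1:
                diff[-1, :] = 0
            if dj == 1:
                diff[:, 0] = 0
            if dj == -1:
                diff[:, -1] = 0
            total += diff.sum()
    return total

# Toy piecewise-constant phantom: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
noisy = img + 0.1 * np.random.default_rng(0).standard_normal(img.shape)

for name, f in [("2-dir TV", tv_2dir), ("8-dir TV", tv_8dir)]:
    print(f"{name}: clean={f(img):.1f}  noisy={f(noisy):.1f}")
```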
5494 Nonhomogeneous Linear Second Order Differential Equations and Resonance through GeoGebra Program
Authors: F. Maass, P. Martin, J. Olivares
Abstract:
The aim of this work is to apply the program GeoGebra to teaching nonhomogeneous linear second-order differential equations with constant coefficients. Different kinds of functions or forcing terms will be considered on the right-hand side of the differential equations; in particular, emphasis will be placed on the case of trigonometric functions producing resonance phenomena. In order to obtain this, the frequencies of the trigonometric functions are varied. Once the resonances appear, they have to be correlated with the roots of the second-order algebraic equation determined by the coefficients of the differential equation. In this way, physics and engineering students can understand resonance effects and their consequences in the simplest way. A large variety of examples will be shown, using different kinds of functions for the nonhomogeneous part of the differential equations.
Keywords: education, GeoGebra, ordinary differential equations, resonance
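As a worked illustration of the resonance case described (a standard textbook example, not necessarily one of the paper's GeoGebra worksheets):

```latex
% An undamped oscillator forced exactly at its natural frequency \omega_0:
\[
  y'' + \omega_0^{2}\, y = F_0 \cos(\omega_0 t).
\]
% The characteristic equation r^{2} + \omega_0^{2} = 0 has roots r = \pm i\omega_0,
% so the forcing duplicates a homogeneous solution and the particular solution
% must carry an extra factor of t:
\[
  y_p(t) = \frac{F_0}{2\omega_0}\, t \sin(\omega_0 t),
\]
% an oscillation whose amplitude grows linearly in time (resonance). For a forcing
% frequency \omega \neq \omega_0 the response amplitude stays bounded,
% y_p(t) = F_0 \cos(\omega t) / (\omega_0^{2} - \omega^{2}).
```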