Search results for: minimum root mean square (RMS) error matching algorithm
8903 Assisted Video Colorization Using Texture Descriptors
Authors: Andre Peres Ramos, Franklin Cesar Flores
Abstract:
Colorization is the process of adding color to a monochromatic image or video. Usually, the process involves segmenting the image into regions of interest and then applying a color to each one; for video, this process is repeated for every frame, which makes it a tedious and time-consuming job. We propose a new assisted method for video colorization: the user only has to colorize one frame, and the colors are then propagated to the following frames. The user can intervene at any time to correct eventual errors in color assignment. The method consists of extracting intensity and texture descriptors from the frames and then performing feature matching to determine the best color for each segment. To reduce computation time and improve spatial coherence, we narrow the search area and assign a weight to each feature so as to emphasize the texture descriptors. To produce a more natural result, we use an optimization algorithm to perform the color propagation. Experimental results on several image sequences, compared with existing methods, demonstrate that the proposed method achieves better colorization with less time and less user intervention.
Keywords: colorization, feature matching, texture descriptors, video segmentation
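The matching step lends itself to a short illustration. Below is a minimal Python sketch of weighted minimum-RMS-error feature matching between colorized reference segments and the segments of a new frame; the descriptor layout, the weights, and the toy values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def match_colors(ref_desc, ref_colors, new_desc, weights):
    """Assign each new segment the color of the reference segment whose
    (intensity + texture) descriptor has the minimum weighted RMS error.
    ref_desc: (m, d), ref_colors: (m, 3), new_desc: (n, d), weights: (d,)."""
    w = np.asarray(weights, dtype=float)
    colors = np.empty((new_desc.shape[0], ref_colors.shape[1]))
    for i, f in enumerate(new_desc):
        # weighted RMS error against every reference descriptor
        rms = np.sqrt(np.mean(w * (ref_desc - f) ** 2, axis=1))
        colors[i] = ref_colors[np.argmin(rms)]
    return colors

# toy example: 3 reference segments, 2 new segments, 4 features each
ref_desc = np.array([[0.1, 0.2, 0.3, 0.4],
                     [0.9, 0.8, 0.7, 0.6],
                     [0.5, 0.5, 0.5, 0.5]])
ref_colors = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=float)
new_desc = np.array([[0.12, 0.21, 0.28, 0.41],
                     [0.88, 0.79, 0.72, 0.61]])
weights = np.array([1.0, 2.0, 2.0, 2.0])  # heavier weights on texture features
print(match_colors(ref_desc, ref_colors, new_desc, weights))
```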
Procedia PDF Downloads 162
8902 Performance Evaluation of MIMO-OFDM Communication Systems
Authors: M. I. Youssef, A. E. Emam, M. Abd Elghany
Abstract:
This paper evaluates the bit error rate (BER) performance of MIMO-OFDM communication systems. A MIMO system uses multiple transmit and receive antennas with different coding techniques to enhance either the transmission diversity or the spatial multiplexing gain. With the Alamouti algorithm, the same information is transmitted over multiple antennas at different time intervals and then combined at the receiver to minimize the probability of error, combat fading, and thus improve the received signal-to-noise ratio. With the V-BLAST algorithm, the transmitted signal is divided across different transmit channels and received by different receive antennas to increase the transmitted data rate and achieve higher throughput. The paper provides a study of different diversity-gain coding schemes and spatial multiplexing coding for MIMO systems. A comparison of various channel estimation and equalization techniques is given. The simulation is implemented in MATLAB, and the results show the performance of the transmission models under different channel environments.
Keywords: MIMO communication, BER, space codes, channels, Alamouti, V-BLAST
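As an illustration of the Alamouti scheme described above, here is a minimal Python sketch of one 2x1 Alamouti block over a flat Rayleigh fading channel; the QPSK symbols, channel model, and noise level are assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# two QPSK symbols to send in one Alamouti block
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# flat Rayleigh fading gains from the two transmit antennas,
# assumed constant over the two symbol periods
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# time 1: antennas send (s1, s2); time 2: antennas send (-s2*, s1*)
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# linear combining at the receiver separates the two symbols
s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)

gain = abs(h1) ** 2 + abs(h2) ** 2  # diversity gain factor
print(s1_hat / gain, s2_hat / gain)  # close to s1, s2
```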
Procedia PDF Downloads 175
8901 Algorithm Optimization to Sort in Parallel by Decreasing the Number of the Processors in SIMD (Single Instruction Multiple Data) Systems
Authors: Ali Hosseini
Abstract:
Parallelization is a mechanism for decreasing the time necessary to execute programs. Sorting is one of the important operations used in different systems, in that the proper functioning of many algorithms and operations depends on sorted data. The CRCW_SORT algorithm sorts 'N' elements in O(1) time on SIMD (Single Instruction Multiple Data) computers with n^2/2 - n/2 processors. In this article, by presenting a mechanism that divides the input sequence at a hinge element into two shorter sequences, the number of processors needed to sort 'N' elements in O(1) time is decreased to n^2/8 - n/4 in the best case; under this mechanism, the best case occurs when the hinge element is the middle one and the worst case when it is the minimum. The findings from assessing the proposed algorithm against other methods, with respect to the data set and the number of processors, indicate that the proposed algorithm uses fewer processors during execution than the other methods.
Keywords: CRCW, SIMD (Single Instruction Multiple Data) computers, parallel computers, number of the processors
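For context, the following Python sketch simulates, sequentially, the classical CRCW enumeration sort that underlies the O(1)-time bound: one conceptual processor per element pair performs a comparison, ranks follow from counting wins, and elements are written straight to their ranked positions. The pivot-based processor-reduction mechanism of the paper itself is not reproduced here.

```python
import numpy as np

def crcw_rank_sort(a):
    """Sequentially simulate the O(1)-time CRCW enumeration sort:
    each (i, j) comparison is conceptually one processor; the rank of
    an element is the number of comparisons it wins."""
    a = np.asarray(a)
    idx = np.arange(len(a))
    # ties broken by index so the ranks form a permutation
    wins = (a[:, None] > a[None, :]) | (
        (a[:, None] == a[None, :]) & (idx[:, None] > idx[None, :]))
    ranks = wins.sum(axis=1)   # rank of each element
    out = np.empty_like(a)
    out[ranks] = a             # the concurrent-write step
    return out

print(crcw_rank_sort([5, 3, 8, 3, 1]))  # [1 3 3 5 8]
```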
Procedia PDF Downloads 310
8900 Modeling of Tool Flank Wear in Finish Hard Turning of AISI D2 Using Genetic Programming
Authors: V. Pourmostaghimi, M. Zadshakoyan
Abstract:
The efficiency and productivity of finish hard turning can be enhanced considerably by utilizing accurate predictive models for cutting tool wear. In this respect, the ability of genetic programming to produce an accurate analytical model is a notable characteristic which makes it more applicable than other predictive modeling methods. In this paper, a genetic equation for modeling tool flank wear is developed from experimentally measured flank wear values using genetic programming during finish turning of hardened AISI D2. A series of tests was conducted over a range of cutting parameters, and the values of tool flank wear were measured. On the basis of the obtained results, a genetic model relating the cutting parameters to tool flank wear was extracted. The accuracy of the genetically obtained model was assessed using two statistical measures, the root mean square error (RMSE) and the coefficient of determination (R²). The evaluation revealed that the genetic model predicted flank wear over the study area accurately (R² = 0.9902 and RMSE = 0.0102). These results allow us to conclude that the proposed genetic equation corresponds well with the experimental data and can be implemented in real industrial applications.
Keywords: cutting parameters, flank wear, genetic programming, hard turning
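The two reported measures are straightforward to compute; a minimal Python sketch follows, with hypothetical wear values standing in for the paper's measurements.

```python
import numpy as np

def rmse(y_obs, y_pred):
    """Root mean square error between observed and predicted values."""
    return np.sqrt(np.mean((np.asarray(y_obs) - np.asarray(y_pred)) ** 2))

def r_squared(y_obs, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_obs, y_pred = np.asarray(y_obs), np.asarray(y_pred)
    ss_res = np.sum((y_obs - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# hypothetical measured vs. model-predicted flank wear values (mm)
vb_meas = [0.11, 0.15, 0.21, 0.26, 0.33]
vb_pred = [0.10, 0.16, 0.20, 0.27, 0.32]
print(rmse(vb_meas, vb_pred), r_squared(vb_meas, vb_pred))
```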
Procedia PDF Downloads 178
8899 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field
Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot
Abstract:
The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment, and to improve product quality. Information about the variability of different soil attributes within a field is essential for the decision-making process, yet the lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, since conventional sampling is expensive and time-consuming. Adaptive sampling has proven to be an accurate and affordable technique for planning within-field sampling for site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify areas of the field for adaptive re-survey. The original dataset was split into validation and calibration groups, and the calibration group was further divided into three sets with different measurement-pass intervals. Conditional simulation, a geostatistical technique, was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates. High-uncertainty areas in each set were grouped using image segmentation in MATLAB, and areas of high and low values were separated. Finally, an adaptive re-survey was carried out over the high-uncertainty areas. Adding the adaptive re-survey significantly reduced the time required compared with resampling the whole field and resulted in ECa estimates with minimal error. For the widest measurement-pass interval, the root mean square error (RMSE) yielded by the initial crude sampling survey was reduced after the adaptive re-survey to a value close to that of the ECa obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was found to be 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while preserving the accuracy of the observations.
Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management
Procedia PDF Downloads 132
8898 An Improved Approach Based on MAS Architecture and Heuristic Algorithm for Systematic Maintenance
Authors: Abdelhadi Adel, Kadri Ouahab
Abstract:
This paper proposes an improved approach based on a multi-agent system (MAS) architecture and a heuristic algorithm for systematic maintenance, with the goal of minimizing makespan. We have implemented a metaheuristic-based problem-solving approach for optimizing the processing time. The proposed approach is inspired by the behavior of the human body: it hybridizes a multi-agent system with mechanisms drawn from the human body, especially genetics. The effectiveness of our approach is demonstrated repeatedly in this paper. To solve such a complex problem, the proposed approach uses advanced genetic operators such as uniform crossover and single-point mutation. The approach is applied to three preventive maintenance policies, which are intended to maximize availability or to maintain a minimum level of reliability along the production chain. The results show that our algorithm outperforms existing algorithms. We assumed that the machines might be periodically unavailable during production scheduling.
Keywords: multi-agent systems, emergence, genetic algorithm, makespan, systematic maintenance, scheduling, hybrid flow shop scheduling
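The two genetic operators named above can be sketched briefly in Python; the value-encoded chromosome below is an assumption for illustration and is not the paper's exact encoding.

```python
import random

def uniform_crossover(parent_a, parent_b):
    """Uniform crossover: each gene is taken from either parent
    with equal probability."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

def single_point_mutation(chromosome, gene_pool):
    """Single-point mutation: replace one randomly chosen gene
    with a random value from the gene pool."""
    child = list(chromosome)
    child[random.randrange(len(child))] = random.choice(gene_pool)
    return child

random.seed(1)
pool = list(range(5))           # hypothetical gene values
p1 = [0, 1, 2, 3, 4]
p2 = [4, 3, 2, 1, 0]
child = single_point_mutation(uniform_crossover(p1, p2), pool)
print(child)
```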
Procedia PDF Downloads 301
8897 The Intention to Use Telecare in People of Fall Experience: Application of Fuzzy Neural Network
Authors: Jui-Chen Huang, Shou-Hsiung Cheng
Abstract:
This study examined the willingness to use telecare among people in Taiwan who had experienced a fall in the previous three months. The study adopted convenience sampling and a structured questionnaire to collect data. It was based on the definitions and constructs of the Health Belief Model (HBM), which comprises seven constructs: perceived benefits (PBs), perceived disease threat (PDT), perceived barriers of taking action (PBTA), external cues to action (ECUE), internal cues to action (ICUE), attitude toward using (ATT), and behavioral intention to use (BI). This study adopted a Fuzzy Neural Network (FNN) to put forward an effective method, modeling the dependence of ATT on PBs, PDT, PBTA, ECUE, and ICUE. The training and testing RMSE (root mean square error) values are 0.028 and 0.166 for the FNN, respectively, against 0.828 and 0.578 for the regression model. On the other hand, for the dependence of BI on ATT, the training and testing RMSE values for the FNN are 0.050 and 0.109, respectively, against 0.529 and 0.571 for the regression model. The results show that the FNN method outperforms regression analysis and is an effective and viable approach.
Keywords: fall, fuzzy neural network, health belief model, telecare, willingness
Procedia PDF Downloads 201
8896 Estimating Air Particulate Matter 10 Using Satellite Data and Analyzing Its Annual Temporal Pattern over Gaza Strip, Palestine
Authors: Abdallah A. A. Shaheen
Abstract:
The Gaza Strip faces economic and political issues such as conflict, siege, and urbanization, all of which have led to an increase in air pollution over the area. In this study, the particulate matter 10 (PM10) concentration over the Gaza Strip has been estimated from Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) data, based on a multispectral algorithm. Simultaneously, in-situ measurements of the corresponding particulate were acquired for the selected time period. Landsat and ground data for eleven years were used to develop the algorithm, while four years of data (2002, 2006, 2010, and 2014) were used to validate its results. The developed algorithm gives a high regression coefficient, R = 0.86, an RMSE of 9.71 µg/m³, and a P value of approximately 0. Validation shows that the calculated PM10 correlates strongly with the measured PM10, indicating the high efficiency of the algorithm for mapping PM10 concentration during the years 2000 to 2014. Overall, the results show an increase in the minimum, maximum, and average yearly PM10 concentrations, with a similar trend over the urban area. The rate of urbanization was evaluated by supervised classification of the Landsat imagery; urban sprawl from 2000 to 2014 resulted in a high concentration of PM10 in the study area.
Keywords: PM10, Landsat, atmospheric reflectance, Gaza Strip, urbanization
Procedia PDF Downloads 252
8895 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load
Authors: Ahmad Saadiq, Neeraj Sahu
Abstract:
Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics, and scientific interest. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various perspectives, and different methods have been used for its estimation, with several empirical equations proposed by experts. Though the results of these methods differ considerably from one another and from experimental observations, because sediment measurements have inherent limits, these equations can still be used to estimate sediment load. In the present study, two black-box models, namely a sediment rating curve (SRC) and an artificial neural network (ANN), are used to simulate the suspended sediment load. The study is carried out for the Seonath sub-basin; the Seonath is the biggest tributary of the Mahanadi river and carries a vast amount of sediment. The data were collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and the IMD (Indian Meteorological Department) and include discharge, sediment concentration, and rainfall for 10 years. Sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. The sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the ANN, the data are first normalized and then fed in various combinations to yield the sediment load. The RMSE (root mean square error) and R² (coefficient of determination) between the observed load and the estimated load are used as evaluation criteria. For an ideal model, the RMSE is zero and R² is 1. However, as the models used in this study are black-box models, they do not carry an exact representation of the factors causing sedimentation; hence, the model giving the lowest RMSE and the highest R² is the best model in this study. The lowest RMSE values (based on normalized data) for the sediment rating curve, feed-forward back propagation, cascade-forward back propagation, and neural network fitting are 0.043425, 0.00679781, 0.0050089, and 0.0043727, respectively, with corresponding R² values of 0.8258, 0.9941, 0.9968, and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not at all tolerable when estimating sediment load, and hence this model cannot be crowned the best model based on this study. Cascade-forward back propagation produces results much closer to the neural network fitting model, and hence it is the best model based on the present study.
Keywords: artificial neural network, root mean square error, sediment, sediment rating curve
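The rating-curve step can be illustrated compactly: the power-law form C = aQ^b becomes linear in log space and can be fitted by least squares. The discharge and concentration values below are hypothetical stand-ins, not the Jondhra station data.

```python
import numpy as np

# hypothetical paired observations at a gauging station
q = np.array([120.0, 340.0, 560.0, 910.0, 1500.0])  # discharge, m^3/s
c = np.array([80.0, 210.0, 350.0, 600.0, 1100.0])   # sediment conc., mg/L

# rating curve C = a * Q^b is linear in log space:
# log C = log a + b log Q, fitted by least squares
b, log_a = np.polyfit(np.log(q), np.log(c), 1)
a = np.exp(log_a)

c_est = a * q ** b            # estimated concentration, mg/L
load = 0.0864 * q * c_est     # sediment load in tonnes/day (Q in m^3/s, C in mg/L)
print(f"a = {a:.4g}, b = {b:.3f}")
print(load)
```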
Procedia PDF Downloads 325
8894 A Review on Parametric Optimization of Casting Processes Using Optimization Techniques
Authors: Bhrugesh Radadiya, Jaydeep Shah
Abstract:
In the Indian foundry industry, there is a need for defect-free castings with minimum production cost and short lead times. Casting defects are a major issue on the foundry shop floor, increasing the rejection rate of castings and the wastage of material. Various parameter groups influence the casting process: mold-machine-related, green-sand-related, cast-metal-related, mold-related, and shake-out-related parameters. Of these, the mold-related parameters have the greatest influence on casting defects in the sand casting process. This paper reviews castings produced by foundries with shrinkage and blowholes as the major defects; the analyses identified that mold-related parameters such as mold temperature, pouring temperature, and runner size were not properly set in the sand casting process. These parameters have been optimized using different optimization techniques, such as the Taguchi method, response surface methodology, genetic algorithms, and the teaching-learning-based optimization (TLBO) algorithm. It is concluded that the teaching-learning-based optimization algorithm gives better results than the other optimization techniques.
Keywords: casting defects, genetic algorithm, parametric optimization, Taguchi method, TLBO algorithm
Procedia PDF Downloads 728
8893 Algorithm Research on Traffic Sign Detection Based on Improved EfficientDet
Authors: Ma Lei-Lei, Zhou You
Abstract:
To address the low detection accuracy of deep learning algorithms in traffic sign detection, this paper proposes an improved EfficientDet-based traffic sign detection algorithm. Multi-head self-attention is introduced in the minimum-resolution layer of the EfficientDet backbone to achieve effective aggregation of local and global depth information, and an improved feature fusion pyramid with additional vertical cross-layer connections is proposed, which improves the performance of the model while introducing only a small amount of complexity. The Balanced L1 loss is introduced to replace the original regression loss function, Smooth L1 loss, which addresses the balance problem in the loss function. Experimental results show that the algorithm proposed in this study is suitable for the task of traffic sign detection. Compared with other models, the improved EfficientDet has the best detection accuracy; although its inference speed is not completely dominant, it still meets the real-time requirement.
Keywords: convolutional neural network, transformer, feature pyramid networks, loss function
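The Balanced L1 loss referred to here was introduced in Libra R-CNN (Pang et al.); a NumPy sketch with its commonly cited default parameters (alpha = 0.5, gamma = 1.5) follows. Treat it as an illustration of the form of the loss, not the authors' training code.

```python
import numpy as np

def balanced_l1_loss(x, alpha=0.5, gamma=1.5):
    """Balanced L1 loss: promotes the gradient contribution of
    inliers (|x| < 1) relative to Smooth L1 loss."""
    x = np.abs(x)
    # b is fixed by requiring a continuous gradient at |x| = 1:
    # alpha * log(b + 1) = gamma
    b = np.expm1(gamma / alpha)
    inlier = (alpha / b) * (b * x + 1) * np.log(b * x + 1) - alpha * x
    # the constant gamma/b - alpha keeps the loss continuous at |x| = 1
    outlier = gamma * x + gamma / b - alpha
    return np.where(x < 1.0, inlier, outlier)

print(balanced_l1_loss(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
```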
Procedia PDF Downloads 97
8892 Digestion Optimization Algorithm: A Novel Bio-Inspired Intelligence for Global Optimization Problems
Authors: Akintayo E. Akinsunmade
Abstract:
The digestion optimization algorithm is a novel biologically inspired metaheuristic method for solving complex optimization problems. The development of the algorithm was inspired by studying the human digestive system: the algorithm mimics the processes of food ingestion, breakdown, absorption, and elimination to search effectively and efficiently for optimal solutions. The algorithm was tested on seven different types of optimization benchmark functions and produced optimal solutions with standard errors, which were compared with the exact solutions of the test functions.
Keywords: bio-inspired algorithm, benchmark optimization functions, digestive system in human, algorithm development
Procedia PDF Downloads 8
8891 High Performance of Direct Torque and Flux Control of a Double Stator Induction Motor Drive with a Fuzzy Stator Resistance Estimator
Authors: K. Kouzi
Abstract:
In order to obtain stable, high-performance direct torque and flux control (DTFC) of a double-star induction motor (DSIM) drive, proper online adaptation of the stator resistance is very important. This is because the stator resistance varies with operating conditions, which introduces error into the estimated flux position and the magnitude of the stator flux. Error in the estimated stator flux deteriorates the performance of the DTFC drive, and the effect of this estimation error is especially important at low speed. Our aim is therefore to overcome the sensitivity of DTFC to stator resistance variation by proposing an online fuzzy estimation of the stator resistance. The fuzzy estimation method performs an online stator resistance correction based on the stator current estimation error and its variation, with the fuzzy logic controller giving the future stator resistance increment at its output. The main advantage of the suggested control algorithm is that it avoids the drive instability that may occur in certain situations and ensures tracking of the actual stator resistance. The validity of the technique and the improvement of the whole system's performance are proved by the results.
Keywords: direct torque control, dual stator induction motor, fuzzy logic estimation, stator resistance adaptation
Procedia PDF Downloads 325
8890 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels; in this situation, both should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide a significant difference in the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the treatment-effect estimates, as the IPD tends to overestimate the treatment effects while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
Keywords: aggregate data, combined-level data, individual patient data, meta-analysis
Procedia PDF Downloads 375
8889 A Study of Rapid Replication of Square-Microlens Structures
Authors: Ting-Ting Wen, Jung-Ruey Tsai
Abstract:
This paper reports a method for the rapid replication of micro-scale structures. By using an electromagnetic-force-assisted imprinting system with a magnetic soft stamp bearing square-microlens cavities, photopolymer square-microlens structures can be rapidly fabricated. Under proper processing conditions, polymeric square-microlens structures with feature sizes of 100.3 µm in width and 15.2 µm in height can be successfully fabricated across a large area. Scanning electron microscopy (SEM) and surface profiler observations confirm that the micro-scale polymer structures are produced without defects or distortion and with good pattern fidelity over a 60 x 60 mm² area. This technique shows great potential for the efficient replication of micro-scale structure arrays at room temperature, with high productivity and low cost.
Keywords: square-microlens structures, electromagnetic force-assisted imprinting, magnetic soft stamp
Procedia PDF Downloads 334
8888 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data; examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting such inherently non-normal behavior is considered; this distribution has tails fatter than the normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. It is therefore preferable to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form; hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates, and even in small samples they are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed, and hence the well-known least square method is considered suitable and preferred for making the relevant statistical inferences; however, a number of empirical studies have shown that non-normal errors are more prevalent, and even transformation and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors following a non-normal pattern are studied. Through extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of size and power; the tests based on modified maximum likelihood estimates are found to be substantially more powerful than those based on least square estimates. Several examples are provided from the areas of Economics and Finance, where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
Procedia PDF Downloads 397
8887 Income-Consumption Relationships in Pakistan (1980-2011): A Cointegration Approach
Authors: Himayatullah Khan, Alena Fedorova
Abstract:
The present paper analyses the income-consumption relationship in Pakistan using annual time series data from 1980-81 to 2010-11. The paper uses the Augmented Dickey-Fuller test to check for unit roots and stationarity in the two series and finds that both series are nonstationary in levels but stationary at their first differences. The Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson test imply that the consumption and income series are cointegrated and that the long-run marginal propensity to consume (MPC) is 0.88, as given by the estimated (static) equilibrium relation. The paper also uses an error correction mechanism (ECM) to model the dynamic relationship; the purpose of the ECM is to indicate the speed of adjustment from the short-run equilibrium to the long-run equilibrium state. The results show that the short-run MPC is equal to 0.93 and is highly significant. The coefficient of the Engle-Granger residuals is negative but insignificant; statistically, the equilibrium error term is zero, which suggests that consumption adjusts to changes in GDP within the same period. Short-run changes in GDP have a positive impact on short-run changes in consumption. The pairwise Granger causality test shows that GDP and consumption Granger-cause each other.
Keywords: cointegrating regression, Augmented Dickey-Fuller test, Augmented Engle-Granger test, Granger causality, error correction mechanism
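The Engle-Granger/ECM procedure can be sketched in a few lines of Python with statsmodels; the synthetic series below merely stand in for the paper's Pakistani GDP and consumption data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)

# synthetic stand-ins for income and consumption, cointegrated by construction
gdp = np.cumsum(rng.normal(0.03, 0.02, 31))   # I(1) income series
cons = 0.9 * gdp + rng.normal(0, 0.01, 31)    # long-run relation plus noise

# Step 1: unit-root checks (p > 0.05 suggests nonstationarity)
print("ADF p-value, GDP levels:", adfuller(gdp)[1])
print("ADF p-value, GDP diffs :", adfuller(np.diff(gdp))[1])

# Step 2: static cointegrating regression gives the long-run MPC
longrun = sm.OLS(cons, sm.add_constant(gdp)).fit()
resid = longrun.resid
print("long-run MPC:", longrun.params[1])

# Step 3: ECM regresses d(cons) on d(gdp) and the lagged equilibrium error
d_cons, d_gdp = np.diff(cons), np.diff(gdp)
X = sm.add_constant(np.column_stack([d_gdp, resid[:-1]]))
ecm = sm.OLS(d_cons, X).fit()
print("short-run MPC:", ecm.params[1], "adjustment coeff:", ecm.params[2])
```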
Procedia PDF Downloads 414
8886 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms
Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri
Abstract:
Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capabilities. WSNs may suffer multiple node failures when exposed to harsh environments such as military zones or disaster locations, losing connectivity by becoming partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power, and a longer transmission range, which makes it desirable to use a minimum number of them. This paper addresses the problem of RN placement in a network of multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points at which RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs while determining their locations. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.
Keywords: connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks
Procedia PDF Downloads 240
8885 On the Basis Number and the Minimum Cycle Bases of the Wreath Product of Paths with Wheels
Authors: M. M. M. Jaradat
Abstract:
For a given graph G, the set Ԑ of all subsets of E(G) forms an |E(G)|-dimensional vector space over Z2 with vector addition X⊕Y = (X\Y) ∪ (Y\X) and scalar multiplication 1.X = X and 0.X = Ø for all X, Y ∈ Ԑ. The cycle space, C(G), of a graph G is the vector subspace of (Ԑ, ⊕, .) spanned by the cycles of G. Traditionally there have been two notions of minimality among bases of C(G). First, a basis B of C(G) is called d-fold if each edge of G occurs in at most d cycles of the basis B. The basis number, b(G), of G is the least non-negative integer d such that C(G) has a d-fold basis; a required basis of C(G) is a basis for which each edge of G belongs to at most b(G) elements of B. Second, a basis B is called a minimum cycle basis (MCB) if its total length Σ_{B∈B} |B| is minimum among all bases of C(G). The wreath product GρH has the vertex set V(GρH) = V(G) × V(H) and the edge set E(GρH) = {(u1, v1)(u2, v2) | u1 = u2 and v1v2 ∈ E(H); or u1u2 ∈ E(G) and there is α ∈ Aut(H) such that α(v1) = v2}. In this work, a construction of a minimum cycle basis for the wreath product of wheels with paths is presented, the length of the longest cycle of a minimum cycle basis is determined, and the basis number of the same product is investigated.
Keywords: cycle space, minimum cycle basis, basis number, wreath product
Procedia PDF Downloads 280
8884 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition
Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi
Abstract:
In this paper, a model order reduction method is used to approximate linear and nonlinear behavior in experimental data. The method can be used to obtain an offline reduced-order model that approximates the experimental data, reproduces the data for a given system order, and matches the experimental data at certain frequency ratios. In this study, the method is compared across different experimental data sets, and the influence of the chosen reduction order on achieving a sufficient matching condition for following the data is investigated in terms of the imaginary and real parts of the frequency response curve. Finally, the effect of the reduction order, an important parameter, on nonlinear experimental data is explained further.
Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data
Procedia PDF Downloads 402
8883 Application of Artificial Neural Network for Prediction of High Tensile Steel Strands in Post-Tensioned Slabs
Authors: Gaurav Sancheti
Abstract:
This study presents an application of Artificial Neural Networks (ANNs) to determining the quantity of High Tensile Steel (HTS) strands required in post-tensioned (PT) slabs. Various PT slab configurations were generated by varying the span and depth of the slab, and for each configuration the quantity of required HTS strands was recorded. ANNs with a backpropagation algorithm and varying architectures were developed, and their performance was evaluated in terms of the Mean Square Error (MSE). The recorded data for the quantity of HTS strands were used as a feeder database for training the developed ANNs, and the networks were validated using various validation techniques. The results show that the proposed ANNs have great potential, with good prediction and generalization capability.
Keywords: artificial neural networks, back propagation, conceptual design, high tensile steel strands, post tensioned slabs, validation techniques
Procedia PDF Downloads 221
8882 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV), with the target location and the impact angle given as constraints. The MaRV uses an explicit guidance law called vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3-DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that yield a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met; moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent, the other being related to it through a simple straight-line expression. Furthermore, to reduce the computational burden of finding the optimal values of the two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed, and its derivation from the vector guidance law is presented in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
Procedia PDF Downloads 427
8881 A Study on the Influence of Pin-Hole Position Error of Carrier on Mesh Load and Planet Load Sharing of Planetary Gear
Authors: Kyung Min Kang, Peng Mou, Dong Xiang, Gang Shen
Abstract:
For a planetary gear system, planet pin-hole position accuracy is one of the most influential factors in the efficiency and reliability of the system. This study considers the planet pin-hole position error as the main input error and builds a multibody dynamic simulation model of the planetary gear, including the pin-hole position error, using MSC.ADAMS. From this model, the mesh load between meshing gears is obtained for each pin-hole position error case, and based on these results, the planet load sharing factor, which reflects the equilibrium of mesh load sharing among all meshing gear pairs, is calculated. The analysis indicates that pin-hole position error in the tangential direction has a profound influence on the mesh load and the load sharing factor between meshing gear pairs.
Keywords: planetary gear, load sharing factor, multibody dynamics, pin-hole position error
Procedia PDF Downloads 578
8880 Multiple Relaxation Times in the Gibbs Ensemble Monte Carlo Simulation of Phase Separation
Authors: Bina Kumari, Subir K. Sarkar, Pradipta Bandyopadhyay
Abstract:
The autocorrelation function of the density fluctuation is studied in each of the two phases in a Gibbs Ensemble Monte Carlo (GEMC) simulation of phase separation for a square-well potential with various values of its range. We find that the normalized autocorrelation function is described very well as a linear combination of an exponential function with a time scale τ₂ and a stretched exponential function with a time scale τ₁ and an exponent α. The dependence of (α, τ₁, τ₂) on the parameters of the GEMC algorithm and on the range of the square-well potential is investigated and interpreted. We also analyse how to choose the parameters of the GEMC simulation optimally.
Keywords: autocorrelation function, density fluctuation, GEMC, simulation
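A sketch of fitting the stated functional form, A·exp(-t/τ₂) + (1-A)·exp(-(t/τ₁)^α), to an autocorrelation curve is shown below; the synthetic data and starting values are assumptions, not the paper's simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau2, tau1, alpha):
    """Linear combination of an exponential (time scale tau2) and a
    stretched exponential (time scale tau1, exponent alpha)."""
    return a * np.exp(-t / tau2) + (1 - a) * np.exp(-(t / tau1) ** alpha)

# synthetic normalized autocorrelation data standing in for GEMC output
t = np.linspace(0.1, 50, 200)
true = model(t, 0.4, 2.0, 12.0, 0.6)
data = true + 0.01 * np.random.default_rng(0).normal(size=t.size)

popt, pcov = curve_fit(model, t, data, p0=[0.5, 1.0, 10.0, 0.7],
                       bounds=([0, 0, 0, 0.1], [1, 100, 100, 1.0]))
print("A, tau2, tau1, alpha =", popt)
```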
Procedia PDF Downloads 186
8879 Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks
Authors: Heeba A. Gurku
Abstract:
Introduction: Cone beam CT (CBCT) images play an integral part in the proper positioning of cancer patients undergoing radiation therapy, but these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA): 1) a lung cancer dataset of 20 patients (with full-view CBCT images) and 2) a pancreatic cancer dataset of 40 patients (only the 27 patients with limited-view images were included in the study). The Cycle Generative Adversarial Network (CycleGAN) and its variant, the Attention-Guided Generative Adversarial Network (AGGAN), were used to generate the synthetic CTs. The models were evaluated visually and on four metrics, the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), comparing the synthetic CT with the original CT images. Results: For the pancreatic dataset with limited-view CBCT images, the study showed that with the CycleGAN model, MAE, RMSE, and PSNR improved from 12.57 to 8.49, from 20.94 to 15.29, and from 21.85 to 24.63, respectively, while the structural similarity increased only marginally, from 0.78 to 0.79. Similar results were achieved with AGGAN, with no improvement over CycleGAN. However, for the lung dataset with full-view CBCT images, CycleGAN reduced the MAE significantly, from 89.44 to 15.11, and AGGAN reduced it to 19.77. Similarly, the RMSE decreased from 92.68 to 23.50 with CycleGAN and to 29.02 with AGGAN. SSIM and PSNR also improved significantly, from 0.17 to 0.59 and from 8.81 to 21.06 with CycleGAN, respectively, while with AGGAN the SSIM increased to 0.52 and the PSNR to 19.31. In both datasets, the GAN models reduced artifacts and noise and delivered better resolution and contrast enhancement. Conclusion and Recommendation: Both CycleGAN and AGGAN significantly reduced the MAE and RMSE and improved the PSNR in both datasets. However, the full-view lung dataset showed more improvement in SSIM and image quality than the limited-view pancreatic dataset.
Keywords: CT images, CBCT images, CycleGAN, AGGAN
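Three of the four metrics are direct pixel-wise computations; a minimal NumPy sketch is given below, with random arrays standing in for the CT and synthetic CT volumes. SSIM additionally needs local-window statistics, for which a library routine such as skimage.metrics.structural_similarity is typically used.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return np.mean(np.abs(a - b))

def rmse(a, b):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((a - b) ** 2))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    return 20 * np.log10(data_range) - 10 * np.log10(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
ct = rng.uniform(0, 1, (256, 256))         # stand-in for the reference CT
sct = ct + rng.normal(0, 0.02, ct.shape)   # stand-in for the synthetic CT
print(mae(ct, sct), rmse(ct, sct), psnr(ct, sct, data_range=1.0))
```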
Procedia PDF Downloads 83
8878 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, and it has found increasing application in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, the stereo matching, and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors, while the stereo-matching errors depend on the speckle quality and the matching algorithm and can only be controlled within a limited range. The distortion is non-linear, particularly in a complex imaging acquisition system, so distortion correction must be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in complex imaging acquisition systems involving microscopes and other complex lenses, and the errors of the distortion correction propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortion. This approach is suitable for any image acquisition distortion model; it is used as a prior process to convert a distorted coordinate to its ideal position, which enables the camera to conform to the pinhole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D
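A sketch of the core idea, fitting 2D spline mapping functions from distorted to ideal coordinates on a calibration grid, is given below using SciPy's bivariate smoothing splines; the synthetic radial distortion stands in for measured calibration data and is not the authors' model.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# calibration grid: ideal (pinhole) target points...
xi, yi = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
xi, yi = xi.ravel(), yi.ravel()

# ...and where the real (distorted) system actually images them;
# here a synthetic radial distortion stands in for measured data
r2 = xi ** 2 + yi ** 2
xd = xi * (1 + 0.08 * r2)
yd = yi * (1 + 0.08 * r2)

# fit 2D B-spline mapping functions distorted -> ideal, one per axis
fx = SmoothBivariateSpline(xd, yd, xi, kx=3, ky=3, s=1e-6)
fy = SmoothBivariateSpline(xd, yd, yi, kx=3, ky=3, s=1e-6)

# undistort an arbitrary measured point before triangulation
px, py = 0.42, -0.31
print(fx.ev(px, py), fy.ev(px, py))
```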
Procedia PDF Downloads 498
8877 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a pair of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effective these data representations are at reducing the cost of the correct correspondence relative to other possible matches.
Keywords: colour data, local stereo matching, stereo correspondence, disparity map
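A minimal example of the local-matching pipeline this cost function sits in: sum-of-absolute-differences (SAD) aggregation over a window, followed by winner-take-all disparity selection. The grayscale intensity representation and toy images are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp, win=5):
    """Winner-take-all local stereo matching: for each pixel, pick the
    disparity whose window-aggregated absolute difference is lowest."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # right image shifted by d pixels; valid region only
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # box filter acts as the SAD aggregation window
        cost[d, :, d:] = uniform_filter(diff, size=win)
    return cost.argmin(axis=0)

rng = np.random.default_rng(0)
right = rng.uniform(0, 255, (60, 80))
left = np.roll(right, 4, axis=1)  # synthetic horizontal shift of 4 pixels
print(np.median(sad_disparity(left, right, max_disp=8)))  # approx. 4
```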
Procedia PDF Downloads 370
8876 The Effects of Root Zone Supply of Aluminium on Vegetative Growth of 15 Groundnut Cultivars Grown in Solution Culture
Authors: Mosima M. Mabitsela
Abstract:
Groundnut is preferably grown on light-textured soils, most of which tend to be highly weathered and characterized by high soil acidity and low nutrient status. One major soil factor associated with the infertility of acidic soils that can depress groundnut yield is aluminium (Al) toxicity. In plants, Al toxicity damages root cells, leading to inhibition of root growth as a result of the suppression of cell division, cell elongation, and cell expansion in the apical meristem cells of the root. The end result is that roots become stunted and brittle, root hair development is poor, and the root apices become swollen. This study was conducted to determine the effects of Al toxicity on a range of groundnut varieties. Fifteen cultivars were tested with incremental Al supply in an ebb-and-flow solution culture laid out in a randomized complete block design. There were six Al treatments: 0 µM, 1 µM, 5.7 µM, 14.14 µM, 53.18 µM, and 200 µM. At 1 µM there was no inhibitory effect on the growth of groundnut; inhibition became noticeable from 5.7 µM to 200 µM, with the severest effect of Al stress observed at 200 µM. The cultivars varied in their response to Al supply in solution culture. Groundnuts are one of the most important food crops in the world, and their supply is declining because the light-textured soils that they thrive on are acidic and easily solubilize Al into its toxic form. Consequently, there is a need to develop groundnut cultivars with high tolerance to soil acidity.
Keywords: aluminium toxicity, cultivars, reduction, root growth
Procedia PDF Downloads 152
8875 Cross Matching: An Improved Method to Obtain Comprehensive and Consolidated Evidence
Authors: Tuula Heinonen, Wilhelm Gaus
Abstract:
At present, safety assessment starts with animal tests, although their predictivity is often poor. Even after extended human use, experimental data are often judged to be the core information for risk assessment. However, the best opportunity to generate true evidence is to match all available information. The cross-matching methodology combines the different fields of knowledge and types of data (e.g. in-vitro and in-vivo experiments, clinical observations, clinical and epidemiological studies, and daily-life observations) and gives adequate weight to individual findings. To achieve a consolidated outcome, the information from all available sources is analysed and compared. If single pieces of information fit together, a clear picture becomes visible; if pieces of information are inconsistent or contradictory, careful consideration is necessary. 'Cross' can be understood as 'orthogonal' in geometry or as 'independent' in mathematics. Results coming from different sources are independent and therefore contribute new information; independent information contributes more to the evidence than results coming repeatedly from the same source. A successful example of cross matching is the assessment of Ginkgo biloba, where we were able to reach the conclusive result that Ginkgo biloba leaf extract is well tolerated and safe for humans.
Keywords: cross-matching, human use, safety assessment, Ginkgo biloba leaf extract
Procedia PDF Downloads 286
8874 Predicting Root Cause of a Fire Incident through Transient Simulation
Authors: Mira Ezora Zainal Abidin, Siti Fauzuna Othman, Zalina Harun, M. Hafiz M. Pikri
Abstract:
In a fire incident involving a nitrogen storage tank that over-pressured and exploded, resulting in a fire in one of the units of a refinery, a lack of data and evidence hampered the investigation to determine the root cause. Instrumentation and fittings were destroyed in the fire. To make matters worse, the incident occurred during the COVID-19 pandemic, delaying the collection and testing of evidence. In addition, the storage tank belonged to a third-party company, which required a legal agreement before the refinery could obtain approval to test the remains. Despite all this, the investigation had to be carried out, with stakeholders demanding answers. The investigation team had to devise alternative means to support what little evidence emerged for the most probable root cause. International standards, practices, and previous incidents involving similar tanks were consulted. To narrow eight possible causes down to a single root cause, transient simulations were conducted to reproduce the overpressure scenarios, eliminating the other causes and leaving one root cause. This paper shares the methodology used and details how transient simulations were applied to solve the case. The experience and lessons learned from the event investigation, and from numerous case studies using transient analysis to find the root cause of the accident, lead to the formulation of future mitigations and design modifications aimed at preventing such incidents or at least minimizing the consequences of a fire.
Keywords: fire, transient, simulation, relief
Procedia PDF Downloads 95