Search results for: global generalized minimum residual (Gl-GMRES).
1438 A Linearization and Decomposition Based Approach to Minimize the Non-Productive Time in Transfer Lines
Authors: Hany Osman, M. F. Baki
Abstract:
We address the balancing problem of transfer lines in this paper to find the optimal line balancing that minimizes the non-productive time. We focus on the tool change time and the face orientation change time, both of which influence the makespan. We consider machine capacity limitations and technological constraints associated with the manufacturing process of auto cylinder heads. The problem is represented by a mixed integer programming model that aims at distributing the design features to workstations and sequencing the machining processes at minimum non-productive time. The proposed model is solved by an algorithm established using linearization schemes and a Benders decomposition approach. The experiments show the efficiency of the algorithm in reaching the exact solution of small and medium problem instances in reasonable time.
Keywords: Transfer line balancing, Benders' decomposition, Linearization.
1437 LAYMOD: A Layered and Modular Platform for CAx Collaboration Management and Supporting Product Data Integration Based on the STEP Standard
Authors: Omid F. Valilai, Mahmoud Houshmand
Abstract:
Nowadays companies strive to survive in a competitive global environment. To speed up product development and modifications, it is suggested to adopt a collaborative product development approach. However, despite the advantages of new IT improvements, many CAx systems still work separately and locally. Collaborative design and manufacture requires a product information model that supports the related CAx product data models. Many solutions have been proposed to this problem, the most successful of which is adopting the STEP standard as a product data model to develop a collaborative CAx platform. However, several issues slow down the implementation of the STEP standard in collaborative data exchange, management and integration: the evolution of the STEP Application Protocols (APs) over time, the huge number of STEP APs and CCs, the high costs of implementation, the costly process of converting older CAx software files to the STEP neutral file format, and the lack of STEP knowledge. In this paper the requirements for a successful collaborative CAx system are discussed. The STEP standard's capability for product data integration and its shortcomings, as well as the dominant platforms for supporting CAx collaboration management and product data integration, are reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a collaborative CAx environment and to integrate the product data. LAYMOD is a layered platform that enables global collaboration among different CAx software packages and developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages and to overcome the STEP standard's limitations. The architecture and procedures of the LAYMOD platform for managing collaboration and avoiding contradictions in product data integration are introduced.
Keywords: CAx, Collaboration management, STEP application modules, STEP standard, XML data structures
1436 sEMG Interface Design for Locomotion Identification
Authors: Rohit Gupta, Ravinder Agarwal
Abstract:
The surface electromyographic (sEMG) signal has the potential to identify human activities and intention. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. The paper deals with the development of a cost-efficient multichannel sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit for ECG applications. The sEMG signal is recorded from two lower-limb muscles for three locomotion modes: Plane Walk (PW), Stair Ascending (SA) and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared to the other existing feature vectors. The SVM classifier is found to be the best-performing classifier among those compared, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
Keywords: Classifiers, feature selection, locomotion, sEMG.
1435 Studies on Distortion of Dissimilar Thin Sheet Weld Joints Using Laser Beam Welding
Authors: K. Kalaiselvan, A. Elango
Abstract:
Laser beam welding is attempted to achieve reliable welds with minimum distortion for the fabrication of components in the aerospace industry. Laser welding can provide a significant benefit for the welding of titanium and aluminium thin sheet alloys owing to its precision and rapid processing capability. For laser welding, pulse shape, energy, duration, repetition rate and peak power are the most important parameters that directly influence the quality of welds. In this experimental work, 1 mm thick Ti6Al4V and AA2024 alloy sheets are joined using a JK600 Nd:YAG pulsed laser unit. The distortions of the titanium and aluminium thin sheet alloys at different welding powers and speeds are investigated. Test results reveal that an increase in welding speed increases distortion in the weldment.
Keywords: Laser Beam Welding, Titanium, Aluminium alloy sheets and distortion.
1434 SMCC: Self-Managing Congestion Control Algorithm
Authors: Sh. Jamali, A. Eftekhari
Abstract:
Transmission Control Protocol (TCP) Vegas detects network congestion at an early stage and successfully prevents the periodic packet loss that usually occurs in TCP Reno. It has been demonstrated that TCP Vegas outperforms TCP Reno in many respects. However, TCP Vegas suffers from several problems that affect its congestion avoidance mechanism. One of the most important weaknesses of TCP Vegas is that alpha and beta depend on a good expected-throughput estimate, which, as we have seen, depends on a good minimum RTT estimate. In order to make the system more robust, alpha and beta must be made responsive to network conditions (they are currently chosen statically). This paper proposes a modified Vegas algorithm that can be adjusted to give good performance compared to other transmission control protocols (TCPs). To do this, we use the PSO algorithm to tune alpha and beta. The simulation results validate the advantages of the proposed algorithm in terms of performance.
Keywords: Self-managing, Congestion control, TCP.
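To illustrate the tuning step, a minimal particle swarm optimization sketch in Python is given below; the fitness function, the (alpha, beta) bounds and the PSO constants (w, c1, c2) are illustrative assumptions rather than the authors' settings, and in practice a network simulation would supply the fitness value for each candidate pair.

import random

def pso_tune_vegas(fitness, bounds=((1.0, 5.0), (1.0, 8.0)),
                   n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    # Generic PSO over a 2-D (alpha, beta) vector; fitness is maximized.
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Toy fitness surface standing in for a simulated-throughput measurement.
best, score = pso_tune_vegas(lambda p: -(p[0] - 2) ** 2 - (p[1] - 4) ** 2)
print(best, score)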
1433 Trust Building Mechanisms for Electronic Business Networks and Their Relation to e-Skills
Authors: Radoslav Delina, Michal Tkáč
Abstract:
Globalization, supported by information and communication technologies, changes the rules of competitiveness and increases the significance of information, knowledge and network cooperation. In line with this trend, the need for efficient trust-building tools has emerged. The absence of trust-building mechanisms and strategies has been identified in several studies. Through trust development, participation in e-business networks and usage of network services will increase and provide SMEs with new economic benefits. This work is focused on the development of effective trust-building strategies for electronic business network platforms. Based on the identification of trust-building mechanisms, a questionnaire-based analysis of their significance and minimum level of requirements was conducted. In the paper, we confirm the dependency of trust on e-skills, which play a crucial role in achieving a higher level of trust in more sophisticated and complex trust-building ICT solutions.
Keywords: Correlation analysis, decision trees, e-marketplace, trust building
1432 Fuzzy C-Means Clustering Algorithm for Voltage Stability in Large Power Systems
Authors: Mohamad R. Khaldi, Christine S. Khoury, Guy M. Naim
Abstract:
The steady-state operation of maintaining voltage stability is done by switching various controllers scattered all over the power network. When a contingency occurs, whether forced or unforced, the dispatcher has to alleviate the problem in minimum time, cost and effort. A persistent problem may lead to a blackout. The dispatcher has to apply the appropriate switching of controllers in terms of type, location and size to remove the contingency and maintain voltage stability. Wrong switching may worsen the problem, which may also lead to a blackout. This work proposes and uses Fuzzy C-Means Clustering (FCMC) to assist the dispatcher in decision making. FCMC is used in static voltage stability to map a contingency instantaneously to a set of controllers, from which the types, locations and amount of switching are induced.
Keywords: Fuzzy logic, Power system control, Reactive power control, Voltage control
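As a reminder of the clustering machinery behind FCMC, the following is a minimal sketch of the standard fuzzy c-means updates on generic numeric data; the fuzzifier m, the number of clusters and the toy data are assumptions, and the mapping from contingencies to controller sets is not reproduced.

import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    # Alternate the membership and centroid updates of standard FCM.
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Toy usage: cluster random operating points described by four features.
centers, U = fuzzy_c_means(np.random.default_rng(1).random((200, 4)), c=3)
print(centers)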
1431 Integrated Simulation and Optimization for Carbon Capture and Storage System
Authors: Taekyoon Park, Seok Goo Lee, Sung Ho Kim, Ung Lee, Jong Min Lee, Chonghun Han
Abstract:
CO2 capture and storage/sequestration (CCS) is a key technology for addressing the global warming issue. This paper proposes an integrated model for the whole chain of CCS, from a power plant to a reservoir. The integrated model is further utilized to determine optimal operating conditions and study responses to various changes in input variables.
Keywords: CCS, Carbon Dioxide, Carbon Capture and Storage, Simulation, Optimization.
1430 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand
Authors: Chukiat Chaiboonsri, Satawat Wannapan
Abstract:
This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables statistically analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping (MEboot) approach, based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, the tourism leakages were investigated by a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed from the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.
Keywords: Thailand tourism, maximum entropy bootstrapping approach, macroeconomic model, asymmetric information.
1429 Biomechanical Analysis of the Basic Classical Dance Jump – The Grand Jeté
Authors: M. Kalichová
Abstract:
The aim of this study was to analyse the most important parameters determining the quality of the motion structure of the basic classical dance jump, the grand jeté. The research sample consisted of 8 students of the Dance Conservatory in Brno. Using the Simi Motion system, we performed a 3D kinematic analysis of the jump. On the basis of the comparison of structure quality and the measured data of the grand jeté, we defined the optimal values of the relevant parameters determining the quality of the performance. The take-off speed should reach about 2.4 m·s⁻¹, and the optimum take-off angle is 28–30°. The take-off leg should swing backward at the beginning of the flight phase with a minimum speed of 3.3 m·s⁻¹. If the motor abilities of dancers reach the level necessary for optimal performance of a classical dance jump, there is room for a certain variability of the structure of the dance jump.
Keywords: biomechanical analysis, classical dance, grand jeté, jump
1428 A Hybrid Neural Network and Traditional Approach for Forecasting Lumpy Demand
Authors: A. Nasiri Pour, B. Rostami Tabar, A. Rahimzadeh
Abstract:
Accurate demand forecasting is one of the key issues in the inventory management of spare parts. The problem of modeling future consumption becomes especially difficult for lumpy patterns, which are characterized by intervals in which there is no demand and periods with actual demand occurrences with large variation in demand levels. Many forecasting methods may therefore perform poorly when demand for an item is lumpy. In this study, based on the characteristics of the lumpy demand patterns of spare parts, a hybrid forecasting approach has been developed which uses a multi-layered perceptron neural network and a traditional recursive method for forecasting future demands. In the described approach, the multi-layered perceptron is adapted to forecast the occurrences of non-zero demands, and then a conventional recursive method is used to estimate the quantity of the non-zero demands. In order to evaluate the performance of the proposed approach, its forecasts were compared to those obtained by using the Syntetos & Boylan approximation and the multi-layered perceptron, generalized regression and Elman recurrent neural networks recently employed in this area. The models were applied to forecast the future demand of spare parts of Arak Petrochemical Company in Iran, using 30 types of real data sets. The results indicate that the forecasts obtained by using our proposed model are superior to those obtained by using the other methods.
Keywords: Lumpy Demand, Neural Network, Forecasting, Hybrid Approach.
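A minimal sketch of the two-stage idea follows, assuming scikit-learn's MLPClassifier stands in for the multi-layered perceptron and simple exponential smoothing stands in for the traditional recursive method; the lag count, hidden-layer size and smoothing constant are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def hybrid_lumpy_forecast(demand, lags=4, alpha=0.2):
    # Stage 1: an MLP predicts whether the next period has any demand at all.
    # Stage 2: exponential smoothing of past non-zero sizes estimates how much.
    demand = np.asarray(demand, dtype=float)
    occ = (demand > 0).astype(int)
    X = np.array([occ[i - lags:i] for i in range(lags, len(occ))])
    y = occ[lags:]
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    p_next = clf.predict_proba(occ[-lags:].reshape(1, -1))[0, 1]
    size = None
    for d in demand[demand > 0]:                   # smooth only the non-zero sizes
        size = d if size is None else alpha * d + (1 - alpha) * size
    return p_next * (size if size is not None else 0.0)

history = [0, 0, 5, 0, 0, 0, 7, 0, 3, 0, 0, 6, 0, 0, 4, 0]
print(hybrid_lumpy_forecast(history))              # expected demand for the next period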
1427 Reformulations of Big Bang-Big Crunch Algorithm for Discrete Structural Design Optimization
Authors: O. Hasançebi, S. Kazemzadeh Azad
Abstract:
In the present study the efficiency of the Big Bang-Big Crunch (BB-BC) algorithm is investigated in discrete structural design optimization. It is shown that a standard version of the BB-BC algorithm is sometimes unable to produce reasonable solutions to problems from discrete structural design optimization. Two reformulations of the algorithm, referred to as modified BB-BC (MBB-BC) and exponential BB-BC (EBB-BC), are introduced to enhance the capability of the standard algorithm in locating good solutions for steel truss and frame type structures, respectively. The performance of the proposed algorithms is evaluated and compared to the standard version as well as some other algorithms over several practical design examples. In these examples, steel structures are sized for minimum weight subject to stress, stability and displacement limitations according to the provisions of AISC-ASD.
Keywords: Structural optimization, discrete optimization, metaheuristics, big bang-big crunch (BB-BC) algorithm, design optimization of steel trusses and frames.
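For orientation, the following is a minimal sketch of a generic continuous Big Bang-Big Crunch loop (fitness-weighted centre of mass in the crunch phase, shrinking Gaussian scatter in the bang phase); the discrete sizing variables, the MBB-BC/EBB-BC reformulations and the AISC-ASD constraint handling of the paper are not reproduced.

import numpy as np

def big_bang_big_crunch(obj, bounds, pop=40, iters=100, alpha=1.0, seed=0):
    # Minimize obj over a box: collapse candidates to a weighted centre of mass,
    # then re-scatter them with Gaussian noise whose radius shrinks every cycle.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    best_x, best_f = None, np.inf
    for k in range(1, iters + 1):
        f = np.array([obj(x) for x in X])
        if f.min() < best_f:
            best_f, best_x = float(f.min()), X[f.argmin()].copy()
        w = 1.0 / (f - f.min() + 1e-9)             # better candidates get larger mass
        center = (w[:, None] * X).sum(axis=0) / w.sum()
        sigma = alpha * (hi - lo) / k              # search radius shrinks with k
        X = np.clip(center + sigma * rng.standard_normal((pop, len(lo))), lo, hi)
    return best_x, best_f

# Toy usage on a 5-variable sphere function standing in for a weight model.
x, fval = big_bang_big_crunch(lambda v: float(np.sum(v ** 2)), [(-5, 5)] * 5)
print(x, fval)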
1426 An Investigation of Direct and Indirect Geo-Referencing Techniques on the Accuracy of Points in Photogrammetry
Authors: F. Yildiz, S. Y. Oturanc
Abstract:
Advances in technology in the field of photogrammetry have replaced analog cameras with digital aerial cameras working with on-board GPS/IMU systems. In this system, while the position of the camera is determined with GPS, the camera rotations are determined by the IMU. Digital aerial cameras have been used for photogrammetric applications all around the world in the last ten years. In this way, photogrammetric work can use time effectively, reduce costs to a minimum level, and be carried out quickly and accurately. Geo-referencing techniques, which are the cornerstone of GPS/INS systems, bring flexibility to the photogrammetric triangulation of images required for adjustment (interior and exterior orientation). The geo-referencing process also helps to reduce the number of ground control points needed in photogrammetric applications. In this study, the effect of direct and indirect geo-referencing techniques on the accuracy of points was investigated in the production of photogrammetric mapping.
Keywords: Photogrammetry, GPS/IMU systems, Geo-referencing.
1425 MMSE-Based Beamforming for Chip-Interleaved CDMA in Aeronautical Mobile Radio Channel
Authors: Sherif K. El Dyasti, Esam A. Hagras, Adel E. El-Hennawy
Abstract:
This paper addresses the performance of antenna array beamforming for a Chip-Interleaved Code Division Multiple Access (CI-CDMA) system based on a Minimum Mean Square Error (MMSE) detector in an aeronautical mobile radio channel. Multipath fading, Doppler shifts caused by the speed of the aircraft, and Multiple Access Interference (MAI) are the most important factors that degrade the performance of aeronautical systems. In this paper we combine CI-CDMA with an antenna array to combat this fading and improve the bit error rate (BER) performance. We further evaluate the performance of the proposed system in the four standard scenarios of the aeronautical mobile radio channel.
Keywords: Aeronautical Channel, CI-CDMA, Beamforming.
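As background on the detector named in the title, the following sketch forms MMSE beamforming weights from a sample covariance matrix for a uniform linear array; the array size, angles, SNR and signal model are illustrative assumptions and are not tied to the CI-CDMA waveform or the aeronautical scenarios studied in the paper.

import numpy as np

def mmse_weights(snapshots, desired):
    # w = R^{-1} p, with R the sample covariance of the array snapshots and
    # p the cross-correlation between the snapshots and the desired signal.
    N = snapshots.shape[1]
    R = snapshots @ snapshots.conj().T / N
    p = snapshots @ desired.conj() / N
    return np.linalg.solve(R, p)

rng = np.random.default_rng(1)
M, N = 8, 2000                                     # 8-element ULA, 2000 snapshots
steer = lambda deg: np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(deg)))
s = (rng.integers(0, 2, N) * 2 - 1).astype(complex)          # desired BPSK chips
i = (rng.integers(0, 2, N) * 2 - 1).astype(complex)          # interfering user
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = np.outer(steer(20), s) + np.outer(steer(-40), i) + 0.3 * noise
w = mmse_weights(X, s)
y = w.conj() @ X                                   # beamformer output
print("chips recovered:", float(np.mean(np.sign(y.real) == s.real)))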
1424 Blow-up in Polynomial Differential Equations
Authors: Rudolf Csikja, Janos Toth
Abstract:
Methods to detect and localize time singularities of polynomial and quasi-polynomial ordinary differential equations are systematically presented and developed. They are applied to examples taken from different fields of application, and they are also compared to better-known methods such as those based on the existence of linear first integrals or Lyapunov functions.
Keywords: blow up, finite escape time, polynomial ODE, singularity, Lotka–Volterra equation, Painlevé analysis, Ψ-series, global existence
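A one-line example of the phenomenon studied here: the scalar quadratic equation already exhibits a finite escape time,
\[
\dot{x} = x^{2}, \quad x(0) = x_{0} > 0 \;\Longrightarrow\; x(t) = \frac{x_{0}}{1 - x_{0} t},
\]
so the solution ceases to exist at the finite time T = 1/x_0; the polynomial and quasi-polynomial systems treated in the paper generalize this behaviour, and the methods above aim to detect and localize such singularities without an explicit solution.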
1423 An Electronic and Performance Test for the Applicants to Faculty of Education for Early Childhood in Egypt for Measuring the Skills of Teacher Students
Authors: Ahmed Amin Mousa, Gehan Azam
Abstract:
The current study presents an electronic test to measure teaching skills. This test is part of the admission system of the Faculty of Education for Early Childhood, Cairo University. The test has been prepared to evaluate university students who apply for admission to the Faculty. It measures some social and psychological skills which are important for successful teachers, such as emotional adjustment and problem solving, as well as the extent of their love for children and their capability to interact with them. The test has been approved by 13 experts. Finally, it was administered to 1,100 students during the admission process of the academic year 2016/2017. The results showed that most of the applicants have an auditory learning style. In addition, 97% of them have the minimum skills required for teaching children.
Keywords: Electronic test, early childhood, skills, teacher student.
1422 Discrete Breeding Swarm for Cost Minimization of Parallel Job Shop Scheduling Problem
Authors: Tarek Aboueldah, Hanan Farag
Abstract:
The Parallel Job Shop Scheduling Problem (JSSP) is a multi-objective, multi-constraint NP optimization problem. Traditional Artificial Intelligence techniques have been widely used; however, they can get trapped in a local minimum without reaching the optimum solution. Thus, we propose a hybrid Artificial Intelligence (AI) model in which a Discrete Breeding Swarm (DBS) is added to traditional AI to avoid this trapping. The model is applied to cost minimization of the Car Sequencing and Operator Allocation (CSOA) problem. Practical experiments show that our model outperforms other techniques in cost minimization.
Keywords: Parallel Job Shop Scheduling Problem, Artificial Intelligence, Discrete Breeding Swarm, Car Sequencing and Operator Allocation, cost minimization.
1421 An Improved Method to Compute Sparse Graphs for the Traveling Salesman Problem
Authors: Y. Wang
Abstract:
The Traveling Salesman Problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those for the TSP on the corresponding complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances. Moreover, the maximum and minimum degrees of the vertices in the sparse graphs do not differ much. Thus, the computation time of methods that resolve the TSP on these sparse graphs will be greatly reduced.
Keywords: Frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem.
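The frequency-quadrilateral computation itself is not reproduced here; the sketch below only illustrates, on assumed frequency data, a pruning step that keeps the highest-frequency edges at each vertex so that the resulting graph stays sparse (on the order of a few n edges), which is the kind of sparse graph the observation about 5n edges refers to.

import itertools, random

def prune_by_frequency(n, freq, k=5):
    # Keep, for each vertex, its k highest-frequency incident edges.
    # freq maps an edge (i, j) with i < j to a score, assumed to come from
    # the frequencies computed with frequency quadrilaterals.
    kept = set()
    for v in range(n):
        incident = [(min(v, u), max(v, u)) for u in range(n) if u != v]
        incident.sort(key=lambda e: freq[e], reverse=True)
        kept.update(incident[:k])
    return kept

n = 30
freq = {e: random.random() for e in itertools.combinations(range(n), 2)}
sparse_edges = prune_by_frequency(n, freq)
print(len(sparse_edges), "edges kept versus", n * (n - 1) // 2, "in the complete graph")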
1420 A Review of Coverage and Routing for Wireless Sensor Networks
Authors: Hamid Barati, Ali Movaghar, Ali Barati, Arash Azizi Mazreah
Abstract:
The special constraints of sensor networks impose a number of technical challenges for employing them. In this review, we study the issues and existing protocols in two areas: coverage and routing. We present two types of coverage problems: determining the minimum number of sensor nodes that need to perform active sensing in order to monitor a certain area, and deciding the quality of service that can be provided by a given sensor network. While most routing protocols in sensor networks are data-centric, there are other types of routing protocols as well, such as hierarchical, location-based, and QoS-aware. We describe and compare several protocols in each group. We present several multipath routing protocols and single-path routing protocols with local repair, which are proposed for recovering from sensor node crashes. We also discuss some transport layer schemes for reliable data transmission in lossy wireless channels.
Keywords: Sensor networks, Coverage, Routing, Robustness.
1419 Neural Network Implementation Using FPGA: Issues and Application
Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan
Abstract:
Hardware realization of a Neural Network (NN) depends, to a large extent, on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for the hardware implementation of neural networks. FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in the implementation of a multi-input neuron with linear/nonlinear excitation functions using an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers. The VHDL code developed is tested using a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table method is used. The problems involved in using a lookup table (LUT) for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates approximate estimation of the total resource requirement and achievable speed for a given multilayer neural network. This helps the designer to choose the FPGA capacity for a given application. Using the proposed method of implementation, a neural network based application, namely a space vector modulator for a vector-controlled drive, is presented.
Keywords: FPGA implementation, multi-input neuron, neural network, NN-based space vector modulator.
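To make the lookup-table discussion concrete, the following Python sketch models what a fixed-point, LUT-based sigmoid neuron computes; the word length, table size and quantization scheme are illustrative assumptions, not the VHDL design tested on the Xilinx device.

import math

FRAC_BITS = 8                       # assumed fixed-point fractional word length
LUT_SIZE = 256                      # sigmoid samples stored for inputs in [-8, 8)
LUT = [round((1.0 / (1.0 + math.exp(-(-8 + 16 * i / LUT_SIZE)))) * (1 << FRAC_BITS))
       for i in range(LUT_SIZE)]

def to_fixed(x):
    return int(round(x * (1 << FRAC_BITS)))

def lut_sigmoid(acc_fixed):
    # Map a fixed-point accumulator value to a table index and read the output.
    x = acc_fixed / (1 << FRAC_BITS)
    idx = min(LUT_SIZE - 1, max(0, int((x + 8) * LUT_SIZE / 16)))
    return LUT[idx]

def neuron(inputs, weights):
    # Multiply-accumulate in fixed point, then a single LUT read for the activation.
    acc = sum(to_fixed(i) * to_fixed(w) for i, w in zip(inputs, weights))
    acc >>= FRAC_BITS               # rescale the products back to the base format
    return lut_sigmoid(acc) / (1 << FRAC_BITS)

print(neuron([0.5, -0.25, 1.0], [0.8, 0.3, -0.1]))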
1418 Optimal Parameters of Double Moving Average Control Chart
Authors: Y. Areepong
Abstract:
The objective of this paper is to present explicit analytical formulas for evaluating important characteristics of the Double Moving Average (DMA) control chart for the Poisson distribution. The most popular characteristics of a control chart are the Average Run Length (ARL0), the mean number of observations taken before the system signals out-of-control while it is actually still in-control, and the Average Delay time (ARL1), the mean delay of true alarm times. An important property required of ARL0 is that it should be sufficiently large when the process is in-control, to reduce the number of false alarms. On the other hand, if the process is actually out-of-control, then ARL1 should be as small as possible. In particular, the explicit analytical formulas for evaluating ARL0 and ARL1 make it possible to obtain a set of optimal parameters, which depend on the width of the moving average (w) and the width of the control limit (H), for designing a DMA chart with minimum ARL1.
Keywords: Optimal parameters, Average Run Length, Average Delay time, Double Moving Average chart.
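Since the paper's formulas are analytical, a small Monte Carlo sketch may help fix ideas about ARL0: the plotted statistic below is simply the moving average (width w) of the moving average of Poisson counts, and the control-limit form used in the usage line is an assumption, not the paper's design rule.

import numpy as np

def dma_arl0(lam, w, ucl, lcl, runs=2000, max_len=20000, seed=0):
    # Monte Carlo estimate of the in-control average run length of a DMA chart.
    rng = np.random.default_rng(seed)
    lengths = []
    for _ in range(runs):
        x = rng.poisson(lam, max_len).astype(float)
        ma = np.convolve(x, np.ones(w) / w, mode="valid")
        dma = np.convolve(ma, np.ones(w) / w, mode="valid")
        out = np.where((dma > ucl) | (dma < lcl))[0]
        lengths.append(out[0] + 2 * w - 1 if out.size else max_len)
    return float(np.mean(lengths))

# Illustrative limits only (lam +/- H*sqrt(lam)/w); the explicit formulas of the
# paper would be used to choose w and H so that ARL0 is large and ARL1 is small.
lam, w, H = 4.0, 5, 3.0
print(dma_arl0(lam, w, lam + H * np.sqrt(lam) / w, lam - H * np.sqrt(lam) / w))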
1417 Heavy Metal (Pb, Cu, Fe, and Zn) Levels in Shellfish (Etheria elliptica), Water and Sediments of River Ogbese, Ondo State, Nigeria
Authors: O. O. Olawusi-Peters, O. E. Aguda, F. O. Okoye
Abstract:
Investigations on the accumulation of heavy metals in the water and sediments of river Ogbese were carried out between December 2010 and February 2011 using an Atomic Absorption Spectrophotometer. Etheria elliptica, a sessile organism, was also used to determine the concentration of heavy metals in the aquatic environment. In water, Cu had the highest concentration (0.55–0.13 mg/l ± 0.1), while in sediments the highest value was obtained for Fe (1.46–3.89 mg/l ± 0.27). The minimum concentrations were recorded for Pb, which was below the detectable level. The results also revealed that the shell accumulates more heavy metals than the flesh of the mussel, with Cu in the shell exhibiting a negative correlation with all the metals in the flesh. However, the condition factor (K) value is 6.44, an indication of good health. The length-weight relationship is expressed as W = -0.48 × L^1.94 (r² = 0.29), showing the growth pattern to be negatively allometric.
Keywords: Condition factor, Etheria elliptica, heavy metals, River Ogbese.
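For reference, the two biometric quantities quoted above are conventionally computed as
\[
K = \frac{100\,W}{L^{3}}, \qquad \log_{10} W = \log_{10} a + b\,\log_{10} L,
\]
with W in g and L in cm; if the reported intercept of -0.48 is read as log10 a in the usual log-transformed fit, the relation becomes log10 W = -0.48 + 1.94 log10 L, and the exponent b = 1.94 < 3 is what makes the growth pattern negatively allometric.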
1416 Study of Rayleigh-Bénard-Brinkman Convection Using LTNE Model and Coupled, Real Ginzburg-Landau Equations
Authors: P. G. Siddheshwar, R. K. Vanishree, C. Kanchana
Abstract:
A local nonlinear stability analysis using an eight-mode expansion is performed to arrive at the coupled amplitude equations for Rayleigh-Bénard-Brinkman convection (RBBC) in the presence of LTNE effects. Streamlines and isotherms are obtained in the two-dimensional unsteady finite-amplitude convection regime. The influence of the parameters on heat transport is found to be more pronounced at small times than at long times. The results for Rayleigh-Bénard convection are obtained as a particular case of the present study. Additional modes are shown not to influence heat transport significantly, leading us to infer that five minimal modes are sufficient for a study of RBBC. The present problem, which uses rolls as the pattern of manifestation of instability, is a needed first step in the direction of making a very general non-local study of two-dimensional unsteady convection. The results may be useful in determining the preferred range of parameter values when making rheometric measurements in fluids to ascertain fluid properties such as viscosity. The results for LTE are obtained as a limiting case of the LTNE results obtained in the paper.
Keywords: Rayleigh-Bénard convection, heat transport, porous media, generalized Lorenz model, coupled Ginzburg-Landau model.
1415 Low-Complexity Channel Estimation Algorithm for MIMO-OFDM Systems
Authors: Ali Beydoun, Hamzé H. Alaeddine
Abstract:
One of the main challenges for a MIMO-OFDM system to achieve the expected performance, in terms of data rate and robustness against multipath fading channels, is channel estimation. Several methods have been proposed in the literature based on either least squares (LS) or minimum mean squared error (MMSE) estimators. These methods have high implementation complexity as they require the inversion of large matrices. In order to overcome this problem and reduce the complexity, this paper presents a solution that benefits from the use of the STBC encoder and transforms the channel estimation process into a set of simple linear operations. The proposed method is evaluated via simulation in an AWGN-Rayleigh fading channel. Simulation results show a maximum reduction of 6.85% in the bit error rate (BER) compared to the one obtained in the ideal case where the receiver has perfect knowledge of the channel.
Keywords: Channel estimation, MIMO, OFDM, STBC, CAZAC sequence.
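For context on the two baseline estimators mentioned, the sketch below compares per-subcarrier LS and MMSE pilot-based estimates for a single-antenna link; the pilot pattern, channel statistics and SNR are assumptions, and the STBC-based simplification proposed in the paper is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
K, snr_db = 64, 10                                 # subcarriers and pilot SNR (assumed)
snr = 10 ** (snr_db / 10)
h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)   # true channel
x = np.exp(1j * 2 * np.pi * rng.integers(0, 4, K) / 4)                    # QPSK pilots
n = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2 * snr)
y = h * x + n

h_ls = y / x                                       # least-squares estimate
h_mmse = (snr / (snr + 1)) * h_ls                  # per-subcarrier MMSE shrinkage,
                                                   # assuming unit channel power
for name, est in [("LS", h_ls), ("MMSE", h_mmse)]:
    print(name, "MSE:", float(np.mean(np.abs(est - h) ** 2)))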
1414 Design Optimization of Ferrocement-Laminated Plate Using Genetic Algorithm
Authors: M. Rokonuzzaman, Z. Gürdal
Abstract:
This paper describes the design optimization of a ferrocement-laminated plate made up of reinforcing steel wire mesh(es) and cement mortar. To improve the design process, the plate is modeled as a multi-layer medium, dividing the ferrocement plate into layers of mortar and ferrocement. The mortar layers are assumed to be isotropic in nature and the ferrocement layers are assumed to be orthotropic. The ferrocement layers are a little stiffer, but much costlier, than the mortar layers due to the presence of the steel wire mesh. The optimization is performed for the minimum-weight design of the laminate using a genetic algorithm. The optimum designs are discussed for different plate configurations and loadings, and they are compared with the worst designs obtained at the final generation. The paper provides a procedure to support designers in the decision-making process.
Keywords: Buckling, Ferrocement-Laminated Plate, Genetic Algorithm, Plate Theory.
1413 Improving the Effectiveness of Software Testing through Test Case Reduction
Authors: R. P. Mahapatra, Jitendra Singh
Abstract:
This paper proposes a new technique for improving the efficiency of software testing, based on a conventional attempt to reduce the test cases that have to be executed for any given software. The approach utilizes the advantage of regression testing, where fewer test cases lessen the time consumption of the testing as a whole. The technique also offers a means to perform test case generation automatically. Compared to one of the techniques in the literature, where the tester has no option but to perform the test case generation manually, the proposed technique provides a better option. As for the test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum and constant values). By doing this, the variable values are limited to a definite range, resulting in a smaller number of possible test cases to process. The technique can also be used in program loops and arrays.
Keywords: Software Testing, Test Case Generation, Test Case Reduction.
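A minimal sketch of the reduction idea, assuming numeric input variables with known ranges: each variable is represented only by its minimum, a constant mid-range value and its maximum, so the candidate test set shrinks to three values per variable; the function and variable names are illustrative.

from itertools import product

def reduced_test_cases(ranges):
    # ranges: dict mapping variable name -> (min, max).
    # Each variable contributes only {min, midpoint, max} instead of its full range.
    fixed = {name: (lo, (lo + hi) // 2, hi) for name, (lo, hi) in ranges.items()}
    names = list(fixed)
    return [dict(zip(names, combo)) for combo in product(*(fixed[n] for n in names))]

cases = reduced_test_cases({"age": (0, 120), "amount": (1, 10000)})
print(len(cases), "cases instead of", 121 * 10000)
for c in cases[:3]:
    print(c)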
1412 Batch-Oriented Setting Time Optimisation in an Aerodynamic Feeding System
Authors: Jan Busch, Maurice Schmidt, Peter Nyhuis
Abstract:
The change in conditions for production companies in high-wage countries is characterized by the globalization of competition and the transition from a supplier's to a buyer's market. Companies need to face the challenge of reacting flexibly to these changes. Due to the significant and increasing degree of automation, assembly has become the most expensive production process. Regarding the reduction of production cost, assembly consequently offers a considerable rationalization potential. Therefore, an aerodynamic feeding system has been developed at the Institute of Production Systems and Logistics (IFA), Leibniz Universitaet Hannover. This system has been enabled to adjust itself by using a genetic algorithm; the longer this genetic algorithm is executed, the better the feeding quality. In this paper, the relation between the system's setting time and the feeding quality is observed, and a function is presented which enables the user to achieve the minimum total feeding time.
Keywords: Aerodynamic feeding system, batch size, optimisation, setting time.
1411 Analysis of SEIG for a Wind Pumping Plant Using Induction Motor
Authors: A. Abbou, H. Mahmoudi, M. Akherraz
Abstract:
In contrast to conventional generators, self-excited induction generators are found to be the most suitable machines for wind energy conversion in remote and windy areas due to their many advantages over grid-connected machines. This paper presents a Self-Excited Induction Generator (SEIG) driven by a wind turbine and supplying an induction motor which is coupled to a centrifugal pump. A method to describe the steady-state performance based on nodal analysis is presented; it requires advance knowledge of the minimum excitation capacitance value. The effects of variation of the excitation capacitance on the system and on rotor speed under different loading conditions have been analyzed and used to optimize the induction motor pump performance.
Keywords: SEIG, Induction Motor, Centrifugal Pump, capacitance requirements, wind rotor speed.
1410 A Hybrid Approach for Color Image Quantization Using K-means and Firefly Algorithms
Authors: Parisut Jitpakdee, Pakinee Aimmanee, Bunyarit Uyyanonvara
Abstract:
Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the number of colors in an image with minimum distortion. Clustering is a widely used technique for color quantization; all colors in an image are grouped into small clusters. In this paper, we propose a new hybrid approach for color quantization using the firefly algorithm (FA) and the K-means algorithm. The firefly algorithm is a swarm-based algorithm that can be used for solving optimization problems. The proposed method can overcome the drawbacks of both algorithms, such as the local optima convergence problem in K-means and the early convergence of the firefly algorithm. Experiments on three commonly used images and the comparison results show that the proposed algorithm surpasses both the baseline K-means clustering technique and the original firefly algorithm.
Keywords: Clustering, Color quantization, Firefly algorithm, K-means.
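A compact sketch of the hybrid idea follows, assuming firefly positions encode candidate palettes and a few K-means (Lloyd) iterations refine the best one; the firefly constants (beta0, gamma, alpha), the palette size and the toy pixel data are illustrative assumptions.

import numpy as np

def quantization_mse(pixels, palette):
    d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))

def firefly_palettes(pixels, k=8, n_fireflies=10, iters=30,
                     beta0=1.0, gamma=1e-4, alpha=10.0, seed=0):
    # Fireflies are flattened k x 3 palettes; brighter means lower quantization MSE.
    rng = np.random.default_rng(seed)
    P = rng.uniform(0, 255, size=(n_fireflies, k * 3))
    for _ in range(iters):
        I = np.array([quantization_mse(pixels, p.reshape(k, 3)) for p in P])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if I[j] < I[i]:                    # j is brighter, so i moves toward j
                    r2 = np.sum((P[i] - P[j]) ** 2) / (k * 3)
                    beta = beta0 * np.exp(-gamma * r2)
                    P[i] += beta * (P[j] - P[i]) + alpha * (rng.random(k * 3) - 0.5)
        P = np.clip(P, 0, 255)
        alpha *= 0.97                              # gradually reduce the random step
    I = np.array([quantization_mse(pixels, p.reshape(k, 3)) for p in P])
    return P[I.argmin()].reshape(k, 3)

def kmeans_refine(pixels, palette, iters=10):
    # A few Lloyd iterations starting from the best firefly palette.
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(palette.shape[0]):
            if np.any(labels == c):
                palette[c] = pixels[labels == c].mean(axis=0)
    return palette

pixels = np.random.default_rng(1).uniform(0, 255, size=(2000, 3))   # stand-in RGB data
palette = kmeans_refine(pixels, firefly_palettes(pixels, k=8))
print("final MSE:", quantization_mse(pixels, palette))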
1409 Optimal Mitigation of Slopes by Probabilistic Methods
Authors: D. De-León-Escobedo, D. J. Delgado-Hernández, S. Pérez
Abstract:
A probabilistic formulation to assess slope safety under the hazard of strong storms is presented and illustrated through a slope in Mexico. The formulation is based on the classical safety factor (SF) used in practice to appraise slope stability, but the treatment of uncertainties is introduced, and the slope failure probability is calculated as the probability that SF < 1. As the main hazard is the rainfall on the area, statistics of rainfall intensity and duration are considered and modeled with an exponential distribution. The expected life-cycle cost is assessed by assigning a monetary value to the consequences of slope failure. Alternative mitigation measures are simulated, and the formulation is used to identify the measure leading to the optimum (minimum life-cycle cost). For the example, the optimal mitigation measure is a reduction of the slope inclination angle.
Keywords: Expected life-cycle cost, failure probability, slopes failure, storms.
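A minimal sketch of the probabilistic ingredients described above, assuming a hypothetical safety-factor model that decreases with rainfall intensity and slope angle; the exponential rainfall parameter, the cost figures and the SF form are illustrative assumptions, not the case-study values.

import numpy as np

rng = np.random.default_rng(0)

def safety_factor(intensity, angle_deg):
    # Hypothetical SF model: steeper slopes and heavier rain both lower SF.
    return 2.5 - 0.01 * angle_deg - 0.01 * intensity

def failure_probability(angle_deg, n=100000, mean_intensity=30.0):
    intensity = rng.exponential(mean_intensity, n)     # storm intensity, exponential
    return float(np.mean(safety_factor(intensity, angle_deg) < 1.0))

def expected_life_cycle_cost(angle_deg, mitigation_cost, failure_cost=5e6,
                             storms_per_year=10, years=50, rate=0.05):
    pf = failure_probability(angle_deg)
    annual = storms_per_year * pf * failure_cost       # expected annual failure cost
    present_value = annual * (1 - (1 + rate) ** -years) / rate
    return mitigation_cost + present_value

# Compare the do-nothing case with hypothetical regradings to flatter angles.
for angle, cost in [(45, 0.0), (35, 4e5), (30, 7e5)]:
    print(angle, "deg ->", round(expected_life_cycle_cost(angle, cost)))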