Search results for: Particle Swarm Optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2358

198 Improvement of the Reliability of the Industrial Electric Networks

Authors: M. Bouguerra, I. Habi

Abstract:

The continuity of the electric supply to electric installations is becoming one of the main requirements of the electric supply network (generation, transmission, and distribution of electric energy). Meeting this requirement depends on the one hand on the structure of the electric network and on the other hand on the availability of the reserve source provided to maintain the supply in case of failure of the principal one. The availability of supply does not depend only on the reliability parameters of both sources (principal and reserve); it also depends on the reliability of the circuit breaker which plays the role of interlocking the reserve source in case of failure of the principal one. In addition, while the principal source is under operation its monitoring can be ideal and sure; however, for the reserve source, which is at standstill, preventive maintenance carried out at given time intervals (periodicity) and for well defined lengths of time is envisaged, so that this source will always be available in case of failure of the principal source. The choice of the periodicity of preventive maintenance of the reserve source directly influences the reliability of the electric feeder system. In this work, on the basis of semi-Markovian processes, the influence of the time of interlocking the reserve source on the reliability of an industrial electric network is studied, and the optimal time of interlocking the reserve source in case of failure of the principal one is given; the influence of the periodicity of the preventive maintenance of the reserve source is also studied and the optimal periodicity is given.
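
As a rough illustration of the role the reserve source and the interlocking breaker play in supply availability, the sketch below computes a steady-state availability under strongly simplified assumptions (independent components with exponential failure and repair, a fixed switching probability for the breaker, and hypothetical MTTF/MTTR figures); it is not the semi-Markovian model developed in the paper.

```python
# Minimal availability sketch (not the paper's semi-Markov model): a principal
# source backed by a reserve that is switched in by a circuit breaker.
# Assumes independent components with exponential failure/repair, so each
# component's steady-state availability is MTTF / (MTTF + MTTR).

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single repairable component."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Illustrative (hypothetical) reliability data in hours.
A_principal = availability(mttf_hours=4000.0, mttr_hours=24.0)
A_reserve   = availability(mttf_hours=6000.0, mttr_hours=48.0)
A_breaker   = 0.995   # probability the interlocking breaker switches correctly

# Supply is maintained if the principal source is up, or if it is down but the
# breaker successfully interlocks a healthy reserve source.
A_system = A_principal + (1.0 - A_principal) * A_breaker * A_reserve

print(f"Principal: {A_principal:.5f}, Reserve: {A_reserve:.5f}")
print(f"System availability with reserve interlocking: {A_system:.5f}")
```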

Keywords: Semi-Markovian processes, reliability, optimization, industrial electric network.

197 A PIM (Processor-In-Memory) for Computer Graphics: Data Partitioning and Placement Schemes

Authors: Jae Chul Cha, Sandeep K. Gupta

Abstract:

The demand for higher performance graphics continues to grow because of the incessant desire towards realism. Rapid advances in fabrication technology have enabled us to build several processor cores on a single die, so it is important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two important tasks necessary for maximally exploiting the parallelism provided by the architecture, namely, partitioning and placement of graphic data, which affect load balance and communication costs, respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement which significantly reduce the search space. We also present heuristics for identifying near-optimal placement, since the search space for placement is impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches via analysis of example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overheads between our heuristics and the optimal was 1.05. Our uniform partitioning showed an average load-balance ratio of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.

Keywords: Data Partitioning and Placement, Graphics, PIM, Search Space Reduction.

196 Discovering Complex Regularities: from Tree to Semi-Lattice Classifications

Authors: A. Faro, D. Giordano, F. Maiorana

Abstract:

Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network, to perform unsupervised clustering and to support the entire data mining process up to results visualization. A graphical representation helps the user to find a strategy to optimize the classification by adding, moving or deleting a neuron in order to change the number of classes. The tool is able to automatically suggest a strategy to optimize the number of classes, and it also supports both tree classifications and semi-lattice organizations of the classes, giving users the possibility of passing from one class to the classes with which it has some aspects in common. Examples of using tree and semi-lattice classifications are given to illustrate advantages and problems. The tool is applied to classify macroeconomic data that report the imports and exports of the most developed countries. It is possible to classify the countries based on their economic behaviour and to use the tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation. Possible interrelationships between the classes and their meaning are also discussed.

Keywords: Unsupervised classification, Kohonen networks, macroeconomics, Visual data mining, Cluster interpretation.

195 Feature Based Dense Stereo Matching using Dynamic Programming and Color

Authors: Hajar Sadeghi, Payman Moallem, S. Amirhassn Monadjemi

Abstract:

This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extraction of some proper features, we use several matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. Also, a coarse-to-fine multiresolution strategy is used to decrease the search space and therefore increase accuracy and processing speed. The proposed method links the detected feature points into chains and compares some of the feature points from different chains to increase the matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, we use dynamic programming to obtain the dense disparity map. It differs from classical DP methods in stereo vision, since it employs the sparse disparity map obtained from the feature-based matching stage. The DP is then performed on a scan line, between any two matched feature points on that scan line. Thus our algorithm is truly an optimization method. Our algorithm offers a good trade-off in terms of accuracy and computational efficiency. Regarding the results of our experiments, the proposed algorithm increases the accuracy by 20 to 70% and reduces the running time of the algorithm by almost 70%.
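
As a minimal illustration of the scan-line dynamic programming idea (not the authors' feature-chain algorithm, and without the color, multiresolution, or feature-constraint machinery), the following sketch computes a disparity profile for a single scanline with an absolute-difference data term and a smoothness penalty:

```python
import numpy as np

def scanline_disparity(left_row, right_row, max_disp=16, smooth=2.0):
    """Dense disparity along one scanline via dynamic programming.
    Data term: absolute intensity difference between left pixel x and
    right pixel x - d.  Smoothness term: smooth * |d - d_prev|."""
    n = len(left_row)
    D = max_disp + 1
    cost = np.full((n, D), np.inf)
    back = np.zeros((n, D), dtype=int)

    # First pixel: only the data cost (disparities falling off the image are invalid).
    for d in range(D):
        if 0 - d >= 0:
            cost[0, d] = abs(float(left_row[0]) - float(right_row[0 - d]))

    # Forward pass.
    for x in range(1, n):
        for d in range(D):
            if x - d < 0:
                continue
            data = abs(float(left_row[x]) - float(right_row[x - d]))
            prev = cost[x - 1] + smooth * np.abs(np.arange(D) - d)
            best = int(np.argmin(prev))
            cost[x, d] = data + prev[best]
            back[x, d] = best

    # Backtrack the minimum-cost disparity path.
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(cost[-1]))
    for x in range(n - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp

# Toy example: a step edge shifted by 4 pixels between the two views.
left  = np.array([10] * 10 + [200] * 10, dtype=float)
right = np.array([10] * 6 + [200] * 14, dtype=float)
print(scanline_disparity(left, right, max_disp=8))
```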

Keywords: Chain Correspondence, Color Stereo Matching, Dynamic Programming, Epipolar Line, Stereo Vision.

194 Lean Production to Increase Reproducibility and Work Safety in the Laser Beam Melting Process Chain

Authors: C. Bay, A. Mahr, H. Groneberg, F. Döpper

Abstract:

Additive Manufacturing processes are becoming increasingly established in the industry for the economic production of complex prototypes and functional components. Laser beam melting (LBM), the most frequently used Additive Manufacturing technology for metal parts, has been gaining in industrial importance for several years. The LBM process chain – from material storage to machine set-up and component post-processing – requires many manual operations. These steps often depend on the manufactured component and are therefore not standardized. These operations are often not performed in a standardized manner, but depend on the experience of the machine operator, e.g., levelling of the build plate and adjusting the first powder layer in the LBM machine. This lack of standardization limits the reproducibility of the component quality. When processing metal powders with inhalable and alveolar particle fractions, the machine operator is at high risk due to the high reactivity and the toxic (e.g., carcinogenic) effect of the various metal powders. Faulty execution of the operation or unintentional omission of safety-relevant steps can impair the health of the machine operator. In this paper, all the steps of the LBM process chain are first analysed in terms of their influence on the two aforementioned challenges: reproducibility and work safety. Standardization to avoid errors increases the reproducibility of component quality as well as the adherence to and correct execution of safety-relevant operations. The corresponding lean method 5S will therefore be applied, in order to develop approaches in the form of recommended actions that standardize the work processes. These approaches will then be evaluated in terms of ease of implementation and their potential for improving reproducibility and work safety. The analysis and evaluation showed that sorting tools and spare parts as well as standardizing the workflow are likely to increase reproducibility. Organizing the operational steps and production environment decreases the hazards of material handling and consequently improves work safety.

Keywords: Additive manufacturing, lean production, reproducibility, work safety.

193 Optimization of Energy Harvesting Systems for RFID Applications

Authors: P. Chambe, B. Canova, A. Balabanian, M. Pele, N. Coeur

Abstract:

To avoid battery-assisted tags with limited battery lifetime, it is proposed here to replace the batteries by energy harvesting systems able to feed the tag from its local environment. This would give RFID systems total independence, which is very interesting for applications where removing the tag from its location is not possible. An example is described here for luggage safety in airports, and it is easily extendable to similar situations in terms of operating constraints. The idea is to fit an RFID tag with an energy harvesting system not only to identify the luggage but also to supply an embedded microcontroller with a sensor delivering the luggage weight, making it impossible to add or remove anything from the luggage during transit phases. The aim is to optimize the harvested energy for such RFID applications, and to study within which limits these applications are theoretically possible. The proposed energy harvester is based on two energy sources, piezoelectricity and electromagnetic waves, so that when the luggage is moving on ground transportation to airline counters, the piezo module supplies the tag and its microcontroller, while the RF module operates during luggage transit thanks to readers located along the way. The tag location on the luggage is analyzed to capture the best vibrations, as is the best choice of harvester, in order to optimize the energy supply depending on the application and on the amount of energy harvested during a period of time. The effects of system parameters (RFID UHF frequencies, the limit distance between the tag and the antenna necessary to harvest energy, produced voltage and voltage threshold) are discussed, and the working conditions for such a system are delimited.
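
For a feel of the RF part of the energy budget, the sketch below applies the standard Friis free-space transmission equation at a UHF RFID frequency with hypothetical reader and antenna figures; it is only an order-of-magnitude estimate, not the harvester model used in the paper:

```python
import math

def friis_received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Free-space received power (dBm) from the Friis transmission equation:
    Pr = Pt * Gt * Gr * (lambda / (4*pi*d))**2, expressed in dB."""
    wavelength = 3.0e8 / freq_hz
    path_loss_db = 20.0 * math.log10(4.0 * math.pi * dist_m / wavelength)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

# Illustrative (hypothetical) numbers: a 30 dBm UHF reader at 868 MHz,
# 6 dBi reader antenna, 2 dBi tag antenna.
for d in (1.0, 3.0, 6.0, 10.0):
    pr = friis_received_power_dbm(30.0, 6.0, 2.0, 868e6, d)
    mw = 10 ** (pr / 10.0)   # dBm back to milliwatts
    print(f"d = {d:4.1f} m : {pr:6.1f} dBm  ({mw * 1000:.1f} microwatts)")
```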

Keywords: EM waves, Energy Harvesting, Piezoelectric, RFID Tag.

192 Energy-Aware Scheduling in Real-Time Systems: An Analysis of Fair Share Scheduling and Priority-Driven Preemptive Scheduling

Authors: Su Xiaohan, Jin Chicheng, Liu Yijing, Burra Venkata Durga Kumar

Abstract:

Energy-aware scheduling in real-time systems aims to minimize energy consumption, but issues related to resource reservation and timing constraints remain challenges. This study focuses on analyzing two scheduling algorithms, Fair-Share Scheduling (FSS) and Priority-Driven Preemptive Scheduling (PDPS), with respect to these issues and to energy-aware scheduling in real-time systems. From the analysis of both algorithms and of how they address the two problems, it can be found that FSS ensures fair allocation of resources but needs improvement under an imbalanced system load, while PDPS prioritizes tasks based on criticality to meet timing constraints through preemption but relies heavily on task prioritization and may not be energy efficient. Therefore, improvements to both algorithms with energy-aware features are proposed. Future work should focus on developing hybrid scheduling techniques that minimize energy consumption through intelligent task prioritization, resource allocation, and meeting timing constraints.

Keywords: Energy-aware scheduling, fair-share scheduling, priority-driven preemptive scheduling, real-time systems, optimization, resource reservation, timing constraints.

191 A Condition-Based Maintenance Policy for Multi-Unit Systems Subject to Deterioration

Authors: Nooshin Salari, Viliam Makis

Abstract:

In this paper, we propose a condition-based maintenance policy for multi-unit systems considering the existence of economic dependency among units. We consider a system composed of N identical units, where each unit deteriorates independently. Deterioration process of each unit is modeled as a three-state continuous time homogeneous Markov chain with two working states and a failure state. The average production rate of units varies in different working states and demand rate of the system is constant. Units are inspected at equidistant time epochs, and decision regarding performing maintenance is determined by the number of units in the failure state. If the total number of units in the failure state exceeds a critical level, maintenance is initiated, where units in failed state are replaced correctively and deteriorated state units are maintained preventively. Our objective is to determine the optimal number of failed units to initiate maintenance minimizing the long run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is developed to demonstrate the proposed policy and the comparison with the corrective maintenance policy is presented.

Keywords: Reliability, production, maintenance optimization, Semi-Markov Decision Process.

190 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions with a wavelet transform is an approach towards reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out the non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each approximation introduces more error. In this paper, the earthquake strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered with various wavelets, and dynamic analysis of sample shear and moment frames is carried out. The error associated with each wavelet is computed by comparing the dynamic response of the sample structures with the exact responses. The exact responses are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
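
A minimal sketch of the filtering step, assuming a one-level Haar discrete wavelet transform on a synthetic record (not the wavelets or the Northridge record used in the paper): the detail coefficients are discarded and only the shorter approximation is kept for analysis.

```python
import numpy as np

def haar_filter_once(accel):
    """One level of the Haar discrete wavelet transform: split the record into
    approximation (low-frequency) and detail (high-frequency) coefficients,
    then keep only the approximation as a shorter, filtered record."""
    x = np.asarray(accel, dtype=float)
    if len(x) % 2:                       # pad to an even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# Synthetic "ground motion": a low-frequency wave plus high-frequency noise.
t = np.linspace(0.0, 10.0, 512)
record = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.random.randn(t.size)

approx, detail = haar_filter_once(record)
print("original samples :", record.size)
print("filtered samples :", approx.size)
print("energy kept in approximation: "
      f"{np.sum(approx**2) / (np.sum(approx**2) + np.sum(detail**2)):.2%}")
```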

Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.

189 Formex Algebra Adaptation into Parametric Design Tools: Dome Structures

Authors: Réka Sárközi, Péter Iványi, Attila B. Széll

Abstract:

The aim of this paper is to present the adaptation of the dome construction tool of formex algebra to the parametric design software Grasshopper. Formex algebra is a mathematical system, primarily used for planning structural systems such as truss-grid domes and vaults, together with the programming language Formian. The goal of the research is to allow architects to plan truss-grid structures easily with parametric design tools based on the versatile formex algebra mathematical system. To produce regular structures, coordinate system transformations are used and the dome structures are defined in a spherical coordinate system. Owing to the abilities of the parametric design software, it is possible to apply further modifications to the structures and obtain special forms. The paper covers the basic dome types, as well as additional dome-based structures using special coordinate-system solutions based on spherical coordinate systems. It also covers additional structural possibilities, such as making double-layer grids in all geometric forms. The adaptation of formex algebra and the parametric workflow of Grasshopper together give the possibility of quick and easy design and optimization of special truss-grid domes.
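
As a small illustration of defining a dome in a spherical coordinate system and converting it to Cartesian nodes for further parametric modelling (a generic sketch, not the formex algebra or Grasshopper implementation):

```python
import math

def dome_nodes(radius, n_rings=4, n_meridians=12, max_polar_deg=80.0):
    """Node grid of a single-layer dome defined in spherical coordinates
    (r, theta = polar angle from the apex, phi = azimuth), converted to
    Cartesian x, y, z for downstream modelling."""
    nodes = [(0.0, 0.0, radius)]                      # apex
    for i in range(1, n_rings + 1):
        theta = math.radians(max_polar_deg * i / n_rings)
        for j in range(n_meridians):
            phi = 2.0 * math.pi * j / n_meridians
            x = radius * math.sin(theta) * math.cos(phi)
            y = radius * math.sin(theta) * math.sin(phi)
            z = radius * math.cos(theta)
            nodes.append((x, y, z))
    return nodes

nodes = dome_nodes(radius=10.0, n_rings=3, n_meridians=8)
print(f"{len(nodes)} nodes; first ring starts at {nodes[1]}")
```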

Keywords: Parametric design, structural morphology, space structures, spherical coordinate system.

188 Bridging Stress Modeling of Composite Materials Reinforced by Fibers Using Discrete Element Method

Authors: Chong Wang, Kellem M. Soares, Luis E. Kosteski

Abstract:

The problem of toughening in brittle materials reinforced by fibers is complex, involving all of the mechanical properties of fibers, matrix and the fiber/matrix interface, as well as the geometry of the fiber. Development of new numerical methods appropriate to toughening simulation and analysis is necessary. In this work, we have performed simulations and analysis of toughening in brittle matrix reinforced by randomly distributed fibers by means of the discrete elements method. At first, we put forward a mechanical model of toughening contributed by random fibers. Then with a numerical program, we investigated the stress, damage and bridging force in the composite material when a crack appeared in the brittle matrix. From the results obtained, we conclude that: (i) fibers of high strength and low elasticity modulus are beneficial to toughening; (ii) fibers of relatively high elastic modulus compared to the matrix may result in substantial matrix damage due to spalling effect; (iii) employment of high-strength synthetic fibers is a good option for toughening. We expect that the combination of the discrete element method (DEM) with the finite element method (FEM) can increase the versatility and efficiency of the software developed. The present work can guide the design of ceramic composites of high performance through the optimization of the parameters.

Keywords: Bridging stress, discrete element method, fiber reinforced composites, toughening.

187 Modeling and Analysis for Effective Capacity of a Cross-Layer Optimized Wireless Networks

Authors: Reham A. El-mayet, Hesham M. El-Badawy, Salwa H. Elramly

Abstract:

New-generation mobile communication networks are able to support triple play. To this end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to enlarge the system capability for high-data-rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, Maximum Weighted Capacity (MWC) based resource allocation at the Physical (PHY) layer is used. This resource allocation scheme provides much better QoS than previous resource allocation schemes, while maintaining the highest or nearly highest capacity at a similar complexity. In addition, Delay Satisfaction (DS) scheduling at the Medium Access Control (MAC) layer, which allows more than one connection to be served in each slot, is used. This scheduling technique is more efficient than conventional scheduling for investigating both the number of users and the number of subcarriers against system capacity. The system is optimized for different operational environments: outdoor deployment scenarios as well as indoor deployment scenarios are investigated, and also different channel models. In addition, the effective capacity approach [1] is used not only to provide QoS for different mobile users, but also to increase the total throughput of the wireless network.

Keywords: Cross-layer, effective capacity, LTE, OFDM, QoS, resource allocation, wireless networks.

186 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An Upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of the solution through an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve the local search, a Greedy Selection to accelerate convergence to the optimized solution, and a “Study Nearby Strategy” to improve the global search performance by avoiding entrapment in local optima. It is further proposed to generate better solutions by a crossover operation. The strategy used in the proposed algorithm shows superiority in terms of convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the superiority of the proposed algorithm over the other established algorithms. The algorithm is also capable of handling larger unit systems.
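
The following is a generic sketch of two ingredients named above, a Gaussian random walk around the current nests and greedy selection with abandonment of the worst nests, applied to a simple test function; it is not the authors' upgraded algorithm with the Improved Lambda Iteration Relaxation method or the crossover operation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Simple test objective to be minimized."""
    return float(np.sum(x ** 2))

def gaussian_walk_search(obj, dim=5, n_nests=15, iters=200,
                         step=0.5, abandon_frac=0.25, bounds=(-5.0, 5.0)):
    """Cuckoo-style search sketch: each nest proposes a new solution via a
    Gaussian random walk and keeps it only if it is better (greedy selection);
    a fraction of the worst nests is abandoned and re-initialized."""
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    fit = np.array([obj(n) for n in nests])

    for _ in range(iters):
        # Gaussian random walk around each nest + greedy replacement.
        trial = np.clip(nests + step * rng.normal(size=nests.shape), lo, hi)
        trial_fit = np.array([obj(t) for t in trial])
        better = trial_fit < fit
        nests[better], fit[better] = trial[better], trial_fit[better]

        # Abandon the worst nests to keep exploring globally.
        n_bad = int(abandon_frac * n_nests)
        worst = np.argsort(fit)[-n_bad:]
        nests[worst] = rng.uniform(lo, hi, size=(n_bad, dim))
        fit[worst] = [obj(n) for n in nests[worst]]

    best = int(np.argmin(fit))
    return nests[best], fit[best]

x_best, f_best = gaussian_walk_search(sphere)
print("best objective found:", f_best)
```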

Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.

185 Self-Adaptive Differential Evolution Based Power Economic Dispatch of Generators with Valve-Point Effects and Multiple Fuel Options

Authors: R.Balamurugan, S.Subramanian

Abstract:

This paper presents the solution of the power economic dispatch (PED) problem for generating units with valve-point effects and multiple fuel options using a Self-Adaptive Differential Evolution (SDE) algorithm. Obtaining the global optimal solution by mathematical approaches becomes difficult for the realistic PED problem in power systems. The Differential Evolution (DE) algorithm has been found to be a powerful evolutionary algorithm for global optimization in many real problems. In this paper, the key control parameters of the DE algorithm, namely the crossover constant CR and the weight F applied to the random differential, are self-adapted. The PED problem formulation takes into consideration the non-smooth fuel cost function due to valve-point effects and the multiple fuel options of the generators. The proposed approach has been examined and tested on PED problems with thirteen generating units including valve-point effects, ten generating units with multiple fuel options neglecting valve-point effects, and ten generating units including both valve-point effects and multiple fuel options. The test results are promising and show the effectiveness of the proposed approach for solving PED problems.
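
For reference, the sketch below evaluates the standard non-smooth valve-point fuel cost form used in the PED literature, with a quadratic penalty for the power-balance constraint; the unit coefficients are illustrative placeholders, not the thirteen- or ten-unit test data of the paper:

```python
import numpy as np

# Standard valve-point fuel cost form used in the PED literature:
#   F_i(P_i) = a_i*P_i^2 + b_i*P_i + c_i + | e_i * sin(f_i * (P_min_i - P_i)) |
# Unit data below are illustrative placeholders, not the paper's test systems.
units = [
    # a,       b,     c,     e,     f,     Pmin, Pmax
    (0.00028, 8.10, 550.0, 300.0, 0.035,  0.0, 680.0),
    (0.00056, 8.10, 309.0, 200.0, 0.042,  0.0, 360.0),
    (0.00324, 7.74, 240.0, 150.0, 0.063, 60.0, 300.0),
]

def dispatch_cost(P, demand=900.0, penalty=1e4):
    """Total fuel cost plus a quadratic penalty on the power-balance mismatch."""
    total = 0.0
    for p, (a, b, c, e, f, pmin, pmax) in zip(P, units):
        p = np.clip(p, pmin, pmax)
        total += a * p**2 + b * p + c + abs(e * np.sin(f * (pmin - p)))
    total += penalty * (np.sum(P) - demand) ** 2
    return total

# Evaluate one candidate dispatch (MW per unit).
candidate = np.array([500.0, 250.0, 150.0])
print(f"cost of candidate dispatch: {dispatch_cost(candidate):.2f} $/h")
```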

Keywords: Multiple fuels, power economic dispatch, self-adaptive differential evolution, valve-point effects.

184 LOD Exploitation and Fast Silhouette Detection for Shadow Volumes

Authors: Mustafa S. Fawad, Wang Wencheng, Wu Enhua

Abstract:

Shadows add a great amount of realism to a scene, and many algorithms exist to generate them. Recently, shadow volumes (SVs) have attained a valuable position in the gaming industry. In view of this, we concentrate on simple but valuable initial steps towards further optimization of SV generation, i.e., model simplification and silhouette edge detection and tracking. SV generation usually spends considerable time producing the boundary silhouettes of the object, and if the object is complex, edge generation becomes much harder and slower. The challenge gets stiffer when real-time shadow generation and rendering are demanded. We investigated a way to use a real-time silhouette edge detection method, which takes advantage of spatial and temporal coherence, and to exploit the level-of-detail (LOD) technique for reducing the silhouette edges of the model, using the simplified version of the model for shadow generation and thereby speeding up the running time. These steps greatly reduce the execution time of shadow volume generation in real time and are easily applicable to any of the recently proposed SV techniques. Our main focus is to exploit the LOD and silhouette edge detection techniques, adapting them to further enhance shadow volume generation for real-time rendering.

Keywords: LOD, perception, Shadow Volumes, Silhouette Edge, Spatial and Temporal Coherence.

183 Polymer Mediated Interaction between Grafted Nanosheets

Authors: Supriya Gupta, Paresh Chokshi

Abstract:

Polymer-particle interactions can be effectively utilized to produce composites that possess physicochemical properties superior to that of neat polymer. The incorporation of fillers with dimensions comparable to polymer chain size produces composites with extra-ordinary properties owing to very high surface to volume ratio. The dispersion of nanoparticles is achieved by inducing steric repulsion realized by grafting particles with polymeric chains. A comprehensive understanding of the interparticle interaction between these functionalized nanoparticles plays an important role in the synthesis of a stable polymer nanocomposite. With the focus on incorporation of clay sheets in a polymer matrix, we theoretically construct the polymer mediated interparticle potential for two nanosheets grafted with polymeric chains. The self-consistent field theory (SCFT) is employed to obtain the inhomogeneous composition field under equilibrium. Unlike the continuum models, SCFT is built from the microscopic description taking in to account the molecular interactions contributed by both intra- and inter-chain potentials. We present the results of SCFT calculations of the interaction potential curve for two grafted nanosheets immersed in the matrix of polymeric chains of dissimilar chemistry to that of the grafted chains. The interaction potential is repulsive at short separation and shows depletion attraction for moderate separations induced by high grafting density. It is found that the strength of attraction well can be tuned by altering the compatibility between the grafted and the mobile chains. Further, we construct the interaction potential between two nanosheets grafted with diblock copolymers with one of the blocks being chemically identical to the free polymeric chains. The interplay between the enthalpic interaction between the dissimilar species and the entropy of the free chains gives rise to a rich behavior in interaction potential curve obtained for two separate cases of free chains being chemically similar to either the grafted block or the free block of the grafted diblock chains.

Keywords: Clay nanosheets, polymer brush, polymer nanocomposites, self-consistent field theory.

182 Precision Grinding of Titanium (Ti-6Al-4V) Alloy Using Nanolubrication

Authors: Ahmed A. D. Sarhan, Hong Wan Ping, M. Sayuti

Abstract:

In the current era of competitive machinery production, industries place more emphasis on product quality and cost reduction whilst abiding by pollution-prevention policy. In attempting to address these concerns, industries are aware that the effectiveness of existing lubrication systems must be improved to achieve power-efficient and pollution-preventing machining processes. As such, this research studies a plausible solution to the issue in grinding titanium alloy (Ti-6Al-4V) by using nanolubrication as an alternative to flood grinding. The aim of this research is to evaluate the optimum condition of grinding force and surface roughness using an MQL lubricating system to deliver nano-oil at different levels of weight concentration of silicon dioxide (SiO2) mixed with normal mineral oil. The Taguchi Design of Experiment (DoE) method is carried out using a standard Taguchi orthogonal array L16(4^3) to find the optimized combination of the SiO2 weight concentration, the nozzle orientation and the MQL pressure. Surface roughness and grinding force are also analyzed using the signal-to-noise (S/N) ratio to determine the best level of each factor tested. Consequently, the best combination of parameters is tested over a period of time and the results are compared with the conventional grinding methods of dry and flood conditions. The results show a positive performance of MQL nanolubrication.

Keywords: Grinding, MQL, precision grinding, Taguchi optimization, titanium alloy.

181 An Educational Data Mining System for Advising Higher Education Students

Authors: Heba Mohammed Nagy, Walid Mohamed Aly, Osama Fathy Hegazy

Abstract:

Educational data mining is a specific data mining field applied to data originating from educational environments; it relies on different approaches to discover hidden knowledge from the available data. Among these approaches are machine learning techniques, which are used to build a system that acquires learning from previous data. Machine learning can be applied to solve different regression, classification, clustering and optimization problems.

In our research, we propose a "Student Advisory Framework" that utilizes classification and clustering to build an intelligent system. This system can be used to provide pieces of consultation to a first-year university student on pursuing a certain education track in which he/she will likely succeed, aiming to decrease the high rate of academic failure among these students. A real case study in Cairo Higher Institute for Engineering, Computer Science and Management is presented, using a real dataset collected from 2000 to 2012. The dataset has two main components: a pre-higher-education dataset and a first-year course results dataset. The results have proved the efficiency of the suggested framework.

Keywords: Classification, Clustering, Educational Data Mining (EDM), Machine Learning.

180 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel

Authors: Joseph C. Chen, Joshua Cox

Abstract:

This research focuses on the use of the Taguchi method to reduce the surface roughness and improve dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) with S7 heat treated steel material. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications which require high precision in dimension and desired surface roughness. This paper demonstrates that Taguchi Parameter Design methodology is able to optimize both dimensioning and surface roughness successfully by investigating seven wire-EDM controllable parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the Wire EDM process is investigated as the noise factor in this research. Experimental design and analysis based on L18 Taguchi orthogonal arrays are conducted. This paper demonstrates that the Taguchi-based system enables the wire EDM process to produce (1) high precision parts with an average of 0.6601 inches dimension, while the desired dimension is 0.6600 inches; and (2) surface roughness of 1.7322 microns which is significantly improved from 2.8160 microns.
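
A small sketch of the Taguchi signal-to-noise ratios that such an analysis typically relies on, smaller-the-better for surface roughness and nominal-the-best for the target dimension, computed on hypothetical replicate measurements (not the paper's experimental data):

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for characteristics to be minimized (e.g. roughness):
    S/N = -10 * log10( mean(y^2) )."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_nominal_the_best(y):
    """Taguchi S/N ratio for a target value (e.g. a 0.6600 in dimension):
    S/N = 10 * log10( mean^2 / variance )."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Hypothetical replicated measurements for one run of an L18 experiment.
roughness_um = [1.74, 1.71, 1.76]          # microns, smaller is better
dimension_in = [0.6602, 0.6600, 0.6601]    # inches, nominal (0.6600) is best

print(f"roughness S/N : {sn_smaller_the_better(roughness_um):7.3f} dB")
print(f"dimension S/N : {sn_nominal_the_best(dimension_in):7.3f} dB")
```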

Keywords: Taguchi parameter design, surface roughness, dimensional accuracy, Wire EDM.

179 A New Fast Intra Prediction Mode Decision Algorithm for H.264/AVC Encoders

Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf

Abstract:

The H.264/AVC video coding standard contains a number of advanced features. One of the new features introduced in this standard is multiple intra-mode prediction, which exploits directional spatial correlation with adjacent blocks for intra prediction. With this new feature, intra coding in H.264/AVC offers a considerably higher improvement in coding efficiency compared to other compression standards, but the computational complexity increases significantly when a brute-force rate-distortion optimization (RDO) algorithm is used. In this paper, we propose a new fast intra prediction mode decision method to reduce the complexity of H.264 video coding. For luma intra prediction, the proposed method consists of two steps. In the first step, we perform RDO for four modes of the intra 4x4 block; based on the distribution of the RDO costs of those modes and on the strong correlation with adjacent modes, we select the best mode of the intra 4x4 block. In the second step, based on the fact that the dominating direction of a smaller block is similar to that of a bigger block, the candidate modes of 8x8 blocks and 16x16 macroblocks are determined. For chroma intra prediction, since the variance of the chroma pixel values is much smaller than that of the luma ones, our proposed method uses only the DC mode. Experimental results show that the new fast intra mode decision algorithm increases the speed of intra coding significantly with negligible loss of PSNR.

Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.

178 Asynchronous Parallel Distributed Genetic Algorithm with Elite Migration

Authors: Kazunori Kojima, Masaaki Ishigame, Goutam Chakraborty, Hiroshi Hatsuo, Shozo Makino

Abstract:

In most popular implementations of parallel GAs, the whole population is divided into a set of subpopulations, each subpopulation executes a GA independently, and some individuals are migrated at fixed intervals on a ring topology. In these studies, the migrations usually occur 'synchronously' among subpopulations. Therefore, the CPUs are not used efficiently and the communication does not occur efficiently either. A few studies have tried asynchronous migration, but it is hard to implement and setting proper parameter values is difficult. The aim of our research is to develop a migration method which is easy to implement, for which parameter values are easy to set, and which reduces communication traffic. In this paper, we propose a traffic reduction method for the Asynchronous Parallel Distributed GA based on migration of elites only. This is a server-client model: every client executes a GA on a subpopulation and sends its elite information to the server. The server manages the elite information of each client, and the migrations occur according to the evolution of the subpopulation in a client. This facilitates the reduction in communication traffic. To evaluate our proposed model, we apply it to many function optimization problems. We confirm that our proposed method performs as well as current methods, that the communication traffic is lower, and that setting the parameters is much easier.

Keywords: Parallel Distributed Genetic Algorithm (PDGA), asynchronous PDGA, Server-Client configuration, Elite Migration.

177 Numerical Simulation on Deformation Behaviour of Additively Manufactured AlSi10Mg Alloy

Authors: Racholsan Raj Nirmal, B. S. V. Patnaik, R. Jayaganthan

Abstract:

The deformation behaviour of additively manufactured AlSi10Mg alloy under low strains, high strain rates and elevated temperature conditions is essential to analyse and predict its response against dynamic loading such as impact and thermomechanical fatigue. The constitutive relation of Johnson-Cook is used to capture the strain rate sensitivity and thermal softening effect in AlSi10Mg alloy. Johnson-Cook failure model is widely used for exploring damage mechanics and predicting the fracture in many materials. In this present work, Johnson-Cook material and damage model parameters for additively manufactured AlSi10Mg alloy have been determined numerically from four types of uniaxial tensile test. Three different uniaxial tensile tests with dynamic strain rates (0.1, 1, 10, 50, and 100 s-1) and elevated temperature tensile test with three different temperature conditions (450 K, 500 K and 550 K) were performed on 3D printed AlSi10Mg alloy in ABAQUS/Explicit. Hexahedral elements are used to discretize tensile specimens and fracture energy value of 43.6 kN/m was used for damage initiation. Levenberg Marquardt optimization method was used for the evaluation of Johnson-Cook model parameters. It was observed that additively manufactured AlSi10Mg alloy has shown relatively higher strain rate sensitivity and lower thermal stability as compared to the other Al alloys.
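
For reference, the Johnson-Cook flow stress relation referred to above can be evaluated as in the sketch below; the material parameters are placeholders for illustration, not the values fitted for the additively manufactured AlSi10Mg alloy:

```python
import math

def johnson_cook_stress(strain, strain_rate, temp_k,
                        A, B, n, C, m,
                        ref_rate=1.0, T_ref=293.0, T_melt=860.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m),
    with homologous temperature T* = (T - Tref) / (Tmelt - Tref)."""
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * math.log(strain_rate / ref_rate)
    t_star = (temp_k - T_ref) / (T_melt - T_ref)
    thermal = 1.0 - max(t_star, 0.0) ** m
    return hardening * rate_term * thermal

# Placeholder parameters in MPa (illustrative only, not the fitted values).
params = dict(A=270.0, B=400.0, n=0.5, C=0.02, m=1.0)

for rate in (0.1, 1.0, 10.0, 100.0):
    s = johnson_cook_stress(strain=0.05, strain_rate=rate, temp_k=450.0, **params)
    print(f"strain rate {rate:7.1f} 1/s -> flow stress {s:7.1f} MPa")
```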

Keywords: ABAQUS, additive manufacturing, AlSi10Mg, Johnson-Cook model.

176 A Combined Conventional and Differential Evolution Method for Model Order Reduction

Authors: J. S. Yadav, N. P. Patidar, J. Singhai, S. Panda, C. Ardil

Abstract:

In this paper, a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the combined advantages of the Mihailov stability criterion and the Continued Fraction Expansions (CFE) technique are exploited: the reduced denominator polynomial is derived using the Mihailov stability criterion and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique. In the evolutionary method, the recently proposed Differential Evolution (DE) optimization technique is employed. The DE method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM in which both numerator and denominator polynomials are obtained by the conventional method, to show its superiority.
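
A minimal sketch of the ISE objective that the DE step minimizes, comparing unit-step responses of a higher-order and a reduced-order transfer function with SciPy (the transfer functions are arbitrary examples, not the paper's numerical case):

```python
import numpy as np
from scipy import signal

def ise_step(num_full, den_full, num_red, den_red, t_end=20.0, n=2000):
    """Integral Squared Error between unit-step responses of the original
    and the reduced-order transfer functions: ISE = integral of (y - y_r)^2 dt."""
    t = np.linspace(0.0, t_end, n)
    _, y_full = signal.step(signal.TransferFunction(num_full, den_full), T=t)
    _, y_red = signal.step(signal.TransferFunction(num_red, den_red), T=t)
    return float(np.trapz((y_full - y_red) ** 2, t))

# Arbitrary example: a 3rd-order system and a candidate 1st-order reduction.
num_full, den_full = [8.0], [1.0, 7.0, 14.0, 8.0]   # 8 / (s^3 + 7s^2 + 14s + 8)
num_red,  den_red  = [1.0], [1.0, 1.0]              # 1 / (s + 1)

print(f"ISE of candidate ROM: {ise_step(num_full, den_full, num_red, den_red):.5f}")
```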

Keywords: Reduced Order Modeling, Stability, Mihailov Stability Criterion, Continued Fraction Expansions, Differential Evolution, Integral Squared Error.

175 Optimization of Wire EDM Parameters for Fabrication of Micro Channels

Authors: Gurinder Singh Brar, Sarbjeet Singh, Harry Garg

Abstract:

Wire Electric Discharge Machining (WEDM) is a thermal machining process capable of machining very hard, electrically conductive materials irrespective of their hardness. WEDM is being widely used to machine micro-scale parts with high dimensional accuracy and surface finish. The objective of this paper is to optimize the process parameters of wire EDM to fabricate micro channels and to calculate the surface finish and material removal rate of the micro channels fabricated using wire EDM. The material used is aluminum 6061 alloy. The experiments were performed using a CNC wire-cut electric discharge machine. The effect of various WEDM parameters, namely pulse on time (TON) with the levels (100, 150, 200), pulse off time (TOFF) with the levels (25, 35, 45) and current (IP) with the levels (105, 110, 115), was investigated to study the influence on the output parameters, i.e., surface roughness and Material Removal Rate (MRR). Each experiment was conducted under different conditions of pulse on time, pulse off time and peak current. For material removal rate, TON and IP were the most significant process parameters. MRR increases with the increase in TON and IP and decreases with the increase in TOFF. For surface roughness, TON and IP have the maximum effect, and TOFF was found to be less effective.

Keywords: Micro Channels, Wire Electric Discharge Machining (WEDM), Metal Removal Rate (MRR), Surface Finish.

174 Soil/Phytofisionomy Relationship in Southeast of Chapada Diamantina, Bahia, Brazil

Authors: Marcelo Araujo da Nóbrega, Ariel Moura Vilas Boas

Abstract:

This study aims to characterize the physicochemical aspects of the soils of southeastern Chapada Diamantina - Bahia related to the phytophysiognomies of this area, rupestrian field, small savanna (savanna fields), small dense savanna (savanna fields), savanna (Cerrado), dry thorny forest (Caatinga), dry thorny forest/savanna, scrub (Carrasco - ecotone), forest island (seasonal semi-deciduous forest - Capão) and seasonal semi-deciduous forest. To achieve the research objective, soil samples were collected in each plant formation and analyzed in the soil laboratory of ESALQ - USP in order to identify soil fertility through the determination of pH, organic matter, phosphorus, potassium, calcium, magnesium, potential acidity, sum of bases, cation exchange capacity and base saturation. The composition of soil particles was also checked; that is, the texture, step made in the terrestrial ecosystems laboratory of the Department of Ecology of USP and in the soil laboratory of ESALQ. Another important factor also studied was to show the variations in the vegetation cover in the region as a function of soil moisture in the different existing physiographic environments. Another study carried out was a comparison between the average soil moisture data with precipitation data from three locations with very different phytophysiognomies. The soils found in this part of Bahia can be classified into 5 classes, with a predominance of oxisols. All of these classes have a great diversity of physical and chemical properties, as can be seen in photographs and in particle size and fertility analyzes. The deepest soils are located in the Central Pediplano of Chapada Diamantina where the dirty field, the clean field, the executioner and the semideciduous seasonal forest (Capão) are located, and the shallower soils were found in the rupestrian field, dry thorny forest, and savanna fields, the latter located on a hillside. As for the variations in water in the region's soil, the data indicate that there were large spatial variations in humidity in both the rainy and dry periods.

Keywords: Bahia, Chapada diamantina, phytophysiognomies, soils.

173 Physical, Chemical and Mineralogical Characterization of Construction and Demolition Waste Produced in Greece

Authors: C. Alexandridou, G. N. Angelopoulos, F. A. Coutelieris

Abstract:

Construction industry in Greece consumes annually more than 25 million tons of natural aggregates originating mainly from quarries. At the same time, more than 2 million tons of construction and demolition waste are deposited every year, usually without control, therefore increasing the environmental impact of this sector. A potential alternative for saving natural resources and minimize landfilling, could be the recycling and re-use of Concrete and Demolition Waste (CDW) in concrete production. Moreover, in order to conform to the European legislation, Greece is obliged to recycle non-hazardous construction and demolition waste to a minimum of 70% by 2020. In this paper characterization of recycled materials - commercially and laboratory produced, coarse and fine, Recycled Concrete Aggregates (RCA) - has been performed. Namely, X-Ray Fluorescence and X-ray diffraction (XRD) analysis were used for chemical and mineralogical analysis respectively. Physical properties such as particle density, water absorption, sand equivalent and resistance to fragmentation were also determined. This study, first time made in Greece, aims at outlining the differences between RCA and natural aggregates and evaluating their possible influence in concrete performance. Results indicate that RCA’s chemical composition is enriched in Si, Al, and alkali oxides compared to natural aggregates. X-ray diffraction (XRD) analyses results indicated the presence of calcite, quartz and minor peaks of mica and feldspars. From all the evaluated physical properties of coarse RCA, only water absorption and resistance to fragmentation seem to have a direct influence on the properties of concrete. Low Sand Equivalent and significantly high water absorption values indicate that fine fractions of RCA cannot be used for concrete production unless further processed. Chemical properties of RCA in terms of water soluble ions are similar to those of natural aggregates. Four different concrete mixtures were produced and examined, replacing natural coarse aggregates with RCA by a ratio of 0%, 25%, 50% and 75% respectively. Results indicate that concrete mixtures containing recycled concrete aggregates have a minor deterioration of their properties (3-9% lower compression strength at 28 days) compared to conventional concrete containing the same cement quantity.

Keywords: Chemical and physical characterization, compressive strength, mineralogical analysis, recycled concrete aggregates, waste management.

172 Development of Mechanical Properties of Self Compacting Concrete Contain Rice Husk Ash

Authors: M. A. Ahmadi, O. Alidoust, I. Sadrinejad, M. Nayeri

Abstract:

Self-compacting concrete (SCC), a new kind of high-performance concrete (HPC), was first developed in Japan in 1986. The development of SCC has made the casting of dense reinforcement and mass concrete convenient and has minimized noise. Fresh self-compacting concrete flows into the formwork and around obstructions under its own weight to fill it completely and self-compact (without any need for vibration), without any segregation or blocking. The elimination of the need for compaction leads to better quality concrete and a substantial improvement of working conditions. SCC mixes generally have a much higher content of fine fillers, including cement, and produce excessively high compressive strength concrete, which restricts their field of application to special concretes only. Using SCC mixes in general concrete construction practice requires low-cost materials to make inexpensive concrete. Rice husk ash (RHA) has been used as a highly reactive pozzolanic material to improve the microstructure of the interfacial transition zone (ITZ) between the cement paste and the aggregate in self-compacting concrete. Mechanical experiments on RHA-blended Portland cement concretes revealed that, in addition to the pozzolanic reactivity of RHA (chemical aspect), the particle grading (physical aspect) of the cement and RHA mixtures also exerts a significant influence on the blending efficiency. The scope of this research was to determine the usefulness of rice husk ash in the development of economical self-compacting concrete. The cost of materials is decreased by reducing the cement content through the use of a waste material like rice husk ash in its place. This paper presents a study on the development of the mechanical properties, up to 180 days, of self-compacting and ordinary concretes with rice husk ash from a rice paddy milling industry in Rasht (Iran). Two different replacement percentages of cement by RHA, 10% and 20%, and two different water/cementitious material ratios (0.40 and 0.35) were used for both the self-compacting and the normal concrete specimens. The results are compared with those of the self-compacting concrete without RHA in terms of compressive strength, flexural strength and modulus of elasticity. It is concluded that RHA has a positive effect on the mechanical properties at ages after 60 days. Based on the results, the self-compacting concrete specimens show higher values than the normal concrete specimens in all tests except the modulus of elasticity. The specimens with 20% replacement of cement by RHA show the best performance.

Keywords: Self compacting concrete (SCC), rice husk ash (RHA), mechanical properties.

171 Application of Statistical Approach for Optimizing CMCase Production by Bacillus tequilensis S28 Strain via Submerged Fermentation Using Wheat Bran as Carbon Source

Authors: A. Sharma, R. Tewari, S. K. Soni

Abstract:

Biofuel production has come forth as a future technology to combat the problem of depleting fossil fuels. Bio-based ethanol production through the enzymatic degradation of lignocellulosic biomass serves as an efficient method and is catching the eye of the scientific community. The high cost of the enzyme is the major obstacle preventing the commercialization of this process. Thus, the main objective of the present study was to optimize the composition of the medium components for enhancing cellulase production by a newly isolated strain of Bacillus tequilensis. Nineteen factors were screened using the statistical Plackett-Burman design. The significant variables influencing cellulase production were then employed in a statistical Response Surface Methodology using a Central Composite Design to maximize cellulase production. The optimum medium composition for cellulase production was: peptone (4.94 g/L), ammonium chloride (4.99 g/L), yeast extract (2.00 g/L), Tween-20 (0.53 g/L), calcium chloride (0.20 g/L) and cobalt chloride (0.60 g/L), with pH 7, an agitation speed of 150 rpm and 72 h of incubation at 37°C. Analysis of variance (ANOVA) revealed a high coefficient of determination (R2) of 0.99. A maximum cellulase productivity of 11.5 IU/ml was observed against the model-predicted value of 13 IU/ml. The enzyme was found to be optimally active at 60°C and pH 5.5.
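
As a generic illustration of the response-surface step, the sketch below fits a second-order model to synthetic two-factor data by least squares and locates its stationary point; it is not the actual CCD design or dataset of the study:

```python
import numpy as np

# Synthetic two-factor example (coded levels), standing in for CCD data.
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1.5, 1.5, 30), rng.uniform(-1.5, 1.5, 30)
y = 11.0 - 1.2 * (x1 - 0.4) ** 2 - 0.8 * (x2 + 0.2) ** 2 + 0.1 * rng.normal(size=30)

# Second-order response surface:
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

# Stationary point of the fitted quadratic: solve grad(y) = 0.
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
stationary = np.linalg.solve(H, -np.array([b1, b2]))
print("fitted coefficients:", np.round(beta, 3))
print("stationary (candidate optimum) point:", np.round(stationary, 3))
```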

Keywords: Bacillus tequilensis, CMCase, Submerged Fermentation, Optimization, Plackett-Burman Design, Response Surface Methodology.

170 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion

Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto

Abstract:

In digital signal processing it is important to approximate multi-dimensional data by the method called rank reduction, in which we reduce the rank of multi-dimensional data from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion extended from SVD was proposed and implemented for multi-dimensional data, and it has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. Therefore we have proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion, 3-OTPE for short. 3-OTPE uses the power method instead of a nonlinear optimization method in order to decrease the computing time. At the same time, the group of De Lathauwer proposed the Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar with respect to the rank reduction of multi-dimensional data. Using these two methods we can obtain computation results that are partly the same and partly slightly different. In this paper, we compare 3-OTPE to HOSVD in terms of calculation accuracy and computing time, and clarify the difference between these two methods.
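
A compact sketch of the HOSVD construction mentioned above for a third-order tensor, using mode-n unfoldings and SVDs with NumPy (a standard textbook formulation, not the authors' 3-OTPE implementation):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated HOSVD of a 3rd-order tensor: factor matrices from the left
    singular vectors of each mode-n unfolding; core = tensor x_n U_n^T."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Multiply the core back by each factor matrix along its mode."""
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, out, axes=(1, mode)), 0, mode)
    return out

rng = np.random.default_rng(0)
T = rng.normal(size=(8, 9, 10))
core, factors = hosvd(T, ranks=(4, 4, 4))
err = np.linalg.norm(T - reconstruct(core, factors)) / np.linalg.norm(T)
print("relative reconstruction error with multilinear rank (4, 4, 4):", round(err, 4))
```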

Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.

169 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks

Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian

Abstract:

Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including the recently developed artificial intelligence methods. In recent years, optimization algorithms have been used to minimize artificial network errors, such as colony algorithms, genetic algorithms, imperialist competitive algorithms, and so on. In the present research, artificial neural networks based on colonial competition algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils with parameters such as pile diameter, pile buried length, eccentricity of load and undrained shear resistance of soil were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks compared to back-propagation artificial neural networks as well as the Broms and Hansen methods.

Keywords: Lateral bearing capacity, short pile, clayey soil, artificial neural network, Imperialist competition algorithm.
