Search results for: Single owner protocol
343 Ensemble Learning with Decision Tree for Remote Sensing Classification
Authors: Mahesh Pal
Abstract:
In recent years, a number of works proposing the combination of multiple classifiers to produce a single classification have been reported in the remote sensing literature. The resulting classifier, referred to as an ensemble classifier, is generally found to be more accurate than any of the individual classifiers making up the ensemble. As accuracy is the primary concern, much of the research in the field of land cover classification is focused on improving classification accuracy. This study compares the performance of four ensemble approaches (boosting, bagging, DECORATE and random subspace) with a univariate decision tree as the base classifier. Two training datasets, one without noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier. The highest classification accuracy of 87.43% was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades and a classification accuracy of 79.7% is achieved, which is even lower than that achieved (i.e., 80.02%) by the unboosted decision tree classifier.
Keywords: Ensemble learning, decision tree, remote sensing classification.
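As an illustration of the ensemble-versus-single-tree comparison described above, the following sketch trains bagged and boosted decision trees on a synthetic dataset with and without 20% label noise. It is a minimal, hypothetical example using scikit-learn and generated data, not the remote sensing dataset or the exact configuration used in the study.

```python
# Minimal sketch (assumed setup): bagging vs. boosting with decision trees on
# clean and 20%-label-noise data, loosely mirroring the comparison above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def add_label_noise(labels, rate, n_classes, rng):
    noisy = labels.copy()
    flip = rng.random(len(labels)) < rate
    noisy[flip] = rng.integers(0, n_classes, flip.sum())
    return noisy

rng = np.random.default_rng(0)
y_tr_noisy = add_label_noise(y_tr, 0.20, 4, rng)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=50, random_state=0),
}
for name, model in models.items():
    clean = model.fit(X_tr, y_tr).score(X_te, y_te)
    noisy = model.fit(X_tr, y_tr_noisy).score(X_te, y_te)
    print(f"{name:12s} clean={clean:.3f}  20% noise={noisy:.3f}")
```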
342 Fabrication of Powdery Composites Based Alumina and Its Consolidation by Hot Pressing Method in OXY-GON Furnace
Authors: T. Kuchukhidze, N. Jalagonia, T. Korkia, V. Gabunia, N. Jalabadze, R. Chedia
Abstract:
In this work, methods for obtaining ultrafine alumina powdery composites and a high temperature pressing technology for matrix ceramic composites with different compositions are discussed. Alumina was obtained by solution combustion synthesis and sol-gel methods. Metal carbide containing powdery composites were obtained by homogenization of the finished powders in nanomills, as well as by their single-step high temperature synthesis. Different types of matrix ceramic composites (α-Al2O3-ZrO2-Y2O3, α-Al2O3-Y2O3-MgO, α-Al2O3-SiC-Y2O3, α-Al2O3-WC-Co-Y2O3, α-Al2O3-B4C-Y2O3, α-Al2O3-B4C-TiB2, etc.) were obtained by using an OXY-GON furnace. Consolidation of the powders was carried out at 1550-1750°C (hold time 1 h, pressure 50 MPa). Corundum ceramic samples have been obtained and are characterized by high hardness and fracture toughness, absence of open porosity, and high corrosion resistance. Their density reaches 99.5-99.6% TD. During the work, the following devices have been used: high temperature vacuum furnace OXY-GON Industries Inc. (USA), electronic scanning microscope Nikon Eclipse LV 150, optical microscope NMM-800TRF, planetary mill Pulverisette 7 premium line, Shimadzu Dynamic Ultra Micro Hardness Tester DUH-211S, Analysette 12 Dynasizer.
Keywords: α-Alumina, consolidation, matrix ceramics, powdery composites.
341 Revised PLWAP Tree with Non-frequent Items for Mining Sequential Pattern
Authors: R. Vishnu Priya, A. Vadivel
Abstract:
Sequential pattern mining is a challenging task in the data mining area with many applications. One of those applications is mining patterns from weblogs. In recent times, weblogs are highly dynamic and some entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until the required output or interesting rules are obtained. Some of the recently proposed algorithms for mining weblogs build the tree with two scans and always consume a large amount of time and space. In this paper, we build a Revised PLWAP tree with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not required to delete or maintain the information of nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining. It is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used a benchmark weblog dataset and found that the performance of the proposed tree is encouraging compared to some of the recently proposed approaches.
Keywords: Sequential pattern mining, weblog, frequent and non-frequent items, incremental and interactive mining.
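The sketch below illustrates only the core idea referred to above: a prefix tree built in a single scan that keeps counts for every item, frequent or not, so the structure can be reused when the minimum-support threshold changes during interactive mining. The data, tree layout and mining loop are simplified stand-ins, not the RePLNI-tree algorithm itself.

```python
# Minimal sketch (assumed data and simplified structure): single-scan prefix tree
# that retains counts for all items; links of non-frequent items are skipped at
# mining time instead of being deleted from the tree.
from collections import defaultdict

class Node:
    def __init__(self):
        self.count = 0
        self.children = {}

def build_tree(sessions):
    root, item_support = Node(), defaultdict(int)
    for session in sessions:            # single pass over the weblog
        node = root
        for item in session:
            item_support[item] += 1
            node = node.children.setdefault(item, Node())
            node.count += 1
    return root, item_support           # nothing is deleted for non-frequent items

def mine(node, prefix, item_support, min_support, patterns):
    for item, child in node.children.items():
        if item_support[item] < min_support:
            continue                    # links of non-frequent items are not considered
        if child.count >= min_support:
            patterns.append((prefix + [item], child.count))
            mine(child, prefix + [item], item_support, min_support, patterns)

sessions = [["a", "b", "c"], ["a", "c"], ["a", "b", "d"], ["b", "c"]]
root, support = build_tree(sessions)
for min_sup in (2, 3):                  # interactive mining: change threshold, reuse tree
    found = []
    mine(root, [], support, min_sup, found)
    print(min_sup, found)
```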
340 Effect of Relative Permeability on Well Testing Behavior of Naturally Fractured Lean Gas Condensate Reservoirs
Authors: G.H. Montazeri, Z. Dastkhan, H. Aliabadi
Abstract:
Gas condensate reservoirs show complicated thermodynamic behavior when their pressure falls below the dew point pressure. Condensate blockage around the producing well causes a significant reduction in production rate once the bottom-hole pressure drops below the saturation pressure. The main objective of this work was to examine the well test analysis of a naturally fractured lean gas condensate reservoir and investigate the effect of the condensate formed around the well-bore on the behavior of the single-phase pseudo-pressure and its derivative curves. In this work, a naturally fractured lean gas condensate reservoir is simulated with a compositional simulator. Different sensitivity analyses are carried out on the Corey parameters, and the simulator output is fed to analytical well testing software. To account for these phenomena, eighteen compositional models with the capillary number effect are constructed. The matrix relative permeability obeys Corey relative permeability, and the relative permeability in the fracture is linear. The well testing behavior of these models is studied and interpreted. The results show that the different sensitivity analyses on the relative permeability of the matrix do not have a strong effect on well testing behavior, even when most of the matrix around the well is occupied by condensate.
Keywords: Lean gas, fractured condensate reservoir, capillary number, well testing analysis, relative permeability.
339 Power System Stability Improvement by Simultaneous Tuning of PSS and SVC Based Damping Controllers Employing Differential Evolution Algorithm
Authors: Sangram Keshori Mohapatra, Sidhartha Panda, Prasant Kumar Satpathy
Abstract:
Power system stability improvement by simultaneous tuning of a power system stabilizer (PSS) and a Static Var Compensator (SVC) based damping controller is thoroughly investigated in this paper. Both local and remote signals with associated time delays are considered in the present study. The design problem of the proposed controller is formulated as an optimization problem, and the differential evolution (DE) algorithm is employed to search for the optimal controller parameters. The performances of the proposed controllers are evaluated under different disturbances for both a single-machine infinite-bus power system and a multi-machine power system. The performance of the proposed controllers with variations in the signal transmission delays has also been investigated. The proposed stabilizers are tested on a weakly connected power system subjected to different disturbances. Nonlinear simulation results are presented to show the effectiveness and robustness of the proposed control schemes over a wide range of loading conditions and disturbances. Further, the proposed design approach is found to be robust and to improve stability effectively even under small disturbance conditions.
Keywords: Differential Evolution Algorithm, Power System Stability, Power System Stabilizer, Static Var Compensator
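To make the DE-based tuning workflow concrete, the sketch below uses SciPy's differential evolution to tune the gains of a simple PI controller for a hypothetical first-order plant by minimizing the integral of squared error of the step response. The plant, controller structure and cost function are assumptions chosen for illustration and are far simpler than the PSS/SVC design problem above.

```python
# Minimal sketch (assumed toy model): differential-evolution tuning of PI gains
# (Kp, Ki) for a first-order plant G(s) = 1/(T s + 1), minimizing the ISE of the
# closed-loop step response. Illustrates the optimization workflow only.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import TransferFunction, step

T_PLANT = 2.0                     # assumed plant time constant
t_grid = np.linspace(0, 20, 2000)

def ise(gains):
    kp, ki = gains
    # Closed loop of C(s) = Kp + Ki/s with G(s) = 1/(T s + 1):
    # (Kp s + Ki) / (T s^2 + (1 + Kp) s + Ki)
    sys = TransferFunction([kp, ki], [T_PLANT, 1.0 + kp, ki])
    t, y = step(sys, T=t_grid)
    err = 1.0 - y                 # unit step reference
    return float(np.sum(err**2) * (t[1] - t[0]))

result = differential_evolution(ise, bounds=[(0.01, 20.0), (0.01, 20.0)],
                                seed=0, maxiter=50, tol=1e-6)
print("Kp, Ki =", result.x, " ISE =", result.fun)
```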
338 Effects of Cerium Oxide Nanoparticle Addition in Diesel and Diesel-Biodiesel Blends on the Performance Characteristics of a CI Engine
Authors: Abbas Alli Taghipoor Bafghi, Hosein Bakhoda, Fateme Khodaei Chegeni
Abstract:
An experimental investigation is carried out to establish the performance characteristics of a compression ignition engine while using cerium oxide nanoparticles as an additive in neat diesel and diesel-biodiesel blends. In the first phase of the experiments, the stability of neat diesel and diesel-biodiesel fuel blends with the addition of cerium oxide nanoparticles is analyzed. After a series of experiments, it is found that subjecting the blends to high speed blending followed by ultrasonic bath stabilization improves the stability. In the second phase, performance characteristics are studied using the stable fuel blends in a single cylinder four stroke engine coupled with an electrical dynamometer and a data acquisition system. The cerium oxide acts as an oxygen donating catalyst and provides oxygen for combustion. The activation energy of cerium oxide acts to burn off carbon deposits within the engine cylinder at the wall temperature and prevents the deposition of non-polar compounds on the cylinder wall, resulting in a reduction in HC emissions. The tests revealed that cerium oxide nanoparticles can be used as an additive in diesel and diesel-biodiesel blends to significantly improve complete combustion of the fuel.
Keywords: Diesel engine, cerium oxide, diesel-biodiesel blends, nanoparticles.
337 Numerical Analysis of Flow in the Gap between a Simplified Tractor-Trailer Model and Cross Vortex Trap Device
Authors: Terrance Charles, Zhiyin Yang, Yiling Lu
Abstract:
Heavy trucks are aerodynamically inefficient due to their un-streamlined body shapes, with more than 60% of engine power being required to overcome the aerodynamic drag at 60 mph. Many aerodynamic drag reduction devices have been developed, and this paper presents a study of a drag reduction device called the Cross Vortex Trap Device (CVTD) deployed in the gap between the tractor and the trailer of a simplified tractor-trailer model. Numerical simulations have been carried out at a Reynolds number of 0.51×10^6, based on the inlet flow velocity and the height of the trailer, using the Reynolds-Averaged Navier-Stokes (RANS) approach. Three different configurations of the CVTD have been studied, ranging from a single slab to three slabs, equally spaced on the front face of the trailer. The flow field around the three configurations of the trap device has been analysed and presented. The results show that a maximum drag reduction of 12.25% can be achieved when a triple vortex trap device is used. A detailed flow field analysis along with pressure contours is presented to elucidate the drag reduction mechanisms of the CVTD and why the triple vortex trap configuration produces the maximum drag reduction among the three configurations tested.
Keywords: Aerodynamic drag, cross vortex trap device, truck, RANS.
336 Navigation of Multiple Mobile Robots using Rule-based-Neuro-Fuzzy Technique
Authors: Saroj Kumar Pradhan, Dayal Ramakrushna Parhi, Anup Kumar Panda
Abstract:
This paper deals with the motion planning of multiple mobile robots. Mobile robots working together to achieve several objectives have many advantages over a single robot system. However, planning and coordination between the mobile robots is extremely difficult. In the present investigation, rule-based and rule-based-neuro-fuzzy techniques are analyzed for multiple mobile robot navigation in an unknown or partially known environment. The final aim of the robots is to reach some pre-defined goals. Based upon a reference motion direction, the distances between the robots and obstacles, and the distances between the robots and targets, different types of rules are taken heuristically and refined later to find the steering angle. The control system combines a repelling influence related to the distance between robots and nearby obstacles with an attracting influence between the robots and targets. A hybrid rule-based-neuro-fuzzy technique is then analysed to find the steering angle of the robots. Simulation results show that the proposed rule-based-neuro-fuzzy technique can improve navigation performance in complex and unknown environments compared to the simple rule-based technique.
Keywords: Mobile robots, navigation, neuro-fuzzy, obstacle avoidance, rule-based, target seeking.
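As a rough illustration of combining an attracting influence toward the target with a repelling influence from nearby obstacles, the following sketch computes a steering angle for a single robot using a simple potential-field-style rule. The weights and influence distance are invented for the example, and the neuro-fuzzy refinement described above is not included.

```python
# Minimal sketch (assumed weights/threshold): steering angle from an attraction
# toward the target plus repulsion from obstacles closer than an influence radius.
import numpy as np

def steering_angle(robot, heading, target, obstacles,
                   k_att=1.0, k_rep=2.0, influence=2.0):
    robot, target = np.asarray(robot, float), np.asarray(target, float)
    force = k_att * (target - robot)               # attracting influence
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < influence:                   # repelling influence of nearby obstacles
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    desired = np.arctan2(force[1], force[0])
    # wrap the difference between desired and current heading into (-180, 180] degrees
    return np.degrees(np.arctan2(np.sin(desired - heading),
                                 np.cos(desired - heading)))

print(steering_angle(robot=(0, 0), heading=0.0, target=(5, 5),
                     obstacles=[(1.0, 0.5)]))
```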
335 Sustainability and Promotion of Inland Waterway Transportation Projects in Colombia: Case of the Magdalena River
Authors: David Julian Bernal Melgarejo
Abstract:
Inland Waterway Transportation (IWT) plays an important role in national transport systems; water transportation is considered a safe, energy efficient and environmentally friendly mode of transport. These benefits of IWT are raising national awareness: for instance, the Colombian government is planning to restore the navigability of the most important river of the country, the Magdalena River. Embracing waterway transportation in Colombia could strengthen competitiveness while reducing most of the transport externalities. However, the current situation of the Magdalena is deplorable; the most important river of Colombia has been abandoned for decades, and the solution is beyond a single administrative entity. This paper analyzes the outcomes of the Navigation And Inland Waterway Action and Development in Europe (NAIADES) program as a prospective basis for developing a similar program in Colombia with similar objectives and guidelines, considering sustainability and guaranteeing the long-term results and adaptability of the program. After identifying stakeholders and policy experts, a set of individual interviews was carried out; the findings support the idea that a lack of integration within governmental institutions and a lack of attention to marketing promotion are possible drawbacks to the implementation of IWT projects.
Keywords: Inland waterway transportation, Logistics, Sustainability, Multimodal transport systems, Water transportation.
334 Determination of an Efficient Differentiation Pathway of Stem Cells Employing Predictory Neural Network Model
Authors: Mughal Yar M, Israr Ul Haq, Bushra Noman
Abstract:
Stem cells have the ability to differentiate through mitotic cell division into a wide range of specialized cell types. Cellular differentiation is the process by which a less specialized cell develops into a more specialized one. This paper studies the fundamental problem of a computational schema for an artificial neural network based on chemical, physical and biological variables of state. Through this type of study, the system could be modeled for viable propagation of various economically important stem cell differentiations. This paper proposes various differentiation outcomes of an artificial neural network into a variety of potential specialized cells, implemented in MATLAB version 2009. A feed-forward back propagation network was created with an input vector of five elements, a single hidden layer and one output unit in the output layer. The efficiency of the neural network was assessed by comparing the results achieved in this study with the experimental input data and the chosen target data. The proposed solution for the efficiency of the artificial neural network is assessed by a comparative analysis of the mean square error at zero epochs. Different variables of data are used in order to test the targeted results.
Keywords: Computational schema, meiosis, mitosis, neural network, stem cell SOM.
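A minimal sketch of the network topology described above (five inputs, one hidden layer, one output) is shown below using scikit-learn on synthetic data. The data, hidden-layer size and training settings are assumptions for illustration and do not reproduce the MATLAB model or the stem cell variables of state.

```python
# Minimal sketch (synthetic data, assumed hidden-layer size): a feed-forward
# back-propagation network with a 5-element input vector, one hidden layer and
# a single output, evaluated by mean squared error.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 5))                       # five input variables of state
y = X @ np.array([0.5, -1.0, 2.0, 0.1, 0.8]) + 0.05 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, net.predict(X_te)))
```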
333 Data-driven Multiscale Tsallis Complexity: Application to EEG Analysis
Authors: Young-Seok Choi
Abstract:
This work proposes a data-driven, multiscale quantitative measure to reveal the underlying complexity of the electroencephalogram (EEG), applied to a rodent model of hypoxic-ischemic brain injury and recovery. Motivated by the fact that real EEG recordings are nonlinear and non-stationary over different frequencies or scales, there is a need for a more suitable approach than the conventional single-scale tools for analyzing EEG data. Here, we present a new framework of complexity measures that considers changing dynamics over multiple oscillatory scales. The proposed multiscale complexity is obtained by calculating entropies of the probability distributions of the intrinsic mode functions extracted by the empirical mode decomposition (EMD) of the EEG. To quantify EEG recordings of a rat model of hypoxic-ischemic brain injury following cardiac arrest, the multiscale version of Tsallis entropy is examined. To validate the proposed complexity measure, actual EEG recordings from rats (n=9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Experimental results demonstrate that the use of the multiscale Tsallis entropy leads to better discrimination of the injury levels and improved correlations with the neurological deficit evaluation 72 hours after cardiac arrest, suggesting an effective metric as a prognostic tool.
Keywords: Electroencephalogram (EEG), multiscale complexity, empirical mode decomposition, Tsallis entropy.
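The sketch below illustrates only the core computation referred to above: the Tsallis entropy of the amplitude distribution of each oscillatory component. The components here are synthetic band-limited signals standing in for EMD intrinsic mode functions, and the entropic index q and histogram binning are arbitrary choices for the example.

```python
# Minimal sketch (assumed q, binning and synthetic "IMFs"): Tsallis entropy of
# the amplitude distribution of each oscillatory component of a signal.
import numpy as np

def tsallis_entropy(x, q=2.0, bins=64):
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()        # probability distribution
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))     # Shannon limit as q -> 1
    return float((1.0 - np.sum(p**q)) / (q - 1.0))

t = np.linspace(0, 10, 5000)
rng = np.random.default_rng(0)
imfs = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
        for f in (1, 4, 16)]                     # stand-ins for EMD modes

profile = [tsallis_entropy(imf, q=2.0) for imf in imfs]
print("multiscale Tsallis profile:", np.round(profile, 3))
```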
332 Finding Pareto Optimal Front for the Multi-Mode Time, Cost Quality Trade-off in Project Scheduling
Authors: H. Iranmanesh, M. R. Skandari, M. Allahverdiloo
Abstract:
Project managers are ultimately responsible for the overall characteristics of a project, i.e. they should deliver the project on time with minimum cost and with maximum quality. It is vital for any manager to decide on a trade-off between these conflicting objectives, and they will benefit from any scientific decision support tool. Our work tries to determine optimal solutions (rather than a single optimal solution) from which the project manager can select his desirable choice to run the project. In this paper, the problem in project scheduling notated as (1,T|cpm,disc,mu|curve:quality,time,cost) is studied. The problem is multi-objective and the purpose is finding the Pareto optimal front of time, cost and quality of a project (curve:quality,time,cost), whose activities belong to a start-to-finish activity relationship network (cpm) and can be done in different possible modes (mu) which are non-continuous or discrete (disc), where each mode has a different cost, time and quality. The project is constrained to a non-renewable resource, i.e. money (1,T). Because the problem is NP-hard, a meta-heuristic is developed to solve it, based on a version of genetic algorithm specially adapted to solve multi-objective problems, namely FastPGA. A sample project with 30 activities is generated and then solved by the proposed method.
Keywords: FastPGA, Multi-Execution Activity Mode, Pareto Optimality, Project Scheduling, Time-Cost-Quality Trade-Off.
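To illustrate what the Pareto optimal front of time, cost and quality means in practice, the following sketch filters a set of candidate schedules (randomly generated mode assignments with invented durations, costs and quality scores) down to its non-dominated set. It shows only the dominance test, not the FastPGA search itself.

```python
# Minimal sketch (random candidate schedules): extracting the non-dominated
# (Pareto) front where time and cost are minimized and quality is maximized.
import numpy as np

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    no_worse = (a["time"] <= b["time"] and a["cost"] <= b["cost"]
                and a["quality"] >= b["quality"])
    better = (a["time"] < b["time"] or a["cost"] < b["cost"]
              or a["quality"] > b["quality"])
    return no_worse and better

rng = np.random.default_rng(0)
candidates = [{"time": rng.uniform(50, 120), "cost": rng.uniform(1e4, 5e4),
               "quality": rng.uniform(0.6, 1.0)} for _ in range(200)]

front = [c for c in candidates
         if not any(dominates(other, c) for other in candidates)]
print(f"{len(front)} non-dominated schedules out of {len(candidates)}")
```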
331 Development of a Support Tool for Cost and Schedule Integration Management at Program Level
Authors: H. J. Yang, R. Z. Jin, I. J. Park, C. T. Hyun
Abstract:
There has been gradual progress of late in construction projects, particularly in large-scale megaprojects. Due to the long construction period, however, together with large-scale budget investment, a lack of construction management technologies, and an increase in the incomplete elements of project schedule management, a plan to conduct efficient operations and to ensure business safety is required. In particular, as the project management information system (PMIS) is meant for managing a single project centering on the construction phase, there is a limitation in the management of program-scale businesses like megaprojects. Thus, a program management information system (PgMIS) that includes program-level management technologies is needed to manage multiple projects. In this study, a support tool was developed for managing the cost and schedule information occurring in the construction phase at the program level. In addition, a case study on the developed support tool was conducted to verify the usability of the system. With the use of the developed support tool, construction managers can monitor the progress of the entire program and of the individual subprojects in real time.
Keywords: Cost-schedule integration management, support tool, UI, WBS, CBS, PgMIS (Program Management Information System), PMIS (Project Management Information System).
330 Ensemble Approach for Predicting Student's Academic Performance
Authors: L. A. Muhammad, M. S. Argungu
Abstract:
Educational data mining (EDM) has received substantial attention. Data mining techniques have been proposed in one way or another to uncover hidden knowledge in educational data. The results of such studies assist academic institutions in further enhancing their learning processes and methods of passing knowledge to students. Consequently, the performance of students is boosted and the educational products are undoubtedly enhanced. This study adopted a student performance prediction model premised on data mining techniques with Students' Essential Features (SEF). SEF are linked to the learner's interactivity with the e-learning management system. The performance of the student predictive model is assessed by a set of classifiers, viz. Bayes Network, Logistic Regression, and Reduced Error Pruning (REP) Tree. Ensemble methods of Bagging, Boosting, and Random Forest (RF) are then applied to improve the performance of these single classifiers. The study reveals a robust affinity between learners' behaviors and their academic attainment. The results show that the REP Tree and its ensemble record the highest accuracy of 83.33% using SEF. In terms of the Receiver Operating Characteristic (ROC) curve, the boosting method of the REP Tree records 0.903, which is the best. This result further demonstrates the dependability of the proposed model.
Keywords: Ensemble, bagging, Random Forest, boosting, data mining, classifiers, machine learning.
329 Optimizing of Fuzzy C-Means Clustering Algorithm Using GA
Authors: Mohanad Alata, Mohammad Molhim, Abdullah Ramini
Abstract:
The Fuzzy C-Means clustering algorithm (FCM) is a method that is frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. In order to obtain an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that give the minimum least square error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely the iterative search approach and genetic algorithms. The above mentioned approach is tested on data generated from the original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
Keywords: Fuzzy clustering, Fuzzy C-Means, genetic algorithm, Sugeno fuzzy systems.
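The sketch below shows a bare-bones fuzzy c-means update loop in which the weighting exponent m is an explicit parameter, so its effect can be compared across values. The data, cluster count and convergence settings are invented for the example, and the subtractive clustering and GA stages described above are not included.

```python
# Minimal sketch (synthetic data, assumed settings): fuzzy c-means with an
# adjustable weighting exponent m, to illustrate why m is a tunable parameter.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
for m in (1.5, 2.0, 3.0):
    centers, _ = fuzzy_c_means(X, c=2, m=m)
    print(f"m={m}: centers = {np.round(centers, 2).tolist()}")
```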
328 Kinetic Theory Based CFD Modeling of Particulate Flows in Horizontal Pipes
Authors: Pandaba Patro, Brundaban Patro
Abstract:
The numerical simulation of fully developed gas-solid flow in a horizontal pipe is carried out using the Eulerian-Eulerian approach, also known as two-fluid modeling, as both phases are treated as continuum and inter-penetrating continua. The solid phase stresses are modeled using the kinetic theory of granular flow (KTGF). The computed results for velocity profiles and pressure drop are compared with experimental data. We observe that the convection and diffusion terms in the granular temperature equation cannot be neglected in gas-solid flow simulation along a horizontal pipe. The particle-wall collision and lift also play an important role in Eulerian modeling. We also investigated the effect of flow parameters such as gas velocity, particle properties and particle loading on the pressure drop prediction in different pipe diameters. Pressure drop increases with gas velocity and particle loading. The gas velocity has the same effect (proportional to U^2) as in single phase flow on the pressure drop prediction. With respect to particle diameter, pressure drop first increases, reaches a peak and then decreases. The peak is a strong function of pipe bore.
Keywords: CFD, Eulerian modeling, gas solid flow, KTGF.
327 Tailormade Geometric Properties of Chitosan by Gamma Irradiation
Authors: F. Elashhab, L. Sheha, R. Fawzi Elsupikhe, A. E. A. Youssef, R. M. Sheltami, T. Alfazani
Abstract:
Chitosans (CSs) in solution are increasingly used for a range of geometric properties in various academic and industrial sectors, especially in the domain of pharmaceutical and biomedical engineering. In order to provide a tailoring guide of CSs to applicants, gamma (γ)-irradiation technology and simple viscosity measurements have been used in this study. Accordingly, CS solid discs (0.5 cm thickness and 2.5 cm diameter) were exposed in air to Cobalt-60 γ-radiation at room temperature and a constant 50 kGy dose for different periods of exposure time (tγ). Dilute solutions of native and irradiated CS were then prepared by dissolving 1.25 mg cm^-3 of each polymer in 0.1 M NaCl/0.2 M CH3COOH. Single-concentration relative viscosity (ηr) measurements were employed to obtain their intrinsic viscosity ([η]) values and interrelated parameters: the molar mass (Mη), hydrodynamic radius (RH,η), radius of gyration (RG,η), and second virial coefficient (A2,η) of CSs in solution. The results show an exponential decrease of ηr, [η], Mη, RH,η and RG,η with increasing tγ. This suggests the influence of random chain scission of the CS glycosidic bonds, with rate constant kr ≈ 0.017 min^-1 and lifetime τr = kr^-1 ≈ 57.14 min. The results also show an exponential decrease of A2,η with increasing tγ, which can be attributed to the growth of the excluded volume effect in CS segments with tγ and, hence, better solution quality. The results are summarized in the following scaling laws as a tailoring guide: RH,η = 6.98 × 10^-3 Mr^0.65; RG,η = 7.09 × 10^-4 Mr^0.83; A2,η = 121.03 Mη,r^-0.19.
Keywords: Gamma irradiation, geometric properties, kinetic model, scaling laws, viscosity measurement.
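As a small worked illustration of how the reported scaling laws could serve as a tailoring guide, the sketch below evaluates RH,η, RG,η and A2,η for a given molar mass using the coefficients quoted in the abstract. The example molar mass values are arbitrary, and the units of the inputs and outputs are those implied by the original correlations.

```python
# Minimal sketch: evaluating the abstract's scaling laws for arbitrary
# (hypothetical) molar mass values. Coefficients and exponents are quoted from
# the abstract; units follow the original correlations.
def chitosan_geometry(M):
    return {
        "R_H": 6.98e-3 * M ** 0.65,   # hydrodynamic radius scaling law
        "R_G": 7.09e-4 * M ** 0.83,   # radius of gyration scaling law
        "A_2": 121.03 * M ** -0.19,   # second virial coefficient scaling law
    }

for M in (5.0e4, 2.0e5):              # example molar masses (hypothetical)
    props = chitosan_geometry(M)
    print(M, {k: round(v, 4) for k, v in props.items()})
```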
325 A Robust Approach to the Load Frequency Control Problem with Speed Regulation Uncertainty
Authors: S. Z. Sayed Hassen
Abstract:
The load frequency control problem of power systems has attracted a lot of attention from engineers and researchers over the years. Increasing and quickly changing load demand, coupled with the inclusion of more generators with high variability (solar and wind power generators) on the network, is making power systems more difficult to regulate. Frequency changes are unavoidable, but regulatory authorities require that these changes remain within a certain bound. Engineers are required to perform the tricky task of adjusting the control system to maintain the frequency within tolerated bounds. It is well known that to minimize frequency variations, a large proportional feedback gain (speed regulation constant) is desirable. However, this improvement in performance using proportional feedback comes at the expense of a reduced stability margin and also allows some steady-state error. A conventional PI controller is then included as a secondary control loop to drive the steady-state error to zero. In this paper, we propose a robust controller to replace the conventional PI controller, which guarantees performance and stability of the power system over the range of variation of the speed regulation constant. Simulation results are shown to validate the superiority of the proposed approach on a simple single-area power system model.
Keywords: Robust control, power system, integral action, minimax LQG control.
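To make the role of the speed regulation constant concrete, the following sketch evaluates the textbook single-area steady-state frequency deviation Δf = -ΔP_L / (D + 1/R) for a load step, showing how a smaller droop R (stronger proportional feedback) shrinks the steady-state error that the secondary PI (or robust) loop must then remove. The numerical values of D, R and ΔP_L are invented for illustration.

```python
# Minimal sketch (assumed per-unit values): steady-state frequency deviation of a
# single-area system to a load step, delta_f = -dP / (D + 1/R), for several droops R.
def steady_state_deviation(dP_load, D, R):
    return -dP_load / (D + 1.0 / R)

dP_load, D = 0.02, 0.8            # 2% load step, assumed load damping (p.u.)
for R in (0.10, 0.05, 0.02):      # smaller R = stronger primary (proportional) feedback
    print(f"R={R:.2f}  delta_f_ss={steady_state_deviation(dP_load, D, R):.5f} p.u.")
```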
324 UPFC Supplementary Controller Design Using Real-Coded Genetic Algorithm for Damping Low Frequency Oscillations in Power Systems
Authors: A.K. Baliarsingh, S. Panda, A.K. Mohanty, C. Ardil
Abstract:
This paper presents a systematic approach for designing Unified Power Flow Controller (UPFC) based supplementary damping controllers for damping low frequency oscillations in a single-machine infinite-bus power system. Detailed investigations have been carried out considering four alternative UPFC-based damping controllers, namely the modulating index of the series inverter (mB), the modulating index of the shunt inverter (mE), the phase angle of the series inverter (δB), and the phase angle of the shunt inverter (δE). The design problem of the proposed controllers is formulated as an optimization problem, and a Real-Coded Genetic Algorithm (RCGA) is employed to optimize the damping controller parameters. Simulation results are presented and compared with a conventional method of tuning the damping controller parameters to show the effectiveness and robustness of the proposed design approach.
Keywords: Power System Oscillations, Real-Coded Genetic Algorithm (RCGA), Flexible AC Transmission Systems (FACTS), Unified Power Flow Controller (UPFC), Damping Controller.
323 Increase of Organization in Complex Systems
Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan
Abstract:
Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems. Such a measure is increasingly needed in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in physics, a new measure for the quantity of organization and rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and constraint and curvature minimization with an attractor, the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design and predict the future behavior of complex systems to achieve the highest rates of self-organization and improve their quality. It can be applied to other complex systems from physics, chemistry, biology, ecology, economics, cities, network theory and other fields where complex systems are present.
Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.
322 Load Discontinuity in Shock Response and Its Remedies
Authors: Shuenn-Yih Chang, Chiu-Li Huang
Abstract:
It has been shown that a load discontinuity at the end of an impulse will result in an extra impulse and hence an extra amplitude distortion if a step-by-step integration method is employed to obtain the shock response. In order to overcome this difficulty, three remedies are proposed to reduce the extra amplitude distortion. The first remedy is to solve the momentum equation of motion instead of the force equation of motion in the step-by-step solution of the shock response, where an external momentum is used in the solution of the momentum equation of motion. Since the external momentum is the result of the time integration of the external force, the problem of load discontinuity automatically disappears. The second remedy is to perform a single small time step immediately upon termination of the applied impulse, while the other time steps can still be conducted using the time step determined from general considerations. This is because the extra impulse caused by a load discontinuity at the end of an impulse is almost linearly proportional to the step size. Finally, the third remedy is to use the average of the two different values at the integration point of the load discontinuity to replace the use of one of them as the loading input. The basic motivation of this remedy originates from the concept of no loading input error associated with the integration point of the load discontinuity. The feasibility of the three remedies is analytically explained and numerically illustrated.
Keywords: Dynamic analysis, load discontinuity, shock response, step-by-step integration.
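A simple numerical check of the third remedy is sketched below: sampling a rectangular impulse on a time grid and integrating it with the trapezoidal rule yields an extra impulse when the full force value is used at the discontinuity point, while using the average of the two values at that point recovers the exact impulse. The force level, duration and step size are arbitrary choices for illustration.

```python
# Minimal sketch (assumed pulse and step size): extra impulse introduced by a load
# discontinuity at the end of a rectangular pulse, and its removal by using the
# average value at the discontinuity point (third remedy above).
import numpy as np

F, td, dt = 10.0, 1.0, 0.1
t = np.arange(0.0, 2.0 + dt, dt)
exact_impulse = F * td

def sampled_impulse(value_at_discontinuity):
    f = np.where(t < td, F, 0.0)
    f[np.isclose(t, td)] = value_at_discontinuity   # value assigned at t = td
    return np.trapezoid(f, t) if hasattr(np, "trapezoid") else np.trapz(f, t)

for label, val in (("use F", F), ("use 0", 0.0), ("use average", 0.5 * F)):
    I = sampled_impulse(val)
    print(f"{label:12s} impulse={I:.3f}  extra={I - exact_impulse:+.3f}")
```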
321 A Mathematical Representation for Mechanical Model Assessment: Numerical Model Qualification Method
Authors: Keny Ordaz-Hernandez, Xavier Fischer, Fouad Bennis
Abstract:
This article illustrates a model selection management approach for virtual prototypes in interactive simulations. In those numerical simulations, the virtual prototype and its environment are modelled as a multi-agent system, where every entity (prototype, human, etc.) is modelled as an agent. In particular, virtual prototyping agents that provide mathematical models of mechanical behaviour in the form of computational methods are considered. This work argues that the selection of an appropriate model in a changing environment, supported by the models' characteristics, can be managed by the a priori determination of specific exploitation and performance measures of virtual prototype models. As different models exist to represent a single phenomenon, it is not always possible to select the best one under all possible circumstances of the environment. Instead, the most appropriate model shall be selected according to the use case. The proposed approach consists in identifying relevant metrics or indicators for each group of models (e.g. entity models, global model), formulating their qualification, analysing the performance, and applying the qualification criteria. A model can then be selected based on the performance prediction obtained from its qualification. The authors hope that this approach will not only inform engineers and researchers about another approach for selecting virtual prototype models, but also assist virtual prototype engineers in systematic or automatic model selection.
Keywords: Virtual prototype models, domain, qualification criterion, model qualification, model assessment, environmental modelling.
320 Optical Limiting Characteristics of Core-Shell Nanoparticles
Authors: G.Vinitha, A.Ramalingam
Abstract:
TiO2 nanoparticles were synthesized by the hydrothermal method at 180°C from a TiOSO4 aqueous solution of 1 mol/L concentration. The obtained products were coated with silica by means of a seeded polymerization technique for a coating time of 1440 minutes to obtain a well defined TiO2@SiO2 core-shell structure. The uncoated and coated nanoparticles were characterized using X-ray diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FT-IR) to study their physico-chemical properties. Evidence from the XRD and FTIR results shows that SiO2 is homogeneously coated on the surface of the titania particles. The FTIR spectra show that there exists an interaction between TiO2 and SiO2, resulting in the formation of Ti-O-Si chemical bonds at the interface of the TiO2 particles and the SiO2 coating layer. The nonlinear optical limiting properties of TiO2 and TiO2@SiO2 nanoparticles dispersed in ethylene glycol were studied at 532 nm using 5 ns Nd:YAG laser pulses. Three-photon absorption is responsible for the optical limiting characteristics in these nanoparticles, and it is seen that the optical nonlinearity is enhanced in core-shell structures when compared with their single counterparts. This effective three-photon type absorption at this wavelength is of potential application in fabricating optical limiting devices.
Keywords: Hydrothermal method, optical limiting devices, seeded polymerization technique, three-photon type absorption.
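For readers unfamiliar with how three-photon absorption produces limiting, a small worked sketch follows: for pure three-photon absorption the intensity obeys dI/dz = -γ I^3, which integrates to I(L) = I0 / sqrt(1 + 2 γ L I0^2), so the transmittance falls as the input intensity rises. The coefficient γ and sample length used below are hypothetical values chosen only to show the trend, not the measured parameters of these nanoparticles.

```python
# Minimal sketch (hypothetical gamma and sample length): transmittance under pure
# three-photon absorption, I(L) = I0 / sqrt(1 + 2*gamma*L*I0**2), from dI/dz = -gamma*I**3.
import numpy as np

gamma = 1e-4      # assumed three-photon absorption coefficient (arbitrary units)
L = 1.0           # assumed sample thickness (arbitrary units)

I0 = np.logspace(0, 3, 7)                        # increasing input intensities
I_out = I0 / np.sqrt(1.0 + 2.0 * gamma * L * I0**2)
for i0, iout in zip(I0, I_out):
    print(f"I0={i0:9.2f}  T={iout / i0:.3f}")    # transmittance drops at high intensity
```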
319 A Renovated Cook's Distance Based on the Buckley-James Estimate in Censored Regression
Authors: Nazrina Aziz, Dong Q. Wang
Abstract:
Various methods based on regression ideas have been created to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's Distance (RD*_i) and has been developed based on Cook's idea. The renovated Cook's Distance (RD*_i) has advantages (depending on the analyst's demand) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Y^* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the estimate of the coefficients when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a diagnostic measure such as RD*_i, where information from the p variables can be considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
Keywords: Buckley-James estimators, censored regression, censored data, diagnostic analysis, product-limit estimator, renovated Cook's Distance.
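For orientation, the sketch below computes the classical Cook's distance for an ordinary (uncensored) least squares fit, D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2), which is the idea the renovated RD*_i extends to the Buckley-James censored setting. The data are synthetic and the censored-regression machinery of the paper is not reproduced.

```python
# Minimal sketch (synthetic, uncensored data): classical Cook's distance for an
# OLS fit, the measure that the renovated Cook's distance adapts to censored data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3                                   # p includes the intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[0] += 5.0                                    # inject one influential case

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
h = np.diag(H)
s2 = resid @ resid / (n - p)
cooks_d = resid**2 * h / (p * s2 * (1 - h) ** 2)
print("most influential case:", int(np.argmax(cooks_d)),
      " D =", round(float(cooks_d.max()), 3))
```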
318 Agreement Options in Multi-person Decision on Optimizing High-Rise Building Columns
Authors: Christiono Utomo, Arazi Idrus, Madzlan Napiah, Mohd. Faris Khamidi
Abstract:
This paper presents a conceptual model of agreement options for negotiation support in multi-person decisions on optimizing high-rise building columns. The decision is complicated since many parties are involved in choosing a single alternative from a set of solutions. There are different concerns caused by differing preferences, experiences, and backgrounds. Such building columns, as alternatives, are referred to as agreement options, which are determined by identifying the possible decision maker groups, followed by determining the optimal solution for each group. The group in this paper is based on the preferences of three decision makers: the designer, the programmer, and the construction manager. Decision techniques are applied to determine the relative value of the alternative solutions for performing the function. The Analytical Hierarchy Process (AHP) was applied for the decision process and a game theory based agent system for coalition formation. An n-person cooperative game is represented by the set of all players. The proposed coalition formation model enables each agent to individually select its allies or coalition. It further emphasizes the importance of performance evaluation in the design process and value-based decision making.
Keywords: Agreement options, coalition, group choice, game theory, building columns selection.
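To illustrate the AHP step used to weight the decision makers' criteria, the sketch below derives a priority vector from a pairwise comparison matrix via its principal eigenvector and checks consistency. The 3x3 comparison values are invented for the example and do not come from the paper.

```python
# Minimal sketch (invented pairwise judgments): AHP priority weights from the
# principal eigenvector of a comparison matrix, plus the consistency ratio.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # hypothetical pairwise comparisons
              [1/3, 1.0, 2.0],        # (e.g. cost vs. constructability vs. aesthetics)
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index for n = 3 is 0.58
print("priorities:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```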
317 Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking
Authors: Peter U. Eze, P. Udaya, Robin J. Evans
Abstract:
Data hiding can be achieved by steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in teleradiology is one of the applications where the embedded patient record needs to be extracted accurately and the medical image integrity verified. In this research paper, Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of all watermarked sub-blocks with a spreading code, W, has a constant value, p. The constant correlation p, the spreading code W and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates by more than a small value, ε, from p. The major features of our new scheme include: (1) improving watermark detection accuracy for high pixel-depth medical images by reducing the Bit Error Rate (BER) to zero, and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility at the same computational cost.
Keywords: Constant correlation, medical image, spread spectrum, tamper detection, watermarking.
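The sketch below embeds one watermark bit into an image sub-block so that the block's normalized correlation with a bipolar spreading code equals ±p, then detects the bit and flags tampering when the measured correlation drifts from p by more than ε. The block size, p, ε and the pseudo-random code are placeholder choices, and the quantization and pixel-depth handling of the actual scheme are omitted.

```python
# Minimal sketch (placeholder parameters): constant-correlation spread-spectrum
# embedding of one bit per sub-block, with detection and tamper flagging.
import numpy as np

rng = np.random.default_rng(0)
N, P, EPS = 64, 4.0, 0.5                     # block size, target correlation, tolerance
W = rng.choice([-1.0, 1.0], size=N)          # secret bipolar spreading code

def embed(block, bit):
    c = block @ W / N                        # current correlation with the code
    return block + (P * (1 if bit else -1) - c) * W   # force correlation to +/-P exactly

def detect(block):
    c = block @ W / N
    bit = c > 0
    tampered = abs(abs(c) - P) > EPS         # correlation no longer close to +/-P
    return bit, tampered

block = rng.integers(0, 256, N).astype(float)     # one flattened image sub-block
marked = embed(block, bit=True)
print(detect(marked))                             # bit recovered, block reported intact
marked[:16] = 255.0                               # heavy local change typically disturbs c
print(detect(marked))
```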
316 Optimization of Kinematics for Birds and UAVs Using Evolutionary Algorithms
Authors: Mohamed Hamdaoui, Jean-Baptiste Mouret, Stephane Doncieux, Pierre Sagaut
Abstract:
The aim of this work is to present a multi-objective optimization method to find maximum efficiency kinematics for a flapping wing unmanned aerial vehicle. We restricted our study to rectangular wings with the same profile along the span and to harmonic dihedral motion. It is assumed that the bird-like aerial vehicle (whose span and surface area were fixed to 1 m and 0.15 m^2, respectively) is in horizontal, mechanically balanced motion at a fixed speed. We used two flight physics models to describe the vehicle's aerodynamic performance, namely DeLaurier's model, which has been used in many studies dealing with flapping wings, and the model proposed by Dae-Kwan et al. Then, a constrained multi-objective optimization of the propulsive efficiency is performed using a recent evolutionary multi-objective algorithm called ε-MOEA. Firstly, we show that feasible solutions (i.e. solutions that fulfil the imposed constraints) can be obtained using Dae-Kwan et al.'s model. Secondly, we highlight that a single objective optimization approach (the weighted sum method, for example) can also give optimal solutions as good as the multi-objective one, which nevertheless offers the advantage of directly generating the set of the best trade-offs. Finally, we show that DeLaurier's model does not yield feasible solutions.
Keywords: Flight physics, evolutionary algorithm, optimization, Pareto surface.
315 Network Reconfiguration of Distribution System Using Artificial Bee Colony Algorithm
Authors: S. Ganesh
Abstract:
Power distribution systems typically have tie and sectionalizing switches whose states determine the topological configuration of the network. The aim of network reconfiguration of the distribution network is to minimize the losses for a load arrangement at a particular time. Thus, the objective function is to minimize the losses of the network while satisfying the distribution network constraints. The various constraints are radiality, voltage limits and the power balance condition. In this paper, the status of the switches is obtained by using the Artificial Bee Colony (ABC) algorithm. ABC is based on a particular intelligent behavior of honeybee swarms and was developed by inspecting the behaviors of real bees in finding nectar and sharing the information about food sources with the bees in the hive. The proposed methodology has three stages. In stage one, ABC is used to find the tie switches; in stage two, the identified tie switches are checked for the radiality constraint, and if the radiality constraint is satisfied the procedure proceeds to stage three, otherwise the process is repeated. In stage three, load flow analysis is performed. The process is repeated until the losses are minimized. ABC is implemented to find the power flow path, and the forward sweep algorithm is used to calculate the power flow parameters. The proposed methodology is applied to a 33-bus single feeder distribution network using MATLAB.
Keywords: Artificial Bee Colony (ABC) algorithm, Distribution system, Loss reduction, Network reconfiguration.
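The radiality constraint checked in stage two amounts to requiring that the energized network be a spanning tree: connected, with exactly n-1 closed branches and no loops. A small sketch of such a check using union-find on a hypothetical bus/branch list is given below; it is not the ABC search or the load flow stage.

```python
# Minimal sketch (hypothetical buses and branches): radiality check for a candidate
# switch configuration - the closed branches must form a spanning tree of the buses.
def is_radial(n_buses, closed_branches):
    if len(closed_branches) != n_buses - 1:
        return False                          # a tree on n buses has exactly n-1 branches
    parent = list(range(n_buses))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path compression
            i = parent[i]
        return i
    for u, v in closed_branches:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                      # closing this branch would create a loop
        parent[ru] = rv
    return True                               # n-1 branches and no loop => connected tree

print(is_radial(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))   # True: radial feeder
print(is_radial(5, [(0, 1), (1, 2), (2, 0), (3, 4)]))   # False: loop present
```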
314 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism
Authors: Jehad Al Dallal
Abstract:
Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably reduces the slicing time and effort required to measure module parallelism and functional cohesion.
Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.
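To make the idea concrete, the sketch below computes the backward slice of every node in a toy program dependence graph with a single memoized depth-first traversal over reversed dependence edges. The graph is hypothetical, and the sketch ignores the interprocedural and data-flow details a real PDG-based slicer must handle.

```python
# Minimal sketch (toy PDG): backward slices for all nodes computed with one
# memoized traversal; deps[v] is the set of nodes that v directly depends on.
def all_backward_slices(deps):
    memo = {}
    def slice_of(v):
        if v not in memo:
            memo[v] = {v}                    # a node belongs to its own slice
            for u in deps.get(v, ()):        # follow dependences transitively
                memo[v] |= slice_of(u)
        return memo[v]
    return {v: slice_of(v) for v in deps}

# Hypothetical PDG: statement 4 depends on 2 and 3, 3 depends on 1, and so on.
pdg = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}, 5: {4}}
for node, sl in all_backward_slices(pdg).items():
    print(f"backward slice of {node}: {sorted(sl)}")
```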