Search results for: site selection optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7714

6994 Comparison of Accumulated Stress Based Pore Pressure Model and Plasticity Model in 1D Site Response Analysis

Authors: Saeedullah J. Mandokhail, Shamsher Sadiq, Meer H. Khan

Abstract:

This paper presents a comparison of the excess pore water pressure ratio (ru) predicted by an accumulated-stress-based pore pressure model and a plasticity model. One-dimensional effective stress site response analyses were performed on a 30 m deep sand column (consisting of a liquefiable layer between non-liquefiable layers) using the accumulated-stress-based pore pressure model in Deepsoil and the PDMY2 (PressureDependentMultiYield02) model in OpenSees. Three input motions with peak ground acceleration (PGA) levels of 0.357 g, 0.124 g, and 0.11 g were used in this study. The excess pore pressure ratios predicted by the two models were compared and analyzed along the depth, and the time histories of ru at the mid-depth of the liquefiable and non-liquefiable layers were also compared. The comparisons show that the two models predict mostly similar ru values, and the predicted ru is consistent with the PGA level of the input motions.
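
A minimal sketch of the comparison the abstract describes: ru is the excess pore pressure divided by the initial vertical effective stress. The time histories and stress value below are hypothetical stand-ins for Deepsoil and OpenSees output, not the study's data.

```python
import numpy as np

# Excess pore water pressure ratio: ru = delta_u / sigma'_v0, where delta_u is
# the excess pore pressure and sigma'_v0 the initial vertical effective stress.
def excess_pore_pressure_ratio(u_excess, sigma_v0_eff):
    return u_excess / sigma_v0_eff

# Hypothetical time histories (kPa) at the mid-depth of the liquefiable layer,
# standing in for the accumulated-stress (Deepsoil) and PDMY2 (OpenSees) output.
t = np.linspace(0.0, 20.0, 2001)                  # time (s)
u_accum_stress = 60.0 * (1.0 - np.exp(-0.30 * t))
u_plasticity = 58.0 * (1.0 - np.exp(-0.28 * t))
sigma_v0 = 75.0                                   # initial effective stress (kPa)

ru_a = excess_pore_pressure_ratio(u_accum_stress, sigma_v0)
ru_b = excess_pore_pressure_ratio(u_plasticity, sigma_v0)

# Peak ru approaching 1.0 indicates the onset of liquefaction.
print(f"peak ru (accumulated stress): {ru_a.max():.2f}")
print(f"peak ru (plasticity):         {ru_b.max():.2f}")
print(f"max difference:               {np.abs(ru_a - ru_b).max():.3f}")
```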

Keywords: effective stress, excess pore pressure ratio, pore pressure model, site response analysis

Procedia PDF Downloads 230
6993 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site

Authors: Duncan Fraser

Abstract:

A 3-Dimensional (3D) conceptual site model was developed using the Leapfrog Works® platform, utilising a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities: a Newer Volcanics (basaltic) outcrop covered approximately half of the site, overlying a fractured Melbourne Formation (siltstone) bedrock that outcrops over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and to assess potential causal links between sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-storey underground basements. To assess the financial liabilities associated with offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing unnecessary treatment of material and reducing costs.

Keywords: 3D model, contaminated land, Leapfrog, remediation

Procedia PDF Downloads 139
6992 Locating Potential Site for Biomass Power Plant Development in Central Luzon Philippines Using GIS-Based Suitability Analysis

Authors: Bryan M. Baltazar, Marjorie V. Remolador, Klathea H. Sevilla, Imee Saladaga, Loureal Camille Inocencio, Ma. Rosario Concepcion O. Ang

Abstract:

Biomass energy is a traditional source of sustainable energy that has been widely used in developing countries. The Philippines, specifically Central Luzon, has an abundant supply of biomass and could therefore provide plentiful agricultural residues (rice husks) as feedstock for a biomass power plant. However, locating a potential site for biomass development is a complex process involving physical, environmental, socio-economic, and risk factors that are usually diverse and conflicting. Moreover, biomass distribution is highly dispersed geographically. Thus, this study develops an integrated method combining Geographical Information Systems (GIS) with energy planning methods, Multi-Criteria Decision Analysis (MCDA) and the Analytic Hierarchy Process (AHP), for locating a suitable site for biomass power plant development in Central Luzon, Philippines, considering different constraints and factors. Using MCDA, a three-level hierarchy of factors and constraints was produced, with corresponding weights determined by experts using AHP. Applying the results, a suitability map for biomass power plant development in Central Luzon was generated. It showed that the central part of the region has the highest potential, owing to the characteristics of the area: an abundance of rice fields, generally flat land surfaces, accessible roads and grid networks, and low flooding and landslide risks. This study recommends the use of higher-accuracy resource maps and further analysis in selecting the optimum site, accounting for the cost and transportation of biomass residues.
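
A minimal sketch of the AHP-weighting and weighted-overlay steps the abstract describes. The pairwise comparison matrix, factor rasters, and constraint mask are hypothetical; a real study would use the experts' judgments and actual GIS layers.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights and consistency ratio for an AHP matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index (small n)
    return w, ci / ri                         # weights, consistency ratio

# Hypothetical pairwise comparisons for three factor groups:
# biomass availability, accessibility (roads/grid), and hazard risk.
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0]]
w, cr = ahp_weights(pairwise)
print(f"weights={np.round(w, 3)}, CR={cr:.3f} (acceptable if < 0.10)")

# Weighted overlay: factor rasters normalized to [0, 1]; the constraint mask
# excludes unsuitable cells (e.g., water bodies, protected areas).
rng = np.random.default_rng(0)
factors = rng.random((3, 100, 100))           # stand-in factor rasters
mask = rng.random((100, 100)) > 0.1           # stand-in constraint mask
suitability = np.tensordot(w, factors, axes=1) * mask
print(f"best cell suitability: {suitability.max():.3f}")
```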

Keywords: analytic hierarchy process, biomass energy, GIS, multi-criteria decision analysis, site suitability analysis

Procedia PDF Downloads 435
6991 Optimal Allocation of Distributed Generation Sources for Loss Reduction and Voltage Profile Improvement by Using Particle Swarm Optimization

Authors: Muhammad Zaheer Babar, Amer Kashif, Muhammad Rizwan Javed

Abstract:

Nowadays, distributed generation (DG) integration is an effective way to meet increasing load demand, and optimal allocation of distributed generation plays a vital role in reducing system losses and improving the voltage profile. In this paper, a metaheuristic technique is proposed for the allocation of DG in order to reduce power losses and improve the voltage profile. The proposed technique is based on Multi-Objective Particle Swarm Optimization; it needs few control parameters, and a modification is made to the PSO search space. The effectiveness of the proposed technique is tested on the IEEE 33-bus test system, for both single-DG and multiple-DG scenarios. The proposed method is more effective than other metaheuristic techniques and gives better results regarding system losses and voltage profile.
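
A minimal single-objective sketch of PSO searching for a DG bus and size. The loss function below is a hypothetical surrogate; the actual study would evaluate each candidate by running a load flow on the IEEE 33-bus feeder, and a multi-objective variant would also track voltage deviation.

```python
import numpy as np

rng = np.random.default_rng(42)
N_BUS = 33

# Hypothetical surrogate: system loss (kW) as a function of DG bus and size (MW).
base_loss = 210.0
bus_sensitivity = 1.0 + 0.5 * np.sin(np.arange(1, N_BUS + 1) / 5.0)

def system_loss(bus, size_mw):
    b = int(np.clip(np.round(bus), 1, N_BUS))
    s = np.clip(size_mw, 0.0, 3.0)
    return base_loss - 60.0 * bus_sensitivity[b - 1] * s + 14.0 * s**2

# Standard PSO over (bus, size); the bus coordinate is rounded when evaluated.
n_particles, n_iter = 30, 100
pos = np.column_stack([rng.uniform(1, N_BUS, n_particles),
                       rng.uniform(0, 3, n_particles)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([system_loss(*p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.72, 1.49, 1.49                  # common PSO coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([system_loss(*p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best DG placement: bus {int(round(gbest[0]))}, "
      f"{gbest[1]:.2f} MW, loss {system_loss(*gbest):.1f} kW")
```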

Keywords: distributed generation (DG), multi-objective particle swarm optimization (MOPSO), particle swarm optimization (PSO), IEEE standard test system

Procedia PDF Downloads 458
6990 Machine Learning Approach for Yield Prediction in Semiconductor Production

Authors: Heramb Somthankar, Anujoy Chakraborty

Abstract:

This paper presents a classification study on yield prediction in semiconductor production using machine learning approaches. A complex semiconductor production process is generally monitored continuously via signals acquired from sensors and measurement sites. A monitoring system produces a variety of signals, which contain useful information, irrelevant information, and noise. With each signal considered a feature, feature selection is used to find the most relevant signals. The open-source UCI SECOM dataset provides 1567 such samples, of which 104 fail quality assurance. Feature extraction and selection were performed on the dataset, and the useful signals were retained for further study. Afterward, common machine learning algorithms were employed to predict whether a sample passes or fails, and the most suitable algorithm is selected for prediction based on the accuracy and loss of the ML model.
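
A minimal sketch of the feature-selection-plus-classification pipeline described above. A synthetic imbalanced dataset stands in for SECOM (which would be downloaded from the UCI repository); the pipeline stages and the use of ROC-AUC on the rare fail class are the illustrative choices here, not the paper's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for SECOM: 1567 samples, 590 sensor signals, ~7% failures.
X, y = make_classification(n_samples=1567, n_features=590, n_informative=40,
                           weights=[0.93], random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=2000, class_weight="balanced"),
    "random_forest": RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                            random_state=0),
}

for name, clf in candidates.items():
    pipe = Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # SECOM has missing values
        ("drop_constant", VarianceThreshold()),          # remove dead sensors
        ("select", SelectKBest(f_classif, k=60)),        # keep most relevant signals
        ("scale", StandardScaler()),
        ("model", clf),
    ])
    # ROC-AUC is more informative than accuracy on the imbalanced pass/fail labels.
    scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```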

Keywords: deep learning, feature extraction, feature selection, machine learning classification algorithms, semiconductor production monitoring, signal processing, time-series analysis

Procedia PDF Downloads 114
6989 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operating channel for a WLAN access point (AP) has traditionally been a static assignment made by the user when the AP is deployed, which fails to cope with the dynamic conditions of the assigned channel at the station side afterward. The dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms that consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead, and the resulting feedback latency from the STAs can cause the eventually selected channel to no longer be optimal for operation, due to the dynamic sharing nature of the unlicensed band. This has inspired us to develop a dynamic channel selection algorithm with reduced overhead through a proposed low-overhead, cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs that are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, causing high overhead, the AP divides STAs into clusters and then assigns each STA in each cluster one channel to report feedback on. With proper design of the cluster-based reporting, the AP loses no information about the channel conditions at the station side while reducing feedback overhead. The simulation results show equal, and at times better, performance with a fraction of the overhead. We believe that this algorithm has great potential in designing future dynamic channel selection algorithms with low overhead.
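
A minimal sketch of the clustering-and-assignment idea, using DBSCAN (named in the keywords) on station positions. The positions, channel list, and round-robin assignment rule are illustrative assumptions; the paper's mechanism may cluster on channel measurements rather than geometry.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Hypothetical STA positions (meters) around an AP; nearby STAs are expected
# to experience similar channel conditions.
stations = np.vstack([rng.normal(loc, 2.0, size=(8, 2))
                      for loc in [(0, 0), (15, 5), (5, 18)]])
candidate_channels = [1, 6, 11, 36, 40, 44]

labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(stations)

# Within each cluster, spread the candidate channels across members round-robin,
# so every channel is measured by some member while each STA reports on only one.
assignments = {}
for cluster in set(labels) - {-1}:
    members = np.flatnonzero(labels == cluster)
    for i, sta in enumerate(members):
        assignments[int(sta)] = candidate_channels[i % len(candidate_channels)]

# Noise points (label -1) fall back to full per-STA reporting.
for sta in np.flatnonzero(labels == -1):
    assignments[int(sta)] = "all channels"

reports_full = len(stations) * len(candidate_channels)
reports_clustered = sum(1 for v in assignments.values() if v != "all channels")
print(f"feedback reports: {reports_full} (full) vs ~{reports_clustered} (clustered)")
```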

Keywords: channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead

Procedia PDF Downloads 125
6988 Assessment of Heavy Metal Contamination in Soil and Groundwater Due to Leachate Migration from an Open Dumping Site

Authors: Kali Prasad Sarma

Abstract:

Indiscriminate disposal of municipal solid waste (MSW) in open dumping sites is a common scenario in developing countries like India and poses risks to the environment as well as human health. The objective of the present investigation was to determine the concentrations of heavy metals (Pb, Cr, Ni, Mn, Zn, Cu, and Cd) and other physicochemical parameters in leachate and soil collected from an open dumping site of Tezpur town, Assam, India, and the associated potential ecological risk. Tezpur is an urban agglomeration in the Class I UAs/Towns category, with a population of 105,377 as per the 2011 Census of India. The impact of the leachate on groundwater was also addressed in our study. The concentrations of heavy metals were determined using ICP-OES, and energy-dispersive X-ray (SEM-EDS) microanalysis was conducted to confirm the presence of the studied metals in the soil. X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy were used to identify the dominant minerals present in the soil samples. The measured heavy metal concentrations in the soil samples followed the order Mn > Pb > Cu > Zn > Cr > Ni > Cd. The assessment of heavy metal contamination in the soil was carried out by calculating the enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (Cfi), degree of contamination (Cd), pollution load index (PLI), and ecological risk factor (Eri). The study showed that the concentrations of Pb, Cu, and Cd were much higher than their respective average shale values; the EF of the soil samples indicated very severe enrichment for Pb, Cu, and Cd and moderate enrichment for Cr and Zn. Calculated Igeo values indicated that the soil is moderately to strongly contaminated with Pb and uncontaminated to moderately contaminated with Cd and Cu. The Cfi value for Pb indicates a very strong contamination level, while the Cfi values for Cu and Cd were 2.37 and 1.65, respectively, indicating moderate contamination. To apportion the possible sources of heavy metal contamination in the soil, principal component analysis (PCA) was adopted. Heavy metals from the leachate accumulate in the dumping site soil, from which they can easily percolate and reach the groundwater. The possible relation of groundwater contamination to leachate percolation was examined by analyzing the heavy metal concentrations in groundwater with respect to distance from the dumping site. The concentrations of Cd and Pb in groundwater at a distance of 20 m from the dumping site exceeded the permissible limits for drinking water set by the WHO. The occurrence of elevated concentrations of potentially toxic heavy metals such as Pb and Cd in groundwater and soil is of great environmental concern, as it is detrimental to human health and the ecosystem.
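
The contamination indices used above have standard definitions, sketched below. The measured concentrations are illustrative values, not the paper's data; the background values are average-shale figures (Turekian and Wedepohl), matching the abstract's reference to average shale, with Fe as the normalizing element for EF.

```python
import numpy as np

# Illustrative topsoil concentrations (mg/kg) -- not the study's measurements.
metals = ["Pb", "Cr", "Ni", "Mn", "Zn", "Cu", "Cd"]
sample = np.array([210.0, 55.0, 30.0, 900.0, 110.0, 120.0, 1.2])
# Average shale background values (mg/kg, Turekian & Wedepohl 1961).
background = np.array([20.0, 90.0, 68.0, 850.0, 95.0, 45.0, 0.3])
# Reference (normalizing) element for EF, commonly Fe or Al.
fe_sample, fe_background = 38000.0, 47200.0

cf = sample / background                                  # contamination factor
ef = (sample / fe_sample) / (background / fe_background)  # enrichment factor
igeo = np.log2(sample / (1.5 * background))               # geo-accumulation index
pli = cf.prod() ** (1.0 / len(cf))                        # pollution load index
degree_of_contamination = cf.sum()

for m, c, e, i in zip(metals, cf, ef, igeo):
    print(f"{m}: CF={c:5.2f}  EF={e:6.2f}  Igeo={i:5.2f}")
print(f"PLI={pli:.2f}, degree of contamination={degree_of_contamination:.2f}")
```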

Keywords: groundwater, heavy metal contamination, leachate, open dumping site

Procedia PDF Downloads 114
6987 Two Points Crossover Genetic Algorithm for Loop Layout Design Problem

Authors: Xu LiYun, Briand Florent, Fan GuoLiang

Abstract:

The loop-layout design problem (LLDP) aims at optimizing the sequence of positioning of the machines around a cyclic production line. Traffic congestion is the usual criterion to minimize in this type of problem, i.e., the number of additional cycles spent by each part in the network until the completion of its required routing sequence of machines. This paper applies several improvement mechanisms, such as a position-based crossover operator for the Genetic Algorithm (GA), called Two Points Crossover (TPC), and an offspring selection process. The performance of the improved GA is measured using well-known examples from the literature and compared to other evolutionary algorithms. Good results show that the GA can still be competitive for this type of problem against more recent evolutionary algorithms.
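
A minimal sketch of a two-cut-point, order-preserving crossover for machine-sequence permutations. The paper's exact TPC operator is not specified in the abstract; this is the generic construction, which always yields a valid permutation.

```python
import random

def two_point_crossover(parent1, parent2):
    """Two-cut-point crossover for permutation encodings (machine sequences).

    The child inherits the segment between the cut points from parent1 and the
    remaining positions from parent2 in their order of appearance, so the
    result is always a valid permutation (no repeated machines).
    """
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    segment = parent1[a:b + 1]
    filler = [g for g in parent2 if g not in segment]
    return filler[:a] + segment + filler[a:]

random.seed(7)
p1 = [0, 1, 2, 3, 4, 5, 6, 7]   # machine indices around the loop
p2 = [7, 6, 5, 4, 3, 2, 1, 0]
child = two_point_crossover(p1, p2)
print(f"child: {child}")         # a valid permutation mixing both parents
```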

Keywords: crossover, genetic algorithm, layout design problem, loop-layout, manufacturing optimization

Procedia PDF Downloads 284
6986 Transport Mode Selection under Lead Time Variability and Emissions Constraint

Authors: Chiranjit Das, Sanjay Jharkharia

Abstract:

This study focuses on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics cost and emissions reduction conflict with each other. Another important aspect of the transportation decision is lead-time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive analytical model to decide transport mode selection under an emissions constraint, and we extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. To account for lead time variability in the model, two normally distributed random variables are incorporated, representing unit lead time variability and lead time demand variability. We therefore address the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, including unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and a penalty cost for stockouts due to lead time variability. A set of modes is available between nodes; in this paper, we consider four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static, with each mode having a different emissions level depending on the distance and product characteristics. Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions one in order to reduce penalty cost. We provide a numerical analysis to study the effectiveness of the mathematical model. We found that the chance of a stockout during lead time is higher under greater variability of lead time and lead time demand. Numerical results show that the penalty cost of the air transport mode is negative, meaning the chance of a stockout is zero, but air has higher holding and emissions costs. Therefore, air transport is selected only for emergency orders to reduce penalty cost; otherwise, rail and road are the most preferred modes of transportation. Thus, this paper contributes to the literature with a novel approach to deciding transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, the full truckload strategy, and demand consolidation.
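
A stylized sketch of the kind of per-mode cost comparison the abstract describes, using the standard inventory-theoretic approximation for normally distributed lead-time demand and the normal loss function for expected shortage. All parameter values are hypothetical, and the cost terms are a simplified stand-in for the paper's full cost function.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical per-mode parameters; a real study would calibrate them from
# tariffs, distances, and emissions factors for each lane.
modes = {
    #        transport  emissions  lead time   lead-time
    #        cost/unit  cost/unit  mean (days) std (days)
    "air":   (9.0,      4.0,        2.0,       0.3),
    "road":  (4.0,      2.0,        6.0,       1.5),
    "rail":  (3.0,      1.2,        9.0,       2.0),
    "water": (2.0,      0.8,       16.0,       4.0),
}

daily_demand, demand_std = 100.0, 20.0   # units/day, std dev of daily demand
purchase, holding = 10.0, 0.05           # unit purchase cost, holding cost/unit/day
penalty = 25.0                           # penalty per unit short
z = 1.0                                  # common safety factor for all modes

def stylized_cost(ct, ce, lt_mu, lt_sigma):
    # Lead-time demand is approximately normal with these moments when both
    # daily demand and lead time vary (standard inventory-theoretic result).
    ltd_mu = daily_demand * lt_mu
    ltd_sigma = sqrt(lt_mu * demand_std**2 + (daily_demand * lt_sigma)**2)
    # Standard normal loss function gives the expected units short per cycle
    # at reorder point r = ltd_mu + z * ltd_sigma.
    shortage = ltd_sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))
    return ((purchase + ct + ce) * daily_demand   # daily buy/move/emit cost
            + holding * ltd_mu                    # stock held over the lead time
            + penalty * shortage)                 # stockout penalty per cycle

for name, params in modes.items():
    print(f"{name:6s}: stylized cost = {stylized_cost(*params):8.1f}")
```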

Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection

Procedia PDF Downloads 438
6985 Collaborative Energy Optimization for Multi-Microgrid Distribution System Based on Two-Stage Game Approach

Authors: Hanmei Peng, Yiqun Wang, Mao Tan, Zhuocen Dai, Yongxin Su

Abstract:

Efficient energy management in multi-microgrid distribution systems is of significant importance for enhancing the economic benefits of regional power grids. To better balance conflicts among various stakeholders, a two-stage game-based collaborative optimization approach is proposed in this paper, effectively addressing realistic scenarios involving both competition and collaboration among stakeholders. In the first stage, aimed at maximizing individual benefits, a non-cooperative tariff game model is constructed for the distribution network and the surplus microgrid. In the second stage, considering power flow and physical line capacity constraints, we establish a cooperative peer-to-peer (P2P) game model for the multi-microgrid distribution system and employ the method of Lagrange multipliers to handle the complex constraints. Simulation results demonstrate that the proposed approach can effectively improve system economics while harmonizing individual and collective rationality.

Keywords: cooperative game, collaborative optimization, multi-microgrid distribution system, non-cooperative game

Procedia PDF Downloads 75
6984 Temporal Effects on Chemical Composition of Treated Wastewater and Borehole Water Used for Irrigation in Limpopo Province, South Africa

Authors: Pholosho M. Kgopa, Phatu W. Mashela, Alen Manyevere

Abstract:

The increasing incidence of drought spells in much of Sub-Saharan Africa calls for the use of alternative sources of water for irrigation in arid and semi-arid regions. A study was conducted to investigate the chemical composition of borehole water and treated wastewater from different disposal sites at the University of Limpopo Experimental Farm (ULEF). A 4 × 5 factorial experiment, with the borehole as a reference sampling site and three other sampling sites along the wastewater disposal system, was conducted over five months. Water samples were collected at four sites: (a) the exit from Pond 16 into the furrow, (b) the entry into the night-dam, (c) the exit from the night-dam to irrigated fields, and (d) the exit from the borehole to irrigated fields. Samples were collected in the middle of each month from July to November 2016 and analysed for pH, EC, Ca, Mg, Na, K, Al, B, Zn, Cu, Cr, Pb, Cd, and As. The site × time interactions were highly significant for the Ca, Mg, Zn, Cu, Cr, Pb, Cd, and As variables, but not for Na and K. Sampling site had a highly significant effect on all variables, while sampling period was not significant for K and Na. Relative to borehole water, Na concentrations in wastewater samples from the night-dam exit, night-dam entry, and Pond 16 exit were lower by 69%, 34%, and 55%, respectively, whereas Al concentrations were higher at the wastewater sampling sites. In conclusion, both sampling site and period affected the chemical composition of the treated wastewater.

Keywords: irrigation water quality, spatial effects, temporal effects, water reuse, water scarcity

Procedia PDF Downloads 243
6983 A New Conjugate Gradient Method with Guaranteed Descent

Authors: B. Sellami, M. Belloufi

Abstract:

Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have recently received much study. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family not only includes three already existing practical nonlinear conjugate gradient methods but also contains other families of conjugate gradient methods as subfamilies. The two-parameter family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family, and the numerical results show that the methods are efficient for the given test problems. In addition, the methods related to this family are uniformly discussed.
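
A minimal sketch of a nonlinear conjugate gradient loop with a Wolfe line search, here using the common Polak-Ribière+ update on the Rosenbrock function rather than the paper's two-parameter family, whose formula the abstract does not give.

```python
import numpy as np
from scipy.optimize import line_search

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def cg_wolfe(f, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # scipy's line_search enforces the (strong) Wolfe conditions.
        alpha, *_ = line_search(f, grad, x, d, gfk=g)
        if alpha is None:                 # line search failed: restart steepest
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+
        d = -g_new + beta * d
        g = g_new
    return x, k

x_star, iters = cg_wolfe(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
print(f"minimum near {x_star} after {iters} iterations")
```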

Keywords: unconstrained optimization, conjugate gradient method, line search, global convergence

Procedia PDF Downloads 458
6982 Grey Wolf Optimization Technique for Predictive Analysis of Products in E-Commerce: An Adaptive Approach

Authors: Shital Suresh Borse, Vijayalaxmi Kadroli

Abstract:

E-commerce companies nowadays implement the latest AI and ML techniques to improve their performance and prediction accuracy, which helps them gain substantial profit from the online market. Ant Colony Optimization, Genetic Algorithms, Particle Swarm Optimization, Neural Networks, and GWO help many e-commerce companies upgrade their predictive performance. These algorithms provide optimum results in various applications, such as stock price prediction, prediction of drug-target interactions, and user ratings of similar products on e-commerce sites. In this study, customer reviews play an important role in prediction analysis: people show much interest in buying services and products suggested by other customers, which ultimately increases net profit. In this work, a convolutional neural network (CNN) is proposed to optimize the prediction accuracy of an e-commerce website, with the Grey Wolf Optimization (GWO) algorithm used to optimize the hyperparameters of the CNN using an appropriate coding scheme. Model results are verified by comparing them against a counterpart whose hyperparameters were optimized by PSO, on Amazon's customer review dataset. The experimental outcome shows that the proposed GWO-based system achieves superior results in terms of accuracy, precision, recall, etc. in prediction analysis compared to existing systems.
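
A minimal sketch of the canonical Grey Wolf Optimizer for a continuous minimization problem. The sphere objective below is a toy stand-in for a hyperparameter-search objective such as validation loss of a CNN; the update equations are the standard GWO formulation.

```python
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=200, seed=0):
    """Canonical Grey Wolf Optimizer for continuous minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))

    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves lead the pack

        a = 2.0 - 2.0 * t / n_iter               # a decreases linearly 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a             # exploration/exploitation factor
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D        # candidate guided by this leader
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)

    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[fitness.argmin()], fitness.min()

# Toy stand-in for a hyperparameter-search objective (e.g., validation loss).
sphere = lambda x: float(np.sum((x - 0.5) ** 2))
best, val = gwo(sphere, bounds=([-5, -5, -5], [5, 5, 5]))
print(f"best={np.round(best, 3)}, objective={val:.2e}")
```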

Keywords: prediction analysis, e-commerce, machine learning, grey wolf optimization, particle swarm optimization, CNN

Procedia PDF Downloads 118
6981 Hybrid Artificial Bee Colony and Least Squares Method for Rule-Based Systems Learning

Authors: Ahcene Habbi, Yassine Boudouaoui

Abstract:

This paper deals with the problem of automatic rule generation for fuzzy system design. The proposed approach is based on hybrid artificial bee colony (ABC) optimization and the weighted least squares (LS) method, and it aims to find the structure and parameters of fuzzy systems simultaneously. More precisely, two ABC-based fuzzy modeling strategies are presented and compared. The first strategy uses global optimization to learn fuzzy models; the second hybridizes ABC with the weighted least squares estimation method. The performances of the proposed ABC and ABC-LS fuzzy modeling strategies are evaluated on complex modeling problems and compared to other advanced modeling methods.

Keywords: automatic design, learning, fuzzy rules, hybrid, swarm optimization

Procedia PDF Downloads 444
6980 Improvement of the Robust Proportional–Integral–Derivative (PID) Controller Parameters for Controlling the Frequency in the Intelligent Multi-Zone System in the Presence of Wind Generation Using the Seeker Optimization Algorithm

Authors: Roya Ahmadi Ahangar, Hamid Madadyari

Abstract:

The seeker optimization algorithm (SOA) is increasingly gaining popularity among the research community due to its effectiveness in solving some real-world optimization problems. This paper provides a load-frequency control method based on the SOA for removing oscillations in the power system. A three-zone power system, comprising a thermal zone, a hydraulic zone, and a wind zone, is equipped with robust proportional-integral-derivative (PID) controllers. The simulation results indicate that load-frequency changes in the wind zone of the multi-zone system are damped in a short period of time, and during the oscillation period the oscillation amplitude is not significant. The results emphasize that the PID controller designed using the seeker optimization algorithm is robust and performs better at damping oscillations than the traditional PID controller. The proposed controller's performance has been compared to that of PID controllers tuned with Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Artificial Bee Colony (ABC) algorithms in order to show the superior capability of the proposed SOA in tuning the PID controller. The simulation results confirm the better performance of the SOA-optimized PID controller compared to the PID controllers optimized with the PSO, GA, and ABC algorithms.
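
A minimal sketch of optimization-based PID tuning. A toy first-order plant and the ITAE criterion stand in for the multi-zone load-frequency model, and scipy's differential evolution stands in for the SOA, which is not available in common scientific libraries.

```python
import numpy as np
from scipy.optimize import differential_evolution

def itae_cost(gains, dt=0.01, t_end=10.0):
    """ITAE of the step response of a toy first-order plant under discrete PID.

    The plant dy/dt = -y + u is a stand-in for the load-frequency model; ITAE
    (integral of time-weighted absolute error) is a common tuning objective.
    """
    kp, ki, kd = gains
    n = int(t_end / dt)
    y, integ, prev_err, itae = 0.0, 0.0, 1.0, 0.0
    for k in range(n):
        err = 1.0 - y                        # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                   # explicit Euler step
        y = min(max(y, -1e6), 1e6)           # guard against divergent candidates
        prev_err = err
        itae += (k * dt) * abs(err) * dt
    return itae

bounds = [(0.0, 20.0), (0.0, 20.0), (0.0, 0.5)]   # kp, ki, kd search ranges
result = differential_evolution(itae_cost, bounds, seed=3, maxiter=60, tol=1e-8)
kp, ki, kd = result.x
print(f"tuned gains: kp={kp:.2f}, ki={ki:.2f}, kd={kd:.2f}, ITAE={result.fun:.4f}")
```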

Keywords: load-frequency control, multi zone, robust PID controller, wind generation

Procedia PDF Downloads 305
6979 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification, designing models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets, and the inherent nature of such data greatly hampers quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Feature selection is a key preprocessing step in machine learning that selects a subset of the available features, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of a given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection; features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different datasets to about 95%, and experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
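
A minimal sketch of a GA-driven wrapper, with KNN (named in the keywords) as the external classifier and feature frequency across good chromosomes as the selection signal. The GA operators, dataset, and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           n_redundant=4, random_state=0)
n_features = X.shape[1]

def fitness(mask):
    """Wrapper evaluation: cross-validated accuracy of KNN on the subset."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

# Simple generational GA over binary chromosomes (1 = keep the feature).
pop_size, n_gen, p_mut = 30, 25, 0.02
pop = rng.random((pop_size, n_features)) < 0.3
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    def pick():                                    # tournament selection
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if scores[i] >= scores[j] else pop[j]
    children = []
    for _ in range(pop_size):
        a, b = pick(), pick()
        cut = rng.integers(1, n_features)          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_features) < p_mut    # bit-flip mutation
        children.append(child)
    pop = np.array(children)

scores = np.array([fitness(ind) for ind in pop])
best = pop[scores.argmax()]
# Features occurring most often in good chromosomes are the stable selections.
frequency = pop[scores >= np.median(scores)].mean(axis=0)
print(f"best subset: {best.sum()} features, CV accuracy {scores.max():.3f}")
print(f"most frequently selected features: {np.argsort(frequency)[-5:][::-1]}")
```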

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 320
6978 Topology Optimization of the Interior Structures of Beams under Various Load and Support Conditions with Solid Isotropic Material with Penalization Method

Authors: Omer Oral, Y. Emre Yilmaz

Abstract:

Topology optimization is an approach that optimizes the material distribution within a given design space, for a given set of loads and boundary conditions, to meet specified performance goals. It uses various restrictions, such as boundary conditions, sets of loads, and constraints, to maximize the performance of the system. It differs from size and shape optimization methods but retains some features of both. In this study, the interior structures of parts were optimized using the SIMP (Solid Isotropic Material with Penalization) method. The volume of the part was a preassigned parameter, and minimum deflection was the objective function. The basic idea behind the theory is reviewed, and different methods are discussed. The Rhinoceros 3D design tool was used with the Grasshopper and TopOpt plugins to create and optimize parts. A Grasshopper algorithm was designed and tested for different beams, sets of arbitrarily located forces, and support types such as pinned, fixed, etc. Finally, 2.5D shapes were obtained and verified by observing the changes in the density function.
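
A toy sketch of the two core SIMP ingredients, the penalized stiffness interpolation E(rho) = Emin + rho^p (E0 - Emin) and the damped optimality-criteria density update, shown on a 1D bar rather than the paper's Rhinoceros/Grasshopper/TopOpt beam models. All numbers are illustrative.

```python
import numpy as np

# Toy 1D SIMP demo: a bar of n elements in series under a unit end load.
# Series springs give compliance C = sum(L / (E(rho_e) * A)); SIMP penalizes
# intermediate densities through the exponent p.
rng = np.random.default_rng(4)
n, p, volfrac, eta = 40, 3.0, 0.5, 0.3
E0, Emin, L, A = 1.0, 1e-9, 1.0, 1.0
rho = rng.uniform(0.2, 0.9, n)

for it in range(60):
    E = Emin + rho**p * (E0 - Emin)
    dC = -L * p * rho**(p - 1) * (E0 - Emin) / (A * E**2)   # dC/drho_e
    # Damped optimality-criteria update, with bisection on the Lagrange
    # multiplier enforcing the volume constraint sum(rho) = volfrac * n.
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-12 * hi:
        lam = 0.5 * (lo + hi)
        rho_new = np.clip(rho * (-dC / lam) ** eta, 1e-3, 1.0)
        if rho_new.sum() > volfrac * n:
            lo = lam
        else:
            hi = lam
    rho = rho_new

E = Emin + rho**p * (E0 - Emin)
# For this 1D bar the optimum is a uniform density, which the loop recovers;
# on a 2D/3D mesh the identical update carves out the interior structure.
print(f"compliance={np.sum(L / (E * A)):.2f}, "
      f"density range=({rho.min():.3f}, {rho.max():.3f})")
```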

Keywords: Grasshopper, lattice structure, microstructures, Rhinoceros, solid isotropic material with penalization method, TopOpt, topology optimization

Procedia PDF Downloads 141
6977 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and a performance evaluation is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
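
A minimal sketch of clustering-based gene selection: K-means on a synthetic expression matrix, silhouette score to pick the number of clusters, and the gene nearest each centroid kept as that cluster's representative. The representative-gene heuristic and the synthetic data are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, pairwise_distances_argmin_min

rng = np.random.default_rng(0)
# Synthetic expression matrix: 300 genes x 20 samples, three co-expressed groups.
groups = [rng.normal(mu, 1.0, size=(100, 20)) for mu in (-2.0, 0.0, 2.0)]
expression = np.vstack(groups)

# Pick the number of clusters by silhouette score.
best_k, best_s = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expression)
    s = silhouette_score(expression, labels)
    if s > best_s:
        best_k, best_s = k, s

km = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(expression)
# Gene selection: keep the gene nearest each centroid as the representative.
rep_idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, expression)
print(f"k={best_k} (silhouette {best_s:.2f}), representative genes: {rep_idx}")
```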

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 328
6976 Investigating the Glass Ceiling Phenomenon: An Empirical Study of Glass Ceiling's Effects on Selection, Promotion and Female Effectiveness

Authors: Sharjeel Saleem

Abstract:

The glass ceiling has been a burning issue for many researchers. In this research, we examine the gender composition of the board of directors (BOD), training and development, workforce diversity, positive attitudes towards women, and employee acts as antecedents of the glass ceiling. Furthermore, we examine the effects of the glass ceiling on the likelihood of female selection and promotion and on female effectiveness. Multiple linear regression analyses conducted on data drawn from different public and private sector organizations support our hypotheses. The research, however, is limited to Faisalabad city, and only females from minority groups are targeted here.

Keywords: glass ceiling, stereotype attitudes, female effectiveness

Procedia PDF Downloads 295
6975 The Impact of Illegal Firearms Possession, Limited Security Staff and Porosity of Border on Human Security in Ipokia Local Government Area, Ogun State

Authors: Ogunmefun Folorunsho Muyideen, Aluko Tolulope Evelyn

Abstract:

One of the growing menaces facing the world today is the porosity of borders and the proliferation of illegal weapons among citizens without state authorization. The proliferation of weapons along porous borders remains a germane and unresolved question for developed and developing nations alike, owing to the crises the menace generates (loss of lives and property, traumatization, civil unrest, and regressive economic development). A mixed-methods design was adopted, with a survey used for the selection of communities (Oke-Odan, Ajilete, Illaise, Lanlate) in Ipokia Local Government as the sample frame. Multi-stage sampling was employed to break the site down into wards, streets, and house numbers before questionnaires were randomly administered face to face, while purposive sampling was used to collect verbal information through in-depth interviews. The population of the study area is 150,398, from which a sample size of 399 was derived using Yamane's formula. Of the retrieved structured questionnaires, 346 were found usable, and 30 participants were additionally interviewed using the in-depth interview technique. The test of the first hypothesis shows a composite relationship between the variables tested: border porosity, illegal gun possession, and limited security staff jointly predispose residents of the selected study site to insecurity. The second hypothesis shows that illegal gun possession predicts business outcomes among residents of the study site, because sporadic gunfire depresses business activity in the area. The third shows that border porosity predicts the social bonding network, because a high level of insecurity erodes trust in communication among residents. Systematic content analysis of the final questions underpins the recommendations: of the 30 participants interviewed, 18 submitted that individual involvement in monitoring communities would solve the problem, 7 opined that government agents should be trained for effective combat, 3 submitted that the fight belongs to both government and citizens, and 2 claimed that there must be an agreement between Nigeria and neighbouring countries on border security. International donors must strictly control the sale of weapons to unauthorized persons, and criminal cases must be met with deterrence measures and target-hardening procedures such as decoying and blending, stakeouts, and sting tactics.

Keywords: human security, illegal weapons, porous borders, development

Procedia PDF Downloads 189
6974 Classification of Political Affiliations by Reduced Number of Features

Authors: Vesile Evrim, Aliyu Awwal

Abstract:

With the evolution of technology, the expression of opinions has shifted to the digital world. The domain of politics, one of the hottest topics in opinion mining research, merges with behavior analysis for affiliation determination in text, which constitutes the subject of this paper. This study aims to classify text from news/blogs as either Republican or Democrat with a minimum number of features. As an initial set, 68 features, 64 of which are Linguistic Inquiry and Word Count (LIWC) features, are tested against 14 benchmark classification algorithms. In later experiments, the dimensionality of the feature vector is reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction, and M5 Rule classifiers, when used with the SVM and IGR feature selection algorithms, performed best, with up to 82.5% accuracy on a given dataset. Further tests on a single feature and on linguistic-based feature sets showed similar results. The feature “function”, an aggregate feature of the linguistic category, is obtained as the most differentiating feature among the 68, achieving 81% accuracy by itself in classifying articles as either Republican or Democrat.
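
A minimal sketch of the filter-then-classify setup described above, using mutual information as an information-gain-style filter (akin to IGR) feeding a decision tree. The synthetic 68-dimensional data stands in for the LIWC feature vectors, and the specific filter is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the 68-dimensional LIWC-style feature vectors; labels
# play the role of Republican (0) vs. Democrat (1) articles.
X, y = make_classification(n_samples=500, n_features=68, n_informative=10,
                           random_state=0)

# Shrinking the feature set while watching cross-validated accuracy mirrors
# the paper's reduced-feature experiments (down to a single feature).
for k in (68, 10, 1):
    pipe = Pipeline([
        ("select", SelectKBest(mutual_info_classif, k=k)),
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ])
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{k:2d} features: accuracy = {acc:.3f}")
```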

Keywords: feature selection, LIWC, machine learning, politics

Procedia PDF Downloads 384
6973 Optimal Portfolio Selection under Treynor Ratio Using Genetic Algorithms

Authors: Imad Zeyad Ramadan

Abstract:

In this paper, a genetic algorithm was developed to construct the optimal portfolio based on the Treynor method. The GA maximizes the Treynor ratio under a budget constraint to select the best allocation of the budget among the companies in the portfolio. The results show that the GA was able to construct a conservative portfolio that includes companies from all three sectors. This indicates that the GA reduced the investor's risk, as it chose some companies with positive systematic risk (moving with the market) and some with negative systematic risk (moving against the market).
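
A minimal sketch of a GA maximizing the Treynor ratio, (portfolio return minus risk-free rate) divided by portfolio beta, under a long-only budget constraint. The company returns and betas are hypothetical, and the minimum-beta guard that keeps the ratio well-defined is an added assumption, not part of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical inputs for six companies drawn from three sectors.
returns = np.array([0.12, 0.09, 0.15, 0.07, 0.11, 0.05])   # expected returns
betas = np.array([1.20, 0.80, 1.50, -0.30, 0.95, -0.10])   # market betas
risk_free = 0.03

def treynor(w):
    """Treynor ratio: portfolio excess return per unit of systematic risk."""
    beta_p = w @ betas
    if beta_p < 0.2:            # keep the ratio well-defined and comparable
        return -np.inf
    return (w @ returns - risk_free) / beta_p

def normalize(w):
    w = np.abs(w)
    return w / w.sum()          # long-only weights that spend the whole budget

# Plain GA: tournament selection, arithmetic crossover, Gaussian mutation, elitism.
pop = np.array([normalize(rng.random(len(returns))) for _ in range(60)])
for gen in range(150):
    fit = np.array([treynor(w) for w in pop])
    new_pop = [pop[fit.argmax()]]                        # keep the best portfolio
    while len(new_pop) < len(pop):
        i, j, k, l = rng.integers(len(pop), size=4)
        pa = pop[i] if fit[i] > fit[j] else pop[j]
        pb = pop[k] if fit[k] > fit[l] else pop[l]
        alpha = rng.random()
        child = alpha * pa + (1.0 - alpha) * pb          # arithmetic crossover
        child += rng.normal(0.0, 0.02, size=child.size)  # Gaussian mutation
        new_pop.append(normalize(child))
    pop = np.array(new_pop)

fit = np.array([treynor(w) for w in pop])
best = pop[fit.argmax()]
print(f"weights={np.round(best, 3)}, Treynor ratio={fit.max():.3f}")
```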

Keywords: optimization, genetic algorithm, portfolio selection, Treynor method

Procedia PDF Downloads 453
6972 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

Significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height at the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed by a multilayer perceptron neural network-genetic algorithm (GA-MLP), with ReLU as the activation function of the MLPNN. The GA optimizes the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and performs wrapper feature selection for the window width. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations to optimize the 5-step-ahead prediction, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE, with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with the ARIMA forecasting model and performed better on all criteria, validating the potential of this algorithm.

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 112
6971 A Robust Optimization for Multi-Period Lost-Sales Inventory Control Problem

Authors: Shunichi Ohmori, Sirawadee Arunyanart, Kazuho Yoshimoto

Abstract:

We consider a periodic-review inventory control problem of minimizing production cost, inventory cost, and lost sales under demand uncertainty, in which product demands are not specified exactly but are only known to belong to a given uncertainty set, and the constraints must hold for all possible values of the data from that set. We propose a robust optimization formulation for obtaining the lowest possible cost while guaranteeing feasibility with respect to the range of order quantities and inventory levels under demand uncertainty. Our formulation is based on the adaptive robust counterpart, which supposes that the order quantity is an affine function of past demands. We derive the certainty-equivalent problem via second-order cone programming, which gives a 'not too pessimistic' worst case.

Keywords: robust optimization, inventory control, supply chain management, second-order cone programming

Procedia PDF Downloads 413
6970 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training

Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto

Abstract:

In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer technology and network technology, to civil engineering and construction sites has been accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on their years of experience, and it is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed while suspended from a crane, so training inexperienced workers on-site would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the ideal final shape of the structure. Using this porosity evaluation, the simulator can determine how well the user has installed the blocks. The voxelization technique is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization solutions and compare the user-demonstrated block installation with the appropriate installation found by the algorithm.
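
A minimal numpy sketch of the voxelized porosity calculation, outside the Unity engine. The envelope shape, the sphere approximation of the blocks, and the random placements are illustrative assumptions; in the simulator the block geometry and positions come from the user's crane operations.

```python
import numpy as np

# Voxelize the design volume and the placed blocks, then measure how much of
# the ideal structure envelope the blocks actually fill.
GRID = 64
axes = np.linspace(0.0, 10.0, GRID)
x, y, z = np.meshgrid(axes, axes, axes, indexing="ij")

# Ideal structure: a sloped mound cross-section (placeholder design shape).
ideal = z <= (5.0 - 0.4 * np.abs(x - 5.0))

# Placed blocks approximated as spheres standing in for wave-dissipating blocks.
rng = np.random.default_rng(2)
centers = rng.uniform([1, 1, 0], [9, 9, 4], size=(60, 3))
occupied = np.zeros_like(ideal)
for c in centers:
    occupied |= (x - c[0])**2 + (y - c[1])**2 + (z - c[2])**2 <= 0.8**2

fill_ratio = (occupied & ideal).sum() / ideal.sum()
print(f"block volume inside the ideal envelope: {fill_ratio:.1%}")
print(f"void fraction of the envelope:          {1.0 - fill_ratio:.1%}")
```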

Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks

Procedia PDF Downloads 108
6969 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding patterns (sets of items, subsequences, substructures, etc.) that occur frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining, and it is used to find inherent regularities in data: what products were often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the time and resources required to mine them increase at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER can produce a speed-up of 3.1 times for frequent itemset mining compared to the original algorithm while maintaining an accuracy of 71%.

Keywords: rough-sets, classification, feature selection, entropy, outliers, frequent itemset mining

Procedia PDF Downloads 441
6968 Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects

Authors: O. Badagadze, G. Sirbiladze, I. Khutsishvili

Abstract:

This work proposes a decision support methodology for credit risk minimization in the selection of investment projects. The methodology provides two stages of project evaluation. A preliminary selection of projects with minor credit risks is made using the Expertons Method, and a second stage ranks the chosen projects using the Possibilistic Discrimination Analysis Method. The latter is a new modification of the well-known method of fuzzy discrimination analysis.

Keywords: expert valuations, expertons, investment project risks, positive and negative discriminations, possibility distribution

Procedia PDF Downloads 680
6967 Differences in Motivations for the Use of Facebook between Males and Females

Authors: Arti Bakhshi, Remia Mahajan

Abstract:

Social networking sites have evolved at a great pace, and India has been no exception. Facebook is the top-rated social networking site (SNS) in India. Though the site is mostly used by younger generations, its popularity is increasing among all masses and classes. The current paper explores gender differences in motivations for the use of Facebook. Of the sample (N=556), 229 male and 327 female Facebook users from India were asked to rate their motivations for using Facebook from ‘most preferred’ to ‘least preferred’. The five motivations studied were time passing, information, relationship development, relationship maintenance, and trend following. Cross-tab chi-square analyses revealed significant differences between male and female Facebook users in three of the five motivations: time passing, relationship development, and trend following. Female users rated ‘time passing’ as a more preferred motivation than male users did, while male users rated the ‘relationship development’ and ‘trend following’ motivations as more preferred than female users did. Suggestions for future research are discussed.

Keywords: facebook, gender, motivations, social networking sites

Procedia PDF Downloads 474
6966 Adapting the Tweeting Factory Concept for Universal Production Optimization in Industry 5.0

Authors: Sławomir Lasota, Tomasz Kajdanowicz

Abstract:

This paper delves into adapting the Tweeting Factory paradigm to achieve universal production optimization under the Industry 5.0 framework. The proposed system creates a dynamic decision-making environment by collecting and analyzing structured telemetry data (”tweets”) from production lines. A hybrid recommendation engine combines rule-based systems with machine learning models to enhance real-time responsiveness and operator engagement. The research evaluates the system’s ability to optimize diverse industrial processes through predictive KPIs and real-time feedback loops. Results indicate significant advancements in eco-efficiency and operator productivity, showcasing the versatility of the Tweeting Factory approach in meeting the demands of human-centric and sustainable production.

Keywords: tweeting factory, production optimization, industry 5.0, recommendation

Procedia PDF Downloads 10
6965 Traffic Signal Control Using Citizens’ Knowledge through the Wisdom of the Crowd

Authors: Aleksandar Jovanovic, Katarina Kukic, Ana Uzelac, Dusan Teodorovic

Abstract:

Wisdom of the Crowd (WoC) is a decentralized method that uses the collective intelligence of humans. Individual guesses may be far from the target, but when aggregated as a group, they converge on optimal solutions for a given problem. We utilize WoC to address the challenge of controlling traffic lights at intersections on the streets of Kragujevac, Serbia. The problem at hand falls within the category of NP-hard problems. We employ an algorithm that leverages the swarm intelligence of bees, Bee Colony Optimization (BCO). Data regarding traffic signal timing at a single intersection were gathered from citizens through a survey, and the results obtained in that manner are compared to the BCO results for different traffic scenarios. We use the Vissim traffic simulation software as a tool to compare the performance of the bees' and humans' collective intelligence.
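
A minimal sketch of the WoC aggregation step on a toy two-phase intersection. The survey guesses are simulated, the evaluation uses the Webster-style uniform-delay term rather than Vissim, and all traffic parameters are hypothetical; the point is only that the median of noisy individual guesses lands near the optimum found by exhaustive search.

```python
import numpy as np

# Toy two-phase intersection: a fixed 90 s cycle split between phases A and B.
CYCLE = 90.0
arrivals = np.array([0.30, 0.15])     # veh/s arriving on each approach
saturation = 0.5                      # veh/s discharged during green

def avg_delay(green_a):
    """Webster-style uniform delay (s/veh), averaged over both approaches."""
    greens = np.array([green_a, CYCLE - green_a])
    delays = []
    for lam, g in zip(arrivals, greens):
        x = min(lam * CYCLE / (saturation * g), 0.98)   # degree of saturation
        d = 0.5 * CYCLE * (1 - g / CYCLE) ** 2 / (1 - (g / CYCLE) * x)
        delays.append(d * lam)
    return sum(delays) / arrivals.sum()

# Wisdom of the Crowd: individual guesses of the phase-A green time are noisy,
# but their median lands near the optimum (hypothetical survey responses).
rng = np.random.default_rng(11)
guesses = rng.normal(55.0, 12.0, size=200).clip(15.0, 75.0)
crowd_green = np.median(guesses)

best_green = min(np.arange(15.0, 75.5, 0.5), key=avg_delay)
print(f"crowd median: {crowd_green:.1f}s -> delay {avg_delay(crowd_green):.1f}s/veh")
print(f"brute force:  {best_green:.1f}s -> delay {avg_delay(best_green):.1f}s/veh")
```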

Keywords: wisdom of the crowd, traffic signal control, combinatorial optimization, bee colony optimization

Procedia PDF Downloads 113