Search results for: Discrete Cuckoo Optimization Algorithm (DCOA)

4349 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University

Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat

Abstract:

Scientific researchers face the analysis of very large data sets, which are growing at a noticeable rate in today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this purpose, and the Hadoop framework is suitable for many different hardware platforms. In this research, a scientific Linux cluster for Big Data analysis (SLBD) is presented. SLBD runs open source software with large computational capacity on a high performance cluster infrastructure. SLBD is composed of one cluster containing identical, commodity-grade computers interconnected via a small LAN. SLBD consists of a fast switch and Gigabit-Ethernet cards connecting four nodes. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster by using the MapReduce algorithm. The MapReduce algorithm divides the task into smaller tasks, which are assigned to the network nodes; the algorithm then collects the partial results and forms the final result dataset. The SLBD clustering system allows fast and efficient processing of the large amounts of data resulting from different applications. SLBD also provides high performance, high throughput, high availability, expandability, and cluster scalability.
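
As a hedged illustration of the divide-and-collect pattern described above (not the SLBD deployment itself), the following minimal Python sketch emulates MapReduce's map, shuffle, and reduce phases on a single machine; the function names and sample data are illustrative assumptions.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit (word, 1) for every word in the input split
    return [(word, 1) for word in document.split()]

def shuffle_phase(mapped_pairs):
    # Shuffle: group intermediate values by key, as Hadoop does between phases
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values into the final result dataset
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    splits = ["big data analysis", "big data cluster", "cluster analysis"]
    mapped = [pair for split in splits for pair in map_phase(split)]
    result = reduce_phase(shuffle_phase(mapped))
    print(result)  # e.g. {'big': 2, 'data': 2, 'analysis': 2, 'cluster': 2}
```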

Keywords: big data platforms, cloudera manager, Hadoop, MapReduce

Procedia PDF Downloads 335
4348 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors

Authors: Chen Liu, Dawit Negussey

Abstract:

EPS (Expanded Polystyrene) geofoam, a light-weight material used in geotechnical applications, is made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates. Applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points at low and large strain levels for different densities of geofoam. Pressure patterns were examined for fine and coarse interface material textures as well as for molding-skin and hot-wire-cut geofoam surfaces. The lab testing showed that I-Scan tactile sensors are useful for detailed observation of contact pressures at a large number of discrete points simultaneously. At a low strain level (1%), the lower density EPS block presents smaller variations in localized stress distribution than the higher density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing. The pressure sensing system can be used in many fields requiring real-time pressure detection. The research findings provide a better understanding of EPS geofoam behavior for the improvement of design methods and performance prediction, and are anticipated to guide future improvements in the design and rapid construction of critical transportation infrastructures with geofoam in geotechnical applications.

Keywords: geofoam, pressure distribution, tactile pressure sensors, interface

Procedia PDF Downloads 150
4347 A Method for Solving a Bi-Objective Transportation Problem under Fuzzy Environment

Authors: Sukhveer Singh, Sandeep Singh

Abstract:

A bi-objective fuzzy transportation problem, with the objectives of minimizing the total fuzzy cost and the fuzzy time of transportation without assigning priorities to them, is considered. To the best of our knowledge, there is no method in the literature to find efficient solutions of the bi-objective transportation problem under uncertainty. In this paper, a bi-objective transportation problem in an uncertain environment is formulated, and an algorithm is proposed to find its efficient solutions. The proposed algorithm avoids degeneracy and gives the optimal solution faster than other existing algorithms for the given uncertain transportation problem.
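
For orientation only, the sketch below solves a crisp (non-fuzzy) bi-objective transportation instance by scalarizing cost and time with a weight, using scipy.optimize.linprog; it illustrates the problem structure, not the paper's ranking-function method, and all data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# 2 sources x 3 destinations; per-unit cost and time matrices (illustrative data)
cost = np.array([[4.0, 6.0, 9.0], [5.0, 3.0, 7.0]])
time = np.array([[2.0, 4.0, 1.0], [3.0, 2.0, 5.0]])
supply = [30.0, 50.0]
demand = [20.0, 40.0, 20.0]

def solve(weight):
    # Weighted-sum scalarization: minimize weight*cost + (1 - weight)*time
    c = (weight * cost + (1.0 - weight) * time).ravel()
    m, n = cost.shape
    A_eq, b_eq = [], []
    for i in range(m):            # each source ships exactly its supply
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):            # each destination receives exactly its demand
        row = np.zeros(m * n); row[j::n] = 1.0
        A_eq.append(row); b_eq.append(demand[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(m, n), cost.ravel() @ res.x, time.ravel() @ res.x

# Sweeping the weight traces out candidate efficient (cost, time) trade-offs
for w in (0.2, 0.5, 0.8):
    _, total_cost, total_time = solve(w)
    print(f"w={w}: cost={total_cost:.1f}, time={total_time:.1f}")
```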

Keywords: uncertain transportation problem, efficient solution, ranking function, fuzzy transportation problem

Procedia PDF Downloads 494
4346 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as uniform reads distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide a bias correction step(s), which is based on biological considerations, such as GC content–and applied in single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived based on the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed XAEM utilizes quasi-mapping for read alignment, thus leading to a fast algorithm. Overall XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
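
As a minimal sketch of the alternation idea only (simple alternating least squares on synthetic data, not the actual XAEM algorithm with its EM treatment of read counts), the following Python fragment estimates both factors of a bilinear model Y ≈ Xβ from multi-sample data; all dimensions and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate multi-sample data: Y (reads x samples) = X (reads x isoforms) @ beta
n_obs, n_iso, n_samples = 60, 3, 20
X_true = np.abs(rng.normal(size=(n_obs, n_iso)))
beta_true = np.abs(rng.normal(size=(n_iso, n_samples)))
Y = X_true @ beta_true + 0.01 * rng.normal(size=(n_obs, n_samples))

# Alternating estimation: fix one factor, solve least squares for the other
X = np.abs(rng.normal(size=(n_obs, n_iso)))        # random start
for _ in range(200):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)       # update beta given X
    beta = np.clip(beta, 0, None)                      # keep expressions nonnegative
    Xt, *_ = np.linalg.lstsq(beta.T, Y.T, rcond=None)  # update X given beta
    X = np.clip(Xt.T, 0, None)
    X /= X.sum(axis=0, keepdims=True) + 1e-12          # fix the scale ambiguity

print("relative residual:", np.linalg.norm(Y - X @ beta) / np.linalg.norm(Y))
```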

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 124
4345 MapReduce Algorithm for Geometric and Topological Information Extraction from 3D CAD Models

Authors: Ahmed Fradi

Abstract:

In a digital world in perpetual evolution and acceleration, where data grow ever more voluminous, rich, and varied, the new software solutions that emerged with the Big Data phenomenon offer companies new opportunities: not only to optimize their business and evolve their production model, but also to reorganize themselves to increase competitiveness and identify new strategic axes. Industrial design and manufacturing companies, like others, face these challenges; data represent a major asset, provided that they know how to capture, refine, combine, and analyze them. The objective of this paper is to propose a solution for extracting geometric and topological information from databases of 3D CAD models (specifically STEP files), using an algorithm based on the MapReduce programming paradigm. Our proposal is the first step of our future approach to 3D CAD object retrieval.
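
To make the MapReduce idea concrete for STEP data, here is a hedged single-machine sketch that counts entity types in ISO 10303-21 data lines with a map and a reduce function; the sample lines and function names are illustrative assumptions, not the paper's implementation.

```python
import re
from collections import defaultdict

# A few data lines in ISO 10303-21 (STEP) syntax, for illustration
step_lines = [
    "#10=CARTESIAN_POINT('',(0.,0.,0.));",
    "#11=CARTESIAN_POINT('',(1.,0.,0.));",
    "#12=EDGE_CURVE('',#20,#21,#30,.T.);",
    "#13=ADVANCED_FACE('',(#40),#50,.F.);",
]

ENTITY = re.compile(r"#\d+\s*=\s*([A-Z0-9_]+)\s*\(")

def mapper(line):
    # Map: emit (entity_type, 1) for each instanced STEP entity
    match = ENTITY.match(line)
    return [(match.group(1), 1)] if match else []

def reducer(pairs):
    # Reduce: total occurrences per geometric/topological entity type
    counts = defaultdict(int)
    for entity_type, one in pairs:
        counts[entity_type] += one
    return dict(counts)

pairs = [p for line in step_lines for p in mapper(line)]
print(reducer(pairs))  # {'CARTESIAN_POINT': 2, 'EDGE_CURVE': 1, 'ADVANCED_FACE': 1}
```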

Keywords: Big Data, MapReduce, 3D object retrieval, CAD, STEP format

Procedia PDF Downloads 518
4344 Intrusion Detection in Computer Networks Using a Hybrid Model of Firefly and Differential Evolution Algorithms

Authors: Mohammad Besharatloo

Abstract:

Intrusion detection is an important research topic in network security because of the growing use of computer network services. Intrusion detection aims to detect unauthorized use or abuse of networks and systems by intruders; the intrusion detection system is therefore an efficient tool to control user access through predefined regulations. Since the data used in intrusion detection systems have high dimension, a proper representation is required to reveal the basic structure of the data, and it is necessary to eliminate redundant features to create the best representation subset. In the proposed method, a hybrid model of the differential evolution and firefly algorithms is employed to choose the best subset of features, while a decision tree and a support vector machine (SVM) are adopted to evaluate the quality of the selected features. First, the sorted population is divided into two sub-populations, and the two optimization algorithms are applied to these sub-populations, respectively; the sub-populations are then merged to create the population for the next iteration. The performance of the proposed method is evaluated on KDD Cup99. The simulation results show that the proposed method outperforms the other methods in this context.
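
A minimal sketch of the split-population hybrid described above, assuming synthetic data in place of KDD Cup99, a decision tree as the wrapper evaluator, and textbook DE and firefly update rules with invented constants; it shows the structure (sort, split, evolve each half, merge), not the authors' exact scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=1)

def fitness(position):
    mask = position > 0.5                 # continuous position -> feature subset
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=1)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1]))
F, CR = 0.5, 0.9                          # DE mutation and crossover rates
beta0, gamma, alpha = 1.0, 1.0, 0.1       # firefly attraction and noise terms

for _ in range(15):
    scores = np.array([fitness(p) for p in pop])
    order = np.argsort(-scores)           # sort population by fitness
    pop, scores = pop[order], scores[order]
    half = len(pop) // 2
    # Sub-population 1: DE/rand/1/bin mutation with greedy selection
    for i in range(half):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(np.where(rng.random(pop.shape[1]) < CR,
                                 a + F * (b - c), pop[i]), 0, 1)
        f_trial = fitness(trial)
        if f_trial > scores[i]:
            pop[i], scores[i] = trial, f_trial
    # Sub-population 2: firefly moves toward brighter (fitter) individuals
    for i in range(half, len(pop)):
        for j in range(len(pop)):
            if scores[j] > scores[i]:
                attract = beta0 * np.exp(-gamma * np.sum((pop[i] - pop[j]) ** 2))
                pop[i] = np.clip(pop[i] + attract * (pop[j] - pop[i])
                                 + alpha * (rng.random(pop.shape[1]) - 0.5), 0, 1)
    # The two halves are merged implicitly as the next-iteration population

best = pop[np.argmax([fitness(p) for p in pop])]
print("selected features:", np.flatnonzero(best > 0.5))
```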

Keywords: intrusion detection system, differential evolution, firefly algorithm, support vector machine, decision tree

Procedia PDF Downloads 63
4343 A Laundry Algorithm for Colored Textiles

Authors: H. E. Budak, B. Arslan-Ilkiz, N. Cakmakci, I. Gocek, U. K. Sahin, H. Acikgoz-Tufan, M. H. Arslan

Abstract:

The aim of this study is to design a novel laundry algorithm for colored textiles which have a significant decoloring problem. During the experimental work, bleached knitted single jersey fabric made of 100% cotton and dyed with reactive dyestuff was utilized, since, according to a conducted survey, textiles made of cotton are the textile products most demanded by consumers, and reactive dyestuffs are the ones most commonly used in the textile industry for dyeing cotton-made products. The fabric used in this study was therefore selected and purchased in accordance with the survey results. Fabric samples cut from this fabric were dyed with different dyeing parameters using Remazol Brilliant Red 3BS dyestuff in a Gyrowash machine at laboratory conditions. From the alternative reactive-dyed cotton fabric samples, those with a high tendency to color loss were determined and examined. Accordingly, the parameters of the dyeing processes used for these samples were evaluated, and the dyeing process chosen to induce a high tendency to color loss in the cotton fabrics was determined, in order to reveal clearly the level of improvement in color loss achieved in this study. Afterwards, all the untreated fabric samples were dyed with the selected dyeing process. When dyeing was completed, an experimental design for the laundering process was created with the Minitab® program, considering temperature, time, and mechanical action as parameters. All washing experiments were performed in a domestic washing machine: 16 washing experiments with 8 different experimental conditions and 2 repeats for each condition. After each washing experiment, water samples from the main wash of the laundering process were measured with a UV spectrophotometer, and the values obtained were compared with the calibration curve of the materials used for dyeing. The results of the washing experiments were statistically analyzed with the Minitab® program. According to the results, the most suitable washing algorithm for domestic washing machines, in terms of temperature, time, and mechanical action, for minimizing fabric color loss was chosen. The laundry algorithm proposed in this study minimizes the color loss of colored textiles in washing machines by eliminating the negative effects of the laundering parameters on color, without compromising the fundamental cleaning action. Since fabric color loss is minimized with this washing algorithm, dyestuff residuals will be lower in the grey water released from the laundering process. In addition, with this laundry algorithm it is possible to wash other types of textile products with a proper cleaning effect and minimized color loss.
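
The 8-condition, 2-replicate layout above is a 2³ full factorial design; the short sketch below enumerates it in Python with assumed factor levels (the study's actual temperature, time, and mechanical-action settings are not given in the abstract).

```python
from itertools import product

# Two illustrative levels per factor (assumed values, not the study's settings)
temperature = [30, 60]          # wash temperature, degrees C
time_min = [30, 90]             # cycle time, minutes
mechanical = ["gentle", "normal"]

runs = []
for replicate in (1, 2):                            # 2 repeats per condition
    for temp, t, mech in product(temperature, time_min, mechanical):
        runs.append((replicate, temp, t, mech))

# 8 conditions x 2 replicates = 16 washing experiments, as in the study
print(len(runs))
for run in runs[:4]:
    print(run)
```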

Keywords: color loss, laundry algorithm, textiles, domestic washing process

Procedia PDF Downloads 327
4342 Monocular Visual Odometry for Three Different View Angles by Intel Realsense T265 with the Measurement of Remote

Authors: Heru Syah Putra, Aji Tri Pamungkas Nurcahyo, Chuang-Jan Chang

Abstract:

The MOIL-SDK method refers to the spatial angles that form views with different perspectives from a fisheye image. Visual odometry is a trusted approach for extended projects, tracking the camera over image sequences. We present a real-time, precise, and persistent approach that takes datasets and generates ground truth as a reference for the estimates of each image. The FAST algorithm is used to find keypoints, which are evaluated during the tracking process with the five-point algorithm and RANSAC, producing accurate estimates of the camera trajectory for the rotational and translational movements along the X, Y, and Z axes.
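
A hedged OpenCV sketch of one monocular visual odometry step using the components named above (FAST keypoints, then the five-point algorithm inside RANSAC); the image file names and the intrinsics matrix K are placeholder assumptions, and this is not the MOIL-SDK pipeline itself.

```python
import numpy as np
import cv2

# Assumed inputs: two consecutive grayscale frames and camera intrinsics K
frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[286.0, 0.0, 320.0],
              [0.0, 286.0, 240.0],
              [0.0, 0.0, 1.0]])   # placeholder intrinsics

# 1) FAST keypoints in the first frame
fast = cv2.FastFeatureDetector_create(threshold=25)
kps = fast.detect(frame1, None)
pts1 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

# 2) Track them into the second frame with pyramidal Lucas-Kanade flow
pts2, status, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, pts1, None)
good1, good2 = pts1[status.ravel() == 1], pts2[status.ravel() == 1]

# 3) Five-point algorithm inside RANSAC to estimate the essential matrix
E, inliers = cv2.findEssentialMat(good1, good2, K,
                                  method=cv2.RANSAC, prob=0.999, threshold=1.0)

# 4) Decompose E into rotation R and (unit-scale) translation direction t
_, R, t, _ = cv2.recoverPose(E, good1, good2, K)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```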

Keywords: MOIL-SDK, intel realsense T265, Fisheye image, monocular visual odometry

Procedia PDF Downloads 109
4341 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make the pharmacovigilance monitoring process tedious and time-consuming. Objective: Develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor's safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy's law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours spent on a cognitively laborious, manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department. Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). This algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
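
For readers unfamiliar with the screening constructs named above, the sketch below computes the conventional R-value and a standard Hy's law flag as they are usually defined in the DILI literature; these thresholds and laboratory values are generic conventions, not the sponsor's exact SQL criteria.

```python
def r_value(alt, alt_uln, alp, alp_uln):
    # Pattern-of-injury R-value: (ALT/ULN) divided by (ALP/ULN)
    return (alt / alt_uln) / (alp / alp_uln)

def injury_pattern(r):
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

def meets_hys_law(alt, alt_uln, bili, bili_uln, alp, alp_uln):
    # Conventional screen: ALT >= 3x ULN and total bilirubin >= 2x ULN,
    # without marked ALP elevation (ALP < 2x ULN)
    return alt >= 3 * alt_uln and bili >= 2 * bili_uln and alp < 2 * alp_uln

# Example case against reference ranges (illustrative values only)
r = r_value(alt=240, alt_uln=40, alp=100, alp_uln=120)
print(injury_pattern(r), meets_hys_law(240, 40, 3.1, 1.2, 100, 120))
```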

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 126
4340 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to a joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
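
As background for the greedy recovery family JBMOLS belongs to, here is a sketch of standard orthogonal matching pursuit with a Bernoulli measurement matrix (a simpler relative of the proposed algorithm, not JBMOLS itself); the dimensions and signal are synthetic assumptions.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Standard orthogonal matching pursuit: recover sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # Bernoulli matrix
y = Phi @ x_true                        # compressed measurements

x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```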

Keywords: telemedicine, fetal ECG, compressed sensing, joint sparse reconstruction, block sparse signal

Procedia PDF Downloads 104
4339 Design and Implementation of DC-DC Converter with Inc-Cond Algorithm

Authors: Mustafa Engin Başoğlu, Bekir Çakır

Abstract:

Solar panels are the components that most affect the efficiency of photovoltaic power systems; because panel efficiency is low, the efficiency of the overall system is significantly affected. Therefore, solar panels should be operated at the maximum power point through a power converter. In this study, a boost converter with maximum power point tracking (MPPT) has been designed and implemented using the incremental conductance (Inc-Cond) algorithm with direct duty control. Furthermore, it is shown that the performance of the boost converter with MPPT operation degrades when a low load resistance is connected.
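
The textbook Inc-Cond decision rule compares dI/dV with -I/V (at the maximum power point, dP/dV = 0, hence dI/dV = -I/V). The sketch below implements one update step with direct duty control under the usual assumption that, for a boost converter, raising the duty cycle lowers the panel-side voltage; step size and duty limits are illustrative.

```python
def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One incremental-conductance MPPT update with direct duty control.

    At the MPP, dP/dV = 0, i.e. dI/dV = -I/V; the sign of (dI/dV + I/V)
    tells us on which side of the MPP the panel is operating.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:            # irradiance increased: move toward higher voltage
            duty -= step
        elif di < 0:
            duty += step
    else:
        slope = di / dv
        if slope > -i / v:    # left of the MPP: raise the panel voltage
            duty -= step
        elif slope < -i / v:  # right of the MPP: lower the panel voltage
            duty += step
        # slope == -i/v: at the MPP, hold the duty cycle
    return min(max(duty, 0.05), 0.95)   # keep duty within safe bounds

# Called once per sampling period with measured panel voltage and current:
# duty = inc_cond_step(v_meas, i_meas, v_last, i_last, duty)
```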

Keywords: boost converter, incremental conductance (Inc-Cond), MPPT, solar panel

Procedia PDF Downloads 1020
4338 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

As a branch of modern mathematics, graphs are instruments for optimization and for solving practical applications in various fields, such as economic networks, engineering, network optimization, the geometry of social action, and, generally, complex systems, including contemporary urban problems (path and transport efficiency, biourbanism, etc.). In this paper, the interconnection of urban networks is studied, which can lead to the problem of simulating one digraph by another; the simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulating digraph, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs proposed for the first time in this paper improves the evaluation of k-connectivity significantly. This approach is based on generalized rough sets, whose basic facts are presented in the paper.
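
To fix ideas, generalized rough sets replace equivalence classes with neighborhoods induced by an arbitrary binary relation; on a digraph, the successor set plays this role. The sketch below computes the standard relation-based lower and upper approximations for a toy network; it illustrates the generalized rough set machinery, not the paper's specific (bi)digraph construction.

```python
def successors(edges, x):
    # Neighborhood of x under the digraph's arc relation R
    return {v for (u, v) in edges if u == x}

def lower_approx(nodes, edges, target):
    # x is in the lower approximation iff all its successors lie in target
    return {x for x in nodes if successors(edges, x) <= target}

def upper_approx(nodes, edges, target):
    # x is in the upper approximation iff some successor lies in target
    return {x for x in nodes if successors(edges, x) & target}

nodes = {1, 2, 3, 4, 5}
edges = {(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 3)}
A = {3, 4}   # a set of nodes of interest, e.g. a congested sub-network
print(lower_approx(nodes, edges, A))  # certainly inside with respect to R
print(upper_approx(nodes, edges, A))  # possibly inside with respect to R
```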

Keywords: (bi)digraphs, rough set theory, systems of interacting agents, complex systems

Procedia PDF Downloads 217
4337 Stochastic Simulation of Random Numbers Using Linear Congruential Method

Authors: Melvin Ballera, Aldrich Olivar, Mary Soriano

Abstract:

Digital computers nowadays must have a utility capable of generating random numbers. Usually, computer-generated random numbers are not truly random: given predefined values such as a starting point, the sequence is almost predictable. Random numbers have many applications, such as business simulation, manufacturing, the services domain, and the entertainment sector, among other areas, making it worthwhile to design a method that yields unpredictable random numbers. Applying stochastic simulation using the linear congruential algorithm shows that as the seed and range increase, the numbers randomly produced or selected by the computer become unique. If this is implemented in an environment where random numbers are very much needed, the reliability of the random numbers is guaranteed.
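
The linear congruential generator follows the recurrence x_{n+1} = (a*x_n + c) mod m. A minimal sketch, using the classic glibc-style constants for illustration (the paper's parameters may differ):

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]
print(sample)                        # deterministic sequence for a given seed
print([v / 2**31 for v in sample])   # scaled to pseudo-uniform values in [0, 1)
```

The same seed always reproduces the same sequence, which is exactly the pseudorandomness the abstract describes: usable for simulation, but predictable once the seed and constants are known.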

Keywords: stochastic simulation, random numbers, linear congruential algorithm, pseudorandomness

Procedia PDF Downloads 289
4336 Shape Optimization of a Hole for Water Jetting in a Spudcan for a Jack-Up Rig

Authors: Han Ik Park, Jeong Hyeon Seong, Dong Seop Han, Su-Chul Shin, Young Chul Park

Abstract:

A spudcan is mounted on the lower leg of a jack-up rig; it is a device for preventing rollover of the structure and for supporting the structure on a stable sea floor. To ease penetration through the sand layer into the clay layer during installation, and to ease extraction during recovery, the spudcan is equipped with a water injection device. In this study, the shape of the pipeline holes for the water injection device is optimized, considering two hole shapes, oval and round. The site of the Gulf of Mexico offshore Wind Turbine Installation Vessels (WTIV) was chosen as the target platform. The optimal design was conducted using the ANSYS Workbench commercial program. The results of this study can be applied to the hole-shape design of various marine structures.

Keywords: kriging method, jack-up rig, shape optimization, spudcan

Procedia PDF Downloads 486
4335 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase the computation speed becomes the critical factor. Data reduction is one of the solutions to this problem, and removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations: a microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input, and the output of the algorithm is the superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix; the second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
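
A software sketch of the two stages described above, under the usual discernibility-matrix formulation: the core is the union of singleton entries, and the greedy stage repeatedly adds the attribute covering the most still-uncovered entries (a standard greedy set-cover reading of "most common attributes"); the toy decision table is invented.

```python
from itertools import combinations

def discernibility_matrix(objects, decisions):
    """Entries list the condition attributes that discern object pairs
    with different decisions."""
    entries = []
    for (xi, di), (xj, dj) in combinations(zip(objects, decisions), 2):
        if di != dj:
            diff = {a for a, (u, v) in enumerate(zip(xi, xj)) if u != v}
            if diff:
                entries.append(diff)
    return entries

def core_and_superreduct(objects, decisions):
    entries = discernibility_matrix(objects, decisions)
    # Stage 1: the core is the union of all singleton entries
    core = {next(iter(e)) for e in entries if len(e) == 1}
    reduct = set(core)
    # Stage 2: greedily add the attribute covering the most uncovered entries
    uncovered = [e for e in entries if not (e & reduct)]
    while uncovered:
        counts = {}
        for e in uncovered:
            for a in e:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return core, reduct

# Toy decision table: rows are objects, columns are condition attributes
objects = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (0, 0, 0)]
decisions = [1, 1, 0, 0]
print(core_and_superreduct(objects, decisions))  # here the core happens to be empty
```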

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 192
4334 Investigated Optimization of Davidson Path Loss Model for Digital Terrestrial Television (DTTV) Propagation in Urban Area

Authors: Pitak Keawbunsong, Sathaporn Promwong

Abstract:

This paper presents an investigation of the efficiency of an optimized Davidson path loss model, in order to find a suitable path loss model for designing and planning DTTV propagation for small and medium urban areas in southern Thailand. Hadyai City in Songkla Province is chosen as the case study for collecting analytical data on electric field strength. The optimization is conducted through the least squares method, while the efficiency index is the statistical value of relative error (RE). The result of the least squares method is the offset and slope of the frequency term to be used in the optimization process. The statistical results show that the RE of the original Davidson model is the smallest when compared with the optimized Davidson and Hata models. Thus, the original Davidson path loss model is the most accurate and becomes the best choice for planning the propagation network design.
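
To illustrate the least squares step in general terms, the sketch below fits an offset and slope of a log-distance path-loss correction to measured data, and reports a relative-error index; the paper fits the offset and slope of the frequency term specifically, and the drive-test numbers here are invented.

```python
import numpy as np

# Illustrative drive-test data: distance (km) and measured path loss (dB)
d_km = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
measured_pl = np.array([118.0, 126.5, 135.2, 144.1, 152.8])

# Model the correction as PL = offset + slope * log10(d); solve min ||A p - b||^2
A = np.column_stack([np.ones_like(d_km), np.log10(d_km)])
(offset, slope), *_ = np.linalg.lstsq(A, measured_pl, rcond=None)

predicted = A @ np.array([offset, slope])
re = np.mean(np.abs(predicted - measured_pl) / measured_pl)  # relative error
print(f"offset={offset:.2f} dB, slope={slope:.2f} dB/decade, RE={re:.4f}")
```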

Keywords: DTTV propagation, path loss model, Davidson model, least square method

Procedia PDF Downloads 319
4333 Optimization of Bioremediation Process to Remove Hexavalent Chromium from Tannery Effluent

Authors: Satish Babu Rajulapati

Abstract:

The removal of toxic and heavy metal contaminants from wastewater streams and industrial effluents is one of the most important environmental issues being faced world over. In the present study, three bacterial cultures tolerating high concentrations of chromium were isolated from soil and wastewater samples collected from the tanneries located in Warangal, Telangana state. The bacterial species were identified as Bacillus sp., Staphylococcus sp., and Pseudomonas sp. Preliminary studies were carried out with the three bacterial species at various operating parameters such as pH and temperature. The results indicate that Pseudomonas sp. is the most efficient in the uptake of Cr(VI). Further, detailed investigations of Pseudomonas sp. were carried out to determine the efficiency of Cr(VI) removal. The various parameters influencing the biosorption of Cr(VI), such as pH, temperature, initial chromium concentration, inoculum size, and incubation time, were studied. Response surface methodology (RSM) was applied to optimize the removal of Cr(VI). Maximum Cr(VI) removal was found to be 85.72% at pH 7, temperature 35 °C, initial concentration 67 mg/l, inoculum size 9% (v/v), and time 60 hrs.

Keywords: Staphylococcus sp, chromium, RSM, optimization, Cr(VI)

Procedia PDF Downloads 296
4332 Design of 3-Step Skew BLAC Motor for Better Performance in Electric Power Steering System

Authors: Subrato Saha, Yun-Hyun Cho

Abstract:

In electric power steering (EPS), spoke-type brushless AC (BLAC) motors offer distinct advantages over other electric motor types in terms of torque smoothness, reliability, and efficiency. This paper deals with the shape optimization of a spoke-type BLAC motor in order to reduce cogging torque, examining 3-step skewing of the rotor angle, optimization of the rotor core edge, and the rotor overlap length. The methods were applied to existing machine designs, and their performance was calculated using finite-element analysis (FEA). Prototypes of the machine designs were constructed and experimental results obtained. It is shown that the FEA predicted the cogging torque to be markedly reduced using these methods.

Keywords: EPS, 3-Step skewing, spoke type BLAC, cogging torque, FEA, optimization

Procedia PDF Downloads 465
4331 Troubleshooting Petroleum Equipment Based on Wireless Sensors Based on Bayesian Algorithm

Authors: Vahid Bayrami Rad

Abstract:

In this research, common methods and techniques are investigated with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted, with the help of data mining algorithms, from the countless data generated is an unavoidable way to speed up monitoring and troubleshooting operations in today's big oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure, and responses under different conditions, the proposed Bayesian algorithm, using data clustering together with analysis and evaluation via a colored Petri net, provides a model that is applicable and dynamic from the point of view of reliability and response time. Using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs and human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability, and speed compared to previous methods.

Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, rapid miner, data mining-reliability

Procedia PDF Downloads 41
4330 Breast Cancer Risk is Predicted Using Fuzzy Logic in MATLAB Environment

Authors: S. Valarmathi, P. B. Harathi, R. Sridhar, S. Balasubramanian

Abstract:

The use of machine learning tools in medical diagnosis is increasing, owing to the improved effectiveness of classification and recognition systems in helping medical experts diagnose breast cancer. In this study, ID3 chooses the splitting attribute with the highest information gain, where gain is defined as the difference in entropy before versus after the split. It is applied to age, location, taluk, stage, year, period, marital status, treatment, heredity, sex, and habitat against the classes Very Serious (VS), Very Serious Moderate (VSM), Serious (S), and Not Serious (NS) to calculate the information gain. The ranked histogram gives the gain of each field for the breast cancer data. The doctors use TNM staging, which decides the risk level of the breast cancer and serves as an important decision-making field in fuzzy logic for perception-based measurement. The spatial risk area (taluk) of breast cancer is calculated. The results clearly state that Coimbatore (North and South) was found to be more at risk of breast cancer than other areas at the 20% criterion. The weighted value of each taluk was compared with the criterion value and integrated with Map Object to visualize the results. The ID3 algorithm shows the high breast cancer risk regions in the study area. The study has outlined, discussed, and resolved the algorithms, techniques, and methods adopted through a soft computing methodology, namely the ID3 algorithm, for prognostic decision making on the seriousness of breast cancer.
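
A short sketch of the ID3 splitting criterion described above: information gain is the entropy before the split minus the weighted entropy after it. The toy records and attribute names are invented, not the study's data.

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attribute):
    """Gain = entropy(before split) - weighted entropy(after splitting on
    `attribute`, a column index), as ID3 evaluates candidate splits."""
    before = entropy(labels)
    after = 0.0
    for value in {row[attribute] for row in rows}:
        subset = [lab for row, lab in zip(rows, labels) if row[attribute] == value]
        after += (len(subset) / len(labels)) * entropy(subset)
    return before - after

# Toy records: (age_group, stage) -> seriousness class, illustrative only
rows = [("young", "I"), ("young", "II"), ("old", "III"), ("old", "III"), ("old", "I")]
labels = ["NS", "S", "VS", "VS", "S"]
for attr, name in [(0, "age_group"), (1, "stage")]:
    print(name, round(information_gain(rows, labels, attr), 3))
```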

Keywords: ID3 algorithm, breast cancer, fuzzy logic, MATLAB

Procedia PDF Downloads 497
4329 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contour with the greedy snake algorithm. We use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background; its output is then used for the exact calculation of the size and center of the target, and as the initial contour for the greedy snake algorithm, which finds the exact target edge. The proposed algorithm has been tested on a database that contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
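
A minimal sketch of the first stage only, a bootstrap particle filter estimating a 2D target location; in the paper's pipeline the measurement would come from the neural-network segmentation, while here it is a simulated detection, and the motion/measurement noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=3.0, meas_std=5.0):
    """One bootstrap particle filter update for a 2D target location."""
    # 1) Predict: propagate particles with a random-walk motion model
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # 2) Update: weight particles by the likelihood of the measurement
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # 3) Resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate

N = 500
particles = rng.uniform(0, 100, size=(N, 2))      # spread over the frame
weights = np.full(N, 1.0 / N)
for z in [(50, 50), (54, 51), (59, 53)]:           # simulated detections
    particles, weights, est = particle_filter_step(
        particles, weights, np.array(z, dtype=float))
    print("estimate:", est.round(1))
```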

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 316
4328 Neural Network Based Fluctuation Frequency Control in PV-Diesel Hybrid Power System

Authors: Heri Suryoatmojo, Adi Kurniawan, Feby A. Pamuji, Nursalim, Syaffaruddin, Herbert Innah

Abstract:

Photovoltaic (PV) systems hybridized with diesel systems are widely utilized for electrification in remote areas. PV output power fluctuates due to uncertain temperature and sun irradiance conditions. When the penetration of PV power is large, the reliability of the power utility is disturbed, with a serious impact on the stability of the system frequency. Therefore, designing a robust frequency controller for a PV-diesel hybrid power system is very important. This paper proposes a new frequency control method for hybrid PV-diesel systems based on an artificial neural network (ANN). This method can minimize the frequency deviation without smoothing the PV output power controlled by the maximum power point tracking (MPPT) method. The neural network controller considers the average irradiance, the change of irradiance, and the frequency deviation. In order to show the effectiveness of the proposed algorithm, the addition of a battery as an energy storage system is also presented. To validate the proposed method, its results are compared with those of a similar system using MPPT only. The simulation results show that the proposed method is able to suppress frequency deviation to smaller values than the system using MPPT only.

Keywords: energy storage system, frequency deviation, hybrid power generation, neural network algorithm

Procedia PDF Downloads 475
4327 Filtering Intrusion Detection Alarms Using Ant Clustering Approach

Authors: Ghodhbani Salah, Jemili Farah

Abstract:

With the growth of cyber attacks, information safety has become an important issue all over the world. Many firms rely on security technologies such as intrusion detection systems (IDSs) to manage information technology security risks. IDSs are considered to be the last line of defense in securing a network and play a very important role in detecting large numbers of attacks. However, the main problem with today's most popular commercial IDSs is that they generate a high volume of alerts and a huge number of false positives. This drawback has become the main motivation for many research papers in the IDS area. Hence, in this paper we present a data mining technique to assist network administrators in analyzing and reducing the false positive alarms produced by an IDS and to increase detection accuracy. Our data mining technique is an unsupervised clustering method based on a hybrid ANT algorithm. This algorithm discovers clusters of intruders' behavior without prior knowledge of the possible number of classes; we then apply the K-means algorithm to improve the convergence of the ANT clustering. Experimental results on a real dataset show that our proposed approach is efficient, with a high detection rate and a low false alarm rate.
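
A hedged sketch of the refinement stage only: K-means seeded with the centers produced by a first clustering pass. The ant-based pass itself is not reproduced here; the alert feature vectors and the placeholder "ant" centers are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in alert feature vectors (e.g. numeric features extracted from IDS alerts)
alerts = np.vstack([rng.normal(loc, 0.8, size=(100, 2))
                    for loc in ([0, 0], [6, 6], [0, 6])])

# Assume a first, ant-based pass produced rough cluster centers (placeholders)
ant_centers = np.array([[0.7, -0.4], [5.1, 6.8], [-0.5, 5.2]])

# Refinement stage: K-means seeded with the ant clustering result
kmeans = KMeans(n_clusters=3, init=ant_centers, n_init=1).fit(alerts)
print("refined centers:\n", kmeans.cluster_centers_.round(2))
```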

Keywords: intrusion detection system, alarm filtering, ANT class, ant clustering, intruders’ behaviors, false alarms

Procedia PDF Downloads 386
4326 High Resolution Image Generation Algorithm for Archaeology Drawings

Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu

Abstract:

Aiming at the problems of low accuracy and susceptibility to cultural relic deterioration ("diseases") in the generation of high-resolution archaeology drawings by current image generation algorithms, an archaeology drawings generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the high-resolution image generation network serving as the backbone, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, in which the semantic translation branch extracts semantic features from orthophotographs of cultural relics, and the gradient screening branch extracts effective gradient features. Finally, the fusion fine-tuning module combines these two types of features to achieve the generation of high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed dataset of archaeology drawings of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in producing archaeology drawings.

Keywords: archaeology drawings, digital heritage, image generation, deep learning

Procedia PDF Downloads 27
4325 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract the data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (FCM) is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem, in which the minimization of a given cost function is studied; this minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function with values in the interval [0, 1]. In FCM clustering, the membership degree is constrained by the condition that the sum of a data object's memberships over all clusters must be equal to one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization has played a part in fuzzy c-means clustering: it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by the relative entropy approach, where the optimization problem aims to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that our proposed model achieves good accuracy.
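
A minimal sketch of a generic entropy-regularized fuzzy c-means variant (not the authors' exact model): with an entropy penalty weighted by λ, the membership update takes a Gibbs (softmax) form and the centers remain membership-weighted means. The synthetic blobs and λ value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_fcm(X, k, lam=2.0, iters=100):
    """Fuzzy c-means with entropy regularization: memberships follow
    u_ij ∝ exp(-||x_i - c_j||^2 / lam) and sum to one for each point."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, k)
        u = np.exp(-d2 / lam)
        u /= u.sum(axis=1, keepdims=True)             # membership constraint
        centers = (u.T @ X) / u.sum(axis=0)[:, None]  # weighted-mean update
    return centers, u

# Three Gaussian blobs as a synthetic test set
X = np.vstack([rng.normal(m, 0.4, size=(80, 2)) for m in ([0, 0], [3, 3], [0, 3])])
centers, u = entropy_fcm(X, k=3)
print("centers:\n", centers.round(2))
```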

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 242
4324 The Utilization of Particle Swarm Optimization Method to Solve Nurse Scheduling Problem

Authors: Norhayati Mohd Rasip, Abd. Samad Hasan Basari, Nuzulha Khilwani Ibrahim, Burairah Hussin

Abstract:

Allocating working schedules, especially in shift environments, makes it hard to fulfill fairness among staff. In the case of nurse scheduling, setting up the working timetable is time-consuming and complicated, and must consider many factors, including rules, regulations, and human factors. The scenario is further complicated since most nurses are women, who have personal constraints and maternity leave factors. An undesirable schedule can affect nurse productivity and social life, and the resulting absenteeism can significantly affect patients' lives as well. This paper aims to enhance the scheduling process by utilizing particle swarm optimization to solve the nurse scheduling problem. The results show that the generated multiple initial schedules fulfill the requirements and produce the lowest cost of constraint violation.
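
A hedged sketch of PSO driving down a constraint-violation cost on a toy roster; the roster size, the two toy rules (daily coverage and weekly workload), and the PSO constants are all invented for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NURSES, N_SHIFTS = 6, 7          # one shift slot per day for a week (toy size)

def violations(x):
    """Constraint-violation cost of a roster decoded from a real-valued vector.

    Toy rules: every day needs at least 2 nurses on duty (hard constraint),
    and each nurse should work at most 4 shifts per week (soft fairness rule).
    """
    roster = (x.reshape(N_NURSES, N_SHIFTS) > 0.5).astype(int)
    under_cover = np.maximum(0, 2 - roster.sum(axis=0)).sum()
    over_work = np.maximum(0, roster.sum(axis=1) - 4).sum()
    return 10 * under_cover + over_work   # hard constraints weighted heavier

dim, n_particles = N_NURSES * N_SHIFTS, 30
pos = rng.random((n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([violations(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    cost = np.array([violations(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best violation cost:", pbest_cost.min())
```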

Keywords: nurse scheduling, particle swarm optimisation, nurse rostering, hard and soft constraint

Procedia PDF Downloads 346
4323 Automatic Threshold Search for Heat Map Based Feature Selection: A Cancer Dataset Analysis

Authors: Carlos Huertas, Reyes Juarez-Ramirez

Abstract:

Public health is one of the most critical issues today; therefore, there is great interest in improving technologies in the area of disease detection. With machine learning and feature selection, it has been possible to aid the diagnosis of several diseases, such as cancer. In this work, we present an extension to the Heat Map Based Feature Selection algorithm; this modification allows automatic threshold parameter selection, which helps to improve the generalization performance on high-dimensional data such as mass spectrometry. We have performed a comparison analysis using multiple cancer datasets, comparing against the well-known Recursive Feature Elimination algorithm and our original proposal; the results show improved classification performance that is very competitive with current techniques.

Keywords: biomarker discovery, cancer, feature selection, mass spectrometry

Procedia PDF Downloads 306
4322 Enhancing Predictive Accuracy in Pharmaceutical Sales through an Ensemble Kernel Gaussian Process Regression Approach

Authors: Shahin Mirshekari, Mohammadreza Moradi, Hossein Jafari, Mehdi Jafari, Mohammad Ensaf

Abstract:

This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matern, and Rational Quadratic kernels, to analyze pharmaceutical sales data. Bayesian optimization was used to identify the optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matern, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior predictive accuracy, achieving an R² score near 1.0 and significantly lower MSE, MAE, and RMSE values. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics on complex pharmaceutical sales datasets.
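
A sketch of a weighted kernel sum in scikit-learn's GPR, for illustration only: RBF stands in for the squared-exponential component and the standard Matern for the paper's "Revised Matern" (which is not a stock sklearn kernel); the weights mirror the reported values, and the toy sales series, length scales, and train/test split are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

rng = np.random.default_rng(0)

# Toy monthly sales series with trend + seasonality (illustrative stand-in data)
t = np.arange(60, dtype=float).reshape(-1, 1)
y = 0.05 * t.ravel() + np.sin(2 * np.pi * t.ravel() / 12) + 0.1 * rng.normal(size=60)

# Weighted sum of kernels; the weights mirror the paper's reported values
kernel = (0.76 * RBF(length_scale=10.0)            # squared-exponential component
          + 0.21 * Matern(length_scale=10.0, nu=1.5)
          + 0.13 * RationalQuadratic(length_scale=10.0, alpha=1.0))

gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
gpr.fit(t[:48], y[:48])                            # train on the first 48 months
pred, std = gpr.predict(t[48:], return_std=True)   # forecast the final year
print("held-out RMSE:", np.sqrt(np.mean((pred - y[48:]) ** 2)).round(3))
```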

Keywords: Gaussian process regression, ensemble kernels, bayesian optimization, pharmaceutical sales analysis, time series forecasting, data analysis

Procedia PDF Downloads 44
4321 Minimizing Vehicular Traffic via Integrated Land Use Development: A Heuristic Optimization Approach

Authors: Babu Veeregowda, Rongfang Liu

Abstract:

The current traffic impact assessment methodology and environmental quality review process for the approval of land development projects are conventional, stagnant, and one-dimensional. The environmental review policy and procedure fail to provide direction to regulate or seek alternative land uses and sizes that exploit the existing or surrounding elements of the built environment (the '4 Ds' of development: density, diversity, design, and distance to transit) or smart growth principles, which influence travel behavior and have a significant effect in reducing vehicular traffic. Additionally, the environmental review policy does not give directions on how to incorporate urban planning into development, for example by incorporating non-motorized roadway elements such as sidewalks, bus shelters, and access to community facilities. This research developed a methodology to optimize the mix of land uses and sizes using a heuristic optimization process, in order to minimize auto-dependent development and to meet the interests of key stakeholders. A case study of the Willets Point Mixed Use Development in Queens, New York, was used to assess the benefits of the methodology. The approved Willets Point Mixed Use project was based on the maximum envelope of size and land use type allowed by current conventional urban renewal plans. This paper will also evaluate the parking accumulation for various land uses, to explore the potential for shared parking to further optimize the mix of land uses and sizes. This research is very timely and useful to the many stakeholders interested in understanding the benefits of integrated land uses and their development.

Keywords: traffic impact, mixed use, optimization, trip generation

Procedia PDF Downloads 191
4320 Real-Time Optimisation and Minimal Energy Use for Water and Environment Efficient Irrigation

Authors: Kanya L. Khatri, Ashfaque A. Memon, Rod J. Smith, Shamas Bilal

Abstract:

The viability and sustainability of crop production are currently threatened by increasing water scarcity. Water scarcity problems can be addressed through improved water productivity, and the options usually presumed in this context are efficient water use and the conversion of surface irrigation to pressurized systems. By replacing furrow irrigation with drip or centre pivot systems, water efficiency can be improved by up to 30 to 45%. However, the installation and operation of pumps and pipes, and the fuels they require, increase energy consumption and cause significant greenhouse gas emissions. Hence, a balance between the improvement in water use and the potential increase in energy consumption is required, keeping in view the adverse impact of increased carbon emissions on the environment. When surface water is used, pressurized systems increase energy consumption substantially, by 65% to 75%, and produce greenhouse gas emissions around 1.75 times higher than gravity-based irrigation, whereas with gravity-based surface irrigation methods the energy consumption is assumed to be negligible. This study has shown that a novel real-time infiltration model, REIP, has enabled the implementation of real-time optimization and control of surface irrigation, and that surface irrigation with real-time optimization has the potential to bring significant improvements in irrigation performance, along with substantial water savings of 2.92 ML/ha, almost equivalent to those given by pressurized systems. Thus, real-time optimization and control offers a modern, environmentally friendly, and water-efficient system with close to zero increase in energy consumption and minimal greenhouse gas emissions.

Keywords: pressurised irrigation, carbon emissions, real-time, environmentally-friendly, REIP

Procedia PDF Downloads 478