Search results for: variable step-size adaptive algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2862

612 An Assessment of the Effects of Microbial Products on the Specific Oxygen Uptake in Submerged Membrane Bioreactor

Authors: M. F. R. Zuthi, H. H. Ngo, W. S. Guo, S. S. Chen, N. C. Nguyen, L. J. Deng, T. D. C. Tran

Abstract:

Sustaining a desired rate of oxygen transfer for microbial activity is a major concern in biological wastewater treatment by membrane bioreactors (MBRs). The study reported in this paper aimed to assess the effects of microbial products on the specific oxygen uptake rate (SOUR) in a conventional membrane bioreactor (CMBR) and in a sponge submerged MBR (SSMBR). The production and progressive accumulation of soluble microbial products (SMP) and bound extracellular polymeric substances (bEPS) affected the SOUR of the microorganisms, which varied at different stages of operation of the MBR systems depending on the variable concentrations of SMP and bEPS. The effect of bEPS on the SOUR was stronger in the SSMBR than that of SMP, while relatively high concentrations of SMP had adverse effects on the SOUR in the CMBR system. Of the different mathematical correlations analyzed in the study, logarithmic correlations could be established between SOUR and bEPS in the SSMBR, and similar correlations could also be found between SOUR and SMP concentrations in the CMBR.

Keywords: Microbial products, Microbial activity, Specific oxygen uptake rate, Membrane bioreactor.

611 A Fast Silhouette Detection Algorithm for Shadow Volumes in Augmented Reality

Authors: Hoshang Kolivand, Mahyar Kolivand, Mohd Shahrizal Sunar, Mohd Azhar M. Arsad

Abstract:

Real-time shadow generation in virtual environments and Augmented Reality (AR) has been a hot topic for the last three decades. Shadow generation in AR requires heavy computation, so a fast algorithm is needed to make it feasible in any real-time rendering system. In this paper, a silhouette detection algorithm is presented to generate shadows for AR systems. The Δ+ algorithm is presented, based on extending the edges of occluders to recognize which edges are silhouettes during real-time rendering. A careful comparison between the proposed algorithm and current silhouette detection algorithms is made to show the reduction in computation achieved by the presented algorithm. The algorithm is tested in both virtual environments and AR systems. We believe this algorithm has the potential to become a fundamental algorithm for shadow generation in all complex environments.
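
The abstract does not spell out the Δ+ test itself, but algorithms of this family build on the classic silhouette condition: an edge is a silhouette if it is shared by one light-facing and one back-facing triangle. A minimal sketch of that test (function and variable names are ours, not the paper's):

```python
import numpy as np

def face_normal(v0, v1, v2):
    """Unnormalized normal of a triangle given its three vertices."""
    return np.cross(v1 - v0, v2 - v0)

def silhouette_edges(vertices, faces, light_pos):
    """Return edges shared by one light-facing and one back-facing triangle."""
    # Classify each face as facing the light or not.
    facing = {}
    for f, (a, b, c) in enumerate(faces):
        n = face_normal(vertices[a], vertices[b], vertices[c])
        facing[f] = np.dot(n, light_pos - vertices[a]) > 0.0

    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)

    # A silhouette edge separates a front-facing face from a back-facing one.
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]

# Tiny example: one quad (two triangles) lit from above its plane. Both
# triangles face the light, so no silhouette edge is reported; on a closed
# occluder mesh, the silhouette loop appears instead.
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
F = [(0, 1, 2), (0, 2, 3)]
print(silhouette_edges(V, F, np.array([0.5, 0.5, 1.0])))
```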

Keywords: Silhouette detection, shadow volumes, real-time shadows, rendering, augmented reality.

610 An Energy Aware Data Aggregation in Wireless Sensor Network Using Connected Dominating Set

Authors: M. Santhalakshmi, P. Suganthi

Abstract:

Wireless Sensor Networks (WSNs) have many advantages. Their deployment is easier and faster than that of wired sensor networks or other wireless networks, as they need no fixed infrastructure. Nodes are partitioned into many small groups named clusters to aggregate data through network organization. WSN clustering guarantees the performance of sensor nodes: their energy consumption is reduced by eliminating redundant energy use and by balancing energy use across the network. The aim of such clustering protocols is to prolong network lifetime. Low Energy Adaptive Clustering Hierarchy (LEACH) is a popular WSN clustering protocol in which local cluster heads are rotated randomly in order to distribute the energy load among all sensor nodes in the network. This paper proposes Connected Dominating Set (CDS) based cluster formation. CDS-based aggregation is a promising approach for reducing routing overhead, since messages are transmitted only within the virtual backbone formed by the CDS, and aggregation lowers the ratio of responding hosts to the hosts in the virtual backbone. The CDS construction tries to increase network lifetime by considering parameters such as sensor lifetime and the remaining and consumed energy of nodes, in order to achieve near-optimal data aggregation within the network. Experimental results showed that CDS outperformed LEACH in the number of clusters formed, average packet loss rate, average end-to-end delay, network lifetime, and remaining energy.
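
The abstract does not give the CDS construction itself; a common greedy heuristic, sketched here with networkx under the assumption of a connected graph, grows a dominating backbone node by node. An energy-aware variant of the kind the paper describes would additionally weight candidates by remaining energy:

```python
import networkx as nx

def greedy_cds(g):
    """Greedy connected dominating set: grow a connected backbone that
    dominates all nodes, preferring nodes that cover the most new neighbors.
    Assumes g is connected."""
    start = max(g.nodes, key=g.degree)          # seed with highest degree
    cds = {start}
    covered = {start} | set(g.neighbors(start))
    while covered != set(g.nodes):
        # Candidates adjacent to the backbone keep the CDS connected.
        frontier = {n for c in cds for n in g.neighbors(c)} - cds
        best = max(frontier, key=lambda n: len(set(g.neighbors(n)) - covered))
        cds.add(best)
        covered |= {best} | set(g.neighbors(best))
    return cds

g = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)
print(sorted(greedy_cds(g)))   # virtual backbone used for aggregation
```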

Keywords: Wireless sensor network, connected dominating set, clustering, data aggregation.

609 Optimal Design of Airfoil Planform Shapes with High Aspect Ratio Using Genetic Algorithm

Authors: Kyoungwoo Park, Byeong-Sam Kim

Abstract:

Unmanned aerial vehicles (UAVs) that can operate for long durations have been attracting much attention in the military and civil aviation industries for the past decade, and the applicable field of UAVs is expanding from military to civil purposes. Because of their low operating cost, high reliability and the variety of application areas, numerous development programs have been initiated around the world. To obtain optimal solutions for the design variables (i.e., sectional airfoil profile, wing taper ratio and sweep) for high UAV performance, both the lift and the lift-to-drag ratio are maximized while the pitching moment is simultaneously minimized. It is found that the lift force and lift-to-drag ratio are linearly dependent and a unique dominant solution exists. However, a trade-off is observed between the lift-to-drag ratio and the pitching moment. As a result of the optimization, sixty-five (65) non-dominated Pareto individuals at the edge of the design space determined by the airfoil shapes can be obtained.
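
As a small illustration of the non-dominated (Pareto) filtering step that yields such individuals, here is a sketch assuming all objectives have been recast as minimizations (lift and lift-to-drag negated, pitching moment kept):

```python
import numpy as np

def pareto_front(objs):
    """Indices of non-dominated rows of objs (n, m); all columns minimized.

    Row j dominates row i if it is no worse in every objective and strictly
    better in at least one.
    """
    keep = []
    for i in range(objs.shape[0]):
        others = np.delete(objs, i, axis=0)
        dominated = np.any(np.all(others <= objs[i], axis=1) &
                           np.any(others < objs[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Columns: (-lift, -lift/drag, pitching moment) for each GA individual.
pop = np.random.rand(100, 3)
print(pareto_front(pop))
```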

Keywords: Unmanned aerial vehicle (UAV), Airfoil, CFD, Shape optimization, Genetic Algorithm.

608 When Explanations “Cause” Error: A Look at Representations and Compressions

Authors: Michael Lissack

Abstract:

We depend upon explanation in order to “make sense” of our world, and making sense is all the more important when dealing with change. But what happens if our explanations are wrong? This question is examined with respect to two types of explanatory model. Models based on labels and categories we shall refer to as “representations.” More complex models involving stories, multiple algorithms, rules of thumb, questions and ambiguity we shall refer to as “compressions.” Both compressions and representations are reductions, but representations are far more reductive than compressions. Representations can be treated as a set of defined meanings: coherence with regard to a representation is the degree of fidelity between the item in question and the definition of the representation, of the label. By contrast, compressions contain enough degrees of freedom and ambiguity to allow us to make internal predictions so that we may determine our potential actions in the possibility space. Compressions are explanatory via mechanism; representations are explanatory via category. Managers often confuse their evocation of a representation (category inclusion) with the creation of a context of compression (description of mechanism). When this type of explanatory error occurs, more errors follow. In the drive for efficiency such substitutions are all too often proclaimed, at the manager's peril.

Keywords: Coherence, Emergence, Reduction, Model

607 Intensification of Ethyl Esters Synthesis Using a Packed-Bed Tubular Reactor at Supercritical Conditions

Authors: Camila da Silva, Simone Belorte de Andrade, Vitor Augusto dos Santos Garcia, Vladimir Ferreira Cabral, J. Vladimir Oliveira, Lúcio Cardozo-Filho

Abstract:

In the present study, the non-catalytic transesterification of soybean oil with supercritical ethanol in continuous mode was investigated. Experiments were performed in a packed-bed tubular reactor (PBTR), and the variables studied were reaction temperature (523 K to 598 K), pressure (10 MPa to 20 MPa), oil to ethanol molar ratio (1:10 to 1:40) and water concentration (0 wt% to 10 wt% in ethanol). Results showed that the ethyl ester yields obtained in the PBTR were higher (by more than 20 wt%) than those obtained in a tubular reactor (TR), due to the improved mass transfer conditions attained in the PBTR. Temperature, pressure, oil to ethanol molar ratio and water concentration all had a positive effect on fatty acid ethyl ester (FAEE) production in the experimental range investigated, with appreciable reaction yields (90 wt%) achieved at 598 K, 20 MPa, an oil to ethanol molar ratio of 1:40 and a water concentration of 10 wt%.

Keywords: Packed bed reactor, ethyl esters, continuous process, catalyst-free process.

606 Benchmarking of Pentesting Tools

Authors: Esteban Alejandro Armas Vega, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

The benchmarking of tools for dynamic analysis of vulnerabilities in web applications is done periodically, because these tools update their knowledge bases and search algorithms from time to time in order to improve their accuracy. Unfortunately, the vast majority of these evaluations are made by software enthusiasts who publish their results on blogs or non-academic websites, always with the same evaluation methodology. Similarly, most academics who have carried out this type of analysis from a scientific approach have used the same methodology as the empirical authors. This paper is motivated by the interest in answering questions that many users of this type of tool have been asking over the years, such as whether a tool truly tests and evaluates every vulnerability it claims to, and whether it really delivers a complete report of all the vulnerabilities tested and exploited. These questions have also motivated previous work, but without real answers. The aim of this paper is to show results that truly answer, at least for the tested tools, all those open questions. All the results have been obtained by changing the common benchmarking model used in the previous works.

Keywords: Cybersecurity, IDS, security, web scanners, web vulnerabilities.

605 A Fuzzy Approach to Liver Tumor Segmentation with Zernike Moments

Authors: Abder-Rahman Ali, Antoine Vacavant, Manuel Grand-Brochier, Adélaïde Albouy-Kissi, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for liver lesions in regions of interest within MRI (Magnetic Resonance Imaging). This approach, based on a two-cluster Fuzzy C-Means methodology, uses a variable compactness parameter to handle uncertainty. Fine boundaries are detected by a local recursive merging of ambiguous pixels, using a sequential forward floating selection with Zernike moments. The method has been tested on both synthetic and real images. When applied to synthetic images, the proposed approach performs well: the segmentations obtained are accurate, their shape is consistent with the ground truth, and the extracted information is reliable. The results obtained on MR images confirm these observations. Even for difficult MR images, our approach extracts segmentations with good accuracy and shape, which implies that the geometry of the tumor is preserved for further clinical activities (such as automatic extraction of pharmacokinetic properties, lesion characterization, etc.).
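
A minimal sketch of the two-cluster Fuzzy C-Means core on which such an approach builds; the variable-compactness term and the Zernike-moment boundary refinement of the paper are not reproduced here:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, tol=1e-5):
    """Basic fuzzy c-means on flattened pixel intensities x of shape (n,)."""
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)     # fuzzily weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # Standard FCM membership update: u_ik ∝ d_ik^(-2/(m-1)).
        u_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Two-mode synthetic "image": memberships separate the two intensity clusters.
x = np.concatenate([np.random.normal(0.2, 0.05, 500),
                    np.random.normal(0.7, 0.05, 500)])
centers, u = fcm(x)
print(centers)
```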

Keywords: Defuzzification, floating search, fuzzy clustering, Zernike moments.

604 Modified Naïve Bayes Based Prediction Modeling for Crop Yield Prediction

Authors: Kefaya Qaddoum

Abstract:

Most greenhouse growers desire a predictable amount of yield in order to meet market requirements accurately. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, using an L1 penalty, is capable of removing redundant predictors; a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors, and the method is tested on a WSG data set for tomato yields, where there are many more predictors than data points and the urgent need is to predict weekly yield. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN), and is shown to perform fairly well.
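
The paper's LARS-based L1-penalized construction is not reproduced here; as a loose stand-in, an L1-penalized linear model can drop redundant predictors before a naive Bayes fit. A sketch with scikit-learn on synthetic data (all names and the data are illustrative):

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Many predictors, few samples, as in the weekly-yield setting.
rng = np.random.default_rng(0)
X = rng.random((120, 40))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.7).astype(int)   # synthetic yield class

model = make_pipeline(
    # L1 penalty zeroes out redundant/irrelevant predictor weights.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    GaussianNB(),
)
model.fit(X, y)
print(model.score(X, y))
```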

Keywords: Tomato yields prediction, naive Bayes, redundancy

603 The Relationship between the Environmental and Financial Performance of Australian Electricity Producers

Authors: S. Forughi, A. De Zoysa, S. Bhati

Abstract:

The present study focuses on the environmental performance of companies in the electricity-producing sector and its relationship with their financial performance. We review the major studies that have examined the relationship between the environmental and financial performance of firms in various industries. While classical economic debates consider environmentally friendly activities costly and harmful to a firm's profitability, it is claimed that firms will be rewarded with higher profitability in the long run through investments in environmentally friendly activities. In this context, prior studies have examined the relationship between the environmental and financial performance of firms operating in different industry sectors. Our study employs an environmental indicator to increase the accuracy of the results; it is used as an independent variable in our econometric model to evaluate the impact of the financial performance of firms on their environmentally friendly activities in the context of companies operating in the Australian electricity-producing sector. We expect our methodology to contribute to the literature, and the findings of the study will help us to provide recommendations and policy implications for electricity producers.

Keywords: Australian electricity sector, efficiency measurement, environmental-financial performance interaction, environmental index.

602 Multichannel Scheme under Max-Min Fairness Environment for Cognitive Radio Networks

Authors: Hans R. Márquez, Cesar Hernández, Ingrid Páez

Abstract:

This paper develops a multiple channel assignment model that makes it possible to exploit spectrum opportunities in cognitive radio networks in the most efficient way. The developed scheme allows several assignments of available, frequency-adjacent channels, which require a larger bandwidth, under a fairness environment. The hybrid assignment model is made up of two algorithms: one that ranks and selects available frequency channels, and another in charge of enforcing Max-Min Fairness so as not to restrict the spectrum opportunities of the other secondary users who also wish to transmit. Measurements were made of average bandwidth, average delay, and fairness for several channel assignments. The results were evaluated with experimental spectrum occupancy data captured from the GSM frequency band. The developed model shows evidence of improvement in the use of spectrum opportunities and a wider average transmission bandwidth for each secondary user, while maintaining the fairness criteria in channel assignment.
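
Max-Min Fairness itself is commonly computed by progressive filling; a minimal sketch of that computation for bandwidth shares, as a simplification of the paper's channel-level scheme:

```python
def max_min_fair(capacity, demands):
    """Progressive filling: repeatedly split the remaining capacity equally
    among users whose demand is not yet satisfied."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for i in sorted(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
        active = {i for i in active if demands[i] - alloc[i] > 1e-12}
    return alloc

# Small users get all they ask; the large user absorbs the leftover.
print(max_min_fair(10.0, [2.0, 3.0, 8.0]))   # -> [2.0, 3.0, 5.0]
```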

Keywords: Bandwidth, fairness, multichannel, secondary users.

601 The Effect of Confinement Shapes on Over-Reinforced HSC Beams

Authors: Ross Jeffry, Muhammad N. S. Hadi

Abstract:

High strength concrete (HSC) provides high strength but lower ductility than normal strength concrete, and this low ductility limits the benefit of using HSC in building safe structures. On the other hand, when designing reinforced concrete beams, designers have to limit the amount of tensile reinforcement to prevent brittle failure of the concrete, so the full potential of the steel reinforcement cannot be achieved. This paper presents the idea of confining the concrete in the compression zone so that the HSC is in a state of triaxial compression, which leads to improvements in strength and ductility. Five HSC beams were cast and tested. The cross section of the beams was 200×300 mm, with a length of 4 m and a clear span of 3.6 m, subjected to four-point loading with emphasis placed on the midspan deflection. The first beam served as a reference beam. The remaining beams had different tensile reinforcement, and the confinement shapes were varied to gauge their effectiveness in improving the strength and ductility of the beams. The compressive strength of the concrete was 85 MPa; the tensile strength of the steel was 500 MPa, and that of the stirrups and helixes 250 MPa. The results of testing the five beams proved that placing helixes of different diameters in the compression zone of reinforced concrete beams improves their strength and ductility.

Keywords: Confinement, ductility, high strength concrete, reinforced concrete beam.

600 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for patients with sensorineural loss. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Cosine Transform power-normalized Least Mean Square algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it achieves a faster convergence rate than the time-domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormality, separability, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real-valued function and thus can be used effectively in real-time operation. The advantages of DCT-LMS over the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen and colleagues. The computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR is improved by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
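
A minimal sketch of a DCT power-normalized LMS filter of the kind described, using scipy's DCT in place of the fast factored butterfly implementation (tap count, step size and the toy signal are our choices, not the paper's):

```python
import numpy as np
from scipy.fft import dct

def dct_lms(x, d, n_taps=16, mu=0.05, eps=1e-6, beta=0.9):
    """Transform-domain LMS: DCT decorrelates the tap vector, and each DCT
    bin's step size is normalized by a running estimate of its power."""
    w = np.zeros(n_taps)
    power = np.full(n_taps, eps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = dct(x[n - n_taps:n][::-1], norm="ortho")   # decorrelating transform
        power = beta * power + (1 - beta) * u * u      # per-bin power estimate
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * u / (power + eps)                # power-normalized update
    return y

# Toy use: adapt toward a clean sinusoid from its noisy version.
t = np.arange(4000)
clean = np.sin(2 * np.pi * 0.01 * t)
noisy = clean + 0.5 * np.random.randn(t.size)
enhanced = dct_lms(noisy, clean)
```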

Keywords: Hearing Impairment, DCT Adaptive filter, Sensorineural loss patients, Convergence rate.

599 Histological Structure of the Thyroid Gland in Duck: A Light and Electron Microscopic Study

Authors: A. Parchami, R. F. Fatahian Dehkordi

Abstract:

The present investigation aimed to study the histomorphometric characteristics of the thyroid gland of the duck. Five adult male and five adult female ducks were used in the experiment. Results showed that the overall histological structure of the thyroid gland of the duck is similar to that of other vertebrates. The gland consists of roughly spherical, randomly distributed micro- and macrofollicles with very little interstitial tissue between them. Each follicle is lined by a single layer of epithelial cells enclosing a cavity, the follicular cavity, which is filled with colloid. Ultrastructural findings showed that the apical surface of the follicular cells bears a variable number of short, irregularly distributed microvilli, which are apparently more numerous on the columnar cells than on the lower, relatively inactive cells. Mitochondria and rough endoplasmic reticulum occupy the subnuclear region of the follicular cell, whereas the Golgi complex, free ribosomes and colloid droplets are found in the apical cytoplasm. At both light and electron microscopic levels, there was no sex difference in the histomorphometric characteristics of the thyroid glands.

Keywords: Duck, Thyroid gland, Light microscopy, Electron microscopy

598 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation

Authors: S. K. Pillai, M. K. Jeyakumar

Abstract:

Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data for this purpose: the known target (actual) values are compared with the output produced by the model, and the differences between the two form the basis for estimating the parameters. To compare different models developed from the same data, different criteria are used. Data obtained from small-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We have considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.

Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.

597 Optimal Design of Multimachine Power System Stabilizers Using Improved Multi-Objective Particle Swarm Optimization Algorithm

Authors: Badr M. Alshammari, T. Guesmi

Abstract:

In this paper, the concept of a non-dominated sorting multi-objective particle swarm optimization with local search (NSPSO-LS) is presented for the optimal design of multimachine power system stabilizers (PSSs). The controller design is formulated as an optimization problem that shifts the system electromechanical modes into a pre-specified region of the s-plane. A composite set of objective functions comprising the damping factor and the damping ratio of the undamped and lightly damped electromechanical modes is considered. The performance of the proposed optimization algorithm is verified on the 3-machine, 9-bus system. Simulation results based on eigenvalue analysis and nonlinear time-domain simulation show the potential and superiority of the NSPSO-LS algorithm in tuning PSSs over a wide range of loading conditions and large disturbances, compared to the classic PSO technique and genetic algorithms.
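
For orientation, a minimal single-objective PSO of the kind NSPSO-LS extends; the non-dominated sorting and local search of the paper are not reproduced, and the PSS objective is only indicated in a comment:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds (lo, hi) arrays."""
    lo, hi = bounds
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Demo on a sphere function; in PSS tuning, x would hold stabilizer gains and
# time constants, and the objective would penalize poorly damped modes.
lo, hi = np.full(3, -5.0), np.full(3, 5.0)
print(pso(lambda z: np.sum(z ** 2), (lo, hi)))
```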

Keywords: Multi-objective optimization, particle swarm optimization, power system stabilizer, low frequency oscillations.

596 Geo-Spatial Methods to Better Understand Urban Food Deserts

Authors: Brian Ceh, Alison Jackson-Holland

Abstract:

Food deserts are a reality in some cities. These deserts can be described as a shortage of healthy food options within close proximity of consumers, typically caused by a lack of stores in an urban area that provide adequate fruit and vegetable choices. This study explores new avenues to better understand food deserts by examining the modes of transportation available to shoppers, e.g. walking, automobile, or public transit. Further, this study is unique in that it explores the location not only of large grocery stores but also of small grocery and convenience stores. The relationship between socio-economic indicators, such as personal income, and food deserts is also explored to determine any possible association. In addition, complex spatial network models built on suitable algorithms are used to investigate the possibility of food deserts in the city of Hamilton, Canada. It is found that Hamilton, Canada is adequately served by retailers who provide healthy food choices and that the food desert phenomenon is almost absent.

Keywords: Canada, desert, food, Hamilton, stores.

595 Fault Localization and Alarm Correlation in Optical WDM Networks

Authors: G. Ramesh, S. Sundara Vadivelu

Abstract:

For many high speed networks, providing resilience against failures is an essential requirement. A key feature in designing next generation optical networks is protecting and restoring high capacity WDM networks from failures. Quick detection, identification and restoration make networks more robust and reliable, even though failures cannot be avoided. Hence, it is necessary to develop fast, efficient and dependable fault localization and detection mechanisms. In this paper we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm to analyze the alarms generated by the components of an optical network in the presence of a fault; it uses alarm correlation to reduce the list of suspected components shown to the network operators. Our simulation results show that the proposed algorithms achieve lower blocking probability and delay while attaining higher throughput.
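
The abstract describes alarm correlation as pruning the suspect list; a minimal sketch of the underlying set-intersection idea follows (component and alarm names are hypothetical, not from the paper):

```python
def correlate_alarms(alarms, suspects_of):
    """Intersect the suspect sets implied by each alarm.

    alarms: iterable of alarm ids; suspects_of: alarm id -> set of components
    that could have raised it (e.g. every element on the failed lightpath).
    """
    suspects = None
    for a in alarms:
        suspects = suspects_of[a] if suspects is None else suspects & suspects_of[a]
    return suspects or set()

# Hypothetical lightpath A-B-C: two alarms together isolate the fault.
suspects_of = {
    "loss_of_signal_C": {"fiber_AB", "fiber_BC", "amp_B"},
    "loss_of_signal_B": {"fiber_AB"},
}
print(correlate_alarms(suspects_of.keys(), suspects_of))   # {'fiber_AB'}
```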

Keywords: Alarm correlation, blocking probability, delay, fault localization, WDM networks.

594 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network

Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem

Abstract:

The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance, and these residuals are predicted using two different approaches. In the first approach, a local linear wavelet neural network model (LLWNN) is developed to predict the conditional variance using back-propagation learning algorithms. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity process (G-GARCH) is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using the wavelet methodology based on the discrete wavelet packet transform (DWPT) approach. The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.

Keywords: k-factor, GARMA, LLWNN, G-GARCH, electricity price, forecasting.

593 Robot Movement Using the Trust Region Policy Optimization

Authors: Romisaa Ali

Abstract:

The policy gradient approach is a subset of deep reinforcement learning (DRL), which combines deep neural networks (DNNs) with reinforcement learning (RL). This approach finds the optimal policy for robot movement based on the experience the robot gains from interaction with its environment. Unlike previous policy gradient algorithms, which were unable to handle the error variance and bias introduced by the DNN model through over- or underestimation, this algorithm is capable of handling both types of error. This article discusses the state-of-the-art (SOTA) policy gradient technique, trust region policy optimization (TRPO), applying it in various environments and comparing it with another policy gradient method, proximal policy optimization (PPO), to explain their robust optimization. The SOTA method is used to gather experience data during various training phases, after observing the impact of hyper-parameters on neural network performance.
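
A sketch of the kind of TRPO-vs-PPO comparison described, assuming the stable-baselines3, sb3-contrib and gymnasium packages are available; the paper's robot environments and hyper-parameter sweeps are not reproduced, and "Pendulum-v1" is just a stand-in continuous-control task:

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from sb3_contrib import TRPO

env_id = "Pendulum-v1"   # stand-in for a robot-movement environment

# Train both policy gradient methods on the same task.
trpo = TRPO("MlpPolicy", env_id, verbose=0).learn(total_timesteps=50_000)
ppo = PPO("MlpPolicy", env_id, verbose=0).learn(total_timesteps=50_000)

# Compare the learned policies by average episode return.
for name, model in [("TRPO", trpo), ("PPO", ppo)]:
    mean_r, std_r = evaluate_policy(model, gym.make(env_id), n_eval_episodes=10)
    print(f"{name}: {mean_r:.1f} +/- {std_r:.1f}")
```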

Keywords: Deep neural networks, deep reinforcement learning, Proximal Policy Optimization, state-of-the-art, trust region policy optimization.

592 A Distance Function for Data with Missing Values and Its Application

Authors: Loai AbdAllah, Ilan Shimshoni

Abstract:

Missing values are common in real world data. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, we define in this paper a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates has a missing value, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes distances between data points. Because the performance of such algorithms depends strongly on the chosen distance measure, we opted for the k nearest neighbor classifier to evaluate the distance's ability to accurately reflect object similarity. We experimented on standard numerical datasets from different fields in the UCI repository. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime performance of our method is only slightly higher than that of the other methods.
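
A simplified sketch of the idea, using per-coordinate means and variances in place of the full Bhattacharyya machinery of the paper: present coordinates contribute a variance-normalized squared difference (a diagonal stand-in for the Mahalanobis term), while a missing coordinate contributes the expected squared distance under its empirical distribution:

```python
import numpy as np

def missing_value_distance(a, b, mean, var):
    """Squared distance between points that may contain NaNs.

    A coordinate missing in one point contributes its expected squared
    distance to a random draw of that coordinate,
    E[(x - Y)^2] = (x - mean)^2 + var; missing in both contributes
    E[(X - Y)^2] = 2 * var. Both are normalized by var like the present case.
    """
    d2 = 0.0
    for x, y, m, v in zip(a, b, mean, var):
        if np.isnan(x) and np.isnan(y):
            d2 += 2.0                        # 2 * var / var
        elif np.isnan(x):
            d2 += ((y - m) ** 2 + v) / v
        elif np.isnan(y):
            d2 += ((x - m) ** 2 + v) / v
        else:
            d2 += (x - y) ** 2 / v
    return d2

# mean/var are per-coordinate statistics of the (unlabeled) dataset; the
# function plugs directly into a kNN distance computation.
```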

Keywords: Missing values, Distance metric, Bhattacharyya distance.

591 Centre Of Mass Selection Operator Based Meta-Heuristic For Unbounded Knapsack Problem

Authors: D. Venkatesan, K. Kannan, S. Raja Balachandar

Abstract:

In this paper a new genetic algorithm based on a heuristic operator and a centre of mass selection operator (CMGA) is designed for the unbounded knapsack problem (UKP), an NP-hard combinatorial optimization problem. The proposed genetic algorithm is built on a heuristic operator that utilizes problem-specific knowledge. This centre of mass operator, when combined with other genetic operators, forms an algorithm competitive with existing ones. Computational results show that the proposed algorithm is capable of obtaining high quality solutions for standard randomly generated knapsack instances. A comparative study of CMGA and a simple GA on unbounded knapsack instances of size up to 200 shows the superiority of CMGA. Thus CMGA is an efficient tool for solving the UKP and is competitive with other genetic algorithms.
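
A minimal GA sketch for the UKP with a problem-specific repair heuristic of the kind the abstract alludes to; the centre-of-mass selection operator itself is not reproduced, and plain truncation selection stands in for it (instance data is a toy example):

```python
import random

VALUES, WEIGHTS, CAPACITY = [10, 7, 4], [5, 4, 3], 20
BEST = min(range(len(VALUES)), key=lambda i: WEIGHTS[i] / VALUES[i])

def weight(c): return sum(n * w for n, w in zip(c, WEIGHTS))
def value(c): return sum(n * v for n, v in zip(c, VALUES))

def repair(c):
    """Heuristic operator: make a chromosome feasible, then greedily top it
    up with the best value-density item (problem-specific knowledge)."""
    c = list(c)
    while weight(c) > CAPACITY:
        i = random.choice([j for j in range(len(c)) if c[j] > 0])
        c[i] -= 1
    while weight(c) + WEIGHTS[BEST] <= CAPACITY:
        c[BEST] += 1
    return c

def ga(pop_size=30, gens=100, pm=0.2):
    pop = [repair([random.randint(0, 4) for _ in VALUES]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=value, reverse=True)
        elite = pop[: pop_size // 2]            # truncation selection stand-in
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(VALUES))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < pm:            # mutation
                j = random.randrange(len(VALUES))
                child[j] = max(0, child[j] + random.choice([-1, 1]))
            children.append(repair(child))
        pop = elite + children
    return max(pop, key=value)

print(ga())   # optimum here is four copies of item 0, i.e. value 40
```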

Keywords: Genetic Algorithm, Unbounded Knapsack Problem, Combinatorial Optimization, Meta-Heuristic, Center of Mass

590 Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis

Authors: Mert Bal, Hayri Sever, Oya Kalıpsız

Abstract:

Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference under lack of information and uncertainty. In such systems, various soft computing methods are used to model the uncertainty, such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrid methods formed from combinations of these. In this study, symptom-disease relationships are presented by a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to the attribute data sets in order to reduce the attributes and decrease the complexity of the computation.

Keywords: Formal Concept Analysis, Rough Set Theory, Granular Computing, Medical Decision Support System.

589 Investigation of Buoyant Parameters of k-ε Turbulence Model in Gravity Stratified Flows

Authors: A. Majid Bahari, Kourosh Hejazi

Abstract:

Different variants of the buoyancy-affected terms in the k-ε turbulence model have been utilized to predict the flow parameters more accurately and to investigate the applicability of alternative buoyant k-ε closures in the numerical simulation of a horizontal gravity current. The additional non-isotropic turbulent stress due to buoyancy has been considered in the production term, based on the Algebraic Stress Model (ASM). In order to account for turbulent scalar fluxes, the general gradient diffusion hypothesis has been used along with the Boussinesq gradient diffusion hypothesis, with a variable turbulent Schmidt number and an additional empirical constant c3ε. To simulate the buoyant flow domain, a 2D vertical numerical model (WISE, Width Integrated Stratified Environments) based on the Reynolds-Averaged Navier-Stokes (RANS) equations has been deployed, and the model has been further developed for the different k-ε turbulence closures. Results are compared against measured laboratory values of a saline gravity current to identify the most efficient turbulence model.
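
For reference, one common form of the buoyancy-extended k-ε equations that such closures vary is sketched below, with c3ε multiplying the buoyancy production in the ε equation as in the abstract; the exact production terms, constants and the ASM correction used in WISE differ, so this is a generic form, not the paper's model:

```latex
% z upward, g the gravitational acceleration magnitude, \sigma_t the
% turbulent Schmidt number; stable stratification (d\rho/dz < 0) gives
% G_b < 0 and damps turbulence.
\begin{align}
\frac{Dk}{Dt} &= \frac{\partial}{\partial x_j}\left[\Big(\nu + \frac{\nu_t}{\sigma_k}\Big)\frac{\partial k}{\partial x_j}\right] + P_k + G_b - \varepsilon, \\
\frac{D\varepsilon}{Dt} &= \frac{\partial}{\partial x_j}\left[\Big(\nu + \frac{\nu_t}{\sigma_\varepsilon}\Big)\frac{\partial \varepsilon}{\partial x_j}\right] + \frac{\varepsilon}{k}\big(C_{1\varepsilon} P_k + C_{3\varepsilon} G_b - C_{2\varepsilon}\varepsilon\big), \\
G_b &= \frac{g}{\rho_0}\,\frac{\nu_t}{\sigma_t}\,\frac{\partial \rho}{\partial z}.
\end{align}
```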

Keywords: Buoyant flows, Buoyant k-ε turbulence model, saline gravity current.

588 AC Signals Estimation from Irregular Samples

Authors: Predrag B. Petrović

Abstract:

The paper deals with the estimation of the amplitude and phase of an analogue multi-harmonic band-limited signal from irregularly spaced sampling values. To this end, assuming the signal's fundamental frequency is known in advance (i.e., estimated at an independent stage), a complexity-reduced algorithm for signal reconstruction in the time domain is proposed. The reduction in complexity is achieved owing to completely new analytical and summarized expressions that enable a quick estimation at a low numerical error. The proposed algorithm for the calculation of the unknown parameters requires O((2M+1)^2) flops, while the straightforward solution of the obtained equations takes O((2M+1)^3) flops (M is the number of harmonic components). It can be applied in signal reconstruction, spectral estimation, system identification, and other important signal processing problems. The proposed method of processing can be used for precise RMS measurements (for power and energy) of a periodic signal based on the presented signal reconstruction. The paper investigates the errors related to the signal parameter estimation, and a computer simulation demonstrates the accuracy of the algorithm.
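
This is not the authors' complexity-reduced algorithm; as a sketch of the underlying problem, the brute-force least-squares solve below is the O((2M+1)^3) baseline the paper improves on, fitting a DC term plus M harmonics of a known fundamental to irregular samples:

```python
import numpy as np

def estimate_harmonics(t, x, f0, M):
    """Least-squares fit of a DC term plus M harmonics of f0 to irregular
    samples (t, x); the unknown vector has 2M + 1 entries."""
    cols = [np.ones_like(t)]
    for k in range(1, M + 1):
        cols += [np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)]
    A = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(A, x, rcond=None)
    dc, ab = theta[0], theta[1:].reshape(M, 2)
    amp = np.hypot(ab[:, 0], ab[:, 1])
    phase = np.arctan2(-ab[:, 1], ab[:, 0])   # x ~ sum A_k cos(2πk f0 t + φ_k)
    return dc, amp, phase

# Irregular sampling of a two-harmonic signal with known f0 = 50 Hz.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 0.1, 200))
x = 1.0 + 2.0 * np.cos(2 * np.pi * 50 * t + 0.3) + 0.5 * np.cos(2 * np.pi * 100 * t)
print(estimate_harmonics(t, x, 50.0, 2))   # recovers dc=1, amps (2, 0.5)
```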

Keywords: Band-limited signals, Fourier coefficient estimation, analytical solutions, signal reconstruction, time.

587 Examination of the Reasons for the Formation of Red Oil in Spent Caustic from Olefin Plant

Authors: Mehdi Seifollahi, Ashkan Forootan, Sajjad Bahrami Reyhan

Abstract:

Due to the complexity of olefin plants, various environmental pollutants exist, such as NOx, CO2, tar water, and most importantly spent caustic. In this paper, instead of investigating ways of treating this pollutant, we evaluated its production in relation to the plant's process variables. We primarily discuss the factors affecting the quality of the output spent caustic, such as impurities in the feed of the olefin plant, the amount of dimethyl disulfide (DMDS) injected into the furnaces, variation in feed composition, differences among gas temperatures, and the concentration of the caustic solution at the bottom of the tower. Laboratory results proved that in the formation of red oil, 1,3-butadiene follows a free-radical mechanism and acetaldehyde an aldol condensation mechanism. By increasing the injection rate of DMDS, the amount of mercaptide in the effluent increases. In addition, pyrolysis gasoline accumulation is directly related to the caustic concentration in the tower. Increasing naphthenes in the liquid feed augments the amount of 1,3-butadiene, one of the sources of red oil formation. Increasing the oxygenated compounds in the feed increases the rate of formation of acetaldehyde, the main source of red oil.

Keywords: Olefin, spent caustic, red oil, caustic wash tower.

586 Evaluation of the ANN Based Nonlinear System Models in the MSE and CRLB Senses

Authors: M. V. Rajesh, Archana R., A. Unnikrishnan, R. Gopikakumari, Jeevamma Jacob

Abstract:

The system identification problem looks for a suitably parameterized model representing a given process. The parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process output. The linear system identification field is well established, with many classical approaches, whereas most of those methods cannot be applied to nonlinear systems. The problem becomes tougher if the system is completely unknown and only its output time series is available. It has been reported that the capability of artificial neural networks to approximate all linear and nonlinear input-output maps makes them particularly suitable for the identification of nonlinear systems where only the output time series is available [1][2][4][5]. The work reported here is an attempt to implement a few of the well-known algorithms in the context of modeling nonlinear systems, and to make a performance comparison establishing their relative merits and demerits.

Keywords: Multilayer neural networks, Radial Basis Functions, Clustering algorithm, Back Propagation training, Extended Kalman filtering, Mean Square Error, Nonlinear Modeling, Cramer-Rao Lower Bound.

585 Performance Improvement of Information System of a Banking System Based on Integrated Resilience Engineering Design

Authors: S. H. Iranmanesh, L. Aliabadi, A. Mollajan

Abstract:

Integrated resilience engineering (IRE) is capable of returning banking systems to their normal state under severe economic conditions. In this study, the information system of a large bank (with several branches) is assessed and optimized under such conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered as the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire is designed and distributed among executive managers, who are considered the decision-making units (DMUs). The reliability and validity of the questionnaire are examined based on Cronbach's alpha and a t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. Results of sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture together account for about 50 percent of the total weight.

Keywords: Banking system, data envelopment analysis, DEA, integrated resilience engineering, IRE, performance evaluation, perturbation analysis.

584 Photocatalytic Degradation of Organic Pollutant Reacting with Tungstates: Role of Microstructure and Size Effect on Oxidation Kinetics

Authors: A. Taoufyq, B. Bakiz, A. Benlhachemi, L. Patout, D. V. Chokouadeua, F. Guinneton, G. Nolibe, A. Lyoussi, J-R. Gavarri

Abstract:

The aim of this study was to investigate the photocatalytic activity of polycrystalline phases of bismuth tungstate, of formula Bi2WO6. Polycrystalline samples were elaborated using a coprecipitation technique followed by a calcination process at different temperatures (300, 400, 600 and 900°C). The obtained polycrystalline phases were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). The crystal cell parameters and cell volume depend on the elaboration temperature. High-resolution electron microscopy images and image simulations, associated with the X-ray diffraction data, confirmed the lattices and the space group Pca21. The photocatalytic activity of the as-prepared samples was studied by irradiating aqueous solutions of Rhodamine B associated with Bi2WO6 additives having variable crystallite sizes. The photocatalytic activity of these bismuth tungstates increased as the crystallite sizes decreased. The high specific area of the photocatalytic particles obtained at 300°C seems to govern the degradation kinetics of RhB.

Keywords: Bismuth tungstate, crystallite sizes, electron microscopy, photocatalytic activity, X-ray diffraction.

583 Optimal Transmission Network Usage and Loss Allocation Using Matrices Methodology and Cooperative Game Theory

Authors: Baseem Khan, Ganga Agnihotri

Abstract:

The restructuring of the electricity supply industry introduced many issues such as transmission pricing, transmission loss allocation and congestion management, and many methodologies and algorithms have been proposed to address them. In this paper a power flow tracing based method is proposed which uses a matrices methodology for allocating transmission usage and losses to generators and demands. The method provides loss allocation in a direct way, because all the computation has already been done for the usage allocation. The proposed method is simple and easy to implement in a large power system, and it is computationally light because it requires matrix inversion only once. After the usage and loss allocation, cooperative game theory is applied to the results to find efficient economic signals; the nucleolus and Shapley value approaches are used for the optimal allocation of the results. Results are shown for the IEEE 6-bus and IEEE 14-bus systems.
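
As a small illustration of the game-theoretic allocation step, here is an exact (brute-force) Shapley value over a hypothetical two-player loss-allocation game; the paper's power-flow-tracing characteristic function is not reproduced, and the costs below are made up:

```python
from itertools import permutations

def shapley(players, cost):
    """Exact Shapley value: average each player's marginal contribution over
    all orderings; cost maps a frozenset of players to the coalition cost.
    Feasible only for small player counts (factorial growth)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

# Hypothetical 2-generator loss-allocation game (costs in MW of losses).
g = ["G1", "G2"]
cost = {frozenset(): 0.0, frozenset({"G1"}): 4.0,
        frozenset({"G2"}): 6.0, frozenset({"G1", "G2"}): 8.0}
print(shapley(g, cost))   # {'G1': 3.0, 'G2': 5.0}, summing to the total 8.0
```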

Keywords: Modified Kirchhoff Matrix, Power flow tracing, Transmission Pricing, Transmission Loss Allocation.
