Search results for: invasive weed optimization algorithm
5101 Methods for Early Detection of Invasive Plant Species: A Case Study of Hueston Woods State Nature Preserve
Authors: Suzanne Zazycki, Bamidele Osamika, Heather Craska, Kaelyn Conaway, Reena Murphy, Stephanie Spence
Abstract:
Managing Invasive Plant Species (IPS) is an important component of the effective preservation and conservation of natural lands. IPS are non-native plants which can aggressively encroach upon native species and pose a significant threat to the ecology, public health, and social welfare of a community. The presence of IPS in U.S. nature preserves has caused economic costs estimated to exceed $26 billion a year. While different methods have been identified to control IPS, few methods have been recognized for early detection of IPS. This study examined methods for early detection of IPS in Hueston Woods State Nature Preserve. A mixed-methods research design was adopted in this four-phase study. The first phase entailed data gathering and described the characteristics and qualities of IPS and the importance of early detection (ED). The second phase explored ED methods; Geographic Information Systems (GIS) and citizen science were identified as ED methods for IPS. The third phase of the study involved the creation of hotspot maps to identify likely areas for IPS growth, while the fourth phase involved testing and evaluating mobile applications that can support the efforts of citizen scientists in IPS detection. Literature reviews were conducted on IPS and ED methods, and four regional experts from ODNR and Miami University were interviewed. A questionnaire was used to gather information about ED methods used across the state. The findings revealed that geospatial methods, including Unmanned Aerial Vehicles (UAVs), Multispectral Satellites (MSS), and the Normalized Difference Vegetation Index (NDVI), are not feasible for early detection of IPS, as they require GIS expertise, are still emerging technologies, and are not suitable for every habitat. Therefore, other ED methods were explored, including predicting areas where IPS will grow, which can be done by monitoring areas that are like the species’ native habitat. Through the literature review and interviews, IPS are known to grow in frequently disturbed areas such as along trails, shorelines, and streambanks. The research team called these areas “hotspots” and created maps of them specifically for Hueston Woods to support and narrow the efforts of citizen scientists and staff in the ED of IPS. The results further showed that utilizing citizen scientists in the ED of IPS is feasible, especially through single-day events or passive monitoring challenges. The study concluded that the creation of hotspot maps to direct the efforts of citizen scientists is effective for the early detection of IPS. Several recommendations were made, among which are creating hotspot maps to narrow ED efforts as citizen scientists continue to work in the preserves, and utilizing citizen science volunteers to identify and record emerging IPS.
Keywords: early detection, hueston woods state nature preserve, invasive plant species, hotspots
Procedia PDF Downloads 103
5100 Optimal Maintenance Policy for a Three-Unit System
Authors: A. Abbou, V. Makis, N. Salari
Abstract:
We study the condition-based maintenance (CBM) problem of a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1 deterioration follows a Markov process with two operational states and one failure state. The operational states are partially observable through periodic condition monitoring. (ii) Module 2 deterioration follows a Gamma process with a known failure threshold. The deterioration level of this module is fully observable through periodic inspections. (iii) Only operating-age information is available for Module 3. The lifetime of this module has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during the intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing the value iteration algorithm for solving the MDP. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.
Keywords: reliability, maintenance optimization, Markov decision process, heuristics
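Below is a minimal, self-contained sketch of the kind of computation the abstract's solution method rests on: relative value iteration for an average-cost MDP. The three-state model, transition matrices, and costs are illustrative placeholders, not the paper's three-unit system.

```python
# Relative value iteration for a small average-cost MDP (illustrative data).
import numpy as np

P = {  # P[a][s, s']: transition probabilities under action a
    "continue": np.array([[0.90, 0.08, 0.02],
                          [0.00, 0.85, 0.15],
                          [0.00, 0.00, 1.00]]),
    "repair":   np.array([[1.00, 0.00, 0.00],
                          [1.00, 0.00, 0.00],
                          [1.00, 0.00, 0.00]]),
}
cost = {"continue": np.array([0.0, 2.0, 50.0]),   # the failure state is costly
        "repair":   np.array([5.0, 5.0, 20.0])}   # intervention cost

h = np.zeros(3)                  # relative value function
for _ in range(1000):
    Q = np.array([cost[a] + P[a] @ h for a in ("continue", "repair")])
    h_new = Q.min(axis=0)
    g = h_new[0]                 # gain (long-run average cost) estimate
    h_new = h_new - g            # normalize against a reference state
    if np.abs(h_new - h).max() < 1e-10:
        break
    h = h_new

policy = [("continue", "repair")[i] for i in Q.argmin(axis=0)]
print(f"long-run average cost ≈ {g:.3f}; policy per state: {policy}")
```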
Procedia PDF Downloads 219
5099 Exergetic Optimization on Solid Oxide Fuel Cell Systems
Authors: George N. Prodromidis, Frank A. Coutelieris
Abstract:
Biogas can currently be considered an alternative option for electricity production, mainly due to its high energy content (hydrocarbon-rich source), its renewable status and its relatively low utilization cost. Solid Oxide Fuel Cell (SOFC) stacks convert a fuel's chemical energy to electricity with high efficiency and offer significant advantages in fuel flexibility combined with lower emission rates, especially when utilizing biogas. Electricity production from biogas constitutes a composite problem which incorporates an extensive parametric analysis of numerous dynamic variables. The main scope of the presented study is to propose a detailed thermodynamic model for the optimization of SOFC-based power plants' operation, based on fundamental thermodynamics and energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies that represent real-life operational conditions. THERMAS also offers the opportunity to choose from a great variety of values for each operational parameter individually, thus allowing for studies within unexplored and experimentally impossible operational ranges. Finally, THERMAS innovatively incorporates a specific criterion, concluded from the extensive energy analysis, to identify the most optimal scenario per simulated system in exergy terms. Several dynamic parameters as well as several biogas mixture compositions have been taken into account to cover all possible cases. Through the optimization process in terms of an innovative OPF (OPtimization Factor) presented here, this research study reveals that systems supplied by low-methane fuels can be comparable to those supplied by pure methane. To conclude, such a simulation model indicates a perspective on the optimal design of SOFC-stack-based systems, in the direction of the commercialization of systems utilizing biogas.
Keywords: biogas, exergy, efficiency, optimization
Procedia PDF Downloads 370
5098 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition
Authors: H. Mousavi, M. Sharifi, H. Pourvaziri
Abstract:
The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem which focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we have considered the redundancy allocation problem for a series system with interval-valued component reliabilities. Therefore, during the search process, the reliabilities of the components are treated as stochastic variables with lower and upper bounds. To optimize the problem, we propose a simulated annealing algorithm based on cloud theory (CBSAA). Also, Monte Carlo simulation (MCS) is embedded in the CBSAA to handle the random component reliabilities. This novel approach has been investigated through numerical examples, and the experimental results have shown that the CBSAA combined with MCS is an efficient tool to solve the RAP of systems with interval-valued component reliabilities.
Keywords: redundancy allocation problem, simulated annealing, cloud theory, monte carlo simulation
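A minimal sketch of the underlying idea, under assumed data: simulated annealing over redundancy levels of a series system, with Monte Carlo sampling of the interval-valued component reliabilities inside each fitness evaluation. The cooling schedule, bounds, costs, and budget are invented for illustration; the cloud-theory state generator of CBSAA is not reproduced.

```python
# Simulated annealing for a series-system RAP with interval reliabilities.
import math, random

r_lo = [0.80, 0.85, 0.75]       # lower reliability bounds per subsystem
r_hi = [0.95, 0.97, 0.90]       # upper reliability bounds
cost = [2.0, 3.0, 1.5]          # cost per redundant component
BUDGET = 25.0

def mc_reliability(n, samples=500):
    """Monte Carlo estimate of expected series-system reliability."""
    total = 0.0
    for _ in range(samples):
        rel = 1.0
        for i, k in enumerate(n):
            r = random.uniform(r_lo[i], r_hi[i])  # sample interval reliability
            rel *= 1.0 - (1.0 - r) ** k           # k parallel components
        total += rel
    return total / samples

def feasible(n):
    return sum(c * k for c, k in zip(cost, n)) <= BUDGET

cur = [1, 1, 1]
cur_rel = mc_reliability(cur)
best, best_rel = cur[:], cur_rel
T = 1.0
while T > 1e-3:
    cand = cur[:]
    i = random.randrange(3)
    cand[i] = max(1, cand[i] + random.choice((-1, 1)))  # neighbor move
    if feasible(cand):
        rel = mc_reliability(cand)
        # accept improvements always, deteriorations with Boltzmann probability
        if rel > cur_rel or random.random() < math.exp((rel - cur_rel) / T):
            cur, cur_rel = cand, rel
            if rel > best_rel:
                best, best_rel = cand[:], rel
    T *= 0.995                   # geometric cooling

print(f"best allocation {best}, estimated reliability {best_rel:.4f}")
```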
Procedia PDF Downloads 412
5097 Optimizing SCADA/RTU Control System Alarms for Gas Wells
Authors: Mohammed Ali Faqeeh
Abstract:
A SCADA system alarm optimization process has recently been introduced and applied in several stages. First, MODBUS communication protocols between the RTU and SCADA were improved at the level of I/O point scanning intervals. Then, some of the technical issues related to manufacturing limitations were resolved. Afterward, another approach was followed to decide on the configured alarm database: a series of meetings and workshops was held among all system stakeholders, which resulted in an agreement to disable unnecessary (diagnostic) alarms. Moreover, a further step was taken to segregate the SCADA operator graphics so that only process-related alarms are shown, while other graphics ensure the availability of field alarms related to maintenance and engineering purposes. This overall system management and optimization has had a strongly positive impact on operations, maintenance, and engineering. It has reduced unneeded open tickets for maintenance crews, which in turn reduced driven mileage accordingly. This practice has also improved operator reaction and response to emergency situations, as SCADA operators can stay vigilant for real alarms rather than being distracted by noisy ones. The SCADA system alarm optimization process was executed utilizing all applicable in-house resources among the engineering, maintenance, and operations crews, with the methodology for the entire enhanced scope performed in various stages.
Keywords: SCADA, RTU Communication, alarm management system, SCADA alarms, Modbus, DNP protocol
Procedia PDF Downloads 166
5096 Spectral Clustering for Manufacturing Cell Formation
Authors: Yessica Nataliani, Miin-Shen Yang
Abstract:
Cell formation (CF) is an important step in group technology. It is used in designing cellular manufacturing systems using similarities between parts in relation to machines, so that part families and machine groups can be identified. There are many CF methods in the literature, but spectral clustering has seen little use in CF. In this paper, we propose a spectral clustering algorithm for machine-part CF. Some experimental examples are used to illustrate its efficiency. Overall, the spectral clustering algorithm can be used in CF with a wide variety of machine/part matrices.
Keywords: group technology, cell formation, spectral clustering, grouping efficiency
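A minimal sketch of spectral clustering applied to a binary machine-part incidence matrix; the matrix, the shared-parts similarity, and the cluster count are illustrative assumptions, not the paper's data or exact formulation.

```python
# Spectral clustering of machines by shared parts (illustrative data).
import numpy as np
from sklearn.cluster import SpectralClustering

# rows = machines, columns = parts; 1 means the part visits the machine
A = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 1, 1]])

# similarity between machines = number of shared parts
S = A @ A.T
np.fill_diagonal(S, 0)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
print("machine groups:", labels)

# assign each part to the machine group it visits most often
part_groups = [np.bincount(labels[A[:, j] == 1]).argmax()
               for j in range(A.shape[1])]
print("part families:", part_groups)
```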
Procedia PDF Downloads 408
5095 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and applications or products of the free-form type can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, by using the parametric design methodology and by exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for both the technical and the environmental increasing demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
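As a toy illustration of one of the generative procedures named above, the following sketch runs a Lindenmayer system (L-system), where repeated rule rewriting grows a geometric description from a short axiom; the rule is invented and unrelated to the authors' column-generating shape grammar.

```python
# A tiny bracketed L-system: each iteration rewrites every "F" by its rule.
rules = {"F": "F[+F]F[-F]F"}       # branching rewrite rule (illustrative)
axiom = "F"

def generate(start, iterations):
    s = start
    for _ in range(iterations):
        # rewrite symbols that have a rule; leave "[", "]", "+", "-" as-is
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

for i in range(4):
    s = generate(axiom, i)
    preview = s[:48] + ("..." if len(s) > 48 else "")
    print(f"iteration {i}: {len(s):4d} symbols  {preview}")
```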
Procedia PDF Downloads 377
5094 Preparation of Chemically Activated Carbon from Waste Tire Char for Lead Ions Adsorption and Optimization Using Response Surface Methodology
Authors: Lucky Malise, Hilary Rutto, Tumisang Seodigeng
Abstract:
The use of tires in automobiles is very important to the automobile industry. However, there is a serious environmental problem concerning the disposal of these rubber tires once they become worn out. The main aim of this study was to prepare activated carbon from waste tire pyrolysis char by impregnating the pyrolytic char with KOH. Adsorption of lead onto the chemically activated carbon was studied using response surface methodology. The effects of process parameters such as temperature (°C), adsorbent dosage (g/1000 ml), pH, contact time (minutes) and initial lead concentration (mg/l) on the adsorption capacity were investigated. It was found that the adsorption capacity increases with an increase in contact time, pH and temperature, and decreases with an increase in lead concentration. Optimization of the process variables was done using a numerical optimization method. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), thermogravimetric analysis (TGA) and scanning electron microscopy were used to characterize the pyrolytic carbon char before and after activation. The optimum points of 1 g/100 ml for adsorbent dosage, 7 for solution pH, 115.2 min for contact time, 100 mg/l for initial metal concentration, and 25 °C for temperature were obtained to achieve the highest adsorption capacity of 93.176 mg/g with a desirability of 0.994. FTIR and TGA show the presence of oxygen-containing functional groups on the surface of the activated carbon produced and that the weight loss taking place during the activation step is small.
Keywords: waste tire pyrolysis char, chemical activation, central composite design (CCD), adsorption capacity, numerical optimization
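A minimal sketch of the response-surface workflow the abstract describes: fit a second-order model to central-composite-design runs, then locate the numerical optimum. Two coded factors and made-up responses stand in for the study's five factors and measured adsorption capacities.

```python
# Second-order RSM model on a 2-factor CCD, then numerical optimization.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# coded factor levels of a 2-factor CCD: factorial + axial + center points
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])
y = np.array([60.1, 72.4, 65.8, 80.2, 58.3, 78.9, 62.5, 70.7,
              88.0, 87.4, 88.6])        # e.g., adsorption capacity, mg/g

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

def neg_response(x):
    return -model.predict(quad.transform(x.reshape(1, -1)))[0]

opt = minimize(neg_response, x0=[0.0, 0.0], bounds=[(-1.414, 1.414)] * 2)
print(f"optimum (coded units) {opt.x}, predicted response {-opt.fun:.2f}")
```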
Procedia PDF Downloads 226
5093 Control of a Quadcopter Using Genetic Algorithm Methods
Authors: Mostafa Mjahed
Abstract:
This paper concerns the control of a nonlinear system using two different methods: a reference model and a genetic algorithm. The quadcopter is a nonlinear, unstable system belonging to the class of aerial robots. It consists of four rotors placed at the ends of a cross, whose center is occupied by the control circuit. Its motion is governed by six degrees of freedom: three rotations around the three axes (roll, pitch and yaw) and three spatial translations. The control of such a system is complex because of the nonlinearity of its dynamic representation and the number of parameters involved. Numerous studies have been developed to model and stabilize such systems. The classical PID and LQ correction methods are widely used. While these have the advantage of simplicity because they are linear, they have the drawback of requiring a linear model for synthesis. This also complicates the resulting control laws, because they must be extended over the quadcopter's entire flight envelope. Note that, while classical design methods are widely used to control aeronautical systems, artificial intelligence methods such as the genetic algorithm technique receive little attention. In this paper, we compare two PID design methods. First, the parameters of the PID are calculated according to the reference model. In a second phase, these parameters are established using genetic algorithms. By reference model, we mean that the corrected system behaves according to a reference system imposed by some specifications: settling time, zero overshoot, etc. Inspired by Darwin's theory of natural evolution advocating survival of the fittest, John Holland developed this evolutionary algorithm. A genetic algorithm (GA) possesses three basic operators: selection, crossover and mutation. We start the iterations with an initial population, and each member of this population is evaluated through a fitness function. Our purpose is to correct the behavior of the quadcopter around the three axes (roll, pitch and yaw) with three PD controllers; for the altitude, we adopt a PID controller.
Keywords: quadcopter, genetic algorithm, PID, fitness, model, control, nonlinear system
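A minimal sketch of the GA-based tuning idea, under assumed dynamics: PD gains are evolved against the tracking error of a toy double-integrator axis, a crude stand-in for one quadcopter attitude axis rather than the paper's model.

```python
# GA tuning of PD gains; fitness = negative integral of absolute error.
import random

def fitness(kp, kd, dt=0.01, steps=500):
    theta, omega, err_sum = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - theta                 # unit step reference
        torque = kp * err - kd * omega    # PD control law
        omega += torque * dt              # double-integrator dynamics
        theta += omega * dt
        err_sum += abs(err) * dt          # IAE criterion
    return -err_sum                       # higher fitness = lower error

pop = [(random.uniform(0, 50), random.uniform(0, 20)) for _ in range(30)]
for _ in range(40):
    scored = sorted(pop, key=lambda g: fitness(*g), reverse=True)
    parents = scored[:10]                              # selection
    pop = parents[:]
    while len(pop) < 30:
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]    # crossover
        if random.random() < 0.3:                      # mutation
            i = random.randrange(2)
            child[i] = max(0.0, child[i] + random.gauss(0, 2))
        pop.append(tuple(child))

best = max(pop, key=lambda g: fitness(*g))
print(f"best gains: Kp={best[0]:.2f}, Kd={best[1]:.2f}")
```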
Procedia PDF Downloads 431
5092 Optimal Portfolio Selection under Treynor Ratio Using Genetic Algorithms
Authors: Imad Zeyad Ramadan
Abstract:
In this paper, a genetic algorithm was developed to construct the optimal portfolio based on the Treynor method. The GA maximizes the Treynor ratio under a budget constraint to select the best allocation of the budget among the companies in the portfolio. The results show that the GA was able to construct a conservative portfolio which includes companies from all three sectors. This indicates that the GA reduced the risk to the investor, as it chose some companies with positive risks (moving with the market) and some with negative risks (moving against the market).
Keywords: optimization, genetic algorithm, portfolio selection, Treynor method
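A minimal sketch of the approach under invented data: a small GA evolves budget-normalized weights to maximize the Treynor ratio. The returns and betas are made up, and the floor on portfolio beta is an added assumption to keep the ratio bounded.

```python
# GA maximizing the Treynor ratio (rp - rf) / beta_p under a budget constraint.
import random

ret  = [0.12, 0.08, 0.15, 0.05, 0.10]   # expected returns per company
beta = [1.20, 0.70, 1.50, -0.30, 0.90]  # market betas (one negative-risk pick)
RF = 0.03                               # risk-free rate

def treynor(w):
    rp = sum(wi * ri for wi, ri in zip(w, ret))
    bp = sum(wi * bi for wi, bi in zip(w, beta))
    if bp < 0.2:        # keep portfolio beta away from zero (assumed floor)
        return -1e9
    return (rp - RF) / bp

def normalize(w):       # enforce the budget constraint: weights sum to 1
    s = sum(w)
    return [wi / s for wi in w]

pop = [normalize([random.random() for _ in ret]) for _ in range(50)]
for _ in range(100):
    pop.sort(key=treynor, reverse=True)
    parents = pop[:15]
    pop = parents[:]
    while len(pop) < 50:
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]            # crossover
        i = random.randrange(len(child))
        child[i] = max(0.0, child[i] + random.gauss(0, 0.05))  # mutation
        pop.append(normalize(child))

best = max(pop, key=treynor)
print("weights:", [round(w, 3) for w in best],
      "Treynor ratio:", round(treynor(best), 4))
```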
Procedia PDF Downloads 449
5091 Transcranial Magnetic Stimulation as a Potentiator in the Rehabilitation of Fine Motor Skills: A Literature Review
Authors: Ana Lucia Molina
Abstract:
Introduction: Fine motor skills refer to the use of the hands and the coordination of the small muscles that control the fingers. A deficiency in fine motor skills is as important as a change in global movements, as fine motor skills directly affect activities of daily living. Fine movements are involved in functions such as motor control of the extremities and the sensitivity, strength and tonus of the hands. Growing interest in the effects of non-invasive neuromodulation, such as transcranial stimulation technologies via transcranial magnetic stimulation (TMS), has been observed in the scientific literature, with promising results in fine motor rehabilitation, as TMS modulates cortical activity in the primary motor area of the hands in both hemispheres (according to the 10-20 International System, corresponding to C3 and C4). Objectives: to carry out a literature review of the effects of TMS on the cortical motor area corresponding to hand motricity. Methodology: This is a bibliographic survey carried out between October 2022 and March 2023 using PubMed, Google Scholar, LILACS and the Virtual Health Library (BVS), with national and international databases. Some books on neuromodulation were included. Results: 28 articles and 5 books were initially found; after reading the abstracts, only 14 articles and 3 books, with publication dates between 2008 and 2022, were selected to compose the literature review, since they suited the purpose of this study. Conclusion: TMS has shown promising results in fine motor rehabilitation, such as improving coordination, muscle strength and range of motion of the hands, being a technique complementary to existing treatments and thus providing more potent results for manual skills in activities of daily living. It is important to emphasize the need for more specific studies on the application of TMS for the treatment of manual disorders, describing the uniqueness of each movement.
Keywords: transcranial magnetic stimulation, fine motor skills, motor rehabilitation, non-invasive neuromodulation
Procedia PDF Downloads 73
5090 Design and Analysis of Solar Powered Plane
Authors: Malarvizhi, Venkatesan
Abstract:
This paper summarizes the design and optimization of a solar-powered unmanned aerial vehicle. The purpose of this research is to increase range and endurance. Such a vehicle can be used for environmental research, aerial photography, search and rescue missions and surveillance on other planets. The ultimate aim of this research is to design and analyze the solar-powered plane in order to determine lift, drag and other parameters using CFD analysis. A numerical investigation has also been done to compare the results for Earth's atmosphere with those for the Martian atmosphere. This is the approach taken to check whether the solar-powered plane could glide on the planet Mars using renewable (solar) energy.
Keywords: optimization, range, endurance, surveillance, lift and drag parameters
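A small worked example of the level-flight lift balance behind the Earth-versus-Mars comparison (not the paper's CFD): with surface air density roughly 60 times lower on Mars, the same airframe needs a much higher flight speed. All aircraft numbers are illustrative.

```python
# Level flight: L = 0.5 * rho * v^2 * S * CL must equal weight on each planet.
RHO_EARTH, RHO_MARS = 1.225, 0.020     # kg/m^3, approximate surface densities
S, CL, CD = 12.0, 1.1, 0.05            # wing area, lift and drag coefficients
MASS, G_EARTH, G_MARS = 18.0, 9.81, 3.71

for name, rho, g in (("Earth", RHO_EARTH, G_EARTH), ("Mars", RHO_MARS, G_MARS)):
    weight = MASS * g
    v = (2 * weight / (rho * S * CL)) ** 0.5   # speed for lift = weight
    drag = 0.5 * rho * v**2 * S * CD
    print(f"{name}: level-flight speed {v:5.1f} m/s, drag {drag:5.1f} N")
```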
Procedia PDF Downloads 460
5089 Optimization and Feasibility Analysis of a PV/Wind/ Battery Hybrid Energy Conversion
Authors: Doaa M. Atia, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassan T. Dorra
Abstract:
In this paper, the optimum design of a renewable energy system powering an aquaculture pond was determined. The Hybrid Optimization Model for Electric Renewables (HOMER) software, developed by the U.S. National Renewable Energy Laboratory (NREL), is used for analyzing the feasibility of the stand-alone and hybrid systems in this study. The HOMER program determines whether the renewable energy resources satisfy the hourly electric demand. The program calculates the energy balance for each of the 8,760 hours in a year to simulate operation of the system. This optimization compares the electrical energy demand for each hour of the year with the energy supplied by the system for that hour, and calculates the relevant energy flow for each component in the model. The essential principle is to minimize the total system cost while HOMER ensures control of the system. Moreover, the feasibility analysis of the energy system is also studied. Wind speed, solar irradiance, interest rate and capacity shortage are the parameters taken into consideration. The simulation results indicate that the hybrid system is the best choice in this study, yielding a lower net present cost. Thus, it provides higher system performance than PV or wind stand-alone systems.
Keywords: wind stand-alone system, photovoltaic stand-alone system, hybrid system, optimum system sizing, feasibility, cost analysis
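A minimal sketch of the hour-by-hour energy balance such HOMER-style sizing rests on: PV and wind output are compared with the load for each of the 8,760 hours, with a battery absorbing the mismatch (losses ignored). The resource profiles, load shape, and component sizes are synthetic assumptions.

```python
# 8,760-hour energy balance for a candidate PV/wind/battery system.
import math, random

pv_kw, wind_kw, batt_kwh = 5.0, 3.0, 20.0     # candidate system sizes
soc, shortage, spill = batt_kwh / 2, 0.0, 0.0

for hour in range(8760):
    h = hour % 24
    sun = max(0.0, math.sin(math.pi * (h - 6) / 12))      # daylight shape
    pv = pv_kw * sun
    wind = wind_kw * max(0.0, random.gauss(0.35, 0.2))    # noisy wind output
    load = 2.0 + (1.5 if 18 <= h <= 22 else 0.0)          # evening peak, kW

    net = pv + wind - load
    if net >= 0:                                  # surplus: charge the battery
        charge = min(net, batt_kwh - soc)
        soc += charge
        spill += net - charge
    else:                                         # deficit: discharge it
        draw = min(-net, soc)
        soc -= draw
        shortage += -net - draw

demand = 8760 * 2.0 + 5 * 1.5 * 365               # base load + evening peaks
print(f"unmet load: {shortage:.0f} kWh, spilled energy: {spill:.0f} kWh")
print(f"capacity shortage fraction: {shortage / demand:.2%}")
```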
Procedia PDF Downloads 340
5088 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for an Apriori algorithm based on multiple machines. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, Reduced-All-Apriori, on Apache Flink, and comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks in MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows starting the next iteration as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: apriori, apache flink, MapReduce, spark, Hadoop, R-Apriori, frequent itemset mining
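For reference, a minimal single-machine Apriori sketch showing the candidate-generation and support-counting iteration that the paper parallelizes on Flink; the transactions and support threshold are illustrative.

```python
# Level-wise Apriori: count candidates, keep frequent ones, join to next level.
transactions = [{"bread", "milk"}, {"bread", "butter", "milk"},
                {"beer", "bread"}, {"beer", "milk"},
                {"bread", "butter", "milk", "beer"}]
MIN_SUPPORT = 3

def count(candidates):
    return {c: sum(1 for t in transactions if c <= t) for c in candidates}

level = {frozenset([i]) for t in transactions for i in t}   # 1-itemsets
frequent = {}
while level:
    freq_k = {c: n for c, n in count(level).items() if n >= MIN_SUPPORT}
    if not freq_k:
        break
    frequent.update(freq_k)
    k = len(next(iter(freq_k))) + 1
    # join step: unite frequent (k-1)-itemsets whose union has exactly k items
    level = {a | b for a in freq_k for b in freq_k if len(a | b) == k}

for itemset, support in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(set(itemset), support)
```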
Procedia PDF Downloads 294
5087 Modelling Fluoride Pollution of Groundwater Using Artificial Neural Network in the Western Parts of Jharkhand
Authors: Neeta Kumari, Gopal Pathak
Abstract:
Artificial neural networks have proved to be an efficient tool for non-parametric modeling of data in various applications where the output is non-linearly associated with the input. They are a preferred tool for many predictive data mining applications because of their power, flexibility, and ease of use. A standard feed-forward network (FFN) is used to predict the groundwater fluoride content. The ANN model is trained using the backpropagation algorithm with tansig and logsig activation functions and varying numbers of neurons. The models are evaluated on the basis of statistical performance criteria such as Root Mean Squared Error (RMSE), regression coefficient (R²), bias (mean error), coefficient of variation (CV), Nash-Sutcliffe efficiency (NSE), and the index of agreement (IOA). The results of the study indicate that an artificial neural network (ANN) can be used for groundwater fluoride prediction with sufficiently good accuracy in limited-data situations in hard rock regions such as the western parts of Jharkhand.
Keywords: Artificial neural network (ANN), FFN (Feed-forward network), backpropagation algorithm, Levenberg-Marquardt algorithm, groundwater fluoride contamination
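A minimal sketch of the modeling-plus-metrics workflow on synthetic data: scikit-learn's MLPRegressor with a tanh hidden layer stands in for the MATLAB-style tansig/logsig networks trained with Levenberg-Marquardt, and RMSE, R², and NSE are computed as named in the abstract's criteria.

```python
# Feed-forward network regression with RMSE / R^2 / NSE evaluation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))          # e.g., pH, EC, depth, alkalinity
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)
pred = net.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
r2 = r2_score(y_te, pred)
nse = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"RMSE={rmse:.3f}  R2={r2:.3f}  NSE={nse:.3f}")
```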
Procedia PDF Downloads 550
5086 Probabilistic Gathering of Agents with Simple Sensors: Distributed Algorithm for Aggregation of Robots Equipped with Binary On-Board Detectors
Authors: Ariel Barel, Rotem Manor, Alfred M. Bruckstein
Abstract:
We present a probabilistic gathering algorithm for agents that can only detect the presence of other agents in front of or behind them. The agents act in the plane and are identical and indistinguishable, oblivious, and lack any means of direct communication. They do not have a common frame of reference in the plane and choose their orientation (direction of possible motion) at random. The analysis of the gathering process assumes that the agents act synchronously in selecting random orientations that remain fixed during each unit time-interval. Two algorithms are discussed. The first one assumes discrete jumps based on the sensing results given the randomly selected motion direction, and in this case, extensive experimental results exhibit probabilistic clustering into a circular region with radius equal to the step-size in time proportional to the number of agents. The second algorithm assumes agents with continuous sensing and motion, and in this case, we can prove gathering into a very small circular region in finite expected time.
Keywords: control, decentralized, gathering, multi-agent, simple sensors
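A minimal simulation sketch of the first (discrete-jump) variant under simplified rules: each agent draws a random heading, senses only the presence of others ahead of or behind it, and jumps one step when all visible agents lie on one side. The exact jump rules are assumptions, not the paper's algorithm.

```python
# Synchronous rounds of random-orientation, binary-sensing jumps.
import math, random

N, STEP, ROUNDS = 20, 1.0, 3000
agents = [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(N)]

def max_spread(pts):
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return max(math.hypot(x - cx, y - cy) for x, y in pts)

for _ in range(ROUNDS):
    new_pos = []
    for i, (x, y) in enumerate(agents):
        a = random.uniform(0, 2 * math.pi)       # random orientation
        ux, uy = math.cos(a), math.sin(a)
        ahead = any((px - x) * ux + (py - y) * uy > 0
                    for j, (px, py) in enumerate(agents) if j != i)
        behind = any((px - x) * ux + (py - y) * uy < 0
                     for j, (px, py) in enumerate(agents) if j != i)
        if ahead and not behind:                 # all others in front: jump forward
            new_pos.append((x + STEP * ux, y + STEP * uy))
        elif behind and not ahead:               # all others behind: jump backward
            new_pos.append((x - STEP * ux, y - STEP * uy))
        else:
            new_pos.append((x, y))               # agents on both sides: stay
    agents = new_pos

print(f"max distance from centroid after {ROUNDS} rounds: {max_spread(agents):.2f}")
```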
Procedia PDF Downloads 164
5085 A Fully Interpretable Deep Reinforcement Learning-Based Motion Control for Legged Robots
Authors: Haodong Huang, Zida Zhao, Shilong Sun, Chiyao Li, Wenfu Xu
Abstract:
Control methods for legged robots based on deep reinforcement learning have seen widespread application; however, the inherent black-box nature of neural networks makes it challenging to understand the decision-making motives of the robots. To address this issue, we propose a fully interpretable deep reinforcement learning training method to elucidate the underlying principles of legged robot motion. We incorporate the dynamics of legged robots into the policy, where observations serve as inputs and actions as outputs of the dynamics model. By embedding the dynamics equations within the multi-layer perceptron (MLP) computation process and making their parameters trainable, we enhance interpretability. Additionally, Bayesian optimization is introduced to train these parameters. We validate the proposed fully interpretable motion control algorithm on a legged robot, opening new research avenues for motion control and learning algorithms within the deep learning framework.
Keywords: deep reinforcement learning, interpretation, motion control, legged robots
Procedia PDF Downloads 21
5084 Jamun Juice Extraction Using Commercial Enzymes and Optimization of the Treatment with the Help of Physicochemical, Nutritional and Sensory Properties
Authors: Payel Ghosh, Rama Chandra Pradhan, Sabyasachi Mishra
Abstract:
Jamun (Syzygium cuminii L.) is an important indigenous minor fruit of high medicinal value. Jamun cultivation is unorganized, and a huge amount of this fruit is lost every year. The perishable nature of the fruit makes its postharvest management even more difficult. Due to the strong cell-wall structure of pectin-protein bonds and the hard seeds, extraction of the juice is difficult. Enzymatic treatment has been used commercially to improve juice quality with high yield. The objective of the study was to find the best treatment method for juice extraction. Enzymes (pectinase and tannase) from different strains were used, and for each enzyme the best result was obtained using response surface methodology. Optimization was done on the basis of physicochemical properties, nutritional properties, sensory quality and cost estimation. According to the quality aspects, cost analysis and sensory evaluation, the optimal enzymatic treatment was obtained with pectinase from an Aspergillus aculeatus strain. The optimum condition for the treatment was 44 °C for 80 minutes at a concentration of 0.05% (w/w). Under these conditions, a 75% yield was obtained, with turbidity of 32.21 NTU, clarity of 74.39 %T, polyphenol content of 115.31 mg GAE/g and protein content of 102.43 mg/g, with a significant difference in overall acceptability.
Keywords: enzymatic treatment, Jamun, optimization, physicochemical property, sensory analysis
Procedia PDF Downloads 296
5083 Frequent Itemset Mining Using Rough-Sets
Authors: Usman Qamar, Younus Javed
Abstract:
Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data: what products were often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, web log (clickstream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data increase, the time and resources required to mine them increase at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER for frequent itemset mining can produce a speed-up of 3.1 times compared to the original algorithm while maintaining an accuracy of 71%.
Keywords: rough-sets, classification, feature selection, entropy, outliers, frequent itemset mining
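A minimal sketch of the entropy half of a FASTER-style pre-processor: binary attributes are ranked by information gain against a class label, and only the most informative are kept before mining. The rough-set record-reduction step is omitted, and the toy data are invented.

```python
# Information-gain ranking of binary attributes for feature selection.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    gain = entropy(labels)
    for v in (0, 1):
        subset = [l for r, l in zip(rows, labels) if r[attr] == v]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

rows = [(1, 0, 1), (1, 1, 1), (0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 0, 0)]
labels = [1, 1, 1, 0, 0, 0]

gains = {a: info_gain(rows, labels, a) for a in range(3)}
keep = sorted(gains, key=gains.get, reverse=True)[:2]   # keep top-2 attributes
print("information gains:", gains)
print("selected attributes:", keep)
reduced = [tuple(r[a] for a in keep) for r in rows]
print("reduced records:", reduced)
```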
Procedia PDF Downloads 437
5082 Seamless Mobility in Heterogeneous Mobile Networks
Authors: Mohab Magdy Mostafa Mohamed
Abstract:
The objective of this paper is to introduce a vertical handover (VHO) algorithm between wireless LANs (WLANs) and LTE mobile networks. The proposed algorithm is based on fuzzy control theory and takes into consideration the power level, subscriber velocity, and target cell load, instead of only the power level used in traditional algorithms. Simulation results show that network performance, in terms of the number of handovers and the handover occurrence distance, is improved.
Keywords: vertical handover, fuzzy control theory, power level, speed, target cell load
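A minimal sketch of a fuzzy handover decision combining the three inputs the abstract names (power level, subscriber speed, target cell load). The membership functions, rule base, and defuzzification are illustrative assumptions, not the paper's design.

```python
# Tiny fuzzy inference: weighted-average defuzzification over four rules.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def handover_score(power_dbm, speed_kmh, load_pct):
    weak   = tri(power_dbm, -110, -95, -80)
    strong = tri(power_dbm, -90, -70, -50)
    slow   = tri(speed_kmh, -1, 0, 60)
    fast   = tri(speed_kmh, 40, 120, 200)
    light  = tri(load_pct, -1, 0, 70)
    heavy  = tri(load_pct, 50, 100, 151)

    rules = [
        (min(weak, light),  0.9),   # weak signal, lightly loaded target -> HO
        (min(weak, heavy),  0.4),   # weak signal but congested target
        (min(strong, slow), 0.1),   # strong signal, pedestrian -> stay
        (fast,              0.2),   # fast users avoid ping-pong handovers
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(f"HO score: {handover_score(-100, 5, 30):.2f}")   # weak WLAN, light LTE
print(f"HO score: {handover_score(-65, 90, 80):.2f}")   # strong, fast, loaded
```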
Procedia PDF Downloads 353
5081 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum
Authors: Abdulrahman Sumayli, Saad M. AlShahrani
Abstract:
For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: support vector regression (SVR), Gaussian process regression (GPR), and Bayesian ridge regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the whale optimization algorithm (WOA). We evaluated the models using a dataset of solubility measurements in various feedstocks, and we compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. Additionally, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.
Keywords: temperature, pressure variations, machine learning, oil treatment
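A minimal sketch of the SVR-with-metaheuristic-tuning idea on synthetic (temperature, pressure, feedstock) data: a plain random search stands in for the whale optimization algorithm, and the data-generating function is invented.

```python
# SVR hyper-parameter search scored by cross-validated RMSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
T = rng.uniform(150, 350, 300)            # temperature, deg C
P = rng.uniform(1.2, 10.8, 300)           # pressure, MPa
feed = rng.integers(0, 4, 300)            # feedstock index (4 heavy oils)
X = np.column_stack([T, P, feed])
y = 0.01 * P * (1 + 0.002 * T) + 0.005 * feed + rng.normal(0, 0.002, 300)

best_score, best_params = -np.inf, None
for _ in range(40):                        # random search over (C, gamma, eps)
    params = dict(C=10 ** rng.uniform(-1, 3),
                  gamma=10 ** rng.uniform(-3, 1),
                  epsilon=10 ** rng.uniform(-4, -1))
    score = cross_val_score(SVR(**params), X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    if score > best_score:
        best_score, best_params = score, params

print(f"best CV RMSE: {-best_score:.4f}")
print("best hyper-parameters:",
      {k: round(v, 4) for k, v in best_params.items()})
```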
Procedia PDF Downloads 69
5080 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
In this paper, we develop a multi-objective supplier selection and order allocation model in a stochastic environment, in which the purchasing cost, the percentage of items delivered late, and the percentage of rejected items provided by each supplier are stochastic parameters following arbitrary probability distributions. To do so, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, total late deliveries, and total rejected items are less than or equal to pre-determined values given by the decision maker. After transforming the above stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the resulting single-objective problem. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. At the end, we explore the impact of the stochastic parameters on the obtained solution via a sensitivity analysis based on the coefficient of variation. The results show that as the stochastic parameters' coefficients of variation grow, the value of the objective function in the stochastic single-objective programming problem worsens.
Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection
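A minimal sketch of the dependent-chance idea: a GA searches order allocations whose fitness is a Monte Carlo estimate of the probability that total cost, late deliveries, and rejects all stay below the decision maker's thresholds. Supplier data, thresholds, and distributions are invented.

```python
# GA over order allocations; fitness = simulated chance of meeting all targets.
import random

suppliers = [(10.0, 0.05, 0.02),   # (mean unit cost, late rate, reject rate)
             (9.0, 0.10, 0.05),
             (11.0, 0.02, 0.01)]
DEMAND, MAX_COST, MAX_LATE, MAX_REJECT = 100, 1060.0, 8.0, 3.5

def chance(order, samples=300):
    """Monte Carlo estimate of P(all three targets are met)."""
    hits = 0
    for _ in range(samples):
        tot_cost = tot_late = tot_rej = 0.0
        for q, (c, l, r) in zip(order, suppliers):
            tot_cost += q * random.gauss(c, 0.05 * c)
            tot_late += q * max(0.0, random.gauss(l, 0.3 * l))
            tot_rej  += q * max(0.0, random.gauss(r, 0.3 * r))
        hits += (tot_cost <= MAX_COST and tot_late <= MAX_LATE
                 and tot_rej <= MAX_REJECT)
    return hits / samples

def random_order():
    a = random.randint(0, DEMAND)
    b = random.randint(0, DEMAND - a)
    return (a, b, DEMAND - a - b)          # allocations sum to the demand

def crossover(a, b):
    child = [(x + y) // 2 for x, y in zip(a, b)]
    child[0] += DEMAND - sum(child)        # repair so the demand is still met
    return child

pop = [random_order() for _ in range(30)]
for _ in range(25):
    pop.sort(key=chance, reverse=True)
    parents = pop[:10]
    pop = parents[:]
    while len(pop) < 30:
        child = crossover(*random.sample(parents, 2))
        if random.random() < 0.3:          # mutation: shift units between suppliers
            i, j = random.sample(range(3), 2)
            moved = random.randint(0, child[i])
            child[i] -= moved
            child[j] += moved
        pop.append(tuple(child))

best = max(pop, key=chance)
print("order allocation:", best,
      "estimated chance:", round(chance(best, 2000), 3))
```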
Procedia PDF Downloads 256
5079 Pioneering Conservation of Aquatic Ecosystems under Australian Law
Authors: Gina M. Newton
Abstract:
Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., ecosystem-like units) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: the contribution of invasive species to conservation status; how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and the identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State levels remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of the protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place. There remains much opposition to the listing of freshwater systems: for example, the River Murray (Australia's largest river) and the Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing, mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least over immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity, ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome from this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged: while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.
Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species
Procedia PDF Downloads 132
5078 Evolved Bat Algorithm Based Adaptive Fuzzy Sliding Mode Control with LMI Criterion
Authors: P.-W. Tsai, C.-Y. Chen, C.-W. Chen
Abstract:
In this paper, the stability analysis of a GA-based adaptive fuzzy sliding mode controller for a nonlinear system is discussed. First, a nonlinear plant is well-approximated and described with a reference model and a fuzzy model, both involving FLC rules. Then, the FLC rules and the consequent parameters are decided via an Evolved Bat Algorithm (EBA). After this, we guarantee a new tracking performance inequality for the control system. The tracking problem is characterized as solving an eigenvalue problem (EVP). Next, an adaptive fuzzy sliding mode controller (AFSMC) is proposed to stabilize the system so as to achieve good control performance. Lyapunov's direct method can be used to ensure the stability of the nonlinear system. It is shown that the stability analysis can reduce nonlinear systems to a linear matrix inequality (LMI) problem. Finally, a numerical simulation is provided to demonstrate the control methodology.
Keywords: adaptive fuzzy sliding mode control, Lyapunov direct method, swarm intelligence, evolved bat algorithm
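A minimal sketch of the LMI side of such an analysis: searching for P > 0 with AᵀP + PA < 0 to certify stability of a (for example, fuzzy-rule local) linear model, here via CVXPY; the system matrix is illustrative, and the fuzzy/EBA machinery is not reproduced.

```python
# Lyapunov LMI feasibility check: find P > 0 with A^T P + P A < 0.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # a stable local linear model (assumed)

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),              # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]  # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("status:", prob.status)
print("P =\n", np.round(P.value, 3))
```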
Procedia PDF Downloads 445
5077 A Review on Control of a Grid Connected Permanent Magnet Synchronous Generator Based Variable Speed Wind Turbine
Authors: Eman M. Eissa, Hany M. Hasanin, Mahmoud Abd-Elhamid, S. M. Muyeen, T. Fernando, H. H. C. Iu
Abstract:
Among all available wind energy conversion systems (WECS), the direct-driven permanent magnet synchronous generator integrated with power electronic interfaces is becoming popular due to its capability of optimal energy capture, reduced mechanical stresses, no need for external excitation current (meaning fewer losses), more compact size, simple structure, and low maintenance cost; its decoupled control performance is also much less sensitive to the parameter variations of the generator. This paper presents a review of the control and optimization strategies of WECS based on the permanent magnet synchronous generator (PMSG) and gives an overview of the most recent research trends in this field. The main aims of this review include the generalized overall WECS, starting from turbines and generators, through control strategies including converters and maximum power point tracking (MPPT), and ending with DC-link control. The optimization methods for the controller parameters necessary to guarantee efficient and safe operation of the system, especially when connected to the power grid, are also presented.
Keywords: control and optimization techniques, permanent magnet synchronous generator, variable speed wind turbines, wind energy conversion system
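A minimal sketch of one MPPT strategy such reviews cover, the optimal-torque law T = k_opt ω², driving a single-inertia rotor model toward the optimum tip-speed ratio; the turbine coefficients and the Cp(λ) curve are illustrative assumptions.

```python
# Optimal-torque MPPT on a one-inertia turbine model.
import math

RHO, R, LAMBDA_OPT, CP_MAX = 1.225, 40.0, 8.1, 0.48   # air density, radius, ...
K_OPT = 0.5 * RHO * math.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def aero_torque(v_wind, omega):
    """Turbine torque from a crude Cp(lambda) curve peaked at LAMBDA_OPT."""
    lam = omega * R / v_wind
    cp = max(0.0, CP_MAX * (1 - ((lam - LAMBDA_OPT) / LAMBDA_OPT) ** 2))
    power = 0.5 * RHO * math.pi * R**2 * cp * v_wind**3
    return power / max(omega, 0.05)

J, omega, dt = 4.0e6, 1.0, 0.05             # rotor inertia, state, time step
for step in range(8000):                    # 400 s of simulated operation
    v = 8.0 if step < 4000 else 11.0        # step change in wind speed
    torque_gen = K_OPT * omega**2           # optimal-torque MPPT law
    omega += (aero_torque(v, omega) - torque_gen) / J * dt

print(f"rotor speed {omega:.2f} rad/s; "
      f"optimum {LAMBDA_OPT * 11.0 / R:.2f} rad/s")
```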
Procedia PDF Downloads 224
5076 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After the operation of different energy systems is calculated in simulation models in a first stage, in terms of the resulting final energy demands, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures thanks to the efficiency of MILP solvers, but it necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
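A minimal MILP sketch of the pathway problem, under invented data: binary variables pick at most one modernization per building and year, annual budgets cap spending, and total emissions over the horizon are minimized. PuLP with the bundled CBC solver stands in for whatever solver stack the study actually uses.

```python
# MILP modernization pathway: which building to retrofit in which year.
import pulp

buildings = ["A", "B", "C"]
years = [2025, 2026, 2027]
cost = {"A": 80, "B": 120, "C": 60}          # modernization cost, kEUR
base_em = {"A": 30, "B": 45, "C": 20}        # yearly emissions, tCO2
saved = {"A": 18, "B": 30, "C": 10}          # yearly reduction once modernized
budget = {2025: 100, 2026: 130, 2027: 100}   # yearly budget, kEUR

m = pulp.LpProblem("modernization_pathway", pulp.LpMinimize)
x = pulp.LpVariable.dicts("retrofit", (buildings, years), cat="Binary")

# objective: total emissions over the horizon;
# savings count from the modernization year onward
m += pulp.lpSum(base_em[b]
                - saved[b] * pulp.lpSum(x[b][yy] for yy in years if yy <= y)
                for b in buildings for y in years)

for b in buildings:                          # modernize each building at most once
    m += pulp.lpSum(x[b][y] for y in years) <= 1
for y in years:                              # annual budget cap
    m += pulp.lpSum(cost[b] * x[b][y] for b in buildings) <= budget[y]

m.solve(pulp.PULP_CBC_CMD(msg=False))
for b in buildings:
    for y in years:
        if x[b][y].value() == 1:
            print(f"modernize building {b} in {y}")
print("total emissions over horizon:", pulp.value(m.objective), "tCO2")
```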
Procedia PDF Downloads 35
5075 Configuration Design and Optimization of the Movable Leg-Foot Lunar Soft-Landing Device
Authors: Shan Jia, Jinbao Chen, Jinhua Zhou, Jiacheng Qian
Abstract:
Lunar exploration is a necessary foundation for deep-space exploration. To overcome the functional limitations of the fixed landers that are widely used currently, which can expand their detection range only through wheeled rovers with unavoidable path repeatability, a movable lunar soft-landing device based on a cantilever-type buffer mechanism and a leg-foot-type walking mechanism is presented. Firstly, a 20-DoF quadruped configuration based on pushrods is proposed. The configuration has bionic characteristics such as hip, knee and ankle joints, and keeps the kinematics of the whole mechanism unchanged before and after buffering. Secondly, multi-functional main/auxiliary buffers based on crumple energy absorption and a screw-nut mechanism are designed, as well as a telescopic device that protects the plantar force sensors during the buffering process. Finally, the kinematic model of the whole mechanism is established, and the configuration optimization of the whole mechanism is completed based on the performance requirements of slope adaptation and obstacle crossing. This research can provide a technical solution integrating soft landing, large-scale inspection and material transfer for future lunar exploration and even Mars exploration, and can also serve as the technical basis for developing reusable landers.
Keywords: configuration design, lunar soft-landing device, movable, optimization
Procedia PDF Downloads 159
5074 Investigating Clarity Ultrasound Transperineal Ultrasound Imaging as a Method of Localising the Prostate, Compared to Cone Beam Computed Tomography with Fiducials
Authors: Harley Stephens
Abstract:
Although fiducial marker insertion is regarded as the 'gold standard' in image-guided radiotherapy (IGRT), its application must be considered carefully, as the procedure can be invasive, time-consuming, and reliant on consultant expertise. Precision of the fiducials depends on the markers remaining in the same location and on the prostate not changing shape during the course of treatment. To facilitate non-ionising IGRT and intra-fractional prostate tracking, Clarity TPUS was developed as an alternative imaging system. The main benefits of Clarity TPUS are that it is non-invasive, non-ionising and cost-effective. Other studies have compared fiducials to transabdominal ultrasound, which has since been shown to be less accurate than the transperineal imaging included in this study. CBCT fiducial translations and Clarity TPUS translations for 120 images acquired as part of the PACE-C prostate SABR trial were retrospectively evaluated by three imaging specialists. Differences were analysed using correlation and Bland-Altman plots. Inter-observer matches agreed within 3 mm 88.3% of the time in the left/right direction, 86.7% of the time in the superior/inferior direction, and 91.7% of the time in the anterior/posterior direction. They agreed within 5 mm more than 98.3% of the time in all directions. The intra-class correlation coefficient was calculated for each direction to show agreement between imaging specialists for inter-observer variability; each was 0.95 or above, with 1 indicating perfect reliability. Agreement between observers was slightly higher for CBCT and fiducials, at 98.7% within 5 mm, compared to Clarity TPUS, where 96.7% agreement was seen within 5 mm. Clarity TPUS has the benefits of no additional dose and intra-fractional monitoring, and the results show a good correlation between the different modalities. Inter-observer variability must be considered, and further research with a larger population would be of benefit.
Keywords: oncology, prostate radiotherapy, image guided radiotherapy, IGRT
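A minimal sketch of the two agreement analyses the abstract relies on, computed on synthetic shifts: Bland-Altman bias and limits of agreement between two modalities, and a two-way ICC(2,1) estimate across three observers.

```python
# Bland-Altman statistics and a two-way ICC(2,1) on synthetic shift data.
import numpy as np

rng = np.random.default_rng(0)
cbct = rng.normal(0, 2, 120)                  # CBCT-fiducial shifts, mm
tpus = cbct + rng.normal(0.2, 0.8, 120)       # Clarity TPUS shifts, mm

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = tpus - cbct
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias {bias:.2f} mm, limits of agreement "
      f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}] mm")

# ICC(2,1) from a two-way ANOVA decomposition (3 simulated observers)
obs = np.column_stack([cbct + rng.normal(0, 0.5, 120) for _ in range(3)])
n, k = obs.shape
grand = obs.mean()
ms_rows = k * ((obs.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_cols = n * ((obs.mean(axis=0) - grand) ** 2).sum() / (k - 1)
resid = obs - obs.mean(axis=1, keepdims=True) - obs.mean(axis=0) + grand
ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                            + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) ≈ {icc:.3f}")
```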
Procedia PDF Downloads 108
5073 Optimal Design of Friction Dampers for Seismic Retrofit of a Moment Frame
Authors: Hyungoo Kang, Jinkoo Kim
Abstract:
This study investigated the determination of the optimal locations and friction forces of friction dampers to effectively reduce the seismic response of a reinforced concrete structure designed without considering seismic load. To this end, a genetic algorithm was applied, and the results were compared with those obtained by simplified methods such as distributing the dampers based on the story shear or the inter-story drift ratio. The seismic performance of the model structure with optimally positioned friction dampers was evaluated by nonlinear static and dynamic analyses. The analysis results showed that, compared with the system without friction dampers, the maximum roof displacement and the inter-story drift ratio were reduced by about 30% and 40%, respectively. After installation of the dampers, about 70% of the earthquake input energy was dissipated by the dampers, and the energy dissipated in the structural elements was reduced by about 50%. In comparison with the simplified installation methods, the genetic algorithm provided more efficient solutions for the seismic retrofit of the model structure.
Keywords: friction dampers, genetic algorithm, optimal design, RC buildings
Procedia PDF Downloads 244
5072 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation
Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu
Abstract:
Compressed sensing (CS) based computed tomographic (CT) reconstruction algorithms utilize total variation (TV) to transform the CT image into a sparse domain and minimize the L1-norm of the sparse image for reconstruction. Unlike traditional CS-based reconstruction, which only calculates x-coordinate and y-coordinate TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use an eight-directional TV to transform the CT image into the sparse domain; for 3D reconstruction, we use a 26-directional TV. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful in reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform, we use both a Shepp-Logan phantom and a head phantom as the targets for reconstruction with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). From the results, the multi-directional TV method reconstructs images with relatively fewer artifacts compared with the traditional CS-based reconstruction algorithm, which only calculates x-coordinate and y-coordinate TV. We also choose RMSE, PSNR, and UQI as the metrics for quantitative analysis. No matter which metric is calculated, the proposed multi-directional TV method performs better.
Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator
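A minimal sketch of the sparsifying transform idea: total variation accumulated over all eight neighbor directions of each pixel instead of only the x and y gradients; the distance weighting of diagonal neighbors is an assumption.

```python
# 2-directional vs 8-directional total variation on a piecewise-constant image.
import numpy as np

def tv2(img):
    """Conventional 2-directional (x, y) total variation."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.abs(gx).sum() + np.abs(gy).sum()

def tv8(img):
    """8-directional TV: differences toward all 8 neighbors of each pixel."""
    total = 0.0
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        w = 1.0 / np.hypot(dy, dx)        # assumed inverse-distance weighting
        d = (shifted - img)[1:-1, 1:-1]   # drop wrapped border pixels
        total += w * np.abs(d).sum()
    return total

phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0               # piecewise-constant test image
noisy = phantom + 0.05 * np.random.default_rng(0).normal(size=phantom.shape)

for name, f in (("TV-2", tv2), ("TV-8", tv8)):
    print(f"{name}: phantom {f(phantom):8.1f}   noisy {f(noisy):8.1f}")
```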
Procedia PDF Downloads 256