Search results for: optimization algorithms

3350 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment was created. The reliability assessment process comprises the following stages, all reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition as a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic ones in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability value; and 6) calculation of the weights of the elements. Using these methods, models and algorithms, dedicated software was created that produces a quantitative reliability assessment of systems with complex structure and supports structural analysis and the design of systems with optimal structure.
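
As a minimal illustration of stages 3-6 (not the authors' web application), the sketch below evaluates system reliability directly from a DNF description of the shortest paths of a small hypothetical system; exhaustive state enumeration stands in for symbolic orthogonalization (it yields the same polynomial value for small systems), and element weights are computed as Birnbaum importance measures. The paths and probabilities are invented for the example.

```python
from itertools import product

# Hypothetical system: paths {1,3} and {2,3}  ->  f = x1*x3 OR x2*x3
PATHS = [{1, 3}, {2, 3}]
ELEMENTS = sorted(set().union(*PATHS))

def system_works(state):
    """state maps element -> 0/1; the system works if any path is fully up."""
    return any(all(state[e] for e in path) for path in PATHS)

def reliability(p):
    """Exact reliability: sum of the probabilities of all working states."""
    r = 0.0
    for bits in product([0, 1], repeat=len(ELEMENTS)):
        state = dict(zip(ELEMENTS, bits))
        if system_works(state):
            prob = 1.0
            for e, b in state.items():
                prob *= p[e] if b else (1.0 - p[e])
            r += prob
    return r

p = {1: 0.9, 2: 0.8, 3: 0.95}
print(f"R = {reliability(p):.4f}")
for e in ELEMENTS:  # Birnbaum weight: dR/dp_e = R(p_e=1) - R(p_e=0)
    w = reliability({**p, e: 1.0}) - reliability({**p, e: 0.0})
    print(f"weight of element {e}: {w:.4f}")
```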

Keywords: Complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element

Procedia PDF Downloads 56
3349 An MrPPG Method for Face Anti-Spoofing

Authors: Lan Zhang, Cailing Zhang

Abstract:

In the field of face anti-spoofing, many algorithms achieve high detection accuracy on 2D presentation attacks or 3D mask attacks alone, but their performance drops sharply in multi-dimensional and cross-dataset tests. The rPPG (remote photoplethysmography) approach judges liveness from the vital signals unique to a real face, so it is more stable than other methods, but its detection rate on 2D attacks needs improvement. In this paper, we therefore propose an improved rPPG method (MrPPG) that fuses color spaces, exploits the correlation between pulse signals in real face regions and background regions, and introduces a recurrent neural network (LSTM) to improve accuracy on 2D attacks. MrPPG also maintains high accuracy and good stability in multi-dimensional and cross-dataset face anti-spoofing. The improved method was validated on the Replay-Attack, CASIA-FASD, SiW and HKBU_MARs_V2 datasets; the experimental results show that the performance and stability of the proposed algorithm are superior to many state-of-the-art algorithms.
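
The sketch below illustrates the core MrPPG idea as the abstract describes it, not the authors' implementation: a pulse-like signal is extracted from a face region and a background region, their correlation serves as a liveness cue, and an (untrained) LSTM classifies the temporal signal. The video frames, ROIs and network sizes are placeholder assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def rppg_signal(frames, roi):
    """Mean green-channel intensity inside a (y0, y1, x0, x1) ROI per frame."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

frames = [np.random.rand(128, 128, 3) for _ in range(150)]  # fake video clip
face_sig = rppg_signal(frames, (32, 96, 32, 96))
bg_sig = rppg_signal(frames, (0, 16, 0, 16))

# A real face shows a periodic component in face_sig that is absent from the
# background; a printed photo or screen replay will not, so the face/background
# correlation structure is a liveness cue.
corr = np.corrcoef(face_sig - face_sig.mean(), bg_sig - bg_sig.mean())[0, 1]

class LivenessLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # real vs. spoof logits

    def forward(self, x):                  # x: (batch, time, 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # classify from the last time step

seq = torch.tensor(np.stack([face_sig, bg_sig], axis=-1),
                   dtype=torch.float32).unsqueeze(0)
logits = LivenessLSTM()(seq)
print(corr, logits.detach().numpy())
```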

Keywords: face anti-spoofing, face presentation attack detection, remote photoplethysmography, MrPPG

Procedia PDF Downloads 167
3348 Comparison between the Conventional Methods and PSO Based MPPT Algorithm for Photovoltaic Systems

Authors: Ramdan B. A. Koad, Ahmed F. Zobaa

Abstract:

Since the output characteristics of a photovoltaic (PV) system depend on the ambient temperature, solar radiation and load impedance, its maximum power point (MPP) is not constant. Under each condition, the PV module has a point at which it produces its maximum power. Therefore, a maximum power point tracking (MPPT) method is needed to keep the PV panel operating at its MPP. This paper presents a comparative study between the conventional MPPT methods used in PV systems, Perturb and Observe (P&O) and Incremental Conductance (IncCond), and a Particle Swarm Optimization (PSO) algorithm for MPPT. To evaluate the study, the proposed PSO MPPT is implemented on a DC-DC converter and compared with the P&O and IncCond methods in terms of tracking speed, accuracy and performance using MATLAB/Simulink. The simulation results show that the proposed algorithm is simple and superior to the P&O and IncCond methods.
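
A rough sketch of PSO-based MPPT follows (not the paper's Simulink model): each particle is a candidate converter duty cycle, and the fitness is the PV output power at that operating point. The pv_power() curve is a hypothetical unimodal stand-in for a real panel/converter measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

def pv_power(duty):
    """Hypothetical unimodal P(duty) curve with a maximum near duty = 0.62."""
    return 180.0 * np.exp(-((duty - 0.62) / 0.18) ** 2)

n, iters, w, c1, c2 = 8, 30, 0.5, 1.5, 1.5
pos = rng.uniform(0.1, 0.9, n)            # candidate duty cycles
vel = np.zeros(n)
pbest = pos.copy()
pbest_val = np.array([pv_power(d) for d in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 0.95)  # keep the converter in a safe range
    val = np.array([pv_power(d) for d in pos])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmax()]

print(f"MPP duty cycle ~ {gbest:.3f}, power ~ {pv_power(gbest):.1f} W")
```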

Keywords: photovoltaic systems, maximum power point tracking, perturb and observe method, incremental conductance method, particle swarm optimization algorithm

Procedia PDF Downloads 352
3347 Optimizing Performance of Tablet's Direct Compression Process Using Fuzzy Goal Programming

Authors: Abbas Al-Refaie

Abstract:

This paper aims at improving the performance of the tableting process using statistical quality control and fuzzy goal programming. The tableting process was studied, and statistical control tools were used to characterize the existing process for three critical responses: the averages of a tablet’s weight, hardness, and thickness. At the initial process factor settings, the estimated process capability index values for the averages of weight, hardness, and thickness were 0.58, 3.36, and 0.88, respectively. The L9 orthogonal array was used to design the experiments. Fuzzy goal programming was then employed to find the combination of optimal factor settings. Optimization results showed that the process capability index values for the averages of weight, hardness, and thickness improved to 1.03, 4.42, and 1.42, respectively. These improvements resulted in significant savings in quality and production costs.
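
For reference, the capability values above follow the standard Cpk formula; the sketch below shows the calculation on hypothetical tablet-weight data and specification limits (not the paper's).

```python
import numpy as np

def cpk(samples, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical tablet weights (mg) and spec limits for illustration only.
weights = np.random.default_rng(1).normal(500.0, 4.0, 50)
print(f"Cpk = {cpk(weights, lsl=485.0, usl=515.0):.2f}")
```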

Keywords: fuzzy goal programming, control charts, process capability, tablet optimization

Procedia PDF Downloads 258
3346 Measurement of Sarcopenia Associated with the Extent of Gastrointestinal Oncological Disease

Authors: Adrian Hang Yue Siu, Matthew Holyland, Sharon Carey, Daniel Steffens, Nabila Ansari, Cherry E. Koh

Abstract:

Introduction: Peritoneal malignancies are challenging cancers to manage. While cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (CRS and HIPEC) may offer a cure, the procedure is considered radical and morbid. Pre-emptive identification of deconditioned patients for optimization may mitigate the risks of surgery. However, the difficulty lies in the scarcity of validated predictive tools to identify high-risk patients. In recent times, there has been growing interest in sarcopenia, which can occur as a result of malnutrition and malignancy. The purpose of this study was therefore to assess the utility of sarcopenia in predicting post-operative outcomes. Methods: A single quaternary-center retrospective study of CRS and HIPEC patients between 2017 and 2020 was conducted to determine the association between pre-operative sarcopenia and post-operative outcomes. Lumbar CT images were analyzed using Slice-o-matic® to measure sarcopenia. Results: In the cohort (n=94), 40% of patients had sarcopenia, the majority were female (53.2%), and the mean age was 55 years. Sarcopenia was significantly associated with lower weight compared with non-sarcopenic patients (72.7 kg vs. 82.2 kg, p=0.014) and with shorter overall survival (1.4 years vs. 2.1 years, p=0.032). Post-operatively, patients with sarcopenia experienced more complications (p=0.001). Conclusion: Complex procedures often require optimization to prevent complications and improve survival. While patient biomarkers such as BMI and weight are used for optimization, this research advocates assessing sarcopenia status in pre-operative planning. Sarcopenia may be an indicator of advanced disease requiring further treatment and is an emerging area of research. Larger studies are required to confirm these findings and to assess the reversibility of sarcopenia after surgery.

Keywords: sarcopenia, cytoreductive surgery, hyperthermic intraperitoneal chemotherapy, surgical oncology

Procedia PDF Downloads 71
3345 Identification of Biological Pathways Causative for Breast Cancer Using Unsupervised Machine Learning

Authors: Karthik Mittal

Abstract:

This study performs an unsupervised machine learning analysis to find clusters of related SNPs that highlight biological pathways important to the mechanisms of breast cancer. Studying genetic variations in isolation is insufficient because these variations modulate protein production and function, and the downstream effects of such modifications on biological outcomes are highly interconnected. After extracting the SNPs and their effects on different types of breast cancer from the MRBase library, two unsupervised clustering algorithms were applied to the genetic variants: k-means clustering and hierarchical clustering; principal component analysis (PCA) was also performed to visualize the data. The algorithms clustered on each SNP’s beta value for the three breast cancer types tested in this project: estrogen-receptor-positive breast cancer, estrogen-receptor-negative breast cancer, and breast cancer in general. Two significant genetic pathways validated the clustering produced in this project: the MAPK signaling pathway and the connection between the BRCA2 and ESR1 genes. This study provides a first proof of concept for the importance of unsupervised machine learning in interpreting GWAS summary statistics.
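
A brief sketch of the clustering step, using synthetic beta values rather than MRBase output: each row is one SNP, the columns are its effects on the three phenotypes, k-means groups SNPs with similar effect profiles, and PCA projects them to two dimensions for visualization.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Rows: SNPs; columns: beta on (ER+, ER-, overall). Three fake effect groups.
betas = np.vstack([rng.normal(loc, 0.05, (100, 3))
                   for loc in (-0.2, 0.0, 0.25)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(betas)
coords = PCA(n_components=2).fit_transform(betas)  # 2-D projection for plotting
print(coords[:3], labels[:10])
```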

Keywords: breast cancer, computational biology, unsupervised machine learning, k-means, PCA

Procedia PDF Downloads 135
3344 Laser Additive Manufacturing of Carbon Nanotube-Reinforced Polyamide 12 Composites

Authors: Kun Zhou

Abstract:

Additive manufacturing has emerged as a disruptive technology capable of manufacturing products with complex geometries by accumulating material feedstock in a layer-by-layer fashion. Laser additive manufacturing techniques such as selective laser sintering offer excellent printing resolution, high printing speed and robust part strength, and have been widely adopted in the aerospace, automotive and biomedical industries. This talk highlights and discusses our recent work on carbon nanotube-reinforced polyamide 12 (CNT/PA12) composites printed using laser additive manufacturing. Numerical modeling studies have been conducted to simulate the various processes within laser additive manufacturing of CNT/PA12 composites, and extensive experimental work has been carried out to investigate the mechanical and functional properties of the printed parts. The results grant a deeper understanding of the intricate mechanisms at work in each process and enable accurate optimization of process parameters for CNT/PA12 and other polymer composites.

Keywords: CNT/PA12 composites, laser additive manufacturing, process parameter optimization, numerical modeling

Procedia PDF Downloads 143
3343 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing

Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello

Abstract:

In this paper, we present a semi-automatic tool for calculating the ejection fraction from an echocardiographic video signal drawn from a DICOM-format database of Clinica de la Costa, Barranquilla. We describe each of the steps and methods used to perform the calculation, including acquisition and formation of the test samples, processing, and finally the computation of the parameters needed to obtain the ejection fraction. Two image segmentation methods were compared within a methodological framework that is identical in the initial processing stages (filtering and image enhancement) and differs in the algorithms implemented at the end (active contour and region growing). The results were compared with measurements obtained by two cardiologists who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the echocardiography equipment and applying a simple equation to calculate the desired value. The results showed that when the quality of the video samples is good (i.e., pre-processing demonstrably improves the contrast), the values provided by the tool are substantially close to those reported by the physicians, and the correlation between the physicians does not vary significantly.
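
Whatever the segmentation method, the final step reduces to the standard ejection fraction formula applied to the segmented end-diastolic and end-systolic volumes; the values below are illustrative, not from the study database.

```python
def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = (EDV - ESV) / EDV * 100, from end-diastolic/systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

print(f"EF = {ejection_fraction(120.0, 50.0):.1f}%")  # -> EF = 58.3%
```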

Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction

Procedia PDF Downloads 421
3342 Modeling Water Resources Carrying Capacity, Optimizing Water Treatment, Smart Water Management, and Conceptualizing a Watershed Management Approach

Authors: Pius Babuna

Abstract:

Sustainable water use is important for the existence of the human race. Water resources carrying capacity (WRCC) measures the sustainability of water use; however, calculating and optimizing the WRCC remains challenging. This study used a mathematical model, the Logistic Growth of Water Resources (LGWR), and a linear objective function to model water sustainability. We tested the validity of the models using data from Ghana, with total freshwater resources, water withdrawal, and population data processed in MATLAB. The results show that the WRCC remains sustainable until the year 2132 ± 18, when half of the total annual water resources will be in use. The optimized water treatment cost suggests that Ghana currently wastes GH₵ 1115.782 ± 50 (~$182.21) per water treatment plant per month, or about 0.67 million gallons of water, in avoidable losses. Adopting an optimized water treatment scheme and a watershed management approach would help sustain the WRCC.
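
A hedged sketch of the LGWR idea as described: annual withdrawal W(t) grows logistically toward the total renewable resource K, and the sustainability horizon is the year when withdrawal reaches K/2. The parameter values below are illustrative assumptions, not Ghana's actual figures.

```python
import numpy as np

K = 56.2    # total renewable freshwater, km^3/yr (assumed)
W0 = 2.0    # withdrawal in the base year 2020, km^3/yr (assumed)
r = 0.03    # logistic growth rate, 1/yr (assumed)

def withdrawal(t):
    """Logistic curve: W(t) = K / (1 + ((K - W0)/W0) * exp(-r t))."""
    return K / (1.0 + (K - W0) / W0 * np.exp(-r * t))

years = np.arange(0, 300)
t_half = years[np.argmax(withdrawal(years) >= K / 2)]  # first year W >= K/2
print(f"withdrawal reaches half the resource base in year {2020 + t_half}")
```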

Keywords: water resources carrying capacity, smart water management, optimization, sustainable water use, water withdrawal

Procedia PDF Downloads 74
3341 Wireless Sensor Networks Optimization by Using 2-Stage Algorithm Based on Imperialist Competitive Algorithm

Authors: Hamid R. Lashgarian Azad, Seyed N. Shetab Boushehri

Abstract:

Wireless sensor networks (WSNs) have become progressively popular due to their wide range of applications. A wireless sensor network is made of numerous tiny, battery-powered sensor nodes, so maximizing the lifetime of the network is a very significant problem. In this paper, we propose a two-stage protocol based on the imperialist competitive algorithm (2S-ICA) to solve a sensor network optimization problem. Long communication distances between the sensors and the sink deplete the energy of the sensors and shorten the lifetime of the network. By connecting sensors into a series of independent clusters using 2S-ICA, the overall communication distance can be reduced considerably, thereby extending the network lifetime. Comparison of the proposed protocol with the LEACH protocol, which is commonly used to solve WSN problems, shows that our protocol performs better in terms of improving network lifetime and increasing the amount of transmitted data.
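
The sketch below illustrates the clustering objective such a protocol minimizes, with k-means standing in for the imperialist competitive algorithm: sensors are grouped into clusters, the node nearest each centroid acts as cluster head, and the total communication distance (members to head, heads to sink) is measured. Coordinates are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
nodes = rng.uniform(0, 100, (60, 2))   # sensor positions in a 100x100 field
sink = np.array([50.0, 120.0])         # sink just outside the field

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(nodes)
total = 0.0
for c in range(5):
    members = nodes[km.labels_ == c]
    # Cluster head = member nearest the cluster centroid.
    head = members[np.linalg.norm(members - km.cluster_centers_[c], axis=1).argmin()]
    total += np.linalg.norm(members - head, axis=1).sum()  # members -> head
    total += np.linalg.norm(head - sink)                   # head -> sink
print(f"total communication distance: {total:.1f}")
```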

Keywords: wireless sensor network, imperialist competitive algorithm, LEACH protocol, k-means clustering

Procedia PDF Downloads 86
3340 Type–2 Fuzzy Programming for Optimizing the Heat Rate of an Industrial Gas Turbine via Absorption Chiller Technology

Authors: T. Ganesan, M. S. Aris, I. Elamvazuthi, Momen Kamal Tageldeen

Abstract:

Terms set in power purchase agreements (PPAs) challenge power utility companies to balance the returns from maximizing power production against securing long-term supply contracts at capped production. The production limitation set in the PPA has driven efforts to maximize profits through efficient and economic power production. In this paper, a combined industrial-scale gas turbine (GT) and absorption chiller (AC) system is considered, in which the AC cools the GT air intake to reduce the plant’s heat rate (HR). This GT-AC system is optimized while considering the power output limitations imposed by the PPA. In addition, the proposed formulation accounts for uncertainties in the ambient temperature using type-2 fuzzy programming. Using enhanced chaotic differential evolution (CEDE), the Pareto frontier was constructed, and the optimization results are analyzed in detail.

Keywords: absorption chillers (AC), turbine inlet air cooling (TIC), power purchase agreement (PPA), multiobjective optimization, type-2 fuzzy programming, chaotic differential evolution (CEDE)

Procedia PDF Downloads 301
3339 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise of increased economic efficiency and of solutions to pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. One of the often-cited avenues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, this right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring data subjects' right of access and mandating the obligation of data controllers to provide relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the GDPR's specific provision on automated decision-making, the debates focus mainly on the efficacy and exact scope of the 'right to explanation'. The underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process empowers individuals to take control of their data and take action; in other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims to push the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits that intellectual property law, namely copyright and trade secrets, poses on the transparency requirement and the right of access. It is asserted that trade secrets, in particular, present an often insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions to such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law and the introduction of a strict liability regime for non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 116
3338 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect

Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy

Abstract:

A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in that the HGA makes it possible both to improve the quality of the solution and to reduce computing expense. By contrast, even a carefully designed GA can only balance the exploration and the exploitation of the search effort, meaning that an increase in solution accuracy can occur only at the sacrifice of convergence speed, and vice versa; it is unlikely that both can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as the base-level search, quickly directing the search towards the optimal region, and a local search method (the pattern search technique) is then employed for fine-tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to computational expense and the cost reduction of power generation.
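
A compact sketch of the two levels of such a hybrid (not the paper's implementation): the fuel-cost function includes the rectified-sine valve-point term, a crude GA performs the coarse search, and a coordinate pattern search fine-tunes the best solution. The three-unit coefficients are illustrative, not the 13- or 40-unit test data.

```python
import numpy as np

A = np.array([0.00156, 0.00194, 0.00482])  # a_i, $/MW^2 h (illustrative)
B = np.array([7.92, 7.85, 7.97])           # b_i, $/MWh
C = np.array([561.0, 310.0, 78.0])         # c_i, $/h
E = np.array([300.0, 200.0, 150.0])        # e_i, valve-point amplitude
F = np.array([0.0315, 0.042, 0.063])       # f_i, valve-point frequency
PMIN = np.array([100.0, 100.0, 50.0])
PMAX = np.array([600.0, 400.0, 200.0])
DEMAND = 850.0

def cost(p):
    # F_i(P) = a P^2 + b P + c + |e sin(f (Pmin - P))|, plus a demand penalty.
    fuel = (A * p**2 + B * p + C + np.abs(E * np.sin(F * (PMIN - p)))).sum()
    return fuel + 1e4 * abs(p.sum() - DEMAND)

rng = np.random.default_rng(3)
pop = rng.uniform(PMIN, PMAX, (60, 3))     # GA population of dispatch vectors
for _ in range(200):                       # crude GA: blend crossover + mutation
    i, j = rng.integers(60, size=2)
    child = np.clip((pop[i] + pop[j]) / 2 + rng.normal(0, 10, 3), PMIN, PMAX)
    worst = max(range(60), key=lambda k: cost(pop[k]))
    if cost(child) < cost(pop[worst]):
        pop[worst] = child

best = min(pop, key=cost)
step = 8.0                                 # pattern search fine-tuning
while step > 1e-3:
    moved = False
    for d in range(3):
        for s in (+step, -step):
            trial = best.copy()
            trial[d] = np.clip(trial[d] + s, PMIN[d], PMAX[d])
            if cost(trial) < cost(best):
                best, moved = trial, True
    if not moved:
        step /= 2
print(best, cost(best))
```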

Keywords: genetic algorithms, economic dispatch, pattern search

Procedia PDF Downloads 430
3337 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State

Authors: F. Mohammadsadeghi

Abstract:

Relevance of the research: Axial compressors are used both in aircraft engines and in ground-based gas turbine engines. The compressor is considered one of the main gas turbine engine units and defines the absolute and relative performance indicators of the engine as a whole. Compressor failure often leads to drastic consequences, so safe (stable) operation must be maintained. Currently, the power, productivity, circumferential velocity and compression ratio of axial compressors in gas turbine engines for aircraft and ground-based applications tend to increase, whereas the metal consumption of their structures tends to fall. This increases the dynamic loads as well as the danger of damage to highly loaded compressor or engine structural elements during transient processes. In the operating practice of aeronautical engineering and ground units with gas turbine drives, loss of operational stability is one of the relatively frequent failure causes and can lead to emergency situations. Surge is considered an absolute loss of gas-dynamic stability; it is one of the most dangerous and most frequently occurring types of instability. However detailed the research on this phenomenon has been, the development of measures for preventive surge protection remains relevant. The study of transient processes in axial compressors is therefore necessary to ensure efficient, stable and secure operation. The paper addresses the problem of improving the automatic control system by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. It considers the dynamic exhaustion of the gas-dynamic stability of a compressor stage, the results of numerical simulation of the airflow over the airfoil at design and stalled modes, and the experimental research used to form the criteria that identify the compressor state in pre-surge mode. The authors formulate basic approaches for developing surge prevention systems, i.e., algorithms that detect surge origination and systems that implement the proposed algorithms.

Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine

Procedia PDF Downloads 397
3336 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things

Authors: Zabeehullah, Fahim Arif, Yawar Abbas

Abstract:

Software Defined Networking (SDN) is a next-generation networking model that simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies that work on the basis of a maximum or minimum metric value. However, the heterogeneity, dynamic traffic flow and complexity of IoT networks demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection and outperforms the benchmark routing algorithm (OSPF). Moreover, the proposed model provides encouraging results under highly dynamic traffic flow.

Keywords: SDN, IoT, DL, ML, DRS

Procedia PDF Downloads 102
3335 Optimization of Pretreatment Process of Napier Grass for Improved Sugar Yield

Authors: Shashikant Kumar, Chandraraj K.

Abstract:

Perennial grasses present interesting options amid the current demand for renewable and sustainable energy sources to alleviate the global energy problem. The perennial Napier grass (Pennisetum purpureum Schumach) is a promising feedstock for the production of cellulosic ethanol. The conversion of biomass into glucose and xylose is a crucial stage in the production of bioethanol, and it necessitates optimal pretreatment. Among the available pretreatments, alkali treatment effectively reduces the lignin concentration and the crystallinity of cellulose. Response surface methodology was used to optimize the alkali pretreatment of Napier grass for maximal reducing sugar production. The combined effects of three independent variables, sodium hydroxide concentration, temperature, and reaction time, were studied, and a second-order polynomial equation was fitted to the observed data. Maximum reducing sugar (590.54 mg/g) was obtained under the following conditions: 1.6% sodium hydroxide, a reaction period of 30 min, and 120 °C. The results show that Napier grass is a desirable feedstock for bioethanol production.
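
A small sketch of the RSM step under invented data: a second-order polynomial in (NaOH concentration, time, temperature) is fitted by least squares to reducing-sugar observations, and the predicted optimum is located on a grid.

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_features(X):
    """Design matrix for y = b0 + sum(bi xi) + sum(bij xi xj)."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Hypothetical design points: (NaOH %, time min, temperature C).
X = np.array([[c, t, T] for c in (0.5, 1.6, 3.0)
                         for t in (15, 30, 60)
                         for T in (90, 120, 150)], dtype=float)
rng = np.random.default_rng(5)
y = (600 - 40 * (X[:, 0] - 1.6)**2 - 0.05 * (X[:, 1] - 30)**2
         - 0.02 * (X[:, 2] - 120)**2 + rng.normal(0, 5, len(X)))

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

grid = np.array([[c, t, T] for c in np.linspace(0.5, 3, 26)
                            for t in np.linspace(15, 60, 19)
                            for T in np.linspace(90, 150, 13)])
pred = quad_features(grid) @ beta
print("predicted optimum:", grid[pred.argmax()], "sugar:", round(pred.max(), 1))
```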

Keywords: Napier grass, optimization, pretreatment, sodium hydroxide

Procedia PDF Downloads 494
3334 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients

Authors: Khaled M. EL-Naggar

Abstract:

The synchronizing and damping torque coefficients of a synchronous machine give a clear picture of machine behavior during transients, and they are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain, and the problem is formulated as a dynamic estimation problem. An objective function that minimizes the squared error in the estimated coefficients is designed. The method is tested on a practical system with different study cases; the results are reported and discussed thoroughly. The study illustrates that the proposed method can estimate the stability coefficients for critical stable cases where other methods may fail, and the tests prove it to be an accurate and reliable tool for estimating machine coefficients for the assessment of power system stability.
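
The sketch below shows a generic crow search loop minimizing a toy squared-error objective that stands in for the coefficient-estimation problem; the population size, flight length, awareness probability and "true" coefficients are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.array([1.2, 0.8])     # "true" synchronizing/damping coefficients (toy)

def objective(x):
    """Sum of squared estimation errors against the toy target."""
    return ((x - target)**2).sum()

n, dims, fl, AP, lo, hi = 20, 2, 2.0, 0.1, -5.0, 5.0
X = rng.uniform(lo, hi, (n, dims))  # crow positions
M = X.copy()                        # crow memories (best position seen)

for _ in range(300):
    for i in range(n):
        j = rng.integers(n)         # crow i tries to follow crow j's memory
        if rng.random() >= AP:      # j unaware: move toward j's memory
            X[i] = np.clip(X[i] + rng.random() * fl * (M[j] - X[i]), lo, hi)
        else:                       # j aware: crow i relocates randomly
            X[i] = rng.uniform(lo, hi, dims)
        if objective(X[i]) < objective(M[i]):
            M[i] = X[i].copy()      # update memory on improvement

best = M[np.array([objective(m) for m in M]).argmin()]
print("estimated coefficients:", best)
```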

Keywords: optimization, estimation, synchronous machine, crow search

Procedia PDF Downloads 125
3333 A Robust Optimization Method for Service Quality Improvement in Health Care Systems under Budget Uncertainty

Authors: H. Ashrafi, S. Ebrahimi, H. Kamalzadeh

Abstract:

With the growth of business competition, it is important for healthcare providers to improve their service quality. To improve the service quality of a clinic, four important dimensions are considered: tangibles, responsiveness, empathy, and reliability. Moreover, there are several service stages in hospitals, such as financial screening and examination, and one of the most challenging limitations on improving service quality is the budget, which strongly constrains what quality is attainable. In this paper, we present an approach that addresses budget uncertainty and provides guidelines for service resource allocation. The proposed service quality improvement approach can be applied to multistage service processes to improve service quality while controlling costs. A multi-objective function based on the importance of each area and dimension is defined to link operational variables to the service quality dimensions. The results demonstrate that our approach is not ultra-conservative and reflects actual conditions well; they also show that different strategies affect the number of employees required at different stages.

Keywords: allocation, budget uncertainty, healthcare resource, service quality assessment, robust optimization

Procedia PDF Downloads 170
3332 Clustering Based Level Set Evaluation for Low Contrast Images

Authors: Bikshalu Kalagadda, Srikanth Rangu

Abstract:

The primary objective of image segmentation is to extract objects with respect to some input features, and the level set method is one of the important approaches. Medical images and synthetic images generally have low-contrast pixel profiles, which makes it difficult to locate the features of interest. A conventional level set function develops irregularities while evolving the contours of objects, which destroys the stability of the evolution process. As a remedy, a new hybrid algorithm, Clustering-Based Level Set Evolution, is proposed: kernel fuzzy particle swarm optimization clustering is combined with the Distance Regularized Level Set (DRLS) and the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) methods. The ability to identify different regions improves, and at higher speed. The efficiency of the modified method is evaluated by comparison with the previous methods under similar specifications, using medical and synthetic images.

Keywords: segmentation, clustering, level set function, re-initialization, kernel fuzzy clustering, swarm optimization

Procedia PDF Downloads 342
3331 Optimal Capacitor Placement in Distribution Using Cuckoo Optimization Algorithm

Authors: Ali Ravangard, S. Mohammadi

Abstract:

Shunt capacitors have several uses in electric power systems. They are utilized as sources of reactive power by connecting them line-to-neutral. Electric utilities have also connected capacitors in series with long lines in order to reduce line impedance; this is particularly common at the transmission level, where lines are several hundred kilometers long. This paper, however, concerns shunt capacitors. In distribution systems, shunt capacitors are used to reduce power losses, improve the voltage profile, and increase the maximum flow through cables and transformers. This paper presents a new method to determine the optimal locations and economical sizes of fixed and/or switched shunt capacitors with a view to reducing power losses and enhancing voltage stability. For solving the problem, a new enhanced cuckoo optimization algorithm is presented. The proposed method is tested on a distribution test system, and the results show that the algorithm is suitable for practical implementation on real systems of any size.

Keywords: capacitor placement, power losses, voltage stability, radial distribution systems

Procedia PDF Downloads 370
3330 Cross-Dipole Right-Hand Circularly Polarized UHF/VHF Yagi-Uda Antenna for Satellite Applications

Authors: Shativel S., Chandana B. R., Kavya B. C., Obli B. Vikram, Suganthi J., Nagendra Rao G.

Abstract:

Satellite communication plays a pivotal role in modern global communication networks, serving as a vital link between terrestrial infrastructure and remote regions, and the demand for reliable satellite reception systems, especially in the UHF (ultra high frequency) and VHF (very high frequency) bands, has grown significantly over the years. This paper presents the design and optimization, in CST Studio Suite, of a high-gain, dual-band crossed Yagi-Uda antenna tailored for satellite reception. The proposed antenna incorporates right-hand circular polarization (RHCP) to reduce Faraday loss. Our aim was to achieve high gain with fewer elements, so the antenna is constructed from 6x2 elements arranged as crossed dipoles supported by a boom; we achieved 10.67 dBi at 146 MHz and 9.28 dBi at 437.5 MHz. The process includes parameter optimization and fine-tuning of the Yagi-Uda array’s elements, such as the length and spacing of the directors and reflectors, to achieve high gain and desirable radiation patterns, while meeting the requirements of the UHF and VHF frequency bands to ensure broad coverage for satellite reception. The results of this research are anticipated to contribute significantly to the advancement of satellite reception systems, enhancing their ability to reliably connect remote and underserved areas to the global communication network. Through innovative antenna design and simulation techniques, this study seeks to provide a foundation for the development of next-generation satellite communication infrastructure.

Keywords: Yagi-Uda antenna, RHCP, gain, UHF antenna, VHF antenna, CST, radiation pattern

Procedia PDF Downloads 50
3329 Value Engineering and Its Impact on Drainage Design Optimization for Penang International Airport Expansion

Authors: R.M. Asyraf, A. Norazah, S.M. Khairuddin, B. Noraziah

Abstract:

Designing a system today is a vital, challenging task: the design philosophy must be maintained in economical ways. This paper examines the value engineering (VE) approach applied to infrastructure works, namely stormwater drainage. The method was adopted after the consultants had completed the detailed design. A Function Analysis System Technique (FAST) diagram and the VE job plan phases (information, function analysis, creative judgement, development, and recommendation) were used to scrutinize the initial design of the stormwater drainage. An estimated cost reduction of 2% over the initial proposal was obtained using the VE approach. This cost reduction comes from design optimization of the drainage foundation and structural system, where the pile design and drainage base structure were optimized. Likewise, the design of the on-site detention (OSD) tank pump was revised and contributed to the cost reduction. This case study shows that the VE approach can be an important tool for optimizing a design to reduce costs.

Keywords: value engineering, function analysis system technique, stormwater drainage, cost reduction

Procedia PDF Downloads 134
3328 User-Based Cannibalization Mitigation in an Online Marketplace

Authors: Vivian Guo, Yan Qu

Abstract:

Online marketplaces are not only digital places where consumers buy and sell merchandise; they are also destinations for brands to connect with real consumers at the moment when customers are in a shopping mindset. For many marketplaces, brands have been important partners through advertising. There is, however, a risk of advertising impacting a consumer’s shopping journey if it hurts the user experience or takes the user away from the site; both could lead to the loss of transaction revenue for the marketplace. In this paper, we present user-based methods for cannibalization control that selectively turn off ads to users who are likely to be cannibalized by them, subject to business objectives. We present ways of measuring the cannibalization of advertising in the context of an online marketplace and propose novel ways of measuring it through purchase propensity and uplift modeling. A/B testing has shown that our methods can significantly improve user purchase and engagement metrics while operating within business objectives. To our knowledge, this is the first paper that addresses cannibalization mitigation at the user level in the context of advertising.

Keywords: cannibalization, machine learning, online marketplace, revenue optimization, yield optimization

Procedia PDF Downloads 150
3327 Molecular and Electronic Structure of Chromium (III) Cyclopentadienyl Complexes

Authors: Salem El-Tohami Ashoor

Abstract:

We show that the reduction of [Cr(ArN(CH2)3NAr)2Cl2] (1), where Ar = 2,6-Pri2C6H3, in the presence of NaCp (2) (Cp = C5H5, cyclopentadienyl) yields a complex with an η5 coordination interaction between the Cp co-ligand and the chromium metal center. This structure was optimized using density functional theory (DFT) and compared with experimental data; the other possible interactions of Cp with the metal ion, namely η1, η2, η3 and η4, were also tested under the same optimization scheme, and all results were compared. Methods explicitly including electron correlation are necessary for more accurate calculations, the B3LYP (Becke, three-parameter, Lee-Yang-Parr) level of theory often being used to obtain more exact results; the electronic energies of these molecular systems were estimated at this level because it accounts for the electron correlation interactions. The optimized [Cr(ArN(CH2)3NAr)2(η5-Cp)] (Ar = 2,6-Pri2C6H3, Cp = C5H5) was found to be thermally more stable than the other chromium cyclopentadienyl arrangements. The Dewar-Chatt-Duncanson model was used as the basis of the molecular orbital (MO) analysis, showing the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO).

Keywords: chromium(III) cyclopentadienyl complexes, DFT, MO, HOMO, LUMO

Procedia PDF Downloads 491
3326 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation exist in the literature, and different representation approaches result in different outputs; some approaches might estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification, and this study addresses Subproblem A, the uncertainty characterization subproblem. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We approach the problem with two methodologies: sampling-based uncertainty propagation with first-order error analysis, and percentile-based optimization (PBO). The NASA Langley MUQC subproblem A is constructed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that is aleatory but for which sufficient data are not available to model it adequately as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but can be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, sampling is not exhaustive in the sampling-based methodology, which is why it has a high probability of underestimating the output bounds. An optimization-based strategy that converts uncertainty described by interval data into a probabilistic framework is therefore necessary, and this is achieved in this study by using PBO.
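
A minimal sketch of the sampling-based treatment of a distributional p-box follows (a toy model, not the challenge problem): an outer loop samples the epistemic parameter (a mean known only to lie in an interval), an inner loop propagates aleatory samples, and the spread of the resulting percentiles approximates the response bounds. With too few outer samples these bounds are easily underestimated, which is the weakness that motivates PBO.

```python
import numpy as np

rng = np.random.default_rng(11)

def model(x1, x2):
    """Toy response function standing in for the challenge model."""
    return x1**2 + 0.5 * x2

p95 = []
for _ in range(200):                    # epistemic loop: mean in [0.4, 0.8]
    mu = rng.uniform(0.4, 0.8)
    x1 = rng.normal(mu, 0.1, 2000)      # aleatory samples of the p-box variable
    x2 = rng.uniform(0.0, 1.0, 2000)    # a second, purely aleatory input
    p95.append(np.percentile(model(x1, x2), 95))

# The min/max over epistemic realizations approximate the percentile bounds;
# finite outer sampling tends to give an interval that is too narrow.
print(f"95th-percentile response bounds ~ [{min(p95):.3f}, {max(p95):.3f}]")
```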

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 229
3325 Utilization of Mustard Leaves (Brassica juncea) Powder for the Development of Cereal Based Extruded Snacks

Authors: Maya S. Rathod, Bahadur Singh Hathan

Abstract:

Mustard leaves are rich in folates, vitamins A, K and the B-complex. Mustard greens are low in calories and fats, rich in dietary fiber, rich in potassium, manganese, iron, copper, calcium and magnesium, low in sodium, and very rich in antioxidants and phytonutrients. For the optimization of the process variables (moisture content and mustard leaf powder), experiments were conducted according to the face-centered central composite design of response surface methodology (RSM). Mustard leaf powder partially replaced a composite flour (a combination of rice, chickpea and corn in the ratio 70:15:15). The extrudates were produced in a twin-screw extruder at a barrel temperature of 120 °C. The independent variables were mustard leaf powder (2-10%) and moisture content (12-20%). The responses analyzed were bulk density, water solubility index, water absorption index, lateral expansion, hardness, antioxidant activity, total phenolic content and overall acceptability. The optimum conditions obtained were 7.19 g mustard leaf powder per 100 g premix at 16.8% moisture content (w.b.).

Keywords: extrusion, mustard leaves powder, optimization, response surface methodology

Procedia PDF Downloads 529
3324 Modeling and Optimization of Performance of Four Stroke Spark Ignition Injector Engine

Authors: A. A. Okafor, C. H. Achebe, J. L. Chukwuneke, C. G. Ozoegwu

Abstract:

The performance of an engine whose basic design parameters are known can be predicted with the assistance of simulation programs in less time, at lower cost, and with values close to the actual ones. This paper presents a comprehensive mathematical model of the performance parameters of a four-stroke spark ignition engine. The essence of this research is to develop a mathematical model for analyzing the performance parameters of a four-stroke spark ignition engine before embarking on full-scale construction. This ensures that only optimal parameters enter the design and development of the engine, and allows the design and its operating alternatives to be checked and developed inexpensively and quickly, instead of using experimental methods that require costly research test beds. To achieve this, equations were derived that describe the performance parameters (specific fuel consumption, thermal efficiency, mean effective pressure and air-fuel ratio). The equations were used to simulate and optimize the engine performance of the model for various engine speeds. The optimal values obtained from the developed bivariate mathematical models are: an sfc of 0.2833 kg/kWh, an efficiency of 28.77%, and an A/F ratio of 20.75.

Keywords: bivariate models, engine performance, injector engine, optimization, performance parameters, simulation, spark ignition

Procedia PDF Downloads 315
3323 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform

Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung

Abstract:

Functional near-infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration changes of oxygenated and de-oxygenated hemoglobin during a particular cognitive activity are the basis of this neuroimaging modality: two wavelengths of near-infrared light are used with the modified Beer-Lambert law to infer the status of neuronal activity inside the brain indirectly. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications, and its portability, low cost and acceptable temporal resolution place it well among neuroimaging modalities. In this study, an optimization model for the impulse response function was used to estimate and predict the initial dip from fNIRS data, and the activity strength parameter related to a motor-based cognitive task was analyzed. We found an initial dip that persists for around 200-300 milliseconds and localizes neural activity better.
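
For context, the sketch below shows the modified Beer-Lambert step that converts two-wavelength optical density changes into HbO/HbR concentration changes, the quantities in which the initial dip appears. The extinction coefficients and DPF values are placeholders; real analyses take them from published tables.

```python
import numpy as np

# Placeholder extinction coefficients [eps_HbO, eps_HbR] per wavelength (assumed).
E = np.array([[0.69, 1.85],     # 760 nm
              [1.45, 0.80]])    # 850 nm
L = 3.0                          # source-detector separation, cm (assumed)
DPF = np.array([6.0, 5.2])       # differential pathlength factor per wavelength

def concentration_change(dOD):
    """Solve dOD_lambda = eps * dC * L * DPF_lambda for dC = [dHbO, dHbR]."""
    A = E * (L * DPF)[:, None]   # scale each wavelength row by its pathlength
    return np.linalg.solve(A, dOD)

dHbO, dHbR = concentration_change(np.array([0.012, 0.018]))
print(f"dHbO = {dHbO:.4f}, dHbR = {dHbR:.4f}")
```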

Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing

Procedia PDF Downloads 211
3322 Methodology of Construction Equipment Optimization for Earthwork

Authors: Jaehyun Choi, Hyunjung Kim, Namho Kim

Abstract:

Earthwork is one of the critical civil construction operations; it requires large quantities of resources due to its intensive dependence on construction equipment, so efficient construction equipment management can contribute substantially to productivity improvements and cost savings. Earthwork operations utilize various combinations of construction equipment to meet project requirements such as time and cost. Site conditions and construction methods should be identified in advance in order to develop a proper execution plan; the factors to be considered include the capacity of the assigned equipment, the method of construction, the size of the site, and the surrounding conditions. In addition, an optimal combination of the various construction equipment should be selected. In real-world practice, however, equipment utilization is planned based on the experience and intuition of management. The researchers evaluated the efficiency of various alternative combinations of construction equipment using a process simulation model, validated the model with a case study project, and present a methodology for finding the optimized plan among the alternatives.

Keywords: earthwork operation, construction equipment, process simulation, optimization

Procedia PDF Downloads 415
3321 Adsorption of Cerium as One of the Rare Earth Elements Using Multiwall Carbon Nanotubes from Aqueous Solution: Modeling, Equilibrium and Kinetics

Authors: Saeb Ahmadi, Mohsen Vafaie Sefti, Mohammad Mahdi Shadman, Ebrahim Tangestani

Abstract:

Carbon nanotubes have shown great potential for the removal of various inorganic and organic components due to properties such as their large surface area and high adsorption capacity. Central composite design is a widely used method for determining optimal conditions, and rare earth elements are important components for economic reasons and because of their wide application. The adsorption of cerium (Ce(III)), one of the rare earth elements (REEs), on multiwall carbon nanotubes (MWCNTs) was studied, with the optimization performed using response surface methodology (RSM). The optimum conditions were a pH of 4.5, an initial Ce(III) concentration of 90 mg/l and an MWCNT dosage of 80 mg; under these conditions, the optimum adsorption percentage of Ce(III) was about 96%. Kinetic and isotherm studies at the optimum conditions showed that the pseudo-second-order kinetic model and the Langmuir isotherm fit the experimental data better than the other models.
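
As a small illustration of the isotherm analysis (with invented data points, not the paper's measurements), the sketch below fits the Langmuir model to equilibrium data by nonlinear least squares and reports the fit quality.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, k):
    """Langmuir isotherm: q_e = q_max * K * C_e / (1 + K * C_e)."""
    return qmax * k * ce / (1.0 + k * ce)

ce = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0])   # equilibrium conc., mg/L
qe = np.array([22.0, 38.0, 57.0, 74.0, 82.0, 88.0])  # adsorbed amount, mg/g

(qmax, k), _ = curve_fit(langmuir, ce, qe, p0=(100.0, 0.05))
ss_res = ((qe - langmuir(ce, qmax, k))**2).sum()
ss_tot = ((qe - qe.mean())**2).sum()
print(f"q_max = {qmax:.1f} mg/g, K = {k:.3f} L/mg, R^2 = {1 - ss_res/ss_tot:.3f}")
```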

Keywords: cerium, rare earth element, MWCNTs, adsorption, optimization

Procedia PDF Downloads 154