Search results for: grey wolf optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3486

2496 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different representation approaches produce different outputs, and some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) posed a set of challenges about uncertainty quantification. This study addresses Subproblem A of the challenge, the uncertainty characterization subproblem. In this subproblem, the task is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of a given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's Subproblem A is constructed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) An aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients. This uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval. This uncertainty is reducible. (iii) A parameter that might be aleatory but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty, each being an unknown element of a known interval. This uncertainty is reducible. The study shows that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive. Consequently, the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
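
The double-loop structure described above (epistemic intervals in an outer loop, aleatory sampling in an inner loop) can be illustrated with a short sketch. The model function, interval bounds, and sample sizes below are hypothetical placeholders, not the NASA challenge model.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # Hypothetical system response; the real challenge model is a black box.
    return x1 ** 2 + np.sin(x2)

def response_percentile_bounds(n_epistemic=50, n_aleatory=2000, q=(5, 95)):
    """Outer loop: sample an epistemic interval parameter (mean of x2 in [0, 1]).
    Inner loop: propagate aleatory samples through the model.
    Returns the envelope of the q-th percentiles over all epistemic realizations."""
    lo, hi = np.inf, -np.inf
    for _ in range(n_epistemic):
        mu2 = rng.uniform(0.0, 1.0)             # epistemic: poorly known mean (category iii)
        x1 = rng.normal(0.0, 1.0, n_aleatory)   # aleatory: fixed, known distribution (category i)
        x2 = rng.normal(mu2, 0.5, n_aleatory)   # distributional p-box
        y = model(x1, x2)
        p_lo, p_hi = np.percentile(y, q)
        lo, hi = min(lo, p_lo), max(hi, p_hi)
    return lo, hi

print(response_percentile_bounds())
```

Because the outer loop is finite, the bounds returned this way tend to be inner (optimistic) estimates, which is the underestimation issue that motivates the percentile-based optimization step.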

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 240
2495 Utilization of Mustard Leaves (Brassica juncea) Powder for the Development of Cereal Based Extruded Snacks

Authors: Maya S. Rathod, Bahadur Singh Hathan

Abstract:

Mustard leaves are rich in folates and vitamins A, K, and the B-complex. Mustard greens are low in calories and fat and rich in dietary fiber. They are rich in potassium, manganese, iron, copper, calcium, and magnesium, low in sodium, and very rich in antioxidants and phytonutrients. For the optimization of the process variables (moisture content and mustard leaf powder), experiments were conducted according to a face-centered central composite design of RSM. Mustard leaf powder was incorporated into a composite flour (a combination of rice, chickpea, and corn in the ratio of 70:15:15). The extrudate was produced in a twin-screw extruder at a barrel temperature of 120°C. The independent variables were mustard leaf powder (2-10%) and moisture content (12-20%). The responses analyzed were bulk density, water solubility index, water absorption index, lateral expansion, hardness, antioxidant activity, total phenolic content, and overall acceptability. The optimum conditions obtained were 7.19 g mustard leaf powder in 100 g premix at 16.8% moisture content (w.b.).
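
As a rough illustration of the RSM step, the sketch below fits a second-order polynomial to hypothetical face-centered CCD data for the two factors (mustard leaf powder, moisture content) and locates the stationary point of the fitted surface; the response values and factor ranges are invented purely for illustration.

```python
import numpy as np

# Hypothetical face-centered CCD: coded levels -1, 0, +1 for
# x1 = mustard leaf powder (2-10 %), x2 = moisture content (12-20 %)
X_coded = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                    [-1, 0], [1, 0], [0, -1], [0, 1],
                    [0, 0], [0, 0], [0, 0]])
y = np.array([6.1, 6.8, 6.4, 6.9, 6.5, 7.0, 6.6, 6.7, 7.4, 7.3, 7.5])  # e.g. overall acceptability

x1, x2 = X_coded[:, 0], X_coded[:, 1]
# Design matrix for y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the quadratic surface: solve grad = 0
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
x_star = np.linalg.solve(H, g)

# Decode back to natural units (centre 6 % powder / 16 % moisture, half-range 4)
powder = 6 + 4 * x_star[0]
moisture = 16 + 4 * x_star[1]
print(f"stationary point: {powder:.2f} % powder, {moisture:.2f} % moisture (w.b.)")
```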

Keywords: extrusion, mustard leaves powder, optimization, response surface methodology

Procedia PDF Downloads 545
2494 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection

Authors: Ashkan Zakaryazad, Ekrem Duman

Abstract:

A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks transactions in terms of their probability of being fraudulent. This approach is often criticized, because firms do not care about fraud probability per se but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model building step. The artificial neural network proposed in this study works on the basis of profit maximization instead of minimizing the prediction error. Moreover, some studies have shown that the back propagation algorithm, like other gradient-based algorithms, usually gets trapped in local optima, and that swarm-based algorithms are more successful in this respect. In this study, we train our profit maximization ANN using Migrating Birds Optimization (MBO), which was recently introduced to the literature.
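
The core idea, replacing an error-based loss such as SSE with an expected-profit objective that a population-based optimizer can maximize directly, might be sketched as follows. The transaction data, profit figures, network size, and the random search standing in for MBO are illustrative assumptions, not the study's actual cost model or optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy transaction data: 2 features, 5% fraud rate, hypothetical amounts.
X = rng.normal(size=(1000, 2))
y = (rng.random(1000) < 0.05).astype(int)
amount = rng.uniform(10, 500, size=1000)

INVESTIGATION_COST = 5.0  # assumed fixed cost of inspecting a flagged transaction

def forward(w, X):
    """Single hidden layer ANN with 3 units; w packs all weights and biases."""
    W1 = w[:6].reshape(2, 3); b1 = w[6:9]
    W2 = w[9:12]; b2 = w[12]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # fraud score in (0, 1)

def profit(w):
    """Expected profit: amounts saved on caught frauds minus inspection costs."""
    flag = forward(w, X) > 0.5
    saved = amount[flag & (y == 1)].sum()
    cost = INVESTIGATION_COST * flag.sum()
    return saved - cost

# Stand-in for MBO: random search over weight vectors (any population-based
# metaheuristic that maximizes profit(w) fits in this slot).
best_w, best_p = None, -np.inf
for _ in range(2000):
    w = rng.normal(scale=0.5, size=13)
    p = profit(w)
    if p > best_p:
        best_w, best_p = w, p
print(f"best expected profit on toy data: {best_p:.2f}")
```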

Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent

Procedia PDF Downloads 475
2493 Modeling and Optimization of Performance of Four Stroke Spark Ignition Injector Engine

Authors: A. A. Okafor, C. H. Achebe, J. L. Chukwuneke, C. G. Ozoegwu

Abstract:

The performance of an engine whose basic design parameters are known can be predicted with the assistance of simulation programs in less time, at lower cost, and with results close to actual values. This paper presents a comprehensive mathematical model of the performance parameters of a four stroke spark ignition engine. The essence of this work is to develop a mathematical model for the analysis of the engine performance parameters of a four stroke spark ignition engine before embarking on full scale construction. This ensures that only optimal parameters enter the design and development of the engine, and it allows the engine design and its operating alternatives to be checked and refined inexpensively and quickly, instead of relying on experimental methods that require costly research test beds. To achieve this, equations were derived which describe the performance parameters (sfc, thermal efficiency, mep and A/F). The equations were used to simulate and optimize the engine performance of the model for various engine speeds. The optimal values obtained from the developed bivariate mathematical models are: sfc of 0.2833 kg/kWh, thermal efficiency of 28.77%, and A/F of 20.75.

Keywords: bivariate models, engine performance, injector engine, optimization, performance parameters, simulation, spark ignition

Procedia PDF Downloads 326
2492 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform

Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung

Abstract:

Functional near infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration changes of oxygenated and de-oxygenated hemoglobin during a particular cognitive activity are the basis of this neuro-imaging modality. Two wavelengths of near-infrared light can be used with the modified Beer-Lambert law to infer, indirectly, the status of neuronal activity inside the brain. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications. The portability, low cost, and acceptable temporal resolution of fNIRS give it an advantageous position among neuro-imaging modalities. In this study, an optimization model for the impulse response function has been used to estimate/predict the initial dip using fNIRS data. In addition, the activity strength parameter related to a motor-based cognitive task has been analyzed. We found an initial dip that persists for around 200-300 milliseconds and localizes neural activity better.
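
The modified Beer-Lambert step mentioned above converts optical density changes at two wavelengths into oxygenated/de-oxygenated hemoglobin concentration changes by inverting a 2x2 system. The extinction coefficients, source-detector separation, and path-length factors below are placeholder values, not calibrated constants.

```python
import numpy as np

# Assumed extinction coefficients [HbO, HbR] at the two wavelengths (placeholder values)
E = np.array([[1.49, 3.84],    # at ~760 nm
              [2.53, 1.80]])   # at ~850 nm
d = 3.0                        # source-detector separation (cm), assumed
dpf = np.array([6.0, 5.5])     # differential path-length factors, assumed

def mbll(delta_od):
    """delta_od: array (2, n_samples) of optical density changes at the two wavelengths.
    Returns (2, n_samples) concentration changes [dHbO, dHbR] via the
    modified Beer-Lambert law: dOD(lambda) = E(lambda) . dC * d * DPF(lambda)."""
    A = E * (d * dpf[:, None])          # effective 2x2 system matrix
    return np.linalg.solve(A, delta_od)

# Example: an initial dip shows up as an early increase in HbR / decrease in HbO
delta_od = np.array([[0.002], [-0.001]])
print(mbll(delta_od))
```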

Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing

Procedia PDF Downloads 226
2491 Improved Predictive Models for the IRMA Network Using Nonlinear Optimisation

Authors: Vishwesh Kulkarni, Nikhil Bellarykar

Abstract:

Cellular complexity stems from the interactions among thousands of different molecular species. Thanks to the emerging fields of systems and synthetic biology, scientists are beginning to unravel these regulatory, signaling, and metabolic interactions and to understand their coordinated action. Reverse engineering of biological networks has several benefits, but poor data quality combined with the difficulty of reproducing the data limits the applicability of these methods. A few years back, many of the commonly used predictive algorithms were tested on a network constructed in the yeast Saccharomyces cerevisiae (S. cerevisiae) to resolve this issue. The network was a synthetic network of five genes regulating each other, built for the so-called in vivo reverse-engineering and modeling assessment (IRMA). The network was constructed in S. cerevisiae since it is a simple and well characterized organism. The synthetic network included a variety of regulatory interactions, thus capturing the behaviour of larger eukaryotic gene networks on a smaller scale. We derive a new set of algorithms by solving a nonlinear optimization problem and show how these algorithms outperform other algorithms on these datasets.
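
A minimal sketch of the kind of nonlinear identification problem involved, fitting ODE parameters of a small regulatory network to time-course data by nonlinear least squares, is shown below. The two-gene Hill-kinetics model, the parameter values, and the synthetic data are invented stand-ins for the five-gene IRMA network and its measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0, 10, 21)

def two_gene(t, x, p):
    """Toy 2-gene network: gene 1 activates gene 2, gene 2 represses gene 1 (Hill kinetics)."""
    k1, k2, d1, d2, K = p
    x1, x2 = x
    dx1 = k1 / (1 + (x2 / K) ** 2) - d1 * x1
    dx2 = k2 * x1 / (K + x1) - d2 * x2
    return [dx1, dx2]

def simulate(p):
    sol = solve_ivp(two_gene, (0, 10), [1.0, 0.1], t_eval=t_obs, args=(p,))
    return sol.y

# Generate synthetic "measurements" from known parameters plus noise
rng = np.random.default_rng(2)
p_true = [1.0, 0.8, 0.3, 0.4, 0.5]
data = simulate(p_true) + rng.normal(0, 0.02, size=(2, t_obs.size))

def residuals(p):
    return (simulate(p) - data).ravel()

fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 0.5, 0.5], bounds=(1e-3, 5.0))
print("estimated parameters:", np.round(fit.x, 3))
```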

Keywords: synthetic gene network, network identification, optimization, nonlinear modeling

Procedia PDF Downloads 156
2490 Methodology of Construction Equipment Optimization for Earthwork

Authors: Jaehyun Choi, Hyunjung Kim, Namho Kim

Abstract:

Earthwork is one of the critical civil construction operations that require large quantities of resources due to its intensive dependency on construction equipment. Therefore, efficient construction equipment management can contribute greatly to productivity improvements and cost savings. Earthwork operations utilize various combinations of construction equipment in order to meet project requirements such as time and cost. Identification of site conditions and construction methods should be performed in advance in order to develop a proper execution plan. The factors to be considered include the capacity of the equipment assigned, the method of construction, the size of the site, and the surrounding conditions. In addition, an optimal combination of the various construction equipment should be selected. However, in real-world practice, equipment utilization planning is based on the experience and intuition of management. The researchers evaluated the efficiency of various alternative construction equipment combinations by utilizing a process simulation model, validated the model against a case study project, and presented a methodology to find the optimized plan among the alternatives.

Keywords: earthwork operation, construction equipment, process simulation, optimization

Procedia PDF Downloads 426
2489 Adsorption of Cerium as One of the Rare Earth Elements Using Multiwall Carbon Nanotubes from Aqueous Solution: Modeling, Equilibrium and Kinetics

Authors: Saeb Ahmadi, Mohsen Vafaie Sefti, Mohammad Mahdi Shadman, Ebrahim Tangestani

Abstract:

Carbon nanotubes have shown great potential for the removal of various inorganic and organic components due to properties such as large surface area and high adsorption capacity. Central composite design is a widely used method for determining optimal conditions. Rare earth elements are also important components for economic reasons and because of their wide application. The adsorption of cerium (Ce(III)), one of the Rare Earth Elements (REEs), on Multiwall Carbon Nanotubes (MWCNTs) has been studied. The optimization process was performed using Response Surface Methodology (RSM). The optimum conditions were a pH of 4.5, an initial Ce(III) concentration of 90 mg/l, and an MWCNTs dosage of 80 mg. Under these conditions, the optimum adsorption percentage of Ce(III) was about 96%. Next, kinetic and isotherm studies were carried out at the obtained optimum conditions, and the results showed that the pseudo-second order model and the Langmuir isotherm fit the experimental data better than the other models.
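
The Langmuir fit mentioned at the end can be reproduced with a few lines of curve fitting; the equilibrium data below are hypothetical, inserted only to make the sketch runnable.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce = residual Ce(III) concentration (mg/L),
# qe = amount adsorbed on MWCNTs at equilibrium (mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
qe = np.array([18.0, 35.0, 52.0, 68.0, 80.0, 84.0])

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[100.0, 0.05])
ss_res = np.sum((qe - langmuir(Ce, qmax, KL)) ** 2)
ss_tot = np.sum((qe - qe.mean()) ** 2)
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg, R^2 = {1 - ss_res/ss_tot:.3f}")
```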

Keywords: cerium, rare earth element, MWCNTs, adsorption, optimization

Procedia PDF Downloads 167
2488 Optimization of Temperature for Crystal Violet Dye Adsorption Using Castor Leaf Powder by Response Surface Methodology

Authors: Vipan Kumar Sohpal

Abstract:

The effect of temperature on the adsorption of crystal violet dye (CVD) was investigated using castor leaf powder (CLP) that was prepared from the mature leaves of castor trees through chemical reaction. The optimum values of pH (8), adsorbent dose (10 g/L), initial dye concentration (10 g/L), time (2 h), and stirrer speed (120 rpm) were fixed to investigate the influence of temperature on adsorption capacity, percentage of dye removal, and free energy. A central composite design (CCD) was successfully employed for the experimental design and analysis of the results. The combined effect of temperature, absorbance, and concentration on the dye adsorption was studied and optimized using response surface methodology. The optimum values of adsorption capacity, percentage of dye removal, and free energy were found to be 0.965 mg/g, 93.38%, and -8202.7 J/mol, respectively, at a temperature of 55.97 °C, with desirability > 90% for removal of crystal violet dye. The experimental values were in good agreement with the predicted values.

Keywords: crystal violet dye, CVD, castor leaf powder, CLP, response surface methodology, temperature, optimization

Procedia PDF Downloads 132
2487 Optimal Scheduling of Trains in Complex National Scale Railway Networks

Authors: Sanat Ramesh, Tarun Dutt, Abhilasha Aswal, Anushka Chandrababu, G. N. Srinivasa Prasanna

Abstract:

Optimal schedule generation for a large national railway network operating thousands of passenger trains over tens of thousands of kilometers of track is a grand computational challenge in itself. We present heuristics based on a Mixed Integer Program (MIP) formulation for local optimization. These methods provide flexibility in scheduling new trains with varying speeds and delays and improve the utilization of infrastructure. We propose methods that provide a robust solution with hundreds of trains being scheduled over a portion of the railway network without significant increases in delay. We also provide techniques to validate the nominal schedules thus generated against globally correlated variations in travel times, thereby enabling us to detect conflicts arising due to delays. Our validation, which assumes only the support of the arrival and departure time distributions, takes on the order of a few minutes for a portion of the network and is computationally efficient enough to handle the entire network.
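
A much-reduced sketch of the kind of MIP such heuristics build on, sequencing two trains over one single-track block with a minimum headway, is given below using PuLP. The travel times, headway, big-M value, and solver choice are illustrative assumptions, not the paper's formulation.

```python
import pulp

# Two trains share a single-track block; decide departure times and order.
run_time = {"T1": 30, "T2": 25}       # minutes on the block (assumed)
earliest = {"T1": 0, "T2": 5}         # earliest departure times (assumed)
HEADWAY = 4                            # minimum separation in minutes (assumed)
M = 1000                               # big-M for the disjunctive ordering constraint

prob = pulp.LpProblem("train_sequencing", pulp.LpMinimize)
dep = {t: pulp.LpVariable(f"dep_{t}", lowBound=earliest[t]) for t in run_time}
order = pulp.LpVariable("T1_before_T2", cat="Binary")

# Either T2 enters the block at least HEADWAY after T1 clears it, or vice versa.
prob += dep["T2"] >= dep["T1"] + run_time["T1"] + HEADWAY - M * (1 - order)
prob += dep["T1"] >= dep["T2"] + run_time["T2"] + HEADWAY - M * order

# Minimize total delay relative to the earliest possible departures.
prob += pulp.lpSum(dep[t] - earliest[t] for t in run_time)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: dep[t].value() for t in run_time}, "T1 first" if order.value() else "T2 first")
```

Scaling this disjunctive pattern to hundreds of trains and blocks is what makes exact solution impractical and motivates the local-optimization heuristics described in the abstract.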

Keywords: mixed integer programming, optimization, railway network, train scheduling

Procedia PDF Downloads 158
2486 Optimization of 3D Printing Parameters Using Machine Learning to Enhance Mechanical Properties in Fused Deposition Modeling (FDM) Technology

Authors: Darwin Junnior Sabino Diego, Brando Burgos Guerrero, Diego Arroyo Villanueva

Abstract:

Additive manufacturing, commonly known as 3D printing, has revolutionized modern manufacturing by enabling the agile creation of complex objects. However, challenges persist in the consistency and quality of printed parts, particularly in their mechanical properties. This study focuses on addressing these challenges through the optimization of printing parameters in FDM technology, using Machine Learning techniques. Our aim is to improve the mechanical properties of printed objects by optimizing parameters such as speed, temperature, and orientation. We implement a methodology that combines experimental data collection with Machine Learning algorithms to identify relationships between printing parameters and mechanical properties. The results demonstrate the potential of this methodology to enhance the quality and consistency of 3D printed products, with significant applications across various industrial fields. This research not only advances understanding of additive manufacturing but also opens new avenues for practical implementation in industrial settings.
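
The workflow described, learning a mapping from printing parameters to a mechanical property and then searching that model for promising settings, might look like the sketch below. The dataset, parameter ranges, strength formula, and choice of a random forest regressor are assumptions made only for illustration.

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Hypothetical experiments: [print speed (mm/s), nozzle temperature (°C), raster angle (deg)]
X = rng.uniform([20, 190, 0], [80, 230, 90], size=(60, 3))
# Invented tensile-strength response with noise, only for demonstration
y = 40 - 0.05 * X[:, 0] + 0.08 * (X[:, 1] - 190) - 0.03 * X[:, 2] + rng.normal(0, 0.8, 60)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Grid-search the fitted surrogate for the most promising parameter combination
grid = np.array(list(product(np.linspace(20, 80, 13),
                             np.linspace(190, 230, 9),
                             np.linspace(0, 90, 7))))
pred = model.predict(grid)
best = grid[np.argmax(pred)]
print(f"suggested settings: speed={best[0]:.0f} mm/s, temp={best[1]:.0f} °C, angle={best[2]:.0f}°")
```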

Keywords: 3D printing, additive manufacturing, machine learning, mechanical properties

Procedia PDF Downloads 51
2485 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, Jose R. Pérez-Correa

Abstract:

Emamectin Benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal EB doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as Spray Drying (SD) and Ionic Gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed the lowest EB release under gastric conditions while permitting a major release during intestinal digestion. Two approaches were used to determine this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques allowed the integrity of EB to be maintained at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs due to a 60% reduction in EB with respect to the optimal EB amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD, and ID showed values 9.5%, 84.8%, and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
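
For the desirability approach (DA) referenced above, a generic Derringer-Suich sketch is shown below: each response gets an individual desirability and the overall score is their geometric mean. The response names mirror the abstract, but the target ranges and example values are placeholders, not the study's models.

```python
import numpy as np

def d_max(y, low, high, s=1.0):
    """Desirability for a response to maximize (e.g. yield Y%, EE%, ID%)."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

def d_min(y, low, high, s=1.0):
    """Desirability for a response to minimize (e.g. gastric release GD%)."""
    return np.clip((high - y) / (high - low), 0.0, 1.0) ** s

def overall_desirability(Y_pct, EE, GD, ID):
    d = [d_max(Y_pct, 40, 95),   # yield: worthless below 40 %, ideal above 95 % (assumed range)
         d_max(EE, 50, 98),      # encapsulation efficiency (assumed range)
         d_min(GD, 5, 60),       # low release in gastric digestion is desired
         d_max(ID, 20, 90)]      # high release in intestinal digestion is desired
    return np.prod(d) ** (1.0 / len(d))  # geometric mean

# Example formulation predicted by the response-surface models (invented numbers)
print(round(overall_desirability(Y_pct=78, EE=85, GD=12, ID=70), 3))
```

Multi-objective optimization instead keeps the full Pareto set of non-dominated formulations and defers the trade-off to a multi-criteria decision-making step.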

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 131
2484 Resistivity Tomography Optimization Based on Parallel Electrode Linear Back Projection Algorithm

Authors: Yiwei Huang, Chunyu Zhao, Jingjing Ding

Abstract:

Electrical Resistivity Tomography has been widely used in medicine and geology, for example in imaging lung impedance and analyzing soil impedance. Linear Back Projection is the core algorithm of Electrical Resistivity Tomography, but traditional Linear Back Projection cannot make full use of the information in the electric field. In this paper, a Parallel Electrode Linear Back Projection imaging method for Electrical Resistivity Tomography is proposed. By changing the connection mode of the electrodes, it generates an electric field distribution that is not linearly related to that of traditional Linear Back Projection, captures new information, and improves the imaging accuracy without increasing the number of electrodes. The simulation results show that the accuracy of the image reconstructed by Parallel Electrode Linear Back Projection can be improved by about 20%.
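
The back-projection step itself is compact: the normalized boundary voltage changes are smeared back through a sensitivity (Jacobian) matrix. A minimal numpy sketch is below; the sensitivity matrix here is random, standing in for the one obtained from the finite element model, and the parallel-electrode variant would simply supply a different measurement pattern and matrix.

```python
import numpy as np

rng = np.random.default_rng(4)

n_meas, n_pix = 104, 32 * 32                    # assumed measurement count and image grid
S = np.abs(rng.normal(size=(n_meas, n_pix)))    # stand-in sensitivity matrix (from FEM in practice)
v_ref = S @ np.ones(n_pix)                      # boundary data for a homogeneous medium

# Synthetic inhomogeneity: a block of pixels with a different property value
sigma = np.ones(n_pix)
sigma[200:260] = 2.0
v_obj = S @ sigma

def linear_back_projection(S, v_ref, v_obj):
    """Classic LBP: relative boundary change projected back and
    normalized by the summed sensitivity of each pixel."""
    dv = (v_obj - v_ref) / v_ref
    return (S.T @ dv) / S.sum(axis=0)

img = linear_back_projection(S, v_ref, v_obj).reshape(32, 32)
print(img.shape, float(img.max()))
```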

Keywords: electrical resistivity tomography, finite element simulation, image optimization, parallel electrode linear back projection

Procedia PDF Downloads 153
2483 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structural durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time consuming, labor intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification based on a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to achieve the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, namely batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best performing one. After the model is optimized, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
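
The exhaustive search over structural and training hyperparameters described above could be organized as in the sketch below (Keras, with a deliberately tiny grid and synthetic stand-in data). The image size, class count, candidate values, and epoch budget are assumptions, not the paper's settings.

```python
import itertools
import tensorflow as tf

NUM_CLASSES = 5            # transverse/longitudinal crack, alligator, pothole, intact
IMG_SHAPE = (128, 128, 3)  # assumed input size

def build_model(n_filters, kernel, lr):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(n_filters, kernel, activation="relu", input_shape=IMG_SHAPE),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(2 * n_filters, kernel, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def search(x_train, y_train, x_val, y_val):
    """Check all feasible combinations and keep the best validation accuracy."""
    grid = itertools.product([16, 32], [3, 5], [1e-3, 1e-4], [16, 32])  # filters, kernel, lr, batch
    best = (None, 0.0)
    for n_filters, kernel, lr, batch in grid:
        model = build_model(n_filters, kernel, lr)
        hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                         epochs=3, batch_size=batch, verbose=0)
        acc = max(hist.history["val_accuracy"])
        if acc > best[1]:
            best = ((n_filters, kernel, lr, batch), acc)
    return best

# Synthetic stand-in data, just to make the sketch executable
x = tf.random.uniform((40, *IMG_SHAPE))
y = tf.random.uniform((40,), 0, NUM_CLASSES, dtype=tf.int32)
print(search(x[:32], y[:32], x[32:], y[32:]))
```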

Keywords: distress pavement, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 93
2482 A Biologically Inspired Approach to Automatic Classification of Textile Fabric Prints Based On Both Texture and Colour Information

Authors: Babar Khan, Wang Zhijie

Abstract:

Machine Vision has been playing a significant role in industrial automation, imitating a wide variety of human functions and providing improved safety, reduced labour cost, the elimination of human error and/or subjective judgments, and the creation of timely statistical product data. Despite intensive research, there have been no attempts to classify fabric prints based on both printed texture and colour; most of the research so far encompasses only black-and-white or greyscale images. We propose a biologically inspired processing architecture to classify fabrics with respect to the fabric print texture and colour. We created a texture descriptor based on the HMAX model for machine vision and incorporated a colour descriptor based on opponent colour channels, simulating the single-opponent and double-opponent neuronal functions of the brain. We found that our algorithm not only outperformed the original HMAX algorithm on the classification of fabric print texture and colour, but also achieved a recognition accuracy of 85-100% on fabrics of different colours and textures.
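
The opponent-colour part of the descriptor can be illustrated with the standard single-opponent transform; the exact channel definitions and the double-opponent filtering stage used in the paper may differ, so treat the formulas below as a generic sketch rather than the paper's descriptor.

```python
import numpy as np

def single_opponent_channels(img):
    """img: H x W x 3 float RGB image in [0, 1].
    Returns the classic opponent channels: red-green, blue-yellow, and intensity."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    RG = R - G                      # red vs. green
    BY = B - (R + G) / 2.0          # blue vs. yellow
    I = (R + G + B) / 3.0           # luminance / intensity
    return np.stack([RG, BY, I], axis=-1)

# Double-opponent responses are commonly approximated by filtering the
# single-opponent channels with center-surround (difference-of-Gaussian) kernels.
rng = np.random.default_rng(5)
fabric_patch = rng.random((64, 64, 3))
print(single_opponent_channels(fabric_patch).shape)
```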

Keywords: automatic classification, texture descriptor, colour descriptor, opponent colour channel

Procedia PDF Downloads 485
2481 Load Management Using Multiple Sequential Load Shaping Techniques

Authors: Amira M. Attia, Karim H. Youssef, Nabil H. Abbasi

Abstract:

Demand Side Management (DSM) is an essential characteristic of current and future smart grid systems. As one of the DSM functions, load management aims to control customers' total electric consumption and the utility's load factor by using various load shaping techniques. However, applying load shaping techniques such as load shifting, peak clipping, or strategic conservation individually does not provide the desired level of improvement in load factor and/or customer bill reduction. In this paper, two load shaping techniques will be simulated as constrained optimization problems. The purpose is to reflect the application of a combined load shifting and strategic conservation model, and the application of a combined load shifting and peak clipping model. The problems will be formulated and solved using disciplined convex programming (CVX) in MATLAB® R2013b. Simulation results will be evaluated and compared to identify the multi-technique model with the greatest impact on improving the load curve.
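
The paper formulates the load-shaping techniques in CVX under MATLAB; an analogous disciplined-convex sketch in Python with CVXPY is given below for the combined load shifting and strategic conservation case. The 24-hour baseline profile, shiftable fraction, and conservation cap are invented for illustration.

```python
import numpy as np
import cvxpy as cp

hours = 24
# Hypothetical baseline demand profile (MW) with an evening peak
base = 50 + 20 * np.exp(-0.5 * ((np.arange(hours) - 19) / 2.5) ** 2)

shift = cp.Variable(hours)                   # load moved into (+) or out of (-) each hour
conserve = cp.Variable(hours, nonneg=True)   # strategic conservation per hour

new_load = base + shift - conserve

constraints = [
    cp.sum(shift) == 0,              # shifting only relocates energy, it does not remove it
    cp.abs(shift) <= 0.15 * base,    # at most 15 % of each hour's load is shiftable (assumed)
    conserve <= 0.05 * base,         # conservation capped at 5 % per hour (assumed)
    new_load >= 0,
]

# Flatten the curve: minimize squared deviation of the new load from its mean
objective = cp.Minimize(cp.sum_squares(new_load - cp.sum(new_load) / hours))
cp.Problem(objective, constraints).solve()

print("peak before: %.1f MW, after: %.1f MW" % (base.max(), new_load.value.max()))
```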

Keywords: convex programing, demand side management, load shaping, multiple, building energy optimization

Procedia PDF Downloads 313
2480 Optimal Allocation of PHEV Parking Lots to Minimize Distribution System Losses

Authors: Mohsen Mazidi, Ali Abbaspour, Mahmud Fotuhi-Firuzabad, Mohamamd Rastegar

Abstract:

To tackle air pollution issues, Plug-in Hybrid Electric Vehicles (PHEVs) have been proposed as an appropriate solution. Charging a large number of PHEV batteries, if not controlled, would have negative impacts on the distribution system. The charging of these vehicles can be centrally controlled in parking lots, which may provide a chance for better coordination than individual charging in houses. In this paper, an optimization-based approach is proposed to determine the optimum PHEV parking capacities at candidate nodes of the distribution system. In so doing, a profile for charging and discharging of PHEVs is developed in order to flatten the network load profile. Then, this profile is used in solving an optimization problem to minimize the distribution system losses. The outputs of the proposed method are the proper locations for PHEV parking lots and the optimum capacity of each lot. The application of the proposed method to the IEEE 34-node test feeder verifies the effectiveness of the method.

Keywords: loss, plug-in hybrid electric vehicle (PHEV), PHEV parking lot, V2G

Procedia PDF Downloads 543
2479 Optimization of Ultrasound Assisted Extraction of Polysaccharides from Plant Waste Materials: Selected Model Material is Hazelnut Skin

Authors: T. Yılmaz, Ş. Tavman

Abstract:

In this study, the ultrasound assisted extraction (UAE) of hemicellulose-based polysaccharides from plant waste material has been optimized. The selected material is hazelnut skin. The extraction variables for the operation are extraction time, amplitude, and application temperature. Optimum conditions have been evaluated based on responses such as the amount of wet crude polysaccharide, total carbohydrate content, and dried sample. Pretreated hazelnut skin powders were used for the experiments. Ten grams of sample were suspended in 100 ml of water in a jacketed vessel with additional magnetic stirring. The mixture was sonicated with an immersed ultrasonic probe processor. After the extraction procedures, the ethanol-soluble and insoluble fractions were separated for further examination. The obtained experimental data were analyzed by analysis of variance (ANOVA). Second order polynomial models were developed using multiple regression analysis. The individual and interactive effects of the applied variables were evaluated by a Box-Behnken design. The models developed from the experimental design were predictive and fit the experimental data well, with a high correlation coefficient (R² greater than 0.95). The polysaccharides extracted from hazelnut skin are assumed to be pectic polysaccharides according to the Fourier Transform Infrared (FTIR) analysis results and a literature survey. No further change can be observed between the spectra for different sonication times. Application of UAE at the optimized conditions has an important effect on the extraction of hemicellulose from plant material by promoting partial hydrolysis that breaks the bonds with other components in the plant cell wall material. This effect can be attributed to the varied intensity of microjets and microstreaming under the varied sonication conditions.

Keywords: hazelnut skin, optimization, polysaccharide, ultrasound assisted extraction

Procedia PDF Downloads 331
2478 A Comparison of Alternative Traffic Controls for Interchange Ramp Areas Using Synchro Software

Authors: Mohamed Mesbah, Bruce Janson

Abstract:

An interchange is the most important component of freeway and highway facilities, working as a connector between the highway's elements. The main goal of designing interchanges is to provide an acceptable level of service and delay so that vehicles move smoothly when entering and exiting the interchange. There are many factors that can have a significant impact on the level of service; the main factors are traffic volumes and the type of interchange. This paper will discuss interchanges with roundabouts under various traffic volumes to determine the level of service of the interchanges studied, and will replace the roundabout control with traffic signals to make a meaningful comparison between these systems. A secondary goal is to propose improvements for scenarios where the level of service is deemed unacceptable. This will be achieved using Synchro traffic simulation software, which facilitates the simulation and optimization of interchanges to enhance operational efficiency and safety.

Keywords: interchange, roundabout, traffic signal, Synchro, delay, level of service, traffic volumes, vehicles, simulation, optimization, adjustment

Procedia PDF Downloads 16
2477 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process

Authors: Hong-Ming Chen

Abstract:

This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations that depend on the time t. The Brownian motion and jump uncertainties are represented, in integral form, by the piecewise constant function w(t) and the point function θ(t), respectively. It is shown that the interest rate function and the yield function of the Hull and White interest rate model with a jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach essentially generates yield functions with minimal fitting errors and small oscillation.

Keywords: optimization, interest rate model, jump process, deterministic

Procedia PDF Downloads 161
2476 A Practical Survey on Zero-Shot Prompt Design for In-Context Learning

Authors: Yinheng Li

Abstract:

The remarkable advancements in large language models (LLMs) have brought about significant improvements in natural language processing tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single "best" prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various Natural Language Processing (NLP) tasks.

Keywords: in-context learning, prompt engineering, zero-shot learning, large language models

Procedia PDF Downloads 83
2475 Estimation of Fourier Coefficients of Flux Density for Surface Mounted Permanent Magnet (SMPM) Generators by Direct Search Optimization

Authors: Ramakrishna Rao Mamidi

Abstract:

For Surface Mounted Permanent Magnet (SMPM) generators, it is essential to predict performance and to analyze the magnet's air gap flux density wave shape. The flux density wave shape is neither a pure sine wave nor a square wave, nor a simple combination of the two. This is due to the variation of the air gap reluctance between the stator and the permanent magnets. The stator slot openings and the number of slots make the wave shape highly complicated. To reduce the complexity of analysis, approximations are made to the wave shape using Fourier analysis. In contrast to the traditional integration method, the Fourier coefficients, aₙ and bₙ, are obtained by direct search optimization. The wave shape reconstructed with the optimized coefficients is close to the desired wave shape. Harmonic amplitudes are worked out and compared with the initial values. It can be concluded that the direct search method can be used for estimating Fourier coefficients of irregular wave shapes.
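
A small sketch of the direct-search idea, fitting aₙ and bₙ of a truncated Fourier series to a sampled flux-density waveform with Nelder-Mead instead of integration, is given below. The "measured" waveform is a synthetic flat-topped wave with slot ripple, used only to make the sketch runnable, and the harmonic count is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
# Synthetic flat-topped flux density wave with slot ripple (stand-in for FE/measured data)
B_meas = np.clip(1.3 * np.sin(theta), -1.0, 1.0) + 0.03 * np.sin(11 * theta)

N_HARM = 7  # number of harmonic terms retained in the truncated series (assumed)

def series(coeffs, theta):
    a = coeffs[:N_HARM]
    b = coeffs[N_HARM:]
    n = np.arange(1, N_HARM + 1)[:, None]
    return a @ np.cos(n * theta) + b @ np.sin(n * theta)

def sse(coeffs):
    return np.sum((series(coeffs, theta) - B_meas) ** 2)

res = minimize(sse, x0=np.zeros(2 * N_HARM), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-9})
a_n, b_n = res.x[:N_HARM], res.x[N_HARM:]
print("fundamental b1 = %.3f, 3rd harmonic b3 = %.3f" % (b_n[0], b_n[2]))
```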

Keywords: direct search, flux plot, fourier analysis, permanent magnets

Procedia PDF Downloads 216
2474 Optimization of Ultrasonic Assisted Extraction of Antioxidants and Phenolic Compounds from Coleus Using Response Surface Methodology

Authors: Reihaneh Ahmadzadeh Ghavidel

Abstract:

Free radicals such as reactive oxygen species (ROS) have detrimental effects on human health through several mechanisms. On the other hand, antioxidant molecules reduce free radical generation in biological systems. Synthetic antioxidants, which are used in the food industry, also have a negative impact on human health. Therefore, the recognition of natural antioxidants such as anthocyanins can address both problems simultaneously. Coleus (Solenostemon scutellarioides) with red leaves is a rich source of anthocyanin compounds. In this study, we evaluated the effect of time (10, 20, and 30 min) and temperature (40, 50, and 60 °C) on the optimization of anthocyanin extraction using the response surface method. In addition, the study aimed to determine the maximum anthocyanin extraction from the coleus plant using the ultrasound method. The results indicated that the optimum conditions for extraction were 39.84 min at 69.25 °C. At this point, total compounds reached 3.7451 mg 100 ml⁻¹. Furthermore, under the optimum conditions, the anthocyanin concentration, extraction efficiency, ferric reducing ability, total phenolic compounds, and EC50 were 3.221931, 6.692765, 223.062, 3355.605, and 2.614045, respectively.

Keywords: anthocyanin, antioxidant, coleus, extraction, sonication

Procedia PDF Downloads 320
2473 Optimal Protection Coordination in Distribution Systems with Distributed Generations

Authors: Abdorreza Rabiee, Shahla Mohammad Hoseini Mirzaei

Abstract:

The advantages of distributed generation (DG) based on renewable energy sources (RESs) lead to a high penetration level of DGs in distribution networks. With the incorporation of DGs in distribution systems, the system reliability and security, as well as the voltage profile, are improved. However, the protection of such systems is still challenging. In this paper, the related papers are first reviewed, and then a practical scheme is proposed for the coordination of overcurrent relays (OCRs) in a distribution system with DGs. The coordination problem is formulated as a nonlinear programming (NLP) optimization problem with the objective function of minimizing the total operating time of the OCRs. The proposed method is studied on a simple test system. The optimization problem is solved with the General Algebraic Modeling System (GAMS) to calculate the optimal time dial setting (TDS) and also the pickup current setting of the OCRs. The results show the effectiveness of the proposed method and its applicability.
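
A reduced two-relay version of this coordination NLP, using the standard IEC inverse-time characteristic and a fixed coordination time interval, can be written with scipy as below; the fault current, pickup settings, TDS bounds, and CTI value are placeholders rather than the paper's test-system data (which is solved in GAMS).

```python
import numpy as np
from scipy.optimize import minimize

# Primary relay R1 (downstream) and backup relay R2 (upstream), IEC standard inverse curve:
# t = TDS * 0.14 / ((I_fault / I_pickup)**0.02 - 1)
I_fault = 2000.0                        # A, fault at the end of the feeder (assumed)
I_pickup = np.array([400.0, 600.0])     # A, fixed pickup currents of R1, R2 (assumed)
CTI = 0.3                               # s, required coordination time interval (assumed)

def op_time(tds, ip):
    return tds * 0.14 / ((I_fault / ip) ** 0.02 - 1.0)

def total_time(tds):
    return op_time(tds[0], I_pickup[0]) + op_time(tds[1], I_pickup[1])

constraints = [
    # The backup must act at least CTI seconds after the primary for the same fault
    {"type": "ineq", "fun": lambda tds: op_time(tds[1], I_pickup[1])
                                        - op_time(tds[0], I_pickup[0]) - CTI},
]
bounds = [(0.05, 1.1), (0.05, 1.1)]     # typical TDS limits

res = minimize(total_time, x0=[0.5, 0.5], bounds=bounds,
               constraints=constraints, method="SLSQP")
print("optimal TDS:", np.round(res.x, 3), " total operating time: %.3f s" % res.fun)
```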

Keywords: distributed generation, DG, distribution network, over current relay, OCR, protection coordination, pickup current, time dial setting, TDS

Procedia PDF Downloads 138
2472 An IM-COH Algorithm Neural Network Optimization with Cuckoo Search Algorithm for Time Series Samples

Authors: Wullapa Wongsinlatam

Abstract:

The back propagation algorithm (BP) is a widely used technique in artificial neural networks and has been used as a tool for solving time series problems; key concerns include decreasing training time, reducing the tendency to fall into local minima, and reducing sensitivity to the initial weights and bias. This paper proposes an improvement of the BP technique called the IM-COH algorithm (IM-COH). By combining the IM-COH algorithm with the cuckoo search algorithm (CS), the result is the cuckoo search improved control output hidden layer algorithm (CS-IM-COH). This new algorithm handles sensitivity to the initial weights and bias better than the original BP algorithm. In this research, the CS-IM-COH algorithm is compared with the original BP, the IM-COH, and the original BP with CS (CS-BP). Furthermore, the selected benchmarks, four time series samples, are shown in this research for illustration. The research shows that the CS-IM-COH algorithm gives the best forecasting results on the selected samples.

Keywords: artificial neural networks, back propagation algorithm, time series, local minima problem, metaheuristic optimization

Procedia PDF Downloads 152
2471 A Queer Approach to the National Irish Identity during 'The Troubles' in Belfast in Paul Mcveigh's 'The Good Son'

Authors: Eduardo Garcia Agustin

Abstract:

This paper focuses on how Mickey – the 10-year-old main character and narrator in Paul McVeigh's novel The Good Son (2015) – becomes aware of his own queerness and its implications in a conflicted place and time such as Belfast during 'The Troubles' in the 1980s. Queer theory allows a comparative reading of identity issues such as national and gender discourses. As opposed to other exclusionary social constructs that classify identities in an Us-Others binomial, queer has become a sort of umbrella term with room for identities beyond LGBTQ. Therefore, it offers relevant tools for reading this highly awarded novel by focusing on the intersectional construction of Mickey's identity in progress within the social and familial realms. The aim of this paper is to offer a queer reading of The Good Son, which was awarded the Polari First Book Prize in 2016, by showing the key role of Mickey's conflicted realization of his own queerness in the polarized society of Northern Ireland in the 1980s, where there is no shade of grey. Within such a polarized context, Mickey's perception of the internal and external identity conflicts he is exposed to shows how necessary a certain touch of pink is as a potential escape from those conflicts.

Keywords: conflict, national identity, Northern Ireland, queer identity

Procedia PDF Downloads 533
2470 Design and Development of High Strength Aluminium Alloy from Recycled 7xxx-Series Material Using Bayesian Optimisation

Authors: Alireza Vahid, Santu Rana, Sunil Gupta, Pratibha Vellanki, Svetha Venkatesh, Thomas Dorin

Abstract:

Aluminum is the preferred material for lightweight applications, and its alloys are constantly improving. The high strength 7xxx alloys have been extensively used for structural components in the aerospace and automobile industries for the past 50 years. In the next decade, a great number of airplanes will be retired, providing an obvious source of valuable used metal and creating great demand for cost-effective methods to re-use these alloys. The design of proper aerospace alloys is primarily based on optimizing strength and ductility, both of which can be improved by controlling the additional alloying elements as well as the heat treatment conditions. In this project, we explore the design of high-performance alloys with 7xxx as the base material. These designed alloys have to be optimized and improved to compare with modern 7xxx-series alloys and to remain competitive for aircraft manufacturing. Aerospace alloys are extremely complex, with multiple alloying elements and numerous processing steps, which often makes optimization intensive and costly. In the present study, we used the Bayesian optimization algorithm, a well-known adaptive design strategy, to optimize this multi-variable system. An Al alloy was proposed, and the relevant heat treatment schedules were optimized, using the tensile yield strength as the output to maximize. The designed alloy has a maximum yield strength and ultimate tensile strength of more than 730 and 760 MPa, respectively, and is thus comparable to modern high strength 7xxx-series alloys. The microstructure of this alloy was characterized by electron microscopy, indicating that the increased strength of the alloy is due to the presence of a high number density of refined precipitates.
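
A compact sketch of the adaptive-design loop, Gaussian-process Bayesian optimization over an aging temperature and time with yield strength as the objective, is given below using scikit-optimize. The strength surrogate, variable ranges, and aging model are invented for illustration and do not reflect the experimental alloy data or the variables actually optimized in the study.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def yield_strength(params):
    """Invented stand-in for a tensile test: peak-aging behaviour around ~150 °C."""
    T, t = params   # aging temperature (°C) and time (h)
    return 730 * np.exp(-((T - 150) / 40) ** 2) * (1 - np.exp(-t / 8)) * np.exp(-t / 120)

space = [Real(100, 220, name="aging_temp_C"),
         Real(1, 72, name="aging_time_h")]

# gp_minimize minimizes, so negate the strength to maximize it
result = gp_minimize(lambda p: -yield_strength(p), space,
                     n_calls=30, n_initial_points=8, random_state=0)

T_best, t_best = result.x
print(f"suggested schedule: {T_best:.0f} °C for {t_best:.1f} h, "
      f"predicted strength {-result.fun:.0f} MPa")
```

In practice each "evaluation" is a heat treatment plus tensile test, which is exactly why a sample-efficient Bayesian strategy is attractive for this multi-variable system.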

Keywords: aluminum alloys, Bayesian optimization, heat treatment, tensile properties

Procedia PDF Downloads 119
2469 Optimization of Stevia Concentration in Rasgulla (Sweet Syrup Cheese Ball) Based on Quality

Authors: Gurveer Kaur, T. K. Goswami

Abstract:

Rasgulla (a sweet syrup cheese ball), a sweet, spongy dessert, represents a traditional sweet dish of the Indian subcontinent prepared from chhana. 100 g of Rasgulla contains 186 calories, and so it is a driving force behind obesity and diabetes. To reduce Rasgulla's energy value, sucrose mainly should be minimized; therefore, stevia (a zero-calorie natural sweetener) is used instead of sucrose to prepare Rasgulla. In this study, three samples were prepared from 4% fat milk with sucrose-to-stevia ratios of (i) 50:50 (T1), (ii) 25:75 (T2), and (iii) 0:100 (T3), along with a 100:0 control sample. It was found that as the sucrose concentration decreases, the fat percentage in the Rasgulla increases slightly. Sample T2 showed < 0.1% (±0.06) sucrose content. However, there was no significant difference in the protein and ash content of the samples. The whitening index was highest (78.0 ± 0.13) for T2 and lowest (65.7 ± 0.21) for the control sample, since less sucrose in the syrup reduces browning of the sample (T2). The energy value per 100 g was calculated to be 50, 72, 98, and 184 calories for the T3, T2, T1, and control samples, respectively. According to the optimization study, the preferred (high quality) order of the samples was as follows: T2 > T1 > control > T3. Low sugar Rasgulla with acceptable quality can be prepared with a 25:75 ratio of sucrose to stevia.

Keywords: composition, rasgulla, sensory, stevia

Procedia PDF Downloads 206
2468 Key Frame Based Video Summarization via Dependency Optimization

Authors: Janya Sainui

Abstract:

With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.

Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information

Procedia PDF Downloads 266
2467 Kinematic Optimization of Energy Extraction Performances for Flapping Airfoil by Using Radial Basis Function Method and Genetic Algorithm

Authors: M. Maatar, M. Mekadem, M. Medale, B. Hadjed, B. Imine

Abstract:

In this paper, numerical simulations have been carried out to study the performance of a flapping wing used as an energy collector. Metamodeling and genetic algorithms are used to detect the optimal configuration, improving the power coefficient and/or the efficiency. Radial basis functions and genetic algorithms have been applied to solve this problem. Three optimization factors are controlled, namely the dimensionless heave amplitude h₀, the pitch amplitude θ₀, and the flapping frequency f. ANSYS FLUENT software has been used to solve the principal equations at a Reynolds number of 1100, while the heave and pitch motion of a NACA0015 airfoil has been realized using a developed user-defined function (UDF). The results reveal an average power coefficient and efficiency of 0.78 and 0.338 with an inexpensive low-fidelity model and a total relative error of 4.1% versus the simulation. The performance of the simulated RBF-NSGA-II optimum has been improved by 1.2% compared with the validated model.

Keywords: numerical simulation, flapping wing, energy extraction, power coefficient, efficiency, RBF, NSGA-II

Procedia PDF Downloads 43