Search results for: non-stationary stochastic optimization

1996 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited because of the difficulty of measuring it. Recovering the fECG from signals acquired non-invasively with electrodes placed on the maternal abdomen is a challenging task, because the abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings that exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). The aim is to improve on the most commonly adopted methods for fECG extraction, which are usually based on estimating and cancelling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and cancellation by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG from the abdominal signals, estimated by principal component analysis (PCA), and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal recordings with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and ICA-based methods were then compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so the comparison between the two methods can only be qualitative. On the annotated ADdb database, the QIO method provided performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, outperforming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this index was higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method thus gave very high performance on both databases. These results support the application of the algorithm, in a fully unsupervised way, in wearable devices for self-monitoring of fetal health.
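To make the core QIO step concrete, the Python sketch below searches for the channel weights that maximize a simple periodicity-based quality index over linear combinations of preprocessed abdominal channels. The fQI used here is only an illustrative stand-in (peak of the normalized autocorrelation in a plausible fetal RR lag range); the paper's actual fQI definition and pre-processing are not reproduced.

```python
# Illustrative sketch only; the assumed fQI is NOT the paper's quality index.
import numpy as np
from scipy.optimize import minimize

def fqi(signal, fs, rr_range=(0.33, 0.6)):
    """Toy quality index: peak normalized autocorrelation at plausible fetal RR lags (s)."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / (ac[0] + 1e-12)
    lo, hi = int(rr_range[0] * fs), int(rr_range[1] * fs)
    return ac[lo:hi].max()

def qio_extract(channels, fs):
    """channels: (n_channels, n_samples) preprocessed abdominal signals after mECG cancelling."""
    n = channels.shape[0]

    def neg_fqi(w):
        w = w / (np.linalg.norm(w) + 1e-12)   # remove scale ambiguity
        return -fqi(w @ channels, fs)

    # restart from each single-channel weight vector and keep the best optimum found
    best = min((minimize(neg_fqi, w0, method="Nelder-Mead") for w0 in np.eye(n)),
               key=lambda r: r.fun)
    w = best.x / np.linalg.norm(best.x)
    return w @ channels, w

# Example with synthetic data: 4 channels, 10 s at 250 Hz, hidden periodic fetal component
fs = 250
t = np.arange(0, 10, 1 / fs)
fetal = (np.sin(2 * np.pi * 2.3 * t) > 0.99).astype(float)   # crude spike train, RR ~ 0.43 s
rng = np.random.default_rng(0)
channels = np.outer(rng.normal(size=4), fetal) + 0.5 * rng.normal(size=(4, len(t)))
extracted, weights = qio_extract(channels, fs)
print("weights:", weights.round(2), " fQI:", round(fqi(extracted, fs), 3))
```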

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 279
1995 Dosimetric Dependence on the Collimator Angle in Prostate Volumetric Modulated Arc Therapy

Authors: Muhammad Isa Khan, Jalil Ur Rehman, Muhammad Afzal Khan Rao, James Chow

Abstract:

Purpose: This study investigates the dose-volume variations in the planning target volume (PTV) and organs-at-risk (OARs) for different collimator angles in SmartArc prostate volumetric modulated arc therapy (VMAT). Awareness of the effect of the collimator angle on PTV coverage and OAR sparing is essential for the planner, because the optimization contains numerous treatment constraints, producing a complex, unstable and computationally challenging problem in the search for an optimal plan within a reasonable time. Materials and Methods: Single-arc VMAT plans were generated on a Harold phantom for collimator angles varied systematically from 0° to 90°, and a new treatment plan was optimized for each collimator angle. We analyzed the conformity index (CI), homogeneity index (HI), gradient index (GI), monitor units (MUs), the dose-volume histogram, and the mean and maximum doses to the PTV. We also examined the OARs (bladder, rectum and femoral heads) using the dose-volume criteria in the treatment plan (D30%, D50%, V30Gy and V38Gy of bladder and rectum; D5%, V14Gy and V22Gy of femoral heads), the dose-volume histogram, and the mean and maximum doses at each collimator angle. Results: No significant difference was found in the VMAT optimization at any of the studied collimator angles. However, if 0.5% accuracy is of concern, a collimator angle of 45° provides a higher CI and lower HI; a collimator angle of 15° also provides lower HI values, similar to 45°. A collimator angle of 75° was found to be good for rectum and right femur sparing, while collimator angles of 90° and 30° were found to be good for rectum and left femur sparing, respectively. The PTV dose coverage statistics for each plan are comparatively independent of the collimator angle. Conclusion: This study gives the planner the freedom to choose any collimator angle from 0° to 90° for PTV coverage, while selecting a suitable collimator angle to spare OARs.

Keywords: VMAT, dose-volume histogram, collimator angle, organs-at-risk

Procedia PDF Downloads 510
1994 6G: Emerging Architectures, Technologies and Challenges

Authors: Abdulrahman Yarali

Abstract:

The advancement of technology never stops, because the demand for better internet and communication connectivity keeps increasing. Just as 5G networks are rolling out, the world has begun to talk about sixth-generation (6G) networks. The goals of 6G are broadly the same as those of 5G: higher speeds, machine-to-machine (M2M) communication, and reduced latency. However, one of the distinctive focuses of 6G is the optimization of networks of machines through super-high speeds and innovative features. This paper discusses many aspects of the technologies, architectures, challenges, and opportunities of 6G wireless communication systems.

Keywords: 6G, characteristics, infrastructures, technologies, AI, ML, IoT, applications

Procedia PDF Downloads 23
1993 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole

Authors: Basavaraj R. Endigeri, S. G. Sarganachari

Abstract:

Steel is widely used in machine parts, structural equipment and many other applications. In many steel structural elements, holes of different shapes and orientations are made to satisfy design requirements. The presence of holes in steel elements creates stress concentration, which eventually reduces the mechanical strength of the structure. It is therefore of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. From the literature it is known that, to date, there is no analytical solution for reducing the stress concentration by providing auxiliary holes of definite radii at definite locations in a steel plate; a numerical method can instead be used to determine the optimum location and radii of the auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analyzed, and the effect of stress concentration is represented graphically. The effect on stress concentration of introducing auxiliary holes at an optimum location and with optimum radii is also represented graphically. The finite element analysis package ANSYS 11.0 is used to analyze the steel plate, with the analysis carried out using PLANE42 elements. The ANSYS optimization module is then used to determine the location and radii of the auxiliary holes that minimize the stress concentration. All results are presented graphically for different ratios of hole diameter to plate width, in the form of graphs for determining the locations and diameters of the optimal auxiliary holes and of stress concentration versus the ratio of central hole diameter to plate width. The finite element results of the study indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole.
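For reference, the stress concentration at an isolated elliptical hole in a wide plate under remote uniaxial tension has the classical closed-form factor Kt = 1 + 2a/b (Inglis); the auxiliary-hole configuration studied here has no such closed form, which is why FEM optimization is needed. A minimal snippet of that baseline factor:

```python
# Classical (Inglis) stress concentration factor for an elliptical hole in a wide
# plate under remote uniaxial tension; used only as a baseline against which the
# FEM reduction from auxiliary holes can be judged.
def kt_elliptical_hole(a, b):
    """a: semi-axis perpendicular to the load, b: semi-axis along the load."""
    return 1.0 + 2.0 * a / b

for a, b in [(1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]:
    print(f"a/b = {a/b:.1f}  ->  Kt = {kt_elliptical_hole(a, b):.1f}")   # circle gives Kt = 3
```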

Keywords: finite element method, optimization, stress concentration factor, auxiliary holes

Procedia PDF Downloads 452
1992 Polymer Mixing in the Cavity Transfer Mixer

Authors: Giovanna Grosso, Martien A. Hulsen, Arash Sarhangi Fard, Andrew Overend, Patrick D. Anderson

Abstract:

In many industrial applications and, in particular in polymer industry, the quality of mixing between different materials is fundamental to guarantee the desired properties of finished products. However, properly modelling and understanding polymer mixing often presents noticeable difficulties, because of the variety and complexity of the physical phenomena involved. This is the case of the Cavity Transfer Mixer (CTM), for which a clear understanding of mixing mechanisms is still missing, as well as clear guidelines for the system optimization. This device, invented and patented by Gale at Rapra Technology Limited, is an add-on to be mounted downstream of existing extruders, in order to improve distributive mixing. It consists of two concentric cylinders, the rotor and stator, both provided with staggered rows of hemispherical cavities. The inner cylinder (rotor) rotates, while the outer (stator) remains still. At the same time, the pressure load imposed upstream, pushes the fluid through the CTM. Mixing processes are driven by the flow field generated by the complex interaction between the moving geometry, the imposed pressure load and the rheology of the fluid. In such a context, the present work proposes a complete and accurate three dimensional modelling of the CTM and results of a broad range of simulations assessing the impact on mixing of several geometrical and functioning parameters. Among them, we find: the number of cavities per row, the number of rows, the size of the mixer, the rheology of the fluid and the ratio between the rotation speed and the fluid throughput. The model is composed of a flow part and a mixing part: a finite element solver computes the transient velocity field, which is used in the mapping method implementation in order to simulate the concentration field evolution. Results of simulations are summarized in guidelines for the device optimization.

Keywords: mixing, non-Newtonian fluids, polymers, rheology

Procedia PDF Downloads 377
1991 Research on Hangzhou Commercial Center System Based on Point of Interest Data

Authors: Chen Wang, Qiuxiao Chen

Abstract:

With the advent of the information age and the era of big data, urban planning research is no longer satisfied with the analysis and application of traditional data. Given the limitations of traditional research on urban commercial center systems, big data provides new opportunities for urban research. Therefore, based on quantitative evaluation of big data, the commercial center system of the main city of Hangzhou is analyzed and evaluated, and the scale and hierarchical structure of the urban commercial center system are studied. To make up for the shortcomings of existing POI extraction methods, this study proposes a POI extraction method based on adaptive adjustment of the search window, which can accurately and efficiently extract commercial-business POI data for the main city of Hangzhou. Through visualization and kernel density analysis of the extracted Point of Interest (POI) data, the current state of the commercial center system in the main city of Hangzhou is evaluated. It is then compared with the commercial center system structure in the 'Hangzhou City Master Plan (2001-2020)'; the problems in the planned urban commercial center system are analyzed, and corresponding suggestions and optimization strategies for the planning of the Hangzhou commercial center system are provided. The following conclusions are drawn: the current commercial center system in the main city of Hangzhou consists of one first-level main center, one second-level main center, three third-level sub-centers, and multiple community-level business centers. Generally speaking, the construction of the main centers in the commercial center system is basically up to standard, while there is still a large gap in the construction of the sub-centers and the regional-level commercial centers, where further construction is needed. The study therefore proposes an optimized hierarchical functional system that organizes commercial centers in an orderly manner, strengthens central radiation to drive surrounding areas, and implements construction guidance for the centers, effectively promoting cluster development and further improving the commercial center system structure of the main city of Hangzhou.

Keywords: business center system, business format, main city of Hangzhou, POI extraction method

Procedia PDF Downloads 140
1990 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape

Authors: Chen Bo, Wen Zengping

Abstract:

Building on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately so that they are compatible with the site-specific hazard level considered. An optimized selection and scaling method is therefore established that uses not only a Monte Carlo simulation method to create stochastic simulated spectra, accounting for the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its application to structural fragility analysis is demonstrated through case studies. Compared to the previous scheme, which did not consider the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation and spectral correlation of the target spectrum, and it better reveals the uncertainty of the site-specific hazard level. Meanwhile, it helps improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work can provide a reasonable and reliable basis for structural seismic evaluation under scenario earthquake environments.
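A minimal sketch of the stochastic target-spectrum step described above: spectra are simulated from an assumed multivariate lognormal distribution (mean log-ordinates plus a period-to-period covariance). The numerical values and the correlation model below are placeholders, not the study's hazard data.

```python
# Simulate stochastic response spectra from a multivariate lognormal target:
# mean log-ordinates mu and covariance C across periods. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])           # s
mu = np.log(np.array([0.8, 1.0, 0.6, 0.3, 0.12]))       # target median ln(Sa), g
sigma = np.array([0.5, 0.5, 0.55, 0.6, 0.65])           # ln-standard deviation per period
rho = np.array([[np.exp(-0.7 * abs(np.log(ti / tj)))    # assumed inter-period correlation
                 for tj in periods] for ti in periods])
C = np.outer(sigma, sigma) * rho

# Each simulated spectrum can then serve as an individual target for record matching
sim_spectra = np.exp(rng.multivariate_normal(mu, C, size=20))
print(sim_spectra.shape)   # (20, 5): 20 stochastic target spectra over 5 periods
```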

Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape

Procedia PDF Downloads 291
1989 Optimization of the Co-Precipitation of Industrial Waste Metals in a Continuous Reactor System

Authors: Thomas S. Abia II, Citlali Garcia-Saucedo

Abstract:

A continuous copper precipitation treatment (CCPT) system was conceived at Intel Chandler Site to serve as a first-of-kind (FOK) facility-scale waste copper (Cu), nickel (Ni), and manganese (Mn) co-precipitation facility. The process was designed to treat highly variable wastewater discharged from a substrate packaging research factory. The paper discusses metals co-precipitation induced by internal changes for manufacturing facilities that lack the capacity for hardware expansion due to real estate restrictions, aggressive schedules, or budgetary constraints. Herein, operating parameters such as pH and oxidation reduction potential (ORP) were examined to analyze the ability of the CCPT System to immobilize various waste metals. Additionally, influential factors such as influent concentrations and retention times were investigated to quantify the environmental variability against system performance. A total of 2,027 samples were analyzed and statistically evaluated to measure the performance of CCPT that was internally retrofitted for Mn abatement to meet environmental regulations. In order to enhance the consistency of the influent, a separate holding tank was cannibalized from another system to collect and slow-feed the segregated Mn wastewater from the factory into CCPT. As a result, the baseline influent Mn decreased from 17.2±18.7 mg·L⁻¹ at pre-pilot to 5.15±8.11 mg·L⁻¹ post-pilot (70.1% reduction). Likewise, the pre-trial and post-trial average influent Cu values to CCPT were 52.0±54.6 mg·L⁻¹ and 33.9±12.7 mg·L⁻¹, respectively (34.8% reduction). However, the raw Ni content of 0.97±0.39 mg·L⁻¹ at pre-pilot increased to 1.06±0.17 mg·L⁻¹ at post-pilot. The average Mn output declined from 10.9±11.7 mg·L⁻¹ at pre-pilot to 0.44±1.33 mg·L⁻¹ at post-pilot (96.0% reduction) as a result of the pH and ORP operating setpoint changes. In similar fashion, the output Cu quality improved from 1.60±5.38 mg·L⁻¹ to 0.55±1.02 mg·L⁻¹ (65.6% reduction), while the Ni output sustained a 50% enhancement during the pilot study (0.22±0.19 mg·L⁻¹ reduced to 0.11±0.06 mg·L⁻¹). pH and ORP were shown to be significantly instrumental to the precipitative versatility of the CCPT System.

Keywords: copper, co-precipitation, industrial wastewater treatment, manganese, optimization, pilot study

Procedia PDF Downloads 268
1988 The Gasoil Hydrofining Kinetics Constants Identification

Authors: C. Patrascioiu, V. Matei, N. Nicolae

Abstract:

The paper describes the experiments and the calculation of the kinetic parameters of gasoil hydrofining. Experimental results are presented for gasoil hydrofining over a Mo catalyst promoted with Ni on an aluminum support. The authors have adapted a kinetic model of gasoil hydrofining; using this proposed kinetic model and the experimental data, they have calculated the parameters of the model. The numerical calculation is based on minimizing the difference between the experimental sulfur concentration and the kinetic model estimate.
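A sketch of this parameter identification step, assuming an n-th order power-law hydrodesulfurization model (the paper's exact kinetic model is not reproduced here): k and n are fitted by least squares between the measured and predicted outlet sulfur concentrations. The data values are placeholders.

```python
# Assumed n-th order power-law HDS kinetics, integrated over a plug-flow reactor;
# k and n are fitted to (placeholder) outlet sulfur measurements.
import numpy as np
from scipy.optimize import curve_fit

def sulfur_out(X, k, n):
    """X = (inlet sulfur [ppm], LHSV [1/h]); returns predicted outlet sulfur [ppm]."""
    c_in, lhsv = X
    tau = 1.0 / lhsv
    return (c_in**(1.0 - n) + (n - 1.0) * k * tau)**(1.0 / (1.0 - n))

c_in = np.array([12000.0, 12000.0, 12000.0, 12000.0])   # ppm
lhsv = np.array([1.0, 1.5, 2.0, 3.0])                   # 1/h
c_out = np.array([180.0, 305.0, 470.0, 800.0])          # measured outlet sulfur, ppm

(k, n), _ = curve_fit(sulfur_out, (c_in, lhsv), c_out,
                      p0=(0.05, 1.6), bounds=([1e-4, 1.05], [10.0, 3.0]))
print(f"fitted k = {k:.3e}, n = {n:.2f}")
```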

Keywords: hydrofining, kinetic, modeling, optimization

Procedia PDF Downloads 434
1987 Identification of Factors Affecting Technical Efficiency of Sugar Cane Farming in East Java

Authors: Noor Rizkiyah, Djoko Koestiono, Budi Setiawan, Nuhfil Hanani

Abstract:

This research aims to identify the factors that affect sugar cane production, the level of technical efficiency of ratooning sugar cane farming, and the factors that affect technical inefficiency. The research was carried out in Malang, East Java, with proportioned stratified non-random sampling, obtaining 172 sugar cane farming households classified by ratooning level: ratooning I (3-4 ratoon cycles), ratooning II (5-10 ratoon cycles), and ratooning III (more than 10 ratoon cycles). The method used is the stochastic production frontier approach with maximum likelihood estimation (MLE). The analysis shows that the factors affecting ratooning sugar cane production are land area, use of ZA and petroganic fertilizers, use of supplementary (gap-filling) seedlings, and labor. The average technical efficiency of ratooning sugar cane farmers is 0.78, which is categorized as not yet technically efficient. The factors that influence technical inefficiency are age, number of family dependents, and ratooning frequency. Although not yet technically efficient, sugar cane farmers keep cultivating by ratooning; however, repeated ratooning results in a decrease in sugar cane production. The feasibility analysis of ratooning sugar cane farming gives an R/C ratio of 1.15, so it is still worth undertaking. Thus, improved technology and better combinations of inputs are needed so that technical efficiency can be achieved and the farming becomes more worthwhile to organize.

Keywords: technical efficiency, production, sugarcane, frontier

Procedia PDF Downloads 169
1986 Optimization of Mechanical Cacao Shelling Parameters Using Unroasted Cocoa Beans

Authors: Jeffrey A. Lavarias, Jessie C. Elauria, Arnold R. Elepano, Engelbert K. Peralta, Delfin C. Suministrado

Abstract:

The shelling process is one of the primary processes and critical steps in the processing of chocolate or any product derived from cocoa beans: it affects the quality of the cocoa nibs in terms of flavor and purity. In the Philippines, small-scale food processors cannot really compete with large-scale confectionery manufacturers because of the lack of available postharvest facilities appropriate to their level of operation. The impact of this study is to provide the intervention needed to allow cacao farmers to engage in value-adding as a way to maximize the economic potential of cacao. The provision and availability of postharvest machines such as a mechanical cacao sheller will thus revolutionize the current state of the cacao industry in the Philippines. A mechanical cacao sheller was developed, fabricated, and evaluated to establish the optimum shelling conditions, namely the moisture content of the cocoa beans, the clearance through which the cocoa beans pass in the breaker section, and the speed of the breaking mechanism, with respect to shelling recovery, shelling efficiency, shelling rate, energy utilization and large nib recovery, and to establish the optimum levels of the shelling parameters of the mechanical sheller. These factors were statistically analyzed using a Box-Behnken design of experiments and Response Surface Methodology (RSM). By maximizing shelling recovery, shelling efficiency, shelling rate and large nib recovery and minimizing energy utilization, the optimum shelling conditions were established at a moisture content of 6.5%, a clearance of 3 millimeters and a breaker speed of 1300 rpm. The corresponding optimum values of shelling recovery, shelling efficiency, shelling rate, large nib recovery and energy utilization were 86.51%, 99.19%, 21.85 kg/hr, 89.75%, and 542.84 W, respectively. Experimental values obtained under the optimum conditions were compared with values predicted by the models and were found to be in good agreement.
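The sketch below illustrates the RSM step: a full quadratic model in the three factors (moisture content, clearance, breaker speed) is fitted to a response such as shelling recovery and then maximized over the experimental region. The runs and response values are hypothetical, not the study's data.

```python
# Quadratic response-surface fit over a Box-Behnken style design, then constrained
# maximization of the fitted surface. Factor levels and responses are invented.
import numpy as np
from scipy.optimize import minimize

def quad_terms(x):
    m, c, s = x
    return np.array([1, m, c, s, m*c, m*s, c*s, m*m, c*c, s*s])

# Hypothetical runs: columns = moisture (%), clearance (mm), breaker speed (rpm)
X = np.array([[6.0, 2, 1200], [7.0, 2, 1200], [6.0, 4, 1200], [7.0, 4, 1200],
              [6.0, 3, 1100], [7.0, 3, 1100], [6.0, 3, 1300], [7.0, 3, 1300],
              [6.5, 2, 1100], [6.5, 4, 1100], [6.5, 2, 1300], [6.5, 4, 1300],
              [6.5, 3, 1200], [6.5, 3, 1200], [6.5, 3, 1200]])
y = np.array([82, 84, 80, 83, 81, 83, 85, 86, 82, 81, 87, 84, 85, 86, 85.5])  # e.g. shelling recovery, %

A = np.vstack([quad_terms(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)                 # least-squares quadratic model

res = minimize(lambda x: -beta @ quad_terms(x), x0=[6.5, 3, 1200],
               bounds=[(6.0, 7.0), (2, 4), (1100, 1300)])    # stay inside the tested region
print("optimum (moisture %, clearance mm, rpm):", res.x.round(2), " predicted response:", round(-res.fun, 2))
```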

Keywords: cocoa beans, optimization, RSM, shelling parameters

Procedia PDF Downloads 357
1985 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case

Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

The European Green Deal is the first internationally agreed set of measures to contrast climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition to make the European Union into a modern, resource-efficient, and competitive net-zero emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general purpose to allow the realization of the purposes of the Green Deal already dates back to 2019, strategies and policies keep being developed coping with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases, in particular biomethane and hydrogen, and to end the sale of gasoline and diesel cars by 2035, will all have significant effects on energy supply and demand evolution across the next decades. The interactions between energy supply and demand over long-term time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework totally implemented in Python, therefore ensuring third-party verification even on large and complex models. TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and a large technological dataset including 7 sectors: the upstream and power sectors for the production of all energy commodities and the end-use sectors, including industry, transport, residential, commercial and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes and steel in the industrial sector – with a techno-economic characterization based on public literature – to produce insightful energy scenarios and especially to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies for the realization of the purposes of the Green Deal and to transform them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
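As a toy illustration of the kind of least-cost optimization TEMOA performs (at a vastly larger scale and with full technological detail), the snippet below dispatches three generic technologies to meet a demand under an emissions cap; all figures are invented.

```python
# Miniature least-cost dispatch with an emissions cap, echoing the structure of a
# total-cost-minimizing energy system model. Costs, caps and demand are placeholders.
import numpy as np
from scipy.optimize import linprog

techs = ["gas", "wind", "nuclear"]
cost = np.array([60.0, 45.0, 80.0])      # EUR per MWh produced
co2 = np.array([0.40, 0.0, 0.0])         # t CO2 per MWh
cap = np.array([500.0, 300.0, 200.0])    # maximum generation, MWh
demand = 700.0                           # MWh to be served
co2_cap = 100.0                          # allowed emissions, t CO2

res = linprog(c=cost,
              A_ub=[co2], b_ub=[co2_cap],          # emissions constraint
              A_eq=[[1, 1, 1]], b_eq=[demand],     # demand balance
              bounds=list(zip([0, 0, 0], cap)))
print(dict(zip(techs, res.x.round(1))), "total cost:", round(res.fun, 1))
```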

Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe

Procedia PDF Downloads 104
1984 Investigations on Enhancement of Fly Ash in Cement Manufacturing through Optimization of Clinker Quality and Fly Ash Fineness

Authors: Suresh Vanguri, Suresh Palla, K. V. Kalyani, S. K. Chaturvedi, B. N. Mohapatra

Abstract:

Enhancing the fly ash utilization in the manufacture of cement is identified as one of the key areas to mitigate the Green House Gas emissions from the cement industry. Though increasing the fly ash content in cement has economic and environmental benefits, it results in a decrease in the compressive strength values, particularly at early ages. Quality of clinker and fly ash were identified as predominant factors that govern the extent of absorption of fly ash in the manufacturing of cement. This paper presents systematic investigations on the effect of clinker and fly ash quality on the properties of resultant cement. Since mechanical activation alters the physicochemical properties such as particle size distribution, surface area, phase morphology, understanding the variation of these properties with activation is required for its applications. The effect of mechanical activation on fly ash surface area, specific gravity, flow properties, lime reactivity, comparative compressive strength (CCS), reactive silica and mineralogical properties were also studied. The fineness of fly ash was determined by Blaine’s method, specific gravity, lime reactivity, CCS were determined as per the method IS 1727-1967. The phase composition of fly ash was studied using the X-ray Diffraction technique. The changes in the microstructure and morphology with activation were examined using the scanning electron microscope. The studies presented in this paper also include evaluation of Portland Pozzolana Cement (PPC), prepared using high volume fly ash. Studies are being carried out using clinker from cement plants located in different regions/clusters in India. Blends of PPC containing higher contents of activated fly ash have been prepared and investigated for their chemical and physical properties, as per Indian Standard procedures. Changes in the microstructure of fly ash with activation and mechanical properties of resultant cement containing high volumes of fly ash indicated the significance of optimization of the quality of clinker and fly ash fineness for better techno-economical benefits.

Keywords: flow properties, fly ash enhancement, lime reactivity, microstructure, mineralogy

Procedia PDF Downloads 462
1983 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameters depend directly on the geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. Therefore, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for the given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for simulations and results. MATLAB is used to generate the computations and combinations and to log the data; it is also used to apply machine learning algorithms and to plot the data used to design the algorithm. Since the number of combinations is too large to test manually, the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this work, MATLAB is used to calculate Vivaldi antenna parameters such as the slot-line characteristic impedance, stripline impedance, slot-line width, flare aperture size and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and machine learning approach for automated antenna optimization for the Vivaldi antenna.
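A generic real-coded genetic algorithm loop of the kind described is sketched below; the fitness function is a placeholder, whereas in the described workflow it would be the antenna performance returned by HFSS through its API. The parameter names and bounds are illustrative only.

```python
# Real-coded GA sketch: selection, uniform crossover, Gaussian mutation. The fitness
# is a stand-in; a real run would evaluate each candidate with an HFSS simulation.
import numpy as np

rng = np.random.default_rng(1)
bounds = np.array([[1.0, 5.0],     # e.g. slot-line width (mm), illustrative
                   [20.0, 60.0],   # flare aperture size (mm), illustrative
                   [2.2, 10.2]])   # substrate permittivity, illustrative

def fitness(x):                    # placeholder for simulated antenna performance
    return -np.sum((x - bounds.mean(axis=1))**2)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(30, 3))
for gen in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the 10 fittest
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)            # uniform crossover
        child += rng.normal(0, 0.05, 3) * (bounds[:, 1] - bounds[:, 0])  # mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best design parameters:", best.round(2))
```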

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 108
1982 Generating Spherical Surface of Wear Drain in Cutting Metal by Finite Element Method Analysis

Authors: D. Kabeya Nahum, L. Y. Kabeya Mukeba

Abstract:

This work addresses the design of surface defects in the support of an anchor rod ball joint. The future adhesion contact arising in manufacturing and machining is treated by numerical analysis through a short, simple solution of the coupled thermo-mechanical problem in process engineering. The geometrical evaluation and the quasi-static and dynamic states are discussed in terms of kinematic dimensional tolerances on the surfaces of the part. Geometric modeling using the finite element method (FEM) in the rough-part phase provides an opportunity to resolve the nonlinear behavior observed in empirical data and to improve the discrete functional surfaces. The open question here is how to obtain the spherical geometry of the wear drain through the rolling operation. The formulation, with a (1 ± 0.01) mm thickness near the wear drain of the semi-finishing tool for studying different angles, does not help the practitioner in designing for the vibration, friction and solid-solid interface between part and tool in metal cutting during this complex physical process, whose multiple parameters are not defined in Sobolev spaces. The stochastic approach to cracking, wear and fretting due to the cutting forces, facing boundary layers of small thickness in the workpiece and the tool at the machining position, is predicted to be close to the 'Yakam Matrix'.

Keywords: FEM, geometry, part, simulation, spherical surface engineering, tool, workpiece

Procedia PDF Downloads 271
1981 Arabic Lexicon Learning to Analyze Sentiment in Microblogs

Authors: Mahmoud B. Rokaya

Abstract:

The study of opinion mining and sentiment analysis includes the analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media, social networks, reviews, forum discussions, microblogs, and Twitter leads to a parallel growth in the field of sentiment analysis, which tries to develop effective tools for capturing the trends of people. There are two approaches in the field, lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon that includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientation. Creating lexicons manually is hard, which brings the need for adaptive automated methods for generating a lexicon. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons. In the proposed method, different approaches are combined to generate lexicons from text, and the tweets are classified into 5 classes instead of only positive or negative classes. The sentiment classification problem is written as an optimization problem in which finding the optimum sentiment lexicon is the goal. The solution is based on mathematical programming approaches to find the best lexicon for classifying texts; a genetic algorithm was written to find the optimal lexicon. Then, a meta-level feature was extracted based on the optimal lexicon. The experiments were conducted on several datasets. The results, in terms of accuracy, recall and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets. Based on the sentiment lexicons produced by the algorithm, a better understanding can be achieved of the Arabic language, the culture of Arab Twitter users, and the sentiment orientation of words in different contexts.

Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation

Procedia PDF Downloads 188
1980 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
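For concreteness, the kind of kernel Dido targets is a plain Jacobi stencil sweep, shown below in a sequential NumPy form; Dido's actual output is distributed parallel code with ORWL synchronization, temporal blocking and halo exchange.

```python
# A 5-point Jacobi sweep over a 2D grid: the stencil itself, without any of the
# distribution, blocking or ORWL machinery that the generated code would add.
import numpy as np

def jacobi(grid, iterations):
    g = grid.copy()
    for _ in range(iterations):
        new = g.copy()
        new[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] +
                                  g[1:-1, :-2] + g[1:-1, 2:])
        g = new                      # Jacobi traversal: read only the previous sweep
    return g

grid = np.zeros((64, 64))
grid[0, :] = 100.0                   # fixed boundary condition on one edge
print(jacobi(grid, 200)[32, 32])
```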

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 126
1979 Optimization of Polymerase Chain Reaction Condition to Amplify Exon 9 of PIK3CA Gene in Preventing False Positive Detection Caused by Pseudogene Existence in Breast Cancer

Authors: Dina Athariah, Desriani Desriani, Bugi Ratno Budiarto, Abinawanto Abinawanto, Dwi Wulandari

Abstract:

Breast cancer is regulated by many genes. Defects in the PIK3CA gene, especially the hot spot mutations at exon 9 (E542K and E545K), induce early transformation of breast cells. Early detection of breast cancer based on the mutation profile of this hot spot region can be hampered by the existence of a pseudogene, marked by a substitution mutation at base 1658 (E545A) and a deletion at 1659, which has previously been demonstrated in several cancers. To the best of the authors' knowledge, no studies have so far reported the pseudogene phenomenon in breast cancer. Here, we report a PCR optimization to obtain the true exon 9 of the PIK3CA gene rather than its pseudogene, thereby increasing the validity of the data. Materials and methods: two genomic DNA samples, coded Dev and En, were used in this experiment. Two primer pairs were designed for the standard PCR method, with PCR product sizes of 200 bp and 400 bp. Another primer set was designed for Nested-PCR followed by DNA sequencing. For the Nested-PCR, we optimized the annealing temperatures of the first and second PCR runs and the number of PCR cycles in the first run (15 versus 25). Results: standard PCR using both designed primer pairs failed to detect the true PIK3CA gene; a substitution mutation at 1658 and a deletion at 1659 appearing in the sequence chromatogram of the PCR product indicated the pseudogene. Meanwhile, Nested-PCR under the optimum conditions (annealing temperature of 55°C for the first round and 60.7°C for the second round, with 15 PCR cycles) could detect the true PIK3CA gene. The Dev sample was identified as wild type (WT), while the En sample contained one substitution mutation at position 545 of exon 9, indicating an amino acid change from E to K. In conclusion, the pseudogene also exists in breast cancer, and the application of the optimized Nested-PCR in this study can detect the true exon 9 of the PIK3CA gene.

Keywords: breast cancer, exon 9, hotspot mutation, PIK3CA, pseudogene

Procedia PDF Downloads 241
1978 Effective Planning of Public Transportation Systems: A Decision Support Application

Authors: Ferdi Sönmez, Nihal Yorulmaz

Abstract:

Decision making on the proper planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, adequately fair overall travel times should be provided; in this way, other benefits such as lower traffic congestion, improved road safety and lower noise and atmospheric pollution may also be gained. The congestion that comes with increasing demand for public transportation is becoming a part of our lives and making residents' lives difficult; hence, regulations should be made to reduce this congestion. To provide a constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. This study aims to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, one of the biggest and oldest cities in the world. The required information was gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using values as close to reality as possible; the cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines and 110 stops are used, and the user cost of each station and the operator cost for the lines are calculated. Components such as cost, security and noise pollution are considered significant factors affecting the solution of the set covering problem, which is used to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author. The model results are presented with the intent of providing decision support to specialists on locating stops effectively.
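The covering step mentioned above can be sketched with a simple greedy heuristic: pick the fewest candidate stops so that every demand point is covered. The study itself assigns costs through a bi-level optimization model; the snippet below only illustrates set covering on hypothetical coverage sets.

```python
# Greedy set-cover sketch: choose the fewest candidate stops covering all demand points.
# Coverage sets are hypothetical; a full DSS would weight stops by user/operator cost.
def greedy_set_cover(demand_points, coverage):
    """coverage[s] = set of demand points reachable from candidate stop s."""
    uncovered, chosen = set(demand_points), []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("some demand points cannot be covered")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

coverage = {   # hypothetical walking-distance coverage of six demand points
    "stop_A": {1, 2, 3}, "stop_B": {3, 4}, "stop_C": {4, 5, 6}, "stop_D": {2, 6},
}
print(greedy_set_cover(range(1, 7), coverage))   # -> ['stop_A', 'stop_C']
```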

Keywords: operator cost, bi-level optimization model, user cost, urban transportation

Procedia PDF Downloads 245
1977 A Universal Approach to Categorize Failures in Production

Authors: Konja Knüppel, Gerrit Meyer, Peter Nyhuis

Abstract:

The increasing interconnectedness and complexity of production processes raise the susceptibility of production systems to failure. Therefore, the ability to respond quickly to failures is increasingly becoming a competitive factor. The research project "Sustainable failure management in manufacturing SMEs" is developing a methodology to identify failures in the production and select preventive and reactive measures in order to correct failures and to establish sustainable failure management systems.

Keywords: failure categorization, failure management, logistic performance, production optimization

Procedia PDF Downloads 372
1976 Comprehensive Analysis and Optimization of Alkaline Water Electrolysis for Green Hydrogen Production: Experimental Validation, Simulation Study, and Cost Analysis

Authors: Umair Ahmed, Muhammad Bin Irfan

Abstract:

This study focuses on designing and optimization of an alkaline water electrolyser for the production of green hydrogen. The aim is to enhance the durability and efficiency of this technology while simultaneously reducing the cost associated with the production of green hydrogen. The experimental results obtained from the alkaline water electrolyser are compared with simulated results using Aspen Plus software, allowing a comprehensive analysis and evaluation. To achieve the aforementioned goals, several design and operational parameters are investigated. The electrode material, electrolyte concentration, and operating conditions are carefully selected to maximize the efficiency and durability of the electrolyser. Additionally, cost-effective materials and manufacturing techniques are explored to decrease the overall production cost of green hydrogen. The experimental setup includes a carefully designed alkaline water electrolyser, where various performance parameters (such as hydrogen production rate, current density, and voltage) are measured. These experimental results are then compared with simulated data obtained using Aspen Plus software. The simulation model is developed based on fundamental principles and validated against the experimental data. The comparison between experimental and simulated results provides valuable insight into the performance of an alkaline water electrolyser. It helps to identify the areas where improvements can be made, both in terms of design and operation, to enhance the durability and efficiency of the system. Furthermore, the simulation results allow cost analysis providing an estimate of the overall production cost of green hydrogen. This study aims to develop a comprehensive understanding of alkaline water electrolysis technology. The findings of this research can contribute to the development of more efficient and durable electrolyser technology while reducing the cost associated with this technology. Ultimately, these advancements can pave the way for a more sustainable and economically viable hydrogen economy.
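As a quick cross-check for measured hydrogen production rates, Faraday's law gives the theoretical rate from the stack current; the sketch below uses illustrative values for current, cell count and Faradaic efficiency, not the study's measurements.

```python
# Hydrogen production rate from Faraday's law (2 electrons per H2 molecule).
# Current, cell count and Faradaic efficiency are placeholders for illustration.
F = 96485.0           # C/mol, Faraday constant
M_H2 = 2.016e-3       # kg/mol, molar mass of H2

def h2_rate_kg_per_h(current_a, n_cells=1, faradaic_eff=0.95):
    mol_per_s = faradaic_eff * n_cells * current_a / (2.0 * F)
    return mol_per_s * M_H2 * 3600.0

print(f"{h2_rate_kg_per_h(current_a=100.0, n_cells=10):.4f} kg H2 per hour")
```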

Keywords: sustainable development, green energy, green hydrogen, electrolysis technology

Procedia PDF Downloads 87
1975 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province

Authors: Yujie Zhao, Jiantao Weng

Abstract:

In the past decade, with the rapid development of China's economy, the purchasing power and physical demands of residents have grown, resulting in the widespread emergence of public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation routes of these buildings, ignoring the impact of the environment on the subjective experience of building users. In Zhejiang province alone, infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complex buildings in operation, which affects the environmental comfort of the building lobby and internal public spaces. At present, to reduce these adverse effects, active equipment is usually added, such as air curtains to block air exchange or additional heating air conditioners. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of indoor heating equipment, which indirectly causes considerable economic losses over the whole winter heating season. It is therefore of considerable significance to explore entrance forms suitable for improving the environmental comfort of commercial buildings and saving energy. In this paper, a commercial complex in Hangzhou with an apparent cold air infiltration problem is selected as the research object for modelling. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter and its potential economic loss are estimated as the objective metric. By comparing the simulation results with those of other local commercial complex projects with different entrance forms, this study finally obtains the optimization direction for the entrance form of the commercial complex. The conclusions will guide the entrance design of the same type of commercial complex in this area.
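The heat penalty of infiltration follows from the sensible-heat balance Q = rho * Vdot * cp * dT; in the study, the infiltration air volume comes from the CFD simulation. The sketch below uses placeholder values only.

```python
# Sensible-heat estimate of the infiltration penalty. Infiltration flow, temperatures
# and operating hours are illustrative placeholders, not the simulated values.
RHO_AIR = 1.2      # kg/m3
CP_AIR = 1005.0    # J/(kg K)

def infiltration_heat_kw(vdot_m3_s, t_in_c, t_out_c):
    return RHO_AIR * vdot_m3_s * CP_AIR * (t_in_c - t_out_c) / 1000.0

q_kw = infiltration_heat_kw(vdot_m3_s=2.5, t_in_c=20.0, t_out_c=3.0)
season_kwh = q_kw * 10 * 120          # e.g. 10 operating hours/day over 120 heating days
print(f"{q_kw:.1f} kW, about {season_kwh:.0f} kWh per heating season")
```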

Keywords: air infiltration, commercial complex, heat consumption, CFD simulation

Procedia PDF Downloads 132
1974 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets, which are often modeled by networks. Statistical models such as the stochastic block model have proven able to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test that examines the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network, rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We exploit these structures by incorporating the renewal non-backtracking random walk (RNBRW) into the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We make the best use of asymptotic results on this distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining its true mean and a tight lower bound for the variance of the RNBRW edge weights.
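A sketch of the RNBRW edge-weighting idea: each walk avoids immediate backtracking and, when it closes a cycle, the cycle-completing edge is credited and the walk restarts; the accumulated weights then feed the retracing probability matrix. This is an illustration, not the authors' implementation.

```python
# RNBRW-style edge weighting: non-backtracking walks that stop and credit the
# cycle-completing edge whenever a previously visited node is reached again.
import random
from collections import defaultdict

def rnbrw_weights(adj, n_walks=10000, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    w = defaultdict(int)
    for _ in range(n_walks):
        prev, cur = None, rng.choice(nodes)
        visited = {cur}
        while True:
            choices = [v for v in adj[cur] if v != prev]     # no immediate backtracking
            if not choices:
                break                                        # dead end: restart the walk
            nxt = rng.choice(choices)
            if nxt in visited:                               # cycle completed
                w[frozenset((cur, nxt))] += 1                # credit the retracing edge
                break
            visited.add(nxt)
            prev, cur = cur, nxt
    return w

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
weights = rnbrw_weights(adj)
print(max(weights, key=weights.get))   # edges inside the two triangles accumulate most weight
```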

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 150
1973 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna

Abstract:

Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for the biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of Trichoderma harzianum conidiospores. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized using Artificial Neural Networks (ANNs). To gather data for this process, 30 experiments were carried out, taking into account the number of hours of culture (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75 and 80 percent), with the number of conidiospores per gram of dry mass as the response. The experimental results were used in an iterative algorithm that created 1,110 ANNs with different configurations, from one to three hidden layers and from 1 to 10 neurons per hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The ANN with the best performance was chosen to simulate the process and maximize conidiospore production. The best-performing ANN has 2 inputs, 1 output and three hidden layers with 3, 10 and 10 neurons, respectively. Its performance shows an R2 value of 0.9900, and the root mean squared error is 1.2020. This ANN predicted that a maximum of 644,175,467 conidiospores per gram of dry mass is obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum, because the R2 value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
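The modeling-and-maximization idea can be sketched compactly: the paper trains 1,110 Levenberg-Marquardt ANNs and keeps the best, whereas the snippet below stands in a single scikit-learn MLP fitted to placeholder data and then grid-searches the fitted surface for the optimum culture time and humidity.

```python
# Compact stand-in for the ANN modeling step; training data are synthetic placeholders,
# not the 30 experimental runs, and a single MLP replaces the 1,110-network search.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.array([[h, rh] for h in np.linspace(48, 136, 10) for rh in (70, 75, 80)])
y = 6e8 * np.exp(-((X[:, 0] - 117) / 40)**2 - ((X[:, 1] - 77) / 8)**2)   # synthetic response

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                                   random_state=0)).fit(X, y)

# Grid-search the fitted surface for the condition giving maximum conidiospore count
grid = np.array([[h, rh] for h in np.linspace(48, 136, 89)
                 for rh in np.linspace(70, 80, 21)])
best = grid[np.argmax(model.predict(grid))]
print("predicted optimum: %.0f h of culture, %.1f %% humidity" % tuple(best))
```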

Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network

Procedia PDF Downloads 157
1972 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs in the review facility reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences, so reviewer performance regarding the accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know which workload assessment methodologies will provide reliable and valid data for optimizing resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation. The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects, not false positives). Conversely, high temporal demand was negatively correlated with sensitivity and the percentage of detected existing defects. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates in accordance with the efficiency-thoroughness trade-off.
Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures to ensure that recommendations for work system optimization are evidence-based and reliable.

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 118
1971 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content measured by the X-Ray Fluorescence method. The considered method of analysis is a comparative technique based on X-Ray Fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov Chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to identify the model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states reduce to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied to construct an accurate computing procedure for uncertainty measurement.
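The MCMC part can be illustrated with a random-walk Metropolis sampler for the probability distribution of a measured content value, assuming a simple Gaussian likelihood around a linear calibration; the calibration constants and the reading below are illustrative, not the study's.

```python
# Random-walk Metropolis sketch: sample the PDF of a content value c given an
# assumed linear calibration reading = a*c + b with Gaussian noise. Values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
a, b, s_instr = 1.02, 0.05, 0.03          # assumed calibration and noise (wt%)
reading = 1.48                            # observed XRF response for, e.g., Mn (wt%)

def log_post(c):
    if not 0.0 < c < 5.0:                 # flat prior on a plausible content range
        return -np.inf
    return -0.5 * ((reading - (a * c + b)) / s_instr) ** 2

chain, c = [], 1.4
for _ in range(20000):
    prop = c + rng.normal(0, 0.02)
    if np.log(rng.random()) < log_post(prop) - log_post(c):
        c = prop
    chain.append(c)

post = np.array(chain[5000:])             # discard burn-in
print(f"Mn = {post.mean():.3f} wt% (std {post.std():.3f}), 95% interval:",
      np.percentile(post, [2.5, 97.5]).round(3))
```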

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 283
1970 Optimization of the Jatropha curcas Supply Chain as a Criteria for the Implementation of Future Collection Points in Rural Areas of Manabi-Ecuador

Authors: Boris G. German, Edward Jiménez, Sebastián Espinoza, Andrés G. Chico, Ricardo A. Narváez

Abstract:

The unique flora and fauna of the Galapagos Islands have underpinned tourism-driven growth in the islands. Nonetheless, such development is energy-intensive and requires thousands of gallons of diesel each year for thermoelectric electricity generation. The transport of fossil fuels from the continent has caused oil spills and damage to the fragile ecosystem of the islands. The Zero Fossil Fuels initiative for the Galapagos, proposed by the Ecuadorian government to reduce the use of fossil fuels in the islands, considers replacing the diesel used in thermoelectric generators with Jatropha curcas vegetable oil. However, the Jatropha oil supply cannot yet entirely cover the demand for electricity generation in the Galapagos. Within this context, the present work provides an optimization model that can be used as a selection criterion for approving new Jatropha curcas collection points in rural areas of Manabi, Ecuador. For this purpose, the existing Jatropha collection points in Manabi were grouped into three regions: north (7 collection points), center (4 collection points), and south (9 collection points). Field work was carried out in every region to characterize the collection points, establish the local Jatropha supply, and determine transportation costs. Data collection was complemented with GIS software, and an objective function was defined to determine the profit associated with Jatropha oil production. The market prices of both Jatropha oil and residual cake were considered for the total revenue, whereas the Jatropha purchase price and the transportation and oil-extraction costs were considered for the total cost. The tonnes of Jatropha fruit and seed transported from the collection points to the extraction plant were the decision variables. The maximum and minimum amounts of Jatropha collected from each region constrained the optimization problem. The supply chain was optimized using linear programming to maximize profit, and a sensitivity analysis was then performed to derive a profit-based criterion for accepting future collection points in Manabi. The maximum profit reached $4,616.93 per year, corresponding to a total collection of 62.3 tonnes of Jatropha per year. The northern region of Manabi had the largest collection share (69%), followed by the southern region (17%). The criterion for accepting new Jatropha collection points in the rural areas of Manabi can be defined by the current maximum profit of the zone and by the variation in profit when collection points are removed one at a time. The definition of new feasible collection points plays a key role in the supply chain associated with Jatropha oil production. Therefore, a mathematical model that assists decision makers in establishing new collection points while assuring profitability contributes to guaranteeing a continued Jatropha oil supply for the Galapagos and sustained economic growth in the rural areas of Ecuador.
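A minimal sketch of the kind of linear program described, in Python with SciPy: the decision variables are the tonnes collected from the north, center, and south regions; the per-tonne profits, the regional supply bounds, and the plant capacity below are illustrative assumptions, not the study's data.

```python
# Hedged sketch of the regional collection LP (all coefficients are assumed).
import numpy as np
from scipy.optimize import linprog

regions = ["north", "center", "south"]
profit_per_tonne = np.array([90.0, 60.0, 45.0])   # revenue minus purchase,
                                                  # transport and extraction cost
min_supply = np.array([10.0, 2.0, 3.0])           # minimum tonnes/year available
max_supply = np.array([45.0, 12.0, 20.0])         # maximum tonnes/year available
plant_capacity = 65.0                             # tonnes/year the plant can process

res = linprog(
    c=-profit_per_tonne,                          # linprog minimizes, so negate
    A_ub=np.ones((1, 3)), b_ub=[plant_capacity],  # total collection <= capacity
    bounds=list(zip(min_supply, max_supply)),
    method="highs",
)

for r, x in zip(regions, res.x):
    print(f"{r}: {x:.1f} t/year")
print(f"maximum profit: ${-res.fun:,.2f}/year")
```

The profit-based acceptance criterion described in the abstract can then be approximated by re-solving the program with one collection point's supply removed at a time and recording the change in the optimal profit.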

Keywords: collection points, Jatropha curcas, linear programming, supply chain

Procedia PDF Downloads 430
1969 Reinforcement Learning For Agile CNC Manufacturing: Optimizing Configurations And Sequencing

Authors: Huan Ting Liao

Abstract:

In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating decision-making processes for job assignments. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advancements in tackling dynamic manufacturing challenges.
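The following is a minimal, assumed sketch of tabular Q-learning applied to the dispatching flavour of the problem described above: the agent chooses which waiting job to load next onto whichever CNC machine frees up first, with a reward that penalizes makespan growth. The job data, reward shaping, and hyper-parameters are illustrative assumptions, not the authors' setup.

```python
# Minimal tabular Q-learning sketch for job dispatching on CNC machines
# (job data, reward shaping, and hyper-parameters are assumptions).
import random
from collections import defaultdict

PROC_TIMES = [4, 2, 7, 3, 5, 6]     # processing time of each job (assumed)
N_MACHINES = 2
ALPHA, GAMMA, EPS, EPISODES = 0.2, 0.95, 0.1, 5000

Q = defaultdict(float)              # Q[(state, action)], state = remaining jobs

def run_episode(learn=True):
    remaining = frozenset(range(len(PROC_TIMES)))
    free_at = [0.0] * N_MACHINES    # time at which each machine becomes free
    while remaining:
        state = remaining
        actions = list(remaining)
        if learn and random.random() < EPS:
            job = random.choice(actions)            # explore
        else:
            job = max(actions, key=lambda a: Q[(state, a)])  # exploit
        m = min(range(N_MACHINES), key=lambda i: free_at[i])  # earliest-free machine
        old_makespan = max(free_at)
        free_at[m] += PROC_TIMES[job]
        reward = -(max(free_at) - old_makespan)     # penalize makespan growth
        next_state = remaining - {job}
        if learn:
            best_next = max((Q[(next_state, a)] for a in next_state), default=0.0)
            Q[(state, job)] += ALPHA * (reward + GAMMA * best_next - Q[(state, job)])
        remaining = next_state
    return max(free_at)

for _ in range(EPISODES):
    run_episode()
print("makespan of the learned dispatching policy:", run_episode(learn=False))
```

A full JSSP formulation would also encode per-job operation sequences, robotic arm transfer times, and machine eligibility; the sketch only illustrates how the state, action, and reward of the Q-learning loop fit together.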

Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning

Procedia PDF Downloads 23
1968 Solution of Nonlinear Fractional Programming Problem with Bounded Parameters

Authors: Mrinal Jana, Geetanjali Panda

Abstract:

In this paper, a methodology is developed to solve a nonlinear fractional programming problem in which the coefficients of the objective function and the constraints are interval parameters. The model is transformed into a general optimization problem, and the relation between the original problem and the transformed problem is established. Finally, the proposed methodology is illustrated through a numerical example.
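The abstract does not spell out the authors' transformation, so as a hedged illustration the sketch below applies the classic Dinkelbach parametric scheme to a linear fractional program, with the interval coefficients replaced by their midpoints (both choices are assumptions, not the paper's method).

```python
# Dinkelbach-style iteration for max (c.x + c0)/(d.x + d0) s.t. Ax <= b, x >= 0,
# using midpoint values in place of interval coefficients (assumed example data).
import numpy as np
from scipy.optimize import linprog

c, c0 = np.array([3.0, 1.0]), 2.0       # numerator coefficients (midpoints)
d, d0 = np.array([1.0, 2.0]), 4.0       # denominator coefficients (midpoints)
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])

lam = 0.0
for _ in range(50):
    # Linear subproblem: max (c - lam*d).x + (c0 - lam*d0), solved as a min.
    res = linprog(-(c - lam * d), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * 2, method="highs")
    x = res.x
    new_lam = (c @ x + c0) / (d @ x + d0)
    if abs(new_lam - lam) < 1e-9:       # ratio has converged
        break
    lam = new_lam

print("optimal x:", x, "optimal ratio:", lam)
```

Handling the interval coefficients themselves (e.g., via the partial order relations mentioned in the keywords) is exactly what the paper's methodology addresses and is not reproduced here.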

Keywords: fractional programming, interval valued function, interval inequalities, partial order relation

Procedia PDF Downloads 519
1967 The Strategic Entering Time of a Commerce Platform

Authors: Chia-li Wang

Abstract:

The surge of service and commerce platforms, such as e-commerce and the Internet of Things, has rapidly changed our lives. How to avoid congestion and get the job done on a platform is now a problem that many people encounter every day, and it requires platform users to decide when to enter the platform. To that end, we investigate the strategic entering time in a simple platform containing random numbers of buyers and sellers of some item. Upon a trade, the buyer and the seller gain their respective profits, yet they pay the cost of waiting in the platform. To maximize their expected payoffs from trading, both buyers and sellers can choose their entering times. This creates an interesting and practical framework of a game played among buyers, among sellers, and between the two sides: a strategy employed by a player is a response not only to players of its own type but also to those of the other type, and a strategy profile is thus composed of the strategies of buyers and sellers. The players' best response, the Nash equilibrium (NE) strategy profile, is characterized by a pair of differential equations, which, in turn, are used to establish its existence and uniqueness. More importantly, its structure sheds valuable light on how the entering strategy of one side (buyers or sellers) is affected by the entering behavior of the other side. These results provide a basis for the study of dynamic pricing under stochastic demand-supply imbalances. Finally, the social welfare (the sum of the payoffs of the individual participants) obtained under the socially optimal strategy is compared with that under the NE strategy to quantify the efficiency loss relative to the socially optimal solution, which should help to manage the platform better.
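The equilibrium in the paper is characterized analytically by differential equations; as a rough, assumed illustration of the welfare comparison only, the Monte Carlo sketch below estimates the social welfare of a double-sided queue when buyers and sellers draw their entering times from a given distribution, pay a linear waiting cost, and are matched first-in-first-out. All parameters and the matching rule are assumptions.

```python
# Assumed simulation of a double-sided queue with chosen entering times;
# it only illustrates how entry timing trades profit against waiting cost.
import numpy as np

rng = np.random.default_rng(1)

def social_welfare(entry_scale, n_runs=2000, profit_buyer=10.0, profit_seller=8.0,
                   wait_cost=1.0, horizon=10.0):
    welfare = []
    for _ in range(n_runs):
        n_b, n_s = rng.poisson(5), rng.poisson(5)     # random numbers of traders
        t_b = np.sort(rng.uniform(0, entry_scale * horizon, n_b))
        t_s = np.sort(rng.uniform(0, entry_scale * horizon, n_s))
        matched = min(n_b, n_s)                       # FIFO matching of pairs
        trade_times = np.maximum(t_b[:matched], t_s[:matched])
        w = (matched * (profit_buyer + profit_seller)
             - wait_cost * np.sum(trade_times - t_b[:matched])
             - wait_cost * np.sum(trade_times - t_s[:matched]))
        welfare.append(w)
    return np.mean(welfare)

# Compare welfare when everyone rushes in early versus spreading entries out.
print("early entry:", social_welfare(entry_scale=0.2))
print("spread out :", social_welfare(entry_scale=1.0))
```

Comparing such simulated welfare under different entry-time distributions mirrors, in spirit, the paper's comparison between the NE and the socially optimal strategy profiles.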

Keywords: double-sided queue, non-cooperative game, nash equilibrium, price of anarchy

Procedia PDF Downloads 85