Search results for: pareto optimal

2540 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals

Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam

Abstract:

The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit to the stress is known or can be assumed. In some cases, such life–stress relationships are not known and cannot be assumed, i.e. ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT), in which tested units are subjected to both normal and accelerated conditions, is more suitable. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test are assumed to follow an exponential life distribution, and the removals from the test are assumed to follow a binomial distribution. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.
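
As a minimal sketch of the censoring mechanism described above, the Python snippet below simulates a progressively censored exponential sample with binomial random removals and computes the corresponding maximum likelihood estimate of the mean lifetime. The full step-stress model with a tampering coefficient is not reproduced here, and the sample sizes and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def progressively_censored_sample(n, m, theta, p):
        """Exponential lifetimes under progressive Type-II censoring with
        binomial random removals: after the i-th observed failure, R_i of the
        surviving units are withdrawn, R_i ~ Binomial(n - m - removed so far, p)."""
        x, removals = [], []
        removed, at_risk, t = 0, n, 0.0
        for i in range(m):
            # memoryless property: gap to the next failure ~ Exp(mean = theta / at_risk)
            t += rng.exponential(theta / at_risk)
            x.append(t)
            at_risk -= 1
            r = n - m - removed if i == m - 1 else rng.binomial(n - m - removed, p)
            removals.append(r)
            removed += r
            at_risk -= r
        return np.array(x), np.array(removals)

    # MLE of the exponential mean under this censoring scheme:
    # theta_hat = sum((1 + R_i) * x_i) / m
    x, R = progressively_censored_sample(n=60, m=25, theta=100.0, p=0.3)
    theta_hat = np.sum((1 + R) * x) / len(x)
    print(f"true mean lifetime 100.0, maximum likelihood estimate {theta_hat:.1f}")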

Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study

Procedia PDF Downloads 320
2539 Disposition Kinetics of Ciprofloxacin after Intramuscular Administration in Lohi Sheep

Authors: Zahid Iqbal, Ijaz Javed, Riaz Hussain, Ibadullah Jan, Amir Ali Khan

Abstract:

This study was conducted to investigate the disposition kinetics of ciprofloxacin and to calculate its optimal dosage in Pakistani sheep of the Lohi breed. An injectable preparation of ciprofloxacin was given intramuscularly to eight sheep at a dose of 5 mg/kg. Before drug administration, a blood sample was drawn from each animal; after administration, blood samples were drawn at predetermined time points. Drug concentration in the blood samples was assessed by high-performance liquid chromatography (HPLC). The data were best described by a two-compartment open model, and different pharmacokinetic (PK) parameters were calculated. A Cmax of 1.97 ± 0.15 µg/ml was reached at a Tmax of 0.88 ± 0.09 hours. The absorption half-life (t1/2 abs) was 0.63 ± 0.16 hours, while t1/2 α (distribution half-life) and t1/2 ß (elimination half-life) were found to be 0.46 ± 0.05 and 2.93 ± 0.45 hours, respectively. Vd (apparent volume of distribution) was calculated as 2.89 ± 0.30 L/kg, AUC (area under the curve) was 7.19 ± 0.38 µg.hr/mL, and CL (total body clearance) was 0.75 ± 0.04 L/hr/kg. Using these parameters, an optimal intramuscular dosage of ciprofloxacin in adult Lohi sheep was calculated as 21.43 mg/kg, advised to be repeated after 24 hours. We conclude that the calculated dose is much higher than the dose advised by the foreign manufacturer and that, to avoid antimicrobial resistance, this locally investigated dosage regimen should be strictly followed in local sheep.
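
A brief non-compartmental sketch of how such parameters can be derived from a concentration-time profile is given below. The concentrations, sampling times and terminal-phase point selection are invented for illustration, and the study itself fitted a two-compartment model rather than this simplified approach, so the printed values will not match the reported ones.

    import numpy as np

    # Invented plasma concentrations (ug/mL) after a 5 mg/kg intramuscular dose,
    # sampled at the times below (hours); these are not the study data.
    t = np.array([0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
    c = np.array([1.10, 1.60, 1.90, 1.95, 1.65, 1.30, 0.80, 0.48, 0.17, 0.06])
    dose = 5.0 * 1000.0                               # 5 mg/kg expressed in ug/kg

    auc = np.trapz(c, t)                              # ug.h/mL, trapezoidal rule
    slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)  # terminal log-linear phase
    beta = -slope                                     # elimination rate constant (1/h)
    t_half_beta = np.log(2) / beta                    # elimination half-life (h)
    cl = dose / auc / 1000.0                          # clearance, L/hr/kg
    vd = cl / beta                                    # volume of distribution, L/kg

    print(f"AUC = {auc:.2f} ug.h/mL, t1/2(beta) = {t_half_beta:.2f} h, "
          f"CL = {cl:.2f} L/hr/kg, Vd = {vd:.2f} L/kg")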

Keywords: pharmacokinetics, dosage regimen, ciprofloxacin, HPLC, sheep

Procedia PDF Downloads 539
2538 Numerical Simulations on Feasibility of Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization

Authors: Taiki Baba, Tomoaki Hashimoto

Abstract:

The random dither quantization method enables much better performance than simple uniform quantization in the design of quantized control systems. Motivated by this fact, a stochastic model predictive control method, in which a performance index is minimized subject to probabilistic constraints imposed on the state variables, has been proposed for linear feedback control systems with random dither quantization. In other words, a method for solving optimal control problems subject to probabilistic state constraints for linear discrete-time control systems with random dither quantization has already been established. To the best of our knowledge, however, the feasibility of this kind of optimal control problem has not yet been studied. Our objective in this paper is to investigate the feasibility of stochastic model predictive control problems for linear discrete-time control systems with random dither quantization. To this end, we provide the results of numerical simulations that verify the feasibility of such problems.
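
To make the quantization mechanism concrete, the toy simulation below closes a scalar feedback loop through a uniform quantizer with and without a random dither signal. The plant, gain and step size are made-up values, and no claim is made about the model predictive controllers studied in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Scalar plant x_{k+1} = a*x_k + b*u_k, stabilised by a quantized state feedback.
    a, b, K = 1.2, 1.0, -0.9          # illustrative plant and gain (a + b*K is stable)
    delta = 0.5                       # quantization step

    def quantize(u, dither):
        """Uniform mid-tread quantizer, optionally with a random dither signal."""
        w = rng.uniform(-delta / 2, delta / 2) if dither else 0.0
        return delta * np.round((u + w) / delta)

    def average_squared_state(dither, steps=500, x0=3.0):
        x, acc = x0, 0.0
        for _ in range(steps):
            u = quantize(K * x, dither)
            acc += x ** 2
            x = a * x + b * u
        return acc / steps

    print("uniform quantizer :", round(average_squared_state(dither=False), 3))
    print("dithered quantizer:", round(average_squared_state(dither=True), 3))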

Keywords: model predictive control, stochastic systems, probabilistic constraints, random dither quantization

Procedia PDF Downloads 282
2537 Application of GA Optimization in Analysis of Variable Stiffness Composites

Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani

Abstract:

Variable angle tow (VAT) describes fibres that are curvilinearly steered within a composite lamina, which significantly enlarges the stiffness-tailoring freedom of the laminate. Composite structures with curvilinear fibres have been shown to improve buckling load-carrying capability compared with straight-fibre laminates. However, the optimal design and analysis of VAT face high computational effort due to the increasing number of variables. In this article, an efficient optimization approach is used in combination with the 1D Carrera Unified Formulation (CUF) to investigate the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on LE-based CUF models, which use Lagrange expansions to provide a layerwise description of the problem unknowns. The first critical buckling load is considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle, compared with the results obtained through a Genetic Algorithm (GA) optimization framework; an Artificial Neural Network (ANN) is then applied to investigate the accuracy of the optimized model. As a result, the numerical CUF approach combined with the optimization demonstrates the robustness and computational efficiency of the proposed methodology.
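
A compact genetic algorithm of the kind used for such angle searches is sketched below. The fitness function is a made-up surrogate standing in for the CUF buckling solver, and the population size, mutation width and angle bounds are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    def buckling_surrogate(angles_deg):
        """Stand-in for the CUF buckling solver: a smooth, made-up function of the
        steering angles (degrees). In practice this call is the expensive analysis."""
        a = np.radians(angles_deg)
        return np.sum(np.sin(2 * a) ** 2) - 0.1 * np.sum(np.cos(4 * a))

    def genetic_algorithm(n_vars=4, pop_size=30, generations=60,
                          bounds=(-90.0, 90.0), mut_sigma=5.0):
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, n_vars))
        for _ in range(generations):
            fitness = np.array([buckling_surrogate(ind) for ind in pop])
            # binary tournament selection
            idx = rng.integers(pop_size, size=(pop_size, 2))
            winners = np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
            parents, partners = pop[winners], pop[winners][rng.permutation(pop_size)]
            # one-point crossover followed by Gaussian mutation, clipped to the bounds
            cut = rng.integers(1, n_vars, size=(pop_size, 1))
            mask = np.arange(n_vars) < cut
            children = np.where(mask, parents, partners)
            pop = np.clip(children + rng.normal(0.0, mut_sigma, children.shape), lo, hi)
        fitness = np.array([buckling_surrogate(ind) for ind in pop])
        return pop[np.argmax(fitness)], fitness.max()

    best_angles, best_fitness = genetic_algorithm()
    print("best steering angles (deg):", np.round(best_angles, 1))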

Keywords: beam structures, layerwise, optimization, variable stiffness

Procedia PDF Downloads 142
2536 Flood Planning Based on Risk Optimization: A Case Study in Phan-Calo River Basin in Vinh Phuc Province, Vietnam

Authors: Nguyen Quang Kim, Nguyen Thu Hien, Nguyen Thien Dung

Abstract:

Flood disasters are increasing worldwide in both frequency and magnitude. Every year in Vietnam, floods cause great damage to people and property and degrade the environment. The flood risk management policy in Vietnam is currently being updated, and the planning of flood mitigation strategies is being reviewed to decide how to reach sustainable flood risk reduction. This paper discusses a basic approach in which flood protection measures are chosen by minimizing the present value of expected monetary expenses, the total residual risk, and the costs of flood control measures. This approach is proposed and demonstrated in a case study of flood risk management in Vinh Phuc province of Vietnam. The research also proposes a framework to find the optimal protection level and the optimal flood control measures. It provides an explicit economic basis for flood risk management plans and for the interactive effects of flood damage reduction options. The results of the case study are presented and discussed; they provide a course of action that helps decision makers choose among flood risk reduction investment options.
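
The risk-optimization logic can be illustrated with a small calculation that compares hypothetical protection alternatives by their capital cost plus the discounted expected residual damage. Every figure, option name, discount rate and horizon below is invented and serves only to show the shape of the trade-off.

    # Hypothetical protection alternatives: capital cost and expected annual residual
    # damage (both in million USD). None of these figures come from the case study.
    options = {
        "no action":      {"capital": 0.0,  "annual_damage": 12.0},
        "10-year levee":  {"capital": 15.0, "annual_damage": 5.0},
        "50-year levee":  {"capital": 40.0, "annual_damage": 1.5},
        "100-year levee": {"capital": 70.0, "annual_damage": 0.7},
    }
    rate, horizon = 0.06, 50                        # discount rate, planning horizon (years)
    annuity = (1 - (1 + rate) ** -horizon) / rate   # present-value factor of a constant annual cost

    def total_present_cost(opt):
        """Capital cost plus the present value of the expected residual damages."""
        return opt["capital"] + annuity * opt["annual_damage"]

    for name, opt in options.items():
        print(f"{name:15s} total present cost = {total_present_cost(opt):6.1f} M USD")
    best = min(options, key=lambda name: total_present_cost(options[name]))
    print("risk-optimal protection level:", best)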

Keywords: drainage plan, flood planning, flood risk, residual risk, risk optimization

Procedia PDF Downloads 242
2535 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms

Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak

Abstract:

A logistics network is expected to have its opened facilities work continuously over a long time horizon without any failure; but in real-world problems, facilities may face disruptions. This paper studies a reliable joint inventory-location problem that optimizes facility location costs, customer assignment, and inventory management decisions when facilities face failure risks and may stop working. In our model, we assume that when a facility is out of work, its customers may be reassigned to other operational facilities; otherwise, they must endure high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated based on the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) to indicate that a customer is not assigned to any facility. The problem involves a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, while the second minimizes the maximum expected customer costs under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms are applied to find the Pareto-archive solutions, and Response Surface Methodology (RSM) is applied to tune the NSGA-II parameters. We compare the performance of the two algorithms using three metrics, and the results show that NSGA-II is more suitable for our model.
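
The Pareto-archive idea at the heart of both metaheuristics can be shown with a few lines that filter the non-dominated points from a set of candidate objective vectors. The candidate values below are random placeholders rather than solutions of the location-inventory model.

    import numpy as np

    rng = np.random.default_rng(3)

    def pareto_archive(costs):
        """Boolean mask of non-dominated points; both objectives are minimized.
        costs has shape (n_points, n_objectives)."""
        keep = np.ones(len(costs), dtype=bool)
        for i in range(len(costs)):
            dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
            if dominated.any():
                keep[i] = False
        return keep

    # Placeholder candidate solutions: objective 1 = construction + holding cost,
    # objective 2 = maximum expected customer cost under failure scenarios.
    candidates = rng.uniform(0, 100, size=(200, 2))
    archive = candidates[pareto_archive(candidates)]
    print(f"{len(archive)} Pareto-archive solutions out of {len(candidates)} candidates")
    print(np.round(archive[np.argsort(archive[:, 0])][:5], 1))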

Keywords: joint inventory-location problem, facility location, NSGAII, MOSS

Procedia PDF Downloads 525
2534 Optimal Opportunistic Maintenance Policy for a Two-Unit System

Authors: Nooshin Salari, Viliam Makis, Jane Doe

Abstract:

This paper presents a maintenance policy for a system consisting of two units. Unit 1 deteriorates gradually and is subject to soft failure, while unit 2 has a general lifetime distribution and is subject to hard failure. The condition of unit 1 is monitored periodically, and the unit is considered failed when its deterioration level reaches or exceeds a critical level N. When unit 2 fails, the system is considered failed, and unit 2 is correctively replaced by the next inspection epoch. Unit 1 or unit 2 is preventively replaced when the deterioration level of unit 1 or the age of unit 2 exceeds the corresponding preventive maintenance (PM) level. At the time of a corrective or preventive replacement of unit 2, there is an opportunity to replace unit 1 if its deterioration level has reached the opportunistic maintenance (OM) level. If unit 2 fails within an inspection interval, the system stops operating even though unit 1 has not failed. A mathematical model is derived to find the preventive and opportunistic replacement levels for unit 1 and the preventive replacement age for unit 2 that minimize the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is provided to illustrate the performance of the proposed model, and a comparison with an optimal policy without an opportunistic maintenance level for unit 1 is carried out.
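
The long-run average-cost criterion can be illustrated, in a much reduced single-unit setting, by a renewal-reward simulation that scores a control-limit policy. The degradation increments, cost figures and thresholds below are invented, and the sketch does not reproduce the two-unit SMDP formulation of the paper.

    import numpy as np

    rng = np.random.default_rng(5)

    # Simplified single-unit stand-in: degradation grows by Gamma increments between
    # periodic inspections; preventive replacement at level M, corrective at N > M.
    shape, scale = 1.5, 1.0                  # increment distribution (illustrative)
    N, c_insp, c_pm, c_cm = 10.0, 1.0, 20.0, 80.0

    def long_run_average_cost(M, cycles=4000):
        """Renewal-reward estimate of the long-run expected cost per inspection period."""
        total_cost = total_periods = 0.0
        for _ in range(cycles):
            level = 0.0
            while True:
                level += rng.gamma(shape, scale)
                total_periods += 1
                total_cost += c_insp
                if level >= N:               # failure detected: corrective replacement
                    total_cost += c_cm
                    break
                if level >= M:               # control limit reached: preventive replacement
                    total_cost += c_pm
                    break
        return total_cost / total_periods

    for M in (4.0, 6.0, 8.0, 9.5):
        print(f"PM level {M:>4}: average cost per period = {long_run_average_cost(M):.2f}")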

Keywords: condition-based maintenance, opportunistic maintenance, preventive maintenance, two-unit system

Procedia PDF Downloads 200
2533 Testing a Flexible Manufacturing System Facility Production Capacity through Discrete Event Simulation: Automotive Case Study

Authors: Justyna Rybicka, Ashutosh Tiwari, Shane Enticott

Abstract:

In the age of automation and computation aiding manufacturing, it is clear that manufacturing systems have become more complex than ever before. Although technological advances provide the capability to gain more value with fewer resources, full utilisation of the manufacturing capabilities available to organisations is sometimes difficult to achieve. Flexible manufacturing systems (FMS) provide a unique capability to manufacturing organisations that need product range diversification, delivering line efficiency through production flexibility. This is very valuable in trend-driven production set-ups or niche-volume production. Although an FMS provides flexible and efficient facilities, its optimal set-up is key to achieving production performance. As many variables are interlinked due to the flexibility the FMS provides, analytical calculations are not always sufficient to predict its performance; simulation modelling, however, is capable of capturing the complexity and constraints associated with an FMS. This paper demonstrates how discrete event simulation (DES) can address this complexity and optimise production line performance, using a case study of an automotive FMS. The DES model demonstrates different configuration options depending on the prioritised objective: utilisation or throughput. Additionally, the paper provides insight into the impact of system set-up constraints on FMS performance and demonstrates the exploration of the optimal production set-up.
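
A minimal discrete-event model of a multi-station line in the same spirit is sketched below, assuming the open-source SimPy package. The station counts, cycle times and arrival rate are invented, and the model is far simpler than the automotive FMS studied in the paper.

    import random
    import simpy

    random.seed(42)
    SHIFT_MINUTES = 8 * 60.0

    def part(env, stations, mean_cycle_times, counter):
        """One part visiting every station of the line in sequence."""
        for station, mean_ct in zip(stations, mean_cycle_times):
            with station.request() as req:                  # queue for a machine
                yield req
                yield env.timeout(random.expovariate(1.0 / mean_ct))
        counter["done"] += 1

    def release_parts(env, stations, mean_cycle_times, counter, mean_interarrival=4.0):
        while True:
            yield env.timeout(random.expovariate(1.0 / mean_interarrival))
            env.process(part(env, stations, mean_cycle_times, counter))

    env = simpy.Environment()
    machines_per_station = (1, 2, 1)                        # illustrative configuration
    mean_cycle_times = (3.0, 5.0, 4.0)                      # minutes per station
    stations = [simpy.Resource(env, capacity=c) for c in machines_per_station]
    counter = {"done": 0}
    env.process(release_parts(env, stations, mean_cycle_times, counter))
    env.run(until=SHIFT_MINUTES)
    print(f"parts completed in one shift: {counter['done']}")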

Keywords: discrete event simulation, flexible manufacturing system, capacity performance, automotive

Procedia PDF Downloads 327
2532 Optimal Beam for Accelerator Driven Systems

Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov

Abstract:

The concept of the energy amplifier, or accelerator driven system (ADS), involves coupling a particle accelerator with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and consequently safe exploitation. The harder neutron spectrum also ensures better incineration of the actinides. The widely held opinion is that the optimal beam for ADS is protons with an energy around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and for ion beams from deuterons to neon with energies between 0.25 and 2 AGeV. The target is an assembly of metallic U-Pu-Zr fuel rods in a bath of lead-bismuth eutectic coolant; the rod length is 150 cm. A beryllium converter with a length of 110 cm is used in order to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25‧10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam, values planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent to accelerate the beam. The energy released is obtained through simulation with the code Geant4, and the energy spent is calculated by scaling from the accelerator efficiency data for the reference particle (proton). The analysis concerns the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV; at this energy, G reaches a plateau around a value of 8 and a net power production of 120 MW (megawatt). Starting with alpha particles, ion beams have a higher G than 1.5 GeV protons. A beam of 0.25 AGeV (gigaelectronvolt per nucleon) ⁷Li realizes the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than for protons, representing the best solution for ADS. Beams of ¹⁶O or ²⁰Ne with an energy of 0.75 AGeV, accelerated in an accelerator of the same length as for 1.5 GeV protons, produce approximately 900 MW of net power with a gain of 23-25. The study of the evolution of the isotope composition during irradiation shows that the increase in power production shortens the period between refuelings: for a net power production of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
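
The energy-gain bookkeeping behind these figures reduces to a few lines of arithmetic. The sketch below illustrates the proton case, treating the gain G as a given input (in the study it comes from the Geant4 simulation) and using only the beam intensity and accelerator efficiency quoted above.

    EV_TO_J = 1.602e-19

    def ads_power_balance(beam_energy_gev, intensity_pps, efficiency, gain):
        """Beam power, accelerator power drawn, thermal power released and net power (W).
        The gain G is taken as an input here; in the study it comes from Geant4."""
        beam_power = beam_energy_gev * 1e9 * EV_TO_J * intensity_pps
        power_spent = beam_power / efficiency            # wall-plug power of the accelerator
        power_released = gain * power_spent              # thermal power released in the target
        return beam_power, power_spent, power_released, power_released - power_spent

    beam, spent, released, net = ads_power_balance(beam_energy_gev=1.5,
                                                   intensity_pps=1.25e16,
                                                   efficiency=0.18,
                                                   gain=8.0)
    print(f"beam power {beam / 1e6:.1f} MW, power spent {spent / 1e6:.1f} MW, "
          f"released {released / 1e6:.0f} MW, net {net / 1e6:.0f} MW")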

Keywords: accelerator driven system, ion beam, electrical power, energy gain

Procedia PDF Downloads 140
2531 Submicron Laser-Induced Dot, Ripple and Wrinkle Structures and Their Applications

Authors: P. Slepicka, N. Slepickova Kasalkova, I. Michaljanicova, O. Nedela, Z. Kolska, V. Svorcik

Abstract:

Polymers exposed to laser or plasma treatment, or modified with wet methods that enable the introduction of nanoparticles or biologically active species such as amino acids, may find many applications as biocompatible or anti-bacterial materials; conversely, they can be applied to decrease the number of cells on the treated surface, which opens applications in single-cell units. For the experiments, two types of materials were chosen: polyethersulphone (PES) as a representative of non-biodegradable polymers, and polyhydroxybutyrate (PHB) as a biodegradable material. Exposure of a solid substrate to a laser well below the ablation threshold can lead to the formation of various surface structures. The ripples have a period roughly comparable to the wavelength of the incident laser radiation, and their dimensions depend on many factors, such as the chemical composition of the polymer substrate, the laser wavelength, and the angle of incidence. Biopolymers, on the other hand, may significantly change their surface roughness and thus influence cell compatibility. The focus was on the surface treatment of PES and PHB by a pulsed KrF excimer laser with a wavelength of 248 nm. The changes in physicochemical properties, surface morphology, surface chemistry, and ablation of the exposed polymers were studied for both PES and PHB. Several analytical methods, including atomic force microscopy, gravimetry, scanning electron microscopy and other techniques, were used for the analysis of the treated surface. It was found that the combination of certain input parameters leads not only to the formation of an optimal narrow pattern but also to a combination of ripple and wrinkle-like structures, which could be an optimal candidate for cell attachment. The interactions of different types of cells with the laser-exposed surface were studied, and laser treatment was found to be a major factor in the wettability/contact angle change. The combination of optimal laser energy and pulse number was used to construct a surface with an anti-cellular response. Thanks to this simple laser treatment, we were able to prepare a biopolymer surface with higher roughness and thus significantly influence the growth area of different types of cells (U-2 OS cells).

Keywords: cell response, excimer laser, polymer treatment, periodic pattern, surface morphology

Procedia PDF Downloads 236
2530 Optrix: Energy Aware Cross Layer Routing Using Convex Optimization in Wireless Sensor Networks

Authors: Ali Shareef, Aliha Shareef, Yifeng Zhu

Abstract:

Energy minimization is of great importance in wireless sensor networks (WSNs) for extending battery lifetime. One of the key activities of nodes in a WSN is communication and the routing of their data to a centralized base station or sink. Routing along the shortest path to the sink is not the best solution, since it causes nodes along this path to fail prematurely. We propose a cross-layer energy-efficient routing protocol, Optrix, that utilizes a convex formulation to maximize the lifetime of the network as a whole. We further propose Optrix-BW, a novel convex formulation with a bandwidth constraint that allows channel conditions to be accounted for in routing; by considering this key channel parameter, we demonstrate that Optrix-BW is capable of congestion control. Optrix is implemented in TinyOS, and we demonstrate that a relatively large topology of 40 nodes can converge to within 91% of the optimal routing solution. We describe the pitfalls and issues of applying a continuous optimization technique such as convex optimization to the discrete, packet-based communication systems found in WSNs, and we propose a routing controller mechanism that handles this transformation. We compare Optrix against the Collection Tree Protocol (CTP) and find that Optrix performs better than CTP in terms of convergence to an optimal routing solution, load balancing, and network lifetime maximization.
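
The flavour of such lifetime-maximization formulations can be conveyed by a tiny linear program (a special case of a convex program) solved with scipy. The four-node topology, data rates, battery capacities and energy costs below are invented, and the model omits the bandwidth constraint and protocol machinery of Optrix.

    import numpy as np
    from scipy.optimize import linprog

    # Tiny WSN: sensor nodes 0-2 and a sink (node 3); every directed link is a candidate.
    links = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (0, 3), (1, 3), (2, 3)]
    sensors = [0, 1, 2]
    rate = {0: 1.0, 1: 1.0, 2: 1.0}            # data generated per unit time (illustrative)
    battery = {0: 100.0, 1: 60.0, 2: 100.0}    # initial energy budgets (illustrative)
    e_tx, e_rx = 1.0, 0.5                      # energy per unit of data sent / received

    n = len(links)                             # variables: per-link total traffic, plus lifetime T
    c = np.zeros(n + 1); c[-1] = -1.0          # linprog minimizes, so minimize -T

    # Flow conservation per sensor: sum(out) - sum(in) = rate_i * T
    A_eq = np.zeros((len(sensors), n + 1)); b_eq = np.zeros(len(sensors))
    # Energy budget per sensor: e_tx*sum(out) + e_rx*sum(in) <= battery_i
    A_ub = np.zeros((len(sensors), n + 1)); b_ub = np.zeros(len(sensors))
    for row, i in enumerate(sensors):
        for l, (u, v) in enumerate(links):
            A_eq[row, l] = (u == i) - (v == i)
            A_ub[row, l] = e_tx * (u == i) + e_rx * (v == i)
        A_eq[row, -1] = -rate[i]
        b_ub[row] = battery[i]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    print("maximum network lifetime T =", round(res.x[-1], 2))
    for (u, v), f in zip(links, res.x[:n]):
        if f > 1e-6:
            print(f"  total traffic {u} -> {v}: {f:.1f}")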

Keywords: wireless sensor network, energy efficient routing

Procedia PDF Downloads 391
2529 Development of a Solar Energy Based Prototype, CyanoClean, for Arsenic Removal from Water with the Use of a Cyanobacterial Consortium in Field Conditions of India

Authors: Anurakti Shukla, Sudhakar Srivastava

Abstract:

Cyanobacteria are known for rapid growth rates, high biomass, and the ability to accumulate potentially toxic elements and contaminants. The present work aimed to develop a low-cost, feasible prototype, CyanoClean, for growing a cyanobacterial consortium for the removal of arsenic (As) from water. A cyanobacterial consortium consisting of Oscillatoria, Phormidium and Gloeotrichia was used, and the conditions for its optimal growth were standardized: a pH of 7.6, an initial cyanobacterial biomass of 10 g/L, and arsenite [As(III)] and arsenate [As(V)] concentrations of 400 μM and 600 μM, respectively, were found to be suitable. The CyanoClean prototype was designed with acrylic sheet and has arrangements for optimal cyanobacterial growth in natural sunlight as well as in artificial light. Concentration- and duration-dependent As removal experiments demonstrated removal of 39-69% and 9-33% of As from As(III)- and As(V)-contaminated water, respectively. In the field testing of CyanoClean, naturally As-contaminated groundwater was used and the As reduction was monitored at a maintained flow rate of 3 L/h. In this field experiment, the As concentration in groundwater was reduced from 102.43 μg L⁻¹ to <10 μg L⁻¹ after 6 h in natural sunlight; in shaded conditions under artificial light, the same result was achieved after 9 h. The CyanoClean prototype is of simple design, can easily be scaled up for application on small- to medium-sized plots of land, and should be affordable even for low- to middle-income farmers.

Keywords: cyanoclean, gloeotrichia, oscillatoria, phormidium, phycoremediation

Procedia PDF Downloads 143
2528 Multi-Scale Control Model for Network Group Behavior

Authors: Fuyuan Ma, Ying Wang, Xin Wang

Abstract:

Social networks have become breeding grounds for the rapid spread of rumors and malicious information, posing threats to societal stability and causing significant public harm. Existing research focuses on simulating the spread of information and its impact on users through propagation dynamics and applies methods such as greedy approximation strategies to approximate the optimal control solution at the global scale. However, the greedy strategy at the global scale may fall into locally optimal solutions, and the approximate simulation of information spread may accumulate more errors. Therefore, we propose a multi-scale control model for network group behavior, introducing individual and group scales on top of the greedy strategy’s global scale. At the individual scale, we calculate the propagation influence of nodes based on their structural attributes to alleviate the issue of local optimality. At the group scale, we conduct precise propagation simulations to avoid introducing cumulative errors from approximate calculations without increasing computational costs. Experimental results on three real-world datasets demonstrate the effectiveness of our proposed multi-scale model in controlling network group behavior.

Keywords: influence blocking maximization, competitive linear threshold model, social networks, network group behavior

Procedia PDF Downloads 21
2527 Iterative Dynamic Programming for 4D Flight Trajectory Optimization

Authors: Kawser Ahmed, K. Bousson, Milca F. Coelho

Abstract:

4D flight trajectory optimization is one of the key ingredients for improving flight efficiency and enhancing air traffic capacity in current air traffic management (ATM). The present paper explores iterative dynamic programming (IDP) as a potential numerical optimization method for 4D flight trajectory optimization. IDP is an iterative version of the dynamic programming (DP) method. Due to its numerical framework, DP is well suited to nonlinear discrete dynamic systems, and the 4D waypoint representation of a flight trajectory is similar to discretization on a grid, so DP is a natural method for the 4D flight trajectory optimization problem. However, the computational time and space complexity demanded by DP are enormous due to the immense number of grid points required to find the optimum, which prevents the use of DP in many practical high-dimension problems. IDP, on the other hand, has shown potential to deal successfully with high-dimension optimal control problems even with a small number of grid points at each stage, which reduces the computational effort compared with the traditional DP approach. Although IDP has been applied successfully to chemical engineering problems, it has yet to be validated on 4D flight trajectory optimization problems. In this paper, IDP is successfully used to generate a minimum-length 4D optimal trajectory that avoids obstacles in its path, such as a no-fly zone or residential areas when flying at low altitude to reduce noise pollution.
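
The iteration-and-contraction idea behind IDP can be sketched on a scalar discrete-time control problem, as below. This stripped-down variant keeps a single trajectory instead of the grid of state points used by the full algorithm, and the dynamics, cost and tuning constants are invented for illustration.

    import numpy as np

    # Stripped-down IDP for a scalar problem: x_{k+1} = x_k + u_k, cost = sum(x^2 + u^2)
    # plus a terminal penalty. The full Luus IDP keeps a grid of state points per stage;
    # here a single nominal trajectory with a contracting control-search region is used.

    def dynamics(x, u):
        return x + u

    def stage_cost(x, u):
        return x ** 2 + u ** 2

    def cost_to_go(x, controls, k):
        """Cost accumulated from stage k onwards when applying controls[k:]."""
        total = 0.0
        for u in controls[k:]:
            total += stage_cost(x, u)
            x = dynamics(x, u)
        return total + x ** 2                    # terminal penalty

    def idp(x0=5.0, horizon=10, iterations=30, candidates=9, region=2.0, contraction=0.85):
        u = np.zeros(horizon)                    # nominal control sequence
        for _ in range(iterations):
            xs = [x0]                            # states along the nominal trajectory
            for k in range(horizon):
                xs.append(dynamics(xs[-1], u[k]))
            for k in range(horizon - 1, -1, -1): # backward sweep over the stages
                trials = u[k] + np.linspace(-region, region, candidates)
                u[k] = min(trials, key=lambda uk: stage_cost(xs[k], uk)
                           + cost_to_go(dynamics(xs[k], uk), u, k + 1))
            region *= contraction                # contract the search region each iteration
        return u, cost_to_go(x0, u, 0)

    controls, total_cost = idp()
    print("controls:", np.round(controls, 2))
    print("total cost:", round(total_cost, 2))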

Keywords: 4D waypoint navigation, iterative dynamic programming, obstacle avoidance, trajectory optimization

Procedia PDF Downloads 162
2526 Sustained-Release Persulfate Tablets for Groundwater Remediation

Authors: Yu-Chen Chang, Yen-Ping Peng, Wei-Yu Chen, Ku-Fan Chen

Abstract:

Contamination of soil and groundwater has become a serious and widespread environmental problem. In this study, sustained-release persulfate tablets were developed using persulfate powder and a modified cellulose binder for the remediation of organic-contaminated groundwater; conventional cement-based persulfate-releasing materials were also synthesized for comparison. The main objectives of this study were to: (1) evaluate the release rates of the remedial tablets; (2) obtain the optimal formulas of the tablets; and (3) evaluate the effects of the tablets on the subsurface environment. The results of batch experiments show that the optimal parameters for the preparation of the persulfate-releasing tablet were persulfate:cellulose = 1:1 (wt:wt) with an applied pressure of 5,000 kgf/cm². The cellulose-based persulfate tablet was able to release 2,030 mg/L of persulfate per day for 10 days, and its release rate was much more stable than that of the cement-based persulfate-releasing materials. Moreover, since the tablets are soluble in water, no waste is produced in the subsurface. The results of column tests show that groundwater flow would shorten the release time of the tablets. This study successfully developed unique persulfate tablets from a green remediation perspective. The efficacy of the persulfate-releasing tablets for the removal of organic pollutants needs to be further evaluated, and the tablets are expected to be applied to site remediation in the future.

Keywords: sustained-release persulfate tablet, modified cellulose, green remediation, groundwater

Procedia PDF Downloads 290
2525 Insulin Resistance in Children and Adolescents in Relation to Body Mass Index, Waist Circumference and Body Fat Weight

Authors: E. Vlachopapadopoulou, E. Dikaiakou, E. Anagnostou, I. Panagiotopoulos, E. Kaloumenou, M. Kafetzi, A. Fotinou, S. Michalacos

Abstract:

Aim: To investigate the relation and impact of body mass index (BMI), waist circumference (WC) and body fat weight (BFW) on insulin resistance (Matsuda index < 2.5) in children and adolescents. Methods: Data from 95 overweight and obese children (47 boys and 48 girls) with a mean age of 10.7 ± 2.2 years were analyzed. ROC analysis was used to investigate the predictive ability of BMI, WC and BFW for insulin resistance and to find the optimal cut-offs; the overall performance of the ROC analysis was quantified by the area under the curve (AUC). Results: ROC curve analysis indicated that the optimal cut-off of WC for the prediction of insulin resistance was 97 cm, with a sensitivity of 75% and a specificity of 73.1%; the AUC was 0.78 (95% CI: 0.63-0.92, p=0.001). The sensitivity and specificity of obesity for discriminating participants with insulin resistance from those without were 58.3% and 75%, respectively (AUC=0.67). BFW had a borderline predictive ability for insulin resistance (AUC=0.58, 95% CI: 0.43-0.74, p=0.101). The predictive ability of WC was equivalent to that of BMI (p=0.891). Obese subjects had 4.2 times greater odds of having insulin resistance (95% CI: 1.71-10.30, p < 0.001), while subjects with a WC greater than 97 cm had 8.1 times greater odds (95% CI: 2.14-30.86, p=0.002). Conclusion: BMI and WC are important clinical factors that are significantly related to insulin resistance in children and adolescents. The cut-off of 97 cm for WC can identify children with a greater likelihood of insulin resistance.
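
The cut-off selection step can be reproduced on synthetic data with scikit-learn, as in the sketch below, which picks the threshold maximizing Youden's J statistic. The simulated waist-circumference values are invented, and the study's actual cut-off criterion is not stated in the abstract.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)

    # Simulated waist circumference (cm) for children with and without insulin
    # resistance; purely illustrative, not the study data.
    wc = np.r_[rng.normal(102, 9, 40), rng.normal(90, 9, 55)]
    y = np.r_[np.ones(40), np.zeros(55)]          # 1 = insulin resistant

    fpr, tpr, thresholds = roc_curve(y, wc)
    best = np.argmax(tpr - fpr)                   # Youden's J statistic
    print(f"AUC = {roc_auc_score(y, wc):.2f}")
    print(f"optimal cut-off = {thresholds[best]:.1f} cm "
          f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")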

Keywords: body fat weight, body mass index, insulin resistance, obese children, waist circumference

Procedia PDF Downloads 320
2524 Preparation of Activated Carbon from Lignocellulosic Precursor for Dyes Adsorption

Authors: H. Mokaddem, D. Miroud, N. Azouaou, F. Si-Ahmed, Z. Sadaoui

Abstract:

The synthesis and characterization of activated carbon from a local lignocellulosic precursor (Algerian alfa) were carried out for the removal of cationic dyes from aqueous solutions. The effects of the production variables, such as the impregnation chemical agent, impregnation ratio, activation temperature and activation time, were investigated. The carbon obtained under the optimum conditions (CaCl2 / 1:1 / 500°C / 2 h) was characterized by various analytical techniques: scanning electron microscopy (SEM), infrared spectroscopic analysis (FTIR) and the zero point of charge (pHpzc). Adsorption tests of methylene blue (MB) on the optimal activated carbon were conducted, and the effects of contact time, amount of adsorbent, initial dye concentration and pH were studied. The adsorption equilibrium, examined using the Langmuir, Freundlich, Temkin and Redlich–Peterson models, reveals that the Langmuir model is the most appropriate to describe the adsorption process, and the kinetics of MB sorption onto the activated carbon follow a pseudo-second-order rate expression. The thermodynamic analysis indicates that the adsorption process is spontaneous (ΔG° < 0) and endothermic (ΔH° > 0), and the positive value of the standard entropy shows the affinity between the activated carbon and the dye. The present study showed that the optimal activated carbon prepared from Algerian alfa is an effective low-cost adsorbent and can be employed as an alternative to commercial activated carbon for the removal of MB dye from aqueous solution.
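
Fitting a Langmuir isotherm of the kind used in this equilibrium analysis takes only a few lines with scipy. The equilibrium concentrations and uptakes below are invented numbers, not the reported data, so the fitted constants are illustrative only.

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented equilibrium data for methylene blue on the activated carbon:
    # Ce = equilibrium concentration (mg/L), qe = amount adsorbed (mg/g).
    Ce = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
    qe = np.array([38.0, 62.0, 90.0, 118.0, 130.0, 140.0, 144.0])

    def langmuir(Ce, qmax, KL):
        """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
        return qmax * KL * Ce / (1 + KL * Ce)

    (qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[150.0, 0.05])
    residuals = qe - langmuir(Ce, qmax, KL)
    r2 = 1 - np.sum(residuals ** 2) / np.sum((qe - qe.mean()) ** 2)
    print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg, R^2 = {r2:.3f}")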

Keywords: activated carbon, adsorption, cationic dyes, Algerian alfa

Procedia PDF Downloads 228
2523 Optimal Sortation Strategy for a Distribution Network in an E-Commerce Supply Chain

Authors: Pankhuri Dagaonkar, Charumani Singh, Poornima Krothapalli, Krishna Karthik

Abstract:

The backbone of any retail e-commerce success story is a unique supply chain network design that gives the business unparalleled speed and scalability. The primary goal of the supply chain strategy is to meet customer expectations by offering the fastest deliveries while keeping costs minimal. Meeting this objective in a market as large as India is the problem we target here. Many models and optimization techniques focus on network design to identify the ideal facility locations and sizes, optimizing cost and speed; in this paper, we present a tactical approach to optimize the cost of an existing network for a predefined speed. We consider both the forward and reverse logistics of a retail e-commerce supply chain consisting of multiple fulfillment (warehouse) and delivery centers, which are connected via sortation nodes. The mathematical model presented here determines whether a shipment at a node should be sorted directly for the last-mile delivery center or travel as a consolidated package to another node for further sortation (resort). The objective function minimizes the total cost by varying the resort percentages between nodes and provides the optimal resource allocation and number of sorts at each node.
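
A toy version of this resort-percentage decision can be written as a small linear program, sketched below assuming the open-source PuLP package. The node names, volumes, unit costs and capacities are invented, and the real model covers many more nodes and both logistics directions.

    from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

    # Invented data: daily volume from each origin sortation node to each delivery
    # centre, per-unit costs of a fine sort at the origin versus consolidation plus
    # a second sort (resort) at a hub, and the fine-sort capacity at each origin node.
    volume = {("N1", "D1"): 800, ("N1", "D2"): 300, ("N2", "D1"): 500, ("N2", "D2"): 900}
    direct_cost = {"N1": 0.7, "N2": 0.8}
    resort_cost = {"N1": 1.0, "N2": 1.0}
    direct_capacity = {"N1": 900, "N2": 1000}

    # r[o, d] = fraction of the (o, d) volume sent to the hub for re-sorting
    r = {od: LpVariable(f"resort_{od[0]}_{od[1]}", lowBound=0, upBound=1) for od in volume}

    prob = LpProblem("sortation_strategy", LpMinimize)
    prob += lpSum(volume[od] * ((1 - r[od]) * direct_cost[od[0]] + r[od] * resort_cost[od[0]])
                  for od in volume)
    for o in direct_capacity:                    # fine-sort capacity at each origin node
        prob += lpSum(volume[od] * (1 - r[od]) for od in volume if od[0] == o) <= direct_capacity[o]

    prob.solve()
    for od in volume:
        print(f"{od}: resort fraction = {value(r[od]):.2f}")
    print("minimum daily sortation cost =", round(value(prob.objective), 1))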

Keywords: distribution strategy, mathematical model, network design, supply chain management

Procedia PDF Downloads 297
2522 In Working, Career Is Not Everything: A Case Study of Family Friendly Policies on Bank Company

Authors: Trias Setiawati, Rizkika Awalia

Abstract:

The study title is “In Working, Career is not everything: A Case Study of Family Friendly Policies (FFP) on Bank Company.” This study aims to describe the application of FFP in banking, specifically at the Bank Rakyat Indonesia (BRI, Indonesian People's Bank) Katamso Branch Office in Yogyakarta (KBOY), as company support for creating a work-life balance, and to examine the achievement of career and family harmony as seen from the work-family conflict faced by the employees. The importance of applying FFP in an organization lies essentially in building the company's competitive advantage. The study used qualitative research methods with a case study approach at BRI KBOY. Data collection techniques included non-participant observation and in-depth structured interviews with three employees. The results showed that FFP adoption is general and not yet optimal: an optimal FFP policy has not been implemented, and it exists only in informal policies, with a lack of flexible working time, daycare, and personal counseling for employees, although lactation rooms are available. The employees found it difficult to balance career achievement at work with family harmony. Not pursuing a career does not mean that they do not want to reach a better position; rather, they do not want to neglect family harmony because of work overload.

Keywords: career, family friendly policies, work-family balance, work-family conflict

Procedia PDF Downloads 416
2521 A Folk Theorem with Public Randomization Device in Repeated Prisoner’s Dilemma under Costly Observation

Authors: Yoshifumi Hino

Abstract:

An infinitely repeated prisoner's dilemma is a typical model of a teamwork situation: if both players choose the costly action and contribute to the team, both players are better off, yet each player has an incentive to choose the selfish action. We analyze the game under costly observation: each player can observe the action of the opponent only if he pays an observation cost in that period. In reality, teamwork situations often involve costly observation; members of some teams work in different rooms, areas, or countries, and in those cases they have to spend time and money to see other team members if they want to observe their actions. The costly observation assumption makes cooperation substantially more difficult because an equilibrium must satisfy the incentive constraints not only on the actions but also on the observational decisions. Cooperation is hardest when the stage game is a prisoner's dilemma, because players can communicate through only two actions. We examine whether players can cooperate with each other in the prisoner's dilemma under costly observation. Specifically, we check whether symmetric Pareto-efficient payoff vectors in the repeated prisoner's dilemma can be approximated by sequential equilibria (an efficiency result). We show the efficiency result without any randomization device under certain circumstances, meaning that players can cooperate with each other without a randomization device even if observation is costly. Next, we assume that a public randomization device is available and show that any feasible and individually rational payoffs in the prisoner's dilemma can be approximated by sequential equilibria in a specific situation (a folk theorem). This implies that players can achieve asymmetric teamwork, such as a leadership situation, when a public randomization device is available.

Keywords: costly observation, efficiency, folk theorem, prisoner's dilemma, private monitoring, repeated games

Procedia PDF Downloads 240
2520 Parametric Investigation of Wire-Cut Electric Discharge Machining on Steel ST-37

Authors: Mearg Berhe Gebregziabher

Abstract:

Wire-cut electric discharge machining (WEDM) is one of the advanced machining processes. Despite the development of the current manufacturing sector, no previous research work has addressed the optimization of the process parameters for the locally available Steel St-37 workpiece material in Ethiopia. The material removal rate (MRR) is considered as the experimental response of WEDM. The main objective of this work is to investigate and optimize the process parameters that give a high MRR during the machining of Steel St-37. Throughout the investigation, pulse on time (TON), pulse off time (TOFF) and wire feed velocity (WR) are used as variable parameters at three different levels, while wire tension, type of dielectric fluid, dielectric flow rate, and the workpiece and wire materials are kept constant for each experiment. The Taguchi methodology, as per Taguchi's standard L9 (3^3) orthogonal array (OA), has been carried out to investigate their effects and to predict the optimal combination of process parameters for MRR. The signal-to-noise ratio (S/N) and analysis of variance (ANOVA) were used to analyze the effects of the parameters and to identify the optimum cutting parameters for MRR. MRR was measured using an Electronic Balance Model SI-32. The results indicated that the most significant factor for MRR is TOFF, followed by TON and lastly WR. The Taguchi analysis shows that the optimal process parameter combination is A2B2C2, i.e., TON 6 μs, TOFF 29 μs and WR 2 m/min. At this level, an MRR of 0.414 gram/min has been achieved.
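
The larger-is-better signal-to-noise calculation and the level-averaging step of the Taguchi analysis can be sketched as below. The L9 level assignments follow the standard layout, but the MRR responses are made-up numbers, so the best levels printed here are not those reported in the study.

    import numpy as np

    # Standard L9 (3^3) layout: columns are the levels (1-3) of TON, TOFF and WR.
    L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
                   [2, 1, 2], [2, 2, 3], [2, 3, 1],
                   [3, 1, 3], [3, 2, 1], [3, 3, 2]])
    # Made-up MRR responses (gram/min) for the nine runs, one replicate each.
    mrr = np.array([0.31, 0.40, 0.35, 0.38, 0.41, 0.37, 0.33, 0.36, 0.30])

    # Larger-is-better signal-to-noise ratio: S/N = -10 * log10(mean(1 / y^2))
    sn = -10 * np.log10(1.0 / mrr ** 2)

    for col, factor in enumerate(["TON", "TOFF", "WR"]):
        level_means = np.array([sn[L9[:, col] == lvl].mean() for lvl in (1, 2, 3)])
        print(f"{factor:>4}: mean S/N per level = {np.round(level_means, 2)} "
              f"-> best level {level_means.argmax() + 1}, range {np.ptp(level_means):.2f}")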

Keywords: ANOVA, MRR, process parameters, Taguchi method

Procedia PDF Downloads 43
2519 Semantic Differences between Bug Labeling of Different Repositories via Machine Learning

Authors: Pooja Khanal, Huaming Zhang

Abstract:

Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some known labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when reporters report a bug, they try to assign some predefined label to it. Those issues are reported for a project, and each project is a repository in GitHub/GitLab which contains multiple issues. There are many software project repositories, ranging from individual projects to commercial projects. The labels assigned in different repositories may depend on various factors such as human instinct, generalization of labels, the label assignment policy followed by the reporter, etc. While the reporter of an issue may instinctively give it one label, another person reporting the same issue may label it differently. Thus, it is not known mathematically whether a label in one repository is similar to or different from the label in another repository. Hence, the primary goal of this research is to find the semantic differences between the bug labeling of different repositories via machine learning. Independent optimal classifiers for individual repositories are built first, using text features from the reported issues; the optimal classifiers may include a combination of multiple classifiers stacked together. Then, those classifiers are used to cross-test other repositories, which allows the similarity to be deduced mathematically. The product of this ongoing research includes a formalized open-source GitHub issues database that is used to deduce the similarity of the labels across the different repositories.
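
The cross-testing step can be illustrated with a few lines of scikit-learn, as below. The handful of issue titles and the two "repositories" are fabricated toy data, and the real pipeline builds per-repository stacked classifiers over a much larger issue corpus.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Fabricated toy issues (text, label) for two "repositories".
    repo_a = [("button misaligned on settings page", "User Interface"),
              ("XSS possible in comment field", "Security"),
              ("REST endpoint returns 500 on empty body", "API"),
              ("dropdown overlaps the footer", "User Interface"),
              ("auth token leaks in debug logs", "Security"),
              ("pagination parameter is ignored", "API")]
    repo_b = [("login form renders off screen", "User Interface"),
              ("SQL injection through the search box", "Security"),
              ("webhook payload is missing fields", "API")]
    X_a, y_a = zip(*repo_a)
    X_b, y_b = zip(*repo_b)

    # Train a classifier on repository A, then cross-test it on repository B; the gap
    # between the two scores indicates how differently the labels are used.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X_a, y_a)
    print("self accuracy  (A on A):", clf.score(X_a, y_a))
    print("cross accuracy (A on B):", clf.score(X_b, y_b))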

Keywords: bug classification, bug labels, GitHub issues, semantic differences

Procedia PDF Downloads 201
2518 A Variable Neighborhood Search with Tabu Conditions for the Roaming Salesman Problem

Authors: Masoud Shahmanzari

Abstract:

The aim of this paper is to present a Variable Neighborhood Search (VNS) with Tabu Search (TS) conditions for the Roaming Salesman Problem (RSP). The RSP is a special case of the well-known traveling salesman problem (TSP) in which a set of cities with time-dependent rewards and a set of campaign days are given; each city can be visited on any day, and a subset of cities can be visited multiple times. The goal is to determine an optimal campaign schedule consisting of daily open/closed tours that visit some of the cities and maximize the total net benefit while respecting daily maximum tour duration constraints and the need to return to the campaign base frequently. This problem arises in several real-life applications, particularly in election logistics, where depots are not fixed. We formulate the problem as a mixed integer linear program (MILP) in which we capture as many real-world aspects of the RSP as possible. We also present a hybrid metaheuristic algorithm based on a VNS with TS conditions. The initial feasible solution is constructed via a new matheuristic approach based on a decomposition of the original problem; this solution is then improved in terms of the collected rewards using the proposed local search procedure. We consider a set of 81 cities in Turkey and a campaign of 30 days as our largest instance. Computational results on real-world instances show that the developed algorithm finds near-optimal solutions effectively.
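
A generic VNS skeleton with a simple tabu condition is sketched below on a plain travelling-salesman toy instance. The shaking scheme, tabu rule and all instance data are illustrative assumptions, and the RSP-specific features (rewards, day structure, duration limits) are omitted.

    import math
    import random

    random.seed(4)
    cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

    def tour_length(tour):
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(tour):
        """Simple 2-opt local search (segment reversal) until no improvement is found."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 2, len(tour)):
                    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(cand) < tour_length(tour):
                        tour, improved = cand, True
        return tour

    def shake(tour, k):
        """k-th neighbourhood: reverse k random segments of the tour."""
        tour = tour[:]
        for _ in range(k):
            i, j = sorted(random.sample(range(len(tour)), 2))
            tour[i:j] = tour[i:j][::-1]
        return tour

    def vns_with_tabu(k_max=4, iterations=40, tabu_size=25):
        best = two_opt(list(range(len(cities))))
        tabu = []                                     # recently visited local optima
        for _ in range(iterations):
            k = 1
            while k <= k_max:
                candidate = two_opt(shake(best, k))
                if tuple(candidate) in tabu:          # tabu condition: skip revisited optima
                    k += 1
                    continue
                tabu = (tabu + [tuple(candidate)])[-tabu_size:]
                if tour_length(candidate) < tour_length(best):
                    best, k = candidate, 1            # improvement: restart from neighbourhood 1
                else:
                    k += 1
        return best

    best_tour = vns_with_tabu()
    print("best tour length:", round(tour_length(best_tour), 1))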

Keywords: optimization, routing, election logistics, heuristics

Procedia PDF Downloads 92
2517 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry

Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri

Abstract:

This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the input-output imbalance between consecutive stations, aiming to minimize machine idle time at each stage. To this end, a linear goal programming (GP) model is developed in which four main categories of constraints are considered: the number of runs per machine, the machine sequences, the acceptable end-of-period inventories of the machines, and the fulfillment of customer orders. The model is formulated based on real data obtained from IKO TAK Company, an important supplier of wire and cable for the oil, gas and automotive industries in Iran. By solving the model in the GAMS software, the optimal number of runs, the end-of-period inventories, and the minimum possible idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model, decreasing the lead time of end-product delivery to customers by 20%. Accordingly, the developed model could easily be applied in wire and cable companies for optimal production planning to reduce machine idle time across the manufacturing stages.
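
The deviation-variable mechanics of goal programming can be sketched on a two-station slice of such a line, assuming the open-source PuLP package rather than GAMS. The station names, outputs per run, capacities, targets and goal weights below are all invented.

    from pulp import LpMinimize, LpProblem, LpVariable, value

    # A single-period, two-station slice of the line (invented data): the drawing
    # station feeds the cabling station; goals are handled with deviation variables.
    out_per_run = {"draw": 40.0, "cable": 35.0}   # units produced per run
    max_runs = {"draw": 30, "cable": 30}          # capacity in runs per period
    demand, target_buffer = 1200.0, 100.0         # customer orders, desired WIP buffer

    runs = {s: LpVariable(f"runs_{s}", lowBound=0, upBound=max_runs[s]) for s in out_per_run}
    dev = {g: (LpVariable(f"{g}_under", lowBound=0), LpVariable(f"{g}_over", lowBound=0))
           for g in ("demand", "buffer")}

    prob = LpProblem("wire_cable_goal_programming", LpMinimize)
    # weighted deviations: missing demand is penalised most, WIP deviations mildly
    prob += 10 * dev["demand"][0] + 1 * dev["demand"][1] + 2 * dev["buffer"][0] + 2 * dev["buffer"][1]
    # goal 1: cabling output meets the customer orders
    prob += out_per_run["cable"] * runs["cable"] + dev["demand"][0] - dev["demand"][1] == demand
    # goal 2: drawing output minus cabling consumption leaves the target buffer
    prob += (out_per_run["draw"] * runs["draw"] - out_per_run["cable"] * runs["cable"]
             + dev["buffer"][0] - dev["buffer"][1] == target_buffer)

    prob.solve()
    for s in runs:
        print(f"{s}: {value(runs[s]):.1f} runs")
    print("unmet demand:", value(dev["demand"][0]), "units")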

Keywords: goal programming approach, GP, production planning, serial manufacturing process, wire and cable industry

Procedia PDF Downloads 161
2516 A Cognitive Approach to the Optimization of Power Distribution across an Educational Campus

Authors: Mrinmoy Majumder, Apu Kumar Saha

Abstract:

The ever-increasing human population and its demand for energy is placing stress upon conventional energy sources; and as demand for power continues to outstrip supply, the need to optimize energy distribution and utilization is emerging as an important focus for various stakeholders. The distribution of available energy must be achieved in such a way that the needs of the consumer are satisfied. However, if the availability of resources is not sufficient to satisfy consumer demand, it is necessary to find a method to select consumers based on factors such as their socio-economic or environmental impacts. Weighting consumer types in this way can help separate them based on their relative importance, and cognitive optimization of the allocation process can then be carried out so that, even on days of particularly scarce supply, the socio-economic impacts of not satisfying the needs of consumers can be minimized. In this context, the present study utilized fuzzy logic to assign weightage to different types of consumers based at an educational campus in India, and then established optimal allocation by applying the non-linear mapping capability of neuro-genetic algorithms. The outputs of the algorithms were compared with similar outputs from particle swarm optimization and differential evolution algorithms. The results of the study demonstrate an option for the optimal utilization of available energy based on the socio-economic importance of consumers.

Keywords: power allocation, optimization problem, neural networks, environmental and ecological engineering

Procedia PDF Downloads 479
2515 Addition of Phosphates on Stability of Sterilized Goat Milk in Different Seasons

Authors: Mei-Jen Lin, Yuan-Yuan Yu

Abstract:

The low heat stability of goat milk limits the application of ultra-high-temperature (UHT) sterilization for preserving excess summer goat milk to produce goat dairy products in winter in Taiwan. Therefore, this study aimed to add stabilizers to goat milk to increase its heat stability for producing UHT-sterilized goat milk that can be preserved for making goat dairy products in winter. A blend of sodium phosphates (Na) and a blend of sodium/potassium phosphates (Sp) were added at 0.05-0.11% to raw goat milk from different seasons the night before autoclave sterilization at 135°C for 4 sec. The coagulation, ionic calcium concentration and ethanol stability of the sterilized goat milk were analyzed. The results show seasonal differences in the choice of the optimal stabilizer and its addition level: spring goat milk with 0.05% and 0.22% of either the Na or Sp salts, summer goat milk with 0.10-0.11% of either salt, and the 0.05% Na and Sp groups of autumn goat milk coagulated after autoclaving. No coagulation was found with the addition of 0.08-0.09% of either the Na or Sp salts; furthermore, in these samples the ionic calcium concentration was lower than 2.00 mM and the ethanol stability was higher than 70%. Therefore, the optimal addition level of the blend of sodium phosphates and the blend of sodium/potassium phosphates is 0.08-0.09% for producing sterilized goat milk in the different seasons in Taiwan.

Keywords: coagulation, goat milk, phosphates, stability

Procedia PDF Downloads 372
2514 An Optimal Hybrid EMS System for a Hyperloop Prototype Vehicle

Authors: J. F. Gonzalez-Rojo, Federico Lluesma-Rodriguez, Temoatzin Gonzalez

Abstract:

Hyperloop, a new mode of transport, is gaining significance. It consists of a ground-based transport system that includes a levitation system, which avoids rolling friction forces, enclosed in a tube whose controlled inner atmosphere lowers the aerodynamic drag forces. Hyperloop is thus proposed as a solution to the current limitations of ground transportation: the rolling and aerodynamic problems that limit the speed of traditional high-speed rail, or even maglev systems, are overcome by a hyperloop solution. Zeleros is one of the companies developing hyperloop technology worldwide. It is working on a concept that reduces the infrastructure cost and minimizes the power consumption as well as the losses associated with magnetic drag forces. For this purpose, Zeleros proposes a Hybrid ElectroMagnetic Suspension (EMS) for its prototype. In the present manuscript, an active, optimal electromagnetic suspension levitation method based on individual modules with nearly zero power consumption is presented. The system consists of several hybrid permanent magnet-coil levitation units that can be arranged along the vehicle. The proposed unit redirects the magnetic field along a defined direction, forming a magnetic circuit and minimizing the losses due to field dispersion; this is achieved using an electrical steel core. Each module can stabilize the gap distance using the coil current and either linear or non-linear control methods. The ratio between weight and levitation force for each unit is 1/10, and the quotient between the lifted weight and the power consumption at the target gap distance is 1/3 [kg/W]. One degree of freedom (DoF), along the gap direction, is controlled by a single unit; however, when several units are present, a 5-DoF control (2 translational and 3 rotational) can be achieved, leading to full attitude control of the vehicle. The proposed system has been successfully tested, reaching TRL-4 on a laboratory test bench, and is currently at TRL-5 development when the association of modules to control 5 DoF is considered.

Keywords: active optimal control, electromagnetic levitation, HEMS, high-speed transport, hyperloop

Procedia PDF Downloads 146
2513 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine

Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy

Abstract:

This paper presents the results of research carried out to verify the combustion simulation of the Asz62-IR aircraft piston engine. This engine was modernized, and a new type of ignition system was developed for it. Due to the high cost of experiments on a nine-cylinder, 1,000 hp aircraft engine, a simulation technique should be applied, and using computational fluid dynamics (CFD) to simulate the combustion process is a reasonable solution. Accordingly, simulations for varied ignition advance angles were carried out and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine in this research has two spark plugs per cylinder, and the ignition advance angles had to be set separately for each spark plug. The simulation results were verified by comparing the in-cylinder pressure with the indicated pressure traces of the engine mounted on a test stand. The real pressure trace was measured with an optical sensor mounted in a specially drilled hole between the valves: an OPTRAND pressure sensor, designed especially for engine combustion research. The indicated pressure was measured in cylinder no. 3, with the engine running at take-off power and loaded by a propeller on a special test bench. The simulated pressure trace lies within the measurement error of the optical sensor; this error is 1% and reflects the hysteresis and nonlinearity of the sensor. Comparing the real indicated pressure measured in the cylinder with the pressure taken from the simulation, it can be claimed that the pressure-based verification of the CFD simulations is a success. The next step was to investigate the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process. Offsetting the ignition timing between spark plugs 1 and 2 results in longer and uneven burning of the mixture. The optimal point in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but sufficiently separated ignitions ensure that ignition will occur at all engine speeds and loads; this should be confirmed by a bench experiment on the engine. Nevertheless, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows the ignition point with two spark plugs to be set so as to achieve as much power as possible.

Keywords: CFD model, combustion, engine, simulation

Procedia PDF Downloads 361
2512 Statistical Modelling of Maximum Temperature in Rwanda Using Extreme Value Analysis

Authors: Emmanuel Iyamuremye, Edouard Singirankabo, Alexis Habineza, Yunvirusaba Nelson

Abstract:

Temperature is one of the most important climatic factors for crop production. However, extreme temperatures cause droughts, hot spells and cold spells that have various consequences for human life, agriculture, and the environment in general, so it is necessary to provide reliable information on such incidents and on the probability of such extreme events occurring. In the 21st century, the world faces a huge number of threats, especially from climate change due to global warming and environmental degradation. A rise in temperature has a direct effect on decreasing rainfall, which affects crop growth and development and in turn decreases crop yield and quality. Countries that are heavily dependent on agriculture tend to suffer greatly and need to take preventive steps to overcome these challenges. The main objective of this study is to model the statistical behaviour of extreme maximum temperature values in Rwanda. To achieve this objective, daily temperature data spanning the period from January 2000 to December 2017, recorded at nine weather stations and collected from the Rwanda Meteorological Agency, were used. Two methods, namely the block maxima (BM) method and the peaks-over-threshold (POT) method, were applied to model and analyse extreme temperatures. The model parameters were estimated, and extreme temperature return periods and confidence intervals were predicted. The model fit suggests the Gumbel and Beta distributions to be the most appropriate models for the annual maxima of daily temperature. The results show that the temperature will continue to increase, as indicated by the estimated return levels.
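
The block-maxima step can be reproduced in a few lines with scipy, as sketched below. The synthetic annual maxima stand in for the station records, so the fitted parameters and return levels are purely illustrative.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(10)

    # Synthetic annual maxima of daily maximum temperature (deg C) for one station;
    # 18 values stand in for the 2000-2017 records, so the output is illustrative only.
    annual_max = rng.gumbel(loc=31.0, scale=1.2, size=18)

    # Block-maxima approach: fit a generalised extreme value (GEV) distribution.
    shape, loc, scale = genextreme.fit(annual_max)

    # Return level for a T-year return period: the quantile exceeded once every T years.
    for T in (5, 10, 25, 50):
        level = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
        print(f"{T:>3}-year return level: {level:.1f} deg C")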

Keywords: climate change, global warming, extreme value theory, rwanda, temperature, generalised extreme value distribution, generalised pareto distribution

Procedia PDF Downloads 183
2511 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The bilevel programming (BP) model has been presented for a decision-making process that involves two decision makers in a hierarchical structure. In fact, BP is a model of a static two-person game (the leader player in the upper level and the follower player in the lower level) in which each player tries to optimize his or her personal objective function under interdependent constraints; the game is sequential and non-cooperative. The decision variables are divided between the two players, and each one's choice affects the other's payoff and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower), where the constraint region of the upper-level problem is implicitly determined by the lower-level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals, and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived; then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem, and the best and the worst optimal values of the leader's objective function can be found by two algorithms.
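
The Charnes-Cooper step can be demonstrated on a single-level linear fractional program, as sketched below with scipy's LP solver. The coefficient values are invented, and the bilevel and interval aspects of the paper are not reproduced here.

    import numpy as np
    from scipy.optimize import linprog

    # Charnes-Cooper transformation of a small linear fractional program:
    #   max (c.x + alpha) / (d.x + beta)  s.t.  A x <= b, x >= 0,
    # assuming d.x + beta > 0 on the feasible region. All coefficients are invented.
    c, alpha = np.array([3.0, 1.0]), 1.0
    d, beta = np.array([1.0, 2.0]), 4.0
    A = np.array([[1.0, 1.0], [2.0, 1.0]])
    b = np.array([6.0, 8.0])

    # New variables (y, t) with y = t*x and t = 1/(d.x + beta) turn the ratio into a
    # linear objective: max c.y + alpha*t  s.t.  A y - b t <= 0,  d.y + beta*t = 1.
    obj = np.concatenate([-c, [-alpha]])                 # linprog minimizes
    A_ub = np.hstack([A, -b.reshape(-1, 1)])
    A_eq = np.concatenate([d, [beta]]).reshape(1, -1)
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(len(b)), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * 3)

    y, t = res.x[:2], res.x[2]
    x = y / t                                            # recover the original variables
    print("optimal x =", np.round(x, 3),
          " optimal ratio =", round((c @ x + alpha) / (d @ x + beta), 4))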

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 446