Search results for: data center optimization
28145 Optimization of Machining Parameters of Plastic Material Using Taguchi Method
Authors: Jumazulhisham Abdul Shukor, Mohd. Sazali Said, Roshanizah Harun, Shuib Husin, Ahmad Razlee Ab Kadir
Abstract:
This paper applies the Taguchi optimization method to determine the best machining parameters for a pocket milling process on polypropylene (PP) using a CNC milling machine, where surface roughness is the response of interest and carbide insert cutting tools are used. Three machining parameters (speed, feed rate and depth of cut) are investigated at three levels each (low, medium and high) using a Taguchi orthogonal array. The settings of the machining parameters were determined by the Taguchi method, and the signal-to-noise (S/N) ratio is assessed to define the optimal levels and to predict the effect on surface roughness of the parameters assigned in the L9 array. The final experimental outcomes are presented to verify that the machining parameters recommended by the manufacturer are accurate.
Keywords: inserts, milling process, signal-to-noise (S/N) ratio, surface roughness, Taguchi Optimization Method
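As an illustration of the smaller-is-better S/N computation behind such an L9 study, here is a minimal sketch; the orthogonal array assignment and the roughness readings below are hypothetical, not the paper's measurements:

```python
import numpy as np

# Hypothetical L9 orthogonal array: 9 runs x (speed, feed, depth) level indices,
# and a measured surface roughness Ra (um) per run. Values are illustrative only.
levels = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
ra = np.array([1.82, 1.65, 1.90, 1.41, 1.55, 1.74, 1.38, 1.60, 1.52])

# Smaller-is-better S/N ratio per run: SN = -10 * log10(mean(y^2));
# here each run has a single reading, so mean(y^2) = y^2.
sn = -10.0 * np.log10(ra ** 2)

# Main effect of each factor: average S/N at each of its three levels.
# The level with the highest mean S/N is the optimal setting.
for factor, name in enumerate(["speed", "feed rate", "depth of cut"]):
    means = [sn[levels[:, factor] == lv].mean() for lv in range(3)]
    print(f"{name}: mean S/N by level = {np.round(means, 3)}, "
          f"best level = {int(np.argmax(means))}")
```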
Procedia PDF Downloads 634
28144 Application of Groundwater Model for Optimization of Denitrification Strategies to Minimize Public Health Risk
Authors: Mukesh A. Modi
Abstract:
High nitrate concentration in the groundwater of unconfined aquifers has been a serious public health risk at a global scale. Various anthropogenic activities in the agricultural and urban land of alluvial soils have been observed to be responsible for the increase of nitrate in groundwater. The present study was designed to identify suitable denitrification strategies to minimize the effects of high nitrate in groundwater near the Mahi River of Vadodara block, Gujarat. Eleven wells of the Jal Jeevan Mission, Ministry of Jal Shakti, along with 3 observation wells of the Gujarat Water Resources Development Corporation, have been used over a duration of 21 years. The MODFLOW and MT3DMS codes have been used to simulate the solute transport phenomena and have been applied to the optimization. The current research goes one step further by optimizing various denitrification strategies with the simulation of the model. The in-situ and ex-situ denitrification strategies, viz. NAS (No Action Scenario), CAS (Crop Alternation Scenario), PS (Phytoremediation Scenario), and CAS + PS (Crop Alternation Scenario + Phytoremediation Scenario), have been selected for the optimization. The groundwater model simulates the most suitable denitrification strategy considering the hydrogeological characteristics at the targeted well.
Keywords: groundwater, high nitrate, MODFLOW, MT3DMS, optimization, denitrification strategy
Procedia PDF Downloads 30
28143 Comparison between Continuous Genetic Algorithms and Particle Swarm Optimization for Distribution Network Reconfiguration
Authors: Linh Nguyen Tung, Anh Truong Viet, Nghien Nguyen Ba, Chuong Trinh Trong
Abstract:
This paper proposes a reconfiguration methodology based on a continuous genetic algorithm (CGA) and particle swarm optimization (PSO) for minimizing active power loss and voltage deviation. Both algorithms are adapted using graph theory to generate feasible individuals, and a modified crossover is used for the continuous variables of the CGA. To demonstrate the performance and effectiveness of the proposed methods, a comparative analysis of CGA and PSO for network reconfiguration on 33-node and 119-bus radial distribution systems is presented. The simulation results show that both CGA and PSO can be used for distribution network reconfiguration, and that CGA outperformed PSO, with a significantly higher success rate in finding the optimal distribution network configuration.
Keywords: distribution network reconfiguration, particle swarm optimization, continuous genetic algorithm, power loss reduction, voltage deviation
Procedia PDF Downloads 186
28142 Execution Time Optimization of Workflow Network with Activity Lead-Time
Authors: Xiaoping Qiu, Binci You, Yue Hu
Abstract:
The execution time of a workflow network has an important effect on the efficiency of the business process. In this paper, the activity execution time is divided into service time and waiting time, and the lead time is then extracted from the waiting time. The execution time formulas of the three basic structures in a workflow network are derived based on the activity lead time. Taking the process of e-commerce logistics as an example, appropriate lead times are inserted for key activities using a Petri net, and an execution time optimization model is built to minimize the waiting time under time-cost constraints. A solution program written in VC++ 6.0 is then compiled to obtain the optimal solution, which reduces the waiting time of key activities in the workflow and verifies the role of lead time in the timeliness of e-commerce logistics.
Keywords: electronic business, execution time, lead time, optimization model, Petri net, time workflow network
Procedia PDF Downloads 171
28141 Learning Resource Management of the Royal Court Courtier in the Reign of King Rama V
Authors: Chanaphop Vannaolarn, Weena Eiamprapai
Abstract:
Thai noblewomen and ladies-in-waiting in the era of King Rama V stayed only inside the palace. King Rama V decided to build Dusit Palace in 1897 and another palace called Suan Sunandha in 1900, after his royal visit to Europe. This palace became the residence for noblewomen in the court until the change of the political system in 1932. The study of noblewomen in the palace can educate people about how the nation was affected by Western civilization in terms of architecture, food, dress and recreation. It is a way to develop modern society by studying the great historical value of the past. A learning center about noblewomen will not only provide knowledge but also create a bond and patriotic feeling among Thais.
Keywords: noblewomen, palace, management, learning center
Procedia PDF Downloads 361
28140 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization
Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford
Abstract:
The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to undeveloped rural surrounding environments. With urbanization and densification, the intensity of UHI increases, bringing negative impacts on livability, health and the economy. In order to reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and availability of area for development, non-trivial decisions regarding the buildings' dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design in order to jointly minimize UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable in real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings' energy consumption, respectively. Those outputs change based on a set of variable inputs related to urban morphology aspects, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast where the utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI and energy consumption), and a set of constraints to be met is imposed. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use any direct optimization techniques, but instead develop an indirect "black box" optimization algorithm. To this end we develop a solution based on a stochastic optimization method known as the Cross Entropy Method (CEM). The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built area, and land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, the hot and humid climate in the country raises concern about the impact of UHI. The problem presented is highly relevant to early urban design stages, and the objective of such a framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the process of urban planning.
Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator
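A minimal sketch of the Cross Entropy Method on a black-box utility follows; the toy objective and design bounds below stand in for the UWG/EnergyPlus outputs and the Singapore constraints, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(x):
    # Stand-in for the black-box UWG/EnergyPlus evaluation: a linear
    # combination of (toy) UHI intensity and energy consumption terms.
    height, width, density = x.T
    uhi = 0.3 * density + 0.1 * height - 0.2 * width
    energy = 0.05 * height ** 2 + 0.4 * density
    return 0.5 * uhi + 0.5 * energy

# CEM: keep a Gaussian sampling distribution, repeatedly refit it to the
# elite (best-scoring) samples until it concentrates on a good design.
mu = np.array([30.0, 15.0, 5.0])      # initial mean (height, canyon width, density)
sigma = np.array([10.0, 5.0, 2.0])    # initial standard deviation
n_samples, n_elite = 200, 20

for it in range(50):
    x = rng.normal(mu, sigma, size=(n_samples, 3))
    x = np.clip(x, [3.0, 2.0, 0.5], [100.0, 40.0, 20.0])  # design constraints
    elite = x[np.argsort(utility(x))[:n_elite]]           # lowest utility wins
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6

print("optimized design (height, canyon width, density):", np.round(mu, 2))
```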
Procedia PDF Downloads 130
28139 Estimation of Fuel Cost Function Characteristics Using Cuckoo Search
Authors: M. R. Al-Rashidi, K. M. El-Naggar, M. F. Al-Hajri
Abstract:
The fuel cost function describes the electric power generation-cost relationship in thermal plants; hence, it sheds light on the economic aspects of the power industry. Different models have been proposed to describe this relationship, with the quadratic function model being the most popular. Parameters of a second-order fuel cost function are estimated in this paper using the cuckoo search algorithm, a new population-based meta-heuristic optimization technique used in this study primarily as an accurate estimation tool. Its main features are flexibility, simplicity, and effectiveness compared to other estimation techniques. The parameter estimation problem is formulated as an optimization problem with the goal of minimizing the error associated with the estimated parameters. A case study is considered to illustrate cuckoo search's promising potential as a valuable estimation and optimization technique.
Keywords: cuckoo search, parameter estimation, fuel cost function, economic dispatch
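The following is a simplified cuckoo-search sketch for the quadratic fuel cost fit; the measurement data are synthetic, and the Lévy-flight variant shown is the basic textbook form, not necessarily the exact settings used in the paper:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

# Synthetic generation-cost measurements (MW, $/h); true parameters assumed.
P = np.linspace(100, 500, 25)
cost_meas = 240 + 7.0 * P + 0.008 * P ** 2 + rng.normal(0, 15, P.size)

def sse(theta):
    a, b, c = theta
    return np.sum((a + b * P + c * P ** 2 - cost_meas) ** 2)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0, sigma_u, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

lo, hi = np.array([0.0, 0.0, 0.0]), np.array([500.0, 20.0, 0.05])
nests = rng.uniform(lo, hi, size=(25, 3))
fitness = np.apply_along_axis(sse, 1, nests)
best = nests[fitness.argmin()].copy()

for _ in range(2000):
    # Levy-flight moves biased toward the current best nest.
    new = np.clip(nests + 0.01 * levy((25, 3)) * (nests - best), lo, hi)
    new_fit = np.apply_along_axis(sse, 1, new)
    better = new_fit < fitness
    nests[better], fitness[better] = new[better], new_fit[better]
    # Abandon a fraction (pa ~ 0.25) of the worst nests, rebuilt at random.
    worst = np.argsort(fitness)[-6:]
    nests[worst] = rng.uniform(lo, hi, (6, 3))
    fitness[worst] = np.apply_along_axis(sse, 1, nests[worst])
    best = nests[fitness.argmin()].copy()

print("estimated (a, b, c):", np.round(best, 4))
```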
Procedia PDF Downloads 579
28138 Improved Multi-Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation
Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta
Abstract:
Nature-inspired algorithms have recently found widespread use in tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One of the major applications of OGRs is as an unequally spaced channel-allocation algorithm in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results show that the proposed optimization algorithm has superior performance compared to existing conventional computing and nature-inspired optimization algorithms for finding OGRs in terms of ruler length, total optical channel bandwidth and computation time.
Keywords: channel allocation, conventional computing, four-wave mixing, nature-inspired algorithm, optimal Golomb ruler, Lévy flight distribution, optimization, improved multi-objective firefly algorithms, Pareto optimal
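The defining property that any such optimization run has to certify is easy to state in code: a ruler is Golomb when all pairwise mark differences are distinct, which is what makes every channel spacing unique and suppresses FWM crosstalk buildup. A minimal check:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """True if all pairwise mark differences are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# A known optimal 4-mark ruler of length 6, and a non-Golomb counterexample.
print(is_golomb_ruler([0, 1, 4, 6]))   # True
print(is_golomb_ruler([0, 1, 2, 4]))   # False: difference 1 appears twice
```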
Procedia PDF Downloads 318
28137 Understanding Nanocarrier Efficacy in Drug Delivery Systems Using Molecular Dynamics
Authors: Maedeh Rahimnejad, Bahman Vahidi, Bahman Ebrahimi Hoseinzadeh, Fatemeh Yazdian, Puria Motamed Fath, Roghieh Jamjah
Abstract:
Introduction: The intensive labor and high cost of developing new vehicles for controlled drug delivery highlight the need for a change in their discovery process. Computational models can be used to accelerate experimental steps and control the high cost of experiments. Methods: In this work, to better understand the interaction of an anti-cancer drug and its nanocarrier with the cell membrane, we performed molecular dynamics simulations using NAMD. We chose paclitaxel as the drug molecule and dipalmitoylphosphatidylcholine (DPPC) as a natural phospholipid nanocarrier. Results: The center of mass (COM) distance between the molecules and the van der Waals interaction energy close to the cell membrane were analyzed. Furthermore, the simulation results for the interaction of paclitaxel with the cell membrane were compared with those for the interaction of drug-loaded DPPC with the cell membrane. Discussion: Analysis by molecular dynamics (MD) showed that not only is the interaction energy between the nanocarrier and the cell membrane low, but the COM distance in the nanocarrier and cell membrane system also decreases during the interaction; the drug-loaded nanocarrier therefore shows significantly better interaction with the membrane than the individual drug.
Keywords: anti-cancer drug, center of mass, interaction energy, molecular dynamics simulation, nanocarrier
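The COM analysis reduces to mass-weighted centroids per trajectory frame; a minimal sketch with hypothetical coordinates (not actual NAMD output) follows:

```python
import numpy as np

def center_of_mass(coords, masses):
    """Mass-weighted centroid of an atom group (coords: N x 3, masses: N)."""
    return np.average(coords, axis=0, weights=masses)

# Hypothetical coordinates (nm) and masses (amu) for a drug molecule and a
# patch of membrane lipids, standing in for one trajectory frame.
drug_xyz = np.array([[1.0, 2.0, 4.5], [1.2, 2.1, 4.3], [0.9, 1.8, 4.6]])
drug_m   = np.array([12.011, 12.011, 15.999])
memb_xyz = np.array([[1.1, 2.0, 1.0], [0.8, 2.2, 0.9], [1.3, 1.9, 1.1]])
memb_m   = np.array([30.974, 15.999, 12.011])

# COM separation, tracked frame by frame: a decreasing value indicates the
# carrier or drug approaching the membrane.
d = np.linalg.norm(center_of_mass(drug_xyz, drug_m) -
                   center_of_mass(memb_xyz, memb_m))
print(f"COM distance: {d:.3f} nm")
```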
Procedia PDF Downloads 296
28136 Optimization of Air Pollution Control Model for Mining
Authors: Zunaira Asif, Zhi Chen
Abstract:
Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants that have significant impacts on the environment. This study presents a stochastic control strategy, developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open-pit metal mine in Utah, USA. The method uses meteorological data in a dispersion transfer function to reflect practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained to select the optimized treatment technology for PM2.5, PM10, NOx, and SO2. A comparison analysis shows that the baghouse is the least-cost option compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
Keywords: air pollution, linear programming, mining, optimization, treatment technologies
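A hedged sketch of the linear programming setup follows; the costs, removal efficiencies and reduction targets are illustrative placeholders, not values from the Utah case study:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: tons/yr of each pollutant routed to each control device.
# x = [PM_baghouse, PM_ESP, PM_scrubber, NOx_NSCR, SO2_FGD]
cost = np.array([45.0, 60.0, 70.0, 110.0, 95.0])   # $/ton treated (placeholder)
eff  = np.array([0.99, 0.97, 0.93, 0.70, 0.90])    # removal efficiency (placeholder)

# Required emission reductions (tons/yr) to meet the air quality standard.
pm_required, nox_required, so2_required = 800.0, 300.0, 450.0

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the "removal >=
# target" constraints are written with negated signs.
A_ub = np.array([
    [-eff[0], -eff[1], -eff[2], 0.0, 0.0],   # PM removal  >= pm_required
    [0.0, 0.0, 0.0, -eff[3], 0.0],           # NOx removal >= nox_required
    [0.0, 0.0, 0.0, 0.0, -eff[4]],           # SO2 removal >= so2_required
])
b_ub = np.array([-pm_required, -nox_required, -so2_required])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)
print("treatment plan (tons/yr per device):", np.round(res.x, 1))
print(f"minimum annual treatment cost: ${res.fun:,.0f}")
```

With these placeholder numbers the solver routes all particulate matter to the baghouse, mirroring the abstract's least-cost finding; a Monte Carlo wrapper over the dispersion inputs would then resample the targets.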
Procedia PDF Downloads 206
28135 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automating building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate the platform's capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. Our platform is developed using an adaptive and flexible architecture design, rendering it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
Procedia PDF Downloads 107
28134 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration
Authors: C. Iraklis, G. Evmiridis, A. Iraklis
Abstract:
Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both combined in a weighted-sum objective function. Two factors that describe congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgraded SPSO algorithm is obtained through the addition of a heuristic algorithm specializing in the reduction of power losses, and several scenarios are tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.
Keywords: congestion, distribution networks, loss reduction, particle swarm optimization, smart grid
Procedia PDF Downloads 443
28133 Global Direct Search Optimization of a Tuned Liquid Column Damper Subject to Stochastic Load
Authors: Mansour H. Alkmim, Adriano T. Fabro, Marcus V. G. De Morais
Abstract:
In this paper, a global direct search optimization algorithm for reducing the vibration of a tuned liquid column damper (TLCD), a class of passive structural control device, is presented. The objective is to find optimized parameters for the TLCD under stochastic loads from different wind power spectral densities. A verification is made considering the analytical solution of an undamped primary system under white noise excitation. Finally, a numerical example considering a simplified wind turbine model is given to illustrate the efficacy of the TLCD. Results from the random vibration analysis are shown for four types of random wind excitation models; the response PSDs obtained show good vibration attenuation.
Keywords: generalized pattern search, parameter optimization, random vibration analysis, vibration suppression
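For orientation, here is a minimal compass-search sketch, the simplest member of the generalized pattern search family named in the keywords; the quadratic surrogate standing in for the TLCD response objective is a placeholder, not the paper's stochastic wind-load model:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal compass (pattern) search: poll +/- along each coordinate,
    accept the first improving move, otherwise halve the step size."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        for i in range(x.size):
            improved = False
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        else:
            step *= 0.5   # no poll direction improved: refine the mesh
    return x, fx

# Toy surrogate: RMS response as a smooth bowl in (tuning-frequency ratio,
# head-loss coefficient). Placeholder values only.
f = lambda p: (p[0] - 0.98) ** 2 + 0.5 * (p[1] - 2.0) ** 2
x_opt, f_opt = compass_search(f, [0.5, 0.5])
print("optimal (freq ratio, head loss):", np.round(x_opt, 3))
```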
Procedia PDF Downloads 274
28132 Crime Victim Support Services in Bangladesh: An Analysis
Authors: Mohammad Shahjahan, Md. Monoarul Haque
Abstract:
In this research, information and data were collected from both direct and indirect sources, and numerical, qualitative and participatory analysis methods were followed. There were two principal sources of information and data. The first was the data provided by the service recipients (300 women and children victims) at the Victim Support Centre and by the service-providing police officers, executives and staff (60 persons). The second was data collected from specialists, criminologists and sociologists involved in victim support services, through consultative interviews, KIIs, case studies, FGDs, etc. The initial data collection was completed with the help of questionnaires, structured according to strategic variations, and with the help of guidelines. The main objective of this research was to determine whether the services provided to victims, including facilities, treatment/medication and rehabilitation by different government and non-government organizations, were genuinely effective. At the same time, the socio-economic background and demographic characteristics of the victims were also revealed through this research. The results of the study show that although the number of victims has increased gradually due to socio-economic, political and cultural realities in Bangladesh, the number of victim support centers has not increased as expected. Awareness among the victims of the effectiveness of the 8 centers working in this field is also not up to the mark. Two-thirds of the victims coming for services were not at all aware of the victim support services before receiving them. Most of those who were finally able to come under the services of the Victim Support Centre through various means received shelter (15.5%), medical services (13.32%), counseling services (13.10%) and legal aid (12.66%). The opportunity to stay in secure custody and psycho-physical services were also notable. Usually, women and children from relatively poor and marginalized families come to the victim support center for services. Among the women, young unmarried women are the biggest victims of crime, and women and children employed as domestic workers are more affected. A number of serious negative impacts fall on the lives of the victims, among them deprivation of employment opportunities (26.62%), psycho-somatic disorders (20.27%), and sexually transmitted diseases (13.92%). It appears urgent to enact distinct legislation, increase the number of Victim Support Centers, expand the area and purview of services, and take initiatives to increase public awareness and create a mass movement.
Keywords: crime, victim, support, Bangladesh
Procedia PDF Downloads 88
28131 Real-Time Path Planning for Unmanned Air Vehicles Using Improved Rapidly-Exploring Random Tree and Iterative Trajectory Optimization
Authors: A. Ramalho, L. Romeiro, R. Ventura, A. Suleman
Abstract:
A real-time path planning framework for Unmanned Air Vehicles, and in particular multi-rotors, is proposed. The framework is designed to provide feasible trajectories from the current UAV position to a goal state, taking into account constraints such as obstacle avoidance, problem kinematics, and vehicle limitations such as maximum speed and maximum acceleration. The framework computes feasible paths online, allowing new, unknown, dynamic obstacles to be avoided without fully re-computing the trajectory. These features are achieved using an iterative process in which the robot computes and optimizes the trajectory while performing the mission objectives. A first trajectory is computed using a modified Rapidly-Exploring Random Tree (RRT) algorithm that provides trajectories respecting a maximum curvature constraint. The trajectory optimization is accomplished using the Interior Point Optimizer (IPOPT) as the solver. The framework has proven able to compute a trajectory and optimize it to a local optimum with computational efficiency, making it feasible for real-time operations.
Keywords: interior point optimization, multi-rotors, online path planning, rapidly exploring random trees, trajectory optimization
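A minimal 2D RRT sketch illustrating the basic tree-growing step follows; the modified algorithm in the paper additionally enforces a curvature limit and hands the result to IPOPT, neither of which is shown here, and the collision check is point-wise only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Circular obstacles (center, radius) in a 10 x 10 workspace; values assumed.
obstacles = [((5.0, 5.0), 1.5), ((8.0, 3.0), 1.0)]
start, goal = np.array([1.0, 1.0]), np.array([9.0, 9.0])
step, goal_tol = 0.5, 0.5

def collision_free(p):
    return all(np.linalg.norm(p - np.array(c)) > r for c, r in obstacles)

nodes, parents = [start], [0]
for _ in range(5000):
    # Goal-biased sampling: 10% of the time, steer toward the goal itself.
    sample = goal if rng.random() < 0.1 else rng.uniform(0, 10, 2)
    i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
    direction = sample - nodes[i]
    new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-12)
    if collision_free(new):
        nodes.append(new)
        parents.append(i)
        if np.linalg.norm(new - goal) < goal_tol:
            break

# Walk back from the last node to recover the waypoint path.
path, j = [], len(nodes) - 1
while j != 0:
    path.append(nodes[j])
    j = parents[j]
path.append(start)
print(f"tree size: {len(nodes)}, path waypoints: {len(path)}")
```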
Procedia PDF Downloads 134
28130 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches might estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is constructed so that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that might be aleatory, but for which sufficient data is not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval. This uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive, which is why it has a high probability of underestimating the output bounds. An optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is therefore necessary; this is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
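A sketch of the double-loop (nested) Monte Carlo propagation for a type (iii) distributional p-box follows; the model and intervals are toy stand-ins for the NASA challenge problem:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(a, e):
    # Stand-in for the challenge's computational model.
    return a ** 2 + np.sin(e) * a

# Aleatory input: normal, but with an epistemically uncertain mean in [0, 1]
# and standard deviation in [0.5, 1.5] (a distributional p-box, type iii).
n_outer, n_inner = 200, 2000
p95 = []
for _ in range(n_outer):                       # outer loop: epistemic sampling
    mu = rng.uniform(0.0, 1.0)
    sd = rng.uniform(0.5, 1.5)
    e = rng.uniform(-1.0, 1.0)                 # purely epistemic interval input
    a = rng.normal(mu, sd, n_inner)            # inner loop: aleatory sampling
    p95.append(np.percentile(model(a, e), 95))

# The spread of the response percentile across epistemic realizations gives
# the output bounds; a sparse outer loop tends to underestimate them, which
# motivates the percentile-based optimization alternative.
print(f"95th-percentile response bounds: [{min(p95):.3f}, {max(p95):.3f}]")
```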
Procedia PDF Downloads 238
28129 Investigated Optimization of Davidson Path Loss Model for Digital Terrestrial Television (DTTV) Propagation in Urban Area
Authors: Pitak Keawbunsong, Sathaporn Promwong
Abstract:
This paper presents an investigation of the efficiency of the optimized Davidson path loss model, in order to identify a suitable path loss model for designing and planning DTTV propagation for small and medium urban areas in southern Thailand. Hadyai City in Songkla Province is chosen as the case study for collecting the analytical data on the electric field strength. The optimization is conducted through the least squares method, while the efficiency index is the statistical value of the relative error (RE). The result of the least squares method is the offset and slope of the frequency term to be used in the optimization process. The statistical results show that the RE of the original Davidson model is the lowest when compared with the optimized Davidson and the Hata models. Thus, the original Davidson path loss model is the most accurate and becomes the best-optimized choice for planning the propagation network design.
Keywords: DTTV propagation, path loss model, Davidson model, least squares method
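The least squares step amounts to fitting an offset and slope against log-distance; a sketch with hypothetical field-strength-derived path loss values (not the Hadyai survey data) follows:

```python
import numpy as np

# Hypothetical drive-test data: distance (km) and measured path loss (dB).
d = np.array([1.0, 2.0, 3.5, 5.0, 7.5, 10.0, 15.0, 20.0])
pl = np.array([95.2, 104.8, 112.1, 117.0, 123.4, 127.9, 134.6, 139.0])

# Fit PL = offset + slope * log10(d) in the least squares sense; these two
# coefficients are what the optimization adjusts in the model.
A = np.column_stack([np.ones_like(d), np.log10(d)])
(offset, slope), *_ = np.linalg.lstsq(A, pl, rcond=None)

pred = A @ np.array([offset, slope])
re = np.abs(pred - pl) / pl * 100      # relative error (%), the paper's index
print(f"offset = {offset:.2f} dB, slope = {slope:.2f} dB/decade, "
      f"mean RE = {re.mean():.2f}%")
```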
Procedia PDF Downloads 337
28128 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation
Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes
Abstract:
The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under given boundary conditions and external forces. In the single-material case, each point of the discretized domain is represented by one of two values of a function: the value is 1 if the element belongs to the structure and 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation (power) function whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method suppresses intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one we explore here is the ordered SIMP method, which has the advantage of not being based on the addition of variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization
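A rough sketch of the single-variable, piecewise power-law interpolation idea behind ordered SIMP follows; the material breakpoints and moduli are illustrative, and the exact scaling and translation coefficients of the published method may differ from this simplified form:

```python
import numpy as np

def ordered_simp_stiffness(rho, penal=3.0):
    """Piecewise power-law interpolation of Young's modulus over a single
    ordered pseudo-density variable (void -> material A -> material B)."""
    breakpoints = np.array([0.0, 0.5, 1.0])   # density interval per material
    moduli = np.array([1e-9, 70e9, 200e9])    # void, aluminum-like, steel-like (Pa)
    E = np.empty_like(rho)
    for lo, hi, E_lo, E_hi in zip(breakpoints[:-1], breakpoints[1:],
                                  moduli[:-1], moduli[1:]):
        mask = (rho >= lo) & (rho <= hi)
        t = (rho[mask] - lo) / (hi - lo)      # rescaled density in the segment
        E[mask] = E_lo + (E_hi - E_lo) * t ** penal   # penalize intermediates
    return E

rho = np.linspace(0.0, 1.0, 5)
print(ordered_simp_stiffness(rho))
```

Note how one continuous variable per element selects among several materials, which is why the variable count stays independent of the number of materials.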
Procedia PDF Downloads 314
28127 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks
Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh
Abstract:
In this paper, a target parameter is estimated with the desired precision in a hierarchical wireless sensor network (WSN), while the proposed algorithm also tries to prolong the network lifetime as much as possible by using an efficient data collection algorithm. The target parameter distribution function is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data collection algorithm. The FC reconstructs the underlying phenomenon based on the collected data. Considering the aggregation level x, the goal is to provide the essential infrastructure to find the best value for the aggregation level, in order to prolong the network lifetime as much as possible while the desired accuracy is guaranteed (the required sample size depends entirely on the desired precision). First, the sample size calculation algorithm is discussed; second, the average queue length is determined based on an M/M[x]/1/K queue model and used for the energy consumption calculation. Nodes can decrease the transmission cost by aggregating incoming data. Furthermore, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
Keywords: aggregation, estimation, queuing, wireless sensor network
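A sketch of the queueing computation under the single-arrival simplification (M/M/1/K) follows; the paper's M/M[x]/1/K batch model changes the service process to absorb the aggregation level x, but the balance-equation approach is analogous:

```python
import numpy as np

def mm1k_mean_queue_length(lam, mu, K):
    """Steady-state mean number in an M/M/1/K system.

    Single-arrival simplification of the paper's M/M[x]/1/K model:
    p_n = (1 - rho) * rho^n / (1 - rho^(K+1)) for rho != 1.
    """
    rho = lam / mu
    n = np.arange(K + 1)
    if np.isclose(rho, 1.0):
        p = np.full(K + 1, 1.0 / (K + 1))
    else:
        p = (1 - rho) * rho ** n / (1 - rho ** (K + 1))
    return float(np.sum(n * p))

# Larger aggregation reduces transmissions (energy) but holds packets longer;
# the trade-off can be explored by sweeping the effective service rate.
for mu_eff in [1.2, 1.5, 2.0]:            # packets served per unit time (assumed)
    L = mm1k_mean_queue_length(lam=1.0, mu=mu_eff, K=20)
    print(f"mu = {mu_eff}: mean queue length = {L:.2f}")
```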
Procedia PDF Downloads 186
28126 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements
Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono
Abstract:
The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and the study of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, due to the relative motion between the markers and the bone. Since the accuracy of functional methods depends upon soft tissue artefact, its assessment is one of the main objectives in human movement analysis. Various studies have measured the movement of soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.
Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement
Procedia PDF Downloads 278
28125 Investigation on Remote Sense Surface Latent Heat Temperature Associated with Pre-Seismic Activities in Indian Region
Authors: Vijay S. Katta, Vinod Kushwah, Rudraksh Tiwari, Mulayam Singh Gaur, Priti Dimri, Ashok Kumar Sharma
Abstract:
Seismic activity results from abrupt slip on faults and tectonic plate movements driven by stress accumulated in the Earth's crust, and its prediction is a very challenging task. We have studied the changes in surface latent heat temperature (SLHT) observed prior to significant earthquakes, which could be considered for short-term earthquake prediction. We analyzed the SLHT variation for an inland earthquake that occurred in Chamba, Himachal Pradesh (32.5 N, 76.1 E, M 4.5, depth 5 km) near the main boundary fault region; the SLHT data have been taken from the National Center for Environmental Prediction (NCEP). In this analysis, we calculated the daily variation of SLHT (°C) over a 1°x1° (~120 km) area with the pixel covering the epicenter of the earthquake at the center, for a period of three months before and after the seismic activity. The mean value during that period has been considered in order to account for the seasonal effect: the monthly mean is subtracted from each daily value to study the anomalous behavior (∆SLHT) of SLHT during earthquakes. The results show that the SLHTs adjacent to the epicenters all reach anomalously high values 3-5 days before the seismic activities. The abundant surface water and groundwater in the epicentral region and its surroundings can provide the necessary conditions for the change of SLHT. To further confirm the reliability of the SLHT anomaly, it is necessary to explore its physical mechanism in depth through more earthquake cases.
Keywords: surface latent heat temperature, satellite data, earthquake, magnetic storm
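The anomaly computation itself is simple: daily value minus the monthly mean, then a threshold test. A sketch on a synthetic daily series (the NCEP values are not reproduced, and the injected spike is hypothetical) follows:

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily SLHT series for the 1x1 degree pixel over the epicenter.
days = pd.date_range("2021-01-01", "2021-03-31", freq="D")
slht = pd.Series(20 + 5 * rng.standard_normal(len(days)), index=days)
slht.loc["2021-02-20":"2021-02-22"] += 15   # injected pre-seismic anomaly

# Anomaly = daily value minus that month's mean, removing the seasonal effect.
monthly_mean = slht.groupby(slht.index.month).transform("mean")
anomaly = slht - monthly_mean

# Flag days exceeding mean + 2*std as anomalously high SLHT.
threshold = anomaly.mean() + 2 * anomaly.std()
print(anomaly[anomaly > threshold])
```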
Procedia PDF Downloads 131
28124 Efficient Chiller Plant Control Using Modern Reinforcement Learning
Authors: Jingwei Du
Abstract:
The need to optimize air conditioning systems in existing buildings calls for control methods designed with energy efficiency as a primary goal. The majority of current control methods fall into two categories, empirical and model-based: to be effective, the former relies heavily on engineering expertise, and the latter requires extensive historical data. Reinforcement Learning (RL), on the other hand, is a model-free approach that explores the environment to obtain an optimal control strategy, often referred to as a "policy". This research adopts Proximal Policy Optimization (PPO) to improve chiller plant control and enable the RL agent to collaborate with experienced engineers. It exploits the fact that, while the industry lacks historical data, abundant operational data is available, allowing the agent to learn and evolve safely under human supervision. Thanks to the development of language models, renewed interest in RL has led to modern, online, policy-based RL algorithms such as PPO. This research took inspiration from "alignment", a process that uses human feedback to fine-tune a pretrained model to suppress unsafe outputs. The methodology can be summarized in three steps. First, an initial policy model is generated based on minimal prior knowledge. Next, the prepared PPO agent is deployed so that feedback from both the critic model and human experts can be collected for future fine-tuning. Finally, the agent learns and adapts itself to the specific chiller plant, updates the policy model, and is ready for the next iteration. Besides the proposed approach, this study also used traditional RL methods to optimize the same simulated chiller plants for comparison; the comparison shows that the proposed method is both safe and effective, and needs little to no historical data to start up.
Keywords: chiller plant, control methods, energy efficiency, proximal policy optimization, reinforcement learning
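At the core of PPO is the clipped surrogate objective; a minimal sketch of that loss on a toy batch follows (the chiller-plant state/action encoding and network are not shown):

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective at the heart of PPO (to be minimized).

    logp_new/logp_old: log-probabilities of the taken actions under the
    current and the data-collecting policy; advantages: advantage estimates.
    """
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    # Taking the elementwise minimum keeps each policy update conservative,
    # which is what makes online learning on a live plant tolerable.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy batch: clipping caps the contribution of the third sample, whose
# probability ratio (~1.65) exceeds 1 + eps.
logp_old = np.array([-1.0, -0.5, -2.0])
logp_new = np.array([-0.9, -0.6, -1.5])
adv = np.array([1.0, -0.5, 2.0])
print(f"PPO loss: {ppo_clip_loss(logp_new, logp_old, adv):.4f}")
```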
Procedia PDF Downloads 26
28123 Determining the Collaboration and Challenges of Public Employment Service with Stakeholders, Employers and Job Seekers: In Case of Amhara National Regional State, Ethiopia
Authors: Redie Bezabih Hailu
Abstract:
Unemployment is a problem of nations that needs continuous research. This study aimed to determine the collaborations and challenges of the public employment service (PES), with special emphasis on stakeholders, employers and job seekers. The researcher used a pragmatic philosophy, an exploratory design and an inductive approach to collect data from the respondents, using interview and focus group discussion techniques. The PES provides job market information, vocational counseling, and training. Since the PES is not adequately equipped with manpower, budget, or modern technologies, it provides less than adequate services to employers and job seekers. Matching job seekers with job vacancies is a major challenge for the center, as is its reliance on a paper-based data management system. There is also a large number of job seekers against a very limited number of vacancies, so service provision is poor. The center collaborates with AFE, AYA, BoTVED, BoWCY, and CETU. The major challenges of this collaboration are the absence of operational guidelines to evaluate effectiveness and performance, the lottery method of selecting candidates for vacancies, and nepotism or favoritism toward job seekers. On the other hand, the COVID-19 pandemic, the inability to find skilled labor, the absence of standardized payment, job seekers' expectations, low educational quality and mass graduation are further challenges for employment services. The study recommends quality education and training, an operational guideline for collaboration, and a technology-based labor market information system, and suggests further studies on the quality of the PES.
Keywords: public employment service, collaborations, stakeholders, employers, job seekers
Procedia PDF Downloads 47
28122 Statistical Optimization and Production of Rhamnolipid by P. aeruginosa PAO1 Using Prickly Pear Peel as a Carbon Source
Authors: Mostafa M. Abo Elsoud, Heba I. Elkhouly, Nagwa M. Sidkey
Abstract:
Production of rhamnolipids by Pseudomonas aeruginosa has attracted growing interest during the last few decades due to its high productivity compared with other microorganisms. In the current work, rhamnolipid production by P. aeruginosa PAO1 was statistically modeled using a Taguchi orthogonal array, numerically optimized and validated. Prickly pear peel (Opuntia ficus-indica) was used as the carbon source for rhamnolipid production. Finally, the optimum conditions for rhamnolipid production were applied in 5 L working-volume bioreactors at different aeration and agitation rates and controlled pH for maximum rhamnolipid production. In addition, kinetic studies of rhamnolipid production are reported. At the end of the batch bioreactor optimization process, rhamnolipid production by P. aeruginosa PAO1 reached worldwide levels and can be applied to its industrial production.
Keywords: rhamnolipids, Pseudomonas aeruginosa, statistical optimization, Taguchi, Opuntia ficus-indica
Procedia PDF Downloads 177
28121 Financial Portfolio Optimization in Turkish Electricity Market via Value at Risk
Authors: F. Gökgöz, M. E. Atmaca
Abstract:
Electricity has an indispensable role in human daily life, technological development and the economy. It is a special product or service that must be generated and consumed instantaneously. The world's resources are limited, so their effective and efficient use is very important, not only for human life and the environment but also for technological and economic development. A competitive electricity market is one important means of providing a suitable platform for the effective and efficient use of electricity. Besides its benefits, it brings along risks that should be carefully managed by a market player such as an electricity generation company. Risk management is an essential part of market players' decision making. In this paper, risk management through diversification is applied, with the help of Value at Risk methods, to case studies. The performance of optimal electricity sale solutions is measured, and the portfolio performance is evaluated via the Sharpe ratio and compared with the conventional approach. Two years of historical electricity price data from the Turkish Day-Ahead Market are used to demonstrate the approach.
Keywords: electricity market, portfolio optimization, risk management, value at risk
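A sketch of the two evaluation quantities on hypothetical daily return series follows (the study's Turkish Day-Ahead Market data are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical daily portfolio returns under two sale strategies: the
# diversified one trades some upside for lower volatility.
conventional = rng.normal(0.0005, 0.020, 500)
diversified  = rng.normal(0.0005, 0.012, 500)

def historical_var(returns, alpha=0.95):
    """Loss not exceeded with probability alpha (historical simulation)."""
    return -np.percentile(returns, 100 * (1 - alpha))

def sharpe_ratio(returns, rf=0.0):
    """Excess return per unit of volatility."""
    return (returns.mean() - rf) / returns.std(ddof=1)

for name, r in [("conventional", conventional), ("diversified", diversified)]:
    print(f"{name:>12}: 95% VaR = {historical_var(r):.4f}, "
          f"Sharpe = {sharpe_ratio(r):.4f}")
```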
Procedia PDF Downloads 311
28120 A Modified Nonlinear Conjugate Gradient Algorithm for Large Scale Unconstrained Optimization Problems
Authors: Tsegay Giday Woldu, Haibin Zhang, Xin Zhang, Yemane Hailu Fissuh
Abstract:
It is well known that the nonlinear conjugate gradient method is one of the most widely used first-order methods for solving large-scale unconstrained smooth optimization problems. Because of their low memory requirements, attractive theoretical features, practical computational efficiency and nice convergence properties, nonlinear conjugate gradient methods have a special role in solving large-scale unconstrained optimization problems, which have important applications in the practical and scientific world. However, nonlinear conjugate gradient methods have restricted information about the curvature of the objective function, and they are likely to be less efficient and robust than some second-order algorithms. To overcome these drawbacks, a new modified nonlinear conjugate gradient method is presented. The notable features of our work are that the new search direction possesses the sufficient descent property independent of any line search and that it belongs to a trust region. Under mild assumptions and the standard Wolfe line search technique, the global convergence of the proposed algorithm is established. Furthermore, to test the practical computational performance of the new algorithm, numerical experiments are provided on a set of large-dimensional unconstrained problems. The numerical results show that the proposed algorithm is efficient and robust compared with other similar algorithms.
Keywords: conjugate gradient method, global convergence, large scale optimization, sufficient descent property
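For orientation, a baseline Fletcher-Reeves nonlinear conjugate gradient loop is sketched below; the paper's modified direction update and Wolfe line search are not reproduced here, an Armijo backtracking step and a simple restart rule are used instead:

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2 * (1 - x[:-1])
    g[1:] += 200 * (x[1:] - x[:-1] ** 2)
    return g

def cg_fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=5000):
    """Classic Fletcher-Reeves nonlinear CG with Armijo backtracking."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                                 # Armijo backtracking line search
        while t > 1e-12 and f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0:                      # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

x_star = cg_fletcher_reeves(rosenbrock, rosenbrock_grad, np.full(100, -1.0))
print("final objective:", rosenbrock(x_star))
```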
Procedia PDF Downloads 203
28119 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization
Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed
Abstract:
Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Hadj-Sahraoui et al., 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of the interferograms. We develop a deep-learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR approach with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
Keywords: CNN-SAR, Lee filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction
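A minimal implementation of the classic Lee filter on a synthetic speckled image follows; the CNN component and any InSAR-specific handling are outside this sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window=7, noise_var=None):
    """Classic Lee speckle filter on an intensity image (2D array).

    Weights each pixel between the local mean and its raw value according to
    how much of the local variance is signal rather than speckle noise.
    """
    local_mean = uniform_filter(img, window)
    local_sq_mean = uniform_filter(img ** 2, window)
    local_var = local_sq_mean - local_mean ** 2
    if noise_var is None:
        noise_var = np.mean(local_var)        # crude global noise estimate
    signal_var = np.maximum(local_var - noise_var, 0.0)
    weight = signal_var / (signal_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)

# Synthetic speckled scene: multiplicative gamma noise over two-level reflectivity.
rng = np.random.default_rng(6)
truth = np.ones((128, 128))
truth[32:96, 32:96] = 4.0
speckled = truth * rng.gamma(shape=4.0, scale=0.25, size=truth.shape)
filtered = lee_filter(speckled)
print("variance before/after:", speckled.var().round(3), filtered.var().round(3))
```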
Procedia PDF Downloads 6
28118 Optimization of Roster Construction in Sports
Authors: Elijah Cavan
Abstract:
In the major leagues (MLB, NBA, NHL, NFL), it is the Front Office Staff (FOS) who decide who plays for their respective team. The FOS bear the responsibility for acquiring players through the draft, trades and free-agency signings, typically while contending with maximum roster salary constraints. The players themselves are volatile assets of these teams: their value fluctuates with age and performance. Viewing players as assets invites a simple comparison: the problem resembles that of optimizing an investment portfolio, where the goal is to maximize periodic returns while tolerating a fixed risk (degree of uncertainty / potential loss). Each franchise may value assets differently, and some may tolerate only lower risk levels; these are examples of factors that introduce additional constraints into the model. In this talk, we detail the mathematical formulation of this problem as a constrained optimization problem, which can be solved with classical machine learning methods but is also well posed as a problem to be solved on quantum computers.
Keywords: optimization, financial mathematics, sports analytics, simulated annealing
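In keeping with the keywords, here is a simulated-annealing sketch of roster selection under a salary cap; the player values, salaries and penalty form are hypothetical, not the talk's actual model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical player pool: projected value (wins added) and salary ($M),
# with salary loosely tracking value plus noise.
n_pool, roster_size, salary_cap = 60, 15, 90.0
value = rng.uniform(0.2, 6.0, n_pool)
salary = 1.0 + 2.5 * value + rng.uniform(-1.5, 3.0, n_pool)

def objective(idx):
    # Maximize roster value; the salary cap enters as a large penalty,
    # a simple stand-in for the fixed-risk constraints in the talk.
    penalty = 1e3 * max(0.0, salary[idx].sum() - salary_cap)
    return value[idx].sum() - penalty

current = rng.choice(n_pool, roster_size, replace=False)
best, best_val = current.copy(), objective(current)
T = 5.0
for step in range(20000):
    # Swap move: replace one rostered player with one from outside the roster.
    cand = current.copy()
    cand[rng.integers(roster_size)] = rng.choice(
        np.setdiff1d(np.arange(n_pool), current))
    delta = objective(cand) - objective(current)
    if delta > 0 or rng.random() < np.exp(delta / T):
        current = cand
    if objective(current) > best_val:
        best, best_val = current.copy(), objective(current)
    T *= 0.9997                               # geometric cooling schedule

print(f"best roster value: {value[best].sum():.2f} wins, "
      f"payroll: ${salary[best].sum():.1f}M")
```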
Procedia PDF Downloads 120
28117 Optimal Trailing Edge Flap Positions of Helicopter Rotor for Various Thrust Coefficient to Solidity (Ct/σ) Ratios
Authors: K. K. Saijaand, K. Prabhakaran Nair
Abstract:
This study aims to determine the change in the optimal locations of dual trailing-edge flaps for various thrust coefficient to solidity (Ct/σ) ratios of a helicopter, in order to achieve minimum hub vibration levels with a low penalty in terms of required trailing-edge flap control power. Polynomial response functions are used to approximate the hub vibration and flap power objective functions. Single-objective and multi-objective optimizations are carried out with the objective of minimizing hub vibration and flap power. The optimization results show that, for minimizing hub vibration, the inboard flap location moves farther from the baseline value at low Ct/σ ratios and towards the root of the blade at high Ct/σ ratios.
Keywords: helicopter rotor, trailing-edge flap, thrust coefficient to solidity (Ct/σ) ratio, optimization
Procedia PDF Downloads 474
28116 A Shift in the Structure of Economy and Synergy of University: Developing Potential Through Research and Development Center of SMEs in Jember
Authors: Muhamad Nugraha
Abstract:
Economic growth always correlates positively with the magnitude of the unemployment rate, because labor is one of the important variables sustaining growth in the real sector of the region. Meanwhile, the economic structure in the district of Jember shows that economic activity has begun to shift towards the industrial sector and some other economic sectors, which gives policy makers grounds to consider how to increase economic growth in Jember as an autonomous region in East Java Province. In fact, SMEs are among the factors driving economic growth in the region, as shown by their large number. However, employment in the sector has grown slowly, caused by a lack of productivity in SMEs. Through an analysis based on the theory of economic structural transformation and the Triple Helix theory, using the descriptive analytical methods of Location Quotient and Shift-Share, it is found that the economic structure in Jember is slowly shifting from the agricultural sector to the industrial sector, although it is dominated by the trade, hotel and restaurant sector. In addition, SMEs are a potential sector of economic growth in Jember. To maximize the role and functions of the institution's Research and Development Center of SMEs, three points must be understood: the Business Landscape, the Business Architecture and the Value Added.
Keywords: economic growth, SMEs, labor, Research and Development Center of SMEs
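A sketch of the Location Quotient computation used to flag potential sectors follows; the sector figures are placeholders, not the Jember data from the study:

```python
# Location Quotient: the region's employment (or output) share in a sector
# relative to the national share; LQ > 1 marks a potential base sector.
regional = {"agriculture": 420.0, "industry": 180.0, "trade_hotel_rest": 310.0}
national = {"agriculture": 35000.0, "industry": 42000.0, "trade_hotel_rest": 28000.0}

reg_total = sum(regional.values())
nat_total = sum(national.values())

for sector in regional:
    lq = (regional[sector] / reg_total) / (national[sector] / nat_total)
    tag = "potential/base sector" if lq > 1 else "non-base sector"
    print(f"{sector:>18}: LQ = {lq:.2f} ({tag})")
```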
Procedia PDF Downloads 443