Search results for: statistical optimization
1874 MHD Boundary Layer Flow of a Nanofluid Past a Wedge Shaped Wick in Heat Pipe
Authors: Ziya Uddin
Abstract:
This paper deals with the theoretical and numerical investigation of magnetohydrodynamic boundary layer flow of a nanofluid past a wedge-shaped wick in a heat pipe used for cooling electronic components and different types of machines. To incorporate the effects of nanoparticle diameter, nanoparticle concentration in the base fluid, the nano-thermal layer formed around each nanoparticle, and the Brownian motion of nanoparticles, appropriate models are used for the effective thermal and physical properties of the nanofluid. Microfluidics theory is used to model the rotation of nanoparticles inside the base fluid. Ethylene glycol (EG) based nanofluids are considered in this investigation. The non-linear equations governing the flow and heat transfer are solved using a very effective particle swarm optimization technique along with the Runge-Kutta method. The values of the heat transfer coefficient are found for the different parameters involved in the formulation, namely nanoparticle concentration, nanoparticle size, magnetic field, and wedge angle. It is found that the wedge angle, the presence of a magnetic field, the nanoparticle size, and the nanoparticle concentration all have prominent effects on the fluid flow and heat transfer characteristics of the considered configuration.
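To make the solution strategy concrete, the sketch below shows the PSO-plus-Runge-Kutta shooting idea on the classical Blasius-type equation f''' + f f'' = 0, which stands in for the paper's coupled nanofluid equations; the swarm settings and the simplified ODE are illustrative assumptions, not the authors' model.

```python
# Minimal PSO + Runge-Kutta "shooting" sketch for a boundary-layer ODE.
# The ODE f''' + f f'' = 0 with f(0)=f'(0)=0, f'(inf)=1 stands in for the
# paper's coupled nanofluid equations; swarm settings are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def residual(fpp0, eta_max=10.0):
    """Integrate with a guessed f''(0) and return |f'(eta_max) - 1|."""
    def rhs(eta, y):               # y = [f, f', f'']
        return [y[1], y[2], -y[0] * y[2]]
    sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, fpp0], rtol=1e-8)
    return abs(sol.y[1, -1] - 1.0)

def pso(obj, lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, n_particles)      # positions = guesses of f''(0)
    v = np.zeros(n_particles)                 # velocities
    pbest, pval = x.copy(), np.array([obj(xi) for xi in x])
    gbest = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(xi) for xi in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()]
    return gbest

print("estimated f''(0):", pso(residual, 0.1, 1.0))   # ~0.4696 for this normalization
```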
Keywords: Heat transfer, Heat pipe, numerical modeling, nanofluid applications, particle swarm optimization, wedge shaped wick.
1873 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology
Authors: Joseph C. Chen, Venkata Karthik Jakka
Abstract:
The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi Parameter Design. An L16 orthogonal array (OA) is used in the Taguchi experimental design with five control factors at four levels each, with vibration as a non-controllable (noise) factor. A total of 32 experiments were designed to obtain the optimal parameter setting for the process. The optimal parameters identified for shrinkage are shot volume, 1.7 cubic inch (A4); mold temperature, 130 ºF (B1); hold pressure, 3200 psi (C4); injection speed, 0.61 inch3/sec (D2); and hold time, 14 seconds (E2). The optimal parameters identified for tensile strength are shot volume, 1.7 cubic inch (A4); mold temperature, 160 ºF (B4); hold pressure, 3100 psi (C3); injection speed, 0.69 inch3/sec (D4); and hold time, 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects in the injection molding process have been further reduced.
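For readers unfamiliar with Taguchi Parameter Design, the short sketch below computes the usual signal-to-noise ratios: smaller-the-better for shrinkage and larger-the-better for tensile strength; the replicate values are hypothetical placeholders, not the study's measurements.

```python
# Taguchi signal-to-noise (S/N) ratios: smaller-the-better for shrinkage,
# larger-the-better for tensile strength. Replicate values are placeholders.
import numpy as np

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

shrinkage_reps = [0.0032, 0.0030, 0.0031]   # % shrinkage, hypothetical replicates
tensile_reps = [3120.0, 3155.0, 3148.0]     # psi, hypothetical replicates
print("S/N shrinkage:", sn_smaller_the_better(shrinkage_reps))
print("S/N tensile  :", sn_larger_the_better(tensile_reps))
```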
Keywords: Injection molding processes, Taguchi Parameter Design, tensile strength, shrinkage test, high-density polyethylene, HDPE.
1872 Optimal Current Control of Externally Excited Synchronous Machines in Automotive Traction Drive Applications
Authors: Oliver Haala, Bernhard Wagner, Maximilian Hofmann, Martin Marz
Abstract:
The excellent suitability of the externally excited synchronous machine (EESM) for automotive traction drive applications stems from its high efficiency over the whole operating range and the high availability of its materials. Usually, maximum efficiency is obtained by modelling each individual loss and minimizing the sum of all losses. As a result, the quality of the optimization depends strongly on the precision of the model, and it requires accurate knowledge of the saturation-dependent machine inductances. Therefore, the present contribution proposes a method to minimize the overall losses of a salient-pole EESM and its inverter in steady-state operation based on measurement data only. Since this method does not require any manufacturer data, it is well suited for automated measurement data evaluation and inverter parametrization. The field oriented control (FOC) of an EESM provides three current components, i.e., three degrees of freedom (DOF). An analytic minimization of the copper losses in the stator and the rotor (assuming constant inductances) is performed and serves as a first approximation of how to choose the optimal current reference values. After a numeric offline minimization of the overall losses based on measurement data, the results are compared to a control strategy that satisfies cos(ϕ) = 1.
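A minimal sketch of the first-approximation step described above: minimizing stator and rotor copper losses over the three current degrees of freedom (id, iq, if) under a torque constraint with constant inductances. All machine parameters below are invented placeholders, not the measurement data used in the paper.

```python
# First-approximation copper-loss minimization for a salient-pole EESM:
# minimize P_cu = Rs*(id^2 + iq^2) + Rf*if^2 over the three current DOF,
# subject to a torque constraint with constant inductances.
# All machine parameters below are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

Rs, Rf = 0.01, 0.5                    # stator / field resistance [ohm]
Ld, Lq, Lm = 0.3e-3, 0.2e-3, 30e-3    # inductances [H]
p = 4                                 # pole pairs
T_ref = 150.0                         # requested torque [Nm]

def copper_losses(x):
    i_d, i_q, i_f = x
    return Rs * (i_d**2 + i_q**2) + Rf * i_f**2

def torque(x):
    i_d, i_q, i_f = x
    return 1.5 * p * (Lm * i_f * i_q + (Ld - Lq) * i_d * i_q)

con = NonlinearConstraint(torque, T_ref, T_ref)
res = minimize(copper_losses, x0=[0.0, 100.0, 5.0], constraints=[con])
print("optimal (id, iq, if):", res.x, " copper losses [W]:", res.fun)
```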
Keywords: Current control, efficiency, externally excited synchronous machine, optimization.
1871 Virtual Container Yard: Assessing the Perceived Impact of Legal Implications to Container Carriers
Authors: L. Edirisinghe, P. Mukherjee, H. Edirisinghe
Abstract:
The Virtual Container Yard (VCY) is a modern concept that helps to reduce the empty-container repositioning cost of carriers. The concept of the VCY is based on container interchange between shipping lines. Although this mechanism has been theoretically accepted by the shipping community as a feasible solution, it has not yet achieved the necessary momentum among container shipping lines (CSL). This paper investigates whether there is any legal influence on this industry myopia about the VCY. It is believed that this is the first publication that focuses on the legal aspects of container exchange between carriers, and little literature on the subject is available. This study establishes with statistical evidence that a phobia prevails in the shipping industry that exchanging containers with other carriers may lead to various legal implications. The complexity of the exchange is two-faceted: CSLs assume that offering a container to another carrier (obviously a competitor in the commercial context) or using a container offered by another carrier may lead to undue legal implications. This research reveals that this fear is reflected through four types of perceived components, namely shipping associate, warehouse associate, network associate, and trading associate. These components carry eighteen subcomponents that comprehensively cover the entire process of a container shipment. The statistical explanation is supported through regression analysis; Incoterms were used to illustrate the shipping process.
Keywords: Container, legal, shipping, virtual.
1870 An ACO Based Algorithm for Distribution Networks Including Dispersed Generations
Authors: B. Bahmani Firouzi, T. Niknam, M. Nayeripour
Abstract:
With the power system moving toward restructuring, and with factors such as environmental pollution, the difficulties of transmission expansion, and advances in the construction technology of small generation units, it is expected that small units such as wind turbines, fuel cells, and photovoltaics, which mostly connect to distribution networks, will play an essential role in the electric power industry. With the growing use of small generation units, the management of distribution networks should be reviewed. The aim of this paper is to present a new method for the optimal management of active and reactive power in distribution networks, with regard to the costs of the various types of dispersed generation, capacitors, and the cost of electric energy drawn from the network. In other words, the method endeavors to select optimal sources of active and reactive power generation and control equipment, such as dispersed generators, capacitors, under-load tap-changer transformers, and substations, so that, first, the related costs are minimized and, second, the technical and physical constraints are respected. Because the optimal management of distribution networks is an optimization problem with continuous and discrete variables, a new evolutionary method based on the Ant Colony Algorithm has been applied. The method was tested on two cases containing 23 and 34 buses, and the simulation results are shown in later sections.
Keywords: Distributed generation, optimal operation management of distribution networks, ant colony optimization (ACO).
1869 Multi Objective Simultaneous Assembly Line Balancing and Buffer Sizing
Authors: Saif Ullah, Guan Zailin, Xu Xianhao, He Zongdong, Wang Baoxi
Abstract:
The assembly line balancing problem aims to divide the tasks among the stations of an assembly line and to optimize some objectives. In assembly lines, the workload differs from station to station because of the different task times, and the resulting imbalance between stations can cause blockage or starvation at some stations. Buffers are used to store semi-finished parts between stations and can help to smooth assembly production. Both line balancing and buffer sizing affect the throughput of an assembly line. The two problems have been studied separately in the literature; because of their joint contribution to the throughput rate of assembly lines, they should be studied simultaneously, and they are therefore considered concurrently in the current research. This research aims to simultaneously maximize throughput, minimize the total buffer size in the assembly line, and minimize workload variations. A multi-objective optimization formulation is designed that can yield better Pareto solutions from the Pareto front, and a simple example problem is solved for assembly line balancing and buffer sizing simultaneously. This work is relevant to assembly line balancing research and may motivate optimization approaches for this multi-objective problem in the future.
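The sketch below illustrates one building block of such a multi-objective approach: filtering a set of candidate line configurations down to the non-dominated (Pareto) set for the three stated objectives; the candidate solutions are random placeholders.

```python
# Extract the Pareto (non-dominated) set for three objectives:
# maximize throughput, minimize total buffer size, minimize workload variation.
# Candidate line configurations are random placeholders.
import numpy as np

def pareto_front(F):
    """F: (n, m) array of objectives, all expressed as 'minimize'. Returns a boolean mask."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(1)
throughput = rng.uniform(50, 80, 100)       # parts/hour
buffer_size = rng.integers(5, 40, 100)      # total buffer slots
workload_var = rng.uniform(0.1, 2.0, 100)   # workload variance across stations
F = np.column_stack([-throughput, buffer_size, workload_var])   # negate to minimize
front = pareto_front(F)
print("non-dominated configurations:", np.flatnonzero(front))
```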
Keywords: Assembly line balancing, Buffer sizing, Pareto solutions.
1868 Optimization of Energy Conservation Potential for VAV Air Conditioning System using Fuzzy based Genetic Algorithm
Authors: R. Parameshwaran, R. Karunakaran, S. Iniyan, Anand A. Samuel
Abstract:
The objective of this study is to present the test results of a variable air volume (VAV) air conditioning system optimized by a two-objective genetic algorithm (GA). The objective functions are energy savings and thermal comfort. The optimal set points for the fuzzy logic controller (FLC) are the supply air temperature (Ts), the supply duct static pressure (Ps), the chilled water temperature (Tw), and the zone temperature (Tz), which are taken as the problem variables. The supply airflow rate and chilled water flow rate are considered as constraints. The optimal set point values obtained from the GA process are assigned to the fuzzy logic controller (FLC) in order to conserve energy and maintain thermal comfort in a real-time VAV air conditioning system. A VAV air conditioning system with an FLC installed in a software laboratory was used for the energy analysis. The total energy saving of the GA-optimized VAV system with FLC, compared with a constant air volume (CAV) system, is expected to reach 31.5%. The optimal duct static pressure obtained through the genetic-fuzzy methodology leads to better air distribution by delivering the optimal quantity of supply air to the conditioned space. This combination enhances the advantages of uniform air distribution, thermal comfort, and improved energy savings potential.
Keywords: Energy savings, fuzzy logic, genetic algorithm, thermal comfort.
1867 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems
Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano
Abstract:
The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification processes. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to monitor the wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), a Long Short Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrate that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
Keywords: EIoT, machine learning, anomaly detection, environment monitoring.
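The sketch below illustrates the LSTM-autoencoder idea used here: reconstruct sensor windows and flag those with unusually high reconstruction error. The architecture, threshold rule, and synthetic signal are illustrative assumptions, not the Gramb-project pipeline.

```python
# LSTM autoencoder for anomaly detection via reconstruction error.
# A synthetic one-feature sensor signal stands in for the Gramb meter data;
# the architecture and the 3-sigma threshold are illustrative assumptions.
import numpy as np
from tensorflow.keras import Sequential, layers

timesteps, n_features = 30, 1
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
X = np.stack([signal[i:i + timesteps] for i in range(len(signal) - timesteps)])
X = X[..., None]                                   # (samples, timesteps, features)

model = Sequential([
    layers.LSTM(32, activation="tanh", input_shape=(timesteps, n_features)),
    layers.RepeatVector(timesteps),
    layers.LSTM(32, activation="tanh", return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, X, epochs=20, batch_size=32, verbose=0)

recon_err = np.mean((model.predict(X, verbose=0) - X) ** 2, axis=(1, 2))
threshold = recon_err.mean() + 3 * recon_err.std()   # flag windows above 3 sigma
print("anomalous windows:", np.flatnonzero(recon_err > threshold))
```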
1866 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation, and adversarial relationships. It has been slower than other industries to adopt the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising management tool for construction operations that improves the performance of construction projects in terms of cost, time, and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions, and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable share of the CSC modeling research consists of conceptual or process models which present general management frameworks and do not relate to acknowledged soft Operations Research methods. We particularly focus on the model-based quantitative research and categorize the CSCM models according to their scope, objectives, modeling approach, solution methods, and software used. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming, and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude by giving future research directions and emphasizing the need to develop optimization models for integrated CSCM. We stress that CSC modeling needs a multi-dimensional, system-wide, and long-term perspective. Finally, prior applications of SCM in other industries have to be taken into account when modeling CSCs, but not without translating the generic concepts to the context of the construction industry.
Keywords: Construction supply chain management, modeling, operations research, optimization and simulation.
1865 Optimization of Process Parameters for Friction Stir Welding of Cast Alloy AA7075 by Taguchi Method
Authors: Dhairya Partap Sing, Vikram Singh, Sudhir Kumar
Abstract:
This investigation proposes the friction stir welding technique as a solution to fusion welding problems. The objectives are the fabrication of an AA7075-10 wt.% silicon carbide (SiC) aluminum metal matrix composite and the optimization of the process parameters for friction stir welding of the AA7075-10 wt.% SiC composite. The composites were prepared by the mechanical stir casting process. Experiments were performed with four process parameters, namely tool rotational speed, weld speed, axial force, and tool geometry, each at three levels. The quality characteristic considered is joint efficiency (JE). The welding experiments were conducted using an L27 orthogonal array; the orthogonal array and design of experiments were used to identify the welding parameters that give optimal JE. Joints fabricated at a rotational speed of 1500 rpm, a welding speed of 1.3 mm/sec, an axial force of 7 kN, and a square tool geometry gave the best results. The experimental results reveal that tool rotation speed, welding speed, and axial force are the significant process parameters affecting the welding performance. The predicted optimal value of JE is 95.621%. Confirmation tests have also been done to verify the results.
Keywords: Metal matrix composite, axial force, joint efficiency, rotational speed, traverse speed, tool geometry.
1864 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents
Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi
Abstract:
In this paper, we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct CoMSIA QSAR modeling. The Partial Least-Squares (PLS) method was used for the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into test and training sets and were evaluated with various CoMSIA parameters to obtain the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was used for the study, and the best-fit model was obtained using the donor, partition coefficient, and steric parameters. The CoMSIA models demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR models is consistent with the observation that more than half of the binding site area is occupied by steric regions.
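A minimal sketch of the PLS/cross-validation workflow behind the reported r2 and q2 values, using random descriptors in place of the actual CoMSIA field values for the 46 compounds.

```python
# PLS regression with leave-one-out cross-validation, reporting r2 (fitted)
# and q2 (cross-validated) as in CoMSIA QSAR modelling.
# Random descriptors stand in for the actual CoMSIA field values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
X = rng.standard_normal((46, 200))                          # 46 compounds x field descriptors
y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(46)    # synthetic activity values

pls = PLSRegression(n_components=5)     # optimum component count chosen by CV in practice
pls.fit(X, y)
r2 = r2_score(y, pls.predict(X).ravel())

y_loo = cross_val_predict(PLSRegression(n_components=5), X, y, cv=LeaveOneOut())
q2 = r2_score(y, y_loo.ravel())
print(f"r2 = {r2:.3f}, q2 = {q2:.3f}")
```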
Keywords: 3D QSAR, CoMSIA, Triazoles.
1863 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank
Authors: Jalal Haghighat Monfared, Zahra Akbari
Abstract:
Today, business executives need useful information to make better decisions. Banks have also been using information tools to direct the decision-making process towards their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's general departments (190 people) who use business intelligence reports. A sample of 123 was determined randomly by statistical methods. Relevant statistical inference was used for data analysis and hypothesis testing. In the first stage, the normality of the data was examined using the Kolmogorov-Smirnov test; in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, correlation with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility, in terms of the ability to produce custom-formatted reports. After flexibility, the quality of data in the business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving data quality, including the internal or external source of the data, the quantitative or qualitative type of the data, the credibility of the data, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
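The two analysis steps named above can be illustrated as follows, with simulated questionnaire scores standing in for the survey data: a Kolmogorov-Smirnov normality check followed by a Pearson correlation between the capability and decision-quality scores.

```python
# Kolmogorov-Smirnov normality check followed by Pearson correlation,
# mirroring two analysis steps in the abstract. Scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
bi_capability = rng.normal(3.8, 0.6, 123)                       # 1-5 Likert-style composite
decision_quality = 0.7 * bi_capability + rng.normal(0, 0.4, 123)

for name, x in [("BI capability", bi_capability), ("decision quality", decision_quality)]:
    stat, p = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")
    print(f"KS {name}: D = {stat:.3f}, p = {p:.3f}")             # p > 0.05 -> approx. normal

r, p = stats.pearsonr(bi_capability, decision_quality)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```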
Keywords: Business intelligence, business intelligence capability, decision making, decision quality.
1862 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops
Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding
Abstract:
Background: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, troop cafeterias with many types of equipment suffer from chaotic processes during dining. Objective: This article aims to optimize the layout of a troops' cafeteria and thereby improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and troop density between the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In the simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference are both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. Conclusion: The two new layout schemes were shown to be optimal through a series of simulations and field experiments. In future research, similar approaches could be applied while taking layout-design algorithms into consideration.
Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment.
1861 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
The cropping-system is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutritive substances) while increasing production, taking particular crop characteristics into account. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has proved efficient in solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize the use of resources in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities that have the highest profit and thus minimize the costs. The algorithm uses genetic-based operators (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method are presented in a dedicated section. The implementation of this method would benefit farmers by giving them hints and helping them to use their resources efficiently.
Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.
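The encoding described above (chromosome = rotation, gene = crop) can be sketched as below; the crop list, gross margins, and penalty rule are invented placeholders, not the paper's data or fitness function.

```python
# Tiny genetic algorithm where a chromosome is a rotation (sequence of crops)
# and each gene is one crop. Crop names and per-crop gross margins are
# illustrative placeholders.
import random

CROPS = ["wheat", "maize", "soybean", "sunflower", "barley"]
MARGIN = {"wheat": 420, "maize": 510, "soybean": 560, "sunflower": 480, "barley": 380}
ROTATION_LEN, POP, GENS = 4, 40, 80
random.seed(1)

def fitness(rot):
    profit = sum(MARGIN[c] for c in rot)
    penalty = 200 * sum(1 for a, b in zip(rot, rot[1:]) if a == b)   # discourage monoculture
    return profit - penalty

def crossover(a, b):
    cut = random.randint(1, ROTATION_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(rot, p=0.1):
    return [random.choice(CROPS) if random.random() < p else c for c in rot]

pop = [[random.choice(CROPS) for _ in range(ROTATION_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]                                        # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print("best rotation:", best, "fitness:", fitness(best))
```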
1860 Statistical Relation between Vegetation Cover and Land Surface Temperature in Phnom Penh City
Authors: Gulam Mohiuddin, Jan-Peter Mund
Abstract:
This study assessed the correlation between the Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST) in Phnom Penh City (Cambodia) from 2016 to 2020. Understanding LST and NDVI can be helpful for understanding the Urban Heat Island (UHI) scenario, and it can contribute to planning urban greening and combating the effects of UHI. The study used Landsat-8 images as the data for analysis; they have 100 m spatial resolution (per pixel) in the thermal band. The statistical analysis considers every pixel of the study area instead of taking a few sample points or analyzing descriptive statistics, and it examines the correlation between NDVI and LST with a spatially explicit approach. The study found a strong negative correlation between NDVI and LST (coefficients ranging from -0.56 to -0.59), and this relationship is linear. The study shows a way to avoid the probable error of a sample-based approach when examining two spatial variables, and the method is reproducible for similar analyses of the correlation between spatial phenomena. The findings will be used further to understand the causation behind LST change in the area by triangulating LST, NDVI, and land-use changes.
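A minimal sketch of the pixel-wise (rather than sample-based) correlation approach, with random arrays standing in for the Landsat-8 red, near-infrared, and thermal rasters.

```python
# Per-pixel NDVI vs. LST correlation over a whole scene (no sampling),
# using random arrays in place of the Landsat-8 red, NIR and thermal rasters.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
red = rng.uniform(0.02, 0.3, (500, 500))
nir = rng.uniform(0.1, 0.6, (500, 500))
ndvi = (nir - red) / (nir + red)
lst = 45.0 - 12.0 * ndvi + rng.normal(0, 1.5, ndvi.shape)   # synthetic LST in deg C

valid = np.isfinite(ndvi) & np.isfinite(lst)                # would mask clouds/no-data in practice
r, p = pearsonr(ndvi[valid].ravel(), lst[valid].ravel())
print(f"pixelwise Pearson r = {r:.2f} (p = {p:.3g})")
```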
Keywords: Land Surface Temperature, NDVI, Normalized Difference Vegetation Index, remote sensing, methodological development.
1859 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique
Authors: C. Manjula, Lilly Florence
Abstract:
Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications have been widely adopted for business purposes. For any software company, the development of reliable software is a challenging task, because a faulty software module may be harmful to the growth of the industry and the business. Hence, there is a need to develop techniques for the early prediction of software defects. Because manual prediction is complex, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. They have attracted researchers because of their significant impact on industrial growth through identifying bugs in software. Several studies have been carried out on this basis, but achieving the desired defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) with an improved fitness function is presented for better optimization of the features in the data sets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which the results of the proposed GA-DT based hybrid approach are compared with those of the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
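The hybrid idea can be sketched as GA-style feature selection (binary-mask chromosomes) whose fitness is the cross-validated accuracy of a decision tree; the synthetic data and GA settings below are illustrative assumptions, not the paper's improved fitness function.

```python
# GA-style feature selection (binary-mask chromosomes) feeding a decision tree;
# 5-fold cross-validated accuracy is the fitness. Synthetic data replace the
# real defect datasets, and the GA settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))             # 20 chromosomes
for _ in range(25):                                          # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # keep the 10 fittest
    pairs = rng.integers(0, 10, size=(10, 2))                # one-point crossover
    cuts = rng.integers(1, X.shape[1], size=10)
    children = np.array([np.concatenate([parents[i][:c], parents[j][c:]])
                         for (i, j), c in zip(pairs, cuts)])
    children[rng.random(children.shape) < 0.05] ^= 1         # bit-flip mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best), " CV accuracy:", round(fitness(best), 3))
```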
Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.
1858 Thermal Behavior of a Ventilated Façade Using Perforated Ceramic Bricks
Authors: H. López-Moreno, A. Rodríguez-Sánchez, C. Viñas-Arrebola, C. Porras-Amores
Abstract:
The ventilated façade has great advantages compared to traditional façades, as it reduces the air conditioning thermal loads thanks to the stack effect induced by solar radiation in the air chamber. Optimizing energy consumption with a ventilated façade is possible not only in newly built buildings but also in existing buildings, which opens the field of application to energy retrofitting works. In this sense, the following three façade prototypes were designed, built, and analyzed in this research: a non-ventilated façade (NVF), a slightly ventilated façade (SLVF), and a strongly ventilated façade (STVF). The construction characteristics of the three façades are based on the Spanish building regulation, the Technical Building Code. The façades were monitored with type-K thermocouples on a representative day of the summer season in Madrid (Spain). Moreover, a repeated-measures analysis of variance (ANOVA) was designed to study the thermal lag in the ventilated and non-ventilated façades. Results show that the STVF façade presents higher levels of thermal inertia, as the thermal lag is reduced by up to 17% (daily mean) compared to the non-ventilated façade. In addition, the statistical analysis proves that an increase in the size of the ventilation holes in the STVF façade can improve the thermal lag significantly (p > 0.05) compared to the SLVF façade.
Keywords: Energy efficiency, experimental study, statistical analysis, thermal behavior, ventilated façade.
1857 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., situation awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN and how an effective prediction can be made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows the estimation of missing variables. Experimental results show that the BBN produces more compelling predictions from samples containing uncertainties than from perfect samples. That is, Bayesian inference can help in handling the uncertainty and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agent data. The whole framework was tested on a multi-UAV mission for forest fire search. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.
Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence.
1856 A Green Design for Assembly Model for Integrated Design Evaluation and Assembly and Disassembly Sequence Planning
Authors: Yuan-Jye Tseng, Fang-Yu Yu, Feng-Yi Huang
Abstract:
A green design for assembly model is presented that integrates design evaluation and assembly and disassembly sequence planning by evaluating the three activities in one integrated model. For an assembled product, an assembly sequence planning model is required for assembling the product at the start of the product life cycle, and a disassembly sequence planning model is needed for disassembling the product at the end. In a green product life cycle, it is important to plan how a product can be disassembled, reused, or recycled before the product is actually assembled and produced. Given a product requirement, there may be several alternative design cases for the same product, and in the different design cases the assembly and disassembly sequences for producing the product can differ. In this research, a new model is presented to concurrently evaluate the design and plan the assembly and disassembly sequences. First, the components are represented using graph-based models. Next, a particle swarm optimization (PSO) method with a new encoding scheme is developed. In the new PSO encoding scheme, a particle is represented by a position matrix defining an assembly sequence and a disassembly sequence. The assembly and disassembly sequences can be planned simultaneously with the objective of minimizing the total of the assembly costs and disassembly costs. The test results show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly and disassembly sequence planning problem. An example product is implemented and illustrated in this paper.
Keywords: Green design, assembly and disassembly sequence planning, green design for assembly, particle swarm optimization.
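A minimal sketch of the position-matrix encoding idea: two rows of continuous keys are decoded, by ranking, into an assembly sequence and a disassembly sequence and scored with a toy cost. Precedence-feasibility checks and real cost data are omitted, and the decoding rule is an illustrative assumption rather than the paper's exact scheme.

```python
# Decoding a PSO particle's position matrix into an assembly sequence and a
# disassembly sequence by ranking continuous keys (random-key style).
# Component costs are toy numbers; precedence feasibility checks are omitted.
import numpy as np

rng = np.random.default_rng(4)
n_components = 6
position = rng.random((2, n_components))   # row 0: assembly keys, row 1: disassembly keys

assembly_seq = np.argsort(position[0])     # smallest key assembled first
disassembly_seq = np.argsort(position[1])

assembly_cost = rng.uniform(1, 5, n_components)
disassembly_cost = rng.uniform(1, 4, n_components)

def sequence_cost(seq, unit_cost, change_penalty=0.5):
    # base cost plus a penalty each time the tool/direction notionally changes
    return unit_cost[seq].sum() + change_penalty * (len(seq) - 1)

total = (sequence_cost(assembly_seq, assembly_cost)
         + sequence_cost(disassembly_seq, disassembly_cost))
print("assembly order   :", assembly_seq)
print("disassembly order:", disassembly_seq)
print("objective (total assembly + disassembly cost):", round(total, 2))
```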
1855 Body Mass Index and Dietary Habits among Nursing College Students Living in the University Residence in Kirkuk City, Iraq
Authors: Jenan Shakoor
Abstract:
Obesity prevalence is increasing worldwide. University life is a challenging period, especially for students who have to leave their familiar surroundings and settle in a new environment. The current study aimed to assess diet and exercise habits and their association with body mass index (BMI) among nursing college students living at the Kirkuk University residence. This was a descriptive study. A non-probability (purposive) sample of 101 students living in the Kirkuk University residence was recruited during the period from 15 November 2015 to 5 May 2016. A questionnaire was constructed for the purpose of the study, consisting of four parts: the demographic characteristics of the study sample, eating habits, eating at college, and healthy habits. The data were collected by interviewing the study sample, and weight and height were measured by a trained researcher at the college. Descriptive statistical analysis was undertaken. The data were prepared, organized, and entered into a computer file; the Statistical Package for Social Science (SPSS 20) was used for data analysis. A p value ≤ 0.05 was accepted as statistically significant. A total of 63 (62.4%) of the sample were aged 20-21, with a mean age of 22.1 (SD ± 0.653). Thirty-eight (37.6%) of the sample were from level four at college, 67 (66.3%) were female, and 46 (45.5%) of participants were of middle socio-economic status. Fourteen (13.9%) of the study sample were overweight (BMI = 25-29.9 kg/m2) and 6 (5.9%) were obese (BMI ≥ 30 kg/m2), compared with 73 (72.3%) of normal weight (BMI = 18.5-24.9 kg/m2). With regard to eating habits and exercise, 42 (41.6%) of the students rarely ate breakfast, 79 (78.2%) ate lunch at the university residence, 77 (78.2%) reported rarely doing exercise, and 62 (61.4%) slept for less than eight hours. No significant association was found between BMI and the variables age, sex, level of college, and socio-economic status, while there was a significant association between eating lunch at the university and BMI (p = 0.03). No significant association was found between eating habits, healthy habits, and BMI. The prevalence of overweight and obesity in the study sample was 19.8%, with female students being more obese than males. Further studies are needed to assess BMI among residence students in other colleges and to increase the awareness of undergraduate students of healthy food habits.
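For reference, the BMI computation and the adult cut-offs quoted in the abstract can be expressed as below; the two student records are hypothetical.

```python
# BMI = weight [kg] / height [m]^2, classified with the cut-offs quoted
# in the abstract. The two student records are hypothetical.
def bmi_category(weight_kg, height_m):
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = "underweight"
    elif bmi < 25:
        label = "normal (18.5-24.9)"
    elif bmi < 30:
        label = "overweight (25-29.9)"
    else:
        label = "obese (>=30)"
    return round(bmi, 1), label

for w, h in [(58, 1.62), (86, 1.70)]:
    print(w, "kg,", h, "m ->", bmi_category(w, h))
```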
Keywords: Body mass index, diet, obesity, university residence.
1854 Complex Condition Monitoring System of Aircraft Gas Turbine Engine
Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev
Abstract:
Research shows that the application of probability-statistical methods is unfounded at the early stages of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited, and uncertain. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of the changes in the skewness and kurtosis coefficients are analysed. The analysis of the changes in the skewness and kurtosis coefficient values characterises the distributions of the GTE work and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise, using a new recursive Least Squares Method (LSM). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine technical condition. As an application of the given technique, the technical condition of a new in-service aviation engine was estimated.
Keywords: Aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics.
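A minimal sketch of the kind of statistic analysed above: tracking the skewness and kurtosis of a GTE output parameter (e.g., exhaust gas temperature) across successive batches of flight data. The data and the drift pattern are simulated, not engine measurements.

```python
# Tracking skewness and kurtosis of a GTE output parameter (e.g. exhaust gas
# temperature) across successive flight batches. Data are simulated.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(6)
batches = [rng.normal(650 + drift, 8 + 0.5 * drift, 200) for drift in range(5)]

for i, egt in enumerate(batches):
    print(f"batch {i}: skewness = {skew(egt):+.3f}, kurtosis = {kurtosis(egt):+.3f}")
```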
1853 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as the selection of members for a game and the game strategy, based on the analysis of accumulated sports data has been widely attempted. In fact, in the NBA, the basketball league where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze the game data of each play, such as ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is proposed. In this research, we propose an analytical model for "determining the optimal lineup composition" using real-time play data, which is considered to be difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are whether or not the lineup should be changed and whether or not a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, scoring data can be accumulated for each play, which indicates a player's contribution to the game, and the scoring data can be considered as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a score prediction model. This model is capable of identifying the current optimal lineup for different situations. We collected the accumulated NBA data from the 2019-2020 season and applied the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: Recurrent Neural Network, players lineup, basketball data, decision making model.
1852 Optimal Design of Flat-Gain Wide-Band Discrete Raman Amplifiers
Authors: Banaz Omer Rasheed, Parexan M. Aljaff
Abstract:
In this paper, a wide-band gain-flattened discrete Raman amplifier utilizing four optimum pump wavelengths is demonstrated.
Keywords: Fiber Raman amplifiers, optimization, wavelength division multiplexing.
1851 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and the AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), the root mean-square error (RMSE), and the coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not make a significant difference to the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
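A minimal sketch of the mixed-data idea: each IPD study is first reduced to an effect estimate and standard error and then pooled with the AD studies by inverse-variance weighting. The study data below are simulated, and the simple fixed-effect pooling is an illustrative assumption, not the paper's simulation design.

```python
# Two-stage pooling of mixed-level evidence: each IPD study is reduced to an
# effect estimate and standard error, then combined with AD studies using
# fixed-effect inverse-variance weighting. Study data are simulated.
import numpy as np

rng = np.random.default_rng(8)

def ipd_to_summary(treat, control):
    diff = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / len(treat) + control.var(ddof=1) / len(control))
    return diff, se

# two IPD studies (raw patient outcomes) and two AD studies (published summaries)
ipd_studies = [(rng.normal(1.2, 1.0, 80), rng.normal(0.9, 1.0, 80)),
               (rng.normal(1.1, 1.2, 60), rng.normal(0.8, 1.2, 60))]
ad_studies = [(0.25, 0.12), (0.35, 0.15)]          # (mean difference, SE)

effects, ses = zip(*([ipd_to_summary(t, c) for t, c in ipd_studies] + ad_studies))
w = 1.0 / np.array(ses) ** 2
pooled = np.sum(w * np.array(effects)) / w.sum()
pooled_se = np.sqrt(1.0 / w.sum())
print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```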
Keywords: Aggregate data, combined-level data, individual patient data, meta-analysis.
1850 Robust Integrated Design for a Mechatronic Feed Drive System of Machine Tools
Authors: Chin-Yin Chen, Chi-Cheng Cheng
Abstract:
This paper aims to develop a robust optimization methodology for the mechatronic modules of machine tools by considering all important characteristics from the structural and control domains in one single process. The relationship between these two domains is strongly coupled. In order to reduce the disturbance caused by parameters in either one, the mechanical and controller design domains need to be integrated. Therefore, the concurrent integrated design method Design for Control (DFC) is employed in this paper. In this regard, it is applied not only to achieve minimal power consumption but also to enhance structural performance and system response at the same time. To investigate the method for integrated optimization, a mechatronic feed drive system of a machine tool is used as the design platform. Pro/Engineer and ANSYS are first used to build the 3D model and to analyze and design structural parameters such as elastic deformation, natural frequency, and component size, based on their effects on and sensitivities to the structure. In addition, a robust controller based on Quantitative Feedback Theory (QFT) is applied to determine proper control parameters for the controller. The overall physical properties of the machine tool are thereby obtained in the initial stage. Finally, the design-for-control procedure is carried out to modify the structural and control parameters and achieve the overall system performance. Hence, the corresponding productivity is expected to be greatly improved.
Keywords: Machine tools, integrated structure and control design, design for control, multilevel decomposition, quantitative feedback theory.
1849 Resistance and Sub-Resistances of RC Beams Subjected to Multiple Failure Modes
Authors: F. Sangiorgio, J. Silfwerbrand, G. Mancini
Abstract:
Geometric and mechanical properties all influence the resistance of RC structures and may, in certain combinations of property values, increase the risk of a brittle failure of the whole system. This paper presents a statistical and probabilistic investigation of the resistance of RC beams designed according to Eurocodes 2 and 8 and subjected to multiple failure modes, under both the natural variation of material properties and the uncertainty associated with cross-section and transverse reinforcement geometry. A full probabilistic model based on the JCSS Probabilistic Model Code is derived. Different beams are studied through material nonlinear analysis via Monte Carlo simulations. The resistance model is consistent with Eurocode 2. Both a multivariate statistical evaluation and a data clustering analysis of the outcomes are then performed. The results show that the ultimate load behaviour of RC beams subjected to flexural and shear failure modes is mainly influenced by the combination of the mechanical properties of the longitudinal reinforcement and the stirrups, and the tensile strength of the concrete, the latter of which appears to affect the overall response of the system in a nonlinear way. The model uncertainty of the resistance model used in the analysis undoubtedly plays an important role in interpreting the results.
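A minimal sketch of the Monte Carlo idea: sample material properties and propagate them through a simple rectangular-section flexural resistance formula. The distributions and section dimensions are illustrative and do not reproduce the JCSS models or the paper's nonlinear analyses.

```python
# Monte Carlo sampling of material properties propagated through a simple
# rectangular-section flexural resistance formula M_R = As*fy*(d - a/2),
# with a = As*fy/(0.85*fc*b). Distributions and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
fy = rng.lognormal(mean=np.log(560e6), sigma=0.05, size=n)   # steel yield strength [Pa]
fc = rng.lognormal(mean=np.log(38e6), sigma=0.15, size=n)    # concrete strength [Pa]
b, d, As = 0.30, 0.55, 1.2e-3                                # width, eff. depth [m], steel area [m^2]

a = As * fy / (0.85 * fc * b)                                # depth of the stress block [m]
MR = As * fy * (d - a / 2)                                   # flexural resistance [N*m]

print(f"mean M_R    = {MR.mean()/1e3:.1f} kNm")
print(f"c.o.v.      = {MR.std()/MR.mean():.3f}")
print(f"5% fractile = {np.quantile(MR, 0.05)/1e3:.1f} kNm")
```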
Keywords: Modelling, Monte Carlo Simulations, Probabilistic Models, Data Clustering, Reinforced Concrete Members, Structural Design.
1848 A Statistical Model for the Geotechnical Parameters of Cement-Stabilised Hightown's Soft Soil: A Case Study of Liverpool, UK
Authors: Hassnen M. Jafer, Khalid S. Hashim, W. Atherton, Ali W. Alattabi
Abstract:
This study investigates the effect of two important parameters, the length of the curing period and the percentage of added binder, on the strength of soil treated with ordinary Portland cement (OPC). An intermediate-plasticity silty clayey soil with medium organic content was used. This soft soil was treated with different percentages of a commercially available cement, type 32.5-N. Laboratory experiments were carried out on the soil treated with 0, 1.5, 3, 6, 9, and 12% OPC by dry weight to determine the effect of OPC on the compaction parameters, consistency limits, and compressive strength. The unconfined compressive strength (UCS) test was carried out on cement-treated specimens after exposing them to different curing periods (1, 3, 7, 14, 28, and 90 days). The results of the UCS tests were used to develop a non-linear multi-regression model relating the predicted and measured maximum compressive strength of the treated soil (qu). The results indicated a significant improvement in the plasticity index (IP) after treatment with OPC; IP decreased from 20.2 to 14.1 with 12% OPC, and this percentage was enough to increase the UCS of the treated soil to 1362 kPa after 90 days of curing. With respect to the statistical model of the predicted qu, the coefficient of determination (R2) was equal to 0.8534, which indicates good reproducibility of the constructed model.
Keywords: Cement admixtures, soft soil stabilisation, geotechnical parameters, unconfined compressive strength, multi-regression model.
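A minimal sketch of a non-linear multi-regression fit of qu against binder content and curing time. The power-law functional form and all data points except the quoted 12% OPC, 90-day strength of 1362 kPa are illustrative assumptions, not the paper's fitted model.

```python
# Nonlinear multi-regression of unconfined compressive strength against
# binder content and curing time, qu = a * OPC^b * t^c, fitted with curve_fit.
# The functional form and the data points (except the quoted 1362 kPa value)
# are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

opc  = np.array([1.5, 3, 6, 9, 12, 3, 6, 9, 12, 6, 9, 12], dtype=float)    # % binder
days = np.array([7, 7, 7, 7, 7, 28, 28, 28, 28, 90, 90, 90], dtype=float)  # curing time
qu   = np.array([90, 150, 260, 380, 520, 240, 420, 640, 880, 610, 950, 1362], dtype=float)  # kPa

def model(X, a, b, c):
    opc_pct, t = X
    return a * opc_pct**b * t**c

params, _ = curve_fit(model, (opc, days), qu, p0=[50, 1.0, 0.3])
pred = model((opc, days), *params)
r2 = 1 - np.sum((qu - pred) ** 2) / np.sum((qu - qu.mean()) ** 2)
print("a, b, c =", np.round(params, 3), " R2 =", round(r2, 4))
```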
1847 The Evaluation of Complete Blood Cell Count-Based Inflammatory Markers in Pediatric Obesity and Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity is defined as a severe chronic disease characterized by a low-grade inflammatory state. Therefore, inflammatory markers gained utmost importance during the evaluation of obesity and metabolic syndrome (MetS), a disease characterized by central obesity, elevated blood pressure, increased fasting blood glucose, and elevated triglycerides or reduced high density lipoprotein cholesterol (HDL-C) values. Several inflammatory markers based upon the complete blood cell count (CBC) are available. In this study, it was questioned which inflammatory marker was the best for evaluating the differences between the various obesity groups. A total of 514 pediatric individuals were recruited: 132 children with MetS, 155 morbidly obese (MO), 90 obese (OB), 38 overweight (OW), and 99 children with normal BMI (N-BMI). The obesity groups were constituted using the age- and sex-dependent body mass index (BMI) percentiles tabulated by the World Health Organization. MetS components were determined in order to identify children with MetS. CBC parameters were determined using an automated hematology analyzer, and HDL-C analysis was performed. Using the CBC parameters and HDL-C values, ratio markers of inflammation were calculated, covering the neutrophil-to-lymphocyte ratio (NLR), derived neutrophil-to-lymphocyte ratio (dNLR), platelet-to-lymphocyte ratio (PLR), lymphocyte-to-monocyte ratio (LMR), and monocyte-to-HDL-C ratio (MHR). Statistical analyses were performed, and p < 0.05 was considered statistically significant. There was no statistically significant difference among the groups in terms of platelet count, neutrophil count, lymphocyte count, monocyte count, or NLR. PLR differed significantly between OW and N-BMI as well as MetS. The monocyte-to-HDL-C ratio exhibited statistical significance between MetS and the N-BMI, OB, and MO groups. HDL-C values differed between MetS and the N-BMI, OW, OB, and MO groups. MHR was the ratio that exhibited the best performance among the CBC-based inflammatory markers. On the other hand, when MHR was compared to HDL-C alone, HDL-C was found to give much more valuable information; therefore, this parameter still keeps its value from the diagnostic point of view. Our results suggest that MHR can serve as an inflammatory marker in the evaluation of pediatric MetS, but its predictive value was not superior to that of HDL-C in the evaluation of obesity.
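For reference, the CBC-based ratios named above can be computed as below from a small hypothetical table; the two example rows are not study data.

```python
# CBC-based inflammatory ratios from the abstract: NLR, dNLR, PLR, LMR, MHR.
# The two example rows are hypothetical values, not study data.
import pandas as pd

cbc = pd.DataFrame({
    "neutrophils": [3.8, 5.2],    # 10^3/uL
    "lymphocytes": [2.9, 2.2],
    "monocytes":   [0.45, 0.60],
    "platelets":   [290, 340],
    "wbc":         [7.6, 8.9],
    "hdl_c":       [52, 38],      # mg/dL
})

cbc["NLR"]  = cbc.neutrophils / cbc.lymphocytes
cbc["dNLR"] = cbc.neutrophils / (cbc.wbc - cbc.neutrophils)
cbc["PLR"]  = cbc.platelets / cbc.lymphocytes
cbc["LMR"]  = cbc.lymphocytes / cbc.monocytes
cbc["MHR"]  = cbc.monocytes / cbc.hdl_c
print(cbc[["NLR", "dNLR", "PLR", "LMR", "MHR"]].round(3))
```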
Keywords: Children, complete blood cell count, high density lipoprotein cholesterol, metabolic syndrome, obesity.
1846 Conventional and PSO Based Approaches for Model Reduction of SISO Discrete Systems
Authors: S. K. Tomar, R. Prasad, S. Panda, C. Ardil
Abstract:
The reduction of Single Input Single Output (SISO) discrete systems into lower order models, using a conventional and an evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Modified Cauer Form (MCF) and differentiation. In this method, the original discrete system is first converted into an equivalent continuous system by applying the bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, and the reduced denominator of the desired order is obtained by combining the differentiated polynomials. The numerator is obtained by matching the quotients of the MCF. The reduced continuous system is converted back into a discrete system using the inverse bilinear transformation. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through a numerical example.
Keywords: Discrete system, Single Input Single Output (SISO), bilinear transformation, reduced order model, Modified Cauer Form, polynomial differentiation, particle swarm optimization, integral squared error.
1845 Biosorption of Heavy Metals by Low Cost Adsorbents
Authors: Azam Tabatabaee, Fereshteh Dastgoshadeh, Akram Tabatabaee
Abstract:
This paper describes the use of by-products as adsorbents for removing heavy metals from aqueous effluent solutions. Almond skin, walnut shell, sawdust, rice bran, and eggshell were evaluated as metal ion adsorbents in aqueous solutions, and a comparative study was carried out with commercial adsorbents such as ion exchange resins and activated carbon. Batch experiments were conducted to determine the affinity of all the biomasses for Cd(II), Cr(III), Ni(II), and Pb(II) metal ions at pH 5. The metal ion removal from the synthetic wastewater by each biomass was evaluated by measuring the final concentration of the synthetic wastewater. At a metal ion concentration of 50 mg/L, eggshell adsorbed high levels (98.6-99.7%) of Pb(II) and Cr(III), and walnut shell adsorbed 35.3-65.4% of Ni(II) and Cd(II). This study shows that the by-products were excellent adsorbents for the removal of toxic ions from wastewater, with an efficiency comparable to commercially available adsorbents but at a reduced cost. In addition, statistical comparisons of the adsorption of the various elements, using the independent-samples t-test and one-way ANOVA, showed that for some elements there is no significant difference between the adsorption percentages of the by-products and those of the commercial adsorbents.
Keywords: Adsorbents, heavy metals, commercial adsorbents, wastewater, by-products.
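A minimal sketch of the removal-efficiency calculation and a one-way ANOVA comparison across adsorbents; the concentration values are hypothetical replicates, not the reported results.

```python
# Removal efficiency (%) from initial/final concentrations, then a one-way
# ANOVA comparing adsorbents. Concentration values are hypothetical replicates.
import numpy as np
from scipy.stats import f_oneway

C0 = 50.0                                              # initial metal conc. [mg/L]
final = {                                              # final conc. per replicate [mg/L]
    "egg shell":        [0.6, 0.8, 0.7],
    "walnut shell":     [19.5, 21.0, 18.7],
    "activated carbon": [0.9, 1.1, 0.8],
}
removal = {k: [100 * (C0 - c) / C0 for c in v] for k, v in final.items()}
for k, v in removal.items():
    print(f"{k:16s} removal = {np.mean(v):5.1f} %")

F, p = f_oneway(*removal.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")
```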