Search results for: e-content producing algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4962

3132 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity remains a demanding question for thermophysical researchers, and for several reasons very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial development is a major barrier to quantitative knowledge of the heat flux from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids.
This work is motivated by the first attempt to modify the heat conduction problem, which leads to a polynomial velocity- and temperature-profile algorithm, for the investigation of transport properties and their nonlinear behaviors in NICDPLs. The aim is to implement a NEMD algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps are taken between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of the λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the earlier plasma λ0 by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. The new technique provides more accurate results with fast convergence and small size effects over a wide range of plasma states.

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 275
3131 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, requiring considerable capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building, and curing, the mixing process is an essential and important step because the main component of a tire, called the compound, is formed there. The compound, a rubber synthesis with various characteristics, plays its own role required of a tire as a finished product. Scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle.
At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using some small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in a previous research work. As a performance measure, we define an error rate that evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend the current work by considering other performance measures such as weighted makespan or processing times affected by aging or learning effects.
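The particle-update loop described above can be sketched as a generic PSO minimizing a makespan-like objective. The random-key decoding and the toy two-machine objective below are illustrative assumptions, not the authors' exact encoding scheme:

```python
import random

def pso_minimize(objective, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Generic particle swarm optimization minimizing `objective` over [0, 1]^dim."""
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:                    # keep the better personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy problem: 4 jobs on 2 parallel machines; a particle is a vector of random
# keys, and each key assigns its job to machine 0 (key < 0.5) or machine 1.
proc = [5, 3, 8, 6]  # illustrative processing times

def makespan(keys):
    loads = [0, 0]
    for j, k in enumerate(keys):
        loads[0 if k < 0.5 else 1] += proc[j]
    return max(loads)  # makespan = load of the busiest machine

best, val = pso_minimize(makespan, dim=4)
```

The optimal split here has makespan 11 (jobs {5, 6} vs. {3, 8}), which the swarm typically finds within a few dozen iterations.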

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 266
3130 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and oil-immersed transformers are widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults in transformer oil in order to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique was employed, using gradient descent algorithms and a Support Vector Machine (SVM), for predicting incipient fault diagnosis of transformers. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach has two phases: a training phase and a testing phase. The gradient descent algorithm is trained with a training dataset, while the learned algorithm is applied to a set of new data. These two datasets are used to establish the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent algorithms is presented, with satisfactory diagnostic capability and a higher percentage accuracy in predicting incipient transformer faults than existing diagnostic methods.
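A minimal sketch of the training scheme, a linear SVM fitted by subgradient descent on the hinge loss, is shown below. The two-feature dissolved-gas values are invented for illustration; the paper's actual feature set and kernel are not specified here:

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=500):
    """Train a linear SVM by (sub)gradient descent on the regularized hinge loss.
    Labels must be +1 (faulty) or -1 (healthy)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point inside the margin: hinge subgradient is active
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only the regularizer pulls on w
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Illustrative scaled dissolved-gas features (e.g. H2, CH4); not real DGA data.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

On this separable toy set the learned hyperplane classifies held-out points on either side of the margin correctly.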

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 325
3129 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving, creating an impact on our everyday lives and on the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits of using web-based methods to provide healthcare help. Nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform for users to receive self-care help and information using wearable sensors; in addition, researchers developing a similar project obtain a solid foundation as a reference. This study provides descriptions and analyses of the software and hardware architecture. It exhibits and explains a dynamic and efficient heart rate algorithm that continuously calculates the desired sensor values, and it presents diagrams that illustrate the website deployment process and the web server's means of handling the sensor data. The goal is to create a working project using Arduino-compatible hardware. Heart rate sensors send their data values to an online platform. A microcontroller board uses an algorithm to calculate the heart rate values from the sensor and outputs them to a web server. The platform visualizes the sensor data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication between the hardware and software. The web server displays the conveyed heart rate sensor data on the online platform, presenting observations and evaluations.
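A server-side sketch of the described pipeline, ingesting sensor BPM values, summarizing them in a report, and raising alerts, might look like the following. The thresholds, class, and method names are illustrative assumptions, not the platform's actual API:

```python
from collections import deque
from statistics import mean

class HeartRateMonitor:
    """Ingest BPM readings from the wearable sensor, keep a sliding window,
    and raise alerts when a reading falls outside a safe range."""
    def __init__(self, window=10, low=50, high=120):
        self.readings = deque(maxlen=window)  # only the most recent readings
        self.low, self.high = low, high

    def ingest(self, bpm):
        self.readings.append(bpm)
        if bpm < self.low or bpm > self.high:
            return f"ALERT: heart rate {bpm} BPM outside [{self.low}, {self.high}]"
        return None

    def report(self):
        """Summarize the current window for the user-facing report."""
        if not self.readings:
            return {}
        return {"count": len(self.readings),
                "avg_bpm": round(mean(self.readings), 1),
                "min_bpm": min(self.readings),
                "max_bpm": max(self.readings)}

monitor = HeartRateMonitor()
alerts = [a for bpm in [72, 75, 130, 68] if (a := monitor.ingest(bpm))]
summary = monitor.report()
```

On the actual platform the `ingest` call would be driven by HTTP posts from the microcontroller rather than an in-memory list.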

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 128
3128 Scheduling Method for Electric Heater in HEMS Considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

The Home Energy Management System (HEMS), which enables residential consumers to contribute to demand response, has been attracting attention in recent years. An aim of HEMS is to minimize the consumer's electricity cost by controlling the use of appliances according to the electricity price. The use of appliances in HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user's usage pattern of appliances should be modeled according to the external conditions, and the resulting usage pattern is related to the user's comfort in using each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the appropriate user's comfort for the external conditions of the next day. Within the usage range, optimal scheduling of appliances is conducted so as to minimize electricity cost while considering the user's comfort. Among home appliances, the electric heater (EH) is a representative appliance affected by the external temperature. In this paper, an optimal scheduling algorithm for the electric heater (EH) is addressed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user's comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with traditional operation of the EH, and it also shows the impact of the comfort level on the scheduling result.
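The branch-and-bound scheduling step can be sketched as follows: choose an on/off state for the heater in each time slot to minimize cost under a simple comfort constraint. The hourly prices and the minimum-on-slots comfort model are illustrative stand-ins for the paper's copula-derived usage range:

```python
def schedule_heater(prices, min_on_slots):
    """Branch-and-bound sketch: pick on/off per slot to minimize electricity cost,
    subject to a comfort constraint of at least `min_on_slots` on-slots."""
    n = len(prices)
    best = {"cost": float("inf"), "plan": None}

    def bound(i, on_count, cost):
        # Optimistic completion: turn on only the cheapest remaining slots needed.
        need = max(0, min_on_slots - on_count)
        if need > n - i:
            return float("inf")  # cannot satisfy comfort from here
        return cost + sum(sorted(prices[i:])[:need])

    def branch(i, on_count, cost, plan):
        if bound(i, on_count, cost) >= best["cost"]:
            return  # prune: even the optimistic completion cannot improve
        if i == n:
            if on_count >= min_on_slots and cost < best["cost"]:
                best["cost"], best["plan"] = cost, plan[:]
            return
        for on in (0, 1):  # branch on the heater state in slot i
            plan.append(on)
            branch(i + 1, on_count + on, cost + on * prices[i], plan)
            plan.pop()

    branch(0, 0, 0.0, [])
    return best["plan"], best["cost"]

# Hourly prices (illustrative); comfort requires the heater on for >= 3 hours.
plan, cost = schedule_heater([5.0, 2.0, 8.0, 1.0, 4.0], min_on_slots=3)
```

The optimal plan turns the heater on in the three cheapest hours, for a cost of 7.0.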

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 587
3127 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power unit outputs of all members of the power system network such that the total fuel cost is minimized while the operational requirement limits remain satisfied across all dispatch phases. Many optimization techniques have been proposed to solve this problem. A famous one is Quadratic Programming (QP). QP is a very simple and fast method, but it still suffers from the problems of gradient methods, which may become trapped at local minimum solutions and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator test cases. The obtained results are compared to existing results based on QP, GAs, and PSO. The developed results show that differential evolution is superior in obtaining a combination of power loads that fulfills the problem constraints and minimizes the total fuel cost. DE was found to be fast in converging to the optimal power generation loads and capable of handling the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission, and maximize the reliability of the power provided to the customers.
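A DE/rand/1/bin sketch for a small ELD instance is shown below; the quadratic fuel-cost coefficients, unit limits, and penalty-based demand balance are illustrative assumptions, not the paper's test-case data:

```python
import random

def de_dispatch(cost_fns, p_min, p_max, demand, np_=30, f=0.7, cr=0.9, gens=300):
    """DE/rand/1/bin for economic load dispatch: minimize total fuel cost subject
    to unit limits; the demand balance is handled by a soft penalty term."""
    dim = len(cost_fns)

    def total_cost(p):
        penalty = 1000.0 * abs(sum(p) - demand)  # soft power-balance constraint
        return sum(c(pi) for c, pi in zip(cost_fns, p)) + penalty

    pop = [[random.uniform(p_min[d], p_max[d]) for d in range(dim)]
           for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([x for x in range(np_) if x != i], 3)
            j_rand = random.randrange(dim)
            trial = pop[i][:]
            for d in range(dim):
                if random.random() < cr or d == j_rand:      # binomial crossover
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])  # mutation
                    trial[d] = min(p_max[d], max(p_min[d], v))   # respect limits
            if total_cost(trial) <= total_cost(pop[i]):      # greedy selection
                pop[i] = trial
    return min(pop, key=total_cost)

# Three illustrative quadratic fuel-cost curves a*P^2 + b*P + c (not real unit data).
costs = [lambda p: 0.008 * p * p + 7.0 * p + 200.0,
         lambda p: 0.009 * p * p + 6.3 * p + 180.0,
         lambda p: 0.007 * p * p + 6.8 * p + 140.0]
best = de_dispatch(costs, p_min=[10.0] * 3, p_max=[125.0] * 3, demand=150.0)
```

After a few hundred generations the population settles on a dispatch whose total output matches the demand to within a small fraction of a megawatt.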

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 284
3126 Swirling Flows with Heat Transfer in a Cylindrical Cavity under Axial Magnetic Field

Authors: B. Mahfoud, R. Harouz

Abstract:

The present work examines numerically the effect of an axial magnetic field on mixed convection in a cylindrical cavity filled with a liquid metal and having rotating top and bottom disks. The effects of the Richardson number (Ri = 0, 0.5, 1, and 2) and the Hartmann number (Ha = 0, 5, 10, and 20) on the temperature and flow fields were analyzed. The basic state of this system is steady and axisymmetric; when the counter-rotation is sufficiently large, a free shear layer is produced. This shear layer is unstable, and different complex flows appear successively: steady states with an azimuthal wavenumber of 1, travelling waves, and steady states with an azimuthal wavenumber of 2. Mixed modes and an azimuthal wavenumber of 3 are also found with increasing Hartmann number. The stability diagram (Recr-Ha) corresponding to the axisymmetric/three-dimensional transition for increasing values of the axial magnetic field is obtained.

Keywords: axisymmetric, counter-rotating, instabilities, magnetohydrodynamic, magnetic field, wavenumber

Procedia PDF Downloads 550
3125 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise, so least-cost design of storm water networks assumes significance, particularly when the available funds are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components, such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least-cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's storm water management model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open-conduit-based storm water networks.
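The simulation-optimization loop can be sketched as follows, with a trivial capacity check standing in for the SWMM simulator. The candidate areas, design flows, and linear cost model are illustrative assumptions:

```python
import random

def ga_design(candidate_sizes, required_flows, cost_per_area, pop=30, gens=100):
    """GA sketch of the simulation-optimization loop: each chromosome assigns a
    cross-sectional area to each conduit; a stand-in 'simulator' penalizes
    conduits whose capacity falls below the design runoff (the role SWMM plays
    in the actual methodology)."""
    n = len(required_flows)
    k = 2.0  # illustrative capacity per unit area (stands in for hydraulics)

    def fitness(chrom):
        cost = sum(cost_per_area * candidate_sizes[g] for g in chrom)
        # Penalize undersized conduits, as a SWMM flooding report would reveal.
        shortfall = sum(max(0.0, q - k * candidate_sizes[g])
                        for g, q in zip(chrom, required_flows))
        return cost + 1000.0 * shortfall

    population = [[random.randrange(len(candidate_sizes)) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]              # truncation selection
        children = []
        while len(children) < pop - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                # mutation
                child[random.randrange(n)] = random.randrange(len(candidate_sizes))
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

sizes = [0.5, 1.0, 1.5, 2.0]   # candidate conduit areas, m^2 (illustrative)
flows = [0.8, 2.5, 1.6]        # design runoff per conduit, m^3/s (illustrative)
best = ga_design(sizes, flows, cost_per_area=100.0)
```

The GA converges to the smallest areas that still carry the design flows; in the real methodology the fitness call would invoke a SWMM run instead of the linear capacity check.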

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 250
3124 Diversity of Microbial Ground Improvements

Authors: V. Ivanov, J. Chu, V. Stabnikov

Abstract:

Low-cost, sustainable, and environmentally friendly microbial cements, grouts, polysaccharides, and bioplastics are useful in construction and geotechnical engineering. Construction-related biotechnologies are based on the activity of different microorganisms: urease-producing, acidogenic, halophilic, alkaliphilic, denitrifying, iron- and sulphate-reducing bacteria, cyanobacteria, algae, and microscopic fungi. The bio-related materials and processes can be used for bioaggregation, soil biogrouting and bioclogging, biocementation, biodesaturation of water-saturated soil, bioencapsulation of soft clay, biocoating, and biorepair of concrete surfaces. Together with the most popular calcium- and urea-based biocementation, methods of ground improvement such as calcium- and magnesium-based biocementation, calcium phosphate strengthening of soil, calcium bicarbonate biocementation, and iron- or polysaccharide-based bioclogging are possible and often more effective. The construction-related microbial biotechnologies have many advantages over conventional construction materials and processes.

Keywords: ground improvement, biocementation, biogrouting, microorganisms

Procedia PDF Downloads 230
3123 Development of Locally Fabricated Honey Extracting Machine

Authors: Akinfiresoye W. A., Olarewaju O. O., Okunola I. O.

Abstract:

An indigenous honey-extracting machine was designed, fabricated, and evaluated at the workshop of the Department of Agricultural Technology, Federal Polytechnic, Ile-Oluji, Nigeria, using locally available materials. It has an extraction unit, a presser, a honey collector, and a frame. The harvested honeycomb is placed inside the cylindrical extraction unit with perforated holes. The press plate is then placed on the comb, while a 3-ton hydraulic press is placed on it, supported by the frame. The manually operated hydraulic press forces the honey out of the extraction chamber through the perforated holes into the honey collector positioned at the lowest part of the extraction chamber. The honey-extracting machine has an average throughput of 2.59 kg/min and an efficiency of about 91%. The cost of producing the machine is NGN 31,700 (thirty-one thousand, seven hundred naira), or about $70 at NGN 452.8 to the dollar. This cost is affordable to beekeepers and would-be honey entrepreneurs. The honey-extracting machine is easy to operate and maintain without any complex technical know-how.

Keywords: honey, extractor, cost, efficiency

Procedia PDF Downloads 80
3122 Plasma Chemical Gasification of Solid Fuel with Mineral Mass Processing

Authors: V. E. Messerle, O. A. Lavrichshev, A. B. Ustimenko

Abstract:

Currently and in the foreseeable future (up to 2100), the global economy is oriented toward the use of organic fuel, mostly solid fuels, whose share constitutes 40% of electric power generation. Therefore, the development of technologies for their effective and environmentally friendly application is a priority problem. This work presents the results of thermodynamic and experimental investigations of a plasma technology for processing low-grade coals. The use of this technology for producing target products (synthesis gas, hydrogen, technical carbon, and valuable components of the mineral mass of coals) meets the modern environmental and economic requirements applied to basic industrial sectors. The plasma technology of coal processing for the production of synthesis gas from the coal organic mass (COM) and valuable components from the coal mineral mass (CMM) is highly promising. Its essence is heating the coal dust by reducing electric arc plasma to the complete gasification temperature, at which the COM converts into synthesis gas free from particles of ash, nitrogen oxides, and sulfur. At the same time, oxides of the CMM are reduced by the carbon residue, producing valuable components such as technical silicon, ferrosilicon, aluminum, and silicon carbide, as well as microelements of rare metals such as uranium, molybdenum, vanadium, and titanium. Thermodynamic analysis of the process was made using the versatile computation program TERRA. Calculations were carried out in the temperature range 300-4000 K at a pressure of 0.1 MPa. Bituminous coal with an ash content of 40% and a heating value of 16,632 kJ/kg was taken for the investigation. The gaseous phase of the coal processing products consists basically of synthesis gas, with a concentration of up to 99 vol.% at 1500 K. CMM components completely convert from the condensed phase into the gaseous phase at temperatures above 2600 K.
At temperatures above 3000 K, the gaseous phase consists basically of Si, Al, Ca, Fe, Na, and the compounds SiO, SiH, AlH, and SiS. The latter compounds dissociate into the relevant elements with increasing temperature. Complex coal conversion for the production of synthesis gas from the COM and valuable components from the CMM was investigated using a versatile experimental plant, the main element of which was a plug-flow plasma reactor. The material and thermal balances were used to find the integral indicators of the process. Plasma-steam gasification of the low-grade coal with CMM processing gave a synthesis gas yield of 95.2%, carbon gasification of 92.3%, and coal desulfurization of 95.2%. The reduced material of the CMM was found in the slag in the form of ferrosilicon as well as silicon and iron carbides. The maximum reduction of the CMM oxides, reaching 47%, was observed in the slag from the walls of the plasma reactor in the areas with maximum temperatures. The synthesis gas thus produced can be used for the synthesis of methanol, as a high-calorific reducing gas instead of blast-furnace coke, or as power gas for thermal power plants. The reduced material of the CMM can be used in metallurgy.

Keywords: gasification, mineral mass, organic mass, plasma, processing, solid fuel, synthesis gas, valuable components

Procedia PDF Downloads 609
3121 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We seek association rules, ‘X → Y’, with confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology.
There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. Single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties, but they are also very different in other respects. The main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
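The core of the genotype-pattern search, enumerating digenic genotype patterns and computing the confidence P(Y = case | X), can be sketched in a few lines. This brute-force enumeration stands in for fpgrowth, and the toy genotype data are invented:

```python
from itertools import combinations

def digenic_patterns(genotypes, labels, min_support=5):
    """Enumerate pairs (variant_i = g_i, variant_j = g_j) and compute the
    confidence P(case | pattern), in the spirit of the rule X -> Y above."""
    n_variants = len(genotypes[0])
    results = []
    for i, j in combinations(range(n_variants), 2):
        counts = {}  # (g_i, g_j) -> [count in cases, count in controls]
        for row, y in zip(genotypes, labels):
            c = counts.setdefault((row[i], row[j]), [0, 0])
            c[0 if y == 1 else 1] += 1
        for (gi, gj), (in_cases, in_ctrls) in counts.items():
            support = in_cases + in_ctrls
            if support >= min_support:
                conf = in_cases / support  # P(Y = case | X = pattern)
                results.append(((i, gi), (j, gj), support, conf))
    return sorted(results, key=lambda r: -r[-1])

# Toy data: genotypes coded 0/1/2; the pair (v0=1, v1=1) is enriched in cases.
genos = [[1, 1, 0], [1, 1, 2], [1, 1, 0], [0, 2, 1], [1, 1, 1],
         [0, 0, 0], [2, 1, 0], [0, 0, 1], [1, 0, 2], [0, 2, 0]]
ys = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # 1 = case, 0 = control
top = digenic_patterns(genos, ys, min_support=4)[0]
```

In the actual framework the statistical significance of such a top pattern would then be assessed by permutation of the case/control labels.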

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 126
3120 Determination of Steel Cleanliness of Non-Grain Oriented Electrical Steels

Authors: Emre Alan, Zafer Cetin

Abstract:

Electrical steels are widely used as magnetic core materials in many electrical applications such as transformers, electric motors, and generators. The core loss property of these magnetic materials refers to the dissipation of electrical energy during magnetization in service conditions. Therefore, in order to minimize the magnetic core loss, certain precautions are taken by steel producers; steel cleanliness is one of the major points among them. Obtaining lower core loss values requires increasing the content of certain elements in the chemical composition, such as silicon. Therefore, the impurity levels of these alloying materials are key to producing cleaner steel. In this study, the effects of the impurity levels of different FeSi alloying materials on steel cleanliness will be investigated. One important element in FeSi alloy materials is calcium. An SEM investigation will be done in order to determine whether the Ca content in the FeSi alloy is enough for proper inclusion modification or whether an additional Ca treatment is required.

Keywords: electrical steels, FeSi alloy, impurities, steel cleanliness

Procedia PDF Downloads 334
3119 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and as such can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node, and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of the finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper includes a comparison of the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. The results calculated by this VCCT-based algorithm were found to be in good correlation with the results obtained from the mentioned analytical expressions.
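For 2D 4-node elements, the VCCT step reduces to computing the energy release rate from the crack-tip nodal force and the opening displacement one element behind the tip, G_I = F_y·Δv/(2Δa), and then K_I = √(G_I·E′) with E′ = E for plane stress or E/(1−ν²) for plane strain. A sketch with illustrative inputs (not the paper's models) follows:

```python
import math

def vcct_sif(f_y, delta_v, da, e_mod, nu, plane_strain=True):
    """Mode-I VCCT for 2D 4-node elements: energy release rate from the crack-tip
    nodal force f_y and the opening displacement delta_v one element length (da)
    behind the tip, then K_I from the Irwin relation (unit thickness assumed)."""
    g_i = f_y * delta_v / (2.0 * da)                  # G_I = F_y * dv / (2 * da)
    e_eff = e_mod / (1.0 - nu ** 2) if plane_strain else e_mod
    return math.sqrt(g_i * e_eff)                     # K_I = sqrt(G_I * E')

# Illustrative FE values in SI units (N, m, Pa), not taken from the paper.
k_i = vcct_sif(f_y=1200.0, delta_v=2.0e-6, da=1.0e-3, e_mod=210e9, nu=0.3)
```

Here G_I = 1.2 J/m², giving K_I of roughly 0.53 MPa·√m; in the paper's algorithm the force and displacement would come from the FE solution at the crack-tip node pair.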

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 307
3118 Fabrication of Tin Oxide and Metal Doped Tin Oxide for Gas Sensor Application

Authors: Goban Kumar Panneer Selvam

Abstract:

In past years, many deaths have been caused by harmful gases, so it is very important to monitor harmful gases for human safety, and semiconductor materials play an important role in producing effective gas sensors. A novel solvothermal synthesis method based on sol-gel processing was used to deposit tin oxide thin films on a glass substrate at high temperature for gas sensing applications. The structure and morphology of the tin oxide were analyzed by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). The SEM analysis shows sphere-shaped tin oxide nanoparticles. The structural characterization of the tin oxide by X-ray diffraction gives a crystallite size of 8.95 nm (calculated by the Scherrer equation). UV-visible spectroscopy indicated a maximum absorption band at 390 nm. The tin oxide was further doped with selected metals to attain maximum sensitivity, using the dip coating technique with different immersion times, and the sensing characteristics were measured.
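The crystallite-size estimate quoted above follows the Scherrer equation, D = Kλ/(β·cosθ). A quick check with illustrative peak parameters (Cu Kα wavelength; the FWHM and peak position below are assumed, not the paper's measured values) gives a size of the same order:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Scherrer crystallite size: D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM converted to radians and theta = 2theta / 2."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values: Cu K-alpha (0.15406 nm) and an assumed SnO2 (110) peak.
d = scherrer_size(0.15406, fwhm_deg=0.9, two_theta_deg=26.6)
```

With these assumed peak parameters the estimate comes out near 9 nm, comparable to the 8.95 nm reported.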

Keywords: tin oxide, gas sensor, chlorine free, sensitivity, crystalline size

Procedia PDF Downloads 150
3117 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configurations to minimize the total cost (CT) of the entire network. These facilities can be, but are not limited to, manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs). To address this problem, three major tasks are undertaken. First, a mixed integer nonlinear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimum solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimum solution, the integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG. It is a genetic algorithm on the basis of granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of the genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 319
3116 Introduction to Multi-Agent Deep Deterministic Policy Gradient

Authors: Xu Jie

Abstract:

As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with ever-changing cyber threats and environments. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem, and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay, and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
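As an illustration of the reinforcement-learning idea, the sketch below trains a single tabular Q-learning agent to balance jobs across service nodes. The reward shape, state bucketing, and parameters are illustrative assumptions, far simpler than the multi-agent deep deterministic policy gradient method described:

```python
import random

random.seed(0)

# Toy setting: 3 cryptographic-service nodes; at each step a job arrives and
# the agent picks a node. Reward penalizes the resulting maximum load, so the
# learned policy favours balanced allocation.
N_NODES, EPISODES, STEPS = 3, 300, 20
Q = {}  # state (bucketed load tuple) -> list of action values

def bucket(loads):
    return tuple(min(l, 5) for l in loads)   # cap the state space

def choose(state, eps=0.1):
    if random.random() < eps or state not in Q:
        return random.randrange(N_NODES)     # explore / unseen state
    return max(range(N_NODES), key=lambda a: Q[state][a])

alpha, gamma = 0.5, 0.9
for _ in range(EPISODES):
    loads = [0] * N_NODES
    for _ in range(STEPS):
        s = bucket(loads)
        a = choose(s)
        loads[a] += 1
        r = -max(loads)                      # balanced load => less negative
        s2 = bucket(loads)
        Q.setdefault(s, [0.0] * N_NODES)
        Q.setdefault(s2, [0.0] * N_NODES)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])

# Greedy rollout with the learned table.
loads = [0] * N_NODES
for _ in range(STEPS):
    loads[choose(bucket(loads), eps=0.0)] += 1
print(loads)
```

The paper's setting replaces the table with per-agent actor and critic networks and a multi-objective reward covering energy, migration cost, and security fitness.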

Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents

Procedia PDF Downloads 27
3115 Preparation of Geopolymer Cements from Tunisian Illito-Kaolinitic Clay Mineral

Authors: N. Hamdi, E. Srasra

Abstract:

In this work, geopolymer cements are synthesized from a Tunisian illito-kaolinitic clay. This product can be used as a binding material in place of Portland cement. The clay fractions used were characterized by physico-chemical and thermal analyses. The clay materials were reacted with alkaline solutions (10, 14 and 18 mol(NaOH)/L) to produce geopolymer cements, whose pastes were characterized by determining their water adsorption and compressive strength. The compressive strength of the hardened geopolymer cement paste samples aged 28 days reached its highest value (32.3 MPa) for a clay calcination temperature of around 950°C and a NaOH concentration of 14 M. The water adsorption of the prepared samples decreased with increasing calcination temperature of the clay fractions. It can be concluded that the most suitable calcination temperature of illito-kaolinitic clays for producing geopolymer cements is around 950°C.

Keywords: compressive strength, geopolymer cement, illito-kaolinitic clay, mineral

Procedia PDF Downloads 255
3114 Accuracy of a 3D-Printed Polymer Model for Producing Casting Mold

Authors: Ariangelo Hauer Dias Filho, Gustavo Antoniácomi de Carvalho, Benjamim de Melo Carvalho

Abstract:

The work's purpose was to evaluate the possibility of manufacturing casting tools using fused filament fabrication, a 3D printing technique, without any post-processing of the printed part. A Taguchi orthogonal array was used to evaluate the influence of extrusion temperature, bed temperature, layer height, and infill on the dimensional accuracy of a 3D-printed polymer model. A Zeiss T-SCAN CS 3D scanner was used for dimensional evaluation of the printed parts within a limit of ±0.2 mm. The mold capabilities were tested with the printed model to check how it would interact with the green sand. With small adjustments to the 3D model, it was possible to produce rapid tools for iron casting without the need for post-processing. These results are important for reducing the time and cost of developing such tools.
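The Taguchi analysis step can be illustrated as follows: with a smaller-the-better quality characteristic such as dimensional deviation, each run of the orthogonal array is summarized by a signal-to-noise ratio, and the run with the highest ratio is the most robust setting. The deviation values below are hypothetical, not the paper's measurements:

```python
import math

# Hypothetical dimensional deviations (mm) from repeated measurements of four
# orthogonal-array runs (each run is one combination of the process factors).
runs = {
    "run1": [0.12, 0.15, 0.11],
    "run2": [0.05, 0.07, 0.06],
    "run3": [0.20, 0.18, 0.22],
    "run4": [0.09, 0.08, 0.10],
}

def sn_smaller_better(ys):
    # Taguchi smaller-the-better S/N ratio: -10 * log10(mean(y^2)).
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

ratios = {k: sn_smaller_better(v) for k, v in runs.items()}
best = max(ratios, key=ratios.get)   # highest S/N = smallest, most stable deviation
print(best)
```

A full analysis would then average the S/N ratios per factor level to rank the influence of each factor.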

Keywords: additive manufacturing, Taguchi method, rapid tooling, fused filament fabrication, casting mold

Procedia PDF Downloads 146
3113 Carbon Nanotubes Based Porous Framework for Filtration Applications Using Industrial Grinding Waste

Authors: V. J. Pillewan, D. N. Raut, K. N. Patil, D. K. Shinde

Abstract:

Forging, milling, turning, grinding, shaping, etc. are industrial manufacturing processes that generate metal waste. Grinding is extensively used in finishing operations. The waste generated contains significant impurities apart from the metal particles. Because of these impurities, the waste is difficult to process and is usually dumped in landfills, creating environmental problems. It is therefore essential to reuse metal waste to create value-added products. A powder injection molding process is used to produce the porous metal matrix framework. This paper presents the design of a porous framework for liquid filtration applications. Different parameters are optimized to obtain a higher-strength framework with variable porosity. Carbon nanotubes are used as a reinforcing material to enhance the strength of the metal matrix framework.

Keywords: grinding waste, powder injection molding (PIM), carbon nanotubes (CNTs), metal matrix composites (MMCs)

Procedia PDF Downloads 307
3112 Bioremediation of Arsenic from Industrially Polluted Soil of Vatva, Ahmedabad, Gujarat, India

Authors: C. Makwana, S. R. Dave

Abstract:

Arsenic is toxic to almost all living cells, and its contamination of natural sources affects the growth of microorganisms. The presence of arsenic is also associated with various human disorders. This study provides information on the performance of our isolated microorganisms in the presence of arsenic, which offers ample scope for bioremediation. Six isolates were selected from a polluted sample from the industrial zone of Vatva, Ahmedabad, Gujarat, India, of which two were thermophilic organisms. The thermophilic exopolysaccharide (EPS)-producing Bacillus was used for microbially enhanced oil recovery (MEOR) and in bio-beneficiation. Inorganic arsenic primarily exists in the form of arsenate or arsenite. The arsenic-resistant isolate was capable of transforming As(III) to As(V), which makes it useful for arsenic remediation of aquatic systems. The study revealed that the thermophilic microorganism, growing at 55°C, showed considerable remediation capability. The results on growth and enzyme catalysis are discussed in relation to arsenic remediation.

Keywords: aquatic systems, thermophilic, exopolysaccharide, arsenic

Procedia PDF Downloads 215
3111 Improvement on a CNC Gantry Machine Structure Design for Higher Machining Speed Capability

Authors: Ahmed A. D. Sarhan, S. R. Besharaty, Javad Akbaria, M. Hamdi

Abstract:

The capability of CNC gantry milling machines to manufacture long components has expanded the use of such machines. On the other hand, the rigidity of the gantry can decrease under severe loads or vibration during operation. Indeed, machining quality depends on the machine's dynamic behavior throughout the operating process. For this reason, this type of machine has always been used prudently and inefficiently: it is usually employed for rough machining and may not produce an adequate surface finish. In this paper, a CNC gantry milling machine with the potential to produce a good surface finish is designed and analyzed. The lowest natural frequency of this machine is 202 Hz at all motion amplitudes, with a full range of suitable frequency responses. Meanwhile, the maximum deformation under dead loads for the gantry machine is 0.565 µm, indicating that this machine tool is capable of producing higher product quality.

Keywords: frequency response, finite element, gantry machine, gantry design, static and dynamic analysis

Procedia PDF Downloads 360
3110 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest.
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084 on account of n=20 missing postoperative ECG), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of the ECG-based screening.
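The threshold-based screening metrics reported here can be reproduced on any labeled sample as below. The probabilities and outcomes are made-up toy data, and the 0.08 cut-off mirrors the published-style threshold mentioned above:

```python
# Hypothetical AI-ECG probabilities and POAF outcomes (1 = developed POAF);
# a screen is "positive" when the probability is at or above the threshold.
probs  = [0.02, 0.05, 0.12, 0.30, 0.07, 0.09, 0.01, 0.25, 0.04, 0.15]
truth  = [0,    0,    1,    1,    1,    0,    0,    1,    0,    0   ]
THRESH = 0.08

tp = sum(p >= THRESH and t for p, t in zip(probs, truth))       # flagged cases
fn = sum(p <  THRESH and t for p, t in zip(probs, truth))       # missed cases
tn = sum(p <  THRESH and not t for p, t in zip(probs, truth))   # correct negatives
fp = sum(p >= THRESH and not t for p, t in zip(probs, truth))   # false alarms

sensitivity = tp / (tp + fn)   # fraction of POAF cases the screen catches
npv = tn / (tn + fn)           # how safely a negative screen rules POAF out
print(sensitivity, npv)
```

A high NPV with modest sensitivity, as in the study, means a negative screen is reassuring even though the screen misses some cases.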

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 137
3109 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, EEG-based biometrics also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures were deployed as the feature set: sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE). In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster, and to all data points in the closest other cluster, is determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there was no significant difference in authentication performance among feature sets (except feature PE).
Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
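A minimal hand-rolled version of the silhouette computation (for the k=2 case) is sketched below on toy feature vectors; the data here are illustrative assumptions, and in practice library routines such as scikit-learn's silhouette_score with KMeans would be used:

```python
import math

# Toy 2-D feature vectors (e.g., two entropy features per sample) forming two
# well-separated groups, labeled as a k-means run with k=2 would assign them.
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25),
          (1.0, 1.1), (1.1, 0.9), (0.95, 1.05)]
labels = [0, 0, 0, 1, 1, 1]

def silhouette(points, labels):
    scores = []
    for i, p in enumerate(points):
        same = [math.dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        other = [math.dist(p, q) for j, q in enumerate(points)
                 if labels[j] != labels[i]]
        a = sum(same) / len(same)      # mean intra-cluster distance
        b = sum(other) / len(other)    # mean distance to the other cluster (k=2)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# A mean silhouette near 1 indicates compact, well-separated clusters.
print(round(silhouette(points, labels), 2))
```

In the study's setting, this score is what allows feature sets and electrode subsets to be compared by how cleanly subjects separate into clusters.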

Keywords: personal authentication, k-means clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 286
3108 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). However, the MCG signal is buried in noise and difficult to extract, a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement is divided into three parts. First, higher-order TV is applied to reduce the staircase effect, with the corresponding second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving signal peak characteristics. Theoretical analysis and experimental results show that this algorithm effectively improves the output signal-to-noise ratio and has superior performance.
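The baseline first-order TV step that the paper improves upon can be sketched as follows, using the standard majorization-minimization iteration in which each update solves a tridiagonal linear system. The signal, λ, and iteration count are illustrative assumptions, not the paper's improved algorithm:

```python
# 1-D total-variation denoising via majorization-minimization (MM):
#   min_x 0.5*||y - x||^2 + lam*||Dx||_1,  D = first-difference matrix.
# Each MM step solves (D D^T + (1/lam)*diag(|Dx_k|)) z = D y, then x = y - D^T z.
# D D^T is tridiagonal, so the Thomas algorithm applies.

def diff(x):                           # D x
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def diff_t(z, n):                      # D^T z
    out = [0.0] * n
    for i, zi in enumerate(z):
        out[i] -= zi
        out[i + 1] += zi
    return out

def thomas(a, b, c, d):                # solve tridiagonal system (a sub, b diag, c super)
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tv_denoise(y, lam=0.5, iters=30, eps=1e-8):
    n = len(y)
    x = list(y)
    dy = diff(y)
    for _ in range(iters):
        w = [abs(u) / lam for u in diff(x)]        # (1/lam)*|Dx_k| on the diagonal
        b = [2.0 + wi + eps for wi in w]           # D D^T has 2 on its diagonal
        a = [-1.0] * (n - 1)                       # sub-diagonal of D D^T
        c = [-1.0] * (n - 1)                       # super-diagonal of D D^T
        z = thomas(a, b, c, dy)
        x = [yi - ti for yi, ti in zip(y, diff_t(z, n))]
    return x

# Noisy step signal: first-order TV recovers the flat pieces (and illustrates
# the staircase behavior the paper's higher-order variant is designed to avoid).
y = [0.0, 0.1, -0.05, 0.08, 1.0, 0.95, 1.1, 1.02]
x = tv_denoise(y)
print([round(v, 2) for v in x])
```

The paper's improvement replaces D with a second-derivative matrix and makes λ adaptive around detected peaks.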

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 154
3107 Information Exchange Process Analysis between Authoring Design Tools and Lighting Simulation Tools

Authors: Rudan Xue, Annika Moscati, Rehel Zeleke Kebede, Peter Johansson

Abstract:

Successful building simulation and analysis inevitably require information exchange between multiple building information modeling (BIM) software tools. BIM information exchange based on IFC is widely used. However, Industry Foundation Classes (IFC) files are not always reliable, and information can get lost when using different software for modeling and simulation. In this research, interviews with lighting simulation experts and a case study provided by a company producing lighting devices were the research methods used to identify the necessary steps and data for successful information exchange between lighting simulation tools and authoring design tools. Model creation, information exchange, and model simulation were identified as key aspects for successful information exchange. The paper concludes with recommendations for improved information exchange and more reliable simulations that take all the needed parameters into consideration.

Keywords: BIM, data exchange, interoperability issues, lighting simulations

Procedia PDF Downloads 245
3106 AI-based Optimization Model for Plastics Biodegradable Substitutes

Authors: Zaid Almahmoud, Rana Mahmoud

Abstract:

To mitigate the environmental impact of discarded plastic waste, there has been recent interest in manufacturing biodegradable plastics. Here, we study a new class of biodegradable plastics that are mixed with external natural additives, including catalytic additives that lead to successful degradation of the resulting material. To recommend the best alternative among multiple materials, we propose a multi-objective AI model that evaluates each material against multiple objectives given its material properties. As a proof of concept, the AI model was implemented in an expert system and evaluated using multiple materials. Our findings showed that polyethylene terephthalate is potentially the best biodegradable plastic substitute based on its material properties. It is therefore recommended that governments shift attention to the use of polyethylene terephthalate in the manufacturing of bottles to gain environmental and sustainability benefits.
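One simple way to realize such a multi-objective evaluation is a weighted-sum scalarization over normalized properties, sketched below. The property values, weights, and candidate list are illustrative assumptions, not the paper's actual model or data:

```python
# Hypothetical material properties normalized to [0, 1] (higher = better):
# degradability, tensile strength, and cost-effectiveness.
candidates = {
    "PET": {"degradability": 0.70, "strength": 0.90, "cost": 0.80},
    "PLA": {"degradability": 0.90, "strength": 0.50, "cost": 0.50},
    "PHA": {"degradability": 0.95, "strength": 0.40, "cost": 0.30},
}
weights = {"degradability": 0.40, "strength": 0.35, "cost": 0.25}

def score(props):
    # Weighted-sum scalarization of the multi-objective evaluation.
    return sum(weights[k] * v for k, v in props.items())

best = max(candidates, key=lambda m: score(candidates[m]))
print(best)
```

On these illustrative numbers the weighted sum happens to favour PET, mirroring the abstract's conclusion; a real expert system would draw the property values and weights from measured data and domain rules.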

Keywords: plastic bottles, expert systems, multi-objective model, biodegradable substitutes

Procedia PDF Downloads 116
3105 Production of Low-Density Nanocellular Foam Based on PMMA/PEBAX Blends

Authors: Nigus Maregu Demewoz, Shu-Kai Yeh

Abstract:

Low-density nanocellular foam is a fascinating new-generation advanced material due to its mechanical strength and thermal insulation properties. In nanocellular foams, reducing the density increases the insulation ability. However, producing a nanocellular foam with a relative density below 0.3 and a cell size below 100 nm is very challenging. In this study, poly(methyl methacrylate) (PMMA) was blended with polyether block amide (PEBAX) to study the effects of PEBAX on the nanocellular foam structure of the PMMA matrix. We added 2 wt% PEBAX to the PMMA matrix, and PEBAX nanostructured domains of 45 nm were well dispersed in it. Foaming produced a new-generation, bouquet-like nanocellular foam with a cell size below 50 nm and a relative density of 0.24; we were also able to produce a nanocellular foam with a relative density of about 0.17. In addition to thermal insulation applications, bouquet-like nanocellular foam may be promising for filtration applications.

Keywords: nanocellular foam, low-density, cell size, relative density, PMMA/PEBAX

Procedia PDF Downloads 82
3104 Molding Properties of Cobalt-Chrome-Based Feedstocks Used in Low-Pressure Powder Injection Molding

Authors: Ehsan Gholami, Vincent Demers

Abstract:

Low-pressure powder injection molding is an emerging technology for cost-effectively producing complex-shaped metallic parts with proper dimensional tolerances, in either high or low production volumes. In this study, the molding properties of cobalt-chrome-based feedstocks were evaluated for use in a low-pressure powder injection molding process. Feedstock formulations were obtained by mixing metallic powder with a proprietary wax-based binder system, and their rheological properties were measured. Rheological parameters such as the reference viscosity, the shear-rate sensitivity index, and the activation energy for viscous flow were extracted from the viscosity profiles and introduced into the Weir model to calculate the moldability index. Feedstocks were experimentally injected into a spiral mold cavity to validate the injection performance calculated with the model.

Keywords: binder, feedstock, moldability, powder injection molding, viscosity

Procedia PDF Downloads 274
3103 Enabling Participation of Deaf People in the Co-Production of Services: An Example in Service Design, Commissioning and Delivery in a London Borough

Authors: Stephen Bahooshy

Abstract:

Co-producing services with the people that access them is considered best practice in the United Kingdom, with the Care Act 2014 arguing that people who access services and their carers should be involved in the design, commissioning and delivery of services. Co-production is a way of working with the community, breaking down barriers of access and providing meaningful opportunity for people to engage. Unfortunately, owing to a number of reported factors such as time constraints, practitioner experience and departmental budget restraints, this process is not always followed. In 2019, in a south London borough, d/Deaf people who access services were engaged in the design, commissioning and delivery of an information and advice service that would support their community to access local government services. To do this, sensory impairment social workers and commissioners collaborated to host a series of engagement events with the d/Deaf community. Interpreters were used to enable communication between the commissioners and d/Deaf participants. Initially, the community’s opinions, ideas and requirements were noted. This was then summarized and fed back to the community to ensure accuracy. Subsequently, a service specification was developed which included performance metrics, inclusive of qualitative and quantitative indicators, such as ‘I statements’, whereby participants respond on an adapted Likert scale how much they agree or disagree with a particular statement in relation to their experience of the service. The service specification was reviewed by a smaller group of d/Deaf residents and social workers, to ensure that it met the community’s requirements. The service was then tendered using the local authority’s e-tender process. Bids were evaluated and scored in two parts; part one was by commissioners and social workers and part two was a presentation by prospective providers to an evaluation panel formed of four d/Deaf residents. 
The internal evaluation panel formed 75% of the overall score, whilst the d/Deaf resident evaluation panel formed 25% of the overall tender score. Co-producing the evaluation panel with social workers and the d/Deaf community meant that commissioners were able to meet the requirements of this community by developing evaluation questions and tools that were easily understood and used by this community. For example, the wording of questions was reviewed, and the scoring mechanism consisted of three faces to reflect the d/Deaf residents' scores instead of traditional numbering: a happy face, a neutral face and a sad face. By making simple changes to the commissioning and tender evaluation process, d/Deaf people were able to have meaningful involvement in the design and commissioning process for a service that would benefit their community. Co-produced performance metrics mean that it is incumbent on the successful provider to continue to engage with people accessing the service and ensure that the feedback is utilized. d/Deaf residents were grateful to have been involved in this process, as this was not an opportunity that they had previously been afforded. In recognition of their time, each d/Deaf resident evaluator received a £40 gift voucher, bringing the total cost of this co-production to £160.

Keywords: co-production, community engagement, deaf and hearing impaired, service design

Procedia PDF Downloads 274