Search results for: data transfer optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29556

27726 Optimization of Tundish Geometry for Minimizing Dead Volume Using OpenFOAM

Authors: Prateek Singh, Dilshad Ahmad

Abstract:

Growing demand for high-quality steel products has inspired researchers to investigate the unit operations involved in the manufacturing of these products (slabs, rods, sheets, etc.). One such operation is the tundish operation, in which a vessel (tundish) acts as a buffer of molten steel for the solidification operation in the mold. It is observed that the tundish also plays a crucial role in the quality and cleanliness of the steel produced, besides merely acting as a reservoir for the mold. It facilitates the removal of dissolved oxygen (inclusions) from the molten steel, thus improving its cleanliness. Inclusion removal can be enhanced by increasing the residence time of molten steel in the tundish through the incorporation of flow modifiers like dams, weirs, turbo-pads, etc. These flow modifiers also help in reducing the dead or short-circuit zones within the tundish, which is significant for maintaining the thermal and chemical homogeneity of the molten steel. Thus, it becomes important to analyze the flow of molten steel in the tundish for different configurations of flow modifiers. In the present work, the effect of varying positions and heights/depths of the dam and weir on the dead volume in the tundish is studied. Steady-state thermal and flow profiles of molten steel within the tundish are obtained using OpenFOAM. Subsequently, Residence Time Distribution analysis is performed to obtain the percentage of dead volume in the tundish. The Design of Experiments method is then used to configure different tundish geometries for varying positions and heights/depths of the dam and weir, and the dead volume for each tundish design is obtained. A second-degree polynomial with two-term interactions of the independent variables (positions and heights/depths of the dam and weir) is fitted using a Multiple Linear Regression model to predict the dead volume in the tundish. This polynomial is then used in an optimization framework to obtain the tundish geometry that minimizes dead volume using Sequential Quadratic Programming.
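
As a rough illustration of the last two steps, a minimal sketch is given below, assuming a hypothetical fitted response surface (the coefficients, variable names, and bounds are placeholders, not values from the study): the dead-volume polynomial is minimized with SciPy's SLSQP routine, an implementation of sequential quadratic programming.

```python
# Illustrative sketch (not the paper's actual fit): minimizing a second-degree
# response surface for dead volume with SciPy's SLSQP (an SQP implementation).
# The polynomial coefficients and variable bounds below are hypothetical.
import numpy as np
from scipy.optimize import minimize

def dead_volume(x):
    """Quadratic response surface in (dam position, weir position, dam height, weir depth)."""
    x1, x2, x3, x4 = x
    return (12.0 - 1.5*x1 - 2.0*x2 - 0.8*x3 - 0.6*x4
            + 0.9*x1**2 + 1.1*x2**2 + 0.4*x3**2 + 0.3*x4**2
            + 0.5*x1*x2 - 0.2*x1*x3 + 0.1*x2*x4)   # two-term interactions

bounds = [(0.2, 0.8), (0.2, 0.8), (0.05, 0.3), (0.05, 0.3)]  # normalized design ranges
x0 = np.array([0.5, 0.5, 0.15, 0.15])

res = minimize(dead_volume, x0, method="SLSQP", bounds=bounds)
print("optimal geometry:", res.x, "predicted dead volume (%):", res.fun)
```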

Keywords: design of experiments, multiple linear regression, OpenFOAM, residence time distribution, sequential quadratic programming optimization, steel, tundish

Procedia PDF Downloads 208
27725 Origin of Hydrogen Bonding: Natural Bond Orbital Electron Donor-Acceptor Interactions

Authors: Mohamed Ayoub

Abstract:

We perform a computational investigation using density functional theory (B3LYP with the aug-cc-pVTZ basis set), followed by natural bond orbital (NBO) analysis, which provides the best single “natural Lewis structure” (NLS) representation of the chosen wavefunction (Ψ), together with natural resonance theory (NRT) to analyze the molecular electron density in terms of resonance structures (RS) and weights (w). We selected for the study a wide range of gas-phase dimers (B…HA), with hydrogen bond dissociation energies (ΔEB…H) that span more than two orders of magnitude. We demonstrate that charge transfer from a donor Lewis-type NBO (nB:) to an acceptor non-Lewis-type NBO (σHA*) is the primary cause of H-bonding, not classical electrostatics (dipole-dipole or ionic). We provide a variety of structural and spectroscopic descriptors to support this conclusion, such as the IR frequency shift (ΔνHA), H-bond penetration distance (ΔRB..H), bond order (bB..H), charge transfer (CTB→HA), and the corresponding donor-acceptor stabilization energy (ΔE(2)).

Keywords: natural bond orbital, hydrogen bonding, electron donor, electron acceptor

Procedia PDF Downloads 438
27724 Numerical Investigation of Solid Subcooling on a Low Melting Point Metal in Latent Thermal Energy Storage Systems Based on Flat Slab Configuration

Authors: Cleyton S. Stampa

Abstract:

This paper addresses the prospects of using low melting point metals (LMPMs) as phase change materials (PCMs) in latent thermal energy storage (LTES) units through a numerical approach. This is a new class of PCMs that has become one of the most promising alternatives for LTES, because these materials present high thermal conductivity and elevated heat of fusion per unit volume. The chosen type of LTES consists of several horizontal parallel slabs filled with PCM. The heat transfer fluid (HTF) circulates in a laminar regime, by forced convection, through the channel formed between each pair of consecutive slabs. The study deals with the LTES charging process (heat storing) using pure gallium as the PCM, and it considers heat conduction in the solid phase during melting driven by natural convection in the melt. The transient heat transfer problem is analyzed in one arbitrary slab under the influence of the HTF. The mathematical model used to simulate the isothermal phase change is based on a volume-averaged enthalpy method, which is successfully verified by comparing its predictions with experimental data from works available in the pertinent literature. Regarding the convective heat transfer problem in the HTF, it is assumed that the flow is thermally developing, whereas the velocity profile is already fully developed. The study aims to assess the effect of solid subcooling on the melting rate through comparisons with the melting of a solid that starts at its fusion temperature. To better understand this effect in a metallic compound such as pure gallium, the study also evaluates, under the same conditions established for the gallium, the melting of a commercial paraffin wax (an organic compound) and of calcium chloride hexahydrate (CaCl₂·6H₂O, an inorganic compound). The present work adopts the options that several researchers have established in their parametric studies as leading to high thermal efficiency for this type of LTES. Concerning the geometric aspects, these are the gap of the channel formed by two consecutive slabs and the thickness and length of the slab; concerning the HTF, they are the type of fluid, the mass flow rate, and the inlet temperature.
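
For readers unfamiliar with the volume-averaged enthalpy method, a minimal one-dimensional sketch is shown below (explicit time stepping, constant properties); the property values, slab thickness, and boundary temperatures are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal 1-D enthalpy-method sketch for melting of a subcooled solid slab
# (explicit scheme, constant properties). Property values are illustrative,
# not the paper's gallium data.
import numpy as np

L, N = 0.02, 100                   # slab thickness [m], grid points
dx = L / (N - 1)
k, rho, cp = 30.0, 6000.0, 400.0   # W/mK, kg/m3, J/kgK (assumed)
Lf, Tm = 80e3, 302.9               # latent heat [J/kg], melting point [K]
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha           # stable explicit time step

T = np.full(N, Tm - 10.0)          # subcooled initial state
H = rho * cp * (T - Tm)            # volumetric enthalpy relative to Tm
T_wall = Tm + 20.0                 # hot boundary (HTF side)

def temperature(H):
    """Invert the enthalpy-temperature relation for an isothermal phase change."""
    T = np.where(H < 0, Tm + H / (rho * cp), Tm)                      # solid
    T = np.where(H > rho * Lf, Tm + (H - rho * Lf) / (rho * cp), T)   # liquid
    return T

for _ in range(20000):
    T = temperature(H)
    T[0] = T_wall                  # Dirichlet hot face
    T[-1] = T[-2]                  # adiabatic back face
    H[1:-1] += dt * k * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2

liquid_fraction = np.clip(H / (rho * Lf), 0, 1).mean()
print("mean liquid fraction:", liquid_fraction)
```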

Keywords: flat slab, heat storing, pure metal, solid subcooling

Procedia PDF Downloads 141
27723 Optimization Techniques for Microwave Structures

Authors: Malika Ourabia

Abstract:

A new and efficient method is presented for the analysis of arbitrarily shaped discontinuities. The discontinuities are characterized using a hybrid spectral/numerical technique. The structure presents an arbitrary number of ports, each with a different orientation and dimensions. This article presents a hybrid method based on multimode contour integral and mode matching techniques. The process is based on segmentation, dividing the structure into key building blocks. We use the multimode contour integral method to analyze the blocks containing irregularly shaped discontinuities. Finally, the multimode scattering matrix of the whole structure is found by cascading the blocks. The new method is therefore suitable for the analysis of a wide range of waveguide problems and can be applied easily to any multiport junction and cascade of blocks. The accuracy of the method is validated by comparison with results for several complex problems found in the literature. CPU times are also included to show the efficiency of the proposed method.
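
The final cascading step can be illustrated with a short sketch: two blocks with generalized (multimode) scattering matrices are combined with the Redheffer star product. The block matrices below are random placeholders standing in for the matrices that the contour-integral/mode-matching analysis would produce.

```python
# Hedged illustration of cascading two multimode building blocks that share n
# modes at their common port (Redheffer star product). Block S-matrices here
# are random placeholders, not results of the paper's analysis.
import numpy as np

def cascade(SA, SB, n):
    """Cascade two 2n-port generalized S-matrices sharing n modes at the junction."""
    A11, A12 = SA[:n, :n], SA[:n, n:]
    A21, A22 = SA[n:, :n], SA[n:, n:]
    B11, B12 = SB[:n, :n], SB[:n, n:]
    B21, B22 = SB[n:, :n], SB[n:, n:]
    M = np.linalg.inv(np.eye(n) - A22 @ B11)
    S11 = A11 + A12 @ B11 @ M @ A21
    S12 = A12 @ np.linalg.inv(np.eye(n) - B11 @ A22) @ B12
    S21 = B21 @ M @ A21
    S22 = B22 + B21 @ M @ A22 @ B12
    return np.block([[S11, S12], [S21, S22]])

n = 3                                         # number of modes retained at each port
rng = np.random.default_rng(0)
SA = rng.standard_normal((2*n, 2*n)) * 0.3    # placeholder block S-matrices
SB = rng.standard_normal((2*n, 2*n)) * 0.3
S_total = cascade(SA, SB, n)
print(S_total.shape)
```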

Keywords: segmentation, s parameters, simulation, optimization

Procedia PDF Downloads 528
27722 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties

Authors: G. Martino, F. Silva, E. Marchal

Abstract:

The control over delivered iron ore blend characteristics is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process. Therefore, end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made in an uncoordinated way, the beneficiation plant struggles to deliver the best possible blend. Technological improvements in several fields have allowed bridging the gap between departments and boosting integrated decision-making processes. Clusterization and classification algorithms over historical production data generate reasonable predictions of the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization combined with a planning optimization model spanning the mine and the ore beneficiation processes can reduce the risk of out-of-specification deliveries. The model capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing cost or increasing profit by optimizing process indicators across the production chain and integrating the different planning stages with the sales decisions.
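
The blending idea behind such a model can be sketched as a small linear program (a hedged illustration only; the pile grades, costs, and specification limits below are invented, and the authors' full model is far richer): choose tonnages from ROM piles so that the delivered blend meets Fe and silica specifications at minimum cost.

```python
# Minimal blending sketch (not the authors' model): choose tonnages x_i from ROM
# piles with known Fe and silica content so the delivered blend meets
# specifications at minimum cost. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

fe = np.array([0.62, 0.58, 0.65, 0.60])      # Fe fraction of each pile
si = np.array([0.060, 0.085, 0.045, 0.070])  # silica fraction of each pile
cost = np.array([18.0, 12.0, 25.0, 15.0])    # processing cost per tonne
avail = np.array([40e3, 60e3, 25e3, 50e3])   # tonnes available per pile
demand = 100e3                               # tonnes to deliver

# Blend constraints: Fe >= 61%, SiO2 <= 6.5% (written as <= inequalities)
A_ub = np.vstack([-(fe - 0.61), (si - 0.065)])
b_ub = np.zeros(2)
A_eq = np.ones((1, len(fe)))
b_eq = [demand]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=list(zip(np.zeros(4), avail)))
print("tonnes per pile:", res.x, "total cost:", res.fun)
```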

Keywords: clusterization and classification algorithms, integrated planning, mathematical modeling, optimization, penalty minimization

Procedia PDF Downloads 123
27721 Stock Prediction and Portfolio Optimization Thesis

Authors: Deniz Peksen

Abstract:

This thesis aims to predict the trend movement of stock closing prices and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting, and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data are between 2002-06-18 and 2016-12-30, validation data are between 2017-01-02 and 2019-12-31, and testing data are between 2020-01-02 and 2022-03-17. We set the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate as the benchmarks to outperform, and compared the return of our machine-learning-based portfolio on the test data with the returns of these benchmarks. We assessed model performance with the help of the ROC-AUC score and lift charts. We used Logistic Regression, Gradient Boosting, and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the existence of uptrends and downtrends of five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results agree with the Random Walk Theory, which claims that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, they did not generalize. We implemented Random Forest, Gradient Boosting, and Logistic Regression, and discovered that the more complex models did not provide any advantage or additional performance compared with Logistic Regression; more complexity did not lead to better performance, so using a complex model is not the answer to the stock-related prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labeling approach does not solve the stock prediction problem, nor does it refute the accuracy of the Random Walk Theory for stock prices.
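
The modeling step can be illustrated with a schematic sketch (synthetic features and labels, not the thesis data): a grid-searched logistic regression pipeline scored by ROC-AUC with a time-ordered split, mirroring the train/validation/test arrangement described above.

```python
# Schematic sketch of the modeling step (grid-searched logistic regression scored
# by ROC-AUC). Features and the 20-day trend label are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.standard_normal((3000, 15))                 # e.g., technical indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(3000) > 0).astype(int)  # uptrend label

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
grid = {"logisticregression__C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(pipe, grid, scoring="roc_auc", cv=TimeSeriesSplit(n_splits=5))
search.fit(X[:2400], y[:2400])                       # train/validation period
print("validation AUC:", search.best_score_)
print("test AUC:", search.score(X[2400:], y[2400:])) # held-out test period
```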

Keywords: stock prediction, portfolio optimization, data science, machine learning

Procedia PDF Downloads 80
27720 Performance Optimization on Waiting Time Using Queuing Theory in an Advanced Manufacturing Environment: Robotics to Enhance Productivity

Authors: Ganiyat Soliu, Glen Bright, Chiemela Onunka

Abstract:

Performance optimization plays a key role in controlling waiting time during manufacturing in an advanced manufacturing environment to improve productivity. Queuing mathematical modeling theory was used to examine the performance of a multi-stage production line. Robotics, as a disruptive technology, was implemented in a virtual manufacturing scenario during the packaging process to study the effect of waiting time on productivity. The queuing mathematical model was used to determine the optimum service rate required by the robots during the packaging stage of manufacturing to yield an optimum production cost. Different production rates were assumed in the virtual manufacturing environment, and the cost of packaging was estimated together with the optimum production cost. An equation was generated using queuing theory, and the Newton-Raphson method was adopted for the analysis of the scenario. The queuing theory presented here provides an adequate analysis of the number of robots required to regulate waiting time and thereby increase output. The arrival rate of the product was high, which shows that the queuing mathematical model was effective in minimizing the service cost and the waiting time during manufacturing. At a reduced waiting time, there was an improvement in the number of products obtained per hour. The overall productivity was improved based on the assumptions used in the queuing modeling theory implemented in the virtual manufacturing scenario.
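
A minimal sketch of this kind of calculation is given below, assuming an M/M/c queue for the packaging station (the arrival rate, robot count, and waiting-time target are invented): the Erlang C formula gives the mean waiting time, and Newton-Raphson finds the service rate per robot that meets the target.

```python
# Sketch of an M/M/c waiting-time model plus a Newton-Raphson search for the
# service rate that meets a target waiting time. Arrival rate, number of robots,
# and target are assumed values for illustration only.
from math import factorial

def wq_mmc(lam, mu, c):
    """Mean waiting time in queue for an M/M/c system (Erlang C)."""
    a = lam / mu
    rho = a / c
    if rho >= 1:
        return float("inf")
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c)) + a**c / (factorial(c) * (1 - rho)))
    erlang_c = a**c / (factorial(c) * (1 - rho)) * p0
    return erlang_c / (c * mu - lam)

def newton_service_rate(lam, c, wq_target, tol=1e-8):
    """Find mu such that Wq(mu) = wq_target via Newton-Raphson (numerical derivative)."""
    mu = 1.1 * lam / c                    # start just above the stability limit
    for _ in range(100):
        f = wq_mmc(lam, mu, c) - wq_target
        h = 1e-6 * mu
        df = (wq_mmc(lam, mu + h, c) - wq_mmc(lam, mu - h, c)) / (2 * h)
        step = f / df
        mu -= step
        if abs(step) < tol:
            break
    return mu

lam, c, target = 40.0, 3, 0.05            # products/hour, robots, target wait (hours)
mu = newton_service_rate(lam, c, target)
print("required service rate per robot:", mu, "Wq:", wq_mmc(lam, mu, c))
```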

Keywords: performance optimization, productivity, queuing theory, robotics

Procedia PDF Downloads 154
27719 Optimization Study of Adsorption of Nickel(II) on Bentonite

Authors: B. Medjahed, M. A. Didi, B. Guezzen

Abstract:

This work concerns the experimental study of the adsorption of Ni(II) on bentonite. The effects of various parameters, such as contact time, stirring rate, initial concentration of Ni(II), mass of clay, initial pH of the aqueous solution, and temperature, on the adsorption yield were investigated. The effect of the ionic strength on the adsorption yield was examined through the identification and quantification of the chemical species present in the aqueous phase containing the metallic ion Ni(II). The adsorbed species were investigated with a calculation program using CHEAQS V. L20.1 in order to determine the relation between the percentages of the adsorbed species and the adsorption yield. The optimization was carried out using a 2³ factorial design. The individual and combined effects of three process parameters, i.e., the initial Ni(II) concentration in aqueous solution (2×10⁻³ and 5×10⁻³ mol/L), the initial pH of the solution (2 and 6.5), and the mass of bentonite (0.03 and 0.3 g), on Ni(II) adsorption were studied.
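
A hedged sketch of the 2³ factorial analysis is shown below with coded −1/+1 levels for concentration, pH, and bentonite mass; the yield values are invented placeholders used only to show how main and interaction effects are estimated.

```python
# Hedged 2^3 factorial sketch: coded factor levels (-1/+1) for concentration, pH,
# and bentonite mass, with main and interaction effects from least squares.
# The yield values are invented placeholders.
import itertools
import numpy as np

levels = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs, coded units
y = np.array([52.0, 61.0, 70.0, 83.0, 48.0, 57.0, 66.0, 88.0])  # adsorption yield (%)

# Model matrix: intercept, A, B, C, AB, AC, BC, ABC
A, B, C = levels.T
X = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
effects = 2 * coef[1:]        # factorial "effects" are twice the regression coefficients

names = ["conc", "pH", "mass", "conc*pH", "conc*mass", "pH*mass", "conc*pH*mass"]
for name, eff in zip(names, effects):
    print(f"{name:>14s}: {eff:+.2f}")
```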

Keywords: adsorption, bentonite, factorial design, Nickel(II)

Procedia PDF Downloads 159
27718 Bias Optimization of Mach-Zehnder Modulator Considering RF Gain on OFDM Radio-Over-Fiber System

Authors: Ghazi Al Sukkar, Yazid Khattabi, Shifen Zhong

Abstract:

Most of the recent wireless LANs, broadband access networks, and digital broadcasting systems use Orthogonal Frequency Division Multiplexing techniques. In addition, the increasing demand for data and Internet access makes fiber optics an important technology, as it has many characteristics that make it the best solution for transferring huge frames of data from one point to another. Radio over fiber is the technique in which a high-quality RF signal is converted to an optical signal and carried over single-mode fiber. Optimum values of the bias level and the switching voltage of the Mach-Zehnder modulator are important for the performance of radio-over-fiber links. In this paper, we propose a method to optimize the two parameters simultaneously, the bias and the switching voltage of the external modulator of a radio-over-fiber system, while considering RF gain. Simulation results show the optimum gain value under these two parameters.
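
A simplified numerical sketch of the joint sweep is shown below using the modulator's raised-cosine power transfer function; the drive level, sweep ranges, and single-tone stand-in for the OFDM signal are assumptions for illustration, not the paper's simulation setup.

```python
# Simplified sketch: sweep the Mach-Zehnder bias voltage and switching voltage
# (Vpi) and evaluate the recovered RF fundamental from the modulator's
# raised-cosine transfer function. All values are assumed for illustration.
import numpy as np

fs, f_rf = 100e3, 1e3                      # sample rate and RF tone (arbitrary units)
t = np.arange(0, 0.05, 1 / fs)
v_rf = 0.2 * np.sin(2 * np.pi * f_rf * t)  # small-signal stand-in for the OFDM drive

def rf_gain(v_bias, v_pi):
    p_out = 0.5 * (1 + np.cos(np.pi * (v_bias + v_rf) / v_pi))   # MZM power transfer
    spectrum = np.abs(np.fft.rfft(p_out - p_out.mean()))
    k = int(round(f_rf * len(t) / fs))
    return spectrum[k]                      # amplitude at the RF fundamental

biases = np.linspace(0.0, 4.0, 81)
v_pis = np.linspace(2.0, 6.0, 41)
gains = np.array([[rf_gain(b, vp) for vp in v_pis] for b in biases])
i, j = np.unravel_index(gains.argmax(), gains.shape)
print("best bias:", biases[i], "best Vpi:", v_pis[j])
```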

Keywords: OFDM, Mach Zehnder bias voltage, switching voltage, radio-over-fiber, RF gain

Procedia PDF Downloads 477
27717 Investigation of Solar Concentrator Prototypes under Tunisian Conditions

Authors: Moncef Balghouthi, Mahmoud Ben Amara, Abdessalem Ben Hadj Ali, Amenallah Guizani

Abstract:

Concentrated solar power technology constitutes an interesting option to meet a part of future energy demand, especially considering the high levels of solar radiation and clearness index available in Tunisia. In this work, we present three experimental prototypes of solar concentrators installed at the research center of energy CRTEn in Tunisia. Two are medium-temperature parabolic trough solar collectors, used to drive a cooling installation and for steam generation. The third is a parabolic dish concentrator used for hybrid generation of thermal and electric power. Optical and thermal evaluations are presented. Solutions and possibilities for constructing the concentrator mirrors locally are discussed. In addition, the enhancement of receiver performance by nano-selective absorption coatings is studied. The improvement of heat transfer between the receiver and the heat transfer fluid is discussed for each application.

Keywords: solar concentrators, optical and thermal evaluations, cooling and process heat, hybrid thermal and electric generation

Procedia PDF Downloads 255
27716 Simulation Study of the Microwave Heating of the Hematite and Coal Mixture

Authors: Prasenjit Singha, Sunil Yadav, Soumya Ranjan Mohantry, Ajay Kumar Shukla

Abstract:

The temperature distribution in hematite ore mixed with 7.5% coal was predicted by solving a 1-D heat conduction equation using an implicit finite difference approach. In this work, a square slab of 20 cm x 20 cm was considered, with the coal assumed to be uniformly mixed with the hematite ore. The equations were solved using MATLAB 2018a. Convective and radiative boundary conditions on this one-dimensional slab are also considered. The temperature distribution inside the hematite slab was obtained by considering the microwave heating time, thermal conductivity, heat capacity, carbon percentage, sample dimensions, and many other factors such as the penetration depth, permittivity, and permeability of the coal and hematite ore mixture. The resulting temperature profile can be used as a guiding tool for optimizing the microwave-assisted carbothermal reduction process. The study of the hematite slab was extended to other dimensions as well, viz., 1 cm x 1 cm, 5 cm x 5 cm, 10 cm x 10 cm, and 20 cm x 20 cm. The model predictions are in good agreement with experimental results.
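
A compact sketch of such an implicit scheme is given below (backward Euler in time, fixed-temperature faces, a uniform volumetric microwave heating term); the material properties, heating power, and boundary treatment are assumed values and omit the convective/radiative conditions used in the paper.

```python
# Compact sketch of an implicit finite-difference scheme: backward Euler in time
# for 1-D conduction with a volumetric microwave heating term. Properties,
# heating power, and boundary treatment are assumed for illustration.
import numpy as np

L, N = 0.20, 101                      # slab thickness [m], nodes
dx = L / (N - 1)
k, rho, cp = 1.5, 3500.0, 850.0       # W/mK, kg/m3, J/kgK (hematite-coal mix, assumed)
q_mw = 5e5                            # microwave volumetric heating [W/m3] (assumed)
dt, steps = 1.0, 600                  # time step [s], number of steps
r = k * dt / (rho * cp * dx**2)

T = np.full(N, 300.0)                 # initial temperature [K]
A = np.zeros((N, N))
for i in range(1, N - 1):
    A[i, i-1], A[i, i], A[i, i+1] = -r, 1 + 2*r, -r
A[0, 0] = A[-1, -1] = 1.0             # fixed-temperature faces (convection/radiation omitted)

for _ in range(steps):
    b = T + dt * q_mw / (rho * cp)
    b[0] = b[-1] = 300.0
    T = np.linalg.solve(A, b)

print("peak temperature after %.0f s: %.1f K" % (steps * dt, T.max()))
```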

Keywords: hematite ore, coal, microwave processing, heat transfer, implicit method, temperature distribution

Procedia PDF Downloads 169
27715 Thermodynamic Optimization of an R744 Based Transcritical Refrigeration System with Dedicated Mechanical Subcooling Cycle

Authors: Mihir Mouchum Hazarika, Maddali Ramgopal, Souvik Bhattacharyya

Abstract:

Thermodynamic analysis shows that the performance of the R744-based transcritical refrigeration cycle drops drastically at higher ambient temperatures. This is due to the peculiar s-shape of the isotherm in the supercritical region. However, subcooling of the refrigerant at the gas cooler exit enhances the performance of the R744-based system. The present study analyzes the R744-based transcritical system with a dedicated mechanical subcooling cycle. Based on this proposed cycle, the thermodynamic analysis is performed, and the optimum operating parameters are determined. The amount of subcooling and the pressure ratio in the subcooling cycle are the parameters that need to be optimized to extract the maximum COP from the proposed cycle. It is expected that this study will be helpful in implementing the dedicated subcooling cycle with the R744-based transcritical system to improve its performance.

Keywords: optimization, R744, subcooling, transcritical

Procedia PDF Downloads 306
27714 Krill-Herd Step-Up Approach Based Energy Efficiency Enhancement Opportunities in the Offshore Mixed Refrigerant Natural Gas Liquefaction Process

Authors: Kinza Qadeer, Muhammad Abdul Qyyum, Moonyong Lee

Abstract:

Natural gas has become an attractive energy source in comparison with other fossil fuels because of its lower CO₂ and other air pollutant emissions. Therefore, compared to the demand for coal and oil, that for natural gas is increasing rapidly world-wide. The transportation of natural gas over long distances as a liquid (LNG) is preferable for several reasons, including economic, technical, political, and safety factors. However, LNG production is an energy-intensive process due to the tremendous amount of power required for the compression of refrigerants, which provide sufficient cold energy to liquefy the natural gas. Therefore, one of the major issues in the LNG industry is to improve the energy efficiency of existing LNG processes through a cost-effective approach, namely optimization. In this context, a bio-inspired Krill-herd (KH) step-up approach was examined to enhance the energy efficiency of a single mixed refrigerant (SMR) natural gas liquefaction (LNG) process, which is considered one of the most promising candidates for offshore LNG production (FPSO). The optimal design of natural gas liquefaction processes involves multivariable non-linear thermodynamic interactions, which lead to exergy destruction and contribute to process irreversibility. As key decision variables, the optimal values of the mixed refrigerant flow rates and process operating pressures were determined based on the herding behavior of krill individuals corresponding to the minimum energy consumption for LNG production. To perform the rigorous process analysis, the SMR process was simulated in Aspen Hysys® software, and the resulting model was connected with the Krill-herd approach coded in MATLAB. The optimal operating conditions found by the proposed approach reduced the overall energy consumption of the SMR process by up to 22.5% and also improved the coefficient of performance in comparison with the base case. The proposed approach was also compared with other well-proven optimization algorithms, such as genetic and particle swarm optimization algorithms, and was found to exhibit superior performance over these existing approaches.

Keywords: energy efficiency, Krill-herd, LNG, optimization, single mixed refrigerant

Procedia PDF Downloads 155
27713 Optimization of Electrical Discharge Machining Parameters in Machining AISI D3 Tool Steel by Grey Relational Analysis

Authors: Othman Mohamed Altheni, Abdurrahman Abusaada

Abstract:

This study presents the optimization of multiple performance characteristics [material removal rate (MRR), surface roughness (Ra), and overcut (OC)] of hardened AISI D3 tool steel in electrical discharge machining (EDM) using the Taguchi method and grey relational analysis. The machining process parameters selected were pulse current Ip, pulse-on time Ton, pulse-off time Toff, and gap voltage Vg. Based on ANOVA, the pulse current is found to be the most significant factor affecting the EDM process. The optimized process parameters, which simultaneously lead to a higher MRR, lower Ra, and lower OC, are then verified through a confirmation experiment. The validation experiment shows improved MRR, Ra, and OC when the Taguchi method and grey relational analysis are used.
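
The grey relational calculation itself is short enough to sketch (with invented response data, not the paper's measurements): each response is normalized as larger-the-better or smaller-the-better, grey relational coefficients are computed against the ideal sequence, and their mean gives the grade used to rank the runs.

```python
# Illustrative grey relational analysis sketch (invented response data): MRR is
# larger-the-better, Ra and OC are smaller-the-better.
import numpy as np

# columns: MRR [mm3/min], Ra [um], OC [mm] for 9 hypothetical Taguchi runs
data = np.array([
    [12.1, 3.2, 0.110], [15.4, 3.9, 0.125], [18.2, 4.5, 0.140],
    [14.0, 2.9, 0.105], [17.5, 3.4, 0.118], [21.3, 4.1, 0.133],
    [16.2, 2.6, 0.098], [19.9, 3.1, 0.112], [24.8, 3.8, 0.128],
])

def normalize(col, larger_is_better):
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if larger_is_better else (hi - col) / (hi - lo)

norm = np.column_stack([
    normalize(data[:, 0], True),    # MRR
    normalize(data[:, 1], False),   # Ra
    normalize(data[:, 2], False),   # OC
])

zeta = 0.5
delta = 1.0 - norm                                   # deviation from the ideal sequence
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coeff.mean(axis=1)                           # grey relational grade per run
print("best run:", grade.argmax() + 1, "grades:", np.round(grade, 3))
```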

Keywords: edm parameters, grey relational analysis, Taguchi method, ANOVA

Procedia PDF Downloads 294
27712 Piezoelectric Approach on Harvesting Acoustic Energy

Authors: Khin Fai Chen, Jee-Hou Ho, Eng Hwa Yap

Abstract:

An acoustic micro-energy harvester (AMEH) is developed to convert wasted acoustical energy into useful electrical energy. The AMEH is mathematically modeled using lumped element modelling (LEM) and Euler-Bernoulli beam (EBB) modelling. An experiment is designed to validate the mathematical model and assess the feasibility of the AMEH. Comparison of theoretical and experimental data on critical parameter values such as Mm, Cms, dm and Ceb showed variances within 1% to 6%, which is reasonably acceptable; hence, the AMEH mathematical model is validated. The AMEH then undergoes bandwidth tuning for performance optimization in further experimental work. The AMEH successfully produces 0.9 V⁄(m⁄s^2) and 1.79 μW⁄(m^2⁄s^4) at 60Hz and a 400kΩ resistive load, which shows variances of only about 7% compared with the theoretical data. By integrating a capacitive load of 200µF, the discharge cycle time of the AMEH is 1.8s and the usable energy bandwidth is available as low as 0.25g. At 1g and the 60Hz resonance frequency, the averaged power output is about 2.2mW, which fulfils the power requirements of a range of wireless sensors and communication peripherals. Finally, the design of the AMEH is assessed, validated, and deemed feasible.

Keywords: piezoelectric, acoustic, energy harvester

Procedia PDF Downloads 282
27711 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during the assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other, varying areas for storage, machines, or working space are temporarily required. This is an important factor when arranging products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all required areas of the product. In this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products that complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects assuming dynamic, competing area requirements. First, the problem situation is extensively explained, and existing research on associated topics is described and evaluated for the possibility of adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, some test series results are presented, showing the possible increase in area utilization.

Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly

Procedia PDF Downloads 233
27710 The Design of Fire in Tube Boiler

Authors: Yoftahe Nigussie

Abstract:

This report presents a final year project pertaining to the design of a fire tube boiler for the purpose of producing saturated steam. The objective of the project is to produce saturated steam for different purposes, with a capacity of 2000kg/h at a 12bar design pressure, by designing a higher-performance fire tube boiler that considers the requirements of cost minimization and parameter improvement. This is mostly done through the selection of appropriate materials for component parts, construction materials, and production methods in the different steps of the analysis. In the analysis process, most of the design parameters are obtained by iterating over the related formulas, such as selecting the tube diameter while optimizing the overall heat transfer coefficient; the other selections are treated similarly. The number of passes is two because of the size and area of the tubes and shell. The analysis uses heavy fuel oil no. 6 with a higher heating value of 44000kJ/kg and a lower heating value of 41300kJ/kg; the fuel consumption is 140.37kg/hr, producing 1610kW of heat with an efficiency of 85.25%. The flow arrangement is cross flow because of its advantages; the tubes inside the shell are welded to the tube sheet, and the tube sheet is attached to the shell and the ends using a gasket and welds. The shell is designed to the European Standard code section as a pressure vessel, considering the weight, including contents, and the supplementary accessories such as lifting lugs, openings, ends, manhole, and supports, with detail and assembly drawings.
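
The quoted heat-balance figures can be checked with a few lines of arithmetic (a sketch; the steam-side enthalpy values noted at the end are assumptions, not taken from the report):

```python
# Quick arithmetic check of the quoted figures, using the stated fuel rate,
# lower heating value, and efficiency.
m_fuel = 140.37 / 3600                # kg/s of heavy fuel oil no. 6
lhv = 41300e3                         # J/kg, lower heating value
q_fired = m_fuel * lhv
print("fired heat input: %.0f kW" % (q_fired / 1e3))      # ~1610 kW, as quoted

eta = 0.8525                          # stated boiler efficiency
m_steam = 2000 / 3600                 # kg/s of saturated steam at 12 bar
dh = eta * q_fired / m_steam          # enthalpy rise available per kg of steam
print("useful duty: %.0f kW" % (eta * q_fired / 1e3))
print("enthalpy rise per kg steam: %.0f kJ/kg" % (dh / 1e3))
# ~2471 kJ/kg, consistent with saturated steam at 12 bar (hg ~ 2785 kJ/kg)
# raised from feedwater at roughly 75 C (hf ~ 314 kJ/kg), an assumed value.
```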

Keywords: steam generation, external treatment, internal treatment, steam velocity

Procedia PDF Downloads 97
27709 Government Big Data Ecosystem: A Systematic Literature Review

Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis

Abstract:

Data that is high in volume, velocity, and veracity and comes from a variety of sources is usually generated in all sectors, including the government sector. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt a data-centric architecture for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the government data ecosystems literature. As this is a new topic, we did not find specific articles on the government (big) data ecosystem and therefore focused our research on various relevant areas such as humanitarian data, open government data, scientific research data, and industry data.

Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, e-government, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review

Procedia PDF Downloads 229
27708 Fragment Domination for Many-Objective Decision-Making Problems

Authors: Boris Djartov, Sanaz Mostaghim

Abstract:

This paper presents a number-based dominance method. The main idea is how to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although other similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods and thus, it is geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, the fuel needed to reach it, etc. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision maker to initially provide the algorithm with preference points and weight vectors; however, this method aims to omit this very difficult step, especially when the number of objectives is so large. The proposed method will be compared to Favour (1 − k)-Dom and L-dominance (LD) methods. The test will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.
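
A hedged sketch of the fragmenting idea follows; this is our reading of the abstract, not the authors' exact rule, and the airport attribute values and fragment sizes are invented: the objective vector is split into subsets, ordinary Pareto dominance is applied per fragment, and the alternative that dominates in more fragments is preferred.

```python
# Hedged sketch of fragment-based comparison: split the objective vector into
# subsets, apply Pareto dominance per fragment, and count fragment "wins".
# All objectives are assumed to be minimized; the data are invented.
from itertools import islice

def pareto_dominates(a, b):
    """Standard Pareto dominance on a fragment (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fragments(vector, sizes):
    it = iter(vector)
    return [list(islice(it, s)) for s in sizes]

def fragment_dominance(a, b, sizes):
    """Return (#fragments where a dominates b, #fragments where b dominates a)."""
    fa, fb = fragments(a, sizes), fragments(b, sizes)
    wins_a = sum(pareto_dominates(x, y) for x, y in zip(fa, fb))
    wins_b = sum(pareto_dominates(y, x) for x, y in zip(fa, fb))
    return wins_a, wins_b

# Two hypothetical alternate airports described by 6 minimization objectives,
# fragmented into subsets of sizes (2, 2, 2), e.g. weather / runway / fuel groups.
airport_a = [0.3, 0.5, 1200, 4.1, 0.8, 0.2]
airport_b = [0.4, 0.6, 1100, 4.0, 0.9, 0.5]
print(fragment_dominance(airport_a, airport_b, sizes=(2, 2, 2)))
```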

Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization

Procedia PDF Downloads 91
27707 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
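
As a point of reference for the distillation component only (the federated aggregation and the specific FKD transfer variants are not reproduced here), a generic knowledge-distillation training step might look like the sketch below; the teacher/student architectures, temperature, and weighting are placeholders.

```python
# Generic knowledge-distillation training step (distillation component only;
# not the paper's FKD framework). Architectures and hyper-parameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.SGD(student.parameters(), lr=0.1)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL divergence and hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

x = torch.randn(64, 32)                      # a client's local mini-batch (synthetic)
labels = torch.randint(0, 10, (64,))
with torch.no_grad():
    t_logits = teacher(x)                    # distilled knowledge shared with the client
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()
opt.step()
print(float(loss))
```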

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 75
27706 An Assessment of the Digital Transformation of Radio

Authors: Fatih Sogut

Abstract:

Developments in information technologies have caused significant changes in radio and television broadcasting. With these changes in production format, transmission techniques, and service delivery, the distinction between traditional media and New Media has emerged. The viewer/listener, who was previously in a passive position, is now in an active position and has a say in many matters, including content production. Visual and auditory data transfer has diversified and become easier thanks to the convergence phenomenon. These transformations and developments have also affected one of the oldest electronic communication tools, radio. This study attempts to explain the change in radio broadcasting in adapting to the new era that emerged with the digital age, and the factors that led to this change.

Keywords: Internet, radio broadcasting, digital transformation, Internet broadcasting

Procedia PDF Downloads 171
27705 A Machine Learning Decision Support Framework for Industrial Engineering Purposes

Authors: Anli Du Preez, James Bekker

Abstract:

Data is currently one of the most critical and influential emerging technologies. However, the true potential of data is yet to be exploited since, currently, about 1% of generated data are ever actually analyzed for value creation. There is a data gap where data is not explored due to the lack of data analytics infrastructure and the required data analytics skills. This study developed a decision support framework for data analytics by following Jabareen’s framework development methodology. The study focused on machine learning algorithms, which is a subset of data analytics. The developed framework is designed to assist data analysts with little experience, in choosing the appropriate machine learning algorithm given the purpose of their application.

Keywords: Data analytics, Industrial engineering, Machine learning, Value creation

Procedia PDF Downloads 168
27704 Numerical Simulation of Convective Flow of Nanofluids with an Oriented Magnetic Field in a Half Circular-Annulus

Authors: M. J. Uddin, M. M. Rahman

Abstract:

The unsteady convective heat transfer flow of nanofluids in a half circular-annulus shape enclosure using nonhomogeneous dynamic model has been investigated numerically. The round upper wall of the enclosure is maintained at constant low temperature whereas the bottom wall is heated by three different thermal conditions. The enclosure is permeated by a uniform magnetic field having variable orientation. The Brownian motion and thermophoretic phenomena of the nanoparticles are taken into account in model construction. The governing nonlinear momentum, energy, and concentration equations are solved numerically using Galerkin weighted residual finite element method. To discover the best performer, the average Nusselt number is demonstrated for different types of nanofluids. The heat transfer rate for different flow parameters, positions of the annulus, thicknesses of the half circular-annulus and thermal conditions is also exhibited.

Keywords: nanofluid, convection, semicircular-annulus, nonhomogeneous dynamic model, finite element method

Procedia PDF Downloads 221
27703 Core Number Optimization Based Scheduler to Order/Mapp Simulink Application

Authors: Asma Rebaya, Imen Amari, Kaouther Gasmi, Salem Hasnaoui

Abstract:

In recent years, the number of cores in digital signal and general-purpose processors has increased spectacularly. Concurrently, significant research is being done to benefit from this high degree of parallelism. Indeed, this research is focused on providing efficient scheduling of hardware/software systems onto multicore architectures. The scheduling process consists of statically choosing one core to execute each task and specifying an execution order for the application tasks. In this paper, we describe an efficient scheduler that calculates the optimal number of cores required to schedule an application, gives a heuristic scheduling solution, and evaluates its cost. Our results are evaluated and compared with the results of the Preesm scheduler, and we show that ours allows better scheduling in terms of latency, computation time, and number of cores.
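
The two decisions described above (core mapping and execution order) and the latency evaluation can be illustrated with a toy list-scheduling sketch; the task graph, costs, and greedy policy below are invented and are not the authors' heuristic.

```python
# Toy list-scheduling sketch: map each task to a core, fix an execution order,
# and evaluate the resulting latency (makespan). Task graph and costs are invented.
import heapq

tasks = {          # task: (execution time, predecessors)
    "A": (2, []), "B": (3, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B", "C"]), "E": (1, ["C"]),
}

def schedule(tasks, n_cores):
    finish = {}                                       # task -> finish time
    cores = [(0.0, c) for c in range(n_cores)]        # (time core becomes free, core id)
    heapq.heapify(cores)
    mapping, order, remaining = {}, [], dict(tasks)
    while remaining:
        # simple policy: take the alphabetically first ready task
        ready = [t for t, (_, preds) in remaining.items() if all(p in finish for p in preds)]
        t = min(ready)
        dur, preds = remaining.pop(t)
        free_at, core = heapq.heappop(cores)
        start = max([free_at] + [finish[p] for p in preds])
        finish[t] = start + dur
        heapq.heappush(cores, (finish[t], core))
        mapping[t] = core
        order.append(t)
    return mapping, order, max(finish.values())       # latency = makespan

for n in (1, 2, 3):
    print(n, "core(s) -> latency", schedule(tasks, n)[2])
```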

Keywords: computation time, hardware/software system, latency, optimization, multi-cores platform, scheduling

Procedia PDF Downloads 283
27702 An Approach to Electricity Production Utilizing Waste Heat of a Triple-Pressure Cogeneration Combined Cycle Power Plant

Authors: Soheil Mohtaram, Wu Weidong, Yashar Aryanfar

Abstract:

This research investigates the points with heat recovery potential in a triple-pressure cogeneration combined cycle power plant and determines the amount of waste heat that can be recovered. A modified cycle arrangement is then adopted to access these thermal potentials. Modeling of the energy system is followed by thermodynamic and energetic evaluation, and the price of the manufactured products is then determined using the Total Revenue Requirement (TRR) method and thermoeconomic analysis. The results of the optimization are presented in a Pareto chart by implementing a new model with dual objective functions, which include the cost of power and of produced heat. This model can be utilized to identify the optimal operating point for such power plants based on electricity and heat prices in different regions.

Keywords: heat loss, recycling, unused energy, efficient production, optimization, triple-pressure cogeneration

Procedia PDF Downloads 82
27701 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
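
A schematic sketch of that workflow is given below with invented data: a random forest surrogate predicts a selectivity figure of merit from the four layer thicknesses, and a simple genetic algorithm searches over the surrogate. The "true" objective used to generate training data is a stand-in, not an electromagnetic solver, and the thickness ranges are assumptions.

```python
# Schematic sketch: random forest surrogate for a selectivity figure of merit over
# the four layer thicknesses, searched with a minimal genetic algorithm.
# The pseudo-objective and thickness ranges are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def pseudo_merit(x):           # placeholder for a full-wave emissivity calculation
    return -np.sum((x - np.array([0.6, 0.1, 0.3, 0.05]))**2, axis=-1)

X_train = rng.uniform(0.01, 1.0, size=(500, 4))      # SiC/W/SiO2/W thicknesses (um)
y_train = pseudo_merit(X_train)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Minimal GA: tournament selection, blend crossover, Gaussian mutation
pop = rng.uniform(0.01, 1.0, size=(60, 4))
for gen in range(40):
    fitness = surrogate.predict(pop)
    idx = np.array([max(rng.choice(60, 3), key=lambda i: fitness[i]) for _ in range(60)])
    parents = pop[idx]
    children = 0.5 * (parents + parents[rng.permutation(60)])      # blend crossover
    children += rng.normal(0, 0.02, children.shape)                # mutation
    pop = np.clip(children, 0.01, 1.0)

best = pop[surrogate.predict(pop).argmax()]
print("best layer thicknesses (um):", np.round(best, 3))
```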

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 61
27700 Performance Study of Scraped Surface Heat Exchanger with Helical Ribbons

Authors: S. Ali, M. Baccar

Abstract:

In this work, numerical simulations were carried out using a specific CFD code in order to study the performance of an innovative Scraped Surface Heat Exchanger (SSHE) with helical ribbons for Bingham fluids (threshold fluids). The resolution of three-dimensional form of the conservation equations (continuity, momentum and energy equations) was carried out basing on the finite volume method (FVM). After studying the effect of dimensionless numbers (axial Reynolds, rotational Reynolds and Oldroyd numbers) on the hydrodynamic and thermal behaviors within SSHE, a parametric study was developed, by varying the width of the helical ribbon, the clearance between the stator wall and the tip of the ribbon and the number of turns of the helical ribbon, in order to improve the heat transfer inside the exchanger. The effect of these geometrical numbers on the hydrodynamic and thermal behaviors was discussed.

Keywords: heat transfer, helical ribbons, hydrodynamic behavior, parametric study, SSHE, thermal behavior

Procedia PDF Downloads 214
27699 An Improved C-Means Model for MRI Segmentation

Authors: Ying Shen, Weihua Zhu

Abstract:

Medical images are important for helping to identify different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. Image segmentation, in medical image analysis, is usually the first step in finding regions with similar color, intensity, or texture so that the diagnosis can be further carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model is based on information entropy to evaluate the segmentation results by achieving global optimization. Several contributions are significant. Firstly, a Genetic Algorithm (GA) is used to achieve global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) is not capable of on its own. Secondly, the information entropy after segmentation is used for measuring the effectiveness of MRI image processing. Experimental results show the outperformance of the proposed model compared with traditional approaches.

Keywords: magnetic resonance image (MRI), c-means model, image segmentation, information entropy

Procedia PDF Downloads 226
27698 Direct Electrical Communication of Redox Enzyme Based on 3-Dimensional Cross-Linked Redox Enzyme/Nanomaterials

Authors: A. K. M. Kafi, S. N. Nina, Mashitah M. Yusoff

Abstract:

In this work, we describe a new 3-dimensional (3D) network of cross-linked horseradish peroxidase/carbon nanotube (HRP/CNT) on a thiol-modified Au surface, built in order to establish effective electrical wiring of the enzyme units with the electrode. This was achieved by the electropolymerization of aniline-functionalized carbon nanotubes (CNTs) and 4-aminothiophenol-modified HRP on a 4-aminothiophenol monolayer-modified Au electrode. The synthesized 3D HRP/CNT networks were characterized by cyclic voltammetry and amperometry, resulting in the establishment of direct electron transfer between the redox-active unit of HRP and the Au surface. Electrochemical measurements reveal that the immobilized HRP exhibits high biological activity and stability, and a quasi-reversible redox peak of the redox center of HRP was observed at about −0.355 and −0.275 V vs. Ag/AgCl. The electron transfer rate constant, ks, and the electron transfer coefficient were found to be 0.57 s-1 and 0.42, respectively. Based on the electrocatalytic process enabled by the direct electrochemistry of HRP, a biosensor for detecting H2O2 was developed. The developed biosensor exhibits excellent electrocatalytic activity for the reduction of H2O2 and displays a broad linear range and a low detection limit for H2O2 determination. The linear range is from 1.0×10−7 to 1.2×10−4M, with a detection limit of 2.2×10−8M at 3σ. Moreover, this biosensor exhibits very high sensitivity, good reproducibility, and long-term stability. In summary, ease of fabrication, low cost, fast response, and high sensitivity are the main advantages of the new biosensor proposed in this study. These advantages support the real analytical applicability of the proposed biosensor.

Keywords: redox enzyme, nanomaterials, biosensors, electrical communication

Procedia PDF Downloads 454
27697 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocations, and facility configurations to minimize the total cost (CT) of the entire network. These facilities can be manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks should be undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimum solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainty in finding the optimum solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of the genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 316