Search results for: data transfer optimization
27935 Mathematics Bridging Theory and Applications for a Data-Driven World
Authors: Zahid Ullah, Atlas Khan
Abstract:
In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. 
By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.
Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models
Procedia PDF Downloads 75
27934 Phosphorus Recovery Optimization in Microbial Fuel Cell
Authors: Abdullah Almatouq
Abstract:
Understanding the impact of key operational variables on concurrent energy generation and phosphorus recovery in a microbial fuel cell (MFC) is required to improve the process and reduce operational cost. In this study, a full factorial design (FFD) and central composite designs (CCD) were employed to identify the effect of influent COD concentration and cathode aeration flow rate on energy generation and phosphorus (P) recovery, and to optimise MFC power density and P recovery. Results showed that influent chemical oxygen demand (COD) concentration and cathode aeration flow rate had a significant effect on power density, coulombic efficiency, phosphorus precipitation efficiency, and phosphorus precipitation rate at the cathode. P precipitation was negatively affected by the generated current during the batch duration. The generated energy was reduced because struvite precipitated on the cathode surface, which might obstruct the mass transfer of ions and oxygen. A response surface mathematical model was used to predict the optimum operating conditions, which resulted in a maximum power density and phosphorus precipitation efficiency of 184 mW/m² and 84%, respectively, corresponding to COD = 1700 mg/L and an aeration flow rate of 210 mL/min. The findings highlight the importance of the operational conditions for energy generation and phosphorus recovery.
Keywords: energy, microbial fuel cell, phosphorus, struvite
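The response-surface step can be sketched numerically. The snippet below is a minimal illustration, not the study's data or fitted model: it fits a second-order surface to synthetic (COD, aeration) observations in coded units and locates the stationary point, which is how CCD results are typically turned into optimum operating conditions.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    # z = b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y, via least squares
    X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x*y])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coef

def predict(coef, x, y):
    return (coef[0] + coef[1]*x + coef[2]*y
            + coef[3]*x**2 + coef[4]*y**2 + coef[5]*x*y)

# Synthetic response with a known optimum at x=0.5, y=-0.2 (coded units)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
y = rng.uniform(-1, 1, 30)
z = 180 - 40*(x - 0.5)**2 - 25*(y + 0.2)**2

coef = fit_quadratic_surface(x, y, z)
# Stationary point of the fitted quadratic: solve grad(z) = 0
A = np.array([[2*coef[3], coef[5]], [coef[5], 2*coef[4]]])
b = -np.array([coef[1], coef[2]])
opt = np.linalg.solve(A, b)
```

With real CCD data, the same stationary-point calculation (plus a check that the Hessian is negative definite) yields the reported optimum conditions.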
Procedia PDF Downloads 157
27933 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network and generate enormous volumes of real-time, high-speed data that help organizations and companies take intelligent decisions. Integrating this enormous multi-source data and transferring it to the appropriate client is fundamental to IoT development. Handling this huge quantity of devices, along with the huge volume of data, is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake periodically or aperiodically, depending on traffic loads, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than required, the connection is lost, and other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces bad-quality data. Because of this dynamic nature, the actual reason for abnormal data is often unknown. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before putting it to use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to analyze stored data in the data-processing layer, without focusing on the challenges and issues that arise from the dynamic nature of IoT devices and how it impacts data quality.
This research comprehensively reviews the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality; DBSCAN clustering and weather sensors are used to build it. An extensive study has been carried out on the relationship between the data of weather sensors and of the sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it performs outlier detection and removal, assesses completeness and the patterns of missing values, and checks the accuracy of the data with the help of cluster position. Finally, statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
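The outlier-removal and consistency dimensions described above can be sketched as follows. This is a minimal from-scratch DBSCAN (to avoid external dependencies) applied to synthetic (temperature, pH) readings standing in for the real sensor streams; noise points are treated as outliers and CoV is computed on the surviving cluster.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Plain DBSCAN: labels >= 0 are cluster ids, -1 is noise."""
    n = len(points)
    labels = np.full(n, -2)                      # -2 = unvisited
    def region(i):
        d = np.linalg.norm(points - points[i], axis=1)
        return list(np.where(d <= eps)[0])
    c = -1
    for i in range(n):
        if labels[i] != -2:
            continue
        nbrs = region(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                       # noise (outlier)
            continue
        c += 1
        labels[i] = c
        k = 0
        while k < len(nbrs):                     # expand the cluster
            j = nbrs[k]; k += 1
            if labels[j] == -1:
                labels[j] = c                    # border point joins cluster
            if labels[j] != -2:
                continue
            labels[j] = c
            jn = region(j)
            if len(jn) >= min_pts:               # core point: grow frontier
                nbrs.extend(jn)
    return labels

# synthetic (temperature, pH) readings; the last one is a sensor spike
readings = np.array([[20.1, 7.0], [20.3, 7.1], [19.9, 6.9],
                     [20.2, 7.0], [20.0, 7.2], [35.0, 2.0]])
labels = dbscan(readings, eps=1.0, min_pts=3)
clean = readings[labels == 0]                    # outlier removed
cov = clean[:, 0].std() / clean[:, 0].mean()     # consistency via CoV
```

A small CoV on the cleaned cluster indicates consistent readings; the spike is isolated as noise rather than distorting the statistic.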
Procedia PDF Downloads 139
27932 The Design, Development, and Optimization of a Capacitive Pressure Sensor Utilizing an Existing 9DOF Platform
Authors: Andrew Randles, Ilker Ocak, Cheam Daw Don, Navab Singh, Alex Gu
Abstract:
Nine Degrees of Freedom (9 DOF) systems are already in development in many areas. In this paper, an integrated pressure sensor is proposed that will make use of an already existing monolithic 9 DOF inertial MEMS platform. Capacitive pressure sensors can suffer from limited sensitivity for a given size of membrane. This novel pressure sensor design increases the sensitivity by over 5 times compared to a traditional array of square diaphragms while still fitting within a 2 mm x 2 mm chip and maintaining a fixed static capacitance. The improved design uses one large diaphragm supported by pillars with fixed electrodes placed above the areas of maximum deflection. The design optimization increases the sensitivity from 0.22 fF/kPa to 1.16 fF/kPa. Temperature sensitivity was also examined through simulation.
Keywords: capacitive pressure sensor, 9 DOF, 10 DOF, sensor, capacitive, inertial measurement unit, IMU, inertial navigation system, INS
Procedia PDF Downloads 547
27931 Experimental Study on Performance of a Planar Membrane Humidifier for a Proton Exchange Membrane Fuel Cell Stack
Authors: Chen-Yu Chen, Wei-Mon Yan, Chi-Nan Lai, Jian-Hao Su
Abstract:
The proton exchange membrane fuel cell (PEMFC) has recently become more important as an alternative energy source. Maintaining proper water content in the membrane is one of the key requirements for optimizing PEMFC performance. The planar membrane humidifier has the advantages of simple structure, low cost, low pressure drop, light weight, reliable performance, and good gas separability; thus, it is a common external humidifier for PEMFCs. In this work, a planar membrane humidifier for kW-scale PEMFCs is developed successfully. The heat and mass transfer of the humidifier are discussed, and its performance is analyzed in terms of dew point approach temperature (DPAT), water vapor transfer rate (WVTR), and water recovery ratio (WRR). The DPAT of the humidifier with the counter-flow approach reaches about 6°C under inlet dry air of 50°C and 60% RH and inlet humid air of 70°C and 100% RH. The rate of pressure loss of the humidifier is 5.0×10² Pa/min at a torque of 7 N·m, which reaches the standard of commercial planar membrane humidifiers. From the tests, it is found that increasing the air flow rate increases the WVTR. However, the DPAT and the WRR are not improved by increasing the WVTR when the air flow rate is higher than the optimal value. In addition, increasing the inlet temperature or the humidity of the dry air decreases the WVTR and the WRR. Nevertheless, the DPAT is improved at elevated inlet temperatures or humidities of dry air. Furthermore, the performance of the humidifier with the counter-flow approach is better than that with the parallel-flow approach; the DPAT difference between the two flow approaches reaches up to 8°C.
Keywords: heat and mass transfer, humidifier performance, PEM fuel cell, planar membrane humidifier
Procedia PDF Downloads 307
27930 Adaptive Few-Shot Deep Metric Learning
Authors: Wentian Shi, Daming Shi, Maysam Orouskhani, Feng Tian
Abstract:
Whereas currently the most prevalent deep learning methods require a large amount of data for training, few-shot learning tries to learn a model from limited data without extensive retraining. In this paper, we present a loss function based on triplet loss for solving the few-shot problem using metric-based learning. Instead of setting the margin distance in the triplet loss to a constant number chosen empirically, we propose an adaptive margin distance strategy to obtain the appropriate margin distance automatically. We implement the strategy in a deep siamese network for deep metric embedding, utilizing an optimization approach that penalizes the worst case and rewards the best. Our experiments on image recognition and a co-segmentation model demonstrate that using our proposed triplet loss with adaptive margin distance can significantly improve performance.
Keywords: few-shot learning, triplet network, adaptive margin, deep learning
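The core idea can be sketched numerically. The abstract does not spell out the exact adaptive rule, so the batch-statistics margin below is an illustrative assumption, and the embeddings are synthetic vectors rather than siamese-network outputs.

```python
import numpy as np

def triplet_loss(anchor, pos, neg, margin):
    # standard triplet loss on squared Euclidean distances
    d_ap = np.sum((anchor - pos) ** 2, axis=1)
    d_an = np.sum((anchor - neg) ** 2, axis=1)
    return np.maximum(0.0, d_ap - d_an + margin)

def adaptive_margin(anchor, pos, neg, scale=0.5):
    # One plausible strategy (an assumption, not the paper's exact rule):
    # derive the margin from the batch's own distance spread instead of
    # fixing it to an empirical constant.
    d_ap = np.sum((anchor - pos) ** 2, axis=1)
    d_an = np.sum((anchor - neg) ** 2, axis=1)
    return scale * np.mean(np.abs(d_an - d_ap))

rng = np.random.default_rng(1)
a = rng.normal(size=(8, 16))                 # anchor embeddings
p = a + 0.1 * rng.normal(size=(8, 16))       # positives: close to anchors
n = rng.normal(size=(8, 16))                 # negatives: unrelated

m = adaptive_margin(a, p, n)
loss = triplet_loss(a, p, n, m).mean()
```

The adaptive margin shrinks or grows with the batch, which avoids the pathologies of a constant margin that is far too large or too small for the current embedding scale.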
Procedia PDF Downloads 171
27929 Optimization of Fenton Process for the Treatment of Young Municipal Leachate
Authors: Bouchra Wassate, Younes Karhat, Khadija El Falaki
Abstract:
Leachate is a source of surface water and groundwater contamination if it has not been pretreated. Indeed, its complex structure and pollution load make its treatment to the required standard limits extremely difficult to achieve. The objective of this work is to show the interest of advanced oxidation processes in the treatment of leachate from urban waste containing high concentrations of organic pollutants. The efficiency of the Fenton reagent (Fe²⁺ + H₂O₂ + H⁺) for young leachate, recovered from household-waste collection trucks in the city of Casablanca, Morocco, was evaluated with the objectives of chemical oxygen demand (COD) reduction and discoloration. The optimization of certain physicochemical parameters (initial pH value, reaction time, [Fe²⁺], and the [H₂O₂]/[Fe²⁺] ratio) yielded good results in terms of COD reduction and discoloration of the leachate.
Keywords: COD removal, color removal, Fenton process, oxidation process, leachate
Procedia PDF Downloads 286
27928 Hybrid Approach for the Min-Interference Frequency Assignment
Authors: F. Debbat, F. T. Bendimerad
Abstract:
Efficient frequency assignment for radio communications becomes more and more crucial as new information technologies and their applications develop. It consists in defining an assignment of frequencies to radio links to be established between base stations and mobile transmitters. Separation of the assigned frequencies is necessary to avoid interference; however, unnecessary separation causes an excess requirement for spectrum, the cost of which may be very high. This is an NP-hard problem that conventional optimization algorithms cannot solve efficiently, so metaheuristic methods are necessary. This paper proposes a hybrid approach based on simulated annealing (SA) and tabu search (TS) to solve the problem. Computational results, obtained on a number of standard problem instances, testify to the effectiveness of the proposed approach.
Keywords: cellular mobile communication, frequency assignment problem, optimization, tabu search, simulated annealing
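One way such an SA/TS hybrid can be structured is sketched below on a toy instance (the instance and the particular hybridization are illustrative assumptions, not the paper's benchmarks): links are nodes, each edge carries a required frequency separation, SA drives the acceptance of moves, and a short tabu list forbids recently reversed assignments.

```python
import math
import random

# toy instance: (link_i, link_j, required separation)
edges = [(0, 1, 2), (1, 2, 2), (0, 2, 1), (2, 3, 2)]
n_links, n_freqs = 4, 6

def cost(assign):
    # number of violated separation constraints (interference count)
    return sum(1 for i, j, sep in edges if abs(assign[i] - assign[j]) < sep)

def hybrid_sa_ts(iters=5000, t0=5.0, alpha=0.999, tabu_len=5, seed=3):
    rng = random.Random(seed)
    assign = [rng.randrange(n_freqs) for _ in range(n_links)]
    best, best_cost = assign[:], cost(assign)
    tabu, t = [], t0
    for _ in range(iters):
        i, f = rng.randrange(n_links), rng.randrange(n_freqs)
        if (i, f) in tabu:
            continue                              # TS component: skip tabu move
        cand = assign[:]
        cand[i] = f
        delta = cost(cand) - cost(assign)
        if delta <= 0 or rng.random() < math.exp(-delta / t):  # SA acceptance
            tabu.append((i, assign[i]))           # forbid undoing this move
            tabu = tabu[-tabu_len:]
            assign = cand
            if cost(assign) < best_cost:
                best, best_cost = assign[:], cost(assign)
        t *= alpha                                # geometric cooling
    return best, best_cost

best, best_cost = hybrid_sa_ts()
```

On this tiny instance an interference-free assignment exists (e.g. frequencies 0, 2, 4, 0), and the hybrid search finds one quickly.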
Procedia PDF Downloads 385
27927 Development of a General Purpose Computer Programme Based on Differential Evolution Algorithm: An Application towards Predicting Elastic Properties of Pavement
Authors: Sai Sankalp Vemavarapu
Abstract:
This paper discusses the application of machine learning in transportation engineering for predicting engineering properties of pavement more accurately and efficiently. Predicting the elastic properties aids us in assessing current road conditions and taking appropriate measures to avoid any inconvenience to commuters. This improves the longevity and sustainability of the pavement layer while reducing its overall life-cycle cost. As an example, we have implemented differential evolution (DE) in the back-calculation of the elastic modulus of multi-layered pavement. The proposed DE global-optimization back-calculation approach is integrated with a forward response model. This approach treats back-calculation as a global optimization problem where the cost function to be minimized is the root mean square error between measured and computed deflections. The optimal solution, the elastic modulus in this case, is searched for in the solution space by the DE algorithm. The best DE parameter combinations and the optimum value are identified so that the results are reproducible whenever the need arises. The algorithm's performance in varied scenarios was analyzed by changing the input parameters. The prediction was well within the permissible error, establishing the effectiveness of DE.
Keywords: cost function, differential evolution, falling weight deflectometer, genetic algorithm, global optimization, metaheuristic algorithm, multilayered pavement, pavement condition assessment, pavement layer moduli back-calculation
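The back-calculation loop can be sketched as follows. The forward model here is a deliberately toy surrogate (one "deflection" per layer, proportional to load over modulus), not a real pavement response model; it only illustrates how DE minimizes the RMSE between measured and computed deflections.

```python
import numpy as np

def forward(moduli, load=40.0):
    # toy surrogate forward model (an assumption for illustration only)
    return load / moduli

# synthetic "measured" deflections from known target moduli
measured = forward(np.array([3000.0, 450.0, 120.0]))

def rmse(moduli):
    return np.sqrt(np.mean((forward(moduli) - measured) ** 2))

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=300, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # rand/1 mutation and binomial crossover
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(bounds)) < CR
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, ft
    best = pop[np.argmin(fit)]
    return best, fit.min()

bounds = np.array([[500.0, 5000.0], [100.0, 1000.0], [50.0, 500.0]])
best, err = differential_evolution(rmse, bounds)
```

Replacing `forward` with a layered-elastic response program turns this sketch into the back-calculation scheme the abstract describes.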
Procedia PDF Downloads 164
27926 Optimization of Biomass Production and Lipid Formation from Chlorococcum sp. Cultivation on Dairy and Paper-Pulp Wastewater
Authors: Emmanuel C. Ngerem
Abstract:
The ever-increasing depletion of the dominant global form of energy (fossil fuels) calls for the development of sustainable and green alternative energy sources such as bioethanol, biohydrogen, and biodiesel. The production of the major biofuels relies on biomass feedstocks that are mainly derived from edible food crops and some inedible plants. One suitable feedstock with great potential as a raw material for biofuel production is microalgal biomass. Despite the tremendous attributes of microalgae as a source of biofuel, their cultivation requires huge volumes of freshwater, thus posing a serious threat to commercial-scale production and utilization of algal biomass. In this study, a multi-media wastewater mixture for microalgae growth was formulated and optimized. Moreover, the obtained microalgae biomass was pre-treated for reducing-sugar recovery and compared with previous studies on microalgae biomass pre-treatment. The formulated and optimized mixed-wastewater media for biomass and lipid accumulation were established using a simplex lattice mixture design. Based on the superposition of the potential results, numerical optimization was conducted, followed by the analysis of biomass concentration and lipid accumulation. Coefficients of determination (R²) of 0.91 and 0.98 were obtained for the biomass concentration and lipid accumulation models, respectively. The developed optimization model predicted an optimal biomass concentration and lipid accumulation of 1.17 g/L and 0.39 g/g, respectively. It suggested a 64.69% dairy wastewater (DWW) and 35.31% paper and pulp wastewater (PWW) mixture for biomass concentration, and 34.21% DWW and 65.79% PWW for lipid accumulation. Experimental validation generated 0.94 g/L and 0.39 g/g of biomass concentration and lipid accumulation, respectively. The obtained microalgae biomass was pre-treated, enzymatically hydrolysed, and subsequently assessed for reducing sugars.
The optimization of microwave pre-treatment of Chlorococcum sp. was achieved using response surface methodology (RSM). Microwave power (100 – 700 W), pre-treatment time (1 – 7 min), and acid-liquid ratio (1 – 5%) were selected as independent variables for the RSM optimization. The optimum conditions were achieved at a microwave power, pre-treatment time, and acid-liquid ratio of 700 W, 7 min, and 32.33:1, respectively. These conditions provided the highest amount of reducing sugars, at 10.73 g/L. Process optimization predicted a reducing-sugar yield of 11.14 g/L for microwave-assisted pre-treatment with 2.52% HCl for 4.06 min at 700 W. Experimental validation yielded reducing sugars of 15.67 g/L. These findings demonstrate that dairy wastewater and paper and pulp wastewater, which could otherwise pose a serious environmental nuisance, can be blended to form a suitable microalgae growth medium, consolidating the potency of microalgae as a viable feedstock for fermentable sugars. The outcome of this study also supports the microalgal wastewater biorefinery concept, where wastewater remediation is coupled with bioenergy production.
Keywords: wastewater cultivation, mixture design, lipid, biomass, nutrient removal, microwave, Chlorococcum, raceway pond, fermentable sugar, modelling, optimization
Procedia PDF Downloads 41
27925 A Randomized Controlled Intervention Study of the Effect of Music Training on Mathematical and Working Memory Performances
Authors: Ingo Roden, Stefana Lupu, Mara Krone, Jasmin Chantah, Gunter Kreutz, Stephan Bongard, Dietmar Grube
Abstract:
The present experimental study examined the effects of music and math training on mathematical skills and visuospatial working memory capacity in kindergarten children. For this purpose, N = 54 children (mean age: 5.46 years; SD = .29) were randomly assigned to three groups. Children in the music group (n = 18) received weekly sessions of 60 min of music training over a period of eight weeks, whereas children in the math group (n = 18) received the same amount of training focusing on basic mathematical skills, such as numeracy skills, quantity comparison, and counting objectives. The third group of children (n = 18) served as waiting controls. The groups were matched for sex, age, IQ, and previous music experience at baseline. Pre-post intervention measurements revealed a significant group x time interaction effect, showing that children in both the music and math groups significantly improved their early numeracy skills, whereas children in the control group did not. No significant differences between groups were observed for the visuospatial working memory performances. These results confirm and extend previous findings on transfer effects of music training on mathematical abilities and visuospatial working memory capacity. They show that music and math interventions are similarly effective in enhancing children's mathematical skills. More research is necessary to establish whether cognitive transfer effects arising from music interventions might facilitate children's transition from kindergarten to first grade.
Keywords: music training, mathematical skills, working memory, transfer
Procedia PDF Downloads 272
27924 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication
Authors: Rui Mao, Heming Ji, Xiaoyu Wang
Abstract:
Keystroke dynamics are widely used in identity recognition. They have the advantage that an individual's typing rhythm is difficult to imitate, and they support continuous authentication through the keyboard without extra devices. Existing keystroke dynamics authentication methods based on machine learning have drawbacks in relatively complex scenarios with massive data, in both feature extraction and model optimization. To overcome these weaknesses, an authentication model for keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed from keystroke content and keystroke timing, and ensures efficient continuous authentication by coupling an attention mechanism with a combination of CNN and Bi-LSTM networks. The model has been tested on the Open Data Buffalo dataset; the results show an FRR of 3.09%, an FAR of 3.03%, and an EER of 4.23%, demonstrating that the model is efficient and accurate for continuous authentication.
Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM
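The reported metrics (FRR, FAR, EER) can be computed from any authentication model's match scores. The sketch below uses synthetic Gaussian score distributions in place of real model outputs, purely to show how the three rates are derived from a decision threshold.

```python
import numpy as np

# synthetic match scores (higher = more likely the genuine user);
# these distributions are illustrative, not the paper's model outputs
rng = np.random.default_rng(7)
genuine = rng.normal(2.0, 1.0, 1000)    # scores for the true user
impostor = rng.normal(-2.0, 1.0, 1000)  # scores for other users

def far_frr(threshold):
    far = np.mean(impostor >= threshold)   # impostors wrongly accepted
    frr = np.mean(genuine < threshold)     # true user wrongly rejected
    return far, frr

# EER: the operating point where FAR and FRR cross
thresholds = np.linspace(-6, 6, 2001)
rates = np.array([far_frr(t) for t in thresholds])
i = np.argmin(np.abs(rates[:, 0] - rates[:, 1]))
eer = rates[i].mean()
```

Sweeping the threshold like this also yields the full DET/ROC curve, of which EER is a single summary point.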
Procedia PDF Downloads 155
27923 A Genetic Algorithm Approach to Solve a Weaving Job Scheduling Problem, Aiming Tardiness Minimization
Authors: Carolina Silva, João Nuno Oliveira, Rui Sousa, João Paulo Silva
Abstract:
This study uses genetic algorithms to solve a job scheduling problem in a weaving factory. The underlying problem is NP-hard, concerning unrelated parallel machines with sequence-dependent setup times. This research uses real data from a weaving industry located in the north of Portugal, with a capacity of 96 looms and a production, on average, of 440000 meters of fabric per month. The study also involves a high level of complexity, since most of the real production constraints are applied and several real data instances are tested. Topics such as data analysis and algorithm performance are addressed and tested, to offer a solution that can generate reliable due-date results. All the approaches will be tested in the operational environment and the KPIs monitored, to understand the solution's impact on production, with a particular focus on the total number of weeks of late deliveries to clients. Thus, the main goal of this research is to develop a solution that allows the automatic generation of optimized production plans, aiming at tardiness minimization.
Keywords: genetic algorithms, textile industry, job scheduling, optimization
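A toy sketch of such a GA is given below. All numbers (processing times, setups, due dates) are random illustrative data, not the factory's; jobs are encoded as a permutation, decoded greedily onto the machine that finishes each job earliest including its sequence-dependent setup, and fitness is total tardiness.

```python
import random

random.seed(5)
n_jobs, n_machines = 8, 2
# illustrative instance: per-machine processing times, sequence-dependent
# setup times, and due dates
proc = [[random.randint(2, 9) for _ in range(n_machines)] for _ in range(n_jobs)]
setup = [[random.randint(0, 3) for _ in range(n_jobs)] for _ in range(n_jobs)]
due = [random.randint(5, 30) for _ in range(n_jobs)]

def tardiness(perm):
    clocks = [0] * n_machines     # current finish time per machine
    last = [None] * n_machines    # last job scheduled on each machine
    total = 0
    for j in perm:
        # greedy decode: place job j where it finishes earliest
        finish = []
        for m in range(n_machines):
            s = setup[last[m]][j] if last[m] is not None else 0
            finish.append(clocks[m] + s + proc[j][m])
        m = finish.index(min(finish))
        clocks[m], last[m] = finish[m], j
        total += max(0, clocks[m] - due[j])
    return total

def ga(pop_size=40, gens=200):
    pop = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=tardiness)
        elite = pop[: pop_size // 2]          # elitist survivor selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n_jobs)  # order-preserving crossover
            child = p1[:cut] + [j for j in p2 if j not in p1[:cut]]
            if random.random() < 0.2:          # swap mutation
                a, b = random.sample(range(n_jobs), 2)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = elite + children
    return min(pop, key=tardiness)

best = ga()
```

A production-grade version would add the factory's real constraints and a richer encoding, but the permutation-plus-greedy-decode structure is a common starting point for this class of problem.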
Procedia PDF Downloads 157
27922 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir
Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede
Abstract:
Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is increasing. This can impose harmful effects on the quality of the downstream water. Therefore, predicting the concentration of discharged pollutants at different times and distances in the area of interest is of high importance in order to carry out preventative and controlling measures, as well as to avoid consuming contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) was discretized over the study domain by a Backward Time-Backward Space (BTBS) scheme. Then, the discretized equations for each node were derived according to the initial condition, boundary conditions, and point contaminant sources. Finally, taking into account the appropriate time step and space step, a computational code was set up in MATLAB, and the contaminant concentration was obtained at different times and distances. Simulation results show that the BTBS differencing scheme combined with the FVM is an appropriate numerical approach for solving the partial differential equation of transport in the case of two-dimensional contaminant transfer in an advective-diffusive flow.
Keywords: BTBS differencing scheme, contaminant concentration, finite volume, mass transfer, water pollution
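The BTBS (implicit, upwind-in-space) discretisation can be sketched in one dimension; the paper's problem is two-dimensional and coded in MATLAB, so the Python snippet below is only a structural illustration of the scheme. For dC/dt + u dC/dx = D d2C/dx2 it builds the tridiagonal system A C^{n+1} = C^n once and solves it at each time step.

```python
import numpy as np

nx, L = 101, 10.0
dx = L / (nx - 1)
dt, steps = 0.01, 200
u, D = 1.0, 0.05          # advection velocity and diffusion coefficient

C = np.zeros(nx)
C[0] = 1.0                # constant-concentration source at the inlet

# BTBS coefficients: backward time, backward (upwind) space for advection,
# central space for diffusion, all evaluated at the new time level
a = -u * dt / dx - D * dt / dx**2           # coefficient of C_{i-1}
b = 1 + u * dt / dx + 2 * D * dt / dx**2    # coefficient of C_i
c = -D * dt / dx**2                         # coefficient of C_{i+1}
A = (np.diag(np.full(nx, b))
     + np.diag(np.full(nx - 1, a), -1)
     + np.diag(np.full(nx - 1, c), 1))
A[0, :] = 0;  A[0, 0] = 1                   # Dirichlet inlet: C_0 = 1
A[-1, :] = 0; A[-1, -1] = 1; A[-1, -2] = -1 # zero-gradient outlet

for _ in range(steps):
    rhs = C.copy()
    rhs[0] = 1.0
    rhs[-1] = 0.0                           # enforces C_N = C_{N-1}
    C = np.linalg.solve(A, rhs)
```

Being fully implicit, the scheme is unconditionally stable, and the upwind advection term keeps the solution free of spurious oscillations (at the price of some numerical diffusion); in 2D the same assembly produces a banded system per time step.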
Procedia PDF Downloads 135
27921 Optimisation of B2C Supply Chain Resource Allocation
Authors: Firdaous Zair, Zoubir Elfelsoufi, Mohammed Fourka
Abstract:
The allocation of resources is an issue that arises at both the tactical and operational levels of strategic planning. This work considers the allocation of resources in the case of pure players, manufacturers, and click-and-mortars that have launched online sales. The aim is to improve the level of customer satisfaction while maintaining the benefits of the e-retailer and its cooperators and reducing costs and risks. Our contribution is a decision support system and tool for improving the allocation of resources in B2C e-commerce logistics chains. We first modeled the B2C chain with all the operations it integrates and the possible scenarios, since online retailers offer a wide selection of personalized services. The personalized services that online shopping companies offer to clients can be embodied in many aspects, such as customization of payment, distribution methods, and after-sales service choices; in addition, every aspect of customized service has several modes. We then analyzed the optimization problems of supply chain resource allocation in the customized online shopping service mode, which differ from supply chain resource allocation under traditional manufacturing or service circumstances. Finally, we developed an optimization model and algorithm based on this analysis of B2C supply chain resource allocation. It is a multi-objective optimization that considers the collaboration of resources in operations, time, and costs, but also the risks and the quality of services, as well as the dynamic and uncertain character of demand.
Keywords: e-commerce, supply chain, B2C, optimisation, resource allocation
Procedia PDF Downloads 272
27920 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for stacking sequence blending of composite structures; this efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another, but the complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method found in the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal easy-to-blend designs. The preconditioned design is then fed to a scheme based on cellular automata that has been developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
Procedia PDF Downloads 227
27919 Numerical Investigation of a Supersonic Ejector for Refrigeration System
Authors: Karima Megdouli, Bourhan Taschtouch
Abstract:
Supersonic ejectors have many applications in refrigeration systems, and improving ejector performance is the key to improving the efficiency of these systems. One of the main advantages of the ejector is its geometric simplicity and the absence of moving parts. This paper presents a theoretical model for evaluating the performance of a new supersonic ejector configuration for refrigeration system applications. The relationship between the flow field and the key parameters of the new configuration is illustrated by analyzing the Mach number and flow velocity contours. The method of characteristics (MOC) is used to design the supersonic nozzle of the ejector, and the results obtained are compared with those obtained by CFD. The ejector is optimized by minimizing the exergy destruction due to irreversibility and shock waves. The optimization converges to an efficient optimum solution, ensuring improved and stable performance over the whole considered range of uncertain operating conditions.
Keywords: supersonic ejector, theoretical model, CFD, optimization, performance
Procedia PDF Downloads 76
27918 A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory
Authors: Siavash Eftekharifar, Tohid Yousefi Rezaii, Mahdi Shamsi
Abstract:
The purpose of this paper is to exploit the compressed sensing (CS) method in order to model and compress electrocardiogram (ECG) signals at a high compression ratio. To obtain a sparse representation of the ECG signals, a suitable basis matrix with Gaussian kernels, which are shown to fit the ECG signals nicely, is first constructed. The sparse model is then extracted by applying an optimization technique. Finally, CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also carried out to prove the reliability of the algorithm. At this stage, a greedy optimization technique is used to reconstruct the ECG signal, and the Mean Square Error (MSE) is calculated to evaluate the precision of the proposed compression method.
Keywords: compressed sensing, ECG compression, Gaussian kernel, sparse representation
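The pipeline can be sketched end to end. The paper does not name its greedy reconstruction, so orthogonal matching pursuit (OMP, one common greedy technique) is used here as an assumption, and a synthetic sparse combination of Gaussian kernels stands in for a real ECG beat.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
t = np.arange(n)

# dictionary of Gaussian atoms (unit-norm columns)
centers = np.arange(0, n, 8)
width = 3.0
Psi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
Psi /= np.linalg.norm(Psi, axis=0)

# synthetic "ECG-like" signal: 3 active kernels
x_sparse = np.zeros(len(centers))
x_sparse[[2, 7, 13]] = [1.0, -0.6, 0.8]
signal = Psi @ x_sparse

# compressed sensing: random measurements y = Phi @ signal
m = 60
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ signal
A = Phi @ Psi                       # effective sensing-in-dictionary matrix

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef                            # orthogonal residual
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

x_hat = omp(A, y, 3)
recon = Psi @ x_hat
err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```

Here 60 measurements stand in for the full 128 samples, a compression of more than 2x, and the greedy stage recovers the sparse coefficients from which the signal is rebuilt.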
Procedia PDF Downloads 462
27917 Application of Response Surface Methodology in Optimizing Chitosan-Argan Nutshell Beads for Radioactive Wastewater Treatment
Authors: F. F. Zahra, E. G. Touria, Y. Samia, M. Ahmed, H. Hasna, B. M. Latifa
Abstract:
The presence of radioactive contaminants in wastewater poses a significant environmental and health risk, necessitating effective treatment solutions. This study investigates the optimization of chitosan-Argan nutshell beads for the removal of radioactive elements from wastewater, utilizing Response Surface Methodology (RSM) to enhance the treatment efficiency. Chitosan, known for its biocompatibility and adsorption properties, was combined with Argan nutshell powder to form composite beads. These beads were then evaluated for their capacity to remove radioactive contaminants from synthetic wastewater. The Box-Behnken design (BBD) under RSM was employed to analyze the influence of key operational parameters, including initial contaminant concentration, pH, bead dosage, and contact time, on the removal efficiency. Experimental results indicated that all tested parameters significantly affected the removal efficiency, with initial contaminant concentration and pH showing the most substantial impact. The optimized conditions, as determined by RSM, were found to be an initial contaminant concentration of 50 mg/L, a pH of 6, a bead dosage of 0.5 g/L, and a contact time of 120 minutes. Under these conditions, the removal efficiency reached up to 95%, demonstrating the potential of chitosan-Argan nutshell beads as a viable solution for radioactive wastewater treatment. Furthermore, the adsorption process was characterized by fitting the experimental data to various isotherm and kinetic models. The adsorption isotherms conformed well to the Langmuir model, indicating monolayer adsorption, while the kinetic data were best described by the pseudo-second-order model, suggesting chemisorption as the primary mechanism. This study highlights the efficacy of chitosan-Argan nutshell beads in removing radioactive contaminants from wastewater and underscores the importance of optimizing treatment parameters using RSM. 
The findings provide a foundation for developing cost-effective and environmentally friendly treatment technologies for radioactive wastewater.
Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology
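The pseudo-second-order fitting described above can be illustrated with a minimal sketch using the linearized form t/qt = 1/(k2·qe²) + t/qe: if the model holds, t/qt is linear in t, and qe and k2 follow from the slope and intercept. The uptake data below are invented for illustration, not taken from the study.

```python
# Linearized pseudo-second-order fit: t/qt = 1/(k2*qe**2) + t/qe.
# The contact-time/uptake data are hypothetical.
t = [10, 20, 40, 60, 90, 120]               # contact time, min
qt = [12.0, 18.5, 25.0, 28.0, 30.5, 31.5]   # adsorbed amount, mg/g (invented)

y = [ti / qi for ti, qi in zip(t, qt)]      # t/qt is linear in t if PSO holds

# Ordinary least-squares slope and intercept
n = len(t)
mx, my = sum(t) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(t, y))
         / sum((xi - mx) ** 2 for xi in t))
intercept = my - slope * mx

qe = 1 / slope                # equilibrium capacity, mg/g
k2 = slope ** 2 / intercept   # pseudo-second-order rate constant, g/(mg*min)
print(f"qe ~ {qe:.1f} mg/g, k2 ~ {k2:.4f} g/(mg*min)")
```

With real data, the linearity of t/qt versus t (high R²) is what supports the chemisorption interpretation the abstract draws.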
Procedia PDF Downloads 32
27916 Ficus Carica as Adsorbent for Removal of Phenol from Aqueous Solutions: Modelling and Optimization
Authors: Tizi Hayet, Berrama Tarek, Bounif Nadia
Abstract:
Phenol and its derivatives are organic compounds widely used in the chemical industry. They are introduced into the environment by accidental spills and the illegal release of industrial and municipal wastewater. Phenols are organic intermediates that are considered potential pollutants. Adsorption is one of the purification and separation techniques used in this area. Algeria produces 131,000 tonnes of figs annually; a large amount of fig leaves is therefore generated, and converting this waste into an adsorbent allows the valorization of an agricultural residue. The main purpose of the present work is to describe an application of a statistical method for modeling and optimizing the conditions of phenol (Ph) adsorption onto a locally available agricultural by-product (fig leaves). The best experimental performance for Ph removal on the adsorbent was obtained with: agitation speed (X1) = 300 rpm; adsorbent concentration (X2) = 0.2 g L-1; initial concentration (X3) = 150 mg L-1.
Keywords: low-cost adsorbents, fig leaves, full factorial design, phenol, biosorption
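The full-factorial analysis the abstract refers to can be sketched minimally: in a two-level design, a factor's main effect is the average response at its high level minus the average at its low level. The coded runs and removal percentages below are hypothetical, not the paper's data.

```python
# Main-effect calculation for a 2-level full factorial design with factors
# X1 = agitation speed, X2 = adsorbent dose, X3 = initial concentration.
# Coded levels and responses are illustrative only.
runs = [  # (x1, x2, x3, removal %)
    (-1, -1, -1, 40), (+1, -1, -1, 55),
    (-1, +1, -1, 48), (+1, +1, -1, 66),
    (-1, -1, +1, 35), (+1, -1, +1, 50),
    (-1, +1, +1, 44), (+1, +1, +1, 62),
]

def main_effect(factor):
    """Average response at level +1 minus average response at level -1."""
    hi = [y for *x, y in runs if x[factor] == +1]
    lo = [y for *x, y in runs if x[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(["agitation speed", "dose", "initial conc."]):
    print(f"effect of {name}: {main_effect(i):+.1f} % removal")
```

Ranking the absolute effects identifies the dominant factors, which is how a factorial study singles out the parameters worth optimizing.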
Procedia PDF Downloads 97
27915 Control of Oil Content of Fried Zucchini Slices by Partial Predrying and Process Optimization
Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner
Abstract:
The main concern about deep-fat-fried foods is their high final oil content, absorbed during the frying process and/or the subsequent cooling period, since a diet high in oil is considered unhealthy by consumers. Different methods have been evaluated to decrease the oil content of fried foodstuffs. One promising method is partially drying the food material before frying. The present study aimed to control and decrease the final oil content of zucchini slices by means of partial drying and to optimize the process conditions. Conventional oven drying was used to reduce the moisture content of the zucchini slices to a certain extent. Process performance in terms of oil uptake was evaluated by comparing the oil content of predried and then fried zucchini slices with that of directly fried ones. The controlled variables were oven temperature and weight loss for the predrying process, and oil temperature and time for the frying process. Zucchini slices were also fried directly for sensory evaluations revealing the preferred properties of the final product in terms of surface color, moisture content, texture, and taste. The properties of the directly fried zucchini slices scoring highest in the sensory evaluation were determined and used as targets in the optimization procedure. Response surface methodology was used for process optimization: the sensory-derived properties were selected as targets, while oil content was to be minimized. Results indicated that the final oil content of zucchini slices could be reduced from 58% to 46% by controlling the conditions of the predrying and frying processes. Predrying is therefore suggested as one option for reducing the oil content of fried zucchini slices for a healthier diet. This project (113R015) has been supported by TUBITAK.
Keywords: health process, optimization, response surface methodology, oil uptake, conventional oven
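The response-surface idea can be illustrated in its simplest one-factor form: fit a quadratic to oil content versus predrying weight loss and locate its stationary point. The three data points are invented for illustration and reduce the study's multi-factor optimization to a single factor.

```python
# One-factor response-surface sketch: fit a quadratic through three
# (predrying weight loss %, oil content %) points and find its minimum.
# The data points are illustrative, not the study's measurements.
pts = [(0.0, 58.0), (15.0, 47.0), (30.0, 49.0)]
(x0, y0), (x1, y1), (x2, y2) = pts

# Divided-difference coefficients of y = a*x**2 + b*x + c
a = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)
c = y0 - a * x0 ** 2 - b * x0

x_opt = -b / (2 * a)                    # stationary point (a minimum since a > 0)
y_opt = a * x_opt ** 2 + b * x_opt + c
print(f"predicted optimum weight loss ~ {x_opt:.1f}%, oil ~ {y_opt:.1f}%")
```

A real RSM study fits such a polynomial jointly over all controlled variables and optimizes subject to the sensory targets.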
Procedia PDF Downloads 366
27914 A Game-Theory-Based Price-Optimization Algorithm for the Simulation of Markets Using Agent-Based Modelling
Authors: Juan Manuel Sanchez-Cartas, Gonzalo Leon
Abstract:
A price competition algorithm for agent-based models (ABMs), based on game-theory principles, is proposed to deal with the simulation of theoretical market models. The algorithm is applied to the classical Hotelling model and to a two-sided market model to show that it leads to the optimal behavior predicted by the theoretical models. When the theoretical models fail to predict an equilibrium, the algorithm is still capable of reaching a feasible outcome. The results highlight that the algorithm can be implemented in other simulation models to guarantee rational users and endogenous optimal behavior. It can also be applied as a verification tool, given that it is theoretically grounded.
Keywords: agent-based models, algorithmic game theory, multi-sided markets, price optimization
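The core of such a price-competition algorithm can be sketched as iterated best responses in the textbook Hotelling line: firms at the two endpoints, unit transport cost t, marginal cost c, where each firm's best response to the rival's price p is (p + c + t)/2 and equilibrium prices converge to p* = c + t. The starting prices and parameter values below are arbitrary.

```python
# Iterated best-response pricing on the textbook Hotelling line.
# Each update is a contraction, so prices converge to the known
# equilibrium p* = c + t regardless of the starting point.
c, t = 1.0, 2.0        # marginal cost and unit transport cost (assumed)
p1, p2 = 5.0, 0.5      # arbitrary starting prices

for _ in range(50):
    p1 = (p2 + c + t) / 2   # firm 1's best response to p2
    p2 = (p1 + c + t) / 2   # firm 2's best response to p1

print(round(p1, 4), round(p2, 4))   # both approach c + t = 3.0
```

The paper's point is that the same best-response dynamic, embedded in an ABM, reproduces the theoretical equilibrium when one exists and still yields a feasible outcome when it does not.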
Procedia PDF Downloads 455
27913 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach.
In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
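The clustering step discussed above can be illustrated with a minimal pure-Python k-means on toy two-dimensional "profiles"; this is only a sketch of the algorithm, not a microarray-scale implementation, and the points are invented.

```python
def kmeans(points, k, iters=20):
    """A minimal k-means sketch: naive deterministic initialization,
    alternating assignment and centroid-update steps."""
    centers = [points[i] for i in range(k)]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated toy "expression profiles" (invented data):
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(cl) for cl in clusters))  # → [3, 3]
```

Real gene-expression pipelines face exactly the issues the abstract names on top of this loop: tens of thousands of dimensions, noisy measurements, and the choice of k.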
Procedia PDF Downloads 323
27912 Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market
Authors: Ioannis P. Panapakidis, Marios N. Moschakis
Abstract:
The Greek energy market is structured as a mandatory pool in which producers make their bid offers on a day-ahead basis. The System Operator solves an optimization routine aiming at minimizing the cost of produced electricity. The solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatility of complex time series.
Keywords: deregulated energy market, forecasting, machine learning, system marginal price
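A deliberately simple sketch of one-step SMP forecasting: fit p_t ≈ a·p_{t-1} + b by least squares on lagged prices. The price series is invented, and this linear stand-in only illustrates the forecasting setup, not the neural and neuro-fuzzy models actually compared in the study.

```python
# Tiny AR(1)-style one-step price forecast fitted by ordinary least squares.
# The SMP series below is illustrative (EUR/MWh), not Greek market data.
prices = [50.0, 52.0, 51.0, 55.0, 57.0, 56.0, 60.0, 62.0]
x = prices[:-1]   # p_{t-1}
y = prices[1:]    # p_t

n = len(x)
mx, my = sum(x) / n, sum(y) / n
a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
     / sum((xi - mx) ** 2 for xi in x))
b = my - a * mx

forecast = a * prices[-1] + b
print(f"next-period SMP forecast ~ {forecast:.1f} EUR/MWh")
```

Neural and neuro-fuzzy models replace the linear map with a nonlinear one and add more lags and exogenous inputs (load, fuel prices), which is what lets them track the volatility the abstract mentions.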
Procedia PDF Downloads 215
27911 Optimum Design of Grillage Systems Using Firefly Algorithm Optimization Method
Authors: F. Erdal, E. Dogan, F. E. Uz
Abstract:
In this study, a firefly-optimization-based optimum design algorithm is presented for grillage systems. The algorithm takes its name from fireflies, whose movement behavior serves as the model for its development; the assumptions that fireflies are unisex and are attracted to one another constitute the basis of the algorithm. The design algorithm considers the displacement and strength constraints implemented from LRFD-AISC (Load and Resistance Factor Design-American Institute of Steel Construction). It selects appropriate W (wide flange)-sections for the transverse and longitudinal beams of the grillage system from among the 272 discrete W-section designations given in LRFD-AISC, so that the design limitations described in LRFD are satisfied and the weight of the system is minimized. A number of design examples are considered to demonstrate the efficiency of the presented algorithm.
Keywords: firefly algorithm, steel grillage systems, optimum design, stochastic search techniques
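The firefly update rule the abstract describes can be sketched in a bare-bones form: a dimmer firefly moves toward each brighter one, with attractiveness decaying with distance, plus a shrinking random step. The sketch minimizes a toy objective rather than the LRFD-constrained grillage weight, and all parameter values are assumptions.

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=60, beta0=1.0, gamma=0.01,
                     alpha0=0.2, seed=1):
    """Minimal firefly algorithm: brighter (lower-f) fireflies attract
    dimmer ones with attractiveness beta0*exp(-gamma*r^2)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for it in range(iters):
        alpha = alpha0 * 0.97 ** it            # shrink the random step over time
        for i in range(n):
            for j in range(n):
                if f(X[j]) < f(X[i]):          # j is "brighter": i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
    best = min(X, key=f)
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)   # toy objective, minimum 0 at origin
best, val = firefly_minimize(sphere)
print(f"best objective ~ {val:.4f}")
```

In the grillage setting, the continuous positions would be mapped to indices into the 272 discrete W-sections and the objective would be the structure's weight plus penalties for violated LRFD constraints.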
Procedia PDF Downloads 434
27910 Applications of Artificial Neural Networks in Civil Engineering
Authors: Naci Büyükkaracığan
Abstract:
An artificial neural network (ANN) is a computational model based on the structure and working principles of the human brain's nervous system. Artificial neural networks have been the subject of an active field of research that has matured greatly over the past 55 years, and ANNs are now used in many fields. In particular, artificial neural networks have been observed to give good results in optimization and control systems, and optimization and control requirements arise in many of the areas forming the subject of civil engineering applications. In this study, the artificial intelligence systems most widely used in solving civil engineering problems were examined with respect to their basic principles and technical aspects. Finally, a literature review of applications in the field of civil engineering was conducted, and the artificial intelligence techniques used in those studies and their results are reported.
Keywords: artificial neural networks, civil engineering, fuzzy logic, statistics
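The basic ANN learning loop can be illustrated with a one-neuron perceptron on an invented, linearly separable toy task (classifying load/capacity pairs as "safe" when capacity exceeds load). This is only a sketch of the weight-update principle, not a civil-engineering model.

```python
# One-neuron perceptron: predict 1 ("safe") when capacity > load.
# Toy (load, capacity) data, invented and linearly separable.
data = [((2.0, 5.0), 1), ((4.0, 9.0), 1), ((6.0, 3.0), 0),
        ((8.0, 4.0), 0), ((1.0, 2.0), 1), ((7.0, 6.0), 0)]

w = [0.0, 0.0]
b = 0.0
lr = 0.1
for epoch in range(200):                 # enough epochs to guarantee convergence
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # 0 when correct; +/-1 on a mistake
        w[0] += lr * err * x1            # move the decision boundary
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # should match the targets [1, 1, 0, 0, 1, 0]
```

Multi-layer networks stack many such units and replace this rule with gradient-based backpropagation, which is what makes them useful for the nonlinear optimization and control problems mentioned above.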
Procedia PDF Downloads 412
27909 Numeric Modeling of Condensation of Water Vapor from Humid Air in a Room
Authors: Nguyen Van Que, Nguyen Huy The
Abstract:
This paper presents combined natural and forced convection of humid air flow. The film condensation of water vapor on a cold floor was investigated using ANSYS Fluent software. User-defined functions (UDFs) were developed and added to address the issue of film condensation at the surface of the floor; these UDFs were validated against analytical results for a flat plate. A film condensation model based on mass transfer was used to solve the phase change. On the floor, the condensation rate was obtained from the change in mass fraction near the floor. The study investigated the effects of inlet velocity, inlet relative humidity, and cold-floor temperature on the condensation rate. The simulations were carried out in both 2D and 3D to show the difference and the need for 3D modeling of condensation.
Keywords: heat and mass transfer, convection, condensation, relative humidity, user-defined functions
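The mass-transfer view of floor condensation can be sketched at order-of-magnitude level as m'' = h_m·ρ·(ω_room − ω_sat(T_floor)), using the Magnus approximation for saturation pressure. The mass-transfer coefficient and room conditions below are assumed values for illustration, not the paper's CFD results.

```python
import math

def p_sat(T_c):
    """Magnus approximation of water saturation pressure (Pa), T in deg C."""
    return 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))

def humidity_ratio(p_v, p_tot=101325.0):
    """Humidity ratio (kg water / kg dry air) from vapor partial pressure."""
    return 0.622 * p_v / (p_tot - p_v)

T_room, rh, T_floor = 28.0, 0.80, 12.0   # deg C, -, deg C (assumed)
h_m, rho = 0.003, 1.18                   # mass-transfer coeff m/s (assumed), air kg/m^3

w_room = humidity_ratio(rh * p_sat(T_room))
w_sat_floor = humidity_ratio(p_sat(T_floor))
flux = h_m * rho * max(w_room - w_sat_floor, 0.0)   # kg/(m^2 s)
print(f"condensation flux ~ {flux * 3600 * 1000:.1f} g/(m2 h)")
```

In the CFD model, h_m is not prescribed: the resolved flow field supplies the near-wall mass-fraction gradient from which the UDFs compute this flux locally.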
Procedia PDF Downloads 331
27908 Photoluminescence and Energy Transfer Studies of Dy3+ Ions Doped Lithium Lead Alumino Borate Glasses for W-LED and Laser Applications
Authors: Nisha Deopa, A. S. Rao
Abstract:
Lithium lead alumino borate (LiPbAlB) glasses doped with different Dy3+ ion concentrations were synthesized by melt quenching techniques to investigate their viability for solid-state lighting (SSL) technology. From the absorption spectra, bonding parameters (δ) were evaluated to study the nature of the bonding between the Dy3+ ions and their surrounding ligands. Judd-Ofelt (J-O) intensity parameters (Ωλ, λ = 2, 4, 6), estimated from the experimental oscillator strengths (fexp) of the absorption spectral features, were used to evaluate the radiative parameters of the different transitions. From the decay curves, experimental lifetimes (τexp) were measured and combined with the radiative lifetimes to evaluate the quantum efficiency of the as-prepared glasses. As the Dy3+ ion concentration increases, the decay profile changes from exponential to non-exponential through an energy transfer mechanism (ETM), in turn decreasing the experimental lifetime. In order to investigate the nature of the ETM, the non-exponential decay curves were fitted to the Inokuti-Hirayama (I-H) model, which confirms a dipole-dipole interaction. Among all the emission transitions, the 4F9/2 → 6H15/2 transition (483 nm) is best suited for lasing potential. By exciting the title glasses in the n-UV to blue regions, CIE chromaticity coordinates and the correlated color temperature (CCT) were calculated to assess their capability for cool white-light generation. From the evaluated radiative parameters, CIE coordinates, quantum efficiency, and confocal images, it was observed that glass B (0.5 mol%) is a potential candidate for developing w-LEDs and lasers.
Keywords: energy transfer, glasses, J-O parameters, photoluminescence
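Two of the quantities above can be sketched numerically: the quantum efficiency as the ratio of measured to radiative lifetime, and the Inokuti-Hirayama decay shape for dipole-dipole transfer (S = 6). The lifetimes and the energy-transfer parameter Q are illustrative values, not the paper's data.

```python
import math

# Quantum efficiency from lifetimes (values assumed for illustration):
tau_rad = 0.95   # radiative lifetime from J-O analysis, ms
tau_exp = 0.62   # measured lifetime, ms
eta = tau_exp / tau_rad
print(f"quantum efficiency ~ {100 * eta:.0f}%")

def ih_decay(t, tau0, Q, S=6):
    """Inokuti-Hirayama profile I(t)/I0 = exp(-t/tau0 - Q*(t/tau0)**(3/S)).
    S = 6 corresponds to dipole-dipole donor-acceptor interaction."""
    return math.exp(-t / tau0 - Q * (t / tau0) ** (3 / S))

# A larger Q (more acceptors, stronger transfer) gives a faster,
# non-exponential decay, mirroring the concentration trend described above:
print(ih_decay(0.5, tau0=0.95, Q=0.0) > ih_decay(0.5, tau0=0.95, Q=0.8))  # True
```

Fitting Q (and checking which S best matches the data) is what identifies the interaction type in the I-H analysis.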
Procedia PDF Downloads 215
27907 The Optimization of an Industrial Recycling Line: Improving the Durability of Recycled Polyethylene Blends
Authors: Alae Lamtai, Said Elkoun, Hniya Kharmoudi, Mathieu Robert, Carl Diez
Abstract:
This study applies Taguchi's design-of-experiments methodology and grey relational analysis (GRA) for the multi-objective optimization of an industrial recycling line. The line consists mainly of a mono- and a twin-screw extruder and a filtration system. Experiments were performed according to an L₁₆ standard orthogonal array based on five process parameters, namely: mono-screw design, screw speed of the mono- and twin-screw extruders, melt pump pressure, and filter mesh size. The objective of this optimization is to improve the durability of the polyethylene (PE) blend by decreasing the loss of stress crack resistance (SCR), measured by the Notched Crack Ligament Stress (NCLS) and Unnotched Crack Ligament Stress (UCLS) tests, while increasing the gain in Izod impact strength of the PE blend before and after recycling. Based on grey relational analysis, the optimal setting of the process parameters was identified, and the results indicated that the mono-screw design and the screw speeds of both the mono- and twin-screw extruders significantly affect the mechanical properties of the recycled PE blend.
Keywords: Taguchi, recycling line, polyethylene, stress crack resistance, Izod impact strength, grey relational analysis
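A minimal grey-relational-analysis sketch on invented responses shows how multiple objectives are collapsed into a single grade per run: normalize each response (lower-better for SCR loss, higher-better for impact gain), convert deviations from the ideal into grey relational coefficients, and average them. The run data and weights are hypothetical.

```python
# Grey relational analysis on made-up responses for four runs:
# (SCR loss, lower is better; impact-strength gain, higher is better).
runs = {
    "A": (12.0, 3.0),
    "B": (8.0, 5.0),
    "C": (15.0, 6.0),
    "D": (10.0, 4.0),
}
zeta = 0.5  # distinguishing coefficient, conventionally 0.5

def normalize(vals, higher_better):
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) if higher_better else (hi - v) / (hi - lo)
            for v in vals]

names = list(runs)
scr = normalize([runs[n][0] for n in names], higher_better=False)
imp = normalize([runs[n][1] for n in names], higher_better=True)

def coeffs(norm_seq):
    # Grey relational coefficient from deviation d = 1 - x; after
    # normalization the global min/max deviations are 0 and 1.
    return [zeta / (d + zeta) for d in (1 - x for x in norm_seq)]

grades = {n: (g1 + g2) / 2 for n, g1, g2 in zip(names, coeffs(scr), coeffs(imp))}
best = max(grades, key=grades.get)
print(best, round(grades[best], 3))
```

In a Taguchi study the grades are then averaged per factor level to pick the optimal setting of each process parameter.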
Procedia PDF Downloads 83
27906 Transferable Knowledge: Expressing Lessons Learnt from Failure to Outsiders
Authors: Stijn Horck
Abstract:
Background: The value of lessons learned from failure increases when these insights can be put to use by those who did not experience the failure. While learning from others has mostly been researched between individuals or teams within the same environment, transferring knowledge from the person who experienced the failure to an outsider comes with extra challenges. As sense-making of failure is an individual process leading to different learning experiences, the potential of lessons learned from failure varies greatly depending on who is transferring them. Using an integrated framework of linguistic aspects related to attributional egotism, this study aims to offer a complete explanation of the challenges in transferring lessons learned from failures experienced by others. Method: A case study was conducted of a failed foundation established to address the information needs of GPs during COVID-19. An overview of failure causes and lessons learned was compiled through a preliminary analysis of data collected in two phases with metaphoric examples of failure types. This was followed up by individual narrative interviews with the board members, who had all experienced the same events, to analyze the individual variance of lessons learned through discourse analysis. This research design uses a researcher-as-instrument approach, since the recipient of these lessons learned is the author himself. Results: Thirteen causes of the foundation's failure were given, and nine lessons were formulated. Based on the individually emphasized events, the explanations of failure events mentioned by all or three respondents contained more linguistic aspects related to attributional egotism than those mentioned by only one or two. Moreover, the learning events mentioned by all or three respondents involved lessons learned based on changed insight, while the lessons expressed by only one or two were based more on direct value.
Retrospectively, the lessons expressed as a group in the first data-collection phase seem to have captured some, but not all, of the direct-value lessons. Conclusion: Individual variance in expressing lessons learned to outsiders can be reduced by using metaphoric or analogical explanations from a third party. In line with attributional egotism theory, individuals separated from a group that has experienced the same failure are more likely to refer to failure causes for which the chances of being contradicted are smallest. Lastly, this study contributes to the academic literature by demonstrating that linguistic analysis is suitable for investigating knowledge transfer from lessons learned after failure.
Keywords: failure, discourse analysis, knowledge transfer, attributional egotism
Procedia PDF Downloads 115