Search results for: real output
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6937

5557 Recommendations Using Online Water Quality Sensors for Chlorinated Drinking Water Monitoring at Drinking Water Distribution Systems Exposed to Glyphosate

Authors: Angela Maria Fasnacht

Abstract:

The detection of anomalies caused by the presence of contaminants, commonly implemented as early detection systems in water treatment plants, has become a critical topic that deserves in-depth study so that these systems can be improved and adapted to current requirements. The design of such systems requires detailed analysis and processing of data in real time, so various statistical methods appropriate to the generated data must be applied, such as Spearman's correlation, factor analysis, cross-correlation, and k-fold cross-validation. Statistical analysis allows large data sets to be evaluated and the behavior of variables to be modeled; in this sense, statistical treatment can be considered a vital step toward advanced machine-learning models that allow optimized real-time data management in early detection systems for water treatment processes. These techniques also facilitate the development of new technologies used in advanced sensors. In this work, these methods were applied to identify possible correlations between the measured parameters and the presence of the glyphosate contaminant in a single-pass system. The interaction between the initial glyphosate concentration, the location of the sensors, and the readings of the reported parameters was also studied.
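
As a concrete illustration of two of the statistical methods named above, the following minimal Python sketch computes Spearman correlations between sensor parameters and glyphosate concentration and runs a k-fold cross-validation of a predictive model. The column names and data are synthetic placeholders, not the study's sensor records, and the regression model is an assumption chosen only for illustration.

```python
# Illustrative sketch (not the authors' code): Spearman correlation between
# sensor readings and glyphosate level, plus k-fold cross-validation of a
# simple predictive model. All columns and values are synthetic placeholders.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_score, KFold
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ph": rng.normal(7.2, 0.3, n),
    "conductivity": rng.normal(450, 40, n),
    "free_chlorine": rng.normal(0.8, 0.2, n),
})
# Hypothetical response: contaminant level loosely tied to one probe reading
df["glyphosate_ppb"] = 5 + 2.0 * df["free_chlorine"] + rng.normal(0, 0.5, n)

# Spearman rank correlation of each measured parameter with the contaminant
for col in ["ph", "conductivity", "free_chlorine"]:
    rho, p = spearmanr(df[col], df["glyphosate_ppb"])
    print(f"{col}: rho={rho:.2f}, p={p:.3g}")

# k-fold cross-validation of a regression model predicting the contaminant
X, y = df[["ph", "conductivity", "free_chlorine"]], df["glyphosate_ppb"]
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestRegressor(random_state=0), X, y, cv=cv, scoring="r2")
print("5-fold R^2:", scores.round(2))
```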

Keywords: glyphosate, emerging contaminants, machine learning, probes, sensors, predictive

Procedia PDF Downloads 108
5556 Valorization of Banana Peels for Mercury Removal under Environmentally Realistic Conditions

Authors: E. Fabre, C. Vale, E. Pereira, C. M. Silva

Abstract:

Introduction: Mercury is one of the most troublesome toxic metals responsible for the contamination of aquatic systems due to its accumulation and biomagnification along the food chain. The United Nations 2030 Agenda for Sustainable Development promotes improving water quality by reducing water pollution and encouraging enhanced wastewater treatment, recycling, and safe water reuse globally. Sorption processes are widely used in wastewater treatment due to their many advantages, such as high efficiency and low operational costs. In these processes, the target contaminant is removed from the solution by a solid sorbent; the more selective and lower-cost the biosorbent, the more attractive the process becomes. Agricultural wastes are especially attractive sorbents: they are widely available, have no commercial value, and require little or no processing. In this work, banana peels were tested for mercury removal from low-concentration solutions. In order to investigate the applicability of this solid, six water matrices were used, increasing in complexity from natural waters to a real wastewater. Kinetic and equilibrium studies were also performed using the best-known models to evaluate the viability of the process. In line with the concept of the circular economy, this study adds value to this by-product and contributes to liquid waste management. Experimental: The solutions were prepared with an initial Hg(II) concentration of 50 µg L⁻¹ in natural waters, at 22 ± 1 ºC and pH 6, under magnetic stirring at 650 rpm and with a biosorbent dose of 0.5 g L⁻¹. NaCl was added to obtain the salt solutions, seawater was collected from the Portuguese coast, and the real wastewater was kindly provided by ISQ - Instituto de Soldadura e Qualidade (Welding and Quality Institute) and diluted to the same concentration of 50 µg L⁻¹. Banana peels were freeze-dried, milled, and sieved, and the particles < 1 mm were used. Results: Banana peels removed more than 90% of Hg(II) from all the synthetic solutions studied. In these cases, increasing the complexity of the water matrix promoted higher mercury removal. In salt waters, the biosorbent showed removals of 96%, 95%, and 98% for 3, 15, and 30 g L⁻¹ of NaCl, respectively. The residual concentration of Hg(II) in solution reached the drinking water regulatory level (1 µg L⁻¹). For the real matrices, the lower Hg(II) elimination (93% for seawater and 81% for the real wastewater) can be explained by competition between the Hg(II) ions and the other elements present in these solutions for the sorption sites. Regarding the equilibrium study, the experimental data are best described by the Freundlich isotherm (R² = 0.991). The Elovich equation provided the best fit to the kinetic points. Conclusions: The results demonstrate the strong ability of banana peels to remove mercury. The environmentally realistic conditions studied in this work highlight their potential use as biosorbents in water remediation processes.
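
Since the equilibrium data were described by the Freundlich isotherm, a minimal sketch of how such a fit can be carried out is given below. The equilibrium points are placeholders, not the study's measurements, and the fitted parameter values are purely illustrative.

```python
# Illustrative sketch (placeholder data, not the study's): fitting the
# Freundlich isotherm q_e = K_F * C_e**(1/n) with scipy.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n):
    """Freundlich isotherm: sorbed amount vs. equilibrium concentration."""
    return kf * ce ** (1.0 / n)

# Placeholder equilibrium data: Ce (µg/L) and qe (µg/g); replace with measured values
ce = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0])
qe = np.array([12.0, 19.0, 33.0, 48.0, 70.0, 101.0])

popt, _ = curve_fit(freundlich, ce, qe, p0=[10.0, 1.5])
kf, n = popt
residuals = qe - freundlich(ce, *popt)
r2 = 1 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"K_F = {kf:.2f}, n = {n:.2f}, R^2 = {r2:.3f}")
```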

Keywords: banana peels, mercury removal, sorption, water treatment

Procedia PDF Downloads 144
5555 Identification of Microbial Community in an Anaerobic Reactor Treating Brewery Wastewater

Authors: Abimbola M. Enitan, John O. Odiyo, Feroz M. Swalaha

Abstract:

The study of microbial ecology and function in anaerobic digestion processes is essential for controlling these biological processes and for understanding the symbiotic relationships between the microorganisms involved in converting the complex organic matter in industrial wastewater into simple molecules. In this study, the diversity and abundance of the bacterial community in granular sludge taken from the different compartments of a full-scale upflow anaerobic sludge blanket (UASB) reactor treating brewery wastewater were investigated using polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR). The phylogenetic analysis showed three major eubacterial phyla in the full-scale UASB reactor, belonging to Proteobacteria, Firmicutes, and Chloroflexi, with different groups populating different compartments. The qPCR assay showed a high abundance of eubacteria, with concentrations increasing along the reactor's compartments. This study extends our understanding of the diversity, topological distribution, and shifts in concentration of microbial communities in the different compartments of a full-scale UASB reactor treating brewery wastewater. The colonization and the trophic interactions among these microbial populations in reducing and transforming complex organic matter within the UASB reactor were established.

Keywords: bacteria, brewery wastewater, real-time quantitative PCR, UASB reactor

Procedia PDF Downloads 250
5554 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code is provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modelling of the data and the interpretation of the results; crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo (MCMC) methods, including the independence Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm, and results are evaluated for the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
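
The paper's own R and JAGS code is not reproduced here. The sketch below illustrates the same idea in Python on synthetic placeholder data: a random-walk Metropolis sampler (a close relative of the independence Metropolis variant named above) for a TPGE survival model with right-censored observations, assuming weakly informative normal priors on the log-parameters.

```python
# Illustrative sketch (not the paper's R/JAGS code): random-walk Metropolis for
# a Topp-Leone generalized exponential (TPGE) survival model with right
# censoring. Data, priors, and tuning constants are placeholders.
import numpy as np

rng = np.random.default_rng(1)
t = rng.exponential(2.0, size=60)          # observed times (placeholder)
cens = rng.random(60) < 0.2                # ~20% right-censored flags (placeholder)

def log_lik(params, t, cens):
    alpha, lam, b = params
    if min(alpha, lam, b) <= 0:
        return -np.inf
    G = (1 - np.exp(-lam * t)) ** alpha                 # baseline (gen. exponential) CDF
    g = alpha * lam * np.exp(-lam * t) * (1 - np.exp(-lam * t)) ** (alpha - 1)
    F = (G * (2 - G)) ** b                              # Topp-Leone-G CDF
    f = 2 * b * g * (1 - G) * (G * (2 - G)) ** (b - 1)  # Topp-Leone-G pdf
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = np.where(cens, np.log1p(-F), np.log(f))    # censored: log S(t); else log f(t)
    return ll.sum() if np.all(np.isfinite(ll)) else -np.inf

def log_post(theta, t, cens):
    # weakly informative N(0, 10^2) priors on the log-parameters
    return log_lik(np.exp(theta), t, cens) - 0.5 * np.sum(theta**2) / 100.0

theta = np.zeros(3)                  # log(alpha, lambda, b), i.e. start at (1, 1, 1)
chain, lp = [], log_post(theta, t, cens)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1, 3)      # random-walk proposal
    lp_prop = log_post(prop, t, cens)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(np.exp(theta))
post = np.array(chain)[5000:]        # discard burn-in
print("posterior means (alpha, lambda, b):", post.mean(axis=0).round(3))
```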

Keywords: Bayesian inference, JAGS, Laplace approximation, LaplacesDemon, posterior, R software, simulation

Procedia PDF Downloads 519
5553 Forms of Social Provision for Housing Investments in Local Planning Acts for European Capitals: Comparative Study and Spatial References

Authors: Agata Twardoch

Abstract:

The processes of commodification of real estate and changes in housing markets have led to a situation where the prices of free-market housing in European capitals are significantly higher than the purchasing power of average wages. This phenomenon has many negative social and spatial consequences. At the same time, the attractiveness of real estate as an asset drives these processes further. Out of concern for sustainable social development, city authorities apply solutions to counterbalance the burdensome effects of the commodification of housing. One of them is a social provision requirement for housing investments. The article presents a comparative study of solutions applied in selected European capitals, using the examples of Warsaw, Paris, London, Berlin, Copenhagen, and Vienna. The study was conducted alongside work on an expert report for the master plan for Warsaw. The forms of provision applied in local planning acts were compared, with particular reference to spatial solutions. The results of the analysis made it possible to determine common features of the solutions applied and to establish recommendations for further practice. The major findings of the study indicate that a requirement of social provision is achievable in spatial planning documents and that the application of social provision to private housing investments is a useful housing policy tool against commodification.

Keywords: affordable housing, housing provision, spatial planning, sustainable social development

Procedia PDF Downloads 161
5552 Math Word Problems: Context and Achievement

Authors: Irena Smetackova

Abstract:

An important part of school mathematics is word problems, which represent the connection between school knowledge and real life. To find the reasons why students consider word problems difficult, it is necessary to take motivational settings into consideration, in addition to mathematical knowledge and reading skills. Our goal is to identify whether a familiar or unfamiliar context of a math word problem influences the solving success rate and, if so, whether the reasons are motivational or cognitive. For this purpose, we conducted a three-step study in a group of fifty pupils aged 9-10 years. In the first step, we asked pupils to create 'the best' word problems for a given numerical formula; a set of 19 word problems with different contexts was selected. In the second step, pupils were asked to evaluate (without solving) how much they liked each item and how easy it seemed to them. The six word problems with low preference and low estimated success rates were selected and combined with another six problems with high preference and high estimated success rates. In the third step, the same pupils were asked to solve the word problems. The analysis showed that pupils' attitudes toward word problems and their solving success varied with the context. Strong gender patterns were identified both in preferred contexts and in estimated success rates; however, the real success rates did not differ as strongly. The success gap between word problems with and without preferred contexts was larger than the gap between problems with and without real experience of the context. The hypothesis that motivational factors are more important than cognitive factors was confirmed.

Keywords: mathematics, context of reality, motivation, cognition, word problems

Procedia PDF Downloads 186
5551 Sustainable Geographic Information System-Based Map for Suitable Landfill Sites in Aley and Chouf, Lebanon

Authors: Allaw Kamel, Bazzi Hasan

Abstract:

Municipal solid waste (MSW) generation is among the most significant threats to global environmental health. Solid waste management has been an important environmental problem in developing countries because of the difficulty of finding sustainable solutions for solid wastes; therefore, more effort is needed to overcome this problem. Lebanon suffered a severe solid waste management crisis in 2015, and a new landfill site was proposed to solve the existing problem. The study aims to identify and locate the most suitable area in which to construct a landfill, taking sustainable development into consideration, in order to overcome the present situation and protect future demands. Throughout the article, a landfill site selection methodology is discussed using a Geographic Information System (GIS) and Multi-Criteria Decision Analysis (MCDA). Several environmental, economic, and social factors were taken as criteria for the selection of a landfill. Soil, geology, and Land Use and Land Cover (LUC) indices, together with a Sustainable Development Index, were the main inputs used to create the final map of the Environmentally Sensitive Area (ESA) for the landfill site. Different factors were determined to define each index, and the input data for each factor were managed, visualized, and analyzed using GIS. GIS was used as an important tool to identify suitable areas for the landfill: Spatial Analysis (SA), Analysis, and Management GIS tools were implemented to produce input maps capable of identifying suitable areas related to each index. Weights were assigned to the factors within each index, and main weights were assigned to each index used. The combination of the different index maps generates the final output map of the ESA. The output map was reclassified into three suitability classes of low, moderate, and high suitability. The results showed different locations suitable for the construction of a landfill and reflected the importance of GIS and MCDA in helping decision makers find a solution for solid wastes through a sanitary landfill.
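
A minimal sketch of the weighted-overlay step described above is given below: factor rasters are combined into index maps using within-index weights, the index maps are combined using main weights, and the result is reclassified into three suitability classes. The rasters, factor names, and weights are placeholders, not the study's criteria or data.

```python
# Illustrative MCDA weighted-overlay sketch; all rasters and weights are placeholders.
import numpy as np

rows, cols = 100, 100
rng = np.random.default_rng(0)

# Factor rasters already reclassified to a common 1-10 suitability scale
soil    = {"texture": rng.integers(1, 11, (rows, cols)), "depth": rng.integers(1, 11, (rows, cols))}
geology = {"lithology": rng.integers(1, 11, (rows, cols))}
luc     = {"land_cover": rng.integers(1, 11, (rows, cols)), "distance_to_roads": rng.integers(1, 11, (rows, cols))}

def index_map(factors, weights):
    """Weighted sum of the factor rasters belonging to one index."""
    return sum(w * factors[name] for name, w in weights.items())

soil_idx    = index_map(soil,    {"texture": 0.6, "depth": 0.4})
geology_idx = index_map(geology, {"lithology": 1.0})
luc_idx     = index_map(luc,     {"land_cover": 0.7, "distance_to_roads": 0.3})

# Main weights on the indices, then the combined ESA map
esa = 0.4 * soil_idx + 0.3 * geology_idx + 0.3 * luc_idx

# Reclassify into three suitability classes: low / moderate / high
classes = np.digitize(esa, bins=np.quantile(esa, [1/3, 2/3]))
print("cells per class (low, moderate, high):", np.bincount(classes.ravel()))
```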

Keywords: sustainable development, landfill, municipal solid waste (MSW), geographic information system (GIS), multi criteria decision analysis (MCDA), environmentally sensitive area (ESA)

Procedia PDF Downloads 140
5550 A Stepwise Approach for Piezoresistive Microcantilever Biosensor Optimization

Authors: Amal E. Ahmed, Levent Trabzon

Abstract:

Due to the low concentration of analytes in biological samples, the use of Biological Microelectromechanical System (Bio-MEMS) biosensors for biomolecule detection results in a minuscule output signal that is not good enough for practical applications. In response to this, a need has arisen for an optimized biosensor capable of giving a high output signal in response to the detection of a few analytes in the sample; the ultimate goal is to convert the attachment of a single biomolecule into a measurable quantity. For this purpose, MEMS microcantilever-based biosensors have emerged as a promising sensing solution because they are simple, cheap, and very sensitive and, more importantly, do not need optical labeling of the analytes (label-free). Among the different microcantilever transduction techniques, piezoresistive microcantilever biosensors have become more prominent because they work well in liquid environments and have an integrated readout system. However, the design of piezoresistive microcantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. It was found that the parameters that can be optimized to enhance the sensitivity of piezoresistive microcantilever-based sensors are: cantilever dimensions, cantilever material, cantilever shape, piezoresistor material, piezoresistor doping level, piezoresistor dimensions, piezoresistor position, and the shape and position of the Stress Concentration Region (SCR). After a systematic analysis of the effect of each design and process parameter on the sensitivity, a step-wise optimization approach was developed in which almost all of these parameters were varied one at a time while fixing the others, to obtain the maximum possible sensitivity at the end. At each step, the goal was to optimize the parameter so that it maximizes and concentrates the stress in the piezoresistor region for the same applied force and thus yields higher sensitivity. Using this approach, an optimized sensor was obtained with an electrical sensitivity (ΔR⁄R) 73.5 times higher than that of the starting sensor. In addition, this piezoresistive microcantilever biosensor is more sensitive than similar sensors previously reported in the open literature. The sensitivity of the final sensor is -1.5×10⁻⁸ (Ω/Ω)/pN, which means that for each 1 pN (about 10⁻¹⁰ g) of biomolecules attached to this biosensor, the relative resistance (ΔR/R) of the piezoresistor changes by 1.5×10⁻⁸. Throughout this work, COMSOL Multiphysics 5.0, a commercial Finite Element Analysis (FEA) tool, has been used to simulate the sensor performance.
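
The step-wise procedure can be pictured as a coordinate-wise sweep: one design parameter is varied over its candidate values while the others stay fixed, the best value is locked in, and the search moves to the next parameter. The sketch below illustrates only this loop; the parameter names and candidate values are invented placeholders, and the sensitivity function is a hypothetical surrogate standing in for the COMSOL FEA evaluation used in the study.

```python
# Illustrative coordinate-wise (one-at-a-time) sweep; the sensitivity() function
# is a hypothetical surrogate, not a physical model of the cantilever.
design = {"cantilever_length_um": 200.0, "piezo_length_um": 60.0, "piezo_position_um": 10.0}
sweeps = {
    "cantilever_length_um": [150.0, 200.0, 250.0, 300.0],
    "piezo_length_um": [40.0, 60.0, 80.0],
    "piezo_position_um": [0.0, 10.0, 20.0],
}

def sensitivity(d):
    # Hypothetical surrogate for (delta R / R) per unit force; replace with an FEA call.
    return (d["cantilever_length_um"] ** 2) / (d["piezo_length_um"] * (1.0 + d["piezo_position_um"]))

for param, values in sweeps.items():           # vary one parameter per step
    best_val = max(values, key=lambda v: sensitivity({**design, param: v}))
    design[param] = best_val                    # fix the best value before the next step
    print(f"step: {param} -> {best_val}, surrogate sensitivity = {sensitivity(design):.3g}")
```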

Keywords: biosensor, microcantilever, piezoresistive, stress concentration region (SCR)

Procedia PDF Downloads 561
5549 Predicting Daily Patient Hospital Visits Using Machine Learning

Authors: Shreya Goyal

Abstract:

The study aims to build user-friendly software to understand patient arrival patterns and compute the number of potential patients who will visit a particular health facility in a given period by using a machine learning algorithm. The underlying machine learning algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources: if there is a projected surge in patients, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times and ensure timely access to care for patients in need. Another big advantage of this software is adherence to strict data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, as the hospital does not have to share the data with any third party or upload it to the cloud, because the software reads data locally from the machine. The data need to be arranged in a particular format, and the software will then be able to read the data and provide meaningful output. Using software that operates locally can facilitate compliance with these regulations by minimizing data exposure: keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it on external servers, which helps maintain the confidentiality and integrity of sensitive patient information. Historical patient data are used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events. The algorithm uses a supervised learning method to optimize the objective function and find the global minimum; it stores the values of the local minima after each iteration and, at the end, compares all the local minima to find the global minimum. The strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better planning of personnel and medical resources.
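
As an illustration of this kind of workflow (not the authors' software), the sketch below trains an SVM regressor on locally generated placeholder data with hypothetical feature names such as day of week and a local-event flag, and reports a hold-out accuracy figure. In practice, the hospital's locally stored historical records would be read instead of the synthetic frame built here.

```python
# Illustrative SVM sketch on synthetic placeholder data (hypothetical features).
import numpy as np
import pandas as pd
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 365
df = pd.DataFrame({
    "mean_patient_age": rng.normal(45, 5, n),
    "hour_of_day": rng.integers(0, 24, n),
    "day_of_week": rng.integers(0, 7, n),
    "season": rng.integers(0, 4, n),
    "local_event_flag": rng.integers(0, 2, n),
})
df["daily_visits"] = 120 + 15 * df["local_event_flag"] + 5 * (df["day_of_week"] == 0) + rng.normal(0, 8, n)
# In practice df would be read from the hospital's local records, kept on the machine.

features = ["mean_patient_age", "hour_of_day", "day_of_week", "season", "local_event_flag"]
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["daily_visits"], test_size=0.2, shuffle=False)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
print(f"hold-out accuracy ~ {100 * (1 - mape):.1f}%")
```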

Keywords: machine learning, SVM, HIPAA, data

Procedia PDF Downloads 57
5548 A Different Approach to Optimize Fuzzy Membership Functions with Extended FIR Filter

Authors: Jun-Ho Chung, Sung-Hyun Yoo, In-Hwan Choi, Hyun-Kook Lee, Moon-Kyu Song, Choon-Ki Ahn

Abstract:

The extended finite impulse response (EFIR) filter is employed to optimize the membership functions (MFs) of a fuzzy model with strong nonlinearity. MFs are important parts of a fuzzy logic system (FLS), and thus optimizing the MFs of an FLS is one approach to improving output performance. We employ the EFIR filter as an alternative optimization option for the nonlinear fuzzy model. The performance of the EFIR filter is demonstrated on a fuzzy cruise control system via a numerical example.

Keywords: fuzzy logic system, optimization, membership function, extended FIR filter

Procedia PDF Downloads 713
5547 Introduction of Integrated Image Deep Learning Solution and How It Brought Laboratorial Level Heart Rate and Blood Oxygen Results to Everyone

Authors: Zhuang Hou, Xiaolei Cao

Abstract:

The general public and medical professionals recognized the importance of accurately measuring and storing blood oxygen levels and heart rate during the COVID-19 pandemic. The demand for accurate contactless devices was motivated by the need to reduce cross-infection and by the shortage of conventional oximeters, partially due to global supply chain issues. This paper evaluated the heart rate (HR) and oxygen saturation (SpO2) measurements of HealthyPai, a contactless mini program, compared with other wearable devices. In the HR study of 185 samples (81 in the laboratory environment, 104 in the real-life environment), the mean absolute error (MAE) ± standard deviation was 1.4827 ± 1.7452 in the lab and 6.9231 ± 5.6426 in the real-life setting. In the SpO2 study of 24 samples, the MAE ± standard deviation of the measurement was 1.0375 ± 0.7745. Our results validated that HealthyPai, utilizing the Integrated Image Deep Learning Solution (IIDLS) framework, can accurately measure HR and SpO2, providing test quality at least comparable to that of FDA-approved wearable devices on the market and surpassing consumer-grade and research-grade wearable standards.

Keywords: remote photoplethysmography, heart rate, oxygen saturation, contactless measurement, mini program

Procedia PDF Downloads 126
5546 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage, and heat transfer between the various phases in the blast furnace hearth for stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and stable BF performance. It is not possible to carry out direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake into the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and estimates the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL India's BF-II, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 179
5545 Challenges and Opportunities in Computing Logistics Cost in E-Commerce Supply Chain

Authors: Pramod Ghadge, Swadesh Srivastava

Abstract:

Revenue generation for a logistics company depends on how the logistics cost of a shipment is calculated. The logistics cost of a shipment is a function of the distance and speed of the shipment's travel in a particular network, its volumetric size, and its dead weight. Logistics billing is based mainly on the consumption of the scarce resource (the space or weight-carrying capacity of a carrier). A shipment's size or dead weight is a function of product and packaging weight, dimensions, and flexibility. Hence, to arrive at a standard methodology for computing an accurate cost to bill the customer, the interplay among the above-mentioned physical attributes, along with their measurement, plays a key role. This becomes even more complex for an e-commerce company like Flipkart, which caters to shipments from both warehouses and the marketplace in an unorganized, non-standard market like India. In this paper, we explore various methodologies to define a standard way of billing non-standard shipments across a wide range of sizes, shapes, and dead weights. These include the use of historical volumetric/dead weight data to arrive at a factor that can be used to compute the logistics cost of a shipment, and the calculation of the real/contour volume of a shipment to address the problem of irregular shipment shapes, which cannot be solved by conventional bounding-box volume measurements. We also discuss certain key business practices and operational quality considerations needed to bring standardization and drive appropriate ownership in the ecosystem.
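
For reference, a minimal sketch of the common chargeable-weight rule underlying such billing is shown below: the shipment is billed on the larger of its dead weight and its volumetric weight. The volumetric divisor and the rate are illustrative placeholders; the historical-data factor and contour-volume methods discussed in the paper would refine or replace the bounding-box volume and divisor used here.

```python
# Illustrative chargeable-weight computation; divisor and rate are placeholders.
def chargeable_weight_kg(length_cm, width_cm, height_cm, dead_weight_kg, divisor=5000.0):
    """Volumetric weight uses the bounding-box volume; a contour volume could replace it."""
    volumetric_weight = (length_cm * width_cm * height_cm) / divisor
    return max(dead_weight_kg, volumetric_weight)

def logistics_cost(distance_km, rate_per_kg_km, *dims_and_weight):
    return rate_per_kg_km * distance_km * chargeable_weight_kg(*dims_and_weight)

# Example: a light but bulky parcel is billed on volume, not dead weight
print(chargeable_weight_kg(60, 40, 30, dead_weight_kg=4.0))   # -> 14.4 kg chargeable
print(logistics_cost(500, 0.002, 60, 40, 30, 4.0))            # rate * distance * 14.4 kg
```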

Keywords: contour volume, logistics, real volume, volumetric weight

Procedia PDF Downloads 257
5544 Quantum Cum Synaptic-Neuronal Paradigm and Schema for Human Speech Output and Autism

Authors: Gobinathan Devathasan, Kezia Devathasan

Abstract:

Objective: To improve the current modified Broca-Wernicke-Lichtheim-Kussmaul speech schema and provide insight into autism. Methods: We reviewed the pertinent literature. Current findings, involving Brodmann areas 22, 46, 9, 44, 45, 6, and 4, are based on neuropathology and functional MRI studies. However, in primary autism there is no lucid explanation, and the changes described, whether in neuropathology or functional MRI, appear consequential. Findings: We put forward an enhanced model which may explain the enigma related to autism. Vowel output is subcortical and does not need cortical representation, whereas consonant speech is cortical in origin. Left lateralization is needed to commence the circuitry spin, as life has evolved with L-amino acids and the left spin of electrons. A fundamental species difference is that we are capable of three-consonant syllables and bi-syllable expression, whereas cetaceans and songbirds are confined to single or dual consonants. The four key sites for speech are the superior auditory cortex, Broca's two areas, and the supplementary motor cortex. Using the Argand diagram and the Riemann projection, we theorize that the Euclidean three-dimensional synaptic neuronal circuits of speech are quantized to coherent waves, and that decoherence then takes place at area 6 (spherical representation). In this quantum state, complex three-consonant languages are instantaneously integrated, and multiple languages can be learned, verbalized, and differentiated. Conclusion: We postulate that evolutionary human speech, unlike that of cetaceans and birds, is elevated to quantum interaction to achieve three-consonant/bi-syllable speech. In classical primary autism, the sudden switching off and on of speech noted in several cases could now be explained not by any anatomical lesion but by a failure of coherence. Area 6 projects directly into the prefrontal saccadic area (area 8), and this further explains the second primary feature of autism: lack of eye contact. The third feature, repetitive finger gestures, whose representation lies adjacent to the speech/motor areas, may represent actual attempts at communication, akin to sign language for the deaf.

Keywords: quantum neuronal paradigm, cetaceans and human speech, autism and rapid magnetic stimulation, coherence and decoherence of speech

Procedia PDF Downloads 180
5543 Comparative Performance Analysis of Nonlinearity Cancellation Techniques for MOS-C Realization in Integrator Circuits

Authors: Hasan Çiçekli, Ahmet Gökçen, Uğur Çam

Abstract:

In this paper, a comparative performance analysis of the four most commonly used nonlinearity cancellation techniques for realizing passive resistors with MOS transistors is presented. The comparison is made using an integrator circuit employing, in turn, an op-amp, an OTRA, and an ICCII as the active element. All of the circuits are implemented in MOS-C form and simulated in PSPICE using 0.35 µm TSMC MOSIS process model parameters. With MOS-C realization, the circuits become electronically tunable and fully integrable, which is very important in IC design. The output waveforms, frequency responses, THD analysis results, and features of the nonlinearity cancellation techniques are also given.

Keywords: integrator circuits, MOS-C realization, nonlinearity cancellation, tuneable resistors

Procedia PDF Downloads 521
5542 An Improved Photovoltaic System Balancer Architecture

Authors: Chih-Chiang Hua, Yi-Hsiung Fang, Cyuan-Jyun Wong

Abstract:

An improved PV balancer for photovoltaic applications is proposed in this paper. The proposed PV balancer senses the voltage and current of each PV module and adjusts the output voltage of the converter. Thus, the PV system can implement maximum power point tracking (MPPT) independently for each module, whether it is under shading, different irradiation, or PV cell degradation. In addition, the cost of the PV balancer can be reduced due to the low power rating of the converter. To assess the effectiveness of the proposed system, two PV balancers are designed and verified through simulation under different shading conditions. The proposed PV balancers can provide more energy than the traditional PV balancer.
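
For context, the sketch below shows a generic perturb-and-observe routine, one common way per-module MPPT is implemented; it is not the paper's balancer control law. The PV current model is a toy illustrative curve, not a real module characteristic.

```python
# Generic perturb-and-observe MPPT sketch; PV curve and step sizes are placeholders.
def pv_current(v, irradiance=1.0):
    """Toy PV I-V curve: roughly constant current that collapses near open-circuit voltage."""
    v_oc, i_sc = 36.0, 8.0 * irradiance
    return max(0.0, i_sc * (1.0 - (v / v_oc) ** 12))

def perturb_and_observe(v0=20.0, step=0.2, iterations=200, irradiance=1.0):
    v, p_prev, direction = v0, 0.0, +1
    for _ in range(iterations):
        p = v * pv_current(v, irradiance)
        if p < p_prev:              # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
        v += direction * step       # perturb the converter's voltage reference
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"tracked operating point: {v_mpp:.1f} V, {p_mpp:.1f} W")
```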

Keywords: MPPT, partial shading, PV System, converter

Procedia PDF Downloads 283
5541 Predictive Maintenance of Industrial Shredders: Efficient Operation through Real-Time Monitoring Using Statistical Machine Learning

Authors: Federico Pittino, Thomas Arnold

Abstract:

The shredding of waste materials is a key step in the recycling process towards the circular economy. Industrial shredders for waste processing operate under very harsh conditions, leading to the need for frequent maintenance of critical components. Maintenance optimization is particularly important also to increase the machine's efficiency, thereby reducing operational costs. In this work, a monitoring system has been developed and deployed on an industrial shredder located at a waste recycling plant in Austria. The machine has been monitored for one year, and methods for predictive maintenance have been developed for two key components: the cutting knives and the drive belt. The large amount of collected data is leveraged by statistical machine learning techniques, thereby not requiring very detailed knowledge of the machine or its live operating conditions. The results show that, despite the wide range of operating conditions, a reliable estimate of the optimal time for maintenance can be derived. Moreover, the trade-off between the cost of maintenance and the increase in power consumption due to the wear state of the monitored components is investigated. This work proves the benefits of a real-time monitoring system for the efficient operation of industrial shredders.

Keywords: predictive maintenance, circular economy, industrial shredder, cost optimization, statistical machine learning

Procedia PDF Downloads 113
5540 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today's competitive market, there are many alternative products that can be used in a similar manner and for similar purposes. Therefore, the utility of a product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system's capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate needed for the company to realize its production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners seeking to improve system performance by adopting suitable maintenance strategies. The failure and repair time probability distributions of each component in the system should be known for conventional availability analysis; however, companies generally do not have statistics or quality control departments to store such a large amount of data, and real events or situations are defined deterministically instead of using stochastic data for the complete description of real systems. Fuzzy set theory is an alternative used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability using representations of MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. This method is easy to apply in any repairable production system by practitioners working in industry, and it enables reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey. The study focuses on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners in analyzing system qualifications and finding better results for their working conditions; in this way, much more detailed information about system characteristics is obtained.
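
A minimal sketch of this idea is shown below, assuming the conventional steady-state relation A = MTBF / (MTBF + MTTR) and triangular fuzzy numbers built from the 15%, 20%, and 25% spreads. The crisp MTBF and MTTR values are placeholders, not the plant's data.

```python
# Fuzzy availability sketch with triangular fuzzy numbers (TFNs); inputs are placeholders.
def tfn(center, spread):
    """TFN (low, mid, high) from a crisp value and a relative spread, e.g. 0.15 for 15%."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    # A = MTBF / (MTBF + MTTR) increases with MTBF and decreases with MTTR,
    # so the interval endpoints pair up as below (alpha-cut reasoning).
    low  = mtbf[0] / (mtbf[0] + mttr[2])
    mid  = mtbf[1] / (mtbf[1] + mttr[1])
    high = mtbf[2] / (mtbf[2] + mttr[0])
    return (low, mid, high)

for spread in (0.15, 0.20, 0.25):
    mtbf = tfn(120.0, spread)   # hours (placeholder)
    mttr = tfn(4.0, spread)     # hours (placeholder)
    a = fuzzy_availability(mtbf, mttr)
    print(f"spread {int(spread*100)}%: availability ~ ({a[0]:.4f}, {a[1]:.4f}, {a[2]:.4f})")
```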

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 216
5539 Determination of Optimum Fin Wave Angle and Its Effect on the Performance of an Intercooler

Authors: Mahdi Hamzehei, Seyyed Amin Hakim, Nahid Taherian

Abstract:

Fins play an important role in increasing the efficiency of compact shell-and-tube heat exchangers by increasing heat transfer. The objective of this paper is to determine the optimum fin wave angle, as one of the geometric parameters affecting the efficiency of heat exchangers. To this end, the finite volume method is used to model and simulate the flow in the heat exchanger. In this study, computational fluid dynamics simulations of the wavy channel are carried out. The results show that the wave angle affects the outlet temperature of the heat exchanger.

Keywords: fin wave angle, tube, intercooler, optimum, performance

Procedia PDF Downloads 370
5538 Three-Dimensional Fluid-Structure-Thermal Coupling Dynamics Simulation Model of a Gas-Filled Fluid-Resistance Damper and Experimental Verification

Authors: Wenxue Xu

Abstract:

The fluid-resistance damper is an important damping element for attenuating vehicle vibration. It converts vibration energy into dissipated thermal energy through oil throttling, which is a typical fluid-structure-thermal coupling problem. A complete three-dimensional fluid-structure-thermal coupling dynamics simulation model of a gas-filled fluid-resistance damper was established. The flow-condition-based interpolation (FCBI) method with a direct coupling calculation, and the node-based FCBI-C fluid numerical analysis method with an iterative coupling calculation, are used to obtain the dynamic response of the damper piston rod under sinusoidal excitation. The effects of the system parameters (air chamber inflation pressure, spring compression characteristics, constant flow passage cross-sectional area, and oil parameters) and of the excitation parameters (frequency and amplitude) on the differential pressure characteristics, velocity characteristics, flow characteristics, valve-opening dynamic response, floating piston response, and piston-rod output force characteristics are analyzed and compared in detail. Experiments were carried out for some of the simulated conditions. The results show that the node-based FCBI fluid numerical analysis method with direct coupling calculation better guarantees conservation in the flow field calculation and allows a larger calculation step, but requires more memory. If the chamber inflation pressure is too low, cavitation occurs in the damper; increasing the inflation pressure increases the hysteresis of the velocity characteristic and makes the sealing requirements stricter. The spring compression characteristics have a great influence on the damping characteristics of the damper, and achieving a reasonable damping characteristic requires proper design of the spring compression characteristics. The larger the cross-sectional area of the constant flow channel, the smaller the maximum output force, but the more stable the response when the valve plate opens.

Keywords: damper, fluid-structure-thermal coupling, heat generation, heat transfer

Procedia PDF Downloads 135
5537 MIMO PID Controller of a Power Plant Boiler–Turbine Unit

Authors: N. Ben-Mahmoud, M. Elfandi, A. Shallof

Abstract:

This paper presents a methodology for designing multivariable PID controllers for multi-input multi-output (MIMO) systems. The proposed control strategy, which is centralized, is a combination of PID controllers. The proportional gains of the P controllers act as single-loop (SISO) tuning parameters that modify the behavior of the loops almost independently. The design procedure consists of three steps: first, an ideal decoupler including integral action is determined; second, the decoupler is approximated with PID controllers; third, the proportional gains are tuned to achieve the specified performance. The proposed method is applied to representative processes.
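
To make the decoupling step concrete, the numeric sketch below approximates an ideal decoupler by the inverse of a steady-state gain matrix, so that each PID loop sees a nearly independent channel. The 2x2 gain matrix and gains are illustrative placeholders, not the boiler-turbine model used in the paper, and the paper's ideal decoupler also includes integral action, which is omitted here.

```python
# Illustrative static decoupling sketch; the gain matrix and gains are placeholders.
import numpy as np

G0 = np.array([[2.0, 0.5],    # steady-state gains: outputs (e.g. pressure, power)
               [0.8, 1.5]])   # vs. inputs (e.g. fuel flow, governor valve)

D = np.linalg.inv(G0)         # static approximation of the ideal decoupler
print("apparent plant seen by the SISO PID loops:\n", np.round(G0 @ D, 6))  # ~identity

# Each loop can then be tuned as SISO; the proportional gains remain the knobs
# that shape each loop's behavior almost independently.
Kp = np.diag([1.2, 0.9])      # example proportional gains for the two loops
print("decoupled proportional action:\n", np.round(G0 @ D @ Kp, 6))
```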

Keywords: boiler turbine, MIMO, PID controller, control by decoupling, anti wind-up techniques

Procedia PDF Downloads 322
5536 Physical Parameters Influencing the Yield of Nigella Sativa Oil Extracted by Hydraulic Pressing

Authors: Hadjadj Naima, K. Mahdi, D. Belhachat, F. S. Ait Chaouche, A. Ferradji

Abstract:

The yield of Nigella sativa oil extracted by hydraulic pressing is influenced by pressure, temperature, and particle size. The optimization of the oil extraction is investigated. The extraction rate from whole seeds is very low, so crushing the seeds is necessary to facilitate extraction. This rate increases with rising temperature and pressure and with decreasing particle size. The best yield (66%) is obtained for a particle size below 1 mm, a temperature of 50 °C, and a pressure of 120 bar.

Keywords: oil, Nigella sativa, extraction, optimization, temperature, pressure

Procedia PDF Downloads 469
5535 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine

Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy

Abstract:

This paper presents the results of research to verify the simulated combustion process in the Asz62-IR aircraft piston engine. This engine was modernized, and a new type of ignition system was developed for it. Due to the high cost of experiments on a nine-cylinder, 1,000 hp aircraft engine, a simulation technique should be applied; therefore, using computational fluid dynamics to simulate the combustion process is a reasonable solution. Accordingly, tests for various ignition advance angles were carried out, and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine in this research had two spark plugs for each cylinder, and the ignition advance angles had to be set separately for each spark plug. The results of the simulation were verified by comparing the in-cylinder pressure: the indicated pressure curves of the engine mounted on a test stand were compared. The real pressure curve was measured with an optical sensor mounted in a specially drilled hole between the valves; this was the OPTRAND pressure sensor, designed especially for engine combustion research. The indicated pressure was measured in cylinder no. 3. The engine was running at take-off power and was loaded by a propeller on a special test bench. The verification of the CFD simulation results was based on the results of the test bench studies. The simulated pressure curve obtained is within the measurement error of the optical sensor; this error is 1% and reflects the hysteresis and nonlinearity of the sensor. The real indicated pressure measured in the cylinder and the pressure taken from the simulation were compared, and it can be claimed that the verification of the CFD simulations based on the pressure was a success. The next step was to investigate the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process. Shifting the ignition timing between spark plugs 1 and 2 results in longer and uneven burning of the mixture. The optimal point in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but sufficiently separated ignitions ensure that ignition will occur at all engine speeds and loads. This should be confirmed by a bench experiment on the engine. However, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows us to set up the ignition point with two spark plugs to achieve as much power as possible.

Keywords: CFD model, combustion, engine, simulation

Procedia PDF Downloads 356
5534 Overcoming 4-to-1 Decryption Failure of the Rabin Cryptosystem

Authors: Muhammad Rezal Kamel Ariffin, Muhammad Asyraf Asbullah

Abstract:

The square root modulo problem is a known primitive for designing an asymmetric cryptosystem, and it was first attempted by Rabin. The decryption failure of the Rabin cryptosystem caused by its 4-to-1 decryption output is overcome efficiently in this work. The proposed scheme for overcoming the decryption failure issue (known as the AAβ-cryptosystem) is constructed using a simple mathematical structure; it has low computational requirements and would enable communication devices with low computing power to deploy secure communication procedures efficiently.
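
To illustrate the 4-to-1 ambiguity that the AAβ-cryptosystem is designed to remove, the toy sketch below performs textbook Rabin decryption with small primes p ≡ q ≡ 3 (mod 4): the plaintext is squared modulo n = p·q, and all four square roots of the ciphertext are recovered via the Chinese Remainder Theorem. The primes and message are placeholders, and the AAβ construction itself is not reproduced.

```python
# Toy Rabin example showing the 4-to-1 decryption output (requires Python 3.8+).
p, q = 67, 79                      # toy primes, both congruent to 3 mod 4
n = p * q
m = 1234                           # plaintext, m < n
c = pow(m, 2, n)                   # Rabin encryption: c = m^2 mod n

# Square roots modulo each prime (valid because p, q = 3 mod 4)
mp = pow(c, (p + 1) // 4, p)
mq = pow(c, (q + 1) // 4, q)

# Combine with the Chinese Remainder Theorem to get all four roots mod n
yp = pow(p, -1, q)                 # p^{-1} mod q
yq = pow(q, -1, p)                 # q^{-1} mod p
r1 = (yp * p * mq + yq * q * mp) % n
r2 = n - r1
r3 = (yp * p * mq - yq * q * mp) % n
r4 = n - r3

roots = sorted({r1, r2, r3, r4})
print("candidate plaintexts:", roots)      # four values; 1234 is among them
print("recovered original?", m in roots)   # True, but the receiver must pick it out
```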

Keywords: Rabin cryptosystem, 4-to-1 decryption failure, square root modulo problem, integer factorization problem

Procedia PDF Downloads 455
5533 Urban Logistics Dynamics: A User-Centric Approach to Traffic Modelling and Kinetic Parameter Analysis

Authors: Emilienne Lardy, Eric Ballot, Mariam Lafkihi

Abstract:

Efficient urban logistics requires a comprehensive understanding of traffic dynamics, particularly as it pertains to the kinetic parameters influencing energy consumption and trip duration estimates. While real-time traffic information is increasingly accessible, current high-precision forecasting services embedded in route planning often function as opaque 'black boxes' for users. These services, typically relying on AI-processed counting data, fall short in accommodating the open design parameters essential for management studies, notably within supply chain management. This work revisits the modelling of traffic conditions in the context of city logistics, emphasizing its significance from the user's point of view, with two focuses. Firstly, the focus is not on the vehicle flow but on the vehicles themselves and the impact of traffic conditions on their driving behaviour. This means opening the range of studied indicators beyond vehicle speed, to describe extensively the kinetic and dynamic aspects of driving behaviour. To achieve this, we leverage the Art.Kinema parameters, which are designed to characterize driving cycles. Secondly, this study examines how the driving context (i.e., factors exogenous to the traffic flow) determines the mentioned driving behaviour. Specifically, we explore how accurately the kinetic behaviour of a vehicle can be predicted based on a limited set of exogenous factors, such as time, day, road type, orientation, slope, and weather conditions. To answer this question, a statistical analysis was conducted on real-world driving data, which include high-frequency measurements of vehicle speed. A factor analysis and a generalized linear model were established to link the kinetic parameters with independent categorical contextual variables. The results include an assessment of the goodness of fit and the robustness of the models, as well as an overview of the models' outputs.
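
In the spirit of the factor-analysis plus GLM pipeline described above, the sketch below fits a Gaussian generalized linear model linking one kinematic indicator to categorical context factors. The data are synthetic placeholders and the column names (road type, weather, time band) are hypothetical stand-ins for the study's exogenous factors.

```python
# Illustrative GLM sketch on synthetic placeholder driving data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "road_type": rng.choice(["urban", "arterial", "highway"], n),
    "weather": rng.choice(["dry", "rain"], n),
    "hour": rng.integers(0, 24, n),
})
df["time_band"] = df["hour"] // 6                       # coarse time-of-day category
base = df["road_type"].map({"urban": 22.0, "arterial": 40.0, "highway": 85.0})
df["mean_speed_kmh"] = base - 4.0 * (df["weather"] == "rain") + rng.normal(0, 5, n)

model = smf.glm("mean_speed_kmh ~ C(road_type) + C(weather) + C(time_band)",
                data=df, family=sm.families.Gaussian()).fit()
print(model.summary().tables[1])                        # coefficients per context category
print("deviance-based pseudo-R2:", 1 - model.deviance / model.null_deviance)
```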

Keywords: factor analysis, generalised linear model, real world driving data, traffic congestion, urban logistics, vehicle kinematics

Procedia PDF Downloads 58
5532 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories: it determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. Crystallization quality is assessed through the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). The lack of real-time measurement of sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables that are easy to measure online; this has the potential to provide real-time estimates of crystal size for its effective feedback control. Using seven input variables, namely initial crystal size (L0), temperature (T), vacuum pressure (P), feed flow rate (Ff), steam flow rate (Fs), initial supersaturation (S0), and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these seven input variables, 128 datasets were obtained from a two-level factorial experimental design. These datasets were used to obtain a simple but online-implementable six-input crystal size model; the initial crystal size (L0) does not appear to play a significant role. The goodness of fit of the resulting regression model was evaluated: the coefficient of determination (R²) was 0.994, and the maximum absolute relative error (MARE) was 4.6%. The high R² (close to 1.0) and the reasonably low MARE indicate that the model is able to predict sugar crystal size accurately as a function of the six easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the crystallization process in a fed-batch vacuum evaporative crystallizer.
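
A minimal sketch of the model-building and evaluation steps is shown below: a linear regression is fitted on two-level factorial data and R² and the maximum absolute relative error (MARE) are reported. The 128 coded design points are generated programmatically, but the response values and coefficients are synthetic placeholders, not the study's dataset.

```python
# Illustrative regression sketch on a synthetic 2-level factorial design.
import numpy as np
from itertools import product
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# 2-level factorial design in coded units (-1/+1) for 7 inputs -> 128 runs
X = np.array(list(product([-1.0, 1.0], repeat=7)))
names = ["L0", "T", "P", "Ff", "Fs", "S0", "t"]
# Placeholder response: crystal size driven mainly by 6 of the 7 inputs
y = (0.55 + 0.05 * X[:, 1] - 0.03 * X[:, 2] + 0.04 * X[:, 3]
     + 0.02 * X[:, 4] + 0.03 * X[:, 5] + 0.06 * X[:, 6]
     + rng.normal(0, 0.005, X.shape[0]))

model = LinearRegression().fit(X[:, 1:], y)          # drop L0, as in the 6-input model
pred = model.predict(X[:, 1:])
r2 = model.score(X[:, 1:], y)
mare = np.max(np.abs((y - pred) / y)) * 100
print(f"R^2 = {r2:.3f}, MARE = {mare:.1f}%")
```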

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 200
5531 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the k-means algorithm; our goal is to find groups based on similarity within the video. Applying k-means clustering to all the frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied k-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and removes the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change. We then used the vector of filter responses as input to our k-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map, in which similar pixels are grouped. We then computed a cluster score indicating how close the clusters are to each other and plotted a signal of clustering score versus frame number. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint is an indication of the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip and extracted spatiotemporal features with the convolutional 3D network (C3D) using a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence, we used principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), which is then used to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
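
The sketch below illustrates the per-frame step described above: build a Gaussian / Laplacian-of-Gaussian filter-response vector for each pixel, cluster it with k-means, and derive a cluster score from the distances between cluster centers. The frame here is a random placeholder image standing in for a decoded video frame, and the score definition is an assumed compactness-style measure for illustration.

```python
# Illustrative per-frame segmentation sketch on a placeholder frame.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

frame = np.random.default_rng(0).random((120, 160))     # placeholder grayscale frame

# Per-pixel feature vector: [Gaussian response, LoG response]
features = np.stack([
    gaussian_filter(frame, sigma=2.0).ravel(),
    gaussian_laplace(frame, sigma=2.0).ravel(),
], axis=1)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
visual_map = km.labels_.reshape(frame.shape)            # pixels "painted" by cluster id

# Cluster score: mean distance between cluster centers (how near clusters are)
centers = km.cluster_centers_
dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
cluster_score = dists[np.triu_indices(len(centers), k=1)].mean()
print("cluster score for this frame:", round(float(cluster_score), 4))
# Plotting cluster_score over frames and marking RMS-level breakpoints would
# then yield the segment boundaries described in the abstract.
```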

Keywords: video segmentation, action detection, classification, k-means, C3D

Procedia PDF Downloads 66
5530 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, and oil extraction. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process then removes, or 'burns', void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize execution time and memory use, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination for 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data for the academic community.
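
A minimal sketch of the burn idea on a small synthetic domain is given below: void voxels are removed layer by layer (here with binary erosion), each voxel records the pass at which it was burnt, and local maxima of that burn map approximate the medial axis where burnt layers collide. This is an illustration on placeholder data, not the optimized code of the study.

```python
# Illustrative burn-number sketch on a synthetic porous domain.
import numpy as np
from scipy.ndimage import binary_erosion

rng = np.random.default_rng(0)
solid = rng.random((60, 60, 60)) < 0.35        # placeholder domain (True = solid voxel)
void = ~solid
porosity = void.mean()

burn = np.zeros(void.shape, dtype=int)
current = void.copy()
layer = 0
while current.any():                           # peel one layer of void voxels per pass
    layer += 1
    eroded = binary_erosion(current, border_value=0)
    burn[current & ~eroded] = layer            # voxels burnt at this pass
    current = eroded

print(f"porosity = {porosity:.3f}, maximum burn number = {burn.max()}")
# Void voxels whose burn number is a local maximum are candidate medial-axis
# voxels, i.e. places where the burnt layers 'collide'.
```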

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 106
5529 Non Interferometric Quantitative Phase Imaging of Yeast Cells

Authors: P. Praveen Kumar, P. Vimal Prabhu, Renu John

Abstract:

In biology, most microscopy specimens, in particular living cells, are transparent. In cell imaging, it is hard to form an image of a transparent cell whose refractive index differs only slightly from that of the surrounding medium. Various techniques, such as the addition of stains, contrast agents, and markers, have been applied in the past to create contrast, but many of these staining agents or markers are not applicable to live-cell imaging because they are toxic. In this paper, we report theoretical and experimental results from quantitative phase imaging of yeast cells with a commercial bright-field microscope. We reconstruct the phase of the cells non-interferometrically based on the transport of intensity equation (TIE). This technique estimates the axial derivative from through-focus intensity measurements and allows phase imaging using a regular microscope with white-light illumination. We demonstrate nanometric depth sensitivity in imaging live yeast cells using this technique. Experimental results are shown demonstrating the capability of the technique for 3D volume estimation of living cells. This imaging technique would be highly promising for real-time digital pathology, screening of pathogens, and staging of diseases like malaria, as it does not require any pre-processing of the samples.
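
A minimal numerical sketch of the TIE inversion is given below, under a uniform-intensity assumption so that the equation reduces to a Poisson problem solvable with FFTs. The wavelength, defocus step, pixel size, and the two defocused "images" are synthetic placeholders, not the experimental parameters of the paper.

```python
# Illustrative TIE phase retrieval: laplacian(phi) = -(k/I0) * dI/dz, solved with FFTs.
import numpy as np

wavelength = 532e-9                  # m (assumed illumination)
k = 2 * np.pi / wavelength
dz = 2e-6                            # defocus step between the two images (assumed)
pixel = 0.2e-6                       # pixel size in the sample plane (assumed)

ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
# Placeholder defocused intensity pair with a weak blob-like variation
blob = np.exp(-(((xx - nx / 2) ** 2 + (yy - ny / 2) ** 2) / (2 * 30.0 ** 2)))
I_minus = 1.0 - 0.01 * blob
I_plus = 1.0 + 0.01 * blob
I0 = 0.5 * (I_plus + I_minus)

dIdz = (I_plus - I_minus) / (2 * dz)        # finite-difference axial derivative
rhs = -(k / I0.mean()) * dIdz               # right-hand side of the Poisson equation

fx = np.fft.fftfreq(nx, d=pixel) * 2 * np.pi
fy = np.fft.fftfreq(ny, d=pixel) * 2 * np.pi
k2 = fx[None, :] ** 2 + fy[:, None] ** 2
k2[0, 0] = np.inf                           # suppress the undetermined zero frequency
phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-k2)))
print("recovered phase range (rad): %.3f" % (phi.max() - phi.min()))
```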

Keywords: axial derivative, non-interferometric imaging, quantitative phase imaging, transport of intensity equation

Procedia PDF Downloads 377
5528 X-Ray Dynamical Diffraction Rocking Curves in Case of Third Order Nonlinear Renninger Effect

Authors: Minas Balyan

Abstract:

In the third-order nonlinear Takagi equations for monochromatic waves and in the third-order nonlinear time-dependent dynamical diffraction equations for X-ray pulses, the Fourier coefficients of the linear and third-order nonlinear susceptibilities are zero for forbidden reflections. Dynamical diffraction in the nonlinear case is related to the presence in the nonlinear equations of terms proportional to the nonzero zero-order and second-order Fourier coefficients of the third-order nonlinear susceptibility. Thus, in the third-order nonlinear Bragg diffraction case, a nonlinear analogue of the well-known Renninger effect takes place. In this work, this 'third-order nonlinear Renninger effect' is considered theoretically and numerically. If the reflection is exactly forbidden, the diffracted wave's amplitude is zero in both the Laue and Bragg cases, since the boundary conditions and dynamical diffraction equations are compatible with the zero solution. But in real crystals, due to a small percentage of dislocations and other localized defects, the atoms are displaced with respect to their equilibrium positions; thus, in real crystals, the susceptibilities of a forbidden reflection are some orders of magnitude smaller than for usual, non-forbidden reflections, but are not exactly equal to zero. Numerical calculations for susceptibilities two orders of magnitude smaller than for a non-forbidden reflection show that, in the Bragg geometry, the nonlinear reflection curve behaves in the same way as for a non-forbidden reflection, but for the forbidden reflection the rocking curve's width, center, and boundaries are strongly (by about two orders of magnitude) more sensitive to the input intensity value. This provides an opportunity to investigate third-order nonlinear X-ray dynamical diffraction for beams of low intensity, around 0.001 in units of the critical intensity.

Keywords: third order nonlinearity, Bragg diffraction, nonlinear Renninger effect, rocking curves

Procedia PDF Downloads 398