Search results for: e-content producing algorithm
581 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters
Authors: Iveta Bryjova
Abstract:
Recording of viscoelastic strain-vs-time curves with the aid of the suction method, and a follow-up analysis resulting in the evaluation of standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma) and reconstructive surgery. To eliminate the contribution of skin thickness, the usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to the relative ones (i.e. those expressed as a ratio of two dimensional parameters), like gross elasticity, net elasticity, biological elasticity or Qu's area parameters, in the literature and in practice conventionally referred to as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of parameters R2 and Q1, the remaining ones depend substantially on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental fact that the inflection time comes about 0.1 s after the suction switch-on/off, which undermines the credibility of the parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method of improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky–Golay filtering is presented. The method allows computation of the derivatives of the smoothed strain-vs-time curve, a more exact location of the inflection point and, consequently, more reliable values of the aforementioned viscoelastic parameters. The improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study under the new method and comparing its results with those provided by the methods used so far.
Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity
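As an illustration of the inflection-detection step described above, here is a minimal sketch (not the authors' implementation), assuming uniformly sampled data and illustrative filter settings: the strain curve is smoothed with a Savitzky–Golay filter, the filter's second derivative is computed directly, and the inflection is located at its first sign change.

```python
import numpy as np
from scipy.signal import savgol_filter

def find_inflection(t, strain, window=21, polyorder=3):
    """Locate the inflection point of a strain-vs-time curve from the
    sign change of the Savitzky-Golay-smoothed second derivative."""
    dt = t[1] - t[0]  # assumes uniform sampling
    d2 = savgol_filter(strain, window, polyorder, deriv=2, delta=dt)
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
    if crossings.size == 0:
        return None
    i = crossings[0]
    return t[i], strain[i]
```

The window length and polynomial order control the bias-variance trade-off of the smoothing and would have to be tuned to the device's sampling rate.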
Procedia PDF Downloads 304
580 Photoluminescence of Barium and Lithium Silicate Glasses and Glass Ceramics Doped with Rare Earth Ions
Authors: Augustas Vaitkevicius, Mikhail Korjik, Eugene Tretyak, Ekaterina Trusova, Gintautas Tamulaitis
Abstract:
Silicate materials are widely used as luminescent materials in both the amorphous and crystalline phases. Lithium silicate glass is popular for making neutron-sensitive scintillation glasses. Cerium-doped single crystalline silicates of rare earth elements and yttrium have been demonstrated to be good scintillation materials. Due to their high thermal and photo-stability, silicate glass ceramics are expected to be suitable materials for producing light converters for high-power white light emitting diodes. In this report, the influence of glass composition and crystallization on the photoluminescence (PL) of different silicate glasses was studied. Barium (BaO-2SiO₂) and lithium (Li₂O-2SiO₂) glasses were under study. Cerium, dysprosium, erbium and europium ions, as well as their combinations, were used for doping. The influence of crystallization was studied after transforming the doped glasses into glass ceramics by heat treatment in the temperature range of 550-850 degrees Celsius for 1 hour. The study was carried out by comparing the PL spectra, the spatial distributions of the PL parameters and the quantum efficiency of the samples under study. The PL spectra and the spatial distributions of their parameters were obtained using confocal PL microscopy. A WITec Alpha300 S confocal microscope coupled with an air-cooled CCD camera was used. A CW laser diode emitting at 405 nm was exploited for excitation. The spatial resolution was in the sub-micrometer domain in plane and ~1 micrometer perpendicular to the sample surface. An integrating sphere with a xenon lamp coupled with a monochromator was used to measure the external quantum efficiency. All measurements were performed at room temperature. The chromatic properties of the light emission from the glasses and glass ceramics have been evaluated. We observed that the quantum efficiency of the glass ceramics is higher than that of the corresponding glass. The investigation of the spatial distributions of the PL parameters revealed that heat treatment of the glasses leads to a decrease in sample homogeneity. In the case of BaO-2SiO₂:Eu, 10-micrometer-long needle-like objects are formed when transforming the glass into glass ceramics. The comparison of PL spectra from within and outside the needle-like structure reveals that the ratio between the intensities of the PL bands associated with Eu²⁺ and Eu³⁺ ions is larger in the bright needle-like structures. This indicates a higher degree of crystallinity in the needle-like objects. We observed that the spectral positions of the PL bands are the same in the background and the needle-like areas, indicating that heat treatment imposes no significant change on the valence state of the europium ions. The evaluation of the chromatic properties confirms the applicability of the glasses under study for the fabrication of white light sources with high thermal stability. The ability to combine barium and lithium glass matrices and doping by Eu, Ce, Dy, and Tb enables optimization of the chromatic properties.
Keywords: glass ceramics, luminescence, phosphor, silicate
Procedia PDF Downloads 317
579 Analysing Trends in Rice Cropping Intensity and Seasonality across the Philippines Using 14 Years of Moderate Resolution Remote Sensing Imagery
Authors: Bhogendra Mishra, Andy Nelson, Mirco Boschetti, Lorenzo Busetto, Alice Laborte
Abstract:
Rice is grown on over 100 million hectares in almost every country of Asia. It is the most important staple crop for food security and has high economic and cultural importance in Asian societies. The combination of genetic diversity and management options, coupled with the large geographic extent, means that there is large variation in seasonality (when it is grown) and cropping intensity (how often it is grown per year on the same plot of land), even over relatively small distances. Seasonality and intensity can and do change over time depending on climatic, environmental and economic factors. Detecting where and when these changes happen can provide information to better understand trends in regional and even global rice production. Remote sensing offers a unique opportunity to estimate these trends. We apply the recently published PhenoRice algorithm to 14 years of moderate resolution remote sensing (MODIS) data (utilizing 250 m resolution 16-day composites from Terra and Aqua) to estimate seasonality and cropping intensity per year and their changes over time. We compare the results to survey data collected by the International Rice Research Institute (IRRI). The study results in a unique and validated dataset on the extent, seasonality and cropping intensity of rice, and the changes in each, between 2003 and 2016 for the Philippines. Observed trends and their implications for food security and trade policies are also discussed.
Keywords: rice, cropping intensity, moderate resolution remote sensing (MODIS), phenology, seasonality
Procedia PDF Downloads 311
578 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures
Authors: Milad Abbasi
Abstract:
Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Since composite mechanical properties cannot be predicted directly by theoretical methods, various studies have been conducted in the literature for accurate simulation of the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing high-cost experimental efforts. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably condensed the numerical and experimental efforts in the simulation and calibration of CFRP composite tubes subjected to compressive loading.
Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network
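As a hedged illustration of the kind of model described (the paper's exact network architecture and input features are not given here), a multi-output neural network regression can be sketched with scikit-learn; the feature and target arrays below are placeholders, not the experimental data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: hypothetical per-specimen features (e.g., wall thickness, ply count,
# fiber volume fraction); y: measured [peak load, absorbed energy].
rng = np.random.default_rng(0)
X = rng.random((60, 3))   # placeholder for experimental inputs
y = rng.random((60, 2))   # placeholder for experimental outputs

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:1]))   # predicted [peak load, energy] for one tube
```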
Procedia PDF Downloads 154
577 Suppressing Vibration in a Three-axis Flexible Satellite: An Approach with Composite Control
Authors: Jalal Eddine Benmansour, Khouane Boulanoir, Nacera Bekhadda, Elhassen Benfriha
Abstract:
This paper introduces a novel composite control approach that addresses the challenge of stabilizing the three-axis attitude of a flexible satellite in the presence of vibrations caused by flexible appendages. The key contribution of this research lies in the development of a disturbance observer, which effectively observes and estimates the unwanted torques induced by the vibrations. By utilizing the estimated disturbance, the proposed approach enables efficient compensation for the detrimental effects of vibrations on the satellite system. To govern the attitude angles of the spacecraft, a proportional-derivative (PD) controller is specifically designed and proposed. The PD controller ensures precise control over all attitude angles, facilitating stable and accurate spacecraft maneuvering. To demonstrate the global stability of the system, the Lyapunov method, a well-established technique in control theory, is employed. Through rigorous analysis, the Lyapunov method verifies the convergence of the system dynamics, providing strong evidence of system stability. To evaluate the performance and efficacy of the proposed control algorithm, extensive simulations are conducted. The simulation results validate the effectiveness of the combined approach, showcasing significant improvements in the stabilization and control of the satellite's attitude, even in the presence of disruptive vibrations from flexible appendages. The composite control approach presented in this paper contributes to the advancement of satellite attitude control techniques, offering a promising solution for achieving enhanced stability and precision in challenging operational environments.
Keywords: attitude control, flexible satellite, vibration control, disturbance observer
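A single-axis, rigid-body sketch of the composite law, with hypothetical gains and a simple first-order disturbance observer (the paper's observer design and Lyapunov analysis are not reproduced here), could look like this:

```python
def pd_with_observer(theta, theta_ref, omega, d_hat, kp=8.0, kd=4.0):
    # PD term stabilizes the attitude; d_hat cancels the estimated
    # flexible-appendage disturbance torque.
    return -kp * (theta - theta_ref) - kd * omega - d_hat

def observer_update(d_hat, u, omega, omega_prev, J, dt, gain=5.0):
    # First-order disturbance observer: the residual between the measured
    # angular acceleration and the modeled rigid-body response J*omega_dot
    # = u + d is attributed to the unknown disturbance torque.
    omega_dot = (omega - omega_prev) / dt
    residual = J * omega_dot - (u + d_hat)
    return d_hat + gain * residual * dt
```

The observer gain trades estimation speed against noise amplification; the full three-axis design couples the axes through the inertia matrix.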
Procedia PDF Downloads 87
576 Solutions for Food-Safe 3D Printing
Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov
Abstract:
Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth and replaced the use of conventional technology from prototyping to producing end-user parts and products. 3D printing technology involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can have direct contact with food without any toxic effects, even after cleaning, storing, and reusing the object. This work analyzes the processing timeline of the filament (the material for 3D printing) from unboxing to extrusion through the nozzle. It is an important task to analyze the growth of bacteria on the 3D-printed surface and in the gaps between the layers. By default, a 3D-printed object is not food safe after longer usage and direct contact with food (even when food-safe filaments are used), but there are solutions for this problem. The aim of this work was to evaluate 3D-printed objects from different perspectives of food safety. The first was testing antimicrobial 3D printing filaments from a food safety aspect, since a 3D-printed object in the food industry may have direct contact with food; the main purpose is therefore to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was investigated, too, to see its effect on mechanical strength, thermal resistance, surface smoothness and food safety (cleanability). Another aim of this study was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials, to see if they can be cleaned with boiling or a similar high-temperature treatment. This work proved that all three mentioned methods can improve the food safety of the 3D-printed object, but the size of this effect varies. The best result was obtained by coating with epoxy resin: the object became cleanable like any other injection-molded plastic object with a smooth surface. Very good results were also obtained by boiling the objects, and it is encouraging that nowadays more and more special filaments have a food-safe certificate and can withstand boiling temperatures too. Using antibacterial filaments reduced bacterial colonies to one fifth, but the biggest advantage of this method is that it does not require any post-processing; the object is ready straight out of the 3D printer. Acknowledgements: The research was supported by the Hungarian and Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the Hungarian University of Agriculture and Life Sciences' Doctoral School of Food Science for the support in this study.
Keywords: food safety, 3D printing, filaments, microbial, temperature
Procedia PDF Downloads 143
575 Post Harvest Fungi Diversity and Level of Aflatoxin Contamination in Stored Maize: Cases of Kitui, Nakuru and Trans-Nzoia Counties in Kenya
Authors: Gachara Grace, Kebira Anthony, Harvey Jagger, Wainaina James
Abstract:
Aflatoxin contamination of maize in Africa poses a major threat to food security and the health of many African people. In Kenya, aflatoxin contamination of maize is high due to environmental, agricultural and socio-economic factors. Many studies have been conducted to understand the scope of the problem, especially at the pre-harvest level. This research was carried out to gather scientific information on the fungal population, diversity and aflatoxin levels during the post-harvest period. The study was conducted in three geographical locations: Kitui, Kitale and Nakuru. Samples were collected from farmers' storage structures and transported to the Biosciences eastern and central Africa (BecA) - International Livestock Research Institute (ILRI) Hub laboratories. Mycoflora was recovered using the direct plating method. A total of five fungal genera (Aspergillus, Penicillium, Fusarium, Rhizopus and Byssochlamys spp.) were isolated from the stored maize samples. The most common fungal species isolated from the three study sites were A. flavus at 82.03%, followed by A. niger and F. solani at 49% and 26%, respectively. The aflatoxin-producing fungus A. flavus was recovered in 82.03% of the samples. Aflatoxin levels were analysed both in the maize samples and in vitro. Most of the A. flavus isolates recorded high levels of aflatoxin when analysed for the presence of aflatoxin B1 using ELISA. In Kitui, all the samples (100%) had aflatoxin levels above 10 ppb, with a total aflatoxin mean of 219.2 ppb. In Kitale, only 3 samples (n=39) had aflatoxin levels below 10 ppb, while in Nakuru, the total aflatoxin mean level was 239.7 ppb. When individual samples were analysed using the Vicam fluorometer method, aflatoxin analysis revealed that most of the samples (58.4%) had been contaminated. The means were significantly different (p < 0.05) across the three locations. The genetic relationships of the A. flavus isolates were determined using 13 Simple Sequence Repeat (SSR) markers. The results were used to generate a phylogenetic tree using the DARwin5 software program. A total of 5 distinct clusters were revealed among the genotypes, and the isolates appeared to cluster separately according to geographical location. Principal Coordinates Analysis (PCoA) of the genetic distances among the 91 A. flavus isolates explained over 50.3% of the total variation when two coordinates were used to cluster the isolates. Analysis of Molecular Variance (AMOVA) showed high variation of 87% within populations and 13% among populations. This research has shown that A. flavus is the main fungal species infecting maize grains in Kenya. The influence of aflatoxins on human populations in Kenya demonstrates a clear need for tools to manage contamination of locally produced maize. Food basket surveys for aflatoxin contamination should be conducted on a regular basis; this would assist in obtaining reliable data on aflatoxin incidence in different food crops and would go a long way in defining control strategies for this menace.
Keywords: aflatoxin, Aspergillus flavus, genotyping, Kenya
Procedia PDF Downloads 278
574 Harmonic Assessment and Mitigation in Medical Diagnosis Equipment
Authors: S. S. Adamu, H. S. Muhammad, D. S. Shuaibu
Abstract:
Poor power quality in electrical power systems can cause medical equipment at healthcare centres to malfunction and produce wrong medical diagnoses. Equipment such as X-ray machines, computerized axial tomography scanners, etc., can pollute the system due to their high level of harmonics production, which may cause a number of undesirable effects like heating, equipment damage and electromagnetic interference. The conventional approach to mitigation uses passive inductor/capacitor (LC) filters, which have some drawbacks such as large size, resonance problems and fixed compensation behaviour. Current solutions generally employ active power filters using suitable control algorithms. This work focuses on assessing the level of Total Harmonic Distortion (THD) in medical facilities and various ways of mitigating it, using the radiology unit of an existing hospital as a case study. The measurement of the harmonics is conducted with a power quality analyzer at the point of common coupling (PCC). The levels of measured THD are found to be higher than the IEEE 519-1992 standard limits. The system is then modelled as a harmonic current source using MATLAB/SIMULINK. To mitigate the unwanted harmonic currents, a shunt active filter is developed using a synchronous detection algorithm to extract the fundamental component of the source currents. A fuzzy logic controller is then developed to control the filter. The THD values without the active power filter are validated against the measured values. The THD values with the developed filter show that the harmonics are now within the recommended limits.
Keywords: power quality, total harmonics distortion, shunt active filters, fuzzy logic
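For reference, the THD the analyzer reports at the PCC can be computed from a sampled waveform as the RMS of the harmonics relative to the fundamental. A minimal bin-picking FFT sketch (illustrative only; it assumes the record length resolves each harmonic exactly):

```python
import numpy as np

def total_harmonic_distortion(signal, fs, f0=50.0, n_harmonics=20):
    """THD as the ratio of the RMS of harmonics 2..N to the fundamental."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    def mag_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    fundamental = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / fundamental

fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
i_load = np.sin(2*np.pi*50*t) + 0.2*np.sin(2*np.pi*150*t)  # 20% 3rd harmonic
print(total_harmonic_distortion(i_load, fs))               # ~0.2
```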
Procedia PDF Downloads 479
573 Research on Level Adjusting Mechanism System of Large Space Environment Simulator
Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng
Abstract:
A space environment simulator is a device for spacecraft testing. The KM8 large space environment simulator built in Tianjin Space City is the largest as well as the most advanced space environment simulator in China. A large deviation of the spacecraft level will lead to abnormal operation of the thermal control devices in the spacecraft during the thermal vacuum test. In order to avoid thermal vacuum test failure, a level adjusting mechanism system was developed for the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model is established. By collecting data from level instruments and displacement sensors, stepping motors controlled by a PLC drive the four supporting legs in simultaneous movement. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which work for long periods in the cold, dark vacuum environment of the KM8 large space environment simulator during thermal vacuum tests. Based on the above methods, the data acquisition and processing, analysis and calculation, real-time adjustment and fault alarming of the level adjusting mechanism system are implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Debugging showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of the new generation of spacecraft. The performance and technical indicators of the level adjusting mechanism system, which provides important support for the development of spacecraft in China, are ahead of similar equipment elsewhere in the world.
Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism
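As a sketch of the kind of temperature loop described (gains and setpoints here are illustrative, not the KM8 values), a textbook discrete PID in Python:

```python
class PID:
    """Discrete PID controller; used here only to illustrate the kind of
    heater loop that keeps the legs and level instruments at temperature."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

heater = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)        # hypothetical gains
power = heater.update(setpoint=20.0, measurement=-150.0)  # deep-cold start
```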
Procedia PDF Downloads 248
572 One Step Further: Pull-Process-Push Data Processing
Authors: Romeo Botes, Imelda Smit
Abstract:
In today's age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These devices make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format that needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data. The data is not subject to the consistency and redundancy measures that most other data usually is. Most importantly for users, the data must be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs to accomplish this, but sometimes neglect the effect some of these techniques may have on database performance. One of the techniques generally used is to pull data from the database server, process it and push it back to the database server in one single step. Since the processing of the data usually takes some time, it keeps the database busy and locked for the period that the processing takes place, which decreases the overall performance of the database server and therefore of the system. This paper follows on from a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on the performance of the CPU, storage and processing time.
Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list
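The three-step technique can be sketched as follows; the sqlite3 database, the table and the decoding logic are illustrative stand-ins for the GPRS/GPS decoding described above. The point is that the database is only locked during the short pull and push transactions, not during processing:

```python
import sqlite3

def pull(conn):
    # Step 1: read the raw rows into an in-memory list, then release the table.
    with conn:
        return conn.execute("SELECT id, raw FROM readings").fetchall()

def process(rows):
    # Step 2: decode/transform entirely in memory; the database stays free.
    return [(decode(raw), row_id) for row_id, raw in rows]

def push(conn, processed):
    # Step 3: write the results back in one short transaction.
    with conn:
        conn.executemany("UPDATE readings SET value = ? WHERE id = ?", processed)

def decode(raw):
    return raw.strip().upper()   # placeholder for protocol-specific decoding

conn = sqlite3.connect("telemetry.db")   # hypothetical database
push(conn, process(pull(conn)))
```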
Procedia PDF Downloads 245
571 Decision Support System Based On GIS and MCDM to Identify Land Suitability for Agriculture
Authors: Abdelkader Mendas
Abstract:
The integration of MultiCriteria Decision Making (MCDM) approaches into a Geographical Information System (GIS) provides a powerful spatial decision support system, which offers the opportunity to efficiently produce land suitability maps for agriculture. Indeed, GIS is a powerful tool for analyzing spatial data and establishing a process for decision support. Because of their spatial aggregation functions, MCDM methods can facilitate decision making in situations where several solutions are available, various criteria have to be taken into account and decision-makers are in conflict. The parameters and the classification system used in this work are inspired by the FAO (Food and Agriculture Organization) approach dedicated to sustainable agriculture. A spatial decision support system has been developed for establishing the land suitability map for agriculture. It incorporates the multicriteria analysis method ELECTRE Tri (ELimination Et Choix Traduisant la REalité) into a GIS software environment. The main purpose of this research is to propose a conceptual and methodological framework for the combination of GIS and multicriteria methods in a single coherent system that takes into account the whole process, from the acquisition of spatially referenced data to decision-making. In this context, a spatial decision support system for developing land suitability maps for agriculture has been developed: the algorithm of ELECTRE Tri is incorporated into a GIS environment and added to the other analysis functions of the GIS. This approach has been tested on an area in Algeria, and a land suitability map for durum wheat has been produced. The obtained results show that the ELECTRE Tri method, integrated into a GIS, is well suited to the problem of land suitability for agriculture; the coherence of the obtained maps confirms the effectiveness of the system.
Keywords: multicriteria decision analysis, decision support system, geographical information system, land suitability for agriculture
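A much-simplified sketch of the ELECTRE Tri assignment idea (crisp concordance only, with no discordance or veto thresholds, and with invented criteria weights and category profiles) is given below; it is not the full method used in the paper:

```python
import numpy as np

def concordance(a, profile, weights):
    """Fraction of criterion weight supporting 'a outranks profile'
    (simplified crisp concordance; all criteria assumed to be maximized)."""
    support = (a - profile) >= 0
    return np.sum(weights * support) / np.sum(weights)

def electre_tri_pessimistic(a, profiles, weights, lam=0.75):
    """Assign a land unit 'a' to a suitability class by comparing it
    against the boundary profiles from best to worst."""
    for k in range(len(profiles) - 1, -1, -1):
        if concordance(a, profiles[k], weights) >= lam:
            return k + 1   # class above boundary profile k
    return 0               # least suitable class

weights = np.array([0.4, 0.3, 0.3])            # e.g., soil, slope, rainfall
profiles = np.array([[3, 3, 3], [6, 6, 6]])    # boundaries between 3 classes
print(electre_tri_pessimistic(np.array([5, 7, 6]), profiles, weights))
```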
Procedia PDF Downloads 642
570 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory
Authors: Hiba El Assibi
Abstract:
This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with alpha-beta pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project will explore additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and could potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights into performance and computational efficiency. We also discuss the scalability of our approach to the game, considering different board sizes (number of pits and stones) and rules (different variations), and study how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory
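The core search procedure is standard minimax with alpha-beta pruning; the sketch below demonstrates it on a toy Nim-like game, since a real solver would substitute Kalah's sowing, capture and extra-turn rules (extra turns break the strict alternation assumed here):

```python
import math

# Toy stand-in for the Kalah rules: players alternately take 1-3 stones
# from a pile; whoever faces an empty pile has lost.
def legal_moves(state):
    pile, _ = state
    return [m for m in (1, 2, 3) if m <= pile]

def apply_move(state, move):
    pile, player = state
    return (pile - move, 1 - player)

def is_terminal(state):
    return state[0] == 0

def evaluate(state):
    # From player 0's viewpoint: the player to move at an empty pile lost.
    return -1.0 if state[1] == 0 else 1.0

def alphabeta(state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    best = -math.inf if maximizing else math.inf
    for move in legal_moves(state):
        value = alphabeta(apply_move(state, move), depth - 1,
                          alpha, beta, not maximizing)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:
            break   # pruning: the opponent will never allow this branch
    return best

print(alphabeta((10, 0), depth=12))   # 1.0: the first player wins
```

Transposition tables, as named in the keywords, would cache values of previously searched positions to avoid re-expanding them.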
Procedia PDF Downloads 55
569 A Context Aware Mobile Learning System with a Cognitive Recommendation Engine
Authors: Jalal Maqbool, Gyu Myoung Lee
Abstract:
Using smart devices for context-aware mobile learning is becoming increasingly popular. This has led to mobile learning technology becoming an indispensable part of today's learning environments and platforms. However, some fundamental issues remain: namely, mobile learning still lacks the ability to truly understand human reaction and user behaviour. This is because current mobile learning systems are passive and not aware of learners' changing contextual situations; they rely on static information about mobile learners. In addition, current mobile learning platforms lack the capability to incorporate dynamic contextual situations into learners' preferences. Thus, this thesis aims to address these issues by designing a context-aware framework which is able to sense a learner's contextual situation, handle data dynamically, and use contextual information to suggest bespoke learning content according to the learner's preferences. This is to be underpinned by a robust recommendation system capable of performing these functions, thus providing learners with a truly context-aware mobile learning experience, delivering learning content via smart devices and adapting to learning preferences as and when required. In addition, the design of the recommendation engine's algorithm has to be based on learner and application needs, personal characteristics and circumstances, as well as the ability to comprehend human cognitive processes, which would enable the technology to interact effectively and deliver mobile learning content that is relevant to the learner's contextual situation. The concept of this proposed project is to provide a new method of smart learning, based on a capable recommendation engine, for an intuitive mobile learning model driven by learner actions.
Keywords: aware, context, learning, mobile
Procedia PDF Downloads 245
568 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
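The parametric study can be pictured as a grid sweep over the two gains, keeping the (k1, k2) pairs whose simulated miss distance and impact angle error fall within tolerances; the simulator stub and tolerances below are invented placeholders, not the paper's 3DOF model:

```python
import numpy as np

def sweep_gains(k1_range, k2_range, simulate):
    """Grid sweep over the two vector-guidance gains; 'simulate' is a
    hypothetical trajectory run returning (miss_distance, impact_angle_error)."""
    feasible = []
    for k1 in k1_range:
        for k2 in k2_range:
            miss, angle_err = simulate(k1, k2)
            if miss < 1.0 and abs(angle_err) < 0.5:   # assumed tolerances
                feasible.append((k1, k2, miss, angle_err))
    return feasible

# Dummy stub echoing the paper's finding that the two gains trade off
# along a straight line; a real study would call the 3DOF simulation here.
region = sweep_gains(np.linspace(1, 10, 50), np.linspace(1, 10, 50),
                     simulate=lambda k1, k2: (abs(k1 - 2 * k2), k1 - 2 * k2))
print(len(region), "feasible gain pairs")
```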
Procedia PDF Downloads 429
567 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate
Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori
Abstract:
Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, also focus on producing products with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert the H₂S gas coming from the upstream units to elemental sulfur and to minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, comprising a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually include a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to optimum recovery of sulfur during flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feed-back control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for different reasons, like the low importance given to environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of a malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This increase in temperature caused the failure of the TGT and increased the concentration of SO₂ from 750 ppm to 35,000 ppm. In addition, the lack of a control system for the adjustment of the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly due to difficulty of measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is controlled along with the secondary variable. This strategy for controlling a process system is referred to as "inferential control" and is considered in this paper. A sensitivity analysis was therefore performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation led to maximizing the sulfur recovery in the Claus section.
Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission
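In its simplest proportional form, the inferential scheme reduces to trimming the combustion-air setpoint from the first Claus reactor outlet temperature, which stands in for the failed tail-gas analyzer; all numbers below are illustrative, not plant values:

```python
def inferential_air_trim(T_reactor, T_target=620.0, gain=0.02, air_sp=100.0):
    """Proportional trim of the combustion-air setpoint using the first
    Claus reactor outlet temperature as the inferred (secondary) variable.
    T_target, gain and air_sp are hypothetical illustration values."""
    error = T_target - T_reactor
    return air_sp + gain * error   # new air flow setpoint

print(inferential_air_trim(T_reactor=650.0))  # reactor hot -> cut air
```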
Procedia PDF Downloads 76
566 Development of a Geomechanical Risk Assessment Model for Underground Openings
Authors: Ali Mortazavi
Abstract:
The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed within the underground mining industry. Controlling the geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of in-situ rock masses and the complicated boundary conditions and operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major component of all underground mine design and essentially dominates the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the in-situ conditions of the rock mass. The focus of this research is on developing a methodology that enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., in-situ rock properties, design method, governing boundary conditions such as in-situ stress and groundwater, etc.).
Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering
Procedia PDF Downloads 147
565 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks
Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin
Abstract:
Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques used for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice radiologists consider multiple views of mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes mammographic images from three views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of paying attention to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted using the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the approach offers a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help alleviate the burden on radiologists and serve as an additional layer of verification.
Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network
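A toy version of the tri-view idea, with a single shared encoder and illustrative layer sizes (not the authors' architecture), can be sketched in PyTorch:

```python
import torch
import torch.nn as nn

class TriViewSiamese(nn.Module):
    """Toy tri-view model: one weight-shared CNN encoder embeds the target
    view, the ipsilateral view and the contralateral view; the embeddings
    are concatenated for a malignancy score. Sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 3, 1)

    def forward(self, target, ipsilateral, contralateral):
        z = [self.encoder(v) for v in (target, ipsilateral, contralateral)]
        return torch.sigmoid(self.head(torch.cat(z, dim=1)))

model = TriViewSiamese()
views = [torch.randn(2, 1, 128, 128) for _ in range(3)]  # dummy batch
print(model(*views).shape)   # (2, 1) malignancy scores
```

Weight sharing is the Siamese element: all three views pass through the same encoder, so the comparison between them is made in a common feature space.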
Procedia PDF Downloads 139
564 Vibration Analysis of Stepped Nanoarches with Defects
Authors: Jaan Lellep, Shahid Mubasshar
Abstract:
A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modelled by Eringen's non-local theory of elasticity. Physical and thermal properties are sensitive to changes of dimensions at the nano level, and the classical theory of elasticity is unable to describe such changes in material properties, because the molecular structure of matter was not taken into account during its development. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, the stress state of the body at a given point is assumed to depend on the stress state at every point of the structure, whereas within the classical theory of elasticity the stress state of the body depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations with boundary and intermediate conditions. The system of equations is solved using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations whose nontrivial solution exists only if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is prescribed with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and presented graphically.
Keywords: crack, nanoarches, natural frequency, step
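Numerically, the eigenfrequencies are the roots of the characteristic determinant. A generic sketch (with a stand-in 2x2 matrix, since the real matrix is assembled from the boundary and intermediate crack/step conditions) scans for sign changes and refines each root:

```python
import numpy as np
from scipy.optimize import brentq

def char_det(omega):
    """Determinant of the coefficient matrix at a trial frequency; this
    toy matrix (det = sin(w)cosh(w) - cos(w)sinh(w)) stands in for the
    matrix assembled from the actual boundary/intermediate conditions."""
    M = np.array([[np.sin(omega), np.sinh(omega)],
                  [np.cos(omega), np.cosh(omega)]])
    return np.linalg.det(M)

# Scan a frequency grid for sign changes of the determinant, then refine:
grid = np.linspace(0.1, 15.0, 2000)
vals = [char_det(w) for w in grid]
roots = [brentq(char_det, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
print(roots[:3])   # first few eigenfrequencies of the toy system
```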
Procedia PDF Downloads 129
563 Fatigue Life Prediction under Variable Loading Based on a Non-Linear Energy Model
Authors: Aid Abdelkrim
Abstract:
A method of fatigue damage accumulation based on the application of energy parameters of the fatigue process is proposed in this paper. The model is simple to use: it has no parameters to be determined and requires only knowledge of the W–N curve (W: strain energy density, N: number of cycles at failure), determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear models in the estimation of fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy was studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, especially for the case of random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows a better fatigue damage prediction than the widely used Palmgren–Miner rule, and that a formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner's model). The comparison shows that the proposed model presents a good estimation of the experimental results, and the error is minimized in comparison with Miner's model.
Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading
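For reference, the baseline against which the model is compared is the Palmgren–Miner linear rule; in the W–N notation used above, an energy-based counterpart replaces each cycle ratio by the life read off the W–N curve (a schematic form only; the paper's nonlinear interaction rule is not reproduced here):

```latex
% Palmgren--Miner linear rule: n_i cycles applied at a level with life N_i
D = \sum_{i=1}^{k} \frac{n_i}{N_i}, \qquad \text{failure predicted at } D \ge 1 .

% Schematic energy-based counterpart, with the W--N curve written W = f(N),
% so that the life at strain energy density W_i is N_i = f^{-1}(W_i):
D_W = \sum_{i=1}^{k} \frac{n_i}{f^{-1}(W_i)} .
```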
Procedia PDF Downloads 396
562 Peril's Environment of Energetic Infrastructure Complex System, Modelling by the Crisis Situation Algorithms
Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.
Abstract:
The investigation and modelling of crisis situations are introduced within the complex system of energetic critical infrastructure operating in peril environments. Every crisis situation and peril originates in the occurrence of an emergency/crisis event and requires the assessment of critical/crisis interfaces. An emergency event can be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities for coping with it, or it may be unexpected, without a pre-prepared scenario; both, however, need operational coping by means of crisis management. The operation, forms, characteristics, behaviour and utilization of crisis management have various qualities, depending on the real perils of the critical infrastructure organization and on prevention and training processes. The aim is always better security and continuity of the organization, the successful attainment of which requires finding and investigating the critical/crisis zones and functions in models of critical infrastructure organizations operating in the pertinent peril environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is at our disposal for this. Here, it is necessary to derive and create an identification algorithm for critical/crisis interfaces. The locations of the critical/crisis interfaces are the flags of a crisis situation in models of critical infrastructure organizations. The crisis situation model is then demonstrated on a real Czech energetic critical infrastructure organization in a real peril environment. These efficient measures are necessary for the protection of the infrastructure. They are derived for peril mitigation, for coping with crisis situations, and for environmentally friendly organizational survival, continuity and advanced possibilities of sustainable development.
Keywords: algorithms, energetic infrastructure complex system, modelling, peril's environment
Procedia PDF Downloads 403
561 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding the driving of autonomous vehicles, through the exploitation of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and in a timely manner. IoT systems are predominantly dependent on the cloud environment for storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for certain applications. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network live for only a short span, detaching and joining frequently. When an end-user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access/manage the data stored in the fog devices, since a conventional, static way of managing data does not work in fog networks. The proposed solution discusses a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog node through the application of reinforcement learning, so that access to the data is determined dynamically based on the requests.
Keywords: IoT, fog networks, data stewardship, dynamic access policy
Procedia PDF Downloads 60
560 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs
Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli
Abstract:
The link between man and food plays a central role in the social and economic system, where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, and fashion. This multi-dimensionality has many implications for the food economy. In recent years, consumers have become more conscious about their food choices, leading to a consistent change in consumption models. This change concerns several aspects: awareness of food system issues, socially and environmentally conscious decision-making, and food choices based on characteristics other than nutritional ones, i.e., the origin of food, how it is produced, and who produces it. In this frame, 'consumption choices' and 'the interests of the citizen' become part of one another, and the figure of the 'Citizen Consumer' is born: a responsible, ethically motivated individual willing to change his lifestyle to achieve the goal of sustainable consumption. At the same time, branding, which was previously a guarantee of product quality, is now being questioned. In order to meet these needs, agri-food companies are developing specific product lines that follow two main philosophies: 'back to basics' and 'less is more'. However, the issue of ethical behavior does not yet seem to be adequately addressed by the market offer, most likely due to a lack of attention to the communication strategy used, which is very often based on market logic and rarely on an ethical one. The label, in its classic concept of 'clean labeling', can no longer be the only instrument through which product information is conveyed, and its evolution towards a concept of 'clear label' is necessary to embrace ethical and transparent concepts and advance the process of democratization of the food system. The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things or Industry 4.0, would enable the agri-food supply chain to collect data that, if properly treated, could satisfy the information needs of consumers. A change of approach to agri-food traceability is therefore proposed: no longer a tool used to respond to the legislator, but rather a promotional tool useful for presenting the company transparently and thus reaching the market segment of food citizens. The use of mobile technology can facilitate this information transfer. However, in order to guarantee maximum efficiency, an appropriate communication model based on ethical communication principles should be used, one which aims to overcome the pipeline communication model and offer the listener a new way of telling the story of the food product, based on real data collected through process traceability. The Citizen Consumer is thus placed at the center of the new communication model, in which he has the opportunity to choose what to know and how. The new label creates a virtual access point capable of telling the product's story from different points of view, following personal interests and offering several content modalities to support different situations and usability.
Keywords: agri-food traceability, agri-food transparency, clear label, food system, internet of things
Procedia PDF Downloads 159
559 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign conducted at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incident from the north and northwest, respectively; good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow
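The three performance measures named above are standard in dispersion-model evaluation and are straightforward to compute for paired observed (co) and predicted (cp) concentrations:

```python
import numpy as np

def validation_metrics(co, cp):
    """Fractional bias, geometric mean bias and normalized mean square
    error for paired observed/predicted concentrations (Chang & Hanna)."""
    co, cp = np.asarray(co, float), np.asarray(cp, float)
    fb = (co.mean() - cp.mean()) / (0.5 * (co.mean() + cp.mean()))
    mg = np.exp(np.log(co).mean() - np.log(cp).mean())  # requires co, cp > 0
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())
    return fb, mg, nmse

# Perfect agreement gives FB = 0, MG = 1, NMSE = 0.
print(validation_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```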
Procedia PDF Downloads 137
558 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction
Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal
Abstract:
Noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed test section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed to estimate the reflection coefficients using an inverse method and some reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise is presented. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation
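The cleaning step exploits the fact that a cross-spectral matrix built from a few coherent acoustic sources is low-rank, while the boundary layer noise, being spatially decorrelated across the array, concentrates on and near the diagonal, i.e., is sparse. A real-valued toy version of such a low-rank-plus-sparse split via principal component pursuit (ADMM, after Candès et al.; simplified and not the paper's exact algorithm):

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=200):
    """Split M into a low-rank part L (acoustic contribution) and a
    sparse part S (boundary layer noise). Real-valued toy version; a
    complex Hermitian cross-spectral matrix needs complex shrinkage."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt       # singular value thresholding
        S = shrink(M - L + Y / mu, lam / mu)     # entrywise soft threshold
        Y = Y + mu * (M - L - S)                 # dual variable update
    return L, S
```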
Procedia PDF Downloads 336
557 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, identified by high-ROP instances, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through the offset wells. The data is then consolidated into a heat map as a function of ROP; a more optimum ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built using the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
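The IDW step can be sketched directly; the well coordinates and the [WOB, RPM, GPM] values below are invented for illustration:

```python
import numpy as np

def idw(known_xy, known_values, query_xy, power=2.0):
    """Inverse Distance Weighting: estimate drilling parameters at a new
    well location as a distance-weighted mean of offset-well values."""
    d = np.linalg.norm(known_xy - query_xy, axis=1)
    if np.any(d < 1e-9):                 # query coincides with an offset well
        return known_values[np.argmin(d)]
    w = 1.0 / d ** power
    return (w[:, None] * known_values).sum(axis=0) / w.sum()

wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # offset-well locations
params = np.array([[30.0, 120.0, 800.0],                 # [WOB, RPM, GPM]
                   [28.0, 110.0, 780.0],
                   [32.0, 125.0, 820.0]])
print(idw(wells, params, np.array([0.4, 0.4])))
```

Raising the power parameter makes the estimate more local, i.e., dominated by the nearest offset wells.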
Procedia PDF Downloads 133556 Clustering for Detection of the Population at Risk of Anticholinergic Medication
Authors: A. Shirazibeheshti, T. Radwan, A. Ettefaghian, G. Wilson, C. Luca, Farbod Khanizadeh
Abstract:
Anticholinergic medication has been associated with adverse events such as falls, delirium, and cognitive impairment in older patients. To assess this risk further, anticholinergic burden scores have been developed to quantify it. A clustering-based risk model was deployed in a healthcare management system to group patients into multiple risk bands according to the anticholinergic burden scores of the medicines prescribed to them, thereby facilitating clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. The study was conducted on over 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. To further evaluate the model, associations between the average risk score within each group and factors such as socioeconomic status (i.e., Index of Multiple Deprivation) and an index of health and disability were investigated. The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings also show that this group of patients is located within more deprived areas of London compared to the populations of the other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed toward female than male patients, indicating that women are more at risk from this kind of multiple medication. The risk may be monitored and controlled in healthcare management systems well equipped with artificial intelligence.Keywords: anticholinergic medicines, clustering, deprivation, socioeconomic status
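A minimal sketch of the scoring-and-clustering pipeline is given below, assuming a toy burden table and scikit-learn's MeanShift; the drug scores, bandwidth, and simple summed weighting are illustrative assumptions, not the study's actual data or tuning.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Hypothetical per-drug anticholinergic burden scores (literature scale 1-3)
BURDEN = {"amitriptyline": 3, "oxybutynin": 3, "paroxetine": 3, "ranitidine": 1}

def risk_score(prescribed_drugs):
    """Weighted anticholinergic risk: here simply the sum of burden
    scores over all prescribed drugs (unknown drugs contribute 0)."""
    return float(sum(BURDEN.get(d, 0) for d in prescribed_drugs))

prescriptions = [
    ["amitriptyline", "oxybutynin"],    # likely high-risk patient
    ["ranitidine"],                     # low risk
    ["paroxetine", "ranitidine"],
    [],                                 # no anticholinergic drugs
]
X = np.array([[risk_score(p)] for p in prescriptions])

labels = MeanShift(bandwidth=1.5).fit_predict(X)  # one cluster id per patient
```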
Procedia PDF Downloads 212555 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research investigates methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, the study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and related clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research: a detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
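To make one of the reviewed techniques concrete, here is a minimal sketch of Naïve Bayes crime classification over unstructured report text, using scikit-learn; the toy reports, labels, and pipeline choices are purely illustrative and are not drawn from any of the reviewed studies.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy free-text reports standing in for unstructured crime data
reports = [
    "wallet stolen from a parked vehicle overnight",
    "window smashed and laptop taken from a residence",
    "victim threatened repeatedly through social media messages",
    "fraudulent bank transfer following a phishing email",
]
categories = ["theft", "burglary", "harassment", "fraud"]

# TF-IDF features + multinomial Naive Bayes classifier
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(reports, categories)
print(model.predict(["phone stolen from a vehicle"]))  # expected: ['theft']
```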
Procedia PDF Downloads 68554 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a horizon of t+10 days. Performance metrics such as Root Mean Square Error (RMSE), coefficient of determination (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a reliable horizon of up to t+3 days, with their values remaining stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance, the horizon of t+3 days is therefore chosen for predicting future daily water levels.Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake
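A minimal sketch of such a univariate LSTM forecaster at the chosen t+3 horizon is shown below, using Keras; the window length, layer sizes, and synthetic stand-in series are assumptions for illustration, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

LOOKBACK, HORIZON = 30, 3   # use 30 past days to predict the level at t+3

def make_windows(series, lookback=LOOKBACK, horizon=HORIZON):
    """Slice a daily water-level series into (samples, lookback, 1)
    inputs and scalar targets at t + horizon."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.asarray(X)[..., None], np.asarray(y)

# Synthetic stand-in for the observed Nokoué Lake series
levels = np.sin(np.linspace(0, 25, 1000)) + 0.1 * np.random.randn(1000)
X, y = make_windows(levels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LOOKBACK, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),          # predicted water level at t+3
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```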
Procedia PDF Downloads 64553 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based on Li-Ion Battery and Solar Energy Supply
Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan
Abstract:
Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications, as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may render a sensor network useless before its purpose is complete. At present, there are two strategies for extending node and network lifetime. The first is to save as much energy as possible: consumption is minimized by switching the node between active and sleep modes and by adopting a routing protocol with ultra-low energy consumption. The second is to evaluate the energy consumption of sensor applications as accurately as possible, since an erroneous energy model may render a ZigBee sensor network useless before its batteries are changed. In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, carrying out the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2 W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution for extending node lifetime is implemented, and a long-term sensor network test demonstrates the functionality of the solar-powered system.Keywords: ZigBee, Li-ion battery, solar panel, CC2530
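For reference, the decision logic of a constant-current/constant-voltage (CC/CV) Li-ion charge cycle can be sketched as below; the 0.5 C charge current, 4.2 V float voltage, and C/20 termination threshold are typical textbook values for a 1200 mAh cell, assumed here rather than taken from the paper.

```python
def cc_cv_setpoint(v_cell, i_cell, i_cc=0.6, v_cv=4.2, i_term=0.06):
    """Return the regulation target for one step of a CC/CV charger.

    v_cell, i_cell : measured cell voltage (V) and charge current (A)
    i_cc   : constant-current setpoint, 0.6 A = 0.5 C for 1200 mAh
    v_cv   : constant-voltage setpoint (4.2 V per Li-ion cell)
    i_term : termination current, 0.06 A = C/20
    """
    if v_cell < v_cv:
        return ("regulate_current", i_cc)   # CC phase: push fixed current
    if i_cell > i_term:
        return ("regulate_voltage", v_cv)   # CV phase: hold voltage, current tapers
    return ("charge_complete", 0.0)         # taper finished: stop charging
```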
Procedia PDF Downloads 376552 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially in artificial intelligence (AI). Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. In medical image processing, AI can help doctors diagnose diseases faster, with fewer mistakes and less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions, and early detection is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria diagnosis through automated analysis of blood smear images. The scheme is based on the convolutional neural network (CNN) method. We developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving an accuracy of 99%, and 97% for new real images. This validates that the model performs well with new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
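The cell-isolation step can be illustrated with OpenCV's marker-based watershed, as in the minimal sketch below; the thresholds and morphology settings follow the standard OpenCV tutorial recipe and are assumed here rather than being the authors' exact parameters.

```python
import cv2
import numpy as np

def split_cells(bgr):
    """Segment touching red blood cells in a smear image with the
    marker-based watershed transform, so each cell can be cropped
    and classified individually by the CNN."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Sure background by dilation, sure foreground by distance transform
    bg = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    fg = fg.astype(np.uint8)
    unknown = cv2.subtract(bg, fg)
    # Label the sure-foreground blobs, keep 0 for the unknown band
    _, markers = cv2.connectedComponents(fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    return cv2.watershed(bgr, markers)  # label image: one id per cell, -1 on edges
```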
Procedia PDF Downloads 23