Search results for: optimization algorithms
1401 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution
Authors: Pitigalage Chamath Chandira Peiris
Abstract:
Single image super-resolution, which aims to restore a high-resolution image from a single low-resolution input, has been a widely researched domain in image processing in recent years. Many single image super-resolution methods have been developed using both traditional and deep learning methodologies; deep learning-based super-resolution methods, in particular, have received significant interest. To date, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. CNN-based algorithms have been impressive in image super-resolution, but they cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. This study develops a method for single image super-resolution that utilizes an efficient and effective Transformer design. The proposed architecture makes use of a locally enhanced window Transformer block to alleviate the large computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.
Keywords: single image super resolution, computer vision, vision transformers, image restoration
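As a toy illustration of the non-overlapping window partition that window-based self-attention operates on (a plain-Python sketch of the general idea, not the Enhancer code; the function name and list-of-lists feature map are our own):

```python
def window_partition(feature_map, win):
    """Split an H x W feature map (list of lists) into non-overlapping
    win x win windows, as used by window-based self-attention.
    H and W are assumed divisible by the window size."""
    h, w = len(feature_map), len(feature_map[0])
    windows = []
    for top in range(0, h, win):
        for left in range(0, w, win):
            windows.append([row[left:left + win]
                            for row in feature_map[top:top + win]])
    return windows
```

Self-attention is then computed within each window independently, which reduces the quadratic attention cost from (H·W)² to the much smaller per-window size.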
Procedia PDF Downloads 109
1400 Modification of Polyurethane Adhesive for OSB/EPS Panel Production
Authors: Stepan Hysek, Premysl Sedivka, Petra Gajdacova
Abstract:
Currently, structural composite materials contain cellulose-based particles (wood chips, fibers) bonded with synthetic adhesives containing formaldehyde (urea-formaldehyde, melamine-formaldehyde, and other adhesives). Formaldehyde is classified as a volatile substance with proven carcinogenic effects on living organisms, and an emphasis has been put on the continual reduction of its content in products. One potential solution could be the development of an agglomerated material that does not contain adhesives releasing formaldehyde. A potential alternative to formaldehyde-based adhesives could be polyurethane adhesives, which contain no formaldehyde. Such adhesives have been increasingly used in applications where, a few years ago, formaldehyde-based adhesives were the only option. Advantages of polyurethane adhesives over others in the industry include the high elasticity of the joint, which is able to resist dynamic stress, and resistance to increased humidity and climatic effects. These properties make polyurethane adhesives well suited for OSB/EPS panel production. The objective of this paper is to develop an adhesive for bonding sandwich panels made of wood-based and other materials (e.g., structural insulated panels, SIP) and to optimize the input components in order to obtain an adhesive with the properties required for bonding the given materials without the involvement of formaldehyde. It was found that polyurethane recyclate is a suitable filler for modifying polyurethane adhesive, and the results clearly revealed that the modified adhesive can be used for OSB/EPS panel production.
Keywords: adhesive, polyurethane, recyclate, SIP
Procedia PDF Downloads 278
1399 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps
Authors: Ugochukwu Ejike Akpudo, Jank-Wook Hur
Abstract:
The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly growing research on the emerging condition-based maintenance concept of prognostics and health management (PHM). A variety of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have serious effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to a surge of research on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based, and the more robust time-frequency based features from vibration signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of dimensionality reduction algorithms such as locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that LLE, as a comprehensive feature extraction technique, yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs than the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation.
Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion
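The time-based features mentioned above can be sketched in a few lines (a generic illustration of common vibration features, not the authors' pipeline; the feature choices and names are ours):

```python
import math

def time_domain_features(signal):
    """Compute a few standard time-domain condition-monitoring features
    (RMS, peak, crest factor, kurtosis) from a vibration signal."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    var = sum(x * x for x in centered) / n
    # Non-excess kurtosis: E[(x - mu)^4] / sigma^4
    kurt = (sum(x ** 4 for x in centered) / n) / (var ** 2) if var > 0 else 0.0
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurt}
```

Frequency-based features (spectral peaks, band energies) would be computed analogously from the signal's spectrum, and the resulting feature vectors fed to a fusion/reduction step such as LLE.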
Procedia PDF Downloads 122
1398 Combining Chiller and Variable Frequency Drives
Authors: Nasir Khalid, S. Thirumalaichelvam
Abstract:
In most buildings, according to the US Department of Energy Data Book, the electrical consumption attributable to centralized heating, ventilation, and air-conditioning (HVAC) components can be as high as 40-60% of the total electricity consumption of the entire building. To provide efficient energy management for today's market, researchers are finding new ways to develop systems that can further reduce the electrical consumption of buildings. In this concept paper, a system known as the Intelligent Chiller Energy Efficiency (iCEE) System is being developed that is capable of saving up to 25% of the chiller's existing electrical energy consumption. In variable frequency drives (VFDs), research has found significant savings of up to 30% of electrical energy consumption. Together with VFDs at specific Air Handling Units (AHUs) of the HVAC system, this system will save even more electrical energy. The iCEE System is compatible with any make, model, or age of centrifugal, rotary, or reciprocating chiller air-conditioning systems that are electrically driven. The iCEE system uses engineering principles of efficiency analysis, enthalpy analysis, heat transfer, mathematical prediction, a modified genetic algorithm, psychrometric analysis, and optimization formulation to achieve true and tangible energy savings for consumers.
Keywords: variable frequency drives, adjustable speed drives, ac drives, chiller energy system
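The large VFD savings quoted above stem from the affinity (cube) laws for centrifugal fans and pumps, where shaft power scales roughly with the cube of speed. A minimal sketch of that estimate (the function name is ours, and the idealized cube law ignores motor/drive losses):

```python
def vfd_power_savings(rated_kw, speed_fraction):
    """Estimate motor power at reduced speed using the cube-law
    (affinity-law) approximation: P ~ rated_kw * (N / N_rated)^3."""
    if not 0 < speed_fraction <= 1:
        raise ValueError("speed_fraction must be in (0, 1]")
    power = rated_kw * speed_fraction ** 3
    savings = rated_kw - power
    return power, savings
```

For example, running a 100 kW fan motor at 80% speed ideally draws about 51 kW, i.e. nearly half the rated power is saved, which is why modest airflow reductions via VFDs yield disproportionate energy savings.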
Procedia PDF Downloads 563
1397 Arithmetic Operations Based on Double Base Number Systems
Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan
Abstract:
The Double Base Number System (DBNS) is a system of representing a number using two bases, namely 2 and 3, which has applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The conventional binary representation uses only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. With this algorithm, the number of digits required to represent a large number is smaller than with the standard binary method, so computational speed is increased and computation time is reduced. The standard binary method uses the digits 0 and 1 to represent a number, whereas the DBNS method uses the digit 1 alone to represent any number (canonical form). The greedy algorithm can represent a number in two ways: one using only positive summands, the other using both positive and negative summands. In this paper, these arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most day-to-day elliptic curve cryptography, and it appears to be considerably harder than the ordinary discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key during decryption.
Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm
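The greedy double-base decomposition described above can be sketched directly (our own minimal implementation of the general greedy idea with positive summands only, not the paper's code):

```python
def greedy_dbns(n):
    """Greedy double-base decomposition: repeatedly subtract the largest
    number of the form 2^a * 3^b that does not exceed the remainder."""
    terms = []
    while n > 0:
        best = 1
        p2 = 1
        while p2 <= n:          # enumerate powers of 2
            p3 = p2
            while p3 <= n:      # multiply in powers of 3
                if p3 > best:
                    best = p3
                p3 *= 3
            p2 *= 2
        terms.append(best)
        n -= best
    return terms
```

For example, 127 decomposes into just three terms, 108 + 18 + 1 (i.e. 2²·3³ + 2·3² + 1), versus seven set bits in its binary representation 1111111.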
Procedia PDF Downloads 399
1396 The Application of AI in Developing Assistive Technologies for Non-Verbal Individuals with Autism
Authors: Ferah Tesfaye Admasu
Abstract:
Autism Spectrum Disorder (ASD) often presents significant communication challenges, particularly for non-verbal individuals who struggle to express their needs and emotions effectively. Assistive technologies (AT) have emerged as vital tools in enhancing communication abilities for this population. Recent advancements in artificial intelligence (AI) hold the potential to revolutionize the design and functionality of these technologies. This study explores the application of AI in developing intelligent, adaptive, and user-centered assistive technologies for non-verbal individuals with autism. Through a review of current AI-driven tools, including speech-generating devices, predictive text systems, and emotion-recognition software, this research investigates how AI can bridge communication gaps, improve engagement, and support independence. Machine learning algorithms, natural language processing (NLP), and facial recognition technologies are examined as core components in creating more personalized and responsive communication aids. The study also discusses the challenges and ethical considerations involved in deploying AI-based AT, such as data privacy and the risk of over-reliance on technology. Findings suggest that integrating AI into assistive technologies can significantly enhance the quality of life for non-verbal individuals with autism, providing them with greater opportunities for social interaction and participation in daily activities. However, continued research and development are needed to ensure these technologies are accessible, affordable, and culturally sensitive.
Keywords: artificial intelligence, autism spectrum disorder, non-verbal communication, assistive technology, machine learning
Procedia PDF Downloads 28
1395 Control of Base Isolated Benchmark using Combined Control Strategy with Fuzzy Algorithm Subjected to Near-Field Earthquakes
Authors: Hashem Shariatmadar, Mozhgansadat Momtazdargahi
Abstract:
The purpose of structural control against earthquakes is to dissipate the earthquake input energy to the structure and reduce the plastic deformation of structural members. The methods for controlling a structure against earthquakes and reducing its response are active, semi-active, passive, and hybrid. In this paper, two different combined control systems are used: the first comprises a base isolator and multi-tuned mass dampers (BI & MTMD), and the second combines a hybrid base isolator and multi-tuned mass dampers (HBI & MTMD), for controlling an eight-story isolated benchmark steel structure. The active control force of the hybrid isolator is estimated by fuzzy logic algorithms. The influences of the combined systems on the responses of the benchmark structure under two near-field earthquake records (Newhall and El Centro) are evaluated by nonlinear dynamic time history analysis. Combined control systems consisting of passive or active systems installed in parallel with base-isolation bearings are capable of significantly reducing the response quantities (relative and absolute displacement) of base-isolated structures. Therefore, in the design and control of irregular isolated structures using the proposed control systems, the structural demands (relative and absolute displacement, etc.) in each direction must be considered separately.
Keywords: base-isolated benchmark structure, multi-tuned mass dampers, hybrid isolators, near-field earthquake, fuzzy algorithm
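The fuzzy estimation of an active control force can be illustrated with a toy single-input controller (a sketch of the general fuzzy-logic idea with invented membership functions, rule outputs, and names, not the authors' controller):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control_force(disp, max_force=1000.0):
    """Toy fuzzy controller: maps a normalized base displacement in [-1, 1]
    to a restoring force via three rules and weighted-average
    (Sugeno-style) defuzzification."""
    # Rule firing strengths for negative / near-zero / positive displacement
    w_neg = tri(disp, -2.0, -1.0, 0.0)
    w_zero = tri(disp, -1.0, 0.0, 1.0)
    w_pos = tri(disp, 0.0, 1.0, 2.0)
    # Rule consequents: push opposite to the displacement
    outs = (max_force, 0.0, -max_force)
    num = w_neg * outs[0] + w_zero * outs[1] + w_pos * outs[2]
    den = w_neg + w_zero + w_pos
    return num / den if den else 0.0
```

A practical controller would take several inputs (displacement, velocity) and many more rules, but the inference-and-defuzzify structure is the same.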
Procedia PDF Downloads 308
1394 Modeling and Characterization of the SiC Single Crystal Growth Process
Authors: T. Wejrzanowski, M. Grybczuk, E. Tymicki, K. J. Kurzydlowski
Abstract:
The present study addresses numerical simulations of the silicon carbide single crystal growth process in a Physical Vapor Transport reactor. Silicon carbide is a promising material for many applications in modern electronics. One of the main obstacles to wider application of SiC is the high price of high-quality single crystals. Improvement of the silicon carbide manufacturing process has a significant influence on the product price. A better understanding of crystal growth allows for optimization of the process, and it can be achieved by numerical simulations. In this work, Virtual Reactor software was used to simulate the process. Predicted geometrical properties of the final product and information about phenomena occurring inside the reactor were obtained. The latter is especially valuable because the reactor chamber is inaccessible during the process due to the high temperature inside the reactor (over 2000 °C). The obtained data were used for improvement of the process and the reactor geometry. The resulting crystal quality was also predicted based on the evolution of the crystallization front shape and on threading dislocation paths. The obtained results were confronted with experimental data, and they are in good agreement.
Keywords: Finite Volume Method, semiconductors, Physical Vapor Transport, silicon carbide
Procedia PDF Downloads 534
1393 On the Implementation of The Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems
Authors: Hala Zaghloul, Taymoor Nazmy
Abstract:
One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting the cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model, derived from a mammalian neural model, with great potential in the area of image processing as well as target recognition, feature extraction, speech recognition, combinatorial optimization, and compressed encoding. PCNN has unique features among other types of neural networks, which make it a candidate to be an important approach for perception in cognitive systems. This work demonstrates and emphasizes the potential of PCNN to perform different tasks related to image processing. The main obstacle preventing the direct implementation of this technique is the need to find a way to control the PCNN parameters so as to perform a specific task. This paper evaluates the performance of the standard PCNN model for processing images with different properties, selects the important parameters that give a significant result, and also considers approaches for adapting the PCNN parameters to perform a specific task.
Keywords: cognitive system, image processing, segmentation, PCNN kernels
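A minimal, simplified PCNN iteration can be sketched as follows (our own reduced model: the feeding input is the raw stimulus, linking counts the neighbours that pulsed in the previous step, and the dynamic threshold decays exponentially with a refractory jump after firing; the parameter values are illustrative, not the paper's):

```python
import math

def pcnn_segment(stimulus, steps=10, beta=0.2, a_theta=0.5, v_theta=20.0):
    """Simplified PCNN: a neuron pulses when its internal activity
    U = S * (1 + beta * L) exceeds its decaying dynamic threshold.
    Returns the first-firing time of each pixel (a coarse segmentation:
    brighter regions fire earlier)."""
    h, w = len(stimulus), len(stimulus[0])
    theta = [[v_theta] * w for _ in range(h)]
    y = [[0] * w for _ in range(h)]
    fire_time = [[None] * w for _ in range(h)]
    for n in range(1, steps + 1):
        new_y = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                # Linking term: neighbours that pulsed last step
                link = sum(y[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di or dj) and 0 <= i + di < h and 0 <= j + dj < w)
                u = stimulus[i][j] * (1.0 + beta * link)
                theta[i][j] *= math.exp(-a_theta)   # threshold decay
                if u > theta[i][j]:
                    new_y[i][j] = 1
                    theta[i][j] += v_theta          # refractory jump
                    if fire_time[i][j] is None:
                        fire_time[i][j] = n
        y = new_y
    return fire_time
```

On a small image with a bright patch on a dark background, the bright pixels pulse several steps earlier, so grouping pixels by first-firing time yields a crude segmentation.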
Procedia PDF Downloads 283
1392 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators are starting to face many challenges in the digital era, especially with high demands from customers. Since mobile network operators are considered a source of big data, traditional techniques are not effective in the new era of big data, the Internet of Things (IoT), and 5G; as a result, effectively handling different big datasets becomes a vital task for operators with the continuous growth of data and the move from long term evolution (LTE) to 5G. There is thus an urgent need for effective big data analytics to predict future demands, traffic, and network performance in order to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and recurrent neural networks (RNN) are employed in a data-driven application for mobile network operators. The main framework of these models includes identification of each model's parameters, estimation, prediction, and the final data-driven application of this prediction in business and network performance applications. The models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. The performance of the models, assessed using well-known evaluation criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (the deep learning model).
Keywords: big data analytics, machine learning, CDRs, 5G
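As a toy illustration of the autoregressive idea at the core of ARIMA (a zero-mean AR(1) model fitted by least squares and iterated forward; this is far simpler than the full ARIMA models used in the paper, and the function names are ours):

```python
def fit_ar1(series):
    """Least-squares fit of a zero-mean AR(1) model x_t = phi * x_{t-1} + e_t.
    The closed-form estimate is phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)."""
    x = series
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def forecast_ar1(series, phi, steps):
    """Iterate the AR(1) recursion forward from the last observation."""
    preds, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        preds.append(last)
    return preds
```

Full ARIMA adds differencing (the "I") and moving-average terms on top of this autoregressive core, and parameter identification is typically done from autocorrelation plots or information criteria.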
Procedia PDF Downloads 143
1391 Comparative Analysis of the Computer Methods' Usage for Calculation of Hydrocarbon Reserves in the Baltic Sea
Authors: Pavel Shcherban, Vlad Golovanov
Abstract:
Nowadays, the depletion of hydrocarbon deposits on land in the Kaliningrad region is leading to active geological exploration and development of oil and natural gas reserves in the southeastern part of the Baltic Sea. LLC 'Lukoil-Kaliningradmorneft' is implementing a comprehensive program for the development of the region's shelf in 2014-2023. Due to the heterogeneity of reservoir rocks in the various discovered fields, as well as ambiguous conclusions about the contours of deposits, additional geological prospecting and refinement of the recoverable oil reserves are being carried out. The key element is the use of an effective technique of computer reserves modeling at the first stage of processing the received data. The following step uses this information for cluster analysis, which makes it possible to optimize the field development approaches. The article analyzes the effectiveness of various methods for calculating reserves and of computer modelling methods for offshore hydrocarbon fields. Cluster analysis makes it possible to measure the influence of the obtained data on the development of a technical and economic model for mining the deposits. A relationship is observed between the accuracy of the calculation of recoverable reserves and the need to modernize the existing mining infrastructure, as well as the optimization of the scheme for opening and developing oil deposits.
Keywords: cluster analysis, computer modelling of deposits, correction of the feasibility study, offshore hydrocarbon fields
Procedia PDF Downloads 168
1390 Artificial Intelligence in Melanoma Prognosis: A Narrative Review
Authors: Shohreh Ghasemi
Abstract:
Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods.
Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine
Procedia PDF Downloads 86
1389 Depolymerization of Lignin in Sugarcane Bagasse by Hydrothermal Liquefaction to Optimize Catechol Formation
Authors: Nirmala Deenadayalu, Kwanele B. Mazibuko, Lethiwe D. Mthembu
Abstract:
Sugarcane bagasse is the residue obtained after the extraction of sugar from sugarcane. The main aim of this work was to produce catechol from sugarcane bagasse. The optimization of catechol production was investigated using a Box-Behnken design of experiments. The sugarcane bagasse was heated in a Parr reactor at a set temperature. The reactions were carried out at different temperatures (100-250 °C), catalyst loadings (1-10% KOH (m/v)), and reaction times (60-240 min) at 17 bar pressure. The solid and liquid fractions were then separated by vacuum filtration. The liquid fraction was analyzed for catechol using high-performance liquid chromatography (HPLC) and characterized for functional groups using Fourier transform infrared spectroscopy (FTIR). The optimized condition for catechol production was 175 °C, 240 min, and 10% KOH, with a catechol yield of 79.11 ppm. Since the optimum lay at the longest reaction time (240 min) and highest loading (10% KOH) tested, a further series of experiments was conducted at 175 °C, 260 min, and 20% KOH; this yielded only 2.46 ppm catechol, a large reduction in the catechol produced. The HPLC peak for catechol was obtained at 2.5 min for both the standards and the samples. The FTIR peak at 1750 cm⁻¹ was due to the C=C vibration band of the aromatic ring of the catechol present in both the standard and the samples. The peak at 3325 cm⁻¹ was due to the hydrogen-bonded phenolic OH vibration bands of the catechol. ANOVA was also performed on the experimental data to identify the factors that most affected the amount of catechol produced.
Keywords: catechol, sugarcane bagasse, lignin, hydrothermal liquefaction
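A Box-Behnken design in coded units can be generated with a short sketch (a generic construction of the design matrix for the three factors above, not the authors' software; the function name and centre-point count are ours):

```python
from itertools import combinations

def box_behnken(n_factors, n_center=3):
    """Box-Behnken design in coded units (-1, 0, +1): for every pair of
    factors, take all four (+/-1, +/-1) combinations with the remaining
    factors held at 0, then append centre-point runs."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs
```

For three factors (temperature, KOH loading, time) this gives the classic 12 edge-midpoint runs plus centre points, 15 runs in total; each coded level is then mapped linearly onto the real factor range (e.g. -1 → 100 °C, +1 → 250 °C).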
Procedia PDF Downloads 105
1388 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces
Authors: Martin Alexander Eder, Sergei Semenov
Abstract:
Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has over time developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history.
The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects with the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant life diagrams, namely in shear and tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
Keywords: adhesive, fatigue, interface, multiaxial stress
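The Palmgren-Miner step of the model can be illustrated in isolation (a minimal sketch assuming a Basquin-type S-N curve N = C·S^(-m); the constants and names are ours, and the rainflow counting and Drucker-Prager scaling that precede this step are omitted):

```python
def miner_damage(cycle_counts, sn_curve):
    """Palmgren-Miner linear damage sum D = sum(n_i / N_i), where N_i is
    the allowable number of cycles at stress range S_i taken from a
    Basquin-type S-N curve N = C * S**(-m).
    cycle_counts: iterable of (stress_range, n_cycles) bins, e.g. from
    rainflow counting of an equivalent stress history.
    Failure is predicted when D >= 1."""
    c, m = sn_curve
    damage = 0.0
    for stress_range, n_cycles in cycle_counts:
        n_allow = c * stress_range ** (-m)
        damage += n_cycles / n_allow
    return damage
```

In the full model this sum is evaluated separately for the equivalent shear and equivalent normal histories of each interface point, and the governing index is the maximum of the resulting damage values.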
Procedia PDF Downloads 173
1387 Separation Performance of CO₂ by Mixed Matrix Membrane Comprising Carbide-Derived Carbon
Authors: Musa Najimu, Isam Aljundi
Abstract:
In this study, the development of a mixed matrix membrane (MMM) containing carbide-derived carbon (CDC) for the separation of CO₂ was investigated. MMMs with four different loadings (0.1 to 2 wt%) were prepared by the dry/wet phase inversion technique. Prior to this, the formulation of the control polysulfone (PSF) membrane was optimized in terms of the PSF concentration in a mixture of NMP/THF solvents and ethanol. The prepared samples were characterized and tested for CO₂ and CH₄ gas permeation. The optimization of the control PSF membrane revealed that 30 wt% PSF is the critical polymer concentration in the formulation. Characterization results revealed reinforced thermal stability and improved polarity imparted by CDC in the MMM, in addition to uniform dispersion of the filler up to 1 wt% loading. Furthermore, the incorporation of CDC in the PSF membrane formulation enhanced both the CO₂ permeance and the ideal selectivity over the control membrane. A CDC loading of 0.5 wt% resulted in the highest CO₂ permeance of 5.5 GPU, corresponding to a 120% increase in permeance, while a CDC loading of 1 wt% resulted in the highest selectivity (CO₂/CH₄) of 27, corresponding to a 29% increase in selectivity. Studies of the operating temperature effect showed that the optimum operating temperature for the M1.0 membrane is 20 °C. In addition, the feed pressure studies showed that high feed pressures favor high performance of the membrane and good CO₂/CH₄ separation.
Keywords: carbide derived carbon, mixed matrix membrane, CO₂ separation, polysulfone
Procedia PDF Downloads 210
1386 An Approach for Association Rules Ranking
Authors: Rihab Idoudi, Karim Saheb Ettabaa, Basel Solaiman, Kamel Hamrouni
Abstract:
Medical association rule induction is used to discover useful correlations between pertinent concepts in large medical databases. Nevertheless, AR algorithms produce a huge number of rules and do not guarantee the usefulness and interestingness of the generated knowledge. To overcome this drawback, we propose an ontology-based interestingness measure for ranking ARs. According to domain experts, the goal of using ARs is to discover implicit relationships between items of different categories, such as 'clinical features and disorders', 'clinical features and radiological observations', etc. That is to say, itemsets composed of 'similar' items are uninteresting. Therefore, the dissimilarity between a rule's items can be used to judge the interestingness of association rules: the more different the items, the more interesting the rule. In this paper, we design a distinct approach for ranking semantically interesting association rules involving the use of an ontology knowledge mining approach. The basic idea is to organize the ontology's concepts into a hierarchical structure of conceptual clusters of targeted subjects, where each cluster encapsulates 'similar' concepts suggesting a specific category of the domain knowledge. The interestingness of an association rule is then defined as the dissimilarity between the corresponding clusters; that is to say, the farther apart the clusters of the items in the AR, the more interesting the rule. We apply the method in our domain of interest, the mammographic domain, using an existing mammographic ontology called Mammo, with the goal of deriving interesting rules from past experience and discovering implicit relationships between concepts modeling the domain.
Keywords: association rule, conceptual clusters, interestingness measures, ontology knowledge mining, ranking
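The cluster-dissimilarity notion of interestingness can be sketched with a toy measure (our own simplified flat-cluster version of the hierarchical idea described above; the concept and cluster names below are invented for illustration):

```python
def rule_interestingness(antecedent, consequent, concept_cluster):
    """Ontology-inspired interestingness sketch: the score is the fraction
    of (antecedent, consequent) item pairs whose concepts fall in
    *different* clusters -- rules linking dissimilar categories score high,
    rules relating 'similar' concepts score low."""
    pairs = [(a, c) for a in antecedent for c in consequent]
    if not pairs:
        return 0.0
    cross = sum(1 for a, c in pairs
                if concept_cluster[a] != concept_cluster[c])
    return cross / len(pairs)
```

In the full approach the binary same/different test would be replaced by a graded distance between clusters in the ontology hierarchy (e.g. path length between cluster nodes).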
Procedia PDF Downloads 324
1385 Design of an Improved Distributed Framework for Intrusion Detection System Based on Artificial Immune System and Neural Network
Authors: Yulin Rao, Zhixuan Li, Burra Venkata Durga Kumar
Abstract:
Intrusion detection refers to monitoring the actions of internal and external intruders on a system and detecting behaviours that violate security policies in real time. In intrusion detection, there has been much discussion about the application of neural network technology and artificial immune systems (AIS). However, many solutions use static methods (signature-based and stateful protocol analysis) or centralized intrusion detection systems (CIDS), which are unsuitable for real-time intrusion detection systems that need to process large amounts of data and detect unknown intrusions. This article proposes a framework for a distributed intrusion detection system (DIDS) with multiple agents, based on the concepts of AIS and neural network technology, to detect anomalies and intrusions. In this framework, multiple agents are assigned to each host and work together, improving the system's detection efficiency and robustness. The trainer agent in the central server of the framework uses an artificial neural network (ANN) rather than the negative selection algorithm of AIS to generate mature detectors. Mature detectors can distinguish between self-files and non-self-files after learning. Our analyzer agents use genetic algorithms to generate memory cell detectors. This kind of detector will effectively reduce false positive and false negative errors and act quickly on known intrusions.
Keywords: artificial immune system, distributed artificial intelligence, multi-agent, intrusion detection system, neural network
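For contrast with the ANN-based trainer agent, the classic negative selection algorithm of AIS that it replaces can be sketched as follows (a toy bit-string version using Hamming-distance matching; the parameters and function names are illustrative, not from the paper):

```python
import random

def generate_detectors(self_set, n_detectors, r, length, seed=0):
    """Classic negative-selection sketch: random bit-string candidates are
    kept as detectors only if they do NOT match any 'self' string within
    Hamming distance r (real AIS systems often use r-contiguous matching
    instead)."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if all(sum(a != b for a, b in zip(cand, s)) > r for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r):
    """A sample is flagged as non-self if any detector matches it within r."""
    return any(sum(a != b for a, b in zip(sample, d)) <= r
               for d in detectors)
```

Because every surviving detector is, by construction, far from all self patterns, anything it matches is likely non-self; the drawback, which motivates ANN-based alternatives, is the cost of generating enough detectors to cover the non-self space.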
Procedia PDF Downloads 112
1384 Numerical Analysis of Engine Performance and Emission of a 2-Stroke Opposed Piston Hydrogen Engine
Authors: Bahamin Bazooyar, Xinyan Wang, Hua Zhao
Abstract:
As a zero-carbon fuel, hydrogen can be used in combustion engines to avoid carbon emissions. This paper numerically investigates the engine performance of a two-stroke opposed piston hydrogen engine by using three-dimensional (3D) Computational Fluid Dynamics (CFD) simulations. The engine displacement is 12.2 cm³, with a compression ratio of 39. RANS simulations with the k-ε turbulence model and coupled chemistry combustion models are performed at an engine speed of 4500 rpm and hydrogen flow rates of up to 100 g/s. In order to model the hydrogen injection process, the hydrogen nozzle was meshed with a refined mesh, and the injection pressure was varied between 100 and 200 bar. In order to optimize the hydrogen combustion process, the injection timing was varied between 15 and 10 crank angle degrees before top dead center. The results showed that combustion efficiency was mostly influenced by the injection pressure, due to its impact on fuel/air mixing and charge inhomogeneity. Nitrogen oxide (NOₓ) emissions are well correlated with engine peak temperatures, demonstrating that the thermal NO mechanism is dominant under engine conditions. Through the optimization of hydrogen injection timing and pressure, a peak thermal efficiency of 45% and NOₓ emissions of 15 ppm/kWh can be achieved at an injection timing of 350 °CA and a pressure of 160 bar.
Keywords: engine, hydrogen, diesel, two-stroke, opposed-piston, decarbonisation
Procedia PDF Downloads 19
1383 Integrated Wastewater Reuse Project of the Faculty of Sciences Ain Chock, Morocco
Authors: Nihad Chakri, Btissam El Amrani, Faouzi Berrada, Fouad Amraoui
Abstract:
In Morocco, water scarcity requires the exploitation of non-conventional resources. Rural areas are under-equipped with sanitation infrastructure, unlike urban areas. Decentralized and low-cost solutions could improve the quality of life of the population and the environment. In this context, the Faculty of Sciences Ain Chock "FSAC" has undertaken an integrated project to treat part of its wastewater using a decentralized compact system. The project will propose alternative solutions that are inexpensive and adapted to the context of peri-urban and rural areas in order to treat the wastewater generated and use it for irrigation, watering, and cleaning. For this purpose, several tests were carried out in the laboratory in order to develop a liquid waste treatment system optimized for local conditions. Based on the results obtained at the laboratory scale for the different proposed scenarios, we designed and implemented a prototype of a mini wastewater treatment plant for the Faculty. In this article, we will outline the steps of dimensioning, construction, and monitoring of the mini-station in our Faculty.
Keywords: wastewater, purification, optimization, vertical filter, MBBR process, sizing, decentralized pilot, reuse, irrigation, sustainable development
Procedia PDF Downloads 118
1382 Optimization of Element Type for FE Model and Verification of Analyses with Physical Tests
Authors: Mustafa Tufekci, Caner Guven
Abstract:
In the automotive industry, sliding door systems, which are also used as body closures, are safety members. Extreme product tests are carried out to prevent failures during the design process, but performing these tests experimentally results in high costs. Finite element analysis is an effective tool used in the design process. These analyses are used before production of a prototype to validate the design against customer requirements. As a result, a substantial amount of time and cost is saved. A finite element model is created for geometries designed in 3D CAD programs. Different element types, such as bar, shell, and solid, can be used to create the mesh model. A cheaper model can be created through the selection of element type, but the combination of element types used in the model, the number and geometry of the elements, and the degrees of freedom all affect the analysis results. The sliding door system studied here is a good example for applying these methods. Structural analysis of the sliding door mechanism was carried out using FE models. In addition, physical tests with the same boundary conditions as the FE models were performed. A comparison study of these element types was carried out against the test and analysis results, and the optimum combination was identified.
Keywords: finite element analysis, sliding door mechanism, element type, structural analysis
Procedia PDF Downloads 332
1381 An Optimal Algorithm for Finding (R, Q) Policy in a Price-Dependent Order Quantity Inventory System with Soft Budget Constraint
Authors: S. Hamid Mirmohammadi, Shahrazad Tamjidzad
Abstract:
This paper is concerned with a single-item continuous review inventory system in which demand is stochastic and discrete. The budget consumed for purchasing the ordered items is not restricted, but extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered under an all-units discount cost structure. In many actual systems, the budget, as a resource occupied by the purchased items, is limited, and the system confronts resource shortage by incurring additional costs. Thus, considering resource shortage costs as a part of the system costs, especially when the amount of resource occupied by the purchased items is influenced by quantity discounts, is well motivated by practical concerns. In this paper, an optimization problem is formulated for finding the optimal (R, Q) policy when the system is subject to a budget limitation and discount pricing simultaneously. Properties of the cost function are investigated, and then an algorithm based on a one-dimensional search procedure is proposed for finding an optimal (R, Q) policy that minimizes the expected system costs.
Keywords: (R, Q) policy, stochastic demand, backorders, limited resource, quantity discounts
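The paper's algorithm exploits properties of the cost function to reduce the search to one dimension. As a rough, self-contained illustration of the search idea only, a naive grid version with a stand-in deterministic cost function (not the paper's stochastic formulation, and with made-up parameter values) might look like:

```python
def expected_cost(R, Q, demand_rate=50.0, K=100.0, h=2.0, b=10.0,
                  unit_price=lambda q: 5.0 if q < 200 else 4.5):
    """Stand-in EOQ-style cost: ordering + holding + a crude backorder
    proxy + purchase cost under an all-units discount. The paper's
    exact expected-cost function is far more detailed."""
    ordering = K * demand_rate / Q
    holding = h * (Q / 2.0 + R)
    shortage = b * max(0.0, demand_rate * 0.1 - R)  # crude proxy term
    purchase = unit_price(Q) * demand_rate
    return ordering + holding + shortage + purchase

def best_policy(q_grid, r_grid):
    """Grid search: for each candidate Q, pick the best reorder point R."""
    return min((expected_cost(R, Q), R, Q) for Q in q_grid for R in r_grid)
```

Here the all-units discount makes larger orders cheaper per unit, while holding and shortage-style penalties push Q and R back down; the paper's one-dimensional search procedure resolves this trade-off exactly rather than on a grid.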
Procedia PDF Downloads 644
1380 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemistry software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features via recursive feature elimination, which performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared to a single network.
Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction
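The recursive feature elimination step can be sketched generically. The sketch below substitutes a simple correlation score for the random-forest importances used in the paper, and runs on toy data rather than integral moments; it only illustrates the drop-the-weakest-feature loop.

```python
import math

def correlation_importance(X, y):
    """Absolute Pearson correlation of each feature column with the target,
    used here as a stand-in for random-forest feature importances."""
    n = len(y)
    my = sum(y) / n
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = math.sqrt(sum((a - mx) ** 2 for a in col))
        vy = math.sqrt(sum((b - my) ** 2 for b in y))
        scores.append(abs(cov / (vx * vy)) if vx and vy else 0.0)
    return scores

def recursive_feature_elimination(X, y, n_keep):
    """Drop the least important feature one at a time until n_keep remain;
    returns the indices of the surviving features."""
    remaining = list(range(len(X[0])))
    while len(remaining) > n_keep:
        sub = [[row[j] for j in remaining] for row in X]
        scores = correlation_importance(sub, y)
        worst = min(range(len(remaining)), key=lambda i: scores[i])
        remaining.pop(worst)
    return remaining
```

Refitting the importance model after every elimination, as the loop does, is what distinguishes recursive elimination from a single one-shot ranking.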
Procedia PDF Downloads 121
1379 Leveraging Business to Business Collaborations to Optimize Reverse Haul Logistics
Authors: Pallav Singh, Rajesh Yabaji, Rajesh Dhir, Chanakya Hridaya
Abstract:
Supply chain costs for Indian industries have been on an exponential trend due to steep inflation in fundamental cost factors: fuel, labour, and rents. In this changing context, organizations have been adopting multiple approaches to keep logistics costs under control and protect their profit margins. The lever of business to business (B2B) collaboration can be used by organizations to garner higher value. In the context of the Indian logistics industry, the penetration of B2B collaboration initiatives has been limited. This paper outlines a structured framework for the adoption of B2B collaboration through discussion of a successful initiative between ITC's Leaf Tobacco Business and a leading Indian media house. Multiple barriers to such a collaborative process exist and need to be addressed through comprehensive, structured approaches. This paper outlines a generic framework for B2B collaboration in the Indian logistics space, covering guidelines for arriving at potential opportunities, identification of collaborators, an effective tie-up process, design of operations, and sustenance factors. The methods outlined can be used in any other industry and also build a foundation for further research on many topics.
Keywords: business to business collaboration, reverse haul logistics, transportation cost optimization, exports logistics
Procedia PDF Downloads 331
1378 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image
Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche
Abstract:
Image denoising has been the main problem in the area of medical imaging. The most challenging aspect of image denoising is preserving data-carrying structures such as surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise in outperforming all traditional approaches. However, these techniques are limited by the need for large training sets and by high computational costs. This research proposes a denoising approach based on the LDTCWT (Lifting Dual Tree Complex Wavelet Transform) using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is a lifting reformulation of the wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain the real and imaginary parts. This permits the transform to achieve approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, wiener filter
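The exact hybrid thresholding function is the paper's contribution; a generic, coefficient-wise sketch of the idea of combining wavelet shrinkage with a Wiener-style attenuation (with hypothetical parameter choices, not the paper's formulation) is:

```python
def soft_threshold(x, t):
    """Soft shrinkage: shrink the coefficient magnitude toward zero by t."""
    return (x - t) if x > t else (x + t) if x < -t else 0.0

def wiener_gain(coeff, noise_var):
    """Empirical Wiener attenuation for a single coefficient: estimated
    signal power over total power."""
    sig = max(coeff * coeff - noise_var, 0.0)
    return sig / (sig + noise_var) if (sig + noise_var) else 0.0

def hybrid_shrink(coeffs, t, noise_var):
    """Soft-threshold each coefficient, then attenuate the survivors
    with a Wiener-style gain."""
    return [soft_threshold(c, t) * wiener_gain(c, noise_var) for c in coeffs]
```

Small (likely noisy) coefficients are zeroed by the threshold, while large ones are only mildly attenuated, which is the behaviour a threshold/Wiener hybrid aims for.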
Procedia PDF Downloads 167
1377 Reduction of Aerodynamic Drag Using Vortex Generators
Authors: Siddharth Ojha, Varun Dua
Abstract:
Fluid flow separation near the vehicle's rear end is classified as one of the most important causes of aerodynamic drag in sedan automobiles. To retard flow separation, bump-shaped vortex generators are being tested for implementation at the roof end of a sedan vehicle. Frequently used on aircraft to prevent flow separation, vortex generators produce some drag themselves, but they can substantially reduce overall drag by preventing flow separation downstream. The net effect of vortex generators can be calculated by summing their positive and negative contributions. Since this effect depends on the dimensions and geometry of the vortex generators, those placed on the vehicle roof are optimized for maximum efficiency and performance. The model was analyzed through ANSYS CFD modeling and tested in a wind tunnel to observe properties such as aerodynamic drag and flow separation; a significant delay in separation was obtained by employing vortex generators on the scaled model. The major conclusions recorded from the analysis were a substantial 24% reduction in aerodynamic drag and a 14% increase in the efficiency of the sedan automobile, as flow separation from the surface is delayed. This paper presents the results of the optimization, the effect of vortex generators on the flow field, and the mechanism by which these effects occur and are regulated.
Keywords: aerodynamics, aerodynamic devices, body, computational fluid dynamics (CFD), flow visualization
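A drag-coefficient reduction translates directly into force through the standard drag equation F = ½ρC_dAv². The numbers below (baseline C_d, frontal area, speed) are illustrative assumptions, not values from the paper; only the 24% reduction comes from the abstract.

```python
def drag_force(cd, frontal_area, speed, rho=1.225):
    """Aerodynamic drag force F = 0.5 * rho * Cd * A * v^2 (SI units:
    kg/m^3, m^2, m/s -> newtons)."""
    return 0.5 * rho * cd * frontal_area * speed ** 2

# Illustrative sedan at ~108 km/h: Cd = 0.30, A = 2.2 m^2, v = 30 m/s.
baseline = drag_force(cd=0.30, frontal_area=2.2, speed=30.0)
improved = drag_force(cd=0.30 * (1 - 0.24), frontal_area=2.2, speed=30.0)
```

Because drag scales linearly with C_d, a 24% coefficient reduction cuts the drag force by the same 24% at any given speed.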
Procedia PDF Downloads 225
1376 Hot Air Flow Annealing of MAPbI₃ Perovskite: Structural and Optical Properties
Authors: Mouad Ouafi, Lahoucine Atourki, Larbi Laanab, Erika Vega, Miguel Mollar, Bernabe Marib, Boujemaa Jaber
Abstract:
Despite the astonishing emergence of the methylammonium lead triiodide perovskite as a promising light harvester for solar cells, the physical properties of solution-processed MAPbI₃ are still critical and need to be improved. The objective of this work is to investigate the effect of hot airflow during the growth of MAPbI₃ films by the spin-coating process on their structural, optical, and morphological properties. The experimental results show that many physical properties of the perovskite strongly depend on the airflow temperature, whose optimization has a beneficial effect on perovskite quality. In fact, a clear improvement in the crystallinity and the crystallite size of the MAPbI₃ perovskite is demonstrated by XRD analyses when the airflow temperature is increased up to 100°C. As far as the surface morphology is concerned, SEM micrographs show that significant homogeneous nucleation, uniform surface distribution, and pinhole-free films with a surface coverage as high as 98% are achieved when the airflow temperature reaches 100°C. At this temperature, an improvement is also observed in the optical properties of the films. By contrast, a remarkable degradation of the MAPbI₃ perovskite, associated with PbI₂ phase formation, is noticed when the hot airflow temperature exceeds 100°C, especially at 300°C.
Keywords: hot air flow, crystallinity, surface coverage, perovskite morphology
Procedia PDF Downloads 166
1375 Optimization in Locating Firefighting Stations Using GIS Data and AHP Model; A Case Study on Arak City
Authors: Hasan Heydari
Abstract:
In recent decades, locating urban services has been one of the significant discussions in urban planning. Cities require more accurate planning in order to supply citizen needs, especially with regard to urban safety. To reach this goal, one of the main tasks of urban planners and managers is specifying suitable sites for firefighting stations. This study was carried out for that purpose. Effective criteria consisting of coverage radius, population density, proximity to the pathway network, and land use (compatible and incompatible neighborhoods) were specified. After that, descriptive and spatial information on the criteria was prepared, and their layers were created in ArcGIS 9.3. Using the Analytic Hierarchy Process (AHP), these criteria and their sub-criteria were assigned weights. The layers were classified according to their weights and finally overlaid using the index overlay model to produce the final site selection map for the firefighting stations of Arak city. The results of the analysis in the GIS environment indicate that the existing fire stations do not cover the whole city sufficiently and that some of the stations have been established on unsuitable sites. The output map indicates the best sites for locating the firefighting stations of Arak.
Keywords: site-selection, firefighting stations, analytic hierarchy process (AHP), GIS, index overlay model
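The AHP weighting step can be sketched generically: derive a priority vector from a pairwise comparison matrix (here via the row geometric mean, a common approximation to the principal eigenvector) and check judgment consistency. The comparison values below are hypothetical, not the study's actual expert judgments.

```python
import math

def ahp_weights(pairwise):
    """Priority vector from an n x n pairwise comparison matrix using
    the row geometric mean method; weights sum to 1."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_index(pairwise, weights):
    """CI = (lambda_max - n) / (n - 1), estimating lambda_max from the
    weighted row sums; CI near 0 means consistent judgments."""
    n = len(pairwise)
    if n < 3:
        return 0.0  # 2 x 2 reciprocal matrices are always consistent
    lam = sum(sum(pairwise[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    return (lam - n) / (n - 1)
```

In practice the CI is divided by a random index for the matrix size to obtain the consistency ratio, with CR below about 0.1 usually taken as acceptable before the weights are used to overlay the GIS layers.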
Procedia PDF Downloads 351
1374 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest. Such pneumonic conditions can lead to loss of life when the congestion is severe. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or Covid-19 induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT images. This CNN model provides effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification
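The Min-Max normalization step applied to the CNN feature vectors is standard; a minimal sketch (the target range and handling of constant vectors are our assumptions, not details from the paper):

```python
def min_max_normalize(features, lo=0.0, hi=1.0):
    """Rescale a feature vector linearly into [lo, hi]; a constant
    vector (zero range) maps to lo to avoid division by zero."""
    f_min, f_max = min(features), max(features)
    if f_max == f_min:
        return [lo for _ in features]
    scale = (hi - lo) / (f_max - f_min)
    return [lo + (f - f_min) * scale for f in features]
```

Normalizing the extracted features to a common range keeps the downstream classifiers from being dominated by whichever feature happens to have the largest raw magnitude.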
Procedia PDF Downloads 163
1373 Wall Heat Flux Mapping in Liquid Rocket Combustion Chamber with Different Jet Impingement Angles
Authors: O. S. Pradeep, S. Vigneshwaran, K. Praveen Kumar, K. Jeyendran, V. R. Sanal Kumar
Abstract:
The influence of injector attitude on wall heat flux plays an important role in predicting the start-up transient and also in determining the combustion chamber wall durability of liquid rockets. In this paper, comprehensive numerical studies have been carried out on an idealized liquid rocket combustion chamber to examine the transient wall heat flux during its start-up transient at different injector attitudes. Numerical simulations have been carried out with the help of a validated 2D axisymmetric, double-precision, pressure-based, transient, species transport, SST k-omega model with a laminar finite-rate model governing the turbulence-chemistry interaction, for four cases with different jet intersection angles, viz., 0°, 30°, 45°, and 60°. We concluded that the jet intersection angle has a bearing on the time and location of the maximum wall heat flux zone of the liquid rocket combustion chamber during the start-up transient. We also concluded that wall heat flux mapping in a liquid rocket combustion chamber during the start-up transient is a meaningful objective for chamber wall material selection and for the lucrative design optimization of the combustion chamber to improve the payload capability of the rocket.
Keywords: combustion chamber, injector, liquid rocket, rocket engine wall heat flux
Procedia PDF Downloads 491
1372 Exclusive Value Adding by iCenter Analytics on Transient Condition
Authors: Zhu Weimin, Allegorico Carmine, Ruggiero Gionata
Abstract:
Decades of Baker Hughes (BH) iCenter experience have demonstrated that, in addition to conventional insights on equipment steady operation conditions, insights on transient conditions can add significant and exclusive value for anomaly detection, downtime saving, and predictive maintenance. Our work shows examples from the BH iCenter experience to introduce the advantages and features of using transient condition analytics: (i) operation under critical engine conditions, e.g., high levels or high rates of change of temperature, pressure, flow, vibration, etc., that would not be reached in normal operation; (ii) management of dedicated sub-systems or components, many of which are often bottlenecks for reliability and maintenance; (iii) indirect detection of anomalies in the absence of instrumentation; (iv) repetitive sequences: if data is properly processed, the engineering features of transients provide not only anomaly detection but also problem characterization and prognostic indicators for predictive maintenance; (v) engine variables accounting for fatigue analysis. iCenter has been developing and deploying a series of analytics based on transient conditions. They are contributing to exclusive value adding in the following areas: (i) reliability improvement, (ii) startup reliability improvement, (iii) predictive maintenance, and (iv) repair/overhaul cost reduction. Illustrative examples for each of the above areas are presented in our study, focusing on challenges and adopted techniques ranging from purely statistical approaches to the implementation of machine learning algorithms. The obtained results demonstrate how value is obtained using transient condition analytics in the BH iCenter experience.
Keywords: analytics, diagnostics, monitoring, turbomachinery
Procedia PDF Downloads 78