Search results for: optimization algorithms
3022 Optimizing Recycling and Reuse Strategies for Circular Construction Materials with Life Cycle Assessment
Authors: Zhongnan Ye, Xiaoyi Liu, Shu-Chien Hsu
Abstract:
Rapid urbanization has led to a significant increase in construction and demolition waste (C&D waste), underscoring the need for sustainable waste management strategies in the construction industry. Aiming to enhance the sustainability of urban construction practices, this study develops an optimization model to effectively suggest the optimal recycling and reuse strategies for C&D waste, including concrete and steel. By employing Life Cycle Assessment (LCA), the model evaluates the environmental impacts of adopted construction materials throughout their lifecycle. The model optimizes the quantity of materials to recycle or reuse, the selection of specific recycling and reuse processes, and logistics decisions related to the transportation and storage of recycled materials with the objective of minimizing the overall environmental impact, quantified in terms of carbon emissions, energy consumption, and associated costs, while adhering to a range of constraints. These constraints include capacity limitations, quality standards for recycled materials, compliance with environmental regulations, budgetary limits, and temporal considerations such as project deadlines and material availability. The strategies are expected to be both cost-effective and environmentally beneficial, promoting a circular economy within the construction sector, aligning with global sustainability goals, and providing a scalable framework for managing construction waste in densely populated urban environments. The model is helpful in reducing the carbon footprint of construction projects, conserving valuable resources, and supporting the industry’s transition towards a more sustainable future.
Keywords: circular construction, construction and demolition waste, material recycling, optimization modeling
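As a concrete illustration of the kind of model the abstract describes, the sketch below casts a tiny recycling/reuse allocation as a linear program in SciPy. All coefficients (emission factors, costs, capacities, budget) are invented for demonstration and do not come from the study.

```python
# Minimal sketch of a recycling-allocation LP (illustrative numbers only).
from scipy.optimize import linprog

# Decision variables: tonnes of concrete/steel sent to (recycle, reuse).
# x = [conc_recycle, conc_reuse, steel_recycle, steel_reuse]
co2_per_tonne = [12.0, 4.0, 25.0, 8.0]       # hypothetical kg CO2 / tonne

# Inequality constraints: process capacities and a budget cap.
A_ub = [
    [1, 0, 1, 0],            # recycling-plant capacity (tonnes)
    [0, 1, 0, 1],            # reuse/storage capacity (tonnes)
    [3.0, 1.5, 6.0, 2.0],    # hypothetical unit costs; row caps total spend
]
b_ub = [500.0, 300.0, 2000.0]

# Equality constraints: all generated waste must be processed one way or another.
A_eq = [
    [1, 1, 0, 0],            # concrete mass balance
    [0, 0, 1, 1],            # steel mass balance
]
b_eq = [400.0, 150.0]        # tonnes of C&D waste arising

res = linprog(co2_per_tonne, A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.x, res.fun)        # optimal allocation and its total CO2
```

A real instance would add quality-standard and deadline constraints and use LCA-derived coefficients, but the structure (linear objective over material flows, capacity and budget rows) stays the same.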
Procedia PDF Downloads 62
3021 Isolation, Characterization and Optimization of Alkalophilic and Thermotolerant Lipase from Bacillus subtilis Strain
Authors: Indu Bhushan Sharma, Rashmi Saraswat
Abstract:
A thermotolerant, solvent-stable, and alkalophilic lipase-producing bacterial strain was isolated from a water sample from the foothills of Trikuta Mountain in Kakryal (Reasi district) in Jammu and Kashmir, India. The lipase-producing microorganisms were screened using tributyrin agar plates. The selected microbe was optimized for maximum lipase production by subjecting it to various carbon and nitrogen sources, incubation periods and inoculum sizes. The selected strain was identified as Bacillus subtilis strain kakrayal_1 (BSK_1) using 16S rRNA sequence analysis. The effects of pH, temperature, metal ions, detergents and organic solvents on lipase activity were studied. Lipase was found to be stable over a pH range of 6.0 to 9.0 and exhibited maximum activity at pH 8. Lipolytic activity was highest at 37°C, and the enzyme remained active at 60°C for 24 hours; hence, it was established as thermotolerant. Production of lipase was significantly induced by vegetable oil, and the best nitrogen source was found to be peptone. The isolated Bacillus lipase was stimulated by pre-treatment with Mn2+, Ca2+, K+, Zn2+, and Fe2+. Lipase was stable in detergents such as Triton X-100, Tween 20 and Tween 80. Ethyl acetate (100%) enhanced lipase activity, whereas lipase activity was found to be stable in hexane. The optimization resulted in a 4-fold increase in lipase production. Bacillus lipases are ‘generally recognized as safe’ (GRAS) and are industrially interesting. The inducible alkaline, thermotolerant lipase exhibited the ability to remain stable in detergents and organic solvents. It could be further researched as a potential biocatalyst for industrial applications such as biotransformation, detergent formulation, bioremediation and organic synthesis.
Keywords: bacillus, lipase, thermotolerant, alkalophilic
Procedia PDF Downloads 257
3020 Real-Time Inventory Management and Operational Efficiency in Manufacturing
Authors: Tom Wanyama
Abstract:
We have developed a weight-based parts inventory monitoring system utilizing the Industrial Internet of Things (IIoT) to enhance operational efficiencies in manufacturing. The system addresses various challenges, including eliminating downtimes caused by stock-outs, preventing human errors in parts delivery and product assembly, and minimizing motion waste by reducing unnecessary worker movements. The system incorporates custom QR codes for simplified inventory tracking and retrieval processes. The generated data serves a dual purpose by enabling real-time optimization of parts flow within manufacturing facilities and facilitating retroactive optimization of stock levels for informed decision-making in inventory management. The pilot implementation at SEPT Learning Factory successfully eradicated data entry errors, optimized parts delivery, and minimized workstation downtimes, resulting in a remarkable increase of over 10% in overall equipment efficiency across all workstations. Leveraging the IIoT features, the system seamlessly integrates information into the process control system, contributing to the enhancement of product quality. This approach underscores the importance of effective tracking of parts inventory in manufacturing to achieve transparency, improved inventory control, and overall profitability. In the broader context, our inventory monitoring system aligns with the evolving focus on optimizing supply chains and maintaining well-managed warehouses to ensure maximum efficiency in the manufacturing industry.
Keywords: industrial Internet of things, industrial systems integration, inventory monitoring, inventory control in manufacturing
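The core of such a weight-based monitor can be sketched in a few lines: convert a load-cell reading into a part count and raise a reorder flag. The bin parameters and field names below are hypothetical stand-ins, not the system's actual schema.

```python
# Sketch: weight-based bin monitoring (all part data hypothetical).
from dataclasses import dataclass

@dataclass
class Bin:
    qr_code: str          # custom QR code identifying the bin
    unit_weight_g: float  # weight of a single part
    tare_g: float         # empty-bin weight
    reorder_point: int    # trigger replenishment below this count

def part_count(bin_: Bin, load_cell_g: float) -> int:
    """Estimate parts remaining from the load-cell reading."""
    return max(0, round((load_cell_g - bin_.tare_g) / bin_.unit_weight_g))

def check(bin_: Bin, load_cell_g: float) -> dict:
    n = part_count(bin_, load_cell_g)
    return {"qr": bin_.qr_code, "count": n,
            "reorder": n <= bin_.reorder_point}

m8_bolts = Bin(qr_code="BIN-M8-001", unit_weight_g=5.2,
               tare_g=310.0, reorder_point=40)
print(check(m8_bolts, load_cell_g=480.0))  # -> count 33, reorder True
```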
Procedia PDF Downloads 42
3019 Large Core Silica Few-Mode Optical Fibers with Reduced Differential Mode Delay and Enhanced Mode Effective Area over 'C'-Band
Authors: Anton V. Bourdine, Vladimir A. Burdin, Oleg R. Delmukhametov
Abstract:
This work presents a fast and simple method for the design of large core silica optical fibers with differential mode delay (DMD) management. Some results are reported concerning refractive index profile optimization for a 42 µm core 16-LP-mode optical fiber for next-generation optical networks. Here, a special refractive index profile form provides total DMD reduction over the entire mode set while maintaining the desired enhanced mode effective area. A method is proposed for simulating a 'real manufactured' few-mode optical fiber (FMF) core geometry that differs from the desired optimized structure by non-symmetrical core ellipticity and refractive index profile deviations, including local fluctuations. Results of the subsequent analysis of the optimized FMF with inserted geometry distortions, performed by a previously developed modification of the rigorous mixed finite-element method, showed strong DMD degradation that requires additional higher-order mode management. In addition, this work also presents a design method for a mode division multiplexer channel precision spatial positioning scheme at the FMF core end, which provides a potential solution to the described DMD degradation problem arising from the 'distorted' core geometry due to features of optical fiber manufacturing techniques.
Keywords: differential mode delay, few-mode optical fibers, nonlinear Shannon limit, optical fiber non-circularity, ‘real manufactured’ optical fiber core geometry simulation, refractive index profile optimization
Procedia PDF Downloads 161
3018 The Optimization of the Parameters for Eco-Friendly Leaching of Precious Metals from Waste Catalyst
Authors: Silindile Gumede, Amir Hossein Mohammadi, Mbuyu Germain Ntunka
Abstract:
Goal 12 of the 17 Sustainable Development Goals (SDGs) encourages sustainable consumption and production patterns. This necessitates achieving the environmentally safe management of chemicals and all wastes throughout their life cycle and the proper disposal of pollutants and toxic waste. Fluid catalytic cracking (FCC) catalysts are widely used in refineries to convert heavy feedstocks to lighter ones. During the refining processes, the catalysts are deactivated and discarded as hazardous toxic solid waste. Spent catalysts (SCs) contain high-cost metals, and the recovery of metals from SCs is a tactical plan for supplying part of the demand for these substances and minimizing the environmental impacts. Leaching, followed by solvent extraction, has been found to be the most efficient method to recover valuable metals with high purity from spent catalysts. However, the use of inorganic acids during the leaching process causes a secondary environmental issue. Therefore, it is necessary to explore alternative leaching agents that are efficient, economical, and environmentally friendly. In this study, the waste catalyst was collected from a domestic refinery and was characterised using XRD, ICP, XRF, and SEM. Response surface methodology (RSM) and a Box-Behnken design were used to model and optimize the influence of some parameters affecting the acidic leaching process. The parameters selected in this investigation were the acid concentration, temperature, and leaching time. From the characterisation results, it was found that the spent catalyst contains high concentrations of vanadium (V) and nickel (Ni); hence this study focuses on the leaching of Ni and V using a biodegradable acid to eliminate the formation of secondary pollution.
Keywords: eco-friendly leaching, optimization, metal recovery, leaching
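The RSM/Box-Behnken workflow the abstract mentions can be prototyped as an ordinary least-squares fit of a second-order polynomial, followed by a search over the fitted surface. The sketch below uses a standard three-factor Box-Behnken design but invented recovery values, not the study's data.

```python
# Sketch: fitting a second-order RSM model for leaching recovery.
import numpy as np

# Coded factors: x1 = acid concentration, x2 = temperature, x3 = time.
# Rows are a 3-factor Box-Behnken design (12 edge points + 3 center points).
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([61, 70, 68, 79, 58, 66, 72, 83,
              60, 71, 74, 85, 78, 77, 79], dtype=float)  # % metal recovery (invented)

def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(beta, 2))                 # quadratic model coefficients

# Crude optimum search over the coded cube [-1, 1]^3:
grid = np.mgrid[-1:1:21j, -1:1:21j, -1:1:21j].reshape(3, -1).T
pred = design_matrix(grid) @ beta
print(grid[pred.argmax()], pred.max())   # best coded settings and predicted recovery
```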
Procedia PDF Downloads 72
3017 Maximum Power Point Tracking Using FLC Tuned with GA
Authors: Mohamed Amine Haraoubia, Abdelaziz Hamzaoui, Najib Essounbouli
Abstract:
The pursuit of the MPPT has led to the development of many kinds of controllers, one of which is the Fuzzy Logic Controller, which has proven its worth. To further tune this controller, this paper discusses and analyzes the use of Genetic Algorithms. It provides an introduction to both systems and tests their compatibility and performance.
Keywords: fuzzy logic controller, fuzzy logic, genetic algorithm, maximum power point, maximum power point tracking
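A minimal sketch of the tuning loop follows. The fuzzy rule surface is replaced here by a crude two-gain control law, and the PV characteristic is a toy single-peak curve, so only the GA structure (elitist selection plus Gaussian mutation of controller gains, with harvested energy as fitness) reflects the approach described above.

```python
# Sketch: GA tuning two gains of a simplified MPPT controller.
# The controller and PV model are illustrative stand-ins, not the paper's FLC.
import random

random.seed(1)

def pv_power(v):                      # toy P-V curve, MPP at v = 10 V
    return 8.0 * v - 0.4 * v * v

def track(gains, steps=60):
    """Run a crude gradient-style MPPT loop; return energy harvested."""
    k_e, k_d = gains
    v, prev_p, prev_v, energy = 2.0, 0.0, 1.9, 0.0
    for _ in range(steps):
        p = pv_power(v)
        dp, dv = p - prev_p, (v - prev_v) or 1e-6
        step = k_e * dp / dv - k_d * abs(dp)   # stand-in for the fuzzy rule surface
        prev_p, prev_v = p, v
        v = min(20.0, max(0.0, v + step))
        energy += p
    return energy

def ga(pop_size=30, gens=40):
    pop = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=track, reverse=True)      # fitness = energy captured
        elite = pop[: pop_size // 3]
        pop = elite + [
            tuple(max(0.0, g + random.gauss(0, 0.05))  # Gaussian mutation
                  for g in random.choice(elite))
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=track)

best = ga()
print(best, track(best))   # tuned gains and the energy they capture
```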
Procedia PDF Downloads 380
3016 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still in a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research will evaluate the performance of the QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better they are than classical models. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, we note that the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
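The hybrid feature-extraction step described above can be sketched with PennyLane's simulator. The circuit below (angle encoding of 2×2 pixel patches followed by a random layer, measured on a classical machine) follows the general quanvolution recipe rather than the paper's exact model; all shapes and weights are illustrative assumptions.

```python
# Sketch: a quanvolutional filter over 2x2 image patches with PennyLane.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_patch(patch, weights):
    qml.AngleEmbedding(np.pi * patch, wires=range(n_qubits))  # encode pixels as rotations
    qml.RandomLayers(weights, wires=range(n_qubits))          # quantum "filter"
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(0, 2 * np.pi, size=(1, 4))

def quanvolve(img):
    """Map a 28x28 greyscale image to a 14x14x4 quantum feature map."""
    out = np.zeros((14, 14, n_qubits))
    for r in range(14):
        for c in range(14):
            patch = img[2*r:2*r+2, 2*c:2*c+2].flatten()
            out[r, c] = quanv_patch(patch, weights)
    return out

features = quanvolve(np.random.uniform(0, 1, (28, 28)))
print(features.shape)  # (14, 14, 4); feed into a classical dense head
```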
Procedia PDF Downloads 134
3015 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center. This simulation is used to model uncertainty in the problem. This research leverages state-of-the-art techniques in the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN, used in practice for biodiversity conservation. Our method may better help guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.
Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
Procedia PDF Downloads 214
3014 Transforming Data Science Curriculum Through Design Thinking
Authors: Samar Swaid
Abstract:
Today, corporates are moving toward the adoption of Design-Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design-Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design-Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on the users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been witnessed to be the road to developing innovative applications, interactive systems, scientific software, healthcare applications, and even to re-thinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on Data Science programs states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection and model interpretation. Thus, the Data Science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design-Thinking and teaching modules for data science programs.
Keywords: data science, design thinking, AI, curriculum, transformation
Procedia PDF Downloads 86
3013 The Optimization of TICSI in the Convergence Mechanism of Urban Water Management
Authors: M. Macchiaroli, L. Dolores, V. Pellecchia
Abstract:
With the recent Resolution n. 580/2019/R/idr, the Italian Regulatory Authority for Energy, Networks, and Environment (ARERA) for Urban Water Management has introduced, for water utilities characterized by persistent critical issues regarding the planning and organization of the service and the implementation of the interventions necessary for the improvement of infrastructures and management quality, a new mechanism for determining tariffs: the regulatory scheme of Convergence. The aim of this regulatory scheme is to overcome the fragmentation of the Water Service in order to improve the stability of the local institutional structures, technical quality, and contractual quality, as well as to guarantee elements of transparency for Users of the Service. The Convergence scheme presupposes the identification of the cost items to be considered in the tariff in parametric terms, distinguishing three possible cases according to the type of historical data available to the Manager. The study, in particular, focuses on operations that have neither data on tariff revenues nor data on operating costs. In this case, the Manager's Constraint on Revenues (VRG) is estimated on the basis of a reference benchmark and becomes the starting point for defining the structure of the tariff classes, in compliance with the TICSI provisions (Integrated Text for tariff classes, ARERA's Resolution n. 665/2017/R/idr). The proposed model implements recent studies on optimization models for the definition of tariff classes in compliance with the constraints dictated by TICSI in the application of the Convergence mechanism, offering itself as a support tool for Managers and the local water regulatory Authority in the decision-making process.
Keywords: decision-making process, economic evaluation of projects, optimizing tools, urban water management, water tariff
Procedia PDF Downloads 124
3012 Systematic and Meta-Analysis of Navigation in Oral and Maxillofacial Trauma and Impact of Machine Learning and AI in Management
Authors: Shohreh Ghasemi
Abstract:
Introduction: Managing oral and maxillofacial trauma is a multifaceted challenge, as it can have life-threatening consequences and significant functional and aesthetic impact. Navigation techniques have been introduced to improve surgical precision to meet this challenge. A machine learning algorithm was also developed to support clinical decision-making regarding treating oral and maxillofacial trauma. Given these advances, this systematic meta-analysis aims to assess the efficacy of navigational techniques in treating oral and maxillofacial trauma and explore the impact of machine learning on their management. Methods: A detailed and comprehensive analysis of studies published between January 2010 and September 2021 was conducted through a systematic meta-analysis. This included performing a thorough search of the Web of Science, Embase, and PubMed databases to identify studies evaluating the efficacy of navigational techniques and the impact of machine learning in managing oral and maxillofacial trauma. Studies that did not meet the established entry criteria were excluded. In addition, the overall quality of the included studies was evaluated using the Cochrane risk-of-bias tool and the Newcastle-Ottawa scale. Results: A total of 12 studies, including 869 patients with oral and maxillofacial trauma, met the inclusion criteria. The analysis revealed that navigation techniques effectively improve surgical accuracy and minimize the risk of complications. Additionally, machine learning algorithms have proven effective in predicting treatment outcomes and identifying patients at high risk for complications. Conclusion: The introduction of navigational technology has great potential to improve surgical precision in oral and maxillofacial trauma treatment. Furthermore, developing machine learning algorithms offers opportunities to improve clinical decision-making and patient outcomes. Still, further studies are necessary to corroborate these results and establish the optimal use of these technologies in managing oral and maxillofacial trauma.
Keywords: trauma, machine learning, navigation, maxillofacial, management
Procedia PDF Downloads 62
3011 Machine Learning in Agriculture: A Brief Review
Authors: Aishi Kundu, Elhan Raza
Abstract:
"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.Keywords: machine Learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting
Procedia PDF Downloads 110
3010 Evolutional Substitution Cipher on Chaotic Attractor
Authors: Adda Ali-Pacha, Naima Hadj-Said
Abstract:
Nowadays, the security of information is primarily founded on cryptographic algorithms whose confidentiality depends on the number of bits necessary to define the cryptographic key. In this work, we introduce a new chaotic cryptosystem that we call an evolutional substitution cipher on a chaotic attractor. In this research paper, we take the Henon attractor. The evolutional substitution cipher on the Henon attractor is based on the principle of the monoalphabetic cipher, and it associates the plaintext with a succession of real numbers calculated from the attractor equations.
Keywords: cryptography, substitution cipher, chaos theory, Henon attractor, evolutional substitution cipher
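A minimal sketch of the idea, assuming the classic Henon parameterization (a = 1.4, b = 0.3) and an invented byte-quantization rule: the initial conditions play the role of the key, and each plaintext byte is substituted using the next orbit point, so the substitution evolves along the message.

```python
# Sketch: a substitution cipher keyed by the Henon attractor.
# The quantization and mixing rules are illustrative, not the paper's scheme.
def henon_keystream(n, a=1.4, b=0.3, x=0.1, y=0.1):
    """Generate n key bytes from the Henon map orbit."""
    out = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x      # Henon map iteration
        out.append(int(abs(x) * 1e6) % 256)  # quantize a real to a byte
    return out

def encrypt(plaintext: bytes, **seed) -> bytes:
    ks = henon_keystream(len(plaintext), **seed)
    return bytes((p + k) % 256 for p, k in zip(plaintext, ks))

def decrypt(ciphertext: bytes, **seed) -> bytes:
    ks = henon_keystream(len(ciphertext), **seed)
    return bytes((c - k) % 256 for c, k in zip(ciphertext, ks))

msg = b"chaotic substitution"
ct = encrypt(msg, x=0.1, y=0.1)
assert decrypt(ct, x=0.1, y=0.1) == msg  # initial conditions act as the key
```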
Procedia PDF Downloads 435
3009 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive properties for the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material beyond a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experiment’s environment. In the present work, medicine samples’ spectrum profiles were obtained via LIBS. The medicine samples’ datasets include two different concentrations for both paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits: 70% training – 30% test and 80% training – 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms (Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning and Neural Network algorithms) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
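The scale-PCA-classify comparison can be expressed as scikit-learn pipelines. In the sketch below the spectra are random stand-in data, and the classifier list is a subset of the families named in the abstract; it illustrates the cross-validated comparison, not the study's actual results.

```python
# Sketch: preprocessing + PCA + cross-validated classifiers for LIBS spectra.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2048))        # 60 spectra x 2048 wavelength bins (stand-in)
y = rng.integers(0, 4, size=60)        # 4 classes: 2 medicines x 2 concentrations

for clf in (SVC(), KNeighborsClassifier(), DecisionTreeClassifier(), GaussianNB()):
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=5)   # CV guards against overfitting
    print(type(clf).__name__, scores.mean().round(3))
```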
Procedia PDF Downloads 92
3008 Green Supply Chain Network Optimization with Internet of Things
Authors: Sema Kayapinar, Ismail Karaoglan, Turan Paksoy, Hadi Gokcen
Abstract:
Green Supply Chain Management is gaining growing interest among researchers and supply chain managers. The concept of Green Supply Chain Management is to integrate environmental thinking into Supply Chain Management. It is a systematic concept emphasizing environmental problems such as the reduction of greenhouse gas emissions, energy efficiency, recycling of end-of-life products, and the generation of solid and hazardous waste. This study presents a green supply chain network model integrating Internet of Things applications. The Internet of Things makes it possible to obtain precise and accurate information about end-of-life products through sensors and system devices. The forward direction consists of suppliers, plants, distribution centres and sales and collection centres, while the reverse flow includes the sales and collection centres, a disassembly centre, and recycling and disposal centres. The sales and collection centres sell the new products, which are transhipped from the factory via the distribution centres, and also receive end-of-life products according to their value level. We describe green logistics activities by presenting specific examples, including the recycling of returned products and the reduction of CO2 gas emissions. The different transportation choices between echelons are illustrated according to their CO2 gas emissions. This problem is formulated as a mixed integer linear programming model to solve the green supply chain problems which emerge from environmental awareness and responsibilities. This model is solved using the GAMS package. Numerical examples are provided to illustrate the efficiency of the proposed model.
Keywords: green supply chain optimization, internet of things, greenhouse gas emission, recycling
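The paper formulates its model as a MILP solved in GAMS. Purely as an illustration of the formulation style, here is a toy mode-choice MILP in PuLP with invented lanes, costs, emission factors, and carbon price; the real model would add flow, capacity, and mass-balance constraints across all echelons.

```python
# Toy MILP sketch: pick one transport mode per lane, minimizing
# cost plus carbon-priced CO2 (all numbers invented).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

lanes = ["plant->DC", "DC->sales", "sales->disassembly"]
modes = {"truck": (1.0, 0.12), "rail": (1.1, 0.04)}  # (cost, kg CO2), per t-km
km = {"plant->DC": 120, "DC->sales": 60, "sales->disassembly": 45}
tonnes, co2_price = 10, 1.5          # hypothetical flow and carbon price

prob = LpProblem("green_chain", LpMinimize)
use = {(l, m): LpVariable(f"use_{i}_{m}", cat=LpBinary)
       for i, l in enumerate(lanes) for m in modes}

prob += lpSum(use[l, m] * tonnes * km[l] * (c + co2_price * e)
              for l in lanes for m, (c, e) in modes.items())
for l in lanes:                       # exactly one mode per lane
    prob += lpSum(use[l, m] for m in modes) == 1

prob.solve()
print({l: m for (l, m), v in use.items() if v.value() == 1},
      value(prob.objective))
```

With the carbon price set high enough, the solver flips lanes from truck to rail, which is exactly the emissions/cost trade-off the model is meant to expose.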
Procedia PDF Downloads 333
3007 Optimization of the Culture Medium, Incubation Period, pH and Temperature for Maximal Dye Bioremoval Using A. fumigatus
Authors: Wafaa M. Abd El-Rahim, Magda A. El-Meleigy, Eman Refaat
Abstract:
This study deals with optimizing the conditions affecting the formation of extracellular lignin-degrading enzymes to achieve maximal decolorization of Direct Violet dye by one fungal strain. In this study, the fungal strain Aspergillus fumigatus was used to produce extracellular ligninolytic enzymes for removing Direct Violet dye under different conditions: culture medium, incubation period, pH and temperature. The results indicated that the removal efficiency of A. fumigatus was enhanced by the addition of glucose and peptone to the culture medium. The addition of peptone and glucose was found to increase the decolorization activity of the fungal isolate from 51.38% to 93.74% after 4 days of incubation. The highest production of extracellular lignin-degrading enzymes was also recorded in Direct Violet dye medium supplemented with peptone and glucose. It was also found that the decolorization activity of A. fumigatus decreased gradually with increasing incubation period up to 4 days. It was also found that the fungal strain can grow and produce extracellular ligninolytic enzymes, accompanied by efficient removal of Direct Violet dye, over a wide pH range of 4-8. The results also showed that the maximal biosynthesis of ligninolytic enzymes, accompanied by maximal removal of Direct Violet dye, was obtained at a temperature of 28°C. This indicates that the different conditions of culture medium, incubation period, pH and temperature affect dye decolorization by the fungal biomass and play a role in Direct Violet dye removal along with the enzymatic activity of A. fumigatus.
Keywords: A. fumigatus, extracellular lignin-degrading enzymes, textile dye, dye removal
Procedia PDF Downloads 280
3006 Airon Project: IoT-Based Agriculture System for the Optimization of Irrigation Water Consumption
Authors: África Vicario, Fernando J. Álvarez, Felipe Parralejo, Fernando Aranda
Abstract:
The irrigation systems of traditional agriculture, such as gravity-fed irrigation, produce a great waste of water because, generally, there is no control over the amount of water supplied in relation to the water needed. The AIRON Project tries to solve this problem by implementing an IoT-based system to instrument the irrigation plots so that the state of the crops and the amount of water used for irrigation can be known remotely. The IoT system consists of a sensor network that measures the humidity of the soil, the weather conditions (temperature, relative humidity, wind and solar radiation) and the irrigation water flow. The communication between this network and a central gateway is conducted by means of long-range wireless communication that depends on the characteristics of the irrigation plot. The main objective of the AIRON project is to deploy an IoT sensor network in two different plots of the irrigation community of Aranjuez in the Spanish region of Madrid. The first plot is 2 km away from the central gateway, so LoRa has been used as the base communication technology. The problem with this plot is the absence of mains electric power, so devices with energy-saving modes have had to be used to maximize the external batteries' use time. An ESP32 SoC board with a LoRa module is employed in this case to gather data from the sensor network and send them to a gateway consisting of a Raspberry Pi with a LoRa hat. The second plot is located 18 km away from the gateway, a range that hampers the use of LoRa technology. In order to establish reliable communication in this case, the long-term evolution (LTE) standard is used, which makes it possible to reach much greater distances by using the cellular network. As mains electric power is available in this plot, a Raspberry Pi has been used instead of the ESP32 board to collect sensor data. All data received from the two plots are stored on a proprietary server located at the irrigation management company's headquarters. The analysis of these data by means of machine learning algorithms that are currently under development should allow a short-term prediction of the irrigation water demand that would significantly reduce the waste of this increasingly valuable natural resource. The major finding of this work is the real possibility of deploying a remote sensing system for irrigated plots by using Commercial-Off-The-Shelf (COTS) devices, easily scalable and adaptable to design requirements such as the distance to the control center or the availability of mains electrical power at the site.
Keywords: internet of things, irrigation water control, LoRa, LTE, smart farming
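Node-side logic of the kind described can be sketched compactly. The field names, moisture threshold, and JSON uplink format below are assumptions for illustration, not the AIRON payload specification.

```python
# Sketch: node-side logic for a plot node (field names are assumptions).
import json
import time

def read_sensors():
    # Stand-in for real soil-moisture / weather / flow-meter drivers.
    return {"soil_vwc_pct": 17.2, "temp_c": 29.4, "rh_pct": 41.0,
            "wind_ms": 2.1, "flow_l_min": 0.0}

def build_frame(plot_id, reading):
    """Compact JSON uplink frame, sized for a LoRa or LTE payload."""
    return json.dumps({"plot": plot_id, "ts": int(time.time()), **reading})

def irrigation_needed(reading, vwc_threshold=20.0):
    # Crude rule: irrigate when volumetric water content drops below threshold;
    # the project's ML-based demand prediction would replace this.
    return reading["soil_vwc_pct"] < vwc_threshold

r = read_sensors()
print(build_frame("aranjuez-plot-1", r), irrigation_needed(r))
```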
Procedia PDF Downloads 92
3005 Financial Ethics: A Review of 2010 Flash Crash
Authors: Omer Farooq, Salman Ahmed Khan, Sadaf Khalid
Abstract:
Modern-day stock markets have almost entirely become automated. Even though this means increased profits for investors, with algorithms acting upon the slightest price change in a matter of microseconds, it has also given birth to many ethical dilemmas, in the sense that the slightest mistake can cause people to lose their livelihoods. This paper reviews one such event that happened on May 06, 2010, in which $1 trillion disappeared from the Dow Jones Industrial Average. We discuss its various aspects and the ethical dilemmas that have arisen from it.
Keywords: flash crash, market crash, stock market, stock market crash
Procedia PDF Downloads 524
3004 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpreting X-ray images relies on the expertise and experience of medical professionals. Sometimes, radiographic images are of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research has revealed that the optimal approach needs to address the stated problem by employing appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using the enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the 12 different fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The implemented fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy level of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, using the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
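A soft-voting fusion of torchvision's ResNet18 and VGG16 gives a hedged sketch of the classification half of the system; the averaging rule, head sizes, and untrained weights are assumptions, and the segmentation U-Net is omitted.

```python
# Sketch: averaging ResNet18 and VGG16 logits into a 12-class fracture classifier.
import torch
import torch.nn as nn
from torchvision import models

NUM_TYPES = 12  # avulsion, comminuted, ..., spiral

class FractureEnsemble(nn.Module):
    def __init__(self):
        super().__init__()
        self.resnet = models.resnet18(weights=None)
        self.resnet.fc = nn.Linear(self.resnet.fc.in_features, NUM_TYPES)
        self.vgg = models.vgg16(weights=None)
        self.vgg.classifier[6] = nn.Linear(4096, NUM_TYPES)

    def forward(self, x):
        return (self.resnet(x) + self.vgg(x)) / 2   # soft-voting fusion

model = FractureEnsemble().eval()
xray = torch.randn(1, 3, 224, 224)                  # preprocessed radiograph
with torch.no_grad():
    probs = model(xray).softmax(dim=1)
conf, cls = probs.max(dim=1)                        # confidence score + class
print(int(cls), float(conf))
```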
Procedia PDF Downloads 130
3003 Experimental and Numerical Studies of Droplet Formation
Authors: Khaled Al-Badani, James Ren, Lisa Li, David Allanson
Abstract:
Droplet formation is an important process in many engineering systems and manufacturing procedures, which include welding, biotechnology, 3D printing, the biochemical and biomedical fields, and many more. The volume and characteristics of droplet formation generally depend on various material properties, microfluidics, and fluid mechanics considerations. Hence, a detailed investigation of this process, with the aid of numerical computational tools, is essential for future design optimization and process control of many engineering systems. This will also improve the understanding of changes in the properties and structures of materials during the formation of the droplet, which is important for new material developments to achieve different functions, depending on the requirements of the application. For example, the shape of the formed droplet is critical for the function of some final products, such as the welding nugget during the Capacitor Discharge Welding process, or PLA 3D printing, etc. Although most academic work on droplet formation has focused on issues of material transfer rate, surface tension and residual stresses, the characteristics of droplet shape have generally been overlooked. The proposed work for this project will examine theoretical methodologies, experimental techniques, and numerical modelling, using ANSYS FLUENT, to critically analyse and highlight optimization methods regarding the formation of the pendant droplet. The project will also compare results from published data with experimental and numerical work concerning the effects of key material parameters on the droplet shape. These effects include changes in heating/cooling rates, solidification/melting progression and separation/break-up times. From these tests, a set of objectives has been prepared, with the intention of improving quality, stability and productivity in modelling metal welding and 3D printing.
Keywords: computer modelling, droplet formation, material distortion, materials forming, welding
Procedia PDF Downloads 289
3002 Enhanced Production of Endo-β-1,4-Xylanase from a Newly Isolated Thermophile Geobacillus stearothermophilus KIBGE-IB29 for Prospective Industrial Applications
Authors: Zainab Bibi, Afsheen Aman, Shah Ali Ul Qader
Abstract:
Endo-β-1,4-xylanases [EC 3.2.1.8] are one of the major groups of enzymes involved in the degradation of xylan and have several applications in the food, textile and paper processing industries. Due to the broad utility of endo-β-1,4-xylanase, researchers are focusing on increasing the productivity of this hydrolase from various microbial species. Harsh industrial conditions, faster reaction rates and efficient hydrolysis of xylan with a low risk of contamination are critical industrial requirements that can be fulfilled by synthesizing the enzyme with efficient properties. In the current study, a newly isolated thermophile Geobacillus stearothermophilus KIBGE-IB29 was used in order to attain maximum production of endo-1,4-β-xylanase. The bacterial culture was isolated from soil collected around the blast furnace site of a steel processing mill, Karachi. Optimization of various nutritional and physical factors resulted in maximum synthesis of endo-1,4-β-xylanase from this thermophile. A high production yield was achieved at 60°C and pH 6.0 after 24 hours of incubation. Various nitrogen sources, viz. peptone, yeast extract and meat extract, improved enzyme synthesis, with optimum concentrations of 0.5%, 0.2% and 0.1%, respectively. Dipotassium hydrogen phosphate (0.25%), potassium dihydrogen phosphate (0.05%), ammonium sulfate (0.05%) and calcium chloride (0.01%) were identified as valuable salts for improving the production of the enzyme. The thermophilic nature of the isolate, with its broad pH stability profile and reduced fermentation time, indicates its importance for effective xylan saccharification and for large-scale production of endo-1,4-β-xylanase.
Keywords: geobacillus, optimization, production, xylanase
Procedia PDF Downloads 313
3001 Improving Patient-Care Services at an Oncology Center with a Flexible Adaptive Scheduling Procedure
Authors: P. Hooshangitabrizi, I. Contreras, N. Bhuiyan
Abstract:
This work presents an online scheduling problem which accommodates multiple requests of patients for chemotherapy treatments in a cancer center of a major metropolitan hospital in Canada. To solve the problem, an adaptive flexible approach is proposed which systematically combines two optimization models. The first model is intended to dynamically schedule arriving requests in the form of waiting lists, whereas the second model is used to reschedule already booked patients with the goal of finding better resource allocations when new information becomes available. Both models are created as mixed integer programming formulations. Various controllable and flexible parameters, such as deviating from the prescribed target dates by a pre-determined threshold, changing the start time of already booked appointments, and the maximum number of appointments to move in the schedule, are included in the proposed approach to provide sufficient degrees of flexibility in handling arrival requests and unexpected changes. Several computational experiments are conducted to evaluate the performance of the proposed approach using historical data provided by the oncology clinic. Our approach achieves substantially better results compared to those of the scheduling system used in practice. Moreover, several analyses are conducted to evaluate the effect of considering different levels of flexibility on the obtained results and to assess the performance of the proposed approach in dealing with last-minute changes. We strongly believe that the proposed flexible adaptive approach is well-suited for implementation at the clinic to provide better patient-care services and to utilize available resources more efficiently.
Keywords: chemotherapy scheduling, multi-appointment modeling, optimization of resources, satisfaction of patients, mixed integer programming
Procedia PDF Downloads 175
3000 Performance Evaluation of Packet Scheduling with Channel Conditioning Aware Based on Wimax Networks
Authors: Elmabruk Laias, Abdalla M. Hanashi, Mohammed Alnas
Abstract:
Scheduling in Worldwide Interoperability for Microwave Access (WiMAX) networks became one of the most challenging issues, since it is responsible for distributing the available network resources among all users; this led to the demand for constructing and designing highly efficient scheduling algorithms in order to improve network utilization, increase network throughput, and minimize end-to-end delay. In this study, the proposed algorithm focuses on an efficient mechanism to serve non-real-time traffic in congested networks by considering channel status.
Keywords: WiMAX, Quality of Service (QoS), OPNET, Diff-Serv (DS)
Procedia PDF Downloads 293
2999 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
This paper highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, by reskilling and upskilling employees and establishing robust data management training programs, plays an essential and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. By embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.
Keywords: master data management, IoT, AI&ML, cloud computing, data optimization
Procedia PDF Downloads 73
2998 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter
Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski
Abstract:
Today’s progress in rotorcraft is mostly associated with an optimization of aircraft performance achieved by active and passive modifications of main rotor assemblies and the tail propeller. The key task is to improve their performance and the hover quality factor of the rotors without changing specific fuel consumption. One of the tasks in improving the helicopter is an active optimization of the main rotor covering the flight stages, i.e., ascent, level flight, and descent. Active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. The efficiency of actuator systems modifying aerodynamic coefficients in current solutions is relatively high and significantly affects the increase in strength. The solution for actively changing aerodynamic characteristics assumes a periodic change of the geometric features of the blades depending on the flight stage. Changing the geometric parameters of blade warping enables an optimization of main rotor performance depending on helicopter flight stages. Structurally, an adaptation of shape memory alloys does not significantly affect rotor blade fatigue strength, which contributes to reducing the costs associated with adapting the system to existing blades, and the gains from better performance can easily amortize such a modification and improve the profitability of such a structure. In order to obtain quantitative and qualitative data to solve this research problem, a number of numerical analyses have been necessary. The main problem is the selection of the design parameters of the main rotor and a preliminary optimization of its performance to improve the hover quality factor of the rotor. This design concept assumes a three-bladed main rotor with a chord of 0.07 m and radius R = 1 m. The rotor speed is a calculated parameter of the optimization function. To specify the initial distribution of geometric warping, special software has been created that uses a numerical blade-element method which respects dynamic design features such as fluctuations of a blade in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of components and their mutual interaction resulting from the forces. The key elements of each rotor are the shaft, the hub, and the pins holding the joints and blade yokes. These components are exposed to the highest loads. As a result of the analysis, the safety factor was determined at the level of k > 1.5, which provides grounds for obtaining certification for the strength of the structure. The construction of the joint rotor has numerous moving elements in its structure. Despite the high safety factor, the places with the highest stresses, where signs of wear and tear may appear, have been indicated. The numerical analysis carried out showed that the most loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element result in a safety factor of k = 1.7. The other analysed rotor components have a safety factor of more than 2, and in the case of the shaft, this factor is more than 3. However, it must be remembered that the structure is only as strong as its weakest link.
The designed rotor for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials in their structure, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter
Procedia PDF Downloads 162
2997 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement for aircraft in an airport, allocating a path to each aircraft to follow in order to reach their destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide a conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase the accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
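The essential change, estimating remaining time rather than distance, can be shown in a compact time-based A*. The sketch below omits the time-window bookkeeping that QPPTW adds for conflict-free multi-aircraft routing; the taxiway graph, travel times, and heuristic estimates are invented but admissible.

```python
# Sketch: time-expanded A* where the heuristic estimates remaining TIME.
import heapq

def a_star_time(graph, h_time, start, goal, t0=0.0):
    """graph[u] -> [(v, travel_time)]; h_time[u] -> optimistic time to goal."""
    open_set = [(t0 + h_time[start], t0, start, [start])]
    best = {}
    while open_set:
        f, t, u, path = heapq.heappop(open_set)
        if u == goal:
            return t, path
        if best.get(u, float("inf")) <= t:
            continue                      # already reached u earlier
        best[u] = t
        for v, dt in graph[u]:
            heapq.heappush(open_set,
                           (t + dt + h_time[v], t + dt, v, path + [v]))
    return None

taxiways = {"gate": [("A", 30)], "A": [("B", 40), ("C", 80)],
            "B": [("runway", 60)], "C": [("runway", 10)], "runway": []}
h = {"gate": 80, "A": 50, "B": 55, "C": 10, "runway": 0}  # never overestimates
print(a_star_time(taxiways, h, "gate", "runway"))
# -> (120.0, ['gate', 'A', 'C', 'runway'])
```

In QPPTW proper, each edge also carries time windows already reserved by previously routed aircraft, so the expansion at each edge waits for, or skips past, occupied intervals.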
Procedia PDF Downloads 233
2996 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this turn-down, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that more faithfully represents the spatial correlation, by treating the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria threat in the area. On the other hand, the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks for both species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, waiting matrix
Procedia PDF Downloads 86
2995 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms
Authors: Naina Mahajan, Bikram Pal Kaur
Abstract:
The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but by suitable traffic engineering and management the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool. These techniques are applied to the NH (National Highway) dataset. The C4.5 and ID3 techniques give the best results and high accuracy, with less computation time and a lower error rate.
Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool
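As a stand-in for WEKA's ID3/C4.5 (Java-based), scikit-learn's entropy-criterion tree reproduces the information-gain splitting idea in Python. The accident features and severity labels below are invented for illustration, not the NH dataset.

```python
# Sketch: an information-gain decision tree on invented accident features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Columns: hour (0-23), weather (0=clear, 1=rain, 2=fog), speed limit (km/h)
X = np.column_stack([rng.integers(0, 24, 300),
                     rng.integers(0, 3, 300),
                     rng.choice([40, 60, 80, 100], 300)])
y = ((X[:, 1] > 0) & (X[:, 2] >= 80)).astype(int)   # toy severity label

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3)  # ID3-style splits
print(cross_val_score(tree, X, y, cv=10).mean())    # WEKA-like 10-fold CV
print(export_text(tree.fit(X, y),
                  feature_names=["hour", "weather", "speed_limit"]))
```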
Procedia PDF Downloads 341
2994 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influences of multiple factors such as concurrent multiple computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize computing offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep Q-network
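At the heart of such a DQN-based agent is a temporal-difference update against a target network. The sketch below shows that update in PyTorch with placeholder state/action sizes and random transitions; the network widths, reward shaping, and sizes are assumptions, not the paper's VAMECN model.

```python
# Sketch: the core DQN update for an offloading agent (placeholder sizes).
import torch
import torch.nn as nn

n_state, n_actions = 8, 4   # e.g. queue/bandwidth features; 4 offload targets

q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                           nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())   # periodic target sync
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.95

def td_update(s, a, r, s_next, done):
    """One temporal-difference step on a minibatch of transitions."""
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * q_next * (1 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Random transitions; reward is negative system cost (delay + energy).
batch = (torch.randn(32, n_state), torch.randint(0, n_actions, (32,)),
         -torch.rand(32), torch.randn(32, n_state), torch.zeros(32))
print(td_update(*batch))
```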
Procedia PDF Downloads 123
2993 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the world's total CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology based on nonlinear analysis has been robustly developed after the Northridge Earthquake in 1994 to assess and assure the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code based design approach cannot address inelastic earthquake responses directly or assure building performance exactly. Although CO2 emissions and the PBD approach are recent rising issues in the construction industry and structural engineering, few or no studies have considered these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach in the structural design stage, considering the structural materials. A 4-story and 4-span reinforced concrete building was optimally designed to minimize its CO2 emissions and cost and to satisfy a specific seismic performance objective (collapse prevention under the maximum considered earthquake) while satisfying prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design result showed that minimized CO2 emissions and cost were achieved while satisfying the specified seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design
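NSGA-II's distinguishing step is sorting candidate designs into non-dominated fronts over the two objectives. The sketch below implements that sort for invented (CO2, cost) pairs; crowding-distance selection and the genetic operators, as well as the structural analysis itself, are omitted.

```python
# Sketch: non-dominated sorting (the core of NSGA-II) on hypothetical
# (CO2, cost) values for candidate RC frame designs. Both are minimized.
def dominates(p, q):
    """p dominates q if it is no worse in both objectives and better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_fronts(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# (CO2 in tonnes, cost in $1000) for candidate designs (invented):
designs = [(120, 900), (110, 980), (150, 760), (100, 1200),
           (130, 820), (125, 905), (160, 750)]
for rank, front in enumerate(non_dominated_fronts(designs), start=1):
    print(f"front {rank}:", [designs[i] for i in front])
```

The first front printed is the Pareto set of emission/cost trade-offs; NSGA-II breeds the next generation preferentially from the lower-ranked fronts.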
Procedia PDF Downloads 410