Search results for: optimization algorithms
1010 Assimilating Remote Sensing Data into Crop Models: A Global Systematic Review
Authors: Luleka Dlamini, Olivier Crespo, Jos van Dam
Abstract:
Accurately estimating crop growth and yield is pivotal for timely, sustainable agricultural management and for ensuring food security. Crop models and remote sensing (RS) can complement each other and, when combined, form a robust analysis tool to improve crop growth and yield estimation. This study thus aims to systematically evaluate how research that exclusively focuses on assimilating RS data into crop models varies among countries, crops, data assimilation methods, and farming conditions. A strict search string was applied in the Scopus and Web of Science databases, and 497 potential publications were obtained. After screening for relevance with predefined inclusion/exclusion criteria, 123 publications were considered in the final review. Results indicate that over 81% of the studies were conducted in countries associated with high socio-economic and technological advancement, mainly China, the United States of America, France, Germany, and Italy. Many of these studies integrated MODIS or Landsat data into WOFOST to improve crop growth and yield estimation of staple crops at the field and regional scales. Most studies use recalibration or updating methods alongside various algorithms to assimilate remotely sensed leaf area index into crop models. However, these methods cannot account for the uncertainties in the remote sensing observations and in the crop model itself. Over 85% of the studies were based on commercial and irrigated farming systems. Despite great global interest in data assimilation into crop models, limited research has been conducted in resource- and data-limited regions like Africa. We foresee great potential for such applications under those conditions and hence advocate facilitating and expanding the use of this approach, from which developing farming communities could benefit.
Keywords: crop models, remote sensing, data assimilation, crop yield estimation
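The "updating" assimilation methods discussed above can be illustrated with a minimal ensemble Kalman filter step that nudges a crop model's leaf area index (LAI) state toward a remotely sensed LAI observation. This is a hedged sketch: the scalar state, ensemble size, and all numbers are invented for illustration and are not taken from any reviewed study.

```python
# Minimal ensemble Kalman filter update of a modeled LAI state using a
# remotely sensed LAI observation (all values invented for illustration).
import numpy as np

rng = np.random.default_rng(1)
ensemble = rng.normal(3.0, 0.4, size=50)   # crop-model LAI ensemble (m2/m2)
obs, obs_err = 3.6, 0.3                    # remotely sensed LAI and its std

P = ensemble.var(ddof=1)                   # forecast error variance
K = P / (P + obs_err ** 2)                 # Kalman gain for a scalar state
perturbed_obs = obs + rng.normal(0.0, obs_err, size=ensemble.size)
analysis = ensemble + K * (perturbed_obs - ensemble)   # updated ensemble

print(f"prior mean {ensemble.mean():.2f} -> analysis mean {analysis.mean():.2f}")
```

Unlike simple recalibration, an update of this kind weighs the model spread against the observation error, which is precisely the uncertainty accounting the review finds missing in most studies.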
Procedia PDF Downloads 84
1009 Study on Heat Transfer Capacity Limits of Heat Pipe with Working Fluids Ammonia and Water
Authors: M. Heydari, A. Ghanami
Abstract:
A heat pipe is a simple heat transfer device that combines conduction and phase-change phenomena to transfer heat without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to the vapor phase. The vapor flows to the condenser region and, as heat is rejected, condenses back to the liquid phase. Due to gravitational force, the liquid then flows back to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce humidity in HVAC systems, and any HVAC system that uses a heater, humidifier, or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region, and evaporator. Investigating and optimizing the operation of heat pipes in order to increase their efficiency is crucial. In the present article, a parametric study is performed to improve heat pipe performance; therefore, the heat transfer capacity of the heat pipe with respect to geometrical and confining parameters is investigated. For better observation of heat pipe operation in HVAC systems, a CFD simulation using a Eulerian-Eulerian multiphase approach is also performed. The results show that the heat transfer capacity is higher with water as the working fluid at an operating temperature of 340 K. It is also shown that a vertical orientation enhances the heat pipe's heat transfer capacity.
Keywords: heat pipe, HVAC system, grooved heat pipe, heat pipe limits
Procedia PDF Downloads 404
1008 Mining Riding Patterns in Bike-Sharing System Connecting with Public Transportation
Authors: Chong Zhang, Guoming Tang, Bin Ge, Jiuyang Tang
Abstract:
With fast-growing road traffic and increasingly severe traffic congestion, more and more citizens choose public transportation for daily travel. Meanwhile, shared bikes provide a convenient option for the first and last mile to public transit. As of 2016, over one thousand cities around the world had deployed bike-sharing systems. The combination of these two transport modes has stimulated the development of each and made a significant contribution to the reduction of the carbon footprint. A lot of work has been done on mining riding behaviors in various bike-sharing systems. Most of it, however, treated the bike-sharing system as an isolated system, so the results provide little reference for public transit construction and optimization. In this work, we treat bike-sharing and public transit as a whole and investigate customers' bike-and-ride behaviors. Specifically, we develop a spatio-temporal traffic delivery model to study the riding patterns between the two transportation systems and explore the traffic characteristics (e.g., distributions of customer arrivals/departures and traffic peak hours) along the time and space dimensions. During model construction and evaluation, we make use of large open datasets from real-world bike-sharing systems (CitiBike in New York, GoBike in San Francisco, and BIXI in Montreal), along with the corresponding public transit information. The developed two-dimensional traffic model, as well as the mined bike-and-ride behaviors, can provide great help in the deployment of next-generation intelligent transportation systems.
Keywords: riding pattern mining, bike-sharing system, public transportation, bike-and-ride behavior
Procedia PDF Downloads 790
1007 Static Application Security Testing Approach for Non-Standard Smart Contracts
Authors: Antonio Horta, Renato Marinho, Raimir Holanda
Abstract:
Considered an evolution of the blockchain, the Ethereum platform, besides allowing transactions of its cryptocurrency Ether, allows the programming of decentralised applications (DApps) and smart contracts. However, this functionality has raised new types of threats, and the exploitation of smart contract vulnerabilities has caused companies to experience heavy losses. This research intends to determine the number of contracts that are at risk of being drained. Through a deep investigation, more than two hundred thousand smart contracts currently available on the Ethereum platform were scanned to estimate how much money is at risk. The experiment was based on a query run on Google BigQuery in July 2022, which returned 50,707,133 contracts published on the Ethereum platform. After applying the filtering criteria, the experiment yielded 430,584 smart contracts to download and analyse. The filtering criteria consisted of excluding ERC20 and ERC721 contracts, contracts without transactions, and contracts without balance. Of the 430,584 smart contracts selected, only 268,103 had source code published on Etherscan; however, using a hashing process, we discovered duplicated contracts. Removing the duplicates, the process ended up with 20,417 source codes, which were analysed using the open-source SAST tool SmartBugs with the Oyente and Securify algorithms. In the end, nearly $100,000 was at risk of being drained from the potentially vulnerable smart contracts. It is important to note that the tools used in this study may generate false positives, which may interfere with the number of vulnerable contracts. To address this point, our next step in this research is to develop an application that tests each contract in a parallel environment to verify the vulnerability. Finally, this study aims to alert users and companies to the risk of not properly creating and analysing their smart contracts before publishing them on the platform. Like any other application, smart contracts are at risk of having vulnerabilities which, in this case, may result in direct financial losses.
Keywords: blockchain, reentrancy, static application security testing, smart contracts
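The hashing-based deduplication step lends itself to a short sketch: normalize each downloaded source file, hash it, and keep one representative per digest. The directory layout and the whitespace-stripping normalization below are illustrative assumptions, not the authors' exact procedure.

```python
# Deduplicate downloaded contract sources by hashing normalized text.
import hashlib
from pathlib import Path

def source_hash(code: str) -> str:
    normalized = "\n".join(line.strip() for line in code.splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

unique: dict[str, Path] = {}
for path in Path("contracts").glob("*.sol"):      # assumed download folder
    digest = source_hash(path.read_text(errors="ignore"))
    unique.setdefault(digest, path)               # keep first copy per digest

print(f"{len(unique)} unique contract sources kept for SAST analysis")
```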
Procedia PDF Downloads 89
1006 Unsupervised Classification of DNA Barcode Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, a task that has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using multiple sequence alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and the matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. Our method avoids the complex problem of form and structure in different classes of organisms; it is evaluated on empirical data, and its classification performance is compared with that of other methods. Our system consists of three phases. The first, called transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, a Fourier transform, and power spectrum signal processing. The second, called approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
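The transformation phase can be sketched in a few lines: EIIP codification of the sequence followed by an FFT power spectrum. The EIIP values below are the ones commonly used in the literature, and the barcode string is a toy fragment rather than real data.

```python
# EIIP codification of a DNA barcode followed by an FFT power spectrum.
import numpy as np

EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}  # literature values

def power_spectrum(seq: str) -> np.ndarray:
    signal = np.array([EIIP[base] for base in seq.upper()])
    signal -= signal.mean()                   # remove the DC component
    return np.abs(np.fft.rfft(signal)) ** 2   # power spectrum feature vector

barcode = "ACTGGTCAACAAATCATAAAGATATTGG"      # toy barcode-like fragment
print(power_spectrum(barcode).round(4))
```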
Procedia PDF Downloads 322
1005 Study of the Complexes (CO)3Ti(PHND) and CpV(PHND) (PHND = Phenanthridine)
Authors: Akila Tayeb-Benmachiche, Saber-Mustapha Zendaoui, Salah-Eddine Bouaoud, Bachir Zouchoune
Abstract:
The variation of the metal coordination site in π-coordinated polycyclic aromatic hydrocarbons (PAHs) corresponds to a haptotropic rearrangement or haptotropic migration, in which the metal fragment MLn is considered the movable moiety that shifts between two rings of polycyclic or heteropolycyclic ligands. These structural characteristics and dynamical properties give this category of transition metal complexes considerable interest. We have investigated the coordination and the haptotropic shifts of the (CO)3Ti and CpV moieties over the phenanthridine aromatic system according to the nature of the metal atom. The optimization of (CO)3Ti(PHND) and CpV(PHND), using the Amsterdam Density Functional (ADF) program without symmetry restrictions on the geometry, gives an η6 coordination mode of the C6 and C5N rings, which in turn gives rise to six low-lying electron-deficient 16-MVE structures for each of (CO)3Ti(PHND) and CpV(PHND) (three singlet and three triplet state structures for the Ti complexes, and three triplet and three quintet state structures for the V complexes). The η6-η6 haptotropic migration of the metal fragment MLn from the terminal C6 ring to the central C5N ring is achieved with a loss of energy, whereas its η6-η6 haptotropic migration from the central C5N ring to the terminal C6 rings is accomplished with a gain of energy. These results show the capability of the phenanthridine ligand to adapt itself to the electronic demand of the metal, in agreement with the nature of the metal-ligand bonding, and demonstrate that this theoretical study can also be applied to large fused π-systems.
Keywords: electronic structure, bonding analysis, density functional theory, coordination chemistry, haptotropic migration
Procedia PDF Downloads 306
1004 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention and has become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not yet been well solved; furthermore, this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Because image-capture conditions vary, the algorithms need to consider a variety of possible environmental factors, as colour information, lighting, and shadows differ from image to image. Existing methods mostly fail to produce appropriate results due to variations in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods that reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set, acquired under various environmental conditions, for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
Keywords: image processing, illumination equalization, shadow filtering, object detection
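A rough sketch of the segmentation chain described above (YCrCb conversion, erosion/dilation, contour extraction) is given below using OpenCV. Otsu thresholding on the Cr channel is our stand-in for the paper's parameter-free segmentation, which the abstract does not fully specify, and the input file name is assumed.

```python
# YCrCb-based segmentation with morphological cleanup and contours.
import cv2
import numpy as np

img = cv2.imread("wood_stack.jpg")                # assumed input image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
cr = ycrcb[:, :, 1]                               # chrominance (Cr) channel

_, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(mask, kernel)                    # suppress shadow speckle
mask = cv2.dilate(mask, kernel)                   # restore the object body

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
objects = [c for c in contours if cv2.contourArea(c) > 500]
print(f"{len(objects)} candidate object regions")
```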
Procedia PDF Downloads 218
1003 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling
Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo
Abstract:
Hydrological modelling plays a crucial role in the planning and management of water resources, especially in water-stressed regions where the need to manage the available water resources effectively is of critical importance. However, due to the complex, nonlinear, and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters is extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered one that has progressed sluggishly over the past decades. This is mainly a result of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs, and applications of a promising evolutionary computation modelling technique: genetic programming (GP). It examines the specific characteristics of the technique that make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling, and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case is made for the full embrace of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.
Keywords: computational modelling, evolutionary algorithms, genetic programming, hydrological modelling
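To make the GP mechanics concrete, here is a self-contained toy sketch of symbolic regression for a rainfall-runoff relationship: random expression trees are scored by RMSE and evolved by elitism plus mutation. The data, operator set, and parameters are invented and far simpler than what a real study (or a library such as DEAP) would use.

```python
# Toy genetic programming: evolve an expression tree mapping rainfall x
# to runoff y (all data and GP settings invented for illustration).
import math
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:          # grow a leaf
        return random.choice(["x", round(random.uniform(-1, 1), 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, data):                             # root-mean-square error
    return math.sqrt(sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data))

def mutate(tree):
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(2)                        # replace a subtree
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

data = [(x, 0.6 * x - 0.1) for x in range(10)]       # invented rainfall-runoff pairs
pop = [random_tree() for _ in range(50)]
for _ in range(30):                                  # elitism + mutation loop
    pop.sort(key=lambda t: fitness(t, data))
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
print(pop[0], round(fitness(pop[0], data), 4))
```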
Procedia PDF Downloads 303
1002 Artificial Intelligence in the Design of a Retaining Structure
Authors: Kelvin Lo
Abstract:
Nowadays, numerical modelling in geotechnical engineering is very common but sophisticated. Many advanced input settings and considerable computational effort are required to optimize a design and reduce the construction cost. Optimizing a design usually requires huge numerical models, and if the optimization is conducted manually, there is a potentially dangerous consequence of human error, while the time spent on input and on data extraction from output is significant. This paper presents an automation process applied to the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code is adopted to control the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code continuously changes the geological stratum and excavation depth under groundwater flow conditions in each 20 m section. It automatically conducts trial and error to determine the required pile length and the use of props to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased. When the pile embedment reaches the default maximum length, the prop system is turned on. Results showed that the process saves time, increases efficiency, lowers design costs, and replaces manual labor, minimizing errors.
Keywords: automation, numerical modelling, Python, retaining structures
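The trial-and-error logic reads naturally as a control loop, sketched below. Everything here is an assumption for illustration: run_model and results_of are hypothetical stand-ins (with toy stub behavior) for calls into the Plaxis 2D scripting API, and the targets, pile sizes, and response formulas are invented.

```python
FOS_TARGET, DISP_LIMIT = 1.4, 0.025        # assumed design targets (-, m)
SIZES = (0.6, 0.9, 1.2)                    # assumed pile diameters (m)

def run_model(chainage, length, size, prop):
    pass                                   # placeholder for Plaxis 2D API calls

def results_of(chainage, length, size, prop):
    # Toy responses standing in for Plaxis output at this chainage.
    fos = 0.9 + 0.03 * length + (0.3 if prop else 0.0)
    disp = max(0.06 - 0.002 * length - (0.02 if prop else 0.0), 0.001)
    moment, capacity = 800.0 * size, 1000.0 * size ** 2
    return fos, disp, moment, capacity

def design_section(chainage, length=12.0, max_length=24.0):
    size_idx, prop = 0, False
    while True:
        run_model(chainage, length, SIZES[size_idx], prop)
        fos, disp, moment, capacity = results_of(chainage, length, SIZES[size_idx], prop)
        if moment > capacity and size_idx < len(SIZES) - 1:
            size_idx += 1                  # bending capacity exceeded: bigger pile
        elif fos >= FOS_TARGET and disp <= DISP_LIMIT:
            return length, SIZES[size_idx], prop
        elif length < max_length:
            length += 1.0                  # lengthen the pile embedment first
        elif not prop:
            prop = True                    # embedment maxed out: turn on props
        else:
            raise RuntimeError(f"no feasible design at chainage {chainage} m")

for chainage in range(0, 201, 20):         # every 20 m along the 200 m tunnel
    print(chainage, design_section(chainage))
```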
Procedia PDF Downloads 59
1001 Optimal Wind Based DG Placement Considering Monthly Changes Modeling in Wind Speed
Authors: Belal Mohamadi Kalesar, Raouf Hasanpour
Abstract:
Proper placement of distributed generation (DG) units, such as wind turbine generators, in a distribution system is still a very challenging issue for obtaining their maximum potential benefits, because inappropriate placement may increase system losses. This paper proposes a Particle Swarm Optimization (PSO) technique for the optimal placement of wind-based DG (WDG) in the primary distribution system to reduce energy losses and improve the voltage profile, with four different wind levels modeled over the duration of a year. The wind turbine is modeled as a DFIG operated at unity power factor, and only one wind turbine tower is considered for installation at each bus of the network. Finally, the proposed method is implemented on the widely used 69-bus power distribution system in the MATLAB software environment under four scenarios (without, one, two, and three WDG units). To test the capability of the implemented program, it is assumed that all buses of the standard system can be candidates for WDG installation (a large search space), although the program can also consider a predetermined number of candidate locations in WDG placement to model the financial limitations of a project. The obtained results illustrate that increasing wind speed in some months increases the generated output power, but this can increase or decrease the power loss at some wind levels. The results also show that about 3 MW of WDG capacity needs to be installed across different buses, and when this capacity is distributed over the whole network (a larger number of WDG units), it yields a better solution from the point of view of power loss and voltage profile.
Keywords: wind turbine, DG placement, wind levels effect, PSO algorithm
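A minimal PSO sketch of the placement search is given below: each particle encodes a candidate bus, and the fitness is an invented smooth surrogate standing in for the load-flow loss calculation that the paper runs on the 69-bus system in MATLAB.

```python
# Toy PSO over candidate buses; loss_kw is an invented surrogate fitness.
import math
import random

N_BUS = 69

def loss_kw(bus):                          # stand-in for a load-flow loss run
    return 225.0 - 180.0 * math.exp(-((bus - 61) ** 2) / 200.0)

particles = [random.uniform(1, N_BUS) for _ in range(20)]
velocity = [0.0] * 20
pbest = particles[:]
gbest = min(pbest, key=lambda b: loss_kw(round(b)))

for _ in range(100):
    for i, x in enumerate(particles):
        r1, r2 = random.random(), random.random()
        velocity[i] = 0.7 * velocity[i] + 1.5 * r1 * (pbest[i] - x) + 1.5 * r2 * (gbest - x)
        particles[i] = min(max(x + velocity[i], 1), N_BUS)   # clamp to valid buses
        if loss_kw(round(particles[i])) < loss_kw(round(pbest[i])):
            pbest[i] = particles[i]
    gbest = min(pbest, key=lambda b: loss_kw(round(b)))

print("best WDG bus:", round(gbest), "=> loss", round(loss_kw(round(gbest)), 1), "kW")
```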
Procedia PDF Downloads 451
1000 Self-Efficacy in Online Vocal Learning: Current Situation, Influencing Factors and Optimization Strategies
Authors: Tianyou Wang
Abstract:
Students' own intrinsic motivation is the main source of energy for learning activities, and their self-efficacy is a key factor affecting learning outcomes. In today's increasingly common online vocal music teaching, virtualized teaching scenarios have had a considerable impact on students' personal efficacy. Since personal efficacy is the result of the interaction between environmental factors and subject characteristics, an empirical study was conducted to investigate the changes in students' self-efficacy, its influencing factors, and its characteristics in online vocal teaching scenarios along the three dimensions of teachers, students, and technology. One hundred valid questionnaires were studied through a quantitative survey. The results showed that students' personal efficacy was significantly lower in online learning environments than in offline vocal teaching and showed significant differences by factors such as gender and class type; students' self-efficacy in online vocal teaching was significantly affected by factors such as the technological environment, teaching style, and information technology ability. Based on the results of the study, it is recommended to emphasize inquiry and practice in teaching design, organize teaching around singing projects, orient the learning process toward problem-solving, push applicable vocal music teaching resources in a timely manner, lead students to explore and refine problems, and encourage students to learn independently according to their goals and plans.
Keywords: vocal pedagogy, self-efficacy, online learning, intrinsic motivation, information technology
Procedia PDF Downloads 60
999 Elaboration and Investigation of the New Ecologically Clean Friction Composite Materials on the Basis of Nanoporous Raw Materials
Authors: Lia Gventsadze, Elguja Kutelia, David Gventsadze
Abstract:
The purpose of this article is to show the possibility of developing a new generation of eco-friendly (asbestos-free) nano-porous friction materials on the basis of Georgian raw materials, to determine the technological parameters for their production, to optimize their tribological properties, and to investigate the structural aspects of the wear peculiarities of the elaborated materials using scanning electron microscopy (SEM) and Auger electron spectroscopy (AES). The study investigated the tribological properties of polymer friction materials based on phenol-formaldehyde resin with a porous diatomite filler modified by silane, with the aim of improving thermal stability, while the composition was modified with iron phosphate, technical carbon, and basalt fibre. As a result of testing, stable friction coefficient values (0.3-0.45) were reached in both dry and wet friction conditions; the friction working parameters (friction coefficient and wear resistance) remained stable up to temperatures of 500 °C; the wear resistance of the grey cast-iron disk increased 3-4 times; and soundless operation of the materials, without squeaking, was achieved. It was thereby proved that a small number of ingredients (5-6) is enough to compose nano-porous friction materials. The study explains the mechanism of action of nano-porous composition-based brake lining materials and their tribological efficiency on the basis of the triple-phase model of the tribo-pair.
Keywords: brake lining, friction coefficient, wear, nanoporous composite, phenolic resin
Procedia PDF Downloads 394
998 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and considerably delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals of different origins and characteristics to test the model's ability to generalize its outcomes. The model's performance in detecting R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
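The abstract names the architecture (IncResU-Net) without giving details, so the block below is only a generic inception-residual 1-D convolution block of the kind that name suggests, wired into a toy per-sample R-peak classifier; every filter count, kernel size, and window length is an assumption.

```python
# Generic inception-residual 1-D block in Keras (architecture assumed).
import tensorflow as tf
from tensorflow.keras import layers

def inc_res_block_1d(x, filters=32):
    branches = [layers.Conv1D(filters, k, padding="same", activation="relu")(x)
                for k in (1, 3, 5)]                      # parallel kernel sizes
    merged = layers.Conv1D(filters, 1, padding="same")(layers.Concatenate()(branches))
    shortcut = layers.Conv1D(filters, 1, padding="same")(x)  # match channel counts
    out = layers.Add()([shortcut, merged])               # residual connection
    return layers.Activation("relu")(layers.BatchNormalization()(out))

inp = layers.Input(shape=(1024, 1))                      # 1024-sample ECG window
out = layers.Conv1D(1, 1, activation="sigmoid")(         # per-sample R-peak probability
    inc_res_block_1d(inc_res_block_1d(inp)))
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```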
Procedia PDF Downloads 193
997 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach
Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip
Abstract:
The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students' data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information to measure relationships between student factors. We then develop a factor network using the minimum spanning tree method and consider further analysis of the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select appropriate subsets of features, yielding effective results in identifying the factors that put students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by finding possible under-performers at the beginning of the first semester and paying more attention to them in order to support their learning process and improve their learning outcomes.
Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method
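A minimal sketch of the correlation-distance factor network described above, assuming the survey responses arrive as numeric columns of a DataFrame; the column names and data are invented, and d = sqrt(2(1 - r)) is one common correlation-based distance.

```python
# Build a factor network from correlation distances and extract its MST.
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)),
                  columns=["study_hours", "attendance", "math_background", "gpa"])

corr = df.corr()                                 # Pearson correlation matrix
dist = np.sqrt(2 * (1 - corr))                   # correlation-based distance

G = nx.Graph()
for i in corr.columns:
    for j in corr.columns:
        if i < j:
            G.add_edge(i, j, weight=float(dist.loc[i, j]))

mst = nx.minimum_spanning_tree(G)                # backbone of the factor network
print(sorted(mst.edges(data="weight")))
```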
Procedia PDF Downloads 135
996 Challenges and Insights by Electrical Characterization of Large Area Graphene Layers
Authors: Marcus Klein, Martina GrießBach, Richard Kupke
Abstract:
Current advances in the research and manufacturing of large-area graphene layers are promising for the introduction of this exciting material into the display industry and other applications that benefit from its excellent electrical and optical characteristics. New production technologies in the fabrication of flexible displays, touch screens, and printed electronics apply graphene layers to non-metal substrates and bring new challenges to the required metrology. Traditional measurement concepts for layer thickness, sheet resistance, and layer uniformity are difficult to apply to graphene production processes and are often harmful to the product layer. New non-contact sensor concepts are required to meet these challenges and the foreseeable inline production of large-area graphene. Dedicated non-contact measurement sensors are a pioneering method for addressing these issues in a large variety of applications while significantly lowering the costs of development and process setup. Transferred and printed graphene layers can be characterized with high accuracy over a huge measurement range at very high resolution. Large-area graphene mappings are applied for process optimization and for efficient quality control of transfer, doping, annealing, and stacking processes. Examples of doped, defective, and excellent graphene are presented as quality images, and the implications for manufacturers are explained.
Keywords: graphene, doping and defect testing, non-contact sheet resistance measurement, inline metrology
Procedia PDF Downloads 310
995 Optimizing Agricultural Packaging in Fiji: Strategic Barrier Analysis Using Interpretive Structural Modeling and Cross-Impact Matrix Multiplication Applied to Classification
Authors: R. Ananthanarayanan, S. B. Nakula, D. R. Seenivasagam, J. Naua, B. Sharma
Abstract:
Product packaging is a critical component of production, trade, and marketing, playing numerous vital roles that often go unnoticed by consumers. Packaging is essential for maintaining the shelf life, quality assurance, and safety of both manufactured and agricultural products. For example, harvested produce or processed foods can quickly lose quality and freshness, making secure packaging crucial for preservation and safety throughout the food supply chain. In Fiji, agricultural packaging has primarily been managed by local companies for international trade, with gradual advancements in these practices. To further enhance the industry's performance, this study examines the challenges and constraints hindering the optimization of agricultural packaging practices in Fiji. The study utilizes Multi-Criteria Decision Making (MCDM) tools, specifically Interpretive Structural Modeling (ISM) and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC). ISM analyzes the hierarchical structure of barriers, categorizing them from the least to the most influential, while MICMAC classifies barriers based on their driving and dependence power. This approach helps identify the interrelationships between barriers, providing valuable insights for policymakers and decision-makers to propose innovative solutions for sustainable development in the agricultural packaging sector, ultimately shaping the future of packaging practices in Fiji.
Keywords: agricultural packaging, barriers, ISM, MICMAC
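The ISM/MICMAC mechanics can be sketched in a few lines: a binary influence matrix is closed transitively (Warshall's algorithm) into a reachability matrix, whose row sums give each barrier's driving power and whose column sums give its dependence power. The 4-barrier matrix below is invented for illustration.

```python
# Reachability matrix and MICMAC driving/dependence powers from a toy
# binary self-interaction matrix (barrier influences are invented).
import numpy as np

A = np.array([[1, 1, 0, 0],       # barrier 1 influences barrier 2
              [0, 1, 1, 0],       # barrier 2 influences barrier 3
              [0, 0, 1, 1],       # barrier 3 influences barrier 4
              [0, 0, 0, 1]], dtype=bool)

R = A.copy()                      # transitive closure (Warshall's algorithm)
for k in range(len(R)):
    R |= np.outer(R[:, k], R[k, :])

driving = R.sum(axis=1)           # MICMAC driving power (row sums)
dependence = R.sum(axis=0)        # MICMAC dependence power (column sums)
for i, (dr, de) in enumerate(zip(driving, dependence), start=1):
    print(f"barrier {i}: driving={dr}, dependence={de}")
```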
Procedia PDF Downloads 37
994 Big Data in Construction Project Management: The Colombian Northeast Case
Authors: Sergio Zabala-Vargas, Miguel Jiménez-Barrera, Luz Vargas-Sánchez
Abstract:
In recent years, information related to project management in organizations has been increasing exponentially. Performance data, management statistics, and indicator results have made collection, analysis, traceability, and dissemination essential tasks for project managers. In this sense, there are current trends toward facilitating efficient decision-making through emerging technologies, such as machine learning, data analytics, data mining, and Big Data. The latter is the most relevant in this project. This research is part of the thematic line of construction methods and project management. Many authors note the relevance that the use of emerging technologies, such as Big Data, has gained in recent years in project management in the construction sector. The main focus is the optimization of time, scope, and budget, and, in general, the mitigation of risks. This research was developed in the northeastern region of Colombia, South America. The first phase was aimed at diagnosing the use of emerging technologies (Big Data) in the construction sector. In Colombia, the construction sector represents more than 50% of the productive system, and more than 2 million people participate in this economic segment. A quantitative approach was used: a survey was applied to a sample of 91 companies in the construction sector. Preliminary results indicate that the use of Big Data and other emerging technologies is very low, and also that there is interest in modernizing project management. There is evidence of a correlation between interest in using new data management technologies and the incorporation of Building Information Modeling (BIM). The next phase of the research will allow the generation of guidelines and strategies for the incorporation of technological tools in the construction sector in Colombia.
Keywords: big data, building information modeling, technology, project management
Procedia PDF Downloads 133
993 The Effects of Logistical Centers Realization on Society and Economy
Authors: Anna Dolinayova, Juraj Camaj, Martin Loch
Abstract:
Presently, it is necessary to ensure the sustainable development of passenger and freight transport. The increasing performance of road freight has had a negative impact on the environment and society, so it is necessary to increase the competitiveness of intermodal transport, which is more environmentally friendly. This study describes the effectiveness of realizing logistics centers for companies and society and investigates how the partial internalization of external costs is reflected in the efficient use of these centers and increases the competitiveness of intermodal transport relative to road freight. In our research, we use comparative analysis and market research to describe the advantages of logistics centers for their users as well as for society as a whole. Normal costing is used to calculate infrastructure and total costs, and conversion costing is used to determine the external costs. We model the total societal costs of road freight transport and of an intermodal transport chain (assuming that most of the traffic is carried by rail) with different loading schemes for the conditions of the Slovak Republic. Our research has shown that higher utilization of the intermodal transport chain benefits not only society but also the companies providing freight services. Increased use of the intermodal transport chain can bring society many benefits that do not yield a direct, immediate financial return; they often bring multiplier effects, such as greater use of environmentally friendly transport modes and a reduction in total societal costs.
Keywords: delivery time, economy effectiveness, logistical centers, ecological efficiency, optimization, society
Procedia PDF Downloads 451
992 Sheathed Cotton Fibers: Material for Oil-Spill Cleanup
Authors: Benjamin M Dauda, Esther Ibrahim, Sylvester Gadimoh, Asabe Mustapha, Jiyah Mohammed
Abstract:
Despite diverse optimization techniques applied to natural hydrophilic fibers, hydrophobic synthetic fibers are still the best oil-sorption materials. However, these hydrophobic fibers are not biodegradable, making their disposal problematic. To this end, this work sets out to develop nonwoven sorbents from epoxy-coated cotton fibers. To improve compatibility with crude oil and reduce moisture absorption, cotton fibers were coated with epoxy resin by immersion in an acetone-thinned epoxy solution. A needle-punching machine was used to convert the fibers into coherent nonwoven sheets, and an oil-sorption experiment was then carried out. The results indicate that the developed epoxy-modified sorbent has a higher crude-oil sorption capacity than untreated cotton and commercial polypropylene sorbents. The absorption curves show that the coated-fiber and polypropylene sorbents saturated faster than the uncoated cotton-fiber pad. The results also show that the coated cotton sorbent adsorbed crude oil faster than the polypropylene sorbent, and its equilibrium exhaustion was higher as well. After a simple mechanical squeezing process, the nonwoven pads could be restored to their original form and repeatedly recycled for oil/water separation. These results indicate that the coated-cotton nonwoven pads hold promise for the cleanup of oil spills, and our data suggest that the sorption behavior of the epoxy-coated nonwoven pads and their crude-oil sorption capacity are relatively stable under various environmental conditions compared to the commercial sheet.
Keywords: oil spill, adsorption, cotton, epoxy, nonwoven
Procedia PDF Downloads 59
991 Optimizing Microwave Assisted Extraction of Anti-Diabetic Plant Tinospora cordifolia Used in Ayush System for Estimation of Berberine Using Taguchi L-9 Orthogonal Design
Authors: Saurabh Satija, Munish Garg
Abstract:
The present work reports an efficient extraction method using a microwave-based solvent-sample duo-heating mechanism for the extraction of berberine from Tinospora cordifolia, an important anti-diabetic plant in the AYUSH system. The process is based on the simultaneous heating of the sample matrix and the extracting solvent under microwave energy. Methanol was used as the extracting solvent: it has excellent berberine-solubilizing power and warms up under microwaves owing to its high dissipation factor. Extraction conditions such as irradiation time, microwave power, solute-solvent ratio, and temperature were optimized using a Taguchi design, and berberine was quantified using high-performance thin-layer chromatography. The optimized parameters, in rank order, were microwave power (rank 1), irradiation time (rank 2), and temperature (rank 3). This kind of extraction mechanism under dual heating provides a choice of extraction parameters for better precision and higher yield, with a significant reduction in extraction time under optimum conditions. The developed extraction protocol will make it possible to extract larger amounts of berberine, the major anti-diabetic moiety in Tinospora cordifolia, which can lead to the development of cheaper formulations of the plant and help in the rapid prevention of diabetes worldwide.
Keywords: berberine, microwave, optimization, Taguchi
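The L9 analysis itself is compact enough to sketch. The standard L9(3^4) orthogonal array is fixed; the yield values below are invented placeholders for measured berberine content, and the larger-the-better signal-to-noise ratio is computed for single replicates.

```python
# Taguchi L9(3^4) analysis with larger-the-better S/N ratios.
import numpy as np

L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])  # factor levels per run
factors = ["power", "time", "ratio", "temperature"]
yields = np.array([2.1, 2.6, 2.4, 3.0, 3.4, 2.9, 3.1, 3.3, 2.8])  # invented mg/g

sn = 10 * np.log10(yields ** 2)   # larger-the-better S/N for single replicates

for f, name in enumerate(factors):
    means = [sn[L9[:, f] == level].mean() for level in (1, 2, 3)]
    print(f"{name}: level means {np.round(means, 2)}, range {max(means) - min(means):.2f}")
# The factor with the widest range of level means dominates (rank 1).
```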
Procedia PDF Downloads 351
990 Numerical Modal Analysis of a Multi-Material 3D-Printed Composite Bushing and Its Application
Authors: Paweł Żur, Alicja Żur, Andrzej Baier
Abstract:
Modal analysis is a crucial tool in the field of engineering for understanding the dynamic behavior of structures. In this study, a numerical modal analysis was conducted on a multi-material 3D-printed composite bushing comprising a polylactic acid (PLA) outer shell and a thermoplastic polyurethane (TPU) flexible filling. The objective was to investigate the modal characteristics of the bushing and assess its potential for practical applications. The analysis involved the development of a finite element model of the bushing, which was subsequently subjected to modal analysis techniques. Natural frequencies, mode shapes, and damping ratios were determined to identify the dominant vibration modes and their corresponding responses. The numerical modal analysis provided valuable insights into the dynamic behavior of the bushing, enabling a comprehensive understanding of its structural integrity and performance. Furthermore, the study expanded its scope by investigating the entire shaft mounting of a small electric car incorporating the 3D-printed composite bushing. The shaft mounting system was subjected to numerical modal analysis to evaluate its dynamic characteristics and potential vibrational issues. The results of the modal analysis highlighted the effectiveness of the 3D-printed composite bushing in minimizing vibrations and optimizing the performance of the shaft mounting system. The findings contribute to the broader field of composite material applications in automotive engineering and provide valuable insights for the design and optimization of similar components.
Keywords: 3D printing, composite bushing, modal analysis, multi-material
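The numerical core of such a modal analysis is the generalized eigenvalue problem K x = ω² M x. The sketch below solves it for an invented 3-degree-of-freedom system, not the actual bushing model, to show how natural frequencies and mode shapes fall out.

```python
# Solve K x = w^2 M x for a toy 3-DOF system (matrices invented).
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1e6   # N/m, invented stiffness matrix
M = np.diag([1.0, 1.0, 0.5])               # kg, invented mass matrix

eigvals, modes = eigh(K, M)                 # generalized symmetric eigenproblem
freqs_hz = np.sqrt(eigvals) / (2 * np.pi)   # natural frequencies
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))
print("first mode shape:", np.round(modes[:, 0], 3))
```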
Procedia PDF Downloads 116
989 Resilience Assessment of Mountain Cities from the Perspective of Disaster Prevention: Taking Chongqing as an Example
Abstract:
President Xi Jinping has clearly stated the need to advance the process of people-centered urbanization more effectively, striving to shape cities into spaces that are healthier, safer, and more livable. However, during the development and construction of mountainous cities, numerous uncertain disruptive factors have emerged one after another, posing severe challenges to cities' overall development. Therefore, building resilient cities and creating high-quality urban ecosystems and safety systems have become the core of achieving sustainable urban development. This paper takes the central urban area of Chongqing as the research object and establishes an urban resilience assessment indicator system across four dimensions: society, economy, ecology, and infrastructure. It employs the entropy weight method and the TOPSIS model to assess the urban resilience level of the central urban area of Chongqing from 2019 to 2022. The results indicate that: i. the resilience level of the central urban area of Chongqing is unevenly distributed, showing a spatial pattern of "high in the middle and low around" and differing across the dimensions; ii. due to the impact of the COVID-19 pandemic, the overall resilience level of the central urban area of Chongqing declined significantly, with low recovery capacity and slow improvement in urban resilience. Finally, based on the four selected dimensions, this paper proposes optimization strategies for urban resilience in mountainous cities, providing a basis for Chongqing to build a safe and livable new city.
Keywords: mountainous urban areas, central urban area of Chongqing, entropy weight method, TOPSIS model, ArcGIS
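A compact sketch of the entropy-weight plus TOPSIS pipeline named above, applied to an invented matrix of four districts by three benefit-type indicators; the data and the benefit-only simplification are assumptions.

```python
# Entropy weights followed by TOPSIS relative closeness (toy data).
import numpy as np

X = np.array([[0.8, 120, 0.6],
              [0.5, 200, 0.9],
              [0.9,  80, 0.4],
              [0.6, 150, 0.7]], dtype=float)   # invented indicator values

P = X / X.sum(axis=0)                          # normalize each indicator
E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))   # information entropy
w = (1 - E) / (1 - E).sum()                    # entropy weights

V = w * X / np.linalg.norm(X, axis=0)          # weighted normalized matrix
best, worst = V.max(axis=0), V.min(axis=0)     # ideal / anti-ideal points
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)       # TOPSIS relative closeness
print(np.round(closeness, 3))                  # higher = more resilient
```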
Procedia PDF Downloads 14
988 A Sensor Placement Methodology for Chemical Plants
Authors: Omid Ataei Nia, Karim Salahshoor
Abstract:
In this paper, a new precise and reliable sensor network design methodology is introduced for unit processes and operations, using the constriction coefficient particle swarm optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a square-root unscented Kalman filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, and reliability, together with importance of variables (IVs) as a novel measure in the instrumentation criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take the importance of variables into account in the sensor network design procedure. In this paper, a specific weight is assigned to each sensor measuring a process variable in the network to indicate the importance of that variable over the others, so as to cater to the ultimate sensor network application requirements. A set of distinct scenarios was conducted to evaluate the performance of the proposed methodology on a simulated continuous stirred tank reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to a significant improvement in accuracy with respect to alternative sensor network design approaches and securing the definite allocation of sensors to the most important process variables, a novel achievement in sensor network design.
Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter
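Since CPSO's defining feature is the Clerc-Kennedy constriction coefficient, its velocity update is sketched in minimal form below; the values c1 = c2 = 2.05 are the conventional choice and are assumed rather than taken from the paper.

```python
# Constriction-coefficient PSO velocity update (Clerc-Kennedy form).
import math
import random

c1, c2 = 2.05, 2.05
phi = c1 + c2                                           # must exceed 4
chi = 2 / abs(2 - phi - math.sqrt(phi ** 2 - 4 * phi))  # constriction factor

def update_velocity(v, x, pbest, gbest):
    return chi * (v
                  + c1 * random.random() * (pbest - x)
                  + c2 * random.random() * (gbest - x))

print(round(chi, 4))  # 0.7298: damps the swarm and promotes convergence
```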
Procedia PDF Downloads 164
987 Gradient-Based Reliability Optimization of Integrated Energy Systems Under Extreme Weather Conditions: A Case Study in Ningbo, China
Abstract:
Recent extreme weather events, such as the 2021 European floods and North American heatwaves, have exposed the vulnerability of energy systems to both extreme demand scenarios and potential physical damage. Current integrated energy system designs often overlook performance under these challenging conditions. This research, focusing on a regional integrated energy system in Ningbo, China, proposes a distinct design method to optimize system reliability during extreme events. A multi-scenario model was developed, encompassing various extreme load conditions and potential system damage caused by severe weather. Based on this model, a comprehensive reliability improvement scheme was designed, incorporating a gradient approach to address different levels of disaster severity through the integration of advanced technologies such as distributed energy storage. The scheme's effectiveness was validated through Monte Carlo simulations. The results demonstrate significant enhancements in energy supply reliability and peak load reduction capability under extreme scenarios. The findings provide several insights for improving energy system adaptability in the face of climate-induced challenges, offering valuable references for building reliable energy infrastructure capable of withstanding both extreme demands and physical threats across a spectrum of disaster intensities.
Keywords: extreme weather events, integrated energy systems, reliability improvement, climate change adaptation
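A toy sketch of the kind of Monte Carlo reliability check described above: sample extreme-day demands and unit outages, then compare the loss-of-load probability with and without added distributed storage. All capacities, outage rates, and distributions are invented.

```python
# Monte Carlo loss-of-load probability with and without added storage.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

demand = rng.normal(100, 25, N).clip(min=0)      # MW, extreme-day demand
unit_up = rng.random((N, 3)) > 0.05              # three 40 MW units, 5% outage
supply = unit_up.sum(axis=1) * 40.0

lolp_base = np.mean(supply < demand)             # loss-of-load probability
lolp_storage = np.mean(supply + 20.0 < demand)   # +20 MW distributed storage
print(f"LOLP without storage: {lolp_base:.3%}, with storage: {lolp_storage:.3%}")
```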
Procedia PDF Downloads 32
986 Correlation Between Ore Mineralogy and the Dissolution Behavior of K-Feldspar
Authors: Adrian Keith Caamino, Sina Shakibania, Lena Sunqvist-Öqvist, Jan Rosenkranz, Yousef Ghorbani
Abstract:
Feldspar minerals are one of the main components of the earth's crust. They are tectosilicates, meaning that they mainly contain aluminum and silicon; besides aluminum and silicon, they contain either potassium, sodium, or calcium. Accordingly, feldspar minerals are categorized into three main groups: K-feldspar, Na-feldspar, and Ca-feldspar. In recent years, the trend toward using K-feldspar has grown tremendously, considering its potential to produce potash and alumina. However, feldspar minerals in general are difficult to decompose for the dissolution of their metallic components. Several methods, including intensive milling, leaching under elevated pressure and temperature, thermal pretreatment, and the use of corrosive leaching reagents, have been proposed to improve their low dissolution efficiency. In this study, as part of the POTASSIAL EU project, mechanical activation by intensive milling followed by leaching with hydrochloric acid (HCl) was practiced to overcome the low dissolution efficiency of the K-feldspar components. The grinding operational parameters, namely time, rotational speed, and ball-to-sample weight ratio, were studied using the Taguchi optimization method. The mineralogy of the ground samples was then analyzed using a scanning electron microscope (SEM) equipped with automated quantitative mineralogy. After grinding, the prepared samples were subjected to HCl leaching. In the end, the dissolution efficiencies of the main elements and impurities of the different samples were correlated with the mineralogical characterization results. K-feldspar component dissolution is correlated with ore mineralogy, which provides insight into how best to optimize the leaching conditions for selective dissolution; further, it will affect the subsequent purification steps and the final value recovery procedures.
Keywords: K-feldspar, grinding, automated mineralogy, impurity, leaching
Procedia PDF Downloads 80
985 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System
Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala
Abstract:
One of the principal challenges the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motorist industry, is becoming impossible when relying on our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification toward recovery efforts in cases where a vehicle is missing or an attempted theft is taking place. The vehicle theft detection system uses a convolutional neural network (CNN) to recognize the driver's face captured using an installed mobile phone device. The location identification function uses the Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner of the whereabouts of the vehicle. The installed mobile app was implemented in Python, as it is undoubtedly the best choice for machine learning, allowing easy access to machine learning algorithms through its widely developed library ecosystem. The graphical user interface was developed in Java, as it is better suited for mobile development. Google's online database (Firebase) was used as the means of storage for the application. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used to establish the acceptability of the developed system. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor a vehicle at any given place, even when it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate, and report missing vehicles to different security agencies.
Keywords: CNN, location identification, tracking, GPS, GSM
Procedia PDF Downloads 178
984 Enhancement of Density-Based Spatial Clustering Algorithm with Noise for Fire Risk Assessment and Warning in Metro Manila
Authors: Pinky Mae O. De Leon, Franchezka S. P. Flores
Abstract:
This study focuses on applying an enhanced density-based spatial clustering of applications with noise (DBSCAN) algorithm for fire risk assessment and warnings in Metro Manila. Unlike other clustering algorithms, DBSCAN is known for its ability to identify arbitrarily shaped clusters and its resistance to noise. However, its performance diminishes when handling high-dimensional data, where it can read noise points as relevant data points. Also, the algorithm depends on the user-set parameters (eps and minPts); choosing the wrong parameters can greatly affect the clustering result. To overcome these challenges, the study proposes three key enhancements: first, utilize MinHash together with locality-sensitive hashing (LSH) to decrease the dimensionality of the data set; second, apply Jaccard similarity before the epsilon parameter to ensure that only similar data points are considered neighbors; and third, use the concept of the Jaccard neighborhood along with the minPts parameter to improve the classification of core points and the identification of noise in the data set. The results show that the modified DBSCAN algorithm outperformed three other clustering methods: it achieved fewer outliers, which facilitated a clearer identification of fire-prone areas; a high silhouette score, indicating well-separated clusters that distinctly identify areas with potential fire hazards; and an exceptionally low Davies-Bouldin index together with a high Calinski-Harabasz score, highlighting its ability to form compact, well-defined clusters, making it an effective tool for assessing fire hazard zones. This study is intended to assess the areas of Metro Manila most prone to fire risk.
Keywords: DBSCAN, clustering, Jaccard similarity, MinHash LSH, fires
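A sketch of the MinHash/LSH neighborhood step described above, using the datasketch library (an assumption; the paper does not name its implementation). Each record becomes a MinHash signature, and the LSH index returns candidate Jaccard neighbors, standing in for DBSCAN's epsilon-neighborhood query.

```python
# MinHash signatures + LSH index as an approximate Jaccard neighborhood.
from datasketch import MinHash, MinHashLSH

records = {
    "incident_1": {"district_a", "electrical", "residential"},
    "incident_2": {"district_a", "electrical", "commercial"},
    "incident_3": {"district_b", "arson", "industrial"},
}  # invented categorical fire-incident records

def signature(tokens, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for t in tokens:
        m.update(t.encode("utf8"))
    return m

lsh = MinHashLSH(threshold=0.5, num_perm=128)   # threshold plays the eps role
sigs = {k: signature(v) for k, v in records.items()}
for k, m in sigs.items():
    lsh.insert(k, m)

print(lsh.query(sigs["incident_1"]))  # candidate neighbors of incident_1
```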
Procedia PDF Downloads 15
983 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang
Abstract:
Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like convolutional neural networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative adversarial networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. To overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a distinctive technique known as the Diverse Conditional Wasserstein GAN (DCWGAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of this DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved impressive performance metrics, with an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate promising potential for the early detection of diseases in CXR images using this approach.
Keywords: CNN, classification, deep learning, GAN, ResNet50
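The abstract does not specify DCWGAN's losses, so the sketch below shows only the standard conditional Wasserstein critic loss with gradient penalty (WGAN-GP) that the name suggests, with a tiny toy critic so it runs; all shapes and the linear critic are assumptions.

```python
# Conditional WGAN-GP critic loss with a toy critic (all shapes assumed).
import torch

def critic_loss(critic, real, fake, labels, gp_weight=10.0):
    loss_w = critic(fake, labels).mean() - critic(real, labels).mean()
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp, labels).sum(), interp,
                               create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()  # gradient penalty
    return loss_w + gp_weight * gp        # the critic minimizes this

class TinyCritic(torch.nn.Module):        # stand-in for a real CNN critic
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(64 * 64 + 1, 1)
    def forward(self, x, y):
        return self.net(torch.cat([x.flatten(1), y.float()], dim=1))

critic = TinyCritic()
real, fake = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1))      # normal vs. abnormal CXR condition
print(critic_loss(critic, real, fake, labels).item())
```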
Procedia PDF Downloads 92
982 Particle Size Dependent Enhancement of Compressive Strength and Carbonation Efficiency in Steel Slag Cementitious Composites
Authors: Jason Ting Jing Cheng, Lee Foo Wei, Yew Ming Kun, Chin Ren Jie, Yip Chun Chieh
Abstract:
The utilization of industrial by-products, such as steel slag, in cementitious materials not only mitigates environmental impact but also enhances material properties. This study investigates the dual influence of steel slag particle size on the compressive strength and carbonation efficiency of cementitious composites. Through a systematic experimental approach, steel slag particles of varying sizes were incorporated into cement, and the resulting composites were subjected to mechanical and carbonation tests. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) analyses were conducted in this paper. The findings reveal a positive correlation between increased particle size and compressive strength, attributed to the improved interfacial transition zone and packing density. Conversely, smaller particle sizes exhibited enhanced carbonation efficiency, likely due to the increased surface area facilitating the carbonation reaction. The higher silica and calcium content in the finer particles was confirmed by EDX, which contributed to the accelerated carbonation process. This study underscores the importance of particle size optimization in designing sustainable cementitious materials with balanced mechanical performance and carbon sequestration potential. The insights gained from the advanced analytical techniques offer a comprehensive understanding of the mechanisms at play, paving the way for the strategic use of steel slag in eco-friendly construction practices.
Keywords: steel slag, carbonation efficiency, particle size enhancement, compressive strength
Procedia PDF Downloads 65
981 Bridge Healthcare Access Gap with Artificial Intelligence
Authors: Moshmi Sangavarapu
Abstract:
The US healthcare industry has undergone tremendous digital transformation in recent years, but critical care access for lower-income ethnicities is still in its nascency. This population has historically shown substantial hesitation to seek any medical assistance. While the lack of sufficient financial resources plays a critical role, the existing cultural and knowledge barriers also contribute significantly to widening the access gap. It is imperative to break these barriers to ensure timely access to therapeutic procedures that can save important lives! Based on ongoing research, healthcare access barriers can best be addressed by first tapping the untapped potential of caregiver communities, who play a critical role in patients' diagnoses, building healthcare knowledge, and instilling confidence in required therapeutic procedures. Recent technological advancements have opened many avenues for reaching the large caregiver community in smart ways. A digitized go-to-market strategy featuring connected media, coupled with smart IoT devices and geo-location targeting, can be collectively leveraged to reach this key audience group. AI/ML algorithms can be thoroughly trained to identify relevant data signals from users' location and browsing behavior and to determine useful marketing touchpoints. Web behavior can be further assimilated with natural language processing to identify contextually relevant interest topics and decipher potential caregivers on digital avenues to serve the brand message. In conclusion, grasping the true healthcare access journey of any lower-income ethnic group is important in designing beneficial touchpoints that can alleviate patients' concerns and allow them to break their own access barriers and opt for timely, quality healthcare.
Keywords: healthcare access, market access, diversity barriers, patient journey
Procedia PDF Downloads 59