Search results for: prediction modelling
2107 A New Tactical Optimization Model for Bioenergy Supply Chain
Authors: Birome Holo Ba, Christian Prins, Caroline Prodhon
Abstract:
Optimization is an important aspect of logistics management. It can significantly reduce logistics costs and is also a good tool for decision support. In this paper, we address a planning problem specific to biomass supply chains. We propose a new mixed integer linear programming (MILP) model dealing with different feedstock production operations such as harvesting, packing, storage, pre-processing and transportation, with the objective of minimizing the total logistics cost of the system on a regional basis. It determines the optimal number of harvesting machines, the fleet size of trucks for transportation, and the amount of each type of biomass harvested, stored and pre-processed in each period to satisfy the demands of refineries in each period. We illustrate the effectiveness of the proposed model with a numerical example, a case study in Aube (a French department), which gives preliminary and interesting results on a small test case.
Keywords: biomass logistics, supply chain, modelling, optimization, bioenergy, biofuels
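As a rough illustration of the kind of tactical MILP described above, the following sketch minimizes machine, storage and transport costs over a few periods using the PuLP library. All sets, costs and capacities are invented placeholders, not the authors' formulation or data.

```python
# Minimal sketch of a tactical biomass-chain MILP (placeholder data, not the paper's model).
import pulp

periods = [1, 2, 3]
demand = {1: 120.0, 2: 150.0, 3: 100.0}   # refinery demand per period (t), assumed
harvest_cap = 60.0                         # capacity of one harvesting machine (t/period), assumed
machine_cost, store_cost, truck_cost = 500.0, 2.0, 8.0  # illustrative unit costs

m = pulp.LpProblem("biomass_chain", pulp.LpMinimize)
machines = pulp.LpVariable("machines", lowBound=0, cat="Integer")
harvest = pulp.LpVariable.dicts("harvest", periods, lowBound=0)
stock = pulp.LpVariable.dicts("stock", [0] + periods, lowBound=0)

# Objective: fixed machine cost + storage + a transport proxy proportional to tonnage
m += machine_cost * machines + pulp.lpSum(
    store_cost * stock[t] + truck_cost * harvest[t] for t in periods)

m += stock[0] == 0                                        # empty initial storage
for t in periods:
    m += harvest[t] <= harvest_cap * machines             # fleet capacity
    m += stock[t] == stock[t-1] + harvest[t] - demand[t]  # period flow balance

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(machines), {t: pulp.value(harvest[t]) for t in periods})
```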
Procedia PDF Downloads 516
2106 Aerodynamic Designing of Supersonic Centrifugal Compressor Stages
Authors: Y. Galerkin, A. Rekstin, K. Soldatova
Abstract:
The Universal modeling method, well proven for industrial compressors, was applied to the design of a high flow rate supersonic stage. Results were checked by ANSYS CFX and NUMECA Fine Turbo calculations. The impeller appeared to be very effective at transonic flow velocities. The efficiency of the stator elements is acceptable at design Mach numbers too. Their loss coefficient versus inlet flow angle performance correlates well with the Universal modeling prediction. The impeller demonstrated the ability to operate satisfactorily at the design flow rate. The supersonic flow behavior in the impeller inducer at the shroud blade-to-blade surface at the design flow rate Φdes deserves additional study.
Keywords: centrifugal compressor stage, supersonic impeller, inlet flow angle, loss coefficient, return channel, shock wave, vane diffuser
Procedia PDF Downloads 471
2105 Identifying Common Sports Injuries in Karate and Presenting a Model for Preventing Identified Injuries (A Case Study of East Azerbaijan, Iranian Karatekas)
Authors: Nadia Zahra Karimi Khiavi, Amir Ghiami Rad
Abstract:
Due to the high likelihood of injuries in karate, karatekas' injuries warrant special attention. This study explores the prevalence of karate injuries in East Azerbaijan, Iran, and provides a model for karatekas to use in the prevention of such injuries. This study employs a descriptive approach. Male and female participants with a brown belt or above in either control or non-control styles in East Azerbaijan province make up the study's statistical population. A statistical sample size of 100 people was computed using the tools employed (SmartPLS), and the samples were drawn at random from all clubs in the province with the assistance of the Karate Board in order to provide a model for the prevention of karate injuries. Information was gathered by means of a survey that made use of the Standard Questionnaire for Australian Sports Medicine Injury Reports. The information is presented in the form of tables and samples, and descriptive statistics were used to organise and summarise the data. Control and non-control independent t-tests were conducted using SPSS version 20, and structural equation modelling (PLS) was utilised for injury prevention modelling at a 0.05 level of significance. The results showed that the most common areas of injury among the control groups were the upper limbs (46.15%), lower limbs (34.61%), trunk (15.38%), and head and neck (3.84%). The most common types of injuries were broken bones (34.61%), sprain or strain (23.13%), bruising and contusions (23.13%), trauma to the face and mouth (11.53%), and damage to the nerves (69.69%). Uncontrolled kumite practitioners are most likely to sustain injuries to the head and neck (33.33%), trunk (25.92%), upper limbs (22.22%), and lower limbs (18.51%). Their most common injuries were to the mouth and face (33.33%), dislocations and fractures (22.22%), sprain and strain (22.22%), bruises and contusions (18.51%), and nerves (70%), in that order. Among those who practise control kata, injuries to the upper limb account for 45.83%, the lower limb for 41.67%, the trunk for 8.33%, and the head and neck for 4.17%. The most common types of injuries are dislocations and fractures (41.66%), sprain and strain (29.16%), bruising and contusions (16.66%), and nerves (12.5%). Injuries to the face and mouth were not reported among those practising control kata. By far, the most common sites of injury for those practising uncontrolled kata were the lower limb (43.74%), upper limb (39.13%), trunk (13.14%), and head and neck (4.34%). The most common types of injuries were dislocations and fractures (34.82%), sprain and strain (26.08%), bruises and contusions (21.73%), mouth and face (13.14%), and nerves. Teaching the concepts of cooling down and warming up (0.591) and enhancing the degree of safety in the sports environment (0.413) were shown to play the most essential roles in reducing sports injuries among karate practitioners of controlled and uncontrolled styles, respectively. Other contributing factors were the use of proper sports gear (0.390), modification of training programme principles (0.341), formulation of an effective diet plan for athletes (0.284), and evaluation of athletes' physical anatomy, physiology, chemistry, and physics (0.247).
Keywords: sports injuries, karate, prevention, cooling and warming
Procedia PDF Downloads 102
2104 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens
Authors: R. Tamborrino, F. Rinaudo
Abstract:
Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to provide a guide for the interpretation of historical phenomena. Digital conversion and management of those documents allow other sources to be added in a unique and coherent model that permits the intersection of different data, able to open new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g., cadastres, censuses, etc.). The geographic localisation of that information inside cartographic supports allows for the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. In all these kinds of analysis, the use of GIS platforms today represents the most appropriate answer. The design of the related databases is the key to realising an ad-hoc instrument to facilitate the analysis and the intersection of data of different origins. Moreover, GIS has become the digital platform where it is possible to add other kinds of data visualisation. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories carried out just prior to WWI provides the opportunity to test the potential of GIS platforms for the analysis of urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems, aiming to create a new type of platform conceived as an interface integrating different kinds of data visualisation. The data processing allows this information to be linked to the urban space and the growth of the city at that time to be visualised. The sources related to the urban landscape development in that period are of a different nature. The emerging necessity to build, enlarge, modify and join different buildings to boost the industrial activities, according to their fast development, is recorded in different official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, which are reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility to first re-build the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections and elevations) that show the previous and the planned situation. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different research targets, such as economic, biographical, architectural, or demographic ones. By superimposing a layer of the present city, the past meets the present: industrial heritage and people meet urban history.
Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities
Procedia PDF Downloads 191
2103 Finite Element Modelling and Analysis of Human Knee Joint
Authors: R. Ranjith Kumar
Abstract:
Computer modeling and simulation of human movement is playing an important role in sports and rehabilitation. Accurate modeling and analysis of the human knee joint is more complex because of its complicated structure, whose geometry is not easy to represent with a solid model. As part of this project, surface reconstruction from a number of CT scan images of the human knee joint is carried out using 3D Slicer, an open-source software. From this surface reconstruction model, triangular meshes are created on the reconstructed surface using MeshLab (another open-source software). This final triangular mesh model is imported into SolidWorks, a 3D mechanical CAD modeling software. Finally, this CAD model is imported into ABAQUS, a finite element analysis software, for analyzing the knee joint. The results obtained are encouraging and provide an accurate way of modeling and analyzing biological parts without human intervention.
Keywords: SolidWorks, CATIA, Pro-E, CAD
Procedia PDF Downloads 125
2102 COSMO-RS Prediction for Choline Chloride/Urea Based Deep Eutectic Solvent: Chemical Structure and Application as Agent for Natural Gas Dehydration
Authors: Tayeb Aissaoui, Inas M. AlNashef
Abstract:
In recent years, green solvents named deep eutectic solvents (DESs) have been found to possess significant properties and to be applicable in several technologies. Choline chloride (ChCl) mixed with urea at a ratio of 1:2 and 80 °C was the first discovered DES. In this article, the chemical structure and combination mechanism of the ChCl:urea based DES were investigated. Moreover, the implementation of this DES in water removal from natural gas is reported. Dehydration of natural gas by ChCl:urea shows significant absorption efficiency compared to triethylene glycol. All of the above computations were carried out with the COSMOthermX software. This article confirms the potential application of DESs in the gas industry.
Keywords: COSMO-RS, deep eutectic solvents, dehydration, natural gas, structure, organic salt
Procedia PDF Downloads 294
2101 A Comparative Evaluation of Finite Difference Methods for the Extended Boussinesq Equations and Application to Tsunamis Modelling
Authors: Aurore Cauquis, Philippe Heinrich, Mario Ricchiuto, Audrey Gailler
Abstract:
In this talk, we look for an accurate time scheme to model the propagation of waves. Several numerical schemes have been developed to solve the extended weakly nonlinear, weakly dispersive Boussinesq equations. The temporal schemes used are two Lax-Wendroff schemes, second- or third-order accurate, two Runge-Kutta schemes of second and third order, and a simplified third-order accurate Lax-Wendroff scheme. Spatial derivatives are evaluated with fourth-order accuracy. The numerical model is applied to two one-dimensional benchmarks on a flat bottom. It is also applied to the simulation of the Algerian tsunami generated by an Mw = 6 earthquake on 18 March 2021. The tsunami propagation was highly dispersive and propagated across the Mediterranean Sea. We study here the effects of the order of temporal discretization on the accuracy of the results and on the computation time.
Keywords: numerical analysis, tsunami propagation, water wave, Boussinesq equations
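For readers unfamiliar with the schemes compared above, the sketch below shows the structure of one of them: a third-order (strong-stability-preserving) Runge-Kutta step, here applied to simple linear advection with a fourth-order central spatial derivative. It is only an illustration of the time-stepping structure, not the authors' Boussinesq solver.

```python
# SSP RK3 time stepping for u_t + c u_x = 0 on a periodic grid (illustration only).
import numpy as np

def dudx4(u, dx):
    """Fourth-order central first derivative on a periodic grid."""
    return (np.roll(u, 2) - 8*np.roll(u, 1)
            + 8*np.roll(u, -1) - np.roll(u, -2)) / (12*dx)

def rhs(u, c, dx):
    return -c * dudx4(u, dx)

def ssp_rk3_step(u, dt, c, dx):
    """Strong-stability-preserving third-order Runge-Kutta step."""
    u1 = u + dt * rhs(u, c, dx)
    u2 = 0.75*u + 0.25*(u1 + dt*rhs(u1, c, dx))
    return u/3 + 2/3*(u2 + dt*rhs(u2, c, dx))

x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-200*(x - 0.5)**2)            # initial hump
dx, dt, c = x[1] - x[0], 0.002, 1.0      # CFL = c*dt/dx = 0.4
for _ in range(250):
    u = ssp_rk3_step(u, dt, c, dx)
```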
Procedia PDF Downloads 242
2100 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and with different degrees of complexity. In order to assess the risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment of regional products. The present review is an overview of currently available best-practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review has been performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string "(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk"). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and the existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search resulted in 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references were identified as being of interest for the present study. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or the Codex Alimentarius Commission was referenced in each of the IRAs, for the import of animals or for imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as the source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimations based on expert opinions. Calculations were done using @Risk, R or Excel. Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible on a national level (e.g., herd information) or only open to a small group of people (flight passenger import data at the national airport customs office). In the IRAs, an uncertainty analysis has been mentioned in some cases, but calculations have been performed only in a few cases. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by great heterogeneity in general methodology and the data used. Often, information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g., in a crisis situation.
Keywords: import risk assessment, review, tools, food import
Procedia PDF Downloads 302
2099 Evaluating Service Trustworthiness for Service Selection in Cloud Environment
Authors: Maryam Amiri, Leyli Mohammad-Khanli
Abstract:
Cloud computing is becoming increasingly popular, and more business applications are moving to the cloud. In this regard, the number of services that provide similar functional properties is increasing. So, the ability to select a service with the best non-functional properties, corresponding to the user preference, is necessary for the user. This paper presents an Evaluation Framework of Service Trustworthiness (EFST) that evaluates the trustworthiness of equivalent services without the need for additional invocations of them. EFST extracts user preference automatically. Then, it assesses the trustworthiness of services in two dimensions of qualitative and quantitative metrics based on the experiences of past usage of services. Finally, EFST determines the overall trustworthiness of services using a Fuzzy Inference System (FIS). The results of experiments and simulations show that EFST is able to predict the missing values of Quality of Service (QoS) better than other competing approaches. Also, it guides users to select the most appropriate services.
Keywords: user preference, cloud service, trustworthiness, QoS metrics, prediction
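The final aggregation step can be illustrated with a tiny Mamdani-style fuzzy inference, combining a hypothetical qualitative score and a quantitative QoS score into an overall trust value. The membership functions and rules below are invented placeholders, not EFST's actual rule base.

```python
# Toy Mamdani-style fuzzy inference for overall trustworthiness (invented rules).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a)/(b - a), (c - x)/(c - b)), 0.0)

def trustworthiness(qualitative, quantitative):
    out = np.linspace(0, 1, 101)             # output universe: trust in [0, 1]
    low_q  = tri(qualitative, -0.5, 0.0, 0.6)
    high_q = tri(qualitative,  0.4, 1.0, 1.5)
    low_s  = tri(quantitative, -0.5, 0.0, 0.6)
    high_s = tri(quantitative,  0.4, 1.0, 1.5)

    # Rules (min for AND, max for OR), each clipping an output fuzzy set:
    r1 = np.minimum(min(high_q, high_s), tri(out, 0.5, 1.0, 1.5))   # both high -> high trust
    r2 = np.minimum(max(min(high_q, low_s), min(low_q, high_s)),
                    tri(out, 0.0, 0.5, 1.0))                        # mixed -> medium
    r3 = np.minimum(min(low_q, low_s), tri(out, -0.5, 0.0, 0.5))    # both low -> low

    agg = np.maximum.reduce([r1, r2, r3])
    return (agg * out).sum() / agg.sum()      # centroid defuzzification

print(round(trustworthiness(0.8, 0.7), 3))
```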
Procedia PDF Downloads 289
2098 Mathematical Modeling and Optimization of Burnishing Parameters for 15NiCr6 Steel
Authors: Tarek Litim, Ouahiba Taamallah
Abstract:
The present paper is an investigation of the effect of burnishing on the surface integrity of a component made of 15NiCr6 steel. This work presents a statistical study based on regression, and Taguchi's design has allowed the development of mathematical models to predict the output responses as a function of the technological parameters studied. The response surface methodology (RSM) showed the simultaneous influence of the burnishing parameters and identified the optimal processing parameters. ANOVA analysis of the results led to the validation of the prediction models, with coefficients of determination of 90.60% and 92.41% for roughness and hardness, respectively. Furthermore, a multi-objective optimization allowed the identification of a regime characterized by P = 10 kgf, i = 3 passes, and f = 0.074 mm/rev, which favours minimum roughness and maximum hardness. The result was validated by desirability values of D = 0.99 and 0.95 for roughness and hardness, respectively.
Keywords: 15NiCr6 steel, burnishing, surface integrity, Taguchi, RSM, ANOVA
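The RSM step can be sketched as fitting a second-order response surface to (P, i, f) data and scoring a candidate regime with a desirability function. The design points and roughness values below are invented; only the structure mirrors the methodology.

```python
# Second-order response surface + smaller-is-better desirability (placeholder data).
import numpy as np

def quad_features(X):
    P, i, f = X.T
    return np.column_stack([np.ones(len(X)), P, i, f,
                            P*i, P*f, i*f, P**2, i**2, f**2])

# Hypothetical design matrix [P (kgf), i (passes), f (mm/rev)] and measured roughness Ra
X = np.array([[5, 1, 0.05], [5, 3, 0.10], [10, 1, 0.10],
              [10, 3, 0.074], [15, 1, 0.05], [15, 3, 0.10],
              [10, 2, 0.05], [5, 2, 0.074], [15, 2, 0.074]])
Ra = np.array([1.10, 0.95, 0.80, 0.55, 0.90, 0.85, 0.70, 1.00, 0.75])

beta, *_ = np.linalg.lstsq(quad_features(X), Ra, rcond=None)
predict = lambda x: quad_features(np.atleast_2d(x)) @ beta

def desirability(y, lo=0.5, hi=1.2):
    """Smaller-is-better desirability: 1 at lo, 0 at hi."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

candidate = [10, 3, 0.074]   # the regime reported above
print(predict(candidate), desirability(predict(candidate)))
```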
Procedia PDF Downloads 194
2097 Competitor Analysis to Quantify the Benefits and for Different Use of Transport Infrastructure
Authors: Dimitrios J. Dimitriou, Maria F. Sartzetaki
Abstract:
Different transportation modes have key operational advantages and disadvantages, providing a variety of different transport options to users and passengers. This paper reviews key variables for the competition between air transport and other transport modes, providing results in terms of the perceived cost to users for destinations where competitiveness is high for all transport modes. The competitor analysis variables include the cost and time outputs for each transport option, highlighting the level of competitiveness on highly demanded origin-destination (OD) corridors. The case study presents the output of such an analysis for the OD corridor in Greece that connects the capital city (Athens) with the second largest city (Thessaloniki), for which the different transport modes (air, train, road) have been considered. The aim is to provide an easy-to-handle tool for planners, managers and decision makers for assessing pricing policy effectiveness and demand attractiveness, appropriate for use in other similar cases.
Keywords: competitor analysis, transport economics, transport generalized cost, quantitative modelling
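The generalized-cost comparison at the heart of such an analysis can be sketched in a few lines. The fares, travel times, and value of time below are illustrative assumptions for the Athens-Thessaloniki corridor, not the paper's calibrated figures.

```python
# Generalized cost per mode = out-of-pocket cost + time valued at the user's VOT.
VALUE_OF_TIME = 12.0   # EUR per hour, assumed

modes = {                    # (monetary cost EUR, door-to-door time h), assumed
    "air":   (90.0, 3.5),    # includes airport access and processing time
    "train": (45.0, 4.5),
    "road":  (35.0, 5.5),
}

def generalized_cost(fare, hours, vot=VALUE_OF_TIME):
    """Perceived cost combining fare and travel time."""
    return fare + vot * hours

for mode, (fare, hours) in sorted(modes.items(),
                                  key=lambda kv: generalized_cost(*kv[1])):
    print(f"{mode:5s} {generalized_cost(fare, hours):7.2f} EUR")
```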
Procedia PDF Downloads 248
2096 An Approach for Thermal Resistance Prediction of Plain Socks in Wet State
Authors: Tariq Mansoor, Lubos Hes, Vladimir Bajzik
Abstract:
Sock comfort has great significance in our daily life. This significance increases even further during work of low or high activity, which causes the body to sweat at different rates. In this study, plain socks with different fibre compositions were wetted to saturation level. Then, after successive intervals of conditioning, these socks were characterized by their thermal resistance in dry and wet states. Theoretical thermal resistance is predicted by using combined filling coefficients and the thermal conductivity of wet polymers, instead of the dry polymer (fibre), in different models. With this modification, different mathematical models could predict thermal resistance at different moisture levels. Furthermore, the thermal resistance predicted by the different models correlates reasonably well (0.84-0.98) with experimental results in both the dry (laboratory-conditioned moisture) and wet states. This work is supported by the Technical University of Liberec under SGC-2019, project number 21314.
Keywords: thermal resistance, mathematical model, plain socks, moisture loss rate
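The prediction idea can be sketched as R = h / λ_eff, with the effective conductivity obtained by combining fibre, water and air fractions through their filling coefficients. The parallel mixing rule and all material values below are illustrative assumptions, not the authors' exact model.

```python
# Thermal resistance of a wet fabric from a parallel conductivity mixing rule (assumed).
LAMBDA = {"polymer_dry": 0.20, "water": 0.60, "air": 0.026}  # W/(m.K), typical values

def wet_conductivity(f_fibre, f_water, lam_fibre=LAMBDA["polymer_dry"]):
    """Weighted-arithmetic (parallel) mixing of fibre, water and air phases."""
    f_air = 1.0 - f_fibre - f_water
    return (f_fibre * lam_fibre + f_water * LAMBDA["water"]
            + f_air * LAMBDA["air"])

def thermal_resistance(thickness_m, f_fibre, f_water):
    return thickness_m / wet_conductivity(f_fibre, f_water)

# A 1.5 mm plain sock fabric, 30 % fibre fraction, at increasing moisture levels:
for f_water in (0.0, 0.1, 0.3, 0.5):
    R = thermal_resistance(1.5e-3, 0.30, f_water)
    print(f"water fraction {f_water:.1f}: R = {R*1000:.1f} x10-3 m2.K/W")
```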
Procedia PDF Downloads 200
2095 A Modelling Analysis of Monetary Policy Rule
Authors: Wael Bakhit, Salma Bakhit
Abstract:
This paper employs a quarterly time series to determine the timing of structural breaks in interest rates in the USA over the last 60 years. The Chow test is used to investigate the non-stationarity, where the date of the potential break is assumed to be known. Moreover, an empirical examination of the financial sector was made to check whether it is positively related to deviations from an assumed interest rate as given by a standard Taylor rule. The empirical analysis is strengthened by analysing the rule from a historical perspective and by a look at the effect of the central bank's interest rate setting on financial imbalances. The empirical evidence indicates that deviation in monetary policy is a potential causal factor in the build-up of financial imbalances and the subsequent crisis, where macro-prudential intervention could have a beneficial effect. Thus, our findings tend to support the view that the policy stance of central banks has been a source of the global financial crisis of the past decade.
Keywords: Taylor rule, financial imbalances, central banks, econometrics
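A minimal sketch of the Chow test on a simulated interest-rate series is shown below; the break date is assumed known, as in the paper, but the data and regression specification are placeholders.

```python
# Chow test for a structural break at a known date in a simple AR(1) regression.
import numpy as np
from scipy import stats

def rss(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return float(e @ e)

def chow_test(y, X, break_idx):
    k = X.shape[1]
    rss_pooled = rss(y, X)
    rss_1 = rss(y[:break_idx], X[:break_idx])
    rss_2 = rss(y[break_idx:], X[break_idx:])
    n1, n2 = break_idx, len(y) - break_idx
    F = ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n1 + n2 - 2*k))
    p = 1.0 - stats.f.cdf(F, k, n1 + n2 - 2*k)
    return F, p

rng = np.random.default_rng(0)
r = np.concatenate([5 + rng.normal(0, 0.3, 120),     # pre-break regime
                    2 + rng.normal(0, 0.3, 120)])    # post-break regime
y, x_lag = r[1:], r[:-1]
X = np.column_stack([np.ones(len(y)), x_lag])
print(chow_test(y, X, break_idx=119))
```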
Procedia PDF Downloads 391
2094 Modelling of Composite Steel and Concrete Beam with the Lightweight Concrete Slab
Authors: Veronika Přivřelová
Abstract:
Well-designed composite steel and concrete structures highlight the good material properties and lessen the deficiencies of steel and concrete; in particular, they make use of the high tensile strength of steel and the high stiffness of concrete. The most common composite steel and concrete structure is a simply supported beam, in which the concrete slab, transferring the slab load to the beam, is connected to the steel cross-section. The aim of this paper is to find the most adequate numerical model of a simply supported composite beam, with the cross-sectional and material parameters based on the results of a parametric study and numerical analysis. The paper also evaluates the suitability of using compact concrete with lightweight aggregates for composite steel and concrete beams. The most adequate numerical model will be used in the near future for comparison with the results of laboratory tests.
Keywords: composite beams, high-performance concrete, high-strength steel, lightweight concrete slab, modeling
Procedia PDF Downloads 410
2093 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development
Authors: Sinisa J. Vukicevic
Abstract:
Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built. This approach is evidence-based scenario development, and an urban growth model was a key support tool in framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: 'Strong Central City', 'Node City', and 'Corridor City'. Each scenario has a narrative story that expresses the scenario's high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set definition for the specific scenario, but also through the policy measures. The model was calibrated so as to reproduce the known historical land use pattern; for the calibration, we used 2006 and 2011 land use data. The validation was done independently, using data not used for the calibration: the model was validated with 2016 data. In general, the modeling process contains three main phases: 'from qualitative storyline to quantitative modelling', 'model development and model run', and 'from quantitative modelling to qualitative storyline'. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be looked at from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios. We explored the similarities and differences of the scenarios and their quantitative indicators: land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, proximity to objects of interest, etc. From the technology perspective, the model showed one very important characteristic: flexibility. The direction of policy testing changed many times during the consultation process, and the model's flexibility in applying all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.
Keywords: urban growth model, scenario development, spatial indicators, Metronamica
Procedia PDF Downloads 95
2092 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
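The averaging of the three top-performing models can be sketched with soft voting over the three algorithm families named above; the features and labels below are synthetic stand-ins for the timing, weather-forecast and past-measurement variables.

```python
# Soft-voting ensemble of the three algorithm families (synthetic placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))            # season, weekday, forecasts, lagged measurements...
y = (X[:, 0] + 0.5*X[:, 3] + rng.normal(0, 0.5, 2000) > 0).astype(int)  # good/bad air day
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                     random_state=0))],
    voting="soft")                         # average predicted probabilities
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```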
Procedia PDF Downloads 130
2091 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations
Authors: Milena Nanova, Radul Shishkov, Martin Georgiev, Damyan Damov
Abstract:
This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can innovate residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper explores how modern digital tools, particularly computational design, and algorithmic modelling, can optimize the early stages of residential building design. By creating a basic parametric model of a residential district, the paper investigates how automated design tools can explore multiple design variants based on predefined parameters (e.g., building cost, dimensions, orientation) and constraints. The paper aims to demonstrate how these tools can rapidly generate and refine architectural solutions that meet the required criteria for quality of life, cost efficiency, and functionality. The study utilizes computational design for database processing and algorithmic modelling within the fields of applied geodesy and architecture. It focuses on optimizing the forms of residential development by adjusting specific parameters and constraints. The results of multiple iterations are analysed, refined, and selected based on their alignment with predefined quality and cost criteria. The findings of this research will contribute to a modern, complex approach to residential area design. The paper demonstrates the potential for integrating BIM models into the design process and their application in virtual 3D Geographic Information Systems (GIS) environments. The study also examines the transformation of BIM models into suitable 3D GIS file formats, such as CityGML, to facilitate the visualization and evaluation of urban planning solutions. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the investment during its entire lifecycle.
Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization
Procedia PDF Downloads 15
2090 Predicting Shortage of Hospital Beds during COVID-19 Pandemic in United States
Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi
Abstract:
The worldwide spread of coronavirus has raised concern about planning for the excess demand for hospital services in response to the COVID-19 pandemic. The surge in demand for hospital services beyond the current capacity leads to a shortage of ICU beds and ventilators in some parts of the US. In this study, we forecast the required number of hospital beds and the possible shortage of beds in the US during the COVID-19 pandemic, to be used in the planning and hospitalization of new cases. We used data on COVID-19 deaths and patients' hospitalizations, as well as data on hospital capacities and utilization in the US, from publicly available sources and national government websites. We used a novel ensemble model of deep learning networks, based on stacking different linear and non-linear layers, to predict the shortage in hospital beds. The results showed that our proposed approach can predict the excess demand for hospital beds very well, which can be helpful in developing strategies and plans to mitigate this gap.
Keywords: COVID-19, deep learning, ensembled models, hospital capacity planning
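The stacking idea can be sketched with a stacked ensemble of linear and non-linear base learners (here via scikit-learn rather than the authors' deep networks); inputs, target and the capacity threshold are synthetic placeholders.

```python
# Stacked ensemble for bed-demand forecasting (synthetic data, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))          # deaths, hospitalizations, lags, trend
y = 100 + 30*X[:, 0] + 10*np.maximum(X[:, 1], 0) + rng.normal(0, 5, 500)

stack = StackingRegressor(
    estimators=[("linear", Ridge(alpha=1.0)),
                ("forest", RandomForestRegressor(n_estimators=200,
                                                 random_state=0)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32),
                                     max_iter=3000, random_state=0))],
    final_estimator=Ridge())           # meta-learner combines base predictions
stack.fit(X[:400], y[:400])
demand = stack.predict(X[400:])
shortage = np.maximum(demand - 120.0, 0)   # beds short of an assumed capacity
print(shortage[:5])
```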
Procedia PDF Downloads 158
2089 Dynamic Model of Heterogeneous Markets with Imperfect Information for the Optimization of Company's Long-Time Strategy
Authors: Oleg Oborin
Abstract:
This paper is dedicated to the development of a model which can be used to evaluate the effectiveness of long-term corporate strategies and identify the best strategies. A theoretical model of a relatively homogeneous product market (such as the iron and steel industry, mobile services or road transport) has been developed. In the model, the market consists of a large number of companies with different internal characteristics and objectives. The companies can perform mergers and acquisitions in order to increase their market share. The model allows the simulation of long-term dynamics of the market (for a period longer than 20 years). A large number of simulations on random input data were therefore conducted in the framework of the model. After that, the results of the model were compared with the dynamics of real markets, such as the US steel industry from the beginning of the 20th century to the present day, and the market for mobile services in Germany for the period between 1990 and 2015.
Keywords: economic modelling, long-time strategy, mergers and acquisitions, simulation
Procedia PDF Downloads 368
2088 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Authors: Analise Borg, Paul Micallef
Abstract:
Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying different ways to organize this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market which are capable of identifying a piece of music in a short time. Different audio effects and degradations make it much harder to identify the unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric based algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel Spectrum Coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that the non-parametric analysis offers results comparable to those mentioned in the literature.
Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7
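The bin-mapping step can be sketched as quantizing each feature coefficient into a small integer code, so that matching degraded audio reduces to comparing codes; the "MFCC" matrix below is random, standing in for real extracted features.

```python
# Replacing feature coefficients by bin numbers (illustrative quantile binning).
import numpy as np

rng = np.random.default_rng(3)
mfcc = rng.normal(size=(13, 400))             # 13 coefficients x 400 frames, placeholder

def to_bin_numbers(features, n_bins=8):
    """Quantile-based bins per coefficient, so each code is roughly uniform."""
    codes = np.empty_like(features, dtype=np.int8)
    for i, row in enumerate(features):
        edges = np.quantile(row, np.linspace(0, 1, n_bins + 1)[1:-1])
        codes[i] = np.digitize(row, edges)
    return codes

query = to_bin_numbers(mfcc)
reference = to_bin_numbers(mfcc + rng.normal(0, 0.1, mfcc.shape))  # degraded copy
similarity = (query == reference).mean()       # fraction of matching codes
print(f"code agreement: {similarity:.2%}")
```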
Procedia PDF Downloads 423
2087 The Implementation of a Numerical Technique to Thermal Design of Fluidized Bed Cooler
Authors: Damiaa Saad Khudor
Abstract:
The paper describes an investigation of the thermal design of a fluidized bed cooler and the prediction of the heat transfer rate among the media categories. It is devoted to the thermal design of such equipment and its application in industrial fields, and it outlines the strategy for the fluidization heat transfer mode and its implementation in industry. The thermal design procedure is used to furnish a complete design for a fluidized bed cooler of sodium bicarbonate. The total thermal load distribution between the air-solid and water-solid streams along the cooler is calculated according to the thermal equilibrium. A step-by-step technique was used to accomplish the thermal design of the fluidized bed cooler. It predicts the load and the air, solid and water temperatures along the trough. The thermal design of the fluidized bed cooler led to the installation of a heat exchanger consisting of 65 horizontal tubes, 33.4 mm in diameter and 4 m in length, inside the bed trough.
Keywords: fluidization, powder technology, thermal design, heat exchangers
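The step-by-step technique can be sketched as marching along the trough and balancing, in each segment, the heat released by the cooling solids against the air and cooling-water streams; all flow rates, heat capacities and the assumed air/water load split below are placeholders.

```python
# Segment-by-segment thermal-equilibrium march along the cooler trough (assumed data).
M_S, CP_S = 2.0, 1.05e3     # solids: kg/s, J/(kg.K)
M_A, CP_A = 4.0, 1.00e3     # fluidizing air
M_W, CP_W = 3.0, 4.18e3     # cooling water in immersed tubes
AIR_SHARE = 0.4             # fraction of each segment's load taken by air, assumed

def march(t_solid_in=120.0, t_air_in=25.0, t_water_in=20.0,
          t_solid_out=40.0, n_seg=10):
    dT = (t_solid_in - t_solid_out) / n_seg
    t_s, t_a, t_w = t_solid_in, t_air_in, t_water_in
    for seg in range(1, n_seg + 1):
        q = M_S * CP_S * dT                      # heat released by solids, W
        t_a += AIR_SHARE * q / (M_A * CP_A)      # air stream warms up
        t_w += (1 - AIR_SHARE) * q / (M_W * CP_W)  # water stream warms up
        t_s -= dT
        print(f"segment {seg:2d}: solids {t_s:6.1f} C, "
              f"air {t_a:5.1f} C, water {t_w:5.1f} C")

march()
```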
Procedia PDF Downloads 515
2086 Comparison of ANN and Finite Element Model for the Prediction of Ultimate Load of Thin-Walled Steel Perforated Sections in Compression
Authors: Zhi-Jun Lu, Qi Lu, Meng Wu, Qian Xiang, Jun Gu
Abstract:
The analysis of perforated steel members is a 3D problem in nature; therefore, the traditional analytical expressions for the ultimate load of thin-walled steel sections cannot be used for perforated steel member design. In this study, the finite element method (FEM) and an artificial neural network (ANN) were used to simulate the process of stub column tests based on specific codes. Results show that, compared with those of the FEM model, the ultimate load predictions obtained from the ANN technique were much closer to those obtained from the physical experiments. The ANN model for solving the hard problem of complex steel perforated sections is very promising.
Keywords: artificial neural network (ANN), finite element method (FEM), perforated sections, thin-walled steel, ultimate load
Procedia PDF Downloads 353
2085 Research on Pollutant Characterization and Timing Decomposition in Beijing During the 2018-2022
Authors: Gao Fangting
Abstract:
With the accelerated pace of industrialization and urbanization, the economic level has been significantly improved; at the same time, air quality has also become a focus of attention, as it not only affects people's health but also has certain impacts on the economy and ecology. As the capital city of China, Beijing's air quality situation has attracted much attention. In this paper, based on the day-by-day PM2.5, PM10, CO, NO₂, SO₂ and O₃ conditions in Beijing from 2018 to 2022, the characteristics of the pollutants are analysed, and seasonal decomposition and prediction of the main pollutants, PM2.5, PM10 and O₃, are performed with STL (seasonal-trend decomposition using LOESS). The results of the study show that (1) the overall air quality of Beijing improved significantly from 2018 to 2022, and the main pollutants are PM2.5, PM10, and O₃; (2) the seasonal intensities of the main pollutants are high, as they are influenced by seasonal factors; and (3) the O₃ concentration is predicted to increase slowly from 2023 to 2026, while the PM10 and PM2.5 pollution situation improves slowly.
Keywords: air pollutants, Beijing, characteristic analysis, STL
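The STL step can be sketched with statsmodels on a simulated daily PM2.5 series with an annual cycle; in the study the input would be the observed Beijing data for 2018-2022.

```python
# STL decomposition of a simulated daily PM2.5 series (stand-in for observed data).
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

days = pd.date_range("2018-01-01", "2022-12-31", freq="D")
t = np.arange(len(days))
rng = np.random.default_rng(4)
pm25 = (60 - 0.01*t                         # slow improvement trend
        + 25*np.cos(2*np.pi*t/365.25)       # winter-peaking seasonality
        + rng.normal(0, 8, len(days)))
series = pd.Series(pm25, index=days)

res = STL(series, period=365, robust=True).fit()
# Seasonal strength: F_s = max(0, 1 - Var(resid) / Var(seasonal + resid))
seasonal_strength = max(0.0, 1 - res.resid.var()/(res.resid + res.seasonal).var())
print(f"seasonal strength: {seasonal_strength:.2f}")
print(res.trend.iloc[[0, -1]])              # trend at start and end of record
```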
Procedia PDF Downloads 27
2084 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which, in most cases, is at odds with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'subset accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
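The multi-label setup can be sketched as vectorizing test-step texts, mapping them to sets of automation components, and scoring with subset accuracy (exact match of the full label set); the tiny corpus and component names below are invented for illustration.

```python
# Multi-label mapping of test steps to automation components (invented corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

steps = ["switch ignition on", "set vehicle speed to 50 kph",
         "check warning lamp is off", "switch ignition off",
         "set vehicle speed to 0 kph", "check warning lamp is on"]
components = [{"PowerCtrl"}, {"SpeedCtrl", "CanBus"}, {"LampCheck"},
              {"PowerCtrl"}, {"SpeedCtrl", "CanBus"}, {"LampCheck"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)
model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(steps, Y)

pred = model.predict(["set vehicle speed to 30 kph"])
print(mlb.inverse_transform(pred))              # predicted component set
# accuracy_score on multilabel indicator matrices IS subset accuracy:
print("subset accuracy:", accuracy_score(Y, model.predict(steps)))
```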
Procedia PDF Downloads 134
2083 A Meso Macro Model Prediction of Laminated Composite Damage Elastic Behaviour
Authors: A. Hocine, A. Ghouaoula, S. M. Medjdoub, M. Cherifi
Abstract:
The present paper proposes a meso-macro model describing the mechanical behaviour of composite laminates with stacking sequence [+θ/-θ]s under tensile loading. The behaviour of a layer is expressed through elasticity coupled to damage. The elastic strain is due to the elasticity of the layer and can be modeled by using classical laminate theory, with the laminate considered as an orthotropic material. This means that no coupling effect between strain and curvature is considered. In the present work, the damage is associated with cracking of the matrix parallel to the fibers, and it is taken into account by changes in the stiffness of the layers. The anisotropic damage is completely described by a single scalar variable, and its evolution law is specified from the principle of maximum dissipation. The stress/strain relationship is investigated under plane stress loading.
Keywords: damage, behavior modeling, meso-macro model, composite laminate, membrane loading
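The ingredients named above can be sketched as the plane-stress reduced stiffness matrix Q of a single ply (classical laminate theory) with a scalar damage variable d degrading the matrix-dominated terms; the material constants are typical carbon/epoxy placeholders, not the paper's identified values.

```python
# Reduced stiffness Q of one orthotropic ply, degraded by a scalar damage d.
import numpy as np

E1, E2, G12, NU12 = 140e9, 10e9, 5e9, 0.30   # Pa, assumed ply properties

def reduced_stiffness(d=0.0):
    """Plane-stress Q matrix; matrix cracking degrades Q22, Q12, Q66 by (1-d)."""
    nu21 = NU12 * E2 / E1
    den = 1.0 - NU12 * nu21
    q11 = E1 / den
    q22 = (1.0 - d) * E2 / den
    q12 = (1.0 - d) * NU12 * E2 / den
    q66 = (1.0 - d) * G12
    return np.array([[q11, q12, 0.0],
                     [q12, q22, 0.0],
                     [0.0, 0.0, q66]])

eps = np.array([0.004, -0.001, 0.002])        # in-plane strain [e1, e2, gamma12]
for d in (0.0, 0.3, 0.6):
    sigma = reduced_stiffness(d) @ eps
    print(f"d = {d:.1f}: sigma = {np.round(sigma / 1e6, 1)} MPa")
```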
Procedia PDF Downloads 477
2082 Towards Optimising Building Information Modelling and Building Management System in Higher Education Institutions Facility Management: A Review
Authors: Zhuoqun Sun, Francisco Sierra, A. Booth
Abstract:
With BIM rapidly implemented in the design and construction stages of a project, researchers have begun to focus on improving the operation and maintenance stage with the aid of BIM. With the increasing amount of electronic equipment installed in buildings, building management systems (BMS) have become mainstream for controlling a building, especially in higher education institutions, which can play an important role in reducing carbon emissions and improving energy efficiency. Currently, an approach to integrate BIM and BMS to improve HEIs' facility management has not yet been established. Thus, this paper aims to analyse the benefits, issues, and trends of BIM and BMS integration and their application in HEIs. A systematic literature review was carried out on SCOPUS by applying the PRISMA methodology. 73 articles were chosen based on keywords, abstracts, and the full content of the articles. The benefits and existing issues identified in the articles are analysed. The review shows the need to develop a tool to improve facility management through BIM-BMS integration.
Keywords: BIM, BMS, HEIs, review
Procedia PDF Downloads 164
2081 The UAV Feasibility Trajectory Prediction Using Convolution Neural Networks
Authors: Adrien Marque, Daniel Delahaye, Pierre Maréchal, Isabelle Berry
Abstract:
Wind direction and uncertainty are crucial in aircraft and unmanned aerial vehicle trajectories. By computing wind covariance matrices at each spatial grid point, these spatial grids can be treated as images whose elements are symmetric positive definite (SPD) matrices. A data pre-processing step and specific convolution, max-pooling, and flatten layers are implemented to process such images. The neural network is then applied to spatial grids whose elements are wind covariance matrices, to solve classification problems related to the feasibility of unmanned aerial vehicle trajectories based on wind direction and wind uncertainty.
Keywords: wind direction, uncertainty level, unmanned aerial vehicle, convolution neural network, SPD matrices
Procedia PDF Downloads 56
2080 Effect of Filler Size and Shape on Positive Temperature Coefficient Effect
Authors: Eric Asare, Jamie Evans, Mark Newton, Emiliano Bilotti
Abstract:
Two types of filler shapes (spheres and flakes) and three different sizes are employed to study the size effect on the PTC effect. The composites are prepared using a mini-extruder with high-density polyethylene (HDPE) as the matrix, and computer modelling is used to fit the experimental results. The percolation threshold decreases with decreasing filler size, and this was observed for both the spherical particles and the flakes. This was caused by the decrease in interparticle distance with decreasing filler size. The 100 µm particles showed a larger PTC intensity compared to the 5 µm particles for both the metal-coated glass spheres and flakes. The small particles have a large surface area and tend to agglomerate, and this makes it difficult for the conductive network to be disturbed. Increasing the filler content decreased the PTC intensity; this is due to an increase in the conductive network within the polymer matrix, hence more energy is needed to disrupt the network.
Keywords: positive temperature coefficient (PTC) effect, conductive polymer composite (CPC), electrical conductivity
Procedia PDF Downloads 429
2079 Modelling of Factors Affecting Bond Strength of Fibre Reinforced Polymer Externally Bonded to Timber and Concrete
Authors: Abbas Vahedian, Rijun Shrestha, Keith Crews
Abstract:
In recent years, fibre reinforced polymers (FRPs), as strengthening materials, have received significant attention from civil engineers and environmentalists because of their excellent characteristics. Currently, these composites have become a mainstream technology for the strengthening of infrastructure such as steel, concrete and, more recently, timber and masonry structures. However, debonding is identified as the main problem which limits the full utilisation of the FRP material. In this paper, a preliminary analysis of the factors affecting the bond strength of FRP-to-concrete and FRP-to-timber bonded interfaces has been conducted. A novel theoretical method based on regression analysis has been established to evaluate these factors. Results of the proposed model are then assessed against results of pull-out tests, and satisfactory agreement is achieved between the measured failure loads (R² = 0.83, p < 0.0001) and the predicted loads (R² = 0.78, p < 0.0001).
Keywords: debonding, fibre reinforced polymers (FRP), pull-out test, stepwise regression analysis
Procedia PDF Downloads 248
2078 A Semantic Analysis of Modal Verbs in Barack Obama's 2012 Presidential Campaign Speech
Authors: Kais A. Kadhim
Abstract:
This paper is a semantic analysis of the English modals in Obama's speeches. The main objective of this study is to analyze selected modal auxiliaries identified in selected speeches of Obama's campaign, based on Coates' (1983) semantic clusters. A total of fifteen speeches from Obama's campaign were selected as the primary data, and the modal auxiliaries selected for analysis include will, would, can, could, should, must, ought, shall, may and might. All the modal auxiliaries taken from the speeches of Barack Obama were analyzed based on the framework of Coates' semantic clusters. This analytical framework was applied to examine how modal auxiliaries are used in the context of persuading people in Obama's campaign speeches. The findings reveal that modals of intention, prediction, and futurity and modals of possibility, ability, and permission are the most frequently used in Obama's campaign speeches.
Keywords: modals, meaning, persuasion, speech
Procedia PDF Downloads 409