Search results for: predictive modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2682

1932 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach

Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis

Abstract:

The conventional methods used in the construction industry often result in significant rework, since most decisions are taken on site under the pressure of project deadlines and with improper information flow, leading to ineffective coordination. Today's architecture, engineering, and construction (AEC) stakeholders, however, demand faster and more accurate deliverables, efficient buildings, and smart processes, which is a tall order. The building information modelling (BIM) concept was developed as a solution to these needs. Even though BIM has been implemented successfully in much of the world, it is still in its early stages in Germany, since stakeholders remain sceptical of its reliability and efficiency. Due to the large capital requirement, small and medium-sized construction companies are still reluctant to adopt a BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles in implementing BIM for prefabrication. Among the many applications of BIM, prefabrication was chosen for this paper because it has a strong impact on both the time and cost of a construction project. The positive impact of prefabrication can be observed directly by project stakeholders and participants, which helps overcome the scepticism of small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed in two case studies. The first case study represents on-site prefabrication, and the second off-site prefabrication.
It was planned so that the first case study gives workers at the site first-hand experience with the BIM model, allowing them to make full use of it as a better representation than the traditional 2D plan. The main aim of the first case study was to build confidence in BIM models, which was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, from which it is inferred that implementing BIM for prefabrication can reduce construction time, ensure minimal or no waste, improve accuracy, and reduce problem-solving at the construction site. It was also observed that this process requires more planning time and better communication and coordination between disciplines such as mechanical, electrical, plumbing, and architecture, which was the major obstacle to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies in the private building sector in Germany.

Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company

Procedia PDF Downloads 101
1931 Numerical and Experimental Investigation of Fracture Mechanism in Paintings on Wood

Authors: Mohammad Jamalabadi, Noemi Zabari, Lukasz Bratasz

Abstract:

Panel paintings (complex multi-layer structures consisting of a wood support and a paint layer composed of a preparatory layer of gesso, paints, and varnishes) are among the categories of cultural objects most vulnerable to relative humidity fluctuations and are frequently found in museum collections. The current environmental specifications in museums have been derived using the criterion of crack initiation in an undamaged, usually new, gesso layer laid on wood. In reality, historical paintings exhibit complex crack patterns called craquelures. The present paper analyses the structural response of a paint layer with a virtual network of rectangular cracks under environmental loading, using a three-dimensional model of a panel painting. Two modes of loading are considered: one induced by the one-dimensional moisture response of the wood support, termed tangential loading, and an isotropic mode induced by drying shrinkage of the gesso layer. The superposition of the two modes is also analysed. The modelling showed that the minimum distances between cracks parallel to the wood grain depended on the gesso stiffness under tangential loading. In spite of a non-zero Poisson's ratio, gesso cracks perpendicular to the wood grain could not be generated by the moisture response of the wood support. The isotropic drying shrinkage of gesso produced cracks that were almost evenly spaced in both directions. The modelling results were cross-checked against crack patterns obtained on a mock-up of a panel painting exposed to a number of extreme environmental variations in an environmental chamber.

Keywords: fracture saturation, surface cracking, paintings on wood, wood panels

Procedia PDF Downloads 253
1930 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors

Authors: Saeed Vahedikamal, Ian Hepburn

Abstract:

Microwave Kinetic Inductance Detectors (MKIDs) are considered one of the most promising photon detectors for many future astronomical applications, such as exoplanet detection. The advantages of MKIDs stem from their single-photon sensitivity (ranging from the UV through the optical to the near infrared), photon energy resolution, and high temporal resolution (~microseconds). There has been substantial progress in the development of these detectors, and megapixel MKID arrays are now possible. The unique capability of recording an incident photon's energy (or wavelength) while also registering its time of arrival to within a microsecond enables an array of MKIDs to produce a four-dimensional data block with x and y spatial axes and, per pixel, a spectral (z) axis and a temporal (t) axis. This offers the possibility that the spectrum and brightness variation over time of any detected piece of space debris might provide a unique identifier or fingerprint. Such a fingerprint, obtained from multiple detections of an object by different observers, could be used to determine the object's orbital parameters and to track it. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g., La Palma, Canary Islands), we could detect sub-cm objects at GEO. Considering a Lambertian sphere with 10% reflectivity (the albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1-second capture, a 1.2 cm object with a 70-second integration, or a 0.65 cm object with a 4-minute integration. We present details of our modelling and of the potential instrument for a dedicated GEO surveillance system.
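As an illustration of the kind of brightness modelling described above, the sketch below estimates the apparent magnitude of a sunlit Lambertian sphere at GEO range. The solar magnitude, phase function, and 10% albedo are standard textbook values and the object sizes come from the abstract, but the calculation is a simplified stand-in for the authors' model, not their actual code:

```python
import math

SUN_APPARENT_MAG = -26.74  # apparent V magnitude of the Sun

def lambertian_phase(alpha_rad):
    """Phase function of a diffuse (Lambertian) sphere; equals 2/3 at zero phase."""
    return (2.0 / (3.0 * math.pi)) * (
        math.sin(alpha_rad) + (math.pi - alpha_rad) * math.cos(alpha_rad)
    )

def apparent_magnitude(radius_m, distance_m, albedo=0.1, alpha_rad=0.0):
    """Apparent magnitude of a sunlit Lambertian sphere seen at distance_m."""
    flux_ratio = albedo * lambertian_phase(alpha_rad) * (radius_m / distance_m) ** 2
    return SUN_APPARENT_MAG - 2.5 * math.log10(flux_ratio)

GEO_RANGE_M = 3.6e7  # ~36,000 km observer-to-object range
for diameter_cm in (10.0, 1.2, 0.65):
    r = diameter_cm / 200.0  # diameter in cm -> radius in m
    print(f"{diameter_cm:5.2f} cm -> mag {apparent_magnitude(r, GEO_RANGE_M):.1f}")
```

Smaller objects come out several magnitudes fainter, consistent with the abstract's need for progressively longer integrations on sub-cm debris.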

Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID

Procedia PDF Downloads 83
1929 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities in Canada and abroad use chloramination to disinfect drinking water, so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for flood modelling. Nevertheless, that model was built only to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not suitable for studying water quality. The first goal was therefore to complete the modelling and update all stormwater network components. Available GIS data were then used to calculate catchment properties such as slope, length, and imperviousness.
To calibrate and validate the model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated, and the results were found to be affected by the ratio of impervious areas. The catchment length was also tested; it had to be calculated, since it is an approximate representation of the catchment shape. Surface roughness coefficients were calibrated as well. The computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63, respectively, were all within acceptable ranges.
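The calibration statistics quoted above (correlation coefficient, peak error, volume error) can be reproduced with simple formulas. The sketch below applies common definitions of these measures to hypothetical hydrograph values; the study's exact formulations and data are not given in the abstract:

```python
import math

def validation_metrics(observed, simulated):
    """Correlation, peak error (%) and volume error (%) between two flow series."""
    n = len(observed)
    mo, ms = sum(observed) / n, sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    ss = math.sqrt(sum((s - ms) ** 2 for s in simulated))
    return {
        "r": cov / (so * ss),  # Pearson correlation coefficient
        "peak_error_pct": (max(simulated) - max(observed)) / max(observed) * 100,
        "volume_error_pct": (sum(simulated) - sum(observed)) / sum(observed) * 100,
    }

# Hypothetical observed vs. simulated flows (m3/s) at one monitoring station
obs = [0.2, 0.8, 2.5, 4.0, 3.1, 1.4, 0.5]
sim = [0.3, 0.9, 2.3, 3.9, 3.3, 1.2, 0.4]
print(validation_metrics(obs, sim))
```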

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 284
1928 Fast Aerodynamic Evaluation of Transport Aircraft in Early Phases

Authors: Xavier Bertrand, Alexandre Cayrel

Abstract:

The early phase of aircraft development is instrumental, as it really drives the potential of a new concept. Any weakness in the high-level design (wing planform, moveable surfaces layout, etc.) will be extremely difficult and expensive to recover later in the aircraft development process. Aerodynamic evaluation in this very early development phase is driven by two main criteria: a short lead time, to allow quick iterations of the geometrical design, and high-quality calculations, to get an accurate and reliable assessment of the current status. These two criteria are usually quite contradictory. A short lead time of a couple of hours end-to-end can be obtained with very simple tools (semi-empirical methods, for instance), although their accuracy is limited, whereas higher-quality calculations require heavier, more complex tools, which need more complex inputs as well and a significantly longer lead time. At this point, a choice has to be made between accuracy and lead time. A brand-new approach has been developed within Airbus, aiming at quickly obtaining high-quality evaluations of the aerodynamics of an aircraft. This methodology is based on the joint use of surrogate modelling and a lifting line code. The surrogate modelling is used to get the wing section characteristics (e.g., lift coefficient vs. angle of attack), whatever the airfoil geometry, the status of the moveable surfaces (ailerons/spoilers), or the high-lift device deployment. From these characteristics, the lifting line code is used to get the 3D effects on the wing, whatever the flow conditions (low/high Mach numbers, etc.). This methodology has been applied successfully to a medium-range aircraft concept.
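As a rough illustration of coupling a section-level surrogate with a finite-wing correction (not Airbus's actual method), the sketch below stands in a small interpolation table for the surrogate and applies the classical lifting-line lift-slope correction a = a0 / (1 + a0 / (pi * e * AR)). All table values and parameters are hypothetical:

```python
import math
from bisect import bisect_left

# Hypothetical surrogate output: section lift coefficient vs. angle of attack (deg)
ALPHA_TAB = [-4.0, 0.0, 4.0, 8.0, 12.0]
CL_TAB    = [-0.2, 0.25, 0.70, 1.15, 1.50]

def section_cl(alpha_deg):
    """Linear interpolation in the table (a stand-in for the trained surrogate)."""
    i = min(max(bisect_left(ALPHA_TAB, alpha_deg), 1), len(ALPHA_TAB) - 1)
    a0, a1 = ALPHA_TAB[i - 1], ALPHA_TAB[i]
    c0, c1 = CL_TAB[i - 1], CL_TAB[i]
    return c0 + (c1 - c0) * (alpha_deg - a0) / (a1 - a0)

def wing_cl(alpha_deg, aspect_ratio, oswald_e=0.9):
    """Crude 3D lift estimate via the classical finite-wing lift-slope ratio."""
    da = 0.5  # half-width (deg) for the finite-difference lift slope
    a0 = (section_cl(alpha_deg + da) - section_cl(alpha_deg - da)) / math.radians(2 * da)
    a = a0 / (1 + a0 / (math.pi * oswald_e * aspect_ratio))
    return section_cl(alpha_deg) * a / a0  # scale section CL by the 3D/2D slope ratio

print(wing_cl(4.0, aspect_ratio=9.0))
```

The finite wing always produces less lift than the 2D section at the same angle of attack, which is the essential 3D effect the lifting line code captures.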

Keywords: aerodynamics, lifting line, surrogate model, CFD

Procedia PDF Downloads 338
1927 Predictive Factors of Prognosis in Acute Stroke Patients Receiving Traditional Chinese Medicine Therapy: A Retrospective Study

Authors: Shaoyi Lu

Abstract:

Background: Traditional Chinese medicine has been used to treat stroke, which is a major cause of morbidity and mortality. There is, however, no clear agreement about the optimal timing, population, efficacy, and predictive prognosis factors of traditional Chinese medicine as a supplemental therapy. Method: In this study, we performed a retrospective analysis with data collected from stroke patients in the Stroke Registry In Chang Gung Healthcare System (SRICHS). Stroke patients who received a traditional Chinese medicine consultation in the neurology ward of Keelung Chang Gung Memorial Hospital from Jan 2010 to Dec 2014 were enrolled. Clinical profiles, including neurologic deficit, activities of daily living, and other basic characteristics, were analyzed. Through propensity score matching, we compared the NIHSS and Barthel index before and after hospitalization, applied subgroup analysis, and adjusted using a multivariate regression method. Results: In total, 115 stroke patients were enrolled, 23 in the experimental group and 92 in the control group. The most important factors for prognosis prediction were the National Institutes of Health Stroke Scale score and Barthel index immediately before hospitalization. Traditional Chinese medicine intervention had no statistically significant influence on the neurological deficit of acute stroke patients, and a mildly negative influence on the daily activity performance of acute hemorrhagic stroke patients. Conclusion: The efficacy of traditional Chinese medicine as a supplemental therapy for acute stroke patients remains controversial. The reasons for this might be complex and require more research to understand.

Keywords: traditional Chinese medicine, complementary and alternative medicine, stroke, acupuncture

Procedia PDF Downloads 354
1926 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings

Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian

Abstract:

Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emission. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaption of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms within the Oak Ridge National Laboratory (ORNL) main campus. 
Our platform is developed using adaptive and flexible architecture design, rendering the platform generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.

Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM

Procedia PDF Downloads 89
1925 Assessment of a Coupled Geothermal-Solar Thermal Based Hydrogen Production System

Authors: Maryam Hamlehdar, Guillermo A. Narsilio

Abstract:

To enhance the feasibility of utilising geothermal hot sedimentary aquifers (HSAs) for clean hydrogen production, one approach is the implementation of solar-integrated geothermal energy systems. This detailed modelling study conducts a thermo-economic assessment of an advanced Organic Rankine Cycle (ORC)-based hydrogen production system that uses low-temperature geothermal reservoirs, with a specific focus on hot sedimentary aquifers, over a 30-year period. In the proposed hybrid system, solar-thermal energy is used to raise the temperature of the water extracted from the geothermal production well. This temperature increase leads to a higher steam output, powering the turbine and thus enhancing the electricity output for running the electrolyser. Thermodynamic modelling of a parabolic trough solar (PTS) collector is developed and integrated with modelling of a geothermal-based configuration that includes a closed regenerator cycle (CRC), a proton exchange membrane (PEM) electrolyser, and a thermoelectric generator (TEG). Following this, the study investigates the impact of solar energy use on the temperature enhancement of the geothermal reservoir and assesses the resulting consequences for the lifecycle performance of the hydrogen production system in comparison with a standalone geothermal system. The results indicate that, with an appropriate solar collector area, a combined solar-geothermal hydrogen production system outperforms a standalone geothermal system in both cost and rate of production. These findings underscore that a solar-assisted geothermal hybrid system holds the potential to generate lower-cost hydrogen with enhanced efficiency, thereby boosting the appeal of numerous low- to medium-temperature geothermal sources for hydrogen production.

Keywords: clean hydrogen production, integrated solar-geothermal, low-temperature geothermal energy, numerical modelling

Procedia PDF Downloads 52
1924 Prediction of Anticancer Potential of Curcumin Nanoparticles by Means of Quasi-Qsar Analysis Using Monte Carlo Method

Authors: Ruchika Goyal, Ashwani Kumar, Sandeep Jain

Abstract:

The experimental data for the anticancer potential of curcumin nanoparticles were compiled from eclectic data sources. The optimal descriptors were examined using the Monte Carlo method-based CORAL SEA software. The statistical quality of the model is as follows: n = 14, R² = 0.6809, Q² = 0.5943, s = 0.175, MAE = 0.114, F = 26 (sub-training set); n = 5, R² = 0.9529, Q² = 0.7982, s = 0.086, MAE = 0.068, F = 61, Av Rm² = 0.7601, ∆R²m = 0.0840, k = 0.9856 and kk = 1.0146 (test set); and n = 5, R² = 0.6075 (validation set). These data can be used to build predictive QSAR models for anticancer activity.
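The R² and MAE figures reported above follow from the standard definitions. The sketch below applies them to hypothetical observed vs. predicted activity values (not the paper's data) to show how such model-quality statistics are computed:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical observed vs. QSAR-predicted activities (arbitrary units)
y_true = [5.1, 5.8, 6.2, 6.9, 7.4]
y_pred = [5.3, 5.6, 6.4, 6.8, 7.1]
print(r_squared(y_true, y_pred), mae(y_true, y_pred))
```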

Keywords: anticancer potential, curcumin, model, nanoparticles, optimal descriptors, QSAR

Procedia PDF Downloads 308
1923 Use of Cassava Waste and Its Energy Potential

Authors: I. Inuaeyen, L. Phil, O. Eni

Abstract:

Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner and renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuels, bio-electricity, bio-chemicals, and bio-heat from biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste, which will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for the bio-refinery process. Cassava is a staple food in Nigeria and one of the foodstuffs most widely cultivated by farmers across the country; as a result, there is an abundant supply of cassava waste. The aim of this study is to explore opportunities for converting cassava waste into a range of bio-products such as butanol, ethanol, electricity, heat, methanol, and furfural, using a combination of biochemical, thermochemical, and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis, and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using the Aspen Plus process simulation software.
Process economic analysis will be done using the Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted, including studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and the modelling and simulation of waste conversion to useful products, among others. The statistical distribution of cassava solid waste production in Nigeria has also been established, and key publications with useful parameters for developing the different cassava waste conversion processes have been identified. In future work, detailed modelling of the different process scenarios will be carried out, and the models will be validated using data from the literature and from demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario, using process economics, life cycle analysis, energy efficiency, and social impact as the performance indexes.

Keywords: bio-refinery, cassava waste, energy, process modelling

Procedia PDF Downloads 357
1922 Impact of Diabetes Mellitus Type 2 on Clinical In-Stent Restenosis in First Elective Percutaneous Coronary Intervention Patients

Authors: Leonard Simoni, Ilir Alimehmeti, Ervina Shirka, Endri Hasimi, Ndricim Kallashi, Verona Beka, Suerta Kabili, Artan Goda

Abstract:

Background: Diabetes Mellitus type 2, small vessel calibre, stented vessel length, complex lesion morphology, and prior bypass surgery have been found to be risk factors for In-Stent Restenosis (ISR). However, there are some contradictory results about body mass index (BMI) as a risk factor for ISR. Purpose: We aimed to identify clinical, lesional, and procedural factors that can predict clinical ISR in our patients. Methods: We enrolled 759 patients who underwent first-time elective PCI with Bare Metal Stents (BMS) from September 2011 to December 2013 in our Department of Cardiology and followed them for at least 1.5 years, with a median of 862 days (2 years and 4 months). Only patients re-admitted with ischemic heart disease underwent control coronary angiography; no routine angiographic control was performed. Patients were categorized into ISR and non-ISR groups and compared. Multivariate analysis (binary logistic regression, forward conditional method) was used to identify independent predictive risk factors. P < 0.05 was considered statistically significant. Results: ISR individuals, compared to non-ISR individuals, had a significantly lower BMI (25.7±3.3 vs. 26.9±3.7, p=0.004), higher-risk anatomy (LM + 3-vessel CAD) (23% vs. 14%, p=0.03), a higher number of stents per person (2.1±1.1 vs. 1.75±0.96, p=0.004), a greater length of stents per person (39.3±21.6 vs. 33.3±18.5, p=0.01), and lower combined use of clopidogrel and ASA (95% vs. 99%, p=0.012). They also had a higher, although not statistically significant, prevalence of Diabetes Mellitus (42% vs. 32%, p=0.072) and a greater number of treated vessels (1.36±0.5 vs. 1.26±0.5, p=0.08). In the multivariate analysis, Diabetes Mellitus type 2 and multiple stents were independent predictive risk factors for In-Stent Restenosis, OR 1.66 [1.03-2.68], p=0.039, and OR 1.44 [1.16-1.78], p=0.001, respectively.
On the other hand, higher BMI and the combined use of clopidogrel and ASA were protective factors, OR 0.88 [0.81-0.95], p=0.001, and OR 0.2 [0.06-0.72], p=0.013, respectively. Conclusion: Diabetes Mellitus and multiple stents are strong predictive risk factors, whereas the combined use of clopidogrel and ASA is protective against clinical In-Stent Restenosis. Paradoxically, high BMI is a protective factor for In-Stent Restenosis, probably related to a larger vessel diameter and consequently a larger diameter of the stents implanted in these patients. Further studies are needed to clarify this finding.
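Odds ratios and confidence intervals of the form reported above follow directly from logistic-regression coefficients via OR = exp(beta) and CI = exp(beta +/- 1.96 * SE). The sketch below uses illustrative beta and SE values back-calculated from the reported OR 1.66 [1.03-2.68] for diabetes; the coefficients are reconstructions for demonstration, not the study's raw output:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative coefficient for Diabetes Mellitus type 2, reconstructed from
# the reported OR 1.66 [1.03-2.68]
or_, lo, hi = odds_ratio_ci(beta=0.507, se=0.244)
print(f"OR = {or_:.2f} [{lo:.2f}-{hi:.2f}]")
```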

Keywords: body mass index, diabetes mellitus, in-stent restenosis, percutaneous coronary intervention

Procedia PDF Downloads 195
1921 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies

Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal

Abstract:

Advances in data capture from outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (light detection and ranging) provides a new way to build outcrop-based reservoir models, which supply a crucial piece of information for understanding heterogeneities in sandstone facies through high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model, combining information gathered from traditional fieldwork with detailed digital point-cloud data from LiDAR, to develop an intermediate small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan Pro (v1.8.0) was used for digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point clouds of different sets of sandstone facies with well-marked top and bottom boundaries, in conjunction with field sedimentology. This provides a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model that can serve as an analogue for subsurface reservoir studies.

Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model

Procedia PDF Downloads 207
1920 Decision Support System for the Management of the Shandong Peninsula, China

Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle

Abstract:

A Decision Support System (DSS) for supporting decision makers in the management of the Shandong Peninsula has been developed. Emphasis has been given to coastal protection, coastal cage aquaculture and harbors. The investigations were done in the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and results of its application are presented. The system integrates in-situ measurements, process-based models, and a database management system. Numerical models for the simulation of flow, waves, sediment transport and morphodynamics covering the entire Bohai Sea are set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were realized based on the measurements of moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. In order to enable cost-effective and scalable applications, a database management system was developed. It enhances information processing, data evaluation, and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture and harbors are presented here. Model simulations covering the most severe storms observed during the last decades were carried out leading to an improved understanding of hydrodynamics and morphodynamics. Results helped in the identification of coastal stretches subjected to higher levels of energy and improved support for coastal protection measures.

Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling

Procedia PDF Downloads 184
1919 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

— One of the biggest challenges the world faces today is global warming that is caused by greenhouse gases (GHGs) coming from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below the level of the 1990s by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather dependent fluctuation of energy supply. Therefore, biogas plants in this content would play an important role, since they are easily adaptable. Feedstock availability varies locally or seasonally; however there is a lack of knowledge in how biogas plants should be operated in a stable manner by local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models, which fairly describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used with different loading rates. Life cycle assessment (LCA) is a technique for analyzing several sides from evolution of a product till its disposal in an environmental point of view. It is highly recommend to use as a decision making tool. In order to achieve suitable strategies, the combination of a flexible energy generation provided by biogas plants, a secure production process and the maximization of the environmental benefits can be obtained by the combination of process modelling and LCA approaches. 
For this reason, this study focuses on a biogas plant that flexibly generates the required energy from the co-digestion of maize, grass, and cattle manure while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model, combined with the equations necessary to obtain the climate change, acidification, and eutrophication potentials of the whole production system based on the ReCiPe midpoint v1.06 methodology. The simulation was calibrated against real data from operating biogas plants and existing literature. The results prove that the AMOCO model can successfully imitate the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This would allow for a prediction of the process when the feedstock utilization supports the establishment of closed material circles within a smart bio-production grid – under the constraint of minimal drawbacks for the environment and maximal sustainability.

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 180
1918 Influence of Reinforcement Stiffness on the Performance of Back-to-Back Reinforced Earth Wall upon Rainwater Infiltration

Authors: Gopika Rajagopal, Sudheesh Thiyyakkandi

Abstract:

Back-to-back reinforced earth (RE) walls are extensively used nowadays as bridge abutments and highway ramps, owing to their cost efficiency and ease of construction. High-quality select fill is the most suitable backfill material because of its excellent engineering properties and constructability. However, industries are compelled to use low-quality, locally available soil because of its ample availability on site, and several failure cases of such walls have been reported, especially subsequent to rainfall events. The stiffness of the reinforcement is one of the major factors affecting the performance of RE walls. The present study analyzed, through finite element modelling, the effect of reinforcement stiffness on the performance of complete select fill, complete marginal fill, and hybrid-fill (i.e., combination of select and marginal fills) back-to-back RE walls, immediately after construction and upon rainwater infiltration. A constant width-to-height (W/H) ratio of 3 and a height (H) of 6 m were considered for the numerical analysis, and the stiffness of the reinforcement layers was varied from 500 kN/m to 10000 kN/m. Results showed that reinforcement stiffness had a noticeable influence on the response of the RE wall, subsequent to construction as well as to rainwater infiltration. Facing displacement was found to decrease, and maximum reinforcement tension and factor of safety to increase, with increasing reinforcement stiffness. However, beyond a stiffness of 5000 kN/m, no significant further reduction in facing displacement was observed. The behavior of the fully marginal fill wall considered in this study was found to be reasonable even after rainwater infiltration when high-stiffness reinforcement layers were used.

Keywords: back-to-back reinforced earth wall, finite element modelling, rainwater infiltration, reinforcement stiffness

Procedia PDF Downloads 144
1917 A Comprehensive Study and Evaluation on Image Fashion Features Extraction

Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen

Abstract:

Clothing fashion represents a human’s aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in society, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related question: what image feature best describes clothing fashion? To address this issue, we designed and evaluated various image features, ranging from traditional low-level hand-crafted features, through mid-level style awareness features, to various currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following nine feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC), and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning based feature representations far outperform traditional hand-crafted ones. Additionally, among all deep learning based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for a discriminative fashion feature representation.

Keywords: convolutional neural network, feature representation, image processing, machine modelling

Procedia PDF Downloads 127
1916 The Potential in the Use of Building Information Modelling and Life-Cycle Assessment for Retrofitting Buildings: A Study Based on Interviews with Experts in Both Fields

Authors: Alex Gonzalez Caceres, Jan Karlshøj, Tor Arvid Vik

Abstract:

The life cycle of residential buildings is expected to span several decades, and 40% of European residential buildings have inefficient energy conservation measures. Existing buildings represent 20-40% of energy use and CO₂ emissions. Since net-zero-energy buildings are a short-term goal (to be achieved by EU countries after 2020), it is necessary to plan the next logical step, which is to prepare the existing outdated building stock for retrofitting into energy-efficient buildings. To accomplish this, two specialized and widespread tools can be used: Building Information Modelling (BIM) and life-cycle assessment (LCA). BIM and LCA are used by a variety of disciplines; both are able to represent and analyze constructions at different stages. The combination of these technologies could greatly improve retrofitting techniques by incorporating the carbon footprint and introducing a single database source for different material analyses, with the added possibility of considering further analysis approaches such as cost and energy savings. These measures are expected to enrich decision-making. The methodology is based on two main activities. The first involved data collection, accomplished through a literature review and interviews with experts in the retrofitting field and in BIM technologies; its results are presented as an evaluation checklist of BIM's ability to manage data and improve decision-making in retrofitting projects. The last activity involves an evaluation, using the results of the previous task, of how far the IFC format can support the requirements of each specialist and its use by third-party software. The results indicate that BIM/LCA has great potential to improve the retrofitting process in existing buildings, but some modifications must be made in order to meet the requirements of specialists in both retrofitting and LCA.

Keywords: retrofitting, BIM, LCA, energy efficiency

Procedia PDF Downloads 207
1915 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints

Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes

Abstract:

Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints, some of which are non-specific and present a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, basic clinical tools, in addition to a thorough clinical history, can be useful to assess the risk of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. Infants deemed by the emergency clinicians to have non-specific complaints or diagnoses were selected for analysis; those with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, the utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized. This chart, developed by expert clinicians in the local health department, categorizes important clinical signs into colour-coded zones as a visual cue to the serious implications of some abnormalities. An infant is regarded as SPOC positive when fulfilling one red-zone or two yellow-zone criteria, and the attending clinician is then prompted to investigate and treat for potential serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. Those admitted to the hospital for further management were more likely to be SPOC positive than the discharged infants (odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, the sepsis alert criteria on the SPOC were positive in a higher percentage of patients with serious infections (56.52%) than in those with mild conditions (15.89%) (p < 0.001). 
The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) for identifying serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%), but the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart serves as a useful tool in clinical practice to help identify and manage young sick infants in the ED effectively.
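For concreteness, the reported predictive values follow directly from the sensitivity, specificity, and prevalence via Bayes' theorem; a minimal sketch using only the figures quoted above (variable names are illustrative):

```python
# Predictive values from sensitivity, specificity and prevalence (Bayes' theorem).
# Figures are those reported above for the SPOC sepsis criteria.
sens, spec, prev = 0.565, 0.841, 0.174

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV = {ppv:.1%}")  # 42.8%, matching the reported value
print(f"NPV = {npv:.1%}")  # 90.2%
```

This makes explicit why the PPV is low despite moderate specificity: at a 17.4% prevalence, false positives from the large uninfected group dominate the true positives.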

Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart

Procedia PDF Downloads 237
1914 Examining Motivational Dynamics and L2 Learning Transitions of Air Cadets Between Year One and Year Two: A Retrodictive Qualitative Modelling Approach

Authors: Kanyaporn Sommeechai

Abstract:

Air cadets who aspire to become military pilots upon graduation undergo rigorous training at military academies. As first-year cadets are akin to civilian freshmen, they encounter numerous challenges within the seniority-based military academy system. Imposed routines, such as mandatory morning runs and restrictions on mobile phone usage for two semesters, have the potential to impact their learning process and motivation to study, including second language (L2) acquisition. This study aims to investigate the motivational dynamics and L2 learning transitions experienced by air cadets. To achieve this, a Retrodictive Qualitative Modelling approach will be employed, coupled with the adaptation of the three-barrier structure encompassing institutional factors, situational factors, and dispositional factors. Semi-structured interviews will be conducted to gather rich qualitative data. By analyzing and interpreting the collected data, this research seeks to shed light on the motivational factors that influence air cadets' L2 learning journey. The three-barrier structure will provide a comprehensive framework to identify and understand the institutional, situational, and dispositional factors that may impede or facilitate their motivation and language learning progress. Moreover, the study will explore how these factors interact and shape cadets' motivation and learning experiences. The outcomes of this research will yield fundamental data that can inform strategies and interventions to enhance the motivation and language learning outcomes of air cadets. By better understanding their motivational dynamics and transitions, educators and institutions can create targeted initiatives, tailored pedagogical approaches, and supportive environments that effectively inspire and engage air cadets as L2 learners.

Keywords: second language, education, motivational dynamics, learning transitions

Procedia PDF Downloads 59
1913 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained on historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinate) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on publicly open satellite Earth Observation and geospatial data and is trained on historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level predictions, and to provide qualitative insights into the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent. 
The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance; in contrast, the raw number of sampling points is not informative about performance unless the size of the area is also taken into account. Additionally, we observed that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
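The point-to-area aggregation described above can be sketched in a few lines; the predictor and the sampler below are toy stand-ins (the actual model uses EO and geomorphological covariates, and the study samples with a Poisson hard-core process rather than uniformly):

```python
import random

def area_abundance(point_predictor, area_sampler, n_samples=200, seed=42):
    """Estimate the mean abundance of an area by averaging point-level
    predictions at randomly sampled locations inside it."""
    random.seed(seed)
    preds = [point_predictor(*area_sampler()) for _ in range(n_samples)]
    return sum(preds) / len(preds)

# Hypothetical stand-ins for illustration only:
predictor = lambda x, y: 10 + 2 * x + y               # toy point-level model
sampler = lambda: (random.random(), random.random())  # uniform over unit square

print(area_abundance(predictor, sampler))  # close to the true area mean of 11.5
```

The estimate converges to the spatial mean of the point predictor over the area, which is why sample density rather than raw sample count drives performance.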

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 135
1912 Is School Misbehavior a Decision: Implications for School Guidance

Authors: Rachel C. F. Sun

Abstract:

This study examined the predictive effects of moral competence, prosocial norms and positive behavior recognition on school misbehavior among Chinese junior secondary school students. Results of multiple regression analysis showed that students were more likely to misbehave in school when they had lower levels of moral competence and prosocial norms, and when they perceived their positive behavior being less likely recognized. Practical implications were discussed on how to guide students to make the right choices to behave appropriately in school. Implications for future research were also discussed.

Keywords: moral competence, positive behavior recognition, prosocial norms, school misbehavior

Procedia PDF Downloads 374
1911 Thermal Effect in Power Electrical for HEMTs Devices with InAlN/GaN

Authors: Zakarya Kourdi, Mohammed Khaouani, Benyounes Bouazza, Ahlam Guen-Bouazza, Amine Boursali

Abstract:

In this paper, we evaluate the thermal effect in high-performance InAlN/GaN heterostructure high electron mobility transistors (HEMTs) with a gate length of 30 nm. We also present the analysis and simulation of these devices and how they can be used in different applications. The TCAD-Silvaco simulator was used to obtain good predictive results for the DC, AC, and RF characteristics. The devices offered a maximum drain current of 0.67 A, a transconductance of 720 mS/mm, a unilateral power gain of 180 dB, a cutoff frequency of 385 GHz, and a maximum frequency of 810 GHz. These results confirm the feasibility of using InAlN/GaN HEMTs in high-power amplifiers as well as in thermally demanding environments.

Keywords: HEMT, thermal effect, Silvaco, InAlN/GaN

Procedia PDF Downloads 456
1910 The Comparison between Modelled and Measured Nitrogen Dioxide Concentrations in Cold and Warm Seasons in Kaunas

Authors: A. Miškinytė, A. Dėdelė

Abstract:

Road traffic is one of the main sources of air pollution in urban areas and is associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a traffic-related air pollutant whose concentrations tend to be higher near highways, along busy roads, and in city centres; exceedances are mainly observed at air quality monitoring stations located close to traffic. Atmospheric dispersion models can be used to examine emissions from many different sources and to predict the concentration of pollutants emitted from these sources into the atmosphere. The aim of the study was to compare nitrogen dioxide concentrations modelled using the ADMS-Urban dispersion model with data from the air quality monitoring network in the cold and warm seasons in Kaunas city. Modelled average seasonal concentrations of nitrogen dioxide for the year 2011 were verified against automatic air quality monitoring data from two stations in the city: a traffic station located near a busy street in an industrial district, and a background station far from the main sources of nitrogen dioxide pollution. The modelling results showed that the highest nitrogen dioxide concentration was both modelled and measured at the station located near the busy street, in the cold as well as the warm season. The modelled and measured nitrogen dioxide concentrations there were 25.7 and 25.2 µg/m3, respectively, in the cold season and 15.5 and 17.7 µg/m3 in the warm season. The lowest modelled and measured NO2 concentrations were determined at the background monitoring station: 12.2 and 13.3 µg/m3, respectively, in the cold season and 6.1 and 7.6 µg/m3 in the warm season. The comparison between the two stations showed that better agreement between modelled and measured NO2 concentrations was observed at the traffic monitoring station.
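The agreement between modelled and measured means reported above can be condensed into simple bias statistics; a minimal sketch using only the seasonal means from the abstract (the dictionary layout is illustrative):

```python
# Modelled vs measured seasonal NO2 means (µg/m³), as reported above.
results = {
    ("traffic", "cold"):    (25.7, 25.2),
    ("traffic", "warm"):    (15.5, 17.7),
    ("background", "cold"): (12.2, 13.3),
    ("background", "warm"): (6.1, 7.6),
}

for (station, season), (modelled, measured) in results.items():
    bias = modelled - measured          # positive = model over-predicts
    print(f"{station:10s} {season}: bias {bias:+.1f} µg/m³ "
          f"({100 * bias / measured:+.1f}%)")
```

Computed this way, the traffic station in the cold season shows the smallest relative bias, consistent with the conclusion that agreement was best there.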

Keywords: air pollution, nitrogen dioxide, modelling, ADMS-Urban model

Procedia PDF Downloads 399
1909 Expression of uPA, tPA, and PAI-1 in Calcified Aortic Valves

Authors: Abdullah M. Alzahrani

Abstract:

Our physiopathological assumption is that u-PA, t-PA, and PAI-1 are released by calcified aortic valves and play a role in the calcification of these valves. Sixty-five calcified aortic valves were collected from patients suffering from aortic stenosis. Each valve was incubated for 24 hours in culture medium, and the supernatants were used to measure u-PA, t-PA, and PAI-1 concentrations; valve calcification was evaluated using biphotonic absorptiometry. Aortic stenosis valves expressed normal plasminogen activator concentrations and overexpressed PAI-1 (mean u-PA, t-PA, and PAI-1 concentrations were, respectively, 1.69 ng/mL ± 0.80, 2.76 ng/mL ± 1.33, and 53.27 ng/mL ± 36.39). There was no correlation between u-PA and PAI-1 (r = 0.3), but t-PA and PAI-1 were strongly correlated with each other (r = 0.6). Overexpression of PAI-1 was proportional to the calcium content of the AS valves. Our results demonstrate a consistent increase of PAI-1 proportional to the calcification. The overexpression of PAI-1 may be useful as a predictive indicator in patients with aortic stenosis.

Keywords: aortic valve, PAI-1, tPA gene, uPA gene

Procedia PDF Downloads 462
1908 Reducing the Risk of Alcohol Relapse after Liver-Transplantation

Authors: Rebeca V. Tholen, Elaine Bundy

Abstract:

Background: Liver transplantation (LT) is considered the only curative treatment for end-stage liver disease (ESLD). The effects of alcoholism can cause irreversible liver damage, cirrhosis, and subsequent liver failure. Alcohol relapse after transplant occurs in 20-50% of patients and increases the risk of recurrent cirrhosis, organ rejection, and graft failure. Alcohol relapse after transplant has been identified as a problem among liver transplant recipients at a large urban academic transplant center in the United States. Transplantation reverses the complications of ESLD, but it does not treat the underlying alcoholism or reduce the risk of relapse after transplant. The purpose of this quality improvement project is to implement and evaluate the effectiveness of a High-Risk Alcoholism Relapse (HRAR) Scale to screen and identify patients at high risk for alcohol relapse after receiving an LT. Methods: The HRAR Scale is a predictive tool designed to determine the severity of alcoholism and the risk of relapse after transplant. The scale consists of the three variables identified as having the highest predictive power for early relapse: daily number of drinks, history of previous inpatient treatment for alcoholism, and number of years of heavy drinking. All adult liver transplant recipients at a large urban transplant center were screened with the HRAR Scale prior to hospital discharge. An ordinal score of zero to two is assigned for each variable, and the total score ranges from zero to six; scores of three to six are considered high risk. Results: Descriptive statistics revealed that 25 patients were newly transplanted and discharged from the hospital during an 8-week period. 40% of patients (n = 10) were identified as being at high risk for relapse and 60% at low risk (n = 15). The daily number of drinks was determined by alcohol content (1 drink = 15 g of ethanol) and the number of drinks per day. 60% of patients reported drinking 9-17 drinks per day, and 40% reported ≤ 9 drinks. 50% of high-risk patients reported drinking for ≥ 25 years, 40% for 11-25 years, and 10% for ≤ 11 years. For the number of inpatient treatments for alcoholism, 50% received inpatient treatment once, 20% more than once, and 30% reported never receiving inpatient treatment. The findings reveal the importance and value of a validated screening tool as a more efficient method than other screening methods alone. Integration of a structured clinical tool will help guide the drinking-history portion of the psychosocial assessment, and targeted interventions can be implemented for all high-risk patients. Conclusions: Our findings validate the effectiveness of utilizing the HRAR Scale to screen and identify patients at high risk for alcohol relapse post-LT. Recommendations to help maintain post-transplant sobriety include starting a transplant support group within the organization for all high-risk patients.
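The scoring logic of the HRAR Scale as described above can be sketched as follows; the band boundaries are inferred from the categories mentioned in the text (≤9 / 9–17 / >17 drinks per day, 0 / 1 / >1 inpatient treatments, ≤11 / 11–25 / >25 years), so the exact boundary handling is an assumption:

```python
def hrar_score(drinks_per_day, inpatient_treatments, years_heavy_drinking):
    """Total HRAR score (0-6): each variable contributes an ordinal 0-2
    sub-score; totals of 3-6 are classed as high risk.
    Band cut-offs here are assumptions inferred from the abstract."""
    drinks = 0 if drinks_per_day < 9 else (1 if drinks_per_day <= 17 else 2)
    treatments = min(inpatient_treatments, 2)
    years = 0 if years_heavy_drinking < 11 else (1 if years_heavy_drinking <= 25 else 2)
    total = drinks + treatments + years
    return total, "high risk" if total >= 3 else "low risk"

print(hrar_score(12, 1, 30))  # (4, 'high risk') under these assumed cut-offs
print(hrar_score(2, 0, 5))    # (0, 'low risk')
```

Encoding the scale this way makes the high-risk threshold (total ≥ 3) explicit and reproducible at the point of discharge screening.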

Keywords: alcoholism, liver transplant, quality improvement, substance abuse

Procedia PDF Downloads 104
1907 Multi-Particle Finite Element Modelling Simulation Based on Cohesive Zone Method of Cold Compaction Behavior of Laminar Al and NaCl Composite Powders

Authors: Yanbing Feng, Deqing Mei, Yancheng Wang, Zichen Chen

Abstract:

With the advantages of low volume density, high specific surface area, light weight, and good permeability, porous aluminum has the potential to be used in the automotive, railway, chemical, and construction industries. A layered powder sintering and dissolution method was developed to fabricate porous-surface Al structures with high efficiency. However, the densification mechanism during the cold compaction of laminar composite powders is still unclear. In this study, multi-particle finite element modelling (MPFEM) based on the cohesive zone method (CZM) is used to simulate the cold compaction behavior of laminar Al and NaCl composite powders. To obtain the densification mechanism, the macro and micro properties of the final compacts are characterized and analyzed. The robustness and accuracy of the numerical model are first verified by experimental results and data fitting. The results indicate that CZM-based multi-particle FEM is an effective way to simulate the compaction of the laminar powders and the fracture process of the NaCl powders. In the compaction of the laminar powders, the voids are mainly filled by particle rearrangement, plastic deformation of the Al powders, and brittle fracture of the NaCl powders. Large stress is mainly concentrated within the NaCl powders, and a contact force network is formed. Al powder near the NaCl powder or the mold shows a larger stress distribution on its contact surface. The densification process in the cold compaction of laminar Al and NaCl composite powders is thus successfully analyzed by the CZM-based multi-particle FEM.

Keywords: cold compaction, cohesive zone, multi-particle FEM, numerical modeling, powder forming

Procedia PDF Downloads 141
1906 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is widely used and an accepted measure of energy expenditure. Its principal determinant is body mass. However, this parameter is also correlated with a variety of other factors. The objective of this study is to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI) values. 276 adults were included into the scope of this study. Their age, height and weight values were recorded. Five groups were designed based on their BMI values. First group (n = 85) was composed of individuals with BMI values varying between 18.5 and 24.9 kg/m2. Those with BMI values varying from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2, > 40.0 kg/m2 were included in Group 3 (n = 53), 4 (n = 28) and 5 (n = 20), respectively. The most commonly used equations to be compared with the measured BMR values were selected. For this purpose, the values were calculated by the use of four equations to predict BMR values, by name, introduced by Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen and Mifflin. Descriptive statistics, ANOVA, post-Hoc Tukey and Pearson’s correlation tests were performed by a statistical program designed for Windows (SPSS, version 16.0). p values smaller than 0.05 were accepted as statistically significant. Mean ± SD of groups 1, 2, 3, 4 and 5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2 and 2028.0 ± 412.1, respectively. Upon evaluation of the comparison of means among groups, differences were highly significant between Group 1 and each of the remaining four groups. The values were increasing from Group 2 to Group 5. However, differences between Group 2 and Group 3, Group 3 and Group 4, Group 4 and Group 5 were not statistically significant. 
These insignificant differences disappeared in the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU, and Owen; for Mifflin, the insignificance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the values estimated from the prediction equations, the lowest correlations were observed among individuals within the normal BMI range, and the highest in individuals with BMI values between 30.0 and 34.9 kg/m2. Correlations between measured BMR values and BMR values calculated by FAO/WHO/UNU as well as by Owen were the same and the highest. In all groups, the highest correlations were observed between the BMR values calculated from the Mifflin and the Harris and Benedict equations, both of which use age as an additional parameter. In conclusion, the close resemblance of the FAO/WHO/UNU and Owen equations was pointed out; however, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values. Besides, the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation and may be used when measured BMR values are not available.
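To illustrate how such prediction equations differ, two commonly published forms (the Mifflin-St Jeor equation and the original Harris-Benedict equation) can be sketched; the abstract does not give the exact coefficient sets used in the study, so these are the versions usually cited in the literature:

```python
def bmr_mifflin(weight_kg, height_cm, age_yr, male=True):
    # Mifflin-St Jeor equation (kcal/day), commonly published form.
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + (5 if male else -161)

def bmr_harris_benedict(weight_kg, height_cm, age_yr, male=True):
    # Original Harris-Benedict equation (kcal/day), commonly published form.
    if male:
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# Example: a 30-year-old man, 80 kg, 180 cm
print(round(bmr_mifflin(80, 180, 30)))          # 1780
print(round(bmr_harris_benedict(80, 180, 30)))  # 1864
```

Both equations use age as an additional parameter beyond body mass, which is consistent with the high mutual correlation between the Mifflin and Harris-Benedict estimates reported above.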

Keywords: adult, basal metabolic rate, fao/who/unu, obesity, prediction equations

Procedia PDF Downloads 122
1905 Service Interactions Coordination Using a Declarative Approach: Focuses on Deontic Rule from Semantics of Business Vocabulary and Rules Models

Authors: Nurulhuda A. Manaf, Nor Najihah Zainal Abidin, Nur Amalina Jamaludin

Abstract:

Coordinating service interactions is a vital part of developing distributed applications that are built as networks of autonomous participants (e.g., software components, web services, online resources) and involve collaboration among a diverse set of participant services hosted by different providers. The complexity of coordinating service interactions underlines how important suitable techniques and approaches are for designing and coordinating the interaction between participant services, so that the overall goal of the collaboration is achieved. The objective of this research is to develop the capability of steering a complex service interaction towards a desired outcome; to this end, an efficient technique for modelling, generating, and verifying the coordination of service interactions is developed. The developed model describes service interactions using the service choreography approach and follows a declarative approach advocated by an Object Management Group (OMG) standard, the Semantics of Business Vocabulary and Rules (SBVR). This model, namely the SBVR model for service choreographies, focuses on declarative deontic rules expressing both obligation and prohibition, which are particularly useful for coordinating service interactions. The generated SBVR model is then formulated and transformed into an Alloy model, and the Alloy Analyzer is used to verify it. The transformation of SBVR into Alloy makes it possible to automatically generate the corresponding coordination of service interactions (service choreography), producing an immediate execution instance that satisfies the constraints of the specification and verifying whether a specific request can be realised in the generated choreography.
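The core idea is that deontic rules constrain which interaction traces a choreography admits: an obligation must be fulfilled by some message in the trace, and a prohibited message must never occur. As an illustrative sketch only (the rule encoding below is hypothetical Python, not the authors' SBVR notation or Alloy model), a trace of participant messages can be checked against such declarative constraints like this:

```python
# A rule is (modality, description, predicate). An obligation requires at
# least one message in the trace to satisfy the predicate; a prohibition
# requires that no message satisfies it.
def check_trace(trace, rules):
    violations = []
    for modality, describe, pred in rules:
        hits = [msg for msg in trace if pred(msg)]
        if modality == "obligation" and not hits:
            violations.append(f"obligation violated: {describe}")
        elif modality == "prohibition" and hits:
            violations.append(f"prohibition violated: {describe}")
    return violations

# Hypothetical buyer/seller choreography; messages are (sender, receiver, kind).
rules = [
    ("obligation", "seller must send an invoice",
     lambda m: m == ("seller", "buyer", "invoice")),
    ("prohibition", "buyer must not cancel the order",
     lambda m: m[2] == "cancel"),
]

trace = [("buyer", "seller", "order"), ("seller", "buyer", "invoice")]
violations = check_trace(trace, rules)  # empty list: trace satisfies the rules
```

In the paper's pipeline, the analogous check is not hand-coded: the SBVR rules are transformed into Alloy constraints, and the Alloy Analyzer searches for satisfying (or violating) instances automatically.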

Keywords: service choreography, service coordination, behavioural modelling, complex interactions, declarative specification, verification, model transformation, semantics of business vocabulary and rules, SBVR

Procedia PDF Downloads 137
1904 Introduction of Mass Rapid Transit System and Its Impact on Para-Transit

Authors: Khalil Ahmad Kakar

Abstract:

In developing countries, the growth of automobile traffic and of low-capacity public transport (para-transit), which creates congestion, pollution, noise, and traffic accidents, is a critical problem. These issues are under analysis by planners seeking to untangle the problem and propose sustainable urban public transport systems. Kabul City is one such urban area, whose inhabitants suffer from the lack of an adequate and user-friendly public transport system. The city is overcrowded, with a population of around 4.5 million, and para-transit is the dominant public transit mode, offering a very poor level of service with low-capacity vehicles (6-20 passengers). Therefore, after detailed investigation, this study proposes a bus rapid transit (BRT) system for Kabul City, aimed at reducing the role of informal transport and decreasing congestion. The research covers three parts. In the first part, aggregate travel demand modelling (the four-step model) is applied to determine the number of para-transit users and to assess a BRT network based on the corridors with the highest passenger demand for public transport. In the second part, a stated preference (SP) survey and a binary logit model are used to estimate the utility of the existing para-transit mode and the planned BRT system. Finally, the impact of the proposed BRT system on para-transit is evaluated. Based on high travel demand, the outcome suggests a 10 km network for the proposed BRT system, originating in the tenth district and ending at Kabul International Airport. Moreover, the results of the disaggregate travel mode-choice model, based on the SP survey and the logit model, indicate that the proposed mass rapid transit system has higher utility, with a significant impact in terms of reducing para-transit use.
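The mode-choice step rests on the standard binary logit formulation: each alternative receives a linear-in-parameters utility, and the probability of choosing one alternative is the logistic function of the utility difference. A minimal sketch with hypothetical coefficients and attribute values (the paper's estimated parameters are not reproduced here):

```python
import math

def logit_prob_brt(beta, x_brt, x_para):
    """P(choose BRT) in a binary logit: logistic of the utility difference."""
    v_brt = sum(b * x for b, x in zip(beta, x_brt))    # systematic utility of BRT
    v_para = sum(b * x for b, x in zip(beta, x_para))  # systematic utility of para-transit
    return 1.0 / (1.0 + math.exp(-(v_brt - v_para)))

# Hypothetical coefficients for [in-vehicle time (min), fare, comfort dummy].
beta = [-0.05, -0.02, 0.8]
p = logit_prob_brt(beta, x_brt=[25, 20, 1], x_para=[40, 15, 0])
# BRT is faster and more comfortable but costlier, so p > 0.5 here.
```

The same structure underlies the paper's finding: if the estimated systematic utility of BRT exceeds that of para-transit for most SP respondents, the predicted BRT share dominates.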

Keywords: BRT, para-transit, travel demand modelling, Kabul City, logit model

Procedia PDF Downloads 171
1903 Application of Granular Computing Paradigm in Knowledge Induction

Authors: Iftikhar U. Sikder

Abstract:

This paper illustrates an application of the granular computing approach, namely rough set theory, to data mining. The paper outlines the formalism of granular computing and elucidates the mathematical underpinnings of rough set theory, which has been widely used by the data mining and machine learning communities. A real-world application is illustrated, and the classification performance is compared with that of other contending machine learning algorithms. The predictive performance of the rough set rule induction model shows comparable success with respect to the other algorithms.
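At the heart of rough set theory is the approximation of a target concept by attribute-indiscernibility granules: the lower approximation collects the granules entirely contained in the concept (objects certainly in it), and the upper approximation collects the granules that overlap it (objects possibly in it). A minimal sketch, with an invented toy dataset for illustration:

```python
from collections import defaultdict

def approximations(objects, attrs, concept):
    """Lower/upper approximation of a concept set under the
    indiscernibility relation induced by the given attributes."""
    granules = defaultdict(set)
    for obj, values in objects.items():
        # Objects with identical values on attrs fall into one granule.
        granules[tuple(values[a] for a in attrs)].add(obj)
    lower, upper = set(), set()
    for block in granules.values():
        if block <= concept:
            lower |= block   # granule wholly inside the concept: certain members
        if block & concept:
            upper |= block   # granule overlapping the concept: possible members
    return lower, upper

# Toy information table (hypothetical).
objects = {
    "o1": {"temp": "high", "wind": "low"},
    "o2": {"temp": "high", "wind": "low"},   # indiscernible from o1
    "o3": {"temp": "low",  "wind": "low"},
}
concept = {"o1", "o3"}  # e.g. objects labelled with the target class
lower, upper = approximations(objects, ["temp", "wind"], concept)
```

The gap between the two approximations (here o1 and o2, which share all attribute values but differ in class) is the boundary region; rule induction and reduct computation build on exactly these sets.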

Keywords: concept approximation, granular computing, reducts, rough set theory, rule induction

Procedia PDF Downloads 519