Search results for: proportional hazard model
15579 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured from ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimized with a proper algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC). MAC is derived from eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling. Experimental measurement error is introduced into the synthetic ‘experimental’ data by adding random noise following a Gaussian distribution. The efficiency and robustness of this method are demonstrated through three examples: a truss, a beam and a frame problem. The results show that the TLBO algorithm is efficient in detecting both the location and the severity of damage using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
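As an aside for readers unfamiliar with TLBO, the teacher and learner phases it uses can be sketched in a few lines. The modal error function built from frequencies and MAC values is replaced here by a hypothetical quadratic stand-in, and all parameter values are illustrative, not those of the paper.

```python
import random

def tlbo_minimize(f, bounds, pop_size=20, iters=200, seed=0):
    """Minimal Teaching-Learning-Based Optimization (TLBO) sketch.
    f: objective to minimize; bounds: list of (lo, hi) per variable."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution, away from the mean
        best = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            cand = clip([x[d] + rng.random() * (best[d] - tf * mean[d])
                         for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
        # Learner phase: pairwise interaction between random learners
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1 if f(pop[j]) < f(x) else -1  # move toward a better learner
            cand = clip([x[d] + sign * rng.random() * (pop[j][d] - x[d])
                         for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

# Stand-in error function: distance of stiffness-reduction factors from a
# hypothetical "true" damage state (one factor per structural member)
true_damage = [0.2, 0.0, 0.35]
error = lambda x: sum((a - b) ** 2 for a, b in zip(x, true_damage))
solution = tlbo_minimize(error, [(0.0, 1.0)] * 3)
```

Minimizing the error drives the estimated stiffness-reduction factors toward the introduced damage, mirroring how the updated finite element model reveals damage location and severity.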
Procedia PDF Downloads 216
15578 Comparing Groundwater Fluoride Level with WHO Guidelines and Classifying At-Risk Age Groups; Based on Health Risk Assessment
Authors: Samaneh Abolli, Kamyar Yaghmaeian, Ali Arab Aradani, Mahmood Alimohammadi
Abstract:
The main route of fluoride uptake is drinking water. Fluoride intake in the acceptable range (0.5-1.5 mg L⁻¹) is beneficial to the body, but excessive consumption can have irreversible health effects. To compare fluoride concentrations with the WHO guidelines, 112 water samples were taken from groundwater aquifers in 22 villages of Garmsar County, in the central part of Iran, during 2018 to 2019. Fluoride concentration was measured by the SPADNS method, and its non-carcinogenic impacts were calculated using the estimated daily intake (EDI) and hazard quotient (HQ). The statistical population was divided into four categories: infants, children, teenagers, and adults. Linear regression and Spearman rank correlation tests were used to investigate the relationship between well depth and fluoride concentration in the water samples. The annual mean concentrations of fluoride in 2018 and 2019 were 0.75 and 0.64 mg L⁻¹, and the mean fluoride concentrations in the samples from the cold and hot seasons of the studied years were 0.709 and 0.689 mg L⁻¹, respectively. The amount of fluoride in 27% of the samples in both years was less than the acceptable minimum (0.5 mg L⁻¹). Also, 11% of the samples in 2018 (6 samples) had fluoride levels higher than 1.5 mg L⁻¹. The HQ showed that children were the most vulnerable; teenagers and adults were in the next ranks, respectively. Statistical tests showed an inverse and significant correlation (R² = 0.02, p < 0.0001) between well depth and fluoride content. The border between the usefulness and harmfulness of fluoride is very narrow and requires extensive studies.
Keywords: fluoride, groundwater, health risk assessment, hazard quotient, Garmsar
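The EDI and HQ used above follow the standard health-risk-assessment formulas (EDI = C × IR / BW, HQ = EDI / RfD). The sketch below uses assumed intake rates, body weights and an assumed fluoride reference dose of 0.06 mg/kg/day; none of these figures are taken from the study.

```python
def edi(conc_mg_per_l, intake_l_per_day, body_weight_kg):
    """Estimated Daily Intake via drinking water (mg per kg body weight per day)."""
    return conc_mg_per_l * intake_l_per_day / body_weight_kg

def hazard_quotient(edi_value, rfd=0.06):
    """HQ = EDI / RfD; HQ > 1 flags potential non-carcinogenic risk.
    The RfD of 0.06 mg/kg/day for fluoride is an assumed value."""
    return edi_value / rfd

# Illustrative exposure parameters (intake L/day, body weight kg) per age group
groups = {
    "infant": (0.8, 10),
    "children": (1.5, 15),
    "teenagers": (2.0, 50),
    "adults": (2.5, 70),
}
conc = 0.7  # mg/L, close to the reported annual means
hq = {g: hazard_quotient(edi(conc, ir, bw)) for g, (ir, bw) in groups.items()}
```

With these assumed parameters, children end up with the highest quotient, consistent with the ranking reported in the abstract.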
Procedia PDF Downloads 72
15577 Causes and Effects of the 2012 Flood Disaster on Affected Communities in Nigeria
Authors: Abdulquadri Ade Bilau, Richard Ajayi Jimoh, Adejoh Amodu Adaji
Abstract:
The increasing exposure to natural hazards has continued to severely impair the built environment, causing huge fatalities and mass damage and destruction of housing and civil infrastructure, while leaving psychosocial impacts on affected communities. The 2012 flood disaster in Nigeria, which affected over 7 million inhabitants in 30 of the 36 states, resulted in 363 recorded fatalities, with about 600,000 houses and a number of civil infrastructure assets damaged or destroyed. In Kogi State, over 500,000 people were displaced in 9 of the 21 local governments affected, while Ibaji and Lokoja local governments were worst hit. This study identifies the causes of the 2012 flood disaster and its effects on housing and livelihoods. Personal observation and a questionnaire survey were the instruments used in carrying out the study, and the data collected were analysed using descriptive statistical tools. Findings show that the 2012 flood disaster was aggravated by gaps in hydrological data, sudden dam failure, and inadequate drainage capacity to reduce flood risk. The study recommends that communities residing along the river banks in Lokoja and Ibaji LGAs be adequately educated on their exposure to flood hazard, and that mitigation and risk reduction measures, such as adequate drainage channels, be constructed in affected communities.
Keywords: flood, hazards, housing, risk reduction, vulnerability
Procedia PDF Downloads 267
15576 Microwave Dielectric Relaxation Study of Diethanolamine with Triethanolamine from 10 MHz-20 GHz
Authors: A. V. Patil
Abstract:
The microwave dielectric relaxation of the diethanolamine-triethanolamine binary mixture has been studied over the frequency range of 10 MHz to 20 GHz, at various temperatures, using the time domain reflectometry (TDR) method for 11 concentrations of the system. The present work reveals molecular interactions between the same multi-functional groups (−OH and –NH2) of the alkanolamines (diethanolamine and triethanolamine) using different models, such as the Debye model, excess model, and Kirkwood model. The dielectric parameters, viz. the static dielectric constant (ε0) and relaxation time (τ), have been obtained with the Debye equation, characterized by a single relaxation time without a relaxation time distribution, by the least squares fit method.
Keywords: diethanolamine, excess properties, Kirkwood properties, time domain reflectometry, triethanolamine
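For reference, the single-relaxation Debye model mentioned above, and the least-squares idea behind extracting τ, can be sketched as follows. The permittivity values and the grid-search fit are illustrative assumptions, not the TDR procedure of the paper.

```python
import math

def debye(omega, eps_static, eps_inf, tau):
    """Single-relaxation Debye model: eps*(w) = eps_inf + (eps0 - eps_inf) / (1 + j*w*tau)."""
    return eps_inf + (eps_static - eps_inf) / (1 + 1j * omega * tau)

def fit_tau(freqs_hz, data, eps_static, eps_inf, taus):
    """Least-squares grid search for the relaxation time tau (sketch of the fit idea)."""
    def sse(tau):
        return sum(abs(debye(2 * math.pi * f, eps_static, eps_inf, tau) - d) ** 2
                   for f, d in zip(freqs_hz, data))
    return min(taus, key=sse)

# Synthetic spectrum for assumed parameters (illustrative, not measured values)
eps0, eps_inf, tau_true = 25.0, 3.0, 50e-12  # tau = 50 ps
freqs = [10e6 * (2000 ** (i / 30)) for i in range(31)]  # 10 MHz - 20 GHz, log-spaced
spectrum = [debye(2 * math.pi * f, eps0, eps_inf, tau_true) for f in freqs]
tau_fit = fit_tau(freqs, spectrum, eps0, eps_inf, [t * 1e-12 for t in range(10, 100)])
```

A real fit would also free ε0 and ε∞ and use a proper nonlinear least-squares routine, but the residual-minimization idea is the same.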
Procedia PDF Downloads 305
15575 Performance Evaluation of the Classic seq2seq Model versus a Proposed Semi-supervised Long Short-Term Memory Autoencoder for Time Series Data Forecasting
Authors: Aswathi Thrivikraman, S. Advaith
Abstract:
The study is aimed at designing encoders for deciphering intricacies in time series data by redescribing the dynamics operating on a lower-dimensional manifold. A semi-supervised LSTM autoencoder is devised and investigated to see if the latent representation of the time series data can yield better forecasts. End-to-end training of the LSTM autoencoder, together with another LSTM network connected to the latent space, forces the hidden states of the encoder to represent the most meaningful latent variables relevant for forecasting. Furthermore, the study compares the predictions with those of a traditional seq2seq model.
Keywords: LSTM, autoencoder, forecasting, seq2seq model
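The gating computation an LSTM encoder applies at each time step, which produces the latent representation discussed above, can be sketched with a single NumPy cell. The weights here are random and untrained, so this only illustrates the computation, not the trained autoencoder.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b                 # pre-activations for all four gates
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    g = np.tanh(z[2 * n:3 * n])           # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * n:]))      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def encode(series, n_hidden=4, seed=0):
    """Encoder half of an autoencoder: fold a 1-D series into a latent vector h."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.5, (4 * n_hidden, 1))
    U = rng.normal(0, 0.5, (4 * n_hidden, n_hidden))
    b = np.zeros(4 * n_hidden)
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x_t in series:
        h, c = lstm_step(np.array([x_t]), h, c, W, U, b)
    return h  # latent representation fed to the decoder / forecasting LSTM
```

In the architecture described in the abstract, this final hidden state is what both the decoder and the auxiliary forecasting LSTM consume, which is what forces it to carry forecast-relevant information.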
Procedia PDF Downloads 157
15574 Influence of Aluminium on Grain Refinement in As-Rolled Vanadium-Microalloyed Steels
Authors: Kevin Mark Banks, Dannis Rorisang Nkarapa Maubane, Carel Coetzee
Abstract:
The influence of aluminium content, reheating temperature, and sizing (final) strain on the as-rolled microstructure was systematically investigated in vanadium-microalloyed and C-Mn plate steels. Reheating, followed by hot rolling and air cooling simulations, was performed on steels containing a range of aluminium and nitrogen contents. Natural air cooling profiles, corresponding to 6 and 20 mm thick plates, were applied. The austenite and ferrite/pearlite microstructures were examined using light optical microscopy. Precipitate species and volume fractions were determined on selected specimens. No influence of aluminium content below 0.08% was found on the as-rolled grain size in any of the steels studied. A low-Al V-steel produced the coarsest initial austenite grain size due to AlN dissolution at low temperatures leading to abnormal grain growth. An Al-free V-N steel had the finest initial microstructure. Although the as-rolled grain size for 20 mm plate was similar in all steels tested, the grain distribution was relatively mixed. The final grain size in 6 mm plate was similar for most compositions; the exception was an as-cast V low-N steel, where the size of the second phase was inversely proportional to the sizing strain. This was attributed to both segregation and a low VN volume fraction available for effective pinning of austenite grain boundaries during cooling. Increasing the sizing strain refined the microstructure significantly in all steels.
Keywords: aluminium, grain size, nitrogen, reheating, sizing strain, steel, vanadium
Procedia PDF Downloads 155
15573 Hierarchical Tree Long Short-Term Memory for Sentence Representations
Authors: Xiuying Wang, Changliang Li, Bo Xu
Abstract:
A fixed-length feature vector is required by many machine learning algorithms in the NLP field. Word embeddings have been very successful at learning lexical information. However, they cannot capture the compositional meaning of sentences, which prevents them from supporting a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split a sentence into three hierarchies: short phrase, long phrase and full sentence level. The HTLSTM model gives our algorithm the potential to fully consider the hierarchical information and long-term dependencies of language. We design experiments on both English and Chinese corpora to evaluate our model on the sentiment analysis task. The results show that our model significantly outperforms several existing state-of-the-art approaches.
Keywords: deep learning, hierarchical tree long short-term memory, sentence representation, sentiment analysis
Procedia PDF Downloads 349
15572 'Gender' and 'Gender Equalities': Conceptual Issues
Authors: Moustafa Ali
Abstract:
The aim of this paper is to discuss and question some of the widely accepted concepts within the conceptual framework of gender from terminological, scientific, and Muslim cultural perspectives, and to introduce a new definition and a model of gender in the Arab and Muslim societies. This paper, therefore, uses a generic methodology and document analysis and comes in three sections and a conclusion. The first section discusses some of the terminological issues in the conceptual framework of gender. The second section highlights scientific issues, introduces a definition and a model of gender, whereas the third section offers Muslim cultural perspectives on some issues related to gender in the Muslim world. The paper, then, concludes with findings and recommendations reached so far.
Keywords: gender definition, gender equalities, sex-gender separability, fairness-based model of gender
Procedia PDF Downloads 140
15571 A Mathematical Programming Model for Lot Sizing and Production Planning in Multi-Product Companies: A Case Study of Azar Battery Company
Authors: Farzad Jafarpour Taher, Maghsud Solimanpur
Abstract:
Production planning is one of the complex tasks in multi-product firms that produce a wide range of products. Since resources in mass production companies are limited and different products use common resources, there must be a careful plan so that firms can respond to customer needs efficiently. Azar Battery Company is a firm that provides twenty types of products to its customers. Therefore, careful planning must be performed in this company. In this research, the current conditions of Azar Battery Company were investigated in order to provide a mathematical programming model to determine the optimum production rates of its products. The production system of this company is multi-stage, multi-product and multi-period. This system is studied over a one-year planning horizon, considering machine capacity and warehouse space limitations. The problem has been modeled as a linear programming model with deterministic demand in which shortage is not allowed. The objective function of this model is to minimize costs (including raw material, assembly, energy, packaging, and holding costs). Finally, this model has been solved by Lingo software using the branch and bound approach. Since the computation time was very long, the solver was interrupted, and the feasible solution obtained was used for comparison. The proposed model's solution costs have been compared to the company's real data. This non-optimal solution reduces the total production costs of the company by about 35%.
Keywords: multi-period, multi-product production, multi-stage, production planning
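The company's model is a multi-product, capacitated LP solved in Lingo; as a minimal, hypothetical illustration of the core lot-sizing trade-off it encodes (setup cost versus holding cost, with demand fully met as in the no-shortage assumption), here is the classic single-product Wagner-Whitin dynamic program:

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Optimal uncapacitated single-product lot sizing by dynamic programming.
    Demand must be fully met (no shortage allowed)."""
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[t] = min cost to cover periods 0..t-1
    choice = [0] * (T + 1)
    for t in range(1, T + 1):
        for s in range(t):              # last production run covers periods s..t-1
            cost = best[s] + setup_cost
            # holding cost for demand produced in period s but consumed later
            cost += sum(hold_cost * (k - s) * demand[k] for k in range(s, t))
            if cost < best[t]:
                best[t], choice[t] = cost, s
    # Recover the production plan from the recorded choices
    plan = [0] * T
    t = T
    while t > 0:
        s = choice[t]
        plan[s] = sum(demand[s:t])
        t = s
    return best[T], plan
```

For demand [10, 10, 10] with a setup cost of 30 and a unit holding cost of 1, producing everything in the first period (cost 60) beats three separate runs (cost 90), which is exactly the batching trade-off the full LP balances across twenty products.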
Procedia PDF Downloads 100
15570 Corrosivity of Smoke Generated by Polyvinyl Chloride and Polypropylene with Different Mixing Ratios towards Carbon Steel
Authors: Xufei Liu, Shouxiang Lu, Kim Meow Liew
Abstract:
Because a relatively small fire can cause damage by smoke corrosion far exceeding the thermal fire damage, it has been recognized that the corrosion of metal exposed to smoke atmospheres is a significant fire hazard, beyond toxicity or evacuation considerations. Since the burning materials in an actual fire are often mixtures of combustible matter, a quantitative study of the corrosivity of smoke produced by the combustion of mixtures is more conducive to applying the basic theory in actual engineering. In this paper, carbon steel samples were exposed to smoke generated by polyvinyl chloride and polypropylene, two common combustibles in industrial plants, mixed in different ratios, under high humidity for 120 hours. The separate and combined corrosive effects of the smoke were subsequently examined by weight loss measurement, scanning electron microscopy, energy dispersive spectroscopy and X-ray diffraction. It was found that, although the corrosivity of smoke from polypropylene was much smaller than that of smoke from polyvinyl chloride, smoke from polypropylene enhanced the main corrosive effect of smoke from polyvinyl chloride on carbon steel. Furthermore, the corrosion kinetics of carbon steel under smoke were found to obey a power-law function. Possible corrosion mechanisms were also proposed. This analysis helps to provide basic information for the determination of smoke damage and timely rescue after a fire.
Keywords: corrosion kinetics, corrosion mechanism, mixed combustible, SEM/EDS, smoke corrosivity, XRD
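Power-law kinetics of the form W = k·t^n, as reported above, are typically fitted by linear least squares in log-log coordinates. A sketch with synthetic (not measured) weight-loss data:

```python
import math

def fit_power_law(t, w):
    """Fit W = k * t**n by linear least squares on log W = log k + n log t."""
    xs = [math.log(ti) for ti in t]
    ys = [math.log(wi) for wi in w]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    k = math.exp(ybar - n * xbar)
    return k, n

# Hypothetical exposure data (hours, weight loss), not the measured values
hours = [24, 48, 72, 96, 120]
loss = [0.30 * h ** 0.6 for h in hours]   # synthetic data following W = 0.30 * t^0.6
k, n = fit_power_law(hours, loss)
```

An exponent n below 1 indicates a corrosion rate that slows as the product layer grows, which is one way such kinetics are commonly interpreted.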
Procedia PDF Downloads 220
15569 Logistics Model for Improving Quality in Railway Transport
Authors: Eva Nedeliakova, Juraj Camaj, Jaroslav Masek
Abstract:
This contribution focuses on a methodology for identifying levels of quality and improving quality through a new logistics model in railway transport. It is oriented towards the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process within the logistics chain can be taken into account. Various models describe the improvement of quality with emphasis on the time factor throughout the whole transportation logistics chain. The quality of services in railway transport can be determined from the existing level of service quality by detecting the causes of dissatisfaction among both employees and customers, in order to uncover strengths and weaknesses. This new logistics model is able to recognize critical processes in the logistics chain. It includes a service quality rating that must respect the specific properties of services, which are unrepeatability, impalpability, consumption at the time they are provided, and particularly changeability, which is a significant factor in the conditions of rail transport as well. These peculiarities influence service quality in view of constantly increasing requirements, and they result in new ways of finding progressive attitudes towards service quality rating.
Keywords: logistics model, quality, railway transport
Procedia PDF Downloads 572
15568 Simple Multiple-Attribute Rating Technique for Optimal Decision-Making Model on Selecting Best Spiker of World Grand Prix
Authors: Chen Chih-Cheng, Chen I-Cheng, Lee Yung-Tan, Kuo Yen-Whea, Yu Chin-Hung
Abstract:
The purpose of this study is to construct a model for selecting the best spike player in a top world volleyball tournament. The data consisted of the records of the 2013 World Grand Prix published by the International Volleyball Federation (FIVB). The Simple Multiple-Attribute Rating Technique (SMART) was used as the optimal decision-making model for best spike player selection. The research results showed that the best spike player ranking by SMART differs from the ranking by the FIVB. The results demonstrate the effectiveness and feasibility of the proposed model.
Keywords: simple multiple-attribute rating technique, World Grand Prix, best spike player, International Volleyball Federation
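A minimal SMART sketch: min-max normalize each attribute and rank by the weighted sum of normalized scores. The attributes, weights and player statistics below are hypothetical, not the 2013 World Grand Prix records.

```python
def smart_rank(players, weights):
    """Simple Multiple-Attribute Rating Technique: min-max normalize each
    attribute, then rank by the weighted sum of normalized scores."""
    attrs = list(weights)
    lo = {a: min(p[a] for p in players.values()) for a in attrs}
    hi = {a: max(p[a] for p in players.values()) for a in attrs}
    def score(p):
        return sum(weights[a] * (p[a] - lo[a]) / (hi[a] - lo[a]) for a in attrs)
    return sorted(players, key=lambda name: score(players[name]), reverse=True)

# Hypothetical spiking statistics (illustrative only)
players = {
    "A": {"kills": 150, "efficiency": 0.42},
    "B": {"kills": 180, "efficiency": 0.35},
    "C": {"kills": 120, "efficiency": 0.48},
}
ranking = smart_rank(players, {"kills": 0.5, "efficiency": 0.5})
```

Note that the chosen weights encode the decision-maker's priorities, which is why a SMART ranking can legitimately differ from an official single-statistic ranking.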
Procedia PDF Downloads 477
15567 An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering
Authors: Jeugert Kujtila, Kristi Hoxhalli, Ramazan Dalipi, Erjon Cota, Ardit Murati, Erind Bedalli
Abstract:
Clustering is a versatile instrument in the analysis of collections of data, providing insights into the underlying structures of the dataset and enhancing modeling capabilities. The fuzzy approach to the clustering problem increases flexibility by involving the concept of partial memberships (some value in the continuous interval [0, 1]) of the instances in the clusters. Several fuzzy clustering algorithms have been devised, like FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, PCM, etc. Each of these algorithms has its own advantages and drawbacks, so none of them performs superiorly on all datasets. In this paper we experimentally compare the FCM, GK and GG algorithms and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. Firstly, we theoretically discuss the advantages and drawbacks of each of these algorithms and describe the hybrid clustering model that exploits the advantages and diminishes the drawbacks of each algorithm. Secondly, we experimentally compare the accuracy of the hybrid model by applying it to several benchmark and synthetic datasets.
Keywords: fuzzy clustering, fuzzy c-means algorithm (FCM), Gustafson-Kessel algorithm, hybrid clustering model
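The FCM building block of the hybrid model alternates between updating partial memberships and recomputing weighted centroids. A compact NumPy sketch on a synthetic two-blob dataset (illustrative only, not the benchmark data of the paper):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)   # rows are partial memberships summing to 1
    for _ in range(iters):
        W = U ** m                       # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted centroids
        # distances of every point to every center, with a guard against zero
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1))
                   * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers

# Two well-separated synthetic blobs
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
U, centers = fcm(X)
```

On data this well separated the memberships become nearly crisp; the interesting behavior of FCM, and the motivation for hybrids like FCM plus Gath-Geva, shows up on overlapping or non-spherical clusters.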
Procedia PDF Downloads 515
15566 The Development of Directed-Project Based Learning as Language Learning Model to Improve Students' English Achievement
Authors: Tri Pratiwi, Sufyarma Marsidin, Hermawati Syarif, Yahya
Abstract:
The 21st-century skills highly promoted today are Creativity and Innovation, Critical Thinking and Problem Solving, and Communication and Collaboration. Communication is one of the essential skills that should be mastered by students. To master communication skills, students must first master their language skills. Language skills are one of the main supporting factors in improving a person's communication skills, because by learning language skills students become capable of communicating well and correctly, so that the message and the way it is delivered can be conveyed clearly and easily understood by the listener. However, it cannot be denied that less than optimal English learning outcomes are a problem frequently found in the implementation of the learning process. This research aimed to improve students' language skills by developing a learning model for the English subject for VIII graders of SMP N 1 Uram Jaya through the implementation of Directed-Project Based Learning (DPjBL). This study is designed as Research and Development (R & D) using the ADDIE development model. The researcher collected data through observation, questionnaires, interviews, tests, and documentation, which were then analyzed qualitatively and quantitatively. The results showed that DPjBL is effective to use, as seen from the difference in value between the pretest and posttest of the control class and the experimental class. From the questionnaire results, the students and teachers in general agreed with the DPjBL learning model. This learning model can increase the students' English achievement.
Keywords: language skills, learning model, Directed-Project Based Learning (DPjBL), English achievement
Procedia PDF Downloads 166
15565 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. 
In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
Keywords: arsenic, fluoride, groundwater contamination, logistic regression
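The logistic-regression backbone of such hazard maps can be sketched with stochastic gradient descent on the log-loss. The single synthetic predictor below stands in for the many soil, geological and climate variables the real models use; the data and labels are invented for illustration.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit P(exceed) = sigmoid(w.x + b) by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                  # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err

    def prob_exceed(x):
        """Probability that the WHO limit is exceeded at a site with predictors x."""
        return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
    return prob_exceed

# Synthetic single-predictor data (e.g. an aridity proxy); label 1 = limit exceeded
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
prob_exceed = train_logistic(X, y)
```

Evaluating the fitted probability on a grid of predictor values is what turns a model like this into a hazard map: each cell is colored by its predicted exceedance probability.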
Procedia PDF Downloads 349
15564 Gas Pressure Evaluation through Radial Velocity Measurement of Fluid Flow Modeled by Drift Flux Model
Authors: Aicha Rima Cheniti, Hatem Besbes, Joseph Haggege, Christophe Sintes
Abstract:
In this paper, we consider a drift flux mixture model of blood flow. The mixture consists of a gas phase, which is carbon dioxide, and a liquid phase, which is an aqueous carbon dioxide solution. This model was used to determine the distributions of the mixture velocity, the mixture pressure, and the carbon dioxide pressure. These theoretical data are used to devise a method for measuring the mean gas pressure through the determination of the radial velocity distribution. This method can be applied in the experimental domain.
Keywords: mean carbon dioxide pressure, mean mixture pressure, mixture velocity, radial velocity
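The basic mixture relations a drift flux model is built on can be sketched as follows; the phase densities and velocities are illustrative values, not results of the paper.

```python
def mixture_properties(alpha_g, rho_g, rho_l, v_g, v_l):
    """Basic drift-flux mixture relations for a gas-liquid flow (sketch).
    alpha_g is the gas void fraction; the liquid fraction is 1 - alpha_g."""
    alpha_l = 1.0 - alpha_g
    rho_m = alpha_g * rho_g + alpha_l * rho_l            # mixture density
    # mass-weighted (center-of-mass) mixture velocity
    v_m = (alpha_g * rho_g * v_g + alpha_l * rho_l * v_l) / rho_m
    j = alpha_g * v_g + alpha_l * v_l                    # volumetric flux
    drift = v_g - j                                      # gas drift velocity
    return rho_m, v_m, drift

# Illustrative values for CO2 bubbles in an aqueous solution (SI units assumed)
rho_m, v_m, drift = mixture_properties(alpha_g=0.05, rho_g=1.8,
                                       rho_l=1000.0, v_g=0.6, v_l=0.4)
```

The drift velocity quantifies how much faster the gas moves than the volumetric flux of the mixture, which is the key closure relation distinguishing a drift flux model from a homogeneous-flow model.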
Procedia PDF Downloads 325
15563 A Predictive Model of Supply and Demand in the State of Jalisco, Mexico
Authors: M. Gil, R. Montalvo
Abstract:
Business Intelligence (BI) has become a major source of competitive advantage for firms around the world. BI has been defined as the process of data visualization and reporting for understanding what happened and what is happening. Moreover, BI has been studied for its predictive capabilities in the context of trade and financial transactions. The current literature has identified that BI permits managers to identify market trends, understand customer relations, and predict demand for their products and services. This last capability of BI has been of special concern to academics, specifically due to its power to build predictive models adaptable to specific time horizons and geographical regions. However, the current literature on BI focuses on predicting specific markets and industries, because the impact of such predictive models was relevant to specific industries or organizations. The existing literature has not yet developed a predictive BI model that takes into consideration the whole economy of a geographical area. This paper seeks to create a predictive BI model that shows the bigger picture of a geographical area. It uses a data set from the Secretary of Economic Development of the state of Jalisco, Mexico, which includes data from all the commercial transactions that occurred in the state in recent years. By analyzing this data set, it will be possible to generate a BI model that predicts supply and demand for specific industries around the state of Jalisco. This research makes at least three contributions. Firstly, a methodological contribution to the BI literature by generating the predictive supply and demand model. Secondly, a theoretical contribution to the current understanding of BI, since the model presented in this paper incorporates the whole picture of the economy instead of focusing on a specific industry.
Lastly, a practical contribution might be relevant to local governments that seek to improve their economic performance by implementing BI in their policy planning.
Keywords: business intelligence, predictive model, supply and demand, Mexico
Procedia PDF Downloads 124
15562 Market Integration in the ECCAS Sub-Region
Authors: Mouhamed Mbouandi Njikam
Abstract:
This work assesses the trade potential of countries in the Economic Community of Central African States (ECCAS). The gravity model of trade is used to evaluate the trade flows of member countries and to compute the trade potential index of ECCAS during 1995-2010. The focus is on the removal of tariffs and non-tariff barriers in the sub-region. Estimates from the gravity model are used for the calculation of the sub-region's commercial potential. Its three main findings are: (i) the background research shows a low level of integration in the sub-region, despite open economies; (ii) low levels of industrialization and diversification are the main factors reducing trade potential in the sub-region; (iii) trade creation predominates over trade deflection between member countries.
Keywords: gravity model, ECCAS, trade flows, trade potential, regional cooperation
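The gravity model used above relates bilateral trade to the partners' economic sizes and the distance between them. A sketch with illustrative coefficients and a hypothetical country pair (in practice the coefficients are estimated by log-linear regression on observed flows):

```python
def gravity_trade(gdp_i, gdp_j, distance, G=0.01, a=1.0, b=1.0, c=1.0):
    """Gravity model of bilateral trade: T_ij = G * Y_i^a * Y_j^b / D_ij^c.
    The constant and elasticities here are illustrative assumptions."""
    return G * gdp_i ** a * gdp_j ** b / distance ** c

def trade_potential_index(actual, predicted):
    """Ratio of actual to model-predicted trade: below 1 signals unexploited potential."""
    return actual / predicted

# Hypothetical country pair (GDPs in billion USD, distance in km)
predicted = gravity_trade(30.0, 12.0, 1500.0)
tpi = trade_potential_index(actual=0.0008, predicted=predicted)
```

A pair whose observed trade falls well below the gravity prediction, as in this invented example, is exactly the kind of case where removing tariff and non-tariff barriers is expected to raise flows toward potential.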
Procedia PDF Downloads 427
15561 Development of Time Series Forecasting Model for Dengue Cases in Nakhon Si Thammarat, Southern Thailand
Authors: Manit Pollar
Abstract:
Identifying dengue epidemic periods early would be helpful for taking the necessary actions to prevent dengue outbreaks. An accurate prediction of dengue epidemic seasons will allow local authorities sufficient time to take the necessary decisions and actions to safeguard the situation. This study aimed to develop a forecasting model for the number of dengue incidences in Nakhon Si Thammarat Province, Southern Thailand, using time series analysis. We developed Seasonal Autoregressive Integrated Moving Average (SARIMA) models on the monthly data collected between 2003-2011 and validated the models using data collected between January-September 2012. The results of this study revealed that the SARIMA(1,1,0)(1,2,1)12 model closely described the trends and seasons of dengue incidence and confirmed the existence of dengue fever cases in Nakhon Si Thammarat for the years 2003-2011. The study showed that the one-step approach for predicting dengue incidences provided significantly more accurate predictions than the twelve-step approach. The model, even if based purely on statistical data analysis, can provide a useful basis for the allocation of resources for disease prevention.
Keywords: SARIMA, time series model, dengue cases, Thailand
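The differencing implied by the SARIMA(1,1,0)(1,2,1)12 specification, i.e. one non-seasonal difference and two seasonal differences at period 12, can be sketched on a synthetic monthly series. A deterministic trend-plus-annual-cycle series is reduced to (numerically) zero, which is what makes the remaining ARMA part stationary; the series here is invented, not the dengue data.

```python
import math

def difference(series, lag=1):
    """Apply one differencing pass: y_t - y_{t-lag}."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

def sarima_difference(series, d=1, D=2, s=12):
    """Differencing implied by SARIMA(p,d,q)(P,D,Q)_s with d=1, D=2, s=12:
    one non-seasonal difference and two seasonal differences at period 12."""
    out = series[:]
    for _ in range(d):
        out = difference(out, 1)
    for _ in range(D):
        out = difference(out, s)
    return out

# Synthetic monthly series: linear trend plus an exact annual cycle
series = [5 * t + 10 * math.sin(2 * math.pi * t / 12) for t in range(60)]
stationary = sarima_difference(series)
```

Each pass shortens the series (by 1 for the regular difference and by 12 for each seasonal difference), which is why seasonal models need several years of monthly data before the ARMA coefficients can be estimated.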
Procedia PDF Downloads 360
15560 Structural Analysis and Detail Design of APV Module Structure Using Topology Optimization Design
Authors: Hyun Kyu Cho, Jun Soo Kim, Young Hoon Lee, Sang Hoon Kang, Young Chul Park
Abstract:
In this study, the structure for one of the APV (Air Pressure Vessel) modules of an offshore drilling system was designed using topology optimization, and a structural safety evaluation was performed according to DNV rules. A 3D model was created based on the design area and non-design area separated by topology optimization for the environmental loads. Seventeen load cases of wind loads and dynamic loads were considered, and a structural analysis evaluation was performed for each model. As a result, the maximum stress was 181.25 MPa.
Keywords: APV, topology optimum design, DNV, structural analysis, stress
Procedia PDF Downloads 427
15559 Developing Integrated Model for Building Design and Evacuation Planning
Authors: Hao-Hsi Tseng, Hsin-Yun Lee
Abstract:
In the process of building design, designers have to complete the spatial design and consider the evacuation performance at the same time. It is usually difficult to combine the two planning processes, which results in a gap between spatial design and evacuation performance, so designers cannot arrive at an integrated optimal design solution. In addition, the evacuation routing models proposed by previous researchers differ from the practical evacuation decisions made in the real field. On the other hand, more and more building design projects are executed with Building Information Modeling (BIM), in which the design content is formed in an object-oriented framework. Thus, the integration of BIM and evacuation simulation can make a significant contribution for designers. Therefore, this research will establish a model that integrates spatial design and evacuation planning. The proposed model will provide support for spatial design modifications and optimize the evacuation planning, so that designers can complete the integrated design solution in BIM. Besides, this research improves the evacuation routing method to make the simulation results more practical. The proposed model will be applied in a building design project for evaluation and validation, where it will provide near-optimal design suggestions. By applying the proposed model, the integration and efficiency of the design process are improved, the evacuation plan is more useful, and the quality of the building spatial design will be better.
Keywords: building information modeling, evacuation, design, floor plan
Procedia PDF Downloads 457
15558 An Optimization Model for Waste Management in Demolition Works
Authors: Eva Queheille, Franck Taillandier, Nadia Saiyouri
Abstract:
Waste management has become a major issue in demolition works because of its environmental impact (energy consumption, resource consumption, pollution, etc.). However, improving waste management also requires taking the overall demolition process into account and considering the main demolition objectives (e.g., cost, delay). Establishing a strategy under these conflicting economic and environmental objectives remains complex. In order to provide decision support for demolition companies, a multi-objective optimization model was developed. In this model, a demolition strategy is computed from a set of 80 decision variables (worker team composition, machines, treatment for each type of waste, choice of treatment platform, etc.), which impact the demolition objectives. The model was tested on a real case study (the demolition of several buildings in France). To run the optimization, different algorithms (NSGA-II, MOPSO, DBEA, etc.) were tested. The results allow the engineer in charge of this case to build a sustainable demolition strategy without affecting cost or delay.
Keywords: deconstruction, life cycle assessment, multi-objective optimization, waste management
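What all the listed algorithms (NSGA-II, MOPSO, DBEA) have in common is that they return a Pareto front of non-dominated strategies rather than a single optimum. The following is a minimal sketch of the underlying dominance test; the candidate strategies and their objective values are hypothetical, not taken from the case study.

```python
def pareto_front(strategies):
    """Return the non-dominated strategies (minimizing every objective).

    strategies: list of (label, objectives) where objectives is a tuple
    such as (cost_euros, delay_days, co2_tonnes).
    """
    front = []
    for name, obj in strategies:
        # obj is dominated if some other strategy is at least as good
        # in every objective and not identical.
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(obj, other)) and other != obj
            for _, other in strategies
        )
        if not dominated:
            front.append((name, obj))
    return front

# Hypothetical demolition strategies: (cost, delay, environmental impact).
candidates = [
    ("manual sorting", (120, 40, 10)),
    ("mechanical",     (90, 25, 30)),
    ("mixed",          (100, 30, 15)),
    ("landfill all",   (95, 26, 60)),
]
front = pareto_front(candidates)
```

An evolutionary algorithm such as NSGA-II applies this dominance ranking generation after generation while varying the 80 decision variables; the engineer then picks one strategy from the final front according to the project's priorities.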
Procedia PDF Downloads 153
15557 Application of Public Access Two-Dimensional Hydrodynamic and Distributed Hydrological Models for Flood Forecasting in Ungauged Basins
Authors: Ahmad Shayeq Azizi, Yuji Toda
Abstract:
In Afghanistan, floods are the most frequent and recurrent of natural disasters. At the same time, the lack of monitoring data is a severe problem that increases the difficulty of flood forecasting and of selecting appropriate flood countermeasures. This study simulates flood inundation in the Harirud River Basin by applying the distributed hydrological model Integrated Flood Analysis System (IFAS) and the 2D hydrodynamic model International River Interface Cooperative (iRIC), based on satellite rainfall combined with historical peak discharge and globally accessible data. The simulation results can predict the inundation area, depth, and velocity, and hardware countermeasures such as the impact of levee installation can be discussed using the present method. The methodology proposed in this study is suitable for areas where hydrological and geographical data, including river survey data, are poorly observed.
Keywords: distributed hydrological model, flood inundation, hydrodynamic model, ungauged basins
Procedia PDF Downloads 167
15556 Numerical Modeling of Flow in USBR II Stilling Basin with End Adverse Slope
Authors: Hamidreza Babaali, Alireza Mojtahedi, Nasim Soori, Saba Soori
Abstract:
The hydraulic jump is one of the effective means of energy dissipation in stilling basins, where a large share of the flow energy is dissipated in the jump. An adverse slope at the end of the stilling basin increases energy dissipation and the stability of the hydraulic jump. In this study, an adverse slope was added to the end of the United States Bureau of Reclamation (USBR) type II stilling basin in the 1:40-scale hydraulic model of the Nazloochay dam, and the flow into the stilling basin was simulated using the Flow-3D software. The numerical model was verified against experimental measurements of water depth in the stilling basin. Then, the water level profile, Froude number, pressure, air entrainment, and turbulent dissipation were investigated for a discharge of 300 m³/s using the k-ε and Re-Normalization Group (RNG) turbulence models. The results showed good agreement between the numerical and experimental models, so the numerical model can be used to optimize stilling basins.
Keywords: experimental and numerical modelling, end adverse slope, flow parameters, USBR II stilling basin
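The quantities investigated above are governed by the classical hydraulic-jump relations: the upstream Froude number fixes the sequent (conjugate) depth via the Belanger equation, and the depth pair fixes the head loss. A minimal sketch follows; the inflow values are illustrative assumptions, not the basin's design discharge.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(q, y):
    """Froude number for unit discharge q (m^2/s) and flow depth y (m)."""
    v = q / y  # mean velocity
    return v / math.sqrt(G * y)

def conjugate_depth(y1, fr1):
    """Sequent depth downstream of a hydraulic jump (Belanger equation)."""
    return y1 / 2 * (math.sqrt(1 + 8 * fr1 ** 2) - 1)

def energy_loss(y1, y2):
    """Head loss across the jump: dE = (y2 - y1)^3 / (4 * y1 * y2)."""
    return (y2 - y1) ** 3 / (4 * y1 * y2)

# Hypothetical inflow: unit discharge 10 m^2/s at 0.8 m supercritical depth.
fr1 = froude_number(10.0, 0.8)
y2 = conjugate_depth(0.8, fr1)
dE = energy_loss(0.8, y2)
```

These relations hold for a horizontal basin; the point of the adverse end slope studied in the paper is precisely that the measured sequent depth and dissipation deviate favorably from them.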
Procedia PDF Downloads 181
15555 A Novel Machining Method and Tool-Path Generation for Bent Mandrel
Authors: Hong Lu, Yongquan Zhang, Wei Fan, Xiangang Su
Abstract:
The bent mandrel has been widely used as a precision mould in the automobile, shipping, and aviation industries. To improve the versatility and efficiency of turning bent mandrels with a fixed rotational center, an instantaneous machining model based on cutting parameters and machine dimensions is proposed in this paper. A spiral-like tool-path generation approach for the non-axisymmetric turning of bent mandrels is also developed to address the part-to-part repeatability error of the existing turning model. The actual cutter-location points are calculated from cutter-contact points, which are obtained by sweeping a spiral with equal-arc-length segments in the polar coordinate system. The tool offset, set to avoid interference between the tool and the workpiece, is also considered in the machining model. Depending on the spindle rotational angle, synchronized control of the X-axis, Z-axis, and C-axis is adopted to generate the tool path of the turning process. A simulation method is developed to generate the NC program according to the presented model, including the calculation of cutter-location points and the generation of the cutting tool path. Taking as an example a bent mandrel whose center axis has a maximum offset of 4 mm in 3D space, the experimental results verify that the machining model and turning method suit the characteristics of the bent mandrel.
Keywords: bent mandrel, instantaneous machining model, simulation method, tool-path generation
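The equal-arc-length spiral sweep mentioned above can be sketched for the simplest case, an Archimedean spiral r = bθ: since ds = sqrt(r² + (dr/dθ)²) dθ with dr/dθ = b, stepping θ by ds / sqrt(r² + b²) yields approximately uniform spacing. The parameter values below are illustrative assumptions; the paper's actual sweep works on the measured mandrel geometry, not an ideal spiral.

```python
import math

def spiral_points_equal_arc(b, theta_max, ds):
    """Sample the Archimedean spiral r = b*theta at approximately equal
    arc-length spacing ds, returning (x, y) points in Cartesian form.

    Each step advances theta by ds / sqrt(r^2 + b^2), the first-order
    inversion of the arc-length element ds = sqrt(r^2 + b^2) dtheta.
    """
    pts = []
    theta = 0.0
    while theta <= theta_max:
        r = b * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        theta += ds / math.sqrt(r * r + b * b)
    return pts

# Hypothetical sweep: pitch parameter 2 mm/rad, 4 turns, 1 mm spacing.
pts = spiral_points_equal_arc(2.0, 4 * 2 * math.pi, 1.0)
```

In the machining model these sampled points play the role of cutter-contact points; the cutter-location points are then obtained by offsetting along the local surface normal by the tool radius.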
Procedia PDF Downloads 336
15554 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models
Authors: Yungtai Lo
Abstract:
Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.
Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve
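The basic idea of a two-part model for semi-continuous data can be sketched without the random effects or covariates used in the paper: one part models the probability that the response is positive, the other models the positive values (here log-normally), and the marginal mean combines them as E[Y] = p * exp(mu + sigma²/2). The simulated intake values below are entirely hypothetical.

```python
import math
import random

def fit_two_part_lognormal(y):
    """Fit a stripped-down two-part model to semi-continuous data:
    part 1: the probability that the response is positive;
    part 2: a log-normal distribution for the positive values.
    Returns (p_positive, mu, sigma, marginal_mean), where
    marginal_mean = p * exp(mu + sigma^2 / 2)."""
    positives = [v for v in y if v > 0]
    p = len(positives) / len(y)
    logs = [math.log(v) for v in positives]
    mu = sum(logs) / len(logs)
    sigma2 = sum((l - mu) ** 2 for l in logs) / len(logs)
    marginal_mean = p * math.exp(mu + sigma2 / 2)
    return p, mu, math.sqrt(sigma2), marginal_mean

# Simulated daily milk intake: 40% of toddlers are off bottles (intake 0),
# the rest log-normal with mu = 2.0, sigma = 0.5 (hypothetical values).
random.seed(0)
y = [0.0 if random.random() < 0.4 else random.lognormvariate(2.0, 0.5)
     for _ in range(5000)]
p, mu, sigma, mean = fit_two_part_lognormal(y)
```

The models in the paper extend this in two directions: covariates and subject-level random effects enter both parts, and the positive part may instead be truncated normal, gamma, or skew normal.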
Procedia PDF Downloads 351
15553 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Algorithms designed to better the experience of medical professionals in their respective fields can improve the efficiency and accuracy of diagnosis. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19, mainly because of low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes chest X-ray images and masks under the labels COVID-19, normal, and pneumonia. The classification model uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning; a deep neural network then finalizes the feature extraction and predicts the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 separate images. The training images are cropped beforehand to eliminate distractions during training. The image segmentation model, which extracts the lung mask from the chest X-ray image, uses an improved U-Net architecture and is trained on 8,577 images with a 20% validation split. The models' accuracy, precision, recall, F1-score, IoU, and loss are calculated on the external validation data. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of these methods.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
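The IoU (intersection-over-union) figure reported for the segmentation model is the standard overlap metric between the predicted and ground-truth binary masks. A minimal sketch on a tiny hypothetical mask pair:

```python
def iou(pred_mask, true_mask):
    """Intersection-over-Union for binary segmentation masks,
    given as equal-sized grids of 0/1 values."""
    inter = union = 0
    for prow, trow in zip(pred_mask, true_mask):
        for p, t in zip(prow, trow):
            inter += p & t  # pixel in both masks
            union += p | t  # pixel in either mask
    return inter / union if union else 1.0

# Tiny hypothetical 4x4 lung mask vs. a prediction that misses one pixel.
pred = [[0, 1, 1, 0],
        [1, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 1, 1, 0]]
true = [[0, 1, 1, 0],
        [0, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 1, 1, 0]]
score = iou(pred, true)
```

Unlike plain pixel accuracy, which the lung/background class imbalance inflates, IoU penalizes both missed lung pixels and false positives, which is why segmentation papers report it alongside accuracy.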
Procedia PDF Downloads 131
15552 Typhoon Disaster Risk Assessment of Mountain Village: A Case Study of Shanlin District in Kaohsiung
Abstract:
Taiwan is a mountainous country: 70% of its land is covered with mountains. Because of the extreme climate, mountain villages with sensitive and fragile environments are easily affected by inundation and debris flows from typhoons, which bring huge rainfall. Owing to inappropriate development, overuse, and few access roads, disasters occur more frequently during downpours, and rescue actions are delayed. However, risk maps are generally established along administrative boundaries, and the difference between urban and rural areas is ignored. Neglecting the characteristics of mountain villages ultimately underestimates the importance of vulnerability-related factors and reduces the effectiveness of the risk map. In disaster management, each stage calls for different strategies and actions; according to the task, different risk indices and weights are used to analyze disaster risk at each stage, which helps to confront threats and reduce impacts appropriately at the right time. The risk map is important not only in the mitigation stage but also in the response stage, because some factors, such as the road network, will be changed by the disaster. This study uses risk assessment to establish risk maps for the mitigation and response stages for Shanlin District, a mountain village in Kaohsiung, as a case study, through the Analytic Hierarchy Process (AHP). AHP helps to identify the composition and weights of risk factors in the mountain village from experts' opinions collected through a survey, and is combined with the existing potential hazard map to produce the risk map.
Keywords: risk assessment, mountain village, risk map, analytic hierarchy process
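The AHP weighting step can be sketched in a few lines: expert judgments go into a pairwise-comparison matrix, priority weights are extracted (here with the common geometric-mean approximation rather than the principal eigenvector), and a consistency ratio checks that the judgments are coherent. The three factors and comparison values below are hypothetical, not the study's actual hierarchy.

```python
import math

def ahp_weights(A):
    """Priority weights from a pairwise-comparison matrix using the
    geometric-mean approximation, plus the consistency ratio computed
    from lambda_max ~ mean of (A w)_i / w_i."""
    n = len(A)
    gm = [math.prod(row) ** (1 / n) for row in A]  # row geometric means
    total = sum(gm)
    w = [g / total for g in gm]                    # normalized weights
    aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n  # approx. lambda_max
    ci = (lam - n) / (n - 1)                       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
    return w, ci / ri

# Hypothetical 3-factor comparison: hazard vs. vulnerability vs. exposure.
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
```

A consistency ratio below 0.1 is the conventional threshold for accepting an expert's judgments; inconsistent matrices are sent back for re-evaluation.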
Procedia PDF Downloads 402
15551 The Impact of the Composite Expanded Graphite PCM on the PV Panel Whole Year Electric Output: Case Study Milan
Authors: Hasan A Al-Asadi, Ali Samir, Afrah Turki Awad, Ali Basem
Abstract:
Integrating a phase change material (PCM) with photovoltaic (PV) panels is an effective technique for lowering the PV panel temperature and increasing its electric output. In order to investigate the impact of the PCM on the electric output of the PV panels over a whole year, a lumped-distributed parameter model of the PV-PCM module has been developed. The development considers the variation of the PCM density between the solid and liquid phases, which increases the accuracy of the assessment of the electric output of the PV-PCM module. The second contribution is to assess the impact of an expanded-graphite composite PCM on the PV electric output in Milan over a whole year. The novel one-dimensional model has been solved using MATLAB, and its results have been validated against experimental work from the literature. Weather and solar radiation data were collected, and the impact of the expanded graphite PCM on the electric output of the PV panel over a whole year was investigated. The results indicate an enhancement of 2.39% in the annual electric output of the PV panel in Milan.
Keywords: PV panel efficiency, PCM, numerical model, solar energy
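The mechanism behind the enhancement is the standard linear temperature dependence of PV efficiency: the PCM keeps the cell cooler, so the panel operates closer to its reference efficiency. A minimal sketch follows; the panel area, reference efficiency, temperature coefficient, and cell temperatures are illustrative assumptions, not the paper's values.

```python
def pv_power(irradiance, cell_temp, area=1.6, eta_ref=0.18,
             beta=0.0045, t_ref=25.0):
    """Electric output of a PV panel with the linear temperature-
    dependent efficiency model:
    P = G * A * eta_ref * (1 - beta * (T_cell - T_ref)).
    All default parameter values are illustrative assumptions."""
    eta = eta_ref * (1 - beta * (cell_temp - t_ref))
    return irradiance * area * eta  # watts

# Suppose PCM cooling lowers the cell temperature from 55 C to 45 C
# at 800 W/m^2 irradiance:
p_bare = pv_power(800, 55.0)
p_pcm = pv_power(800, 45.0)
gain_pct = 100 * (p_pcm - p_bare) / p_bare
```

The paper's whole-year figure (2.39%) is smaller than such an instantaneous gain because the PCM only helps during warm, sunny hours; the annual result averages over the full Milan weather record.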
Procedia PDF Downloads 174
15550 Lessons Learnt from Moment Magnitude 7.8 Gorkha, Nepal Earthquake
Authors: Narayan Gurung, Fawu Wang, Ranjan Kumar Dahal
Abstract:
Nepal is highly prone to earthquakes and has witnessed at least one major earthquake every 80 to 90 years. The Gorkha earthquake, which measured Mw 7.8 and struck Nepal on 25 April 2015, was the largest earthquake since the Mw 8.3 Nepal-Bihar earthquake of 1934, 81 years earlier. In this paper, an attempt has been made to highlight the lessons learnt from the Mw 7.8 Gorkha (Nepal) earthquake. In the earthquake of 25 April 2015, several types of damage pattern were observed in reinforced concrete buildings as well as in unreinforced masonry and adobe houses. Many field visits to the affected areas were conducted, and the associated failure and damage patterns were identified and analyzed. Damage patterns in non-engineered buildings, middle- and high-rise buildings, commercial complexes, administrative buildings, schools, and other critical facilities in the affected districts are also included. For most buildings, construction and structural deficiencies were identified as the major causes of failure; however, topography, local soil amplification, foundation settlement, liquefaction-associated damage, and construction in hazard-prone areas also contributed significantly to the failure of or damage to buildings, and hence are reported. Finally, the lessons learnt from the Mw 7.8 Gorkha (Nepal) earthquake are presented in order to mitigate the impacts of future earthquakes in Nepal.
Keywords: Gorkha earthquake, reinforced concrete structure, Nepal, lesson learnt
Procedia PDF Downloads 204