Search results for: model base testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19987

19717 Effects of Damper Locations and Base Isolators on Seismic Response of a Building Frame

Authors: Azin Shakibabarough, Mojtaba Valinejadshoubi, Ashutosh Bagchi

Abstract:

Structural vibration means repetitive motion that causes fatigue and reduces the performance of a structure. An earthquake may release a high amount of energy that can have an adverse effect on all components of a structure. Therefore, decreasing vibration or maintaining the performance of structures such as bridges, dams, roads and buildings is important for life safety and for reducing economic loss. When an earthquake or any other vibration occurs, investigation of the parts of a structure that sustain the seismic loads is mandatory to provide a safe condition for the occupants. One of the solutions for reducing earthquake vibration in a structure is the use of vibration control devices such as dampers and base isolators. The objective of this study is to investigate the optimal positions of friction dampers and base isolators for better seismic response of a 2D frame. For this purpose, a two-bay, six-story frame with different distribution formats was modeled, and responses to earthquake loading such as inter-story drift, maximum joint displacement, maximum axial force and maximum bending moment were determined and compared using nonlinear dynamic analysis.

Keywords: fast nonlinear analysis, friction damper, base isolator, seismic vibration control, seismic response

Procedia PDF Downloads 295
19716 Production and Distribution Network Planning Optimization: A Case Study of Large Cement Company

Authors: Lokendra Kumar Devangan, Ajay Mishra

Abstract:

This paper describes the implementation of a large-scale SAS/OR model with significant pre-processing, scenario analysis, and post-processing work done using SAS. A large cement manufacturer with ten geographically distributed manufacturing plants for two variants of cement, around 400 warehouses serving as transshipment points, and several thousand distributor locations generating demand needed to optimize this multi-echelon, multi-modal transport supply chain separately for planning and allocation purposes. For monthly planning as well as daily allocation, the demand is deterministic. Rail and road networks connect any two points in this supply chain, creating tens of thousands of such connections. Constraints include plant production capacities, transportation capacities, and rail wagon batch sizes. Each demand point has a minimum and maximum for shipments received. Price varies at demand locations due to local factors. A large mixed integer programming model built using proc OPTMODEL decides production at plants, demand fulfilled at each location, and the shipment routes to demand locations so as to maximize the profit contribution. Using Base SAS, we did significant pre-processing of data and created inputs for the optimization. Using outputs generated by OPTMODEL and other processing completed in Base SAS, we generated several reports that went into their enterprise system and created tables for easy consumption of the optimization results by operations.
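The profit-maximising production-and-distribution formulation described above can be sketched at toy scale. The sketch below uses Python with PuLP rather than SAS proc OPTMODEL, and every plant, market, capacity, price and freight figure is a hypothetical placeholder; the integer wagon-batch variables are omitted for brevity.

```python
# Toy production-and-distribution LP in the spirit of the model described above.
# Written with PuLP instead of SAS proc OPTMODEL; all data below are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

plants = ["P1", "P2"]
markets = ["M1", "M2", "M3"]
capacity = {"P1": 900, "P2": 600}            # production capacity (tonnes)
price = {"M1": 110, "M2": 95, "M3": 105}     # realisation per tonne at each market
prod_cost = {"P1": 60, "P2": 55}
freight = {("P1", "M1"): 12, ("P1", "M2"): 18, ("P1", "M3"): 25,
           ("P2", "M1"): 22, ("P2", "M2"): 10, ("P2", "M3"): 15}
demand_min = {"M1": 200, "M2": 150, "M3": 100}
demand_max = {"M1": 500, "M2": 400, "M3": 450}

m = LpProblem("cement_planning", LpMaximize)
ship = LpVariable.dicts("ship", freight.keys(), lowBound=0)

# Profit contribution = revenue - production cost - freight
m += lpSum(ship[p, d] * (price[d] - prod_cost[p] - freight[p, d]) for p, d in freight)

for p in plants:   # plant capacity
    m += lpSum(ship[p, d] for d in markets) <= capacity[p]
for d in markets:  # minimum and maximum offtake at each demand point
    m += lpSum(ship[p, d] for p in plants) >= demand_min[d]
    m += lpSum(ship[p, d] for p in plants) <= demand_max[d]

m.solve()
print({k: v.value() for k, v in ship.items()})
```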

Keywords: production planning, mixed integer optimization, network model, network optimization

Procedia PDF Downloads 32
19715 Estimation Model for Concrete Slump Recovery by Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

This paper introduces a practical solution for concrete slump recovery using a Type-F chemical admixture (naphthalene-based superplasticizer), in order to solve the problem of unusable concrete that has lost its slump, which is especially relevant in tropical countries with faster slump loss rates. On the other hand, adding superplasticizer to concrete arbitrarily can cause the concrete to segregate. Therefore, this paper also develops an estimation model used to calculate the second superplasticizer dose needed for slump recovery. Fresh properties of ordinary Portland cement concrete with a paste-to-void volumetric ratio (paste content) of 1.1-1.3, a water-cement ratio of 0.30 to 0.67 and an initial naphthalene-based superplasticizer dose of 0.25%-1.6% were tested for initial slump, and slump loss was measured every 30 minutes for one and a half hours using the slump cone test. Concretes with slump losses ranging from 10% to 90% were re-dosed and successfully recovered their initial slump; slump after re-dosing was measured by the slump cone test. From the results, it was concluded that slump loss was slower for mixes with a high initial superplasticizer dose, because the added superplasticizer delays cement hydration. The required second dose of superplasticizer was affected by two major parameters, water-cement ratio and paste content: a lower water-cement ratio and a lower paste content increase the required second dose. The required second dose is also higher as the solid content of the system increases, whether the solids come from cement particles or aggregate. The data were analyzed to form an equation for estimating the second superplasticizer dosage required to recover the original slump.
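The estimation equation itself is not reproduced in the abstract; the sketch below only illustrates the fitting step it describes, regressing the required second superplasticizer dose on water-cement ratio and paste content by ordinary least squares, with placeholder data rather than the paper's measurements.

```python
# Illustrative least-squares fit of the second superplasticizer dose (% of cement)
# against water-cement ratio and paste content; all numbers are placeholders.
import numpy as np

wc    = np.array([0.30, 0.40, 0.50, 0.60, 0.67, 0.35, 0.45, 0.55])   # water-cement ratio
paste = np.array([1.10, 1.15, 1.20, 1.25, 1.30, 1.12, 1.22, 1.28])   # paste-to-void ratio
dose2 = np.array([1.40, 1.10, 0.85, 0.65, 0.50, 1.25, 0.95, 0.70])   # measured re-dose, %

X = np.column_stack([np.ones_like(wc), wc, paste])
coef, *_ = np.linalg.lstsq(X, dose2, rcond=None)
b0, b1, b2 = coef
print(f"dose2 ≈ {b0:.2f} + {b1:.2f}*(w/c) + {b2:.2f}*paste")
```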

Keywords: estimation model, second superplasticizer dosage, slump loss, slump recovery

Procedia PDF Downloads 174
19714 Random Subspace Ensemble of CMAC Classifiers

Authors: Somaiyeh Dehghan, Mohammad Reza Kheirkhahan Haghighi

Abstract:

The rapid growth of domains whose data have a large number of features but a limited number of samples has made it difficult to construct strong classifiers. Reducing the dimensionality of the feature space therefore becomes an essential step in the classification task. The random subspace method (or attribute bagging) is an ensemble approach in which each base learner in the ensemble is trained on a subset of the features. In the present paper, we introduce the Random Subspace Ensemble of CMAC neural networks (RSE-CMAC), in which each CMAC is trained on a subset of the features, and we use this model for the classification task. To evaluate the performance of our model, we compare it with the bagging algorithm on 36 UCI datasets. The results reveal that the new model has better performance.
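As a minimal illustration of the random subspace idea, the sketch below uses scikit-learn's BaggingClassifier with feature sampling and no sample bootstrapping; scikit-learn has no CMAC network, so its default decision tree stands in for the CMAC base learner, and the dataset is only an example.

```python
# Random subspace ensemble: each base learner sees a random subset of the features.
# scikit-learn has no CMAC network, so the default decision tree stands in here.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

rse = BaggingClassifier(
    n_estimators=25,
    max_features=0.5,          # each learner trains on 50% of the features
    bootstrap=False,           # keep all samples -> pure random subspace method
    bootstrap_features=False,  # sample features without replacement
    random_state=0,
)
print("RSE cross-validated accuracy:", cross_val_score(rse, X, y, cv=5).mean())
```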

Keywords: classification, random subspace, ensemble, CMAC neural network

Procedia PDF Downloads 303
19713 Research on Straightening Process Model Based on Iteration and Self-Learning

Authors: Hong Lu, Xiong Xiao

Abstract:

Shaft parts are widely used in the machinery industry; however, bending deformation often occurs when such parts are heat treated, and they need to be straightened to meet the straightness requirement. In the pressure straightening process, a good straightening stroke algorithm determines the precision and efficiency of the process. In this paper, the relationship between straightening load and deflection during the straightening process is analyzed, and a mathematical model of the process is established. Based on this model, an iterative method is used to solve for the straightening stroke. Compared with the traditional straightening stroke algorithm, the stroke calculated by this method is much more precise because it can adapt to changes in the material performance parameters. Since straightening is widely used in the mass production of shaft parts, a knowledge base is used to store the process data, and a straightening stroke algorithm based on empirical data is also set up. A straightening process control model that combines the iterative stroke method and the empirical-data algorithm is then established. Finally, an experiment is designed to verify the straightening process control model.
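A schematic of the iterative stroke calculation follows: assuming a bilinear elastic-plastic load-deflection law, the stroke is found by bisection so that the predicted permanent set cancels the measured initial deflection. The bilinear law and all numerical values are placeholders, not the load-deflection model identified in the paper.

```python
# Schematic of the iterative straightening-stroke calculation: press the shaft by a
# stroke s, let it spring back elastically, and iterate until the permanent set
# cancels the measured initial deflection. The bilinear load-deflection law and all
# numbers below are placeholders, not the model identified in the paper.
def load(s, k_e=50.0, k_p=8.0, s_y=0.6):
    """Bilinear press load vs. stroke (kN vs. mm): elastic up to s_y, then hardening."""
    return k_e * s if s <= s_y else k_e * s_y + k_p * (s - s_y)

def permanent_set(s, k_e=50.0):
    """Residual deflection after elastic springback of load(s)/k_e."""
    return max(0.0, s - load(s) / k_e)

def solve_stroke(delta0, lo=0.0, hi=20.0, tol=1e-4):
    """Bisection: find the stroke whose permanent set equals the initial deflection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if permanent_set(mid) < delta0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"required stroke for a 1.5 mm initial deflection: {solve_stroke(1.5):.3f} mm")
```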

Keywords: straightness, straightening stroke, deflection, shaft parts

Procedia PDF Downloads 300
19712 E-learning resources for radiology training: Is an ideal program available?

Authors: Eric Fang, Robert Chen, Ghim Song Chia, Bien Soo Tan

Abstract:

Objective and Rationale: Training of radiology residents hinges on practical, on-the-job training in all facets and modalities of diagnostic radiology. Although residency is structured to be comprehensive, clinical exposure depends on the case mix available locally and during the posting period. To supplement clinical training, several e-learning resources are available to allow greater exposure to radiological cases. The objective of this study was to survey residents and faculty on the usefulness of these e-learning resources. Methods: E-learning resources were shortlisted with input from radiology residents, Google searches and online discussion groups, and screened by their purported focus. Twelve e-learning resources were found to meet the criteria. Both radiology residents and experienced radiology faculty were then surveyed electronically. The e-survey asked for ratings on breadth, depth, testing capability and user-friendliness for each resource, as well as rankings of the top three resources. Statistical analysis was performed using SAS 9.4. Results: Seventeen residents and fifteen faculty members completed the e-survey. The mean response rate was 54% ± 8% (range: 14-96%). Ratings and rankings were statistically identical between residents and faculty. On a 5-point rating scale, breadth was 3.68 ± 0.18, depth was 3.95 ± 0.14, testing capability was 2.64 ± 0.16 and user-friendliness was 3.39 ± 0.13. The top-ranked resources were STATdx (first), Radiopaedia (second) and Radiology Assistant (third). Nine percent of responders singled out R-ITI as potentially good but 'prohibitively costly'. Statistically significant predictive factors for higher rankings were familiarity with the resource (p = 0.001) and user-friendliness (p = 0.006). Conclusion: A good e-learning system will complement on-the-job training with a broad case base, deep discussion and quality trainee evaluation. Based on our study of twelve e-learning resources, no single program fulfilled all requirements. The perception and use of radiology e-learning resources depended more on familiarity and user-friendliness than on content differences and testing capability.

Keywords: e-learning, medicine, radiology, survey

Procedia PDF Downloads 311
19711 Modified Plastic-Damage Model for FRP-Confined Repaired Concrete Columns

Authors: I. A. Tijani, Y. F. Wu, C. W. Lim

Abstract:

The Concrete Damaged Plasticity Model (CDPM) is capable of modeling the stress-strain behavior of confined concrete. Nevertheless, the accuracy of the model largely depends on its parameters. To date, most research works have mainly focused on the identification and modification of the parameters for fiber reinforced polymer (FRP) confined concrete prior to damage, and it has been established that FRP-strengthened concrete behaves differently from FRP-repaired concrete. This paper presents a modified plastic damage model within the context of the CDPM in ABAQUS for modelling uniformly FRP-confined repaired concrete under monotonic loading. The proposed model includes the inflicted damage, elastic stiffness degradation, a yield criterion and a strain hardening rule. The distinct feature of damaged concrete is elastic stiffness reduction; this is included in the model. The test results were obtained from physical testing of repaired concrete. The dilation model is expressed as a function of the lateral stiffness of the FRP jacket. The finite element predictions are shown to be in close agreement with the test results of the repaired concrete. It was observed from the study that, with the necessary modifications, the finite element method is capable of modeling FRP-repaired concrete structures.
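For reference, the stiffness-degradation relation that CDPM-type models are built on can be written as below; this is the standard damaged-plasticity form, not the authors' specific calibration for FRP-repaired concrete.

```latex
% Standard damaged-plasticity stress-strain relation: a scalar damage variable d
% degrades the undamaged elastic stiffness tensor D0^el.
\sigma = (1 - d)\,\mathbf{D}_0^{el} : \left(\boldsymbol{\varepsilon} - \boldsymbol{\varepsilon}^{pl}\right),
\qquad 0 \le d \le 1
```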

Keywords: concrete, FRP, damage, repair, plasticity, finite element method

Procedia PDF Downloads 108
19710 Capacity Loss of Urban Arterial Roads under the Influence of Bus Stop

Authors: Sai Chand, Ashish Dhamaniya, Satish Chandra

Abstract:

Curbside bus stops are provided on urban roads when sufficient land is not available to construct bus bays. The present study demonstrates the effect of curbside bus stops on the midblock capacity of an urban arterial road. Data were collected on seven sections of 6-lane urban arterial roads in New Delhi. Three sections without any side friction were selected to estimate the base value of capacity; the remaining four sections had curbside bus stops. Speed and volume data were collected in the field and used to estimate the capacity of each section. The average base midblock capacity of a 6-lane divided urban road was found to be 6314 PCU/hr, which is hereafter referred to as the base capacity. The effect of a curbside bus stop on midblock capacity was evaluated by comparing the capacity of a section with a curbside bus stop against the base capacity. Finally, a mathematical relation has been developed between bus frequency and capacity loss, and a relation has also been suggested between dwell time and capacity loss. The developed relations would be very useful for practising engineers to estimate capacity loss due to bus stops.
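The bus-frequency/capacity-loss relation reported above is empirical; as an illustration only, a linear fit of that kind can be obtained as in the sketch below, where the observation pairs are hypothetical and not the study's field data.

```python
# Fit capacity loss (% of the 6314 PCU/hr base) against bus frequency (buses/hr);
# the observation pairs are hypothetical, for illustration only.
import numpy as np

bus_freq = np.array([10, 20, 30, 40, 60])          # buses/hr at the curbside stop
cap_loss = np.array([3.0, 6.5, 9.0, 12.5, 18.0])   # % loss vs. the 6314 PCU/hr base
slope, intercept = np.polyfit(bus_freq, cap_loss, 1)
print(f"capacity loss ≈ {intercept:.2f} + {slope:.3f} × bus frequency (%)")
```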

Keywords: bus frequency, bus stops, capacity loss, urban arterial

Procedia PDF Downloads 324
19709 Risk Assessment of Oil Spill Pollution by Integration of Gnome, Aloha and Gis in Bandar Abbas Coast, Iran

Authors: Mehrnaz Farzingohar, Mehran Yasemi, Ahmad Savari

Abstract:

Oil products are imported and exported via the Rajaee tanker terminal. During loading and discharging, oil is released into the berths in several cases, producing oil spills. The spills spread within a short time and seriously affect the environment of Rajaee port and even more distant areas. The trajectory and fate of the oil spills were investigated by modeling and classified into three risk levels based on the modeling results. First, GNOME (General NOAA Operational Modeling Environment) was applied to track the trajectory of the liquid oil. Second, the ALOHA (Areal Locations of Hazardous Atmospheres) air quality model was integrated to predict the path of evaporated oil in the air. Based on the identified zones, the high-risk areas are marked with colored dots whose densities are calculated and presented on a map that displays the affected locations. Wind and water circulation moved the pollution to the east of Rajaee port, where it accumulated along about 12 km of coastline. Approximately 20 km of the northeastern shore of Qeshm Island is covered by the three risk levels. Since the main wind direction is SSW, the pollution is pushed to the east; the highest-risk zones form on the crest edges, while the low-risk zones appear in the concavities. This assessment helps management and emergency systems monitor the exposed locations based on priority factors and find the best approaches to protect the environment.

Keywords: oil spill, modeling, pollution, risk assessment

Procedia PDF Downloads 354
19708 Conspicuous and Significant Learner Errors in Algebra

Authors: Michael Lousis

Abstract:

The most important and conspicuous kinds of errors the students made during three years of testing of their progress in Algebra are presented in this article. The way these errors changed over the three years of school Algebra learning is also shown. The sample comprises two hundred (200) English students and one hundred and fifty (150) Greek students, who were purposefully selected according to their participation in each testing occasion during the three-year Kassel Project in England and Greece, in both the Arithmetic and Algebra domains. Hence, for each of these English and Greek students, six test scripts were available, corresponding to the three testing occasions in Arithmetic and Algebra respectively.

Keywords: algebra, errors, Kassel Project, progress of learning

Procedia PDF Downloads 273
19707 Optimizing Oxidation Process Parameters of Al-Li Base Alloys Using Taguchi Method

Authors: Muna K. Abbass, Laith A. Mohammed, Muntaha K. Abbas

Abstract:

The oxidation of an Al-Li base alloy containing small amounts of rare earth (RE) oxides, namely 0.2 wt% Y2O3 and 0.2 wt% Nd2O3 particles, has been studied at temperatures of 400°C, 500°C and 550°C for 60 h in dry air. The alloys used in this study were prepared by melting and casting in a permanent steel mould under a controlled atmosphere. Oxidation kinetics were identified using weight gain per surface area (∆W/A) measurements, while scanning electron microscopy (SEM) and X-ray diffraction analysis were used for the microstructural morphology and phase identification of the oxide scales. It was observed that the oxidation kinetics of all studied alloys follow the parabolic law in most experimental tests at the different oxidation temperatures. It was also found that the alloy containing 0.2 wt% Y2O3 particles possesses the lowest oxidation rate and shows a marked improvement in oxidation resistance compared with the alloy containing 0.2 wt% Nd2O3 particles and the plain Al-Li base alloy. In this work, the Taguchi method is used to estimate the optimum weight gain per area (∆W/A) in the oxidation process of Al-Li base alloys, so as to obtain a minimum oxide layer thickness. The Taguchi method is used to formulate the experimental layout, to analyze the effect of each parameter (time, temperature and alloy type) on oxide formation, and to predict the optimal level of each parameter with respect to the (∆W/A) response. The analysis shows that temperature significantly affects the (∆W/A) parameter.
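A minimal sketch of the Taguchi smaller-the-better analysis applied to the weight gain per area response follows; the small factorial layout below stands in for the Taguchi orthogonal array, and the (∆W/A) values are placeholders, not the measured data.

```python
# Taguchi smaller-the-better S/N ratio for the weight-gain-per-area response.
# The small factorial layout stands in for an orthogonal array; the (dW/A) values
# are placeholders, not the measured data.
import numpy as np

# (temperature level, alloy type) -> replicate (dW/A) measurements, mg/cm^2
runs = {
    (400, "Y2O3"):  [0.8, 0.9],
    (400, "Nd2O3"): [1.1, 1.2],
    (500, "Y2O3"):  [1.4, 1.5],
    (500, "Nd2O3"): [1.9, 2.1],
    (550, "Y2O3"):  [1.8, 1.9],
    (550, "Nd2O3"): [2.6, 2.8],
}

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-the-better response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

for factors, y in runs.items():
    print(factors, f"S/N = {sn_smaller_is_better(y):.2f} dB")
# The factor-level combination with the highest S/N gives the thinnest oxide scale.
```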

Keywords: Al-Li base alloy, oxidation, Taguchi method, temperature

Procedia PDF Downloads 341
19706 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times is studied, which is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the structure of the universe at a large scale. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of the base energy are discussed in detail in our second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 66
19705 Prediction of California Bearing Ratio of a Black Cotton Soil Stabilized with Waste Glass and Eggshell Powder using Artificial Neural Network

Authors: Biruhi Tesfaye, Avinash M. Potdar

Abstract:

The laboratory test procedure for determining the California bearing ratio (CBR) of black cotton soils is not only expensive but also time-consuming. Hence, advance prediction of CBR plays a significant role, as it is applicable in pavement design. The prediction of the CBR of the treated soil was carried out with Artificial Neural Networks (ANNs), a computational tool based on the properties of the biological neural system. To observe CBR values, combined eggshell and waste glass powder was added to the soil at 4, 8, 12, and 16% of the weight of the soil samples. Accordingly, the related laboratory tests were conducted to obtain the data required for the best model. The maximum CBR value of 5.8 was found at 8% eggshell-waste glass powder addition. The model was developed using CBR as the output layer variable. CBR was considered a function of the joint effect of liquid limit, plastic limit, plasticity index, optimum moisture content and maximum dry density. The best model found was an ANN with 5, 6 and 1 neurons in the input, hidden and output layers respectively. The performance of the selected ANN was 0.99996, 4.44E-05, 0.00353 and 0.0067 for the correlation coefficient (R), mean square error (MSE), mean absolute error (MAE) and root mean square error (RMSE) respectively. The research presented above highlights the future scope of stabilization with waste glass combined with different percentages of eggshell, which can lead to an economical design with a CBR acceptable for pavement sub-base or base courses, as desired.
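A compact sketch of the 5-6-1 network described above, written with scikit-learn's MLPRegressor; the five inputs follow the abstract, but the training rows and the chosen activation and solver are placeholders rather than the paper's laboratory data and settings.

```python
# 5-6-1 ANN for CBR prediction (5 inputs, one hidden layer of 6 neurons, 1 output).
# Training rows are placeholders; the paper's laboratory data are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# columns: LL (%), PL (%), PI (%), OMC (%), MDD (g/cc)
X = np.array([[62, 30, 32, 24.0, 1.55],
              [58, 29, 29, 22.5, 1.60],
              [55, 28, 27, 21.0, 1.63],
              [51, 27, 24, 19.5, 1.68],
              [48, 26, 22, 18.0, 1.72]])
y = np.array([2.1, 3.0, 3.9, 4.8, 5.8])   # soaked CBR (%)

ann = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0).fit(X, y)
pred = ann.predict(X)
print("R^2 =", r2_score(y, pred),
      "MAE =", mean_absolute_error(y, pred),
      "RMSE =", np.sqrt(mean_squared_error(y, pred)))
```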

Keywords: CBR, artificial neural network, liquid limit, plastic limit, maximum dry density, OMC

Procedia PDF Downloads 154
19704 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump

Authors: Ravi Verma

Abstract:

The cryosorption pump is the best solution for achieving a clean, vibration-free ultra-high vacuum. Furthermore, the operation of a cryosorption pump is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel which is cooled with the help of a cryogen or cryocooler, (b) an adsorbent which is used to adsorb the gas molecules, and (c) an epoxy which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we have attempted to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite. In this case, the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers the complete spatial random distribution of filler particles, and this distribution is described by the binomial distribution. The results obtained by the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K. Due to the non-empirical nature of the proposed model, it will be useful for the prediction of other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of high-performance cryosorption pumps.
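To illustrate the random (binomial) filler placement idea only, and not the authors' analytical model, the toy Monte Carlo sketch below assigns each grid cell to filler with probability equal to the filler volume fraction and estimates an effective conductivity from a simple series-parallel resistor-network bound; all property values are placeholders.

```python
# Toy illustration of a randomly (binomially) placed filler in an epoxy matrix:
# each cell is graphene with probability phi, and the effective conductivity is
# estimated by treating columns as series paths combined in parallel.
# This is a resistor-network sketch, not the analytical model proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
k_epoxy, k_filler = 0.05, 50.0   # W/(m K) near 5 K -- placeholder values
phi = 0.10                       # filler volume fraction
n = 200                          # grid size

cells = rng.random((n, n)) < phi                 # True where a filler particle sits
k_map = np.where(cells, k_filler, k_epoxy)

k_series_per_column = n / np.sum(1.0 / k_map, axis=0)   # series (harmonic) along heat flow
k_effective = np.mean(k_series_per_column)              # parallel (arithmetic) across columns
print(f"effective conductivity ≈ {k_effective:.3f} W/(m K)")
```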

Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity

Procedia PDF Downloads 64
19703 Numerical Simulation of Seismic Process Accompanying the Formation of Shear-Type Fault Zone in Chuya-Kuray Depressions

Authors: Mikhail O. Eremin

Abstract:

Seismic activity around the world is clearly a threat to people's lives, as well as to infrastructure and capital construction. It is the instability of the latter under powerful earthquakes that most often causes human casualties. Therefore, during construction it is necessary to take into account the risks of large-scale natural disasters. The task of assessing these risks is one of the most urgent at the present time, and the final goal of any study of earthquakes is forecasting. This is especially important for seismically active regions of the planet where earthquakes occur frequently; Gorni Altai is one such region. In this work, we developed a physical-mathematical model of the stress-strain state evolution of a loaded geomedium for the purpose of numerically simulating the seismic process accompanying the formation of the Chuya-Kuray fault zone, Gorni Altai, Russia. We built a structural model on the basis of seismotectonic and paleoseismogeological investigations, as well as SRTM data. The mathematical model is based on the system of equations of solid mechanics, which includes the fundamental conservation laws and constitutive equations for elastic (Hooke's law) and inelastic deformation (a modified Drucker-Prager-Nikolaevskii model). The initial stress state of the model corresponds to the gravitational one. We then simulate the activation of a buried dextral strike-slip paleo-fault located in the basement of the model and obtain the stages of formation and the structure of the Chuya-Kuray fault zone. It is shown that the results of the numerical simulation are in good agreement with field observations in a statistical sense. The simulated seismic process is strongly bound to the faults, i.e. lineaments with a high degree of inelastic strain localization. The fault zone forms an en-echelon system of dextral strike-slips according to the Riedel model. The system of surface lineaments is represented by R- and R'-shear bands, X- and Y-shears, and T-fractures. The simulated seismic process obeys the Gutenberg-Richter and Omori laws; thus, the model describes the self-similar character of deformation and fracture of rocks and geomedia. We also modified the algorithm for identifying separate slip events in the model, owing to the features of the strain rate dependence on time.
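The Gutenberg-Richter check mentioned above is commonly done by estimating the b-value; a minimal sketch with the Aki maximum-likelihood estimator follows, run here on synthetic magnitudes rather than the simulated catalogue.

```python
# Gutenberg-Richter b-value check via the Aki maximum-likelihood estimator,
# b = log10(e) / (mean(M) - Mc). The magnitudes below are synthetic, not the
# catalogue produced by the simulation.
import numpy as np

rng = np.random.default_rng(1)
Mc = 2.0                                   # magnitude of completeness
# magnitudes above Mc are exponential with rate b*ln(10); here b_true = 1.0
mags = Mc + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=5000)

b = np.log10(np.e) / (mags.mean() - Mc)
a = np.log10(len(mags)) + b * Mc           # so that log10 N(>=M) = a - b*M
print(f"estimated b-value: {b:.2f}, a-value: {a:.2f}")
```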

Keywords: Drucker-Prager model, fault zone, numerical simulation, Riedel bands, seismic process, strike-slip fault

Procedia PDF Downloads 111
19702 An Intelligent Nondestructive Testing System of Ultrasonic Infrared Thermal Imaging Based on Embedded Linux

Authors: Hao Mi, Ming Yang, Tian-yue Yang

Abstract:

Ultrasonic infrared nondestructive testing is a testing method with high speed, accuracy and localization capability. However, some problems remain: detection requires manual real-time judgment in the field, and the methods for storing and viewing results are still primitive. An intelligent nondestructive detection system based on embedded Linux is put forward in this paper. The hardware of the detection system is based on an ARM (Advanced RISC Machine) core, and an embedded Linux system is built to realize image processing and defect detection on the thermal images. The CLAHE algorithm and the Butterworth filter are used to process the thermal image, and then the Boa web server and CGI (Common Gateway Interface) technology are used to transmit the test results to the display terminal over the network for real-time and remote monitoring. The system also reduces manual labor and removes the reliance on manual judgment. According to the experimental results, the system provides a convenient and quick solution for industrial nondestructive testing.
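The two preprocessing steps named above, CLAHE and a Butterworth low-pass, can be sketched on a desktop with OpenCV and NumPy as below; the clip limit, tile size, cutoff radius and filter order are placeholders, not the values tuned for the ARM board.

```python
# CLAHE contrast enhancement followed by a frequency-domain Butterworth low-pass,
# as a desktop sketch of the thermal-image preprocessing described above.
# Clip limit, tile size, cutoff and order are placeholders, not the deployed values.
import cv2
import numpy as np

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(thermal)

rows, cols = enhanced.shape
u = np.arange(rows) - rows / 2
v = np.arange(cols) - cols / 2
D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)        # distance from spectrum centre
D0, order = 30.0, 2                                   # cutoff radius, filter order
H = 1.0 / (1.0 + (D / D0) ** (2 * order))             # Butterworth low-pass response

spectrum = np.fft.fftshift(np.fft.fft2(enhanced.astype(float)))
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * H)))
cv2.imwrite("filtered_frame.png", np.clip(filtered, 0, 255).astype(np.uint8))
```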

Keywords: remote monitoring, non-destructive testing, embedded Linux system, image processing

Procedia PDF Downloads 191
19701 An Adaptive Cooperative Scheme for Reliability of Transmission Using STBC and CDD in Wireless Communications

Authors: Hyun-Jun Shin, Jae-Jeong Kim, Hyoung-Kyu Song

Abstract:

In broadcasting and cellular systems, a cooperative scheme is proposed to improve bit error rate performance. At present, the coverage of the broadcasting system coexists with the coverage of the cellular system, so a user within the cellular coverage is frequently also within the broadcasting coverage. The proposed cooperative scheme is designed for these shared coverage areas. Users receive signals from both the broadcasting base station and the cellular base station. The proposed scheme selects the cellular base station with the worse channel condition to achieve better bit error rate performance through cooperation. The performance of the proposed scheme is evaluated in a fading channel.
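For reference, the STBC building block in such schemes is typically the 2x1 Alamouti code; the NumPy sketch below shows its encoding and linear combining over a flat-fading channel, and is illustrative only, not the paper's cooperative selection logic.

```python
# 2x1 Alamouti STBC over a flat-fading channel: encode a symbol pair across two
# antennas (or cooperating base stations) and two slots, then combine at the
# receiver. Illustrative only; the paper's STBC/CDD selection logic is not shown.
import numpy as np

rng = np.random.default_rng(0)
s = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)          # QPSK symbol pair s1, s2

h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # channel gains
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*)
r1 = h[0] * s[0] + h[1] * s[1] + noise[0]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + noise[1]

# linear combining recovers the symbols with diversity order 2
s1_hat = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
s2_hat = np.conj(h[1]) * r1 - h[0] * np.conj(r2)
gain = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
print("decoded:", s1_hat / gain, s2_hat / gain, "sent:", s)
```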

Keywords: cooperative communication, diversity, STBC, CDD, channel condition, broadcasting system, cellular system

Procedia PDF Downloads 471
19700 Numerical Simulation of Kangimi Reservoir Sedimentation, Kaduna State, Nigeria

Authors: Abdurrasheed Sa'id, Abubakar Isma'il, Waheed Alayande

Abstract:

This study focused on numerical simulation of Kangimi reservoir sedimentation; different numerical sediment transport models were reviewed, and GSTARS3 was selected. The model was developed using the 1977 data. It was calibrated by simulating the 2012 profile and sediment deposition and comparing them with the 2012 hydrographic survey results of the National Water Resources Institute (NWRI). The model was validated by simulating the 2016 deposition and comparing the results with NWRI estimates. The performance of the proposed model was also tested using statistical parameters such as MSE (mean square error), MAPE (mean absolute percentage error) and R² (coefficient of determination), with values of 1.32 m, 0.17% and 0.914 respectively, which show strong agreement. After calibration, validation and performance testing, the model was used to simulate the 2032 and 2062 profiles and deposition. The results showed that by 2032 the reservoir will be silted by 25.34 MCM, or 43.3% of the design capacity, and by 60.7% of the capacity by the year 2062. A number of sedimentation mitigation measures were recommended.
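The three goodness-of-fit measures used for calibration and validation can be computed directly as in the sketch below; the observed and simulated bed levels shown are placeholders, not the NWRI survey data.

```python
# MSE, MAPE and R^2 between simulated and surveyed bed levels.
# The two profiles below are placeholders, not the NWRI hydrographic survey data.
import numpy as np

observed  = np.array([312.4, 311.8, 311.1, 310.6, 310.2, 309.9])  # bed levels (m)
simulated = np.array([312.1, 311.9, 310.8, 310.7, 310.0, 310.1])

err  = simulated - observed
mse  = np.mean(err ** 2)
mape = np.mean(np.abs(err / observed)) * 100.0
r2   = 1.0 - np.sum(err ** 2) / np.sum((observed - observed.mean()) ** 2)
print(f"MSE = {mse:.3f}, MAPE = {mape:.2f} %, R^2 = {r2:.3f}")
```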

Keywords: NWRI (National Water Resources Institute), sedimentation, GSTARS3, model

Procedia PDF Downloads 194
19699 Impact of Data and Model Choices to Urban Flood Risk Assessments

Authors: Abhishek Saha, Serene Tay, Gerard Pijcke

Abstract:

The availability of high-resolution topography and rainfall information in urban areas has made it necessary to revise the modeling approaches used for flood risk assessments. Lidar-derived elevation models with resolutions of 1 m or finer are becoming widely accessible. The classical 1D-2D approach, in which channel flow is simulated in 1D and coupled with a coarse-resolution 2D overland flow model, may not fully utilize the information provided by high-resolution data. In this context, a study was undertaken to compare three different modeling approaches for simulating flooding in an urban area. The first (base) model is Sobek, which uses a 1D formulation together with hydrologic boundary conditions and couples it with a 2D overland flow model. The second model uses a full 2D model for the entire area, with shallow water equations at the resolution of the digital elevation model (DEM). These models are compared against another 2D shallow water equation solver, which uses a subgrid method for grid refinement. The models are run for different horizontal DEM resolutions varying between 1 m and 5 m. The results show a significant difference in inundation extents and water levels for different DEMs; they are also sensitive to the different numerical models even with the same physical parameters, such as friction. The study shows the importance of having reliable field observations of inundation extents and levels before a choice of model and data can be made for spatial flood risk assessments.
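For reference, the depth-averaged shallow water equations solved by the 2D models compared here can be written in conservative form as below, with bed slope and friction as source terms (a standard statement, not the specific discretisation of any of the three models).

```latex
% 2D shallow water equations in conservative form
% (h: depth, u, v: depth-averaged velocities, z_b: bed elevation,
%  c_f: friction coefficient, g: gravitational acceleration)
\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = 0
\\[4pt]
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2} g h^2\right)
  + \frac{\partial (huv)}{\partial y}
  = -\,g h \frac{\partial z_b}{\partial x} - c_f\, u \sqrt{u^2 + v^2}
\\[4pt]
\frac{\partial (hv)}{\partial t}
  + \frac{\partial (huv)}{\partial x}
  + \frac{\partial}{\partial y}\!\left(hv^2 + \tfrac{1}{2} g h^2\right)
  = -\,g h \frac{\partial z_b}{\partial y} - c_f\, v \sqrt{u^2 + v^2}
```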

Keywords: flooding, DEM, shallow water equations, subgrid

Procedia PDF Downloads 113
19698 The Study of Applying Models: House, Temple and School for Sufficiency Development to Participate in ASEAN Economic Community: A Case Study of Trimitra Temple (China Town) Bangkok, Thailand

Authors: Saowapa Phaithayawat

Abstract:

The purposes of this study are: 1) to study the impact of the 3-community-core model of House (H), Temple (T), and School (S), with the cooperation of official departments, on community development toward ASEAN Economic Community involvement, and 2) to study the procedures and extension of the model. The research is qualitative, based on formal and informal interviews. Local people in the community were observed, and group interviews were also conducted with executors and cooperators in the community school. In terms of the social and cultural dimension, the 3-community-core model consisting of house, temple and school is the base of Thai culture, bringing understanding, happiness and unity to the community. The result of this research is that the official departments, together with the model developers, work cooperatively in the community to provide support in terms of budget, planning and activities. Moreover, the needs of the community and the continuity required to sustain it are satisfied by the model implementation. In terms of implementation procedures, executors and cooperators can work, coordinate, plan, and launch their public relations together. Concerning the model's development, this enables the community to achieve its goal of preparing for ASEAN Economic Community involvement.

Keywords: ASEAN Economic Community, applied models, sufficiency development, house, temple, school

Procedia PDF Downloads 285
19697 Predicting Foreign Direct Investment of IC Design Firms from Taiwan to East and South China Using Lotka-Volterra Model

Authors: Bi-Huei Tsai

Abstract:

This work explores the inter-region investment behaviors of the integrated circuit (IC) design industry from Taiwan to China using the amount of foreign direct investment (FDI). Given the mutual dependence among different IC design industry locations, the Lotka-Volterra model is utilized to explore the FDI interactions between South and East China, and the effects of inter-regional collaborations on FDI flows into China are considered. The evolution of FDI into South China for the IC design industry significantly inspires subsequent FDI into East China, while FDI into East China for Taiwan's IC design industry significantly hinders subsequent FDI into South China. The supply chain of the IC industry includes IC design, manufacturing, packaging and testing enterprises; the IC manufacturing, packaging and testing industries depend on the IC design industry to gain advanced business benefits. The FDI amount from Taiwan's IC design industry into East China is the greatest among the four regions of North, East, Mid-West and South China, and the FDI amount into South China is the second largest. If IC design houses buy more equipment and bring more capital into South China, those in East China will be under pressure to undertake more FDI in East China to maintain the leading position of the supply chain there. On the other hand, as FDI in East China rises, FDI in South China will successively decline, since capital has concentrated in East China. The Lotka-Volterra model predicts FDI trends accurately because the industrial interactions between the two regions are included. Finally, this work confirms that the FDI flows cannot reach a stable equilibrium point, so the FDI inflows into East and South China will expand in the future.
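A minimal sketch of competitive Lotka-Volterra dynamics with the East-China and South-China FDI stocks as the two interacting populations is given below; the growth rates, capacities and interaction coefficients are placeholders, whereas the paper estimates them from the observed FDI series.

```python
# Discrete competitive Lotka-Volterra dynamics for the two FDI stocks.
# x = cumulative FDI into East China, y = cumulative FDI into South China.
# All parameter values are placeholders; the paper estimates them from the
# observed FDI series of Taiwan's IC design firms.
r_x, r_y = 0.30, 0.25        # intrinsic growth rates
K_x, K_y = 120.0, 60.0       # FDI "carrying capacities"
a_xy = -0.40                 # South-China FDI inspires East-China FDI (negative = enhancing)
a_yx = +0.60                 # East-China FDI hinders South-China FDI (positive = inhibiting)

x, y = 5.0, 8.0              # initial FDI stocks
trajectory = [(x, y)]
for _ in range(40):
    x_next = x + r_x * x * (1.0 - (x + a_xy * y) / K_x)
    y_next = y + r_y * y * (1.0 - (y + a_yx * x) / K_y)
    x, y = x_next, y_next
    trajectory.append((x, y))

print("final FDI stocks (East, South):", trajectory[-1])
```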

Keywords: Lotka-Volterra model, foreign direct investment, competition, equilibrium analysis

Procedia PDF Downloads 334
19696 Comparative Assessment of a Distributed Model and a Lumped Model for Estimating of Sediments Yielding in Small Urban Areas

Authors: J. Zambrano Nájera, M. Gómez Valentín

Abstract:

Increases in urbanization during the 20th century have brought, as one major problem, an increase in sediment production. Hydraulic erosion is one of the major causes of increasing sediment loads in small urban catchments. Such increases in sediment yield in headwater urban catchments can cause obstruction of drainage systems, making it impossible to capture urban runoff, increasing runoff volumes and thus exacerbating urban flooding. For these reasons, it is increasingly important to study sediment production in urban watersheds in order to properly analyze and solve sediment-related problems. The study of sediment production has improved with the use of mathematical modeling. For that reason, a new physically based model is proposed, applicable to small headwater urban watersheds, which retains the advantages of distributed physically based models but with more realistic data requirements. Additionally, in this paper the proposed model is compared with a lumped model, reviewing the results and the advantages and disadvantages of both.

Keywords: erosion, hydrologic modeling, urban runoff, sediment modeling, sediment yielding, urban planning

Procedia PDF Downloads 321
19695 The Value of Store Choice Criteria on Perceived Patronage Intentions

Authors: Susana Marques

Abstract:

Research on how store environment cues influence consumers' store choice decision criteria, such as store operations, product quality, monetary price, store image and sales promotion, is sparse. Especially absent is research on the simultaneous impact of multiple store environment cues. The authors propose a comprehensive store choice model that includes three types of store environment cues as exogenous constructs, various store choice criteria as possible mediating constructs, and store patronage intentions as an endogenous construct. On the basis of testing with a sample of 561 hypermarket customers, the model is partially supported. This study used structural equation modelling to test the proposed model.

Keywords: store choice, store patronage, structural equation modelling, retailing

Procedia PDF Downloads 250
19694 Experimental Investigation on Utility and Suitability of Lateritic Soil as a Pavement Material

Authors: J. Hemanth, B. G. Shivaprakash, S. V. Dinesh

Abstract:

The locally available lateritic soil in the Dakshina Kannada and Udupi districts has traditionally been used as building blocks for construction purposes, but it does not meet the conventional requirements (LL ≤ 25% and PI ≤ 6%) or the desired four-day soaked CBR value for use as a sub-base course material in pavements. In order to improve its properties to satisfy the Atterberg limits, the soil was blended with sand, cement and quarry dust at various percentages; to meet the CBR strength requirements, individual and combined gradation of various-sized aggregates along with laterite soil and other filler materials was carried out for coarse-graded granular sub-base materials (Grading II and Grading III). The effect of the additives blended with lateritic soil and aggregates is studied in terms of Atterberg limits, compaction, California Bearing Ratio (CBR) and permeability. It is observed that the addition of sand, cement and quarry dust is effective in improving the Atterberg limits, CBR values and permeability values. The obtained CBR and permeability values of the Grading III and Grading II materials were found to be sufficient for use as sub-base courses for low-volume roads and high-volume roads respectively.

Keywords: lateritic soil, sand, quarry dust, gradation, sub-base course, permeability

Procedia PDF Downloads 287
19693 Laboratory Investigation on the Waste Road Construction Material Using Conventional and Chemical Additives

Authors: Paulos Meles Yihdego

Abstract:

To address the environmental impact of the cement industry and road-building waste, the use of chemical stabilizers in conjunction with recycled asphalt and cement components was investigated. Silica-based chemical stabilizers and their potential effects on the cement-stabilized base layer are discussed in this paper. Strength, moisture-compaction interaction, and microstructural characteristics are all examined. According to the results, using this stabilizer improved the mechanical properties. The microstructural study showed that the inclusion of chemical stabilizers in the mixture raised the intensity of the C-S-H (calcium silicate hydrate) gel, which is responsible for the mixture's improved strength. The small amount of ettringite found in the later phases demonstrated the durability of the design. The application of this stabilizer ensures a strong, eco-friendly, durable base layer.

Keywords: ettringites, microstructure analysis, durability properties, cement stabilized base

Procedia PDF Downloads 33
19692 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played a crucial role in causing the most recent financial crisis in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may thus ultimately cause a loss of property. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economy, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity, and among these risks the credit system is the most significant one. Because mortgages have long terms and large balances, it is critical to monitor the risk during the performance period. In this project, data on about 300,000 mortgage accounts are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis, the data are cleaned up, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 data are split into two parts: 60% for model development and 40% for in-time model validation. The KS for model development is 31, and the KS for in-time validation is 31, indicating that the model is stable. In addition, the model is further checked by out-of-time validation, which uses 40% of the 2006 data, with a KS of 33; this indicates the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, consumer price index, unemployment rate, inflation rate, etc. Data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), the KS increases from 41 to 44, indicating that the macroeconomic variables improve the separation power of the model and make the prediction more accurate.
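The KS statistic quoted throughout is the maximum separation between the cumulative score distributions of good and bad accounts; the sketch below computes it on synthetic scores, not on the mortgage data.

```python
# KS statistic for a delinquency model: maximum gap between the cumulative
# distributions of model scores for bad (delinquent) and good accounts.
# The scores below are synthetic; the 300,000 mortgage accounts are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
scores_good = rng.normal(0.30, 0.15, 5000)   # predicted delinquency probability, good accounts
scores_bad  = rng.normal(0.55, 0.15, 500)    # predicted delinquency probability, bad accounts

thresholds = np.sort(np.concatenate([scores_good, scores_bad]))
cdf_good = np.searchsorted(np.sort(scores_good), thresholds, side="right") / scores_good.size
cdf_bad  = np.searchsorted(np.sort(scores_bad),  thresholds, side="right") / scores_bad.size

ks = np.max(np.abs(cdf_bad - cdf_good)) * 100.0
print(f"KS = {ks:.1f}")   # reported on a 0-100 scale, as in the abstract
```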

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 198
19691 Learners’ Conspicuous and Significant Errors in Arithmetic

Authors: Michael Lousis

Abstract:

The systematic identification of the most conspicuous and significant errors made by learners during three years of testing of their progress in learning Arithmetic is presented in this article. How these errors changed over the three years of school instruction in Arithmetic is also shown. The sample comprises two hundred (200) English students and one hundred and fifty (150) Greek students. These students were purposefully selected according to their participation in each testing session during the three-year Kassel Project in England and Greece, in both the Arithmetic and Algebra domains simultaneously. For each student, the data include six test scripts corresponding to the three testing sessions in Arithmetic and Algebra respectively.

Keywords: arithmetic, errors, Kassel Project, progress of learning

Procedia PDF Downloads 240
19690 Autonomic Recovery Plan with Server Virtualization

Authors: S. Hameed, S. Anwer, M. Saad, M. Saady

Abstract:

For autonomic recovery with server virtualization, a cogent plan that includes recovery techniques and backups with virtualized servers can be developed instead of assigning an idle server to backup operations. In addition to reducing hardware cost and data center footprint, the disaster recovery plan can ensure system uptime and meet objectives for high availability, recovery time, recovery point, server provisioning, and quality of service. This autonomic solution would also support disaster management, testing, and development of the recovery site. In this research, a workflow plan is proposed for supporting disaster recovery with virtualization, providing virtual monitoring, requirements engineering, solution decision making, quality testing, and disaster management. This recovery model would make disaster recovery much easier, faster, and less error prone.

Keywords: autonomous intelligence, disaster recovery, cloud computing, server virtualization

Procedia PDF Downloads 135
19689 A New Nonlinear State-Space Model and Its Application

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this work, a new nonlinear model is introduced. The model is in state-space form, and its nonlinearity lies in the state equation, where the state vector is multiplied by itself. This technique makes our model a generalization of many famous models, such as the Lotka-Volterra and Lorenz models, which have many real-life applications. We apply the new model to wind speed estimation using a new nonlinear estimator that is suited to this model.
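A small simulation sketch of the kind of state equation described above is given below, with the state propagated through a linear term plus a term in the Kronecker self-product of the state; the matrices, noise levels and measurement map are placeholders.

```python
# Sketch of a nonlinear state-space model whose state equation contains the
# Kronecker self-product of the state: x_{k+1} = A x_k + B (x_k ⊗ x_k) + w_k.
# The matrices, noise levels and measurement map below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 2
A = np.array([[0.90, 0.05],
              [0.02, 0.85]])
B = 0.01 * rng.normal(size=(n, n * n))   # maps the n^2-vector kron(x, x) back to R^n
H = np.array([[1.0, 0.0]])               # measure the first state (e.g. wind speed)

x = np.array([1.0, 0.5])
states, measurements = [], []
for _ in range(100):
    w = 0.01 * rng.normal(size=n)            # process noise
    v = 0.05 * rng.normal()                  # measurement noise
    x = A @ x + B @ np.kron(x, x) + w        # bilinear (quadratic) state transition
    states.append(x)
    measurements.append((H @ x).item() + v)  # noisy scalar measurement

print("final state:", states[-1], "last measurement:", measurements[-1])
```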

Keywords: nonlinear systems, state-space model, Kronecker product, nonlinear estimator

Procedia PDF Downloads 657
19688 Development and Range Testing of a LoRaWAN System in an Urban Environment

Authors: N. R. Harris, J. Curry

Abstract:

This paper describes the construction and operation of an experimental LoRaWAN network surrounding the University of Southampton in the United Kingdom. Following successful installation, an experimental node design is built and characterised, with particular emphasis on radio range. Several configurations are investigated, including different data rates and node heights. It is concluded that although the range can be great (over 8 km in this case), environmental topology is critical. However, shorter-range implementations, up to about 2 km in an urban environment, are relatively insensitive to topology, although care is still needed. The example node and the relatively simple base station reported demonstrate that LoRaWAN can be a very low-cost and practical solution for Internet of Things applications in distributed monitoring systems with sensors spread over distances of several kilometres.
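A back-of-the-envelope check of the ranges reported above can be made with a log-distance path-loss model as in the sketch below; the link-budget figures and path-loss exponents are typical values, not the parameters measured in the Southampton deployment.

```python
# Log-distance path-loss estimate of LoRaWAN range: the distance at which the path
# loss exhausts the link budget. All numbers are typical values, not the parameters
# measured in the deployment described above.
import math

tx_power_dbm  = 14.0      # EU868 ERP limit
sensitivity   = -137.0    # approximate receiver sensitivity at SF12 / 125 kHz
antenna_gains = 2.0       # combined antenna gains (dBi)
link_budget   = tx_power_dbm + antenna_gains - sensitivity   # ≈ 153 dB

f_mhz, d0_km = 868.0, 0.001   # reference distance of 1 m
pl_d0 = 20 * math.log10(d0_km) + 20 * math.log10(f_mhz) + 32.44   # free-space loss at 1 m

for n in (2.7, 3.2, 4.0):     # path-loss exponent: suburban, urban, dense urban
    d_km = d0_km * 10 ** ((link_budget - pl_d0) / (10 * n))
    print(f"exponent {n}: estimated range ≈ {d_km:.1f} km")
```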

Keywords: long-range, wireless, sensor, network

Procedia PDF Downloads 112