Search results for: Paper assessment
961 Decision Support System for Flood Crisis Management using Artificial Neural Network
Authors: Muhammad Aqil, Ichiro Kita, Akira Yano, Nishiyama Soichi
Abstract:
This paper presents an alternative approach that uses an artificial neural network to simulate the flood level dynamics in a river basin. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility in approach and evolving graphical features, and can be adopted for any similar situation to predict the flood level. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. The program enables users to process the flood level data, to train/test the model using various inputs and to visualize results. The program code consists of a set of files, which can also be modified to match other purposes. The results indicate that the decision support system applied to the flood level reaches encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory; the model is able to forecast the flood level up to 5 hours in advance with reasonable prediction accuracy. Finally, this program may also serve as a tool for real-time flood monitoring and process control.
Keywords: Decision Support System, Neural Network, Flood Level
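As a rough, generic illustration of the data-driven forecasting setup this abstract describes (not the authors' system), the sketch below trains a small feed-forward network to map recent gauge readings to the level five hours ahead; the synthetic series, lag count and network size are all assumptions.

```python
# Illustrative sketch only: multilayer-perceptron flood-level forecasting
# with a fixed lead time, loosely following the setup described above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def make_supervised(levels, n_lags=6, lead=5):
    """Turn an hourly level series into (lagged inputs, level at t+lead)."""
    X, y = [], []
    for t in range(n_lags, len(levels) - lead):
        X.append(levels[t - n_lags:t])
        y.append(levels[t + lead])
    return np.array(X), np.array(y)

# Hypothetical hourly water-level series (metres) from a gauging station.
rng = np.random.default_rng(0)
levels = 2.0 + 0.5 * np.sin(np.arange(2000) / 24.0) + 0.1 * rng.standard_normal(2000)

X, y = make_supervised(levels, n_lags=6, lead=5)          # 5-hour-ahead target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))               # rough accuracy check
```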
960 Nonlinear Thermal Hydraulic Model to Analyze Parallel Channel Density Wave Instabilities in Natural Circulation Boiling Water Reactor with Asymmetric Power Distribution
Authors: Sachin Kumar, Vivek Tiwari, Goutam Dutta
Abstract:
The paper investigates parallel channel instabilities of a natural circulation boiling water reactor. A thermal-hydraulic model is developed to simulate two-phase flow behavior in the natural circulation boiling water reactor (NCBWR), incorporating the ex-core components and recirculation loop such as the steam separator, down-comer, lower horizontal section and upper horizontal section. Numerical analysis is then carried out for parallel channel instabilities of the reactor undergoing both in-phase and out-of-phase modes of oscillation. To analyze the relative effect on the stability of the reactor due to the inclusion of the various ex-core components and recirculation loop, the marginal stability point is obtained at a particular inlet enthalpy of the reactor core first without, and then with, the inclusion of these components. Numerical simulations are also conducted to determine the relative dominance between the two modes of oscillation, i.e. in-phase and out-of-phase. Simulations are also carried out when the channels are subjected to asymmetric power distribution, keeping the inlet enthalpy the same.
Keywords: Asymmetric power distribution, Density wave oscillations, In-phase and out-of-phase modes of instabilities, Natural circulation boiling water reactor
959 Generalized Inverse Eigenvalue Problems for Symmetric Arrow-head Matrices
Authors: Yongxin Yuan
Abstract:
In this paper, we first give the representation of the general solution of the following inverse eigenvalue problem (IEP): Given X ∈ R^{n×p} and a diagonal matrix Λ ∈ R^{p×p}, find nontrivial real-valued symmetric arrow-head matrices A and B such that AXΛ = BX. We then consider an optimal approximation problem: Given real-valued symmetric arrow-head matrices Ã, B̃ ∈ R^{n×n}, find (Â, B̂) ∈ S_E such that ‖Â − Ã‖² + ‖B̂ − B̃‖² = min_{(A,B)∈S_E} (‖A − Ã‖² + ‖B − B̃‖²), where S_E is the solution set of the IEP. We show that the optimal approximation solution (Â, B̂) is unique and derive an explicit formula for it.
Keywords: Partially prescribed spectral information, symmetric arrow-head matrix, inverse problem, optimal approximation.
958 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes - transmit diversity (TD) and spatial multiplexing (SM) - using a fuzzy logic technique. In this method, two channel quality indicators (CQIs), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision, i.e. an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: Channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity.
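The switching idea can be illustrated with a toy Mamdani-style rule base over the two CQIs named above; the membership breakpoints and rules below are invented for illustration and are not the paper's fuzzy system.

```python
# Illustrative sketch only: a toy fuzzy switch between transmit diversity (TD)
# and spatial multiplexing (SM) based on RSNR and RSSI.
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def switch_decision(rsnr_db, rssi_dbm):
    # Fuzzify the two channel-quality indicators.
    snr_low, snr_high = tri(rsnr_db, -5, 5, 15), tri(rsnr_db, 10, 20, 30)
    rssi_low, rssi_high = tri(rssi_dbm, -100, -90, -75), tri(rssi_dbm, -80, -65, -50)
    # Rules: good channel -> SM (high rate), poor channel -> TD (robustness).
    sm_strength = min(snr_high, rssi_high)
    td_strength = max(snr_low, rssi_low)
    return "SM" if sm_strength >= td_strength else "TD"

print(switch_decision(25, -60))   # strong channel -> SM
print(switch_decision(2, -95))    # weak channel  -> TD
```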
957 Use of Vegetation and Geo-Jute in Erosion Control of Slopes in a Sub-Tropical Climate
Authors: Mohammad Shariful Islam, Shamima Nasrin, Md. Shahidul Islam, Farzana Rahman Moury
Abstract:
Protection of slopes and embankments from erosion has become an important issue in Bangladesh. The construction of strong structures requires large capital, integrated design and high maintenance costs. Strong-structure methods have a negative impact on the environment and sometimes do not function for the design period. Planting the vetiver system along the slopes is an alternative solution. Vetiver not only serves the purpose of slope protection but also adds a green environment and reduces pollution. Vetiver is available in almost all the districts of Bangladesh. This paper presents the application of the vetiver system, together with geo-jute, for slope protection and erosion control of embankments and slopes. In-situ shear tests have been conducted on the vetiver-rooted soil system to find its shear strength. The shear strength and effective soil cohesion of the vetiver-rooted soil matrix are respectively 2.0 times and 2.1 times higher than those of the bare soil. Similar trends have been found in direct shear tests conducted on laboratory reconstituted samples. Field trials have been conducted on road embankment and slope protection with vetiver at different sites. During the period of vetiver root growth, soil protection is provided by geo-jute. As the geo-jute degrades with time, the vetiver roots grow and take over the function of the geo-jute. Slope stability analyses showed that vegetation increases the factor of safety significantly.
Keywords: Erosion, geo-jute, green technology, vegetation.
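To illustrate how root reinforcement of the kind reported above typically enters a slope stability check, the expression below is the standard infinite-slope factor of safety with an added root-cohesion term; the symbols are generic and none of the values are taken from this study.

```latex
% Infinite-slope factor of safety with an added root-cohesion term c_r
% (illustrative textbook form; not a result of the study above).
FS = \frac{c' + c_r + (\gamma z \cos^2\beta - u)\tan\phi'}{\gamma z \sin\beta \cos\beta}
```

Here c' is the effective soil cohesion, c_r the additional cohesion contributed by roots, γ the soil unit weight, z the depth of the slip surface, β the slope angle, u the pore-water pressure and φ' the effective friction angle; a roughly twofold increase in effective cohesion, as measured above, raises the numerator and hence the factor of safety for shallow slips.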
956 PetriNets Manipulation to Reduce Roaming Duration: Criterion to Improve Handoff Management
Authors: Hossam el-ddin Mostafa, Pavel Čičak
Abstract:
IETF RFC 2002 originally introduced the wireless Mobile-IP protocol to support portable IP addresses for mobile devices that often change their network access points to the Internet. The inefficiency of this protocol, mainly within the handoff management, produces large end-to-end packet delays during the registration process and further degrades system efficiency due to packet losses between subnets. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, in order to reduce the roaming duration, is the central issue considered in this paper. State-transition Petri nets modeling the scenario-based CIA (communication inter-agents) procedure, as an extension to the basic Mobile-IP registration process, were designed and manipulated. A heuristic configuration file for the registration parameters was created during a practical setup session on the Cisco Router-1760 platform running IOS 12.3(15)T. Finally, stand-alone performance simulation results from Simulink (MATLAB), within each subnet and also between subnets, are presented and report improved end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation, showing lower values of end-to-end packet delay for Mobile-IP using the CIA procedure, and improved packet flow between subnets with reduced packet losses.
Keywords: Cisco configuration, handoff, packet delay, Petri-Nets, registration process, Simulink.
955 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema
Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy
Abstract:
Programming requires years of training. With natural language and end-user development methods, programming could become available to everyone. It enables end users to program their own devices and extend the functionality of the existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data by using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group and select) using the system and also using Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they selected the cells directly with the mouse) than in ISPM. The results suggest that natural language interfaces can support end-user software engineering and help overcome the present bottleneck of professional developers.
Keywords: Natural language processing, end user development, natural language interfaces, human computer interaction, data recognition, dialog systems, spreadsheet.
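A minimal sketch of the kind of natural-language-to-spreadsheet dispatch described here (not the ISPM implementation) might map a command that mentions a column of an inferred header to a pandas operation:

```python
# Illustrative sketch only: dispatching simple natural-language-style commands
# (sum / sort / select) against an inferred table header.
import pandas as pd

table = pd.DataFrame({"Product": ["A", "B", "C"],
                      "Price":   [10.0, 25.0, 7.5],
                      "Stock":   [3, 0, 12]})

def run_command(df, command):
    tokens = command.lower().split()
    # Resolve a column name mentioned anywhere in the command.
    column = next((c for c in df.columns if c.lower() in tokens), None)
    if column is None:
        raise ValueError("no known column mentioned")
    if "sum" in tokens:
        return df[column].sum()
    if "sort" in tokens:
        return df.sort_values(column)
    if "select" in tokens:
        return df[column]
    raise ValueError("unsupported operation")

print(run_command(table, "sum of price"))       # -> 42.5
print(run_command(table, "sort by stock"))      # rows ordered by Stock
```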
954 To Cloudify or Not to Cloudify
Authors: Laila Yasir Al-Harthy, Ali H. Al-Badi
Abstract:
As an emerging business model, cloud computing has been introduced to satisfy the needs of organizations and to position Information Technology as a utility. The shift to the cloud has changed the way Information Technology departments are traditionally managed and has raised many concerns for both the public and private sectors.
The purpose of this study is to investigate the possibility of cloud computing services replacing services provided traditionally by IT departments. Therefore, it aims to 1) explore whether organizations in Oman are ready to move to the cloud; 2) identify the deciding factors leading to the adoption or rejection of cloud computing services in Oman; and 3) provide two case studies, one for a successful Cloud provider and another for a successful adopter.
This paper is based on multiple research methods, including a set of interviews with cloud service providers and current cloud users in Oman, and questionnaires collected from experts in the field and from potential users of cloud services.
Despite the limited bandwidth capacity and Internet coverage in Oman, which create a challenge in adopting the cloud, it was found that many information technology professionals are willing to move to the cloud, while a few are resistant to change.
The recent launch of a new Omani cloud service provider and the entrance of other international cloud service providers in the Omani market make this research extremely valuable as it aims to provide real-life experience as well as two case studies on the successful provision of cloud services and the successful adoption of these services.
Keywords: Cloud computing, cloud deployment models, cloud service models and deciding factors.
953 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology
Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões
Abstract:
This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as the multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in the field and in industrial environments. Research should be directed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although fruits are frequently occluded by leaves regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show potential to satisfy this need while simultaneously providing spatial information, as time-of-flight sensors do. Light Detection and Ranging (LIDAR) technology also shows great potential, but it implies much higher costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor methods for on-tree fruit detection, the major challenge is to overcome the occlusion of fruits by leaves and branches. Hence, non-traditional sensors capable of providing some type of differentiation should be investigated.
Keywords: Fruit thinning, horticultural field, portable devices, sensor technologies.
952 Brief Review of the Self-Tightening, Left-Handed Thread
Authors: Robert S. Giachetti, Emanuele Grossi
Abstract:
Loosening of bolted joints in rotating machines can adversely affect their performance, cause mechanical damage, and lead to injuries. In this paper, two potential loosening phenomena in rotating applications are discussed. The first, ‘precession,’ is governed by thread/nut contact forces, while the second is based on inertial effects of the fastened assembly. These mechanisms are reviewed within the context of the historical usage of left-handed fasteners in rotating machines, which appears to be absent from the literature and common machine design texts. Historically, to prevent loosening of wheel nuts, vehicle manufacturers have used right-handed and left-handed threads on different sides of the vehicle, but most modern vehicles have abandoned this custom and only use right-handed, tapered lug nuts on all sides of the vehicle. Other classical machines such as the bicycle continue to use different-handed threads on each side, while machines such as bench grinders, circular saws and brush cutters still use left-handed threads to fasten rotating components. Despite the continued use of left-handed fasteners, the rationale and analysis of left-handed threads to mitigate self-loosening of fasteners in rotating applications is not commonly, if at all, discussed in the literature or design textbooks. Without scientific literature to support these design selections, these implementations may be the result of experimental findings or aged institutional knowledge. Based on a review of rotating applications, historical documents and mechanical design references, a formal study of the paradoxical nature of left-handed threads in various applications is merited.
Keywords: Rotating machinery, self-loosening fasteners, wheel fastening, vibration loosening.
951 Region Segmentation based on Gaussian Dirichlet Process Mixture Model and its Application to 3D Geometric Structure Detection
Authors: Jonghyun Park, Soonyoung Park, Sanggyun Kim, Wanhyun Cho, Sunworl Kim
Abstract:
In general, image-based 3D scenes can now be found in many popular vision systems, computer games and virtual reality tours. It is therefore important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that homogeneous regions in an image usually lie close together on a smooth region, and therefore the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method using a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method can label regions of the input image into coarse categories: “ground”, “sky”, and “vertical” for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
Keywords: Region segmentation, tensor voting, image-based 3D, geometric structure, Gaussian Dirichlet process mixture model
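As a generic illustration of Dirichlet-process-based region labeling (not the authors' tensor-voting pipeline), the sketch below clusters per-pixel features of a toy three-band scene with scikit-learn's variational Dirichlet process Gaussian mixture; the feature choice and component cap are assumptions.

```python
# Illustrative sketch only: clustering pixel features with a Dirichlet process
# Gaussian mixture as a stand-in for the region segmentation step above.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
h, w = 40, 60
# Toy "scene": bright sky band, mid-grey vertical band, dark ground band.
image = np.vstack([np.full((15, w), 0.9), np.full((15, w), 0.5), np.full((10, w), 0.1)])
image += 0.03 * rng.standard_normal(image.shape)

# Features per pixel: intensity and normalised row position (height cue).
rows = np.repeat(np.arange(h) / h, w)
features = np.column_stack([image.ravel(), rows])

dpgmm = BayesianGaussianMixture(
    n_components=8,                                     # upper bound; DP prunes extras
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(features)
labels = dpgmm.predict(features).reshape(h, w)
print("components actually used:", len(np.unique(labels)))
```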
950 The Effect of Treated Waste-Water on Compaction and Compression of Fine Soil
Authors: M. Attom, F. Abed, M. Elemam, M. Nazal, N. ElMessalami
Abstract:
The main objective of this paper is to study the effect of treated waste-water (TWW) on the compaction and compressibility properties of fine soil. Two types of fine soils (clayey soils) were selected for this study, classified as CH and CL soils. Compaction and compressibility properties such as optimum water content, maximum dry unit weight, consolidation index, swell index, maximum past pressure and volume change were evaluated using both tap water and treated waste-water. It was found that the use of treated waste-water affects all of these properties. The maximum dry unit weight increased for both soils, and the optimum water content decreased by as much as 13.6% for the highly plastic soil. The most significant effect was observed in the swell index and swelling pressure of the soils. The swell index decreased by as much as 42% and 33% for the highly plastic and low plasticity soils, respectively, when TWW was used. Additionally, the swelling pressure decreased by as much as 16% for both soil types. The results of this research point out that the use of treated waste-water has a positive effect on the compaction and compression properties of clay soil and show promise for the potential use of this water in engineering applications.
Keywords: Consolidation, Proctor compaction, swell index, treated waste-water, volume change.
949 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical pore properties of the wood images. This paper proposes a fuzzy-based feature extractor which mimics the experts' knowledge of wood texture to extract the properties of pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.
Keywords: Classification, fuzzy, inspection system, image analysis.
948 Optimal Efficiency Control of Pulse Width Modulation - Inverter Fed Motor Pump Drive Using Neural Network
Authors: O. S. Ebrahim, M. A. Badr, A. S. Elgendy, K. O. Shawky, P. K. Jain
Abstract:
This paper demonstrates an improved Loss Model Control (LMC) for a 3-phase induction motor (IM) driving a pump load. Compared with other power-loss reduction algorithms for IMs, the presented one has the advantages of fast and smooth flux adaptation, high accuracy, and versatile implementation. The performance of LMC depends mainly on the accuracy of modeling the motor drive and its losses. A loss model for the IM drive has been developed that considers the surplus power loss caused by inverter voltage harmonics using closed-form equations and also includes magnetic saturation. Further, an Artificial Neural Network (ANN) controller is synthesized and trained offline to determine the optimal flux level that achieves maximum drive efficiency. The drive's voltage and speed control loops are connected via the stator frequency to avoid the possibility of excessive magnetization. In addition, the resistance change due to temperature is considered by a first-order thermal model. The obtained thermal information enhances motor protection and control. Together, these have the potential of making the proposed algorithm reliable. Simulation and experimental studies are performed on a 5.5 kW test motor using the proposed control method. The test results are provided and compared with fixed-flux operation to validate the effectiveness.
Keywords: Artificial neural network, ANN, efficiency optimization, induction motor, IM, Pulse Width Modulated, PWM, harmonic losses.
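The loss-model idea of choosing the flux that balances load-dependent copper losses against flux-dependent iron losses can be sketched as below; the loss coefficients and the simple grid search are illustrative assumptions, not the paper's loss model or its ANN controller.

```python
# Illustrative sketch only: picking the flux level that minimises a toy
# copper-plus-iron loss model at a given load torque (made-up coefficients).
import numpy as np

K_CU = 2.0    # copper-loss coefficient  (loss ~ K_CU * (T / phi)^2)
K_FE = 5.0    # iron-loss coefficient    (loss ~ K_FE * phi^2)

def total_loss(phi, torque):
    return K_CU * (torque / phi) ** 2 + K_FE * phi ** 2

def optimal_flux(torque, phi_min=0.2, phi_max=1.0):
    """Search the admissible flux range for the loss-minimising level."""
    grid = np.linspace(phi_min, phi_max, 500)
    return grid[np.argmin(total_loss(grid, torque))]

for torque in (0.2, 0.5, 1.0):        # per-unit load torque
    phi = optimal_flux(torque)
    print(f"T = {torque:4.1f} pu -> flux ~ {phi:.2f} pu, "
          f"loss ~ {total_loss(phi, torque):.2f}")
```

In practice the ANN described above would be trained offline on the output of such a search so that the optimal flux can be read out quickly at run time.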
947 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition
Authors: Hazem M. El-Bakry
Abstract:
Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition. Each image is divided into small sub-images, and then each one is tested separately by using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when using faster neural processors and image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved. The effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is increased, as the normalization of weights is done offline.
Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.
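A minimal sketch of the underlying mechanism, cross-correlation computed in the frequency domain and applied per sub-image, is given below; the template, image and sub-image size are synthetic stand-ins rather than the paper's normalized fast neural processor.

```python
# Illustrative sketch only: frequency-domain cross-correlation of a character
# template against each sub-image of a decomposed input matrix.
import numpy as np

def xcorr_fft(image, template):
    """2-D circular cross-correlation via FFT (same size as `image`)."""
    T = np.zeros_like(image)
    T[:template.shape[0], :template.shape[1]] = template
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(T))))

rng = np.random.default_rng(0)
template = 0.5 + 0.5 * rng.random((8, 8))      # stand-in for a character pattern
image = rng.random((64, 64))
image[20:28, 36:44] = template                 # embed the "character"

# Divide-and-conquer: test each 32x32 sub-image separately.
best = None
for i in (0, 32):
    for j in (0, 32):
        sub = image[i:i + 32, j:j + 32]
        c = xcorr_fft(sub, template)
        k = np.unravel_index(np.argmax(c), c.shape)
        if best is None or c[k] > best[0]:
            best = (c[k], i + k[0], j + k[1])
print("detected top-left corner near:", best[1:])   # expect about (20, 36)
```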
946 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer has become a pressing concern in the field of medical science, and the widespread occurrence of such lesions is drastically affecting health and well-being across the global village. Methods: The extracted image of the skin tumor cannot be used directly for diagnosis, since the stored image contains irregularities around the region of interest. The approach therefore first locates the relevant portion of the extracted skin image. Image partitioning (segmentation) models are presented to sort out the disturbance in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA), and finally classification is performed between the training and test data to evaluate the image at large scale, which helps doctors make the right prediction. To improve on the existing system, we have set our objectives accordingly: the efficiency of the natural selection process and the enhanced histogram are essential in that respect. The GA is applied to reduce the false-positive rate while maintaining accuracy. Conclusions: The objective of this work is to improve effectiveness; the GA accomplishes this task by bringing down the false-positive rate. The mergeable portion of the work combines deep learning and medical image processing, which provides superior accuracy, and the proposed handling supports reusability without errors.
Keywords: Computer-aided system, detection, image segmentation, morphology.
945 Probabilistic Crash Prediction and Prevention of Vehicle Crash
Authors: Lavanya Annadi, Fahimeh Jafari
Abstract:
Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel, and equipment, but also the loss of life and property in traffic accidents on the road, delays in travel due to traffic congestion, and various other indirect costs. This research aims to predict the probability of vehicle crashes in the United States due to natural and structural factors using machine learning, excluding spontaneous factors such as overspeeding. These factors range from meteorological elements such as weather conditions, precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity, to human-made road structure components such as bumps, roundabouts, no-exit sections, turning loops, give-way signs, etc. The probabilities are categorized into ten distinct classes. All the predictions are based on multiclass classification techniques, which are supervised learning. This study considers all crashes in all states collected by the US government. The probability of a crash was determined by employing the multinomial expected value, and a classification label was assigned accordingly. We applied three classification models: multiclass logistic regression, Random Forest and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the part played by natural and structural factors in crashes. The paper also provides in-depth insights through exploratory data analysis.
Keywords: Road safety, crash prediction, exploratory analysis, machine learning.
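As an illustration of the multiclass pipeline described above (using the Random Forest variant rather than the best-performing XGBoost model), the sketch below trains a ten-class crash-probability classifier on synthetic weather and road-structure features; the data and feature names are assumptions, not the study's dataset.

```python
# Illustrative sketch only: a multiclass crash-probability classifier over
# weather and road-structure features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
data = pd.DataFrame({
    "visibility_km": rng.uniform(0.5, 10, n),
    "wind_speed":    rng.uniform(0, 80, n),
    "temperature":   rng.uniform(-10, 40, n),
    "humidity":      rng.uniform(10, 100, n),
    "bump":          rng.integers(0, 2, n),
    "roundabout":    rng.integers(0, 2, n),
})
# Synthetic severity class in 10 bins (0-9), vaguely driven by the features.
risk = (10 - data["visibility_km"]) + data["wind_speed"] / 10 + 3 * data["bump"]
data["crash_class"] = pd.qcut(risk, 10, labels=False)

X, y = data.drop(columns="crash_class"), data["crash_class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```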
944 Optimization and GIS-Based Intelligent Decision Support System for Urban Transportation Systems Analysis
Authors: Mohamad K. Hasan, Hameed Al-Qaheri
Abstract:
Optimization plays an important role in most real-world applications, supporting decision makers in taking the right decisions regarding the strategic directions and operations of the systems they manage. Traffic management and traffic congestion are major problems for which most decision-making authorities in cities around the world are seeking solutions. This review paper gives a full description of the traffic problem as part of the transportation planning process and presents a framework for urban transportation system analysis, in which the core of the system is a transportation network equilibrium model that is based on optimization techniques and that can also be used for evaluating an alternative solution, or a combination of alternative solutions, to traffic congestion. Different transportation network equilibrium models are reviewed, from the sequential approach to the multiclass model combining trip generation, trip distribution, modal split, trip assignment and departure time. A GIS-based intelligent decision support system framework for urban transportation system analysis is suggested for implementation, in which the selection of optimized alternative solutions, single or packaged, is based on an intelligent agent rather than a human being; this would reduce time and cost and eliminate the difficulty a human faces in finding the best solution to the traffic congestion problem.
Keywords: Multiclass simultaneous transportation equilibrium models, transportation planning, urban transportation systems analysis, intelligent decision support system.
943 Fuzzy Logic Based Improved Range Free Localization for Wireless Sensor Networks
Authors: Ashok Kumar, Vinod Kumar
Abstract:
Wireless Sensor Networks (WSNs) are used to monitor/observe vast inaccessible regions through the deployment of a large number of sensor nodes in the sensing area. For the majority of WSN applications, the collected data need to be combined with the geographic information of their origin to make them useful for the user; information received from remote Sensor Nodes (SNs) that are several hops away from the base station/sink is meaningless without knowledge of its source. In addition, the location information of SNs can also be used to propose/develop new network protocols for WSNs to improve their energy efficiency and lifetime. In this paper, range-free localization protocols for WSNs are proposed. The proposed protocols are based on the weighted centroid localization technique, where the edge weights of SNs are decided by utilizing fuzzy logic inference on the received signal strength and link quality between the nodes. The fuzzification is carried out using (i) Mamdani, (ii) Sugeno, and (iii) combined Mamdani-Sugeno fuzzy logic inference. Simulation results demonstrate that the proposed protocols provide better accuracy in node localization compared to conventional centroid-based localization protocols, despite the presence of unintentional interference from radio frequency (RF) sources operating in the same frequency band.
Keywords: localization, range free, received signal strength, link quality indicator, Mamdani fuzzy logic inference, Sugeno fuzzy logic inference.
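The weighted-centroid step can be sketched as follows; the anchor layout, the measurements and the crude RSSI/LQI scoring used here as a stand-in for the Mamdani/Sugeno inference are all invented for illustration.

```python
# Illustrative sketch only: weighted-centroid localization where each anchor's
# weight comes from a rough fuzzy-style score on RSSI and link quality (LQI).
import numpy as np

def weight(rssi_dbm, lqi):
    """Toy inference: stronger signal and better link -> larger weight."""
    rssi_score = np.clip((rssi_dbm + 100) / 50.0, 0.0, 1.0)   # -100..-50 dBm -> 0..1
    lqi_score = np.clip(lqi / 255.0, 0.0, 1.0)                # 0..255 -> 0..1
    return min(rssi_score, lqi_score) + 1e-6                  # AND-like combination

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
rssi = np.array([-55.0, -70.0, -85.0, -72.0])
lqi = np.array([240, 180, 90, 170])

w = np.array([weight(r, q) for r, q in zip(rssi, lqi)])
estimate = (w[:, None] * anchors).sum(axis=0) / w.sum()   # weighted centroid
print("estimated node position:", estimate)               # pulled toward anchor 0
```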
942 Roundabout Optimal Entry and Circulating Flow Induced by Road Hump
Authors: Amir Hossein Pakshir, A. Hossein Pour, N. Jahandar, Ali Paydar
Abstract:
Roundabouts work on the principle of circulating and entry flows, where the maximum entry flow rates depend largely on the circulating flow, bearing in mind that entry flows must give way to circulating flows. Where an existing roundabout has a road hump installed at an entry arm, it can be hypothesized that the kinematics of vehicles may prevent the entry arm from achieving optimum performance. Road humps are traffic calming devices placed across the road width solely as a speed reduction mechanism. They are the preferred traffic calming option in Malaysia and are often used on single and dual carriageway local routes. The speed limit on local routes is 30 mph (50 km/h). Road humps in their various forms achieved the biggest mean speed reduction (based on a mean speed before traffic calming of 30 mph) of up to 10 mph (16 km/h), according to the UK Department of Transport. The underlying aim of reduced speed should be to achieve a 'safe' distribution of speeds which reflects the function of the road and the impacts on the local community. Constraining the safe distribution of speeds may lead to poor driver timing and delayed reflex reactions that can cause accidents. Previous studies on road hump impact have focused mainly on speed reduction, traffic volume, noise and vibration, discomfort and delay from the use of road humps. This paper examines the optimal entry and circulating flow induced by road humps. Results show that roundabout entry and circulating flows perform better where there is no road hump at the entrance.
Keywords: Road hump, Roundabout, Speed Reduction
941 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique
Authors: Mohammad A. Khasawneh
Abstract:
Asphalt concrete pavements gradually lose their skid resistance, causing safety problems especially under wet conditions and at high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices were developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating the bituminous pavement surface characteristics in a practical and time-efficient test procedure.
The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.
Promising findings showed the possibility of using image analysis in lieu of friction and texture measurements, which are labor-intensive and variable in nature. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates provided solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Numbers (BPN), Polish Values (PV) and Mean Texture Depth (MTD) values.
Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.
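A minimal sketch of the kind of measurement implied above, an exposed-aggregate area fraction obtained by intensity thresholding and then correlated with friction numbers, is given below; both the images and the BPN values are synthetic stand-ins, not the study's data.

```python
# Illustrative sketch only: estimating an exposed-aggregate area fraction by
# simple intensity thresholding and checking its correlation with BPN.
import numpy as np

def aggregate_fraction(gray_image, threshold=0.55):
    """Fraction of pixels brighter than the threshold (exposed aggregate)."""
    return float((gray_image > threshold).mean())

rng = np.random.default_rng(0)
fractions = []
for k in range(5):                                 # five hypothetical specimens
    p_exposed = 0.2 + 0.12 * k                     # target exposed-aggregate share
    img = np.where(rng.random((100, 100)) < p_exposed,
                   rng.uniform(0.7, 1.0, (100, 100)),   # bright aggregate pixels
                   rng.uniform(0.0, 0.4, (100, 100)))   # dark binder pixels
    fractions.append(aggregate_fraction(img))

bpn = np.array([52, 58, 63, 69, 74])               # hypothetical British Pendulum Numbers
print("area fractions:", np.round(fractions, 3))
print("correlation with BPN:", np.corrcoef(fractions, bpn)[0, 1])
```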
940 Study on Compressive Strength and Setting Times of Fly Ash Concrete after Slump Recovery Using Superplasticizer
Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert
Abstract:
Fresh concrete has a dynamic property known as slump. The slump of concrete is designed to be compatible with the placing method. Due to the hydration reaction of cement, the slump of concrete is lost over time. Therefore, delayed concrete may be rejected because its slump is unacceptable. In order to recover the slump of delayed concrete, a second dose of superplasticizer (naphthalene-based, Type F) is added into the system; slump recovery can be achieved as long as the concrete has not yet set. However, adding superplasticizer as a solution to recover unusable slump-loss concrete may affect other concrete properties. Therefore, this paper observed the setting times and compressive strength of concrete after being re-dosed with chemical admixture Type F (superplasticizer, naphthalene-based) for slump recovery. The concrete used in this study was fly ash concrete with fly ash replacement of 0%, 30% and 50%, respectively. The concrete mix designed for the test specimens was prepared with a paste content (ratio of volume of cement to volume of void in the aggregate) of 1.2 and 1.3, a water-to-binder ratio (w/b) range of 0.3 to 0.58, and an initial dose of superplasticizer (SP) ranging from 0.5 to 1.6%. The setting times of concrete were tested both before and after re-dosing, with different amounts of the second dose and different times of dosing. The research concluded that the addition of a second dose of superplasticizer increases both initial and final setting times according to the dosage added. As for fly ash concrete, the prolongation effect was higher as the fly ash replacement increased. The prolongation effect can reach a maximum of about 4 hours. In the case of compressive strength, the re-dosed concrete shows strength fluctuation within an acceptable range of ±10%.
Keywords: Compressive strength, Fly ash concrete, Second dose of superplasticizer, Slump recovery, Setting times.
939 A Study on Architectural Characteristics of Traditional Iranian Ordinary Houses in Mashhad, Iran
Authors: Rana Daneshvar Salehi
Abstract:
In many Iranian cities, including Mashhad, the capital of Razavi Khorasan Province, ordinary examples of domestic architecture on a small scale are not considered heritage, even though the principles of house formation are respected in all traditional Iranian houses, from modest to grand ones. During the past decade, Mashhad has lost its identity and has become a modern city. Its designation as the capital of Islamic culture in 2017 by ISESCO, and the consequent push for new developments and transfiguration, has caused the demolition of a large number of traditional modest dwellings. For this reason, the present paper aims to introduce three undiscovered houses with historical and monumental value located in the oldest neighborhoods of Mashhad which have been neglected in the cultural heritage field. The preliminary phase of this approach is a measured survey to identify the significant characteristics of the selected dwellings and understand the challenges, focusing on building form, orientation, room function, space proportion and details of ornamental elements. A comparison between the case studies and the wealthy domestic buildings shows that a house belonging to inhabitants with an average income can exhibit the same accurate, regular, harmonic and proportionate design found in the great mansions. It reveals that an ordinary traditional house can be regarded as a valuable construction not only for its historical characteristics but also for its aesthetic and architectural features, and that such recognition could prevent further destruction in the future.
Keywords: Traditional ordinary house, architectural characteristic, proportion, heritage.
938 Packet Forwarding with Multiprotocol Label Switching
Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar
Abstract:
MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at a high rate of speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header. The router independently determines the next hop for the packet using the destination IP address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, based on their associated FECs, by swapping the labels at routers in the core of the network, called label switch routers. Simply swapping the label instead of referencing the IP header of the packet in the routing table at each hop provides a more efficient manner of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results showing the performance comparison of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry.
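The label-swapping step can be sketched with a toy incoming label map (ILM) whose entries play the role of next hop label forwarding entries (NHLFEs); the labels, operations and interface names below are invented for illustration.

```python
# Illustrative sketch only: a per-hop label-swap lookup in the spirit of the
# ILM/NHLFE tables described above.
# Incoming Label Map: incoming label -> Next Hop Label Forwarding Entry.
ILM = {
    100: {"out_label": 200, "out_interface": "eth1", "op": "swap"},
    200: {"out_label": 300, "out_interface": "eth2", "op": "swap"},
    300: {"out_label": None, "out_interface": "eth3", "op": "pop"},  # egress edge router
}

def forward(packet):
    """Forward a labeled packet one hop by consulting the ILM only."""
    entry = ILM[packet["label"]]
    if entry["op"] == "swap":
        packet["label"] = entry["out_label"]     # swap; the IP header is never inspected
    else:
        packet.pop("label")                      # pop the label at the egress edge
    packet["egress"] = entry["out_interface"]
    return packet

pkt = {"label": 100, "payload": "ip-packet"}
print(forward(pkt))     # label swapped 100 -> 200, sent out eth1
```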
937 Simulating Dynamics of Thoracolumbar Spine Derived from Life MOD under Haptic Forces
Authors: K. T. Huynh, I. Gibson, W. F. Lu, B. N. Jagdish
Abstract:
In this paper, the construction of a detailed spine model is presented using the LifeMOD Biomechanics Modeler. The detailed spine model is obtained by refining the spine segments in the cervical, thoracic and lumbar regions into individual vertebra segments, using bushing elements to represent the intervertebral discs, and building the various ligamentous soft tissues between vertebrae. In the sagittal plane of the spine, a constant force is applied from posterior to anterior during simulation to determine the dynamic characteristics of the spine. The force magnitude is gradually increased in subsequent simulations. Based on these recorded dynamic properties, graphs of the displacement-force relationships are established as polynomial functions by using the least-squares method and imported into a haptic-integrated graphic environment. A thoracolumbar spine model with the complex geometry of vertebrae, digitized from a resin spine prototype, is utilized in this environment. By using the haptic technique, surgeons can touch as well as apply forces to the spine model through haptic devices to observe the locomotion of the spine, which is computed from the displacement-force relationship graphs. The current study provides a preliminary picture of our ongoing work towards building and simulating bio-fidelity scoliotic spine models in a haptic-integrated graphic environment whose dynamic properties are obtained from LifeMOD. These models can be helpful for surgeons to examine the kinematic behavior of scoliotic spines and to propose possible surgical plans before spine correction operations.
Keywords: Haptic interface, LifeMOD, spine modeling.
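The displacement-force fitting step can be sketched as below: a least-squares polynomial fitted to simulated pairs, which a haptic loop can then evaluate cheaply; the numerical values are invented, not LifeMOD output.

```python
# Illustrative sketch only: least-squares polynomial fit of displacement-force
# pairs so a haptic update loop can evaluate the relationship directly.
import numpy as np

force = np.array([0, 5, 10, 15, 20, 25, 30])                    # N, posterior-anterior
displacement = np.array([0.0, 1.2, 2.1, 2.8, 3.3, 3.7, 4.0])    # mm, from simulation

coeffs = np.polyfit(force, displacement, deg=3)                  # least-squares cubic fit
poly = np.poly1d(coeffs)

# Inside a haptic loop, the displacement for the current probe force is read
# from the fitted polynomial instead of re-running the dynamic simulation.
print("displacement at 12 N:", round(float(poly(12.0)), 3), "mm")
```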
936 An Energy Aware Data Aggregation in Wireless Sensor Network Using Connected Dominant Set
Authors: M. Santhalakshmi, P Suganthi
Abstract:
Wireless Sensor Networks (WSNs) have many advantages. Their deployment is easier and faster than that of wired sensor networks or other wireless networks, as they do not need fixed infrastructure. Nodes are partitioned into many small groups named clusters to aggregate data through network organization. WSN clustering guarantees the performance achievement of sensor nodes. Sensor node energy consumption is reduced by eliminating redundant energy use and balancing energy use across the sensor nodes of the network. The aim of such clustering protocols is to prolong network life. Low Energy Adaptive Clustering Hierarchy (LEACH) is a popular protocol in WSNs: a clustering protocol in which random rotation of local cluster heads is utilized in order to distribute the energy load among all sensor nodes in the network. This paper proposes Connected Dominating Set (CDS) based cluster formation. CDS-based aggregation is a promising approach for reducing routing overhead, since messages are transmitted only within the virtual backbone formed by the CDS, and data aggregation lowers the ratio of responding hosts to the hosts existing in the virtual backbone. The CDS approach tries to increase network lifetime by considering parameters such as sensor lifetime and remaining and consumed energy, in order to achieve near-optimal data aggregation within the network. Experimental results show that CDS outperforms LEACH with regard to the number of cluster formations, average packet loss rate, average end-to-end delay, network lifetime, and remaining energy.
Keywords: Wireless sensor network, connected dominating set, clustering, data aggregation.
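A toy greedy construction of a connected dominating set, of the kind used here as the aggregation backbone, is sketched below; the topology and the greedy rule are illustrative assumptions rather than the paper's algorithm.

```python
# Illustrative sketch only: a simple greedy construction of a connected
# dominating set (CDS) over a made-up adjacency list.
def greedy_cds(adj):
    """Grow a CDS from the highest-degree node, always staying connected."""
    start = max(adj, key=lambda n: len(adj[n]))
    cds, covered = {start}, {start} | set(adj[start])
    while covered != set(adj):
        # Candidates adjacent to the current backbone keep the CDS connected.
        frontier = {n for c in cds for n in adj[c] if n not in cds}
        best = max(frontier, key=lambda n: len(set(adj[n]) - covered))
        cds.add(best)
        covered |= {best} | set(adj[best])
    return cds

adjacency = {
    "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "E"],
    "D": ["B", "F"], "E": ["C", "F"], "F": ["D", "E", "G"], "G": ["F"],
}
print("backbone nodes:", greedy_cds(adjacency))
```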
935 Case Study Approach Using Scenario Analysis to Analyze Unabsorbed Head Office Overheads
Authors: K. C. Iyer, T. Gupta, Y. M. Bindal
Abstract:
Head office overhead (HOOH) is an indirect cost and is recovered through individual project billings by the contractor. Delay in a project impacts the absorption of the HOOH cost allocated to that particular project and thus diminishes the expected profit of the contractor. This unabsorbed HOOH cost is later claimed by contractors as damages. The subjective nature of the available formulae for computing unabsorbed HOOH is the difficulty that contractors and owners face, and they therefore dispute it. The paper attempts to bring together the rationale of the various HOOH formulae by gathering a contractor's HOOH cost data on all of its projects, using a case study approach, and comparing the variations in HOOH values using scenario analysis. The case study approach uses project data collected from four construction projects of a contractor in India to calculate unabsorbed HOOH costs from the various available formulae. Scenario analysis provides further variations in HOOH values after considering two independent situations, mainly scope changes and new projects during the delay period. Interestingly, one of the findings of this study reveals that, in spite of HOOH being absorbed by additional works available during the period of delay, a few formulae depict an increase in the value of unabsorbed HOOH, neglecting any absorption by the increase in scope. This indicates that these formulae are inappropriate for use in the case of a change to the scope of work. The results of this study can help both parties decide on an appropriate formula more objectively, considering the events causing the delay on a project and the contractor's position in respect of obtaining new projects.
Keywords: Absorbed and unabsorbed overheads, head office overheads, scenario analysis, scope variation
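For orientation only, one commonly cited expression of the kind the paper compares is the Hudson-type formula below; it is shown as a generic illustration and is not necessarily one of the formulae evaluated in the study.

```latex
% Hudson-type unabsorbed head office overhead claim (generic illustration only).
\text{Unabsorbed HOOH} =
  \frac{\text{HO overhead and profit percentage}}{100}
  \times \frac{\text{contract sum}}{\text{contract period}}
  \times \text{period of delay}
```

Formulae of this type allocate a daily head-office rate to the project and multiply it by the compensable delay, which is exactly the step that additional work during the delay period can render inappropriate, as the study observes.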
934 Analysis of the Fire Hazard Posed by Petrol Stations in Stellenbosch and the Degree of Risk Acknowledgement in Land-Use Planning
Authors: K. Qonono
Abstract:
Despite the significance and economic benefits of petrol stations in South Africa, these still pose a huge risk of fire and explosion threatening public safety. This research paper examines the extent to which land-use planning in Stellenbosch, South Africa, considers the fire risk posed by petrol stations and the implications for public safety as well as preparedness for large fires or explosions. To achieve this, the research identified the land-use types around petrol stations in Stellenbosch and determined the extent to which their locations comply with the local, national, and international land-use planning regulations. A mixed research method consisting of the collection and analysis of geospatial data and qualitative data was applied, where petrol stations within a six-kilometre radius of Stellenbosch’s town centre were utilised as study sites. The research examined the risk of fires/explosions at these petrol stations. The research investigated Stellenbosch Municipality’s institutional preparedness to respond in the event of a fire/explosion at these petrol stations. The research observed that siting of petrol stations does not comply with local, national, and international good practices, thus exposing the surrounding developments to fires and explosions. Land-use planning practice does not consider hazards created by petrol stations. Despite the potential for major fires at petrol stations, Stellenbosch Municipality’s level of preparedness to respond to petrol station fires appears low due to the prioritisation of more frequent events.
Keywords: Petrol stations, technological hazard, DRR, land-use planning, risk analysis.
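As a generic illustration of the buffer-based compliance check implied by the geospatial analysis above, the sketch below tests which mapped land uses fall inside a separation buffer around a petrol station; the coordinates, buffer distance and land uses are invented.

```python
# Illustrative sketch only: checking which land uses lie inside a separation
# buffer around a petrol station (projected coordinates in metres).
from shapely.geometry import Point

petrol_station = Point(0, 0)
buffer_100m = petrol_station.buffer(100)     # hypothetical separation distance

land_uses = {
    "school":      Point(80, 20),
    "clinic":      Point(250, 40),
    "residential": Point(60, -70),
    "warehouse":   Point(400, 300),
}

exposed = [name for name, pt in land_uses.items() if buffer_100m.contains(pt)]
print("land uses inside the buffer:", exposed)   # e.g. ['school', 'residential']
```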
933 Electricity Load Modeling: An Application to Italian Market
Authors: Giovanni Masala, Stefania Marica
Abstract:
Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values.
Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression.
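The deterministic part of such a model (trend plus daily and weekly Fourier terms) can be sketched with an ordinary least-squares fit, leaving the residuals for an ARMA-GARCH stage; the synthetic series and harmonic choice below are assumptions, not the Italian market data.

```python
# Illustrative sketch only: fitting trend + daily/weekly Fourier terms to an
# hourly load series by least squares; the residuals would feed ARMA-GARCH.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 8)                        # eight weeks of hourly data
load = (1000 + 0.05 * hours
        + 120 * np.sin(2 * np.pi * hours / 24)       # daily cycle
        + 60 * np.sin(2 * np.pi * hours / (24 * 7))  # weekly cycle
        + 20 * rng.standard_normal(hours.size))

def design(t):
    """Regression matrix: constant, trend, daily and weekly harmonics."""
    cols = [np.ones_like(t, dtype=float), t]
    for period in (24.0, 24.0 * 7):
        cols += [np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
    return np.column_stack(cols)

X = design(hours)
coeffs, *_ = np.linalg.lstsq(X, load, rcond=None)
residuals = load - X @ coeffs                         # stochastic part of the model
print("fitted trend per hour:", round(coeffs[1], 4))
print("residual std:", round(residuals.std(), 2))
```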
932 An Analysis of Collapse Mechanism of Thin-Walled Circular Tubes Subjected to Bending
Authors: Somya Poonaya, Chawalit Thinvongpituk, Umphisak Teeboonma
Abstract:
Circular tubes have been widely used as structural members in engineering applications. Therefore, their collapse behavior has been studied for many decades, focusing on energy absorption characteristics. In order to predict the collapse behavior of members, one could rely on the use of finite element codes or experiments. These tools are helpful and highly accurate, but they are costly and require extensive running time. Therefore, an approximate model of the tube collapse mechanism is an alternative for the early design stage. This paper aims to develop a closed-form solution for a thin-walled circular tube subjected to bending. It extends the model of Elchalakani et al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the rate of energy dissipation of the rolling hinge in the circumferential direction. The 3-D geometrical collapse mechanism was analyzed by adding oblique hinge lines along the tube axis within the length of the plastically deforming zone. The model is based on the principle of energy rate conservation. The rates of internal energy dissipation were therefore calculated for each hinge line, defined in terms of the velocity field. Inextensional deformation and perfectly plastic material behavior were assumed in the derivation of the deformation energy rate. The analytical results were compared with experimental results. The experiments were conducted with a number of tubes having various D/t ratios. Good agreement between analysis and experiment was achieved.
Keywords: Bending, Circular tube, Energy, Mechanism.