Search results for: estimate speed

344 Improved Fuzzy Neural Modeling for Underwater Vehicles

Authors: O. Hassanein, Sreenatha G. Anavatti, Tapabrata Ray

Abstract:

The dynamics of Autonomous Underwater Vehicles (AUVs) are highly nonlinear and time varying, and the hydrodynamic coefficients of the vehicles are difficult to estimate accurately because these coefficients vary with navigation conditions and external disturbances. This study presents on-line system identification of AUV dynamics to obtain the coupled nonlinear dynamic model of the AUV as a black box. This black box has an input-output relationship based upon on-line adaptive fuzzy model and adaptive neural fuzzy network (ANFN) model techniques, which overcome uncertain external disturbances and the difficulty of modelling the hydrodynamic forces of AUVs without resorting to a mathematical model with estimated hydrodynamic parameters. The models' parameters are adapted by the back propagation algorithm based upon the error between the identified model and the actual output of the plant. The proposed ANFN model adopts a functional link neural network (FLNN) as the consequent part of the fuzzy rules; thus, the consequent part of the ANFN model is a nonlinear combination of input variables. A fuzzy control system is applied to guide and control the AUV using both the adaptive models and the mathematical model. Simulation results show the superiority of the proposed ANFN model in accurately tracking the behavior of the AUV even in the presence of noise and disturbance.
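
As a rough illustration of the on-line identification idea (not the authors' implementation), the sketch below adapts a small one-hidden-layer network by back propagation on the error between the identified model and a toy "plant"; all signals, dimensions and the plant itself are invented:

    import numpy as np

    # Minimal on-line identification sketch (illustrative only): a one-hidden-layer
    # network adapts its weights by back propagation on the instantaneous error
    # between the model output and the plant output, as described in the abstract.
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.5, size=(8, 2))   # hidden weights, 2 inputs -> 8 neurons
    W2 = rng.normal(scale=0.5, size=(1, 8))   # output weights
    eta = 0.05                                # learning rate

    def plant(u, y_prev):
        # toy nonlinear, time-varying "black box" standing in for the AUV dynamics
        return 0.6 * y_prev + 0.3 * np.tanh(u) + 0.05 * u * y_prev

    y = 0.0
    for k in range(2000):
        u = np.sin(0.05 * k)                  # excitation input
        x = np.array([u, y])                  # regressor: current input and past output
        h = np.tanh(W1 @ x)                   # hidden layer
        y_hat = float(W2 @ h)                 # identified model output
        y = plant(u, y)                       # actual plant output
        e = y - y_hat                         # identification error
        # back propagation of the error through the two layers
        W2 += eta * e * h[np.newaxis, :]
        delta_h = (W2.ravel() * e) * (1.0 - h**2)
        W1 += eta * np.outer(delta_h, x)

    print("final one-step identification error:", abs(e))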

Keywords: AUV, AUV dynamic model, fuzzy control, fuzzy modelling, adaptive fuzzy control, back propagation, system identification, neural fuzzy model, FLNN.

343 Development of UiTM Robotic Prosthetic Hand

Authors: M. Amlie A. Kasim, Ahsana Aqilah, Ahmed Jaffar, Cheng Yee Low, Roseleena Jaafar, M. Saiful Bahari, Armansyah

Abstract:

The study of human hand morphology reveals that developing an artificial hand with the capabilities of the human hand is an extremely challenging task. This paper presents the development of a robotic prosthetic hand focusing on the improvement of a tendon-driven mechanism towards a biomimetic prosthetic hand. The design of this prosthetic hand is geared towards achieving a high level of dexterity and anthropomorphism by means of a new hybrid mechanism that integrates a miniature motor-driven actuation mechanism, a Shape Memory Alloy actuated mechanism and a passive mechanical linkage. The synergy of these actuators enables flexion-extension movement at each of the finger joints within limited size, shape and weight constraints. Tactile sensors are integrated on the fingertips and the finger phalanges. This prosthetic hand is developed with an exact size ratio that mimics a biological hand. Its behavior resembles the human counterpart in terms of working envelope, speed and torque, and it thus reproduces both the key physical features and the grasping functionality of an adult hand.

Keywords: Prosthetic hand, Biomimetic actuation, Shape Memory Alloy, Tactile sensing.

342 Neural Network Evaluation of FRP Strengthened RC Buildings Subjected to Near-Fault Ground Motions having Fling Step

Authors: Alireza Mortezaei, Kimia Mortezaei

Abstract:

Recordings from recent earthquakes have provided evidence that ground motions in the near field of a rupturing fault differ from ordinary ground motions, as they can contain a large energy, or "directivity", pulse. This pulse can cause considerable damage during an earthquake, especially to structures with natural periods close to those of the pulse. Failures of modern engineered structures observed within the near-fault region in recent earthquakes have revealed the vulnerability of existing RC buildings against pulse-type ground motions. This may be due to the fact that these modern structures had been designed primarily using the design spectra of available standards, which have been developed using stochastic processes with relatively long duration that characterize more distant ground motions. Many recently designed and constructed buildings may therefore require strengthening in order to perform well when subjected to near-fault ground motions. Fiber Reinforced Polymers are considered to be a viable alternative, due to their relatively easy and quick installation, low life cycle costs and zero maintenance requirements. The objective of this paper is to investigate the adequacy of Artificial Neural Networks (ANN) to determine the three-dimensional dynamic response of FRP-strengthened RC buildings under near-fault ground motions. For this purpose, one ANN model is proposed to estimate the base shear force, base bending moments and roof displacement of buildings in two directions. A training set of 168 buildings and a validation set of 21 buildings are produced from finite element analysis results of the dynamic response of RC buildings under near-fault earthquakes. It is demonstrated that the neural network based approach is highly successful in determining the response.
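
As a rough illustration of such a response-prediction network (not the authors' model or data), a minimal multilayer-perceptron regression sketch with an invented 168/21 split might look like this:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Illustrative sketch only: a multilayer perceptron mapping ground-motion and
    # building descriptors to response quantities (base shear, base moment, roof
    # displacement), mirroring the 168-training / 21-validation split in the abstract.
    # The synthetic data and feature meanings are assumptions, not the authors' dataset.
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(189, 5))            # e.g. PGV, pulse period, height, FRP ratio, ...
    Y = np.column_stack([                     # fake "FEA" responses for illustration
        2.0 * X[:, 0] + X[:, 2],              # base shear
        1.5 * X[:, 0] * X[:, 2],              # base bending moment
        0.8 * X[:, 1] + 0.3 * X[:, 3],        # roof displacement
    ])
    X_train, Y_train = X[:168], Y[:168]
    X_val, Y_val = X[168:], Y[168:]

    ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    ann.fit(X_train, Y_train)
    print("validation R^2:", ann.score(X_val, Y_val))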

Keywords: Seismic evaluation, FRP, neural network, near-fault ground motion

341 Effect of Stitching Pattern on Composite Tubular Structures Subjected to Quasi-Static Crushing

Authors: Ali Rabiee, Hessam Ghasemnejad

Abstract:

An extensive experimental investigation of the effect of stitching pattern on tubular composite structures was conducted. The effect of through-thickness stitching reinforcement using glass flux yarn on the energy absorption of fiber-reinforced polymer (FRP) tubes was investigated under high-speed axial loading conditions. Keeping the mass of the structure at 125 grams and applying different stitching patterns at various locations in theory enables better energy absorption and also enables control over the shape of the force-crush distance curve. The study compares a simple non-stitched absorber with single- and multi-location stitching and examines the effect on energy absorption capability. The locations of the reinforcements are 10 mm, 20 mm, 30 mm, 10-20 mm, 10-30 mm, 20-30 mm, 10-20-30 mm and 10-15-20-25-30-35 mm from the top of the specimen. The through-thickness reinforcements increased the energy absorption capability and the crushing load. The significance of this is that as the stitching locations are placed closer together, the crushing load increases and, consequently, the energy absorption capability also increases. Applying stitching therefore improves the mean crushing force and allows the behaviour of the force-crush distance curve to be controlled.
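
For reference, the quantities compared in such crush tests follow directly from the force-crush distance curve; the numbers below are invented purely to show the arithmetic:

    import numpy as np

    # Illustrative calculation with made-up data: the absorbed energy is the area under
    # the force-crush distance curve, the mean crushing force is that energy divided by
    # the crush distance, and the specific energy absorption (SEA) divides by the mass.
    crush = np.linspace(0.0, 0.06, 50)                  # crush distance [m]
    force = 20e3 + 5e3 * np.sin(40.0 * crush)           # fake crushing-force history [N]

    energy = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(crush))   # trapezoid rule [J]
    mean_force = energy / crush[-1]                                    # mean crushing load [N]
    sea = energy / 0.125                                               # J/kg for the 125 g tube

    print(f"E = {energy:.0f} J, F_mean = {mean_force/1e3:.1f} kN, SEA = {sea:.0f} J/kg")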

Keywords: Through-thickness, stitching, reinforcement, tubular composite structures, energy absorption.

340 Effect of Coupling Media on Ultrasonic Pulse Velocity in Concrete: A Preliminary Investigation

Authors: Sura Al-Khafaji, Phil Purnell

Abstract:

Measurement of the ultrasonic pulse velocity (UPV) is an important tool in the diagnostic examination of concrete. In this method, piezoelectric transducers are normally held in direct contact with the concrete surface. The current study aims to test the hypothesis that a preferential coupling effect might exist, i.e. that the speed of sound measured depends on the couplant used. In this study, coupling media of varying acoustic impedance were placed between the transducers and concrete samples made with constant aggregate content but different compressive strengths. The preliminary results show that using coupling materials (both solid and a range of liquid substances) has an effect on the pulse velocity measured in a given concrete, and the effect varies depending on the material used. The UPV measurements with solid coupling were higher than those from liquid coupling at all strength levels. The tests using couplants generally recorded lower UPV values than the conventional test, except when a carbon fiber composite was used, which returned higher values. Analysis of variance (ANOVA) was performed to confirm that there are statistically significant differences between the measurements recorded using a conventional system and a coupled system.
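
For reference, UPV is simply the path length divided by the measured transit time, and the ANOVA step can be illustrated as follows (couplant names and transit times are invented):

    import numpy as np
    from scipy import stats

    # Illustrative sketch (made-up numbers): UPV is path length divided by transit time,
    # and a one-way ANOVA checks whether couplant choice changes the measured velocity.
    path_length = 0.150                                   # m, transducer spacing
    transit_us = {                                        # transit times [microseconds]
        "direct contact": [34.1, 34.4, 33.9, 34.2],
        "grease":         [34.8, 35.0, 34.7, 34.9],
        "carbon fibre":   [33.5, 33.7, 33.6, 33.4],
    }
    upv = {k: path_length / (np.array(v) * 1e-6) for k, v in transit_us.items()}
    for k, v in upv.items():
        print(f"{k:15s} mean UPV = {v.mean():.0f} m/s")

    f_stat, p_value = stats.f_oneway(*upv.values())       # one-way ANOVA across couplants
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")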

Keywords: Compressive strength, coupling effect, statistical analysis, ultrasonic.

339 Neural Networks for Distinguishing the Performance of Two Hip Joint Implants on the Basis of Hip Implant Side and Ground Reaction Force

Authors: L. Parisi

Abstract:

In this research work, neural networks were applied to classify two types of hip joint implants based on the hip joint implant side speed and the three components of the ground reaction force. Walking gait at normal velocity was used as the test condition and was carried out with each of the two hip joint implants assessed. The kinetic temporal changes of the ground reaction forces were considered in the first approach but discarded in the second. Ground reaction force components were obtained from eighteen patients under this gait condition, half of whom had a hip implant of type I-II, whilst the other half had the hip implant defined as type III by Orthoload®. After pre-processing the raw gait kinetic data and selecting the time frames needed for the analysis, the ground reaction force components were used to train an MLP neural network, which learnt to distinguish the two hip joint implants under the abovementioned condition. After training, unknown hip implant side and ground reaction force components were presented to the neural networks, which assigned those features to the right class with reasonably high accuracy for both the type I-II and the type III implants. The results suggest that neural networks could be successfully applied in the performance assessment of hip joint implants.

Keywords: Kinematic gait data, Neural networks, Hip joint implant, Hip arthroplasty, Rehabilitation Engineering.

338 The Effect of Energy Consumption and Losses on the Nigerian Manufacturing Sector: Evidence from the ARDL Approach

Authors: Okezie A. Ihugba

Abstract:

The bounds testing ARDL (2, 2, 2, 2, 0) approach to cointegration was used in this study to investigate the effect of energy consumption and energy loss on Nigeria's manufacturing sector from 1981 to 2020. The model was created to determine the relationship between these variables while also accounting for interactions with control variables such as inflation and commercial bank loans to the manufacturing sector. When the dependent variables are energy consumption and energy loss, the bounds tests show that the variables of interest are cointegrated in the long run. Because electricity consumption is a critical factor in determining manufacturing value added in Nigeria, some intriguing observations were made. According to the findings, the relationship between the log of electricity consumption (LELC) and the log of manufacturing value added (LMVA) is statistically significant, and electricity consumption reduces manufacturing value added. The target variable (energy loss) is statistically significant and has a positive sign: in Nigeria, a 1% reduction in energy loss increases manufacturing value added by 36% in the first lag and 35% in the second. The study recommends that the government speed up the ongoing renovation of existing power plants across the country, as well as the construction of new gas-fired power plants. This would address a number of issues, including the overpricing of electricity as a result of grid failure.
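
The core of an ARDL regression is ordinary least squares on lagged variables; the sketch below illustrates the idea for a two-variable ARDL(2, 2) with simulated series (it is not the study's specification, which includes further regressors and a bounds test):

    import numpy as np

    # Minimal sketch of the idea behind an ARDL(2, 2) regression (illustrative only):
    # log manufacturing value added regressed on its own lags and on current and lagged
    # log electricity consumption, estimated by ordinary least squares on simulated data.
    rng = np.random.default_rng(2)
    n = 40
    lelc = np.cumsum(rng.normal(0.02, 0.05, n))           # simulated log electricity consumption
    lmva = 0.5 + 0.3 * lelc + rng.normal(0, 0.05, n)      # simulated log manufacturing value added

    def lag(x, k):
        return x[2 - k:n - k]                             # aligns lag k for observations t = 2..n-1

    y = lmva[2:]
    X = np.column_stack([
        np.ones(n - 2),
        lag(lmva, 1), lag(lmva, 2),                       # autoregressive terms
        lag(lelc, 0), lag(lelc, 1), lag(lelc, 2),         # distributed-lag terms
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    long_run = beta[3:].sum() / (1.0 - beta[1] - beta[2]) # implied long-run effect of LELC
    print("coefficients:", np.round(beta, 3))
    print("long-run effect of LELC on LMVA:", round(long_run, 3))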

Keywords: ARDL, cointegration, Nigeria's manufacturing, electricity.

337 Advantages of Vibration in the GMAW Process for Improving the Quality and Mechanical Properties

Authors: C. A. C. Castro, D. C. Urashima, E. P. Silva, P. M. L.Silva

Abstract:

Since 1920, industry has almost completely replaced riveting techniques with permanent welded joints in the production of structures and the manufacture of other products. Arc welding is the most widely used welding process in industry. It is accomplished by the heat of an electric arc, which melts the base metal while molten metal droplets are transferred through the arc to the weld pool, protected from the atmosphere by a gas curtain. The GMAW (gas metal arc welding) process is influenced by variables such as current, polarity, welding speed, electrode extension, position and moving direction, type of joint, and the welder's ability, among others. Knowledge and control of these variables are essential for obtaining welds of satisfactory quality, since the variables are interconnected and changes in one of them require changes in one or more of the others to produce the desired results. The optimum values are affected by the type of base metal, the electrode composition, the welding position and the quality requirements. This paper therefore proposes a new methodology that adds vibration as a variable, through a mechanism developed for GMAW welding, in order to improve the mechanical and metallurgical properties without affecting the ability of the welder, while enabling repeatability of the welds made. For confirmation, metallographic analyses and mechanical tests were performed.

Keywords: HAZ, GMAW, vibration, welding.

336 A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm

Authors: Daliyah S. Aljutaili, Redna A. Almutlaq, Suha A. Alharbi, Dina M. Ibrahim

Abstract:

All currencies around the world look very different from each other; for instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One of the phases of a currency recognition architecture is feature detection and description. There are many algorithms that are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. Our proposed SR-SIFT algorithm overcomes the problems of both the SIFT and SURF algorithms. The proposed algorithm aims to speed up the SIFT feature detection algorithm while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimum numbers of best key points, and increases the distribution of the best key points over the surface of the currency. Furthermore, the proposed algorithm increases the accuracy of the true best point distribution inside the currency edge compared to the other two algorithms.
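
As a point of reference for the response-time comparison (not a reproduction of SR-SIFT), the snippet below times OpenCV's SIFT detection stage and keeps the strongest key points; SURF is omitted because it requires the non-free opencv-contrib build:

    import time
    import cv2
    import numpy as np

    # Illustrative sketch only: time the SIFT key point detection stage and keep the k
    # strongest ("best") key points by response, the kind of measurement the abstract
    # compares across SIFT, SURF and SR-SIFT. The banknote image is random noise here.
    banknote = np.random.default_rng(3).integers(0, 256, (400, 800)).astype(np.uint8)

    sift = cv2.SIFT_create()
    t0 = time.perf_counter()
    keypoints = sift.detect(banknote, None)
    elapsed = time.perf_counter() - t0

    k = 50
    best = sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:k]
    print(f"detected {len(keypoints)} key points in {elapsed*1e3:.1f} ms, kept top {len(best)}")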

Keywords: Currency recognition, feature detection and description, SIFT algorithm, SURF algorithm, speeded up and robust features.

335 Dynamic Routing to Multiple Destinations in IP Networks using Hybrid Genetic Algorithm (DRHGA)

Authors: K. Vijayalakshmi, S. Radhakrishnan

Abstract:

In this paper, we propose a novel dynamic least-cost multicast routing protocol using a hybrid genetic algorithm for IP networks. Our protocol finds the multicast tree with minimum cost subject to delay, degree, and bandwidth constraints. The proposed protocol has the following features: i. a heuristic local search function has been devised and embedded in the normal genetic operation to increase the speed and to obtain the optimized tree; ii. it efficiently handles the dynamic situations that arise due to either changes in the multicast group membership or node/link failures; iii. two different crossover and mutation probabilities have been used to maintain the diversity of solutions and achieve quick convergence. The simulation results show that our proposed protocol generates dynamic multicast trees with lower cost. Results also show that the proposed algorithm has a better convergence rate, a better dynamic request success rate and less execution time than other existing algorithms. The effects of the degree and delay constraints on the multicast tree have also been analyzed in terms of search success rate.

Keywords: Dynamic Group membership change, Hybrid Genetic Algorithm, Link / node failure, QoS Parameters.

334 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System

Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain

Abstract:

Emission of carbon dioxide (CO2) has adversely affected the environment, and one of its major sources is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased the emission of CO2 into the environment. To reduce CO2 emission, a sustainable transportation system is required, in which smart parking is one of the important measures that needs to be established. To contribute to the issue of reducing CO2 emission, this research proposes a smart parking system. A cloud-based solution is provided to drivers, which automatically searches for and recommends the most preferred parking slots. To determine the preferences of the parking areas, the methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features and a list of parking areas with parking features are used with sorting, multi-level filtering, exploratory data analysis (EDA), the Analytical Hierarchy Process (AHP) and the weighted sum model (WSM) to rank the parking areas and recommend to the drivers the top-k most preferred parking areas. In the EDA process, "2020testcar-2020-03-03", a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology supersedes the conventional one in reducing the emission of CO2 into the atmosphere.
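
The final ranking step, the weighted sum model, can be sketched as follows; the features, weights and scores are invented, and the paper's full pipeline also applies sorting, multi-level filtering, EDA and AHP beforehand:

    # Minimal weighted sum model (WSM) sketch for ranking candidate parking areas.
    # Feature names, weights and scores are invented for illustration only.
    weights = {"distance": 0.4, "expected_co2": 0.35, "availability": 0.25}

    parking_areas = {                      # normalised scores in [0, 1], higher = better
        "P1": {"distance": 0.9, "expected_co2": 0.7, "availability": 0.6},
        "P2": {"distance": 0.5, "expected_co2": 0.9, "availability": 0.8},
        "P3": {"distance": 0.7, "expected_co2": 0.6, "availability": 0.9},
    }

    def wsm_score(features):
        return sum(weights[f] * v for f, v in features.items())

    ranking = sorted(parking_areas.items(), key=lambda kv: wsm_score(kv[1]), reverse=True)
    top_k = 2
    print("recommended parking areas:", [name for name, _ in ranking[:top_k]])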

Keywords: CO2 emission, IoT, EDA, Weighted Sum Model, WSM, regression, smart parking system.

333 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull (two-parameter Weibull) distribution function, whose trend is proportional to the bi-Weibull probability density function. In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse, with simulated data, the computational problems associated with the parameters, an issue of great importance in its application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler: according to the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
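
For reference, the two-parameter Weibull density to which the trend of the process is proportional can be written in standard notation (the symbols are generic, not necessarily the paper's) as

    f(t;\alpha,\beta) = \frac{\alpha}{\beta}\left(\frac{t}{\beta}\right)^{\alpha-1}
                        \exp\!\left[-\left(\frac{t}{\beta}\right)^{\alpha}\right],
    \qquad t \ge 0,\ \alpha > 0,\ \beta > 0,

so that the drift (trend) of the diffusion is taken proportional to this density.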

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

332 Creeping Control Strategy for Direct Shift Gearbox Based on the Investigation of Temperature Variation of the Wet Clutch

Authors: Biao Ma, Jikai Liu, Man Chen, Jianpeng Wu, Liyong Wang, Changsong Zheng

Abstract:

Proposing an appropriate control strategy is an effective and practical way to address the overheating problem of the wet multi-plate clutch in a Direct Shift Gearbox under long-time creeping conditions. To do so, the temperature variation of the wet multi-plate clutch is first investigated by establishing a thermal resistance model for the gearbox cooling system. To calculate the generated heat flux and predict the clutch temperature precisely, the friction torque model is optimized by introducing an improved friction coefficient, which is related to the pressure, the relative speed and the temperature. After that, the heat transfer model and the improved friction torque model are employed by the vehicle powertrain model to construct a comprehensive co-simulation model for the Direct Shift Gearbox (DSG) vehicle. A creeping control strategy is then proposed and, to evaluate the vehicle performance, the safety temperature (250 ℃) is adopted as an important metric. During the creeping process, the temperature of the two clutches always remains under the safety value (250 ℃), which demonstrates the effectiveness of the proposed control strategy in avoiding thermal failure of the clutches.

Keywords: Creeping control strategy, direct shift gearbox, temperature variation, wet clutch.

331 Adjustment and Scale-Up Strategy of Pilot Liquid Fermentation Process of Azotobacter sp.

Authors: G. Quiroga-Cubides, A. Díaz, M. Gómez

Abstract:

The genus Azotobacter has been widely used as a bio-fertilizer due to its significant effects on the stimulation and promotion of plant growth in various agricultural species of commercial interest. In order to obtain a significant viable cellular concentration, a scale-up strategy for a liquid fermentation process (SmF) with two strains of A. chroococcum (named Ac1 and Ac10) was validated and adjusted at laboratory and pilot scale. A batch fermentation process under previously defined conditions was carried out in a 3.5 L Infors® Minifors bioreactor, which served as a baseline for this research. For the purpose of increasing process efficiency, the effect of reducing the stirring speed was evaluated in combination with a fed-batch-type fermentation at laboratory scale. To reproduce the efficiency parameters obtained, a scale-up strategy based on geometric and fluid-dynamic similarity was evaluated. According to the analysis of variance, this scale-up strategy did not have a significant effect on cellular concentration in the laboratory and pilot fermentations (Tukey, p > 0.05). Regarding air consumption, the fermentation process at pilot scale showed a reduction of 23% versus the baseline. The reduction in energy consumption under laboratory and pilot scale conditions was 96.9% compared with the baseline.

Keywords: Azotobacter chroococcum, scale-up, liquid fermentation, fed-batch process.

330 Perceptions of Climate Change Risk to Forest Ecosystems: A Case Study of Patale Community Forestry User Group, Nepal

Authors: N. R. P Withana, E. Auch

Abstract:

The purpose of this study was to investigate perceptions of climate change risk to forest ecosystems and forest-based communities, as well as the perceived effectiveness of climate change adaptation strategies and the challenges for adaptation. Data were gathered using a pre-tested semi-structured questionnaire, with respondents chosen by simple random selection. For the majority of issues, the responses were obtained on multi-point Likert scales, and the scores provided were, in turn, used to estimate the means and other useful estimates. A composite knowledge index developed using correct responses to a set of self-rated statements was used to evaluate the issues. The mean of the knowledge index was 0.64, and all respondents recorded values of the knowledge index above 0.25. Increased forest fire was perceived by respondents as the greatest risk to forest ecosystems, while decreased access to water supplies was perceived as the greatest risk to the livelihoods of forest-based communities. The adaptation strategy perceived by respondents as most effective against climate change risks to forest ecosystems and the livelihoods of forest-based communities in the Kathmandu valley in Nepal was reforestation and afforestation, while lack of public awareness was perceived as the major limitation for climate change adaptation. However, perceived risks as well as effective adaptation strategies showed an inconsistent association with knowledge indicators and socio-cultural variables. The results provide useful information to any party involved with climate change issues in Nepal, since such attempts would be more effective once people's perceptions on these aspects are taken into account.

Keywords: Climate change, forest ecosystems, forest-based communities, risk perceptions.

329 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts

Authors: S. Karabulut, A. Güllü, A. Güldas, R. Gürbüz

Abstract:

This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data was collected and imported to the artificial neural network model. A multilayer perceptron model was used with the back propagation algorithm employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. Artificial neural network and regression analysis were used to predict surface roughness. The values thus predicted were compared with the collected experimental data, and the corresponding percentage error was computed. Analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value with decreasing lead angle value from 88° to 45°.
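
One standard relation linking the lead angle to the chip thickness used as a model input is the usual face-milling geometry h ≈ f_z·sin(κ); this is an assumed textbook relation, not the paper's fitted model:

    import math

    # Assumed textbook relation (not the paper's model): in face milling the maximum
    # undeformed chip thickness is roughly the feed per tooth times the sine of the lead
    # angle, so it decreases as the lead angle drops from 88 deg to 45 deg, consistent
    # with the reported improvement in surface roughness.
    feed_per_tooth = 0.15  # mm/tooth (made-up value)
    for lead_angle_deg in (88, 75, 60, 45):
        h_max = feed_per_tooth * math.sin(math.radians(lead_angle_deg))
        print(f"lead angle {lead_angle_deg:2d} deg -> chip thickness ~ {h_max:.3f} mm")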

Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis.

328 A Medical Images Based Retrieval System using Soft Computing Techniques

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

Content-Based Image Retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases with documents of differing sorts and varying characteristics, and many questions with respect to speed, semantic descriptors or objective image interpretations remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. In several articles, content-based access to medical images for supporting clinical decision making has been proposed, which would ease the management of clinical data, and scenarios for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS) have been created. This paper gives an overview of soft computing techniques, and new research directions that can prove to be useful are defined. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment.

Keywords: CBIR, GA, Rough sets, CBMIR

327 Traffic Signal Design and Simulation for Vulnerable Road Users Safety and Bus Preemption

Authors: Shih-Ching Lo, Hsieh-Chu Huang

Abstract:

Most pedestrian-car accidents at signalized intersections occur because pedestrians cannot cross the intersection safely within the green light. From the pedestrian's viewpoint, there are two possible reasons. The first is that some pedestrians, such as the elderly, cannot speed up to cross the intersection in time. The other is that pedestrians do not sense that the signal phase is about to change and that their right-of-way is about to be lost. Developing signal logic to protect pedestrians who are crossing an intersection is the first purpose of this study. Another purpose is to improve the reliability and reduce the delay of public transportation service; therefore, bus preemption is also considered in the designed signal logic. In this study, traffic data from the intersection of Chong-Qing North Road and Min-Zu West Road, Taipei, Taiwan, are employed to calibrate and validate the signal logic by simulation. VISSIM 5.20, a microscopic traffic simulation package, is employed to simulate the signal logic. The simulated results show that the signal logic presented in this study can successfully protect pedestrians crossing the intersection, and the bus preemption design can reduce the average bus delay. However, the pedestrian safety and bus preemption signals strongly influence the average delay of cars. Thus, whether to apply the pedestrian safety and bus preemption signal logic to an isolated intersection should be evaluated carefully.

Keywords: vulnerable road user, bus preemption, signal design.

326 Assessing the Impact of Underground Cavities on Buildings with Stepped Foundations on Sloping Lands

Authors: Masoud Mahdavi

Abstract:

The use of sloping lands is increasing due to the reduction of land suitable for the construction of buildings. In the design and construction of buildings on sloping lands, the foundation has special loading conditions that require the designer and executor to use a stepped foundation. The creation of underground cavities, including urban and subway tunnels, sewers, urban facilities, etc., inside the ground causes the behavior of the foundation to become uncertain. In the present study, using Abaqus software, a 45-degree stepped foundation on the ground is designed. The foundations are placed on the ground both in a cohesive (no-cavity) configuration and with circular cavities, in order to show the effect of increasing the cross-sectional area of the underground cavities on the foundation's performance. The Kobe earthquake record was applied to the foundation and ground for two seconds. The underground cavities have a circular cross-section with a radius of 5 m and are located at a depth of 22.54 m. The results showed that as the number of underground cavities increased, the von Mises stress (in the vertical direction) increased, the plastic strain in the ground increased, and the displacement and velocity in the foundation also increased.

Keywords: Stepped foundation, sloping ground, Kobe earthquake, Abaqus software, underground excavations.

325 An Agent Based Dynamic Resource Scheduling Model with FCFS-Job Grouping Strategy in Grid Computing

Authors: Raksha Sharma, Vishnu Kant Soni, Manoj Kumar Mishra, Prachet Bhuyan, Utpal Chandra Dey

Abstract:

Grid computing is a group of clusters connected over high-speed networks that involves coordinating and sharing computational power, data storage and network resources operating across dynamic and geographically dispersed locations. Resource management and job scheduling are critical tasks in grid computing. Resource selection becomes challenging due to the heterogeneity and dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to maximize resource utilization and minimize the processing time of jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which best suits large-scale applications, and the root node of the MHT is selected for job submission. A job grouping concept is used to maximize resource utilization when scheduling jobs in grid computing. The proposed resource selection model and job grouping concept are used to enhance the scalability, robustness, efficiency and load balancing ability of the grid.
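
A minimal sketch of the two ideas, max-heap-based resource selection and FCFS job grouping, might look like this (resource capacities and job lengths are invented, and the real model also handles dynamic availability):

    import heapq

    # Illustrative sketch (not the authors' implementation): resources are kept in a
    # max-heap keyed on capability, the root is picked for submission, and jobs are
    # grouped first-come-first-served until the group's total length fits that capacity.
    resources = [("R1", 400), ("R2", 900), ("R3", 650)]        # (name, capability in MIPS)
    heap = [(-mips, name) for name, mips in resources]          # negate values for a max-heap
    heapq.heapify(heap)

    jobs = [("J1", 120), ("J2", 300), ("J3", 250), ("J4", 200), ("J5", 90)]  # (id, length in MI)

    neg_mips, chosen = heap[0]                                   # root of the MHT
    capacity = -neg_mips                                         # grouping granularity

    groups, current, used = [], [], 0
    for job_id, length in jobs:                                  # first-come-first-served order
        if used + length > capacity and current:
            groups.append(current)
            current, used = [], 0
        current.append(job_id)
        used += length
    if current:
        groups.append(current)

    print(f"submitting to {chosen} (capacity {capacity} MI):", groups)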

Keywords: Agent, Grid Computing, Job Grouping, Max Heap Tree (MHT), Resource Scheduling.

324 Groundwater Potential Zone Identification in Unconsolidated Aquifer Using Geophysical Techniques around Tarbela Ghazi, District Haripur, Pakistan

Authors: Syed Muzyan Shahzad, Liu Jianxin, Asim Shahzad, Muhammad Sharjeel Raza, Sun Ya, Fanidi Meryem

Abstract:

An electrical resistivity investigation was conducted in the vicinity of Tarbela Ghazi in order to study the subsurface layers with a view to determining the depth to the aquifer and the thickness of groundwater potential zones. Vertical Electrical Sounding (VES) using a Schlumberger array was carried out at 16 VES stations. Well logging data at four tube wells have been used to mark the super-saturated zones with a high discharge rate. The present paper gives a geoelectrical identification of the lithology and an estimate of the relationship between the resistivity and the Dar Zarrouk parameters (transverse unit resistance and longitudinal unit conductance). The VES results revealed both the homogeneous and heterogeneous nature of the subsurface strata. The aquifer is unconfined to confined in nature and, although a perched aquifer has been identified at a few locations, the groundwater potential zones are developed in unconsolidated deposit layers, and more than seven geoelectric layers are observed at some VES locations. The thickness of the saturated zones ranges from 5 m to 150 m, whereas in a few areas the aquifer is more than 150 m thick. The average anisotropy, transverse resistance and longitudinal conductance values are 0.86%, 35750.9821 Ω.m2 and 0.729 siemens, respectively. The transverse unit resistance values fluctuate over the aquifer system, whereas high values are observed below a particular depth, which are significantly associated with the high-transmissivity zones. The groundwater quality in all analyzed samples is below the permissible limit according to World Health Organization (WHO) standards.
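
For reference, the Dar Zarrouk parameters mentioned above are computed from the layer resistivities and thicknesses as in the sketch below; the layer values are invented, not the survey results:

    import math

    # Illustrative computation of the Dar Zarrouk parameters for an assumed layered model
    # (resistivities and thicknesses are made up, not the field values from this survey).
    layers = [(45.0, 6.0), (120.0, 25.0), (60.0, 80.0)]   # (resistivity [ohm.m], thickness [m])

    T = sum(rho * h for rho, h in layers)                 # transverse unit resistance [ohm.m^2]
    S = sum(h / rho for rho, h in layers)                 # longitudinal unit conductance [S]
    H = sum(h for _, h in layers)                         # total thickness [m]
    anisotropy = math.sqrt(T * S) / H                     # coefficient of anisotropy

    print(f"T = {T:.1f} ohm.m^2, S = {S:.3f} S, lambda = {anisotropy:.3f}")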

Keywords: Geoelectric layers, Dar Zarrouk parameters, Aquifer, Electro-stratigraphic.

323 Managing IT Departments in Higher Education Institutes: Coping with the Exponentially Growing Needs and Expectations

Authors: Balqees A. Al-Thuhli, Ali H. Al-Badi, Khamis Al-Gharbi

Abstract:

Information technology is changing rapidly and users' expectations are also growing. Dealing with these changes in information technology while satisfying users' needs and expectations is a big challenge, and IT managers need to explore new mechanisms and strategies to enable them to cope with such challenges.

The objectives of this research are to identify the significant challenges that might face IT managers in higher education institutes in the face of high and ever-growing customer expectations, and to propose possible solutions for coping with such high-speed changes in information technology.

To achieve these objectives, interviews were conducted with IT professionals from different higher education institutes in Oman. In addition, documentation (printed and online) related to these institutions was studied and an intensive review of the published literature was carried out.

The findings of this research are expected to give a better understanding of the challenges that might face IT managers at higher education institutes. This understanding is expected to highlight the importance of being adaptable and fast in keeping up with ever-growing technological changes. Moreover, adopting different tools and technologies could assist IT managers in developing their organisations' IT policies and strategies.

Keywords: Information technology, IT rapid changes, CIO roles, challenges, IT managers, coping mechanisms, users' expectations.

322 Estimation Model for Concrete Slump Recovery by Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

This paper introduces a solution for concrete slump recovery using a type-F chemical admixture (superplasticizer, naphthalene base) in order to solve the problem of unusable concrete caused by slump loss, which is especially severe in tropical countries with faster slump loss rates. On the other hand, randomly adding superplasticizer to concrete can cause the concrete to segregate. Therefore, this paper also develops an estimation model used to calculate the second dose of superplasticizer needed for slump recovery. The fresh properties of ordinary Portland cement concretes with volumetric ratios of paste to void between aggregates (paste content) of 1.1-1.3, water-cement ratios of 0.30 to 0.67 and initial superplasticizer (naphthalene base) dosages of 0.25%-1.6% were tested for initial slump and for slump loss every 30 minutes over one and a half hours by the slump cone test. Concretes with slump loss ranging from 10% to 90% were re-dosed and successfully recovered back to their initial slump, with the slump after re-dosing verified by the slump cone test. From the results, it is concluded that slump loss was slower for mixes with a high initial dose of superplasticizer, because the added superplasticizer disturbs cement hydration. The required second dose of superplasticizer was affected by two major parameters, the water-cement ratio and the paste content: a lower water-cement ratio and a lower paste content increase the required second dose. The second dose of superplasticizer also increases as the solid content within the system increases, where the solids can come either from cement particles or from aggregate. The data were analyzed to form an equation used to estimate the second dosage of superplasticizer required to recover the slump to its original value.

Keywords: Estimation model, second superplasticizer dosage, slump loss, slump recovery.

321 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and used to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules and these can be used to estimate energy consumption; and (3) policy changes affecting the primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
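
Finding (1), careful screening and outlier identification, can be illustrated with a simple interquartile-range filter on energy use intensity values (the numbers are invented, not the Ontario reporting data):

    import numpy as np

    # Illustrative data-screening sketch: flag outliers in energy use intensity (EUI)
    # with the interquartile-range rule before computing a residence benchmark.
    eui = np.array([210, 195, 230, 250, 188, 900, 205, 240, 35, 215])  # ekWh/m2/yr (invented)

    q1, q3 = np.percentile(eui, [25, 75])
    iqr = q3 - q1
    mask = (eui >= q1 - 1.5 * iqr) & (eui <= q3 + 1.5 * iqr)

    print("flagged as outliers:", eui[~mask])
    print(f"benchmark (median of screened data): {np.median(eui[mask]):.0f} ekWh/m2/yr")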

Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.

320 An Analysis of Eco-efficiency and GHG Emission of Olive Oil Production in Northeast of Portugal

Authors: M. Feliciano, F. Maia, A. Gonçalves

Abstract:

The olive oil production sector plays an important role in the Portuguese economy. It has had major growth over the last decade, increasing its weight in overall national exports. International market penetration for Mediterranean traditional products is increasingly demanding, especially in the Northern European markets, where consumers are looking for more sustainable products. To support this growing demand, this study addresses olive oil production from the environmental and eco-efficiency perspectives. The analysis considers two consecutive product life cycle stages: olive tree farming and olive oil extraction in mills. Addressing olive farming, data collection covered two different organizations: a middle-size farm (~12 ha) (F1) and a large-size farm (~100 ha) (F2). Results from both farms show that olive collection activities are responsible for the largest amounts of greenhouse gas (GHG) emissions. For these activities, the estimated carbon footprint per kg of olives was higher in F2 (188 g CO2e/kg olives) than in F1 (148 g CO2e/kg olives). Considering olive oil extraction, two different mills were considered: one using a two-phase system (2P) and the other a three-phase system (3P). Results from the study of the two mills show that there is a much higher use of water in 3P, while energy intensity (EI) is similar in both mills. When evaluating the GHG generated, two conditions are evaluated: a biomass-neutral condition, resulting in a carbon footprint higher in 3P (184 g CO2e/L olive oil) than in 2P (92 g CO2e/L olive oil); and a non-neutral biomass condition, in which 2P increases its carbon footprint to 273 g CO2e/L olive oil. When addressing the carbon footprint of possible combinations among the studied subsystems, the results suggest that olive harvesting is the major source of GHG.
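
The per-kilogram carbon footprint figures quoted above are of the form total emissions divided by olive output; the sketch below shows the arithmetic with invented activity data:

    # Illustrative arithmetic only (invented activity data): the farm-stage carbon
    # footprint per kilogram of olives is total farm emissions divided by olive output,
    # the same form as the per-kg figures quoted for farms F1 and F2.
    emissions_kg_co2e = {"diesel": 5200.0, "fertiliser": 3100.0, "electricity": 900.0}
    olives_harvested_kg = 60000.0

    footprint = sum(emissions_kg_co2e.values()) / olives_harvested_kg  # kg CO2e per kg olives
    print(f"farm carbon footprint: {footprint*1000:.0f} g CO2e per kg of olives")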

Keywords: Carbon footprint, environmental indicators, farming subsystem, industrial subsystem, olive oil.

319 The Relationship between Land Use Factors and Feeling of Happiness at the Neighbourhood Level

Authors: M. Moeinaddini, Z. Asadi-Shekari, Z. Sultan, M. Zaly Shah

Abstract:

Happiness can be related to everything that can provide a feeling of satisfaction or pleasure. This study considers the relationship between land use factors and the feeling of happiness at the neighbourhood level. Land use variables (beautiful and attractive neighbourhood design, availability and quality of shopping centres, sufficient recreational spaces and facilities, and sufficient daily service centres) are used as independent variables, and the happiness score is used as the dependent variable. In addition to the land use variables, socio-economic factors (gender, race, marital status, employment status, education, and income) are also considered as independent variables. This study uses the Oxford happiness questionnaire to estimate the happiness score of more than 300 people living in six neighbourhoods, selected randomly from the Skudai neighbourhoods in Johor, Malaysia. The land use data were obtained by adding related questions to the Oxford happiness questionnaire. The strength of the relationships is estimated using generalised linear modelling (GLM). The findings indicate that an increase in the feeling of happiness is correlated with increasing income, a more beautiful and attractive neighbourhood design, sufficient shopping centres, recreational spaces, and daily service centres. The results show that all land use factors in this study have a significant relationship with happiness, but only income, among the socio-economic factors, affects happiness significantly. Therefore, land use factors can affect happiness in Skudai more than socio-economic factors.

Keywords: Neighbourhood land use, neighbourhood design, happiness, socio-economic factors, generalised linear modelling.

318 A Real-time Computer Vision System for VehicleTracking and Collision Detection

Authors: Mustafa Kisa, Fatih Mehmet Botsali

Abstract:

Recent developments in automotive technology are focused on economy, comfort and safety. Vehicle tracking and collision detection systems are attracting the attention of many investigators focused on driving safety in the field of automotive mechatronics. In this paper, a vision-based vehicle detection system is presented. The developed system is intended to be used for collision detection and driver alert. The system uses RGB images captured by a camera in a car driven on the highway. Images captured by the moving camera are used to detect the moving vehicles in the image. A vehicle ahead of the camera is detected in daylight conditions. The proposed method detects moving vehicles by subtracting successive images. The plate height of the vehicle is determined by using a plate recognition algorithm, and the distance of the moving object is calculated from the plate height. After determining the distance of the moving vehicle, the relative speed of the vehicle and the Time-to-Collision are calculated using the distances measured in successive images. Results obtained in road tests are discussed in order to validate the use of the proposed method.
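
A plausible way to turn the plate height into distance, relative speed and Time-to-Collision is the standard pinhole-camera relation sketched below; the focal length, plate size and pixel measurements are assumptions, not the authors' calibration:

    # Illustrative pinhole-camera sketch (assumed relation, not the authors' exact code):
    # with a known real plate height and the focal length in pixels, distance follows from
    # the plate's pixel height; two successive detections then give relative speed and
    # Time-to-Collision (TTC).
    focal_px = 1200.0          # camera focal length expressed in pixels (assumption)
    plate_height_m = 0.11      # real licence-plate height [m] (region dependent)
    frame_dt = 0.5             # time between the two processed frames [s]

    def distance(plate_px):
        return focal_px * plate_height_m / plate_px

    d1 = distance(22.0)        # plate height in frame k   [pixels]
    d2 = distance(24.5)        # plate height in frame k+1 [pixels]
    closing_speed = (d1 - d2) / frame_dt
    ttc = d2 / closing_speed if closing_speed > 0 else float("inf")

    print(f"d1 = {d1:.2f} m, d2 = {d2:.2f} m, "
          f"closing speed = {closing_speed:.2f} m/s, TTC = {ttc:.1f} s")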

Keywords: Image processing, vehicle tracking, license plate detection, computer vision.

317 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach

Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan

Abstract:

Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers and policy makers are making frantic efforts to bridge the rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have been used in estimating productivity performance and its drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performance of rice farmers and its determining factors in the GSZ, FSTZ and CSZ agro-ecological zones. The study used both primary and secondary data. Farmers in the CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amount, less lodging of rice, and well-coordinated and synergized adoption of technologies. Although farmers in the CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield, since they had the lowest technology gap ratio (TGR). It is recommended that the government, through the Ministry of Food and Agriculture, development partners and individual private companies, promote the adoption of IATs as well as educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should also be vigorously pursued.
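
In the metafrontier framework referred to above, efficiency with respect to the metafrontier is usually the product of the zone-specific technical efficiency and the technology gap ratio; the numbers below are invented purely to illustrate the relation:

    # Illustrative numbers only: in the two-step metafrontier framework, a farm's
    # technical efficiency with respect to the metafrontier (TE*) is its zone-specific
    # technical efficiency (TE) multiplied by the technology gap ratio (TGR).
    zones = {                      # (group technical efficiency, TGR) - invented values
        "GSZ":  (0.72, 0.80),
        "FSTZ": (0.75, 0.85),
        "CSZ":  (0.81, 0.70),      # most efficient group but lowest TGR, as in the abstract
    }
    for zone, (te_group, tgr) in zones.items():
        print(f"{zone}: TE* = TE x TGR = {te_group:.2f} x {tgr:.2f} = {te_group * tgr:.2f}")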

Keywords: Efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach.

316 Robust Digital Cinema Watermarking

Authors: Sadi Vural, Hiromi Tomii, Hironori Yamauchi

Abstract:

With the advent of digital cinema and digital broadcasting, copyright protection of video data has become one of the most important issues. We present a novel method of watermarking for video image data based on hardware and digital wavelet transform techniques and name it "traceable watermarking", because the watermarked data is constructed before the transmission process and traced after it has been received by an authorized user. In our method, we embed the watermark into the lowest part of each image frame of the decoded video by using a hardware LSI. Digital cinema is an important application for traceable watermarking, since a digital cinema system makes use of watermarking technology during content encoding, encryption, transmission, decoding and all the intermediate processes carried out in digital cinema systems. The watermark is embedded into randomly selected movie frames using hash functions, and the embedded watermark information can be extracted from the decoded video data without any need to access the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems is much better than conventional watermarking techniques in terms of robustness, image quality, speed, simplicity and robust structure.
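
The abstract only states that frames are selected using hash functions; a hypothetical selection rule of that kind (the key, digest and threshold below are all assumptions, not the authors' scheme) might look like this:

    import hashlib

    # Hypothetical hash-based frame selection sketch: a per-recipient key makes the
    # choice of watermarked frames pseudo-random yet reproducible, hence traceable.
    secret_key = b"recipient-0042"     # assumed per-recipient key
    total_frames = 240
    keep_one_in = 20                   # on average watermark roughly 1 frame in 20

    selected = [
        i for i in range(total_frames)
        if hashlib.sha256(secret_key + i.to_bytes(4, "big")).digest()[0] % keep_one_in == 0
    ]
    print(f"{len(selected)} frames carry the watermark:", selected[:10], "...")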

Keywords: Decoder, Digital content, JPEG2000 Frame, System-On-Chip, traceable watermark, Hash Function, CRC-32.

315 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed test section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise is presented. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.

Keywords: Acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation.
