Search results for: depth average velocity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9138

7818 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation

Authors: Othman Maklouf, Abdunnaser Tresh

Abstract:

Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such systems. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) is the use of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost micro-electro-mechanical-system (MEMS) inertial sensors is now making it feasible to develop an INS using an inertial measurement unit (IMU). INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated in a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demand for lower-cost, smaller, seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, that is, to use a partial IMU (ParIMU) configuration. For land vehicular use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining the velocity of the vehicle. This paper presents a field experiment with a low-cost strapdown ParIMU/GPS combination, with post-processing of the data for the determination of the 2-D components of position (trajectory), velocity and heading.
In the present approach, we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost IMU and because of the relatively small area of the trajectory.
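As a minimal illustration of the loosely coupled scheme described above, the sketch below fuses a dead-reckoned (INS-style) position with a GPS fix through a one-dimensional Kalman update. All numeric values are hypothetical; the paper's actual filter estimates 2-D position, velocity and heading.

```python
# Minimal 1-D loosely coupled INS/GPS fusion sketch (hypothetical values;
# the paper's filter is a full Kalman filter over 2-D position, velocity
# and heading).

def ins_predict(x, p, v, dt, q):
    """Dead-reckon position with velocity v; process noise q grows the variance p."""
    return x + v * dt, p + q

def kalman_update(x, p, z, r):
    """Fuse prediction (x, p) with a GPS measurement z of variance r."""
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

# One predict/update cycle: the INS drifts, the GPS bounds the error.
x, p = 0.0, 1.0
x, p = ins_predict(x, p, v=10.0, dt=1.0, q=0.5)   # predicted 10 m
x, p = kalman_update(x, p, z=9.0, r=1.0)          # GPS says 9 m
print(round(x, 3), round(p, 3))                   # -> 9.4 0.6
```

Note how the posterior variance (0.6) is smaller than both the predicted variance (1.5) and the GPS variance (1.0): fusing the two sources always tightens the estimate.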

Keywords: GPS, IMU, Kalman filter, materials engineering

Procedia PDF Downloads 417
7817 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity

Authors: Ladislav Écsi, Roland Jančo

Abstract:

Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled together to give the overall stress-free intermediate configuration since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, an inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with the continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. The paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case or the compatible plastic deformation gradient in the latter case in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. 
The unstressed intermediate configuration, the unloaded configuration after the plastic flow, where the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent with both elastic and plastic components. The residual strains and stresses originate from the difference between the compatible plastic/permanent displacement field gradient and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements where the intermediate configuration is calculated, whose analysis results are compared with the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
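For reference, the multiplicative split and plastic flow rule at issue above are conventionally written as follows (standard continuum-plasticity notation, which may differ from the authors' own symbols):

```latex
% Multiplicative split of the deformation gradient and the plastic flow rule:
F = F^{e} F^{p}, \qquad
L^{p} = \dot{F}^{p}\,(F^{p})^{-1}, \qquad
L^{p} = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}}
```

The paper's argument is precisely that, in current models, the right-hand side of the flow rule (plastic multiplier times yield-surface normal) is incompatible, so $F^{p}$ cannot be integrated to a plastic displacement field.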

Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility

Procedia PDF Downloads 123
7816 Evaluation of Forming Properties on AA 5052 Aluminium Alloy by Incremental Forming

Authors: A. Anbu Raj, V. Mugendiren

Abstract:

Sheet metal forming is a vital manufacturing process used in the automobile, aerospace and agricultural industries, among others. Incremental forming is a promising process that provides a short and inexpensive way of forming complex three-dimensional parts without using a die. The aim of this research is to study the forming behaviour of AA 5052 aluminium alloy using incremental forming, and also to study the forming limit diagram (FLD) of cone-shaped AA 5052 parts at room temperature and various annealing temperatures. Initially, the surface roughness and wall thickness obtained by incremental forming of AA 5052 sheet at room temperature are optimized by controlling the effects of the forming parameters. A central composite design (CCD) was utilized to plan the experiments. Step depth, feed rate, and spindle speed were considered as input parameters, with surface roughness and wall thickness as output responses. Process performance, namely average thickness and surface roughness, was evaluated, and the optimization targeted minimum surface roughness and maximum wall thickness. The optimal results are determined based on response surface methodology and the analysis of variance. The FLD is constructed for AA 5052 at room temperature and various annealing temperatures using the optimized process parameters from the response surface methodology. The cone has higher formability than the square pyramid and a higher wall thickness distribution. Finally, the FLDs of the cone and square pyramid shapes at room temperature and the various annealing temperatures are compared experimentally and simulated with Abaqus software.
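To make the response-surface step concrete: after a CCD experiment, one evaluates the fitted second-order response surface over the factor space and picks the setting that optimizes the response. The quadratic coefficients below are made up for illustration, not the paper's fitted model.

```python
# Sketch of evaluating a second-order response-surface model over a factor
# grid, as RSM does after a CCD experiment. All coefficients are
# hypothetical, not the paper's fitted values.
def roughness(step_depth, feed, speed):
    # hypothetical quadratic response surface Ra = f(parameters)
    return (1.2 + 0.8 * step_depth - 0.002 * feed + 0.0003 * speed
            + 0.5 * step_depth**2 + 1e-6 * feed**2)

grid = [(d, f, s) for d in (0.2, 0.4, 0.6)
                  for f in (500, 1000, 1500)
                  for s in (1000, 2000)]
best = min(grid, key=lambda p: roughness(*p))
print(best)   # -> (0.2, 1000, 1000)
```

In the paper the same search is done jointly for two responses (roughness minimized, wall thickness maximized), which is why ANOVA and desirability-style trade-offs enter the analysis.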

Keywords: incremental forming, response surface methodology, optimization, wall thickness, surface roughness

Procedia PDF Downloads 336
7815 Analysis of Contact Width and Contact Stress of Three-Layer Corrugated Metal Gasket

Authors: I. Made Gatot Karohika, Shigeyuki Haruyama, Ken Kaminishi, Oke Oktavianty, Didik Nurhadiyanto

Abstract:

Contact width and contact stress are important parameters related to the leakage behaviour of corrugated metal gaskets. In this study, the contact width and contact stress of a three-layer corrugated metal gasket are investigated as functions of the modulus of elasticity and thickness of the surface layer for two gasket types (0-MPa and 400-MPa modes). A finite element method was employed to develop a simulation for analyzing the effect of each parameter. The results indicate that lowering the modulus of elasticity ratio of the surface layer results in a larger contact width but a smaller average contact stress. When the modulus of elasticity ratio is held constant and the thickness ratio increases, the contact width shows an increasing trend, whereas the average contact stress shows a decreasing trend.

Keywords: contact width, contact stress, layer, metal gasket, corrugated, simulation

Procedia PDF Downloads 315
7814 Noise Removal Techniques in Medical Images

Authors: Amhimmid Mohammed Saffour, Abdelkader Salama

Abstract:

Filtering is part of image enhancement; it is used to enhance certain details, such as edges, that are relevant to the application, and it can also be used to eliminate unwanted noise components. Medical images typically contain salt-and-pepper noise and Poisson noise. This noise manifests as minute grey-scale variations within the image. In this paper, different filtering techniques (median, Wiener, rank-order 3, rank-order 5, and average) were applied to CT medical images (brain and chest) to remove salt-and-pepper noise, which consists of random pixels being set to black or white. Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE) and histograms were used to evaluate the quality of the filtered images. The results show that these filters are useful and can help general medical practitioners analyze patients' images without difficulty.
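A minimal sketch of the salt-and-pepper removal step and its PSNR evaluation, assuming a 3x3 median window and synthetic grayscale data (the paper applies this to real CT slices):

```python
import math

def median3x3(img):
    """3x3 median filter; img is a list of equal-length rows (grayscale)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]           # borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]           # median of 9 values
    return out

def psnr(ref, test, peak=255.0):
    h, w = len(ref), len(ref[0])
    mse = sum((ref[y][x] - test[y][x]) ** 2
              for y in range(h) for x in range(w)) / (h * w)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

clean = [[100] * 5 for _ in range(5)]
noisy = [row[:] for row in clean]
noisy[2][2] = 255                            # one "salt" pixel
filtered = median3x3(noisy)
print(filtered[2][2], psnr(clean, filtered) > psnr(clean, noisy))  # -> 100 True
```

The median is effective against salt-and-pepper noise precisely because an isolated outlier never survives a rank-order statistic, unlike an averaging filter which smears it into its neighbours.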

Keywords: CT imaging, median filter, adaptive filter and average filter, MATLAB

Procedia PDF Downloads 312
7813 Soil Bearing Capacity of Shallow Foundations and Consolidation Settlement around the Prospective Area of Sei Gong Dam, Batam

Authors: Andri Hidayat, Zufialdi Zakaria, Raden Irvan Sophian

Abstract:

Batam City is expected to experience a water crisis within the next five years. The Sei Gong dam, located in Sijantung village, Galang District, Batam City, Riau Islands Province, is one of 13 dams that will be built to solve the raw-water crisis in Batam City. The purposes of this study are to determine the engineering-geological conditions around the Sei Gong Dam area, to determine the soil bearing capacity and the recommended pile foundation, and to characterize soil consolidation as one of the factors that affect the incidence of soil subsidence. Calculations for shallow foundations under general shear and local shear conditions indicate that the highest ultimate soil bearing capacity (qu) at each depth occurs for square foundations at two metres depth. The shallow-foundation zonation of the research area is divided into five zones: bearing capacity <10 ton/m2, 10-15 ton/m2, 15-20 ton/m2, 20-25 ton/m2, and >25 ton/m2. Based on the soil engineering parameters, the middle part of the Sei Gong Dam area shows a higher potential for land subsidence.
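The ultimate bearing capacity of a square footing under general shear conditions is conventionally computed with Terzaghi's formula; the soil parameters and bearing-capacity factors below are illustrative, not the site values from the study.

```python
# General-shear Terzaghi bearing capacity for a square footing, the kind of
# calculation behind the zonation above. All inputs are hypothetical.
def qu_square(c, gamma, Df, B, Nc, Nq, Ngamma):
    """Ultimate bearing capacity (kPa): qu = 1.3*c*Nc + gamma*Df*Nq + 0.4*gamma*B*Ngamma.

    c      : cohesion (kPa)
    gamma  : soil unit weight (kN/m^3)
    Df     : foundation depth (m)
    B      : footing width (m)
    N*     : bearing-capacity factors for the soil's friction angle
    """
    return 1.3 * c * Nc + gamma * Df * Nq + 0.4 * gamma * B * Ngamma

# clay-ish soil at 2 m depth (hypothetical parameter values)
q = qu_square(c=20.0, gamma=18.0, Df=2.0, B=1.5, Nc=17.7, Nq=7.4, Ngamma=3.6)
print(round(q, 1))   # -> 765.5
```

For the local shear condition the same formula is used with reduced strength parameters (c' = 2c/3 and reduced factors), which is why the paper reports both cases.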

Keywords: ultimate bearing capacity, type of foundation, consolidation, land subsidence, Batam

Procedia PDF Downloads 376
7812 Analyzing Nonsimilar Convective Heat Transfer in Copper/Alumina Nanofluid with Magnetic Field and Thermal Radiations

Authors: Abdulmohsen Alruwaili

Abstract:

A partial differential system featuring momentum and energy balance is often used to describe simulations of flow initiation and thermal shifting in boundary layers. The buoyancy force, expressed in terms of temperature, is factored into the momentum balance equation. The buoyancy force causes the flow quantity to fluctuate along the streamwise direction 𝑋; therefore, the problem can, to the best of our knowledge, be analyzed through nonsimilar modeling. In this analysis, a nonsimilar model is developed for radiative mixed convection of a magnetized power-law nanoliquid flowing over a vertical plate installed in a stationary fluid. Upward linear stretching initiates the flow in the vertical direction. The nanofluid is assumed to be a composite of copper (Cu) and alumina (Al₂O₃) nanoparticles, and viscous dissipation is negligible in this case. The nonsimilar system is treated by local nonsimilarity (LNS) via the numerical algorithm bvp4c. Surface temperature and the flow field are shown visually in relation to factors such as mixed convection, magnetic field strength, nanoparticle volume fraction, radiation parameters, and Prandtl number. The effects of the magnetic and mixed convection parameters on the rate of energy transfer and the friction coefficient are presented in tabular form. The results obtained are compared to the published literature. It is found that the presence of nanoparticles significantly improves the temperature profile of the considered nanoliquid. It is also observed that as the magnetic parameter increases, the velocity profile decreases, while enhancement of the nanoparticle concentration and the mixed convection parameter improves the velocity profile.

Keywords: nanofluid, power law model, mixed convection, thermal radiation

Procedia PDF Downloads 28
7811 Effect of VR-Based Wii Fit Training on Muscle Strength, Sensory Integration Ability and Walking Abilities in Patients with Parkinson's Disease: A Randomized Control Trial

Authors: Ying-Yi Laio, Yea-Ru Yang, Yih-Ru Wu, Ray-Yau Wang

Abstract:

Background: Virtual reality (VR) systems have been shown to increase motor performance in stroke patients and the elderly. However, the effects have not been established in patients with Parkinson's disease (PD). Purpose: To examine the effects of VR-based training in improving muscle strength, sensory integration ability and walking abilities in patients with PD in a randomized controlled trial. Method: Thirty-six participants with a diagnosis of PD were randomly assigned to one of three groups (n=12 for each group). Participants received VR-based Wii Fit exercise (VRWii group) or traditional exercise (TE group) for 45 minutes, followed by treadmill training for another 15 minutes, for 12 sessions over 6 weeks. Participants in the control group received no structured exercise program but fall-prevention education. Outcomes included lower extremity muscle strength, sensory integration ability, walking velocity, stride length, and the functional gait assessment (FGA). All outcomes were assessed at baseline, after training and at 1-month follow-up. Results: Both the VRWii and TE groups showed more improvement in level walking velocity, stride length, FGA, muscle strength and vestibular system integration than the control group after training and at 1-month follow-up. The VRWii training, but not the TE training, resulted in more improvement in visual system integration than the control. Conclusions: VRWii training is as beneficial as traditional exercise in improving walking abilities, sensory integration ability and muscle strength in patients with PD, and such improvements persisted for at least 1 month. VRWii training is therefore suggested for implementation in patients with PD.

Keywords: virtual reality, walking, sensory integration, muscle strength, Parkinson’s disease

Procedia PDF Downloads 327
7810 Drone On-Time Obstacle Avoidance for Static and Dynamic Obstacles

Authors: Herath M. P. C. Jayaweera, Samer Hanoun

Abstract:

Path planning for on-time obstacle avoidance is an essential and challenging task that enables drones to achieve safe operation in any application domain. The level of challenge increases significantly on the obstacle avoidance technique when the drone is following a ground mobile entity (GME). This is mainly due to the change in direction and magnitude of the GME's velocity in dynamic and unstructured environments. Force field techniques are the most widely used obstacle avoidance methods due to their simplicity, ease of use, and potential to be adopted for three-dimensional dynamic environments. However, the existing force field obstacle avoidance techniques suffer many drawbacks, including their tendency to generate longer routes when the obstacles are sideways of the drone's route, poor ability to find the shortest flyable path, propensity to fall into local minima, producing a non-smooth path, and high failure rate in the presence of symmetrical obstacles. To overcome these shortcomings, this paper proposes an on-time three-dimensional obstacle avoidance method for drones to effectively and efficiently avoid dynamic and static obstacles in unknown environments while pursuing a GME. This on-time obstacle avoidance technique generates velocity waypoints for its obstacle-free and efficient path based on the shape of the encountered obstacles. This method can be utilized on most types of drones that have basic distance measurement sensors and autopilot-supported flight controllers. The proposed obstacle avoidance technique is validated and evaluated against existing force field methods for different simulation scenarios in Gazebo and ROS-supported PX4-SITL. The simulation results show that the proposed obstacle avoidance technique outperforms the existing force field techniques and is better suited for real-world applications.
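For context, the classic force field method that the paper improves upon combines an attractive pull toward the goal with a repulsive push from obstacles inside an influence radius. A minimal 2-D sketch with hypothetical gains:

```python
import math

# Minimal artificial potential (force) field sketch: attraction toward the
# goal plus repulsion from a nearby obstacle, yielding a velocity command.
# Gains k_att, k_rep and influence radius d0 are hypothetical.
def force_field_velocity(pos, goal, obstacle, k_att=1.0, k_rep=2.0, d0=3.0):
    ax, ay = (goal[0] - pos[0]) * k_att, (goal[1] - pos[1]) * k_att
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d < d0:                       # repulsion acts only inside the influence radius
        mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
        ax += mag * dx
        ay += mag * dy
    return ax, ay

vx, vy = force_field_velocity(pos=(0.0, 0.0), goal=(10.0, 0.0), obstacle=(2.0, 1.0))
print(vx > 0, vy < 0)   # -> True True (pushed forward but away from the obstacle)
```

This simple formulation exhibits exactly the drawbacks listed above, such as local minima when attraction and repulsion cancel and non-smooth paths near symmetric obstacles, which motivates the paper's shape-aware velocity waypoints.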

Keywords: drones, force field methods, obstacle avoidance, path planning

Procedia PDF Downloads 91
7809 Factors Influencing Household Expenditure Patterns on Cereal Grains in Nasarawa State, Nigeria

Authors: E. A. Ojoko, G. B. Umbugadu

Abstract:

This study describes the expenditure pattern of households on millet, maize and sorghum across income groups in Nasarawa State. A multi-stage sampling technique was used to select a sample of 316 respondents. The Almost Ideal Demand System (AIDS) model was adopted. Results show that the average household size was five persons, with a dependency ratio of 52 %, which plays an important role in the household's expenditure pattern by increasing the household budget share. On average, 82 % were male-headed households, with an average age of 49 years and 13 years of formal education. Results on expenditure shares show that maize has the highest expenditure share, 38 %, across the three income groups, and most of the price effects are significantly different from zero at the 5 % significance level. This shows that the low price of maize increased its demand compared to the other cereals. Household size and the age of household members are the major factors affecting the demand for cereals in the study, consistent with the fact that an increased household size brings about increased consumption. The results on factors influencing preferences for cereal grains reveal that cooking quality and appearance (65.7 %) were the most important factors affecting the demand for maize in the study area. This study recommends that cereal crop production be prioritized in government policies and that farming activities that help boost food security and alleviate poverty be subsidized.
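The AIDS model referenced above expresses each commodity's budget share as w_i = alpha_i + sum_j gamma_ij * ln(p_j) + beta_i * ln(x/P), where x is total expenditure and P a price index. A sketch with hypothetical coefficients (not the study's estimates):

```python
import math

# Budget-share equation of the AIDS demand system. All coefficient values
# below are made up for illustration, not the study's fitted estimates.
def aids_budget_share(alpha, gammas, beta, prices, expenditure, lnP):
    return (alpha
            + sum(g * math.log(p) for g, p in zip(gammas, prices))
            + beta * (math.log(expenditure) - lnP))

# hypothetical "maize" share when all prices are 1 and ln(x) - ln(P) = 1
w_maize = aids_budget_share(0.3, (0.05, -0.02, -0.03), 0.01,
                            (1.0, 1.0, 1.0), math.e, 0.0)
print(round(w_maize, 2))   # -> 0.31
```

In estimation the gamma coefficients deliver the price effects the abstract reports, and beta determines whether a cereal is a necessity (beta < 0) or a luxury (beta > 0) as expenditure grows.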

Keywords: expenditure pattern, AIDS model, budget share, price, cereal grains, consumption

Procedia PDF Downloads 193
7808 Biostabilisation of Sediments for the Protection of Marine Infrastructure from Scour

Authors: Rob Schindler

Abstract:

Industry-standard methods of mitigating the erosion of seabed sediments rely on ‘hard engineering’ approaches, which have numerous environmental shortcomings: (1) direct loss of habitat by smothering of benthic species, (2) disruption of sediment transport processes, damaging geomorphic and ecosystem functionality, (3) generation of secondary erosion problems, (4) introduction of material that may propagate non-local species, and (5) provision of pathways for the spread of invasive species. Recent studies have also revealed the importance of biological cohesion, the result of naturally occurring extra-cellular polymeric substances (EPS), in stabilizing natural sediments. Mimicking the strong bonding kinetics through the deliberate addition of EPS to sediments - henceforth termed ‘biostabilisation’ - offers a means to mitigate erosion induced by structures or by episodic increases in hydrodynamic forcing (e.g. storms and floods) whilst avoiding, or reducing, hard engineering. Here we present unique experiments that systematically examine how biostabilisation reduces scour around a monopile in a current, a first step to realizing the potential of this new method of scour reduction for a wide range of engineering purposes in aquatic substrates. Experiments were performed in Plymouth University's recirculating sediment flume, which includes a recessed scour pit. The model monopile was 0.048 m in diameter, D. Assuming a prototype monopile diameter of 2.0 m yields a geometric ratio of 41.67; applied to a 10 m prototype water depth, this yields a model depth, d, of 0.24 m. The sediment pit containing the monopile was filled with different biostabilised substrata prepared using a mixture of fine sand (D50 = 230 μm) and EPS (xanthan gum). Scour development was measured using a laser point gauge along a 530 mm centreline at 10 mm increments at regular intervals over 5 h. Nine sand-EPS mixtures were examined, spanning EPS contents of 0.0% < b0 < 0.50%.
Maximum scour depth and excavated area were determined at different time steps and plotted against time to yield equilibrium values. After 5 hours the current was stopped and a detailed scan of the final scour morphology was taken. Results show that increasing EPS content causes a progressive reduction in the equilibrium depth and lateral extent of scour, and hence in the excavated material. Very small amounts, equating to natural communities (< 0.1% by mass), reduce the rate, depth and extent of scour around monopiles. Furthermore, the strong linear relationships between EPS content, equilibrium scour depth, excavation area and the timescales of scouring offer a simple index on which to modify existing scour prediction methods. We conclude that the biostabilisation of sediments with EPS may offer a simple, cost-effective and ecologically sensitive means of reducing scour in a range of contexts including OWFs, bridge piers, pipeline installation, and void filling in rock armour. Biostabilisation may also reduce economic costs through (1) use of existing site sediments or waste dredged sediments, (2) reduced fabrication of materials, (3) lower transport costs, and (4) less dependence on specialist vessels and precise sub-sea assembly. Further, its potential environmental credentials may allow sensitive use of the seabed in marine protection zones across the globe.
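The strong linear relationships reported above mean a simple least-squares line can serve as the predictive index. The data points below are made up for illustration, not the experimental values.

```python
# Least-squares line relating EPS content to equilibrium scour depth, the
# kind of linear index the paper proposes. Data are hypothetical.
def linefit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b              # intercept, slope

eps = [0.0, 0.1, 0.25, 0.5]            # % EPS by mass (hypothetical)
depth = [40.0, 30.0, 18.0, 0.0]        # equilibrium scour depth, mm (hypothetical)
a, b = linefit(eps, depth)
print(round(a, 1), round(b, 1))        # -> 38.8 -78.9
```

A negative slope of this kind is what would let an engineer read off the EPS dose needed to hold scour below a design threshold.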

Keywords: biostabilisation, EPS, marine, scour

Procedia PDF Downloads 165
7807 Application of a Generalized Additive Model to Reveal the Relations between the Density of Zooplankton and Other Variables in the West Daya Bay, China

Authors: Weiwen Li, Hao Huang, Chengmao You, Jianji Liao, Lei Wang, Lina An

Abstract:

Zooplankton are central to ecology, making a great contribution to maintaining the balance of an ecosystem; they are critical in promoting the material cycle and energy flow within ecosystems. A generalized additive model (GAM) was applied to analyze the relationships between the density (individuals per m³) of zooplankton and other variables in West Daya Bay. All data used in this analysis (survey month, survey station (longitude and latitude), depth of the water column, superficial concentration of chlorophyll a, benthonic concentration of chlorophyll a, number of zooplankton species and number of zooplankton individuals) were collected through monthly scientific surveys from January to December 2016. A generalized linear model (GLM) was used to select the variables with a significant impact on the density of zooplankton, and the GAM was then employed to analyze the relationship between the density of zooplankton and those significant variables. The results showed that the density of zooplankton increased with an increase in the benthonic concentration of chlorophyll a, but decreased with a decrease in the depth of the water column. Both high numbers of zooplankton species and a high overall number of zooplankton individuals led to a higher density of zooplankton.
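A GAM replaces the linear terms of a GLM with smooth functions of the predictors. As a crude stand-in for one such smooth term, the sketch below applies a running-mean smoother to a made-up depth-ordered density series; the study itself fits proper penalized smooths.

```python
# Crude stand-in for a single GAM smooth term: a running-mean smoother over
# one predictor (here, stations ordered by water-column depth). The density
# values are hypothetical, not the survey data.
def running_mean(ys, k=1):
    out = []
    for i in range(len(ys)):
        lo, hi = max(0, i - k), min(len(ys), i + k + 1)
        out.append(sum(ys[lo:hi]) / (hi - lo))
    return out

density = [120, 90, 100, 60, 50, 30]   # individuals per m^3 (hypothetical)
print([round(v, 1) for v in running_mean(density)])
# -> [105.0, 103.3, 83.3, 70.0, 46.7, 40.0]
```

The smoothed trend, falling as depth decreases along the series, is the shape a GAM partial-effect plot would display for the depth term described in the abstract.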

Keywords: density, generalized linear model, generalized additive model, the West Daya Bay, zooplankton

Procedia PDF Downloads 149
7806 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India

Authors: Susmita Ghosh

Abstract:

The catchment area of the Damodar River, India experiences seasonal rains due to the south-west monsoon every year, and depending upon the intensity of the storms, floods occur. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydro-power generation, municipal supplies and, last but not least, flood moderation. The downstream reach of the Damodar River, known as the Lower Damodar region, suffers severely and frequently from floods due to heavy monsoon rainfall and also releases from upstream reservoirs. Therefore, an effective flood management study is required to understand in depth the nature and extent of the flood, water-logging and erosion problems, the affected area, and the damages in the Lower Damodar region, by conducting mathematical model studies. The design flood, or design discharge, is needed as input to the respective model for generating several scenarios from the simulation runs; the ultimate aim is to arrive at a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage from inundated areas due to drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This, however, is limited by the availability of a long peak-flood record for correctly determining the type of probability density function. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated with five candidate distributions: generalized extreme value, extreme value type I, Pearson type III, log-Pearson, and normal.
An annual peak discharge series is available at Durgapur barrage for the period 1979 to 2013 (35 years), and this series is subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15 and 25 years at Durgapur barrage are estimated by the flood frequency method. It is necessary to develop flood hydrographs for these floods to facilitate the mathematical model studies and to find the depth and extent of inundation. The null hypothesis that the distributions fit the data is checked at 95% confidence with a goodness-of-fit (chi-square) test. The goodness-of-fit test reveals that all five distributions show a good fit to the sample population and are therefore accepted. However, there is considerable variation in the estimated frequency floods, so it is considered prudent to average the results of these five distributions for the required frequencies. The inundated area from past data is well matched using this design flood.
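As one concrete instance of the frequency analysis above, the extreme value type I (Gumbel) estimate of the T-year flood uses x_T = mean + K_T * s, with the frequency factor K_T computed from the return period. The peak-discharge sample below is hypothetical, not the Durgapur barrage series.

```python
import math

# Gumbel (extreme value type I) design-flood estimate, one of the five
# candidate distributions named above. The sample peaks are made up.
def gumbel_design_flood(peaks, T):
    n = len(peaks)
    mean = sum(peaks) / n
    std = math.sqrt(sum((q - mean) ** 2 for q in peaks) / (n - 1))
    # frequency factor for return period T
    K = -(math.sqrt(6) / math.pi) * (0.5772 + math.log(math.log(T / (T - 1))))
    return mean + K * std

peaks = [2100.0, 3500.0, 1800.0, 4200.0, 2900.0, 3100.0]   # m^3/s (hypothetical)
print(round(gumbel_design_flood(peaks, T=25)))
```

Repeating this for each candidate distribution and averaging the 10-, 15- and 25-year estimates is exactly the "average out the results" step the abstract recommends.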

Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management

Procedia PDF Downloads 200
7805 Reconnaissance Investigation of Thermal Springs in the Middle Benue Trough, Nigeria by Remote Sensing

Authors: N. Tochukwu, M. Mukhopadhyay, A. Mohamed

Abstract:

It is nothing new that Nigeria faces a continual power shortage problem due to the power demand of its vast population and its heavy reliance on nonrenewable forms of energy such as fossil-fuelled thermal power. Many researchers have recommended geothermal energy as an alternative; however, past studies focused on geophysical and geochemical investigations of this energy in the sedimentary basins and basement complex, and only a few incorporated remote sensing methods. Therefore, in this study, a preliminary examination of geothermal resources in the Middle Benue Trough was carried out using satellite imagery in ArcMap. A Landsat 8 scene (TIR, NIR and red spectral bands) was used to estimate the land surface temperature (LST). The maximum likelihood classification (MLC) technique was used to classify sites with very low, low, moderate, and high LST. The moderate and high classes are possible geothermal zones, and they occupy 49% of the study area (38,077 km²). River lines were superimposed on the LST layer, and the identify tool was used to locate high-temperature sites. Streams that overlap the selected sites were regarded as thermal springs. Surprisingly, the LST results show lower temperatures (<36°C) at the famous thermal springs (Awe and Wukari) than at some little-known rivers/streams in Kwande (38°C), Ussa (38°C), Gwer East (37°C), and Yola Cross and Ogoja (36°C). Studies have revealed that temperature increases with depth; this result therefore shows excellent geothermal resource potential, as the minimum geothermal gradient of 25.47 is expected to be exceeded with increasing depth. Further investigation is required to estimate the depth of the causative body, the geothermal gradients, and the sustainability of the reservoirs by geophysical and field exploration. This method has proven to be cost-effective in locating geothermal resources in the study area.
Consequently, the same procedure is recommended for other regions of the Precambrian basement complex and the sedimentary basins in Nigeria, to save preliminary field survey costs.
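LST estimation from the Landsat 8 TIR band starts from the standard DN-to-radiance and radiance-to-brightness-temperature conversions sketched below. The rescaling and thermal constants are the usual band-10 metadata values, the DN is illustrative, and a full LST workflow would further apply an NDVI-based emissivity correction.

```python
import math

# Standard Landsat 8 band-10 brightness-temperature step of an LST workflow
# (DN -> TOA radiance -> brightness temperature in degrees C). Constants are
# the usual band-10 metadata values; the DN is illustrative.
ML, AL = 3.342e-4, 0.1            # radiance rescaling factors (from MTL metadata)
K1, K2 = 774.8853, 1321.0789      # band-10 thermal conversion constants

def brightness_temp_c(dn):
    radiance = ML * dn + AL                       # top-of-atmosphere radiance
    return K2 / math.log(K1 / radiance + 1.0) - 273.15

print(round(brightness_temp_c(30000), 1))   # -> 30.5
```

Classifying the resulting temperature raster into the very low / low / moderate / high classes is then a single MLC (or even threshold) pass in ArcMap, which is why the approach is so cheap compared to a field survey.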

Keywords: ArcMap, geothermal resources, Landsat 8, LST, thermal springs, MLC

Procedia PDF Downloads 185
7804 Studying the Possibility to Weld AA1100 Aluminum Alloy by Friction Stir Spot Welding

Authors: Ahmad K. Jassim, Raheem Kh. Al-Subar

Abstract:

Friction stir welding is a modern and environmentally friendly solid-state joining process used to join relatively light families of materials. Recently, friction stir spot welding has been used instead of resistance spot welding, and it has received considerable attention from the automotive industry; it is an environmentally friendly process that eliminates heat and pollution. In this research, friction stir spot welding was used to study the possibility of welding AA1100 aluminium alloy sheet of 3 mm thickness by overlapping the sheet edges as a lap joint. The process was performed using a drilling machine instead of a milling machine. Different tool rotational speeds of 760, 1065, 1445, and 2000 RPM were applied with manual and automatic compression to study their effect on the quality of the welded joints. Heat generation, applied pressure, and depth of tool penetration were measured during the welding process. The results show that it is possible to weld AA1100 sheets; however, some surface defects occurred due to insufficient welding conditions. Moreover, the relationship between rotational speed, pressure, heat generation and tool penetration depth was established.

Keywords: friction, spot, stir, environmental, sustainable, AA1100 aluminum alloy

Procedia PDF Downloads 194
7803 Balance Transfer of Heavy Metals in Marine Environments Subject to Natural and Anthropogenic Inputs: A Case Study on the Mejerda River Delta

Authors: Mohamed Amine Helali, Walid Oueslati, Ayed Added

Abstract:

Sedimentation rates and total fluxes of heavy metals (Fe, Mn, Pb, Zn and Cu) were measured at three different depths (10 m, 20 m and 40 m) during March and August 2012, offshore of the Mejerda River outlet (Gulf of Tunis, Tunisia). The sedimentation rates, estimated from the fluxes of suspended particulate matter, are 7.32, 5.45 and 4.39 mm y⁻¹ at 10 m, 20 m and 40 m depth, respectively. Heavy metal sequestration in sediments was determined by chemical speciation and the total metal contents in each core collected from the 10, 20 and 40 m depths. Heavy metal intake to the sediment was also measured from the suspended particulate matter, while the fluxes from the sediment to the water column were determined using the benthic chamber technique and from the diffusive fluxes in the pore water. Results show that iron is the only metal for which the transfer balance between intake/uptake (45 to 117 / 1.8 to 5.8 g m⁻² y⁻¹) and sequestration (277 to 378 g m⁻² y⁻¹) is negative, in contrast to lead, whose intake fluxes (360 to 480 mg m⁻² y⁻¹) exceed its sequestration fluxes (50 to 92 mg m⁻² y⁻¹). The transfer balance is neutral for Mn, Zn, and Cu. These results clearly indicate that the contributions of the Mejerda have varied over time, probably due to the migration of the river mouth, changes in mining activity in the Mejerda catchment, and recent human activities affecting the delta area.

Keywords: delta, fluxes, heavy metals, sediments, sedimentation rates

Procedia PDF Downloads 201
7802 Analgesic Efficacy of IPACK Block in Primary Total Knee Arthroplasty (90 CASES)

Authors: Fedili Benamar, Beloulou Mohamed Lamine, Ouahes Hassane, Ghattas Samir

Abstract:

Background and aims: Peripheral regional anesthesia has been integrated into most analgesia protocols for total knee arthroplasty, which is considered among the most painful surgeries and carries a high potential for pain chronicization. The adductor canal block (ACB) has gained popularity. Similarly, the IPACK block has been described to provide analgesia of the posterior knee capsule. This study aimed to evaluate the analgesic efficacy of this block in patients undergoing primary total knee arthroplasty. Methods: 90 patients were randomized to receive either an IPACK block, an anterior sciatic block (BSA), or a sham block (30 patients per group, each with multimodal analgesia and a catheter in the adductor canal, KCA): Group 1, KCA; Group 2, KCA + BSA; Group 3, KCA + IPACK. The analgesic blocks were performed preoperatively under ultrasound guidance, respecting the safety rules; the dose administered was 20 mL of 0.25% ropivacaine. The primary endpoint was posterior knee pain 6 hours after surgery. Other endpoints included quality of recovery after surgery, pain scores, and opioid requirements (morphine PCA); analysis was performed with Epi Info 7.2. Results: The groups were matched. There was a predominance of women (4F/1M); the average age was 68 +/- 7 years; the average BMI was 31.75 +/- 4 kg/m²; 70% of patients were ASA 2 and 20% ASA 3; and the average duration of the intervention was 89 +/- 19 minutes. Morphine consumption (PCA) was significantly higher in group 1 (16 mg) and group 2 (8 mg) than in group 3 (4 mg). There was a correlation between the use of the IPACK block and reduced postoperative pain. Conclusions: In a multimodal analgesic protocol, the addition of the IPACK block decreased pain scores and morphine consumption.

Keywords: regional anesthesia, analgesia, total knee arthroplasty, the adductor canal block (acb), the ipack block, pain

Procedia PDF Downloads 72
7801 Modern Tragic Substance in O’Neill’s Desire under the Elms and Mourning Becomes Electra

Authors: Azza Taha Zaki

Abstract:

The position Eugene O’Neill occupies in the history of American drama is indisputable. Critics have agreed that the American theatre was waiting for O’Neill to give it substance, character, and value. The American dramatist continues to be considered a major influence on the body of dramatic repertoire across the globe. The American theatre before O’Neill knew playwrights who were mostly viewed as entertainers. Serious drama had to wait until O’Neill started his career with expressionistic and social drama. His breakthrough, however, came in 1925 when he published Desire Under the Elms, described as the first important tragedy to be written in America. Mourning Becomes Electra, published in 1931, further reinforced O’Neill’s reputation and was described as his 'magnum opus'. Aspiring to portray the essence of life and man’s innermost conflicts, O’Neill turned to the classical model, rather than to social realistic drama, to create modern tragedies with the aid of the then-new science of psychology. The present paper undertakes an in-depth study of how overtones from classical tragedies by the masters Aeschylus, Sophocles, and Euripides resonate through O’Neill’s two plays. The paper shows how leaning on classical themes and concepts, interpreted in terms of psychological forces, has added depth and tragic substance to a modern milieu and produced masterpieces of dramaturgy.

Keywords: classical, drama, O'Neill, modern, tragic

Procedia PDF Downloads 144
7800 Experimental and Semi-Analytical Investigation of Wave Interaction with Double Vertical Slotted Walls

Authors: H. Ahmed, A. Schlenkhoff, R. Rousta, R. Abdelaziz

Abstract:

Vertical slotted walls can be used as permeable breakwaters to provide economical and environmental protection from undesirable waves and currents inside a port. Permeable breakwaters provide partial protection and have been suggested to overcome the environmental disadvantages of fully protective breakwaters. For regular waves, a semi-analytical model based on an eigenfunction expansion method, which utilizes a boundary condition at the surface of each wall, is developed to estimate the energy dissipation through the slots. Extensive laboratory tests were carried out to validate the semi-analytical model. The physical model contains two walls, each consisting of an impermeable upper and lower part, where the draft is a decimal multiple of the total depth. The middle part is permeable with a porosity of 50%. The second barrier is located at a distance of 0.5, 1, 1.5 and 2 times the water depth from the first one. A comparison of the theoretical results with previous studies and with the experimental measurements of the present study shows good agreement, and the semi-analytical model is able to adequately reproduce most of the important features of the experiment.
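A useful check on the transmission and reflection coefficients listed in the keywords is the wave energy balance: assuming all wave energy is either reflected, transmitted, or dissipated, the energy dissipation coefficient follows directly from the other two. This is a standard relation for permeable breakwaters, not a result specific to this study:

```python
import math

def dissipation_coefficient(kr, kt):
    """Energy dissipation coefficient Kd from the balance Kr^2 + Kt^2 + Kd^2 = 1.

    kr: reflection coefficient, kt: transmission coefficient (both in [0, 1]).
    The max() guard protects against small measurement errors pushing the
    radicand slightly negative.
    """
    return math.sqrt(max(0.0, 1.0 - kr**2 - kt**2))
```

For example, a wall that reflects 50% and transmits 50% of the incident wave height dissipates about 71% of the wave energy flux by this measure.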

Keywords: permeable breakwater, double vertical slotted walls, semi-analytical model, transmission coefficient, reflection coefficient, energy dissipation coefficient

Procedia PDF Downloads 383
7799 Qualifying Aggregates Produced in Kano-Nigeria for Use in Superpave Design Method

Authors: Ahmad Idris, Bishir Kado, Murtala Umar, Armaya`u Suleiman Labo

Abstract:

Superpave is short for Superior Performing Asphalt Pavements and represents a basis for specifying component materials, asphalt mixture design and analysis, and pavement performance prediction. This technology is the result of long research projects conducted by the Strategic Highway Research Program (SHRP) of the Federal Highway Administration. This research examined the suitability of aggregates found in Kano for use in the Superpave design method. Aggregate samples were collected from different sources in Kano, Nigeria, and their engineering properties, as they relate to the Superpave design requirements, were determined. The average coarse aggregate angularity in Kano was found to be 87% for one fractured face and 86% for two or more fractured faces, against standards of 80% and 85%, respectively. The average fine aggregate angularity was found to be 47%, with a requirement of 45% minimum. Flat and elongated particles, found to be 10%, meet the maximum criterion of 10%. The sand equivalent was found to be 51%, with a criterion of 45% minimum. Strength tests were also carried out, and the results meet the requirements of the standards: the impact value, aggregate crushing value, and aggregate abrasion tests gave 27.5%, 26.7%, and 13%, respectively, against a maximum criterion of 30%. Specific gravity was found to have an average value of 2.52, against a criterion of 2.6 to 2.9, and water absorption was found to be 1.41%, against a maximum criterion of 0.6%. From the study, the test results indicate that the aggregate properties have largely met the requirements of the Superpave design method based on the specifications of ASTM D5821, ASTM D4791, AASHTO T176, AASHTO T33 and BS 815.
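The pass/fail comparison implicit in the abstract can be tabulated directly from the quoted figures; the sketch below simply re-checks each measured property against its stated criterion (all values are those given above, and the helper names are illustrative).

```python
# Measured Kano aggregate properties vs. the criteria quoted in the abstract.
# Each entry: (measured value, criterion, criterion type).
criteria = {
    "coarse angularity 1 face (%)":   (87.0, 80.0, "min"),
    "coarse angularity 2+ faces (%)": (86.0, 85.0, "min"),
    "fine aggregate angularity (%)":  (47.0, 45.0, "min"),
    "flat & elongated (%)":           (10.0, 10.0, "max"),
    "sand equivalent (%)":            (51.0, 45.0, "min"),
    "impact value (%)":               (27.5, 30.0, "max"),
    "crushing value (%)":             (26.7, 30.0, "max"),
    "abrasion (%)":                   (13.0, 30.0, "max"),
    "specific gravity":               (2.52, (2.6, 2.9), "range"),
    "water absorption (%)":           (1.41, 0.6, "max"),
}

def passes(measured, limit, kind):
    """Check one property against a minimum, maximum, or (low, high) range."""
    if kind == "min":
        return measured >= limit
    if kind == "max":
        return measured <= limit
    lo, hi = limit
    return lo <= measured <= hi

report = {name: passes(*spec) for name, spec in criteria.items()}
```

Tabulated this way, the angularity, shape, cleanliness, and strength criteria all pass, while specific gravity and water absorption fall outside the quoted limits, consistent with the figures reported above.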

Keywords: Superpave, aggregates, asphalt mix, Kano

Procedia PDF Downloads 390
7798 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that simulate real turbulence structures using empirical and experimental information. Monotonically integrated large eddy simulation (MILES) is an attractive approach to modelling turbulence in high-Reynolds-number flows, based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD libraries of OpenFOAM. The role of the flux limiter schemes, namely Gamma, superbee, van Albada and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the superbee scheme reproduces velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
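The classic limiter functions named above are simple algebraic functions of the ratio r of consecutive solution gradients; minimal sketches of three of them follow (the Gamma scheme is an OpenFOAM-specific blended scheme and is omitted). All three return 1 at r = 1, which is what preserves second-order accuracy in smooth regions, and superbee sitting on the upper boundary of the TVD region is what makes it the least diffusive of the group.

```python
def superbee(r):
    """Superbee limiter: the most compressive limiter on the TVD boundary."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def van_leer(r):
    """Van Leer limiter: smooth and symmetric, less compressive than superbee."""
    return (r + abs(r)) / (1.0 + abs(r))

def van_albada(r):
    """Van Albada limiter: smooth rational limiter, clipped for negative r."""
    return (r * r + r) / (r * r + 1.0) if r > 0 else 0.0
```

In a finite-volume scheme, the limiter value scales the anti-diffusive part of the face flux; r < 0 (a local extremum) drives all three limiters to zero, reverting to first-order upwinding there.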

Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics

Procedia PDF Downloads 189
7797 The Economic Impact Analysis of the Use of Probiotics and Prebiotics in Broiler Feed

Authors: Hanan Al-Khalaifah, Afaf Al-Nasser

Abstract:

Probiotics and prebiotics are claimed to serve as effective alternatives to antibiotics in poultry. This study investigates the economic impact of using different probiotics and prebiotics in broiler feed. The study involved four broiler cycles, two during winter and two during summer. In the first two cycles (summer and winter), different types of prebiotics and probiotics were used. The probiotics were Bacillus coagulans (1 g/kg dried culture) and Lactobacillus (1 g/kg dried culture of 12 commercial strains), and the prebiotics were fructo-oligosaccharides (FOS) (5 g/kg) and mannan-oligosaccharide (MOS) derived from Saccharomyces cerevisiae (5 g/kg). Based on the results obtained, the best treatment was found to be FOS, of which different ratios were used in the last two cycles during winter and summer. The levels of FOS chosen were 0.3, 0.5, and 0.7% of the diet. From an economic point of view, in all dietary treatments less feed was consumed in cycle 1 than in cycle 2, the total body weight gain was higher in cycle 1 than in cycle 2, and the average feed efficiency was lower in cycle 1 than in cycle 2. This indicates that the weather conditions in cycle 1 were more favourable. There were also only very small differences between the dietary treatments in each cycle. In cycle 1, the best total feed consumption was for the FOS treatment, while the highest total body weight gain and average feed efficiency were for B. coagulans. In cycle 2, all performance measures were better for the FOS treatment. FOS significantly reduced the Salmonella sp. counts in the intestine, where the environment was driven towards acidity, and FOS scored best in the average taste-panel study of the produced meat. Accordingly, the FOS prebiotic was chosen as the best treatment to be used in cycles 3 and 4. The economic impact analysis generally revealed no large differences between the treatments in any of the studied indicators, but there was a difference between the cycles.

Keywords: antibiotic, economic impact, prebiotic, probiotic, broiler

Procedia PDF Downloads 149
7796 A Review of Paleo-Depositional Environment and Thermal Alteration Index of Carboniferous, Permian and Triassic of A1-9 well, NW Libya

Authors: Mohamed Ali Alrabib

Abstract:

This paper reviews the palaeoenvironment and hydrocarbon shows of well A1-9. The hydrocarbon show identified in the interval from the Dembaba Formation to the Hassaona Formation was a poor to very poor oil show, and the palaeoenvironmental analysis indicates that neither a particularly good reservoir nor a source rock has developed in the area. Recent palaeoenvironmental work indicates that the sedimentary succession in this area comprises Upper Paleozoic rocks of the Carboniferous and Permian and the Mesozoic (Triassic) sedimentary sequences. No Early Paleozoic rocks have been found in this area; these rocks were eroded during Late Carboniferous and Early Permian time. During latest Permian and earliest Triassic time there is evidence for a major marine transgression. From depths of 5930-5940 feet to 10800-10810 feet, the thermal alteration index (TAI) of the Al Guidr, Bir Al Jaja, Al Uotia and Hebilia units varies between 3+ and 4- (mature to dry gas); this interval incorporates part of the Dembaba Formation. From depths of 10800-10810 feet to the total sediment depth (11944 feet log), which incorporates the rest of the Dembaba and the underlying equivalents of the Assedjefar and M'rar Formations and the underlying indeterminate unit (Hassouna Formation), the TAI varies between 4 and 5 (dry gas, black and deformed).

Keywords: palaeoenvironments, thermal alteration index, Carboniferous, Libya

Procedia PDF Downloads 438
7795 A Microwave Heating Model for Endothermic Reaction in the Cement Industry

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Microwave technology has been gaining importance in contributing to decarbonization processes in high-energy-demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue concerns impedance matching, which is an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a certain temperature that will prompt a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between the heat transfer phenomena and the limestone endothermic reaction. The 2D model was used to study and evaluate the required numerical procedure, and it also serves as a benchmark test allowing other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation were performed separately for the coupled model and the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation, as shown by comparing numerical results with experimental data. A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with a numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error diverges for the heat transfer interface with a mapped mesh. Results showed numerical stability for the triangular mesh, with a numerical uncertainty of less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency tended to stabilize for the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies up to 75% were accomplished. The 80% fill ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.

Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing

Procedia PDF Downloads 139
7794 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles

Authors: Yihua Wang, Yunru Lai

Abstract:

Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables; a collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship or curve over time. The independence assumption for the error term is commonly used in existing profile monitoring studies. However, in many applications, the profile data show correlations over time. Therefore, this study focuses on a general linear regression model with first-order autocorrelation between profiles. We propose an exponentially weighted moving average (EWMA) charting scheme to monitor this type of profile. A simulation study shows that our proposed methods outperform existing schemes based on the average run length criterion.
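As a sketch of the charting idea (for a single monitored parameter, not the authors' full multivariate profile scheme), an EWMA statistic exponentially smooths successive observations and signals when the statistic leaves time-varying control limits; λ and L below are illustrative design constants.

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA statistic and control limits for a sequence x of observations.

    mu0, sigma: in-control mean and standard deviation of the observations.
    lam: smoothing weight; L: control-limit width in sigma units.
    Returns (z, lcl, ucl); a signal is z falling outside (lcl, ucl).
    """
    z = np.empty(len(x))
    half_width = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1.0 - lam) * prev          # EWMA recursion
        z[i] = prev
        var = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * (i + 1)))
        half_width[i] = L * sigma * np.sqrt(var)      # exact time-varying limits
    return z, mu0 - half_width, mu0 + half_width
```

Because the EWMA carries memory of past observations, a small sustained shift in the monitored regression coefficient accumulates in z and crosses the limits sooner than it would on a Shewhart-type chart, which is the average-run-length advantage the study exploits.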

Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring

Procedia PDF Downloads 459
7793 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative for feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
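The core of the SW approach is a local alignment score between two sequences; a minimal scoring-only sketch of the dynamic program follows (the match/mismatch/gap weights are illustrative, and the authors' exact parameterization over token sequences may differ).

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b (strings or lists).

    Fills the DP matrix H where H[i][j] is the best score of an alignment
    ending at a[i-1], b[j-1]; the 0 in the max() is what makes it *local*.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Applied over word sequences rather than characters, such scores give a similarity measure between documents from which shared regions can be extracted as features.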

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 296
7792 The Study on Corpse Floating Time in Shanghai Region of China

Authors: Hang Meng, Wen-Bin Liu, Bi Xiao, Kai-Jun Ma, Jian-Hui Xie, Geng Fei, Tian-Ye Zhang, Lu-Yi Xu, Dong-Chuan Zhang

Abstract:

Victims in water are often found in coastal regions, along rivers, or in regions with lakes. In China, the examination of the bodies of victims found in water is conducted by forensic doctors working in the public security bureau. Because the time at which most victims entered the water is not clear, and surveillance images and other information are often lacking, determining the time a victim's corpse entered the water is very difficult. After the corpse of a victim enters the water, it sinks first; putrefaction gas is then produced, which can make the density of the corpse less than that of water and thus cause it to rise again. The factor that determines the corpse floating time is therefore temperature. On the basis of the temperature data obtained for the Shanghai region of China (Shanghai has a north subtropical marine monsoon climate, with an average annual temperature of about 17.1℃; the hottest month is July, with an average monthly temperature of 28.6℃, and the coldest month is January, with an average monthly temperature of 4.8℃), this study selected about 100 cases with definite corpse entry and floating times, analyzed the cases, and obtained an empirical law for the corpse floating time. For example, in the Shanghai region, on June 15th and October 15th, the corpse floating time is about 1.5 days. Bodies that entered the water in early December will surface around January 1st of the following year, and bodies that entered the water in late December will surface in March of the next year. The results of this study can be used to roughly estimate the time victims entered the water in Shanghai. Forensic doctors around the world can also draw on the results of this study to infer when the corpses of victims in the water will surface.

Keywords: corpse enter water time, corpse floating time, drowning, forensic pathology, victims in the water

Procedia PDF Downloads 196
7791 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor for its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented, and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross-section configurations and one multiple concentric annular cross-section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distribution inside the reactor is in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were obtained. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
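As a minimal sketch of the kind of head loss estimate being compared, the Darcy-Weisbach equation can be applied with the annulus hydraulic diameter D_h = D_o - D_i. The laminar and turbulent friction factor choices below are common textbook approximations (the exact laminar solution for an annulus differs from the circular-pipe value), not the specific equations evaluated in the study.

```python
import math

def annulus_head_loss(q, d_outer, d_inner, length, nu=1e-6, eps=1.5e-6, g=9.81):
    """Darcy-Weisbach friction head loss (m) in a concentric annulus.

    q: flow rate (m^3/s); d_outer, d_inner: annulus diameters (m);
    length: pipe length (m); nu: kinematic viscosity (m^2/s);
    eps: wall roughness (m).
    """
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)
    d_h = d_outer - d_inner                # hydraulic diameter of an annulus
    v = q / area                           # mean axial velocity
    re = v * d_h / nu
    if re < 2300.0:                        # laminar: circular-pipe estimate
        f = 64.0 / re
    else:                                  # turbulent: Swamee-Jain explicit formula
        f = 0.25 / math.log10(eps / (3.7 * d_h) + 5.74 / re**0.9) ** 2
    return f * (length / d_h) * v**2 / (2.0 * g)
```

Minor (accessory) losses would be added separately as k·v²/2g terms; the CFD comparison above shows where the uniform-velocity assumption behind this one-dimensional estimate breaks down.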

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 217
7790 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine to bore an over-length workpiece. To verify the usability of the modified tool, a design of experiments based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels each in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine setting for the best surface finish. The controlling factors with the most effect are, in order, depth of cut, spindle speed, length of workpiece, and feed rate. A confirmation test is performed to verify the optimal setting obtained from the Taguchi method, and the result is satisfactory.
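For a smaller-the-better response such as surface roughness, the Taguchi signal-to-noise ratio and the per-level means used to rank factor effects can be sketched as follows (helper names are illustrative):

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio (dB) for a smaller-the-better response, e.g. Ra in um."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def level_means(levels, sn_values):
    """Average S/N ratio per factor level; the level with the larger mean S/N
    is preferred, and the spread of these means ranks the factor's effect."""
    groups = {}
    for lv, s in zip(levels, sn_values):
        groups.setdefault(lv, []).append(s)
    return {lv: sum(v) / len(v) for lv, v in groups.items()}
```

With an L9 array, `levels` would be one factor's column of the array (nine entries over three levels) and `sn_values` the nine run S/N ratios; repeating this per factor gives the effect ranking the abstract reports.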

Keywords: design of experiment, Taguchi design, optimization, analysis of variance, machining parameters, horizontal boring tool

Procedia PDF Downloads 436
7789 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares these two approaches by means of geographical error analysis for the site Kankesanturai using WorldView-2 satellite imagery. The Levenberg-Marquardt method was used to calibrate the parameters of the non-linear inversion model, and multiple linear regression was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data in the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo sounder bathymetry data, and the geographical distribution of the model residuals was mapped. The spatial autocorrelation was calculated to compare the performance of the bathymetric models, and the results show the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of the model error and incorporating it into an improved regression model. The log-linear model (R² = 0.846) performs better than the non-linear model (R² = 0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models up to R² = 0.854 and R² = 0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges.
The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. Overall RMSE for log-linear and the non-linear inversion models were ±1.532 m and ±2.089 m, respectively.
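The log-linear calibration described above amounts to a multiple linear regression of reference depths on log-transformed, deep-water-corrected band reflectances (a Lyzenga-style formulation); a sketch under that assumption follows, where the band layout and scalar deep-water correction are illustrative.

```python
import numpy as np

def fit_log_linear(bands, deep_water, depths):
    """Calibrate a log-linear depth model z = a0 + sum_i a_i * ln(R_i - R_inf).

    bands: (n_points, n_bands) reflectances at the echo-sounding locations;
    deep_water: per-band (or scalar) deep-water reflectance R_inf;
    depths: reference depths from the echo sounder.
    """
    X = np.log(np.clip(bands - deep_water, 1e-6, None))   # guard against log(<=0)
    A = np.column_stack([np.ones(len(depths)), X])        # intercept + log terms
    coef, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return coef

def predict_depth(coef, bands, deep_water):
    """Apply the calibrated coefficients to new pixels."""
    X = np.log(np.clip(bands - deep_water, 1e-6, None))
    return np.column_stack([np.ones(len(bands)), X]) @ coef
```

The residuals between `predict_depth` output and the SBES depths are what get mapped and fed into the spatial error model, and the RMSE by depth range is computed from those same residuals.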

Keywords: log-linear model, multi spectral, residuals, spatial error model

Procedia PDF Downloads 295