Search results for: The Kernel density estimate.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1939

139 Blood Lymphocyte and Neutrophil Response of Cultured Rainbow Trout, Oncorhynchus mykiss, Administered Varying Dosages of an Oral Immunomodulator – ‘Fin-Immune™’

Authors: Duane Barker, John Holliday

Abstract:

In a 10-week (May – August, 2008) Phase I trial, 840 age-1+ rainbow trout, Oncorhynchus mykiss, received a commercial oral immunomodulator, Fin-Immune™, at four different dosages (0, 10, 20 and 30 mg g-1) to evaluate immune response and growth. The overall objective was to determine an optimal dosage of this product for rainbow trout that provides enhanced immunity with maximal growth and health. Biweekly blood samples were taken from 10 randomly selected fish in each tank (30 samples per treatment) to evaluate the duration of enhanced immunity conferred by Fin-Immune™. The immunological assessment included serum white blood cell (lymphocyte, neutrophil) densities and blood hematocrit (packed cell volume %). Of these three variables, only lymphocyte density increased significantly among trout fed Fin-Immune™ at 20 and 30 mg g-1, peaking at week 6. At week 7, all trout were switched to regular feed (lacking Fin-Immune™), and by week 10 lymphocyte densities had decreased in all treatment groups but were still greater than at week 0. There was growth impairment at the highest dose of Fin-Immune™ tested (30 mg g-1), which may be associated with a physiological compensatory mechanism due to a dose-specific threshold level. Thus, the main objective of this Phase I study was achieved: the 20 mg g-1 dose of Fin-Immune™ should be the most efficacious (of those tested) to use for a Phase II disease challenge trial.

Keywords: Blood Lymphocyte, Neutrophil Response of Cultured Rainbow Trout, Oncorhynchus mykiss, Oral Immunomodulator – 'Fin-Immune™'.

Downloads: 1516
138 The Mass Attenuation Coefficients, Effective Atomic Cross Sections, Effective Atomic Numbers and Electron Densities of Some Halides

Authors: Shivalinge Gowda

Abstract:

The total mass attenuation coefficients μ/ρ of some halides, such as NaCl, KCl, CuCl, NaBr, KBr, RbCl, AgCl, NaI, KI, AgBr, CsI, HgCl2, CdI2 and HgI2, were determined at photon energies of 279.2, 320.07, 514.0, 661.6, 1115.5, 1173.2 and 1332.5 keV in a well-collimated, narrow-beam, good-geometry set-up using a high-resolution, hyper-pure germanium detector. The mass attenuation coefficients and the effective atomic cross sections are found to be in good agreement with the XCOM values. From these mass attenuation coefficients, the effective atomic cross sections σa of the compounds were determined. These effective atomic cross section σa data were then used to compute the effective atomic numbers Zeff. For this, the interpolation of the total attenuation cross sections of photons of energy E in elements of atomic number Z was performed by logarithmic regression analysis of the data measured by the authors and reported earlier for the above energies, along with XCOM data for standard energies. The best-fit coefficients in the photon energy ranges of 250 to 350 keV, 350 to 500 keV, 500 to 700 keV, 700 to 1000 keV and 1000 to 1500 keV, obtained by a piecewise interpolation method, were then used to find the Zeff of the compounds with respect to the effective atomic cross section σa from the relation obtained by the piecewise interpolation method. Using these Zeff values, the electron densities Nel of the halides were also determined. The present Zeff and Nel values of the halides are found to be in good agreement with the values calculated from XCOM data and other available published values.
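
To make the interpolation step concrete, the sketch below estimates Zeff from an effective atomic cross section by a log-log polynomial fit over elemental data, in the spirit of the piecewise fits described above. All numerical values are placeholders, not the measured or XCOM data.

```python
import numpy as np

# Minimal sketch (illustrative values only): interpolate Z_eff from an effective
# atomic cross section sigma_a via a log-log polynomial fit of elemental data.
Z = np.array([11.0, 17.0, 19.0, 29.0, 35.0, 47.0, 53.0, 55.0, 80.0])   # atomic numbers
sigma_el = 0.08 * Z**1.35          # placeholder stand-in for XCOM cross sections (b/atom)

coeffs = np.polyfit(np.log(sigma_el), np.log(Z), deg=2)   # best-fit coefficients

def z_eff(sigma_a: float) -> float:
    """Return the interpolated effective atomic number for a measured sigma_a."""
    return float(np.exp(np.polyval(coeffs, np.log(sigma_a))))

sigma_a_compound = 25.0            # hypothetical measured effective cross section
print(f"Z_eff ≈ {z_eff(sigma_a_compound):.2f}")
# The electron density N_el then follows from Z_eff and the molar composition, as in the text.
```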

Keywords: Mass attenuation coefficient, atomic cross-section, effective atomic number, electron density.

Downloads: 2122
137 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Antonio Vitale, Nicola Genito, Giovanni Cuciniello, Ferdinando Montemari

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model, derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and to account for the differences between helicopter and tilt-rotor in flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
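
For orientation only, the sketch below shows time-domain identification in its simplest form: fitting a first-order discrete transfer function to synthetic input/output data by least squares. It is a toy stand-in, not the authors' scheduled multi-channel model; all signals and parameter values are assumed.

```python
import numpy as np

# Minimal sketch: identify y[k] = a*y[k-1] + b*u[k-1] from input/output data
# by least squares, as a toy version of time-domain system identification.
rng = np.random.default_rng(0)
a_true, b_true = 0.95, 0.08                     # assumed "true" parameters
u = rng.standard_normal(500)                    # synthetic autopilot reference input
y = np.zeros(500)
for k in range(1, 500):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])         # regressor matrix
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")
```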

Keywords: Flapping Dynamics, Flight Dynamics, System Identification, Tilt-Rotor Modeling and Simulation.

Downloads: 1286
136 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties

Authors: J. Samuel, S. Al-Enezi, A. Al-Banna

Abstract:

High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) was prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersing agent. The prepared samples were characterized by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix and a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased. The increase in the modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower complex viscosity and modulus than the corresponding HDPE/CNF compositions. Therefore, the reduction in melt viscosity resulting from the wetting effect of the polymer-ionic liquid combination is an additional benefit for polymer composite processing.
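
As a small illustration of quantifying the shear-thinning noted above, the sketch below fits a power-law model η*(ω) = K·ω^(n−1) to synthetic complex-viscosity data; n < 1 indicates shear thinning. The numbers are placeholders, not the measured rheology.

```python
import numpy as np

# Minimal sketch (synthetic data): fit eta*(omega) = K * omega**(n - 1).
rng = np.random.default_rng(5)
omega = np.logspace(-1, 2, 20)                                # rad/s, matching 0.1-100
eta_star = 5000.0 * omega**(0.35 - 1.0) * (1 + 0.05 * rng.standard_normal(20))

slope, intercept = np.polyfit(np.log(omega), np.log(eta_star), 1)
K, n = np.exp(intercept), slope + 1.0
print(f"K ≈ {K:.0f} Pa·s^n, n ≈ {n:.2f}  (n < 1 → shear-thinning)")
```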

Keywords: HDPE, carbon nanofiber, ionic liquid, complex viscosity, modulus.

Downloads: 756
135 The Effects of Applying Wash and Green-A Syrups as Substitution of Sugar on Dough and Cake Properties

Authors: Banafsheh Aghamohammadi, Masoud Honarvar, Babak Ghiassi Tarzi

Abstract:

The use of different ingredients has been considered in recent years to improve the quality and nutritional properties of cakes. The effects of applying some sweeteners instead of sugar have been evaluated in cakes and many bread formulas, but there has been no research on the use of sugar-factory by-products such as Wash and Green-A syrups in cake formulas. In this research, the effects of substituting 25%, 50%, 75% and 100% of the sugar with Wash and Green-A syrups on dough and cake properties, such as pH, viscosity, density, volume, weight loss, moisture, water activity, texture, staling, color and sensory evaluations, are studied. The results showed that the pH values were not significantly different among the cake batters, or among most of the cake samples. Differences in viscosity and specific gravity among the treatments were significant in some cases and insignificant in others; nevertheless, these two parameters resulted in a higher volume in all samples than in the blank. The differences in weight loss, moisture content and water activity of the samples were insignificant. Texture evaluation showed that the softness of most samples increased and staling decreased. Crumb color and sensory evaluations of the samples were also affected by the replacement of sucrose with Wash and Green-A syrups. According to the results, the shelf life, quality and nutritional value of cake can be improved by using these kinds of syrups in the formulation.

Keywords: Cake, green-A syrup, quality tests, sensory evaluation, wash syrup.

Downloads: 967
134 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon that is difficult to solve by analytical techniques, whereas CFD simulation is a practical tool for this type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion at the bottom of the pipeline.
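
For reference, a generic particle-erosion rate of the kind implemented in many CFD codes with a discrete phase model takes the form below; it is stated for orientation as a general expression and is not necessarily the exact erosion model or settings used in this study.

```latex
R_{\mathrm{erosion}} \;=\; \sum_{p=1}^{N_{\mathrm{particles}}}
\frac{\dot{m}_{p}\, C(d_{p})\, f(\alpha)\, v^{\,b(v)}}{A_{\mathrm{face}}}
```

Here ṁp is the particle mass flow rate, C(dp) a particle-diameter function, α the impact angle with f(α) its angle function, v the particle impact velocity, b(v) a velocity exponent, and A_face the area of the wall cell face.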

Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε Model.

Downloads: 1918
133 How to Win Passengers and Influence Motorists? Lessons Learned from a Comparative Study of Global Transit Systems

Authors: Oliver F. Shyr, Yu-Hsuan Hsiao, David E. Andersson

Abstract:

In response to global warming, city planners are pursuing actions to reduce carbon emissions. One approach is to promote the use of public transportation through transit-oriented development. For example, rapid transit systems have opened in Taipei and Kaohsiung. However, as of November 2008 the Kaohsiung MRT system carried an average of only 113,774 passengers per day, far fewer than expected. This raises the crucial questions: how does public transport compete with private transport, and more importantly, what factors would enhance the use of public transport? To answer these questions, our study first applied regression analysis to the factors attracting people to public transport in cities around the world. It is shown in our study that the number of MRT stations, city population, cost of living, transit fare, density, gasoline price, and whether the scooter is a major mode of transport are the major factors. Subsequently, our study identified successful and unsuccessful cities with regard to public transport usage, based on a diagnosis of the regression residuals. Finally, by comparing the transportation strategies adopted by the successful cities, we conclude that Kaohsiung City could apply strategies such as increasing parking fees, reducing parking spaces in the downtown area, and reducing transfer time by providing more bus services and public bikes to promote the use of public transport.
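
To make the residual-diagnosis step concrete, here is a minimal sketch with synthetic data and hypothetical variable names: ridership is regressed on city-level factors, and cities whose residuals are strongly positive are flagged as over-performers.

```python
import numpy as np

# Minimal sketch (synthetic data): regression of transit ridership on city factors,
# then ranking of cities by residual to flag over-/under-performers.
rng = np.random.default_rng(1)
n = 40
X = np.column_stack([
    np.ones(n),                       # intercept
    rng.integers(5, 120, n),          # number of MRT stations
    rng.uniform(0.5, 15, n),          # population (millions)
    rng.uniform(0.5, 3.0, n),         # transit fare index
    rng.uniform(0.8, 2.5, n),         # gasoline price index
])
beta_true = np.array([50, 8, 30, -40, 60])
ridership = X @ beta_true + rng.normal(0, 80, n)

beta_hat, *_ = np.linalg.lstsq(X, ridership, rcond=None)
residuals = ridership - X @ beta_hat
overperformers = np.argsort(residuals)[-5:]       # cities doing better than predicted
print("largest positive residuals (city indices):", overperformers)
```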

Keywords: Public Transit System, Comparative Study, Transport Demand Management, Regression

Downloads: 2091
132 Driver Readiness in Autonomous Vehicle Take-Overs

Authors: Abdurrahman Arslanyilmaz, Salman Al Matouq, Durmus V. Doner

Abstract:

Level 3 autonomous vehicles are able to take full responsibility for control of the vehicle unless a system boundary is reached or a system failure occurs, in which case the driver is expected to take over control of the vehicle. When this happens, the driver is often not aware of the traffic situation or is engaged in a secondary task. Factors shown to affect the duration and quality of take-overs in these situations include secondary task type and nature, traffic density, take-over request (TOR) time, and TOR warning type and modality. However, to the best of the authors' knowledge, no prior study has examined the time buffer for TORs when a system failure occurs immediately before an intersection. The first objective of this study is to investigate the effect of the time buffer (3 and 7 seconds) on the duration and quality of take-overs when a system failure occurs just prior to intersections. In addition, eye-tracking has become one of the most popular methods to report what individuals view, in what order, for how long, and how often, and it has been utilized in driving simulations with various objectives. However, to the best of the authors' knowledge, no study has compared drivers' eye gaze behavior under the two different time buffers in order to examine drivers' attention to and comprehension of salient information. The second objective is therefore to understand drivers' attentional focus on, and comprehension of, salient traffic-related information presented on different parts of the dashboard and on the road.

Keywords: Autonomous vehicles, driving simulation, eye gaze, attention, comprehension, take-over duration, take-over quality, time buffer.

Downloads: 886
131 Application Reliability Method for Concrete Dams

Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar

Abstract:

Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of works, including when calculating the stability of large structures against major risks in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering, in our case level 2 methods based on a limit-state study. Hence, the probability of failure is estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. For comparison, a level 3 method was also used, which provides a full analysis of the problem by integrating the joint probability density function of the random variables over the safety domain using the Monte Carlo simulation method. Taking into account the change in stress under the load combinations acting on the dam (normal, exceptional and extreme), the calculations provided acceptable failure probability values which largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; shear forces then induce sliding that threatens the reliability of the structure with intolerable values of the probability of failure, especially when uplift increases under a hypothetical failure of the drainage system.
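
As a complement to the level 3 approach mentioned above, the following is a minimal Monte Carlo sketch for a hypothetical limit state g = R − S (resistance minus load effect); it is illustrative only and not the paper's dam model.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch (hypothetical limit state, illustrative units): Monte Carlo
# estimate of a failure probability for g = R - S, with failure when g < 0.
rng = np.random.default_rng(42)
n = 1_000_000
R = rng.normal(loc=12.0, scale=1.5, size=n)   # resistance
S = rng.normal(loc=8.0, scale=2.0, size=n)    # load effect (e.g. shear demand)

pf = np.mean(R - S < 0.0)                     # estimated probability of failure
beta = -norm.ppf(pf)                          # corresponding reliability index
print(f"P_f ≈ {pf:.3e}, beta ≈ {beta:.2f}")
```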

Keywords: Dam, failure, limit-state, Monte Carlo simulation, reliability, probability, simulation, sliding, Taylor.

Downloads: 1225
130 Localization of Geospatial Events and Hoax Prediction in the UFO Database

Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi

Abstract:

Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and hence people all over the United States report such sightings online at the National UFO Reporting Center (NUFORC). Some of these reports are hoaxes. Among those that seem legitimate, our task is not to establish that these events are indeed related to flying objects from aliens in outer space; rather, we intend to identify whether a report is a hoax, as identified by the UFO database team with their existing curation criteria. The database provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and are also time-based. We look at cluster density and data visualization to search the space of various cluster realizations and decide on the most probable clusters, which provide information about the proximity of such activity. A random forest classifier is also presented to identify true events and hoax events, using the best features available, such as region, week, time period and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events; one of the UFO reports strongly correlates with a missile test conducted in the United States.
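
To illustrate the classification step, a minimal sketch with synthetic data and hypothetical feature encodings is given below; it mirrors a random forest on region, week, time period and duration as described above, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Minimal sketch (synthetic data): hoax vs. true-event classification.
rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.integers(0, 50, n),          # region (encoded)
    rng.integers(1, 53, n),          # week of year
    rng.integers(0, 4, n),           # time period of day (encoded)
    rng.exponential(10.0, n),        # reported duration (minutes)
])
y = (rng.random(n) < 0.2).astype(int)    # placeholder hoax labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```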

Keywords: Time-series clustering, feature extraction, hoax prediction, geospatial events.

Downloads: 851
129 Numerical Model of Low Cost Rubber Isolators for Masonry Housing in High Seismic Regions

Authors: Ahmad B. Habieb, Gabriele Milani, Tavio Tavio, Federico Milani

Abstract:

Housing in developing countries often has inadequate seismic protection, particularly masonry housing. People choose this type of structure since its cost and construction are relatively cheap. The seismic protection of masonry remains an interesting issue among researchers. In this study, we develop a low-cost seismic isolation system for masonry using fiber-reinforced elastomeric isolators. The proposed elastomer consists of a few layers of rubber pads and fiber laminae, making it lower in cost compared to conventional isolators. We present a finite element (FE) analysis to predict the behavior of the low-cost rubber isolators undergoing moderate deformations. The FE model of the elastomer involves a hyperelastic material property for the rubber pad. We adopt a Yeoh hyperelasticity model and estimate its coefficients from the available experimental data. Having characterized the shear behavior of the elastomers, we apply the isolation system to a small masonry house. To attach the isolators to the building, we model the shear behavior of the isolation system by means of a damped nonlinear spring model. With this approach, the FE analysis becomes computationally inexpensive. Several ground motion records are applied to observe the sensitivity of the system. Roof acceleration and tensile damage of the walls are the parameters used to evaluate the performance of the isolators. In this study, a concrete damage plasticity model is used to model masonry in the nonlinear range; this tool is available in the standard package of the Abaqus FE software. Finally, the results show that the proposed low-cost isolators are capable of reducing the roof acceleration and the damage level of masonry housing. Through this study, we are also able to monitor the shear deformation of the isolators during seismic motion, which is useful to determine whether the isolator is applicable. According to the results, the deformations of the isolators on the benchmark one-story building are relatively small.
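
For reference, the Yeoh model mentioned above uses the strain energy W = C10(I1−3) + C20(I1−3)² + C30(I1−3)³; the sketch below evaluates the resulting shear stress in simple shear with placeholder coefficients (the fitted values in the study will differ).

```python
import numpy as np

# Minimal sketch (illustrative coefficients): Yeoh strain energy for an
# incompressible rubber evaluated in simple shear, where I1 = 3 + gamma**2
# and the shear stress is tau = 2 * gamma * dW/dI1.
C10, C20, C30 = 0.40, -0.04, 0.01     # MPa, placeholder values

def shear_stress(gamma: np.ndarray) -> np.ndarray:
    """Shear stress (MPa) of a Yeoh solid under simple shear strain gamma."""
    I1m3 = gamma**2                   # I1 - 3 in simple shear
    dW_dI1 = C10 + 2.0 * C20 * I1m3 + 3.0 * C30 * I1m3**2
    return 2.0 * gamma * dW_dI1

gamma = np.linspace(0.0, 1.5, 7)
print(np.round(shear_stress(gamma), 3))
```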

Keywords: Masonry, low cost elastomeric isolator, finite element analysis, hyperelasticity, damped non-linear spring, concrete damage plasticity.

Downloads: 1185
128 Physicochemical Characterization of Medium Alkyd Resins Prepared with a Mixture of Linum usitatissimum L. and Plukenetia volubilis L. Oils

Authors: Antonella Hadzich, Santiago Flores

Abstract:

Alkyds have become essential raw materials in the coating and paint industry, due to their low cost, good application properties and lower environmental impact in comparison with petroleum-based polymers. The properties of these oil-modified materials depend on the type of polyunsaturated vegetable oil used for their manufacture, since a higher degree of unsaturation provides better crosslinking of the cured paint. Linum usitatissimum L. (flax) oil is widely used to develop alkyd resins due to its high degree of unsaturation. Although there is interest in finding non-traditional sources and increasing their commercial value, to the authors' best knowledge a natural source that can replace flaxseed oil has not yet been found. However, Plukenetia volubilis L. oil, of Peruvian origin, has a polyunsaturated fatty acid content similar to that reported for Linum usitatissimum L. oil. In this perspective, medium alkyd resins were prepared with a mixture of 50% Linum usitatissimum L. oil and 50% Plukenetia volubilis L. oil. Pure Linum usitatissimum L. oil was also used for comparison purposes. Three different resins were obtained by varying the amount of glycerol and pentaerythritol. The synthesized alkyd resins were characterized by FT-IR, and physicochemical properties such as acid value, colour, viscosity, density and drying time were evaluated by standard methods. The pencil hardness and chemical resistance of the cured resins were also studied. Overall, it can be concluded that medium alkyd resins containing Plukenetia volubilis L. oil behave equivalently to those prepared purely with Linum usitatissimum L. oil. Both Plukenetia volubilis L. oil and pentaerythritol have a remarkable influence on certain physicochemical properties of medium alkyd resins.

Keywords: Alkyd resins, flaxseed oil, pentaerythritol, Plukenetia volubilis L. oil, protective coating.

Downloads: 759
127 Chemical Characterization and Prebiotic Effect of Water-Soluble Polysaccharides from Zizyphus lotus Leaves

Authors: Zakaria Boual, Abdellah Kemassi, Toufik Chouana, Philippe Michaud, Mohammed Didi Ould El Hadj

Abstract:

In order to investigate the prebiotic potential of oligosaccharides prepared by chemical hydrolysis of water-soluble polysaccharides (WSP) from Zizyphus lotus leaves, the effect of the oligosaccharides on bacterial growth was studied. The chemical composition of the WSP, evaluated by colorimetric assays, revealed average values of 7.05±0.73% proteins and 86.21±0.74% carbohydrates, of which 64.81±0.42% are neutral sugars and 16.25±1.62% uronic acids. Monosaccharide characterization by high-performance anion exchange chromatography with pulsed amperometric detection (HPAEC-PAD) showed the WSP to be composed of galactose (23.95%), glucose (21.30%), rhamnose (20.28%), arabinose (9.55%), and glucuronic acid (22.95%). The effects of the oligosaccharides on the growth of lactic acid bacteria were compared with those of a fructooligosaccharide (RP95). The oligosaccharide concentration was 1 g/L of de Man, Rogosa and Sharpe (MRS) broth. Bacterial growth was assessed at 2, 4.5, 6.5, 9, 12, 16 and 24 h by measuring the optical density of the cultures at 600 nm (OD600) and the pH values. During fermentation, the pH of the broth cultures decreased from 6.7 to 5.87±0.15. The enumeration of lactic acid bacteria indicated that the oligosaccharides led to a significant increase in bacteria (P≤0.05) compared to the control. The fermentative metabolism appeared to be faster on RP95 than on the oligosaccharides from Zizyphus lotus leaves. Both RP95 and the oligosaccharides showed clear prebiotic effects, but differed in fermentation kinetics because of their different degrees of polymerization. This study shows the prebiotic effectiveness of the oligosaccharides and supports the selection of Zizyphus lotus leaves for use as functional food ingredients.

Keywords: Zizyphus lotus, polysaccharides, characterization, prebiotic effects.

Downloads: 2427
126 An In-depth Experimental Study of Wax Deposition in Pipelines

Authors: M. L. Arias, J. D’Adamo, M. N. Novosad, P. A. Raffo, H. P. Burbridge, G. O. Artana

Abstract:

Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in the crude oil, flow rates, pipe diameters and temperature. This paper describes the wax deposition study carried out within the framework of the YPF Tecnología S.A. (Y-TEC) flow assurance projects, as part of the process to achieve a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 meters long, equipped with a solid detector system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no added paraffin or additive content. Tests were undertaken with different temperatures of the circulating and cooling fluids at different flow conditions. Then, a solution formed by incorporating a paraffin into the diesel was considered, and tests varying the flow rate and cooling rate were run again. Viscosity, density, WAT (wax appearance temperature) with DSC (differential scanning calorimetry), pour point and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, the temperature and pressure output signals of the loop were studied and compared with static laboratory WAT methods.

Keywords: Paraffin deposition, wax, oil pipelines, experimental pipe loop.

Downloads: 161
125 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait

Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh

Abstract:

In order to meet the increasing demand for housing for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims to offer a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. The paper presents the transportation demand modeling results used to inform the planning of the multi-modal transportation system for Al Mutlaa City. It also presents the household survey data collection efforts undertaken using GPS devices (for the first time in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.
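
To make the role of trip generation rates and distribution coefficients concrete, here is a minimal sketch of a doubly-constrained gravity model for trip distribution with made-up zone totals and costs; it is illustrative and not the project's VISUM model.

```python
import numpy as np

# Minimal sketch (synthetic numbers): doubly-constrained gravity model,
# T_ij = A_i * O_i * B_j * D_j * f(c_ij), balanced by the Furness method.
productions = np.array([1200.0, 800.0, 500.0])         # trips produced per zone (O_i)
attractions = np.array([900.0, 700.0, 900.0])          # trips attracted per zone (D_j)
cost = np.array([[5.0, 12.0, 20.0],
                 [12.0, 4.0, 10.0],
                 [20.0, 10.0, 6.0]])                    # travel cost/time matrix
beta = 0.15                                             # distribution coefficient
f = np.exp(-beta * cost)                                # deterrence function

A, B = np.ones(3), np.ones(3)
for _ in range(50):                                     # iterative balancing
    A = 1.0 / (f @ (B * attractions))
    B = 1.0 / (f.T @ (A * productions))
T = (A * productions)[:, None] * (B * attractions)[None, :] * f
print(np.round(T, 1))                                   # rows ≈ productions, columns ≈ attractions
```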

Keywords: GPS based household surveys, transportation infrastructure, origin-destination trip matrices, traffic forecasts, transportation demand modeling, travel behavior patterns.

Downloads: 1706
124 Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method

Authors: Lina Wu, Jia Liu, Ye Li

Abstract:

The goal of this project is to investigate constant properties (the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for the p-energy functional have been applied in the calculus of variations as computation techniques. Stokes' Theorem, the Cauchy-Schwarz Inequality, Hardy-Sobolev type inequalities, and the Bochner formula have been used as estimation techniques to bound the derived p-harmonic stability inequality from below and above. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to lie in a closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in the analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. Therefore, the possibility of a non-constant p-energy minimizing map is ruled out and the constant property for a p-energy minimizing map is obtained. Our research establishes the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid in a certain range of p. This range of p is determined by the dimension values of the Euclidean space (the domain) and the ellipsoid (the target space), and is also bounded by the curvature values of the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our finding on an ellipsoid generalizes existing results on a sphere, and extends Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
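
For readers unfamiliar with the object under study, the textbook definitions are recalled below for orientation; the paper's precise admissible class is further constrained by the half-ellipsoid target.

```latex
% Standard p-energy functional and p-stability condition (stated for orientation):
E_p(u) \;=\; \frac{1}{p}\int_{\mathbb{R}^n} \lvert \nabla u \rvert^{p}\,dx,
\qquad p \ge 2,
\qquad\text{and } u \text{ is } p\text{-stable if }\;
\left.\frac{d^{2}}{dt^{2}}\,E_p(u_t)\right|_{t=0} \;\ge\; 0
\;\text{ for all admissible variations } u_t \text{ with } u_0 = u .
```

The Liouville-type question then asks whether such a minimizing (or stable) map must be constant.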

Keywords: Bochner Formula, Stokes’ Theorem, Cauchy-Schwarz Inequality, first and second variation formulas, Hardy-Sobolev type inequalities, Liouville-type problem, p-harmonic map.

Downloads: 914
123 Effect of Sodium Hydroxide Treatment on the Mechanical Properties of Crushed and Uncrushed Luffa cylindrica Fibre Reinforced rLDPE Composites

Authors: Paschal A. Ubi, Salawu Abdul Rahman Asipita

Abstract:

Sustainable and eco-friendly engineering materials are sought after in recent times, giving rise to the development of bio-composites. However, the natural fibre-to-matrix interface interaction remains a key issue in obtaining the desired mechanical properties from such composites. Treatment of natural fibres is essential in improving matrix-to-filler adhesion and hence the mechanical properties. In this study, investigations were carried out to determine the effect of sodium hydroxide treatment on the tensile, flexural, impact and hardness properties of crushed and uncrushed Luffa cylindrica fibre reinforced recycled low-density polyethylene composites. The LC (Luffa cylindrica) fibres were treated with 0%, 2%, 4%, 6%, 8% and 10% wt. sodium hydroxide (NaOH) concentrations for a period of 24 hours at room temperature. A formulation ratio of 80/20 g (matrix to reinforcement) was maintained for all developed samples. Analysis of the results showed that the uncrushed luffa fibre samples gave better mechanical properties than the crushed luffa fibre samples. The uncrushed luffa fibre composites had maximum tensile and flexural strengths of 7.65 MPa and 17.08 MPa, respectively, corresponding to a Young's modulus of 21.08 MPa and a flexural modulus of 232.22 MPa at the 8% and 4% wt. NaOH concentrations, respectively. The results also showed that the 8% NaOH treatment improved the mechanical properties of the LC fibre reinforced composites when compared with the other NaOH treatment concentrations.

Keywords: Flexural strength, LC fibres, LC/rLDPE composite, Tensile strength.

Downloads: 2609
122 Gate Tunnel Current Calculation for NMOSFET Based on Deep Sub-Micron Effects

Authors: Ashwani K. Rana, Narottam Chand, Vinod Kapoor

Abstract:

Aggressive scaling of MOS devices requires the use of ultra-thin gate oxides to keep short-channel effects reasonable and to take advantage of higher density, higher speed, lower cost, etc. Such thin oxides give rise to high electric fields, resulting in considerable gate tunneling current through the gate oxide in the nano regime. Consequently, accurate analysis of the gate tunneling current is very important, especially in the context of low-power applications. In this paper, a simple and efficient analytical model has been developed for the channel and source/drain overlap region gate tunneling current through the ultra-thin gate oxide of an n-channel MOSFET, including the inevitable deep sub-micron effects (DSME). The results obtained have been verified against simulated and reported experimental results for the purpose of validation. It is shown that the calculated tunnel current fits the measured one well over the entire oxide thickness range. The proposed model is suitable for use in circuit simulators due to its simplicity. It is observed that neglecting the deep sub-micron effects may lead to a large error in the calculated gate tunneling current. It is found that temperature has an almost negligible effect on the gate tunneling current, and that the gate tunneling current reduces with increasing gate oxide thickness. The impact of the source/drain overlap length on the gate tunneling current is also assessed.

Keywords: Gate tunneling current, analytical model, gate dielectrics, non uniform poly gate doping, MOSFET, fringing field effect and image charges.

Downloads: 1733
121 Limiting Fiber Extensibility as Parameter for Damage in Venous Wall

Authors: Lukas Horny, Rudolf Zitny, Hynek Chlup, Tomas Adamek, Michal Sara

Abstract:

An inflation-extension test with a human vena cava inferior was performed with the aim of fitting a material model. The vein was modeled as a thick-walled tube loaded by internal pressure and axial force. The material was assumed to be an incompressible hyperelastic fiber-reinforced continuum. The fibers are assumed to be arranged in two families of anti-symmetric helices, so the considered anisotropy corresponds to local orthotropy. The strain energy density function used was based on the concept of limiting fiber extensibility. The pressurization comprised four pre-cycles under physiological venous loading (0 - 4 kPa) and four cycles under non-physiological loading (0 - 21 kPa). Each overloading cycle was performed with a different value of axial weight. The overloading data were used in a regression analysis to fit the material model. The considered model did not fit the experimental data well; in particular, the predictions of the axial force failed. It was hypothesized that, due to the non-physiological loading pressures and the different values of axial weight, the material was not preconditioned enough and some damage occurred inside the wall. The limiting fiber extensibility parameter Jm was assumed to be related to the supposed damage. Each of the overloading cycles was therefore fitted separately with a different value of Jm, while the other parameters were held the same. This approach turned out to be successful: a variable value of Jm can describe the changes in the axial force-axial stretch response and satisfy the pressure-radius dependence simultaneously.
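
As background on the limiting-extensibility concept, a Gent-type fiber term of the general form below is one common choice in the literature (an assumption for illustration; the paper's exact strain energy function may differ). I4 denotes the squared fiber stretch and Jm the limiting fiber extensibility parameter.

```latex
W_{\mathrm{fib}} \;=\; -\,\frac{\mu\, J_m}{2}\,
\ln\!\left[\,1 - \frac{\left(I_4 - 1\right)^{2}}{J_m}\,\right]
```

The term stiffens sharply as (I4 − 1)² approaches Jm, so refitting Jm cycle by cycle, as done above, shifts the strain at which the fibers lock and can thereby track a progressively damaged, more extensible fiber network.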

Keywords: Constitutive model, damage, fiber reinforced composite, limiting fiber extensibility, preconditioning, vena cava inferior.

Downloads: 1472
120 Spatial Structure and Spatial Impacts of the Jakarta Metropolitan Area: A Southeast Asian EMR Perspective

Authors: Ikhwan Hakim, Bruno Parolin

Abstract:

This paper investigates the spatial structure of employment in the Jakarta Metropolitan Area (JMA), with reference to the concept of the Southeast Asian extended metropolitan region (EMR). A combination of factor analysis and local Getis-Ord (Gi*) hot-spot analysis is used to identify clusters of employment in the region, including those of the urban and agriculture sectors. Spatial statistical analysis is further used to probe the spatial association of the identified employment clusters with their surroundings on several dimensions, including the spatial association of the central business district (CBD) in Jakarta city with employment density in the region, the spatial impacts of urban expansion on population growth, and the degree of urban-rural interaction. The degree of spatial interaction for the whole JMA is measured by the patterns of commuting trips destined for the various employment clusters. Results reveal the strong role of the urban core of Jakarta, and the regional CBD, as the centre for mixed job sectors such as retail, wholesale, services and finance. Manufacturing and local government services, on the other hand, form corridors radiating out of the urban core, reaching the agriculture zones in the fringes. Strong associations between the urban expansion corridors and population growth, and the urban-rural mix, are revealed particularly in the eastern and western parts of the JMA. Metropolitan-wide commuting patterns are focussed on the urban core of Jakarta and the CBD, while relatively local commuting patterns are shown to be prevalent for the employment corridors.
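
For concreteness, the sketch below computes the local Getis-Ord Gi* statistic on synthetic zone data using the standard Ord-Getis formulation with simple distance-band weights; the study's actual weights, data and software will differ.

```python
import numpy as np

# Minimal sketch (synthetic data): local Getis-Ord Gi* for hot-spot detection
# of employment density, with binary distance-band weights including self (w_ii = 1).
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(50, 2))          # zone centroids
x = rng.gamma(2.0, 50.0, size=50)                  # e.g. jobs per km^2
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = (d <= 2.0).astype(float)                       # contiguity within a 2-unit band

n, xbar = len(x), x.mean()
S = np.sqrt((x**2).mean() - xbar**2)
Wi = W.sum(axis=1)
num = W @ x - xbar * Wi
den = S * np.sqrt((n * (W**2).sum(axis=1) - Wi**2) / (n - 1))
gi_star = num / den                                # approximately z-scores
print("hot spots (|Gi*| > 1.96):", np.where(gi_star > 1.96)[0])
```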

Keywords: Jakarta Metropolitan Area, Southeast Asian EMR, spatial association, spatial statistics, spatial structure.

Downloads: 2594
119 Metallurgy of Friction Welding of Porous Stainless Steel-Solid Iron Billets

Authors: S. D. El Wakil

Abstract:

The research work reported here was aimed at investigating the feasibility of joining high-porosity stainless steel discs and wrought iron bars by friction welding. The sound friction-welded joints were then subjected to a metallurgical investigation and an analysis of failure resulting from tensile loading. Discs having a 50 mm diameter and 10 mm thickness were produced by loose sintering of stainless steel powder at a temperature of 1350 °C in an argon atmosphere for one hour. Minor machining was then carried out to control the dimensions of the discs, and the density of each disc could then be determined. The level of porosity was calculated and was found to be about 40% in all of the discs. Solid wrought iron bars were also machined to facilitate tensile testing of the joints produced by friction welding. Using our previously gained experience, the porous stainless steel discs and the wrought iron bars were successfully friction welded. SEM was employed to examine the fracture surface after a tensile test of the joint in order to determine the type of failure. It revealed that the failure did not occur in the joint, but rather in the porous metal in the area adjacent to the joint. The load-carrying capacity was therefore determined by the strength of the porous metal and not by that of the welded joint. Macroscopic and microscopic metallographic examinations were also performed and showed that the welded joint involved a dense heat-affected zone where the porous metal underwent densification at elevated temperature, explaining and supporting the findings of the SEM study.

Keywords: Fracture of friction-welded joints, metallurgy of friction welding, solid-porous structures, strength of joint.

Downloads: 1160
118 Influence of Overfeeding on Productive Performance Traits, Foie Gras Production, Blood Parameters, Internal Organs, Carcass Traits, and Mortality Rate in Two Breeds of Ducks

Authors: El-Sayed, Mona, Y., U. E. Mahrous

Abstract:

A total of 60 male mule ducks and 60 male Muscovy ducks were allotted into three groups (n = 20) to estimate the effects of overfeeding (two and four meals) versus ad libitum feeding on productive performance traits, foie gras production, internal organs, and blood parameters.

The results show that force-feeding four meals significantly increased (P < 0.01) body weight, weight gain, and gain percentage compared to force-feeding two meals. Both force-feeding regimes (two or four meals) induced significantly higher body weight, weight gain, gain percentage, and absolute carcass weight than ad libitum feeding; however, carcass percentage was significantly higher in ad libitum feeding. Mule ducks had significantly higher weight gain and weight gain percentages than Muscovy ducks.

Feed consumption per kilogram of foie gras and per kilogram weight gain was lower for the four-meal than for the two-meal forced feeding regime. Force-feeding four meals induced significantly higher liver weight and percentage (488.96 ± 25.78g, 7.82 ± 0.40%) than force-feeding two meals (381.98 ± 13.60g, 6.42 ± 0.21%). Moreover, feed conversion was significantly higher under forced feeding than under ad libitum feeding (77.65 ± 3.41g, 1.72 ± 0.05%; P < 0.01).

Forced feeding (two or four meals) increased all organ weights (intestine, proventriculus, heart, spleen, and pancreas) over ad libitum feeding weights, except for the gizzard; however intestinal and abdominal fat values were higher for four-meal forced feeding than for two-meal forced feeding.

Overfeeding did not change blood parameters significantly compared to ad libitum feeding; however, four-meal forced feeding improved the quality of foie gras since it significantly increased the percentage of grade A foie gras (62.5%) at the expense of grades B (33.33%) and C (4.17%) compared with the two-meal forced feeding.

The mortality percentage among Muscovy ducks during the forced feeding period was 22.5%, compared to 0% in mule ducks. Liver weight was highly significantly correlated with live weight after overfeeding and with certain blood plasma traits.

Keywords: Foie gras, overfeeding, ducks, productive performance.

Downloads: 2555
117 Development of Electrospun Membranes with Defined Collagen and Polyethylene Oxide Architectures Reinforced with Medium and High Intensity Statins

Authors: S. Jaramillo, Y. Montoya, W. Agudelo, J. Bustamante

Abstract:

Cardiovascular diseases (CVD) are disorders of the heart and blood vessels. They include pathologies such as coronary or peripheral artery disease, caused by narrowing of the vessel (atherosclerosis), which is related to the accumulation of low-density lipoproteins (LDL) in the arterial walls and leads to a progressive reduction of the vessel lumen and alterations in blood perfusion. Currently, the main therapeutic strategy for this type of alteration is drug treatment with statins, which inhibit the enzyme 3-hydroxy-3-methyl-glutaryl-CoA reductase (HMG-CoA reductase), responsible for modulating the rate of production of cholesterol and other isoprenoids in the mevalonate pathway. Inhibition of this enzyme induces the expression of LDL receptors in the liver, increasing their number on the surface of liver cells and reducing the plasma concentration of cholesterol. On the other hand, when a blood vessel presents stenosis, a surgical procedure with vascular implants is indicated; these are used to restore circulation in the arterial or venous bed. Among the materials used for the development of vascular implants are Dacron® and Teflon®, which re-establish a sealed circulatory circuit but, due to their low biocompatibility, do not have the ability to promote remodeling and tissue regeneration processes. Based on this, the present research proposes the development of a hydrolyzed collagen and polyethylene oxide electrospun membrane reinforced with medium and high-intensity statins, so that in future research its microarchitecture can favor tissue remodeling processes.

Keywords: atherosclerosis, medium and high-intensity statins, microarchitecture, electrospun membrane

Downloads: 646
116 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing

Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill

Abstract:

In order to survive on the market, companies must constantly develop improved and new products designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company's internal research and development department. However, a new approach has been emerging for some years now, involving external knowledge in the innovation process. This approach is called open innovation and identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use techniques from Natural Language Processing, e.g. Named Entity Recognition, specific dictionaries, a Triple Tagger and a Part-of-Speech Tagger. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs.
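
As a small illustration of the graph-theoretical author metrics mentioned above, the sketch below builds a hypothetical author-interaction graph and reports the density of each author's ego network; the names, edges and choice of NetworkX are assumptions, not the paper's implementation.

```python
import networkx as nx

# Minimal sketch (hypothetical data): score post authors by the density of
# their interaction neighborhood, one possible user-centered metric.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # replies/mentions
    ("carol", "dave"), ("dave", "erin"),
])

for author in G.nodes:
    ego = nx.ego_graph(G, author)          # the author plus direct contacts
    print(author, "ego density:", round(nx.density(ego), 2), "degree:", G.degree(author))
```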

Keywords: Idea ontology, innovation management, open innovation, semantic search.

Downloads: 784
115 Municipal Solid Waste Management Using Life Cycle Assessment Approach: Case Study of Maku City, Iran

Authors: L. Heidari, M. Jalili Ghazizade

Abstract:

This paper aims to determine the best environmental and economic scenario for municipal solid waste (MSW) management in the city of Maku using a life cycle assessment (LCA) approach. The functional elements of this study are the collection, transportation, and disposal of MSW in Maku city. Waste composition and density, two key parameters of MSW, were determined by field sampling, and other important specifications of the MSW, such as chemical formula, thermal energy and water content, were then calculated. These data, together with other information related to collection and disposal facilities, are used as a reliable data source to assess the environmental impacts of different waste management options, including landfilling, composting, recycling and energy recovery. The environmental impact of the MSW management options was investigated for 15 different scenarios using the Integrated Waste Management (IWM) software. The photochemical smog, greenhouse gases, acid gases, toxic emissions, and energy consumption of each scenario were measured, and the environmental indices of each scenario were then specified by weighting these parameters. The economic costs of the scenarios were also compared with each other based on the literature. As a final result, since organic materials make up more than 80% of the waste, composting is a suitable method. Although a major part of the remaining 20% of the waste can be recycled, the landfill option is suggested for it due to the high cost of the necessary equipment. Therefore, the scenario with 80% composting and 20% landfilling is selected as the superior environmental and economic scenario. This study shows that, to select a scenario with practical applications, the environmental and economic aspects of the different scenarios must be considered simultaneously.

Keywords: IWM software, life cycle assessment, Maku, municipal solid waste management.

Downloads: 1318
114 Evaluation of Electro-Flocculation for Biomass Production of Marine Microalgae Phaeodactylum tricornutum

Authors: Luciana C. Ramos, Leandro J. Sousa, Antônio Ferreira da Silva, Valéria Gomes Oliveira Falcão, Suzana T. Cunha Lima

Abstract:

The commercial production of biodiesel using microalgae demands a high energy input for harvesting biomass, making production economically unfeasible. Methods currently used involve mechanical, chemical, and biological procedures. In this work, a flocculation system is presented as a cost- and energy-effective process to increase the biomass production of Phaeodactylum tricornutum. This diatom, the only species of its genus, presents fast growth and a lipid accumulation ability that are of great interest for biofuel production. The algae, selected from the Bank of Microalgae, Institute of Biology, Federal University of Bahia (Brazil), were grown in a tubular reactor with a photoperiod of 12 h (light/dark), an irradiance of about 35 μmol photons m-2s-1, and a temperature of 22 °C. The medium used for growing the cells was Conway medium, with the addition of silica. The growth curve was followed by cell counts in a Neubauer chamber and by optical density in a spectrophotometer at 680 nm. Harvesting occurred at the end of the stationary phase of growth, 21 days after inoculation, using two methods: centrifugation at 5000 rpm for 5 min, and electro-flocculation at 19 EPD and 95 W. After harvesting, the cells were frozen at -20 °C and subsequently lyophilized. The biomass obtained by electro-flocculation was approximately four times greater than that achieved by centrifugation. The benefits of this method are that no addition of chemical flocculants is necessary and that similar cultivation conditions can be used for biodiesel production and for pharmacological purposes. The results may contribute to improving the costs of biodiesel production using marine microalgae.

Keywords: Biomass, diatom, flocculation, microalgae.

Downloads: 1365
113 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems, and the difficulty of finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test the performance and usefulness of the technique, it was used to simulate three small catchments in the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results with the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in the predictions can be obtained, whereas MIKE URBAN provides only a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
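
To show the shape of the ABC step described above, here is a minimal rejection-sampling sketch for a single loss parameter of a toy runoff model (written in Python for illustration, although the paper's framework is implemented in R; all values are synthetic).

```python
import numpy as np

# Minimal sketch (toy model, not the paper's time-area model): ABC rejection
# sampling — draw from the prior, run the simulator, keep draws whose simulated
# runoff is close to the observations.
rng = np.random.default_rng(11)
rain = rng.gamma(2.0, 2.0, size=100)                 # synthetic rainfall series

def simulate(loss: float) -> np.ndarray:
    """Toy runoff model: effective rainfall after subtracting a loss."""
    return np.clip(rain - loss, 0.0, None)

observed = simulate(1.2) + rng.normal(0, 0.1, 100)   # pseudo-observations

accepted = []
for _ in range(20_000):
    loss = rng.uniform(0.0, 5.0)                     # prior on the loss parameter
    dist = np.sqrt(np.mean((simulate(loss) - observed) ** 2))   # summary distance
    if dist < 0.15:                                  # tolerance epsilon
        accepted.append(loss)

post = np.array(accepted)                            # approximate posterior sample
print(len(post), post.mean(), np.percentile(post, [2.5, 97.5]))
```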

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

Downloads: 1740
112 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic

Abstract:

A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a current density (CD) of ~60 mA/cm2, a faradaic efficiency (FE) of ~98% and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn based nanoparticle catalyst (on the cathode surface) at different mole fractions and of the 1-ethyl-3-methyl imidazolium tetra-fluoroborate ([EMIM][BF4]) electrolyte on the CD, FE and CO2 conversion to formic acid are studied. The reaction is carried out at a constant electrolyte concentration (85% v/v [EMIM][BF4]). Based on the mass transfer characteristics analysis (concentration contours), the 0.5:0.5 mole ratio Bi-Sn catalyst displays the highest CO2 mole consumption in the cathode gas channel. After validation against experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE and CO2 conversion. Increasing the negative cathode potential increases the current densities for both formic acid and H2 formation. However, H2 formation is minimal as a result of the insufficient hydrogen ions in the ionic liquid electrolyte; moreover, the limited hydrogen ions have a negative effect on the formic acid CD. As the CO2 flow rate increases, the CD, FE and CO2 conversion increase.

Keywords: Carbon dioxide, electro-chemical reduction, microfluidics, ionic liquids, modelling.

Downloads: 1097
111 A New Method for Extracting Ocean Wave Energy Utilizing the Wave Shoaling Phenomenon

Authors: Shafiq R. Qureshi, Syed Noman Danish, Muhammad Saeed Khalid

Abstract:

Fossil fuels are the major source for meeting the world's energy requirements, but their rapidly diminishing reserves and adverse effects on our ecological system are of major concern. Renewable energy utilization is needed to meet future challenges, and ocean energy is one of these promising resources. Three-fourths of the earth's surface is covered by the oceans. This enormous energy resource is contained in the oceans' waters, the air above the oceans, and the land beneath them. The renewable energy of the ocean is mainly contained in waves, ocean currents and offshore solar energy. Relatively few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy needs detailed knowledge of the underlying mathematical governing equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict the wave climatology in lab simulation. Several techniques have been developed, mostly stemming from numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and tools to understand and analyze the wave climatology. Models of the 1st, 2nd and 3rd generations have been developed to estimate wave characteristics and assess the power potential. A brief overview of available wave energy technologies is also given. A novel concept for an on-shore wave energy extraction method is presented at the end. The concept is based upon total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. The squeezing action of the external pressure on the converter body results in increased velocities at the discharge section. The high velocity head can then be used for energy storage or directly for power generation. This converter utilizes both the potential and the kinetic energy of the waves and is designed for on-shore or near-shore application. The increased wave height at the shore due to shoaling effects increases the potential energy of the waves, which is converted to renewable energy. This approach will result in an economical wave energy converter due to near-shore installation and denser waves due to shoaling, and the method will be more efficient because it taps both the potential and the kinetic energy of the waves.
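
To quantify the shoaling effect invoked above, the sketch below applies the textbook shallow-water result (Green's law) for wave height growth with decreasing depth, together with the standard wave energy density E = ρgH²/8; it is a generic estimate, not the authors' converter model.

```python
import numpy as np  # kept for consistency with the other sketches

# Minimal sketch (textbook shoaling estimate): Green's law, H2 = H1 * (h1/h2)**0.25,
# so wave height and the wave energy per unit area grow as the depth decreases.
rho, g = 1025.0, 9.81          # sea water density (kg/m^3), gravity (m/s^2)
H1, h1 = 1.0, 20.0             # offshore wave height and depth (m)

for h2 in (10.0, 5.0, 2.0):    # shallower depths approaching the shore
    H2 = H1 * (h1 / h2) ** 0.25
    E2 = rho * g * H2**2 / 8.0
    print(f"depth {h2:4.1f} m: H ≈ {H2:.2f} m, energy density ≈ {E2:,.0f} J/m^2")
```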

Keywords: Energy Utilizing, Wave Shoaling Phenomenon

Downloads: 2669
110 Adaptive WiFi Fingerprinting for Location Approximation

Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan

Abstract:

WiFi has become an essential technology that is widely used nowadays, and it is popular due to its convenience for use with mobile devices. This is especially true for Internet users worldwide who use WiFi connections. Many location-based services available nowadays use Wireless Fidelity (WiFi) signal fingerprinting; a common example that is gaining popularity would be Foursquare. In this work, the WiFi signal is used to estimate the user or client's location. Similar to GPS, the fingerprinting method needs a floor plan to increase the accuracy of the location estimation. Still, the inconsistency of the WiFi signal makes the estimation differ at different time intervals. Therefore, an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Due to these factors, this work uses a method of reducing the signal noise and estimating the location using the Nearest Neighbour approach based on the past activity of the signal, increasing the estimation accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching. The repository acts as the server-side support for the client-side application's decision. Numerous previous works have adopted methods of collecting signal strengths in a repository over the years, but most were static. In this work, the proposed solutions for how the adaptive method matches the received signal to the data in the repository are highlighted. With the said approach, location estimation can be done more accurately. Adaptive updating allows the latest location fingerprint to be stored in the repository; furthermore, any redundant location fingerprints are removed and only the updated version of the fingerprint is stored. How the location of the user can be estimated is described further in the proposed solution section. After studies of previous works, it was found that the Artificial Neural Network is the most feasible method to deploy for updating the repository and making it adaptive. The function of the Artificial Neural Network is to match the pattern of the WiFi signal to the existing data available in the repository.
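
To show the core of RSSI fingerprinting, here is a minimal nearest-neighbour sketch with a made-up three-access-point radio map; the locations and values are illustrative, and the adaptive repository and ANN matching described above would sit on top of this.

```python
import numpy as np

# Minimal sketch (made-up RSSI values): nearest-neighbour WiFi fingerprinting.
# Each row of the radio map is the RSSI (dBm) from three access points at a
# known location; a live scan is matched by Euclidean distance in signal space.
radio_map = np.array([
    [-45.0, -70.0, -80.0],   # fingerprint recorded at location A
    [-60.0, -50.0, -75.0],   # location B
    [-80.0, -65.0, -48.0],   # location C
])
locations = ["room A", "room B", "room C"]

def locate(scan: np.ndarray) -> str:
    """Return the label of the stored fingerprint closest to a live scan."""
    dists = np.linalg.norm(radio_map - scan, axis=1)
    return locations[int(np.argmin(dists))]

print(locate(np.array([-47.0, -72.0, -79.0])))    # expected: room A
# An adaptive repository would append or replace fingerprints as fresh scans
# arrive, and an ANN could be trained on the same radio map for pattern matching.
```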

Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.

Downloads: 3459