Search results for: noise estimation
2012 Wind Resource Estimation and Economic Analysis for Rakiraki, Fiji
Authors: Kaushal Kishore
Abstract:
Immense amounts of imported fuel are used in Fiji for electricity generation, transportation, and miscellaneous household work. To alleviate its dependency on fossil fuels, paramount importance has been given to promoting the utilization of renewable energy sources for power generation and to reducing environmental degradation. Amongst the many renewable energy sources, wind is considered one of the most promising, as it is widely available in Fiji. In this study, a wind resource assessment has been carried out for three locations in Rakiraki, Fiji: Rokavukavu, Navolau, and Tuvavatu. The average wind speeds at 55 m above ground level (a.g.l.) at the Rokavukavu, Navolau, and Tuvavatu sites are 5.91 m/s, 8.94 m/s, and 8.13 m/s, with turbulence intensities of 14.9%, 17.1%, and 11.7%, respectively. The moment fitting method has been used to estimate the Weibull parameters and the power density at each site. A high-resolution wind resource map for the three locations has been developed using the Wind Atlas Analysis and Application Program (WAsP). The results obtained from WAsP exhibited good wind potential at the Navolau and Tuvavatu sites. Wind farms comprising six Vergnet 275 kW wind turbines each have been proposed at the Navolau and Tuvavatu sites. The annual energy production (AEP) for each wind farm is estimated, and an economic analysis is performed. The economic analysis for the proposed wind farms at Navolau and Tuvavatu showed payback periods of 5 and 6 years, respectively.
Keywords: annual energy production, Rakiraki Fiji, turbulence intensity, Weibull parameter, wind speed, Wind Atlas Analysis and Application Program
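The moment-fitting step described above can be illustrated in a few lines of code. The following is a minimal sketch on synthetic wind speeds, not the authors' implementation: it matches the sample mean and coefficient of variation to the Weibull shape k and scale c, then derives the mean power density; the air density value is an assumption.

```python
# Illustrative Weibull moment fitting for a wind speed record.
# All figures below (synthetic record, air density) are hypothetical.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_weibull_moments(v):
    """Estimate Weibull shape k and scale c from sample moments."""
    mean, std = np.mean(v), np.std(v)
    cv2 = (std / mean) ** 2
    # Solve CV^2 = gamma(1 + 2/k) / gamma(1 + 1/k)^2 - 1 for k
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.5, 10.0)
    c = mean / gamma(1 + 1 / k)
    return k, c

rng = np.random.default_rng(0)
v = rng.weibull(2.0, 10_000) * 10.0    # synthetic wind speed record, m/s
k, c = fit_weibull_moments(v)
rho = 1.225                            # air density, kg/m^3 (assumed)
power_density = 0.5 * rho * c**3 * gamma(1 + 3 / k)   # W/m^2
print(f"k={k:.2f}, c={c:.2f} m/s, P={power_density:.0f} W/m^2")
```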
Procedia PDF Downloads 189
2011 3D Interactions in Under Water Acoustic Simulations
Authors: Prabu Duplex
Abstract:
Due to stringent emission regulation targets, the large-scale transition to renewable energy sources is a global challenge, and wind power plays a significant role in the solution. This scenario has led to the construction of offshore wind farms, and several wind farms are planned in the shallow waters where marine habitats exist. This raises concerns over the impacts of underwater noise on marine species, for example from bridge construction in ocean straits. Environmental organisations warn that such construction could be devastating to aquatic life, since ocean straits are important transit routes for marine mammals, and some of the highest concentrations of biodiversity in the world are found in these areas. The investigation of ship noise and piling noise that may occur during bridge construction and operation is therefore vital. Once the source levels are known, the receiver levels can be modelled. With this objective, this work investigates the key requirements of software that can model transmission loss at the high frequencies that may occur during the construction or operation phases. Most propagation models are 2D solutions, calculating the propagation loss along a transect, which does not include horizontal refraction, reflection, or diffraction. In many cases, such models provide sufficient accuracy and can produce three-dimensional maps by combining, through interpolation, several two-dimensional (distance and depth) transects. However, in some instances the use of 2D models may not be sufficient to accurately model the sound propagation. A possible example is a scenario where an island or land mass is situated between the source and receiver. The 2D model will produce a shadow behind the land mass where the modelled transects intersect it, whereas in reality diffraction will bend the sound around the land mass. In such cases, it may be necessary to use a 3D model, which accounts for horizontal diffraction, to accurately represent the sound field. Other scenarios where 2D models may not provide sufficient accuracy are environments characterised by a strongly up-sloping or down-sloping seabed, such as propagation around continental shelves. In line with these objectives, this work addresses the importance of 3D interactions in underwater acoustics by means of a case study. The methodology used here can also be applied to other 3D underwater sound propagation studies. This work assumes special significance given the increasing interest in using underwater acoustic modeling for environmental impact assessments. Future work includes inter-model comparison in shallow water environments considering more of the physical processes known to influence sound propagation, such as scattering from the sea surface. Passive acoustic monitoring of the underwater soundscape with distributed hydrophone arrays is also suggested to investigate the 3D propagation effects discussed in this article.
Keywords: underwater acoustics, naval, maritime, cetaceans
Procedia PDF Downloads 19
2010 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Mitigation of deleterious sediment effects as a distributed or non-point pollution source in catchments therefore requires reliable provenance information. Sediment tracing, or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and refined by many researchers worldwide and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups: Monte Carlo simulation, Bayesian approaches, and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to estimate sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
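The GLUE step can be sketched compactly. The example below is illustrative only: the source tracer signatures, the mixture sample, and the informal likelihood measure are invented, whereas the real study used four sets of statistical tests and virtual sediment mixtures for validation.

```python
# Minimal GLUE-style unmixing sketch on synthetic tracer data.
import numpy as np

rng = np.random.default_rng(42)
# Mean tracer signatures (rows: west, central, east; cols: tracers)
sources = np.array([[12.0, 3.1, 45.0],
                    [ 8.5, 5.4, 30.0],
                    [20.0, 1.2, 60.0]])
true_p = np.array([0.2, 0.12, 0.68])
mixture = true_p @ sources                # synthetic "observed" sediment

n = 100_000
p = rng.dirichlet(np.ones(3), size=n)     # candidate source proportions
pred = p @ sources
rmse = np.sqrt(((pred - mixture) ** 2).mean(axis=1))
L = 1.0 / rmse                            # informal likelihood measure
behavioral = L >= np.quantile(L, 0.99)    # retain best 1% as "behavioral"
pb, Lb = p[behavioral], L[behavioral]
w = Lb / Lb.sum()
est = (pb * w[:, None]).sum(axis=0)
print("posterior mean contributions:", np.round(est, 3))
```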
Procedia PDF Downloads 74
2009 Passive Seismic in Hydrogeological Prospecting: The Case Study from Hard Rock and Alluvium Plain
Authors: Prarabdh Tiwari, M. Vidya Sagar, K. Bhima Raju, Joy Choudhury, Subash Chandra, E. Nagaiah, Shakeel Ahmed
Abstract:
Passive seismic, a wavefield interferometric imaging technique and a low-cost, rapid tool for subsurface investigation, is used for various geotechnical purposes such as hydrocarbon exploration, seismic microzonation, etc. With recent advancements, its application has also been extended to groundwater exploration by means of finding the bedrock depth. The Council of Scientific & Industrial Research (CSIR)-National Geophysical Research Institute (NGRI) has carried out passive seismic studies along with electrical resistivity tomography (ERT) for groundwater in hard rock (Choutuppal, Hyderabad). Passive seismic combined with ERT can give a clearer 2-D subsurface image for groundwater exploration in hard rock areas. Passive seismic data were collected using a Tromino, a three-component broadband seismometer, to measure background ambient noise, and processed using GRILLA software. The passive seismic results are found to corroborate the ERT results. For data acquisition, the Tromino was deployed at 30 locations, with a 20-minute recording at each station. These locations show strong resonance frequency peaks, suggesting good impedance contrast between different subsurface layers (e.g., mica-rich laminated layer, weathered layer, granite). This paper presents the passive seismic signature of hard rock terrain. It has been found that passive seismic has potential application for formation characterization and can be used as an alternative tool for delineating litho-stratification in urban conditions where electrical and electromagnetic tools cannot be applied due to high cultural noise. In addition, its use in combination with electrical and electromagnetic methods can improve the interpreted subsurface model.
Keywords: passive seismic, resonant frequency, Tromino, GRILLA
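The resonance-frequency pick from a Tromino-style recording is typically obtained from the horizontal-to-vertical spectral ratio (HVSR). The sketch below assumes that standard workflow on a synthetic three-component record; the sampling rate, window length, and frequency band are all assumptions, and GRILLA performs a more careful windowed, smoothed estimate.

```python
# Hedged HVSR sketch: estimate the resonance frequency f0 from a
# synthetic three-component ambient noise record.
import numpy as np
from scipy.signal import welch

fs = 128.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 1200, 1 / fs)              # 20-minute record
rng = np.random.default_rng(1)
# Fake ambient noise with a 2 Hz horizontal amplification
e = rng.standard_normal(t.size) + 3 * np.sin(2 * np.pi * 2.0 * t)
n = rng.standard_normal(t.size) + 3 * np.sin(2 * np.pi * 2.0 * t + 0.7)
z = rng.standard_normal(t.size)

f, Pe = welch(e, fs, nperseg=4096)
_, Pn = welch(n, fs, nperseg=4096)
_, Pz = welch(z, fs, nperseg=4096)
hv = np.sqrt((Pe + Pn) / (2 * Pz))          # H/V amplitude ratio
band = (f > 0.5) & (f < 20)                 # plausible band (assumed)
f0 = f[band][np.argmax(hv[band])]
print(f"resonance frequency f0 ~ {f0:.2f} Hz")
```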
Procedia PDF Downloads 188
2008 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction
Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour
Abstract:
In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), designed and installed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters such as weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the bit damage must be evaluated and recorded very carefully to guide engineers in selecting bits for further planned wells. Drilling with a worn bit may cause severe damage to the bit, leading to cutter or cone losses at the bottom of the hole, where a fishing job will then have to take place; all of this increases the operating cost. The main way to reduce the cost of the drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for predicting the rate of penetration from drilling parameters, mostly based on empirical approaches. One of the most commonly used is the Bourgoyne and Young model, in which the rate of penetration can be estimated from the drilling parameters as well as a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool GPE (version 4.2.0). Real data collected from similar formations (12 ¼’ sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the equations and applied to nearby wells in the same field to predict bit wear.
Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration
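The paper estimates the coefficients with gPROMS GPE; as a stand-in, the sketch below fits the eight Bourgoyne and Young coefficients by ordinary least squares on the log-linear form ln(ROP) = a1 + Σ aj·xj, using synthetic drilling records in place of the Libyan field data.

```python
# Least-squares estimation of the eight Bourgoyne & Young coefficients
# from synthetic drilling records (a stand-in for gPROMS GPE).
import numpy as np

rng = np.random.default_rng(7)
m = 500                                   # number of drilling records
X = rng.uniform(0.0, 1.0, size=(m, 7))    # x2..x8: normalized drilling terms
a_true = np.array([1.2, 0.4, -0.3, 0.8, 0.2, -0.5, 0.3, 0.1])
rop = np.exp(a_true[0] + X @ a_true[1:]) * rng.lognormal(0, 0.05, m)

A = np.column_stack([np.ones(m), X])      # design matrix with intercept a1
coeffs, *_ = np.linalg.lstsq(A, np.log(rop), rcond=None)
print("estimated a1..a8:", np.round(coeffs, 3))
```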
Procedia PDF Downloads 154
2007 Modeling and System Identification of a Variable Excited Linear Direct Drive
Authors: Heiko Weiß, Andreas Meister, Christoph Ament, Nils Dreifke
Abstract:
Linear actuators are deployed in a wide range of applications. This paper presents the modeling and system identification of a variable excited linear direct drive (LDD). The LDD is designed based on linear hybrid stepper technology, exhibiting the characteristic tooth structure of mover and stator. A three-phase topology provides the thrust force caused by alternating strengthening and weakening of the flux in the legs. To achieve the best possible synchronous operation, the phases are commutated sinusoidally. Although these LDDs provide high dynamics and drive forces, noise emission limits their operation in calm workspaces. To overcome this drawback, an additional excitation of the magnetic circuit is introduced to the LDD using enabling coils instead of permanent magnets. This new degree of freedom can be used to reduce force variations and the related noise by varying the excitation flux that is usually generated by permanent magnets. Hence, an identified simulation model is necessary to analyze the effects of this modification. In particular, the force variations must be modeled well in order to reduce them sufficiently. The model can be divided into three parts: the current dynamics, the mechanics, and the force functions. These subsystems are described with differential equations or nonlinear analytic functions, respectively. Ordinary nonlinear differential equations are derived and transformed into state space representation. Experiments have been carried out on a test rig to identify the system parameters of the complete model. Static and dynamic simulation-based optimizations are utilized for identification. The results are verified in the time and frequency domains. Finally, the identified model provides a basis for the later design of control strategies to reduce the existing force variations.
Keywords: force variations, linear direct drive, modeling and system identification, variable excitation flux
Procedia PDF Downloads 370
2006 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment
Authors: R. Sharma, S. Kumar, C. Sharma
Abstract:
A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds has been carried out in the primary and secondary clarified effluents from the paper mill by GC-MS. Twenty-six chlorophenolic compounds have been identified and estimated in the paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil waste water. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, has been tried here to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer, and stainless steel (316 L) electrodes. The operating conditions were optimized, and the treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of chlorophenolic compounds are removed during 2 h of electrochemical treatment from primary and secondary clarified effluent, respectively. Further, there are reductions of 65.1%, 60%, and 92.6% in COD, AOX, and color, respectively, for primary clarified effluent, and of 83.8%, 75.9%, and 96.8% in COD, AOX, and color, respectively, for secondary clarified effluent. EC treatment has also been found to significantly increase the biodegradability index of the wastewater because of the conversion of the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX, and other recalcitrant organic matter present in paper mill waste water.
Keywords: chlorophenolics, effluent, electrochemical treatment, wastewater
Procedia PDF Downloads 387
2005 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria
Authors: R. Santos Alimi
Abstract:
Using time-series techniques, this study empirically tested the validity of the existing theory, which stipulates a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. The study employed three estimation equations. First, for the size of government, two measures were considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component) as a variant measure of economic growth, besides real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigeria country-level data for the period 1970 to 2012. Estimation results show that the inverted U-shaped curve exists for both measures of government size, with estimated optimum shares of 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP. Our analysis shows that the actual share of government spending on average (2000-2012) is about 13.4%. This study adds to the literature confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
Keywords: public expenditure, economic growth, optimum level, fully modified OLS
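The inverted-U test reduces to a quadratic regression of growth on government size, with the optimum at the turning point g* = -b1/(2·b2). The sketch below uses synthetic data purely to show the mechanics; the paper's actual estimates (fully modified OLS) place the optimum near 19.81%, 10.98%, and 12.58% depending on the measure used.

```python
# Sketch of the inverted-U test: regress growth on government size and its
# square, then locate the turning point. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
g = rng.uniform(5, 30, 43)                     # government size, % of GDP
growth = 2 + 0.5 * g - 0.02 * g**2 + rng.normal(0, 0.3, g.size)

b2, b1, b0 = np.polyfit(g, growth, 2)          # quadratic fit
g_star = -b1 / (2 * b2)                        # turning point
print(f"estimated optimum government size: {g_star:.1f}% of GDP")
```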
Procedia PDF Downloads 420
2004 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm
Authors: El Harraj Abdeslam, Raissouni Naoufal
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and, mainly, focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: handling illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes
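A rough OpenCV rendering of this pipeline is sketched below. It is an approximation of the paper's method: OpenCV's MOG2 subtractor stands in for the custom per-channel K=5 GMM, and the video file name is hypothetical.

```python
# Illustrative pipeline: CLAHE for illumination invariance, a GMM
# background model, then morphological cleanup of the foreground mask.
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("scene.avi")        # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eq = clahe.apply(gray)                 # mitigate illumination changes
    mask = bg.apply(eq)                    # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_DILATE, kernel)  # fill objects
cap.release()
```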
Procedia PDF Downloads 256
2003 Effective Planning of Public Transportation Systems: A Decision Support Application
Authors: Ferdi Sönmez, Nihal Yorulmaz
Abstract:
Sound decision-making on the planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, reasonably short overall travel times should be provided. In this way, other benefits such as lower traffic congestion, improved road safety, and lower noise and atmospheric pollution may be gained. The congestion that comes with the increasing demand for public transportation is becoming part of our lives and making residents' lives difficult. Hence, regulations should be made to reduce this congestion. To provide constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. This study aims to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, which is one of the biggest and oldest cities in the world. The required information was gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using the most realistic values available. The cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines, and 110 stops are used. The user cost of each station and the operator cost incurred on the lines are calculated. Components such as cost, security, and noise pollution are considered significant factors affecting the solution of the set covering problem, which is used to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author. Model results are presented with the intent of providing decision support to specialists on locating stops effectively.
Keywords: operator cost, bi-level optimization model, user cost, urban transportation
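The set covering step can be illustrated with a simple greedy heuristic (the study itself derives costs from a bi-level optimization model, which this sketch omits). The demand points and candidate stop coverage sets below are invented.

```python
# Minimal greedy sketch: choose the fewest candidate stops whose service
# areas jointly cover all demand points.
def greedy_set_cover(universe, candidates):
    """candidates: dict mapping stop id -> set of demand points it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the stop covering the most still-uncovered demand points
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            raise ValueError("remaining demand points cannot be covered")
        chosen.append(best)
        uncovered -= gain
    return chosen

demand = range(1, 11)
stops = {"A": {1, 2, 3, 4}, "B": {3, 4, 5, 6}, "C": {6, 7, 8},
         "D": {8, 9, 10}, "E": {1, 5, 9}}
print(greedy_set_cover(demand, stops))   # prints one feasible cover
```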
Procedia PDF Downloads 246
2002 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils
Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha
Abstract:
Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type to achieve better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey predictions were also compatible across the full range of soil water contents, from low to high, in the samples evaluated in this study.
Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering
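A minimal sketch of fitting the Brooks and Corey SWCC to suction/water-content pairs is shown below; the laboratory-style data points, initial guess, and parameter bounds are synthetic assumptions.

```python
# Fit the Brooks-Corey SWCC, theta(psi), to suction/water-content data.
import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(psi, theta_r, theta_s, psi_b, lam):
    # Effective saturation Se = (psi_b / psi)^lambda above the air-entry value
    se = np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)
    return theta_r + (theta_s - theta_r) * se

psi = np.array([1, 2, 5, 10, 20, 50, 100, 300], dtype=float)   # suction, kPa
theta = np.array([0.38, 0.37, 0.30, 0.22, 0.15, 0.09, 0.07, 0.05])

p0 = [0.03, 0.38, 3.0, 0.7]                   # initial guess (assumed)
popt, _ = curve_fit(brooks_corey, psi, theta, p0=p0,
                    bounds=([0, 0.2, 0.1, 0.1], [0.1, 0.5, 20, 3]))
theta_r, theta_s, psi_b, lam = popt
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, "
      f"psi_b={psi_b:.2f} kPa, lambda={lam:.2f}")
```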
Procedia PDF Downloads 338
2001 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate the parameters necessary for the implementation of the artificial intelligence component and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably asks the user to self-assess them. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage, while transaction-level data are scant. Once a user’s transaction-level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings relate to the number of exercises necessary to reach mastery of a knowledge component, the associated implications for learning curves, and the relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
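For reference, the standard BKT update that these parameters feed is compact enough to show in full. The sketch below uses illustrative parameter values, not estimates from the study; a self-assessed prior could simply replace p_init.

```python
# Standard BKT update, showing where the four parameters live:
# p_init (L0), p_transit (T), p_guess (G), p_slip (S).
def bkt_update(p_mastery, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
    """Posterior mastery after one observed response, then a learning step."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    return posterior + (1 - posterior) * p_transit

p = 0.3                               # p_init, possibly from self-assessment
for obs in [1, 1, 0, 1, 1]:           # a student's response sequence
    p = bkt_update(p, obs)
    print(f"p(mastery) = {p:.3f}")
```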
Procedia PDF Downloads 172
2000 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning
Authors: T. Bryan, V. Kepuska, I. Kostnaic
Abstract:
A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to offer higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by identifying segments of the audio data that are locally coherent with the Gabor or gammatone seed atoms, found by matching pursuit. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best denoising basis vectors.
Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit
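The matching pursuit step that locates locally coherent segments can be sketched as follows; the Gabor-style dictionary and the test signal are synthetic toys, not the paper's seed atoms.

```python
# Minimal matching pursuit over a Gabor-style dictionary.
import numpy as np

def gabor_atom(n, freq, width, center):
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

n = 256
dictionary = np.array([gabor_atom(n, f, w, c)
                       for f in (0.05, 0.1, 0.2)
                       for w in (8, 16, 32)
                       for c in range(0, n, 32)])

rng = np.random.default_rng(5)
signal = 2 * gabor_atom(n, 0.1, 16, 96) + 0.1 * rng.standard_normal(n)

residual, atoms = signal.copy(), []
for _ in range(5):                        # 5 matching pursuit iterations
    corr = dictionary @ residual          # inner products with all atoms
    k = np.argmax(np.abs(corr))
    atoms.append((k, corr[k]))
    residual -= corr[k] * dictionary[k]   # subtract best-matching atom
print("residual energy:", float(residual @ residual))
```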
Procedia PDF Downloads 253
1999 Fatigue Life Prediction under Variable Loading Based on a Non-Linear Energy Model
Authors: Aid Abdelkrim
Abstract:
A method of fatigue damage accumulation based on the energy parameters of the fatigue process is proposed in this paper. The model is simple to use: it has no free parameters to be determined and requires only knowledge of the W–N curve (W: strain energy density; N: number of cycles at failure) determined from the experimental Wöhler curve. To examine the performance of the nonlinear models proposed for estimating fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy has been studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, especially for the case where random loading is considered. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the ‘damaged stress interaction damage rule’ proposed here allows a better fatigue damage prediction than the widely used Palmgren–Miner rule, and that a formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner’s rule). The comparison shows that the proposed model presents a good estimation of the experimental results, with a smaller error than Miner’s model.
Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading
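The sequence effect the model captures can be illustrated by contrasting linear Miner summation with a generic nonlinear accumulation rule under a two-level load. The power-law interaction rule below is a common textbook form chosen for illustration only, not the paper's energy-based formulation, and the cycle counts are invented.

```python
# Linear Miner summation vs. a generic sequence-sensitive damage rule.
import numpy as np

def miner(cycles, N_f):
    """Linear damage: failure when sum(n_i / N_i) reaches 1."""
    return np.sum(np.asarray(cycles) / np.asarray(N_f))

def nonlinear(cycles, N_f, q=1.6):
    """Power-law interaction rule: damage depends on load order."""
    d = 0.0
    for n, N in zip(cycles, N_f):
        n_eq = N * d ** (1 / q)        # equivalent prior cycles at this level
        d = ((n_eq + n) / N) ** q
    return d

high_low = ([2_000, 50_000], [10_000, 200_000])   # high level first
low_high = ([50_000, 2_000], [200_000, 10_000])   # low level first
print("Miner, high-low    :", round(miner(*high_low), 3))
print("Miner, low-high    :", round(miner(*low_high), 3))   # same as above
print("Nonlinear, high-low:", round(nonlinear(*high_low), 3))
print("Nonlinear, low-high:", round(nonlinear(*low_high), 3))  # differs
```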
Procedia PDF Downloads 396
1998 Sorting Fish by Hu Moments
Authors: J. M. Hernández-Ontiveros, E. E. García-Guerrero, E. Inzunza-González, O. R. López-Bonilla
Abstract:
This paper presents the implementation of an algorithm that identifies and counts different fish species: catfish, sea bream, sawfish, tilapia, and totoaba. The main contribution of the method is the fusion of the position, rotation, and scale invariance of the Hu moments with proper counting of the fish. Identification and counting are performed on images under different noise conditions. The experimental results indicate the potential of the proposed algorithm for application in different scenarios of aquaculture production.
Keywords: counting fish, digital image processing, invariant moments, pattern recognition
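The invariant-feature step is nearly a one-liner in OpenCV. The sketch below computes the seven Hu moments of a segmented silhouette, with a log transform since the raw values span many orders of magnitude; the input file name is hypothetical.

```python
# Compute the seven Hu moments of a segmented fish silhouette.
import cv2
import numpy as np

img = cv2.imread("fish_silhouette.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

m = cv2.moments(binary)
hu = cv2.HuMoments(m).flatten()
# Log scale for comparability; the sign is preserved
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(hu_log)   # 7 values invariant to position, rotation, and scale
```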
Procedia PDF Downloads 409
1997 Foreign Direct Investment on Economic Growth by Industries in Central and Eastern European Countries
Authors: Shorena Pharjiani
Abstract:
The present empirical paper investigates the relationship between FDI and economic growth in 10 selected industries in 10 Central and Eastern European countries for the period 1995 to 2012. Different estimation approaches were used to explore the connection between FDI and economic growth, for example OLS, RE, and FE with and without time dummies. The empirical results obtained lead to some main conclusions. First, the Central and Eastern European countries (CEEC) attracted foreign direct investment, which raised the productivity of the industries it entered. It should be concluded that the linkage between FDI and output growth by industry is positive and significant enough to suggest that foreign firms’ participation enhanced the productivity of the industries they occupied. There was an endogeneity problem in the regression, and a fixed effects estimation approach was used, which partially corrected the analysis and made the results less biased. Second, it should be stressed that the results show that time plays an important role in making FDI operational for enhancing output growth by industry via total factor productivity. Third, R&D positively affected economic growth, although it takes some time for research and development to influence growth. Fourth, the general trends masked crucial differences at the country level: over the last 20 years, the analysis of the tables and figures at the country level shows that the main recipients of FDI among the 11 Central and Eastern European countries were Hungary, Poland, and the Czech Republic. The main reason was that these countries had more open-door policies for attracting FDI. Fifth, according to the graphical analysis, while Hungary had the highest FDI inflow in this region, this was not reflected in GDP growth as much as in other Central and Eastern European countries.
Keywords: central and East European countries (CEEC), economic growth, FDI, panel data
Procedia PDF Downloads 237
1996 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have long been facing a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. With economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation from such methods is to develop models that minimize the human error strongly associated with manual prediction. Such models will help authorities understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail, in order to minimize maintenance cost and time and to keep vehicles safe. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, some errors are possible, and sometimes these errors make the data unusable for prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks: accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is given before estimating rail track degradation for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, prediction modeling, rail track degradation
Procedia PDF Downloads 336
1995 Brain-Computer Interfaces That Use Electroencephalography
Authors: Arda Ozkurt, Ozlem Bozkurt
Abstract:
Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential components of non-invasive measurement methods. Despite its low spatial resolution (it detects when a group of neurons fires at the same time rather than resolving individual neurons), it is a non-invasive method, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. EEG recordings include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because EEG is a non-invasive method, there are sources of noise that may affect the reliability of the signals. For instance, noise from the EEG equipment and the leads, and signals originating from the subject (such as heart activity or muscle movements), affect the signals detected by the electrodes. New techniques have been developed to differentiate between such signals and the intended ones. Furthermore, an EEG device alone is not enough to analyze the data from the brain for use in a BCI application. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices. Even though invasive BCIs are needed for neurological conditions requiring highly precise data, non-invasive BCIs such as EEG are used in many cases to help disabled people or to ease everyday life by assisting with basic tasks. For example, EEG can be used to detect an impending seizure in epilepsy patients, and a BCI device can then help prevent the seizure. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that eases people's lives as more BCI techniques are developed in the future.
Keywords: BCI, EEG, non-invasive, spatial resolution
Procedia PDF Downloads 71
1994 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises
Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov
Abstract:
We have carried out the optimal synthesis of a root-mean-square filter to estimate the state vector in the case where, within an observation channel with memory, anomalous noises with unknown mathematical expectation act in addition to the regular noises. The synthesis has been carried out for continuous-time linear stochastic systems.
Keywords: mathematical expectation, filtration, anomalous noise, memory
Procedia PDF Downloads 247
1993 Railway Ballast Volumes Automated Estimation Based on LiDAR Data
Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert
Abstract:
The ballast layer plays a key role in railroad maintenance and the geometry of the track structure. Ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and along the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, the excavation width, the volume of the track skeleton (sleepers and rails), and the sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from these data was carried out. The surplus in ballast is then estimated by comparing the empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surplus amounting to values close to the total quantities of spoil ballast excavated.
Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point cloud
Procedia PDF Downloads 110
1992 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection And Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
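The per-cluster fit can be sketched with a standard catenary parameterization, z(s) = z0 + c(cosh((s - s0)/c) - 1), where s is the position along the span; the points below are synthetic stand-ins for one clustered conductor's LiDAR returns.

```python
# Least-squares catenary fit for one clustered conductor span.
import numpy as np
from scipy.optimize import curve_fit

def catenary(s, s0, z0, c):
    return z0 + c * (np.cosh((s - s0) / c) - 1.0)

rng = np.random.default_rng(11)
s = np.linspace(0, 80, 120)                    # position along span, m
z_true = catenary(s, 40.0, 12.0, 300.0)        # sag of a taut conductor
z = z_true + rng.normal(0, 0.02, s.size)       # LiDAR-like noise

popt, _ = curve_fit(catenary, s, z, p0=[s.mean(), z.min(), 100.0])
s0, z0, c = popt
print(f"lowest point at s={s0:.1f} m, z={z0:.2f} m, "
      f"catenary constant c={c:.0f} m")
```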
Procedia PDF Downloads 99
1991 Band Characterization and Development of Hyperspectral Indices for Retrieving Chlorophyll Content
Authors: Ramandeep Kaur M. Malhi, Prashant K. Srivastava, G. Sandhya Kiran
Abstract:
Quantitative estimates of foliar biochemicals, namely chlorophyll content (CC), serve as key information for the assessment of plant productivity, stress, and nutrient availability. CC also plays a critical role in predicting the dynamic response of vegetation to changing climate conditions. The advent of hyperspectral data, with an enhanced number of available wavelengths, has increased the possibility of acquiring improved information on CC. Retrieval of CC is extensively carried out through well-known spectral indices derived from hyperspectral data. In the present study, an attempt is made to develop hyperspectral indices by identifying the optimum bands for CC estimation in Butea monosperma (Lam.) Taub. growing in the forests of Shoolpaneshwar Wildlife Sanctuary, Narmada district, Gujarat State, India. 196 narrow bands of EO-1 Hyperion images were screened, and the optimum wavelengths from the blue, green, red, and near-infrared (NIR) regions were identified based on the coefficient of determination (R²) between band reflectance and laboratory-estimated CC. The identified optimum wavelengths were then employed to develop 12 hyperspectral indices, and these spectral index values were correlated with the laboratory-measured CC values. Band 15 in the blue range, Band 22 in the green range, Band 40 in the red region, and Band 79 in the NIR region were found to be the optimum bands for estimating CC. The optimum-band combinations proved to be the most effective indices for quantifying Butea CC, with NDVI and TVI identified as the best (R² > 0.7, p < 0.01). The study demonstrated the significance of band characterization in developing the best hyperspectral indices for chlorophyll estimation, which can aid in monitoring the vitality of forests.
Keywords: band, characterization, chlorophyll, hyperspectral, indices
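Once the optimum bands are known, the indices themselves are simple band arithmetic. The sketch below computes NDVI and a common form of TVI from synthetic reflectance arrays standing in for Hyperion Band 40 (red) and Band 79 (NIR); the TVI form sqrt(NDVI + 0.5) is the widely used variant and may differ from the paper's exact formulation.

```python
# NDVI and TVI from red and NIR reflectance (synthetic stand-ins for
# calibrated Hyperion pixels).
import numpy as np

rng = np.random.default_rng(2)
red = rng.uniform(0.03, 0.10, size=(100, 100))   # Band 40 reflectance
nir = rng.uniform(0.30, 0.55, size=(100, 100))   # Band 79 reflectance

ndvi = (nir - red) / (nir + red)
tvi = np.sqrt(ndvi + 0.5)                        # common TVI form (assumed)
print(f"NDVI mean {ndvi.mean():.2f}, TVI mean {tvi.mean():.2f}")
```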
Procedia PDF Downloads 155
1990 Estimating Age in Deceased Persons from the North Indian Population Using Ossification of the Sternoclavicular Joint
Authors: Balaji Devanathan, Gokul G., Raveena Divya, Abhishek Yadav, Sudhir K. Gupta
Abstract:
Background: Age estimation is a common problem in administrative settings, medico-legal cases, and among athletes competing in different sports. Medico-legal questions of age also arise in hospitals in cases of criminal abortion, consent to surgery or a general physical examination, infanticide, impotence, sterility, etc. Progress in medical imaging has benefited forensic anthropology in various ways, most notably in the area of determining bone age. Multi-slice computed tomography (MSCT) is an efficient method for studying the epiphyseal union and other changes in the body's bones and joints. No significant database is available for the Indian population, so the authors performed this original study to obtain one. Methodology: The appearance and fusion of the ossification centres of the sternoclavicular joint were evaluated, and grades were assigned accordingly. Using MSCT scans, we examined the relationship between the age of the deceased and alterations in the sternoclavicular joint during appearance and union in 500 cases (327 males and 173 females) in the age range of 0 to 25 years. Results: According to our research, the ossification centre for the medial end of the clavicle first appeared at 18.5 years in the male group and 17.1 years in the female group. The age of partial union was 20.4 and 20.2 years, respectively. The earliest age of complete fusion was 23 years for males and 22 years for females. For fusion of the sternebrae into one, the age range is 11–24 years for females and 17–24 years for males. The fusion of the third and fourth sternebrae was completed by 11 years, and the fusions of the first and second and of the second and third sternebrae occur by the age of 17 years. Furthermore, correlation and reliability analyses were carried out, which yielded significant results. Conclusion: With numerous exceptions, the projected values are consistent with a large number of previously developed age charts. These variations may be caused by ethnic or regional heterogeneity in the ossification pattern of the population under study. The pattern of bone maturation did not differ significantly between the sexes. The study's age range was 0 to 25 years, and, for obvious reasons, the majority of the cases occurred in the last five years of this range, between 20 and 25 years of age. This resulted in a comparatively smaller study population for the 12–18 age group, where age estimation is crucial because of current legal requirements. Specialized PMCT research in this age range will be required to produce population-standard charts for age estimation. The medial end of the clavicle is one of several ossification foci being thoroughly investigated, since these are challenging to assess with a traditional X-ray examination. Combining the two sites has been shown to give valid results for establishing that an age exceeds eighteen.
Keywords: age estimation, sternoclavicular joint, medial clavicle, computed tomography
Procedia PDF Downloads 45
1989 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
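The anomaly detection stage can be sketched with an off-the-shelf one-class classifier; the frequency features below are synthetic, and the paper's OCCNN2 refines such a coarse boundary with a feedforward network rather than using an SVM.

```python
# One-class anomaly detection on fundamental-frequency features: train on
# the healthy state, flag shifted frequencies as damage.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
healthy = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(500, 4))
damaged = rng.normal([3.7, 4.8, 9.5, 10.0], 0.05, size=(100, 4))

scaler = StandardScaler().fit(healthy)
occ = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
occ.fit(scaler.transform(healthy))

pred_h = occ.predict(scaler.transform(healthy))    # +1 = normal
pred_d = occ.predict(scaler.transform(damaged))    # -1 = anomaly
print("false alarm rate :", np.mean(pred_h == -1))
print("detected damage  :", np.mean(pred_d == -1))
```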
Procedia PDF Downloads 123
1988 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering
Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott
Abstract:
Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within a sample, in order to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells by optimizing the modularity, and then identifies and removes vertices with a weak clustering affiliation. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data, with interesting marker genes that are potentially relevant for medical research.
Keywords: cancer research, graph theory, machine learning, single cell analysis
Procedia PDF Downloads 113
1987 Evaluation of Natural Frequency of Single and Grouped Helical Piles
Authors: Maryam Shahbazi, Amy B. Cerato
Abstract:
The importance of a system's natural frequency (fn) emerges when the vibration force frequency matches the foundation's fn, causing an amplified response (resonance) that may inflict irreversible damage on the structure. Several factors, such as pile geometry (e.g., length and diameter), soil density, load magnitude, pile condition, and the physical structure, affect the fn of a soil-pile system; some of these parameters are evaluated in this study. Although experimental and analytical studies have assessed the fn of soil-pile systems, few have included individual and grouped helical piles. Thus, the current study aims to provide quantitative data on the dynamic characteristics of helical pile-soil systems from full-scale shake table tests, which will allow engineers to predict a more realistic dynamic response under motions with variable frequency ranges. To evaluate the fn of single and grouped helical piles in dry dense sand, full-scale shake table tests were conducted in a laminar box (6.7 m x 3.0 m in plan and 4.6 m high). Helical piles of two different diameters (8.8 cm and 14 cm) were embedded in the soil box, with corresponding lengths of 3.66 m (excluding one pile with a length of 3.96 m) and 4.27 m. Different configurations were implemented to evaluate conditions such as fixed and pinned connections. In the group configuration, four piles of similar geometry were tied together. Simulated real earthquake motions, in addition to white noise, were applied to evaluate a wide range of soil-pile system behavior. The Fast Fourier Transform (FFT) of the time-history responses measured with installed strain gages and accelerometers was used to evaluate fn. Time-history records from either accelerometers or strain gages were found to be acceptable for calculating fn. In this study, the existence of a pile slightly reduced the fn of the soil. Greater fn occurred in single piles with larger l/d ratios (higher slenderness ratios). Also, regardless of the connection type, the more slender pile group, which is surrounded by more soil, yielded higher natural frequencies under white noise, which may be due to greater passive soil resistance around it. Relatively speaking, within both pile groups, a pinned connection led to a lower fn than a fixed connection (e.g., for the same pile group, the fn values are 5.23 Hz and 4.65 Hz for fixed and pinned connections, respectively). Generally speaking, a stronger motion causes nonlinear behavior and degrades stiffness, which reduces a pile's fn; even more reduction occurs in soil of lower density. Moreover, the fn of dense sand under a white noise signal was 5.03 Hz, which was reduced by 44% when an earthquake with an acceleration of 0.5 g was applied. By knowing the factors affecting fn, the designer can effectively match the properties of the soil to a type of pile and structure to attempt to avoid resonance. The quantitative results in this study assist engineers in predicting a probable range of fn for helical pile foundations under potential future earthquakes and applied machine loading forces.
Keywords: helical pile, natural frequency, pile group, shake table, stiffness
Procedia PDF Downloads 1331986 Regression Analysis in Estimating Stream-Flow and the Effect of Hierarchical Clustering Analysis: A Case Study in Euphrates-Tigris Basin
Authors: Goksel Ezgi Guzey, Bihrat Onoz
Abstract:
The scarcity of streamflow gauging stations and the increasing effects of global warming make the design of water management systems very difficult. This study is a significant contribution to assessing regional regression models for estimating streamflow. In this study, simulated meteorological data were related to the observed streamflow data from 1971 to 2020 for 33 stream gauging stations of the Euphrates-Tigris Basin. Ordinary least squares regression was used to predict flow for 2020-2100 with the simulated meteorological data. The CORDEX-EURO and CORDEX-MENA domains were used, with 0.11° and 0.22° grids respectively, to estimate climate conditions under certain climate scenarios. Twelve meteorological variables simulated by two regional climate models, RCA4 and RegCM4, were used as independent variables in the ordinary least squares regression, where the observed streamflow was the dependent variable. The variability of streamflow was then explained with 5-6 meteorological variables and watershed characteristics such as area and elevation prior to the application. Following the regression analysis of the 31 stream gauging stations' data, the stations were subjected to a clustering analysis, which grouped them into two clusters in terms of their hydrometeorological properties. Two streamflow equations were found for the two clusters of stream gauging stations for every domain and every regional climate model, which increased the efficiency of the streamflow estimation by 10-15% for all the models. This study underlines the importance of the homogeneity of a region in estimating streamflow, not only in terms of geographical location but also in terms of the meteorological characteristics of that region.Keywords: hydrology, streamflow estimation, climate change, hydrologic modeling, HBV, hydropower
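The two-step procedure described above lends itself to a short sketch: cluster the gauging stations on their hydrometeorological attributes, then fit one ordinary least squares equation per cluster. The column layout, number of clusters, and Ward linkage below are assumptions for illustration only.

```python
# Sketch: hierarchical clustering of gauging stations, then one OLS
# streamflow equation per cluster. Data shapes and the linkage choice
# are illustrative assumptions, not the study's setup.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LinearRegression

def cluster_and_fit(station_attrs, X_met, y_flow, n_clusters=2):
    """station_attrs: (n_stations, n_attrs) hydrometeorological attributes.
    X_met: list of per-station meteorological predictor matrices.
    y_flow: list of per-station observed streamflow series."""
    # Group stations by similarity of their attributes (Ward linkage).
    Z = linkage(station_attrs, method="ward")
    groups = fcluster(Z, t=n_clusters, criterion="maxclust")

    # Pool the data within each cluster and fit one regression equation.
    models = {}
    for c in np.unique(groups):
        members = np.where(groups == c)[0]
        X = np.vstack([X_met[i] for i in members])
        y = np.concatenate([y_flow[i] for i in members])
        models[c] = LinearRegression().fit(X, y)
    return groups, models
```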
Procedia PDF Downloads 1291985 Performance Comparison of Non-Binary RA and QC-LDPC Codes
Abstract:
Repeat–Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are constructed with a linear encoding method, and the non-binary QC-LDPC codes with an algebraic construction method. The BER performance of the RA and QC-LDPC codes over GF(q) is then compared, under belief propagation (BP) decoding, by simulation over the Additive White Gaussian Noise (AWGN) channel.Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm
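The comparison above rests on a Monte-Carlo BER-versus-Eb/N0 simulation over an AWGN channel; a minimal harness of that kind is sketched below, with uncoded BPSK standing in for the encoder and BP decoder, since a full non-binary LDPC decoder exceeds a short example. Everything here is an illustrative skeleton, not the paper's simulator.

```python
# Sketch: Monte-Carlo BER-vs-Eb/N0 measurement over an AWGN channel.
# Uncoded BPSK is a stand-in; a real experiment would insert the RA or
# QC-LDPC encoder before the channel and the BP decoder after it.
import numpy as np

def ber_awgn(ebn0_db, n_bits=200_000, rng=np.random.default_rng(0)):
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                 # BPSK: 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * ebn0))        # noise std for rate-1, unit Es
    received = symbols + sigma * rng.standard_normal(n_bits)
    decided = (received < 0).astype(int)       # hard decision
    return np.mean(decided != bits)

for snr_db in range(0, 9, 2):
    print(f"Eb/N0 = {snr_db} dB: BER = {ber_awgn(snr_db):.2e}")
```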
Procedia PDF Downloads 3761984 Big Data Applications for the Transport Sector
Authors: Antonella Falanga, Armando Cartenì
Abstract:
Today, an unprecedented amount of data, coming from several sources including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term "big data" refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors, such as business for improving decision-making processes, healthcare for clinical record analysis and medical research, education for enhancing teaching methodologies, agriculture for optimizing crop management, finance for risk assessment and fraud detection, media and entertainment for personalized content recommendations, emergency services for real-time response during crises and events, and also mobility for urban planning and for the design and management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating the quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span the planning, design, and management of systems and mobility services. Among the most common big data applications within the transport sector are, for example, real-time traffic monitoring, bus and freight vehicle route optimization, vehicle maintenance, road safety, and all the autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents, and pollutant emissions. Within these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand. Achieving transportation decarbonization goals hinges on precise estimations of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of the mobility demand in Italy. Estimation results reveal that, in the post-COVID-19 era, there are more than 96 million national daily trips, about 2.6 trips per capita, with a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data better enhances rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.Keywords: big data, cloud computing, decision-making, mobility demand, transportation
Procedia PDF Downloads 621983 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals
Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou
Abstract:
In data-driven prognostic methods, the prediction accuracy of the remaining useful life estimation for bearings mainly depends on the performance of the health indicators, which are usually fused from statistical features extracted from vibration signals. However, the existing health indicators have the following two drawbacks: (1) the different ranges of the statistical features contribute differently to the construction of the health indicators, so expert knowledge is required to extract the features; (2) when convolutional neural networks are utilized to tackle the time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, in this study, a method combining a convolutional neural network with a gated recurrent unit is proposed to extract the time-frequency image features. The extracted features are utilized to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images by using the continuous wavelet transform so as to form the original feature sets. Second, with the convolutional and pooling layers of convolutional neural networks, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method performs better than related studies that have used the same bearing dataset provided by PRONOSTIA.Keywords: continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life
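The pipeline described above (CWT images through convolutional and pooling layers, then a GRU over the feature sequence, then a scalar health indicator) can be sketched as follows in PyTorch; all layer sizes and the image dimensions are illustrative assumptions, not the paper's architecture.

```python
# Sketch: CNN feature extraction per time-frequency image, GRU over the
# sequence of per-snapshot features, linear head as health indicator.
# Layer sizes and image dimensions are assumed for illustration.
import torch
import torch.nn as nn

class CnnGruHealthIndicator(nn.Module):
    def __init__(self, img_size=64, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        feat_dim = 32 * (img_size // 4) ** 2    # after two 2x poolings
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time_steps, 1, H, W) sequence of CWT images
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))       # (b*t, 32, H/4, W/4)
        feats = feats.flatten(1).view(b, t, -1) # (b, t, feat_dim)
        out, _ = self.gru(feats)
        return self.head(out[:, -1])            # one indicator per sequence

# Example: a batch of 4 sequences of ten 64x64 time-frequency images.
hi = CnnGruHealthIndicator()(torch.randn(4, 10, 1, 64, 64))
print(hi.shape)  # torch.Size([4, 1])
```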
Procedia PDF Downloads 133