Search results for: time series data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38091

36111 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network

Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka

Abstract:

Wireless Sensor Networks (WSNs) are used in many applications to collect sensed data from different sources. Sensed data have to be delivered to the sink through the sensors' wireless interfaces using multi-hop communication. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in a WSN; reducing energy consumption while the amount of generated data grows is a great challenge. In this paper, we have implemented two data gathering protocols that use multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which mobile sinks use Prim's algorithm to collect data. We have implemented the concepts common to both protocols, such as deployment of mobile sinks, generation of the visiting schedule, and collection of data from the cluster members. We have compared the performance of both protocols using statistics based on performance parameters such as delay, packet drop, packet delivery ratio, available energy, and control overhead. We conclude that EEDG is more efficient than the IAR protocol, but with a few limitations, including unaddressed issues such as redundancy removal, idle listening, and the mobile sink's pause/wait state at a node. In future work, we plan to concentrate on these limitations to develop a new energy-efficient protocol that will help improve the lifetime of the WSN.

Keywords: aggregation, consumption, data gathering, efficiency

Procedia PDF Downloads 476
36110 Monte Carlo Methods and Statistical Inference of Multitype Branching Processes

Authors: Ana Staneva, Vessela Stoimenova

Abstract:

A parametric estimation of multitype branching processes (MBPs) with a power series offspring distribution family is considered in this paper. The MLE for the parameters is obtained in the case when the observable data are incomplete and consist only of the generation sizes of the family tree of the MBP. The parameter estimation is carried out using the Monte Carlo EM algorithm. The estimates of the posterior distribution and of the offspring distribution parameters are calculated using the Bayesian approach and the Gibbs sampler. The article presents various examples with bivariate branching processes together with computational results, simulations, and an implementation in R.
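
As a greatly simplified illustration of estimation from generation sizes alone (a single-type Galton-Watson process with Poisson offspring, not the bivariate power-series setting or the Monte Carlo EM machinery of the paper), the offspring mean can be estimated by the classical Harris ratio estimator:

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's algorithm for a Poisson draw (adequate for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_generations(m, n_gen, z0=10, seed=1):
    """Simulate generation sizes [Z_0, ..., Z_T] of a Galton-Watson
    process with Poisson(m) offspring, stopping early on extinction."""
    rng = random.Random(seed)
    sizes = [z0]
    for _ in range(n_gen):
        z = sum(poisson_draw(rng, m) for _ in range(sizes[-1]))
        sizes.append(z)
        if z == 0:
            break
    return sizes

def harris_estimator(sizes):
    """MLE of the offspring mean from generation sizes only:
    m_hat = (Z_1 + ... + Z_T) / (Z_0 + ... + Z_{T-1})."""
    return sum(sizes[1:]) / sum(sizes[:-1])
```

This nonparametric ratio is the maximum likelihood estimator of the offspring mean for any offspring law when only the generation sizes are observed.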

Keywords: Bayesian, branching processes, EM algorithm, Gibbs sampler, Monte Carlo methods, statistical estimation

Procedia PDF Downloads 405
36109 An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient

Authors: Ziad Abdallah, Mohamad Oueidat, Ali El-Zaart

Abstract:

Image multi-label classification (IMC) assigns a label or a set of labels to an image. The great demand for image annotation and archiving on the web has attracted researchers to develop many algorithms for this application domain. Existing techniques for IMC have two drawbacks: the description of the elementary characteristics of the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP) that handles both limitations simultaneously. The algorithm uses the histogram of oriented gradients as the feature descriptor and applies the Label Priority Power-set as the multi-label transformation to address label correlation. Experiments show that MIML-HOGLPP performs better on several of the evaluation metrics than the two existing techniques.
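
The core building block of the HOG descriptor mentioned above is a per-cell histogram of gradient orientations weighted by gradient magnitude. A minimal numpy sketch of that single step (without the block normalization and sliding windows of a full HOG pipeline) might look like:

```python
import numpy as np

def orientation_histogram(cell, n_bins=9):
    """Magnitude-weighted histogram of gradient orientations for one
    image cell -- the core of a HOG descriptor (a simplified sketch,
    not the paper's full MIML-HOGLPP feature extractor)."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # unsigned orientation in [0, 180) degrees, as in standard HOG
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist
```

For a cell containing only a vertical edge, all gradient energy falls into the 0-degree bin, which is what makes the descriptor discriminative for local shape.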

Keywords: data mining, information retrieval system, multi-label, problem transformation, histogram of gradients

Procedia PDF Downloads 358
36108 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

Parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings of the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, it aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies that play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns: in the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy, and even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the proposed approach's effectiveness. Finally, the use of traditional experimental design versus the proposed approach for process optimization is discussed.
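
One of the tools named in the keywords is the genetic algorithm. The sketch below is a generic, minimal GA for searching process parameter settings against a hypothetical deviation-from-target response surface; it is an illustration of the technique only, not the authors' method or their data.

```python
import random

def genetic_search(objective, bounds, pop_size=30, generations=60, seed=0):
    """Minimal genetic algorithm minimizing `objective` over box bounds.
    bounds: list of (low, high) per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: pop_size // 2]               # selection: keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover: average
            i = rng.randrange(len(bounds))                # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Hypothetical example: temperature/pressure settings that minimize the
# squared deviation from an assumed target operating point (180, 2.5).
best = genetic_search(lambda p: (p[0] - 180) ** 2 + (p[1] - 2.5) ** 2,
                      bounds=[(150, 220), (1.0, 4.0)])
```

Elitism guarantees the best setting found so far is never lost, while mutation keeps exploring the bounded parameter space.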

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 129
36107 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording

Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen

Abstract:

It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access, which gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers is optimized for a lower NPSH₃% by its blade design, whereas the other is manufactured using a standard casting method. Cavitation is detected by pump performance measurements, vibration measurements, and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data are recorded in the axial direction of the impeller using accelerometers sampling at 131 kHz. The vibration frequency-domain data (up to 20 kHz) and the time-domain data are analyzed, as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not affected. It is also observed that below a certain NPSHA value, cavitation starts in the inlet bend of the pump; above this value, cavitation occurs exclusively on the impeller blades.
The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but the head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations. The simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view on the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. Data indicates that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data is used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
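
The time-domain criterion suggested above rests on two simple quantities: the root mean square of the vibration signal and the abrupt peaks associated with cloud collapses. A generic numpy sketch of both (an illustration of the signal processing idea, not the authors' processing chain):

```python
import numpy as np

def rms(signal):
    """Root mean square of a vibration time signal."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

def find_peaks_above(signal, threshold):
    """Indices of local maxima exceeding a threshold -- a simple way to
    flag abrupt vibration peaks in the time-domain signal."""
    s = np.asarray(signal, dtype=float)
    interior = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > threshold)
    return np.flatnonzero(interior) + 1
```

Counting threshold-crossing peaks per revolution, for example, would give a scalar that can be tracked against NPSHA.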

Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration

Procedia PDF Downloads 165
36106 The Types of Annuities with Flexible Premium

Authors: Deniz Ünal Özpalamutcu, Burcu Altman

Abstract:

Actuarial science uses mathematics, statistics, and financial knowledge to analyze the financial impacts of uncertainty, risk, insurance, and pension-related issues. In other words, it deals with the likelihood of potential risks, their financial impacts, and especially the financial measures against them. Handling these measures requires long-term payments and investments, so it is inevitable to plan periodic payments at equal time intervals while also considering the changing value of money over time. Such a series of payments made at specific intervals of time is called an annuity (or rant). In the literature, annuities are classified based on start and end dates, start times, payment times, and payment amounts or frequency. Classification by payment amount distinguishes constant, descending, and ascending payment methods. The literature on handling such annuities is very limited, yet in daily life, especially in today's world where economic issues have gained prominence, it is crucial to use variable annuity methods in line with customers' demands. In this study, the types of annuities with flexible payment are discussed; that is, we focus on calculating the payment of a period by adding a certain percentage of the previous period's payment. While studying this problem, formulas were derived for the cash (present) value and the accumulated value, considering payments both at the start and at the end of a period. The problem of annuities in which each period's payment increases over the previous period's payment by an interest rate r has been analyzed. The cash value and accumulated value of this problem were studied separately for period-start and period-end payments, and their relations were expressed by formulas.
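
The flexible-payment idea, payments that grow by a fixed percentage each period, can be sketched directly from the definition of present value. The following is an illustrative calculation for the end-of-period (annuity-immediate) case, not the authors' exact formulas:

```python
def present_value_growing_annuity(payment, growth, interest, n_periods):
    """Present value of an annuity-immediate with payments
    P, P(1+g), ..., P(1+g)^(n-1) at the ends of periods 1..n,
    discounted at per-period rate i:
        PV = sum_{k=1..n} P (1+g)^(k-1) v^k,  v = 1/(1+i)."""
    v = 1.0 / (1.0 + interest)
    return sum(payment * (1 + growth) ** (k - 1) * v ** k
               for k in range(1, n_periods + 1))

def accumulated_value_growing_annuity(payment, growth, interest, n_periods):
    """Accumulated value at time n of the same payment stream:
    AV = PV * (1+i)^n."""
    pv = present_value_growing_annuity(payment, growth, interest, n_periods)
    return pv * (1.0 + interest) ** n_periods
```

A useful sanity check: when the growth rate equals the interest rate, every discounted term collapses to P/(1+i), so PV = nP/(1+i).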

Keywords: actuaria, annuity, flexible payment, rant

Procedia PDF Downloads 201
36105 Econometric Analysis of West African Countries’ Container Terminal Throughput and Gross Domestic Products

Authors: Kehinde Peter Oyeduntan, Kayode Oshinubi

Abstract:

West African ports have experienced large inflows and outflows of containerized cargo in recent decades, and this has created a quest among the countries to attain hub-port status for the sub-region. This study analyzed the relationship between the container throughput and the Gross Domestic Product (GDP) of nine West African countries, using simple linear regression (SLR), a polynomial regression model (PRM), and support vector machines (SVM) on a 20-year time series. The results showed a high correlation between GDP and container throughput. The models were also used to predict container throughput in West Africa for the next 20 years. The findings and recommendations presented in this research will guide policy makers and help improve the management of container ports and terminals in West Africa, thereby boosting the economy.
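
The simplest of the three models, SLR of throughput on GDP, can be sketched in a few lines. The figures below are hypothetical illustrations only, not the study's data:

```python
import numpy as np

# Hypothetical GDP (billions USD) and throughput (thousand TEU) series.
gdp = np.array([10.0, 12.0, 15.0, 18.0, 22.0])
throughput = np.array([200.0, 240.0, 310.0, 350.0, 450.0])

# Simple linear regression: throughput = a * gdp + b
a, b = np.polyfit(gdp, throughput, deg=1)
predicted = a * gdp + b

# Coefficient of determination R^2 as a goodness-of-fit check
ss_res = np.sum((throughput - predicted) ** 2)
ss_tot = np.sum((throughput - throughput.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Forecasting then amounts to feeding projected GDP values into the fitted line (or into the higher-order PRM/SVM models for the nonlinear cases).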

Keywords: container, ports, terminals, throughput

Procedia PDF Downloads 199
36104 Recommender Systems Using Ensemble Techniques

Authors: Yeonjeong Lee, Kyoung-jae Kim, Youngtae Kim

Abstract:

This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' precise preferences. The proposed model consists of two steps. In the first step, logistic regression, decision trees, and artificial neural networks are used to predict the customers who have a high likelihood of purchasing products in each product group; the results of the individual predictors are then combined using multi-model ensemble techniques such as bagging and bumping. In the second step, market basket analysis is used to extract association rules for co-purchased products. Finally, the system selects the customers with a high likelihood of purchasing in each product group and recommends appropriate products from the same or different product groups to them through the two steps above. We test the usability of the proposed system using a prototype and real-world transaction and profile data. In addition, we survey user satisfaction with the product lists recommended by the proposed system and with randomly selected product lists. The results show that the proposed system may be useful in a real-world online shopping store.
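
The second step, market basket analysis, boils down to mining rules A -> B whose support and confidence exceed thresholds. A minimal pairwise sketch (illustrative only, not the authors' full two-step system):

```python
from collections import Counter
from itertools import combinations

def association_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Extract pairwise association rules (lhs -> rhs) from a list of
    transactions. Returns tuples (lhs, rhs, support, confidence) where
    support = P(lhs and rhs) and confidence = P(rhs | lhs)."""
    n = len(transactions)
    item_counts = Counter(item for t in transactions for item in set(t))
    pair_counts = Counter(pair for t in transactions
                          for pair in combinations(sorted(set(t)), 2))
    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = count / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules
```

In the proposed system, rules like these would be applied to the customers that the first-step ensemble flags as likely purchasers.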

Keywords: product recommender system, ensemble technique, association rules, decision tree, artificial neural networks

Procedia PDF Downloads 279
36103 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist in environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support the project data flow; a Lab Data component that provides a connection between the EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada. The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 140
36102 An Output Oriented Super-Efficiency Model for Considering Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

There exists a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in calculating the efficiency of decision making units (DMUs). Recently, a couple of DEA models were developed for considering the time lag effect in the efficiency evaluation of research activities. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to 1. This problem can be resolved by a super-efficiency model; however, a super-efficiency model sometimes causes an infeasibility problem. This paper suggests an output-oriented super-efficiency model for efficiency evaluation under the consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.
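
For readers unfamiliar with super-efficiency DEA, the single-period output-oriented CCR form can be written as a small linear program: the evaluated DMU is excluded from the reference set, so efficient units can obtain scores above 1. This is a standard textbook sketch without the time-lag structure of the proposed model:

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency_output_oriented(X, Y, k):
    """Output-oriented CCR super-efficiency score of DMU k.
    X: inputs (n_dmus x n_inputs), Y: outputs (n_dmus x n_outputs).
    Solves  max phi  s.t.  sum_{j != k} lam_j x_j <= x_k,
                           sum_{j != k} lam_j y_j >= phi * y_k,  lam >= 0,
    and returns 1/phi (a score > 1 indicates super-efficiency)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    others = [j for j in range(len(X)) if j != k]
    n = len(others)
    # decision variables z = [phi, lam_1, ..., lam_n]; linprog minimizes
    c = np.zeros(n + 1)
    c[0] = -1.0                                   # maximize phi
    # input rows:  sum lam_j x_ij <= x_ik
    A_in = np.hstack([np.zeros((X.shape[1], 1)), X[others].T])
    b_in = X[k]
    # output rows: phi * y_rk - sum lam_j y_rj <= 0
    A_out = np.hstack([Y[k][:, None], -Y[others].T])
    b_out = np.zeros(Y.shape[1])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return 1.0 / res.x[0]
```

With two DMUs of equal input where one produces twice the output, the dominant unit scores 2 and the dominated unit scores 0.5, so efficient DMUs are now discriminated.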

Keywords: DEA, super-efficiency, time lag, research activities

Procedia PDF Downloads 638
36101 Study of Climate Change Scenarios (IPCC) in the Littoral Zone of the Caspian Sea

Authors: L. Rashidian, M. Rajabali

Abstract:

Climate change has unpredictable and costly effects on the water resources of various basins. The impact of atmospheric phenomena on human life and the environment is so significant that only sound management knowledge can reduce its consequences. In this study, using the LARS-WG model and downscaling of the HadCM3 general circulation model according to the IPCC scenarios A1B, A2, and B1, we simulated data from 2010 to 2040 in order to use them for long-term forecasting of the climate parameters of the Caspian Sea and their impact on sea level. Our research involved collecting data on monthly precipitation amounts, minimum and maximum temperatures, and daily sunshine hours from the meteorological organization for Caspian Sea coastal stations such as Gorgan, Ramsar, Rasht, Anzali, Astara, and Ghaemshahr, from their establishment until 2010. Although the water level of the Caspian Sea has fluctuated in different periods, the minimum and maximum temperatures increase under all the mentioned scenarios until 2040. The amount of rainfall in the cities bordering the Caspian Sea was also studied based on the three scenarios, which show an increase. However, the water level of the Caspian Sea will decrease until 2040.

Keywords: IPCC, climate change, atmospheric circulation, Caspian Sea, HADCM3, sea level

Procedia PDF Downloads 224
36100 Soil Macronutrients Sensing for Precision Agriculture Purpose Using Fourier Transform Infrared Spectroscopy

Authors: Hossein Navid, Maryam Adeli Khadem, Shahin Oustan, Mahmoud Zareie

Abstract:

Among the nutrients needed by plants, three (nitrate, phosphorus, and potassium) are the most important. The objective of this research was to measure the amounts of these nutrients in soil using Fourier transform infrared spectroscopy in the range of 400-4000 cm⁻¹. Soil samples of different soil types (sandy, clay, and loam) were collected from different areas of East Azerbaijan. Three fertilizers used in conventional farming (urea, triple superphosphate, and potassium sulphate) were applied as soil treatments. Each specimen was divided into two groups. The first group was used in the laboratory for direct measurement, with nitrate, phosphorus, and potassium extracted by the colorimetric, Olsen, and ammonium acetate methods. The second group was used for infrared absorption spectrometry: small amounts of the soil samples were mixed with KBr and pressed into small pellets, the pellets were placed in the infrared spectrometer, and spectra were obtained. The data were analyzed using MINITAB and PLSR software. The nutrient amounts obtained by spectrometry were compared with those obtained by direct measurement using EXCEL software, and there was a good fit between the two data series: for nitrate, phosphorus, and potassium, R² was 79.5%, 92.0%, and 81.9%, respectively. The results also showed that the MIR (mid-infrared) range is appropriate for determining the amounts of soil nitrate and potassium and can be used in future research to obtain detailed maps of land in agricultural use.
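
The calibration step, mapping a measured spectrum to a nutrient concentration and scoring the fit with R², can be illustrated with ordinary least squares as a simplified stand-in for the PLSR model used in the study (PLSR additionally projects the spectra onto latent components to cope with collinear wavelengths):

```python
import numpy as np

def calibrate_spectra(spectra, concentrations):
    """Fit a linear calibration from absorbance spectra (rows) to a
    nutrient concentration via least squares -- an illustrative
    stand-in for the study's PLSR calibration."""
    A = np.hstack([spectra, np.ones((len(spectra), 1))])  # add intercept
    coef, *_ = np.linalg.lstsq(A, concentrations, rcond=None)
    return coef

def predict(spectra, coef):
    A = np.hstack([spectra, np.ones((len(spectra), 1))])
    return A @ coef

def r_squared(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

The reported R² values of 79.5-92.0% correspond exactly to this goodness-of-fit statistic evaluated between the spectrometric predictions and the direct laboratory measurements.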

Keywords: nitrate, phosphorus, potassium, soil nutrients, spectroscopy

Procedia PDF Downloads 383
36099 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time

Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma

Abstract:

Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion impose a significant time burden around the world, and one significant solution can be the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of tools such as smart sensors, artificial intelligence, positioning technologies, and mobile data services to manage traffic flow, reduce congestion, and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS, and the traffic sign classification problem needs to be solved as a major step towards building semi-autonomous and autonomous driving systems. This work implements an approach to traffic sign classification by developing a convolutional neural network (CNN) classifier on the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than using hand-crafted features, our model keeps the number of parameters manageable and applies data augmentation. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
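
The layers a CNN classifier stacks, convolution, nonlinearity, and pooling, are conceptually simple. A pure-numpy forward-pass sketch of these building blocks (illustrative only, not the authors' GTSRB network, which would be built in a deep learning framework):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation -- the core CNN operation that
    slides a learned kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity applied after each convolution."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample a feature map by taking the max over size x size tiles."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

Repeating conv-relu-pool blocks and ending with a fully connected softmax layer over the 43 GTSRB classes yields the kind of classifier described in the abstract.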

Keywords: multiclass classification, convolutional neural network, OpenCV

Procedia PDF Downloads 158
36098 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

In many cases, there is a time lag between the consumption of inputs and the production of outputs, and this time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed for considering the time lag effect in the efficiency evaluation of research activities. The multi-period input (MpI) and multi-period output (MpO) models are integrated models that calculate simple efficiency considering the time lag effect. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to 1; that is, efficient DMUs cannot be discriminated because their efficiency scores are the same. Thus, this paper suggests a super-efficiency model for efficiency evaluation under the consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.

Keywords: DEA, super-efficiency, time lag, multi-periods input

Procedia PDF Downloads 454
36097 Blockchain-Based Approach on Security Enhancement of Distributed System in Healthcare Sector

Authors: Loong Qing Zhe, Foo Jing Heng

Abstract:

A variety of data files are now available on the internet due to the advancement of technology across the globe. As more and more data are uploaded to the internet, people are becoming increasingly concerned that their private data, particularly medical health records, may be compromised and sold to others for money. Hence, the accessibility and confidentiality of patients' medical records have to be protected through electronic means. Blockchain technology is introduced to offer patients security against adversaries or unauthorised parties. In the blockchain network, only authorised personnel or organisations that have been validated as nodes may share information and data. For any change within the network, including adding a new block or modifying existing information about a block, a two-thirds majority vote is required to confirm its legitimacy. Additionally, a permissioned consortium blockchain connects all the entities within the same community, so all medical data in the network can be safely shared with all authorised entities, and synchronisation can be performed within the cloud since the data are real-time. This paper discusses an efficient method for storing and sharing electronic health records (EHRs). It also examines the framework of roles within the blockchain and proposes a new approach to maintaining EHRs with keyword indexes to search for patients' medical records while ensuring data privacy.
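
The property that makes blockchain storage attractive for EHRs is tamper evidence: each block carries a hash of its contents and of the previous block, so any modification breaks the chain. A minimal hash-chain sketch of that mechanism (not the paper's full consortium design with voting and keyword indexes):

```python
import hashlib
import json

def make_block(records, prev_hash):
    """Create one block of EHR entries linked to its predecessor
    by a SHA-256 hash over the block body."""
    body = json.dumps({"records": records, "prev": prev_hash}, sort_keys=True)
    return {"records": records, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Re-derive every hash and check each block points at the previous
    block's hash; any tampering makes this return False."""
    for i, block in enumerate(chain):
        body = json.dumps({"records": block["records"], "prev": block["prev"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

In the consortium setting described above, a block would additionally only be appended after the two-thirds vote of validated nodes confirms it.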

Keywords: healthcare sectors, distributed system, blockchain, electronic health records (EHR)

Procedia PDF Downloads 173
36096 Toxic Metal and Radiological Risk Assessment of Soil, Water and Vegetables around a Gold Mine Turned Residential Area in Mokuro Area of Ile-Ife, Osun State Nigeria: An Implications for Human Health

Authors: Grace O. Akinlade, Danjuma D. Maza, Oluwakemi O. Olawolu, Delight O. Babalola, John A. O. Oyekunle, Joshua O. Ojo

Abstract:

The Mokuro area of Ile-Ife, South West Nigeria, was well known for gold mining in the past (about twenty years ago). However, the site has since been reclaimed and converted to a residential area without any environmental risk assessment of the impact of the mine tailings on the environment. Soil, water, and plant samples were collected from four different locations around the mine turned residential area. The soil samples were pulverized and sieved into finer particles, while the plant samples were dried and pulverized. All the samples were digested and analyzed for As, Pb, Cd, and Zn using atomic absorption spectroscopy (AAS), and the hazard index (HI) was calculated for the metals from the analysis results. For gamma spectrometry, the soil and plant samples were air dried, pulverized, and weighed, after which they were packed into special, properly sealed containers to prevent radon gas leakage and kept for 28 days to attain secular equilibrium. The concentrations of ⁴⁰K, ²³⁸U, and ²³²Th in the samples were then measured using a cesium iodide (CsI) spectrometer and URSA software. The AAS analysis showed that As, Pb, and Cd (toxic metals) and Zn (an essential trace metal) are at concentrations lower than the permissible limits in the plant and soil samples, while the water samples had concentrations higher than the permissible limits. The calculated hazard indices show HI > 1 for water and HI < 1 for plants and soil. Gamma spectrometry shows activity concentrations above the recommended limits for all the soil and plant samples collected from the area; only the water samples have activity concentrations below the recommended limit. Consequently, the absorbed dose, annual effective dose, and excess lifetime cancer risk are all above the recommended safe limits for all the samples except water. In conclusion, all the samples collected from the area are either contaminated with toxic metals or pose radiological hazards to consumers. A further detailed study is therefore recommended in order to advise the residents appropriately.
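
The hazard index used above is the sum, over the metals considered, of hazard quotients, each being a chronic daily intake divided by the metal's reference dose; HI > 1 flags potential adverse non-carcinogenic effects. A sketch of the arithmetic with hypothetical intakes and reference doses (not the study's data):

```python
def hazard_index(intakes, reference_doses):
    """Non-carcinogenic hazard index: sum over metals of the hazard
    quotient HQ = CDI / RfD (chronic daily intake over reference dose).
    HI > 1 indicates potential adverse health effects."""
    return sum(intakes[m] / reference_doses[m] for m in intakes)

# Hypothetical chronic daily intakes (mg/kg-day) and illustrative
# EPA-style reference doses -- example values only.
intakes = {"As": 0.0002, "Pb": 0.001, "Cd": 0.0003}
rfds = {"As": 0.0003, "Pb": 0.0035, "Cd": 0.0005}
hi = hazard_index(intakes, rfds)
```

Even though each individual quotient here is below 1, their sum exceeds 1, which is the kind of outcome that drives the HI > 1 finding for the water samples.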

Keywords: toxic metals, gamma spectrometry, Ile-Ife, radiological hazards, gold mining

Procedia PDF Downloads 33
36095 Numerical Investigation of Dynamic Stall over a Wind Turbine Pitching Airfoil by Using OpenFOAM

Authors: Mahbod Seyednia, Shidvash Vakilipour, Mehran Masdari

Abstract:

Computations for two-dimensional flow past a stationary and a harmonically pitching wind turbine airfoil at a moderate Reynolds number (400,000) are carried out by progressively increasing the angle of attack for the stationary airfoil and at fixed pitching frequencies for the oscillating one. The incompressible Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations are solved with the OpenFOAM package to investigate the aerodynamic phenomena occurring under stationary and pitching conditions on a NACA 6-series wind turbine airfoil. The aim of this study is to enhance the accuracy of numerical simulation in predicting the aerodynamic behavior of an oscillating airfoil in OpenFOAM. Hence, for turbulence modeling, k-ω SST with low-Reynolds correction is employed to capture the unsteady phenomena in both the stationary and oscillating motion of the airfoil. Using aerodynamic and pressure coefficients along with flow patterns, the unsteady aerodynamics in the pre-, near-, and post-static-stall regions are analyzed for the harmonically pitching airfoil, and the results are validated against the corresponding experimental data possessed by the authors. The results indicate that the mentioned turbulence model leads to accurate prediction of the static stall angle for the stationary airfoil, and of flow separation, the dynamic stall phenomenon, and flow reattachment on the airfoil surface for the pitching one. Due to the geometry of the studied 6-series airfoil, the vortex on the upper surface of the airfoil during upstrokes forms at the trailing edge. Therefore, the flow pattern obtained by our numerical simulations represents the formation and change of the trailing-edge vortex in the near- and post-stall regions, where this process determines the dynamic stall phenomenon.

Keywords: CFD, moderate Reynolds number, OpenFOAM, pitching oscillation, unsteady aerodynamics, wind turbine

Procedia PDF Downloads 188
36094 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the Petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model construction, seismic tomography, or full waveform inversion, and the application of the reverse-time propagation of acquired seismic data and the original wavelet used in the acquisition. The methodology has matured from 2D, simple media to present-day to handle full 3D imaging challenges in extremely complex geological conditions. Conventional Ultrasound computed tomography (USCT) utilize travel-time-inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the “bend-ray” method, also known as migration. Its seismic application counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrives and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable in its acquisition aperture. The RTM algorithms typically require a rather accurate velocity model and demand high computing powers, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. The present-day (RTM) algorithms are typically implemented from a flat datum for the seismic industry. It can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. 
Such flexibility makes RTM convenient to apply to USCT imaging, provided the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes a full 3D RTM algorithm for USCT imaging that produces an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated backward in time and a known ultrasound wavelet is propagated forward in time, both by PSPI wavefield extrapolation through a piecewise constant velocity model of the organ (the breast). An imaging condition is then applied to produce a partial image. Although each partial image is limited by its own illumination aperture, stacking multiple partial images produces a full image of the organ with a much lower noise level than any individual partial image.
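The backbone of the method described above is wavefield extrapolation in depth. As an illustration of the kernel involved, the sketch below applies a single-velocity phase-shift step to a monochromatic wavefield; PSPI proper would repeat this for several reference velocities and interpolate the results. The grid sizes, velocity, and frequency are illustrative assumptions, not values from the paper.

```python
import numpy as np

def phase_shift_step(u, dx, dz, v, omega):
    """Extrapolate a monochromatic wavefield u(x) one depth step dz
    through a constant-velocity layer (classic phase-shift method).
    This is the single-velocity kernel that PSPI evaluates for several
    reference velocities before interpolating between the results."""
    nx = len(u)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)   # horizontal wavenumbers
    u_hat = np.fft.fft(u)
    kz2 = (omega / v) ** 2 - kx ** 2
    # propagating components get a pure phase shift exp(i*kz*dz);
    # evanescent components (kz2 < 0) decay exponentially instead
    kz = np.where(kz2 >= 0, np.sqrt(np.abs(kz2)), 0.0)
    decay = np.where(kz2 < 0, np.exp(-np.sqrt(np.abs(kz2)) * dz), 1.0)
    return np.fft.ifft(u_hat * np.exp(1j * kz * dz) * decay)
```

For a propagating plane wave, the step preserves amplitude exactly and only shifts phase, which is the property that makes the method unconditionally stable per layer.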

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 58
36093 Research and Application of Consultative Committee for Space Data Systems Wireless Communications Standards for Spacecraft

Authors: Cuitao Zhang, Xiongwen He

Abstract:

Motivated by the new requirements of future spacecraft, such as networking, modularization, and non-cable integration, this paper studies the CCSDS wireless communications standards, focusing on low-data-rate wireless communications for spacecraft monitoring and control. The application fields and advantages of wireless communications are analyzed: wireless technology offers significant advantages in reducing spacecraft weight, saving spacecraft integration time, and related areas. Based on this technology, a scheme for a spacecraft data system is put forward, and the corresponding block diagram and key wireless interface design are given. The design proposal for the wireless node and the information flow of the spacecraft are also analyzed. The results show that the wireless communications scheme is reasonable and feasible, and that wireless communications technology can meet future spacecraft demands for networking, modularization, and non-cable integration.

Keywords: Consultative Committee for Space Data Systems (CCSDS) standards, information flow, non-cable, spacecraft, wireless communications

Procedia PDF Downloads 312
36092 Oil-price Volatility and Economic Prosperity in Nigeria: Empirical Evidence

Authors: Yohanna Panshak

Abstract:

The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of discourse among researchers and policy makers and has generated considerable controversy over the years. This has prompted a series of research efforts towards understanding the remote causes of the phenomenon: its nature, its determinants, and how it can be targeted and mitigated. Some have opined that the root cause of macroeconomic flux in Nigeria is oil-price volatility, while others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Scholars such as Akpan (2009), Aliyu (2009), and Olomola (2006) argue that oil-price volatility can drive economic growth or has the potential to do so; on the contrary, Darby (1982) and Cerralo (2005) share the opinion that it can slow growth down. The former argument rests on the understanding that, for a net oil-exporting economy, a price upswing directly increases real national income through higher export earnings, whereas the latter alludes to the case of net oil-importing countries, which experience increased input costs, reduced non-oil demand, low investment, falling tax revenues, and ultimately larger budget deficits that further reduce welfare. The precise impact of oil-price volatility on any economy therefore depends on whether that economy is an oil exporter or an oil importer. Research on oil-price volatility and its effect on the growth of the Nigerian economy is evolving and marches towards resolving Nigeria’s macroeconomic instability, as oil revenue remains the mainstay and driver of the country’s socio-economic engineering. Recently, a major importer of Nigeria’s oil, the United States, made a historic breakthrough towards a more efficient energy source for its economy, with the capacity to serve a significant part of the world.
This development undoubtedly threatens the country’s exchange earnings, so the need to understand fluctuations in its major export commodity is critical. The paper leans on renaissance growth theory, with particular focus on the theoretical work of Lee (1998), a leading proponent of this school, who draws a clear distinction between oil-price changes and oil-price volatility. Against this background, the research empirically examines the impact of oil-price volatility on government expenditure using quarterly time series data spanning 1986:1 to 2014:4, within a vector autoregression (VAR) econometric approach. The structural properties of the model are tested using the Augmented Dickey-Fuller and Phillips-Perron tests, and the relevant diagnostic tests for heteroscedasticity, serial correlation, and normality are also carried out. Policy recommendations are offered on the basis of the empirical findings, which should assist policy makers not only in Nigeria but the world over.
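Lee's (1998) distinction between oil-price changes and oil-price volatility implies that a volatility series must first be constructed from the raw price data. As a minimal, hypothetical sketch (the paper's own volatility measure and VAR specification are not given here), one common proxy is the rolling standard deviation of log returns:

```python
import math

def rolling_volatility(prices, window):
    """Rolling sample standard deviation of log returns -- a simple
    proxy for an oil-price volatility series that could feed a VAR
    model. Illustrative only; window length and data are assumptions."""
    # log returns between consecutive observations
    rets = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    vols = []
    for i in range(window, len(rets) + 1):
        chunk = rets[i - window:i]
        mean = sum(chunk) / window
        var = sum((r - mean) ** 2 for r in chunk) / (window - 1)
        vols.append(math.sqrt(var))
    return vols
```

A flat price series yields zero volatility throughout, while alternating price swings yield strictly positive values, which is the behavior a volatility regressor should exhibit.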

Keywords: oil-price, volatility, prosperity, budget, expenditure

Procedia PDF Downloads 256
36091 The Effect of Feature Selection on Pattern Classification

Authors: Chih-Fong Tsai, Ya-Han Hu

Abstract:

The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables) so that the classifier performs better than one trained without feature selection. Although there are many well-known feature selection algorithms, and different classifiers built on different selection results may perform differently, very few studies examine the effect of performing different feature selection algorithms on the classification performance of different classifiers over different types of datasets. In this paper, two widely used algorithms, the genetic algorithm (GA) and information gain (IG), are used to perform feature selection, and three well-known classifiers are constructed: the CART decision tree (DT), the multi-layer perceptron (MLP) neural network, and the support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. In addition, the combinations of IG with DT and IG with SVM perform best and second best for small- and large-scale datasets, respectively.
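As an illustration of the IG criterion used above, the sketch below computes the information gain of a discrete feature: the class entropy minus the expected class entropy after splitting on the feature. The feature and label values in the example are made up for demonstration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG of a discrete feature: H(class) minus the weighted average
    entropy of the class within each feature-value subset -- the
    ranking criterion the paper compares against GA-based selection."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder
```

A feature that perfectly separates the classes attains the full class entropy as its gain, while an irrelevant feature scores zero, so ranking by IG and keeping the top-k features implements the selection step.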

Keywords: data mining, feature selection, pattern classification, dimensionality reduction

Procedia PDF Downloads 652
36090 Recycling of End of Life Concrete Based on C2CA Method

Authors: Somayeh Lotfi, Manuel Eggimann, Eckhard Wagner, Radosław Mróz, Jan Deja

Abstract:

One of the main environmental challenges in the construction industry is the strong social pressure to decrease the bulk transport of building materials in urban environments. In view of this, applying more in-situ recycling technologies to construction and demolition waste (CDW) is an urgent need. The European C2CA project is developing a novel concrete recycling technology that can be performed purely mechanically and in situ. The technology combines smart demolition, gentle grinding of the crushed concrete in an autogenous mill, and a novel dry classification technology called ADR to remove the fines. The feasibility of this recycling process was examined in demonstration projects involving, in total, 20,000 tons of end-of-life (EOL) concrete from two office towers in Groningen, The Netherlands. This paper concentrates on the second C2CA demonstration project, in which EOL concrete was recycled on an industrial site. After recycling, the properties of the produced recycled aggregate (RA) were investigated, and the results are presented. An experimental study was carried out on the mechanical and durability properties of the produced recycled aggregate concrete (RAC) compared with those of natural aggregate concrete (NAC), the aim being to understand how RA substitution, the w/c ratio, and the type of cement affect the properties of RAC. To this end, two series of reference concrete with strength classes C25/30 and C45/55 were produced using natural coarse aggregates (rounded and crushed) and natural sand. The RAC series were created by replacing parts of the natural aggregate, resulting in concretes with 0%, 20%, 50%, and 100% RA. The results show that the concrete mix design and the type of cement have a decisive effect on the properties of RAC, whereas substituting RA, even at a high replacement level, has a minor and manageable impact on RAC performance.
This result is a good indication of the feasibility of using RA in structural concrete, provided the mix design is modified and a proper type of cement is used.

Keywords: C2CA, ADR, concrete recycling, recycled aggregate, durability

Procedia PDF Downloads 375
36089 Effect of Micaceous Iron Oxide and Nanocrystalline Al on the Electrochemical Behavior of Aliphatic Amine Cured Epoxy Coating

Authors: Asiful H. Seikh, Jabair A. Mohammed, Ubair A. Samad, Mohammad A. Alam, Saeed M. Al-Zahrani, El-Sayed M. Sherif

Abstract:

Three coating formulations were fabricated by incorporating different percentages of micaceous iron oxide (MIO) (1, 2, and 3 wt%) with ball-milled nanocrystalline Al (2 wt%) particles, a loading optimized in earlier work. These coatings were characterized by several methods, namely SEM, TGA, pendulum hardness, scratch testing, and nano-indentation. EIS measurements were carried out to determine the effect of adding MIO powder on the corrosion behavior of the fabricated coatings in 3.5 wt% NaCl solution. To assess the effect of immersion time on the corrosion and degradation of the prepared coatings, EIS data were also acquired after various exposure periods, i.e., 1 h, 7 d, 14 d, 21 d, and 30 d in the test chloride solution. The EIS data show that the coating with 2 wt% MIO provided the highest corrosion resistance among all the coatings, and that this effect persisted across all immersion periods; the MIO-incorporated coatings were, however, less corrosion resistant than the Al-based epoxy coatings. It was also shown that with prolonged immersion the corrosion resistance declined after 7 d and then increased again over the longer immersion periods, i.e., 14 d, 21 d, and 30 d, as oxide products formed on the coating surface. The results obtained from both mechanical and electrochemical testing confirmed that the fabricated coating with 2 wt% MIO exhibited better hardness and higher corrosion resistance than the coatings with 1 wt% and 3 wt% MIO.

Keywords: epoxy coatings, nanomaterials, corrosion resistance, EIS, nanoindentation

Procedia PDF Downloads 45
36088 Time Compression in Engineer-to-Order Industry: A Case Study of a Norwegian Shipbuilding Industry

Authors: Tarek Fatouh, Chehab Elbelehy, Alaa Abdelsalam, Eman Elakkad, Alaa Abdelshafie

Abstract:

This paper explores the possibility of time compression in engineer-to-order production networks. A case study research method is applied to a Norwegian shipbuilding project, implementing the value stream mapping lean tool with total cycle time as the unit of analysis. The analysis revealed the time deviations of the planned tasks in one of the processes of the shipbuilding project, and the authors then developed a future state map by removing time wastes from the value stream process.

Keywords: engineer to order, total cycle time, value stream mapping, shipbuilding

Procedia PDF Downloads 143
36087 A Case-Based Reasoning-Decision Tree Hybrid System for Stock Selection

Authors: Yaojun Wang, Yaoqing Wang

Abstract:

Stock selection is an important decision-making problem. Many machine learning and data mining technologies have been employed to build automatic stock-selection systems. A profitable stock-selection system should consider both the stock’s investment value and the market timing. In this paper, we present a hybrid system that engages both for stock selection: it uses a case-based reasoning (CBR) model to execute the stock classification and a decision-tree model to help with market timing and stock selection. The experiments show that the performance of this hybrid system is better than that of other techniques with regard to classification accuracy, average return, and the Sharpe ratio.
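The two-stage design can be sketched as follows. The paper's actual case representation, features, and tree structure are not given, so the nearest-neighbour retrieval and the one-node timing rule below are hypothetical stand-ins for the CBR and decision-tree components:

```python
def retrieve_nearest(case_base, query):
    """CBR step: 1-nearest-neighbour retrieval over numeric feature
    vectors; returns the class label of the most similar past case."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(case_base, key=lambda case: dist(case[0], query))[1]

def hybrid_select(case_base, stock_features, market_signal, buy_threshold):
    """Stage 1 (CBR): classify the stock's investment value from past
    cases. Stage 2 (decision rule): buy only when the market-timing
    signal clears the threshold -- a one-node stand-in for the paper's
    decision tree. All names and thresholds are illustrative."""
    value_class = retrieve_nearest(case_base, stock_features)
    if value_class == "good" and market_signal > buy_threshold:
        return "buy"
    return "hold"
```

The point of the split is that either stage alone can veto a trade: a stock classified as a good value is still held when the timing signal is weak, and a strong timing signal never triggers a buy of a poorly classified stock.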

Keywords: case-based reasoning, decision tree, stock selection, machine learning

Procedia PDF Downloads 397
36086 Study on Disaster Prevention Plan for an Electronic Industry in Thailand

Authors: S. Pullteap, M. Pathomsuriyaporn

Abstract:

This article investigates employees’ opinions of the factors that affect the flood prevention and corrective action plan at an electronics plant, Sharp Manufacturing (Thailand) Co., Ltd. Survey data from 175 workers and supervisors were selected for analysis. The results show that employees place a high level of emphasis (77.8%) on the need for a subsidy at the time of a disaster, while the plan focusing on flood protection of the rehabilitation equipment is valued at an intermediate level (79.8%). Hypothesis testing found that education level affects the needs factor at the time of a flood disaster. Moreover, most respondents give priority to the flood disaster risk management factor. Consequently, the flood prevention plan is valued at a high level, especially information monitoring, which supervisors rate at 93.4%. The respondents largely assume that a flood would have an impact on the industry (up to 80%), so the focus on flood management plans is substantial.

Keywords: flood prevention plan, flood event, electronic industrial plant, disaster, risk management

Procedia PDF Downloads 306
36085 Time and Kinematics of Moving Bodies

Authors: Muhammad Omer Farooq Saeed

Abstract:

The purpose of this proposal is to find out what time actually is, and to understand the natural behavior of time and light corresponding to the motion of bodies at relatively high speeds. The paper’s chief concern is to address possible demerits in the equations of relativity, thereby providing some valuable extensions to those equations and concepts. The idea develops the most basic conception of the relative motion of a body with respect to space, together with a real understanding of time and of the variation of a body’s energy in different frames of reference. The results develop a completely new understanding of time, relative motion, and energy, along with some extensions to the equations of special relativity, most importantly the time dilation and mass-energy relationships, intended to explain all frames of a body in one go. The proposal also raises serious questions about the validity of the principle of equivalence, on which general relativity is based, most importantly a case of bending light that appears to contradict the theory’s own governing conception of space-time. The results also predict the existence of a completely new field that explains just how and why bodies acquire energy in space-time; this field explains the production of gravitational waves in terms of time. All in all, this proposal challenges the formulas and conceptions of special and general relativity, respectively.

Keywords: time, relative motion, energy, speed, frame of reference, photon, curvature, space-time, time-differentials

Procedia PDF Downloads 52
36084 Coarse Grid Computational Fluid Dynamics Fire Simulations

Authors: Wolfram Jahn, Jose Manuel Munita

Abstract:

While computational fluid dynamics (CFD) simulations of fire scenarios are commonly used in the design of buildings, less attention has been given to CFD simulations as an operational tool for the fire services. The main reason for this lack of attention is that CFD simulations typically take a long time to complete, so their results would not be available in time to be of use during an emergency. Firefighters often face uncertain conditions when entering a building to attack a fire, and they would greatly benefit from a technology based on predictive fire simulations able to assist their decision-making process. The principal constraint on faster CFD simulations is the fine grid necessary to accurately resolve the physical processes that govern a fire. This paper explores the possibility of overcoming this constraint by using coarse grid CFD simulations for fire scenarios, and proposes a methodology for turning the simulation results into information that firefighters can use during an emergency. Data from real-scale compartment fire tests were used to compare CFD fire models with different grid arrangements, and empirical correlations were obtained to interpolate data points within the grids. The results show that the strongly predominant effect of the fire’s heat release rate on the fluid dynamics allows coarse grids to be used with relatively little overall impact on the simulation results. Simulations with an acceptable level of accuracy could be run in real time, thus making them useful as a forecasting tool for emergency response purposes.
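The paper's empirical correlations for interpolating data points into the coarse grids are not reproduced here; as a generic stand-in, the sketch below shows plain bilinear interpolation of a coarse-grid scalar field (e.g., temperature) at an arbitrary point:

```python
def bilinear(grid, dx, dy, x, y):
    """Interpolate a coarse-grid field at point (x, y), where
    grid[j][i] holds the value at (i*dx, j*dy). This is a generic
    sketch of coarse-grid infill, not the authors' actual correlations;
    grid spacing and field values are illustrative."""
    i, j = int(x // dx), int(y // dy)          # cell containing the point
    tx, ty = (x - i * dx) / dx, (y - j * dy) / dy  # fractional offsets
    return ((1 - tx) * (1 - ty) * grid[j][i]
            + tx * (1 - ty) * grid[j][i + 1]
            + (1 - tx) * ty * grid[j + 1][i]
            + tx * ty * grid[j + 1][i + 1])
```

At a cell centre the result is the mean of the four surrounding coarse-grid values, and at a grid node it reproduces the stored value exactly, which is the minimal consistency one wants from any infill scheme.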

Keywords: CFD, fire simulations, emergency response, forecast

Procedia PDF Downloads 303
36083 Analyzing the Effectiveness of a Bank of Parallel Resistors, as a Burden Compensation Technique for Current Transformer's Burden, Using LabVIEW™ Data Acquisition Tool

Authors: Dilson Subedi

Abstract:

Current transformers (CTs) are an integral part of the power system because they provide a proportional, safe amount of current for protection and measurement applications. However, with the upgrading of electromechanical relays to numerical relays and of electromechanical energy meters to digital meters, the connected burden, which defines some of the CT characteristics, has drastically reduced. This has led to the system experiencing high currents that damage the connected relays and meters. Since protection and metering equipment is designed to withstand only a certain amount of current for a certain time, these high currents pose a risk to both personnel and equipment. During such instances, the CT saturation characteristics have a strong influence on the safety of personnel and equipment and on the reliability of the protection and metering system. This paper shows the effectiveness of a bank of parallel-connected resistors, as a burden compensation technique, in compensating the burden of under-burdened CTs. The response of the CT in the case of failure of one or more resistors at different levels of overcurrent is captured using LabVIEW™ data acquisition hardware (DAQ), and the analysis is performed on the real-time data gathered in LabVIEW™. The variation of the current transformer saturation characteristics with changes in burden is also discussed.
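The arithmetic behind the bank is straightforward. Assuming, as one plausible arrangement (the paper's circuit is not reproduced here), that the parallel resistor bank sits in series with the low-impedance relay burden, the total burden seen by the CT secondary and its change when branches fail can be sketched as:

```python
def bank_resistance(resistors, failed=()):
    """Equivalent resistance of the parallel compensation bank, with an
    optional set of failed (open-circuited) branch indices -- the
    failure scenario the LabVIEW DAQ study captures."""
    conductance = sum(1.0 / r for i, r in enumerate(resistors) if i not in failed)
    return float("inf") if conductance == 0 else 1.0 / conductance

def total_burden(relay_burden, resistors, failed=()):
    """Total burden (ohms) on the CT secondary, assuming the bank is in
    series with the relay burden. Values here are illustrative, not
    from the paper."""
    return relay_burden + bank_resistance(resistors, failed)
```

Each open-circuited branch raises the bank's equivalent resistance and hence the total burden, which is why the study tracks the CT response as successive resistors fail; if every branch fails, the series path opens entirely, a hazardous condition for a CT secondary.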

Keywords: accuracy limiting factor, burden, burden compensation, current transformer

Procedia PDF Downloads 234
36082 High-Throughput Artificial Guide RNA Sequence Design for Type I, II and III CRISPR/Cas-Mediated Genome Editing

Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii

Abstract:

A huge revolution in genome engineering has emerged from the discovery of CRISPR (clustered regularly interspaced short palindromic repeats) and CRISPR-associated (Cas) genes in bacteria. The function of the type II Streptococcus pyogenes (Sp) CRISPR/Cas9 system has been confirmed in various species. Other S. thermophilus (St) CRISPR-Cas systems, CRISPR1-Cas and CRISPR3-Cas, have also been reported to prevent phage infection. The CRISPR1-Cas system interferes by cleaving foreign dsDNA entering the cell in a length-specific and orientation-dependent manner. The S. thermophilus CRISPR3-Cas system likewise cleaves phage dsDNA genomes at the same specific position inside the targeted protospacer as observed for the CRISPR1-Cas system. Notably, for effective DNA cleavage activity, RNA-guided Cas9 orthologs require their own specific PAM (protospacer adjacent motif) sequences, and activity levels depend on the sequence of the protospacer and on specific combinations of favorable PAM bases. Therefore, given the specific PAM length and sequence followed by a constant target-site length for the three Cas9 orthologs, a well-organized procedure is required for high-throughput and accurate mining of possible target sites in a large genomic dataset. Consequently, we created a reliable procedure to explore potential gRNA sequences for the type I (Streptococcus thermophilus), type II (Streptococcus pyogenes), and type III (Streptococcus thermophilus) CRISPR/Cas systems. To mine CRISPR target sites, four different search modes for sgRNA binding to the target DNA strand were applied: i) coding-strand search, ii) anti-coding-strand search, iii) both-strand search, and iv) paired-gRNA search. The output of this procedure highlights the power of comparative genome mining for different CRISPR/Cas systems.
This could yield a repertoire of Cas9 variants with expanded capabilities of gRNA design, and will pave the way for further advances in genome and epigenome engineering.
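The coding-strand and anti-coding-strand search modes described above amount to scanning both strands of a sequence for a protospacer followed by the system-specific PAM. A minimal sketch is shown below (SpCas9-style NGG with a 20-nt protospacer; the St systems would swap in their own PAM and site length, and the example sequence is made up):

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def find_targets(seq, pam="NGG", protospacer_len=20):
    """Scan both strands for protospacer+PAM target sites. Returns
    (strand, start, protospacer) tuples -- a minimal sketch of the
    coding/anti-coding search modes; 'N' in the PAM matches any base."""
    def matches_pam(site):
        return all(p == "N" or p == b for p, b in zip(pam, site))
    hits = []
    # "+" scans the given (coding) strand; "-" scans its reverse complement
    for strand, s in (("+", seq), ("-", seq.translate(COMPLEMENT)[::-1])):
        for i in range(len(s) - protospacer_len - len(pam) + 1):
            site = s[i + protospacer_len:i + protospacer_len + len(pam)]
            if matches_pam(site):
                hits.append((strand, i, s[i:i + protospacer_len]))
    return hits
```

Running both strands covers the coding and anti-coding search modes; the both-strand mode is simply their union, and paired-gRNA search would post-process the hit list for suitably spaced pairs on opposite strands.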

Keywords: CRISPR/Cas systems, gRNA mining, Streptococcus pyogenes, Streptococcus thermophilus

Procedia PDF Downloads 244