Search results for: uncorrected refractive error
861 Research on Robot Adaptive Polishing Control Technology
Authors: Yi Ming Zhang, Zhan Xi Wang, Hang Chen, Gang Wang
Abstract:
Manual polishing suffers from high labor intensity, low production efficiency and difficulty in guaranteeing consistent polishing quality, so it is increasingly necessary to replace manual polishing with robot polishing. Polishing force directly affects the quality of polishing, so accurate tracking and control of the polishing force is one of the most important conditions for improving the accuracy of robot polishing. Traditional force control strategies struggle to adapt to the strong coupling of force control and position control during the robot polishing process. Therefore, based on an analysis of force-based impedance control and position-based impedance control, this paper proposes a new type of adaptive controller. Built on active compliance control with force feedback, the controller can adaptively estimate the stiffness and position of the external environment and eliminate the steady-state force error produced by traditional impedance control. The simulation results show that the adaptive controller has good adaptability to changing environmental positions and environmental stiffness, and can accurately track and control the polishing force.
Keywords: robot polishing, force feedback, impedance control, adaptive control
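A minimal sketch of the force-tracking idea, assuming a one-dimensional tool, a linear-spring environment and an online gradient estimator for the environment stiffness (the surface location is taken as known here for brevity; all gains and values are illustrative, not the paper's):

```python
import numpy as np

# Position-based impedance control with an online gradient estimate of the
# environment stiffness; adaptation removes the steady-state force error.
M, B, K = 1.0, 40.0, 400.0          # desired impedance parameters
Ke, xe = 5e3, 0.0                   # true environment stiffness and surface
Fd, dt = 10.0, 1e-3                 # desired polishing force [N], step [s]
x, xdot, Ke_hat = -0.005, 0.0, 2e3  # tool above surface; poor initial estimate

for _ in range(6000):
    pen = max(x - xe, 0.0)
    Fe = Ke * pen                                   # measured contact force
    if pen > 0:                                     # adapt stiffness estimate
        Ke_hat += 2.5e5 * (Fe - Ke_hat * pen) * pen * dt
    xr = xe + Fd * (1 / Ke_hat + 1 / K)             # force-based reference
    xddot = (-B * xdot - K * (x - xr) - Fe) / M     # impedance dynamics
    xdot += xddot * dt
    x += xdot * dt

print(f"Ke_hat = {Ke_hat:.0f} N/m, contact force = {Ke * max(x - xe, 0):.2f} N")
```

With a fixed, wrong stiffness estimate the contact force settles off-target; as Ke_hat converges, the steady-state force approaches the 10 N setpoint, which is the effect the adaptive controller is designed to achieve.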
Procedia PDF Downloads 199
860 Presenting a Model for Predicting the State of Being Accident-Prone of Passages According to Neural Network and Spatial Data Analysis
Authors: Hamd Rezaeifar, Hamid Reza Sahriari
Abstract:
Accidents are one of the challenges of modern life. Because the number of accident victims in Iran, as well as the volume of internal transportation, is increasing day by day, studying the factors that influence accidents and identifying suitable models and parameters for this issue are absolutely essential. The main purpose of this research is to study the factors and spatial data affecting accidents in Mashhad during 2007-2008. By matching spatial layers onto each other and relating them to accident locations, the existing accident databases were first completed: landmarks of each accident were added, along with fields recording the existence or non-existence of phenomena that influence accidents. In the next step, data mining tools and neural network analysis were used to evaluate the relationships among these data and to design a logical model for predicting accident-prone spots with minimum error. The model gives very accurate predictions in low-accident spots, yet it has larger errors in accident-prone regions due to a lack of primary data.
Keywords: accident, data mining, neural network, GIS
Procedia PDF Downloads 47
859 Application of Hybrid Honey Bees Mating Optimization Algorithm in Multiuser Detection of Wireless Communication Systems
Abstract:
Wireless communication systems have changed dramatically and shown spectacular evolution over the past two decades. These radio technologies are engaged in an endless quest for high-speed transmission coupled with a constant need to improve transmission quality. Various radio communication systems under development use the code division multiple access (CDMA) technique. This work analyses a hybrid honey bees mating optimization algorithm (HBMO) applied to multiuser detection (MuD) in CDMA communication systems. The HBMO is a swarm-based optimization algorithm that simulates the mating process of real honey bees. We apply a hybridization of HBMO with simulated annealing (SA) in order to improve the solutions generated by the HBMO. Simulation results show that detection based on the hybrid HBMO, in terms of bit error rate (BER), is a viable option compared with the classic detectors from the literature under a Rayleigh flat fading channel.
Keywords: BER, DS-CDMA multiuser detection, genetic algorithm, hybrid HBMO, simulated annealing
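A minimal sketch of the SA refinement stage, assuming a ±1 bit-vector solution and an arbitrary cost function (the HBMO stage, the actual detection likelihood and the cooling schedule are illustrative assumptions, not the paper's):

```python
import numpy as np

# Simulated-annealing polish of a candidate multiuser-detection solution
# (a vector of +/-1 bits), as the hybridization step the paper describes.
def sa_refine(b, cost, T0=1.0, alpha=0.95, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    cur, best = b.copy(), b.copy()
    T = T0
    for _ in range(iters):
        cand = cur.copy()
        cand[rng.integers(len(cand))] *= -1            # flip one user's bit
        delta = cost(cand) - cost(cur)
        if delta < 0 or rng.random() < np.exp(-delta / T):
            cur = cand                                  # Metropolis acceptance
            if cost(cur) < cost(best):
                best = cur.copy()
        T *= alpha                                      # geometric cooling
    return best

# Example cost (hypothetical names): squared residual ||r - S.T @ (A * b)||^2
# for received chips r, signature matrix S and received amplitudes A.
```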
Procedia PDF Downloads 435
858 Reduction of Multiple User Interference for Optical CDMA Systems Using Successive Interference Cancellation Scheme
Authors: Tawfig Eltaif, Hesham A. Bakarman, N. Alsowaidi, M. R. Mokhtar, Malek Harbawi
Abstract:
A primary problem in optical code-division multiple access (OCDMA) systems is the multiple user interference (MUI) noise that results from the overlap among users. In this article, we aim to mitigate this problem by studying an interference cancellation scheme called the successive interference cancellation (SIC) scheme. The scheme is tested on two different detection schemes, spectral amplitude coding (SAC) and direct detection (DS), using partial modified prime (PMP) codes as the signature codes. It was found that the SIC scheme based on both the SAC and DS methods has the potential to suppress intensity noise, that is to say, to mitigate MUI noise. Furthermore, the SIC/DS scheme showed much lower bit error rate (BER) than the SIC/SAC scheme for different magnitudes of effective power. Hence, many more users can be supported by the SIC/DS receiver system.
Keywords: optical code-division multiple access (OCDMA), successive interference cancellation (SIC), multiple user interference (MUI), spectral amplitude coding (SAC), partial modified prime code (PMP)
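A minimal sketch of the SIC idea on a synchronous chip-rate CDMA model, with illustrative codes, amplitudes and noise (the optical SAC/DS front ends of the paper are not modelled):

```python
import numpy as np

# Successive interference cancellation: detect the strongest user first,
# regenerate its contribution and subtract it from the received signal.
rng = np.random.default_rng(0)
K, N = 4, 16                                        # users, chips per bit
S = rng.choice([-1.0, 1.0], (K, N)) / np.sqrt(N)    # signature codes
A = np.array([4.0, 2.0, 1.5, 1.0])                  # received amplitudes
b = rng.choice([-1, 1], K)                          # transmitted bits
r = (A * b) @ S + 0.1 * rng.standard_normal(N)      # received chip vector

residual = r.copy()
b_hat = np.zeros(K, dtype=int)
for k in np.argsort(-A):                            # strongest user first
    y = residual @ S[k]                             # matched-filter statistic
    b_hat[k] = 1 if y >= 0 else -1
    residual -= A[k] * b_hat[k] * S[k]              # cancel detected user

print("bits:", b, "detected:", b_hat)
```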
Procedia PDF Downloads 521
857 Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork
Authors: Toni Maristela C. Estabillo, Michaela V. Matienzo, Mikaela L. Sabangan, Rosette M. Tienzo, Justine L. Bahinting
Abstract:
This study concerns blind watermarking of images with different categories and properties using two algorithms, namely the Discrete Wavelet Transform and the Patchwork Algorithm. A program was created to perform watermark embedding, extraction and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency and security. Image quality is measured by comparing the original properties with those of the processed image. Perceptual transparency is measured by visual inspection in a survey. Security is measured by implementing geometrical and non-geometrical attacks in pass-or-fail testing. The values used to measure these criteria are mostly based on the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The results are based on statistical methods used to collect and interpret data, such as averaging, the z-test and the survey. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone.
Keywords: blind watermarking, discrete wavelet transform algorithm, patchwork algorithm, digital watermark
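For illustration, a minimal sketch of additive embedding in the DWT approximation band and the resulting MSE/PSNR, assuming PyWavelets is available; the patchwork stage and the blind extraction are omitted, and the strength and watermark are illustrative:

```python
import numpy as np
import pywt

# One-level Haar DWT, additive watermark in the approximation band,
# then inverse DWT and the MSE/PSNR quality metrics.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (256, 256)).astype(float)   # stand-in cover
watermark = rng.choice([-1.0, 1.0], (128, 128))          # bipolar mark

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
alpha = 2.0                                              # embedding strength
stego = pywt.idwt2((cA + alpha * watermark, (cH, cV, cD)), "haar")

mse = np.mean((image - stego) ** 2)
psnr = 10 * np.log10(255**2 / mse)
print(f"MSE = {mse:.3f}, PSNR = {psnr:.2f} dB")
```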
Procedia PDF Downloads 268
856 Controlled Nano Texturing in Silicon Wafer for Excellent Optical and Photovoltaic Properties
Authors: Deb Kumar Shah, M. Shaheer Akhtar, Ha Ryeon Lee, O-Bong Yang, Chong Yeal Kim
Abstract:
Crystalline silicon (Si) solar cells are a highly renowned photovoltaic technology and well established as the commercial solar technology; most solar panels installed globally carry crystalline Si modules. At present, the major photovoltaic (PV) market share is held by c-Si solar cells, but the cost of c-Si panels is still very high compared with other PV technologies. In order to reduce the cost of Si solar panels, a few necessary steps must be considered, such as low-cost Si manufacturing, cheap antireflection coating materials, and inexpensive solar panel manufacturing. It is known that the antireflection (AR) layer in a c-Si solar cell is an important component for reducing Fresnel reflection and thereby improving the overall conversion efficiency. Generally, a Si wafer exhibits about 30% reflection because it poses two major intrinsic drawbacks: spectral mismatch loss and high Fresnel reflection loss due to the high contrast of refractive indices between air and the silicon wafer. In recent years, researchers have devoted considerable effort to the search for effective, low-cost AR materials. Silicon nitride (SiNx) is a well-known AR material in commercial c-Si solar cells due to its good deposition on and interaction with passivated Si surfaces. However, SiNx AR layers are usually deposited by the expensive plasma-enhanced chemical vapor deposition (PECVD) process, which has several demerits, such as difficult handling and damage to the Si substrate when secondary electrons collide with the wafer surface during AR coating. It is therefore very important to explore new, low-cost and effective AR deposition processes to cut the manufacturing cost of c-Si solar cells. One can also realize that a nano-texturing process, such as the growth of nanowires, nanorods, nanopyramids or nanopillars on the Si wafer, can provide low reflection on the surface of Si wafer based solar cells; such nanostructures can enhance the antireflection property by providing a larger surface area and effective light trapping. In this work, we report the development of crystalline Si solar cells without an AR layer. The silicon wafer was modified by growing nanowire-like Si nanostructures using a controlled wet etching method and used directly for the fabrication of Si solar cells without AR. The nanostructures on the Si wafer were optimized in terms of size, length and density by changing the etching conditions. Well-defined and aligned wire-like structures were achieved when the etching time was 20 to 30 min. The prepared Si nanostructures displayed a minimum reflectance of ~1.64% at 850 nm, with an average reflectance of ~2.25% in the wavelength range from 400-1000 nm. The nanostructured Si wafer based solar cells achieved power conversion efficiency comparable to that of c-Si solar cells with a SiNx AR layer. From this study, it is confirmed that the reported controlled wet etching method is an easy, facile way to prepare wire-like nanostructures on Si wafers with low reflectance in the whole visible region, and it has great prospects for developing low-cost c-Si solar cells without an AR layer.
Keywords: chemical etching, conversion efficiency, silicon nanostructures, silicon solar cells, surface modification
Procedia PDF Downloads 125
855 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques
Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah
Abstract:
Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges encountered in this exploratory study are also reported, and recommendations for future studies are proposed.
Keywords: BIM, construction projects, cost estimation, NRM, ontology
Procedia PDF Downloads 551
854 Drying Kinetics of Vacuum Dried Beef Meat Slices
Authors: Elif Aykin Dincer, Mustafa Erbas
Abstract:
The vacuum drying behavior of beef slices (10 × 4 × 0.2 cm) was experimentally investigated at temperatures of 60, 70, and 80 °C under 25 mbar ultimate vacuum pressure, and mathematical models (Lewis, Page, Midilli, Two-term, Wang and Singh, and Modified Henderson and Pabis) were fitted to the vacuum drying curves of the beef slices. An increase in drying air temperature resulted in a decrease in drying time: it took approximately 206, 180 and 157 min to dry beef slices from the initial moisture content to a final moisture content of 0.05 kg water/kg dry matter at 60, 70 and 80 °C, respectively. The drying rate also increased with increasing drying temperature. The coefficient of determination (R²), the reduced chi-square (χ²) and the root mean square error (RMSE) were obtained by fitting the six models to the experimental drying data, and the best model, with the highest R² and the lowest χ² and RMSE, was selected to describe the drying characteristics of the beef slices. The Page model showed a better fit to the experimental drying data than the other models. In addition, the effective moisture diffusivities of beef slices under vacuum drying at 60-80 °C varied in the range of 1.05-1.09 × 10⁻¹⁰ m²/s. Consequently, these results can be used to simulate the vacuum drying process of beef slices and improve the efficiency of the drying process.
Keywords: beef slice, drying models, effective diffusivity, vacuum
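A minimal sketch of fitting the Page model, MR(t) = exp(-k·tⁿ), with SciPy; the moisture-ratio values below are illustrative stand-ins, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the Page thin-layer drying model to moisture-ratio data and report
# the goodness-of-fit statistics the study uses for model selection.
t = np.array([0, 20, 40, 60, 80, 100, 120, 150, 180])              # min
mr = np.array([1.0, 0.72, 0.52, 0.38, 0.27, 0.19, 0.13, 0.08, 0.05])

def page(t, k, n):
    return np.exp(-k * t**n)

(k, n), _ = curve_fit(page, t, mr, p0=(0.01, 1.0))
pred = page(t, k, n)
rmse = np.sqrt(np.mean((mr - pred) ** 2))
r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
print(f"k = {k:.4f}, n = {n:.3f}, RMSE = {rmse:.4f}, R2 = {r2:.4f}")
```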
Procedia PDF Downloads 288
853 Digital Image Steganography with Multilayer Security
Authors: Amar Partap Singh Pharwaha, Balkrishan Jindal
Abstract:
In this paper, a new method is developed for hiding an image in a digital image with multilayer security. In the proposed method, the secret image is first encrypted using a flexible-matrix-based symmetric key to add the first layer of security. Another layer of security is then added by encrypting the ciphered data using a Pythagorean-theorem-based method. The ciphered data bits (4 bits) produced after double encryption are then embedded within the digital image in the spatial domain using Least Significant Bit (LSB) substitution. To improve the image quality of the stego-image, an improved form of the pixel adjustment process is proposed. To evaluate the effectiveness of the proposed method, image quality metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), entropy, correlation, mean value and the Universal Image Quality Index (UIQI) are measured. It has been found experimentally that the proposed method provides higher security as well as robustness; the results of this study are quite promising.
Keywords: Pythagorean theorem, pixel adjustment, ciphered data, image hiding, least significant bit, flexible matrix
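A minimal sketch of the final embedding step, 4-bit LSB substitution, together with the PSNR metric; the two encryption layers are assumed to have already produced the 4-bit chunks, and all data are synthetic:

```python
import numpy as np

# Embed 4-bit payload chunks into the four least significant bits of the
# cover pixels, then measure distortion with PSNR.
def embed_lsb4(cover: np.ndarray, nibbles: np.ndarray) -> np.ndarray:
    stego = cover.copy().ravel()
    stego[: nibbles.size] = (stego[: nibbles.size] & 0xF0) | nibbles
    return stego.reshape(cover.shape)

def psnr(original: np.ndarray, processed: np.ndarray) -> float:
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255**2 / mse)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
secret = rng.integers(0, 16, 512, dtype=np.uint8)   # 4-bit ciphered chunks
stego = embed_lsb4(cover, secret)
print(f"PSNR = {psnr(cover, stego):.2f} dB")
```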
Procedia PDF Downloads 337
852 A Greedy Alignment Algorithm Supporting Medication Reconciliation
Authors: David Tresner-Kirsch
Abstract:
Reconciling patient medication lists from multiple sources is a critical task supporting the safe delivery of patient care. Manual reconciliation is a time-consuming and error-prone process, and recently attempts have been made to develop efficiency- and safety-oriented automated support for professionals performing the task. An important capability of any such support system is automated alignment – finding which medications from a list correspond to which medications from a different source, regardless of misspellings, naming differences (e.g. brand name vs. generic), or changes in treatment (e.g. switching a patient from one antidepressant class to another). This work describes a new algorithmic solution to this alignment task, using a greedy matching approach based on string similarity, edit distances, concept extraction and normalization, and synonym search derived from the RxNorm nomenclature. The accuracy of this algorithm was evaluated against a gold-standard corpus of 681 medication records; this evaluation found that the algorithm predicted alignments with 99% precision and 91% recall. This performance is sufficient to support decision support applications for medication reconciliation.
Keywords: clinical decision support, medication reconciliation, natural language processing, RxNorm
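A minimal sketch of the greedy matching core, using plain string similarity from difflib as a stand-in for the paper's edit distances and RxNorm normalization; the threshold and example strings are illustrative:

```python
from difflib import SequenceMatcher

# Greedy one-to-one alignment: score all cross-list pairs, then accept the
# best-scoring pairs first, never reusing a medication from either list.
def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def greedy_align(list_a, list_b, threshold=0.6):
    pairs = sorted(
        ((similarity(a, b), a, b) for a in list_a for b in list_b),
        reverse=True,
    )
    used_a, used_b, aligned = set(), set(), []
    for score, a, b in pairs:
        if score < threshold:
            break
        if a not in used_a and b not in used_b:
            aligned.append((a, b, round(score, 2)))
            used_a.add(a)
            used_b.add(b)
    return aligned

print(greedy_align(["atorvastatin 20 mg", "metformin 500mg"],
                   ["metformin HCl 500 mg", "atorvastatine 20 mg"]))
```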
Procedia PDF Downloads 285
851 Sectoral Energy Consumption in South Africa and Its Implication for Economic Growth
Authors: Kehinde Damilola Ilesanmi, Dev Datt Tewari
Abstract:
South Africa is in its post-industrial era, moving from the primary and secondary sectors to the tertiary sector. The study investigated the impact of disaggregated energy consumption (coal, oil, and electricity) on the primary, secondary and tertiary sectors of the economy between 1980 and 2012 in South Africa. Using a vector error correction model, it was established that South Africa is an energy-dependent economy and that energy (especially electricity and oil) is a limiting factor of growth. This implies that the implementation of energy conservation policies may hamper economic growth. Output growth is significantly outpacing energy supply, which has necessitated load shedding. To meet the excess energy demand, there is a need to increase generating capacity, which will necessitate increased investment in the electricity sector, as well as strategic steps to increase oil production. There is also a need to explore more renewable energy sources in order to meet the growing energy demand without compromising growth and environmental sustainability. Policy makers should also pursue energy efficiency policies, especially at the sectoral level of the economy.
Keywords: causality, economic growth, energy consumption, hypothesis, sectoral output
Procedia PDF Downloads 470
850 A Visual Inspection System for Automotive Sheet Metal Chassis Parts Produced with Cold-Forming Method
Authors: İmren Öztürk Yılmaz, Abdullah Yasin Bilici, Yasin Atalay Candemir
Abstract:
The system consists of four main elements: a motion system, an image acquisition system, image processing software, and a control interface. Parts coming off the production line enter the image processing system on a conveyor belt at the end of the line. 3D scanning of the produced part is performed with a laser scanning system integrated at the entry side of the system. With the 3D scanning method, the position and angle at which parts enter the system are determined; according to the data obtained, parameters such as part origin and conveyor speed are calculated with the designed software, and the robot is informed of the position where it will pick up the part. The robot, which receives this information, takes the produced part off the belt conveyor and presents it to high-resolution cameras for quality control. Measurement processes are carried out with a maximum error of 20 microns, as determined by experiments.
Keywords: quality control, industry 4.0, image processing, automated fault detection, digital visual inspection
Procedia PDF Downloads 113
849 A Sequential Approach for Random-Effects Meta-Analysis
Authors: Samson Henry Dogo, Allan Clark, Elena Kulinskaya
Abstract:
The objective of meta-analysis is to combine results from several independent studies in order to create generalizations and provide an evidence base for decision making. However, recent studies show that the magnitude of effect size estimates reported in many areas of research changes with year of publication, and this can impair the results and conclusions of a meta-analysis. A number of sequential methods have been proposed for monitoring effect size estimates in meta-analysis, but they are based on statistical theory applicable to the fixed effect model (FEM). For the random-effects model (REM), the analysis incorporates the heterogeneity variance, tau-squared, whose estimation creates complications. This paper proposes the use of the Gombay and Serban (2005) truncated CUSUM-type test with asymptotically valid critical values for sequential monitoring of the REM. Simulation results show that the test does not control the Type I error well and is not recommended; further work is required to derive an appropriate test in this important area of application.
Keywords: meta-analysis, random-effects model, sequential test, temporal changes in effect sizes
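For orientation, a generic one-sided CUSUM over standardized effect-size estimates; this is the textbook statistic shown only to illustrate the kind of sequential monitoring discussed, not the Gombay and Serban truncated test evaluated in the paper, and the data are synthetic:

```python
import numpy as np

# One-sided CUSUM: accumulate drift above a reference value k and signal
# when the statistic crosses a decision threshold h.
def cusum(z, k=0.5):
    c, out = 0.0, []
    for zt in z:
        c = max(0.0, c + zt - k)
        out.append(c)
    return np.array(out)

rng = np.random.default_rng(0)
z = np.r_[rng.normal(0, 1, 15), rng.normal(1.0, 1, 10)]  # drift after study 15
stat = cusum(z)
h = 4.0
hits = np.nonzero(stat > h)[0]
print("signal at study:", hits[0] + 1 if hits.size else "none")
```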
Procedia PDF Downloads 467
848 Application of Support Vector Machines in Forecasting Non-Residential
Authors: Wiwat Kittinaraporn, Napat Harnpornchai, Sutja Boonyachut
Abstract:
This paper deals with the application of a machine learning technique, the so-called Support Vector Machine (SVM). The objective of this study is to explore the variables and parameters of forecasting factors in the construction industry in order to build a forecasting model for construction quantity in Thailand; the scope of the research is the non-residential construction quantity in Thailand. There are 44 sets of yearly data available, ranging from 1965 to 2009. The correlation between economic indicators and construction demand with a lag of one year was developed by Apichat Buakla. The selected variables are used to develop SVM models to forecast the non-residential construction quantity in Thailand, and the parameters are selected using the ten-fold cross-validation method. The results are reported in terms of the Mean Absolute Percentage Error (MAPE). The MAPE value for the non-residential construction quantity predicted by epsilon-SVR with a Radial Basis Function (RBF) kernel is 5.90. Analysis of the experimental results shows that the support vector machine modelling technique can be applied to forecast construction quantity time series, which is useful for decision planning and management purposes.
Keywords: forecasting, non-residential, construction, support vector machines
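A minimal sketch of epsilon-SVR with an RBF kernel tuned by ten-fold cross-validation on MAPE, mirroring the paper's setup; X and y are synthetic stand-ins for the lagged economic indicators and the construction quantity (44 yearly records):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((44, 5))
y = 50 + X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0, 0.5, 44)

# Ten-fold cross-validated grid search over C, gamma and epsilon,
# scored with (negative) mean absolute percentage error.
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid={"svr__C": [1, 10, 100],
                "svr__gamma": ["scale", 0.1, 1.0],
                "svr__epsilon": [0.01, 0.1]},
    cv=10, scoring="neg_mean_absolute_percentage_error",
)
grid.fit(X, y)
print(f"best params: {grid.best_params_}")
print(f"CV MAPE = {-grid.best_score_ * 100:.2f}%")
```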
Procedia PDF Downloads 434
847 Sensitivity of the Estimated Output Energy of the Induction Motor to both the Asymmetry Supply Voltage and the Machine Parameters
Authors: Eyhab El-Kharashi, Maher El-Dessouki
Abstract:
The paper is dedicated to a precise assessment of induction motor output energy during unbalanced operation. For many years and until now, the complex voltage unbalance factor (CVUF) has been used alone to assess the output energy of the induction motor, whereas this output energy under asymmetric supply voltage depends not only on the value of the unbalanced voltage but also on the machine parameters. The paper illustrates the variation of the two unbalance factors, the complex voltage unbalance factor (CVUF) and the impedance unbalance factor (IUF), with the positive sequence voltage component, revealing the degree and manner of unbalance in the supply voltage. From this point of view, the paper delineates the current unbalance factor (CUF) to exactly reflect the output energy during unbalanced operation. The paper proceeds to illustrate the importance of using this factor in multi-machine systems for precise prediction of output energy during unbalanced operation. The use of the proposed unbalance factor (CUF) avoids the accumulation of error due to more than one machine in the system, which is expected if only the complex voltage unbalance factor (CVUF) is used.
Keywords: induction motor, electromagnetic torque, voltage unbalance, energy conversion
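A minimal sketch of the underlying sequence-component algebra: the CVUF is the ratio of negative- to positive-sequence voltage, and the paper's CUF is the analogous ratio for currents, which additionally reflects the machine impedance; the sample phasors are illustrative:

```python
import numpy as np

# Fortescue symmetrical components of a three-phase set of phasors.
a = np.exp(2j * np.pi / 3)

def sequence_components(Va, Vb, Vc):
    V0 = (Va + Vb + Vc) / 3                # zero sequence
    V1 = (Va + a * Vb + a**2 * Vc) / 3     # positive sequence
    V2 = (Va + a**2 * Vb + a * Vc) / 3     # negative sequence
    return V0, V1, V2

Va = 230 * np.exp(1j * 0)
Vb = 225 * np.exp(1j * (-2 * np.pi / 3 + np.deg2rad(2)))
Vc = 218 * np.exp(1j * (2 * np.pi / 3))

_, V1, V2 = sequence_components(Va, Vb, Vc)
cvuf = V2 / V1                             # complex voltage unbalance factor
print(f"|CVUF| = {abs(cvuf) * 100:.2f}%, "
      f"angle = {np.degrees(np.angle(cvuf)):.1f} deg")
```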
Procedia PDF Downloads 557
846 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia
Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza
Abstract:
In this work, the applicability of statistical methods for estimating missing precipitation data was compared and evaluated in the basin of the Lenguazaque River, located in the departments of Cundinamarca and Boyacá, Colombia. The methods used were simple linear regression, the distance ratio, local averages, mean ratios, correlation with nearby stations, and the multiple regression method. The analysis used to determine the effectiveness of the methods was performed with three statistical tools: the correlation coefficient (r²), the standard error of estimation, and the Bland and Altman test of agreement. The analysis was performed using real rainfall values removed at random in each of the seasons and then estimated using the mentioned methodologies to fill in the missing values. It was determined that, under the conditions considered, the methods with the highest performance and accuracy are the multiple regression method with three nearby stations and a random application scheme supported by the precipitation behavior of the related data sets.
Keywords: statistical comparison, precipitation data, river subbasin, Bland and Altman
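A minimal sketch of the best-performing approach, multiple regression on three nearby stations, with withheld values re-estimated as in the paper's validation scheme; the rainfall data are synthetic stand-ins:

```python
import numpy as np

# Fit P_target = b0 + b1*P1 + b2*P2 + b3*P3 on complete records, then
# estimate the values that were withheld to simulate missing data.
rng = np.random.default_rng(3)
P_near = rng.gamma(2.0, 20.0, (120, 3))          # monthly rainfall, 3 stations
P_tgt = 5 + P_near @ np.array([0.4, 0.3, 0.25]) + rng.normal(0, 5, 120)

X = np.column_stack([np.ones(100), P_near[:100]])      # complete records
beta, *_ = np.linalg.lstsq(X, P_tgt[:100], rcond=None)

X_miss = np.column_stack([np.ones(20), P_near[100:]])  # "missing" period
est = X_miss @ beta
r2 = np.corrcoef(est, P_tgt[100:])[0, 1] ** 2
print(f"r^2 on withheld values = {r2:.3f}")
```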
Procedia PDF Downloads 467
845 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physico-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature; however, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such approaches may not effectively reflect the real climatic conditions at an interpolated point, and quantifying local UHI for extensive areas based on weather station observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST, with data from Landsat, ASTER, or MODIS extensively used; indeed, LST has an indirect but significant influence on air temperature. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps. First, a GWR model estimates NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) with LST from satellite observations (predictor); at this step, MODIS data from the Terra satellite at 1 km spatial resolution are employed, and two time periods are considered according to the satellite revisit period, i.e., 10:30 am and 9:30 pm. Afterward, the results are downscaled to 30 m spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and the multispectral information provided by the Landsat mission, in particular the albedo, together with the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 m; albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 m), were validated by cross-validation relying on indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE); all the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, was employed to test the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
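A minimal sketch of the GWR core, local weighted least squares with a Gaussian distance kernel, using synthetic stand-ins for station temperatures and LST; bandwidth selection and the two-step MODIS/Landsat downscaling are omitted:

```python
import numpy as np

# At each prediction point, fit weighted least squares with weights that
# decay with distance, so regression coefficients vary over space.
def gwr_predict(coords_obs, y, X, coords_pred, X_pred, bandwidth):
    Xd = np.column_stack([np.ones(len(X)), X])
    preds = []
    for c, x_row in zip(coords_pred, X_pred):
        d = np.linalg.norm(coords_obs - c, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)               # Gaussian kernel
        W = np.diag(w)
        beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)   # local WLS
        preds.append(np.r_[1.0, x_row] @ beta)
    return np.array(preds)

rng = np.random.default_rng(7)
coords = rng.uniform(0, 40, (60, 2))                   # station x, y [km]
lst = 20 + 0.2 * coords[:, 0] + rng.normal(0, 1, 60)   # LST stand-in [C]
tair = 0.8 * lst + 2 + rng.normal(0, 0.5, 60)          # station air temp [C]

grid_pts = rng.uniform(0, 40, (5, 2))
lst_grid = 20 + 0.2 * grid_pts[:, 0]
print(gwr_predict(coords, tair, lst[:, None], grid_pts, lst_grid[:, None], 10.0))
```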
Procedia PDF Downloads 194
844 Macroeconomic Impact of Economic Growth on Unemployment: A Case of South Africa
Authors: Ashika Govender
Abstract:
This study seeks to determine whether Okun's law is valid for the South African economy, using time series data for the period 2004 to 2014. The data were obtained from the South African Reserve Bank and Stats SA. The stationarity of the variables was analysed by applying unit root tests via the Augmented Dickey-Fuller (ADF), Phillips-Perron (PP), and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests. The study used an ordinary least squares (OLS) model to analyse the dynamic version of Okun's law, and an Error Correction Model (ECM) to analyse the short-run impact of GDP growth on unemployment as well as the speed of adjustment. The results indicate both a short-run and a long-run relationship between the unemployment rate and the GDP growth rate in the period 2004q1-2014q4, suggesting that Okun's law is valid for the South African economy: with a 1 percent increase in GDP, unemployment can decrease by 0.13 percent, ceteris paribus. The research culminates in important policy recommendations, highlighting the relationship between unemployment and economic growth in the spirit of the National Development Plan.
Keywords: unemployment, economic growth, Okun's law, South Africa
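A minimal sketch of the testing sequence, an ADF unit-root test, a long-run OLS relation and an error-correction model, assuming statsmodels; u and g are synthetic stand-ins for the quarterly unemployment and GDP growth series:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n = 44                                    # 2004q1-2014q4
g = 3 + rng.normal(0, 1, n)               # GDP growth (%)
u = 25 - 0.13 * np.cumsum(g - g.mean()) + rng.normal(0, 0.3, n)

print("ADF p-value, unemployment:", adfuller(u)[1])

long_run = sm.OLS(u, sm.add_constant(g)).fit()    # long-run relation
ect = long_run.resid                              # error-correction term

# ECM: regress the change in unemployment on the change in growth and
# the lagged error-correction term (its coefficient is the adjustment speed).
du = np.diff(u)
X = sm.add_constant(np.column_stack([np.diff(g), ect[:-1]]))
ecm = sm.OLS(du, X).fit()
print(ecm.params)   # [const, short-run Okun coefficient, adjustment speed]
```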
Procedia PDF Downloads 272
843 Prediction of the Mechanical Power in Wind Turbine Powered Car Using Velocity Analysis
Authors: Abdelrahman Alghazali, Youssef Kassem, Hüseyin Çamur, Ozan Erenay
Abstract:
The Savonius is a drag-type vertical axis wind turbine. Savonius wind turbines have a low cut-in speed and can operate at low wind speeds, which makes them suitable for electricity generation or mechanical power in low-power applications such as individual domestic installations. The primary purpose of this work was therefore to investigate the relationship between the type of Savonius rotor and the torque and mechanical power generated, and to illustrate how the rotor type might play an important role in predicting the mechanical power of a wind-turbine-powered car. The main purpose of this paper is to predict and investigate, by means of velocity analysis, the aerodynamic effects on the performance of a wind-turbine-powered car that converts wind energy into the mechanical energy needed to overcome the load that rotates the main shaft. The predicted results, based on theoretical analysis, were compared with experimental results obtained from the literature; the error between the two was approximately 20%. The torque was predicted at a wind speed of 4 m/s and an angular velocity of 130 RPM, according to meteorological statistics for Northern Cyprus.
Keywords: mechanical power, torque, Savonius rotor, wind car
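A minimal sketch of the power chain at the stated operating point (4 m/s, 130 RPM); the swept area, rotor radius and power coefficient are assumed illustrative values, not the paper's:

```python
import numpy as np

# Available wind power -> mechanical shaft power -> shaft torque.
rho = 1.225          # air density [kg/m^3]
v = 4.0              # wind speed [m/s]
A = 0.5              # swept area [m^2] (assumed)
Cp = 0.18            # power coefficient typical of Savonius rotors (assumed)
R = 0.25             # rotor radius [m] (assumed)

P_wind = 0.5 * rho * A * v**3          # available wind power [W]
P_mech = Cp * P_wind                   # mechanical power at the shaft [W]
omega = 130 * 2 * np.pi / 60           # 130 RPM -> rad/s
T = P_mech / omega                     # shaft torque [N*m]
tsr = omega * R / v                    # tip-speed ratio
print(f"P = {P_mech:.2f} W, T = {T:.3f} N*m, TSR = {tsr:.2f}")
```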
Procedia PDF Downloads 337
842 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach
Authors: Dongkwon Han, Sangho Kim, Sunil Kwon
Abstract:
Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, the key challenge in unconventional gas is the need for advanced production forecasting approaches, owing to the uncertainty and complexity of fluid flow. In this study, an artificial neural network (ANN) model that integrates machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells in the Eagle Ford shale basin was used for testing and training the ANN model. Input data related to hydraulic fracturing, well completion and shale gas productivity were selected, and the output is cumulative production. The performance of the ANN using all data sets, clustering, and variable importance (VI) models was compared in terms of the mean absolute percentage error (MAPE). The MAPE values were 44.22% for the ANN using all data sets; 10.08%, 5.26% and 6.35% for clusters 1, 2 and 3, respectively; and 32.23% (ANN VI) and 23.19% (SVM VI) for the variable importance models. The results showed that the pre-trained ANN models provide more accurate results than the ANN model using all data sets.
Keywords: unconventional gas, artificial neural network, machine learning, clustering, variables importance
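A minimal sketch of the cluster-then-train idea, one ANN per KMeans cluster with MAPE reported per cluster, using synthetic stand-ins for the 129 wells (in-sample MAPE only, for brevity):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (129, 6))                               # completion inputs
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 2, 129)  # cum. production

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# Group wells by input features, then fit a small ANN per cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for c in range(3):
    m = labels == c
    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16,),
                                     max_iter=2000, random_state=0))
    ann.fit(X[m], y[m])
    print(f"cluster {c}: n={m.sum()}, MAPE={mape(y[m], ann.predict(X[m])):.2f}%")
```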
Procedia PDF Downloads 196
841 Evaluation of Three Digital Graphical Methods of Baseflow Separation Techniques in the Tekeze Water Basin in Ethiopia
Authors: Alebachew Halefom, Navsal Kumar, Arunava Poddar
Abstract:
The purpose of this work is to specify the parameter values and the base flow index (BFI), and to rank the methods that should be used for base flow separation. Three different digital graphical approaches were chosen and used for comparison. Daily time series discharge data were collected from the site for a period of 30 years (1986 to 2015) and used to evaluate the algorithms. In order to separate the base flow from the surface runoff, the daily recorded streamflow (m³/s) data were used to calibrate the procedures and obtain parameter values for the basin. Additionally, the performance of each method was assessed using the standard error (SE), the coefficient of determination (R²), the flow duration curve (FDC) and the base flow indexes. The findings indicate that, in general, each strategy can be used worldwide to separate base flow; however, the Sliding Interval Method (SIM) performs significantly better than the other two techniques in this basin. The average base flow index was calculated to be 0.72 using the local minimum method, 0.76 using the fixed interval method, and 0.78 using the sliding interval method.
Keywords: baseflow index, digital graphical methods, streamflow, Emba Madre Watershed
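A minimal sketch of the sliding interval method, assigning each day the minimum discharge within a centred window; the half-window length is a tunable parameter (tied to the basin drainage area in the HYSEP family of methods) and the flow series is synthetic:

```python
import numpy as np

# Sliding interval baseflow: the baseflow on day i is the minimum discharge
# within a window of 2N+1 days centred on i.
def sliding_interval_baseflow(q, half_window=5):
    q = np.asarray(q, dtype=float)
    base = np.empty_like(q)
    for i in range(len(q)):
        lo, hi = max(0, i - half_window), min(len(q), i + half_window + 1)
        base[i] = q[lo:hi].min()
    return base

rng = np.random.default_rng(5)
flow = 5 + np.abs(np.cumsum(rng.normal(0, 1, 365))) + rng.gamma(1.0, 2.0, 365)
baseflow = sliding_interval_baseflow(flow)
print(f"BFI = {baseflow.sum() / flow.sum():.2f}")   # base flow index
```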
Procedia PDF Downloads 79
840 Impact of Different Modulation Techniques on the Performance of Free-Space Optics
Authors: Naman Singla, Ajay Pal Singh Chauhan
Abstract:
As the demand for high bit rates and high bandwidth increases at a rapid rate, there is a need to examine this problem and find a technology that provides both. One possible solution is the use of optical fiber, which provides bandwidth in the THz range. However, optical fiber has the disadvantages of high cost and limited deployment, since it is not possible to reach every location on earth, and its high maintenance requirements add further cost. Another technology, closely related to optical fiber, is Free Space Optics (FSO). FSO is a line-of-sight technology in which a modulated optical beam, whether infrared or visible, is used to transfer information from one point to another through the atmosphere, which acts as the channel. This paper concentrates on analyzing the performance of FSO in terms of bit error rate (BER) and quality factor (Q) using different modulation techniques, such as non-return-to-zero on-off keying (NRZ-OOK), differential phase shift keying (DPSK) and differential quadrature phase shift keying (DQPSK), using OptiSystem software. The findings of this paper show that the FSO system based on the DQPSK modulation technique performs best.
Keywords: attenuation, bit rate, free space optics, link length
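A minimal sketch of the textbook relation between the quality factor Q and BER that such OptiSystem-style analyses report, alongside the standard DPSK BER formula; the values are illustrative:

```python
import numpy as np
from scipy.special import erfc

# BER = (1/2) erfc(Q / sqrt(2)) links the reported quality factor to BER.
def ber_from_q(q):
    return 0.5 * erfc(q / np.sqrt(2))

for q in [3, 4, 5, 6, 7]:
    print(f"Q = {q}: BER = {ber_from_q(q):.2e}")

# Standard DPSK bit error rate: BER = (1/2) exp(-Eb/N0).
ebn0_db = 10.0
ebn0 = 10 ** (ebn0_db / 10)
print(f"DPSK at Eb/N0 = {ebn0_db} dB: BER = {0.5 * np.exp(-ebn0):.2e}")
```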
Procedia PDF Downloads 347
839 Flexible Capacitive Sensors Based on Paper Sheets
Authors: Mojtaba Farzaneh, Majid Baghaei Nejad
Abstract:
This article proposes new flexible capacitive tactile sensors based on paper sheets. The method combines the parameters of the sensor's material and dielectric to form a new model of flexible capacitive sensor, and the article presents a practical explanation of the method's application and advantages. With this new method, it is possible to make a more flexible and accurate sensor than the current models. To assess the performance of the model, a common capacitive sensor is simulated, and the proposed model and one of the existing models are evaluated. The results indicate that the proposed model can enhance the speed and accuracy of the tactile sensor and has less error than the current models. Based on these results, it can be claimed that, in comparison with current models, the proposed model provides more flexibility and more accurate output parameters when the sensor is touched, especially in abnormal situations and on uneven surfaces, and increases accuracy and practicality.
Keywords: capacitive sensor, paper sheets, flexible, tactile, uneven
Procedia PDF Downloads 353
838 Performance Evaluation and DEAR-Based Optimization on Machining Leather Specimens to Reduce Carbonization
Authors: Khaja Moiduddin, Tamer Khalaf, Muthuramalingam Thangaraj
Abstract:
Due to its variety of benefits over traditional cutting techniques, the use of laser cutting technology has risen substantially in recent years. Hot wire machining can cut leather into the required shape by controlling the current through the wire to generate thermal energy. In the present study, an attempt has been made to investigate the effects of performance measures in the hot wire machining process when cutting leather specimens. Carbonization and material removal rate were considered as quality indicators. Burning the leather during machining can create carbon particles, reducing product quality; minimizing the effect of carbon particles is crucial for assuring operator and environmental safety, health, and product quality. Hot wire machining can efficiently cut the specimens by controlling the current through the wire. Taguchi-DEAR-based optimization was also performed, which resulted in the required carbonization and material removal rate. Using the DEAR approach, the optimal parameters of the present study were found with 3.7% prediction error.
Keywords: carbonization, leather, MRR, current
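A minimal sketch of the Taguchi signal-to-noise ratios that feed a Taguchi-DEAR multi-response optimization, larger-the-better for MRR and smaller-the-better for carbonization; the trial values are illustrative, not the paper's measurements:

```python
import numpy as np

# Taguchi S/N ratios: larger-the-better and smaller-the-better forms.
def sn_larger_better(y):
    return -10 * np.log10(np.mean(1.0 / np.asarray(y, float) ** 2))

def sn_smaller_better(y):
    return -10 * np.log10(np.mean(np.asarray(y, float) ** 2))

mrr = [[2.1, 2.3], [3.0, 2.8], [1.7, 1.9]]           # mm^3/min per trial
carbon = [[0.12, 0.10], [0.25, 0.22], [0.08, 0.09]]  # carbonized fraction
for i, (m, c) in enumerate(zip(mrr, carbon), 1):
    print(f"trial {i}: S/N(MRR) = {sn_larger_better(m):.2f} dB, "
          f"S/N(carbonization) = {sn_smaller_better(c):.2f} dB")
```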
Procedia PDF Downloads 64
837 Using Artificial Intelligence Method to Explore the Important Factors in the Reuse of Telecare by the Elderly
Authors: Jui-Chen Huang
Abstract:
This research used an artificial intelligence method to explore the elderly's opinions on the reuse of telecare, its effect on perceived service quality and satisfaction, and the relationship between customer perceived value and intention to reuse. The study conducted a questionnaire survey of the elderly, obtaining a total of 124 valid responses. It adopted a Backpropagation Network (BPN) to propose an effective and feasible analysis method, different from the traditional approach. Two thirds of the samples (82) were taken as training data, and one third (42) as testing data. The training and testing RMSE (root mean square error) values are 0.022 and 0.009 in the BPN, respectively; as shown, the errors are acceptable. By contrast, the training and testing RMSE values are 0.100 and 0.099 in the regression model, respectively. In addition, the results showed that service quality has the greatest effect on the intention to reuse, followed by satisfaction and perceived value. In this respect the Backpropagation Network method outperforms regression analysis, and the result can be used as a reference for future research.
Keywords: artificial intelligence, backpropagation network (BPN), elderly, reuse, telecare
Procedia PDF Downloads 212
836 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method
Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent
Abstract:
A method of modelling the topography used in the simulation of riverbeds is proposed in this paper, which removes the need for data points and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools; the proposed method overcomes this by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces the resources spent modelling bed topography. The method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors that could affect the topography of the ground, by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over bed topography generated by the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, allowing automatic generation of topography for a given situation in future research and removing the need for bed data to be specified.
Keywords: bed topography, FBM, LBM, shallow water, simulations
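A minimal sketch of generating a 2-D FBM surface by spectral synthesis: white noise is filtered with a power-law spectrum S(f) ~ f^-(2H+2), where the Hurst exponent H (0 < H < 1) controls bed roughness; the grid size and H are illustrative:

```python
import numpy as np

# 2-D fractional Brownian motion by spectral synthesis.
def fbm_surface(n=128, hurst=0.7, seed=0):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = np.inf                       # suppress the DC singularity
    amplitude = f ** (-(hurst + 1.0))      # sqrt of S(f) ~ f^-(2H+2)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    surface = np.real(np.fft.ifft2(noise * amplitude))
    return (surface - surface.mean()) / surface.std()   # normalized bed

bed = fbm_surface()
print(bed.shape, round(float(bed.std()), 2))
```

Re-running with updated parameters (a different H or seed) yields a new bed, matching the paper's idea of regenerating topography as erosion and sediment transport change the surface over time.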
Procedia PDF Downloads 98
835 Multiloop Fractional Order PID Controller Tuned Using Cuckoo Algorithm for Two Interacting Conical Tank Process
Authors: U. Sabura Banu, S. K. Lakshmanaprabu
Abstract:
Improvements in meta-heuristic algorithms encourage control engineers to design optimal controllers for industrial processes. Most real-world industrial processes are non-linear multivariable processes with high interaction; even in a sub-process unit, thousands of loops are present, mostly interacting in nature, and optimal controller design for such processes is still a challenging task. Closed-loop controller design by multiloop PID involves a tedious procedure: performing an interaction study, auto-tuning the PID loop with the highest interaction, and finally detuning the controller to accommodate the effects of the other process variables. Fractional order PID controllers have recently been replacing integer order PID controllers, and the design of a Multiloop Fractional Order (MFO) PID controller is still more complicated. The cuckoo algorithm, a swarm intelligence technique, is used to optimally tune the MFO PID controller with ease, minimizing the Integral Time Absolute Error. The closed-loop performance is tested under servo, regulatory and servo-regulatory conditions.
Keywords: Cuckoo algorithm, mutliloop fractional order PID controller, two Interacting conical tank process
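A minimal sketch of the Integral Time Absolute Error (ITAE) criterion that the cuckoo search minimizes when tuning the FOPID gains; the error trace below is an illustrative closed-loop response, not the conical tank process:

```python
import numpy as np

# ITAE = integral of t * |e(t)| dt, approximated by a Riemann sum; late
# errors are penalized more heavily than early transients.
t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-0.8 * t) * np.cos(2.0 * t)     # decaying tracking error
dt = t[1] - t[0]
itae = np.sum(t * np.abs(e)) * dt
print(f"ITAE = {itae:.3f}")
```

In the tuning loop, each candidate set of FOPID gains would be simulated against the plant and scored by this value, with the cuckoo algorithm searching the gain space for the minimum.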
Procedia PDF Downloads 499
834 A Comparative Study of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Extreme Value Theory (EVT) Model in Modeling Value-at-Risk (VaR)
Authors: Longqing Li
Abstract:
The paper addresses the inefficiency of the classical model in measuring Value-at-Risk (VaR) using a normal distribution or a Student's t distribution. Specifically, the paper focuses on the one-day-ahead Value-at-Risk (VaR) of major stock markets' daily returns in the US, UK, China and Hong Kong over the most recent ten years at the 95% confidence level. To improve predictive power and search for the best performing model, the paper proposes two leading alternatives, Extreme Value Theory (EVT) and a family of GARCH models, and compares their relative performance. The main contribution can be summarized in two aspects. First, the paper extends the GARCH family by incorporating EGARCH and TGARCH to shed light on the differences between them in estimating one-day-ahead VaR. Second, to account for the non-normality of the distribution of financial market returns, the paper applies the Generalized Error Distribution (GED), instead of the normal distribution, to govern the innovation term. A dynamic back-testing procedure is employed to assess the performance of each model, the GARCH family and the conditional EVT. The conclusion is that Exponential GARCH yields the best estimate in out-of-sample one-day-ahead VaR forecasting; moreover, the discrepancy in performance between the GARCH and the conditional EVT is indistinguishable.
Keywords: Value-at-Risk, Extreme Value Theory, conditional EVT, backtesting
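A minimal sketch, assuming the Python arch package, of fitting an EGARCH(1,1) with GED innovations and forming the one-day-ahead 95% VaR; the return series is a synthetic stand-in for index returns, and the quantile rescaling reflects the unit-variance standardization of the innovations:

```python
import numpy as np
from arch import arch_model
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(6, 2500)           # stand-in % daily returns

# EGARCH(1,1) with generalized error distribution innovations.
am = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="ged")
res = am.fit(disp="off")
fc = res.forecast(horizon=1)
sigma = np.sqrt(fc.variance.values[-1, 0])  # one-day-ahead volatility
mu = fc.mean.values[-1, 0]

nu = res.params["nu"]                       # fitted GED shape parameter
q05 = stats.gennorm.ppf(0.05, nu)           # GED 5% quantile
q05 /= np.sqrt(stats.gennorm.var(nu))       # rescale to unit variance
var_95 = -(mu + sigma * q05)
print(f"one-day 95% VaR = {var_95:.2f}%")
```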
Procedia PDF Downloads 321
833 The Use of Lane-Centering to Assure the Visible Light Communication Connectivity for a Platoon of Autonomous Vehicles
Authors: Mohammad Y. Abualhoul, Edgar Talavera Munoz, Fawzi Nashashibi
Abstract:
The newly emerging Visible Light Communication (VLC) technology has been subject to intensive investigation and evaluation and has lately been deployed in the context of convoy-based applications for Intelligent Transportation Systems (ITS). The technology's limitations have been defined and addressed by different solution proposals targeting the crucial alignment and mobility limitations. In this paper, we propose the incorporation of VLC technology and the Lane-Centering (LC) technique to assure VLC connectivity by keeping the autonomous vehicle aligned to the lane center, using vision-based lane detection, in a convoy-based formation. Such a combination can ensure optical communication connectivity with a lateral error of less than 30 cm. As long as the road lanes are detectable, the evaluated system showed stable behavior independently of the inter-vehicle distances and without the need for any information exchanged with the remote vehicles. The evaluation of the proposed system is verified using a VLC prototype and empirical results from the LC application running over 60 km on the Madrid M40 highway.
Keywords: visible light communication, lane-centering, platooning, intelligent transportation systems, road safety applications
Procedia PDF Downloads 171
832 Comparison of Data Mining Models to Predict Future Bridge Conditions
Authors: Pablo Martinez, Emad Mohamed, Osama Mohsen, Yasser Mohamed
Abstract:
Highway and bridge agencies, such as the Ministry of Transportation in Ontario, use the Bridge Condition Index (BCI), defined as the weighted condition of all bridge elements, to determine the rehabilitation priorities for their bridges. Accurate forecasting of the BCI is therefore essential for bridge rehabilitation budget planning. The large amount of data available on bridge conditions over several years makes traditional mathematical models infeasible as analysis methods. This research study focuses on investigating different classification models developed to predict the bridge condition index in the province of Ontario, Canada, based on publicly available data for 2800 bridges over a period of more than 10 years. Data preparation is a key factor in developing acceptable classification models, even with the simplest one, the k-NN model. All models were tested, compared and statistically validated via cross-validation and the t-test. A simple k-NN model showed reasonable results (within 0.5% relative error) when predicting the bridge condition in an incoming year.
Keywords: asset management, bridge condition index, data mining, forecasting, infrastructure, knowledge discovery in databases, maintenance, predictive models
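A minimal sketch of the k-NN baseline with cross-validation; the features and condition classes are synthetic stand-ins for the Ontario bridge records, not the study's data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(1950, 2015, 2800),     # year built
    rng.uniform(10, 500, 2800),         # deck length [m]
    rng.uniform(40, 100, 2800),         # current BCI
])
# Next-year condition class derived from a degraded BCI (illustrative).
y = np.digitize(X[:, 2] - rng.uniform(0, 3, 2800), [60, 70, 80])

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```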
Procedia PDF Downloads 191