Search results for: bit error rate
1912 Sectoral Energy Consumption in South Africa and Its Implication for Economic Growth
Authors: Kehinde Damilola Ilesanmi, Dev Datt Tewari
Abstract:
South Africa is in its post-industrial era, moving from the primary and secondary sectors to the tertiary sector. The study investigated the impact of disaggregated energy consumption (coal, oil, and electricity) on the primary, secondary and tertiary sectors of the economy between 1980 and 2012 in South Africa. Using a vector error correction model, it was established that South Africa is an energy dependent economy, and that energy (especially electricity and oil) is a limiting factor of growth. This implies that implementation of energy conservation policies may hamper economic growth. Output growth is significantly outpacing energy supply, which has necessitated load shedding. To meet the excess energy demand, there is a need to increase generating capacity, which will require increased investment in the electricity sector as well as strategic steps to increase oil production. There is also a need to explore more renewable energy sources in order to meet the growing energy demand without compromising growth and environmental sustainability. Policy makers should also pursue energy efficiency policies, especially at the sectoral level of the economy.
Keywords: Causality, economic growth, energy consumption, hypothesis, sectoral output.
1911 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques
Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah
Abstract:
Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or underestimation of construction cost. The development of a standard set of measurement rules that is understandable by all those involved in a construction project has not fully solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges in this exploratory study are also reported and recommendations for future studies proposed.
Keywords: BIM, Construction projects, Cost estimation, NRM, Ontology.
1910 A Hybrid Classification Method using Artificial Neural Network Based Decision Tree for Automatic Sleep Scoring
Authors: Haoyu Ma, Bin Hu, Mike Jackson, Jingzhi Yan, Wen Zhao
Abstract:
In this paper we propose a new classification method for automatic sleep scoring using an artificial neural network based decision tree. It treats the sleep scoring process as a series of two-class problems and solves them with a decision tree made up of a group of neural network classifiers, each of which uses a special feature set and is aimed at only one specific sleep stage in order to maximize the classification effect. A single electroencephalogram (EEG) signal is used for our analysis rather than multiple biological signals, which greatly simplifies the data acquisition process. Experimental results demonstrate that the average epoch-by-epoch agreement between visual scoring and the proposed method in separating 30-s wakefulness+S1, REM, S2 and SWS epochs was 88.83%. This study shows that the proposed method performed well in all four stages and can effectively limit error propagation at the same time. It could, therefore, be an efficient method for automatic sleep scoring. Additionally, since it requires only a small volume of data, it could be suited to pervasive applications.
Keywords: Sleep, Sleep stage, Automatic sleep scoring, Electroencephalography, Decision tree, Artificial neural network
1909 Efficient Variants of Square Contour Algorithm for Blind Equalization of QAM Signals
Authors: Ahmad Tariq Sheikh, Shahzad Amin Sheikh
Abstract:
A new distance-adjusted approach is proposed in which static square contours are defined around an estimated symbol in a QAM constellation, creating regions that correspond to fixed step sizes and weighting factors. As a result, the equalizer tap adjustment consists of a linearly weighted sum of adaptation criteria that is scaled by a variable step size. This approach is the basis of two new algorithms: the Variable step size Square Contour Algorithm (VSCA) and the Variable step size Square Contour Decision-Directed Algorithm (VSDA). The proposed schemes are compared with existing blind equalization algorithms in the SCA family in terms of convergence speed, constellation eye opening and residual ISI suppression. Simulation results for 64-QAM signaling over empirically derived microwave radio channels confirm the efficacy of the proposed algorithms. An RTL implementation of the blind adaptive equalizer based on the proposed schemes is presented, and the system is configured to operate in VSCA error signal mode for square QAM signals up to 64-QAM.
Keywords: Adaptive filtering, Blind Equalization, Square Contour Algorithm.
1908 Application of PSO Technique for Seismic Control of Tall Building
Authors: A. Shayeghi, H. Shayeghi, H. Eimani Kalasar
Abstract:
In recent years, tuned mass damper (TMD) control systems for civil engineering structures have attracted considerable attention. This paper focuses on the application of particle swarm optimization (PSO) to design and optimize the parameters of the TMD control scheme for achieving the best results in the reduction of the building response under earthquake excitations. The Integral of the Time multiplied Absolute value of the Error (ITAE), based on the relative displacement of all floors in the building, is taken as the performance index of the optimization criterion. The problem of robust TMD controller design is formulated as an optimization problem based on the ITAE performance index, to be solved using the PSO technique, which has a strong ability to find the optimal solution. An 11-story realistic building, located in the city of Rasht, Iran, is considered as a test system to demonstrate the effectiveness of the proposed method. Analysis of the results through time-domain simulation and several performance indices reveals that the designed PSO based TMD controller has an excellent capability in reducing the response of the seismically excited example building.
Keywords: TMD, Particle Swarm Optimization, Tall Buildings, Structural Dynamics.
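The ITAE index used as the optimization objective above can be evaluated directly from simulated floor displacement histories. The sketch below is a minimal, hypothetical illustration (the building model, floor responses and any PSO library are not taken from the paper); it only shows how the ITAE of relative floor displacements could serve as the fitness of one candidate TMD parameter set.

```python
import numpy as np

def itae(t, errors):
    """Integral of Time multiplied by Absolute Error for one or more signals.

    t      : 1-D array of time instants [s]
    errors : 2-D array, shape (n_floors, len(t)), relative floor displacements
    """
    weighted = t * np.abs(errors)            # time-weighted absolute error
    return np.trapz(weighted, t, axis=1).sum()

# Illustrative use with synthetic decaying responses (not real building data):
t = np.linspace(0.0, 30.0, 3001)
floors = np.array([np.exp(-0.2 * t) * np.sin(2 * np.pi * f * t) for f in (0.5, 0.7, 0.9)])
print("ITAE fitness of this candidate TMD design:", itae(t, floors))
```

In an optimization loop, each PSO particle would encode a TMD mass, stiffness and damping, the structure would be simulated with those values, and the returned ITAE would be minimized.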
1907 Effect Comparison of Speckle Noise Reduction Filters on 2D-Echocardiographic Images
Authors: Faten A. Dawood, Rahmita W. Rahmat, Suhaini B. Kadiman, Lili N. Abdullah, Mohd D. Zamrin
Abstract:
Echocardiography imaging is one of the most common diagnostic tests widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D-echocardiography (2DE) is to address two major problems: speckle noise and low image quality. Therefore, speckle noise reduction is one of the important steps used as pre-processing to reduce distortion effects in 2DE image segmentation. In this paper, we present common filters based on some form of low-pass spatial smoothing, such as the mean, Gaussian, and median filters. The Laplacian filter was used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters after being applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quantity measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), are used to evaluate filter performance quantitatively on the output enhanced image.
Keywords: Gaussian operator, median filter, speckle texture, peak signal-to-noise ratio
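For reference, the three quantitative measures named in the abstract can be computed as in the minimal sketch below. This is a generic formulation assuming 8-bit grayscale images and synthetic stand-in data; the exact definitions used by the authors may differ in detail.

```python
import numpy as np

def rmse(original, filtered):
    return np.sqrt(np.mean((original.astype(float) - filtered.astype(float)) ** 2))

def psnr(original, filtered, max_value=255.0):
    e = rmse(original, filtered)
    return np.inf if e == 0 else 20.0 * np.log10(max_value / e)

def snr(original, filtered):
    noise_power = np.mean((original.astype(float) - filtered.astype(float)) ** 2)
    signal_power = np.mean(original.astype(float) ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

# Random 8-bit images standing in for a 2DE frame and its filtered version:
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(np.uint8)
out = np.clip(img.astype(int) + rng.integers(-5, 6, img.shape), 0, 255).astype(np.uint8)
print(rmse(img, out), psnr(img, out), snr(img, out))
```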
1906 Cubic Splines and Fourier Series Approach to Study Temperature Variation in Dermal Layers of Elliptical Shaped Human Limbs
Authors: Mamta Agrawal, Neeru Adlakha, K.R. Pardasani
Abstract:
An attempt has been made to develop a semi-numerical model to study temperature variations in the dermal layers of human limbs. The model has been developed for the two-dimensional steady-state case. The human limb has been assumed to have an elliptical cross section. The dermal region has been divided into three natural layers, namely epidermis, dermis and subdermal tissues. The model incorporates the effect of important physiological parameters like blood mass flow rate, metabolic heat generation, and thermal conductivity of the tissues. The outer surface of the limb is exposed to the environment, and it is assumed that heat loss takes place at the outer surface by conduction, convection, radiation, and evaporation. The temperature of the inner core of the limb also varies at lower atmospheric temperatures. Appropriate boundary conditions have been framed based on the physical conditions of the problem. A cubic splines approach has been employed along the radial direction and Fourier series along the angular direction to obtain the solution. The numerical results have been computed for different values of eccentricity corresponding to the elliptical cross section of human limbs. The numerical results have been used to obtain the temperature profile and to study the relationships among the various physiological parameters.
Keywords: Blood Mass Flow Rate, Metabolic Heat Generation, Fourier Series, Cubic Splines, Thermal Conductivity.
1905 CAD/CAM Algorithms for 3D Woven Multilayer Textile Structures
Authors: Martin A. Smith, Xiaogang Chen
Abstract:
This paper proposes new algorithms for the computer-aided design and manufacture (CAD/CAM) of 3D woven multi-layer textile structures. Existing commercial CAD/CAM systems are often restricted to the design and manufacture of 2D weaves. Those CAD/CAM systems that do support the design and manufacture of 3D multi-layer weaves are often limited to manual editing of design paper grids on the computer display and weave retrieval from stored archives. This complex design activity is time-consuming, tedious and error-prone, and requires considerable experience and skill from a technical weaver. Recent research reported in the literature has addressed some of the shortcomings of commercial 3D multi-layer weave CAD/CAM systems. However, earlier research results have shown the need for further work on weave specification, weave generation, yarn path editing and layer binding. Analysis of 3D multi-layer weaves in this research has led to the design and development of efficient and robust algorithms for the CAD/CAM of 3D woven multi-layer textile structures. The resulting algorithmically generated weave designs can be used as a basis for lifting plans that can be loaded onto looms equipped with electronic shedding mechanisms for the CAM of 3D woven multi-layer textile structures.
Keywords: CAD/CAM, Multi-layer, Textile, Weave.
1904 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method
Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent
Abstract:
This paper proposes a method of modelling topography for riverbed simulation that removes the need for data points and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools, which the proposed method avoids by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces the resources spent modelling bed topography. The method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over bed topography generated by the proposed method and over a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
Keywords: Bed topography, FBM, LBM, shallow water, simulations.
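As a rough illustration of how an FBM-based bed profile can be generated without survey data, the sketch below uses the classical random midpoint displacement algorithm, which only approximates fBm and is controlled by a Hurst exponent. It is not the authors' implementation; the resolution, Hurst value and roughness scale are arbitrary placeholders.

```python
import numpy as np

def fbm_profile(n_levels=10, hurst=0.8, sigma=1.0, seed=0):
    """1-D bed elevation profile approximating fractional Brownian motion
    via random midpoint displacement."""
    rng = np.random.default_rng(seed)
    n = 2 ** n_levels
    z = np.zeros(n + 1)
    z[-1] = rng.normal(0.0, sigma)        # displace the end point
    step, scale = n, sigma
    while step > 1:
        half = step // 2
        scale *= 0.5 ** hurst             # roughness decays with scale
        for start in range(0, n, step):
            mid = start + half
            z[mid] = 0.5 * (z[start] + z[start + step]) + rng.normal(0.0, scale)
        step = half
    return z

bed = fbm_profile()
```

Regenerating the profile with updated parameters mimics the paper's idea of refreshing the bed as erosion and sediment transport alter the topography over time.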
1903 Analysis of Supply Side Factors Affecting Bank Financing of Non-Oil Exports in Nigeria
Authors: Sama’ila Idi Ningi, Abubakar Yusuf Dutse
Abstract:
The banking sector poses a lot of problems in Nigeria in general and for the non-oil export sector in particular. The banks' lack of effectiveness in handling small, medium or long-term credit risk (lack of training of loan officers, lack of information on borrowers and absence of a reliable credit registry) results in non-oil exporters being burdened with high requirements, such as up to three years of financial statements, enough collateral to cover both the loan principal and interest (including a cash deposit that may be up to 30% of the loan's net present value), and every detail of the international trade transaction in question. These problems triggered this research. Consequently, information on bank financing of non-oil exports was collected from 100 respondents from the 20 Deposit Money Banks (DMBs) in Nigeria. The data were analysed using descriptive statistics, correlation and regression. It is found that Nigerian banks participate in the financing of non-oil exports. Despite their participation, the rate of interest for credit extended to non-oil exports is usually high, ranging between 15% and 20%. Small and medium-sized non-oil export businesses lack the credit history for banks to judge them as reputable. Banks also consider the non-oil export sector very risky for investment. Banks actually grant less credit than exporters require, so exporters are not properly funded by banks. Banks grant a very low volume of foreign currency loans, in addition to the unfavorable exchange rate at which the Naira is exchanged for the Dollar and other currencies in the country. This makes the importation of inputs costly and negatively impacts non-oil export performance in Nigeria.
Keywords: Supply Side Factors, Bank Financing, Non-Oil Exports.
1902 Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data
Authors: Wei Lei, Hui Chen, Lin Lu
Abstract:
Microscopic emission and fuel consumption models have been widely recognized as an effective method to quantify real traffic emission and energy consumption when they are applied with microscopic traffic simulation models. This paper presents a framework for developing Microscopic Emission (HC, CO, NOx, and CO2) and Fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model with the purpose of capturing the effects of historical accelerations, interacting with current speed, on emission and fuel consumption. The MEF model is calibrated by the multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China by a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with lower Mean Absolute Percentage Error (MAPE), compared to the other two models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, NOx emissions and fuel consumption, respectively.
Keywords: Emission, Fuel consumption, Light-duty vehicle, Microscopic, Modeling.
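The MAPE statistic used for the instantaneous validation can be computed as in the minimal sketch below; this is a generic definition with synthetic placeholder values, not the authors' calibration code.

```python
import numpy as np

def mape(measured, predicted):
    """Mean Absolute Percentage Error, skipping zero measurements to avoid division by zero."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mask = measured != 0
    return 100.0 * np.mean(np.abs((measured[mask] - predicted[mask]) / measured[mask]))

# Placeholder second-by-second CO2 rates [g/s] standing in for PEMS data:
observed = np.array([2.1, 2.4, 1.8, 3.0, 2.7])
modelled = np.array([2.0, 2.6, 1.7, 2.8, 2.9])
print(f"MAPE = {mape(observed, modelled):.1f}%")
```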
1901 Estimation of Real Power Transfer Allocation Using Intelligent Systems
Authors: H. Shareef, A. Mohamed, S. A. Khalid, Aziah Khamis
Abstract:
This paper presents the application of artificial intelligence (AI) techniques, namely the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), to estimate the real power transfer between generators and loads. Since these AI techniques adopt supervised learning, the modified nodal equation (MNE) method is first used to determine the real power contribution from each generator to loads. Then the results of the MNE method and load flow information are utilized to estimate the power transfer using the AI techniques. The 25-bus equivalent system of south Malaysia is utilized as a test system to illustrate the effectiveness of both AI methods compared to that of the MNE method. The mean squared errors of the estimates of the ANN and ANFIS power transfer allocation methods are 1.19E-05 and 2.97E-05, respectively. Furthermore, when compared to the MNE method, the ANN and ANFIS methods compute generator contributions to loads within 20.99 and 39.37 ms respectively, whereas the MNE method takes 360 ms for the calculation of the same real power transfer allocation.
Keywords: Artificial intelligence, Power tracing, Artificial neural network, ANFIS, Power system deregulation.
1900 Capability Prediction of Machining Processes Based on Uncertainty Analysis
Authors: Hamed Afrasiab, Saeed Khodaygan
Abstract:
Prediction of machining process capability in the design stage plays a key role in achieving precision design and manufacturing of mechanical products. Inaccuracies in the machining process lead to errors in the position and orientation of machined features on the part, and strongly affect the process capability and the final quality of the product. In this paper, an efficient systematic approach is given to investigate machining errors, predict the manufacturing errors of the parts, and predict the capability of the corresponding machining processes. A mathematical formulation of fixture locator modeling is presented to establish the relationship between the part errors and the related sources. Based on this method, the final machining errors of the part can be accurately estimated by relating them to the combined dimensional and geometric tolerances of the workpiece-fixture system. The method is developed for uncertainty analysis based on both the worst-case and statistical approaches. The application of the presented method is illustrated through an example, and the computational results are compared with Monte Carlo simulation results.
Keywords: Process capability, machining error, dimensional and geometrical tolerances, uncertainty analysis.
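To illustrate the worst-case versus statistical (Monte Carlo) comparison mentioned above, the sketch below analyses a simple linear tolerance stack; the stack function, tolerance values and sample size are hypothetical and stand in for the fixture-locator error model of the paper.

```python
import numpy as np

# Hypothetical linear error stack: feature error = sum of three locator errors
locator_tolerances = np.array([0.02, 0.015, 0.01])   # +/- tolerance of each locator [mm]

# Worst-case estimate: all contributors at their extremes simultaneously
worst_case = locator_tolerances.sum()

# Statistical estimate: treat each tolerance as +/- 3 sigma and sample
rng = np.random.default_rng(1)
sigmas = locator_tolerances / 3.0
samples = rng.normal(0.0, sigmas, size=(100_000, 3)).sum(axis=1)
statistical_3sigma = 3.0 * samples.std()

print(f"worst case         : +/- {worst_case:.4f} mm")
print(f"Monte Carlo 3-sigma: +/- {statistical_3sigma:.4f} mm")
```

The Monte Carlo band is tighter than the worst-case band because independent errors rarely reach their extremes together, which is exactly the contrast the uncertainty analysis above exploits.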
1899 River Flow Prediction Using Nonlinear Prediction Method
Authors: N. H. Adenan, M. S. M. Noorani
Abstract:
River flow prediction is essential to ensure that water resources are properly managed and water is optimally distributed to consumers. This study presents an analysis and prediction using a nonlinear prediction method applied to monthly river flow data from Tanjung Tualang between 1976 and 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. The phase space reconstruction embeds the one-dimensional observed series (287 months of data) in a multidimensional phase space to reveal the dynamics of the system. The result of the phase space reconstruction is used to predict the next 72 months. A comparison of prediction performance based on the correlation coefficient (CC) and root mean square error (RMSE) has been carried out for the nonlinear prediction method, ARIMA and SVM. The comparisons show that the prediction results of the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system to optimize the allocation of water resources.
Keywords: River flow, nonlinear prediction method, phase space, local linear approximation.
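A minimal sketch of the phase-space reconstruction and local approximation idea is given below, using a delay embedding and a nearest-neighbour (locally constant) predictor on a synthetic series. The embedding dimension, delay and data are illustrative assumptions and are not the values used in the study.

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Reconstruct a phase space from a scalar series by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_predict(x, dim=3, tau=1, k=5):
    """Predict the next value from the k nearest neighbours of the last state."""
    states = delay_embed(x, dim, tau)
    current, history = states[-1], states[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]
    # Each neighbour's successor is the observation one step after that state
    successors = x[nearest + (dim - 1) * tau + 1]
    return successors.mean()

# Synthetic 'monthly flow' series standing in for the Tanjung Tualang data:
t = np.arange(287)
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(2).normal(0, 3, t.size)
print("one-step-ahead prediction:", local_predict(flow))
```

Iterating the predictor, feeding each forecast back into the series, gives the multi-month-ahead forecasts described in the abstract.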
1898 Global Kinetics of Direct Dimethyl Ether Synthesis Process from Syngas in Slurry Reactor over a Novel Cu-Zn-Al-Zr Slurry Catalyst
Authors: Zhen Chen, Haitao Zhang, Weiyong Ying, Dingye Fang
Abstract:
The direct synthesis process of dimethyl ether (DME) from syngas in slurry reactors is considered promising because of its advantages in heat transfer. In this paper, the influences of operating conditions (temperature, pressure and weight hourly space velocity) on the conversion of CO and the selectivity of DME and methanol were studied in a stirred autoclave over a Cu-Zn-Al-Zr slurry catalyst, which is far more suitable for the liquid-phase dimethyl ether synthesis process than commercial bifunctional catalysts. A Langmuir-Hinshelwood mechanism type global kinetics model for liquid-phase direct DME synthesis, based on methanol synthesis models and a methanol dehydration model, has been investigated by fitting our experimental data. The model parameters were estimated with a MATLAB program based on a genetic algorithm and the Levenberg-Marquardt method; the model fits the experimental data well, and its reliability was verified by statistical tests and residual error analysis.
Keywords: alcohol/ether fuel, Cu-Zn-Al-Zr slurry catalyst, global kinetics, slurry reactor
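The two-stage estimation described above (a global genetic search followed by Levenberg-Marquardt refinement, implemented by the authors in MATLAB) can be mimicked in Python as in this rough sketch. The single-term Arrhenius rate expression, parameter bounds and data are placeholders, not the actual DME kinetic model or measurements.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

R = 8.314  # J/(mol K)

def rate(params, T, p_co):
    """Placeholder rate model: r = k0 * exp(-Ea / (R*T)) * p_CO."""
    k0, Ea = params
    return k0 * np.exp(-Ea / (R * T)) * p_co

def residuals(params, T, p_co, r_obs):
    return rate(params, T, p_co) - r_obs

# Synthetic observations standing in for slurry-reactor data:
T = np.array([493.0, 503.0, 513.0, 523.0])
p_co = np.array([1.2, 1.1, 1.0, 0.9])
r_obs = rate([5.0e4, 6.0e4], T, p_co) * (1 + 0.02 * np.random.default_rng(3).normal(size=4))

# Stage 1: global search (differential evolution standing in for the genetic algorithm)
coarse = differential_evolution(lambda p: np.sum(residuals(p, T, p_co, r_obs) ** 2),
                                bounds=[(1.0, 1.0e6), (1.0e4, 1.0e5)], seed=0)

# Stage 2: Levenberg-Marquardt refinement from the coarse estimate
fine = least_squares(residuals, coarse.x, args=(T, p_co, r_obs), method="lm")
print("estimated k0, Ea:", fine.x)
```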
1897 Generalization of SGIP Surface Tension Force Model in Three-Dimensional Flows and Compare to Other Models in Interfacial Flows
Authors: Afshin Ahmadi Nadooshan, Ebrahim Shirani
Abstract:
In this paper, the two-dimensional stagger grid interface pressure (SGIP) model has been generalized and presented in three-dimensional form. For this purpose, various models of the surface tension force for interfacial flows have been investigated and compared with each other. The VOF method has been used for tracking the interface. To show the ability of the SGIP model for three-dimensional flows in comparison with other models, pressure contours, maximum spurious velocities, norm of spurious flow velocities and pressure jump error for a motionless liquid drop and a gas bubble are calculated using the different models. It has been shown that the SGIP model, in comparison with the CSF, CSS and PCIL models, produces the smallest maximum and norm spurious velocities. Additionally, the new model produces more accurate results in calculating the pressure jump across the interface, which is generated by the surface tension force, for a motionless liquid drop and a gas bubble.
1896 The Ability of Forecasting the Term Structure of Interest Rates Based On Nelson-Siegel and Svensson Model
Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović
Abstract:
Due to the importance of the yield curve and its estimation, it is essential to have valid methods for yield curve forecasting in cases where there are scarce issues of securities and/or weak trade on a secondary market. Therefore in this paper, after the estimation of weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, yield curves are forecast using a vector autoregressive model and neural networks. In general, it can be concluded that both forecasting methods have good prediction abilities, where forecasting of yield curves based on the Nelson-Siegel estimation model gives better results, in the sense of lower mean squared error, than forecasting based on the Svensson model. In this case, neural networks also provide slightly better results. Finally, it can be concluded that the most appropriate approach to yield curve prediction is neural networks applied to the Nelson-Siegel estimation of yield curves.
Keywords: Nelson-Siegel model, Neural networks, Svensson model, Vector autoregressive model, Yield curve.
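For context, the Nelson-Siegel yield curve referred to above has the well-known closed form evaluated in the sketch below; the parameter values are arbitrary illustrative choices, not estimates from the Croatian market.

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel spot yield for maturity tau (in years, tau > 0)."""
    tau = np.asarray(tau, dtype=float)
    x = tau / lam
    loading = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0])
# Illustrative parameters: long-run level 5%, short end near 2%, mild mid-curve hump
print(nelson_siegel(maturities, beta0=0.05, beta1=-0.03, beta2=0.01, lam=1.5))
```

In the kind of forecasting exercise described above, the fitted (beta0, beta1, beta2) series for each week would become the inputs to the VAR or neural network model.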
1895 Seismic Control of Tall Building Using a New Optimum Controller Based on GA
Authors: A. Shayeghi, H. Eimani Kalasar, H. Shayeghi
Abstract:
This paper emphasizes the application of a genetic algorithm (GA) to optimize the parameters of the TMD for achieving the best results in the reduction of the building response under earthquake excitations. The Integral of the Time multiplied Absolute value of the Error (ITAE), based on the relative displacement of all floors in the building, is taken as the performance index of the optimization criterion. The problem of robust TMD controller design is formulated as an optimization problem based on the ITAE performance index, to be solved using the GA, which has a strong ability to find the optimal solution. An 11-story realistic building, located in the city of Rasht, Iran, is considered as a test system to demonstrate the effectiveness of the proposed GA based TMD (GATMD) controller without specifying which mode should be controlled. The results of the proposed GATMD controller are compared with the uncontrolled structure through time-domain simulation and several performance indices. The results analysis reveals that the designed GA based TMD controller has an excellent capability in reducing the response of the seismically excited example building, and that the ITAE performance index, which has so far remained relatively unexplored, can be introduced as a new criterion for structural dynamic design.
Keywords: Tuned Mass Damper, Genetic Algorithm, Tall Buildings, Structural Dynamics.
1894 Effect of Segregation on the Reaction Rate of Sewage Sludge Pyrolysis in a Bubbling Fluidized Bed
Authors: A. Soria-Verdugo, A. Morato-Godino, L. M. García-Gutiérrez, N. García-Hernando
Abstract:
The evolution of the pyrolysis of sewage sludge in a fixed and a fluidized bed was analyzed using a novel measuring technique. This original measuring technique consists of installing the whole reactor over a precision scale, capable of measuring the mass of the complete reactor with enough precision to detect the mass released by the sewage sludge sample during its pyrolysis. The inert conditions required for the pyrolysis process were obtained by supplying the bed with a nitrogen flowrate, and the bed temperature was adjusted to either 500 ºC or 600 ºC using a group of three electric resistors. The sewage sludge sample was supplied through the top of the bed in a batch of 10 g. The measurement of the mass released by the sewage sludge sample was employed to determine the evolution of the reaction rate during the pyrolysis, the total amount of volatile matter released, and the pyrolysis time. The pyrolysis tests of sewage sludge in the fluidized bed were conducted using two different bed materials of the same size but different densities: silica sand and sepiolite particles. The higher density of silica sand particles induces a flotsam behavior for the sewage sludge particles, which move close to the bed surface. In contrast, the lower density of sepiolite produces a neutrally-buoyant behavior for the sewage sludge particles, which show proper circulation throughout the whole bed in this case. The analysis of the evolution of the pyrolysis process in both fluidized beds shows that the pyrolysis is faster when buoyancy effects are negligible, i.e. in the bed conformed by sepiolite particles. Moreover, sepiolite was found to show an absorbent capability for the volatile matter released during the pyrolysis of sewage sludge.
Keywords: Bubbling fluidized bed, pyrolysis time, segregation effects, sewage sludge.
1893 A CFD Study of Turbulent Convective Heat Transfer Enhancement in Circular Pipeflow
Authors: Perumal Kumar, Rajamohan Ganesan
Abstract:
Addition of milli- or micro-sized particles to the heat transfer fluid is one of the many techniques employed for improving the heat transfer rate. Though this looks simple, the method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. So it is imperative that the Reynolds number (Re) and the volume fraction be optimal for better thermal-hydraulic effectiveness. In this work, heat transfer enhancement using aluminium oxide nanofluid at low and high volume fractions in turbulent pipe flow with constant wall temperature has been studied by computational fluid dynamics modeling of the nanofluid flow, adopting the single-phase approach. Nanofluid, up to a volume fraction of 1%, is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (i.e. 0.02%, 0.1% and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010), while predictions for the high volume fraction nanofluids (i.e. 1%, 4% and 6%) show reasonable agreement with both experimental and numerical results available in the literature. So the computationally inexpensive single-phase approach can be used for heat transfer and pressure drop prediction of new nanofluids.
Keywords: Heat transfer intensification, nanofluid, CFD, friction factor
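In the single-phase CFD approach described above, the nanofluid is represented by effective properties. The sketch below evaluates commonly used mixture relations (volume-weighted mixture rules together with the dilute-suspension Einstein viscosity and Maxwell conductivity models); these are generic textbook correlations chosen for illustration, not necessarily those used by the authors, and the property values are indicative only.

```python
# Effective properties of a water/Al2O3 nanofluid at volume fraction phi (single-phase model)
phi = 0.01                      # 1% volume fraction

# Indicative base-fluid (water, ~300 K) and particle (Al2O3) properties
rho_bf, cp_bf, mu_bf, k_bf = 997.0, 4179.0, 8.55e-4, 0.613
rho_p,  cp_p,          k_p = 3970.0, 765.0,          40.0

rho_nf = (1 - phi) * rho_bf + phi * rho_p                              # mixture density
cp_nf = ((1 - phi) * rho_bf * cp_bf + phi * rho_p * cp_p) / rho_nf     # mixture heat capacity
mu_nf = mu_bf * (1 + 2.5 * phi)                                        # Einstein viscosity (dilute)
k_nf = k_bf * (k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)) / (k_p + 2 * k_bf - phi * (k_p - k_bf))  # Maxwell

print(rho_nf, cp_nf, mu_nf, k_nf)
```

These effective properties would simply replace the base-fluid properties in the turbulent pipe-flow CFD setup, which is what makes the single-phase approach computationally inexpensive.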
1892 Computational Methods in Official Statistics with an Example on Calculating and Predicting Diabetes Mellitus [DM] Prevalence in Different Age Groups within Australia in Future Years, in Light of the Aging Population
Authors: D. Hilton
Abstract:
An analysis of the Australian Diabetes Screening Study estimated undiagnosed diabetes mellitus [DM] prevalence in a high-risk, general-practice-based cohort. DM prevalence varied from 9.4% to 18.1% depending upon the diagnostic criteria utilised, with age being a highly significant risk factor. Utilising the gold standard oral glucose tolerance test, the prevalence of DM was 22-23% in those aged >= 70 years and <15% in those aged 40-59 years. Opportunistic screening in Australian general practice can potentially identify many persons with undiagnosed type 2 DM. An Australian Bureau of Statistics document published three years ago reported the highest rate of DM in men aged 65-74 years [19%], whereas the rate for women was highest in those over 75 years [13%]. Considering that the Australian Bureau of Statistics report in 2007 found that 13% of the population was over 65 years of age, and that this will increase to 23-25% by 2056 with a further projected increase to 25-28% by 2101, this information has to be factored into the equation when age-related diabetes prevalence predictions are calculated. This 10-15% proportional increase of elderly persons within the population demographics has dramatic implications for the estimated number of elderly persons with DM in these age groupings. Computational methodology based on the age-related demographic changes reported in these official statistical documents is presented, showing estimates for 2056 and 2101 for different age groups. This has relevance for future diabetes prevalence rates and shows that, along with many countries worldwide, Australia is facing an increasing pandemic. In contrast, Japan is expected to have a decrease in the number of persons with diabetes in the next twenty years.
Keywords: Epidemiological methods, aging, prevalence.
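The core projection arithmetic described in the abstract (scaling age-specific prevalence by the changing share of elderly persons) can be sketched as below. The population shares are taken loosely from the ABS figures quoted above, while the total population and the assumed 65+ prevalence are hypothetical placeholders, so the output is purely illustrative.

```python
# Illustrative projection of the number of persons with DM as the 65+ share grows.
total_population = 26_000_000            # hypothetical, held constant for simplicity

# Share of population aged 65+ (2007 figure and midpoints of the projected ranges above)
share_65_plus = {2007: 0.13, 2056: 0.24, 2101: 0.265}

dm_prevalence_65_plus = 0.16             # assumed average DM prevalence in the 65+ group

for year, share in share_65_plus.items():
    elderly = total_population * share
    with_dm = elderly * dm_prevalence_65_plus
    print(f"{year}: ~{with_dm / 1e6:.2f} million persons aged 65+ with DM "
          f"(65+ share {share:.0%})")
```

A full analysis would repeat this for each age band with its own prevalence rate and projected population, which is the computation the abstract describes.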
1891 Instability of Soliton Solutions to the Schamel-nonlinear Schrödinger Equation
Authors: Sarun Phibanchon, Michael A. Allen
Abstract:
A variational method is used to obtain the growth rate of a transverse long-wavelength perturbation applied to the soliton solution of a nonlinear Schrödinger equation with a three-half order potential. We demonstrate numerically that this unstable perturbed soliton will eventually transform into a cylindrical soliton.
Keywords: Soliton, instability, variational method, spectral method.
1890 Women's Employment Issues in Georgia and Solutions Based on European Experience
Authors: N. Damenia, E. Kharaishvili, N. Sagareishvili, M. Saghareishvili
Abstract:
Women's employment is one of the most important issues in the global economy. The article discusses this topic in Georgia through historical context, Soviet experience, and modern perspectives. The paper discusses segregation in terms of employment and related problems. Based on statistical analysis, women's unemployment rate and its factors are analyzed. The level of employment of women in Transcaucasia (Georgia, Armenia, and Azerbaijan) is discussed and compared with the Baltic countries (Lithuania, Latvia, and Estonia). The study analyzes women's level of development according to the average age of marriage and the level of migration. The focus is on Georgia's Association Agreement with the EU in 2014, which covers economic, social, trade and political issues; one part of it is gender equality in the workplace. According to the research, the average monthly remuneration of women managers in the financial and insurance sector equaled 1,044.6 Georgian Lari (GEL), while in the overall business sector the average monthly remuneration equaled 961.1 GEL. Average salaries are increasing; however, the employment rate remains problematic. For example, in 2017, 74.6% of men and 50.8% of women in the total workforce were employed. It is also notable that the proportion of women to men in managerial positions is 29% to 71%. Based on the results, the main recommendation for government and civil society is to consider women as a part of the country's economic development. In this respect, the experience of developed countries should be considered. It is important to create additional jobs in urban and rural areas and to help migrant women return and use their working resources properly.
Keywords: Employment of women, segregation in terms of employment, women's employment level in Transcaucasia, migration level.
1889 A Model-Reference Sliding Mode for Dual-Stage Actuator Servo Control in HDD
Authors: S. Sonkham, U. Pinsopon, W. Chatlatanagulchai
Abstract:
This paper presents a method of designing and developing sliding mode control (SMC) for the servo system in a dual-stage actuator (DSA) hard disk drive. Mathematical models of the hard disk drive actuators are obtained by measuring the frequency responses of the voice-coil motor (VCM) and the PZT micro-actuator separately. MATLAB software tools are used for mathematical model estimation and also for controller design and simulation. A model-reference approach is selected as the proposed technique to meet the tracking requirement. The simulation results show that the model-reference SMC controller design for DSA servo control satisfies the tracking error requirement and keeps the position of the head within the boundary of +/-5% of the track width in the presence of internal and external disturbances. The overall results of the model-reference SMC design for the DSA meet the requirement specifications, and a significant reduction in percentage off-track is found when compared to the single-stage actuator (SSA).
Keywords: Hard Disk Drive, Dual-Stage Actuator, Track Following, HDD Servo Control, Sliding Mode Control, Model-Reference, Tracking Control.
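As a generic illustration of the sliding mode control law underlying the servo design above (not the paper's dual-stage VCM/PZT model), the sketch below regulates a single-mass double-integrator plant with a boundary-layer SMC; the gains, disturbance and plant are all hypothetical.

```python
import numpy as np

# Plant: x_ddot = u + d (unit mass); regulate position error x to zero
dt, steps = 1e-4, 20000
c, K, phi = 50.0, 5.0, 0.01          # sliding-surface slope, switching gain, boundary layer

x, v = 1.0e-3, 0.0                   # initial position error [m] and velocity
for k in range(steps):
    d = 0.5 * np.sin(2 * np.pi * 60 * k * dt)      # bounded external disturbance
    s = c * x + v                                   # sliding surface s = c*e + e_dot
    u = -c * v - K * np.clip(s / phi, -1.0, 1.0)    # equivalent term + saturated switching term
    a = u + d
    v += a * dt
    x += v * dt

print(f"final position error: {x:.2e} m")
```

The saturation (boundary layer) replaces the pure sign function to limit chattering, which matters in a drive servo where high-frequency switching would excite actuator resonances.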
1888 Performance Evaluation of Filtration System for Groundwater Recharging Well in the Presence of Medium Sand-Mixed Storm Water
Authors: Krishna Kumar Singh, Praveen Jain
Abstract:
Collection of storm water runoff and forcing it into the groundwater is the need of the hour to sustain the groundwater table. However, the runoff entraps various types of sediments and other floating objects whose removal is essential to avoid pollution of the groundwater and blocking of the pores of the aquifer. This requires regular cleaning and maintenance due to the problem of clogging. To evaluate the performance of a filter system consisting of coarse sand (CS), gravel (G) and pebble (P) layers, a laboratory experiment was conducted in a rectangular column. The effect of variable thickness of the CS, G and P layers of the filtration unit of the recharge shaft on the recharge rate and the sediment concentration of effluent water was evaluated. Medium sand (MS) of three particle sizes, viz. 0.150–0.300 mm (T1), 0.300–0.425 mm (T2) and 0.425–0.600 mm (T3), of thickness 25 cm, 30 cm and 35 cm respectively in the top layer of the filter system, with seven influent sediment concentrations of 250–3,000 mg/l, was used for the experimental study. The performance was evaluated in terms of recharge rates and clogging time. The results indicated that 100% of suspended solids were entrapped in the upper 10 cm layer of MS, and the recharge rates declined sharply for influent concentrations of more than 1,000 mg/l. All treatments with a higher thickness of MS media showed slightly higher recharge rates than the corresponding treatments with a lower thickness of MS media. The performance of storm water infiltration systems was highly dependent on the formation of a clogging layer at the filter. An empirical relationship has been derived between recharge rate, inflow sediment load, MS size and MS thickness using multiple linear regression (MLR).
Keywords: Groundwater, medium sand-mixed storm water filter, inflow sediment load.
1887 Detection of Actuator Faults for an Attitude Control System using Neural Network
Authors: S. Montenegro, W. Hu
Abstract:
The objective of this paper is to develop a neural network-based residual generator to detect faults in the actuators of the attitude control system (ACS) of a specific communication satellite. First, a dynamic multilayer perceptron network with dynamic neurons is used; each of these neurons corresponds to a second-order linear infinite impulse response (IIR) filter and a nonlinear activation function with adjustable parameters. Second, the parameters of the network are adjusted to minimize a performance index specified by the output estimation error, using the input-output data collected from the specific ACS. Then, the proposed dynamic neural network is trained and applied to detect the faults injected into the wheel, which is the main actuator in the normal mode of the communication satellite. The performance and capabilities of the proposed network were tested and compared with a conventional model-based observer residual, showing the differences between these two methods and indicating the benefit of the proposed algorithm in determining the real status of the momentum wheel. Finally, the application of the methods in a satellite ground station is discussed.
Keywords: Satellite, Attitude Control, Momentum Wheel, Neural Network, Fault Detection.
1886 Efficient Semi-Systolic Finite Field Multiplier Using Redundant Basis
Authors: Hyun-Ho Lee, Kee-Won Kim
Abstract:
Arithmetic operations over GF(2^m) have been extensively used in error correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division and inversion operations. Addition is very simple and can be implemented with an extremely simple circuit. The other operations are much more complex. Multiplication is the most important for cryptosystems, such as the elliptic curve cryptosystem, since exponentiation, division, and multiplicative inversion can be performed by computing multiplications iteratively. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over the finite field using a redundant basis. Also, based on the multiplication algorithm, we present an efficient semi-systolic multiplier over the finite field. The multiplier has lower space and time complexity compared to related multipliers. As compared to the corresponding existing structures, the multiplier saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can be easily applied as a basic component for computing complex operations over the finite field, such as inversion and division.
Keywords: Finite field, Montgomery multiplication, systolic array, cryptography.
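For background, a plain bit-serial polynomial-basis multiplication over GF(2^m) is sketched below. It is not the Montgomery, redundant-basis or semi-systolic scheme of the paper, just a reference implementation showing the operation such hardware computes.

```python
def gf2m_multiply(a: int, b: int, poly: int, m: int) -> int:
    """Multiply field elements a and b in GF(2^m) defined by the irreducible
    polynomial 'poly' (bit i set <=> term x^i), using shift-and-add with reduction."""
    result = 0
    for i in range(m):
        if (b >> i) & 1:
            result ^= a            # add (XOR) a * x^i into the product
        a <<= 1                    # multiply a by x
        if a & (1 << m):
            a ^= poly              # reduce modulo the field polynomial
    return result

# Example in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B):
print(hex(gf2m_multiply(0x57, 0x83, 0x11B, 8)))   # 0xc1, the well-known AES field example
```

The systolic and semi-systolic multipliers discussed above map this shift-add-reduce loop onto a regular array of identical cells so that one product is completed per clock cycle after the pipeline fills.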
1885 Efficient HAAR Wavelet Transform with Embedded Zerotrees of Wavelet Compression for Color Images
Authors: S. Piramu Kailasam
Abstract:
This study compresses true color images with compression algorithms in different color spaces to provide high compression rates. A high compression ratio is needed to reduce storage space; an additional aim is to rank the compression algorithms in a suitable color space. The dataset is a sequence of true color images of size 128 x 128. The Haar wavelet is one of the well-known wavelet transforms; it has great potential and maintains the image quality of color images. The Haar wavelet transform using the Set Partitioning in Hierarchical Trees (SPIHT) algorithm, within a framework of different color spaces, is applied to compress the sequence of images taken at different angles. Embedded Zerotree Wavelet (EZW) coding is a powerful standard method for such data. Hence the proposed compression framework of the Haar wavelet, XYZ color space, morphological gradient and EZW compression applied to the images obtained improvements over other methods in terms of compression ratio, mean square error, peak signal-to-noise ratio and bits per pixel quality measures.
Keywords: Color Spaces, HAAR Wavelet, Morphological Gradient, Embedded Zerotrees Wavelet Compression.
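A single analysis level of the 2-D Haar transform used in the scheme above can be written compactly as below. This is a minimal orthonormal-Haar sketch on one grayscale channel, not the full SPIHT/EZW coder described in the abstract.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH subbands.
    The input is a 2-D array with even dimensions (e.g. one 128 x 128 color channel)."""
    x = img.astype(float)
    # Transform along rows
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2.0)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2.0)
    # Transform along columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2.0)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2.0)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2.0)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2.0)
    return ll, lh, hl, hh

channel = np.random.default_rng(4).integers(0, 256, (128, 128))
ll, lh, hl, hh = haar2d_level(channel)   # LL would feed the next decomposition level
print(ll.shape)                          # (64, 64)
```

In an EZW or SPIHT coder, the recursive decomposition of the LL band builds the subband pyramid whose coefficient trees are then scanned and thresholded bitplane by bitplane.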
1884 Creating Customer Value through SOA and Outsourcing: A NEBIC Approach
Authors: Benazeer Md. Shahzada, Verelst Jan, Van Grembergen Wim, Mannaert Herwig
Abstract:
This article is an extension and a practical application of Wheeler's NEBIC theory (Net Enabled Business Innovation Cycle). NEBIC theory is a new approach in IS research and can be used for dynamic environments related to new technology. Firms can follow market changes rapidly with the support of IT resources. Flexible firms adapt their market strategies and respond more quickly to customers' changing behaviors. When every leading firm in an industry has access to the same IT resources, the way those IT resources are managed will determine the competitive advantages or disadvantages of the firm. From the dynamic capabilities perspective and from the newly introduced NEBIC theory by Wheeler, we know that IT resources alone cannot deliver customer value, but a good configuration of those resources can guarantee customer value by choosing the right emerging technology and grasping the right economic opportunities through business innovation and growth. We found evidence in the literature that Service Oriented Architecture (SOA) is a promising emerging technology which can deliver the desired economic opportunity through modularity, flexibility and loose coupling. SOA can also help firms to connect in networks, which can open a new window of opportunity to collaborate in innovation and the right kind of outsourcing. Many articles and research reports indicate that the failure rate in outsourcing is very high, but at the same time research indicates that successful outsourcing projects add tangible and intangible benefits to the service consumer. Business executives and policy makers in the West should not be afraid of outsourcing; rather, they should choose the right strategy, through the use of emerging technology, to significantly reduce the failure rate in outsourcing.
Keywords: Absorptive capacity, Dynamic capability, Net-enabled business innovation cycle, Service oriented architecture.
1883 Adaptive Non-linear Filtering Technique for Image Restoration
Authors: S. K. Satpathy, S. Panda, K. K. Nagwanshi, S. K. Nayak, C. Ardil
Abstract:
Removing noise from processed images is very important. Noise should be removed in such a way that important image information is preserved. A decision-based nonlinear algorithm for the elimination of band lines, drop lines, marks, band loss and impulses in images is presented in this paper. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and evaluation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. However, the restricted window size renders the median operation less effective whenever noise is excessive; in that case, the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of Mean Square Error [MSE], Peak Signal-to-Noise Ratio [PSNR], Signal-to-Noise Ratio Improved [SNRI], Percentage of Noise Attenuated [PONA], and Percentage of Spoiled Pixels [POSP]. This is compared with standard algorithms already in use, and the improved performance of the proposed algorithm is presented. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms which would otherwise be required for the removal of different artifacts.
Keywords: Filtering, Decision Based Algorithm, noise, image restoration.
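A much-simplified sketch of the decision-based idea described above is given below: corrupted pixels are detected as extreme (salt-and-pepper-like) values, replaced by the median of the uncorrupted neighbours, and the filter falls back to the mean of the window when the neighbourhood itself is mostly corrupted. The detection rule and window size are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def decision_based_filter(img, win=3, low=0, high=255):
    """Replace only pixels detected as corrupted (== low or == high) by the median of the
    uncorrupted pixels in the window; switch to the window mean if too few are uncorrupted."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge").astype(float)
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            if img[r, c] != low and img[r, c] != high:
                continue                              # pixel judged uncorrupted: keep it
            window = padded[r:r + win, c:c + win].ravel()
            good = window[(window != low) & (window != high)]
            # Decision step: median of the good pixels, else mean of the whole window
            out[r, c] = np.median(good) if good.size >= 3 else window.mean()
    return out.astype(img.dtype)

noisy = np.random.default_rng(5).integers(0, 256, (64, 64)).astype(np.uint8)
restored = decision_based_filter(noisy)
```

Because untouched pixels are copied through unchanged, edges and fine detail are preserved, which is the main advantage the abstract claims over uniform smoothing.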