Search results for: Error index (J)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2253


1683 Turbine Follower Control Strategy Design Based on Developed FFPP Model

Authors: Ali Ghaffari, Mansour Nikkhah Bahrami, Hesam Parsa

Abstract:

In this paper, a comprehensive model of a fossil fueled power plant (FFPP) is developed in order to evaluate the performance of a newly designed turbine follower controller. Considering the drawbacks of previous works, an overall model is developed to minimize the error between each subsystem model output and the experimental data obtained at the actual power plant. The developed model is organized in two main subsystems, namely the boiler and the turbine. Considering the characteristics of each FFPP subsystem, different modeling approaches are developed. For the economizer, evaporator, superheater and reheater, first order models are determined based on the principles of mass and energy conservation. Simulations verify the accuracy of the developed models. Due to the nonlinear characteristics of the attemperator, a new model based on a genetic-fuzzy system utilizing the Pittsburgh approach is developed, showing a promising performance compared with those derived with other methods such as ANFIS. The optimization constraints are handled using penalty functions. The effect of increasing the number of rules and membership functions on the performance of the proposed model is also studied and evaluated. The turbine model is developed based on the equation of adiabatic expansion. The parameters of all evaluated models are tuned by means of evolutionary algorithms. Based on the developed model, a fuzzy PI controller is designed. It is then successfully implemented in the turbine follower control strategy of the plant. In this control strategy, instead of keeping the control parameters constant, they are adjusted on-line with regard to the error and the error rate. It is shown that the response of the system improves significantly and that fuel consumption decreases considerably.
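
The abstract does not give the controller's rule base, but the idea of adjusting PI gains on-line from the error and the error rate can be illustrated with a minimal sketch. The scaling constants, gain bounds and plant model below are hypothetical, not taken from the paper.

```python
import numpy as np

def scheduled_pi_step(error, prev_error, integral, dt,
                      kp_base=1.0, ki_base=0.1):
    """One step of a PI controller whose gains are adapted on-line
    from the error and the error rate (illustrative scaling only)."""
    error_rate = (error - prev_error) / dt
    # Simple fuzzy-like scheduling: a large |error| or |error rate|
    # pushes the proportional gain up and the integral gain down.
    e_level = min(abs(error) / 10.0, 1.0)        # normalized error, hypothetical scale
    de_level = min(abs(error_rate) / 5.0, 1.0)   # normalized error rate, hypothetical scale
    kp = kp_base * (1.0 + 0.5 * e_level + 0.3 * de_level)
    ki = ki_base * (1.0 - 0.5 * e_level)
    integral += error * dt
    u = kp * error + ki * integral
    return u, integral

# Toy usage: drive a crude first-order plant toward a set point of 1.0.
y, integral, prev_e, dt = 0.0, 0.0, 0.0, 0.1
for _ in range(200):
    e = 1.0 - y
    u, integral = scheduled_pi_step(e, prev_e, integral, dt)
    y += dt * (-y + u)               # hypothetical first-order plant
    prev_e = e
print(f"final output: {y:.3f}")
```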

Keywords: Attemperator, Evolutionary algorithms, Fossil fuelled power plant (FFPP), Fuzzy set theory, Gain scheduling

1682 Comparison of Mechanical and Chemical Treatments on Properties of Low Yield Bagasse Pulp During Recycling

Authors: Parizad Sheikhi, Mohammad Talaeipour

Abstract:

The effects of refining and alkaline chemicals on the recycling potential of bleached chemical bagasse pulp were investigated in this study. Recycling was repeated three times. Handsheet properties such as apparent density, light scattering coefficient, tear index, burst index, breaking length, and fold number were measured according to TAPPI standards. Water retention value was also used to assess the treatments during recycling. Refining enhanced the strength of the recycled pulp by increasing fiber flexibility and swelling ability, whereas no improvement was observed with the chemical treatment. The morphology of the recycled fibers was examined with scanning electron microscopy (SEM).

Keywords: Bagasse pulp, chemical treatment, recycling, refining, scanning electron microscopy, water retention value.

1681 Improving Cache Memory Utilization

Authors: Sami I. Serhan, Hamed M. Abdel-Haq

Abstract:

In this paper, an efficient technique is proposed to manage the cache memory. The proposed technique introduces some modifications to the well-known set associative mapping technique. This modification requires a small alteration in the structure of the cache memory and in the way it is referenced. The proposed alteration virtually increases the set size and consequently improves the performance and the utilization of the cache memory. The current mapping techniques have accomplished good results. However, there are still cases in which cache memory lines are left empty and unused while two or more processes overwrite each other's lines instead of using those empty lines. The proposed algorithm aims at finding an efficient way to deal with such a problem.
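
The abstract refers to the index, tag and status fields of set associative mapping; a minimal sketch of the conventional lookup that the paper modifies is shown below. The cache geometry (64 sets, 4 ways, 32-byte lines) and the LRU policy are hypothetical, and the code illustrates the standard mapping rather than the authors' modified scheme.

```python
class SetAssociativeCache:
    """Conventional set associative cache: the address is split into tag and index fields."""

    def __init__(self, num_sets=64, ways=4, line_size=32):
        self.num_sets, self.ways, self.line_size = num_sets, ways, line_size
        self.sets = [[] for _ in range(num_sets)]   # each set holds up to `ways` tags, in LRU order
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.line_size           # low bits select the byte within a line
        index = block % self.num_sets               # index field selects the set
        tag = block // self.num_sets                # tag field identifies the line within the set
        lines = self.sets[index]
        if tag in lines:                            # hit: tag already present in this set
            self.hits += 1
            lines.remove(tag)
            lines.append(tag)                       # refresh LRU position
        else:                                       # miss: load the line, evicting the LRU entry if full
            self.misses += 1
            if len(lines) == self.ways:
                lines.pop(0)
            lines.append(tag)

cache = SetAssociativeCache()
for addr in [0, 32, 64, 0, 2048, 0, 4096, 32]:
    cache.access(addr)
print(f"hits={cache.hits}, misses={cache.misses}")
```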

Keywords: Modified Set Associative Mapping, Locality of Reference, Miss Ratio, Hit Ratio, Cache Memory, Clustered Behavior, Index Address, Tag Field, Status Field, and Complement of Index Address.

1680 Causal Relationship between Macro-Economic Indicators and Funds Unit Prices Behavior: Evidence from Malaysian Islamic Equity Unit Trust Funds Industry

Authors: Anwar Hasan Abdullah Othman, Ahamed Kameel, Hasanuddeen Abdul Aziz

Abstract:

In this study, an attempt has been made to investigate the relationship, specifically the causal relationship, between the fund unit prices of Islamic equity unit trust funds, measured by the fund NAV, and selected macro-economic variables of the Malaysian economy, using the VECM causality test and the Granger causality test. Monthly data from January 2006 to December 2012 were used for all variables. The findings of the study showed that the industrial production index, political elections and the financial crisis are the only variables having a unidirectional causal relationship with the fund unit price, whereas the global oil price has a bidirectional causal relationship with the fund NAV. Thus, it is concluded that the equity unit trust fund industry in Malaysia is an inefficient market with respect to the industrial production index, global oil prices, political elections and the financial crisis. However, the market is approaching informational efficiency at least with respect to four macroeconomic variables: treasury bill rate, money supply, foreign exchange rate, and corruption index.
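
A minimal sketch of the kind of pairwise Granger causality test described above, using statsmodels; the column names, lag order and synthetic series are hypothetical stand-ins for the fund NAV and one macro-economic variable, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical monthly series standing in for the fund NAV and the industrial production index.
rng = np.random.default_rng(0)
n = 84                                               # Jan 2006 - Dec 2012 = 84 months
ipi = np.cumsum(rng.normal(0, 1, n))
nav = 0.5 * np.roll(ipi, 1) + rng.normal(0, 1, n)    # NAV partly driven by lagged IPI
df = pd.DataFrame({"nav": nav, "ipi": ipi})

# Tests whether the second column ("ipi") Granger-causes the first ("nav").
results = grangercausalitytests(df[["nav", "ipi"]], maxlag=4)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F={fstat:.2f}, p={pval:.4f}")
```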

Keywords: Fund unit price, unit trust industry, Malaysia, macroeconomic variables, causality.

1679 Influence of Composition and Austempering Temperature on Machinability of Austempered Ductile Iron

Authors: Jagmohan Datt, Uma Batra

Abstract:

The present investigation involves a systematic study of the machinability of austempered ductile irons (ADI) developed from four commercially viable ductile irons alloyed with different Ni contents of 0, 0.1, 0.3 and 0.6 wt.%. The influence of Ni content, amount of retained austenite and hardness of the ADI on machining behavior was studied systematically. Austempering heat treatment was carried out for 120 minutes at one of four temperatures, 270°C, 320°C, 370°C or 420°C, after austenitization at 900°C for 120 min. Milling tests were performed, and machinability index, cutting forces and surface roughness measurements were used to evaluate the machinability. Higher cutting forces, a lower machinability index and poorer surface roughness of the samples austempered at lower temperatures indicated that austempering at higher temperatures results in better machinability. The machinability of samples austempered at 420°C, which contained higher fractions of retained austenite, was superior to that of samples austempered at lower temperatures, indicating that hardness is an important factor in assessing machinability in addition to the high carbon austenite content. The ADI with 0.6% Ni, austempered at 420°C for 120 minutes, demonstrated the best machinability.

Keywords: Austempering, machinability, machining index, cutting force, surface finish.

1678 Design of Tracking Controllers for Medical Equipment Holders Using AHRS and MEMS Sensors

Authors: Seung You Na, Joo Hyun Jung, Jin Young Kim, Mohammad AhangarKiasari

Abstract:

There are various kinds of medical equipment which require relatively accurate positional adjustments for successful treatment. However, patients tend to move without notice during a certain span of operations. Therefore, it is common practice for accompanying operators to adjust the focus of the equipment. In this paper, tracking controllers for medical equipment are suggested to replace the operators. The tracking controllers use AHRS sensor information to recognize the movements of patients. Sensor fusion is applied to reduce the error magnitudes through linear Kalman filters. Image processing of optical markers is included to correct the accumulated errors of the gyroscope sensor data, especially for the yaw angles. The tracking controller reduces the positional errors between the current focus of a device and the target position on the body of a patient. Since the sensing frequencies of AHRS sensors are very high compared to the physical movements, the control performance is satisfactory. Typical applications are, for example, ESWT or rTMS, which have error ranges of a few centimeters.

Keywords: AHRS, Sensor fusion, Tracking control, Position and posture.

1677 Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification

Authors: F. Alilat, S. Loumi, H. Merrad, B. Sansal

Abstract:

In this article, a modification of the fuzzy ART network algorithm aiming to make it supervised is carried out. It consists of searching for the comparison, training and vigilance parameters giving the minimum quadratic distances between the output of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in making the fuzzy ARTMAP learn a base of examples not only once, as is usual, but as many times as its architecture keeps evolving or the objective error is not reached. In this way, we do not worry about the values to impose on the eight parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we carried out a classification of the image of the bay of Algiers taken by SPOT XS. We use as evaluation criteria the training duration, the mean square error (MSE) in the control step and the rate of good classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP presents the best quality/computing time compromise.

Keywords: Neural Networks, fuzzy ART, fuzzy ARTMAP, Remote sensing, multispectral Classification.

1676 Sustainable Geographic Information System-Based Map for Suitable Landfill Sites in Aley and Chouf, Lebanon

Authors: Allaw Kamel, Bazzi Hasan

Abstract:

Municipal solid waste (MSW) generation is among the most significant sources threatening global environmental health. Solid waste management has been an important environmental problem in developing countries because of the difficulties in finding sustainable solutions for solid wastes. Therefore, more efforts need to be implemented to overcome this problem. Lebanon suffered a severe solid waste management problem in 2015, and a new landfill site was proposed to solve the existing problem. The study aims to identify and locate the most suitable area to construct a landfill, taking sustainable development into consideration to overcome the present situation and protect future demands. Throughout the article, a landfill site selection methodology is discussed using Geographic Information System (GIS) and Multi Criteria Decision Analysis (MCDA). Several environmental, economic and social factors were taken as criteria for the selection of a landfill. Soil, geology, and LUC (Land Use and Land Cover) indices, together with the Sustainable Development Index, were the main inputs to create the final map of Environmentally Sensitive Areas (ESA) for the landfill site. Different factors were determined to define each index. The input data of each factor were managed, visualized and analyzed using GIS. GIS was used as an important tool to identify suitable areas for the landfill. Spatial Analysis (SA), Analysis and Management GIS tools were implemented to produce input maps capable of identifying suitable areas related to each index. A weight was assigned to each factor within the same index, and main weights were assigned to each index used. The combination of the different index maps generates the final output map of ESA. The output map was reclassified into three suitability classes of low, moderate, and high suitability. Results showed different locations suitable for the construction of a landfill. Results also reflected the importance of GIS and MCDA in helping decision makers find a solution to solid wastes through a sanitary landfill.

Keywords: Sustainable development, landfill, municipal solid waste, geographic information system, GIS, multi criteria decision analysis, environmentally sensitive area.

1675 Preoperative to Intraoperative Space Registration for Management of Head Injuries

Authors: M. Gooroochurn, M. Ovinis, D. Kerr, K. Bouazza-Marouf, M. Vloeberghs

Abstract:

A registration framework for image-guided robotic surgery is proposed for three emergency neurosurgical procedures, namely Intracranial Pressure (ICP) Monitoring, External Ventricular Drainage (EVD) and evacuation of a Chronic Subdural Haematoma (CSDH). The registration paradigm uses CT and white light as modalities. This paper presents two simulation studies for a preliminary evaluation of the registration protocol: (1) the loci of the Target Registration Error (TRE) in the patient's axial, coronal and sagittal views were simulated based on a Fiducial Localisation Error (FLE) of 5 mm, and (2) simulation of the actual framework using projected views from a surface rendered CT model to represent white light images of the patient. Craniofacial features were employed as the registration basis to map the CT space onto the simulated intraoperative space. Photogrammetry experiments on an artificial skull were also performed to benchmark the results obtained from the second simulation. The results of both simulations show that the proposed protocol can provide a 5 mm accuracy for these neurosurgical procedures.

Keywords: Image-guided Surgery, Multimodality Registration, Photogrammetry, Preoperative to Intraoperative Registration.

1674 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis

Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior

Abstract:

Mathematical models of drying are used for the purpose of understanding the drying process in order to determine important parameters for the design and operation of the dryer. The jackfruit is a fruit with high consumption in the Northeast and high perishability. It is necessary to apply techniques to improve its conservation for longer periods in order to spread its consumption to regions where it is currently low. This study aimed to analyze several mathematical models (Page, Lewis, and Midilli) and indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: accuracy (Af) and noise factors (Bf), root mean square error (RMSE) and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. It is observed that the Midilli model was more accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the use of the Midilli model is not appropriate for process control purposes due to the need for four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
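
As a sketch of how such thin-layer drying models can be fitted and compared, the snippet below fits the Page model, MR = exp(-k·t^n), to a hypothetical moisture-ratio curve and computes RMSE and %SEP; the data points and the resulting parameter values are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Hypothetical drying data: time in hours, dimensionless moisture ratio.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
mr_obs = np.array([1.00, 0.78, 0.60, 0.46, 0.35, 0.27, 0.20, 0.15, 0.11, 0.08])

(k, n), _ = curve_fit(page_model, t, mr_obs, p0=(0.1, 1.0))
mr_pred = page_model(t, k, n)

rmse = np.sqrt(np.mean((mr_obs - mr_pred) ** 2))
sep = 100.0 * np.sqrt(np.sum((mr_obs - mr_pred) ** 2) / len(t)) / np.mean(mr_obs)

print(f"Page parameters: k={k:.4f}, n={n:.4f}")
print(f"RMSE={rmse:.4f}, %SEP={sep:.2f}")
```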

Keywords: Drying, models, jackfruit.

1673 Evaluating Hourly Sulphur Dioxide and Ground Ozone Simulated with the Air Quality Model in Lima, Peru

Authors: Odón R. Sánchez-Ccoyllo, Elizabeth Ayma-Choque, Alan Llacza

Abstract:

Sulphur dioxide (SO₂) and surface-ozone (O₃) concentrations are associated with diseases. The objective of this research is to evaluate the effectiveness of the air-quality Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) model with a horizontal resolution of 5 km x 5 km. For this purpose, the measurements of the hourly SO₂ and O₃ concentrations available in three air quality monitoring stations in Lima, Peru were used for the purpose of validating the simulations of the SO₂ and O₃ concentrations obtained with the WRF-Chem model in February 2018. For the quantitative evaluation of the simulations of these gases, statistical techniques were implemented, such as the average of the simulations; the average of the measurements; the Mean Bias (MeB); the Mean Error (MeE); and the Root Mean Square Error (RMSE). The results of these statistical metrics indicated that the simulated SO₂ and O₃ values over-predicted the SO₂ and O₃ measurements. For the SO₂ concentration, the MeB values varied from 0.58 to 26.35 µg/m³; the MeE values varied from 8.75 to 26.5 µg/m³; the RMSE values varied from 13.3 to 31.79 µg/m³; while for O₃ concentrations the statistical values of the MeB varied from 37.52 to 56.29 µg/m³; the MeE values varied from 37.54 to 56.70 µg/m³; the RMSE values varied from 43.05 to 69.56 µg/m³.
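
The evaluation metrics named above (MeB, MeE, RMSE) have standard definitions; a minimal sketch with hypothetical hourly SO₂ values in µg/m³ is shown below — the arrays are placeholders, not the Lima measurements.

```python
import numpy as np

def evaluation_metrics(simulated, observed):
    """Mean Bias (MeB), Mean Error (MeE) and Root Mean Square Error (RMSE)."""
    diff = simulated - observed
    meb = np.mean(diff)                 # signed bias: > 0 means over-prediction
    mee = np.mean(np.abs(diff))         # mean absolute error
    rmse = np.sqrt(np.mean(diff ** 2))
    return meb, mee, rmse

# Placeholder hourly SO2 concentrations in ug/m3 (model vs. station).
sim = np.array([30.0, 42.0, 55.0, 38.0, 61.0, 47.0])
obs = np.array([22.0, 35.0, 40.0, 30.0, 44.0, 39.0])

meb, mee, rmse = evaluation_metrics(sim, obs)
print(f"MeB={meb:.2f}, MeE={mee:.2f}, RMSE={rmse:.2f} ug/m3")
```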

Keywords: Ground-ozone, Lima, Sulphur dioxide, WRF-Chem.

1672 A Quantitative Assessment of the Social Marginalization in Romania

Authors: Andra Costache, Rădiţa Alexe

Abstract:

The analysis of the spatial disparities of social marginalization is a requirement in the present-day socio-economic and political context of Romania, an East-European state, member of the European Union since 2007, at present faced with the imperatives of the growth of its territorial cohesion. The main objective of this article is to develop a methodology for the assessment of social marginalization, in order to understand the intensity of the marginalization phenomenon at different spatial scales. The article proposes a social marginalization index (SMI), calculated through the integration of ten indicators relevant for the two components of social marginalization: the material component and the symbolical component. The results highlighted a strong connection between the total degree of social marginalization and the dependence on social benefits, unemployment rate, non-inclusion in the compulsory education, criminality rate, and the type of pension insurance.

Keywords: Romania, social marginalization index, territorial disparities.

1671 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma methodology, the design, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications given by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine important factors that improve the process capability index of the injection molding process.
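
For reference, the process capability indices mentioned above are computed from the specification limits and the process mean and standard deviation: Cp = (USL − LSL)/(6σ) and Cpk = min((USL − µ)/(3σ), (µ − LSL)/(3σ)). The sketch below uses hypothetical shrinkage measurements and specification limits, not the case-study data.

```python
import numpy as np

def capability_indices(data, lsl, usl):
    """Process capability Cp and Cpk for a sample and spec limits (LSL, USL)."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min((usl - mu) / (3.0 * sigma), (mu - lsl) / (3.0 * sigma))
    return cp, cpk

# Hypothetical shrinkage measurements (%) and spec limits.
shrinkage = np.array([1.52, 1.48, 1.55, 1.50, 1.47, 1.53, 1.49, 1.51])
cp, cpk = capability_indices(shrinkage, lsl=1.40, usl=1.60)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}")
```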

Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.

1670 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement

Authors: Pogula Rakesh, T. Kishore Kumar

Abstract:

Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying the optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, Babble and Pink noise to clean speech samples at -5 dB, 0 dB, 5 dB and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal to Noise Ratio (SNR) and SNR Loss. Based on the performance evaluation, the proposed RLS algorithm was found to be a better optimal noise cancellation technique for speech signals.
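
A minimal sketch of an RLS adaptive noise canceller of the kind described: the primary channel carries speech plus noise, the reference channel carries correlated noise, and the filter output is subtracted to give the enhanced speech. The filter order, forgetting factor and synthetic signals are hypothetical, not the paper's setup.

```python
import numpy as np

def rls_noise_canceller(primary, reference, order=8, lam=0.999, delta=0.01):
    """RLS adaptive noise canceller: returns the error signal (enhanced speech)."""
    w = np.zeros(order)
    P = np.eye(order) / delta
    enhanced = np.zeros(len(primary))
    for i in range(order, len(primary)):
        x = reference[i - order + 1:i + 1][::-1]   # current and past reference samples
        y = w @ x                                  # noise estimate
        e = primary[i] - y                         # enhanced speech sample
        k = P @ x / (lam + x @ P @ x)              # RLS gain vector
        w = w + k * e                              # coefficient update
        P = (P - np.outer(k, x @ P)) / lam         # inverse-correlation matrix update
        enhanced[i] = e
    return enhanced

# Synthetic demo: a sinusoidal "speech" buried in an FIR-filtered noise path.
rng = np.random.default_rng(1)
n = 4000
speech = np.sin(2 * np.pi * 0.01 * np.arange(n))
noise = rng.normal(0, 1, n)
colored = 0.6 * noise + 0.3 * np.roll(noise, 1) + 0.1 * np.roll(noise, 2)
primary = speech + colored
out = rls_noise_canceller(primary, noise)
print("residual MSE after convergence:", np.mean((out[1000:] - speech[1000:]) ** 2))
```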

Keywords: Adaptive filter, Adaptive Noise Canceller, Mean Squared Error, Noise reduction, NLMS, RLS, SNR, SNR Loss.

1669 The Use of Artificial Neural Network in Option Pricing: The Case of S and P 100 Index Options

Authors: Zeynep İltüzer Samur, Gül Tekin Temur

Abstract:

Due to the increasing and varying risks that economic units face, derivative instruments gain substantial importance, and trading volumes of derivatives have reached very significant levels. In parallel with these high trading volumes, researchers have developed many different models; some are parametric, some are nonparametric. In this study, the aim is to analyse the success of artificial neural networks in the pricing of options with S&P 100 index option data. Generally, previous studies cover data on European-type call options. This study includes not only European call options but also American call and put options and European put options. Three data sets are used to build three different ANN models. One includes only data that are directly observed from the economic environment, i.e. strike price, spot price, interest rate, maturity and type of the contract. The others include an extra input that is not observable data but a parameter, i.e. volatility. With these detailed data, the performance of the ANN in the put/call dimension, American/European dimension and moneyness dimension is analyzed, and whether the contribution of volatility in the neural network analysis improves prediction performance or not is examined. The most striking result revealed by the study is that the ANN shows better performance when pricing call options compared to put options, and that the use of the volatility parameter as an input does not improve the performance.
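
As a sketch of the approach described (an ANN mapping observable option features, optionally plus volatility, to a price), the snippet below trains a small multilayer network on synthetic European call prices generated with the Black-Scholes formula; the network architecture and the data are hypothetical stand-ins, not the paper's S&P 100 data set or model.

```python
import numpy as np
from scipy.stats import norm
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def bs_call(S, K, r, T, sigma):
    """Black-Scholes European call price, used only to generate toy targets."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 5000
S = rng.uniform(80, 120, n)          # spot price
K = rng.uniform(80, 120, n)          # strike price
r = rng.uniform(0.01, 0.05, n)       # interest rate
T = rng.uniform(0.1, 1.0, n)         # maturity in years
sigma = rng.uniform(0.1, 0.4, n)     # volatility (the optional extra input)
X = np.column_stack([S, K, r, T, sigma])
y = bs_call(S, K, r, T, sigma)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)
print("test R^2:", ann.score(X_te, y_te))
```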

Keywords: Option Pricing, Neural Network, S&P 100 Index, American/European options

1668 Vitamin D Deficiency and Insufficiency in Postmenopausal Women with Obesity

Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Dzerovych, Roksolana Povoroznyuk, Oksana Ivanyk

Abstract:

Deficiency and insufficiency of vitamin D are a pandemic of the 21st century. Obesity patients have a lower level of vitamin D, but the literature data are contradictory. The purpose of this study is to investigate vitamin D deficiency and insufficiency in postmenopausal women with obesity. We examined 1007 women aged 50-89 years. Mean age was 65.74±8.61 years; mean height was 1.61±0.07 m; mean weight was 70.65±13.50 kg; mean body mass index was 27.27±4.86 kg/m², and the mean serum 25(OH)D level was 26.00±12.00 nmol/l. The women were divided into the following six groups depending on body mass index: group I – 338 women with normal body weight, group II – 16 women with insufficient body weight, group III – 382 women with excessive body weight, group IV – 199 women with obesity of class I, group V – 60 women with obesity of class II, and group VI – 12 women with obesity of class III. The level of 25(OH)D in serum was measured by means of an electrochemiluminescent method on an Elecsys 2010 analyzer (Roche Diagnostics, Germany) with cobas test-systems. 34.4% of the examined women have deficiency of vitamin D and 31.4% have insufficiency. Women with obesity of class I (23.60±10.24 ng/ml) and obesity of class II (22.38±10.34 ng/ml) had significantly lower levels of 25(OH)D compared to women with normal body weight (28.24±12.99 ng/ml), p=0.00003. In women with obesity, BMI significantly influences the vitamin D level, and this influence does not depend on the season.

Keywords: Obesity, body mass index, vitamin D deficiency/insufficiency, postmenopausal women, age.

1667 Reduction of Linear Time-Invariant Systems Using Routh-Approximation and PSO

Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil

Abstract:

Order reduction of linear time-invariant systems employing two methods, one using the advantages of Routh approximation and the other an evolutionary technique, is presented in this paper. In the Routh approximation method, the denominator of the reduced order model is obtained using Routh approximation, while the numerator of the reduced order model is determined using the indirect approach of retaining the time moments and/or Markov parameters of the original system. By this method, the reduced order model is guaranteed to be stable if the original high order model is stable. In the second method, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model to a unit step input. Both methods are illustrated through numerical examples.
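
The ISE objective used by the PSO method can be evaluated, for any candidate reduced model, by comparing step responses; the sketch below computes it with scipy for a hypothetical third-order original system and a hypothetical second-order candidate, without the PSO search loop itself.

```python
import numpy as np
from scipy import signal

def ise_step(num_full, den_full, num_red, den_red, t_end=20.0, n=2000):
    """Integral Squared Error between unit-step responses of two LTI systems."""
    t = np.linspace(0.0, t_end, n)
    _, y_full = signal.step(signal.TransferFunction(num_full, den_full), T=t)
    _, y_red = signal.step(signal.TransferFunction(num_red, den_red), T=t)
    dt = t[1] - t[0]
    return np.sum((y_full - y_red) ** 2) * dt    # rectangular approximation of the integral

# Hypothetical 3rd-order original system and a 2nd-order candidate reduced model.
original = ([8.0, 6.0, 2.0], [1.0, 6.0, 11.0, 6.0])
candidate = ([1.0, 0.5], [1.0, 1.8, 1.5])
print("ISE =", ise_step(*original, *candidate))
```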

Keywords: Model Order Reduction, Markov Parameters, Routh Approximation, Particle Swarm Optimization, Integral Squared Error, Steady State Stability.

1666 Application of Biometrics to Obtain High Entropy Cryptographic Keys

Authors: Sanjay Kanade, Danielle Camara, Dijana Petrovska-Delacretaz, Bernadette Dorizzi

Abstract:

In this paper, a two factor scheme is proposed to generate cryptographic keys directly from biometric data, which unlike passwords, are strongly bound to the user. Hash value of the reference iris code is used as a cryptographic key and its length depends only on the hash function, being independent of any other parameter. The entropy of such keys is 94 bits, which is much higher than any other comparable system. The most important and distinct feature of this scheme is that it regenerates the reference iris code by providing a genuine iris sample and the correct user password. Since iris codes obtained from two images of the same eye are not exactly the same, error correcting codes (Hadamard code and Reed-Solomon code) are used to deal with the variability. The scheme proposed here can be used to provide keys for a cryptographic system and/or for user authentication. The performance of this system is evaluated on two publicly available databases for iris biometrics namely CBS and ICE databases. The operating point of the system (values of False Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set by properly selecting the error correction capacity (ts) of the Reed-Solomon codes, e.g., on the ICE database, at ts = 15, FAR is 0.096% and FRR is 0.76%.

Keywords:

1665 Information Retrieval: A Comparative Study of Textual Indexing Using an Oriented Object Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data such as books and articles in libraries over the centuries has imposed the need for effective mechanisms to locate them. Early techniques such as abstraction, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus) and to allow a user to find those relevant to him, that is to say, the contents which match the information needs of the user. Most information retrieval models use a specific data structure to index a corpus, called the "inverted file" or "reverse index". This inverted file collects information on all terms over the corpus documents, specifying the identifiers of documents that contain the term in question, the frequency of each term in the documents of the corpus, the positions of the occurrences of the word... In this paper, we use an oriented object database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is to carry out a comparative study to see whether object databases can compete with the inverted index in terms of access speed and resource consumption using a large volume of data.
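
For context, a minimal in-memory inverted file of the kind described — mapping each term to the documents that contain it, with per-document frequencies and positions — can be sketched as follows; the sample documents are placeholders.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to {doc_id: [positions]}; the term frequency is len(positions)."""
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, term in enumerate(text.lower().split()):
            index[term][doc_id].append(pos)
    return index

# Placeholder corpus.
docs = {
    1: "information retrieval with an inverted file",
    2: "the inverted file collects information on all terms",
    3: "object databases store objects directly",
}
index = build_inverted_index(docs)
for doc_id, positions in index["inverted"].items():
    print(f'"inverted" in doc {doc_id}: frequency={len(positions)}, positions={positions}')
```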

Keywords: Information Retrieval, indexation, oriented object database (db4o), inverted file.

1664 Shannon-Weaver Biodiversity of Neutrophils in Fractal Networks of Immunofluorescence for Medical Diagnostics

Authors: N. E. Galich

Abstract:

We develop new nonlinear methods of immunofluorescence analysis for a sensitive technology of the respiratory burst reaction of DNA fluorescence due to oxidative activity in peripheral blood neutrophils. Histograms in flow cytometry experiments represent the frequency of fluorescence flashes as a function of fluorescence intensity. We used the Shannon-Weaver index for the definition of the neutrophils' biodiversity and the Hurst index for the definition of fractal correlations in immunofluorescence for different donors, as the basic quantitative criteria for medical diagnostics of health status. We analyze frequencies of flashes, information, Shannon entropies and their fractals in immunofluorescence networks under reduction of the histogram range. We found a number of simple universal correlations for biodiversity, information and the Hurst index in the diagnostics and classification of pathologies for wide spectra of diseases. In addition, a clear criterion of common immunity and human health status is determined, in the form of yes/no answers. These answers are based on peculiarities of information in immunofluorescence networks and the biodiversity of neutrophils. Experimental data analysis has shown the existence of homeostasis for information entropy in the oxidative activity of DNA in neutrophil nuclei for all donors.
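
The Shannon-Weaver index of a fluorescence histogram is simply the entropy of the normalized flash-frequency distribution, H = −Σ pᵢ ln pᵢ; a small sketch with hypothetical histogram counts is given below.

```python
import numpy as np

def shannon_weaver_index(counts):
    """Shannon-Weaver index (Shannon entropy) of a histogram of flash frequencies."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore empty intensity bins
    return -np.sum(p * np.log(p))

# Hypothetical flash counts per fluorescence-intensity bin.
histogram = [5, 12, 30, 44, 38, 21, 9, 3]
print(f"H = {shannon_weaver_index(histogram):.3f} nats")
```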

Keywords: Blood and cells fluorescence in diagnostics of diseases, cytometric histograms, entropy and information in fractal networks of oxidative activity of DNA, long-range chromosomal correlations in living cells.

1663 Fuzzy Controller Design for TCSC to Improve Power Oscillations Damping

Authors: M. Nayeripour, H. Khorsand, A. Roosta, T. Niknam, E. Azad

Abstract:

Series compensators have been used for many years to increase the stability and loadability of transmission lines. They compensate the retarded or advanced voltage drop of transmission lines by placing an advanced or retarded voltage in series with them to compensate the effective reactance, which increases the loadability of transmission lines. In this paper, two fuzzy controller methods, based on power reference tracking and impedance reference tracking, have been developed for the TCSC controller in order to increase loadability and improve the power oscillation damping of the system. In these methods, the firing angles of the thyristors are determined directly through special rule bases with the error and the change of error as inputs. The simulation results on a two-area four-machine power system show the good performance of power oscillation damping in the system. Comparison of this method with a classical PI controller shows the increased speed of the system response in power oscillation damping.

Keywords: TCSC, Two area network, Fuzzy controller, Power oscillation damping.

1662 Corporate Governance, Shareholder Monitoring and Cost of Debt in Malaysia

Authors: Zulkufly Ramly

Abstract:

This paper attempts to investigate the effect of corporate governance and shareholder monitoring mechanisms on the cost of debt of Malaysian listed firms. We assess the quality of corporate governance using a comprehensive corporate governance index, which consists of 139 items in six broad categories. We classify shareholder monitoring mechanisms into concentrated ownership, family, insider and government ownership. Using a panel sample from 2003 to 2007, the regression results show that high corporate governance quality and concentrated ownership lower the firm's cost of debt. Debt issuers consider board structure and procedures, board compensation practices, accountability and audit, transparency, and social and environmental activities as integral components of a good corporate governance framework.

Keywords: Corporate governance index, cost of debt, ownership structure, Malaysia.

1661 A Nano-Scaled SRAM Guard Band Design with Gaussian Mixtures Model of Complex Long Tail RTN Distributions

Authors: Worawit Somha, Hiroyuki Yamauchi

Abstract:

This paper proposes, for the first time, how the challenges facing guard-band designs, including the margin assist-circuit scheme for the screening test, should be addressed in the coming process generations. The increased screening error impacts are discussed based on the proposed statistical analysis models. It is shown that the yield loss caused by misjudgment in the screening test would become five orders of magnitude larger than that of the conventional one when the amplitude of the random telegraph noise (RTN)-induced variations approaches that of random dopant fluctuation. Three fitting methods to approximate the complex RTN-induced Gamma mixture distributions by a simple Gaussian mixture model (GMM) are proposed and compared. It is verified that the proposed methods can reduce the error of the fail-bit predictions by four orders of magnitude.
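
The abstract does not detail its three fitting methods, but the basic idea of approximating a long-tailed Gamma mixture by a Gaussian mixture can be sketched with scikit-learn's EM-based GaussianMixture; the mixture parameters, sample sizes, number of Gaussian components and screening limit below are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Hypothetical long-tailed variation samples: a two-component Gamma mixture.
rng = np.random.default_rng(0)
samples = np.concatenate([
    rng.gamma(shape=2.0, scale=5.0, size=8000),    # bulk of the distribution
    rng.gamma(shape=1.2, scale=30.0, size=2000),   # long RTN-like tail
]).reshape(-1, 1)

# Approximate the Gamma mixture with a 3-component Gaussian mixture via EM.
gmm = GaussianMixture(n_components=3, random_state=0).fit(samples)
params = zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel())
for w, mu, var in params:
    print(f"weight={w:.3f}, mean={mu:.2f}, std={np.sqrt(var):.2f}")

# Tail probability beyond a hypothetical screening limit, under the fitted GMM.
limit = 120.0
tail = sum(w * norm.sf(limit, loc=mu, scale=np.sqrt(var))
           for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()))
print(f"P(shift > {limit}) ~ {tail:.2e}")
```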

Keywords: Mixtures of Gaussian, Random telegraph noise, EM algorithm, Long-tail distribution, Fail-bit analysis, Static random access memory, Guard band design.

1660 Optimizing and Evaluating Performance Quality Control of the Production Process of Disposable Essentials Using a Vague Goal Programming Approach

Authors: Hadi Gholizadeh, Ali Tajdin

Abstract:

To have effective production planning, it is necessary to control the quality of processes. This paper aims at improving the performance of the disposable essentials process using statistical quality control and goal programming in a vague environment, which expresses uncertainty, because there is always measurement error in the real world. Therefore, in this study, the conditions are examined in a vague, distance-based environment. The disposable essentials process at Kach Company was studied. Statistical control tools were used to characterize the existing process for four factor responses, including the averages of the disposable glasses' weights, heights, crater diameters, and volumes. Goal programming was then utilized to find the combination of optimal factor settings in a vague environment, which accounts for the uncertainty of the initial information when some of the parameters of the models are vague; a fuzzy regression model is also used to predict the responses of the four described factors. Optimization results show that the process capability index values for the disposable glasses' average weights, heights, crater diameters and volumes were improved. This increases the quality of the products and reduces waste, which will reduce the cost of the finished product and ultimately bring customer satisfaction, which in turn will mean increased sales.

Keywords: Goal programming, quality control, vague environment, disposable glasses’ optimization, fuzzy regression.

1659 Quantifying the Second-Level Digital Divide on Sub-National Level

Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer

Abstract:

The digital divide, the gap in access to the world of digital technologies and the socio-economic opportunities that they create, is an important phenomenon of the XXI century. This gap may exist between countries, regions within a country or socio-demographic groups, creating the classes of “digital haves and have-nots”. While the first-level divide (the difference in opportunities to access the digital networks) has been demonstrated to diminish with time, the issues of the second-level divide (the difference in skills and usage of digital systems) and the third-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on the measurement of the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions, providing for 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the second-level digital divide and can also inform practical decision-making regarding the strategies of national and regional digital development.

Keywords: Digital transformation, second-level digital divide, composite index, digital policy.

1658 Software Effort Estimation Models Using Radial Basis Function Network

Authors: E. Praynlin, P. Latha

Abstract:

Software effort estimation is the process of estimating the effort required to develop software. By estimating the effort, the cost and schedule required to develop the software can be determined. An accurate estimate helps the developer to allocate resources accordingly in order to avoid cost overrun and schedule overrun. Several methods are available to estimate the effort, among which soft computing based methods play a prominent role. Software cost estimation deals with a lot of uncertainty, and among all soft computing methods, neural networks are good at handling uncertainty. In this paper, the Radial Basis Function Network is compared with the back propagation network; the results are validated using six data sets, and it is found that the RBFN is best suited to estimate the effort. The results are validated using two tests: the error test and the statistical test.
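
The error measure named in the keywords, Mean Magnitude of Relative Error (MMRE), has a standard definition, MMRE = (1/n) Σ |actualᵢ − predictedᵢ| / actualᵢ; a short sketch with placeholder effort values (e.g., in person-months) follows — the numbers are illustrative, not the paper's data sets.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error for effort estimates."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

# Placeholder efforts in person-months: measured vs. two models' estimates.
actual = [120.0, 45.0, 300.0, 80.0, 150.0]
rbfn_pred = [110.0, 50.0, 280.0, 85.0, 160.0]
bpn_pred = [95.0, 60.0, 240.0, 95.0, 185.0]

print(f"MMRE (RBFN): {mmre(actual, rbfn_pred):.3f}")
print(f"MMRE (BPN):  {mmre(actual, bpn_pred):.3f}")
```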

Keywords: Software cost estimation, Radial Basis Function Network (RBFN), Back propagation function network, Mean Magnitude of Relative Error (MMRE).

1657 Performance Analysis of MIMO-OFDM Using Convolution Codes with QAM Modulation

Authors: I Gede Puja Astawa, Yoedy Moegiharto, Ahmad Zainudin, Imam Dui Agus Salim, Nur Annisa Anggraeni

Abstract:

The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one such code is the convolution code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and code rate ½. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath channel in the OFDM system. To achieve a BER of 10⁻³, an SNR of 10 dB is required in the SISO-OFDM scheme. The 2x2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10⁻³. The 4x4 MIMO-OFDM scheme requires 5 dB, while adding convolution coding to the 4x4 MIMO-OFDM can improve performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the uncoded 4x4 MIMO-OFDM system, a 7 dB power saving relative to 2x2 MIMO-OFDM, and significant power savings relative to the SISO-OFDM system.

Keywords: Convolution code, OFDM, MIMO, QAM, BER.

1656 HaskellFL: A Tool for Detecting Logical Errors in Haskell

Authors: Vanessa Vasconcelos, Mariza A. S. Bigonha

Abstract:

Understanding and using the functional paradigm is a challenge for many programmers. Looking for logical errors in code may take a lot of a developer’s time when a program grows in size. In order to facilitate both processes, this paper presents HaskellFL, a tool that uses fault localization techniques to locate a logical error in Haskell code. The Haskell subset used in this work is sufficiently expressive for those studying Functional Programming to get immediate help debugging their code and to answer questions about key concepts associated with the functional paradigm. HaskellFL was tested against Functional Programming assignments submitted by students enrolled at the Functional Programming class at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available in GitHub. This work also evaluated the effectiveness of two fault localization techniques, Tarantula and Ochiai, in the Haskell context. Furthermore, the EXAM score was chosen to evaluate the tool’s effectiveness, and results showed that HaskellFL reduced the effort needed to locate an error for all tested scenarios. The results also showed that the Ochiai method was more effective than Tarantula.
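
For reference, the two spectrum-based fault localization metrics compared in the paper have standard formulas: Tarantula(s) = (ef/F) / (ef/F + ep/P) and Ochiai(s) = ef / √(F · (ef + ep)), where ef and ep are the numbers of failing and passing tests that execute statement s, and F and P are the total numbers of failing and passing tests. A small sketch (with a made-up coverage spectrum, written in Python rather than Haskell) follows.

```python
from math import sqrt

def tarantula(ef, ep, total_failed, total_passed):
    """Tarantula suspiciousness of a statement from its coverage spectrum."""
    if ef == 0:
        return 0.0
    fail_ratio = ef / total_failed
    pass_ratio = ep / total_passed if total_passed else 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

def ochiai(ef, ep, total_failed, total_passed):
    """Ochiai suspiciousness of a statement from its coverage spectrum."""
    denom = sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

# Made-up spectrum: statement -> (failing tests covering it, passing tests covering it).
spectrum = {"s1": (3, 1), "s2": (1, 4), "s3": (0, 5)}
F, P = 3, 6
for stmt, (ef, ep) in spectrum.items():
    print(stmt, f"tarantula={tarantula(ef, ep, F, P):.2f}",
          f"ochiai={ochiai(ef, ep, F, P):.2f}")
```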

Keywords: Debug, fault localization, functional programming, Haskell.

1655 Appraisal of Relativistic Effects on GNSS Receiver Positioning

Authors: I. Yakubu, Y. Y. Ziggah, E. A. Gyamera

Abstract:

The Global Navigation Satellite System (GNSS) era started with the launch of the United States Department of Defense Global Positioning System (GPS). GNSS has grown over the years to include GLONASS (Russia), Galileo (European Union) and BeiDou (China). Any GNSS architecture consists of three major segments: the space, control and user segments. Errors such as multipath, ionospheric and tropospheric effects, satellite clocks, receiver noise and orbit errors (relativity effects) have significant effects on GNSS positioning. To obtain centimeter-level accuracy, the impacts of the relative motion of the satellites and the earth need to be taken into account. This paper discusses the relevance of the theory of relativity as a source of error for GNSS receiver position fixes, based on the available relevant literature. The review reveals that, due to relativity, time dilation, gravitational frequency shift and the Sagnac effect significantly influence the use of GNSS receivers for positioning, with an error range of ±2.5 m based on pseudo-range computation.
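
To give a sense of the magnitudes involved, the sketch below computes the two dominant relativistic clock-rate offsets for a GPS-like orbit using textbook constants (Earth's gravitational parameter, an approximate orbital radius of 26,560 km and an Earth radius of 6,371 km); the resulting net offset of roughly +38 µs/day is the well-known figure that satellite clocks are pre-corrected for, and all values here are approximate.

```python
import math

C = 299_792_458.0          # speed of light, m/s
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3          # mean Earth radius, m (approx.)
R_ORBIT = 26_560e3         # GPS orbital radius, m (approx.)
SECONDS_PER_DAY = 86_400.0

# Special relativity (time dilation): the satellite clock runs slow by v^2 / (2 c^2).
v = math.sqrt(GM / R_ORBIT)                       # circular orbital speed
sr_rate = -v**2 / (2 * C**2)

# General relativity (gravitational frequency shift): weaker gravity at orbit makes it run fast.
gr_rate = GM / C**2 * (1.0 / R_EARTH - 1.0 / R_ORBIT)

net_rate = sr_rate + gr_rate
drift_us_per_day = net_rate * SECONDS_PER_DAY * 1e6
range_error_km_per_day = net_rate * SECONDS_PER_DAY * C / 1e3

print(f"special relativity: {sr_rate * SECONDS_PER_DAY * 1e6:+.1f} us/day")
print(f"general relativity: {gr_rate * SECONDS_PER_DAY * 1e6:+.1f} us/day")
print(f"net clock drift:    {drift_us_per_day:+.1f} us/day "
      f"(~{range_error_km_per_day:.1f} km/day of pseudo-range if left uncorrected)")
```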

Keywords: GNSS, relativistic effects, pseudo-range, accuracy.

1654 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Share Channel Receiver under Fading Channel

Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis

Abstract:

Cellular Vehicle to Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Since establishing a simulator for C-V2X communications is an essential preliminary step to achieve reliable and stable communication links, this paper proposes a complete framework of a link-level simulator based on the 3GPP specifications for the Physical Sidelink Share Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms in the receiver part, i.e., sliding window in channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
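
A per-subcarrier MMSE equalizer of the kind named in the title has the standard form x̂ = H*·y / (|H|² + σ²) for an estimated channel H, received symbol y and noise variance σ²; the sketch below applies it to hypothetical QPSK symbols on a few subcarriers (the 3GPP PSSCH structure and the sliding-window channel estimator are not modeled).

```python
import numpy as np

def mmse_equalize(y, h, noise_var):
    """Per-subcarrier MMSE equalization: x_hat = conj(H) * y / (|H|^2 + sigma^2)."""
    return np.conj(h) * y / (np.abs(h) ** 2 + noise_var)

rng = np.random.default_rng(0)
n_sc = 8                                           # hypothetical number of subcarriers
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
h = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) / np.sqrt(2)   # Rayleigh fading
noise_var = 0.05
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))

y = h * qpsk + noise                               # received subcarrier symbols
x_hat = mmse_equalize(y, h, noise_var)
detected = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print("symbol errors:", int(np.sum(detected != qpsk)))
```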

Keywords:
