Search results for: Normalized least mean square methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4650

2850 EZW Coding System with Artificial Neural Networks

Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar

Abstract:

Image compression plays a vital role in today's communication. Limited allocated bandwidth leads to slower communication, so image data must be compressed before transmission in order to increase the transmission rate within the available bandwidth. There are basically two types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression achieves higher compression than lossless compression, the accuracy of the retrieved image is lower for lossy compression than for lossless compression. The JPEG and JPEG2000 image compression systems follow Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each subband are uncorrelated with the coefficients of other subbands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet transforms. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis on different images is proposed for an EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared to the existing EZW coding system.
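
As a hedged illustration of the multilevel wavelet decomposition that EZW-style coders operate on (a sketch only, not the authors' implementation; the wavelet family, level count, toy image, and threshold rule are assumptions), a minimal Python example using the PyWavelets library:

```python
import numpy as np
import pywt  # PyWavelets

# Toy 64x64 "image"; in practice this would be the image to compress.
image = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                 np.cos(np.linspace(0, np.pi, 64)))

# 3-level 2D discrete wavelet transform. EZW scans these subbands from
# coarse to fine, exploiting parent-child coefficient relationships.
coeffs = pywt.wavedec2(image, wavelet='haar', level=3)

approx = coeffs[0]  # coarsest approximation subband
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"level {lvl}: detail subband shape {cH.shape}")

# A crude zerotree-style significance test against an initial threshold:
threshold = np.abs(approx).max() / 2
print("significant coarse coefficients:",
      int((np.abs(approx) >= threshold).sum()))
```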

Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.

Downloads: 1934
2849 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper, using real data from the Jin Gas Field of the L-Basin of Pakistan. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e. reservoir characterization and field development), rather than merely determining permeability and skin parameters. Normally, for reservoir characterization we rely on well test analysis to some extent, but for field development planning, well test analysis has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well had been marked only on the basis of G&G (geologic and geophysical) data. The seismic interpretation could not detect one of the boundaries (fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the model from the well test analysis played a crucial role in proposing the location of the second well of the newly discovered field. The results from different methods of well test analysis of the Jin Gas Field are also integrated with, and supported by, other reservoir engineering tools, i.e. the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm is obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On the strong basis of this workflow and algorithm, it was shown that the proposed location of the new development well was not justified and that it must be located elsewhere, anywhere except in the southern direction.
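
To illustrate one of the supporting reservoir engineering tools named above, the following Python sketch shows the conventional volumetric-depletion gas material balance (the p/z straight line) used to estimate gas in place; the pressure, z-factor, and production figures are invented for demonstration and are not Jin Gas Field data:

```python
import numpy as np

# Hypothetical surveys: average reservoir pressure p (psia), gas
# deviation factor z (dimensionless), cumulative production Gp (Bscf).
p  = np.array([3500.0, 3300.0, 3100.0, 2900.0])
z  = np.array([0.870, 0.860, 0.850, 0.845])
Gp = np.array([0.0, 15.0, 31.0, 46.0])

# Volumetric depletion material balance: p/z = (p/z)_i * (1 - Gp/G),
# so p/z is linear in Gp and its x-axis intercept is gas in place G.
slope, intercept = np.polyfit(Gp, p / z, deg=1)
G = -intercept / slope
print(f"Estimated original gas in place: {G:.0f} Bscf")
```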

Keywords: Field development, reservoir characterization, reservoir engineering, well test analysis.

Downloads: 1115
2848 Evaluating Mechanical Properties of CoNiCrAlY Coating from Miniature Specimen Testing at Elevated Temperature

Authors: W. Wen, G. Jackson, S. Maskill, D. G. McCartney, W. Sun

Abstract:

CoNiCrAlY alloys have been widely used as bond coats for thermal barrier coating (TBC) systems because of their low cost, improved control of composition, and the feasibility of tailoring the coating microstructures. Coatings are in general very thin structures, and it is therefore impossible to characterize the mechanical responses of the materials via conventional mechanical testing methods. For this reason, miniature specimen testing methods, such as the small punch test (SPT) technique, have been developed. This paper presents some recent research in evaluating the mechanical properties of CoNiCrAlY coatings at room and high temperatures, through the use of small punch testing and a developed miniature specimen tensile testing method, applicable to a range of temperatures, to investigate the elastic-plastic and creep behavior as well as the ductile-brittle transition temperature (DBTT) behavior. An inverse procedure was developed to derive the mechanical properties of the coating materials from such tests. A two-layer specimen test method is also described. The key findings include: 1) the temperature-dependent coating properties can be accurately determined by the miniature tensile testing within a wide range of temperatures; 2) consistent DBTTs (~650 °C) can be identified by both the SPT and the miniature tensile tests; and 3) the FE SPT modelling has shown good capability of simulating the early local cracking. In general, the temperature-dependent material behavior of the CoNiCrAlY coating has been effectively characterized using miniature specimen testing and the inverse method.

Keywords: CoNiCrAlY coatings, mechanical properties, DBTT, miniature specimen testing.

Downloads: 769
2847 Study of Temperature Changes in Fars Province

Authors: A. Gandomkar, R. Dehghani

Abstract:

Climate change is a phenomenon whose existence, based on evidence reaching back a very long time, is now considered highly probable. The speed and nature of changes in climate parameters since the middle of the twentieth century have differed from before: the changes have been quicker, and their trend has shifted to some extent compared to the past. Climate change, now regarded not only as one of the most common scientific topics but also as a social and political one, is not a new issue. It is a complicated, long-term atmospheric-oceanic phenomenon on a global scale. Changes in precipitation patterns, the fast decrease of snow-covered resources and their rapid melting, increased evaporation, the occurrence of destructive floods, water shortage crises, severe reductions in agricultural harvests, and so on are all signs of climate change. Public instruction is the most important means of coping with this phenomenon, its consequences and its events, but it may be claimed that no significant and effective action has been taken so far. The present article comprises part of a survey of climate change in Fars. The study area has an annual mean temperature of 14 °C and annual precipitation of 320 mm; meteorological data from 23 stations inside the basin with a common 37-year statistical period (1974-2010) were used. The Mann-Kendall and change factor methods, two statistical methods, were applied to study the trend of changes in the annual mean temperature and the annual minimum mean temperature. Based on the time series for each parameter, the annual mean temperature and the mean of the annual maximum temperature show a rising trend, and this trend is clearer for the mean of the annual maximum temperature.
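
As a hedged sketch of the first statistical method named above, the following Python code implements the basic Mann-Kendall trend test (normal approximation, tie correction omitted for brevity) on an invented 37-year temperature series; it is illustrative only:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test; returns S, Z and the two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):                    # S counts concordant minus
        s += np.sign(x[i + 1:] - x[i]).sum()  # discordant pairs
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Invented annual mean temperatures with a slight warming trend.
rng = np.random.default_rng(0)
temps = 14.0 + 0.02 * np.arange(37) + rng.normal(0, 0.3, 37)
s, z, p = mann_kendall(temps)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")  # |Z| > 1.96: 5% level
```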

Keywords: Climate change, coefficient of variation, Fars province, Mann-Kendall method.

Downloads: 1915
2846 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

Performance based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected, and if the trial design does not conform to the desired performance objective, it is revised. In this context, the development of a fundamental approach for performance based seismic design of masonry infilled frames with a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic design of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBDs of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.
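
For orientation, the energy balance concept invoked in such plastic design procedures is commonly written in a generic form like the following (a textbook statement, not necessarily the authors' exact expression):

```latex
\gamma \left( \tfrac{1}{2} M S_v^2 \right) = E_e + E_p
```

where M is the total mass, S_v the design pseudo-spectral velocity, gamma an energy modification factor, and E_e and E_p the elastic and plastic portions of the energy absorbed by the frame; the trial design is revised until the work capacity of the assumed plastic mechanism balances the input-energy demand on the left-hand side.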

Keywords: Masonry Infilled Frame, Energy Methods, Near-fault Ground Motions, Pushover Analysis, Nonlinear Dynamic Analysis, Seismic Demand.

Downloads: 2791
2845 Efficient Microspore Isolation Methods for High Yield Embryoids and Regeneration in Rice (Oryza sativa L.)

Authors: S. M. Shahinul Islam, Israt Ara, Narendra Tuteja, Sreeramanan Subramaniam

Abstract:

Through anther and microspore culture methods, completely homozygous plants can be produced within a year, as compared to the lengthy inbreeding method. Isolated microspore culture is one of the most important techniques for the rapid development of haploid plants. The efficiency of this method is influenced by several factors, such as culture conditions, growth regulators, plant media, pretreatments, physical and growth conditions of the donor plants, pollen isolation procedure, etc. The main purpose of this study was to improve the isolated microspore culture protocol in order to increase the efficiency of embryoid formation and regeneration and to reduce albinism. In this study, we tested mainly three different microspore isolation procedures (by glass rod, by homogenizer, and by blending) and determined their effect on gametic embryogenesis. Three types of media were used, viz. washing, pre-culture and induction media. The induction medium was AMC (modified MS) supplemented with 2,4-D (2.5 mg/l), kinetin (0.5 mg/l), a higher amount of D-mannitol (90 g/l) instead of sucrose, and two amino acids (L-glutamine and L-serine). Of the three main microspore isolation procedures, homogenizer isolation (P4) showed the best performance in ELS induction (177%) and green plantlet regeneration (104%) compared with the other techniques. Albinism occurred in all cases, but microspore isolation from excised anthers by glass rod and by homogenizer produced fewer albino plants, which was also one of the important findings of this study.

Keywords: Androgenesis, pretreatment, microspore culture, regeneration, albino plants, Oryza sativa.

Downloads: 4133
2844 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems, and the difficulty of finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results with the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, and therefore the associated uncertainty in predictions can be obtained; in contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, the developed ABC framework performs well for automatic calibration.
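
The following Python sketch shows the core of ABC rejection sampling in this spirit; the toy runoff model, priors, tolerance, and rainfall forcing are all invented stand-ins, not the authors' R or MIKE URBAN implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 2.0, size=50)   # fixed, invented rainfall forcing

def run_model(theta):
    """Toy stand-in for the hydrologic model: an initial-loss /
    reduction-factor transformation of the rainfall series."""
    initial_loss, reduction_factor = theta
    return np.maximum(rain - initial_loss, 0.0) * reduction_factor

observed = run_model((1.0, 0.6))      # pretend these are gauged flows

def abc_rejection(n_draws=20000, eps=0.5):
    """ABC rejection: keep prior draws whose simulated runoff lies
    within RMSE eps of the observations; the kept draws approximate
    the posterior distribution of the calibration parameters."""
    kept = []
    for _ in range(n_draws):
        theta = (rng.uniform(0, 5), rng.uniform(0, 1))  # uniform priors
        rmse = np.sqrt(np.mean((run_model(theta) - observed) ** 2))
        if rmse < eps:
            kept.append(theta)
    return np.array(kept)

posterior = abc_rejection()
lo, hi = np.percentile(posterior, [2.5, 97.5], axis=0)
print(f"{len(posterior)} draws kept; 95% credible intervals:")
print(f"  initial loss: {lo[0]:.2f}..{hi[0]:.2f}")
print(f"  reduction factor: {lo[1]:.2f}..{hi[1]:.2f}")
```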

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

Downloads: 1740
2843 Thermal Method for Testing Small Chemisorbents Samples on the Base of Potassium Superoxide

Authors: Pavel V. Balabanov, Daria A. Liubimova, Aleksandr P. Savenkov

Abstract:

The increase in technogenic and natural accidents accompanied by air pollution, for example by combustion products, leads to the necessity of respiratory protection. This work is devoted to the development of a calorimetric method and a device which allow quick investigation of the kinetics of carbon dioxide sorption by chemisorbents based on potassium superoxide, in order to assess the protective properties of closed-circuit respiratory protective apparatus. The features of the traditional approach for determining the sorption properties in a thin layer of chemisorbent are described, as well as methods and devices which can be used for studying the sorption kinetics. In contrast to the traditional approach, the authors developed an approach based on measuring the power of internal heat sources in the chemisorbent layer; these heat sources arise from the exothermic reaction of carbon dioxide sorption. This approach eliminates the need for chemical analysis of samples and can significantly reduce the time and material expenses of chemisorbent testing. The error in determining the volume fraction of adsorbed carbon dioxide by the developed method does not exceed 12%. Taking into account the efficiency of the method, we consider it a good alternative to traditional methods of chemical analysis for assessing the quality of protective sorbents.
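
As a loose illustration of the bookkeeping behind such a calorimetric approach (generic, with an invented power trace and a placeholder reaction enthalpy; not the authors' device or calibration), integrating the measured power of the internal heat sources gives the heat released, from which the amount of sorbed CO2 follows:

```python
import numpy as np

# Invented power trace (W) of internal heat sources in the chemisorbent
# layer, sampled once per second during CO2 sorption.
t = np.arange(0, 600.0, 1.0)             # s
power = 2.5 * np.exp(-t / 150.0)         # W, hypothetical decay

Q = np.trapz(power, t)                   # total heat released, J

# Placeholder molar heat of the exothermic CO2 sorption reaction on a
# potassium superoxide sorbent (assumed value, not a literature figure).
dH = 100e3                               # J per mol CO2

n_co2 = Q / dH                           # mol CO2 sorbed
v_co2 = n_co2 * 22.4                     # L at STP (ideal gas)
print(f"Q = {Q:.0f} J -> {1e3 * n_co2:.2f} mmol CO2, {v_co2:.3f} L at STP")
```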

Keywords: Carbon dioxide chemisorption, exothermic reaction, internal heat sources, respiratory protective apparatus.

Downloads: 1696
2842 Prediction of the Epileptic Events 'Epileptic Seizures' by Neural Networks and Expert Systems

Authors: Kifah Tout, Nisrine Sinno, Mohamad Mikati

Abstract:

Many studies have focused on the nonlinear analysis of electroencephalography (EEG), mainly for the characterization of epileptic brain states. It is assumed that at least two states of the epileptic brain are possible: the interictal state, characterized by normal, apparently random, steady-state ongoing EEG activity; and the ictal state, characterized by the paroxysmal occurrence of synchronous oscillations and generally called a seizure in neurology. The spatial and temporal dynamics of the epileptogenic process are still not completely clear, and the anticipation of the seizure remains one of the most challenging aspects of epileptology. Despite all efforts, we still do not know how, when and why seizures occur. However, current studies provide strong evidence that the interictal-ictal state transition is not an abrupt phenomenon; findings also indicate that it is possible to detect a preseizure phase. Our approach is to use the neural network tool to detect interictal states and to predict from those states the upcoming seizure (ictal state). Analysis of the EEG signal based on neural networks is used for the classification of EEG as either seizure or non-seizure; by applying prediction methods, it will be possible to predict the upcoming seizure from non-seizure EEG. We will study patients admitted to the epilepsy monitoring unit for the purpose of recording their seizures; preictal, ictal, and postictal EEG recordings are available for such patients for analysis. The system will be trained by taking one body of samples, then validated using another; distinct from the first two, a third body of samples is taken to test the network for the achievement of optimum prediction. Several methods will be tried: backpropagation ANN and RBF networks.
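
A minimal sketch of the seizure/non-seizure classification step with a backpropagation-trained network, using scikit-learn on invented EEG features (the feature set, network size, and train/validation scheme here are assumptions for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented feature matrix: rows are EEG epochs, columns are features
# (e.g., band powers); label 1 = seizure/preictal, 0 = interictal.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 8)),
               rng.normal(0.8, 1.2, (200, 8))])
y = np.repeat([0, 1], 200)

# Train/validation split, echoing the train-validate-test scheme in the
# abstract (the third, test body of samples is omitted for brevity).
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# A multilayer perceptron trained by backpropagation.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,),
                                  max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print(f"validation accuracy: {clf.score(X_val, y_val):.2f}")
```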

Keywords: Artificial neural network (ANN), automatic prediction, epileptic seizures analysis, genetic algorithm.

Downloads: 1540
2841 Improvement of Gas Turbine Performance Test in Combined Cycle

Authors: M. Khosravy-el-Hossani, Q. Dorosti

Abstract:

One of the important applications of gas turbines is their utilization with heat recovery steam generators in combined-cycle technology. Exhaust flow and energy are two key parameters for determining heat recovery steam generator performance, and they are mainly determined by the performance data of the main gas turbine components. For this reason, a method for determining the exhaust energy was developed in the new edition of ASME PTC 22. This investigation shows that the standard's method has considerable error; therefore, in this paper a new method is presented for modifying the performance calculation. The modified method is based on exhaust gas constituent analysis and combustion calculations. The case study presented here uses design data for two kinds of General Electric gas turbines to validate the methodologies. The results show that the modified method is more precise than the ASME PTC 22 method: the deviation of the exhaust flow calculation from design data is 1.5-2% with the ASME PTC 22 method, whereas the deviation of the modified method is 0.3-0.5%. Given the precision of analyzer instruments, the method can be a suitable alternative for the gas turbine standard performance test. Additionally, two procedures are proposed within the modified method, based on known and unknown fuel composition. The results show that the difference between the two procedures is below 0.02%. Given the reasonable results of the second procedure (unknown fuel composition), the method can be applied to gas turbine performance evaluation while reducing measuring cost and data gathering.

Keywords: Gas turbine, Performance test code, Combined cycle.

Downloads: 2990
2840 Route Training in Mobile Robotics through System Identification

Authors: Roberto Iglesias, Theocharis Kyriacou, Ulrich Nehmzow, Steve Billings

Abstract:

Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks, that is very fast, easy to implement, and has the distinct advantage that it generates transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as this emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear, Auto-Regressive, Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can be subsequently analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
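
A hedged sketch of the underlying idea of fitting an explicit polynomial input-output law by least squares follows; this is a plain polynomial NARX fit on invented data. A full NARMAX treatment would add noise-model terms and structure selection (e.g., orthogonal least squares), which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: sensor input u (e.g., a range reading) driving motor
# output y (e.g., steering) through an unknown nonlinear coupling.
n = 500
u = rng.uniform(-1, 1, n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = 0.5*y[k-1] - 0.2*y[k-2] + 0.8*u[k-1] + 0.3*u[k-1]**2

def regressors(y, u, k):
    """Degree-2 polynomial regressors built from lagged terms."""
    r = np.array([y[k-1], y[k-2], u[k-1]])
    quad = np.outer(r, r)[np.triu_indices(3)]  # squares + cross terms
    return np.concatenate(([1.0], r, quad))

K = np.arange(2, n)
Phi = np.array([regressors(y, u, k) for k in K])
theta, *_ = np.linalg.lstsq(Phi, y[K], rcond=None)

# The fitted coefficients form an explicit, analysable polynomial law.
pred = Phi @ theta
print("one-step RMS error:", float(np.sqrt(np.mean((pred - y[K])**2))))
```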

Keywords: Mobile robotics, system identification, non-linear modelling, NARMAX.

Downloads: 1722
2839 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology

Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões

Abstract:

This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in the field and in industrial environments. Research should be committed to selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although leaves frequently occlude fruits regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show a potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor methods for on-tree fruit detection, the major challenge is to overcome the occlusion of fruits by leaves and branches; hence, non-traditional sensors capable of providing some type of differentiation should be investigated.

Keywords: Fruit thinning, horticultural field, portable devices, sensor technologies.

Downloads: 983
2838 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques

Authors: S. Visetpotjanakit, C. Khrautongkieo

Abstract:

Radiation monitoring in the environment and in foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP), the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and the environment in any radiological incident. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods, i.e. direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria, i.e. accuracy, precision and trueness, and obtained 'Accepted' status. These results confirm the quality of data from the OAP environmental radiation laboratory for monitoring radiation in the environment.

Keywords: International Atomic Energy Agency, proficiency test, radiation monitoring, seawater.

Downloads: 824
2837 Unsteady Transonic Aerodynamic Analysis for Oscillatory Airfoils using Time Spectral Method

Authors: Mohamad Reza. Mohaghegh, Majid. Malek Jafarian

Abstract:

This research proposes an algorithm for the simulation of time-periodic unsteady problems via the solution of the unsteady Euler and Navier-Stokes equations. This algorithm, called the Time Spectral method, uses a Fourier representation in time and hence solves for the periodic state directly, without resolving transients (which consume most of the resources in a time-accurate scheme). The mathematical tools used here are discrete Fourier transforms. By enforcing periodicity and using a Fourier representation in time, the method has shown tremendous potential for reducing the computational cost compared to conventional time-accurate methods, leading to spectral accuracy. The accuracy and efficiency of this technique are verified by Euler and Navier-Stokes calculations for pitching airfoils. Because of the turbulent nature of the flow, the Baldwin-Lomax turbulence model has been used in the viscous flow analysis. The results of the Time Spectral method are compared with experimental data; the results verify that only a small number of time intervals per pitching cycle is required to capture the flow physics.
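
At the heart of such methods is the periodic spectral time-derivative operator that couples all sampled time instances. A minimal numerical sketch follows (standard periodic spectral differentiation for an odd number of samples; the coupling into the Euler/Navier-Stokes residual is beyond this sketch):

```python
import numpy as np

def time_spectral_D(n, T):
    """Spectral differentiation matrix for n (odd) equally spaced
    samples of a T-periodic function; D @ f approximates df/dt with
    spectral accuracy."""
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (np.pi / T) * (-1.0) ** (i - j) \
                          / np.sin(np.pi * (i - j) / n)
    return D

# Check on a toy periodic signal: d/dt sin(t) = cos(t).
n, T = 9, 2 * np.pi          # very few samples per cycle suffice
t = np.arange(n) * T / n
err = np.max(np.abs(time_spectral_D(n, T) @ np.sin(t) - np.cos(t)))
print(f"max derivative error with {n} samples: {err:.1e}")
```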

Keywords: Time Spectral Method, time-periodic unsteady flow, discrete Fourier transform, pitching airfoil, turbulent flow.

Downloads: 1770
2836 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test

Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman

Abstract:

At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study considering six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution used with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most cases; however, a difference of up to 38% was noted in flood quantiles for an annual exceedance probability (AEP) of 1 in 100 for one catchment. This finding needs to be confirmed with a greater number of stations across other Australian states.
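
For context, the original (single) Grubbs-Beck low-outlier screen used in flood frequency practice can be sketched as below; the K_N expression is the common 10%-level approximation from Bulletin 17B, the flood series is invented, and the multiple Grubbs-Beck test generalizes this screen to several low flows:

```python
import numpy as np

def grubbs_beck_low_outliers(flows):
    """Single Grubbs-Beck low-outlier screen (Bulletin 17B form):
    flows below 10**(mean - K_N * std) of the log10 series are
    flagged as potentially influential low flows."""
    flows = np.asarray(flows, dtype=float)
    x = np.log10(flows)
    n = len(x)
    k_n = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    threshold = 10 ** (x.mean() - k_n * x.std(ddof=1))
    return threshold, flows[flows < threshold]

# Invented annual maximum flood series (m^3/s) with two very low values.
rng = np.random.default_rng(7)
ams = rng.lognormal(mean=5.0, sigma=0.6, size=40)
ams[:2] = [3.0, 5.0]
thr, low = grubbs_beck_low_outliers(ams)
print(f"low-outlier threshold: {thr:.1f} m^3/s; flagged flows: {low}")
```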

Keywords: Floods, FLIKE, probability distributions, flood frequency, outlier.

Downloads: 3311
2835 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images

Authors: SP. Chokkalingam, K. Komathy

Abstract:

Advances in the field of image processing envision a new era of evaluation techniques and procedures in many different fields, one of which is the biomedical field, for the prognosis as well as diagnosis of diseases. This plethora of methods, though it provides a wide range of options to select from, also causes confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Amongst other techniques existing in the field, our proposed system tends to be more effective as it depends on new methodologies that have been proved to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and consistent rate of evaluation that helps to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Using preprocessing, noise is removed from the images; using segmentation, the region of interest is found; and histogram smoothing is applied to a specific portion of the images. Gray level co-occurrence matrix (GLCM) features such as mean, median, energy, and correlation, as well as bone mineral density (BMD), are then extracted and stored in a database. This dataset, labeled with inflamed and non-inflamed values, is used to train a neural network, with whose help all new images are checked for their status; rough sets are implemented for further reduction.
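
A brief sketch of the GLCM feature-extraction step, using scikit-image on an invented region of interest (the offsets, gray-level count, and feature list are assumptions; BMD is a separate clinical measurement and is not computed here):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Invented 8-bit grayscale patch standing in for a segmented bone ROI.
rng = np.random.default_rng(5)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# GLCM at distance 1 over four directions, 256 gray levels.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("energy", "correlation", "contrast",
                         "homogeneity")}
features["mean"] = float(roi.mean())         # first-order statistics
features["median"] = float(np.median(roi))   # kept alongside GLCM
print(features)   # this vector would be stored and used for training
```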

Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.

Downloads: 2479
2834 Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources

Authors: Yoshikazu Onuki, Itsuo Kumazawa

Abstract:

The fundamental motivation of this paper is how gaze estimation can be utilized effectively in games. In games, precise point-of-gaze estimation is not always essential for aiming at targets; the ability to move a cursor to an aiming target accurately is equally significant. Incidentally, from a game production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence; panning a background image with head movement while moving a cursor according to gaze movement is a representative example. On the other hand, the widely used technique of POG estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the center of the pupil requires relatively complicated image processing, and therefore computation delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed. Furthermore, a method to control a cursor using gaze movement as well as head movement is proposed. The proposed methods are evaluated using game-like applications; as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
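
A simplified geometric reading of the two-glint idea is sketched below (an illustrative similarity-transform model assumed for this example, not the authors' exact mapping method): the glint midpoint tracks translation, the inter-glint angle tracks roll, and the inter-glint distance hints at depth change.

```python
import numpy as np

def head_move_from_glints(g1, g2, g1_ref, g2_ref):
    """Estimate 2D head translation, roll and scale from the corneal
    reflections (glints) of two fixed infrared light sources, using
    only the glint centers; no pupil-center image processing needed."""
    g1, g2, g1_ref, g2_ref = map(np.asarray, (g1, g2, g1_ref, g2_ref))
    translation = (g1 + g2) / 2 - (g1_ref + g2_ref) / 2
    v, v_ref = g2 - g1, g2_ref - g1_ref
    roll = np.degrees(np.arctan2(v[1], v[0])
                      - np.arctan2(v_ref[1], v_ref[0]))
    scale = np.linalg.norm(v) / np.linalg.norm(v_ref)  # ~ depth change
    return translation, roll, scale

# Reference glint pixel positions, then positions after a head move.
t, roll, s = head_move_from_glints((322, 241), (358, 239),
                                   (310, 250), (346, 250))
print(f"translation {t}, roll {roll:.1f} deg, scale {s:.3f}")
```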

Keywords: Point-of-gaze, gaze estimation, head movement, corneal reflections, two infrared light sources, game.

Downloads: 1071
2833 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions

Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag

Abstract:

Sustainable economic growth is nowadays driving firms toward the adoption of many green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions is a matter of concern that requires very serious decisions, involving complexity owing to the presence of various associated factors. To resolve this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for the adequate evaluation of sustainable supply chain management solutions. In the present paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Due to the varied importance of the selected criteria, FAHP is used to identify the evaluation criteria and assign the importance weight of each criterion, while the TOPSIS and PROMETHEE methods employ these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis based on the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions related to the selection of GSCM solutions.
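
Of the three methods, TOPSIS is the most compact to sketch. The following Python example ranks hypothetical GSCM alternatives with invented scores and stand-in weights (in the proposed model these weights would come from FAHP):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Plain TOPSIS: vector-normalize the decision matrix, apply the
    criterion weights, and rank alternatives by relative closeness to
    the ideal solution. benefit[j] is True to maximize criterion j."""
    m = np.asarray(matrix, dtype=float)
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)        # closeness in [0, 1]

# Three hypothetical GSCM solutions scored on four criteria
# (cost is minimized; the other three criteria are maximized).
scores  = [[200, 7, 8, 6],
           [250, 9, 6, 8],
           [180, 6, 7, 7]]
weights = [0.30, 0.25, 0.25, 0.20]  # stand-ins for FAHP-derived weights
c = topsis(scores, weights, benefit=[False, True, True, True])
print("closeness:", c.round(3),
      "ranking (best first):", np.argsort(c)[::-1])
```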

Keywords: GSCM solutions, multi-criteria analysis, FAHP, TOPSIS, PROMETHEE, decision support system.

Downloads: 938
2832 Military Families’ Attachment to the Royal Guards Community of Dusit District, Bangkok Metropolitan

Authors: Kaniknun Photchong, Phusit Phukamchanoad

Abstract:

The objective of this research is to study the people's level of participation in activities of the community, their satisfaction towards the community, the attachment they have to the community, factors that influence the attachment, as well as the characteristics of the relationships of military families of the Royal Guards community of Dusit District. The method used was non-probability sampling, namely quota sampling according to people's age, with a target group aged 18 years or older.

One questionnaire was administered per family, and questionnaires were completed by 287 people. Snowball sampling was also used, interviewing people of the community starting from the Royal Guards Community's leader and continuing with 20 of the community's well-respected persons. The data were analyzed using descriptive statistics, such as the arithmetic mean and standard deviation, as well as inferential statistics, such as the Independent-Samples t-test, One-Way ANOVA (F-test) and Chi-Square, along with descriptive analysis according to the structure of the interview content. The results of the research show that the participation of the population of the Royal Guards Community in various activities is at a medium level, with the highest average participation during Mother's Day and Father's Day activities. The people's general level of satisfaction towards the premises of the Royal Guards Community is at the highest level.

The people were most satisfied with the transportation within the community and with contacting people outside the premises; access to the community is convenient and there are various entrances. The attachment of the people to the Royal Guards Community, in general and in each category, is at a high level, with the feeling that the community is their home rating the highest average. Factors that influence the attachment of the people to the Royal Guards Community are age, status, profession, income, length of stay in the community, membership of social groups, having neighbors they feel close to and familiar with, as well as the benefits they receive from the community. In addition, it was found that participation in activities has a highly positive relationship with the attachment of the people to the Royal Guards Community, and satisfaction with the community has a very highly positive relationship with that attachment.

A characteristic of the attachment of military families is that they live in big houses that everyone, from the head of the family to all its members, has to protect and care for; therefore, they all love the community they live in. Participation in activities within the community and a high level of satisfaction towards the premises of the community enable the people to become more attached to it. The people feel that everyone in the community is a close neighbor, as if they were one big family.

Keywords: Activities, Attachment, Community, Royal Guards, Satisfaction.

Downloads: 1667
2831 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research only focuses on a specific dataset; there is a lack of generic comparison between classifiers, which might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, based on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better with a higher number of discriminators.
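
A minimal reproduction of this kind of benchmark on one mock dataset, using scikit-learn (the sample size, the 1000-feature/20-biomarker proportion, and the effect size are invented; the study's actual simulation grid is richer):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Mock high-dimensional data: 1000 features, 20 true "biomarkers".
X, y = make_classification(n_samples=200, n_features=1000,
                           n_informative=20, n_redundant=0,
                           class_sep=1.0, random_state=0)

classifiers = {
    "SVM": SVC(kernel="linear"),
    "LogisticRegression": LogisticRegression(max_iter=5000),
    "k-NN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>18}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```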

Keywords: Classification, High dimensional data, Machine learning.

Downloads: 2384
2830 The Reliability of Wireless Sensor Network

Authors: B. Juhasova, I. Halenar, M. Juhas

Abstract:

Wireless communication is one of the most widely used methods of data transfer today. The benefits of this communication method are partial independence from infrastructure and the possibility of mobility; in some special applications it is the only way to connect. This paper presents some problems in the implementation of a sensor network connection for measuring environmental parameters in the area of manufacturing plants.

Keywords: Network, communication, reliability, sensors.

Downloads: 1535
2829 Closed Form Delay Model for On-Chip VLSI RLCG Interconnects for Ramp Input for Different Damping Conditions

Authors: Susmita Sahoo, Madhumanti Datta, Rajib Kar

Abstract:

Fast delay estimation methods, as opposed to simulation techniques, are needed for incremental performance-driven layout synthesis. On-chip inductive effects are becoming predominant in deep submicron interconnects due to increasing clock speed and circuit complexity. Inductance causes noise in signal waveforms, which can adversely affect the performance of the circuit and signal integrity. Several approaches have been put forward which consider inductance in on-chip interconnect modelling. But at even higher frequencies, of the order of a few GHz, the shunt dielectric lossy component has become comparable to the other electrical parameters for high-speed VLSI design. In order to cope with this effect, the on-chip interconnect has to be modelled as a distributed RLCG line. Elmore delay based methods, although efficient, cannot accurately estimate the delay for RLCG interconnect lines. In this paper, an accurate analytical delay model is derived, based on the first and second moments of RLCG interconnection lines. The proposed model considers the effects of both the inductance and conductance matrices. We have performed the simulation in a 0.18 μm technology node, and an error as low as 5% has been achieved with the proposed model when compared to SPICE. The importance of the conductance matrices in interconnect modelling is also discussed, and it is shown that if G is neglected in interconnect line modelling, a delay error as high as 6% results when compared to SPICE.
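
For flavor, a well-known closed-form delay estimate built from the first two impulse-response moments is the D2M metric shown below; this is a generic two-moment metric for comparison, not the RLCG model derived in the paper (which also accounts for L and G):

```python
import numpy as np

def d2m_delay(m1, m2):
    """D2M two-moment delay metric: ln(2) * m1^2 / sqrt(m2), with m1
    and m2 the magnitudes of the first two transfer-function moments."""
    return np.log(2) * m1**2 / np.sqrt(m2)

# Single RC lump, R = 1 kOhm, C = 1 pF: here m1 = RC and m2 = (RC)^2,
# so D2M coincides with the Elmore delay ln(2)*RC for this one-pole case.
rc = 1e3 * 1e-12
print(f"Elmore delay: {np.log(2) * rc * 1e12:.2f} ps")
print(f"D2M delay:    {d2m_delay(rc, rc**2) * 1e12:.2f} ps")
```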

Keywords: Delay modelling, on-chip interconnect, RLCG interconnect, ramp input, damping, VLSI.

Downloads: 2048
2828 Turbine Compressor Vibration Analysis and Rotor Movement Evaluation by Shaft Center Line Method (The Case History Related to Main Turbine Compressor of an Olefin Plant in Iran Oil Industries)

Authors: Omid A. Zargar

Abstract:

Vibration monitoring of the most critical equipment, like main turbines and compressors, always plays an important role in preventive maintenance and management in big industrial plants. There are a number of traditional methods, like monitoring the overall vibration data from the Bently Nevada panel and monitoring the time waveform (TWF) or the fast Fourier transform (FFT). Besides these, the shaft centerline monitoring method has developed considerably in recent years. There are a number of arguments both in favor of and against this method among people who work in preventive maintenance and condition monitoring systems (vibration analysts). In this paper, the basic principles of turbine compressor vibration analysis and rotor movement evaluation by the shaft centerline method are discussed in detail through a case history. This case history is related to the main turbine compressor of an olefin plant in the Iranian oil industry. In addition, some common mistakes that a vibration analyst may make during the process are discussed in detail; these mistakes may be one of the reasons why this method sometimes seems ineffective. Furthermore, recent patents and innovations in shaft position and movement evaluation are discussed in this paper.
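
A minimal sketch of how a shaft centerline position can be derived from averaged proximity-probe gap voltages (the 200 mV/mil scale factor, the probe orientation, and the voltages are assumptions; real installations typically mount probes at +/-45 degrees and follow the probe vendor's sign conventions):

```python
import numpy as np

SCALE_V_PER_MIL = 0.2   # assumed eddy-current probe scale factor

def centerline_shift(vx, vy, vx_ref, vy_ref):
    """Shaft centerline shift (mils) along the X/Y probe axes, from
    time-averaged gap voltages relative to a reference condition
    such as slow-roll; probes idealized at 0 and 90 degrees."""
    dx = (vx - vx_ref) / SCALE_V_PER_MIL
    dy = (vy - vy_ref) / SCALE_V_PER_MIL
    return dx, dy

# Invented averaged gap voltages at slow-roll (ref) and at full speed.
dx, dy = centerline_shift(vx=-9.10, vy=-9.45,
                          vx_ref=-9.80, vy_ref=-10.15)
attitude = np.degrees(np.arctan2(dx, dy))   # angle from the vertical
print(f"shift: {dx:.1f} mil X, {dy:.1f} mil Y; "
      f"attitude angle ~ {attitude:.0f} deg")
```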

Keywords: Shaft centerline position, attitude angle, journal bearing, sleeve bearing, tilting pad, steam turbine, main compressor, multistage compressor, condition monitoring, non-contact probe.

Downloads: 7127
2827 Modern Detection and Description Methods for Natural Plants Recognition

Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert

Abstract:

Green planet is one of the Earth's names; the Earth is known as a terrestrial planet and, in another scientific interpretation, as the fifth largest planet of the solar system. Plants do not have a constant and steady distribution around the world, and even the variation of plant species is not the same within one specific region. The presence of plants is not limited to one field like botany; they appear in different fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence becomes even more manifest since no other living species could exist on earth without plants, as they also form the basic food staples. Regulation of the water cycle and oxygen production are other roles of plants, and these roles affect environment and climate. Plants are the main components of agricultural activities, from which many countries benefit; therefore, plants have an impact on the political and economic situation and future of countries. Due to the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that contributes to other research and to future studies. Moreover, plants can survive in different places and regions by means of adaptations; adaptations are their special means of coping with hard life situations. Weather conditions are one of the parameters that affect plant life and existence in an area, and recognition of plants under different weather conditions opens a new window of research in the field. Only natural images are usable for considering weather conditions as new factors; thus, the result will be a generalized and useful system. In order to have a general system, the distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment during the day. Adding these factors creates a huge challenge for inventing an accurate and secure system. Development of an efficient plant recognition system is therefore essential and effective. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interface and interaction. Due to the nature of the images used, a characteristic investigation of plants is done, and leaves are the first characteristic selected as trusty parts. Four different plant species are specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed, and, in addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
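
A compact sketch of the detector/descriptor combinations named above, using OpenCV (the image here is a random stand-in for a natural leaf photograph; HARRIS-SIFT would swap in a Harris-based corner detector the same way):

```python
import cv2
import numpy as np

# Stand-in image; in practice: cv2.imread(path, cv2.IMREAD_GRAYSCALE).
img = np.random.default_rng(0).integers(
    0, 256, (240, 320)).astype(np.uint8)

# Plain SIFT: detector and 128-D descriptor in one step.
sift = cv2.SIFT_create()
kp_sift, desc_sift = sift.detectAndCompute(img, None)

# FAST-SIFT combination: FAST supplies keypoints, SIFT describes them.
fast = cv2.FastFeatureDetector_create(threshold=25)
kp_fast = fast.detect(img, None)
kp_fast, desc_fast = sift.compute(img, kp_fast)

print(f"SIFT keypoints: {len(kp_sift)}; FAST-SIFT keypoints: {len(kp_fast)}")
# The descriptors would then be matched or fed to a classifier to
# decide among the four plant species.
```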

Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT.

Downloads: 729
2826 Upgraded Rough Clustering and Outlier Detection Method on Yeast Dataset by Entropy Rough K-Means Method

Authors: P. Ashok, G. M. Kadhar Nawaz

Abstract:

Rough set theory is used to handle uncertainty and incomplete information by applying two approximation sets, the lower approximation and the upper approximation. In this paper, rough clustering algorithms are improved by adopting similarity-based, dissimilarity-similarity-based and entropy-based initial centroid selection methods in three clustering algorithms, namely Entropy based Rough K-Means (ERKM), Similarity based Rough K-Means (SRKM) and Dissimilarity-Similarity based Rough K-Means (DSRKM), which were developed and executed on the yeast dataset. The rough clustering algorithms are validated by cluster validity indexes, namely the Rand and Adjusted Rand indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining, outliers being very much different from the rest of the objects in the clusters. The Entropy based Rough Outlier Factor (EROF) method proved suitable for detecting outliers effectively in the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 can detect outliers in the boundary region, and the RKM algorithm delivers better results when the value of epsilon (ε) is chosen within this range. Experimental results show that the EROF method performed very well on the clustering algorithms and is suitable for detecting outliers effectively in all datasets. Further, experimental readings show that the ERKM clustering method outperformed the other methods.
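
A hedged sketch of the rough k-means machinery underlying these variants (a Lingras-West-style step with a crude boundary rule; the eps and weight values are illustrative assumptions, and the entropy/similarity-based centroid initializations of ERKM/SRKM/DSRKM are not reproduced here):

```python
import numpy as np

def rough_kmeans_step(X, centroids, eps=1.1, w_low=0.7, w_up=0.3):
    """One rough k-means iteration: a point whose second-nearest
    centroid lies within a factor eps of its nearest one falls in the
    boundary region (upper approximations); otherwise it joins the
    lower approximation of the nearest cluster. Centroids are then a
    weighted mix of lower-approximation and boundary means."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    d_sorted = np.sort(d, axis=1)
    boundary = d_sorted[:, 1] / (d_sorted[:, 0] + 1e-12) <= eps
    new_c = centroids.copy()
    for i in range(len(centroids)):
        lower = X[(nearest == i) & ~boundary]
        bnd = X[(nearest == i) & boundary]   # crude boundary set
        if len(lower) and len(bnd):
            new_c[i] = w_low * lower.mean(axis=0) + w_up * bnd.mean(axis=0)
        elif len(lower):
            new_c[i] = lower.mean(axis=0)
    return new_c, boundary

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
c = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(10):
    c, boundary = rough_kmeans_step(X, c)
print("centroids:\n", c.round(2), "\nboundary points:", int(boundary.sum()))
```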

Keywords: Clustering, Entropy, Outlier, Rough K-Means, validity index.

Downloads: 1413
2825 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods

Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar

Abstract:

Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% for an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration and inoculum size. The optimized medium increased removal efficiency to 97% for an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused for six successive cycles over the time interval studied; metal removal efficiency was also not affected by flow rate changes. Finally, the results of this research point to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, and show that bioremediation can be done in batch cultures as well as in semicontinuous cultures using column technology.
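
As a sketch of the design-of-experiments workflow (the coded Box-Behnken layout is standard; the responses and fitted surface below are invented, not the paper's measurements):

```python
import numpy as np
from itertools import combinations

def box_behnken(k=3, centers=3):
    """Coded Box-Behnken design: (+/-1, +/-1) on each factor pair with
    the remaining factors at 0, plus replicated center runs."""
    rows = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                r = [0.0] * k
                r[i], r[j] = a, b
                rows.append(r)
    rows += [[0.0] * k] * centers
    return np.array(rows)

# 15 coded runs for yeast extract, casamino acid, inoculum size.
D = box_behnken()

# Invented removal-efficiency responses (%), one per run.
rng = np.random.default_rng(4)
y = 90 + 3*D[:, 0] + 2*D[:, 1] - 4*D[:, 0]**2 + rng.normal(0, 0.5, len(D))

# Fit the full quadratic response-surface model by least squares.
X = np.column_stack([np.ones(len(D)), D, D**2,
                     D[:, 0]*D[:, 1], D[:, 0]*D[:, 2], D[:, 1]*D[:, 2]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and linear effects:", beta[:4].round(2))
```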

Keywords: Bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman.

Downloads: 1018
2824 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. The Active Shape Model (ASM) is widely used for LV segmentation, but it suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes that occur in disease. In this work, a two-step framework is improved to achieve fast and efficient LV segmentation. First, a robust and efficient detector based on a Hough forest localizes cardiac feature points; these feature points are used to predict the initial fit of the LV shape model. Second, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. With the robust initialization, ASM is able to achieve more accurate segmentation. The performance of the proposed method is evaluated on a dataset of 810 cardiac ultrasound images, mostly of abnormal shapes, and the proposed method is compared with several combinations of ASM and existing initialization methods. Our experimental results demonstrate that the accuracy of feature point detection for initialization was 40% higher with the proposed method than with existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops and thus speeds up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes in cardiac diseases, such as left atrial enlargement.
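
The statistical shape model at the core of ASM can be sketched briefly: landmark sets are reduced to a mean shape plus principal modes, and any candidate contour is projected onto (and clipped within) that subspace. The contours and mode count below are invented for illustration:

```python
import numpy as np

# Invented training set: 50 LV contours, 20 (x, y) landmarks each,
# flattened to 40-vectors (real data would be Procrustes-aligned).
rng = np.random.default_rng(6)
angles = np.linspace(0, 2 * np.pi, 20)
base = np.column_stack([np.cos(angles), 1.5 * np.sin(angles)])
shapes = (base[None] + rng.normal(0, 0.05, (50, 20, 2))).reshape(50, -1)

# Point distribution model: mean shape plus principal variation modes.
mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
P = Vt[:5].T                                  # first 5 modes
lam = s[:5] ** 2 / (len(shapes) - 1)          # mode variances

# ASM fitting repeatedly projects a candidate contour x onto the model;
# clipping b keeps the fitted shape statistically plausible.
x = shapes[0] + rng.normal(0, 0.02, 40)       # noisy candidate contour
b = np.clip(P.T @ (x - mean), -3 * np.sqrt(lam), 3 * np.sqrt(lam))
x_fit = mean + P @ b                          # model-constrained shape
print("residual after projection:",
      round(float(np.linalg.norm(x_fit - x)), 4))
```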

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle.

Downloads: 1504
2823 Effect of Three Drying Methods on Antioxidant Efficiency and Vitamin C Content of Moringa oleifera Leaf Extract

Authors: Kenia Martínez, Geniel Talavera, Juan Alonso

Abstract:

Moringa oleifera is a plant containing many nutrients that are mostly concentrated within the leaves. Commonly, the separation process of these nutrients involves solid-liquid extraction followed by evaporation and drying to obtain a concentrated extract, which is rich in proteins, vitamins, carbohydrates, and other essential nutrients that can be used in the food industry. In this work, three drying methods were used, which involved very different temperature and pressure conditions, to evaluate the effect of each method on the vitamin C content and the antioxidant efficiency of the extracts. Solid-liquid extractions of Moringa leaf (LE) were carried out by employing an ethanol solution (35% v/v) at 50 °C for 2 hours. The resulting extracts were then dried i) in a convective oven (CO) at 100 °C and at an atmospheric pressure of 750 mbar for 8 hours, ii) in a vacuum evaporator (VE) at 50 °C and at 300 mbar for 2 hours, and iii) in a freeze-drier (FD) at -40 °C and at 0.050 mbar for 36 hours. The antioxidant capacity (EC50, mg solids/g DPPH) of the dry solids was calculated by the free radical inhibition method employing DPPH˙ at 517 nm, resulting in a value of 2902.5 ± 14.8 for LE, 3433.1 ± 85.2 for FD, 3980.1 ± 37.2 for VE, and 8123.5 ± 263.3 for CO. The calculated antioxidant efficiency (AE, g DPPH/(mg solids·min)) was 2.920 × 10-5 for LE, 2.884 × 10-5 for FD, 2.512 × 10-5 for VE, and 1.009 × 10-5 for CO. Further, the content of vitamin C (mg/L) determined by HPLC was 59.0 ± 0.3 for LE, 49.7 ± 0.6 for FD, 45.0 ± 0.4 for VE, and 23.6 ± 0.7 for CO. The results indicate that the convective drying preserves vitamin C and antioxidant efficiency to 40% and 34% of the initial value, respectively, while vacuum drying to 76% and 86%, and freeze-drying to 84% and 98%, respectively.
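
The retention percentages quoted in the closing sentence follow directly from the reported values; a small Python check (numbers taken from the abstract, with rounding explaining the one-point differences):

```python
# Vitamin C (mg/L) and antioxidant efficiency AE (g DPPH/(mg solids*min))
# for the leaf extract (LE) and the three dried products.
vit_c = {"LE": 59.0, "FD": 49.7, "VE": 45.0, "CO": 23.6}
ae = {"LE": 2.920e-5, "FD": 2.884e-5, "VE": 2.512e-5, "CO": 1.009e-5}

for method in ("FD", "VE", "CO"):
    c_ret = 100 * vit_c[method] / vit_c["LE"]
    ae_ret = 100 * ae[method] / ae["LE"]
    print(f"{method}: vitamin C retained {c_ret:.0f}%, "
          f"AE retained {ae_ret:.0f}%")
# -> FD ~84%/99%, VE ~76%/86%, CO ~40%/35%, matching the stated
#    84/98, 76/86 and 40/34 figures up to rounding.
```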

Keywords: Antioxidant efficiency, convective drying, freeze-drying, Moringa oleifera, vacuum drying, vitamin C content.

Downloads: 1798
2822 An Embedded System for Artificial Intelligence Applications

Authors: Ioannis P. Panagopoulos, Christos C. Pavlatos, George K. Papakonstantinou

Abstract:

Conventional approaches to the implementation of logic programming applications on embedded systems are solely of a software nature. As a consequence, a compiler is needed that transforms the initial declarative logic program into its equivalent procedural one, to be programmed into the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. On the other hand, hardware implementations which are only capable of supporting logic programs prevent their use in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, thus hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximate 1000% increase in performance) and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.

Keywords: Attribute Grammars, Logic Programming, RISC microprocessor.

Downloads: 5087
2821 Human Trafficking: The Kosovar Perspective of Fighting the Phenomena through Police and Civil Society Cooperation

Authors: Samedin Mehmeti

Abstract:

The rationale behind this study is to consider combating and preventing the phenomenon of trafficking in human beings from a multidisciplinary perspective that involves many layers of society. Trafficking in human beings is an abhorrent phenomenon that severely and negatively affects victims and their families in both human and material terms, sometimes causing irreversible damage. In countries with weak economic development and an extremely young and dynamic population, such as Kosovo, the longer-term effects of this phenomenon, without proper measures to prevent and control it, can cause tremendous damage to society. Given that a complete eradication of this phenomenon is almost impossible, efforts should be concentrated at least on prevention and control. Treating trafficking in human beings with traditional police tactics, methods and procedures cannot bring satisfactory results. There is no doubt that a multidisciplinary approach is an irreplaceable requirement; in other words, a combination of authentic and functional proactive and reactive methods, techniques and tactics is needed. Obviously, the police must exercise their role in preventing and combating trafficking in human beings, a role sanctioned by law; however, the police role and contribution cannot by any means be considered complete if all segments of society are not included in these efforts. Naturally, civil society should have an important share in these collaborative and interactive efforts, especially in preventive activities such as raising awareness of trafficking risks and damages, proactive engagement in drafting appropriate legislation and strategies, monitoring of law enforcement, and direct or indirect involvement in protective and supporting activities which benefit the victims of trafficking.

Keywords: Civil society, cooperation, police, trafficking in human beings.

Downloads: 1635