Search results for: Information Processing on the Web
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5171

2381 Using Data Clustering in Oral Medicine

Authors: Fahad Shahbaz Khan, Rao Muhammad Anwer, Olof Torgersson

Abstract:

The vast amount of information hidden in large databases has created tremendous interest in the field of data mining. This paper examines the possibility of using data clustering techniques in oral medicine to identify functional relationships between different attributes and to classify similar patient examinations. Commonly used data clustering algorithms are reviewed, and several interesting results are reported.
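
A minimal sketch of the clustering step, using scikit-learn's k-means in place of the CLUTO toolkit named in the keywords; the patient-examination feature matrix here is a random placeholder.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # 200 patient examinations, 6 attributes (placeholder)

X_scaled = StandardScaler().fit_transform(X)   # put attributes on comparable scales
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Examinations sharing a label are candidates for "similar patient examinations"
for k in range(4):
    print(f"cluster {k}: {np.sum(labels == k)} examinations")
```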

Keywords: Oral Medicine, Cluto, Data Clustering, Data Mining.

2380 A Propagator Method like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

Authors: Sambit Prasad Kar, P.Palanisamy

Abstract:

In this paper, a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is proposed. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak-search approach. The paper presents a variant of the Propagator Method (PM), in which a collaborative approach of SUMWE and the Propagator Method is applied to estimate the frequencies of multiple real-valued sine waves. A new data model is proposed in which the dimension of the signal subspace equals the number of frequencies present in the observation, whereas in the conventional MUSIC method for real-valued sinusoids the signal subspace dimension is twice the number of frequencies. The statistical properties of the proposed method are studied, and an explicit expression for the asymptotic (large-sample) mean-squared error (MSE), or variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. Simulations comparing the method with conventional MUSIC, ESPRIT, and the Propagator Method verify that it achieves sustainably high estimation accuracy and frequency resolution at lower SNR.
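
A minimal sketch of the conventional-MUSIC baseline the paper compares against (the propagator/SUMWE variant itself is not reproduced here); note the noise subspace of dimension m - 2p, matching the abstract's remark that conventional MUSIC uses a signal subspace of twice the number of real-sinusoid frequencies. Signal parameters are illustrative.

```python
import numpy as np

# Two real sinusoids in white Gaussian noise (illustrative parameters)
N, f_true = 200, (0.10, 0.23)              # frequencies in cycles/sample
t = np.arange(N)
rng = np.random.default_rng(1)
x = sum(np.cos(2 * np.pi * f * t) for f in f_true) + 0.3 * rng.standard_normal(N)

m = 20                                     # covariance matrix order
snaps = np.lib.stride_tricks.sliding_window_view(x, m)
R = snaps.T @ snaps / snaps.shape[0]

w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
En = V[:, :m - 2 * len(f_true)]            # noise subspace: dim 2p for real sines

grid = np.linspace(0.01, 0.49, 2000)
a = np.exp(-2j * np.pi * np.outer(np.arange(m), grid))    # steering vectors
P = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)    # MUSIC pseudospectrum

interior = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
peaks = interior[np.argsort(P[interior])[-2:]]            # simple peak search
print(sorted(grid[peaks].round(3)))                       # ~ [0.10, 0.23]
```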

Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.

2379 On Best Estimation for Parameter Weibull Distribution

Authors: Hadeel Salim Alkutubi

Abstract:

The objective of this study is to introduce estimators for the parameters and survival function of the Weibull distribution using three different methods: maximum likelihood estimation, standard Bayes estimation, and modified Bayes estimation. We then compare the three methods in a simulation study to find the best one based on MPE and MSE.
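
A minimal sketch of the maximum-likelihood arm of such a comparison (the Bayes and modified Bayes estimators of the paper are not reproduced); the shape/scale values, sample size, and replication count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

shape_true, scale_true, n, reps = 1.5, 2.0, 50, 500
rng = np.random.default_rng(2)

se_shape = []
for _ in range(reps):
    sample = weibull_min.rvs(shape_true, scale=scale_true, size=n, random_state=rng)
    c_hat, loc, s_hat = weibull_min.fit(sample, floc=0)   # MLE, location fixed at 0
    se_shape.append((c_hat - shape_true) ** 2)

# MSE of the shape estimate; repeat per estimator and compare, as in the paper
print("MLE shape MSE:", np.mean(se_shape))
```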

Keywords: Maximum Likelihood estimation, Bayes estimation, Jeffreys prior information, Simulation study

2378 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer

Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved

Abstract:

Background: Skin cancer has become a pressing topic in medical science, and the growing incidence of skin tumors is taking a drastic toll on health and well-being worldwide. Methods: The extracted image of the skin tumor cannot be used directly for diagnosis, since the stored image contains disturbances around the lesion center. The approach locates the foreground of the extracted skin image, and an image-partitioning (segmentation) model is presented to sort out the disturbance in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA), and finally classification is performed between the trained and test data to evaluate a large set of images, helping doctors make the right prediction. To improve on the existing system, we set our objectives with an analysis; the efficiency of the natural selection process and the enriched histogram are essential in that respect, and GA is applied with its accuracy to reduce the false-positive rate. Conclusions: The objective of this work is to improve effectiveness, and GA accomplishes its task well in bringing down the false-positive rate. The paper also discusses the combination of deep learning and medical image processing, which provides superior accuracy, and proportional types of processing create reusability without errors.
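
A minimal sketch of the histogram-enhancement and partitioning steps the abstract describes, using scikit-image; the GA-based feature extraction and classification are not reproduced, and the input file name is a hypothetical placeholder.

```python
from skimage import io, color, exposure, filters

img = color.rgb2gray(io.imread("skin_lesion.jpg"))   # hypothetical input image
img_eq = exposure.equalize_hist(img)                 # histogram augmentation

thresh = filters.threshold_otsu(img_eq)              # partition lesion from skin
mask = img_eq < thresh                               # darker region ~ nodule

print("lesion fraction of image:", mask.mean())
```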

Keywords: Computer-aided system, detection, image segmentation, morphology.

2377 The Influence of Fashion Bloggers on the Pre-Purchase Decision for Online Fashion Products among Generation Y Female Malaysian Consumers

Authors: Mohd Zaimmudin Mohd Zain, Patsy Perry, Lee Quinn

Abstract:

This study explores how fashion consumers are influenced by fashion bloggers in their pre-purchase decisions for online fashion products in a non-Western context. Malaysians rank among the world’s most avid online shoppers, with apparel the third most popular purchase category; however, extant research on fashion blogging focuses on the developed Western market context. Numerous international fashion retailers, from the luxury to the fast fashion segments, have entered the Malaysian market, yet Malaysian fashion consumers must balance religious and social norms for modesty with their dress style and adoption of fashion trends. Consumers increasingly mix and match Islamic and Western elements of dress to create new styles, enabling them to follow Western fashion trends whilst paying respect to social and religious norms. Social media have revolutionised the way that consumers can search for and find information about fashion products. For online fashion brands with no physical presence, social media provide a means of discovery for consumers. By allowing the creation and exchange of user-generated content (UGC) online, they provide a public forum that gives individual consumers their own voices, as well as access to product information that facilitates their purchase decisions. Social media empower consumers, and brands have important roles in facilitating conversations with and among consumers, helping consumers connect with them and with one another. Fashion blogs have become important fashion information sources. By sharing their personal style and inspiring their followers with what they wear on popular social media platforms such as Instagram, fashion bloggers have become fashion opinion leaders. By creating UGC to spread useful information to their followers, they influence the pre-purchase decision. Hence, successful Western fashion bloggers such as Chiara Ferragni may earn millions of US dollars every year, and some have created their own fashion ranges and beauty products, become judges in fashion reality shows, won awards, and collaborated with high street and luxury brands. As fashion blogging has become more established worldwide, increasing numbers of fashion bloggers have emerged from non-Western backgrounds to promote Islamic fashion styles, such as Hassanah El-Yacoubi and Dian Pelangi. This study adopts a qualitative approach, using netnographic content analysis of consumer comments on two famous Malaysian fashion bloggers’ Instagram accounts during January-March 2016 and qualitative interviews with 16 Malaysian Generation Y fashion consumers during September-October 2016. Netnography adapts ethnographic techniques to the study of online communities or computer-mediated communications. Template analysis of the data involved coding comments according to the theoretical framework developed from the literature review. Initial data analysis shows the strong influence of Malaysian fashion bloggers on their followers in terms of lifestyle and morals as well as fashion style. Followers were guided towards the mix-and-match trend of dress with Western and Islamic elements, for example, showing how vivid colours or accessories could be worked into an outfit whilst still respecting social and religious norms. The blogger’s Instagram account is a form of online community where followers can communicate and gain guidance and support from other followers, as well as from the blogger.

Keywords: Fashion bloggers, Malaysia, qualitative, social media.

2376 Mean Codeword Lengths and Their Correspondence with Entropy Measures

Authors: R.K.Tuli

Abstract:

The objective of the present communication is to develop new genuine exponentiated mean codeword lengths and to study in depth the problem of correspondence between well-known measures of entropy and mean codeword lengths. With the help of some standard measures of entropy, we illustrate such a correspondence. In the literature, we usually come across many inequalities which are frequently used in information theory; keeping this idea in mind, we develop such inequalities via a coding-theory approach.
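
For orientation, the classical correspondence that such generalized lengths build on: for a uniquely decipherable code with word lengths l_1,…,l_n over a code alphabet of size D, Kraft's inequality holds, every such code has mean length at least the entropy, and an optimal code also stays within one symbol of it.

```latex
% Kraft's inequality and the noiseless-coding bounds (optimal code on the right):
\sum_{i=1}^{n} D^{-l_i} \le 1 ,
\qquad
H_D(P) \;\le\; L = \sum_{i=1}^{n} p_i\, l_i \;<\; H_D(P) + 1 .
```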

Keywords: Codeword, Code alphabet, Uniquely decipherable code, Mean codeword length, Uncertainty, Noiseless channel

2375 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has improved in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight into the features that contribute most to failure. By analyzing and quantifying feature importance in predictive maintenance models, cost saving can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost saving by prioritizing sensor deployment, data collection, and data processing of more important features over less important ones.
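
A minimal sketch of the evaluation-and-ranking loop on a synthetic multi-class dataset; permutation importance stands in here for the paper's SHAP analysis, and all dataset parameters are placeholder assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for multi-class machine-failure data
X, y = make_classification(n_samples=600, n_features=10, n_informative=4,
                           n_classes=3, random_state=0)
clf = RandomForestClassifier(random_state=0)

print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf.fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("features ranked by predictive power:", ranking)
```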

Keywords: Automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation.

2374 Rheological and Computational Analysis of Crude Oil Transportation

Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh

Abstract:

Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the varying properties of crude in different areas. Thus, the design of a crude oil pipeline is a complex and time-consuming process when all the various parameters are considered. Three important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile, and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required to better understand the flow behavior and predict the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including shear rate range with temperature variation, degree of viscosity, elastic modulus, and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C, and the additive best suited for transportation of crude oil was determined. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity, and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to developing pipeline designs in the future.
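
A minimal sketch of one rheological building block behind such profiles: fitting a power-law (Ostwald-de Waele) model eta = K * gammadot**(n-1) to viscometer data by linear regression in log-log space. The data points are illustrative, not the paper's measurements.

```python
import numpy as np

gammadot = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # shear rate, 1/s
eta = np.array([4.2, 2.6, 2.1, 1.3, 1.05])           # apparent viscosity, Pa.s

# log(eta) = log(K) + (n - 1) * log(gammadot)  ->  straight line in log-log space
slope, intercept = np.polyfit(np.log(gammadot), np.log(eta), 1)
n, K = slope + 1.0, np.exp(intercept)

print(f"flow index n = {n:.2f}, consistency K = {K:.2f} Pa.s^n")  # n < 1: shear-thinning
```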

Keywords: Natural surfactant, crude oil, rheology, CFD, viscosity.

2373 Structural Damage Detection via Incomplete Modal Data Using Output Data Only

Authors: Ahmed Noor Al-Qayyim, Barlas Ozden Caglayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures, and many researchers have focused on obtaining efficient tools for detecting such damage at an early stage. In the past decades, a subject that has received considerable attention in the literature is damage detection determined from variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location for an incompletely measured structural system using output data only. The method indicates the damage based on free vibration test data using a 'Two Points Condensation (TPC)' technique, which creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data and are compared with the original (undamaged) stiffness matrices; large percentage changes in the matrix coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model structure after inducing a thickness change in one element, where two cases are considered. The method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data; furthermore, its efficiency shows that the technique can also be used for large structures.
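
A minimal sketch of the final comparison step of the TPC idea: large relative changes between the identified and baseline stiffness coefficients flag the damage location. The 2x2 condensed matrices are illustrative values, not the paper's measurements.

```python
import numpy as np

K_orig = np.array([[2.0e6, -1.0e6],
                   [-1.0e6, 2.0e6]])        # baseline (undamaged) stiffness
K_curr = np.array([[1.6e6, -0.9e6],
                   [-0.9e6, 1.95e6]])       # identified from measured response

change = 100.0 * np.abs(K_curr - K_orig) / np.abs(K_orig)   # percentage change
i, j = np.unravel_index(np.argmax(change), change.shape)

print(change.round(1))
print(f"largest change {change[i, j]:.1f}% at coefficient ({i}, {j})")  # damage flag
```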

Keywords: Damage detection, two points–condensation, structural health monitoring, signals processing, optimization.

2372 Influence of Sr(BO2)2 Doping on Superconducting Properties of (Bi,Pb)-2223 Phase

Authors: N. G. Margiani, I. G. Kvartskhava, G. A. Mumladze, Z. A. Adamia

Abstract:

Chemical doping with different elements and compounds at various amounts represents the most suitable approach to improving the superconducting properties of bismuth-based superconductors for technological applications. In this paper, the influence of partial substitution of Sr(BO2)2 for SrO on the phase formation kinetics and transport properties of the (Bi,Pb)-2223 HTS has been studied for the first time. Samples with nominal composition Bi1.7Pb0.3Sr2-xCa2Cu3Oy[Sr(BO2)2]x, x=0, 0.0375, 0.075, 0.15, 0.25, were prepared by standard solid-state processing. The appropriate mixtures were calcined at 845 °C for 40 h. The resulting materials were pressed into pellets and annealed at 837 °C for 30 h in air. Superconducting properties of the undoped (reference) and Sr(BO2)2-doped (Bi,Pb)-2223 compounds were investigated through X-ray diffraction (XRD), resistivity (ρ) and transport critical current density (Jc) measurements. The surface morphology changes in the prepared samples were examined by scanning electron microscope (SEM). XRD and Jc studies have shown that low-level Sr(BO2)2 doping (x=0.0375-0.075) at the Sr site promotes the formation of the high-Tc phase and enhances the current-carrying capacity of the (Bi,Pb)-2223 HTS. The doped sample with x=0.0375 has the best performance of all the prepared samples. The estimated volume fraction of the (Bi,Pb)-2223 phase increases from ~25% for the reference specimen to ~70% for x=0.0375. Moreover, a strong increase in the self-field Jc value was observed for this dopant amount (Jc=340 A/cm2), compared to the undoped sample (Jc=110 A/cm2). The pronounced enhancement of the superconducting properties of the (Bi,Pb)-2223 superconductor can be attributed to the acceleration of high-Tc phase formation as well as the improvement of inter-grain connectivity by small amounts of the Sr(BO2)2 dopant.

Keywords: Bismuth-based superconductor, critical current density, phase formation, Sr(BO2)2 doping.

2371 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images

Authors: SP. Chokkalingam, K. Komathy

Abstract:

Advances in the field of image processing envision a new era of evaluation techniques and application procedures in many different fields, one of which is the biomedical field, for the prognosis as well as diagnosis of diseases. This plethora of methods provides a wide range of options to select from, but it also causes confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Among existing techniques in the field, our proposed system tends to be more effective, as it depends on new methodologies that have proved to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and reliable rate of consistency that helps to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Preprocessing removes noise from the images; segmentation finds the region of interest, and histogram smoothing is applied to a specific portion of the images. Gray-level co-occurrence matrix (GLCM) features such as mean, median, energy, and correlation, along with bone mineral density (BMD), are then extracted and stored in the database. This dataset is trained with inflamed and non-inflamed values, and with the help of a neural network all new images are checked for their status; rough sets are implemented for further reduction.
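
A minimal sketch of the GLCM texture-feature step using scikit-image; the bone-scan region of interest here is a synthetic array, and BMD extraction is outside the scope of the sketch.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in ROI

glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {p: float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "correlation", "energy", "homogeneity")}
features["mean"], features["median"] = float(roi.mean()), float(np.median(roi))

print(features)   # feature vector to store in the database for classification
```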

Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.

2370 Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources

Authors: Yoshikazu Onuki, Itsuo Kumazawa

Abstract:

The fundamental motivation of this paper is how gaze estimation can be utilized effectively in an application to games. In games, precise point-of-gaze (POG) estimation is not always important for aiming at targets, but the ability to move a cursor to an aiming target accurately is significant. Incidentally, from a game-producing point of view, expressing head movement and gaze movement separately sometimes becomes advantageous for conveying a sense of presence; panning a background image with head movement while moving a cursor according to gaze movement is a representative example. On the other hand, the widely used technique of POG estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the center of the pupil requires relatively complicated image processing, and the resulting delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed, together with a method to control a cursor using gaze movement as well as head movement. The proposed methods are evaluated with game-like applications; as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
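
A minimal sketch of the core geometric idea as I read it: with two corneal reflections (glints) from separated IR sources, the glint midpoint tracks head translation and the glint separation tracks head distance, without any pupil-center computation. The pixel coordinates are illustrative, and the mapping to cursor motion is not reproduced.

```python
import numpy as np

def head_state(glint_left, glint_right):
    g1, g2 = np.asarray(glint_left, float), np.asarray(glint_right, float)
    midpoint = (g1 + g2) / 2.0              # shifts with head movement
    separation = np.linalg.norm(g2 - g1)    # shrinks/grows with head distance
    return midpoint, separation

m0, s0 = head_state((310, 242), (330, 241))   # reference frame
m1, s1 = head_state((322, 250), (342, 249))   # later frame

print("head shift (px):", m1 - m0, " distance ratio:", s0 / s1)
```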

Keywords: Point-of-gaze, gaze estimation, head movement, corneal reflections, two infrared light sources, game.

2369 Potential of Sunflower (Helianthus annuus L.) for Phytoremediation of Soils Contaminated with Heavy Metals

Authors: Violina R. Angelova, Mariana N. Perifanova-Nemska, Galina P. Uzunova, Krasimir I. Ivanov, Huu Q. Lee

Abstract:

A field study was conducted to evaluate the efficacy of sunflower (Helianthus annuus L.) for the phytoremediation of contaminated soils. The experiment was performed on an agricultural field contaminated by the Non-Ferrous-Metal Works near Plovdiv, Bulgaria. Field experiments with a randomized complete block design and five treatments (control, compost amendments added at 20 and 40 t/daa, and vermicompost amendments added at 20 and 40 t/daa) were carried out. The accumulation of heavy metals in the sunflower plant and the quality of the sunflower oil (heavy metals and fatty acid composition) were determined. The tested organic amendments significantly influenced the uptake of Pb, Zn and Cd by the sunflower plant. The incorporation of 40 t/daa of compost and 20 t/daa of vermicompost into the soil increased the ability of the sunflower to take up and accumulate Cd, Pb and Zn. Sunflower can be classed among the accumulators of Pb, Zn and Cd and can be successfully used for the phytoremediation of soils contaminated with heavy metals. The 40 t/daa compost treatment decreased the heavy metal content in sunflower oil to below the regulated limits. Oil content and fatty acid composition were affected by the compost and vermicompost treatments: adding compost and vermicompost increased the oil content in the seeds, and adding organic amendments increased the content of stearic, palmitoleic and oleic acids and reduced the content of palmitic and gadoleic acids in the sunflower oil. The possibility of further industrial processing of the seeds to oil, and the use of the obtained oil, will make sunflower an economically interesting crop for farmers applying phytoremediation technology.

Keywords: Heavy metals, organic amendments, phytoremediation, sunflower.

2368 Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector

Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy

Abstract:

In this paper, we are interested in the modeling, simulation, and measurement of laser spot position with a quadrant detector, enhancing the detection and tracking of a microcontroller-based semi-active laser weapon decoding system. The system receives the reflected pulse through the quadrant detector and processes the laser pulses through a processing circuit, with a microcontroller decoding the laser pulse reflected by the target. The decoding system enhances seeker accuracy, reduces the laser detection time by basing it on the number of received pulses, and uses a gate to limit the laser pulse width. The model is implemented using the Pulse Repetition Frequency (PRF) technique with two microcontroller units (MCUs): MCU1 generates laser pulses with different codes, and MCU2 decodes the laser code and locks the system onto the specific code. The codes were selected using two selector switches. The system is implemented and tested in Proteus ISIS software, and the full position determination circuit with the detector is realized. A general system for spot position determination was built, using the laser PRF for the incident radiation and a mechanical system for adjusting the setup at different angles. The test results show that the system can detect the laser code with only three received pulses, based on the narrow gate signal, and good agreement between the simulated and measured system performance is obtained.
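
A minimal sketch of decode-and-lock logic consistent with the abstract's claim that the code is detected with only three received pulses: two inter-pulse intervals already identify the PRF code, and a narrow gate around the expected period rejects other codes. The code periods and gate width are illustrative assumptions, not actual weapon codes.

```python
PRF_CODES = {"code_A": 0.050, "code_B": 0.057, "code_C": 0.064}  # period, s (hypothetical)
GATE = 0.001                                                     # gate half-width, s

def detect_code(pulse_times):
    """Return the PRF code whose period matches all inter-pulse intervals."""
    intervals = [t2 - t1 for t1, t2 in zip(pulse_times, pulse_times[1:])]
    for name, period in PRF_CODES.items():
        if intervals and all(abs(dt - period) <= GATE for dt in intervals):
            return name          # lock onto this code
    return None

print(detect_code([0.000, 0.0501, 0.1002]))   # -> 'code_A' after only three pulses
```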

Keywords: 4-quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency, ATmega 32 microcontrollers.

2367 The Effect of Deformation Activation Volume, Strain Rate Sensitivity and Processing Temperature of Grain Size Variants

Authors: P. B. Sob, A. A. Alugongo, T. B. Tengen

Abstract:

The activation volume of 6082-T6 aluminum is investigated at different temperatures for grain size variants. The deformation activation volume is computed on the basis of the relationship between the Boltzmann constant k, the testing temperature, the material strain rate sensitivity, and the material yield stress for the grain size variants. The strain rate sensitivity is computed as a function of yield stress and strain rate for the grain size variants. The effects of strain rate sensitivity and deformation activation volume of 6082-T6 aluminum at different temperatures for 3-D grains are discussed. It is shown that the strain rate sensitivity and activation volume are negative for the grain size variants during the deformation of nanostructured materials. It is also observed that the activation volume varies in different ways with the equivalent radius, semi-minor axis radius, semi-major axis radius, and major axis radius. The results show that the activation volume increases and decreases with the testing temperature, and that an increase in strain rate sensitivity leads to a decrease in activation volume, whereas an increase in activation volume leads to a decrease in strain rate sensitivity.
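
For reference, one standard form of the relation between the quantities the abstract names; this is an assumption about the authors' exact convention, which may differ (here k is the Boltzmann constant, T the absolute testing temperature, and sigma_y the yield stress).

```latex
% Strain rate sensitivity and apparent deformation activation volume:
m \;=\; \frac{\partial \ln \sigma_y}{\partial \ln \dot{\varepsilon}} ,
\qquad
V^{*} \;=\; \frac{\sqrt{3}\, k\, T}{m\, \sigma_y} .
```

The inverse dependence of V* on m in this form is consistent with the abstract's observation that an increase in strain rate sensitivity leads to a decrease in activation volume.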

Keywords: Nanostructured materials, grain size variants, temperature, yield stress, strain rate sensitivity, activation volume.

2366 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which enables us to model it more easily and to propose a possible algorithm for its exploration.
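
A minimal sketch checking the three cited properties on a preferential-attachment graph, with networkx's Barabasi-Albert generator standing in for the authors' probabilistic model; graph sizes are illustrative.

```python
import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)   # preferential attachment

print("average clustering:", nx.average_clustering(G))            # low
print("average distance:", nx.average_shortest_path_length(G))    # small
degs = sorted((d for _, d in G.degree()), reverse=True)
print("top degrees:", degs[:5])   # heavy tail ~ power-law degree distribution
```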

Keywords: Clustering coefficient, preferential attachment, small world, Web community.

2365 A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot

Authors: Jungho Choi, Youngwan Cho

Abstract:

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify its performance. The algorithm merges SURF with the Lucas-Kanade optical flow method. SURF is robust to changes in contrast, size, and rotation and can recognize objects, but it is slow due to its computational complexity. The Lucas-Kanade algorithm is fast but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. A Kalman filter is used to complement the problems that occur when fusing the two algorithms: it estimates the next location and compensates for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify its performance, the fusion algorithm is compared to SURF under three situations: driving straight, driving on a curve, and recognizing cars behind obstacles. The model vehicle makes it possible to create situations similar to actual driving. The proposed fusion algorithm showed superior performance and accuracy compared to existing object recognition and tracking algorithms. Future work will improve the performance of the algorithm so that it can be tested on images of actual road environments.
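
A minimal sketch of such a fusion loop in OpenCV: ORB stands in for the (patented) SURF detector, Lucas-Kanade optical flow tracks the keypoints between frames, and a constant-velocity Kalman filter smooths the object position. The video path is hypothetical, and the recognition/re-detection logic of the paper is not reproduced.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("model_car.avi")            # hypothetical test video
orb = cv2.ORB_create()                             # ORB as a stand-in for SURF

kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
p0 = cv2.KeyPoint_convert(orb.detect(prev_gray)).reshape(-1, 1, 2).astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok or len(p0) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)  # LK tracking
    good = p1[st.flatten() == 1].reshape(-1, 2)
    if len(good):
        cx, cy = good.mean(axis=0)                 # measured object position
        kf.predict()
        est = kf.correct(np.array([[cx], [cy]], np.float32))
        print("smoothed position:", float(est[0]), float(est[1]))
    prev_gray, p0 = gray, good.reshape(-1, 1, 2)
```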

Keywords: SURF, Optical Flow Lucas-Kanade, Kalman Filter, object recognition, object tracking.

2364 The Examination of Prospective ICT Teachers’ Attitudes towards Application of Computer Assisted Instruction

Authors: Agâh Tuğrul Korucu, Ismail Fatih Yavuzaslan, Lale Toraman

Abstract:

Nowadays, thanks to the development of technology, the integration of technology into teaching and learning activities is spreading. Increasing technological literacy, one of the expected competencies for individuals of the 21st century, is associated with the effective use of technology in education, and the most important factor in the effective use of technology in educational institutions is ICT teachers. The concept of computer-assisted instruction (CAI) refers to the utilization of information and communication technology as a tool that aids teachers in making education more efficient and improving its quality. In CAI, teachers can use computers in different places and at different times according to the available hardware and software and the characteristics of the subject and students. Analyzing teachers' use of computers in education is significant because teachers manage the course and are the most important element in students' comprehension of the topic. Accomplishing computer-assisted instruction efficiently requires a positive attitude on the part of teachers; it is therefore crucial to determine the level of knowledge, attitude, and behavior of teachers who receive their professional knowledge from faculties of education, and to eliminate any deficiencies while they are still at the faculty. The aim of this paper is thus to identify prospective ICT teachers' attitudes toward computer-assisted instruction in terms of different variables. The research group consists of 200 prospective ICT teachers studying at Necmettin Erbakan University, Ahmet Keleşoğlu Faculty of Education, CEIT department. The data collection tools are a "personal information form" developed by the researchers and used to collect demographic data, and an "attitude scale related to computer-assisted instruction" consisting of 20 items, 10 positively and 10 negatively worded. The Kaiser-Meyer-Olkin (KMO) coefficient of the scale is 0.88, the Bartlett test significance value is 0.000, and the Cronbach's alpha reliability coefficient of the scale is 0.93. A computer-based statistical software package was used to analyze the collected data; statistical techniques such as descriptive statistics, t-tests, and analysis of variance were utilized. The attitudes of prospective teachers toward computers do not differ according to their educational branches. On the other hand, the attitudes of prospective teachers who own computers toward computer-supported education are higher than those of prospective teachers who do not, and whether students previously received computer lessons in their departments does not affect this situation much. The result is that computer experience positively affects attitude scores regarding computer-supported education.
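
A minimal sketch of two of the reported statistics: Cronbach's alpha for the 20-item scale and an independent-samples t-test between computer owners and non-owners. The response matrix here is random placeholder data, so the printed alpha will not match the paper's 0.93.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
items = rng.integers(1, 6, size=(200, 20)).astype(float)   # 200 students x 20 items

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print("alpha:", round(alpha, 3))

owns_pc = rng.integers(0, 2, size=200).astype(bool)        # placeholder grouping
totals = items.sum(axis=1)
t, p = stats.ttest_ind(totals[owns_pc], totals[~owns_pc])
print("t =", round(t, 2), " p =", round(p, 3))
```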

Keywords: Attitude, computer based instruction, information and communication technologies, technology based instruction, teacher candidate.

2363 Application of Geo-Informatic Technology in Studying of Land Tenure and Land Use for Cultivation of Cash Crops by Local Communities in the Local Administration Organizations of Phailuang and Maepoon in Lublae District, Uttaradit Province

Authors: Kunchit Pirapake

Abstract:

Geo-informatic technology is applied to land tenure and land use in an economic crop area to create sustainable land use, access to the area, and sustainable food production for the people of the community. The research objectives are to 1) apply geo-informatic technology to land ownership and agricultural land use (cash crops) in the research area, 2) create a GIS database on land ownership and land use, and 3) create a database for an online geo-information system on land tenure and land use. The results of this study reveal, first, that the study area lies on steep slopes, mountains, and valleys, with the land mainly in the forest zone covered by the Forest Act 1941 and the National Conserved Forest 1964. Residents gained the right to exploit the land passed down from their ancestors, a practice recognized by the communities, and the land is suitable for cultivating a wide variety of economic crops that provide the main income of the families; at present, local residents keep expanding the land to grow cash crops. Second, the geographic information system database comprises the area boundaries, announcements from the Interior Ministry, interpretation of satellite images, transportation routes, waterways, and plots of land with a title deed available at the provincial land office; most pieces of land without a title deed are located in the forest and national reserve areas. Data were gathered from a field study, with land zones determined by GPS. Finally, the online geo-informatic system can show land tenure and land use information for each economic crop, with high-resolution satellite data that can be updated and checked in the online system simultaneously.

Keywords: Geo-Informatic Technology, Land Tenure, Online Geo-Informatic System, Land Use of cash crops.

2362 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka

Authors: H. M. N. L. Handagiripathira

Abstract:

The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm to 55.4 mS/cm near the southern and northern ends of the lagoon, respectively, while salinity varied from 7.2 psu to 32.1 psu. The average pH of the water was 7.6 and the average water temperature was 28.7 °C. The grain size analysis gave the mass fractions of the samples as sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%) at the sampling locations. The surface sediment samples, 1 kg wet weight each from the upper 5-10 cm layer, were oven dried at 105 °C for 24 hours to constant weight, homogenized and sieved through a 2 mm sieve (IAEA Technical Series No. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An ultra-low-background broad-energy high-purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements, with Canberra Industries' Laboratory Sourceless Calibration Software (LabSOCS) mathematical efficiency calibration approach and Geometry Composer software. The mean activity concentration was found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 mSv/year and 74.6 mSv/year, respectively. The results of this study provide baseline information on the natural and artificial radioactive isotopes and on the environmental pollution and radiological risk associated with them.
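
A minimal sketch of the derived radiological indices from the measured activities, using the widely used UNSCEAR-style coefficients; this is an assumption, as the authors' exact formulae may differ slightly, which would explain the small offsets from the reported values.

```python
# Mean activities from the abstract, Bq/kg (238U used as the Ra-series activity)
A_Ra, A_Th, A_K = 24.0, 67.0, 181.0

D = 0.462 * A_Ra + 0.604 * A_Th + 0.0417 * A_K      # absorbed dose rate in air, nGy/h
Ra_eq = A_Ra + 1.43 * A_Th + 0.077 * A_K            # radium equivalent activity, Bq/kg
H_ex = A_Ra / 370 + A_Th / 259 + A_K / 4810         # external hazard index

print(f"D = {D:.1f} nGy/h, Ra_eq = {Ra_eq:.1f} Bq/kg, H_ex = {H_ex:.2f}")
# close to the reported 60.8 nGy/h, 137.3 Bq/kg and 0.4, respectively
```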

Keywords: Gamma spectrometry, lagoon, radioactivity, sediments.

2361 Study of Integrated Vehicle Image System Including LDW, FCW, and AFS

Authors: Yi-Feng Su, Chia-Tseng Chen, Hsueh-Lung Liao

Abstract:

The objective of this research is to develop an advanced driver assistance system characterized by the functions of lane departure warning (LDW), forward collision warning (FCW) and an adaptive front-lighting system (AFS). The system mainly comprises a CCD/CMOS camera that acquires images of the roadway ahead, in association with an image-processing unit that analyzes the lane ahead and the preceding vehicles. The input image captured by the camera is used to recognize the lane and the preceding vehicle positions by image detection and Dynamic Region of Interest (DROI) algorithms. The system is therefore able to issue real-time auditory and visual warnings when the driver is departing the lane or unwittingly driving too close to the preceding vehicle, so that danger can be prevented. During nighttime, in addition to the foregoing warning functions, the system controls the bending light of the headlamp to provide immediate illumination when turning on a curved lane, and automatically adjusts the beam level, based on lane curvature and vanishing point estimations, to reduce lighting interference with oncoming vehicles in the opposite direction. The experimental results show that the integrated vehicle image system is robust in most environments; the average accuracies of lane detection and preceding vehicle detection are both above 90%.
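
A minimal sketch of a lane-mark detection front end (edge detection plus a probabilistic Hough transform); the DROI logic and warning thresholds of the paper are not reproduced, and the frame path is a hypothetical placeholder.

```python
import cv2
import numpy as np

frame = cv2.imread("road_frame.png")                 # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

h, w = gray.shape
roi = gray[h // 2:, :]                               # lower half: roadway ahead
edges = cv2.Canny(roi, 50, 150)                      # edge detection

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=20)
print("candidate lane segments:", 0 if lines is None else len(lines))
```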

Keywords: Lane mark detection, lane departure warning (LDW), dynamic region of interest (DROI), forward collision warning (FCW), adaptive front-lighting system (AFS).

2360 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique

Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong

Abstract:

Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage, and photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included; participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. The mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without glaucoma (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
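
A minimal sketch of how such a CRR might be computed from a 1-D intensity profile sampled across a vessel: the steepest dark transitions locate the vessel walls, and the bright plateau between them gives the central light reflex. The profile is synthetic, standing in for a fundus-photo scan line, and the paper's actual edge detector is not reproduced.

```python
import numpy as np

profile = np.full(60, 200.0)      # bright fundus background
profile[20:40] = 80.0             # dark vessel lumen (walls near 20 and 40)
profile[28:32] = 140.0            # bright central light reflex

grad = np.gradient(profile)
left_wall, right_wall = int(np.argmin(grad)), int(np.argmax(grad))
vessel_d = right_wall - left_wall                    # vessel diameter, px

core = profile[left_wall + 1:right_wall]             # intensities inside the vessel
half = core.min() + 0.5 * (core.max() - core.min())
reflex_w = int(np.sum(core > half))                  # reflex width, px

print("CRR =", reflex_w / vessel_d)                  # ~0.2 for this profile
```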

Keywords: Glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection.

2359 A Strategy for Address Coding from Household Registry Database

Authors: Yungchien Cheng, Chienmin Chu

Abstract:

Address matching is an important application of Geographic Information Systems (GIS). Before address matching can work, obtaining X,Y coordinates is necessary; this process is called address geocoding. This study illustrates an effective address geocoding process using a household registry database, together with a checking system for the geocoded addresses.

Keywords: GIS, Address Geocoding, Household Registry Database

2358 Performance Analysis of Digital Signal Processors Using SMV Benchmark

Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang

Abstract:

Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs for diverse applications, a wide variety of DSP processors based on different architectures ranging from the traditional to VLIW have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. In order to select a processor that meets the design criteria for an application, processor performance is usually the major concern for digital signal processing (DSP) application developers. Performance data are also essential for the designers of DSP processors to improve their design. Consequently, several DSP performance benchmarks have been proposed over the past decade or so. However, none of these benchmarks seem to have included recent new DSP applications. In this paper, we use a new benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from the recent third generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. Weighted arithmetic mean of clock cycles and arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, features of processor architecture and optimization of compiler. The extensive experimental data gathered, analyzed, and presented in this paper should be helpful for DSP processor and compiler designers to meet their specific design goals.

Keywords: digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.

2357 The CEO Mission II, Rescue Robot with Multi-Joint Mechanical Arm

Authors: Amon Tunwannarux, Supanunt Tunwannarux

Abstract:

This paper presents the design features of a rescue robot named CEO Mission II. Its body is designed as a track-wheel type with double front flippers for climbing over collapsed structures and rough terrain. With a 125 cm long, 5-joint mechanical arm installed on the robot body, it is deployed not only for surveillance from a top view but also for easier and faster access to victims to check their vital signs. Two cameras and vital-sign sensors are set up at the tip of the multi-joint mechanical arm, while a third camera at the back of the robot is used for driving control. The hardware and software of the system that controls and monitors the rescue robot are explained. The control system is used for controlling the robot locomotion and the 5-joint mechanical arm, and for turning devices on and off. The monitoring system gathers information from 7 distance sensors, IR temperature sensors, 3 CCD cameras, a voice sensor, robot wheel encoders, yaw/pitch/roll angle sensors, a laser range finder, and 8 spare A/D inputs. All sensor and control data are communicated with a remote control station via IEEE 802.11b Wi-Fi, while the audio and video data are compressed and sent via a separate IEEE 802.11g Wi-Fi transmitter for real-time response. At the remote control station, the robot locomotion and the mechanical arm are controlled by joystick; moreover, a user-friendly GUI control program based on a click-and-drag method was developed to easily control the movement of the arm. The robot's traveling map is plotted by computing the information from the wheel encoders and the yaw/pitch data, and a 2D obstacle map is plotted from the laser range finder data. The concept and design of this robot can be adapted to suit many other applications. The robot won the Best Technique award at the Thailand Rescue Robot Championship 2006, and all testing results were satisfactory.

Keywords: Controlling, monitoring, rescue robot, mechanical arm.

2356 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.

Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.

2355 Translator Design to Model Cpp Files

Authors: Er. Satwinder Singh, Dr. K.S. Kahlon, Rakesh Kumar, Er. Gurjeet Singh

Abstract:

The most reliable and accurate description of the actual behavior of a software system is its source code. However, not all questions about the system can be answered by resorting directly to this repository of information. What the reverse engineering methodology aims at is the extraction of abstract, goal-oriented "views" of the system, able to summarize relevant properties of the computation performed by the program. Concentrating on reverse engineering, we model C++ files by designing a translator.

Keywords: Translator, Modeling, UML, DYNO, ISVis, TED.

2354 SUPAR: System for User-Centric Profiling of Association Rules in Streaming Data

Authors: Sarabjeet Kaur Kochhar

Abstract:

With the surge of stream processing applications, novel techniques are required for the generation and analysis of association rules in streams. Traditional rule mining solutions cannot handle streams because they generally require multiple passes over the data and do not guarantee results in a predictably small time. Though researchers have proposed algorithms for generating rules from streams, there has not been much focus on their analysis. We propose association rule profiling, a user-centric process for analyzing association rules and attaching suitable profiles to them depending on their changing frequency behavior over a previous snapshot of time in a data stream. Association rule profiles provide insight into the changing nature of associations and can be used to characterize them. We discuss the importance of characteristics such as the predictability of the linkages present in the data and propose a metric to quantify it. We also show how association rule profiles can aid in the generation of user-specific, more understandable, and actionable rules. The framework is implemented as SUPAR: System for User-centric Profiling of Association Rules in streaming data. The proposed system offers the following capabilities: i) continuous monitoring of the frequency of streaming item-sets and detection of significant changes therein for association rule profiling; ii) computation of metrics for quantifying the predictability of associations present in the data; and iii) user-centric control of the characterization process, through a) constraint specification and b) non-interesting rule elimination.
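
A minimal sketch of the profiling idea: track item-set frequencies over successive time snapshots of a stream and attach a profile based on the trend. The item-sets, profile names, and trend rules are hypothetical illustrations, not the paper's definitions.

```python
from collections import Counter

snapshots = [                                   # item-sets seen per time window
    [("milk", "bread"), ("milk", "eggs"), ("milk", "bread")],
    [("milk", "bread"), ("milk", "bread"), ("tea", "sugar")],
    [("tea", "sugar"), ("tea", "sugar"), ("milk", "bread")],
]
counts = [Counter(window) for window in snapshots]

def profile(itemset):
    """Attach a profile from the frequency trend across snapshots."""
    freq = [c[itemset] for c in counts]
    if all(b >= a for a, b in zip(freq, freq[1:])):
        return "steadily rising"
    if all(b <= a for a, b in zip(freq, freq[1:])):
        return "fading"
    return "fluctuating"        # low predictability of the linkage

for its in (("milk", "bread"), ("tea", "sugar")):
    print(its, "->", profile(its))
```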

Keywords: Data Streams, User subjectivity, Change detection, Association rule profiles, Predictability.

2353 Different Views and Evaluations of IT Artifacts

Authors: Sameh Al-Natour, Izak Benbasat

Abstract:

The introduction of a multitude of new and interactive e-commerce information technology (IT) artifacts has impacted adoption research. Rather than functioning solely as productivity tools, new IT artifacts assume the roles of interaction mediators and social actors. This paper describes the varying roles assumed by IT artifacts and distinguishes four foci under which the artifacts are evaluated. It further proposes a theoretical model that maps the different views of IT artifacts to four distinct types of evaluations.

Keywords: IT adoption, IT artifacts, similarity, social actor.

2352 Optimization of Some Process Parameters to Produce Raisin Concentrate in Khorasan Region of Iran

Authors: Peiman Ariaii, Hamid Tavakolipour, Mohsen Pirdashti, Rabehe Izadi Amoli

Abstract:

Raisin concentrate (RC) is among the most important products obtained in the raisin processing industries. RC products are now used to make syrups, drinks, and confectionery, and are introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world, but unfortunately, despite good raw material, no serious effort to extract RC has been made in Iran. Therefore, in this paper, we determined and analyzed the parameters affecting the RC extraction process and then optimized these parameters to design the extraction process for two types of raisin (round and long) produced in the Khorasan region. Two levels of solvent (1:1 and 2:1), three levels of extraction temperature (60 °C, 70 °C and 80 °C), and three levels of concentration temperature (50 °C, 60 °C and 70 °C) were the treatments. Finally, physicochemical characteristics of the obtained concentrate, such as color, viscosity, percentage of reducing sugar, and acidity, were measured, and microbial counts (mould and yeast) were performed. The analysis was performed as a factorial experiment in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal conditions for concentrate production are round raisins with a solvent ratio of 2:1, an extraction temperature of 60 °C, and a concentration temperature of 50 °C. Round raisin is cheaper than the long type, making it more economical for concentrate production; furthermore, round raisin has more aroma and loses less color as the concentration and extraction temperatures increase. Finally, according to the mentioned factors, concentrate from round raisin is recommended.

Keywords: Raisin concentrate, optimization, process parameters, round raisin, Iran.
