Search results for: year level.
1102 Estimation of Missing or Incomplete Data in Road Performance Measurement Systems
Authors: Kristjan Kuhi, Kati K. Kaare, Ott Koppel
Abstract:
Modern management in most fields is performance based: both the planning and the implementation of maintenance and operational activities are driven by appropriately defined performance indicators. Continuous real-time data collection for management purposes is becoming feasible due to technological advancements, but outdated or insufficient input data may lead to incorrect decisions. When deterministic models are used, the uncertainty of the object state is not visible, so applying them is more likely to yield a false diagnosis. Constructing structured probabilistic models of the performance indicators, taking the surrounding indicator environment into consideration, makes it possible to estimate the trustworthiness of indicator values. It also helps to fill gaps in the data, improving the quality of performance analysis and management decisions. In this paper, the authors discuss the application of probabilistic graphical models to road performance measurement and propose a high-level conceptual model that enables more precise analysis and prediction of future pavement deterioration based on road utilization.
Keywords: Probabilistic graphical models, performance indicators, road performance management, data collection
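The gap-filling idea described above can be sketched as a tiny discrete Bayesian network. This is an illustrative example only, not the authors' model: the two indicators (road utilization and pavement condition), their probability tables, and the values are all invented for the sketch.

```python
# Illustrative sketch (not the paper's model): a two-node discrete
# Bayesian network in pure Python. A missing pavement-condition value
# is estimated from its posterior given an observed utilization level.

# Hypothetical prior over road utilization (low/high).
p_util = {"low": 0.6, "high": 0.4}

# Hypothetical conditional table P(condition | utilization).
p_cond_given_util = {
    "low":  {"good": 0.8, "poor": 0.2},
    "high": {"good": 0.3, "poor": 0.7},
}

def posterior_condition(observed_util=None):
    """Return P(condition), marginalizing utilization if unobserved."""
    utils = [observed_util] if observed_util else p_util
    post = {"good": 0.0, "poor": 0.0}
    for u in utils:
        w = 1.0 if observed_util else p_util[u]
        for c, p in p_cond_given_util[u].items():
            post[c] += w * p
    return post

# Fill a gap: the utilization sensor reports "high", condition is missing.
est = posterior_condition("high")
best = max(est, key=est.get)  # most probable value fills the gap
```

The posterior both fills the missing value and, through its spread, expresses how trustworthy the filled-in value is.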
1101 A Model for Test Case Selection in the Software-Development Life Cycle
Authors: Adtha Lawanna
Abstract:
Software maintenance is one of the essential processes of the Software-Development Life Cycle. The main concerns of maintaining software are the correction of errors, the revision of code, the prevention of future errors, and improvements in performance and capacity. While such adjustments are being applied, the software has to be retested to increase the level of assurance that it still meets its requirements. Accordingly, test cases must be selected for exercising both the revised modules and the software as a whole. Approaches to this problem fall under regression test selection, such as retest-all selection, random/ad-hoc selection, and safe regression test selection. In particular, traditional techniques rely on a mapping between the test cases in a test suite and the lines of code they execute. However, the lines of code are not the only requirement that can affect the size of a test suite; the number of functions and the number of faulty versions matter as well. Therefore, a model for test case selection is developed to cover these three requirements with an integral technique that produces smaller test suites than the traditional regression selection techniques.
Keywords: Software maintenance, regression test selection, test case.
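The coverage-mapping idea that traditional techniques rely on can be sketched as a greedy set-cover heuristic: pick tests until every changed entity is covered. This is a hedged stand-in, not the paper's integral technique; the test names, coverage maps and changed-entity set are invented.

```python
# Hedged sketch of coverage-based regression test selection (a greedy
# set-cover heuristic, not the paper's exact integral technique).
# Test names and coverage maps are invented for illustration.

coverage = {  # test -> entities (lines/functions) it executes
    "t1": {"f1", "L10", "L11"},
    "t2": {"f2", "L20"},
    "t3": {"f1", "f2", "L10"},
    "t4": {"f3", "L30"},
}
changed = {"f1", "f2", "L20"}  # entities touched by the revision

def select_tests(coverage, changed):
    """Greedily pick tests until every changed entity is covered."""
    remaining, suite = set(changed), []
    while remaining:
        # choose the test covering the most still-uncovered entities
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break  # some changes are not covered by any test
        suite.append(best)
        remaining -= coverage[best]
    return suite

suite = select_tests(coverage, changed)
```

Here two of the four tests suffice to cover every changed function and line, illustrating how the mapping shrinks the retest-all suite.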
1099 Comparison Ageing Deterioration of Silicone Rubber Outdoor Polymer Insulators in Artificial Accelerated Salt Fog Ageing Test
Authors: S.Thong-Om, W. Payakcho, J. Grasaesom, A. Oonsivilai, B. Marungsri
Abstract:
This paper presents the experimental results of silicone rubber outdoor polymer insulators in a salt fog ageing test based on IEC 61109. Specimens made of HTV silicone rubber with ATH content, in three different configurations (straight sheds, alternated sheds, and inclined and alternated sheds), were tested continuously for 1000 hours in an artificial salt fog chamber. Contamination level, reduction of hydrophobicity, and hardness measurement were used as physical inspection techniques to evaluate the degree of surface deterioration. In addition, chemical changes on the tested specimen surfaces were evaluated by ATR-FTIR to confirm the physical inspection. After 1000 hours of the salt fog test, differences in the degree of surface deterioration were observed on all tested specimens, and the physical inspection and chemical analysis results confirmed the experimental results.
Keywords: Ageing deterioration, Silicone rubber, Polymer Insulator, Salt fog ageing test.
1098 Designing Pictogram for Food Portion Size
Authors: Y.C. Liu, S.J. Lu, Y.C. Weng, H. Su
Abstract:
The objective of this paper is to investigate a new approach based on the idea of pictograms for food portion size. This approach adopts the model of the United States Pharmacopeia-Drug Information (USP-DI). The representation of each food portion size is composed of three parts: frame, the connotation of dietary portion sizes, and layout. To investigate users' comprehension of this approach, two experiments were conducted with 122 Taiwanese participants, 60 male and 62 female, aged between 16 and 64 (divided into age groups of 16-30, 31-45 and 46-64). In Experiment 1, the mean correct rate for understanding food items was 48.54% (S.D. = 95.08) and the mean response time 2.89 s (S.D. = 2.14). The difference in correct rates between age groups was significant (P* = 0.00 < 0.05). In Experiment 2, the correct rate for selecting the right life-size measurement aid was 65.02% (S.D. = 21.31). The results showed the potential of the approach for certain food portion sizes. Issues raised for discussion include comprehension across numerous food varieties in an open environment, the choice between photographs and drawings, and the reasons for the differing correct rates for the measurement aid. This research could also serve those interested in systematic and pictorial representation of dietary portion size information.
Keywords: Comprehension, Food Portion Size, Model of Dietary Information, Pictogram Design, USP-DI.
1097 Detailed Mapping of Pyroclastic Flow Deposits by SAR Data Processing for an Active Volcano in the Torrid Zone
Authors: Asep Saepuloh, Katsuaki Koike
Abstract:
Field mapping of an active volcano, especially in the Torrid Zone, is usually hampered by problems such as steep terrain and bad atmospheric conditions. In this paper we present a simple solution to such problems through a combination of Synthetic Aperture Radar (SAR) and geostatistical methods. With this combination, we could reduce the speckle effect in the SAR data and then estimate the roughness distribution of the pyroclastic flow deposits. The main purpose of this study is to accurately detect the spatial distribution of new pyroclastic flow deposits, termed P-zones, using the β° data from two RADARSAT-1 SAR level-0 scenes. A single scene of Hyperion data and field observations were used for cross-validation of the SAR results. Mt. Merapi in central Java, Indonesia, was chosen as the study site, and the eruptions in May-June 2006 were examined. The P-zones were found on the western and southern flanks. The area size and the longest flow distance were calculated as 2.3 km2 and 6.8 km, respectively. The grain size variation of the P-zones was mapped in detail, from fine to coarse deposits, with respect to the C-band wavelength of 5.6 cm.
Keywords: Geostatistical Method, Mt. Merapi, Pyroclastic, RADARSAT-1.
1096 Speckle Reducing Contourlet Transform for Medical Ultrasound Images
Authors: P.S. Hiremath, Prema T. Akkasaligar, Sharan Badiger
Abstract:
Speckle noise affects all coherent imaging systems, including medical ultrasound. In medical images, noise suppression is a particularly delicate and difficult task: a tradeoff between noise reduction and the preservation of actual image features has to be made in a way that enhances the diagnostically relevant image content. Even though wavelets have been used extensively for denoising speckle images, we have found that denoising using contourlets gives much better performance in terms of SNR, PSNR, MSE, variance and correlation coefficient. The objective of the paper is to determine the number of levels of Laplacian pyramidal decomposition, the number of directional decompositions to perform on each pyramidal level, and the thresholding schemes which yield optimal despeckling of medical ultrasound images in particular. In the proposed method, the log-transformed original ultrasound image is subjected to the contourlet transform to obtain contourlet coefficients. The transformed image is denoised by applying thresholding techniques to the individual band-pass subbands using a Bayes shrinkage rule. We quantify the achieved performance improvement.
Keywords: Contourlet transform, Despeckling, Pyramidal directional filter bank, Thresholding.
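The Bayes shrinkage rule itself can be shown in a few lines. As a hedged stand-in (contourlet toolboxes are not assumed available here), the sketch below applies the BayesShrink soft threshold to the detail coefficients of a one-level 1-D Haar transform rather than to contourlet subbands; the signal is synthetic.

```python
import math

# Stand-in sketch: the BayesShrink soft-threshold rule, demonstrated on
# a one-level 1-D Haar transform instead of a contourlet decomposition.

def haar_1d(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def bayes_shrink_threshold(detail):
    """T = sigma_n^2 / sigma_x (BayesShrink), with the noise level
    sigma_n estimated from the median absolute detail coefficient."""
    med = sorted(abs(c) for c in detail)[len(detail) // 2]
    sigma_n = med / 0.6745                      # robust noise estimate
    var_y = sum(c * c for c in detail) / len(detail)
    sigma_x = math.sqrt(max(var_y - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma_x

def soft(c, t):
    """Soft thresholding: shrink toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

signal = [4.0, 4.1, 3.9, 4.0, 8.0, 8.2, 7.9, 8.1]
a, d = haar_1d(signal)
t = bayes_shrink_threshold(d)
d_denoised = [soft(c, t) for c in d]
```

The same shrink-per-subband logic is what the paper applies to each contourlet band-pass subband.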
1095 Empirical Modeling of Air Dried Rubberwood Drying System
Authors: S. Khamtree, T. Ratanawilai, C. Nuntadusit
Abstract:
Rubberwood is a crucial commercial timber in Southern Thailand. All processes in rubberwood production depend on the knowledge and expertise of the technicians, especially the drying process. This research aims to develop an empirical model for the drying kinetics of rubberwood. During the experiment, the temperature of the hot air and the average air flow velocity were kept at 80-100 °C and 1.75 m/s, respectively, and drying was considered complete when the moisture content of the samples fell below 12%. The drying kinetics were simulated with an empirical solver. The experimental results illustrated that the moisture content decreased as the drying temperature and time increased. The agreement in moisture ratio between the empirical and the experimental model was tested with three statistical parameters, R-square (R²), Root Mean Square Error (RMSE) and Chi-square (χ²), to assess the accuracy of the fitted parameters. The experimental moisture ratio fitted the empirical model well. Additionally, the results indicated that the Henderson and Pabis model revealed a suitable level of agreement for drying rubberwood, presenting an excellent estimation (R² = 0.9963) of the moisture movement compared to the other models. Therefore, the empirical results are valid and can be implemented in future experiments.
Keywords: Empirical models, hot air, moisture ratio, rubberwood.
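The Henderson and Pabis model named above is MR = a·exp(−k·t), which can be fitted by log-linearization (ln MR = ln a − k·t) with ordinary least squares. The sketch below uses synthetic drying-time/moisture-ratio pairs, not the paper's measurements.

```python
import math

# Hedged sketch: fitting the Henderson and Pabis thin-layer model
# MR = a * exp(-k * t) by log-linearization. The data are synthetic.

t = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]          # drying time, h
mr = [1.00, 0.74, 0.55, 0.41, 0.30, 0.22]   # moisture ratio

def fit_henderson_pabis(t, mr):
    """Least-squares fit of ln MR = ln a - k t; returns (a, k)."""
    y = [math.log(m) for m in mr]
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    a = math.exp(ybar - slope * tbar)
    return a, -slope

def r_squared(t, mr, a, k):
    """Goodness of fit of the fitted curve against the data."""
    pred = [a * math.exp(-k * ti) for ti in t]
    mbar = sum(mr) / len(mr)
    ss_res = sum((m - p) ** 2 for m, p in zip(mr, pred))
    ss_tot = sum((m - mbar) ** 2 for m in mr)
    return 1 - ss_res / ss_tot

a, k = fit_henderson_pabis(t, mr)
r2 = r_squared(t, mr, a, k)
```

RMSE and χ² would be computed from the same residuals; only R² is shown to keep the sketch short.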
1094 Optimal Temperature and Duration for Dabbing Customers with the Massage Compressed Packs Reported from Customers’ Perception
Authors: Wichan Lertlop, Boonyarat Chaleephay
Abstract:
The objective of this research was to study the appropriate temperature and duration for dabbing customers with massage compressed packs, as reported from their perception. The investigation was conducted by comparing the angles of the customers' tilted heads together with their perceptions before and after the dabbing. The variables were the temperature of the compressed packs and the dabbing duration. The samples in this study were volunteers who received massage therapy and dabbing with hot compressed packs by traditional Thai medical students. The experiment was conducted from January to June 2013. The research tools consisted of angle meters, stopwatches, thermometers, and massage compressed packs. The customers were interviewed for their perceptions before and after the dabbing. The results showed that:
- There was a difference in the average angles of tilted heads before and after the dabbing.
- There was no difference in the average angles at different temperatures with constant duration.
- There was no difference in the average angles at different durations.
- The customers reported relaxation regardless of the temperature and dabbing duration; however, they reported that temperatures of 70 °C and above were too hot.
Keywords: Massage, Therapy, Therapeutic Systems and Technologies.
1093 Numerical Optimization Design of PEM Fuel Cell Performance Applying the Taguchi Method
Authors: Shan-Jen Cheng, Jr-Ming Miao, Sheng-Ju Wu
Abstract:
The purpose of this paper is to apply the Taguchi method to the optimization of PEMFC performance, with a representative Computational Fluid Dynamics (CFD) model selectively performed for statistical analysis. The factors studied in this paper are the pressure of the fuel cell, the operating temperature, the relative humidity of the anode and cathode, the porosity of the gas diffusion electrode (GDE), and the conductivity of the GDE. The optimal combination for maximum power density is obtained using a three-level statistical method. The results confirmed that the robust optimum design parameters influencing the performance of the fuel cell are a pressure of 3 atm, an operating temperature of 353 K, an anode relative humidity of 50%, and a GDE conductivity of 1000 S/m; the relative humidity of the cathode and the porosity of the GDE were pooled as error due to their small sums of squares. The present simulation results give designers ideas and ratify the effectiveness of the proposed robust design methodology for fuel cell performance.
Keywords: PEMFC, numerical simulation, optimization, Taguchi method.
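The Taguchi analysis step behind such results can be sketched as computing larger-the-better signal-to-noise ratios per factor level over an orthogonal array. The factor levels and power-density responses below are invented for illustration, not the paper's CFD outputs.

```python
import math

# Hedged sketch of the Taguchi analysis step: larger-the-better S/N
# ratios for power density, averaged per factor level. Runs are
# invented; a real study would use a full L9/L18 orthogonal array.

# Each run: (pressure level, temperature level, response y = power density)
runs = [
    (1, 1, 0.42), (1, 2, 0.51), (1, 3, 0.48),
    (2, 1, 0.55), (2, 2, 0.63), (2, 3, 0.60),
    (3, 1, 0.58), (3, 2, 0.70), (3, 3, 0.66),
]

def sn_larger_better(ys):
    """S/N = -10 log10( mean(1/y^2) ), the larger-the-better form."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def level_sn(runs, factor_index):
    """Mean S/N ratio at each level of one factor."""
    by_level = {}
    for run in runs:
        by_level.setdefault(run[factor_index], []).append(run[-1])
    return {lvl: sn_larger_better(ys) for lvl, ys in by_level.items()}

pressure_sn = level_sn(runs, 0)
best_pressure = max(pressure_sn, key=pressure_sn.get)  # best level
```

Repeating `level_sn` per factor and taking the best level of each gives the optimal combination; levels whose S/N spread is negligible are the ones pooled into the error term.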
1092 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs
Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh
Abstract:
Static analysis of source code is used to audit web applications for vulnerabilities. In this paper, we propose a new algorithm that analyzes PHP source code to detect potential LFI and RFI vulnerabilities. In our approach, we first define patterns for finding functions that can be abused through unhandled user input. More precisely, we use regular expressions as a fast and simple method to define patterns for the detection of vulnerabilities. As inclusion functions can also be used safely, many false positives (FPs) can occur. The first cause of these FPs is that the function may not use a user-supplied variable as an argument, so we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. On the other hand, as vulnerability can spread among variables, for example through multi-level assignment, we also try to extract the hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there exist ways to prevent the vulnerability of inclusion functions, we also define patterns to detect them and further decrease our false positives.
Keywords: User-supplied variables, hidden user-supplied variables, PHP vulnerabilities.
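The pattern-matching stage can be sketched with a single regular expression that flags PHP include/require calls whose argument contains raw user input. This is a hedged illustration of the first stage only; the taint tracking through multi-level assignments described above is not reproduced, and the PHP snippet is invented.

```python
import re

# Hedged sketch of the regex pattern stage: flag include/require calls
# whose argument mentions a superglobal ($_GET/$_POST/$_REQUEST).

INCLUDE_RE = re.compile(
    r'\b(include|include_once|require|require_once)\s*\(?[^;]*'
    r'\$_(GET|POST|REQUEST)\b'
)

php_source = '''<?php
include($_GET["page"] . ".php");        // flagged: direct user input
require_once "config/" . $module;       // needs taint analysis
include "header.php";                   // safe constant path
'''

findings = [
    (lineno, line.strip())
    for lineno, line in enumerate(php_source.splitlines(), 1)
    if INCLUDE_RE.search(line)
]
```

Only the first include is flagged; the `$module` case is exactly the kind of hidden user-supplied variable that the second stage of the algorithm must resolve.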
1091 Cross Project Software Fault Prediction at Design Phase
Authors: Pradeep Singh, Shrish Verma
Abstract:
Software fault prediction models are created using the source code, metrics processed from the same or a previous version of the code, and related fault data. Some companies do not store and keep track of all the artifacts required for software fault prediction; to construct a fault prediction model for such a company, training data from other projects can be one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design-phase metrics of other projects can serve as an initial guideline for projects where no previous fault data are available. We analyze seven datasets from the NASA Metrics Data Program which offer design as well as code metrics. Overall, the results of cross-project prediction are comparable to learning from within-company data.
Keywords: Software metrics, fault prediction, cross project, within project.
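The evaluation setup can be sketched with a hand-rolled Gaussian Naive Bayes classifier: train on design metrics of one project, predict faults in another. All metric values and labels below are synthetic; the paper itself uses the NASA MDP datasets.

```python
import math

# Hedged sketch of cross-project fault prediction with a hand-rolled
# Gaussian Naive Bayes classifier. Data are invented for illustration.

def fit(X, y):
    """Per-class priors plus mean/variance of each metric."""
    model = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-6)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(y), means, vars_)
    return model

def predict(model, x):
    """Class with the highest Gaussian log-posterior."""
    def log_post(c):
        prior, means, vars_ = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                 for xi, m, v in zip(x, means, vars_))
        return math.log(prior) + ll
    return max(model, key=log_post)

# Source project: [fan-in, fan-out] design metrics -> faulty (1) or not (0)
X_src = [[2, 3], [1, 2], [9, 8], [8, 9], [2, 2], [10, 7]]
y_src = [0, 0, 1, 1, 0, 1]

model = fit(X_src, y_src)
# Cross-project use: modules from a different (synthetic) project
pred = [predict(model, x) for x in [[1, 1], [9, 9]]]
```

The point of the cross-project setting is exactly this reuse: the model is never shown fault data from the target project.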
1090 Increase Energy Savings with Lighting Automation Using Light Pipes and Power LEDs
Abstract:
The use of natural lighting has come into prominence in constructed buildings, especially in the last ten years, within the scope of energy efficiency. Natural lighting methods aim to take advantage of daylight to the maximum extent and to decrease the use of artificial lighting. Increasing the amount of daylight in buildings by suitable methods gives optimum results in terms of comfort and energy saving when the daylight-artificial light integration is ensured with a suitable control system. Using natural light in places that require lighting ensures energy saving to a great extent. This study aims to save the energy used for lighting. Within this scope, the lighting of a scanning laboratory of a hospital was realized using a lighting automation system combining natural and artificial lighting. For natural lighting, light pipes were used, and for artificial lighting, dimmable power LED modules were used. The need for lighting was tracked with motion sensors. The lighting automation combining natural and artificial light was ensured with fuzzy logic control. At the scanning laboratory where this application was realized, energy savings in lighting were obtained.
Keywords: Daylight transfer, fuzzy logic controller, light pipe, Power LED.
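The daylight-artificial light integration logic can be sketched as a two-rule Mamdani-style fuzzy controller that dims the LED modules as measured daylight rises. The membership breakpoints and rule centroids below are invented, not taken from the installed system.

```python
# Hedged sketch of the fuzzy control step: dim the LED output as the
# daylight delivered by the light pipes increases. All breakpoints
# and centroids are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def led_dim_level(daylight_lux):
    """LED output in [0, 1] from two fuzzy rules:
    IF daylight is LOW  THEN output HIGH (centroid 0.9)
    IF daylight is HIGH THEN output LOW  (centroid 0.1)"""
    low = tri(daylight_lux, -1, 0, 500)       # 'dark' below ~500 lux
    high = tri(daylight_lux, 300, 800, 1601)  # 'bright' above ~300 lux
    if low + high == 0:
        return 0.1                            # very bright: minimum output
    return (low * 0.9 + high * 0.1) / (low + high)  # weighted centroid

dark_out = led_dim_level(50)     # near-dark room: LEDs almost full
bright_out = led_dim_level(750)  # strong daylight: LEDs dimmed
```

In the installed system this output would additionally be gated by the motion sensors, so unoccupied rooms draw no lighting power at all.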
1089 Dynamic Capitalization and Visualization Strategy in Collaborative Knowledge Management System for EI Process
Authors: Bolanle F. Oladejo, Victor T. Odumuyiwa, Amos A. David
Abstract:
Knowledge is attributed to humans, whose problem-solving behavior is subjective and complex. In today's knowledge economy, the need to manage knowledge produced by a community of actors cannot be overemphasized, because actors possess some level of tacit knowledge which is generally difficult to articulate. Problem-solving requires searching and sharing knowledge among a group of actors in a particular context, and knowledge expressed within the context of a problem resolution must be capitalized for future reuse. In this paper, an approach is proposed that permits the dynamic capitalization of relevant and reliable actors' knowledge in solving decision problems following the Economic Intelligence process. A knowledge annotation method and temporal attributes are used to handle the complexity of the communication among actors and to contextualize expressed knowledge. A prototype was built to demonstrate the functionalities of a collaborative knowledge management system based on this approach. It was tested on sample cases, and the results showed that dynamic capitalization leads to knowledge validation, hence increasing the reliability of captured knowledge for reuse. The system can be adapted to various domains.
Keywords: Actors' communication, knowledge annotation, recursive knowledge capitalization, visualization.
1088 Process Development of Safe and Ready-to-eat Raw Oyster Meat by Irradiation Technology
Authors: Pattama Ratana-Arporn, Pongtep Wilaipun
Abstract:
The white scar oyster (Crassostrea belcheri) is often eaten raw and is a leading vehicle for foodborne disease, especially Salmonella Weltevreden, a prominent serovar and among the most resistant to radiation. Gamma irradiation at a low dose of 1 kGy was enough to eliminate S. Weltevreden contamination in oyster meat at levels up to 5 log CFU/g while still retaining the raw characteristics and a sensory quality equivalent to the non-irradiated product. The process for ready-to-eat chilled oyster meat was developed by shucking the meat, packing it individually in plastic bags, subjecting it to 1 kGy gamma radiation under chilled conditions, and then storing it at a refrigerated temperature of 4 °C. Microbiological determination showed the absence of S. Weltevreden (5 log CFU/g initially inoculated) over the whole storage period of 30 days. Sensory evaluation indicated decreasing sensory scores over storage time, setting the product shelf life at 18 days, compared with 15 days for the non-irradiated product. The main advantage of the developed process is that it provides safe raw oysters to consumers while retaining sensory quality and extending shelf life by three days.
Keywords: decontamination, food safety, irradiation, oyster, Salmonella Weltevreden
1087 The Significant Effect of Wudu’ and Zikr in the Controlling of Emotional Pressure Using Biofeedback Emwave Technique
Authors: Mohd Anuar Awang Idris, Muhammad Nubli Abdul Wahab, Nora Yusma Mohamed Yusoff
Abstract:
Wudu’ (Ablution) and Zikr are amongst some of the spiritual tools which may help an individual control his mind, emotion and attitude. These tools are deemed to be able to deliver a positive impact on an individual’s psychophysiology. The main objective of this research is to determine the effects of Wudu’ (Ablution) and Zikr therapy using the biofeedback emWave application and technology. For this research, 13 students were selected as samples from the students’ representative body at the University Tenaga National, Malaysia. The DASS (Depression Anxiety Stress Scale) questionnaire was used to help with the assessment and measurement of each student’s ability in controlling his or her emotions before and after the therapies. The biofeedback emWave technology was utilized to monitor the student’s psychophysiology level. In addition, the data obtained from the Heart rate variability (HRV) test have also been used to affirm that Wudu’ and Zikr had had significant impacts on the student’s success in controlling his or her emotional pressure.
Keywords: Biofeedback emWave, emotion, psychophysiology, wudu’, zikr.
1086 Random Projections for Dimensionality Reduction in ICA
Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi
Abstract:
In this paper we present a technique to speed up ICA based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular, we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a particular Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original size d, which guarantees a narrow confidence interval for this estimator with high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori from the observations alone. Extensive simulations have been run on different sets of real-world signals. They show that the dimensionality reduction is in fact substantial, that it preserves the quality of the decomposition, and that it impressively speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits bad decomposition results if reduced, validating the reliability of the parameter β. We are confident that our method will lead to a better approach for real-time applications.
Keywords: Independent Component Analysis, FastICA algorithm, Higher-order statistics, Johnson-Lindenstrauss lemma.
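The reduction step itself is a random Gaussian projection. The sketch below shows such a Johnson-Lindenstrauss-style projection from d dimensions down to k = ρ·d; the data, dimensions, and rate ρ are synthetic, and the subsequent FastICA step is not shown.

```python
import math
import random

# Hedged sketch of the dimensionality-reduction step: a Johnson-
# Lindenstrauss-style random Gaussian projection of n points from
# d dimensions down to k = rho * d. Data and rho are synthetic.

random.seed(0)
d, k = 200, 40                     # rho = k / d = 0.2
n = 20

data = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

# Projection matrix with entries N(0, 1/k), so pairwise distances are
# preserved in expectation.
R = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
     for _ in range(k)]

def project(x):
    """Map one d-dimensional point into the k-dimensional space."""
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

reduced = [project(x) for x in data]
```

FastICA would then be run on the k-dimensional `reduced` set, cutting the per-iteration cost roughly by the factor ρ.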
1085 On the Performance of Information Criteria in Latent Segment Models
Authors: Jaime R. S. Fonseca
Abstract:
Notwithstanding the widespread application of finite mixture models in segmentation, finite mixture model selection is still an important issue. In fact, the selection of an adequate number of segments is a key issue in deriving latent segment structures, and it is desirable that the selection criteria used for this end are effective. In order to select among several information criteria which may support the selection of the correct number of segments, we conduct a simulation study. In particular, this study is intended to determine which information criteria are more appropriate for mixture model selection when considering data sets with only categorical segmentation base variables. The generation of mixtures of multinomial data supports the proposed analysis. As a result, we establish a relationship between the level of measurement of the segmentation variables and the performance of eleven information criteria. The criterion AIC3 shows the best performance (it indicates the correct number of segments in the simulated structure most often) for mixtures of multinomial segmentation base variables.
Keywords: Quantitative Methods, Multivariate Data Analysis, Clustering, Finite Mixture Models, Information Theoretical Criteria, Simulation experiments.
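The criteria being compared all share the form −2·log-likelihood + penalty; AIC3 differs from AIC only in penalizing each free parameter by 3 instead of 2. The sketch below computes them for hypothetical fits (the log-likelihoods and parameter counts are invented, not the study's simulated data).

```python
import math

# Hedged sketch of the model-selection step: AIC, AIC3 and BIC for a
# sequence of candidate mixtures. Fit statistics are invented.

def aic(loglik, n_params):
    return -2 * loglik + 2 * n_params

def aic3(loglik, n_params):
    return -2 * loglik + 3 * n_params          # heavier penalty per parameter

def bic(loglik, n_params, n_obs):
    return -2 * loglik + n_params * math.log(n_obs)

# Hypothetical fits: (segments, maximized log-likelihood, free parameters)
fits = [(1, -1250.0, 5), (2, -1180.0, 11), (3, -1172.0, 17), (4, -1168.0, 23)]
n_obs = 300

# Choose the number of segments minimizing AIC3.
best_aic3 = min(fits, key=lambda f: aic3(f[1], f[2]))[0]
```

With these numbers the likelihood gains beyond two segments are too small to pay the 3-per-parameter penalty, so AIC3 selects two segments.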
1084 Data Hiding by Vector Quantization in Color Image
Authors: Yung-Gi Wu
Abstract:
With the growth of computers and networks, digital data can be spread anywhere in the world quickly. Digital data can also be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data, although embedding the watermark certainly influences quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to achieve data hiding. This kind of watermarking is invisible, meaning that users will not be conscious of the existence of the embedded watermark even though the embedded image differs slightly from the original image. Meanwhile, VQ imposes a heavy computational burden, so we adopt a fast VQ encoding scheme using partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks hidden in the image can be gray-level, bi-level, or color images; text can also be used as a watermark. To test the robustness of the system, we used Photoshop to sharpen, crop, and alter the image and checked whether the extracted watermark was still recognizable. Experimental results demonstrate that the proposed system can resist these three kinds of tampering in general cases.
Keywords: Data hiding, vector quantization, watermark.
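The partial distortion search (PDS) speed-up mentioned above can be sketched in a few lines: while accumulating the squared error against a candidate codeword, abandon it as soon as the partial sum exceeds the best distortion found so far. The 4-D codebook and block below are invented for illustration.

```python
# Hedged sketch of partial distortion search (PDS) for the nearest
# codeword in VQ encoding. The tiny codebook is invented.

codebook = [
    [0, 0, 0, 0], [10, 10, 10, 10], [20, 20, 20, 20], [5, 0, 5, 0],
]

def nearest_pds(vec, codebook):
    """Index of the nearest codeword, with early rejection."""
    best_idx, best_dist = 0, float("inf")
    for idx, code in enumerate(codebook):
        dist = 0.0
        for v, c in zip(vec, code):
            dist += (v - c) ** 2
            if dist >= best_dist:      # partial distortion already worse
                break                  # than the best: abandon candidate
        else:                          # loop finished: new best codeword
            best_idx, best_dist = idx, dist
    return best_idx

block = [9, 11, 10, 9]
idx = nearest_pds(block, codebook)
```

PDS returns exactly the same codeword as an exhaustive search; it only skips arithmetic that cannot change the outcome, which is what makes the embedding fast without altering the hidden data.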
1083 Efficient Numerical Model for Studying Bridge Pier Collapse in Floods
Authors: Thanut Kallaka, Ching-Jong Wang
Abstract:
High-level and high-velocity flood flows are potentially harmful to bridge piers, as evidenced by many toppled piers, among which single-column piers are considered the most vulnerable. The flood flow characteristic parameters, including drag coefficient, scouring and vortex shedding, are built into a pier-flood interaction model to investigate structural safety against flood hazards, considering the effects of local scouring, hydrodynamic forces, and vortex-induced resonance vibrations. By extracting the pier-flood simulation results embedded in a neural networks code, two cases of pier toppling that occurred on typhoon days were reexamined: (1) a bridge overcome by a flash flood near a mountainside; (2) a bridge washed away in a flood across a wide channel near an estuary. The modeling procedures and simulations are capable of identifying the probable causes of the toppled bridge piers during heavy floods, which include excessive pier bending moments and resonance in structural vibrations.
Keywords: Bridge piers, Neural networks, Scour depth, Structural safety, Vortex shedding
1082 General Purpose Graphic Processing Units Based Real Time Video Tracking System
Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai
Abstract:
Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about background and foreground; however, pixel-level processing of local regions consumes a considerable amount of computational time and memory with traditional approaches. In our approach, we have exploited the concurrent computational ability of General Purpose Graphic Processing Units (GPGPU) to address this problem. A Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The kernel weights are influenced by local regions and are updated by the inter-frame variations of the corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speed-up of 10X compared to the sequential approach.
Keywords: Connected components, Embrace threads, Local weighted kernel, Structuring element.
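The per-pixel background model can be illustrated in heavily simplified form: a single adaptive Gaussian per pixel (the paper uses a full multi-kernel GMM with locally weighted updates on the GPU). A pixel is flagged as foreground when it deviates from its running mean by more than a few standard deviations; the gray-level sequence below is invented.

```python
# Hedged, heavily simplified CPU sketch of the per-pixel background
# model: one adaptive Gaussian per pixel instead of the paper's
# multi-kernel GMM. The pixel sequence is synthetic.

ALPHA = 0.05  # learning rate for the running mean/variance

def update_pixel(state, value, k=2.5):
    """state = (mean, var); returns (new_state, is_foreground)."""
    mean, var = state
    foreground = (value - mean) ** 2 > (k ** 2) * var
    new_mean = (1 - ALPHA) * mean + ALPHA * value
    new_var = (1 - ALPHA) * var + ALPHA * (value - mean) ** 2
    return (new_mean, new_var), foreground

state = (100.0, 20.0)                # settled background: gray level ~100
flags = []
for v in [101, 99, 102, 180, 100]:   # 180 = passing foreground object
    state, fg = update_pixel(state, v)
    flags.append(fg)
```

On the GPU, one thread runs this update per pixel; since pixels are independent, the loop over the frame parallelizes trivially, which is the source of the reported speed-up.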
1081 The Role of Driving Experience in Hazard Perception and Categorization: A Traffic-Scene Paradigm
Authors: Avinoam Borowsky, Tal Oron-Gilad, Yisrael Parmet
Abstract:
This study examined the role of driving experience in hazard perception and categorization using traffic scene pictures. Specifically, young-inexperienced, moderately experienced and very experienced (taxi) drivers observed traffic scene pictures while connected to an eye tracking system, and were asked to rate the level of hazardousness of each picture and to name the three most prominent hazards in it. The target pictures included nine nearly identical pairs of pictures, where one picture in each pair included an actual hazard as an additional element. Altogether, 22 areas of interest (AOIs) were predefined, comprising 13 potential hazards and 9 actual hazards. The data analysis included both the verbal reports and the eye scanning patterns of these AOIs. Generally, both experienced and taxi drivers noted a relatively larger number of potential hazards than young-inexperienced drivers. Thus, by attending to less salient potential hazards, experienced drivers demonstrated a better situation model of the traffic environment.
Keywords: Concept Construction, Hazard Perception, Eye Movements, Driving Experience.
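The core of the AOI analysis above is a hit test: deciding which predefined area of interest each fixation lands in, then tallying dwell counts per AOI. A hedged sketch follows; the AOI names and rectangle coordinates are invented for illustration and do not correspond to the study's 22 actual AOIs or its eye-tracker data format.

```python
from collections import Counter

# AOIs as name -> (x_min, y_min, x_max, y_max) in screen pixels (assumed layout)
AOIS = {
    "pedestrian_crossing": (100, 300, 260, 420),   # hypothetical actual hazard
    "parked_car":          (400, 350, 560, 480),   # hypothetical potential hazard
}

def aoi_hits(fixations, aois=AOIS):
    """Count fixations (x, y) landing inside each rectangular AOI."""
    hits = Counter()
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hits[name] += 1
    return hits
```

Comparing such per-AOI counts between driver groups is what lets the study say which hazards the experienced drivers attended to more.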
1080 An In-Depth Analysis of Open Data Portals as an Emerging Public E-Service
Authors: Martin Lnenicka
Abstract:
Governments collect and produce large amounts of data. Increasingly, governments worldwide have started to implement open data initiatives and to launch open data portals that enable the release of these data in open and reusable formats. As a result, a large number of open data repositories, catalogues and portals have been emerging around the world. The greater availability of interoperable and linkable open government data catalyzes secondary use of such data, so they can be used for building useful applications which leverage their value, allow insight, provide access to government services, and support transparency. The efficient development of successful open data portals makes it necessary to evaluate them systematically, in order to understand them better, assess the various types of value they generate, and identify the improvements required for increasing this value. Thus, the attention of this paper is directed particularly to the field of open data portals. The main aim of this paper is to compare selected open data portals at the national level using content analysis and to propose a new evaluation framework, which further improves the quality of these portals. It also establishes a set of considerations for involving businesses and citizens to create e-services and applications that leverage the datasets available from these portals.
Keywords: Big data, content analysis, criteria comparison, data quality, open data, open data portals, public sector.
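An evaluation framework of the kind proposed above typically reduces to weighted criteria scoring: each portal receives a per-criterion score, and weights combine these into a ranking. The sketch below illustrates only that mechanism; the criteria, weights, and scores are assumptions for illustration, not the paper's actual framework.

```python
# Illustrative criteria and weights (assumed, not from the paper)
WEIGHTS = {"data_quality": 0.4, "machine_readability": 0.35, "metadata": 0.25}

def rank_portals(scores, weights=WEIGHTS):
    """scores: portal -> {criterion: score on a 0-10 scale}.
    Returns (portal, weighted_total) pairs sorted best-first."""
    totals = {
        portal: sum(weights[c] * s for c, s in per_crit.items())
        for portal, per_crit in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: -kv[1])
```

Content analysis supplies the raw per-criterion scores; the weighting step is where the framework's judgment about relative value enters.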
1079 Faster FPGA Routing Solution using DNA Computing
Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon
Abstract:
There are many classical algorithms for finding routings in FPGAs, but using DNA computing the routes can be solved efficiently and fast. The run-time complexity of DNA algorithms is much lower than that of the classical algorithms used for solving FPGA routing. Research in DNA computing is still at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach to the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing. A satisfying assignment for a particular route results in a valid routing, and the absence of a satisfying assignment implies that the layout is un-routable. In the second step, a DNA search algorithm is applied to this Boolean equation to explore routing alternatives, utilizing the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing to the FPGA routing problem.
Keywords: FPGA, Routing, DNA Computing.
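The first tier above, routing as Boolean satisfiability, can be shown on a toy instance: two nets competing for two routing tracks, encoded so that any satisfying assignment is a valid routing. The instance and variable names are invented for illustration, and the brute-force search below merely stands in for the paper's DNA search step.

```python
from itertools import product

# Variables: a1/a2 = net A uses track 1/2; b1/b2 = net B uses track 1/2.
def routing_sat(a1, a2, b1, b2):
    """CNF-style constraints: each net on exactly one track, no shared track."""
    return ((a1 or a2) and not (a1 and a2)       # net A on exactly one track
            and (b1 or b2) and not (b1 and b2)   # net B on exactly one track
            and not (a1 and b1)                  # no conflict on track 1
            and not (a2 and b2))                 # no conflict on track 2

def solve():
    """Enumerate all satisfying assignments; an empty list means un-routable."""
    return [v for v in product([False, True], repeat=4) if routing_sat(*v)]
```

The two satisfying assignments correspond to the two valid routings (A on track 1 with B on track 2, or vice versa), matching the abstract's claim that a satisfying assignment specifies a valid routing.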
1078 An Off-the-Shelf Scheme for Dependable Grid Systems Using Virtualization
Authors: Toshinori Takabatake
Abstract:
Recently, grid computing has attracted wide attention in the science, industry, and business fields, which require a vast amount of computing. Grid computing provides an environment in which many nodes (i.e., many computers) are connected with each other through a local/global network and made available to many users. In this environment, to achieve data processing among nodes for any application, each node performs mutual authentication using certificates published by the Certificate Authority (CA for short). However, if a failure or fault occurs in the CA, no new certificates can be published, and as a result a new node cannot participate in the grid environment. In this paper, an off-the-shelf scheme for dependable grid systems using virtualization techniques is proposed and its implementation is verified. The proposed approach uses virtualization techniques to restart an application, e.g., the CA, if it has failed, so the system can tolerate a failure or fault occurring in the CA. Since the proposed scheme is easily implemented at the application level, its implementation cost for the system builder is low compared with other methods. Simulation results show that the CA in the system can recover from its failure or fault.
Keywords: grid computing, restarting application, certificate authority, virtualization, dependability.
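The restart-on-failure idea can be sketched as a supervisor that retries a service entry point when it fails, up to a bounded budget. This is only an application-level stand-in for the paper's scheme, which restarts a virtualized CA; the names and restart limit are illustrative assumptions.

```python
def supervise(service, max_restarts=3):
    """Run service(); on failure, restart it up to max_restarts times.
    Returns (result, restarts_used), or re-raises once the budget is spent."""
    restarts = 0
    while True:
        try:
            return service(), restarts
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise

# Usage: a simulated flaky CA that faults twice before issuing a certificate.
attempts = {"n": 0}
def flaky_ca():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("CA fault")
    return "certificate issued"
```

The key property, as in the abstract, is that clients of the supervised service see recovery rather than permanent failure, as long as the fault is transient.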
1077 Experimental Study of the Metal Foam Flow Conditioner for Orifice Plate Flowmeters
Authors: B. Manshoor, N. Ihsak, Amir Khalid
Abstract:
The sensitivity of orifice plate metering to disturbed flow (either asymmetric or swirling) is a subject of great concern to flow meter users and manufacturers. The distortions caused by pipe fittings and pipe installations upstream of the orifice plate are major sources of this type of non-standard flow. These distortions can degrade the accuracy of metering to an unacceptable degree. In this work, a multi-scale object known as metal foam has been used to generate a predetermined turbulent flow upstream of the orifice plate. The experimental results showed that the combination of an orifice plate and a metal foam flow conditioner is broadly insensitive to upstream disturbances. The metal foam demonstrated good performance in removing swirl and producing a repeatable flow profile within a short distance downstream of the device. The results of using a combination of a metal foam flow conditioner and an orifice plate for non-standard flow conditions, including swirling and asymmetric flow, show that this package can preserve the accuracy of metering up to the level required in the standards.
Keywords: Metal foam flow conditioner, flow measurement, orifice plate.
1076 Trends and Prospects for the Development of Georgian Wine Market
Authors: E. Kharaishvili, M. Chavleishvili, M. Natsvaladze
Abstract:
The article presents trends in the development of the Georgian wine market and evaluates the competitive advantages Georgia has for entering the wine market, based on its customs, traditions and historical practices combined with modern technologies. In order to analyze the supply of wine, the dynamics of vineyard land area and grape varieties are discussed, trends in wine production are presented, trends in export and import are evaluated, and the local wine market and its micro and macro environments are studied and analyzed based on interviews with experts and analysis of primary records. To strengthen its position on the international market, the level of competitiveness of Georgian wine is defined and evaluated by "ex-ante" and "ex-post" methods, as well as by the four basic and two additional factors of Porter's diamond method; potential advantages and disadvantages of Georgian wine are revealed. Conclusions are made by identifying the factors that hinder the development of the Georgian wine market, and based on these conclusions, relevant recommendations are developed.
Keywords: Georgian wine market, competitive advantage, bio wine, export-import, Porter's diamond model.
1075 Interaction between Cd and Zn in Barley (Hordeum vulgare L.) Plant for Phytoextraction Method
Authors: S. Adiloğlu, K. Bellitürk, Y. Solmaz, A. Adiloğlu
Abstract:
The aim of this research is the remediation of cadmium (Cd) pollution in agricultural soils by using the barley (Hordeum vulgare L.) plant. For this purpose, a pot experiment was conducted under greenhouse conditions. Cadmium (100 mg/kg) in the form of CdSO4.8H2O was applied to each pot and incubated for 30 days. Then ethylenediaminetetraacetic acid (EDTA) chelate was applied to each pot at five doses (0, 3, 6, 8 and 10 mmol/kg) 20 days before the harvesting time of the barley plants. The plants were harvested two months after planting. According to the pot experiment results, the Cd and Zn amounts of the barley plant increased with increasing EDTA application: the Zn and Cd contents of barley were 20.13 and 1.35 mg/kg for the 0 mmol/kg EDTA dose, and 58.61 and 113.24 mg/kg for the 10 mmol/kg EDTA dose, respectively. Likewise, the Cd and Zn concentrations of the experiment soil increased with EDTA application: the Zn and Cd concentrations of the soil were 0.31 and 0.021 mg/kg for the 0 mmol/kg EDTA dose, and 2.39 and 67.40 mg/kg for the 10 mmol/kg EDTA dose, respectively. These increases were found to be statistically significant at the 1% level. According to the results of the pot experiment, heavy metal pollution, especially Cd pollution, can be remediated by the phytoextraction method using the barley (Hordeum vulgare L.) plant.
Keywords: Barley (Hordeum vulgare L.), Cadmium and Zinc, phytoextraction, soil pollution.
1074 Parametric Study of Confined Turbulent Impinging Slot Jets upon a Flat Plate
Authors: A. M. Tahsini, S. Tadayon Mousavi
Abstract:
In the present paper, a numerical investigation has been carried out to classify and clarify the effects of the paramount parameters on turbulent impinging slot jets. The effects of the nozzle's exit turbulence intensity and the distance between the nozzle and the impinging plate are studied at Reynolds numbers 5000 and 20000. In addition, the effect of Mach number, varied between 0.3 and 0.8 at a constant Reynolds number of 133000, is investigated to elucidate the effect of compressibility in a jet impinging upon a flat plate. A wall located at the same level as the nozzle's exit confines the flow. A compressible finite volume solver is implemented to simulate the flow behavior, and the one-equation Spalart-Allmaras turbulence model is used to simulate the turbulent flow in this study. Assessment of the Spalart-Allmaras turbulence model at high nozzle-to-plate distances, and providing enough insight to characterize the effect of Mach number at high Reynolds number for the complex impinging jet flow, are the remarkable results of this study.
Keywords: Impinging jet, Numerical simulation, Turbulence.
1073 Modeling and Analysis of Process Parameters on Surface Roughness in EDM of AISI D2 Tool Steel by RSM Approach
Authors: M. K. Pradhan, C. K. Biswas
Abstract:
In this research, Response Surface Methodology (RSM) is used to investigate the effect of four controllable input variables, namely discharge current, pulse duration, pulse off time and applied voltage, on the Surface Roughness (SR) of an Electrical Discharge Machined surface. To study the proposed second-order polynomial model for SR, a Central Composite Design (CCD) is used to estimate the model coefficients of the four input factors, which are expected to influence the SR in the Electrical Discharge Machining (EDM) process. Experiments were conducted on AISI D2 tool steel with a copper electrode. The response is modeled using RSM on the experimental data, and the significant coefficients are obtained by performing Analysis of Variance (ANOVA) at the 5% level of significance. It is found that discharge current, pulse duration, pulse off time and a few of their interactions have a significant effect on the SR. The model sufficiency is very satisfactory, as the Coefficient of Determination (R2) is found to be 91.7% and the adjusted R2-statistic (R2adj) 89.6%.
Keywords: Electrical discharge machining, surface roughness, response surface methodology, ANOVA, central composite design.
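The RSM machinery above, fitting a second-order polynomial by least squares and judging model sufficiency by R2, can be sketched for a single factor in plain Python. This is a deliberately reduced illustration under stated assumptions: one input variable instead of the paper's four, and no interaction terms or CCD point layout.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations."""
    rows = [[1.0, x, x * x] for x in xs]                      # design matrix [1, x, x^2]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, 3):
            f = xtx[r][col] / xtx[col][col]
            xtx[r] = [aa - f * bb for aa, bb in zip(xtx[r], xtx[col])]
            xty[r] -= f * xty[col]
    b = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                       # back substitution
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, 3))) / xtx[r][r]
    return b

def r_squared(xs, ys, b):
    """R^2 = 1 - SS_res/SS_tot, the model-sufficiency measure quoted above."""
    preds = [b[0] + b[1] * x + b[2] * x * x for x in xs]
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

With the paper's four factors, the design matrix simply gains columns for the extra linear, quadratic, and interaction terms, and the CCD prescribes which factor-level combinations to run.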