Search results for: Stream query.


83 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, answering questions such as: which products are often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the time and resources required to mine them increase at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which uses entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. As a pre-processor for frequent itemset mining, FASTER can produce a speed-up of 3.1 times over the original algorithm while maintaining an accuracy of 71%.
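
To make the pre-processing idea concrete, here is a minimal sketch of an entropy-based record reduction step; the scoring criterion (per-record surprisal, with the most surprising records dropped as outliers) is an assumption for illustration, since the abstract does not spell out FASTER's exact rule:

```python
import numpy as np
import pandas as pd

def record_surprisal(df: pd.DataFrame) -> pd.Series:
    """Per-record surprisal: sum over attributes of -log2 P(value)."""
    total = pd.Series(0.0, index=df.index)
    for col in df.columns:
        freq = df[col].value_counts(normalize=True)   # empirical P(value)
        total += -np.log2(df[col].map(freq).astype(float))
    return total

def entropy_record_reduction(df: pd.DataFrame, keep_fraction: float = 0.7) -> pd.DataFrame:
    """Drop the most surprising (outlier-like) records before mining."""
    scores = record_surprisal(df)
    return df[scores <= scores.quantile(keep_fraction)]
```

A rough-set reduct computation over the surviving records would then remove redundant attributes before the itemset miner runs.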

Keywords: Rough-sets, Classification, Feature Selection, Entropy, Outliers, Frequent itemset mining.

82 Despiking of Turbulent Flow Data in Gravel Bed Stream

Authors: Ratul Das

Abstract:

The present experimental study examines the decontamination of instantaneous velocity fluctuations captured by an Acoustic Doppler Velocimeter (ADV) in gravel-bed streams, in order to ascertain near-bed turbulence at low Reynolds number. Interference between incident and reflected pulses produces spikes in the ADV data, especially in the near-bed flow zone, and filtering the data is therefore essential. Nortek’s Vectrino four-receiver ADV probe was used to capture the instantaneous three-dimensional velocity fluctuations over a non-cohesive bed. A spike removal algorithm based on the acceleration threshold method was applied to characterize the bed roughness and its influence on velocity fluctuations and velocity power spectra in the carrier fluid. A best combination of velocity threshold (VT) and acceleration threshold (AT) is proposed, for which the velocity power spectra of the despiked signals show a satisfactory fit with the Kolmogorov “–5/3 scaling law” in the inertial sub-range. In addition, the velocity distributions below the roughness crest level closely follow a third-degree polynomial series.
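
As a sketch of the acceleration threshold method, the snippet below flags samples whose implied acceleration exceeds a multiple of gravity and patches them by interpolation; the threshold factor k and the interpolation choice are illustrative assumptions, not the paper's calibrated VT/AT combination:

```python
import numpy as np

def despike_acceleration_threshold(u: np.ndarray, fs: float,
                                   k: float = 1.5, g: float = 9.81) -> np.ndarray:
    """Replace samples whose point-to-point acceleration exceeds k*g."""
    accel = np.diff(u) * fs                 # finite-difference acceleration, m/s^2
    spikes = np.zeros(u.shape, dtype=bool)
    spikes[1:] = np.abs(accel) > k * g      # mark the sample ending each bad step
    idx = np.arange(len(u))
    clean = u.copy()
    clean[spikes] = np.interp(idx[spikes], idx[~spikes], u[~spikes])
    return clean
```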

Keywords: Acoustic Doppler Velocimeter, gravel-bed, spike removal, Reynolds shear stress, near-bed turbulence, velocity power spectra.

81 Retrofitting of Bridge Piers against the Scour Damages: Case Study of the Marand-Soofian Route Bridge

Authors: Shatirah Akib, Hossein Basser, Hojat Karami, Afshin Jahangirzadeh

Abstract:

Bridge piers constructed in the track of high-water rivers cause variations in the flow patterns, mostly as a result of changes in the river cross-section. By reducing the river cross-section, bridge piers significantly affect the flow patterns. Once the flow approaches the piers, the streamlines rearrange, causing different flow patterns to appear around the bridge piers. The new flow patterns follow the geometry and other technical characteristics of the piers. One of the most significant consequences of this process is the scour generated around the bridge piers, which threatens the safety of the structure. In determining the properties of scour holes, finding the maximum depth of scour is an important factor. In this manuscript, a numerical simulation of the scour around the Marand-Soofian route bridge piers has been carried out using the SSIIM 2.0 software, and the maximum scour depth has been obtained. Finally, methods for retrofitting bridge piers against scour, and for decreasing the amount of scour, are presented.

Keywords: Scour, Bridge pier, numerical simulation, SSIIM 2.0.

80 Exergy Analysis of Combined Cycle of Air Separation and Natural Gas Liquefaction

Authors: Hanfei Tuo, Yanzhong Li

Abstract:

This paper presents a novel combined cycle of air separation and natural gas liquefaction. The idea is that natural gas can be liquefied while gaseous or liquid nitrogen and oxygen are produced in one combined cryogenic system. Cycle simulation and exergy analysis were performed to evaluate the process and thereby reveal the influence of the crucial parameter, the flow-rate ratio β through the two-stage expanders, on the heat transfer temperature difference, its distribution, and the consequent exergy loss. Composite curves for the combined hot streams (feed natural gas and recycled nitrogen) and the cold stream show the degree of optimization available in this process if an appropriate β is chosen. The results indicate that increasing β reduces the temperature difference and the exergy loss in the heat exchange process. However, the maximum value of β is confined by the minimum temperature difference prescribed in heat exchanger design standards and by heat exchanger size. The optimal βopt under different operating conditions, corresponding to the required minimum temperature differences, was investigated.
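
The link between temperature difference and exergy loss can be sketched with the Gouy-Stodola theorem (exergy destroyed = T0 times entropy generated). The constant-cp, single-phase form below is only a didactic simplification; the liquefying natural gas stream in the paper would require real-fluid property calls instead:

```python
import math

def exergy_destruction(T0: float,
                       m_h: float, cp_h: float, Th_in: float, Th_out: float,
                       m_c: float, cp_c: float, Tc_in: float, Tc_out: float) -> float:
    """Gouy-Stodola: Ex_loss = T0 * S_gen for a two-stream heat exchanger.
    Temperatures in kelvin; assumes constant cp and no phase change."""
    dS_hot = m_h * cp_h * math.log(Th_out / Th_in)    # < 0: hot stream cools
    dS_cold = m_c * cp_c * math.log(Tc_out / Tc_in)   # > 0: cold stream warms
    return T0 * (dS_hot + dS_cold)                    # kW for m in kg/s, cp in kJ/(kg K)
```

Shrinking the temperature gap between the composite curves shrinks the net entropy generation, which is why a larger β (up to the minimum-approach limit) lowers the exergy loss.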

Keywords: combined cycle simulation, exergy analysis, natural gas liquefaction.

79 Analyzing Irbid’s Food Waste as Feedstock for Anaerobic Digestion

Authors: Assal E. Haddad

Abstract:

Food waste samples from Irbid were collected from five different sources for 12 weeks to characterize their composition in terms of four food categories: rice, meat, fruits and vegetables, and bread. The average food type composition was 39% rice, 6% meat, 34% fruits and vegetables, and 23% bread. Methane yield was also measured for each food type and was found to be 362, 499, 352, and 375 mL/g VS for rice, meat, fruits and vegetables, and bread, respectively. A representative food waste sample was created to test the actual methane yield and compare it to the calculated one. The actual methane yield (414 mL/g VS) was greater than the value (377 mL/g VS) calculated from the food type proportions and their specific methane yields. This study emphasizes the effect of the types of food and their proportions in food waste on the final biogas production. The findings provide representative methane emission factors for Irbid’s food waste, which represents as much as 68% of the total Municipal Solid Waste (MSW) in Irbid, and also indicate the energy and economic value within the solid waste stream in Irbid.
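
The calculated figure is just the composition-weighted average of the category yields, which the reported numbers reproduce:

```python
fractions = {"rice": 0.39, "meat": 0.06, "fruits_vegetables": 0.34, "bread": 0.23}
yields = {"rice": 362, "meat": 499, "fruits_vegetables": 352, "bread": 375}  # mL CH4 / g VS

calculated = sum(fractions[k] * yields[k] for k in fractions)
print(f"{calculated:.0f} mL/g VS")  # 377, vs. the 414 mL/g VS actually measured
```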

Keywords: Food waste, solid waste management, anaerobic digestion, methane yield.

78 The Effect of Blockage Factor on Savonius Hydrokinetic Turbine Performance

Authors: Thochi Seb Rengma, Mahendra Kumar Gupta, P. M. V. Subbarao

Abstract:

Hydrokinetic turbines can be used to produce power for inaccessible villages located near rivers. A hydrokinetic turbine uses the kinetic energy of the water and may be placed directly in the natural flow of water, without dams. For off-grid power production, the Savonius-type vertical axis turbine is the easiest to design and manufacture. This study uses three-dimensional Computational Fluid Dynamics (CFD) simulations to capture the considerable interaction and complexity of the turbine blades. Savonius hydrokinetic turbine (SHKT) performance is affected by blockage in rivers, canals, and waterways: putting a large object in a water channel obstructs the flow and raises the local free stream velocity. The blockage correction factor, or velocity increment, measures the impact of this velocity change on performance. SHKT performance is evaluated by comparing the power coefficient (Cp) with the tip-speed ratio (TSR) at various blockage ratios. The maximum Cp was obtained at a TSR of 1.1 with a blockage ratio of 45%, whereas a TSR of 0.8 yielded the highest Cp without blockage. The greatest Cp of 0.29 was obtained with a 45% blockage ratio, compared to a maximum Cp of 0.18 without blockage.
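
For reference, the performance measures compared in the abstract are the standard ones; a minimal sketch of their definitions:

```python
def power_coefficient(P: float, rho: float, A: float, V: float) -> float:
    """Cp: extracted power over the kinetic power through the swept area A."""
    return P / (0.5 * rho * A * V**3)

def tip_speed_ratio(omega: float, R: float, V: float) -> float:
    """TSR: rotor tip speed (omega * R) over free-stream velocity."""
    return omega * R / V

def blockage_ratio(A_turbine: float, A_channel: float) -> float:
    """Fraction of the channel cross-section occupied by the turbine."""
    return A_turbine / A_channel
```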

Keywords: Savonius hydrokinetic turbine, blockage ratio, vertical axis turbine, power coefficient.

77 Detached-Eddy Simulation of Vortex Generator Jet Using Chimera Grids

Authors: Saqib Mahmood, Rolf Radespiel

Abstract:

This paper numerically analyses the effect of active flow control (AFC) by a vortex generator jet (VGJ) submerged in a boundary layer, using Chimera grids and Detached-Eddy Simulation (DES). The DES results are judged against Reynolds-Averaged Navier-Stokes (RANS) results and compared with experiments that showed an unsteady vortex motion downstream of the VGJ. The experimental results showed that the mechanism of embedding a longitudinal vortex structure in the main stream flow is quite effective in increasing the near-wall momentum on a separated aircraft wing. Simulating such a flow configuration together with the VGJ requires an efficient numerical approach, which is provided here by performing the DES over a flat plate using the DLR TAU code. The DES predictions identify the vortex region via a smooth hybrid length scale and reproduce the unsteady vortex motion observed in the experiments. The DES results also show that sufficient grid refinement in the vortex region resolves the turbulent scales downstream of the VGJ, the spatial vortex core position, and the nondimensional momentum coefficient RVx.

Keywords: VGJ, Chimera Grid, DES, RANS.

76 The Dependence of the Liquid Application on the Coverage of the Sprayed Objects in Terms of the Characteristics of the Sprayed Object during Spraying

Authors: Beata Cieniawska, Deta Łuczycka, Katarzyna Dereń

Abstract:

When assessing the quality of a spraying procedure, three indicators are used: the unevenness of the distribution of the sprayed liquid, the degree of coverage of the sprayed surfaces, and the deposition of the sprayed liquid. However, there is a lack of information on the relationships between these quality parameters. Therefore, research was carried out at the Institute of Agricultural Engineering of Wrocław University of Environmental and Life Sciences. The aim of the study was to determine the relationship between the degree of coverage of sprayed surfaces and the deposition of liquid with respect to the parametric characteristics of the protected plant, using selected single- and double-stream nozzles. Experiments were conducted under laboratory conditions. A self-propelled carrier of nozzles served as the sprayer, whereas the parametric characteristics of the plants were determined using artificial plants, as the ratio of the vertical projection area to the horizontal projection area. The results and their analysis showed strong and very strong correlations between the analyzed parameters in terms of the characteristics of the sprayed object.

Keywords: Degree of coverage, deposition of liquid, nozzle, spraying.

75 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based On Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as CCV (Color Coherence Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. Computing the dissimilarity between two images thus reduces to computing the distances between the N local histograms of each image, yielding N*N values; generally, the lowest value is used to rank images, i.e., the lowest value designates which sub-region indexes the images of the queried collection. In this paper, we examine the local histogram indexing method and compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which of the sub-region pairs to base the image index. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving the additional preprocessing step of segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram with the lowest distance to the query histograms.
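
A minimal sketch of the local-histogram scheme described above (non-overlapping blocks for brevity, though the abstract allows overlapping ones; an 8-bin-per-channel RGB histogram is an illustrative choice):

```python
import numpy as np

def local_histograms(img: np.ndarray, grid: int = 3, bins: int = 8) -> list:
    """Split an RGB image (H, W, 3) into grid*grid blocks; one normalized
    color histogram per block."""
    h, w = img.shape[:2]
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = img[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins,) * 3, range=((0, 256),) * 3)
            hist = hist.ravel()
            hists.append(hist / hist.sum())
    return hists

def min_local_distance(hists_a: list, hists_b: list) -> float:
    """N*N Euclidean distances; the lowest one ranks the image pair."""
    return min(np.linalg.norm(ha - hb) for ha in hists_a for hb in hists_b)
```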

Keywords: CBIR, Color Global Histogram, Color Local Histogram, Weak Segmentation, Euclidean Distance.

74 Estimating Marine Tidal Power Potential in Kenya

Authors: Lucy Patricia Onundo, Wilfred Njoroge Mwema

Abstract:

The rapidly diminishing fossil fuel reserves, their exorbitant cost, and the increasingly apparent negative effect of fossil fuels on climate change are a wake-up call to explore renewable energy. Wind, bio-fuel, and solar power have already become staples of the Kenyan electricity mix. The potential for electric power generation from marine tidal currents is enormous, with oceans covering more than 70% of the earth. However, marine tidal energy in Kenya has yet to be studied thoroughly, despite its promising, cyclic, reliable, and predictable nature and the vast energy contained within it. The high load factors resulting from the fluid properties and the predictable resource characteristics make marine currents particularly attractive for power generation and advantageous when compared to other sources. Global-level resource assessments and oceanographic literature and data have been compiled in an analysis of the technology-specific requirements for tidal energy technologies and the physical resources. Temporal variations in resource intensity as well as the differences between small-scale applications are considered.

Keywords: Energy data assessment, environmental legislation, renewable energy, tidal-in-stream turbines.

73 Rainfall–Runoff Simulation Using WetSpa Model in Golestan Dam Basin, Iran

Authors: M. R. Dahmardeh Ghaleno, M. Nohtani, S. Khaledi

Abstract:

Flood simulation and prediction is one of the most active research areas in surface water management. WetSpa is a distributed, continuous, physically based model with a daily or hourly time step that describes precipitation, runoff, and evapotranspiration processes for both simple and complex contexts. The model uses a modified rational method for runoff calculation, and runoff is routed along the flow path using the diffusion-wave equation, which depends on the slope, velocity, and flow route characteristics. The Golestan Dam Basin is located in Golestan province, Iran, between coordinates 55° 16´ 50" to 56° 4´ 25" E and 37° 19´ 39" to 37° 49´ 28" N. The catchment area is about 224 km2, with elevations ranging from 414 m at the outlet to 2856 m and an average slope of 29.78%. Results of the simulations show good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated the daily hydrographs and the maximum flow rate with accuracies of up to 59% and 80.18%, respectively.
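
For reference, the goodness-of-fit measure cited is the Nash-Sutcliffe efficiency, computed as below (1 is a perfect match between simulated and observed flows):

```python
import numpy as np

def nash_sutcliffe(q_obs, q_sim) -> float:
    """NSE = 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)."""
    q_obs = np.asarray(q_obs, dtype=float)
    q_sim = np.asarray(q_sim, dtype=float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
```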

Keywords: Watershed simulation, WetSpa, stream flow, flood prediction.

72 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform

Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba

Abstract:

Real-time image and video processing is in demand in many computer vision applications, e.g., video surveillance, traffic management, and medical imaging, and such video applications require high computational power. Thus, the optimal solution is collaboration between the CPU and hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Edge detection is one of the basic building blocks of video and image processing applications and a common block in the pre-processing phase of the image and video processing pipeline. Our approach offloads the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of a High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. CPU utilization drops, and the frame rate reaches 60 fps for a 1080p full-HD input video stream.
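
For orientation, the stages the accelerator implements correspond to the classic software pipeline below; this OpenCV sketch is a plain CPU baseline of the kind being offloaded, not the paper's HLS code (the file name is hypothetical):

```python
import cv2

cap = cv2.VideoCapture("input_1080p.mp4")  # hypothetical 1080p source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)  # noise suppression
    edges = cv2.Canny(blurred, 50, 150)            # gradient, NMS, hysteresis
cap.release()
```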

Keywords: High Level Synthesis, Canny edge detection, hardware accelerators, computer vision.

71 Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches

Authors: Shilpy Sharma

Abstract:

As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which index information on a specific domain, were proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords showing the topic of interest) [2]. Web search tools need better support for expressing one's information need and for returning high-quality search results. There appears to be a need for systems that reason under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are each sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is “an information access system that allows access to all the information on the web that is relevant to a particular domain.” The proposed work shows that, with the help of this approach, relevant data can be extracted with a minimum of queries fired by the user. It requires a small number of labeled data and a pool of unlabelled data to which the learning algorithm is applied to extract the required data.
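
A generic pool-based active learning round of the kind described (uncertainty sampling with a logistic regression relevance model; the oracle callback standing in for user feedback is hypothetical) might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, X_pool: np.ndarray, n_queries: int = 10) -> np.ndarray:
    """Indices of the pool documents the model is least sure about."""
    proba = model.predict_proba(X_pool)
    margin = np.abs(proba[:, 1] - 0.5)       # distance from the decision boundary
    return np.argsort(margin)[:n_queries]

def active_learning_round(X_lab, y_lab, X_pool, oracle):
    """Fit on the labeled set, query labels for the most uncertain
    unlabeled documents, and fold them in."""
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    ask = uncertainty_sampling(model, X_pool)
    y_new = oracle(ask)                      # e.g., user relevance feedback
    X_lab = np.vstack([X_lab, X_pool[ask]])
    y_lab = np.concatenate([y_lab, y_new])
    X_pool = np.delete(X_pool, ask, axis=0)
    return model, X_lab, y_lab, X_pool
```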

Keywords: Search engines, machine learning, information retrieval, active logic.

70 Examination of Flood Runoff Reproductivity for Different Rainfall Sources in Central Vietnam

Authors: Do Hoai Nam, Keiko Udo, Akira Mano

Abstract:

This paper combines different precipitation data sets with a distributed hydrological model in order to examine the flood runoff reproductivity of sparsely observed catchments. The precipitation data sets were obtained from rain-gage observations, satellite-based estimates (TRMM), and a numerical weather prediction model (NWP), and were then coupled with the super tank model. The case study was conducted in three basins (small, medium, and large) located in Central Vietnam. Calculated hydrographs based on ground-observed rainfall showed the best fit to the measured stream flow, while those obtained from TRMM and NWP showed high uncertainty in peak discharges. However, hydrographs calculated using the adjusted rain field represent a promising alternative for applying TRMM and NWP to flood modeling in sparsely observed catchments, especially for extending the forecast lead time.

Keywords: Flood forecast, rainfall-runoff model, satellite rainfall estimate, numerical weather prediction, quantitative precipitation forecasting.

69 Modeling and Parametric Study for CO2/CH4 Separation Using Membrane Processes

Authors: Faizan Ahmad, Lau Kok Keong, Azmi Mohd. Shariff

Abstract:

The upgrading of low-quality crude natural gas (NG) is attracting interest due to the high demand for pipeline-grade gas in recent years. Membrane processes are a commercially proven technology for the removal of impurities such as carbon dioxide from NG. In this work, a cross-flow mathematical model is incorporated into ASPEN HYSYS as a user-defined unit operation in order to design membrane systems for CO2/CH4 separation. The effect of operating conditions (such as feed composition and pressure) and membrane selectivity on the design parameters (methane recovery and total membrane area required for the separation) has been studied for different design configurations. These configurations include single-stage (with and without recycle) and double-stage membrane systems (with and without permeate or retentate recycle). It is shown that methane recovery can be improved by recycling the permeate or retentate stream as well as by using double-stage membrane systems. The ASPEN HYSYS user-defined unit operation proposed in this study has the potential to be applied to complex membrane system design and optimization.
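
The building block of such cross-flow models is the solution-diffusion permeation equation for each component; a minimal sketch (a full model integrates this along the membrane area with changing compositions):

```python
def permeate_flux(permeance: float, p_feed: float, x: float,
                  p_perm: float, y: float) -> float:
    """Local flux of one component: J_i = P_i * (p_feed * x_i - p_perm * y_i),
    driven by the partial-pressure difference across the membrane."""
    return permeance * (p_feed * x - p_perm * y)

def selectivity(permeance_co2: float, permeance_ch4: float) -> float:
    """Ideal CO2/CH4 selectivity: the ratio of permeances."""
    return permeance_co2 / permeance_ch4
```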

Keywords: CO2/CH4 separation, membrane process, membrane modeling, natural gas processing.

68 Water Boundary Layer Flow Over Rotating Sphere with Mass Transfer

Authors: G. Revathi, P. Saikrishnan

Abstract:

An analysis is performed to study the influence of nonuniform double-slot suction on steady laminar boundary layer flow over a rotating sphere when fluid properties such as viscosity and Prandtl number are inverse linear functions of temperature. Nonsimilar solutions have been obtained from the starting point of the streamwise co-ordinate to the exact point of separation. The difficulties arising at the starting point of the streamwise co-ordinate, at the edges of the slots, and at the point of separation have been overcome by applying an implicit finite difference scheme in combination with the quasi-linearization technique and an appropriate selection of finer step sizes along the streamwise direction. The present investigation shows that the point of ordinary separation can be delayed by nonuniform double-slot suction if the mass transfer rate is increased and if the slots are positioned further downstream. In addition, the investigation reveals that double-slot suction is more effective than single-slot suction in delaying ordinary separation. As the rotation parameter increases, the point of separation moves in the upstream direction.

Keywords: Boundary layer, suction, mass transfer, rotating sphere.

67 Treatment of Wastewater from Wet Scrubbers in Secondary Lead Smelters for Recycling and Lead Recovery

Authors: Mahmoud A. Rabah

Abstract:

The present study presents a method to recover lead metal from the wastewater of wet scrubbers in secondary lead smelters. The wastewater is loaded with 42,000 ppm of insoluble lead compounds (TSP), submicron in diameter. The method uses a cationic polyfloc solution to flocculate these colloidal solids before press filtration; the polymer solution is injected into the wastewater stream in a countercurrent flow design. The study demonstrates the effect of polymer dose, temperature, pH, wastewater flow velocity, and different filtration media on the extent of filtration. Results indicate that the filtration rate (fr), the quality of the purified water, the purifying efficiency (fe), and the floc diameter decrease regularly with increasing mass flow rate and velocity, up to turbulence at 0.5 m/s; laminar flow favors flocculation. A polyfloc concentration of 0.75–1.25 g/m3 of wastewater is convenient. Increasing the wastewater temperature and the pneumatic filtration pressure enhances fr, whereas high pH values impair floc formation and promote degradation of the filtration fabric. The overall efficiency of the method amounts to 93.2%. Lead metal was recovered from the filter cake using carbon as a reducing agent at 900°C.

Keywords: Wastewater, wet scrubbers, filtration, secondary lead.

66 Innovation in Lean Thinking to Achieve Rapid Construction

Authors: Muhamad Azani Yahya, Vikneswaran Munikanan, Mohammed Alias Yusof

Abstract:

Lean thinking holds potential for improving the construction sector, and therefore it is a concept that should be adopted by construction sector players and academicians. Building on that, a learning process regarding lean for construction sector players should be on the agenda as they prepare for their careers. Lean principles offer opportunities for reducing lead times, eliminating non-value-adding activities, and reducing variability, and are facilitated by methods such as pull scheduling, simplified operations, and buffer reduction. Rapid construction is a systematic approach to enhancing efficiency and delivering a project in reduced time, while lean is the continuous process of eliminating waste, meeting or exceeding all customer requirements, focusing on the entire value stream, and pursuing perfection in the execution of a constructed project. The methodology presented is validated through literature, interviews, and a questionnaire. The results show that the majority of construction sector players are unfamiliar with lean thinking, and they agreed that it can improve the construction process flow. With this background knowledge established and identified, best practices and recommended actions are drawn.

Keywords: Construction improvement, rapid construction, time reduction, lean construction.

65 Mooring Analysis of Duct-Type Tidal Current Power System in Shallow Water

Authors: Chul H. Jo, Do Y. Kim, Bong K. Cho, Myeong J. Kim

Abstract:

The exhaustion of oil reserves and the environmental pollution from the use of fossil fuels are increasing. Tidal current power (TCP) has been proposed as an alternative energy source because of its predictability and reliability. By applying a duct and a single point mooring (SPM) system, a TCP device can amplify the generated power and keep its position properly. Because the generated power is proportional to the cube of the current stream velocity, amplifying the current speed by applying a duct to a TCP system is an effective way to improve the efficiency of the power device. An SPM system can be applied at any water depth and is highly cost-effective; simple installation and maintenance procedures are further merits. In this study, we designed an SPM system for a duct-type TCP device for use in shallow water. Motions of the duct are investigated to obtain the response amplitude operator (RAO) as the magnitude of the transfer function. Parameters affecting the stability of the SPM system, such as the fairlead departure angle, the current velocity, and the number of clamp weights, are analyzed and/or optimized. The commercial software packages Wadam and OrcaFlex are used to design the mooring line.

Keywords: Mooring design, parametric analysis, response amplitude operator, single point mooring.

64 Computational Method for Annotation of Protein Sequence According to Gene Ontology Terms

Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias

Abstract:

Annotation of a protein sequence is pivotal for understanding its function. The accuracy of manual annotation provided by curators is still questioned for its lesser evidence strength, and manual annotation remains a hard and time-consuming task. A number of computational methods and tools have been developed to tackle this challenge. However, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive, blind sequence similarity searches such as the Basic Local Alignment Search Tool. This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. The method is based entirely on Gene Ontology data and annotations. Two problems had to be solved to realize this method. The first is splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easy to access and process, so that these files can be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second involves searching for a set of Gene Ontology terms semantically similar to a given query. The macro- and micro-level problems involved, their solutions, and the objective of this study are described. The paper also describes protein sequence annotation and the Gene Ontology, presents the methodology of this study and the Gene Ontology-based protein sequence annotation tool, namely extended UTMGO, and introduces its basic version, a Gene Ontology browser based on semantic similarity search.
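
One simple graph-based notion of semantic similarity between GO terms, sketched below, is the Jaccard overlap of their ancestor sets in the is_a hierarchy; this is an illustrative measure, not necessarily the one implemented in extended UTMGO:

```python
def go_ancestors(term: str, parents: dict) -> set:
    """All ancestors of a GO term (plus the term itself), given a
    mapping term -> list of direct parents."""
    seen, stack = {term}, [term]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def go_similarity(t1: str, t2: str, parents: dict) -> float:
    """Jaccard overlap of ancestor sets: 1.0 for identical terms,
    approaching 0 for unrelated ones."""
    a, b = go_ancestors(t1, parents), go_ancestors(t2, parents)
    return len(a & b) / len(a | b)
```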

Keywords: automatic clustering, bioinformatics tool, gene ontology, protein sequence annotation, semantic similarity search

63 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data to predict future traffic speeds would benefit applications such as car navigation systems, building a predictive model for every link becomes a nontrivial job when the number of links in the network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building; on the other hand, k-NN takes a long time to make a prediction because it must search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different feature sets, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream of it. The performance of these models is compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
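
A minimal sketch of the windowed k-NN idea (the one-week window of 2016 five-minute records and the feature layout are illustrative assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def predict_speed(history: np.ndarray, query: np.ndarray,
                  window: int = 2016, k: int = 10) -> float:
    """history: rows of [features..., next-interval speed], ordered in time.
    Search only the most recent `window` rows (2016 = one week of
    5-minute intervals) instead of the whole database."""
    recent = history[-window:]
    X, y = recent[:, :-1], recent[:, -1]
    model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
    return float(model.predict(query.reshape(1, -1))[0])
```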

Keywords: Big data, k-NN, machine learning, traffic speed prediction.

62 Simulation of Fluid Flow and Heat Transfer in Inclined Cavity using Lattice Boltzmann Method

Authors: Arash Karimipour, A. Hossein Nezhad, E. Shirani, A. Safaei

Abstract:

In this paper, the Lattice Boltzmann Method (LBM) is used to study laminar flow with mixed convection heat transfer inside a two-dimensional inclined lid-driven rectangular cavity with aspect ratio AR = 3. The bottom wall of the cavity is maintained at a lower temperature than the top lid, and its vertical walls are assumed insulated. The motion of the top lid drives the fluid motion inside the cavity. The inclination of the cavity causes both the horizontal and vertical velocity components to be affected by the buoyancy force. To include this effect, the calculation procedure for the macroscopic properties in the LBM is changed and the collision term of the Boltzmann equation is modified. A computer program is developed to simulate this problem using the BGK model of the lattice Boltzmann method. The effects of variations in the Richardson number and the inclination angle on the thermal and flow behavior of the fluid inside the cavity are investigated. The results are presented as velocity and temperature profiles, stream function contours, and isotherms. It is concluded that the LBM has good potential for simulating mixed convection heat transfer problems.
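
The inclination enters through the body-force term: under a Boussinesq approximation, the buoyancy force is split between both velocity components. A minimal sketch of that decomposition (the Boussinesq form is an assumption consistent with, but not spelled out in, the abstract):

```python
import numpy as np

def buoyancy_components(T: np.ndarray, T_ref: float,
                        g_beta: float, phi_deg: float):
    """Boussinesq buoyancy g*beta*(T - T_ref) resolved along the cavity
    axes for inclination angle phi; both components are nonzero when
    the cavity is tilted, which is what modifies the collision term."""
    phi = np.radians(phi_deg)
    f = g_beta * (T - T_ref)
    return f * np.sin(phi), f * np.cos(phi)   # (along-lid, wall-normal)
```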

Keywords: gravity, inclined lid driven cavity, lattice Boltzmann method, mixed convection.

61 Evaluating Portfolio Performance by Highlighting Network Property and the Sharpe Ratio in the Stock Market

Authors: Zahra Hatami, Hesham Ali, David Volkman

Abstract:

Selecting a portfolio for investing is a crucial decision for individuals and legal entities. In the last two decades, with economic globalization, a stream of financial innovations has rushed to the aid of financial institutions, yet selecting stocks for a portfolio remains a challenging task for investors. This study creates a financial network to identify optimal portfolios using network centrality metrics. It presents a community detection technique for superior stocks that can be described as an optimal stock portfolio for investors. Using the advantages of the network and the properties of the extracted communities, a group of stocks was selected for each of several time periods, and the performance of the optimal portfolios was compared to a well-known index, the S&P 500. Their Sharpe ratios were calculated over time to evaluate their profitability for decision-making. The analysis shows that portfolios selected from stocks with low centrality measurements can outperform the market but have a lower Sharpe ratio than stocks with high centrality scores. In other words, stocks with low centralities can outperform the S&P 500 yet have a lower Sharpe ratio than highly central stocks.
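
For reference, the risk-adjusted measure used to compare the portfolios is computed as below (annualization over 252 trading days is a common convention, assumed here):

```python
import numpy as np

def sharpe_ratio(returns, risk_free_annual: float = 0.0,
                 periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio: mean excess return per unit of volatility."""
    r = np.asarray(returns, dtype=float)
    excess = r - risk_free_annual / periods_per_year
    return float(np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1))
```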

Keywords: Portfolio management performance, network analysis, centrality measurements, Sharpe ratio.

60 A Fast Neural Algorithm for Serial Code Detection in a Stream of Sequential Data

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

In recent years, fast neural networks for object/face detection have been introduced, based on cross-correlation in the frequency domain between the input matrix and the hidden weights of the neural network. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved in [10] that, for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural network and the input matrix must be symmetric; this condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections to the cross-correlation equations (given in [13,15,16]) that compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into symmetric form. Moreover, the new idea is applied to increase the speed of neural networks when processing complex values. Simulation results after these corrections, obtained using MATLAB, confirm the theoretical computations.
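
The core operation is cross-correlation evaluated through the frequency domain; a minimal sketch of the one-dimensional case (the papers operate on matrices, but the identity is the same):

```python
import numpy as np

def cross_correlation_fft(signal: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Circular cross-correlation via corr = IFFT(FFT(x) * conj(FFT(w))),
    replacing the sliding dot product with O(n log n) transforms."""
    n = len(signal)
    w = np.zeros(n)
    w[:len(weights)] = weights   # zero-pad the kernel to the signal length
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(w))))
```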

Keywords: Fast Code/Data Detection, Neural Networks, Cross Correlation, real/complex values.

59 Lean Environmental Management Integration System (LEMIS) Framework Development

Authors: Puvanasvaran, A. P., Suresh V., N. Norazlin

Abstract:

The Lean Environmental Management Integration System (LEMIS) framework integrates the core elements of lean with ISO 14001. Curiosity about the relationship between continuous improvement and the sustainability of lean implementation motivated this study of LEMIS. The characteristics of the ISO 14001 standard clauses and the core elements of lean principles were explored through past studies and literature reviews, and a survey was carried out among ISO 14001-certified companies to examine continual improvement under the ISO 14001 standard. The study found a significant, positive relationship between the lean principles of value, value stream, flow, pull, and perfection and the ISO 14001 requirements. LEMIS is significant in supporting continuous improvement and sustainability, and the integrated system can be implemented in any manufacturing company. It raises awareness of why organizations need to sustain their environmental management systems, while the lean principles can be adopted to streamline the daily activities of the company. Throughout the study, it was shown that there is no sacrifice or trade-off between lean principles and ISO 14001 requirements. The framework developed in this study can be further refined in the future, especially the method of crossing each sub-requirement of the ISO 14001 standard with the core elements of the lean principles.

Keywords: LEMIS, ISO 14001, integration, framework.

58 Separation of Polyphenolics and Sugar by Ultrafiltration: Effects of Operating Conditions on Fouling and Diafiltration

Authors: Diqiao S. Wei, M. Hossain, Zaid S. Saleh

Abstract:

Polyphenolics and sugars are components of many fruit juices. In this work, the performance of ultrafiltration (UF) for separating phenolic compounds from apple juice was studied by performing batch experiments in a membrane module with an area of 0.1 m2, fitted with a regenerated cellulose membrane of 1 kDa MWCO. The effects of various operating conditions were determined: transmembrane pressure (3, 4, 5 bar), temperature (30, 35, 40 ºC), pH (2, 3, 4, 5), feed concentration (3, 5, 7, 10, 15 ºBrix apple juice), and feed flow rate (1, 1.5, 1.8 L/min). The optimum operating conditions were: transmembrane pressure 4 bar, temperature 30 ºC, feed flow rate 1-1.8 L/min, pH 3, and 10 ºBrix apple juice. After ultrafiltration under these conditions, the concentration of polyphenolics in the retentate was increased by a factor of up to 2.7, with up to 70% recovered in the permeate and approximately 20% of the sugar in that stream. Applying diafiltration (adding water to the concentrate) can restore the flux, which had decreased due to fouling, by a factor of 1.5. A material balance performed on the process showed the amount of deposits on the membrane and the extent of fouling in the system. In conclusion, ultrafiltration has been demonstrated as a potential technology for separating polyphenolics and sugars from their mixtures and can be applied to remove sugars from fruit juice.

Keywords: Fouling, membrane, polyphenols, ultrafiltration.

57 An Analysis of Digital Forensic Laboratory Development among Malaysia’s Law Enforcement Agencies

Authors: Sarah K. Taylor, Miratun M. Saharuddin, Zabri A. Talib

Abstract:

Cybercrime is on the rise, yet many Law Enforcement Agencies (LEAs) in Malaysia have no Digital Forensics Laboratory (DFL) to assist them in the acquisition and analysis of digital evidence. Of the estimated 30 LEAs in Malaysia, only eight own a DFL, all of them concentrated in the capital and none at the state level, so LEAs still depend on the national DFL (CyberSecurity Malaysia) even for simple and straightforward cases. A survey was conducted among the LEAs in Malaysia that own a DFL to understand the history of establishing their DFL, the challenges they faced, and the significance of the DFL to their case investigations. The results showed that while some LEAs faced no challenge in establishing a DFL, others took seven to ten years to do so; the reason was the difficulty of convincing their management, given the high costs involved. The results also revealed that with the establishment of a DFL, LEAs were better able to obtain forensic results quickly and to meet their agency's timeline expectations. LEAs were also able to obtain more meaningful forensic results in cases requiring niche expertise, compared with sending cases to the national DFL. In addition, cases are getting more complex, and hence a continuous stream of budget for equipment and training is indispensable. It is hoped that the results of this study will help other LEAs justify to their management the benefits of establishing an in-house DFL.

Keywords: Digital forensics, digital forensics laboratory, digital evidence, law enforcement agency.

56 RFU Based Computational Unit Design For Reconfigurable Processors

Authors: M. Aqeel Iqbal

Abstract:

Fully customized hardware provides high performance and low power consumption by specializing tasks in hardware, but it lacks design flexibility, since any change requires re-design and re-fabrication. Software-based solutions operate through software instructions, achieving great flexibility from the easy development and maintenance of code, but the execution of instructions introduces a high overhead in performance and area consumption. In the past few decades, the reconfigurable computing domain has emerged, which overcomes the traditional trade-off between flexibility and performance and achieves high performance while maintaining good flexibility. The dramatic gains in chip performance and design flexibility achieved by reconfigurable computing systems depend greatly on the design of their computational units and the integration of those units with reconfigurable logic resources; the computational unit of any reconfigurable system plays a vital role in defining its strength. In this paper, an RFU-based computational unit design is presented, using tightly coupled, multi-threaded reconfigurable cores. The proposed design has been simulated for VLIW-based architectures, and a high gain in performance has been observed compared to conventional computing systems.

Keywords: Configuration Stream, Configuration overhead, Configuration Controller, Reconfigurable devices.

55 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement

Authors: Magdi Elmessiry, Adel Elmessiry

Abstract:

The fashion industry represents a significant portion of the global gross domestic product, yet it is plagued by cheap imitators that infringe on trademarks, destroying the industry's hard work and investment. While the copycats are eventually found and stopped, by then the damage has already been done: sales are missed and direct and indirect jobs are lost. The infringer thrives on two main facts: the time it takes to be discovered, and the lack of tracking technologies that can help the consumer distinguish fakes. Blockchain is a new, emerging technology that provides a distributed, encrypted, immutable, and fault-resistant ledger, and it presents a ripe technology for resolving the infringement epidemic facing the fashion industry. The significance of this study is a new approach that leverages state-of-the-art blockchain technology coupled with artificial intelligence to create a framework addressing the fashion infringement problem; it transforms the current focus on legal enforcement, which is difficult at best, into consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product to empower the customer with a near-real-time query system. This combination strengthens the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main finding of this study is that implementing this approach can delay the penetration of fake products into the original product's market, allowing the original product time to take advantage of the market. The resulting shift in fake adoption reduces returns, which impedes the copycat market and moves the emphasis to original product innovation.

Keywords: Fashion, infringement, Blockchain, artificial intelligence, textiles supply.

54 Ultimately Bounded Takagi-Sugeno Fuzzy Management in Urban Traffic Stream Mechanism: Multi-Agent Modeling Approach

Authors: Reza Ghasemi, Negin Amiri Hazaveh

Abstract:

In this paper, a control methodology is proposed based on the selection of the traffic light type and the period of the green phase to achieve an optimum balance at intersections. This balance should be flexible with respect to variation over time and the randomness of traffic situations; the goals of the proposed method are to reduce traffic volume, the average delay per vehicle, and the incidence of car crashes. The proposed method was investigated at the intersection level through appropriate timing of traffic lights, by sampling a multi-agent system consisting of a large number of intersections, each of which is considered an independent agent that exchanges information with the others; the stability of each agent is established separately. Robustness against uncertainties, scalability, and stability of the closed-loop overall system are the main merits of the proposed methodology. The simulation results show that the fuzzy intelligent controller in this multi-agent system, a Takagi-Sugeno (TS) fuzzy controller, is more effective than fixed-time scheduling and reduces vehicle queue lengths.
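
A minimal sketch of a TS fuzzy rule base of the kind used for green-phase timing; the membership parameters and rule consequents below are purely illustrative, not the paper's tuned controller:

```python
import numpy as np

def gaussian_mf(x: float, center: float, sigma: float) -> float:
    """Gaussian membership grade of x in a fuzzy set."""
    return float(np.exp(-0.5 * ((x - center) / sigma) ** 2))

def ts_green_time(queue_len: float) -> float:
    """Zero-order TS rules: queue length (vehicles) -> green duration (s)."""
    rules = [  # (mf center, mf sigma, consequent green time)
        (5.0, 5.0, 15.0),    # short queue  -> short green
        (20.0, 8.0, 35.0),   # medium queue -> medium green
        (45.0, 12.0, 60.0),  # long queue   -> long green
    ]
    w = np.array([gaussian_mf(queue_len, c, s) for c, s, _ in rules])
    z = np.array([g for _, _, g in rules])
    return float(w @ z / w.sum())   # weighted-average defuzzification
```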

Keywords: Fuzzy intelligent controller, traffic-light control, multi-agent systems, state space equations, stability.
