Search results for: Compositional techniques
Machine Scoring Model Using Data Mining Techniques
Authors: Wimalin S. Laosiritaworn, Pongsak Holimchayachotikul
Abstract:
This article proposes a methodology for computer numerical control (CNC) machine scoring. The case study company is a manufacturer of hard disk drive parts in Thailand. In this company, samples of parts manufactured by CNC machines are taken randomly for quality inspection. These inspection data are used to decide whether to shut down a machine that shows a tendency to produce parts that are out of specification. A large amount of data is produced in this process, and data mining can be a very useful technique for analyzing it. In this research, data mining techniques were used to construct a machine scoring model called the 'machine priority assessment model' (MPAM). This model helps ensure that machines with a higher risk of producing defective parts are inspected before those with lower risk. If a defect-prone machine is identified sooner, defective parts and rework can be reduced, hence improving overall productivity. The results showed that the proposed method can be successfully implemented and that approximately 351,000 baht of opportunity cost could have been saved in the case study company.
Keywords: Computer Numerical Control, Data Mining, Hard Disk Drive.
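The abstract does not specify which data mining algorithm underlies MPAM, so the following is only a sketch of the general idea: train a classifier on historical inspection records, then rank machines by predicted defect risk so the riskiest is inspected first. The feature names and the choice of a decision tree are assumptions, not the authors' method.

```python
# Illustrative sketch only: rank CNC machines by predicted defect risk.
# Feature names and the decision-tree choice are assumptions, not the paper's MPAM.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical inspection records: [utilization, mean_deviation, rework_rate], scaled to [0, 1]
X = rng.random((200, 3))
y = (X[:, 1] > 0.7).astype(int)  # 1 = machine produced out-of-spec parts

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Score the current state of each machine and inspect the riskiest first.
machines = {"M01": [0.4, 0.80, 0.3], "M02": [0.9, 0.20, 0.0], "M03": [0.1, 0.75, 0.5]}
scores = {m: model.predict_proba([f])[0, 1] for m, f in machines.items()}
for machine, risk in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(machine, round(risk, 2))
```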
Numerical Investigation of Multiphase Flow in Pipelines
Authors: Gozel Judakova, Markus Bause
Abstract:
We present and analyze reliable numerical techniques for simulating complex flow and transport phenomena related to natural gas transportation in pipelines. Such problems are of high interest in the field of petroleum and environmental engineering. Modeling and understanding natural gas flow and transformation processes during transportation is important for the sake of physical realism and for the design and operation of pipeline systems. In our approach, a two-fluid flow model based on a system of coupled hyperbolic conservation laws is considered for describing natural gas flow undergoing hydratization. The accurate numerical approximation of two-phase gas flow remains a subject of strong interest in the scientific community. Such hyperbolic problems are characterized by solutions with steep gradients or discontinuities, and their approximation by standard finite element techniques typically gives rise to spurious oscillations and numerical artefacts. Recently, stabilized and discontinuous Galerkin finite element techniques have attracted researchers' interest; they are highly adapted to the hyperbolic nature of our two-phase flow model. We present a streamline upwind Petrov-Galerkin approach and a discontinuous Galerkin finite element method for the numerical approximation of our flow model of two coupled systems of Euler equations. The efficiency and reliability of stabilized continuous and discontinuous finite element methods for the approximation are then carefully analyzed, and the potential of either class of numerical schemes is investigated. In particular, standard benchmark problems of two-phase flow, such as the shock tube problem, are used for the comparative numerical study.
Keywords: Discontinuous Galerkin method, Euler system, inviscid two-fluid model, streamline upwind Petrov-Galerkin method, two-phase flow.
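For orientation, a single system of Euler equations in one space dimension has the hyperbolic conservation-law form below; the inviscid two-fluid model referenced in the abstract couples two such systems through exchange terms the abstract does not detail, so only the generic single-phase form is shown.

```latex
\partial_t U + \partial_x F(U) = 0, \qquad
U = \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}, \qquad
F(U) = \begin{pmatrix} \rho u \\ \rho u^{2} + p \\ u\,(E + p) \end{pmatrix},
```

with density \rho, velocity u, total energy E, and pressure p supplied by an equation of state. Solutions of such systems admit shocks, which is why stabilized (SUPG) and discontinuous Galerkin discretizations are considered.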
Separating Permanent and Induced Magnetic Signature: A Simple Approach
Authors: O. J. G. Somsen, G. P. M. Wagemakers
Abstract:
Magnetic signature detection provides sensitive detection of metal objects, especially in the natural environment. Our group is developing a tabletop setup for measuring the magnetic signatures of various small and model objects. A particular issue is the separation of permanent and induced magnetization. While the latter depends only on the composition and shape of the object, the former also depends on its magnetization history. With common deperming techniques, a significant permanent signature may still remain, which confuses measurements of the induced component. We investigate a basic technique for separating the two. Measurements were done by moving the object along an aluminum rail while the three field components were recorded by a detector attached near the center. This is done first with the rail parallel to the Earth's magnetic field and then in the anti-parallel orientation. The reversal changes the sign of the induced, but not the permanent, magnetization, so that the two can be separated. Our preliminary results on a small iron block show excellent reproducibility. A considerable permanent magnetization was indeed present, resulting in a complex asymmetric signature. After separation, a much more symmetric induced signature was obtained that can be studied in detail and compared with theoretical calculations.
Keywords: Magnetic signature, data analysis, magnetization, deperming techniques.
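The separation arithmetic implied by the reversal argument is worth making explicit. Writing B for a recorded field component in the two orientations:

```latex
B_{\parallel} = B_{\mathrm{perm}} + B_{\mathrm{ind}}, \qquad
B_{\mathrm{anti}} = B_{\mathrm{perm}} - B_{\mathrm{ind}}
\;\Longrightarrow\;
B_{\mathrm{perm}} = \tfrac{1}{2}\,(B_{\parallel} + B_{\mathrm{anti}}), \qquad
B_{\mathrm{ind}} = \tfrac{1}{2}\,(B_{\parallel} - B_{\mathrm{anti}}).
```

The averages and half-differences are taken per field component and per position along the rail.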
Drone On-time Obstacle Avoidance for Static and Dynamic Obstacles
Authors: Herath MPC Jayaweera, Samer Hanoun
Abstract:
Path planning for on-time obstacle avoidance is an essential and challenging task that enables drones to achieve safe operation in any application domain. The level of challenge increases significantly when the drone is following a ground mobile entity (GME), mainly due to the change in direction and magnitude of the GME's velocity in dynamic and unstructured environments. Force field techniques are the most widely used obstacle avoidance methods due to their simplicity, ease of use, and potential to be adopted for three-dimensional dynamic environments. However, existing force field obstacle avoidance techniques suffer many drawbacks, including their tendency to generate longer routes when obstacles are sideways of the drone's route, poor ability to find the shortest flyable path, propensity to fall into local minima, production of non-smooth paths, and a high failure rate in the presence of symmetrical obstacles. To overcome these shortcomings, this paper proposes an on-time three-dimensional obstacle avoidance method for drones to effectively and efficiently avoid dynamic and static obstacles in unknown environments while pursuing a GME. This on-time obstacle avoidance technique generates velocity waypoints for an obstacle-free and efficient path based on the shape of the encountered obstacles. The method can be utilized on most types of drones that have basic distance measurement sensors and autopilot-supported flight controllers. The proposed obstacle avoidance technique is validated and evaluated against existing force field methods for different simulation scenarios in Gazebo and ROS-supported PX4 SITL. The simulation results show that the proposed obstacle avoidance technique outperforms the existing force field techniques and is better suited for real-world applications.
Keywords: Drones, force field methods, obstacle avoidance, path planning.
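As context for the force-field baseline the paper compares against, below is a minimal sketch of the classical artificial potential field rule: an attractive velocity toward the goal plus a repulsive velocity away from nearby obstacles. The gains and influence radius are arbitrary illustration values; this is the baseline, not the authors' proposed method.

```python
# Minimal classical potential-field baseline (not the paper's method).
import numpy as np

K_ATT, K_REP, INFLUENCE_R = 1.0, 0.5, 3.0  # illustration values

def velocity_command(pos, goal, obstacles):
    """Attractive pull toward the goal plus repulsive push from close obstacles."""
    v = K_ATT * (goal - pos)  # attractive component
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < INFLUENCE_R:
            # repulsion grows as the obstacle gets closer
            v += K_REP * (1.0 / d - 1.0 / INFLUENCE_R) * (pos - obs) / d**2
    return v

pos = np.array([0.0, 0.0, 2.0])
goal = np.array([10.0, 0.0, 2.0])
obstacles = [np.array([5.0, 0.2, 2.0])]
print(velocity_command(pos, goal, obstacles))
```

The local-minimum failure the abstract mentions occurs when the attractive and repulsive terms cancel, e.g., for obstacles placed symmetrically about the line to the goal.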
Radiation Usage Impact on Anti-Nutritional Compounds (Antitrypsin and Phytic Acid) of Livestock and Poultry Foods
Authors: Mohammad Khosravi, Ali Kiani, Behroz Dastar, Parvin Showrang
Abstract:
A review was carried out on important anti-nutritional compounds of livestock and poultry foods and the effect of radiation usage on them. Nowadays, with advancement in technology, different methods have been considered for the optimum usage of nutrients in livestock and poultry foods. Steaming, extruding, pelleting, and the use of chemicals are the most common and popular methods in food processing. The use of radiation in food processing research in the livestock and poultry industry is currently highly regarded. Ionizing (electron, gamma) and non-ionizing beams (microwave and infrared) are the most usable rays in animal food processing. In recent research, these beams have been used to remove or reduce anti-nutritional factors and microbial contamination and to improve the digestibility of nutrients in poultry and livestock food. The evidence presented will help researchers to recognize techniques of relevance to them. Simplification of some of these techniques, especially in developing countries, must be addressed so that they can be used more widely.
Keywords: Antitrypsin, gamma rays, anti-nutritional components, phytic acid, radiation.
Efficient Large Numbers Karatsuba-Ofman Multiplier Designs for Embedded Systems
Authors: M. Machhout, M. Zeghid, W. El Hadj Youssef, B. Bouallegue, A. Baganne, R. Tourki
Abstract:
Long-number multiplications (n ≥ 128 bits) are a primitive in most cryptosystems. They can be performed better by using the Karatsuba-Ofman technique. This algorithm is easy to parallelize on workstation networks and on distributed memory, and it is known as the practical method of choice. Multiplying long numbers using the Karatsuba-Ofman algorithm is fast but highly recursive. In this paper, we propose different designs implementing the Karatsuba-Ofman multiplier. A mixture of sequential and combinational system design techniques involving pipelining is applied to our proposed designs, so that multiplying large numbers can be adapted flexibly to time, area, and power criteria. In computationally and area-constrained embedded systems, such as smart cards and mobile phones, multiplication of finite field elements can thus be achieved more efficiently. The proposed designs are compared to other existing techniques. Mathematical models (Area(n), Delay(n)) of our proposed designs are also elaborated and evaluated on different FPGA devices.
Keywords: Finite field, Karatsuba-Ofman, long numbers, multiplication, mathematical model, recursivity.
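The paper's contribution is hardware designs, but the recursion itself is easy to state in software. A minimal sketch: split each operand in half and replace four half-size multiplications with three, using the identity xy = a·2^(2h) + c·2^h + b with c = (x_hi + x_lo)(y_hi + y_lo) - a - b.

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba-Ofman multiplication: 3 half-size multiplies instead of 4."""
    if x < (1 << 32) or y < (1 << 32):
        return x * y  # base case: operands small enough for a direct multiply
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    a = karatsuba(x_hi, y_hi)                        # product of high parts
    b = karatsuba(x_lo, y_lo)                        # product of low parts
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b  # cross terms via one multiply
    return (a << (2 * half)) + (c << half) + b

assert karatsuba(2**200 + 12345, 2**190 + 67890) == (2**200 + 12345) * (2**190 + 67890)
```

The pipelined FPGA realizations and the Area(n)/Delay(n) models are the paper's subject and are not reproduced here.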
A Novel Optimal Setting for Directional over Current Relay Coordination using Particle Swarm Optimization
Authors: D. Vijayakumar, R. K. Nema
Abstract:
Over Current Relays (OCRs) and Directional Over Current Relays (DOCRs) are widely used for the protection of radial and ring sub-transmission systems and for distribution systems. All previous work formulates the DOCR coordination problem either as a Non-Linear Programming (NLP) problem for TDS and Ip, or as a Linear Programming (LP) problem for TDS; recently, a social-behavior-based technique, Particle Swarm Optimization (PSO), has been introduced to this work. In this paper, a Modified Particle Swarm Optimization (MPSO) technique is discussed for the optimal setting of DOCRs in power systems, treating the search for the Ip values of the relays as a non-linear programming problem and the search for the TDS settings as a linear programming problem. The calculation of the Time Dial Setting (TDS) and the pickup current (Ip) setting of the relays is the core of the coordination study. The PSO technique is considered a realistic and powerful solution scheme for obtaining the global or quasi-global optimum in optimization problems.
Keywords: Directional over current relays, Optimization techniques, Particle swarm optimization, Power system protection.
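For reference, a minimal generic PSO loop is sketched below; it minimizes an arbitrary objective and is not the authors' MPSO variant or the DOCR coordination objective. The inertia and acceleration coefficients are common textbook values.

```python
# Generic PSO sketch (not the paper's MPSO or its DOCR objective).
import random

def pso(f, dim, n=30, iters=200, lo=-10.0, hi=10.0, w=0.7, c1=1.5, c2=1.5):
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]          # best position seen by each particle
    gbest = min(pbest, key=f)          # best position seen by the swarm
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
        gbest = min(pbest, key=f)
    return gbest

# Toy usage: minimize a sphere function.
print(pso(lambda p: sum(t * t for t in p), dim=3))
```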
Predicting the Three Major Dimensions of the Learner's Emotions from Brainwaves
Authors: Alicia Heraz, Claude Frasson
Abstract:
This paper investigates how the use of machine learning techniques can significantly predict the three major dimensions of a learner's emotions (pleasure, arousal, and dominance) from brainwaves. The study adopted an experiment in which participants were exposed to a set of pictures from the International Affective Picture System (IAPS) while their electrical brain activity was recorded with an electroencephalogram (EEG). The pictures had already been rated in a previous study via the affective rating system Self-Assessment Manikin (SAM) to assess the three dimensions of pleasure, arousal, and dominance. For each picture, we took the mean of these values over all subjects used in that previous study and associated them with the recorded brainwaves of the participants in our study. Correlation and regression analyses confirmed the hypothesis that brainwave measures can significantly predict emotional dimensions. This can be very useful in the case of impassive, taciturn, or disabled learners. Standard classification techniques were used to assess the reliability of the automatic detection of the learners' three major dimensions from the brainwaves. We discuss the results and the pertinence of such a method for assessing a learner's emotions and integrating it into a brainwave-sensing Intelligent Tutoring System.
Keywords: Algorithms, brainwaves, emotional dimensions, performance.
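A minimal sketch of the regression step described above, assuming EEG band powers as features and mean SAM ratings as targets; the feature choice, the synthetic data, and the linear model are illustrative, not the authors' exact pipeline.

```python
# Illustrative regression from EEG features to pleasure/arousal/dominance.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.random((120, 4))             # hypothetical band powers: delta, theta, alpha, beta
W = rng.random((4, 3))               # unknown "true" mapping, for synthetic data only
Y = X @ W + 0.05 * rng.standard_normal((120, 3))  # SAM means: pleasure, arousal, dominance

model = LinearRegression().fit(X, Y)
print(model.predict(X[:1]))          # predicted (pleasure, arousal, dominance)
print(model.score(X, Y))             # R^2 of the fit
```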
Rural Women’s Skill Acquisition in the Processing of Locust Bean in Ipokia Local Government Area of Ogun State, Nigeria
Authors: A. A. Adekunle, A. M. Omoare, W. O. Oyediran
Abstract:
This study was carried out to assess rural women’s skill acquisition in the processing of locust bean in Ipokia Local Government Area of Ogun State, Nigeria. A simple random sampling technique was used to select 90 women locust bean processors for this study. Data were analyzed with descriptive statistics and Pearson Product Moment Correlation. The results showed that the mean age of respondents was 40.72 years. Most (70.00%) of the respondents were married. The mean processing experience was 8.63 years. A large majority (93.30%) of the respondents relied on information from fellow locust bean processors and friends, and none (100%) of the respondents had acquired improved processing skills through trainings and workshops. It can be concluded that the rural women’s acquisition of modernized processing techniques was generally low. It is hereby recommended that the rural women processors be trained by extension service providers through a series of workshops and seminars on improved processing techniques.
Keywords: Locust bean, processing, skill acquisition, rural women.
Hyperspectral Mapping Methods for Differentiating Mangrove Species along Karachi Coast
Authors: Sher Muhammad, Mirza Muhammad Waqar
Abstract:
It is necessary to monitor and identify mangrove types and their spatial extent near coastal areas because they play an important role in the coastal ecosystem and environmental protection. This research aims at identifying and mapping mangrove types along the Karachi coast, ranging from 24.79° to 24.85° in latitude and 66.91° to 66.97° in longitude, using hyperspectral remote sensing data and techniques. An image acquired in February 2012 by the Hyperion sensor has been used for this research. Image pre-processing includes geometric and radiometric correction, followed by Minimum Noise Fraction (MNF) and Pixel Purity Index (PPI). The output of MNF and PPI has been analyzed by visualizing it in n dimensions for endmember extraction. Well-distributed clusters on the n-dimensional scatter plot have been selected with the region of interest (ROI) tool as endmembers. These endmembers have been used as input for the classification techniques applied to identify and map mangrove species: Spectral Angle Mapper (SAM), Spectral Feature Fitting (SFF), and Spectral Information Divergence (SID). Only two types of mangroves, namely Avicennia marina (white mangrove) and Avicennia germinans (black mangrove), have been observed throughout the study area.
Keywords: Mangrove, Hyperspectral, SAM, SFF, SID.
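Of the three classifiers, SAM has the simplest form: it treats each pixel spectrum and each endmember spectrum as vectors and compares them by the angle between them, which makes it insensitive to illumination-driven scaling. A minimal sketch, with hypothetical endmember spectra:

```python
# Spectral Angle Mapper: angle between a pixel spectrum and each endmember.
import numpy as np

def spectral_angle(pixel: np.ndarray, endmember: np.ndarray) -> float:
    """Angle in radians; smaller means more similar, invariant to scaling."""
    cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(pixel, endmembers):
    """Assign the pixel to the endmember with the smallest spectral angle."""
    return min(endmembers, key=lambda name: spectral_angle(pixel, endmembers[name]))

endmembers = {  # hypothetical library spectra (reflectance per band)
    "A. marina": np.array([0.10, 0.15, 0.40, 0.55]),
    "A. germinans": np.array([0.12, 0.18, 0.30, 0.42]),
}
print(classify(np.array([0.09, 0.14, 0.41, 0.56]), endmembers))
```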
Performance Evaluation of Data Mining Techniques for Predicting Software Reliability
Authors: Pradeep Kumar, Abdul Wahid
Abstract:
Accurate software reliability prediction not only enables developers to improve the quality of software but also provides useful information to help them plan valuable resources. This paper examines the performance of three well-known data mining techniques (CART, TreeNet and Random Forest) for predicting software reliability. We evaluate and compare the performance of the proposed models with a Cascade Correlation Neural Network (CCNN) using sixteen empirical databases from the Data and Analysis Center for Software. The goal of our study is to help project managers concentrate their testing efforts so as to minimize software failures and improve the reliability of software systems. Two performance measures, Normalized Root Mean Squared Error (NRMSE) and Mean Absolute Error (MAE), illustrate that the CART model is more accurate than the models built with Random Forest, TreeNet and CCNN on all datasets used in our study. Finally, we conclude that such methods can help in reliability prediction using real-life failure datasets.
Keywords: Classification, Cascade Correlation Neural Network, Random Forest, Software reliability, TreeNet.
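For reference, the two error measures in their common form, with y_i the observed and ŷ_i the predicted values over n samples. Note that the normalization used in NRMSE varies across papers (range or mean of the observations), and the abstract does not say which convention is used; the range-normalized form is shown.

```latex
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|, \qquad
\mathrm{NRMSE} = \frac{1}{y_{\max} - y_{\min}}
\sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^{2}}.
```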
Performance Improvement of MAC Protocols for Broadband Power-Line Access Networks of Developing Countries: A Case of Tanzania
Authors: Abdi T. Abdalla, Justinian Anatory
Abstract:
This paper investigates the possibility of improving the throughput of some Media Access Control (MAC) protocols, such as ALOHA, slotted ALOHA, and Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), with the aim of increasing the performance of powerline access networks. In this investigation, the real powerline network topology in Tanzania, located in the Kariakoo area of Dar es Salaam City, was used as a case study. The Wireshark Network Protocol Analyzer was used to analyze the data traffic of a similar existing network for projection purposes, and the data were then simulated using MATLAB. This paper proposes and analyzes three improvement techniques based on collision domain, packet length, and a combination of the two. From the results, it was found that the throughput of the CSMA/CA protocol improved noticeably, while ALOHA and slotted ALOHA showed insignificant changes, especially when the hybrid techniques were employed.
Keywords: Access Network, ALOHA, Broadband Powerline Communication, Slotted ALOHA, CSMA/CA, MAC Protocols.
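The classical throughput curves for the two baseline protocols are a useful reference here. With G the offered load in frames per frame time, pure ALOHA peaks at 1/(2e) ≈ 18.4% and slotted ALOHA at 1/e ≈ 36.8%:

```latex
S_{\mathrm{ALOHA}} = G\,e^{-2G}, \qquad
S_{\mathrm{slotted}} = G\,e^{-G}.
```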
A Model for Test Case Selection in the Software-Development Life Cycle
Authors: Adtha Lawanna
Abstract:
Software maintenance is one of the essential processes of the Software-Development Life Cycle. The main activities of maintaining software concern the correction of errors, the revision of code, the prevention of future errors, and improvements in performance and capacity. While changes are being applied, the software has to be retested to increase the level of assurance that it still meets its requirements. Accordingly, test cases must be selected for exercising the revised modules and the whole software. One approach to this problem is regression test selection, such as retest-all selection, random/ad-hoc selection, and safe regression test selection. In particular, the traditional techniques use a mapping between the test cases in a test suite and the lines of code they execute. However, lines of code are not the only factor that can affect the size of the test suite; the number of functions and the number of faulty versions matter as well. Therefore, a model for test case selection is developed to cover those three factors through an integrated technique that can produce a smaller set of test cases when compared with the traditional regression selection techniques.
Keywords: Software maintenance, regression test selection, test case.
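A minimal sketch of the traditional coverage mapping the abstract refers to: keep only the test cases whose covered lines intersect the modified lines. The data layout is hypothetical, and the paper's integrated model (which also weighs functions and faulty versions) is not reproduced.

```python
# Traditional line-coverage-based regression test selection (sketch).
coverage = {  # hypothetical mapping: test case -> lines of code it executes
    "test_login":  {10, 11, 12, 40},
    "test_logout": {20, 21},
    "test_refund": {40, 41, 42},
}
changed_lines = {40, 41}  # lines modified in the latest revision

selected = [t for t, lines in coverage.items() if lines & changed_lines]
print(selected)  # ['test_login', 'test_refund']
```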
Existence of Nano-Organic Carbon Particles below the Size Range of 10 nm in the Indoor Air Environment
Authors: Bireswar Paul, Amitava Datta
Abstract:
The indoor air environment has been a major concern in developing countries over the last few decades, with increased focus on monitoring air quality. In this work, an experimental study has been conducted to establish the existence of carbon nanoparticles below the size range of 10 nm in the non-sooting zone of an LPG/air partially premixed flame. Four techniques, UV absorption spectroscopy, fluorescence spectroscopy, dynamic light scattering (DLS), and TEM, have been used to characterize and measure the size of the carbon nanoparticles in the material sampled from the inner surface of the flame front. The existence of carbon nanoparticles in the sampled material has been confirmed by the typical nature of the absorption and fluorescence spectra already reported in the literature. The band gap energy shows that the particles are made up of three to six aromatic rings. Size measurement by the DLS technique also shows that the particles are below the 10 nm size range. The DLS results are corroborated by the TEM image of the same material.
Keywords: Indoor air, carbon nanoparticles, LPG, partially premixed flame, optical techniques.
Investigation of a Hybrid Process: Multipoint Incremental Forming
Authors: Safa Boudhaouia, Mohamed Amen Gahbiche, Eliane Giraud, Wacef Ben Salem, Philippe Dal Santo
Abstract:
Multi-point forming (MPF) and asymmetric incremental forming (ISF) are two flexible processes for sheet metal manufacturing. To take advantage of these two techniques, a hybrid process has been developed: Multipoint Incremental Forming (MPIF). This process combines the advantages of both forming techniques, which makes it a very interesting and particularly efficient process for single-item, small, and medium series production. In this paper, an experimental and a numerical investigation of this technique are presented. To highlight the flexibility of this process and its capacity to manufacture standard and complex shapes, several pieces were produced using MPIF. The forming experiments were performed on a 3-axis CNC machine. Moreover, a numerical model of the MPIF process has been implemented in ABAQUS, and the analysis showed good agreement with experimental results in terms of deformed shape. Furthermore, the use of an elastomeric interpolator avoids classical local defects like dimples, which are generally caused by asymmetric contact, and also improves the distribution of residual strain. Future work will apply this approach to other alloys used in aeronautic or automotive applications.
Keywords: Incremental forming, numerical simulation, MPIF, multipoint forming.
Object-Centric Process Mining Using Process Cubes
Authors: Anahita Farhang Ghahfarokhi, Alessandro Berti, Wil M.P. van der Aalst
Abstract:
Process mining provides ways to analyze business processes. Common process mining techniques consider the process as a whole. However, in real-life business processes, different behaviors exist that make the overall process too complex to interpret. Process comparison is a branch of process mining that isolates different behaviors of the process from each other by using process cubes. Process cubes organize event data along different dimensions. Each cell contains a set of events that can be used as input to process mining techniques. Existing work on process cubes assumes a single case notion. However, in real processes, several case notions (e.g., order, item, package) are intertwined. Object-centric process mining is a new branch of process mining that addresses multiple case notions in a process. To build a bridge between object-centric process mining and process comparison, we propose a process cube framework that supports process cube operations, such as slice and dice, on object-centric event logs. To facilitate the comparison, the framework is integrated with several object-centric process discovery approaches.
Keywords: Process mining, multidimensional process mining, multi-perspective business processes, OLAP, process cubes, process discovery.
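To make the cube operations concrete, here is a small sketch on a flat table of events; the column names are hypothetical and this is not the authors' framework. A slice fixes one value of a dimension; a dice restricts several dimensions at once.

```python
# Slice and dice over a toy object-centric event table (hypothetical schema).
import pandas as pd

events = pd.DataFrame({
    "activity": ["create order", "pick item", "pick item", "pack", "ship"],
    "month":    ["Jan", "Jan", "Feb", "Feb", "Feb"],
    "order":    ["o1", "o1", "o2", "o2", "o2"],
    "item":     [None, "i1", "i2", "i2", None],  # events may touch several object types
})

slice_jan = events[events["month"] == "Jan"]                           # slice: fix one dimension
dice = events[(events["month"] == "Feb") & (events["order"] == "o2")]  # dice: fix several at once
print(slice_jan)
print(dice)
```

Each filtered sub-table corresponds to one cube cell, to which a discovery technique can then be applied.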
Using Different Aspects of the Signings for Appearance-based Sign Language Recognition
Authors: Morteza Zahedi, Philippe Dreuw, Thomas Deselaers, Hermann Ney
Abstract:
Sign language is used by deaf and hard-of-hearing people for communication. Automatic sign language recognition is a challenging research area, since sign language is often the only way of communication for deaf people. Sign language includes different components of visual actions made by the signer, using the hands, the face, and the torso, to convey meaning. To use different aspects of signs, we combine different groups of features extracted from image frames recorded directly by a stationary camera. We combine the features at two levels by employing three techniques. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, or by concatenating feature groups over time and using LDA to choose the most discriminant elements. At the model level, a late fusion of differently trained models can be carried out by a log-linear model combination. In this paper, we investigate these three combination techniques in an automatic sign language recognition system and show that the recognition rate can be significantly improved.
Keywords: American sign language, appearance-based features, feature combination, sign language recognition.
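The late-fusion step has a standard closed form: each model's class posterior is raised to a tuned weight and the products are renormalized. Below, p_i are the individual models, λ_i their weights, and Z(x) the normalizer; this is the generic log-linear combination rather than the authors' exact parametrization.

```latex
p(c \mid x) = \frac{1}{Z(x)} \prod_{i=1}^{M} p_i(c \mid x)^{\lambda_i},
\qquad
Z(x) = \sum_{c'} \prod_{i=1}^{M} p_i(c' \mid x)^{\lambda_i}.
```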
Improved Network Construction Methods Based on Virtual Rails for Mobile Sensor Network
Authors: Noritaka Shigei, Kazuto Matsumoto, Yoshiki Nakashima, Hiromi Miyajima
Abstract:
Although Mobile Wireless Sensor Networks (MWSNs), which consist of mobile sensor nodes (MSNs), can cover a wide observation region using a small number of sensor nodes, they need to construct a network that collects the sensing data at the base station by moving the MSNs. As an effective method, the network construction method based on Virtual Rails (VRs), referred to as the VR method, has been proposed. In this paper, we propose two effective techniques for the VR method. They can prolong the operation time of the network, which is limited by the battery capacities and the energy consumption of the MSNs. The first technique, an effective arrangement of VRs, almost equalizes the number of MSNs belonging to each VR. The second technique, an adaptive movement method for MSNs, takes the residual battery energy into account. In the simulation, we demonstrate that each technique improves the network lifetime and that the combination of both techniques is the most effective.
Keywords: Wireless sensor network, mobile sensor node, relay of sensing data, virtual rail, residual energy.
Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations
Authors: Ragi Poyyara, Vijaya Patnana, Mohammed Alam
Abstract:
When acid is pumped into damaged reservoirs for damage removal/stimulation, a distorted inflow of acid into the formation occurs, caused by acid preferentially traveling into highly permeable regions over low-permeability regions, or (in general) into the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to carry out an effective placement of acid. Diversion is, desirably, a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, further sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), foams, and/or the use of placement techniques such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always realized that the effectiveness of diverters greatly depends on reservoir properties, such as formation type, temperature, reservoir permeability, heterogeneity, and physical well characteristics (e.g., completion type, well deviation, length of treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best methods of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents.
Keywords: Acid treatment, carbonate, diversion, sandstone.
An Off-the-Shelf Scheme for Dependable Grid Systems Using Virtualization
Authors: Toshinori Takabatake
Abstract:
Recently, grid computing has been a major focus in the science, industry, and business fields, which require a vast amount of computing. Grid computing provides an environment in which many nodes (i.e., many computers) are connected with each other through a local/global network and are available to many users. In this environment, to achieve data processing among nodes for any application, each node executes mutual authentication using certificates published by the Certificate Authority (CA). However, if a failure or fault occurs in the CA, no new certificates can be published, and as a result, a new node cannot participate in the grid environment. In this paper, an off-the-shelf scheme for dependable grid systems using virtualization techniques is proposed and its implementation verified. The proposed approach uses virtualization techniques to restart an application, e.g., the CA, if it has failed, so the system can tolerate a failure or fault occurring in the CA. Since the proposed scheme is easily implemented at the application level, its implementation cost for the system builder is low compared with other methods. Simulation results show that the CA in the system can recover from its failure or fault.
Keywords: Grid computing, restarting application, certificate authority, virtualization, dependability.
A Consistency Protocol Multi-Layer for Replicas Management in Large Scale Systems
Authors: Ghalem Belalem, Yahya Slimani
Abstract:
Large scale systems, such as computational Grids, are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, whose objective is to ensure the availability and reliability of data so as to provide fault tolerance and scalability, which is not possible without the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because consistency must be maintained between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers, whose objective is twofold: it first makes it possible to reduce response times compared to a completely pessimistic approach, and second to improve the quality of service compared to an optimistic approach.
Keywords: Data Grid, replication, consistency, optimistic approach, pessimistic approach.
A Normalization-based Robust Image Watermarking Scheme Using SVD and DCT
Authors: Say Wei Foo, Qi Dong
Abstract:
Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme that encompasses singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. In the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks. SVD is applied to each block. By concatenating the first singular values (SV) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction is essentially the inverse process; the extraction method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
Keywords: Image watermarking, image normalization, singular value decomposition, discrete cosine transform, robustness.
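The embedding rule, imposing a relationship between two selected coefficients, can be sketched in isolation. Below, a bit is written into a vector of first singular values by DCT-transforming it and forcing one coefficient above or below another by a margin; the coefficient indices, margin, and vector layout are illustrative and do not reproduce the paper's adaptive mask, pseudo-random selection, or block pipeline.

```python
# Coefficient-relationship embedding on a vector of singular values (sketch).
import numpy as np
from scipy.fft import dct, idct

I, J, MARGIN = 5, 6, 2.0  # illustrative coefficient pair and embedding strength

def embed_bit(sv_block: np.ndarray, bit: int) -> np.ndarray:
    c = dct(sv_block, norm="ortho")
    mean = (c[I] + c[J]) / 2.0
    # bit 1: force c[I] > c[J] by MARGIN; bit 0: the opposite relationship
    c[I], c[J] = (mean + MARGIN, mean - MARGIN) if bit else (mean - MARGIN, mean + MARGIN)
    return idct(c, norm="ortho")

def extract_bit(sv_block: np.ndarray) -> int:
    c = dct(sv_block, norm="ortho")
    return int(c[I] > c[J])  # blind: no original image needed

sv = np.linspace(50.0, 10.0, 16)  # stand-in for concatenated first singular values
assert extract_bit(embed_bit(sv, 1)) == 1
assert extract_bit(embed_bit(sv, 0)) == 0
```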
A Frugal Bidding Procedure for Replicating WWW Content
Authors: Samee Ullah Khan, C. Ardil
Abstract:
Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game theory based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well known techniques, such as greedy, branch and bound, game theoretical auctions, and genetic algorithms.
Keywords: Internet, data content replication, static allocation, mechanism design, equilibrium.
Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling
Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo
Abstract:
Hydrological modelling plays a crucial role in the planning and management of water resources, most especially in water-stressed regions where the need to effectively manage the available water resources is of critical importance. However, due to the complex, nonlinear and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters is extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered one that has progressed sluggishly over the past decades. This is largely a result of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs and applications of a promising evolutionary computation modelling technique, genetic programming (GP). It examines the specific characteristics of the technique which make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case is made for the full embracement of GP and its variants in hydrological modelling studies, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.
Keywords: Computational modelling, evolutionary algorithms, genetic programming, hydrological modelling.
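As a concrete illustration of the representation GP evolves, the sketch below generates and scores random expression trees over a toy rainfall-runoff pair. Selection, crossover, and mutation, the parts that make it a full GP run, are omitted for brevity, and all names and data here are invented for the example.

```python
# Expression-tree core of genetic programming (evolution loop omitted).
import operator, random

OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]

def random_tree(depth: int):
    """Grow a random expression tree over variable 'r' (rainfall) and constants."""
    if depth == 0 or random.random() < 0.3:
        return ("r",) if random.random() < 0.5 else (random.uniform(-1.0, 1.0),)
    op = random.choice(OPS)
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, r: float) -> float:
    if tree[0] == "r":
        return r
    if isinstance(tree[0], float):
        return tree[0]
    op, left, right = tree
    return op[0](evaluate(left, r), evaluate(right, r))

rain = [1.0, 2.0, 3.0, 4.0]        # toy data, not a real catchment
runoff = [0.4, 0.9, 1.5, 2.0]
fitness = lambda t: sum((evaluate(t, r) - q) ** 2 for r, q in zip(rain, runoff))
best = min((random_tree(3) for _ in range(500)), key=fitness)
print(round(fitness(best), 4))     # squared error of the best random expression
```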
A Materialized View Approach to Support Aggregation Operations over Long Periods in Sensor Networks
Authors: Minsoo Lee, Julee Choi, Sookyung Song
Abstract:
The increasing interest in processing data created by sensor networks has evolved into approaches that implement sensor networks as databases. The aggregation operator, which calculates a value from a large group of data, such as an average or a sum, is an essential function that needs to be provided when implementing such sensor network databases. This work proposes adding a DURING clause to TinySQL to calculate values over a specific long period, and suggests a way to implement the aggregation service in sensor networks by applying the materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and an aggregate value is computed at the root node. As such root nodes need to be memory efficient and low powered, it becomes a problem to recompute aggregate values from all past and current data. Therefore, applying incremental view maintenance techniques can reduce memory consumption and support fast computation of aggregate values.
Keywords: Aggregation, incremental view maintenance, materialized view, sensor network.
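Incremental maintenance of an average is the canonical example: instead of storing every reading, the materialized view keeps only a running sum and count and folds in each new reading as it arrives. A minimal sketch, with names of my own choosing:

```python
# Incrementally maintained AVG view: O(1) state instead of the full history.
class AvgView:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def on_insert(self, reading: float) -> None:
        """Incremental view maintenance: fold one new reading into the view."""
        self.total += reading
        self.count += 1

    def value(self) -> float:
        return self.total / self.count if self.count else float("nan")

view = AvgView()
for reading in [21.5, 22.0, 22.4]:  # readings arriving from child nodes
    view.on_insert(reading)
print(view.value())                  # average without storing all readings
```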
Contrast Enhancement of Color Images with Color Morphing Approach
Authors: Javed Khan, Aamir Saeed Malik, Nidal Kamel, Sarat Chandra Dass, Azura Mohd Affandi
Abstract:
Low-contrast images can result from wrong image acquisition settings or poor illumination conditions. Such images may not be visually appealing and can make feature extraction difficult. Contrast enhancement of color images can be useful in the medical area for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into the normalized RGB color space. The adaptive histogram equalization technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels in the original (low-contrast) image and in the contrast-enhanced image produced by adaptive histogram equalization (AHE) are then morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results of the proposed technique are analyzed using the cumulative variance and contrast improvement factor measures and are also compared with decorrelation stretch. Both subjective and quantitative analysis demonstrate that the proposed technique outperforms the other techniques.
Keywords: Contrast enhancement, normalized RGB, adaptive histogram equalization, cumulative variance.
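A rough sketch of the per-channel enhance-then-morph idea using OpenCV. It substitutes CLAHE (OpenCV's contrast-limited AHE) for plain AHE, uses a fixed 50/50 blend in place of the paper's "proper proportions", and skips the normalized-RGB transform, so it approximates the pipeline rather than reproducing the authors' method.

```python
# Approximate per-channel AHE + morphing sketch (CLAHE stands in for AHE).
import cv2
import numpy as np

def enhance(image_bgr: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    channels = [clahe.apply(ch) for ch in cv2.split(image_bgr)]  # equalize each channel
    equalized = cv2.merge(channels)
    # "morph" the original and enhanced images in proportion alpha
    return cv2.addWeighted(image_bgr, 1.0 - alpha, equalized, alpha, 0)

img = cv2.imread("acne_sample.jpg")  # hypothetical input file
if img is not None:
    cv2.imwrite("enhanced.jpg", enhance(img))
```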
Secure Hashing Algorithm and Advanced Encryption Algorithm in Cloud Computing
Authors: Jaimin Patel
Abstract:
Cloud computing is one of the most significant movements in various computing technologies. It provides flexibility to users, cost effectiveness, location independence, easy maintenance, multitenancy, drastic performance improvements, and increased productivity. On the other hand, there are also major issues, like security. Being a common server, a cloud must provide security to protect users' private data; this is especially important in e-commerce and social networks. In this paper, encryption algorithms such as the Advanced Encryption Standard, their vulnerabilities, risk of attacks, optimal time and complexity management, and comparison with other algorithms based on software implementation are discussed. Encryption techniques to improve the performance of AES algorithms and to reduce risk are given. Secure Hash Algorithms, their vulnerabilities, software implementations, risk of attacks, and comparison with other hashing algorithms are also covered, as well as the advantages and disadvantages of hashing techniques versus encryption.
Keywords: Cloud computing, encryption algorithm, secure hashing algorithm, brute force attack, birthday attack, plaintext attack, man-in-the-middle attack.
HaskellFL: A Tool for Detecting Logical Errors in Haskell
Authors: Vanessa Vasconcelos, Mariza A. S. Bigonha
Abstract:
Understanding and using the functional paradigm is a challenge for many programmers. Looking for logical errors in code may take a lot of a developer's time as a program grows in size. To facilitate both processes, this paper presents HaskellFL, a tool that uses fault localization techniques to locate a logical error in Haskell code. The Haskell subset used in this work is sufficiently expressive for those studying functional programming to get immediate help debugging their code and to answer questions about key concepts associated with the functional paradigm. HaskellFL was tested against functional programming assignments submitted by students enrolled in the functional programming class at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available on GitHub. This work also evaluated the effectiveness of two fault localization techniques, Tarantula and Ochiai, in the Haskell context. Furthermore, the EXAM score was chosen to evaluate the tool's effectiveness, and results showed that HaskellFL reduced the effort needed to locate an error in all tested scenarios. The results also showed that the Ochiai method was more effective than Tarantula.
Keywords: Debug, fault localization, functional programming, Haskell.
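Both rankings are computable from per-statement coverage counts. With failed and passed the numbers of failing and passing tests that execute a statement, and total_failed/total_passed the totals, the standard Tarantula and Ochiai formulas look as follows; the toy numbers are invented for the example.

```python
# Tarantula and Ochiai suspiciousness scores from coverage counts.
from math import sqrt

def tarantula(failed, passed, total_failed, total_passed):
    f = failed / total_failed if total_failed else 0.0
    p = passed / total_passed if total_passed else 0.0
    return f / (f + p) if (f + p) else 0.0

def ochiai(failed, passed, total_failed):
    denom = sqrt(total_failed * (failed + passed))
    return failed / denom if denom else 0.0

# Toy run: 3 failing and 7 passing tests; a statement executed by 3 failing, 1 passing.
print(tarantula(3, 1, 3, 7))  # ~0.875 -> highly suspicious
print(ochiai(3, 1, 3))        # ~0.866
```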
Advanced Geolocation of IP Addresses
Authors: Robert Koch, Mario Golling, Gabi Dreo Rodosek
Abstract:
Tracing and locating the geographical location of users (geolocation) is used extensively in today's Internet. Whenever we, for example, request a page from Google, we are, unless a specific configuration was made, automatically forwarded to the page in the relevant language, and, among other things depending on the location identified, specific commercials are presented. Geolocation has a significant impact especially within the area of network security. Because of the way the Internet works, attacks can be executed from almost everywhere; therefore, for attribution, knowledge of the origin of an attack, and thus geolocation, is mandatory in order to be able to trace back an attacker. In addition, geolocation can also be used very successfully to increase the security of a network during operation, i.e., before an intrusion has actually taken place. Similar to greylisting in email, geolocation allows (i) correlating detected attacks with new connections and (ii), as a consequence, classifying traffic a priori as more suspicious, thus particularly allowing this traffic to be inspected in more detail. Although numerous techniques for geolocation exist, each strategy is subject to certain restrictions. Following the ideas of Endo et al., this publication tries to overcome these shortcomings with a combined solution of different methods to allow improved and optimized geolocation. Thus, we present our architecture for improved geolocation, designing a new algorithm that combines several geolocation techniques to increase accuracy.
Keywords: IP geolocation, prosecution of computer fraud, attack attribution, target-analysis.
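The abstract does not disclose the combination algorithm, so the following is only a generic illustration of fusing several geolocation estimates: each technique returns a coordinate with a confidence weight, and the estimates are merged as a weighted centroid. The technique names and weights are invented for the example.

```python
# Generic fusion of several geolocation estimates (illustration only).
import numpy as np

# Hypothetical per-technique estimates: (latitude, longitude, confidence weight)
estimates = {
    "whois_registry":      (48.14, 11.58, 0.2),
    "rtt_multilateration": (48.40, 11.70, 0.5),
    "dns_loc_hint":        (48.10, 11.50, 0.3),
}

coords = np.array([(lat, lon) for lat, lon, _ in estimates.values()])
weights = np.array([w for *_, w in estimates.values()])
fused = (coords * weights[:, None]).sum(axis=0) / weights.sum()
print(fused)  # weighted-centroid location estimate
```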