Search results for: Paper assessment
2701 Fuzzy Population-Based Meta-Heuristic Approaches for Attribute Reduction in Rough Set Theory
Authors: Mafarja Majdi, Salwani Abdullah, Najmeh S. Jaddi
Abstract:
One of the global combinatorial optimization problems in machine learning is feature selection. It is concerned with removing irrelevant, noisy, and redundant data while preserving the meaning of the original data. Attribute reduction in rough set theory is an important feature selection method. Since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs), which combine the genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy-controlled great deluge algorithm, to identify a good balance between local search and genetic search. To verify the proposed approaches, numerical experiments are carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
Keywords: Rough Set Theory, Attribute Reduction, Fuzzy Logic, Memetic Algorithms, Record to Record Algorithm, Great Deluge Algorithm.
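The abstract describes the memetic scheme only at a high level; the following minimal Python sketch illustrates the general idea of a genetic algorithm whose offspring are refined by a record-to-record-travel local search. The binary encoding, the toy fitness function standing in for a rough-set dependency measure, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(0)

N_ATTRS = 20
INFORMATIVE = set(range(8))          # hypothetical "core" attributes (stand-in for a rough-set reduct)

def fitness(mask):
    """Toy stand-in for a rough-set dependency measure: reward covering
    informative attributes, penalise subset size."""
    covered = sum(1 for i in INFORMATIVE if mask[i])
    return covered / len(INFORMATIVE) - 0.01 * sum(mask)

def rrt_local_search(mask, deviation=0.05, steps=30):
    """Record-to-record travel: accept a bit-flip neighbour if it is not
    worse than the best record minus an allowed deviation."""
    best = list(mask)
    record = fitness(best)
    current = list(mask)
    for _ in range(steps):
        cand = list(current)
        i = random.randrange(N_ATTRS)
        cand[i] = 1 - cand[i]
        f = fitness(cand)
        if f > record - deviation:
            current = cand
            if f > record:
                record, best = f, list(cand)
    return best

def memetic_search(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_ATTRS)] for _ in range(pop_size)]
    for _ in range(generations):
        # tournament selection, uniform crossover, bit-flip mutation
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            if random.random() < 0.2:
                j = random.randrange(N_ATTRS)
                child[j] = 1 - child[j]
            children.append(rrt_local_search(child))   # memetic refinement step
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = memetic_search()
print("selected attributes:", [i for i, bit in enumerate(best) if bit])
```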
2700 Waste Generation in Iranian Building Industry: Addressing a Theory
Authors: Golnaz Moghimi, Alireza Afsharghotli, Alireza Rezaei
Abstract:
Construction waste has gradually increased as a result of the upsizing of construction projects that occur within the lifecycle of buildings. Since waste management is a major priority and has profound impacts on the volume of waste generated in the construction stage, most efforts have attempted to reuse, recycle, and reduce waste. However, knowledge about waste management in the construction industry remains insufficient. This paper provides an insight into the effect of project management knowledge areas on waste management in the construction stage alone. To this end, a survey among contractors in the Iranian building construction industry was conducted to identify the effectiveness of project management knowledge areas on three jobsite key factors: ‘Site activity’, ‘Training’, and ‘Awareness’. As a result, four management disciplines were identified as the most influential on the amount of construction waste: Project Cost Management, Quality Management, Human Resource Management, and Integration Management. Based on the research findings, a new model was presented to develop effective construction waste strategies.
Keywords: Awareness, PMBOK, site activity, training, waste management.
2699 Generational Pipelined Genetic Algorithm (PLGA) Using Stochastic Selection
Authors: Malay K. Pakhira, Rajat K. De
Abstract:
In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection. This helps maintain the basic generational nature of the proposed pipelined GA (PLGA). A number of benchmark problems are used to compare the performances of conventional roulette-wheel selection and SA-selection. These include unimodal and multimodal functions with dimensionality varying from very small to very large. The SA-selection scheme gives performance comparable to the classical roulette-wheel selection scheme for all instances when solution quality and rate of convergence are considered. The speedups obtained by PLGA for the different benchmarks are significant. It is shown that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible. In this connection, a low-cost but very fast hardware evaluation unit is described. Results of simulation experiments show that in a pipelined hardware environment, PLGA will be much faster than CGA. In terms of efficiency, PLGA is also found to outperform the parallel GA (PGA).
Keywords: Hardware evaluation, Hardware pipeline, Optimization, Pipelined genetic algorithm, SA-selection.
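The paper's exact SA-selection rule is not reproduced here; the sketch below shows a simulated-annealing-style stochastic acceptance step that such a selection operator could use, with all values chosen purely for illustration.

```python
import math
import random

def sa_select(current_fitness, candidate_fitness, temperature):
    """Stochastic (simulated-annealing-like) selection: always accept an
    improvement, otherwise accept with a probability that decays with the
    fitness loss and the current temperature."""
    if candidate_fitness >= current_fitness:
        return True
    return random.random() < math.exp((candidate_fitness - current_fitness) / temperature)

# Illustrative use inside one generation (fitness values only, for brevity).
random.seed(1)
population = [random.random() for _ in range(8)]
offspring = [random.random() for _ in range(8)]
temperature = 0.1
next_gen = [o if sa_select(c, o, temperature) else c
            for c, o in zip(population, offspring)]
print(next_gen)
```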
2698 Seed-Based Region Growing (SBRG) vs Adaptive Network-Based Fuzzy Inference System (ANFIS) vs Fuzzy c-Means (FCM): Brain Abnormalities Segmentation
Authors: Shafaf Ibrahim, Noor Elaiza Abdul Khalid, Mazani Manaf
Abstract:
Segmentation of Magnetic Resonance Imaging (MRI) images is one of the most challenging problems in medical imaging. This paper compares the performances of Seed-Based Region Growing (SBRG), the Adaptive Network-Based Fuzzy Inference System (ANFIS), and Fuzzy c-Means (FCM) in brain abnormalities segmentation. Controlled experimental data are used, designed in such a way that the size of the abnormalities is known in advance. This is done by cutting abnormalities of various sizes and pasting them onto normal brain tissues. The normal tissues, or the background, are divided into three different categories. The segmentation is performed on fifty-seven datasets of each category. The known abnormality sizes, in numbers of pixels, are then compared with the segmentation results of the three proposed techniques. ANFIS returns the best segmentation performance for light abnormalities, whereas SBRG performs well in the segmentation of dark abnormalities.
Keywords: Seed-Based Region Growing (SBRG), Adaptive Network-Based Fuzzy Inference System (ANFIS), Fuzzy c-Means (FCM), Brain segmentation.
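As an illustration of the FCM component of the comparison, the following is a compact NumPy implementation of standard fuzzy c-means applied to pixel intensities; it is not the authors' code, and the synthetic intensity data merely stand in for MRI pixels.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard FCM on a 1-D array of pixel intensities x (shape: [n_pixels])."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                          # fuzzy memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)     # weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centers, u

# Synthetic "image": dark abnormality, normal tissue, light abnormality.
rng = np.random.default_rng(42)
pixels = np.concatenate([rng.normal(40, 5, 300),
                         rng.normal(120, 8, 1000),
                         rng.normal(210, 5, 200)])
centers, memberships = fuzzy_c_means(pixels)
labels = memberships.argmax(axis=0)
print("cluster centres:", np.sort(centers).round(1))
```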
2697 Reliability of Chute-Feeders in Automatic Machines of High Production Capacity
Authors: R. Usubamatov, A. Usubamatova, S. Hussain
Abstract:
Modern, highly automated production systems face problems of reliability. Machine function reliability results in changes of productivity rate and in the efficiency of use of expensive industrial facilities. Predicting reliability has become an important research area and involves complex mathematical methods and calculations. The reliability of high-productivity technological automatic machines, which consist of complex mechanical, electrical, and electronic components, is important, because the failure of these units results in major economic losses for production systems. The reliability of transport and feeding systems for automatic technological machines is also important, because failure of transport leads to stops of the technological machines. This paper presents reliability engineering of the feeding system and its components for transporting complex-shaped parts to automatic machines. It also discusses the calculation of the reliability parameters of the feeding unit by applying probability theory. Equations produced for calculating the limits of the geometrical sizes of feeders and the probability of the transported parts sticking in the chute represent the reliability of feeders as a function of their geometrical parameters.
Keywords: Chute-feeder, parts, reliability.
2696 Health Monitoring and Failure Detection of Electronic and Structural Components in Small Unmanned Aerial Vehicles
Authors: Gopi Kandaswamy, P. Balamuralidhar
Abstract:
Fully autonomous small Unmanned Aerial Vehicles (UAVs) are increasingly being used in many commercial applications. Although a lot of research has been done to develop safe, reliable and durable UAVs, accidents due to electronic and structural failures are not uncommon and pose a huge safety risk to the UAV operators and the public. Hence there is a strong need for an automated health monitoring system for UAVs with a view to minimizing mission failures and thereby increasing safety. This paper describes our approach to monitoring the electronic and structural components in a small UAV without the need for additional sensors to do the monitoring. Our system monitors data from four sources: sensors, navigation algorithms, control inputs from the operator, and flight controller outputs. It then performs statistical analysis on the data and applies a rule-based engine to detect failures. This information can be fed back into the UAV, and a decision to continue or abort the mission can be taken automatically by the UAV, independent of the operator. Our system has been verified using data obtained from real flights over the past year from UAVs of various sizes that we have designed and deployed for various applications.
Keywords: Fault detection, health monitoring, unmanned aerial vehicles, vibration analysis.
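The abstract does not give the authors' rules or thresholds; the sketch below only shows the general shape of a statistics-plus-rules check over the kinds of data sources listed (sensor readings, navigation estimates, operator commands, controller outputs). All names and thresholds are invented for illustration.

```python
import numpy as np

def check_health(sensor_alt, nav_alt, throttle_cmd, motor_output, window=50):
    """Toy rule-based health check over the most recent `window` samples.
    All thresholds are illustrative, not taken from the paper."""
    alerts = []
    residual = np.abs(np.asarray(sensor_alt[-window:]) - np.asarray(nav_alt[-window:]))
    if residual.mean() > 2.0:
        alerts.append("sensor/navigation altitude disagreement")
    vibration = np.std(np.diff(np.asarray(sensor_alt[-window:])))
    if vibration > 1.5:
        alerts.append("excessive vibration on altitude sensor")
    cmd = np.asarray(throttle_cmd[-window:])
    out = np.asarray(motor_output[-window:])
    if cmd.mean() > 0.7 and out.mean() < 0.3:
        alerts.append("motor not responding to operator input")
    return alerts or ["healthy"]

rng = np.random.default_rng(0)
alt_ok = 100 + 0.2 * rng.standard_normal(200)
print(check_health(alt_ok, alt_ok + 0.1, 0.5 * np.ones(200), 0.5 * np.ones(200)))
print(check_health(alt_ok, alt_ok + 5.0, np.ones(200), 0.1 * np.ones(200)))
```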
2695 Intelligent Mobile Search Oriented to Global e-Commerce
Authors: Abdelkader Dekdouk
Abstract:
In this paper we propose a novel approach for searching e-commerce products using a mobile phone, illustrated by a prototype, eCoMobile. This approach aims to globalize mobile search by integrating the concept of user multilingualism into it. To show this, we deal in particular with the English and Arabic languages: the mobile user can formulate a query on a commercial product in either language. The description of the user's information need on commercial products relies on an ontology that represents the conceptualization of the product catalogue knowledge domain, defined in both English and Arabic. A query expressed on the mobile device client specifies the concept that corresponds to the name of the product, followed by a set of pairs (property, value) describing the characteristics of the product. Once a query is submitted, it is communicated to the server side, which analyses it and in turn performs an HTTP request to an e-commerce application server (such as Amazon). The latter responds by returning an XML file representing a set of elements, where each element defines an item of the searched product with its specific characteristics. The XML file is analyzed on the server side, and the items are then displayed on the mobile device client along with their relevant characteristics in the chosen language.
Keywords: Mobile computing, search engine, multilingual global eCommerce, ontology, XML.
2694 Application of Mutual Information based Least dependent Component Analysis (MILCA) for Removal of Ocular Artifacts from Electroencephalogram
Authors: V Krishnaveni, S Jayaraman, K Ramadoss
Abstract:
The electrical potentials generated during eye movements and blinks are one of the main sources of artifacts in electroencephalogram (EEG) recordings and can propagate across much of the scalp, masking and distorting brain signals. In recent times, signal separation algorithms have been widely used to remove artifacts from observed EEG data. In this paper, a recently introduced signal separation algorithm, Mutual Information based Least dependent Component Analysis (MILCA), is employed to separate ocular artifacts from EEG. The aim of MILCA is to minimize the mutual information (MI) between the independent components (estimated sources) under a pure rotation. The performance of this algorithm is compared with eleven popular algorithms (Infomax, Extended Infomax, FastICA, SOBI, TDSEP, JADE, OGWE, MS-ICA, SHIBBS, Kernel-ICA, and RADICAL) in terms of the actual independence and uniqueness of the estimated source components, obtained for different sets of EEG data with ocular artifacts, by using a reliable MI estimator. Results show that MILCA is best at separating the ocular artifacts from the EEG and is recommended for further analysis.
Keywords: Electroencephalogram, Ocular Artifacts (OA), Independent Component Analysis (ICA), Mutual Information (MI), Mutual Information based Least dependent Component Analysis (MILCA).
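MILCA itself is not available in common Python packages, so the following sketch uses scikit-learn's FastICA (one of the algorithms compared in the paper) to show the generic ICA-based ocular-artifact clean-up pipeline; the synthetic signals, channel count, and the kurtosis-based artifact rule are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 8
t = np.linspace(0, 10, n_samples)

# Synthetic "EEG": brain-like oscillations plus a slow, high-amplitude blink source.
brain = 0.5 * np.sin(2 * np.pi * 10 * t[:, None] + rng.random(n_channels))
blink = np.zeros(n_samples)
blink[500:550] = 8.0                               # a single blink burst
mixing = rng.random(n_channels)
eeg = brain + blink[:, None] * mixing + 0.1 * rng.standard_normal((n_samples, n_channels))

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg)                   # estimated independent components

# Simple (illustrative) artifact rule: discard the component with the highest kurtosis.
def kurtosis(x):
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

artifact = int(np.argmax([kurtosis(sources[:, i]) for i in range(n_channels)]))
sources[:, artifact] = 0.0
eeg_clean = ica.inverse_transform(sources)         # reconstruct artifact-free channels
print("removed component:", artifact, "cleaned shape:", eeg_clean.shape)
```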
2693 An Overview of the Factors Affecting Microbial-Induced Calcite Precipitation and its Potential Application in Soil Improvement
Authors: Wei-Soon Ng, Min-Lee Lee, Siew-Ling Hii
Abstract:
Microbial-induced calcite precipitation (MICP) is a relatively green and sustainable soil improvement technique. It utilizes a biochemical process that exists naturally in soil to improve the engineering properties of soils. The calcite precipitation process is promoted by injecting a high concentration of urease-positive bacteria and reagents into the soil. The main objective of this paper is to provide an overview of the factors affecting MICP in soil. Several factors were identified, including nutrients, bacteria type, geometric compatibility of bacteria, bacterial cell concentration, fixation and distribution of bacteria in soil, temperature, reagent concentration, pH, and injection method. These factors were found to be essential for promoting successful MICP soil treatment. Furthermore, a preliminary laboratory test was carried out to investigate the potential application of the technique in improving the shear strength and impermeability of a residual soil specimen. The results showed that both the shear strength and impermeability of the residual soil improved significantly upon MICP treatment. The improvement increased with increasing soil density.
Keywords: Bacteria, biocementation, bioclogging, calcite precipitation, soil improvement.
2692 Performance Evaluation of Energy Efficient Communication Protocol for Mobile Ad Hoc Networks
Authors: Toshihiko Sasama, Kentaro Kishida, Kazunori Sugahara, Hiroshi Masuyama
Abstract:
A mobile ad hoc network is a network of mobile nodes without any notion of centralized administration. In such a network, each mobile node behaves not only as a host that runs applications but also as a router that forwards packets on behalf of others. Clustering has been applied to routing protocols to achieve efficient communications. A CH network expresses the connected relationship among cluster-heads. This paper discusses methods for constructing a CH network and produces the following results. (1) The running costs required by three traditional methods for constructing a CH network are not very different from each other in either the static or the dynamic circumstance, and their running costs in the static circumstance do not differ from their costs in the dynamic circumstance. Meanwhile, although the routing costs required by these three methods are not very different in the static circumstance, the costs differ considerably from each other in the dynamic circumstance. Their routing costs in the static circumstance are also very different from their costs in the dynamic circumstance, the former being about one tenth of the latter; the routing cost in the dynamic circumstance is mostly the cost of re-routing. (2) On the strength of these results, we discuss two new methods with regard to whether they are tolerable in the dynamic circumstance, that is, whether the number of re-routings is small. These new methods are revisions of the traditional methods. We recommend the method that produces the smallest routing cost in the dynamic circumstance, and therefore the smallest total cost.
Keywords: cluster, mobile ad hoc network, re-routing cost, simulation
2691 Network Reconfiguration for Load Balancing in Distribution System with Distributed Generation and Capacitor Placement
Authors: T. Lantharthong, N. Rugthaicharoencheep
Abstract:
This paper presents an efficient algorithm for the optimization of radial distribution systems by network reconfiguration to balance feeder loads and eliminate overload conditions. The system load-balancing index is used to determine the loading conditions of the system and the maximum system loading capacity. The index value has to be at a minimum in the optimal network reconfiguration for load balancing. The Tabu search algorithm is employed to search for the optimal network reconfiguration. The basic idea behind the search is a move from a current solution to its neighborhood by effectively utilizing a memory to provide an efficient search for optimality. It requires low computational effort and is able to find good-quality configurations. Simulation results are presented for a radial 69-bus system with distributed generation and capacitor placement. The study results show that the optimal on/off patterns of the switches can be identified to give the best network reconfiguration, balancing the feeder loads while respecting all the constraints.
Keywords: Network reconfiguration, Distributed generation, Capacitor placement, Load balancing, Optimization technique
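The 69-bus model, power-flow calculation, and the paper's load-balancing index are not reproduced here; the sketch below shows only the generic tabu-search move loop over binary switch states, with a placeholder cost function standing in for the load-balancing index.

```python
import random

random.seed(0)
N_SWITCHES = 10

def load_balancing_index(state):
    """Placeholder for the system load-balancing index; a toy cost that
    prefers one particular switch pattern (a real study would evaluate a
    power flow with radiality and voltage constraints instead)."""
    target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    return sum(a != b for a, b in zip(state, target)) + 0.1 * random.random()

def tabu_search(n_iter=200, tenure=7):
    current = [random.randint(0, 1) for _ in range(N_SWITCHES)]
    best, best_cost = list(current), load_balancing_index(current)
    tabu = {}                                    # switch index -> iteration until which the move is tabu
    for it in range(n_iter):
        candidates = []
        for i in range(N_SWITCHES):
            neighbour = list(current)
            neighbour[i] = 1 - neighbour[i]      # toggle one switch
            cost = load_balancing_index(neighbour)
            allowed = tabu.get(i, -1) < it or cost < best_cost   # aspiration criterion
            if allowed:
                candidates.append((cost, i, neighbour))
        if not candidates:
            continue
        cost, move, current = min(candidates)    # best admissible neighbour
        tabu[move] = it + tenure
        if cost < best_cost:
            best, best_cost = list(current), cost
    return best, best_cost

print(tabu_search())
```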
2690 Multiple Peaks Tracking Algorithm using Particle Swarm Optimization Incorporated with Artificial Neural Network
Authors: Mei Shan Ngan, Chee Wei Tan
Abstract:
Due to the non-linear characteristics of the photovoltaic (PV) array, PV systems are typically equipped with a maximum power point tracking (MPPT) feature. Moreover, when a PV array is under partially shaded conditions, a hotspot problem can occur that could damage the PV cells. Partial shading causes multiple peaks in the P-V characteristic curves. This paper presents a hybrid Particle Swarm Optimization (PSO) and Artificial Neural Network (ANN) MPPT algorithm for the detection of the global peak among the multiple peaks, in order to extract the true maximum energy from the PV panel. The PV system consists of a PV array, a dc-dc boost converter controlled by the proposed MPPT algorithm, and a resistive load. The system was simulated using the MATLAB/Simulink package. The simulation results show that the proposed algorithm performs well in detecting the true global peak power. The results of the simulations are analyzed and discussed.
Keywords: Photovoltaic (PV), Partial Shading, Maximum Power Point Tracking (MPPT), Particle Swarm Optimization (PSO), Artificial Neural Network (ANN)
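The boost converter, PV model, and ANN stage are omitted in the following sketch, which shows only a bare PSO searching a synthetic multi-peak power-voltage curve; the curve shape and all PSO constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pv_power(v):
    """Synthetic P-V curve with two local peaks and one global peak,
    mimicking a partially shaded array (purely illustrative)."""
    return (60 * np.exp(-((v - 12) ** 2) / 8) +
            95 * np.exp(-((v - 26) ** 2) / 10) +
            40 * np.exp(-((v - 35) ** 2) / 6))

n_particles, n_iter = 12, 60
v = rng.uniform(0, 40, n_particles)               # particle positions (operating voltages)
vel = np.zeros(n_particles)
pbest = v.copy()
pbest_val = pv_power(pbest)
gbest = pbest[pbest_val.argmax()]

w, c1, c2 = 0.6, 1.6, 1.6                         # inertia and acceleration coefficients
for _ in range(n_iter):
    vel = (w * vel
           + c1 * rng.random(n_particles) * (pbest - v)
           + c2 * rng.random(n_particles) * (gbest - v))
    v = np.clip(v + vel, 0, 40)
    val = pv_power(v)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = v[improved], val[improved]
    gbest = pbest[pbest_val.argmax()]

print(f"global MPP found at ~{gbest:.1f} V, {pv_power(gbest):.1f} W")
```

In a real system the candidate voltages would be applied through the converter duty cycle and the measured panel power would replace the synthetic pv_power function.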
2689 Encryption Efficiency Analysis and Security Evaluation of RC6 Block Cipher for Digital Images
Authors: Hossam El-din H. Ahmed, Hamdy M. Kalash, Osama S. Farag Allah
Abstract:
This paper investigates the encryption efficiency of the RC6 block cipher as applied to digital images, providing a new mathematical measure of encryption efficiency, which we call the encryption quality, used instead of visual inspection. The encryption quality of the RC6 block cipher is investigated across several of its design parameters, such as word size, number of rounds, and secret key length, and the optimal choices for the best values of these design parameters are given. The security of the RC6 block cipher for digital images is also analyzed from a strict cryptographic viewpoint: its resistance against brute-force, statistical, and differential attacks is estimated, and experiments are made to test its security against all of these types of attacks. The experiments and results, carried out with detailed analysis, verify that the RC6 block cipher is highly secure from a cryptographic viewpoint, so it can be considered a secure symmetric cipher for real-time encryption of digital images.
Keywords: Block cipher, Image encryption, Encryption quality, and Security analysis.
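The paper's exact definition of encryption quality is not restated here; the sketch below computes one common histogram-deviation style measure between a plain and an encrypted image, with synthetic arrays standing in for a real image and for RC6 cipher output.

```python
import numpy as np

rng = np.random.default_rng(0)

def encryption_quality(plain, cipher):
    """Average absolute difference between the grey-level histograms of the
    plain and cipher images (one common way to quantify how strongly the
    encrypted histogram deviates from the original; not necessarily the
    paper's exact formula)."""
    h_plain = np.bincount(plain.ravel(), minlength=256)
    h_cipher = np.bincount(cipher.ravel(), minlength=256)
    return np.abs(h_plain - h_cipher).sum() / 256.0

# Stand-ins: a smooth synthetic "image" and a uniformly random "cipher image"
# (a good cipher such as RC6 should make the output statistically close to noise).
plain = np.clip(np.add.outer(np.arange(256), np.arange(256)) // 2, 0, 255).astype(np.uint8)
cipher = rng.integers(0, 256, size=plain.shape, dtype=np.uint8)

print("encryption quality:", round(encryption_quality(plain, cipher), 1))
```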
2688 Gasification of Trans-4-Hydroxycinnamic Acid with Ethanol at Elevated Temperatures
Authors: Shyh-Ming Chern, Wei-Ling Lin
Abstract:
Lignin is a major constituent of woody biomass and exists abundantly in nature. It is a major byproduct of the paper industry and of bioethanol production processes, and these byproducts are mainly used for low-value applications. Instead, lignin can be converted into higher-value gaseous fuel, thereby helping to curtail the ever-growing price of oil and to slow down the trend of global warming. Although biochemical treatment is capable of converting cellulose into liquid ethanol fuel, it cannot be applied to the conversion of lignin. Alternatively, it is possible to convert lignin into gaseous fuel thermochemically. In the present work, trans-4-hydroxycinnamic acid, a model compound for lignin that closely resembles its basic building blocks, is gasified in an autoclave with ethanol at elevated temperatures and pressures above the critical point of ethanol. Ethanol, instead of water, is chosen because ethanol dissolves trans-4-hydroxycinnamic acid easily and helps to convert it into lighter gaseous species relatively well. The major operating parameters for the gasification reaction include temperature (673-873 K), reaction pressure (5-25 MPa), and feed concentration (0.05-0.3 M). Generally, more than 80% of the reactants, including trans-4-hydroxycinnamic acid and ethanol, were converted into gaseous products at an operating condition of 873 K and 5 MPa.
Keywords: Ethanol, gasification, lignin, supercritical.
2687 Web Page Watermarking: XML files using Synonyms and Acronyms
Authors: Nighat Mir, Sayed Afaq Hussain
Abstract:
Recent advances in the field of computing have led to massive use of web-based electronic documents. Current copyright protection laws are inadequate to prove ownership of electronic documents and do not provide strong protection against the copying and manipulation of information from the web. This has opened many channels for securing information, and significant developments have been made in the area of information security. Digital watermarking has developed into a very dynamic area of research and has addressed challenging issues for digital content. Watermarking can be visible (logos or signatures) or invisible (encoding and decoding). Many visible watermarking techniques have been studied for text documents, but there are very few for web-based text. XML files are used to exchange information on the internet and contain important information. In this paper, two invisible watermarking techniques using synonyms and acronyms are proposed for XML files, to prove intellectual ownership and to achieve security. The techniques are analyzed against different attacks, and the embedding capacity of the XML file is measured; a comparative analysis of capacity is also made for both methods. The system has been implemented in the C# language, and all tests were performed practically to obtain the results.
Keywords: Watermarking, Extensible Markup Language (XML), Synonyms, Acronyms, Copyright protection.
2686 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model
Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis
Abstract:
In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
Keywords: Expectation-maximization (EM) algorithm, cause of failure, intensity, linear degradation path, masked data, reliability function.
2685 Financial Sources and Instruments for Public Grants and Financial Facilities of SMEs in EU
Authors: Simeon Karafolas, Maciej Woźniak
Abstract:
Most public financing programs at the national and regional level are funded from European Union sources. The EU can participate directly in a national or regional program (for example, the LEADER and URBAN initiatives) or indirectly by funding regional or national funds. Funds from the European Union are provided through the EU multiannual financial framework, from which the annual budget is programmed. The adjusted 2007-2013 program of the EU made commitments of almost 1 trillion Euros for the EU-28 countries, and provisions of the new 2014-2020 program consider commitments of more than 1 trillion Euros. Sustainable growth, divided into Cohesion and Competitiveness for Growth and Employment, is one of the two principal categories; the other is the preservation and management of natural resources. Through this financing process, SMEs have benefited from EU and public sources by receiving grants for their investments. Most of the financial instruments are available indirectly through national financial intermediaries, and part of them is managed by the European Investment Fund. The paper focuses on public financing of SMEs by examining case studies on diverse forms of public help. It tries to assess the efficiency of the examined good practices and therefore to draw some conclusions on the possibility of applying them to other regions.
Keywords: DIFASS, financing, grants, SMEs.
2684 Design and Analysis of MEMS based Accelerometer for Automatic Detection of Railway Wheel Flat
Authors: Rajib Ul Alam Uzzal, Ion Stiharu, Waiz Ahmed
Abstract:
This paper presents the modeling of a MEMS-based accelerometer for detecting the presence of a wheel flat in a railway vehicle. A haversine wheel flat is assigned to one wheel of a 5-DOF pitch-plane vehicle model, which is coupled to a three-layer track model. Based on the simulated acceleration response obtained from the vehicle-track model, an accelerometer is designed that meets all the requirements to detect the presence of a wheel flat. The proposed accelerometer can survive in a dynamic shock environment with accelerations of up to ±150 g. The parameters of the accelerometer are calculated to achieve the required specifications using a lumped-element approximation, and the results are used for the initial design layout. A finite element analysis code (COMSOL) is used to simulate the accelerometer under various operating conditions and to determine the optimum configuration. The simulated results are within about 2% of the calculated values, which indicates the validity of the lumped-element approach. The stability of the accelerometer is also determined over the desired range of operation, including the condition under shock.
Keywords: MEMS accelerometer, Pitch plane vehicle, wheel flat.
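The device parameters below are placeholders rather than the paper's values; the snippet only illustrates the lumped-element relations (natural frequency, static sensitivity, and deflection at the shock limit) typically used for an initial accelerometer layout.

```python
import math

# Hypothetical lumped-element parameters (NOT the paper's values).
m = 2.0e-9        # proof mass [kg]
k = 8.0           # total suspension stiffness [N/m]

f_n = math.sqrt(k / m) / (2 * math.pi)     # natural frequency [Hz]
sensitivity = m / k                        # static displacement per unit acceleration [m / (m/s^2)]
x_150g = sensitivity * 150 * 9.81          # deflection at the 150 g shock limit [m]

print(f"natural frequency ~ {f_n / 1e3:.1f} kHz")
print(f"static sensitivity ~ {sensitivity * 1e9:.2f} nm per m/s^2")
print(f"deflection at 150 g ~ {x_150g * 1e6:.2f} um")
```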
2683 Transmit Sub-aperture Optimization in MSTA Ultrasound Imaging Method
Authors: Yuriy Tasinkevych, Ihor Trots, Andrzej Nowicki, Marcin Lewandowski
Abstract:
The paper presents the optimization problem for the multi-element synthetic transmit aperture (MSTA) method in ultrasound imaging applications. The optimal choice of the transmit aperture size is performed as a trade-off between the lateral resolution, penetration depth, and frame rate. Results of the analysis obtained by a developed optimization algorithm are presented, with the maximum penetration depth and the best lateral resolution at given depths chosen as the optimization criteria. The optimization algorithm was tested using synthetic aperture data of point reflectors simulated by the Field II program for Matlab® for the case of a 5 MHz 128-element linear transducer array with 0.48 mm pitch. The visualization of experimentally obtained synthetic aperture data of a tissue-mimicking phantom and in vitro measurements of beef liver are also shown. The data were obtained using the SonixTOUCH Research system equipped with a 4 MHz 128-element linear transducer with 0.3 mm element pitch, 0.28 mm element width, and 70% fractional bandwidth, excited by a one-cycle sine pulse burst at the transducer's center frequency.
Keywords: synthetic aperture method, ultrasound imaging, beamforming.
2682 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network
Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. Models learned from this data that can predict future traffic speed would be beneficial for applications such as car navigation systems, but building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. Instead, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream of it. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
Keywords: Big data, k-NN, machine learning, traffic speed prediction.
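A minimal scikit-learn sketch of the idea of restricting the k-NN search to recent data is given below; the synthetic speed series, feature layout, and prediction horizon are assumptions, not the paper's dataset or feature sets.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic 5-minute average speeds for one link over 60 days (288 samples/day).
n_days, per_day = 60, 288
t = np.arange(n_days * per_day)
speed = 50 + 15 * np.sin(2 * np.pi * t / per_day) + rng.normal(0, 3, t.size)

def make_xy(series, lags=6, horizon=6):
    """Features: the current and previous speeds; target: speed 30 minutes ahead."""
    X, y = [], []
    for j in range(lags - 1, len(series) - horizon):
        X.append(series[j - lags + 1: j + 1])
        y.append(series[j + horizon])
    return np.array(X), np.array(y)

X, y = make_xy(speed)

# k-NN over the full history vs. only the most recent two weeks of data.
recent = 14 * per_day
knn_full = KNeighborsRegressor(n_neighbors=5).fit(X, y)
knn_recent = KNeighborsRegressor(n_neighbors=5).fit(X[-recent:], y[-recent:])

x_query = X[-1:]
print("prediction (full history):", knn_full.predict(x_query))
print("prediction (recent only): ", knn_recent.predict(x_query))
```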
2681 Influence of Metakaolin and Cement Types on Compressive Strength and Transport Properties of Self-Consolidating Concrete
Authors: Kianoosh Samimi, Farhad Estakhr, Mahdi Mahdikhani, Faramaz Moodi
Abstract:
The performance of self-consolidating concrete (SCC) relative to ordinary concrete is generally related to the ingredients used. Metakaolin can modify various properties of concrete, owing to its high pozzolanic reactivity, and also produces a denser microstructure. The objective of this paper is to examine the influence of three types of Portland cement and of metakaolin on the compressive strength and transport properties of SCC at early ages and up to 90 days. Six concrete mixtures were prepared with three different cement types and a 15% metakaolin substitution. The results show that the highest compressive strength was achieved with Portland Slag Cement (PSC) without metakaolin at an age of 90 days. Conversely, the lowest compressive strength at all curing ages was obtained with Pozzolanic Portland Cement (PPC) containing 15% metakaolin. As the results show, the compressive strength of SCC containing Portland cement type II with metakaolin is higher than that of the corresponding SCC without metakaolin from 28 days of age onwards. On the other hand, the samples containing PSC and PPC with metakaolin had a lower compressive strength than the plain samples. Therefore, it can be concluded that metakaolin has a negative effect on the compressive strength of SCC containing PSC and PPC. In addition, the results show that metakaolin enhanced the chloride durability of the SCCs and reduced capillary water absorption at 28 and 90 days.
Keywords: SCC, metakaolin, cement type, compressive strength, chloride diffusion.
2680 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings
Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies
Abstract:
Average temperatures worldwide are expected to continue to rise. At the same time, major cities in developing countries are becoming increasingly populated and polluted. Governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of a model that is able to estimate occupant exposure to extreme temperatures and high air pollution within domestic buildings. Building physics simulations were performed using the EnergyPlus building physics software. An accurate metamodel is then formed by randomly sampling building input parameters and training on the outputs of the EnergyPlus simulations. Metamodels are used to vastly reduce the computation time required when performing optimisation and sensitivity analyses. Neural networks (NNs) are compared to a radial basis function (RBF) algorithm when forming the metamodel. These techniques were implemented using the PyBrain and scikit-learn Python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating the overheating and air pollution metrics modelled by EnergyPlus.
Keywords: Neural Networks, Radial Basis Functions, Metamodelling, Python machine learning libraries.
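PyBrain is no longer actively maintained, so the neural network in the sketch below uses scikit-learn's MLPRegressor instead, and the RBF metamodel is a small hand-rolled Gaussian RBF fit; the building response function is a synthetic stand-in for EnergyPlus outputs, not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for an EnergyPlus response: an "overheating metric" as a
# smooth nonlinear function of two building parameters (e.g. glazing ratio, insulation).
def building_response(X):
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(X.shape[0])

X_train = rng.uniform(0, 1, (200, 2))
y_train = building_response(X_train)
X_test = rng.uniform(0, 1, (50, 2))
y_test = building_response(X_test)

# Neural-network metamodel.
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
nn.fit(X_train, y_train)

# Gaussian RBF metamodel: solve K w = y on the training set.
def rbf_kernel(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

K = rbf_kernel(X_train, X_train) + 1e-8 * np.eye(len(X_train))   # jitter for stability
w = np.linalg.solve(K, y_train)
y_rbf = rbf_kernel(X_test, X_train) @ w

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("NN  RMSE:", round(rmse(nn.predict(X_test), y_test), 3))
print("RBF RMSE:", round(rmse(y_rbf, y_test), 3))
```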
2679 On the Difference between Cultural and Religious Identities: A Case Study of Christianity and Islam in Some African and Asian Countries
Authors: Mputu Ngandu Simon
Abstract:
Culture and religion are two of the most significant markers of an individual's or group's identity. Religion finds its expression in a given culture, and culture is the costume in which a religion is dressed. In other words, there is a crucial relationship between religion and culture which should not be ignored. On the one hand, religion influences the way in which a culture is consumed: a person's consumption of a certain cultural practice is influenced by his or her religious identity. On the other hand, cultural identity plays an important role in how a religion is practiced by its adherents. Some cultural practices become more credible when interpreted in religious terms, just as religious doctrines and dogmas need cultural interpretation to be understood by a given people in a given context. This relationship goes so deep that sometimes the boundaries between culture and religion become blurred and people end up mixing religion and culture; in some cases, the two are considered to be one and the same thing. However, despite this apparent sameness, religion and culture are two distinct aspects of identity and should always be considered as such. One results from knowledge, while the other has belief as its foundation. This paper explores the difference between cultural and religious identities by drawing from existing literature on the topic as a whole, before applying that knowledge to two specific case studies: Christianity among the San people of Botswana, Namibia, Angola, Zambia, Lesotho, Zimbabwe, and South Africa, and Islam in Somalia, Kenya, Ethiopia, Djibouti, and Iran.
Keywords: Belief, identity, knowledge, culture, religion.
2678 Cost-Effective Design of Space Structures Joints: A Review
Authors: Mohammed I. Ali, Feng Fan, Peter N. Khakina, Ma H.H
Abstract:
In the construction of any structure, aesthetic and utility values should be considered in such a way as to make the structure cost-effective. Most structures are composed of elements and joints; the joints are very critical in any skeletal space structure because they largely determine the performance of the structure. In early times, most space structures were constructed using rigid joints, which had the advantage of better-performing structures compared with pin-jointed structures, but with the disadvantage of requiring all the construction work to be done on site. The development of semi-rigid joints now enables connections to be prefabricated and quickly assembled on site while maintaining good performance. In this paper, cost-effectiveness is discussed based on the strength of the connectors at the joints, the buckling of joints and of the overall structure, and the effect of initial geometrical imperfections. Several existing joints are reviewed by classifying them into categories and discussing where they are most suited and how they perform structurally. Finite element modeling using ABAQUS is also carried out to determine the buckling behavior. It is observed that some joints are more economical than others. The rise-to-span ratio and imperfections are also found to affect the buckling of the structures. Based on these findings, general principles that guide the design of cost-effective joints and structures are discussed.
Keywords: Buckling, Connectors, Joint stiffness, Eccentricity, Second moment of area, Semi-rigid joints.
2677 Behavior Analysis Based On Nine Degrees-of-Freedom Sensor for Emergency Rescue Evacuation Support System
Authors: Maeng-Hwan Hyun, Dae-Man Do, Young-Bok Choi
Abstract:
Around the world, there are frequent incidents of natural disasters, such as earthquakes, tsunamis, floods, and snowstorms, as well as man-made disasters such as fires, arson, and acts of terror. These diverse and unpredictable adversities have resulted in a number of fatalities and injuries. If the occurrence of a disaster can be assessed quickly and information such as the exact location of the disaster and evacuation routes can be provided, victims can promptly move to safe locations, minimizing losses. This paper proposes a behavior analysis method based on a nine degrees-of-freedom (9-DOF) sensor that is effective for the emergency rescue evacuation support system (ERESS), which is being researched with the objective of providing evacuation support during disasters. Based on experiments performed using the acceleration and gyroscope sensors in the 9-DOF sensor, data are analyzed for human behavior in stationary, walking, running, and emergency situations, to suggest guidelines for the system's judgment. Using the results of the experiments performed to determine disaster occurrence, it was confirmed that the proposed method quickly determines whether a disaster has occurred.
Keywords: Behavior Analysis, Nine degrees-of-freedom sensor, Emergency rescue.
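The window length and thresholds below are illustrative assumptions rather than values derived from the authors' experiments; the sketch only shows the kind of windowed accelerometer and gyroscope statistics on which such a judgment rule could be based.

```python
import numpy as np

def classify_window(accel, gyro):
    """Very simple rule-based labelling of one window of 9-DOF data.
    accel, gyro: arrays of shape (n_samples, 3). Thresholds are illustrative."""
    a_mag = np.linalg.norm(accel, axis=1)
    g_mag = np.linalg.norm(gyro, axis=1)
    a_std, a_peak, g_std = a_mag.std(), a_mag.max(), g_mag.std()
    if a_peak > 30.0 or g_std > 4.0:      # violent, irregular motion
        return "emergency"
    if a_std < 0.3:
        return "stationary"
    if a_std < 2.0:
        return "walking"
    return "running"

# Synthetic one-second windows (50 Hz) for a quick check.
rng = np.random.default_rng(0)
gravity = np.array([0.0, 0.0, 9.81])
still = gravity + 0.05 * rng.standard_normal((50, 3))
run = gravity + 3.0 * rng.standard_normal((50, 3))
print(classify_window(still, 0.01 * rng.standard_normal((50, 3))))
print(classify_window(run, 0.5 * rng.standard_normal((50, 3))))
```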
2676 Economy-Based Computing with WebCom
Authors: Adarsh Patil, David A. Power, John P. Morrison
Abstract:
Grid environments consist of the volatile integration of discrete heterogeneous resources. The notion of the Grid is to unite different users and organisations and pool their resources into one large computing platform where they can harness, inter-operate, collaborate and interact. If the Grid community is to achieve this objective, then participants (users and organisations) need to be willing to donate or share their resources and permit other participants to use them. Resources do not have to be shared at all times, since that may result in users not having access to their own resource. The idea of reward-based computing was developed to address the sharing problem in a pragmatic manner. Participants are offered a reward to donate their resources to the Grid. A reward may include monetary recompense or a pro rata share of available resources when these are constrained. This latter point may imply a quality of service, which in turn may require some globally agreed reservation mechanism. This paper presents a platform for economy-based computing using the WebCom Grid middleware. Using this middleware, participants can configure their resources at times and priority levels that suit their local usage policy. The WebCom system accounts for processing done on individual participants' resources and rewards them accordingly.
Keywords: WebCom, Economy-based computing, WebCom Grid Bank Reward, Condensed Graph, Distributor, Accounting, GridPoint.
2675 Character Segmentation Method for a License Plate with Topological Transform
Authors: Jaedo Kim, Youngjoon Han, Hernsoo Hahn
Abstract:
This paper proposes a robust character segmentation method for license plates with topological transforms such as twist and rotation. The first step of the proposed method is to find candidate regions for characters and the license plate; a character or license plate must appear as a closed loop in the edge image. When detecting candidate character regions, the evaluation of a detected region uses the topological relationship between the characters. When the method decides on a license plate candidate region, character features obtained by binarization of the region are used. After binarization of the detected candidate region, each character region is decided again; in this step, each character region is fitted more accurately than in the previous step. Next, the method checks for other character regions of different scale near the detected character regions, because most license plates have license numbers with some meaningful characters around them. The method uses perspective projection for geometrical normalization: if there is topological distortion in the character region, the method projects the region onto a template defined as a standard license plate. In this step, the method is able to separate each number region and the small meaningful characters. The evaluation results are tested on a number of test images.
Keywords: License Plate Detection, Character Segmentation, Perspective Projection, Topological Transform.
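A minimal OpenCV sketch of the geometrical normalization step alone is shown below (projecting a detected quadrilateral plate region onto an axis-aligned template); the corner coordinates, template size, and synthetic input frame are placeholders, and the closed-loop and character detection stages are not shown.

```python
import cv2
import numpy as np

# Hypothetical corner points of a detected, topologically distorted plate region
# (ordered: top-left, top-right, bottom-right, bottom-left).
plate_corners = np.float32([[112, 204], [388, 188], [396, 262], [118, 280]])

# Standard plate template size used for normalization (illustrative).
W, H = 320, 80
template_corners = np.float32([[0, 0], [W, 0], [W, H], [0, H]])

# Synthetic input frame standing in for a camera image (white background,
# dark quadrilateral roughly where the distorted plate would be).
image = np.full((480, 640, 3), 255, np.uint8)
cv2.fillConvexPoly(image, plate_corners.astype(np.int32), (40, 40, 40))

# Perspective projection onto the template removes the topological distortion.
M = cv2.getPerspectiveTransform(plate_corners, template_corners)
normalized_plate = cv2.warpPerspective(image, M, (W, H))

# Binarize the normalized plate before splitting it into character regions.
gray = cv2.cvtColor(normalized_plate, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imwrite("plate_normalized.png", binary)
```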
2674 An Images Monitoring System based on Multi-Format Streaming Grid Architecture
Authors: Yi-Haur Shiau, Sun-In Lin, Shi-Wei Lo, Hsiu-Mei Chou, Yi-Hsuan Chen
Abstract:
This paper proposes a novel multi-format streaming grid architecture for a real-time image monitoring system. The system, based on a three-tier architecture, includes a stream receiving unit, a stream processor unit, and a presentation unit. It is a distributed-computing, loosely coupled architecture, whose benefit is that the number of required servers can be adjusted depending on the load of the image monitoring system. The stream receiving unit supports multiple capture source devices and multi-format stream compression encoders. The stream processor unit includes three modules: a stream clipping module, an image processing module, and an image management module. The presentation unit can display image data on several different platforms. We verified the proposed grid architecture with an actual image monitoring test, using a fast image matching method with adjustable parameters for different monitoring situations. A background subtraction method is also implemented in the system. Experimental results showed that the proposed architecture is robust, adaptive, and powerful for the image monitoring system.
Keywords: Motion detection, grid architecture, image monitoring system, background subtraction.
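The grid tiers themselves are not shown in the following sketch, which only illustrates a background-subtraction module of the kind mentioned, using OpenCV's MOG2 subtractor on synthetic frames; the frame size and motion threshold are illustrative.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=25)

# Synthetic frame stream: static grey background with a bright square that
# starts moving after frame 30 (stands in for a feed from the stream receiving unit).
def make_frame(i):
    frame = np.full((240, 320, 3), 90, np.uint8)
    x = 40 + 4 * max(0, i - 30)
    cv2.rectangle(frame, (x, 100), (x + 30, 130), (220, 220, 220), -1)
    return frame

for i in range(60):
    frame = make_frame(i)
    mask = subtractor.apply(frame)                 # foreground mask for this frame
    moving_pixels = int((mask == 255).sum())
    if moving_pixels > 200:                        # simple motion-detection rule
        print(f"frame {i}: motion detected ({moving_pixels} px)")
```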
2673 Organizational Management Model based on Knowledge Management, Talent Management and Technology Management Framework “Gomak”
Authors: Nieto Bernal W., Luna Amaya C.
Abstract:
This paper presents a framework for organizational knowledge management that seeks to deploy a standardized structure for the integrated management of knowledge, with a common language based on domains, processes, and global indicators inspired by the COBIT 5 framework (ISACA, 2012), and that supports the integration of three technologies: enterprise information architecture (EIA), business process modeling (BPM), and service-oriented architecture (SOA). The Gomak framework is a management platform that seeks to integrate the information technology infrastructure, the application structure, the information infrastructure, and the business logic and business model to support a sound strategy of organizational knowledge management, following a process-based approach and concurrent engineering. Concurrent engineering (CE) is a systematic approach to integrated product development that responds to customer expectations by involving all perspectives in parallel, from the beginning of the product life cycle (European Space Agency, 2000).
Keywords: Business Process Modeling, Enterprise Information Architecture, Government and Knowledge Management, Service Oriented Architecture, Process Management.
2672 Design of Two-Channel Quincunx Quadrature Mirror Filter Banks Using Digital All-Pass Lattice Filters
Authors: Ju-Hong Lee, Chong-Jia Ciou
Abstract:
This paper deals with the design of two-dimensional (2-D) recursive two-channel quincunx quadrature mirror filter (QQMF) banks. The analysis and synthesis filters of the 2-D recursive QQMF bank are composed of 2-D recursive digital all-pass lattice filters (DALFs) with symmetric half-plane (SHP) support regions. Using the 2-D doubly complementary half-band (DC-HB) property possessed by the analysis and synthesis filters, we facilitate the design of the proposed QQMF bank. To find the coefficients of the 2-D recursive SHP DALFs, we present a structure for 2-D recursive digital all-pass filters built from 2-D SHP recursive DALFs. The novelty of using 2-D SHP recursive DALFs to construct a 2-D recursive QQMF bank is that the resulting bank provides better performance than existing 2-D recursive QQMF banks. Simulation results are also presented for illustration and comparison.
Keywords: All-pass digital filter, doubly complementary, lattice structure, symmetric half-plane digital filter, quincunx QMF bank.