Search results for: DSP benchmark
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 225

45 Overview of Multi-Chip Alternatives for 2.5D and 3D Integrated Circuit Packagings

Authors: Ching-Feng Chen, Ching-Chih Tsai

Abstract:

As transistor sizes gradually approach physical limits, the persistence of Moore’s Law is challenged by issues such as the short-channel effect and the difficulty of developing high numerical aperture (NA) lithography equipment. Given the ever-increasing technical requirements of portable devices and high-performance computing (HPC), relying on the law’s continuation to enhance chip density will no longer sustain the prospects of the electronics industry. Weighing a chip’s power consumption, performance, area, cost, and cycle time to market (PPACC) has become the updated benchmark driving the evolution of advanced nanometer (nm) wafer nodes. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview investigates up-to-date, cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) in light of current transistor structures and technology nodes. We conclude that multi-chip solutions for 2.5D and 3D IC packaging can prolong Moore’s Law.

Keywords: Moore’s Law, High Numerical Aperture, Power Consumption-Performance-Area-Cost-Cycle Time to Market, PPACC, 2.5D and 3D Very-Large-Scale Integration Packaging, Through Silicon Via.

44 Static and Dynamic Analysis of Hyperboloidal Helix Having Thin Walled Open and Close Sections

Authors: Merve Ermis, Murat Yılmaz, Nihal Eratlı, Mehmet H. Omurtag

Abstract:

The static and dynamic analyses of a hyperboloidal helix having closed and open square box sections are investigated via a mixed finite element formulation based on Timoshenko beam theory. The Frenet triad is taken as the local coordinate system for the helix geometry. The helix domain is discretized with two-noded curved elements using linear shape functions. Each node of the curved element has 12 degrees of freedom: three translations, three rotations, two shear forces, one axial force, two bending moments and one torque. Finite element matrices are derived using exact nodal values of curvature and arc length, which are interpolated linearly over the element’s axial length. The torsional moments of inertia for the closed and open square box sections are obtained by finite element solution of the St. Venant torsion formulation. With the proposed method, the torsional rigidity of simply and multiply connected cross-sections can also be calculated in the same manner. The influence of the closed and open square box cross-sections on the static and dynamic analyses of the hyperboloidal helix is investigated, and benchmark problems are presented for the literature.
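
The Frenet triad mentioned above can be constructed numerically for any helix centerline. The sketch below is a minimal illustration, assuming one standard parametrization of a helix wound on a hyperboloid of revolution; the geometry constants are illustrative, not taken from the paper:

```python
import numpy as np

# Hyperboloidal helix centerline: the radius follows the hyperboloid
# profile r(t) = a*sqrt(1 + (c*t)^2); all constants are illustrative.
a, c, pitch, turns = 1.0, 0.3, 0.5, 4
t = np.linspace(0.0, 2.0 * np.pi * turns, 2001)
r = a * np.sqrt(1.0 + (c * t) ** 2)
P = np.column_stack([r * np.cos(t), r * np.sin(t), pitch * t])

# First and second derivatives along the parameter (finite differences)
dP = np.gradient(P, t, axis=0)
ddP = np.gradient(dP, t, axis=0)

# Frenet triad: T tangent, B binormal, N = B x T completes the frame
T = dP / np.linalg.norm(dP, axis=1, keepdims=True)
B = np.cross(dP, ddP)
B /= np.linalg.norm(B, axis=1, keepdims=True)
N = np.cross(B, T)

# Each row of (T, N, B) is a local coordinate system along the helix
print(T[0], N[0], B[0])
```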

Keywords: Hyperboloidal helix, square cross-section, thin-walled cross-section, torsional rigidity.

43 Understanding Innovation by Analyzing the Pillars of the Global Competitiveness Index

Authors: Ujjwala Bhand, Mridula Goel

Abstract:

The Global Competitiveness Index (GCI), prepared by the World Economic Forum, has become a benchmark for studying the competitiveness of countries and for understanding the factors that enable competitiveness. Innovation is a key pillar of competitiveness and has the unique property of enabling exponential economic growth. This paper analyzes how the pillars comprising the Global Competitiveness Index affect innovation and whether GDP growth can directly affect innovation outcomes for a country. The key objective of the study is to identify areas on which governments of developing countries can focus policies and programs to improve their country’s innovativeness. We have compiled a panel data set covering the top innovating countries and the large emerging economies known as BRICS, from 2007-08 to 2014-15, in order to find the significant factors that affect innovation. The results of the regression analysis suggest that governments should make policies to improve labor market efficiency, establish sophisticated business networks, provide basic health and primary education to their people, and strengthen the quality of higher education and training services in the economy. The achievements of smaller economies in innovation suggest that concerted efforts by governments can counter any size-related disadvantage, and in fact can provide greater flexibility and speed in encouraging innovation.

Keywords: Innovation, Global Competitiveness Index, BRICS, economic growth.

42 Surrogate based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue in system design. Evolutionary Algorithms (EAs) are population-based, stochastic search techniques widely used as efficient global optimizers. However, finding optimal solutions to complex, high-dimensional, multimodal problems often requires highly computationally expensive function evaluations, which can be practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time through controlled use of meta-models to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations such as model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also avoids the high computational expense of the additional clustering required by the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
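
As a rough sketch of the controlled meta-model usage described above, and not of DAFHEA-II itself, whose multiple-model SVM learning is more involved, the loop below trains an SVR approximator on all exactly evaluated points and spends exact evaluations only on the offspring the surrogate ranks best; the test function and control parameters are placeholders:

```python
import numpy as np
from sklearn.svm import SVR

def expensive_f(x):              # placeholder for the costly fitness function
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
dim, pop_size, gens, exact_frac = 10, 40, 50, 0.25
pop = rng.uniform(-5, 5, (pop_size, dim))
fit = np.array([expensive_f(x) for x in pop])
archive_X, archive_y = pop.copy(), fit.copy()   # all exact evaluations so far

for g in range(gens):
    cand = pop + rng.normal(0, 0.5, pop.shape)  # Gaussian-mutation offspring
    # Meta-model trained on the archive of exactly evaluated points
    model = SVR(kernel="rbf", C=10.0).fit(archive_X, archive_y)
    approx = model.predict(cand)
    # Spend exact evaluations only on the most promising fraction
    n_exact = max(1, int(exact_frac * pop_size))
    for i in np.argsort(approx)[:n_exact]:
        y = expensive_f(cand[i])
        archive_X = np.vstack([archive_X, cand[i]])
        archive_y = np.append(archive_y, y)
        if y < fit.max():                       # replace the current worst
            worst = np.argmax(fit)
            pop[worst], fit[worst] = cand[i], y

print("best fitness:", fit.min())
```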

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

41 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time

Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma

Abstract:

Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion impose a significant time burden around the world, and one significant solution can be the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of tools such as smart sensors, artificial intelligence, positioning technologies and mobile data services to manage traffic flow, reduce congestion and enhance drivers’ ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The traffic sign classification problem needs to be solved, as it is a major step towards building semi-autonomous and autonomous driving systems. This work implements an approach to traffic sign classification by developing a Convolutional Neural Network (CNN) classifier on the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than relying on hand-crafted features, our model keeps the parameter count manageable and employs data augmentation. It achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
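
A minimal CNN classifier of the kind described can be sketched in Keras as below; the layer sizes and augmentation settings are illustrative rather than the authors’ exact architecture, and the GTSRB images (resized to 32x32) would be loaded separately:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 43  # GTSRB has 43 traffic sign classes

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    # Light data augmentation in place of hand-crafted features
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),               # keeps the parameter count in check
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20)
```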

Keywords: Multiclass classification, convolutional neural network, OpenCV, data augmentation.

40 Radial Basis Surrogate Model Integrated to Evolutionary Algorithm for Solving Computation Intensive Black-Box Problems

Authors: Abdulbaset Saad, Adel Younis, Zuomin Dong

Abstract:

For design optimization with high-dimensional, expensive problems, an effective and efficient optimization methodology is desired. This work proposes a series of modifications to the Differential Evolution (DE) algorithm for solving computationally intensive black-box problems. The proposed methodology, called Radial Basis Meta-Model Assisted Differential Evolution (RBF-DE), is a global optimization algorithm based on meta-modeling techniques. A meta-model-assisted DE is proposed to solve computationally expensive optimization problems. The Radial Basis Function (RBF) model is used as a surrogate to approximate the expensive objective function, while DE employs a mechanism to dynamically select the best-performing combination of parameters such as differential rate, crossover probability, and population size. The proposed algorithm is tested on benchmark functions and real-life practical applications. The test results demonstrate that the proposed algorithm is promising and performs well compared to other optimization algorithms, converging to acceptable solutions in terms of accuracy, number of evaluations, and time needed to converge.
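
The surrogate step of an RBF-assisted DE can be illustrated as follows, assuming scipy’s RBFInterpolator as the radial basis model and a plain DE mutation/crossover; the dynamic parameter-selection mechanism of RBF-DE itself is omitted:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_f(x):                          # placeholder black-box objective
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)  # Rastrigin

rng = np.random.default_rng(1)
dim, NP, F, CR, gens = 5, 30, 0.7, 0.9, 40
pop = rng.uniform(-5.12, 5.12, (NP, dim))
fit = np.apply_along_axis(expensive_f, 1, pop)

for g in range(gens):
    # Radial basis surrogate rebuilt from the current exact evaluations
    surrogate = RBFInterpolator(pop, fit, kernel="thin_plate_spline")
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
        # The surrogate screens out unpromising trials before exact evaluation
        if surrogate(trial[None, :])[0] < fit[i]:
            y = expensive_f(trial)
            if y < fit[i]:
                pop[i], fit[i] = trial, y

print("best:", fit.min())
```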

Keywords: Differential evolution, engineering design, expensive computations, meta-modeling, radial basis function, optimization.

39 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with it; most replace the missing values with a fixed value computed from the observed values. In our work, we use a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while if one of them is missing, the distance is computed from the distribution of the known values of the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya’s recommendation system to work, distances between users need to be measured; since there are missing values in the collected data, a distance function for incomplete user profiles is needed. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between objects when some of them contain missing values, we integrated it within the framework of the k nearest neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
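
A simplified version of such a distance, using Euclidean rather than Mahalanobis terms for brevity, can be passed to scikit-learn’s kNN as a precomputed matrix; when a coordinate is missing, the sketch substitutes the expected squared deviation estimated from the observed values of that coordinate:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def missing_value_distances(A, B):
    """Pairwise distances; NaN marks a missing coordinate. Plain Euclidean
    terms stand in for the Mahalanobis distance of the paper."""
    mu, var = np.nanmean(B, axis=0), np.nanvar(B, axis=0)
    D = np.zeros((len(A), len(B)))
    for i, u in enumerate(A):
        for k, v in enumerate(B):
            d2 = 0.0
            for j in range(len(u)):
                if np.isnan(u[j]) and np.isnan(v[j]):
                    d2 += 2.0 * var[j]                    # both unknown
                elif np.isnan(u[j]):
                    d2 += (v[j] - mu[j]) ** 2 + var[j]    # expected gap
                elif np.isnan(v[j]):
                    d2 += (u[j] - mu[j]) ** 2 + var[j]
                else:
                    d2 += (u[j] - v[j]) ** 2              # both known
            D[i, k] = np.sqrt(d2)
    return D

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 4))
y = (X[:, 0] > 0).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan                     # knock out 10%

D = missing_value_distances(X, X)
knn = KNeighborsClassifier(n_neighbors=5, metric="precomputed").fit(D, y)
print("train accuracy:", knn.score(D, y))
```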

Keywords: Missing values, distance metric, Bhattacharyya distance.

38 Dynamic Features Selection for Heart Disease Classification

Authors: Walid MOUDANI

Abstract:

The healthcare environment is generally perceived as information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques in healthcare systems. In this study, a proficient methodology is presented for the extraction of significant patterns from coronary heart disease data warehouses for heart attack prediction, heart disease unfortunately remaining a leading cause of mortality worldwide. For this purpose, we propose to dynamically enumerate the optimal subsets of the reduced features of high interest using the rough sets technique associated with dynamic programming, and to validate the classification using a Random Forest (RF) decision tree to identify risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, experts’ knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
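
The rough-sets/dynamic-programming enumeration is not reproduced here, but the validation step it feeds can be sketched as a cross-validated Random Forest over candidate feature subsets; the exhaustive search and the public breast cancer dataset below are simplified stand-ins for the paper’s method and clinical data:

```python
from itertools import combinations
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in clinical data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
X = X[:, :8]                    # pretend these are the reduced features

best_score, best_subset = -np.inf, None
for r in range(2, 5):           # enumerate small candidate subsets
    for subset in combinations(range(X.shape[1]), r):
        rf = RandomForestClassifier(n_estimators=50, random_state=0)
        score = cross_val_score(rf, X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset

print("best subset:", best_subset, "CV accuracy: %.3f" % best_score)
```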

Keywords: Multi-Classifier Decision Tree, Feature Reduction, Dynamic Programming, Rough Sets.

37 Shariah Views on the Components of Profit Rate in Al-Murabahah Asset Financing in Malaysian Islamic Bank

Authors: M. Pisol B Mat Isa, Asmak Ab Rahman, Hezlina Bt M Hashim, Abd Mutalib B Embong

Abstract:

Al-Murabahah is an Islamic financing facility used in asset financing. The profit rate of the contract is determined by components that are also used in conventional banking: cost of funds, overhead cost, risk premium cost, and the bank’s profit margin. At the same time, the profit rate determined by the Islamic banking system also refers to the London Interbank Offered Rate (LIBOR) as a benchmark. This practice has raised arguments among Muslim scholars regarding the validity of the contract: whether it maintains Shariah compliance or not. This paper aims to explore the Shariah view of the above components as practiced by Islamic banks in determining the profit rate of al-murabahah asset financing in Malaysia. This is a comparative study which applies the views of Muslim scholars from all major mazahibs in Islamic jurisprudence and examines the practices of Islamic banks in Malaysia with respect to the above components. The study found that Shariah accepts all the components, with conditions: the cost of funds is accepted as a portion of al-mudarabah’s profit; the overhead cost is accepted as a cost of the product; the risk premium cost, consisting of business risk and mitigation risk, is accepted through the concept of al-ta’awun; and the bank’s profit margin is accepted as a right of the bank after venturing into risky investment.

Keywords: Islamic banking, Islamic finance, al-murabahah, asset financing.

36 A Coupled Extended-Finite-Discrete Element Method: On the Different Contact Schemes between Continua and Discontinua

Authors: Shervin Khazaeli, Shahab Haj-zamani

Abstract:

Recently, advanced geotechnical engineering problems related to soil movement, particle loss, and the modeling of local failure (i.e. discontinua), as well as the modeling of in-contact structures (i.e. continua), have been of great interest among researchers. The aim of this research is to meet the requirements of modeling these two different domains simultaneously. To this end, a coupled numerical method is introduced based on the Discrete Element Method (DEM) and the eXtended-Finite Element Method (X-FEM). In the coupled procedure, DEM is employed to capture the interactions and relative movements of soil particles as discontinua, while X-FEM is utilized to model in-contact structures as continua, which may contain different types of discontinuities. For verification purposes, the new coupled approach is used to examine benchmark problems including different contacts between and within continua and discontinua. Results are validated by comparison with existing analytical and numerical solutions. This study proves that the extended-finite-discrete element method can robustly analyze not only contact problems, but also other types of discontinuities in continua, such as (i) crack formation and propagation, (ii) voids and bimaterial interfaces, and (iii) combinations of the previous cases. In essence, the proposed method can be used widely in advanced soil-structure interaction problems to investigate the micro and macro behaviour of the surrounding soil and the response of the embedded structure that contains discontinuities.

Keywords: Contact problems, discrete element method, extended-finite element method, soil-structure interaction.

35 A Neurofuzzy Learning and its Application to Control System

Authors: Seema Chopra, R. Mitra, Vijay Kumar

Abstract:

A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-like fuzzy inference system using a hybrid learning algorithm that combines gradient descent and the least mean square algorithm. The proposed neurofuzzy system has the advantages of determining and reducing the number of rules automatically, decreasing computational time, learning faster, and consuming less memory. The authors also investigate how neurofuzzy techniques can be applied in control theory to design fuzzy controllers for linear and nonlinear dynamic systems modeled from a set of input/output data. Simulation analysis is carried out on a wide range of processes, to identify nonlinear components online in a control system, and on a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are simulated in the Matlab/Simulink environment. The combination is also illustrated by modeling the relationship between automobile trips and demographic factors.
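
The two-phase idea, clustering the data and then fitting one Sugeno-type consequent per cluster, can be sketched as follows; k-means stands in for the fuzzy subtractive clustering of the paper, and Gaussian memberships around the cluster centres weight the local linear models:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (400, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=400)   # toy input-output data

# Phase 1: partition the data; each cluster yields one if-then rule
k, sigma = 6, 0.8                                   # rule count, width
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_

def memberships(Xq):
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)          # normalized firing

# Phase 2: weighted least squares for each rule's affine consequent
Xb = np.hstack([X, np.ones((len(X), 1))])
W = memberships(X)
theta = np.empty((k, Xb.shape[1]))
for r in range(k):
    sw = np.sqrt(W[:, r])
    theta[r], *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)

def predict(Xq):
    Xqb = np.hstack([Xq, np.ones((len(Xq), 1))])
    return (memberships(Xq) * (Xqb @ theta.T)).sum(axis=1)

print("train RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```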

Keywords: Fuzzy control, neuro-fuzzy techniques, fuzzy subtractive clustering, extraction of rules, and optimization of membership functions.

34 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS) using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time than using the full feature set of the dataset with the same algorithm.
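
In scikit-learn terms the same pipeline, information-gain scoring to keep the top features followed by two-cluster K-means mapped to Normal/Attack, looks roughly like this; synthetic data stands in for NSL-KDD, which would be loaded separately:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NSL-KDD (41 features, normal=0 / attack=1)
X, y = make_classification(n_samples=3000, n_features=41, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6,
                                          random_state=0)

# Information gain (mutual information) ranks the features
ig = mutual_info_classif(X_tr, y_tr, random_state=0)
top = np.argsort(ig)[::-1][:10]                  # keep the 10 best features

# Unsupervised two-cluster K-means on the reduced feature set
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr[:, top])
pred = km.predict(X_te[:, top])

# Map each cluster to the majority class seen in training
mapping = {c: np.bincount(y_tr[km.labels_ == c]).argmax() for c in (0, 1)}
pred = np.array([mapping[c] for c in pred])
print("accuracy:", accuracy_score(y_te, pred))
```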

Keywords: Information Gain (IG), Intrusion Detection System (IDS), K-means Clustering, Weka.

33 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus, due to a phenomenon called the “curse of correlation”, namely strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on an analysis of our experimental results, we conclude that the two problems are caused by inherent deficiencies in the consensus approach. We therefore create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to compare the proposed algorithm with 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

32 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, arising when there is insufficient information to draw inferences about users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), whose computational cost grows more slowly with data size but which requires more data to reach an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: Cold-start, expectation propagation, multi-armed bandits, Thompson sampling, variational inference.

31 Dynamic Bayesian Networks Modeling for Inferring Genetic Regulatory Networks by Search Strategy: Comparison between Greedy Hill Climbing and MCMC Methods

Authors: Huihai Wu, Xiaohui Liu

Abstract:

Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring interactions among genes. Averaging a collection of models for predicting the network is preferable to relying on a single high-scoring model. In this paper, two kinds of model searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which is better for DBN models. Different types of experiments have been carried out to benchmark these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through synthetic as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.
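
For intuition, a bare-bones greedy hill-climbing search with restarts over network structures can be sketched as below; the BIC-style score for linear-Gaussian parent sets is a simplification of DBN scoring, where edges would additionally respect the time-slice ordering (acyclicity checks are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 5
X = rng.normal(size=(n, p))
X[:, 1] += 2 * X[:, 0]               # plant an edge 0 -> 1
X[:, 3] += X[:, 1] - X[:, 2]         # plant edges 1 -> 3, 2 -> 3

def bic(child, parents):
    """BIC of a linear-Gaussian model of `child` given `parents`."""
    A = np.hstack([X[:, list(parents)], np.ones((n, 1))])
    resid = X[:, child] - A @ np.linalg.lstsq(A, X[:, child], rcond=None)[0]
    return -n * np.log(resid.var() + 1e-12) - (len(parents) + 1) * np.log(n)

def score(G):                        # G[j] = set of parents of node j
    return sum(bic(j, G[j]) for j in range(p))

def hill_climb(G):
    current, improved = score(G), True
    while improved:
        improved = False
        for i in range(p):
            for j in range(p):
                if i == j:
                    continue
                # Toggle edge i -> j and keep the change only if it helps
                G[j].symmetric_difference_update({i})
                s = score(G)
                if s > current + 1e-9:
                    current, improved = s, True
                else:
                    G[j].symmetric_difference_update({i})   # revert
    return G, current

best_G, best_s = None, -np.inf
for restart in range(5):             # the "restarts" in GSR
    G = {j: set(int(v) for v in rng.choice(p, rng.integers(0, 2),
                                           replace=False)) - {j}
         for j in range(p)}
    G, s = hill_climb(G)
    if s > best_s:
        best_G, best_s = {j: set(ps) for j, ps in G.items()}, s

print({j: sorted(ps) for j, ps in best_G.items()})
```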

Keywords: Genetic regulatory network, Dynamic Bayesian network, GSR, MCMC.

30 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is an unavoidable trade-off in building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), in which typical big data computing consists of SQL queries with aggregates, joins, and space-time condition selections executed over massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A suitable metric was introduced to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
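
The shape of such an empirical formula can be illustrated with an ordinary least-squares fit over the four factors listed above; the data here are synthetic placeholders rather than the HBDP measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 60
# Factors varied in the tests: CPU benchmark score, memory (GB),
# virtual hosts per physical host, number of physical hosts
cpu   = rng.uniform(500, 2000, n)
mem   = rng.choice([32, 64, 128], n)
vdiv  = rng.choice([1, 2, 4], n)
hosts = rng.integers(3, 12, n)

# Synthetic "measured" query throughput playing the role of the metric
perf = 0.8 * cpu + 3.0 * mem + 40.0 * hosts - 25.0 * vdiv \
       + rng.normal(0, 50, n)

F = np.column_stack([cpu, mem, vdiv, hosts])
model = LinearRegression().fit(F, perf)
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# The fitted formula then ranks candidate cluster configurations
candidates = np.array([[1500, 64, 2, 8], [1000, 128, 1, 10]])
print("predicted performance:", model.predict(candidates))
```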

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance empirical formula, typical SQL query tasks.

29 Identifying Business Opportunities Based on Patent and Trademark Portfolios: A Technology-Based Service Industry Case

Authors: Mingook Lee, Sungjoo Lee

Abstract:

As technology-based service industries grow drastically worldwide, companies are recognizing the importance of market preoccupancy and have made efforts to capture a large market to gain the upper hand. To this end, a focus on patents can be used to determine the properties of a technology, as well as to capture advantages in technical skills in comparison with a firm’s competitors. However, technology-based services depend not only on their technological value but also on their economic value, due to the recognized worth that is passed on to a plurality of users. Thus, it is important to determine whether there are any competitors in the target areas and what services they provide in a given field. Despite this importance, little effort has been made to systematically benchmark competitors in order to identify business opportunities. This study therefore aims not only to identify the position of each technology-centered service company in complex market dynamics, but also to discover new business opportunities. To do so, we consider technology and market environments simultaneously by utilizing patent data as a representative proxy for technology and trademark data as an index of a firm’s target goods and services. Theoretically, this is one of the earliest attempts to combine patent data and trademark data to analyze corporate strategies. In practice, the research results are expected to be used as a decision criterion to diagnose the economic value that companies can obtain by entering a market, as well as the technological value to be passed on to their customers. Thus, the proposed approach can be useful in supporting effective technology and business strategies in a firm.

Keywords: Business opportunity, patent, portfolio analysis, trademark.

28 Construction Unit Rate Factor Modelling Using Neural Networks

Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula

Abstract:

Factors affecting construction unit cost vary depending on a country’s political, economic, social and technological inclinations, and they have been studied from various perspectives. Analysis of cost factors requires an appreciation of a country’s practices, and the identified cost factors provide an indication of a country’s construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation, and their breakdown, using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey and, employing SPSS factor analysis, were reduced to eight. The eight factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22%, and financial delays, project feasibility, and overhead and profit each at 11%. Project location, material availability and the corruption perception index had minimal impact on the unit cost in the training data provided. Quantified cost factors can be incorporated into unit cost estimation models (UCEM) to produce more accurate estimates. This can improve the cost estimation of infrastructure projects and establish a benchmark standard to assist the alignment of work practices and the training of new staff, permitting the ongoing development of best practices in cost estimation.
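
A small feed-forward network for this mapping can be sketched with scikit-learn; the eight inputs mirror the factors retained after factor analysis, and the data are random placeholders rather than the survey responses of the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
# Eight retained cost factors (scaled scores), e.g. political environment,
# contractor capacity, financial delays, project feasibility, ...
X = rng.uniform(0, 1, (200, 8))
# Placeholder unit rate driven mostly by the first two factors
y = 0.44 * X[:, 0] + 0.22 * X[:, 1] + 0.11 * X[:, 2:5].sum(axis=1) \
    + 0.01 * rng.normal(size=200)

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X, y)

# A crude sensitivity check: perturb one factor at a time
base = net.predict(np.full((1, 8), 0.5))[0]
for j in range(8):
    probe = np.full((1, 8), 0.5)
    probe[0, j] = 0.6
    print(f"factor {j}: delta = {net.predict(probe)[0] - base:+.4f}")
```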

Keywords: Construction cost factors, neural networks, roadworks, Zambian Construction Industry.

27 An Approach towards Designing an Energy Efficient Building through Embodied Energy Assessment: A Case of Apartment Building in Composite Climate

Authors: Ambalika Ekka

Abstract:

In today’s world, the growing demand for urban built forms has resulted in the production and consumption of building materials, i.e. embodied energy in building construction, leading to pollution and greenhouse gas (GHG) emissions. New buildings therefore offer a unique opportunity to implement more energy-efficient designs without compromising building performance. The embodied energy of building materials forms the major contribution to embodied energy in buildings. This paper develops an approach to designing an energy-efficient apartment building through embodied energy assessment. It discusses the trend of residential development in Rourkela through three case studies of contemporary houses, covering architectural elements, number of storeys, predominant material use and plot sizes using primary data, which results in the identification of the predominant materials used and other characteristics of the urban area. Further, the embodied energy coefficients of the dominant building materials and of alternative materials manufactured by Indian industry are taken from a secondary source, i.e. the literature. The paper analyses the embodied energy by estimating the materials and operational energy of the proposed building, then alters the specifications of the materials for the building components, i.e. walls, flooring, windows, insulation and roof, using Res Build India software, and compares the different options against sustainability parameters. The paper finds that only the autoclaved aerated concrete block option reaches the energy performance index benchmark of 69.35 kWh/m2 per year, saving 4% of operational energy; embodied energy has no comparable index, and of all the materials considered, this option has the highest embodied energy at 23,206,202.43 MJ.

Keywords: Energy efficient, embodied energy, energy performance index, building materials.

26 Enhancing Hand Efficiency of Smart Glass Cleaning Robot through Generative Design Module

Authors: Pankaj Gupta, Amit Kumar Srivastava, Nitesh Pandey

Abstract:

This article explores the domain of generative design in order to enhance the development of robot designs for innovative and efficient maintenance approaches for tall buildings. The study aims to optimize the design of the robot’s hands, minimizing mass and volume while ensuring they can withstand the specified pressure with equal strength. The research procedure is structured and systematic, and the purposes of the optimization are to enhance the efficiency of the robot and to reduce manufacturing expenses. The project investigates the application of generative design to product optimization. Autodesk Fusion 360 offers the capability to apply generative design functionality directly to a solid model. The effort involved creating a solid model of the Smart Glass Cleaning Robot and optimizing one of its components, the hand, using generative techniques. The article examines the designs, outcomes, and procedure in detail. The specified loads serve as a benchmark for creating designs that can endure the necessary level of pressure and preserve their structural integrity. The efficacy of the generative design process is contingent upon the selection of materials, as different materials possess distinct physical attributes. The study utilizes five different materials, namely steel, stainless steel, titanium, aluminum, and CFRP (Carbon Fiber Reinforced Polymer), in order to investigate a range of design possibilities.

Keywords: Generative design, mass and volume optimization, material strength analysis, smart glass cleaning robot.

25 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities

Authors: Yu-Cheng Chou, Po Ting Lin

Abstract:

The network for delivering commodities has been an important design problem in our daily lives and in many transportation applications. Delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network; the system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; and (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and to find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated, and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It is shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are used to demonstrate the proposed method, and comparisons between the original and the reduced networks show that the proposed method is very efficient.

Keywords: Multiple Commodities, Multi-State Flow Network (MSFN), Time Constraints, Worst-Case Reliability (WCR).

24 Numerical Investigation of Nozzle Shape Effect on Shock Wave in Natural Gas Processing

Authors: Esam I. Jassim, Mohamed M. Awad

Abstract:

Natural gas flow contains undesirable solid particles, liquid condensation, and/or oil droplets, and requires reliable removal equipment to perform filtration. Recent natural gas processing applications demand compactness and reliability of process equipment. Since conventional means are sophisticated in design, poor in efficiency, and lacking in robustness, a supersonic nozzle has been introduced as an alternative means to meet such demands. A 3-D convergent-divergent nozzle is simulated using a commercial code for nozzle pressure ratios (NPR) varying from 1.2 to 2. Six different nozzle shapes are numerically examined to illustrate the position of the shock wave, as this spot can be considered a benchmark for particle separation. Rectangular, triangular, circular, elliptical, pentagonal, and hexagonal nozzles, all with the same cross-sectional area, are simulated using the Fluent code. The simple one-dimensional inviscid theory does not describe the actual features of the fluid flow precisely, as it ignores the impact of nozzle configuration on the flow properties. The CFD simulation results, however, show that nozzle geometry influences the flow structures, including the location of the shock wave. The CFD analysis predicts shock appearance when p01/pa > 1.2 for almost all geometries, and locates it at the lower area ratio (Ae/At). Simulation results show that the shock wave in the elliptical nozzle is farthest from the throat among the geometries at relatively small NPR; as NPR increases, the hexagonal nozzle becomes the farthest. The numerical results are compared with available experimental data and show good agreement in terms of shock location and flow structure.
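
The one-dimensional inviscid theory referred to above gives a quick baseline for conditions at a station in the divergent section; the sketch below solves the isentropic area-Mach relation with a root finder and applies the normal-shock pressure jump there (the value γ = 1.4 and the area ratio are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

g = 1.4  # ratio of specific heats (illustrative, air-like ideal gas)

def area_ratio(M):
    """Isentropic A/A* as a function of Mach number."""
    return (1.0 / M) * ((2 / (g + 1)) * (1 + (g - 1) / 2 * M**2)) \
           ** ((g + 1) / (2 * (g - 1)))

def mach_from_area(ar, supersonic=True):
    lo, hi = (1.0001, 10.0) if supersonic else (1e-4, 0.9999)
    return brentq(lambda M: area_ratio(M) - ar, lo, hi)

# Supersonic Mach and shock pressure jump at a station with Ae/At = 1.3
M1 = mach_from_area(1.3)
p1_p0 = (1 + (g - 1) / 2 * M1**2) ** (-g / (g - 1))  # static/total, isentropic
p2_p1 = 1 + 2 * g / (g + 1) * (M1**2 - 1)            # normal shock jump
print(f"M1 = {M1:.3f}, p1/p01 = {p1_p0:.3f}, p2/p1 = {p2_p1:.3f}")
```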

Keywords: CFD, Particle Separation, Shock wave, Supersonic Nozzle.

23 A Set Theory Based Factoring Technique and Its Use for Low Power Logic Design

Authors: Padmanabhan Balasubramanian, Ryuta Arisaka

Abstract:

Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. A novel algebraic factorization heuristic for single-output combinational logic functions is presented in this paper, developed on the set theory paradigm. The impact of factoring is analyzed mainly from a low-power design perspective for standard cell based digital designs. The physical implementations of a number of MCNC/IWLS combinational benchmark functions and sub-functions are compared before and after factoring, based on a simple technology mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) rather than cells corresponding to optimized complex logic. The power results were obtained at the gate level by means of an industry-standard power analysis tool from Synopsys, targeting a 130nm (0.13μm) UMC CMOS library, for the typical case. The wire loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique in comparison with other existing methods from a low-power perspective, for arbitrary examples. Though the benchmark experimentation reports mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85%, respectively. In terms of leakage power, the average savings for the factored forms were significant, to the tune of 23.48%. The factored solution is expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.
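
As a toy illustration of set-based algebraic factoring, and not the paper’s heuristic itself, the sketch below represents a sum-of-products expression as a set of cubes, repeatedly pulls out the literal shared by the most cubes, and prints the factored form:

```python
def factor(cubes):
    """cubes: collection of frozensets of literals, e.g. {'a','b'} is a*b."""
    cubes = [set(c) for c in cubes]
    sop = " + ".join("".join(sorted(c)) or "1" for c in cubes)
    lits = {l for c in cubes for l in c}
    if not lits:
        return sop
    # Literal occurring in the most cubes is the divisor candidate
    best = max(lits, key=lambda l: sum(l in c for c in cubes))
    if sum(best in c for c in cubes) < 2:
        return sop                     # nothing worth factoring out
    inside = [c - {best} for c in cubes if best in c]
    outside = [c for c in cubes if best not in c]
    expr = best + "(" + factor(inside) + ")"
    if outside:
        expr += " + " + factor(outside)
    return expr

# F = ab + ac + ad + bc  ->  a(b + c + d) + bc
f = [frozenset("ab"), frozenset("ac"), frozenset("ad"), frozenset("bc")]
print(factor(f))
```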

Keywords: Factorization, set theory, logic function, standard cell based design, low power.

22 Accurate And Efficient Global Approximation using Adaptive Polynomial RSM for Complex Mechanical and Vehicular Performance Models

Authors: Y. Z. Wu, Z. Dong, S. K. You

Abstract:

Global approximation using a metamodel for a complex mathematical function or computer model over a large variable domain is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex multiphysics systems. To overcome the limitations of existing response surface (RS), surrogate, or metamodeling methods for complex models over large variable domains, a new adaptive and regressive RS modeling method using quadratic functions and local area model improvement schemes is introduced. The method applies an iterative, Latin hypercube sampling based RS update process; divides the entire domain of design variables into multiple cells; identifies rougher cells with large modeling error; and further divides these cells along the roughest dimension direction. A small number of additional sampling points from the original, expensive model are added over the small, isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines the local RS cells to regenerate the global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and the use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.
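
The iterative update loop can be condensed as below: a Latin hypercube design seeds a quadratic response surface, the neighbourhood of the worst-fitting sample is refined with a few extra expensive points, and the fit is repeated until a tolerance is met. A single global quadratic stands in for the per-cell local models of the actual method:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def expensive(X):                    # placeholder for the costly model
    return np.sin(3 * X[:, 0]) * X[:, 1] + X[:, 0] ** 2

rng = np.random.default_rng(7)
dim, tol = 2, 0.05
sampler = qmc.LatinHypercube(d=dim, seed=7)
X = qmc.scale(sampler.random(20), [-1, -1], [1, 1])   # initial LHS design
y = expensive(X)
poly = PolynomialFeatures(degree=2)

for it in range(30):
    # One global quadratic RS; the actual method fits one per cell
    rs = LinearRegression().fit(poly.fit_transform(X), y)
    resid = np.abs(rs.predict(poly.transform(X)) - y)
    if resid.max() < tol:
        break
    # Refine around the roughest sample with a few extra expensive points
    centre = X[resid.argmax()]
    X_new = np.clip(centre + 0.1 * rng.normal(size=(4, dim)), -1, 1)
    X = np.vstack([X, X_new])
    y = np.concatenate([y, expensive(X_new)])

print(f"{len(X)} samples, max residual {resid.max():.4f}")
```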

Keywords: Global approximation, polynomial response surface, domain decomposition, domain combination, multiphysics modeling, hybrid powertrain optimization.

21 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models

Authors: Danielle Shackley, Yetunde Folajimi

Abstract:

As more people turn to the internet seeking health-related information, there is a growing risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods such as count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, then reproduced those same models with the feature added, evaluating with precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and a constant 75.2% accuracy. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we obtained a 1.9% improvement in the precision score with the Complement model. Future expansion of this work could include replicating the experiment and substituting a deep learning neural network model for Naive Bayes.
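
The feature-weighting experiment can be sketched as follows: a Complement Naive Bayes model over TF-IDF features, with a sentiment polarity score appended as one extra (shifted non-negative) feature. The VADER analyzer and the toy headlines are stand-ins for the tooling and data of the study:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

headlines = ["Miracle cure eliminates all viruses overnight",         # fake
             "New vaccine trial reports mild side effects",           # real
             "Doctors hide this one weird trick from patients",       # fake
             "Study links regular exercise to lower blood pressure"]  # real
labels = np.array([1, 0, 1, 0])

# Baseline features: TF-IDF over the headlines
vec = TfidfVectorizer()
X_text = vec.fit_transform(headlines)

# Extra feature: sentiment polarity, shifted to [0, 1] so that
# ComplementNB's non-negativity requirement still holds
sia = SentimentIntensityAnalyzer()
sent = np.array([[(sia.polarity_scores(h)["compound"] + 1) / 2]
                 for h in headlines])
X_full = hstack([X_text, csr_matrix(sent)])

for name, X in [("without sentiment", X_text), ("with sentiment", X_full)]:
    model = ComplementNB().fit(X, labels)
    print(name, "train accuracy:", model.score(X, labels))
```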

Keywords: Sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model.

20 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading, and it needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, which aims to choose a feature subset from the original set to improve classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our work. Detection performance improved in two respects: on the one hand, the detection runtime decreased, and on the other, classification accuracy increased because of the elimination of redundant features and the reduction of the datasets’ dimensions.
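
The four steps can be sketched compactly: cluster the features (columns) by K-means on their standardized profiles, keep the feature nearest each cluster centre, and classify with an SVM; the synthetic data is a placeholder for a real fake-news dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=60, n_informative=10,
                           n_redundant=30, random_state=0)

# Steps 1-2: group similar features by clustering the standardized columns
cols = (X - X.mean(0)) / X.std(0)
k = 12
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(cols.T)

# Step 3: keep the feature closest to each cluster centre
selected = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(cols.T[members] - km.cluster_centers_[c], axis=1)
    selected.append(members[d.argmin()])

# Step 4: classify with SVM on the reduced feature set
print("all features :", cross_val_score(SVC(), X, y, cv=5).mean())
print("selected %2d  :" % k,
      cross_val_score(SVC(), X[:, selected], y, cv=5).mean())
```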

Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.

19 Numerical Investigation of Multiphase Flow in Pipelines

Authors: Gozel Judakova, Markus Bause

Abstract:

We present and analyze reliable numerical techniques for simulating complex flow and transport phenomena related to natural gas transportation in pipelines. Such problems are of high interest in the fields of petroleum and environmental engineering. Modeling and understanding natural gas flow and transformation processes during transportation is important for the sake of physical realism and for the design and operation of pipeline systems. In our approach, a two-fluid flow model based on a system of coupled hyperbolic conservation laws is considered for describing natural gas flow undergoing hydrate formation. The accurate numerical approximation of two-phase gas flow remains a subject of strong interest in the scientific community. Such hyperbolic problems are characterized by solutions with steep gradients or discontinuities, and their approximation by standard finite element techniques typically gives rise to spurious oscillations and numerical artefacts. Recently, stabilized and discontinuous Galerkin finite element techniques have attracted researchers’ interest; they are well adapted to the hyperbolic nature of our two-phase flow model. In the presentation, a streamline upwind Petrov-Galerkin approach and a discontinuous Galerkin finite element method for the numerical approximation of our flow model of two coupled systems of Euler equations are presented. The efficiency and reliability of stabilized continuous and discontinuous finite element methods for the approximation are then carefully analyzed, and the potential of either class of numerical schemes is investigated. In particular, standard benchmark problems of two-phase flow, such as the shock tube problem, are used for the comparative numerical study.
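
The shock tube benchmark mentioned above can be reproduced for the single-phase Euler system in a few lines of a first-order finite-volume scheme; this Rusanov-flux sketch is a simpler stand-in for the stabilized and discontinuous Galerkin discretizations studied in the paper:

```python
import numpy as np

g = 1.4                        # ratio of specific heats
N, L, T = 400, 1.0, 0.2        # cells, domain length, final time
x = (np.arange(N) + 0.5) * L / N

# Sod initial data: (rho, u, p) left/right of the diaphragm at x = 0.5
rho = np.where(x < 0.5, 1.0, 0.125)
u = np.zeros(N)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.stack([rho, rho * u, p / (g - 1) + 0.5 * rho * u**2])  # conserved

def flux(U):
    rho, m, E = U
    u = m / rho
    p = (g - 1) * (E - 0.5 * rho * u**2)
    F = np.stack([m, m * u + p, (E + p) * u])
    return F, np.abs(u) + np.sqrt(g * p / rho)   # flux and max wave speed

t, dx = 0.0, L / N
while t < T:
    F, a = flux(U)
    dt = min(0.4 * dx / a.max(), T - t)          # CFL-limited time step
    # Rusanov (local Lax-Friedrichs) interface flux
    amax = np.maximum(a[:-1], a[1:])
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * amax * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    t += dt

print("density range at t=%.2f: %.3f .. %.3f" % (t, U[0].min(), U[0].max()))
```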

Keywords: Discontinuous Galerkin method, Euler system, inviscid two-fluid model, streamline upwind Petrov-Galerkin method, two-phase flow.

18 Utilization of Cement Kiln Dust in Adsorption Technology

Authors: Yousef Swesi, Asia Elmeshergi, Abdelati Elalem, Walid Alfoghy

Abstract:

This paper studies the heavy metal pollution of the soils around a cement plant in Libya, Suk-Alkhameas, and the surrounding urban areas, caused by emitted cement kiln dust (CKD). Soil samples were collected from sites in four directions around the cement factory, at distances of 250 m, 1000 m, and 3000 m from the factory and at 0-10 cm depth. These samples were analyzed for Fe(III), Zn(II), and Pb(II) as the major pollutants, and the values were compared with soils 25 km from the factory as reference or control samples. The results show that the concentration of Fe ions in the surface soil was within the acceptable range of 1000 ppm. However, for Zn and Pb ions, the concentrations on the east and north sides of the factory were found to be sixfold higher than the benchmark level. This high value was attributed to the wind, which usually blows from south to north and from west to east. This work also investigates the adsorption isotherms and adsorption efficiency of CKD as an adsorbent of the heavy metal ions (Fe(III), Zn(II), and Pb(II)) from the polluted soils of Suk-Alkhameas. The investigation was conducted using batch and fixed-bed column flow techniques. The adsorption efficiency of the studied heavy metal ions onto CKD depends on the pH of the solution: the optimum pH values are found to be in the range of 8-10, and efficiency decreases at lower pH values. The removal efficiency of these heavy metal ions was 93% for Pb, 94% for Zn, and 98% for Fe ions at an adsorbent concentration of 10 g/L. The maximum removal efficiency was achieved at 50-60 minutes of contact time, at which equilibrium is reached. Fixed-bed column measurements were also made to evaluate CKD as an adsorbent for the heavy metals. The results obtained are in good agreement with the Langmuir and Drachsal assumption of multilayer formation on the adsorbent surface.
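
The Langmuir isotherm invoked in the closing sentence can be fitted to batch equilibrium data with a standard nonlinear least-squares call; the concentration/uptake pairs below are illustrative placeholders, not the measured values of the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """Equilibrium uptake q (mg/g) vs. equilibrium concentration C (mg/L)."""
    return q_max * K * C / (1.0 + K * C)

# Illustrative equilibrium data for a heavy metal ion on CKD
C_eq = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])    # mg/L
q_eq = np.array([3.1, 6.0, 9.2, 12.5, 15.0, 16.4])     # mg/g

(q_max, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=(20.0, 0.05))
print(f"q_max = {q_max:.1f} mg/g, K = {K:.3f} L/mg")
```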

Keywords: Adsorption, Cement Kiln dust (CKD & CAC), Isotherms, Zn and Pb ions.

17 Meta Model Based EA for Complex Optimization

Authors: Maumita Bhattacharya

Abstract:

Evolutionary Algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations, so the use of evolutionary algorithms in such problem domains is practically prohibitive. An attractive alternative is to build meta-models, or approximations of the actual fitness functions, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta-models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta-models (in this case an approximate model generated by Support Vector Machine regression) to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.

Keywords: Meta model, evolutionary algorithm, stochastic technique, fitness function, optimization, support vector machine.

16 Development of an Impregnated Diamond Bit with an Improved Rate of Penetration

Authors: Tim Dunne, Weicheng Li, Chris Cheng, Qi Peng

Abstract:

Deeper petroleum reservoirs are more challenging to exploit due to the high hardness and abrasiveness of the formations. A cutting structure that consists of particulate diamond impregnated in a supporting matrix is found to be effective. Diamond impregnated bits are favored in these applications due to the higher thermal stability of the matrix material. The diamond particles scour or abrade away concentric grooves while the rock formation adjacent to the grooves is fractured and removed. The matrix material supporting the diamond wears away, allowing the dulled surface diamonds to fall out and exposing other embedded, intact, sharp diamonds to continue the operation. Minimizing erosion of the matrix is an important design consideration, as the life of the bit can be extended by preventing early diamond pull-out. Key parameters such as diamond concentration, tungsten carbide content and metal binder must be carefully balanced during development. Described herein is the design of experiments for developing and lab testing eight unique samples. ASTM B611 wear testing was performed to benchmark the material performance against baseline products, with further scanning electron microscopy and microhardness evaluations. Recipe S5, with 25/35 mesh diamond at a narrow size distribution and high concentration, blended with fine tungsten carbide and a Co-Cu-Fe-P metal binder, had the best performance, showing a 19% improvement in the ASTM B611 wear test compared with the reference material. In a field trial, its rate of penetration (ROP) was measured at 15 m/h, compared to 9.5, 7.8, and 6.8 m/h for other commercial impregnated bits in the same formation. A second round of optimization of recipe S5 for higher wear resistance is further reported.

Keywords: Diamond containing material, grit hot press insert, impregnated diamond, insert, rate of penetration, ultrahard formation.
