Search results for: Weighted sum

43 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon

Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison

Abstract:

Wax and asphaltene are high molecular weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations that lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component of the inhibitor so as to maximize the viscosity reduction of crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to design an optimizing model. The experiments showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.

Keywords: Asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax.

42 The Household-Based Socio-Economic Index for Every District in Peninsular Malaysia

Authors: Nuzlinda Abdul Rahman, Syerrina Zakaria

Abstract:

Deprivation indices are widely used in public health studies. These indices are also referred to as indices of inequality or disadvantage. Although many such indices have been constructed before, existing indices are considered less appropriate for application to other countries or areas with different socio-economic conditions and geographical characteristics. The objective of this study is to construct an index based on the geographical and socio-economic factors of Peninsular Malaysia, defined as the weighted household-based deprivation index. The study employed variables on household items, household facilities, school attendance and education level obtained from the Malaysia 2000 census report. Factor analysis was used to extract latent variables from the indicators, i.e., to reduce the observable variables to a smaller number of components or factors. Based on the factor analysis, two extracted factors were selected, labelled the Basic Household Amenities factor and the Middle-Class Household Item factor. Districts with lower index values are located in the less developed states such as Kelantan, Terengganu and Kedah, while areas with high index values are located in developed states such as Pulau Pinang, W.P. Kuala Lumpur and Selangor.

Keywords: Factor Analysis, Basic Household Amenities, Middle-Class Household Item, Socio-economic Index

41 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow

Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi

Abstract:

This paper primarily intends to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface comprises three major steps. The first, a statistical analysis of precipitation characteristics, builds a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method; here, however, the user can also select Kriging, IDW (Inverse Distance Weighted) and Spline methods in addition to the traditional ones. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs of gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain watershed/topographic characteristics, which are the most important factors governing stream-flows. Finally, the simulated daily flow time series are compared with the observed time series; the results produced by the integrated GIS interface fit the observations closely, and the relationship between the topographic/watershed characteristics and the stream-flow time series is highly correlated.
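
For illustration, here is a minimal sketch of the IDW option mentioned above; the function name, the power parameter and the example coordinates are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def idw_interpolate(xy_gauged, values, xy_target, power=2.0):
    """Inverse Distance Weighted (IDW) estimate at an ungauged point.

    xy_gauged : (n, 2) coordinates of gauged stations
    values    : (n,) observed mean daily flows at those stations
    xy_target : (2,) coordinates of the ungauged station
    """
    d = np.linalg.norm(xy_gauged - xy_target, axis=1)
    if np.any(d == 0):                     # target coincides with a gauge
        return values[np.argmin(d)]
    w = 1.0 / d**power                     # closer stations weigh more
    return np.sum(w * values) / np.sum(w)

# Three gauged stations and one ungauged point (illustrative numbers)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
flows = np.array([12.0, 8.0, 15.0])
print(idw_interpolate(stations, flows, np.array([2.0, 3.0])))
```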

Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.

40 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids, but it must be used wisely because the storage capacity of each Grid site is limited; it is therefore important to design an effective strategy for the replica replacement task. In this paper, a dynamic data replication strategy called Enhanced Latest Access Largest Weight (ELALW), an enhanced version of the Latest Access Largest Weight strategy, is proposed. ELALW replaces replicas based on the expected number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when several sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on a response time determined from the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results using OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage and storage resource usage.
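
A sketch of the replica scoring just described; the field names, the additive combination of the four cost terms and the per-hop penalty are illustrative assumptions, since the abstract does not give ELALW's exact formula.

```python
HOP_COST = 0.05  # assumed per-hop network penalty (seconds)

def response_time(replica, file_size):
    """Estimate the response time of one replica site; lower is better.
    Terms mirror the four factors named in the abstract: transfer time,
    storage access latency, queued requests, and node distance."""
    transfer = file_size / replica["bandwidth"]
    queue_wait = replica["queued_requests"] * replica["storage_latency"]
    return (transfer + replica["storage_latency"] + queue_wait
            + replica["hops"] * HOP_COST)

def select_best_replica(replicas, file_size):
    """Pick the replica location with the smallest estimated response time."""
    return min(replicas, key=lambda r: response_time(r, file_size))

replicas = [
    {"bandwidth": 100.0, "storage_latency": 0.02, "queued_requests": 3, "hops": 2},
    {"bandwidth": 40.0,  "storage_latency": 0.01, "queued_requests": 0, "hops": 1},
]
print(select_best_replica(replicas, file_size=500.0))
```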

Keywords: Data grid, data replication, simulation, replica selection, replica placement.

39 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks

Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz

Abstract:

Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn increase the signalling overhead. To guarantee service continuity, minimize unnecessary handovers and reduce signalling overhead in heterogeneous networks, it is essential to model the handover decision problem properly. In this paper, we model the handover decision problem using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting policy that combines entropy and standard-deviation weighting, and introduces a hybrid weighting control parameter to balance the impact of the two weightings on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, than the existing methods.
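
A minimal sketch of the hybrid weighting policy, assuming a convex combination controlled by the parameter alpha; the abstract names such a control parameter but does not give the exact combination rule.

```python
import numpy as np

def hybrid_weights(X, alpha=0.5):
    """Criteria weights as alpha * entropy-weights + (1 - alpha) * std-weights.

    X : (candidate cells x criteria) decision matrix, positive entries."""
    P = X / X.sum(axis=0)                                # column-normalise
    kconst = 1.0 / np.log(X.shape[0])
    e = -kconst * np.sum(P * np.log(P + 1e-12), axis=0)  # entropy per criterion
    w_ent = (1 - e) / np.sum(1 - e)                      # entropy weights
    s = X.std(axis=0)
    w_std = s / s.sum()                                  # std-deviation weights
    w = alpha * w_ent + (1 - alpha) * w_std
    return w / w.sum()

X = np.array([[30.0, 0.8, 5.0],    # candidate cells x criteria (illustrative)
              [25.0, 0.5, 9.0],
              [28.0, 0.9, 4.0]])
print(hybrid_weights(X, alpha=0.6))
```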

Keywords: Handover, HetNets, interference, MADM, small cells, TOPSIS, weight.

38 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by ongoing changes in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the five-parameter Wakeby distribution as the theoretical reference model. The number and the quality of its parameters indicate that this distribution may be an appropriate choice for interpolating hydrological variables; moreover, the Wakeby distribution is particularly suitable for describing phenomena producing heavy tails. The estimation methods proposed for determining the values of the Wakeby parameters are those used for density functions with heavy tails. The commonly used procedure is the classic method of probability weighted moments (PWM), although this has often shown difficulty of convergence, or convergence to an inappropriate parameter configuration. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. The motivation lies in the sampling and asymptotic properties of maximum likelihood estimators, which complement the estimates with indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
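
For reference, the five-parameter Wakeby distribution is defined through its quantile function and can be evaluated directly; the parameter values in the example are illustrative only.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function for 0 < F < 1 (beta, delta nonzero):
    x(F) = xi + (alpha/beta)*(1 - (1-F)**beta)
              - (gamma/delta)*(1 - (1-F)**(-delta))"""
    F = np.asarray(F, dtype=float)
    return (xi + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
               - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

# Median and upper-tail quantiles for a heavy-tailed parameter set
print(wakeby_quantile([0.5, 0.9, 0.99],
                      xi=0.0, alpha=5.0, beta=0.2, gamma=1.0, delta=0.3))
```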

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

37 Portfolio Management for Construction Company during Covid-19 Using AHP Technique

Authors: Sareh Rajabi, Salwa Bheiry

Abstract:

In general, COVID-19 has created many financial and non-financial damages to the economy and the community. The level and severity of COVID-19 as a pandemic vary across regions and project types, and the virus has recently emerged as one of the most imperative risk management factors worldwide. Therefore, as part of portfolio management assessment, it is essential to evaluate the severity of such a risk on projects and programs at the portfolio management level in order to avoid risky portfolios. COVID-19 struck particularly hard in South America, parts of Europe and the Middle East, and the pandemic affected the whole world through lockdowns, interruptions in supply chain management, health and safety requirements, and transportation and commercial impacts. This research therefore proposes the Analytical Hierarchy Process (AHP) to analyze and assess a pandemic case like COVID-19 and its impacts on construction projects. The AHP technique uses four sub-criteria: health and safety, commercial risk, completion risk and contractual risk, to evaluate each project and program. The result provides decision makers with information on which projects carry higher or lower risk under a COVID-19 or similar pandemic scenario, so they can select the most feasible projects within their portfolio, based on effectively weighted criteria, to match the organization's strategies.
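
A minimal sketch of the AHP weighting step over the four sub-criteria named above; the pairwise judgments in the example are invented for illustration.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[i].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri

# Health & safety vs commercial vs completion vs contractual risk
A = [[1,   3,   2,   4],
     [1/3, 1,   1/2, 2],
     [1/2, 2,   1,   3],
     [1/4, 1/2, 1/3, 1]]
w, cr = ahp_weights(A)
print(w, cr)   # cr < 0.1 indicates acceptable consistency
```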

Keywords: Portfolio management, risk management, COVID-19, analytical hierarchy process technique.

36 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer strength in dealing with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space and thereby simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, enabling effective and efficient data analysis by exploring patterns and structures in the data, and can thus be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
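
A minimal sketch of the (unconstrained) weighted kernel k-means assignment step, computing distances to implicit feature-space centroids via the kernel trick; the spatial-constraint term of the paper's algorithm is not reproduced here.

```python
import numpy as np

def weighted_kernel_kmeans(K, w, k, iters=20):
    """Weighted kernel k-means: assign each point to the cluster whose
    implicit feature-space centroid is nearest, using only the kernel
    matrix K. The weights w can down-weight noisy points or outliers."""
    n = K.shape[0]
    labels = (np.arange(n) * k) // n          # deterministic initial split
    for _ in range(iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = labels == c
            s = w[idx].sum()
            if s == 0:
                continue                       # empty cluster
            # ||phi(x_i) - m_c||^2 expanded with kernel evaluations only
            second = K[:, idx] @ w[idx] / s
            third = w[idx] @ K[np.ix_(idx, idx)] @ w[idx] / s**2
            dist[:, c] = np.diag(K) - 2.0 * second + third
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

X = np.array([[0.0], [0.2], [4.9], [5.0], [5.2]])
K = np.exp(-np.square(X - X.T))               # RBF kernel matrix
print(weighted_kernel_kmeans(K, w=np.ones(5), k=2))   # -> [0 0 1 1 1]
```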

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield

35 Finite Element Prediction of Multi-Size Particulate Flow through Two-Dimensional Pump Casing

Authors: K. V. Pagalthivarthi, R. J. Visintainer

Abstract:

Two-dimensional Eulerian (volume-averaged) continuity and momentum equations governing multi-size slurry flow through pump casings are solved by applying a penalty finite element formulation. The computational strategy validated for multi-phase flow through rectangular channels is adapted to the present study. The flow fields of the carrier, mixture and each solids species, and the concentration field of each species, are determined sequentially in an iterative manner. The eddy viscosity field computed using the Spalart-Allmaras model for the pure carrier phase is modified for the presence of particles. The streamline upwind Petrov-Galerkin formulation is used for all the momentum equations for the carrier, mixture and each solids species and for the concentration field of each species. After ensuring mesh-independence of the solutions, results of the multi-size particulate flow simulation are presented to bring out the effect of bulk flow rate, average inlet concentration, and inlet particle size distribution. Mono-size computations using (1) the concentration-weighted mean diameter of the slurry and (2) the D50 size of the slurry are also presented for comparison with the multi-size results.

Keywords: Eulerian-Eulerian model, Multi-size particulate flow, Penalty finite elements, Pump casing, Spalart-Allmaras.

34 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. Many analytical investigations of multiplicative noise invoke exponential or Gamma statistics. More recent advances by the author of this paper utilized Poisson-modulated weighted generalized Laguerre polynomials with controlling parameters under uncorrelated-noise assumptions. In this paper, we investigate the statistics of a multidiversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.

Keywords: Cellular communication, femto- and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process.

33 Dynamic Fault Diagnosis for Semi-Batch Reactor under Closed-Loop Control via Independent Radial Basis Function Neural Network

Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm

Abstract:

In this paper, a robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, while it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in independent mode to model the process dynamics and uses the weighted sum-squared prediction error as the residual. The Recursive Orthogonal Least Squares (ROLS) algorithm is employed to train the model, overcoming the training difficulty of the network's independent mode. A second RBFNN is then used as a fault classifier to isolate faults from the different features contained in the residual vector. Several actuator and sensor faults are simulated in a nonlinear Simulink model of the reactor, and the scheme is used to detect and isolate the faults on-line. The simulation results illustrate the effectiveness and robustness of the scheme even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise.
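
The residual described above reduces to a weighted sum of squared prediction errors over the monitored outputs; a minimal sketch follows, in which the weights and the idea of a fault threshold set from fault-free data are assumptions.

```python
import numpy as np

def weighted_sse_residual(y, y_hat, w):
    """Fault residual r = sum_i w_i * (y_i - y_hat_i)^2, comparing plant
    measurements y with the RBFNN model predictions y_hat."""
    e = np.asarray(y, float) - np.asarray(y_hat, float)
    return float(np.sum(np.asarray(w, float) * e**2))

# Two monitored outputs (illustrative values); a fault would be flagged
# when r exceeds a threshold learned from fault-free runs.
print(weighted_sse_residual([350.2, 1.2], [349.8, 1.5], w=[1.0, 0.5]))
```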

Keywords: Robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control.

32 Development and Acceptance of a Proposed Module for Enhancing the Reading and Writing Skills in Baybayin: The Traditional Writing System in the Philippines

Authors: Maria Venus G. Solares

Abstract:

The ancient Filipinos had their own script, different from the modern Roman alphabet brought by the Spaniards. Called Baybayin, it consists of seventeen letters: three vowels and fourteen consonants. The Baybayin, a traditional writing system, is composed of characters that represent syllables. A proposal in the Philippine Congress to declare Baybayin the national writing system inspired this study, whose main objective was to develop and assess a proposed module for enhancing students' reading and writing skills in Baybayin. The aim was to ensure the acceptability of Baybayin through the proposed module and to meet students' needs in developing their ability to read and write Baybayin. A quasi-experimental research design was used. The data were collected through an initial and a final analysis of the students of Adamson University's ABM 1102, selected by convenience sampling. Based on statistical analysis of the data using the weighted mean, standard deviation, and paired t-tests, the proposed module helped improve the students' literacy skills, and the response exercises in the module improved their acceptance of Baybayin. The study showed a significant difference between the students' scores before and after the use of the module, and the students' assessment of their reading and writing skills in Baybayin was highly acceptable. This study will help develop students' reading and writing skills in Baybayin and support the teaching of Baybayin in response to the revival of a long-forgotten part of Philippine culture.

Keywords: Baybayin, proposed module, ancient writing, acceptability.

31 Aircraft Supplier Selection using Multiple Criteria Group Decision Making Process with Proximity Measure Method for Determinate Fuzzy Set Ranking Analysis

Authors: C. Ardil

Abstract:

The aircraft supplier selection process, a fundamental supply chain problem, is a multi-criteria group decision problem that has a significant impact on the performance of the entire supply chain. Practical situations frequently involve incomplete and uncertain information, making it difficult for decision-makers to express their opinions on candidates with precise and definite values. To solve the aircraft supplier selection problem in an environment of incomplete and uncertain information, a proximity measure method using determinate fuzzy numbers is proposed. The weights of the decision makers are predetermined to be equal, and entropic criteria weights are calculated from each decision maker's decision matrix. Additionally, for determinate fuzzy numbers, it is proposed to use the weighted normalized Minkowski distance function and the Hausdorff distance function to determine the ranking order of the alternatives. A numerical example of aircraft supplier selection is provided to demonstrate the applicability, effectiveness, validity and rationality of the proposed method.
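
Minimal sketches of the two distance functions named above, in their standard crisp forms; the paper applies them to determinate fuzzy numbers, which this sketch does not reproduce.

```python
import numpy as np

def weighted_minkowski(a, b, w, p=2):
    """Weighted Minkowski distance between two alternative score vectors."""
    diff = np.abs(np.asarray(a, float) - np.asarray(b, float))
    return float(np.sum(w * diff**p) ** (1.0 / p))

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets A and B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

print(weighted_minkowski([0.7, 0.4], [0.5, 0.9], w=np.array([0.6, 0.4])))
print(hausdorff([[0, 0], [1, 0]], [[0, 1], [1, 1]]))
```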

Keywords: Aircraft supplier selection, multiple criteria decision making, fuzzy sets, determinate fuzzy sets, intuitionistic fuzzy sets, proximity measure method, Minkowski distance function, Hausdorff distance function, PMM, MCDM

30 Graph-based High Level Motion Segmentation using Normalized Cuts

Authors: Sungju Yun, Anjin Park, Keechul Jung

Abstract:

Motion capture devices have been utilized in producing several types of content, such as movies and video games. However, since motion capture devices are expensive and inconvenient to use, motions segmented from captured data are recycled and synthesized for use in other content; such motions have generally been segmented manually by content producers, so automatic motion segmentation has recently been getting a lot of attention. Previous approaches are divided into on-line and off-line: on-line approaches segment motions based on similarities between neighboring frames, while off-line approaches segment motions by capturing global characteristics in feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of several frames repeated within a temporal distance, we consider all similarities among all frames within that temporal distance. This is achieved by constructing a graph in which each vertex represents a frame and the edges between frames are weighted by their similarity. The normalized cuts algorithm is then used to partition the constructed graph into several sub-graphs by globally finding minimum cuts. In the experiments, the proposed method showed better performance than a PCA-based method on-line and a GMM-based method off-line, as the proposed method globally segments motions from a graph constructed from similarities between neighboring frames as well as similarities among all frames within temporal distances.
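
A minimal sketch of a two-way normalized cut on the frame-similarity matrix, using the standard reduction of the generalized eigenproblem (D - W)x = lambda*D*x to the normalized Laplacian; the recursive multi-way partitioning used in practice is omitted.

```python
import numpy as np

def normalized_cut_bipartition(W):
    """Split a similarity graph in two via the second-smallest generalized
    eigenvector (Shi-Malik normalized cut), thresholded at zero.
    W : symmetric (n, n) matrix of frame similarities."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)          # ascending eigenvalues
    fiedler = D_inv_sqrt @ vecs[:, 1]           # back-transform x = D^-1/2 y
    return fiedler >= 0                         # boolean segment mask

# Four frames: two similar pairs should split into two segments
W = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.8],
              [0.0, 0.1, 0.8, 1.0]])
print(normalized_cut_bipartition(W))
```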

Keywords: Capture Devices, High-Level Motion, Motion Segmentation, Normalized Cuts

29 Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes using Quasi-Molecular Modeling

Authors: S. Kulsri, M. Jaroensutasinee, K. Jaroensutasinee

Abstract:

We developed a new method based on quasi-molecular modeling to simulate lid-driven cavity flow in three cavity shapes, rectangular, half-circular and beer-bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ~ 0.500 s, t ~ 0.450 s and t ~ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ~ 0.02 s, and then decreased as time increased. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest. The kinetic energy of the rectangular, beer-bucket and half-circular cavities fluctuated about stable average values of 35.62 x 10^3, 38.04 x 10^3 and 40.80 x 10^3 ergs/particle, respectively. This indicates that the half-circular shape is the most suitable for a shrimp pond, because the water flows best in it compared with the rectangular and beer-bucket shapes.

Keywords: Quasi-molecular modelling, particle modelling, lid driven cavity flow.

28 Assessment of the Illustrated Language Activities of the Portage Guide to Early Education

Authors: Ofelia A. Damag

Abstract:

The study focused on the development and assessment of the illustrated language activities of the 1996 edition of the Portage Guide to Early Education. It determined the extent of appropriateness, applicability, time efficiency and aesthetics of the illustrated language activities for use as instructional material not only by teachers but also by parents and caregivers. An eclectic research design was applied, using qualitative and quantitative methods. To determine the applicability and time efficiency of the materials, a try-out was done. Given the eclectic research design, the study made use of a researcher-made survey questionnaire and a focus group discussion. The data were analyzed using the weighted mean and ANOVA. The respondents of the study were Special Education (SPED) teachers, caregivers and parents of special-needs children, particularly children with difficulties in learning basic language skills. The results show that a large number of respondents are SPED teachers and caregivers, and most are college graduates; many have earned units towards Master's studies. Moreover, a majority of the respondents have not attended seminars or in-service training in early intervention that would make them more competent in the area of specialization. It is concluded that the illustrated language activities under review are appropriate, applicable, time efficient and aesthetic for use as a teaching tool. The recommendations focus on advocacy for SPED teachers, caregivers and parents of special-needs children to be more consistent in implementing the new instructional materials as an aid in an intervention program.

Keywords: Illustrated language activities, inclusion, portage guide to early education, special educational needs.

27 Incentive Policies to Promote Green Infrastructure in Urban Jordan

Authors: Zayed Freah Zeadat

Abstract:

The wellbeing of urban dwellers is strongly associated with the quality and quantity of green infrastructure. Nevertheless, urban green infrastructure (UGI) is still lagging in many Arab cities, and Jordan is no exception. The capital of Jordan, Amman, is becoming more densely urbanized with limited green spaces, and its unplanned urban growth has caused several environmental problems such as urban heat islands, air pollution and a lack of green spaces. This study investigates the most suitable drivers for leveraging the implementation of urban green infrastructure in Jordan through qualitative and quantitative analysis. The qualitative part comprises an extensive literature review of the drivers most commonly used internationally to promote the implementation of urban green infrastructure. The quantitative part employs a questionnaire survey to rank the suitability of each driver; consultants, contractors and policymakers were invited to fill in the questionnaire according to their judgments and opinions. The Relative Importance Index was used to calculate the weighted average for each driver, and the Kruskal-Wallis test to check the degree of agreement among the groups. The study finds that participants agreed that indirect financial incentives (e.g., tax reductions, reductions in stormwater utility fees, reduced interest rates, density bonuses) are the most effective incentive policy, whilst granting sustainability certificates is the least effective driver for ensuring the widespread adoption of UGI elements in Jordan.
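
The Relative Importance Index used above has the standard form RII = sum(W) / (A x N), with W the ratings, A the top of the scale and N the number of respondents; the ratings in the example are invented.

```python
def relative_importance_index(ratings, highest=5):
    """RII for one driver from respondents' Likert-scale scores;
    drivers are then ranked by descending RII."""
    return sum(ratings) / (highest * len(ratings))

# Ten respondents scoring 'indirect financial incentives' (illustrative)
print(relative_importance_index([5, 4, 5, 3, 4, 5, 4, 4, 5, 3]))  # 0.84
```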

Keywords: sustainable development, urban green infrastructure, relative importance index, urban Jordan

26 Fuzzy Logic Based Improved Range Free Localization for Wireless Sensor Networks

Authors: Ashok Kumar, Vinod Kumar

Abstract:

Wireless Sensor Networks (WSNs) are used to monitor vast, inaccessible regions through the deployment of large numbers of sensor nodes in the sensing area. For the majority of WSN applications, the collected data need to be combined with the geographic information of their origin to be useful to the user; information received from remote sensor nodes (SNs) several hops away from the base station/sink is meaningless without knowledge of its source. In addition, the location information of SNs can be used to develop new network protocols for WSNs that improve their energy efficiency and lifetime. In this paper, range-free localization protocols for WSNs are proposed. The proposed protocols are based on the weighted centroid localization technique, where the edge weights of SNs are decided by fuzzy logic inference over the received signal strength and the link quality between nodes. The fuzzification is carried out using (i) Mamdani, (ii) Sugeno, and (iii) combined Mamdani-Sugeno fuzzy logic inference. Simulation results demonstrate that the proposed protocols provide better node localization accuracy than conventional centroid-based localization protocols, despite the presence of unintentional interference from radio frequency (RF) sources operating in the same frequency band.
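
A minimal sketch of the weighted centroid step; in the paper the edge weights come from Mamdani/Sugeno fuzzy inference over received signal strength and link quality, which is abstracted here as a given weight vector.

```python
import numpy as np

def weighted_centroid(anchors, weights):
    """Estimate the unknown node position as the weight-normalised
    average of anchor positions.
    anchors : (m, 2) known anchor coordinates
    weights : (m,) edge weights (here: output of the fuzzy inference)"""
    anchors = np.asarray(anchors, float)
    w = np.asarray(weights, float)
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

# Three anchors; the strongest-link anchor pulls the estimate toward it
print(weighted_centroid([[0, 0], [10, 0], [0, 10]], [0.7, 0.2, 0.4]))
```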

Keywords: localization, range free, received signal strength, link quality indicator, Mamdani fuzzy logic inference, Sugeno fuzzy logic inference.

25 Fuel Economy and Stability Enhancement of the Hybrid Vehicles by Using Electrical Machines on Non-Driven Wheels

Authors: P. Naderi, S.M.T. Bathaee, R. Hoseinnezhad, R. Chini

Abstract:

Using electrical machines in conventional vehicles, yielding so-called hybrid vehicles, has become a promising control scheme that enables fuel economy and driver assistance for better stability. In this paper, vehicle stability control, fuel economy and driving/regenerative braking are investigated for a 4WD hybrid vehicle with an electrical machine on each non-driven wheel. In front-wheel-driven vehicles, fuel economy and regenerative braking can be obtained by summing the torques applied to the rear wheels, while unequal torques applied to the rear wheels provide enhanced safety and path correction in steering. A model with fourteen degrees of freedom is considered for the vehicle body, tires and suspension systems, and the powertrain subsystems are then modeled. With an electrical machine on each rear wheel, a fuzzy controller is designed for each of the driving, braking, and stability conditions, and another fuzzy controller arbitrates the vehicle requirements between the driving/regeneration and stability modes. The contributions of the paper are intelligent vehicle control for multi-objective operation and forward simulation. To reach these aims, power management control and yaw moment control are performed by three fuzzy controllers, and the above-mentioned goals are weighted by another fuzzy sub-controller based on the vehicle dynamics. Finally, simulations performed in the MATLAB/SIMULINK environment show that the proposed structure can effectively enhance vehicle performance in the different modes.

Keywords: Hybrid, pitch, roll, regeneration, yaw.

24 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks

Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin

Abstract:

Least Developed Countries (LDCs) like Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textile to minimize production cost and time. Inspection processes in these industries are mostly manual and time consuming, and reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Addressing this gap, this research implements a textile defect recognizer that uses computer vision methodology in combination with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and provides a less error-prone inspection system in real time. To generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images by a restoration process and local threshold techniques. The outputs of the processed image, the area of the faulty portion, the number of objects in the image and the sharp factor of the image, are then fed to the input layer of the neural network, which uses the back-propagation algorithm to compute the weight factors and generate the desired classification of defects as output.

Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.

23 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions

Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag

Abstract:

Sustainable economic growth is nowadays driving firms to adopt many green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions requires very serious decisions and involves considerable complexity owing to the many associated factors. To address this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for the adequate evaluation of sustainable supply chain management solutions. In the present paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Because the selected criteria vary in importance, FAHP is used to identify the evaluation criteria and assign an importance weight to each criterion, while the TOPSIS and PROMETHEE methods take these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis of the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions on the selection of a GSCM solution.
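
A minimal sketch of the TOPSIS ranking stage that consumes the FAHP-derived weights; this is the crisp form, and neither the fuzzy front end nor the PROMETHEE counterpart is reproduced. The example numbers are invented.

```python
import numpy as np

def topsis(X, w, benefit):
    """Closeness coefficients of alternatives (larger = better).
    X : (alternatives x criteria) scores; w : criteria weights;
    benefit : True where larger criterion values are better."""
    R = X / np.sqrt((X**2).sum(axis=0))        # vector-normalise columns
    V = R * w                                  # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

X = np.array([[7.0, 3.0, 8.0],     # GSCM solutions x criteria
              [8.0, 5.0, 6.0],
              [6.0, 2.0, 9.0]])
print(topsis(X, w=np.array([0.5, 0.2, 0.3]),
             benefit=np.array([True, False, True])))
```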

Keywords: GSCM solutions, multi-criteria analysis, FAHP, TOPSIS, PROMETHEE, decision support system.

22 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method builds on Taguchi's design-of-experiments method and adopts grey relational analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters, considering weighted output response characteristics, using grey relational analysis. The output response characteristics considered are surface roughness, burr height and hole diameter error, under experimental conditions defined by cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments and grey relational analysis was used to ascertain the best possible drilling process parameters giving minimum surface roughness, burr height and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
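
A minimal sketch of the grey relational analysis step for the three smaller-the-better responses named above; the response weights and the distinguishing coefficient zeta = 0.5 are assumptions.

```python
import numpy as np

def grey_relational_grade(Y, weights, zeta=0.5):
    """Grey relational grade per experimental run, for smaller-the-better
    responses. Y : (runs x responses); weights : response importance."""
    Y = np.asarray(Y, float)
    norm = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))
    delta = 1.0 - norm                                  # deviation sequence
    gcoef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return gcoef @ np.asarray(weights, float)           # higher = better run

Y = [[1.8, 0.30, 0.05],   # runs x [roughness, burr height, dia. error]
     [1.2, 0.45, 0.03],
     [1.5, 0.25, 0.04]]
print(grey_relational_grade(Y, weights=[0.4, 0.3, 0.3]))
```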

Keywords: Metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error.

21 Applying Case-Based Reasoning in Supporting Strategy Decisions

Authors: S. M. Seyedhosseini, A. Makui, M. Ghadami

Abstract:

Globalization, and the resulting increasingly tight competition among companies, has increased the importance of making well-timed decisions. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. At the same time, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Well-organized tools that employ past experience in new cases therefore help in making proper managerial decisions. Case-based reasoning (CBR) solves a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbors (k-NN) is employed to provide suggestions for better decision making, adopted for a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A department store dataset, covering various products collected over two years, has been used, and a k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case-based reasoning algorithm, which has no special process for feature selection, with the CBR-PCA algorithm based on filter-approach feature selection, and with an Artificial Neural Network. The results indicate that the predictive performance of the model is more effective than that of the two CBR algorithms in this specific case.
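
A minimal sketch of the k-NN retrieve-and-combine step at the heart of the CBR model; the inverse-distance weighting of the retrieved solutions is a common choice assumed here, since the abstract does not specify the exact group-decision weighting.

```python
import numpy as np

def knn_cbr_suggest(cases, outcomes, query, k=3):
    """Retrieve the k past cases most similar to the query and return the
    inverse-distance-weighted combination of their outcomes."""
    cases = np.asarray(cases, float)
    d = np.linalg.norm(cases - np.asarray(query, float), axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-9)            # closer cases weigh more
    return np.average(np.asarray(outcomes, float)[nn], weights=w)

# Past products as (assumed) feature vectors with observed outcomes
print(knn_cbr_suggest([[0.2, 1.0], [0.8, 0.3], [0.4, 0.9]],
                      [120, 80, 100], query=[0.3, 0.95], k=2))
```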

Keywords: Case based reasoning, Genetic algorithm, Group decision making, Product management.

20 A Hybrid Ontology Based Approach for Ranking Documents

Authors: Sarah Motiee, Azadeh Nematzadeh, Mehrnoush Shamsfard

Abstract:

The increasing growth of information volume on the internet creates an increasing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods; this combination preserves the precision of ranking without losing speed. The approach exploits natural language processing techniques to extract phrases from the documents and the query and to stem words. An ontology-based conceptual method is then used to annotate documents and expand the query; to expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model; the test results are included at the end of the paper.
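
A minimal sketch of spread activation over typed ontology relations with a weighted combination of relationship types, as in feature (4) above; the relation names, weights and decay factor are illustrative assumptions.

```python
def spread_activation(edges, seeds, rel_weights, decay=0.8, max_depth=2):
    """Propagate activation from query concepts through typed relations,
    attenuating by a per-relation weight and a global decay.
    edges : list of (source, relation, target) triples."""
    act = dict(seeds)                       # concept -> activation
    frontier = dict(seeds)
    for _ in range(max_depth):
        nxt = {}
        for src, rel, dst in edges:
            if src in frontier:
                gain = frontier[src] * rel_weights[rel] * decay
                nxt[dst] = nxt.get(dst, 0.0) + gain
        for concept, a in nxt.items():
            act[concept] = act.get(concept, 0.0) + a
        frontier = nxt
    return act

edges = [("car", "is-a", "vehicle"), ("car", "has-part", "engine"),
         ("vehicle", "is-a", "machine")]
print(spread_activation(edges, {"car": 1.0}, {"is-a": 0.9, "has-part": 0.6}))
```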

Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.

19 Performance Analysis of Digital Signal Processors Using SMV Benchmark

Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang

Abstract:

Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs of diverse applications, a wide variety of DSP processors based on different architectures, ranging from the traditional to VLIW, have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. To select a processor that meets the design criteria for an application, processor performance is usually the major concern for digital signal processing (DSP) application developers, and performance data are also essential for the designers of DSP processors to improve their designs. Consequently, several DSP performance benchmarks have been proposed over the past decade or so; however, none of these benchmarks seems to have included recent new DSP applications. In this paper, we use a new benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from recent third-generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. The weighted arithmetic mean of clock cycles and the arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, features of the processor architecture, and compiler optimization. The extensive experimental data gathered, analyzed, and presented in this paper should help DSP processor and compiler designers meet their specific design goals.

Keywords: digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.

18 ORank: An Ontology Based System for Ranking Documents

Authors: Mehrnoush Shamsfard, Azadeh Nematzadeh, Sarah Motiee

Abstract:

The increasing growth of information volume on the internet creates an increasing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods; this combination preserves the precision of ranking without losing speed. The approach exploits natural language processing techniques for extracting phrases and stemming words. An ontology-based conceptual method is then used to annotate documents and expand the query; to expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model; the test results are included at the end of the paper.

Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.

17 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on predetermined criteria, the experts' preferences are elicited in order to rank and choose the most desirable suppliers. In a real business environment, attitudes or choices are made in uncertain and indecisive situations and cannot be expressed in a crisp framework; intuitionistic fuzzy sets (IFSs) handle such situations in the best way. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems; it determines a compromise feasible solution with respect to conflicting criteria by introducing a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs, so we extend intuitionistic fuzzy (IF) VIKOR to solve the vendor selection problem in an IF GDM environment. A model is presented to calculate the criterion weights based on an entropy measure, and the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to a vendor selection problem is illustrated.
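
For orientation, here is the classical crisp VIKOR index that the paper extends to intuitionistic fuzzy values; the IF arithmetic, the entropy weighting and the IFWG aggregation are not reproduced, and the example numbers are invented.

```python
import numpy as np

def vikor(F, w, v=0.5):
    """VIKOR compromise index Q per alternative (smaller = better).
    F : (alternatives x criteria) benefit-type scores; w : weights;
    v : strategy weight between group utility and individual regret."""
    F = np.asarray(F, float)
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = (f_best - F) / (f_best - f_worst)
    S = norm @ w                       # group utility
    R = (norm * w).max(axis=1)         # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

F = [[7, 9, 6], [8, 6, 8], [6, 8, 9]]   # vendors x criteria
print(vikor(F, w=np.array([0.5, 0.3, 0.2])))
```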

Keywords: Group decision making, intuitionistic fuzzy entropy measure, intuitionistic fuzzy set, vendor selection, VIKOR.

16 A Novel Multiple Valued Logic OHRNS Modulo rn Adder Circuit

Authors: Mehdi Hosseinzadeh, Somayyeh Jafarali Jassbi, Keivan Navi

Abstract:

The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications that require high-speed computations. RNS is an integer, non-weighted number system; it supports parallel, carry-free, high-speed and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic: if the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. Two concerns arise in the application of this number system: reaching the highest possible speed and the largest dynamic range. These goals conflict, since enlarging the dynamic range reduces the speed. For the best performance, a method named the "One-Hot Residue Number System" (OHRNS) can be considered, in which the propagation delay equals only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows on the order of m^2; in real applications this is practically impossible. In this paper, combining Multiple Valued Logic and the One-Hot Residue Number System, we present a new method that resolves both of these problems, along with a novel design of an OHRNS-based adder circuit. This circuit is usable with Multiple Valued Logic moduli; in comparison to other RNS designs, it considerably improves the number of transistors and the power consumption.
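
A behavioral sketch of one-hot residue addition: adding residue b amounts to rotating the one-hot encoding of residue a by b positions, which is what the barrel-shifter OHRNS adder does in hardware and why its transistor count grows on the order of m^2. This models the behavior only, not the transistor-level design proposed in the paper.

```python
def one_hot(residue, modulus):
    """One-hot encode a residue: exactly one of the m lines is high."""
    lines = [0] * modulus
    lines[residue] = 1
    return lines

def one_hot_mod_add(a_lines, b_lines):
    """Modular addition of two one-hot residues by cyclic rotation:
    the active line of b selects how far a's vector is rotated."""
    m = len(a_lines)
    shift = b_lines.index(1)
    return [a_lines[(i - shift) % m] for i in range(m)]

m = 5                                                  # one RNS channel
print(one_hot_mod_add(one_hot(3, m), one_hot(4, m)))   # (3+4) mod 5 -> line 2
```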

Keywords: Computer Arithmetic, Residue Number System, Multiple Valued Logic, One-Hot, VLSI.

15 Emotional Intelligence as Predictor of Academic Success among Third Year College Students of PIT

Authors: Sonia Arradaza-Pajaron

Abstract:

College students are expected to engage in on-the-job training (OJT) or an internship to complete a course requirement prior to graduation, which exposes them to the real world of work outside their training institution. This study was conducted to find out how ready they are, both emotionally and academically. A descriptive-correlational research design was employed, and a random sampling technique was utilized among 265 randomly selected third-year college students of PIT, SY 2014-15. A questionnaire on emotional intelligence (covering four components: emotional literacy, emotional quotient competence, values and beliefs, and emotional quotient outcomes) was fielded to the respondents, and the general weighted average (GWA) was extracted from the school's automated records. The data collected were statistically treated using percentages, the weighted mean, and the Pearson r for correlation.

Results revealed that the respondents' emotional intelligence level is moderately high, while their academic performance is good. A highly significant relationship was found between the EI component of emotional literacy and academic performance, while a significant relationship was found between emotional quotient outcomes and academic performance. Since EI significantly correlates with academic performance, their OJT performance may also be affected, positively or negatively; EI can thus be considered a predictor of their academic and academic-related performance. Based on the results, it is recommended that the institution consider embedding emotional intelligence (especially emotional literacy and emotional quotient outcomes) into the college curriculum. This can be done if the school has an effective emotional intelligence framework or program implemented by qualified and competent teachers and guidance counselors in the different colleges.

Keywords: Academic performance, emotional intelligence, emotional literacy, emotional quotient competence, emotional quotient outcomes, values and beliefs.

14 Dynamic Analysis of Porous Media Using Finite Element Method

Authors: M. Pasbani Khiavi, A. R. M. Gharabaghi, K. Abedi

Abstract:

The mechanical behavior of porous media is governed by the interaction between the solid skeleton and the fluid existing inside the pores; the interaction occurs through the interface of the grains and the fluid. Traditional analysis methods for porous media, based on effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation of the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented in a computer code called DYNAPM for the dynamic analysis of porous media. The weighted residual method with 8-node elements is used to develop the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. A Newmark time integration scheme, an unconditionally stable implicit method, is developed to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of porous media behaviors.
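
A minimal sketch of the Newmark scheme referred to above, with the unconditionally stable average-acceleration parameters beta = 1/4 and gamma = 1/2; the matrices and the single-DOF example are illustrative, not taken from DYNAPM.

```python
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Implicit Newmark integration of M*a + C*v + K*u = f(t).
    f is a callable returning the load vector at time t."""
    u, v = np.array(u0, float), np.array(v0, float)
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)     # initial acceleration
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    hist = [u.copy()]
    for n in range(1, nsteps + 1):
        rhs = (f(n * dt)
               + M @ (u / (beta * dt**2) + v / (beta * dt)
                      + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = np.linalg.solve(Keff, rhs)             # displacement update
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        hist.append(u.copy())
    return np.array(hist)

# Free vibration of a damped single-DOF system
M, C, K = np.eye(1), 0.1 * np.eye(1), 4.0 * np.eye(1)
print(newmark(M, C, K, lambda t: np.zeros(1), [1.0], [0.0], dt=0.05, nsteps=3))
```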

Keywords: Dynamic analysis, Interaction, Porous media, time domain
