Search results for: penalized weighted least squares

363 Cross Layer Optimization for Fairness Balancing Based on Adaptively Weighted Utility Functions in OFDMA Systems

Authors: Jianwei Wang, Timo Korhonen, Yuping Zhao

Abstract:

Cross-layer optimization based on utility functions has recently been studied extensively, and numerous types of utility functions have been examined in the corresponding literature. However, a major drawback is that most utility functions take a fixed mathematical form or are based on simple combining, which cannot fully exploit the available information. In this paper, we formulate a framework of cross-layer optimization based on Adaptively Weighted Utility Functions (AWUF) for fairness balancing in OFDMA networks. Under this framework, a two-step allocation algorithm is provided as a sub-optimal solution, whose control parameters can be updated in real time to accommodate instantaneous QoS constraints. The simulation results show that the proposed algorithm achieves high throughput while balancing fairness among multiple users.

Keywords: OFDMA, Fairness, AWUF, QoS.
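
A rough sketch of how an adaptively weighted, per-subcarrier allocation of this kind could look is given below; the log-rate utility, the inverse-throughput weight rule and the function name allocate_subcarriers are illustrative assumptions, not the authors' AWUF algorithm.

    import numpy as np

    def allocate_subcarriers(snr, n_rounds=100, rng=None):
        """Toy adaptively weighted allocation (illustrative only).

        snr: array of shape (n_users, n_subcarriers) of per-subcarrier SNRs.
        Each round, every subcarrier goes to the user maximizing
        weight_k * log2(1 + snr_k), and weights adapt to keep throughput fair.
        """
        rng = rng or np.random.default_rng(0)
        n_users, n_sub = snr.shape
        throughput = np.full(n_users, 1e-6)      # accumulated rate per user
        for _ in range(n_rounds):
            weights = 1.0 / throughput           # assumed adaptive weight rule
            rate = np.log2(1.0 + snr)            # per-subcarrier achievable rate
            winner = np.argmax(weights[:, None] * rate, axis=0)
            for c, k in enumerate(winner):       # credit each subcarrier's rate
                throughput[k] += rate[k, c]
            snr = rng.exponential(4.0, size=snr.shape)  # new fading realization
        return throughput

    rates = allocate_subcarriers(np.random.default_rng(1).exponential(4.0, (4, 16)))
    print("per-user throughput:", np.round(rates, 1))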

362 Application of RS and GIS Technique for Identifying Groundwater Potential Zone in Gomukhi Nadhi Sub Basin, South India

Authors: Punitha Periyasamy, Mahalingam Sudalaimuthu, Sachikanta Nanda, Arasu Sundaram

Abstract:

India holds 17.5% of the world’s population but only 2% of its geographical area, and 27.35% of that area is categorized as wasteland owing to scarce groundwater. There is therefore a growing demand for groundwater for agricultural and non-agricultural activities. With this in mind, an attempt is made to identify the groundwater potential zones in the Gomukhi Nadhi sub-basin of the Vellar River basin, Tamil Nadu, India, covering an area of 1146.6 sq. km and consisting of 9 blocks from Peddanaickanpalayam to Virudhachalam. Thematic maps of geology, geomorphology, lineaments, land use/land cover and drainage are prepared for the study area using IRS P6 data. Collateral data, including rainfall, water level and soil maps, are collected for analysis and inference. A digital elevation model (DEM) is generated from Shuttle Radar Topography Mission (SRTM) data, and the slope of the study area is derived from it. ArcGIS 10.1 serves as a powerful spatial analysis tool to delineate the groundwater potential zones in the study area by means of weighted overlay analysis. Each parameter of the thematic maps is ranked and weighted according to its influence on groundwater recharge. The potential zones in the study area are classified as Very Good, Good, Moderate and Poor, with areal extents of 15.67, 381.06, 575.38 and 174.49 sq. km, respectively.

Keywords: ArcGIS, DEM, Groundwater, Recharge, Weighted Overlay.
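
As a rough illustration of the weighted overlay step, the sketch below combines reclassified raster layers with assumed weights in NumPy; the layer names, ranks and weights are placeholders rather than the values used in the study.

    import numpy as np

    # Hypothetical reclassified rasters (rank 1 = poor recharge, 4 = very good),
    # all on a common grid; real layers would come from ArcGIS or rasterio.
    shape = (200, 200)
    rng = np.random.default_rng(42)
    layers = {
        "geology":       rng.integers(1, 5, shape),
        "geomorphology": rng.integers(1, 5, shape),
        "lineament":     rng.integers(1, 5, shape),
        "landuse":       rng.integers(1, 5, shape),
        "drainage":      rng.integers(1, 5, shape),
        "slope":         rng.integers(1, 5, shape),
    }
    weights = {  # assumed influence weights, must sum to 1
        "geology": 0.25, "geomorphology": 0.25, "lineament": 0.15,
        "landuse": 0.15, "drainage": 0.10, "slope": 0.10,
    }

    # Weighted overlay: weighted sum of ranked layers, cell by cell.
    potential = sum(weights[name] * layers[name].astype(float) for name in layers)

    # Classify into the four zones named in the abstract (0 = Poor ... 3 = Very Good).
    zones = np.digitize(potential, bins=[1.75, 2.5, 3.25])
    for i, label in enumerate(["Poor", "Moderate", "Good", "Very Good"]):
        print(label, (zones == i).sum(), "cells")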

361 Clustering in WSN Based on Minimum Spanning Tree Using Divide and Conquer Approach

Authors: Uttam Vijay, Nitin Gupta

Abstract:

Because of the heavy energy constraints in WSNs, clustering is an efficient way to manage the energy of the sensors. Many clustering methods have already been proposed, and research is still ongoing to make clustering more energy efficient. In this paper we propose a minimum-spanning-tree-based clustering that uses a divide and conquer approach. MST-based clustering was first proposed in the 1970s for large databases. Here we take the divide and conquer approach and adapt it to wireless sensor networks with their attached constraints. The approach is implemented so that the whole MST does not have to be constructed before clustering: we find an edge that would belong to the MST of the corresponding graph and, if that edge can be removed based on certain constraints, we split the graph into clusters at that point, thereby saving a great deal of computation.

Keywords: Algorithm, Clustering, Edge-Weighted Graph, Weighted-LEACH.
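
The abstract does not state the exact edge test; the sketch below, which simply cuts MST edges longer than a multiple of the mean edge length, only illustrates the general idea of clustering sensor positions through spanning-tree edges and does not reproduce the authors' divide and conquer variant.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
    from scipy.spatial.distance import pdist, squareform

    def mst_clusters(points, cut_factor=2.0):
        """Cluster sensor positions by removing long MST edges (illustrative).

        points: (n, 2) array of node coordinates.
        cut_factor: edges longer than cut_factor * mean MST edge length are cut;
        this threshold rule is an assumption, not the paper's exact constraint.
        """
        dist = squareform(pdist(points))
        mst = minimum_spanning_tree(dist).toarray()
        edge_lengths = mst[mst > 0]
        threshold = cut_factor * edge_lengths.mean()
        mst[mst > threshold] = 0                      # "divide" step: drop long edges
        n_clusters, labels = connected_components(mst, directed=False)
        return n_clusters, labels

    rng = np.random.default_rng(3)
    pts = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(8, 1, (20, 2))])
    k, labels = mst_clusters(pts)
    print("clusters found:", k)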

360 Optimal Design for SARMA(P,Q)L Process of EWMA Control Chart

Authors: Y. Areepong

Abstract:

The main goal of this paper is to study Statistical Process Control (SPC) with an Exponentially Weighted Moving Average (EWMA) control chart when the observations are serially correlated. The key characteristic of a control chart is the Average Run Length (ARL), the average number of samples taken before an action signal is given. Ideally, the ARL of an in-control process, denoted ARL0, should be sufficiently large, whereas the ARL when the process is out of control, denoted ARL1 and also called the average delay time or mean time to true alarm, should be small. We derive explicit formulas for the ARL of an EWMA control chart for Seasonal Autoregressive Moving Average (SARMA) processes with exponential white noise. The ARL values obtained from the explicit formulas and from the integral equation are in good agreement. In particular, the formulas for evaluating ARL0 and ARL1 make it possible to obtain a set of optimal parameters, depending on the smoothing parameter (λ) and the width of the control limit (H), for designing an EWMA chart with minimum ARL1.

Keywords: Average Run Length (ARL), Optimal parameters, Exponentially Weighted Moving Average (EWMA) control chart.
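
Because the explicit ARL formulas are not reproduced in the abstract, the following Monte Carlo sketch only illustrates the quantities involved: an EWMA statistic with smoothing parameter λ and control limit width H, and an in-control ARL estimated by simulation under assumed i.i.d. exponential observations rather than the SARMA model of the paper.

    import numpy as np

    def ewma_run_length(lam, H, mu0=1.0, rng=None, max_n=100_000):
        """Run length of an EWMA chart for i.i.d. exponential observations
        with in-control mean mu0 (a stand-in for the SARMA innovations)."""
        rng = rng or np.random.default_rng()
        z = mu0                                  # start the EWMA at the target
        lcl, ucl = mu0 - H, mu0 + H              # symmetric limits of width H
        for n in range(1, max_n + 1):
            x = rng.exponential(mu0)
            z = lam * x + (1.0 - lam) * z        # EWMA recursion
            if z < lcl or z > ucl:
                return n                         # first out-of-control signal
        return max_n

    def estimate_arl(lam, H, reps=2000, seed=0):
        rng = np.random.default_rng(seed)
        return np.mean([ewma_run_length(lam, H, rng=rng) for _ in range(reps)])

    print("estimated ARL0:", estimate_arl(lam=0.1, H=0.35))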

359 Construction of Space-Filling Designs for Three Input Variables Computer Experiments

Authors: Kazeem A. Osuolale, Waheed B. Yahya, Babatunde L. Adeleke

Abstract:

Latin hypercube designs (LHDs) are among the space-filling designs found in the literature and have been applied in many computer experiments. An LHD can be generated randomly, but a randomly chosen LHD may have poor properties and thus perform badly in estimation and prediction. There is a connection between Latin squares and orthogonal arrays (OAs). A Latin square of order s is an arrangement of s symbols in s rows and s columns such that every symbol occurs exactly once in each row and once in each column; such a square exists for every positive integer s. In this paper, a computer program was written to construct orthogonal array-based Latin hypercube designs (OA-LHDs). Orthogonal arrays were constructed from a Latin square of order s, and the OAs obtained were then used to construct the desired Latin hypercube designs for three input variables for use in computer experiments. The LHDs constructed have good space-filling properties and can be used in computer experiments that involve only three input factors. The MATLAB 2012a package (www.mathworks.com/) was used to develop the program that constructs the designs.

Keywords: Computer Experiments, Latin Squares, Latin Hypercube Designs, Orthogonal Array, Space-filling Designs.
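
The paper builds its designs from orthogonal arrays derived from a Latin square; the sketch below, written in Python rather than the authors' MATLAB, only shows the simpler step of generating a random three-variable Latin hypercube on [0, 1)^3 and therefore does not carry the space-filling guarantees of the OA-based construction.

    import numpy as np

    def random_lhd(n_runs, n_vars=3, rng=None):
        """Random Latin hypercube design: each column is a random permutation
        of the n_runs strata, with a random point drawn inside each stratum."""
        rng = rng or np.random.default_rng(0)
        design = np.empty((n_runs, n_vars))
        for j in range(n_vars):
            perm = rng.permutation(n_runs)                 # one stratum per run
            design[:, j] = (perm + rng.random(n_runs)) / n_runs
        return design

    X = random_lhd(9, 3)
    print(X.round(3))
    # Every column has exactly one point in each of the 9 equal-width bins.
    print(np.sort((X * 9).astype(int), axis=0).T)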

358 Detection of Efficient Enterprises via Data Envelopment Analysis

Authors: S. Turkan

Abstract:

In this paper, data on Turkey’s Top 500 Industrial Enterprises in 2014 were analyzed by data envelopment analysis. Data envelopment analysis is used to detect efficient decision-making units, such as universities, hospitals and schools, on the basis of their inputs and outputs. The decision-making units in this study are enterprises. To detect efficient enterprises, selected financial ratios are used as inputs and outputs; for this purpose, financial indicators related to the productivity of the enterprises are considered. The efficient foreign weighted owned capital enterprises are detected via the super-efficiency model. According to the results, Mercedes-Benz is the most efficient foreign weighted owned capital enterprise in Turkey.

Keywords: Data envelopment analysis, super efficiency, financial ratios, BCC model.
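
The abstract does not spell out the model, so the sketch below solves a plain input-oriented CCR efficiency LP with SciPy on invented enterprise data, purely to show how DEA scores inputs against outputs; the BCC and super-efficiency variants used in the paper add a convexity constraint and exclude the evaluated unit from the reference set, which is not shown here.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, unit):
        """Input-oriented CCR efficiency of one decision-making unit.

        X: (n_units, n_inputs), Y: (n_units, n_outputs); 'unit' is the row index
        of the evaluated enterprise.  Returns theta in (0, 1], 1 = efficient.
        """
        n, m = X.shape
        s = Y.shape[1]
        c = np.zeros(1 + n)
        c[0] = 1.0                                   # minimize theta
        A_ub, b_ub = [], []
        for i in range(m):                           # sum_j lam_j * x_ij <= theta * x_i,unit
            A_ub.append(np.concatenate(([-X[unit, i]], X[:, i])))
            b_ub.append(0.0)
        for r in range(s):                           # sum_j lam_j * y_rj >= y_r,unit
            A_ub.append(np.concatenate(([0.0], -Y[:, r])))
            b_ub.append(-Y[unit, r])
        bounds = [(None, None)] + [(0, None)] * n    # theta free, lambdas >= 0
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
        return res.x[0]

    # Hypothetical financial ratios: inputs (e.g. assets, employees), outputs (e.g. sales, profit).
    X = np.array([[30.0, 200], [40, 300], [25, 150], [60, 500]])
    Y = np.array([[100.0, 10], [120, 9], [90, 12], [150, 11]])
    for k in range(len(X)):
        print(f"enterprise {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")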

357 Image Segmentation Based on Graph Theoretical Approach to Improve the Quality of Image Segmentation

Authors: Deepthi Narayan, Srikanta Murthy K., G. Hemantha Kumar

Abstract:

Graph-based image segmentation techniques are considered to be among the most efficient segmentation techniques and are mainly used as time- and space-efficient methods for real-time applications. However, there is a need to focus on improving the quality of the segmented images obtained from the earlier graph-based methods. This paper proposes an improvement to the graph-based image segmentation methods already described in the literature. We contribute to the existing method by proposing the use of a weighted Euclidean distance to calculate the edge weight, which is the key element in building the graph. We also propose a slight modification of the segmentation method already described in the literature, which results in the selection of more prominent edges in the graph. The experimental results show an improvement in segmentation quality compared to the existing methods, with a slight compromise in efficiency.

Keywords: Graph based image segmentation, threshold, Weighted Euclidean distance.
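
The paper's key change is the weighted Euclidean distance used as the edge weight; the fragment below shows one plausible form of that weight for RGB pixels, with per-channel weights chosen purely for illustration (they are not the values proposed in the paper).

    import numpy as np

    def weighted_euclidean(p, q, channel_weights=(0.3, 0.59, 0.11)):
        """Edge weight between two RGB pixels p and q as a weighted Euclidean
        distance; the luminance-like channel weights are an assumed example."""
        w = np.asarray(channel_weights, dtype=float)
        diff = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
        return float(np.sqrt(np.sum(w * diff ** 2)))

    def grid_edges(image):
        """4-connected graph over an RGB image: (weight, node_a, node_b) tuples."""
        h, w, _ = image.shape
        edges = []
        for y in range(h):
            for x in range(w):
                if x + 1 < w:
                    edges.append((weighted_euclidean(image[y, x], image[y, x + 1]),
                                  (y, x), (y, x + 1)))
                if y + 1 < h:
                    edges.append((weighted_euclidean(image[y, x], image[y + 1, x]),
                                  (y, x), (y + 1, x)))
        return sorted(edges)   # ascending weights, as graph-merging segmentation expects

    img = np.random.default_rng(0).integers(0, 256, (4, 4, 3))
    print(grid_edges(img)[:3])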

356 A New Weighted LDA Method in Comparison to Some Versions of LDA

Authors: Delaram Jarchi, Reza Boostani

Abstract:

Linear Discriminant Analysis (LDA) is a linear solution for the classification of two classes. In this paper, we propose a variant of LDA for the multi-class problem that redefines the between-class and within-class scatter matrices by incorporating a weight function into each of them. The aim is to separate the classes as much as possible; when one class is already well separated from the others, it should have little influence on the classification. To alleviate the influence of well-separated classes, a weight is added to the between-class scatter matrix and the within-class scatter matrix. To obtain a simple and effective weight function, ordinary LDA between every two classes is used to find the Fisher discrimination value, which is passed as an input to two weight functions that redefine the between-class and within-class scatter matrices. Experimental results show that the new LDA method improves the classification rate on the glass, iris and wine datasets in comparison to different versions of LDA.

Keywords: Discriminant vectors, weighted LDA, uncorrelation, principal components, Fisher-face method, Bootstrap method.
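
To make the weighting idea concrete, the sketch below builds pairwise weighted between-class and within-class scatter matrices with a simple 1/d² weight as a stand-in for the paper's weight functions, which are derived from pairwise Fisher discrimination values and are not reproduced here.

    import numpy as np

    def weighted_scatter(X, y):
        """Pairwise weighted between-class (Sb) and within-class (Sw) scatter."""
        classes = np.unique(y)
        means = {c: X[y == c].mean(axis=0) for c in classes}
        d = X.shape[1]
        # Pairwise weights: down-weight class pairs whose means are far apart
        # (1/d^2 is an assumed stand-in for the paper's Fisher-value-based weights).
        pair_w = {}
        for i, ci in enumerate(classes):
            for cj in classes[i + 1:]:
                d2 = float(np.sum((means[ci] - means[cj]) ** 2))
                pair_w[(ci, cj)] = 1.0 / max(d2, 1e-12)
        Sb = np.zeros((d, d))
        for (ci, cj), w in pair_w.items():
            diff = (means[ci] - means[cj])[:, None]
            Sb += w * (diff @ diff.T)
        Sw = np.zeros((d, d))
        for c in classes:
            w_c = sum(w for pair, w in pair_w.items() if c in pair)  # entanglement of class c
            Xc = X[y == c] - means[c]
            Sw += w_c * (Xc.T @ Xc) / len(Xc)
        return Sb, Sw

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 1.0, (30, 4)) for m in (0.0, 2.0, 10.0)])
    y = np.repeat([0, 1, 2], 30)
    Sb, Sw = weighted_scatter(X, y)
    # Discriminant directions come from the leading eigenvectors of pinv(Sw) @ Sb.
    vals, _ = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    print("leading eigenvalue:", round(float(np.real(vals).max()), 2))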

355 Standard Fuzzy Sets for Aircraft Selection using Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This study uses two-dimensional standard fuzzy sets to enhance multiple criteria decision-making analysis for passenger aircraft selection, allowing decision-makers to express judgments with uncertain and vague information. Using two-dimensional fuzzy numbers, three decision makers evaluated three aircraft alternatives according to seven decision criteria. A validity analysis based on two-dimensional standard fuzzy weighted geometric (SFWG) and two-dimensional standard fuzzy weighted average (SFGA) operators is conducted to test the proposed approach's robustness and effectiveness in the fuzzy multiple criteria decision making (MCDM) evaluation process. 

Keywords: Standard fuzzy sets (SFSs), aircraft selection, multiple criteria decision making, intuitionistic fuzzy sets (IFSs), SFWG, SFGA, MCDM
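
The two-dimensional standard fuzzy numbers themselves are not defined in the abstract; to show only the aggregation step, the sketch below applies ordinary weighted geometric (cf. SFWG) and weighted arithmetic (cf. SFGA) operators to crisp scores, which is a considerable simplification of the fuzzy operators used in the paper.

    import numpy as np

    def weighted_arithmetic(scores, weights):
        """Weighted average aggregation of criterion scores for one alternative."""
        w = np.asarray(weights, float)
        return float(np.dot(np.asarray(scores, float), w / w.sum()))

    def weighted_geometric(scores, weights):
        """Weighted geometric aggregation: prod(score_i ** w_i) with normalized w."""
        w = np.asarray(weights, float)
        w = w / w.sum()
        return float(np.prod(np.asarray(scores, float) ** w))

    # Hypothetical: 3 aircraft alternatives scored on 7 criteria (0-1 scale).
    rng = np.random.default_rng(7)
    scores = rng.uniform(0.4, 0.95, size=(3, 7))
    weights = np.array([0.20, 0.15, 0.15, 0.15, 0.15, 0.10, 0.10])
    for i, row in enumerate(scores):
        print(f"aircraft {i}: geometric={weighted_geometric(row, weights):.3f} "
              f"arithmetic={weighted_arithmetic(row, weights):.3f}")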

354 A Mesh Free Moving Node Method To Analyze Flow Through Spirals of Orbiting Scroll Pump

Authors: I. Banerjee, A. K. Mahendra, T. K. Bera, B. G. Chandresh

Abstract:

The scroll pump belongs to the category of positive displacement pumps and can be used for continuous pumping of gases at low pressure, apart from general vacuum applications. The shape of the volume occupied by the gas moves and deforms continuously as the spiral orbits. To capture flow features in such a domain, where the mesh deformation varies with time in a complicated manner, a meshless solver was found to be very useful. The Least Squares Kinetic Upwind Method (LSKUM) is a kinetic-theory-based mesh-free Euler solver working on an arbitrary distribution of points. Here upwinding is enforced at the molecular level based on the kinetic flux vector splitting (KFVS) scheme. In the present study we extend LSKUM to moving-node viscous flow applications. The new code for moving-node viscous flow, LSKUM-NS-MN, is validated against the standard pitching-airfoil test case. Simulations of flow through the scroll pump using the LSKUM-NS-MN code agree well with experimental pumping-speed data.

Keywords: Least Squares, Moving node, Pitching, Spirals.

353 MIMO Broadcast Scheduling for Weighted Sum-rate Maximization

Authors: Swadhin Kumar Mishra, Sidhartha Panda, C. Ardil

Abstract:

Multiple-Input Multiple-Output (MIMO) is one of the most important communication techniques allowing wireless systems to achieve higher data rates. To overcome the practical difficulties of implementing Dirty Paper Coding (DPC), various suboptimal MIMO Broadcast (MIMO-BC) scheduling algorithms are employed which choose the best set of users among all the users. In this paper we discuss such a sub-optimal MIMO-BC scheduling algorithm which employs antenna selection at the receiver side. The channels of the users considered here are not identically and independently distributed (IID), so the users at the receiver side do not get equal opportunity for communication. We therefore introduce a method of applying weights to the channels of the users, which are not IID, in such a way that each user gets an equal opportunity for communication. The effect of the weights on the overall sum-rate achieved by the system is investigated and presented.

Keywords: Antenna selection, Identically and Independently Distributed (IID), Sum-rate capacity, Weighted sum rate.
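
As a back-of-the-envelope illustration of the weighting idea, the sketch below gives users with weaker (non-IID) channels larger weights before the scheduler picks the user set with the highest weighted sum rate; the weight rule and channel model are assumptions, not the authors' algorithm.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)
    n_users, pick = 6, 2
    avg_snr = rng.uniform(1.0, 20.0, n_users)   # non-IID: users differ in mean SNR
    weights = avg_snr.mean() / avg_snr          # assumed rule: weaker users get larger weights

    snr = rng.exponential(avg_snr)              # instantaneous SNRs in this slot
    rate = np.log2(1.0 + snr)                   # per-user achievable rate

    best = max(combinations(range(n_users), pick),
               key=lambda s: sum(weights[u] * rate[u] for u in s))
    print("scheduled users:", best)
    print("weighted sum rate:", round(sum(weights[u] * rate[u] for u in best), 2))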

352 The Performance Analysis of Error Saturation Nonlinearity LMS in Impulsive Noise based on Weighted-Energy Conservation

Authors: T. Panigrahi, G. Panda, Mulgrew

Abstract:

This paper introduces a new approach for the performance analysis of an adaptive filter with error saturation nonlinearity in the presence of impulsive noise. The performance analysis of adaptive filters includes both transient analysis, which shows how fast a filter learns, and steady-state analysis, which shows how well a filter learns. Recursive expressions for the mean-square deviation (MSD) and excess mean-square error (EMSE) are derived based on weighted-energy conservation arguments, which provide the transient behavior of the adaptive algorithm. The steady-state analysis is carried out for correlated input regressor data, so this approach leads to new performance results without restricting the input regression data to be white.

Keywords: Error saturation nonlinearity, transient analysis, impulsive noise.
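
A minimal sketch of an LMS update with an error saturation nonlinearity is given below, assuming an erf-type saturation and Bernoulli-Gaussian impulsive noise; the recursive MSD/EMSE expressions derived in the paper via weighted-energy conservation are analytical results and are not reproduced here.

    import numpy as np
    from scipy.special import erf

    def saturated_lms(x, d, n_taps=4, mu=0.01, sigma_s=1.0):
        """LMS with error saturation nonlinearity f(e) = erf(e / (sqrt(2)*sigma_s)).

        x: input signal, d: desired signal; returns the final weight estimate.
        The erf saturation and its scale sigma_s are assumed choices.
        """
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]        # regressor [x[n], ..., x[n-3]]
            e = d[n] - w @ u
            w = w + mu * erf(e / (np.sqrt(2) * sigma_s)) * u   # saturated error update
        return w

    rng = np.random.default_rng(0)
    N, w_true = 5000, np.array([0.8, -0.4, 0.2, 0.1])
    x = rng.normal(size=N)
    noise = rng.normal(0, 0.05, N)
    impulses = (rng.random(N) < 0.01) * rng.normal(0, 10.0, N)   # impulsive component
    d = np.convolve(x, w_true)[:N] + noise + impulses
    print("estimated taps:", saturated_lms(x, d).round(2))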

351 Evaluation of Best-Fit Probability Distribution for Prediction of Extreme Hydrologic Phenomena

Authors: Karim Hamidi Machekposhti, Hossein Sedghi

Abstract:

Probability distributions are the best method for forecasting extreme hydrologic phenomena such as rainfall and flood flows. In this research, in order to determine suitable probability distributions for estimating annual extreme rainfall and flood flow (discharge) series with different return periods, precipitation records of 40 years and discharge records of 58 years were collected from the Karkheh River in Iran. After homogeneity and adequacy tests, the data were analyzed with the Stormwater Management and Design Aid (SMADA) software and the residual sum of squares (R.S.S.). The best probability distribution was Log Pearson Type III, with R.S.S. values of 145.91 and 13.67 for peak discharge and 141.08 and 8.95 for maximum discharge at the Jelogir Majin and Pole Zal stations, respectively. The best distribution for maximum precipitation at the Jelogir Majin and Pole Zal stations was the Log Pearson Type III distribution, with R.S.S. values of 1.74 and 1.90, followed by the Pearson Type III distribution with R.S.S. values of 1.53 and 1.69. Overall, the Log Pearson Type III distribution is an acceptable distribution type for representing the statistics of extreme hydrologic phenomena in the Karkheh River in Iran, with the Pearson Type III distribution as a potential alternative.

Keywords: Karkheh River, Log Pearson Type III, probability distribution, residual sum of squares.

350 CFD Modeling of a Radiator Axial Fan for Air Flow Distribution

Authors: S. Jain, Y. Deshpande

Abstract:

The fluid mechanics principle is used extensively in designing axial flow fans and their associated equipment. This paper presents a computational fluid dynamics (CFD) modeling of air flow distribution from a radiator axial flow fan used in an acid pump truck Tier4 (APT T4) Repower. This axial flow fan augments the transfer of heat from the engine mounted on the APT T4. CFD analysis was performed for an area weighted average static pressure difference at the inlet and outlet of the fan. Pressure contours, velocity vectors, and path lines were plotted for detailing the flow characteristics for different orientations of the fan blade. The results were then compared and verified against known theoretical observations and actual experimental data. This study shows that a CFD simulation can be very useful for predicting and understanding the flow distribution from a radiator fan for further research work.

Keywords: Computational fluid dynamics (CFD), acid pump truck (APT) Tier4 Repower, axial flow fan, area weighted average static pressure difference, and contour plots.

349 Application of Feed-Forward Neural Networks Autoregressive Models in Gross Domestic Product Prediction

Authors: E. Giovanis

Abstract:

In this paper we present an autoregressive model with neural network modeling and standard error backpropagation training for optimization, in order to predict the gross domestic product (GDP) growth rate of four countries. Specifically, we propose a kind of weighted regression, which can be used for econometric purposes, in which the initial inputs are multiplied by the neural network's final optimum input-to-hidden-layer weights obtained after the training process. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasting results significantly outperform those of the autoregressive model in the out-of-sample period. The idea behind this approach is to propose a parametric regression with weighted variables in order to test the statistical significance and magnitude of the estimated autoregressive coefficients and simultaneously to estimate the forecasts.

Keywords: Autoregressive model, Error back-propagation, Feed-Forward neural networks, Gross Domestic Product.

348 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System

Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain

Abstract:

Emission of carbon dioxide (CO2) has adversely affected the environment. One of the major sources of CO2 emission is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased CO2 emissions into the environment. To reduce CO2 emissions, a sustainable transportation system is required, in which smart parking is one of the important measures that needs to be established. To contribute to the issue of reducing CO2 emissions, this research proposes a smart parking system. A cloud-based solution is provided to drivers which automatically searches for and recommends the most preferred parking slots. To determine the preferences of the parking areas, the methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used, and sorting, multi-level filtering, exploratory data analysis (EDA), the Analytical Hierarchy Process (AHP) and the weighted sum model (WSM) are applied to rank the parking areas and recommend the top-k most preferred parking areas to the drivers. In the EDA process, “2020testcar-2020-03-03”, a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, the results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology outperforms the conventional one in reducing the emission of CO2 into the atmosphere.

Keywords: CO2 emission, IoT, EDA, Weighted Sum Model, WSM, regression, smart parking system.
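
The dataset-driven CO2 estimation and the AHP step are not reproducible from the abstract; the sketch below only shows the final weighted sum model (WSM) ranking over a few hypothetical parking areas and criteria, with invented feature values and weights.

    import numpy as np

    # Hypothetical parking areas scored on: distance (km), expected search time (min),
    # occupancy (fraction), and estimated CO2 to reach them (g). Lower is better for all.
    areas = ["P1", "P2", "P3", "P4"]
    X = np.array([
        [1.2, 4.0, 0.60, 150.0],
        [0.8, 6.0, 0.85, 110.0],
        [2.5, 2.0, 0.40, 300.0],
        [1.0, 3.0, 0.70, 130.0],
    ])
    weights = np.array([0.25, 0.20, 0.15, 0.40])   # assumed: CO2 weighted highest

    # WSM with min-max normalization; all criteria are costs, so invert them.
    norm = (X.max(axis=0) - X) / (X.max(axis=0) - X.min(axis=0))
    scores = norm @ weights
    for r in np.argsort(scores)[::-1]:
        print(f"{areas[r]}: WSM score = {scores[r]:.3f}")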

347 The Induced Generalized Hybrid Averaging Operator and its Application in Financial Decision Making

Authors: José M. Merigó, Montserrat Casanovas

Abstract:

We present the induced generalized hybrid averaging (IGHA) operator. It is a new aggregation operator that generalizes the hybrid averaging (HA) by using generalized means and order inducing variables. With this formulation, we get a wide range of mean operators such as the induced HA (IHA), the induced hybrid quadratic averaging (IHQA), the HA, etc. The ordered weighted averaging (OWA) operator and the weighted average (WA) are included as special cases of the HA operator. Therefore, with this generalization we can obtain a wide range of aggregation operators such as the induced generalized OWA (IGOWA), the generalized OWA (GOWA), etc. We further generalize the IGHA operator by using quasi-arithmetic means. Then, we get the Quasi-IHA operator. Finally, we also develop an illustrative example of the new approach in a financial decision making problem. The main advantage of the IGHA is that it gives a more complete view of the decision problem to the decision maker because it considers a wide range of situations depending on the operator used.

Keywords: Decision making, Aggregation operators, OWA operator, Generalized means, Selection of investments.

346 W3-Miner: Mining Weighted Frequent Subtree Patterns in a Collection of Trees

Authors: R. AliMohammadzadeh, M. Haghir Chehreghani, A. Zarnani, M. Rahgozar

Abstract:

Mining frequent tree patterns has many useful applications in XML mining, bioinformatics, network routing, etc. Most frequent-subtree mining algorithms (i.e., FREQT, TreeMiner and CMTreeMiner) use the anti-monotone property in the candidate subtree generation phase. However, none of these algorithms have verified the correctness of this property in tree-structured data. In this research it is shown that anti-monotonicity does not generally hold when weighted support is used in tree pattern discovery. As a result, tree mining algorithms that are based on this property would probably miss some of the valid frequent subtree patterns in a collection of trees. In this paper, we investigate the correctness of the anti-monotone property for the problem of weighted frequent subtree mining. In addition we propose W3-Miner, a new algorithm for the full extraction of frequent subtrees. The experimental results confirm that W3-Miner finds some frequent subtrees that the previously proposed algorithms are not able to discover.

Keywords: Semi-Structured Data Mining, Anti-Monotone Property, Trees.

345 Analytical Authentication of Butter Using Fourier Transform Infrared Spectroscopy Coupled with Chemometrics

Authors: M. Bodner, M. Scampicchio

Abstract:

Fourier Transform Infrared (FT-IR) spectroscopy coupled with chemometrics was used to distinguish between butter samples and non-butter samples. Further, quantification of the margarine content in adulterated butter samples was investigated. The fingerprint region (1400-800 cm–1) was used to develop unsupervised pattern recognition (Principal Component Analysis, PCA), supervised modeling (Soft Independent Modelling by Class Analogy, SIMCA), classification (Partial Least Squares Discriminant Analysis, PLS-DA) and regression (Partial Least Squares Regression, PLS-R) models. PCA of the fingerprint region shows a clustering of the two sample types. All samples were classified in their rightful class by the SIMCA approach; however, nine adulterated samples (between 1% and 30% w/w of margarine) were classified as belonging to both the butter class and the non-butter one. In the two-class PLS-DA model (R2 = 0.73, Root Mean Square Error of Prediction, RMSEP = 0.26% w/w), the sensitivity was 71.4% and the Positive Predictive Value (PPV) was 100%. Its threshold was calculated at 7% w/w of margarine in adulterated butter samples. Finally, a PLS-R model (R2 = 0.84, RMSEP = 16.54%) was developed. PLS-DA was a suitable classification tool and PLS-R a proper quantification approach. The results demonstrate that FT-IR spectroscopy combined with PLS-R can be used as a rapid, simple and safe method to identify pure butter samples from adulterated ones and to determine the degree of adulteration with margarine in butter samples.

Keywords: Adulterated butter, margarine, PCA, PLS-DA, PLS-R, SIMCA.

344 Genetic Algorithm Application in a Dynamic PCB Assembly with Carryover Sequence-Dependent Setups

Authors: M. T. Yazdani Sabouni, Rasaratnam Logendran

Abstract:

We consider a typical problem in the assembly of printed circuit boards (PCBs) in a two-machine flow shop system, with the objective of simultaneously minimizing the weighted sum of weighted tardiness and weighted flow time. The investigated problem is a group scheduling problem in which PCBs are assembled in groups, and the interest is in finding the best sequence of groups, as well as of the boards within each group, to minimize the objective function value. The setup operation between any two board groups is characterized by carryover sequence-dependent setup times, which exactly matches the real application of this problem. As a technical constraint, all of the boards must be kitted before the assembly operation starts (the kitting operation), normally by kitting staff. The main idea developed in this paper is to completely eliminate the role of the kitting staff by assigning the kitting task to the machine operator during his idle time, which is referred to as integration of internal (machine) and external (kitting) setup times. Performing the kitting operation, which prepares the next set of boards while the other boards are being assembled, results in boards continuously entering the system, i.e., having dynamic arrival times. Consequently, a dynamic PCB assembly system is introduced for the first time in the assembly of PCBs, which also has characteristics similar to those of just-in-time manufacturing. The investigated problem is computationally very complex, so finding optimal solutions is impossible, especially as the problem size grows. Thus, a heuristic based on a Genetic Algorithm (GA) is employed. An example problem illustrating the application of the developed GA is demonstrated, and numerical results of applying the GA to several instances are provided.

Keywords: Genetic algorithm, Dynamic PCB assembly, Carryover sequence-dependent setup times, Multi-objective.

343 Mediating Role of Social Responsibility on the Relationship between Consumer Awareness of Green Marketing and Purchase Intentions

Authors: Norazah Mohd Suki, Norbayah Mohd Suki

Abstract:

This research aims to examine the mediating effect of corporate social responsibility on the relationship between consumer awareness of green marketing and purchase intentions in the retail setting. Data from 200 valid questionnaires were analyzed using the partial least squares (PLS) approach for structural equation modeling with the SmartPLS computer program version 2.0, since the research data do not necessarily have a multivariate normal distribution and PLS is less sensitive to sample size than covariance-based approaches. The PLS results revealed that corporate social responsibility partially mediated the link between consumer awareness of green marketing and purchase intentions of the product in the retail setting. Marketing managers should allocate a sufficient portion of their budget to appropriate corporate social responsibility activities by engaging in voluntary programs, for a positive return on investment leading to increased business profitability and long-run business sustainability. The outcomes concerning the mediating effect of corporate social responsibility add a new impetus to the growing literature and preceding discoveries on consumer green marketing awareness, which is inadequately researched in the Malaysian setting. A direction for future research is also presented.

Keywords: Green marketing awareness, corporate social responsibility, partial least squares, purchase intention.

342 Multiple Criteria Decision Making for Turkish Air Force Stealth Fighter Aircraft Selection

Authors: C. Ardil

Abstract:

Neutrosophic logic decision analysis is proposed as a method of stealth fighter aircraft selection for the Turkish Air Force. The opinion of experts is employed to rank the alternatives across a set of criteria. The analyst uses neutrosophic logic numbers to describe the experts' preferences. This approach can handle situations in which precise data are unavailable, which is most commonly the case in stealth fighter aircraft selection. Neutrosophic logic numbers can account for the imprecision of the factors affecting decision making, such as stealth analysis, survivability analysis, and performance analysis. Neutrosophic logic ranking is achieved using the weighted arithmetic operator and the weighted geometric operator, and the alternatives are ranked from best to worst. An example is also presented to illustrate the applicability and effectiveness of the proposed method.

Keywords: Neutrosophic set theory, stealth fighter aircraft selection, multiple criteria decision-making, neutrosophic logic decision making, Turkish Air Force, MCDM

341 Solving Process Planning and Scheduling with Number of Operation Plus Processing Time Due-Date Assignment Concurrently Using a Genetic Search

Authors: Halil Ibrahim Demir, Alper Goksu, Onur Canpolat, Caner Erden, Melek Nur

Abstract:

Traditionally, process planning, scheduling and due date assignment are performed sequentially and separately. The high interrelation between these functions makes their integration very useful. Although there are numerous works on integrated process planning and scheduling, and many works on scheduling with due date assignment, there are only a few works on the integration of all three functions. Here we tested different integration levels of these three functions and found the fully integrated version to be the best. We applied genetic search and random search, and genetic search was found to perform better than random search. We penalized all earliness, tardiness and due-date-related costs; since all three terms are undesired, it is better to penalize all of them.

Keywords: Process planning, scheduling, due-date assignment, genetic algorithm, random search.

340 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method

Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri

Abstract:

Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated in simulation, and noise is further added to the data distributions to create differently disordered distributions in the time series data and to evaluate the algorithm's prediction of local nonlinearity. The performance of the algorithm is then simulated, and its sensitivity to the data distribution, together with the influence of the important local-validity parameter under different data distributions, is explained for cases where the data are widely distributed or where few data points are available.

Keywords: Local nonlinear estimation, LWPR algorithm, Online training method.
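
LWPR itself maintains incrementally updated local projection models; as a much simpler stand-in that still shows the role of the locality (kernel width) parameter discussed above, the sketch below fits plain locally weighted linear regression with a Gaussian kernel to a one-dimensional series.

    import numpy as np

    def loclin_predict(x_query, X, y, bandwidth=0.3):
        """Locally weighted linear regression at one query point.

        A Gaussian kernel of width 'bandwidth' plays the role of the local
        validity region; this is a simplification of LWPR's receptive fields.
        """
        w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)      # locality weights
        A = np.column_stack([np.ones_like(X), X])
        W = np.diag(w)
        beta = np.linalg.pinv(A.T @ W @ A) @ (A.T @ W @ y)       # weighted least squares
        return beta[0] + beta[1] * x_query

    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(0, 2 * np.pi, 200))
    y = np.sin(X) + rng.normal(0, 0.2, len(X))                   # nonlinear, noisy series
    for bw in (0.1, 0.5, 2.0):                                   # effect of locality parameter
        pred = np.array([loclin_predict(q, X, y, bw) for q in X])
        rmse = np.sqrt(np.mean((pred - np.sin(X)) ** 2))
        print(f"bandwidth {bw}: RMSE vs. true sin = {rmse:.3f}")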

339 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but the existing algorithms have poor signal compression and reconstruction performance. In this paper, a Joint Block Multi-Orthogonal Least Squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction with this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.

Keywords: telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal

338 Advanced Jet Trainer and Light Attack Aircraft Selection Using Composite Programming in Multiple Criteria Decision Making Analysis Method

Authors: C. Ardil

Abstract:

In this paper, composite programming is discussed for the aircraft evaluation and selection problem using the multiple criteria decision analysis method. The decision criteria and aircraft alternatives were identified from the literature review. The criteria weights were determined by the standard deviation method. The proposed model is applied to a practical decision problem for evaluating and selecting advanced jet trainer and light attack aircraft. The proposed technique gives robust and efficient results in modeling multiple criteria decisions. As a result of the composite programming analysis, Hürjet, an advanced jet trainer and light attack aircraft alternative (a3), was chosen as the most suitable aircraft candidate.

Keywords: composite programming, additive weighted model, multiplicative weighted model, multiple criteria decision making analysis, MCDMA, aircraft selection, advanced jet trainer and light attack aircraft, M-346, FA-50, Hürjet

337 Accurate Visualization of Graphs of Functions of Two Real Variables

Authors: Zeitoun D. G., Thierry Dana-Picard

Abstract:

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such a system is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least squares penalty method for automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.

Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, Augmented Lagrangian method, Uzawa's Algorithm, Preconditioned Conjugate Gradient.

336 Robust Statistics Based Algorithm to Remove Salt and Pepper Noise in Images

Authors: V. R. Vijaykumar, P. T. Vanathi, P. Kanagasabapathy, D. Ebenezer

Abstract:

In this paper, a robust-statistics-based filter to remove salt and pepper noise in digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by a value estimated with the proposed robust-statistics-based filter. The proposed method performs well in removing low- to medium-density impulse noise with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal-dependent rank-ordered mean filter, adaptive median filter and a recently proposed decision-based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these filters in restoring the original image, with superior preservation of edges and better suppression of impulse noise.

Keywords: Image denoising, Nonlinear filter, Robust Statistics, and Salt and Pepper Noise.
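
The paper's robust estimator is not given in the abstract; the sketch below follows the same detect-then-replace structure, with a simple extreme-value detector and the median of the uncorrupted neighbours as the replacement, purely as an illustration.

    import numpy as np

    def remove_salt_pepper(img):
        """Detect-and-replace filter for salt-and-pepper noise (illustrative).

        Pixels at the extreme values 0 or 255 are treated as corrupted and
        replaced by the median of the uncorrupted pixels in their 3x3 window;
        the paper's robust-statistics estimator is more elaborate than this.
        """
        out = img.copy()
        corrupted = (img == 0) | (img == 255)
        padded = np.pad(img, 1, mode="edge")
        pad_bad = np.pad(corrupted, 1, mode="edge")
        for y, x in zip(*np.nonzero(corrupted)):
            win = padded[y:y + 3, x:x + 3]
            good = win[~pad_bad[y:y + 3, x:x + 3]]
            out[y, x] = np.median(good) if good.size else np.median(win)
        return out

    rng = np.random.default_rng(1)
    clean = rng.integers(60, 200, (64, 64)).astype(np.uint8)
    noisy = clean.copy()
    mask = rng.random(clean.shape)
    noisy[mask < 0.05] = 0            # pepper
    noisy[mask > 0.95] = 255          # salt
    restored = remove_salt_pepper(noisy)
    print("mean abs error noisy:", np.mean(np.abs(noisy.astype(int) - clean)))
    print("mean abs error restored:", np.mean(np.abs(restored.astype(int) - clean)))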

335 General Purpose Graphic Processing Units Based Real Time Video Tracking System

Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai

Abstract:

Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about the background and foreground. However, pixel-level processing of local regions consumes a good amount of computational time and memory space with traditional approaches. In our approach we explore the concurrent computational ability of General Purpose Graphics Processing Units (GPGPUs) to address this problem. A Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernels are influenced by local regions and are updated according to the inter-frame variations of these corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speed-up of 10X compared to the sequential approach.

Keywords: Connected components, Embrace threads, Local weighted kernel, Structuring element.

334 Spatial Pattern and GIS-Based Model for Risk Assessment – A Case Study of Dusit District, Bangkok

Authors: Morakot Worachairungreung

Abstract:

The objectives of the research are to study the spatial pattern of fire locations and to develop Geographic Information System techniques for fire risk assessment to support fire planning and management. Fire risk assessment was based on two groups of factors: vulnerability factors, such as building material type, building height and building density, and mitigation-capacity factors, such as accessibility by road, distance to the fire station and distance to hydrants; the factors were obtained from four groups of stakeholders, including firemen, city planners, local government officers and local residents. The factors obtained from all stakeholders were converted into GIS raster data and then superimposed in order to prepare a fire risk map of the area showing the level of fire risk, ranging from high to low. The level of fire risk was obtained from the weighted mean of the factors based on the stakeholders, and the weight for each factor was obtained by Analytical Hierarchy Analysis.

Keywords: Fire Risk Assessment, Geographic Information System: GIS, Raster Analysis and Analytical Hierarchy Analysis.
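
To make the last step concrete, the sketch below derives factor weights from an assumed pairwise-comparison matrix using the principal-eigenvector method commonly used in AHP-style analysis, and then combines reclassified factor rasters by their weighted mean; the comparison values are invented for illustration.

    import numpy as np

    # Assumed pairwise comparisons among four factors:
    # building material, building density, road accessibility, distance to fire station.
    A = np.array([
        [1.0, 3.0, 5.0, 5.0],
        [1/3, 1.0, 3.0, 3.0],
        [1/5, 1/3, 1.0, 1.0],
        [1/5, 1/3, 1.0, 1.0],
    ])
    vals, vecs = np.linalg.eig(A)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = principal / principal.sum()            # priority vector (factor weights)
    print("factor weights:", weights.round(3))

    # Weighted mean of reclassified risk rasters (1 = low risk ... 5 = high risk).
    rng = np.random.default_rng(0)
    factors = rng.integers(1, 6, size=(4, 100, 100)).astype(float)
    risk = np.tensordot(weights, factors, axes=1)    # cell-by-cell weighted mean
    print("risk raster range:", risk.min().round(2), "-", risk.max().round(2))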
