Search results for: weighted multiplication operator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 313


73 Efficient Variants of Square Contour Algorithm for Blind Equalization of QAM Signals

Authors: Ahmad Tariq Sheikh, Shahzad Amin Sheikh

Abstract:

A new distance-adjusted approach is proposed in which static square contours are defined around an estimated symbol in a QAM constellation, creating regions that correspond to fixed step sizes and weighting factors. As a result, the equalizer tap adjustment consists of a linearly weighted sum of adaptation criteria scaled by a variable step size. This approach is the basis of two new algorithms: the Variable step size Square Contour Algorithm (VSCA) and the Variable step size Square Contour Decision-Directed Algorithm (VSDA). The proposed schemes are compared with existing blind equalization algorithms of the SCA family in terms of convergence speed, constellation eye opening and residual ISI suppression. Simulation results for 64-QAM signaling over empirically derived microwave radio channels confirm the efficacy of the proposed algorithms. An RTL implementation of the blind adaptive equalizer based on the proposed schemes is presented, with the system configured to operate in VSCA error signal mode for square QAM signals up to 64-QAM.
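
The abstract does not reproduce the VSCA/VSDA update equations, so the following NumPy sketch only illustrates the general mechanism it describes: square contours around the estimated symbol partition the output space into regions, each region maps to a fixed step size and weighting factor, and the tap adjustment is a linearly weighted sum of two adaptation criteria scaled by that step size. The constellation size, region radii, step sizes, weights and the exact error definitions below are all illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Hypothetical 16-QAM alphabet (the paper evaluates up to 64-QAM).
QAM = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])

# Illustrative contour radii around the estimated symbol, with the step size
# and weighting factor assigned to each region -- placeholder values only.
REGIONS = [(0.5, 1e-3, 0.2), (1.0, 5e-4, 0.5), (np.inf, 1e-4, 0.9)]

def vsca_like_update(w, x, R2=10.0):
    """One blind-equalizer tap update in the spirit of the VSCA/VSDA schemes.

    w  : current equalizer taps (complex vector)
    x  : regressor of the most recent received samples (same length as w)
    R2 : dispersion constant of the square-contour criterion (placeholder value)
    """
    y = np.vdot(w, x)                               # equalizer output (y = w^H x)
    s_hat = QAM[np.argmin(np.abs(QAM - y))]         # nearest constellation symbol
    dist = abs(y - s_hat)                           # distance from the estimated symbol

    for radius, mu, lam in REGIONS:                 # region fixes step size and weight
        if dist <= radius:
            break

    e_sca = y * (max(abs(y.real), abs(y.imag)) ** 2 - R2)  # square-contour-style error
    e_dd = y - s_hat                                        # decision-directed error
    e = (1.0 - lam) * e_sca + lam * e_dd            # linearly weighted adaptation criteria
    return w - mu * x * np.conj(e)                  # stochastic-gradient tap adjustment

# Tiny demo with a 5-tap equalizer and a random regressor (placeholder data).
rng = np.random.default_rng(0)
w0 = np.zeros(5, dtype=complex); w0[2] = 1.0        # center-spike initialization
x0 = rng.normal(size=5) + 1j * rng.normal(size=5)
w1 = vsca_like_update(w0, x0)
```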

Keywords: Adaptive filtering, Blind Equalization, Square Contour Algorithm.

72 Simulation of Non-Linear Behavior of Shear Wall under Seismic Loading

Authors: M. A. Ghorbani, M. Pasbani Khiavi

Abstract:

The seismic response of a steel shear wall system is investigated in this paper, taking nonlinearity effects into account using the finite element method. With the availability of modern computing, nonlinear finite element analysis is a usable and reliable means of analyzing civil structures. A numerical model based on the finite element method for the seismic analysis of shear walls, accounting for large displacements and materially nonlinear behavior, is presented through the development of a finite element code. The standard Galerkin weighted residual formulation is used to develop the code. A two-dimensional plane stress model with a total Lagrangian formulation is used to represent the shear wall response, and the Newton-Raphson method is applied for the solution of the nonlinear transient equations. The presented model can be extended to the analysis of civil engineering structures with different material behavior and complicated geometry.
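
As a concrete illustration of the Newton-Raphson iteration used here to solve the nonlinear equations arising from the Galerkin weighted residual formulation, the sketch below applies the method to a small stand-in residual system R(u) = 0; the residual, tangent and tolerances are placeholders for the assembled finite element quantities, not the paper's shear wall model.

```python
import numpy as np

def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=25):
    """Solve R(u) = 0 by Newton-Raphson, as used for nonlinear FE equations.

    residual : function returning the residual vector R(u)
    tangent  : function returning the tangent (Jacobian/stiffness) matrix K_T(u)
    u0       : initial guess for the displacement-like unknowns
    """
    u = np.asarray(u0, dtype=float)
    for it in range(max_iter):
        R = residual(u)
        if np.linalg.norm(R) < tol:            # converged
            return u, it
        du = np.linalg.solve(tangent(u), -R)   # linearized correction step
        u = u + du
    raise RuntimeError("Newton-Raphson did not converge")

# Toy stand-in for an assembled nonlinear system (not the shear wall model):
R = lambda u: np.array([u[0] ** 2 + u[1] ** 2 - 1.0, u[0] - u[1]])
K = lambda u: np.array([[2.0 * u[0], 2.0 * u[1]], [1.0, -1.0]])
u_star, iters = newton_raphson(R, K, [1.0, 0.5])
print(u_star, iters)        # converges to (sqrt(2)/2, sqrt(2)/2)
```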

Keywords: Finite element, steel shear wall, nonlinear, earthquake

71 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach

Authors: Imen Dhaou

Abstract:

This study examines conditional Value at Risk by applying the GJR-EVT-Copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data with a bivariate GJR-GARCH model, extracting the filtered residuals, and then applying the Peaks over Threshold (POT) model to fit the residual tails in order to model the marginal distributions. We then use pair-copulas to model the portfolio risk dependence structure and find the optimal portfolio. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results report the VaR and CVaR values for an equally weighted portfolio of each Dow Jones Islamic-conventional pair. In sum, we find that the optimal investment concentrates on the Islamic-conventional US Market index pair, which receives a high investment proportion, whereas all other index pairs receive low investment proportions. These results have practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management and the diversification benefits of these markets.
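
The final VaR/CVaR step of the methodology can be illustrated in a few lines: given simulated joint return scenarios for an index pair (random placeholder draws below, standing in for the GJR-GARCH-EVT-pair-copula simulations of the paper), the return of an equally weighted portfolio is formed, and its empirical tail quantile and tail mean give the VaR and CVaR.

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Empirical VaR and CVaR (expected shortfall) at confidence level alpha."""
    losses = -np.asarray(returns)              # losses are negative returns
    var = np.quantile(losses, alpha)           # loss exceeded with probability 1 - alpha
    cvar = losses[losses >= var].mean()        # average loss beyond the VaR level
    return var, cvar

# Placeholder scenarios standing in for the copula-based simulations of an
# Islamic/conventional index pair (10,000 joint return draws).
rng = np.random.default_rng(0)
scenarios = rng.multivariate_normal(mean=[0.0003, 0.0002],
                                    cov=[[1e-4, 8e-5], [8e-5, 1.2e-4]],
                                    size=10_000)
weights = np.array([0.5, 0.5])                 # equally weighted pair
portfolio = scenarios @ weights
print(var_cvar(portfolio, alpha=0.99))
```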

Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization.

70 Nonlinear Analysis of Shear Wall Using Finite Element Model

Authors: M. A. Ghorbani, M. Pasbani Khiavi, F. Rezaie Moghaddam

Abstract:

In the analysis of structures, nonlinear effects due to large displacements, large rotations and material nonlinearity are very important and must be considered for a reliable analysis. With the availability of modern computing, nonlinear finite element analysis is a usable and reliable means of analyzing civil structures. In this research, the large displacements and materially nonlinear behavior of a shear wall are captured by a finite element code developed using the standard Galerkin weighted residual formulation. A two-dimensional plane stress model is used to represent the shear wall response. The total Lagrangian formulation, which is computationally more effective, is used in the formulation of the stiffness matrices, and the Newton-Raphson method is applied for the solution of the nonlinear transient equations. The details of the program formulation are highlighted and the results of the analyses are presented, along with a comparison of the structural response with ANSYS results. The presented model can be extended to the nonlinear analysis of civil engineering structures with different material behavior and complicated geometry.

Keywords: Finite element, large displacements, materially nonlinear, shear wall.

69 An Improved k Nearest Neighbor Classifier Using Interestingness Measures for Medical Image Mining

Authors: J. Alamelu Mangai, Satej Wagle, V. Santhosh Kumar

Abstract:

The exponential increase in the volume of medical image databases has imposed new challenges on clinical routine in maintaining patient history, diagnosis, treatment and monitoring. With the advent of data mining and machine learning techniques, it is possible to automate and/or assist physicians in clinical diagnosis. In this research, a medical image classification framework using data mining techniques is proposed. It involves feature extraction, feature selection, feature discretization and classification. In the classification phase, the performance of the traditional k nearest neighbor (kNN) classifier is improved using a feature weighting scheme and distance-weighted voting instead of simple majority voting. Feature weights are calculated using the interestingness measures used in association rule mining. Experiments on retinal fundus images show that the proposed framework improves the classification accuracy of traditional kNN from 78.57% to 92.85%.
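
A minimal sketch of the classification step described above: features are scaled by per-feature weights (derived in the paper from association-rule interestingness measures; fixed placeholder weights are used here), and each of the k nearest neighbors votes with a weight inversely proportional to its distance instead of casting a simple majority vote. The toy data below are not retinal fundus features.

```python
import numpy as np
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x, feature_weights, k=5, eps=1e-9):
    """kNN with feature weighting and distance-weighted voting."""
    # Weighted Euclidean distance: interestingness-based weights stretch
    # informative features and shrink uninformative ones.
    diffs = (X_train - x) * np.sqrt(feature_weights)
    dists = np.linalg.norm(diffs, axis=1)
    nearest = np.argsort(dists)[:k]

    # Each neighbor votes with weight 1/distance rather than one vote each.
    votes = defaultdict(float)
    for idx in nearest:
        votes[y_train[idx]] += 1.0 / (dists[idx] + eps)
    return max(votes, key=votes.get)

# Tiny illustrative example (placeholder features and labels).
X = np.array([[0.1, 1.0], [0.2, 0.9], [0.9, 0.1], [1.0, 0.2]])
y = np.array(["normal", "normal", "abnormal", "abnormal"])
w = np.array([0.8, 0.2])        # placeholder interestingness-derived weights
print(weighted_knn_predict(X, y, np.array([0.15, 0.8]), w, k=3))
```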

Keywords: Medical Image Mining, Data Mining, Feature Weighting, Association Rule Mining, k nearest neighbor classifier.

68 Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments

Authors: Sunita Dhingra, Satinder Bal Gupta, Ranjit Biswas

Abstract:

Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proven to be an excellent technique for finding an optimal solution. In the past, several GA-based methods have been proposed for this problem, but they all consider a single criterion. In the present work, a bi-criteria multiprocessor task scheduling problem is considered in which the weighted sum of makespan and total completion time is minimized. The efficiency and effectiveness of a genetic algorithm depend on the setting of its parameters, such as the crossover operator, mutation operator, crossover probability and selection function. The effects of the GA parameters on the bi-criteria fitness function, and the subsequent setting of those parameters, have been established using the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. Experiments were performed with different levels of the GA parameters, and analysis of variance was performed to identify the parameters significant for minimizing makespan and total completion time simultaneously.
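
The bi-criteria objective can be made concrete with a short sketch: given a processor assignment and per-task processing times (illustrative data only), the fitness a GA would minimize is the weighted sum of makespan and total completion time; the weights below are arbitrary assumptions.

```python
def bicriteria_fitness(schedule, proc_times, w_makespan=0.5, w_tct=0.5):
    """Weighted sum of makespan and total completion time for a multiprocessor schedule.

    schedule   : dict mapping processor id -> ordered list of task ids
    proc_times : dict mapping task id -> processing time
    """
    completion_times = []
    for tasks in schedule.values():
        t = 0.0
        for task in tasks:                      # tasks run back-to-back on their processor
            t += proc_times[task]
            completion_times.append(t)
    makespan = max(completion_times)            # finish time of the last task
    total_completion = sum(completion_times)    # sum of all task completion times
    return w_makespan * makespan + w_tct * total_completion

# Illustrative 5-task, 2-processor instance.
times = {1: 3.0, 2: 2.0, 3: 4.0, 4: 1.0, 5: 2.5}
print(bicriteria_fitness({0: [1, 4, 5], 1: [3, 2]}, times))   # 0.5*6.5 + 0.5*23.5 = 15.0
```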

Keywords: Multiprocessor task scheduling, Design of experiments, Genetic Algorithm, Makespan, Total completion time.

67 Data Envelopment Analysis with Partially Perfect Objects

Authors: Alexander Y. Vaninsky

Abstract:

This paper presents a simplified version of Data Envelopment Analysis (DEA) - a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object - one having the greatest outputs and smallest inputs. This allows an explicit analytical solution to be obtained and is a step toward absolute efficiency. This paper develops the approach further and introduces a DEA model with Partially Perfect Objects. DEA PPO consecutively eliminates the smallest relative inputs or greatest relative outputs and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are then combined into a weighted efficiency score. The computational scheme remains as simple as that of DEA PO, but DEA PPO has the advantage of taking all of the inputs and outputs of each actual object into account. Firm evaluation is considered as an example.
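
The following is only a schematic sketch of the Perfect Object idea, not the author's explicit analytical solution: a virtual object with the greatest outputs and smallest inputs is formed, each actual object is scored against it, and a reduced-indicator score is blended into a weighted efficiency score in the spirit of DEA PPO. The scoring ratio, the indicator-reduction rule and the combination weights are illustrative assumptions.

```python
import numpy as np

def perfect_object_score(outputs, inputs):
    """Schematic efficiency of each object relative to a virtual Perfect Object.

    outputs : (n_objects, n_outputs) array, larger is better
    inputs  : (n_objects, n_inputs) array, smaller is better
    """
    po_out = outputs.max(axis=0)                  # Perfect Object: greatest outputs
    po_in = inputs.min(axis=0)                    # ...and smallest inputs
    out_ratio = (outputs / po_out).mean(axis=1)   # closeness to the perfect outputs
    in_ratio = (po_in / inputs).mean(axis=1)      # closeness to the perfect inputs
    return out_ratio * in_ratio                   # schematic efficiency in (0, 1]

def partially_perfect_score(outputs, inputs, weights=(0.6, 0.4)):
    """Weighted combination of the full score and one reduced-indicator score."""
    full = perfect_object_score(outputs, inputs)
    # Drop one output column (illustrative reduction rule, not the paper's).
    drop = np.argmax(outputs.std(axis=0) / outputs.mean(axis=0))
    reduced = perfect_object_score(np.delete(outputs, drop, axis=1), inputs)
    return weights[0] * full + weights[1] * reduced

# Toy firm data: two outputs (revenue, profit) and two inputs (labor, capital).
Y = np.array([[10.0, 2.0], [8.0, 3.0], [12.0, 1.5]])
X = np.array([[5.0, 7.0], [4.0, 6.0], [6.0, 9.0]])
print(partially_perfect_score(Y, X))
```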

Keywords: Data Envelopment Analysis, Perfect object, Partially perfect object, Partial efficiency, Explicit solution, Simplified algorithm.

66 A Weighted Group EI Incorporating Role Information for More Representative Group EI Measurement

Authors: Siyu Wang, Anthony Ward

Abstract:

Emotional intelligence (EI) is a well-established personal characteristic. It has been viewed as a critical factor that can influence an individual's academic achievement, ability to work and potential to succeed. When working in a group, EI is fundamentally connected to the group members' interaction and ability to work as a team. The ability of a group member to intelligently perceive and understand their own emotions (Intrapersonal EI), to intelligently perceive and understand other members' emotions (Interpersonal EI), and to intelligently perceive and understand emotions between different groups (Cross-boundary EI) can be considered as Group emotional intelligence (Group EI). In this research, a more representative Group EI measurement approach is proposed that incorporates the composition of a group and an individual's role in that group. To support this claim, the study adopts a multi-method research design, combining qualitative and quantitative techniques to establish a metric of Group EI. The results indicate that by introducing a weight coefficient for each group member's contribution to the group work into the measurement of Group EI, the measure becomes more representative and more capable of capturing what happens during teamwork than previous approaches.
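
A minimal sketch of the proposed weighting idea, with made-up numbers: each member's EI score is multiplied by a weight coefficient reflecting that member's role in the group work, and the Group EI is the weighted average. Both the scores and the role weights below are placeholders.

```python
def group_ei(member_scores, role_weights):
    """Role-weighted Group EI: weighted average of individual EI scores.

    member_scores : dict mapping member -> individual EI score
    role_weights  : dict mapping member -> weight of that member's role in the group work
    """
    total_weight = sum(role_weights.values())
    return sum(member_scores[m] * role_weights[m] for m in member_scores) / total_weight

# Placeholder team: a leader weighted more heavily than supporting members.
scores = {"leader": 112, "analyst": 98, "designer": 105}
weights = {"leader": 0.5, "analyst": 0.3, "designer": 0.2}
print(group_ei(scores, weights))   # 112*0.5 + 98*0.3 + 105*0.2 = 106.4
```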

Keywords: Emotional intelligence, EI, Group EI, multi-method research, teamwork.

65 Freighter Aircraft Selection Using Entropic Programming for Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper proposes entropic programming for the freighter aircraft selection problem using multiple criteria decision making analysis. The study aims to provide a systematic and comprehensive framework focused on freighter aircraft selection. To achieve this goal, an integrated entropic programming approach is proposed to evaluate and rank the alternatives. The decision criteria and aircraft alternatives were identified from the research data analysis. The objective criteria weights were determined by the mean weight method and the standard deviation method. The proposed entropic programming model was applied to a practical decision problem of evaluating and selecting freighter aircraft. The proposed technique gives robust, reliable and efficient results in modeling decision making analysis problems. As a result of the entropic programming analysis, the Boeing B747-8F freighter aircraft alternative (a3) was chosen as the most suitable candidate.
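
The objective weighting step can be illustrated directly: the mean weight method assigns equal weights to all criteria, while the standard deviation method weights each criterion in proportion to the dispersion of its normalized column in the decision matrix. The decision matrix below is a placeholder, not the paper's freighter aircraft data.

```python
import numpy as np

def mean_weights(decision_matrix):
    """Mean weight method: all criteria weighted equally."""
    n_criteria = decision_matrix.shape[1]
    return np.full(n_criteria, 1.0 / n_criteria)

def std_dev_weights(decision_matrix):
    """Standard deviation method: weight proportional to each criterion's dispersion."""
    # Normalize each column so criteria on different scales are comparable.
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    sigma = norm.std(axis=0, ddof=1)
    return sigma / sigma.sum()

# Placeholder decision matrix: 3 freighter alternatives x 4 criteria.
D = np.array([[140.0, 0.92, 9200.0, 4.1],
              [102.0, 0.90, 9070.0, 3.6],
              [109.0, 0.89, 8700.0, 3.8]])
print(mean_weights(D))
print(std_dev_weights(D))
```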

Keywords: entropic programming, additive weighted model, multiple criteria decision making analysis, MCDMA, TOPSIS, aircraft selection, freighter aircraft, Boeing B747-8F, Boeing B777F, Airbus A350F

64 Fundamental Equation of Complete Factor Synergetics of Complex Systems with Normalization of Dimension

Authors: Li Zong-Cheng

Abstract:

Motivated by the need for a unified measure of varieties of resources and a unified processing of their disposal, this paper puts forth three closely related new basic models: the resources assembled node, the disposition integrated node, and the intelligent organizing node. Three closely related quantities of integrative analytical mechanics are defined, namely the disposal intensity, the disposal-weighted intensity, and the resource charge, and on this basis the resources assembled space, the disposition integrated space, and the intelligent organizing space are introduced. The system of fundamental equations and the model of complete factor synergetics are preliminarily developed for the general situation, forming the analytical basis of complete factor synergetics. The essential variables constituting this system of equations comprise twenty variables relating to the essential dynamical effect, the external synergetic action, and the internal synergetic action of the system.

Keywords: complex system, disposal of resources, complete factor synergetics, fundamental equation.

63 Shape Optimization of Permanent Magnet Motors Using the Reduced Basis Technique

Authors: A. Jabbari, M. Shakeri, A. Nabavi

Abstract:

In this paper, a tooth shape optimization method for cogging torque reduction in Permanent Magnet (PM) motors is developed using the Reduced Basis Technique (RBT) coupled with Finite Element Analysis (FEA) and Design of Experiments (DOE). The primary objective of the method is to reduce the enormous number of design variables required to define the tooth shape. In RBT, the tooth shape is a weighted combination of several basis shapes, and the aim is to find the best combination using the weight of each basis shape as a design variable. A multi-level design process is developed to find suitable basis (trial) shapes at each level for use in the reduced basis technique. Each level is treated as a separate optimization problem until the required objective – minimum cogging torque – is achieved. The process starts with geometrically simple basis shapes defined by their shape coordinates. The Taguchi experimental design method is used to build the approximation model and to perform the optimization. The method is demonstrated on the tooth shape optimization of an 8-pole/12-slot PM motor.
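
The core of the reduced basis idea can be shown in a few lines: a candidate tooth profile is a weighted combination of a small set of basis shapes, so the optimizer searches over the weights rather than over every shape coordinate; the FEA evaluation of cogging torque for each candidate is outside this sketch. The basis profiles below are simple synthetic curves, not the motor geometry of the paper.

```python
import numpy as np

def combined_shape(weights, basis_shapes):
    """Reduced basis technique: candidate shape = weighted sum of basis shapes.

    weights      : array of design variables, one per basis shape
    basis_shapes : (n_basis, n_points) array of trial tooth profiles
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # keep the combination normalized
    return weights @ basis_shapes

# Three simple synthetic basis profiles sampled at 50 points (placeholders).
s = np.linspace(0.0, 1.0, 50)
basis = np.vstack([np.ones_like(s),          # straight tooth flank
                   1.0 - 0.2 * s,            # tapered flank
                   1.0 - 0.2 * s ** 2])      # curved flank
candidate = combined_shape([0.5, 0.3, 0.2], basis)
```

In a full loop, a DOE/Taguchi plan or optimizer would propose weight vectors, each candidate shape would be meshed and evaluated by FEA, and the weights giving the lowest cogging torque would be retained.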

Keywords: PM motor, cogging torque, tooth shape optimization, RBT, FEA, DOE.

62 Multiple Model and Neural based Adaptive Multi-loop PID Controller for a CSTR Process

Authors: R. Vinodha, S. Abraham Lincoln, J. Prakash

Abstract:

Multi-loop (decentralized) Proportional-Integral-Derivative (PID) controllers have been used extensively in the process industries, due to their simple structure, for the control of multivariable processes. The objective of this work is to design a multiple-model adaptive multi-loop PID strategy (Multiple Model Adaptive-PID) and a neural network based multi-loop PID strategy (Neural Net Adaptive-PID) for the control of a multivariable system. The first method combines the outputs of multiple linear PID controllers, each describing the process dynamics at a specific level of operation. The global output is an interpolation of the individual multi-loop PID controller outputs, weighted based on the current value of the measured process variable. In the second method, a neural network is used to calculate the PID controller parameters based on the scheduling variable that corresponds to major shifts in the process dynamics. The proposed control schemes are simple in structure with low computational complexity. Their effectiveness has been demonstrated on the CSTR process, which exhibits dynamic nonlinearity.
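
The blending step of the first scheme can be sketched as follows: several fixed PID controllers, each tuned for one operating level, run in parallel and their outputs are interpolated with weights that depend on the current value of the measured process variable. The PID gains, operating levels and triangular weighting below are placeholder assumptions, not the tuned CSTR controllers of the paper.

```python
import numpy as np

class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def output(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def interpolation_weights(y, operating_levels):
    """Triangular weights based on how close y is to each operating level."""
    d = np.abs(np.asarray(operating_levels) - y)
    w = np.maximum(1.0 - d / (d.max() + 1e-12), 1e-6)
    return w / w.sum()

# Placeholder local controllers tuned at three operating levels of the process output.
levels = [0.2, 0.5, 0.8]
controllers = [PID(2.0, 0.1, 0.0), PID(1.5, 0.08, 0.0), PID(1.0, 0.05, 0.0)]

y_measured, setpoint = 0.35, 0.5
error = setpoint - y_measured
w = interpolation_weights(y_measured, levels)
u = sum(wi * c.output(error) for wi, c in zip(w, controllers))   # global control move
```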

Keywords: Multiple-model Adaptive PID controller, Multivariable process, CSTR process.

61 Arrival and Departure Scheduling at Hub Airports Considering Airlines Level

Authors: A. Nourmohammadzadeh, R. Tavakkoli-Moghaddam

Abstract:

As air traffic increases at a hub airport, some flights cannot land or depart at their preferred target time, because the airport runways become occupied to near their capacity. This results in extra costs for both passengers and airlines due to missed connecting flights, longer waiting, additional fuel consumption, crew rescheduling, etc. Hence, devising an appropriate scheduling method that determines a suitable runway and time for each flight, in order to use the hub capacity efficiently and minimize the related costs, is of great importance. In this paper, we present a mixed-integer zero-one model for scheduling a set of mixed landing and departing flights (whereas most previous studies considered only landings). Since flight cost is strongly affected by the airline's level, we consider different airline categories in our model. The model has a single objective minimizing the sum of three terms, namely 1) the weighted deviation from targets, 2) the scheduled time of the last flight (i.e., the makespan), and 3) the imbalance of the workload on the runways. We solve 10 simulated instances of different sizes, up to 30 flights and 4 runways. Optimal solutions are obtained in a reasonable time and compare favorably with the traditional First-Come-First-Serve (FCFS) rule, which is far from optimal in most cases.

Keywords: Arrival and departure scheduling, Airline level, Mixed-integer model

60 Improved Feature Processing for Iris Biometric Authentication System

Authors: Somnath Dey, Debasis Samanta

Abstract:

Iris-based biometric authentication is gaining importance in recent times. Iris biometric processing, however, is a complex and computationally expensive process. Within the overall processing pipeline of an iris-based biometric authentication system, feature processing is an important task: we extract iris features, which are ultimately used in matching. Since the number of iris features is large and the computational time increases with the number of features, it is a challenge to develop an iris processing system with as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply the Daubechies D4 wavelet with 4 decomposition levels to extract features from iris images. These features are encoded with 2 bits by quantizing them into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases the accuracy, and we match the templates using a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
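
The weighted matching step can be sketched as follows: the binary templates are compared region by region, a similarity is computed per region, and the per-region values are combined with weights reflecting how reliable each iris region is. The region split and weights below are placeholders; only the 304-bit template length is taken from the paper.

```python
import numpy as np

def weighted_similarity(template_a, template_b, region_slices, region_weights):
    """Weighted similarity between two binary iris templates.

    template_a, template_b : 1-D 0/1 arrays of equal length
    region_slices          : list of slices, one per iris region
    region_weights         : weight of each region (more reliable regions weigh more)
    """
    sims = []
    for sl in region_slices:
        a, b = template_a[sl], template_b[sl]
        sims.append(1.0 - np.mean(a != b))      # 1 - normalized Hamming distance
    w = np.asarray(region_weights, dtype=float)
    return float(np.dot(w, sims) / w.sum())

# Two placeholder 304-bit templates split into four regions of 76 bits each.
rng = np.random.default_rng(1)
t1 = rng.integers(0, 2, 304)
t2 = t1.copy()
t2[::7] ^= 1                                    # flip some bits to mimic noise
regions = [slice(i * 76, (i + 1) * 76) for i in range(4)]
weights = [0.4, 0.3, 0.2, 0.1]                  # placeholder region weights
print(weighted_similarity(t1, t2, regions, weights))
```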

Keywords: Iris recognition, biometric, feature processing, pattern recognition, pattern matching.

59 Using Mean-Shift Tracking Algorithms for Real-Time Tracking of Moving Images on an Autonomous Vehicle Testbed Platform

Authors: Benjamin Gorry, Zezhi Chen, Kevin Hammond, Andy Wallace, Greg Michaelson

Abstract:

This paper describes new computer vision algorithms that have been developed to track moving objects as part of a long-term study into the design of (semi-)autonomous vehicles. We present the results of a study exploiting variable kernels for tracking in video sequences. The basis of our work is the mean shift object-tracking algorithm: for a moving target, it is usual to define a rectangular target window in an initial frame, and then process the data within that window to separate the tracked object from the background using the mean shift segmentation algorithm. Rather than use the standard Epanechnikov kernel, we use a kernel weighted by the Chamfer distance transform to improve the accuracy of target representation and localization, minimising the distance between the two distributions in RGB color space using the Bhattacharyya coefficient. Experimental results show the improved tracking capability and versatility of the algorithm in comparison with results using the standard kernel. These algorithms are incorporated in a robot test-bed architecture which has been used to demonstrate their effectiveness.
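
A short sketch of the similarity measure at the heart of the tracker: target and candidate windows are summarized as kernel-weighted RGB histograms and compared with the Bhattacharyya coefficient. The Chamfer-distance-based weighting of the paper is approximated here by a generic radial weight map, and the image data are random placeholders.

```python
import numpy as np

def weighted_color_histogram(patch, weights, bins=8):
    """Kernel-weighted RGB histogram of an image patch.

    patch   : (H, W, 3) uint8 array
    weights : (H, W) per-pixel kernel weights (e.g. from a distance transform)
    """
    # Quantize each channel into `bins` levels and build a joint color index.
    q = (patch.astype(np.int64) * bins) // 256
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), weights=weights.ravel(), minlength=bins ** 3)
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms (1 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

# Placeholder target and candidate windows with a simple radial weight map.
rng = np.random.default_rng(2)
target = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
candidate = np.clip(target.astype(int) + rng.integers(-10, 10, target.shape),
                    0, 255).astype(np.uint8)
yy, xx = np.mgrid[-16:16, -16:16]
kernel = np.maximum(1.0 - (xx ** 2 + yy ** 2) / 16 ** 2, 0.0)   # stand-in kernel weights
print(bhattacharyya(weighted_color_histogram(target, kernel),
                    weighted_color_histogram(candidate, kernel)))
```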

Keywords: Hume, functional programming, autonomous vehicle, pioneer robot, vision.

58 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption

Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif

Abstract:

Maintaining the factory-default battery endurance over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While trying to meet customers' ever-growing expectations, developers are barely aware of how efficiently the application itself uses energy. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present several software product metrics that can be used as indicators of the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect the power consumption data of mobile applications, which were then analyzed against 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code and method lines have a direct relationship with the power consumption of a mobile application.

Keywords: Battery endurance, software metrics, mobile application, power consumption.

57 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration

Authors: C. Iraklis, G. Evmiridis, A. Iraklis

Abstract:

Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both expressed as a weighted sum. Two factors describing congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgrade of the SPSO algorithm is achieved by adding a heuristic algorithm specializing in the reduction of power losses, and several scenarios are tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.

Keywords: Congestion, distribution networks, loss reduction, particle swarm optimization, smart grid.

56 Expelling Policy Based Buffer Control during Congestion in Differentiated Service Routers

Authors: Kumar Padmanabh, Rajarshi Roy

Abstract:

In this paper, a special kind of buffer management policy is studied in which packets are preempted even when sufficient space is available in the buffer for incoming packets. This is done to mitigate congestion for future incoming packets and to improve QoS for certain types of packets. Such studies have been done in the past for ATM-type scenarios; we extend them to heterogeneous traffic, where data rates and packet sizes vary widely. A typical example of this scenario is buffer management in a Differentiated Services router. Two aspects are of interest. The first is packet size: whether all packets have the same size or different sizes. The second is the value, or space priority, of the packets: whether all packets have the same space priority or different packets have different space priorities. We present two types of policies to achieve QoS goals for packets with different priorities: the push-out scheme and the expelling scheme. In this work, the scenario of variable-length packets with two space priorities is considered, and the main goal is to minimize the total weighted packet loss. Simulation and analytical studies show that expelling policies can outperform push-out policies when it comes to offering differentiated QoS for packets of two different priorities, and that expelling policies also help improve the amount of admissible load. Further comparisons of push-out and expelling policies are presented using simulations.

Keywords: Buffer Management Policy, DiffServ, ATM, Push-out Policy, Expelling Policy.

55 Application of Machine Learning Methods to Online Test Error Detection in Semiconductor Test

Authors: Matthias Kirmse, Uwe Petersohn, Elief Paffrath

Abstract:

As test costs in today's semiconductor industry can make up to 50 percent of total production costs, efficient test error detection is becoming increasingly important. In this paper, we present a new machine learning approach to test error detection that should provide faster recognition of test system faults as well as improved test error recall. The key idea is to learn a classifier ensemble that detects typical test error patterns in wafer test results immediately after these tests finish. Since test error detection has not yet been discussed in the machine learning community, we define the central problem-relevant terms and provide an analysis of important domain properties. Finally, we present comparative studies of the failure detection performance of three individual classifiers and three ensemble methods based upon them. As base classifiers we chose a decision tree learner, a support vector machine and a Bayesian network, while the compared ensemble methods were simple and weighted majority vote as well as stacking. For the evaluation, we used cross validation and a specially designed practical simulation. By implementing our approach in a semiconductor test department for the observation of two products, we demonstrated its practical applicability.
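
Of the ensemble methods compared, weighted majority vote is easy to show in isolation: each base classifier's vote is scaled by a weight (typically its validation accuracy), and the class with the largest total weight wins. The classifiers, labels and weights below are placeholders rather than the decision tree / SVM / Bayesian network ensemble of the paper.

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Combine base classifier predictions by weighted majority vote.

    predictions : dict mapping classifier name -> predicted class label
    weights     : dict mapping classifier name -> vote weight (e.g. validation accuracy)
    """
    totals = defaultdict(float)
    for name, label in predictions.items():
        totals[label] += weights[name]
    return max(totals, key=totals.get)

# Placeholder ensemble decision for one wafer-test record.
preds = {"decision_tree": "test_error", "svm": "ok", "bayes_net": "test_error"}
w = {"decision_tree": 0.82, "svm": 0.90, "bayes_net": 0.78}
print(weighted_majority_vote(preds, w))   # "test_error" wins with weight 1.60 vs 0.90
```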

Keywords: Ensemble methods, fault detection, machine learning, semiconductor test.

54 Diagnosing the Cause and its Timing of Changes in Multivariate Process Mean Vector from Quality Control Charts using Artificial Neural Network

Authors: Farzaneh Ahmadzadeh

Abstract:

Quality control charts are very effective in detecting out-of-control signals, but when a control chart signals an out-of-control condition of the process mean, searching for a special cause in the vicinity of the signal time does not always lead to prompt identification of the source(s) of the condition, because the change point in the process parameter(s) is usually different from the signal time. It is very important for the manufacturer to determine at what point, and in which parameters, the change that caused the signal occurred. Early warning of a process change expedites the search for special causes and enhances quality at lower cost. In this paper, the quality variables under investigation are assumed to follow a multivariate normal distribution with known mean vector and variance-covariance matrix; the process means after a one-step change remain at the new level until the special cause is identified and removed; and it is assumed that only one variable can change at a time. This research applies an artificial neural network (ANN) to identify the time the change occurred and the parameter that caused the change or shift. The performance of the approach was assessed through a computer simulation experiment. The results show that the neural network performs effectively and equally well over the whole range of shift magnitudes considered.

Keywords: Artificial neural network, change point estimation, Monte Carlo simulation, multivariate exponentially weighted moving average.

53 Assessing the Adaptive Re-Use Potential of Buildings as Part of the Disaster Management Process

Authors: A. Esra İdemen, Sinan M. Şener, Emrah Acar

Abstract:

The technological paradigm of the disaster management field, especially in the case of governmental intervention strategies, is generally based on rapid and flexible accommodation solutions. From various technical solution patterns used to address the immediate housing needs of disaster victims, the adaptive re-use of existing buildings can be considered to be both low-cost and practical. However, there is a scarcity of analytical methods to screen, select and adapt buildings to help decision makers in cases of emergency. Following an extensive literature review, this paper aims to highlight key points and problem areas associated with the adaptive re-use of buildings within the disaster management context. In other disciplines such as real estate management, the adaptive re-use potential (ARP) of existing buildings is typically based on the prioritization of a set of technical and non-technical criteria which are then weighted to arrive at an economically viable investment decision. After a disaster, however, the assessment of the ARP of buildings requires consideration of different/additional layers of analysis which stem from general disaster management principles and the peculiarities of different types of disasters, as well as of their victims. In this paper, a discussion of the development of an adaptive re-use potential (ARP) assessment model is presented. It is thought that governmental and non-governmental decision makers who are required to take quick decisions to accommodate displaced masses following disasters are likely to benefit from the implementation of such a model.

Keywords: Adaptive re-use of buildings, assessment model, disaster management, temporary housing.

52 Mixed Model Assembly Line Sequencing in Make-to-Order System with Available-to-Promise Consideration

Authors: N. Manavizadeh, A. Dehghani, M. Rabbani

Abstract:

Mixed model assembly lines (MMAL) are a type of production line on which a variety of product models with similar product characteristics are assembled. The effective design of these lines requires that the schedule for assembling the different products be determined. In this paper, we fit the sequencing problem to the main characteristics of a make-to-order (MTO) environment. The problem solved is a multiple objective sequencing problem for mixed model assembly lines, addressed using the weighted sum method (WSM) implemented in GAMS for small problems and an effective GA for large-scale problems, because of the NP-hardness of the problem and the vast time needed to find the optimum solution for large instances. Three practically important objectives are minimized: total utility work, variation from a constant production rate, and earliness and tardiness cost, the latter taking into account the priority of each customer and their different due dates, which reflects a real situation in mixed model assembly lines. To the best of our knowledge, this is the first time different attributes are used to prioritize customers, which helps the company reduce the cost of earliness and tardiness. This mechanism is a way to apply advanced available-to-promise (ATP) in mixed model assembly line sequencing, which is the main contribution of this paper.

Keywords: Available to promise, Earliness & Tardiness, GA, Mixed-Model assembly line Sequencing.

51 A Weighted-Profiling Using an Ontology Base for Semantic-Based Search

Authors: Hikmat A. M. Abd-El-Jaber, Tengku M. T. Sembok

Abstract:

The information on the Web is increasing tremendously. A number of search engines have been developed for searching Web information and retrieving documents that satisfy inquirers' needs. Search engines often present inquirers with irrelevant documents among the search results, since the search is text-based rather than semantic-based. Information retrieval research has produced a number of approaches and methodologies, such as profiling, feedback, query modification and human-computer interaction, for improving search results. Moreover, information retrieval has employed artificial intelligence techniques and strategies, such as machine learning heuristics, tuning mechanisms, user and system vocabularies and logical theory, for capturing users' preferences and using them to guide the search based on semantic rather than syntactic analysis. Although valuable improvements in search results have been recorded, surveys show that search engine users are still not really satisfied with their search results. Using ontologies for semantic-based searching is likely the key solution. Adopting a profiling approach and using ontology base characteristics, this work proposes a strategy for finding the exact meaning of query terms in order to retrieve relevant information according to user needs. The evaluation of the conducted experiments shows the effectiveness of the suggested methodology, and conclusions are presented.

Keywords: information retrieval, user profiles, semantic Web, ontology, search engine.

50 Optimization of Kinematics for Birds and UAVs Using Evolutionary Algorithms

Authors: Mohamed Hamdaoui, Jean-Baptiste Mouret, Stephane Doncieux, Pierre Sagaut

Abstract:

The aim of this work is to present a multi-objective optimization method for finding maximum-efficiency kinematics for a flapping wing unmanned aerial vehicle. We restricted our study to rectangular wings with the same profile along the span and to harmonic dihedral motion. It is assumed that the bird-like aerial vehicle (whose span and surface area were fixed to 1 m and 0.15 m2, respectively) is in horizontal, mechanically balanced motion at fixed speed. We used two flight physics models to describe the vehicle's aerodynamic performance, namely DeLaurier's model, which has been used in many studies dealing with flapping wings, and the model proposed by Dae-Kwan et al. Then, a constrained multi-objective optimization of the propulsive efficiency is performed using a recent evolutionary multi-objective algorithm called ε-MOEA. Firstly, we show that feasible solutions (i.e. solutions that fulfil the imposed constraints) can be obtained using Dae-Kwan et al.'s model. Secondly, we highlight that a single-objective optimization approach (the weighted sum method, for example) can also give optimal solutions as good as the multi-objective one, which nevertheless offers the advantage of directly generating the set of best trade-offs. Finally, we show that DeLaurier's model does not yield feasible solutions.

Keywords: Flight physics, evolutionary algorithm, optimization, Pareto surface.

49 Applying the Extreme-Based Teaching Model in Post-Secondary Online Classroom Setting: A Field Experiment

Authors: Leon Pan

Abstract:

The first programming course within post-secondary education has long been recognized as a challenging endeavor for both educators and students alike. Historically, these courses have exhibited high failure rates and a notable number of dropouts. Instructors often lament students' lack of effort on their coursework, and students often express frustration that the teaching methods employed are not effective. Drawing inspiration from the successful principles of Extreme Programming, this study introduces an approach—the Extremes-based teaching model—aimed at enhancing the teaching of introductory programming courses. To empirically determine the effectiveness of the model, a comparison was made between a section taught using the extreme-based model and another utilizing traditional teaching methods. Notably, the extreme-based teaching class required students to work collaboratively on projects, while also demanding continuous assessment and performance enhancement within groups. This paper details the application of the extreme-based model within the post-secondary online classroom context and presents the compelling results that emphasize its effectiveness in advancing the teaching and learning experiences. The extreme-based model led to a significant increase of 13.46 points in the weighted total average and a commendable 10% reduction in the failure rate.

Keywords: Extreme-based teaching model, innovative pedagogical methods, project-based learning, team-based learning.

48 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet

Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia

Abstract:

Industry data centers often need to sync data changes reliably and instantly from a large number of heterogeneous autonomous relational databases accessed via the not-so-reliable Internet, for which a practical generic sync middleware with low maintenance and operation costs is much needed. To meet this demand, this paper presents a generic sync middleware system (GSMS), which has been developed, applied and optimized since 2006. Its guiding principles and advantages are that it is SyncML-compliant, transparent to the data application layer logic without referring to implementation details of the synced databases, independent of the host operating systems deployed, and lightweight in construction and hence low cost. Among these hard commitments of developing GSMS, we stress the significant optimization breakthrough that brings the GSMS sync delay well below a fraction of a millisecond per synced record. A series of stress tests of GSMS sync performance was conducted as a persuasive example, in which the source relational database underwent a broad range of write loads (from one thousand to one million intensive writes within a few minutes). All these tests showed that the performance of GSMS is competent and smooth even under extreme write loads.

Keywords: Heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization.

47 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method

Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari

Abstract:

The present study is concerned with the optimal design of functionally graded (FG) plates using the particle swarm optimization (PSO) algorithm. The meshless local Petrov-Galerkin (MLPG) method is employed to obtain the FG plate's natural frequencies. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and the total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual ones. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide the designers of FG plates with useful data.

Keywords: Optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization.

46 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is highly efficient to implement and highly accurate in computation. The present method is used to analyze the static deflection of beams and of a plate with a hole.
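
The MLS approximation underlying the EFG shape functions can be illustrated in one dimension: at an evaluation point, nearby nodes are weighted by a compactly supported weight function and a local polynomial is fitted by weighted least squares. A cubic spline weight is used below in place of whichever spline the paper employs, and the nodal data are synthetic.

```python
import numpy as np

def cubic_spline_weight(r):
    """Compactly supported cubic spline weight, r = |x - x_i| / support radius."""
    r = abs(r)
    if r <= 0.5:
        return 2.0 / 3.0 - 4.0 * r ** 2 + 4.0 * r ** 3
    if r <= 1.0:
        return 4.0 / 3.0 - 4.0 * r + 4.0 * r ** 2 - 4.0 * r ** 3 / 3.0
    return 0.0

def mls_value(x, nodes, values, support=0.5, degree=1):
    """Moving least squares approximation at x from scattered nodal values."""
    p = lambda xi: np.array([xi ** k for k in range(degree + 1)])  # monomial basis
    A = np.zeros((degree + 1, degree + 1))
    b = np.zeros(degree + 1)
    for xi, ui in zip(nodes, values):
        w = cubic_spline_weight((x - xi) / support)
        if w > 0.0:
            A += w * np.outer(p(xi), p(xi))     # weighted moment matrix
            b += w * p(xi) * ui
    return p(x) @ np.linalg.solve(A, b)         # local weighted least-squares fit

# Synthetic scattered nodes approximating sin(x) on [0, pi].
nodes = np.linspace(0.0, np.pi, 11)
values = np.sin(nodes)
print(mls_value(1.0, nodes, values), np.sin(1.0))
```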

Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.

45 Multidimensional Compromise Optimization for Development Ranking of the Gulf Cooperation Council Countries and Turkey

Authors: C. Ardil

Abstract:

In this research, a multidimensional compromise optimization method is proposed for multidimensional decision making analysis in the development ranking of the Gulf Cooperation Council countries and Turkey. The proposed approach addresses the fact that different multicriteria decision analyses can yield different ranking orders for the same ranking problem, consisting of a set of alternatives evaluated against numerous competing criteria, even when applied to the same numerical data. The multiobjective optimization decision making problem is considered in three sequential steps. In the first step, five criteria related to development ranking are gathered from the research field. In the second step, the identified evaluation criteria are objectively weighted using the standard deviation procedure. In the third step, a country selection problem is illustrated with a numerical example as an application of the proposed multidimensional compromise optimization model. Finally, the approach is applied to rank the Gulf Cooperation Council countries and Turkey.

Keywords: Standard deviation, performance evaluation, multicriteria decision making, multidimensional compromise optimization, vector normalization, multicriteria analysis, multidimensional decision analysis.

44 Modeling and Analysis for Effective Capacity of a Cross-Layer Optimized Wireless Networks

Authors: Reham A. El-mayet, Hesham M. El-Badawy, Salwa H. Elramly

Abstract:

New generation mobile communication networks have the ability to support triple play. To that end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to extend the system's capability for high data rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, Maximum Weighted Capacity (MWC) based resource allocation at the Physical (PHY) layer is used. This resource allocation scheme provides much better QoS than previous resource allocation schemes, while maintaining the highest or nearly highest capacity at similar complexity. In addition, Delay Satisfaction (DS) scheduling at the Medium Access Control (MAC) layer, which allows more than one connection to be served in each slot, is used. This scheduling technique is more efficient than conventional scheduling, and both the number of users and the number of subcarriers are investigated against system capacity. The system is optimized for different operational environments: outdoor as well as indoor deployment scenarios are investigated, and also different channel models. In addition, the effective capacity approach [1] is used not only to provide QoS for different mobile users, but also to increase the total throughput of the wireless network.
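
The MWC-based allocation at the PHY layer can be sketched greedily: each OFDM subcarrier is assigned to the user with the largest weighted achievable rate on it, the weights expressing the users' QoS priorities. The channel gains, weights and the use of a plain Shannon rate are illustrative assumptions, not the exact algorithm of the paper.

```python
import numpy as np

def max_weighted_capacity_allocation(snr, user_weights):
    """Assign each OFDM subcarrier to the user maximizing weight * achievable rate.

    snr          : (n_users, n_subcarriers) per-user, per-subcarrier SNR (linear scale)
    user_weights : (n_users,) QoS priority weights
    """
    rates = np.log2(1.0 + snr)                          # Shannon rate per subcarrier
    weighted = user_weights[:, None] * rates            # weighted capacity metric
    assignment = np.argmax(weighted, axis=0)            # winning user per subcarrier
    total = weighted[assignment, np.arange(snr.shape[1])].sum()
    return assignment, total

# Placeholder downlink: 3 users, 8 subcarriers, Rayleigh-like channel gains.
rng = np.random.default_rng(3)
snr = rng.exponential(scale=10.0, size=(3, 8))
weights = np.array([1.0, 1.5, 0.8])                     # QoS priority weights
assignment, weighted_capacity = max_weighted_capacity_allocation(snr, weights)
print(assignment, weighted_capacity)
```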

Keywords: Cross-layer, effective capacity, LTE, OFDM, QoS, resource allocation, wireless networks.
