Search results for: Minimum eigenvalue
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 965

245 Optimizing Materials Cost and Mechanical Properties of PVC Electrical Cable's Insulation by Using Mixture Experimental Design Approach

Authors: Safwan Altarazi, Raghad Hemeimat, Mousa Wakileh, Ra'ad Qsous, Aya Khreisat

Abstract:

With the development of Polyvinyl chloride (PVC) products in many applications, the challenges of investigating the raw material composition and reducing the cost have both become increasingly important. Considerable research has been done investigating the effect of additives on PVC products. Most PVC composites research investigates the effect of only a single factor, or a few factors, at a time. This isolated consideration of the input factors does not take into account the interaction effects between the different factors. This paper implements a mixture experimental design approach to find a cost-effective PVC composition for the production of electrical-insulation cables considering ASTM Designation D 6096. The results analysis showed that a minimum cost can be achieved through using 20% virgin PVC, 18.75% recycled PVC, 43.75% CaCO3 with a particle size of 10 microns, 14% DOP plasticizer, and 3.5% CPW plasticizer. For maximum UTS the compound should consist of: 17.5% DOP, 62.5% virgin PVC, and 20.0% CaCO3 of particle size 5 microns. Finally, for the highest ductility the compound should be made of 35% virgin PVC, 20% CaCO3 of particle size 5 microns, and 45.0% DOP plasticizer.
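
As an illustration of how the cost-minimization step of such a mixture design can be posed, the sketch below solves a small linear program over component fractions that must sum to 100%. It assumes the scipy package; the per-component costs and bounds are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Components: virgin PVC, recycled PVC, CaCO3 filler, DOP, CPW (order illustrative).
cost = np.array([1.20, 0.45, 0.10, 1.60, 0.90])   # assumed cost per kg of each component

# Minimize mixture cost subject to: fractions sum to 1, each within assumed bounds.
A_eq = np.ones((1, 5))
b_eq = [1.0]
bounds = [(0.15, 0.70),   # virgin PVC
          (0.00, 0.30),   # recycled PVC
          (0.10, 0.50),   # CaCO3
          (0.05, 0.50),   # DOP plasticizer
          (0.00, 0.10)]   # CPW plasticizer

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(dict(zip(["vPVC", "rPVC", "CaCO3", "DOP", "CPW"], res.x.round(4))))
```

In the actual study, constraints derived from the fitted property models (for example, a minimum UTS under ASTM D 6096) would be added alongside the mixture constraint.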

Keywords: ASTM 6096, mixture experimental-design approach, PVC electrical cable insulation, recycled PVC.

244 Effective Defect Prevention Approach in Software Process for Achieving Better Quality Levels

Authors: Suma. V., T. R. Gopalakrishnan Nair

Abstract:

Defect prevention is the most vital but habitually neglected facet of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, overheads and resources required to engineer a high-quality product. The key challenge of an IT industry is to engineer a software product with minimum post-deployment defects. This effort is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention, leading to successful software generation. The defect prevention technique unearths 99% of defects. Inspection is found to be an essential technique for generating ideal software in software factories through enhanced methodologies of aided and unaided inspection schedules. On average, 13% to 15% of inspection effort and 25% to 30% of testing effort, out of the whole project effort time, is required for 99% to 99.75% defect elimination. A comparison of the end results for the five selected projects between the companies is also presented, throwing light on the possibility of a particular company positioning itself with an appropriate complementary ratio of inspection to testing.
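
To make the reported effort figures concrete, the short calculation below applies the quoted percentage ranges to an assumed project budget; the 1,000 person-hour total is purely illustrative.

```python
total_effort = 1000  # assumed total project effort in person-hours (illustrative)

inspection = (0.13 * total_effort, 0.15 * total_effort)   # 13%-15% for inspections
testing = (0.25 * total_effort, 0.30 * total_effort)      # 25%-30% for testing

print(f"Inspection effort: {inspection[0]:.0f}-{inspection[1]:.0f} person-hours")
print(f"Testing effort:    {testing[0]:.0f}-{testing[1]:.0f} person-hours")
print("Expected defect elimination: 99%-99.75%")
```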

Keywords: Defect Detection and Prevention, Inspections, Software Engineering, Software Process, Testing.

243 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach

Authors: Ejaz Khan, Conor Heneghan

Abstract:

In this paper we propose a new criterion for solving the problem of channel shortening in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1] a new criterion was introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint (a unit norm constraint on the target impulse response (TIR)) in their method, the freedom to choose the optimum vector (TIR) is reduced. Better results can be obtained by avoiding the unit norm constraint on the TIR. In this paper we change the cost function proposed in [1] to one of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.
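
A minimal sketch, assuming the cvxpy package, of the type of convex program described above: maximize a determinant (through the standard log-det surrogate) subject to an LMI and a quadratic constraint. The matrices are random placeholders, not actual channel statistics or the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

n = 8
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)          # placeholder PSD "upper bound" matrix
c = rng.standard_normal(n)           # placeholder vector for the quadratic constraint

X = cp.Variable((n, n), symmetric=True)
objective = cp.Maximize(cp.log_det(X))        # maximize the determinant via log-det
constraints = [
    X >> 0,                                    # LMI: X positive semidefinite
    A - X >> 0,                                # LMI: X bounded above by A
    cp.quad_form(c, X) <= 1.0,                 # quadratic constraint (affine in X)
]
prob = cp.Problem(objective, constraints)
prob.solve()
print("optimal log-det:", round(prob.value, 3))
```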

Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.

242 Passive Flow Control in Twin Air-Intakes

Authors: Akshoy R. Paul, Pritanshu Ranjan, Ravi R. Upadhyay, Anuj Jain

Abstract:

Aircraft propulsion systems often use Y-shaped subsonic diffusing ducts as twin air-intakes to supply ambient air to the engine compressor for thrust generation. Due to space constraints, the diffusers need to be curved, which causes severe flow non-uniformity at the engine face. The present study attempts to control the flow in a mildly curved Y-duct diffuser using trapezoidal-shaped vortex generators (VGs) attached either on both sidewalls or on the top and bottom walls of the diffuser at the inflexion plane. A commercial computational fluid dynamics (CFD) code is modified and used to simulate the effects of the VGs on the flow in the Y-duct diffuser. A few experiments are conducted for CFD code validation, while the rest of the cases are studied computationally. The best configuration is found with VG-2 arranged in a co-rotating sequence and attached to both sidewalls, which ensures the highest static pressure recovery, lowest total pressure loss, minimum flow distortion and least flow separation in the Y-duct diffuser. Decreasing the VG height when attached to the top and bottom walls further improves axial flow uniformity at the diffuser outlet by a large margin compared to the bare duct.

Keywords: Twin air-intake, Vortex generator (VG), Turbulence model, Pressure recovery, Distortion coefficient

241 Economic Evaluation of Bowland Shale Gas Wells Development in the UK

Authors: Elijah Acquah-Andoh

Abstract:

The UK has had its fair share of the shale gas revolutionary wave blowing across the global oil and gas industry at present. Although its exploitation is widely agreed to have been delayed, shale gas was looked upon favorably by the UK Parliament when it recognized it as a genuine energy source and granted licenses to industry to search for and extract the resource. Although this is significant progress by industry, there remains another test the UK fracking resource must pass in order to render shale gas extraction feasible: it must be economically extractible, and sustainably so. Developing unconventional resources is much more expensive and risky, and for shale gas wells, producing in commercial volumes is conditional upon drilling horizontal wells and hydraulic fracturing, techniques which increase CAPEX. Meanwhile, investment in shale gas development projects is sensitive to gas price and to technical and geological risks. Using a Two-Factor Model, the economics of the Bowland shale wells were analyzed and the operational conditions under which fracking is profitable in the UK were characterized. We find that there is a great degree of flexibility in OPEX spending; hence OPEX does not pose much threat to the fracking industry in the UK. However, we find that Bowland shale gas wells fail to add value at a gas price of $8/MMBtu. A minimum gas price of $12/MMBtu, an OPEX of no more than $2/Mcf and a CAPEX of no more than $14.95M are required to create value within the present petroleum tax regime in the UK fracking industry.
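
As a rough illustration of the kind of single-well screening economics involved (not the study's Two-Factor Model), the sketch below computes an NPV from an assumed hyperbolic decline profile, gas price, OPEX and CAPEX; every number here is a placeholder.

```python
import numpy as np

def well_npv(price_mmbtu, capex_musd, opex_mcf, ip_mcf_d=2500.0,
             decline=0.6, b=1.0, years=20, discount=0.10):
    """Discounted cash flows for one well under an assumed hyperbolic decline."""
    t = np.arange(years)
    rate = ip_mcf_d / (1 + b * decline * t) ** (1 / b)   # Mcf/day
    volume = rate * 365.0                                 # Mcf/year
    revenue = volume * price_mmbtu * 1.025 / 1e6          # ~1.025 MMBtu per Mcf, in M$
    opex = volume * opex_mcf / 1e6                        # in M$
    cash = (revenue - opex) / (1 + discount) ** (t + 1)
    return cash.sum() - capex_musd

for price in (8, 10, 12):
    print(f"gas price ${price}/MMBtu -> NPV {well_npv(price, 14.95, 2.0):.1f} M$")
```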

Keywords: Capex, economical, investment, profitability, shale gas development, sustainable.

240 Non-Singular Gravitational Collapse of a Homogeneous Scalar Field in Deformed Phase Space

Authors: Amir Hadi Ziaie

Abstract:

In the present work, we revisit the collapse process of a spherically symmetric homogeneous scalar field (in an FRW background) minimally coupled to gravity, when phase-space deformations are taken into account. Such a deformation is mathematically introduced as a particular type of noncommutativity between the canonical momenta of the scale factor and of the scalar field. In the absence of such a deformation, the collapse culminates in a spacetime singularity. However, when the phase-space is deformed, we find that the singularity is removed by a non-singular bounce, beyond which the collapsing cloud re-expands to infinity. More precisely, for negative values of the deformation parameter, we identify the appearance of a negative pressure, which decelerates the collapse and finally avoids the singularity formation. While in the un-deformed case the horizon curve monotonically decreases to finally cover the singularity, in the deformed case the horizon has a minimum value that depends on the deformation parameter and on the initial configuration of the collapse. Such a setting predicts a threshold mass for black hole formation in stellar collapse and manifests the role of non-commutative geometry in physics, especially in stellar collapse and supernova explosions.

Keywords: Gravitational collapse, non-commutative geometry, spacetime singularity, black hole physics.

239 A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm

Authors: Daliyah S. Aljutaili, Redna A. Almutlaq, Suha A. Alharbi, Dina M. Ibrahim

Abstract:

All currencies around the world look very different from each other. For instance, the size, color, and pattern of the paper are different. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications such as vending machines. One of the phases of a currency recognition architecture is feature detection and description. There are many algorithms used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm, which merges the advantages of the current SIFT and SURF algorithms, and which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. Our proposed SR-SIFT algorithm overcomes the problems of both the SIFT and SURF algorithms. The proposed algorithm aims to speed up the SIFT feature detection algorithm while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimum numbers of best key points, and increases the distribution of the number of best key points on the surface of the currency. Furthermore, the proposed algorithm increases the accuracy of the true best point distribution inside the currency edge compared to the other two algorithms.

Keywords: Currency recognition, feature detection and description, SIFT algorithm, SURF algorithm, speeded up and robust features.

238 Dynamic Routing to Multiple Destinations in IP Networks using Hybrid Genetic Algorithm (DRHGA)

Authors: K. Vijayalakshmi, S. Radhakrishnan

Abstract:

In this paper we propose a novel dynamic least-cost multicast routing protocol using a hybrid genetic algorithm for IP networks. Our protocol finds the multicast tree with minimum cost subject to delay, degree, and bandwidth constraints. The proposed protocol has the following features: (i) a heuristic local search function has been devised and embedded within the normal genetic operations to increase the speed and to obtain the optimized tree; (ii) it efficiently handles the dynamic situations that arise due to either a change in the multicast group membership or a node/link failure; (iii) two different crossover and mutation probabilities have been used for maintaining the diversity of solutions and for quick convergence. The simulation results show that our proposed protocol generates dynamic multicast trees with lower cost. Results also show that the proposed algorithm has a better convergence rate, a better dynamic request success rate and less execution time than other existing algorithms. The effects of the degree and delay constraints on the multicast tree have also been analyzed in terms of search success rate.

Keywords: Dynamic Group membership change, Hybrid Genetic Algorithm, Link / node failure, QoS Parameters.

237 Enhanced Multi-Intensity Analysis in Multi-Scenery Classification-Based Macro and Micro Elements

Authors: R. Bremananth

Abstract:

Several computationally challenging issues are encountered while classifying complex natural scenes. In this paper, we address the problems encountered in rotation invariance with multi-intensity analysis for multi-scene overlapping. In the present literature, various algorithms propose techniques for multi-intensity analysis, but these algorithms have several restrictions when deployed for multi-scene overlapping classification. In order to resolve the problem of multi-scenery overlapping classification, we present a framework based on macro and micro basis functions. The algorithm achieves a minimum classification false alarm rate while pigeonholing multi-scene overlaps. Furthermore, a quadrangle multi-intensity decay is invoked. Several parameters are utilized to analyze invariance for multi-scenery classification, such as rotation, classification, correlation, contrast, homogeneity, and energy. Benchmark datasets of complex natural scenes were collected and used to evaluate the framework. The results show that the framework achieves a significant improvement on gray-level co-occurrence matrix features for overlaps at diverse degrees of orientation while pigeonholing multi-scene overlaps.
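
The texture measures listed among the parameters (contrast, homogeneity, energy, correlation) are standard gray-level co-occurrence matrix (GLCM) properties; the sketch below shows how they can be computed at several orientations to probe rotation effects. It assumes a recent scikit-image release and uses a random placeholder patch rather than the benchmark scenes.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Placeholder 8-bit gray-level patch standing in for a natural-scene tile.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrices at four orientations to inspect rotation behaviour.
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
glcm = graycomatrix(patch, distances=[1], angles=angles, levels=256,
                    symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).ravel())
```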

Keywords: Automatic classification, contrast, homogeneity, invariant analysis, multi-scene analysis, overlapping.

236 Bipolar Square Wave Pulses for Liquid Food Sterilization using Cascaded H-Bridge Multilevel Inverter

Authors: Hanifah Jambari, Naziha A. Azli, M. Afendi M. Piah

Abstract:

This paper presents the generation of bipolar square wave pulses with characteristics that are suitable for liquid food sterilization using a Cascaded H-bridge Multilevel Inverter (CHMI). Bipolar square wave pulses have been reported to be stable for a longer time during the sterilization process, with minimum heat emission and increased efficiency. The CHMI allows the system to produce bipolar square wave pulses yielding a high output voltage without using a transformer, while fulfilling the pulse requirements for effective liquid food sterilization. This in turn can reduce the power consumption and cost of the overall liquid food sterilization system. The simulation results have shown that pulses with a peak output voltage of 2.4 kV and pulse widths of between 1 µs and 1 ms at frequencies of 50 Hz and 100 Hz can be generated by a 7-level CHMI. Results from the experimental set-up based on a 5-level CHMI have indicated the potential of the proposed circuit in producing bipolar square wave output pulses with peak values that depend on the DC source level supplied to the CHMI modules, and pulse widths of between 12.5 µs and 1 ms at frequencies of 50 Hz and 100 Hz.
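
To visualize the target waveform, the short sketch below synthesizes a bipolar square wave pulse train with the quoted peak voltage, repetition frequency and pulse width; it only illustrates the pulse shape and is not a simulation of the CHMI itself.

```python
import numpy as np

def bipolar_pulse_train(t, v_peak=2400.0, freq=50.0, pulse_width=1e-3):
    """+V pulse at the start of each period, -V pulse half a period later."""
    phase = np.mod(t, 1.0 / freq)
    out = np.zeros_like(t)
    out[phase < pulse_width] = v_peak
    half = 0.5 / freq
    out[(phase >= half) & (phase < half + pulse_width)] = -v_peak
    return out

t = np.linspace(0, 0.04, 40000)      # two 50 Hz periods sampled every 1 microsecond
v = bipolar_pulse_train(t)
print("peak:", v.max(), "V  valley:", v.min(), "V  duty per polarity:", np.mean(v > 0))
```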

Keywords: pulsed electric field, multilevel inverter, bipolar square wave, food sterilization

235 Power Flow Tracing Based Reactive Power Ancillary Service (AS) in Restructured Power Market

Authors: M. Susithra, R. Gnanadass

Abstract:

Ancillary services are support services which are essential for improving and enhancing the reliability and security of the electric power system. Reactive power ancillary service is one of the important ancillary services in a restructured electricity market, which determines the cost of supplying the service and how this cost would change with respect to operating decisions. This paper presents a new formulation that can be used to minimize the Independent System Operator (ISO)'s total payment for the reactive power ancillary service. A modified power flow tracing algorithm estimates the availability of reserve reactive power for the ancillary service. In order to find the optimum reactive power dispatch, a Biogeography Based Optimization (BBO) method is proposed. The Market Reactive Clearing Price (MRCP) is then estimated, which encourages generator companies (GENCOs) to participate in the ancillary service. Finally, the optimal weighting factor and real-time utilization factor of reactive power give the minimum ISO total payment. The effectiveness of the proposed design is verified using the IEEE 30-bus system.

Keywords: Biogeography based optimization method, Power flow tracing method, Reactive generation capability curve and Reactive power ancillary service.

234 Activity Recognition by Smartphone Accelerometer Data Using Ensemble Learning Methods

Authors: Eu Tteum Ha, Kwang Ryel Ryu

Abstract:

As smartphones are equipped with various sensors, there have been many studies focused on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, lifestyle and exercise pattern analyses, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors should be minimized to save battery power. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities by using only a single sensor's data, i.e., the smartphone accelerometer data. The approach that we adopt to deal with this multi-class problem uses various methods. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point, but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window. The experiments compared the performance of four kinds of basic multi-class classifiers and of four kinds of ensemble learning methods based on three kinds of basic multi-class classifiers. The results show that the method with the highest accuracy is ECOC based on random forest.
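
A minimal sketch of the feature extraction and classifier described above, using scikit-learn's error-correcting output codes (ECOC) wrapper around a random forest; the accelerometer windows are random placeholders, so the reported score is near chance and only the pipeline shape is meaningful.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.model_selection import cross_val_score

# Placeholder tri-axial accelerometer windows with activity labels.
rng = np.random.default_rng(0)
n_windows, win_len = 500, 128
acc = rng.normal(size=(n_windows, win_len, 3))
labels = rng.integers(0, 10, size=n_windows)          # ten activity classes

# Features named in the abstract: vector magnitude plus its max, min and std per window.
mag = np.linalg.norm(acc, axis=2)
X = np.column_stack([mag.mean(1), mag.max(1), mag.min(1), mag.std(1)])

# ECOC ensemble built on random forests, as in the best-performing variant.
clf = OutputCodeClassifier(RandomForestClassifier(n_estimators=100, random_state=0),
                           code_size=2, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```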

Keywords: Ensemble learning, activity recognition, smartphone accelerometer.

233 Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design

Authors: K. A. Sumitra Devi, N. P. Banashree, Annamma Abraham

Abstract:

Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to sub-divide multi-million transistor designs into manageable pieces. This paper looks at the various partitioning techniques and aspects of VLSI CAD, targeted at various applications. We propose an evolutionary time-series model and a statistical glitch prediction system using a neural network, with selection of global features by making use of a clustering-method model, for partitioning a circuit. For the evolutionary time-series model, we made use of genetic, memetic and neuro-memetic techniques. Our work focused on the use of clustering methods: the K-means and EM methodologies. A comparative study is provided for all techniques for solving the problem of circuit partitioning pertaining to VLSI design. The performance of all approaches is compared using benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results showed that the neuro-memetic model achieves greater performance than the other models in recognizing sub-circuits with a minimum amount of interconnections between them.

Keywords: VLSI, circuit partitioning, memetic algorithm, genetic algorithm.

232 Cyber Security Situational Awareness among Students: A Case Study in Malaysia

Authors: Yunos Zahri, Ab Hamid R. Susanty, Ahmad Mustaffa

Abstract:

This paper explores the need for a national baseline study on understanding the level of cyber security situational awareness among primary and secondary school students in Malaysia. The online survey method was deployed to administer the data collection exercise. The target groups were divided into three categories: Group 1 (primary school, aged 7-9 years old), Group 2 (primary school, aged 10-12 years old), and Group 3 (secondary school, aged 13-17 years old). A different questionnaire set was designed for each group. The survey topics/areas included Internet and digital citizenship knowledge. Respondents were randomly selected from rural and urban areas throughout all 14 states in Malaysia. A total of 9,158 respondents participated in the survey, with most states meeting the minimum sample size requirement to represent the country's demographics. The findings and recommendations from this baseline study are fundamental to developing the teaching modules required for children to understand the security risks and threats associated with the Internet throughout their years in school. Early exposure and education will help ensure healthy cyber habits among millennials in Malaysia.

Keywords: Cyber security awareness, cyber security education, cyber security, students.

231 Optimization of the Process of Osmo-Convective Drying of Edible Button Mushrooms using Response Surface Methodology (RSM)

Authors: Behrouz Mosayebi Dehkordi

Abstract:

The simultaneous effects of temperature, immersion time, salt concentration, sucrose concentration, pressure and convective dryer temperature on the combined osmotic dehydration - convective drying of edible button mushrooms were investigated. Experiments were designed according to a Central Composite Design with six factors, each at five different levels. Response Surface Methodology (RSM) was used to determine the optimum processing conditions that yield maximum water loss and rehydration ratio and minimum solid gain and shrinkage in the osmotic-convective drying of edible button mushrooms. Applying surface profiles and contour plots, the optimum operating conditions were found to be a temperature of 39 °C, immersion time of 164 min, salt concentration of 14%, sucrose concentration of 53%, pressure of 600 mbar and drying temperature of 40 °C. At these optimum conditions, the water loss, solid gain, rehydration ratio and shrinkage were found to be 63.38 (g/100 g initial sample), 3.17 (g/100 g initial sample), 2.26 and 7.15%, respectively.
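
As a compact illustration of the RSM step (fitting a second-order response surface to designed experiments and locating its optimum), the sketch below fits a quadratic model to a synthetic two-factor data set and maximizes it numerically; the data are placeholders, not the mushroom-drying measurements, and numpy/scipy are assumed.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic two-factor "experiments" in coded units with a known quadratic response.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 60 - 4 * (X[:, 0] - 0.2) ** 2 - 6 * (X[:, 1] + 0.3) ** 2 + rng.normal(0, 0.3, 30)

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)   # second-order RSM fit

surface = lambda x: (design_matrix(np.atleast_2d(x)) @ beta)[0]
opt = minimize(lambda x: -surface(x), x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("stationary point (coded units):", opt.x.round(3),
      "predicted response:", round(-opt.fun, 2))
```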

Keywords: Dehydration, Mushroom, Optimization, Osmotic, Response Surface Methodology

230 Optimization of the Process of Osmo-Convective Drying of Edible Button Mushrooms using Response Surface Methodology (RSM)

Authors: Behrouz Mosayebi Dehkordi

Abstract:

The simultaneous effects of temperature, immersion time, salt concentration, sucrose concentration, pressure and convective dryer temperature on the combined osmotic dehydration - convective drying of edible button mushrooms were investigated. Experiments were designed according to a Central Composite Design with six factors, each at five different levels. Response Surface Methodology (RSM) was used to determine the optimum processing conditions that yield maximum water loss and rehydration ratio and minimum solid gain and shrinkage in the osmotic-convective drying of edible button mushrooms. Applying surface profiles and contour plots, the optimum operating conditions were found to be a temperature of 39 °C, immersion time of 164 min, salt concentration of 14%, sucrose concentration of 53%, pressure of 600 mbar and drying temperature of 40 °C. At these optimum conditions, the water loss, solid gain, rehydration ratio and shrinkage were found to be 63.38 (g/100 g initial sample), 3.17 (g/100 g initial sample), 2.26 and 7.15%, respectively.

Keywords: Dehydration, mushroom, optimization, osmotic, response surface methodology.

229 Optic Disc Detection by Earth Mover's Distance Template Matching

Authors: Fernando C. Monteiro, Vasco Cadavez

Abstract:

This paper presents a method for the detection of the optic disc (OD) in the retina which takes advantage of powerful preprocessing techniques such as contrast enhancement, the Gabor wavelet transform for vessel segmentation, mathematical morphology, and the Earth Mover's distance (EMD) as the matching process. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. The vessel segmentation method produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and 2D Gabor wavelet transform responses taken at multiple scales. A simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity using the EMD. The minimum distance provides an estimate of the OD center coordinates. The method's performance is evaluated on the publicly available DRIVE and STARE databases. On the DRIVE database the OD center was detected correctly in all of the 40 images (100%), and on the STARE database the OD was detected correctly in 76 out of the 81 images, even in rather difficult pathological situations.
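
A minimal sketch of the matching idea, comparing the distribution of vessel directions in each candidate window against the expected pattern and taking the window with minimum Earth Mover's distance, is shown below using SciPy's 1-D Wasserstein distance; the direction samples are synthetic placeholders rather than real Gabor-segmented vessels.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Expected directional pattern near the optic disc: vessels roughly vertical.
template_dirs = rng.normal(loc=np.pi / 2, scale=0.30, size=500)

def window_directions(contains_od):
    """Synthetic vessel directions inside a candidate window."""
    if contains_od:
        return rng.normal(np.pi / 2, 0.35, 300)   # resembles the template
    return rng.uniform(0, np.pi, 300)             # unstructured background

candidates = {"window A (true OD)": window_directions(True),
              "window B": window_directions(False),
              "window C": window_directions(False)}

scores = {name: wasserstein_distance(dirs, template_dirs)
          for name, dirs in candidates.items()}
print(min(scores, key=scores.get), scores)        # minimum EMD -> OD estimate
```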

Keywords: Diabetic retinopathy, Earth Mover's distance, Gabor wavelets, optic disc detection, retinal images

228 400 kW Six Analytical High Speed Generator Designs for Smart Grid Systems

Authors: A. El Shahat, A. Keyhani, H. El Shewy

Abstract:

High-speed PM generators driven by micro-turbines are widely used in smart grid systems. This paper therefore presents a comparative study among six classical, optimized and genetic analytical design cases for 400 kW output power at a tip speed of 200 m/s. These six design trials of High Speed Permanent Magnet Synchronous Generators (HSPMSGs) are: classical sizing; unconstrained optimization to minimize total losses; and constrained optimization of total mass, where bounded constraints are introduced in the problem formulation. Then a genetic algorithm is formulated for obtaining maximum efficiency while minimizing machine size. In the second genetic problem formulation, we attempt to obtain the minimum mass, with the machine sizing constrained by a non-linear constraint function of the machine losses. Finally, an optimum torque-per-ampere genetic sizing is predicted. All results are simulated with MATLAB, the Optimization Toolbox and its Genetic Algorithm. Finally, comparisons of the six analytical design examples are presented, with a study of the machine waveforms, THD and rotor losses.

Keywords: High Speed, Micro - Turbines, Optimization, PM Generators, Smart Grid, MATLAB.

227 Proposals for the Thermal Regulation of Buildings in Algeria: An Energy Label for Social Housing

Authors: Marco Morini, Nicolandrea Calabrese, Dario Chello

Abstract:

Despite the international commitment of Algeria towards the development of energy efficiency and renewable energy in the country, the internal energy demand has been continuously growing during the last decade due to the substantial increase of population and of living conditions, which in turn has led to an unprecedented expansion of the residential building sector. The RTB (Thermal Building Regulation) is the technical document that establishes the calculation framework for the thermal performance of buildings in Algeria, setting minimum obligatory targets for the thermal performance of new buildings. An update of this regulation is due in the coming years, and this paper discusses some proposals in this regard, with the aim of improving the energy efficiency of the building sector, particularly with regard to social housing. In particular, it proposes a methodology for drafting an energy performance label of new Algerian residential buildings, starting from the results of the thermal compliance verification and sizing of technical systems as defined in the RTB. Such an energy performance label – whose calculation method is briefly described in the paper – aims to raise citizens' awareness of the benefits of energy efficiency. It can represent the first step in a process of integrating technical installations into the calculation of the energy performance of buildings in Algeria.

Keywords: building, energy certification, energy efficiency, social housing, international cooperation, Mediterranean Region

226 Reducing Variation of Dyeing Process in Textile Manufacturing Industry

Authors: M. Zeydan, G. Toğa

Abstract:

This study deals with a multi-criteria optimization problem which has been transformed into a single-objective optimization problem using Response Surface Methodology (RSM), Artificial Neural Network (ANN) and Grey Relational Analysis (GRA) approaches. Grey-RSM and Grey-ANN are hybrid techniques which can be used for solving multi-criteria optimization problems. This research has had two main purposes: 1. to determine optimum and robust fiber dyeing process conditions by using RSM and ANN based on GRA; 2. to obtain the most suitable model by comparing models developed with the different methodologies. The design variables for the fiber dyeing process in textiles are temperature, time, softener, anti-static agent, material quantity, pH, retarder, and dispergator. The quality characteristics to be evaluated are nominal color consistency of the fiber, maximum strength of the fiber, and minimum color of the dyeing solution. The GRA-RSM with exact level values, GRA-RSM with interval level values and GRA-ANN models were compared with each other based on the GRA output value and the MSE (Mean Square Error) of the outputs. As a result, the GRA-ANN with interval value model seems to be the most suitable for reducing the variation of the dyeing process in terms of the GRA output value of the model.
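
For readers unfamiliar with the grey relational step that collapses the three quality characteristics into a single objective, the sketch below computes grey relational grades for a few hypothetical dyeing runs; the response values and the distinguishing coefficient of 0.5 are placeholders, not the study's data.

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5):
    """Normalize each response, measure deviation from the ideal, and average the
    grey relational coefficients into one grade per experiment."""
    X = np.asarray(responses, dtype=float)         # rows: experiments, cols: responses
    norm = np.empty_like(X)
    for j, larger in enumerate(larger_is_better):
        lo, hi = X[:, j].min(), X[:, j].max()
        norm[:, j] = (X[:, j] - lo) / (hi - lo) if larger else (hi - X[:, j]) / (hi - lo)
    delta = 1.0 - norm                              # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Hypothetical runs: [color consistency, fiber strength, residual solution color]
runs = [[0.92, 41.0, 0.30], [0.88, 44.5, 0.22], [0.95, 39.8, 0.35]]
print(grey_relational_grade(runs, larger_is_better=[True, True, False]))
```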

Keywords: Artificial Neural Network, Grey Relational Analysis, Optimization, Response Surface Methodology

225 Effect of Size of the Step in the Response Surface Methodology using Nonlinear Test Functions

Authors: Jesús Everardo Olguín Tiznado, Rafael García Martínez, Claudia Camargo Wilson, Juan Andrés López Barreras, Everardo Inzunza González, Javier Ordorica Villalvazo

Abstract:

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful in the modeling and analysis of problems in which the dependent variable is influenced by several independent variables, with the aim of determining the conditions under which these variables should operate to optimize a production process. RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of this methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the gain in efficiency, the distance to the optimum, and the number of iterations. The simulation experiments showed that the gain in efficiency and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
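
To make the role of the step size concrete, the sketch below runs a simple first-order steepest-ascent search on a nonlinear test function for two different step sizes; the test function, starting point and step values are illustrative choices, not those used in the Monte Carlo experiments.

```python
import numpy as np

def steepest_ascent_path(f, x0, step, n_steps=20, delta=0.05):
    """First-order RSM move: estimate the local gradient numerically, then step
    along the direction of maximum slope scaled by the chosen step size."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        grad = np.array([(f(x + delta * e) - f(x - delta * e)) / (2 * delta)
                         for e in np.eye(len(x))])
        x = x + step * grad / np.linalg.norm(grad)
        path.append(x.copy())
    return path

# Illustrative nonlinear test function (negative Rosenbrock) and two step sizes.
f = lambda x: -((1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2)
for h in (0.05, 0.5):
    best = max(f(xi) for xi in steepest_ascent_path(f, x0=[-1.0, 1.0], step=h))
    print(f"step size {h}: best response reached = {best:.3f}")
```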

Keywords: RSM, dependent variable, independent variables, efficiency, simulation

224 Optimal Model Order Selection for Transient Error Autoregressive Moving Average (TERA) MRI Reconstruction Method

Authors: Abiodun M. Aibinu, Athaur Rahman Najeeb, Momoh J. E. Salami, Amir A. Shafie

Abstract:

An alternative approach to the use of the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the use of a parametric modeling technique. This method is suitable for problems in which the image can be modeled by explicit known source functions with a few adjustable parameters. Despite the success reported in the use of the modeling technique as an alternative MRI reconstruction technique, two important problems constitute challenges to the applicability of this method: estimation of the model order and determination of the model coefficients. In this paper, five of the suggested methods of evaluating the model order have been evaluated: the Final Prediction Error (FPE), the Akaike Information Criterion (AIC), the Residual Variance (RV), the Minimum Description Length (MDL) and the Hannan and Quinn (HNQ) criterion. These criteria were evaluated on MRI data sets based on the Transient Error Reconstruction Algorithm (TERA) method. The result for each criterion is compared to the result obtained by the use of a fixed-order technique, and three measures of similarity were evaluated. The results obtained show that the use of MDL gives the highest measure of similarity to that obtained by a fixed-order technique.
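
A minimal sketch of this kind of order selection is shown below using statsmodels: several candidate ARMA orders are fitted to a synthetic 1-D signal and the order minimizing each criterion is reported. The signal is only a placeholder for MRI data, BIC is used as the MDL-equivalent criterion, and FPE is computed from the residual variance.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic 1-D signal standing in for a row of MRI data.
rng = np.random.default_rng(0)
y = np.convolve(rng.normal(size=400), [1.0, 0.6, 0.3], mode="same")

scores = {}
for p in range(1, 5):
    for q in range(0, 3):
        res = ARIMA(y, order=(p, 0, q)).fit()
        n, k = len(y), p + q + 1
        sigma2 = np.mean(res.resid ** 2)
        scores[(p, q)] = {
            "FPE": sigma2 * (n + k) / (n - k),   # Final Prediction Error
            "AIC": res.aic,
            "MDL/BIC": res.bic,                  # BIC plays the role of MDL here
            "HNQ": res.hqic,                     # Hannan-Quinn
        }

for crit in ("FPE", "AIC", "MDL/BIC", "HNQ"):
    print(crit, "selects (p, q) =", min(scores, key=lambda o: scores[o][crit]))
```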

Keywords: Autoregressive Moving Average (ARMA), Magnetic Resonance Imaging (MRI), Parametric modeling, Transient Error.

223 Simultaneous HPAM/SDS Injection in Heterogeneous/Layered Models

Authors: M. H. Sedaghat, A. Zamani, S. Morshedi, R. Janamiri, M. Safdari, I. Mahdavi, A. Hosseini, A. Hatampour

Abstract:

Although many experiments have been done in enhanced oil recovery, the number of experiments that consider the effects of local and global heterogeneity on the efficiency of enhanced oil recovery based on polymer-surfactant flooding is low. In this research, we have performed numerous water flooding and polymer-surfactant flooding experiments on a five-spot glass micromodel under different conditions, such as different positions of the layers. For these experiments, five different micromodels with three different pore structures were designed: three models with different layer orientations, one homogeneous model and one heterogeneous model. In order to incorporate the effect of heterogeneity of the porous media, the three types of pore structures were distributed randomly and in equal ratio throughout the heterogeneous micromodel network according to a random normal distribution. The results show that the maximum EOR recovery factor occurs when the layers are orthogonal to the path of the mainstream, and the minimum EOR recovery factor occurs when the model is heterogeneous. These experiments show that in polymer-surfactant flooding, the EOR recovery factor increases with the angle of the layers, and that this recovery factor is strongly affected by local heterogeneity around the injection zone.

Keywords: Layered Reservoir, Micromodel, Local Heterogeneity, Polymer-Surfactant Flooding, Enhanced Oil Recovery.

222 The Relationship between Inventory Management and Profitability: A Comparative Research on Turkish Firms Operated in Weaving Industry, Eatables Industry, Wholesale and Retail Industry

Authors: G. Sekeroglu, M. Altan

Abstract:

Working capital is defined as all of a firm's current assets. Inventories, which are one of the elements of working capital, are very important among firms' current assets, because profitability, an indicator of a firm's financial success, is achieved with minimum cost and optimum inventory quantity. In this study, the effect of inventory management on the profitability of Turkish firms operating in the weaving industry, the eatables industry, and the wholesale and retail industry between 2003 and 2012 is investigated comparatively. The research data consist of profitability ratios and inventory turnover ratios calculated using the balance sheets and income statements of firms listed on Borsa Istanbul (BIST). In this research, the relationship between inventories and profitability is investigated using SPSS-20 software with regression and correlation analysis. The results obtained for the three industry groups covered in the study are interpreted comparatively. Accordingly, it is determined that there is a positive relationship between inventory management and profitability in the eatables industry. However, no relationship was found between inventory management and profitability in the weaving industry or in the wholesale and retail industry.
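
A minimal sketch of the regression and correlation analysis described, profitability regressed on inventory turnover, is shown below with statsmodels; the firm-level figures are randomly generated placeholders, not BIST data.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder panel: inventory turnover and a profitability ratio for sample firms.
rng = np.random.default_rng(1)
turnover = rng.uniform(2, 12, size=60)
profitability = 0.8 * turnover + rng.normal(scale=3.0, size=60)  # assumed positive link

X = sm.add_constant(turnover)
model = sm.OLS(profitability, X).fit()
print("coefficients:", model.params, "p-values:", model.pvalues)
print("R-squared:", round(model.rsquared, 3),
      "correlation:", round(np.corrcoef(turnover, profitability)[0, 1], 3))
```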

Keywords: Profitability, regression analysis, inventory management, working capital.

221 Network-Constrained AC Unit Commitment under Uncertainty Using a Benders Decomposition Approach

Authors: B. Janani, S. Thiruvenkadam

Abstract:

In this work, we evaluate the impact of considering a stochastic approach on the day-ahead unit commitment. Comparisons between stochastic and deterministic unit commitment solutions are provided. The unit commitment model consists in the minimization of the total operation costs considering the units' technical constraints, such as ramping rates and minimum up and down times. Load shedding and wind power spilling are acceptable, but at inflated operational costs. The evaluation process consists in the calculation of the optimal unit commitment and in verifying the fulfillment of the considered constraints. For the calculation of the optimal unit commitment, an algorithm based on Benders decomposition, namely on dual dynamic programming, was developed. Two approaches were considered in the construction of stochastic solutions. Data related to wind power outputs from two different operational days are considered in the analysis. Stochastic and deterministic solutions are compared based on the actual measured wind power output on the operational day. Through a technique capable of finding representative wind power scenarios and their probabilities, the system can provide a more detailed analysis of the expected final operational cost.

Keywords: Benders’ decomposition, network constrained AC unit commitment, stochastic programming, wind power uncertainty.

220 Evaluation of Microleakage of a New Generation Nano-Ionomer in Class II Restoration of Primary Molars

Authors: Ghada Salem, Nihal Kabel

Abstract:

Objective: This in vitro study was carried out to assess the microleakage properties of a nano-filled glass ionomer in comparison to a resin-reinforced glass ionomer. Material and Methods: 40 deciduous molar teeth were included in this study. A Class II cavity was prepared in a standard form for all the specimens. The teeth were randomly distributed into two groups (20 per group) according to the restorative material used: either nano-glass ionomer or Photac Fill glass ionomer restoration. All specimens were thermocycled for 1000 cycles between 5 and 55 °C. After that, the teeth were immersed in 2% methylene blue dye, then sectioned and evaluated under a stereomicroscope. Microleakage was assessed using linear dye penetration, on a scale from zero to five. Results: A two-way ANOVA test revealed a statistically significantly lower degree of microleakage in both occlusal and gingival restorations (0.4±0.2), (0.9±0.1) for the nano-filled glass ionomer group in comparison to the resin-modified glass ionomer (2.3±0.7), (2.4±0.5). No statistical difference was found between gingival and occlusal leakage regarding the effect of the measured site. Conclusion: The nano-filled glass ionomer shows superior sealing ability, which enables this type of restoration to be used in minimally invasive treatment.

Keywords: Microleakage, nano-ionomer, resin-reinforced glass ionomer, proximal cavity preparation.

219 Solving a New Mixed-Model Assembly Line Sequencing Problem in an MTO Environment

Authors: N. Manavizadeh, M. Hosseini, M. Rabbani

Abstract:

In recent decades, to supply the various and differentiated demands of clients, many manufacturers have tended to use the mixed-model assembly line (MMAL) in their production lines, since this policy makes it possible to assemble various and different models of equivalent goods on the same line with the MTO approach. In this article, we determine the sequence of the MMAL, applying the kitting approach and planning rest time for general workers, in order to reduce waste, increase worker effectiveness and apply the lean production approach. This multi-objective sequencing problem is solved for small sizes with GAMS 22.2 and the PSO metaheuristic on 10 test problems; comparing their results, we conclude that the results are very similar. Next, we determine the important factors in computing the cost, the improvement of which reduces the cost. Since this problem is NP-hard for large sizes, we use the particle swarm optimization (PSO) metaheuristic for solving it. For large sizes, we define some test problems to survey its performance and determine the important factors in calculating the cost, so that by changing or improving them, production at minimum cost becomes possible.

Keywords: Mixed-Model Assembly Line, particle swarm optimization, multi-objective sequencing problem, MTO system, kit-to-assembly, rest time

218 Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function

Authors: S. Anna Durai, E. Anna Saro

Abstract:

Image compression using Artificial Neural Networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks using the back-propagation algorithm, adopting the method of steepest descent for error minimization, are popular and widely adopted and are directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, it takes a long time to converge. The reason for this is that the given image may contain a number of distinct gray levels with only a narrow difference from their neighborhood pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in the gray levels of the neighbors with respect to the pixel is minimum, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
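
To illustrate the pre-processing idea, the sketch below maps an image's pixels through their empirical cumulative distribution function and then trains a tiny autoassociative feed-forward network on the mapped blocks; the image is random placeholder data and scikit-learn's MLP stands in for the paper's back-propagation network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder 8-bit image standing in for the test images.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

# Map each pixel through the empirical CDF so that gray levels with small
# differences from their neighbours are spread more uniformly over [0, 1].
hist = np.bincount(img.ravel(), minlength=256)
cdf = np.cumsum(hist) / img.size
mapped = cdf[img]

# Tiny feed-forward "compressor": 16-pixel segments squeezed through 4 hidden units.
blocks = mapped.reshape(-1, 16)
net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
net.fit(blocks, blocks)                      # autoassociative (input = target) training
print("reconstruction MSE:", np.mean((net.predict(blocks) - blocks) ** 2))
```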

Keywords: Back-propagation Neural Network, Cumulative Distribution Function, Correlation, Convergence.

217 Revised PLWAP Tree with Non-frequent Items for Mining Sequential Pattern

Authors: R. Vishnu Priya, A. Vadivel

Abstract:

Sequential pattern mining is a challenging task in the data mining area with many applications. One of those applications is mining patterns from weblogs. In recent times, weblogs are highly dynamic and some of their entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until the required output or interesting rules are acquired. Some of the recently proposed algorithms for mining weblogs build the tree with two scans and always consume a large amount of time and space. In this paper, we build a Revised PLWAP tree with Non-frequent Items (RePLNI-tree) with a single scan over all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not required to delete or maintain the information of nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining. It is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used benchmark weblog datasets and found that the performance of the proposed tree is encouraging compared to some recently proposed approaches.
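
The sketch below illustrates the single-scan idea on a toy weblog: every item, frequent or not, is inserted into a prefix tree, so a later change of the minimum support only re-selects the frequent items instead of rebuilding the tree. This is a simplified illustration, not the RePLNI-tree algorithm itself.

```python
from collections import defaultdict

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def build_tree(sequences):
    """Single scan: insert all items (frequent or not) and count item support."""
    root, support = Node(None), defaultdict(int)
    for seq in sequences:
        for item in set(seq):                # support = sequences containing the item
            support[item] += 1
        node = root
        for item in seq:
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root, support

def frequent_items(support, min_sup):
    # Links to non-frequent items are simply skipped at mining time.
    return {item for item, c in support.items() if c >= min_sup}

weblog = [list("abac"), list("abc"), list("bca"), list("ab")]
root, support = build_tree(weblog)
print(frequent_items(support, min_sup=3))    # threshold can change without a re-scan
print(frequent_items(support, min_sup=4))
```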

Keywords: Sequential pattern mining, weblog, frequent and non-frequent items, incremental and interactive mining.

216 A Recommendation to Oncologists for Cancer Treatment by Immunotherapy: Quantitative and Qualitative Analysis

Authors: Mandana Kariminejad, Ali Ghaffari

Abstract:

Today, the treatment of cancer in a relatively short period and with minimum adverse effects is a great concern for oncologists. In this paper, based on a recently used mathematical model of cancer, a guideline has been proposed for the amount and duration of drug doses for cancer treatment by immunotherapy. Dynamically speaking, the mathematical ordinary differential equation (ODE) model of cancer has different equilibrium points; one of them is unstable, and is called the no-tumor equilibrium point. In this paper, based on the number of tumor cells, an intelligent soft computing controller (a combination of a fuzzy logic controller and a genetic algorithm) decides on the amount and duration of drug doses, to eliminate the tumor cells and stabilize the unstable point in a relatively short time. Two different immunotherapy approaches, active and adoptive, have been studied and presented. It is shown that the rate of decay of tumor cells is faster and the drug doses are lower in comparison with the results of some other studies. It is also shown that the period of treatment and the drug doses in adoptive immunotherapy are significantly less than in the active method. A recommendation to oncologists has also been presented.
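
As a much-simplified illustration of how a dose schedule can be driven by the tumor-cell state in an ODE model (not the paper's model or its fuzzy-genetic controller), the sketch below integrates a toy tumor-immune system in which the drug input is switched on whenever the tumor population exceeds a threshold.

```python
from scipy.integrate import solve_ivp

def dose(tumor):
    """Toy state-feedback rule: apply the drug while the tumor exceeds a threshold."""
    return 1.0 if tumor > 0.1 else 0.0

def dynamics(t, y):
    tumor, immune = y
    u = dose(tumor)
    d_tumor = 0.3 * tumor * (1 - tumor) - 0.8 * tumor * immune - 1.5 * u * tumor
    d_immune = 0.2 * immune * tumor - 0.1 * immune + 0.5 * u
    return [d_tumor, d_immune]

sol = solve_ivp(dynamics, (0, 60), [0.6, 0.1], max_step=0.1)
print("final tumor population:", round(sol.y[0, -1], 4))   # held near/below threshold
```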

Keywords: Tumor, immunotherapy, fuzzy controller, Genetic algorithm, mathematical model.
