Search results for: Complexity reduction approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6793

6403 Thermodynamic Analysis of Ventilated Façades under Operating Conditions in Southern Spain

Authors: Carlos A. D. Torres, Antonio D. Delgado

Abstract:

In this work we study the thermodynamic behavior of ventilated façades under summer operating conditions in Southern Spain. Under these climatic conditions, maintaining indoor comfort implies a high energy demand, due to the high temperatures usually reached during this season in the considered geographical area.

The aim of this work is to determine whether, under summer operating conditions in Southern Spain, ventilated façades provide energy savings compared to non-ventilated façades, and to deduce their behavior patterns in terms of energy efficiency.

The air flow in the channel has been modeled using the Navier-Stokes equations for thermodynamic flows. Numerical simulations have been carried out with a 2D finite element approach.

In this way, we analyze the behavior of ventilated façades under different weather conditions, such as variable wind, variable temperature, and different levels of solar irradiation.

CFD computations show that the combined effect of shading the external wall and of ventilation by natural convection in the air gap achieves a reduction of the heat load during the summer period. This reduction has been evaluated by comparing the thermodynamic performance of two ventilated and two unventilated façades with the same geometry and thermophysical characteristics.

Keywords: Passive cooling, ventilated façades, energy-efficient building, CFD, FEM.

6402 Attribute Based Comparison and Selection of Modular Self-Reconfigurable Robot Using Multiple Attribute Decision Making Approach

Authors: Manpreet Singh, V. P. Agrawal, Gurmanjot Singh Bhatti

Abstract:

Over the last decades there has been significant technological advancement in the field of robotics, and a number of modular self-reconfigurable robots have been introduced that can help in space exploration, search and rescue operations during earthquakes, etc. As there are numerous self-reconfigurable robots, choosing the optimum one is always a concern for the robot user, given the growing number of available features, facilities, complexity, etc. The objective of this research work is to present a multiple attribute decision making based methodology for the coding, evaluation, comparison, ranking, and selection of modular self-reconfigurable robots, using the technique for order preference by similarity to ideal solution (TOPSIS). In total, 86 attributes that affect structure and performance are identified, and a database of modular self-reconfigurable robots on the basis of these pertinent attributes is generated. This database is very useful for users selecting a robot that suits their operational needs. Two visual methods, namely the linear graph and the spider chart, are proposed for ranking modular self-reconfigurable robots. An example using five robots (Atron, Smores, Polybot, M-Tran 3, Superbot) is illustrated, and ranking of the robots is successfully carried out, showing that Smores is the best robot for the operational need illustrated; the methodology is found to be effective and simple to use.
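
As an illustration of the ranking core, here is a minimal TOPSIS sketch with a toy 5×3 attribute matrix (hypothetical speed, payload, and cost values, not the paper's 86-attribute database):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives: rows = robots, columns = attributes.
    benefit[j] is True if larger is better for attribute j."""
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize each column
    v = m * weights                               # weighted normalized decision matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to the ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to the anti-ideal solution
    return d_neg / (d_pos + d_neg)                # closeness coefficient, higher is better

# Toy data: 5 robots x 3 attributes (speed, payload, cost); cost is a non-benefit attribute.
X = np.array([[7., 5., 3.], [9., 4., 4.], [6., 6., 2.], [8., 3., 5.], [7., 7., 4.]])
w = np.array([0.5, 0.3, 0.2])
scores = topsis(X, w, benefit=np.array([True, True, False]))
print(np.argsort(-scores))                        # robot indices, best to worst
```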

Keywords: Self-reconfigurable robots, MADM, TOPSIS, morphogenesis, scalability.

6401 Using the Combined Model of PROMETHEE and Fuzzy Analytic Network Process for Determining Question Weights in Scientific Exams through Data Mining Approach

Authors: Hassan Haleh, Amin Ghaffari, Parisa Farahpour

Abstract:

An appropriate system for evaluating students' educational development is a key requirement for achieving predefined educational goals. The volume of recent papers attempting to prove or disprove the necessity and adequacy of student assessment corroborates this. Some of these studies have tried to increase the precision of determining question weights in scientific examinations, but all of them attempt to adjust the initial question weights while the accuracy and precision of those initial weights remain in question. Thus, in order to increase the precision of assessing students' educational development, the present study proposes a new method for determining the initial question weights by considering question factors such as difficulty, importance, and complexity, and by implementing a combined method of PROMETHEE and fuzzy analytic network process using a data mining approach to improve the model's inputs. The results of the implemented case study demonstrate the improved performance and precision of the proposed model.
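
A minimal PROMETHEE II sketch of the outranking step, assuming illustrative criterion scores, a simple linear preference function, and criterion weights of the kind the paper would obtain from the fuzzy analytic network process:

```python
import numpy as np

def promethee_net_flows(matrix, weights, p=1.0):
    """matrix: rows = questions, cols = criteria (difficulty, importance, complexity).
    Linear preference function with preference threshold p."""
    n = matrix.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = matrix[i] - matrix[j]            # criterion-wise difference
            pref = np.clip(d / p, 0.0, 1.0)      # linear preference in [0, 1]
            pi_ij = np.dot(weights, pref)        # aggregated preference of i over j
            phi[i] += pi_ij / (n - 1)            # positive flow contribution
            phi[j] -= pi_ij / (n - 1)            # negative flow contribution
    return phi                                   # net outranking flow per question

Q = np.array([[0.8, 0.6, 0.7], [0.4, 0.9, 0.5], [0.6, 0.5, 0.9]])  # toy question scores
w = np.array([0.4, 0.35, 0.25])                  # criterion weights (e.g., from fuzzy ANP)
flows = promethee_net_flows(Q, w)
q_weights = flows - flows.min()
q_weights /= q_weights.sum()                     # map net flows to normalized question weights
print(q_weights)
```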

Keywords: Assessing students, Analytic network process, Clustering, Data mining, Fuzzy sets, Multi-criteria decision making, Preference function.

6400 An Investigation on Efficient Spreading Codes for Transmitter Based Techniques to Mitigate MAI and ISI in TDD/CDMA Downlink

Authors: Abhijit Mitra, C. Ardil

Abstract:

We investigate efficient spreading codes for transmitter-based techniques in code division multiple access (CDMA) systems. The channel is considered to be known at the transmitter, which is usual in a time division duplex (TDD) system, where the channel is assumed to be the same on the uplink and downlink. For such a TDD/CDMA system, both bitwise and blockwise multiuser transmission schemes are taken up, in which complexity is transferred to the transmitter side so that the receiver has minimum complexity. Different spreading codes are considered at the transmitter to spread the signal efficiently over the entire spectrum. The bit error rate (BER) curves portray the efficiency of the codes in the presence of multiple access interference (MAI) as well as intersymbol interference (ISI).
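
For illustration, a sketch of one classic family of orthogonal spreading codes (Walsh-Hadamard) and of how despreading suppresses MAI in a synchronous downlink; the specific codes compared in the paper are not reproduced here:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = hadamard(8)                      # 8 orthogonal spreading codes of length 8
R = codes @ codes.T / 8                  # normalized cross-correlation matrix
print(np.allclose(R, np.eye(8)))         # True: zero MAI under perfect synchronism

# Spread two users' BPSK bits and recover one stream by despreading.
bits = np.array([[1, -1, 1], [-1, -1, 1]])                 # 2 users x 3 bits
tx = sum(np.kron(bits[u], codes[u]) for u in range(2))     # superimposed chip stream
rx0 = np.sign(tx.reshape(3, 8) @ codes[0] / 8)             # despread user 0
print(rx0)                                                 # recovers [1, -1, 1]
```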

Keywords: Code division multiple access, time division duplex, transmitter technique, precoding, pre-rake, rake, spreading code.

6399 Product Ecodesign Approaches in ISO 14001 Certified Companies

Authors: Gregor Radonjič, Aleksandra P. Korda, Damijan Krajnc

Abstract:

The aim of the study was to investigate whether adopting ISO 14001 certification promotes product ecodesign measures in manufacturing companies in the Republic of Slovenia. Companies gave most of their product development attention to waste and energy reduction during the manufacturing process and to reduction of material consumption per unit of product. Regarding the importance of the different ecodesign criteria, reduction of material consumption per unit of product was reported as the most important criterion. Less attention is paid to end-of-life issues such as recycling or packaging. Most manufacturing enterprises considered the ISO 14001 standard a very useful, or at least useful, tool helping them to establish and accelerate product ecodesign activities. The two most frequently considered ecodesign drivers are increased competitive advantage and legal requirements, and the two most important barriers are high development costs and insufficient market demand.

Keywords: ecodesign, environmental management system, ISO 14001, products

6398 An Evaluation of Carbon Dioxide Emissions Trading among Enterprises -The Tokyo Cap and Trade Program-

Authors: Hiroki Satou, Kayoko Yamamoto

Abstract:

This study proposes three methods to evaluate the Tokyo Cap and Trade Program when emissions trading is performed virtually among enterprises, focusing on carbon dioxide (CO2), the only emitted greenhouse gas that tends to increase. The first method clarifies the optimum reduction rate for the highest cost benefit, the second discusses emissions trading among enterprises through market trading, and the third verifies long-term emissions trading during the term of the plan (2010-2019), checking the validity of emissions trading partly using Geographic Information Systems (GIS). The findings of this study can be summarized in the following three points. 1. Since the total cost benefit is greatest at a 44% reduction rate, the rate can be set higher than that of the Tokyo Cap and Trade Program to obtain a greater total cost benefit. 2. At a 44% reduction rate, among 320 enterprises, 8 purchasing enterprises and 245 selling enterprises gain profits from emissions trading, and 67 enterprises perform voluntary reduction without conducting emissions trading. Therefore, to further promote emissions trading, it is necessary to increase the sales volume of emissions trading, in addition to the number of selling enterprises, by increasing the number of purchasing enterprises. 3. Compared to short-term emissions trading, few enterprises benefit in each year under the long-term emissions trading of the Tokyo Cap and Trade Program; only 81 enterprises at most can gain profits from emissions trading in FY 2019. Therefore, by setting the reduction rate higher, it is necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.

Keywords: Emissions Trading, Tokyo Cap and Trade Program, Carbon Dioxide (CO2), Global Warming, Geographic Information Systems (GIS)

6397 Computationally Efficient Adaptive Rate Sampling and Adaptive Resolution Analysis

Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin

Abstract:

Most real-life signals are time-varying in nature, and proper characterization of such signals requires a time-frequency representation. The short-time Fourier transform (STFT) is a classical tool used for this purpose, but its limitation is a fixed time-frequency resolution. Thus, an enhanced version of the STFT, based on level-crossing sampling, is devised. It can adapt the sampling frequency and the window function length by following the local variations of the input signal, and therefore provides an adaptive-resolution time-frequency representation of the input. The computational complexity of the proposed STFT is deduced and compared to that of the classical one. The results show a significant gain in computational efficiency and hence in processing power. The processing error of the proposed technique is also discussed.
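
A minimal sketch of level-crossing sampling, the mechanism behind the adaptive sampling rate: samples are taken only where the signal crosses a set of amplitude levels, so busier signal segments yield more samples (illustrative signal and level count):

```python
import numpy as np

def level_crossing_sample(t, x, n_levels=16):
    levels = np.linspace(x.min(), x.max(), n_levels)
    ts, xs = [], []
    for k in range(1, len(x)):
        for L in levels:
            if (x[k-1] - L) * (x[k] - L) < 0:        # signal crossed level L in this step
                a = (L - x[k-1]) / (x[k] - x[k-1])   # linear interpolation of the instant
                ts.append(t[k-1] + a * (t[k] - t[k-1]))
                xs.append(L)
    order = np.argsort(ts)
    return np.asarray(ts)[order], np.asarray(xs)[order]

t = np.linspace(0, 1, 5000)
x = np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*40*t) * (t > 0.5)  # activity rises after t=0.5
ts, xs = level_crossing_sample(t, x)
print(len(ts), int((ts > 0.5).sum()))    # most samples land in the active half
```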

Keywords: Level Crossing Sampling, Activity Selection, Adaptive Resolution Analysis, Computational Complexity

6396 Motion Area Estimated Motion Estimation with Triplet Search Patterns for H.264/AVC

Authors: T. Song, T. Shimamoto

Abstract:

In this paper, a fast motion estimation method for H.264/AVC, named triplet search motion estimation (TS-ME), is proposed. Like traditional fast motion estimation methods and their improved variants, which restrict the search to a few selected candidate points to decrease computational complexity, the proposed algorithm separates the motion search process into several steps, but with some new features. First, the proposed algorithm searches for the real motion area using the proposed triplet patterns, instead of a few selected search points, to avoid dropping into a local minimum. Then, a novel 3-step motion search algorithm is performed within the localized motion area. The proposed search patterns are categorized into three rings on the basis of distance from the search center, and these three rings are selected adaptively by referencing the surrounding motion vectors to terminate the motion search process early. Computation reduction for the sub-pixel motion search is also discussed, considering the appearance probability of the sub-pixel motion vector. Simulation results show that motion estimation speed improves by a factor of up to 38 with the proposed algorithm compared to the H.264/AVC reference software, with negligible picture quality loss.
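
For illustration, a minimal block-matching sketch using the classic three-step search with a SAD cost; the paper's triplet patterns, adaptive ring selection, and sub-pixel stage are not reproduced here:

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, B=16):
    """Sum of absolute differences between a block in cur and a displaced block in ref."""
    h, w = ref.shape
    if not (0 <= by + dy and by + dy + B <= h and 0 <= bx + dx and bx + dx + B <= w):
        return np.inf                                   # candidate outside the frame
    return np.abs(cur[by:by+B, bx:bx+B] - ref[by+dy:by+dy+B, bx+dx:bx+dx+B]).sum()

def three_step_search(cur, ref, bx, by, B=16):
    mv, step = (0, 0), 4
    while step >= 1:                                    # refine (dx, dy) at steps 4, 2, 1
        cands = [(mv[0] + i*step, mv[1] + j*step) for i in (-1, 0, 1) for j in (-1, 0, 1)]
        mv = min(cands, key=lambda d: sad(cur, ref, bx, by, d[0], d[1], B))
        step //= 2
    return mv                                           # best (dx, dy) for this block

yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((yy - 32.0)**2 + (xx - 32.0)**2) / 200.0)   # smooth test frame
cur = np.roll(ref, (2, -3), axis=(0, 1))                   # cur[y, x] = ref[y-2, x+3]
print(three_step_search(cur, ref, 24, 24))                 # -> (3, -2): dx=3, dy=-2
```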

Keywords: Motion estimation, VLSI, image processing, search patterns

6395 Multiple Regression based Graphical Modeling for Images

Authors: Pavan S., Sridhar G., Sridhar V.

Abstract:

Super-resolution is one of the most commonly cited inference problems in computer vision. In the case of images, this problem is generally addressed using a graphical model framework in which each node represents a portion of the image and the edges between nodes represent statistical dependencies. However, the large dimensionality of images, along with the large number of possible states per node, makes the inference problem computationally intractable. In this paper, we propose a representation in which each node is modeled as a combination of multiple regression functions. The proposed approach achieves a tradeoff between computational complexity and inference accuracy by varying the number of regression functions per node.
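
A toy sketch of the underlying idea, assuming synthetic low-res/high-res feature pairs: each node is represented by a bank of k linear regressors, and increasing k lowers the prediction error at a higher computational cost:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 4))                 # toy low-res patch features
W_true = rng.normal(size=(2, 4, 3))              # two latent regimes
z = (X[:, 0] > 0).astype(int)                    # regime depends on the data
Y = np.einsum('nd,ndo->no', X, W_true[z]) + 0.01 * rng.normal(size=(500, 3))

def fit_bank(X, Y, k):
    """Cluster inputs with a few k-means steps, then fit one least-squares regressor per cluster."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(10):
        lab = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[lab == c].mean(0) if np.any(lab == c) else centers[c]
                            for c in range(k)])
    Ws = [np.linalg.lstsq(X[lab == c], Y[lab == c], rcond=None)[0]
          if np.count_nonzero(lab == c) >= X.shape[1]
          else np.zeros((X.shape[1], Y.shape[1])) for c in range(k)]
    return centers, Ws

def predict(x, centers, Ws):
    c = int(np.argmin(((centers - x) ** 2).sum(-1)))  # route to the nearest regressor
    return x @ Ws[c]

for k in (1, 2, 4):                                   # more regressors: higher cost, lower error
    centers, Ws = fit_bank(X, Y, k)
    err = np.mean([np.mean((predict(x, centers, Ws) - y) ** 2) for x, y in zip(X, Y)])
    print(k, round(float(err), 5))
```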

Keywords: Belief propagation, Graphical model, Regression, Super resolution.

6394 Adjustment and Scale-Up Strategy of Pilot Liquid Fermentation Process of Azotobacter sp.

Authors: G. Quiroga-Cubides, A. Díaz, M. Gómez

Abstract:

The genus Azotobacter has been widely used as a bio-fertilizer due to its significant effects on the stimulation and promotion of plant growth in various agricultural species of commercial interest. In order to obtain significantly viable cellular concentrations, a scale-up strategy for a liquid fermentation process (SmF) with two strains of A. chroococcum (named Ac1 and Ac10) was validated and adjusted at laboratory and pilot scale. A batch fermentation process under previously defined conditions was carried out in a 3.5 L Infors Minifors bioreactor, which served as the baseline for this research. For the purpose of increasing process efficiency, the effect of reducing the stirring speed was evaluated in combination with a fed-batch-type fermentation at laboratory scale. To reproduce the efficiency parameters obtained, a scale-up strategy with geometric and fluid-dynamic similarity was evaluated. According to the analysis of variance, this scale-up strategy did not have a significant effect on cellular concentration in the laboratory and pilot fermentations (Tukey, p > 0.05). Regarding air consumption, the fermentation process at pilot scale showed a reduction of 23% versus the baseline, and the reduction in energy consumption under laboratory and pilot scale conditions was 96.9% compared with the baseline.

Keywords: Azotobacter chroococcum, scale-up, liquid fermentation, fed-batch process.

6393 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis

Authors: Mussa Mahmoudi

Abstract:

For seismic design, it is important to estimate the maximum lateral displacement (inelastic displacement) of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from elastic analysis under the seismic design load with a coefficient named the "displacement amplification factor", which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research aims to evaluate the value of the displacement amplification factor in seismic design codes and then to propose a value for estimating the maximum lateral structural displacement due to severe earthquakes without using nonlinear analysis. Since, in seismic codes, the displacement amplification is related to the "force reduction factor", this relation has been adopted in the current study. Two methodologies are applied to evaluate the displacement amplification factor and its relation to the force reduction factor. In the first methodology, which applies to all structures, the ratio of the displacement amplification and force reduction factors is determined directly. In the second methodology, which is applicable only to R/C moment-resisting frames, the ratio is obtained by calculating both factors separately. The results of the two methodologies agree, estimating the ratio of the two factors at 1 to 1.2, and indicate that this ratio differs from those proposed by seismic provisions such as NEHRP, IBC, and the Iranian seismic code (Standard No. 2800).

Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.

6392 The Role of Home Composting in Waste Management Cost Reduction

Authors: Nahid Hassanshahi, Ayoub Karimi-Jashni, Nasser Talebbeydokhti

Abstract:

Due to the economic and environmental benefits of producing less waste, the US Environmental Protection Agency (EPA) presents source reduction as one of the most important means of dealing with the problems caused by increased landfilling and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, that reduce the waste flow to landfills or other disposal facilities. Source reduction of waste can be studied from two perspectives: avoiding waste production, or reducing per capita waste production; and waste diversion, which indicates the reduction of waste transferred to landfills. The present paper investigates home composting as a managerial solution for reducing the waste transferred to landfills. Home composting has many benefits. Using household waste to produce compost results in a much smaller amount of waste being sent to landfills, which in turn reduces the costs of waste collection, transportation, and burial. Reducing the volume of waste for disposal and using it to produce compost and plant fertilizer helps to recycle the material in a shorter time, to use it effectively, to preserve the environment, and to reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final home-made compost product is valuable: it helps to grow crops and garden plants, and it is also used for modifying soil structure and maintaining soil moisture. Food waste transferred to landfills spoils and produces leachate after a while, and also releases methane and other greenhouse gases, whereas composting these materials at home is the best way to manage degradable materials, use them efficiently, and reduce environmental pollution. Studies have shown that the benefits from selling the produced compost and the reduced costs of collecting, transporting, and burying waste can well cover the costs of purchasing a home composting machine and of the related training. Moreover, home composting may become profitable within 4 to 5 years and, as a result, can play a major role in reducing waste management costs.

Keywords: Compost, home compost, reducing waste, waste management.

6391 Controller Design of Discrete Systems by Order Reduction Technique Employing Differential Evolution Optimization Algorithm

Authors: J. S. Yadav, N. P. Patidar, J. Singhai

Abstract:

One of the main objectives of order reduction is to design a lower-order controller that can effectively control the original high-order system, so that the overall system is of lower order and easy to understand. In this paper, a simple method is presented for controller design for a higher-order discrete system. First, the original higher-order discrete system is reduced to a lower-order model. Then a proportional-integral-derivative (PID) controller is designed for the lower-order model. An error-minimization technique is employed for both the order reduction and the controller design; for this purpose, the differential evolution (DE) optimization algorithm is used. The DE method is based on minimizing the integral squared error (ISE) between the desired response and the actual response to a unit step input. Finally, the designed PID controller is connected to the original higher-order discrete system to obtain the desired specifications. The validity of the proposed method is illustrated through a numerical example.
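
A sketch of the ISE-minimization step, assuming a toy stable second-order discrete plant in place of the paper's reduced-order model, with SciPy's differential evolution searching the PID gains:

```python
import numpy as np
from scipy.optimize import differential_evolution

def closed_loop_ise(gains, n=200):
    """Simulate a unit-step response under discrete PID control and return the ISE."""
    kp, ki, kd = gains
    y = np.zeros(n); u = np.zeros(n)
    e_int, e_prev = 0.0, 0.0
    for k in range(2, n):
        e = 1.0 - y[k-1]                        # unit step reference
        e_int += e
        u[k-1] = kp * e + ki * e_int + kd * (e - e_prev)
        e_prev = e
        # toy stable plant standing in for the paper's reduced-order model
        y[k] = 1.2 * y[k-1] - 0.35 * y[k-2] + 0.05 * u[k-1]
        if abs(y[k]) > 1e6:                     # penalize unstable candidates
            return 1e12
    return float(np.sum((1.0 - y) ** 2))        # sum of squared errors (discrete ISE)

result = differential_evolution(closed_loop_ise,
                                bounds=[(0, 10), (0, 1), (0, 5)], seed=0, tol=1e-8)
print(result.x, result.fun)                     # tuned (Kp, Ki, Kd) and its ISE
```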

Keywords: Discrete System, Model Order Reduction, PID Controller, Integral Squared Error, Differential Evolution.

6390 Preparation and Characterization of Photocatalyst for the Conversion of Carbon Dioxide to Methanol

Authors: D. M. Reddy Prasad, Nur Sabrina Binti Rahmat, Huei Ruey Ong, Chin Kui Cheng, Maksudur Rahman Khan, D. Sathiyamoorthy

Abstract:

Carbon dioxide (CO2) emission to the environment is inevitable and is responsible for global warming. Photocatalytic reduction of CO2 to fuels, such as methanol, methane, etc., is a promising way to reduce the emission of the greenhouse gas CO2. In the present work, Bi2S3/CdS was synthesized as an effective visible-light-responsive photocatalyst for the reduction of CO2 to methanol. The Bi2S3/CdS photocatalyst was prepared by hydrothermal reaction and characterized by X-ray diffraction (XRD). The photocatalytic activity of the catalyst has been investigated for methanol production as a function of time, with the product analyzed by gas chromatography with flame ionization detection (GC-FID). The yield of methanol was found to increase with higher CdS concentration in Bi2S3/CdS, and the maximum yield, obtained for 45 wt% Bi2S3/CdS under visible light irradiation, was 20 μmol/g. The results establish that Bi2S3/CdS is a favorable catalyst for reducing CO2 to methanol.

Keywords: Photocatalyst, carbon dioxide reduction, visible light, irradiation.

6389 Effectiveness of Moringa oleifera Coagulant Protein as Natural Coagulant aid in Removal of Turbidity and Bacteria from Turbid Waters

Authors: B. Bina, M. H. Mehdinejad, Gunnel Dalhammer, Guna Rajarao, M. Nikaeen, H. Movahedian Attar

Abstract:

Coagulation of water involves the use of coagulating agents to bring the suspended matter in the raw water together for settling and filtration. The present study examines the effects of aluminum sulfate as coagulant, in conjunction with Moringa oleifera coagulant protein as coagulant aid, on turbidity, hardness, and bacteria in turbid water. A conventional jar test apparatus was employed for the tests. The best removal was observed at a pH of 7 to 7.5 for all turbidities. Turbidity removal efficiencies of 80% to 99% were achieved with Moringa oleifera coagulant protein as coagulant aid, and the required dosage of coagulant and coagulant aid decreased with increasing turbidity. In addition, Moringa oleifera coagulant protein significantly reduced the required dosage of primary coagulant. Residual Al3+ in the treated water was less than 0.2 mg/L, which meets the Environmental Protection Agency guidelines. The results showed that a turbidity reduction of 85.9% to 98%, paralleled by a primary Escherichia coli reduction of 1-3 log units (99.2-99.97%), was obtained within the first 1 to 2 h of treatment. In conclusion, Moringa oleifera coagulant protein can be used as a coagulant aid for drinking water treatment without the risk of organic or nutrient release. We demonstrated that the optimal design method is an efficient approach for optimizing the coagulation-flocculation process and is appropriate for raw water treatment.

Keywords: MOCP, coagulant aid, turbidity removal, E. coli removal, water treatment

6388 On the Need to have an Additional Methodology for the Psychological Product Measurement and Evaluation

Authors: Corneliu Sofronie, Roxana Zubcov

Abstract:

Cognitive science appeared about 40 years ago, subsequent to the challenge of artificial intelligence, as common territory for several scientific disciplines: IT, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The newborn science was justified by the complexity of the problems related to human knowledge on one hand, and on the other by the fact that none of the above-mentioned sciences could explain mental phenomena alone. Based on the data supplied by the experimental sciences, such as psychology and neurology, cognitive science builds models of the operation of the human mind. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence), called cognitive systems, whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction utterly necessary for mediating between the mentioned sciences. The general problematic of the cognitive approach comprises two important types of approach: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which considers the products of thinking to be a result of the interaction between all the component systems. In the field of psychology, measurements in the computational register use classical inquiries and psychometric tests, generally based on calculus methods. Viewing things from both sides of cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole; in such an approach, measurement by calculus proves inefficient. Our research, carried out for longer than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.

Keywords: complementary methodology, connection approach, networks without scaling, quantum psychology.

6387 Reduction of Leakage Power in Digital Logic Circuits Using Stacking Technique in 45 Nanometer Regime

Authors: P.K. Sharma, B. Bhargava, S. Akashe

Abstract:

Power dissipation due to leakage current in digital circuits is a major factor that must be considered when designing nanoscale circuits. This paper explores ideas for reducing leakage current in static CMOS circuits by stacking transistors in increasing numbers. Stacking OFF transistors in larger numbers results in a significant reduction in power dissipation, because the increase in the source voltage of the NMOS transistors minimizes the leakage current. The stacking technique thus yields circuits with minimum power dissipation losses due to leakage current. Several digital circuits, such as a full adder, a D flip-flop, and a 6T SRAM cell, have been simulated with this reduction technique on the Cadence Virtuoso tool using Spectre, at 45 nm technology with a 0.7 V supply voltage.

Keywords: Stack, 6T SRAM cell, low power, threshold voltage

6386 Lean Models Classification: Towards a Holistic View

Authors: Y. Tiamaz, N. Souissi

Abstract:

The purpose of this paper is to present a classification of Lean models which aims to capture all the concepts related to this approach and thus facilitate its implementation. This classification allows the identification of the most relevant models according to several dimensions. From this perspective, we present a review and an analysis of the Lean models literature, and we propose dimensions for the classification of the current proposals, respecting, among others, the axes of the Lean approach, the maturity of the models, and their application domains. This classification allowed us to conclude that researchers essentially treat the Lean approach as a toolbox and design their models to solve problems related to a specific environment. Since the Lean approach is no longer intended only for the automotive sector, where it was invented, but for all fields (IT, hospitals, ...), we consider that this approach requires a generic model capable of being implemented in all areas.

Keywords: Lean approach, lean models, classification, dimensions, holistic view.

6385 Studies on Properties of Knowledge Dependency and Reduction Algorithm in Tolerance Rough Set Model

Authors: Chen Wu, Lijuan Wang

Abstract:

The relations between tolerance classes, indispensable attributes, and knowledge dependency in the rough set model with a tolerance relation are explored. After giving definitions and concepts of knowledge dependency and the knowledge dependency degree for incomplete information systems in the tolerance rough set model, distinguishing whether the decision attribute contains missing attribute values, it is proved that complete knowledge dependency maintains reflexivity, transitivity, augmentation, the decomposition law, and the merge law. Knowledge dependency degrees (unlike complete knowledge dependencies) satisfy only some of these laws under the transitivity, augmentation, and decomposition operations. An algorithm for attribute reduction in an incomplete decision table is designed, and its correctness is checked by an example.
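
A small sketch of the tolerance relation underlying these definitions, on a toy incomplete information table where None marks a missing value: two objects are tolerant if their known values agree, and an attribute is dispensable if dropping it leaves all tolerance classes unchanged:

```python
# objects: attribute tuples with None for missing values (toy data)
U = {
    'x1': (1, 0, None),
    'x2': (1, None, 2),
    'x3': (0, 0, 2),
    'x4': (1, 0, 2),
}

def tolerant(a, b, attrs):
    """True if a and b agree on every selected attribute where both values are known."""
    return all(a[i] is None or b[i] is None or a[i] == b[i] for i in attrs)

def tolerance_classes(U, attrs):
    """Tolerance class of each object: all objects tolerant with it."""
    return {x: {y for y in U if tolerant(U[x], U[y], attrs)} for x in U}

full = tolerance_classes(U, attrs=[0, 1, 2])
print(full)   # e.g. x1 is tolerant with x2 and x4

# An attribute is dispensable if dropping it leaves every tolerance class unchanged.
for drop in range(3):
    kept = [i for i in range(3) if i != drop]
    same = tolerance_classes(U, kept) == full
    print(f"attribute {drop}: {'dispensable' if same else 'indispensable'}")
```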

Keywords: Incomplete information system, rough set, tolerance relation, knowledge dependence, attribute reduction.

6384 Design Transformation to Reduce Cost in Irrigation Using Value Engineering

Authors: F. S. Al-Anzi, M. Sarfraz, A. Elmi, A. R. Khan

Abstract:

Researchers are responding to the environmental challenges of Kuwait in localized, innovative, effective, and economic ways. One vital and significant example of these natural challenges is the lack of water and desertification. In this research, the project team focuses on redesigning a prototype, using the value engineering methodology, that would provide functionality similar to that of the well-known Waterboxx kits while reducing capital and operational costs and simplifying manufacturing and use by regular farmers. The design employs used tires and recycled plastic sheets as raw materials. Hence, this approach will help not only in fighting desertification but also in getting rid of the ever-growing tire dumps in Kuwait and in avoiding the hazards of tire fires, yielding a safer and friendlier environment. Several alternatives for implementing the prototype have been considered, and the best alternative in terms of value has been selected after a thorough function analysis system technique (FAST) exercise. A prototype has been fabricated and tested in a controlled, simulated lab environment, followed by field testing in the real environment. Water and soil analyses were conducted on the experiment site to compare the composition of the soil before and after the experiment, to ensure that the prototype being tested is environmentally safe. Experimentation shows that the new design is at least as effective as the original, with significant cost savings: an estimated total cost reduction of 43.84% over the original design using the VE approach. This figure does not include the intangible benefits of waste recycling, which may further increase the total savings of the alternative VE design. This case study shows that the value engineering methodology can be an important tool for innovating new designs that reduce costs.

Keywords: Desertification, functional analysis, scrap tires, value engineering, waste recycling, water irrigation rationing.

6383 Empirical Exploration of Correlations between Software Design Measures: A Replication Study

Authors: Jehad Al Dallal

Abstract:

Software engineers apply different measures to quantify the quality of software design. These measures consider artifacts developed in the low-level or high-level software design phases, and their results are used to point out design weaknesses and to indicate design points that have to be restructured. Understanding the relationships among the quality measures, and among the design quality aspects they consider, is important for interpreting the impact of a measure for one quality aspect on other, potentially related aspects. In addition, exploring the relationships between quality measures helps to explain the impact of different quality measures on external quality aspects, such as reliability and maintainability. In this paper, we report a replication study that empirically explores the correlations between six well-known and commonly applied design quality measures. These measures consider several quality aspects, including complexity, cohesion, coupling, and inheritance. The results indicate that the inheritance measures are weakly correlated with the other measures, whereas the complexity, coupling, and cohesion measures are mostly strongly correlated.
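
A minimal sketch of the correlation analysis itself, with toy per-class measure values (not the study's data): Spearman's rank correlation between pairs of measures:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 40                                           # hypothetical number of classes
wmc = rng.integers(1, 50, n)                     # a complexity measure (WMC-like)
cbo = wmc + rng.integers(0, 10, n)               # a coupling measure, tied to complexity
dit = rng.integers(1, 6, n)                      # inheritance depth, generated independently

for name, (a, b) in [("complexity~coupling", (wmc, cbo)),
                     ("complexity~inheritance", (wmc, dit))]:
    rho, p = spearmanr(a, b)                     # rank correlation and its p-value
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")   # strong vs. weak correlation
```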

Keywords: Quality attribute, quality measure, software design quality, spearman correlation.

6382 Particle Swarm Optimization with Reduction for Global Optimization Problems

Authors: Michiharu Maeda, Shinya Tsuda

Abstract:

This paper presents a particle swarm optimization algorithm with particle reduction for global optimization problems. Particle swarm optimization is an algorithm inspired by collective motion, such as that of flocks of birds or schools of fish; it is a multi-point search algorithm that finds the best solution using multiple particles, and it is flexible enough to adapt to a number of optimization problems. When an objective function has many local minima in a complicated landscape, a particle may fall into a local minimum. To avoid local minima, a large number of particles are initially prepared and their positions are updated by particle swarm optimization. The particles are then sequentially reduced, on the basis of their evaluation values, until a predetermined number of them remain, and particle swarm optimization continues until the termination condition is met. To show the effectiveness of the proposed algorithm, we examine the minima of test functions and compare with existing algorithms. Furthermore, the influence of the initial number of particles on the best value found by our algorithm is discussed.
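
A minimal sketch of the proposed scheme, with illustrative parameters and the Rastrigin test function: a standard PSO update, plus a periodic reduction step that drops the worst particles until a predetermined swarm size remains:

```python
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(3)
dim, n0, n_min = 5, 60, 10                  # start with 60 particles, reduce toward 10
x = rng.uniform(-5.12, 5.12, (n0, dim)); v = np.zeros_like(x)
pbest, pbest_f = x.copy(), rastrigin(x)

for it in range(300):
    g = pbest[np.argmin(pbest_f)]                          # global best position
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (g - x)
    x = x + v
    f = rastrigin(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    if it % 30 == 29 and len(x) > n_min:                   # reduction step:
        keep = np.argsort(pbest_f)[: max(n_min, len(x) - 10)]  # drop the worst particles
        x, v, pbest, pbest_f = x[keep], v[keep], pbest[keep], pbest_f[keep]

print(len(x), float(pbest_f.min()))                        # final swarm size, best value found
```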

Keywords: Particle swarm optimization, Global optimization, Metaheuristics, Reduction.

6381 A New Approach for Flexible Document Categorization

Authors: Jebari Chaker, Ounelli Habib

Abstract:

In this paper we propose a new approach for flexible document categorization according to document type or genre instead of topic. Our approach implements two homogeneous classifiers: a contextual classifier and a logical classifier. The contextual classifier is based on the document URL, whereas the logical classifier uses the logical structure of the document to perform the categorization. The final categorization is obtained by combining the contextual and logical categorizations. In our approach, each document is assigned to all predefined categories with different membership degrees. Our experiments demonstrate that our approach performs better than other genre categorization approaches.
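
A toy sketch of the combination step, assuming hypothetical membership scores and a simple weighted-average rule (the abstract does not specify the exact combination operator):

```python
from typing import Dict

GENRES = ["homepage", "article", "forum"]

def contextual(url: str) -> Dict[str, float]:
    """Toy URL-based membership degrees."""
    scores = {g: 0.1 for g in GENRES}
    if "forum" in url or "thread" in url:
        scores["forum"] = 0.8
    elif url.rstrip("/").count("/") <= 2:
        scores["homepage"] = 0.7
    else:
        scores["article"] = 0.6
    return scores

def logical(structure: Dict[str, int]) -> Dict[str, float]:
    """Toy structure-based membership degrees from element counts."""
    total = sum(structure.values()) or 1
    return {
        "article": structure.get("p", 0) / total,
        "forum": structure.get("blockquote", 0) / total,
        "homepage": structure.get("a", 0) / total,
    }

def combine(c, l, alpha=0.5):
    """Weighted combination of the two categorizations."""
    return {g: alpha * c[g] + (1 - alpha) * l[g] for g in GENRES}

doc = combine(contextual("http://example.com/thread/42"),
              logical({"p": 5, "blockquote": 10, "a": 5}))
print(sorted(doc.items(), key=lambda kv: -kv[1]))  # all genres, ranked by membership degree
```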

Keywords: Categorization, combination, flexible, logical structure, genre, category, URL.

6380 Interactive Concept-based Search using MOEA: The Hierarchical Preferences Case

Authors: Gideon Avigad, Amiram Moshaiov, Neima Brauner

Abstract:

An IEC technique is described for a multi-objective search for conceptual solutions. The survivability of solutions is influenced by both model-based fitness and subjective human preferences. The concepts' preferences are articulated via a hierarchy of sub-concepts. The suggested method produces an objective-subjective front. An academic example is employed to demonstrate the proposed approach.

Keywords: Conceptual solution, engineering design, hierarchical planning, multi-objective search, problem reduction.

6379 Comparison of Conventional and "ECO" Transportation Pavements in Cyprus using Life Cycle Approach

Authors: Constantia Achilleos, Diofantos G. Hadjimitsis

Abstract:

The road industry has taken up the challenge of eco-construction, since pavements may fit within the framework of sustainable development. Hence, research implements assessments of the environmental impacts of conventional pavements using a life cycle approach. To meet global, and often national, targets on pollution control, newly introduced pavement designs are under study. This is the case of the Cyprus demonstration, which took place within the EcoLanes project. The alternative pavement differs in having a concrete layer reinforced with a tire-recycling product: processing post-consumer tires produces steel fibers that improve strength against cracking. Maintenance works are thus relatively limited in comparison to flexible pavement, which makes the pavement more eco-friendly according to the outputs of the current study. More specifically, the life cycle processes of the proposed concrete pavement emit 15% fewer air pollutants and consume 28% less embodied energy than those of the asphalt pavement. There is also a reduction in costs of 0.06%.

Keywords: Environmental impact assessment, life cycle, tire recycling, transportation pavement.

6378 The Effects of the Parent Training Program for Obesity Reduction on Health Behaviors of School-Age Children

Authors: Muntanavadee Maytapattana

Abstract:

The purpose of the study was to evaluate the effectiveness of the Parent Training Program for Obesity Reduction (PTPOR) on the health behaviors of school-age children. The study was informed by Ecological Systems Theory (EST), and a randomized controlled trial was used. Participants were overweight or obese school-age children and their parents. One hundred and one parent-child dyads were recruited and randomly assigned to the PTPOR (N=30), Educational Intervention (EI, N=32), and control (N=39) groups. The parents in the PTPOR group participated in five sessions, including an educational session, a cooking session, aerobic exercise training, two group discussion sessions, and four telephone counseling sessions. Repeated-measures ANCOVA was used to analyze the data. The results showed that the outcomes of the PTPOR group were better than those of the EI and control groups at the 1st, 8th, and 32nd weeks after the program, for child exercise behavior (F(2,97) = 3.98, p = .02) and child dietary behavior (F(2,97) = 9.42, p = .00). The results suggest that nurses and health care providers should utilize the PTPOR for child weight reduction and for the promotion of a healthy lifestyle among overweight and obese children.

Keywords: Parent training program for obesity reduction, child health behaviors, school-age children.

6377 Deficits and Solutions in the Development of Modular Factory Systems

Authors: Achim Kampker, Peter Burggräf, Moritz Krunke, Hanno Voet

Abstract:

As a reaction to current challenges in factory planning, many companies are considering the introduction of factory standards to lower planning times and decrease planning costs. If these factory standards are set up with a high level of modularity, they are defined as modular factory systems. This paper deals with the main current problems in the application of modular factory systems in practice and presents a solution approach with its basic models. The methodology is based on methods from factory planning, but it also uses the tools of other disciplines, such as product development and technology management, to deal with the high complexity that the development of modular factory systems implies. The four basic models that such a methodology has to contain are introduced and explained.

Keywords: Factory planning, modular factory systems, factory standards, cost-benefit analysis.

6376 A Cross-Layer Approach for Cooperative MIMO Multi-hop Wireless Sensor Networks

Authors: Jain-Shing Liu

Abstract:

In this work, we study the problem of determining the minimum scheduling length that can satisfy end-to-end (ETE) traffic demands in scheduling-based multihop WSNs with a cooperative multiple-input multiple-output (MIMO) transmission scheme. Specifically, we present a cross-layer formulation of the joint routing, scheduling, and stream control problem that incorporates various power and rate adaptation schemes and takes into account an antenna beam pattern model and the signal-to-interference-and-noise ratio (SINR) constraint at the receiver. In this context, we also propose column generation (CG) solutions to avoid the complexity of enumerating all possible sets of scheduling links.

Keywords: Wireless Sensor Networks, Cross-Layer Design, Cooperative MIMO System, Column Generation.

6375 A Survey on Performance Tools for OpenMP

Authors: Mubarak S. Mohsen, Rosni Abdullah, Yong M. Teo

Abstract:

Advances in processor architecture, such as multicore, increase the size and complexity of parallel computer systems. With multi-core architectures, different parallel languages can be used to write parallel programs; one of these is OpenMP, which is embedded in C/C++ or Fortran. Because of this new architecture and its complexity, it is very important to evaluate the performance of OpenMP constructs, kernels, and application programs on multi-core systems. Performance analysis is the activity of collecting information about the execution characteristics of a program. Performance tools consist of at least three interfacing software layers: instrumentation, measurement, and analysis. The instrumentation layer defines the measured performance events; the measurement layer determines which performance events are actually captured and how they are measured by the tool; and the analysis layer processes the performance data and summarizes it into a form that can be displayed by performance tools. In this paper, a number of OpenMP performance tools are surveyed, with an explanation of how each is used to collect, analyze, and display data.

Keywords: Parallel performance tools, OpenMP, multi-core.

6374 Assessing Complexity of Neuronal Multiunit Activity by Information Theoretic Measure

Authors: Young-Seok Choi

Abstract:

This paper provides a quantitative measure of time-varying multiunit neuronal spiking activity using an entropy-based approach. To verify the status embedded in the activity of a population of neurons, the discrete wavelet transform (DWT) is used to isolate the inherent spiking activity of the multiunit activity (MUA). Due to the de-correlating property of the DWT, the spiking activity is preserved while the non-spiking component is reduced. By evaluating the entropy of the wavelet coefficients of the de-noised MUA, a multiresolution Shannon entropy (MRSE) of the MUA signal is developed. The proposed entropy was tested in the analysis of both simulated noisy MUA and actual MUA recorded from the cortex in a rodent model. Simulation and experimental results demonstrate that the dynamics of a population can be quantified using the proposed entropy.
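
A sketch of one way to compute such a multiresolution Shannon entropy with PyWavelets, assuming the probability distribution is formed from the relative energies of the wavelet sub-bands (the paper's exact MRSE construction may differ):

```python
import numpy as np
import pywt

def multiresolution_entropy(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c**2) for c in coeffs])
    p = energies / energies.sum()                    # relative energy per scale
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))            # Shannon entropy in bits

rng = np.random.default_rng(4)
t = np.arange(4096) / 1000.0
quiet = 0.1 * rng.normal(size=t.size)                # background noise only
spikes = quiet.copy()
spikes[::200] += 3.0                                 # add sparse spike-like events
print(multiresolution_entropy(quiet), multiresolution_entropy(spikes))
```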

Keywords: Discrete wavelet transform, Entropy, Multiresolution, Multiunit activity.
