Search results for: function points

662 Optimized Fuzzy Control by Particle Swarm Optimization Technique for Control of CSTR

Authors: Saeed Vaneshani, Hooshang Jazayeri-Rad

Abstract:

Fuzzy logic control (FLC) systems have been tested in many technical and industrial applications as a useful modeling tool that can handle the uncertainties and nonlinearities of modern control systems. The main drawback of FLC methodologies in the industrial environment is the challenge of selecting the optimum number of tuning parameters. In this paper, a method is proposed for finding the optimum membership functions of a fuzzy system using the particle swarm optimization (PSO) algorithm. A combined algorithm, built from fuzzy logic control and the PSO algorithm, is used to design a controller for a continuous stirred tank reactor (CSTR) with the aim of achieving accurate and acceptable results. To demonstrate the effectiveness of the proposed algorithm, it is used to optimize the Gaussian membership functions of the fuzzy model of a nonlinear CSTR system as a case study. The optimized membership functions (MFs) clearly provided better performance than a fuzzy model of the same system whose MFs were defined heuristically.
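
As a concrete illustration of the tuning loop described above, the sketch below runs a plain global-best PSO over the centers and widths of three Gaussian membership functions. The cost function is a hypothetical coverage criterion standing in for the paper's closed-loop CSTR performance index, and all parameter choices (swarm size, inertia, acceleration constants) are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def gaussian_mf(x, c, s):
    """Gaussian membership value of x for a set with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def cost(params):
    # Hypothetical cost: how evenly three MFs cover [0, 1] at sample points.
    # In the paper this would be a closed-loop CSTR performance index.
    c = params[:3]
    s = np.abs(params[3:]) + 1e-6
    x = np.linspace(0.0, 1.0, 50)
    coverage = sum(gaussian_mf(x, ci, si) for ci, si in zip(c, s))
    return np.mean((coverage - 1.0) ** 2)

def pso(cost_fn, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost_fn(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([cost_fn(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_params, best_cost = pso(cost, dim=6)
print("tuned MF parameters:", best_params, "cost:", best_cost)
```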

Keywords: continuous stirred tank reactor (CSTR), fuzzy logic control (FLC), membership function (MF), particle swarm optimization (PSO)

661 Media Regulation and Public Sphere in the Digital Age: An Analysis in the Light of Constructive Democracy

Authors: J. Bolzan, C. Marden

Abstract:

This article analyzes the possibility (and conditions) of a media regulation law in a democratic rule of law in the twenty-first century. To do so, the idea of the public sphere (Jürgen Habermas) is presented first, showing how it serves as an interface between the citizen and the state (or the private and the public) and how important it is in a deliberative democracy. Based on this paradigm, the traditional perception of the role of public information (as a functional element of the system) and of the possibility of media regulation, owing to the public nature of the activity, is then set out. A critical argument is then developed from two different perspectives: a) the formal function of current media information, considering that the digital age has fragmented access to information; b) the concept of a constructive democracy, which reduces the need for representation and changes the strategic importance of the public sphere. The question addressed (on the basis of comparative law) is whether regulation is justified in a polycentric democracy, especially one operating in the digital age (with immediate, virtual communication). The argument presented is that even in the twenty-first century the media in a democratic rule of law still have an extremely important role and may be subject to regulation, but on terms very different from (and narrower than) those usually defended.

Keywords: Media regulation, public sphere, digital age, constructive democracy.

660 Numerical Applications of Tikhonov Regularization for the Fourier Multiplier Operators

Authors: Fethi Soltani, Adel Almarashi, Idir Mechai

Abstract:

Tikhonov regularization and reproducing kernels are among the most popular approaches to solving ill-posed problems in computational mathematics and applications. Fourier multiplier operators are an essential tool for extending well-known linear transforms of Euclidean Fourier analysis, such as the Weierstrass transform, Poisson integral, Hilbert transform, Riesz transforms, Bochner-Riesz mean operators, partial Fourier integral, Riesz potential and Bessel potential. Using the theory of reproducing kernels, we construct simple and efficient representations for a class of Fourier multiplier operators Tm on the Paley-Wiener space Hh. In addition, we give an error estimate formula for the approximation and obtain convergence results as the parameters and the independent variables approach zero. Furthermore, using numerical quadrature rules to compute single and multiple integrals, we give numerical examples and write the extremal function and the corresponding Fourier multiplier operators explicitly.
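
For readers unfamiliar with the regularization step, here is a minimal finite-dimensional sketch of Tikhonov regularization applied to an ill-conditioned linear system. It only illustrates the principle x_lambda = (A^T A + lambda I)^{-1} A^T b; it does not reproduce the paper's operator-theoretic construction on the Paley-Wiener space, and the matrix and data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = np.vander(np.linspace(0, 1, n), n, increasing=True)   # badly conditioned
x_true = np.sin(2 * np.pi * np.linspace(0, 1, n))
b = A @ x_true + 1e-6 * rng.standard_normal(n)             # noisy data

def tikhonov(A, b, lam):
    """Regularized least-squares solution for parameter lam > 0."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)

for lam in (1e-2, 1e-6, 1e-10):
    x_lam = tikhonov(A, b, lam)
    rel_err = np.linalg.norm(x_lam - x_true) / np.linalg.norm(x_true)
    print(f"lambda={lam:.0e}  relative error={rel_err:.3f}")
```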

Keywords: Fourier multiplier operators, Gauss-Kronrod method of integration, Paley-Wiener space, Tikhonov regularization.

659 Experimental and Analytical Study of Scrap Tire Rubber Pad for Seismic Isolation

Authors: Huma Kanta Mishra, Akira Igarashi

Abstract:

A seismic isolation pad produced from scrap tire rubber, which contains interleaved steel reinforcing cords, is proposed. The steel cords are expected to function similarly to the steel plates used in conventional laminated rubber bearings. The scrap tire rubber pad (STRP) isolator is intended to be used in low-rise residential buildings in highly seismic areas of developing countries. Experimental investigation was conducted on unbonded STRP isolators, and the test results provided useful information including stiffness, damping values and the eventual instability of the isolation unit. Finite element (FE) analysis of the STRP isolator was carried out on properly bonded samples. These isolators provide positive incremental force-resisting capacity up to a shear strain level of 155%. This paper briefly discusses the force-deformation behavior of bonded STRP isolators, including the stability of the isolation unit.

Keywords: base isolation, buckling load, finite element analysis, STRP isolators.

658 Software Engineering Inspired Cost Estimation for Process Modelling

Authors: Felix Baumann, Aleksandar Milutinovic, Dieter Roller

Abstract:

Up to this point, business process management projects in general, and business process modelling projects in particular, have not been able to rely on a practical and scientifically validated method for estimating cost and effort. The model development phase in particular is not covered by any cost estimation method or model. Later phases of business process modelling, starting with implementation, are covered by initial solutions discussed in the literature. This article proposes to fill this gap by deriving a cost estimation method from available methods in a closely related domain, namely software development (software engineering). As we show, software development is closely similar to process modelling. After the method is presented, ideas for its further analysis and validation are proposed. The method is derived from COCOMO II and Function Point analysis, which are established effort estimation methods in the software development domain. To this end, we lay out the similarities between the software development process and the process of process modelling, which is a phase of the Business Process Management life-cycle.
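
For orientation, the sketch below evaluates the standard COCOMO II post-architecture effort equation, PM = A · Size^E · ΠEM_i with E = B + 0.01 · ΣSF_j and the published calibration constants A = 2.94, B = 0.91. How the paper maps process-model elements onto a size measure is its own contribution and is not reproduced here; the driver values used are illustrative only.

```python
# COCOMO II post-architecture effort equation (published calibration).
A, B = 2.94, 0.91

def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers):
    """Effort in person-months for a size given in thousands of source lines."""
    E = B + 0.01 * sum(scale_factors)
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return A * ksloc ** E * em_product

# Example with nominal drivers: five scale factors at their nominal ratings
# and all effort multipliers set to 1.0 (values here are illustrative only).
effort_pm = cocomo_ii_effort(
    10,
    scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
    effort_multipliers=[1.0] * 17,
)
print(f"estimated effort: {effort_pm:.1f} person-months")
```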

Keywords: Cost Estimation, Effort Estimation, Process Modelling, Business Process Management, COCOMO.

657 An Examination and Validation of the Theoretical Resistivity-Temperature Relationship for Conductors

Authors: Fred Lacy

Abstract:

Electrical resistivity is a fundamental parameter of metals or electrical conductors. Since resistivity is a function of temperature, a temperature-dependent theoretical model is needed in order to completely understand the behavior of metals. A model based on physics principles has recently been developed to obtain an equation that relates electrical resistivity to temperature. This equation depends upon a parameter associated with the electron travel time before scattering, and a parameter that relates the energy of the atoms to their separation distance. Analysis of the energy parameter reveals that the equation is optimized if the proportionality term in the equation is not constant but varies over the temperature range. Additional analysis reveals that the theoretical equation can be used to determine the mean free path of conduction electrons, the number of defects in the atomic lattice, and the ‘equivalent’ charge associated with the metallic bonding of the atoms. All of this analysis provides validation for the theoretical model and provides insight into the behavior of metals whose performance is affected by temperature (e.g., integrated circuits and temperature sensors).
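
The Callendar-Van Dusen relation mentioned in the keywords is the usual empirical benchmark for such resistivity-temperature models. The sketch below evaluates it for a standard Pt100 element with the IEC 60751 coefficients (valid for T >= 0 °C); it is a reference curve a theoretical model can be checked against, not the paper's physics-based equation.

```python
# Callendar-Van Dusen equation for a platinum RTD, T >= 0 degC:
# R(T) = R0 * (1 + A*T + B*T^2), with IEC 60751 coefficients for Pt100.
R0 = 100.0          # ohm at 0 degC (Pt100)
A = 3.9083e-3       # 1/degC
B = -5.775e-7       # 1/degC^2

def pt100_resistance(T_celsius):
    """Resistance of a Pt100 sensor for temperatures at or above 0 degC."""
    return R0 * (1.0 + A * T_celsius + B * T_celsius ** 2)

for T in (0, 25, 100, 200):
    print(f"{T:4d} degC -> {pt100_resistance(T):7.2f} ohm")
```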

Keywords: Callendar–van Dusen, conductivity, mean free path, resistance temperature detector, temperature sensor.

656 Performance Improvement in the Bivariate Models by using Modified Marginal Variance of Noisy Observations for Image-Denoising Applications

Authors: R. Senthilkumar

Abstract:

Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, the wavelet coefficients of natural images have significant dependencies. This paper gives a recipe for selecting among popular image-denoising algorithms based on VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and compares different bivariate models used in image-denoising applications. The first part of the paper compares different shrinkage functions used for image denoising. The second part compares different bivariate models, and the third part uses the bivariate model with a modified marginal variance, which is based on a Laplacian assumption. The paper gives an experimental comparison on six commonly used 512x512 images: Lenna, Barbara, Goldhill, Clown, Boat and Stonehenge. Noise at powers of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB is added to the six standard images, and the corresponding peak signal-to-noise ratio (PSNR) values are calculated for each noise level.
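
As a point of reference for the shrinkage functions being compared, the sketch below implements VisuShrink-style soft thresholding with the universal threshold sigma·sqrt(2·ln N) and the usual robust noise estimate from the finest diagonal subband, using the PyWavelets package. The bivariate model with modified marginal variance, which is the paper's contribution, is not implemented here, and the test image is synthetic.

```python
import numpy as np
import pywt  # PyWavelets

def visushrink_denoise(image, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients with the universal threshold."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

# Toy usage with a synthetic noisy image (values are illustrative only).
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + 20.0 * rng.standard_normal(clean.shape)
denoised = visushrink_denoise(noisy)[:128, :128]
mse = np.mean((denoised - clean) ** 2)
print("PSNR after denoising:", 10 * np.log10(255.0 ** 2 / mse))
```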

Keywords: BiShrink, Image-Denoising, PSNR, Shrinkage function

655 A Family of Entropies on Interval-valued Intuitionistic Fuzzy Sets and Their Applications in Multiple Attribute Decision Making

Authors: Min Sun, Jing Liu

Abstract:

The entropy of interval-valued intuitionistic fuzzy sets is used to indicate the degree of fuzziness of an interval-valued intuitionistic fuzzy set (IvIFS). In this paper, we deal with the entropies of IvIFS. Firstly, we propose a family of entropies on IvIFS with a parameter λ ∈ [0, 1], which generalizes two entropy measures for IvIFS defined independently by Zhang and Wei, and we then prove that the new entropy is an increasing function with respect to the parameter λ. Furthermore, a new multiple attribute decision making (MADM) method using entropy-based attribute weights is proposed to deal with decision making situations in which the ratings of the alternatives on the attributes are expressed by IvIFS and the attribute weight information is unknown. Finally, a numerical example is given to illustrate the application of the proposed method.
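
A hedged sketch of the entropy-weighting step used in such MADM methods is given below: once an entropy value E_j has been computed for each attribute (the paper's parametric IvIFS entropy is not reproduced here), attributes with lower entropy, i.e. less fuzziness and more discriminating power, receive larger weights. The rule w_j = (1 - E_j) / Σ_k (1 - E_k) is the common convention and is assumed rather than quoted from the paper; the scores are illustrative.

```python
import numpy as np

def entropy_weights(entropies):
    """Attribute weights from entropy values: lower entropy -> larger weight."""
    e = np.asarray(entropies, dtype=float)
    credit = 1.0 - e
    return credit / credit.sum()

def weighted_scores(decision_matrix, entropies):
    """Aggregate crisp attribute scores with entropy-based weights."""
    w = entropy_weights(entropies)
    return np.asarray(decision_matrix) @ w

# Toy example: 3 alternatives x 4 attributes with illustrative crisp scores.
scores = [[0.6, 0.7, 0.4, 0.8],
          [0.5, 0.9, 0.6, 0.4],
          [0.8, 0.5, 0.7, 0.6]]
print(weighted_scores(scores, entropies=[0.3, 0.6, 0.4, 0.5]))
```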

Keywords: Interval-valued intuitionistic fuzzy sets, interval-valued intuitionistic fuzzy entropy, multiple attribute decision making

654 Degradation of EE2 by Different Consortium of Enriched Nitrifying Activated Sludge

Authors: Pantip Kayee

Abstract:

17α-ethinylestradiol (EE2) is a recalcitrant micropollutant found in small amounts in municipal wastewater, but even these small amounts adversely affect the reproductive function of aquatic organisms. Past evidence suggested that full-scale WWTPs equipped with a nitrification process enhanced the removal of EE2 from municipal wastewater. EE2 has been shown to be transformed by ammonia oxidizing bacteria (AOB) via co-metabolism. This research aims to clarify the EE2 degradation pattern of different consortia of ammonia oxidizing microorganisms (AOM), including ammonia oxidizing archaea (AOA), and to investigate the contributions of the existing ammonia monooxygenase (AMO) and the newly synthesized AOM. The results showed that AOA or AOB of the N. oligotropha cluster in nitrifying activated sludge (NAS) enriched at 2 mM and 5 mM, which are commonly found in municipal WWTPs, could degrade EE2 in wastewater via co-metabolism. Moreover, the investigation of these contributions demonstrated that the newly synthesized AMO enzyme may perform the ammonia oxidation rather than the existing AMO enzyme, or that the existing AMO enzyme may contribute only a small amount of the ammonia oxidation.

Keywords: 17α-ethinylestradiol, nitrification, ammonia oxidizing bacteria, ammonia oxidizing archaea.

653 Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Authors: Chun-Cheng Lin

Abstract:

Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true start and end points of the QRS complex can be masked by residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600 beats) or a predefined noise level (typically between 0.3 and 1.0 μV) in each X, Y and Z lead to perform SAECG analysis. However, the different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force consensus document for the use of SAECG, the determination of onset and offset is closely related to the mean and standard deviation of the noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the noise level effects on SAECG. It also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were adjusted to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low-amplitude signals below 40 μV (LAS40). The results demonstrate that reducing the RMS noise level increases fQRSD and LAS40, decreases RMS40, and further increases the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels in order to reduce noise level effects.

Keywords: Signal-averaged electrocardiogram, Ventricular late potentials, Chronic renal failure, Noise level effects.

652 Effect of Eccentricity on Conjugate Natural Convection in Vertical Eccentric Annuli

Authors: A. Jamal, M. A. I. El-Shaarawi, E. M. A. Mokheimer

Abstract:

Combined conduction-free convection heat transfer in vertical eccentric annuli is numerically investigated using a finite-difference technique. Numerical results, representing heat transfer parameters such as the annulus wall temperatures, heat flux, and heat absorbed in the developing region of the annulus, are presented for a Newtonian fluid of Prandtl number 0.7, a fluid-annulus radius ratio of 0.5, a solid-fluid thermal conductivity ratio of 10, inner and outer wall dimensionless thicknesses of 0.1 and 0.2, respectively, and dimensionless eccentricities of 0.1, 0.3, 0.5, and 0.7. The annulus walls are subjected to thermal boundary conditions obtained by heating one wall isothermally while keeping the other wall at the inlet fluid temperature. In the present paper, the annulus heights required to achieve full thermal development for the prescribed eccentricities are obtained. Furthermore, the variation in the height of full thermal development as a function of the geometric parameter, i.e., eccentricity, is also investigated.

Keywords: Conjugate natural convection, eccentricity, heat transfer, vertical eccentric annuli.

651 Level of Service Based Methodology for Municipal Infrastructure Management

Authors: Z. Khan, O. Moselhi, T. Zayed

Abstract:

Development of levels of service in a municipal context is a flexible vehicle to assist in performing quality-cost trade-off analysis for municipal services. This trade-off depends on the willingness of a community to pay as well as on the condition of the assets. The community's perspective on the performance of an asset, from a service point of view, may be quite different from the municipality's perspective on the performance of the same asset, from a condition point of view. This paper presents a three-phased, level-of-service-based methodology for water mains that consists of: 1) development of an Analytical Hierarchy model of level of service; 2) development of a fuzzy weighted sum model of the water main condition index; and 3) derivation of a fuzzy-logic-based function that maps level of service to asset condition index. This mapping will assist asset managers in quantifying the condition improvement required to meet service goals and in making more informed decisions on interventions and related priorities.
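
A minimal sketch of the weighted-sum aggregation idea behind such a condition index is shown below: factor scores are combined with weights that would, in the paper, come from an AHP pairwise-comparison matrix and fuzzy membership functions. The factor names, pairwise judgments and crisp aggregation used here are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise-comparison matrix."""
    M = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(M)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

def condition_index(factor_scores, weights):
    """Weighted sum of normalized factor scores (0 = worst, 1 = best)."""
    return float(np.dot(factor_scores, weights))

# Illustrative pairwise judgments: age vs. break history vs. material.
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
w = ahp_weights(pairwise)
print("weights:", np.round(w, 3))
print("condition index:", round(condition_index([0.4, 0.7, 0.9], w), 3))
```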

Keywords: Asset Management, Level of Service, Condition Index, Analytical Hierarchy, Fuzzy Logic.

650 Effect of Transglutaminase Cross Linking on the Functional Properties as a Function of NaCl Concentration of Legumes Protein Isolate

Authors: Nahid A. Ali, Salma H. Ahmed, ElShazali A. Mohamed, Isam A. Mohamed Ahmed, Elfadil E.Babiker

Abstract:

The effect of cross-linking the protein isolates of three legumes with the microbial enzyme transglutaminase (EC 2.3.2.13) on their functional properties at different NaCl concentrations was studied. The reduction in total free amino groups (OD340) of the polymerized protein showed that TGase treatment cross-linked the protein subunits of each legume. The solubility of the protein polymer of each legume was greatly improved at high NaCl concentrations. At 1.2 M NaCl, the solubility of the native legume proteins decreased significantly but improved slightly after polymerization. Cross-linked proteins were less turbid on heating to higher temperatures than native proteins, and the temperature at which the protein turned turbid also increased for the polymerized proteins. The emulsifying and foaming properties of the protein polymer were greatly improved at all NaCl concentrations for all legumes.

Keywords: Functional properties, Legumes, Protein isolate, NaCl, Transglutaminase.

649 Quantitative Analysis of Nutrient Inflow from River and Groundwater to Imazu Bay in Fukuoka, Japan

Authors: Keisuke Konishi, Yoshinari Hiroshiro, Kento Terashima, Atsushi Tsutsumi

Abstract:

Imazu Bay plays an important role for endangered species such as horseshoe crabs and black-faced spoonbills that use the bay for spawning or overwintering. However, the bay is semi-enclosed with slow water exchange, which could lead to eutrophication under conditions of excess nutrient inflow. Therefore, quantification of the nutrient inflow is of great importance. Analyses of nutrient inflow to bays generally consider only the inflow from rivers, but the inflow from groundwater should not be ignored if more accurate results are to be obtained. The main objective of this study is to estimate the nutrient inflow from the river and from groundwater to Imazu Bay by analyzing the water budget of the Zuibaiji River Basin and the loads of T-N, T-P, NO3-N and NH4-N. The water budget computation in the basin is performed using a groundwater recharge model and a quasi three-dimensional two-phase groundwater flow model, and multiplying the measured nutrient concentrations by the computed discharge gives the total nutrient inflow to the bay. In addition, to evaluate the nutrient inflow to the bay, the result is compared with the nutrient inflow from geologically similar river basins. The results show that the discharge is 3.50×10^7 m3/year from the river and 1.04×10^7 m3/year from groundwater. The submarine groundwater discharge accounts for approximately 23% of the total discharge, which is large compared with other river basins. It is also revealed that the total nutrient inflow is not particularly large. The sum of the NO3-N and NH4-N loadings from groundwater is less than 10% of that from the river because of denitrification in the groundwater. The Shin Seibu Sewage Treatment Plant located below the observation points discharges 15,400 m3/day of treated water and plans to increase this amount; however, the T-N and T-P concentrations in the treated water are 3.9 mg/L and 0.19 mg/L, so the plant does not contribute greatly to eutrophication.
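
The load calculation described above is simple enough to show directly: discharge times concentration, with unit conversion. In the sketch below the discharge figures are the ones quoted in the abstract, while the concentration is an illustrative placeholder rather than a measured Zuibaiji River value.

```python
def annual_load_tonnes(discharge_m3_per_year, concentration_mg_per_l):
    """Nutrient load in tonnes/year from discharge (m3/yr) and concentration (mg/L)."""
    grams = discharge_m3_per_year * 1000.0 * concentration_mg_per_l / 1000.0
    return grams / 1.0e6  # grams -> tonnes

river_q = 3.50e7        # m3/year, from the abstract
groundwater_q = 1.04e7  # m3/year, from the abstract

print("groundwater share of total discharge:",
      round(groundwater_q / (river_q + groundwater_q), 3))
print("river T-N load (tonnes/yr) at an assumed 1.5 mg/L:",
      round(annual_load_tonnes(river_q, 1.5), 1))
```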

Keywords: Eutrophication, groundwater recharge model, nutrient inflow, quasi three-dimensional two-phase groundwater flow model, Submarine groundwater discharge.

648 Influence of Machining Process on Surface Integrity of Plasma Coating

Authors: T. Zlámal, J. Petrů, M. Pagáč, P. Krajkovič

Abstract:

For components with a thermal spray coating to fulfil their required function, additional machining of the coated surface is necessary. The paper deals with assessing the surface integrity of Metco 2042, a plasma-sprayed coating, after machining. The selected plasma-sprayed coating serves as an abradable sealing coating in a jet engine; therefore, the spray and its surface must meet high quality and functional requirements. Plasma-sprayed coatings are characterized by a lamellar structure, which requires a special approach to their machining. The experimental part therefore involves the set-up of special cutting tools and the cutting parameters under which the applied coating was machined. To assess the suitability of the machining parameters, selected parameters of surface integrity were measured and evaluated during the experiment. To determine the size of the surface irregularities and the effect of the selected machining technology on the sprayed coating surface, the surface roughness parameters Ra and Rz were measured. Furthermore, measurement of the sprayed coating surface hardness by the HR 15 Y method before and after the machining process was used to determine the surface strengthening. Changes in strengthening were detected after machining. An impact of the chosen cutting parameters on the surface roughness after machining was not proven.

Keywords: Machining, plasma sprayed coating, surface integrity, strengthening.

647 Massive Open Online Course about Content Language Integrated Learning: A Methodological Approach for Content Language Integrated Learning Teachers

Authors: M. Zezou

Abstract:

This paper focuses on the design of a Massive Open Online Course (MOOC) about Content Language Integrated Learning (CLIL), and more specifically on how teachers can use CLIL as an educational approach that also incorporates technology in their teaching. All four weeks of the MOOC are presented and a step-by-step analysis of each lesson is offered. Additionally, the paper includes detailed lesson plans for CLIL lessons with proposed CLIL activities and games in which technology plays a central part. The MOOC is structured according to certain criteria in order to ensure success as well as a positive experience for the learners who complete it. It is addressed to all language teachers who would like to implement CLIL in their teaching. In other words, it presents the methodology that needs to be followed to successfully carry out a CLIL lesson and achieve the learning objectives set at the beginning of the course. First, the paper gives the definitions of MOOCs and LMOOCs, explores the difference between a structure-based MOOC (xMOOC) and a connectivist MOOC (cMOOC), and presents the criteria of a successful MOOC. Moreover, the notion of CLIL is explored, as it is necessary to fully understand this concept before moving on to the design of the MOOC. The four weeks of the MOOC are then introduced and the lesson plans are presented: the types of activities, the aims of each activity and the methodology that teachers have to follow. Emphasis is placed on the role of technology in foreign language learning and on the ways in which technology can be involved in teaching a foreign language. Final remarks and a summary of the main points are offered at the end.

Keywords: Content language integrated learning, connectivist massive open online course, lesson plan, language MOOC, massive open online course criteria, massive open online course, technology, structure-based massive open online course.

646 Spectral Entropy Employment in Speech Enhancement based on Wavelet Packet

Authors: Talbi Mourad, Salhi Lotfi, Chérif Adnen

Abstract:

In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This speech denoising tool will be employed in recognition, coding and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. To evaluate our approach, the proposed technique is compared with classical denoising methods based on thresholding and spectral subtraction. The experimental implementation uses speech signals corrupted by two sorts of noise, white and Volvo noise. The results of listening tests show that the proposed technique is better than spectral subtraction, and the SNR results show the superiority of our technique over the classical thresholding method using the modified hard-thresholding function based on the μ-law algorithm.
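
A minimal sketch of the spectral entropy measure used here for noise-level tracking is given below: each frame's magnitude spectrum is normalized into a probability-like distribution and its Shannon entropy is computed, so flat (noise-like) spectra score near 1 and harmonic (speech-like) spectra score lower. Frame length, sampling rate and test signals are illustrative choices, not the paper's settings.

```python
import numpy as np

def spectral_entropy(frame):
    """Normalized Shannon entropy of a frame's magnitude spectrum, in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    p = spectrum / (spectrum.sum() + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return entropy / np.log2(len(p))

# Noise-like frames have a flat spectrum (entropy near 1); tonal, speech-like
# frames concentrate energy in a few harmonics (lower entropy).
rng = np.random.default_rng(0)
t = np.arange(512) / 8000.0
noise_frame = rng.standard_normal(512)
tone_frame = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
print("noise entropy:", round(spectral_entropy(noise_frame), 3))
print("tonal entropy:", round(spectral_entropy(tone_frame), 3))
```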

Keywords: Enhancement, spectral subtraction, SNR, discrete wavelet packet transform, spectral entropy, histogram

645 Reform-Oriented Teaching of Introductory Statistics in the Health, Social and Behavioral Sciences – Historical Context and Rationale

Authors: Rossi A. Hassad

Abstract:

There is widespread emphasis on reform in the teaching of introductory statistics at the college level. Underpinning this reform is a consensus among educators and practitioners that traditional curricular materials and pedagogical strategies have not been effective in promoting statistical literacy, a competency that is becoming increasingly necessary for effective decision-making and evidence-based practice. This paper explains the historical context of, and rationale for, reform-oriented teaching of introductory statistics (at the college level) in the health, social and behavioral sciences (evidence-based disciplines). A firm understanding and appreciation of the basis for the change in pedagogical approach is important in order to facilitate commitment to reform, consensus building on appropriate strategies, and adoption and maintenance of best practices. In essence, reform-oriented pedagogy, in this context, is a function of the interaction among content, pedagogy, technology, and assessment. The challenge is to create an appropriate balance among these domains.

Keywords: Reform-oriented, reform, introductory statistics, health, behavioral sciences, evidence-based, psychology, teaching, learning.

644 Influence of Confined Acoustic Phonons on the Shubnikov – de Haas Magnetoresistance Oscillations in a Doped Semiconductor Superlattice

Authors: Pham Ngoc Thang, Le Thai Hung, Nguyen Quang Bau

Abstract:

The influence of confined acoustic phonons on the Shubnikov–de Haas magnetoresistance oscillations in a doped semiconductor superlattice (DSSL), subjected to a magnetic field, a DC electric field, and laser radiation, has been studied theoretically based on the quantum kinetic equation method. An analytical expression for the magnetoresistance in a DSSL has been obtained as a function of the external fields, the DSSL parameters, and especially the quantum number m characterizing the effect of confined acoustic phonons. When m goes to zero, the results for bulk phonons in a DSSL are recovered. Numerical calculations are also carried out for a GaAs:Si/GaAs:Be DSSL and compared with other studies. The results show that the amplitude of the Shubnikov–de Haas magnetoresistance oscillations decreases as the phonon confinement effect increases.

Keywords: Shubnikov–de Haas magnetoresistance oscillations, quantum kinetic equation, confined acoustic phonons, laser radiation, doped semiconductor superlattices.

643 Food Security Model and the Role of Community Empowerment: The Case of a Marginalized Village in Mexico, Tatoxcac, Puebla

Authors: Marco Antonio Lara De la Calleja, María Catalina Ovando Chico, Eduardo Lopez Ruiz

Abstract:

Community empowerment has proved to be a key element in the solution of the food security problem. A conceptual analysis found that agricultural production, economic development and governance are the traditional basis of food security models. Although the literature points to social inclusion as an important factor for food security, no model has considered it as the basis of one. The aim of this research is to identify the different dimensions that make up an integral model of food security, with emphasis on community empowerment. A diagnosis was made in the study community (Tatoxcac, Zacapoaxtla, Puebla) to identify the aspects that affect the level of food insecurity. With a statistical sample of 200 families, the Latin American and Caribbean Food Security Scale (ELCSA) was applied, finding that households composed of adults and children have moderate food insecurity (the ELCSA scale has three levels: low, moderate and high); this result is produced mainly by income capacity and the diversity of the diet. A model was therefore developed to promote food security through five dimensions: 1. regional context of the community; 2. structure and system of local food; 3. health and nutrition; 4. access to information and technology; and 5. self-awareness and empowerment. The specific actions on each axis of the model allow the systemic approach needed to address food security in the community through the empowerment of society. It is concluded that the self-awareness of local communities is an area of extreme importance, which must be taken into account for participatory schemes to improve food security. In the long term, the model requires the integrated participation of different actors, such as government, companies and universities, to solve something as vital as food security.

Keywords: Community empowerment, food security, model, systemic approach.

642 Power Quality Improvement Using PI and Fuzzy Logic Controllers Based Shunt Active Filter

Authors: Dipen A. Mistry, Bhupelly Dheeraj, Ravit Gautam, Manmohan Singh Meena, Suresh Mikkili

Abstract:

In recent years, the large-scale use of power electronic equipment has led to an increase of harmonics in the power system. Harmonics result in poor power quality and have a great adverse economic impact on utilities and customers. Current harmonics are one of the most common power quality problems and are usually resolved by using a shunt active filter (SHAF). The main objective of this work is to develop PI and fuzzy logic controllers (FLC) and to analyze the performance of the shunt active filter in mitigating current harmonics under balanced and unbalanced sinusoidal source voltage conditions for normal and increased load. When the supply voltages are ideal (balanced), both the PI controller and the FLC converge to the same compensation characteristics; however, when the supply voltages are non-ideal (unbalanced), the FLC offers outstanding results. Simulation results validate the superiority of the FLC with a triangular membership function over the PI controller.

Keywords: DC link voltage, Fuzzy logic controller, Harmonics, PI controller, Shunt Active Filter.

641 Using Genetic Algorithm for Distributed Generation Allocation to Reduce Losses and Improve Voltage Profile

Authors: M. Sedighizadeh, A. Rezazadeh

Abstract:

This paper presents a method for the optimal allocation of distributed generation in distribution systems. Our aim is optimal distributed generation allocation for voltage profile improvement and loss reduction in the distribution network. A genetic algorithm (GA) is used as the solving tool: with reference to these two aims, the problem is defined and the objective function is introduced. Because the fitness values in the genetic algorithm process are sensitive, a load flow calculation is needed for decision-making, so the load flow algorithm is combined appropriately with the GA until acceptable results are obtained. We used the MATPOWER package for the load flow algorithm and combined it with our genetic algorithm. The suggested method is programmed in MATLAB, and ETAP software is applied to verify the correctness of the results. The method was implemented on part of the Tehran electricity distribution grid. The results of applying this method to the test system show improvement in the voltage profile and loss reduction indices.

Keywords: Distributed Generation, Allocation, Voltage Profile, losses, Genetic Algorithm.

640 An Energy-Efficient Distributed Unequal Clustering Protocol for Wireless Sensor Networks

Authors: Sungju Lee, Jangsoo Lee , Hongjoong Sin, Seunghwan Yoo, Sanghyuck Lee, Jaesik Lee, Yongjun Lee, Sungchun Kim

Abstract:

Wireless sensor networks have been extensively deployed and researched. One of the major issues in wireless sensor networks is developing an energy-efficient clustering protocol. A clustering algorithm provides an effective way to prolong the lifetime of a wireless sensor network. In the paper, we compare several clustering protocols which significantly affect the balancing of energy consumption, and we propose an Energy-Efficient Distributed Unequal Clustering (EEDUC) algorithm which provides a new way of creating distributed clusters. In EEDUC, each sensor node sets a waiting time, considered as a function of its residual energy and the number of neighboring nodes. EEDUC uses the waiting time to distribute cluster heads. We also propose an unequal clustering mechanism to solve the hot-spot problem. Simulation results show that EEDUC distributes the cluster heads well, balances the energy consumption among the cluster heads and increases the network lifetime.
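
A hedged sketch of the waiting-time idea is shown below: each node delays its cluster-head announcement by a time that shrinks as its residual energy rises and grows with local node density, so well-provisioned nodes in sparse neighborhoods announce first. The exact functional form (and the small random jitter) is an assumption for illustration; the paper defines its own expression.

```python
import random

T_MAX = 1.0  # maximum waiting window in seconds (illustrative)

def waiting_time(residual_energy, initial_energy, n_neighbors, max_neighbors):
    """Shorter wait for nodes with more residual energy and fewer neighbors."""
    energy_term = 1.0 - residual_energy / initial_energy   # low is good
    density_term = n_neighbors / max(max_neighbors, 1)      # low is good
    jitter = random.uniform(0.0, 0.05)                      # break ties
    return T_MAX * 0.5 * (energy_term + density_term) + jitter

# A well-charged node in a sparse neighborhood waits less, so it tends to
# claim the cluster-head role before its neighbors do.
print(waiting_time(residual_energy=1.8, initial_energy=2.0,
                   n_neighbors=4, max_neighbors=20))
print(waiting_time(residual_energy=0.6, initial_energy=2.0,
                   n_neighbors=15, max_neighbors=20))
```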

Keywords: Wireless Sensor Network, Distributed Unequal Clustering, Multi-hop, Lifetime.

639 Establishing a Probabilistic Model of Extrapolated Wind Speed Data for Wind Energy Prediction

Authors: Mussa I. Mgwatu, Reuben R. M. Kainkwa

Abstract:

Wind is among the potential energy resources which can be harnessed to generate wind energy for conversion into electrical power. Due to the variability of wind speed with time and height, it is difficult to predict the generated wind energy optimally. In this paper, an attempt is made to establish a probabilistic model fitting the wind speed data recorded at the Makambako site in Tanzania. Wind speed and direction were measured using an anemometer (type AN1) and a wind vane (type WD1), respectively, both supplied by Delta-T Devices, at a measurement height of 2 m. Wind speeds were then extrapolated to a height of 10 m using the power-law equation with an exponent of 0.47. Data were analysed using MINITAB statistical software to show the variability of wind speeds with time and height, and to determine the underlying probability model of the extrapolated wind speed data. The results show that wind speeds at the Makambako site vary cyclically over time and conform to the Weibull probability distribution. From these results, the Weibull probability density function can be used to predict the wind energy.
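
The two computational steps described above are simple to sketch: (1) extrapolate the 2 m wind-speed record to 10 m with the power law v2 = v1 · (h2/h1)^alpha using the exponent 0.47 quoted in the abstract, and (2) fit a Weibull distribution to the extrapolated speeds. The wind-speed sample below is synthetic; the Makambako measurements are not reproduced here.

```python
import numpy as np
from scipy import stats

ALPHA = 0.47                  # power-law exponent from the abstract
H_MEASURE, H_TARGET = 2.0, 10.0

def extrapolate(v_measured, h_from=H_MEASURE, h_to=H_TARGET, alpha=ALPHA):
    """Power-law extrapolation of wind speed from h_from to h_to."""
    return v_measured * (h_to / h_from) ** alpha

rng = np.random.default_rng(0)
v2m = rng.weibull(2.0, size=1000) * 3.0     # synthetic 2 m record (m/s)
v10m = extrapolate(v2m)

shape, loc, scale = stats.weibull_min.fit(v10m, floc=0)
print(f"Weibull shape k = {shape:.2f}, scale c = {scale:.2f} m/s")
print(f"mean speed at 10 m = {v10m.mean():.2f} m/s")
```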

Keywords: Probabilistic models, wind speed, wind energy

638 Performance Analysis of a Discrete-time GeoX/G/1 Queue with Single Working Vacation

Authors: Shan Gao, Zaiming Liu

Abstract:

This paper treats a discrete-time batch arrival queue with a single working vacation. The main purpose of this paper is to present a performance analysis of this system using the supplementary variable technique. For this purpose, we first analyze the Markov chain underlying the queueing system and obtain its ergodicity condition. Next, we present the stationary distribution of the system length as well as some performance measures at random epochs using the supplementary variable method. Thirdly, still based on the supplementary variable method, we give the probability generating function (PGF) of the number of customers at the beginning of a busy period and a stochastic decomposition formula for the PGF of the stationary system length at departure epochs. Additionally, we investigate the relation between our discrete-time system and its continuous-time counterpart. Finally, numerical examples show the influence of the parameters on some crucial performance characteristics of the system.

Keywords: Discrete-time queue, batch arrival, working vacation, supplementary variable technique, stochastic decomposition.

637 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs

Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh

Abstract:

Static analysis of source code is used when auditing web applications to detect vulnerabilities. In this paper, we propose a new algorithm that analyzes PHP source code to detect potential LFI (local file inclusion) and RFI (remote file inclusion) vulnerabilities. In our approach, we first define patterns for finding functions which have the potential to be abused because of unhandled user input. More precisely, we use regular expressions as a fast and simple method to define patterns for the detection of vulnerabilities. As inclusion functions can also be used in a safe way, many false positives (FPs) can occur. The first cause of these FPs could be that the function does not use a user-supplied variable as an argument, so we extract a list of user-supplied variables to be used for detecting vulnerable lines of code. On the other hand, as a vulnerability can spread among variables, for example by multi-level assignment, we also try to extract the hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, as there are ways to prevent the vulnerability of inclusion functions, we also define patterns to detect them and further decrease our false positives.
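
A toy version of the first, pattern-matching stage is sketched below: a regular expression flags include/require calls whose argument contains a raw PHP superglobal. The pattern and the sample PHP snippet are illustrative assumptions; the paper's method additionally tracks user-supplied variables (including multi-level assignments) and sanitization patterns, which this sketch deliberately omits.

```python
import re

INCLUDE_PATTERN = re.compile(
    r'\b(include|include_once|require|require_once)\s*\(?[^;]*'
    r'\$_(GET|POST|REQUEST|COOKIE)\b',
    re.IGNORECASE,
)

php_source = r'''
<?php
include($_GET['page'] . ".php");      // potential LFI/RFI
require_once "config.php";            // safe constant path
$tpl = $_POST['tpl'];
include($tpl);                        // missed by this naive pattern alone
'''

for lineno, line in enumerate(php_source.splitlines(), start=1):
    if INCLUDE_PATTERN.search(line):
        print(f"line {lineno}: possible unchecked include -> {line.strip()}")
```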

Keywords: User-supplied Variables, hidden user-supplied variables, PHP vulnerabilities.

636 Cross Project Software Fault Prediction at Design Phase

Authors: Pradeep Singh, Shrish Verma

Abstract:

Software fault prediction models are created using the source code, metrics computed from the same or a previous version of the code, and related fault data. Some companies do not store and keep track of all the artifacts required for software fault prediction. To construct a fault prediction model for such a company, training data from other projects can be one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design phase metrics of other projects can be used as an initial guideline for projects where no previous fault data are available. We analyze seven datasets from the NASA Metrics Data Program which offer design as well as code metrics. Overall, the results of cross-project prediction are comparable to those of within-company learning.
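
For concreteness, here is a minimal cross-project sketch: a Gaussian Naïve Bayes classifier is trained on design-level metrics from one project and evaluated on another. The data below are synthetic stand-ins for the NASA MDP datasets, and the feature set is an illustrative assumption, not the paper's validated metric suite.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

def synthetic_project(n_modules):
    """Fake (design-metric, fault-label) pairs standing in for an MDP project."""
    X = rng.gamma(2.0, 2.0, size=(n_modules, 4))   # e.g. nodes, edges, calls, params
    y = (X.sum(axis=1) + rng.normal(0, 2, n_modules) > 18).astype(int)
    return X, y

X_train, y_train = synthetic_project(400)   # "source" project with fault history
X_test, y_test = synthetic_project(150)     # "target" project, no fault history

model = GaussianNB().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), digits=2))
```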

Keywords: Software Metrics, Fault prediction, Cross project, Within project.

635 Robust Statistics Based Algorithm to Remove Salt and Pepper Noise in Images

Authors: V.R.Vijaykumar, P.T.Vanathi, P.Kanagasabapathy, D.Ebenezer

Abstract:

In this paper, a robust-statistics-based filter to remove salt and pepper noise from digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image and the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value using the proposed robust-statistics-based filter. The proposed method performs well in removing low- to medium-density impulse noise with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal-dependent rank-ordered mean filter, adaptive median filter and a recently proposed decision-based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these filters in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
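
A simplified sketch of the detect-then-replace idea is given below: pixels at the extreme values (0 or 255) are treated as impulse candidates and replaced by a statistic of their uncorrupted neighbors. The paper's robust-statistics estimator is not reproduced; a median of valid neighbors is used here as a stand-in so the two-stage structure is visible.

```python
import numpy as np

def remove_salt_pepper(img):
    """Replace extreme-valued pixels with the median of their valid neighbors."""
    img = img.astype(float)
    out = img.copy()
    padded = np.pad(img, 1, mode="edge")
    corrupted = (img == 0) | (img == 255)
    for r, c in zip(*np.nonzero(corrupted)):
        window = padded[r:r + 3, c:c + 3].ravel()
        valid = window[(window != 0) & (window != 255)]
        out[r, c] = np.median(valid) if valid.size else np.median(window)
    return out

# Synthetic test: a flat image corrupted with 30% salt-and-pepper noise.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.3
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
restored = remove_salt_pepper(noisy)
print("MSE before:", round(np.mean((noisy - clean) ** 2), 1),
      "after:", round(np.mean((restored - clean) ** 2), 1))
```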

Keywords: Image denoising, Nonlinear filter, Robust Statistics, and Salt and Pepper Noise.

634 Join and Meet Block Based Default Definite Decision Rule Mining from IDT and an Incremental Algorithm

Authors: Chen Wu, Jingyu Yang

Abstract:

Using maximal consistent blocks of the tolerance relation on the universe of an incomplete decision table, the concepts of join block and meet block are introduced and studied. Besides the tolerance class, other blocks such as the tolerant kernel and compatible kernel of an object are also discussed. Upper and lower approximations based on these blocks are also defined. Default definite decision rules acquired from an incomplete decision table are proposed in the paper. An incremental algorithm to update default definite decision rules is suggested for effective mining tasks on an incomplete decision table into which data are appended. Through an example, we demonstrate how default definite decision rules based on maximal consistent blocks, join blocks and meet blocks are acquired, and how optimization is performed with the support of the discernibility matrix and discernibility function in the incomplete decision table.

Keywords: rough set, incomplete decision table, maximal consistent block, default definite decision rule, join and meet block.

633 A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and authentication of color images by deploying cryptography and digital watermarking as underlying technologies. The former is used to compute a content-based hash and the latter to embed the watermark. The host image, whose rightful ownership is to be established, is first transformed from RGB to the YST color space, designed exclusively for watermarking-based applications. Geometrically, YS ⊥ T, and the T channel corresponds to the chrominance component of the color image and is therefore suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks. The block size is important for enhanced localization, security and low computation. Each block, along with the ownership information, is then processed by SHA160, a one-way hash function, to compute the content-based hash, which is always unique and resistant to birthday attacks, instead of using MD5, which may give rise to the collision condition H(m) = H(m'). The watermark payload varies from block to block and is computed from the variance factor α. The quality of the watermarked images is quite high both subjectively and objectively. Our scheme is blind, computationally fast and exactly locates the tampered region.

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160.
