Search results for: Power System Control
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11969

569 A Review on Factors Influencing Implementation of Secure Software Development Practices

Authors: Sri Lakshmi Kanniah, Mohd Naz’ri Mahrin

Abstract:

More and more businesses and services depend on software to run their daily operations and business services. At the same time, cyber-attacks are becoming more covert and sophisticated, posing threats to software. Vulnerabilities exist in software due to the lack of security practices during the phases of software development. Implementation of secure software development practices can improve resistance to attacks. Many methods, models and standards for secure software development have been developed. However, despite these efforts, they still come up against difficulties in their deployment, and the processes are not institutionalized. There is a set of factors that influence the successful deployment of secure software development processes. In this study, the methodology and results of a systematic literature review of factors influencing the implementation of secure software development practices are described. A total of 44 primary studies were analysed in the systematic review, and a list of twenty factors was identified. Some of the factors that affect the implementation of secure software development practices are: involvement of a security expert, integration between the security and development teams, developers' skill and expertise, development time, and communication between stakeholders. The factors were further classified into four categories: institutional context, people and action, project content, and system development process. The results show that it is important to take organizational, technical and people issues into account in order to implement secure software development initiatives.

Keywords: Secure software development, software development, software security, systematic literature review.

568 Computational Investigation of Air-Gas Venturi Mixer for Powered Bi-Fuel Diesel Engine

Authors: Mofid Gorjibandpy, Mehdi Kazemi Sangsereki

Abstract:

In a bi-fuel diesel engine, the carburetor plays a vital role in switching from fuel gas to petrol mode operation and vice versa. The carburetor is the most important part of the fuel system of such an engine, and these engines carry variable venturi mixer carburetors. The basic operation of the carburetor mainly depends on the restriction barrel called the venturi. When air flows through the venturi, its speed increases and its pressure decreases. The main challenge is designing a mixing device which mixes the supplied gas with the incoming air at an optimum ratio. In order to surmount the identified problems, the way fuel gas and air flow in the mixer has to be analyzed. In this case, the Computational Fluid Dynamics (CFD) approach is applied in the design of the prototype mixer. The present work aims at further understanding of the air and fuel flow structure by performing CFD studies using a software code. In this study, several mixers were designed for mixing air and gas under the stated conditions, and the optimum mixer was then selected using computational fluid dynamics. The results indicated that the mixer with 12 holes produces a more homogeneous mixture than the 8-hole and 6-hole mixers. The results also showed that when the inlet convergence is smoother than the outlet divergence, the mixture becomes more homogeneous, owing to the increased turbulence in the outlet divergence.
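
The depression that draws fuel gas through the mixing holes follows from the standard continuity and Bernoulli relations for incompressible flow through the venturi throat; the symbols A, v, p, rho (area, velocity, static pressure, air density) below are generic textbook notation, not values from the paper:

```latex
A_1 v_1 = A_2 v_2, \qquad
p_1 + \tfrac{1}{2}\rho v_1^{2} = p_2 + \tfrac{1}{2}\rho v_2^{2}
\;\;\Rightarrow\;\;
p_1 - p_2 = \tfrac{1}{2}\rho v_1^{2}\!\left[\left(\frac{A_1}{A_2}\right)^{2} - 1\right].
```

The throat depression p1 - p2 is what entrains the gas, which is why the number of mixing holes and the convergence/divergence angles studied above matter for mixture homogeneity.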

Keywords: Computational Fluid Dynamics, Venturi mixer, Air-fuel ratio, Turbulence.

567 Locating Cultural Centers in Shiraz (Iran) Applying Geographic Information System (GIS)

Authors: R. Mokhtari Malekabadi, S. Ghaed Rahmati, S. Aram

Abstract:

Optimal cultural site selection is one of the ways that can promote citizenship culture in addition to ensuring the health and leisure of city residents. This study examines the social and cultural needs of the community and optimal cultural site allocation and, after identifying the problems and shortcomings, provides a suitable model for finding the best locations for these centers, where they have the greatest impact on the promotion of citizenship culture. Non-scientific siting methods cause irreversible impacts on the urban environment and citizens, whereas modern, efficient methods can reduce these impacts. One of these methods is the use of geographical information systems (GIS). In this study, the Analytical Hierarchy Process (AHP) method was used to locate the optimal cultural sites. In AHP, three principles are used: decomposition, comparative analysis, and combining preferences. The objectives of this research include providing optimal settings for Shiraz residents to spend time and perform cultural activities, and proposing the construction of cultural sites in different areas of the city. The results of this study show the correct positioning of cultural sites based on the social needs of citizens. Thus, considering population parameters and access radii, the GIS and AHP model for locating cultural centers can meet the social needs of citizens.
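
As a rough illustration of the AHP weighting step only (the criteria, judgments and 3x3 matrix below are hypothetical, not the authors' data), the sketch computes priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix (principal eigenvector)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                      # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                  # normalise to sum to 1
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index table
    return w, ci / ri                             # weights, consistency ratio

# Hypothetical criteria: population density, access radius, land availability.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights, cr)   # CR < 0.1 is the usual threshold for acceptable consistency
```

The resulting weights would then be applied as layer weights in the GIS overlay to score candidate sites.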

Keywords: Analytical Hierarchy Process (AHP), geographical information systems (GIS), Cultural site, locating, Shiraz.

566 Decision Tree Based Scheduling for Flexible Job Shops with Multiple Process Plans

Authors: H.-H. Doh, J.-M. Yu, Y.-J. Kwon, J.-H. Shin, H.-W. Kim, S.-H. Nam, D.-H. Lee

Abstract:

This paper suggests a decision tree based approach for flexible job shop scheduling with multiple process plans, i.e. each job can be processed through alternative operations, each of which can be processed on alternative machines. The main decision variables are: (a) selecting the operation/machine pair; and (b) sequencing the jobs assigned to each machine. As an extension of the priority scheduling approach that selects the best priority rule combination after many simulation runs, this study suggests a decision tree based approach in which a decision tree is used to select a priority rule combination adequate for a specific system state, and hence the burden of developing simulation models and carrying out simulation runs can be eliminated. The decision tree based scheduling approach consists of construction and scheduling modules. In the construction module, a decision tree is constructed using a four-stage algorithm, and in the scheduling module, a priority rule combination is selected using the decision tree. To show the performance of the approach suggested in this study, a case study was done on a flexible job shop with reconfigurable manufacturing cells and a conventional job shop, and the results are reported by comparing the approach with individual priority rule combinations for the objectives of minimizing total flow time and total tardiness.
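
A minimal sketch of the scheduling-module idea using scikit-learn's DecisionTreeClassifier: the system-state features and priority-rule labels below are invented placeholders, not the four-stage construction algorithm or the attributes used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data from offline simulation:
# each row = a system state (e.g. utilisation, queue length, due-date tightness),
# each label = the priority rule combination that performed best for that state.
X = np.array([[0.85, 12, 0.3],
              [0.40,  3, 0.9],
              [0.70,  8, 0.5],
              [0.95, 20, 0.2]])
y = np.array(["SPT+EDD", "FIFO+SLACK", "SPT+SLACK", "EDD+EDD"])

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Scheduling module: observe the current shop state and pick a rule combination
# without running new simulations.
current_state = np.array([[0.78, 10, 0.4]])
print(tree.predict(current_state)[0])
```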

Keywords: Flexible job shop scheduling, Decision tree, Priority rules, Case study.

565 A Study on the Condition Monitoring of Transmission Line by On-line Circuit Parameter Measurement

Authors: Il Dong Kim, Jin Rak Lee, Young Jun Ko, Young Taek Jin

Abstract:

In this paper, an on-line condition monitoring method for transmission lines is proposed using electrical circuit theory and IT technology. The circuit parameters of a transmission line, such as resistance (R), inductance (L), conductance (g) and capacitance (C), reflect the electrical condition and physical state of the line. Those parameters can be calculated from the linear equations composed of voltages and currents measured by the synchro-phasor measurement technique at both ends of the line. A set of linear voltage drop equations containing the four terminal constants (A, B, C, D) is a mathematical model of the transmission line circuit. If at least two sets of those linear equations are established from different operating conditions of the line, they mathematically yield the circuit parameters of the line. The condition of line connectivity, including the state of connecting or contacting parts of the switching device, may be monitored through resistance variations during operation. The insulation condition of the line can be monitored through conductance (g) and capacitance (C) measurements. Together with other condition monitoring devices such as partial discharge sensors and visual sensing devices, these measurements may give useful information for detecting any incipient symptoms of faults. A prototype hardware system has been developed and tested on laboratory-level simulated transmission lines. The tests have shown enough evidence to put the proposed method to practical use.
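
For reference, the two-port model behind those voltage drop equations has the standard long-line form below (textbook notation with line length l; the paper's exact formulation is not quoted). Phasor measurements from two independent operating conditions give enough equations to solve for A, B, C, D, and hence for R, L, g and C:

```latex
V_S = A\,V_R + B\,I_R, \qquad I_S = C\,V_R + D\,I_R,

A = D = \cosh(\gamma l), \qquad B = Z_c \sinh(\gamma l), \qquad C = \frac{\sinh(\gamma l)}{Z_c},

\gamma = \sqrt{(R + j\omega L)(g + j\omega C)}, \qquad
Z_c = \sqrt{\frac{R + j\omega L}{g + j\omega C}}.
```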

Keywords: Transmission Line, Condition Monitoring, Circuit Parameters, Synchro-phasor Measurement.

564 Design and Application of NFC-Based Identity and Access Management in Cloud Services

Authors: Shin-Jer Yang, Kai-Tai Yang

Abstract:

In response to a changing world and the fast growth of the Internet, more and more enterprises are replacing web-based services with cloud-based ones. Multi-tenancy technology is becoming more important, especially with Software as a Service (SaaS). This in turn leads to a greater focus on the application of Identity and Access Management (IAM). Conventional Near-Field Communication (NFC) based verification relies on a computer browser and a card reader to access an NFC tag. This type of verification does not support mobile device login and user-based access management functions. This study designs an NFC-based third-party cloud identity and access management scheme (NFC-IAM) addressing this shortcoming. Data from simulation tests analyzed with Key Performance Indicators (KPIs) suggest that the NFC-IAM not only takes less time in identity identification but also cuts two-factor authentication time by 80% and improves verification accuracy to 99.9% or better. In functional performance analyses, NFC-IAM performed better in scalability and portability. The NFC-IAM app and back-end system, to be developed and deployed on mobile devices, support IAM features and also offer users a more user-friendly experience and stronger security protection. In the future, our NFC-IAM can be employed in different environments, including identification for mobile payment systems and permission management for remote equipment monitoring, among other applications.

Keywords: Cloud service, multi-tenancy, NFC, IAM, mobile device.

563 Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime

Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr channel image and produces separate binary images for green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings. The noise removal criteria consist of blob information from the binary image: length, area, bounding box area, etc. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the AdaBoost algorithm. For the experiments, the method was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. In the tests, our method was compared with standard object-recognition learning processes and reached a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of the proposed method is 15 ms.
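
A minimal sketch of the PTLD-style colour step using OpenCV; the YCrCb thresholds, blob-size limits and input file name are illustrative guesses, not the values used by the authors (note that OpenCV orders the channels as Y, Cr, Cb):

```python
import cv2
import numpy as np

def detect_candidates(bgr, lo, hi, min_area=20, max_area=2000):
    """Binary colour mask in YCrCb space followed by a crude size/shape filter."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, lo, hi)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                         # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area and 0.5 <= w / float(h) <= 2.0:
            boxes.append((x, y, w, h))            # candidate region for the classifier
    return boxes

img = cv2.imread("frame.jpg")                     # placeholder frame
# Illustrative thresholds only: bright, strongly red pixels (high Y, high Cr).
red_boxes = detect_candidates(img, np.array([120, 160, 0]), np.array([255, 255, 120]))
```

The surviving boxes would then be passed to the Haar-like feature / AdaBoost classifier described above.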

Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter.

562 Examination of the Effect of Air Viscosity on Narrow Acoustic Tubes Using FEM Involving Complex Effective Density and Complex Bulk Modulus

Authors: M. Watanabe, T. Yamaguchi, M. Sasajima, Y. Kurosawa, Y. Koike

Abstract:

Earphones and headphones, which are compact electro-acoustic transducers, tend to contain many acoustic absorption materials and porous materials known as dampers, which often have a large number of extremely small holes and narrow slits to inhibit the resonance of the vibrating system; in such acoustic paths, the air viscosity significantly affects the acoustic characteristics. In order to perform simulations using the finite element method (FEM), it is necessary to know material characteristics such as the impedance and propagation constants of the sound absorbing and porous materials. The transfer function method is widely known as a measurement method for an acoustic tube with such physical properties, but literature describing measurements at the upper limits of the audible range is yet to be found. The acoustic tube, which is a measurement instrument, must be made narrow, and the distance between the two sets of microphones must be shortened, in order to measure acoustic characteristics at higher frequencies. When such a tube is made narrow, however, the characteristic impedance has been observed to become lower than the impedance of air. This paper attributes this phenomenon to the effect of air viscosity and describes an FEM analysis of an acoustic tube that accounts for air viscosity, comparing the results with a theoretical formula for an acoustic tube into which the effect of air viscosity has been incorporated.
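
In such an equivalent-fluid description, the complex effective density rho_eff(omega) and complex bulk modulus K_eff(omega) fix the quantities used in the FEM; the standard relations (generic textbook forms, not the paper's specific derivation) are:

```latex
Z_c(\omega) = \sqrt{\rho_{\mathrm{eff}}(\omega)\,K_{\mathrm{eff}}(\omega)}, \qquad
\gamma(\omega) = j\omega\sqrt{\frac{\rho_{\mathrm{eff}}(\omega)}{K_{\mathrm{eff}}(\omega)}}, \qquad
c(\omega) = \sqrt{\frac{K_{\mathrm{eff}}(\omega)}{\rho_{\mathrm{eff}}(\omega)}}.
```

Viscous losses in narrow holes and slits enter mainly through the effective density and thermal losses through the effective bulk modulus, which is consistent with the observation above that narrowing the tube changes the apparent characteristic impedance relative to that of lossless air.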

Keywords: Acoustic tube, air viscosity, earphones, FEM, porous materials, sound absorbing materials, transfer function method.

561 Secure Protocol for Short Message Service

Authors: Shubat S. Ahmeda, Ashraf M. Ali Edwila

Abstract:

Short Message Service (SMS) has grown in popularity over the years and has become a common way of communication; it is a service provided through the Global System for Mobile Communications (GSM) that allows users to send text messages to others. SMS is usually used to transport unclassified information, but with the rise of mobile commerce it has become a popular tool for transmitting sensitive information between businesses and their clients. By default, SMS does not guarantee confidentiality and integrity of the message content. In mobile communication systems, the security (encryption) offered by the network operator only applies to the wireless link; data delivered through the mobile core network may not be protected. Existing end-to-end security mechanisms are provided at the application level and are typically based on public key cryptosystems. The main concern in a public-key setting is the authenticity of the public key; this issue can be resolved by identity-based (ID-based) cryptography, where the public key of a user can be derived from public information that uniquely identifies the user. This paper presents an encryption mechanism based on an ID-based scheme using elliptic curves to provide end-to-end security for SMS. This mechanism has been implemented over the standard SMS network architecture, and the encryption overhead has been estimated and compared with the RSA scheme. This study indicates that the ID-based mechanism has advantages over the RSA mechanism in key distribution and in scaling up the security level for mobile services.
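
For orientation only, the sketch below shows elliptic-curve end-to-end protection of a short message using the Python cryptography package (ECDH key agreement, HKDF, AES-GCM). It is not the identity-based scheme proposed in the paper, where the public key would instead be derived from the recipient's identity rather than exchanged:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each party holds an EC key pair; in an ID-based scheme the public key would
# instead be derived from the recipient's phone number / identity string.
sender_priv = ec.generate_private_key(ec.SECP256R1())
receiver_priv = ec.generate_private_key(ec.SECP256R1())

# ECDH shared secret -> 128-bit AES key via HKDF.
shared = sender_priv.exchange(ec.ECDH(), receiver_priv.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
           info=b"sms-e2e").derive(shared)

nonce = os.urandom(12)
sms = b"Transfer code: 4711"                     # short SMS payload
ciphertext = AESGCM(key).encrypt(nonce, sms, None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == sms
```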

Keywords: Elliptic Curve Cryptography (ECC), End-to-end Security, Identity-based Cryptography, Public Key, RSA, SMS Protocol.

560 Heat Transfer and Entropy Generation in a Partial Porous Channel Using LTNE and Exothermicity/Endothermicity Features

Authors: Mohsen Torabi, Nader Karimi, Kaili Zhang

Abstract:

This work provides a comprehensive study of the heat transfer and entropy generation rates of a horizontal channel partially filled with a porous medium which experiences internal heat generation or consumption due to an exothermic or endothermic chemical reaction. The focus is on the local thermal non-equilibrium (LTNE) model. The LTNE approach delivers more accurate data regarding the temperature distribution within the system and accordingly provides more accurate Nusselt numbers and entropy generation rates. The Darcy-Brinkman model is used for the momentum equations, and constant heat flux is assumed as the boundary condition on both the upper and lower surfaces. Analytical solutions are provided for both the velocity and temperature fields. By incorporating the resulting velocity and temperature formulas into the fundamental equations for entropy generation, both local and total entropy generation rates are plotted for a number of cases. Bifurcation phenomena regarding the temperature distribution and interface heat flux ratio are observed. It has been found that the exothermicity or endothermicity characteristic of the channel has a considerable impact on the temperature fields and entropy generation rates.
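
For context, the governing equations referred to above typically take the following fully developed forms in the porous region (standard notation; the paper's nondimensionalisation and reaction source terms may differ):

```latex
\mu_{\mathrm{eff}}\frac{d^{2}u}{dy^{2}} - \frac{\mu_f}{\kappa}\,u = \frac{dp}{dx},

k_{s,\mathrm{eff}}\frac{d^{2}T_s}{dy^{2}} - h_{sf}a_{sf}\,(T_s - T_f) + s_s = 0, \qquad
k_{f,\mathrm{eff}}\frac{d^{2}T_f}{dy^{2}} + h_{sf}a_{sf}\,(T_s - T_f) + s_f
= \rho_f c_p\, u\,\frac{\partial T_f}{\partial x},
```

where kappa is the permeability, the interstitial exchange term h_sf a_sf (T_s - T_f) is what distinguishes LTNE from local thermal equilibrium, and s_s, s_f represent internal heat generation (exothermic) or consumption (endothermic).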

Keywords: Entropy generation, exothermicity, endothermicity, forced convection, local thermal non-equilibrium, analytical modeling.

559 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features

Authors: Kyi Pyar Zaw, Zin Mar Kyu

Abstract:

Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate may be low or high depending on the extracted features. In this paper, 25 features per character are used for recognition. Basically, there are three steps in character recognition: character segmentation, feature extraction and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method is used for character segmentation. In the feature extraction step, features are extracted in two ways. In the first way, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. In the second way, the input character is divided into 16 blocks; for each block, 8 feature values are obtained through the eight-direction chain code frequency extraction method, and the sum of these 8 values is defined as the feature of that block, so 16 features are extracted from the 16 blocks. We use the number-of-holes feature to cluster similar characters. Using these features, we can recognize almost all common Myanmar characters in various font sizes. All 25 features are used in both the training and testing stages. In the classification step, characters are classified by matching all features of the input character with the already trained character features.
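
A compact sketch of the eight-direction chain code frequency feature (each step between consecutive boundary pixels is mapped to one of 8 neighbour directions numbered 0-7); the contour tracing and 16-block layout of the paper are omitted:

```python
import numpy as np

# Map a unit step (dx, dy) between consecutive boundary pixels to a direction code 0-7.
DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_frequency(boundary):
    """8-bin frequency histogram of chain codes along an ordered boundary path."""
    hist = np.zeros(8)
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        code = DIRECTIONS[(int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))]
        hist[code] += 1
    return hist / max(hist.sum(), 1)   # normalise so font size matters less

# Whole-character feature (8 values); the per-block features would sum the 8 bins
# within each of the 16 blocks, giving the additional 16 features described above.
boundary_pixels = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(chain_code_frequency(boundary_pixels))
```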

Keywords: Chain code frequency, character recognition, feature extraction, features matching, segmentation.

558 A Modification of Wireless and Internet Technologies for Logistics Analysis

Authors: Apiwat Sangnoree

Abstract:

This research is designed to help WAP-based mobile phone users analyze logistics in the traffic area by designing and applying accessible processes from the mobile user to the server databases. The design comprises a MySQL 4.1.8-nt database system acting as the server, with three sub-databases: traffic light times at intersections during periods of the day, road distances of the area blocks into which the main sample area is divided, and speeds of sample vehicles (motorcycle, personal car and truck) during periods of the day. For the interconnection between server and user, PHP is used to calculate distances and travelling times from the starting point to the destination, while XHTML is applied for receiving, sending and displaying data from PHP on the user's mobile. In this research, the main sample area is the Huakwang-Ratchada area of Bangkok, Thailand, which is usually congested; the surrounding 6.25 km² area is split into 25 blocks of 0.25 km² each. For simulating the results, the designed server database and all communication models of this research have been uploaded to www.utccengineering.com/m4tg, and a mobile phone supporting a WAP 2.0 XHTML/HTML multimode browser was used for observing values and displayed pictures. According to the simulated results, the user can check route pictures from the requested point to the destination, along with the analyzed travel times for the sample vehicles in various periods of the day.
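
The server-side calculation amounts to summing block traversal times along the route for the chosen vehicle and period; a toy version of that arithmetic (the paper's PHP implementation and real data are not reproduced, and all values below are hypothetical) is:

```python
# Distances (km) of the blocks along a route, and average speeds (km/h) looked up
# from the speed sub-database for a given vehicle type and period of the day.
route_blocks = ["B07", "B08", "B13"]
block_distance_km = {"B07": 0.5, "B08": 0.5, "B13": 0.5}
speed_kmh = {("car", "evening_peak"): {"B07": 12, "B08": 9, "B13": 15}}

def travel_time_minutes(blocks, vehicle, period):
    speeds = speed_kmh[(vehicle, period)]
    return sum(block_distance_km[b] / speeds[b] * 60 for b in blocks)

print(round(travel_time_minutes(route_blocks, "car", "evening_peak"), 1))
```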

Keywords: WAP, logistics, XHTML, internet.

557 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms

Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna

Abstract:

In this paper, we present a low cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify 11-element feature vectors obtained from the sensors on the glove into one of the 27 ASL alphabets or a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contacts at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning techniques to build a low cost wearable glove that is scrupulous, elegant and portable.
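
As a hedged illustration of the classification stage only, the snippet below trains a multilayer perceptron on 11-element feature vectors over 28 classes (27 letter gestures plus space); the 6 bend + 3 acceleration + 2 contact split, the random data and the network size are assumptions, not the authors' pipeline or Linear Bayes variant:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_classes = 28                       # 27 letter gestures + 1 space gesture
X = rng.random((560, 11))            # 11 features (assumed: 6 bend + 3 accel + 2 contact)
y = rng.integers(0, n_classes, 560)  # placeholder labels; real data comes from the glove

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# At run time each new sensor reading is classified and the letter is sent to
# the TFT LCD display and the speech synthesizer.
new_reading = rng.random((1, 11))
print(clf.predict(new_reading)[0])
```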

Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove.

556 Development of Cellulose Panels with Porous Structure for Sustainable Building Insulation

Authors: P. Garbagnoli, M. Musitelli, B. Del Curto, MP. Pedeferri

Abstract:

The study and development of innovative materials for building insulation is very important for a sustainable society, in order to improve comfort and reduce energy consumption. The aim of this work is the development of insulating panels for sustainable buildings based on an innovative material made from cardboard and Phase Change Materials (PCMs). The research consisted of laboratory tests whose purpose was to obtain the required properties for insulation panels: light weight, a porous structure and mechanical resistance. PCMs have been used for many years in the building industry as a smart insulation technology because of their ability to store and release large quantities of latent heat at useful specific temperatures [1], [2]. The integration of PCMs into the cellulose matrix during the waste paper recycling process has been developed in order to obtain a composite material. Experiments on the production process for the insulating panels were carried out in order to make the new material suitable for building applications. The addition of rising agents demonstrated the possibility of obtaining a lighter structure with better insulation properties. Several tests were conducted to verify the properties of the new panel. The results obtained show the possibility of realizing an innovative and sustainable material suitable for replacing the insulating panels currently used.

Keywords: Sustainability, recycling, waste cardboard, PCM, cladding system, insulating materials.

555 Cardiopulmonary Disease in Bipolar Disorder Patient with History of SJS: Evidence Based Case Report

Authors: Zuhrotun Ulya, Muchammad Syamsulhadi, Debree Septiawan

Abstract:

Patients with bipolar disorder are three times more likely to suffer cardiovascular disorders than the general population, which influences their level of morbidity and rate of mortality. Bipolar disorder also affects the pulmonary system. The choice of long-term monotherapy and other combinative therapies has clinical impacts on patients. This study investigates the case of a woman who has been suffering from bipolar disorder for 16 years and who has a history of Stevens-Johnson Syndrome; at present she is also suffering from cardiovascular and pulmonary disorders. An analysis of the evidence obtained from PubMed, Cochrane, Medline, and ProQuest (publications between 2005 and 2015) suggests that there is a relationship between cardiovascular disorder, drug therapies, Stevens-Johnson Syndrome and mood stabilizers. Combination therapy with a mood stabilizer is recommended for patients who do not have histories of side effects from these drugs. Replacement drugs and combinations may be applied, especially for those with bipolar disorder, and the combination of atypical antipsychotics and mood stabilizers is often used. Clinicians, however, should be careful with the patients' physical and metabolic changes, especially those who have undergone long-term therapy and who have a history of Stevens-Johnson Syndrome (for which clinicians probably prescribed one type of medicine).

Keywords: Cardio-pulmonary disease, bipolar disorder, Stevens-Johnson Syndrome, therapy.

554 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images

Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia, a lack of RBCs, is characterized by a hemoglobin level below the normal level. In this study, an image processing based methodology was developed to localize and extract RBCs from microscopic images, and a machine learning approach was adopted to classify the localized anemic RBC images. Several textural and geometrical features are calculated for each extracted RBC. The training set of features was analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. The reasons for using PCA are its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. Our classifier algorithms yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and radial basis function neural network (RBFNN), respectively. Classification was evaluated in terms of sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.
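
A minimal sketch of the feature-reduction and classification chain described above, using scikit-learn's PCA and K-NN on placeholder feature vectors (the actual textural/geometrical features, class labels and tuning are the paper's own):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((300, 24))    # 24 textural/geometrical features per extracted RBC (placeholder)
y = rng.integers(0, 2, 300)  # 0 = normal RBC, 1 = anemic RBC (placeholder labels)

model = make_pipeline(PCA(n_components=8), KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(model, X, y, cv=5).mean())
```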

Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC.

553 All Types of Base Pair Substitutions Induced by γ-Rays in Haploid and Diploid Yeast Cells

Authors: Natalia Koltovaya, Nadezhda Zhuchkina, Ksenia Lyubimova

Abstract:

We study the biological effects induced by ionizing radiation in view of therapeutic exposure and the idea of space flights beyond Earth's magnetosphere. In particular, we examine the differences between base pair substitution induction by ionizing radiation in model haploid and diploid yeast Saccharomyces cerevisiae cells. Such mutations are difficult to study in higher eukaryotic systems. In our research, we have used a collection of six isogenic trp5-strains and 14 isogenic haploid and diploid cyc1-strains that are specific markers of all possible base-pair substitutions. These strains differ from each other only in single base substitutions within codon-50 of the trp5 gene or codon-22 of the cyc1 gene. Different mutation spectra were obtained for the two haploid genetic assays (trp5 and cyc1), and different mutation spectra were also obtained for the same cyc1 system in cells of different ploidy, haploid and diploid. The dose dependence was linear in haploid cells and exponential in diploid cells. We suggest that the differences between the haploid yeast strains reflect the dependence on sequence context, while the differences between haploid and diploid strains reflect different molecular mechanisms of mutation.

Keywords: Base pair substitutions, γ-rays, haploid and diploid cells, yeast Saccharomyces cerevisiae.

552 A Model for Estimation of Efforts in Development of Software Systems

Authors: Parvinder S. Sandhu, Manisha Prashar, Pourush Bassi, Atul Bisht

Abstract:

Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA inference system and a Neuro-Fuzzy (NF) inference system are experimented with to estimate software effort. The performance of the developed models was tested on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and Genetic Algorithm based models reported in the literature. The results show that the NF model has the lowest MMRE and RMSE values and thus gives the best results compared with the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
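
The two evaluation metrics used above are defined as the mean magnitude of relative error and the root mean squared error; a quick implementation (with invented effort values for illustration) is:

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean(|actual - predicted| / actual)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: sqrt(mean((actual - predicted)^2))."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Hypothetical efforts (person-months) for a few projects.
actual    = [24.0, 62.0, 11.2, 170.0]
predicted = [20.5, 70.0, 12.0, 155.0]
print(mmre(actual, predicted), rmse(actual, predicted))
```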

Keywords: Neuro-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model, GA Based Model, Genetic Algorithm.

551 Operational Software Maturity: An Aerospace Industry Analysis

Authors: Raúl González Muñoz, Essam Shehab, Martin Weinitzke, Chris Fowler, Paul Baguley

Abstract:

Software applications have become crucial to the aerospace industry, providing a wide range of functionalities and capabilities used during the design, manufacturing and support of aircraft. However, as this criticality increases, so too does the risk to business operations when facing a software failure. Hence, there is a need for new methodologies to support aerospace companies in effectively managing their software portfolios, avoiding the hazards of business disruption and additional costs. This paper aims to provide a definition of operational software maturity and to show how it can be used to assess software operational behaviour, as well as a view on the different aspects that drive software maturity within the aerospace industry. The key research question addressed is: how can operational software maturity monitoring assist the aerospace industry in effectively managing large software portfolios? This question has been addressed by conducting an in-depth review of the current literature, by working closely with aerospace professionals and by running an industry case study within a major aircraft manufacturer. The results are a software maturity model composed of a set of drivers and a prototype tool used for the testing and validation of the research findings. By utilising these methodologies to assess the operational maturity of software applications in aerospace, benefits in maintenance activities and in avoiding operational disruption have been observed, supporting business cases for system improvement.

Keywords: Aerospace, capability maturity model, software maturity, software lifecycle.

550 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel

Authors: Joseph C. Chen, Joshua Cox

Abstract:

This research focuses on the use of the Taguchi method to reduce the surface roughness and improve the dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) with S7 heat treated steel. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications which require high precision in dimension and the desired surface roughness. This paper demonstrates that the Taguchi parameter design methodology is able to optimize both dimensional accuracy and surface roughness by investigating seven wire-EDM controllable parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is investigated as the noise factor in this research. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. This paper demonstrates that the Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches, where the desired dimension is 0.6600 inches; and (2) a surface roughness of 1.7322 microns, significantly improved from 2.8160 microns.
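
In Taguchi parameter design, responses like these are normally scored with signal-to-noise ratios, typically smaller-the-better for surface roughness and nominal-the-best for the 0.6600-inch target dimension; the standard formulas (the abstract does not list them explicitly, so this is the generic form) are:

```latex
\mathrm{S/N}_{\text{smaller}} = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\right),
\qquad
\mathrm{S/N}_{\text{nominal}} = 10\,\log_{10}\!\left(\frac{\bar{y}^{\,2}}{s^{2}}\right),
```

where y_i are the measured responses over the n noise-factor (water temperature) replications and y-bar, s^2 are their mean and variance; the parameter levels maximising S/N are selected.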

Keywords: Taguchi parameter design, surface roughness, dimensional accuracy, Wire EDM.

549 Review of Carbon Materials: Application in Alternative Energy Sources and Catalysis

Authors: Marita Pigłowska, Beata Kurc, Maciej Galiński

Abstract:

The application of carbon materials in branches of the electrochemical industry shows an increasing tendency each year due to the many interesting properties they possess. These include a well-developed specific surface, porosity, high sorption capacity, good adsorption properties, low bulk density, electrical conductivity and chemical resistance. All these properties allow for their effective use, among others, in supercapacitors, which can reach capacitances of the order of 100 F thanks to the carbon electrodes constituting the capacitor plates. Carbons (including expanded graphite, carbon black, graphitic carbon fibers and activated carbon) are commonly used in electrochemical methods of removing oil derivatives, e.g., phenols and their derivatives, from water after tanker disasters, by electrochemical anodic oxidation. Phenol can occupy practically the entire surface of the carbon material and leave the water clean of hydrophobic impurities. Regeneration of such electrodes is also not complicated; it is carried out by electrochemical methods consisting of unblocking the pores and reducing resistances, thus reactivating them for subsequent adsorption processes. Graphite is commonly used as an anode material in lithium-ion cells, but due to the limited capacity it offers (372 mAh g-1), new solutions are sought that meet capacitive, efficiency and economic criteria. Increasingly, biodegradable materials, green materials, biomass and waste (including agricultural waste) are used in order to reuse them, reduce greenhouse effects and, above all, meet the biodegradability criterion necessary for the production of lithium-ion cells as chemical power sources. The most common of these materials are cellulose, starch, and wheat, rice and corn waste, e.g., from agricultural, paper and pharmaceutical production. Such products are subjected to appropriate treatments depending on the desired application (chemical, thermal or electrochemical). Starch is a biodegradable polysaccharide that consists of polymeric units, amylose and amylopectin, that build the ordered (linear) and amorphous (branched) structure of the polymer. Carbon is also used as a catalyst. Elemental carbon has become available in many nano-structured forms representing the hybridization combinations found in the primary carbon allotropes, and the materials can be enriched with a large number of surface functional groups. There are many examples of catalytic applications of carbon in the literature, but the development of this field has been hampered by the lack of a conceptual approach combining structure and function and by a limited understanding of material synthesis. In the context of catalytic applications, properties and parameters such as conductivity range and bonding configuration should be characterized; such data, along with surface and textural information, can form the basis for the design of carbon-based catalyst supports.

Keywords: Carbon materials, catalysis, BET, capacitors, lithium-ion cell.

548 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results

Authors: C. Villegas-Quezada, J. Climent

Abstract:

Several works on facial recognition have dealt with methods which identify isolated characteristics of the face or with templates which encompass several regions of it. In this paper, a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other characteristics of the image pixels, generating a set of "p" variables. The multivariate data set is approximated with different polynomials minimizing the fitting error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), the method is able to circumvent the problem of dimensionality inherent to higher degree polynomial approximations. The GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials to each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.

Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.

547 Surrogate based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue in system design problems. Evolutionary algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, finding optimal solutions to complex, high-dimensional, multimodal problems often requires highly computationally expensive function evaluations and hence is practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by controlled use of meta-models to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations like model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also overcomes the high computational expense involved with the additional clustering requirements of the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
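
A bare-bones sketch of the surrogate-assisted loop in general (not DAFHEA-II itself): an SVM regressor is periodically refitted on exactly evaluated individuals and used to pre-screen offspring so that only promising candidates receive expensive true evaluations. The sphere function and all parameters here are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_fitness(x):                        # placeholder for the costly simulation
    return np.sum(x ** 2)

rng = np.random.default_rng(2)
dim, pop_size, generations = 10, 30, 20
pop = rng.uniform(-5, 5, (pop_size, dim))
archive_X = pop.copy()
archive_y = np.array([expensive_fitness(x) for x in pop])

for g in range(generations):
    surrogate = SVR(kernel="rbf").fit(archive_X, archive_y)   # refit the meta-model
    # Generate offspring by Gaussian mutation and rank them on the cheap surrogate.
    offspring = pop + rng.normal(0, 0.5, pop.shape)
    promising = offspring[np.argsort(surrogate.predict(offspring))[:pop_size // 3]]
    # Only the promising third is evaluated exactly and added to the archive.
    true_y = np.array([expensive_fitness(x) for x in promising])
    archive_X = np.vstack([archive_X, promising])
    archive_y = np.concatenate([archive_y, true_y])
    # Survivor selection among the best exactly-evaluated points seen so far.
    best = np.argsort(archive_y)[:pop_size]
    pop = archive_X[best]

print(archive_y.min())
```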

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

546 Studying the Environmental Effects of using Biogas Energy in Iran

Authors: Kambiz Tahvildari, Shakila ila Motamedi

Abstract:

Presently, and in line with the United Nations (EPA), human thinking has shifted towards clean fuels so as to maintain a cleaner environment and save our planet. One of the most successful lines of research on new energies involves the use of animal wastes and their organic residues, and the result of this research has been a set of very simple and cheap methods called biogas technology. Biogas technology has developed considerably in recent decades, owing to the high cost of fossil fuels and the greater attention countries pay to the environmental pollution caused by consuming them. Iran is ready for the optimized application of renewable energies, having rich resources of this kind, so a special place should be considered for them when making plans. The purpose of biogas technology is the recovery of energy and, ultimately, the protection of the environment, which is very appropriate for third-world farmers with respect to their technical abilities and economic potential. Studies show that the production and consumption of biogas are appropriate and economical in Iran because of the large amount of waste in the agriculture sector, the significant amount of animal and human excrement produced, the great volume of garbage produced and, most importantly, the specific social, climatic and agricultural conditions of Iran, all of which support moving towards a reduction of the pollution due to the use of fossil fuels.

Keywords: Agriculture, Biogas, Energy, Environment.

545 Competency-Based Social Work Practice and Challenges in Child Case Management: Studies in the Districts Social Welfare Services, Malaysia

Authors: S. Brahim, M. S. Mohamad, E. Zakaria, N. Sarnon@Kusenin

Abstract:

This study aimed to explore the practical experience of child welfare caseworkers and professionalism in child case management in Malaysia. This paper discusses the specific social work practice competencies and the challenges faced by child caseworkers in the field. The research was qualitative, with a grounded theory approach. Four sessions of focus group discussion (FGD) were conducted, involving a total of 27 caseworkers (child protectors and probation officers) in the Klang Valley. The study found four basic principles of knowledge in child case management, namely: 1. knowledge of child case management; 2. professional values of caseworkers towards children; 3. skills in managing cases; and 4. culturally competent practice in child case management. In addition, the major challenges faced by child case managers are the capacity and commitment of the family in children's rehabilitation programs, challenges to the credibility of caseworkers, and the support system within and between agencies. This study is important for policy makers, who should take into account the capacity and needs of child caseworkers in accordance with the national social work competency framework. It is expected that case management services for children will improve systematically in line with national standards.

Keywords: Social work practice, child case management, competency-based knowledge, and professionalism.

544 An Elaborate Survey on Node Replication Attack in Static Wireless Sensor Networks

Authors: N. S. Usha, E. A. Mary Anita

Abstract:

Recent innovations in technology have led to the use of wireless sensor networks in various applications; such networks consist of a number of small, low-cost, non-tamper-proof and resource-constrained sensor nodes. These nodes are often distributed and deployed in an unattended environment and collaborate with each other to share data or information. Among various applications, wireless sensor networks play a major role in monitoring the battlefield in military applications. As these non-tamper-proof nodes are deployed in unattended locations, they are vulnerable to many security attacks. Among these attacks, the node replication attack is one of the most threatening to network users. A node replication attack is launched by an attacker who captures one genuine node, duplicates its original certification and cryptographic materials, makes one or more copies of the captured node and places them at certain key positions in the network to monitor or disturb network operations. Preventing the occurrence of such node replication attacks in a network is a challenging task. In this survey article, we provide a classification of detection schemes and explore the various schemes proposed in each category. We also compare the detection schemes against certain evaluation parameters and discuss their limitations. Finally, we provide some suggestions for future research work against such attacks.

Keywords: Clone node, data security, detection schemes, node replication attack, wireless sensor networks.

543 Feasibility Study on Designing a Flat Loop Heat Pipe (LHP) to Recover the Heat from Exhaust of a Gas Turbine

Authors: M.H.Ghaffari

Abstract:

A theoretical study is conducted to design a Loop Heat Pipe (LHP) and explore the effect of different parameters, such as heat load, the tube size of the piping system, wick thickness, porosity and hole size, on its performance and capability. This paper presents a steady state model that describes the different phenomena inside an LHP. Loop Heat Pipes (LHPs) are two-phase heat transfer devices with capillary pumping of a working fluid. By their original design, compared with conventional heat pipes, and the special properties of their capillary structure, they are capable of transferring heat efficiently over distances of up to several meters at any orientation in the gravity field, or over several meters in a horizontal position. The theoretical model is described by different relations to satisfy important limits such as the capillary and nucleate boiling limits. An algorithm is developed to predict the size of the LHP satisfying the limitations mentioned above for a wide range of applied loads. Finally, to assess and evaluate the algorithm and all the relations considered, we used them to design a new kind of LHP to recover heat from the exhaust of an actual gas turbine. The results showed that the LHP can be used as a highly efficient device to recover heat even at high loads (the exhaust of a gas turbine). The sizes of all parts of the LHP were obtained using the developed algorithm.
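
The central design constraint in such an algorithm is the capillary limit: the maximum capillary pressure of the wick must exceed the sum of the loop pressure losses at the applied heat load. In standard notation (sigma the surface tension, r_eff the effective pore radius; not the paper's exact symbols):

```latex
\Delta P_{\mathrm{cap,max}} = \frac{2\sigma}{r_{\mathrm{eff}}}
\;\geq\;
\Delta P_{\mathrm{vap}} + \Delta P_{\mathrm{liq}} + \Delta P_{\mathrm{wick}} + \Delta P_{\mathrm{grav}},
```

which is what links the wick thickness, porosity and pore size studied above to the maximum heat load the LHP can transport.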

Keywords: Loop Heat Pipe, Heat Load, Liquid-Vapor Interface, Heat Transfer, Design Algorithm.

542 The Mechanism Underlying Empathy-Related Helping Behavior: An Investigation of Empathy-Attitude-Action Model

Authors: Wan-Ting Liao, Angela K. Tzeng

Abstract:

Empathy has been an important issue in psychology and education, as well as in cognitive neuroscience. Empathy has two major components: cognitive and emotional. The cognitive component refers to the ability to understand others' perspectives, thoughts, and actions, whereas the emotional component refers to understanding how others feel. Empathy can be induced, attitude can then be changed, and with enough attitude change, helping behavior can occur. This finding leads us to two questions: is attitude change really necessary for prosocial behavior, and what roles do cognitive and affective empathy play? For the second question, participants with different psychopathic personality (PP) traits are critical, because people high in PP were found to suffer only an affective empathy deficit; their cognitive empathy shows no significant difference from the control group. A total of 132 college students voluntarily participated in the current three-stage study. Stage 1 collected basic information, including the Interpersonal Reactivity Index (IRI), the Psychopathic Personality Inventory-Revised (PPI-R), an Attitude Scale, a Visual Analogue Scale (VAS), and demographic data. Stage 2 involved empathy induction with three controversial scenarios, namely domestic violence, depression with a suicide attempt, and an ex-offender. Participants read all three stories and then rewrote the stories from one of two perspectives (empathetic vs. objective). They then completed the VAS and Attitude Scale one more time to record their post-induction attitude and emotional status. Three IVs were introduced for data analysis: PP (high vs. low), responsibility (whether or not the character is responsible for what happened), and perspective-taking (empathic vs. objective). Stage 3 concerned action: participants were instructed to freely use the 17 tokens they received as donations. They were debriefed and interviewed at the end of the experiment. The major finding was that people with higher empathy tend to take more helping action, and that attitude change is not necessary for prosocial behavior. The controversiality of the scenarios and participants' familiarity with the target groups play very important roles. Finally, people with high PP tend to show more public prosocial behavior due to their affective empathy deficit. Pre-existing values and beliefs, as well as recent dramatic social events, seem to have a big impact and possibly reduce the effect of the independent variables (IVs) in our paradigm.

Keywords: Affective empathy, attitude, cognitive empathy, prosocial behavior, psychopathic traits.

541 Effect of Flowrate and Coolant Temperature on the Efficiency of Progressive Freeze Concentration on Simulated Wastewater

Authors: M. Jusoh, R. Mohd Yunus, M. A. Abu Hassan

Abstract:

Freeze concentration freezes or crystallises the water molecules out as ice crystals and leaves behind a highly concentrated solution. In conventional suspension freeze concentration, where ice crystals are formed as a suspension in the mother liquor, separation of the ice is difficult. The size of the ice crystals is also very limited, which requires the use of scraped-surface heat exchangers; these are very expensive and account for approximately 30% of the capital cost. This research uses a newer method, progressive freeze concentration, in which ice crystals are formed as a layer on a designed heat exchanger surface. In this particular research, a helically structured copper crystallisation chamber was designed and fabricated. The effects of two operating conditions, circulation flowrate and coolant temperature, on the performance of the newly designed crystallisation chamber were investigated. The performance of the design was evaluated by the effective partition constant, K, calculated from the volume and concentration of the solid and liquid phases. The system was also monitored by a data acquisition tool in order to follow the temperature profile throughout the process. On completing the experimental work, it was found that a higher flowrate results in a lower K, which translates into higher efficiency; the efficiency was highest at 1000 ml/min. It was also found that the process gives the highest efficiency at a coolant temperature of -6 °C.
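
For reference, the effective partition constant is commonly obtained from a solute mass balance over the advancing ice front; with K = C_S/C_L and K assumed constant, integration gives the usual relation fitted to the measured volumes and concentrations (this is the standard progressive freeze concentration treatment, not quoted from the paper):

```latex
K = \frac{C_S}{C_L},
\qquad
\ln\frac{C_L}{C_0} = (K - 1)\,\ln\frac{V_L}{V_0},
```

so plotting ln(C_L/C_0) against ln(V_L/V_0) yields K from the slope; K approaching 0 corresponds to nearly pure ice (high efficiency), while K approaching 1 means no concentration at all.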

Keywords: Freeze concentration, progressive freeze concentration, freeze wastewater treatment, ice crystals.

540 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico, Colombia in Google Earth Engine with Landsat and Sentinel 2 Images

Authors: Francisco Javier Reyes Salazar, Héctor Mauricio Ramírez

Abstract:

Coastal zones are home to mangrove swamps, coral reefs, and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important for humans in the provision of food, water, wood, and other associated goods and services, and they contribute to climate regulation. The lack of an automated model that generates information on the dynamics of changes in coastlines and coastal erosion is identified as the central problem. In this paper, coastlines from 1984 to 2020 were determined on the Google Earth Engine platform from Landsat and Sentinel-2 images. We then computed the Modified Normalized Difference Water Index (MNDWI) and used the Digital Shoreline Analysis System (DSAS) v5.0. Starting from the 2020 coastline, the 10-year prediction (year 2031) shows erosion of 238.32 hectares and accretion of 181.96 hectares, while the 20-year prediction (year 2041) shows erosion of 544.04 hectares and accretion of 133.94 hectares. The erosion and accretion of Playa Muelle in the municipality of Puerto Colombia were established; it will register the highest erosion value. The land cover that presented the greatest change was artificialized territories.
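
A minimal Google Earth Engine (Python API) sketch of the MNDWI step, MNDWI = (Green - SWIR) / (Green + SWIR); the Sentinel-2 collection ID, band pairing (B3/B11), dates, cloud filter, region and zero threshold below are illustrative assumptions rather than the exact parameters used in the study:

```python
import ee

ee.Initialize()

# Area of interest around Puerto Colombia / Tubará (coordinates are approximate).
aoi = ee.Geometry.Rectangle([-75.1, 10.85, -74.85, 11.1])

composite = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
             .filterBounds(aoi)
             .filterDate("2020-01-01", "2020-12-31")
             .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
             .median())

# MNDWI = (Green - SWIR1) / (Green + SWIR1); for Sentinel-2 these are bands B3 and B11.
mndwi = composite.normalizedDifference(["B3", "B11"]).rename("MNDWI")

# A water mask (MNDWI > 0 is a common starting threshold); the land/water boundary of
# this mask is what gets vectorised into a shoreline for DSAS transect analysis.
water = mndwi.gt(0)
```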

Keywords: Coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia.
