Search results for: optimization for learning and data analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15606

12816 MPSO based Model Order Formulation Technique for SISO Continuous Systems

Authors: S. N. Deepa, G. Sugumaran

Abstract:

This paper proposes a new version of Particle Swarm Optimization (PSO), namely Modified PSO (MPSO), for model order formulation of Single Input Single Output (SISO) linear time invariant continuous systems. In general PSO, the movement of a particle is governed by three behaviors, namely inertia, cognitive and social. The cognitive behavior helps the particle to remember its previously visited best position. The Modified PSO technique splits the cognitive behavior into two parts: the previously visited best position and the previously visited worst position. This modification helps the particle to search for the target more effectively. The MPSO approach is proposed for model order formulation of higher order systems. The method is based on the minimization of the error between the transient responses of the original higher order model and the reduced order model for a unit step input. The results obtained are compared with those of earlier techniques to validate the ease of computation. The proposed method is illustrated through a numerical example from the literature.
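As a rough illustration of the particle update described above, the sketch below splits the cognitive term into attraction towards the personal best and repulsion from the personal worst. The coefficients and the exact form of the repulsion term are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Minimal sketch of a Modified PSO (MPSO) style particle update: the cognitive
# behavior is split so the particle is attracted to its personal best and
# repelled from its personal worst position. Coefficients (w, c1, c1w, c2) and
# the repulsion form are assumed for illustration only.

def mpso_update(x, v, pbest, pworst, gbest, w=0.7, c1=1.5, c1w=0.5, c2=1.5):
    r1, r1w, r2 = np.random.rand(3)
    v_new = (w * v
             + c1 * r1 * (pbest - x)      # attraction to personal best
             - c1w * r1w * (pworst - x)   # repulsion from personal worst
             + c2 * r2 * (gbest - x))     # attraction to global best
    return x + v_new, v_new

x, v = np.zeros(2), np.zeros(2)
x, v = mpso_update(x, v, pbest=np.array([1.0, 1.0]),
                   pworst=np.array([-1.0, -1.0]), gbest=np.array([2.0, 0.5]))
print(x, v)
```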

Keywords: Continuous System, Model Order Formulation, Modified Particle Swarm Optimization, Single Input Single Output, Transfer Function Approach

12815 Parametric Investigation of Diode and CO2 Laser in Direct Metal Deposition of H13 Tool Steel on Copper Substrate

Authors: M. Khalid Imran, Syed Masood, Milan Brandt, Sudip Bhattacharya, Jyotirmoy Mazumder

Abstract:

In the present investigation, H13 tool steel has been deposited on a copper alloy substrate using both CO2 and diode lasers. A detailed parametric analysis has been carried out in order to find the optimum processing zone for depositing defect-free H13 tool steel on the copper alloy substrate. Following parametric optimization, the microstructure and microhardness of the deposited clads have been evaluated. SEM micrographs revealed a dendritic microstructure in both clads. However, the microhardness of the CO2 laser deposited clad was much higher than that of the diode laser deposited clad.

Keywords: CO2 laser, Diode laser, Direct Metal Deposition, Microstructure, Microhardness, Porosity.

12814 Patents as Indicators of Innovative Environment

Authors: S. Karklina, I. Erins

Abstract:

The main problem addressed is the very low innovation performance in Latvia. As a Member State of the European Union, Latvia must also fulfil the set targets and improve its innovation results. Universities are among the main actors providing the innovative capacity of a country, and university, industry and government need to cooperate to achieve the best results. Intellectual property is one of the indicators used to determine the innovation level of a country or organization, and patents are one of the characteristics of intellectual property. The objective of the article is to determine the indicators characterizing the innovative environment in Latvia and the influence of the development of universities on them. The methods used in the article to achieve these objectives are quantitative and qualitative analysis of the literature, statistical data analysis and graphical analysis.

Keywords: HEI, innovations, Latvia, patents.

12813 Review of Studies on Agility in Knowledge Management

Authors: Ferdi Sönmez, Başak Buluz

Abstract:

Agility in Knowledge Management (AKM) tries to capture agility requirements and their respective answers within the framework of knowledge and learning for organizations. Since it is a rather new construct, it is difficult to claim that it has been sufficiently discussed and analyzed in practical and theoretical realms. Like the term ‘agile learning’, it is commonly addressed in the software development and information technology fields and across the related areas where those technologies can be applied. The organizational perspective towards AKM seems to need some more time to become scholarly mature. Nevertheless, one can occasionally come across implicit usages of this term in the literature. This research aims to explore the conceptual background of agility in KM, re-conceptualize it and extend it to business applications with a special focus on e-business.

Keywords: Knowledge management, agility requirements, agility in knowledge management, knowledge.

12812 A Survey on Performance Tools for OpenMP

Authors: Mubarak S. Mohsen, Rosni Abdullah, Yong M. Teo

Abstract:

Advances in processor architecture, such as multicore designs, increase the complexity of parallel computer systems. With multi-core architectures there are different parallel languages that can be used to run parallel programs. One of these languages is OpenMP, which is embedded in C/C++ or FORTRAN. Because of this new architecture and its complexity, it is very important to evaluate the performance of OpenMP constructs, kernels, and application programs on multi-core systems. Performance analysis is the activity of collecting information about the execution characteristics of a program. Performance tools consist of at least three interfacing software layers, including instrumentation, measurement, and analysis. The instrumentation layer defines the measured performance events. The measurement layer determines what performance event is actually captured and how it is measured by the tool. The analysis layer processes the performance data and summarizes it into a form that can be displayed in performance tools. In this paper, a number of OpenMP performance tools are surveyed, explaining how each is used to collect, analyse, and display performance data.

Keywords: Parallel performance tools, OpenMP, multi-core.

12811 Delay-Dependent H∞ Performance Analysis for Markovian Jump Systems with Time-Varying Delays

Authors: Yucai Ding, Hong Zhu, Shouming Zhong, Yuping Zhang

Abstract:

This paper considers H∞ performance for Markovian jump systems with time-varying delays. The systems under consideration involve disturbance signals, Markovian switching and time-varying delays. By using a new Lyapunov-Krasovskii functional and a convex optimization approach, a delay-dependent stability condition in terms of a linear matrix inequality (LMI) is derived, which guarantees asymptotic stability in mean square and a prescribed H∞ performance index for the considered systems. Two numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results. All these results are expected to be of use in the study of stochastic systems with time-varying delays.
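For reference, a prescribed H∞ performance index of the kind mentioned above is typically expressed, under a zero initial condition, as the disturbance attenuation requirement below; the notation (controlled output z, disturbance w, attenuation level γ) is assumed here rather than taken from the paper:

\[
\mathbb{E}\left\{\int_0^{\infty} z^{\top}(t)\, z(t)\, dt\right\} \le \gamma^{2} \int_0^{\infty} w^{\top}(t)\, w(t)\, dt .
\]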

Keywords: H∞ performance, Markovian switching, Delay-dependent stability, Linear matrix inequality (LMI)

12810 CFD Modeling of High Temperature Seal Chamber

Authors: Mikhail P. Strongin, Ragupathi Soundararajan

Abstract:

The purpose of this work is fast design optimization of the seal chamber. The study includes the mass transfer between the lower and upper chambers of the seal chamber for hot water application pumps. The use of the Fluent 12.1 commercial code made it possible to capture complex flow with heat and mass transfer, radiation, Taylor instability, and buoyancy effects. The realizable k-epsilon model was used for turbulence modeling. Radiation heat losses were taken into account. The temperature distribution at the seal region is predicted with respect to heat addition. Results show the possibility of simplifying the model by excluding the water domain of the lower chamber from the calculations. CFD simulations make it possible to improve the seal chamber design to meet the target water temperature around the seal. This study can be used for the analysis of different seal chamber configurations.

Keywords: CFD, heat transfer, seal chamber, high temperature water

12809 A Unified Approach to Thermodynamics of Power Yield in Thermal, Chemical and Electrochemical Systems

Authors: S. Sieniutycz

Abstract:

This paper unifies power optimization approaches in various energy converters, such as thermal, solar, chemical, and electrochemical engines, in particular fuel cells. Thermodynamics leads to the converter's efficiency and limiting power. Efficiency equations serve to solve problems of upgrading and downgrading of resources. While optimization of steady systems applies the differential calculus and Lagrange multipliers, dynamic optimization involves variational calculus and dynamic programming. In reacting systems, chemical affinity constitutes a prevailing component of the overall efficiency, thus the power is analyzed in terms of an active part of chemical affinity. The main novelty of the present paper in the energy yield context consists in showing that the generalized heat flux Q (involving the traditional heat flux q plus the product of temperature and the sum of products of partial entropies and fluxes of species) plays in complex cases (solar, chemical and electrochemical) the same role as the traditional heat q in pure heat engines. The presented methodology is also applied to power limits in fuel cells, as systems which are electrochemical flow engines propelled by chemical reactions. The performance of fuel cells is determined by the magnitudes and directions of participating streams and the mechanism of electric current generation. Voltage lowering below the reversible voltage is a proper measure of cell imperfection. The voltage losses, called polarization, include the contributions of three main sources: activation, ohmic and concentration. Examples show power maxima in fuel cells and prove the relevance of the extension of the thermal machine theory to chemical and electrochemical systems. The main novelty of the present paper in the fuel cell context consists in introducing an effective or reduced Gibbs free energy change between products p and reactants s, which takes into account the decrease of voltage and power caused by the incomplete conversion of the overall reaction.
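In the notation of the description above, the generalized heat flux can be written as follows, where q is the traditional heat flux, T the temperature, and s_k and J_k the partial entropy and flux of species k; the symbols are assumed here for illustration:

\[
Q = q + T \sum_{k} s_{k} J_{k}.
\]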

Keywords: Power yield, entropy production, chemical engines, fuel cells, exergy.

12808 A New Protocol for Concealed Data Aggregation in Wireless Sensor Networks

Authors: M. Abbasi Dezfouli, S. Mazraeh, M. H. Yektaie

Abstract:

Wireless sensor networks (WSN) consist of many sensor nodes that are placed in unattended environments, such as military sites, in order to collect important information. Implementing a secure protocol that can prevent the forwarding of forged data and the modification of aggregated data content, while keeping the delay and the communication, computation and storage overhead low, is very important. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol, the network is divided into virtual cells, and the nodes within each cell produce a shared key to send and receive concealed data among themselves. Since data aggregation in each cell is performed locally and a secure authentication mechanism is implemented, the data aggregation delay is very low and malicious nodes cannot produce false data in the network. To evaluate the performance of the proposed protocol, we present computational models that show its performance and low overhead.
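The sketch below illustrates one common additively homomorphic concealment scheme from the CDA literature (a keyed stream added modulo M, in the style of Castelluccia-type encryption), showing how readings within a cell can be aggregated without exposing individual values. It is an illustrative assumption, not necessarily the protocol proposed in this paper.

```python
import random

# Additively homomorphic concealment within one virtual cell: each node hides
# its reading with its own key, the cell head sums ciphertexts, and the sink
# (which shares the keys) recovers the aggregated plaintext sum.

M = 2**32  # modulus large enough to hold the aggregated sum

def conceal(value, node_key):
    return (value + node_key) % M

def aggregate(ciphertexts):
    # the cell head adds ciphertexts without seeing any plaintext reading
    return sum(ciphertexts) % M

def reveal(aggregate_ct, node_keys):
    # the sink removes the sum of the per-node keys from the aggregate
    return (aggregate_ct - sum(node_keys)) % M

readings = [21, 23, 22, 25]                     # sensor readings in one cell
keys = [random.randrange(M) for _ in readings]  # shared with the sink
ct = [conceal(r, k) for r, k in zip(readings, keys)]
assert reveal(aggregate(ct), keys) == sum(readings)
```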

Keywords: Wireless Sensor Networks, Security, Concealed Data Aggregation.

12807 Optimization of Process Parameters of Pressure Die Casting using Taguchi Methodology

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

The present work analyses different parameters of pressure die casting to minimize the casting defects. Pressure die casting is usually applied for casting aluminium alloys. Good surface finish with the required tolerances and dimensional accuracy can be achieved by optimization of controllable process parameters such as solidification time, molten metal temperature, filling time, injection pressure and plunger velocity. Moreover, by selection of optimum process parameters, pressure die casting defects such as porosity, insufficient spread of molten material, flash, etc. are also minimized. Therefore, a pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of the parameter levels, have been accomplished using Taguchi's parameter design approach. The experiments have been performed as per the combinations of levels of the different process parameters suggested by an L18 orthogonal array. Analyses of variance have been performed for the mean and the signal-to-noise ratio to estimate the percentage contribution of the different process parameters. A confidence interval has also been estimated at the 95% confidence level, and three confirmation experiments have been performed to validate the optimum levels of the different parameters. An overall 2.352% reduction in defects has been observed with the suggested optimum process parameters.
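Since the defect response is to be minimized, the Taguchi analysis referred to above would typically use the smaller-the-better signal-to-noise ratio; the sketch below shows that calculation with made-up replicate values, not data from the study.

```python
import numpy as np

# Smaller-the-better signal-to-noise ratio used in Taguchi analysis when the
# response (here, casting defects) should be minimized:
#   S/N = -10 * log10( (1/n) * sum(y_i^2) )
# The defect counts below are illustrative values only.

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

trial_defects = [3.0, 2.0, 4.0]   # hypothetical replicates for one L18 run
print(round(sn_smaller_the_better(trial_defects), 3))
```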

Keywords: Aluminium Casting, Pressure Die Casting, Taguchi Methodology, Design of Experiments

12806 ECG Based Reliable User Identification Using Deep Learning

Authors: R. N. Begum, Ambalika Sharma, G. K. Singh

Abstract:

Identity theft has serious ramifications beyond data and personal information loss. This necessitates the implementation of robust and efficient user identification systems. Therefore, automatic biometric recognition systems are the need of the hour, and electrocardiogram (ECG)-based systems are unquestionably the best choice due to their appealing inherent characteristics. Convolutional Neural Networks (CNNs) are the recent state-of-the-art technique for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grows. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using a dense CNN learning framework. The proposed research explicitly explores the caliber of dense CNNs in the field of ECG-based human recognition. The study tests four different configurations of dense CNNs, which are trained on a dataset of recordings collected from eight popular ECG databases. With the highest False Acceptance Rate (FAR) of 0.04% and the highest False Rejection Rate (FRR) of 5%, the best performing network achieved an identification accuracy of 99.94%. The best network is also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable, but also highly efficient. Thus, they might also be implemented in real-time ECG-based human recognition systems.
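For readers unfamiliar with the two error rates quoted above, the snippet below shows how FAR and FRR are typically computed from match scores at a decision threshold; the scores and threshold are illustrative, not taken from the paper's experiments.

```python
import numpy as np

# FAR: fraction of impostor attempts wrongly accepted.
# FRR: fraction of genuine attempts wrongly rejected.

def far_frr(genuine_scores, impostor_scores, threshold):
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.mean(impostor >= threshold)   # impostors accepted
    frr = np.mean(genuine < threshold)     # genuine users rejected
    return far, frr

far, frr = far_frr([0.92, 0.88, 0.97], [0.12, 0.55, 0.31], threshold=0.6)
print(f"FAR={far:.2%}, FRR={frr:.2%}")
```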

Keywords: Biometrics, dense networks, identification rate, train/test split ratio.

12805 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid

Authors: Abdulla Rahil, Rupert Gammon, Neil Brown

Abstract:

The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to its instability. To solve this problem, energy storage devices could be used. Electrolytic hydrogen production from an electrolyser is considered a promising option since it is a clean energy source (zero emissions). Choosing flexible operation of an electrolyser (producing hydrogen during the off-peak electricity period and stopping at other times) could bring many benefits, such as reducing the cost of hydrogen and helping to balance the electricity system. This paper investigates the price of hydrogen under flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to investigate the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price could enhance the reduction of the hydrogen cost. These scenarios are: a standard tariff (one-tier system) during the day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; using off-peak electricity at a lower price (assumed 5 p/kWh) and shutting down the electrolyser at other times; and using lower-price electricity at off-peak times and high-price electricity at other times. This study looks at Derna city, which is located on the coast of the Mediterranean Sea (32° 46′ 0 N, 22° 38′ 0 E) with a high potential wind resource. Hourly wind speed data collected over 24½ years, from 1990 to 2014, were used in addition to hourly radiation and hourly electricity demand data collected over a one-year period, together with the petrol station data.
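A back-of-envelope comparison of the electricity cost per kilogram of hydrogen under the two tariffs named above is sketched below. The electrolyser specific energy consumption of roughly 50 kWh per kg H2 is an assumed typical value, not a figure taken from the study, so the resulting numbers are only indicative.

```python
# Electricity cost per kg of hydrogen under a standard and an off-peak tariff.
# SPECIFIC_ENERGY_KWH_PER_KG is an assumed typical electrolyser figure.

SPECIFIC_ENERGY_KWH_PER_KG = 50.0

def electricity_cost_per_kg(price_p_per_kwh):
    # pence per kWh times kWh per kg, converted to pounds per kg
    return price_p_per_kwh * SPECIFIC_ENERGY_KWH_PER_KG / 100.0

standard = electricity_cost_per_kg(12.0)   # 12 p/kWh standard tariff
off_peak = electricity_cost_per_kg(5.0)    #  5 p/kWh off-peak tariff
print(f"standard: GBP {standard:.2f}/kg, off-peak: GBP {off_peak:.2f}/kg")
```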

Keywords: Hydrogen filling station, off-peak electricity, renewable energy, electrolytic hydrogen.

12804 Demographic Factors Influencing Employees’ Salary Expectations and Labor Turnover

Authors: M. Osipova

Abstract:

Thanks to the development of information technologies, every sphere of the economy is becoming more and more data-centred, as people generate huge datasets containing information on every aspect of their lives. Applying research on such data to human resources management allows obtaining otherwise scarce statistics on the state of the labor market, including salary expectations and potential employees’ typical career behavior, and this information can become a reliable basis for management decisions. The following article presents the results of career behavior research based on freely accessible resume data. The information used for the study is much broader than that usually used in human resources surveys, which is why there is enough data for statistically significant results even in subgroup analysis.

Keywords: Human resources management, labor market, salary expectations, statistics, turnover.

12803 The Impacts of Off-Campus Students on Local Neighbourhood in Malaysia

Authors: Dasimah Bt Omar, Faizul Abdullah, Fatimah Yusof, Hazlina Hamdan, Naasah Nasrudin, Ishak Che Abullah

Abstract:

The impacts of near-campus student housing, or off-campus student accommodation, cannot be ignored by universities or by community officials. Numerous scholarly studies have highlighted the substantial economic impacts, whether direct, indirect or induced, and cumulatively the universities have contributed significantly to local economies. The issue of the impacts of off-campus student rental housing on neighbourhoods is one of long-standing but increasing concern in Malaysia. Statistically, Malaysia had approximately 1.2 - 1.5 million students in 2009. By the year 2015, it is expected that 50 per cent of the active population aged 18 to 30 will gain access to university education, amounting to 120,000 yearly. The objectives of the research are to assess the impacts of off-campus students on the local neighbourhood, and specifically to obtain information on the living and learning conditions of off-campus students of Universiti Teknologi MARA Shah Alam, Malaysia. It also aims to isolate those factors that may impede successful learning so that priority can be given to them in subsequent policy implementation and actions by government and higher education institutions.

Keywords: off-campus students, neighbourhood, impacts, living and learning conditions

12802 Simulation and Parameterization by the Finite Element Method of a C-Shaped Electromagnet for Application in the Characterization of Magnetic Properties of Materials

Authors: A. A. Velásquez, J. Baena

Abstract:

This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding for minimizing losses of magnetic flux density, two poles of high magnetic permeability, and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a square cross section of 4 cm² and a length of 5 cm, seeking a suitable set of parameters that allow a uniform magnetic flux density of 1×10⁴ Gauss or above to be achieved in the column, when the system operates at room temperature and with a current consumption not exceeding 5 A. By means of a magnetostatic analysis with the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a high dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5×10⁴ Gauss and 1.9×10⁴ Gauss according to the pole material used, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils and adjusting the spacing between the poles. This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect and Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
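As a first-order check of such a design (not the paper's finite element result), the flux density in the gap of a C-shaped magnetic circuit can be estimated from Ampère's law, neglecting leakage and fringing; here N is the total number of turns, I the coil current, l_gap the gap length, l_core the mean core path length and μ_r the relative permeability of the yoke and poles, all symbols being assumed for illustration:

\[
B \approx \frac{\mu_0 N I}{l_{\mathrm{gap}} + l_{\mathrm{core}}/\mu_r}.
\]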

Keywords: Electromagnet, Finite Element Method, Magnetostatics, Magnetometry, Modeling.

12801 Performance Evaluation of Al Jame’ Roundabout Using SIDRA

Authors: D. Muley, H. S. Al-Mandhari

Abstract:

This paper evaluates the performance of a multi-lane, four-legged modern roundabout operating in Muscat using the SIDRA model. The performance measures include the Degree of Saturation (DOS), average delay, and queue lengths. Geometric and traffic data were used for model preparation. Gap acceptance parameters, the critical gap and the follow-up headway, were used for calibration of the SIDRA model. The results from the analysis showed that the roundabout is currently experiencing delays of up to 610 seconds per vehicle with a DOS of 1.67 during the peak hour. Further, a sensitivity analysis of general and roundabout parameters, among them lane width, cruise speed, inscribed diameter, entry radius and entry angle, showed that the inscribed diameter is the most crucial factor affecting delay and DOS. Upgrading the roundabout to a fully signalized junction was found to be the suitable solution, which will serve future years with LOS C for the design year, having a DOS of 0.9 and an average control delay of 51.9 seconds per vehicle.

Keywords: Performance analysis, roundabout, sensitivity analysis, SIDRA.

12800 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications

Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong

Abstract:

This research aims to study a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. The strong convergence is guaranteed under the framework of Banach spaces with some additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. With a projection technique similar to that used for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the use of the duality mapping and its inverse to overcome the unavailability of the duality representation that is exploited by Hilbert space theorists. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions. We also show that the solution obtained from our process is the closest to the origin. Moreover, we give an illustrative numerical example to support our results.

Keywords: σ-asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space.

12799 Large-Deflection Analysis of Automotive Vehicle's Door Wiring Harness System Using Finite Element Method

Authors: Byeong-Sam Kim, Kangsu Lee, Kyoungwoo Park, Samir Ben Chaabane

Abstract:

A vehicle's door wiring harness arrangement structure is provided. In a vehicle's door, the wiring harness (W/H) system is arranged more towards the passenger compartment than the hinge and the weatherstrip. This article gives some insight into the dimensioning process, with special focus on large-deflection analysis of the wiring harness (W/H) in vehicle door structures for durability problems. A finite element analysis of the door wiring harness (W/H) is used to assess residual stresses and dimensional stability under flexible bending. Durability test data for slim test specimens were compared with the numerically predicted fatigue life for verification. The final lifing of the component combines the effects of these microstructural features with the complex stress state arising from the combined service loading and residual stresses.

Keywords: Large deflection, wiring harness system, finite element analysis, vehicle's door.

12798 Daily and Seasonal Changes of Air Pollution in Kuwait

Authors: H. Ettouney, A. AL-Haddad, S. Saqer

Abstract:

This paper focuses on the assessment of air pollution in Umm-Alhyman, Kuwait, which is located south of oil refineries, a power station, an oil field, and highways. The measurements were made over a period of four days in March and July in 2001, 2004, and 2008. The measured pollutants included methane and non-methane hydrocarbons (MHC, NMHC), CO, CO2, SO2, NOx, O3, and PM10. Also, meteorological parameters were measured, which include temperature, wind speed and direction, and solar radiation. Over the study period, data analysis showed increases in measured SO2, NOx and CO by factors of 1.2, 5.5 and 2, respectively. This is explained in terms of increases in industrial activities, motor vehicle density, and power generation. Predictions of the measured data were made with the ISC-AERMOD software package, using the ISCST3 model option. Finally, the measured data were compared against international standards.

Keywords: Air pollution, Emission inventory, ISCST3 model, Modeling

12797 The New Method of Concealed Data Aggregation in Wireless Sensor Networks: A Case Study

Authors: M. Abbasi Dezfouli, S. Mazraeh, M. H. Yektaie

Abstract:

Wireless sensor networks (WSN) consist of many sensor nodes that are placed in unattended environments, such as military sites, in order to collect important information. Implementing a secure protocol that can prevent the forwarding of forged data and the modification of aggregated data content, while keeping the delay and the communication, computation and storage overhead low, is very important. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol, the network is divided into virtual cells, and the nodes within each cell produce a shared key to send and receive concealed data among themselves. Since data aggregation in each cell is performed locally and a secure authentication mechanism is implemented, the data aggregation delay is very low and malicious nodes cannot produce false data in the network. To evaluate the performance of the proposed protocol, we present computational models that show its performance and low overhead.

Keywords: Wireless Sensor Networks, Security, Concealed Data Aggregation.

12796 Survival Model for Partly Interval-Censored Data with Application to Anti D in Rhesus D Negative Studies

Authors: F. A. M. Elfaki, Amar Abobakar, M. Azram, M. Usman

Abstract:

This paper discusses regression analysis of partly interval-censored failure time data, which occur in many fields including demographic, epidemiological, financial, medical and sociological studies. For this problem, we focus on the situation where the survival time of interest can be described by the additive hazards model in the presence of partly interval-censored data. A major advantage of the approach is its simplicity, and it can be easily implemented using the R software. Simulation studies are conducted and indicate that the approach performs well in practical situations and is comparable to existing methods. The methodology is applied to a set of partly interval-censored failure time data arising from anti-D in Rhesus D negative studies.
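For context, the additive hazards model referred to above is commonly written in the form below, where λ₀(t) is an unspecified baseline hazard, Z(t) the covariate vector and β the regression parameters; the notation is assumed here for illustration rather than taken from the paper:

\[
\lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z(t).
\]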

Keywords: Anti D in Rhesus D negative, Cox’s model, EM algorithm.

12795 Thermal Assessment of Outer Rotor Direct Drive Gearless Small-Scale Wind Turbines

Authors: Yusuf Yasa, Erkan Mese

Abstract:

This paper investigates the thermal issues of a permanent magnet synchronous generator, which is frequently used in direct-drive gearless small-scale wind turbine applications. The Permanent Magnet Synchronous Generator (PMSG) is designed with 2.5 kW continuous and 6 kW peak power. Then, considering the generator geometry, the mechanical design of the wind turbine is performed. Thermal analysis and optimization are carried out considering all wind turbine components to reach realistic results. This issue is extremely important in the research and development (R&D) process for wind turbine applications.

Keywords: Direct drive, gearless wind turbine, permanent magnet synchronous generator (PMSG), small-scale wind turbine, thermal management.

12794 Robust Design and Optimization of Production Wastes: An Application for Industries

Authors: Christopher C. Ihueze, Charles C. Okpala, Christian E. Okafor, Peter O. Ogbobe

Abstract:

This paper focuses on robust design and optimization of industrial production wastes. Past literature was reviewed, and Clamason Industries Limited (CIL), a leading ladder-tops manufacturer, was taken as a case study. A painstaking study of the firm's practices on the shop floor revealed that over-production, waiting time, excess inventory, and defects are the major wastes impeding its progress and profitability. Design-Expert 8 software was used to apply Taguchi robust design and response surface methodology in order to model, analyse and optimise the cost of wastes in CIL. Waiting time and overproduction rank first and second in contributing to the costs of wastes in CIL. For minimal waste cost, the control factors of overproduction, waiting time, defects and excess inventory must be set at 0.30, 390.70, 4 and 55.70, respectively, for CIL. The optimal value of the cost of wastes for the months studied was 22.3679. Finally, a recommendation was made that, for the company to enhance its profitability and customer satisfaction, it must adopt Shigeo Shingo's Single Minute Exchange of Dies (SMED), which will immediately tackle the waste of waiting by drastically reducing setup time.

Keywords: Excess-inventory, setup time, single minute exchange of dies, optimal value, over-production, robust design.

12793 A Computational Cost-Effective Clustering Algorithm in Multidimensional Space Using the Manhattan Metric: Application to the Global Terrorism Database

Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami

Abstract:

The increasing amount of collected data has limited the performance of current analysis algorithms. Thus, developing new cost-effective algorithms in terms of complexity, scalability, and accuracy has raised significant interest. In this paper, a modified, effective k-means based algorithm is developed and tested. The new algorithm aims to reduce the computational load without significantly affecting the quality of the clustering. The algorithm uses the City Block (Manhattan) distance and a new stopping criterion to guarantee convergence. Experiments conducted on a real data set show its high performance when compared with the original k-means version.
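The sketch below shows a k-means-style loop that assigns points with the Manhattan (City Block) distance. The component-wise median update and the simple "no reassignment" stop criterion are illustrative choices; the paper's algorithm uses its own stopping rule and optimizations.

```python
import numpy as np

# k-means-style clustering with the Manhattan (L1) distance. The component-wise
# median minimizes the within-cluster L1 cost, so it is used as the update step.

def manhattan_kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # assign each point to the center with the smallest L1 distance
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # simple stop criterion: no point changed cluster
        labels = new_labels
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = np.median(members, axis=0)
    return labels, centers

X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [8.5, 7.5]])
labels, centers = manhattan_kmeans(X, k=2)
print(labels, centers)
```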

Keywords: Pattern recognition, partitional clustering, K-means clustering, Manhattan distance, terrorism data analysis.

12792 Case Study Approach Using Scenario Analysis to Analyze Unabsorbed Head Office Overheads

Authors: K. C. Iyer, T. Gupta, Y. M. Bindal

Abstract:

Head office overhead (HOOH) is an indirect cost and is recovered through individual project billings by the contractor. Delay in a project impacts the absorption of the HOOH cost allocated to that particular project and thus diminishes the expected profit of the contractor. This unabsorbed HOOH cost is later claimed by contractors as damages. The subjective nature of the available formulae to compute unabsorbed HOOH is the difficulty that contractors and owners face, and thus they dispute it. The paper attempts to bring together the rationale of the various HOOH formulae by gathering a contractor's HOOH cost data on all of its projects, using a case study approach, and comparing the variations in HOOH values using scenario analysis. The case study approach uses project data collected from four construction projects of a contractor in India to calculate unabsorbed HOOH costs from the various available formulae. Scenario analysis provides further variations in HOOH values after considering two independent situations, mainly scope changes and new projects during the delay period. Interestingly, one of the findings in this study reveals that, in spite of HOOH getting absorbed by additional works available during the period of delay, a few formulae depict an increase in the value of unabsorbed HOOH, neglecting any absorption by the increase in scope. This indicates that these formulae are inappropriate for use in case of a change to the scope of work. Results of this study can help both parties decide on an appropriate formula more objectively, considering the events causing the delay on a project and the contractor's position in respect of obtaining new projects.
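One widely cited formula of the kind such studies compare is Hudson's formula, reproduced below for context; whether it is among the formulae applied to the four projects is not stated in the abstract, and the notation here is assumed:

\[
\text{Unabsorbed HOOH} = \frac{\text{HO and profit percentage}}{100} \times \frac{\text{Contract sum}}{\text{Contract period}} \times \text{Period of delay}.
\]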

Keywords: Absorbed and unabsorbed overheads, head office overheads, scenario analysis, scope variation

12791 A Pilot Study for the Optimization of Routes for Waste Collection Vehicles for the Göçmenköy District of Lefkoşa

Authors: Nergiz Fırıncı, Aysun Çelik, Ertan Akün, Md. Atif Khan

Abstract:

A pilot project was carried out in 2007 by the senior students of Cyprus International University, aiming to minimize the total cost of waste collection in Northern Cyprus. Many developed and developing countries have cut their transportation costs – which account for some 30-40% of the total – by as much as 40% by implementing network models for their route assignments. Accordingly, a network model was implemented in the Göçmenköy district to optimize and standardize waste collection work. The work environment of the employees was also redesigned to provide maximum ergonomics and to increase productivity, efficiency and safety. Following the collection of the required data, including waste densities, road lengths and population, a model was constructed to allocate the optimal route assignment for the waste collection trucks in the Göçmenköy district.
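To illustrate the kind of routing model such a project relies on, the toy sketch below orders collection points with a nearest-neighbour heuristic; the coordinates are made up and this is not the method used in the pilot project, only an example of route assignment logic.

```python
import math

# Nearest-neighbour heuristic: starting from the depot, repeatedly visit the
# closest unvisited collection point. A simple baseline for route assignment.

def nearest_neighbour_route(points, start=0):
    unvisited = set(range(len(points))) - {start}
    route, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[current], points[j]))
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return route

depot_and_bins = [(0, 0), (2, 1), (1, 4), (5, 2), (4, 5)]  # hypothetical points
print(nearest_neighbour_route(depot_and_bins))
```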

Keywords: Minimization, waste collection, operations cost, transportation, ergonomy, productivity.

12790 Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool

Authors: Florin Pop

Abstract:

Simulation is a very powerful method used for high-performance and high-quality design in distributed systems, and is now perhaps the only one, considering the heterogeneity, complexity and cost of distributed systems. In Grid environments, for example, it is hard or even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately sized test-bed is expensive and time-consuming. Scalability, reliability and fault-tolerance become important requirements for distributed systems in order to support distributed computation. A distributed system with such characteristics is called dependable. Large environments, like the Cloud, offer unique advantages, such as low cost and dependability, and satisfy QoS for all users. Resource management in large environments requires performant scheduling algorithms guided by QoS constraints. This paper presents the performance evaluation of scheduling heuristics guided by different optimization criteria. The algorithms for distributed scheduling are analyzed in order to satisfy user constraints while considering, at the same time, the independent capabilities of resources. This analysis acts like a profiling step for algorithm calibration. The performance evaluation is based on simulation. The simulator is MONARC, a powerful tool for large-scale distributed system simulation. The novelty of this paper consists in synthetic analysis results that offer guidelines for scheduler service configuration and support empirically based decisions. The results could be used in decisions regarding optimizations to existing Grid DAG scheduling and for selecting the proper algorithm for DAG scheduling in various actual situations.

Keywords: Scheduling, Simulation, Performance Evaluation, QoS, Distributed Systems, MONARC

12789 Performance Comparison of ADTree and Naive Bayes Algorithms for Spam Filtering

Authors: Thanh Nguyen, Andrei Doncescu, Pierre Siegel

Abstract:

Classification is an important data mining technique and can be used for data filtering in artificial intelligence. The broad applicability of classification to all kinds of data means it is used in nearly every field of modern life. Classification helps us group different items according to the features deemed interesting and useful. In this paper, we compare two classification methods, Naïve Bayes and ADTree, used to detect spam e-mail. This choice is motivated by the fact that the Naive Bayes algorithm is based on probability calculus while the ADTree algorithm is based on decision trees. The parameter settings of the above classifiers use the maximization of the true positive rate and the minimization of the false positive rate. The experimental results present classification accuracy and cost analysis with a view to optimal classifier choice for spam detection. The number of attributes is also examined to obtain a trade-off between the number of attributes and the classification accuracy.
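A minimal Naive Bayes spam filter in scikit-learn is sketched below, only to illustrate the probabilistic side of the comparison; the paper's own feature sets, datasets and ADTree implementation are not reproduced here, and the toy training data are invented.

```python
# Bag-of-words features fed to a multinomial Naive Bayes classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["win a free prize now", "cheap meds online",
               "meeting agenda attached", "lunch tomorrow?"]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)
print(model.predict(["free prize meeting"]))  # tiny toy data: output illustrative only
```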

Keywords: Classification, data mining, spam filtering, naive Bayes, decision tree.

12788 Promoting Non-Formal Learning Mobility in the Field of Youth

Authors: Juha Kettunen

Abstract:

The purpose of this study is to develop a framework for the assessment of research and development projects. The assessment map is developed in this study based on the strategy map of the balanced scorecard approach. The assessment map is applied in a project that aims to reduce the inequality and risk of exclusion of young people from disadvantaged social groups. The assessment map denotes that not only funding but also the necessary skills and qualifications should be carefully assessed in the implementation of project plans so as to achieve the objectives of the projects and the desired impact. The results of this study are useful for those who want to develop the implementation of the Erasmus+ Programme and for the project teams of research and development projects.

Keywords: Non-formal learning, youth work, social inclusion, innovation.

12787 An Efficient Data Mining Approach on Compressed Transactions

Authors: Jia-Yu Dai, Don-Lin Yang, Jungpin Wu, Ming-Chuan Hung

Abstract:

In an era of knowledge explosion, data grows rapidly day by day. Since data storage is a limited resource, how to reduce the data space required in the process becomes a challenging issue. Data compression provides a good solution which can lower the required space. Data mining has had many useful applications in recent years because it can help users discover interesting knowledge in large databases. However, existing compression algorithms are not appropriate for data mining. In [1, 2], two different approaches were proposed to compress databases and then perform the data mining process. However, they both lack the ability to decompress the data to its original state and to improve data mining performance. In this research, a new approach called Mining Merged Transactions with the Quantification Table (M2TQT) is proposed to solve these problems. M2TQT uses the relationships among transactions to merge related transactions and builds a quantification table to prune the candidate itemsets which cannot become frequent, in order to improve the performance of mining association rules. The experiments show that M2TQT performs better than existing approaches.
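The sketch below gives a simplified illustration of the idea behind merged transactions: identical transactions are merged with a multiplicity count, so support counting scans far fewer records, and candidates below the minimum support are pruned. It is only in the spirit of M2TQT and does not reproduce the authors' exact algorithm or quantification-table pruning rule.

```python
from collections import Counter
from itertools import combinations

# Merge identical transactions into (transaction, multiplicity) pairs and count
# itemset support against the merged table instead of the raw database.

transactions = [frozenset(t) for t in
                [("a", "b"), ("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]]
merged = Counter(transactions)          # merged transaction -> multiplicity

def support(itemset):
    return sum(count for t, count in merged.items() if itemset <= t)

min_support = 2
items = {i for t in merged for i in t}
frequent_pairs = [set(p) for p in combinations(sorted(items), 2)
                  if support(frozenset(p)) >= min_support]
print(frequent_pairs)   # e.g. [{'a', 'b'}] for this toy data
```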

Keywords: Association rule, data mining, merged transaction, quantification table.
