Search results for: New Technology Based Companies (NTBC)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13107

597 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained by using the JADE algorithm (the best separating algorithm) and are classified into either artifact components or neural components. A neural network is used for the classification of the obtained independent components. The network requires input features that faithfully represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate them. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data. First, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm is used, and secondly, a feed-forward neural network (FNN) classifier trained by a standard back-propagation algorithm is used for classification. The results show that JADE-FNN performs better than JADE-PNN.
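
A minimal sketch of this pipeline is given below. It uses FastICA as a stand-in for JADE (scikit-learn has no JADE implementation), Yule-Walker AR coefficients as features, and an MLP as the feed-forward network; the channel count, AR order and component labels are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from statsmodels.regression.linear_model import yule_walker

rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 8))          # synthetic multichannel EEG, samples x channels

# 1) Unmix into independent components (JADE in the paper; FastICA here as a stand-in).
ica = FastICA(n_components=8, random_state=0)
components = ica.fit_transform(eeg)           # samples x components

# 2) AR(6) coefficients of each component serve as classification features.
def ar_features(sig, order=6):
    rho, _ = yule_walker(sig, order=order, method="mle")
    return rho

X = np.array([ar_features(components[:, k]) for k in range(components.shape[1])])
y = np.array([1, 1, 0, 0, 0, 0, 0, 0])        # 1 = ocular artifact, 0 = neural (labels assumed)

# 3) Feed-forward network trained by back-propagation classifies the components; artifact
#    components would then be zeroed before reconstructing the cleaned EEG with the mixing matrix.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(X))
```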

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

596 Using Dynamic Glazing to Eliminate Mechanical Cooling in Multi-family Highrise Buildings

Authors: Ranojoy Dutta, Adam Barker

Abstract:

Multifamily residential buildings are increasingly being built with large glazed areas to provide tenants with greater daylight and outdoor views. However, traditional double-glazed window assemblies can lead to significant thermal discomfort from high radiant temperatures as well as increased cooling energy use to address solar gains. Dynamic glazing provides an effective solution by actively controlling solar transmission to maintain indoor thermal comfort, without compromising the visual connection to outdoors. This study uses thermal simulations across three Canadian cities (Toronto, Vancouver and Montreal) to verify if dynamic glazing along with operable windows and ceiling fans can maintain the indoor operative temperature of a prototype southwest facing high-rise apartment unit within the ASHRAE 55 adaptive comfort range for a majority of the year, without any mechanical cooling. Since this study proposes the use of natural ventilation for cooling and the typical building life cycle is 30-40 years, the typical weather files have been modified based on accepted global warming projections for increased air temperatures by 2050. Results for the prototype apartment confirm that thermal discomfort with dynamic glazing occurs only for less than 0.7% of the year. However, in the baseline scenario with low-E glass there are up to 7% annual hours of discomfort despite natural ventilation with operable windows and improved air movement with ceiling fans.
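
The comfort criterion used above can be illustrated with a short sketch of the ASHRAE 55 adaptive model, in which the comfort temperature follows the prevailing mean outdoor temperature and the 80% acceptability band is roughly plus/minus 3.5 °C around it; the hourly values below are made-up placeholders, not the paper's simulation output.

```python
def adaptive_comfort_limits(t_out_prevailing_mean):
    """80% acceptability band of the ASHRAE 55 adaptive model (valid ~10-33.5 C outdoors)."""
    t_comf = 0.31 * t_out_prevailing_mean + 17.8
    return t_comf - 3.5, t_comf + 3.5

def discomfort_fraction(t_operative_hours, t_out_prevailing_mean_hours):
    hours_out = 0
    for t_op, t_out in zip(t_operative_hours, t_out_prevailing_mean_hours):
        low, high = adaptive_comfort_limits(t_out)
        if not (low <= t_op <= high):
            hours_out += 1
    return hours_out / len(t_operative_hours)

# Example with three hours of placeholder data (indoor operative vs. prevailing outdoor temperature)
print(discomfort_fraction([26.0, 29.5, 24.0], [22.0, 18.0, 20.0]))
```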

Keywords: Electrochromic, operable windows, thermal comfort, natural ventilation, adaptive comfort.

595 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

The proposed method for speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on complexation of Fe(II) with 1,10-phenanthroline (OP), the complexing reagent for Fe(II), immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. Then, the adsorbents were easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplified the operation procedure and reduced the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II) with the modified nano magnetite. The detection limit and linear range of this method for iron were 1.0 and 9.0 - 175 ng.mL−1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng.mL-1 Fe2+ was 2.3%.

Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample.

594 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources are dependent on climatic variability, so for adequate energy planning, observations of the meteorological variables are required, preferably representing long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in the last decades, there is still a considerable lack of meteorological observations that form long-period series. Reanalysis is a data assimilation system based on general atmospheric circulation models that combines data collected at surface stations, ocean buoys, satellites and radiosondes, allowing the production of long-period data over a wide spatial coverage. The third generation of reanalysis data emerged in 2010; among them is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), whose data have a spatial resolution of 0.5° x 0.5°. In order to overcome the difficulties described above, this study aims to evaluate the performance of solar radiation estimation from alternative databases, such as reanalysis and meteorological satellite data, that satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The results of the analysis of the solar radiation data indicated that the reanalysis data of the CFSR model presented a good performance in relation to the observed data, with a coefficient of determination around 0.90. Therefore, it is concluded that these data have the potential to be used as an alternative source in locations without stations or long series of solar radiation, which is important for the evaluation of solar energy potential.
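
A minimal sketch of the validation metric quoted above, the coefficient of determination between reanalysis and observed solar radiation, is given below, computed here with the 1 - SSres/SStot convention; both series are placeholders, not the study's data.

```python
import numpy as np

def r_squared(observed, estimated):
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    ss_res = np.sum((observed - estimated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

obs = [18.2, 20.5, 22.1, 16.8, 14.3, 19.9]    # MJ/m2/day, placeholder station observations
cfsr = [17.6, 21.0, 21.5, 17.4, 15.0, 19.2]   # MJ/m2/day, placeholder CFSR reanalysis values
print(r_squared(obs, cfsr))
```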

Keywords: Climate, reanalysis, renewable energy, solar radiation.

593 Cubic Splines and Fourier Series Approach to Study Temperature Variation in Dermal Layers of Elliptical Shaped Human Limbs

Authors: Mamta Agrawal, Neeru Adlakha, K.R. Pardasani

Abstract:

An attempt has been made to develop a semi-numerical model to study temperature variations in the dermal layers of human limbs. The model has been developed for the two-dimensional steady-state case. The human limb has been assumed to have an elliptical cross section. The dermal region has been divided into three natural layers, namely epidermis, dermis and subdermal tissues. The model incorporates the effect of important physiological parameters like blood mass flow rate, metabolic heat generation, and thermal conductivity of the tissues. The outer surface of the limb is exposed to the environment, and it is assumed that heat loss takes place at the outer surface by conduction, convection, radiation, and evaporation. The temperature of the inner core of the limb is also assumed to vary with the lower atmospheric temperature. Appropriate boundary conditions have been framed based on the physical conditions of the problem. A cubic splines approach has been employed along the radial direction and a Fourier series along the angular direction to obtain the solution. The numerical results have been computed for different values of eccentricity resembling the elliptic cross sections of human limbs. The numerical results have been used to obtain the temperature profile and to study the relationships among the various physiological parameters.
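
The representation described above can be sketched as a truncated Fourier series in the angular direction whose harmonic amplitudes are interpolated by cubic splines in the radial direction; the node positions and coefficients below are invented placeholders, not the model's solution.

```python
import numpy as np
from scipy.interpolate import CubicSpline

r_nodes = np.array([0.0, 0.01, 0.02, 0.03])    # core and layer interface radii, m (placeholder)
harmonics = {                                  # placeholder Fourier coefficients a_n(r) at the nodes
    0: np.array([37.0, 36.2, 34.8, 33.0]),
    1: np.array([0.0, 0.3, 0.6, 0.9]),
    2: np.array([0.0, 0.1, 0.2, 0.3]),
}
splines = {n: CubicSpline(r_nodes, a) for n, a in harmonics.items()}

def temperature(r, theta):
    """T(r, theta) = sum_n a_n(r) * cos(n*theta): splines in r, Fourier series in theta."""
    return sum(splines[n](r) * np.cos(n * theta) for n in splines)

print(temperature(0.025, np.pi / 4))
```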

Keywords: Blood Mass Flow Rate, Metabolic Heat Generation, Fourier Series, Cubic splines and Thermal Conductivity.

592 Decision-Making Strategies on Smart Dairy Farms: A Review

Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh

Abstract:

Farm management and operations will drastically change due to access to real-time data, real-time forecasting and tracking of physical items in combination with Internet of Things (IoT) developments to further automate farm operations. Dairy farms have embraced technological innovations and procured vast amounts of permanent data streams during the past decade; however, the integration of this information to improve the whole farm decision-making process does not exist. It is now imperative to develop a system that can collect, integrate, manage, and analyze on-farm and off-farm data in real-time for practical and relevant environmental and economic actions. The developed systems, based on machine learning and artificial intelligence, need to be connected for useful output, a better understanding of the whole farming issue and environmental impact. Evolutionary Computing (EC) can be very effective in finding the optimal combination of sets of some objects and finally, in strategy determination. The system of the future should be able to manage the dairy farm as well as an experienced dairy farm manager with a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming as well as improving and maintaining good animal welfare and the quality of dairy products. This review aims to provide an insight into the state-of-the-art of big data applications and EC in relation to smart dairy farming and identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management and its uptake has become a continuing trend.

Keywords: Big data, evolutionary computing, cloud, precision technologies

591 An Obesity Index Derived from Waist and Hip Circumferences Well-Matched with Other Indices in Children with Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Indices derived from anthropometric measurements [waist-to-hip ratio (WHR)] or body fat mass compositions [trunk-to-leg fat ratio (TLFR)] are used for the evaluation of obesity. The best for clinical practices is still being investigated. The aim of this study is to derive an index, which best suits the purpose for the discrimination of children with normal body mass index (N-BMI) from obese (OB) children. 83 children participated in the study. Groups 1 and 2 comprised 42 children with N-BMI and 41 OB children, whose age- and sex-adjusted BMI percentile values vary between 15-85 and 95-99, respectively. The institutional ethics committee approved the study protocol. Informed consent forms were filled by the parents of the participants. Anthropometric measurements (weight, height (Ht), waist circumference (WC), hip circumference (HC), neck circumference (NC) values) were taken. BMI, WHR, (WC+HC)/2, WC/Ht, (WC/HC)/Ht, WC*NC were calculated. Bioelectrical impedance analysis was performed to obtain body’s fat compartments in terms of total fat, trunk fat, leg fat, arm fat masses. TLFR, trunk-to-appendicular fat ratio (TAFR), (trunk fat+leg fat)/2 ((TF+LF)/2), fat mass index (FMI) and diagnostic obesity notation model assessment-II (D2I) index values were calculated. Statistical analysis was performed. Significantly higher values of (WC+HC)/2, (TF+LF)/2, D2I and FMI were observed in OB group than N-BMI group. Significant correlations were found between BMI and WC, (WC+HC)/2, (TF+LF)/2, TLFR, TAFR, D2I, FMI in both groups. Similar correlations were obtained for WC. (WC+HC)/2 was correlated with TLFR, TAFR, (TF+LF)/2, D2I and FMI in N-BMI group. In OB group, the correlations were the same except those with TLFR and TAFR. These correlations were not present with WHR. Correlations were observed between TLFR as well as TAFR and BMI, WC, (WC+HC)/2, (TF+LF)/2, D2I, FMI in N-BMI group. In OB group, correlations between TLFR or TAFR and BMI, WC as well as (WC+HC)/2 were missing. None was noted with WHR. In conclusion, the only correlation valid in both groups was that exists between (TF+LF)/2 and (WC+HC)/2, which was suggested as a link between fat-based and anthropometric indices. (WC+HC)/2, but not WHR, was much more suitable as an anthropometric obesity index.

Keywords: Children, hip circumference, obesity, waist circumference.

590 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Recent years have witnessed the rapid development of the Internet and telecommunication techniques. Information security is becoming more and more important. Applications such as covert communication, copyright protection, etc., stimulate the research of information hiding techniques. Traditionally, encryption is used to realize communication security. However, important information is not protected once decoded. Steganography is the art and science of communicating in a way which hides the existence of the communication. Important information is first hidden in a host data, such as a digital image, video or audio, and then transmitted secretly to the receiver. In this paper, a data hiding model with high security features combining both cryptography, using a finite state sequential machine, and image based steganography for communicating information more securely between two locations is proposed. The authors incorporated the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of a finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cut. Each part of the encoded secret information is embedded with the help of a novel image steganographic method (PMM) on different cuts of the cover image to form different stego objects. Finally, the stego image is formed by combining the different stego objects and transmitted to the receiver side. At the receiving end, the reverse processes are run to recover the original secret message.
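
The finite-state (Mealy machine) encryption stage can be illustrated conceptually as below: the output bit depends on the current state and the input bit, and the shared secret key seeds the start state. The transition and output tables are invented for illustration (each state's output map is invertible, so a receiver with the same machine and key can decode); the paper's actual machine and the PMM embedding step are not reproduced here.

```python
def mealy_encrypt(bits, key_state=0):
    # state transition and output tables over the input alphabet {0, 1} (illustrative only)
    next_state = {0: {0: 1, 1: 2}, 1: {0: 2, 1: 0}, 2: {0: 0, 1: 1}}
    output     = {0: {0: 1, 1: 0}, 1: {0: 0, 1: 1}, 2: {0: 1, 1: 0}}
    state, out = key_state, []
    for b in bits:
        out.append(output[state][b])   # output depends on current state and input symbol
        state = next_state[state][b]
    return out

secret = [1, 0, 1, 1, 0, 0, 1]
print(mealy_encrypt(secret, key_state=2))   # encoded stream that would be handed to the PMM embedder
```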

Keywords: Cover Image, Finite state sequential machine, Mealy machine, Pixel Mapping Method (PMM), Stego Image, NCUT.

589 An Evaluation of the Feasibility of Several Industrial Wastes and Natural Materials as Precursors for the Production of Alkali Activated Materials

Authors: O. Alelweet, S. Pavia

Abstract:

In order to face current compelling environmental problems affecting the planet, the construction industry needs to adapt. It is widely acknowledged that there is a need for durable, high-performance, low-greenhouse gas emission binders that can be used as an alternative to Portland cement (PC) to lower the environmental impact of construction. Alkali activated materials (AAMs) are considered a more sustainable alternative to PC materials. The binders of AAMs result from the reaction of an alkali metal source and a silicate powder or precursor which can be a calcium silicate or an aluminosilicate-rich material. This paper evaluates the particle size, specific surface area, chemical and mineral composition and amorphousness of silicate materials (most industrial waste locally produced in Ireland and Saudi Arabia) to develop alkali-activated binders that can replace PC resources in specific applications. These include recycled ceramic brick, bauxite, illitic clay, fly ash and metallurgical slag. According to the results, the wastes are reactive and comply with building standards requirements. The study also evidenced that the reactivity of the Saudi bauxite (with significant kaolinite) can be enhanced on thermal activation; and high calcium in the slag will promote reaction; which should be possible with low alkalinity activators. The wastes evidenced variable water demands that will be taken into account for mixing with the activators. Finally, further research is proposed to further determine the reactive fraction of the clay-based precursors.

Keywords: Reactivity, water demand, alkali-activated materials, brick, bauxite, illitic clay, fly ash, slag.

588 Nitrification Efficiency and Community Structure of Municipal Activated Sewage Sludge

Authors: Oluyemi O. Awolusi, Abimbola M. Enitan, Sheena Kumari, Faizal Bux

Abstract:

Nitrification is essential to biological processes designed to remove ammonia and/or total nitrogen. It removes excess nitrogenous compounds in wastewater, which could be very toxic to aquatic fauna or cause a serious imbalance of the aquatic ecosystem. Efficient nitrification is linked to an in-depth knowledge of the structure and dynamics of the nitrifying community within wastewater treatment systems. In this study, molecular techniques were employed to characterize the microbial structure of activated sludge [ammonia oxidizing bacteria (AOB) and nitrite oxidizing bacteria (NOB)] in a municipal wastewater treatment plant, with the intention of linking it to plant efficiency. PCR-based phylogenetic analysis was also carried out. The average operating and environmental parameters as well as the specific nitrification rate of the plant were investigated during the study. During the investigation, the average temperature was 23±1.5 °C. Other operational parameters, such as mixed liquor suspended solids and chemical oxygen demand, inversely correlated with ammonia removal. The dissolved oxygen level in the plant was constantly lower than the optimum (between 0.24 and 1.267 mg/l) during this study. The plant was treating wastewater with influent ammonia concentrations of 31.69 and 24.47 mg/L. The influent flow rate was 96.81 ML/day during the period. The dominant nitrifiers included Nitrosomonas spp., Nitrobacter spp. and Nitrospira spp. The AOB correlated with nitrification efficiency and temperature. This study shows that the specific ammonia oxidation rate and the specific nitrate formation rate can serve as good indicators of the plant's overall nitrification performance.

Keywords: Ammonia monooxygenase α-subunit (amoA) gene, ammonia-oxidizing bacteria (AOB), nitrite-oxidizing bacteria (NOB), specific nitrification rate, PCR.

587 Efficiency of Wood Vinegar Mixed with Some Plants Extract against the Housefly (Musca domestica L.)

Authors: U. Pangnakorn, S. Kanlaya

Abstract:

The efficiency of wood vinegar mixed with each of three plant extracts, namely citronella grass (Cymbopogon nardus), neem seed (Azadirachta indica A. Juss), and yam bean seed (Pachyrhizus erosus Urb.), was tested against the second instar larvae of the housefly (Musca domestica L.). Steam distillation was used for extraction of the citronella grass, while neem and yam bean were simply extracted by fermentation with ethyl alcohol. Toxicity was evaluated in the laboratory based on two methods of larvicidal bioassay: the topical application method (contact poison) and the feeding method (stomach poison). Larval mortality was observed daily, and larval survivability was recorded until the surviving larvae developed into pupae and adults. The study showed that the treatment of wood vinegar mixed with citronella grass gave the highest larval mortality by the topical application method (50.0%) and by the feeding method (80.0%). However, the treatment of wood vinegar mixed with neem seed showed the longest pupal duration, at 25 days and 32 days for the topical application method and feeding method, respectively. Additionally, the larval duration of treated M. domestica larvae was extended to 13 days for the topical application method and 11 days for the feeding method. Thus, the feeding method gave higher efficiency compared with the topical application method.

Keywords: Housefly (Musca domestica L.), neem seed (Azadirachta indica), citronella grass (Cymbopogon nardus), yam bean seed (Pachyrhizus erosus), mortality.

586 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly a hundred per million of the Filipino population are diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods were discovered and used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, which is yet to be proven and is still debated as a reliable method for detecting early stages of microalbuminuria. This research study involves the use of the liquid chromatography concept in a microfluidic instrument with a biosensor, as the means of separation and detection respectively, and linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that quantifies and detects patients' urinary albumin while reducing the amount of volume used per five test samples. For this study, 30 urine samples of unknown albumin concentrations were tested using the VITROS Analyzer and the microfluidic system for comparison. Based on the data shared by both methods, the actual vs. predicted regression produced a positive linear relationship with an R2 of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted values and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume, sample and reagents combined, compared to the VITROS Analyzer per five test samples.
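
A short sketch of the actual-versus-predicted regression used to compare the two methods is given below; the paired albumin values are placeholders, not the study's 30 patient samples.

```python
import numpy as np

vitros = np.array([10.0, 25.0, 40.0, 80.0, 150.0, 300.0])        # reference method (placeholder)
microfluidic = np.array([10.9, 27.2, 43.1, 87.5, 163.0, 327.0])  # proposed system (placeholder)

slope, intercept = np.polyfit(vitros, microfluidic, 1)
pred = slope * vitros + intercept
r2 = 1 - np.sum((microfluidic - pred) ** 2) / np.sum((microfluidic - microfluidic.mean()) ** 2)
print(f"y = {slope:.2f}x + {intercept:.2f}, R^2 = {r2:.4f}")     # study reports y = 1.09x + 0.07, R^2 = 0.9995
```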

Keywords: Chronic kidney disease, microfluidics, linear regression, VITROS analyzer, urinary albumin.

585 Soil-Structure Interaction Models for the Reinforced Foundation System: A State-of-the-Art Review

Authors: Ashwini V. Chavan, Sukhanand S. Bhosale

Abstract:

Challenges of weak soil subgrades are often resolved either by stabilizing or by reinforcing them. However, it is also common practice to reinforce the granular fill to improve its load-settlement behavior over weak soil strata. The inclusion of reinforcement in the engineered granular fill provided a new impetus for the development of enhanced Soil-Structure Interaction (SSI) models, also known as mechanical foundation models or lumped parameter models. Several researchers have been working in this direction to understand the mechanism of granular fill-reinforcement interaction and the response of weak soil under the application of load. These models have been developed by extending available SSI models such as the Winkler Model, Pasternak Model, Hetenyi Model, Kerr Model, etc., and are helpful to visualize the load-settlement behavior of a physical system through 1-D and 2-D analysis, considering a beam and a plate resting on the foundation, respectively. Based on the literature survey, these models are categorized as ‘Reinforced Pasternak Model,’ ‘Double Beam Model,’ ‘Reinforced Timoshenko Beam Model,’ and ‘Reinforced Kerr Model’. The present work reviews the past 30+ years of research in the field of SSI models for reinforced foundation systems, presenting the conceptual development of these models systematically and discussing their limitations. A flow chart showing the procedure for computation of deformation and mobilized tension is also incorporated in the paper. Special efforts are taken to tabulate the parameters and their significance in the load-settlement analysis, which may be helpful in future studies for the comparison and enhancement of results and findings of physical models.
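
As a baseline for the models listed above, the sketch below solves the simplest case, a beam on a Winkler foundation (EI w'''' + k w = q), by finite differences; the reinforced Pasternak/Kerr-type extensions add shear and mobilized-tension terms on top of this. Stiffness, load and grid values are invented placeholders.

```python
import numpy as np

EI = 1.0e7        # flexural rigidity of the footing/fill beam, N*m^2 (placeholder)
k = 5.0e6         # Winkler subgrade modulus x beam width, N/m^2 (placeholder)
q = 50.0e3        # distributed load, N/m (placeholder)
L, n = 10.0, 201  # beam length (m) and number of grid points
h = L / (n - 1)

A = np.zeros((n, n))
b = np.full(n, q * h**4 / EI)
for i in range(2, n - 2):      # interior nodes: central difference for EI*w'''' + k*w = q
    A[i, i - 2:i + 3] = [1.0, -4.0, 6.0 + k * h**4 / EI, -4.0, 1.0]
# simply supported ends: w = 0 and w'' = 0
A[0, 0], b[0] = 1.0, 0.0
A[n - 1, n - 1], b[n - 1] = 1.0, 0.0
A[1, 0:3], b[1] = [1.0, -2.0, 1.0], 0.0
A[n - 2, n - 3:n], b[n - 2] = [1.0, -2.0, 1.0], 0.0

w = np.linalg.solve(A, b)
print(f"maximum settlement ~ {w.max():.4f} m")   # tends toward q/k away from the supports
```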

Keywords: geosynthetics, mathematical modeling, reinforced foundation, soil-structure interaction, ground improvement, soft soil

584 Bio-Inspired Design Approach Analysis: A Case Study of Antoni Gaudi and Santiago Calatrava

Authors: Marzieh Imani

Abstract:

Antoni Gaudi and Santiago Calatrava have a reputation for designing bio-inspired, creative and technical buildings. Even though they have followed different, independent approaches towards design, the source of bio-inspiration seems to be common. Taking a closer look at their projects reveals that Calatrava has been influenced by Gaudi in terms of interpreting nature and applying natural principles into the design process. This research firstly discusses the dialogue between Biomimicry and architecture. The review also explores the human/nature discourse throughout history by focusing on how nature revealed itself to the fine arts. This is explained by introducing naturalism and the romantic style in architecture as the outcome of designers’ inclination towards nature. The literature review covers both the theoretical background and the practical illustration of nature. The most dominant practical aspects of imitating nature are form and function. The reflection of nature in architectural science has resulted in shaping different architectural styles such as organic, green, sustainable, bionic, and biomorphic. By defining a set of common aspects of Gaudi’s and Calatrava’s design approaches and by considering biomimetic design categories (organism, ecosystem, and behaviour as the main divisions and form, function, process, material, and construction as subdivisions), Gaudi’s and Calatrava’s projects have been analysed. This analysis explores whether their design approaches are equivalent or different. Based on this analysis, Gaudi’s architecture can be recognised as biomorphic while Calatrava’s projects are literally biomimetic. Referring to these architects, this review suggests a new set of principles by which a bio-inspired project can be determined to be either biomorphic or biomimetic.

Keywords: Biomimicry, Calatrava, Gaudi, nature.

583 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of the Eigen values of covariance matrices, the Circular Hough transform and Bresenham's raster scan algorithm. In this approach, we use the fact that the large Eigen values and small Eigen values of the covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which uses the geometrical symmetry property. This method does not require the evaluation of tangents or curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, comparisons with the Hough transform and its variants, and other tangential-based methods are reported.
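
The first ingredient of the scheme, estimating the axial lengths and orientation from the eigenvalues of the covariance matrix of edge-point coordinates, can be sketched as follows on synthetic points; the Hough-transform centre detection and raster-scan verification steps are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 500)
a, b, phi = 5.0, 2.0, np.deg2rad(30)                  # true semi-axes and orientation of the test ellipse
x = a * np.cos(t) * np.cos(phi) - b * np.sin(t) * np.sin(phi) + 10
y = a * np.cos(t) * np.sin(phi) + b * np.sin(t) * np.cos(phi) - 4
pts = np.column_stack([x, y])

cov = np.cov(pts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                # eigenvalues in ascending order
# for points uniform in the parameter t, the variance along each principal axis is (semi-axis)^2 / 2
semi_minor, semi_major = np.sqrt(2 * eigvals)
orientation = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1])) % 180
print(semi_major, semi_minor, orientation)            # ~5, ~2, ~30 degrees
```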

Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.

582 Simulating Flow Transients in Conveying Pipeline Systems by Rigid Column and Full Elastic Methods: Pump Combined with Air Chamber

Authors: I. Abuiziah, A. Oulhaj, K. Sebari, D. Ouazar, A. A. Saber

Abstract:

In water pipeline systems, flow control is an integral part of the operation, for instance, opening and closing valves and starting and stopping pumps. When these operations are performed very quickly, they cause hydraulic transient phenomena, which may lead to pump and valve failures and catastrophic pipe ruptures. Fluid transient analysis is one of the more challenging and complicated flow problems in the design and operation of water pipeline systems. Transient control has become an essential requirement for ensuring safe operation of water pipeline systems. An accurate analysis and suitable protection devices should be used to protect water pipeline systems. The fourth-order Runge-Kutta method has been used to solve the dynamic and continuity equations in the rigid column method, while the method of characteristics is used to solve these equations in the full elastic method. This paper presents the problem of modeling and simulating transient phenomena in conveying pipeline systems based on the rigid column and full elastic methods. It also examines the influence of protection devices in protecting pipeline systems from damage due to the pressure surges which occur in the transient state. The results obtained show that the model is an efficient tool for flow transient analysis and that the two methods provide approximately identical results. Moreover, using the closed surge tank reduces the unfavorable effects of transients.
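
A minimal sketch of the rigid column calculation is shown below: the incompressible momentum equation dV/dt = g*dH/L - f*V*|V|/(2*D) integrated with the classical fourth-order Runge-Kutta scheme. Pipe data and the driving head are invented placeholders; the elastic model (method of characteristics) and the air-chamber boundary condition are not reproduced.

```python
g, L, D, f = 9.81, 1000.0, 0.5, 0.02      # m/s^2, pipe length (m), diameter (m), friction factor

def dV_dt(V, head_difference):
    return g * head_difference / L - f * V * abs(V) / (2 * D)

def rk4_step(V, head_difference, dt):
    k1 = dV_dt(V, head_difference)
    k2 = dV_dt(V + 0.5 * dt * k1, head_difference)
    k3 = dV_dt(V + 0.5 * dt * k2, head_difference)
    k4 = dV_dt(V + dt * k3, head_difference)
    return V + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

V, dt = 2.0, 0.1                          # initial velocity (m/s), time step (s)
for step in range(50):                    # sudden loss of driving head, e.g. after a pump trip
    V = rk4_step(V, head_difference=-20.0, dt=dt)
print(f"velocity after {50 * dt:.1f} s: {V:.3f} m/s")
```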

Keywords: Flow transient, Pipeline, Air chamber, Numerical model, Protection devices, Elastic method, Rigid column method.

581 Increasing Sustainability Using the Potential of Urban Rivers in Developing Countries with a Biophilic Design Approach

Authors: Mohammad Reza Mohammadian, Dariush Sattarzadeh, Mir Mohammad Javad Poor Hadi Hosseini

Abstract:

Population growth, urban development and urban build-up have disturbed the balance between nature and the city, leading to a loss of the quality and sustainability of proximity to rivers, whereas in the past the sides of urban rivers were considered urban green space. Urban rivers and their banks, which have environmental, social and economic values, are important for achieving sustainable development. So far, efforts have been made at various scales in various cities around the world to revitalize these areas. On the other hand, biophilic design is an innovative design approach in which attention to natural details and the relation to nature is a fundamental concept. The purpose of this study is to provide an integrated framework of urban design using the potential of urban rivers (in order to increase sustainability) with a biophilic design approach, to be used in cities in developing countries. The methodology of the research is based on the collection of data and information from research and projects, including a study on biophilic design, investigations and projects related to urban rivers, and a review of the literature on sustainable urban development. The study of the boundaries of urban rivers is then completed by examining case samples. Eventually, an integrated framework of urban design for the boundaries of urban rivers in the cities of developing countries is presented, with regard to the factors affecting the design of these areas. The results show that, according to this framework, the potential of the river banks is utilized to increase not only environmental sustainability but also social, economic and physical stability, with regard to water, light, the use of indigenous materials, etc.

Keywords: Urban rivers, biophilic design, urban sustainability, nature.

580 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate has increased dramatically. Young adults are most vulnerable to the mental health effects of the pandemic. Lower-income families have a higher ratio to be diagnosed with depression than the general population, but less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy with an interdisciplinary approach by incorporating clinical practices defined by American Psychiatric Association (APA) as well as multimodal AI framework. The proposed approach detected the nine depression symptoms with Natural Language Processing sentiment analysis and a symptom-based Lexicon uniquely designed for young adults. The experiments were conducted on the multimedia survey results from adolescents and young adults and unbiased Twitter communications. The result was further aggregated with the facial emotional cues analyzed by the Convolutional Neural Network on the multimedia survey videos. Five experiments each conducted on 10k data entries reached consistent results with an average accuracy of 88.31%, higher than the existing natural language analysis models. This approach can reach 300+ million daily active Twitter users and is highly accessible by low-income populations to promote early depression detection to raise awareness in adolescents and young adults and reveal complementary cues to assist clinical depression diagnosis.

Keywords: Artificial intelligence, depression detection, facial emotion recognition, natural language processing, mental disorder.

579 Methodology of the Turkey’s National Geographic Information System Integration Project

Authors: Buse A. Ataç, Doğan K. Cenan, Arda Çetinkaya, Naz D. Şahin, Köksal Sanlı, Zeynep Koç, Akın Kısa

Abstract:

With their spatial data reliability, interpretation and querying capabilities, Geographical Information Systems (GIS) make significant contributions to scientists, planners and practitioners. Geographic information systems have received great attention in today's digital world, are growing rapidly, and are being used with increasing efficiency. Access to and use of current and accurate geographical data, which are the most important components of a Geographical Information System, has become a necessity rather than a need for sustainable and economic development. This project aims to enable the sharing of data collected by public institutions and organizations on a web-based platform. Within the scope of the project, the INSPIRE (Infrastructure for Spatial Information in the European Community) data specifications are considered as a road map. In this context, Turkey's National Geographic Information System (TUCBS) Integration Project supports sharing spatial data within 61 pilot public institutions in compliance with defined national standards. In this paper, which has been prepared by the project team members of the TUCBS Integration Project, the technical process is explained with a detailed methodology. The main technical processes of the project consist of Geographic Data Analysis, Geographic Data Harmonization (Standardization), Web Service Creation (WMS, WFS) and Metadata Creation-Publication. The integration process carried out to enable the data produced by the 61 institutions to be shared through the National Geographic Data Portal (GEOPORTAL) is conveyed with a detailed methodology.

Keywords: Data specification, geoportal, GIS, INSPIRE, TUCBS, Turkey’s National Geographic Information System.

578 Prophylactic Effects of Dairy Kluyveromyces marxianus YAS through Overexpression of BAX, CASP 3, CASP 8 and CASP 9 on Human Colon Cancer Cell Lines

Authors: Amir Saber Gharamaleki, Beitollah Alipour, Zeinab Faghfoori, Ahmad YariKhosroushahi

Abstract:

Colorectal cancer (CRC) is one of the most prevalent cancers, and the intestinal microbial community plays an important role in colorectal tumorigenesis. Probiotics have recently been assessed as effective anti-proliferative agents, and thus this study was performed to examine whether CRC cells undergo apoptosis when treated with the secretion metabolites of an isolated Iranian native dairy yeast, Kluyveromyces marxianus YAS. The cytotoxicity assessments on cells (HT-29, Caco-2) were accomplished through the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay as well as qualitative (DAPI, 4',6-diamidino-2-phenylindole, staining) and quantitative (flow cytometry) evaluations of apoptosis. To evaluate the main mechanism of apoptosis, the real-time PCR method was applied. Kluyveromyces marxianus YAS secretions (IC50) showed significant cytotoxicity against the HT-29 and Caco-2 cancer cell lines (66.57% and 66.34% apoptosis), similar to 5-Fluorouracil (5-FU), while apoptosis developed in only 27.57% of normal KDR cells. The prophylactic effects of Kluyveromyces marxianus (PTCC 5195), as a reference yeast, were not similar to those of Kluyveromyces marxianus YAS, indicating the strain dependency of bioactivities in CRC disease prevention. Based on the real-time PCR results, the main cytotoxicity is related to the apoptosis phenomenon, and the core mechanism depends on the overexpression of the apoptosis-inducing genes BAX, CASP 9, CASP 8 and CASP 3. However, several investigations should be conducted to precisely determine the effective compounds so that they can be used as anticancer therapeutics in the future.

Keywords: Anticancer, anti-proliferative, apoptosis, cytotoxicity, yeast.

577 A Zero-Cost Collar Option Applied to Materials Procurement Contracts to Reduce Price Fluctuation Risks in Construction

Authors: H. L. Yim, S. H. Lee, S. K. Yoo, J. J. Kim

Abstract:

This study proposes a materials procurement contract model to which the zero-cost collar option is applied for hedging price fluctuation risks in construction. The material contract model is based on the collar option, which consists of the call option striking zone of the construction company (the buyer), following a materials price increase, and the put option striking zone of the material vendor (the supplier), following a materials price decrease. This study first determined the call option strike price Xc of the construction company by a simple approach: it uses the predicted profit at the project starting point and then determines the strike price of the put option Xp that has an identical option value, which completes the zero-cost material contract. The analysis results indicate that the cost saving of the construction company increased as Xc decreased. This was because the critical level of the steel materials price increase was set at a low level. However, as Xc decreased, the Xp of a put option that had an identical option value gradually increased. Cost saving increased as Xc decreased. However, as Xp gradually increased, the risk of loss for the construction company increased as the steel materials price decreased. Meanwhile, cost saving did not occur for the construction company because of volatility. This result originated in the zero-cost features of the two-way contract of the collar option. In the case of the regular one-way option, the transaction cost had to be subtracted from the cost saving. The transaction cost originated from an option value that fluctuated with the volatility. That is, the cost saving of the one-way option was affected by the volatility. Meanwhile, even though the collar option with zero transaction cost cut the connection between volatility and cost saving, there was a risk of exercising the put option.
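
The zero-cost pairing of strikes can be illustrated with a hedged sketch: given the buyer's call strike Xc, find the put strike Xp whose premium equals the call premium so that the two-way contract has zero net cost. Black-Scholes pricing and all market parameters below are assumptions for illustration; the paper derives Xc from the project's predicted profit rather than from an option-pricing model.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_price(S, K, T, r, sigma, kind):
    """Black-Scholes price of a European call or put (illustrative stand-in pricing model)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def zero_cost_put_strike(S, Xc, T, r, sigma):
    """Bisection for Xp such that put(Xp) has the same premium as call(Xc)."""
    target = bs_price(S, Xc, T, r, sigma, "call")
    lo, hi = 1e-6, 2.0 * S
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_price(S, mid, T, r, sigma, "put") < target:
            lo = mid              # put premium grows with strike, so move the lower bound up
        else:
            hi = mid
    return 0.5 * (lo + hi)

S, T, r, sigma = 100.0, 1.0, 0.03, 0.25   # steel price index, maturity, rate, volatility (assumed)
Xc = 110.0                                # buyer's call strike, assumed given
print(zero_cost_put_strike(S, Xc, T, r, sigma))
```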

Keywords: Construction materials, Supply chain management, Procurement, Payment, Collar option

576 Building Information Modelling for Construction Delay Management

Authors: Essa Alenazi, Zulfikar Adamu

Abstract:

The Kingdom of Saudi Arabia (KSA) is not an exception in relying on the growth of its construction industry to support rapid population growth. However, its need for infrastructure development is constrained by low productivity levels and cost overruns caused by factors such as delays to project completion. Delays in delivering a construction project are a global issue and while theories such as Optimism Bias have been used to explain such delays, in KSA, client-related causes of delays are also significant. The objective of this paper is to develop a framework-based approach to explore how the country’s construction industry can manage and reduce delays in construction projects through building information modelling (BIM) in order to mitigate the cost consequences of such delays.  It comprehensively and systematically reviewed the global literature on the subject and identified gaps, critical delay factors and the specific benefits that BIM can deliver for the delay management.  A case study comprising of nine hospital projects that have experienced delay and cost overruns was also carried out. Five critical delay factors related to the clients were identified as candidates that can be mitigated through BIM’s benefits. These factors are: Ineffective planning and scheduling of the project; changes during construction by the client; delay in progress payment; slowness in decision making by the client; and poor communication between clients and other stakeholders. In addition, data from the case study projects strongly suggest that optimism bias is present in many of the hospital projects. Further validation via key stakeholder interviews and documentations are planned.

Keywords: BIM, client perspective, delay management, optimism bias, public sector projects.

575 Study of the Effect of the Contra-Rotating Component on the Performance of the Centrifugal Compressor

Authors: Van Thang Nguyen, Amelie Danlos, Richard Paridaens, Farid Bakir

Abstract:

This article presents a study of the effect of a contra-rotating component on the efficiency of centrifugal compressors. A contra-rotating centrifugal compressor (CRCC) is constructed using two independent rotors, rotating in opposite directions and replacing the single rotor of a conventional centrifugal compressor (REF). To respect the geometrical parameters of the REF, the two rotors of the CRCC are designed, based on the single rotor geometry, using the hub and shroud length ratio parameter of the meridional contour. Firstly, the first rotor is designed by choosing a value of the length ratio. Then, the second rotor is calculated to be adapted to the fluid flow of the first rotor according to aerodynamic principles. In this study, four values of the length ratio, 0.3, 0.4, 0.5, and 0.6, are used to create four configurations, CF1, CF2, CF3, and CF4, respectively. For comparison purposes, the circumferential velocities at the outlet of the REF and the CRCC are preserved, which means that the single rotor of the REF and the second rotor of the CRCC rotate with the same speed of 16,000 rpm. The speed of the first rotor in this case is chosen to be equal to the speed of the second rotor. A CFD simulation is conducted to compare the performance of the CRCC and the REF with the same boundary conditions. The results show that the configuration with a higher length ratio gives a higher pressure rise. However, its efficiency is lower. An investigation over the entire operating range shows that CF1 is the best configuration in this case. In addition, the CRCC can improve the pressure rise as well as the efficiency by changing the speed of each rotor independently. The results of changing the first rotor speed show that with a 130% speed increase, the pressure ratio rises by 8.7% while the efficiency remains stable at the flow rate of the design operating point.

Keywords: Centrifugal compressor, contra-rotating, interaction rotor, vacuum.

574 Effect of Exit Annular Area on the Flow Field Characteristics of an Unconfined Premixed Annular Swirl Burner

Authors: Vishnu Raj, Chockalingam Prathap

Abstract:

The objective of this study was to explore the impact of variation in the exit annular area on the local flow field features and the flame stability of an annular premixed swirl burner (unconfined) operated with a premixed n-butane air mixture at an equivalence ratio (Φ) = 1, 1 bar, and 300K. A swirl burner with an axial swirl generator having a swirl number of 1.5 was used. Three different burner heads were chosen to have the exit area increased from 100%, 160%, and 220% resulting in inner and outer diameters and cross-sectional areas as (1) 10 mm & 15 mm, 98 mm2 (2) 17.5 mm & 22.5 mm, 157 mm2 and (3) 25 mm & 30 mm, 216 mm2. The bulk velocity and Reynolds number based on the hydraulic diameter and unburned gas properties were kept constant at 12 m/s and 4000. (i) Planar Particle Image Velocimetry (PIV) with TiO2 seeding particles and (ii) CH* chemiluminescence was used to measure the velocity fields and reaction zones of the swirl flames at 5 Hz, respectively. Velocity fields and the jet spreading rates measured at the isothermal and reactive conditions revealed that the presence of a flame significantly altered the flow field in the radial direction due to the gas expansion. Important observations from the flame measurements were: the height and maximum width of the recirculation bubbles normalized by the hydraulic diameter, and the jet spreading angles for the flames for the three exit area cases were: (a) 4.52, 1.95, 34◦, (b) 6.78, 2.37, 26◦, and (c) 8.73, 2.32, 22◦. The lean blowout (LBO) was also measured, and the respective equivalence ratios were: 0.80, 0.92, and 0.82. LBO was relatively narrow for the 157 mm2 case. For this case, PIV measurements showed that Turbulent Kinetic Energy and turbulent intensity were relatively high compared to the other two cases, resulting in higher stretch rates and narrower LBO.
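
The flow-similarity quantities held constant across the three burner heads can be checked with the short sketch below: for an annulus the hydraulic diameter is Dh = Do - Di and Re = rho*U*Dh/mu; unburned-mixture properties are approximated here with air values at 300 K, which is an assumption for illustration.

```python
import math

rho, mu = 1.18, 1.85e-5            # kg/m^3, Pa*s (approximately air at 300 K, 1 bar)
U = 12.0                           # bulk velocity, m/s

for d_inner_mm, d_outer_mm in [(10, 15), (17.5, 22.5), (25, 30)]:
    dh = (d_outer_mm - d_inner_mm) / 1000.0                      # annulus hydraulic diameter, m
    area_mm2 = math.pi / 4 * (d_outer_mm**2 - d_inner_mm**2)     # exit annular area
    re = rho * U * dh / mu
    print(f"Di={d_inner_mm} mm, Do={d_outer_mm} mm: area ~ {area_mm2:.0f} mm^2, Re ~ {re:.0f}")
```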

Keywords: Chemiluminescence, jet spreading rate, lean blow out, swirl flow.

573 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage

Authors: M. Iommi, G. Losco

Abstract:

The current building sector is focused on the reduction of energy requirements, on renewable energy generation and on the regeneration of existing urban areas. These targets need to be addressed with a systemic approach, considering several aspects simultaneously such as climate conditions, lighting conditions, solar radiation, PV potential, etc. Solar access analysis is an already known method to analyze solar potential, but in recent years, simulation tools have provided more effective opportunities to perform this type of analysis, in particular in the early design stage. Nowadays, the study of solar access depends on how easily and rapidly simulation tools can be used during the design process. This study presents a comparison of three simulation tools, from the point of view of the user, with the aim of highlighting differences in the ease of use of these tools. Using a real urban context as a case study, three tools, Ecotect, Townscope and Heliodon, are tested, performing models and simulations and examining the capabilities and output results of solar access analysis. The evaluation of the ease of use of these tools is based on some detected parameters and features, such as the types of simulation, requirements of input data, types of results, etc. As a result, a framework is provided in which the features and capabilities of each tool are shown. This framework shows the differences among these tools in functions, features and capabilities. The aim of this study is to support users and to improve the integration of simulation tools for solar access with the design process.

Keywords: Solar access analysis, energy building design tools, urban planning, solar potential.

572 Municipal Solid Waste Management Using Life Cycle Assessment Approach: Case Study of Maku City, Iran

Authors: L. Heidari, M. Jalili Ghazizade

Abstract:

This paper aims to determine the best environmental and economic scenario for Municipal Solid Waste (MSW) management of the Maku city by using Life Cycle Assessment (LCA) approach. The functional elements of this study are collection, transportation, and disposal of MSW in Maku city. Waste composition and density, as two key parameters of MSW, have been determined by field sampling, and then, the other important specifications of MSW like chemical formula, thermal energy and water content were calculated. These data beside other information related to collection and disposal facilities are used as a reliable source of data to assess the environmental impacts of different waste management options, including landfills, composting, recycling and energy recovery. The environmental impact of MSW management options has been investigated in 15 different scenarios by Integrated Waste Management (IWM) software. The photochemical smog, greenhouse gases, acid gases, toxic emissions, and energy consumption of each scenario are measured. Then, the environmental indices of each scenario are specified by weighting these parameters. Economic costs of scenarios have been also compared with each other based on literature. As final result, since the organic materials make more than 80% of the waste, compost can be a suitable method. Although the major part of the remaining 20% of waste can be recycled, due to the high cost of necessary equipment, the landfill option has been suggested. Therefore, the scenario with 80% composting and 20% landfilling is selected as superior environmental and economic scenario. This study shows that, to select a scenario with practical applications, simultaneously environmental and economic aspects of different scenarios must be considered.

Keywords: IWM software, life cycle assessment, Maku, municipal solid waste management.

571 Thermo-mechanical Deformation Behavior of Functionally Graded Rectangular Plates Subjected to Various Boundary Conditions and Loadings

Authors: Mohammad Talha, B. N. Singh

Abstract:

This paper deals with the thermo-mechanical deformation behavior of shear deformable functionally graded ceramic-metal (FGM) plates. Theoretical formulations are based on a higher order shear deformation theory with a considerable amendment in the transverse displacement, using the finite element method (FEM). The mechanical properties of the plate are assumed to be temperature-dependent and graded in the thickness direction according to a power-law distribution in terms of the volume fractions of the constituents. The temperature field is assumed to be uniform over the plate surface (XY plane) and to vary in the thickness direction only. The fundamental equations for the FGM plates are obtained using a variational approach by considering traction free boundary conditions on the top and bottom faces of the plate. A C0 continuous isoparametric Lagrangian finite element with thirteen degrees of freedom per node has been employed to accomplish the results. Convergence and comparison studies have been performed to demonstrate the efficiency of the present model. The numerical results are obtained for different thickness ratios, aspect ratios, volume fraction indices and temperature rises with different loading and boundary conditions. Numerical results for the FGM plates are provided in dimensionless tabular and graphical forms. The results show that the temperature field and the gradient in the material properties have a significant role in the thermo-mechanical deformation behavior of the FGM plates.
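
A compact sketch of the power-law gradation referred to above: the ceramic volume fraction varies through the thickness as Vc(z) = (z/h + 1/2)^n, and an effective property follows the rule of mixtures, P(z) = (Pc - Pm)*Vc(z) + Pm. The constituent values below are typical alumina/steel numbers chosen for illustration, not the paper's data.

```python
def ceramic_fraction(z, h, n):
    """z measured from the mid-plane, -h/2 <= z <= h/2; n is the volume fraction index."""
    return (z / h + 0.5) ** n

def effective_property(z, h, n, p_ceramic, p_metal):
    vc = ceramic_fraction(z, h, n)
    return (p_ceramic - p_metal) * vc + p_metal

h = 0.02                              # plate thickness, m (placeholder)
E_c, E_m = 380e9, 200e9               # Young's moduli of ceramic and metal, Pa (illustrative)
for n in (0.5, 1, 2, 5):
    mid = effective_property(0.0, h, n, E_c, E_m)
    print(f"n={n}: effective E at mid-plane = {mid / 1e9:.1f} GPa")
```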

Keywords: Functionally graded material, higher order shear deformation theory, finite element method, independent field variables.

570 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials

Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic

Abstract:

The quality of laser welded-brazed (LWB) joints is strongly dependent on the main process parameters; therefore, the effects of laser power (3.2–4 kW), welding speed (60–80 mm/s) and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization process by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box–Behnken design implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted on an optimized process condition, which exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed could keep the thickness of the IMC layer as thin as 5 µm. Thermal simulations of the joining process were conducted by the Finite Element Method (FEM), and the results were validated with experimental data. The Fe/Al interfacial thermal history evidenced that the duration of the critical temperature range (700–900 °C) in this high scanning speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of diffusion-controlled growth.

Keywords: Laser welding-brazing, finite element, response surface methodology, multi-response optimization, cross-beam laser.

569 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the Particle Filter is developed, which considers the anomalous fluctuation scaling known as Taylor's law. This method is extended for handling sales data that are incomplete because of stock-outs, by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock while pricing the cost of waste reduction is also proposed. This study focuses on the examination of the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, the way of pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially in the case that the proportionality constant of Taylor's law is small. Specifically, around a 1% profit loss realizes half the disposal when the proportionality constant is 0.12, which is the actual value of the processed food items used in this research. The methods provide practical and effective solutions for waste reduction keeping a high profit, especially with large sales numbers.
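
The fluctuation-scaling fit underlying the method can be sketched as follows: for each item, the standard deviation of daily sales is related to the mean by sigma = a*mu^b, so log(sigma) versus log(mu) is fitted by a straight line. The sales series below are synthetic Poisson placeholders; the paper's particle-filter estimation and censored-data correction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_levels = [5, 20, 80, 300, 1000]                       # daily mean sales of different items (synthetic)
daily_sales = [rng.poisson(m, size=365) for m in mean_levels]

mu = np.array([s.mean() for s in daily_sales])
sigma = np.array([s.std(ddof=1) for s in daily_sales])

b, log_a = np.polyfit(np.log(mu), np.log(sigma), 1)        # slope b, intercept log(a)
print(f"Taylor exponent b ~ {b:.2f}, prefactor a ~ {np.exp(log_a):.2f}")   # Poisson data give b ~ 0.5
```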

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.

568 Evaluation of Ensemble Classifiers for Intrusion Detection

Authors: M. Govindarajan

Abstract:

One of the major developments in machine learning in the past decade is the ensemble method, which finds highly accurate classifiers by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods. The standard homogeneous ensemble methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models for the standard intrusion detection datasets.
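
The homogeneous ensemble idea can be sketched with scikit-learn by bagging an RBF-kernel SVM and comparing it with a single SVM under cross-validation; a synthetic dataset stands in for the intrusion detection benchmarks, which are not bundled here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic stand-in for an intrusion detection dataset (normal vs. attack)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)

single_svm = SVC(kernel="rbf", gamma="scale")
bagged_svm = BaggingClassifier(estimator=SVC(kernel="rbf", gamma="scale"),  # 'base_estimator' in scikit-learn < 1.2
                               n_estimators=25, random_state=0)

print("single RBF-SVM accuracy:", cross_val_score(single_svm, X, y, cv=5).mean())
print("bagged RBF-SVM accuracy:", cross_val_score(bagged_svm, X, y, cv=5).mean())
```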

Keywords: Data mining, ensemble, radial basis function, support vector machine, accuracy.
