Search results for: Material efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4218

318 A CFD Study of Turbulent Convective Heat Transfer Enhancement in Circular Pipeflow

Authors: Perumal Kumar, Rajamohan Ganesan

Abstract:

Addition of milli- or micro-sized particles to the heat transfer fluid is one of the many techniques employed for improving the heat transfer rate. Though this looks simple, the method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. It is therefore imperative that the Reynolds number (Re) and the volume fraction be optimum for better thermal-hydraulic effectiveness. In this work, heat transfer enhancement using aluminium oxide nanofluid at low and high volume fractions in turbulent pipe flow with constant wall temperature has been studied by computational fluid dynamic modeling of the nanofluid flow adopting the single-phase approach. Nanofluid, up to a volume fraction of 1%, is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (i.e. 0.02%, 0.1% and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010), while predictions for the high volume fraction nanofluids (i.e. 1%, 4% and 6%) show reasonable agreement with both experimental and numerical results available in the literature. So the computationally inexpensive single-phase approach can be used for heat transfer and pressure drop prediction of new nanofluids.
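The single-phase approach treats the nanofluid as a homogeneous fluid with effective properties. As an illustration only (a correlation-level sketch, not the authors' CFD model), the code below estimates effective properties with the classical Maxwell and Brinkman relations and a Dittus-Boelter-type Nusselt number; the property values, Reynolds number and pipe diameter are placeholder assumptions.

```python
# Correlation-level sketch (not the paper's CFD model): effective nanofluid
# properties via the classical Maxwell / Brinkman relations and a
# Dittus-Boelter-type Nusselt number. All values are placeholder assumptions
# for water + Al2O3 near room temperature.

def maxwell_conductivity(k_bf, k_p, phi):
    """Maxwell model for the effective thermal conductivity of a dilute suspension."""
    num = k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)
    den = k_p + 2 * k_bf - phi * (k_p - k_bf)
    return k_bf * num / den

def brinkman_viscosity(mu_bf, phi):
    """Brinkman model for the effective dynamic viscosity."""
    return mu_bf / (1.0 - phi) ** 2.5

phi = 0.01                    # 1% volume fraction (upper end of the effective range reported)
k_bf, k_p = 0.613, 40.0       # W/m.K: water and Al2O3 (assumed values)
mu_bf, cp_bf = 1.0e-3, 4182.0 # Pa.s and J/kg.K for water (assumed values)

k_nf = maxwell_conductivity(k_bf, k_p, phi)
mu_nf = brinkman_viscosity(mu_bf, phi)

Re = 1.0e4                          # assumed turbulent pipe-flow Reynolds number
Pr = mu_nf * cp_bf / k_nf           # base-fluid specific heat used as a simplification
Nu = 0.023 * Re ** 0.8 * Pr ** 0.4  # Dittus-Boelter-type correlation (heating)
h = Nu * k_nf / 0.02                # heat transfer coefficient for an assumed 20 mm pipe
print(f"k_nf={k_nf:.3f} W/m.K, mu_nf={mu_nf:.2e} Pa.s, Nu={Nu:.1f}, h={h:.0f} W/m2.K")
```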

Keywords: Heat transfer intensification, nanofluid, CFD, friction factor

317 Combined Effect of Heat Stimulation and Delayed Addition of Superplasticizer with Slag on Fresh and Hardened Property of Mortar

Authors: Faraidoon Rahmanzai, Mizuki Takigawa, Yu Bomura, Shigeyuki Date

Abstract:

To obtain high quality and the essential workability of mortar, different types of superplasticizers are used. Superplasticizers are chemical admixtures used in the mix to improve the fluidity of mortar. Many factors influence how well a superplasticizer disperses the cement particles in the mortar. The nature and amount of cement replaced by slag, the mixing procedure, the delayed addition time, and the heat stimulation technique of the superplasticizer all have varied effects on the fluidity of the cementitious material. In this experiment, the superplasticizers were heated for 1 hour at 60 °C in a thermostatic chamber. Furthermore, the effect of the delayed addition time of heat-stimulated superplasticizers (SP) was also analyzed. This method was applied to two types of polycarboxylic acid-based ether SP (a precast-type superplasticizer (SP2) and a ready-mix-type superplasticizer (SP1)) in combination with a partial replacement of normal Portland cement with blast furnace slag (BFS) at a 30% w/c ratio. The fluidity, air content, fresh density, and compressive strength at 7 and 28 days were studied. The results indicate that the addition time and heat stimulation technique improved the flow and air content, decreased the density, and slightly decreased the compressive strength of mortar. Moreover, increasing the amount of slag improved the flow of mortar, and the effect of the external temperature of the SP on the flow of mortar decreased. In comparison, the flow of mortar was improved at a 5-minute delay for both kinds of SP, but SP1 improved the flow in all conditions. Most importantly, the transition points for both types of SP appear to be the same, at about 5±1 min. The optimum addition time of SP to mortar should therefore be in this period.

Keywords: Combined effect, delayed addition, heat stimulation, flow of mortar.

316 Cooperative Cross Layer Topology for Concurrent Transmission Scheduling Scheme in Broadband Wireless Networks

Authors: Gunasekaran Raja, Ramkumar Jayaraman

Abstract:

In this paper, we consider a CCL-N (Cooperative Cross Layer Network) topology based on the cross layer (both centralized and distributed) environment to form network communities. Various performance metrics related to IEEE 802.16 networks are discussed to design the CCL-N topology. In the CCL-N topology, nodes are classified as master nodes (Master Base Station [MBS]) and serving nodes (Relay Station [RS]). Node communities are organized based on networking terminologies. Based on the CCL-N topology, various simulation analyses for both transparent and non-transparent relays are tabulated and throughput efficiency is calculated. The weighted load balancing problem plays a challenging role in IEEE 802.16 networks. The CoTS (Concurrent Transmission Scheduling) scheme is formulated in terms of three aspects: transmission mechanisms based on identical communities, different communities and identical node communities. The CoTS scheme helps in identifying the weighted load balancing problem. Based on the analytical results, the modularity value is inversely proportional to the error value. The modularity value plays a key role in solving the CoTS problem based on hop count. The transmission mechanism for identical node communities has no impact since the modularity value is the same for all the network groups. In this paper, the three aspects of communities based on the modularity value, which help in solving the weighted load balancing and CoTS problems, are discussed.
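The scheduling analysis above hinges on the modularity value of the node communities. As a generic illustration only (the abstract does not give CCL-N's exact community metric), the sketch below computes the standard Newman modularity of a partition from an adjacency matrix; the six-node MBS/RS topology is made up.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)        # node degrees
    two_m = A.sum()          # equals 2m for an undirected adjacency matrix
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Made-up 6-node topology: two communities, each an MBS with its relay stations.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
communities = [0, 0, 0, 1, 1, 1]
print(f"Q = {modularity(A, communities):.3f}")  # higher Q -> better-separated communities
```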

Keywords: Cross layer network topology, concurrent scheduling, modularity value, network communities and weighted load balancing.

315 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach

Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan

Abstract:

Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers and policy makers are making frantic efforts to bridge the rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have been used in estimating productivity performances and their drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performances of rice farmers and their determining factors in the GSZ, FSTZ and CSZ. The study used both primary and secondary data. Farmers in the CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amount, less lodging of rice, and well-coordinated and synergized adoption of technologies. Although farmers in the CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield since they had the lowest technology gap ratio (TGR). It is recommended that government, through the Ministry of Food and Agriculture, development partners and individual private companies promote the adoption of IATs as well as educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should be vigorously pursued to the letter.
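For readers unfamiliar with the quantities referred to above (technical efficiency and the technology gap ratio), the standard relations behind a two-step stochastic metafrontier analysis can be written as follows; the notation is generic rather than copied from the paper.

```latex
% Group-k stochastic frontier and technical efficiency
\ln y_{ik} = f^{k}(x_{ik};\beta^{k}) + v_{ik} - u_{ik}, \qquad
TE_{ik} = \exp(-u_{ik})

% Technology gap ratio: distance of the group frontier from the metafrontier
TGR_{ik} = \frac{\exp\!\left(f^{k}(x_{ik};\beta^{k})\right)}
                {\exp\!\left(f^{*}(x_{ik};\beta^{*})\right)} \le 1

% Metafrontier technical efficiency
TE^{*}_{ik} = TE_{ik}\times TGR_{ik}
```

Under this decomposition, the lowest TGR reported for the CSZ means that zone's group frontier lies furthest below the metafrontier, which is why those farmers have the largest remaining yield potential.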

Keywords: Efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach.

314 Transesterification of Waste Cooking Oil for Biodiesel Production Using Modified Clinoptilolite Zeolite as a Heterogeneous Catalyst

Authors: D. Mowla, N. Rasti, P. Keshavarz

Abstract:

Reduction of fossil fuel sources, increasing emissions of polluting gases, and global warming effects increase the demand for renewable fuels. One of the main candidates among alternative fuels is biodiesel. Biodiesel limits greenhouse gas effects due to the closed CO2 cycle. Biodiesel has higher biodegradability, lower combustion emissions such as CO, SOx, HC and PM, and lower toxicity than petro diesel. However, biodiesel has a high production cost due to the high price of plant oils as raw material. The utilization of waste cooking oils (WCOs) as feedstock, owing to their low price and disposal problems, therefore reduces the biodiesel production cost. In this study, the production of biodiesel by transesterification of methanol and WCO using modified sodic potassic (SP) clinoptilolite zeolite and sodic potassic calcic (SPC) clinoptilolite zeolite as heterogeneous catalysts has been investigated. These natural clinoptilolite zeolites were modified with KOH solution to increase the site activity. The optimum biodiesel yields for SP clinoptilolite and SPC clinoptilolite were 95.8% and 94.8%, respectively. The produced biodiesel was analyzed and compared with petro diesel and ASTM limits. The properties of the produced biodiesel conform well to the ASTM limits. The density, kinematic viscosity, cetane index, flash point, cloud point, and pour point of the produced biodiesel were all higher than those of petro diesel, but its acid value was lower. Finally, the reusability and regeneration of the catalysts were investigated. The results indicated that the spent zeolites cannot be reused directly for the transesterification, but they can be regenerated easily and can regain high activity.

Keywords: Biodiesel, renewable fuel, transesterification, waste cooking oil.

313 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence

Authors: K. N. Kiran, S. Anish

Abstract:

It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which might lead to increased secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effect of the end-wall on the flow field is calculated based on RANS simulation using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulation. The aim of fencing in the blade passage is to get the maximum benefit from flow deviation and to destroy the passage vortex in terms of loss reduction. It is observed that, for the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps in reducing the under-turning by 70 in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps in breaking the passage vortex. Computations are carried out for different fence heights whose curvature is different from the blade camber. The optimum fence geometry and location reduces the loss coefficient by 15.6% in comparison with the base case.
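The 15.6% figure above refers to a cascade loss coefficient; the abstract does not state which definition is used, but a common choice for turbine cascades is the stagnation pressure loss coefficient:

```latex
% Stagnation (total) pressure loss coefficient for a turbine cascade
Y_p = \frac{p_{01} - p_{02}}{p_{01} - p_{2}}
```

where p_01 and p_02 are the mass-averaged inlet and exit stagnation pressures and p_2 is the exit static pressure; the quoted reduction would then be the relative drop in Y_p between the fenced and base cases.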

Keywords: Boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow.

312 Efficacy of Biofeedback-Assisted Pelvic Floor Muscle Training on Postoperative Stress Urinary Incontinence

Authors: Asmaa M. El-Bandrawy, Afaf M. Botla, Ghada E. El-Refaye, Hassan O. Ghareeb

Abstract:

Background: Urinary incontinence is a common problem among adults. Its incidence increases with age and it is more frequent in women. Pelvic floor muscle training (PFMT) is the first-line therapy in the treatment of pelvic floor dysfunction (PFD), either alone or combined with biofeedback-assisted PFMT. Aim of the work: The purpose of this study is to evaluate the efficacy of biofeedback-assisted PFMT in postoperative stress urinary incontinence. Settings and Design: A single-blind controlled trial design was used. Methods and Material: This study was carried out on 30 volunteer patients diagnosed with a severe degree of stress urinary incontinence who were admitted for surgical treatment. They were divided randomly into two equal groups: Group A consisted of 15 patients who had been treated with post-operative biofeedback-assisted PFMT and a home exercise program; Group B consisted of 15 patients who had been treated with a home exercise program only. Assessment of all patients in both groups (A) and (B) was carried out before and after the treatment program by measuring intra-vaginal pressure in addition to the visual analog scale. Results: At the end of the treatment program, there was a highly statistically significant difference between group (A) and group (B) in the intra-vaginal pressure and the visual analog scale, favoring group (A). Conclusion: Biofeedback-assisted PFMT is an effective method for the symptomatic relief of post-operative female stress urinary incontinence.

Keywords: Stress urinary incontinence, pelvic floor muscles, pelvic floor exercises, biofeedback.

311 Influence of Compactive Efforts on Cement-Bagasse Ash Treatment of Expansive Black Cotton Soil

Authors: G. Moses, K. J. Osinubi

Abstract:

A laboratory study on the influence of compactive effort on expansive black cotton soil specimens treated with up to 8% ordinary Portland cement (OPC) admixed with up to 8% bagasse ash (BA) by dry weight of soil and compacted using the energies of the standard Proctor (SP), West African Standard (WAS) or “intermediate”, and modified Proctor (MP) was undertaken. The expansive black cotton soil was classified as A-7-6 (16) or CL using the American Association of State Highway and Transportation Officials (AASHTO) and Unified Soil Classification System (USCS), respectively. The 7-day unconfined compressive strength (UCS) values of the natural soil for SP, WAS and MP compactive efforts are 286, 401 and 515 kN/m2, respectively, while peak values of 1019, 1328 and 1420 kN/m2, recorded at 8% OPC/6% BA, 8% OPC/2% BA and 6% OPC/4% BA treatments, respectively, were less than the UCS value of 1710 kN/m2 conventionally used as the criterion for adequate cement stabilization. The soaked California bearing ratio (CBR) values of the OPC/BA stabilized soil increased with higher energy levels from 2, 4 and 10% for the natural soil to peak values of 55, 18 and 8%, recorded at 8% OPC/4% BA, 8% OPC/2% BA and 8% OPC/4% BA treatments when SP, WAS and MP compactive efforts were used, respectively. The durability of specimens was determined by immersion in water. Soil treatment with the 8% OPC/4% BA blend gave a resistance to loss in strength of 50%, which is acceptable given the harsh test condition of the 7-day soaking period to which specimens were subjected, instead of the 4-day soaking period for which a minimum resistance to loss in strength of 80% is specified. Finally, an optimal blend of 8% OPC/4% BA is recommended for the treatment of expansive black cotton soil for use as a sub-base material.

Keywords: Bagasse ash, California bearing ratio, Compaction, Durability, Ordinary Portland cement, Unconfined compressive strength.

310 Work System Design in Productivity for Small and Medium Enterprises: A Systematic Literature Review

Authors: S. Halofaki, D. R. Seenivasagam, P. Bijay, K. Singh, R. Ananthanarayanan

Abstract:

This comprehensive literature review delves into the effects and applications of work system design on the performance of Small and Medium-sized Enterprises (SMEs). The review process involved three independent reviewers who screened 514 articles through a four-step procedure: removing duplicates, assessing keyword relevance, evaluating abstract content, and thoroughly reviewing full-text articles. Various criteria such as relevance to the research topic, publication type, study type, language, publication date, and methodological quality were employed to exclude certain publications. A portion of articles that met the predefined inclusion criteria were included as a result of this systematic literature review. These selected publications underwent data extraction and analysis to compile insights regarding the influence of work system design on SME performance. Additionally, the quality of the included studies was assessed, and the level of confidence in the body of evidence was established. The findings of this review shed light on how work system design impacts SME performance, emphasizing important implications and applications. Furthermore, the review offers suggestions for further research in this critical area and summarizes the current state of knowledge in the field. Understanding the intricate connections between work system design and SME success can enhance operational efficiency, employee engagement, and overall competitiveness for SMEs. This comprehensive examination of the literature contributes significantly to both academic research and practical decision-making for SMEs.

Keywords: Literature review, productivity, small and medium-sized enterprises, SMEs, work system design.

309 Prioritizing the Most Important Information from Contractors’ BIM Handover for Firefighters’ Responsibilities

Authors: Akram Mahdaviparsa, Tamera McCuen, Vahideh Karimimansoob

Abstract:

The fire service is responsible for protecting life, assets, and natural resources from fire and other hazardous incidents. Search and rescue in unfamiliar buildings is a vital part of firefighters’ responsibilities. Providing firefighters with precise building information in an easy-to-understand format is a potential solution for mitigating the negative consequences of fire hazards. The negative effect of insufficient knowledge about a building’s indoor environment impedes firefighters’ capabilities and leads to lost property. A data-rich building information model (BIM) is a potentially useful source of three-dimensional (3D) visualization and data/information storage for fire emergency response. Therefore, the purpose of this research is to prioritize the information required by firefighters from the most important to the least important. A survey was carried out with firefighters working in the Norman Fire Department to obtain the importance of each building information item. The results show that “the location of exit doors, windows, corridors, elevators, and stairs”, “material of building elements”, and “building data” are the three most important information items specified by firefighters. The results also implied that the 2D model of architectural, structural and wayfinding information is more understandable in comparison with the 3D model, while the 3D model of the MEP system could convey more information than the 2D model. Furthermore, color in visualization can help firefighters understand the building information more easily and quickly. Sufficient internal consistency of all responses was proven by developing the Pearson correlation matrix and obtaining a Cronbach’s alpha of 0.916. Therefore, the results of this study are reliable and could be applied to the population.
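The Cronbach's alpha of 0.916 quoted above follows from the standard internal-consistency formula. A minimal sketch of that computation is shown below on a made-up response matrix, not the study's survey data.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total score).
    `responses` is an (n_respondents, k_items) array of Likert-type scores."""
    X = np.asarray(responses, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Made-up ratings from 6 firefighters on 4 information items (1 = least, 5 = most important).
ratings = np.array([[5, 4, 5, 4],
                    [4, 4, 5, 3],
                    [5, 5, 5, 4],
                    [3, 3, 4, 2],
                    [4, 3, 4, 3],
                    [5, 5, 5, 5]])
print(f"alpha = {cronbach_alpha(ratings):.3f}")  # values above ~0.9 indicate excellent consistency
```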

Keywords: BIM, building fire response, ranking, visualization.

308 Acceleration-Based Motion Model for Visual SLAM

Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORB-SLAM3, utilize a constant velocity motion model. The utilization of this model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, it is often difficult to satisfy the constant velocity motion model in actual situations. This can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To provide a more accurate description of the camera pose acceleration, we separate the pose transformation matrix into its rotation matrix and translation vector components, and the rotation matrix is represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply our proposed approach to the ORB-SLAM3 system and evaluate two sets of sequences from the TUM datasets. The results show that our proposed method gives a more accurate initial pose estimation, resulting in an improvement of 6.61% and 6.46% in the accuracy of the ORB-SLAM3 system on the two test sequences, respectively.
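As a rough sketch of the prediction step described above, the code below extrapolates the next camera pose from the last three poses under the stated constant-change assumption, using rotation vectors; it is an illustration with made-up poses, not the authors' ORB-SLAM3 integration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next_pose(poses):
    """Constant-acceleration pose prediction from the last three poses.

    Each pose is (rotvec, t): a 3-vector axis-angle rotation and a 3-vector translation.
    Per-frame increments are finite differences; their change is assumed constant.
    """
    (r0, t0), (r1, t1), (r2, t2) = poses[-3:]
    # Relative rotations between consecutive frames, expressed as rotation vectors.
    w1 = (R.from_rotvec(r1) * R.from_rotvec(r0).inv()).as_rotvec()
    w2 = (R.from_rotvec(r2) * R.from_rotvec(r1).inv()).as_rotvec()
    v1, v2 = t1 - t0, t2 - t1
    # Constant-acceleration extrapolation: next increment = last increment + its change.
    w3 = w2 + (w2 - w1)
    v3 = v2 + (v2 - v1)
    r3 = (R.from_rotvec(w3) * R.from_rotvec(r2)).as_rotvec()
    return r3, t2 + v3

# Made-up trajectory: camera turning about z and accelerating along x.
poses = [(np.zeros(3), np.array([0.0, 0.0, 0.0])),
         (np.array([0.0, 0.0, 0.05]), np.array([0.10, 0.0, 0.0])),
         (np.array([0.0, 0.0, 0.12]), np.array([0.25, 0.0, 0.0]))]
r_pred, t_pred = predict_next_pose(poses)
print("predicted rotvec:", r_pred, "predicted translation:", t_pred)
```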

Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.

307 Effect of High-Energy Ball Milling on the Electrical and Piezoelectric Properties of (K0.5Na0.5)(Nb0.9Ta0.1)O3 Lead-Free Piezoceramics

Authors: Chongtham Jiten, K. Chandramani Singh, Radhapiyari Laishram

Abstract:

Nanocrystalline powders of the lead-free piezoelectric material, tantalum-substituted potassium sodium niobate (K0.5Na0.5)(Nb0.9Ta0.1)O3 (KNNT), were produced using a Retsch PM100 planetary ball mill by setting the milling time to 15h, 20h, 25h, 30h, 35h and 40h, at a fixed speed of 250rpm. The average particle size of the milled powders was found to decrease from 12nm to 3nm as the milling time increases from 15h to 25h, which is in agreement with the existing theoretical model. An anomalous increase to 98nm and then a drop to 3nm in the particle size were observed as the milling time further increases to 30h and 40h respectively. Various sizes of these starting KNNT powders were used to investigate the effect of milling time on the microstructure, dielectric properties, phase transitions and piezoelectric properties of the resulting KNNT ceramics. The particle size of starting KNNT was somewhat proportional to the grain size. As the milling time increases from 15h to 25h, the resulting ceramics exhibit enhancement in the values of relative density from 94.8% to 95.8%, room temperature dielectric constant (εRT) from 878 to 1213, and piezoelectric charge coefficient (d33) from 108pC/N to 128pC/N. For this range of ceramic samples, grain size refinement suppresses the maximum dielectric constant (εmax), shifts the Curie temperature (Tc) to a lower temperature and the orthorhombic-tetragonal phase transition (Tot) to a higher temperature. Further increase of milling time from 25h to 40h produces a gradual degradation in the values of relative density, εRT, and d33 of the resulting ceramics.

Keywords: Ceramics, Dielectric, High-energy milling, Perovskite.

306 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises

Authors: Jiří F. Urbánek, David Král

Abstract:

Small and Middle Enterprises (SMEs) have a specific mission, characteristics, and behavior in global, competitive business environments. They must respect policies, rules, requirements and standards in all their inherent and outer processes of supply-customer chains and networks. The aim and purpose of this paper is to introduce computational assistance that enables the use of the prevailing MS Office operating environment (SmartArt, etc.) for mathematical models based on the DYVELOP (Dynamic Vector Logistics of Processes) method. In an SME’s global environment, it provides the capability to achieve its commitments regarding the effectiveness of the quality management system in meeting customer requirements, the continual improvement of the organization’s and SME’s overall process performance and efficiency, and its societal security via continual planning improvement. The DYVELOP model’s maps, the Blazons, can mathematically and graphically express the relationships among entities, actors, and processes, including the discovery and modeling of cycling cases and their phases. The Blazons need a live PowerPoint presentation for better comprehension of this paper’s mission: added-value analysis. The crisis management of SMEs is obliged to use the cycles for successfully coping with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization’s damages. An uninterrupted and continuous cycling process is a good indicator and controlling factor of SME continuity and its advanced possibilities for sustainable development.

Keywords: Blazons, computational assistance, DYVELOP method, small and middle enterprises.

305 Quantifying the UK’s Future Thermal Electricity Generation Water Use: Regional Analysis

Authors: Daniel Murrant, Andrew Quinn, Lee Chapman

Abstract:

A growing population has led to increasing global water and energy demand. This demand, combined with the effects of climate change and an increasing need to maintain and protect the natural environment, represents a potentially severe threat to many national infrastructure systems. This has resulted in a considerable quantity of published material on the interdependencies that exist between the supply of water and the thermal generation of electricity, often known as the water-energy nexus. Focusing specifically on the UK, there is a growing concern that the future availability of water may at times constrain thermal electricity generation, and therefore hinder the UK in meeting its increasing demand for a secure and affordable supply of low carbon electricity. To provide further information on the threat the water-energy nexus may pose to the UK’s energy system, this paper models the regional water demand of UK thermal electricity generation in 2030 and 2050. It uses the strategically important Energy Systems Modelling Environment model developed by the Energy Technologies Institute. Unlike previous research, this paper was able to use abstraction and consumption factors specific to UK power stations. It finds that by 2050 the South East, Yorkshire and Humber, the West Midlands and North West regions are those with the greatest freshwater demand and therefore the most likely to suffer from a lack of resource. However, it finds that by 2050 it is the East, South West and East Midlands regions that have the greatest total water (fresh, estuarine and seawater) demand and are the most likely to be constrained by environmental standards.
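A minimal sketch of the bookkeeping implied above, assuming regional water demand is accumulated as generation per plant/cooling type multiplied by abstraction and consumption factors; all numbers are placeholders, not ESME outputs or the paper's UK-specific factors.

```python
# Placeholder sketch of regional cooling-water accounting:
# demand = sum over plant types of (generation * water factor).
# All figures below are illustrative assumptions.

generation_twh = {            # projected generation by type in one region (made up)
    "gas_ccs_freshwater": 12.0,
    "nuclear_seawater": 30.0,
    "biomass_freshwater": 5.0,
}
abstraction_m3_per_mwh = {    # water abstracted per MWh (illustrative)
    "gas_ccs_freshwater": 1.2,
    "nuclear_seawater": 90.0,
    "biomass_freshwater": 1.8,
}
consumption_m3_per_mwh = {    # water consumed (not returned) per MWh (illustrative)
    "gas_ccs_freshwater": 0.7,
    "nuclear_seawater": 0.2,
    "biomass_freshwater": 1.0,
}

to_mwh = 1.0e6  # 1 TWh = 1e6 MWh
abstraction = sum(generation_twh[k] * to_mwh * abstraction_m3_per_mwh[k] for k in generation_twh)
consumption = sum(generation_twh[k] * to_mwh * consumption_m3_per_mwh[k] for k in generation_twh)
print(f"regional abstraction: {abstraction/1e6:.1f} Mm3/yr, consumption: {consumption/1e6:.1f} Mm3/yr")
```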

Keywords: Water-energy nexus, water resources, abstraction, climate change, power station cooling.

304 Fabrication of Nanoengineered Radiation Shielding Multifunctional Polymeric Sandwich Composites

Authors: Nasim Abuali Galehdari, Venkat Mani, Ajit D. Kelkar

Abstract:

Space radiation has become one of the major factors in successful long-duration space exploration. Exposure to space radiation not only can affect the health of astronauts but also can disrupt or damage materials and electronics. Hazards to materials include degradation of properties such as modulus, strength, or glass transition temperature. Electronics may experience single event effects, gate rupture, burnout of field effect transistors and noise. Presently, aluminum is the major component in most space structures due to its light weight and good structural properties. However, aluminum is ineffective at blocking space radiation. Therefore, most past research has looked at polymers, which contain large amounts of hydrogen. However, these are not structural materials and would be required in large amounts to achieve the structural properties needed. One class of materials that can alleviate this problem is polymeric composite materials, which have good structural properties and use polymers that contain large amounts of hydrogen. This paper presents the steps involved in the fabrication of multi-functional hybrid sandwich panels that can provide beneficial radiation shielding as well as structural strength. Multifunctional hybrid sandwich panels were manufactured using the vacuum assisted resin transfer molding process and were subjected to radiation treatment. The study indicates that various nanoparticles, including boron nanopowder, boron carbide and gadolinium nanoparticles, can be successfully used to block space radiation without sacrificing structural integrity.

Keywords: Multi-functional, polymer composites, radiation shielding, sandwich composites.

303 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that often can be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated and related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors needed to combine, so computational efficiency can be improved. A cumulative sum test is then applied on the ratio of pignistic probability to detect and declare the change for decision making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
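To make the pipeline above concrete, the sketch below assigns a simple mass function per sensor from KL distances to pre- and post-change Gaussian models, fuses the sensors with Dempster's rule over the frame {no change, change}, and runs a CUSUM-style test on the pignistic probability of change. The models, mass assignment, threshold and data are illustrative assumptions, not the paper's.

```python
import math

def kl_gaussian(mu0, var0, mu1, var1):
    """KL divergence D(N(mu0,var0) || N(mu1,var1))."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0 + math.log(var1 / var0))

def mass_from_distances(d_pre, d_post, unc=0.2):
    """Assumed mass assignment: closeness to a model earns belief, `unc` stays on the frame."""
    w_no, w_ch = 1.0 / (d_pre + 1e-9), 1.0 / (d_post + 1e-9)
    s = w_no + w_ch
    return {"no": (1 - unc) * w_no / s, "ch": (1 - unc) * w_ch / s, "theta": unc}

def dempster(m1, m2):
    """Dempster's rule on the frame {no, ch}; 'theta' is the full frame."""
    k = m1["no"] * m2["ch"] + m1["ch"] * m2["no"]          # conflicting mass
    fuse = lambda a: (m1[a] * m2[a] + m1[a] * m2["theta"] + m1["theta"] * m2[a]) / (1 - k)
    m = {"no": fuse("no"), "ch": fuse("ch")}
    m["theta"] = 1.0 - m["no"] - m["ch"]
    return m

def pignistic_change(m):
    """Pignistic probability of 'change': singleton mass plus half of the frame mass."""
    return m["ch"] + 0.5 * m["theta"]

# Two sensors with assumed pre-change N(0,1) and post-change N(1,1) models.
pre, post = (0.0, 1.0), (1.0, 1.0)
cusum, threshold = 0.0, 3.0
for est in [(0.1, 1.0), (0.2, 1.1), (0.9, 1.0), (1.1, 0.9)]:   # per-window estimated (mean, var)
    masses = [mass_from_distances(kl_gaussian(*est, *pre), kl_gaussian(*est, *post))
              for _ in range(2)]                                # same estimate used for both sensors here
    m = dempster(*masses)
    p = pignistic_change(m)
    cusum = max(0.0, cusum + math.log(p / (1 - p)))             # CUSUM-style statistic on the pignistic ratio
    if cusum > threshold:
        print("change declared")
        break
```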

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

302 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious, threatening environmental hazard, which causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles for representing the urban area topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation in the Reyran river valley as a consequence of the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with notable efficiency speedup due to parallelization.
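For reference, the depth-averaged 2D shallow water equations solved by the scheme can be written in conservative form as follows (textbook form with generic bed-slope and friction source terms, which may differ in detail from the paper's implementation):

```latex
\begin{align*}
&\partial_t h + \partial_x (hu) + \partial_y (hv) = 0,\\
&\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}gh^2\right) + \partial_y (huv)
   = -gh\,\partial_x z_b - gh\,S_{fx},\\
&\partial_t (hv) + \partial_x (huv) + \partial_y\!\left(hv^2 + \tfrac{1}{2}gh^2\right)
   = -gh\,\partial_y z_b - gh\,S_{fy}.
\end{align*}
```

Here h is the water depth, (u, v) the depth-averaged velocities, z_b the bed elevation and S_f the friction slope; a well-balanced scheme preserves the still-water equilibrium of these equations exactly on the unstructured mesh.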

Keywords: Flood modeling, dam-break, shallow water equations, Discontinuous Galerkin scheme, MUSCL scheme.

301 A Corporate Social Responsibility Project to Improve the Democratization of Scientific Education in Brazil

Authors: Denise Levy

Abstract:

Nuclear technology is part of our everyday life and its beneficial applications help to improve the quality of our lives. Nevertheless, in Brazil, the media and social networks most often tend to associate radiation with nuclear weapons and major accidents, and there is still great misunderstanding about the peaceful applications of nuclear science. The Educational Portal Radioatividades (Radioactivities) is a corporate social responsibility initiative that takes advantage of the growing impact of the Internet to offer high quality scientific information for teachers and students throughout Brazil. This web-based initiative focuses on the positive applications of nuclear technology, presenting the several contributions of ionizing radiation in different contexts, such as nuclear medicine, agricultural techniques, food safety and electric power generation, showing nuclear technology to be part of modern life and a must for improving the quality of our lifestyle. This educational project aims to contribute to the democratization of scientific education and to social inclusion, bringing society closer to scientific knowledge, promoting critical thinking and inspiring further reflection. The website offers a wide variety of ludic activities such as curiosities, interactive exercises and short courses. Moreover, teachers are offered free web-based material with full instructions to be developed in class. Since 2013, the project has been developed and improved according to a comprehensive study of the realistic scenario of ICT infrastructure in Brazilian schools and in full compliance with the best national and international e-learning recommendations.

Keywords: Information and communication technologies, nuclear technology, science communication, society and education.

300 Innovative Waste Management Practices in Remote Areas

Authors: Dolores Hidalgo, Jesús M. Martín-Marroquín, Francisco Corona

Abstract:

Municipal waste consists of a variety of items that are discarded every day by the population. It is usually collected by municipalities and includes waste generated by households, commercial activities (local shops) and public buildings. The composition of municipal waste varies greatly from place to place, being mostly related to levels and patterns of consumption, rates of urbanization, lifestyles, and local or national waste management practices. Each year, a huge amount of resources is consumed in the EU and, accordingly, a huge amount of waste is produced. The environmental problems derived from the management and processing of these waste streams are well known, and include impacts on land, water and air. The situation in remote areas is even worse. Difficult access when climatic conditions are adverse, remoteness from centralized municipal treatment systems, and dispersion of the population are all factors that make remote areas a real municipal waste treatment challenge. Furthermore, the scope of the problem increases significantly because of the total lack of awareness of the existing risks in these areas, together with the poor implementation of an advanced culture of responsible waste minimization and recycling. The aim of this work is to analyze the existing situation in remote areas with reference to the production of municipal waste and to evaluate the efficiency of different management alternatives. Ideas for improving waste management in remote areas include, for example: the implementation of self-management systems for the organic fraction; establishing door-to-door collection models; promoting small-scale treatment facilities; or adjusting the rates of waste generation.

Keywords: Door to door collection, islands, isolated areas, municipal waste, remote areas, rural communities.

299 Structure-Activity Relationship of Gold Catalysts on Alumina Supported Cu-Ce Oxides for CO and Volatile Organic Compound Oxidation

Authors: Tatyana T. Tabakova, Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Krasimir I. Ivanov, Yordanka G. Karakirova, Petya Cv. Petrova, Georgi V. Avdeev

Abstract:

The catalytic oxidation of CO and volatile organic compounds (VOCs) is considered one of the most efficient ways to reduce harmful emissions from various chemical industries. The effectiveness of gold-based catalysts for many reactions of environmental significance has been proven during the past three decades. The aim of this work was to combine the favorable features of Au and Cu-Ce mixed oxides in the design of new catalytic materials of improved efficiency and economic viability for the removal of air pollutants in waste gases from formaldehyde production. Supported oxides of copper and cerium with Cu:Ce molar ratios of 2:1 and 1:5 were prepared by wet impregnation of γ-alumina. Gold (2 wt.%) catalysts were synthesized by a deposition-precipitation method. Catalyst characterization was carried out by texture measurements, powder X-ray diffraction, temperature programmed reduction and electron paramagnetic resonance spectroscopy. The catalytic activity in the oxidation of CO, CH3OH and (CH3)2O was measured using continuous flow equipment with a fixed bed reactor. Both Cu-Ce/alumina samples demonstrated similar catalytic behavior. The addition of gold caused significant enhancement of CO and methanol oxidation activity (100% conversion of CO and CH3OH at about 60 and 140 °C, respectively). The composition of the Cu-Ce mixed oxides affected the performance of the gold-based samples considerably. The gold catalyst on Cu-Ce/γ-Al2O3 1:5 exhibited higher activity for CO and CH3OH oxidation in comparison with Au on Cu-Ce/γ-Al2O3 2:1. The better performance of Au/Cu-Ce 1:5 was related to the availability of highly dispersed gold particles and copper oxide clusters in close contact with ceria.

Keywords: CO and VOCs oxidation, copper oxide, ceria, gold catalysts.

298 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata are generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on the cloud server. To analyze the performance of the server, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the loading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% in relation to sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of said information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
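As an illustration of routing a query through DICOM metadata with a decision tree instead of scanning records sequentially, the sketch below trains scikit-learn's DecisionTreeClassifier on made-up metadata; the features, labels and partitioning scheme are assumptions, not the telemedicine server's actual pipeline.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OrdinalEncoder

# Made-up DICOM metadata records: [modality, body_part, study_year] -> storage partition.
records = [["CT", "HEAD",  2015], ["CT", "CHEST", 2016], ["MR", "HEAD",  2015],
           ["MR", "SPINE", 2016], ["US", "ABDOMEN", 2015], ["CT", "CHEST", 2015],
           ["MR", "HEAD",  2016], ["US", "ABDOMEN", 2016]]
partitions = ["p1", "p2", "p3", "p3", "p4", "p2", "p3", "p4"]   # where each image is stored

enc = OrdinalEncoder()
X_cat = enc.fit_transform([r[:2] for r in records])             # encode the categorical metadata
X = [[*row, rec[2]] for row, rec in zip(X_cat.tolist(), records)]  # append the numeric year

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, partitions)

# A query is answered by predicting the partition to search instead of scanning every record.
query = ["MR", "HEAD", 2016]
q = [[*enc.transform([query[:2]])[0].tolist(), query[2]]]
print("search in partition:", tree.predict(q)[0])
```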

Keywords: Cloud storage, decision trees, diagnostic image, search, telemedicine.

297 Survey of Potato Viral Infection Using Das-Elisa Method in Georgia

Authors: Maia Kukhaleishvili, Ekaterine Bulauri, Iveta Megrelishvili, Tamar Shamatava, Tamar Chipashvili

Abstract:

Plant viruses can cause loss of yield and quality in many important crops. Symptoms of the pathogens vary depending on the cultivar and virus strain. Selection of resistant potato varieties would reduce the risk of virus transmission and significant economic impact. Another way to avoid reduced harvest yields is regular sampling of potato seed production and testing for viral infection. The aim of this study was to determine the occurrence and distribution of viral diseases according to potato cultivar for further selection of virus-free material in Georgia. During the summers of 2015-2016, 5 potato cultivars (Sante, Laura, Jelly, Red Sonia, Anushka) at 5 different farms located in Akhalkalaki were tested for 6 different potato viruses: Potato virus A (PVA), Potato virus M (PVM), Potato virus S (PVS), Potato virus X (PVX), Potato virus Y (PVY) and Potato leafroll virus (PLRV). A serological method, Double Antibody Sandwich-Enzyme Linked Immunosorbent Assay (DAS-ELISA), was used at the laboratory to analyze the results. The results showed that the presence of PVY (21.4%) and PLRV (19.7%) in the collected samples was relatively high compared to the others. The researched potato cultivars, except Jelly and Laura, were infected by PVY at different concentrations. PLRV was found only in three potato cultivars (Sante, Jelly, Red Sonia), and PVM (3.12%) was characterized by low prevalence. PVX, PVA and PVS infections were not detected. It should be noted that 7.9% of samples contained PVY/PLRV mixed infection. Based on the results, it can be concluded that PVY and PLRV infections are dominant in all researched cultivars; therefore, significant yield losses are expected. Systematic, long-term control of potato viral infection, especially of seed potatoes, must be regarded as the most important factor to increase seed productivity.

Keywords: Diseases, infection, potato, virus.

296 Multiple Targets Classification and Fuzzy Logic Decision Fusion in Wireless Sensor Networks

Authors: Ahmad Aljaafreh

Abstract:

This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model to model the temporal transitions of each state of the first HMM. By modeling the temporal transitions, only those hypotheses with nonzero transition probabilities need to be tested. Thus, this method efficiently reduces the computational load, which is preferable in WSN applications. This paper integrates several techniques to optimize the detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are experimentally determined, while the other parameters are estimated using Expectation Maximization (EM). The HHMM is used to model the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach. The states in the HHMM represent various combinations of vehicles of different types. Due to the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance cooperative classification of moving vehicles within a region that is monitored by a wireless sensor network. A fuzzy inference system weighs each local decision based on the signal-to-noise ratio of the acoustic signal for target detection and the signal-to-noise ratio of the radio signal for sensor communication. The spatial correlation among the observations of neighboring sensor nodes is efficiently utilized, as well as the temporal correlation. Simulation results demonstrate the efficiency of this scheme.
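The fusion heuristic described above weighs each node's local decision by signal quality. A simplified sketch is shown below; the ramp memberships, thresholds and data are assumptions, since the abstract does not give the exact fuzzy inference rules.

```python
def fuzzy_weight(acoustic_snr_db, radio_snr_db):
    """Map the two SNRs to a [0,1] confidence weight with simple ramp memberships (assumed)."""
    ramp = lambda x, lo, hi: min(1.0, max(0.0, (x - lo) / (hi - lo)))
    w_target = ramp(acoustic_snr_db, 0.0, 20.0)   # trust in the detection itself
    w_link = ramp(radio_snr_db, 0.0, 15.0)        # trust in the report reaching the fusion node
    return w_target * w_link

def weighted_majority(decisions):
    """decisions: list of (declared_class, acoustic_snr_db, radio_snr_db) from sensor nodes."""
    scores = {}
    for cls, a_snr, r_snr in decisions:
        scores[cls] = scores.get(cls, 0.0) + fuzzy_weight(a_snr, r_snr)
    return max(scores, key=scores.get), scores

# Made-up local decisions about the vehicle type present in the monitored region.
local = [("truck", 18.0, 12.0), ("car", 6.0, 14.0), ("truck", 15.0, 5.0), ("car", 4.0, 3.0)]
winner, scores = weighted_majority(local)
print(winner, scores)
```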

Keywords: Classification, decision fusion, fuzzy logic, hidden Markov model

295 Capacity Building for Hazmat Transport Emergency Preparedness: 'Hotspot Impact Zone' Mapping from Flammable and Toxic Releases

Authors: U K Chakrabarti, Jigisha Parikh

Abstract:

Hazardous material transportation by road carries an inherent risk of accidents causing loss of lives, grievous injuries, property losses and environmental damage. The most common type of hazmat road accident is the release (78%) of hazardous substances, followed by fires (28%), explosions (14%) and vapour/gas clouds (6%). The paper initially discusses the probable 'impact zones' likely to be caused by one flammable (LPG) and one toxic (ethylene oxide) chemical being transported through a sizable segment of a State Highway connecting three notified industrial zones in Surat district in Western India, housing 26 MAH industrial units. Three 'hotspots' were identified along the highway segment depending on the traffic of the particular chemicals and the population distribution within 500 meters on either side. The thermal radiation and explosion overpressure have been calculated for LPG/ethylene oxide BLEVE scenarios, along with a toxic release scenario for ethylene oxide. In addition, dispersion calculations for an ethylene oxide toxic release have been made for each 'hotspot' location and the impact zones have been mapped for the LOC concentrations. Subsequently, the maximum initial isolation and protective zones were calculated based on the ERPG-3 and ERPG-2 values of ethylene oxide, respectively, estimated for the worst-case scenario under worst weather conditions. The data analysis will be helpful to the local administration in capacity building with respect to rescue/evacuation and medical preparedness, and will provide quantitative inputs to augment the District Offsite Emergency Plan document.

Keywords: Hotspot, Ethylene Oxide, LPG, MAH (Major Accident Hazard).

294 Delamination Fracture Toughness Benefits of Inter-Woven Plies in Composite Laminates Produced through Automated Fibre Placement

Authors: Jayden Levy, Garth M. K. Pearce

Abstract:

An automated fibre placement method has been developed to build through-thickness reinforcement into carbon fibre reinforced plastic laminates during their production, with the goal of increasing delamination fracture toughness while circumventing the additional costs and defects imposed by post-layup stitching and z-pinning. Termed ‘inter-weaving’, the method uses custom placement sequences of thermoset prepreg tows to distribute regular fibre link regions in traditionally clean ply interfaces. Inter-weaving’s impact on mode I delamination fracture toughness was evaluated experimentally through double cantilever beam tests (ASTM standard D5528-13) on [±15°]9 laminates made from Park Electrochemical Corp. E-752-LT 1/4” carbon fibre prepreg tape. Unwoven and inter-woven automated fibre placement samples were compared to those of traditional laminates produced from standard uni-directional plies of the same material system. Unwoven automated fibre placement laminates were found to suffer a mostly constant 3.5% decrease in mode I delamination fracture toughness compared to flat uni-directional plies. Inter-weaving caused significant local fracture toughness increases (up to 50%), though these were offset by a matching overall reduction. These positive and negative behaviours of inter-woven laminates were respectively found to be caused by fibre breakage and matrix deformation at inter-weave sites, and the 3D layering of inter-woven ply interfaces providing numerous paths of least resistance for crack propagation.
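For context, the mode I fracture toughness values compared above come from double cantilever beam testing; the beam-theory data reduction of ASTM D5528 computes the strain energy release rate as

```latex
% Mode I strain energy release rate from a DCB test (simple beam theory)
G_{I} = \frac{3 P \delta}{2 b a}

% Modified beam theory correction for crack-tip rotation
G_{I} = \frac{3 P \delta}{2 b \left(a + |\Delta|\right)}
```

where P is the applied load, δ the load-point displacement, b the specimen width, a the delamination length, and |Δ| the crack-length correction obtained from compliance calibration in the modified beam theory.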

Keywords: AFP, automated fibre placement, delamination, fracture toughness, inter-weaving.

293 An Empirical Investigation on the Dynamics of Knowledge and IT Industries in Korea

Authors: Sang Ho Lee, Tae Heon Moon, Youn Taik Leem, Kwang Woo Nam

Abstract:

Knowledge and IT inputs to other industrial production have become more important as a key factor for the competitiveness of national and regional economies, such as the knowledge economies of smart cities. Knowledge and IT industries lead industrial innovation and technical (r)evolution through low cost and high efficiency in production, and by creating new value chains and new production path chains, which is referred to as knowledge and IT dynamics. This study aims to investigate the knowledge and IT dynamics in Korea, which are analyzed through the input-output model and structural path analysis. Twenty-eight industries were reclassified into seven categories (Agriculture and Mining, IT manufacture, Non-IT manufacture, Construction, IT service, Knowledge service, Non-knowledge service) to take a close look at the knowledge and IT dynamics. Knowledge and IT dynamics were analyzed through the changes in input-output coefficients and multiplier indices in terms of technical innovation, as well as the changes in the structural paths from knowledge and IT to other industries in terms of new production value creation, between 1985 and 2010. The structural paths of knowledge and IT explain not only that IT fosters the generation, circulation and use of knowledge through IT industries and IT-based services, but also that knowledge encourages IT use through creating, sharing and managing knowledge. As a result, this paper presents an empirical investigation of the knowledge and IT dynamics of the Korean economy. Knowledge and IT have played an important role in inter-industrial transactional inputs for production, as well as in new industrial creation. The births of input-output production paths mostly originated from the knowledge and IT industries, while the deaths of input-output production paths took place in the traditional industries between 1985 and 2010. The Korean economy has been in transition to a knowledge economy in the smart city.
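As a compact illustration of the input-output machinery underlying the analysis above (a generic Leontief calculation, not the paper's 28-industry Korean tables), the sketch below computes the Leontief inverse and output multipliers for a toy three-sector economy and shows how strengthening one inter-industry link changes a multiplier.

```python
import numpy as np

# Toy technical-coefficient matrix A for three aggregated sectors
# (rows/cols: IT manufacture, knowledge service, other). All values are made up.
A = np.array([[0.15, 0.10, 0.05],
              [0.08, 0.12, 0.06],
              [0.20, 0.15, 0.25]])
I = np.eye(3)
L = np.linalg.inv(I - A)          # Leontief inverse (I - A)^-1
multipliers = L.sum(axis=0)       # column sums: total output required per unit of final demand

final_demand = np.array([100.0, 80.0, 200.0])
gross_output = L @ final_demand   # x = (I - A)^-1 f

print("output multipliers:", np.round(multipliers, 3))
print("gross output:", np.round(gross_output, 1))

# A structural-path style question: how does strengthening the IT -> knowledge-service
# link (A[0, 1]) change the knowledge-service multiplier?
A2 = A.copy()
A2[0, 1] += 0.05
print("new knowledge-service multiplier:",
      round(np.linalg.inv(I - A2).sum(axis=0)[1], 3))
```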

Keywords: Knowledge and IT industries, input-output model, structural path analysis, dynamics of knowledge and IT, knowledge economy, knowledge city, smart city.

292 On the Factors Affecting Computing Students’ Awareness of the Latest ICTs

Authors: O. D. Adegbehingbe, S. D. Eyono Obono

Abstract:

The education sector is constantly faced with rapid changes in technologies, in terms of ensuring that the curriculum is up to date and in terms of making sure that students are aware of these technological changes. This challenge can be seen as the motivation for this study, which is to examine the factors affecting computing students’ awareness of the latest Information and Communication Technologies (ICTs). The aim of this study is divided into two sub-objectives: the selection of relevant theories and the design of a conceptual model to support it, as well as the empirical testing of the designed model. The first objective is achieved by a review of existing literature on technology adoption theories and models. The second objective is achieved using a survey of computing students in the four universities of the KwaZulu-Natal province of South Africa. Data collected from this survey are analyzed using the Statistical Package for the Social Sciences (SPSS) with descriptive statistics, ANOVA and Pearson correlations. The main hypothesis of this study is that there is a relationship between the demographics and the prior conditions of the computing students and their awareness of general ICT trends and of Digital Switch Over (DSO), a new technology which involves the change from analog to digital television broadcasting in order to achieve improved spectrum efficiency. The prior conditions of the computing students considered in this study are students’ perceived exposure to career guidance and students’ perceived curriculum currency. The results of this study confirm that gender, ethnicity, and high school computing course affect students’ perceived curriculum currency, while high school location affects students’ awareness of DSO. The results also confirm that there is a relationship between students’ prior conditions and their awareness of general ICT trends and of DSO in particular.
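A minimal sketch of the kind of statistics reported above, using SciPy equivalents of the SPSS procedures (one-way ANOVA and Pearson correlation) on made-up awareness scores; the groupings and effect sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up awareness-of-DSO scores for students from three high-school locations.
urban = rng.normal(3.8, 0.6, 40)
periurban = rng.normal(3.5, 0.6, 35)
rural = rng.normal(3.1, 0.6, 30)

f_stat, p_anova = stats.f_oneway(urban, periurban, rural)   # does location affect awareness?
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Pearson correlation between perceived curriculum currency and general ICT awareness.
currency = rng.normal(3.4, 0.8, 105)
awareness = 0.5 * currency + rng.normal(0, 0.6, 105)
r, p_corr = stats.pearsonr(currency, awareness)
print(f"Pearson r={r:.2f}, p={p_corr:.4f}")
```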

Keywords: Education, Information Technologies, IDT, awareness.

291 Multi-Scale Gabor Feature Based Eye Localization

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho

Abstract:

Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need to be improved in terms of precision and computational time for successful applications. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors, which is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), which consists of n Gabor jets and the average eye coordinates obtained from n model face images, and then tries to localize eyes in an incoming face image by utilizing the fact that the true eye coordinates are most likely to be very close to the position where the Gabor jet has the best similarity match with a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, and extensive search ranges cause a much greater computational burden. In this paper, we propose a multi-scale approach with little increase in computational burden, where one first tries to localize eyes based on Gabor feature vectors in a coarse face image obtained by downsampling the original face image, and then localizes eyes based on Gabor feature vectors in the original resolution face image by using the eye coordinates localized in the coarse-scaled image as initial points. Several experiments and comparisons with other eye localization methods reported in other papers show the efficiency of our proposed method.
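To make the matching criterion above concrete, the sketch below shows the usual magnitude-based Gabor jet similarity and the coarse-to-fine search in schematic form; jet extraction itself is stubbed out with a placeholder, and the function names and search radii are assumptions rather than the paper's settings.

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product of Gabor jet magnitudes (phase ignored)."""
    a1, a2 = np.abs(j1), np.abs(j2)
    return float(a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2) + 1e-12))

def localize_eye(image, model_jets, start, extract_jet, search_radius):
    """Pick the position around `start` whose jet best matches any jet in the model bunch."""
    best_pos, best_sim = start, -1.0
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            pos = (start[0] + dx, start[1] + dy)
            jet = extract_jet(image, pos)                    # complex Gabor responses at pos
            sim = max(jet_similarity(jet, m) for m in model_jets)
            if sim > best_sim:
                best_pos, best_sim = pos, sim
    return best_pos

def multiscale_localize(image, coarse_image, model_jets, init, extract_jet):
    # Coarse pass on the downsampled image with a wide radius, then refinement at full
    # resolution with a small radius around the upscaled coarse estimate.
    coarse = localize_eye(coarse_image, model_jets, (init[0] // 2, init[1] // 2),
                          extract_jet, search_radius=8)
    return localize_eye(image, model_jets, (coarse[0] * 2, coarse[1] * 2),
                        extract_jet, search_radius=3)

# Stand-in jet extractor so the sketch runs; a real one samples a bank of Gabor wavelets at pos.
def fake_extract_jet(image, pos, n=40):
    rng = np.random.default_rng(abs(hash(pos)) % (2 ** 32))
    return rng.normal(size=n) + 1j * rng.normal(size=n)

model = [fake_extract_jet(None, (10, 10)) for _ in range(5)]
print(multiscale_localize(None, None, model, init=(20, 22), extract_jet=fake_extract_jet))
```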

Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.

290 Anti-Aging Effects of Retinol and Alpha Hydroxy Acid on Elastin Fibers of Artificially Photo-Aged Human Dermal Fibroblast Cell Lines

Authors: M. Jarrar, S. Behl, N. Shaheen, A. Fatima, R. Nasab

Abstract:

Skin aging is a slow multifactorial process influenced by both internal and external factors. Ultraviolet radiation (UV), diet, smoking and personal habits are the most common environmental factors that affect skin aging. Fat contents and fibrous proteins such as collagen and elastin are core internal structural components. The direct influence of UV on elastin integrity and health is central to the aging of skin, especially over time. The deposition of abnormal elastic material is a major marker of photo-aged skin. Searching for compounds that may protect against cutaneous photodamage is therefore highly valued. Retinoids and alpha hydroxy acids have been endorsed by some researchers as possible candidates for protecting and/or repairing UV-damaged skin. To better establish the protective and anti-aging effects of such agents, we evaluated the combinatory effects of various dosages of lactic acid and retinol on the elastin levels of dermal fibroblasts exposed to UV. The UV-exposed cells showed a significant reduction in elastin levels. A combination of drugs with a higher concentration of lactic acid (30-35 mM) and a lower concentration of retinol (10-15 mg/mL) was shown to work better in maintaining elastin concentration in UV-exposed cells. We assume this preservation could be the result of increased tropoelastin gene expression stimulated by retinol, whereas lactic acid probably repaired the UV-irradiated damage by enhancing the amount and integrity of the elastin fibers.

Keywords: Alpha Hydroxy Acid, Elastin, Retinol, Ultraviolet radiations.

289 A Robust Method for Finding Nearest-Neighbor using Hexagon Cells

Authors: Ahmad Attiq Al-Ogaibi, Ahmad Sharieh, Moh’d Belal Al-Zoubi, R. Bremananth

Abstract:

In pattern clustering, nearest-neighbor point computation is a challenging issue for many applications in research areas such as remote sensing, computer vision, pattern recognition and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among the volume of pixels (voxels) in order to localize the active regions of interest (AROI). Furthermore, it is needed to compute spatial metric relationships over diverse areas of imaging based on pattern recognition applications. In this paper, we propose a new methodology for finding the nearest neighbor point, based on making a virtual grid of hexagon cells and then locating every point within them. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest neighbors of a query point Φ are fetched by searching the hexagons in a holistic fashion. Searching is repeated until an AROI Φ is found. If a point Υ is located, then searching continues in the nearest hexagons in a circular way. The first hexagon is considered level 0 (L0) and the surrounding hexagons are level 1 (L1). If Υ is located in L1, then the search proceeds to the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in terms of minimizing the time complexity required for searching the neighbors; in turn, the efficiency of classification is improved substantially.
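A compact sketch of the idea described above: points are bucketed into hexagon cells, the query's own cell (L0) is searched first, and the search then expands ring by ring, scanning one extra ring after a candidate is found, mirroring the L0/L1/L2 description. The flat-top axial-coordinate conversion and cell size are assumptions, so this is an illustration rather than the authors' algorithm.

```python
import math
from collections import defaultdict

SIZE = 1.0  # hexagon "radius" (center to corner); an arbitrary choice for the sketch

def to_hex(x, y):
    """Point -> axial hex coordinates (flat-top layout), with cube rounding."""
    q = (2.0 / 3.0) * x / SIZE
    r = (-1.0 / 3.0 * x + math.sqrt(3.0) / 3.0 * y) / SIZE
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:        # fix the component with the largest rounding error
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return int(rq), int(rr)

def cells_at_distance(center, k):
    """All axial cells at hex distance exactly k from center (k = 0 gives the center cell)."""
    cq, cr = center
    return [(cq + dq, cr + dr)
            for dq in range(-k, k + 1) for dr in range(-k, k + 1)
            if abs(dq) + abs(dr) + abs(dq + dr) == 2 * k]

def build_grid(points):
    grid = defaultdict(list)
    for p in points:
        grid[to_hex(*p)].append(p)
    return grid

def nearest(grid, query, max_level=20):
    """Search L0, then expand rings; scan one extra ring after a candidate appears."""
    q_cell = to_hex(*query)
    best, best_d, found_level = None, float("inf"), None
    for level in range(max_level + 1):
        for cell in cells_at_distance(q_cell, level):
            for (x, y) in grid.get(cell, ()):
                d = (x - query[0]) ** 2 + (y - query[1]) ** 2
                if d < best_d:
                    best, best_d = (x, y), d
        if best is not None:
            if found_level is None:
                found_level = level      # candidate found; still scan the next ring
            elif level > found_level:
                break
    return best

pts = [(0.2, 0.1), (2.5, 1.7), (4.1, 0.3), (1.0, 3.2), (5.5, 5.0)]
grid = build_grid(pts)
print(nearest(grid, (2.3, 1.5)))   # -> (2.5, 1.7)
```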

Keywords: Hexagon cells, k-nearest neighbors, Nearest Neighbor, Pattern recognition, Query pattern, Virtual grid
