Search results for: large scale maps
12082 Developing High-Definition Flood Inundation Maps (HD-FIMs) Using Raster Adjustment with Scenario Profiles (RASP™)
Authors: Robert Jacobsen
Abstract:
Flood inundation maps (FIMs) are an essential tool for communicating flood threat scenarios to the public as well as for floodplain governance. With increasing demand for online raster FIMs, the FIM State-of-the-Practice (SOP) is rapidly advancing to meet the dual requirements of high resolution and high accuracy, or High Definition. Importantly, today's technology also enables the resolution of local (neighborhood-scale) bias errors that often occur in FIMs, even with the use of SOP two-dimensional flood modeling. To facilitate the development of HD-FIMs, a new GIS method, Raster Adjustment with Scenario Profiles (RASP™), is described for adjusting kernel raster FIMs to match refined scenario profiles. With RASP™, flood professionals can prepare HD-FIMs for a wide range of scenarios from available kernel rasters, including kernel rasters prepared from vector FIMs. The paper provides detailed procedures for RASP™, along with an example of applying it to prepare an HD-FIM for the August 2016 flood in Louisiana using both an SOP kernel raster and a kernel raster derived from an older vector-based flood insurance rate map. The accuracy of the HD-FIMs achieved by applying RASP™ to the two kernel rasters is evaluated.
Keywords: hydrology, mapping, high-definition, inundation
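To make the core adjustment step concrete, here is a minimal Python sketch of the general idea: interpolate a refined water-surface-elevation (WSE) profile along a stream coordinate, then subtract the DEM to obtain an adjusted depth raster. All array names and the linear interpolation are illustrative assumptions, not the authors' published RASP™ procedure.

```python
# Illustrative sketch only -- not the authors' RASP(TM) procedure.
import numpy as np

def adjust_fim(dem, stream_coord, profile_stations, profile_wse):
    """dem: 2D terrain elevations (m); stream_coord: 2D array giving each
    cell's distance along the stream (m); profile_*: refined scenario profile."""
    # Interpolate the refined WSE at every cell's stream coordinate.
    wse = np.interp(stream_coord, profile_stations, profile_wse)
    depth = wse - dem                            # positive where water stands above ground
    return np.where(depth > 0.0, depth, np.nan)  # NaN marks dry cells

# Toy example: a tilted 4x4 terrain and a sloping flood profile.
dem = np.linspace(10.0, 12.0, 16).reshape(4, 4)
coord = np.tile(np.linspace(0.0, 3000.0, 4), (4, 1))
depth = adjust_fim(dem, coord, [0.0, 3000.0], [12.5, 11.0])
print(np.round(depth, 2))
```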
Procedia PDF Downloads: 78

12081 Status of the European Atlas of Natural Radiation
Authors: G. Cinelli, T. Tollefsen, P. Bossew, V. Gruber, R. Braga, M. A. Hernández-Ceballos, M. De Cort
Abstract:
In 2006, the Joint Research Centre (JRC) of the European Commission started the project of the 'European Atlas of Natural Radiation'. The Atlas aims at preparing a collection of maps of Europe displaying the levels of natural radioactivity caused by different sources (indoor and outdoor radon, cosmic radiation, terrestrial radionuclides, terrestrial gamma radiation, etc.). The overall goal of the project is to estimate, with geographical resolution, the annual dose that the public may receive from natural radioactivity, combining the information from the different radiation components. The first map to be developed was the European map of indoor radon (Rn), since in most cases Rn is the most important contributor to exposure. New versions of the map are released when new countries join the project or when already participating countries send new data. We show the latest status of this map, which currently includes 25 European countries. Second, the JRC has undertaken to map a variable which measures 'what the earth delivers' in terms of Rn. The corresponding quantity is called the geogenic radon potential (RP). Due to the heterogeneity of data sources across Europe, there is a need to develop a harmonized quantity which on the one hand adequately measures or classifies the RP, and on the other hand is suited to accommodate the variety of input data used to estimate this target quantity. Candidates for input quantities which may serve as predictors of the RP, and for which data are available across Europe to different extents, are uranium (U) concentration in rocks and soils, soil-gas radon and soil permeability, terrestrial gamma dose rate, geological information, and indoor data from ground floors. The European Geogenic Radon Map makes it possible to characterize areas for radon hazard, on a European geographical scale, where indoor radon measurements are not available. Parallel to ongoing work on the European Indoor Radon, Geogenic Radon and Cosmic Radiation Maps, we have made progress in the development of maps of terrestrial gamma radiation and of U, Th and K concentrations in soil and bedrock. We show the first, preliminary map of the terrestrial gamma dose rate, estimated using data on ambient dose equivalent rate available from the EURDEP system (about 5000 fixed monitoring stations across Europe). The first maps of U, Th, and K concentrations in soil and bedrock are also shown in the present work.
Keywords: Europe, natural radiation, mapping, indoor radon
Procedia PDF Downloads: 291

12080 A Comparative Assessment Method for Map Alignment Techniques
Authors: Rema Daher, Theodor Chakhachiro, Daniel Asmar
Abstract:
In the era of autonomous robot mapping, assessing the goodness of the generated maps is important and is usually performed by aligning them to ground truth. Map alignment is difficult for two reasons: first, the query maps can be significantly distorted from ground truth, and second, establishing what constitutes ground truth for different settings is challenging. Most map alignment techniques to date have addressed the first problem while paying too little attention to the second. In this paper, we propose a benchmark dataset, which consists of synthetically transformed maps with their corresponding displacement fields. Furthermore, we propose a new system for comparison, where the displacement field of any map alignment technique can be computed and compared to the ground truth using statistical measures. The local information in displacement fields renders the evaluation system applicable to any alignment technique, whether it is linear or not. In our experiments, the proposed method was applied to different alignment methods from the literature, allowing for a comparative assessment between them all.
Keywords: assessment methods, benchmark, image deformation, map alignment, robot mapping, robot motion
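The comparison idea lends itself to a short sketch: score any alignment technique by local error statistics between its displacement field and the benchmark's ground-truth field. The field layout (H x W x 2) and the chosen statistics are assumptions for illustration.

```python
# Minimal sketch of scoring an alignment technique against ground truth.
import numpy as np

def displacement_error_stats(gt_field, est_field):
    err = np.linalg.norm(est_field - gt_field, axis=-1)  # per-cell endpoint error
    return {"mean": err.mean(),
            "rmse": np.sqrt((err ** 2).mean()),
            "p95": np.percentile(err, 95)}

rng = np.random.default_rng(0)
gt = rng.normal(size=(64, 64, 2))                    # ground-truth displacement field
est = gt + rng.normal(scale=0.1, size=gt.shape)      # a "good" alignment result
print(displacement_error_stats(gt, est))
```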
Procedia PDF Downloads: 119

12079 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
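A minimal sketch of the Diffusion Maps half of the pipeline follows (the DPM half is omitted); the kernel bandwidth and the two-component choice are assumed values.

```python
# Classic diffusion-map embedding via the eigenvectors of a row-normalized
# Gaussian affinity (Markov) matrix.
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = np.exp(-d2 / eps)                                 # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)                  # row-normalize -> Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                        # sort by eigenvalue
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial first eigenvector; scale by eigenvalue^t (diffusion time t).
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t

X = np.random.default_rng(1).normal(size=(200, 10))
Y = diffusion_map(X, eps=5.0)
print(Y.shape)   # (200, 2) low-dimensional coordinates
```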
Procedia PDF Downloads: 108

12078 Resource-Constrained Heterogeneous Workflow Scheduling Algorithms in Heterogeneous Computing Clusters
Authors: Lei Wang, Jiahao Zhou
Abstract:
The development of heterogeneous computing clusters provides a strong guarantee of computing power for large-scale workflows (e.g., scientific computing, artificial intelligence (AI), etc.). However, the tasks within large-scale workflows have also gradually become heterogeneous due to their differing demands on computing resources, which adds a task resource constraint to the workflow scheduling problem on heterogeneous computing platforms. We propose a heterogeneous constrained minimum-makespan scheduling algorithm based on a greedy strategy, which provides an efficient solution to the heterogeneous workflow scheduling problem on a heterogeneous platform. We test the effectiveness of the proposed scheduling algorithm by randomly generating heterogeneous workflows on a heterogeneous computing platform, and the experiments show that our method improves on state-of-the-art methods by 15.2%.
Keywords: heterogeneous computing, workflow scheduling, constrained resources, minimal makespan
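The greedy rule can be illustrated with a toy scheduler: assign each task, in precedence order, to the feasible machine that minimizes its finish time. The task/machine encoding below is an assumption, and the paper's full DAG handling is simplified to a topologically ordered list.

```python
# Toy greedy min-makespan scheduler with per-task resource constraints.
def greedy_schedule(tasks, machines):
    """tasks: list of (name, {machine: runtime}, resource, deps) in
    topological order; machines: {machine: resource}."""
    free_at = {m: 0.0 for m in machines}
    finish = {}
    for name, runtimes, resource, deps in tasks:
        ready = max((finish[d] for d in deps), default=0.0)        # precedence constraint
        feasible = [m for m in runtimes if machines[m] == resource]
        # Greedy choice: minimize this task's finish time.
        best = min(feasible, key=lambda m: max(free_at[m], ready) + runtimes[m])
        start = max(free_at[best], ready)
        finish[name] = free_at[best] = start + runtimes[best]
    return max(finish.values()), finish

machines = {"cpu0": "cpu", "cpu1": "cpu", "gpu0": "gpu"}
tasks = [("load",   {"cpu0": 2, "cpu1": 2}, "cpu", []),
         ("train",  {"gpu0": 5},            "gpu", ["load"]),
         ("report", {"cpu0": 1, "cpu1": 1}, "cpu", ["train"])]
print(greedy_schedule(tasks, machines))   # makespan 8.0
```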
Procedia PDF Downloads: 34

12077 Auto Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Water resource systems modelling has been a persistent challenge throughout human history. As methodological innovations evolve alongside computer science, researchers are likely to confront larger and more complex water resources systems due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme was applied to Gilan's large-scale water resource model using mathematical programming. The calibration was developed in order to attune the unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure was validated by comparing several gauged river outflows from the system in the past with model results. The calibration results are reasonable and present a rational insight into the system. Subsequently, the optimized parameters, previously unknown, were used in a basin-scale linear optimization model with the ability to evaluate the system's performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage in the reduced-inflow scenario.
Keywords: auto-calibration, Gilan, large-scale water resources, simulation
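As a rough illustration of the auto-calibration idea (the real Gilan/Sefidroud network model is not reproduced here), unknown return-flow fractions can be treated as decision variables fitted so that modeled outflows match gauged outflows; the tiny mass-balance model and all numbers below are invented stand-ins.

```python
# Hedged sketch: fit an unknown return-flow fraction by least squares.
import numpy as np
from scipy.optimize import minimize

inflow = np.array([100.0, 80.0, 120.0])       # gauged upstream inflow per period
diversion = np.array([40.0, 35.0, 50.0])      # known diversions to demand sites
observed = np.array([72.0, 55.5, 85.0])       # gauged downstream outflow

def modeled_outflow(return_frac):
    return inflow - diversion + return_frac * diversion   # simple mass balance

def objective(return_frac):
    return ((modeled_outflow(return_frac) - observed) ** 2).sum()

res = minimize(objective, x0=[0.5], bounds=[(0.0, 1.0)])
print("calibrated return-flow fraction:", res.x[0].round(3))   # ~0.30
```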
Procedia PDF Downloads: 335

12076 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design
Authors: Qing K. Zhu
Abstract:
Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore's law. Designers encounter numerous report files during design iterations using timing and noise analysis tools. This paper presents our work using data mining techniques combined with HTML tables to extract and represent critical timing/noise data. Running speed is important when this data-mining tool is applied in real applications, so the software employs table look-up techniques to achieve reasonable running speed, as confirmed by performance testing. We added several advanced features for its application to an industrial chip design.
Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise
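A hedged sketch of such a flow follows: scan a timing report for negative-slack paths, keep them in a dictionary for fast look-up, and render the violations as an HTML table. The report format here is invented, not the actual tool output.

```python
# Illustrative sketch: mine a (made-up) timing report into an HTML table.
import re

report = """\
path: clk->regA  slack: -0.12
path: clk->regB  slack: 0.45
path: clk->regC  slack: -0.03
"""

pattern = re.compile(r"path:\s+(\S+)\s+slack:\s+(-?\d+\.\d+)")
slack_by_path = {m.group(1): float(m.group(2))            # table look-up structure
                 for m in pattern.finditer(report)}

rows = "".join(f"<tr><td>{p}</td><td>{s:.2f}</td></tr>"
               for p, s in sorted(slack_by_path.items(), key=lambda kv: kv[1])
               if s < 0)                                   # only timing violations
html = f"<table><tr><th>Path</th><th>Slack (ns)</th></tr>{rows}</table>"
print(html)
```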
Procedia PDF Downloads: 254

12075 The Role of Architectural Firms in Enhancing Building Energy Efficiency in Emerging Countries: Processes and Tools Evaluation of Architectural Firms in Egypt
Authors: Mahmoud F. Mohamadin, Ahmed Abdel Malek, Wessam Said
Abstract:
Achieving energy-efficient architecture in general, and in emerging countries in particular, is a challenging process that requires the contribution of various governmental, institutional, and individual entities. The role of architectural design is essential in this process, as it is one of the earliest steps on the road to sustainability. Architectural firms have a moral and professional responsibility to respond to these challenges and deliver buildings that consume less energy. This study aims to evaluate the design processes and tools in practice at Egyptian architectural firms, based on a limited survey, to investigate whether their processes and methods can lead to projects that meet the Egyptian Code of Energy Efficiency Improvement. A case study of twenty architectural firms in Cairo was selected and categorized according to scale: large, medium, and small. A questionnaire was designed and distributed to the firms, and personal meetings with the firms' representatives took place. The questionnaire addressed three main points: the design processes adopted, the usage of performance-based simulation tools, and the usage of BIM tools for energy efficiency purposes. The results revealed that only a small percentage of the large-scale firms have clear strategies for building energy efficiency in their building design, and even there the application is limited to certain project types or to client requests. The percentage is much lower among medium-scale firms and almost absent among small-scale ones. This demonstrates the urgent need to enhance the awareness of the Egyptian architectural design community of the great importance of implementing these methods starting from the early stages of building design. Finally, the study proposes recommendations for such firms to be able to create a healthy built environment and improve the quality of life in emerging countries.
Keywords: architectural firms, emerging countries, energy efficiency, performance-based simulation tools
Procedia PDF Downloads: 283

12074 Simulation Studies of Solid-Particle and Liquid-Drop Erosion of NiAl Alloy
Authors: Rong Liu, Kuiying Chen, Ju Chen, Jingrong Zhao, Ming Liang
Abstract:
This article presents modeling studies of NiAl alloy under solid-particle erosion and liquid-drop erosion. In the solid-particle erosion simulation, attention is paid to the variation of oxide scale thickness on the alloy in high-temperature erosion environments. The erosion damage is assumed to arise from deformation wear and cutting wear mechanisms, incorporating the influence of the oxide scale on the eroded surface; the instantaneous oxide thickness is thus the result of the synergistic effect of erosion and oxidation. For liquid-drop erosion, particular interest lies in investigating the effects of drop velocity and drop size on the damage of the target surface. The models of impact stress wave, mean depth of penetration, and maximum depth of erosion rate (Max DER) are employed to develop various maps for NiAl alloy, including target thickness vs. drop size (diameter), mean depth of penetration rate (MDPR) vs. drop impact velocity, and damage threshold velocity (DTV) vs. drop size.
Keywords: liquid-drop erosion, NiAl alloy, oxide scale thickness, solid-particle erosion
Procedia PDF Downloads: 575

12073 Stochastic Control of Decentralized Singularly Perturbed Systems
Authors: Walid S. Alfuhaid, Saud A. Alghamdi, John M. Watkins, M. Edwin Sawan
Abstract:
Designing a controller for stochastic decentralized interconnected large-scale systems usually involves a high degree of complexity and computational effort. Noise, observability and controllability of all system states, connectivity, and channel bandwidth are other constraints on design procedures for distributed large-scale systems. The quasi-steady-state model investigated in this paper is a reduced-order model of the original system obtained using singular perturbation techniques. The paper develops an optimal control synthesis for an observer-based feedback controller by standard stochastic control theory, using the Linear Quadratic Gaussian (LQG) approach and Kalman filter design, with lower complexity and computation requirements. A numerical example is given at the end to demonstrate the efficiency of the proposed method.
Keywords: decentralized, optimal control, output, singular perturbation
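The synthesis itself is standard LQG, which can be sketched as two algebraic Riccati equations: one for the state-feedback gain and one for the Kalman (observer) gain. The two-state reduced model and the weights below are placeholders, not the paper's system.

```python
# Minimal LQG synthesis sketch for a reduced-order (quasi-steady-state) model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # reduced slow dynamics (assumed)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                  # single measured output
Q, R = np.eye(2), np.array([[1.0]])         # LQR weights
W, V = 0.1 * np.eye(2), np.array([[0.01]])  # process / measurement noise covariances

P = solve_continuous_are(A, B, Q, R)        # control Riccati equation
K = np.linalg.solve(R, B.T @ P)             # state-feedback gain, u = -K x
S = solve_continuous_are(A.T, C.T, W, V)    # filter Riccati equation (duality)
L = S @ C.T @ np.linalg.inv(V)              # Kalman (observer) gain

print("LQR gain K:", K.round(3))
print("Kalman gain L:", L.round(3))
```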
Procedia PDF Downloads: 370

12072 Effects of Empathy Priming on Idea Generation
Authors: Tejas Dhadphale
Abstract:
The user-centered design (UCD) approach has led to an increased interest in empathy within the product development process. Designers have explored several empathetic methods and tools, such as personas, empathy maps, journey maps, user needs statements and user scenarios, to capture and visualize users' needs. The goal of these tools is not only to generate a deeper and shared understanding of user needs but also to serve as a point of reference for subsequent decision making, brainstorming and concept evaluation tasks. The purpose of this study is to measure the effect of empathy priming on divergent brainstorming tasks. The study compares the effects of three empathy tools, personas, empathy maps and user needs statements, on ideation fluency and the originality of ideas during brainstorming tasks. In a three-condition between-subjects experimental design, sixty product design students were randomly assigned to one of three conditions: persona, empathy map and user needs statement. A one-way between-subjects analysis of variance (ANOVA) revealed a statistically significant effect of empathy priming on the fluency and originality of ideas. Participants in the persona group showed higher ideation fluency and generated a greater number of original ideas compared to the other groups, while participants in the user needs statement group generated a greater number of feasible and relevant ideas. The study also aims to understand how the formatting and visualization of empathy tools affect divergent brainstorming tasks; participants were interviewed to understand how the different visualizations of users' needs facilitated idea generation. Implications for design education are discussed.
Keywords: empathy, persona, priming, design research
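The analysis step is a standard one-way between-subjects ANOVA; a sketch with fabricated stand-in scores (not the study's data) follows.

```python
# One-way ANOVA across the three priming conditions (fabricated scores).
from scipy import stats

persona     = [14, 16, 15, 18, 17, 13]   # ideas generated per participant
empathy_map = [11, 12, 10, 13, 12, 11]
needs_stmt  = [12, 11, 13, 12, 10, 12]

f, p = stats.f_oneway(persona, empathy_map, needs_stmt)
print(f"F = {f:.2f}, p = {p:.4f}")       # p < .05 -> group means differ
```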
Procedia PDF Downloads: 87

12071 Liquid-Liquid Plug Flow Characteristics in Microchannel with T-Junction
Authors: Anna Yagodnitsyna, Alexander Kovalev, Artur Bilsky
Abstract:
The efficiency of certain technological processes in two-phase microfluidics, such as emulsion production, nanomaterial synthesis, nitration, and extraction, depends on the two-phase flow regimes in microchannels. For practical applications in chemistry and biochemistry, it is very important to predict the expected flow pattern for a large variety of fluids and channel geometries. In the case of immiscible liquids, plug flow is a typical and optimal regime for chemical reactions and needs to be predicted from empirical data or correlations. In this work, flow patterns of immiscible liquid-liquid flow in a rectangular microchannel with a T-junction are investigated. Three liquid-liquid flow systems are considered, viz. kerosene–water, paraffin oil–water and castor oil–paraffin oil. Different flow patterns, such as parallel flow, slug flow, plug flow, dispersed (droplet) flow, and rivulet flow, are observed at different velocity ratios. A new flow pattern, parallel flow with a steady wavy interface (serpentine flow), has been found. It is shown that flow pattern maps based on Weber numbers for different liquid-liquid systems do not match well. The Weber number multiplied by the Ohnesorge number is proposed as a parameter to generalize the flow maps. Flow maps based on this parameter superpose well for all liquid-liquid systems of this work and of other experiments. Plug length and velocity are measured for the plug flow regime. When the dispersed liquid wets the channel walls, the plug length cannot be predicted by known empirical correlations. By means of the particle tracking velocimetry technique, instantaneous velocity fields in the plug flow regime were measured. Flow circulation inside the plugs was calculated using the velocity data, which can be useful for predicting mass flux in chemical reactions.
Keywords: flow patterns, hydrodynamics, liquid-liquid flow, microchannel
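The proposed generalizing parameter is easy to compute from the standard definitions We = ρU²d/σ and Oh = μ/√(ρσd); the property values below are rough assumed figures for illustration, not the paper's measurements.

```python
# Compute We, Oh, and the proposed We*Oh parameter for a few velocities.
import math

def weber(rho, u, d, sigma):
    return rho * u ** 2 * d / sigma

def ohnesorge(mu, rho, sigma, d):
    return mu / math.sqrt(rho * sigma * d)

rho, mu, sigma, d = 998.0, 1.0e-3, 0.025, 200e-6   # water phase; interfacial tension (N/m); channel scale (m)
for u in (0.01, 0.1, 1.0):                         # superficial velocities, m/s
    we, oh = weber(rho, u, d, sigma), ohnesorge(mu, rho, sigma, d)
    print(f"U = {u:5.2f} m/s  We = {we:8.4f}  We*Oh = {we * oh:.6f}")
```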
Procedia PDF Downloads: 395

12070 Exploring Teachers' Beliefs about Diagnostic Language Assessment Practices in a Large-Scale Assessment Program
Authors: Oluwaseun Ijiwade, Chris Davison, Kelvin Gregory
Abstract:
In Australia, as in other parts of the world, debate on how to enhance teachers' use of assessment data to inform the teaching and learning of English as an Additional Language (EAL, Australia) or English as a Foreign Language (EFL, United States) has occupied the centre of academic scholarship. Traditionally, this approach was conceptualised as 'formative assessment' and, in recent times, 'Assessment for Learning (AfL)'. The central problem is that teacher-made tests are limited in providing data that can inform teaching and learning, due to the variability of classroom assessments, which is driven by teachers' characteristics and assessment literacy. To address this concern, scholars in language education and testing have proposed uniform large-scale computer-based assessment programs to meet the needs of teachers and promote AfL in language education. In Australia, for instance, the Victorian state government commissioned a large-scale project called 'Tools to Enhance Assessment Literacy (TEAL) for Teachers of English as an additional language'. As part of the TEAL project, a tool called 'Reading and Vocabulary assessment for English as an Additional Language (RVEAL)' was developed as a diagnostic language assessment (DLA) by language experts at the University of New South Wales to guide EAL pedagogy in Victorian classrooms. This study therefore aims to provide qualitative evidence for understanding beliefs about DLA among EAL teachers in primary and secondary schools in Victoria, Australia. To realize this goal, the study raises the following questions: (a) How do teachers use large-scale assessment data for diagnostic purposes? (b) What skills do language teachers think are necessary for using assessment data for instruction in the classroom? and (c) What factors, if any, contribute to teachers' beliefs about diagnostic assessment in a large-scale assessment? A semi-structured interview method was used to collect data from at least 15 professional teachers selected through purposeful sampling. The findings from the thematic analysis of the resulting data provide an understanding of teachers' beliefs about DLA in a classroom context and identify how these beliefs are crystallised in language teachers. The discussion shows how the findings can be used to inform professional development processes for language teachers, as well as the important factor of teacher cognition in the pedagogic processes of language assessment. This will hopefully help test developers and testing organisations align the outcomes of this study with their test development processes, to design assessments that can enhance AfL in language education.
Keywords: beliefs, diagnostic language assessment, English as an additional language, teacher cognition
Procedia PDF Downloads: 199

12069 A Model Suggestion on Competitiveness and Sustainability of SMEs in Developing Countries
Authors: Ahmet Diken, Tahsin Karabulut
Abstract:
The factor that developing countries most need is capital. Such countries strive to increase their income in order to meet their expenses for employment, infrastructure and superstructure investments, education, health and defense. The main income of these countries is the taxes collected from businesses, and businesses must generate profit and returns in order to be able to pay taxes. In a world of competition, businesses in developing countries may follow different strategies, and they must specify their target markets. In order to minimize cost and maximize profit, SMEs have to concentrate on target markets and select a cost-oriented strategy. In this study, a theoretical model is suggested in which SME firms act as clusters among themselves and also serve as optimal suppliers for large-scale firms, with SME policy supported by the public sector. Such a relationship can help large-scale firms establish worldwide brands, and this organization increases the value added for developing countries.
Keywords: competitiveness, developing countries, SMEs, sustainability
Procedia PDF Downloads: 314

12068 Large-Area Film Fabrication for Perovskite Solar Cell via Scalable Thermal-Assisted and Meniscus-Guided Bar Coating
Authors: Gizachew Belay Adugna
Abstract:
Scalable and cost-effective device fabrication techniques are urgently needed to commercialize perovskite solar cells (PSCs) as the next photovoltaic (PV) technology. Herein, large-area films of perovskite and hole-transporting materials (HTMs) were developed via a rapid and scalable thermal-assisted bar-coating process in open air. High-quality, large-grained MAPbI₃ films with homogeneous morphology and thickness were obtained on a large-area (10 cm × 10 cm) solution-sheared mp-TiO₂/c-TiO₂/FTO substrate. An encouraging photovoltaic performance of 19.02% was achieved for devices fabricated from the bar-coated perovskite film, compared to that from the small-scale spin-coated film (17.27%), with 2,2′,7,7′-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) as the HTM, whereas a higher power conversion efficiency of 19.89% with improved device stability was achieved by capping with a fluorinated HTM (HYC-2) as an alternative to the traditional spiro-OMeTAD. The fluorinated HTM exhibited better molecular packing in the film and a deeper HOMO level than its nonfluorinated counterpart; thus, improved hole mobility and overall charge extraction in the device were demonstrated. Furthermore, excellent film processability and an impressive PCE of 18.52% were achieved with large-area bar-coated HYC-2 prepared sequentially on the perovskite underlayer in the open atmosphere, compared to bar-coated spiro-OMeTAD/perovskite (17.51%). This all-solution approach demonstrates the feasibility of producing high-quality films on large-area substrates for PSCs, a vital step toward industrial-scale PV production.
Keywords: perovskite solar cells, hole transporting materials, up-scaling process, power conversion efficiency
Procedia PDF Downloads: 71

12067 Large Scale Production of Polyhydroxyalkanoates (PHAs) from Waste Water: A Study of Techno-Economics, Energy Use, and Greenhouse Gas Emissions
Authors: Cora Fernandez Dacosta, John A. Posada, Andrea Ramirez
Abstract:
The biodegradable polyhydroxyalkanoate family of polymers is an interesting substitute for conventional fossil-based plastics. However, the manufacturing and environmental impacts associated with their production via intracellular bacterial fermentation depend strongly on the raw material used and on the energy consumed during the extraction process, limiting their potential for commercialization. Industrial wastewater is studied in this paper as a promising alternative feedstock for waste valorization. Based on results from laboratory and pilot-scale experiments, a conceptual process design, techno-economic analysis and life cycle assessment are developed for the large-scale production of the most common type of polyhydroxyalkanoate, polyhydroxybutyrate (PHB). Intracellular PHB is obtained via fermentation by the microbial community present in industrial wastewater, and the downstream processing is based on chemical digestion with surfactant and hypochlorite. The economic potential and environmental performance results help identify bottlenecks and the best opportunities to scale up the process prior to industrial implementation. The outcome of this research indicates that the fermentation of wastewater to PHB presents advantages over traditional PHA production from sugars, owing to the negligible environmental burdens and financial costs of the raw material. Nevertheless, process optimization is still required to compete with the petrochemical counterparts.
Keywords: circular economy, life cycle assessment, polyhydroxyalkanoates, waste valorization
Procedia PDF Downloads: 457

12066 A Case Study of Low Head Hydropower Opportunities at Existing Infrastructure in South Africa
Authors: Ione Loots, Marco van Dijk, Jay Bhagwan
Abstract:
Historically, South Africa had various small-scale hydropower installations in remote areas that were not incorporated into the national electricity grid. Unfortunately, in the 1960s most of these plants were decommissioned when Eskom, the national power utility, rapidly expanded its grid and its capacity to produce cheap, reliable, coal-fired electricity. This situation persisted until 2008, when rolling power cuts started to affect all citizens. This, together with the rising monetary and environmental cost of coal-based power generation, has sparked new interest in small-scale hydropower development, especially in remote areas or at locations (like wastewater treatment works) that cannot afford to be without electricity for long periods at a time. Even though South Africa does not have the same large-scale hydropower potential as some other African countries, significant potential for micro- and small-scale hydropower is hidden in various places. For example, large quantities of raw and potable water are conveyed daily, under either pressurized or gravity conditions, over large distances and elevations. Due to the country's relative water scarcity, South Africa also has more than 4900 registered dams of varying capacities. However, institutional capacity and skills have not been maintained in recent years, and therefore the identification of hydropower potential, as well as the development of micro- and small-scale hydropower plants, has not gained significant momentum. An assessment model and decision support system for low-head hydropower development has been developed to assist designers and decision makers with first-order potential analysis. As a result, various potential sites were identified, many of them at existing infrastructure such as weirs, barrages or pipelines. One reason for the specific interest in existing infrastructure is that capital expenditure can be minimized; another is the reduced negative environmental impact compared to greenfield sites. This paper explores the case study of retrofitting an unconventional and innovative hydropower plant to the outlet of a wastewater treatment works in South Africa.
Keywords: low head hydropower, retrofitting, small-scale hydropower, wastewater treatment works
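For first-order screening of such sites, the standard relation P = ρgQHη applies; the flow, head, and efficiency below are assumed example values, not data from the South African case study.

```python
# First-order hydropower screening estimate: P = rho * g * Q * H * eta.
RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def hydro_power_kw(q_m3s, head_m, efficiency=0.75):
    return RHO * G * q_m3s * head_m * efficiency / 1000.0

# e.g. a wastewater treatment works outlet: 0.8 m^3/s over 4 m of head
print(f"{hydro_power_kw(0.8, 4.0):.1f} kW")   # ~23.5 kW
```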
Procedia PDF Downloads: 252

12065 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico
Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada
Abstract:
In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are continuously at risk from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments; as a result, no comparison of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps. The maps are created 1) with the Stability Index MAPping (SINMAP) model using default geotechnical parameters, and 2) using geotechnical properties of volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite slope stability model with the theory of a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The resulting susceptibility maps are validated by comparing them with the inventory map under the LOGISNET system, which provides tools for comparison using a histogram and a contingency table. The results of the experiment establish how the individual models predict landslide locations, along with their advantages and limitations. The results also show that although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
Keywords: GIS, landslide, modeling, LOGISNET, SINMAP
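At SINMAP's core is the infinite-slope factor of safety; a sketch using its common dimensionless form follows, with purely illustrative parameter values rather than the study's field calibration.

```python
# Infinite-slope factor of safety, FS = [C + cos(theta)*(1 - w*r)*tan(phi)] / sin(theta),
# where C is dimensionless cohesion, w relative wetness, r the water/soil density ratio.
import math

def factor_of_safety(theta_deg, C, phi_deg, w, r=0.5):
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    return (C + math.cos(theta) * (1.0 - w * r) * math.tan(phi)) / math.sin(theta)

for w in (0.0, 0.5, 1.0):        # dry -> saturated
    fs = factor_of_safety(theta_deg=35.0, C=0.25, phi_deg=30.0, w=w)
    print(f"wetness {w:.1f}: FS = {fs:.2f}  ({'stable' if fs > 1 else 'unstable'})")
```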
Procedia PDF Downloads: 313

12064 Development of the Attitude towards Using Complementary Treatments Scale in Turkey
Authors: Ayşegül Bilge, Merve Uğuryol, Şeyda Dülgerler, Mustafa Yıldız
Abstract:
The purpose of this research is to establish the reliability and validity of the Attitude towards Using Complementary Treatments Scale, which was developed by the researchers. This is a methodological study planned to determine the validity and reliability of the scale. The scale contains 23 items covering the complementary and modern therapies individuals apply when they have health problems, evaluated in a 4-point Likert-type format. A high score obtained from the scale indicates a positive attitude towards complementary therapies. In the course of the validity assessment, expert opinion was obtained, and the content validity of the scale was examined using Kendall's coefficient of concordance (Wa = 0.200, p = 0.460). In the course of the reliability assessment, the item-total score correlations of the 23 items were examined, and items below the 0.20 correlation limit were removed, leaving 13 items in the scale. In the internal consistency analyses, Cronbach's alpha was found to be 0.79. As a result of the validity analyses, the content and language validity of the Attitude towards Using Complementary Treatments Scale were found to be at the expected level, and the reliability analyses showed it to be a highly reliable scale. In conclusion, the Attitude towards Using Complementary Treatments Scale is a valid and reliable scale.
Keywords: alternative health care, complementary treatment, instrument development, nursing practice
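The internal-consistency statistic reported is Cronbach's alpha; a sketch of the computation on a fabricated response matrix (not the study's data) follows, using alpha = k/(k-1) * (1 - sum of item variances / variance of total score).

```python
# Cronbach's alpha on a fabricated 5-respondent x 4-item matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)      # respondents x items
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

scores = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 3], [1, 2, 1, 2], [3, 3, 4, 4]]
print(round(cronbach_alpha(scores), 3))
```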
Procedia PDF Downloads: 400

12063 Challenge of Baseline Hydrology Estimation at Large-Scale Watersheds
Authors: Can Liu, Graham Markowitz, John Balay, Ben Pratt
Abstract:
Baseline, or natural, hydrology is commonly employed for hydrologic modeling and for quantifying hydrologic alteration due to manmade activities. It can inform planning and policy efforts by state and federal water resource agencies seeking to restore natural streamflow regimes. A common challenge faced by hydrologists is how to replicate unaltered streamflow conditions, particularly in large watershed settings prone to development and regulation. Three different methods were employed to estimate baseline streamflow conditions for six major subbasins of the Susquehanna River Basin: 1) incorporating consumptive water use and reservoir operations back into regulated gaged records; 2) using a map correlation method and flow duration (exceedance probability) regression equations; and 3) extending pre-regulation streamflow records based on the relationship between concurrent streamflows at unregulated and regulated gage locations. Parallel analyses were performed for the three methods, and the limitations associated with each are presented. Results from these analyses indicate that generating baseline streamflow records for large-scale watersheds remains challenging, even with long-term continuous stream gage records available.
Keywords: baseline hydrology, streamflow gage, subbasin, regression
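The first method can be sketched as a simple per-period mass balance (variable names and numbers assumed): add consumptive use and reservoir storage change back into the regulated record.

```python
# Naturalize a regulated gage record by adding back upstream alterations.
# Units: flows in cfs; storage change already converted to an equivalent flow.
import numpy as np

gaged          = np.array([950.0, 900.0, 870.0, 1020.0])  # regulated record
consumptive    = np.array([ 60.0,  65.0,  62.0,   58.0])  # net withdrawals upstream
storage_change = np.array([ 40.0, -25.0,  10.0,  -30.0])  # + means reservoir filling

naturalized = gaged + consumptive + storage_change
print(naturalized)   # [1050.  940.  942. 1048.]
```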
Procedia PDF Downloads: 324

12062 An Object-Based Image Resizing Approach
Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai
Abstract:
Common methods for resizing images include scaling and cropping; however, both approaches have quality problems for reduced images. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. After that, we integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
Keywords: energy map, visual saliency, gradient map, seam carving
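The fusion step can be sketched as a weighted blend of the two normalized feature maps; the weighting below is an assumed choice, not the paper's exact integration rule.

```python
# Blend normalized saliency and gradient maps into one importance map.
import numpy as np

def normalize(m):
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def importance_map(saliency, gradient, alpha=0.6):
    return alpha * normalize(saliency) + (1.0 - alpha) * normalize(gradient)

rng = np.random.default_rng(2)
sal, grad = rng.random((8, 8)), rng.random((8, 8))   # stand-in feature maps
imp = importance_map(sal, grad)
print(imp.min() >= 0.0, imp.max() <= 1.0)            # True True
```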
Procedia PDF Downloads: 476

12061 Map Matching Performance under Various Similarity Metrics for Heterogeneous Robot Teams
Authors: M. C. Akay, A. Aybakan, H. Temeltas
Abstract:
Aerial and ground robots offer different advantages in different missions. Aerial robots can move quickly and obtain a wider view of the area, but they cannot carry heavy payloads. Unmanned ground vehicles (UGVs), on the other hand, are slow-moving vehicles that can carry heavier payloads than unmanned aerial vehicles (UAVs). In this context, we investigate the performance of various similarity metrics in providing a common map for a Heterogeneous Robot Team (HRT) in complex environments. Using Lidar odometry and octree mapping techniques, local 3D maps of the environment are gathered. To obtain a common map for the HRT, information-theoretic similarity metrics are exploited. All of these similarity metrics gave accurate results within allowable simulation times and can be used in different types of applications. For heterogeneous multi-robot teams, these methods can be used to match different types of maps.
Keywords: common maps, heterogeneous robot team, map matching, information-theoretic similarity metrics
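One information-theoretic similarity metric that could be applied to overlapping local maps is the Kullback-Leibler divergence between their occupancy histograms; the binning and the KL choice are assumptions here, as the paper evaluates several such metrics.

```python
# KL divergence between occupancy histograms of two (flattened) local maps.
import numpy as np

def kl_divergence(map_a, map_b, bins=16):
    p, edges = np.histogram(map_a, bins=bins, range=(0.0, 1.0))
    q, _     = np.histogram(map_b, bins=edges)
    p = (p + 1e-9) / (p.sum() + bins * 1e-9)   # smooth and normalize
    q = (q + 1e-9) / (q.sum() + bins * 1e-9)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(3)
ugv_map = rng.random(1000)                 # occupancy probabilities of shared region
uav_map = np.clip(ugv_map + rng.normal(scale=0.05, size=1000), 0.0, 1.0)
print(f"KL = {kl_divergence(ugv_map, uav_map):.4f}")   # small -> maps agree
```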
Procedia PDF Downloads: 168

12060 Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling
Authors: Xiaoming Jiang, Jinqiao Shi, Qingfeng Tan, Wentao Zhang, Xuebin Wang, Muqian Chen
Abstract:
Nowadays, big companies such as Google and Microsoft, which have adequate proxy servers, have implemented parallel web crawlers for given websites with ease. But for researchers who lack such expensive proxy servers, crawling large amounts of information from a single website in parallel remains a puzzle. In this case, a good choice is to use free public proxy servers crawled from the Internet. To improve the efficiency of a web crawler, two issues should be considered first: (1) tasks may fail owing to the instability of free proxy servers; and (2) a proxy server will be blocked if it visits a single website too frequently. In this paper, we propose Proxisch, an optimization approach for scheduling large numbers of unstable proxy servers, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch works efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism which keeps the visiting frequency of any chosen proxy server below the website's limit. The results show that our approach performs better than other scheduling algorithms.
Keywords: proxy server, priority queue, optimization algorithm, distributed web crawling
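The two mechanisms combine naturally in a priority queue keyed on reliability, with a per-proxy cooldown for frequency control; all numbers and the reliability update rule below are assumptions, not Proxisch's actual policy.

```python
# Sketch: reliability-ordered priority queue with per-proxy frequency control.
import heapq, time

class ProxyScheduler:
    def __init__(self, proxies, min_interval=5.0):
        # heap entries: (-reliability, next_allowed_time, address)
        self.heap = [(-1.0, 0.0, p) for p in proxies]
        heapq.heapify(self.heap)
        self.min_interval = min_interval

    def acquire(self):
        rel, next_ok, proxy = heapq.heappop(self.heap)
        wait = max(0.0, next_ok - time.monotonic())
        if wait:                       # frequency control: respect the cooldown
            time.sleep(wait)
        return proxy, -rel

    def release(self, proxy, reliability, success):
        rel = min(1.0, reliability + 0.1) if success else reliability * 0.5
        heapq.heappush(self.heap, (-rel, time.monotonic() + self.min_interval, proxy))

sched = ProxyScheduler(["10.0.0.1:8080", "10.0.0.2:8080"])
proxy, rel = sched.acquire()
sched.release(proxy, rel, success=True)   # reward the proxy, schedule cooldown
```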
Procedia PDF Downloads: 211

12059 Development of a Method to Prepare In-School Tactile Guide Maps for Visually Impaired School Children
Authors: K. Doi, T. Nishimura, M. Kawano, H. Fujimoto, Y. Tanaka, M. Sawada, S. Oouchi, T. Kaneko, K. Kanamori
Abstract:
As part of the reasonable accommodation for people with disabilities in Japan, which has ratified the Convention on the Rights of Persons with Disabilities, tactile guide maps are necessary. Such maps can enable visually impaired children attending schools for special needs education (visual impairments) to grasp the arrangement of classrooms on their school campuses. However, it takes many years to learn to use a tactile guide map without difficulty; thus, information support in which audio information is added to the tactile information is required. In the present research, a method was developed to prepare in-school tactile guide maps with an additional audio reading function, enabling visually impaired school children to grasp the arrangement of classrooms on their campuses.
Keywords: accessible design, visually impaired, braille, tactile map, in-school tactile guide map
Procedia PDF Downloads: 362

12058 Hyperspectral Image Classification Using Tree Search Algorithm
Authors: Shreya Pare, Parvin Akhter
Abstract:
Remote sensing image classification is a very challenging task owing to the high dimensionality of hyperspectral images. Pixel-wise classification methods fail to exploit the spatial structure information of an image; therefore, to improve classification performance, spatial information can be integrated into the classification process. In this paper, a multilevel thresholding algorithm based on a modified fuzzy entropy (MFE) function is used to segment hyperspectral images. The fuzzy parameters of the MFE function are optimized using a new meta-heuristic based on the tree search algorithm. The segmented image is then classified by a large margin distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset and indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images than state-of-the-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, making it suitable for classifying images with large spatial structures.
Keywords: classification, hyperspectral images, large margin distribution machine, modified fuzzy entropy function, multilevel thresholding, tree search algorithm
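As a simplified stand-in for the segmentation step, the sketch below performs exhaustive two-threshold maximum-entropy (Kapur-style) thresholding of a histogram; the paper instead optimizes a modified fuzzy entropy with a tree-search metaheuristic, which is not reproduced here.

```python
# Two-threshold maximum-entropy segmentation of a 1-band histogram.
import numpy as np

def region_entropy(p):
    p = p[p > 0]
    if p.size == 0:
        return 0.0
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def best_two_thresholds(hist):
    p = hist / hist.sum()
    best = (None, -np.inf)
    for t1 in range(1, len(p) - 1):
        for t2 in range(t1 + 1, len(p)):
            h = region_entropy(p[:t1]) + region_entropy(p[t1:t2]) + region_entropy(p[t2:])
            if h > best[1]:
                best = ((t1, t2), h)
    return best

rng = np.random.default_rng(4)
pixels = np.concatenate([rng.normal(40, 5, 500), rng.normal(120, 8, 500),
                         rng.normal(200, 6, 500)]).clip(0, 255)
hist, _ = np.histogram(pixels, bins=64, range=(0, 256))
print(best_two_thresholds(hist))   # bin thresholds separating the three modes
```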
Procedia PDF Downloads: 177

12057 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world's population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate about 10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride cause dental fluorosis and crippling skeletal fluorosis. In poor urban and rural settings, providing drinking water free of geogenic contamination can be a major challenge. In order to apply limited resources efficiently when testing wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements on predictor variables comprising various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization's (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of the predictor variables is generally 60 times greater. These predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires users to upload their own concentration measurements and select the predictor variables they wish to incorporate in their models; users can also upload additional predictor variable datasets as features or coverages. Such models can improve on the supplied global models, since (a) users may be able to use their own, more detailed datasets of measured concentrations, and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data and collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to produce reliable and efficient models specific to their region of interest by making available the latest predictor variable datasets along with the necessary modeling infrastructure.
Keywords: arsenic, fluoride, groundwater contamination, logistic regression
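The hazard-modeling step is a logistic regression of exceedance (e.g., arsenic above 10 µg/L, coded 0/1) on environmental predictors; the sketch below uses synthetic data in place of the platform's soil, geology, and climate rasters.

```python
# Logistic regression of exceedance on synthetic environmental predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([rng.normal(size=n),      # e.g. soil pH (standardized)
                     rng.normal(size=n),      # e.g. aridity index
                     rng.normal(size=n)])     # e.g. sediment-age indicator
logit = 0.8 * X[:, 0] + 1.2 * X[:, 1] - 0.5 * X[:, 2] - 1.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 1 = well exceeds 10 ug/L

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X[:5])[:, 1]            # mapped as hazard probability
print(np.round(prob, 3))
```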
Procedia PDF Downloads: 348

12056 Pharmaceutical Scale-Up for Solid Dosage Forms
Authors: A. Shashank Tiwari, S. P. Mahapatra
Abstract:
Scale-up is defined as the process of increasing batch size. Scale-up of a process can also be viewed as a procedure for applying the same process to different output volumes. There is a subtle difference between these two definitions: batch size enlargement does not always translate into a size increase of the processing volume. In mixing applications, scale-up is indeed concerned with increasing the linear dimensions from laboratory to plant size. On the other hand, processes exist (e.g., tableting) where 'scale-up' simply means enlarging the output by increasing the speed. To complete the picture, one should point out special procedures where an increase of scale is counterproductive and 'scale-down' is required to improve the quality of the product. In moving from Research and Development (R&D) to production scale, it is sometimes essential to have an intermediate batch scale. This is achieved at the so-called pilot scale, defined as the manufacturing of drug product by a procedure fully representative of and simulating that used at full manufacturing scale. This scale also makes it possible to produce enough product for clinical testing and to manufacture samples for marketing. However, inserting an intermediate step between R&D and production scales does not, in itself, guarantee a smooth transition. A well-defined process may generate a perfect product both in the laboratory and in the pilot plant and then fail quality assurance tests in production.
Keywords: scale up, research, size, batch
Procedia PDF Downloads: 413

12055 Micro Grids: A Solution to Power Off-Grid Areas in Pakistan
Authors: M. Naveed Iqbal, Sheza Fatima, Noman Shabbir
Abstract:
Amid the energy crisis in Pakistan, off-grid remote areas are not on the priority list, and the use of new large-scale coal-fired power plants will make this situation worse. The greatest challenge for our society is therefore to explore new ways to power off-grid remote areas with renewable energy sources; it is time for a sustainable energy policy which puts consumers, the environment, human health, and peace first. Renewable energy is one of the fastest-growing sectors of the energy industry, and the large-scale use of micro grids is thus described here, covering the modeling, simulation, planning and operation of a micro grid. The goal of this research paper is to present in detail a library of the major components of a micro grid. The introduction gives a detailed view of the definition of a micro grid. Then, the simulation of a micro grid in MATLAB/Simulink, including the photovoltaic cell, is described with detailed modeling, along with its design.
Keywords: micro grids, distribution generation, PV, off-grid operations
Procedia PDF Downloads: 312

12054 Communication of Sensors in Clustering for Wireless Sensor Networks
Authors: Kashish Sareen, Jatinder Singh Bal
Abstract:
The use of wireless sensor networks (WSNs) has grown vastly in recent years, pointing to the crucial need for scalable and energy-efficient routing, data gathering, and aggregation protocols in large-scale environments. WSNs have emerged as an important computing platform and continue to grow in diverse areas, providing new opportunities for networking and services. However, the energy-constrained and limited computing resources of sensor nodes present major challenges in gathering data. The sensors collect data about their surroundings and forward it to a command centre through a base station. The past few years have witnessed increased interest in the potential use of WSNs, as they are very useful in target detection and other applications. Hierarchical clustering protocols have been the most widely used approach for improving overall system lifetime, scalability and energy efficiency. In this paper, the state of the art in hierarchical clustering approaches for large-scale WSN environments is presented.
Keywords: clustering, DLCC, MLCC, wireless sensor networks
Procedia PDF Downloads: 482

12053 Feasibility Study on Hybrid Multi-Stage Direct-Drive Generator for Large-Scale Wind Turbine
Authors: Jin Uk Han, Hye Won Han, Hyo Lim Kang, Tae An Kim, Seung Ho Han
Abstract:
Direct-drive generators for large-scale wind turbines, which are divided into AFPM (axial flux permanent magnet) and RFPM (radial flux permanent magnet) type machines, have attracted interest because of their higher energy density in comparison with gear-train-type generators. Each type of machine has distinctive geometrical features: narrow width with a large diameter for the AFPM-type machine, and wide width with a certain diameter for the RFPM-type machine. With the AFPM-type machine, an increase in electric power production is easily achieved through a multi-stage arrangement in the axial direction. The RFPM-type machine, on the other hand, can exploit its geometric feature of wide width. In this study, a hybrid two-stage direct-drive generator for a 6.2 MW class wind turbine is proposed, in which a two-stage 5 MW AFPM-type machine is composed of two modules arranged in the axial direction, with a hollow-shaped rotor topology with an annular disc, and the stator and main shaft mounted on coupled slew bearings. In addition, a 1.2 MW RFPM-type machine is installed in the empty space of the rotor. Analytic results obtained from an electromagnetic and structural interaction analysis showed that the structural weight of the proposed hybrid two-stage direct-drive generator can be kept to 155 tonf while satisfying the requirements on structural behavior, such as allowable air-gap clearance and strength. Therefore, the 6.2 MW hybrid two-stage direct-drive generator is competitive with conventional generators. (NRF grant funded by the Korea government MEST, No. 2017R1A2B4005405.)
Keywords: AFPM-type machine, direct-drive generator, electro-magnetic analysis, large-scale wind turbine, RFPM-type machine
Procedia PDF Downloads: 167