Search results for: large scale mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12156

12066 Design, Construction, Validation and Use of a Novel Portable Fire Effluent Sampling Analyser

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity isn't considered despite it being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. These interfere with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to be easily portable and able to run on battery or mains electricity; be able to be calibrated at the test site; be capable of quantifying CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; be capable of independent data logging; be capable of automated switchover of 7 bubblers; be able to withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; and be capable of being controlled remotely. To test the analyser's functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests were conducted to assess the validity of the box analyser measurements and the data-logging abilities of the apparatus. PMMA and PA 6.6 were used to assess the validity of the box analyser measurements. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. The analyser operated without manual interference and successfully recorded data for all 12 tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. The analyser is a cheaper, more accessible option to assess smoke toxicity, mitigating the need for expensive equipment and specialist operators.

Keywords: smoke toxicity, large-scale tests, iso 9705, analyser, novel equipment

Procedia PDF Downloads 55
12065 Data Access, AI Intensity, and Scale Advantages

Authors: Chuping Lo

Abstract:

This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Therefore, large countries, which inherently have greater data resources, tend to have higher incomes than smaller countries, such that the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade, as they are better able to utilize global data.

Keywords: digital intensity, digital divide, international trade, economies of scale

Procedia PDF Downloads 43
12064 Modular Data and Calculation Framework for a Technology-based Mapping of the Manufacturing Process According to the Value Stream Management Approach

Authors: Tim Wollert, Fabian Behrendt

Abstract:

Value Stream Management (VSM) is a widely used methodology in the context of Lean Management for improving end-to-end material and information flows from a supplier to a customer from a company's perspective. Whereas the design principles, e.g. pull, value-adding and customer-orientation, remain valid against the background of an increasingly digitalized and dynamic environment, the methodology itself for mapping a value stream is time- and resource-intensive due to the high degree of manual activity. The digitalization of processes in the context of Industry 4.0 enables new opportunities to reduce these manual efforts and make the VSM approach more agile. The paper at hand aims at providing a modular data and calculation framework that utilizes the available business data provided by information and communication technologies to automate the value stream mapping process, with a focus on the manufacturing process.

Keywords: lean management 4.0, value stream management (VSM) 4.0, dynamic value stream mapping, enterprise resource planning (ERP)

Procedia PDF Downloads 125
12063 Satisfaction Evaluation on the Fundamental Public Services for a Large-Scale Indemnificatory Residential Community: A Case Study of Nanjing

Authors: Dezhi Li, Peng Cui, Bo Zhang, Tengyuan Chang

Abstract:

In order to solve the housing problem of low-income families, the construction of affordable housing is booming in China. However, due to various reasons, the service facilities and systems in indemnificatory residential communities face many problems. This article establishes a satisfaction evaluation system for the fundamental public services of large-scale indemnificatory residential communities based on national standards and local criteria, and develops the corresponding evaluation methods and processes. Finally, in the case of the Huagang project in Nanjing, the satisfaction with basic public services is calculated based on a survey of local residents.

Keywords: indemnificatory residential community, public services, satisfaction evaluation, structural equation modeling

Procedia PDF Downloads 334
12062 Path Integrals and Effective Field Theory of Large Scale Structure

Authors: Revant Nayar

Abstract:

In this work, we recast the equations describing large scale structure, and by extension all nonlinear fluids, in the path integral formalism. We first calculate the well-known two- and three-point functions using the Schwinger-Keldysh formalism, commonly used to perturbatively solve path integrals in non-equilibrium systems. Then we include EFT corrections due to pressure, viscosity, and noise as effects on the time-dependent propagator. We are able to express results for arbitrary two- and three-point correlation functions in LSS in terms of differential operators acting on a triple-K master integral. We also, for the first time, obtain analytical results for more general initial conditions deviating from the usual power law P∝kⁿ by introducing a mass scale in the initial conditions. This robust field-theoretic formalism empowers us with tools from strongly coupled QFT, such as the OPE and holographic duals, to study the strongly non-linear regime of LSS and turbulent fluid dynamics. These could be used to fully capture the strongly non-linear dynamics of fluids and move towards solving the open problem of classical turbulence.
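
For concreteness, the usual scale-free initial condition can be contrasted with one purely illustrative mass-scale deformation; the functional form below is a hypothetical sketch, not the parametrization used in the paper:

```latex
% Scale-free initial power spectrum, and one illustrative (hypothetical)
% deformation by a mass scale M, chosen only so that the power law is
% recovered in the M -> infinity limit:
P_{\text{std}}(k) = A\,k^{n},
\qquad
P_{M}(k) = \frac{A\,k^{n}}{\left(1 + k^{2}/M^{2}\right)^{m}},
\qquad
P_{M}(k) \xrightarrow[\,M \to \infty\,]{} A\,k^{n}
```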

Keywords: quantum field theory, cosmology, effective field theory, renormalisation

Procedia PDF Downloads 111
12061 Mapping the Turbulence Intensity and Excess Energy Available to Small Wind Systems over 4 Major UK Cities

Authors: Francis C. Emejeamara, Alison S. Tomlin, James Gooding

Abstract:

Due to the highly turbulent nature of urban air flows, and by virtue of the fact that turbines are likely to be located within the roughness sublayer of the urban boundary layer, proposed urban wind installations are faced with major challenges compared to rural installations. The challenge of operating within turbulent winds can, however, be counteracted by the development of suitable gust-tracking solutions. In order to assess the cost-effectiveness of such controls, a detailed understanding of the urban wind resource, including its turbulent characteristics, is required. Estimating the ambient turbulence and total kinetic energy available at different control response times is essential in evaluating the potential performance of wind systems within the urban environment should effective control solutions be employed. However, high-resolution wind measurements within the urban roughness sub-layer are uncommon, and detailed CFD modelling approaches are too computationally expensive to apply routinely on a city-wide scale. This paper therefore presents an alternative semi-empirical methodology for estimating the excess energy content (EEC) present in the complex and gusty urban wind. An analytical methodology for predicting the total wind energy available at a potential turbine site is proposed by assessing the relationship between turbulence intensities and EEC for different control response times. The semi-empirical model is then incorporated with an analytical methodology that was initially developed to predict mean wind speeds at various heights within the built environment based on detailed mapping of its aerodynamic characteristics. Based on the current methodology, additional estimates of turbulence intensities and EEC allow a more complete assessment of the available wind resource. The methodology is applied to 4 UK cities, with results showing the potential of mapping turbulence intensities and the total wind energy available at different heights within each city. Considering the effect of ambient turbulence and choice of wind system, the wind resource over neighbourhood regions (of 250 m uniform resolution) and building rooftops within the 4 cities was assessed, with results highlighting the promise of mapping potential turbine sites within each city.
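
As a rough illustration of the quantities involved, the sketch below estimates turbulence intensity and an excess-energy-content ratio from a wind speed series. The EEC definition used here (extra kinetic energy flux available to a gust-tracking turbine at a given control response time, relative to the flux implied by the mean wind, with flux scaling as u³) and the synthetic data are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def turbulence_intensity(u):
    """Turbulence intensity: standard deviation over mean wind speed."""
    return np.std(u) / np.mean(u)

def excess_energy_content(u, dt, response_time):
    """Extra kinetic energy flux a gust-tracking turbine with the given
    control response time could capture, relative to the flux implied by
    the mean wind speed alone (kinetic energy flux scales with u**3)."""
    n = max(1, int(response_time / dt))
    blocks = u[: len(u) // n * n].reshape(-1, n)  # non-overlapping windows
    tracked = np.mean(blocks.mean(axis=1) ** 3)   # flux seen at this response
    baseline = np.mean(u) ** 3                    # flux from the mean wind only
    return (tracked - baseline) / baseline

# Synthetic gusty wind record: 10 minutes sampled at 1 Hz
rng = np.random.default_rng(0)
u = 5.0 + 0.05 * rng.normal(0, 1.2, 600).cumsum() + rng.normal(0, 0.8, 600)
u = np.clip(u, 0.5, None)
print(f"TI = {turbulence_intensity(u):.2f}")
print(f"EEC at 1 s response:  {excess_energy_content(u, 1.0, 1.0):.1%}")
print(f"EEC at 60 s response: {excess_energy_content(u, 1.0, 60.0):.1%}")
```

A faster control response retains more of the gust energy, so the EEC estimate shrinks as the response time grows.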

Keywords: excess energy content, small-scale wind, turbulence intensity, urban wind energy, wind resource assessment

Procedia PDF Downloads 450
12060 Crustal Scale Seismic Surveys in Search for Gawler Craton Iron Oxide Cu-Au (IOCG) under Very Deep Cover

Authors: E. O. Okan, A. Kepic, P. Williams

Abstract:

Iron oxide copper gold (IOCG) deposits constitute important sources of copper and gold in Australia, especially since the discovery of the supergiant Olympic Dam deposit in 1975. They are considered to be metasomatic expressions of large crustal-scale alteration events occasioned by intrusive activity and are in most cases associated with felsic igneous rocks, commonly potassic igneous magmatism, with the deposits ranging from ~2.2-1.5 Ga in age. For the past two decades, geological, geochemical and potential-field methods have been used to identify the structures hosting these deposits, followed up by drilling. Though these methods have largely been successful for shallow targets, their low resolution at greater depths limits them to mapping only very large to gigantic deposits with sufficient contrast. As the search for ore-bodies under regolith cover continues due to the depletion of near-surface deposits, there is a compelling need to develop new exploration technology to explore deep-seated ore-bodies within 1-4 km, the current mining depth range. The seismic reflection method represents this new technology, as it offers a distinct advantage over all other geophysical techniques because of its great depth of penetration and superior spatial resolution maintained with depth. Further, in many different geological scenarios, it offers a greater '3D mapability' of units within the stratigraphic boundary. Despite these superior attributes, crustal-scale seismic surveys have not been pursued because no compelling economic case has been made for such work. For the seismic reflection method to be used at these scales (hundreds to thousands of square kilometres covered), the technical risks or the survey costs have to be reduced. In addition, as most IOCG deposits have large footprints due to their association with intrusions and large fault zones, we hypothesized that these deposits can be found mainly by looking for the seismic signatures of intrusions along prospective structures. In this study, we present two such cases: the Olympic Dam and Vulcan iron-oxide copper-gold (IOCG) deposits, both located in the Gawler Craton, South Australia. Results from our 2D modelling experiments revealed that seismic reflection surveying, using 20 m geophone and 40 m shot spacing, is a feasible exploration tool for locating IOCG deposits even when they are hosted in very complex structures. The migrated sections were not only able to identify and trace various layers and complex structures but also showed reflections around the edges of intrusive packages. The presence of such intrusions was clearly detected over the 100 m to 1000 m depth range without loss of resolution. The modelled seismic images match the available real seismic data and have the hypothesized characteristics; thus, the seismic method seems to be a valid exploration tool for finding IOCG deposits. We therefore propose that 2D seismic surveys are viable for IOCG exploration, as they can detect mineralised intrusive structures along known favourable corridors. This would help reduce the exploration risk associated with locating undiscovered resources, as well as support life-of-mine studies enabling better development decisions from the very beginning.

Keywords: crustal scale, exploration, IOCG deposit, modelling, seismic surveys

Procedia PDF Downloads 308
12059 Assessment of Planet Image for Land Cover Mapping Using Soft and Hard Classifiers

Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi

Abstract:

Planet imagery is a new data source from Planet Labs. This research is concerned with the assessment of Planet imagery for land cover mapping. Two pixel-based classifiers and one subpixel-based classifier were compared. Firstly, rectification of the Planet image was performed. Secondly, minimum distance, maximum likelihood and neural network classifications of the Planet image were compared. Thirdly, the overall classification accuracy and kappa coefficient were calculated. Results indicate that neural network classification performs best, followed by the maximum likelihood classifier and then minimum distance classification, for land cover mapping.
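
The overall accuracy and kappa coefficient reported above are standard confusion-matrix statistics; a minimal sketch, using a hypothetical 3-class matrix rather than the study's results:

```python
import numpy as np

def overall_accuracy(cm):
    """Correctly classified pixels divided by all pixels."""
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    """Cohen's kappa: agreement corrected for chance agreement.
    Rows are reference classes, columns are predicted classes."""
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical confusion matrix for three land cover classes
cm = np.array([[50, 3, 2],
               [4, 45, 6],
               [1, 5, 44]])
print(f"OA = {overall_accuracy(cm):.2%}, kappa = {kappa_coefficient(cm):.3f}")
```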

Keywords: planet image, land cover mapping, rectification, neural network classification, multilayer perceptron, soft classifiers, hard classifiers

Procedia PDF Downloads 163
12057 Block N LVI from the Northern Side of the Parthenon Frieze: A Case Study of Augmented Reality for Museum Application

Authors: Donato Maniello, Alessandra Cirafici, Valeria Amoretti

Abstract:

This paper aims to present a new method based on video mapping techniques (a particular form of augmented reality), which could produce new tools, different from those currently in use, for an interactive museum experience. With the words 'augmented reality', we mean the addition of more information than what the visitor would normally perceive; this information is mediated by the use of a computer and a projector. The proposed application involves the creation of a documentary that depicts and explains the history of the artifact and illustrates its features; this must be projected on the surface of a faithful copy of the frieze (obtained at full scale with a 3D printer). This mode of operation uses different techniques that allow passing from the creation of the model to the creation of contents, through an accurate historical and artistic analysis, and finally to the warping phase, which permits the real and virtual models to be overlapped. The final step, which is still being studied, involves the creation of interactive contents that would be activated by visitors through appropriate motion sensors.

Keywords: augmented reality, multimedia, parthenon frieze, video mapping

Procedia PDF Downloads 360
12057 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformation processes such as elimination, simplification, smoothing, exaggeration, displacement, aggregation and size reduction. As a result of these operations at different levels of data, the geometry of spatial features such as length, sinuosity, orientation, perimeter and area is altered. This is most severe in the preparation of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. When GIS users want to analyze a set of spatial data, they retrieve a data set and perform the analysis without considering very important characteristics such as the scale, the purpose of the map and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes, GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that it is not suitable to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps with different scales (1:10000, 1:50000 and 1:250000), prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Features common to all three maps were selected, and an overlay analysis was performed by repeating the analysis with different combinations of the data. Road, river and land use data sets were used for the study. A simple model, to find the best place for a wildlife park, was used to identify the effects. The results show remarkable effects of the different degrees of generalization: different locations with different geometries were obtained as outputs from the analysis. The study suggests that there should be reasonable methods to overcome this effect. As a solution, it is recommended to bring all the data sets to a common scale before doing the analysis.
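
The geometric effect described above can be made concrete by progressively simplifying a line feature, with larger tolerances standing in for stronger generalization at smaller source-map scales; a minimal sketch using shapely on hypothetical coordinates:

```python
from shapely.geometry import LineString

# A sinuous river centreline (hypothetical coordinates, in metres)
river = LineString([(0, 0), (120, 80), (200, 40), (310, 150),
                    (420, 90), (500, 210), (640, 180), (760, 300)])

# Douglas-Peucker tolerances standing in for increasing generalization,
# loosely analogous to moving from a 1:10000 toward a 1:250000 source map
for tol in [0, 20, 60, 150]:
    g = river.simplify(tol)
    print(f"tolerance {tol:>3} m: length = {g.length:7.1f} m, "
          f"vertices = {len(g.coords)}")
```

Length (and likewise sinuosity, perimeter and area for polygons) shrinks as the tolerance grows, which is exactly why overlay results differ between source scales.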

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 312
12056 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape

Authors: Moschos Vogiatzis, K. Perakis

Abstract:

Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprised of various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We concluded that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.
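
A minimal sketch of the pixel-based Random Forest workflow described above, using scikit-learn; the band values and class labels below are synthetic placeholders, not the study's Landsat 8 or Sentinel-2A data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

# Stand-in for labelled pixels: rows are pixels, columns are spectral bands
rng = np.random.default_rng(42)
X = rng.random((5000, 10))
# 11 synthetic classes (as in the Sentinel-2A case), derived from the bands
y = np.minimum((X[:, :4].sum(axis=1) * 2.75).astype(int), 10)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"OA = {accuracy_score(y_te, pred):.2%}, "
      f"kappa = {cohen_kappa_score(y_te, pred):.3f}")
```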

Keywords: classification, land use/land cover, mapping, random forest

Procedia PDF Downloads 104
12055 Technology of Gyro Orientation Measurement Unit (Gyro OMU) for Underground Utility Mapping Practice

Authors: Mohd Ruzlin Mohd Mokhtar

Abstract:

At present, most operators working on projects for utilities such as power, water, oil, gas, telecommunication and sewerage use technologies such as total stations, the Global Positioning System (GPS), Electromagnetic Locators (EML) and Ground Penetrating Radar (GPR) to perform underground utility mapping. With the increase in popularity of the Horizontal Directional Drilling (HDD) method among local authorities and asset owners, most newly installed underground utilities use the HDD method. The HDD method is seen as simple and creates little disturbance to the public and traffic. Thus, it has become the preferred utility installation method in most areas, especially urban areas. HDD installations are much deeper than existing utilities (some reports put the average HDD depth at 5 meters). However, this affects the accuracy or capability of existing underground utility mapping technologies. In most Malaysian underground soil conditions, those technologies are limited to a maximum depth of 3 meters. Thus, utilities installed deeper than 3 meters cannot be detected by existing detection tools, and the accuracy and reliability of existing underground utility mapping technologies and work procedures are in doubt. A mitigation action plan is therefore required. While installing new utilities using the HDD method, more accurate underground utility mapping can be achieved by using a Gyro OMU compared to existing practice using, e.g., EML and GPR. The Gyro OMU is a method to accurately identify the location of the HDD bore; this mapping can then be used or referred to in order to avoid the cost of breakdowns caused by future HDD works relying on inaccurate underground utility mapping.

Keywords: Gyro Orientation Measurement Unit (Gyro OMU), Horizontal Directional Drilling (HDD), Ground Penetrating Radar (GPR), Electromagnetic Locator (EML)

Procedia PDF Downloads 116
12054 Improved K-Means Clustering Algorithm Using RHadoop with Combiner

Authors: Ji Eun Shin, Dong Hoon Lim

Abstract:

Data clustering is a common technique used in data analysis and has many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm aiming to cluster a set of data points into a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of the map output to decrease the amount of data to be processed by the reducers. The experimental results demonstrate that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also show that our K-means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases.
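
The implementation in the paper is in R on RHadoop; the Python sketch below only illustrates the combiner idea under stated assumptions: each simulated mapper split pre-aggregates per-cluster sums and counts, so the reducer merges a handful of partial sums instead of receiving every point:

```python
import numpy as np

def nearest(point, centroids):
    """Index of the closest centroid to a point."""
    return int(np.argmin(((centroids - point) ** 2).sum(axis=1)))

def map_combine(split, centroids):
    """Mapper plus combiner for one input split: instead of emitting one
    (cluster, point) pair per point, emit one (cluster, (sum, count))."""
    acc = {}
    for p in split:
        k = nearest(p, centroids)
        s, c = acc.get(k, (np.zeros(p.shape), 0))
        acc[k] = (s + p, c + 1)
    return acc

def reduce_step(partials, centroids):
    """Reducer: merge the pre-aggregated partial sums into new centroids."""
    new = centroids.copy()
    for k in range(len(centroids)):
        parts = [a[k] for a in partials if k in a]
        if parts:
            new[k] = sum(s for s, _ in parts) / sum(c for _, c in parts)
    return new

rng = np.random.default_rng(1)
data = rng.normal(0, 1, (10000, 2)) + 4 * rng.integers(0, 3, (10000, 1))
centroids = data[rng.choice(len(data), 3, replace=False)]
splits = np.array_split(data, 8)          # 8 simulated mapper splits
for _ in range(10):                       # one MapReduce job per iteration
    centroids = reduce_step([map_combine(s, centroids) for s in splits],
                            centroids)
print(np.round(centroids, 2))
```

With the combiner, each of the 8 splits sends at most 3 small (sum, count) records to the reducer rather than thousands of points, which is the source of the speed-up the paper reports.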

Keywords: big data, combiner, K-means clustering, RHadoop

Procedia PDF Downloads 408
12053 A Proposal for Systematic Mapping Study of Software Security Testing, Verification and Validation

Authors: Adriano Bessa Albuquerque, Francisco Jose Barreto Nunes

Abstract:

Software vulnerabilities are increasing and not only impact the availability of services and processes, as well as the confidentiality, integrity and privacy of information, but also cause changes that interfere with the development process. Security testing could be a solution to reduce vulnerabilities. However, the variety of test techniques, combined with the lack of real case studies applying tests across the software development life cycle, compromises its effective use. This paper offers an overview of how a Systematic Mapping Study (MS) of security verification, validation and test (VVT) was performed, besides presenting general results of the study.

Keywords: software test, software security verification validation and test, security test institutionalization, systematic mapping study

Procedia PDF Downloads 378
12052 An Approach in Design of Large-Scale Hydrogen Plants

Authors: Hamidreza Sahaleh

Abstract:

Because of the stringent requirements of low-sulfur fuels and heavier crude oil feedstocks, more hydrogen will be consumed in refineries. In particular, if large-scale capacities are the response to an increased hydrogen demand, certain design and engineering experience is required, which will be described in this paper with an example. Selected process design requirements will be listed and described in accordance with the flowsheet. In addition, a selection of innovative design features, such as process condensate reuse and safe reformer start-up requirements, will be highlighted.

Keywords: low sulfur, raw oil, refineries, flowsheet

Procedia PDF Downloads 271
12051 A GIS Based Composite Land Degradation Assessment and Mapping of Tarkwa Mining Area

Authors: Bernard Kumi-Boateng, Kofi Bonsu

Abstract:

The clearing of vegetation in the Tarkwa Mining Area (TMA) for the purposes of mining, lumbering and the development of settlements for the increasing population has caused large-scale denudation of the forest cover and erosion of the topsoil, thereby degrading the agricultural land. It is, therefore, essential to know the current status of land degradation in TMA so as to facilitate land conservation policy-making. The types, extents and degrees of degradation were combined to develop a composite land degradation index to assess the current status of land degradation in TMA using GIS-based techniques. The assessment revealed that the most significant types of degradation in TMA were open-pit and quarry mining; urbanisation and other construction projects; and surface scraping during land clearing. It was found that 21.62% of the total area of TMA (353.07 km²) had a high degradation index rating. It is recommended that decision-makers use this assessment as a reference point for future initiatives taken to develop land conservation policy.
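
A minimal sketch of how a composite index of this kind might be assembled from per-cell degradation grids; the grids, severity scores and weights below are hypothetical placeholders, not the study's values:

```python
import numpy as np

# Per-cell severity grids (0 = none, 1-4 = increasing severity) for three
# degradation types; in a GIS these would come from classified rasters
mining   = np.array([[0, 3, 4], [1, 0, 2], [0, 0, 3]])
urban    = np.array([[2, 2, 0], [3, 1, 0], [1, 0, 0]])
scraping = np.array([[1, 0, 1], [0, 2, 1], [2, 1, 0]])
weights = {"mining": 0.5, "urban": 0.3, "scraping": 0.2}  # assumed weights

composite = (weights["mining"] * mining
             + weights["urban"] * urban
             + weights["scraping"] * scraping)
composite = composite / composite.max()   # normalize to a 0-1 index

high = composite >= 0.66                  # assumed 'high degradation' cut-off
print(np.round(composite, 2))
print(f"High-degradation share: {high.mean():.1%} of cells")
```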

Keywords: degradation, GIS, land, mining

Procedia PDF Downloads 329
12050 Auto Calibration and Optimization of Large-Scale Water Resources Systems

Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari

Abstract:

Water resource systems modelling has constantly been a challenge throughout history. As methodological innovation evolves alongside computer science, researchers are likely to confront more complex and larger water resources systems due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme has been applied to Gilan's large-scale water resource model using mathematical programming. The calibration is developed in order to estimate unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with model results. The calibration results are reasonable, presenting a rational insight into the system. Subsequently, the optimized parameters were used in a basin-scale linear optimization model with the ability to evaluate the system's performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage in the reduced-inflow scenario.

Keywords: auto-calibration, Gilan, large-scale water resources, simulation

Procedia PDF Downloads 313
12049 Mapping New Technologies for Sustainability along the Fashion Supply Chain

Authors: Hilde Heim

Abstract:

The textile industry is known for its swift adoption of innovations in fashion technology (Fash-Tech). The industry is also known for its harmful effects on the environment. Opportunely, Fash-Tech is expected to facilitate the turn towards more sustainable practice. However, although several technologies have the potential to advance sustainable practice, many industry players, whether large or small, are confused and misinformed about Fash-Tech adoption, application, and impact. Through a visual poster presentation, this project aims to map global fashion innovations along the supply chain from fibre production to waste management, thus providing a clearer picture of numbers, scale, and adoption. While the project aims to identify the effectiveness of Fash-Tech in reaching sustainability goals, it also identifies areas of congestion as well as insufficient accessibility of Fash-Tech. This project intends to help inform future decisions in business, investment, and policy for the advancement of sustainable practice.

Keywords: fashion technology, sustainability, supply chain, enterprise management

Procedia PDF Downloads 216
12048 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore's law. Designers encounter numerous report files during design iterations using timing and noise analysis tools. This paper presents our work using data mining techniques combined with HTML tables to extract and represent critical timing/noise data. When applying this data-mining tool in real applications, running speed is important. The software employs table look-up techniques to achieve reasonable running speed, based on performance testing results. We added several advanced features for the application in an industrial chip design.

Keywords: VLSI design, data mining, big data, HTML forms, web, EDA, timing, noise

Procedia PDF Downloads 234
12047 The Role of Architectural Firms in Enhancing Building Energy Efficiency in Emerging Countries: Processes and Tools Evaluation of Architectural Firms in Egypt

Authors: Mahmoud F. Mohamadin, Ahmed Abdel Malek, Wessam Said

Abstract:

Achieving energy-efficient architecture in general, and in emerging countries in particular, is a challenging process that requires the contribution of various governmental, institutional, and individual entities. The role of architectural design is essential in this process, as it is one of the earliest steps on the road to sustainability. Architectural firms have a moral and professional responsibility to respond to these challenges and deliver buildings that consume less energy. This study aims to evaluate the design processes and tools in practice at Egyptian architectural firms, based on a limited survey, to investigate whether their processes and methods can lead to projects that meet the Egyptian Code of Energy Efficiency Improvement. A case study of twenty architectural firms in Cairo was selected and categorized according to scale: large, medium, and small. A questionnaire was designed and distributed to the firms, and personal meetings with the firms' representatives took place. The questionnaire addressed three main points: the design processes adopted, the usage of performance-based simulation tools, and the usage of BIM tools for energy efficiency purposes. The results of the study revealed that only a small percentage of the large-scale firms have clear strategies for building energy efficiency in their building design; however, the application is limited to certain project types or to client requests. The percentage is much lower among medium-scale firms, and such strategies are almost absent in small-scale ones. This demonstrates the urgent need to raise the awareness of the Egyptian architectural design community of the great importance of implementing these methods starting from the early stages of building design. Finally, the study proposes recommendations for such firms to be able to create a healthy built environment and improve the quality of life in emerging countries.

Keywords: architectural firms, emerging countries, energy efficiency, performance-based simulation tools

Procedia PDF Downloads 262
12046 Flood-Prone Urban Area Mapping Using Machine Learning: A Case Study of M'sila City (Algeria)

Authors: Medjadj Tarek, Ghribi Hayet

Abstract:

This study aims to develop a flood sensitivity assessment tool using machine learning (ML) techniques and a geographic information system (GIS). The importance of this study lies in integrating GIS and ML techniques for mapping flood risks, which helps decision-makers identify the most vulnerable areas and take the necessary precautions against this type of natural disaster. To reach this goal, we study the case of the city of M'sila, which is among the areas most vulnerable to floods. We drew a map of flood-prone areas based on a methodology comparing three machine learning algorithms: XGBoost, Random Forest and K-Nearest Neighbours, which achieved accuracies of 97.92%, 95% and 93.75%, respectively. For mapping the flood-prone areas, the model with the greatest accuracy (XGBoost) was relied upon.
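
A minimal sketch of the three-classifier comparison on synthetic conditioning factors; the features, labels and hyperparameters are illustrative assumptions, and the xgboost package is assumed to be installed:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

# Stand-in conditioning factors per cell (e.g. slope, elevation, distance
# to river, ...) with a binary flood / no-flood label from a synthetic rule
rng = np.random.default_rng(7)
X = rng.random((2000, 6))
y = ((X[:, 0] < 0.4) & (X[:, 2] < 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name:>13}: accuracy = {acc:.2%}")
```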

Keywords: geographic information systems (GIS), machine learning (ML), emergency mapping, flood disaster management

Procedia PDF Downloads 70
12045 Stochastic Control of Decentralized Singularly Perturbed Systems

Authors: Walid S. Alfuhaid, Saud A. Alghamdi, John M. Watkins, M. Edwin Sawan

Abstract:

Designing a controller for stochastic decentralized interconnected large-scale systems usually involves a high degree of complexity and computational effort. Noise, observability and controllability of all system states, connectivity, and channel bandwidth are other constraints on design procedures for distributed large-scale systems. The quasi-steady-state model investigated in this paper is a reduced-order model of the original system obtained using singular perturbation techniques. This paper presents an optimal control synthesis to design an observer-based feedback controller by standard stochastic control theory techniques, using the Linear Quadratic Gaussian (LQG) approach and Kalman filter design, with less complexity and lower computation requirements. A numerical example is given at the end to demonstrate the efficiency of the proposed method.
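
For reference, the standard quasi-steady-state reduction that such designs rest on can be sketched for a linear two-time-scale system; the paper's exact matrices and noise terms are not given in the abstract:

```latex
% Singularly perturbed linear system: x slow, z fast, 0 < \varepsilon \ll 1
\dot{x} = A_{11}\,x + A_{12}\,z + B_{1}\,u, \qquad
\varepsilon\,\dot{z} = A_{21}\,x + A_{22}\,z + B_{2}\,u

% Quasi-steady state: set \varepsilon = 0 and solve for z (A_{22} invertible)
z \approx -A_{22}^{-1}\left(A_{21}\,x + B_{2}\,u\right)

% Reduced-order slow model on which the LQG / Kalman design proceeds
\dot{x} = \left(A_{11} - A_{12}A_{22}^{-1}A_{21}\right)x
        + \left(B_{1} - A_{12}A_{22}^{-1}B_{2}\right)u
```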

Keywords: decentralized, optimal control, output, singular perturbation

Procedia PDF Downloads 347
12044 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach to generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the use of computers with high memory usage.

Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories

Procedia PDF Downloads 260
12043 Exploring Teachers’ Beliefs about Diagnostic Language Assessment Practices in a Large-Scale Assessment Program

Authors: Oluwaseun Ijiwade, Chris Davison, Kelvin Gregory

Abstract:

In Australia, like in other parts of the world, the debate on how to enhance teachers' use of assessment data to inform the teaching and learning of English as an Additional Language (EAL, Australia) or English as a Foreign Language (EFL, United States) has occupied the centre of academic scholarship. Traditionally, this approach was conceptualised as 'Formative Assessment' and, in recent times, 'Assessment for Learning (AfL)'. The central problem is that teacher-made tests are limited in providing data that can inform teaching and learning due to the variability of classroom assessments, which are hindered by teachers' characteristics and assessment literacy. To address this concern, scholars in language education and testing have proposed uniform large-scale computer-based assessment programs to meet the needs of teachers and promote AfL in language education. In Australia, for instance, the Victorian state government commissioned a large-scale project called 'Tools to Enhance Assessment Literacy (TEAL) for Teachers of English as an additional language'. As part of the TEAL project, a tool called 'Reading and Vocabulary assessment for English as an Additional Language (RVEAL)', a diagnostic language assessment (DLA), was developed by language experts at the University of New South Wales for teachers in Victorian schools to guide EAL pedagogy in the classroom. Therefore, this study aims to provide qualitative evidence for understanding beliefs about diagnostic language assessment (DLA) among EAL teachers in primary and secondary schools in Victoria, Australia. To realize this goal, this study raises the following questions: (a) How do teachers use large-scale assessment data for diagnostic purposes? (b) What skills do language teachers think are necessary for using assessment data for instruction in the classroom? and (c) What factors, if any, contribute to teachers' beliefs about diagnostic assessment in a large-scale assessment? A semi-structured interview method was used to collect data from at least 15 professional teachers who were selected through purposeful sampling. The findings from the resulting thematic analysis provide an understanding of teachers' beliefs about DLA in a classroom context and identify how these beliefs are crystallised in language teachers. The discussion shows how the findings can be used to inform professional development processes for language teachers, as well as the important factor of teacher cognition in the pedagogic processes of language assessment. This will hopefully help test developers and testing organisations align the outcomes of this study with their test development processes to design assessments that can enhance AfL in language education.

Keywords: beliefs, diagnostic language assessment, English as an additional language, teacher cognition

Procedia PDF Downloads 178
12042 Spectral Anomaly Detection and Clustering in Radiological Search

Authors: Thomas L. McCullough, John D. Hague, Marylesa M. Howard, Matthew K. Kiser, Michael A. Mazur, Lance K. McLean, Johanna L. Turk

Abstract:

Radiological search and mapping depends on the successful recognition of anomalies in large data sets which contain varied and dynamic backgrounds. We present a new algorithmic approach for real-time anomaly detection which is resistant to common detector imperfections, avoids the limitations of a source template library and provides immediate, and easily interpretable, user feedback. This algorithm is based on a continuous wavelet transform for variance reduction and evaluates the deviation between a foreground measurement and a local background expectation using methods from linear algebra. We also present a technique for recognizing and visualizing spectrally similar clusters of data. This technique uses Laplacian Eigenmap Manifold Learning to perform dimensional reduction which preserves the geometric "closeness" of the data while maintaining sensitivity to outlying data. We illustrate the utility of both techniques on real-world data sets.
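
A minimal sketch of the clustering idea using Laplacian Eigenmaps (scikit-learn's SpectralEmbedding) on synthetic spectra; the spectra, normalization and parameters are illustrative assumptions, not the authors' data or pipeline:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding  # Laplacian Eigenmaps

rng = np.random.default_rng(3)

def spectrum(peak_channel, n_channels=128):
    """Synthetic gamma-ray spectrum: smooth background plus one photopeak."""
    ch = np.arange(n_channels)
    background = 200.0 * np.exp(-ch / 40.0)
    photopeak = 80.0 * np.exp(-0.5 * ((ch - peak_channel) / 3.0) ** 2)
    return rng.poisson(background + photopeak).astype(float)

# Two spectrally similar groups of measurements (photopeaks at channels 60, 90)
X = np.array([spectrum(60) for _ in range(40)]
             + [spectrum(90) for _ in range(40)])
X = X / X.sum(axis=1, keepdims=True)   # normalize away gross count rate

# Nonlinear dimensional reduction that preserves local 'closeness' of
# spectrally similar measurements while remaining sensitive to outliers
embedded = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)
print("group-mean coordinates in the embedded space:")
print(np.round(embedded[:40].mean(axis=0), 3))
print(np.round(embedded[40:].mean(axis=0), 3))
```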

Keywords: radiological search, radiological mapping, radioactivity, radiation protection

Procedia PDF Downloads 676
12041 A Model Suggestion on Competitiveness and Sustainability of SMEs in Developing Countries

Authors: Ahmet Diken, Tahsin Karabulut

Abstract:

The factor that developing countries need is capital. Such countries make an effort to increase their income in order to meet their expenses for employment, infrastructure and superstructure investments, education, health and defense. The sole income of these countries is taxes collected from businesses. Businesses must generate profits and returns in order to be able to pay taxes. In a world where competition exists, businesses in developing countries may follow different strategies and must specify their target markets. In order to minimize cost and maximize profit, SMEs have to concentrate on target markets and select a cost-oriented strategy. In this study, a theoretical model is suggested in which SME firms act as clusters with each other and serve as optimal suppliers for large-scale firms. SME policy must be supported by the public sector. This relationship can help large-scale firms build worldwide brands, and this organization increases the value added for developing countries.

Keywords: competitiveness, developing countries, SMEs, sustainability

Procedia PDF Downloads 293
12040 Large-Area Film Fabrication for Perovskite Solar Cell via Scalable Thermal-Assisted and Meniscus-Guided Bar Coating

Authors: Gizachew Belay Adugna

Abstract:

Scalable and cost-effective device fabrication techniques are urgently needed to commercialize perovskite solar cells (PSCs) as the next photovoltaic (PV) technology. Herein, large-area films of perovskite and hole-transporting materials (HTMs) were developed via a rapid and scalable thermal-assisted bar-coating process in open air. High-quality, large crystalline grains of MAPbI₃ with homogeneous morphology and thickness were obtained on a large-area (10 cm×10 cm) solution-sheared mp-TiO₂/c-TiO₂/FTO substrate. An encouraging photovoltaic performance of 19.02% was achieved for devices fabricated from the bar-coated perovskite film, compared to that from the small-scale spin-coated film (17.27%), with 2,2′,7,7′-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) as the HTM, whereas a higher power conversion efficiency of 19.89%, with improved device stability, was achieved by capping a fluorinated HTM (HYC-2) as an alternative to the traditional spiro-OMeTAD. The fluorinated HTM exhibited better molecular packing in the film and a deeper HOMO level compared to its nonfluorinated counterpart; thus, improved hole mobility and overall charge extraction in the device were demonstrated. Furthermore, excellent film processability and an impressive PCE of 18.52% were achieved with the large-area bar-coated HYC-2 prepared sequentially on the perovskite underlayer in the open atmosphere, compared to bar-coated spiro-OMeTAD/perovskite (17.51%). This all-solution approach demonstrates the feasibility of producing high-quality films on large-area substrates for PSCs, a vital step toward industrial-scale PV production.

Keywords: perovskite solar cells, hole transporting materials, up-scaling process, power conversion efficiency

Procedia PDF Downloads 44
12039 Large Scale Production of Polyhydroxyalkanoates (PHAs) from Waste Water: A Study of Techno-Economics, Energy Use, and Greenhouse Gas Emissions

Authors: Cora Fernandez Dacosta, John A. Posada, Andrea Ramirez

Abstract:

The biodegradable family of polymers, polyhydroxyalkanoates, are interesting substitutes for conventional fossil-based plastics. However, the manufacturing and environmental impacts associated with their production via intracellular bacterial fermentation depend strongly on the raw material used and on energy consumption during the extraction process, limiting their potential for commercialization. Industrial wastewater is studied in this paper as a promising alternative feedstock for waste valorization. Based on results from laboratory and pilot-scale experiments, a conceptual process design, techno-economic analysis and life cycle assessment are developed for the large-scale production of the most common type of polyhydroxyalkanoate, polyhydroxybutyrate. Intracellular polyhydroxybutyrate is obtained via fermentation by the microbial community present in industrial wastewater, and the downstream processing is based on chemical digestion with surfactant and hypochlorite. The economic potential and environmental performance results help identify bottlenecks and the best opportunities to scale up the process prior to industrial implementation. The outcome of this research indicates that the fermentation of wastewater towards PHB presents advantages compared to traditional PHA production from sugars because of the null environmental burdens and financial costs of the raw material in the bioplastic production process. Nevertheless, process optimization is still required to compete with the petrochemical counterparts.

Keywords: circular economy, life cycle assessment, polyhydroxyalkanoates, waste valorization

Procedia PDF Downloads 432
12038 A Case Study of Low Head Hydropower Opportunities at Existing Infrastructure in South Africa

Authors: Ione Loots, Marco van Dijk, Jay Bhagwan

Abstract:

Historically, South Africa had various small-scale hydropower installations in remote areas that were not incorporated in the national electricity grid. Unfortunately, in the 1960s most of these plants were decommissioned when Eskom, the national power utility, rapidly expanded its grid and its capability to produce cheap, reliable, coal-fired electricity. This situation persisted until 2008, when rolling power cuts started to affect all citizens. This, together with the rising monetary and environmental cost of coal-based power generation, has sparked new interest in small-scale hydropower development, especially in remote areas or at locations (like wastewater treatment works) that cannot afford to be without electricity for long periods at a time. Even though South Africa does not have the same large-scale hydropower potential as some other African countries, significant potential for micro- and small-scale hydropower is hidden in various places. As an example, large quantities of raw and potable water are conveyed daily, under either pressurized or gravity conditions, over large distances and elevations. Due to the relative water scarcity in the country, South Africa also has more than 4900 registered dams of varying capacities. However, institutional capacity and skills have not been maintained in recent years, and therefore the identification of hydropower potential, as well as the development of micro- and small-scale hydropower plants, has not gained significant momentum. An assessment model and decision support system for low-head hydropower development have been developed to assist designers and decision-makers with first-order potential analysis. As a result, various potential sites were identified, and many of these were situated at existing infrastructure such as weirs, barrages or pipelines. One reason for the specific interest in existing infrastructure is that capital expenditure can be minimized; another is the reduced negative environmental impact compared to greenfield sites. This paper explores the case study of retrofitting an unconventional and innovative hydropower plant to the outlet of a wastewater treatment works in South Africa.
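
For first-order screening of the kind described, the hydropower potential of a site is commonly estimated as P = η·ρ·g·Q·H; a minimal sketch with hypothetical outlet figures:

```python
def hydro_power_kw(flow_m3s, head_m, efficiency=0.7, rho=1000.0, g=9.81):
    """First-order hydropower potential P = eta * rho * g * Q * H, in kW.
    The modest overall efficiency assumed here is typical of low-head
    micro installations; actual values depend on the turbine chosen."""
    return efficiency * rho * g * flow_m3s * head_m / 1000.0

# Hypothetical wastewater treatment works outlet: 0.8 m3/s over a 4 m drop
print(f"{hydro_power_kw(0.8, 4.0):.1f} kW")   # about 22 kW
```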

Keywords: low head hydropower, retrofitting, small-scale hydropower, wastewater treatment works

Procedia PDF Downloads 221
12037 Application of Unmanned Aerial Vehicle in Geohazard Mapping: Case Study Dominica

Authors: Michael Mickson

Abstract:

The recent development of unmanned aerial vehicles (UAVs) has increased the number of technical solutions that can be used to identify, map, and manage the effects of geohazards. UAVs are generally cheaper and more versatile than traditional remote-sensing techniques, and they can therefore be considered a good alternative for the acquisition of imagery and other remote sensing data before, during and after a natural hazard event. This study aims to use UAVs to investigate areas susceptible to high-mobility flows, such as debris flows, in Dominica, especially after Hurricane Maria in 2017. The use of UAVs in identifying, mapping and managing natural hazards helps to mitigate the negative effects of natural hazards on livelihoods, property and the built environment.

Keywords: unmanned aerial vehicle (UAV), geohazards, remote sensing, mapping, Dominica

Procedia PDF Downloads 98