Search results for: grid visualization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1478

188 An Investigation into the Potential of Industrial Low Grade Heat in Membrane Distillation for Freshwater Production

Authors: Yehia Manawi, Ahmad Kayvanifard

Abstract:

Membrane distillation is an emerging technology that has been used to produce freshwater and purify various aqueous mixtures. Qatar is an arid country where almost 100% of freshwater demand is supplied through energy-intensive thermal desalination. The country’s need for water has reached an all-time high, which calls for an alternative way to augment freshwater supplies without adding drastic effects on the environment. The objective of this paper was to investigate the potential of using industrial low-grade waste heat to produce freshwater by membrane distillation. The main part of this work was a heat audit of selected Qatari chemical industries to estimate both the amount of waste heat that can potentially be recovered and the amount of freshwater that could be produced if that heat were recovered. The audit showed that around 605 MW of waste heat can be recovered from the studied industries, corresponding to a total daily production of 5078.7 cubic meters of freshwater. This water can be used in a wide variety of applications, such as human consumption or industry. The amount of produced freshwater may look small compared to that of thermal desalination plants; however, this water comes from waste and could supply small cities or remote areas not connected to the water grid. The idea of producing freshwater from two widely available waste streams (thermally rejected brine and waste heat) is promising, as freshwater production would carry lower environmental and economic impacts and may, in the near future, augment thermal desalination, the conventional means of producing freshwater. This work has shown that low-grade waste heat in the chemical industries of Qatar, and perhaps the rest of the world, can contribute additional freshwater production through membrane distillation without significantly adding to the environmental impact.
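
A back-of-envelope sketch (not the paper's heat-audit method) of how recovered heat maps to distillate volume under an assumed effective gained output ratio (GOR); with GOR ≈ 0.23 the abstract's figure is roughly reproduced:

```python
# Rough check of freshwater yield from recovered waste heat in membrane
# distillation, using an assumed effective gained output ratio (GOR).
H_FG = 2.33e6   # J/kg, latent heat of vaporization near MD temperatures (assumed)
RHO = 1000.0    # kg/m3, density of water

def daily_freshwater_m3(waste_heat_mw: float, gor: float) -> float:
    """Distillate volume per day for a given recovered heat duty and GOR."""
    joules_per_day = waste_heat_mw * 1e6 * 86400
    kg_per_day = gor * joules_per_day / H_FG
    return kg_per_day / RHO

for gor in (0.1, 0.23, 0.5, 1.0):
    print(f"GOR={gor:4.2f}: {daily_freshwater_m3(605, gor):8.0f} m3/day")
# GOR of about 0.23 gives roughly 5160 m3/day, close to the reported
# 5078.7 m3/day, but the actual audit methodology may differ.
```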

Keywords: membrane distillation, desalination, heat recovery, environment

Procedia PDF Downloads 296
187 Effectiveness of Control Measures for Ambient Fine Particulate Matters Concentration Improvement in Taiwan

Authors: Jiun-Horng Tsai, Shi-Jie Nieh

Abstract:

Fine particulate matter (PM₂.₅) has become an important issue all over the world over the last decade. The annual mean PM₂.₅ concentration has exceeded the ambient air quality standard for PM₂.₅ (an annual average of 15 μg/m³) adopted by the Taiwan Environmental Protection Administration (TEPA). TEPA has therefore developed a number of air pollution control measures to improve ambient concentrations by reducing emissions of primary fine particulate matter and of the precursors of secondary PM₂.₅. This study investigated the potential improvement of ambient PM₂.₅ concentration under the TEPA program and under further emission-reduction scenarios for various sources. Four scenarios were evaluated, comprising a basic case and three reduction scenarios (A to C). Ambient PM₂.₅ concentrations were evaluated with the Community Multi-scale Air Quality modelling system (CMAQ) ver. 4.7.1 along with the Weather Research and Forecasting Model (WRF) ver. 3.4.1. The grid resolutions in the modelling work are 81 km × 81 km for domain 1 (covering East Asia), 27 km × 27 km for domain 2 (covering Southeast China and Taiwan), and 9 km × 9 km for domain 3 (covering Taiwan). The simulated annual average PM₂.₅ concentration across the regions of Taiwan is 24.9 μg/m³ for the basic case and 22.6, 18.8, and 11.3 μg/m³ for scenarios A to C, respectively; the annual average would thus be reduced by 9-55% under the control scenarios. Scenario C (reducing precursor emissions to allowance levels) could effectively improve airborne PM₂.₅ concentrations to attain the air quality standard. According to the unit precursor reduction contributions, the allowance emissions of PM₂.₅, SOₓ, and NOₓ are 16.8, 39, and 62 thousand tons per year, respectively. In the Kao-Ping air basin, the priority for reducing precursor emissions is PM₂.₅ > NOₓ > SOₓ, whereas in other areas it is PM₂.₅ > SOₓ > NOₓ. The results indicate that the target pollutants to be reduced differ between air basins, and control measures need to be adapted to local conditions.
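
The reported 9-55% range can be checked directly from the quoted concentrations:

```python
# Percent reduction of annual-mean PM2.5 for each control scenario
# relative to the basic case, using the values quoted in the abstract.
basic = 24.9  # ug/m3, basic-case annual mean
scenarios = {"A": 22.6, "B": 18.8, "C": 11.3}

for name, conc in scenarios.items():
    reduction = 100 * (basic - conc) / basic
    print(f"Scenario {name}: {conc:4.1f} ug/m3 -> {reduction:4.1f}% reduction")
# Scenario A: 9.2%, B: 24.5%, C: 54.6%, matching the 9-55% range.
```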

Keywords: airborne PM₂.₅, community multi-scale air quality modelling system, control measures, weather research and forecasting model

Procedia PDF Downloads 114
186 A Study on Shear Field Test Method in Timber Shear Modulus Determination Using Stereo Vision System

Authors: Niaz Gharavi, Hexin Zhang

Abstract:

In structural timber design, the shear modulus of a timber beam is an important factor that needs to be determined accurately. According to BS EN 408, the shear modulus can be determined using the torsion test or the shear field test method. Although the torsion test creates a pure shear state in the beam, it does not represent the real-life situation of a beam in service. The shear field test method, on the other hand, creates a loading situation similar to reality. The latter method is based on measuring the shear distortion of the beam in the zone of constant transverse load in the standardized four-point bending test described in BS EN 408. Current testing practice advises using two metallic arms as an instrument to measure the diagonal displacement of the constructed square. Timber is not a homogeneous material but a heterogeneous one, and this characteristic makes it undergo non-uniform deformation. Therefore, the dimensions and location of the constructed square in the area of constant transverse force might alter the determined shear modulus. This study investigated the impact of the shape, size, and location of the square in the shear field test method. A binocular stereo vision system was developed to capture the 3D displacement of a grid of target points. This is an accurate, non-contact method that extracts the 3D coordinates of a targeted object using two cameras. Two groups of three glue-laminated beams were produced and tested in four-point bending according to BS EN 408. Group one was constructed from two materials, laminated bamboo lumber and structurally graded C24 timber; group two consisted only of structurally graded C24 timber. Analysis of Variance (ANOVA) was performed on the acquired data to evaluate the significance of the size and location of the square in determining the shear modulus of the beam. The results show that the size of the square is an influential factor in shear modulus determination, whereas the location of the square within the area of constant shear force does not affect the shear modulus.
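
A minimal sketch of the stereo-vision step, assuming a calibrated camera pair (the projection matrices and pixel coordinates below are placeholders, and OpenCV is used rather than whatever software the authors wrote):

```python
# Recover 3D target coordinates from matched points in a calibrated
# stereo pair; differences between load steps give the diagonal
# displacements of the shear-field square.
import numpy as np
import cv2

# Assumed inputs: 3x4 projection matrices from stereo calibration and
# matched 2D target-point coordinates (pixels) in each camera image.
P1 = np.eye(3, 4)                                  # left camera (placeholder)
P2 = np.hstack([np.eye(3), [[-0.1], [0], [0]]])    # right camera, 0.1 m baseline
pts_left = np.array([[320.0, 240.0]]).T            # shape (2, N)
pts_right = np.array([[300.0, 240.0]]).T

pts4d = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
pts3d = (pts4d[:3] / pts4d[3]).T                   # homogeneous -> Euclidean
print(pts3d)                                       # 3D grid-target coordinates
```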

Keywords: shear field test method, BS EN 408, timber shear modulus, photogrammetry approach

Procedia PDF Downloads 185
185 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current world ocean sediment map was initiated from UNESCO's general map of the deep ocean floor and adapted using a single sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are reviewed; where they are of interest, the nature of the seabed is extracted, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then had been represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are built using all the bathymetric and sedimentary data of a region, making it possible to produce a regional synthesis map, with generalizations applied where data are over-precise. Eighty-six regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing, and a new digital version is released every two years with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of quality variability. The map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during recent decades. The arrival of new seafloor classification systems has improved recent seabed maps, and compiling these new maps with those previously published gradually enriches the world sedimentary map. Much work nevertheless remains in regions still based on data acquired more than half a century ago.

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 203
184 Application Case and Result Consideration About Basic and Working Design of Floating PV Generation System Installed in the Upstream of Dam

Authors: Jang-Hwan Yin, Hae-Jeong Jeong, Hyo-Geun Jeong

Abstract:

K-water (Korea Water Resources Corporation) carried out the basic and working design of a floating PV generation system installed on the water upstream of a dam, to develop clean energy using water as the importance of green growth is emphasized worldwide. Ground-mounted PV systems, which have been deployed extensively to date, cause environmental damage by occupying farmland and forest land, while rooftop PV systems are already installed at almost every suitable place of business, making additional installation nearly impossible. The installation space for floating PV is effectively unlimited and allows efficient use of national land because the system is installed on water. In addition, PV module efficiency increases thanks to natural water cooling and the absence of shading, and operating performance data confirm that annual power generation exceeds that of ground-mounted PV systems. Design and construction are difficult owing to high cost, few application cases, and the challenging installation of floaters, mooring devices, underwater cables, etc.; however, cost-reduction measures such as structural weight reduction and optimal floater design have been examined. This paper systematically describes the basic and working design results of K-water's floating PV generation system and suggests an optimal design method for such systems. The main contents are photovoltaic array siting, substation siting in relation to the underwater cable, PV module and inverter design, transmission and substation equipment design, floater design in relation to structural weight reduction, mooring system design accounting for water level fluctuation, grid connection technical review, and remote control and monitoring equipment design. This work should contribute to the optimal design and commercial expansion of floating PV generation systems and provide an opportunity to revitalize clean energy development using water.

Keywords: PV generation system, clean energy, green growth, solar energy

Procedia PDF Downloads 390
183 Generation of ZnO-Au Nanocomposite in Water Using Pulsed Laser Irradiation

Authors: Elmira Solati, Atousa Mehrani, Davoud Dorranian

Abstract:

The generation of ZnO-Au nanocomposites by laser irradiation of a mixture of ZnO and Au colloidal suspensions is investigated experimentally. First, ZnO and Au nanoparticles were prepared by pulsed laser ablation of the corresponding targets in water using the 1064 nm wavelength of an Nd:YAG laser. In a second step, the ZnO and Au colloidal suspensions were mixed in different volumetric ratios and irradiated using the second harmonic of an Nd:YAG laser at 532 nm. Changes in the size of the nanostructures and the optical properties of the ZnO-Au nanocomposites are studied as a function of the volumetric ratio of the ZnO and Au colloidal suspensions. The crystalline structure of the nanocomposites was analyzed by X-ray diffraction (XRD), and their optical properties were examined at room temperature with a UV-Vis-NIR absorption spectrophotometer. Transmission electron microscopy (TEM) was performed by placing a drop of the concentrated suspension on a carbon-coated copper grid, and scanning electron microscopy (SEM) was used to further confirm the morphology. Room-temperature photoluminescence (PL) was measured to characterize the luminescence properties, and the nanocomposites were also characterized by Fourier transform infrared (FTIR) spectroscopy. The X-ray diffraction patterns show that the ZnO-Au nanocomposites have the polycrystalline structure of Au. TEM images reveal that the joining of Au and ZnO nanoparticles involves their adhesion. The plasmon peak of the ZnO-Au nanocomposites is red-shifted and broadened in comparison with pure Au nanoparticles. Using Tauc's equation, the band gap energy of the ZnO-Au nanocomposites is calculated to be 3.15-3.27 eV. The formation of the nanocomposites shifts the FTIR peaks of the metal-oxide bands to higher wavenumbers. PL spectra show several weak peaks in the ultraviolet region and several relatively strong peaks in the visible region. SEM images indicate that the ZnO-Au nanocomposites produced in water are spherical. The TEM images demonstrate that adhesion increases with the volumetric ratio of the Au colloidal suspension, and the size distributions show that the fraction of smaller ZnO-Au nanocomposites also grows with that ratio.
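
For illustration, a sketch of the Tauc-plot band-gap extraction mentioned above, assuming a direct allowed transition, (αhν)² ∝ (hν − Eg), and using absorbance as a proxy for α (the fit window is a user choice, not a value from the paper):

```python
# Estimate the optical band gap by extrapolating the linear region of a
# Tauc plot, (alpha*h*nu)^2 vs. photon energy, to the energy axis.
import numpy as np

def tauc_band_gap(wavelength_nm, absorbance, fit_window):
    """Extrapolate the linear part of the Tauc plot to (alpha*h*nu)^2 = 0."""
    h_nu = 1239.84 / np.asarray(wavelength_nm)    # photon energy in eV
    tauc = (np.asarray(absorbance) * h_nu) ** 2   # absorbance as proxy for alpha
    lo, hi = fit_window                           # eV range of the linear region
    mask = (h_nu >= lo) & (h_nu <= hi)
    slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
    return -intercept / slope                     # x-intercept = Eg in eV

# Usage: wavelengths/absorbances would come from the UV-Vis-NIR
# spectrophotometer; a fit window around the absorption edge is chosen,
# e.g. tauc_band_gap(wl, A, fit_window=(3.2, 3.4)).
```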

Keywords: Au nanoparticles, pulsed laser ablation, ZnO-Au nanocomposites, ZnO nanoparticles

Procedia PDF Downloads 309
182 Modeling Landscape Performance: Evaluating the Performance Benefits of the Olmsted Brothers’ Proposed Parkway Designs for Los Angeles

Authors: Aaron Liggett

Abstract:

This research focuses on the visionary proposal made by the Olmsted Brothers landscape architecture firm in the 1920s for a network of interconnected parkways in Los Angeles. The envisioned parkways aimed to address environmental and cultural strains by providing green space for recreation, wildlife habitat, and stormwater management while serving as multimodal transportation routes. Although the parkways were never constructed, through an evidence-based approach this research presents a framework for evaluating their potential functionality and success by modeling and visualizing their quantitative and qualitative landscape performance and benefits. Historical documents and digital modeling tools support detailed analysis, modeling, and visualization of the parkway designs. A set of 1928 construction documents is used to analyze and interpret the design intent of the parkways. Grading plans are digitized in CAD and modeled in SketchUp to produce 3D visualizations of the parkway. Drainage plans are digitized to model stormwater performance, and planting plans are analyzed to model urban forestry and biodiversity. The EPA's Storm Water Management Model (SWMM) predicts runoff quantity and quality, while USDA Forest Service tools evaluate carbon sequestration and air quality. Spatial and overlay analysis techniques are employed to assess urban connectivity and the spatial impacts of the parkway designs. The study reveals how the integration of blue, green, and transportation infrastructure within the parkway design creates a multifunctional landscape capable of offering alternative spatial and temporal uses, with multiple functional, ecological, aesthetic, and social benefits. The analysis of the Olmsted Brothers' proposed parkways, which predated contemporary ecological design and resiliency practices, demonstrates the potential for such benefits within urban designs and highlights the importance of integrated blue, green, and transportation infrastructure in creating landscapes that serve multiple purposes simultaneously. The research contributes new methods for modeling and visualizing landscape performance benefits, providing insights and techniques to inform future designs and sustainable development strategies.
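
As a hedged sketch of how a SWMM run of the digitized drainage plans might be scripted (using the pyswmm wrapper around EPA SWMM; the input file name and subcatchment ID are hypothetical):

```python
# Step through a SWMM simulation of a parkway drainage model and record
# the peak runoff of one subcatchment.
from pyswmm import Simulation, Subcatchments

peak_runoff = 0.0
with Simulation("parkway.inp") as sim:         # hypothetical SWMM input file
    parkway = Subcatchments(sim)["S1"]         # hypothetical subcatchment ID
    for _ in sim:                              # advance the simulation in time
        peak_runoff = max(peak_runoff, parkway.runoff)
print(f"Peak parkway runoff: {peak_runoff:.3f} (model flow units)")
```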

Keywords: landscape architecture, ecological urban design, greenway, landscape performance

Procedia PDF Downloads 91
181 CFD Simulation of the Pressure Distribution in the Upper Airway of an Obstructive Sleep Apnea Patient

Authors: Christina Hagen, Pragathi Kamale Gurmurthy, Thorsten M. Buzug

Abstract:

CFD simulations are performed in the upper airway of a patient suffering from obstructive sleep apnea (OSA), a sleep-related breathing disorder characterized by repetitive partial or complete closures of the upper airways. The simulations are aimed at a better understanding of the pathophysiological flow patterns in an OSA patient and are compared with medical data from a sleep endoscopic examination under sedation. A digital model consisting of surface triangles of the upper airway is extracted from MR images by a region-growing segmentation process followed by careful manual refinement. The computational domain includes the nasal cavity, with the nostrils as inlet areas, and the pharyngeal volume, with an outlet underneath the larynx. At the nostrils, a flat inflow velocity profile is prescribed, with the velocity chosen such that a volume flow rate of 150 ml/s is reached; behind the larynx, an outlet pressure of -10 Pa is prescribed. The stationary incompressible Navier-Stokes equations are solved numerically using finite elements, and a grid convergence study has been performed. The results show an amplification of the maximum velocity to about 2.5 times the inlet velocity at a constriction of the pharyngeal volume in the region of the tongue. The same region also shows the highest pressure drop, of about 5 Pa. This agrees with the sleep endoscopic examination of the same patient under sedation, which showed complete contractions in the region of the tongue. CFD simulations can become a useful tool in the diagnosis and therapy of obstructive sleep apnea by giving insight into the patient's individual fluid dynamics in the upper airways where experimental measurements are not feasible. This study shows both that constriction areas within the upper airway lead to a significant pressure drop and that the area of pressure drop agrees well with the area of contraction.
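
For orientation, the prescribed flow rate implies an inlet velocity on the order of 1 m/s for a typical nostril cross-section (the area below is an assumption, not from the paper):

```python
# Flat inlet velocity implied by the prescribed 150 ml/s volume flow rate.
Q = 150e-6                      # m3/s, total inspiratory flow from the abstract
A_nostril = 0.7e-4              # m2 per nostril, assumed typical cross-section
v_inlet = Q / (2 * A_nostril)   # flow split over two nostrils
print(f"Flat inflow velocity: {v_inlet:.2f} m/s")   # ~1.1 m/s
# The reported ~2.5x amplification at the tongue-base constriction then
# corresponds to a local peak of a few m/s for this inlet velocity.
```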

Keywords: biomedical engineering, obstructive sleep apnea, pharynx, upper airways

Procedia PDF Downloads 281
180 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrodinger equation in imaginary time, using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential given the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained with configuration-average distorted-wave theory. As expected, the double and triple autoionization rates are much smaller than the total autoionization rates. Future work can extend this approach to electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
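
The imaginary-time relaxation mentioned above can be illustrated in one dimension for a single particle (a toy stand-in, not the paper's four-electron lattice code; the Schmidt orthogonalization against interior subshells is omitted): repeatedly applying a small step of exp(-Hδτ) and renormalizing filters out everything but the lowest eigenstate.

```python
# Imaginary-time relaxation of a trial wave function on a uniform grid.
import numpy as np

n, dx, dtau = 400, 0.05, 1e-4
x = (np.arange(n) - n / 2) * dx
V = 0.5 * x**2                           # harmonic potential as a stand-in
psi = np.exp(-((x - 1.0) ** 2))          # arbitrary trial wave function

def H(p):
    """Apply H = -0.5 d2/dx2 + V on the grid (periodic finite differences)."""
    lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx**2
    return -0.5 * lap + V * p

for _ in range(50000):
    psi -= dtau * H(psi)                 # one explicit imaginary-time step
    psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalize to counter the decay

E = np.sum(psi * H(psi)) * dx
print(f"Relaxed energy: {E:.4f} (exact harmonic ground state: 0.5)")
```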

Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method

Procedia PDF Downloads 131
179 The Spatial Pattern of Economic Rents of an Airport Development Area: Lessons Learned from the Suvarnabhumi International Airport, Thailand

Authors: C. Bejrananda, Y. Lee, T. Khamkaew

Abstract:

With the rising importance of air transportation in the 21st century, the economics of airport planning and decision-making have become more important to the surrounding urban structure and land values. This research therefore examines the relationship between an airport and its impacts on the distribution of urban land uses and land values by applying Alonso's bid-rent model, taking the New Bangkok International Airport (Suvarnabhumi International Airport) as a case study. The analysis covers three periods of airport development: after the airport site was proposed, during airport construction, and after the opening of the airport. The statistical results confirm that Alonso's model can explain the impacts of the new airport only for the northeast quadrant, while proximity to the airport showed an inverse relationship with land value for all six types of land use activity across the three periods. Land value for commercial land use is the most sensitive to the location of the airport, that is, it has the strongest requirement for accessibility to the airport compared with residential and manufacturing land uses. The bid-rent gradients of the six land use activities also declined dramatically over the three periods because of the Asian Financial Crisis in 1997. The lessons learned from this research concern the reliability of the data used. The first issue is the use of different areal units for assessing land value in different periods, zone blocks (1995) versus grid blocks (2002, 2009), which obscures the overall trends in assessed land value. The second is the availability of historical data: because the government did not systematically collect historical land value assessments, some land value data and aerial photos do not cover the entire study area. Finally, the different formats of the aerial photos, hard copy (1995) versus digital (2002, 2009), made distance measurement difficult. These problems affect the accuracy of the statistical analyses.
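
A hedged sketch of the gradient estimation implied by Alonso's model (all parcel data below are hypothetical): regressing log land value on distance to the airport gives the bid-rent gradient as the slope.

```python
# Fit a log-linear bid-rent curve, ln(value) = a + b*distance; the slope b
# is the (negative) bid-rent gradient for one land-use type and one period.
import numpy as np

dist_km = np.array([2, 5, 8, 12, 18, 25], dtype=float)   # hypothetical parcels
value = np.array([95, 78, 66, 52, 41, 33], dtype=float)  # hypothetical values

slope, intercept = np.polyfit(dist_km, np.log(value), 1)
print(f"Bid-rent gradient: {slope:.4f} per km (log scale)")
# Repeating the fit per land-use type and per period (1995, 2002, 2009)
# yields the gradient comparisons discussed in the abstract.
```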

Keywords: airport development area, economic rents, spatial pattern, suvarnabhumi international airport

Procedia PDF Downloads 255
178 Experimental Study on Different Load Operation and Rapid Load-change Characteristics of Pulverized Coal Combustion with Self-preheating Technology

Authors: Hongliang Ding, Ziqu Ouyang

Abstract:

Given that China's energy structure is dominated by coal, achieving deep and flexible peak shaving in pulverized coal power plants and maximizing the consumption of renewable energy in the power grid are of great significance for ensuring energy security and scientifically reaching the goals of carbon peaking and carbon neutrality. Using the promising self-preheating combustion technology, which has the potential for broad-load regulation and rapid response to load changes, this study investigated the operation of pulverized coal combustion at different loads and its rapid load-change characteristics. Four effective load-stabilization bases were proposed, according to preheating temperature, coal gas composition (calorific value), combustion temperature (spatial mean temperature and mean-square temperature fluctuation coefficient), and flue gas emissions (CO and NOx concentrations); on this basis, load-change rates were calculated to assess the load response characteristics. Because preheating improves the physicochemical properties of the pulverized coal, stable ignition and combustion could be obtained even at a low load of 25%, with a combustion efficiency above 97.5%, and NOx emission reached its lowest value at 50% load, with a concentration of 50.97 mg/Nm³ (at 6% O₂). The load ramp-up stage displayed higher load-change rates than the ramp-down stage, with maximum rates of 3.30 %/min and 3.01 %/min, respectively. Furthermore, the driving force created by a large load step increased the load-change rate. Rates based on the preheating indicator attained the highest value of 3.30 %/min, while rates based on the combustion indicator peaked at 2.71 %/min. The combustion indicator describes the system's combustion state and load changes more accurately, whereas the preheating indicator is easier to acquire and yields a higher load-change rate, so the appropriate evaluation strategy depends on the actual situation. This study verifies a feasible method for deep and flexible peak shaving of coal-fired power units and provides basic data and technical support for future engineering applications.
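
The load-change-rate metric reduces to percent load change per minute; a minimal sketch with a hypothetical ramp duration:

```python
# Load-change rate as used in the abstract: percent load change per minute
# between the start and end of a ramp identified by one of the indicators.
def load_change_rate(load_start_pct: float, load_end_pct: float, minutes: float) -> float:
    return abs(load_end_pct - load_start_pct) / minutes

# A hypothetical 50% -> 75% ramp-up completed in 7.6 min gives ~3.3 %/min,
# in line with the maximum ramp-up rate reported.
print(load_change_rate(50, 75, 7.6))
```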

Keywords: clean coal combustion, load-change rate, peak shaving, self-preheating

Procedia PDF Downloads 47
177 Inviscid Steady Flow Simulation Around a Wing Configuration Using MB_CNS

Authors: Muhammad Umar Kiani, Muhammad Shahbaz, Hassan Akbar

Abstract:

Simulation of a high-speed, inviscid, steady, ideal air flow around a 2D/axisymmetric body was carried out using the mb_cns code. mb_cns is a program for the time-integration of the Navier-Stokes equations for two-dimensional compressible flows on a multiple-block structured mesh. The flow geometry may be either planar or axisymmetric, and multiply-connected domains can be modeled by patching together several blocks. The main simulation code is accompanied by a set of pre- and post-processing programs. The pre-processing programs scriptit and mb_prep start with a short script describing the geometry, initial flow state, and boundary conditions, and produce a discretized version of the initial flow state. The main flow simulation program (or solver) is mb_cns; it takes the files prepared by scriptit and mb_prep, integrates the discrete form of the gas flow equations in time, and writes the evolved flow data to a set of output files. This output may consist of the flow state over the whole domain at a number of instants in time. After the time integration, the post-processing programs mb_post and mb_cont can be used to reformat the flow state data and produce GIF or PostScript plots of flow quantities such as pressure, temperature, and Mach number. The current problem is an example of supersonic inviscid flow. The flow domain (a strake-configuration wing) is discretized by a structured grid, and a finite-volume approach is used to discretize the conservation equations. The flow field is recorded as cell-average values at cell centers, and explicit time stepping is used to update the conserved quantities. MUSCL-type interpolation and one of three flux calculation methods (a Riemann solver, AUSMDV flux splitting, or the Equilibrium Flux Method, EFM) are used to calculate inviscid fluxes across cell faces.
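
The tool chain described above could be driven from a script along these lines (the program names are from the abstract; the case name, file naming, and argument conventions are assumptions, so consult the mb_cns documentation for the real invocation):

```python
# Run the mb_cns pre-processing, solver, and post-processing stages in order.
import subprocess

case = "strake_wing"   # hypothetical case name
for cmd in (
    ["scriptit", f"{case}.py"],   # geometry, initial state, boundary conditions
    ["mb_prep", case],            # discretized initial flow state
    ["mb_cns", case],             # time-integrate the gas flow equations
    ["mb_post", case],            # reformat the evolved flow data
    ["mb_cont", case],            # contour plots of pressure, T, Mach number
):
    subprocess.run(cmd, check=True)   # stop the chain if any stage fails
```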

Keywords: steady flow simulation, processing programs, simulation code, inviscid flux

Procedia PDF Downloads 404
176 A Visualization Classification Method for Identifying the Decayed Citrus Fruit Infected by Fungi Based on Hyperspectral Imaging

Authors: Jiangbo Li, Wenqian Huang

Abstract:

Early detection of fungal infection in citrus fruit is one of the major problems in the postharvest commercialization process, and automatic, nondestructive detection of infected fruit is still a challenge for the citrus industry. At present, visual inspection of rotten citrus fruit is commonly performed by workers using ultraviolet-induced fluorescence or by manual sorting in citrus packinghouses to remove fruit with fungal infection. The former entails problems because exposing people to this kind of lighting is potentially hazardous to human health, and the latter is very inefficient. This study focused on this problem, using oranges as the research object, and proposed an effective method based on Vis-NIR hyperspectral imaging in the wavelength range of 400-1000 nm with a spectroscopic resolution of 2.8 nm. Three normalization approaches were applied prior to analysis to reduce the effect of sample curvature on spectral profiles; mean normalization was found to be the most effective pretreatment for decreasing curvature-induced spectral variability. Principal component analysis (PCA) was then applied to a dataset composed of average spectra from decayed and normal tissue to reduce the dimensionality of the data and to assess the ability of Vis-NIR hyperspectra to discriminate between the two classes; normal and decayed spectra were separable along the first principal component (PC1) axis. Subsequently, five wavelengths (bands) centered at 577, 702, 751, 808, and 923 nm were selected as characteristic wavelengths by analyzing the PC1 loadings, and a multispectral combination image was generated from the five corresponding wavelength images. Based on this combination image, intensity-slicing pseudocolor image processing was used to generate a 2-D visual classification image that enhances the contrast between normal and decayed tissue. Finally, an image segmentation algorithm for detecting decayed fruit was developed based on the pseudocolor image coupled with a simple thresholding method. For an independent set of 238 samples, comprising fruit infected by Penicillium digitatum and normal fruit, the success rates were 100% and 97.5%, respectively, and the algorithm also identified oranges infected by Penicillium italicum with 100% accuracy, indicating that the proposed multispectral algorithm is effective and has the potential to be applied in the citrus industry.
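
A schematic sketch of the band-selection pipeline described above (not the authors' code; picking bands with the largest absolute PC1 loadings is one plausible reading of "analyzing the loadings"):

```python
# Mean-normalize spectra, fit PCA on decayed vs. normal tissue spectra,
# and select characteristic wavelengths from the PC1 loadings.
import numpy as np
from sklearn.decomposition import PCA

def select_bands(X, wavelengths, n_bands=5):
    """X: (n_samples, n_bands) average spectra; wavelengths: (n_bands,) in nm."""
    X_norm = X / X.mean(axis=1, keepdims=True)   # mean normalization
    pca = PCA(n_components=1)
    pca.fit(X_norm)
    loadings = np.abs(pca.components_[0])
    idx = np.argsort(loadings)[-n_bands:]        # most influential bands
    return np.sort(np.asarray(wavelengths)[idx])

# In the paper this step yielded bands near 577, 702, 751, 808, and 923 nm;
# the five-band combination image is then pseudocolored and thresholded.
```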

Keywords: citrus fruit, early rotten, fungal infection, hyperspectral imaging

Procedia PDF Downloads 272
175 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

This article presents results of a project aimed at evaluating the possibilities for effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a processing and interpretation methodology that follows world trends but is unique in being adjusted to the data, local variations, and petroleum characteristics of the area. A number of tasks were accomplished in order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, and to identify the most favorable well sites where the largest gas accumulations are anticipated. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well logs and archival data. The studies apply mercury intrusion porosimetry (MICP), micro-CT, and nuclear magnetic resonance imaging (using a Rock Core Analyzer). For a prospective location (e.g., the central part of the Carpathian Foredeep, Brzesko-Wojnicz area), detailed seismic survey data have been reprocessed and reinterpreted using integrated geophysical investigations. Quantitative, structural, and parametric models for selected areas of the Carpathian Foredeep are constructed from integrated, detailed 3D computer models, with modeling carried out in Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results indicate a potentially prospective area for unconventional gas accumulations in the Polish part of the Carpathian Foredeep.

Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas

Procedia PDF Downloads 199
174 An Overview of Posterior Fossa Associated Pathologies and Segmentation

Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets

Abstract:

Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have previously been used to evaluate ventricular and hemorrhagic volumes but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, or the Cavalieri estimator were utilized. These studies produced better data than older methods relying on formulas for rough volumetric estimation. The most commonly used technique was semi-automated segmentation (12 studies), followed by manual segmentation (7 studies), automated segmentation (4 studies), and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region; the least common was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results; however, no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation via open source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method, but atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results that may be applied to posterior fossa-related pathologies, and medical professionals will save time and effort in analyzing large datasets thanks to these advances.
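
The Cavalieri estimator mentioned above has a simple closed form, V = t · (a/p) · ΣP, where t is the section spacing, a/p the area each grid point represents, and P the points hitting the region per section; a minimal sketch with hypothetical counts:

```python
# Cavalieri point-counting volume estimate from serial sections.
def cavalieri_volume(points_per_section, section_thickness_mm, area_per_point_mm2):
    return section_thickness_mm * area_per_point_mm2 * sum(points_per_section)

# Hypothetical posterior fossa point counts over 10 MR slices:
counts = [12, 18, 25, 31, 34, 33, 28, 21, 14, 8]
vol_mm3 = cavalieri_volume(counts, section_thickness_mm=5.0, area_per_point_mm2=25.0)
print(f"Estimated volume: {vol_mm3 / 1000:.1f} cm^3")   # 28.0 cm^3
```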

Keywords: chiari, posterior fossa, segmentation, volumetric

Procedia PDF Downloads 71
173 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor-compression refrigeration cycle, and heat is removed from the refrigerator cabinets by one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are among the most important factors affecting the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. Flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV), while airflow and temperature distributions are investigated numerically with Ansys Fluent. To study the heat transfer inside the refrigerator, forced convection in a closed rectangular cavity representing the refrigerating compartment is modeled: the cavity volume is represented with finite volume elements and solved computationally with the appropriate momentum and energy equations (the Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results of the 3D numerical simulations are in good agreement with the experimental airflow measurements obtained with the SPIV technique. After the Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters, compressor capacity, fan rotational speed, and shelf type (glass or wire), are studied with respect to energy consumption, pull-down time, and temperature distribution in the cabinet. For each case, energy consumption is calculated from the experimental results. The main parameters affecting in-cabinet temperature distribution and energy consumption are then determined from the CFD simulations, whose results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and the smallest temperature difference between the shelves inside the cabinet, is determined.
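
The three-parameter study lends itself to a full-factorial DOE input set; a sketch with assumed levels (the actual levels are not given in the abstract):

```python
# Enumerate a full-factorial DOE over the three studied parameters;
# each tuple becomes one CFD case whose responses feed the optimization.
from itertools import product

compressor_capacity = [0.8, 1.0, 1.2]   # relative to baseline (assumed levels)
fan_speed_rpm = [1200, 1500, 1800]      # assumed levels
shelf_type = ["glass", "wire"]

runs = list(product(compressor_capacity, fan_speed_rpm, shelf_type))
print(f"{len(runs)} CFD cases")          # 18 cases
# For each case, energy consumption, pull-down time, and shelf-to-shelf
# temperature difference are recorded as DOE responses.
```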

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 80
172 An Exploration of Policy-related Documents on District Heating and Cooling in Flanders: a Slow and Bottom-up Process

Authors: Isaura Bonneux

Abstract:

District heating and cooling (DHC) is increasingly recognized as a viable path towards sustainable heating and cooling. While countries like Sweden and Denmark have a longstanding tradition of DHC, Belgium is lagging behind: the northern part of Belgium, Flanders, had a total of only 95 heating networks in July 2023. Nevertheless, the region is increasingly exploring possibilities to widen the scope of DHC. DHC is a complex energy system requiring extensive collaboration between various stakeholders at various levels. It is therefore of interest to look closer at policy-related documents at the Flemish (regional) level, as these policies set the scene for DHC development in the region; no such analysis has been undertaken so far. This paper asks: “Who talks about DHC, and in which way and context is DHC discussed in Flemish policy-related documents?” To answer this question, the Overton policy database, which retrieves data from governments, think tanks, NGOs, and IGOs, was used to search and retrieve relevant policy-related documents. Of the 244 original results, 117 documents published between 2009 and 2023 were analyzed. Every selected document included theme keywords, the policymaking department(s), the date, and the document type; these elements were used for quantitative description and visualization, while qualitative content analysis revealed patterns and main themes regarding DHC in Flanders. Four main conclusions can be drawn. First, the timeframe shows that DHC is a new topic in Flanders that still receives limited attention: 2014, 2016, and 2017 were the years with the most documents, yet even then only 12 documents each. Many documents mentioned DHC without much depth and painted it as a future scenario surrounded by uncertainty. Most issuing government departments were linked either to energy and climate (e.g., the Flemish Environmental Agency) or to policy (e.g., the Socio-Economic Council of Flanders). Second, DHC is mentioned most within an ‘Environment and Sustainability’ context, followed by ‘General Policy and Regulation’. This is intuitive, as DHC is perceived as a sustainable heating and cooling technique and the analysis comprises policy-related documents. Third, Flanders seems mostly interested in using waste or residual heat as a heating source for DHC; harbors and waste incineration plants are identified as potential and promising supply sources, an approach that tries to reconcile environmental and economic incentives. Last, local councils are assigned a central role and mostly take the initiative. The policy documents and policy advice demonstrate that Flanders opts for a bottom-up organization. As DHC is very dependent on local conditions, this seems a logical step; nevertheless, it can prevent smaller councils from creating DHC networks and slow down the systematic, rapid implementation of DHC throughout Flanders.

Keywords: district heating and cooling, flanders, overton database, policy analysis

Procedia PDF Downloads 15
171 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization: without it, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. It is therefore important for an organization to ensure that the data it uses is of high quality, and this is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve data quality. First introduced in 2020, its purpose is to decentralize data ownership, making it easier for domain experts to manage the data; this reduces reliance on centralized data teams and allows domain experts to take charge of their own data. This paper discusses how a set of elements, including data mesh, can increase data quality. One key benefit of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality; with data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity; with data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Data mesh can also contribute to a new form of organization that is more agile and adaptable: by decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn improves overall performance through better insights delivered by better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data; this helps identify and address data quality problems quickly. Data culture is another major aspect: with data mesh, domain experts are encouraged to take ownership of their data, which helps create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can enhance data quality by automating many data-related tasks such as data cleaning and data validation; by integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 97
170 Life Cycle Assessment of Biogas Energy Production from a Small-Scale Wastewater Treatment Plant in Central Mexico

Authors: Joel Bonales, Venecia Solorzano, Carlos Garcia

Abstract:

A large percentage of the wastewater generated in developing countries does not receive any treatment, which leads to numerous environmental impacts. In response, a paradigm change has been proposed, from the current wastewater treatment model based on large-scale plants towards a model based on small and medium scales. Nevertheless, small-scale wastewater treatment (SS-WWTP) with novel technologies such as anaerobic digesters, together with the utilization of co-products such as biogas, still presents diverse environmental impacts that must be assessed. This study consisted of a Life Cycle Assessment (LCA) performed on an SS-WWTP that treats wastewater from a small commercial block in the city of Morelia, Mexico. The treatment performed in the SS-WWTP consists of anaerobic and aerobic digesters with a daily capacity of 5,040 L. Two scenarios were analyzed: the current plant conditions and a hypothetical on-site energy use of the biogas. Furthermore, two allocation criteria were applied: full impact allocation to the system’s main product (treated water), and substitution credits for replacing Mexican grid electricity (biogas) and clean water pumping (treated water). The results showed that the analyzed plant had larger impacts per volume of wastewater treated than reported in the literature, which may imply that the plant is currently operating inefficiently. The impacts were concentrated in the aerobic digestion and electricity generation phases due to the plant’s particular configuration. Additional findings show that the allocation criteria are crucial for the interpretation of impacts and that the energy use of the biogas obtained in this plant can help mitigate the associated climate change impacts. It is concluded that SS-WWTP is an environmentally sound alternative for wastewater treatment from a systemic perspective; however, such studies must be careful in selecting the allocation criteria and replaced products, since these factors greatly influence the results of the assessment.
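
Why the allocation choice matters can be shown with toy numbers (none are from the study): substitution credits subtract the impacts of displaced grid electricity from the treatment system's direct impacts.

```python
# Compare full allocation against substitution crediting for a single
# impact category (climate change), using hypothetical figures.
direct_impact = 1.8          # kg CO2-eq per m3 treated (hypothetical)
biogas_kwh_per_m3 = 0.9      # electricity recoverable per m3 treated (hypothetical)
grid_factor = 0.45           # kg CO2-eq per kWh of grid electricity (assumed)

credited = direct_impact - biogas_kwh_per_m3 * grid_factor
print(f"Full allocation: {direct_impact:.2f} kg CO2-eq/m3")
print(f"With substitution credit: {credited:.2f} kg CO2-eq/m3")
```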

Keywords: biogas, life cycle assessment, small scale treatment, wastewater treatment

Procedia PDF Downloads 99
169 Talent Management in Small and Medium Sized Companies: A Multilevel Approach Contextualized in France

Authors: Kousay Abid

Abstract:

The aim of this paper is to better understand talent and talent management (TM) in small and medium-sized French companies (SMEs). While previous empirical investigations have largely focused on multinationals and big companies and concentrated on the Anglo-Saxon context, we focus on the pressing need for implementing TM strategies and practices not only on the new ground of SMEs but also within a new European context, that of France. This study also aims to understand the strategies adopted by these firms to attract, retain, maintain, and develop talent. We contribute to TM research by adopting a multilevel approach, with the goal of reaching a holistic overall view of the interactions between the various levels at which TM is applied. A qualitative research methodology based on a multiple-case study design, built first on a qualitative survey and second on two in-depth case studies, both drawing on interviews, is used to develop the analysis of TM strategies and practices. The findings are based on data collected from more than 15 French SMEs. Our theoretical contributions are the fruit of contextual considerations and the dynamics of the multilevel approach. Theoretically, we first attempt to clarify how talent and TM are seen and defined in French SMEs, and consequently to enrich the literature on TM in SMEs outside the Anglo-Saxon context. Moreover, we seek to understand how SMEs jointly manage their talents and their TM strategies by setting up this contextualized pilot study, and we focus on the systematic TM model emerging from French SMEs. Our primary managerial goal is to shed light on the need for TM to achieve better management of these organizations by directing leaders to better identify the talented people they hold at all levels. In addition, our systematic TM model strengthens our analysis grid and yields recommendations for CEOs and Human Resource Development (HRD) professionals, prompting them to rethink their companies’ HR strategies. Our outputs therefore present multiple levers of action that should be taken into consideration when reviewing HR strategies and systems, as well as their impact beyond organizational boundaries.

Keywords: french context, multilevel approach, small and medium-sized enterprises, talent management

Procedia PDF Downloads 155
168 Developing Confidence of Visual Literacy through Using MIRO during Online Learning

Authors: Rachel S. E. Lim, Winnie L. C. Tan

Abstract:

Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through critique and the production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study therefore investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb's Experiential Learning Cycle, the two lecturers developed students' engagement with visual literacy concepts through activities facilitating concrete experience, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO, using its infinite canvas, smart frameworks, a robust set of widgets (e.g., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons), and platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students' perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) with visual literacy, which predicted the performance score (PS) measured against their application of visual literacy in their motion design project. While students' learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students' open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as preparatory ground for generating feedback. With these conditions in place, students move to the next level of personal action: self-reflection, self-directed learning, and time management. The results show that the affordances of MIRO can develop visual literacy and make up for the potential pitfalls of student isolation, poor communication, and disengagement during online learning. The paper discusses how lecturers could use MIRO to orientate students for learning in visual literacy and studio-based projects in future.
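
A hedged sketch of the regression structure reported (synthetic data; the study's actual survey responses are not available): engagement (E) predicting confidence (C), and C in turn predicting performance score (PS).

```python
# Two-stage OLS mirroring the reported path: E -> C, then (E, C) -> PS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
E = rng.normal(4.0, 0.5, 30)                 # engagement ratings, n=30 students
C = 0.8 * E + rng.normal(0, 0.3, 30)         # confidence driven by engagement
PS = 10 * C + rng.normal(0, 2.0, 30)         # performance driven by confidence

m1 = sm.OLS(C, sm.add_constant(E)).fit()                          # E -> C
m2 = sm.OLS(PS, sm.add_constant(np.column_stack([E, C]))).fit()   # E, C -> PS
print(m1.params)   # positive E coefficient
print(m2.params)   # C dominates PS once it is included alongside E
```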

Keywords: design education, graphic communication, online learning, visual literacy

Procedia PDF Downloads 90
167 Assessment and Optimisation of Building Services Electrical Loads for Off-Grid or Hybrid Operation

Authors: Desmond Young

Abstract:

In building services electrical design, a key element of any project is assessing the electrical load requirements. This needs to be done early in the design process to allow selection of the infrastructure required to meet the electrical needs of the type of building. The building type defines the type of assessment made and the values applied in defining the maximum demand, and ultimately the size of supply or infrastructure required and the application that must be made to the distribution network operator or, alternatively, to an independent network operator. Because this assessment must be undertaken early in the design process, the types of assessment that can be used are limited: different methods require different information, and some of that information is not available until the later stages of a project. A common method applied in the earlier design stages, typically stages 1, 2, and 3, is the use of benchmarks. It is possible that some of the benchmarks applied are excessive in relation to the loads that exist in a modern installation; this inaccuracy stems from information that does not correspond to the actual equipment loads in use, including lighting and small power loads, where more efficient equipment and lighting have reduced the maximum demand required. The electrical load also feeds into the assessment of heat generated by the equipment, which, together with heat gains from other sources, determines the sizing of the infrastructure required to cool the building; any overestimation of the loads therefore inflates the design load for the heating and ventilation systems. Finally, with new policies driving the industry to decarbonise buildings, a prime example being the recently introduced London Plan, loads are potentially going to increase. With the advent of the pandemic, changes to working practices, and the adoption of electric heating and vehicles, a better understanding of the loads that should be applied will help ensure that infrastructure is neither oversized, at a cost to the client, nor undersized, to the detriment of the building. More accurate benchmarks and methods will also allow assessments to be made for the incorporation of energy storage and renewable technologies as these become more common in new and refurbished buildings.
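
A minimal sketch of the benchmark method discussed above (all figures are assumed, for illustration only): floor area times a W/m² allowance per load category, summed and diversified to a maximum demand estimate.

```python
# Early-stage benchmark load assessment with an after-diversity factor.
gfa_m2 = 4000                        # hypothetical gross floor area
benchmarks_w_per_m2 = {"lighting": 8, "small power": 20, "mechanical": 25}
diversity = 0.8                      # assumed after-diversity factor

connected_kw = sum(benchmarks_w_per_m2.values()) * gfa_m2 / 1000
max_demand_kw = connected_kw * diversity
print(f"Connected load: {connected_kw:.0f} kW, "
      f"estimated maximum demand: {max_demand_kw:.0f} kW")
# Overstated W/m2 benchmarks inflate both figures, oversizing the supply
# and the cooling plant sized from the same load assumptions.
```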

Keywords: energy, ADMD, electrical load assessment, energy benchmarks

Procedia PDF Downloads 80
166 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy in healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a pivotal role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, in turn, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, eradicate cancer. Radiotherapy is also often used in combination with surgery and chemotherapy, providing a multifaceted approach to combating cancer comprehensively. Continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. Radiotherapy, meanwhile, finds applications in non-cancerous conditions such as benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 38
165 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)

Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg

Abstract:

One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride cause dental fluorosis and crippling skeletal fluorosis. In poor urban and rural settings, providing drinking water free of geogenic contamination can be a major challenge. In order to apply limited resources efficiently in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements on predictor variables comprising various soil, geological, and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L and 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of the predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables they wish to incorporate in their models. In addition, users can upload further predictor variable datasets, either as features or as coverages. Such models can improve on the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations, and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data securely, as well as the possibility of collaborating through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
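
The hazard maps rest on logistic regression of exceedance observations against environmental predictors; the sketch below illustrates that idea on synthetic data. The predictors, coefficients, and dataset are assumptions for illustration, not the GAP datasets or the published model.

```python
# Logistic-regression hazard model sketch: y = 1 where a well exceeds
# the WHO arsenic limit (10 ug/L), predicted from environmental covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(size=n),    # stand-in for a soil parameter
    rng.normal(size=n),    # stand-in for a climate parameter
    rng.normal(size=n),    # stand-in for a geological parameter
])
# Synthetic ground truth with assumed coefficients:
y = ((X @ np.array([1.2, -0.8, 0.5]) + rng.normal(size=n)) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# Probability that a grid cell exceeds the WHO limit:
print(model.predict_proba(X[:5])[:, 1].round(2))
```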

Keywords: arsenic, fluoride, groundwater contamination, logistic regression

Procedia PDF Downloads 316
164 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh

Authors: Zahid Khalil, Saad Ul Haque, Asif Khan

Abstract:

Decision making about identifying suitable sites for any project requires considering many parameters and is therefore difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it much easier for such projects, and this technology has proved an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The parameters derived are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN) and runoff index, at a spatial resolution of 30 m. The data used for deriving the above layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP), and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, the drainage network, and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate surface runoff from rainfall; prior to this, an SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, together with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is adopted as the MCA technique for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in the GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate, and less suitable. This study demonstrates a contribution to decision making about site suitability analysis for small dams using geospatial data with a minimal amount of ground data. These suitability maps can be helpful for water resource management organizations in determining feasible rainwater harvesting (RWH) structures.
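
One concrete step in this workflow is the SCS curve-number runoff estimate; the sketch below implements the standard SCS-CN relation in metric units. The rainfall depth and CN values are illustrative assumptions, not the study’s data.

```python
# Standard SCS-CN runoff relation (metric): S = 25400/CN - 254,
# Ia = 0.2*S, and Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0.
import numpy as np

def scs_runoff_mm(p_mm: np.ndarray, cn: np.ndarray) -> np.ndarray:
    """Direct runoff depth Q (mm) from rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (mm)
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)

# A 60 mm storm over cells with curve numbers 70, 85, and 95:
print(scs_runoff_mm(np.full(3, 60.0), np.array([70.0, 85.0, 95.0])))
```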

Keywords: remote sensing, GIS, AHP, RWH

Procedia PDF Downloads 362
163 Climate Species Lists: A Combination of Methods for Urban Areas

Authors: Andrea Gion Saluz, Tal Hertig, Axel Heinrich, Stefan Stevanovic

Abstract:

Higher temperatures, seasonal changes in precipitation, and extreme weather events are increasingly affecting trees. To counteract the growing challenges faced by urban trees, strategies are being sought to preserve existing tree populations on the one hand and to prepare for the coming decades on the other. One such strategy lies in climate-informed tree species selection: the search is on for species or varieties that can cope with the new climatic conditions. Many efforts in German-speaking countries deal with this in detail, such as the tree lists of the German Conference of Garden Authorities (GALK), the project Stadtgrün 2021, or the Climate Species Matrix by Prof. Dr. Roloff. In this context, different methods for a sound species selection are available. One possibility is to select certain physiological attributes that indicate the climate resilience of a species. To quantify how dissimilar the present climate of different geographic regions is from the future climate of a given city, a weighted (standardized) Euclidean distance (SED) over seasonal climate values is calculated for each region of the Earth. The calculation was performed in the QGIS geographic information system, using global raster datasets of monthly climate values for the 1981-2010 standard period. Data from a European forest inventory were used to identify tree species growing in the calculated analogue climate regions. The inventory used is a compilation of georeferenced point data at a 1 km grid resolution on the occurrence of tree species in 21 European countries. In this project, the results of the methodological application are shown for the city of Zurich for the year 2060. In the first step, analogue climate regions were identified based on projected climate values for the Kirche Fluntern (ZH) measuring station. In a further step, the methods mentioned above were applied to generate tree species lists for the city of Zurich. These lists were then qualitatively evaluated with respect to the suitability of the different tree species for the Zurich area, to produce a cleaned and thus usable list of possible future tree species.
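
The analogue-region search rests on the standardized Euclidean distance; the sketch below shows one plausible form of that calculation, with each seasonal climate variable scaled before the distance is taken. The toy grid, the projected values for Zurich, and the scaling factors are assumptions for illustration.

```python
# Standardized Euclidean distance between a city's projected seasonal
# climate and each grid cell's present climate; the smallest distance
# marks the best analogue region.
import numpy as np

def sed(target: np.ndarray, grid: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Weighted Euclidean distance; grid has shape (cells, variables) and
    each variable is scaled (e.g. by its interannual standard deviation)."""
    return np.sqrt((((grid - target) / scale) ** 2).sum(axis=1))

grid = np.array([[2.0, 10.0, 19.0, 11.0],   # present seasonal means, 3 cells
                 [4.0, 12.0, 21.0, 13.0],
                 [8.0, 16.0, 26.0, 17.0]])
zurich_2060 = np.array([4.5, 12.5, 21.5, 13.0])  # assumed projection
scale = np.array([1.0, 1.0, 1.2, 1.0])           # assumed scaling factors
print(sed(zurich_2060, grid, scale))  # the second cell is the best analogue
```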

Keywords: climate change, climate region, climate tree, urban tree

Procedia PDF Downloads 76
162 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective

Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou

Abstract:

The analysis of geographic inequality relies heavily on location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and to link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status of the selected level of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analysis, a phenomenon well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that seek homogeneity in the numbers of persons and households. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of the outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination areas (TGSC) show greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality, and these can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of statistical data referenced to the TGSC in this research is strongly supported by Geographic Information System (GIS) technology. An integrated workflow was developed that consists of processing death certificates, geocoding street addresses, quality assurance of the geocoded results, automatic calculation of statistical measures, standardized encoding of the measures, and geo-visualization of the statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of the mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present analyzed outcomes in meaningful ways that help avoid poor decision making.
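
The ASDR figures quoted above come from direct age standardization, sketched below on synthetic numbers: age-specific death rates are weighted by a standard population and expressed per 100,000 persons. The age groups, counts, and weights are illustrative, not the Taitung County data.

```python
# Direct age standardization of a death rate (ASDR), per 100,000 persons.
import numpy as np

deaths     = np.array([2, 5, 30, 120])           # deaths per age group
population = np.array([8000, 6000, 5000, 2000])  # person-years at risk
std_weight = np.array([0.30, 0.30, 0.25, 0.15])  # standard population shares

# Weighted sum of age-specific rates, scaled to per-100,000:
asdr = ((deaths / population) * std_weight).sum() * 1e5
print(f"ASDR = {asdr:.0f} per 100,000")
```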

Keywords: mortality map, spatial patterns, statistical area, variation

Procedia PDF Downloads 227
161 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) in the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic location parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (inclination, aspect, elevation, and curvature), and land use/land cover. The triggering factors included rainfall as the climatic factor, together with the destabilizing effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published landslide study was found for this area. Digital datasets of the above spatial parameters were therefore acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in the ArcGIS 10.2.2 environment). Landslide hazard zonation was derived by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the catchment’s total surface of 3200 km², most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazards. Only 1.0% (32 km²) of the catchment displays very high landslide hazards, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover, and slope orientation (aspect) as the major determining factors of slope failures. The information provided by the produced LHZ map could lay the basis for decision making as well as for mitigation measures aimed at avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
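
The weighted grid overlay at the core of the model can be sketched as follows: reclassified factor rasters are combined with expert weights into a hazard index and cut into the four zones used in the paper. The weights, toy rasters, and class breaks are illustrative assumptions, not the values used in the study.

```python
# Weighted grid overlay sketch: factor rasters reclassified to 1 (low)
# through 4 (high susceptibility), combined with weights summing to 1.
import numpy as np

weights = {"slope": 0.35, "lithology": 0.25, "land_cover": 0.15,
           "rainfall": 0.15, "road_proximity": 0.10}

rng = np.random.default_rng(1)
rasters = {name: rng.integers(1, 5, size=(4, 4)) for name in weights}

# Hazard index in [1, 4], then classified into four zones:
lhz_index = sum(w * rasters[name] for name, w in weights.items())
zones = np.digitize(lhz_index, bins=[1.75, 2.5, 3.25])
print(zones)  # 0 = low, 1 = moderate, 2 = high, 3 = very high
```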

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 176
160 The Effectiveness of Prefabricated Vertical Drains for Accelerating Consolidation of Tunis Soft Soil

Authors: Marwa Ben Khalifa, Zeineb Ben Salem, Wissem Frikha

Abstract:

The purpose of the present work is to study the consolidation behavior of highly compressible Tunis soft soil (TSS) improved by means of prefabricated vertical drains (PVDs) associated with preloading, based on laboratory and field investigations. First, the field performance of PVDs in the Tunis soft soil layer was analysed based on the case study of the construction of the embankments of the Radès la Goulette bridge project. Geosynthetic PVDs were installed in a triangular grid pattern to a depth of 10 m, combined with a step-by-step surcharge. Soil settlement during the preloading stage was monitored by instrumentation comprising various types of settlement gauges (tassometers) installed in the soil, and the distribution of pore water pressure was monitored through piezocone penetration. Second, reduced-scale laboratory tests were performed on TSS, likewise subjected to preloading and improved with Mebradrain 88 (Mb88) PVDs. A specific test apparatus was designed and manufactured to study the consolidation. Two series of consolidation tests were performed on TSS specimens: the first series included consolidation tests for soil improved by one central drain, while in the second series a triangular mesh of three geodrains was used. The evolution of the degree of consolidation and the measured settlements versus time derived from the laboratory tests and field data are presented and discussed. The obtained results show that PVDs considerably accelerated the consolidation of Tunis soft soil by shortening the drainage path. The model with the mesh of three drains gave results closer to the field data, whereas a longer consolidation time was observed for the cell improved by a single central drain. A comparison with theoretical analyses, essentially those of Barron (1948) and Carrillo (1942), is presented; these theories were found to overestimate the degree of consolidation in the presence of PVDs.
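
The theoretical comparison rests on Barron’s (1948) equal-strain solution for radial consolidation and Carrillo’s (1942) combination of radial and vertical consolidation; the sketch below implements both relations. The time factor, drain spacing ratio, and vertical degree of consolidation are assumed values for illustration.

```python
# Barron (1948) radial consolidation and Carrillo (1942) combination.
import numpy as np

def barron_ur(th: float, n: float) -> float:
    """Average radial degree of consolidation Ur = 1 - exp(-8*Th/F(n)),
    where n = de/dw (drain influence diameter / drain diameter)."""
    fn = n**2 / (n**2 - 1) * np.log(n) - (3 * n**2 - 1) / (4 * n**2)
    return 1.0 - np.exp(-8.0 * th / fn)

def carrillo(uv: float, ur: float) -> float:
    """Combined degree of consolidation: 1 - U = (1 - Uv)(1 - Ur)."""
    return 1.0 - (1.0 - uv) * (1.0 - ur)

ur = barron_ur(th=0.3, n=20.0)  # Th = ch * t / de^2 (radial time factor)
uv = 0.25                       # vertical degree, e.g. from Terzaghi theory
print(f"Ur = {ur:.2f}, combined U = {carrillo(uv, ur):.2f}")
```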

Keywords: Tunis soft soil, prefabricated vertical drains, acceleration of consolidation, dissipation of excess pore water pressures, Radès bridge project, Barron and Carrillo’s theories

Procedia PDF Downloads 99
159 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels

Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand

Abstract:

The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One such challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause “flooding” (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The transparent experimental fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid bridges/plugs (concave and convex forms), slug/plug flow, and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g., slug, droplet, plug, or film) of detected liquid water in the test microchannels and to yield information on the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. This software gives the user a more precise and systematic way to obtain measurements from images of small objects. The void fractions are also determined from image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used to optimize water management and to inform design guidelines for gas delivery microchannels, both of which are essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
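
The study’s image-processing algorithm was written in MATLAB; the Python sketch below illustrates only the underlying thresholding idea for the void-fraction estimate, applied to a synthetic grayscale frame. The threshold value and the assumption that liquid water appears darker than the channel are illustrative, not the paper’s calibrated procedure.

```python
# Threshold-based void fraction from a grayscale channel image.
import numpy as np

def void_fraction(frame: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of the channel image occupied by gas, given a frame
    normalized to [0, 1] in which liquid water shows up dark."""
    liquid_mask = frame < threshold   # boolean liquid-water mask
    return 1.0 - liquid_mask.mean()

rng = np.random.default_rng(0)
frame = rng.random((480, 640))        # stand-in for one video frame
print(f"void fraction = {void_fraction(frame):.2f}")
```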

Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing

Procedia PDF Downloads 281