Search results for: noise mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2199

159 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis, since researchers face issues such as radioelement identification and quantification in environmental studies. Imaging gaseous ionization detectors find their place in geosciences for specific measurements of radioactivity, improving the monitoring of natural processes using naturally occurring radioactive tracers, and also serve the nuclear industry linked to the mining sector. In geological samples, locating and identifying the radioactive-bearing minerals at the thin-section scale remains a major challenge, because the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most natural radioactive decay products. The spatial distribution of each decay product of uranium in a geomaterial is of interest for relating radionuclide concentrations to the mineralogy. The present study provides a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed using Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy through a selection based on the linear energy distribution. This spectroscopic autoradiography method successfully reproduced the alpha spectra of the 238U decay chain on a geological sample at the thin-section scale, with an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even though the efficiency of energy spectrum reconstruction is low (4.4%) compared to that of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within those areas. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226, and will allow the study of the spatial distribution of uranium and its descendants in geomaterials by coupling with scanning electron microscope characterizations. The direct application of this dual (energy-position) analysis modality will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
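
For a Gaussian photopeak, the reported relative resolution relates to the fitted peak width through FWHM = 2*sqrt(2 ln 2) * sigma. A minimal sketch of that conversion (the sigma value here is back-computed for illustration, not a measured quantity):

```python
import numpy as np

def energy_resolution_fwhm(sigma_kev, peak_kev):
    """Relative energy resolution (FWHM / peak energy) in percent.

    For a Gaussian peak, FWHM = 2*sqrt(2*ln 2) * sigma ~= 2.355 * sigma.
    """
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_kev
    return 100.0 * fwhm / peak_kev

# A 17.2% resolution at 4647 keV corresponds to a Gaussian sigma of ~339 keV.
sigma = 0.172 * 4647.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
print(round(energy_resolution_fwhm(sigma, 4647.0), 1))  # 17.2
```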

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 127
158 The Charge Exchange and Mixture Formation Model in the ASz-62IR Radial Aircraft Engine

Authors: Pawel Magryta, Tytus Tulwin, Paweł Karpiński

Abstract:

The ASz-62IR engine is a radial aircraft engine with 9 cylinders, produced by the Polish company WSK "PZL-KALISZ" S.A. and currently being developed by that company together with the Lublin University of Technology. To support the technological development of this unit effectively, a simulation model was created. The model of the ASz-62IR was developed with the AVL BOOST software, a tool dedicated to one-dimensional modeling of internal combustion engines. The model can be used to calculate the parameters of air and fuel flow in the intake system, including charging devices, as well as combustion and exhaust flow to the environment. Its main purpose is the analysis of charge exchange and mixture formation in this engine. For this purpose, the model consists of elements such as: the air inlet, throttle system, compressor connector, charging compressor, inlet pipes and injectors, outlet pipes, fuel injection, and a model of fuel mixing and evaporation. The model of charge exchange and mixture formation was based on the mass flow rate in the intake and exhaust pipes, and also on the calculation of gas property values such as the gas constant or thermal capacity, using the equations describing isentropic flow. The energy equation describing flow under steady conditions was transformed into the mass flow equation. The model uses the flow coefficient μσ, which varies with the stroke/valve opening and was determined in a steady flow state. The geometry of the inlet channels and other key components was mapped with reference to the technical documentation of the engine and empirical measurements of the structural elements. The volume of the elements on the charge flow path between the air inlet and the exhaust outlet was measured by CAD mapping of the structure. The original characteristics of the engine's compressor, taken from the technical documentation, were entered into the model. 
Additionally, the model uses a general model for the transport of the chemical compounds of the mixture. Seven compounds are used: fuel, O2, N2, CO2, H2O, CO, and H2. A gasoline fuel with a calorific value of 43.5 MJ/kg and a stoichiometric air-fuel ratio of 14.5 was used. Indirect injection into the intake manifold is used in this model. The model assumes the following simplifications: the mixture is homogeneous at the beginning of combustion, and accordingly the mixture stoichiometric coefficient A/F remains constant during combustion; combusted and non-combusted charges show identical pressures and temperatures although their compositions change. As a result of the simulation studies based on the model described above, the basic parameters of the combustion process, charge exchange, and mixture formation in the cylinders were obtained. The AVL BOOST software is very useful for piston engine performance simulations. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
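
The isentropic flow relations mentioned above reduce, for flow through a valve gap with flow coefficient μσ, to a standard mass flow equation. A minimal sketch in SI units (the effective area and pressures below are illustrative values, not engine data):

```python
import math

def isentropic_mass_flow(mu_sigma, area_m2, p01, t01, p2, kappa=1.4, r_gas=287.0):
    """Mass flow rate [kg/s] through a valve gap from the isentropic flow equations.

    mu_sigma : dimensionless flow coefficient (determined in a steady flow state)
    p01, t01 : upstream stagnation pressure [Pa] and temperature [K]
    p2       : downstream static pressure [Pa]
    """
    pr = p2 / p01
    pr_crit = (2.0 / (kappa + 1.0)) ** (kappa / (kappa - 1.0))
    pr = max(pr, pr_crit)  # flow chokes below the critical pressure ratio
    # Outflow function psi from the isentropic expansion
    psi = math.sqrt(kappa / (kappa - 1.0)
                    * (pr ** (2.0 / kappa) - pr ** ((kappa + 1.0) / kappa)))
    return mu_sigma * area_m2 * p01 * math.sqrt(2.0 / (r_gas * t01)) * psi

# Example: 30 mm^2 effective gap, 1 bar upstream, 0.9 bar downstream, 300 K
print(isentropic_mass_flow(0.7, 30e-6, 1.0e5, 300.0, 0.9e5))
```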

Keywords: aviation propulsion, AVL Boost, engine model, charge exchange, mixture formation

Procedia PDF Downloads 295
157 Mapping the Suitable Sites for Food Grain Crops Using Geographical Information System (GIS) and Analytical Hierarchy Process (AHP)

Authors: Md. Monjurul Islam, Tofael Ahamed, Ryozo Noguchi

Abstract:

Progress continues in the fight against hunger, yet an unacceptably large number of people still lack the food they need for an active and healthy life. Bangladesh is one of the rising countries of South Asia, but many of its people remain food insecure. In the last few years, Bangladesh has made significant achievements in food grain production, but food security at the national to individual level remains a matter of major concern. Ensuring food security for all is one of the major challenges Bangladesh faces today, especially the production of rice in the flood- and poverty-prone areas; the northern part of the country is more vulnerable than any other. One of the best ways to ensure food security is to increase domestic production, and to increase production it is necessary to secure lands that allow optimum utilization of resources. One measure is to identify vulnerable and potential areas using Land Suitability Assessment (LSA) to increase rice production in the poverty-prone areas. Therefore, the aim of the study was to identify suitable sites for production of the food grain crop rice in the poverty-prone areas located in the northern part of Bangladesh, where a lack of knowledge about the best combination of factors that suit rice production has contributed to low yields. To fulfill this objective, a multi-criteria analysis was conducted and a suitability map for crop production was produced with the help of a Geographical Information System (GIS) and the Analytical Hierarchy Process (AHP). Primary and secondary data were collected from ground truth information and relevant offices. The suitability levels for each factor were ranked based on the structure of the FAO land suitability classification: Permanently Not Suitable (N2), Currently Not Suitable (N1), Marginally Suitable (S3), Moderately Suitable (S2), and Highly Suitable (S1). 
The suitable sites were identified using spatial analysis and compared with recent raster imagery from Google Earth Pro® to validate the reliability of the suitability analysis. AHP was used to rank the relevant factors, and the resultant weights were used to create the suitability map with the weighted sum overlay tool in ArcGIS 10.3®. The weighted overlay showed that 22.74% (1337.02 km2) of the study area was highly suitable, 28.54% (1678.04 km2) moderately suitable, 14.86% (873.71 km2) marginally suitable, and 1.19% (69.97 km2) currently not suitable for rice farming. The remaining 32.67% (1920.87 km2) was permanently not suitable, being occupied by settlements, rivers, water bodies, and forests. This research provides information at the local level that could be used by farmers to select suitable fields for rice production, and the approach can then be applied to other crops. It will also be helpful for field workers and policy planners who serve the agricultural sector.
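
The AHP step described above derives factor weights from a pairwise comparison matrix and checks their consistency before the weighted overlay. A minimal sketch (the factor set, judgements, and toy rasters are illustrative, not taken from the study):

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty's 1-9 scale) for four
# suitability factors, e.g. soil, rainfall, land cover, distance to water.
A = np.array([[1.0,   3.0,   5.0,   7.0],
              [1/3.0, 1.0,   3.0,   5.0],
              [1/5.0, 1/3.0, 1.0,   3.0],
              [1/7.0, 1/5.0, 1/3.0, 1.0]])

# AHP priority weights: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: CR < 0.1 means the judgements are acceptably consistent.
# RI = 0.90 is Saaty's random index for a 4 x 4 matrix.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.90

# Weighted-sum overlay of reclassified factor rasters (tiny toy grids with
# suitability scores 1-4), mirroring the ArcGIS weighted sum tool.
rasters = [np.array([[3, 2], [1, 4]], dtype=float) for _ in range(n)]
suitability = sum(wi * r for wi, r in zip(w, rasters))
print(w.round(3), round(cr, 3))
```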

Keywords: AHP, GIS, spatial analysis, land suitability

Procedia PDF Downloads 210
156 Social Value of Travel Time Savings in Sub-Saharan Africa

Authors: Richard Sogah

Abstract:

The significance of transport infrastructure investments for economic growth and development has been central to the World Bank’s strategy for poverty reduction. Among the conventional surface transport infrastructures, road infrastructure is significant in facilitating the movement of human capital, goods, and services. When transport projects (i.e., roads, super-highways) are implemented, they bring some negative social values (costs), such as increased noise and air pollution for residents living near these facilities, displaced individuals, etc. However, these projects also facilitate better utilization of the existing capital stock and generate other observable benefits that can be easily quantified: the improvement or construction of roads creates employment, stimulates revenue generation (tolls), reduces vehicle operating costs and accidents, and increases accessibility, trade expansion, and safety. Aside from these benefits, travel time savings (TTS), which are the major economic benefit of urban and inter-urban transport projects and therefore integral to their economic assessment, are often overlooked and omitted when estimating the benefits of transport projects, especially in developing countries. The absence of current and reliable domestic travel data, and the inability of models replicated from the developed world to capture the actual value of travel time savings in the presence of large-scale unemployment, underemployment, and other labor-induced distortions, have contributed to the failure to assign a value to travel time savings when estimating the benefits of transport schemes in developing countries. 
This omission of the value of travel time savings makes it difficult for investors and stakeholders to accept or dismiss projects, and biases appraisal toward schemes that favor reduced vehicle operating costs and other parameters rather than those that ease congestion, increase average speed, facilitate walking and handloading, and thus save travel time. Given the complex reality of estimating the value of travel time savings and the presence of widespread informal labour activities in Sub-Saharan Africa, we construct a “nationally ranked distribution of time values” and estimate the value of travel time savings from the area beneath the distribution. Compared with other approaches, our method captures both formal sector workers and people who work outside the formal sector, for whom changes in time allocation occur in the informal economy and household production activities. The dataset for the estimations is sourced from the World Bank, the International Labour Organization, and other sources.
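
A minimal sketch of the proposed estimator, under the assumption that the ranked distribution is built from individual hourly time values pooled from formal wages and imputed informal values (all numbers below are illustrative):

```python
import numpy as np

# Hypothetical hourly time values (USD/h) pooled from formal wages and
# imputed values for informal and household production activities.
time_values = np.array([0.4, 0.6, 0.9, 1.2, 1.5, 2.0, 2.8, 4.0, 6.5, 11.0])

# "Nationally ranked distribution": sort values from lowest to highest and
# treat each person as an equal slice of the population on [0, 1].
ranked = np.sort(time_values)
population_share = np.linspace(0.0, 1.0, len(ranked))

# The area beneath the ranked distribution (trapezoidal rule) gives a
# population-average value of one hour of travel time saved.
avg_value_per_hour = ((ranked[:-1] + ranked[1:]) / 2.0
                      * np.diff(population_share)).sum()

print(round(avg_value_per_hour, 2))
```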

Keywords: road infrastructure, transport projects, travel time savings, congestion, Sub-Sahara Africa

Procedia PDF Downloads 82
155 The Brain’s Attenuation Coefficient as a Potential Estimator of Temperature Elevation during Intracranial High Intensity Focused Ultrasound Procedures

Authors: Daniel Dahis, Haim Azhari

Abstract:

Noninvasive image-guided intracranial treatments using high intensity focused ultrasound (HIFU) are on course for translation into clinical applications. They include, among others, tumor ablation, hyperthermia, and blood-brain-barrier (BBB) penetration. Since many of these procedures are associated with local temperature elevation, thermal monitoring is essential. MRI constitutes an imaging method with high spatial resolution and thermal mapping capacity, and it is currently the leading modality for temperature guidance, commonly under the name MRgHIFU (magnetic-resonance guided HIFU). Nevertheless, MRI is a very expensive, non-portable modality, which limits its accessibility. Ultrasonic thermal monitoring, on the other hand, could provide a modular, cost-effective alternative with higher temporal resolution and accessibility. In order to assess the feasibility of ultrasonic brain thermal monitoring, this study investigated the use of temporal changes in the brain tissue attenuation coefficient (AC) as potential estimators of thermal changes. Newton's law of cooling describes a temporal exponential decay for the temperature of a heated object immersed in relatively cold surroundings. Similarly, in cerebral HIFU treatments, the temperature in the region of interest, i.e., the focal zone, is suggested to follow the same law. It was therefore hypothesized that the AC of the irradiated tissue may follow the same temporal exponential behavior during the cool-down regime. Three ex-vivo bovine brain tissue specimens were inserted into plastic containers along with four thermocouple probes per sample. The containers were placed inside a specially built ultrasonic tomograph and scanned at room temperature. The corresponding pixel-averaged AC was acquired for each specimen and used as a reference. Subsequently, the containers were placed in a beaker of hot water and gradually heated to about 45°C. 
They were then repeatedly rescanned during cool-down, using an ultrasonic through-transmission raster trajectory, until reaching about 30°C. From the obtained images, the normalized AC and its temporal derivative were registered as functions of temperature and time. The results demonstrated a high correlation (R² > 0.92) of both the brain AC and its temporal derivative with temperature. This supports the hypothesis and indicates the possibility of estimating brain tissue temperature from the temporal thermal changes of the AC. It is important to note that each brain yielded different AC values and slopes, which implies that a calibration step is required for each specimen. Thus, for practical acoustic monitoring of the brain, two steps are suggested: the first consists of simply measuring the AC at normal body temperature; the second entails measuring the AC after a small temperature elevation. In the face of the urgent need for a more accessible thermal monitoring technique for brain treatments, the proposed methodology enables cost-effective, high temporal resolution acoustic temperature estimation during HIFU treatments.
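
The exponential cool-down model can be fitted to measured data by linearising Newton's law of cooling and applying least squares. A minimal sketch with synthetic, noise-free readings (the decay constant and temperatures are illustrative, not the study's measurements):

```python
import numpy as np

# Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k * t).
# Hypothetical cool-down readings of a heated specimen (minutes, degrees C).
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
t_env = 24.0
temp = t_env + (45.0 - t_env) * np.exp(-0.08 * t)  # synthetic, noise-free

# Linearise: log(T - T_env) = log(T0 - T_env) - k * t, then fit a line.
y = np.log(temp - t_env)
slope, intercept = np.polyfit(t, y, 1)
k_est = -slope                      # estimated decay constant [1/min]
t0_est = t_env + np.exp(intercept)  # estimated initial temperature [C]

print(round(k_est, 3), round(t0_est, 1))
```

The same log-linear fit applies if the attenuation coefficient, rather than temperature, is the quantity tracked during cool-down.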

Keywords: attenuation coefficient, brain, HIFU, image-guidance, temperature

Procedia PDF Downloads 140
154 Electro-Hydrodynamic Effects Due to Plasma Bullet Propagation

Authors: Panagiotis Svarnas, Polykarpos Papadopoulos

Abstract:

Atmospheric-pressure cold plasmas continue to gain interest for various applications due to their unique properties, such as cost-efficient production, high chemical reactivity, low gas temperature, and adaptability. Numerous designs have been proposed for the production of these plasmas in terms of electrode configuration, driving voltage waveform, and working gas(es). However, in order to exploit most of the advantages of these systems, the majority of designs are based on dielectric-barrier discharges (DBDs), in either the filamentary or the glow regime. A special category of DBD-based atmospheric-pressure cold plasmas is the so-called plasma jet, where a carrier noble gas is guided by the dielectric barrier (usually a hollow cylinder) and left to flow into the atmospheric air, where a complicated hydrodynamic interplay takes place. Although it is now well established that these plasmas are generated by ionizing waves resembling streamer propagation in many ways, they exhibit distinct characteristics that are better captured by the terms 'guided streamers' or 'plasma bullets'. These 'bullets' travel with supersonic velocities both inside the dielectric barrier and along the channel formed by the noble gas during its penetration into the air. The present work is devoted to the interpretation of the electro-hydrodynamic effects that take place downstream of the dielectric barrier opening, i.e., in the noble gas-air mixing area, where plasma bullets propagate under the influence of local electric fields in regions of variable noble gas concentration. Herein, we focus on the role of the local space charge, and of the residual ionic charge left behind after bullet propagation, in modifying the gas flow field. The study communicates both experimental and numerical results, coupled in a comprehensive manner. 
The plasma bullets are produced here by a custom device having a quartz tube as the dielectric barrier and two external ring-type electrodes driven by a sinusoidal high voltage at 10 kHz. Helium is fed to the tube, and schlieren photography is employed to map the flow field downstream of the tube orifice. The mixture mass conservation equation, momentum conservation equation, energy conservation equation in terms of temperature, and helium transfer equation are solved simultaneously, revealing the physical mechanisms that govern the experimental results. Namely, we deal with electro-hydrodynamic effects caused mainly by momentum transfer from atomic ions to neutrals. The atomic ions are left behind as residual charge after bullet propagation and gain energy from the locally created electric field. The electro-hydrodynamic force is eventually evaluated.
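
The momentum source described in the abstract, momentum transfer from residual atomic ions accelerated by the local electric field, is commonly modelled as a Coulomb body force added to the momentum equation. A minimal sketch of this coupling (generic symbols, not the paper's notation):

```latex
% EHD body force: Coulomb force on the residual ion space charge
\mathbf{f}_{\mathrm{EHD}} = \rho_q \,\mathbf{E}

% Momentum conservation for the He-air mixture with the EHD source term
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
          + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
  = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho_q \,\mathbf{E}
```

Here \(\rho_q\) is the residual ionic space-charge density, \(\mathbf{E}\) the local electric field, and \(\boldsymbol{\tau}\) the viscous stress tensor.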

Keywords: atmospheric-pressure plasmas, dielectric-barrier discharges, schlieren photography, electro-hydrodynamic force

Procedia PDF Downloads 123
153 Benefits of Shaping a Balance on Environmental and Economic Sustainability for Population Health

Authors: Edna Negron-Martinez

Abstract:

Our time's global challenges and trends, like those associated with climate change, demographic displacement, growing health inequalities, and the increasing burden of disease, have complex connections to the determinants of health. Information on the causes and prevention of the burden of disease is fundamental for public health actions, such as preparedness and responses to disasters and recovery resources after the event. For instance, there is increasing consensus about key findings on the effects and connections of the global burden of disease: it generates substantial healthcare costs, consumes essential resources, and prevents the attainment of optimal health and well-being. The goal of this research endeavor is to promote a comprehensive understanding of the connections between social, environmental, and economic influences on health. These connections are illustrated by drawing clearly from the core curricula of multidisciplinary areas, such as urban design, energy, housing, and economics, as well as from the health system itself. A systematic review of primary and secondary data covered a variety of issues, such as global health, natural disasters, and critical impacts of pollution on people's health and ecosystems. Environmental health is challenged by unsustainable consumption patterns and the resulting contaminants that abound in many cities and urban settings around the world. Poverty, inadequate housing, and poor health are usually linked. The house is a primary environmental health context for any individual, and especially for more vulnerable groups such as children, older adults, and those who are sick. Nevertheless, very few countries show strong decoupling of environmental degradation from economic growth, as indicated by a recent 2017 report of the World Bank. 
Worth noting, a 2016 World Health Organization (WHO) report estimated that 12.6 million global deaths, accounting for 23% (95% CI: 13-34%) of all deaths, were attributable to the environment. Environmental contaminants include heavy metals, noise pollution, light pollution, and urban sprawl. These key findings underscore the urgency of adopting, on a global scale, the United Nations post-2015 Sustainable Development Goals (SDGs). The SDGs address the social, environmental, and economic factors that influence health and health inequalities, advising how these sectors, in turn, benefit from a healthy population. Consequently, more action is necessary, from an inter-sectoral and systemic paradigm, to enforce an integrated sustainability policy implementation aimed at the environmental, social, and economic determinants of health.

Keywords: building capacity for workforce development, ecological and environmental health effects of pollution, public health education, sustainability

Procedia PDF Downloads 86
152 Stereological and Morphometric Evaluation of Wound Healing Burns Treated with Ulmo Honey (Eucryphia cordifolia) Unsupplemented and Supplemented with Ascorbic Acid in Guinea Pig (Cavia porcellus)

Authors: Carolina Schencke, Cristian Sandoval, Belgica Vasquez, Mariano Del Sol

Abstract:

Introduction: In a burn injury, successful repair requires not only the participation of various cells, such as granulocytes and fibroblasts, but also of collagen, which plays a crucial role as a structural and regulatory molecule of scar tissue. Since honey and ascorbic acid have shown great therapeutic potential at the cellular and structural levels, experimental studies have proposed their combination in the treatment of wounds. Aim: To evaluate stereological and morphometric parameters of healing burn wounds treated with unsupplemented Ulmo honey (Eucryphia cordifolia), comparing its effect with Ulmo honey supplemented with ascorbic acid. Materials and Methods: Fifteen healthy adult guinea pigs (Cavia porcellus) of both sexes, average weight 450 g, from the Centro de Excelencia en Estudios Morfológicos y Quirúrgicos (CEMyQ) at the Universidad de La Frontera, Chile, were used. The animals were divided at random into three groups, positive control (C+), honey only (H), and supplemented honey (SH), and were fed pellets supplemented with ascorbic acid and water ad libitum, under ambient conditions controlled for temperature and ambient noise and a 12 h light-darkness cycle. The experimental protocol was approved by the Scientific Ethics Committee of the Universidad de La Frontera, Chile. The parameters measured were the number density per area (NA), volume density (VV), and surface density (SV) of fibroblasts; NA and VV of polymorphonuclear cells (PMN); and the content of collagen fibers in the scar dermis. One-way ANOVA with the respective post hoc tests was used for statistical analysis. Results: The ANOVA analysis for NA, VV, and SV of fibroblasts, NA and VV of PMN, and the content of collagen types I and III showed that at least one group differed from the others (P ≤ 0.001). 
There were differences (P = 0.000) in the NA of fibroblasts between the groups [C+ = 3599.560 mm-2 (SD = 764.461), H = 3355.336 mm-2 (SD = 699.443), and SH = 4253.025 mm-2 (SD = 1041.751)]. The VV and SV of fibroblasts increased (P = 0.000) in the SH group [20.400% (SD = 5.897) and 100.876 mm2/mm3 (SD = 29.431), respectively], compared to C+ [16.324% (SD = 7.719) and 81.676 mm2/mm3 (SD = 28.884), respectively]. The mean values of NA and VV of PMN were higher (P = 0.000) in the H group [756.875 mm-2 (SD = 516.489) and 2.686% (SD = 2.380), respectively]. Regarding the content of collagen fibers, types I and III, the one-way ANOVA showed a statistically significant difference (P < 0.05): the content of type I collagen fibers was higher in the C+ group (1988.292 μm2; SD = 1312.379), while the content of type III collagen fibers was higher in the SH group (1967.163 μm2; SD = 1047.944 μm2). Conclusions: The stereological results correlated with the stage of healing observed for each group. These results suggest that the combination of honey with ascorbic acid potentiates the healing effect, with both acting synergistically.
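
The one-way ANOVA used here tests whether at least one group mean differs from the others. A minimal numpy sketch of the F statistic, applied to hypothetical samples centred roughly on the reported group means (the individual values are invented for illustration):

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical fibroblast NA samples (mm^-2), one list per group.
c_pos = [3600, 3400, 3800, 3500, 3700]
honey = [3300, 3200, 3500, 3400, 3350]
suppl = [4300, 4100, 4400, 4200, 4250]
print(round(one_way_anova_f(c_pos, honey, suppl), 1))
```

The p-value then follows from the F distribution with (k - 1, n - k) degrees of freedom, e.g. via `scipy.stats.f.sf`.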

Keywords: ascorbic acid, morphometry, stereology, Ulmo honey

Procedia PDF Downloads 254
151 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.

Authors: Narjes Abbasabadi

Abstract:

As the world continues to urbanize and suburbanize, with suburbanization associated with mass sprawl being the dominant form of this expansion, sustainable development challenges become more pressing. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO2 emissions. This paper examines the vertical expansion of suburbs integrated with mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) in the suburban regions of Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objective of this research is to understand how the new morphology of suburban tall buildings, the physical dimensions of individual buildings, and their arrangement at a larger scale relate to energy efficiency. This study aims to answer these questions: 1) Why and how can vertical expansion or high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on: 1) energy, specifically heat energy demand (excluding cooling and lighting), related to design issues at two levels, the macro (urban) scale and the micro scale of individual buildings: physical dimension, height, morphology, the spatial pattern of tall buildings, and their relationship with each other and with the transport infrastructure; and 2) examining TB-TOD to provide more evidence of how the model works regarding ridership. 
The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern for tall buildings in suburban settings, and that among the TB-TOD typologies/sub-typologies, compact tall building blocks can be the most energy efficient. This model is associated with much lower energy demands in buildings at the neighborhood level as well as lower transport needs at the urban scale, while detached suburban high-rise or low-rise suburban housing has the lowest energy efficiency. The research methodology is based on a quantitative study applying the available literature and statistical data as well as mapping and visual documentation of urban regions from sources such as Google Earth, Microsoft Bing Bird's Eye View, and Street View. Each suburb within each city is examined through satellite imagery to identify the typologies/sub-typologies that are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.

Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)

Procedia PDF Downloads 231
150 Mapping Potential Soil Salinization Using Rule Based Object Oriented Image Analysis

Authors: Zermina Q., Wasif Y., Naeem S., Urooj S., Sajid R. A.

Abstract:

Land degradation, a leading environmental problem involving a decrease in the quality of land caused by human activities, has become a major global issue. More than half of the world’s drylands are affected by land degradation. The worldwide extent of primary saline soils is approximately 955 M ha, whereas secondary salinization affects approximately 77 M ha; 58% of these soils are found in irrigated areas. As most vegetation types require fertile soil for their growth and quality production, salinity causes serious problems for the production of these vegetation types and for agricultural demands. This research aims to identify the salt-affected areas in a selected part of the Indus Delta, Sindh province, Pakistan. This mangrove-dominated coastal belt is important to the local community for crop growth. An object-based image analysis approach was adopted on Landsat TM imagery of the year 2011, incorporating different mathematical band ratios, thermal radiance, and a salinity index. Accuracy assessment of the developed salinity land cover map was performed using the Erdas Imagine Accuracy Assessment Utility. The rain factor was also considered before acquiring satellite imagery and conducting the field survey, as wet soil can greatly affect the condition of the saline soil of the area; the dry season is considered best for remote sensing based observation and monitoring of saline soil. The areas were trained with ground truth data with respect to the pH and electrical conductivity of the soil samples. The results of the object-based image analysis of Keti Bunder and Kharo Chan show most of the region under low-saline soil. The total salt-affected soil in Keti Bunder was measured to be 46,581.7 ha, which represents 57.81% of its total area of 80,566.49 ha: high-saline area was about 7,944.68 ha (9.86%), medium-saline area about 17,937.26 ha (22.26%), and low-saline area about 20,699.77 ha (25.69%). 
In Kharo Chan, the total salt-affected soil was measured to be 52,821.87 ha, which represents 55.87% of its total area of 94,543.54 ha: high-saline area was about 5,486.55 ha (5.80%), medium-saline area about 13,354.72 ha (14.13%), and low-saline area about 33,980.61 ha (35.94%). These results show that the area is low to medium saline in nature. The accuracy of the soil salinity map was found to be 83%, with a Kappa coefficient of 0.77. From this research, it is evident that the area as a whole falls under the category of low to medium salinity and, being close to the coast, can support mangrove forest. As mangroves are salt-tolerant plants, this area is well suited for mangrove plantation, which would ultimately benefit both the local community and the environment. Expanding mangrove forest helps control the problem of soil salinity and prevents seawater from intruding further into the coastal area, so mangrove deforestation should be regularly monitored.
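
The reported Kappa coefficient can be computed from the accuracy-assessment confusion matrix as kappa = (p_o - p_e) / (1 - p_e). A minimal sketch with a hypothetical 100-point matrix for four salinity classes (the counts are invented for illustration, not the study's data):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa for a confusion matrix (rows: reference, columns:
    mapped class): chance-corrected agreement (p_o - p_e) / (1 - p_e)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_o = np.trace(confusion) / total                              # observed agreement
    p_e = (confusion.sum(0) * confusion.sum(1)).sum() / total**2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical 100-point accuracy assessment for four salinity classes
# (high, medium, low, not saline).
m = [[22, 2, 1, 0],
     [3, 20, 2, 1],
     [1, 3, 21, 2],
     [0, 1, 2, 19]]
print(round(cohens_kappa(m), 2))
```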

Keywords: indus delta, object based image analysis, soil salinity, thematic mapper

Procedia PDF Downloads 592
149 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life

Authors: Sandra Young

Abstract:

The biodiversity literature is vast and heterogeneous. In today’s data age, numerous data integration and standardisation initiatives aim to facilitate simultaneous access to the literature across biodiversity domains for research and forecasting purposes. Ontologies are increasingly used to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide labels for these physical objects, and these labels are ambiguous representations of the physical specimen. An example is the genus name Melpomene, which designates both a genus of ferns and a genus of spiders: the physical specimens are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to examine the conceptual plurality or singularity of the use of these species’ names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary-making. This makes it an ideal candidate for exploring this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts).
It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one way to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to test whether the hierarchical structures present in the empirical body of literature match those identified in the ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species’ names, common names and more general names as classes, which will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence on the validity of current methods in knowledge representation for biological entities, and to shed light on the way scientific nomenclature is used within the literature.
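As a minimal illustration of the collocation counting described above (the tokens and window size are invented for the example, not drawn from the study's corpus):

```python
from collections import Counter

def collocates(tokens, node, window=3):
    # Count words co-occurring with `node` within +/- `window` tokens:
    # the collocational profile lexicographers use to characterise
    # a word's senses in context.
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == node:
            lo = max(0, i - window)
            neighbours = tokens[lo:i] + tokens[i + 1:i + 1 + window]
            counts.update(t.lower() for t in neighbours)
    return counts
```

Running this over text mentioning Melpomene would surface both fern-related and spider-related collocates for the single name, which is exactly the kind of conceptual plurality the paper is probing.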

Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics

Procedia PDF Downloads 116
148 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR

Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman

Abstract:

Papaya (Carica papaya L., Caricaceae) is a tree cultivated mainly for its fruit in many tropical regions, including Australia, Brazil, China, Hawaii, and Malaysia. Besides the fruit, its leaves, seeds, and latex have also been used traditionally for treating diseases and are reported to possess anti-cancer and anti-malarial properties. The leaves contain various chemical compounds such as alkaloids, flavonoids and phenolics, with clitorin and manghaslin among the major flavonoids present. The aim of this study is to quantify the purity of these isolated compounds (clitorin and manghaslin) using quantitative Nuclear Magnetic Resonance (qNMR) analysis. Only fresh C. papaya leaves were used for the juice extraction procedure, and the juice was subsequently freeze-dried to obtain a dark green powdered extract prior to Centrifugal Partition Chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system comprising ethyl acetate/butanol/water (1:4:5, v/v/v). The upper organic phase was used as the stationary phase, and the lower aqueous phase was employed as the mobile phase. Ten fractions were obtained after an hour of runtime. Fraction 6 and fraction 8 were identified as clitorin (m/z 739.21 [M-H]-) and manghaslin (m/z 755.21 [M-H]-), respectively, based on LCMS data and full NMR analysis (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out on a 400 MHz NMR spectrometer (JEOL ECS 400 MHz, Japan) with deuterated methanol as solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-bis(trimethylsilyl)benzene (BTMSB) as the internal reference substance. The AQARI protocol covers not only NMR measurement but also sample preparation, providing higher precision and accuracy than other qNMR methods.
The 90° pulse length and the T1 relaxation times for the compounds and BTMSB were determined prior to quantification to give the best signal-to-noise ratio. Regions containing the two downfield signals from the aromatic part (6.00-6.89 ppm) and the singlet signal (18H) arising from BTMSB (0.63-1.05 ppm) were selected for integration. The purities of clitorin and manghaslin were calculated to be 52.22% and 43.36%, respectively; further purification is needed to increase them. This finding demonstrates the use of qNMR for the quality control and standardization of plant extracts, an approach that can be applied to NMR fingerprinting of other plant-based products with good reproducibility, particularly where commercial standards are not readily available.
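The internal-standard relation underlying qNMR purity determination can be written as a short function. This is the generic single-point relation on which AQARI-style quantification rests, with illustrative arguments; the abstract does not report the measured integrals or weighed masses:

```python
def qnmr_purity(I_a, I_std, N_a, N_std, M_a, M_std, m_a, m_std, P_std):
    # Purity (%) of the analyte from signal integrals (I), proton counts (N),
    # molar masses (M, g/mol), weighed masses (m, g) and the purity of the
    # internal standard (P_std, %): integral per proton is proportional to
    # the molar amount of each species in the tube.
    return (I_a / I_std) * (N_std / N_a) * (M_a / M_std) * (m_std / m_a) * P_std
```

With the 18H BTMSB singlet as the reference integral and a 2H aromatic analyte signal as `I_a` (hypothetical values), the function returns the analyte purity in percent.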

Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography

Procedia PDF Downloads 467
147 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuing challenge in diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours, so a model-based development strategy has been adopted instead. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative is presented in this paper, which focuses on the use of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlation. The model is developed from steady-state data collected across the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperatures in both the burned and unburned zones are considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO); the oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work, with a substantial number of cases tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT), are also considered in the validation. The model shows very good predictability and robustness at both sea level and altitude, under different ambient conditions. Its advantages, high accuracy and robustness across operating conditions, low computational time and the low number of data points required for calibration, establish a platform on which the model-based approach can be used for engine calibration and development. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
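A minimal sketch of the semi-empirical idea, physical in-cylinder quantities entering an empirical correlation, is the log-linear, thermal-NO-inspired fit below. The functional form and constants are illustrative assumptions; the paper's actual model uses DI-Pulse combustion outputs and machine-learning ensembles rather than this simple regression:

```python
import numpy as np

def nox_model(T_burn, o2, t_ms, a, b, c):
    # Thermal-NO-inspired correlation: exponential in burned-zone temperature,
    # power law in O2 concentration, linear in residence (combustion) time.
    return a * np.exp(-b / T_burn) * o2**c * t_ms

def calibrate(T_burn, o2, t_ms, nox_meas):
    # Least-squares fit of the log-linear form
    #   ln(NOx / t) = ln(a) - b / T + c * ln(O2)
    # against measured engine-out NOx.
    X = np.column_stack([np.ones_like(T_burn), -1.0 / T_burn, np.log(o2)])
    y = np.log(nox_meas / t_ms)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]  # a, b, c
```

On synthetic data generated from known constants, the calibration recovers them exactly, which is the basic sanity check before fitting real steady-state measurements.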

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 94
146 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this gap by testing the rule experimentally; however, in those studies participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data across experimental questions without testing whether differences exist between them. Indeed, different topics could show different dynamics; for example, people may be more prone to accepting someone else's opinion on less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ('agree' or 'disagree') together with the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed each participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. Analyzing the topics independently, we observed that each one shows a different initial distribution; however, the dynamics (i.e., the properties of the opinion change) appear similar across all topics, suggesting that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty.
This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of jumping directly to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data: exposing someone to 'agree' or 'disagree' pushed participants toward respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence; we even observed people who changed from 'agree' to 'disagree' despite being exposed to 'agree.' This phenomenon is surprising, as in the standard literature the strength of the noise is usually smaller than the strength of social influence. Finally, we built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance, and by iterating it we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule and allows us to build models directly grounded in experimental results.
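The opinion encoding described above, and one possible noisy update rule consistent with the reported findings (noise stronger than social influence), can be sketched as follows. The influence and noise magnitudes are invented for illustration; the fitted parameters of the paper's model are not given in the abstract:

```python
import random

def encode(agrees, certainty):
    # Map ('agree'/'disagree', certainty 1..10) to a continuous opinion in [-10, 10].
    return (1 if agrees else -1) * certainty

def update(opinion, source_agrees, influence=0.3, noise_sd=2.0):
    # One interaction: a weak pull toward the partner's stance (social influence)
    # plus Gaussian noise that, as in the experimental data, can dominate the pull
    # and occasionally flip the sign of the opinion.
    source = 10.0 if source_agrees else -10.0
    new = opinion + influence * (source - opinion) / 10.0 + random.gauss(0.0, noise_sd)
    return max(-10.0, min(10.0, new))
```

Iterating `update` over a population is the kind of simulation that, per the abstract, can reach polarized states even from an unpolarized initial distribution.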

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 135
145 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to inform the reader briefly about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, 42 of which (22%) pertain specifically to thoracic radiology. Imaging findings or procedure details: AI aids chest radiology as follows. It detects and segments pulmonary nodules; subtracts bone to provide an unobstructed view of the underlying lung parenchyma; and provides further information on nodule characteristics, such as location, two-dimensional size or three-dimensional (3D) volume, change in size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from non-tension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. It automatically localizes vertebral segments, labels ribs and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and the progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region and key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation.
It provides percentages of tissues within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung-volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition in patients with a history of smoking, as well as hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral-body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is expected that the continuing progress of computerized systems and improvements in software algorithms will render AI the radiologist's second pair of hands.

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 47
144 A Numerical Studies for Improving the Performance of Vertical Axis Wind Turbine by a Wind Power Tower

Authors: Soo-Yong Cho, Chong-Hyun Cho, Chae-Whan Rim, Sang-Kyu Choi, Jin-Gyun Kim, Ju-Seok Nam

Abstract:

Recently, vertical axis wind turbines (VAWTs) have been widely used to produce electricity, even in urban areas. They have several merits, such as low noise, easy installation of the generator and a simple structure without a yaw-control mechanism. However, their blades operate under the influence of the trailing vortices generated by the preceding blades; this phenomenon deteriorates output power and makes it difficult to predict performance correctly. To improve the performance of a VAWT, a wind power tower can be applied. Usually, the wind power tower is constructed as a multi-story building to increase the frontal area presented to the wind stream; hence, multiple sets of VAWTs can be installed within the tower and operated at high elevation. Many different types of wind power tower can be used in the field. In this study, a wind power tower with a circular column shape was applied, and the VAWT was installed at the center of the tower. Seven guide walls were used as struts between the floors of the tower. These guide walls were utilized not only to increase the wind velocity within the tower but also to adjust the wind direction to create a better working condition for the VAWT. Hence, some important design variables, such as the distance between the wind turbine and the guide wall, the outer diameter of the tower, and the direction of the guide wall relative to the wind direction, should be considered to enhance the output power of the VAWT. A numerical analysis was conducted to find the optimum values of the design variables using computational fluid dynamics (CFD), which can be a more accurate prediction method than stream-tube methods. To obtain accurate results, the CFD requires transient analysis and full three-dimensional (3-D) computation.
However, full 3-D CFD is hardly a practical tool because it requires huge computation time; therefore, a reduced computational domain is applied as a practical alternative. In this study, computations were conducted in the reduced computational domain and compared with experimental results in the literature, and the mechanism behind the differences between the experimental and computational results was examined. The computed results showed that this computational method could be effective in a design methodology using an optimization algorithm. After validation of the numerical method, CFD of the wind power tower was conducted with the important design variables affecting the performance of the VAWT. The results showed that the output power of the VAWT obtained using the wind power tower was increased compared to that obtained without it, and that the increase in output power depended greatly on the dimensions of the guide wall.

Keywords: CFD, performance, VAWT, wind power tower

Procedia PDF Downloads 360
143 Stability of a Natural Weak Rock Slope under Rapid Water Drawdowns: Interaction between Guadalfeo Viaduct and Rules Reservoir, Granada, Spain

Authors: Sonia Bautista Carrascosa, Carlos Renedo Sanchez

Abstract:

The effect of a rapid drawdown is a classical scenario to be considered in slope stability under submerged conditions. This situation arises when totally or partially submerged slopes experience a descent of the external water level; it is a typical verification in the dam engineering discipline, as reservoir water levels commonly fluctuate noticeably with the seasons and for operational reasons. Although the scenario is well known and, in general, predictable, site conditions can increase the complexity of its assessment, and unexpected external factors can reduce the stability of, or even fail, a slope under a rapid drawdown. The present paper describes and discusses the interaction between two infrastructures, a dam and a highway, and the impact of the rapid drawdown of the Rules Dam, in the province of Granada (south of Spain), on the stability of a natural rock slope overlain by the north abutment of a viaduct of the A-44 Highway. In 2011, with both infrastructures already constructed, delivered and under operation, movements started to be recorded in the approach embankment and north abutment of the Guadalfeo Viaduct, which carries the highway across the tail of the reservoir. The embankment and abutment are founded on a low-angle natural rock slope formed by grey graphitic phyllites, distinctly weathered and intensely fractured, with pre-existing fault and weak planes. After the first filling of the reservoir to a relative level of 243 m, three consecutive drawdowns were recorded in the autumns of 2010, 2011 and 2012, to relative levels of 234 m, 232 m and 225 m.
To understand the effect of these drawdowns on the strength and stability of the weak rock mass, a new geological model was developed after reviewing all the available ground investigations, updating the geological mapping of the area, and supplementing it with additional geotechnical and geophysical investigations. Together with all this information, rainfall and reservoir-level data have been reviewed in detail and incorporated into the monitoring interpretation. The analysis of the monitoring data and the new geological and geotechnical interpretation, supported by the limit equilibrium software Slide2, concludes that the movement follows the direction of the schistosity of the phyllitic rock mass, coincident with the direction of the natural slope, indicating a deep-seated movement of the whole slope towards the reservoir. As part of these conclusions, the solutions considered to reinstate the highway infrastructure to the required factor of safety (FoS) will be described, and the geomechanical characterization of these weak rocks discussed, together with the influence of water-level variations not only on the pore-water pressure regime but also on geotechnical behavior, through the modification of strength and deformability parameters.
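As a toy counterpart to the limit-equilibrium analysis mentioned above, a textbook infinite-slope factor of safety with a pore-pressure ratio shows qualitatively why a rapid drawdown is destabilising: pore pressures in the slope remain high while the stabilising external water load is removed. This is a generic sketch under stated assumptions, not the Slide2 model of the Guadalfeo slope, and all parameter values are illustrative:

```python
import math

def infinite_slope_fos(c_kpa, phi_deg, beta_deg, z_m, gamma=20.0, ru=0.0):
    # Infinite-slope factor of safety: resisting strength (cohesion plus
    # friction on effective normal stress) over driving shear stress on a
    # slip plane at depth z. ru = u / (gamma * z) is the pore-pressure ratio;
    # gamma is unit weight in kN/m^3, c' in kPa.
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    driving = gamma * z_m * math.sin(beta) * math.cos(beta)
    sigma_n = gamma * z_m * math.cos(beta) ** 2  # total normal stress
    u = ru * gamma * z_m                         # pore pressure on the plane
    return (c_kpa + (sigma_n - u) * math.tan(phi)) / driving
```

Raising `ru` (approximating undrained conditions just after drawdown) lowers the factor of safety, which is the mechanism the rapid-drawdown verification is designed to capture.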

Keywords: monitoring, rock slope stability, water drawdown, weak rock

Procedia PDF Downloads 138
142 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity

Authors: Panagiotis Roupas, Yota Passia

Abstract:

This paper aims at establishing an index of design mechanisms, immanent in spatial objects, based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded, within the premises of assemblage theory, as systems composed of interacting parts, their ability to affect and to be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. As spatial structures are theorized as assemblages, composed of heterogeneous elements that enter into relations with one another, and since all assemblages are parts of larger assemblages, the ability of their components to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map an architectural assemblage's affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter assemblages. Material components are those material elements that an assemblage requires in order to exist, while expressive components include the non-linguistic (sense impressions) as well as the linguistic (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, which are the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize, the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, affects, drives, and emotions.
A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, we have created an affective mechanisms index, in which each a-sign is connected with the list of effects it triggers, which thoroughly defines it. Vice versa, the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected, on the basis of the general categories of form, structure and surface. Thus, the degree of contingency of different parts is evaluated and measured, and finally a-signs are introduced as material information that is immanent in the spatial object while conferring no meaning: they convey information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while establishing a mechanism to measure its continuous transformation.

Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual

Procedia PDF Downloads 102
141 Mapping and Measuring the Vulnerability Level of the Belawan District Community in Encountering the Rob Flood Disaster

Authors: Dessy Pinem, Rahmadian Sembiring, Adanil Bushra

Abstract:

Medan Belawan is one of the 21 districts of Medan. The Medan Belawan sub-district borders the Malacca Strait directly to the north, and the problem in this sub-district, which has persisted for many years, is rob (tidal) flooding. In 2015, rob floods inundated the urban villages of Sicanang, Belawan I, Belawan Bahagia and Bagan Deli; the extent of inundation in the rob flood of September 2015 reached 540,938 ha. A rob flood is a phenomenon in which sea water overflows onto the land; it can also be understood as ponding of water on coastal land at high tide, inundating parts of the coastal plain lying below high-tide sea level. Rob flooding is a daily disaster faced by the residents of the Medan Belawan district; it can happen every month and last for a week. The flood soaks not only residents' houses but also the main road to Belawan Port, reaching 50 cm. To deal with the problems caused by the flooding and to prepare coastal communities for the character of coastal areas, it is necessary to know the vulnerability of the people who are repeatedly its victims. Are the people of the Medan Belawan sub-district, especially in the flood-affected villages, able to cope with the consequences of the floods? To answer this question, it is necessary to assess the vulnerability of the Belawan District community in the face of the rob flood disaster. This research is descriptive, qualitative and quantitative. Data were collected by observation, interviews and questionnaires in 4 urban villages often affected by rob flooding. The vulnerabilities measured are physical, economic, social, environmental, organizational and motivational.
For physical vulnerability, the data collected are building spacing, floor-area ratio, drainage, and building materials. For economic vulnerability, the data collected are income, employment, building ownership, and insurance ownership. For social vulnerability, the data collected are education, number of family members, children, the elderly, gender, disaster training, and waste-disposal practices. For organizational vulnerability, the data collected concern the existence of organizations that advocate for the victims, and the policies and laws governing the handling of tidal flooding. Motivational vulnerability is assessed from the presence of an information or question-and-answer center on rob flooding, and from the existence of an evacuation plan or route to avoid disaster or reduce casualties. The results of this study indicate that most people in the Medan Belawan sub-district have a high level of vulnerability in the physical, economic, social, environmental, organizational and motivational fields. They have no access to economic empowerment, no insurance, and no motivation to solve problems, relying only on the government; they have no organizations that support and defend them; and their buildings are easily destroyed by rob floods.
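A composite score across the six vulnerability fields could be computed as below. The equal weights, thresholds and example field scores are illustrative assumptions; the study reports per-field ratings rather than a published aggregation formula:

```python
FIELDS = ['physical', 'economic', 'social', 'environmental',
          'organizational', 'motivational']

def vulnerability_level(scores, weights=None):
    # scores: dict of field -> normalized score in [0, 1] (1 = most vulnerable).
    # Returns the weighted mean mapped onto low / medium / high classes.
    weights = weights or {f: 1.0 for f in FIELDS}
    total = sum(weights[f] * scores[f] for f in FIELDS) / sum(weights.values())
    if total > 2 / 3:
        return 'high'
    return 'medium' if total > 1 / 3 else 'low'
```

Mapping such a class per household or urban village is one simple way to turn the questionnaire indicators into the vulnerability map the study's title describes.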

Keywords: disaster, rob flood, Medan Belawan, vulnerability

Procedia PDF Downloads 106
140 Corporate Social Responsibility and Corporate Reputation: A Bibliometric Analysis

Authors: Songdi Li, Louise Spry, Tony Woodall

Abstract:

Nowadays, Corporate Social Responsibility (CSR) has become a buzzword, and more and more academics are devoting effort to CSR studies. It is believed that CSR can influence Corporate Reputation (CR), and many hold the favourable view that CSR leads to a positive CR. Specifically, CSR-related activities in the reputational context have been regarded as routes to excellent financial performance, value creation, etc. It is also argued that CSR and CR are two sides of one coin, so that, to some extent, doing CSR is equivalent to establishing a good reputation. Still, there is no consensus on the CSR-CR relationship in the literature; thus, a systematic literature review is much needed. This research conducts a systematic literature review with both bibliometric and content analysis. Data are selected from English-language academic journal articles only, and keyword combinations are applied to identify relevant sources. Data from Scopus and Web of Science (WoS) are gathered for the bibliometric analysis: Scopus search results were saved in RIS and CSV formats, and WoS data were saved in TXT and CSV formats, so that the data could be processed in the Bibexcel software and later visualised with VOSviewer. Content analysis was then applied to analyse the data clusters and the key articles. On the topic of CSR-CR, this literature review with bibliometric analysis makes four contributions. First, it develops a systematic study that quantitatively depicts the knowledge structure of CSR and CR by identifying terms closely related to CSR-CR (such as 'corporate governance') and clustering the subtopics that emerged in co-citation analysis. Second, content analysis is performed to gain insight into the findings of the bibliometric analysis in the discussion section.
It also highlights some insightful implications for the future research agenda: for example, a psychological link between CSR and CR is identified in the results, and emerging economies and qualitative research methods are new elements in the CSR-CR big picture. Third, a multidisciplinary perspective runs through the bibliometric mapping and the co-word and co-citation analyses; hence, this work builds an interdisciplinary structure that could lead to an integrated conceptual framework in the future. Finally, Scopus and WoS are compared and contrasted, and Scopus, which offers deeper and more comprehensive data, is suggested as the tool for future bibliometric analysis studies. Overall, this paper has fulfilled its initial purposes and contributed to the literature. To the authors' best knowledge, this is the first literature review of CSR-CR research to apply both bibliometric analysis and content analysis, which gives the paper its methodological originality; the dual approach enables a comprehensive and semantic exploration of the CSR-CR area in a scientific and realistic manner. Admittedly, the work may contain subjective bias in the selection of search terms and papers; triangulation could reduce this bias to some degree.
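The co-word mapping step can be illustrated with a keyword co-occurrence counter of the kind that tools such as Bibexcel and VOSviewer operate on. The sample records below are invented, not drawn from the study's Scopus/WoS corpus:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    # records: list of keyword lists, one per article.
    # Returns counts of keyword pairs appearing in the same article:
    # the raw matrix behind co-word maps and cluster visualisations.
    pairs = Counter()
    for keywords in records:
        for a, b in combinations(sorted({k.lower() for k in keywords}), 2):
            pairs[(a, b)] += 1
    return pairs
```

Feeding in per-article keyword lists would surface pairs such as ('corporate reputation', 'corporate social responsibility') with high counts, which is how closely related terms like 'corporate governance' emerge from the clustering.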

Keywords: corporate social responsibility, corporate reputation, bibliometric analysis, software program

Procedia PDF Downloads 107
139 Educational Infrastructure: A Barrier to Teaching and Learning Architecture

Authors: Alejandra Torres-Landa López

Abstract:

Introduction: Can architecture students be creative in spaces shaped by an educational infrastructure built on paradigms of the past? This question and others related to it are answered in this paper, which presents the PhD research 'An anthropic conflict in Mexican Higher Education Institutes: problems and challenges of the educational infrastructure in teaching and learning History of Architecture'. This research was completed in 2013 and is one of the first studies conducted nationwide in Mexico to analyse the impact of educational infrastructure on learning architecture; its objective was to identify which elements of the educational infrastructure of the Mexican Higher Education Institutes where architects are trained hinder or contribute to the teaching and learning of History of Architecture, and how and why this happens. Methodology: A mixed methodology was used, combining quantitative and qualitative analysis. Different resources and strategies for data collection were employed, such as questionnaires for students and teachers, interviews with architecture research experts, and direct observation of architecture classes, among others; the data collected were analysed using SPSS and MAXQDA. The reliability of the quantitative data was supported by a Cronbach's alpha coefficient of 0.86, a figure that gives the data sufficient support. All of the above made it possible to confirm the anthropic conflict in which Mexican universities find themselves. Major findings of the study: Although some of the findings were probably not unknown, they had not been systematized and analysed with the depth to which this is done in this research. 
It can therefore be said that the educational infrastructure of most of the Higher Education Institutes studied is a barrier to the educational process. Some of the reasons are: the limited morphological variation of space; the inadequate control of lighting, noise, temperature, equipment and furniture; the poor or non-existent accessibility for disabled people; as well as the absence, obsolescence and/or insufficiency of information technologies. These issues generate an anthropic conflict (understood as the difficulty teachers and students have in relating to one another in order to achieve significant learning). It is clear that most of the educational infrastructure of Mexican Higher Education Institutes is anchored in paradigms of the past; it seems to respond to the earlier era of industrialization. The results confirm that the educational infrastructure of the Mexican Higher Education Institutes where architects are trained is perceived as a "closed container" of people and data: infrastructure that becomes a barrier to the teaching and learning process. Conclusion: The research results show it is time to change the paradigm in which we conceive educational infrastructure. It is time to stop seeing it only as classrooms, workshops, laboratories and libraries; it must be seen from a constructive, urban, architectural and human point of view, taking into account its different dimensions: physical, technological, documental and social, among others. In this way the educational infrastructure can become a set of elements that organize and create spaces where ideas and thoughts can be shared, and a social catalyst where people can interact with each other and with the space itself.
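The Cronbach's alpha figure cited above follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below is a generic illustration with hypothetical questionnaire scores, not the study's SPSS computation:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of respondents' total scores).

    item_scores: one list per questionnaire item, each holding every
    respondent's score on that item (hypothetical data below).
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(var(col) for col in item_scores)
    totals = [sum(col[i] for col in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - item_var / var(totals))

# two identical items -> perfect internal consistency
print(round(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]), 3))  # 1.0
```

A value of 0.86, as reported in the study, sits comfortably above the 0.7 threshold conventionally taken to indicate acceptable reliability.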

Keywords: educational infrastructure, impact of space in learning architecture outcomes, learning environments, teaching architecture, learning architecture

Procedia PDF Downloads 385
138 Mapping Vulnerabilities: A Social and Political Study of Disasters in Eastern Himalayas, Region of Darjeeling

Authors: Shailendra M. Pradhan, Upendra M. Pradhan

Abstract:

Disasters are perennial features of human civilization. The recurring earthquakes, floods and cyclones, among others, that result in massive loss of life and devastation are a grim reminder of the fact that, despite all our success stories of development and progress in science and technology, human society is perennially at risk of disasters. The looming threat of climate change and global warming only worsens our disaster risk. The Darjeeling hills, situated in the Eastern Himalayan region of India and famous for their three Ts (tea, tourism and the toy train), are equally notorious for disasters. The recurring landslides and earthquakes, cyclone Aila, and the Ambootia landslide, considered the largest landslide in Asia, are strong evidence of the vulnerability of the Darjeeling hills to natural disasters. Given its geographical location along the Hindu Kush Himalayas, the region is marked by rugged topography, a geophysically unstable structure, high seismicity and a fragile landscape, making it prone to disasters of different kinds and magnitudes. Most studies of disasters in the Darjeeling hills are, however, scientific and geographical in orientation, focusing on the underlying geological and physical processes to the neglect of social and political conditions. This has created a tendency among researchers and policy-makers to endorse and promote a particular type of discourse that does not consider the social and political aspects of disasters in the Darjeeling hills. Disaster, this paper argues, is a complex phenomenon and a result of diverse factors, both physical and human. The hazards caused by physical and geological agents, and the vulnerabilities produced by and rooted in the political, economic, social and cultural structures of a society, together result in disasters. In this sense, disasters are as much a result of political and economic conditions as of the physical environment. 
The human aspect of disasters, therefore, compels us to address the intricate social and political challenges that ultimately determine our resilience and vulnerability to disasters. Set within the above milieu, the aims of the paper are twofold: a) to provide a political and sociological account of disasters in the Darjeeling hills; and b) to identify and address the root causes of its vulnerability to disasters. In situating disasters in the Darjeeling hills, the paper adopts the Pressure and Release (PAR) model, which provides theoretical insight into the study of the social and political aspects of disasters and a means to examine the myriad related issues therein. The PAR model conceptualises risk as a complex combination of vulnerabilities, on the one hand, and hazards, on the other. Disasters, within the PAR framework, occur when hazards interact with vulnerabilities. The root causes of vulnerability, in turn, can be traced to social and political structures such as legal definitions of rights, gender relations, and other ideological structures and processes. In this way, the PAR model helps the present study identify and unpack the root causes of vulnerability and disaster in the Darjeeling hills that have largely remained neglected in dominant discourses, thereby providing a more nuanced and sociologically sensitive understanding of disasters.

Keywords: Darjeeling, disasters, PAR, vulnerabilities

Procedia PDF Downloads 252
137 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and with those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical, distinct treatment strategies for each have been devised. The paper compares and contrasts the two conditions: the apparent similarities between autism spectrum disorders and the schizoid personality are found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of desire for or enjoyment of close relationships; and preference for solitary activities. In this paper autism, fundamentally a communicative disorder, is shown to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that would identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia. 
Through presentations of clinical examples, the treatment of autists of the Asperger type is shown to address the autist's extreme social aversion, which also precludes the experience of empathy. Autists are shown to form social attachments, but without the capacity to interact with mutual concern. Empathy is shown to be teachable; as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids is shown to revolve around joining empathically with the schizoid's apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that could be related to genetic, epigenetic and neurobiological measures. But as these clinical examples attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 94
136 Climate Change and Health: Scoping Review of Scientific Literature 1990-2015

Authors: Niamh Herlihy, Helen Fischer, Rainer Sauerborn, Anneliese Depoux, Avner Bar-Hen, Antoine Flauhault, Stefanie Schütte

Abstract:

In recent decades, there has been an increase in the number of publications, both in the scientific and in the grey literature, on the potential health risks associated with climate change. Though interest in climate change and health is growing, many gaps remain in adequately assessing our future health needs in a warmer world. Generating a greater understanding of the health impacts of climate change could be a key step in inciting the changes necessary to decelerate global warming and in targeting new strategies to mitigate the consequences for health systems. A long-term, broad overview of the existing scientific literature in the field of climate change and health is currently missing, yet it is needed to ensure that all priority areas are being adequately addressed. We conducted a scoping review of published peer-reviewed literature on climate change and health from two large databases, PubMed and Web of Science, between 1990 and 2015. A scoping review allowed for a broad analysis of this complex topic at a meta-level, as opposed to a thematically refined literature review. A detailed search strategy including specific climate and health terminology was used to search the two databases. Inclusion and exclusion criteria were applied in order to capture the most relevant literature on the human health impact of climate change within the chosen timeframe. Two reviewers screened the papers independently, and any differences that arose were resolved by a third party. Data were extracted, categorized and coded both manually and using R software. Analytics and infographics were developed from the results. 7269 articles were identified across the two databases following the removal of duplicates. After screening by both reviewers, 3751 were included. As expected, preliminary results indicate that the number of publications on the topic has increased over time. 
Geographically, the majority of publications address the impact of climate change on health in Europe and North America. This is particularly alarming given that countries in the Global South will bear the greatest health burden. Concerning health outcomes, infectious diseases, particularly dengue fever and other mosquito-transmitted infections, are the most frequently cited. We highlight research gaps in certain areas, e.g., climate migration and mental health issues. We are developing a database of the identified climate change and health publications and are compiling a report for publication and dissemination of the findings. As health is a major co-benefit of climate change mitigation strategies, our results may serve as a useful source of information for research funders and investors when considering future research needs as well as the cost-effectiveness of climate change strategies. This study is part of an interdisciplinary project called 4CHealth that confronts the results of research on the scientific, political and press literature to better understand how knowledge on climate change and health circulates within those different fields and whether and how it is translated into real-world change.
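The duplicate-removal step that precedes screening can be illustrated with a minimal title-matching sketch. The normalisation rule and the sample records below are assumptions for illustration, not the reference-management procedure the review actually used:

```python
import re

def dedupe(records):
    """Merge records from two databases, keeping one copy per article.

    records: iterable of (title, source) pairs. Titles are matched after
    lower-casing and stripping punctuation, a crude stand-in for the
    fuzzier matching real deduplication tools apply.
    """
    seen, unique = set(), []
    for title, source in records:
        key = re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()
        if key not in seen:
            seen.add(key)
            unique.append((title, source))
    return unique

merged = dedupe([
    ("Climate change and dengue risk.", "PubMed"),
    ("Climate Change and Dengue Risk", "WoS"),
    ("Heatwaves and mortality", "WoS"),
])
print(len(merged))  # 2
```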

Keywords: climate change, health, review, mapping

Procedia PDF Downloads 298
135 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education

Authors: Gleyce Assis Da Silva Barbosa

Abstract:

In recent years, we have witnessed vertiginous growth among large education companies. Offspring of national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks formed by mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems, and other products and services. They are also able to weave their webs, directly or indirectly, into philanthropic circles, limited partnerships, family businesses and even public education, through various mechanisms of outsourcing, privatization and the commercialization of products for the sector. Although the growth of these groups in basic education seems a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher education conglomerates and other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in the formulation of guidelines that boosted the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neoliberalization of state public policies allowed the profusion of social exclusion, the increase in individuals without access to basic services, deindustrialization, automation, capital volatility and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generate various impacts, such as the precarization of work. Understanding the connection between these processes, which drive the economy, allows us to see their consequences in labor relations and in the territory. 
In this sense, it is necessary to analyse the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the direction of education on the national and even the international scene, since this process is linked to the multiple scales of financial globalization. The present research therefore has the general objective of analysing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology comprised a survey of laws, data and public policies on the subject; data on these companies available on investor-relations websites; a survey of information on the global and national companies that operate in Brazilian basic education; and a mapping of the expansion of educational oligopolies using public data on school locations. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can bring to teaching work.

Keywords: financialization, oligopolies, education, Brazil

Procedia PDF Downloads 41
134 COVID Prevention and Working Environment Risk Prevention and Business Continuity among SMEs in Selected Districts in Sri Lanka

Authors: Champika Amarasinghe

Abstract:

Introduction: The COVID-19 pandemic hit the Sri Lankan economy hard during 2021. More than 65% of the Sri Lankan workforce is engaged in small and medium-scale businesses, which undoubtedly had to struggle for survival and business continuity during the pandemic. Objective: To assess the association between adherence to the new norms during the COVID-19 pandemic and the maintenance of healthy working environmental conditions for business continuity. A cross-sectional study was carried out to assess the OSH status and the adequacy of COVID-19 preventive strategies among 200 SMEs in two selected districts in Sri Lanka. These two districts were selected considering the highest availability of SMEs. The sample size was calculated, and probability proportionate to size sampling was used to select the SMEs registered with the small and medium-scale development authority. An interviewer-administered questionnaire was used to collect the data, and an OSH risk assessment was carried out by a team of experts to assess the OSH status of these industries. Results: According to the findings, more than 90% of the employees in these industries had a moderate awareness of COVID-19 and of preventive strategies such as the importance of mask use, hand sanitizing practices and distance maintenance, but only forty percent of them adhered to these practices. Furthermore, only thirty-five percent of the employees and employers in these SMEs knew the reasons behind the new norms, which may explain the reluctance to implement these strategies and to adhere to the new norms in this sector. The OSH risk assessment findings revealed that the organization of the working environment to maintain distance between employees was poor, owing to inadequate space in these entities. More than fifty-five percent of the SMEs had proper ventilation and lighting facilities. 
More than eighty-five percent of these SMEs had poor electrical safety measures. Furthermore, eighty-two percent of them had not maintained fire safety measures. Eighty-five percent of workers were exposed to high noise levels and chemicals without using any personal protective equipment, and no other engineering controls were in place. Floor conditions were poor, and the SMEs did not maintain records of occupational accidents or occupational diseases. Conclusions: Based on the findings, awareness sessions were carried out by NIOSH. Six physical training sessions and continuous online trainings were carried out to overcome these issues, which made a drastic change in the working environments and ended with a hundred percent implementation of the COVID-19 preventive strategies. This in turn improved worker participation in the businesses, reduced absenteeism, improved business opportunities, and allowed the businesses to continue without any interruption during the third wave of COVID-19 in Sri Lanka.

Keywords: working environment, Covid 19, occupational diseases, occupational accidents

Procedia PDF Downloads 67
133 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold-standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse-cycle intervals instead of RR intervals to estimate HRV, and have compared this method with the gold-standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in, or alternatives to, the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method), which has been studied several times, four recent PPG-based techniques, namely the first-derivative peak-peak interval method (P1D method), the second-derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold-standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters. 
After filtering, the following HRV parameters were computed from PPG using each of the five methods, and also from ECG using the gold-standard method: time-domain parameters (SDNN, pNN50 and RMSSD) and frequency-domain parameters (very-low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power (TP)). In addition, Poincaré plots were constructed and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold-standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
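The time-domain parameters named above are defined directly on the beat-interval series, whether it comes from ECG RR intervals or PPG pulse-cycle intervals, which is what lets the five PPG-based methods be scored against the gold standard on identical terms. A minimal sketch (with an invented interval series) might look like:

```python
import math

def time_domain_hrv(intervals_ms):
    """SDNN, RMSSD and pNN50 from a beat-interval series in milliseconds."""
    n = len(intervals_ms)
    mean_rr = sum(intervals_ms) / n
    # SDNN: standard deviation of all intervals
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in intervals_ms) / (n - 1))
    diffs = [b - a for a, b in zip(intervals_ms, intervals_ms[1:])]
    # RMSSD: root mean square of successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: share of successive differences exceeding 50 ms
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

rr = [812, 790, 845, 860, 798, 805, 880]  # invented interval series
print(time_domain_hrv(rr))
```

Because RMSSD and pNN50 are built entirely from successive differences, they are exactly the parameters most inflated when a pulse-based method overestimates short-term variability.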

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 298
132 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR, with its fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of random-access memory (RAM), the delay in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to a single unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily coherent pixels, which are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, so the delay in creating displacement maps can be shortened, as there is no need to wait for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. 
The temporally averaged images are then processed by a dedicated interferometry procedure that integrates advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and the selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
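The temporal-averaging step for a single discontinuous campaign can be sketched as coherent, pixel-wise averaging of the complex images. This is an illustrative reconstruction of the principle, not the package's actual implementation:

```python
def temporal_average(stack):
    """Pixel-wise coherent average of a stack of complex GBSAR images.

    stack: list of images from one campaign, each a 2-D list of complex
    values carrying amplitude and phase together. Averaging N images
    suppresses uncorrelated noise (roughly by a factor of sqrt(N))
    while the stable interferometric phase survives.
    """
    n = len(stack)
    rows, cols = len(stack[0]), len(stack[0][0])
    return [
        [sum(stack[k][i][j] for k in range(n)) / n for j in range(cols)]
        for i in range(rows)
    ]

# toy 1x2 images: the fluctuating second pixel averages out
img_a = [[1 + 1j, 2 + 0j]]
img_b = [[1 + 1j, 0 + 0j]]
print(temporal_average([img_a, img_b]))  # [[(1+1j), (1+0j)]]
```

Averaging in the complex domain, rather than averaging amplitudes, is what preserves the phase needed by the subsequent interferometry step.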

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 138
131 Window Opening Behavior in High-Density Housing Development in Subtropical Climate

Authors: Minjung Maing, Sibei Liu

Abstract:

This research discusses the results of a study of window opening behavior in large housing developments in the high-density megacity of Hong Kong. The study involved field observations using photo documentation of the four cardinal elevations (north, south, east and west) of two large housing developments in a very dense urban area of approximately 46,000 persons per square kilometre within the city of Hong Kong. The targeted housing developments (A and B) are large public housing estates for lower-income residents, each with a population of about 13,000. However, the mean income level in development A is about 40% higher than in development B, and home ownership is 60% in development A versus 0% in development B. The surrounding amenities and the layout of the developments were also mapped to understand the activities available to residents. Photo documentation of the elevations was carried out from November 2016 to February 2018 to cover the full spectrum of seasons, in both the morning and the afternoon. From the photographs, window opening behavior was measured by counting the number of open windows as a percentage of all windows on that facade. For each survey date, temperature, humidity and wind speed were recorded from weather stations located in the same region. To further understand the behavior, simulation studies of the microclimate conditions of the housing developments were conducted using ENVI-met, a simulation tool widely used by urban climate researchers. Four major conclusions can be drawn from the data analysis and simulation results. First, there is little change in the amount of window opening across seasons within a temperature range of 10 to 35 degrees Celsius. This means that people who tend to open their windows do so consistently throughout the year and have a high tolerance of indoor thermal conditions. 
Second, on all four elevations the lower-income development B opened more windows (almost twice as many units) than the higher-income development A, meaning that window opening behavior correlated strongly with income level. Third, there is a lack of correlation between outdoor horizontal wind speed and window opening behavior, as changes in wind speed do not seem to affect the action of opening windows under most conditions. Similarly, vertical wind speed also cannot explain the window opening behavior of occupants. Fourth, the average amount of window opening is slightly higher on the south elevation than on the north elevation, which may be because the south elevation is well shaded from the high-angle sun during summer while admitting heat from the low-angle sun during winter. These findings provide insight into how to better design urban environments and indoor thermal environments for a liveable high-density city.
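The two quantities underlying these conclusions, the per-facade opening percentage and its correlation with a weather variable, can be sketched as follows; the counts and wind speeds below are hypothetical, not the study's data:

```python
def opening_percentage(open_count, total_windows):
    """Open windows on one facade photo as a percentage of all windows."""
    return 100.0 * open_count / total_windows

def pearson_r(xs, ys):
    """Pearson correlation, e.g. wind speed vs. opening percentage
    across survey dates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical facade with 120 windows, surveyed on three dates
pct = [opening_percentage(o, 120) for o in (30, 36, 33)]
wind = [2.1, 4.8, 3.0]
print(round(pearson_r(wind, pct), 2))
```

A value of r near zero across many survey dates would correspond to the study's third finding, that wind speed does not explain window opening.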

Keywords: high-density housing, subtropical climate, urban behavior, window opening

Procedia PDF Downloads 106
130 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method

Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study uses a novel image analysis based on thermal imaging to detect temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by means of their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes: the temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at each individual moment. Various contours, showing different temperature areas, appear in the picture of the ice surface captured by a thermal camera. Identifying the exact boundaries of these contours is valuable for ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature is increasing throughout the ice surface, and some are selected, at a specific time interval, for processing. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information: the red, green and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in the color of neighboring pixels is established to determine the contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue and green image elements. The color image elements are assessed according to their information content: useful elements proceed to processing, and useless elements are removed to reduce computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensities determine the boundaries. The results are then verified by experimental tests. 
An experimental setup uses ice samples and a thermal camera. To observe the ice contours created, the samples, initially at -20 °C, are brought into contact with a warmer surface, and pictures are captured for 20 seconds. The method is applied to five images captured at time intervals of 5 seconds. The study shows that the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm shows the boundaries more effectively than other edge detection methods such as Sobel and Canny. A comparison between the contour detection of this method and the temperature analysis, which states the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases according to their image properties.
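The neighbour-difference idea described above can be sketched as a simple threshold test on one colour plane: pixels whose right or lower neighbour differs by more than a threshold are marked as contour edges, while similar neighbours fall inside one contour. This is an illustrative reconstruction of the principle, not the authors' exact algorithm, and the pixel values are invented:

```python
def edge_map(channel, threshold):
    """Mark pixels whose intensity differs from a neighbour by more
    than `threshold`.

    channel: 2-D list of intensities from a single colour plane (the
    study found red and blue informative and discarded green).
    """
    rows, cols = len(channel), len(channel[0])
    edges = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # compare with the right and lower neighbours only,
            # so each adjacent pair is checked exactly once
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    if abs(channel[i][j] - channel[ni][nj]) > threshold:
                        edges[i][j] = edges[ni][nj] = 1
    return edges

# toy red-channel patch with a sharp contour between columns 1 and 2
red = [
    [10, 12, 200, 205],
    [11, 13, 198, 202],
]
print(edge_map(red, 50))  # [[0, 1, 1, 0], [0, 1, 1, 0]]
```

Running the same test on the red and blue planes and combining the resulting edge maps mirrors the study's finding that those two channels carry the contour information.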

Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image

Procedia PDF Downloads 294