Search results for: spatial temporal filter

411 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks, specifically recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
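
As a rough illustration of the two-stage setup described above, the sketch below builds a small LSTM forecaster of the kind used here. The window length, layer size and feature count are illustrative assumptions, not the configuration used in the study, and the data are synthetic stand-ins for the DHC series.

```python
# Minimal sketch of a 24-hour-ahead LSTM forecaster (illustrative sizes,
# not the configuration used in the study). Requires TensorFlow 2.x.
import numpy as np
import tensorflow as tf

WINDOW = 48     # hours of history fed to the model (assumed)
HORIZON = 24    # hours forecast ahead, as in the paper
N_FEATURES = 3  # e.g. past demand, forecast temperature, humidity (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),        # LSTM cells capture temporal non-linearities
    tf.keras.layers.Dense(HORIZON),  # one output per forecast hour
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in data with the right shapes; real inputs would be the
# DHC operational series and the temperature forecasts described above.
X = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(1000, HORIZON).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]).shape)  # (1, 24): next 24 hours of demand
```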

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 123
410 Telomerase, a Biomarker in Oral Cancer Cell Proliferation and Tool for Its Prevention at Initial Stage

Authors: Shaista Suhail

Abstract:

As the cancer population increases sharply, the incidence of oral squamous cell carcinoma (OSCC) is also expected to increase. Oral carcinogenesis is a highly complex, multistep process which involves the accumulation of genetic alterations that lead to the induction of proteins promoting cell growth (encoded by oncogenes) and increased enzymatic (telomerase) activity promoting cancer cell proliferation. The global increase in frequency and mortality, as well as the poor prognosis of oral squamous cell carcinoma, has intensified current research efforts in the field of prevention and early detection of this disease. Advances in the understanding of the molecular basis of oral cancer should help in the identification of new markers. The study of the carcinogenic process of oral cancer, including continued analysis of new genetic alterations along with their temporal sequencing during initiation, promotion and progression, will allow us to identify new diagnostic and prognostic factors, which will provide a promising basis for the application of more rational and efficient treatments. Telomerase activity has been readily found in most cancer biopsies, in premalignant lesions and in germ cells. Telomerase activity is generally absent in normal tissues and is known to be induced upon immortalization or malignant transformation of human cells, such as oral cancer cells. Maintenance of telomeres plays an essential role during the transformation from the precancerous to the malignant stage. Mammalian telomeres, specialized nucleoprotein structures, are composed of large concatemers of the guanine-rich sequence 5′-TTAGGG-3′. The roles of telomeres in regulating both genome stability and replicative immortality seem to contribute in essential ways to cancer initiation and progression. It is concluded that telomerase activity can be used as a biomarker for the diagnosis of malignant oral cancer and as a target for inactivation in chemotherapy or gene therapy. Its expression will also prove to be an important diagnostic tool as well as a novel target for cancer therapy. The activation of telomerase may be an important step in tumorigenesis, which can be controlled by inactivating its activity during chemotherapy. The expression and activity of telomerase are indispensable for cancer development. There are currently no drugs that are highly effective in treating oral cancers. There is a general call for new emerging drugs or methods that are highly effective in cancer treatment, possess low toxicity, and have a minor environmental impact. Some novel natural products also offer opportunities for innovation in drug discovery. Natural compounds isolated from medicinal plants, as rich sources of novel anticancer drugs, have been of increasing interest, some with enzyme (telomerase) blocking properties. Alarming reports of cancer cases have increased awareness among clinicians and researchers of the need to investigate newer drugs with low toxicity.

Keywords: oral carcinoma, telomere, telomerase, blockage

Procedia PDF Downloads 156
409 Interference of Mild Drought Stress on Estimation of Nitrogen Status in Winter Wheat by Some Vegetation Indices

Authors: H. Tavakoli, S. S. Mohtasebi, R. Alimardani, R. Gebbers

Abstract:

Nitrogen (N) is one of the most important agricultural inputs affecting crop growth, yield and quality in rain-fed cereal production. The N demand of crops varies spatially across fields due to spatial differences in soil conditions. In addition, the response of a crop to fertilizer applications is heavily reliant on plant-available water. Matching N supply to water availability is thus essential to achieve an optimal crop response. The objective of this study was to determine the effect of drought stress on the estimation of the nitrogen status of winter wheat by some vegetation indices. During the 2012 growing season, a field experiment was conducted at the Bundessortenamt (German Plant Variety Office) Marquardt experimental station, located in the village of Marquardt about 5 km northwest of Potsdam, Germany (52°27' N, 12°57' E). The experiment was laid out as a randomized split-block design with two replications. Treatments consisted of four N fertilization rates (0, 60, 120 and 240 kg N ha-1 in total) and two water regimes (irrigated (Irr) and non-irrigated (NIrr)), in a total of 16 plots with dimensions of 4.5 × 9.0 m. The indices were calculated using readings of a spectroradiometer made of tec5 components. The main parts were two "Zeiss MMS1 NIR enh" diode-array sensors with a nominal range of 300 to 1150 nm, a resolution of less than 10 nm, and an effective range of 400 to 1000 nm. The following vegetation indices were calculated: NDVI, GNDVI, SR, MSR, NDRE, RDVI, REIP, SAVI, OSAVI, MSAVI, and PRI. All the measurements were conducted during the growing season at different plant growth stages, including stem elongation (BBCH 32-41), booting (BBCH 43), inflorescence emergence and heading (BBCH 56-58), flowering (BBCH 65-69), and development of fruit (BBCH 71). According to the results obtained, among the indices, NDRE and REIP were least affected by drought stress and can provide reliable information on wheat nitrogen status regardless of the water status of the plant. They also showed strong relationships with the nitrogen status of winter wheat.
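
For reference, the sketch below computes a few of the indices listed above from narrow-band reflectances, using widely cited formulations (the REIP follows the linear-interpolation form). The band choices are common conventions and may differ slightly from those applied in the study.

```python
# Common formulations of a few of the vegetation indices named above,
# computed from narrow-band reflectances (fractions, 0-1).
import numpy as np

def ndvi(r800, r670):
    return (r800 - r670) / (r800 + r670)

def gndvi(r800, r550):
    return (r800 - r550) / (r800 + r550)

def ndre(r790, r720):
    return (r790 - r720) / (r790 + r720)

def reip(r670, r700, r740, r780):
    # Red-edge inflection point via linear interpolation (Guyot-Baret form), nm
    return 700.0 + 40.0 * (((r670 + r780) / 2.0 - r700) / (r740 - r700))

# Example reflectances for a healthy canopy (illustrative values):
print(ndvi(0.45, 0.05), gndvi(0.45, 0.10), ndre(0.42, 0.30))
print(reip(0.05, 0.12, 0.35, 0.45))
```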

Keywords: nitrogen status, drought stress, vegetation indices, precision agriculture

Procedia PDF Downloads 303
408 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical and other properties. The most interesting ones are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor until finally a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally-activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures with the help of methods of computational physics was performed. The study consists of two parts. The first is the generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomic modelling, followed by characterization of the prepared composite structures by image analysis of their sections or projections. However, the analysis of various morphological methods must be performed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. The second is the study of charge transport in the composites by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structures of conducting paths in them, in dependence on the technology of composite film preparation.
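
A minimal sketch of the hard-sphere generation step mentioned above is given below: metal particles are placed at random without overlap in a box and the filling factor (metal volume fraction) is computed. The box size, particle radius and particle count are illustrative assumptions, not values from the study.

```python
# Hard-sphere model sketch: non-overlapping random placement of spherical
# metal particles in a cubic box, plus the resulting filling factor.
import numpy as np

rng = np.random.default_rng(0)
L, R, N_TARGET = 100.0, 3.0, 300   # box edge, particle radius, target count (assumed)

centres = []
for _ in range(20000):             # rejection sampling until target reached
    c = rng.uniform(R, L - R, size=3)
    if all(np.linalg.norm(c - p) >= 2 * R for p in centres):
        centres.append(c)
    if len(centres) == N_TARGET:
        break

filling_factor = len(centres) * (4 / 3) * np.pi * R**3 / L**3
print(f"{len(centres)} particles placed, filling factor = {filling_factor:.3f}")
```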

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 374
407 A Differential Detection Method for Chip-Scale Spin-Exchange Relaxation Free Atomic Magnetometer

Authors: Yi Zhang, Yuan Tian, Jiehua Chen, Sihong Gu

Abstract:

A chip-scale spin-exchange relaxation free (SERF) atomic magnetometer makes use of millimeter-scale vapor cells micro-fabricated by Micro-electromechanical Systems (MEMS) techniques and the SERF mechanism, resulting in high spatial resolution and high sensitivity. It is useful for biomagnetic imaging, including magnetoencephalography and magnetocardiography. In a prevailing scheme, a circularly polarized on-resonance laser beam is adopted for both pumping and probing the atomic polarization, and the magnetic-field-sensitive signal is extracted from the enhancement of the transmitted laser intensity as the atomic polarization increases on zero-field level-crossing resonance. The scheme is very suitable for integration; however, the laser amplitude modulation (AM) noise and the laser frequency-modulation-to-amplitude-modulation (FM-AM) noise are superimposed on the photon shot noise, reducing the signal-to-noise ratio (SNR). To suppress AM and FM-AM noise, this paper puts forward a novel scheme which adopts circularly polarized on-resonance light for pumping and a linearly polarized frequency-detuned laser for probing. The transmitted beam is divided into transmission and reflection beams by a polarization analyzer, with the angle between the analyzer's transmission polarization axis and the frequency-detuned laser's polarization direction set to 45°. The magnetic-field-sensitive signal is extracted from the polarization rotation of the frequency-detuned laser, which induces an increasing intensity difference between the two beams as the atomic polarization increases. The AM and FM-AM noise in the two beams are therefore common-mode and can be almost entirely canceled by differential detection. We have carried out an experiment to study our scheme. The experiment reveals that the noise in the differential signal is obviously smaller than that in each beam. The scheme is promising for developing more sensitive chip-scale magnetometers.
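
The toy simulation below illustrates the differential-detection idea numerically: AM noise enters both beams behind the analyzer with the same sign, while the polarization-rotation signal enters with opposite sign, so subtraction cancels the noise. All magnitudes are arbitrary stand-ins, not measured values.

```python
# Toy illustration of common-mode noise cancellation by differential detection.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 10_000)
signal = 1e-3 * np.sin(2 * np.pi * 5 * t)      # field-induced rotation signal
am_noise = 1e-2 * rng.standard_normal(t.size)  # common-mode AM / FM-AM noise

beam_1 = 0.5 * (1 + am_noise) + signal         # analyzer at 45 deg: two outputs
beam_2 = 0.5 * (1 + am_noise) - signal         # same noise, opposite-sign signal

differential = beam_1 - beam_2                 # common-mode noise cancels
print("single-beam noise std :", beam_1.std())
print("differential noise std:", (differential - 2 * signal).std())
```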

Keywords: atomic magnetometer, chip scale, differential detection, spin-exchange relaxation free

Procedia PDF Downloads 156
406 Research on Energy Field Intervening in Lost Space Renewal Strategy

Authors: Tianyue Wan

Abstract:

Lost space, a concept proposed by Roger Trancik, is space that has not been used for a long time and is in decline. In his book Finding Lost Space: Theories of Urban Design, lost spaces are defined as those anti-traditional spaces that are unpleasant, need to be redesigned, and have no benefit to the environment or users. They have no defined boundaries and do not connect the various landscape elements in a coherent way. With the rapid development of urbanization in China, the blind areas of urban renewal have become chaotic lost spaces incompatible with that development. Therefore, lost space urgently needs to be reconstructed against the background of infill development and reduction planning in China. The formation of lost space is also an invisible division of social hierarchy. This paper tries to break down social class divisions and the estrangement between people through the regeneration of lost space, ultimately enhancing vitality, rebuilding a sense of belonging, and creating a continuous open public space for local people. Based on the concepts of lost space and the energy field, this paper clarifies the significance of the energy field in lost space renovation. It then introduces the energy field into lost space, using the magnetic field in physics as a prototype. The construction of the energy field is supported by space theory, spatial morphology analysis, public communication theory, urban diversity theory and city image theory. Taking Wuhan's Lingjiao Park in China as an example, this paper chooses the lost space on the west side of the park as the research object. Based on the current situation of this site, energy intervention strategies are proposed from four aspects: natural ecology, space rights, intangible cultural heritage and infrastructure configuration. Six specific lost space renewal methods are used in this work: "riveting", "breakthrough", "radiation", "inheritance", "connection" and "intersection". After the renovation, the space will be re-introduced to an active crowd. The integration of activities and space creates a sense of place, improves the walking experience, restores the vitality of the space, and provides a reference for the reconstruction of lost space in the city.

Keywords: dynamic vitality intervention, lost space, space vitality, sense of place

Procedia PDF Downloads 91
405 Rethinking Urban Informality through the Lens of Inclusive Planning and Governance in Contemporary Cities: A Case Study of Johannesburg, South Africa

Authors: Blessings Masuku

Abstract:

Background: Considering that Africa is urbanizing faster than any other region globally, managing cities in the global South has become the centerpiece of the New Urban Agenda (i.e., a shared vision of how we rethink, rebuild, and manage our cities for a better and more sustainable future). This study is centered on the governance and planning of urban informality practices, with particular reference to the relationship between the state, informal actors (e.g., informal traders and informal dwellers), and other city stakeholders who are public space users (commuters, businesses, and environmental activists); how informal actors organize themselves to lobby the state and claim their rights in the city; and how they navigate their everyday livelihood strategies. Aim: The purpose of this study is to examine and interrogate contemporary approaches and the policy and regulatory frameworks for urban spatial planning and the management of informality in one of South Africa's busiest and major cities, Johannesburg. Setting: The study uses the metropolitan region of the City of Johannesburg, South Africa to understand how this contemporary industrial city manages urban informality practices, including the use of public space, land zoning and street life, taking a closer look at the progress that has been made and the gaps in its inclusive urban policy frameworks. Methods: This study utilized a qualitative approach that includes surveys (open-ended questions), archival research (i.e., policy and other key document reviews), and key interviews, mainly with city officials and informality actors. A thematic analysis was used to analyze the data collected. Contribution: This study contributes to the large body of urban informality scholarship on global South cities by exploring how major cities, particularly in Africa, regulate and manage informality patterns and practices in their quest to build "utopian" smart cities. This study also brings a different perspective on the hacking ways used by informal actors to resist harsh regulations and remain invisible in the city, something that previous literature has barely explored in depth.

Keywords: inclusive planning and governance, infrastructure systems, livelihood strategies, urban informality, urban space

Procedia PDF Downloads 56
404 A Measurement and Motor Control System for Free Throw Shots in Basketball Using Gyroscope Sensor

Authors: Niloofar Zebarjad

Abstract:

This research aims at finding a tool to provide basketball players with real-time audio feedback on their shooting form in free throw shots. Free throws play a pivotal role in taking the lead in fierce competitions. The major problem in performing an accurate free throw seems to be improper training. Since the arm movement during the free throw shot is complex, the coach or the athlete might miss the movement details during practice. Hence, there is a need for a system that measures the critical characteristics of arm movements and controls for improper kinematics. The setup proposed in this study quantifies arm kinematics and provides real-time feedback as an audio signal, using a gyroscope sensor. Spatial shoulder angle data are transmitted to a mobile application in real time and can be saved and processed for statistical and analysis purposes. The proposed system is easy to use, inexpensive, portable, and applicable in real time. Objectives: This research aims to modify and control the free throw using audio feedback, determine if and to what extent the new setup reduces errors in arm formations during throws, and finally assess the successful throw rate. Methods: One group of elite basketball athletes and two groups of novice athletes (control and study groups) participated in this study. Each group contains 5 participants, studied in three separate sessions over a week. Results: Empirical results showed enhancements in the free throw shooting style, shot pocket (SP), and locked position (LP). The mean values of the shoulder angle were controlled at 25° and 45° for SP and LP, respectively, as recommended by valid FIBA references. Conclusion: Throughout the experiments, the system helped correct and control the shoulder angles toward the targeted pattern of shot pocket (SP) and locked position (LP). Following the desired results for arm motion, adding another sensor to measure and control the elbow angle is recommended.
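
The sketch below shows one plausible shape of the feedback logic described above: the gyroscope's angular rate is integrated to a shoulder angle, and an audio cue fires when the angle leaves a tolerance band around the 25° (shot pocket) and 45° (locked position) targets. The sensor read, the beep, the tolerance and the cue rate limit are all hypothetical placeholders, not the study's implementation.

```python
# Sketch of threshold-based audio feedback on an integrated gyro angle.
SP_TARGET, LP_TARGET, TOL = 25.0, 45.0, 3.0   # degrees; tolerance is assumed

def read_gyro_rate():
    """Placeholder for the real gyroscope driver (deg/s)."""
    return 12.5                                # constant rate, for illustration

def beep():
    """Placeholder for the mobile app's audio cue."""
    print("beep")

def monitor(target_deg, duration_s=2.0, dt=0.01, cue_interval_s=0.5):
    angle, since_cue = 0.0, 0.0
    for _ in range(int(duration_s / dt)):
        angle += read_gyro_rate() * dt         # rectangular integration of rate
        since_cue += dt
        if abs(angle - target_deg) > TOL and since_cue >= cue_interval_s:
            beep()                             # corrective feedback, rate-limited
            since_cue = 0.0
    return angle

print(monitor(SP_TARGET))   # shot-pocket phase, target 25 deg
print(monitor(LP_TARGET))   # locked-position phase, target 45 deg
```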

Keywords: audio-feedback, basketball, free-throw, locked-position, motor-control, shot-pocket

Procedia PDF Downloads 267
403 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems enable new research that could range from designing hardware up to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be inconvenient in physical experimental systems due to expense, unnecessary radiation exposure and technological difficulties. The aim of the present study is to validate the Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for a Siemens e-cam single-head gamma camera. Upon validation of the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging is carried out. Methods: A thyroid phantom is defined geometrically comprising two lobes of 80 mm diameter, 1 hot spot, and 3 cold spots. This geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128 × 128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image and the dimension of the hot spot. The algorithm for each quantification is described in detail. The difference between estimated and actual values for both simulation and experimental setups is analyzed for the radioactivity distribution and the dimension of the hot spot. Results: The results show that the difference between the contrast level of the simulated image and the experimental image is within 2%. The difference in the total count between the simulation and the actual study is 0.4%. The results of the activity estimation show that the relative difference between estimated and actual activity for the experiment and the simulation is 4.62% and 3.03%, respectively. The deviation in the estimated diameter of the hot spot for both simulation and experimental studies is similar, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.
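
The sketch below illustrates the kind of quantitative comparison described above: total counts, a simple contrast measure, and the relative difference between simulated and experimental planar images. The images are synthetic stand-ins and the ROI definitions are placeholders; the study's actual algorithms are only described, not reproduced.

```python
# Sketch of simulation-vs-experiment image quantification (stand-in data).
import numpy as np

rng = np.random.default_rng(2)
sim = rng.poisson(30, size=(128, 128)).astype(float)   # "simulated" image
exp = rng.poisson(30, size=(128, 128)).astype(float)   # "experimental" image

def contrast(img, hot_roi, bg_roi):
    hot, bg = img[hot_roi].mean(), img[bg_roi].mean()
    return (hot - bg) / (hot + bg)

hot_roi = (slice(60, 68), slice(60, 68))   # placeholder hot-spot ROI
bg_roi = (slice(10, 30), slice(10, 30))    # placeholder background ROI

rel_diff = abs(sim.sum() - exp.sum()) / exp.sum() * 100
print(f"total-count difference: {rel_diff:.2f}%")
print(f"contrast (sim, exp): {contrast(sim, hot_roi, bg_roi):.3f}, "
      f"{contrast(exp, hot_roi, bg_roi):.3f}")
```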

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 257
402 Development of Three-Dimensional Bio-Reactor Using Magnetic Field Stimulation to Enhance PC12 Cell Axonal Extension

Authors: Eiji Nakamachi, Ryota Sakiyama, Koji Yamamoto, Yusuke Morita, Hidetoshi Sakamoto

Abstract:

The regeneration of an injured central nerve network damaged by cerebrovascular accidents is difficult because of the poor regeneration capability of the central nervous system, composed of the brain and the spinal cord. Recently, new regeneration methods, such as the transplantation of nerve cells and the supply of nerve nutritional factors, have been proposed and examined. However, many problems remain, such as the canceration of engrafted cells, and it is strongly required to establish an efficacious treatment method for the central nervous system. Blackman proposed an electromagnetic stimulation method to enhance axonal nerve extension. In this study, we design and fabricate a new three-dimensional (3D) bio-reactor which can apply a uniform AC magnetic field stimulation to PC12 cells in the extracellular environment for enhancement of axonal nerve extension and 3D nerve network generation. Simultaneously, we measure the morphology of PC12 cell bodies, axons, and dendrites by multiphoton excitation fluorescence microscopy (MPM) and evaluate the effectiveness of the uniform AC magnetic stimulation in enhancing axonal nerve extension. Firstly, we designed and fabricated the uniform AC magnetic field stimulation bio-reactor. For the AC magnetic stimulation system, we used laminated silicon steel sheets, which have a high magnetic permeability, for the yoke structure of the 3D chamber. Next, we adopted a pole piece structure and installed coils of similar specification on both sides of the yoke. We searched for an optimum pole piece structure using magnetic field finite element (FE) analyses and the response surface methodology. We confirmed by FE analysis that the optimum 3D chamber structure showed a uniform magnetic flux density in the PC12 cell culture area. Then, we fabricated the uniform AC magnetic field stimulation bio-reactor by adopting the analytically determined specifications, such as the size of the chamber and the electromagnetic conditions. We confirmed that measurements of the magnetic field in the chamber showed good agreement with the FE results. Secondly, we fabricated a dish, set inside the uniform AC magnetic field stimulation bio-reactor, in which PC12 cells were disseminated with collagen gel and could be 3D cultured. The collagen gel was poured into the dish; the gel, which had a disk shape of 6 mm diameter and 3 mm height, was set on a membrane filter located 4 mm above the bottom of the dish, and the dish was filled with culture medium. Finally, we evaluated the effectiveness of the uniform AC magnetic field stimulation in enhancing nerve axonal extension. We confirmed a 6.8% increase in the average axonal extension length of PC12 cells under the uniform AC magnetic field stimulation after 7 days of culture in our bio-reactor, a 24.7% increase in the maximum axonal extension length, and a 60% increase in the number of dendrites of PC12 cells under the uniform AC magnetic field stimulation. Finally, we confirm the suitability of our uniform AC magnetic stimulation bio-reactor for nerve axonal extension and nerve network generation.

Keywords: nerve regeneration, axonal extension, PC12 cell, magnetic field, three-dimensional bio-reactor

Procedia PDF Downloads 156
401 Hydro-Meteorological Vulnerability and Planning in Urban Area: The Case of Yaoundé City in Cameroon

Authors: Ouabo Emmanuel Romaric, Amougou Armathe

Abstract:

Background and aim: The study of the impacts of floods and landslides at a small scale, specifically in the urban areas of developing countries, is done to provide tools to the actors responsible for better management of risks in such areas, which are now being affected by climate change. The main objective of this study is to assess the hydrometeorological vulnerabilities associated with flooding and urban landslides and to propose adaptation measures. Methods: Climatic data analyses were done by calculating indices of climate change over the period 1960-2012. Analysis of field data to determine the causes, the level of risk and its consequences in the study area was carried out using SPSS 18 software. Cartographic analysis and GIS were used to refine the work in space. Then, spatial and terrain analyses were carried out to determine the morphology of the terrain in relation to floods and landslides and their diffusion across the area. Results: The interannual changes in precipitation highlighted the surplus years (21), the deficit years (24) and the normal years (7). The Barakat method brought out an evolution of precipitation by jerks and jumps. Floods and landslides are correlated with high precipitation during surplus and normal years. Field data analyses show that the population is conscious (78%) of the risks, with 74% of them exposed, but their capacity for adaptation is very low (51%). Floods are the main risk. The soils are classed as ferralitic (80%), hydromorphic (15%) and raw mineral (5%). Slope variations (5% to 15%) of small hills and deep valleys, together with anarchic construction, favour floods and landslides during heavy precipitation. Mismanagement of waste blocks the free flow of rivers and accentuates floods. Conclusion: The vulnerability of the population to hydrometeorological risks in Yaoundé VI is the combination of variations in parameters like precipitation and temperature due to climate change and the poor planning of construction in urban areas. Owing to the lack of channels for water to circulate, the saturation of soils, the increase in heavy precipitation and the mismanagement of waste, the result is floods and landslides, which cause extensive damage to property and people.

Keywords: climate change, floods, hydrometeorological, vulnerability

Procedia PDF Downloads 449
400 Effect of Forests and Forest Cover Change on Rainfall in the Central Rift Valley of Ethiopia

Authors: Alemayehu Muluneh, Saskia Keesstra, Leo Stroosnijder, Woldeamlak Bewket, Ashenafi Burka

Abstract:

There is some scientific evidence, and a belief held by many, that forests attract rain and that deforestation contributes to a decline in rainfall. However, there is still a lack of concrete scientific evidence on the role of forests in rainfall amounts. In this paper, we investigate forest-rainfall relationships in the environmental hot spot area of the Central Rift Valley (CRV) of Ethiopia. Specifically, we evaluate long-term (1970-2009) rainfall variability and its relationship with historical forest cover, and the relationship between existing forest cover, topographical variables and rainfall distribution. The study used 16 long-term and 15 short-term rainfall stations. The Mann-Kendall test and bivariate and multiple regression models were used. The results show that forest and woodland cover declined continuously over the 40-year period (1970-2009), but annual rainfall on the rift valley floor increased by 6.42 mm/year, while on the escarpment and highlands annual rainfall decreased by 2.48 mm/year. The increase in annual rainfall on the rift valley floor is partly attributable to increased evaporation, as a result of rising temperatures, from the four lakes on the rift valley floor. Although annual rainfall is decreasing on the escarpment and highlands, there was no significant correlation between this rainfall decrease and the decline in forest and woodland cover, and rainfall variability in the region was not explained by forest cover. Hence, the decrease in annual rainfall on the escarpment and highlands is likely related to the global warming of the atmosphere and of the surface waters of the Indian Ocean. The spatial variability of the number of rainy days from systematically observed two-year rainfall data (2012-2013) was significantly explained by forest cover (distance from forest; R = -0.63), but forest cover was not a significant variable (R = -0.40) in explaining annual rainfall amount. Generally, past deforestation and existing forest cover showed very little effect on long-term and short-term rainfall distribution, but a significant effect on the number of rainy days in the CRV of Ethiopia.
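
For reference, the sketch below implements the standard Mann-Kendall trend test applied to the annual rainfall series above (basic form, no tie correction), run on a synthetic series with an imposed trend of the magnitude reported for the rift valley floor.

```python
# Minimal Mann-Kendall trend test (no tie correction; illustrative only).
import numpy as np
from math import sqrt
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: count of concordant minus discordant pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))          # two-sided p-value
    return s, z, p

# Synthetic 40-year annual rainfall with a 6.42 mm/year trend imposed:
years = np.arange(1970, 2010)
rain = 800 + 6.42 * (years - 1970) + np.random.default_rng(3).normal(0, 50, 40)
print(mann_kendall(rain))
```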

Keywords: elevation, forest cover, rainfall, slope

Procedia PDF Downloads 520
399 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?

Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire

Abstract:

The starting point of this research is a contemporary capitalist paradox: there is a real scarcity of long term investment despite the boom of potential long term investors. This gap represents a major challenge: there are important needs for long term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of financial banking intermediaries and governments to provide long term financing, raise the question of which actors are able to provide long term financing, their methods of financing and the most appropriate forms of intermediation. The issue of long term financing is deemed very important by the EU Commission, which issued a 2013 Green Paper (GP) on long-term financing of the EU economy. Among other topics, the paper discusses the impact of the recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and prudential standards for banks. For banks, prudential and accounting standards are crucial. Fair value is indeed well adapted to the trading book in a short term view, but this method hardly suits a medium and long term portfolio. Banks' ability to finance the economy and long term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures with a prudential filter to define capital needs and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long term investing. The balance between financial stability and the need to stimulate long term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be "appropriately calibrated" and "progressively implemented" so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to question to what extent the main regulatory requirements incite or constrain banks to finance long term projects. To that end, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions, focusing on particular questions related to fair value accounting and prudential norms, through a two-stage content analysis of the responses. First, we proceed to qualitative coding to identify the respondents' arguments; subsequently, we run quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the position that a large panel of European stakeholders have on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. This analysis allows us to identify some short term bias in banking regulation.

Keywords: Basel III, fair value, securitization, long term investment, banks, insurers

Procedia PDF Downloads 270
398 How Did a Blind Child Begin Understanding Her “Blind Self”? A Longitudinal Analysis of Conversation between Her and Adults

Authors: Masahiro Nochi

Abstract:

This study explores the process by which a Japanese child with congenital blindness deepens her understanding of the condition of being "unable to see" and develops the idea of a "blind self," despite having no direct experience of vision. The rehabilitation activities of a child with congenital visual impairment, video-recorded from 1 to 6 years of age, were analyzed qualitatively. The duration of the video was about 80 hours. The recordings were transcribed verbatim, and the episodes in which the child used words related to the act of "looking" were extracted. Detailed transcripts were constructed referencing the notations of conversation analysis. Characteristics of the interactions in those episodes were identified and compared longitudinally. The results showed that the child used the word "look" under certain interaction patterns, and that her bodily expressions and interaction with adults developed in conjunction with the development of her language use. Four stages were identified. At the age of 1, interactions involving "look" began to occur. The child said "Look" in the sequence: the child's "Look," an adult's "I'm looking," a certain performance by the child, and the adult's words of praise. At the age of 3, the child began to behave in accordance with the spatial attributes of the act of "looking," such as turning her face to the adult's voice before saying "Look." She also began to use the expression "Keep looking," which seemed to reflect her understanding of the temporality of the act of "looking." At the age of 4, the use of "Look" or "Keep looking" became three times more frequent. She also started to refer to the act of looking in the future, as in "Come and look at my puppy someday." At the age of 5, she moved her hands toward the adults when she was holding something she wanted to show them; she seemed to understand that people can see an object more clearly when it is in close proximity. Around that time, she began to say "I cannot see" to her mother, which suggested a heightened understanding of her own blindness. The findings indicate that as she grew up, the child came to utilize nonverbal behavior before and after the order "Look" to make the progress of the interaction with adults even more certain. As a result, actions that reflect the characteristics of sighted people's visual experience were incorporated into the interaction chain. The purpose of "Look," with which she at first intended to attract the adult's attention, changed into a request for a confirmation she was unable to make herself. It is considered that this change in word use, as well as in interaction with sighted adults, reflected her heightened self-awareness as someone who could not do what sighted people could do easily. A blind child can gradually deepen their understanding of their own blindness among the sighted people around them, and can develop a "blind self" by learning how to interact with others even without direct visual experience.

Keywords: blindness, child development, conversation analysis, self-concept

Procedia PDF Downloads 105
397 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique forming part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal, in an auto-adaptive way, into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled the "Empirical Wavelet Transform" (EWT), which consists in building a bank of filters from the segmentation of the original signal's Fourier spectrum. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described and the EAWD technique is then presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
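
The sketch below illustrates the coupling idea behind EAWD in its simplest form: decompose a signal with EMD, locate each IMF's dominant frequency, and take the midpoints between successive dominant frequencies as candidate boundaries for the Fourier-spectrum segmentation that EWT requires. This is only a sketch of the concept, not the authors' EAWD implementation; it assumes the PyEMD package (pip install EMD-signal).

```python
# EMD decomposition followed by spectrum-boundary candidates (EAWD-style idea).
import numpy as np
from PyEMD import EMD

fs = 100.0                                   # sampling frequency (Hz)
t = np.arange(0, 20, 1 / fs)
signal = (np.sin(2 * np.pi * 0.5 * t)        # three well-separated modes
          + 0.5 * np.sin(2 * np.pi * 4 * t)
          + 0.2 * np.sin(2 * np.pi * 12 * t))

imfs = EMD().emd(signal)                     # auto-adaptive decomposition

freqs = np.fft.rfftfreq(len(t), 1 / fs)
dominant = sorted(
    freqs[np.argmax(np.abs(np.fft.rfft(imf)))] for imf in imfs
)
boundaries = [(f1 + f2) / 2 for f1, f2 in zip(dominant, dominant[1:])]
print("IMF dominant frequencies (Hz):", np.round(dominant, 2))
print("candidate EWT segment boundaries (Hz):", np.round(boundaries, 2))
```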

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 120
396 Modelling Forest Fire Risk in the Goaso Forest Area of Ghana: Remote Sensing and Geographic Information Systems Approach

Authors: Bernard Kumi-Boateng, Issaka Yakubu

Abstract:

Forest fire, an uncontrolled fire occurring in nature, has become a major concern for the Forestry Commission of Ghana (FCG). Forest fires in Ghana usually result in massive destruction and take a long time for firefighting crews to bring under control. In order to assess the effect of forest fire at the local scale, it is important to consider the role fire plays in vegetation composition, biodiversity, soil erosion, and the hydrological cycle. The occurrence, frequency and behaviour of forest fires vary over time and space, primarily as a result of the complicated influences of changes in land use, vegetation composition, fire suppression efforts, and other indigenous factors. One of the forest zones in Ghana with a high level of vegetation stress is the Goaso forest area. The area has experienced changes in its traditional land use, such as hunting, charcoal production, inefficient logging practices and rural abandonment patterns. These factors, identified as major causes of forest fire, have recently modified the incidence of fire in the Goaso area. In spite of the incidence of forest fires in the Goaso forest area, most of the forest services do not provide a cartographic representation of the burned areas. As a result, a significant amount of information is required by the firefighting unit of the FCG to understand fire risk factors and their spatial effects. This study uses Remote Sensing and Geographic Information System techniques to develop a fire risk hazard model, using the Goaso Forest Area (GFA) as a case study. From the results of the study, natural forest, agricultural land and plantation cover types were identified as the major fuel-contributing loads, while water bodies, roads and settlements were identified as minor fuel-contributing loads. Based on the major and minor fuel-contributing loads, a forest fire risk hazard model with reasonable accuracy has been developed for the GFA to assist decision making.
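
A weighted-overlay computation of the kind such fire-risk models rest on is sketched below: each raster factor is rescaled to a common score, combined with weights, and classified into risk zones. The factors, weights and class breaks are placeholders, not those derived in the study.

```python
# Sketch of a weighted-overlay fire-risk raster (toy data and weights).
import numpy as np

rng = np.random.default_rng(4)
shape = (200, 200)                       # toy raster grid

fuel_load = rng.uniform(0, 1, shape)     # score from land-cover fuel classes
slope = rng.uniform(0, 1, shape)         # rescaled slope factor
dist_roads = rng.uniform(0, 1, shape)    # rescaled proximity factors
dist_settle = rng.uniform(0, 1, shape)

weights = {"fuel": 0.5, "slope": 0.2, "roads": 0.15, "settle": 0.15}  # assumed
risk = (weights["fuel"] * fuel_load + weights["slope"] * slope
        + weights["roads"] * dist_roads + weights["settle"] * dist_settle)

classes = np.digitize(risk, bins=[0.3, 0.5, 0.7])   # four risk zones
for k, label in enumerate(["low", "moderate", "high", "very high"]):
    print(label, f"{(classes == k).mean():.1%} of area")
```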

Keywords: forest, GIS, remote sensing, Goaso

Procedia PDF Downloads 434
395 Flood Hazard Assessment and Land Cover Dynamics of the Orai Khola Watershed, Bardiya, Nepal

Authors: Loonibha Manandhar, Rajendra Bhandari, Kumud Raj Kafle

Abstract:

Nepal's Terai region is part of the Ganges river basin, one of the most disaster-prone areas of the world, with recurrent monsoon flooding causing millions in damage and the death and displacement of hundreds of people and households every year. The vulnerability of human settlements to natural disasters such as floods is increasing, and mapping changes in land use practices and hydro-geological parameters is essential to developing resilient communities and strong disaster management policies. The objective of this study was to develop a flood hazard zonation map of the Orai Khola watershed and map the decadal land use/land cover dynamics of the watershed. The watershed area was delineated using the SRTM DEM, and LANDSAT images were classified into five land use classes (forest, grassland, sediment and bare land, settlement area and cropland, and water body) using pixel-based semi-automated supervised maximum likelihood classification. Decadal changes in each class were then quantified using spatial modelling. Flood hazard mapping was performed by assigning weights to the factors slope, rainfall distribution, distance from the river and land use/land cover on the basis of their estimated influence in causing flood hazard, and performing weighted overlay analysis to identify areas that are highly vulnerable. Forest and grassland coverage increased by 11.53 km² (3.8%) and 1.43 km² (0.47%), respectively, from 1996 to 2016. Sediment and bare land areas decreased by 12.45 km² (4.12%) over the same period, whereas settlement and cropland areas showed a consistent increase of 14.22 km² (4.7%), and waterbody coverage also increased by 0.3 km² (0.09%). Of the total watershed area, 1.27% (3.65 km²) was categorized as a very low hazard zone, 20.94% (60.31 km²) as a low hazard zone, 37.59% (108.3 km²) as a moderate hazard zone and 29.25% (84.27 km²) as a high hazard zone, while 31 villages comprising 10.95% (31.55 km²) fell in the very high hazard zone.
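
The decadal change quantification described above amounts to tabulating per-class areas from the two classified rasters; a minimal sketch follows. The class codes, raster contents and pixel size (30 m Landsat pixels) are illustrative assumptions.

```python
# Sketch of per-class area change between two classified land-cover rasters.
import numpy as np

CLASSES = {0: "forest", 1: "grassland", 2: "sediment/bare",
           3: "settlement/cropland", 4: "water"}
PIXEL_AREA_KM2 = (30 * 30) / 1e6          # 30 m pixels (assumed)

rng = np.random.default_rng(5)
lc_1996 = rng.integers(0, 5, size=(2000, 2000))   # stand-in classified maps
lc_2016 = rng.integers(0, 5, size=(2000, 2000))

total_km2 = lc_1996.size * PIXEL_AREA_KM2
for code, name in CLASSES.items():
    a96 = (lc_1996 == code).sum() * PIXEL_AREA_KM2
    a16 = (lc_2016 == code).sum() * PIXEL_AREA_KM2
    print(f"{name:20s} {a16 - a96:+8.2f} km2 ({(a16 - a96) / total_km2:+.2%})")
```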

Keywords: flood hazard, land use/land cover, Orai river, supervised maximum likelihood classification, weighted overlay analysis

Procedia PDF Downloads 325
396 Contamination of the Groundwater by the Flow of the Discharge in Khouribga City (Morocco) and the Danger It Presents to the Health of the Surrounding Population

Authors: Najih Amina

Abstract:

Our study focuses on monitoring the spatial evolution of a number of physico-chemical parameters of well waters located at different distances from the discharge of the city of Khouribga (S0, the upstream station; S1, S2 and S3, located 5.5, 7.5 and 11 km respectively from the solid waste discharge of the city). The absence of a source of drinking water in this region obliges the population to rely on its groundwater wells. The results show that most of the analyzed parameters exceed potable water standards from S1 onwards. At this water source, the conductivity (1290 µS cm-1; standard 1000 µS cm-1), total hardness TH (67.2 °F; standard 50 °F), Ca2+ (146 mg l-1; standard 60 mg l-1), Cl- (369 mg l-1; standard 150 mg l-1), NaCl (609 mg l-1) and methyl orange alkalinity "M. alk" (280 mg l-1) greatly exceed drinking water standards. Following these parameters downstream, some values decrease while others become important: the conductivity is always higher than 950 µS cm-1; the TH registers 72 °F at S3; Ca2+ is in the range of 153 mg l-1 at S3; Cl- and NaCl reach 426 mg l-1 and 702 mg l-1 respectively at S2; and M. alk becomes higher, reaching 430 to 350 at S3. At well S2, we found that the nitrites are well beyond the standard, at 1.05 mg l-1. By contrast, at the control station S0, the values are below or at the limit of drinking water standards: conductivity (452 µS cm-1), TH (34 °F), Ca2+ (68 mg l-1), Cl- (157 mg l-1), NaCl (258 mg l-1), M. alk (220 mg l-1). Thus, the diagnosis reveals the presence of heavy pollution caused by the leachates of the household waste discharge and by the effluents of the sewage waste water plant (SWWP). The water hardness could also be generated by the processes of erosion, leaching and soil infiltration in the region (phosphate layers, intercalated layers of marl and limestone), phenomena also promoted by the acidity due to the surrounding pollution. The source S1 is the site nearest to the discharge and the most affected by pollution, especially as it is near a superficial water source S'1 polluted by the effluents coming from the sewage waste water plant of the city. In the light of these data, we can deduce that the consumption of water from S1 does not conform to drinking water standards and could affect human health.

Keywords: physico-chemical parameters, groundwater wells, infiltration, leaching, pollution, leachate discharge effluent SWWP, human health

Procedia PDF Downloads 393
393 Assessment of Metal Dynamics in Dissolved and Particulate Phase in Human Impacted Hooghly River Estuary, India

Authors: Soumita Mitra, Santosh Kumar Sarkar

Abstract:

The Hooghly river estuary (HRE), situated at the north-eastern part of the Bay of Bengal, has global significance due to its holiness. It is of immense importance to the local population, as it provides a perpetual water supply for various activities, such as transportation, fishing, boating and bathing, to the people settled on both banks of the estuary. This study was done to assess dissolved and particulate trace metals in the estuary, covering a stretch of about 175 km. Water samples were collected from the surface (0-5 cm) along the salinity gradient, and metal concentrations were studied in both the dissolved and particulate phases using a Graphite Furnace Atomic Absorption Spectrophotometer (GF-AAS), along with some physical characteristics such as water temperature, salinity, pH, turbidity and total dissolved solids. Although significant spatial variation was noticed, little enrichment was found towards the downstream end of the estuary. The mean concentrations of the metals in the dissolved and particulate phases followed the same trend, as follows: Fe > Mn > Cr > Zn > Cu > Ni > Pb. The concentrations of the metals in the particulate phase were much greater than those in the dissolved phase, which was also depicted by the values of the partition coefficient (Kd, ml mg-1). The Kd values ranged from 1.5 × 10⁵ (for Pb) to 4.29 × 10⁶ (for Cr). The high Kd value for Cr denotes that Cr is mostly bound to the suspended particulate matter, while the lowest value, for Pb, signifies its presence more in the dissolved phase. Moreover, the concentrations of all the studied metals in the dissolved phase were many folds higher than their respective permissible limits assessed by WHO (2008, 2009 and 2011). On the other hand, according to the Sediment Quality Guidelines (SQGs), Zn, Cu and Ni in the particulate phase lay between the ERL and ERM values, but Cr exceeded the ERM value at all stations, confirming that the estuary is mostly contaminated with particulate Cr, which might cause frequent adverse effects on aquatic life. Multivariate statistical cluster analysis was also performed, which separated the stations according to the level of contamination from several point and non-point sources. Thus, the estuarine system is found to be much polluted by toxic metals, and further investigation and toxicological studies should be implemented for a full risk assessment of this system and for better management and restoration of the water quality of this globally significant aquatic system.
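
The partition coefficient used above relates the metal content of the suspended particles to the dissolved concentration; a minimal sketch of the calculation follows. The input values are arbitrary placeholders, not the measured data, and the unit conversion shown assumes the common (mg/kg)/(mg/L) convention, which yields L/kg.

```python
# Sketch of the solid-water partition coefficient Kd.
def kd_l_per_kg(c_particulate_mg_per_kg, c_dissolved_mg_per_l):
    # (mg metal per kg of suspended particles) / (mg metal per L of water)
    # -> L/kg; dividing by 1000 converts L/kg to ml/mg (1 ml/mg = 1000 L/kg).
    return c_particulate_mg_per_kg / c_dissolved_mg_per_l

# Arbitrary illustrative values: a strongly particle-bound metal (Cr-like)
# versus one relatively more present in the dissolved phase (Pb-like).
print(kd_l_per_kg(8.0e4, 0.02))   # high Kd: metal mostly on particles
print(kd_l_per_kg(1.2e3, 0.05))   # lower Kd: relatively more dissolved
```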

Keywords: dissolved and particulate phase, Hooghly river estuary, partition coefficient, surface water, toxic metals

Procedia PDF Downloads 259
392 Influence of Packing Density of Layers Placed in Specific Order in Composite Nonwoven Structure for Improved Filtration Performance

Authors: Saiyed M Ishtiaque, Priyal Dixit

Abstract:

Objectives: An approach is suggested for designing filter media that maximize the filtration efficiency of a composite nonwoven at the minimum possible pressure drop, by incorporating layers of different packing densities, induced by fibres of different deniers and by punching parameters, placed in a specific order using a sequential punching technique. X-ray computed tomography is used to measure the packing density along the thickness of the layered nonwoven structure, composed of layers of differently oriented fibres placed in various combinations. Methodology Used: This work involves the preparation of needle-punched layered structures from batts of 100 g/m² basis weight each, with fibre denier, punch density and needle penetration depth as variables, to produce a composite nonwoven of 300 g/m² basis weight. For developing the layered nonwoven fabrics, batts made of fibres of different deniers, each of 100 g/m² basis weight, were placed in various combinations. For the second set of experiments, composite nonwoven fabrics were prepared from 3-denier circular-cross-section polyester fibre of 64 mm length on a needle-punching machine, using the sequential punching technique. In this technique, three semi-punched fabrics of 100 g/m² each, having either different punch densities or different needle penetration depths, were prepared in the first phase of fabric preparation. These fabrics were then punched together to obtain an overall basis weight of 300 g/m². The total punch density of the composite nonwoven fabric was kept at 200 punches/cm², with a needle penetration depth of 10 mm. The layered structures so formed were subcategorized into two groups: homogeneous layered structures, in which all three batts comprising the nonwoven fabric were made from the same fibre denier, punch density and needle penetration depth and were placed in different positions in the respective fabric; and heterogeneous layered structures, in which the batts were made from fibres of different deniers, punch densities and needle penetration depths and were placed in different positions. Contributions: The results show that the reduction in pressure drop is not driven by the overall packing density of the layered nonwoven fabric; rather, the sequencing of layers of specific packing density in the layered structure decides the pressure drop. Accordingly, creating an inverse gradient of packing density in the layered structure provided the maximum filtration efficiency with the least pressure drop. This study paves the way for customising composite nonwoven fabrics, through the incorporation of differently oriented fibres in the constituent layers induced by the considered variables, for desired filtration properties.
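
To make the baseline explicit, the sketch below applies the simplest series model of a multilayer filter, in which layer penetrations multiply and pressure drops add, so that reordering the layers changes neither total. The order-dependence reported above therefore points to structural interactions between layers that such a model ignores. The efficiencies and pressure drops used are illustrative assumptions, not measured data.

```python
# Naive series model of a three-layer filter: order-independent totals.
from itertools import permutations

layers = {"coarse": (0.60, 40.0),   # (capture efficiency, pressure drop in Pa)
          "medium": (0.75, 70.0),   # illustrative values, not measured data
          "fine":   (0.90, 120.0)}

for order in permutations(layers):
    penetration, dp = 1.0, 0.0
    for name in order:
        eff, drop = layers[name]
        penetration *= (1.0 - eff)  # penetrations multiply through the stack
        dp += drop                  # pressure drops add in series
    print(order, f"efficiency={1 - penetration:.4f}", f"dP={dp:.0f} Pa")
```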

Keywords: filtration efficiency, layered nonwoven structure, packing density, pressure drop

Procedia PDF Downloads 53
391 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste

Authors: David Holton, Michelle Dickinson, Giovanni Carta

Abstract:

The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) present a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of the buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from neighbouring heat-generating wastes, which incrementally contributes to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangements or patterns of the containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste. A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options when faced with uncertainty at a generic stage of the development of a GDF.
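
A heavily simplified, steady-state sketch of the superposition idea behind such dimensioning follows: each container is treated as a point heat source in an infinite medium, and the temperature rise at the central container's buffer is the sum of contributions dT = Q / (4*pi*k*r) from all containers in the array. All values are illustrative assumptions; the study's semi-analytical tool also handles transient decay heat, which this sketch does not.

```python
# Steady-state point-source superposition: effect of container spacing on
# the buffer temperature rise at the centre of an n x n array.
import numpy as np

K_ROCK = 2.5      # host-rock thermal conductivity, W/(m K) (assumed)
Q = 500.0         # heat output per container, W (assumed)
R_BUFFER = 1.0    # evaluation radius at the central container's buffer, m

def buffer_rise(pitch, n=5):
    # n x n grid of containers; evaluate at the centre container's buffer
    coords = [(i * pitch, j * pitch) for i in range(-n // 2 + 1, n // 2 + 1)
                                      for j in range(-n // 2 + 1, n // 2 + 1)]
    dT = 0.0
    for x, y in coords:
        r = max(np.hypot(x, y), R_BUFFER)  # own container acts at buffer radius
        dT += Q / (4 * np.pi * K_ROCK * r)
    return dT

for p in (15.0, 20.0, 30.0):
    print(f"pitch {p:4.0f} m -> buffer temperature rise ~ {buffer_rise(p):.1f} K")
```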

Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel

Procedia PDF Downloads 262
390 The Philosophical Hermeneutics Contribution to Form a Highly Qualified Judiciary in Brazil

Authors: Thiago R. Pereira

Abstract:

Philosophical hermeneutics is able to change the Brazilian judiciary because of its understanding of the characteristics of the human being. It is impossible for a human invested in the function of judge to make absolutely neutral decisions, but philosophical hermeneutics can assist the judge in making impartial decisions based on the federal constitution. Normative legal positivism imagined a neutral judge, able to try cases without any preconceived ideas and without allowing his or her background to exert influence. When a judge decides based on clear legal rules, the problem is smaller; but when there are no clear rules and the judge must decide based on principles, the risk is that the decision rests on what the judge personally believes. Taken solipsistically, this issue gains a huge dimension. Today the Brazilian judiciary is independent, but it needs a greater knowledge of philosophy and of the philosophy of law, partly because its bigger problem is the unpredictability of its decisions. When a lawsuit is filed, the result of the judgment is almost a gamble. There must be at least minimal legal certainty and predictability in judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, has accepted the possibility of multiple answers. From the Greeks in the sixth century before Christ, through the Germans in the eighteenth century, and up to today, the constitution has been established as the great law, the Grundnorm; the relativism of life can thus be greatly reduced when the hermeneut uses the Constitution as the interpretational North, with all interpretation passing through the hermeneutic constitutional filter. For a current philosophy of law, inside a legal system with a federal constitution there is a single correct answer to a specific case; the challenge is how to find it. The answer is that we should use the constitutional principles. But in many cases principles will collide, and to resolve the collision the judge or hermeneut may take a solipsistic path, choosing whatever he or she personally believes to be right. For obvious reasons, that conduct is not safe. Thus a theory of decision is necessary to seek justice, and hermeneutic philosophy and the linguistic turn are necessary to find the right answer: the constitutionally most appropriate response. That response will not always be the answer individuals agree with, but we must put aside our preferences and defend the answer the Constitution gives us. Hermeneutics applied to law, in search of the constitutionally appropriate response, is therefore the safest way to avoid individualistic judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology used is qualitative, academic and theoretical, with the mission of proposing a new way of thinking about the science of law. The research sought to demonstrate the difficulty Brazilian courts have in departing from the secular influence of legal positivism. Moreover, it sought to demonstrate the need to think the science of law within a contemporary perspective, where the linguistic turn and philosophical hermeneutics are the surest way to conduct the science of law in the present century.

Keywords: hermeneutic, right answer, solipsism, Brazilian judiciary

Procedia PDF Downloads 330
389 Selfie: Redefining Culture of Narcissism

Authors: Junali Deka

Abstract:

“Pictures speak more than a thousand words”. It is the power of the image that it can carry multiple meanings, depending on how it is read by viewers. This research article is the outcome of an extensive study of the phenomenon of 'selfie culture' and the dire need for a self-constructed virtual identity among youths. In recent times, there has been a revolutionary change in the concept of photography, in terms of both techniques and applications. The popularity of 'self-portraits' depends mainly on the temporal space and time created on social networking sites like Facebook and Instagram. With reference to Stuart Hall's encoding and decoding process, the article studies the behaviour of users who post photographs online. The photographic messages (Roland Barthes) are interpreted differently by different viewers. The notions of 'self' and self-love, the practice of looking (Marita Sturken) and ways of seeing (John Berger) together gained new definitions and dimensions. After the Oscars night, show host Ellen DeGeneres's selfie created the most buzz and hype in social media. The term was judged the word of 2013 and has earned its place in the dictionary: in November 2013, "selfie" was announced as the "word of the year" by the Oxford English Dictionary, which noted its Australian origin; by the end of 2012, Time magazine had already considered selfie one of the "top 10 buzzwords" of that year, and although selfies had existed long before, it was in 2012 that the term "really hit the big time". The present study was carried out to understand the concept of the 'selfie-bug' and the phenomenon it has created among youth (especially students) at large in developing a pseudo-image of their own. The topic is relevant and provides a platform to discuss the cultural, psychological and sociological implications of the selfie in the age of digital technology. At the first level, a content analysis of primary and secondary sources, including newspaper articles and online resources, was carried out, followed by a small online survey conducted with the help of a questionnaire to find out students' views on the selfie and its social and psychological effects. The newspaper reports and online resources confirmed that the selfie is a new trend in digital media and has redefined the notions of beauty and self-love. Facebook and Instagram are the major platforms used to express oneself and to create a virtual identity. The findings clearly reflected the active participation of female students in comparison to male students. The study of the photographs of a few selected respondents revealed differences in attitude and image building among male and female users. The study underlines some basic questions about the desire for reconstruction of identity among the young generation: are they becoming culturally narcissistic; which factors are responsible for cultural, social and moral changes in society; and what psychological and technological effects are caused by the smartphone, culminating in the big question of whether the selfie is a social signifier of identity construction.

Keywords: culture, narcissist, photographs, selfie

Procedia PDF Downloads 388
388 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India

Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn

Abstract:

Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and to LPG, and to relate it to indoor emissions, assessed using a structured questionnaire, a spirometer and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary disease or any other respiratory disease and had been using biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spirobank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function from forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/Forced Vital Capacity (FVC) < 70%), as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation and educational status were considered. Results: Preliminary results showed that women using biomass fuels had comparatively reduced lung function (FEV1/FVC = 85% ± 5.13) relative to LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM2.5 mass concentration was 274.34 ± 314.90 µg/m³ in the biomass users' kitchens and 85.04 ± 97.82 µg/m³ in the LPG users' kitchens. Black carbon was found to be higher for biomass users (46.71 ± 46.59 µg/m³) than for LPG users (11.08 ± 22.97 µg/m³). Most of the houses had a separate kitchen. Almost all the houses that used a clean fuel like LPG still showed a minimum amount of PM2.5, which might be due to background pollution and cross-ventilation from houses using biomass fuels. Conclusions: There is an urgent need to adopt various strategies to improve indoor air quality. Knowledge of the current state of climate-active pollutant emissions from different stove designs is lacking, and the major deficiencies that need to be tackled must be identified. Moreover, advancements in research tools, measuring techniques in particular, are critical for researchers in developing countries to improve their capability to study these emissions and address the growing climate change and public health concerns.
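For illustration, the study's secondary outcome and a between-group comparison can be expressed in a few lines of Python. The data below are invented placeholders, not the study's measurements, and Welch's t-test is one plausible choice of comparison rather than necessarily the authors' method.

```python
from scipy import stats

def obstructed(fev1, fvc):
    """GOLD fixed-ratio criterion used as the secondary outcome:
    post-bronchodilator FEV1/FVC < 0.70 indicates airflow obstruction."""
    return fev1 / fvc < 0.70

# Illustrative FEV1/FVC ratios for the two exposure groups (not real data)
biomass = [0.85, 0.79, 0.88, 0.82, 0.90, 0.77]
lpg     = [0.89, 0.86, 0.91, 0.84, 0.88, 0.87]

# Welch's t-test for a difference in mean FEV1/FVC between groups
t, p = stats.ttest_ind(biomass, lpg, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}, obstructed in biomass group: "
      f"{sum(r < 0.70 for r in biomass)}")
```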

Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter

Procedia PDF Downloads 160
387 Prioritizing Biodiversity Conservation Areas based on the Vulnerability and the Irreplaceability Framework in Mexico

Authors: Alma Mendoza-Ponce, Rogelio Corona-Núñez, Florian Kraxner

Abstract:

Mexico is a megadiverse country, and it has nearly halved its natural vegetation in the last century due to agricultural and livestock expansion. The impacts of land use/cover change and climate change are unevenly distributed, and spatial prioritization to minimize the effects on biodiversity is crucial. Global and national efforts for prioritizing biodiversity conservation suggest that ~33% to 45% of Mexico should be protected; the breadth of these targets makes it difficult to direct resources. We use a framework based on vulnerability and irreplaceability to prioritize conservation efforts in Mexico. Vulnerability considers exposure, sensitivity and adaptive capacity under two scenarios (a business-as-usual, BAU, scenario based on SSP2 and RCP 4.5, and a Green scenario based on SSP1 and RCP 2.6). Exposure to land use is the magnitude of change from natural vegetation to anthropogenic covers, while exposure to climate change is the difference between current and future values under both scenarios. Sensitivity is taken as the number of endemic terrestrial vertebrate species that are critically endangered or endangered. Adaptive capacity is the ratio between the percentage of converted area (natural to anthropogenic) and the percentage of protected area at the municipality level. The results suggest that by 2050, between 11.6% and 13.9% of Mexico will show vulnerability ≥ 50%, and by 2070, between 12.0% and 14.8%, under the Green and BAU scenarios, respectively. From an ecosystem perspective, cloud forests, followed by tropical dry forests, natural grasslands and temperate forests, will be the most vulnerable (≥ 50%). Amphibians are the most threatened vertebrates: 62% of endemic amphibians are critically endangered or endangered, against 39%, 12% and 9% of mammals, birds and reptiles, respectively. However, the distribution of these amphibians covers only 3.3% of the country, while the mammals, birds and reptiles in these categories cover 10%, 16% and 29% of Mexico. Five municipalities, out of the 2,457 that Mexico has, contain 31% of the most vulnerable areas (vulnerability ≥ 70%); these municipalities account for 0.05% of Mexico. This multiscale approach can be used to direct resources to conservation targets such as ecosystems, municipalities or species, considering land use/cover change, climate change and biodiversity uniqueness.
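The abstract names the components of vulnerability but not the exact aggregation rule. The sketch below therefore assumes a simple unweighted mean of min-max-normalized exposure, sensitivity and (inverted) adaptive capacity, with invented per-municipality inputs, purely to illustrate how such a composite index can be assembled.

```python
import numpy as np

def normalize(x):
    """Min-max scale an array to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def vulnerability(exposure, sensitivity, adaptive_capacity):
    """Illustrative composite index: vulnerability rises with exposure and
    sensitivity and falls with adaptive capacity. The exact aggregation is
    not given in the abstract; an unweighted mean is assumed here."""
    e = normalize(exposure)
    s = normalize(sensitivity)
    a = normalize(adaptive_capacity)
    return (e + s + (1.0 - a)) / 3.0

# Toy per-municipality inputs: combined land-use/climate exposure, counts of
# endangered endemic vertebrates, and the protected-vs-converted area ratio
exposure    = [0.2, 0.7, 0.4, 0.9]
sensitivity = [3, 12, 1, 8]
adaptive    = [0.8, 0.1, 0.5, 0.2]
v = vulnerability(exposure, sensitivity, adaptive)
print(np.round(v, 2), "flagged (v >= 0.5):", v >= 0.5)
```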

Keywords: biodiversity, climate change, land use change, Mexico, vulnerability

Procedia PDF Downloads 150
386 Temperature-Dependent Post-Mortem Changes in Human Cardiac Troponin-T (cTnT): An Approach in Determining Postmortem Interval

Authors: Sachil Kumar, Anoop Kumar Verma, Wahid Ali, Uma Shankar Singh

Abstract:

Globally, approximately 55.3 million people die each year. In India, there were 95 lakh deaths in 2013; the number of deaths resulting from homicides, suicides and unintentional injuries in the same period was about 5.7 lakh. The ever-increasing crime rate necessitates the development of methods for determining time since death, as an erroneous time-of-death window can lead investigators down the wrong path or possibly focus a case on an innocent suspect. In this regard, research was carried out analysing the temperature-dependent postmortem degradation of the cardiac troponin-T protein (cTnT) in the myocardium as a marker for time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (in the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India) after informed consent from the relatives, and postmortem degradation was studied by incubating the cardiac tissue at room temperature (20 ± 2 °C), 12 °C, 25 °C and 37 °C for different time periods (~5, 26, 50, 84, 132, 157, 180, 205 and 230 hours). The cases included were subjects of road traffic accidents (RTA) without any prior history of disease who died in hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE) and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. The data show a distinct temporal profile corresponding to the degradation of cTnT by proteases found in cardiac muscle: the disappearance of intact cTnT and the appearance of lower-molecular-weight bands are easily observed. Western blot data clearly showed the intact protein at 42 kDa, two major fragments (27 kDa, 10 kDa), two additional minor fragments (32 kDa) and the formation of low-molecular-weight fragments as time increased. At 12 °C, the intensity of the intact cTnT band decreased more gradually than at RT, 25 °C and 37 °C. Overall, both PMI and temperature had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 38 h and at the highest temperature, 37 °C. The combination of high temperature (37 °C) and long postmortem interval (105.15 hrs) had the most drastic effect on the breakdown of cTnT. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, it shows a pseudo-first-order relationship when plotted against the log of the time postmortem. These plots show a good coefficient of correlation, r = 0.95 (p = 0.003), for the regression of the human heart under different temperature conditions. The data presented demonstrate that this technique can provide an extended time range during which the postmortem interval can be more accurately estimated.
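The pseudo-first-order relationship lends itself to a simple calibration: fit percent intact cTnT against log(PMI), then invert the line to estimate PMI for a new sample. The sketch below uses the study's sampling time points but invented densitometry percentages, so it illustrates the workflow rather than reproducing the study's regression.

```python
import numpy as np

# Illustrative calibration data (not the study's measurements):
# postmortem interval in hours and % intact cTnT from lane densitometry
pmi_hours  = np.array([5, 26, 50, 84, 132, 157, 180, 205, 230], dtype=float)
pct_intact = np.array([92, 71, 55, 41, 28, 24, 20, 17, 15], dtype=float)

# Pseudo-first-order behaviour: % intact is ~linear in log(PMI)
slope, intercept = np.polyfit(np.log(pmi_hours), pct_intact, 1)

def estimate_pmi(pct):
    """Invert the calibration line to estimate PMI (hours) from % intact cTnT."""
    return float(np.exp((pct - intercept) / slope))

r = np.corrcoef(np.log(pmi_hours), pct_intact)[0, 1]
print(f"r = {r:.2f}; a sample with 35% intact cTnT -> ~{estimate_pmi(35):.0f} h")
```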

Keywords: degradation, postmortem interval, proteolysis, temperature, troponin

Procedia PDF Downloads 367
385 West Nile Virus in North-Eastern Italy: Overview of Integrated Surveillance Activities

Authors: Laura Amato, Paolo Mulatti, Fabrizio Montarsi, Matteo Mazzucato, Laura Gagliazzo, Michele Brichese, Manlio Palei, Gioia Capelli, Lebana Bonfanti

Abstract:

West Nile virus (WNV) re-emerged in north-eastern Italy in 2008, ten years after its first appearance in Tuscany. In 2009, a national surveillance programme was implemented, and it was re-modulated in north-eastern Italy in 2011. Here, we present the results of surveillance activities in 2008-2016 in the north-eastern Italian regions, with inferences on the WNV epidemiological trend in the area. The re-modulated surveillance programmes aimed at early detection of WNV seasonal reactivation by searching for IgM antibodies in horses. In 2013, the surveillance plans were further modified to include a risk-based approach. Spatial analysis techniques, including Bernoulli space-time scan statistics, were applied to the results of the 2010–2012 surveillance on mosquitoes, equines and humans to identify areas where WNV reactivation was more likely to occur. From 2008 to 2016, resident horses tested positive for anti-WNV antibodies on a yearly basis (503 cases), including in areas where WNV circulation was not detected in mosquito populations. Surveillance activities detected 26 syndromic cases in horses, 102 infected mosquito pools and WNV in 18 dead wild birds. Human cases were also recurrently detected in the study area during the surveillance period (68 cases of West Nile neuroinvasive disease). The recurrent identification of WNV in animals, mosquitoes and humans indicates that the virus has likely become endemic in the area. In 2016, findings of WNV positives in horses or mosquitoes were included as triggers for enhancing screening activities in humans. The evolution of the epidemiological situation calls for continuous and accurate surveillance measures. The results of the 2013-2016 surveillance indicate that the risk-based approach was effective in early detection of seasonal WNV reactivation, a key factor of the integrated surveillance strategy in endemic areas.
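The Bernoulli scan statistic mentioned above can be sketched in its purely spatial form (the study applied space-time versions). The code below implements Kulldorff's Bernoulli log-likelihood ratio over circular zones with synthetic case/control locations; Monte Carlo permutation testing for significance is omitted for brevity, and all locations and radii are invented.

```python
import numpy as np

def bernoulli_llr(c, n, C, N):
    """Log-likelihood ratio of Kulldorff's Bernoulli scan statistic for a
    zone containing n points (c of them cases) out of N points (C cases)."""
    def xlogy(x, y):
        return 0.0 if x == 0 else x * np.log(y)
    if n == 0 or n == N or c / n <= (C - c) / (N - n):
        return 0.0  # only zones with elevated inside risk are of interest
    return (xlogy(c, c / n) + xlogy(n - c, (n - c) / n)
            + xlogy(C - c, (C - c) / (N - n))
            + xlogy(N - n - (C - c), (N - n - (C - c)) / (N - n))
            - xlogy(C, C / N) - xlogy(N - C, (N - C) / N))

def scan(xy, is_case, radii):
    """Scan circles centred on every point; return the best (LLR, centre, radius)."""
    xy, is_case = np.asarray(xy, float), np.asarray(is_case, bool)
    N, C = len(xy), int(is_case.sum())
    best = (0.0, None, None)
    for i in range(N):
        d = np.hypot(*(xy - xy[i]).T)
        for r in radii:
            inside = d <= r
            llr = bernoulli_llr(int(is_case[inside].sum()), int(inside.sum()), C, N)
            if llr > best[0]:
                best = (llr, i, r)
    return best

# Toy example: WNV-positive (case) vs negative (control) sampling sites,
# with a true cluster planted around (30, 60)
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))
is_case = (np.hypot(xy[:, 0] - 30, xy[:, 1] - 60) < 12) | (rng.random(200) < 0.05)
print(scan(xy, is_case, radii=[5, 10, 15, 20]))
```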

Keywords: arboviruses, horses, Italy, surveillance, west nile virus, zoonoses

Procedia PDF Downloads 340
384 Determination of Potential Agricultural Lands Using Landsat 8 OLI Images and GIS: Case Study of Gokceada (Imroz) Turkey

Authors: Rahmi Kafadar, Levent Genc

Abstract:

In the present study, the aim was to determine potential agricultural lands (PALs) on Gokceada (Imroz) Island, Canakkale province, Turkey. Seven-band Landsat 8 OLI images acquired on July 12 and August 13, 2013, and their 14-band combination image were used to identify the current Land Use Land Cover (LULC) status. Principal Component Analysis (PCA) was applied to the three Landsat datasets in order to reduce the correlation between bands. A total of six original and PCA images were classified using a supervised classification method to obtain LULC maps comprising 6 main classes ("Forest", "Agriculture", "Water Surface", "Residential Area-Bare Soil", "Reforestation" and "Other"). Accuracy assessment was performed by checking the accuracy of 120 randomized points for each LULC map. The best overall accuracy and Kappa statistic values (90.83% and 0.8791, respectively) were found for the PCA image generated from the 14-band combined image, called 3-B/JA. A Digital Elevation Model (DEM) with 15 m spatial resolution (ASTER) was used to account for topographical characteristics. Soil properties were obtained by digitizing 1:25,000 scale soil maps of the General Directorate of Rural Services. Potential agricultural lands were then determined using Geographic Information Systems (GIS). The procedure was applied considering that the "Other" class of the LULC map may be used for agricultural purposes in the future. Overlay analysis was conducted using Slope (S), Land Use Capability Class (LUCC), Other Soil Properties (OSP) and Land Use Capability Sub-Class (SUBC) layers. A total of 901.62 ha within the "Other" class (15,798.2 ha) of the LULC map was determined to be PALs. These lands were ranked as "Very Suitable", "Suitable", "Moderate Suitable" and "Low Suitable": 8.03 ha were classified as "Very Suitable", 18.59 ha as "Suitable" and 11.44 ha as "Moderate Suitable", while 756.56 ha were found to be "Low Suitable". The results obtained from this preliminary study can serve as a basis for further studies.
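The PCA step used to decorrelate the band stacks is straightforward to reproduce in principle. A minimal sketch, assuming a (bands, rows, cols) array and scikit-learn, is given below; the authors' actual processing chain (software, masking, number of retained components) is not specified in the abstract, and the random array merely stands in for the real imagery.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_bands(stack, n_components=3):
    """Decorrelate a (bands, rows, cols) image stack with PCA, as done here
    before supervised classification. Returns (n_components, rows, cols)."""
    b, r, c = stack.shape
    flat = stack.reshape(b, r * c).T  # pixels as samples, bands as features
    pcs = PCA(n_components=n_components).fit_transform(flat)
    return pcs.T.reshape(n_components, r, c)

# Toy 14-band stack standing in for the combined July + August OLI image
stack = np.random.rand(14, 100, 100).astype(np.float32)
pcs = pca_bands(stack, n_components=3)
print(pcs.shape)  # (3, 100, 100)
```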

Keywords: digital elevation model (DEM), geographic information systems (GIS), Gokceada (Imroz), Landsat 8 OLI-TIRS, land use land cover (LULC)

Procedia PDF Downloads 337
383 Caged Compounds as Light-Dependent Initiators for Enzyme Catalysis Reactions

Authors: Emma Castiglioni, Nigel Scrutton, Derren Heyes, Alistair Fielding

Abstract:

By using light as a trigger, it is possible to study many biological processes, such as the activity of genes, proteins and other molecules, with precise spatiotemporal control. Caged compounds, where biologically active molecules are generated from an inert precursor upon laser photolysis, offer the potential to initiate such biological reactions with high temporal resolution. As light acts as the trigger for cleaving the protecting group, the 'caging' technique provides a number of advantages: it can be applied intracellularly, rapidly, and in a quantitatively controlled manner. We are developing caging strategies to study the catalytic cycle of a number of enzyme systems, such as nitric oxide synthase and ethanolamine ammonia lyase. These include the use of caged substrates, caged electrons and the possibility of caging the enzyme itself. In addition, we are developing a novel freeze-quench instrument to study these reactions, which combines rapid mixing and flashing capabilities. Reaction intermediates will be trapped at low temperatures and analysed using electron paramagnetic resonance (EPR) spectroscopy to identify the involvement of any radical species during catalysis. EPR techniques typically require relatively long measurement times and, very often, low temperatures to fully characterise these short-lived species. Therefore, common rapid-mixing techniques, such as stopped-flow or quench-flow, are not directly suitable. However, the combination of rapid freeze-quench (RFQ) followed by EPR analysis provides an ideal approach to kinetically trap and spectroscopically characterise these transient radical species. In a typical RFQ experiment, two reagent solutions are delivered to the mixer via two syringes driven by a pneumatic actuator or stepper motor. The newly mixed solution is then sprayed into a cryogenic liquid or onto a cold surface, and the frozen sample is collected and packed into an EPR tube for analysis. The earliest RFQ instruments consisted of a hydraulic ram as the drive unit, with direct spraying of the sample into a cryogenic liquid (nitrogen, isopentane or petroleum). Improvements to the RFQ technique have arisen from the design of new mixers that reduce both the volume and the mixing time. In addition, the cryogenic isopentane bath has been coupled to a filtering system or replaced by spraying the solution onto a surface frozen via thermal conduction with a cryogenic liquid. In our work, we are developing a novel RFQ instrument that combines freeze-quench technology with flashing capabilities to enable studies of both thermally activated and light-activated biological reactions. This instrument also uses a new rotating-plate design based on magnetic couplings, removing the need for motorised mechanical rotation, which can otherwise be problematic at cryogenic temperatures.

Keywords: caged compounds, freeze-quench apparatus, photolysis, radicals

Procedia PDF Downloads 194
382 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS), where satellite observations intersect with water. An ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders are often adjacent to each other. One challenge is to split the photon segments along a beam so as to accurately partition them and extract only the true representative water height for each element. As far as we can establish, there is no automated procedure for making this distinction; earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called "auto-segmentation". The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It facilitates the application of ICESAT-2 ATL13 altimetry to rivers much better than previously reported approaches. The findings of our study will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross-sections and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed data at large spatial scales.
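The abstract does not publish the auto-segmentation algorithm itself, but its core idea, splitting photon heights into per-water-body segments along the track and cleaning each segment before averaging, can be sketched as below. The gap threshold, the sigma-clipping rule and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np

def auto_segment(along_track_m, height_m, gap_m=100.0, sigma=3.0):
    """Split a beam into virtual stations wherever the along-track spacing
    jumps by more than gap_m (taken as a water-body boundary), sigma-clip
    heights within each segment, and return the mean WSE per segment."""
    order = np.argsort(along_track_m)
    x, h = np.asarray(along_track_m)[order], np.asarray(height_m)[order]
    breaks = np.where(np.diff(x) > gap_m)[0] + 1
    stations = []
    for seg in np.split(np.arange(len(x)), breaks):
        hs = h[seg]
        keep = np.abs(hs - np.median(hs)) <= sigma * (np.std(hs) + 1e-9)
        stations.append((float(x[seg][0]), float(x[seg][-1]), float(hs[keep].mean())))
    return stations  # (start_m, end_m, mean_WSE_m) per virtual station

# Two water crossings ~2 km apart along the track (synthetic heights)
x = np.concatenate([np.arange(0, 400, 10.0), np.arange(2400, 2700, 10.0)])
h = np.concatenate([np.full(40, 12.3), np.full(30, 9.8)]) + np.random.normal(0, 0.05, 70)
h[5] += 2.0  # a land/canopy outlier that the sigma-clip should reject
for vs in auto_segment(x, h):
    print(vs)
```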

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 45