Search results for: brand image transfer
61 Establishing Correlation between Urban Heat Island and Urban Greenery Distribution by Means of Remote Sensing and Statistics Data to Prioritize Revegetation in Yerevan
Authors: Linara Salikhova, Elmira Nizamova, Aleksandra Katasonova, Gleb Vitkov, Olga Sarapulova.
Abstract:
While most European cities conduct research on heat-related risks, there is a research gap in the Caucasus region, particularly in Yerevan, Armenia. This study aims to test the method of establishing a correlation between urban heat islands (UHI) and urban greenery distribution for prioritization of heat-vulnerable areas for revegetation. Armenia has failed to consider measures to mitigate UHI in urban development strategies despite a 2.1°C increase in average annual temperature over the past 32 years. However, planting vegetation in the city is commonly used to deal with air pollution and can be effective in reducing UHI if it prioritizes heat-vulnerable areas. The research focuses on establishing such priorities while considering the distribution of urban greenery across the city. The lack of spatially explicit air temperature data necessitated the use of satellite images to achieve the following objectives: (1) identification of land surface temperatures (LST) and quantification of temperature variations across districts; (2) classification of massifs of land surface types using normalized difference vegetation index (NDVI); (3) correlation of land surface classes with LST. Examination of the heat-vulnerable city areas (in this study, the proportion of individuals aged 75 years and above) is based on demographic data (Census 2011). Based on satellite images (Sentinel-2) captured on June 5, 2021, NDVI calculations were conducted. The massifs of the land surface were divided into five surface classes. Due to capacity limitations, the average LST for each district was identified using one satellite image from Landsat-8 on August 15, 2021. In this research, local relief is not considered, as the study mainly focuses on the interconnection between temperatures and green massifs. The average temperature in the city is 3.8°C higher than in the surrounding non-urban areas. The temperature excess ranges from a low in Norq Marash to a high in Nubarashen. 
Norq Marash and Avan have the highest tree and grass coverage proportions, with 56.2% and 54.5%, respectively. In the other districts, the combined share of wastelands and buildings is up to three times higher than that of grass and trees, ranging from 49.8% in Quanaqer-Zeytun to 76.6% in Nubarashen. Studies have shown that decreased tree and grass coverage within a district correlates with a higher temperature increase. The temperature excess is highest in the Erebuni, Ajapnyak, and Nubarashen districts, which have less than 25% of their area covered with grass and trees. On the other hand, the Avan and Norq Marash districts show a lower temperature difference, as more than 50% of their areas are covered with trees and grass. According to the findings, a significant proportion of the elderly population (35%) aged 75 years and above resides in the Erebuni, Ajapnyak, and Shengavit districts, which are more susceptible to heat stress, with an LST higher than in other city districts. The findings suggest that comparing the distribution of green massifs with LST can contribute to the prioritization of heat-vulnerable city areas for revegetation, and the method can serve as a rationale for the formation of an urban greening program.
Keywords: heat-vulnerability, land surface temperature, urban greenery, urban heat island, vegetation
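The NDVI and surface-classification steps described in this abstract can be sketched in a few lines of Python. The index formula (NIR - Red) / (NIR + Red) is the standard one for Sentinel-2 bands 8 (NIR) and 4 (red); the five-class thresholds below are purely illustrative assumptions, not the ones used in the study.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero over dark pixels
    return np.where(denom == 0, 0.0, (nir - red) / denom)

def classify_surface(ndvi_img: np.ndarray) -> np.ndarray:
    """Bin NDVI into five surface classes (hypothetical thresholds,
    not the ones used in the study)."""
    bins = [-0.1, 0.1, 0.3, 0.5]  # water / built-up / wasteland / grass / trees
    return np.digitize(ndvi_img, bins)

# Toy 2x2 reflectance tiles standing in for Sentinel-2 band 8 and band 4
nir = np.array([[0.45, 0.30], [0.10, 0.55]])
red = np.array([[0.10, 0.25], [0.12, 0.08]])
print(ndvi(nir, red))
print(classify_surface(ndvi(nir, red)))
```

In practice the bands would be read from the Sentinel-2 granule with a raster library rather than typed in as arrays.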
Procedia PDF Downloads 72
60 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for it being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated, at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique being applied to separated flows are scarce in the literature. Moreover, most evaluations of the Reynolds number effect in separated flows have been carried out through numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out, for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. In addition, the errors obtained in the uncertainty analysis were generally low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
Keywords: ADV, experimental data, multiple Reynolds number, post-processing
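The noise-level evaluation via the power spectral density of the stream-wise velocity can be sketched as follows, assuming the Doppler noise shows up as a flat high-frequency floor in the spectrum. The 200 Hz sampling rate and the synthetic velocity record are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 200.0                      # assumed ADV sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
# Synthetic stream-wise velocity: mean flow + low-frequency turbulence + white noise
u = 0.3 + 0.05 * np.sin(2 * np.pi * 1.5 * t) + 0.01 * rng.standard_normal(t.size)

# Welch estimate of the one-sided power spectral density of the fluctuations
f, pxx = welch(u - u.mean(), fs=fs, nperseg=2048)

# White Doppler noise appears as a flat spectral floor at high frequencies;
# estimate it from the upper third of the resolved frequency range.
noise_floor = pxx[f > fs / 3].mean()
print(f"estimated noise floor: {noise_floor:.2e} m^2/s^2/Hz")
```

With real Vectrino records, this floor would be computed after the despiking/filtering step, since spikes inflate the apparent noise level.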
Procedia PDF Downloads 148
59 Expression Profiling of Chlorophyll Biosynthesis Pathways in Chlorophyll B-Lacking Mutants of Rice (Oryza sativa L.)
Authors: Khiem M. Nguyen, Ming C. Yang
Abstract:
Chloroplast pigments are extremely important during photosynthesis since they play essential roles in light absorption and energy transfer. Therefore, understanding the efficiency of chlorophyll (Chl) biosynthesis could facilitate enhancement in photo-assimilate accumulation and, ultimately, in crop yield. Chl-deficient mutants have been used extensively to study the Chl biosynthetic pathways and the biogenesis of the photosynthetic apparatus. Rice (Oryza sativa L.) is one of the leading food crops, serving as a staple food in many parts of the world. To the authors' best knowledge, Chl b-lacking rice has been found; however, the molecular mechanism of its Chl biosynthesis remains unclear compared to that of wild-type rice. In this study, the ultrastructure, photosynthetic properties, and transcriptome profile of wild-type rice (Norin No. 8, N8) and its Chl b-lacking mutant (Chlorina 1, C1) were examined. The findings showed that total Chl content and Chl b content in the C1 leaves were strongly reduced compared to N8 leaves, suggesting that the reduction in total Chl content contributes to leaf color variation at the physiological level. The plastid ultrastructure of C1 showed abnormal thylakoid membranes with loss of starch granules, a large number of vesicles, and numerous plastoglobuli. The C1 rice also exhibited thinner stacked grana, caused by a reduction in the number of thylakoid membranes per granum. Thus, the different Chl a/b ratio of C1 may reflect the abnormal plastid development and function. Transcriptional analysis identified 23 differentially expressed genes (DEGs) and 671 transcription factors (TFs) involved in Chl metabolism, chloroplast development, cell division, and photosynthesis.
The transcriptome profile and DEGs revealed that the gene encoding PsbR (a PSII core protein) was down-regulated, suggesting that lower levels of light-harvesting complex proteins are responsible for the lower photosynthetic capacity in C1. In addition, the expression levels of cell division protein (FtsZ) genes were significantly reduced in C1, causing a chloroplast division defect. A total of 19 DEGs involved in the Chl biosynthesis pathway were identified based on KEGG pathway assignment. Among these DEGs, the GluTR gene was down-regulated, whereas the UROD, CPOX, and MgCH genes were up-regulated. qPCR observations suggested that the later stages of Chl biosynthesis were enhanced in C1, whereas the early stages were inhibited. Plastid structure analysis together with transcriptomic analysis suggested that the Chl a/b ratio was amplified both by the reduction in Chl content accumulation, owing to abnormal chloroplast development, and by the enhanced conversion of Chl b to Chl a. Moreover, the results indicated the same Chl-cycle pattern in the wild-type and C1 rice, pointing to another Chl b degradation pathway. Furthermore, the results demonstrated that normal grana stacking, together with the absence of Chl b and greatly reduced levels of Chl a in C1, provides evidence that factors other than LHCII proteins are involved in grana stacking. The findings of this study provide insight into the molecular mechanisms that underlie different Chl a/b ratios in rice.
Keywords: Chl-deficient mutant, grana stacking, photosynthesis, RNA-Seq, transcriptomic analysis
Procedia PDF Downloads 124
58 Triassic and Liassic Paleoenvironments during the Central Atlantic Magmatic Province (CAMP) Effusion in the Moroccan Coastal Meseta: The Mohammedia-Benslimane-El Gara-Berrechid Basin
Authors: Rachid Essamoud, Abdelkrim Afenzar, Ahmed Belqadi
Abstract:
During the Early Mesozoic, the northwestern part of the African continent was affected by initial fracturing associated with the early stages of the opening of the Central Atlantic (Atlantic Rift). During this rifting phase, the Moroccan Meseta experienced an extensive tectonic regime. This extension favored the formation of a set of rift-type basins, including the Mohammedia-Benslimane-El Gara-Berrechid basin. Thus, it is essential to know the nature of the deposits in this basin and their evolution over time, as well as their relationship with the basaltic effusion of the Central Atlantic Magmatic Province (CAMP). These deposits are subdivided into two large series: the lower clay-salt series, attributed to the Triassic, and the upper clay-salt series, attributed to the Liassic. The two series are separated by the Upper Triassic-Lower Liassic basaltic complex. Detailed sedimentological analysis made it possible to characterize four mega-sequences, fifteen facies types, and eight architectural elements and facies associations in the Triassic series. A progressive decrease in paleo-slope over time led to the evolution of the paleoenvironment from a proximal system of alluvial fans to a braided fluvial style, then to an anastomosed system. These environments eventually evolved into an alluvial plain associated with a coastal plain where playa lakes, mudflats and lagoons developed. The pure and massive halitic facies at the top of the series probably indicate an evolution of the depositional environment towards a shallow subtidal environment. The presence of these evaporites indicates a climate that favored their precipitation, in this case, a fairly hot and humid climate. The sedimentological analysis of the supra-basaltic part shows that during the Lower Liassic, the paleo-slope after the basaltic effusion remained gentle, with distal environments.
The faciological analysis revealed the presence of four major lithofacies (sandstone, silty, clayey, and evaporitic) organized in two mega-sequences: the sedimentation of the first rock-salt mega-sequence took place in a free brine depression system, followed by saline mudflats under continental influences. The upper clay mega-sequence displays facies documenting sea level fluctuations from the final transgression of the Tethys or the opening Atlantic. Saliferous sedimentation was therefore favored from the Upper Triassic onwards, but experienced a sudden rupture caused by the emission of basaltic flows, which are interstratified in the azoic salt clays of very shallow seas. This basaltic emission, which belongs to the CAMP, would derive from fissural volcanism, probably fed through transfer faults located in the NW and SE of the basin. Its emplacement was probably subaquatic to subaerial. From a chronological and paleogeographic point of view, this main volcanism, dated between the Upper Triassic and the Lower Liassic (180-200 Ma), is linked to the fragmentation of Pangea and governed by a progressive extension initiated in the west, in close relation with the initial phases of Central Atlantic rifting, and seems to coincide with the major mass extinction at the Triassic-Jurassic boundary.
Keywords: basalt, CAMP, Liassic, sedimentology, Triassic, Morocco
Procedia PDF Downloads 75
57 Targeting Basic Leucine Zipper Transcription Factor ATF-Like Mediated Immune Cells Regulation to Reduce Crohn’s Disease Fistula Incidence
Authors: Mohammadjavad Sotoudeheian, Soroush Nematollahi
Abstract:
Crohn’s disease (CD) is a chronic inflammation of gastrointestinal segments involving immune dysregulation in genetically susceptible individuals in response to environmental triggers and the interaction between the microbiome and the immune system. Uncontrolled inflammation leads to long-term complications, including fibrotic strictures and enteric fistulae. Increased production of Th1- and Th17-cell cytokines and defects in T-regulatory cells have been associated with CD. Th17-cells are essential for protection against extracellular pathogens, but their atypical activity can cause autoimmunity. Intrinsic defects in the control of programmed cell death in the mucosal T-cell compartment are strongly implicated in the pathogenesis of CD. The apoptosis defect in mucosal T-cells in CD has been attributed to an imbalance between Bcl-2 and Bax. The immune system encounters foreign antigens through microbial colonization of mucosal surfaces or infections. In addition, FOSL downregulated IL-26 expression, a cytokine that marks inflammatory Th17-populations in patients suffering from CD. Furthermore, the expression of IL-23 is associated with the basic leucine zipper transcription factor ATF-like (Batf). Batf-deficiency demonstrated the crucial role of Batf in colitis development. Batf and IL-23 mediate their effects by inducing IL-6 production. The strong association of IL-23R, Stat3, and Stat4 with IBD susceptibility points to a critical involvement of T-cells. IL-23R levels in transfer fistula were dependent on the AP-1 transcription factor JunB, which additionally controlled levels of RORγt by facilitating DNA binding of Batf. T lymphocytes lacking JunB failed to induce IL-23- and Th17-mediated experimental colitis, highlighting the relevance of JunB for the IL-23/Th17 pathway. The absence of T-bet causes unrestrained Th17-cell differentiation. T-cells are central to immune-mediated colon fistula formation.
Th17-cells, especially, were highly prevalent in inflamed IBD tissues, and targeting RORγt is effective in preventing colitis. Intraepithelial lymphocytes (IEL) contain unique T-cell subsets, including cells expressing RORγt. Increased activated Th17-cells and decreased T-regulatory cells have been observed in inflamed intestinal tissues. T-cells differentiate in response to many cytokines, including IL-1β, IL-6, IL-23, and TGF-β, into Th17-cells, a process that is critically dependent on Batf. IL-23 promotes Th17-cell expansion in the colon. Batf manages the generation of IL-23-induced IL-23R+ Th17-cells and is necessary for TGF-β/IL-6-induced Th17-polarization. Batf-expressing T-cells are the core of T-cell-mediated colitis. The human-specific parts of three AP-1 transcription factors, FOSL1, FOSL2, and BATF, are essential during the early stages of Th17 differentiation. BATF supports the Th17 lineage, and FOSL1, FOSL2, and BATF occupy regulatory loci of genes in the Th17 lineage cascade. The AP-1 transcription factor Batf has been identified as controlling intestinal inflammation and seems to regulate pathways within lymphocytes that could, in principle, control the expression of several genes. It shows central regulatory properties over Th17-cell development and is intensely upregulated within IBD-affected tissues. Here, we demonstrate that targeting Batf in IBD appears to be a therapeutic approach that reduces colitogenic T-cell activities during fistula formation while aiming to affect inflammation in the gut epithelial cells.
Keywords: immune system, Crohn’s Disease, BATF, T helper cells, Bcl, interleukin, FOSL
Procedia PDF Downloads 145
56 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach
Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa
Abstract:
Drug discovery is shifting from small-molecule drugs targeting local active sites to middle molecules (MMs) targeting large, flat, groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as ‘difficult to drug’ with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for ‘undruggable’ intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advancements in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that, their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments.
The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Moller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues pass through the medium on the nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. In conclusion, the cell membrane permeability of various middle molecules with potent bioactivities was efficiently studied using molecular dynamics simulations, and insight into this behavior was thoroughly investigated using FMO-QM calculations. The results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for amicable membrane permeation. These results are interesting and are a nice example of a theoretical calculation approach that could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation
Procedia PDF Downloads 188
55 Urban Sprawl: A Case Study of Suryapet Town in Nalgonda District of Telangana State, a Geoinformatic Approach
Authors: Ashok Kumar Lonavath, V. Sathish Kumar
Abstract:
Urban sprawl is the uncontrolled and uncoordinated outgrowth of towns and cities. The process of urban sprawl can be described by change in pattern over time, such as a proportional increase in built-up surface relative to population, leading to rapid urban spatial expansion. Significant economic and livelihood opportunities in urban areas result in a lack of basic amenities due to unplanned growth. The patterns, processes, dynamic causes and consequences of sprawl can be explored and designed with the help of a spatial planning support system. In the Indian context, an urban area is defined as one with a population of more than 5,000, a density of more than 400 persons per sq. km, and 75% of the population involved in non-agricultural occupations. India’s urban population is increasing at the rate of 2.35% per annum. Class I towns of India account for 18.8% of towns according to the 2011 census, and for 60.4% of the total urban population. Similarly, in erstwhile Andhra Pradesh the figure is 22.9%, accounting for 68.8% of the total urban population. Suryapet town has historical recognition as the ‘Gateway of Telangana’ in the Indian state of Andhra Pradesh. The municipality was constituted in 1952 as Grade-III, upgraded to Grade-II in 1984 and to Grade-I in 1998. The area is 35 sq. km. Three major tanks are located in three different directions, and the Musi River flows at a distance of 8 km. The average ground water table is about 50 m below ground. It is a fast-growing town with a population of 106,805 and 25,448 households. The density is 3,051 persons per sq. km, making it a Class I city as per the population census. It secured the ISO 14001-2004 certificate for establishing and maintaining an environment-friendly system for solid waste disposal, the first municipality in the country to receive such a certificate.
It won the HUDCO award under environment management, an award of appreciation with cash from the Ministry of Housing and Urban Poverty Alleviation, Government of India, and undivided Andhra Pradesh under the UN Human Settlement Programme, the Greentech Excellence award, and the Supreme Court’s appreciation for solid waste management. Foreign delegates from different countries, as well as from various other states of India, visited Suryapet municipality for study tours and training programs as part of their official visits. Suryapet is located at 17°5’ North latitude and 79°37’ East longitude. The average elevation is 266 m, the annual mean temperature is 36°C, and the average rainfall is 821.0 mm. The people of this town are engaged in commercial and agricultural activities, hence the town has become a centre for marketing and stocking agricultural produce. It is also an educational centre in this region. The present paper on urban sprawl is a theoretical framework to analyze the interaction of planning and governance on the extent of outgrowth and level of services. GIS techniques, SOI toposheets, satellite imagery and image analysis techniques are extensively used to explore the sprawl and measure urban land-use. This paper concludes by outlining the challenges in addressing urban sprawl while ensuring the adequate level of services that planning and governance have to provide towards achieving sustainable urbanization.
Keywords: remote sensing, GIS, urban sprawl, urbanization
Procedia PDF Downloads 229
54 Ragging and Sludging Measurement in Membrane Bioreactors
Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd
Abstract:
Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors affect costs more significantly than membrane surface fouling, which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and ragging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred for both a cotton wool standard and samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. The bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ).
The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the pressure incline rate against flux via the flux step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal one. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux in classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contributions of fouling and clogging were appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the COD. Results indicated that whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling.
Keywords: clogging, membrane bioreactors, ragging, sludge
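The image-processing step that quantifies sludging as the ratio of clogged to unclogged channel regions can be sketched as simple grayscale thresholding; the threshold value and the toy image below are hypothetical, not taken from the study.

```python
import numpy as np

def sludging_ratio(channel_img: np.ndarray, threshold: int = 100) -> float:
    """Fraction of a grayscale channel photograph occupied by sludge solids.

    Pixels darker than `threshold` (on a 0-255 scale) are counted as clogged;
    the threshold here is a hypothetical value, not the one used in the study.
    """
    clogged = channel_img < threshold
    return clogged.sum() / channel_img.size

# Toy 4x6 "photograph" of a membrane channel: 0 = sludge, 255 = clear
img = np.full((4, 6), 255, dtype=np.uint8)
img[:, :2] = 0  # left third of the channel is clogged
print(sludging_ratio(img))  # -> 0.333...
```

In practice the photograph would first be converted to grayscale and flat-field corrected so that a single global threshold separates sludge from clear channel reliably.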
Procedia PDF Downloads 178
53 Organization Structure of Towns and Villages System in County Area Based on Fractal Theory and Gravity Model: A Case Study of Suning, Hebei Province, China
Authors: Liuhui Zhu, Peng Zeng
Abstract:
With the rapid development in China, urbanization has entered the transformation and promotion stage, and its direction of development has shifted to overall regional synergy. China has a large number of towns and villages of comparatively small scale and scattered distribution, which continually supply resources to cities, leading to urban-rural opposition, so it is difficult to achieve common development in a single town or village. In this context, regional development should focus more on towns and villages forming a synergetic system, joining the regional association with cities. Thus, the paper raises the question of how to effectively organize the towns and villages system to regulate resource allocation and improve the comprehensive value of the regional area. To answer this question, it is necessary to find a suitable research unit and analyze the present situation of its towns and villages system for optimal development. Combing through relevant research and theoretical models shows that the county is the most basic administrative unit in China that can directly guide and regulate the development of towns and villages, so the paper takes the county as the research unit. Following the theoretical concept of ‘three structures and one network’, the paper sets out a research framework to analyze the present situation of the towns and villages system, covering scale structure, functional structure, spatial structure, and organization network. The analytical methods draw on fractal theory and the gravity model, using statistical and spatial data. The scale structure analysis examines rank-size dimensions and uses the principal component method to calculate the comprehensive scale of towns and villages. The functional structure analysis examines the functional types and industrial development of towns and villages.
The spatial structure analysis examines the aggregation dimension, network dimension, and correlation dimension of spatial elements to represent the overall spatial relationships. In terms of the organization network, from entity and non-entity perspectives, the paper analyzes the transportation network and the gravitational network. Based on the analysis of the present situation, optimization strategies are proposed in order to achieve a synergetic relationship between towns and villages in the county area. The paper uses Suning county in the Beijing-Tianjin-Hebei region as a case study to apply the research framework and methods and then proposes optimization orientations. The analysis results indicate that: (1) Suning county lacks medium-scale towns to transfer effects from towns to villages. (2) The distribution of gravitational centers is uneven, and the effect of gravity is limited to nearby towns and villages; the gravitational network is incomplete, leaving economic activities scattered and isolated. (3) The overall development of the towns and villages system is immature, remaining at the ‘single heart and multi-core’ stage, and some specific optimization strategies are proposed. This study provides a regional view of the development of towns and villages and sets out a research framework and methods for the towns and villages system to form an effective synergetic relationship between them, contributing to organizing resources, stimulating endogenous motivation, and forming counter-magnets to support urban-rural integration.
Keywords: towns and villages system, organization structure, county area, fractal theory, gravity model
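The rank-size (fractal) analysis and the gravity model mentioned above can be sketched as follows. The rank-size exponent comes from a log-log fit of settlement size against rank, and the gravity model takes the familiar F = k * P_i * P_j / d^2 form; the populations, distance, and constant k below are hypothetical illustrations, not Suning data.

```python
import numpy as np

def rank_size_dimension(sizes) -> float:
    """Zipf/rank-size exponent q from the fit ln(P_r) = ln(P_1) - q ln(r).

    A q near 1 indicates a balanced settlement hierarchy; fractal studies
    often report D = 1/q as the rank-size fractal dimension.
    """
    p = np.sort(np.asarray(sizes, dtype=float))[::-1]  # sizes in descending order
    r = np.arange(1, p.size + 1)                       # ranks 1..n
    slope, _ = np.polyfit(np.log(r), np.log(p), 1)     # least-squares log-log fit
    return -slope

def gravity(p_i: float, p_j: float, d_ij: float, k: float = 1.0) -> float:
    """Gravitational interaction between two settlements: k * P_i * P_j / d^2."""
    return k * p_i * p_j / d_ij ** 2

# Hypothetical town populations (thousands), roughly Zipfian (P_r ~ 80 / r)
pops = [80, 40, 27, 20, 16]
print(round(rank_size_dimension(pops), 2))
print(gravity(80, 40, d_ij=10))  # -> 32.0
```

Ranking every town and village in the county this way, and computing pairwise gravity values, yields the gravitational network whose centers and gaps the abstract describes.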
Procedia PDF Downloads 136
52 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data
Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira
Abstract:
Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, the coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, integrating data from the Copernicus satellite and drone/unmanned aerial vehicles, validated by existing online in-situ data. Since WORSICA is operational using the European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may be applied, depending on the type of computational resources needed by each application/user. 
The service has three main sub-services: i) coastline detection; ii) inland water detection; and iii) water leak detection in irrigation networks. In the present study, an application of the service to the Óbidos lagoon in Portugal is shown, in which the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas at no additional cost. The service has several distinct methodologies implemented, based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with the tidal data obtained from the FES model, the system can estimate a coastline with the corresponding water level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas: i) emergency response, by providing fast access to maps of inundated areas to support rescue operations; ii) management decisions on the operation of hydraulic infrastructures, to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access irrigation networks, promoting their fast repair.
Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC
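The water indexes the service relies on are simple normalized band ratios. A minimal sketch of NDWI and MNDWI follows; the reflectance values are invented for illustration and are not taken from the WORSICA processing chain:

```python
def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): positive over water."""
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    """Modified NDWI (Xu): the SWIR band improves separation from built-up areas."""
    return (green - swir) / (green + swir)

# Hypothetical surface reflectances for a water pixel and a vegetation pixel
water = {"green": 0.12, "nir": 0.04, "swir": 0.02}
veg = {"green": 0.08, "nir": 0.35, "swir": 0.20}

# A simple 0-threshold rule flags water pixels
is_water = ndwi(water["green"], water["nir"]) > 0
```

Thresholding such an index per pixel, combined with tide-model water levels, is what lets a time series of detections be converted into intertidal topography.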
Procedia PDF Downloads 126
51 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often constrained to transmit only one polarization, which leads to either single or dual polarimetric imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired only with systems that transmit two orthogonal polarizations. This adds complexity to data acquisition and shortens the coverage area, or swath, of fully polarimetric images compared with that of dual or hybrid polarimetric images. The search for methods to augment dual polarimetric data to fully polarimetric data therefore aims to combine full characterization and exploitation of the backscattered field with wider coverage and lower system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature. 
Although the improvements achieved by these reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome these problems, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem, focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost, or loss, function. The proposed method is experimentally validated on real data sets and compared with a well-known standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
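The idea of steering training by combining several terms in the loss function can be illustrated schematically. The two terms and their weights below are illustrative stand-ins, not the paper's actual cost function, and the "images" are flattened lists of channel values rather than CNN tensors:

```python
def composite_loss(pred, target, w_pixel=0.5, w_span=0.25):
    """Weighted combination of loss terms, mimicking how several
    characteristic features of polarimetric images can steer training."""
    n = len(pred)
    # term 1: pixel-wise reconstruction error (mean squared error)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    # term 2: consistency of total backscattered intensity (a span-like term)
    span = abs(sum(pred) - sum(target)) / n
    return w_pixel * mse + w_span * span
```

In an actual CNN training loop, each such term would be differentiable and computed over batches; the weights control which scattering property the network is pushed to preserve.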
Procedia PDF Downloads 90
50 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
A pulley works under both compressive loading, from the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt-pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we model a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed, and finite element analysis (FEA) is conducted to investigate the stresses experienced on the pulley's inner and outer periphery. In heavy-duty applications such as automotive engines and industrial machines, the belt drive is the most frequently used mechanism for transmitting power, and very heavy circular disks are commonly used as pulleys. A pulley can be described as a drum and may have a groove between two flanges around its circumference; a rope, belt, cable, or chain running over the pulley inside the groove can be the driving element of the system. In the process of motion transmission, a pulley experiences normal and shear tractions on its contact regions, namely the belt-pulley and pulley-shaft contact surfaces. In 1895, Hertz's solution of the elastic contact problem for point and line contact of ideally smooth bodies was published, and it is still generally used for computing the actual contact zone. Detailed stress analysis in the contact regions of such pulleys is necessary to prevent early failure. This paper presents the results of finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts. Building on the literature on contact stress problems across a wide field of applications, the stress distributions generated on the shaft-pulley and belt-pulley interfaces by the applied tension and torque were evaluated using FEA. 
Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study focuses on the fatigue life estimation of a rotating part, as a component of an engine assembly, using the well-known Paris equation. Digital Image Correlation (DIC) analyses were performed using open-source software. From the displacements computed from images acquired at the minimum and maximum force, the displacement field amplitude is obtained; from these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used to estimate fatigue crack growth. Further work will extend the study to various rotating machinery applications, such as rotating flywheel disks, jet engines, compressor disks, and roller disk cutters, where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, the study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed-mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
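The Paris equation used above for fatigue life estimation, da/dN = C (ΔK)^m with ΔK = Y Δσ √(πa), can be integrated numerically over crack length to obtain cycles to failure. The material constants, geometry factor, and stress range below are placeholders, not values from this study:

```python
import math

def paris_life(a0, af, C, m, delta_sigma, Y=1.0, n=10000):
    """Cycles to grow a crack from length a0 to af under the Paris law,
    integrating dN = da / (C * dK**m) with the midpoint rule."""
    cycles = 0.0
    da = (af - a0) / n
    a = a0
    for _ in range(n):
        a_mid = a + da / 2.0
        # stress intensity factor range at the midpoint crack length
        dK = Y * delta_sigma * math.sqrt(math.pi * a_mid)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# Placeholder values: lengths in mm, stresses in MPa,
# C in (mm/cycle)/(MPa*sqrt(mm))^m
life = paris_life(a0=1.0, af=10.0, C=1e-8, m=3.0, delta_sigma=100.0)
```

Because ΔK grows with √a, most of the life is consumed while the crack is still short, which is why early detection of small cracks matters for the safe design the abstract discusses.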
Procedia PDF Downloads 124
49 Complete Genome Sequence Analysis of Pasteurella multocida Subspecies multocida Serotype A Strain PMTB2.1
Authors: Shagufta Jabeen, Faez J. Firdaus Abdullah, Zunita Zakaria, Nurulfiza M. Isa, Yung C. Tan, Wai Y. Yee, Abdul R. Omar
Abstract:
Pasteurella multocida (PM) is an important veterinary opportunistic pathogen, particularly associated with septicemic pasteurellosis, pneumonic pasteurellosis, and hemorrhagic septicemia in cattle and buffaloes. P. multocida serotype A has been reported to cause fatal pneumonia and septicemia. The Malaysian isolate PMTB2.1 of P. multocida subspecies multocida serotype A was first isolated from buffaloes that died of septicemia. In this study, the genome of P. multocida strain PMTB2.1 was sequenced using a third-generation sequencing technology, the PacBio RS II system, and analyzed bioinformatically via de novo assembly followed by in-depth comparative genomics. De novo assembly of the PacBio raw reads generated three contigs; gap filling of the aligned contigs with PCR sequencing then produced a single contiguous circular chromosome with a genomic size of 2,315,138 bp and a GC content of approximately 40.32% (accession number CP007205). The PMTB2.1 genome comprises 2,176 protein-coding sequences, 6 rRNA operons, 56 tRNAs, and 4 ncRNA sequences. A comparative genome sequence analysis of PMTB2.1 with nine complete genomes, comprising Actinobacillus pleuropneumoniae, Haemophilus parasuis, Escherichia coli, and six P. multocida genomes (PM70, PM36950, PMHN06, PM3480, PMHB01, and PMTB2.1), was carried out based on OrthoMCL analysis and a Venn diagram. The analysis showed that 282 coding sequences (13%) are unique to PMTB2.1 and that 1,125 coding sequences have orthologs in all genomes, reflecting the overall close relationship of these bacteria and supporting their classification in the Gamma subdivision of the Proteobacteria. In addition, genomic distance analysis among all nine genomes indicated that PMTB2.1 is closely related to the other five P. multocida genomes, with genomic distances of less than 0.13. 
Synteny analysis shows subtle differences in genetic structure among the different P. multocida genomes, indicating frequent gene transfer events among P. multocida strains. However, PM3480 and PM70 exhibited exceptionally large structural variation, being swine and chicken isolates, respectively. Furthermore, the genomic structure of PMTB2.1 most resembles that of PM36950, with a genome approximately 34,380 bp smaller; the strain-specific Integrative and Conjugative Element (ICE) found in PM36950 is absent from PMTB2.1. Meanwhile, two intact prophage sequences of approximately 62 kb were found only in PMTB2.1, one of which is similar to the transposable phage SfMu. A phylogenomic tree was constructed based on OrthoMCL analysis and rooted with E. coli, A. pleuropneumoniae, and H. parasuis. The genome of P. multocida strain PMTB2.1 clustered with the bovine isolates PM36950 and PMHB01, separate from the avian isolate PM70 and the swine isolates PM3480 and PMHN06, and distant from Actinobacillus and Haemophilus. Previous studies based on Single Nucleotide Polymorphisms (SNPs) and Multilocus Sequence Typing (MLST) were unable to show a clear relatedness between P. multocida phylogeny and host. In conclusion, this study has provided insight into the genomic structure of PMTB2.1 in terms of potential genes that can function as virulence factors, for future studies elucidating the mechanisms behind the bacterium's ability to cause disease in susceptible animals.
Keywords: comparative genomics, DNA sequencing, phage, phylogenomics
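The ortholog partitioning behind the Venn-diagram analysis reduces to set operations over gene families: the core shared by all genomes versus families unique to one strain. The gene-family identifiers below are invented toy data, not the actual OrthoMCL clusters:

```python
# Toy ortholog-family sets per genome (invented identifiers)
genomes = {
    "PMTB2.1": {"fam1", "fam2", "fam3", "fam4", "fam5"},
    "PM70": {"fam1", "fam2", "fam6"},
    "PM36950": {"fam1", "fam2", "fam3", "fam7"},
}

# Core families shared by every genome (orthologs in all)
core = set.intersection(*genomes.values())

# Families unique to each genome (present nowhere else)
unique = {
    name: families - set().union(*(f for n, f in genomes.items() if n != name))
    for name, families in genomes.items()
}
```

Applied to real cluster tables, the same two operations yield the counts reported above: the 1,125 families with orthologs in all genomes and the 282 coding sequences unique to PMTB2.1.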
Procedia PDF Downloads 188
48 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field
Authors: Yana Snegireva
Abstract:
Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities pose great challenges to understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. Investigating carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, yet difficult, owing to the complexity of their fracture networks; the resulting geological uncertainties matter for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. There is therefore a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The approach adopted here is hybrid fracture modeling, combining the discrete fracture network (DFN) method with an implicit fracture network, which offers enhanced accuracy and reliability in characterizing the complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method to the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to demonstrate the method's suitability for these geological conditions. The DFN method treats fractures as discrete entities, capturing their geometry, orientation, and connectivity. The method has a significant drawback, however, since the number of fractures in a field can be very high. 
Due to limitations in the amount of main memory, it is very difficult to represent all of these fractures explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a discrete fracture network (DFN) model can be constructed to represent the characteristics of the hydraulically relevant fractures. The results obtained from the DFN modeling provide valuable insights into the behavior of the Eastern Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and of optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual-porosity and dual-permeability models. This approach treats fractures as separate, interconnected flow paths within the reservoir matrix, allowing the characterization of dual-porosity media. The case study of the Eastern Siberia field demonstrates the effectiveness of the hybrid modeling method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs through the use of enhanced and improved oil recovery methods.
Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model
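At its simplest, building a stochastic DFN realization means sampling fracture centres, orientations, and lengths from assumed distributions. The uniform orientation and exponential length model below are common textbook assumptions, not distributions fitted to the Eastern Siberia data:

```python
import math
import random

def sample_dfn(n, region=100.0, mean_len=10.0, seed=42):
    """Sample a 2-D discrete fracture network: each fracture is a line
    segment defined by a random centre, orientation, and length."""
    rng = random.Random(seed)
    fractures = []
    for _ in range(n):
        cx, cy = rng.uniform(0, region), rng.uniform(0, region)
        theta = rng.uniform(0, math.pi)            # uniform orientation
        length = rng.expovariate(1.0 / mean_len)   # exponential length
        dx = length / 2 * math.cos(theta)
        dy = length / 2 * math.sin(theta)
        fractures.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return fractures

net = sample_dfn(500)
# P21 fracture intensity: total fracture length per unit area
p21 = sum(math.dist(p, q) for p, q in net) / 100.0 ** 2
```

In practice, the intensity target (P21 here) is conditioned on fracture density logs, and only the hydraulically relevant fractures are kept explicit, with the rest handled implicitly, which is exactly the motivation for the hybrid model.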
Procedia PDF Downloads 75
47 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology
Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey
Abstract:
In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes software that can quickly process and reliably visualize diffusion data, equipped with tools for analyzing them for different tasks, indispensable. We are developing the «MRDiffusionImaging» software in standard C++. The subject-matter logic has been moved to separate class libraries and can be used on various platforms. The user interface uses Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 and .NET Framework platform ecosystems. One of its important features is a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set the properties of objects with hierarchical relationships. Graphics are rendered using DirectX. The MRDiffusionImaging software package processes diffusion magnetic resonance imaging (dMRI) data, allowing images to be loaded and viewed sorted by series. An algorithm for 'masking' dMRI series based on T2-weighted images was developed, using a deformable surface model to exclude tissues unrelated to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has also been developed; the maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is interpreted geometrically as an ellipsoid, an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for segmenting white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). 
A tool for calculating the mean diffusivity and fractional anisotropy has been created, from which quantitative maps can be built for solving various clinical problems. Functionality has been added for clustering and segmenting images to individualize the clinical target volume of radiation treatment and subsequently assess the response (median Dice score = 0.963 ± 0.137). The white matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one based on the Hough transform. The latter tests candidate curves in each voxel, assigning each a score computed from the diffusion data, and then selects the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing and analysis, for inclusion in radiotherapy planning and the evaluation of its results.
Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography
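From the eigenvalues of the diffusion tensor (the semi-axes of the diffusion ellipsoid), mean diffusivity (MD) and fractional anisotropy (FA) follow from standard closed-form expressions. This is a minimal Python sketch of those formulas, not the package's C++ implementation, and the example eigenvalues are illustrative:

```python
import math

def mean_diffusivity(l1, l2, l3):
    """MD: the average of the three diffusion tensor eigenvalues."""
    return (l1 + l2 + l3) / 3.0

def fractional_anisotropy(l1, l2, l3):
    """FA: 0 for isotropic diffusion, approaching 1 for a single
    dominant diffusion direction (e.g., a coherent fiber bundle)."""
    md = mean_diffusivity(l1, l2, l3)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den)

# Illustrative eigenvalues (in 10^-3 mm^2/s) for a white-matter-like voxel
fa_wm = fractional_anisotropy(1.7, 0.3, 0.2)
```

Evaluating these per voxel is precisely what produces the quantitative FA and MD maps mentioned above, and high-FA voxels seed the tractography.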
Procedia PDF Downloads 85
46 A Basic Concept for Installing Cooling and Heating System Using Seawater Thermal Energy from the West Coast of Korea
Authors: Jun Byung Joon, Seo Seok Hyun, Lee Seo Young
Abstract:
As carbon dioxide emissions increase due to rapid industrialization and reckless development, abnormal climate events such as floods and droughts are occurring. To respond to such climate change, the use of fossil fuels is being reduced and the share of eco-friendly renewable energy is gradually increasing. Korea is an energy-resource-poor country that depends on imports for 93% of its total energy. As instability in the global energy supply chain, experienced during the Russia-Ukraine crisis, increases, countries around the world are resetting energy policies to minimize energy dependence and strengthen security. Seawater thermal energy is a renewable energy that replaces conventional air-source heat. Because seawater has a higher specific heat than air, it can cool and heat the main spaces of buildings with greater heat-transfer efficiency, reducing the fossil-fuel power consumed for electricity generation and thus minimizing carbon dioxide emissions. In addition, because only the temperature of the seawater is used, in a limited way, the effect on the marine environment is very small. K-water carried out a demonstration project supplying cooling and heating energy to spaces such as the central control room and presentation room of a management building, drawing the heat source from seawater circulated through a tidal power plant's waterway. Compared with the East Sea and the South Sea, the west coast has a large tidal range and low seawater temperatures with a small seasonal difference; the main system was designed with these characteristics in mind, and its performance was verified through operation during the demonstration period. In addition, facility improvements were made for major deficiencies to strengthen monitoring functions, provide user convenience, and improve facility soundness. 
To spread these achievements, a basic concept was developed to expand the seawater heating and cooling system to a scale of 200 USRT at the Tidal Culture Center. With the operational experience of the demonstration system, it will be possible to establish an optimal seawater-based cooling and heating system suited to the characteristics of the west coast. This would reduce operating costs by KRW 33.31 million per year compared with air-source systems, and through joint industry-university-research work it will be possible to localize major equipment and materials and develop key element technologies to revitalize the seawater heat business and enter overseas markets. Government efforts are needed to expand seawater heating and cooling. Seawater thermal energy uses only the thermal energy of effectively unlimited seawater, and it has less environmental impact than river-water thermal energy, avoiding pollution factors such as bottom dredging, excavation, and sand or stone extraction. It is therefore necessary to accelerate project promotion by drastically simplifying unnecessary licensing and permission procedures. In addition, support should be provided to secure business feasibility, such as substantially exempting public-waters usage fees, to actively encourage development by the private sector.
Keywords: seawater thermal energy, marine energy, tidal power plant, energy consumption
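The advantage of seawater over air as a heat source or sink follows directly from Q = m·c_p·ΔT. The specific heats below are textbook values; the flow rate and temperature drop are illustrative, not the demonstration system's design figures:

```python
def heat_rate_kw(flow_kg_s, cp_kj_per_kg_k, delta_t_k):
    """Thermal power (kW) extracted from a fluid stream: Q = m * c_p * dT."""
    return flow_kg_s * cp_kj_per_kg_k * delta_t_k

# Illustrative comparison: 50 kg/s of fluid cooled by 3 K
seawater = heat_rate_kw(50.0, 3.99, 3.0)   # c_p of seawater ~ 3.99 kJ/(kg*K)
air = heat_rate_kw(50.0, 1.005, 3.0)       # c_p of air ~ 1.005 kJ/(kg*K)
```

Per unit mass flow and temperature drop, seawater carries roughly four times the heat of air, which is why a seawater-source system can reach the same building load with smaller flows and lower pumping and compressor power.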
Procedia PDF Downloads 102
45 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan
Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad
Abstract:
Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials, such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing the larger crystals required by X-ray crystallography is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Collecting data with a conventional scintillator-based, fiber-coupled camera therefore brings additional challenges: the noise introduced during electron-to-photon conversion in the scintillator and the transfer of light through the fibers to the sensor results in a poor signal-to-noise ratio and demands relatively high, often specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated by using a direct detection camera, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. 
The K3 is an electron-counting detector optimized for low-dose applications (such as structural biology cryo-EM), while Stela is a counting electron detector optimized for diffraction applications, with high speed and high dynamic range. Lastly, the data collection workflow, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, is another challenge of the 3DED technique. Traditionally this has all been done manually, or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in the DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow acquisitions to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron-counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small-molecule structure determination. We will also show how the Latitude D software facilitates collecting such data in an integrated and fully automated user interface.
Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, DigitalMicrograph, proteins, small molecules
Procedia PDF Downloads 107
44 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine
Authors: D. Madhushanka, Y. Liu, H. C. Fernando
Abstract:
Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Fire severity is generally assessed with the Normalized Burn Ratio (NBR) index: pre- and post-fire satellite images are preprocessed, and their bitemporal difference, the dNBR, is calculated, traditionally through manual comparison. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas into seven classification levels proposed by the USGS. This procedure generates a burn severity report for an area chosen manually by the user. This study's objective is to automate the above process in a tool named the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to provide regular burnt-area severity mapping from a medium-spatial-resolution sensor (10 - 20 m). The tool uses machine learning classification techniques to identify burnt areas using the NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns in fire severity mapping. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool, which includes a Graphical User Interface (GUI) to make it user-friendly. 
The advantage of this tool is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains National Park forest, affected by the Australian fire season of 2019 - 2020, is used to describe the workflow of the WWSAT. At this site, more than 7,809 km2 of burnt area was detected using Sentinel-2 data, with an error below 6.5% compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out: high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. Around 983 km2 was found to have burnt, of which high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through visual inspection, made possible by the cloud-free images generated by WWSAT. This tool is cost-effective for calculating burnt area, since the satellite images are free and the cost of field surveys is avoided.
Keywords: burnt area, burnt severity, fires, Google Earth Engine (GEE), Sentinel-2
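The NBR/dNBR pipeline at the core of the tool can be sketched in plain Python. The severity thresholds are the commonly cited USGS ones, and the reflectance values are invented for illustration (the actual tool computes these per pixel over Sentinel-2 composites in GEE):

```python
def nbr(nir, swir):
    """Normalized Burn Ratio: high over healthy vegetation, low over burns."""
    return (nir - swir) / (nir + swir)

def severity(dnbr):
    """USGS-style dNBR severity classes (seven levels)."""
    if dnbr < -0.25:
        return "high post-fire regrowth"
    if dnbr < -0.1:
        return "low post-fire regrowth"
    if dnbr < 0.1:
        return "unburnt"
    if dnbr < 0.27:
        return "low severity"
    if dnbr < 0.44:
        return "moderate-low severity"
    if dnbr < 0.66:
        return "moderate-high severity"
    return "high severity"

pre = nbr(0.5, 0.1)    # healthy vegetation: high NBR
post = nbr(0.2, 0.3)   # burnt surface: low NBR
label = severity(pre - post)
```

Mapping every pixel of the pre/post composites through these two functions yields the per-class area percentages reported for the two case studies.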
Procedia PDF Downloads 235
43 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach
Authors: John D. Sanderson
Abstract:
The popularity of American cinema worldwide has contributed, by means of screen translation, to the development of sociolects related to specific film genres in other cultural contexts, in many cases eluding norms of usage in the target language; the result of this process has come to be known as 'dubbese'. A consequence for reception in countries where local audiovisual fiction is consumed far less than imported American productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science fiction, westerns or sword-and-sandal films, for instance, generates linguistic expectations in international audiences, who more readily accept the sociolects assimilated through the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may originally belong to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether translation for dubbing or subtitling, has in many cases diachronically evolved into a canonized sociolect, not only accepted but also required by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in national productions in order to target audiences of the former. This commercial strategy, however, had already been deployed decades earlier, when Spain became a favored location for the shooting of foreign films in the early 1960s.
The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings, made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns dubbed into English (some of the names of actors and directors were anglicized to that purpose). The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, ranging chronologically from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of the Spaghetti Western, from Una tumba para el sheriff (Mario Caiano; English title Lone and Angry Man, William Hawkins) to Tu fosa será la exacta, amigo (Juan Bosch, 1972; English title My Horse, My Gun, Your Widow, John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.
Keywords: dubbing, film genre, screen translation, sociolect
42 Continued Usage of Wearable Fitness Technology: An Extended UTAUT2 Model Perspective
Authors: Rasha Elsawy
Abstract:
Aside from the rapid growth of global information technology and the Internet, another key trend is the swift proliferation of wearable technologies, whose future is very bright as an emerging revolution in the technological world. Beyond this, individual continuance intention toward IT is an important area that has drawn academics' and practitioners' attention. The literature shows that continued usage is an important concern that needs to be addressed for any technology to be advantageous and for consumers to succeed. However, consumers noticeably abandon their wearable devices soon after purchase, losing all subsequent benefits that can only be achieved through continued usage. Purpose: This thesis aims to develop an integrated model designed to explain and predict consumers' behavioural intention (BI) and continued use (CU) of wearable fitness technology (WFT), and thereby to identify the determinants of the CU of technology. The question arises as to whether there are differences between technology adoption and post-adoption (CU) factors. Design/methodology/approach: The study employs the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), which has the best explanatory power, as an underpinning framework, extending it with further factors along with user-specific personal characteristics as moderators. All items will be adapted from previous literature and slightly modified for the WFT/smartwatch context. A longitudinal investigation will be carried out to examine the research model, using a survey that includes the constructs involved in the conceptual model. A quantitative approach based on a questionnaire survey will collect data from existing wearable technology users. Data will be analysed using the structural equation modelling (SEM) method based on IBM SPSS Statistics and AMOS 28.0.
Findings: The research findings will provide unique perspectives on user behaviour, intention, and actual continuance usage when accepting WFT. Originality/value: Unlike previous works, the current thesis comprehensively explores the factors that affect consumers' decisions to continue using wearable technology, which are influenced by technological/utilitarian, affective, emotional, psychological, and social factors, along with the role of the proposed moderators. A novel research framework is proposed by extending the UTAUT2 model with additional contextual variables classified into performance expectancy, effort expectancy, social influence (societal pressure regarding body image), facilitating conditions, hedonic motivation (split into two concepts: perceived enjoyment and perceived device annoyance), price value, and habit-forming techniques, and by adding technology upgradability as a determinant of consumers' behavioural intention and continuance usage of information technology (IT). Further, drawing on personality traits theory, relevant user-specific personal characteristics (openness to technological innovativeness, conscientiousness in health, extraversion, neuroticism, and agreeableness) are proposed to moderate the research model. Thus, the present thesis offers a more convincing explanation that is expected to provide theoretical foundations for future research on emerging IT (such as wearable fitness devices) from a behavioural perspective.
Keywords: wearable technology, wearable fitness devices/smartwatches, continuance use, behavioural intention, upgradability, longitudinal study
41 Identification of Tangible and Intangible Heritage and Preparation of Conservation Proposal for the Historic City of Karanja Laad
Authors: Prachi Buche Marathe
Abstract:
Karanja Laad is a city located in the Vidarbha region of the state of Maharashtra, India. It has a wealth of tangible and intangible heritage in the form of monuments, precincts, groups of structures, festivals, and procession routes, which has been neglected and lost over time. Three different religions, Hinduism, Islam, and Jainism, along with the town's association as the birthplace of Swami Nrusinha Saraswati, an exponent of the Datta Sampradaya sect, and the British colonial layer, have shaped the culture and society of the place over the years. The architecture of Karanja Laad, combining all these historic layers, has enhanced its unique historic and cultural value. Karanja Laad is also a traditional historic trading town with a unique hybrid architectural style and has good potential to develop as a tourist destination alongside its present image as a pilgrimage destination of the Datta Sampradaya. The aim of the research is to prepare a conservation proposal for the historic town along with a management framework. The objectives of the research are to study the evolution of Karanja town, to identify the cultural resources along with the issues of the historic core of the city, and to understand the Datta Sampradaya, the contribution of Saint Nrusinha Saraswati to the religious sect, and his association with Karanja as an important personality. The methodology of the research comprises site visits to Karanja, field surveys for documentation, and discussions and questionnaires with residents to establish the heritage and identify potential and issues within the historic core, thereby establishing a case for conservation. Field surveys were conducted for town-level study of land use, open spaces, occupancy, ownership, traditional commodities and communities, infrastructure, streetscapes, and precinct activities during festival and non-festival periods.
Building-level study includes establishing various typologies, such as residential, institutional, commercial, religious, and traditional infrastructure with mythological references, like waterbodies (kund), lakes, and wells. One of the main issues is the loss of the traditional footprint, as well as of traditional open spaces, due to new illegal encroachments and the lack of guidelines for new additions to conserve the original fabric of the structures. Traditional commodities are being lost, since skills like pottery and painting are not promoted. Lavish bungalows like the Kannava mansion and the main temple wada (birthplace of the saint) have great potential to be developed as museums through adaptive re-use, which will, in turn, attract many visitors during festivals and boost the economy. Festival procession routes can be identified and a heritage walk developed so as to highlight the traditional features of the town. The overall study has resulted in a heritage map with 137 heritage structures identified as having potential. The conservation proposal is worked out at the town, precinct, and building levels, with interventions such as developing construction guidelines for further development and establishing a heritage cell consisting of architects and engineers for the upliftment of the existing rich heritage of Karanja city.
Keywords: built heritage, conservation, Datta Sampradaya, Karanja Laad, Swami Nrusinha Saraswati, procession route
40 Developing and Testing a Questionnaire of Music Memorization and Practice
Authors: Diana Santiago, Tania Lisboa, Sophie Lee, Alexander P. Demos, Monica C. S. Vasconcelos
Abstract:
Memorization has long been recognized as an arduous and anxiety-evoking task for musicians, and yet it is an essential aspect of performance. Research shows that musicians are often not taught how to memorize. While the memorization and practice strategies of professionals have been studied, little research has examined how student musicians learn to practice and memorize music in different cultural settings. We present the process of developing and testing a questionnaire of music memorization and musical practice for student musicians in the UK and Brazil. The survey was developed for a cross-cultural research project aiming to examine how young orchestral musicians (aged 7–18 years) in different learning environments and cultures engage in instrumental practice and memorization. The questionnaire development involved a UK/US/Brazil research team of music educators and performance science researchers. A pool of items was developed for each aspect of practice and memorization identified, based on the literature and personal experience, and adapted from existing questionnaires. Item development took into consideration the varying levels of cognitive and social development of the target populations, as well as the diverse target learning environments. Items were initially grouped in accordance with a single underlying construct/behavior. The questionnaire comprised three sections: a demographics section, a section on practice (29 items), and a section on memorization (40 items). Next, the response process was considered, and a 5-point Likert scale ranging from 'always' to 'never', with a verbal label and an image assigned to each response option, was selected, following effective questionnaire design for children and youths. Finally, a pilot study was conducted with young orchestral musicians from diverse learning environments in Brazil and the United Kingdom.
Data collection took place in either one-to-one or group settings to accommodate the participants. Cognitive interviews were used to establish response-process validity by confirming the readability and accurate comprehension of the questionnaire items or highlighting the need for item revision. Internal reliability was investigated by measuring the consistency of the item groups using Cronbach's alpha. The pilot study successfully relied on the questionnaire to generate data about the engagement in instrumental practice and memorization of young musicians of different levels and instruments, across different learning and cultural environments. Interaction analysis of the cognitive interviews undertaken with these participants, however, exposed the fact that certain items, and the response scale, could be interpreted in multiple ways. The questionnaire text was therefore revised accordingly. The low Cronbach's alpha scores of many item groups indicated another issue with the original questionnaire: its low level of internal reliability. Several reasons for this poor reliability can be suggested, including the issues with item interpretation revealed through the interaction analysis, the small number of participants (34), and the elusive nature of the construct in question. The revised questionnaire measures 78 specific behaviors or opinions and provides an efficient means of gathering information about the engagement of young musicians in practice and memorization on a large scale.
Keywords: cross-cultural, memorization, practice, questionnaire, young musicians
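For reference, Cronbach's alpha for an item group is computed from the item variances and the variance of the summed scale. A minimal sketch (illustrative only, not the authors' analysis script) for a respondents-by-items matrix of Likert scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the group
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly consistent item groups yield alpha of 1.0, while the low values reported here would fall well below the conventional 0.7 acceptability threshold.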
39 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political, and administrative perspectives. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures, and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. Depending on their age, composition, and surrounding environmental conditions, these environments are tremendous carbon sinks and biodiversity reserves, and wetlands play an important role in global climate projections. Covering more than 3% of the Earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies, and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) precisely in order to be able to take action. Wetlands are no exception to this rule, even if delimiting them seems a difficult exercise, since their main characteristic is often to occupy the transition between aquatic and terrestrial environments.
However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows quantifying and characterizing the characteristic wetland types for each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare), which generally represent spatially complex features. Indeed, the use of very high spatial resolution images (>3m) is necessary to map both small and large areas. Moreover, with the recent evolution of artificial intelligence (AI), deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimagery) using two object-oriented approaches, the nearest-neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water-body edges, mountain wetlands, river edges, and brackish marshes), with a kappa index higher than 85%.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
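The accuracy reported above is a kappa index, i.e. agreement between classified and reference labels corrected for chance. A minimal sketch of how Cohen's kappa is computed from paired label arrays (illustrative only, not the authors' processing chain):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between reference and classified labels,
    corrected for the agreement expected by chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    index = {lab: i for i, lab in enumerate(labels)}
    n = len(y_true)
    # build the confusion matrix
    cm = np.zeros((len(labels), len(labels)))
    for t, p in zip(y_true, y_pred):
        cm[index[t], index[p]] += 1
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)
```

A kappa above 0.85, as reported for the k-NN delineation, is conventionally read as almost perfect agreement.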
38 Fe Modified Tin Oxide Thin Film Based Matrix for Reagentless Uric Acid Biosensing
Authors: Kashima Arora, Monika Tomar, Vinay Gupta
Abstract:
Biosensors have found potential applications ranging from environmental testing and biowarfare agent detection to clinical testing, health care, and cell analysis. This is driven in part by the desire to decrease the cost of health care and to obtain precise information about a patient's health status more quickly through the development of various biosensors, which have become increasingly prevalent in clinical testing and point-of-care testing for a wide range of biological analytes. Uric acid is an important byproduct in the human body, and a number of pathological disorders are related to its high concentration. In the past few years, rapid growth in the development of new materials and improvements in sensing techniques have led to the evolution of advanced biosensors. In this context, metal oxide thin film based matrices, due to their biocompatible nature, strong adsorption ability, high isoelectric point (IEP), and abundance in nature, have become the materials of choice for recent technological advances in biotechnology. Wide band-gap metal oxide semiconductors including ZnO, SnO₂, and CeO₂ have gained much attention as matrices for the immobilization of various biomolecules. Despite having multifunctional properties suited to a broad range of applications, including transparent electronics, gas sensors, acoustic devices, and UV photodetectors, tin oxide (SnO₂), a wide band gap semiconductor (Eg = 3.87 eV), has not been explored much for biosensing. To realize a high-performance miniaturized biomolecular electronic device, RF sputtering is considered the most promising technique for the reproducible growth of good-quality thin films, controlled surface morphology, and the desired film crystallization with improved electron-transfer properties.
Recently, iron oxide and its composites have been widely used as matrices for biosensing applications, exploiting the electron communication feature of Fe for the detection of various analytes such as urea, hemoglobin, glucose, phenol, L-lactate, and H₂O₂. However, to the authors' knowledge, no work has been reported on modifying the electronic properties of SnO₂ by implanting a suitable metal (Fe) to induce a redox couple in it and utilizing it for the reagentless detection of uric acid. In the present study, an Fe-implanted SnO₂ based matrix has been utilized for a reagentless uric acid biosensor. Implantation of Fe into the SnO₂ matrix is confirmed by energy-dispersive X-ray spectroscopy (EDX) analysis. Electrochemical techniques have been used to study the response characteristics of the Fe-modified SnO₂ matrix before and after uricase immobilization. The developed uric acid biosensor exhibits a high sensitivity of about 0.21 mA/mM and a linear variation in current response over the concentration range from 0.05 to 1.0 mM of uric acid, besides a long shelf life (~20 weeks). The Michaelis-Menten kinetic parameter (Km) is found to be relatively very low (0.23 mM), which indicates the high affinity of the fabricated bioelectrode towards uric acid (the analyte). Also, the presence of other interferents found in human serum has a negligible effect on the performance of the biosensor. Hence, the obtained results highlight the importance of the implanted Fe:SnO₂ thin film as an attractive matrix for the realization of reagentless biosensors for uric acid.
Keywords: Fe implanted tin oxide, reagentless uric acid biosensor, RF sputtering, thin film
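As a worked illustration of how an apparent Km is extracted from such a current-concentration calibration, the classic Lineweaver-Burk (double-reciprocal) linearisation can be sketched. The numbers in the test are synthetic, chosen only to mirror the reported Km of 0.23 mM; this is not the authors' actual fitting procedure:

```python
import numpy as np

def michaelis_menten_fit(conc, current):
    """Estimate Imax and apparent Km via a Lineweaver-Burk plot:
    1/I = (Km/Imax)*(1/C) + 1/Imax, fitted by least squares."""
    conc, current = np.asarray(conc, float), np.asarray(current, float)
    slope, intercept = np.polyfit(1 / conc, 1 / current, 1)
    imax = 1 / intercept      # maximum response at saturating analyte
    km = slope * imax         # concentration at half-maximal response
    return imax, km
```

A small Km, as reported here, means the bioelectrode reaches half its maximal current at a low uric acid concentration, i.e. high affinity for the analyte.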
37 Mesovarial Morphological Changes in Offspring Exposed to Maternal Cold Stress
Authors: Ariunaa S., Javzandulam E., Chimegsaikhan S., Altantsetseg B., Oyungerel S., Bat-Erdene T., Naranbaatar S., Otgonbayar B., Suvdaa N., Tumenbayar B.
Abstract:
Introduction: Prenatal stress has been linked to heightened allergy sensitivity in offspring. However, there is a notable absence of research on the mesovarium structure of offspring born to mothers subjected to cold stress during pregnancy. Understanding the impact of maternal cold stress on the mesovarium could provide valuable insights into reproductive health outcomes in offspring. Objective: This study aims to investigate structural changes in the mesovarium of offspring born to cold-stressed rats. Material and Methods: Twenty female Wistar rats weighing around 200 g were evenly divided into four containers, and 2-3 male rats were introduced into each container. The Papanicolaou method was used to detect spermatozoa and determine the estrous period from vaginal swabs taken from the female rats at 8:00 a.m. Female rats with spermatozoa present during the estrous phase of the cycle were defined as pregnant. Pregnant rats were divided into experimental and control groups. The experimental group was stressed using a model of severe, chronic cold stress for 30 days: the animals were exposed to cold for 3 hours each morning between 8:00 and 11:00 at a temperature of minus 15 degrees Celsius. The control group was kept under normal laboratory conditions. Female offspring from both the experimental and control groups were selected. At 2 months of age, the rats were euthanized by decapitation and their mesovaria were collected. Tissues were fixed in 4% formalin, embedded in paraffin, and sectioned into 5 μm thick slices. The sections were stained with H&E and digitized with a digital microscope. The area of brown fat and the inflammatory infiltrates were quantified using ImageJ software. Blood cortisol levels were measured using ELISA. Data are expressed as the mean ± standard error of the mean (SEM). The Mann-Whitney test was used to compare the two groups. All analyses were performed using Prism (GraphPad Software).
A p-value of < 0.05 was considered statistically significant. Results: Offspring born to stressed mothers exhibited significant physiological differences compared with the control group. Specifically, their body weight was significantly lower (p=0.0002), whereas their cortisol level was significantly higher (p=0.0446). Offspring born to stressed mothers also showed a statistically significant increase in brown fat area compared with the control group (p=0.01) and had a significantly higher number of inflammatory infiltrates in their mesovarium (p=0.047). These results indicate the profound impact of maternal stress on offspring physiology, affecting body weight, stress hormone levels, metabolic characteristics, and inflammatory responses. Conclusion: Exposure to cold stress during pregnancy has significant repercussions on offspring physiology. Our findings demonstrate that cold stress exposure leads to increased blood cortisol levels, brown fat accumulation, and inflammatory cell infiltration in offspring. These results underscore the profound impact of maternal stress on offspring health and highlight the importance of mitigating environmental stressors during pregnancy to promote optimal offspring outcomes.
Keywords: brown fat, cold stress during pregnancy, inflammation, mesovarium
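The group comparisons above use the Mann-Whitney test. As a minimal sketch of the statistic itself (illustrative only, not the Prism computation), the U value is simply the count of pairwise "wins" of one group over the other, with ties counted as one half:

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a via direct pairwise
    comparison; ties contribute 1/2. Practical for small samples."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u
```

The complementary statistic for group_b is len(group_a) * len(group_b) - U; significance is then read from exact tables or a normal approximation.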
36 Wind Turbine Scaling for the Investigation of Vortex Shedding and Wake Interactions
Authors: Sarah Fitzpatrick, Hossein Zare-Behtash, Konstantinos Kontis
Abstract:
Traditionally, the focus of horizontal axis wind turbine (HAWT) blade aerodynamic optimisation studies has been the outer working region of the blade. However, recent works seek to better understand, and thus improve upon, the performance of the inboard blade region to enhance power production, maximise load reduction, and better control the wake behaviour. This paper presents the design considerations and characterisation of a wind turbine wind tunnel model devised to further the understanding and fundamental definition of horizontal axis wind turbine root vortex shedding and interactions. Additionally, the application of passive and active flow control mechanisms (vortex generators and plasma actuators) to allow for the manipulation and mitigation of unsteady aerodynamic behaviour at the blade inboard section is investigated. A static, modular blade wind turbine model has been developed for use in the University of Glasgow's de Havilland closed-return, low-speed wind tunnel. The model components, which comprise a half-span blade, hub, nacelle, and tower, are scaled using the equivalent full-span radius, R, for appropriate Mach and Strouhal numbers, and to achieve a Reynolds number in the range of 1.7×10⁵ to 5.1×10⁵ for operational speeds up to 55 m/s. The half blade is constructed to be modular and fully dielectric, allowing for the integration of flow control mechanisms with a focus on plasma actuators. Investigations of root vortex shedding and the subsequent wake characteristics, using qualitative methods (smoke visualisation, tufts, and china clay flow) and quantitative methods (including particle image velocimetry (PIV), hot-wire anemometry (HWA), and laser Doppler anemometry (LDA)), were conducted over a range of blade pitch angles from 0 to 15 degrees and a range of Reynolds numbers.
This allowed for the identification of shed vortical structures from the maximum chord position, the transitional region where the blade aerofoil blends into a cylindrical joint, and the blade-nacelle connection. Analysis of the trailing vorticity interactions between the wake core and the freestream shows that the vortex meander and diffusion are notably affected by the Reynolds number. It is hypothesized that the vorticity shed from the blade root region directly influences and exacerbates the nacelle wake expansion in the downstream direction. As the form of the inboard blade region is, by necessity, driven by function rather than aerodynamic optimisation, a study is undertaken of the application of flow control mechanisms to manipulate the observed vortex phenomena. The designed model allows for the effective investigation of shed vorticity and wake interactions, with a focus on the accurate geometry of a root region representative of small to medium power commercial HAWTs. The studies undertaken allow for an enhanced understanding of the interplay of shed vortices and their subsequent effect in the near and far wake. This highlights areas of interest within the inboard blade area for the potential use of passive and active flow control devices to produce a more desirable wake quality in this region.
Keywords: vortex shedding, wake interactions, wind tunnel model, wind turbine
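The similarity scaling quoted above rests on matching non-dimensional groups between model and full-scale turbine. A minimal sketch of the two most relevant ones; the kinematic viscosity of air and the 0.14 m chord used in the test are assumed illustrative values, not the model's actual dimensions:

```python
def reynolds_number(velocity, chord, nu=1.5e-5):
    """Chord-based Reynolds number Re = V*c/nu; nu is the kinematic
    viscosity of air (assumed ~1.5e-5 m^2/s near 20 degrees C)."""
    return velocity * chord / nu

def strouhal_number(shedding_freq, length, velocity):
    """Strouhal number St = f*L/V, relating vortex shedding frequency
    to a characteristic length and the flow speed."""
    return shedding_freq * length / velocity
```

At the quoted tunnel speed of 55 m/s, a chord on the order of 0.14 m lands near the upper end of the stated Reynolds range of 1.7×10⁵ to 5.1×10⁵.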
35 Familiarity with Intercultural Conflicts and Global Work Performance: Testing a Theory of Recognition Primed Decision-Making
Authors: Thomas Rockstuhl, Kok Yee Ng, Guido Gianasso, Soon Ang
Abstract:
Two meta-analyses show that intercultural experience is not related to intercultural adaptation or performance in international assignments. These findings have prompted calls for a deeper grounding of research on international experience in the phenomenon of global work. Two issues, in particular, may limit current understanding of the relationship between international experience and global work performance. First, intercultural experience is too broad a construct and may not sufficiently capture the essence of global work, which to a large part involves sensemaking and managing intercultural conflicts. Second, the psychological mechanisms through which intercultural experience affects performance remain under-explored, resulting in a poor understanding of how experience is translated into learning and performance outcomes. Drawing on recognition primed decision-making (RPD) research, the current study advances a cognitive processing model to highlight the importance of intercultural conflict familiarity. Compared to intercultural experience, intercultural conflict familiarity is a more targeted construct that captures individuals' previous exposure to dealing with intercultural conflicts. Drawing on RPD theory, we argue that individuals' intercultural conflict familiarity enhances their ability to make accurate judgments and generate effective responses when intercultural conflicts arise. In turn, the ability to make accurate situation judgements and effective situation responses is an important predictor of global work performance. A relocation program within a multinational enterprise provided the context to test these hypotheses using a time-lagged, multi-source field study. Participants were 165 employees (46% female, with an average of 5 years of global work experience) from 42 countries who relocated from country to regional offices as part of a global restructuring program.
Within the first two weeks of transfer to the regional office, employees completed measures of their familiarity with intercultural conflicts, cultural intelligence, cognitive ability, and demographic information. They also completed an intercultural situational judgment test (iSJT) to assess their situation judgment and situation response. The iSJT comprised four validated multimedia vignettes of challenging intercultural work conflicts and prompted employees to provide protocols of their situation judgment and situation response. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded the quality of employees' situation judgments and situation responses. Three months later, supervisors rated employees' global work performance. Results using multilevel modeling (vignettes nested within employees) support the hypotheses that greater familiarity with intercultural conflicts is positively associated with better situation judgment, and that situation judgment mediates the effect of intercultural familiarity on situation response quality. Also, aggregated situation judgment and situation response quality both predicted supervisor-rated global work performance. Theoretically, our findings highlight the important but under-explored role of familiarity with intercultural conflicts, marking a shift in attention from the general nature of international experience assessed in terms of the number and length of overseas assignments. Also, our cognitive approach premised on RPD theory offers a new theoretical lens for understanding the psychological mechanisms through which intercultural conflict familiarity affects global work performance.
Third, and importantly, our study contributes to the global talent identification literature by demonstrating that the cognitive processes engaged in resolving intercultural conflicts predict actual performance in the global workplace.
Keywords: intercultural conflict familiarity, job performance, judgment and decision making, situational judgment test
34 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task
Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes
Abstract:
For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator for an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, it is our aim to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be important indicators of the cognitive complexity of a process. Method: Participants were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). Also, a questionnaire was used to collect socio-demographic information (age, gender, education) about the subjects as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words and sentences from a screen, whereas the picture description tasks each presented an image that participants had to describe in a few sentences. 
Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that logs and time-stamps keystroke activity to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within and after words. For all variables, mixed effects models were used that included participants as a random effect and MMSE scores, GDS scores and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between group (cognitively impaired vs. healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
Keywords: Alzheimer's disease, keystroke logging, matching, writing process
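Inputlog itself records far richer data; purely as a minimal illustration of the pause measures discussed above, inter-keystroke intervals can be classified as pauses before, within, or after words. The 200 ms threshold, the data format, and the space-as-word-boundary rule are simplifying assumptions, not Inputlog's actual conventions:

```python
from dataclasses import dataclass

@dataclass
class Keystroke:
    char: str
    t_ms: int  # timestamp in milliseconds

def pause_times(log, threshold_ms=200):
    """Classify inter-keystroke intervals >= threshold_ms as pauses
    before, within, or after words (a space delimits words)."""
    pauses = {"before": [], "within": [], "after": []}
    for prev, cur in zip(log, log[1:]):
        gap = cur.t_ms - prev.t_ms
        if gap < threshold_ms:
            continue  # fluent transition, not a pause
        if prev.char == " ":
            pauses["before"].append(gap)   # pause before the next word starts
        elif cur.char == " ":
            pauses["after"].append(gap)    # pause after the current word ends
        else:
            pauses["within"].append(gap)   # pause in the middle of a word
    return pauses

# Typing "th e" then " ca": one long gap inside a word, one before a new word.
demo = [Keystroke("t", 0), Keystroke("h", 90), Keystroke("e", 700),
        Keystroke(" ", 800), Keystroke("c", 1400), Keystroke("a", 1500)]
print(pause_times(demo))  # {'before': [600], 'within': [610], 'after': []}
```

Pause length, number, and distribution per word category could then be computed from these lists, analogous to the variables entered in the mixed effects models.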
33 A Compact Standing-Wave Thermoacoustic Refrigerator Driven by a Rotary Drive Mechanism
Authors: Kareem Abdelwahed, Ahmed Salama, Ahmed Rabie, Ahmed Hamdy, Waleed Abdelfattah, Ahmed Abd El-Rahman
Abstract:
Conventional vapor-compression refrigeration systems rely on typical refrigerants, such as CFCs, HCFCs and ammonia. Despite their suitable thermodynamic properties and their stability in the atmosphere, their global warming and ozone depletion potentials raise concerns about their usage. Thus, the need for new refrigeration systems that are environment-friendly, inexpensive and simple in construction has strongly motivated the development of thermoacoustic energy conversion systems. A thermoacoustic refrigerator (TAR) is a device consisting mainly of a resonator, a stack and two heat exchangers. Typically, the resonator is a long circular tube, made of copper or steel and filled with helium as the working gas, while the stack consists of short, relatively low-thermal-conductivity ceramic parallel plates aligned with the direction of the prevailing resonant wave. Typically, the resonator of a standing-wave refrigerator has one end closed and is bounded by the acoustic driver at the other end, enabling the propagation of a half-wavelength acoustic excitation. The hot and cold heat exchangers are made of copper to allow for efficient heat transfer between the working gas and the external heat source and sink, respectively. TARs are interesting because they have no moving parts, unlike conventional refrigerators, and almost no environmental impact, as they rely on the conversion of acoustic and heat energies. Their fabrication process is rather simple, and their sizes span a wide variety of length scales. The viscous and thermal interactions between the stack plates, the heat exchangers' plates and the working gas significantly affect the flow field within the plates' channels and the energy flux density at the plates' surfaces, respectively. Here, the design, manufacture and testing of a compact refrigeration system based on thermoacoustic energy-conversion technology are reported. 
A 1-D linear acoustic model is carefully and specifically developed, followed by the building of the hardware and the testing procedures. The system consists of two harmonically oscillating pistons driven by a simple 1-HP rotary drive mechanism operating at a frequency of 42 Hz (thereby replacing the expensive linear motors and loudspeakers typically used) and a thermoacoustic stack within which the energy conversion of sound into heat takes place. Air at ambient conditions is used as the working gas, while the amplitude of the driver's displacement reaches 19 mm. The 30-cm-long stack is a simple porous ceramic material with 100 square channels per square inch. During operation, both the oscillating gas pressure and the solid stack temperature are recorded for further analysis. Measurements show a maximum temperature difference of about 27 °C between the stack's hot and cold ends, with a Carnot coefficient of performance of 11 and an estimated cooling capacity of five watts, when operating at ambient conditions. A dynamic pressure amplitude of 7 kPa is recorded, yielding a drive ratio of approximately 7%, in good agreement with theoretical predictions. The system behavior is clearly non-linear, and significant non-linear loss mechanisms are evident. This work helps in understanding the operating principles of thermoacoustic refrigerators and presents a keystone towards developing commercial thermoacoustic refrigerator units.
Keywords: refrigeration system, rotary drive mechanism, standing-wave, thermoacoustic refrigerator
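The reported drive ratio and Carnot coefficient of performance follow directly from the measured values. The sketch below checks the arithmetic; the ambient pressure and cold-end temperature are assumptions, since neither is stated in the abstract:

```python
# Drive ratio: dynamic pressure amplitude over mean (ambient) pressure.
p_mean = 101.3e3          # Pa, ambient pressure (assumed)
p_amp = 7.0e3             # Pa, measured dynamic pressure amplitude
drive_ratio = p_amp / p_mean
print(f"drive ratio = {drive_ratio:.1%}")   # drive ratio = 6.9%

# Carnot COP of a refrigerator: T_cold / (T_hot - T_cold).
dT = 27.0                 # K, measured temperature difference across the stack
T_cold = 297.0            # K, assumed cold-end temperature (~24 °C)
cop_carnot = T_cold / dT
print(f"Carnot COP = {cop_carnot:.0f}")     # Carnot COP = 11
```

Both figures round to the values quoted in the abstract (a drive ratio of about 7% and a Carnot COP of 11).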
32 Embodied Empowerment: A Design Framework for Augmenting Human Agency in Assistive Technologies
Authors: Melina Kopke, Jelle Van Dijk
Abstract:
Persons with cognitive disabilities, such as Autism Spectrum Disorder (ASD), are often dependent on some form of professional support. Recent transformations in Dutch healthcare have spurred institutions to apply new, empowering methods and tools to enable their clients to cope (more) independently in daily life. Assistive Technologies (ATs) seem promising as empowering tools. While ATs can, functionally speaking, help people perform certain activities without human assistance, we hold that, from a design-theoretical perspective, such technologies often fail to empower in a deeper sense. Most technologies serve either to prescribe or to monitor users' actions, which in some sense objectifies users rather than strengthening their agency. This paper proposes that theories of embodied interaction could help formulate a design vision in which interactive assistive devices augment, rather than replace, human agency and thereby add to a person's empowerment in daily life settings. It aims to close the gap between empowerment theory and the opportunities provided by assistive technologies by showing how embodiment and empowerment theory can be applied in practice to the design of new, interactive assistive devices. Taking a Research-through-Design approach, we conducted a case study of designing to support independently living people with ASD in structuring daily activities. In three iterations, we interlaced design action, active involvement and prototype evaluations with future end-users and healthcare professionals, and theoretical reflection. Our co-design sessions revealed that the issue of handling daily activities is multidimensional. Not having the ability to self-manage one's daily life has immense consequences for one's self-image and also has major effects on the relationship with professional caregivers. 
Over the course of the project, relevant theoretical principles of both embodiment and empowerment theory, together with user insights, informed our design decisions. This resulted in a system of wireless light units that users can program as reminders for tasks, but also use to record and reflect on their actions. The iterative process helped to gradually refine and reframe our growing understanding of what it concretely means for a technology to empower a person in daily life. Drawing on the case study insights, we propose a set of concrete design principles that together form what we call the embodied empowerment design framework. The framework includes four main principles: enabling 'reflection-in-action'; making information 'publicly available' in order to enable co-reflection and social coupling; enabling the implementation of shared reflections into an 'endurable external feedback loop' embedded in the person's familiar 'lifeworld'; and nudging situated actions with self-created action-affordances. In essence, the framework aims for the self-development of a suitable routine, or 'situated practice', by building on a growing shared insight into what works for the person. The framework, we propose, may serve as a starting point for AT designers to create truly empowering interactive products. In a set of follow-up projects involving the participation of persons with ASD, intellectual disabilities, dementia and acquired brain injury, the framework will be applied, evaluated and further refined.
Keywords: assistive technology, design, embodiment, empowerment
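As a rough software sketch of the feedback loop the light units embody (a user-programmed reminder whose outcomes are recorded and made available for later co-reflection), one unit could be modelled as below. The class, its fields, and the behavior are invented for illustration; the actual system is interactive hardware:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LightUnit:
    """One wireless light unit: reminds at a set hour and logs outcomes."""
    task: str
    remind_hour: int                              # hour of day the light switches on
    history: list = field(default_factory=list)   # records feed later co-reflection

    def light_on(self, now: datetime) -> bool:
        """The unit lights up during the hour the user programmed for the task."""
        return now.hour == self.remind_hour

    def record(self, done: bool, when: datetime) -> None:
        """Store whether the task was done, making the information
        'publicly available' for shared reflection with caregivers."""
        self.history.append((when.isoformat(), done))

unit = LightUnit("prepare breakfast", remind_hour=8)
print(unit.light_on(datetime(2024, 5, 6, 8, 15)))   # True: within the programmed hour
unit.record(done=True, when=datetime(2024, 5, 6, 8, 40))
print(unit.history)
```

Because the user both programs the reminder and owns the recorded history, the sketch keeps agency with the person rather than with a monitoring system, in line with the framework's intent.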