Search results for: spatial checklist
241 Queer Anti-Urbanism: An Exploration of Queer Space Through Design
Authors: William Creighton, Jan Smitheram
Abstract:
Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring’s work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and desire to destroy the city towards a mode of queer critique that counters the normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a “design as research” methodology, the design outputs act as a vehicle to ask how we might live, otherwise, in architectural space. Design as research is methodologically a process of questioning, designing and reflecting – a non-linear, iterative approach – that establishes itself here through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship. The project moved through the relations of body to body, body to known others, and body to unknown others. Moving through increasing scales was not to privilege the objective, the public and the large scale; instead, ‘intra-scaling’ acts as a tool to re-think how scale reproduces normative ideas of the identity of space. There was a queering of scale. Through this approach, the results were an installation that brings two people together to co-author space, where the installation distorts the sensory experience and forces a more intimate and interconnected experience challenging our socialized proxemics: knees might touch. To queer the home, the installation was used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and as a way to process practical information about the site and existing house – the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through “private” and “public” to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place – but is it queer enough?
Keywords: queer, queer anti-urbanism, design as research, design
240 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context
Authors: Andrea Fiorista
Abstract:
The learning of prepositions is a quite problematic aspect of foreign language instruction, and Italian is certainly not an exception. In their prototypical function, prepositions express schematic relations of two entities in a highly abstract, typically image-schematic way. In other terms, prepositions assume concepts such as directionality, collocation of objects in space and time and, in Cognitive Linguistics’ terms, the position of a trajector with respect to a landmark. Learners of different native languages may conceptualize them differently, implying that they are supposed to operate a recategorization (or create new categories) fitting with the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not facilitate learners in carrying out the task, as they tend to provide partial and idiosyncratic descriptions, with the consequent learner’s effort to memorize them, most of the time without success. In their prototypical meaning, prepositions are used to specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extensive uses of the Italian preposition a, in order to propose effective pedagogical solutions in the teaching/learning process. Image schemas, cognitive metaphors and embodiment represent efficient cognitive tools in a task like this. While learning the merely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome,…) is quite straightforward, it is more complex when a appears in constructions such as verbs of motion + a + infinitive (e.g. Vado a studiare = I am going to study), the inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will read), and the causative construction (e.g. Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a teaching intervention of Focus on Form, in which a basic cognitive schema is used to help teachers explain, and students understand, the extensive uses of a. The educational material employed translates Cognitive Linguistics’ theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible for learners. Illustrative material, indeed, is supposed to make metalinguistic contents more accessible. Moreover, the concept of embodiment is applied pedagogically through activities including motion and learners’ bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective both in the short and long term.
Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL
239 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters and pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study has been undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software, and data from Sentinel-2 satellite imagery (10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a suitable land map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas. Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
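The AHP weighting step mentioned in this abstract can be illustrated with a short computation. Below is a minimal sketch, assuming a hypothetical pairwise comparison matrix for four of the factors (the study's actual judgments are not given here); it derives the priority weights from the principal eigenvector and checks Saaty's consistency ratio.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four
# flash-flood factors: slope, drainage density, rainfall, land use/cover.
A = np.array([
    [1,   3,   2,   4],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/4, 1/2, 1/3, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # AHP priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
cr = ci / ri                             # consistency ratio (< 0.10 is acceptable)

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```

The resulting weights are what the weighted overlay in ArcGIS would then apply to the factor rasters to produce the hazard zones.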
238 The Potential of On-Demand Shuttle Services to Reduce Private Car Use
Authors: B. Mack, K. Tampe-Mai, E. Diesch
Abstract:
Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of having to cope with ever-increasing traffic congestion, environmental pollution, and greenhouse gas emissions brought about by private car use. In principle, private car use may be diminished by extending public transport systems like bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and reducing peak-time crowding of classical public transport systems. An emerging new type of system, publicly or privately operated on-demand shuttle bus services, seems suitable to ameliorate the situation. A fleet of on-demand shuttle buses operates without fixed stops and schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries may be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance for each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distances, mobility tax/congestion charge, and waiting time/parking space search time). After the completion of the discrete choice items, the study participant is asked to rate the three modes of transport with regard to the pull factors of comfort, safety, privacy, and opportunity to engage in activities like reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice experiment regression model, as sketched below. The study is conducted in the region of Stuttgart in southern Germany. N = 1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver’s license, and live in the city or the surrounding region of Stuttgart. In the discrete choice experiment, participants are asked to assume they lived within the Stuttgart region, but outside of the city, and were planning the journey from their apartment to their place of work, training, or education during the peak traffic time in the morning. Then, for each item of the discrete choice experiment, they are asked to choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to the use of on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by pull and push factors.
Keywords: determinants of travel mode choice, on-demand shuttle services, private car use, public transport
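Discrete choice data of this kind are conventionally analysed with a multinomial (conditional) logit model; the formulation below is the standard one for such designs, not a specification taken from the study itself.

```latex
% Utility of mode j (car, public transport, shuttle) for respondent i in choice task t:
U_{ijt} = \beta' x_{ijt} + \gamma' z_{ij} + \varepsilon_{ijt}
% x_{ijt}: push/pull attributes (cost, travel time, walking distance,
%          mobility tax/congestion charge, waiting/search time)
% z_{ij}:  respondent ratings (comfort, safety, privacy, activity opportunity)
% With i.i.d. type-I extreme-value errors, the choice probability is:
P_{ijt} = \frac{\exp\!\left(\beta' x_{ijt} + \gamma' z_{ij}\right)}
               {\sum_{k=1}^{3} \exp\!\left(\beta' x_{ikt} + \gamma' z_{ik}\right)}
```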
237 Estimating Evapotranspiration of Irrigated Maize in Brazil Using a Hybrid Modelling Approach and Satellite Image Inputs
Authors: Ivo Zution Goncalves, Christopher M. U. Neale, Hiran Medeiros, Everardo Mantovani, Natalia Souza
Abstract:
Multispectral and thermal infrared imagery from satellite sensors, coupled with climate and soil datasets, were used to estimate evapotranspiration and biomass in center pivots planted to maize in Brazil during the 2016 season. The hybrid remote sensing based model named Spatial EvapoTranspiration Modelling Interface (SETMI) was applied using multispectral and thermal infrared imagery from the Landsat Thematic Mapper instrument. Field data collected by the IRRIGER center pivot management company included daily weather information such as maximum and minimum temperature, precipitation, and relative humidity for estimating reference evapotranspiration. In addition, soil water content data were obtained every 0.20 m in the soil profile down to 0.60 m depth throughout the season. Early-season soil samples were used to obtain water-holding capacity, wilting point, saturated hydraulic conductivity, initial volumetric soil water content, layer thickness, and saturated volumetric water content. Crop canopy development parameters and irrigation application depths were also inputs of the model. The modeling approach is based on the reflectance-based crop coefficient approach contained within the SETMI hybrid ET model, using relationships developed in Nebraska. The model was applied to several fields located in Minas Gerais State in Brazil, with approximate latitude -16.630434 and longitude -47.192876. The model provides estimates of actual crop evapotranspiration (ET), crop irrigation requirements, and all soil water balance outputs, including biomass estimation, using multi-temporal satellite image inputs. An interpolation scheme based on the growing degree-day concept was used to model the periods between satellite inputs, filling the gaps between image dates and obtaining daily data. Actual and accumulated ET, accumulated cold temperature and water stress, and crop water requirements estimated by the model were compared with data measured at the experimental fields. Results indicate that the SETMI modeling approach using data assimilation showed reliable daily ET and crop water requirements for maize, interpolated between remote sensing observations, confirming the applicability of the SETMI model, using new relationships developed in Nebraska, for estimating mainly ET and water requirements in Brazil under tropical conditions.
Keywords: basal crop coefficient, irrigation, remote sensing, SETMI
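The growing degree-day gap-filling step can be sketched as follows. This is a minimal illustration that assumes a linear interpolation of the reflectance-based basal crop coefficient (Kcb) in cumulative thermal-time space between two image dates; the exact SETMI interpolation scheme may differ, and all numbers are hypothetical.

```python
import numpy as np

def daily_gdd(tmax, tmin, t_base=10.0):
    """Daily growing degree-days; base 10 C is a common choice for maize."""
    return max(((tmax + tmin) / 2.0) - t_base, 0.0)

# Kcb values derived from two Landsat overpasses, indexed by cumulative GDD
gdd_at_image = [350.0, 620.0]   # cumulative GDD on the two image dates (hypothetical)
kcb_at_image = [0.45, 0.95]     # reflectance-based Kcb on those dates (hypothetical)

# Fill the gap between images: interpolate Kcb on cumulative GDD rather than
# on calendar days, so crop development follows thermal time.
tmax = np.array([30, 32, 31, 29, 33, 34])   # hypothetical daily temperatures (C)
tmin = np.array([16, 18, 17, 15, 19, 20])
gdd = np.cumsum([daily_gdd(a, b) for a, b in zip(tmax, tmin)]) + gdd_at_image[0]
kcb_daily = np.interp(gdd, gdd_at_image, kcb_at_image)

eto = np.array([5.1, 5.4, 5.0, 4.8, 5.6, 5.8])  # reference ET (mm/day), hypothetical
etc = kcb_daily * eto   # basal crop ET, ignoring stress and soil evaporation terms
print(np.round(etc, 2))
```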
236 The Biomechanical Assessment of Balance and Gait for Stroke Patients and the Implications in the Diagnosis and Rehabilitation
Authors: A. Alzahrani, G. Arnold, W. Wang
Abstract:
Background: Stroke commonly occurs in middle-aged and elderly populations, and the diagnosis of early stroke is still difficult. Patients who have suffered a stroke have different balance and gait patterns from healthy people. Advanced techniques of motion analysis have been routinely used in the clinical assessment of cerebral palsy. However, so far, little research has been done on the direct diagnosis of early stroke patients using motion analysis. Objectives: The aim of this study was to investigate whether patients with stroke have different balance and gait from healthy people and which biomechanical parameters could be used to predict and diagnose patients who are at potential risk of stroke. Methods: Thirteen patients with stroke were recruited as subjects whose gait and balance were analysed. Twenty age-matched healthy subjects participated in this study as a control group. All subjects’ gait and balance data were collected using Vicon Nexus® to obtain the gait parameters and the kinetic and kinematic parameters of the hip, knee, and ankle joints in three planes of both limbs. Participants stood on force platforms to perform a single-leg balance test. Then, they were asked to walk along a 10 m walkway at their comfortable speed. Participants performed 6 trials of single-leg balance for each side and 10 trials of walking. From the recorded trials, three good ones were analysed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, stride length, and joint parameters, e.g., joint angle, force, moments, etc. Results: The temporal-spatial variables of the stroke subjects were compared with those of the healthy subjects; it was found that there was a significant difference (p < 0.05) between the groups. Step length, speed, and cadence were lower in stroke subjects compared to the healthy group. The stroke patient group showed significantly decreased gait speed (mean ± SD: 0.85 ± 0.33 m/s), cadence (96.71 ± 16.14 steps/min), and step length (0.509 ± 0.17 m) compared to the healthy group, whose gait speed was 1.2 ± 0.11 m/s, cadence 112 ± 8.33 steps/min, and step length 0.648 ± 0.43 m. Moreover, it was observed that patients with stroke have significant differences in ankle, hip, and knee joint kinematics in the sagittal and coronal planes. Also, the results showed a significant difference between groups in the single-leg balance test; e.g., single-leg stance time was shorter in the stroke patients (5.97 ± 6.36 s) compared to the healthy group (14.36 ± 10.20 s). Conclusion: Our results showed that there are significant differences between stroke patients and healthy subjects in various aspects of gait analysis and balance testing; as a consequence of these findings, some of the biomechanical parameters, such as joint kinematics, gait parameters, and the single-leg stance balance test, could be used in clinical practice to predict and diagnose patients who are at high risk of further stroke.
Keywords: gait analysis, kinetics, kinematics, single-leg stance, stroke
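The reported group comparison (e.g., gait speed 0.85 ± 0.33 m/s in 13 stroke patients vs. 1.2 ± 0.11 m/s in 20 controls) can be reproduced from the summary statistics alone; a sketch assuming an unpaired two-sample test with unequal variances (Welch's t-test), which is one common choice here though the paper's exact test is not stated:

```python
from scipy import stats

# Summary statistics from the abstract: mean, SD, n (gait speed in m/s)
stroke = dict(mean=0.85, std=0.33, nobs=13)
healthy = dict(mean=1.20, std=0.11, nobs=20)

# Welch's t-test computed directly from summary statistics
t, p = stats.ttest_ind_from_stats(
    stroke["mean"], stroke["std"], stroke["nobs"],
    healthy["mean"], healthy["std"], healthy["nobs"],
    equal_var=False,
)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 indicates a significant group difference
```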
235 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea
Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park
Abstract:
Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, owing to the limited number of surface water reservoirs and high seasonal variations in precipitation. Population growth, in addition to the expansion of agricultural land uses and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches such as multivariate statistics, factor analysis, cluster analysis and the kriging technique in order to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry, and to develop risk maps, exploiting data obtained from chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed using an atomic absorption spectrometer (AAS) for major and trace elements. Chemical maps of groundwater built on a 2-D spatial Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed that a higher rate of contamination was observed in the central and southern areas, with relatively less extent in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations above 85 m, the scatter diagram indicates that the groundwater of the research area was mainly influenced by saline water and NO3. pH measurements revealed slightly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC. Based on the cluster analysis results, the groundwater has been categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by sea water, and the Ca-Cl and Na-Cl types heavily affected by saline water. The most predominant water type in the study area was CaHCO3. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of sea water, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods, as sketched below. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating groundwater quality. GIS was also a creative tool to visualize and analyze the issues affecting water quality in Miryang city.
Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, kriging techniques
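The kriging step used to map contamination can be illustrated with the pykrige package. This is a minimal sketch, assuming ordinary kriging of a single analyte (NO3 is used here) over the 79 sample locations with a spherical variogram; the study's actual coordinates, concentrations and variogram choices are not reproduced, so the inputs below are synthetic placeholders.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Synthetic stand-ins for the 79 sampled wells (coordinates in km, NO3 in mg/L)
rng = np.random.default_rng(0)
x = rng.uniform(0, 20, 79)
y = rng.uniform(0, 20, 79)
no3 = rng.gamma(shape=2.0, scale=8.0, size=79)

# Ordinary kriging with a spherical variogram model
ok = OrdinaryKriging(x, y, no3, variogram_model="spherical")

# Interpolate onto a regular grid for the GIS contamination map
gridx = np.arange(0.0, 20.0, 0.5)
gridy = np.arange(0.0, 20.0, 0.5)
z, ss = ok.execute("grid", gridx, gridy)   # kriged estimates and kriging variance
print(z.shape, float(z.max()))
```

The kriging variance surface (`ss`) is useful alongside the estimates, since it shows where the risk map is least constrained by the sampling network.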
234 Distribution, Source Apportionment and Assessment of Pollution Level of Trace Metals in Water and Sediment of a Riverine Wetland of the Brahmaputra Valley
Authors: Kali Prasad Sarma, Sanghita Dutta
Abstract:
Deepor Beel (DB) is the lone Ramsar site and an important wetland of the Brahmaputra valley in the state of Assam. The local people from fourteen peripheral villages traditionally utilize the wetland for harvesting vegetables, flowers, aquatic seeds, medicinal plants, fish, molluscs, fodder for domestic cattle, etc. Therefore, it is of great importance to understand the concentration and distribution of trace metals in the water-sediment system of the beel in order to protect its ecological environment. DB lies between 26°05′26′′N and 26°09′26′′N latitude and between 90°36′39′′E and 91°41′25′′E longitude. Water samples from the surface layer of water up to 40 cm deep and sediment samples from the top 5 cm layer of surface sediments were collected. The trace metals in waters and sediments were analysed using ICP-OES. The organic carbon was analysed using a TOC analyser. The different minerals present in the sediments were confirmed by X-ray diffraction (XRD). SEM images were recorded using a SEM attached to an energy dispersive X-ray unit, with an accelerating voltage of 20 kV. All the statistical analyses were performed using SPSS 20.0 for Windows. In the present research, the distribution, source apportionment, temporal and spatial variability, extent of pollution and ecological risk of eight toxic trace metals in sediments and water of DB were investigated. The average concentrations of chromium (Cr) (both seasons), copper (Cu) and lead (Pb) (pre-monsoon), and zinc (Zn) and cadmium (Cd) (post-monsoon) in sediments were higher than the consensus-based threshold effect concentration (TEC). The persistent exposure to toxic trace metals in sediments poses a potential threat, especially to sediment-dwelling organisms. The degree of pollution in DB sediments for Pb, cobalt (Co), Zn, Cd, Cr, Cu and arsenic (As) was assessed using the Enrichment Factor (EF), Geo-accumulation Index (Igeo) and Pollution Load Index (PLI), as sketched below. The results indicated that contamination of surface sediments in DB is dominated by Pb and Cd and, to a lesser extent, by Co, Fe, Cu, Cr, As and Zn. A significant positive correlation among the element pairs Co/Fe and Zn/As in water, and Cr/Zn and Fe/As in sediments, indicates a similar source of origin for these metals. The effects of interaction among trace metals between water and sediments show significant variations (F = 94.02, p < 0.001), suggesting maximum mobility of trace metals in DB sediments and water. The source apportionment of the heavy metals was carried out using Principal Component Analysis (PCA). SEM-EDS detected the presence of Cd, Cu, Cr, Zn, Pb, As and Fe in the sediment samples. The average concentrations of Cd, Zn, Pb and As in the bed sediments of DB are found to be higher than the crustal abundance. The EF values indicate that Cd and Pb are significantly enriched. Source apportionment of the eight metals using PCA revealed that Cd was anthropogenic in origin; Pb, As, Cr, and Zn had mixed sources; whereas Co, Cu and Fe were natural in origin.
Keywords: Deepor Beel, enrichment factor, principal component analysis, trace metals
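The three pollution indices named in the abstract follow standard definitions; a sketch, assuming Fe as the normalising element for EF and using hypothetical measured and background (crustal) values, since the study's actual numbers are not reproduced here:

```python
import numpy as np

# Hypothetical sediment concentrations (mg/kg) and background values
sample     = {"Pb": 45.0, "Cd": 1.2, "Zn": 140.0, "Fe": 35000.0}
background = {"Pb": 20.0, "Cd": 0.3, "Zn": 95.0,  "Fe": 47200.0}

def enrichment_factor(metal):
    """EF = (M/Fe)_sample / (M/Fe)_background; EF > 2 suggests enrichment."""
    return (sample[metal] / sample["Fe"]) / (background[metal] / background["Fe"])

def igeo(metal):
    """Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return np.log2(sample[metal] / (1.5 * background[metal]))

def pli(metals):
    """Pollution Load Index: geometric mean of contamination factors Cn/Bn."""
    cf = [sample[m] / background[m] for m in metals]
    return float(np.prod(cf) ** (1.0 / len(cf)))

for m in ("Pb", "Cd", "Zn"):
    print(m, round(enrichment_factor(m), 2), round(igeo(m), 2))
print("PLI:", round(pli(["Pb", "Cd", "Zn"]), 2))
```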
233 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects
Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town
Abstract:
The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to achieve better decision-making in the development of mining production and the maintenance of safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques to improve decision-making. Leveraging some of the most complex techniques in data science, advanced analytics is used to do everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may have limitations. This paper studies the capability of using Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations like borehole logs and stereonets were implemented using Python programming within the Power BI dashboard (see the sketch below), enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rock fall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in future projects, including operation, development, closure, and rehabilitation phases. Additionally, it minimizes the need to use multiple software programs in projects. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
Keywords: geotechnical data analysis, Power BI, visualization, decision-making, mining industry
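The Python-inside-Power-BI pattern can be sketched as follows. In a Power BI Python visual, the fields dragged onto the visual arrive as a pandas DataFrame named `dataset`, and Power BI renders whatever matplotlib draws. The stereonet below uses the mplstereonet package; the column names `strike` and `dip` are assumptions for illustration, not the paper's actual data model.

```python
# Script body for a Power BI "Python visual". Power BI injects the selected
# fields as a pandas DataFrame called `dataset` before this script runs.
import matplotlib.pyplot as plt
import mplstereonet  # registers the 'stereonet' projection

# Assumed columns: discontinuity orientations in degrees
strikes = dataset["strike"]
dips = dataset["dip"]

fig, ax = mplstereonet.subplots(figsize=(6, 6))
ax.pole(strikes, dips, "k.", markersize=4, label="Poles to planes")
ax.density_contourf(strikes, dips, measurement="poles", cmap="viridis", alpha=0.5)
ax.grid(True)
ax.legend(loc="upper left")
plt.show()  # Power BI captures the current matplotlib figure
```

Because the visual re-runs whenever a slicer changes, the stereonet stays in sync with the rest of the dashboard's filters.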
232 Port Miami in the Caribbean and Mesoamerica: Data, Spatial Networks and Trends
Authors: Richard Grant, Landolf Rhode-Barbarigos, Shouraseni Sen Roy, Lucas Brittan, Change Li, Aiden Rowe
Abstract:
Ports are critical for the US economy, connecting farmers, manufacturers, retailers, consumers and an array of transport and storage operators. Port facilities vary widely in terms of their productivity, footprint, specializations, and governance. In this context, Port Miami is considered one of the busiest ports providing both cargo and cruise services, connecting the wider region of the Caribbean and Mesoamerica to global networks. It is considered the “Cruise Capital of the World and Global Gateway of the Americas” and the “leading container port in Florida.” Furthermore, it has also been ranked as one of the top container ports in the world and the second most efficient port in North America. In this regard, Port Miami has made significant investments in strategic and capital infrastructure of about US$1 billion, including increasing the channel depth and other onshore infrastructural enhancements. Therefore, this study involves a detailed analysis of Port Miami’s network, using multiple years of publicly available data on marine vessel traffic, cargo, and connectivity and performance indices from 2015 to 2021. Through the analysis of cargo and cruise vessels to and from Port Miami and its relative performance at the global scale from 2015 to 2021, this study examines the port’s long-term resilience and future growth potential. The main results of the analyses indicate that the top category for both inbound and outbound cargo is manufactured products and textiles. In addition, inbound cargo includes a large share of fresh fruits, vegetables, and produce, while outbound cargo includes processed food. Furthermore, the top ten port connections for Port Miami are all located in the Caribbean region, the Gulf of Mexico, and the Southeast USA. About half of the inbound cargo comes from Savannah, Saint Thomas, and Puerto Plata, while outbound cargo goes to Puerto Corte, Freeport, and Kingston. Additionally, for cruise vessels, a significantly large number of vessels originate from Nassau, followed by Freeport. The number of passenger vessels pre-COVID was almost 1,000 per year, which dropped substantially in 2020 and 2021 to around 300 vessels. Finally, the resilience and competitiveness of Port Miami were also assessed in terms of its network connectivity by examining the inbound and outbound maritime vessel traffic. It is noteworthy that the most frequent port connections for Port Miami were Freeport and Savannah, followed by Kingston, Nassau, and New Orleans. However, several of these ports – Puerto Corte, Veracruz, Puerto Plata, and Santo Thomas – have low resilience and are highly vulnerable, which needs to be taken into consideration for the long-term resilience of Port Miami in the future.
Keywords: port, Miami, network, cargo, cruise
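The port-connectivity analysis maps naturally onto a directed graph. Below is a sketch with networkx, using a few of the connections named in the abstract as edges; the vessel counts are illustrative placeholders, not the study's data.

```python
import networkx as nx

# Directed edges (origin -> destination) weighted by annual vessel calls.
# Port names come from the abstract; the counts are hypothetical.
edges = [
    ("Savannah", "Port Miami", 320), ("Saint Thomas", "Port Miami", 180),
    ("Puerto Plata", "Port Miami", 150), ("Nassau", "Port Miami", 400),
    ("Port Miami", "Puerto Corte", 210), ("Port Miami", "Freeport", 260),
    ("Port Miami", "Kingston", 190), ("Freeport", "Port Miami", 240),
]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Weighted degree separates inbound from outbound connectivity
inbound = G.in_degree("Port Miami", weight="weight")
outbound = G.out_degree("Port Miami", weight="weight")
print("inbound:", inbound, "outbound:", outbound)

# Unweighted betweenness centrality as one proxy for structural importance
bc = nx.betweenness_centrality(G)
print(sorted(bc.items(), key=lambda kv: -kv[1])[:3])
```

Repeating such measures year by year is one way to quantify the resilience trends the abstract describes, e.g., the collapse and recovery of cruise traffic around 2020-2021.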
231 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and broader considerations of safety. The level of slope stability risk should be identified due to its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the movement resistance forces are greater than those that drive the movement, with a factor of safety (the ratio of the resisting forces to the driving forces) greater than 1.00; a minimal numerical illustration is sketched below. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through geographic information system (GIS) and inverse distance weighted (IDW) spatial interpolation techniques. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and determine the available solutions for each potential landslide failure risk.
Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
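The factor-of-safety definition above can be made concrete with the classical infinite-slope model, one of the simplest members of the limit equilibrium family the paper surveys; the soil parameters below are hypothetical.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """
    Infinite-slope factor of safety: resisting shear strength over driving
    shear stress on a plane parallel to the surface at depth z.
      c        effective cohesion (kPa)
      phi_deg  effective friction angle (degrees)
      gamma    unit weight of soil (kN/m^3)
      z        depth of the slip plane (m)
      beta_deg slope angle (degrees)
      u        pore water pressure on the slip plane (kPa)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2 - u    # effective normal stress
    resisting = c + normal * math.tan(phi)          # Mohr-Coulomb strength
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Hypothetical slope: FS > 1.00 indicates stability under these assumptions
print(round(infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=19.0, z=3.0,
                              beta_deg=25.0, u=10.0), 2))
```

This single-number check is exactly what breaks down for the progressive failure, liquefaction, and creep mechanisms the paper identifies, which is the motivation for the advanced constitutive modelling it proposes.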
230 Coastal Vulnerability Index and Its Projection for Odisha Coast, East Coast of India
Authors: Bishnupriya Sahoo, Prasad K. Bhaskaran
Abstract:
Tropical cyclones are among the worst natural hazards, resulting in a trail of destruction and causing enormous damage to life, property, and coastal infrastructure. In a global perspective, the Indian Ocean is considered one of the cyclone-prone basins in the world. Specifically, the frequency of cyclogenesis in the Bay of Bengal is higher compared to the Arabian Sea. Of the four maritime states on the East coast of India, Odisha is highly susceptible to tropical cyclone landfall. Historical records clearly show that the frequency of cyclones has reduced in this basin. However, in recent decades, the intensity and size of tropical cyclones have increased. This is a matter of concern, as the risk and vulnerability of the Odisha coast exposed to high wind speeds and gusts during cyclone landfall have increased. In this context, there is a need to assess and evaluate the severity of coastal risk, the area of exposure under risk, and the associated vulnerability with a higher dimension in a multi-risk perspective. A changing climate can result in the emergence of new hazards and vulnerability over a region with differential spatial and socio-economic impact. Hence there is a need for coastal vulnerability projections under a changing climate scenario. With this motivation, the present study attempts to estimate the destructiveness of tropical cyclones based on the Power Dissipation Index (PDI) for cyclones that made landfall along the Odisha coast, which exhibits an increasing trend based on historical data. The study also covers futuristic scenarios of integral coastal vulnerability based on the trends in PDI for the Odisha coast. The study considers 11 essential and important parameters: cyclone intensity, storm surge, onshore inundation, mean tidal range, continental shelf slope, topographic elevation onshore, rate of shoreline change, maximum wave height, relative sea level rise, rainfall distribution, and coastal geomorphology. The study signifies that, over a decadal scale, the coastal vulnerability index (CVI) depends largely on the incremental change in variables such as cyclone intensity, storm surge, and the associated inundation. In addition, the study performs a critical analysis of the modulation of PDI on storm surge and inundation characteristics for the entire coastal belt of Odisha State. Interestingly, the study brings to light that a linear correlation exists between the storm tide and PDI. The trend analysis of PDI and its projection for coastal Odisha have direct practical applications in effective coastal zone management and vulnerability assessment.
Keywords: Bay of Bengal, coastal vulnerability index, power dissipation index, tropical cyclone
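The two indices at the heart of this study can be sketched from their standard definitions. The PDI follows Emanuel's formulation (the time integral of the cube of the maximum sustained wind over a storm's lifetime); the CVI function below uses the common Gornitz-style square-root-of-product form, which the study's "integral" index may well refine, so both the formulas' applicability and all input values are assumptions here.

```python
import numpy as np

def power_dissipation_index(vmax_ms, dt_hours=6.0):
    """
    PDI = integral of Vmax^3 dt over the storm lifetime, approximated as a
    sum over 6-hourly best-track records. Winds in m/s; returns m^3/s^2.
    """
    v = np.asarray(vmax_ms, dtype=float)
    return float(np.sum(v ** 3) * dt_hours * 3600.0)

def coastal_vulnerability_index(ranks):
    """Gornitz-style CVI: sqrt of the product of variable ranks divided by n."""
    r = np.asarray(ranks, dtype=float)
    return float(np.sqrt(np.prod(r) / r.size))

# Hypothetical best-track maximum winds (m/s) for one landfalling cyclone
vmax = [18, 23, 28, 33, 41, 46, 43, 36, 25]
print(f"PDI = {power_dissipation_index(vmax):.3e} m^3 s^-2")

# Eleven ranked variables (1 = low to 5 = high vulnerability), hypothetical
print(round(coastal_vulnerability_index([4, 5, 3, 2, 3, 4, 5, 3, 4, 2, 3]), 1))

# Summing PDI per season for all Odisha landfalls and fitting a trend line,
# e.g. np.polyfit(years, seasonal_pdi, 1), yields the kind of increasing
# trend the abstract reports.
```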
229 Origins: An Interpretive History of MMA Design Studio’s Exhibition for the 2023 Venice Biennale
Authors: Jonathan A. Noble
Abstract:
‘Origins’ is an exhibition designed and installed by MMA Design Studio at the 2023 Venice Biennale. The installation formed part of the ‘Dangerous Liaisons’ group exhibition at the Arsenale building. An immersive experience was created for those who visited, where video projection and the bodies of visitors interacted with the scene. Designed by South African architect Mphethi Morojele – founder and owner of MMA – the primary inspiration for ‘Origins’ was the recent discovery, by Professor Karim Sadr in 2019, of a substantial Tswana settlement. Situated in the present-day Suikerbosrand Nature Reserve, some 45 km south of Johannesburg, this precolonial city, named Kweneng, has been dated back to the fifteenth century. This remarkable discovery was achieved thanks to advanced aerial LiDAR scanning technology, which was used to capture the traces of Kweneng, spanning a terrain some 10 km long and 2 km wide. Discovered by light (LiDAR) and exhibited through light, Origins presents a simulated experience of Kweneng. The presentation of Kweneng was achieved primarily through video, with a circular projection onto the floor of an animated LiDAR data sequence, and onto the walls a filmed dance sequence choreographed to embody the architectural, spatial and symbolic significance of Kweneng. This paper documents the design process involved in the conceptualization, development and final realization of this noteworthy exhibition, with an elucidation of key social and cultural questions pertaining to precolonial heritage, reimagined histories and postcolonial identity. Periods of change and of social awakening sometimes spark an interest in questions of origin, of cultural lineage and belonging – which certainly is the case for contemporary, post-Apartheid South Africa. Researching this paper has required primary study of MMA Design Studio’s project archive, including various proposals and other design-related documents, conceptual design sketches, architectural drawings and photographs. This material is supported by the author’s first-hand interviews with Morojele and others who were involved, especially with respect to the choreography of the interpretive dance, the LiDAR visualization techniques and the video production that informed the simulated, immersive experience at the exhibition. Presenting a ‘dangerous liaison’ between architecture and dance, Origins looks into the distant past to frame contemporary questions pertaining to intangible heritage, animism and embodiment through architecture and dance – considerations which are required “to survive the future”, says Morojele.
Keywords: architecture and dance, Kweneng, MMA design studio, origins, Venice Biennale
228 Role of Institutional Quality as a Key Determinant of FDI Flows in Developing Asian Economies
Authors: Bikash Ranjan Mishra, Lopamudra D. Satpathy
Abstract:
In the wake of the phenomenal surge in international business over the last decade or more, both the developed and developing economies around the world are in intense competition to attract more and more FDI flows. While the developed countries have marched ahead in the race, the developing countries, especially the Asian economies, have followed at a rapid pace. While most previous studies have analysed the role of institutional quality in the promotion of FDI flows in developing countries, very few have taken an integrated approach examining the comprehensive impact of institutional quality, globalization pattern and domestic financial development on FDI flows. In this context, the paper contributes to the literature in two important ways. Firstly, two composite indices of institutional quality and domestic financial development for the Asian countries are constructed, in contrast to earlier studies that resort to a single variable to indicate institutional quality and domestic financial development. Secondly, the impact of these variables on FDI flows through their interaction with geographical region is investigated. The study uses panel data covering the period 1996 to 2012 for twenty Asian developing countries, emphasizing the quality of institutions, from the geographical regions of eastern, south-eastern, southern and western Asia. Control of corruption, better rule of law, regulatory quality, effectiveness of the government, political stability and voice and accountability are used as indicators of institutional quality. Besides these, the study takes into account domestic credit to the public and private sectors and in stock markets as domestic financial indicators. First, in the model specification, a factor analysis is performed to reduce the vast set of determinants, which are highly correlated with each other, to a manageable size (a sketch of this step is given below). Afterwards, a reduced version of the model is estimated with the extracted factors, in the form of an index, as independent variables, along with a set of control variables. It is found that the institutional quality index and the index of globalization exert a significant effect on FDI inflows of the host countries; in contrast, the domestic financial index does not seem to play a significant role. Finally, some robustness tests are performed to make sure that the results are not sensitive to temporal and spatial unobserved heterogeneity. On the basis of the above study, one general inference can be drawn from the policy prescription point of view: the governments of these developing countries should strengthen their domestic institutions, both financial and non-financial. In addition, welfare policies should also target rapid globalization. If the financial and non-financial institutions of these developing countries become sound and grow more globalized in the economic, social and political domains, then they can attract greater FDI inflows, which will subsequently result in the advancement of these economies.
Keywords: Asian developing economies, FDI, institutional quality, panel data
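The index-construction step can be sketched with a one-factor model over the six governance indicators; a minimal illustration, assuming the composite index is the standardised factor score (the paper's exact extraction and rotation choices are not stated, and the data below are synthetic).

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Synthetic panel: 20 Asian countries x 17 years x 6 governance indicators
rng = np.random.default_rng(1)
cols = ["corruption", "rule_of_law", "regulatory_quality",
        "gov_effectiveness", "political_stability", "voice_accountability"]
n = 20 * 17
base = rng.normal(size=(n, 1))                      # common institutional factor
X = pd.DataFrame(base + 0.5 * rng.normal(size=(n, 6)), columns=cols)

# One-factor model: the score serves as the composite institutional quality index
fa = FactorAnalysis(n_components=1, random_state=0)
index = fa.fit_transform(StandardScaler().fit_transform(X)).ravel()

X["inst_quality_index"] = index
print(fa.components_.round(2))   # loadings of each indicator on the factor
# The index would then enter the FDI regression alongside the globalization
# and financial-development indices and the control variables.
```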
227 An Integrated Framework for Wind-Wave Study in Lakes
Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung
Abstract:
The wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., WSA Section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving the level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology is formed by integrating wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to this core. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake through the modelling of the entire lake (capturing real lake geometry) instead of a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g., bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
Keywords: wave modelling, wind-wave, extreme value analysis, marina
226 The Development of Local-Global Perceptual Bias across Cultures: Examining the Effects of Gender, Education, and Urbanisation
Authors: Helen J. Spray, Karina J. Linnell
Abstract:
Local-global bias in adulthood is strongly dependent on environmental factors, and a global bias is not the universal characteristic of adult perception it was once thought to be: whilst Western adults typically demonstrate a global bias, Namibian adults living in traditional villages possess a strong local bias. Furthermore, environmental effects on local-global bias have been shown to be highly gender-specific; whereas urbanisation promoted a global bias in urbanised Namibian women but not men, education promoted a global bias in urbanised Namibian men but not women. Adult populations, however, provide only a snapshot of the gene-environment interactions which shape perceptual bias. Yet, to date, there has been little work on the development of local-global bias across environmental settings. In the current study, local-global bias was assessed using a similarity-matching task with Navon figures in children aged between 4 and 15 years from across three populations: traditional Namibian, urban Namibian, and urban British. For the two Namibian groups, measures of urbanisation and education were obtained. Data were subjected to both between-group and within-group analyses. Between-group analyses compared developmental trajectories across population and gender. These analyses revealed a global bias from as early as age 4 in the British sample and showed that the developmental onset of a global bias is not fixed. Urbanised Namibian children ultimately developed a global bias that was indistinguishable from that of British children; however, a global bias did not emerge until much later in development. For all populations, the greatest developmental effects were observed directly following the onset of formal education. No overall gender effects were observed; however, there was a significant gender-by-age interaction which was difficult to reconcile with existing biological-level accounts of gender differences in the development of local-global bias. Within-group analyses compared the effects of urbanisation and education on local-global bias for traditional and urban Namibian boys and girls separately. For both traditional and urban boys, education mediated all effects of age and urbanisation; however, this was not the case for girls. Traditional Namibian girls retained a local bias regardless of age, education, or urbanisation, and in urbanised girls, the development of a global bias was not attributable to any one factor specifically. These results are broadly consistent with the aforementioned findings that education promoted a global bias in urbanised Namibian men but not women. The development of local-global bias does not follow a fixed trajectory but is subject to environmental control. Understanding how variability in the development of local-global bias might arise, particularly in the context of gender, may have far-reaching implications. For example, a number of educationally important cognitive functions (e.g., spatial ability) are known to show consistent gender differences in childhood, and local-global bias may mediate some of these effects. With education becoming an increasingly prevalent force across much of the developing world, it will be important to understand the processes that underpin its effects and their implications.
Keywords: cross-cultural, development, education, gender, local-global bias, perception, urbanisation, urbanization
225 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity
Authors: Ladislav Écsi, Roland Jančo
Abstract:
Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled to give the overall stress-free intermediate configuration, since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, the inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. This paper presents a proper continuum-based reformulation of current problems in finite strain plasticity (the underlying kinematics are summarised below). It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case, or the compatible plastic deformation gradient in the latter case, in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. The unstressed intermediate configuration – the unloaded configuration after the plastic flow, where the residual stresses have been removed – can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent, with both elastic and plastic components. The residual strains and stresses originate from the difference between the compatible plastic/permanent displacement field gradient and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements where the intermediate configuration is calculated, the analysis results of which are compared with those of the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility
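For reference, the kinematic relations the abstract builds on can be stated compactly in conventional notation (this is the standard textbook form, not a transcription of the paper's own equations):

```latex
% Multiplicative split of the deformation gradient
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}
% Plastic velocity gradient in the intermediate configuration
\mathbf{L}^{p} = \dot{\mathbf{F}}^{p}\,(\mathbf{F}^{p})^{-1}
% Associative flow rule defined in the current configuration:
% the product of the plastic multiplier and the yield surface normal
\mathbf{l}^{p} = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}}
```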
224 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials
Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova
Abstract:
Catecholamines such as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, and Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques can only provide crude spatial information. Besides, the difficulty of catecholamine determination in biological materials is associated with their low normal concentrations (~1 nM) in biomaterials, which may fall another order of magnitude lower because of some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes and, for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15–30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a complete solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid-based synthetic receptors, with the sensors operating in a turn-off mode. In this work we present fluorescent and SERS turn-on sensor systems based on bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide the selective recognition, visualization, and sensing of the above-mentioned catecholamines at nanomolar concentrations in biomaterials (cell cultures, tissue, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) functionalized the surface with 'recognizer' molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulations for the theoretical prediction and interpretation of some properties of the developed materials and of the analytical signals obtained in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a, and 15-29-01330 ofi_m).
Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system
223 X-Ray Detector Technology Optimization in CT Imaging
Authors: Aziz Ikhlef
Abstract:
Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, significant concerns have been raised in both the medical and regulatory communities about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. The added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the scintillator-based detector's temporal response has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 269
222 Teachers’ Instructional Decisions When Teaching Geometric Transformations
Authors: Lisa Kasmer
Abstract:
Teachers’ instructional decisions shape the structure and content of mathematics lessons and influence the mathematics that students are given the opportunity to learn. Therefore, it is important to better understand how teachers make instructional decisions and thus find new ways to help practicing and future teachers give their students a more effective and robust learning experience. Understanding the relationship between teachers’ instructional decisions and their goals, resources, and orientations (beliefs) is important given the heightened focus on geometric transformations in the middle school mathematics curriculum. This work is significant because the development and support of current and future teachers requires more effective ways to teach geometry to their students. The following research questions frame this study: (1) As middle school mathematics teachers plan and enact instruction related to teaching transformations, what thinking processes do they engage in to make decisions about teaching transformations with or without a coordinate system? (2) How do the goals, resources, and orientations of these teachers impact their instructional decisions, and what do these decisions reveal about their understanding of teaching transformations? Teachers and students alike struggle with understanding transformations; many teachers skip or hurriedly teach transformations at the end of the school year. However, transformations are an important mathematical topic, as they support students’ understanding of geometric and spatial reasoning. Geometric transformations are a foundational concept in mathematics, not only for understanding congruence and similarity but also for proofs, algebraic functions, calculus, etc. Geometric transformations also underpin the secondary mathematics curriculum, as features of transformations transfer to other areas of mathematics. Teachers’ instructional decisions, in terms of the goals, orientations, and resources that support them, were analyzed using open coding. Open coding is recognized as an initial step in qualitative analysis, where comparisons are made and preliminary categories are considered. Initial codes and categories from current research on teachers’ thinking processes related to the decisions they make while planning and reflecting on lessons were also noted. Ideas and themes common across teachers were surfaced, compared, and analyzed while seeking patterns. Finally, attributes of teachers’ goals, orientations, and resources were identified in order to begin to build a picture of the reasoning behind their instructional decisions. These categories became the basis for the organization and conceptualization of the data. Preliminary results suggest that teachers often rely on their own orientations about teaching geometric transformations. These beliefs are underpinned by the teachers’ own mathematical knowledge related to teaching transformations. When a teacher does not have a robust understanding of transformations, they are limited by this lack of knowledge. These shortcomings limit students’ opportunities to learn, and thus disadvantage their understanding of transformations. Teachers’ goals are also limited by their paucity of knowledge regarding transformations, as these goals do not fully represent the range of comprehension a teacher needs to teach this topic well.
Keywords: coordinate plane, geometric transformations, instructional decisions, middle school mathematics
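For readers outside mathematics education, the content at stake is compact: a transformation on the coordinate plane is a rule on coordinates. A minimal illustrative sketch (not from the study) contrasting the coordinate rule with the matrix form of a quarter-turn rotation:

```python
# Minimal sketch: a geometric transformation on the coordinate plane.
# Rotating a triangle 90 degrees counterclockwise about the origin maps
# (x, y) -> (-y, x); the matrix form makes that rule explicit for students.
import numpy as np

triangle = np.array([[1, 1], [4, 1], [1, 3]])        # vertex coordinates
R90 = np.array([[0, -1],
                [1,  0]])                            # 90-degree CCW rotation matrix

rotated = triangle @ R90.T                           # apply to each vertex (row)
for before, after in zip(triangle, rotated):
    print(before.tolist(), "->", after.tolist())
# Without a coordinate system the same idea is taught as "turn the shape a
# quarter turn about a point", which this output lets students verify.
```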
Procedia PDF Downloads 87
221 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry up to recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize precisely the base materials (laminates, electrolytic copper, …), in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated directly due to the small thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigate the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. The methodology has been applied to one laminate used in hyperfrequency spatial applications in order to get its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements—The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space and Cimulec, is acknowledged.
Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
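The inverse step described above can be illustrated with a deliberately simplified surrogate: solve for the resin modulus that makes a homogenized in-plane modulus match the measured one. A minimal sketch, with a rule-of-mixtures stand-in for the paper’s 3D homogenization and illustrative numbers throughout:

```python
# Minimal sketch of the inverse step: estimate the unknown resin modulus so
# that a homogenized in-plane laminate modulus matches the measured value.
# A rule-of-mixtures surrogate stands in for the paper's 3D woven-composite
# homogenization; all numbers (GPa, volume fraction) are illustrative.
from scipy.optimize import brentq

E_fiber = 73.0      # assumed E-glass fibre modulus, GPa
v_fiber = 0.45      # assumed fibre volume fraction
E_measured = 24.0   # in-plane modulus measured by uniaxial test + DIC, GPa

def homogenized_inplane(E_resin):
    # Crude mix of Voigt (along fibres) and Reuss (transverse) bounds to mimic
    # a balanced woven laminate; the real model is a 3D FE homogenization.
    E_L = v_fiber * E_fiber + (1 - v_fiber) * E_resin
    E_T = 1.0 / (v_fiber / E_fiber + (1 - v_fiber) / E_resin)
    return 0.5 * (E_L + E_T)

# Solve homogenized(E_resin) = E_measured for the resin modulus.
E_resin = brentq(lambda E: homogenized_inplane(E) - E_measured, 0.5, 20.0)
print(f"estimated resin modulus ~ {E_resin:.2f} GPa")
```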
Procedia PDF Downloads 202
220 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom
Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou
Abstract:
The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events and impact the physiology of both partners, as well as shape ecosystem diversity. However, knowledge about the interactions and network correlations among the broader spectrum of microbes that drive the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches simultaneously assessed the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, among the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal biosphere clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among eukaryotes, the dinoflagellate (Alexandrium sp.) dominated the onset stage, whereas multiple species (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom-collapse stage. In sediments, microbial biomass and species richness were much higher than in the water column. Only Flavobacteriales and Rhodobacterales showed a slight response to bloom stages. Unlike the bacteria, the archaeal and eukaryotic community structures fluctuated only slightly in the sediment. Network analyses of the inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some eukaryotic plankton (Mediophyceae, Mamiellophyceae, Dictyochophyceae and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than archaeal effects. The changes in populations were also significantly shaped by water temperature and substrate availability (N & P resources). The results suggest that clades are specialized at different time periods, and that pre-bloom succession was mainly bottom-up controlled while the late-bloom period was controlled by top-down patterns. Additionally, the phytoplankton and prokaryotic communities correlated better with each other, which indicates that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (on temporal and spatial scales) for understanding microbial ecological responses and their network associations during algal blooms. This offers a potential multidisciplinary explanation of algal-microbe interaction and takes us beyond the traditional view of the patterns of algal bloom initiation, development, decline, and biogeochemistry.
Keywords: microbial community, harmful algal bloom, ecological process, network
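The network step can be sketched generically: compute pairwise rank correlations across samples and keep only strong, significant edges. A minimal sketch with synthetic abundances and illustrative taxon names and cutoffs (not the study’s pipeline):

```python
# Minimal sketch of the network step: build a microbial co-occurrence network
# from an abundance table using Spearman correlations, keeping only strong,
# significant edges. Taxon names and the random data are purely illustrative.
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
taxa = ["Alexandrium", "Gammaproteobacteria", "Bacteroidetes",
        "Alphaproteobacteria", "Cyanobacteria", "AOA_archaea"]
abundance = rng.poisson(50, size=(20, len(taxa)))   # 20 samples x 6 taxa

G = nx.Graph()
G.add_nodes_from(taxa)
for i in range(len(taxa)):
    for j in range(i + 1, len(taxa)):
        rho, p = spearmanr(abundance[:, i], abundance[:, j])
        if abs(rho) > 0.6 and p < 0.05:             # typical cutoff choices
            G.add_edge(taxa[i], taxa[j], weight=rho)

print(f"{G.number_of_edges()} edges kept;",
      "hubs:", sorted(G.degree, key=lambda kv: -kv[1])[:3])
# Highly connected nodes (hubs) flag the taxa that most shape the community.
```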
Procedia PDF Downloads 113
219 Peak Constituent Fluxes from Small Arctic Rivers Generated by Late Summer Episodic Precipitation Events
Authors: Shawn G. Gallaher, Lilli E. Hirth
Abstract:
As permafrost thaws with the continued warming of the Alaskan North Slope, a progressively thicker active thaw layer is evidently releasing previously sequestered nutrients, metals, and particulate matter, exposing them to fluvial transport. In this study, we estimate material fluxes on the North Slope of Alaska during the 2019-2022 melt seasons. The watershed of the Alaskan North Slope can be categorized into three regions: mountains, tundra, and coastal plain. Precipitation and discharge data were collected from repeat visits to 14 sample sites for biogeochemical surface-water samples, 7 point-discharge measurement sites, 3 project-deployed meteorology stations, and 2 U.S. Geological Survey (USGS) continuous discharge observation sites. The timing, intensity, and spatial distribution of precipitation determine the material flux composition in the Sagavanirktok and surrounding bodies of water, with geogenic constituents (e.g., dissolved inorganic carbon (DIC)) expected from mountain-flushed events and biogenic constituents (e.g., dissolved organic carbon (DOC)) expected from transitional tundra precipitation events. Project goals include connecting late summer precipitation events to peak discharge to determine the responses of the watershed to localized atmospheric forcing. Field measurements showed widespread precipitation in August 2019, generating an increase in total suspended solids, dissolved organic carbon, and iron fluxes from the tundra, shifting the main-stem mountain river biogeochemistry toward tundra source characteristics typically only observed during the spring floods. Intuitively, a large-scale precipitation event (defined by this study as exceeding 12.5 mm of precipitation on a single observation day) would dilute a body of water; however, in this study, concentrations increased with higher discharge responses on several occasions. These large-scale precipitation events continue to produce peak constituent fluxes as the thaw layer increases in depth and late summer precipitation increases, evidenced by 6 large-scale events in July 2022 alone. This increase in late summer events is in sharp contrast to the 3 or fewer large events in July in each of the last 10 years. Changes in precipitation intensity, timing, and location have introduced late summer peak constituent flux events previously confined to the spring freshet.
Keywords: Alaska North Slope, arctic rivers, material flux, precipitation
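The material-flux estimate itself is straightforward arithmetic: constituent flux is concentration times discharge, and days above 12.5 mm of precipitation are flagged as large-scale events. A minimal sketch with hypothetical daily records (not the project’s field data):

```python
# Minimal sketch of the flux calculation implied above: instantaneous
# constituent flux = concentration x discharge, with days flagged as
# "large-scale" precipitation events when they exceed 12.5 mm. The daily
# records below are hypothetical, not the project's measurements.
import numpy as np

precip_mm = np.array([2.0, 14.5, 0.0, 13.0, 5.5])             # daily precipitation
discharge_m3s = np.array([80.0, 210.0, 150.0, 260.0, 120.0])  # river discharge
doc_mgL = np.array([3.1, 6.8, 4.0, 7.5, 3.6])                 # DOC concentration

# mg/L * m3/s = g/s; times 86400 s/day gives g/day, then convert to kg/day.
doc_flux_kg_day = doc_mgL * discharge_m3s * 86400 / 1000.0
large_event = precip_mm > 12.5

for p, q, f, big in zip(precip_mm, discharge_m3s, doc_flux_kg_day, large_event):
    tag = "LARGE-SCALE EVENT" if big else ""
    print(f"precip {p:5.1f} mm  Q {q:6.1f} m3/s  DOC flux {f:10.0f} kg/day  {tag}")
# Note the concentration rising with discharge, matching the observation that
# events can increase, rather than dilute, constituent concentrations.
```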
Procedia PDF Downloads 73
218 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Authors: André Jesus, Yanjie Zhu, Irwanda Laory
Abstract:
Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical models (model-based approaches) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm’s performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process
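The modular structure can be sketched in a toy 1-D setting: one Gaussian process emulates the numerical model, and a second absorbs the model-experiment residual as the discrepancy function. A minimal sketch using scikit-learn GPs and synthetic data, illustrating only the staging, not the paper’s four-stage estimation:

```python
# Minimal sketch of the modular idea: fit a GP surrogate to numerical-model
# runs, then fit a second GP to the model/measurement residuals as the
# discrepancy function. This toy 1-D version (sklearn GPs, synthetic data)
# only illustrates the structure, not the paper's formulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 15)[:, None]           # e.g. position on the bridge

model_runs = np.sin(4 * x).ravel()               # FEM output at design points
truth = np.sin(4 * x).ravel() + 0.3 * x.ravel()  # reality = model + discrepancy
measured = truth + rng.normal(0, 0.02, x.shape[0])   # observation error

# Stage 1: emulate the (expensive) numerical model with a GP surrogate.
surrogate = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x, model_runs)

# Stage 2: a second GP absorbs the systematic model-experiment discrepancy.
residual = measured - surrogate.predict(x)
discrepancy = GaussianProcessRegressor(kernel=RBF(0.5), alpha=0.02**2)
discrepancy.fit(x, residual)

x_new = np.array([[0.33]])
pred = surrogate.predict(x_new) + discrepancy.predict(x_new)
print("corrected prediction at 0.33:", pred[0])
```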
Procedia PDF Downloads 324
217 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One of the approaches commonly implemented is known as 'direct numerical simulation', DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, the 'Kolmogorov scale'. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks. At this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible flow fields, at both high and low Reynolds numbers. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error behaviour will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equation, with the goal of analyzing the solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained for each of these problems will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
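As a point of reference for the Burgers test case named above, a generic explicit finite-difference solver (not the IDS) for the viscous Burgers equation looks as follows; grid, viscosity, and initial profile are illustrative choices:

```python
# Minimal sketch: a standard finite-difference solution of the viscous
# Burgers equation u_t + u*u_x = nu*u_xx, the kind of reference problem the
# IDS is tested against. This explicit upwind/central scheme is a generic
# baseline, not the Integro-Differential Scheme itself.
import numpy as np

nx, nu = 201, 0.01
x = np.linspace(0.0, 2.0 * np.pi, nx)
dx = x[1] - x[0]
u = np.sin(x) + 1.5                              # smooth, everywhere-positive profile
dt = 0.2 * min(dx / u.max(), dx**2 / (2 * nu))   # CFL + diffusion stability limits

t, t_end = 0.0, 1.0
while t < t_end:
    um, up = np.roll(u, 1), np.roll(u, -1)       # periodic neighbours
    conv = u * (u - um) / dx                     # upwind convection (valid for u > 0)
    diff = nu * (up - 2 * u + um) / dx**2        # central diffusion
    u = u - dt * conv + dt * diff
    t += dt

print(f"steepest gradient after t={t_end}: {abs(np.gradient(u, dx)).max():.2f}")
# Over long times the front steepens and is then smeared by viscosity; a
# DNS-quality scheme must track this balance without numerical oscillation.
```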
Procedia PDF Downloads 136
216 Predicting Long-Term Performance of Concrete under Sulfate Attack
Authors: Elakneswaran Yogarajah, Toyoharu Nawa, Eiji Owaki
Abstract:
Cement-based materials have been used in various reinforced concrete structural components as well as in nuclear waste repositories. Sulfate attack has been an environmental issue for cement-based materials exposed to sulfate-bearing groundwater or soils, and it plays an important role in the durability of concrete structures. The reaction between penetrating sulfate ions and cement hydrates can result in swelling, spalling, and cracking of the cement matrix in concrete. These processes induce a reduction of mechanical properties and a decrease in the service life of an affected structure. It has been identified that the precipitation of secondary sulfate-bearing phases such as ettringite, gypsum, and thaumasite can cause the damage. Furthermore, crystallization of soluble salts such as sodium sulfate induces degradation due to crystal formation and phase changes. Crystallization of mirabilite (Na₂SO₄·10H₂O) and thenardite (Na₂SO₄), or their phase changes (mirabilite to thenardite or vice versa) due to temperature or sodium sulfate concentration, does not involve any chemical interaction with cement hydrates. Over the past couple of decades, intensive work has been carried out on sulfate attack in cement-based materials. However, several uncertainties still exist regarding the mechanism of damage to concrete in sulfate environments. In this study, modelling work has been conducted to investigate the chemical degradation of cementitious materials in various sulfate environments. Both internal and external sulfate attack are considered in the simulation. For internal sulfate attack, the hydrate assemblage and pore solution chemistry of Portland cement (PC) and slag co-hydrating with sodium sulfate solution are calculated to determine the degradation of the PC and slag-blended cementitious materials. Pitzer interaction coefficients were used to calculate the activity coefficients of the solution chemistry at high ionic strength. The deterioration mechanism of co-hydrating cementitious materials with 25% Na₂SO₄ by weight is the formation of mirabilite crystals and ettringite; their formation strongly depends on sodium sulfate concentration and temperature. For external sulfate attack, the deterioration of various types of cementitious materials under external sulfate ingress is simulated through a reactive transport model. The reactive transport model is verified against experimental data in terms of the phase assemblage of various cementitious materials, with spatial distribution, for different sulfate solutions. Finally, the reactive transport model is used to predict the long-term performance of cementitious materials exposed to 10% Na₂SO₄ for 1000 years. The dissolution of cement hydrates and the secondary formation of sulfate-bearing products, mainly ettringite, are the dominant degradation mechanisms, but not sodium sulfate crystallization.
Keywords: thermodynamic calculations, reactive transport, radioactive waste disposal, PHREEQC
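The external-attack simulation can be caricatured in a few lines: sulfate diffuses into the slab while a first-order sink stands in for precipitation of sulfate-bearing phases. A minimal sketch with assumed diffusivity, rate, and boundary concentration (the paper instead couples transport to full thermodynamics):

```python
# Minimal sketch of the reactive-transport idea: 1-D diffusion of sulfate
# into a concrete slab with a simple first-order sink standing in for
# ettringite/gypsum precipitation. The geochemistry here is deliberately
# crude; the study couples transport to full thermodynamic calculations.
import numpy as np

L, nx = 0.05, 101                 # 5 cm depth, grid points
dx = L / (nx - 1)
D = 1e-11                         # effective diffusivity, m2/s (assumed)
k = 1e-7                          # first-order binding rate, 1/s (assumed)
c_boundary = 100.0                # external sulfate, mol/m3 (assumed)

c = np.zeros(nx)
dt = 0.4 * dx**2 / D              # explicit stability limit
years = 10
steps = int(years * 365.25 * 86400 / dt)
for _ in range(steps):
    c[0] = c_boundary                                   # exposed face
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (D * lap[1:-1] - k * c[1:-1])       # diffusion - precipitation
    c[-1] = c[-2]                                       # zero-flux back face

depth_mm = np.argmax(c < 0.01 * c_boundary) * dx * 1000
print(f"sulfate penetration front after {years} years ~ {depth_mm:.1f} mm")
```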
Procedia PDF Downloads 160
215 Seismic Data Analysis of Intensity, Orientation and Distribution of Fractures in Basement Rocks for Reservoir Characterization
Authors: Mohit Kumar
Abstract:
Natural fractures are classified into two broad categories, joints and faults, on the basis of shear movement in the deposited strata. Natural fractures always have a strong structural relationship with extensional or non-extensional tectonics, and sometimes the result is seen in the form of micro-cracks. Geological evidence suggests that both large- and small-scale fractures help in analyzing seismic anisotropy, which essentially contributes to the characterization of petrophysical behaviour associated with the directional migration of fluids. One may ask why basement study is needed at all, as the basement has historically been treated as non-productive and geoscientists had little interest in exploring basement rocks. Basement rock is subjected to high pressure and temperature and tends to be highly fractured because of the tectonic stresses applied to the formation, along with other geological factors such as depositional trend, internal stress of the rock body, rock rheology, pore fluid, and capillary pressure. Sometimes carbonate rocks also play the role of basement, with an igneous body (e.g., basalt) deposited over the carbonate rocks, and fluid migrates from the carbonate to the igneous rock due to buoyancy forces and adequate permeability generated by fracturing. So, in order to analyze the complete petroleum system, Fluid Migration Characterization (FMC) through fractured media is necessary, including fracture intensity, orientation, and distribution in both the basement rock and the country rock. A good understanding of fractures can thus lead to projecting the correct wellbore trajectory or path that passes through potential permeable zones generated through intensified P-T and tectonic stress conditions. This paper deals with the analysis of these fracture properties, namely intensity, orientation, and distribution, in basement rock. Large-scale fractures can be interpreted on a seismic section; however, small-scale fractures show ambiguity in interpretation because fractures in basement rock lie below the seismic wavelength and hence yield erroneous results in identification. Seismic attribute techniques also help us to delineate seismic fractures and subtle changes in fracture zones; these can be inferred from azimuthal anisotropy in velocity and amplitude, and from spectral decomposition. Seismic azimuthal anisotropy derives fracture intensity and orientation from compressional-wave and converted-wave data, based on the variation of amplitude or velocity with azimuth. Still, a detailed analysis of fractured basement requires full isotropic and anisotropic analysis of the fracture matrix and surrounding rock matrix in order to characterize the spatial variability of basement fractures which support the migration of fluid from the basement to the overlying rock.
Keywords: basement rock, natural fracture, reservoir characterization, seismic attribute
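The azimuthal-anisotropy idea reduces to fitting a cos 2θ variation: V(θ) ≈ A + B·cos 2(θ − θ₀), where θ₀ tracks fracture strike (the fast direction) and B/A is a proxy for intensity. A minimal sketch on synthetic data (no field measurements):

```python
# Minimal sketch: estimating fracture orientation and intensity from the
# azimuthal variation of velocity, V(theta) ~ A + B*cos(2*(theta - theta0)).
# The fast direction theta0 tracks fracture strike and B/A the anisotropy
# (a proxy for intensity). The data below are synthetic, not field records.
import numpy as np
from scipy.optimize import curve_fit

def v_azimuth(theta_deg, A, B, theta0_deg):
    return A + B * np.cos(2 * np.radians(theta_deg - theta0_deg))

azimuths = np.arange(0, 180, 15.0)                       # degrees
rng = np.random.default_rng(2)
observed = v_azimuth(azimuths, 3500.0, 120.0, 40.0) + rng.normal(0, 10, azimuths.size)

popt, _ = curve_fit(v_azimuth, azimuths, observed, p0=[3400.0, 100.0, 30.0])
A, B, theta0 = popt
print(f"fast direction (fracture strike) ~ {theta0:.1f} deg,"
      f" anisotropy ~ {100 * B / A:.1f}%")
```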
Procedia PDF Downloads 196
214 Tailoring Quantum Oscillations of Excitonic Schrodinger’s Cats as Qubits
Authors: Amit Bhunia, Mohit Kumar Singh, Maryam Al Huwayz, Mohamed Henini, Shouvik Datta
Abstract:
We report [https://arxiv.org/abs/2107.13518] the experimental detection and control of a Schrodinger’s-Cat-like, macroscopically large, quantum-coherent state of a two-component Bose-Einstein condensate of spatially indirect electron-hole pairs, or excitons, using a III-V semiconductor resonant tunneling diode. This provides access to millions of excitons as qubits to allow efficient, fault-tolerant quantum computation. In this work, we measure phase-coherent periodic oscillations in photo-generated capacitance as a function of applied voltage bias and light intensity over a macroscopically large area. The periodic presence and absence of splitting of excitonic peaks in the optical spectra measured by photocapacitance point towards tunneling-induced variations in the capacitive coupling between the quantum well and quantum dots. Observation of negative ‘quantum capacitance’, due to screening of charge carriers by the quantum well, indicates Coulomb correlations of interacting excitons in the plane of the sample. We also establish that coherent resonant tunneling in this well-dot heterostructure restricts the available momentum space of the charge carriers within the quantum well. Consequently, the electric polarization vectors of the associated indirect excitons collectively orient along the direction of applied bias, and these excitons undergo Bose-Einstein condensation below ~100 K. The generation of interference beats in the photocapacitance oscillations, even with incoherent white light, further confirms the presence of stable, long-range spatial correlation among these indirect excitons. We finally demonstrate collective Rabi oscillations of these macroscopically large, ‘multipartite’, two-level, coupled and uncoupled quantum states of the excitonic condensate as qubits. Therefore, our study not only brings the physics and technology of Bose-Einstein condensation within the reach of semiconductor chips but also opens up experimental investigations of the fundamentals of quantum physics using similar techniques. The operational temperature of such a two-component excitonic BEC can be raised further with a more densely packed, ordered array of QDs and/or using materials having larger excitonic binding energies. However, the fabrication of single crystals of 0D-2D heterostructures using 2D materials (e.g., transition metal dichalcogenides, oxides, perovskites, etc.) having higher excitonic binding energies is still an open challenge for semiconductor optoelectronics. As of now, these 0D-2D heterostructures can already be scaled up for mass production of miniaturized, portable quantum optoelectronic devices using the existing III-V and/or nitride-based semiconductor fabrication technologies.
Keywords: exciton, Bose-Einstein condensation, quantum computation, heterostructures, semiconductor physics, quantum fluids, Schrodinger’s Cat
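The qubit behaviour invoked above rests on the textbook two-level Rabi model: with Rabi frequency Ω and detuning Δ, the excited-state population is P(t) = (Ω²/Ω_eff²)·sin²(Ω_eff·t/2) with Ω_eff = √(Ω² + Δ²). A minimal sketch with arbitrary illustrative values:

```python
# Minimal sketch: Rabi oscillations of a driven two-level system, the textbook
# model behind the collective qubit oscillations described above. The Rabi
# frequency and detuning values are arbitrary illustrative choices.
import numpy as np

omega_R = 2 * np.pi * 1.0e9      # Rabi frequency, rad/s (assumed 1 GHz drive)
delta = 2 * np.pi * 0.3e9        # detuning from resonance, rad/s (assumed)
t = np.linspace(0, 5e-9, 6)      # sample times in seconds

omega_eff = np.hypot(omega_R, delta)                         # generalized Rabi freq.
p_excited = (omega_R / omega_eff) ** 2 * np.sin(omega_eff * t / 2) ** 2

for ti, pi in zip(t, p_excited):
    print(f"t = {ti * 1e9:.1f} ns  P_excited = {pi:.3f}")
# On resonance (delta = 0) the population cycles fully between the two levels;
# detuning reduces the contrast, which is one handle for controlling qubits.
```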
Procedia PDF Downloads 178
213 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the routing reduction. This translates into better image quality in low-signal applications, improving low-dose imaging across a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image-chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 193
212 Land, History and Housing: Colonial Legacies and Land Tenure in Kuala Lumpur
Authors: Nur Fareza Mustapha
Abstract:
Solutions to policy problems need to be tailored to the local context, taking into account the trajectory of the local development path to ensure their efficacy. For Kuala Lumpur, rapid urbanization and migration into the city over the past few decades have increased the demand for housing to accommodate a growing urban population. Land supply constraints, a critical factor affecting housing affordability, have been attributed to intensifying market pressures, which grew in tandem with the demands of urban development, along with existing institutional constraints on the governance of land. While demand-side pressures are inevitable given the fixed supply of land, supply-side constraints in regulations distort markets and, if addressed inappropriately, may lead to mistargeted policy interventions. Given Malaysia’s historical development, regulatory barriers for land may originate from the British colonial period, when many aspects of the current laws governing tenure were introduced and formalized, and henceforth became ingrained in the system. This research undertakes a postcolonial institutional analysis approach to uncover the causal mechanism driving the evolution of land tenure systems in post-colonial Kuala Lumpur. It seeks to determine the sources of these shifts, focusing on the incentives and bargaining positions of actors during periods of institutional flux and change. It aims to construct a conceptual framework to further this understanding and to elucidate how this historical trajectory affects current access to urban land markets for housing. Archival analysis is used to outline and analyse the evolution of land tenure systems in Kuala Lumpur, while stakeholder interviews are used to analyse its impact on the current urban land market, with a particular focus on the provision of and access to affordable housing in the city. Preliminary findings indicate that many aspects of the laws governing tenure that were introduced and formalized during the British colonial period have endured until the present day. Customary rules of tenure were displaced by rules following a European tradition, which found legitimacy through a misguided interpretation of local laws regarding the ownership of land. Colonial notions of race, and their binary view of natives vs. non-natives, have also persisted in the construction and implementation of current legislation regarding land tenure. More concrete findings from this study will generate a more nuanced understanding of the regulatory land supply constraints in Kuala Lumpur, taking into account both the long- and short-term spatial and temporal processes that affect how these rules are created, implemented, and enforced.
Keywords: colonial discourse, historical institutionalism, housing, land policy, post-colonial city
Procedia PDF Downloads 127