Search results for: complex society
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4970

110 Diamond-Like Carbon-Based Structures as Functional Layers on Shape-Memory Alloy for Orthopedic Applications

Authors: Piotr Jablonski, Krzysztof Mars, Wiktor Niemiec, Agnieszka Kyziol, Marek Hebda, Halina Krawiec, Karol Kyziol

Abstract:

NiTi alloys, possessing unique mechanical properties such as pseudoelasticity and the shape memory effect (SME), are suitable for many applications, including implantology and biomedical devices. Additionally, these alloys have an elastic modulus similar to that of human bone, which is very important in orthopedics. Unfortunately, the environment of physiological fluids in vivo causes unfavorable release of Ni ions, which in turn may lead to metallosis as well as allergic reactions and toxic effects in the body. For these reasons, the surface properties of NiTi alloys should be improved to increase corrosion resistance while preserving biological properties, i.e. excellent biocompatibility. Promising in this respect are layers based on DLC (Diamond-Like Carbon) structures, which are an attractive solution for many applications in implantology. These DLC coatings, usually obtained by PVD (Physical Vapour Deposition) and PA CVD (Plasma Activated Chemical Vapour Deposition) methods, can also be modified by doping with other elements such as silicon, nitrogen, oxygen, fluorine, titanium and silver. These methods, in combination with a suitably designed layer structure, make it possible to tailor the physicochemical and biological properties of the modified surfaces, providing the desired surface properties of the substrate in a single technological process. In this work, several types of layers based on DLC structures (incl. Si-DLC or Si/N-DLC) are proposed as a promising and attractive approach to the surface functionalization of a shape memory alloy. Nitinol substrates were modified under plasma conditions using RF CVD (Radio Frequency Chemical Vapour Deposition). The influence of plasma treatment on the useful properties of the modified substrates was determined, both after deposition of DLC layers doped with silicon and/or nitrogen atoms and after pre-treatment alone in an O2/NH3 plasma atmosphere in an RF reactor. 
The microstructure and topography of the modified surfaces were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM), and the atomic structure of the coatings was characterized by IR and Raman spectroscopy. The research also included the evaluation of surface wettability and surface energy, as well as the characterization of selected mechanical and biological properties of the layers. In addition, the corrosion behaviour of the alloys before and after modification was investigated in physiological saline. In order to determine the corrosion resistance of NiTi in Ringer's solution, potentiodynamic polarization curves (LSV, Linear Sweep Voltammetry) were recorded, and the evolution of the corrosion potential of the NiTi alloy versus immersion time in Ringer's solution was followed. Based on all of the research carried out, the usefulness of the proposed modifications of nitinol for medical applications was assessed. It was shown, inter alia, that the Si-DLC layers obtained on the surface of the NiTi alloy exhibit a characteristic complex microstructure and increased surface development, which is an important aspect in improving the osteointegration of an implant. Furthermore, the modified alloy exhibits biocompatibility, and the transfer of metal (Ni, Ti) into Ringer's solution is clearly limited.
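The potentiodynamic measurements described above relate the applied overpotential to the net current through the Butler-Volmer relation that underlies Tafel analysis of LSV data. As a hedged illustration of that relation (all parameter values below are assumed, illustrative numbers, not the study's measurements), one might sketch:

```python
import numpy as np

def butler_volmer(eta, i_corr, beta_a, beta_c):
    """Net current density (A/cm^2) at overpotential eta (V) above E_corr."""
    return i_corr * (np.exp(2.303 * eta / beta_a) - np.exp(-2.303 * eta / beta_c))

# Illustrative parameters for a passive NiTi-like surface (assumed values)
i_corr = 1e-7                # corrosion current density, A/cm^2
beta_a, beta_c = 0.12, 0.12  # anodic/cathodic Tafel slopes, V/decade

# Sweep the potential around the corrosion potential, as an LSV scan does
eta = np.linspace(-0.25, 0.25, 501)
i_net = butler_volmer(eta, i_corr, beta_a, beta_c)
```

A protective coating lowers the fitted `i_corr`: the smaller the corrosion current density extracted from such a curve, the slower the release of Ni and Ti ions into solution.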

Keywords: bioactive coatings, corrosion resistance, doped DLC structure, NiTi alloy, RF CVD

Procedia PDF Downloads 201
109 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Although empirical research carried out at the country level has confirmed theories explaining the diffusion of terrorism across space and time, scholars have failed to assess these diffusion theories at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models accurate in both space and time. In an effort to address these shortcomings, this research proposes a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocated data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised into Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through integrated nested Laplace approximation (INLA), a recent fitting approach that computes fast and accurate estimates of posterior marginals. 
Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
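The core of the binomial model is that the probability of observing a lethal attack in a spatial unit is a logistic function of covariates. The abstract's actual model is a Bayesian spatio-temporal point process on a triangulated sphere fitted with INLA; the flat-grid cartoon below, on synthetic data with made-up coefficients, only illustrates that binomial-regression backbone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                        # one synthetic covariate per cell
p_true = 1 / (1 + np.exp(-(-2.0 + 1.5 * x)))  # "true" attack probability
y = rng.binomial(1, p_true)                   # observed attack indicator per cell

# Maximum-likelihood logistic fit by plain gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - p) / n           # averaged score step
```

After fitting, `beta` recovers the generating coefficients (about -2.0 and 1.5 here); in the real model the covariates would be the geographical, economic and demographic variables named above, plus spatial random effects.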

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 321
108 Crisis In/Out, Emergent, and Adaptive Urban Organisms

Authors: Alessandra Swiny, Michalis Georgiou, Yiorgos Hadjichristou

Abstract:

This paper focuses on the questions raised through the work of Unit 5, 'In/Out of Crisis, Emergent and Adaptive', an architectural research-based studio at the University of Nicosia. The unit focuses on sustainable architectural and urban explorations tackling the ever-growing crises in their various types, phases and locations. 'Great crisis situations' are seen as 'great chances' that trigger investigations for further development and evolution of the built environment in an ultimately sustainable approach. The crisis is taken as an opportunity to rethink urban and architectural directions, with new forces for invention leading to emergent and adaptive built environments. Unit 5's identity and environment encourage the students to respond optimistically, alternatively and creatively to the current global crisis. Mark Wigley's notion that 'crises are ultimately productive' and that 'they force invention' intrigued and defined the premises of the unit. 'Weather and nature are coauthors of the built environment,' Jonathan Hill states in his 'weather architecture' discourse: the weather is constantly changing, and new environments, the 'subnatures' derived from human activities that David Gissen describes, are created. This set of premises triggered innovative responses from the unit's students. They thoroughly investigated the various kinds of crisis and their causes in relation to various types of terrain. The tools used for the research and investigation were chosen in contradictory pairs in order to generate further crisis situations: the re-used/salvaged competing with the new, the handmade rivalling the fabricated, the analogue juxtaposed with the digital. Students were asked to delve into state-of-the-art technologies in order to propose sustainable, emergent and adaptive architectures and urbanities, always keeping in mind that the human and social aspects of the community should be the core of the investigation. 
The resulting unprecedented spatial conditions and atmospheres of the emergent new ways of living are the ultimate aim of the investigation. Students explored a variety of sites and crisis conditions, such as the vague terrain of the Green Line in Nicosia, the lost footprints of sinking Venice, the endangered Australian coral reefs, the earthquake-torn town of Crevalcore, and the decaying concrete urbanscape of Athens. Among other projects, the 'Plume' project proposes a cloud-like, floating and almost dream-like living environment with unprecedented spatial conditions for the inhabitants of the coal-mine town of Centralia, USA, enabling them not just to survive but even to prosper in this unbearable environment by processing the captured plumes of smoke and heat. Existing water wells inspire inverted vertical structures creating a new underground living network that protects nomads from catastrophic sandstorms in Araouane, Mali: 'Inverted Utopia: Lost Things in the Sand' weaves a series of tea-houses and a library holding lost artifacts and transcripts into a complex underground labyrinth through the use of sand-solidification technology. Within this methodology, crisis is seen as a mechanism allowing the emergence of new and fascinating, ultimately sustainable future cultures and cities.

Keywords: adaptive built environments, crisis as opportunity, emergent urbanities, forces for inventions

Procedia PDF Downloads 410
107 Exploring Type V Hydrogen Storage Tanks: Shape Analysis and Material Evaluation for Enhanced Safety and Efficiency Focusing on Drop Test Performance

Authors: Mariam Jaber, Abdullah Yahya, Mohammad Alkhedher

Abstract:

The shift toward sustainable energy solutions increasingly focuses on hydrogen, recognized for its potential as a clean energy carrier. Despite its benefits, hydrogen storage poses significant challenges, primarily due to its low energy density and high volatility. Among the various solutions, pressure vessels designed for hydrogen storage range from Type I to Type V, each tailored for specific needs and benefits. Notably, Type V vessels, with their all-composite, liner-less design, significantly reduce weight and costs while optimizing space and decreasing maintenance demands. This study focuses on optimizing Type V hydrogen storage tanks by examining how different shapes affect performance in drop tests—a crucial aspect of achieving ISO 15869 certification. This certification ensures that if a tank is dropped, it will fail in a controlled manner, ideally by leaking before bursting. While cylindrical vessels are predominant in mobile applications due to their manufacturability and efficient use of space, spherical vessels offer superior stress distribution and require significantly less material thickness for the same pressure tolerance, making them advantageous for high-pressure scenarios. However, spherical tanks are less efficient in terms of packing and more complex to manufacture. Additionally, this study introduces toroidal vessels to assess their performance relative to the more traditional shapes, noting that the toroidal shape offers a more space-efficient option. The research evaluates how different shapes—spherical, cylindrical, and toroidal—affect drop test outcomes when combined with various composite materials and layup configurations. The ultimate goal is to identify optimal vessel geometries that enhance the safety and efficiency of hydrogen storage systems. 
For our materials, we selected high-performance composites such as Carbon T-700/Epoxy, Kevlar/Epoxy, E-Glass Fiber/Epoxy, and Basalt/Epoxy, configured in various orientations like [0,90]s, [45,-45]s, and [54,-54]. Our tests involved dropping tanks from different angles—horizontal, vertical, and 45 degrees—with an internal pressure of 35 MPa to replicate real-world scenarios as closely as possible. We used finite element analysis and first-order shear deformation theory, conducting tests with the Abaqus Explicit Dynamics software, which is ideal for handling the quick, intense stresses of an impact. The results from these simulations will provide valuable insights into how different designs and materials can enhance the durability and safety of hydrogen storage tanks. Our findings aim to guide future designs, making them more effective at withstanding impacts and safer overall. Ultimately, this research will contribute to the broader field of lightweight composite materials and polymers, advancing more innovative and practical approaches to hydrogen storage. By refining how we design these tanks, we are moving toward more reliable and economically feasible hydrogen storage solutions, further emphasizing hydrogen's role in the landscape of sustainable energy carriers.
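The claim above that spherical vessels "require significantly less material thickness for the same pressure tolerance" follows directly from thin-walled membrane stress theory: hoop stress in a cylinder is pr/t, while membrane stress in a sphere is pr/(2t). The sketch below uses the study's 35 MPa service pressure, but the radius and allowable composite stress are assumed, illustrative numbers:

```python
# Thin-walled pressure vessel membrane stresses: for the same pressure p and
# radius r, a sphere needs half the wall thickness of a cylinder.
p = 35e6             # internal pressure, Pa (the study's test pressure)
r = 0.2              # vessel radius, m (assumed)
sigma_allow = 800e6  # allowable composite stress, Pa (assumed)

t_cyl = p * r / sigma_allow        # cylinder hoop stress:   sigma = p*r/t
t_sph = p * r / (2 * sigma_allow)  # sphere membrane stress: sigma = p*r/(2t)
```

With these numbers the cylinder needs an 8.75 mm wall and the sphere 4.375 mm, which is why spheres are attractive at high pressure despite their poorer packing and harder manufacture.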

Keywords: hydrogen storage, drop test, composite materials, type V tanks, finite element analysis

Procedia PDF Downloads 12
106 Physical Aspects of Shape Memory and Reversibility in Shape Memory Alloys

Authors: Osman Adiguzel

Abstract:

Shape memory alloys belong to a class of smart materials, exhibiting a peculiar property called the shape memory effect. This property is characterized by the recoverability of two distinct shapes of the material at different temperatures. These materials are often called smart materials due to their functionality and their capacity to respond to changes in the environment. Shape memory materials are used as shape memory devices in many interdisciplinary fields such as medicine, bioengineering, metallurgy, the building industry and many engineering fields. The shape memory effect is performed thermally, by heating and cooling after initial cooling and stressing treatments, and this behavior is called thermoelasticity. The effect is based on martensitic transformations, which are characterized by changes in the crystal structure of the material; the shape memory effect is the result of successive thermally induced and stress-induced martensitic transformations. Shape memory alloys exhibit thermoelasticity and superelasticity by means of deformation in the low-temperature product phase and the high-temperature parent phase region, respectively. Superelasticity is performed by stressing and releasing the material in the parent phase region. The loading and unloading paths are different in the stress-strain diagram, and the cycling loop reveals energy dissipation. Because strain energy is absorbed over each cycle, these alloys are mainly used as deformation-absorbent materials in the control of civil structures subjected to seismic events, absorbing strain energy during a disaster or earthquake. 
Thermally induced martensitic transformation occurs on cooling, along with lattice twinning through cooperative movements of atoms by means of lattice-invariant shears; ordered parent phase structures turn into twinned martensite structures, and the twinned structures turn into detwinned structures through stress-induced martensitic transformation when the material is stressed in the martensitic condition. The thermally induced transformation occurs with the cooperative movement of atoms in two opposite directions, along ⟨110⟩-type directions on the {110}-type planes of the austenite matrix, which become the basal plane of the martensite. Copper-based alloys exhibit this property in the metastable β-phase region, which has bcc-based structures in the high-temperature parent phase field. Lattice-invariant shear and twinning are not uniform in copper-based ternary alloys and give rise to the formation of complex layered structures, depending on the stacking sequences on the close-packed planes of the ordered parent phase lattice. In the present contribution, x-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based alloys, CuAlMn and CuZnAl. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase, due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the diffraction angles and intensities of the diffraction peaks change with aging duration at room temperature. In particular, some of the successive peak pairs that satisfy a special relation between Miller indices come close to each other. This result points to a rearrangement of atoms in a diffusive manner.
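The energy dissipation mentioned above is simply the area enclosed by the superelastic loading/unloading loop in the stress-strain diagram. The cartoon below integrates that loop area for an idealised flag-shaped cycle; the elastic slope and the two plateau stresses are assumed, illustrative values, not measured data:

```python
import numpy as np

strain = np.linspace(0.0, 0.06, 601)
E = 40e9  # elastic slope, Pa (assumed)

# Loading: elastic up to 1% strain, then a 400 MPa transformation plateau.
# Unloading: reverse transformation on a lower, 200 MPa plateau.
sigma_load = np.where(strain < 0.01, E * strain, 400e6)
sigma_unload = np.where(strain < 0.01, E * strain, 200e6)

# Dissipated energy density = area enclosed by the loop (trapezoidal rule)
gap = sigma_load - sigma_unload
w_diss = float(np.sum((gap[:-1] + gap[1:]) / 2 * np.diff(strain)))  # J/m^3
```

Here the loop encloses roughly 200 MPa x 5% strain, i.e. about 1e7 J/m^3 dissipated per cycle, which is the quantity exploited when these alloys serve as seismic dampers.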

Keywords: shape memory effect, martensitic transformation, reversibility, superelasticity, twinning, detwinning

Procedia PDF Downloads 161
105 Investigating Links in Achievement and Deprivation (ILiAD): A Case Study Approach to Community Differences

Authors: Ruth Leitch, Joanne Hughes

Abstract:

This paper presents the findings of a three-year government-funded study (ILiAD) that aimed to understand the reasons for differential educational achievement within and between socially and economically deprived areas in Northern Ireland. Previous international studies have concluded that there is a positive correlation between deprivation and underachievement. Our preliminary secondary data analysis suggested that the factors involved in educational achievement within multiple deprived areas may be more complex than this, with some areas of high multiple deprivation having high levels of student attainment, whereas other less deprived areas demonstrated much lower levels of student attainment, as measured by outcomes on high stakes national tests. The study proposed that no single explanation or disparate set of explanations could easily account for the linkage between levels of deprivation and patterns of educational achievement. Using a social capital perspective that centralizes the connections within and between individuals and social networks in a community as a valuable resource for educational achievement, the ILiAD study involved a multi-level case study analysis of seven community sites in Northern Ireland, selected on the basis of religious composition (housing areas are largely segregated by religious affiliation), measures of multiple deprivation and differentials in educational achievement. The case study approach involved three (interconnecting) levels of qualitative data collection and analysis - what we have termed Micro (or community/grassroots level) understandings, Meso (or school level) explanations and Macro (or policy/structural) factors. The analysis combines a statistical mapping of factors with qualitative, in-depth data interpretation which, together, allow for deeper understandings of the dynamics and contributory factors within and between the case study sites. 
Thematic analysis of the qualitative data reveals cross-cutting factors (e.g. demographic shifts and loss of community, the place of the school in the community, parental capacity), while analytic case studies of the explanatory factors associated with each community site also permit a comparative element. Issues arising from the qualitative analysis are classified either as drivers or as inhibitors of educational achievement within and between communities. Key issues emerging as inhibitors or drivers of attainment include: the legacy of the community conflict in Northern Ireland, not least in terms of inter-generational stress linked to substance abuse and mental health issues; differing discourses on notions of 'community' and 'achievement' within and between community sites; inter-agency and intra-agency levels of collaboration and joined-up working; the relationship between the home/school/community triad; and school leadership and school ethos. At this stage, the balance of these factors can be conceptualized in terms of bonding social capital (or the lack of it) within families, within schools, within each community and within agencies, and also bridging social capital between home, school and community, between different communities, and between key statutory and voluntary organisations. The presentation will outline the study rationale and methodology, present some cross-cutting findings, and use an illustrative case study of the findings from one community site to underscore the importance of attending to community differences when trying to engage in research to understand and improve educational attainment for all.

Keywords: educational achievement, multiple deprivation, community case studies, social capital

Procedia PDF Downloads 350
104 Climate Safe House: A Community Housing Project Tackling Catastrophic Sea Level Rise in Coastal Communities

Authors: Chris Fersterer, Col Fay, Tobias Danielmeier, Kat Achterberg, Scott Willis

Abstract:

New Zealand, an island nation, has an extensive coastline peppered with small communities of iconic buildings known as baches. Post-WWII, these modest buildings were constructed by their owners as retreats; they were generally small and low-cost, often used recycled material, and often fell below current acceptable building standards. In the latter part of the 20th century, real estate prices in many of these communities remained low, and these areas became permanent residences for people attracted to this affordable lifestyle choice. The Blueskin Resilient Communities Trust (BRCT) is an organisation that recognises the vulnerability of communities in low-lying settlements, now prone to increased flood threat brought about by climate change and sea level rise. Some of the inhabitants of Blueskin Bay, Otago, NZ have already found their properties to be uninsurable because of the increased frequency of flood events, and property values have slumped accordingly. Territorial authorities also acknowledge this increased risk and have created additional compliance measures for new buildings that are less than 2 m above tidal peaks. Community resilience becomes an additional concern where inhabitants are attracted to a lifestyle associated with a specific location and its people, a lifestyle that cannot be met in a suburban or city context. Traditional models of social housing fail to provide the sense of community connectedness and identity enjoyed by the current residents of Blueskin Bay. BRCT has partnered with the Otago Polytechnic Design School to design a new form of community housing that can react to this environmental change. It is a longitudinal project incorporating participatory approaches as a means of getting people 'on board', understanding complex systems and co-developing solutions. In the first period, they are seeking industry support and funding to develop a transportable and fully self-contained housing model that exploits current technologies. 
BRCT also hopes that the building will become an educational tool to highlight the climate change issues facing us today. This paper uses the Climate Safe House (CSH) as a case study for education in architectural sustainability through experiential learning offered as part of the Otago Polytechnic Bachelor of Design. Students engage with the project through research methodologies including site surveys, resident interviews, data sourced from government agencies, and physical modelling. The process involves collaboration across design disciplines, including product and interior design, but also includes connections with industry, both within the education institution and through stakeholder industries introduced by BRCT. This project offers a rich learning environment where students become engaged through project-based learning within a community of practice spanning architecture, construction, energy and other related fields. The design outcomes are expressed in a series of public exhibitions and forums where community input is sought in a truly participatory process.

Keywords: community resilience, problem based learning, project based learning, case study

Procedia PDF Downloads 251
103 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification

Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos

Abstract:

Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist's visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance, and using conventional MRI alone to provide a definite diagnosis could potentially lead to inaccurate results; histopathological examination of biopsy samples is therefore currently considered the gold standard for obtaining a definite diagnosis. Machine learning is the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. Accordingly, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. A further purpose is to present an alternative way of achieving quick and accurate diagnosis in order to save time and resources in the daily medical workflow. 
Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed in a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of candidate features were considered, including age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy with the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma by the four different algorithms are still in process.
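Of the four classifiers named above, k-Nearest Neighbour is the simplest to show: each new feature vector is assigned the majority label of its k closest training vectors. The sketch below runs on synthetic, well-separated clusters standing in for the real intensity/texture features per tumour slice; the cluster geometry and k are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
classes = 3  # e.g. ependymoma / astrocytoma / medulloblastoma (stand-ins)
# 20 training vectors of 4 features per class, clustered around 0, 3, 6
X_train = np.vstack([rng.normal(loc=3 * c, scale=0.5, size=(20, 4))
                     for c in range(classes)])
y_train = np.repeat(np.arange(classes), 20)

def knn_predict(x, k=5):
    """Classify one feature vector by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to training set
    votes = y_train[np.argsort(d)[:k]]        # labels of the k closest vectors
    return int(np.bincount(votes).argmax())   # majority vote
```

A point near the class-0 cluster centre is labelled 0, one near the class-2 centre is labelled 2; in the study this voting happens in the selected feature space after the WEKA feature-selection step.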

Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology

Procedia PDF Downloads 125
102 Upper Jurassic to Lower Cretaceous Oysters (Bivalvia, Ostreoidea) from Siberia: Taxonomy and Variations of Carbon and Oxygen Isotopes

Authors: Igor N. Kosenko

Abstract:

The present contribution is an analysis of more than 300 specimens of Upper Jurassic to Lower Cretaceous oysters collected by V.A. Zakharov during the 1960s and currently stored in the Trofimuk Institute of Geology and Geophysics SB RAS (Novosibirsk, Russia). They were sampled on the northwestern boundary of Western Siberia (Yatriya, Maurynia, Tol'ya and Lopsiya rivers) and in the north of Eastern Siberia (Boyarka, Bolshaya Romanikha and Dyabaka-Tari rivers). During the last five years, they have been examined for taxonomical and palaeoecological purposes, and isotopic analyses with associated palaeotemperature estimates were performed on the carbonate material of the shells. The taxonomical study consists of classical morphofunctional and biometrical analyses. It is complemented by another large collection of Cretaceous oysters from Crimea, as well as the modern Pacific oyster, Crassostrea gigas, which were studied to understand the range of modification variability between different species. Oysters previously identified as Liostrea are now attributed to four genera: Praeexogyra and Helvetostrea (Flemingostreidae), Pernostrea (Gryphaeidae) and one new genus (Gryphaeidae) including the species "Liostrea" roemeri (Quenstedt). The latter is characterized by a peculiar ethology, being attached to floating ammonites, and by a morphology outlined by a beak-shaped umbo on the right (!) valve. Endemic Siberian species of the genus Pernostrea have been included in the subgenus Boreiodeltoideum subgen. nov. The genera Pernostrea and Deltoideum have been included in the tribe Pernostreini n. trib. within the subfamily Gryphaeinae, and a model of the phylogenetic relationships between the species of this tribe has been proposed. The Siberian oyster complexes were compared with complexes from Western Europe, Poland and the East European Platform. In the western Boreal and Subboreal Realm (England, northern France and Poland), two stages of oyster development were recognized: a Jurassic type and a Cretaceous type. 
In Siberia, the Jurassic and Lower Cretaceous oysters formed a single distinctive complex, which may be due to the isolation of the Siberian Basin from the West during the Early Cretaceous. Seven oyster shells of Pernostrea (Pernostrea) uralensis (Zakharov) from the Jurassic/Cretaceous boundary interval (Upper Volgian - Lower Ryazanian) of the Maurynia river were used to perform δ13C and δ18O isotopic analyses. The preservation of the carbonate material was controlled by cathodoluminescence analyses; the content of Fe, Mn and Sr; and the absence of correlation between δ13C and δ18O and the content of Fe and Mn. The obtained δ13C and δ18O data were compared with isotopic data based on belemnites from the same stratigraphical interval of the same section and were used to trace palaeotemperatures. A general trend towards negative δ18O values is recorded in the Maurynia section, from the lower part of the Upper Volgian to the middle part of the Ryazanian Chetaites sibiricus ammonite zone. This trend was previously recorded in the Nordvik section. The higher palaeotemperatures (2°C on average) determined from the oyster shells indicate that the belemnites likely migrated laterally and lived part of their lives in cooler waters. This work was financially supported by the Russian Foundation for Basic Research (grant no. 16-35-00003).
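The conversion from shell δ18O to water temperature is conventionally done with a carbonate palaeotemperature equation such as that of Anderson and Arthur (1983), sketched below. The abstract does not state which calibration or water value the authors used, so the equation choice, the default water δ18O, and the example inputs are all assumptions for illustration:

```python
def palaeotemp(d18O_carb, d18O_water=-1.0):
    """Water temperature (deg C) from carbonate d18O (per mil, PDB) and
    water d18O (per mil, SMOW), after Anderson & Arthur (1983)."""
    d = d18O_carb - d18O_water
    return 16.0 - 4.14 * d + 0.13 * d * d
```

The quadratic makes the trend direction concrete: a shift of shell δ18O toward more negative values (as recorded up-section at Maurynia) yields higher computed temperatures, e.g. `palaeotemp(-2.0)` exceeds `palaeotemp(-1.0)` by roughly 4°C under these assumptions.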

Keywords: isotopes, oysters, Siberia, taxonomy

101 Optimization of Geometric Parameters of Microfluidic Channels for Flow-Based Studies

Authors: Parth Gupta, Ujjawal Singh, Shashank Kumar, Mansi Chandra, Arnab Sarkar

Abstract:

Microfluidic devices have emerged as indispensable tools across various scientific disciplines, offering precise control and manipulation of fluids at the microscale. Their efficacy in flow-based research, spanning engineering, chemistry, and biology, relies heavily on the geometric design of microfluidic channels. This work introduces a novel approach to optimise these channels through Response Surface Methodology (RSM), departing from the conventional practice of addressing one parameter at a time. Traditionally, optimising microfluidic channels involved isolated adjustments to individual parameters, limiting the comprehensive understanding of their combined effects. In contrast, our approach considers the simultaneous impact of multiple parameters, employing RSM to efficiently explore the complex design space. The outcome is an innovative microfluidic channel that consumes an optimal sample volume and minimises flow time, enhancing overall efficiency. The relevance of geometric parameter optimization in microfluidic channels extends significantly in biomedical engineering. The flow characteristics of porous materials within these channels depend on many factors, including fluid viscosity, environmental conditions (such as temperature and humidity), and specific design parameters like sample volume, channel width, channel length, and substrate porosity. This intricate interplay directly influences the performance and efficacy of microfluidic devices, which, if not optimized, can lead to increased costs and errors in disease testing and analysis. In the context of biomedical applications, the proposed approach addresses the critical need for precision in fluid flow. It mitigates manufacturing costs associated with trial-and-error methodologies by optimising multiple geometric parameters concurrently. The resulting microfluidic channels offer enhanced performance and contribute to a streamlined, cost-effective process for testing and analyzing diseases.
A key highlight of our methodology is its consideration of the interconnected nature of geometric parameters. For instance, the volume of the sample, when optimized alongside channel width, length, and substrate porosity, creates a synergistic effect that minimizes errors and maximizes efficiency. This holistic optimization approach ensures that microfluidic devices operate at their peak performance, delivering reliable results in disease testing.
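As a rough sketch of the RSM step described above (with hypothetical design points and a synthetic response, not the study's data), a second-order response surface can be fitted by ordinary least squares:

```python
import numpy as np

# Illustrative only: RSM fits a quadratic response surface
# y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to designed experiments; here x1, x2 stand in for two coded
# geometric parameters (e.g. channel width and length).

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of a two-factor second-order model."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Hypothetical central-composite-style design points (coded units)
x1 = np.array([-1, -1, 1, 1, 0, 0, -1.4, 1.4, 0])
x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, 1.4])
y = 5 + 2 * x1 - 1 * x2 + 0.5 * x1**2 + 0.8 * x2**2 + 0.3 * x1 * x2  # synthetic
coef = fit_quadratic_surface(x1, x2, y)
print(np.round(coef, 2))  # recovers the synthetic coefficients
```

The fitted surface is then minimised (analytically or numerically) to locate the geometry that minimises flow time and sample volume.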

Keywords: microfluidic device, minitab, statistical optimization, response surface methodology

100 Micro-Oculi Facades as a Sustainable Urban Facade

Authors: Ok-Kyun Im, Kyoung Hee Kim

Abstract:

We live in an era that faces global challenges of climate change and resource depletion. With the rapid urbanization and growing energy consumption in the built environment, building facades become ever more important in architectural practice and environmental stewardship. Furthermore, building facades undergo complex dynamics of social, cultural, environmental and technological changes. Kinetic facades have drawn the attention of architects, designers, and engineers in the field of adaptable, responsive and interactive architecture since the 1980s. Materials and building technologies have gradually evolved to address the technical implications of kinetic facades. The kinetic façade is becoming an independent system of the building, transforming the design methodology towards sustainable building solutions. Accordingly, there is a need for a new design methodology to guide the design of a kinetic façade and evaluate its sustainable performance. The research objectives are two-fold: first, to establish a new design methodology for kinetic facades and, second, to develop a micro-oculi façade system and assess its performance using the established design method. The design approach to the micro-oculi facade comprises 1) façade geometry optimization and 2) dynamic building energy simulation. The façade geometry optimization utilizes a multi-objective optimization process, aiming to balance the quantitative and qualitative performances to address the sustainability of the built environment. The dynamic building energy simulation was carried out using the EnergyPlus and Radiance simulation engines with scripted interfaces. The micro-oculi office was compared with an office tower with a glass façade in accordance with ASHRAE 90.1 2013 to understand its energy efficiency. The micro-oculi facade is constructed with an array of circular frames attached to a pair of micro-shades called a micro-oculus.
The micro-oculi are encapsulated between two glass panes to protect the kinetic mechanisms and ensure longevity. The micro-oculus incorporates rotating gears that transmit power to adjacent micro-oculi to minimize the number of mechanical parts. The micro-oculus rotates around its center axis with a step size of 15 deg depending on the sun’s position, while maximizing daylighting potential and view-outs. A 2 ft by 2 ft prototype was built to identify operational challenges and material implications of the micro-oculi facade. In this research, a systematic design methodology was proposed that integrates the multiple objectives of kinetic façade design criteria and whole-building energy performance simulation within a holistic design process. This design methodology is expected to encourage multidisciplinary collaboration between designers and engineers to address issues of energy efficiency, daylighting performance and user experience during design phases. The preliminary energy simulation indicated that, compared to a glass façade, the micro-oculi façade showed energy savings due to its improved thermal properties, daylighting attributes, and dynamic solar performance across the day and seasons. It is expected that the micro-oculi façade provides a cost-effective, environmentally friendly, sustainable, and aesthetically pleasing alternative to glass facades. Recommendations for future studies include lab testing to validate the simulated data on the energy and optical properties of the micro-oculi façade. A 1:1 performance mock-up of the micro-oculi façade would provide an in-depth understanding of long-term operability and reveal new development opportunities for urban façade applications.
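The 15-degree stepping described above can be sketched as a simple quantisation of the sun's azimuth; this is an illustrative assumption about the control logic, not the authors' implementation:

```python
# Hypothetical sketch: snap the commanded micro-oculus angle to the nearest
# 15-degree step of the current sun azimuth (step size from the abstract).

STEP_DEG = 15

def oculus_angle(sun_azimuth_deg: float) -> int:
    """Quantise a sun azimuth (0-360 deg) to the nearest 15-degree step."""
    return round(sun_azimuth_deg / STEP_DEG) * STEP_DEG % 360

print(oculus_angle(97.0))   # -> 90
print(oculus_angle(352.9))  # -> 0 (wraps past 360)
```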

Keywords: energy efficiency, kinetic facades, sustainable architecture, urban facades

99 Service Blueprinting: A New Application for Evaluating Service Provision in the Hospice Sector

Authors: L. Sudbury-Riley, P. Hunter-Jones, L. Menzies, M. Pyrah, H. Knight

Abstract:

Just as manufacturing firms aim for zero defects, service providers strive to avoid service failures where customer expectations are not met. However, because services comprise unique human interactions, service failures are almost inevitable. Consequently, firms focus on service recovery strategies to fix problems and retain their customers for the future. Because a hospice offers care to terminally ill patients, it may not get the opportunity to correct a service failure. This situation makes identifying what hospice users really need and want, and ascertaining perceptions of the hospice’s service delivery from the user’s perspective, even more important than for other service providers. A well-documented and fundamental barrier to improving end-of-life care is a lack of service quality measurement tools that capture the experiences of users from their own perspective. In palliative care, many quantitative measures are used, and these focus on issues such as how quickly patients are assessed, whether they receive information leaflets, whether a discussion about their emotional needs is documented, and so on. Consequently, quality of service from the user’s perspective is overlooked. The current study was designed to overcome these limitations by adapting service blueprinting - never before used in the hospice sector - in order to undertake a ‘deep-dive’ examination of the impact of hospice services upon different users. Service blueprinting is a customer-focused approach for service innovation and improvement, in which the ‘onstage’ visible service user and provider interactions must be supported by the ‘backstage’ employee actions and support processes. The study was conducted in conjunction with East Cheshire Hospice in England. The Hospice provides specialist palliative care for patients with progressive life-limiting illnesses, offering services to patients, carers and families via inpatient and outpatient units.
Using service blueprinting to identify every service touchpoint, in-depth qualitative interviews with 38 in-patients, outpatients, visitors and bereaved families enabled a ‘deep-dive’ to uncover perceptions of the whole service experience among these diverse users. Interviews were recorded and transcribed, and thematic analysis of over 104,000 words of data revealed many excellent aspects of Hospice service. Staff frequently exceed people’s expectations. Striking, gratifying comparisons to hospitals emerged. The Hospice makes people feel safe. Nevertheless, the technique uncovered many areas for improvement, including the serendipitous nature of referral processes, the need for better communication with external agencies, improvements to the daunting arrival and admissions process, a desperate need for more depression counselling, clarity of communication pertaining to the actual end of life, and shortcomings in systems dealing with bereaved families. The study reveals that the adapted service blueprinting tool has major advantages over alternative quantitative evaluation techniques, including uncovering the complex nature of service users’ experiences in health-care service systems, highlighting more fully the interconnected configurations within the system and making greater sense of the impact of the service upon different service users. Unlike other tools, this in-depth examination reveals areas for improvement, many of which have already been implemented by the Hospice. The technique has the potential to improve experiences of palliative and end-of-life care among patients and their families.

Keywords: hospices, end-of-life-care, service blueprinting, service delivery

98 Strategy to Evaluate Health Risks of Short-Term Exposure of Air Pollution in Vulnerable Individuals

Authors: Sarah Nauwelaerts, Koen De Cremer, Alfred Bernard, Meredith Verlooy, Kristel Heremans, Natalia Bustos Sierra, Katrien Tersago, Tim Nawrot, Jordy Vercauteren, Christophe Stroobants, Sigrid C. J. De Keersmaecker, Nancy Roosens

Abstract:

Projected climate changes could lead to the exacerbation of respiratory disorders associated with reduced air quality. Air pollution and climate change influence each other through complex interactions. The poor air quality in urban and rural areas includes high levels of particulate matter (PM), ozone (O3) and nitrogen oxides (NOx), representing a major threat to public health, especially for the most vulnerable population strata such as young children. In this study, we aim to develop generic standardized policy-supporting tools and methods that allow the risks of the combined short-term effects of O3 and PM on the cardiorespiratory system of children to be evaluated in future follow-up larger-scale epidemiological studies. We will use non-invasive indicators of airway damage/inflammation and of genetic or epigenetic variations by using urine or saliva as alternatives to blood samples. Therefore, a multi-phase field study will be organized in order to assess the sensitivity and applicability of these tests in large cohorts of children during episodes of air pollution. A first test phase was planned in March 2018, not yet taking into account ‘critical’ pollution periods. Working with non-invasive samples, choosing the right set-up for the fieldwork and the volunteer selection were parameters to consider, as they significantly influence the feasibility of this type of study. During this test phase, the selection of the volunteers was done in collaboration with medical doctors from the Centre for Student Assistance (CLB), by choosing a class of pre-pubertal children of 9-11 years old in a primary school in Flemish Brabant, Belgium. A questionnaire collecting information on the health and background of the children and an informed consent document were drawn up for the parents, as well as a simplified cartoon version of this document for the children.
A detailed study protocol was established, giving clear information on the study objectives, the recruitment, the sample types, the medical examinations to be performed, the strategy to ensure anonymity, and finally the sample processing. Furthermore, the protocol describes how this field study will be conducted in relation to the forecasting and monitoring of air pollutants for the future phases. Potential protein, genetic and epigenetic biomarkers reflecting the respiratory function and the levels of air pollution will be measured in the collected samples using unconventional technologies. The test phase results will be used to address the most important bottlenecks before proceeding to the following phases of the study, where the combined effect of O3 and PM during pollution peaks will be examined. This feasibility study will allow possible bottlenecks to be identified and provide missing scientific knowledge necessary for the preparation, implementation and evaluation of federal policies/strategies, based on the most appropriate epidemiological studies on the health effects of air pollution. The research leading to these results has been funded by the Belgian Science Policy Office through contract No.: BR/165/PI/PMOLLUGENIX-V2.

Keywords: air pollution, biomarkers, children, field study, feasibility study, non-invasive

97 Lack of Regulation Leads to Complexity: A Case Study of the Free Range Chicken Meat Sector in the Western Cape, South Africa

Authors: A. Coetzee, C. F. Kelly, E. Even-Zahav

Abstract:

Dominant approaches to livestock production are harmful to the environment, human health and animal welfare, yet global meat consumption is rising. Sustainable alternative production approaches are therefore urgently required, and ‘free range’ is the main alternative for chicken meat offered in South Africa (and globally). Although the South African Poultry Association provides non-binding guidelines, there is a lack of formal definition and regulation of free range chicken production, meaning it is unclear what this alternative entails and if it is consistently practised (a trend observed globally). The objective of this exploratory qualitative case study is therefore to investigate who and what determines free range chicken. The case study, conducted from a social constructivist worldview, uses semi-structured interviews, photographs and document analysis to collect data. Interviews are conducted with those involved with bringing free range chicken to the market - farmers, chefs, retailers, and regulators. Data is analysed using thematic analysis to establish dominant patterns in the data. The five major themes identified (based on prevalence in data and on achieving the research objective) are: 1) free range means a bird reared with good animal welfare in mind, 2) free range means quality meat, 3) free range means a profitable business, 4) free range is determined by decision makers or by access to markets, and 5) free range is coupled with concerns about the lack of regulation. Unpacking the findings in the context of the literature reveals who and what determines free range. The research uncovers wide-ranging interpretations of ‘free range’, driven by the absence of formal regulation for free range chicken practices and the lack of independent private certification. This means that the term ‘free range’ is socially constructed, thus varied and complex. 
The case study also shows that whether chicken meat is free range is generally determined by those who have access to markets. Large retailers claim adherence to the internationally recognised Five Freedoms, also included in the South African Poultry Association Code of Good Practice, which others in the sector say are too broad to be meaningful. Producers describe animal welfare concerns as the main driver for how they practise/view free range production, yet these interpretations vary. An additional driver is a focus on human health, which participants pursue mainly through the use of antibiotic-free feed, resulting in what participants regard as higher quality meat. The participants are also strongly driven by business imperatives, with most stating that free range chicken should carry a higher price than conventionally-reared chicken due to increased production costs. Recommendations from this study focus on, inter alia, a need to understand consumers’ perspectives on free range chicken, given that those in the sector claim they are responding to consumer demand, and on conducting environmental research such as life cycle assessment studies to establish the true (environmental) sustainability of free range production. At present, it seems the sector mostly responds to social sustainability: human health and animal welfare.

Keywords: chicken meat production, free range, socially constructed, sustainability

96 The Effects of the Interaction between Prenatal Stress and Diet on Maternal Insulin Resistance and Inflammatory Profile

Authors: Karen L. Lindsay, Sonja Entringer, Claudia Buss, Pathik D. Wadhwa

Abstract:

Maternal nutrition and stress are independently recognized as among the most important factors that influence prenatal biology, with implications for fetal development and poor pregnancy outcomes. While there is substantial evidence from non-pregnancy human and animal studies that a complex, bi-directional relationship exists between nutrition and stress, to the author’s best knowledge, their interaction in the context of pregnancy has been significantly understudied. The aim of this study is to assess the interaction between maternal psychological stress and diet quality across pregnancy and its effects on biomarkers of prenatal insulin resistance and inflammation. This is a prospective longitudinal study of N=235 women carrying a healthy, singleton pregnancy, recruited from prenatal clinics of the University of California, Irvine Medical Center. Participants completed a 4-day ambulatory assessment in early, middle and late pregnancy, which included multiple daily electronic diary entries using Ecological Momentary Assessment (EMA) technology on a dedicated study smartphone. The EMA diaries gathered moment-level data on maternal perceived stress, negative mood, positive mood and quality of social interactions. The numerical scores for these variables were averaged across each study time-point and converted to Z-scores. A single composite variable for 'STRESS' was computed as follows: (Negative mood+Perceived stress)–(Positive mood+Social interaction quality). Dietary intakes were assessed by three 24-hour dietary recalls conducted within two weeks of each 4-day assessment. Daily nutrient and food group intakes were averaged across each study time-point. The Alternative Healthy Eating Index adapted for pregnancy (AHEI-P) was computed for early, middle and late pregnancy as a validated summary measure of diet quality. 
At the end of each 4-day ambulatory assessment, women provided a fasting blood sample, which was assayed for levels of glucose, insulin, Interleukin (IL)-6 and Tumor Necrosis Factor (TNF)-α. Homeostasis Model Assessment of Insulin Resistance (HOMA-IR) was computed. Pearson’s correlation was used to explore the relationship between maternal STRESS and AHEI-P within and between each study time-point. Linear regression was employed to test the association of the stress-diet interaction (STRESS*AHEI-P) with the biological markers HOMA-IR, IL-6 and TNF-α at each study time-point, adjusting for key covariates (pre-pregnancy body mass index, maternal education level, race/ethnicity). Maternal STRESS and AHEI-P were significantly inversely correlated in early (r=-0.164, p=0.018) and mid-pregnancy (r=-0.160, p=0.019), and AHEI-P from earlier gestational time-points correlated with later STRESS (early AHEI-P x mid STRESS: r=-0.168, p=0.017; mid AHEI-P x late STRESS: r=-0.142, p=0.041). In regression models, the interaction term was not associated with HOMA-IR or IL-6 at any gestational time-point. The stress-diet interaction term was significantly associated with TNF-α according to the following patterns: early AHEI-P*early STRESS vs early TNF-α (p=0.005); early AHEI-P*early STRESS vs mid TNF-α (p=0.002); early AHEI-P*mid STRESS vs mid TNF-α (p=0.005); mid AHEI-P*mid STRESS vs mid TNF-α (p=0.070); mid AHEI-P*late STRESS vs late TNF-α (p=0.011). Poor diet quality is significantly related to higher psychosocial stress levels in pregnant women across gestation, which may promote inflammation via TNF-α. Future prenatal studies should consider the combined effects of maternal stress and diet when evaluating either one of these factors on pregnancy or infant outcomes.
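The composite-score arithmetic described above can be sketched as follows; the data values are hypothetical, and the HOMA-IR line assumes the standard Matthews formula with glucose in mg/dL and insulin in µU/mL (units the abstract does not state):

```python
import numpy as np

# Sketch of the stated composite:
# STRESS = (negative mood + perceived stress) - (positive mood + social quality),
# with each variable z-scored across participants first.

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def stress_composite(neg_mood, perceived, pos_mood, social):
    return (zscore(neg_mood) + zscore(perceived)) - (zscore(pos_mood) + zscore(social))

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR by the Matthews formula (assumed units: mg/dL, uU/mL)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

print(round(homa_ir(90, 9), 2))  # -> 2.0
```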

Keywords: diet quality, inflammation, insulin resistance, nutrition, pregnancy, stress, tumor necrosis factor-alpha

95 Shared Versus Pooled Automated Vehicles: Exploring Behavioral Intentions Towards On-Demand Automated Vehicles

Authors: Samira Hamiditehrani

Abstract:

Automated vehicles (AVs) are emerging technologies that could potentially offer a wide range of opportunities and challenges for the transportation sector. The advent of AV technology has also resulted in new business models in shared mobility services where many ride hailing and car sharing companies are developing on-demand AVs including shared automated vehicles (SAVs) and pooled automated vehicles (Pooled AVs). SAVs and Pooled AVs could provide alternative shared mobility services which encourage sustainable transport systems, mitigate traffic congestion, and reduce automobile dependency. However, the success of on-demand AVs in addressing major transportation policy issues depends on whether and how the public adopts them as regular travel modes. To identify conditions under which individuals may adopt on-demand AVs, previous studies have applied human behavior and technology acceptance theories, where Theory of Planned Behavior (TPB) has been validated and is among the most tested in on-demand AV research. In this respect, this study has three objectives: (a) to propose and validate a theoretical model for behavioral intention to use SAVs and Pooled AVs by extending the original TPB model; (b) to identify the characteristics of early adopters of SAVs, who prefer to have a shorter and private ride, versus prospective users of Pooled AVs, who choose more affordable but longer and shared trips; and (c) to investigate Canadians’ intentions to adopt on-demand AVs for regular trips. Toward this end, this study uses data from an online survey (n = 3,622) of workers or adult students (18 to 75 years old) conducted in October and November 2021 for six major Canadian metropolitan areas: Toronto, Vancouver, Ottawa, Montreal, Calgary, and Hamilton. 
To accomplish the goals of this study, a base bivariate ordered probit model, in which both SAV and Pooled AV adoptions are estimated as ordered dependent variables, and a full structural equation modeling (SEM) system are estimated. The findings of this study indicate that affective motivations such as attitude towards AV technology, perceived privacy, and subjective norms matter more than sociodemographic and travel behavior characteristics in adopting on-demand AVs. Also, the results for the second objective provide evidence that although a few affective motivations, such as subjective norms and having ample knowledge, are common between early adopters of SAVs and Pooled AVs, many examined motivations differ among SAV and Pooled AV adoption factors. In other words, motivations influencing intention to use on-demand AVs differ among the service types. Likewise, depending on the type of on-demand AV, the sociodemographic characteristics of early adopters differ significantly. In general, findings paint a complex picture with respect to the application of constructs from common technology adoption models to the study of on-demand AVs. Findings from the final objective suggest that policymakers, planners, the vehicle and technology industries, and the public at large should moderate their expectations that on-demand AVs may suddenly transform the entire transportation sector. Instead, this study suggests that SAVs and Pooled AVs (when they enter the Canadian market) are likely to be adopted as supplementary mobility tools rather than substitutions for current travel modes.
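The study estimates a bivariate ordered probit; as a simplified, hypothetical sketch of only the univariate building block (synthetic data, not the survey), the ordered-response likelihood maps a latent index onto categories through cutpoints:

```python
import numpy as np
from scipy.stats import norm

# Univariate ordered probit negative log-likelihood: the probability of
# observing category y is the normal mass between adjacent cutpoints,
# shifted by the latent index X @ beta. (The bivariate model adds a
# correlated second equation, omitted here for brevity.)

def ordered_probit_nll(params, X, y):
    """params = [beta..., cutpoints]; y in {0, ..., K-1}."""
    k = X.shape[1]
    beta, cuts = params[:k], np.sort(params[k:])
    xb = X @ beta
    edges = np.concatenate([[-np.inf], cuts, [np.inf]])
    p = norm.cdf(edges[y + 1] - xb) - norm.cdf(edges[y] - xb)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# Synthetic ordered responses with 3 categories (e.g. unlikely/maybe/likely)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
latent = X @ np.array([1.0, -0.5]) + rng.normal(size=200)
y = np.digitize(latent, [-0.5, 0.8])
nll = ordered_probit_nll(np.array([1.0, -0.5, -0.5, 0.8]), X, y)
```

In practice the parameters would be found by minimising this function (e.g. with `scipy.optimize.minimize`) rather than evaluated at assumed values.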

Keywords: automated vehicles, Canadian perception, theory of planned behavior, on-demand AVs

94 Sea Level Rise and Sediment Supply Explain Large-Scale Patterns of Saltmarsh Expansion and Erosion

Authors: Cai J. T. Ladd, Mollie F. Duggan-Edwards, Tjeerd J. Bouma, Jordi F. Pages, Martin W. Skov

Abstract:

Salt marshes are valued for their role in coastal flood protection, carbon storage, and for supporting biodiverse ecosystems. As a biogeomorphic landscape, marshes evolve through the complex interactions between sea level rise, sediment supply and wave/current forcing, as well as socio-economic factors. Climate change and direct human modification could lead to a global decline in marsh extent if left unchecked. Whilst the processes of saltmarsh erosion and expansion are well understood, empirical evidence on the key drivers of long-term lateral marsh dynamics is lacking. In a GIS, saltmarsh areal extent in 25 estuaries across Great Britain was calculated from historical maps and aerial photographs, at intervals of approximately 30 years between 1846 and 2016. Data on the key perceived drivers of lateral marsh change (namely sea level rise rates, suspended sediment concentration, bedload sediment flux rates, and the frequency of both river flood and storm events) were collated from national monitoring centres. Continuous datasets did not extend back beyond 1970; therefore, the predictor variables that best explained the rate of change of marsh extent between 1970 and 2016 were identified using a Partial Least Squares Regression model. Information about the spread of Spartina anglica (an invasive marsh plant responsible for marsh expansion around the globe) and coastal engineering works that may have impacted marsh extent was also recorded from historical documents, and their impacts on long-term, large-scale change in marsh extent were assessed. Results showed that salt marshes in the northern regions of Great Britain expanded at an average of 2.0 ha/yr, whilst marshes in the south eroded at an average of 5.3 ha/yr. Spartina invasion and coastal engineering works could not explain these trends, since a trend of either expansion or erosion preceded these events.
Results from the Partial Least Squares Regression model indicated that the rate of relative sea level rise (RSLR) and the suspended sediment concentration (SSC) best explained the patterns of marsh change. RSLR increased from 1.6 to 2.8 mm/yr, as SSC decreased from 404.2 to 78.56 mg/l, along the north-to-south gradient of Great Britain, resulting in the shift from marsh expansion to erosion. Regional differences in RSLR and SSC are due to isostatic rebound since deglaciation, and to tidal amplitudes, respectively. For marshes exposed to low RSLR and high SSC, sediment likely accumulates at the coast, creating surfaces suitable for colonisation by marsh plants and thus lateral expansion. In contrast, high RSLR is likely not offset by deposition under low SSC; thus, average water depth at the marsh edge increases, allowing larger wind-waves to trigger marsh erosion. Current global declines in sediment flux to the coast are likely to diminish the resilience of salt marshes to RSLR. Monitoring and managing suspended sediment supply is not commonplace, but may be critical to mitigating coastal impacts from climate change.

Keywords: lateral saltmarsh dynamics, sea level rise, sediment supply, wave forcing

93 In-Process Integration of Resistance-Based, Fiber Sensors during the Braiding Process for Strain Monitoring of Carbon Fiber Reinforced Composite Materials

Authors: Oscar Bareiro, Johannes Sackmann, Thomas Gries

Abstract:

Carbon fiber reinforced polymer composites (CFRP) are used in a wide variety of applications due to their advantageous properties and design versatility. The braiding process enables the manufacture of components with good toughness and fatigue strength. However, the failure mechanisms of CFRPs are complex and still present challenges associated with their maintenance and repair. Within the broad scope of structural health monitoring (SHM), strain monitoring can be applied to composite materials to improve reliability, reduce maintenance costs and safely exhaust service life. Traditional SHM systems employ e.g. fiber optics or piezoelectrics as sensors, which are often expensive, time-consuming and complicated to implement. A cost-efficient alternative can be the exploitation of the conductive properties of fiber-based sensors such as carbon, copper, or constantan (a copper-nickel alloy), which can be utilized as sensors within composite structures to achieve strain monitoring. This allows the structure to provide feedback to a user via electrical signals that are essential for evaluating its structural condition. This work presents a strategy for the in-process integration of resistance-based sensors (Elektrisola Feindraht AG, CuNi23Mn, Ø = 0.05 mm) into textile preforms during their manufacture via the braiding process (Herzog RF-64/120) to achieve strain monitoring of braided composites. For this, flat samples of instrumented composite laminates of carbon fibers (Toho Tenax HTS40 F13 24K, 1600 tex) and epoxy resin (Epikote RIMR 426) were manufactured via vacuum-assisted resin infusion. These flat samples were later cut into test specimens, and the integrated sensors were wired to the measurement equipment (National Instruments, VB-8012) for data acquisition during the execution of mechanical tests.
Quasi-static tests (tensile and 3-point bending tests) were performed following standard protocols (DIN EN ISO 527-1 & 4, DIN EN ISO 14132); additionally, dynamic tensile tests were executed. These tests were executed to assess the sensor response under different loading conditions and to evaluate the influence of the sensor presence on the mechanical properties of the material. Several orientations of the sensor with regard to the applied loading, as well as several sensor placements inside the laminate, were tested. Strain measurements from the integrated sensors were made by programming a data acquisition code (LabView) written for the measurement equipment. Strain measurements from the integrated sensors were then correlated to the strain/stress state of the tested samples. From the assessment of the integration approach, it can be concluded that it allows for seamless sensor integration into the textile preform. No damage to the sensor or negative effect on its electrical properties was detected during inspection after integration. From the assessment of the mechanical tests of instrumented samples, it can be concluded that the presence of the sensors does not significantly alter the mechanical properties of the material. It was found that there is a good correlation between the resistance measurements from the integrated sensors and the applied strain. It can be concluded that the correlation is of sufficient accuracy to determine the strain state of a composite laminate based solely on the resistance measurements from the integrated sensors.
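The resistance-to-strain correlation described above rests on the usual gauge relation; the gauge factor below is an assumed, typical value for constantan-type alloys, not a calibration reported in the abstract:

```python
# Sketch of the standard resistive-strain relation: strain = (dR / R0) / GF.
# GAUGE_FACTOR is a hypothetical, typical value for constantan-type wire;
# the actual CuNi23Mn calibration would come from the tests described above.

GAUGE_FACTOR = 2.0

def strain_from_resistance(r_ohm: float, r0_ohm: float, gf: float = GAUGE_FACTOR) -> float:
    """Convert a measured resistance to strain via the gauge factor."""
    return (r_ohm - r0_ohm) / r0_ohm / gf

# A 0.4 % resistance increase with GF = 2 corresponds to about 0.2 % strain
print(strain_from_resistance(100.4, 100.0))
```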

Keywords: braiding process, in-process sensor integration, instrumented composite material, resistance-based sensor, strain monitoring

Procedia PDF Downloads 82
92 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation, which allows the presentation of large and intricate datasets in a simple map interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment faces numerous obstacles, whether topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most favourable pipeline route across challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and liable to bias towards the disciplines and expertise involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites an automated, multi-criteria, quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area.
Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending the escarpment, but their vulnerability to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
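The least-cost-routing step can be illustrated with a minimal sketch: Dijkstra's algorithm over a composite geocost grid, with 4-neighbour moves and per-cell penalty scores. The grid values are invented for illustration; production GIS tools accumulate cost over real distances and use richer neighbourhoods:

```python
import heapq

def least_geocost_route(geocost, start, end):
    """Dijkstra least-cost path over a 2D geocost grid (4-neighbour moves).

    geocost: list of lists of per-cell penalty scores (higher = more
    hazardous). Returns (total geocost, list of (row, col) cells)."""
    rows, cols = len(geocost), len(geocost[0])
    dist = {start: geocost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == end:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + geocost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the terminal to recover the route
    path, cell = [end], end
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return dist[end], path[::-1]


# Toy composite geocost map: a hazardous cell (9) in the centre
grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
cost, route = least_geocost_route(grid, (0, 0), (2, 2))
```

The returned route skirts the high-geocost cell, which is exactly the behaviour that lets a composite geocost map trade route length against hazard avoidance.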

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 366
91 Antibacterial Nanofibrous Film Encapsulated with 4-terpineol/β-cyclodextrin Inclusion Complexes: Relative Humidity-Triggered Release and Shrimp Preservation Application

Authors: Chuanxiang Cheng, Tiantian Min, Jin Yue

Abstract:

Antimicrobial active packaging enables extensive biological effects that improve food safety. However, the efficacy of antimicrobial packaging hinges on factors including the diffusion rate of the active agent toward the food surface, the initial antimicrobial agent content, and the targeted food shelf life. Among the possibilities for antimicrobial packaging design, an interesting approach involves the incorporation of volatile antimicrobial agents into the packaging material. In this case, the necessity for direct contact between the active packaging material and the food surface is mitigated, as the antimicrobial agent exerts its action through the packaging headspace atmosphere towards the food surface. However, it remains difficult to achieve controlled and precise release of bioactive compounds to the specific target location in the required quantity in food packaging applications. Remarkably, the development of stimuli-responsive materials for electrospinning has introduced the possibility of achieving controlled release of active agents under specific conditions, thereby yielding enduring biological effects. Relative humidity (RH) for the storage of food categories such as meat and aquatic products typically exceeds 90%. Consequently, high RH can be used as an abiotic trigger for the release of active agents to prevent microbial growth. Hence, a novel RH-responsive polyvinyl alcohol/chitosan (PVA/CS) composite nanofibrous film incorporating 4-terpineol/β-cyclodextrin inclusion complexes (4-TA@β-CD ICs) was engineered by electrospinning that can be deposited as a functional packaging material. Characterization showed that the thermal stability of the films was enhanced after incorporation, owing to hydrogen bonds between the ICs and the polymers.
Remarkably, the 4 wt% 4-TA@β-CD ICs/PVA/CS film exhibited enhanced crystallinity, moderate hydrophilicity (water contact angle of 81.53°), light barrier properties (transparency of 1.96%) and water resistance (water vapor permeability of 3.17 g mm/m² h kPa). Moreover, this film also showed optimized mechanical performance, with a Young's modulus of 11.33 MPa, a tensile strength of 19.99 MPa and an elongation at break of 4.44%. Notably, the antioxidant and antibacterial properties of this packaging material were significantly improved. The film demonstrated half-maximal inhibitory concentration (IC50) values of 87.74% and 85.11% for scavenging 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radicals, respectively, in addition to an inhibition efficiency of 65% against Shewanella putrefaciens, the characteristic bacterium in aquatic products. Most importantly, the film achieved controlled release of 4-TA under 98% RH, as water molecules plasticize the polymers, swell the polymer chains, and disrupt the hydrogen bonds within the cyclodextrin inclusion complex. Consequently, low relative humidity is suitable for storing the nanofibrous film, while the high-humidity conditions typical of fresh food packaging environments effectively stimulate the release of the active compounds in the film. This film, with its long-term antimicrobial effect, successfully extended the shelf life of Litopenaeus vannamei shrimp to 7 days at 4 °C. This attractive design could pave the way for the development of new food packaging materials.

Keywords: controlled release, electrospinning, nanofibrous film, relative humidity–responsive, shrimp preservation

Procedia PDF Downloads 35
90 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Radioanalytical Chemistry Process through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is an operation that plays an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease the adjustability of the instrumentation within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, which creates a linear concentration gradient inside a 200 μm × 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation with a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient.
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under one second, making the process faster than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis fortyfold, and the analysis time eightfold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
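The Taylor-Aris dispersion underlying the gradient generation can be illustrated with the classical effective-dispersion formula for a circular capillary, K = D + a²u²/(48D); the diffusivity and velocity values below are assumed for illustration and are not taken from the paper:

```python
def taylor_aris_dispersion(diffusivity, radius, mean_velocity):
    """Effective axial dispersion coefficient for pressure-driven flow
    in a circular capillary (Taylor-Aris regime):

        K = D + (a^2 * u^2) / (48 * D)

    All quantities in SI units (m^2/s, m, m/s)."""
    return diffusivity + (radius ** 2 * mean_velocity ** 2) / (48.0 * diffusivity)


# Illustrative (assumed) values: a small-molecule diffusivity of
# 1e-9 m^2/s, the 100 um channel radius, and a 1 mm/s mean velocity.
K = taylor_aris_dispersion(1e-9, 100e-6, 1e-3)
```

Even with these modest assumed values, the effective dispersion K is over two orders of magnitude larger than molecular diffusion alone, which is what lets the device smear a plug into a linear concentration gradient so quickly.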

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 363
89 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and business logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistence (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistence. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer carries more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35 MB). Also, the system is more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
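The definition-to-template transformation could look roughly like the following sketch: each microservice in a declarative definition becomes an EC2 instance resource (backed by its linuxkit-built AMI) plus a load balancer. The input schema and resource shapes here are hypothetical illustrations, not the actual i2kit format:

```python
def to_cloudformation(services):
    """Sketch of mapping a declarative microservice definition to an
    AWS CloudFormation-style template. The 'ami'/'size' field names are
    assumed for illustration; they are not the real i2kit schema."""
    resources = {}
    for name, spec in services.items():
        resources[f"{name}Instance"] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": spec["ami"],  # linuxkit-built machine image
                "InstanceType": spec.get("size", "t2.micro"),
            },
        }
        # One load balancer per microservice; its endpoint would be
        # injected into other services via an environment variable.
        resources[f"{name}LB"] = {
            "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "Properties": {"Name": f"{name}-lb"},
        }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}


template = to_cloudformation({"api": {"ami": "ami-123", "size": "t3.small"}})
```

The point of the sketch is the shape of the pattern: no control layer survives into the output, only plain cloud vendor resources.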

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 152
88 Simulation and Analysis of MEMS-Based Flexible Capacitive Pressure Sensors with COMSOL

Authors: Ding Liangxiao

Abstract:

The technological advancements in Micro-Electro-Mechanical Systems (MEMS) have significantly contributed to the development of new, flexible capacitive pressure sensors, which are pivotal in transforming wearable and medical device technologies. This study employs the sophisticated simulation tools available in COMSOL Multiphysics® to develop and analyze a MEMS-based sensor with a tri-layered design. This sensor comprises top and bottom electrodes made from gold (Au), noted for their excellent conductivity, a middle dielectric layer made from a composite of silver nanowires (AgNWs) embedded in thermoplastic polyurethane (TPU), and a flexible, durable substrate of polydimethylsiloxane (PDMS). This research was directed towards understanding how changes in the physical characteristics of the AgNWs/TPU dielectric layer (specifically, its thickness and surface area) impact the sensor's operational efficacy. We assessed several key electrical properties: capacitance, electric potential, and membrane displacement under varied pressure conditions. These investigations are crucial for enhancing the sensor's sensitivity and ensuring its adaptability across diverse applications, including health monitoring systems and dynamic user interface technologies. To ensure the reliability of our simulations, we applied the Effective Medium Theory to accurately calculate the dielectric constant of the AgNWs/TPU composite. This approach is essential for predicting how the composite material will perform under different environmental and operational stresses, thus facilitating the optimization of the sensor design for enhanced performance and longevity. Moreover, we explored the potential benefits of innovative three-dimensional structures for the dielectric layer compared to traditional flat designs.
Our hypothesis was that 3D configurations might improve the stress distribution and optimize the electrical field interactions within the sensor, thereby boosting its sensitivity and accuracy. Our simulation protocol includes comprehensive performance testing under simulated environmental conditions, such as temperature fluctuations and mechanical pressures, which mirror actual operational conditions. These tests are crucial for assessing the sensor's robustness and its ability to function reliably over extended periods, ensuring high reliability and accuracy in complex real-world environments. In our current research, although a full dynamic simulation analysis of the three-dimensional structures has not yet been conducted, preliminary explorations through three-dimensional modeling have indicated the potential for mechanical and electrical performance improvements over traditional planar designs. These initial observations emphasize the potential advantages and importance of incorporating advanced three-dimensional modeling techniques in the development of MEMS sensors, offering new directions for the design and functional optimization of future sensors. Overall, this study not only highlights the powerful capabilities of COMSOL Multiphysics® for modeling sophisticated electronic devices but also underscores the potential of innovative MEMS technology in advancing the development of more effective, reliable, and adaptable sensor solutions for a broad spectrum of technological applications.
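The effective-medium and capacitance calculations mentioned above can be sketched as follows. The Maxwell-Garnett mixing rule is one common effective-medium formula, assumed here as a stand-in since the abstract does not specify which variant was used, and all numeric values are illustrative:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_garnett(eps_matrix, eps_filler, fill_fraction):
    """Maxwell-Garnett effective permittivity for spherical inclusions
    (e.g. a filler dispersed in TPU). One common effective-medium
    formula; assumed here for illustration."""
    num = 2 * eps_matrix + eps_filler + 2 * fill_fraction * (eps_filler - eps_matrix)
    den = 2 * eps_matrix + eps_filler - fill_fraction * (eps_filler - eps_matrix)
    return eps_matrix * num / den

def parallel_plate_capacitance(eps_r, area, gap):
    """C = eps0 * eps_r * A / d for the sensor's electrode pair; pressure
    reduces the gap d, which is what raises the measured capacitance."""
    return EPS0 * eps_r * area / gap


# Illustrative (assumed) values: matrix eps_r = 3, filler eps_r = 10,
# 10% fill, 1 cm^2 electrodes, 100 um dielectric gap.
eps_eff = maxwell_garnett(3.0, 10.0, 0.10)
C = parallel_plate_capacitance(eps_eff, 1e-4, 100e-6)
```

A parameter sweep over gap and area with these two functions mirrors, in miniature, the thickness/surface-area study the abstract describes.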

Keywords: MEMS, flexible sensors, COMSOL Multiphysics, AgNWs/TPU, PDMS, 3D modeling, sensor durability

Procedia PDF Downloads 6
87 The Dynamic Nexus of Public Health and Journalism in Informed Societies

Authors: Ali Raza

Abstract:

The dynamic landscape of communication has brought about significant advancements that intersect with the realms of public health and journalism. This abstract explores the evolving synergy between these fields, highlighting how their intersection has contributed to informed societies and improved public health outcomes. In the digital age, communication plays a pivotal role in shaping public perception, policy formulation, and collective action. Public health, concerned with safeguarding and improving community well-being, relies on effective communication to disseminate information, encourage healthy behaviors, and mitigate health risks. Simultaneously, journalism, with its commitment to accurate and timely reporting, serves as the conduit through which health information reaches the masses. Advancements in communication technologies have revolutionized the ways in which public health information is both generated and shared. The advent of social media platforms, mobile applications, and online forums has democratized the dissemination of health-related news and insights. This democratization, however, brings challenges, such as the rapid spread of misinformation and the need for nuanced strategies to engage diverse audiences. Effective collaboration between public health professionals and journalists is pivotal in countering these challenges, ensuring that accurate information prevails. The synergy between public health and journalism is most evident during public health crises. The COVID-19 pandemic underscored the pivotal role of journalism in providing accurate and up-to-date information to the public. However, it also highlighted the importance of responsible reporting, as sensationalism and misinformation could exacerbate the crisis. Collaborative efforts between public health experts and journalists led to the amplification of preventive measures, the debunking of myths, and the promotion of evidence-based interventions. 
Moreover, the accessibility of information in the digital era necessitates a strategic approach to health communication. Behavioral economics and data analytics offer insights into human decision-making and allow tailored health messages to resonate more effectively with specific audiences. This approach, when integrated into journalism, enables the crafting of narratives that not only inform but also influence positive health behaviors. Ethical considerations emerge prominently in this alliance. The responsibility to balance the public's right to know with the potential consequences of sensational reporting underscores the significance of ethical journalism. Health journalists must meticulously source information from reputable experts and institutions to maintain credibility, thus fortifying the bridge between public health and the public. As both public health and journalism undergo transformative shifts, fostering collaboration between these domains becomes essential. Training programs that familiarize journalists with public health concepts and practices can enhance their capacity to report accurately and comprehensively on health issues. Likewise, public health professionals can gain insights into effective communication strategies from seasoned journalists, ensuring that health information reaches a wider audience. In conclusion, the convergence of public health and journalism, facilitated by communication advancements, is a cornerstone of informed societies. Effective communication strategies, driven by collaboration, ensure the accurate dissemination of health information and foster positive behavior change. As the world navigates complex health challenges, the continued evolution of this synergy holds the promise of healthier communities and a more engaged and educated public.

Keywords: public awareness, journalism ethics, health promotion, media influence, health literacy

Procedia PDF Downloads 44
86 Expression Profiling of Chlorophyll Biosynthesis Pathways in Chlorophyll B-Lacking Mutants of Rice (Oryza sativa L.)

Authors: Khiem M. Nguyen, Ming C. Yang

Abstract:

Chloroplast pigments are extremely important during photosynthesis since they play essential roles in light absorption and energy transfer. Therefore, understanding the efficiency of chlorophyll (Chl) biosynthesis could facilitate enhancement of photo-assimilate accumulation and, ultimately, crop yield. Chl-deficient mutants have been used extensively to study the Chl biosynthetic pathways and the biogenesis of the photosynthetic apparatus. Rice (Oryza sativa L.) is one of the leading food crops, serving as a staple food in many parts of the world. To the authors' best knowledge, Chl b-lacking rice has been found; however, the molecular mechanism of its Chl biosynthesis, compared to that of wild-type rice, remains unclear. In this study, the ultrastructure, photosynthetic properties, and transcriptome profile of wild-type rice (Norin No. 8, N8) and its Chl b-lacking mutant (Chlorina 1, C1) were examined. The findings showed that total Chl content and Chl b content in the C1 leaves were strongly reduced compared to N8 leaves, suggesting that the reduction in total Chl content contributes to leaf color variation at the physiological level. The plastid ultrastructure of C1 showed abnormal thylakoid membranes with loss of starch granules, a large number of vesicles, and numerous plastoglobuli. The C1 rice also exhibited thinner stacked grana, caused by a reduction in the number of thylakoid membranes per granum. Thus, the different Chl a/b ratio of C1 may reflect the abnormal plastid development and function. Transcriptional analysis identified 23 differentially expressed genes (DEGs) and 671 transcription factors (TFs) involved in Chl metabolism, chloroplast development, cell division, and photosynthesis.
The transcriptome profile and DEGs revealed that the gene encoding PsbR (a PSII core protein) was down-regulated, suggesting that lower levels of light-harvesting complex proteins are responsible for the lower photosynthetic capacity in C1. In addition, the expression levels of cell division protein (FtsZ) genes were significantly reduced in C1, causing a chloroplast division defect. A total of 19 DEGs involved in the Chl biosynthesis pathway were identified based on KEGG pathway assignment. Among these DEGs, the GluTR gene was down-regulated, whereas the UROD, CPOX, and MgCH genes were up-regulated. Observation through qPCR suggested that the later stages of Chl biosynthesis were enhanced in C1, whereas the early stages were inhibited. Plastid structure analysis together with transcriptomic analysis suggested that the Chl a/b ratio was amplified both by the reduction in Chl content accumulation, owing to abnormal chloroplast development, and by the enhanced conversion of Chl b to Chl a. Moreover, the results indicated the same Chl-cycle pattern in the wild-type and C1 rice, pointing to another Chl b degradation pathway. Furthermore, the results demonstrated that normal grana stacking, along with the absence of Chl b and greatly reduced levels of Chl a in C1, provides evidence supporting the conclusion that factors other than LHCII proteins are also involved in grana stacking. The findings of this study provide insight into the molecular mechanisms that underlie different Chl a/b ratios in rice.

Keywords: Chl-deficient mutant, grana stacked, photosynthesis, RNA-Seq, transcriptomic analysis

Procedia PDF Downloads 96
85 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model. Creating a compatible BIM for existing buildings is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties for such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic. Thus, integrating the existing terrain that surrounds buildings into the digital model is essential to enable several simulations, such as flood simulation, energy simulation, etc. Replicating the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used to provide a 3D BIM model of the site and the existing buildings, based on the case study of the "Ecole Spéciale des Travaux Publiques (ESTP Paris)" school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and a height between 50 and 68 meters.
In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed in the RGF93 (Réseau Géodésique Français) – Lambert 93 French system using different methods: (i) land topographic surveying with a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the entry data are identified, the digital model of each building is produced. The DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
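Turning scattered surveyed heights into a continuous DTM surface can be sketched with inverse-distance weighting, a common interpolation choice; the method and sample points here are illustrative assumptions, as the abstract does not specify which interpolation was used:

```python
def idw_height(points, x, y, power=2.0):
    """Inverse-distance-weighted height at planar position (x, y) from
    scattered DTM points [(px, py, pz), ...]. A standard gridding
    technique, assumed here for illustration."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return pz  # query coincides with a surveyed point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * pz
        den += w
    return num / den


# Two hypothetical surveyed points 2 m apart, heights 10 m and 20 m
surveyed = [(0.0, 0.0, 10.0), (2.0, 0.0, 20.0)]
h = idw_height(surveyed, 1.0, 0.0)
```

Evaluating this on a regular grid of (x, y) positions yields the discrete-height DTM described above, ready for flood or energy simulations.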

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 77
84 Poverty Reduction in European Cities: Local Governments’ Strategies and Programmes to Reduce Poverty; Interview Results from Austria

Authors: Melanie Schinnerl, Dorothea Greiling

Abstract:

In the context of the Europe 2020 strategy, poverty and the fight against it returned to the center of national political efforts. This served as motivation for an Austrian research-grant-funded project to focus on the under-researched local government level, with the aim of identifying municipal best-practice cases and deriving policy implications for Austria. Designing effective poverty reduction strategies is a complex challenge which calls for an integrated multi-actor approach. Cities are increasingly called upon to combat poverty, even in rich EU member states. In doing so, cities face substantial demographic, cultural, economic and social challenges as well as changing welfare state regimes. Furthermore, there is a low willingness of (right-wing) governments to support the poor. Against this background, the research questions are: 1. How do local governments define poverty? 2. Who are the main risk groups and what are the most pressing problems when fighting urban poverty? 3. What are regarded as successful anti-poverty initiatives? 4. What is the underlying welfare state concept? To address the research questions, a multi-method approach was chosen, consisting of a systematic literature analysis, a comprehensive document analysis, and expert interviews. For interpreting the data, the project follows the qualitative-interpretive paradigm. Municipal approaches for reducing poverty are compared based on deductively as well as inductively identified criteria. In addition to an intensive literature analysis, 40 interviews have been conducted in Austria since the project started in March 2018. From the other countries, 14 responses have been collected, providing a first insight. Regarding the definition of poverty, the predominant approaches in Austria are the EU-SILC definition and counting the persons who receive need-based minimum social benefits, the Austrian form of social welfare.
In addition to homeless people, single-parent families, unskilled persons, long-term unemployed persons, migrants (first and second generation), refugees and families with at least three children were frequently mentioned. The most pressing challenges for Austrian cities are expected reductions of social budgets, great insecurity about the central government's social policy reform plans, the growing number of homeless people and a lack of affordable housing. Together with affordable housing, old-age poverty will gain more importance in the future. The Austrian best-practice examples suggested by interviewees focused primarily on the homeless, children and young people (up to 25). The central government's policy changes have already had negative effects on programs for refugees and the elderly unemployed. Social housing in Vienna was frequently mentioned as an international best-practice case that other growing cities can learn from. The results from Austria indicate a change towards the social investment state, which primarily focuses on children and labour market integration. The first insights from the other countries indicate that affordable housing and labour market integration are cross-cutting issues. Inherited poverty and old-age poverty seem to be more pressing outside Austria.

Keywords: anti-poverty policies, European cities, empirical study, social investment

Procedia PDF Downloads 92
83 Triassic and Liassic Paleoenvironments during the Central Atlantic Magmatic Province (CAMP) Effusion in the Moroccan Coastal Meseta: The Mohammedia-Benslimane-El Gara-Berrechid Basin

Authors: Rachid Essamoud, Abdelkrim Afenzar, Ahmed Belqadi

Abstract:

During the Early Mesozoic, the northwestern part of the African continent was affected by initial fracturing associated with the early stages of the opening of the Central Atlantic (Atlantic rift). During this rifting phase, the Moroccan Meseta experienced an extensional tectonic regime. This extension favored the formation of a set of rift-type basins, including the Mohammedia-Benslimane-El Gara-Berrechid basin. It is therefore essential to know the nature of the deposits in this basin and their evolution over time, as well as their relationship with the basaltic effusion of the Central Atlantic Magmatic Province (CAMP). These deposits are subdivided into two large series: the lower clay-salt series attributed to the Triassic and the upper clay-salt series attributed to the Liassic. The two series are separated by the Upper Triassic-Lower Liassic basaltic complex. Detailed sedimentological analysis made it possible to characterize four mega-sequences, fifteen facies types, and eight architectural elements and facies associations in the Triassic series. A progressive decrease in paleoslope over time led the paleoenvironment to evolve from a proximal alluvial-fan system to a braided fluvial style, then to an anastomosed system. These environments eventually evolved into an alluvial plain associated with a coastal plain where playa lakes, mudflats and lagoons developed. The pure, massive halitic facies at the top of the series probably indicate an evolution of the depositional environment towards a shallow subtidal environment. The presence of these evaporites indicates a climate that favored their precipitation, in this case a fairly hot and arid climate. Sedimentological analysis of the supra-basaltic part shows that during the Lower Liassic, the paleoslope remained gentle after the basaltic effusion, with distal depositional environments.
The faciological analysis revealed the presence of four major lithofacies (sandstone, silty, clayey and evaporitic) organized in two mega-sequences: sedimentation of the first rock-salt mega-sequence took place in an open brine-depression system, followed by saline mudflats under continental influence. The upper clay mega-sequence displays facies documenting sea-level fluctuations linked to the final transgression of the Tethys or the opening Atlantic. Saliferous sedimentation was therefore favored from the Upper Triassic onward, but experienced a sudden rupture with the emission of basaltic flows, which are interstratified in the azoic salt clays of very shallow seas. This basaltic emission, which belongs to the CAMP, would derive from fissural volcanism, probably fed through transfer faults located in the NW and SE of the basin. Its emplacement was probably subaquatic to subaerial. From a chronological and paleogeographic point of view, this main volcanism, dated between the Upper Triassic and the Lower Liassic (180-200 Ma), is linked to the fragmentation of Pangea, controlled by progressive extension initiated in the west in close relation with the initial phases of Central Atlantic rifting, and seems to coincide with the major mass extinction at the Triassic-Jurassic boundary.

Keywords: basalt, CAMP, Liassic, sedimentology, Triassic, Morocco

Procedia PDF Downloads 43
82 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories

Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy

Abstract:

Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence, both physical and mental. For ICU nurses, especially those who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times, as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to the alarms? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface.
The data show that responsible nurses’ direct exposure to, and awareness of, the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-methods metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby, it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience, and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.

Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography

Procedia PDF Downloads 171
81 GIS-Based Flash Flood Runoff Simulation Model of the Upper Teesta River Basin Using ASTER DEM and Meteorological Data

Authors: Abhisek Chakrabarty, Subhraprakash Mandal

Abstract:

Flash floods are among the catastrophic natural hazards of the mountainous regions of India. The recent flood on the Mandakini River in Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was an integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloudbursts, glacial lake outbursts, and the collapse of artificial check dams, which cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8600 Sq. km. It originates in the Pauhunri massif (7127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation, but also by melting glaciers and snow as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover and geological data were also collected.
Clipping the watershed from the entire area and generating the streamlines for the Teesta watershed were carried out, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. Different hydraulic models for detecting flash flood probability were analysed using HEC-RAS, FLO-2D and HEC-HMS software, which was of great importance in order to achieve the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation models show outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to the downstream settlements. Model output shows that 313 Sq. km were found to be most vulnerable to flash floods, including Melli, Jourthang, Chungthang and Lachung, and 655 Sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar and Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968, and 78% of the actual flooded area was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from a probable flash flood event.
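The rainfall-to-runoff step underlying such simulations can be sketched with the SCS curve-number method, a standard excess-rainfall formulation implemented in HEC-HMS. This is a minimal illustrative sketch, not the study's actual model; the curve number (75) and the use of the whole 8600 Sq. km catchment are assumptions for the example only:

```python
def scs_runoff_depth(rainfall_mm: float, curve_number: float) -> float:
    """Direct runoff depth (mm) from storm rainfall via the SCS curve-number method."""
    s = 25400.0 / curve_number - 254.0   # potential maximum retention S (mm)
    ia = 0.2 * s                         # initial abstraction (standard 0.2*S)
    if rainfall_mm <= ia:
        return 0.0                       # all rainfall absorbed before runoff begins
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

def runoff_volume_m3(rainfall_mm: float, curve_number: float, area_km2: float) -> float:
    """Runoff volume (m^3): depth in mm -> m, area in km^2 -> m^2."""
    return scs_runoff_depth(rainfall_mm, curve_number) * 1e-3 * area_km2 * 1e6

# Illustrative storm: 400 mm/day over the ~8600 Sq. km Teesta catchment, CN = 75 (assumed)
depth = scs_runoff_depth(400.0, 75.0)          # ~314 mm of direct runoff
volume = runoff_volume_m3(400.0, 75.0, 8600.0)
```

In a GIS workflow, the curve number would be derived per cell from the land cover and soil layers and the depths routed through the DEM-derived drainage network, rather than applied as a single lumped value as above.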

Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin

Procedia PDF Downloads 279