Search results for: events detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5340

240 Customized Temperature Sensors for Sustainable Home Appliances

Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy

Abstract:

Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data, such as the machine's frequency of use and user preferences, and compile data critical to diagnostic processes for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating range spans -70°C to 850°C, while home appliance applications require only 23°C to 500°C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1-micron thickness is usually applied to the wafer. However, the use of platinum in the coating and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, a 100 nm silicon dioxide layer was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer.
The lift-off process was performed after e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point-probe sheet resistance measurements were performed at room temperature. Resistivity was measured with a probe station before and after annealing at 600°C in a rapid thermal processing machine. Temperature dependence between 25°C and 300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but produces reliable data across the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor.
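Platinum resistive elements of the kind described above follow the standard Callendar-Van Dusen resistance-temperature relation. A minimal sketch of the readout conversion is given below; the A and B coefficients are the standard IEC 60751 platinum values, and the 100 Ω nominal resistance is illustrative, not the authors' optimized sensor value:

```python
import math

# Callendar-Van Dusen relation for platinum, valid for T >= 0 °C:
#   R(T) = R0 * (1 + A*T + B*T^2)
# A and B are standard IEC 60751 coefficients; r0 = 100 ohm is an
# illustrative nominal resistance, not the sensor developed in the study.
A = 3.9083e-3    # 1/°C
B = -5.775e-7    # 1/°C^2

def resistance(t_celsius, r0=100.0):
    """Element resistance (ohm) at t_celsius."""
    return r0 * (1.0 + A * t_celsius + B * t_celsius ** 2)

def temperature(r, r0=100.0):
    """Temperature (°C) from a measured resistance, inverting R(T)."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r / r0))) / (2.0 * B)
```

Inverting the quadratic gives the temperature readout from a measured resistance, which is how an appliance controller would use such a sensor over the 23-500°C requirement stated above.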

Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency

239 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike most self-driving vehicles, which are developed to operate among other vehicles on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) shared with pedestrians traveling between multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today’s transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering, and computer science are working together to attack the problem from different perspectives (hardware, software, and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where vertices represent landmarks and edges represent paths that the car should follow with designated behaviors (such as staying on the right side of the lane or following an edge).
Graph search algorithms such as A* will be implemented as the default path planning algorithm. D* Lite will be explored to efficiently recompute the path when there are changes to the map. CATE shall avoid static obstacles and walking pedestrians within a safe distance. Unlike traveling along traditional roadways, CATE’s route directly coexists with pedestrians. To ensure the safety of pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and operate in GPS-denied situations. CATE relies on its GPS for its current location, which has a precision of only a few meters. We have implemented an Unscented Kalman Filter (UKF) that fuses data from multiple sensors (such as GPS, IMU, and odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily become degraded or blocked on campus by high-rise buildings or trees. The UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
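The default planner described above searches a spatial graph of campus landmarks with A*. A minimal sketch over a hypothetical graph follows; the node names, coordinates, and edges are illustrative stand-ins, not CATE's actual campus map:

```python
import heapq
import math

# Hypothetical campus graph: vertices are landmarks with 2-D coordinates,
# directed edges are walkable paths. Names and layout are illustrative only.
nodes = {"gate": (0, 0), "library": (2, 1), "quad": (1, 3), "union": (4, 3)}
edges = {"gate": ["library", "quad"], "library": ["union"],
         "quad": ["union"], "union": []}

def dist(a, b):
    """Euclidean distance between two landmarks."""
    (x1, y1), (x2, y2) = nodes[a], nodes[b]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    """A* search; straight-line distance is an admissible heuristic here."""
    # Frontier entries: (f = g + heuristic, g, node, path-so-far).
    frontier = [(dist(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in edges[node]:
            g2 = g + dist(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(
                    frontier, (g2 + dist(nxt, goal), g2, nxt, path + [nxt]))
    return None
```

D* Lite, mentioned above as an extension, would reuse the cost-to-goal estimates between replans instead of searching from scratch when an edge changes.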

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

238 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research

Authors: Christoph Opperer

Abstract:

Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience within physical space – with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view.
This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
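The weighting step described above can be sketched as follows: given a boolean visibility matrix and a per-cell saliency weight derived from the saliency maps, classic connectivity counts the cells visible from each cell, while the weighted variant scales each visible cell by its saliency. The arrays here are randomly generated stand-ins, not actual analysis data:

```python
import numpy as np

# Illustrative stand-in data: visible[i, j] is True when cell j is visible
# from cell i; saliency[j] is a per-cell weight in [0, 1) from a saliency map.
rng = np.random.default_rng(0)
n = 6
visible = rng.random((n, n)) > 0.4
np.fill_diagonal(visible, True)   # every cell sees itself
saliency = rng.random(n)

# Classic VGA connectivity: how many cells each cell can see
# ("what is possible to see").
connectivity = visible.sum(axis=1)

# Saliency-weighted connectivity: visible cells count in proportion to their
# saliency ("what is likely to be seen").
weighted = visible.astype(float) @ saliency
```

The same per-vertex weighting carries over to other visibility-graph measures, shifting the analysis from pure geometry toward the attention-driven distinction of Ervin and Steinitz noted above.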

Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images

237 Pivoting to Fortify our Digital Self: Revealing the Need for Personal Cyber Insurance

Authors: Richard McGregor, Carmen Reaiche, Stephen Boyle

Abstract:

Cyber threats are a relatively recent phenomenon and offer cyber insurers a dynamic and intelligent peril. As individuals en masse become increasingly digitally dependent, Personal Cyber Insurance (PCI) offers an attractive option to mitigate cyber risk at a personal level. This abstract proposes a literature review that conceptualises a framework for siting Personal Cyber Insurance (PCI) within the context of cyberspace. The lack of empirical research within this domain demonstrates an immediate need to define the scope of PCI, to allow cyber insurers to understand personal cyber risk threats and vectors, customer awareness, capabilities, and their associated needs. Additionally, this will allow cyber insurers to conceptualise appropriate frameworks for the effective management and distribution of PCI products and services within a landscape often incongruent with the risk attributes commonly associated with traditional personal lines insurance products. Cyberspace has provided significant improvement to the quality of social connectivity and productivity during past decades and has allowed an enormous capability uplift in information sharing and communication between people and communities. Conversely, personal digital dependency furnishes ample opportunities for adverse cyber events such as data breaches and cyber-attacks, thus introducing a continuous and insidious threat of omnipresent cyber risk – particularly since the advent of the COVID-19 pandemic and the widespread adoption of ‘work-from-home’ practices. Recognition of escalating inter-dependencies, vulnerabilities, and inadequate personal cyber behaviours has prompted efforts by businesses and individuals alike to investigate strategies and tactics to mitigate cyber risk – of which cyber insurance is a viable, cost-effective option.
It is argued that, ceteris paribus, the nature of cyberspace intrinsically provides characteristic peculiarities that pose significant and bespoke challenges to cyber insurers, often incongruent with the risk attributes commonly associated with traditional personal lines insurance products. These challenges include (inter alia) a paucity of historical claim/loss data for underwriting and pricing purposes, interdependencies of cyber architecture promoting high correlation of cyber risk, difficulties in evaluating cyber risk, intangibility of risk assets (such as data and reputation), lack of standardisation across the industry, high and undetermined tail risks, and moral hazard. This study proposes a thematic overview of the literature deemed necessary to conceptualise the challenges to issuing personal cyber coverage. There is an evident absence of empirical research appertaining to PCI and the design of operational business models for this business domain, especially qualitative initiatives that (1) attempt to define the scope of the peril, (2) secure an understanding of the needs of both cyber insurer and customer, and (3) identify elements pivotal to the effective management and profitable distribution of PCI - leading to an argument proposed by the author that postulates that the traditional general insurance customer journey and business model are ill-suited to the lineaments of cyberspace. The findings of the review confirm significant gaps in contemporary research within the domain of personal cyber insurance.

Keywords: cyberspace, personal cyber risk, personal cyber insurance, customer journey, business model

236 Memories of Lost Fathers: The Unfinished Transmission of Generational Values in Hungarian Cinema

Authors: Peter Falanga

Abstract:

During the process of de-Stalinization that began in 1956 with the Twentieth Congress of the Soviet Communist Party, many filmmakers in Hungary chose to explore their country’s political discomforts by using Socialist Realism as a negative model against which they could react to the dominating ideology. A renewed national film industry and a more permissive political regime allowed filmmakers to take to task the plight of the preceding generation, who had experienced the fatal political turmoil of both World Wars and the purges of Stalin. What follows is no longer the multigenerational unity found in Socialist Realism, wherein both the old and the young embrace Stalin’s revolutionary optimism; instead, the protagonists are parentless, and thus their connection to the previous generation is partially severed. In these films, violent historical forces leave one generation to search both for a connection with their family’s past and for moral guidance to direct their future. István Szabó’s Father (1966), Márta Mészáros’ Diary for My Children (1984), and Pál Gábor’s Angi Vera (1978) each consider the fraught relationship between successive generations through the lens of postwar youth. A characteristic all of their protagonists share is that they are missing one or both parents, and they cope with familial loss either through recalling memories of their parents in dream-like sequences or, in the case of Angi Vera, through embracing the surrogate paternalism that the Communist Party promises to provide. This paper considers the argument these films present about the progress of Hungarian history, and how this topic is explored in more recent films that similarly focus on the transmission of generational values.
Scholars such as László Strausz and John Cunningham have written on the continuing concern with the transmission of generational values in more recent films such as István Szabó’s Sunshine (1999), Béla Tarr’s Werckmeister Harmonies (2000), György Pálfi’s Taxidermia (2006), Ágnes Kocsis’ Pál Adrienn (2010), and Kornél Mundruczó’s Evolution (2021). These films, they argue, make intimate portrayals of the various sweeping political changes in Hungary’s history and question how these epochs or events have impacted Hungarian identities. If these films attempt to personalize the historical shifts of Hungary, then what is the significance of featuring characters who have lost one or both parents? An attempt to understand this coherent trend in Hungarian cinema will profit from examining the earlier, celebrated films of Szabó, Mészáros, and Gábor, who inaugurated this preoccupation with generational values. The pervasive interplay of dreams and memory in their films adds an additional element to their argument concerning historical progression. This paper incorporates Richard Teniman’s notion of the “dialectics of memory”, in which memory is in a constant process of negation and reinvention, to explain why these directors prefer to explore Hungarian identity through the disarranged form of psychological realism over the linear causality structure of historical realism.

Keywords: film theory, Eastern European studies, film history, Eastern European history

235 Becoming a Good-Enough White Therapist: Experiences of International Students in Psychology Doctoral Programs

Authors: Mary T. McKinley

Abstract:

As socio-economic globalization impacts education and turns knowledge into a commodity, institutions of higher education are becoming more intentional about infusing a global and intercultural perspective into education via the recruitment of international students. Coming from dissimilar cultures, many of these students are evaluated against and held accountable to Euro-American values of independence, self-reliance, and autonomy. Not surprisingly, these students often experience culture shock, with deleterious effects on their mental health and academic functioning. Thus, it is critical to understand the experiences of international students, with the hope that such knowledge will keep the field of psychology from promulgating Eurocentric ideals and values and prevent the training of these students as good-enough White therapists. Using a critical narrative inquiry framework, this study elicits stories about the challenges encountered by international students as they navigate their clinical training in the presence of acculturative stress and potentially different worldviews. With its emphasis on storytelling as meaning-making, narrative research design hinges on the assumption that people are interpretive beings who make meaning of themselves and their world through the language of stories. Also, dominant, socially constructed narratives play a central role in creating and maintaining hegemonic structures that privilege certain individuals and ideologies at the expense of others. On this premise, narrative inquiry begins with an exploration of the experiences of participants in their lived stories. Bounded narrative segments were read, interpreted, and analyzed using a critical events approach. Throughout the process, issues of reliability and researcher bias were addressed by keeping a reflective analytic memo, as well as by triangulating the data using peer reviewers and check-ins with participants.
The findings situate culture at the epicenter of international students’ acculturation challenges as well as their resiliency in psychology doctoral programs. It was not uncommon for these international students to experience ethical dilemmas inherent in learning content that conflicted with their cultural beliefs and values. Issues of cultural incongruence appear to be further exacerbated by visible markers of difference such as speech accent and clothing. These stories also link the acculturative stress reported by international students to experiences of perceived racial discrimination and a lack of support from the faculty, administration, peers, and the society at large. Beyond the impact on the international students themselves, there are implications for internationalization in psychology, with the goal of equipping doctoral programs to be better prepared to meet the needs of their international students. More than ever before, programs need to liaise with international student services and work in tandem to meet the unique needs of this population of students. Also, there exists a need for multiculturally competent supervisors working with international students with varying degrees of acculturation. In addition to making social justice and advocacy salient in students’ multicultural training, it may be helpful for psychology doctoral programs to be more intentional about infusing cross-cultural theories, indigenous psychotherapies, and, when practical, the possibility of geographically cross-cultural practicum experiences in the home countries of international students, while taking into consideration the ethical issues of virtual supervision.

Keywords: decolonizing pedagogies, international students, multiculturalism, psychology doctoral programs

234 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature

Authors: N. Reza Pour, S. Chuah, T. Vo

Abstract:

Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan’s syndrome, fibromuscular dysplasia, vasculitis, and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies, and it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. One case was found to have a posterior circulation stroke as a result of bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, which was confirmed by autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented at the 38th week of pregnancy with the onset of early labour and a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurry vision, and BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started, and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit.
The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, a constant headache, and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated on the same day with an emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy with an acceptable outcome. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. As we hypothesize, early and aggressive management of vertebral artery dissection may prevent further complications.

Keywords: eclampsia, preeclampsia, pregnancy, vertebral artery dissection

233 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution, causing treatment plan deterioration. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (Head and Neck Intensity-Modulated Radiation Therapy) using a novel comparison method, organ-of-interest gamma analysis, which is more sensitive to changes in specific organs. Five HN IMRT patients replanned because of tumour shrinkage and weight loss that critically affected parotid size were randomly selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%, 3 mm criteria. Moreover, the gamma pass-rate was calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%.
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the rationale for replanning given by radiation oncologists. This showed that registration of 3D-CBCT to the original CT alone does not reveal the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
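For illustration, a simplified one-dimensional version of the organ-of-interest gamma analysis (3% dose difference, 3 mm distance-to-agreement, pass-rate computed only inside a structure mask) can be sketched as follows; the dose profiles and mask are synthetic stand-ins, not clinical EPID data:

```python
import numpy as np

def gamma_pass_rate(ref, meas, mask, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Percentage of masked reference points passing a 1-D gamma test.

    For each reference point i inside the organ mask, gamma is the minimum
    over evaluated points j of sqrt((distance/DTA)^2 + (dose diff/DD)^2);
    a point passes when that minimum is <= 1.
    """
    x = np.arange(len(ref)) * spacing_mm
    norm = ref.max()                       # global dose normalisation
    passed = []
    for i in np.flatnonzero(mask):
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((meas - ref[i]) / (dd * norm)) ** 2
        passed.append(np.sqrt(dist2 + dose2).min() <= 1.0)
    return 100.0 * np.mean(passed)
```

Restricting the mask to the pixels of a projected structure (parotid, spinal cord, PTV) is what makes the pass-rate sensitive to that organ alone, rather than diluted over the whole treatment field.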

Keywords: re-plan, anatomical change, transit electronic portal imaging device (EPID), head and neck

232 Complex Dynamics in a Morphologically Heterogeneous Biological Medium

Authors: Turky Al-Qahtani, Roustem Miftahof

Abstract:

Introduction: Under common assumptions of excitability and morphological (cellular) homogeneity, with spatial structural anomalies added as required, it has been shown that biological systems are able to display travelling wave dynamics. Such dynamics are not self-sustained: their existence depends on the electrophysiological state of transmembrane ion channels and requires an extrinsic/intrinsic periodic source. However, organs in the body are highly multicellular and heterogeneous, and their functionality is the outcome of electro-mechanical conjugation rather than excitability alone. Thus, peristalsis in the gut relies on spatiotemporal myoelectrical pattern formation between the mechanical component, represented by smooth muscle cells (SM), and the control component, comprised of a chain of primary sensory and motor neurones. Synaptically linked through the afferent and efferent pathways, they form a functional unit (FU) of the gut. Aims: These are: i) to study numerically the complex dynamics, and ii) to investigate the possibility of self-sustained myoelectrical activity in the FU. Methods: The FU recreates the following sequence of physiological events: deformation of mechanoreceptors located in SM; generation and propagation of electrical waves of depolarisation - spikes - along the axon to the soma of the primary neurone; discharge of the primary neurone and spike propagation towards the motor neurone; burst of the motor neurone and transduction of spikes to SM, subsequently producing forces of contraction. These are governed by a system of nonlinear partial and ordinary differential equations, a modified version of the Hodgkin-Huxley model together with SM fibre mechanics. In the numerical experiments, the source of excitation is mechanical stretching of SM at a fixed amplitude and variable frequencies. Results: Low frequency (0.5 < v < 2 Hz) stimuli cause the propagation of spikes in the neuronal chain and, finally, the generation of active forces by SM.
However, the induced contractions are not sufficient to initiate travelling wave dynamics in the control system. At frequencies 2 < v < 4 Hz, multiple low-amplitude and short-lasting contractions are observed in SM after the termination of stretching. For frequencies 0.5 < v < 4 Hz, the primary and sensory neurones demonstrate strong connectivity and coherent electrical activity. Significant qualitative and quantitative changes in the dynamics of myoelectrical patterns, with a transition to a self-organised mode, are recorded with a high degree of stretch at v = 4.5 Hz. Increased rates of deformation lead to the production of high-amplitude signals at the mechanoreceptors, with subsequent self-sustained excitation within the neuronal chain. Remarkably, the connection between neurones weakens, resulting in incoherent firing. A further increase in the frequency of stimulation (v > 4.5 Hz) has a detrimental effect on the system. The mechanical and control systems become disconnected and exhibit uncoordinated electromechanical activity. Conclusion: To our knowledge, the existence of periodic activity in a multicellular, functionally heterogeneous biological system with mechano-electrical dynamics, such as the FU, has been demonstrated for the first time. These findings support the notion of possible peristalsis in the gut even in the absence of intrinsic sources - pacemaker cells. The results could have implications for the pathogenesis of intestinal dysrhythmia, a medical condition associated with motor dysfunction.
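The governing equations in the study are a modified Hodgkin-Huxley system coupled to SM fibre mechanics. As a much-reduced stand-in, a FitzHugh-Nagumo sketch with textbook parameters (not the authors' model or values) illustrates how a sustained drive above threshold produces self-sustained spiking of the kind described above:

```python
# Minimal FitzHugh-Nagumo sketch of an excitable cell under a constant
# stimulus current; a reduced stand-in for the modified Hodgkin-Huxley
# equations used in the study. Parameters are textbook values.
def simulate(i_ext=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=50000):
    v, w = -1.0, 1.0          # membrane potential and recovery variable
    trace = []
    for _ in range(steps):
        dv = v - v ** 3 / 3.0 - w + i_ext   # fast (membrane) dynamics
        dw = eps * (v + a - b * w)          # slow (recovery) dynamics
        v += dt * dv                        # forward Euler step
        w += dt * dw
        trace.append(v)
    return trace
```

With i_ext = 0.5 the resting state is unstable and the trace settles onto a limit cycle of repetitive spikes; below a threshold drive the cell is merely excitable, mirroring the distinction above between driven and self-sustained activity.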

Keywords: complex dynamics, functional unit, the gut, dysrhythmia

231 Dynamic High-Rise Moment Resisting Frame Dissipation Performances Adopting Glazed Curtain Walls with Superelastic Shape Memory Alloy Joints

Authors: Lorenzo Casagrande, Antonio Bonati, Ferdinando Auricchio, Antonio Occhiuzzi

Abstract:

This paper summarizes the results of a survey on smart non-structural element dynamic dissipation when installed in modern high-rise mega-frame prototypes. An innovative glazed curtain wall was designed using Shape Memory Alloy (SMA) joints in order to increase energy dissipation and enhance the seismic/wind response of the structures. The studied buildings consisted of thirty- and sixty-storey planar frames, extracted from reference three-dimensional steel Moment Resisting Frames (MRF) with outriggers and belt trusses. The internal core was composed of a CBF system, whilst outriggers were placed every fifteen storeys to limit second-order effects and inter-storey drifts. These structural systems were designed in accordance with European rules, and numerical FE models were developed with an open-source code able to account for geometric and material nonlinearities. With regard to the characterization of non-structural building components, full-scale crescendo tests were performed on aluminium/glass curtain wall units at the laboratory of the Construction Technologies Institute (ITC) of the Italian National Research Council (CNR), deriving force-displacement curves. Three-dimensional brick-based inelastic FE models were calibrated according to the experimental results, simulating the façade response. Since recent seismic events and extreme dynamic wind loads have caused widespread failure of non-structural components, which produces significant economic losses and represents a hazard to pedestrian safety, a more dissipative glazed curtain wall was studied. Taking advantage of the mechanical properties of SMA, advanced smart joints were designed with the aim of enhancing both the dynamic performance of the single non-structural unit and the global behavior.
Thus, three-dimensional brick-based plastic FE models based on the innovative non-structural system were produced, simulating the evolution of mechanical degradation in aluminium-to-glass and SMA-to-glass connections under high deformations. Equivalent nonlinear links were then calibrated to reproduce the behavior of both the tested and the smart-designed units, and implemented in the thirty- and sixty-storey planar frame FE models. Nonlinear time history analyses (NLTHAs) were performed to quantify the potential of the new system when considered in the lateral resisting frame system (LRFS) of modern high-rise MRFs. Sensitivity to structure height was explored by comparing the responses of the two prototypes. Trends in global and local performance are discussed to show that, if accurately designed, advanced materials in non-structural elements provide new sources of energy dissipation.

Keywords: advanced technologies, glazed curtain walls, non-structural elements, seismic-action reduction, shape memory alloy

Procedia PDF Downloads 315
230 Cardiac Arrest after Cardiac Surgery

Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov

Abstract:

Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external chest compression, which may damage the sternotomy site. Careful use of adrenaline is emphasized because of the potential for rebound hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR for the restoration of cardiac activity was conducted according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and sedation withdrawal. Results. In all patients, chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated immediately upon detection of cardiac arrest.
Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed after 3-5 minutes, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was additionally treated with antiarrhythmic therapy (amiodarone, lidocaine). When necessary, infusion of inotropes and vasopressors was used, and for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to improved CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions requires interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.

Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients

Procedia PDF Downloads 38
229 Analytical Tools for Multi-Residue Analysis of Some Oxygenated Metabolites of PAHs (Hydroxylated, Quinones) in Sediments

Authors: I. Berger, N. Machour, F. Portet-Koltalo

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are toxic and carcinogenic pollutants produced mainly by incomplete combustion processes in industrialized and urbanized areas. After being emitted into the atmosphere, these persistent contaminants are deposited in soils or sediments. Even though persistent, some can be partially degraded (photodegradation, biodegradation, chemical oxidation), leading to oxygenated metabolites (oxy-PAHs) which can be more toxic than their parent PAHs. Oxy-PAHs are measured less often than PAHs in sediments, and this study aims to compare different analytical tools for extracting and quantifying a mixture of four hydroxylated PAHs (OH-PAHs) and four carbonyl PAHs (quinones) in sediments. Methodologies: Two analytical systems, HPLC with on-line UV and fluorescence detectors (HPLC-UV-FLD) and GC coupled to a mass spectrometer (GC-MS), were compared to separate and quantify the oxy-PAHs. Microwave assisted extraction (MAE) was optimized to extract oxy-PAHs from sediments. Results: First, OH-PAHs and quinones were analyzed by HPLC with on-line UV and fluorimetric detectors. OH-PAHs were detected with the sensitive FLD, while the non-fluorescent quinones were detected with UV. The limits of detection (LODs) obtained were in the range (2-3)×10⁻⁴ mg/L for OH-PAHs and (2-3)×10⁻³ mg/L for quinones. Second, although GC-MS is not well suited to the analysis of the thermodegradable OH-PAHs and quinones without a derivatization step, it was used because of the advantages of the detector in terms of identification and of GC in terms of efficiency. Without derivatization, only two of the four quinones were detected in the range 1-10 mg/L (LODs = 0.3-1.2 mg/L), and LODs were not very satisfactory for the four OH-PAHs either (0.18-0.6 mg/L). Two derivatization processes were therefore optimized on the basis of the literature: one for the silylation of OH-PAHs and one for the acetylation of quinones.
Silylation using BSTFA/TMCS 99:1 was enhanced by using a mixture of catalyst solvents (pyridine/ethyl acetate) and finding the appropriate reaction duration (5-60 minutes). Acetylation was optimized at different steps of the process, including the initial volume of compounds to derivatize, the added amount of Zn (0.1-0.25 g), the nature of the derivatization reagent (acetic anhydride, heptafluorobutyric acid…) and the liquid/liquid extraction at the end of the process. After derivatization, LODs were decreased by a factor of 3 for OH-PAHs and by a factor of 4 for quinones, with all the quinones now detected. Thereafter, quinones and OH-PAHs were extracted from spiked sediments using microwave assisted extraction (MAE) followed by GC-MS analysis. Several solvent mixtures of different volumes (10-25 mL) and different extraction temperatures (80-120°C) were tested to obtain the best recovery yields. Satisfactory recoveries could be obtained for quinones (70-96%) and for OH-PAHs (70-104%). Temperature was a critical factor that had to be controlled to avoid oxy-PAH degradation during the MAE extraction process. Conclusion: Even though MAE-GC-MS proved satisfactory for analyzing these oxy-PAHs, the MAE optimization must be continued to obtain the most appropriate extraction solvent mixture, one allowing direct injection into the HPLC-UV-FLD system, which is more sensitive than GC-MS and does not require a lengthy prior derivatization step.
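
As a quick consistency check, the reported improvement factors can be applied to the underivatized GC-MS detection limits quoted above. The sketch below uses only the ranges stated in the abstract; the variable names and the idea of tabulating the results this way are illustrative, not part of the original method:

```python
# LOD improvement after derivatization, using the ranges reported in the
# abstract for underivatized GC-MS analysis. The factors of 3 (OH-PAHs)
# and 4 (quinones) are the improvements reported after derivatization.

lod_underivatized = {            # mg/L, from the abstract
    "OH-PAHs": (0.18, 0.6),
    "quinones": (0.3, 1.2),
}
improvement = {"OH-PAHs": 3, "quinones": 4}

lod_derivatized = {
    compound: tuple(round(v / improvement[compound], 3) for v in rng)
    for compound, rng in lod_underivatized.items()
}
for compound, (lo, hi) in lod_derivatized.items():
    print(f"{compound}: {lo}-{hi} mg/L after derivatization")
```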

Keywords: derivatizations for GC-MS, microwave assisted extraction, on-line HPLC-UV-FLD, oxygenated PAHs, polluted sediments

Procedia PDF Downloads 271
228 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates

Authors: Pedro Llanos, Diego García

Abstract:

Spaceflight is considered the last frontier in terms of science, technology, and engineering. It is also the next frontier in terms of human physiology and performance. Having evolved for more than 200,000 years under Earth's gravity and atmospheric conditions, humans are not physiologically adapted to the environmental stresses posed by spaceflight. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aimed at developing effective countermeasures to ensure a sustainable human presence in space. The physiological baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiological variables in a population of STEM practitioners during an aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual-attitude aerobatic flight indoctrination. Physiological data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual-attitude maneuver, and 10 min after the flights. The ratios (pre-flight/in-flight/post-flight) of the cardiovascular responses were calculated for comparison of inter-individual differences. The resulting tracings depicting the cardiovascular responses of the subjects were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs.
In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% premature ventricular contraction-type); some of these were considered triggered physiological responses to high-G/unusual-attitude training, and some were considered instrument artifacts. Variation events observed in subjects during the +Gz and -Gz maneuvers may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the breathing rate of the subjects, due in part to the varying levels of energy expenditure from the increased muscle work during these aerobatic maneuvers. Noteworthy was the high heterogeneity of the physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual-attitude flight training is recommended to capture pathology underlying dangerous dysrhythmias and improve suborbital flight safety. More research is currently being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.
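
The pre-flight/in-flight/post-flight ratio described above can be sketched as a simple normalization to the pre-flight baseline. The heart-rate values and the exact normalization scheme below are hypothetical illustrations, not data from the study:

```python
# Sketch: normalize in-flight and post-flight cardiovascular responses
# to the pre-flight baseline, as a ratio for inter-individual comparison.

def flight_ratios(pre_hr, inflight_hr, post_hr):
    """Return in-flight and post-flight heart rate relative to baseline."""
    return {
        "in/pre": round(inflight_hr / pre_hr, 2),
        "post/pre": round(post_hr / pre_hr, 2),
    }

# Hypothetical subject: 72 bpm at baseline, 110 bpm during a +Gz
# maneuver, 80 bpm ten minutes after landing.
print(flight_ratios(72, 110, 80))
```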

Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts

Procedia PDF Downloads 109
227 Impact of Transgenic Adipose Derived Stem Cells in the Healing of Spinal Cord Injury of Dogs

Authors: Imdad Ullah Khan, Yongseok Yoon, Kyeung Uk Choi, Kwang Rae Jo, Namyul Kim, Eunbee Lee, Wan Hee Kim, Oh-Kyeong Kweon

Abstract:

The primary spinal cord injury (SCI) causes mechanical damage to neurons and blood vessels. It leads to secondary SCI, which activates multiple pathological pathways that expand neuronal damage at the injury site. Secondary SCI is characterized by vascular disruption, ischemia, excitotoxicity, oxidation, inflammation, and apoptotic cell death. It causes nerve demyelination and disruption of axons, which perpetuate a loss of impulse conduction through the injured spinal cord. It also leads to the production of myelin inhibitory molecules which, together with the concomitant formation of an astroglial scar, impede axonal regeneration. Oxidation and inflammation play the pivotal role in neuronal necrosis. During the early stage of spinal cord injury, reactive oxygen species (ROS) are abundantly expressed owing to defective mitochondrial metabolism and abundant migration of phagocytes (macrophages, neutrophils). ROS cause lipid peroxidation of cell membranes and cell death. Migrating neutrophils, macrophages, and lymphocytes collectively produce pro-inflammatory cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), interleukin-1beta (IL-1β), matrix metalloproteinases, superoxide dismutase, and myeloperoxidases, which synergistically promote neuronal apoptosis. It is therefore crucial to control inflammation and oxidative injury to minimize nerve cell death during secondary spinal cord injury. In response to oxidation and inflammation, heme oxygenase-1 (HO-1) is induced by resident cells to ameliorate the milieu, while neurotrophic factors are induced to promote neuroregeneration. However, the endogenous anti-stress enzyme (HO-1) and neurotrophic factor (BDNF) do not significantly counteract the pathological events of secondary spinal cord injury. Optimal healing can therefore be induced if anti-inflammatory and neurotrophic factors are supplied in greater amounts from an exogenous source.
During the first experiment, inflammation and neuroregeneration were selectively targeted. HO-1-expressing MSCs (HO-1 MSCs) and BDNF-expressing MSCs (BDNF MSCs) were co-transplanted in one group (combination group) of dogs with subacute spinal cord injury, to selectively control the expression of inflammatory cytokines through HO-1 and to induce neuroregeneration through BDNF. We compared the combination group with the HO-1 MSC, BDNF MSC, and GFP MSC groups. The combination group showed significant improvement in functional recovery. It showed increased expression of neural markers and growth-associated protein (GAP-43) compared with the other groups, indicating enhanced neuroregeneration/neural sparing due to reduced expression of pro-inflammatory cytokines such as TNF-α, IL-6, and COX-2 and increased expression of anti-inflammatory markers such as IL-10 and HO-1. Histopathological study revealed less intra-parenchymal fibrosis in the injured spinal cord segment in the combination group than in the other groups. It was thus concluded that selectively targeting inflammation and neuronal growth through the combined use of HO-1 MSCs and BDNF MSCs more favorably promotes healing of the SCI: HO-1 MSCs control the inflammation, which favors BDNF-induced neuroregeneration at the injured spinal cord segment of dogs.

Keywords: HO-1 MSCs, BDNF MSCs, neuroregeneration, inflammation, anti-inflammation, spinal cord injury, dogs

Procedia PDF Downloads 107
226 Genetic Variations of Two Casein Genes among Maghrabi Camels Reared in Egypt

Authors: Othman E. Othman, Amira M. Nowier, Medhat El-Denary

Abstract:

Camels play an important socio-economic role within the pastoral and agricultural systems of the dry and semi-dry zones of Asia and Africa. Camels are economically important animals in Egypt, where they are dual-purpose animals (meat and milk). Analysis of the chemical composition of camel milk shows that total protein content ranges from 2.4% to 5.3% and is divided into casein and whey proteins. The casein fraction constitutes 52% to 89% of total camel milk protein and is divided into four fractions, namely αs1-, αs2-, β- and κ-caseins, which are encoded by four tightly linked genes. Despite the important role of the casein genes and the effects of their genetic polymorphisms on quantitative traits and technological properties of milk, studies on the genetic polymorphism of camel milk genes are still limited. This work therefore focused, using PCR-RFLP and sequencing analysis, on the identification of genetic polymorphisms and SNPs of two casein genes in the Maghrabi camel breed, a dual-purpose camel breed in Egypt. The amplified 488-bp fragments of the camel κ-CN gene were digested with AluI endonuclease. The results showed three different genotypes in the tested animals: CC with three digested fragments at 203, 127 and 120 bp; TT with three digested fragments at 203, 158 and 127 bp; and CT with four digested fragments at 203, 158, 127 and 120 bp. The frequencies of the three detected genotypes were 11.0% for CC, 48.0% for TT and 41.0% for CT. Sequencing analysis of the two alleles revealed a single nucleotide polymorphism (C→T) at position 121 of the amplified fragment, which destroys a restriction site (AG/CT) in allele T and accounts for the presence of the two alleles C and T in the tested animals. The nucleotide sequences of κ-CN alleles C and T were submitted to GenBank under accession numbers KU055605 and KU055606, respectively.
The primers used in this study amplified 942-bp fragments spanning exon 4 to exon 6 of the camel αS1-casein gene. The amplified fragments were digested with two different restriction enzymes, SmlI and AluI. Digestion with SmlI did not reveal any restriction site, whereas digestion with AluI endonuclease revealed two restriction sites (AG^CT) at positions 68^69 and 631^632, yielding three digested fragments with sizes of 68, 563 and 293 bp. The nucleotide sequence of this fragment of the camel αS1-casein gene was submitted to GenBank under accession number KU145820. In conclusion, the genetic characterization of quantitative trait genes associated with production traits such as milk yield and composition is an important step towards the genetic improvement of livestock species through the selection of superior animals based on favorable alleles and genotypes, i.e., marker-assisted selection (MAS).
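
From the κ-CN genotype frequencies reported above (CC 11.0%, CT 41.0%, TT 48.0%), the allele frequencies follow by standard gene counting: each homozygote carries two copies of its allele and each heterozygote one of each. A minimal sketch of this routine calculation (the function name is illustrative):

```python
# Allele frequencies for the kappa-casein C/T SNP, computed by gene
# counting from the genotype frequencies reported in the abstract.

def allele_frequencies(p_cc, p_ct, p_tt):
    """Homozygotes contribute two copies of one allele, heterozygotes one of each."""
    p_c = p_cc + p_ct / 2
    p_t = p_tt + p_ct / 2
    return round(p_c, 3), round(p_t, 3)

p_c, p_t = allele_frequencies(0.11, 0.41, 0.48)
print(f"allele C: {p_c}, allele T: {p_t}")
```

With the reported genotype frequencies, allele T is roughly twice as frequent as allele C in the tested Maghrabi animals.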

Keywords: genetic polymorphism, SNP polymorphism, Maghrabi camels, κ-Casein gene, αS1-Casein gene

Procedia PDF Downloads 591
225 Mental Health Promotion for Children of Mentally Ill Parents in Schools: Assessment and Promotion of Teacher Mental Health Literacy in Order to Promote Child-Related Mental Health (Teacher-MHL)

Authors: Dirk Bruland, Paulo Pinheiro, Ullrich Bauer

Abstract:

Introduction: In Germany, over 3 million children (about one quarter of all students) experience at least one parent with a mental disorder every year. Children of mentally ill parents are at considerably higher risk of developing serious mental health problems. Their different burden patterns and coping attempts often become manifest in school life. In this context, schools can have an important protective function, but can also create risk potentials. Following Jorm, pupil-related teachers' mental health literacy (Teacher-MHL) includes the ability to recognize behavioural change, knowledge of risk factors, the implementation of first-aid interventions, and seeking professional help (teacher as gatekeeper). Although teachers' knowledge and increased awareness of this topic are essential, the literature provides little information on the extent of teachers' abilities. As part of a Germany-wide research consortium on health literacy, this project, launched in March for 3 years, will conduct evidence-based mental health literacy research. The primary objective is to measure Teacher-MHL in the context of pupil-related psychosocial factors at primary and secondary schools (grades 5 & 6), while also focussing on children's social living conditions. Methods: (1) A systematic literature review in different databases to identify papers regarding Teacher-MHL (completed). (2) Based on these results, an interview guide was developed; this research step includes a qualitative pre-study to inductively survey the general profiles of teachers (n=24), and the evaluation will be presented at the conference. (3) These findings will be translated into a quantitative teacher survey (n=2500) to assess the extent of teachers' socio-analytical skills in relation to institutional and individual characteristics. (4) Based on results 1-3, a training program for teachers will be developed.
Results: The review highlights a lack of information on Teacher-MHL and the associated skills, especially in relation to high-risk groups such as children of mentally ill parents. The literature is limited to a few studies only. According to these, teachers are not good at identifying burdened children, and when they do identify such children they do not know how to handle the situation in school. They are not sufficiently trained to deal with these children, and there are great uncertainties in handling the teaching situation in particular. Institutional means and resources are missing as well. Such a mismatch can result in insufficient support and missed opportunities for children at risk. First impressions from the interviews confirm these results and allow greater insight into everyday school life in relation to critical life events in families. Conclusions: For the first time, schools will be addressed as a setting in which children are especially 'accessible' for health promotion measures. Addressing Teacher-MHL gives reason to expect high effectiveness. Targeting professionals' abilities to deal with this high-risk group relieves teachers in handling such situations and strengthens school health promotion. Given that only 10-30% of such high-risk families accept offers of therapy and assistance, this will be the first primary-preventive and health-promoting approach to protect the health of a yet unaffected, but particularly burdened, high-risk group.

Keywords: children of mentally ill parents, health promotion, mental health literacy, school

Procedia PDF Downloads 528
224 Physical Aspects of Shape Memory and Reversibility in Shape Memory Alloys

Authors: Osman Adiguzel

Abstract:

Shape memory alloys belong to a class of smart materials exhibiting a peculiar property called the shape memory effect. This property is characterized by the recoverability of two distinct shapes of the material at different temperatures. These materials are often called smart materials due to their functionality and their capacity to respond to changes in the environment. Shape memory materials are used as shape memory devices in many interdisciplinary fields such as medicine, bioengineering, metallurgy, the building industry, and many engineering fields. The shape memory effect is performed thermally by heating and cooling after initial cooling and stressing treatments, and this behavior is called thermoelasticity. The effect is based on martensitic transformations characterized by changes in the crystal structure of the material, and it is the result of successive thermally induced and stress-induced martensitic transformations. Shape memory alloys exhibit thermoelasticity and superelasticity by means of deformation in the low-temperature product phase and in the high-temperature parent phase region, respectively. Superelasticity is performed by stressing and releasing the material in the parent phase region. The loading and unloading paths differ in the stress-strain diagram, and the cyclic loop reveals energy dissipation. Because strain energy is absorbed over each cycle, these alloys are mainly used as deformation-absorbing materials in the control of civil structures subjected to seismic events, owing to the absorption of strain energy during a disaster or earthquake.
Thermally induced martensitic transformation occurs on cooling, along with lattice twinning through cooperative movements of atoms by means of lattice-invariant shears; ordered parent phase structures turn into twinned martensite structures, and the twinned structures turn into detwinned structures by means of stress-induced martensitic transformation when the material is stressed in the martensitic condition. The thermally induced transformation occurs through cooperative movements of atoms in two opposite <110>-type directions on the {110}-type planes of the austenite matrix, which is the basal plane of martensite. Copper-based alloys exhibit this property in the metastable β-phase region, which has bcc-based structures in the high-temperature parent phase field. Lattice-invariant shear and twinning are not uniform in copper-based ternary alloys and give rise to the formation of complex layered structures, depending on the stacking sequences on the close-packed planes of the ordered parent phase lattice. In the present contribution, X-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based CuAlMn and CuZnAl alloys. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the diffraction angles and intensities of the diffraction peaks change with aging duration at room temperature. In particular, some of the successive peak pairs satisfying a special relation between Miller indices come close to each other. This result points to a rearrangement of atoms in a diffusive manner.
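
The energy dissipated per superelastic cycle, mentioned in the abstract above, is the area enclosed by the loading/unloading loop in the stress-strain diagram. A minimal sketch of that estimate, using the shoelace formula on an idealized loop (the vertex values are hypothetical, not measured data from these alloys):

```python
# Sketch: dissipated energy per superelastic cycle, estimated as the
# area enclosed by the stress-strain hysteresis loop. With stress in
# MPa and strain dimensionless, the area is in MJ/m^3.

def loop_area(points):
    """Shoelace area of a closed polygon given as (strain, stress) vertices."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Idealized loop: loading plateau at 400 MPa, unloading plateau at
# 200 MPa, between 1% and 6% strain (hypothetical values).
cycle = [(0.01, 400.0), (0.06, 400.0), (0.06, 200.0), (0.01, 200.0)]
print(f"dissipated energy: {loop_area(cycle):.1f} MJ/m^3 per cycle")
```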

Keywords: shape memory effect, martensitic transformation, reversibility, superelasticity, twinning, detwinning

Procedia PDF Downloads 172
223 Exploring Behavioural Biases among Indian Investors: A Qualitative Inquiry

Authors: Satish Kumar, Nisha Goyal

Abstract:

In the stock market, individual investors exhibit different kinds of behaviour. Traditional finance is built on the notion of 'homo economicus', which states that humans always make perfectly rational choices to maximize their wealth and minimize risk. That is, traditional finance is concerned with how investors should behave rather than with how actual investors behave. Behavioural finance provides the explanation for this gap. Although finance has been studied for thousands of years, behavioural finance is an emerging field that combines behavioural and psychological aspects with conventional economic and financial theories to explain how emotions and cognitive factors influence investors' behaviour. These emotions and cognitive factors are known as behavioural biases, and because of them investors make irrational investment decisions. Besides the emotional and cognitive factors, the social influence of the media as well as of friends, relatives, and colleagues also affects investment decisions. Psychological factors influence individual investors' investment decision making, but few studies have used qualitative methods to understand these factors. The aim of this study is to explore the behavioural factors or biases that affect individuals' investment decision making. For the purposes of this exploratory study, an in-depth interview method was used because it provides much more exhaustive information and a relaxed atmosphere in which people feel more comfortable providing information. Twenty investment advisors with a minimum of 5 years' experience in securities firms were interviewed. Thematic content analysis was used to analyse the interview transcripts; this process involves analysis of the transcripts, coding, and identification of themes in the data. Based on the analysis, we categorized the advisors' statements into various themes.
Among the major investor tendencies reported by the experts were: reliance on past market returns and volatility; a preference for safe returns; a tendency to believe one is better than others; a tendency to divide money into different accounts/assets; a tendency to hold on to loss-making assets; a preference for investing in familiar securities; a tendency to believe that past events were predictable; reliance on a reference point; reliance on other sources of information; regret over past decisions; greater sensitivity to losses than to gains; reliance on one's own skills; and a tendency to buy rising stocks in the expectation that the rise will continue. The findings of the study revealed 13 biases present in Indian investors: overconfidence bias, disposition effect, familiarity bias, framing effect, anchoring bias, availability bias, self-attribution bias, representativeness, mental accounting, hindsight bias, regret aversion, loss aversion, and herding/media bias. These biases have a negative connotation because they produce a distortion in the calculation of an outcome. They are classified into three categories: cognitive errors, emotional biases, and social interaction. The findings of this study may assist both financial service providers and researchers in understanding the various psychological biases of individual investors in investment decision making. Additionally, individual investors will become aware of these behavioural biases, which will aid them in making sensible and efficient investment decisions.

Keywords: financial advisors, individual investors, investment decisions, psychological biases, qualitative thematic content analysis

Procedia PDF Downloads 155
222 Climate Safe House: A Community Housing Project Tackling Catastrophic Sea Level Rise in Coastal Communities

Authors: Chris Fersterer, Col Fay, Tobias Danielmeier, Kat Achterberg, Scott Willis

Abstract:

New Zealand, an island nation, has an extensive coastline peppered with small communities of iconic buildings known as baches. Post-WWII, these modest buildings were constructed by their owners as retreats; they were generally small and low-cost, often used recycled materials, and often fell below currently acceptable building standards. In the latter part of the 20th century, real estate prices in many of these communities remained low, and these areas became permanent residences for people attracted to this affordable lifestyle choice. The Blueskin Resilient Communities Trust (BRCT) is an organisation that recognises the vulnerability of communities in low-lying settlements, now prone to increased flood threat brought about by climate change and sea level rise. Some of the inhabitants of Blueskin Bay, Otago, NZ have already found their properties to be uninsurable because of the increased frequency of flood events, and property values have slumped accordingly. Territorial authorities also acknowledge this increased risk and have created additional compliance measures for new buildings that are less than 2 m above tidal peaks. Community resilience becomes an additional concern where inhabitants are attracted to a lifestyle associated with a specific location and its people, a lifestyle that cannot be reproduced in a suburban or city context. Traditional models of social housing fail to provide the sense of community connectedness and identity enjoyed by the current residents of Blueskin Bay. BRCT has partnered with the Otago Polytechnic Design School to design a new form of community housing that can respond to this environmental change. It is a longitudinal project incorporating participatory approaches as a means of getting people 'on board', understanding complex systems, and co-developing solutions. In the first phase, the partners are seeking industry support and funding to develop a transportable and fully self-contained housing model that exploits current technologies.
BRCT also hopes that the building will become an educational tool to highlight the climate change issues facing us today. This paper uses the Climate Safe House (CSH) as a case study for education in architectural sustainability through experiential learning, offered as part of the Otago Polytechnic Bachelor of Design. Students engage with the project through research methodologies including site surveys, resident interviews, data sourced from government agencies, and physical modelling. The process involves collaboration across design disciplines, including product and interior design, and also includes connections with industry, both within the education institution and through stakeholder industries introduced by BRCT. This project offers a rich learning environment where students become engaged through project-based learning within a community of practice spanning architecture, construction, energy, and other related fields. The design outcomes are expressed in a series of public exhibitions and forums where community input is sought in a truly participatory process.

Keywords: community resilience, problem based learning, project based learning, case study

Procedia PDF Downloads 269
221 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where gathering patients' emotional behavior is important. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data. The existing labelled emotion datasets are highly subjective, reflecting the perception of the annotator. We address the first issue of feature selection by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficient) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim's cube, which is a 3-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between artificial intelligence and the chemistry of human emotions.
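The cube-mapping step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a three-component PCA (implemented here via SVD in plain NumPy) projects hypothetical embedding vectors, standing in for Emo-CNN's learnt representations, into the kind of 3D coordinates that would be compared against Lovheim's cube.

```python
import numpy as np

# Hypothetical sketch: project learned emotion embeddings (e.g. from a CNN's
# penultimate layer) onto 3 principal components. The embedding values here
# are synthetic placeholders, not real Emo-CNN output.

def pca_3d(embeddings):
    """Project (n_samples, n_features) embeddings onto their top 3 PCs."""
    X = embeddings - embeddings.mean(axis=0)        # centre the data
    # SVD of the centred matrix: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:3].T                             # (n_samples, 3) coordinates

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))     # 100 utterances, 64-d embeddings (assumed)
coords = pca_3d(emb)
print(coords.shape)                  # (100, 3)
```

The three resulting axes carry decreasing variance, so the first component captures the dominant direction of variation in the embeddings.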

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 134
220 Lying in a Sender-Receiver Deception Game: Effects of Gender and Motivation to Deceive

Authors: Eitan Elaad, Yeela Gal-Gonen

Abstract:

Two studies examined gender differences in lying when the truth-telling bias prevailed and when lying and distrust were encouraged. The first study used 156 participants from the community (78 pairs). First, participants completed the Narcissistic Personality Inventory, the Lie- and Truth Ability Assessment Scale (LTAAS), and the Rational-Experiential Inventory. Then, they participated in a deception game where they performed as senders and receivers of true and false communications. Their goal was to retain as many points as possible according to a payoff matrix that specified the reward they would gain for any possible outcome. Results indicated that males in the sender position lied more and were more successful tellers of lies and truths than females. On the other hand, males, as receivers, trusted less than females but were not better at detecting lies and truths. We explained the results by (a) males' high perceived lie-telling ability: we observed that confidence in telling lies guided participants to increase their use of lies, and males' lie-telling confidence corresponded to earlier accounts showing a consistent association between high self-assessed lying ability, reports of frequent lying, and predictions of actual lying in experimental settings; (b) males' narcissistic features: earlier accounts described positive relations between narcissism and reported lying or unethical behavior in everyday life, predictions about the association between narcissism and frequent lying received support in the present study, and males scored higher than females on the narcissism scale; and (c) males' experiential thinking style: males scored higher than females on the experiential thinking style scale, and our hypothesis that the experiential thinking style predicts frequent lying in the deception game was confirmed by the results. The second study used one hundred volunteers (40 females) who underwent the same procedure.
However, the payoff matrix encouraged lying and distrust. Results showed that male participants lied more than females. We found no gender differences in trust. Males and females did not differ in their success of telling and detecting lies and truths. Participants also completed the LTAAS questionnaire. Males assessed their lie-telling ability higher than females, but the ability assessment did not predict lying frequency. A final note. The present design is limited to low stakes. Participants knew that they were participating in a game, and they would not experience any consequences from their deception in the game. Therefore, we advise caution when applying the present results to lying under high stakes.

Keywords: gender, lying, detection of deception, information processing style, self-assessed lying ability

Procedia PDF Downloads 132
219 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help learn how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered by park rangers in the Ugandan rain forest. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable with a large number of missing observations. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous season-based prediction schedules.
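The imputation-plus-ensemble workflow can be sketched as below. This is a simplified stand-in, not the authors' pipeline: mean imputation replaces their multiple-imputation schemes (predictive mean matching, random-forest imputation), and the features and labels are synthetic rather than real poaching records.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: 5 invented features (e.g. terrain, animal density)
# and a binary "attack observed" label, with 20% of entries knocked out
# to mimic the missing-data problem the abstract describes.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan

model = make_pipeline(
    SimpleImputer(strategy="mean"),               # fill missing feature values
    GradientBoostingClassifier(random_state=0),   # stochastic gradient boosting
)
model.fit(X, y)
print(round(model.score(X, y), 2))
```

Swapping `SimpleImputer` for a multiple-imputation scheme and `GradientBoostingClassifier` for a random forest keeps the same pipeline shape, which is the point of the sketch.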

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 271
218 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and of running complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties. The EEG signals are analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state is identified. In EEG signal processing, each EEG signal is received in real time and translated from the time domain to the frequency domain using the Fast Fourier Transform (FFT), allowing the frequency bands in each EEG signal to be observed. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed.
The next stage is to use the selected features to predict emotion in EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. On the edge, EEG-based emotion identification can be employed in applications that can rapidly expand both research and industry adoption.
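The FFT band-power feature step described above can be sketched as follows; the band edges, sampling rate, and the synthetic 10 Hz test signal are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

# Take an EEG window, FFT it, and sum power in the classic frequency bands.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Return total spectral power per band for one EEG window."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

fs = 250                                # Hz, a common EEG sampling rate
t = np.arange(fs * 2) / fs              # 2-second window
eeg = np.sin(2 * np.pi * 10 * t)        # pure 10 Hz tone lands in the alpha band
powers = band_powers(eeg, fs)
print(max(powers, key=powers.get))      # alpha
```

Per-band mean and standard deviation, as mentioned in the abstract, would be computed the same way over successive windows before feeding the feature vector to the KNN classifier.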

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 88
217 Microplastic Concentrations and Fluxes in Urban Compartments: A Systemic Approach at the Scale of the Paris Megacity

Authors: Rachid Dris, Robin Treilles, Max Beaurepaire, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Johnny Gasperi, Bruno Tassin

Abstract:

Microplastic sources and fluxes in urban catchments are only poorly studied. Most often, the approaches taken focus on a single source and only describe the contamination levels and types (shape, size, polymers). In order to gain improved knowledge of microplastic inputs at urban scales, estimating and comparing various fluxes is necessary. The Laboratoire Eau, Environnement et Systèmes Urbains (LEESU), the Laboratoire Eau Environnement (LEE), and the SIAAP (Service public de l'assainissement francilien) initiated several projects to investigate different urban sources and flows of microplastics. A systemic approach is undertaken at the scale of the Paris Megacity, and several compartments are considered, including atmospheric fallout, wastewater treatment plants, runoff, and combined sewer overflows. These investigations are carried out within the Limnoplast and OPUR projects. Atmospheric fallout was sampled with a stainless-steel funnel during consecutive periods ranging from 2 to 3 weeks; both wet and dry periods were considered. Different treatment steps were sampled in two wastewater treatment plants of the SIAAP (Seine-Amont for activated sludge and Seine-Centre for biofiltration), including sludge samples. Microplastics were also investigated in combined sewer overflows as well as in stormwater at the outlet of a suburban catchment (Sucy-en-Brie, France) during four rain events. Samples are treated using hydrogen peroxide digestion (H₂O₂ 30%) in order to digest organic material. Microplastics are then extracted from the samples with a density separation step using NaI (d = 1.6 g.cm⁻³). Samples are filtered on metallic filters with a porosity of 14 µm between steps to separate them from the solutions (H₂O₂ and NaI). The last filtration is carried out on alumina filters. Infrared mapping analysis (using a micro-FTIR with an MCT detector) is performed on each alumina filter.
The resulting maps are analyzed using the microplastic analysis software siMPle, developed by Aalborg University, Denmark, and the Alfred Wegener Institute, Germany. Blanks were systematically carried out to account for sample contamination. This presentation aims at synthesizing the data found in the various projects. In order to carry out a systemic approach and compare the various inputs, all the data were converted into annual microplastic fluxes (number of microplastics per year) and extrapolated to the Parisian agglomeration. PP, PE, and alkyd are the most prevalent polymers found in stormwater samples. Rain intensity and microplastic concentrations did not show any clear correlation. Considering the runoff volumes and the impervious surface area of the studied catchment, a flux of 4×10⁷–9×10⁷ MPs·yr⁻¹·ha⁻¹ was estimated. Samples from wastewater treatment plants and atmospheric fallout are currently being analyzed in order to finalize this assessment. The representativeness of such samplings and the uncertainties related to the extrapolations will be discussed, and gaps in knowledge will be identified. The data provided by such an approach will help to prioritize future research as well as policy efforts.
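As a back-of-the-envelope illustration of how such a per-hectare annual flux is assembled, the numbers below are invented placeholders chosen only to land within the reported range; they are not the study's measured values.

```python
# Annual microplastic flux = concentration in runoff x annual runoff volume,
# normalised by catchment area. All inputs are assumed, for illustration only.

conc_mp_per_l = 10          # microplastics per litre of runoff (assumed)
runoff_m3_per_yr = 50_000   # annual runoff volume for the catchment (assumed)
area_ha = 8                 # impervious catchment area in hectares (assumed)

# 1 m^3 = 1000 L, so convert the volume before multiplying by concentration.
flux = conc_mp_per_l * runoff_m3_per_yr * 1000 / area_ha  # MPs per yr per ha
print(flux)
```

With these placeholder inputs the result falls inside the 4×10⁷–9×10⁷ MPs·yr⁻¹·ha⁻¹ window quoted above, showing the order-of-magnitude arithmetic rather than reproducing the study's estimate.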

Keywords: microplastics, atmosphere, wastewater, urban runoff, Paris megacity, urban waters

Procedia PDF Downloads 166
216 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors

Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang

Abstract:

Under the background of global climate change and rapid modern urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the HiPIMS model is used to simulate the "720" flood in Zhengzhou, the continuous stages of flood resilience are determined, and the urban flood stages are divided. Flood resilience curves under the influence of multiple factors were determined, and urban flood resilience was evaluated by combining the results of these curves. The flood resilience of each urban unit grid was evaluated based on economy, population, road network, hospital distribution, and land use type. First, rainfall data from meteorological stations near Zhengzhou and remote sensing rainfall data from July 17 to 22, 2021 were collected. The Kriging interpolation method was used to expand the rainfall data over Zhengzhou. Based on the rainfall data, the flood processes generated by four rainfall events in Zhengzhou were reproduced. Based on the resulting inundation extent and depth in different areas, the flood process was divided into four stages relative to the once-in-50-years rainfall standard: absorption, resistance, overload, and recovery. At the same time, based on slope, GDP, population, hospital-affected area, land use type, road network density, and other factors, resilience curves were applied to evaluate the urban flood resilience of different regional units, and the differences between the flood processes of the different precipitation events in the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding the once-in-1,000-years level, most areas quickly enter the overload stage. The influence of each factor differs across areas: some areas with ramps or higher terrain have better resilience and restore normal social order faster, that is, their recovery stage needs less time.
Some low-lying areas or special terrain, such as tunnels, enter the overload stage faster in the case of heavy rainfall. As a result, high levels of flood protection, water level warning systems, and faster emergency response are needed in areas with low resilience and high risk. Building density in built-up areas, population in densely populated areas, and road network density all have a certain negative impact on urban flood resistance, while the positive impact of slope on flood resilience is also very obvious. While hospitals have positive effects on medical treatment, they also bring negative effects, such as high population and asset density, when floods occur. A separate comparison of the unit grids containing hospitals shows that resilience within the hospital distribution range is low during floods. Therefore, in addition to improving the flood resistance capacity of cities, reasonable planning can also increase their flood response capacity. Changes in these influencing factors can further improve urban flood resilience, for example by raising design standards, providing temporary water storage areas when floods occur, training emergency personnel for faster response, and adjusting emergency support equipment.
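As a lightweight illustration of the station-to-grid rainfall interpolation step mentioned above, the sketch below uses inverse-distance weighting, a simpler substitute for the Kriging method actually used in the study; the station coordinates and rainfall totals are invented placeholders.

```python
import math

# Inverse-distance weighting: each station's rainfall contributes to the
# target point with weight 1/d^p. This stands in for Kriging, which would
# additionally model spatial correlation via a variogram.

def idw(stations, target, power=2):
    """stations: [(x, y, rain_mm)]; returns interpolated rainfall at target."""
    num = den = 0.0
    for x, y, r in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:
            return r                      # exactly at a station: use its value
        w = 1.0 / d ** power
        num += w * r
        den += w
    return num / den

# Three hypothetical stations (km coordinates, daily rainfall in mm).
stations = [(0, 0, 200.0), (10, 0, 620.0), (0, 10, 180.0)]
print(round(idw(stations, (5, 5)), 1))
```

Because the example target is equidistant from all three stations, the result reduces to their plain average, which makes the weighting easy to verify by hand.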

Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve

Procedia PDF Downloads 29
215 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces the damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial damage in the region, most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and, more recently, Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated.
The resulting damage maps for Charlestown clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017). The results showed good performance of the coupled model in forecast mode when compared to observations. Finally, a nearshore model, XBeach, was nested within the regional ADCIRC-SWAN grid to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, estimating eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), a submerged breakwater, and an artificial surfing reef. It was shown that beach nourishment and coastal banks perform better at mitigating shoreline retreat and coastal erosion.
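The 16% mean-error figure above implies a percentage-error metric averaged over beach transects; a minimal sketch of such a metric is below, with invented volume values rather than the study's Hurricane Irene profile data.

```python
# Mean absolute percentage error between observed and modelled eroded
# volumes per transect. The numbers are placeholders for illustration.

def mape(observed, modelled):
    """Mean absolute percentage error over paired transect values."""
    errors = [abs(m - o) / abs(o) for o, m in zip(observed, modelled)]
    return sum(errors) / len(errors) * 100

obs = [120.0, 95.0, 60.0, 140.0]   # observed eroded volume per transect (assumed)
mod = [100.0, 110.0, 52.0, 150.0]  # modelled values (assumed)
print(round(mape(obs, mod), 1))
```

The study's exact error definition is not stated in the abstract, so this should be read as one plausible way to arrive at a "mean error" percentage across transects.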

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 100
214 Kidnapping of Migrants by Drug Cartels in Mexico as a New Trend in Contemporary Slavery

Authors: Itze Coronel Salomon

Abstract:

The rise of organized crime and violence related to drug cartels in Mexico has created serious challenges for the authorities to provide security to those who live within its borders. However, a significant improvement in security requires absolute respect for fundamental human rights by the authorities. Irregular migrants in Mexico are at serious risk of abuse. Research by Amnesty International, as well as reports of the NHRC (National Human Rights Commission) in Mexico, has documented the major humanitarian crisis faced by thousands of migrants traveling in the shadows. However, the true extent of the problem remains invisible to the general population. The fact that federal and state governments keep no proper record of abuse and do not publish reliable data contributes to ignorance and misinformation, often spread by media that portray migrants as a source of crime rather than as its victims. Discrimination and intolerance against irregular migrants can generate greater hostility and exclusion. According to the modus operandi that has been recorded, criminal organizations and criminal groups linked to drug trafficking structures deprive migrants of their liberty for forced labor and illegal activities related to drug trafficking; some have even been kidnapped to be trained as murderers. If the victims or their families cannot pay the ransom, the kidnapped person may suffer torture, mutilation and amputation of limbs, or death. Migrant women are also victims of sexual abuse during their abduction. In 2011, at least 177 bodies were identified in the largest mass grave found in Mexico, located in the town of San Fernando, in the border state of Tamaulipas; most of the victims were killed by blunt instruments, and most seemed to be immigrants and travelers passing through the country.
With dozens of small graves discovered in northern Mexico, this may suggest a change in tactics among organized crime groups toward different means of obtaining revenue and lower-profile methods of murder. Competition and conflict over territorial control of drug trafficking can provide strong incentives for organized crime groups to send signals of violence to the authorities and rival groups. However, as some Mexican organized crime groups increasingly look to extract income from vulnerable groups, such as Central American migrants, they seem less interested in advertising their work to authorities and others, and more interested in evading detection and confrontation. This paper aims to analyze this new trend of the kidnapping of migrants for forced labor by drug cartels in Mexico as a form of contemporary slavery, and its implications.

Keywords: international law, migration, transnational organized crime

Procedia PDF Downloads 396
213 Direct Assessment of Cellular Immune Responses to Ovalbumin with a Secreted Luciferase Transgenic Reporter Mouse Strain IFNγ-Lucia

Authors: Martyna Chotomska, Aleksandra Studzinska, Marta Lisowska, Justyna Szubert, Aleksandra Tabis, Jacek Bania, Arkadiusz Miazek

Abstract:

Objectives: Assessing antigen-specific T cell responses is of utmost importance for the pre-clinical testing of prototype vaccines against intracellular pathogens and tumor antigens. Two types of in vitro assays are mainly used for this purpose: 1) enzyme-linked immunospot (ELISpot) and 2) intracellular cytokine staining (ICS). Both are time-consuming, relatively expensive, and require manual dexterity. Here, we assess whether straightforward detection of luciferase activity in blood samples of transgenic reporter mice, expressing a secreted Lucia luciferase under the transcriptional control of the IFN-γ promoter, parallels the sensitivity of the IFN-γ ELISpot assay. Methods: The IFN-γ-LUCIA mouse strain, carrying multiple copies of the Lucia luciferase transgene under the transcriptional control of the IFN-γ minimal promoter, was generated by pronuclear injection of linear DNA. The specificity of transgene expression and mobilization was assessed in vitro using transgenic splenocytes exposed to various mitogens. The IFN-γ-LUCIA mice were immunized subcutaneously with 50 mg of ovalbumin (OVA) emulsified in incomplete Freund's adjuvant, three times every two weeks. Blood samples were collected before and five days after each immunization, and luciferase activity was assessed in blood serum. Peripheral blood mononuclear cells were separated and assessed for frequencies of OVA-specific IFN-γ-secreting T cells. Results: We show that in vitro cultured splenocytes of IFN-γ-LUCIA mice respond with 2- and 3-fold increases in secreted luciferase activity to the T cell mitogens concanavalin A and phorbol myristate acetate, respectively, but fail to respond to B cell-stimulating E. coli lipopolysaccharide. Immunization of IFN-γ-LUCIA mice with OVA leads to an over 4-fold increase in luciferase activity in blood serum five days post-immunization, with a barely detectable increase in OVA-specific, IFN-γ-secreting T cells by ELISpot.
Second and third immunizations further increase the luciferase activity and coincidently also increase the frequencies of OVA-specific T cells by ELISpot. Conclusions: We conclude that minimally invasive monitoring of luciferase secretion in the blood serum of IFN-γ-LUCIA mice constitutes a sensitive method for evaluating primary and memory Th1 responses to protein antigens. As such, this method may complement existing methods for rapid immunogenicity assessment of prototype vaccines.

Keywords: ELISpot, immunogenicity, interferon-gamma, reporter mice, vaccines

Procedia PDF Downloads 152
212 Characterization of Fine Particles Emitted by the Inland and Maritime Shipping

Authors: Malika Souada, Juanita Rausch, Benjamin Guinot, Christine Bugajny

Abstract:

The growth of global commerce and tourism makes the shipping sector an important contributor to atmospheric pollution. Both airborne particles and gaseous pollutants have negative impacts on health and climate. This is especially the case in port cities, due to the proximity of the exposed population to the shipping emissions, in addition to the multiple other sources of pollution linked to the surrounding urban activity. The objective of this study is to determine the concentrations of fine particles (immission), specifically PM2.5, PM1, PM0.3, BC, and sulphates, in a context where maritime passenger traffic plays an important role (the port area of central Bordeaux). The methodology is based on high temporal resolution measurements of pollutants, correlated with meteorological and ship movement data. Particles and gaseous pollutants from seven maritime passenger ships were sampled and analysed during the docking, manoeuvring, and berthing phases. The particle mass measurements were supplemented by measurements of the number concentration of ultrafine particles (<300 nm diameter). The measurement points were chosen by taking into account the local meteorological conditions and by pre-modelling the dispersion of the smoke plumes. The results of the measurement campaign carried out during the summer of 2021 in the port of Bordeaux show that concentrations of particles emitted by ships were detected only occasionally and briefly. Short-lived peaks of ultrafine particle number concentration (#/m³) and BC (ng/m³) were measured during the docking phases of the ships, but the concentrations returned to their background levels within minutes. However, it appears that the docking phases do not significantly affect the air quality of central Bordeaux in terms of mass concentration. Additionally, no clear differences in PM2.5 concentrations between the periods with and without ships at berth were observed.
The urban background pollution seems to be mainly dominated by exhaust and non-exhaust road traffic emissions. However, temporal high-resolution measurements suggest a probable emission of gaseous precursors responsible for the formation of secondary aerosols related to the ship activities. This was evidenced by the high values of the PM1/BC and PN/BC ratios, tracers of non-primary particle formation, during periods of ship berthing vs. periods without ships at berth. The research findings from this study provide robust support for port area air quality assessment and source apportionment.
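The PM1/BC ratio diagnostic described above can be sketched as a simple comparison between berthing and ship-free periods; all concentration values below are invented placeholders, not the campaign's measurements.

```python
# A high PM1/BC ratio suggests particle mass beyond primary combustion
# emissions (BC), i.e. secondary aerosol formation. Here we compare the
# mean ratio across two hypothetical sets of measurement periods.

def mean_ratio(pm1, bc):
    """Mean of the per-period PM1/BC ratios."""
    return sum(p / b for p, b in zip(pm1, bc)) / len(pm1)

pm1_berth, bc_berth = [8.0, 9.5, 7.2], [0.9, 1.0, 0.8]  # ug/m3 (assumed)
pm1_free,  bc_free  = [6.0, 5.5, 6.4], [1.1, 1.2, 1.0]  # ug/m3 (assumed)

print(mean_ratio(pm1_berth, bc_berth) > mean_ratio(pm1_free, bc_free))  # True
```

The same comparison applies to the PN/BC ratio; in the study, elevated ratios during berthing periods are what points to gaseous precursors from ship activity forming secondary aerosols.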

Keywords: characterization, fine particulate matter, harbour air quality, shipping impacts

Procedia PDF Downloads 86
211 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany

Authors: Karin Schakib-Ekbatan, Annette Roser

Abstract:

According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. To this end, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is to be refurbished on the basis of a socially compatible, energy-saving, technically innovative modernization concept. The building ensemble, with about 200 apartments, is part of a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a scientific accompanying-research programme was established for cross-project analyses of findings and results in order to identify further research needs and trends. The project is thus characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise, based on a participatory understanding of research that involves the tenants at an early stage. The research focuses on gaining insight into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied, based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. Wall heating provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health benefits, wall heating systems offer energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (a perceived room temperature of around 18 °C or 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g., infrared heating). The new heating system would, however, constrain the furnishing of the walls, since the wall surface could not be covered extensively with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households. The interviews were analyzed with MAXQDA. The main issue raised in the interviews was the fear of reduced self-efficacy within one’s own walls (not having sufficient individual control over the room temperature, or being very limited in furnishing). Other issues concerned the impact the construction works might have on daily life, such as noise or dirt. Despite a generally positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project and expressed a great need for information events. The results of the interviews will be used in project-internal discussions of technical and psychological aspects of the refurbishment strategy, in order to design accompanying workshops with the tenants and to prepare a written survey involving all households of the neighborhood.

Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings

Procedia PDF Downloads 111