Search results for: myofascial trigger point
114 Exploring Closed-Loop Business Systems That Eliminate Solid Waste in the Textile and Fashion Industry: A Systematic Literature Review Covering Developments in the Last Decade
Authors: Bukra Kalayci, Geraldine Brennan
Abstract:
Introduction: Over the last decade, a proliferation of literature on the textile and fashion business in the context of sustainable production and consumption has emerged. However, the economic and environmental benefits of solid waste recovery have not been comprehensively researched, so end-of-life and end-of-use textile waste management remains a gap. The reuse and recycling principles of the circular economy need to be developed for solid textile waste in order to close the disposal stage of the textile supply chain. Environmental problems arise from the over-production and over-consumption of textile products and, together with a growing population and fast-fashion culture, the share of solid textile waste in municipal waste is increasing. Focusing on the post-consumer textile waste literature, this research explores the opportunities, obstacles and enablers or success factors associated with closed-loop textile business systems. Methodology: A systematic literature review was conducted in order to identify best practices and gaps in the existing body of knowledge related to closed-loop post-consumer textile waste initiatives over the last decade. Selected keywords, namely 'cradle-to-cradle', 'circular* economy*', 'closed-loop*', 'end-of-life*', 'reverse* logistic*', 'take-back*', 'remanufacture*' and 'upcycle*', were combined (AND) with 'fashion*', 'garment*', 'textile*', 'apparel*' and 'clothing*', and the time frame of the review was set from 2005 to 2017. In order to obtain broad coverage, the Web of Knowledge and Science Direct databases were used, and peer-reviewed journal articles were chosen. The keyword search identified 299 papers, which were further refined into 54 relevant papers that form the basis of the in-depth thematic analysis. Preliminary findings: A key finding was that the existing literature is predominantly conceptual rather than applied or empirical work. Moreover, the enablers or success factors, obstacles and opportunities for implementing closed-loop systems in the textile industry were not clearly articulated, and the following considerations were also largely overlooked in the literature. While the circular economy envisages multiple cycles of discarded products, components or materials, most research to date has tended to focus on a single cycle. Thus, calculations of the environmental and economic benefits of closed-loop systems are limited to one cycle, which does not adequately explore the feasibility or potential benefits of multiple cycles. Additionally, the time textile products spend between point of sale and end-of-use/end-of-life return is a crucial factor. Despite past efforts to study closed-loop textile systems, a clear gap in the literature is the lack of an evaluation framework that enables manufacturers to clarify the reusability potential of textile products through consideration of indicators related to: quality, design, lifetime, length of time between manufacture and product return, volume of collected disposed products, material properties, and brand segment considerations (e.g., fast fashion versus luxury brands).
Keywords: circular fashion, closed loop business, product service systems, solid textile waste elimination
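The Boolean search strategy described in the methodology can be reproduced programmatically. The sketch below (Python) generates the pairwise keyword combinations; the exact query syntax accepted by Web of Knowledge and Science Direct is an assumption, not something stated in the abstract:

```python
from itertools import product

# Hypothetical reconstruction of the review's Boolean search strings.
concept_terms = ["cradle-to-cradle", "circular* economy*", "closed-loop*",
                 "end-of-life*", "reverse* logistic*", "take-back*",
                 "remanufacture*", "upcycle*"]
domain_terms = ["fashion*", "garment*", "textile*", "apparel*", "clothing*"]

# One query per (concept, domain) pair, mirroring the AND combinations above.
queries = [f'"{c}" AND "{d}"' for c, d in product(concept_terms, domain_terms)]
print(len(queries))   # 8 concepts x 5 domains = 40 pairwise queries
print(queries[0])     # "cradle-to-cradle" AND "fashion*"
```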
113 Guard@Lis: Birdwatching Augmented Reality Mobile Application
Authors: Jose A. C. Venancio, Alexandrino J. M. Goncalves, Anabela Marto, Nuno C. S. Rodrigues, Rita M. T. Ascenso
Abstract:
Nowadays, it is common to find people who want to get away from the everyday routine, seeking well-being and pleasant emotions. Trying to disconnect from their usual places of work and residence, they pursue different places, such as tourist destinations, aiming to have unexpected experiences. To make this exploration easier, cities and tourism agencies seek new opportunities and solutions, creating routes with diverse cultural landmarks, including natural landscapes and historic buildings. These offers frequently also aim at preserving the local heritage. In nature and wildlife, birdwatching is an activity that has been growing, both in cities and in the countryside. This activity seeks to find, observe and identify the diversity of birds that live permanently or temporarily in these places, and it is usually supported by birdwatching guides. Leiria (Portugal) is a well-known city with several historical and natural landmarks, like the Lis river and the castle where King D. Dinis lived in the 13th century. Along the Lis river, a conservation process was carried out and a pedestrian route was created (Polis project). This is considered an excellent spot for birdwatching, especially for the grey heron (Ardea cinerea) and the kingfisher (Alcedo atthis). There is also a route through the city, from the riverside to the castle, which hosts a characteristic variety of species, such as the barn swallow (Hirundo rustica), a migratory species present only in certain seasons of the year. Birdwatching is sometimes a difficult task, since it is not always possible to see all the bird species that inhabit a given place. For this reason, a technological solution was devised to ease this activity. This project aims to encourage people to learn about the various species of birds that live along the Lis river and to promote the preservation of nature in a conscious way. This work is being conducted in collaboration with the Leiria Municipal Council and the Environmental Interpretation Centre. It intends to show the majesty of the Lis river, a place visited daily by many people, such as children and families, who use it for didactic and recreational activities. We are developing a multi-platform mobile application (Guard@Lis) that allows bird species to be observed along a given route, using representative digital 3D models through the integration of augmented reality technologies. Guard@Lis displays a route with points of interest for birdwatching and a list of species for each point of interest, along with scientific information, images and sounds for every species. For some birds, to ensure their observation, the user can watch them in loco, in their real and natural environment, on a mobile device by means of augmented reality, giving the sensation of the presence of these birds even if they cannot be seen in that place at that moment. The augmented reality feature is being developed with the Vuforia SDK, using a hybrid approach to the recognition and tracking processes that combines markers and geolocation techniques. The application proposes routes and notifies users with alerts when augmented reality bird models can be viewed. The final Guard@Lis prototype will be tested by volunteers in situ.
Keywords: augmented reality, birdwatching route, mobile application, nature tourism, watch birds using augmented reality
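The abstract does not detail how the marker and geolocation modes are combined. As a rough illustration of one plausible hybrid scheme (names such as `ar_mode` and the 50 m alert radius are invented for this sketch), the logic below falls back to a GPS proximity alert whenever no marker is detected:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

ALERT_RADIUS_M = 50.0  # assumed trigger distance, not stated in the abstract

def ar_mode(marker_visible: bool, user_fix, poi_fix) -> str:
    """Pick the AR behaviour for the current frame."""
    if marker_visible:                    # marker-based tracking wins when available
        return "anchor 3D bird model to marker"
    d = haversine_m(*user_fix, *poi_fix)  # otherwise fall back to geolocation
    if d <= ALERT_RADIUS_M:
        return "notify user: AR bird viewable at this point of interest"
    return "keep tracking"

print(ar_mode(False, (39.7436, -8.8071), (39.7438, -8.8070)))
```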
112 Foucault and Governmentality: International Organizations and State Power
Authors: Sara Dragisic
Abstract:
Using the theoretical analysis of the birth of biopolitics that Foucault performed through the history of liberalism and neoliberalism, this paper tries to show how, precisely by problematizing the role of international institutions, the model of governance differs from previous ways of objectifying body and life. Are the state and its mechanisms still a Leviathan to fight against, or can the state even be the driver of resistance against the proponents of modern governance and biopolitical power? Do paradigmatic examples of biopolitics still appear through sovereignty and (international) law, or is it precisely this sphere that shows a significant dose of incompetence and powerlessness in relation not only to the economic sphere (Foucault's critique of neoliberalism) but also to the new politics of freedom? Have the struggle for freedom and human rights, as well as the war on terrorism, opened a new spectrum of biopolitical processes, manifested precisely through new international institutions and humanitarian discourse? We will try to answer these questions in the following way. On the one hand, we will show that the views of authors such as Agamben and Hardt and Negri, for whom the state and sovereignty are enemies to be defeated or overcome, fail to see how such attempts could translate into the politicization of life, as happens in many examples through the doctrine of liberal interventionism and humanitarianism. On the other hand, we will point out that it is precisely the humanitarian discourse and the defense of the right to intervention that can be the incentive and basis for the politicization of the category of life and lead to the selective application of human rights. Zizek's example of the killing of United Nations workers and doctors in a village during the Vietnam War, who were targeted even before police or soldiers because they were seen as a powerful instrument of American imperialism (precisely since they were sincerely trying to help the population), will be the focus of this part of the analysis. We will ask whether such an interpretation is a kind of liquidation of the extreme left of the political (Laclau), or whether it can at least partly explain the need to review the functioning of international organizations, ranging from those dealing with humanitarian aid (and humanitarian military interventions) to those dealing with the protection and security of the population, primarily from growing terrorism. Based on the above examples, we will also explain how the discourse of terrorism itself plays a dual role: it can appear as a tool of liberal biopolitics, although, more superficially, it mostly appears as an enemy that wants to destroy the liberal system and its values. This brings us to the basic problem that this paper tackles: do the mechanisms of the institutional struggle for human rights and freedoms, often seen as opposed to the security mechanisms of the state, serve the governance of citizens in such a way that the latter themselves participate in producing biopolitical governmental practices? Is freedom today 'nothing but the correlative development of apparatuses of security' (Foucault)? Or we can continue this line of Foucault's argumentation with the important question of what precisely reflects today's change in the rationality of governance, in which society is transformed from a passive object into a subject of its own production.
Finally, in order to understand the techniques of biopolitical governance in modern civil society, it is necessary to pay attention to the status of international organizations, which seem to have become a significant site for the implementation of global governance. In this sense, the power of sovereignty may turn out to be insufficient against a security policy that can go hand in hand with policies of freedom through neoliberal governmental techniques.
Keywords: neoliberalism, Foucault, sovereignty, biopolitics, international organizations, NGOs, Agamben, Hardt & Negri, Zizek, security, state power
111 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health
Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik
Abstract:
Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can drive the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological and other indicators of the body's state. These disorders are unstable, reversible and indicative of the body's reactions, providing an opportunity to judge objectively the internal structure of the body's adaptive reactions at the level of individual organs and systems. For the body to show a stable response to the chronic effects of unfavorable environmental factors of low intensity (compared to factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot reliably support the main conclusions of a study. A technique is needed that reduces methodological errors and combines mathematical logic, statistical methods and a medical point of view, which ultimately affects the obtained results and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, that is, the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined from the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the influence share (R2) of the main factor (argument) per indicator (function), as a percentage, were calculated. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally different results were obtained when the data were processed considering the «lag time»: namely, a pronounced correlation was revealed after the two databases (ecology-morbidity) were shifted. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest share of the influencing factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» differs qualitatively from the traditional one (without the «lag time»). The results differ significantly and lend themselves better to a logical explanation of the obtained dependencies.
The method allows the quantitative and qualitative dependencies within the «environment-public health» system to be presented in a different way.
Keywords: ecology, morbidity, population, lag time
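The lag-selection step described in this abstract can be illustrated with a short sketch (Python, with synthetic series standing in for the Kazhydromet and morbidity data): the lag is taken as the shift that maximizes the cross-correlation between exposure and morbidity.

```python
import numpy as np

def best_lag(exposure, morbidity, max_lag=6):
    """Return (lag, r) maximizing the correlation between exposure[t]
    and morbidity[t + lag], mirroring the «argument-function» pairing."""
    best = (0, -np.inf)
    for lag in range(max_lag + 1):
        x = exposure[: len(exposure) - lag] if lag else exposure
        y = morbidity[lag:]
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

rng = np.random.default_rng(0)
dust = rng.normal(10, 2, 30)                          # synthetic annual dust levels
illness = np.roll(dust, 4) + rng.normal(0, 0.5, 30)   # effect appears 4 years later
print(best_lag(dust[4:], illness[4:]))                # recovers a lag close to 4
```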
110 Review of Urbanization Pattern in Kabul City
Authors: Muhammad Hanif Amiri, Edris Sadeqy, Ahmad Freed Osman
Abstract:
The International Conference on Architectural Engineering and Skyscraper (ICAES 2016), held on January 18-19, 2016, aims to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration. We are therefore very keen to participate and share our issues in order to get valuable feedback from the conference participants. Urbanization is a controversial issue all around the world. Substandard and unplanned urbanization has many implications for the social, cultural and economic situation of the population. Unplanned and illegal construction has become a critical issue in Afghanistan, particularly in Kabul city. In addition, the lack of municipal bylaws, poor municipal governance, the lack of development policies and strategies, budget limitations, the low professional capacity of the private sector involved in development, and poor coordination among stakeholders have made the problem more complicated. The main purpose of this research paper is to review the urbanization pattern of Kabul city, find improvement solutions, and evaluate the increase in population density, which has caused vast illegal and unplanned development that is converting Kabul city into a slum area as a whole. The Kabul city Master Plan was reviewed in 1978 and revised for a planned population of 2 million. In 2001, the interim administration took over, and the city received an influx of returnees from neighboring countries and other provinces of Afghanistan, mostly in search of employment opportunities, security and a better quality of life; Kabul therefore faced extraordinary population growth. According to the Central Statistics Organization of Afghanistan, the population of Kabul was estimated at approximately 5 million (2015). A new Master Plan was prepared in 2009, but the existing challenges have not yet been resolved. Moreover, 70% of Kabul's population lives in unplanned (slum) areas and faces a shortage of drinking water, the absence of sewerage and drainage networks, the absence of a proper management system for solid waste collection, a lack of public transportation and traffic management, environmental degradation, and a shortage of social infrastructure. Although there are many problems in Kabul city, the development of 22 townships is still in progress, attracting even more population. The research is completed with a detailed analysis of four main issues, namely the elimination of duplicated administrations, the development of regions, the rehabilitation and improvement of infrastructure, and the prevention of new township establishment in Kabul's central core, in order to mitigate the problems and constraints and provide a point of departure for an objective-based future development of Kabul city. The conclusion reflects stage-wise development in light of the prepared policy and strategies, the development of a procedure for the improvement of infrastructure, the conduct of a preliminary EIA, the definition of the scope of stakeholders' contributions, and the preparation of a project list for initial development. In conclusion, this paper will help the transformation of Kabul city.
Keywords: development of regions, illegal construction, population density, urbanization pattern
109 Flexural Response of Sandwiches with Micro Lattice Cores Manufactured via Selective Laser Sintering
Authors: Emre Kara, Ali Kurşun, Halil Aykul
Abstract:
Lightweight sandwiches made with various core materials, such as foams, honeycombs and lattice structures, which have a high energy-absorbing capacity and a high strength-to-weight ratio, are suitable for several applications in the transport industry (automotive, aerospace, shipbuilding), where savings in fuel consumption, increased load-carrying capacity, vehicle safety and decreased emission of harmful gases are very important aspects. While sandwich structures with foams and honeycombs have been applied for many years, there is growing interest in a new generation of sandwiches with micro lattice cores. Various production methods have been created to produce these core structures as technology has developed. One of these production technologies is an additive manufacturing technique called selective laser sintering/melting (SLS/SLM), which is very popular nowadays because it saves production time and allows the production of complex topologies. Static bending and dynamic low-velocity impact tests of sandwiches with carbon fiber/epoxy skins and micro lattice cores produced via SLS/SLM have been reported in only a few studies. The goal of this investigation was to analyze the flexural response of sandwiches consisting of glass fiber reinforced plastic (GFRP) skins and micro lattice cores manufactured via SLS under thermo-mechanical loads, comparing the results in terms of peak load and absorbed energy values with respect to the effects of core cell size, temperature and support span length. The micro lattice cores were manufactured using SLS technology, which builds the product from a drawing made in 3D computer-aided design (CAD) software. The lattice cores, designed as a body-centered cubic (BCC) model with two different cell sizes (d = 2 and 2.5 mm) and a strut diameter of 0.3 mm, were produced from titanium alloy (Ti6Al4V) powder. During the production of all the core materials, the same production parameters, such as laser power, laser beam diameter and building direction, were kept constant. The vacuum infusion (VI) method was used to produce the skin materials, made of [0°/90°] woven S-glass prepreg laminates. The core and skins were combined under VI. Three-point bending tests were carried out on a servo-hydraulic test machine with different support span distances (L = 30, 45, and 60 mm) and various temperatures (T = 23, 40 and 60 °C) in order to analyze the influence of support span and temperature. The failure mode of the collapsed sandwiches was investigated using 3D computed tomography (CT), which allows a three-dimensional reconstruction of the analyzed object. The main results of the bending tests are load-deflection curves, peak force and absorbed energy values. The results were compared according to the effects of cell size, support span and temperature. The obtained results are of particular importance for applications that require lightweight structures with a high capacity for energy dissipation, such as the transport industry, where problems of collision and crash have increased in recent years.
Keywords: light-weight sandwich structures, micro lattice cores, selective laser sintering, transport application
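Peak force and absorbed energy are read off the load-deflection curve. As a simple illustration (not the authors' actual post-processing, and with a toy curve in place of the measured data), the energy absorbed up to a given deflection is the area under the curve:

```python
import numpy as np

# Synthetic load-deflection data standing in for a three-point bending record
deflection_mm = np.linspace(0.0, 6.0, 61)                      # crosshead travel
load_n = 800.0 * deflection_mm * np.exp(-0.4 * deflection_mm)  # toy curve shape

peak_load_n = load_n.max()
# Trapezoidal area under the curve: N*mm, converted to joules
absorbed_energy_j = float(
    np.sum(0.5 * (load_n[1:] + load_n[:-1]) * np.diff(deflection_mm))
) / 1000.0

print(f"peak load: {peak_load_n:.0f} N, absorbed energy: {absorbed_energy_j:.2f} J")
```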
108 Concepts of Technologies Based on Smart Materials to Improve Aircraft Aerodynamic Performance
Authors: Krzysztof Skiba, Zbigniew Czyz, Ksenia Siadkowska, Piotr Borowiec
Abstract:
The article presents selected concepts of technologies that use intelligent materials in aircraft in order to improve their performance. Most research focuses on solutions that improve the performance of fixed-wing aircraft, due to their previously dominant market share. Recently, however, rotorcraft development has been intensive, encompassing not only helicopters but also gyroplanes and unmanned aerial vehicles using rotors and vertical take-off and landing. There are many different technologies for changing the shape of an aircraft or its elements. Piezoelectric, deformable actuator systems can be applied for the active control of vibration damping in the aircraft tail structure. Wires made of shape memory alloys (SMA) could be used instead of hydraulic cylinders in the rear part of the aircraft flap. An aircraft made of intelligent materials (piezoelectrics and SMA) is one of the NASA projects, offering the possibility of changing the wing shape coefficient by 200%, the wing surface by 50%, and wing deflections by 20 degrees. Active surfaces made of shape memory alloys could be used to control vortices in the flowing stream. An intelligent control system for helicopter blades is a method for the active adaptation of blades to flight conditions and the reduction of vibrations caused by the rotor. Shape memory alloys are capable of recovering their pre-programmed shapes. They are divided into three groups: nickel-titanium-based, copper-based, and ferromagnetic. Due to its strongest shape memory effect and best vibration-damping ability, the Ni-Ti alloy is the most commercially important. The subject of this work was to prepare a conceptual design of a rotor blade with SMA actuators. The scope of work included the 3D design of the supporting rotor blade, the 3D design of beams enabling the geometry to be changed by changing the angle of rotation, and FEM (Finite Element Method) analysis. The FEM analysis was performed using NX 12 software in the Pre/Post module, which includes extended finite element modeling tools and visualizations of the obtained results. Calculations are presented for two versions of the blade girders. For the FEM analysis, three types of materials were used for comparison purposes (ABS, aluminium alloy 7057, steel C45). The analysis of internal stresses and extreme displacements of the crossbar edges was carried out. In the girder no. 1 solution, the internal stresses in all materials were close to the yield point. For the girder no. 2 solution, the stress values decreased by about 45%. The displacement analysis found that the best solution was the ABS girder no. 1: a displacement of about 0.5 mm was obtained, which turned the crossbars (upper and lower) by an angle of 3.59 degrees, the largest deviation of all the tests. The smallest deviation was obtained for girder no. 2 made of steel; the displacement of the second girder solution was approximately 30% lower than that of the first. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
Keywords: aircraft, helicopters, shape memory alloy, SMA, smart material, unmanned aerial vehicle, UAV
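The reported displacement-to-rotation conversion can be sanity-checked with basic trigonometry. The lever arm below is a back-calculated assumption (the abstract does not state it), chosen so that a 0.5 mm displacement gives roughly the reported 3.59-degree rotation:

```python
import math

displacement_mm = 0.5
lever_arm_mm = 8.0   # assumed distance from the hinge line to the actuation point

# Small-angle rotation of the crossbar induced by the tip displacement
angle_deg = math.degrees(math.atan(displacement_mm / lever_arm_mm))
print(f"crossbar rotation ~ {angle_deg:.2f} degrees")  # ~3.58, close to 3.59
```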
107 Revolutionizing Manufacturing: Embracing Additive Manufacturing with Eggshell Polylactide (PLA) Polymer
Authors: Choy Sonny Yip Hong
Abstract:
This abstract presents an exploration into the creation of a sustainable bio-polymer compound for additive manufacturing, specifically 3D printing, with a focus on eggshells and polylactide (PLA) polymer. The project initially conducted experiments using a variety of food by-products to create bio-polymers, and promising results were obtained when combining eggshells with PLA polymer. The research involved precise measurements, drying the PLA to remove moisture, and the use of a filament-making machine to produce 3D-printable filaments. The initial mixing of the two materials involved heating them just above the melting point. To make the compound 3D-printable, the research focused on finding the optimal formulation and production process. The process started with precise measurements of the PLA and eggshell materials. The PLA was placed in a heating oven to remove any absorbed moisture. Handmade test samples were created to guide the planning of the 3D-printed versions. Scrap PLA was recycled and ground into a powdered state. The drying process involved gradual moisture evaporation, which required several hours. The PLA and eggshell materials were then fed into the hopper of a filament-making machine. The machine's four heating elements controlled the temperature of the melted compound mixture, allowing for optimal filament production with accurate and consistent thickness. The filament-making machine extruded the compound, producing filament that could be wound on a wheel. During the testing phase, trials were conducted with different percentages of eggshell in the PLA mixture, including a high percentage (20%); however, poor extrusion results were observed for mixtures with a high eggshell percentage. Samples were created, and continuous improvement and optimization were pursued to achieve filaments with good performance. To test the 3D-printability of the DIY filament, a 3D printer was set up to print the filament smoothly and consistently. Samples were printed and mechanically tested using a universal testing machine to determine their mechanical properties, allowing the filament's performance and suitability for additive manufacturing applications to be evaluated. In conclusion, the project explores the creation of a sustainable bio-polymer compound using eggshells and PLA polymer for 3D printing. Its findings contribute to the advancement of additive manufacturing, offering opportunities for design innovation, carbon footprint reduction, supply chain optimization, and collaborative potential. The use of eggshell PLA polymer in additive manufacturing has the potential to revolutionize the manufacturing industry, providing a sustainable alternative and enabling the production of intricate and customized products.
Keywords: additive manufacturing, 3D printing, eggshell PLA polymer, design innovation, carbon footprint reduction, supply chain optimization, collaborative potential
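A quick sketch of the batch arithmetic behind the trial mixtures follows (illustrative only; the 500 g batch size is an assumption, not a value from the abstract):

```python
def batch_masses(total_g: float, eggshell_wt_pct: float) -> tuple[float, float]:
    """Split a compounding batch into eggshell and PLA masses
    for a target eggshell weight percentage."""
    eggshell_g = total_g * eggshell_wt_pct / 100.0
    return eggshell_g, total_g - eggshell_g

for pct in (5, 10, 20):  # 20% was the high-loading trial that extruded poorly
    shell, pla = batch_masses(500.0, pct)  # assumed 500 g batch
    print(f"{pct:>2}% eggshell: {shell:5.1f} g eggshell + {pla:5.1f} g PLA")
```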
106 Chatbots vs. Websites: A Comparative Analysis Measuring User Experience and Emotions in Mobile Commerce
Authors: Stephan Boehm, Julia Engel, Judith Eisser
Abstract:
During the last decade, communication on the Internet has transformed from a broadcast to a conversational model by supporting more interactive features, enabling user-generated content and introducing social media networks. Another important trend with a significant impact on electronic commerce is a massive usage shift from desktop to mobile devices. However, the presentation of product- or service-related information accumulated on websites, micro pages or portals often remains the pivot and focal point of a customer journey. A more recent change in user behavior, especially among younger user groups and in Asia, is the increasing adoption of messaging applications supporting almost real-time but asynchronous communication on mobile devices. Mobile apps of this type can not only provide an alternative to traditional one-to-one communication on mobile devices, such as voice calls or the short messaging service; they can also be used in mobile commerce as a new marketing and sales channel, e.g., for product promotions and direct marketing activities. This requires a new way of customer interaction compared to traditional mobile commerce activities and the functionalities provided by mobile websites. One option better aligned with the customer interaction in messaging apps is so-called chatbots. Chatbots are conversational programs or dialog systems simulating a text- or voice-based human interaction. They can be introduced in mobile messaging and social media apps using rule-based or artificial intelligence-based implementations. In this context, a comparative analysis is conducted to examine the impact of using traditional websites or chatbots for promoting a product in an impulse purchase situation. The aim of this study is to measure the impact on the customers' user experience and emotions. The study is based on a random sample of about 60 smartphone users aged 20 to 30. Participants are randomly assigned to two groups and take part in a traditional website-based or an innovative chatbot-based mobile commerce scenario. The chatbot-based scenario is implemented using a Wizard-of-Oz experimental approach, for reasons of simplicity and to allow for more flexibility when simulating simple rule-based and more advanced artificial intelligence-based chatbot setups. A specific set of metrics is defined to measure and compare the user experience in both scenarios. It can be assumed that users get more emotionally involved when interacting with a system simulating human communication behavior than when browsing a mobile commerce website. For this reason, innovative face-tracking and analysis technology is used to derive feedback on the emotional status of the study participants while they interact with the website or the chatbot. This study is a work in progress. The results will provide first insights into the effects of chatbot usage on user experience and emotions in mobile commerce environments. Based on the study findings, basic requirements for a user-centered design and implementation of chatbot solutions for mobile commerce can be derived. Moreover, first indications of situations where chatbots might be favorable compared to traditional website-based mobile commerce can be identified.
Keywords: chatbots, emotions, mobile commerce, user experience, Wizard-of-Oz prototyping
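The abstract contrasts rule-based and AI-based chatbot setups. As a minimal sketch of the rule-based end of that spectrum (all intents and replies are invented for illustration, not taken from the study), a keyword-matching dialog loop might look like this:

```python
# Minimal rule-based chatbot sketch; intents and replies are illustrative only.
RULES = {
    ("price", "cost", "how much"): "The promoted item costs 49.90 EUR today only.",
    ("ship", "delivery"): "Free delivery within 2-3 working days.",
    ("buy", "order"): "Great! I can reserve one for you - just reply YES.",
}
FALLBACK = "Sorry, I didn't get that. Ask me about price, delivery, or ordering."

def reply(message: str) -> str:
    """Return the first rule whose keywords match the user message."""
    text = message.lower()
    for keywords, answer in RULES.items():
        if any(k in text for k in keywords):
            return answer
    return FALLBACK

print(reply("How much is it?"))   # matches the price rule
print(reply("Can you ship it?"))  # matches the delivery rule
```

A Wizard-of-Oz setup replaces `reply` with a hidden human operator, which is what makes it easy to simulate more advanced AI behavior without implementing it.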
105 In Vitro Intestine Tissue Model to Study the Impact of Plastic Particles
Authors: Ashleigh Williams
Abstract:
The omnipresence and ecological accumulation of micro- and nanoplastics (MNLPs) is evident from recent environmental impact studies. For example, in 2014 it was estimated that at least 52.3 trillion plastic microparticles are floating at sea, and scientists have even found plastics in remote Arctic ice and snow (5,6). Plastics have even found their way into precipitation: more than 1,000 tons of microplastic rained down on the Western United States in 2020. Even more recent studies evaluating the chemical safety of reusable plastic bottles found that hundreds of chemicals leached into the control liquid in the bottle (ddH2O, pH = 7) over a 24-hour period. A consequence of the yearly increase of plastic waste in the air, on land and in water is the bioaccumulation of MNLPs in ecosystems and trophic niches of the animal food chain, which could increase direct and indirect human exposure to MNLPs via inhalation, ingestion and dermal contact. Though the detrimental, toxic effects of MNLPs have been established in marine biota, much less is known about the potentially hazardous health effects of chronic MNLP ingestion in humans. Recent data indicate that long-term exposure to MNLPs could have inflammatory and dysbiotic effects; however, toxicity seems to be largely dose- and size-dependent. In addition, the transcytotic uptake of MNLPs through the intestinal epithelium in humans remains relatively unknown. To this end, the goal of the current study was to investigate the mechanisms of uptake and transcytosis of polystyrene (PS) micro- and nanoplastics in human stem-cell-derived, physiologically relevant in vitro intestinal model systems, and to compare the relative effects of particle size (30 nm, 100 nm, 500 nm and 1 µm) and concentration (0 µg/mL, 250 µg/mL, 500 µg/mL, 1000 µg/mL) on polystyrene MNLP uptake, transcytosis and intestinal epithelial model integrity. Observational and quantitative data obtained from confocal microscopy, immunostaining, transepithelial electrical resistance (TEER) measurements, cryosectioning, and ELISA assays of the proinflammatory cytokines interleukin-6 and interleukin-8 were used to evaluate the localization and transcytosis of polystyrene MNLPs and their impact on epithelial integrity in human-derived intestinal in vitro model systems. The effect of microfold (M) cell induction on polystyrene MNLP uptake, transcytosis and potential inflammation was also assessed and compared to samples grown under standard conditions. Microfold (M) cells link the human intestinal system to the immune system and are the primary cells in the epithelium responsible for sampling and transporting foreign matter of interest from the gut lumen to underlying immune cells. Given the capability of microfold cells to interact both specifically and nonspecifically with abiotic and biotic materials, it was expected that M-cell-induced in vitro samples would show increased binding, localization and potentially transcytosis of polystyrene MNLPs across the epithelial barrier. The experimental results of this study would not only help in evaluating plastic toxicity but would also allow for more detailed modeling of gut inflammation and the intestinal immune system.
Keywords: nanoplastics, enteroids, intestinal barrier, tissue engineering, microfold (M) cells
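Barrier integrity in such models is typically quantified by unit-area TEER. The sketch below shows the standard blank-corrected calculation; the resistance values and insert area are illustrative, not data from this study:

```python
def teer_ohm_cm2(measured_ohm: float, blank_ohm: float, area_cm2: float) -> float:
    """Blank-corrected transepithelial electrical resistance per unit area."""
    return (measured_ohm - blank_ohm) * area_cm2

# Illustrative values: a 12-well insert (1.12 cm^2), blank filter at 120 ohm
baseline = teer_ohm_cm2(measured_ohm=520.0, blank_ohm=120.0, area_cm2=1.12)
after_exposure = teer_ohm_cm2(measured_ohm=380.0, blank_ohm=120.0, area_cm2=1.12)

drop_pct = 100.0 * (baseline - after_exposure) / baseline
print(f"TEER {baseline:.0f} -> {after_exposure:.0f} ohm*cm^2 ({drop_pct:.0f}% drop)")
```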
104 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use in today's low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required; thus, a single measurement can provide both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single energy peak and thereby compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and their activity concentrations, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, an adequate full-energy peak (FEP) efficiency calibration of the equipment must first be performed. However, the experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned to software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), which has proven time and again to be a very powerful tool. To create a reliable model, one must have well-established and well-described detector specifications. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of models of two HPGe detectors using the Geant4 toolkit developed at CERN is described, with the goal of improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimental data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector within the energy ranges of 59.4-1836.1 keV and 59.4-1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
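A comparison of simulated and measured FEP efficiencies of the kind reported here can be scripted in a few lines. The numbers below are placeholders, not the study's data:

```python
import numpy as np

energies_kev = np.array([59.4, 661.7, 1173.2, 1332.5])    # example gamma lines
eff_measured = np.array([0.112, 0.021, 0.0135, 0.0122])   # placeholder efficiencies
eff_simulated = np.array([0.108, 0.022, 0.0131, 0.0127])  # placeholder Geant4 output

# Relative deviation of the simulated model from the measured calibration
rel_dev_pct = 100.0 * (eff_simulated - eff_measured) / eff_measured
for e, d in zip(energies_kev, rel_dev_pct):
    print(f"{e:7.1f} keV: {d:+5.1f} %")
print(f"mean |deviation|: {np.mean(np.abs(rel_dev_pct)):.1f} %")
```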
103 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, the representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each carry somewhat different representations of the above processes, can be combined to reduce the collective local biases in space, in time, and across variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression; these weights are then used in the forecast phase. Superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at developing advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating member models are selected at each grid point and for each forecast step in the training period. A multi-model superensemble based on training under similar conditions is also discussed; it rests on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been combined with the above-mentioned approaches. The comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
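At a single grid point, the training step amounts to an ordinary least-squares fit of observed rainfall on the member-model forecasts. A minimal sketch follows, with synthetic data standing in for the TIGGE forecasts and observations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_models = 90, 5                     # training days x member models
truth = rng.gamma(2.0, 4.0, n_days)          # synthetic observed rainfall (mm)
biases = rng.normal(0, 2, n_models)          # each model has its own local bias
forecasts = truth[:, None] + biases + rng.normal(0, 3, (n_days, n_models))

# Training phase: regression weights (with intercept) minimizing squared error
X = np.column_stack([np.ones(n_days), forecasts])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)

# Forecast phase: apply the frozen weights to a new day's member forecasts
new_forecasts = truth[-1] + biases + rng.normal(0, 3, n_models)
superensemble = weights[0] + new_forecasts @ weights[1:]
print(f"ensemble mean: {new_forecasts.mean():.1f} mm, "
      f"superensemble: {superensemble:.1f} mm")
```

The dynamical model selection and similarity-based training described above change which days and which members enter `X`, but the per-grid-point regression has this same form.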
102 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories
Authors: Berna Çalışkan
Abstract:
Integrated water management is a collaborative approach to planning that brings together institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas and stream channel structure. It encourages collaboration where it will be beneficial and links water planning with other planning processes, contributing to sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as well as works such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, the class of the proposed highway, the state of land-use development and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from a digital elevation model (DEM). A case study was carried out in the study area using the Arc Hydro extension within ArcGIS, which provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation and drainage point processing can be obtained using the DEM as the 'input surface raster'. These processes integrate hydrologic and engineering research with environmental modeling in a multi-disciplinary program designed to provide decision-makers with a science-based understanding of, and innovative tools for, an interdisciplinary and multi-level approach. This research helps transport project planning and construction phases by analyzing surficial water flow, high-level streams and wetland sites for the planning, implementation, maintenance, monitoring and long-term evaluation of transportation infrastructure, to better face the challenges and solutions associated with effective management and to deal with low, medium and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on sufficient water resources management. The results of our research provide a workflow to build a stream network and classify a suitability map according to stream levels. Transportation projects can be established, developed and delivered effectively by selecting the best locations, reducing construction and maintenance costs and providing cost-effective solutions for drainage, landslide and flood control. According to the model findings, field studies should be carried out to fill gaps and check for errors. In future research, this study can be extended to determining and preventing possible damage to sensitive areas and vulnerable zones, supported by field investigations.
Keywords: water resources management, hydro tool, water protection, transportation
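Stream extraction of the Arc Hydro kind starts from a flow-direction grid. A compact stand-in (not Arc Hydro's actual implementation) is the classic D8 rule, assigning each cell the direction of steepest descent among its eight neighbours:

```python
import numpy as np

def d8_flow_direction(dem: np.ndarray) -> np.ndarray:
    """D8 flow direction: index 0-7 of the steepest-descent neighbour, -1 for pits."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    dist = np.array([np.hypot(di, dj) for di, dj in offsets])
    rows, cols = dem.shape
    fdir = np.full((rows, cols), -1, dtype=int)
    for i in range(1, rows - 1):           # border cells left unresolved for brevity
        for j in range(1, cols - 1):
            drops = np.array([dem[i, j] - dem[i + di, j + dj]
                              for di, dj in offsets]) / dist
            if drops.max() > 0:
                fdir[i, j] = int(drops.argmax())
    return fdir

dem = np.array([[5, 5, 5, 5],
                [5, 4, 3, 5],
                [5, 3, 2, 5],
                [5, 5, 1, 5]], dtype=float)
print(d8_flow_direction(dem))  # interior cells point towards the valley
```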
101 Argos-Linked Fastloc GPS Reveals the Resting Activity of Migrating Sea Turtles
Authors: Gail Schofield, Antoine M. Dujon, Nicole Esteban, Rebecca M. Lester, Graeme C. Hays
Abstract:
Variation in diel movement patterns during migration provides information on the strategies used by animals to maximize energy efficiency and ensure the successful completion of migration. For instance, many flying and land-based terrestrial species stop to rest and refuel at regular intervals along the migratory route, or at transitory 'stopover' sites, depending on resource availability. However, in cases where stopping is not possible (such as over or through deep open oceans, or over deserts and mountains), non-stop travel is required, and animals need strategies to rest while actively traveling. Recent advances in biologging technologies have identified mid-flight micro-sleeps by swifts in Africa during the 10-month non-breeding period, and the use of lateralized sleep behavior by orcas and bottlenose dolphins during migration. Here, highly accurate locations obtained by Argos-linked Fastloc-GPS transmitters on adult green (n=8 turtles, 9,487 locations) and loggerhead (n=46 turtles, 47,588 locations) sea turtles migrating around a thousand kilometers (over several weeks) from breeding to foraging grounds across the Indian and Mediterranean oceans were used to identify potential resting strategies. Stopovers were documented for only seven turtles, lasting up to 6 days; thus, this strategy was not commonly used, possibly due to the lack of potential 'shallow' (< 100 m seabed depth) sites along the routes. However, observations of day versus night travel speeds indicated that turtles might use other mechanisms to rest. For instance, turtles traveled on average 31% slower at night than during the day during oceanic crossings. Slower night-time travel might be explained by turtles swimming in a less direct line at night and/or by deeper dives reducing their forward motion, as indicated by studies using Argos-linked transmitters and accelerometers. Furthermore, within the first 24 h of entering waters shallower than 100 m towards the end of migration (depths at which sea turtles can swim down and rest on the seabed), some individuals traveled 72% slower at night, repeating this behavior intermittently (each time for one night, at 3-6 day intervals) until reaching the foraging grounds. If the turtles were in fact resting on the seabed at these points, they could be inactive for up to 8 hours, providing protracted periods of rest after several weeks of constant swimming. Turtles might not rest every night once within these shallower depths, due to the time constraints of reaching the foraging grounds and restoring depleted energy reserves (as sea turtles are capital breeders, they tend not to feed for several months during migration to and from the breeding grounds and while breeding). In conclusion, access to data-rich, highly accurate Argos-linked Fastloc-GPS provided information about differences in day versus night activity at different stages of migration, allowing us, for the first time, to compare the strategies used by a marine vertebrate with those of terrestrial land-based and flying species. However, the question of what resting strategies are used by individuals that remain in oceanic waters to forage remains open; answering it will require combinations of highly accurate Argos-linked Fastloc-GPS transmitters and accelerometry or time-depth recorders deployed on sufficient numbers of individuals.
Keywords: argos-linked fastloc GPS, data loggers, migration, resting strategy, telemetry
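The day/night speed comparison behind these percentages can be sketched as follows. The coordinates, timestamps and the fixed 06:00-18:00 day window are invented for illustration (the study would have used local sun times):

```python
import math
from datetime import datetime, timedelta

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) fixes."""
    r = 6371.0
    dp, dl = math.radians(q[0] - p[0]), math.radians(q[1] - p[1])
    a = (math.sin(dp / 2) ** 2 +
         math.cos(math.radians(p[0])) * math.cos(math.radians(q[0]))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Synthetic 3-hourly Fastloc-GPS track segment (constant heading and pace)
t0 = datetime(2013, 7, 1)
fixes = [(t0 + timedelta(hours=3 * i), (35.0 + 0.05 * i, 25.0 + 0.04 * i))
         for i in range(16)]

day_speeds, night_speeds = [], []
for (t1, p1), (t2, p2) in zip(fixes, fixes[1:]):
    speed = haversine_km(p1, p2) / ((t2 - t1).total_seconds() / 3600.0)
    (day_speeds if 6 <= t1.hour < 18 else night_speeds).append(speed)  # crude split

print(f"day: {sum(day_speeds)/len(day_speeds):.2f} km/h, "
      f"night: {sum(night_speeds)/len(night_speeds):.2f} km/h")
```

On this constant-pace toy track the two means are equal; on real data, a night mean about 31% below the day mean would reproduce the oceanic-crossing result reported above.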
100 Modelling Spatial Dynamics of Terrorism
Authors: André Python
Abstract:
To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Despite country-level empirical research confirming theories that explain the diffusion of terrorism across space and time, scholars have failed to assess diffusion theories at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and to provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), whose surface is discretised into Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation, a recent fitting approach that computes fast and accurate estimates of posterior marginals. Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors operating at a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling
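The full model here is an INLA-fitted binomial spatio-temporal point process on a triangulated sphere, which does not reduce to a few lines. As a deliberately simplified stand-in (not the paper's method), the sketch below fits a plain binomial GLM on gridded cells, with last year's neighbourhood attack count as a crude 'contagion' covariate and one invented non-contagious covariate:

```python
import numpy as np

# --- synthetic gridded data standing in for the geolocalised attack records ---
rng = np.random.default_rng(2)
n = 400                                  # grid cells
contagion = rng.poisson(1.0, n)          # attacks in neighbouring cells, year t-1
wealth = rng.normal(0, 1, n)             # an invented non-contagious covariate
logit = -2.0 + 0.8 * contagion - 0.5 * wealth
attacked = rng.random(n) < 1 / (1 + np.exp(-logit))  # any lethal attack in year t

# --- logistic regression by Newton-Raphson (no external dependencies) ---
X = np.column_stack([np.ones(n), contagion, wealth])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (attacked - p))

print("estimated coefficients:", np.round(beta, 2))  # near (-2.0, 0.8, -0.5)
```

A positive contagion coefficient in such a fit is the gridded analogue of the escalation process described above; the INLA model additionally captures spatial correlation that this sketch ignores.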
99 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural one. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed land surface temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature; however, spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such approaches may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather station observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST, with data from Landsat, ASTER, or MODIS used extensively. Indeed, LST has an indirect but significant influence on air temperature. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site air temperature measurements from fixed weather stations with satellite-derived LST, and is structured in two main steps. First, a GWR model was set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) with LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 km spatial resolution, were employed; two time periods are considered according to the satellite revisit times, i.e., 10:30 am and 9:30 pm. Afterward, the results were downscaled to 30 m spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 m. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, the low-resolution (1 km) and the high-resolution (30 m) one, were validated through cross-validation relying on indicators such as R2, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, was employed to test the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
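At its core, GWR fits one distance-weighted least-squares regression per prediction location. A minimal sketch of the first step (NSAT from LST) follows, with synthetic station data and a Gaussian kernel whose bandwidth is an assumed value, not one calibrated for this study:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
xy = rng.uniform(0, 50, (n, 2))        # station coordinates (km)
lst = rng.normal(30, 4, n)             # satellite LST sampled at the stations
slope = 0.6 + 0.01 * xy[:, 0]          # spatially varying LST->NSAT relation
tair = 2.0 + slope * lst + rng.normal(0, 0.5, n)   # synthetic station NSAT

def gwr_predict(x0, y0, lst0, bandwidth_km=10.0):
    """Local weighted least squares at (x0, y0): NSAT ~ a + b * LST."""
    d2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
    w = np.exp(-d2 / (2 * bandwidth_km ** 2))      # Gaussian kernel weights
    X = np.column_stack([np.ones(n), lst])
    a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * tair))
    return a + b * lst0

print(f"predicted NSAT: {gwr_predict(25.0, 25.0, lst0=31.0):.2f} °C")
```

The second, downscaling step has the same structure with albedo and DEM elevation replacing LST as the local predictors.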
Procedia PDF Downloads 196
98 Exploring Behavioural Biases among Indian Investors: A Qualitative Inquiry
Authors: Satish Kumar, Nisha Goyal
Abstract:
In the stock market, individual investors exhibit different kinds of behaviour. Traditional finance is built on the notion of 'homo economicus', which states that humans always make perfectly rational choices to maximize their wealth and minimize risk; that is, traditional finance is concerned with how investors should behave rather than with how actual investors behave. Behavioural finance provides the explanation for this phenomenon. Although finance has been studied for thousands of years, behavioural finance is an emerging field that combines behavioural and psychological aspects with conventional economic and financial theories to explain how emotions and cognitive factors influence investors' behaviour. These emotions and cognitive factors are known as behavioural biases. Because of these biases, investors make irrational investment decisions. Besides the emotional and cognitive factors, the social influence of media as well as of friends, relatives and colleagues also affects investment decisions. Psychological factors influence individual investors' investment decision making, but few studies have used qualitative methods to understand these factors. The aim of this study is to explore the behavioural factors or biases that affect individuals' investment decision making. For the purpose of this exploratory study, an in-depth interview method was used because it provides much more exhaustive information and a relaxed atmosphere in which people feel more comfortable providing information. Twenty investment advisors with a minimum of 5 years' experience in securities firms were interviewed. Thematic content analysis was used to analyse the interview transcripts; this process involves analysis of transcripts, coding and identification of themes from the data. Based on the analysis, we categorized the statements of advisors into various themes: past market returns and volatility; preference for safe returns; tendency to believe they are better than others; tendency to divide their money into different accounts/assets; tendency to hold on to loss-making assets; preference to invest in familiar securities; tendency to believe that past events were predictable; tendency to rely on a reference point; tendency to rely on other sources of information; tendency to regret past decisions; tendency to be more sensitive to losses than to gains; tendency to rely on their own skills; and tendency to buy rising stocks with the expectation that the rise will continue. These were some of the major concerns about investors expressed by the experts. The findings of the study revealed 13 biases present in Indian investors: overconfidence bias, disposition effect, familiarity bias, framing effect, anchoring bias, availability bias, self-attribution bias, representativeness, mental accounting, hindsight bias, regret aversion, loss aversion and herding/media bias. These biases have a negative connotation because they produce a distortion in the calculation of an outcome. They are classified into three categories: cognitive errors, emotional biases and social interaction. The findings of this study may assist both financial service providers and researchers in understanding the various psychological biases of individual investors in investment decision making. Additionally, individual investors will become aware of the behavioural biases, which will aid them in making sensible and efficient investment decisions.
Keywords: financial advisors, individual investors, investment decisions, psychological biases, qualitative thematic content analysis
Procedia PDF Downloads 169
97 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic
Abstract:
The earth is constantly bombarded by cosmic rays that can be of either galactic or solar origin. Thus, humans are exposed to high levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimations differ by one order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv and more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate is of the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e. comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surface of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing the extensive air shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the GCR description is based on the Force-Field approximation model. The physical description of the Solar Cosmic Rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD allows the spectral fluence rate of secondary particles induced by extensive showers to be determined, considering altitudes from ground level to 45 km. Ambient dose equivalent can be determined using fluence-to-ambient-dose-equivalent conversion coefficients. The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a large statistical sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain a reliable calculation of dose levels.
Keywords: cosmic ray, human dose, solar flare, aviation
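The route-dependence described above can be illustrated with a toy calculation: the sketch below integrates an assumed dose-rate function of altitude and cutoff rigidity along a simplified flight profile. The functional form, coefficients and waypoints are invented for illustration and are not ATMORAD's physics.

```python
import numpy as np

def dose_rate_uSv_h(altitude_km, cutoff_rigidity_GV, gle_factor=1.0):
    """Illustrative ambient dose equivalent rate (not ATMORAD's model):
    the rate grows with altitude and decreases with cutoff rigidity; a
    GLE multiplier mimics the enhancement during a solar particle event."""
    altitude_term = np.exp((altitude_km - 11.0) / 7.0)       # toy scaling
    rigidity_term = 1.0 / (1.0 + 0.3 * cutoff_rigidity_GV)   # polar routes see more
    return 5.0 * altitude_term * rigidity_term * gle_factor

def route_dose_uSv(waypoints, gle_factor=1.0):
    """Trapezoidal accumulation along (time_h, altitude_km, Rc_GV) waypoints."""
    t, alt, rc = (np.asarray(c, float) for c in zip(*waypoints))
    rates = dose_rate_uSv_h(alt, rc, gle_factor)
    return np.sum(0.5 * (rates[1:] + rates[:-1]) * np.diff(t))

# Simplified transatlantic profile: climb, high-latitude cruise, descent.
route = [(0.0, 0.0, 5.0), (0.5, 11.0, 4.0), (4.0, 11.5, 1.5),
         (7.0, 11.5, 2.0), (7.5, 0.0, 3.0)]
print(f"quiet Sun : {route_dose_uSv(route):6.1f} uSv")
print(f"during GLE: {route_dose_uSv(route, gle_factor=30.0):6.1f} uSv")
```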
Procedia PDF Downloads 206
96 Economic Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emmanuele, Agbadede Roupa, Allison Isaiah
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero-emission power plant. The advanced zero-emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899, when Walter Hermann Nernst investigated electric current between metals and solutions; he found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In a bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero-emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% CO2; it also boasts a 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as that of its counterparts, and it achieves almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. The advanced zero-emission power plant differs from a conventional gas turbine in that its combustor is substituted with the MCM reactor. The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat-exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was modelled using Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating layout (sequential burning layout) - AZEP 85% (85% CO2 capture), and the pre-expansion reheating layout (sequential burning layout) with flue gas turbine - AZEP 85% (85% CO2 capture). This paper discusses the Monte Carlo risk analysis of these four possible layouts of the AZEP cycle.
Keywords: gas turbine, global warming, green house gas, fossil fuel power plants
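As a hedged illustration of what a Monte Carlo risk analysis of such a cycle can look like, the sketch below propagates assumed distributions of fuel price, electricity price, capital cost and cycle efficiency through a simple net present value (NPV) model. All distributions and numbers are placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                   # Monte Carlo trials

# Illustrative uncertain inputs (assumed distributions, not the paper's data).
fuel_price   = rng.normal(6.0, 1.0, n)        # $/GJ
elec_price   = rng.normal(60.0, 8.0, n)       # $/MWh
capital_cost = rng.triangular(0.9e9, 1.0e9, 1.3e9, n)  # $
efficiency   = rng.normal(0.48, 0.02, n)      # cycle efficiency

power_mw, hours, life_years, discount = 400.0, 8000.0, 25, 0.08

annual_energy = power_mw * hours                          # MWh/year
fuel_gj = annual_energy * 3.6 / efficiency                # GJ/year
cash_flow = annual_energy * elec_price - fuel_gj * fuel_price
annuity = (1 - (1 + discount) ** -life_years) / discount  # present-value factor
npv = cash_flow * annuity - capital_cost

print(f"mean NPV       : {npv.mean() / 1e6:8.1f} M$")
print(f"5th percentile : {np.percentile(npv, 5) / 1e6:8.1f} M$")
print(f"P(NPV < 0)     : {(npv < 0).mean():.2%}")
```

Comparing the resulting NPV distributions across the four layouts, rather than single-point estimates, is what makes the risk analysis informative.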
Procedia PDF Downloads 397
95 Embodied Empowerment: A Design Framework for Augmenting Human Agency in Assistive Technologies
Authors: Melina Kopke, Jelle Van Dijk
Abstract:
Persons with cognitive disabilities, such as Autism Spectrum Disorder (ASD), are often dependent on some form of professional support. Recent transformations in Dutch healthcare have spurred institutions to apply new, empowering methods and tools to enable their clients to cope (more) independently in daily life. Assistive Technologies (ATs) seem promising as empowering tools. While ATs can, functionally speaking, help people perform certain activities without human assistance, we hold that, from a design-theoretical perspective, such technologies often fail to empower in a deeper sense. Most technologies serve either to prescribe or to monitor users' actions, which in some sense objectifies them rather than strengthening their agency. This paper proposes that theories of embodied interaction could help formulate a design vision in which interactive assistive devices augment, rather than replace, human agency and thereby add to a person's empowerment in daily life settings. It aims to close the gap between empowerment theory and the opportunities provided by assistive technologies by showing how embodiment and empowerment theory can be applied in practice in the design of new, interactive assistive devices. Taking a Research-through-Design approach, we conducted a case study of designing to support independently living people with ASD in structuring daily activities. In three iterations we interlaced design action, active involvement and prototype evaluations with future end-users and healthcare professionals, and theoretical reflection. Our co-design sessions revealed that the issue of handling daily activities is multidimensional. Not having the ability to self-manage one's daily life has immense consequences for one's self-image and also has major effects on the relationship with professional caregivers. Over the course of the project, relevant theoretical principles of both embodiment and empowerment theory, together with user insights, informed our design decisions. This resulted in a system of wireless light units that users can program as reminders for tasks, but also to record and reflect on their actions. The iterative process helped to gradually refine and reframe our growing understanding of what it concretely means for a technology to empower a person in daily life. Drawing on the case study insights, we propose a set of concrete design principles that together form what we call the embodied empowerment design framework. The framework includes four main principles: enabling 'reflection-in-action'; making information 'publicly available' in order to enable co-reflection and social coupling; enabling the implementation of shared reflections into an 'endurable external feedback loop' embedded in the person's familiar 'lifeworld'; and nudging situated actions with self-created action-affordances. In essence, the framework aims for the self-development of a suitable routine, or 'situated practice', by building on a growing shared insight into what works for the person. The framework, we propose, may serve as a starting point for AT designers to create truly empowering interactive products. In a set of follow-up projects involving the participation of persons with ASD, Intellectual Disabilities, Dementia and Acquired Brain Injury, the framework will be applied, evaluated and further refined.
Keywords: assistive technology, design, embodiment, empowerment
Procedia PDF Downloads 279
94 Tensile and Direct Shear Responses of Basalt-Fibre Reinforced Composite Using Alkali Activated Binder
Authors: S. Candamano, A. Iorfida, L. Pagnotta, F. Crea
Abstract:
Basalt fabric reinforced cementitious composites (FRCM) have attracted great attention because they are effective in structural strengthening and eco-efficient. In this study, the authors investigate their mechanical behavior when an alkali-activated binder with tuned properties, containing high amounts of industrial by-products such as ground granulated blast furnace slag, is used. The reinforcement is a balanced, coated bidirectional fabric made of basalt fibres and stainless steel micro-wire, with a mesh size of 8x8 mm and an equivalent design thickness of 0.064 mm. Mortar mixes were prepared by keeping the water/(reactive powders) and sand/(reactive powders) ratios constant at 0.53 and 2.7, respectively. Tensile tests were carried out on composite specimens of nominal dimensions 500 mm x 50 mm x 10 mm, with 6 embedded rovings in the loading direction. Direct shear tests (DST), aimed at the stress-transfer mechanism and failure modes of basalt-FRCM composites, were carried out on a brickwork substrate using an externally bonded basalt-FRCM composite strip 10 mm thick and 50 mm wide, with a bonded length of 300 mm. The mortars exhibit, after 28 days of curing, a compressive strength of 32 MPa and a flexural strength of 5.5 MPa; the main hydration product is a poorly crystalline C-A-S-H gel. The constitutive behavior of the composite was identified by means of direct tensile tests, with response curves showing a tri-linear behavior: the first linear phase represents the uncracked stage (I), the second (II) is identified by crack development, and the third (III) corresponds to the cracked stage, completely developed up to failure. All specimens exhibited a crack pattern throughout the gauge length, and failure occurred as a result of sequential tensile failure of the fibre bundles after reaching the ultimate tensile strength. The behavior is mainly governed by crack development (II) and widening (III) up to failure. The main average values related to the stages are σI = 173 MPa and εI = 0.026%, the stress and strain at the transition point between stages I and II, corresponding to first mortar cracking, and σu = 456 MPa and εu = 2.20%, the ultimate tensile strength and strain, respectively. The tensile modulus of elasticity in stage III is EIII = 41 GPa. All single-lap shear test specimens failed due to composite debonding, which occurred at the internal fabric-to-matrix interface and was the result of fracture of the matrix between the fibre bundles. For all specimens, transversal cracks were visible on the external surface of the composite and involved only the external matrix layer. This cracking appears when the interfacial shear stresses increase and slippage of the fabric at the internal matrix layer interface occurs. Since the external matrix layer is bonded to the reinforcement fabric, it translates with the slipped fabric. An average peak load around 945 N, a peak stress around 308 MPa and a global slip around 6 mm were measured. The preliminary test results indicate that alkali-activated binders can be considered a potentially valid alternative to traditional mortars in designing FRCM composites.
Keywords: alkali activated binders, basalt-FRCM composites, direct shear tests, structural strengthening
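The tri-linear response can be turned into a simple piecewise idealisation from the reported characteristic values. In the sketch below the stage II-III transition strain is back-calculated from EIII under the simplifying assumption of a flat crack-development branch; this is an illustration, not the authors' fitting procedure.

```python
import numpy as np

# Characteristic points reported in the abstract: end of the uncracked
# stage (sigma_I, eps_I) and the ultimate point (sigma_u, eps_u).
eps_I, sig_I = 0.026 / 100, 173.0            # strain [-], stress [MPa]
eps_u, sig_u = 2.20 / 100, 456.0
E_III = 41_000.0                             # stage-III modulus [MPa]

# The II->III transition strain is estimated by intersecting the stage-III
# line through the ultimate point with a flat crack-development branch at
# sigma_I (an illustrative simplification).
eps_II = eps_u - (sig_u - sig_I) / E_III

def stress(eps):
    """Piecewise-linear (tri-linear) idealisation of the tensile response."""
    return np.interp(eps, [0.0, eps_I, eps_II, eps_u],
                          [0.0, sig_I, sig_I, sig_u])

for e in (0.0001, 0.005, 0.015, 0.0215):
    print(f"eps = {e:.4f} -> sigma = {stress(e):6.1f} MPa")
```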
Procedia PDF Downloads 124
93 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry's evolution because of the tremendous technical challenges present at all stages in the supply chain, from production to liquefaction to transport. Each stage is designed using predictions of the mixture's properties, such as density, viscosity, surface tension, heat capacity and phase behaviour, as a function of temperature, pressure and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture's critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol %. Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely used models to be identified. The data enable the development of new or improved models to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by allowing deficiencies in theoretical descriptions and calculations to be identified.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
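A typical way to quantify the reported model-data discrepancies is through deviation statistics. The sketch below computes the average absolute relative deviation (%AAD), bias and RMSE for a hypothetical set of measured and predicted densities; the values shown are invented, not the study's measurements.

```python
import numpy as np

def deviation_stats(measured, predicted):
    """%AAD, bias and RMSE between experimental data and a model's predictions."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    rel = (predicted - measured) / measured
    return {"AAD_%": 100 * np.mean(np.abs(rel)),
            "bias_%": 100 * np.mean(rel),
            "RMSE": np.sqrt(np.mean((predicted - measured) ** 2))}

# Hypothetical example: densities of a (CH4 + C3H8) mixture [kg/m3]
rho_exp   = [310.2, 298.7, 285.1, 270.4, 254.9]   # measured
rho_model = [312.0, 301.1, 286.0, 268.9, 252.2]   # e.g. an EOS prediction

print(deviation_stats(rho_exp, rho_model))
```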
Procedia PDF Downloads 274
92 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought
Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan
Abstract:
Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, which magnifies its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, understood as the necessity to elucidate the effects AI generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this 'accountability', facing the challenges posed by the complexity of artificial intelligence systems and their effects, and then to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fragmented among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and distance between, the actors: a dilution of responsibility arises from a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Finally, accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors self-learning via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the 'hologrammatic' principle underscores the imperative of acknowledging the ethical non-neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and society. The 'dialogic' principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizing recursiveness, akin to the 'transparency' of the system, promotes a systemic analysis to account for the induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of artificial intelligence systems despite the evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.
Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin
Procedia PDF Downloads 63
91 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population
Authors: Ye Xue, Zhenhua Deng
Abstract:
Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement over dense points of 3D facial and skull models using open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed and densely calculated FSTT database is crucial for enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized with the Hausdorff distance value of each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analysed considering sex, age, BMI and birthplace. Statistical methods employed included multiple regression analysis, ANOVA and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA analysis. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in regions such as the forehead, orbital, mandibular and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between north and south, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool
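The dense face-to-skull measurement can also be approximated outside MeshLab. The sketch below uses a k-d tree to find, for every facial-surface vertex, the distance to the nearest skull vertex; random point clouds stand in for the PLY models exported from 3D Slicer, so the numbers are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

def fstt_per_vertex(face_vertices_mm, skull_vertices_mm):
    """For each facial-surface vertex, the distance [mm] to the nearest
    skull vertex: a vertex-cloud approximation of the dense FSTT map."""
    tree = cKDTree(skull_vertices_mm)
    dist, _ = tree.query(face_vertices_mm)
    return dist

# Random point clouds standing in for the reconstructed PLY surfaces;
# real use would load the 3D Slicer / MeshLab exports instead.
rng = np.random.default_rng(1)
skull = rng.normal(0.0, 50.0, size=(5000, 3))                    # [mm]
face = skull[:2000] + rng.uniform(3.0, 30.0, size=(2000, 3)) / np.sqrt(3)

fstt = fstt_per_vertex(face, skull)
print(f"mean {fstt.mean():.1f} mm, min {fstt.min():.1f} mm, max {fstt.max():.1f} mm")

# Depth histogram in 5 mm increments, mirroring the subdivision step.
counts, edges = np.histogram(fstt, bins=np.arange(0.0, 35.0, 5.0))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:4.0f}-{hi:2.0f} mm: {n}")
```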
Procedia PDF Downloads 58
90 Design Challenges for Severely Skewed Steel Bridges
Authors: Muna Mitchell, Akshay Parchure, Krishna Singaraju
Abstract:
There is an increasing need for medium- to long-span steel bridges with complex geometry due to site restrictions in developed areas. One of the solutions for grade separations in congested areas is to use longer spans on skewed supports that avoid at-grade obstructions, limiting impacts to the foundations. Where vertical clearances are also a constraint, continuous steel girders can be used to reduce superstructure depths. Combining continuous long steel spans on severe skews can resolve these constraints, at a cost: the behavior of skewed girders is challenging to analyze and design, with subsequent complexity during fabrication and construction. As part of a corridor improvement project, Walter P Moore designed two 1700-foot side-by-side bridges carrying four lanes of traffic in each direction over a railroad track. The bridges consist of prestressed concrete girder approach spans and three-span continuous steel plate girder units. The roadway design added complex geometry to the bridges, with horizontal and vertical curves combined with superelevation transitions within the plate girder units. The substructure at the steel units was skewed approximately 56 degrees to satisfy the existing railroad right-of-way requirements. A horizontal point of curvature (PC) near the end of the steel units required the use of flared girders and chorded slab edges. Due to the flared girder geometry, the cross-frame spacing in each bay is unique. Staggered cross frames were provided based on AASHTO LRFD and NCHRP guidelines for high-skew steel bridges. Skewed steel bridges develop significant forces in the cross frames and rotation in the girder webs due to differential displacements along the girders under dead and live loads. In addition, under thermal loads, skewed steel bridges expand and contract not along the alignment parallel to the girders but along the diagonal connecting the acute corners, resulting in horizontal displacement both along and perpendicular to the girders. AASHTO LRFD recommends a 95 degree Fahrenheit temperature differential for the design of joints and bearings. The live load and the thermal loads resulted in significant horizontal forces and rotations in the bearings that necessitated the use of high-load multi-rotational (HLMR) bearings. A unique bearing layout was selected to minimize the effect of thermal forces. The span length, width, skew and roadway geometry of the bridges also required modular bridge joint systems (MBJS) with inverted-T bent caps to accommodate movement in the steel units. 2D and 3D finite element analysis models were developed to accurately determine the forces and rotations in the girders, cross frames and bearings and to estimate thermal displacements at the joints. This paper covers the decision-making process for developing the framing plan, bearing configurations, joint type and analysis models involved in the design of the high-skew three-span continuous steel plate girder bridges.
Keywords: complex geometry, continuous steel plate girders, finite element structural analysis, high skew, HLMR bearings, modular joint
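The diagonal thermal movement mentioned above can be estimated with a simple geometric calculation. The sketch below uses the AASHTO expansion coefficient for steel and the 95 °F differential with assumed span and skew values; it is a first-order check for illustration, not the project's analysis.

```python
import math

# First-order estimate of thermal movement for a skewed steel unit; the
# expansion length is an assumed value, not the project's.
alpha = 6.5e-6           # steel expansion coefficient [1/degF] (AASHTO)
delta_T = 95.0           # design temperature differential [degF] per AASHTO LRFD
expansion_length = 600.0 # expansion length along the girders [ft]
skew = math.radians(56)  # support skew

# Movement develops along the diagonal joining the acute corners.
diagonal = expansion_length / math.cos(skew)          # [ft]
move_total = alpha * delta_T * diagonal * 12.0        # [in]

# Components along and perpendicular to the girder lines.
move_longitudinal = move_total * math.cos(skew)
move_transverse = move_total * math.sin(skew)
print(f"along diagonal: {move_total:.2f} in | "
      f"longitudinal: {move_longitudinal:.2f} in | "
      f"transverse: {move_transverse:.2f} in")
```

Even this rough estimate shows why severe skew drives joint and bearing design: a large share of the movement occurs transverse to the girders, which conventional joints and bearings do not accommodate.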
Procedia PDF Downloads 195
89 Impact of Air Pressure and Outlet Temperature on Physicochemical and Functional Properties of Spray-dried Skim Milk Powder
Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit
Abstract:
The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders reduce transportation costs and increase shelf life, but can also exhibit various interesting functionalities (flowability, solubility, protein modification or acid gelation), depending on operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of powders were investigated by a design of experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out using a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform the experiments, milk from a dairy farm was collected, skimmed, frozen and spray-dried at different air pressures (between 1 and 3 bar) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content and protein content were determined by standard methods. Solubility index, absorption index and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shear tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process. It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder with a less saturated color. A higher outlet temperature results in particles with a lower moisture level, which are less sticky; this can explain an increase in the spray-drying yield and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, powders tend to take up moisture rapidly if they are not well stored. On the other hand, a high temperature provokes a decrease in native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process thus significantly affects the physicochemical and functional properties of the powder. This study permitted a better understanding of the links between the physicochemical and functional properties of powders and the identification of correlations with air pressure and outlet temperature. Mathematical models have therefore been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.
Keywords: dairy powders, spray-drying, powders functionalities, design of experiment
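A minimal version of the modelling step reads as follows: a full quadratic response surface is fitted to hypothetical design-of-experiments results and then optimised. The response values are invented, and a dense grid search stands in for the genetic algorithm used by the authors, purely to keep the sketch self-contained.

```python
import numpy as np

def quad_features(P, T):
    """Full quadratic model in air pressure P [bar] and outlet temperature T [degC]."""
    return np.column_stack([np.ones_like(P), P, T, P * T, P**2, T**2])

# Hypothetical DoE results: (pressure, temperature) -> measured response
# (e.g. a solubility index); the values are invented for illustration.
P = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0, 1.5, 2.5])
T = np.array([75.0, 95.0, 75.0, 95.0, 75.0, 95.0, 85.0, 85.0, 85.0])
y = np.array([88.0, 90.5, 91.2, 93.8, 90.1, 92.0, 94.5, 92.8, 93.9])

beta, *_ = np.linalg.lstsq(quad_features(P, T), y, rcond=None)

# Grid search over the experimental domain (a GA would explore it instead).
Pg, Tg = np.meshgrid(np.linspace(1, 3, 201), np.linspace(75, 95, 201))
pred = quad_features(Pg.ravel(), Tg.ravel()) @ beta
best = np.argmax(pred)
print(f"predicted optimum: P = {Pg.ravel()[best]:.2f} bar, "
      f"T = {Tg.ravel()[best]:.1f} degC, response = {pred[best]:.1f}")
```

For multicriteria optimization, several such fitted responses would be combined into a weighted objective or a Pareto search, which is where a genetic algorithm becomes useful.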
Procedia PDF Downloads 93
88 In-Process Integration of Resistance-Based, Fiber Sensors during the Braiding Process for Strain Monitoring of Carbon Fiber Reinforced Composite Materials
Authors: Oscar Bareiro, Johannes Sackmann, Thomas Gries
Abstract:
Carbon fiber reinforced polymer composites (CFRP) are used in a wide variety of applications due to their advantageous properties and design versatility. The braiding process enables the manufacture of components with good toughness and fatigue strength. However, the failure mechanisms of CFRPs are complex and still present challenges associated with their maintenance and repair. Within the broad scope of structural health monitoring (SHM), strain monitoring can be applied to composite materials to improve reliability, reduce maintenance costs and safely exhaust service life. Traditional SHM systems employ, e.g., fiber optics or piezoelectrics as sensors, which are often expensive, time-consuming and complicated to implement. A cost-efficient alternative is the exploitation of the conductive properties of fiber-based sensors such as carbon, copper or constantan (a copper-nickel alloy), which can be utilized as sensors within composite structures to achieve strain monitoring. This allows the structure to provide feedback to a user via electrical signals, which are essential for evaluating its structural condition. This work presents a strategy for the in-process integration of resistance-based sensors (Elektrisola Feindraht AG, CuNi23Mn, Ø = 0.05 mm) into textile preforms during their manufacture via the braiding process (Herzog RF-64/120) to achieve strain monitoring of braided composites. For this, flat samples of instrumented composite laminates of carbon fibers (Toho Tenax HTS40 F13 24K, 1600 tex) and epoxy resin (Epikote RIMR 426) were manufactured via vacuum-assisted resin infusion. These flat samples were later cut into test specimens, and the integrated sensors were wired to the measurement equipment (National Instruments, VB-8012) for data acquisition during the mechanical tests. Quasi-static tests (tensile and 3-point bending) were performed following standard protocols (DIN EN ISO 527-1 & 4, DIN EN ISO 14132); additionally, dynamic tensile tests were executed. These tests served to assess the sensor response under different loading conditions and to evaluate the influence of the sensor's presence on the mechanical properties of the material. Several orientations of the sensor with regard to the applied loading, and several sensor placements inside the laminate, were tested. Strain measurements from the integrated sensors were made via a data acquisition code (LabVIEW) written for the measurement equipment and were then correlated to the strain/stress state of the tested samples. From the assessment of the sensor integration approach, it can be concluded that it allows a seamless sensor integration into the textile preform: no damage to the sensor or negative effect on its electrical properties was detected during inspection after integration. From the assessment of the mechanical tests of instrumented samples, it can be concluded that the presence of the sensors does not significantly alter the mechanical properties of the material. A good correlation was found between the resistance measurements from the integrated sensors and the applied strain, of sufficient accuracy to determine the strain state of a composite laminate based solely on the resistance measurements from the integrated sensors.
Keywords: braiding process, in-process sensor integration, instrumented composite material, resistance-based sensor, strain monitoring
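The conversion from resistance readings to strain typically follows the relation ΔR/R0 = k·ε for metallic wire sensors. The sketch below applies it with a gauge factor of about 2, which is a typical value for metallic sensors and an assumption here, since the abstract does not report the sensor's calibration.

```python
import numpy as np

def strain_from_resistance(R, R0, gauge_factor=2.0):
    """Convert resistance readings of an embedded wire sensor to strain via
    dR/R0 = k * eps; the gauge factor k ~ 2 is a typical value for metallic
    sensors, assumed here because no calibration value is reported."""
    return (np.asarray(R, dtype=float) - R0) / (R0 * gauge_factor)

# Hypothetical readings [ohm] acquired during a tensile test.
R0 = 100.0
readings = [100.00, 100.04, 100.08, 100.12, 100.16]
for R, eps in zip(readings, strain_from_resistance(readings, R0)):
    print(f"R = {R:7.2f} ohm -> strain = {eps * 1e6:6.0f} microstrain")
```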
Procedia PDF Downloads 106
87 Two Houses in the Arabian Desert: Assessing the Built Work of RCR Architects in the UAE
Authors: Igor Peraza Curiel, Suzanne Strum
Abstract:
Today, when many foreign architects are receiving commissions in the United Arab Emirates, it is essential to analyze how their designs are influenced by the region's culture, environment and building traditions. This study examines the approach to siting, geometry, construction methods and material choices in two private homes for a family in Dubai, a project being constructed on adjacent sites by the acclaimed Spanish team of RCR Architects. Their third project in Dubai, the houses mark a turning point in the firm's design approach to the desert. The Pritzker Prize-winning architects of RCR gained renown for building works deeply responsive to the history, landscape and customs of their hometown in a volcanic area of the Catalonia region of Spain. Key formative projects and their entry into practice in the UAE are analyzed according to the concepts of place identity, the poetics of construction and material imagination. The poetics of construction, a theoretical position with a long practical tradition, was revived by the British critic Kenneth Frampton. The idea of architecture as a constructional craft is related to the concepts of material imagination and place identity, phenomenological concerns with the creative engagement with local matter and topography that are at the very essence of RCR's way of designing, detailing and making. Our study situates RCR within the challenges of building in the region, where western forms and means have largely replaced the ingenious responsiveness of indigenous architecture to the climate and material scarcity. The dwellings, iterations of the same steel and concrete vaulting system, highlight the conceptual framework of RCR's design approach to offer a study in contemporary critical regionalism. The Kama House evokes Bedouin tents, while the Alwah House takes the form of desert dunes in response to the temporality of the winds. Metal mesh screens designed to capture the shifting sands will complete the forms. The original research draws on interviews with the architects and unique documentation provided by them and collected by the authors during on-site visits. By examining the two houses in depth, this paper foregrounds a series of timely questions: 1) What is the impact of the local climatic, cultural and material conditions on their project in the UAE? 2) How does this work further their experiences in the region? 3) How has RCR adapted their construction techniques as their work expands beyond familiar settings? The investigation seeks to understand how a design methodology developed over more than 20 years and enmeshed in the regional milieu of their hometown can transform as the architects encounter unique characteristics and values in the Middle East. By focusing on the contemporary interpretation of Arabic geometry and elements, the houses reveal the role of geometry, tectonics and material specificity in the realization from conceptual sketches to built form. In emphasizing the importance of regional responsiveness, the dynamics of international construction practice and detailing, this study highlights essential issues for professionals and students looking to practice in an increasingly global market.
Keywords: material imagination, regional responsiveness, place identity, poetics of construction
Procedia PDF Downloads 147
86 Acoustic Radiation Force Impulse Elastography of the Hepatic Tissue of Canine Brachycephalic Patients
Authors: A. C. Facin, M. C. Maronezi , M. P. Menezes, G. L. Montanhim, L. Pavan, M. A. R. Feliciano, R. P. Nociti, R. A. R. Uscategui, P. C. Moraes
Abstract:
The incidence of brachycephalic syndrome (BS) in the clinical routine of small animals has increased significantly, given the higher proportion of brachycephalic pets in recent years, and has been considered an animal welfare problem. The treatment of BS is surgical, and the related clinical signs can be considerably attenuated. Nevertheless, the systemic effects of BS are still poorly reported, and little is known about them when surgical correction is not performed early. Affected dogs are more likely to develop cardiopulmonary, gastrointestinal and sleep disorders, in which chronic hypoxemia plays a major role. This syndrome is compared with obstructive sleep apnea (OSA) in humans, both considered causes of systemic and metabolic dysfunction. Among the several consequences of BS, little is known about whether the syndrome also affects the hepatic tissue of brachycephalic patients. Elastography is a promising ultrasound technique that evaluates tissue elasticity and has recently been used for the diagnosis of liver fibrosis. In human medicine, there is growing concern regarding hepatic injury in patients affected by OSA. This prospective study investigates whether there is any consequence of BS in the hepatic parenchyma of brachycephalic dogs that do not receive surgical treatment. This study was conducted following the approval of the Animal Ethics and Welfare Committee of the Faculdade de Ciências Agrárias e Veterinárias, UNESP, Campus Jaboticabal, Brazil (protocol no 17944/2017) and funded by the Sao Paulo Research Foundation (FAPESP, process no 2017/24809-4). The methodology was based on ARFI elastography using the ACUSON S2000/SIEMENS device, with a convex multifrequency transducer and specific software, as well as clinical evaluation of the syndrome, in order to determine whether they can be used as a non-invasive prognostic tool. On quantitative elastography, three measurements of shear wave velocity (in meters per second) and depth (in centimeters) were collected in the left lateral, left medial, right lateral, right medial and caudate lobes of the liver. The brachycephalic patients, 16 pugs and 30 French bulldogs, were classified using a previously established and validated 4-point functional grading system based on clinical evaluation before and after a 3-minute exercise tolerance test. The control group was based on the same features collected in 22 beagles. The software R version 3.3.0 was used for the analysis, and the significance level was set at 0.05. The data were analysed for normality of residuals and homogeneity of variances by the Shapiro-Wilk test. Comparisons of parametric continuous variables between breeds were performed using ANOVA with a post hoc test for pairwise comparison. The preliminary results show statistically significant differences between the brachycephalic groups and the control group in all lobes analysed (p ≤ 0.05), with higher shear wave velocities in the hepatic tissue of brachycephalic dogs. In this context, the results obtained in this study contribute to the understanding of BS as well as its consequences in our patients, providing evidence that one more systemic consequence of the syndrome may occur in brachycephalic patients, which has not yet been reported in the veterinary literature.
Keywords: airway obstruction, brachycephalic airway obstructive syndrome, hepatic injury, obstructive sleep apnea
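The statistical comparison described above can be reproduced in outline with standard tools. The sketch below runs a Shapiro-Wilk normality check and a one-way ANOVA on shear wave velocities for the three groups; the values are simulated stand-ins, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Simulated shear wave velocities [m/s] for one hepatic lobe; the study
# compared brachycephalic groups against beagle controls.
rng = np.random.default_rng(7)
beagles = rng.normal(1.20, 0.15, 22)
pugs = rng.normal(1.45, 0.18, 16)
bulldogs = rng.normal(1.42, 0.17, 30)

# Normality check (Shapiro-Wilk), as in the reported analysis.
for name, group in (("beagles", beagles), ("pugs", pugs), ("bulldogs", bulldogs)):
    _, p = stats.shapiro(group)
    print(f"Shapiro-Wilk {name:9s}: p = {p:.3f}")

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(beagles, pugs, bulldogs)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("significant difference between groups -> run post hoc pairwise tests")
```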
Procedia PDF Downloads 117
85 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050
Authors: Farzaneh Sasanpour, Saeed Amini Varaki
Abstract:
Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; therefore, given the characteristics of risks, resilience is the approach cities should adopt to respond to sensitive conditions in the risk management process. Meanwhile, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. In the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune to the crisis and its effects and consequences and faced many challenges. One of the methods that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type. The general pattern of the research is descriptive-analytical; since it seeks to relate the components, provide urban resilience indicators for pandemic crises and explain scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), the structural analysis of mutual effects (cross-impact analysis) and MICMAC software were used. The primary factors and variables affecting the resilience of Tehran's urban areas were arranged in 5 main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables in total, based on the mutual effects analysis. Finally, the key factors and variables were categorized into five main areas: managerial-institutional with 5 variables, technology (smartness) with 3 variables, economic with 2 variables, socio-cultural with 3 variables and physical-infrastructural with 7 variables. These have been used as the key factors and effective driving forces on the resilience of Tehran's urban areas against pandemic crises (COVID-19) in explaining and developing the scenarios. To develop the scenarios, intuitive logic, scenario planning as one of the futures-research methods and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn and selected with a creative method using the metaphor of weather conditions, indicative of the general outline of the conditions of the Tehran metropolis in each situation: 1) the solar scenario (optimal governance and management, leading in smart technology); 2) the cloud scenario (optimal governance and management, following in smart technology); 3) the dark scenario (unfavorable governance and management, leading in smart technology); and 4) the storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the storm scenario the worst situation for the Tehran metropolis. According to the findings of this research, in order to achieve a better tomorrow for the metropolis of Tehran, city managers can use futures-research methods to form a coherent picture, with the long-term horizon of 2050, of all the factors and components of urban resilience against pandemic crises, to chart the path of the urban resilience movement, and to provide platforms for upgrading and increasing the capacity to deal with crises, so as to create the necessary conditions for the realization, development and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability in all dimensions and at all levels.
Keywords: future research, resilience, crisis, pandemic, covid-19, Tehran
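The structural analysis of mutual effects behind the MICMAC step can be sketched as a matrix computation: direct influences between factors are raised to successive powers so that indirect influence and dependence accumulate, and factors are then ranked by driving power. The influence scores below are invented for illustration, not the study's data.

```python
import numpy as np

# Toy direct-influence matrix among the five factor groups (0-3 scale,
# rows influence columns); entries are invented for illustration.
factors = ["managerial-institutional", "technology (smart)", "economic",
           "socio-cultural", "physical-infrastructural"]
D = np.array([[0, 3, 2, 2, 3],
              [2, 0, 2, 1, 3],
              [1, 2, 0, 1, 2],
              [1, 1, 1, 0, 1],
              [1, 2, 1, 1, 0]], dtype=float)

# MICMAC-style ranking: raising the direct matrix to successive powers
# accumulates indirect influences until the ranking stabilises.
M = D.copy()
for _ in range(6):
    M = M @ D
influence = M.sum(axis=1)    # driving power of each factor
dependence = M.sum(axis=0)   # dependence of each factor

for name, inf, dep in sorted(zip(factors, influence, dependence),
                             key=lambda row: -row[1]):
    print(f"{name:26s} influence {inf:14.0f} dependence {dep:14.0f}")
```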
Procedia PDF Downloads 68