Search results for: complex low-rise building
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8639

179 A Hardware-in-the-loop Simulation for the Development of Advanced Control System Design for a Spinal Joint Wear Simulator

Authors: Kaushikk Iyer, Richard M Hall, David Keeling

Abstract:

Hardware-in-the-loop (HIL) simulation is an advanced technique for developing and testing complex real-time control systems. This paper presents the benefits of HIL simulation and how it can be implemented and used effectively to develop, test, and validate advanced control algorithms used in a spinal joint wear simulator for the tribological testing of spinal disc prostheses. The spinal wear simulator is technologically the most advanced machine currently employed for the in-vitro testing of newly developed spinal disc implants. However, existing control techniques, such as simple position control, do not allow the simulator to test non-sinusoidal waveforms. Thus, there is a need for better, more advanced control methods that can be developed and tested rigorously but safely before deployment into the real simulator. A benchtop HIL setup was created for experimentation, controller verification, and validation purposes, allowing different control strategies to be tested rapidly in a safe environment. The HIL simulation aspect of this setup attempts to replicate similar spinal motion and loading conditions. The spinal joint wear simulator contains a four-bar linkage powered by electromechanical actuators. LabVIEW software is used to design a kinematic model of the spinal wear simulator to validate how each link contributes towards the final motion of the implant under test. As a result, the implant articulates with an angular motion specified in the international standard ISO 18192-1, which defines fixed, simplified, sinusoidal motion and load profiles for wear testing of cervical disc implants. Using a PID controller, a velocity-based position control algorithm was developed to interface with the benchtop setup that performs HIL simulation. In addition to PID, a fuzzy logic controller (FLC) was also developed that acts as a supervisory controller. 
The FLC provides intelligence to the PID controller by automatically tuning it for profiles that vary in amplitude, shape, and frequency. This fuzzy-PID combination is novel to the wear-testing application for spinal simulators and demonstrated superior performance against PID alone when tested across a spectrum of frequencies. The results obtained were successfully validated against the load and motion tolerances specified by the ISO 18192-1 standard and fall within its limits, that is, ±0.5° at the maxima and minima of the motion and ±2% of the complete cycle for phasing. The simulation results prove the efficacy of the test setup using HIL simulation to verify and validate the accuracy and robustness of the prospective controller before its deployment into the spinal wear simulator. This method of testing controllers opens a wide range of possibilities for testing advanced control algorithms, potentially including profiles of patients performing various daily living activities.
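The supervisory fuzzy-PID arrangement described above can be sketched in a few lines. This is an illustrative minimal sketch, not the authors' LabVIEW implementation: the `supervisory_tune` function is a crude stand-in for the fuzzy rule base, and all gains and profile values are invented for demonstration.

```python
# Minimal sketch of a PID position controller with a supervisory gain
# scheduler standing in for the fuzzy logic controller described above.
# All gains and profile parameters are illustrative, not from the paper.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        # Standard textbook PID: proportional + integral + derivative terms.
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def supervisory_tune(pid, amplitude, frequency):
    """Stand-in for the fuzzy rule base: retune the PID gains whenever the
    demanded motion profile changes in amplitude or frequency."""
    pid.kp = 2.0 * (1.0 + 0.5 * frequency)   # stiffer tracking at higher frequency
    pid.ki = 0.5 * amplitude                 # more integral action for large strokes
    pid.kd = 0.05 / max(frequency, 0.1)      # less derivative action for fast profiles

pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.001)
supervisory_tune(pid, amplitude=7.5, frequency=1.0)  # e.g. a 1 Hz angular profile
command = pid.step(setpoint=1.0, measured=0.0)       # actuator command for one tick
```

The point of the supervisory layer is that the inner PID loop stays simple while the outer rules adapt it to non-sinusoidal profiles, which fixed gains handle poorly.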

Keywords: Fuzzy-PID controller, hardware-in-the-loop (HIL), real-time simulation, spinal wear simulator

Procedia PDF Downloads 151
178 Zinc Oxide Varistor Performance: A 3D Network Model

Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic

Abstract:

ZnO varistors are the leading overvoltage protection elements in today’s electronic industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability, and attractive cost of production are unique in this field. Nevertheless, challenges and open questions remain. In particular, the push to create ever smaller, more versatile, and more reliable parts that fit industry’s demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and much work has been done in this field to explain the sudden exponential increase in conductivity, the strict dependence on sintering parameters, as well as the influence of the complex microstructure, is not sufficiently understood. For further enhancement and down-scaling of varistors, a better understanding of the microscopic processes is needed. This work takes a microscopic approach to investigating ZnO varistor performance. In order to cope with the polycrystalline varistor ceramic, and to account for all possible current paths through the material, a realistic model of the microstructure was set up in the form of three-dimensional networks in which every grain has a constant electric potential and voltage drops occur only at the grain boundaries. The electro-thermal workload, depending on different grain size distributions, was investigated, as well as the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods were used, firstly, to feed the simulations with realistic parameters and, secondly, to verify the obtained results. 
These methods are: a micro four-point probe system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor; micro lock-in infrared thermography (MLIRT) to detect current paths; electron backscatter diffraction and piezoresponse force microscopy to determine grain orientations; atom probe analysis to determine atomic substituents; and Kelvin probe force microscopy to investigate grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths which represent only a tiny part of the available volume. This effect could be observed via MLIRT. Furthermore, the simulations show that the electric power density depends on the grain size distribution, since it is inversely proportional to the number of active current paths, which determines the electrically active volume. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of the data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and suggest fabrication recommendations for obtaining more reliable ZnO varistors.
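The core idea of the network model, equipotential grains with all non-linearity concentrated at the boundaries, can be illustrated with a toy one-dimensional analogue. This is a sketch under assumed parameters, not the authors' 3D model: the boundary law I = k·Vᵅ and the values of k and α below are invented for illustration.

```python
# Toy 1D analogue of the grain network: each grain is an equipotential node
# and each grain boundary carries a highly non-linear current I = k * V**alpha,
# with alpha the varistor exponent. For a series path of identical boundaries
# the applied voltage divides equally, so the path current follows directly.
# (k and alpha are illustrative values, not fitted parameters.)

def path_current(v_total, n_boundaries, k=1e-4, alpha=25):
    """Current carried by a series chain of identical non-linear boundaries."""
    v_per_boundary = v_total / n_boundaries
    return k * v_per_boundary ** alpha

# The steep exponent concentrates conduction in paths crossing few boundaries,
# which is why only a tiny fraction of the volume carries current:
i_short = path_current(100.0, n_boundaries=30)  # larger grains, fewer boundaries
i_long = path_current(100.0, n_boundaries=40)   # smaller grains, more boundaries
```

Even this crude analogue reproduces the key qualitative result above: a modest difference in the number of boundaries along a path translates into orders of magnitude in current, localizing conduction onto a few paths.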

Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide

Procedia PDF Downloads 265
177 Managing Inter-Organizational Innovation Project: Systematic Review of Literature

Authors: Lamin B Ceesay, Cecilia Rossignoli

Abstract:

Inter-organizational collaboration is a growing phenomenon in both research and practice. Partnership between organizations enables firms to leverage external resources, experience, and technology that lie with other firms. This collaborative practice is a source of improved business model performance, technological advancement, and increased competitive advantage for firms. However, the competitive intents, and even diverse institutional logics, of firms make inter-firm innovation-based partnership even more complex, and its governance more challenging. The purpose of this paper is to present a systematic review of research linking the inter-organizational relationships of firms with their innovation practice, and to specify the different project management issues and gaps addressed in previous research. To do this, we employed a systematic review of the literature on inter-organizational innovation using two complementary scholarly databases: ScienceDirect and Web of Science (WoS). Article scoping relied on a combination of keywords based on similar terms used in the literature: (1) inter-organizational relationship, (2) business network, (3) inter-firm project, and (4) innovation network. These searches were conducted in the title, abstract, and keywords of conceptual and empirical research papers written in English. Our search covers 2010 to 2019. We applied several exclusion criteria: papers published outside the years under review, papers in a language other than English, papers listed in neither WoS nor ScienceDirect, and papers not sharply related to inter-organizational innovation-based partnership were removed. After all relevant search criteria were applied, a final list of 84 papers constitutes the data for this review. Our review revealed an increasing evolution of inter-organizational relationship research during the period under review. 
The descriptive analysis of papers by journal outlet finds that the International Journal of Project Management (IJPM), the Journal of Industrial Marketing, and the Journal of Business Research (JBR), among others, are the leading outlets for research on inter-organizational innovation projects. The review also finds that qualitative methods and quantitative approaches, respectively, are the leading research methods adopted by scholars in the field, while literature reviews and conceptual papers are the least common. During the content analysis, we read each selected paper and found that it addressed one of three phenomena in inter-organizational innovation research: (1) project antecedents, (2) project management, and (3) project performance outcomes. These categories are not mutually exclusive but rather interdependent, and this categorization helped us organize the fragmented literature in the field. While a significant share of the literature discusses project management issues, we found less extant literature on project antecedents and performance. As a result, we organized the future research agendas addressed in several papers by linking them with the under-researched themes in the field, thus providing great potential to advance future research, especially on those under-researched themes. Finally, our paper reveals that research on inter-organizational innovation projects is generally fragmented, which hinders a better understanding of the field. This paper therefore contributes by organizing and discussing the extant literature to advance the theory and application of inter-organizational relationships.
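The screening described above amounts to a conjunction of filters. Expressed as code purely for illustration (the field names and sample records are hypothetical; the actual screening was performed manually on database exports):

```python
# Hypothetical sketch of the paper-screening criteria described above.
# Field names and the sample records are invented for illustration.

def passes_screening(paper):
    return (2010 <= paper["year"] <= 2019                      # within the review window
            and paper["language"] == "English"                 # English-language papers only
            and paper["database"] in {"WoS", "ScienceDirect"}  # indexed in either database
            and paper["relevant"])                             # sharply related to the topic

corpus = [
    {"year": 2015, "language": "English", "database": "WoS", "relevant": True},
    {"year": 2008, "language": "English", "database": "WoS", "relevant": True},           # too early
    {"year": 2017, "language": "German", "database": "ScienceDirect", "relevant": True},  # not English
]
selected = [p for p in corpus if passes_screening(p)]  # only the first record survives
```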

Keywords: inter-organizational relationship, inter-firm collaboration, innovation projects, project management, systematic review

Procedia PDF Downloads 91
176 Emergency Department Utilisation of Older People Presenting to Four Emergency Departments

Authors: M. Fry, L. Fitzpatrick, Julie Considine, R. Z. Shaban, Kate Curtis

Abstract:

Introduction: The vast majority of older Australians live independently and are self-managing at home, despite a growing number living with a chronic illness that requires health intervention. Evidence shows that between 50% and 80% of people presenting to the emergency department (ED) are in pain. Australian EDs manage 7.2 million attendances every year, and 1.4 million of these are people aged 65 years or more. Research shows that 28% of ED patients aged 65 years or more have cognitive impairment (CI) associated with dementia, delirium and neurological conditions. Background: Traditional ED service delivery may not be suitable for older people who present with multiple, complex and ongoing illnesses. Likewise, ED clinical staff often perceive that their role should be focused on immediate and potentially life-threatening illnesses and conditions which are episodic in nature. Therefore, the needs of older people and their families/carers may not be adequately addressed in the context of an ED presentation. Aim: We aimed to explore the utilisation and characteristics of older people presenting to four metropolitan EDs. Method: The findings presented are part of a program of research exploring pain management practices for older persons with long bone fractures. The study was conducted across four metropolitan emergency departments on older patients (65 years and over) and involved a 12-month randomised medical record audit (n=255). Results: ED presentations across the four sites in 2012 numbered 168,021, with 44,778 (26.6%) patients aged 65 and over. Of these, the average age was 79.1 years (SD 8.5), and there were more females (23,932; 53.5%). The majority (26,925; 85.0%) of older persons self-referred to the ED and lived independently. Many arrived by ambulance (n=18,553; 41.4%), and most were allocated triage category 3 (n=19,507; 43.6%) or triage category 4 (n=15,389; 34.4%). 
The top five triage symptom presentations involved pain (n=8,088; 18.3%), dyspnoea (n=4,735; 10.7%), falls (n=4,032; 9.1%), other (n=3,984; 9.0%) and cardiac (n=2,987; 6.7%). The top five system-based diagnostic presentations involved musculoskeletal (n=8,902; 20.1%), cardiac (n=6,704; 15.0%), respiratory (n=4,933; 11.0%), neurological (n=4,909; 11.0%) and gastroenterological (n=4,321; 9.7%) conditions. On review of one tertiary hospital database, the average vital signs at time of triage were: systolic blood pressure 143.6 mmHg; heart rate 83.4 beats/minute; respiratory rate 18.5 breaths/minute; oxygen saturation 97.0%; tympanic temperature 36.7°C; and blood glucose level 7.4 mmol/litre. The majority presented with a Glasgow Coma Score of 14 or higher. On average, the older person stayed in the ED for 4 hours 56 minutes (SD 3 hours 28 minutes), and the average time to be seen was 39 minutes (SD 48 minutes). The majority of older persons were admitted (n=27,562; 61.5%), some did not wait for treatment (n=8,879; 0.02%), and others were discharged home (n=16,256; 36.0%). Conclusion: The vast majority of older persons are living independently, although many require admission on arrival to the ED. Many arrived in pain and with musculoskeletal injuries and/or conditions. New models of care need to be considered, which may better support self-management and independent living of the older person and the National Emergency Access Targets.

Keywords: chronic, older person, aged care, emergency department

Procedia PDF Downloads 211
175 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp

Authors: Lalit Ahuja, Nancy Das, Yashas Shetty

Abstract:

LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology gives the best light output and low energy consumption, and cost-efficient solutions for driving it are the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp from the regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery driven, a low-cost and highly efficient RR-driven LED Driver Module (LDM) is proposed. The positive half of the magneto output is regulated and used to charge the battery that supplies various peripherals, while conventionally the negative half was used for operating bulb-based exterior lamps. With the advancement of battery-driven LED headlamps, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all engine RPMs. The negative rectified DC output of the RR gives us the advantage of a pulsating DC input which periodically goes to zero, helping us to generate a constant DC output matched to the required LED load; as RPM changes, an additional active thermal bypass circuit helps maintain efficiency and limit thermal rise. The methodology pairs the negative half-wave output of the RR with a linear constant-current driver of significantly higher efficiency. Although the RR output has varying frequency and duty cycle at different engine RPMs, the driver is designed to provide constant current to the LEDs with minimal ripple. LED headlamps usually use a DC-DC switching regulator, which tends to be bulky; with linear regulators, we eliminate bulky components and improve the form factor, making the solution both cost-efficient and compact. 
Presently, ripple-free constant-current drivers with few components and low complexity are limited to lower-power LED lamps, while high-efficiency research often focuses on high-power LED applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with minimal components, maintaining high efficiency within thermal limits. Linear regulators are ordinarily inefficient, with efficiencies typically around 40% and as low as 14%, which leads to poor thermal performance; although they avoid complex and bulky circuitry, powering high-power devices with them is difficult. But with a negative half-wave rectified pulsating DC input, this efficiency can be improved: the input helps generate a constant DC output matched to the LED load, minimising the voltage drop across the linear regulator. Losses are thereby significantly reduced, and efficiency as high as 75% is achieved. As RPM changes, the DC voltage increases, which is managed by the active thermal bypass circuitry, resulting in better thermal performance and avoiding bulky, expensive heat sinks. The methodology thus utilizes the unused negative pulsating DC output of the RR to optimize the use of RR output power and provide a cost-efficient solution compared to costly DC-DC drivers.
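The efficiency argument can be made concrete: a linear constant-current driver burns the headroom voltage as heat, so its efficiency is simply V_LED/V_in. The voltages below are illustrative assumptions chosen only to reproduce the 40% and 75% figures quoted above; they are not measurements from the paper.

```python
# A linear constant-current driver dissipates (v_in - v_led) * I as heat,
# so its efficiency is v_led / v_in. A pulsating DC input whose effective
# level sits just above the LED string voltage wastes far less headroom
# than a stiff, higher rail. Voltages here are illustrative only.

def linear_driver_efficiency(v_in, v_led):
    """Fraction of input power that reaches the LED load."""
    if v_in <= v_led:
        raise ValueError("input must exceed the LED forward voltage")
    return v_led / v_in

eta_fixed_rail = linear_driver_efficiency(v_in=22.5, v_led=9.0)  # 0.40: typical linear-regulator case
eta_pulsating  = linear_driver_efficiency(v_in=12.0, v_led=9.0)  # 0.75: matched pulsating-DC case
```

This is why keeping the effective input close to the LED forward voltage, rather than regulating down from a fixed rail, lets a linear driver compete with a switching DC-DC stage on efficiency while staying smaller and cheaper.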

Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module

Procedia PDF Downloads 43
174 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failure and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confound the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study, in which an engine simulation model was developed using a MATLAB/SIMULINK program, with the engine parameters used in the simulation obtained experimentally from a Toyota 3S-FE 2.0-litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load and clearance on the vibration response. Three different loads (50/80/110 N·m), three different speeds (1500/2000/3000 rpm), and three different clearances (normal, 2 times, and 4 times the normal clearance) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load but was observed to rise significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate the bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine were established first, on the basis of which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. 
Also, the essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard’s wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear. A limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increase in wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the corresponding vibration signals. Thus, on one hand, the developed model demonstrates its capability to explain wear behavior, and on the other, it helps establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
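Archard's model, which the simulation uses to convert bearing loads and sliding into wear, has the form V = K·W·s/H. A minimal sketch follows; the numerical values are illustrative placeholders, not the paper's engine parameters.

```python
# Archard's wear law: worn volume V = K * W * s / H, where K is a
# dimensionless wear coefficient, W the normal load, s the sliding
# distance, and H the hardness of the softer surface.
# The numbers below are illustrative, not taken from the engine model.

def archard_wear_volume(K, load_n, sliding_m, hardness_pa):
    """Worn volume in cubic metres predicted by Archard's model."""
    return K * load_n * sliding_m / hardness_pa

# e.g. a bearing shell under a 1 kN load after 10 km of cumulative sliding:
v_worn = archard_wear_volume(K=1e-7, load_n=1000.0, sliding_m=10_000.0,
                             hardness_pa=1.5e9)
```

In the simulation, W and s vary with crank angle, which is what lets the model attach a wear rate to a specific location on the bearing rather than a single averaged figure.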

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 293
173 The Plight of the Rohingyas: Design Guidelines to Accommodate Displaced People in Bangladesh

Authors: Nazia Roushan, Maria Kipti

Abstract:

The sensitive issue of a large-scale entry of Rohingya refugees into Bangladesh has arisen again since August 2017. Driven by ethnic and religious conflict, the Rohingyas, an ethnic group concentrated in the western state of Rakhine in Myanmar, have been fleeing to what is now Bangladesh from as early as the late 1700s in four main exoduses. This long-standing persecution has recently escalated, and accommodating the latest exodus has been especially challenging due to the sheer volume of a million refugees concentrated in camps in two small administrative units (upazilas) in the south-east of the country: the host area. This drastic change in the host area’s social fabric is straining the country’s economic, demographic and environmental stability, and its security. Although Bangladesh’s long-term experience with disaster management has enabled it to respond rapidly to the crisis, the government is struggling to cope with this enormous problem and has taken insufficient steps towards improving living conditions, so as to inhibit the inflow of more refugees. On top of that, the absence of a comprehensive national refugee policy and the density of the camp structures are constricting the upgrading of the shelters to international standards. As of December 2016, the combined number of internally displaced persons (IDPs) in Bangladesh due to conflict and violence (stock) and new displacements due to disasters (flow) had exceeded 1 million; these numbers have increased dramatically in the last few months. Moreover, by 2050, Bangladesh may have as many as 25 million climate refugees from its coastal districts alone. To enhance the resilience of the vulnerable, it is crucial to methodically connect further interventions between Disaster Risk Reduction for Resilience (DRR) and the concept of Building Back Better (BBB) in the rehabilitation-reconstruction period. 
Considering these points, this paper provides a palette of options for design guidelines for the living spaces and infrastructure of refugees, encouraging the development of national standards for refugee camps and of national- and local-level rehabilitation-reconstruction practices. Unhygienic living conditions, vulnerability, and a general lack of control over life are pervasive throughout the camps. This paper therefore proposes site-specific strategic and physical planning and design of shelters for refugees in Bangladesh that will lead to sustainable living environments, through the following: a) a site survey of two registered and one makeshift unregistered refugee camp to document and study their physical conditions; b) questionnaires and semi-structured focus group discussions carried out among the refugees and stakeholders to understand their lived experiences and needs; and c) combining the findings with international minimum standards for shelter and settlement from the International Federation of Red Cross and Red Crescent Societies (IFRC), Médecins Sans Frontières (MSF), and the United Nations High Commissioner for Refugees (UNHCR). The proposals include temporary shelter solutions that balance lived spaces against regimented, repetitive plans using readily available and cheap materials; erosion control and slope stabilization strategies; and, most importantly, coping mechanisms for the refugees to become self-reliant and resilient.

Keywords: architecture, Bangladesh, refugee camp, resilience, Rohingya

Procedia PDF Downloads 213
172 Transition Metal Bis(Dicarbollide) Complexes in Design of Molecular Switches

Authors: Igor B. Sivaev

Abstract:

Design of molecular machines is an extraordinarily fast-growing and very important area of research, recognized by the award of the 2016 Nobel Prize in Chemistry to Sauvage, Stoddart and Feringa 'for the design and synthesis of molecular machines'. Based on the type of motion performed, molecular machines can be divided into two main types: molecular motors and molecular switches. Molecular switches are molecules or supramolecular complexes exhibiting bistability, i.e., the ability to exist in two or more stable forms between which reversible transitions may occur under external influence (heating, light, changes in the acidity of the medium, the action of chemicals, or exposure to magnetic or electric fields). Molecular switches are the main structural element of any molecular electronics device. Therefore, the design and study of molecules and supramolecular systems capable of performing mechanical movement is an important and urgent problem of modern chemistry. There is growing interest in molecular switches and other molecular electronics devices based on transition metal complexes; the choice of a suitable stable organometallic unit is therefore of great importance. An example of such a unit is the bis(dicarbollide) complexes of transition metals [3,3’-M(1,2-C₂B₉H₁₁)₂]ⁿ⁻. Control of ligand rotation in such complexes can be achieved by introducing substituents which, on the one hand, stabilize certain rotamers through specific interactions between the ligands and, on the other hand, can participate as Lewis bases in complex formation with external metals, resulting in a change in the rotation angle of the ligands. 
A series of isomeric methyl sulfide derivatives of cobalt bis(dicarbollide) containing methyl sulfide substituents at boron atoms in different positions of the pentagonal face of the dicarbollide ligands, [8,8’-(MeS)₂-3,3’-Co(1,2-C₂B₉H₁₀)₂]⁻, rac-[4,4’-(MeS)₂-3,3’-Co(1,2-C₂B₉H₁₀)₂]⁻ and meso-[4,7’-(MeS)₂-3,3’-Co(1,2-C₂B₉H₁₀)₂]⁻, was synthesized by the reaction of CoCl₂ with the corresponding methyl sulfide carborane derivatives, such as [10-MeS-7,8-C₂B₉H₁₁]⁻. In the case of the asymmetrically substituted cobalt bis(dicarbollide) complexes, the corresponding rac- and meso-isomers were successfully separated by column chromatography as the tetrabutylammonium salts. The compounds obtained were studied by ¹H, ¹³C, and ¹¹B NMR spectroscopy, single-crystal X-ray diffraction, cyclic voltammetry, controlled-potential coulometry, and quantum chemical calculations. It was found that, in the solid state, the transoid and gauche conformations of the 8,8’- and 4,4’-isomers are each stabilized by four intramolecular CH···S(Me)B hydrogen bonds (2.683-2.712 Å and 2.709-2.752 Å, respectively), whereas the gauche conformation of the 4,7’-isomer is stabilized by two intramolecular CH···S hydrogen bonds (2.699-2.711 Å). The existence of the intramolecular CH···S(Me)B hydrogen bonding in solution was supported by ¹H NMR spectroscopy. These data are in good agreement with the results of the quantum chemical calculations. The corresponding iron and nickel complexes were synthesized as well. The reaction of the methyl sulfide derivatives of cobalt bis(dicarbollide) with various labile transition metal complexes results in rupture of the intramolecular hydrogen bonds and complexation of the methyl sulfide groups with the external metal. This stabilizes a different rotational conformation of cobalt bis(dicarbollide) and can be used in the design of molecular switches. This work was supported by the Russian Science Foundation (16-13-10331).

Keywords: molecular switches, NMR spectroscopy, single crystal X-ray diffraction, transition metal bis(dicarbollide) complexes, quantum chemical calculations

Procedia PDF Downloads 147
171 Microplastics in Urban Environment – Coimbra City Case Study

Authors: Inês Amorim Leitão, Loes van Shaick, António Dinis Ferreira, Violette Geissen

Abstract:

Plastic pollution is a growing concern worldwide: plastics are commercialized in large quantities and take a long time to degrade. In the environment, plastic fragments into microplastics (<5 mm), which have been found in all environmental compartments at different locations. Microplastics contribute to environmental pollution in water, air and soil and are linked to human health problems. The progressive increase of population living in cities has aggravated the pollution problem worldwide, especially in urban environments. Urban areas represent a strong source of pollution through roads, industrial production, wastewater, landfills, etc. Pollutants such as microplastics are expected to be transported diffusely from these sources through different pathways such as wind and rain. It is therefore very complex to quantify, control and treat these pollutants, which the European Commission has designated as current problematic issues. Green areas are pointed out by experts as natural filters for contaminants in cities through their capacity for retention by vegetation; these spaces thus have the capacity to control the load of pollutants transported. This study investigates the spatial distribution of microplastics in urban soils of different land uses; their transport through atmospheric deposition, wind erosion, runoff and streams; and their deposition on vegetation such as grass and tree leaves in the urban environment. Coimbra, a medium-large city in central Portugal, is the case study. All soil, sediment, water and vegetation samples were collected in Coimbra and later analyzed in the Wageningen University & Research laboratory. Microplastics were extracted by density separation using a sodium phosphate solution (~1.4 g cm⁻³) followed by filtration, visualized under a stereo microscope, and identified using µ-FTIR. Microplastic particles were found in all the different samples. 
In terms of soils, higher concentrations of microplastics were found in green parks, followed by landfills and industrial sites, with the lowest concentrations in forest and pasture land uses. Atmospheric deposition and streams after rainfall events seem to represent the strongest pathways for microplastics. Tree leaves can retain microplastics on their surfaces; small leaves, such as needle leaves, seem to carry higher amounts of microplastics per leaf area than bigger leaves. Rainfall episodes seem to reduce the concentration of microplastics on leaf surfaces, which suggests that microplastics are washed down to lower levels of the tree or to the soil. Once in soil, different types of microplastics can be transported back to the atmosphere through wind erosion. Grass seems to present high concentrations of microplastics, and enlarging the grass cover reduces both the amount of microplastics in soil and the amount moved from the ground to the atmosphere by wind erosion. This study proves that vegetation can help to control the transport and dispersion of microplastics. In order to control the entry and concentration of microplastics in the environment, especially in cities, it is essential to define and evaluate nature-based land-use scenarios that consider the role of green urban areas in filtering small particles.
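The density-separation step works because common polymers are lighter than the ~1.4 g cm⁻³ sodium phosphate solution, while mineral grains are heavier. In miniature (the polymer densities below are typical literature values, not measurements from this study):

```python
# Float/sink logic behind the density separation described above: particles
# less dense than the ~1.4 g/cm3 sodium phosphate solution float and are
# recovered on the filter; denser mineral grains sink. Polymer densities
# below are typical literature values, not measurements from this study.

SOLUTION_DENSITY = 1.4  # g/cm3, as stated in the abstract

def floats(particle_density):
    """True if a particle is buoyant in the separation solution."""
    return particle_density < SOLUTION_DENSITY

densities = {"PE": 0.95, "PP": 0.90, "PET": 1.38, "quartz sand": 2.65}
recovered = [name for name, rho in densities.items() if floats(rho)]
# polyethylene, polypropylene and PET float; quartz sinks with the sediment
```

Note that PET only just clears the cutoff, which is one reason a relatively dense salt solution is chosen over plain water (~1.0 g cm⁻³) for soil and sediment samples.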

Keywords: microplastics, cities, sources, pathways, vegetation

Procedia PDF Downloads 35
170 New Territories: Materiality and Craft from Natural Systems to Digital Experiments

Authors: Carla Aramouny

Abstract:

Digital fabrication, through advancements in software and machinery, is pushing practice today towards more complexity in design, allowing for unparalleled explorations. It gives designers the immediate capacity to turn their imagined objects into physical results. Yet at no time have questions of material knowledge been more relevant and crucial, as technological advancements approach a radical re-invention of the design process. As more and more designers look to tactile crafts for material know-how, an interest in natural behaviors has also emerged, aiming to embed intelligence from nature into designed objects. Concerned with enhancing their immediate environment, designers today are pushing the boundaries of design by bringing in natural systems, materiality, and advanced fabrication as essential processes to produce active designs. New Territories, a yearly architecture and design course on digital design and materiality, allows students to explore processes of digital fabrication at the intersection of natural systems and hands-on experiments. This paper highlights the importance of learning from nature and from physical materiality in a digital design process, and shows how the simultaneous move between the digital and physical realms has become an essential design method. It details the work done over the course of three years on themes of natural systems, crafts, concrete plasticity, and active composite materials. The aim throughout the course is to explore the design of products and active systems, be it modular facades, intelligent cladding, or adaptable seating, by embedding current digital technologies with an understanding of natural systems and a physical know-how of material behavior. From this aim, three main themes of inquiry have emerged through the varied explorations across the three years, each approaching materiality and digital technologies through a different lens.
The first theme crosses the study of natural systems, as precedents for intelligent formal assemblies, with traditional craft methods. The students worked on designing performative facade systems, starting from the study of a relevant natural system and a specific craft, and then using parametric modeling to develop their modular facades. The second theme crosses craft and digital technologies through form-finding techniques and elastic material properties, bringing flexible formwork into the digital fabrication process. Students explored concrete plasticity and behaviors with natural references as they worked on the design of an exterior seating installation using lightweight concrete composites and complex casting methods. The third theme brings bio-composite material properties together with additive fabrication and environmental concerns to create performative cladding systems. Students experimented with concrete composite materials, biomaterials, and clay 3D printing to produce different cladding and tiling prototypes that actively enhance their immediate environment. This paper thus details the work process carried out by the students under these three themes of inquiry, describing their material experimentation, their digital and analog design methodologies, and their final results. It aims to shed light on the persisting importance of material knowledge as it intersects with advanced digital fabrication, and on the significance of learning from natural systems and biological properties to embed active performance in today’s design process.

Keywords: digital fabrication, design and craft, materiality, natural systems

Procedia PDF Downloads 108
169 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon diluent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from the fission products and minor actinides. During normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture, called red oil, when it comes into contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives and heavy-metal nitrate complexes on the other. The decomposition of red oil can lead to violent, explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions involving TBP and its degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates the thermal and kinetic functions related to the deterioration of uranyl nitrates in the organic and aqueous phases, but not those of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need for the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP) and UO₂(NO₃)₂(HDBP)₂ complexes.
In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). In the first part of the project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations were performed to study several reactions leading to the formation of monobutyl (H₂MBP) and dibutyl (HDBP) phosphates and of TBP in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional with triple-ζ def2-TZVP basis sets on all atoms. All geometries were optimized in the gas phase, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of the project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods as for TBP and its derivatives, in both the gas and the liquid phase. We discuss the structures and thermodynamic properties of all these species.
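The step from computed reaction enthalpies to standard formation enthalpies follows Hess's law. The sketch below shows the bookkeeping only; the hydrolysis-type reaction and every number in it are placeholders for illustration, not values obtained in the study.

```python
# Hess's-law sketch: back out an unknown standard formation enthalpy from a
# computed reaction enthalpy and known formation enthalpies of the other
# participants. All numbers are illustrative placeholders (kJ/mol).

def formation_enthalpy(delta_r_H, dfH_reactants, dfH_known_products):
    """ΔfH(target) = ΔrH + Σ ΔfH(reactants) - Σ ΔfH(known products)."""
    return delta_r_H + sum(dfH_reactants) - sum(dfH_known_products)

# Hypothetical reaction: TBP + H2O -> HDBP + BuOH
delta_r_H = -50.0                    # computed reaction enthalpy (placeholder)
dfH_reactants = [-1200.0, -285.8]    # ΔfH(TBP), ΔfH(H2O)  (placeholders)
dfH_known_products = [-327.0]        # ΔfH(BuOH)           (placeholder)

dfH_HDBP = formation_enthalpy(delta_r_H, dfH_reactants, dfH_known_products)
print(f"{dfH_HDBP:.1f} kJ/mol")
```

In the actual workflow, the reaction enthalpy would come from the LCCSD(T)/CBS electronic energies plus the vibrational thermal corrections, with COSMO-RS contributions added for liquid-phase species.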

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 171
168 Framework to Organize Community-Led Project-Based Learning at a Massive Scale of 900 Indian Villages

Authors: Ayesha Selwyn, Annapoorni Chandrashekar, Kumar Ashwarya, Nishant Baghel

Abstract:

Project-based learning (PBL) activities are typically implemented in technology-enabled schools by highly trained teachers. In rural India, students have limited access to technology and quality education, and implementing typical PBL activities is challenging. This study details how Pratham Education Foundation’s Hybrid Learning model was used to implement two music-related PBL activities in 900 remote Indian villages with 46,000 students aged 10-14. The activities were completed by 69% of the groups, which submitted a total of 15,000 videos (completed projects). Pratham’s H-Learning model reaches 100,000 students aged 3-14 in 900 Indian villages. The community-driven model engages students in 20,000 self-organized groups outside of school. The students are guided by 6,000 youth volunteers and 100 facilitators. The students partake in learning activities across subjects with the support of community stakeholders and offline digital content on shared Android tablets. A training and implementation toolkit for PBL activities is designed by subject experts. This toolkit is essential for the efficient implementation of activities, as facilitators are not highly skilled and have limited access to training resources. The toolkit details the activity at three levels of student engagement: enrollment, participation, and completion. The subject experts train project leaders and facilitators, who train youth volunteers. Volunteers need to be trained on how to execute the activity and guide students. The training is focused on building the volunteers’ capacity to enable students to solve problems, rather than on developing the volunteers’ subject-related knowledge. This structure ensures that continuous intervention by subject matter experts is not required, and the onus of judging creativity skills is placed on community members. 46,000 students in the H-Learning program were engaged in two PBL activities related to music from April-June 2019.
For one activity, students had to conduct a “musical survey” in their village by designing the survey and shooting and editing a video. This activity aimed to develop students’ information retrieval, data gathering, teamwork, communication, project management, and creativity skills. It also aimed to identify talent and document local folk music. The second activity, “Pratham Idol”, was a singing competition in which students performed, produced, and edited videos. This activity aimed to develop students’ teamwork and creative skills and to give them a creative outlet. Students showcased their completed projects at village fairs, where a panel of community members evaluated the videos. The shortlisted videos from all villages were further evaluated by experts, who identified students and adults to participate in advanced music workshops. The H-Learning framework enables students in low-resource settings to engage in PBL and develop relevant skills by leveraging community support and using video creation as a tool. In rural India, students do not have access to high-quality education or infrastructure. Therefore, designing activities that can be implemented by community members after limited training is essential. The subject experts intervene minimally once the activity is initiated, which significantly reduces the cost of implementation and allows the activity to be implemented at a massive scale.

Keywords: community supported learning, project-based learning, self-organized learning, education technology

Procedia PDF Downloads 160
167 Understanding Beginning Writers' Narrative Writing with a Multidimensional Assessment Approach

Authors: Huijing Wen, Daibao Guo

Abstract:

Writing is thought to be the most complex facet of the language arts. Assessing writing is difficult and subjective, and few scientifically validated assessments exist. Research has proposed evaluating writing using a multidimensional approach, including both qualitative and quantitative measures of handwriting, spelling, and prose. Given that narrative writing has historically been a staple of literacy instruction in the primary grades and is one of the three major genres the Common Core State Standards (CCSS) require students to acquire starting in kindergarten, it is essential for teachers to understand how to measure beginning writers’ development and sources of writing difficulty through narrative writing. Guided by theoretical models of early written expression and using empirical data, this study examines how teachers can enact a comprehensive approach to understanding beginning writers’ narrative writing through three writing rubrics developed for a curriculum-based measurement (CBM). The goal is to help classroom teachers structure a framework for assessing early writing in primary classrooms. Participants in this study included 380 first-grade students from 50 classrooms in 13 schools in three school districts in a Mid-Atlantic state. Three writing tests were used to assess first graders’ writing skills in relation to both transcription (i.e., handwriting fluency and spelling tests) and translational skills (i.e., a narrative prompt). First graders were asked to respond to a narrative prompt in 20 minutes. Grounded in theoretical models of early written expression and empirical evidence of key contributors to early writing, all written samples were coded in three ways for different dimensions of writing: length, quality, and genre elements. To measure the quality of the narrative writing, a traditional holistic rating rubric was developed by the researchers based on the CCSS and the general traits of good writing.
Students' genre knowledge was measured using a separate analytic rubric for narrative writing. Findings showed that first graders had emerging and limited transcriptional and translational skills, with nascent knowledge of genre conventions. The findings support the Not-So-Simple View of Writing, in that fluent written expression, measured by length, and other important linguistic resources, measured by the overall quality and genre-knowledge rubrics, are fundamental in early writing development. Our study echoed previous findings on children's narrative development. The study has practical classroom applications, as it informs writing instruction and assessment. It offers practical guidelines for classroom instruction by giving teachers a better understanding of first graders' narrative writing skills and knowledge of genre conventions. Understanding students’ narrative writing gives teachers more insight into the specific strategies students might use during writing and into their understanding of good narrative writing. Additionally, it is important for teachers to differentiate writing instruction, given the individual differences revealed by our multiple writing measures. Overall, the study sheds light on beginning writers’ narrative writing, indicating the complexity of early writing development.

Keywords: writing assessment, early writing, beginning writers, transcriptional skills, translational skills, primary grades, simple view of writing, writing rubrics, curriculum-based measurement

Procedia PDF Downloads 51
166 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces.
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
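The two core ideas, visiting each cell exactly once via equidistant scan-lines and checking the connectivity of consecutive spans, can be sketched in a 2-D analogue. This toy rasterizer is our own illustration of the principle, not the paper's GLSL compute shader; the gap test here simply flags rows whose spans are not 8-connected to the previous row.

```python
# 2-D scan-line sketch: each horizontal scan-line passes through the centres
# of one row of cells, so every cell of the triangle's interior is marked
# exactly once. A toy gap check flags rows whose span does not touch the
# previous row's span (a connectivity gap a naive rasterizer would leave).

def rasterize(tri, height, width):
    cells, gaps, prev_span = set(), [], None
    for row in range(height):
        y = row + 0.5                                  # scan-line through cell centres
        xs = []
        for (x0, y0), (x1, y1) in zip(tri, tri[1:] + tri[:1]):
            if (y0 <= y < y1) or (y1 <= y < y0):       # edge crosses the scan-line
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) >= 2:
            lo = max(0, int(min(xs)))
            hi = min(width - 1, int(max(xs)))
            for col in range(lo, hi + 1):
                cells.add((col, row))                  # each cell marked once
            if prev_span and (lo > prev_span[1] + 1 or hi < prev_span[0] - 1):
                gaps.append(row)                       # spans are not 8-connected
            prev_span = (lo, hi)
        else:
            prev_span = None                           # scan-line misses the triangle
    return cells, gaps

tri = [(1.0, 0.5), (8.5, 1.5), (2.0, 6.5)]
cells, gaps = rasterize(tri, 8, 10)
print(len(cells), gaps)
```

Note that a scan-line through cell centres can miss a triangle's apex entirely (the row containing the topmost vertex here yields no span), which is precisely the kind of omission a gap-detection pass must repair in the 3-D setting.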

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 46
165 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users Value and Use Intention of Mobility Management Travel App

Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira

Abstract:

The increasing complexity of and demand for transport services strain transportation systems, especially in urban areas with limited possibilities for building new infrastructure. Meeting this challenge requires changes in travel behavior. One of the proposed means to induce such change is the multimodal travel app. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward sustainable transport options. The proposed app is a multi-faceted smartphone app including both travel information and persuasive strategies such as health and environmental feedback, tailored travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging, and gamification elements. The potential of mobility-management travel apps to stimulate sustainable mobility rests not only on the original and proper employment of behavior change strategies, but also on explicitly anchoring them in established theoretical constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding mobility-management travel apps grounded in behavioral theories, which should be explored further. This study addresses that gap with a social cognitive theory-based examination. In contrast to the conventional methods of technology adoption research, however, this study adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm, a hybrid causal discovery method. A technology-use preference survey was designed to collect data.
The survey elicited different groups of variables, including (1) three groups of user motives for using the app, namely gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment) and normative motives (e.g., lower travel-related CO2 production), (2) technology-related self-concept (i.e., technophile attitude) and (3) use intention of the travel app. The questionnaire items were then used as input to causal structure learning. Discovering causal relationships from observational data is a critical challenge with applications in many research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app, as well as of technology self-concept, as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on developing gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of user motives. These can be explained by the “frustration-regression” principle of Alderfer's ERG (Existence, Relatedness and Growth) theory of needs, meaning that when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy. To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and, accordingly, to interpret the established associations.
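The score-based phase of a search like MMHC can be illustrated with a toy example: greedy hill-climbing over DAG edges with a BIC score on synthetic binary "survey" data. The variable names mirror the paper's constructs, but the data-generating process and this pure-Python search are our own simplification, not the MMHC implementation used in the study (MMHC additionally restricts candidate edges with conditional-independence tests before climbing).

```python
import itertools, math, random

# Toy score-based structure search: hill-climb over DAG edges with a BIC
# score on synthetic binary data where technophilia -> gain_motive -> intention.
random.seed(0)
VARS = ["technophilia", "gain_motive", "intention"]

def sample():
    tech = int(random.random() < 0.5)
    gain = int(random.random() < (0.8 if tech else 0.2))
    use = int(random.random() < (0.9 if gain else 0.1))
    return {"technophilia": tech, "gain_motive": gain, "intention": use}

DATA = [sample() for _ in range(2000)]

def local_bic(child, parents):
    counts = {}
    for row in DATA:                       # tally child outcomes per parent config
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        n = n0 + n1
        for k in (n0, n1):
            if k:
                ll += k * math.log(k / n)  # max log-likelihood contribution
    return ll - 0.5 * math.log(len(DATA)) * len(counts)  # BIC penalty

def bic(dag):                              # dag maps child -> set of parents
    return sum(local_bic(v, sorted(dag[v])) for v in VARS)

def acyclic(dag):                          # Kahn's algorithm on the parent map
    indeg = {v: len(dag[v]) for v in VARS}
    order = [v for v in VARS if indeg[v] == 0]
    for v in order:
        for c in VARS:
            if v in dag[c]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    order.append(c)
    return len(order) == len(VARS)

dag, improved = {v: set() for v in VARS}, True
while improved:                            # greedy: toggle the single best edge
    improved, best, best_dag = False, bic(dag), None
    for u, v in itertools.permutations(VARS, 2):
        cand = {k: set(ps) for k, ps in dag.items()}
        cand[v].discard(u) if u in cand[v] else cand[v].add(u)
        if acyclic(cand):
            score = bic(cand)
            if score > best:
                best, best_dag = score, cand
    if best_dag is not None:
        dag, improved = best_dag, True

edges = sorted((u, v) for v in VARS for u in dag[v])
print(edges)
```

With strong simulated effects and 2,000 rows, the recovered skeleton links technophilia with gain_motive and gain_motive with intention; note that edge orientations within a Markov equivalence class are not identifiable from the score alone, which is one reason hybrid methods and domain theory are combined in practice.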

Keywords: travel app, behavior change, persuasive technology, travel information, causality

Procedia PDF Downloads 120
164 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has its own history of growth, that is, a succession of different concepts and writing techniques. The trajectory along which the phenomenon settled does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate in the (unintentional) differences/'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all within a flexible, non-periodic/immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. The explanation is simple if we consider the causal relationship between the elements of the sound vocabulary (in this case, modalism) and the typologies of vertical organization appropriate to it. Therefore, extending the 'classic' pathway of writing typologies (monody, polyphony, homophony), heterophony, applied equally to structures of modal, serial or synthesis vocabulary, necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata.
Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu experimented, alongside the mathematical organization of heterophony according to his own original methods, with the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of 'synchrony'. Founded on the coincidentia oppositorum principle (involving the 'one-multiple' binomial), the sound architecture imagined by Ştefan Niculescu consists of a (temporal) model/algorithm articulating two sound states: 1. a monovocality state (principle of identity) and 2. a multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer developed practically throughout his creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu, Symphony II, Opus dacicum, in which the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.
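The articulation algorithm of the two sound states can be caricatured computationally. The sketch below is entirely our own invention, not Niculescu's method: a model line is rendered for several voices, with 'mono' segments in strict unison (identity) and 'multi' segments ornamented independently per voice (difference).

```python
import random

# Toy 'synchrony' articulation: a time axis alternates between a monovocality
# state (all voices in unison) and a multivocality state (each voice deviates
# from the model line). Melody and deviation scheme are invented illustrations.
random.seed(1)
MODEL_LINE = [60, 62, 64, 65, 67, 65, 64, 62]   # MIDI pitches of the model line

def articulate(states, n_voices=4):
    """Render one pitch list per voice; 'mono' copies the model, 'multi' ornaments it."""
    voices = [[] for _ in range(n_voices)]
    for state, pitch in zip(states, MODEL_LINE):
        for voice in voices:
            if state == "mono":
                voice.append(pitch)                                # strict unison
            else:
                voice.append(pitch + random.choice([-2, -1, 0, 1, 2]))  # free deviation
    return voices

states = ["mono", "mono", "multi", "multi", "multi", "mono", "multi", "mono"]
voices = articulate(states)
# Count distinct simultaneous pitches at each time step: 1 in every 'mono' position.
print([len({v[i] for v in voices}) for i in range(len(MODEL_LINE))])
```

The point of the caricature is structural: the one/multiple alternation is itself the form-generating datum, independent of which ornaments the voices choose.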

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 322
163 A Compact Standing-Wave Thermoacoustic Refrigerator Driven by a Rotary Drive Mechanism

Authors: Kareem Abdelwahed, Ahmed Salama, Ahmed Rabie, Ahmed Hamdy, Waleed Abdelfattah, Ahmed Abd El-Rahman

Abstract:

Conventional vapor-compression refrigeration systems rely on typical refrigerants such as CFCs, HCFCs and ammonia. Despite their suitable thermodynamic properties and their stability in the atmosphere, their global warming and ozone depletion potentials raise concerns about their usage. Thus, the need for new refrigeration systems that are environment-friendly, inexpensive and simple in construction has strongly motivated the development of thermoacoustic energy-conversion systems. A thermoacoustic refrigerator (TAR) is a device consisting mainly of a resonator, a stack and two heat exchangers. Typically, the resonator is a long circular tube, made of copper or steel and filled with helium as the working gas, while the stack consists of short, parallel ceramic plates of relatively low thermal conductivity, aligned with the direction of the prevailing resonant wave. The resonator of a standing-wave refrigerator typically has one closed end and is bounded by the acoustic driver at the other, enabling the propagation of a half-wavelength acoustic excitation. The hot and cold heat exchangers are made of copper to allow efficient heat transfer between the working gas and the external heat source and sink, respectively. TARs are interesting because they have no moving parts, unlike conventional refrigerators, and have almost no environmental impact, as they rely on the conversion of acoustic and heat energies. Their fabrication process is rather simple, and their sizes span a wide variety of length scales. The viscous and thermal interactions between the stack plates, the heat exchangers' plates and the working gas significantly affect the flow field within the plates' channels and the energy flux density at the plates' surfaces, respectively. Here, the design, manufacture and testing of a compact refrigeration system based on thermoacoustic energy-conversion technology is reported.
A 1-D linear acoustic model is carefully developed, followed by the hardware build and testing procedures. The system consists of two harmonically oscillating pistons driven by a simple 1-HP rotary drive mechanism operating at a frequency of 42 Hz (hereby replacing the typical, expensive linear motors and loudspeakers) and a thermoacoustic stack within which the energy conversion of sound into heat takes place. Air at ambient conditions is used as the working gas, while the amplitude of the driver's displacement reaches 19 mm. The 30-cm-long stack is a simple porous ceramic material with 100 square channels per square inch. During operation, both the oscillating gas pressure and the solid stack temperature are recorded for further analysis. Measurements show a maximum temperature difference of about 27 degrees between the stack's hot and cold ends, with a Carnot coefficient of performance of 11 and an estimated cooling capacity of five watts when operating at ambient conditions. A dynamic pressure of 7 kPa amplitude is recorded, yielding a drive ratio of approximately 7%, in good agreement with theoretical prediction. The system behavior is clearly non-linear, and significant non-linear loss mechanisms are evident. This work helps in understanding the operating principles of thermoacoustic refrigerators and presents a keystone towards developing commercial thermoacoustic refrigerator units.
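The reported operating point can be checked with two lines of arithmetic: the drive ratio is the dynamic-pressure amplitude divided by the mean pressure, and the quoted Carnot coefficient of performance follows from the stack-end temperatures. The pressure amplitude and temperature span are taken from the abstract; the ~300 K ambient temperature is our assumption.

```python
# Back-of-envelope check of the reported thermoacoustic operating point.
p_amplitude = 7_000.0        # Pa, measured dynamic-pressure amplitude
p_mean = 101_325.0           # Pa, ambient air as the working gas (assumed)

drive_ratio = p_amplitude / p_mean
print(f"drive ratio = {drive_ratio:.1%}")        # close to the reported ~7%

T_cold = 300.0               # K, assumed ambient cold-end temperature
T_hot = T_cold + 27.0        # K, using the measured 27-degree stack span
cop_carnot = T_cold / (T_hot - T_cold)
print(f"Carnot COP = {cop_carnot:.1f}")          # close to the reported 11
```

Both figures reproduce the abstract's numbers, which is a useful consistency check: the small temperature span relative to the absolute temperature is what makes the Carnot limit so large.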

Keywords: refrigeration system, rotary drive mechanism, standing-wave, thermoacoustic refrigerator

Procedia PDF Downloads 352
162 Exploring the Ethics and Impact of Slum Tourism in Kenya: A Critical Examination on the Ethical Implications, Legalities and Beneficiaries of This Trade and Long-Term Implications to the Slum Communities

Authors: Joanne Ndirangu

Abstract:

Delving into the intricate landscape of slum tourism in Kenya, this study critically evaluates its ethical implications, legal frameworks, and beneficiaries. By examining the complex interplay between tourism operators, visitors, and slum residents, it seeks to uncover the long-term consequences for the communities involved. Through an exploration of ethical considerations, legal parameters, and the distribution of benefits, this examination aims to shed light on the broader socio-economic impacts of slum tourism in Kenya, particularly on the lives of those residing in these marginalized communities. The study's objectives are: assessing the ethical considerations surrounding slum tourism in Kenya, including the potential exploitation of residents and cultural sensitivities; examining the legal frameworks governing slum tourism in Kenya and evaluating their effectiveness in protecting the rights and well-being of slum dwellers; identifying the primary beneficiaries of slum tourism in Kenya, including tour operators, local businesses, and residents, and analysing the distribution of economic benefits; exploring the long-term socio-economic impacts of slum tourism on the lives of residents, including changes in living conditions, access to resources, and community development; understanding the motivations and perceptions of tourists participating in slum tourism in Kenya and assessing their role in shaping the industry's dynamics; investigating the potential for sustainable and responsible forms of slum tourism that prioritize community empowerment, cultural exchange, and mutual respect; and providing recommendations for policymakers, tourism stakeholders, and community organizations to promote ethical and sustainable practices in slum tourism in Kenya.
The main contributions of researching slum tourism in Kenya include the following. Ethical awareness: by critically examining the ethical implications of slum tourism, the research can raise awareness among tourists, operators, and policymakers about the potential exploitation of marginalized communities. Beneficiary analysis: by identifying the primary beneficiaries of slum tourism, the research can inform discussions on the fair distribution of economic benefits and potential strategies for ensuring that local communities derive meaningful advantages from tourism activities. Socio-economic understanding: by exploring the long-term socio-economic impacts of slum tourism, the research can deepen understanding of how tourism activities affect the lives of slum residents, potentially informing policies and initiatives aimed at improving living conditions and promoting community development. Tourist perspectives: understanding the motivations and perceptions of tourists participating in slum tourism can provide valuable insights into consumer behaviour and preferences, informing the development of responsible tourism practices and marketing strategies. Promotion of responsible tourism: by providing recommendations for promoting ethical and sustainable practices in slum tourism, the research can contribute to the development of guidelines and initiatives aimed at fostering responsible tourism and minimizing negative impacts on host communities. Overall, the research can contribute to a more comprehensive understanding of slum tourism in Kenya and its broader implications, while also offering practical recommendations for promoting ethical and sustainable tourism practices.

Keywords: slum tourism, dark tourism, ethical tourism, responsible tourism

Procedia PDF Downloads 34
161 Prompt Photons Production in Compton Scattering of Quark-Gluon and Annihilation of Quark-Antiquark Pair Processes

Authors: Mohsun Rasim Alizada, Azar Inshalla Ahmdov

Abstract:

Prompt photons are perhaps the most versatile tools for studying the dynamics of relativistic collisions of heavy ions. The study of photon radiation is also of interest because, in most hadron interactions, photons fly out as a background to other studied signals. The production of prompt photons in nucleon-nucleon collisions was previously investigated in experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Due to the large energy of the colliding nucleons, many different elementary particles are born in addition to prompt photons. However, the birth of additional elementary particles makes it difficult to determine the effective cross-section of prompt photon production accurately. From this point of view, the experiments planned at the Nuclotron-based Ion Collider Facility (NICA) complex will have a great advantage, since the lower energy of the colliding heavy ions will reduce the number of additionally produced elementary particles. Of particular importance is the study of prompt photon production processes for determining the gluon content of hadrons, since the photon carries information about the hard subprocess. In the present paper, the production of prompt photons in Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes is investigated. The matrix elements of the Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes have been written. The squares of the matrix elements of the processes have been calculated in FeynCalc. The phase volume of the subprocesses has been determined, and an expression for calculating the differential cross-section of the subprocesses has been obtained. Given the resulting expressions for the square of the matrix element in the differential cross-section expression, we see that the differential cross-section depends not only on the energy of the colliding protons but also on the mass of the quarks. The differential cross-section of the subprocesses is estimated.
It is shown that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The asymmetry coefficient with respect to the polarization of the colliding protons is determined. The calculation showed that the squares of the matrix element of the Compton scattering process with and without taking into account the polarization of the colliding protons are identical. The asymmetry coefficient of this subprocess is therefore zero, which is consistent with the literature. It is known that in any single-polarization process involving a photon, the squares of the matrix elements with and without taking into account the polarization of the initial particle must coincide; that is, the terms in the square of the matrix element proportional to the degree of polarization are equal to zero. The coincidence of the squares of the matrix elements indicates that the parity of the system is preserved. The asymmetry coefficient of the quark-antiquark pair annihilation process decreases linearly from positive unity to negative unity with increasing product of the polarization degrees of the colliding protons. Thus, it was obtained that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The value of the asymmetry coefficient is maximal when the polarizations of the colliding protons are opposite and minimal when they are aligned. Taking into account the polarization of only the initial quarks and gluons in Compton scattering does not contribute to the differential cross-section of the subprocess.
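The full cross-section expression is not reproduced in the abstract; as a hedged sketch of the standard starting point only (not the authors' specific result), the differential cross-section of any 2 → 2 parton subprocess, such as quark-gluon Compton scattering or quark-antiquark annihilation, follows from the spin- and color-averaged squared matrix element:

```latex
\frac{d\hat{\sigma}}{d\hat{t}} \;=\; \frac{\overline{|M|^{2}}}{16\pi \hat{s}^{2}}
```

where \(\hat{s}\) and \(\hat{t}\) are the Mandelstam variables of the subprocess; the quark-mass dependence noted above enters through \(\overline{|M|^{2}}\) and through the kinematic limits on \(\hat{t}\).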

Keywords: annihilation of a quark-antiquark pair, coefficient of asymmetry, Compton scattering, effective cross-section

Procedia PDF Downloads 130
160 Sustainable Urban Regeneration: The New Vocabulary and the Timeless Grammar of the Urban Tissue

Authors: Ruth Shapira

Abstract:

Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers, and most of all the public with seemingly unsolved conflicts regarding the values, capital, and wellbeing of the built and un-built urban space. There is an out-of-control change in the scale of the urban form and in the rhythm of urban life, which has seen no significant progress in the last 2-3 decades despite the ever-growing urban population. It is the objective of this paper to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 36,000 inhabitants), to unfold the deep structure of qualities versus disruptors, to present some remedies that we have developed to bridge over, and to humbly suggest a practice that may bring about a sustainable new urban environment based on timeless values of the past, an approach that can be generic for similar cases. Basic Methodologies: The object, the town of Kiryat Ono, shall be experimented upon in a series of four action processes: de-composition, re-composition, the centering process and, finally, controlled structural disintegration. Each stage will be based on facts, analysis of previous multidisciplinary interventions on various layers, and the inevitable reaction of the object, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe are proper for the open-ended network, setting the rules for the contemporary urban society to cluster by, and thus a new urban vocabulary based on the old structure of times passed. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town remained deeply embedded, mostly in the quality of the public space and in the sense of clustered communities.
In the past 20 years, the recent demand for housing has been addressed on the national level with master plans and urban regeneration policies that mostly encourage individual economic initiatives. Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by pressure from developers, a dramatic change in building scale, and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono's future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of place. The main challenge was to create master plans that would offer a regulatory basis for the accelerated and sporadic development, providing for the public good and preserving the characteristics of the place, consisting of a toolbox of design guidelines with the ability to reorganize space along the time axis in a sustainable way. In conclusion: the system of rules that we have developed can generate endless possible patterns, making sure that at each implementation fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy and sustainable framework for the accelerated urbanization of our chaotic present.

Keywords: sustainable urban design, intensification, emergent urban patterns, sustainable housing, compact urban neighborhoods, sustainable regeneration, restoration, complexity, uncertainty, need for change, implications of legislation on local planning

Procedia PDF Downloads 371
159 Functional Plasma-Spray Ceramic Coatings for Corrosion Protection of RAFM Steels in Fusion Energy Systems

Authors: Chen Jiang, Eric Jordan, Maurice Gell, Balakrishnan Nair

Abstract:

Nuclear fusion, one of the most promising options for reliably generating large amounts of carbon-free energy in the future, has seen a plethora of ground-breaking technological advances in recent years. An efficient and durable “breeding blanket”, needed to ensure a reactor’s self-sufficiency by maintaining the optimal coolant temperature as well as by minimizing radiation dosage behind the blanket, still remains a technological challenge for the various reactor designs for commercial fusion power plants. A relatively new dual-coolant lead-lithium (DCLL) breeder design has exhibited great potential for high-temperature (>700 °C), high-thermal-efficiency (>40%) fusion reactor operation. However, the structural material, namely reduced activation ferritic-martensitic (RAFM) steel, is not chemically stable in contact with molten Pb-17%Li coolant. Thus, to utilize this new promising reactor design, the demand for effective corrosion-resistant coatings on RAFM steels represents a pressing need. Solution Spray Technologies LLC (SST) is developing a double-layer ceramic coating design to address the corrosion protection of RAFM steels, using a novel solution and solution/suspension plasma spray technology through a US Department of Energy-funded project. Plasma spray is a coating deposition method widely used in many energy applications. Novel derivatives of the conventional powder plasma spray process, known as the solution-precursor and solution/suspension-hybrid plasma spray processes, are powerful methods to fabricate thin, dense ceramic coatings with the complex compositions necessary for corrosion protection in DCLL breeders. These processes can be used to produce ultra-fine molten splats and to allow fine adjustment of coating chemistry. A thin, dense ceramic coating with a chemistry chosen for superior chemical stability in molten Pb-Li, low activation properties, and good radiation tolerance is ideal for corrosion protection of RAFM steels.
A key challenge is to accommodate the coating's CTE mismatch with the RAFM substrate through the selection and incorporation of appropriate bond layers, thus allowing for enhanced coating durability and robustness. Systematic process optimization is being used to define the optimal plasma spray conditions for both the topcoat and the bond layer, and X-ray diffraction and SEM-EDS are applied to validate the chemistry and phase composition of the coatings. The plasma-sprayed double-layer corrosion-resistant coatings were also deposited onto simulated RAFM steel substrates, which are being tested separately under thermal cycling, high-temperature moist-air oxidation, and molten Pb-Li capsule corrosion conditions. Results from this testing on coated samples, together with comparisons against bare RAFM reference samples, will be presented, and conclusions will be drawn assessing the viability of the new ceramic coatings as corrosion prevention systems for DCLL breeders in commercial nuclear fusion reactors.

Keywords: breeding blanket, corrosion protection, coating, plasma spray

Procedia PDF Downloads 286
158 Experiences of Discrimination and Coping Strategies of Second Generation Academics during the Career-Entry Phase in Austria

Authors: R. Verwiebe, L. Seewann, M. Wolf

Abstract:

This presentation addresses marginalization and discrimination as experienced by young academics with a migrant background in the Austrian labor market. Focusing on second-generation academics of Central Eastern European and Turkish descent, we explore two major issues. First, we ask whether their career entry and everyday professional life entail origin-specific barriers. Having completed their education in Austria, they possess the very competences whose absence is typically drawn upon to explain discrimination: excellent linguistic skills, accredited high-level training, and networks. Second, we concentrate on how this group reacts to discrimination and overcomes experiences of marginalization. To answer these questions, we utilize recent sociological and social-psychological theories that focus on the diversity of individual experiences. This distinguishes us from a long tradition of research that has dealt with the motives that inform discrimination but has less often considered the effects on those concerned. Similarly, the coping strategies applied have less often been investigated, though they may provide unique insights into current problematic issues. Building upon the present literature, we follow recent discrimination research in incorporating the concepts of ‘multiple discrimination’, ‘subtle discrimination’, and ‘visual social markers’. Twenty-one problem-centered interviews form the empirical foundation of this study. The interviewees completed their entire educational career in Austria, graduated from different universities and disciplines, and are working in their first post-graduate jobs (career-entry phase). In our analysis, we combined thematic charting with a coding method. The results emanating from our empirical material indicate a variety of discrimination experiences, ranging from barely perceptible disadvantages to directly articulated and overt marginalization.
The spectrum of experiences covered stereotypical suppositions at job interviews, the disavowal of competences, symbolic or social exclusion by new colleagues, restricted professional participation (e.g., customer contact), and non-recruitment due to religious or ethnic markers (e.g., headscarves). In these experiences, the role of the academics' education level, networks, or competences seemed to be minimal, as negative prejudice on the basis of visible ‘social markers’ operated ‘ex ante’. The coping strategies identified in overcoming such barriers are: an increased emphasis on effort, avoidance of potentially marginalizing situations, direct resistance (mostly in the form of verbal opposition), and dismissal of negative experiences by ignoring or ironizing the situation. In some cases, the academics drew on their specific competences, such as an intellectual approach of studying specialist literature, a focus on their intercultural competences, or plans to migrate back to their parents' country of origin. Our analysis further suggests a distinction between reactive strategies (acting on and responding to experienced discrimination) and preventative strategies (applied to obviate discrimination). In light of our results, we would like to stress that the tension between the educational and professional success experienced by academics with a migrant background and the barriers and marginalization they continue to face is an essential issue to be introduced into socio-political discourse. It seems imperative to publicly accentuate the growing social, political, and economic significance of this group, their educational aspirations, as well as their experiences of achievement and difficulty.

Keywords: coping strategies, discrimination, labor market, second generation university graduates

Procedia PDF Downloads 202
157 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the usage of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization (as the vorticity is strongly localized), implicit treatment of the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in certain regions of interest. In this paper, different strategies are presented to extend the conventional VPM so as to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control the global and local number of particles. Finally, these methods are applied to a test case, and the improvements in efficiency as well as accuracy of the proposed extensions are presented, together with their relevant applications.
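The O(Np²) cost discussed above stems from the all-pairs particle interaction. As a minimal sketch (not the authors' implementation), a naive 2D Biot-Savart summation over regularized point vortices can be written as follows, where the smoothing parameter `delta` is an illustrative choice:

```python
import numpy as np

def induced_velocity(targets, particles, gamma, delta=1e-12):
    """Naive O(M*N) Biot-Savart summation for 2D point vortices.
    targets: (M, 2) evaluation points; particles: (N, 2) vortex positions;
    gamma: (N,) circulations; delta: small core-smoothing length."""
    u = np.zeros_like(targets, dtype=float)
    for x, g in zip(particles, gamma):
        r = targets - x                          # separation vectors (M, 2)
        r2 = (r ** 2).sum(axis=1) + delta ** 2   # regularized squared distance
        # velocity kernel is perpendicular to r: (-r_y, r_x) / (2*pi*r^2)
        u[:, 0] += -g * r[:, 1] / (2.0 * np.pi * r2)
        u[:, 1] += g * r[:, 0] / (2.0 * np.pi * r2)
    return u
```

Every evaluation loops over all N sources for each of the M targets, which is exactly the quadratic cost that the substepping and re-discretization strategies above aim to mitigate.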

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 241
156 Selective Immobilization of Fructosyltransferase onto Glutaraldehyde Modified Support and Its Application in the Production of Fructo-Oligosaccharides

Authors: Milica B. Veljković, Milica B. Simović, Marija M. Ćorović, Ana D. Milivojević, Anja I. Petrov, Katarina M. Banjanac, Dejan I. Bezbradica

Abstract:

In recent decades, the scientific community has recognized the growing importance of prebiotics, and numerous studies have therefore focused on their economic production, owing to their low abundance in natural resources. It has been confirmed that prebiotics are a source of energy for probiotics in the gastrointestinal tract (GIT) and enable their proliferation, consequently leading to the normal functioning of the intestinal microbiota. In addition, the products of their fermentation are short-chain fatty acids (SCFA), which play a key role in maintaining and improving the health not only of the GIT but also of the whole organism. Among several confirmed prebiotics, fructooligosaccharides (FOS) are considered interesting candidates for use in a wide range of products in the food industry. They are characterized as low-calorie and non-cariogenic substances that represent an adequate sugar substitute and can be considered suitable for use in products intended for diabetics. The subject of this research is the production of FOS by transforming sucrose using a fructosyltransferase (FTase) present in the commercial preparation Pectinex® Ultra SP-L, with special emphasis on the development of an adequate FTase immobilization method that would enable selective isolation of the enzyme responsible for the synthesis of FOS from the complex enzymatic mixture. This would lead to considerable enzyme purification and allow its direct incorporation into different sucrose-based products without the fear that the action of the other hydrolytic enzymes may adversely affect the products' functional characteristics. Accordingly, the possibility of selective immobilization of the enzyme was investigated using a support with primary amino groups, Purolite® A109, which was previously activated and modified using glutaraldehyde (GA).
In the initial phase of the research, the effects of individual immobilization parameters such as pH, enzyme concentration, and immobilization time were investigated to optimize the process, using support chemically activated with 15% and 0.5% GA to form dimers and monomers, respectively. It was determined that highly active immobilized preparations (371.8 IU/g of support for the dimer and 213.8 IU/g of support for the monomer) were achieved under acidic conditions (pH 4), provided that the enzyme concentration was 50 mg/g of support, after 7 h and 3 h, respectively. Bearing in mind the obtained activity results, it is noticeable that the dimer form showed higher reactivity than the monomer form. Also, in the case of support modification using 15% GA, the ratio of the activity immobilization yields of FTase and pectinase (the dominant component of the enzyme mixture) was 16.45, indicating the high feasibility of selective immobilization of FTase on the modified polystyrene resin. After obtaining immobilized preparations with satisfactory features, they were tested in a FOS synthesis reaction under the determined optimal conditions. A maximum FOS yield of approximately 50% of the total carbohydrates in the reaction mixture was recorded after 21 h. Finally, it can be concluded that the examined immobilization method yielded a highly active, stable and, more importantly, refined enzyme preparation that can be further utilized on a larger scale for the development of continuous processes for FOS synthesis, as well as for the modification of different sucrose-based media.

Keywords: chemical modification, fructooligosaccharides, glutaraldehyde, immobilization of fructosyltransferase

Procedia PDF Downloads 165
155 Confirming the Factors of Professional Readiness in Athletic Training

Authors: Philip A. Szlosek, M. Susan Guyer, Mary G. Barnum, Elizabeth M. Mullin

Abstract:

In the United States, athletic training is a healthcare profession that encompasses the prevention, examination, diagnosis, treatment, and rehabilitation of injuries and medical conditions. Athletic trainers work under the direction of or in collaboration with a physician and are recognized by the American Medical Association as allied healthcare professionals. Internationally, this profession is often known as athletic therapy. As healthcare professionals, athletic trainers must be prepared for autonomous practice immediately after graduation. However, new athletic trainers have been shown to have clinical areas of strength and weakness. To better assess professional readiness and improve the preparedness of new athletic trainers, the factors of athletic training professional readiness must be defined. Limited research exists defining the holistic aspects of professional readiness needed for athletic trainers. Confirming the factors of professional readiness in athletic training could enhance the professional preparation of athletic trainers and result in more highly prepared new professionals. The objective of this study was to further explore and confirm the factors of professional readiness in athletic training. The authors used a qualitative design based in grounded theory. Participants included athletic trainers with greater than 24 months of experience from a variety of work settings in each district of the National Athletic Trainer's Association. Participants completed the demographic questionnaire electronically using Qualtrics Survey Software (Provo, UT). After completing the demographic questionnaire, 20 participants were selected to complete one-on-one interviews using GoToMeeting audiovisual web-conferencing software. IBM Statistical Package for the Social Sciences (SPSS, v. 21.0) was used to calculate descriptive statistics for participant demographics.
The first author transcribed all interviews verbatim and utilized a grounded theory approach during qualitative data analysis. Data were analyzed using constant comparative analysis and open and axial coding. Trustworthiness was established using reflexivity, member checks, and peer reviews. Analysis revealed four overarching themes: management, interpersonal relations, clinical decision-making, and confidence. Management was categorized as athletic training services not involving direct patient care and was divided into three subthemes: administration skills, advocacy, and time management. Interpersonal relations was categorized as the need and ability of the athletic trainer to properly interact with others and was divided into three subthemes: personality traits, communication, and collaborative practice. Clinical decision-making was categorized as the skills and attributes required by the athletic trainer when making clinical decisions related to patient care and was divided into three subthemes: clinical skills, continuing education, and reflective practice. The final theme was confidence; participants discussed its importance with regard to relationship building, clinical and administrative duties, and clinical decision-making. Overall, participants explained the value of a well-rounded athletic trainer and emphasized that athletic trainers need communication and organizational skills and the ability to collaborate, and must value self-reflection and continuing education in addition to having clinical expertise. Future research should finalize a comprehensive model of professional readiness for athletic training, develop a holistic assessment instrument for athletic training professional readiness, and explore the preparedness of new athletic trainers.

Keywords: autonomous practice, newly certified athletic trainer, preparedness for professional practice, transition to practice skills

Procedia PDF Downloads 126
154 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are known to be inherently prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction whose size is the same as that of the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied using residual-driven dropout, determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
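As an illustrative sketch only (the paper's stacked architecture and residual-driven dropout are not reproduced here), a single denoising auto-encoder layer with a sigmoid encoder, linear decoder, and squared-error loss against the clean input can be trained by plain gradient descent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """One-layer denoising auto-encoder: map a corrupted input through a
    non-linear hidden layer, decode linearly to the input size, and
    minimize squared reconstruction error against the clean input."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # encoder weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # decoder weights
        self.b2 = np.zeros(n_in)

    def forward(self, x_noisy):
        h = sigmoid(self.W1 @ x_noisy + self.b1)   # hidden representation
        return self.W2 @ h + self.b2, h            # linear reconstruction

    def step(self, x_clean, x_noisy, lr=0.01):
        """One gradient-descent step; returns the loss before the update."""
        y, h = self.forward(x_noisy)
        err = y - x_clean                          # dL/dy for L = 0.5*||y - x||^2
        dW2 = np.outer(err, h)
        dh = self.W2.T @ err
        dz = dh * h * (1.0 - h)                    # sigmoid derivative
        dW1 = np.outer(dz, x_noisy)
        self.W1 -= lr * dW1; self.b1 -= lr * dz
        self.W2 -= lr * dW2; self.b2 -= lr * err
        return 0.5 * float(err @ err)
```

Stacking several such layers and prepending the Total Variation decomposition as a pre-processing step would approximate the pipeline described above.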

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 170
153 Finite Element Modelling and Optimization of Post-Machining Distortion for Large Aerospace Monolithic Components

Authors: Bin Shi, Mouhab Meshreki, Grégoire Bazin, Helmi Attia

Abstract:

Large monolithic components are widely used in the aerospace industry in order to reduce airplane weight. Milling is an important operation in the manufacturing of monolithic parts: more than 90% of the material may be removed in the milling operation to obtain the final shape. This results in low rigidity and post-machining distortion. Post-machining distortion is the deviation of the final shape from the original design after releasing the clamps. It is a major challenge in the machining of monolithic parts, causing billions of dollars of economic losses every year. Three sources are directly related to part distortion: initial residual stresses (RS) generated by previous manufacturing processes, machining-induced RS, and the thermal load generated during machining. A finite element model was developed to simulate a milling process and predict the post-machining distortion. In this study, a rolled aluminum plate of AA7175 with a thickness of 60 mm was used as the raw block. The initial residual stress distribution in the block was measured using a layer-removal method. A stress-mapping technique was developed to implement the initial stress distribution in the part; it is demonstrated that this technique significantly accelerates the simulation. Machining-induced residual stresses on the machined surface were measured using an MTS3000 hole-drilling strain-gauge system. The measured RS was applied on the machined surface of a plate to predict the distortion, and the predicted distortion was compared with experimental results. It is found that the effect of the machining-induced residual stress on the distortion of a thick plate is very limited: the distortion can be ignored if the wall thickness is larger than a certain value. The RS generated by the thermal load during machining is another important factor causing part distortion; very little research on this topic has been reported in the literature.
A coupled thermo-mechanical FE model was developed to evaluate the thermal effect on the plastic deformation of a plate. A moving heat source with a feed rate was used to simulate the dynamic cutting heat in a milling process; when the heat source passed over the part surface, a small layer was removed to simulate the cutting operation. The results show that, for different feed rates and plate thicknesses, plastic deformation/distortion occurs only if the temperature exceeds a critical level. It was found that the initial residual stress makes the major contribution to part distortion. The machining-induced stress has limited influence on the distortion of thin-wall structures once the wall thickness is larger than a certain value, while the thermal load can also generate distortion when the cutting temperature is above a critical level. The developed numerical model was employed to predict the distortion of a frame part with complex structures, and the predictions were in good agreement with the experimental measurements. By optimizing the position of the part inside the raw plate using the developed numerical models, the part distortion was reduced by 50%.
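The layer-removal reasoning above can be illustrated with a back-of-the-envelope one-dimensional sketch: given an assumed through-thickness residual stress profile in a plate strip, machining away material releases the stress carried by the removed layers, and the unbalanced moment about the new mid-plane bends the remaining section. The modulus, profile shape, and amplitude below are illustrative assumptions, not values from the study, and the calculation is a simple beam-theory counterpart to the authors' FE model.

```python
import numpy as np

# Back-of-the-envelope 1-D sketch (NOT the authors' FE model): estimate the
# bending curvature of a plate strip after machining away part of its
# thickness, given an assumed through-thickness initial residual stress
# profile. Modulus, profile shape, and amplitude are illustrative assumptions.

E = 71.7e9          # Young's modulus for AA7175, Pa (assumed)
h0 = 0.060          # initial plate thickness, m (60 mm, as in the study)
b = 1.0             # unit strip width, m

# Assumed self-equilibrated cosine residual stress profile (illustrative)
z = np.linspace(-h0 / 2, h0 / 2, 2001)       # through-thickness coordinate, m
sigma = 50e6 * np.cos(2 * np.pi * z / h0)    # residual stress, Pa

def integrate(y, x):
    """Trapezoidal integration (avoids the np.trapz/np.trapezoid rename)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def curvature_after_removal(removed):
    """Curvature (1/m) of the remaining section after 'removed' metres are
    machined off the top face: the stress carried by the removed layers is
    released, and the unbalanced moment about the new mid-plane bends the
    remaining section (simple beam theory)."""
    keep = z <= (h0 / 2 - removed)
    zk, sk = z[keep], sigma[keep]
    h = h0 - removed
    zc = zk.mean()                            # new mid-plane position
    M = b * integrate(sk * (zk - zc), zk)     # released bending moment, N*m
    I = b * h ** 3 / 12.0                     # second moment of area, m^4
    return M / (E * I)

for cut in (0.010, 0.030, 0.050):             # remove 10, 30, 50 mm
    kappa = curvature_after_removal(cut)
    print(f"removed {cut * 1e3:.0f} mm -> curvature {kappa:.4e} 1/m")
```

Because the assumed profile is self-equilibrated, removing nothing produces no bending; asymmetric removal releases a net moment and the strip curves, which is the mechanism behind the distortion discussed above.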

Keywords: modelling, monolithic parts, optimization, post-machining distortion, residual stresses

Procedia PDF Downloads 33
152 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions

Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra

Abstract:

In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in the form of aerosols, deposit on the inner surface of the piping system, mainly by gravitational settling and thermophoretic deposition. The removal processes in a complex piping system are controlled to a large extent by thermal-hydraulic conditions such as temperature, pressure, and flow rate. These parameters generally vary with time and must therefore be carefully monitored to predict aerosol behavior in the piping system. The removal process depends on particle size, which determines how many particles deposit and how many travel past the bends to reach the other end of the piping system. The aerosol deposits on the inner surface through several mechanisms, including gravitational settling, Brownian diffusion, and thermophoretic deposition; to obtain a correct estimate of deposition, identifying and understanding these mechanisms is of great importance, as they are significantly affected by the flow and thermodynamic conditions. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal (zinc) aerosols in a dry environment to study the spatial distribution of particle mass and number concentrations and their depletion by the various removal mechanisms in the piping system. The experiments were performed at two different carrier-gas flow rates. The commercial CFD software FLUENT was used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system.
In addition to the built-in models for turbulence, heat transfer, and flow in the commercial CFD code (FLUENT), a population balance model (PBM) sub-model was used to describe the coagulation process and to compute the number concentration and size distribution at different sections of the piping; the coagulation kernels were incorporated through a user-defined function (UDF). The experimental results were compared with the CFD results. It is found that the largest fraction of the Zn particles (more than 35%) deposits near the inlet of the plenum chamber, while deposition in the piping sections is low. The MMAD decreases along the length of the test assembly, showing that large particles are deposited or removed in the course of the flow and only fine particles travel to the end of the piping system. The effect of a bend was also observed: the relative loss in mass concentration at bends is greater at the higher flow rate. The simulation results show that thermophoretic and depositional effects are more dominant for the small and large sizes than for intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that the samples are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model developed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
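The role of the coagulation sub-model can be illustrated with a minimal sectional population balance. The sketch below is not the NATF/FLUENT-PBM setup: it uses an assumed constant coagulation kernel, volume-doubling size sections, same-section collisions only, and an explicit Euler integrator, but it reproduces the qualitative behaviour described above, where coagulation lowers the total number concentration while conserving mass and shifting particles toward larger sizes.

```python
import numpy as np

# Illustrative sketch (NOT the NATF/FLUENT-PBM model): a minimal sectional
# population balance with a constant coagulation kernel and volume-doubling
# size sections, stepped with explicit Euler.

K = 1e-15           # constant coagulation kernel, m^3/s (assumed)
dt = 0.1            # time step, s
nbins = 8           # geometric size sections, volume doubling per section
v = 2.0 ** np.arange(nbins)   # section volumes in units of the smallest
n = np.zeros(nbins)
n[0] = 1e12         # initial number concentration, smallest section, #/m^3

def step(n, dt):
    """One Euler step: two particles of section i coagulate into one
    particle of section i+1 (same-section collisions only, a common
    simplification on volume-doubling grids)."""
    dn = np.zeros_like(n)
    for i in range(len(n) - 1):
        coll = 0.5 * K * n[i] ** 2   # collision rate within section i
        dn[i] -= 2.0 * coll          # two particles leave section i
        dn[i + 1] += coll            # one larger particle enters section i+1
    return n + dt * dn

mass0 = np.dot(n, v)
for _ in range(1000):                # integrate 100 s
    n = step(n, dt)
print(f"number depleted to {n.sum() / 1e12:.4f} of initial; "
      f"mass conserved: {np.isclose(np.dot(n, v), mass0)}")
```

In the full model the kernel depends on particle size and local flow conditions, which is why it is supplied to FLUENT as a UDF rather than hard-coded as here.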

Keywords: aerosol, CFD, deposition, coagulation

Procedia PDF Downloads 127
151 Reviving the Past, Enhancing the Future: Preservation of Urban Heritage Connectivity as a Tool for Developing Liveability in Historical Cities in Jordan, Using As-Salt City as a Case Study

Authors: Sahar Yousef, Chantelle Niblock, Gul Kacmaz

Abstract:

Salt City, in the context of Jordan’s heritage landscape, is a significant case for exploring the interaction between the tangible and intangible qualities of liveable cities. Most Jordanian city centers, including Jerash, Salt, Irbid, and Amman, are historical locations, and six of these extraordinary sites have been designated UNESCO World Heritage Sites. Jordan is widely acknowledged as a developing country whose swift urbanization and unrestrained expansion exacerbate the challenges of preserving historic urban areas. The aim of this study is to examine and analyse the existing condition of heritage connectivity within heritage city centers, including outdoor staircases, pedestrian pathways, footpaths, and other public spaces. The investigation takes the form of a case-study analysis of the urban core of As-Salt. Salt City is widely acknowledged for its substantial tangible and intangible cultural heritage and has been designated ‘The Place of Tolerance and Urban Hospitality’ by UNESCO since 2021. Liveability in urban heritage, particularly in historic city centers, incorporates several factors that affect our well-being, and its enhancement is a critical issue in contemporary society. Preservation transcends simple conservation: it is a dynamic interaction between people and historical materials that serves as a vehicle for expressing their identity and historical narrative. This form of engagement enables people to appreciate the diversity of their heritage, recognising both their past and their envisioned futures. Heritage preservation is inextricably linked to a larger physical and emotional context and is therefore difficult to examine in isolation. Urban environments, including roads, structures, and other infrastructure, are undergoing unprecedented physical design and construction requirements.
Concurrently, heritage reinforces a sense of affiliation with a particular location or space and connects individuals with their ancestry, thereby defining their identity. However, a considerable body of research has focused on the conservation of heritage buildings in a fragmented manner, without considering their integration within a holistic urban context, and insufficient attention has been given to the physical and social roles of the heritage staircases and paths that connect these valued historical buildings. Given that liveability is a complex matter with several dimensions, the research uses a consensus-based methodology. The discussion starts with initial observations on the physical context and societal norms of the urban center, while establishing definitions of liveability and connectivity and examining the key criteria associated with these concepts. It then identifies the key elements that contribute to liveable connectivity within the framework of urban heritage in Jordanian city centers. Among the outcomes to be discussed in the presentation: (1) there is insufficient connectivity between heritage buildings, as can be seen, for example, between the buildings in Jada and Qala'; (2) most of the outdoor spaces suffer from physical issues that hinder their public use, as in Salalem; (3) existing activities in the city center are poorly attended because of a lack of communication between the organisers and the citizens.

Keywords: connectivity, Jordan, liveability, salt city, tangible and intangible heritage, urban heritage

Procedia PDF Downloads 43
150 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice

Authors: Diana Reckien

Abstract:

Vulnerability assessments are increasingly used to support policy-making in complex environments such as urban areas. Usually, vulnerability studies involve the construction of aggregate (sub-)indices and the subsequent mapping of those indices across an area of interest. Vulnerability studies have several advantages: they are good communication tools, can inform a wider debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and no single framework or methodology has proven to serve best in a given environment; indicators vary greatly with the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators, once mapped, can easily distort or bias the picture of vulnerability, as they hide its underlying causes and level out conflicting causes of vulnerability in space. There is thus an urgent need to develop the methodology of vulnerability studies towards a common framework, which is one motivation of this paper. We introduce a social vulnerability approach and compare it with bio-physical and sectoral vulnerability approaches in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area.
The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces a set of interrelated variables to a smaller number of less correlated components that are then added to form a composite index. We test these two index-construction approaches, together with two different metrics of the input variables, on New York City and compare the outcomes for its five boroughs. Our analysis shows that the mapping exercise produces particularly different results in the outer regions and parts of the boroughs, such as outer Queens and Staten Island. Some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY may need to be reconsidered, as these outer urban areas show relatively low social vulnerability compared with the more central, high-density areas of Manhattan, central Brooklyn, central Queens, and the southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage should hazards occur there; this is conceivable, for example, during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of the recent planning practice of NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analysis points towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in current adaptation practice in New York City.
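The two index-construction approaches can be contrasted in a short sketch on synthetic data. The variable count, the expert weights, and the number of retained components below are illustrative assumptions, not the NYC census variables used in the study; the point is only that the additive and PCA composites generally rank the same areal units differently.

```python
import numpy as np

# Synthetic sketch (NOT the NYC census data): contrast a weighted additive
# vulnerability index with a PCA-based composite for a set of areal units.
# Weights, variable count, and retained components are assumptions.

rng = np.random.default_rng(0)
n_units, n_vars = 200, 5                   # e.g. census tracts x indicators
X = rng.normal(size=(n_units, n_vars))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]    # let two indicators correlate

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise the variables

# 1) Additive approach: expert importance weights, weighted sum per unit
w = np.array([0.3, 0.2, 0.2, 0.2, 0.1])    # assumed weights, sum to 1
additive_index = Z @ w

# 2) PCA approach: eigendecompose the correlation matrix, retain the leading
#    components, and sum the component scores into a composite
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)       # eigh returns ascending order
leading = np.argsort(eigvals)[::-1][:2]    # retain 2 components (a choice)
pca_index = (Z @ eigvecs[:, leading]).sum(axis=1)

# The two composites generally rank the units differently, which is the
# methodological point: the choice of construction method shapes the map
r = np.corrcoef(additive_index, pca_index)[0, 1]
print(f"correlation between additive and PCA indices: {r:.2f}")
```

Note that PCA component signs are arbitrary, so a PCA composite additionally needs a sign convention before mapping; this is one more way a composite index can obscure the underlying causes of vulnerability it aggregates.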

Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity

Procedia PDF Downloads 378