Search results for: collisions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 94

34 Investigation of Time Pressure and Instinctive Reaction in Moral Dilemmas While Driving

Authors: Jacqueline Miller, Dongyuan Y. Wang, F. Dan Richard

Abstract:

Before trying to build an ethical machine that holds a higher ethical standard than humans, a better understanding of the human moral standards that could serve as a guide is crucial. How humans make decisions in dangerous driving situations such as moral dilemmas can contribute to developing acceptable ethical principles for autonomous vehicles (AVs). This study uses a driving simulator to investigate whether drivers make utilitarian choices (choices that maximize lives saved and minimize harm) in unavoidable automobile accidents (moral dilemmas) when time pressure is manipulated. The study also investigates how impulsiveness influences drivers’ behavior in moral dilemmas. Manipulating time pressure results in collisions that occur at varying time intervals (4 s, 5 s, 7 s) and helps reveal how time pressure may influence drivers’ response behavior. Thirty-one undergraduates participated in this study, using a STISIM driving simulator to respond to driving moral dilemmas. The results indicated that the percentage of utilitarian choices generally increased when participants were given more time to respond (from 4 s to 7 s). Additionally, participants in vehicle scenarios preferred responding to the right over responding to the left. Impulsiveness did not influence utilitarian choices. However, as time pressure decreased, response time increased. The findings have potential implications for the regulation of driver assistance technologies and AVs.

Keywords: time pressure, automobile moral dilemmas, impulsiveness, reaction time

Procedia PDF Downloads 25
33 Ideology Shift in Political Translation

Authors: Jingsong Ma

Abstract:

In political translation, ideology plays an important role in conveying implications accurately. Ideological collisions can occur in political translation when there are differences between the political environments embedded in the translingual political texts in the source and target languages. Reaching an accurate translation requires the translator to understand the ideologies implied in (and often transcending) the texts. This paper explores the conditions, procedure, and purpose of processing ideological collision and the resolution of such issues in political translation. These points will be elucidated by case studies of translating English and Chinese political texts. First, there are specific political terminologies in certain political environments. These terminological peculiarities in one language are often determined by ideological elements rather than by syntactical and semantical understanding. The translation of these ideologically loaded terminologies is a process and operation consisting of understanding the ideological context, including cultural, historical, and political situations. This will be explained with characteristic Chinese political terminologies and their renderings in English. Second, when the ideology in the source language fails to match the ideology in the target language, the decisions to highlight or disregard these conflicts are shaped by power relations, political engagement, social context, etc. It is thus necessary to go beyond linguistic analysis of the context by deciphering the ideology in political documents to provide a faithful or equivalent rendering of certain messages. Finally, one of the practical issues concerns equivalence in political translation: redefining the notion of faithfulness and the retention of the ideological messages of the source language in translations of political texts. To avoid distortion, the translator should be liberated from the grip of literal meaning and instead dive into the functional meanings of the text.

Keywords: translation, ideology, politics, society

Procedia PDF Downloads 87
32 The Effect of a Saturated Kink on the Dynamics of Tungsten Impurities in the Plasma Core

Authors: H. E. Ferrari, R. Farengo, C. F. Clauser

Abstract:

Tungsten (W) will be used in ITER as one of the plasma facing components (PFCs). The W could migrate to the plasma center, which could have a potentially deleterious effect on plasma confinement. Electron cyclotron resonance heating (ECRH) can be used to prevent W accumulation. We simulated a series of H-mode discharges in ASDEX Upgrade with PFCs containing W, where central ECRH was used to prevent W accumulation in the plasma center. The experiments showed that the W density profiles were flat after a sawtooth crash and became hollow between sawtooth crashes when ECRH was applied. It was also observed that a saturated kink mode was active in these conditions. We studied the effect of saturated kink-like instabilities on the redistribution of W impurities. The kink was modeled as the sum of a simple analytical equilibrium (large aspect ratio, circular cross section) plus the perturbation produced by the kink. A numerical code that follows the exact trajectories of the impurity ions in the total fields and includes collisions was employed. The code is written in CUDA C and runs on Graphics Processing Units (GPUs), allowing simulations with a large number of particles using modest resources. Our simulations show that when the W ions have a thermal velocity distribution, the kink has no effect on the W density. When we consider plasma rotation, the kink can affect the W density. When the average passing frequency of the W particles is similar to the frequency of the kink mode, the expulsion of W ions from the plasma core is at its maximum, and the W density shows a hollow structure. This could have implications for the mitigation of W accumulation.
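The abstract's trajectory code is written in CUDA C for GPUs; as a minimal illustrative stand-in (not the authors' integrator), the sketch below shows a standard Boris pusher in Python, a common scheme for following charged particles in electromagnetic fields because its magnetic rotation conserves kinetic energy exactly:

```python
import numpy as np

def boris_push(x, v, q_m, E, B, dt):
    """One step of the Boris pusher: an exact-rotation magnetic kick
    sandwiched between two half electric kicks (q_m = charge/mass)."""
    v_minus = v + 0.5 * q_m * E * dt           # first half electric kick
    t = 0.5 * q_m * B * dt                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)     # norm-preserving rotation
    v_new = v_plus + 0.5 * q_m * E * dt         # second half electric kick
    return x + v_new * dt, v_new

# gyration in a uniform magnetic field: speed is conserved to round-off
x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
E = np.zeros(3)
for _ in range(1000):
    x, v = boris_push(x, v, 1.0, E, B, 0.05)
```

Because the magnetic step is a pure rotation, the particle's speed does not drift over long integrations, which is why this family of pushers is favored for orbit-following codes.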

Keywords: impurity transport, kink instability, tungsten accumulation, tungsten dynamics

Procedia PDF Downloads 143
31 Space Debris: An Environmental Hazard

Authors: Anwesha Pathak

Abstract:

Space law refers to all legal provisions that may regulate or apply to space travel, as well as to space-related activity. Although there is undoubtedly a core corpus of “space law,” rather than designating a conceptually distinct single kind of law, the phrase can be seen as a label applied to a bucket that includes a variety of different laws and regulations. Similar to ‘family law’ or ‘environmental law’, “space law” refers to a variety of laws that are identified by the subject matter they address rather than by the logical extension of a single legal concept. The term can cover anything from the specifics of an insurance agreement for a particular space launch to the most general guidelines that direct state behaviour in space. Space debris, often referred to as space junk, space pollution, space waste, space trash, or space garbage, describes abandoned human-made objects in space, primarily in Earth orbit. These include disused spacecraft, discarded launch vehicle stages, mission-related detritus, and fragmentation material from the destruction of disused rocket bodies and spacecraft, which is particularly prevalent in Earth orbit. Other types of space debris, besides abandoned human-made objects in orbit, include pieces left over from collisions, erosion, and disintegration, as well as paint specks, solidified liquids ejected from spacecraft, and unburned components from solid rocket engines. The initial action of launching or using a spacecraft in near-Earth orbit imposes an external cost on others that is typically not taken into account or fully internalized by the launcher or payload owner.

Keywords: space, outer space treaty, geostationary orbit, satellites, spacecraft

Procedia PDF Downloads 58
30 Analysis of the Unmanned Aerial Vehicles’ Incidents and Accidents: The Role of Human Factors

Authors: Jacob J. Shila, Xiaoyu O. Wu

Abstract:

As the applications of unmanned aerial vehicles (UAVs) continue to increase across the world, it is critical to understand the factors that contribute to incidents and accidents associated with these systems. Given the variety of daily applications that could utilize UAV operations (e.g., medical, security, construction, and landscaping activities), the main discussion has been how to safely incorporate the UAV into the national airspace system. The types of UAV incidents being reported range from near sightings by other pilots to actual collisions with aircraft or other UAVs. These incidents have the potential to impact the rest of aviation operations in a variety of ways, including human lives, liability costs, and delay costs. One of the largest cited causes of these incidents is the human factor; other cited causes include maintenance, the aircraft, and others. This work investigates the key human factors associated with UAV incidents. To that end, data related to UAV incidents that have occurred in the United States are reviewed and analyzed to identify key human factors related to UAV incidents. The data utilized in this work are gathered from the Federal Aviation Administration (FAA) drone database. This study adopts the Human Factors Analysis and Classification System (HFACS) to identify key human factors that have contributed to some of the UAV failures to date. The uniqueness of this work is the incorporation of UAV incident data from a variety of applications and not just military data. In addition, identifying the specific human factors is crucial for developing safety operational models and human factors guidelines for UAVs. The identified common human factors are also compared to those from similar studies in other countries to determine whether these factors are common internationally.
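A minimal sketch of the kind of tally such an HFACS-based analysis might produce, using hypothetical incident records (the category names are standard HFACS tiers; the data are invented for illustration, not drawn from the FAA database):

```python
from collections import Counter

# Hypothetical incident records, each tagged with an HFACS level-1 tier.
incidents = [
    {"id": 1, "hfacs": "Unsafe Acts"},
    {"id": 2, "hfacs": "Preconditions for Unsafe Acts"},
    {"id": 3, "hfacs": "Unsafe Acts"},
    {"id": 4, "hfacs": "Organizational Influences"},
    {"id": 5, "hfacs": "Unsafe Acts"},
]

# Count how often each human-factor tier appears across incidents.
counts = Counter(rec["hfacs"] for rec in incidents)
most_common_factor, n = counts.most_common(1)[0]
```

In a real study the records would come from parsed FAA reports, and the counts would feed the cross-country comparison the abstract describes.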

Keywords: human factors, incidents and accidents, safety, UAS, UAV

Procedia PDF Downloads 211
29 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines, or working areas are temporarily required. This is an important factor when arranging the products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all areas required by the product. With this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products that complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects under dynamic, competing area requirements. First, the problem situation is explained in detail, and existing research on associated topics is described and evaluated for the possibility of adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas while considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, test series results are presented, showing the possible increase in area utilization.
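The core feasibility check behind such a model — whether two time-dependent area requirements actually conflict — can be sketched as an interval test in time and both spatial dimensions. This is a simplification assuming axis-aligned rectangular areas with fixed time windows, not the paper's full optimization model:

```python
def intervals_overlap(a, b):
    """True if half-open intervals a = (lo, hi) and b = (lo, hi) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def areas_collide(area1, area2):
    """Two time-dependent rectangular areas conflict only if they overlap
    in time AND in both spatial dimensions.  Each area is given as
    (time_interval, x_interval, y_interval)."""
    (t1, x1, y1), (t2, x2, y2) = area1, area2
    return (intervals_overlap(t1, t2)
            and intervals_overlap(x1, x2)
            and intervals_overlap(y1, y2))

# Example: a storage area needed in assembly steps 0-3 and a machine area
# needed in steps 4-8 may share the same floor space without conflict.
storage = ((0, 3), (0, 5), (0, 5))
machine = ((4, 8), (0, 5), (0, 5))
```

An optimization model over such areas would use this predicate (or its linearized counterpart) as the non-overlap constraint, which is exactly what lets time-disjoint requirements share floor space and raise utilization.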

Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly

Procedia PDF Downloads 202
28 Analysis of the Vibration Behavior of a Small-Scale Wind Turbine Blade under Johannesburg Wind Speed

Authors: Tolulope Babawarun, Harry Ngwangwa

Abstract:

The wind turbine blade may sustain structural damage from external loads such as high winds or collisions, which could compromise its aerodynamic efficiency. Under these conditions, the blade vibrates at significant intensities and amplitudes, and the effect of these vibrations on the dynamic flow field surrounding the blade changes the forces acting on it. The structural dynamic analysis of a small wind turbine blade is considered in this study. It entails creating a finite element model, validating the model, and performing structural analysis on the verified finite element model. The analysis is based on the structural response of a small-scale wind turbine blade to various loading sources. Although many small-scale offshore wind turbine systems are in use, only preliminary structural analysis is typically performed during design phases; these systems' performance under the loading conditions encountered in real-world situations has not been properly researched. The analysis records the equivalent von Mises stress and deformation that the blade undergoes. A higher stress contour was found to be concentrated near the mid-span of the blade under the various loading scenarios studied. The highest stress that the blade in this study underwent is within the range of the maximum stress that the blade material can withstand; the maximum allowable stress of the blade material is 1,770 MPa. The deformation of the blade was highest at the blade tip. The critical speed of the blade was determined to be 4.3 rpm within a rotor speed range of 0 to 608 rpm. The blade's mode shape under loading conditions indicates a bending mode, the most prevalent of which is flapwise bending.

Keywords: ANSYS, finite element analysis, static loading, dynamic analysis

Procedia PDF Downloads 57
27 Design of Robust and Intelligent Controller for Active Removal of Space Debris

Authors: Shabadini Sampath, Jinglang Feng

Abstract:

With huge kinetic energy, space debris poses a major threat to astronauts’ space activities and to spacecraft in orbit if a collision happens. Active removal of space debris is required to avoid the frequent collisions that would otherwise occur; in addition, the amount of space debris would increase uncontrollably, threatening the safety of the entire space system. But the safe and reliable removal of large-scale space debris has remained a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. However, due to uncertainties and unknown disturbances, coordinating the control of the space manipulator is difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding a control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC), an effective motion control strategy, is used in this paper for computing the space manipulator arm torque needed to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent NNASMC and the CTC guarantee the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink. The results are presented to prove the effectiveness of the proposed approach.
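CTC has a standard textbook form, tau = M(q)(q̈_d + K_d ė + K_p e) + C(q, q̇)q̇ + g(q): an inner loop inverts the manipulator dynamics while an outer PD loop drives the tracking error to zero. A minimal sketch (the gains and dynamics matrices below are illustrative placeholders, not values from the paper):

```python
import numpy as np

def computed_torque(q, dq, q_d, dq_d, ddq_d, M, C, g, Kp, Kd):
    """Computed torque control for dynamics M(q)ddq + C(q,dq)dq + g(q) = tau.
    Feedback-linearises the dynamics, then applies PD gains on the error."""
    e = q_d - q                       # position tracking error
    de = dq_d - dq                    # velocity tracking error
    v = ddq_d + Kd @ de + Kp @ e      # outer-loop PD law
    return M @ v + C @ dq + g         # inner-loop inverse dynamics

# Example: at the target state with zero desired acceleration, the
# commanded torque reduces to pure gravity compensation g(q).
M = np.eye(2)
C = np.zeros((2, 2))
g = np.array([1.0, 2.0])
Kp = 100.0 * np.eye(2)
Kd = 20.0 * np.eye(2)
zero = np.zeros(2)
tau = computed_torque(zero, zero, zero, zero, zero, M, C, g, Kp, Kd)
```

In the paper's scheme, the NNASMC would supply the adaptive, uncertainty-rejecting component on top of this nominal inverse-dynamics law.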

Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink

Procedia PDF Downloads 91
26 Analysis of Truck Drivers’ Distraction on Crash Risk

Authors: Samuel Nderitu Muchiri, Tracy Wangechi Maina

Abstract:

Truck drivers face a myriad of challenges in their profession, and enhancements in logistics effectiveness can be pivotal in propelling economic development. The specific objective of the study was to assess the influence of driver distraction on crash risk. The study is significant as it elucidates best practices that truck drivers can embrace in an effort to enhance road safety. These include combining behaviors that enable drivers to successfully execute multifaceted functions such as finding and following routes, evading collisions, monitoring speed, adhering to road regulations, and evaluating vehicle systems’ conditions. The analysis involved an empirical review of ten previous studies related to the research topic. The articles revealed that driver distraction plays a substantial role in road accidents and other crucial road safety incidents across the globe. Africa depends immensely on the freight transport sector to facilitate supply chain operations. Several studies indicate that drivers who operate primarily on rural roads, such as those found in Sub-Saharan Africa, have an increased propensity to engage in distracting activities such as cell phone usage while driving. The findings also identified the need for digitalization in truck driving operations, including carrier management techniques such as fatigue management, artificial intelligence, and automated functions like cell phone usage controls. The recommendations can aid policymakers and commercial truck carriers in deepening their understanding of driver distraction and enforcing mitigations to foster road safety.

Keywords: truck drivers, distraction, digitalization, crash risk, road safety

Procedia PDF Downloads 13
25 Elastic Collisions of Electrons with DNA and Water From 10 eV to 100 keV: SCAR Macro Investigation

Authors: Aouina Nabila Yasmina, Zine El Abidine Chaoui

Abstract:

Recently, understanding the interactions of electrons with the DNA molecule and its components has attracted considerable interest because DNA is the main site damaged by ionizing radiation. The interactions of radiation with DNA induce a variety of molecular damage, such as single-strand breaks, double-strand breaks, base damage, cross-links between proteins and DNA, and others, or the formation of free radicals, which, through chemical reactions with DNA, can also lead to strand breakage. One factor that can contribute significantly to these processes is the effect of hydration water on the formation and reaction of radiation-induced radicals in and/or around DNA. B-DNA requires about 30% water by weight to maintain its native conformation in the crystalline state. The transformation depends on various factors such as sequence, ion composition, concentration, and water activity; partial dehydration converts it to A-DNA. The present study shows the results of theoretical calculations for positron and electron elastic scattering from DNA media and water over a broad energy range from 10 eV to 100 keV. Electron elastic cross sections and elastic mean free paths are calculated using a corrected form of the independent atom method that takes into account the geometry of the biomolecule (the SCAR macro method). Moreover, the elastic scattering of electrons and positrons by the atoms of the biomolecule was evaluated by means of relativistic (Dirac) partial-wave analysis. Our calculated results are compared with theoretical data available in the literature in the absence of experimental data, in particular for positrons. As a central result, our electron elastic cross sections are in good agreement with existing theoretical data in the range of 10 eV to 1 keV.
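Once an elastic cross section sigma is available, the elastic mean free path follows from lambda = 1/(n*sigma), with n the number density of scatterers. A minimal sketch (the cross-section value below is an assumed placeholder for illustration, not a result from the paper):

```python
def elastic_mean_free_path(number_density_per_cm3, cross_section_cm2):
    """Elastic mean free path: lambda = 1 / (n * sigma)."""
    return 1.0 / (number_density_per_cm3 * cross_section_cm2)

# Liquid water has n ~ 3.34e22 molecules/cm^3; the cross section here
# is a hypothetical round number, not a computed value.
n_water = 3.34e22        # molecules / cm^3
sigma = 1.0e-16          # cm^2 (assumed, for illustration only)
lam = elastic_mean_free_path(n_water, sigma)   # in cm
```

With these placeholder numbers the mean free path comes out on the order of nanometres, i.e. comparable to the DNA helix diameter, which is why such path lengths matter for track-structure modelling.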

Keywords: elastic cross section, elastic mean free path, SCAR macro method, electron collision

Procedia PDF Downloads 28
24 Transient Phenomena in a 100 W Hall Thrusters: Experimental Measurements of Discharge Current and Plasma Parameter Evolution

Authors: Clémence Royer, Stéphane Mazouffre

Abstract:

Nowadays, electric propulsion (EP) systems play a crucial role in space exploration missions due to their high specific impulse and long operational life. The Hall thruster (HT) is one of the most mature EP technologies: a gridless ion thruster that has proved reliable and high-performing for decades in various space missions. Operation of an HT relies on electron emission from a cathode placed outside a hollow dielectric channel that contains an anode at the back. The negatively charged particles are trapped in a magnetic field and efficiently slowed down. Through collisions, the electron cloud ionizes xenon atoms. A large electric field is generated in the axial direction due to the low electron transverse mobility in the region of strong magnetic field. Positive particles are pulled out of the chamber at high velocity and are neutralized directly in the exhaust area. This phenomenon accelerates the spacecraft at a high specific impulse. While the HT’s architecture and operating principle are relatively simple, the physics behind the thrust is complex and still partly unknown. Current and voltage oscillations, as well as electron properties, have been captured over a 30 min period after ignition. The observed low-frequency oscillations exhibited specific frequency ranges, amplitudes, and stability patterns. Correlations between the oscillations and plasma characteristics were analyzed. The impact of these instabilities on thruster performance, including thrust efficiency, has been evaluated as well. Moreover, strategies for mitigating and controlling these instabilities, such as filtering, have been developed. In this contribution, in addition to presenting a summary of the results obtained in the transient regime, we will present and discuss recent advances in Hall thruster plasma discharge filtering and control.
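A common way to extract the dominant low-frequency oscillation from a discharge-current trace is a simple FFT; the sketch below uses a synthetic 25 kHz signal as a stand-in for real data (the sampling rate, frequency, amplitudes, and noise level are all assumed for illustration):

```python
import numpy as np

fs = 1.0e6                       # sampling rate, 1 MHz (assumed)
n_samples = 10_000               # 10 ms record
t = np.arange(n_samples) / fs

# Synthetic discharge current: DC offset + 25 kHz oscillation + noise.
f_osc = 25.0e3                   # illustrative oscillation frequency
rng = np.random.default_rng(0)
current = (1.0
           + 0.3 * np.sin(2 * np.pi * f_osc * t)
           + 0.02 * rng.standard_normal(n_samples))

# Remove the DC component, then locate the spectral peak.
spectrum = np.abs(np.fft.rfft(current - current.mean()))
freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
dominant = freqs[np.argmax(spectrum)]
```

On real data the peak location and width would give the oscillation's frequency range and stability, and a notch or low-pass filter at that frequency is one way to implement the discharge filtering the abstract mentions.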

Keywords: electric propulsion, Hall Thruster, plasma diagnostics, low-frequency oscillations

Procedia PDF Downloads 54
23 A Review on Investigating the Relations between Water Harvesting and Water Conflicts

Authors: B. Laurita

Abstract:

The importance of Water Harvesting (WH) as an effective means to deal with water scarcity is universally recognized. The collection and storage of rainwater, floodwater, or quick runoff and their conversion to productive uses can ensure water availability for domestic and agricultural use, enabling lower exploitation of the aquifer, preventing erosion events, and providing significant ecosystem services. At the same time, it has been proven that WH can reduce the emergence of water conflicts if supported by a cooperative process of planning and management. On the other hand, the construction of water harvesting structures changes the hydrological regime, affecting upstream-downstream dynamics and changing water allocation, often causing contention. Furthermore, the dynamics existing between water harvesting and water conflict have not yet been properly investigated. Thus, the objective of this study is to analyze the relations between water harvesting and the emergence of water conflicts, providing a solid theoretical basis and a foundation for future studies. Two search engines were selected in order to perform the study: Google Scholar and Scopus. Separate searches were conducted on the mutual influences between water conflicts and the four main water harvesting techniques: rooftop harvesting, surface harvesting, underground harvesting, and runoff harvesting. Some of the aforementioned water harvesting techniques have been developed and implemented on scales ranging from small, household-sized systems to gargantuan dam systems. Instead of focusing on the collisions related to large-scale systems, this review aims to collect examples of the effects that the implementation of small water harvesting systems has had on access to the water resource and on water governance.
The present research highlights that in the studies conducted up to now, water harvesting, and in particular those structures that allow the collection and storage of water for domestic use, is usually recognized as a positive, palliative element during contentions. On the other hand, water harvesting can worsen and, in some cases, even generate conflicts over water management. This shows the necessity of studies that consider both the benefits and the negative influences of water harvesting, analyzing its role as either a triggering or a mitigating factor in conflict situations.

Keywords: arid areas, governance, water conflicts, water harvesting

Procedia PDF Downloads 175
22 Development of a Microfluidic Device for Low-Volume Sample Lysis

Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman

Abstract:

We developed a microchip device that uses surface acoustic waves for rapid lysis of low-volume cell samples. The device incorporates sharp-edged glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focus angle, generating high-intensity acoustic beams that converge 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area. The high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, resulting in cell lysis through lateral forces from the sharp-edged glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming, varying the electrical power, droplet volume, glass particle size, concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted the optimal conditions: an electrical signal of 2.5 W, a sample volume of 20 µl, a glass particle size below 10 µm, a concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. Integration of the surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
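The regression step can be sketched with ordinary least squares over the five varied parameters; the data below are synthetic stand-ins for the 42 pilot experiments (the effect sizes and noise level are invented for illustration, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 42   # matches the number of pilot experiments in the abstract

# Hypothetical normalized factors (columns): power, droplet volume,
# particle size, concentration, lysis time.
X = rng.uniform(0.0, 1.0, size=(n, 5))
true_w = np.array([0.5, -0.2, -0.4, 0.3, 0.25])    # assumed effects
y = X @ true_w + 0.01 * rng.standard_normal(n)     # "lysis efficiency"

# Ordinary least squares: estimate each parameter's impact.
A = np.column_stack([np.ones(n), X])               # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
weights = coef[1:]                                  # per-factor impacts
```

Ranking the fitted weights by magnitude is one simple way to read off which factor dominates lysis efficiency, which is the role the regression model plays in the abstract.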

Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet

Procedia PDF Downloads 52
21 Design and Development of a Safety Equipment and Accessory for Bicycle Users

Authors: Francine Siy, Stephen Buñi

Abstract:

Safety plays a significant role in everyone’s life on a day-to-day basis. We wish for our own safety and that of our loved ones as we venture out on our daily commute. The road is undeniably dangerous and unpredictable, with abundant traffic collisions and pedestrians experiencing various injuries. For bicycle users, the risk of accidents is exacerbated, and injuries may be severe. Even when cyclists try their best to be safe and protected, the possibility of encountering danger is always there; despite being equipped with protective gear, safety is never guaranteed. Cyclists often settle for helmets and standard reflector vests to establish a presence on the road. There are different types of vests available, depending on the profession. However, traditional reflector vests, mostly seen on construction workers and traffic enforcers, were not designed for riders or for protecting them from injuries. Riders therefore need access to ergonomically designed equipment and accessories that suit them and cater to their needs. This research aimed to offer a protective vest with safety features for riders that is comfortable, effective, durable, and intuitive. It sheds light on and addresses the safety of the biker population, which has grown continuously through the years. The product was designed and developed by gathering data and using the cognitive mapping method to ensure that all qualitative and quantitative data were considered in this study, improving on existing products that lack the proper design considerations. Available equipment for cyclists is often sold separately or lacks the safety features needed by cyclists traversing open roads. Each safety feature, such as the headlights, reflectors, signal or rear lights, zipper pouch, body camera attachment, and wireless remote control, plays a particular role in helping cyclists embark on their daily commute.
These features aid in illumination, visibility, easy maneuvering, convenience, and security, allowing cyclists to go for a safer ride that is of use throughout the day. The product is designed and produced effectively and inexpensively without sacrificing the quality and purpose of its usage.

Keywords: bicycle accessory, protective gear, safety, transport, visibility

Procedia PDF Downloads 56
20 The MoEDAL-MAPP* Experiment - Expanding the Discovery Horizon of the Large Hadron Collider

Authors: James Pinfold

Abstract:

The MoEDAL (Monopole and Exotics Detector at the LHC) experiment, deployed at IP8 on the Large Hadron Collider ring, was the first dedicated search experiment to take data at the Large Hadron Collider (LHC), in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles, and long-lived massive charged SUSY particles. We shall report on our search at the LHC’s Run-2 for magnetic monopoles and dyons produced in p-p and photon-fusion collisions. In more detail, we will report our most recent result in this arena: the search for magnetic monopoles via the Schwinger mechanism in Pb-Pb collisions. The MoEDAL detector is being reinstalled for the LHC’s Run-3 to continue the search for electrically and magnetically charged HIPs with enhanced instantaneous luminosity, improved detector efficiency, and a factor of ten lower thresholds for HIPs. As part of this effort, we will search for massive, long-lived, singly and multiply charged particles from various scenarios for which MoEDAL has a competitive sensitivity. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), is now the LHC’s newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly charged particles with charge, or effective charge, as low as 10⁻³ e (where e is the electron charge). Also, in conjunction with MoEDAL’s trapping detector, the MAPP detector gives us a unique sensitivity to extremely long-lived charged particles, as well as some sensitivity to long-lived neutral particles. The addition of an outrigger detector for MAPP-1, to increase its acceptance for more massive milli-charged particles, is currently at the Technical Proposal stage.
Additionally, we will briefly report on the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High Luminosity LHC (HL-LHC). This experiment phase is designed to maximize MoEDAL-MAPP’s sensitivity to very long-lived neutral messengers of physics beyond the Standard Model. We envisage this detector being deployed in the UGC1 gallery near IP8.

Keywords: LHC, beyond the standard model, dedicated search experiment, highly ionizing particles, long-lived particles, milli-charged particles

Procedia PDF Downloads 42
19 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithms for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments, where most conventional dampers would fail. It was shown experimentally in many papers that the efficiency of the particle dampers is high in the case of resonant vibration. In order to use the particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed as rigid body and interparticle collisions are modeled by mechanical elements as springs and dashpots. However, the computational cost is significant since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, the new algorithms are needed. In this study, new algorithms are proposed for implementing the high performance DEM. On the assumption that behaviors of the granular particles in the each divided area of the damper container are the same, the contact force of the primary system with all particles can be considered to be equal to the product of the divided number of the damper area and the contact force of the primary system with granular materials per divided area. This convenience makes it possible to considerably reduce the calculation time. 
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and particle material influence the damper performance.
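The divided-area acceleration idea can be sketched in a few lines of Python. This is an illustrative sketch with a linear spring-dashpot contact law and made-up stiffness and damping values, not the author's implementation:

```python
def contact_force(overlap, rel_vel, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal contact force; zero without overlap.
    k [N/m] and c [N s/m] are illustrative placeholder values."""
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * rel_vel

def wall_force_accelerated(overlaps, rel_vels, n_regions):
    """Assume the particles behave identically in each of the n_regions
    divided areas of the container: evaluate the wall contact force for
    ONE representative area and multiply by the number of areas, instead
    of summing over every particle in every area."""
    per_area = sum(contact_force(d, v) for d, v in zip(overlaps, rel_vels))
    return n_regions * per_area
```

Because only one representative area is evaluated per time step, the cost of the wall-force sum drops by roughly a factor of the number of divided areas.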

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 435
18 Monte Carlo and Biophysics Analysis in a Criminal Trial

Authors: Luca Indovina, Carmela Coppola, Carlo Altucci, Riccardo Barberi, Rocco Romano

Abstract:

In this paper, a real court case held in Italy at the Court of Nola is considered, in which a correct physical description, conducted with both a Monte Carlo and a biophysical analysis, would have been sufficient to arrive at conclusions confirmed by documentary evidence. It is an example of how forensic physics can be useful in confirming documentary evidence in order to reach hardly questionable conclusions. This was a libel trial in which the defendant, Mr. DS (Defendant for Slander), had falsely accused one of his neighbors, Mr. OP (Offended Person), of having caused him some damages. The damages would have been caused by a piece of external plaster that would have detached from the neighbor's property and hit Mr. DS while he was in his garden, more than a meter away from the facade of the building from which the plaster would have detached. In the trial, Mr. DS claimed to have suffered a scratch on his forehead, but he never showed the plaster that had hit him, nor was he able to say where the plaster had come from. Furthermore, Mr. DS presented a medical certificate with a diagnosis of contusion of the cerebral cortex. On the contrary, the images from Mr. OP's security cameras show no movement in the garden of Mr. DS over a long interval of time (about 2 hours) around the time of the alleged accident, nor do they show any people entering or leaving the house of Mr. DS in the same interval. The biophysical analysis shows that the diagnosis in the medical certificate and the wound declared by the defendant, already in conflict with each other, are both incompatible with the fall of external plaster pieces too small to be found. The wind was at level 1 on the Beaufort scale, that is, too weak even to raise dust (which requires level 4).
Therefore, the motion of the plaster pieces can be described as projectile motion, whereas collisions with the building cornice can be treated using Newton's law of restitution. Numerous Monte Carlo simulations show that the pieces of plaster could not have reached even the garden of Mr. DS, let alone a distance of over 1.30 meters. The results agree with the documentary evidence (the images from Mr. OP's security cameras) that Mr. DS could not have been hit by plaster pieces coming from Mr. OP's property.
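The physics described above can be sketched as a minimal Monte Carlo in Python. All heights, speeds, angles, and the restitution coefficient below are illustrative assumptions, not the values from the actual expert analysis:

```python
import math
import random

def landing_distance(h, v0, angle_deg, e=0.3, g=9.81):
    """Horizontal distance travelled by a fragment leaving the cornice at
    height h [m] with speed v0 [m/s]. The bounce off the cornice is modeled
    by scaling the launch speed with a coefficient of restitution e
    (Newton's law of restitution). Positive angles point downward."""
    th = math.radians(angle_deg)
    vx = e * v0 * math.cos(th)                       # horizontal speed after bounce
    vy = e * v0 * math.sin(th)                       # vertical speed, downward > 0
    t = (-vy + math.sqrt(vy * vy + 2 * g * h)) / g   # time to reach the ground
    return vx * t

def monte_carlo_max_range(n=10000, seed=1):
    """Sample detachment height, speed, and angle; return the largest
    landing distance found over n trials."""
    random.seed(seed)
    best = 0.0
    for _ in range(n):
        h = random.uniform(2.5, 3.5)        # detachment height [m] (assumed)
        v0 = random.uniform(0.0, 1.0)       # detachment speed [m/s] (assumed)
        ang = random.uniform(-30.0, 30.0)   # launch angle [deg] (assumed)
        best = max(best, landing_distance(h, v0, ang))
    return best
```

Under these assumed ranges, even the most favorable sampled trajectory lands well short of the 1.30 m distance discussed in the case.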

Keywords: biophysics analysis, Monte Carlo simulations, Newton’s law of restitution, projectile motion

Procedia PDF Downloads 104
17 Healthcare Learning From Near Misses in Aviation Safety

Authors: Nick Woodier, Paul Sampson, Iain Moppett

Abstract:

Background: For years, healthcare across the world has recognised that patients are coming to harm from the very processes meant to help them. In response, healthcare tells itself that it needs to 'be more like aviation.' Aviation safety is highly regarded by those in healthcare and is seen as an exemplar. Specifically, healthcare is keen to learn from how aviation uses near misses to make its industry safer. Healthcare is rife with near misses; however, there has been little progress in addressing them, with most research having focused on reporting. Addressing the factors that contribute to near misses will potentially help reduce the number of significant-harm patient safety incidents. While the healthcare literature states the need to learn from aviation's use of near misses, nothing describes how best to do this. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from aviation to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how aviation, amongst other safety-critical industries, manages near misses. Results: Twelve aviation interviews contributed to the GT, spanning passenger airlines, air traffic control, and bodies involved in policy, regulation, and investigation. The scoping review identified 83 articles across a range of safety-critical industries, but only seven focused on aviation. The GT identified that aviation interprets the term 'near miss' in different ways, commonly using it to refer specifically to near-miss air collisions, also known as Airproxes.
Other types of near misses exist, such as health and safety events, but the reporting of these, and the safety climate associated with them, is not as mature. Safety culture in aviation was regularly discussed, with evidence that culture varies depending on which part of the industry is being considered (e.g., civil vs. business aviation). Near misses are seen as just one part of an extensive safety management system, but processes to support their reporting and analysis are not consistent. Their value alone is also questionable, with the challenge to long-held beliefs originating from the 'common cause hypothesis.' Conclusions: There is learning that healthcare can take from how parts of aviation manage and learn from near misses. For example, healthcare would benefit from a formal safety management system, which currently does not exist. However, it may not be as simple as 'healthcare should learn from aviation,' given the variation in safety maturity across the industry. Healthcare needs to clarify how to incorporate near misses into learning and whether allocating resources to them is of value; it was reported that catastrophes have led to greater improvements in safety in aviation.

Keywords: aviation safety, patient safety, near miss, safety management systems

Procedia PDF Downloads 103
16 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico

Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano

Abstract:

In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks interact constantly with motor vehicle users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and the risk of accidents are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, crossings are classified by dangerousness (high, medium, or low) into types A, B, and C, with a different combination of audible and visual signaling and gates, as well as horizontal and vertical signage, recommended for each type. This classification is based on a weighting, but regrettably it is not explained how the weight values were obtained. A review of the variables and the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (the USA and Canada) and European countries use a broader classification, so that each crossing is addressed more precisely and equipment costs are adjusted accordingly. The lack of a proper classification could lead to cost overruns in equipment and deficient operation. To exemplify the lack of a good classification, six crossings are studied: three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs carrying the nomenclature of the crossing and a direct telephone line for reporting emergencies. Such implementation is unaffordable for most municipal governments.
Also, an inventory of the most dangerous grade crossings in urban and rural areas must be compiled. Then, an approach for improving the classification of grade crossings is suggested. This approach must be based on design criteria; the characteristics of adjacent roads or intersections that can influence traffic flow through the crossing; accidents involving motorized and non-motorized vehicles; land use and land management; the type of area; and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
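For illustration only, a weighted index of the kind the authors call for might be sketched as follows. Every weight, threshold, and input here is a hypothetical placeholder, since the official Mexican weighting is, as noted above, not published:

```python
def crossing_risk_index(aadt, trains_per_day, accidents_5yr, has_gates, urban):
    """Hypothetical weighted risk index for a grade crossing, mapping a
    0-1 score onto the Mexican A/B/C danger classes. All weights and
    thresholds are illustrative placeholders, not official values.
    aadt: annual average daily road traffic; accidents_5yr: accidents
    recorded in the last five years."""
    exposure = aadt * trains_per_day            # road-rail exposure product
    score = (0.5 * min(exposure / 100000, 1.0)  # exposure (capped)
             + 0.3 * min(accidents_5yr / 5, 1.0)  # accident history
             + 0.1 * (1.0 if urban else 0.5)      # surrounding area type
             + 0.1 * (0.0 if has_gates else 1.0)) # existing protection
    if score >= 0.66:
        return "A"   # high danger
    if score >= 0.33:
        return "B"   # medium danger
    return "C"       # low danger
```

A busy, unprotected urban crossing with a recent accident history scores "A", while a quiet gated rural crossing scores "C"; a published methodology would of course need calibrated weights and validated inputs.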

Keywords: accidents, grade crossing, railroad, traffic safety

Procedia PDF Downloads 79
15 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper

Authors: A. F. Momin, D. V. Khakhar

Abstract:

Granular materials consist of discrete particles, found in nature and in various industries, that in gravity-driven flow behave macroscopically like liquids. A fundamental industrial unit operation is the hopper: a storage bin with inclined walls or a converging channel in which material flows downward under gravity and exits through a bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a gravitational force directed radially toward the apex of the wedge. These flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965), and using critical state theory by Ravi Prakash and Rao (1988). Here, the smooth-wall, radial-gravity (SWRG) wedge-shaped hopper is simulated using the discrete element method (DEM) to test the existing theories. The DEM simulations involve the solution of Newton's equations, taking particle-particle interactions into account, to compute the stress and velocity fields of the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except in the region near the exit, where both viscous and frictional effects are present. To further understand this behaviour, a parametric analysis of the rheology of wedge-shaped hoppers is carried out by varying the orifice diameter, wedge angle, friction coefficient, and stiffness. We find that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase; varying the stiffness produced no substantial change in velocity. Stresses at the exit are expected to result from the transfer of momentum during particle collisions; for this reason, relationships between viscosity and shear rate are shown, and all the data collapse onto a single curve.
In addition, it is demonstrated that the viscosity and the volume fraction exhibit power-law correlations with the inertial number, and that all the data collapse onto a single curve. A continuum model for granular flows is presented using these empirical correlations.
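The inertial-number collapse mentioned above can be made concrete with a short sketch. The definition of the inertial number is standard in dense granular rheology; the power-law coefficients `a` and `n` below are illustrative placeholders, not the fitted values from this study:

```python
import math

def inertial_number(shear_rate, d, pressure, rho):
    """Dimensionless inertial number I = gamma_dot * d / sqrt(P / rho),
    for shear rate gamma_dot [1/s], particle diameter d [m],
    pressure P [Pa], and particle density rho [kg/m^3]."""
    return shear_rate * d / math.sqrt(pressure / rho)

def effective_viscosity(shear_rate, d, pressure, rho, a=0.3, n=-0.5):
    """Hypothetical power-law collapse eta* = a * I**n of the scaled
    viscosity onto a single curve; a and n are placeholders."""
    I = inertial_number(shear_rate, d, pressure, rho)
    return a * I ** n
```

With a negative exponent, the scaled viscosity decreases as the inertial number grows, which is the qualitative shape of a collapse of this kind.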

Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers

Procedia PDF Downloads 55
14 Dynamic Simulation of Disintegration of Wood Chips Caused by Impact and Collisions during the Steam Explosion Pre-Treatment

Authors: Muhammad Muzamal, Anders Rasmuson

Abstract:

Wood is widely considered as a raw material for the production of bio-polymers, bio-fuels, and value-added chemicals. However, a shortcoming of using wood as a raw material is that its enzymatic hydrolysis is difficult, because the accessibility of enzymes to hemicelluloses and cellulose is hindered by the complex chemical and physical structure of the wood. The steam explosion (SE) pre-treatment improves the digestion of wood by creating both chemical and physical modifications. In this process, wood chips are first treated with steam at high pressure and temperature for a certain time in a steam treatment vessel. During this time, the chemical linkages between lignin and polysaccharides are cleaved and the stiffness of the material decreases. Then the steam discharge valve is rapidly opened, and the steam and wood chips exit the vessel at very high speed. The fast-moving wood chips collide with each other and with the walls of the equipment and disintegrate into small pieces. More damaged and disintegrated wood has a larger surface area and increased accessibility to hemicelluloses and cellulose. The energy required to achieve the same increase in specific surface area is 70% higher for a conventional mechanical technique, i.e., an attrition mill, than for the steam explosion process. The mechanism of wood disintegration during the SE pre-treatment has received very little study. In this study, we have simulated the collision and impact of wood chips (dimensions 20 mm x 20 mm x 4 mm) with each other and with the walls of the vessel. The wood chips are simulated as a 3D orthotropic material. Damage and fracture in the wood have been modelled using a 3D Hashin damage model, accomplished by developing a user-defined subroutine and implementing it in the FE software ABAQUS.
The elastic and strength properties used for the simulations are those of spruce wood at 12% and 30% moisture content and at 20 and 160 °C, because the impacted wood chips are pre-treated with steam at high temperature and pressure. We have simulated several cases to study the effects of the elastic and strength properties of the wood, the velocity of the moving chip, and the orientation of the wood chip at the moment of impact on the damage in the chips. The disintegration patterns captured by the simulations are very similar to those observed in experimentally obtained steam-exploded wood. The simulation results show that wood chips moving with higher velocity disintegrate more. Increasing the moisture content and temperature decreases the elastic properties and increases the damage. Impacts and collisions in specific directions cause easier disintegration. This model can be used to design steam explosion equipment efficiently.
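One criterion of the Hashin type used in such damage models can be sketched as follows. This shows only tensile failure along the grain; the stress and strength values in the usage below are illustrative, not the spruce data used in the study:

```python
def hashin_fiber_tension(s11, s12, s13, Xt, S12, S13):
    """Hashin-type tensile failure index along the grain direction:
    damage initiates when the index reaches 1.
    s11: normal stress along the grain; s12, s13: shear stresses;
    Xt: tensile strength along the grain; S12, S13: shear strengths."""
    if s11 <= 0.0:
        return 0.0  # this criterion applies to tensile stress only
    return (s11 / Xt) ** 2 + (s12 / S12) ** 2 + (s13 / S13) ** 2
```

A full implementation evaluates several such criteria (tension/compression, fiber/matrix) at every integration point and degrades the stiffness once an index exceeds 1, which is what the user-defined ABAQUS subroutine mentioned above does.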

Keywords: dynamic simulation, disintegration of wood, impact, steam explosion pretreatment

Procedia PDF Downloads 370
13 Simulation of Wet Scrubbers for Flue Gas Desulfurization

Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra

Abstract:

Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and run down the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model in which all three contributions are taken into account and resolved, using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy, and momentum, which affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer, and wall film interactions. Numerous sub-models have been implemented and coupled to realise these phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have likewise been resolved using numerous sub-models. The continuous gas phase has been coupled with the liquid phases through source terms, using an approach in which the two software packages are connected by a link structure. The complete CFD model has been verified against 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm was used to tune numerous model parameters to match the experimental results.
The CFD model had to be fast to evaluate in order to apply this optimization routine, which required approximately 1000 simulations. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, with the data and source-term exchange increasing the computational requirements by approximately 5%. This allows the benefits of both software programs to be exploited.
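A pattern search of the kind used for the tuning can be sketched as a minimal compass search on a toy objective. This is a generic derivative-free sketch, not the authors' optimizer or their objective function:

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, max_iter=1000):
    """Minimal compass pattern search: poll +/- step along each
    coordinate, accept any improvement, and halve the step when a full
    poll fails. Returns the best point found and its objective value."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5    # shrink the poll stencil
        it += 1
    return x, fx
```

In the scrubber setting, `f` would wrap one full CFD evaluation and return the mismatch against the 16 experimental tests, which is why a fast model was essential.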

Keywords: desulfurization, discrete phase, scrubber, wall film

Procedia PDF Downloads 230
12 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous, owing to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative radar-based cyclist safety system designed to give cyclists real-time collision risk warnings. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency so that they run on TI's AWR1843BOOST radar, using a coarse classification approach that distinguishes between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering, we propose a 2-level clustering approach that builds on the state-of-the-art density-based spatial clustering of applications with noise (DBSCAN). The objective is to first cluster objects based on their velocity and then refine the analysis by clustering based on position. The first level identifies groups of objects with similar velocities and movement patterns; the second level refines the analysis by considering the spatial distribution of these objects, taking the clusters obtained from the first level as its input. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our own dataset of radar point clouds, collected synchronized with a camera on an Nvidia Jetson Nano board.
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
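The 2-level idea can be sketched in plain Python. The DBSCAN below is a deliberately tiny version with no spatial index; the `eps` values and the `(x, y, velocity)` detection format are illustrative assumptions, not the paper's configuration:

```python
def dbscan(points, eps, min_pts=2):
    """Tiny DBSCAN: returns one label per point, -1 meaning noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # provisionally noise
            continue
        labels[i] = cid
        seeds = list(nbrs)
        while seeds:                    # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid         # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)
        cid += 1
    return labels

def two_level_cluster(detections, v_eps=0.5, xy_eps=1.0):
    """Level 1: cluster detections (x, y, velocity) on velocity alone.
    Level 2: re-cluster each velocity group on (x, y) position."""
    v_labels = dbscan([(d[2],) for d in detections], eps=v_eps)
    out = [-1] * len(detections)
    next_id = 0
    for g in sorted(set(v_labels) - {-1}):
        idx = [i for i, l in enumerate(v_labels) if l == g]
        xy_labels = dbscan([detections[i][:2] for i in idx], eps=xy_eps)
        for i, l in zip(idx, xy_labels):
            out[i] = next_id + l if l != -1 else -1
        next_id += max(xy_labels, default=-1) + 1
    return out
```

Two detections with the same radial velocity but far apart in space end up in different final clusters, which is exactly the refinement the second level provides over a single spatial DBSCAN pass.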

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 46
11 Artificial Intelligence and Governance in Relevance to Satellites in Space

Authors: Anwesha Pathak

Abstract:

With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process the vast amounts of data received from satellites. AI algorithms can analyze satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can also be leveraged to enhance the security of satellite systems: AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and for ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices.
Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.
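A screening-level collision check of the kind AI-based STM would automate can be sketched as follows. This assumes straight-line (linearized) relative motion over a short window, which is a common first filter, not a full orbital propagation:

```python
import math

def closest_approach(r1, v1, r2, v2):
    """Time and minimum separation of two objects moving on straight
    lines: positions r1, r2 and velocities v1, v2 as (x, y, z) tuples.
    The time of closest approach is clamped to the future (t >= 0)."""
    dr = [a - b for a, b in zip(r1, r2)]      # relative position
    dv = [a - b for a, b in zip(v1, v2)]      # relative velocity
    dv2 = sum(c * c for c in dv)
    t = 0.0 if dv2 == 0 else max(0.0, -sum(a * b for a, b in zip(dr, dv)) / dv2)
    d = [a + t * b for a, b in zip(dr, dv)]   # separation at time t
    return t, math.sqrt(sum(c * c for c in d))
```

In practice, pairs whose screening distance falls below a threshold would be passed on to a high-fidelity propagation and probability-of-collision analysis.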

Keywords: satellite, space debris, traffic, threats, cyber security

Procedia PDF Downloads 38
10 Electron Bernstein Wave Heating in the Toroidally Magnetized System

Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten

Abstract:

The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH heats the electrons in a plasma by resonant absorption of electromagnetic waves; the energy of the electrons is transferred to the ions indirectly, by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density: electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher-harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Above this density threshold, additional power deposition mechanisms can occur, including collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow the coupling efficiency to be evaluated. Optical emission spectroscopy and camera images also contribute to the plasma characterization.
The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat balance equations. This code was initially benchmarked against experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles, and the modelling is compared with the experimental data.
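The resonance positions discussed above follow from standard textbook formulas, sketched here. The physical constants are standard CODATA values; the example field in the test is chosen so that the fundamental electron cyclotron resonance falls at 2.45 GHz, a common magnetron frequency:

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge [C]
M_E = 9.1093837015e-31        # electron mass [kg]
EPS0 = 8.8541878128e-12       # vacuum permittivity [F/m]

def f_ce(B):
    """Electron cyclotron frequency [Hz] for magnetic field B [T]."""
    return E_CHARGE * B / (2 * math.pi * M_E)

def f_pe(n_e):
    """Electron plasma frequency [Hz] for electron density n_e [m^-3]."""
    return math.sqrt(n_e * E_CHARGE ** 2 / (EPS0 * M_E)) / (2 * math.pi)

def f_uh(B, n_e):
    """Upper hybrid frequency [Hz]: f_UH^2 = f_ce^2 + f_pe^2."""
    return math.hypot(f_ce(B), f_pe(n_e))
```

Because the UHR frequency always exceeds the cyclotron frequency at finite density, the evanescent layer between the cut-off and the UHR sits in front of the ECR position, which is why mode conversion to the Electron Bernstein Wave matters for reaching it.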

Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS

Procedia PDF Downloads 65
9 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level

Authors: Matthias Stange

Abstract:

Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architectural, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size and covering different application regions as well as different project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined in the application regions North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive analysis of six independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on six dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated.
The study revealed that most of the benefits of using BIM perceived in the numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to schedule targets. The quantitative research suggests that the BIM planning method, in its current application, has not (yet) produced a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. In essence, the author suggests that the further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.

Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry

Procedia PDF Downloads 49
8 Basics of Gamma Ray Burst and Its Afterglow

Authors: Swapnil Kumar Singh

Abstract:

Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows and are associated with core-collapse supernovae. The detection of delayed emission at X-ray, optical, and radio wavelengths, the "afterglow" following a γ-ray burst, can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived afterglow is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio): a slowly fading emission created by collisions between the burst ejecta and the interstellar gas. At X-ray wavelengths, the GRB afterglow fades quickly at first and then transitions to a less steep drop-off (its later behaviour is ignored here). During these early phases, the X-ray afterglow has a spectrum that follows a power law: flux F ∝ E^β, where E is the energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse shock" can be misleading: this shock is still moving outward at relativistic velocity in the rest frame of the star, but it ploughs backward through the ejecta in their frame and slows the expansion. The reverse shock can be dynamically important, as it can carry energy comparable to that of the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow-cooling regime, with the cooling break lying between the optical and the X-ray. Numerous observations support this broad picture, for example the spectral energy distribution of the afterglow of a very bright GRB: the bluer light (optical and X-ray) follows the typical synchrotron forward-shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRBs and particle physics is needed to unfold the mysteries of the afterglow.
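The power law F ∝ E^β mentioned above can be made concrete with a two-point estimate of the spectral index; the β value used in the check below is illustrative, not a measured GRB value:

```python
import math

def spectral_index(e1, f1, e2, f2):
    """Spectral index beta of a power-law spectrum F = F0 * E**beta,
    recovered from two (energy, flux) measurements:
    beta = ln(f2/f1) / ln(e2/e1)."""
    return math.log(f2 / f1) / math.log(e2 / e1)

def flux(e, f0, e0, beta):
    """Power-law flux at energy e, normalized so that flux(e0) = f0."""
    return f0 * (e / e0) ** beta
```

In practice, the index is fitted over many energy bins rather than two points, and a change of β across a band locates spectral breaks such as the cooling break discussed above.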

Keywords: GRB, synchrotron, X-ray, isotropic energy

Procedia PDF Downloads 63
7 Effect of Oxygen Ion Irradiation on the Structural, Spectral and Optical Properties of L-Arginine Acetate Single Crystals

Authors: N. Renuka, R. Ramesh Babu, N. Vijayan

Abstract:

Ion beams play a significant role in tuning the properties of materials. Based on their radiation behavior, engineering materials fall into two categories: organic solids, which are sensitive to the energy deposited in their electronic system, and metals, which are insensitive to it. However, exposure to swift heavy ions alters this general behavior. Depending on its mass, kinetic energy, and nuclear charge, an ion can produce modifications within a thin surface layer or penetrate deeply to produce a long, narrow distorted region along its path. When a highly energetic ion beam impinges on a material, the Coulombic interaction between the target atoms and the beam causes two different types of changes: (i) inelastic collisions of the energetic ion with the atomic electrons of the material; and (ii) elastic scattering from the nuclei of the atoms of the material, which is chiefly responsible for displacing atoms from their lattice positions. After heavy ion exposure, the material returns to an equilibrium state, during which it undergoes surface and bulk modifications that depend on the mass of the projectile ion, the physical properties of the target material, the ion energy, and the beam dimensions. It is well established that electronic stopping power plays a major role in the defect creation mechanism, provided it exceeds a threshold that strongly depends on the nature of the target material. Reports are available on heavy ion irradiation, especially of crystalline materials, to tune their physical and chemical properties.
L-arginine acetate (LAA) is a potential semi-organic nonlinear optical crystal, and its optical, mechanical, and thermal properties have already been reported. The main objective of the present work is to enhance or tune the structural and optical properties of LAA single crystals by heavy ion irradiation. In the present study, LAA single crystals were grown by the slow evaporation solution growth technique. The grown crystals were irradiated with oxygen ions at doses of 600 krad and 1 Mrad in order to tune the structural and optical properties. The structural properties of pristine and oxygen-ion-irradiated LAA single crystals were studied using powder X-ray diffraction and Fourier transform infrared spectral studies, which reveal the structural changes generated by irradiation. The optical behavior of pristine and irradiated crystals was studied by UV-Vis-NIR and photoluminescence analyses. From this investigation, we conclude that oxygen ion irradiation modifies the structural and optical properties of LAA single crystals.

Keywords: heavy ion irradiation, NLO single crystal, photoluminescence, X-ray diffractometer

Procedia PDF Downloads 223
6 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety

Authors: Neeti Nayak, Khalid Duri

Abstract:

Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States involving motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicates accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections)- the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many pedestrian accidents occur at locations that should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require costly deployment of engineering surveys and analysis, which limit the capacity of resource-constrained administrations to satisfy their community’s needs for safe roadways adequately, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions- the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures.
The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risks of accidents using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and estimate probabilities for jaywalking and risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards given the availability of relevant GIS data.
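The kind of low-cost risk ranking described above can be sketched in a few lines. Everything in this fragment is hypothetical: the intersection names, the attribute set, the weights, and the "sightline must cover twice the design speed" rule of thumb are illustrative assumptions, not the paper's model or any published standard.

```python
from dataclasses import dataclass

@dataclass
class Intersection:
    name: str                # hypothetical site label
    crashes_per_year: float  # historical collisions, all modes
    has_signal: bool         # traffic control present
    sightline_m: float       # clear sight distance from a 3D visibility analysis
    design_speed_kmh: float

def risk_score(x: Intersection) -> float:
    """Toy composite score: crash history weighted by design speed,
    penalized for missing controls and short sightlines."""
    # Assumed rule of thumb; real stopping-sight-distance standards differ.
    required_sight_m = x.design_speed_kmh * 2.0
    visibility_penalty = max(0.0, 1.0 - x.sightline_m / required_sight_m)
    control_penalty = 0.0 if x.has_signal else 0.5
    return (x.crashes_per_year * (x.design_speed_kmh / 50.0)
            * (1.0 + visibility_penalty + control_penalty))

# Rank candidate sites for mitigation, highest risk first.
sites = [
    Intersection("5th & Main", 4.0, True, 120.0, 50.0),
    Intersection("Hill Rd & Oak", 2.5, False, 40.0, 60.0),
]
ranked = sorted(sites, key=risk_score, reverse=True)
```

Note how the uncontrolled, poorly visible intersection can outrank one with a higher raw crash count; this is the sense in which visibility and control layers add value beyond accident history alone.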

Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design

Procedia PDF Downloads 83
5 Study on Electromagnetic Plasma Acceleration Using Rotating Magnetic Field Scheme

Authors: Takeru Furuawa, Kohei Takizawa, Daisuke Kuwahara, Shunjiro Shinohara

Abstract:

In the field of space propulsion, electric propulsion systems have been developed because their fuel efficiency is much higher than that of conventional chemical propulsion. However, practical electric propulsion systems, e.g., the ion engine, suffer from short lifetimes due to damage to the plasma generation and acceleration electrodes. A helicon plasma thruster is proposed as a long-lifetime electric thruster with no direct-contact electrodes. In this system, both generation and acceleration of a dense plasma are executed by antennas from the outside of a discharge tube. Development of the helicon plasma thruster has been conducted under the Helicon Electrodeless Advanced Thruster (HEAT) project. Our helicon plasma thruster has two important processes. First, we generate a dense source plasma using a helicon wave with an excitation frequency between the ion and electron cyclotron frequencies, fci and fce, respectively, applied from outside the discharge tube using a radio frequency (RF) antenna. The helicon plasma source can provide high density (~10^19 m^-3), a high ionization ratio (up to several tens of percent), and high particle generation efficiency. Second, in order to achieve high thrust and specific impulse, we accelerate the dense plasma by the axial Lorentz force fz, given by the product of the induced azimuthal current jθ and the static radial magnetic field Br, i.e., fz = jθ × Br. The HEAT project has proposed several kinds of electrodeless acceleration schemes, and in our particular case, the Rotating Magnetic Field (RMF) method has been extensively studied. The RMF scheme was originally developed as a concept to maintain the Field Reversed Configuration (FRC) in magnetically confined fusion research. Here, RMF coils are expected to generate jθ through the nonlinear effect described below.
First, the rotating magnetic field Bω is generated by two pairs of RMF coils with AC currents that have a phase difference of 90 degrees between the pairs. Due to Faraday’s law, an axial electric field is induced. Second, an axial current is generated through Ohm’s law by the effects of electron-ion and electron-neutral collisions. Third, the azimuthal electric field is generated by the nonlinear term, together with the retarding torque generated again by collision effects. Then, the azimuthal current jθ is generated as jθ = -nₑer ∙ 2πfRMF. Finally, the axial Lorentz force fz for plasma acceleration is generated. Here, jθ is proportional to nₑ and to the frequency of the RMF coil current, fRMF, when Bω fully penetrates the plasma. Our previous study achieved a 19% increase of ion velocity using an RMF coil power supply at 5 MHz and 50 A. In this presentation, we will show the improvement of the ion velocity using a lower frequency and a higher current supplied by the RMF power supply. In conclusion, helicon high-density plasma production and electromagnetic acceleration by the RMF scheme with the concept of an electrodeless condition have been successfully executed.
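The current-drive relation quoted above, jθ = -nₑer ∙ 2πfRMF, and the resulting axial force density fz = jθ × Br can be evaluated with a short sketch. Only nₑ ~ 10^19 m^-3 and fRMF = 5 MHz come from the abstract; the radius r and field Br below are assumed purely for illustration, and sign conventions are kept as written.

```python
import math

E_CHARGE = 1.602e-19  # elementary charge [C]

def azimuthal_current_density(n_e, r, f_rmf):
    """j_theta = -n_e * e * r * 2*pi*f_RMF  [A/m^2],
    valid when the rotating field B_omega fully penetrates the plasma."""
    return -n_e * E_CHARGE * r * 2.0 * math.pi * f_rmf

def axial_force_density(j_theta, B_r):
    """Axial Lorentz force density f_z = j_theta * B_r  [N/m^3]."""
    return j_theta * B_r

# Illustrative inputs: n_e = 1e19 m^-3 and f_RMF = 5 MHz from the abstract;
# r = 2 cm and B_r = 0.01 T are assumed values for the sketch.
j_theta = azimuthal_current_density(1e19, 0.02, 5e6)  # ~ -1.0e6 A/m^2
f_z = axial_force_density(j_theta, 0.01)
```

The linear scaling of jθ with both nₑ and fRMF is what motivates the abstract's trade-off between coil frequency and coil current in driving the acceleration.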

Keywords: electric propulsion, electrodeless thruster, helicon plasma, rotating magnetic field

Procedia PDF Downloads 237