Search results for: code blue drill
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2095

505 Security Analysis of Mode S Transponder Technology and Attack Examples

Authors: M. Rutkowski, J. Cwiklak, M. Grzegorzewski, M. Adamski

Abstract:

All Class A aircraft have to be equipped with a Mode S transponder for ATC surveillance purposes. This technology was designed to provide a robust and dependable solution for localizing, identifying, and exchanging data with an airplane. The purpose of this paper is to analyze potential hazards that result from the lack of any security or encryption at the design level. Secondary Surveillance Radars (SSR) rely on an active response from the airplane. The SSR installation broadcasts a directional interrogation signal to the planes in range on the 1030 MHz frequency with DPSK modulation. If the interrogation is correctly received by the transponder located on the plane, a proper answer is sent on 1090 MHz with PPM modulation, containing the plane's SQUAWK code, barometric altitude, GPS coordinates, and 24-bit unique address code. This technology does not use any kind of encryption, and all of these specifications can easily be found on the internet. Since there is no encryption or security measure to ensure the credibility of the sender and the message, it is highly hazardous to rely on such technology for the safety of air traffic. The only thing that identifies an airplane is its 24-bit unique address, and most planes have already been sniffed by aviation enthusiasts and cataloged in web databases. At the time of writing this article, PoFung Technologies had announced plans to release an all-band SDR transceiver, a device that would be more than enough to build one's own Mode S transponder. With a fake transponder, a potential terrorist can identify as a different airplane. By replacing the transponder in a poorly controlled airspace, hijackers can enter another airspace identifying themselves as another plane and land in the desired area.
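
Because nothing in the reply is cryptographically protected, forging an identity reduces to packing bits. The following Python sketch makes the point with a simplified, hypothetical frame layout; the field widths and altitude encoding are illustrative assumptions, not the actual Mode S format:

```python
def pack_fake_reply(icao24: int, squawk: int, altitude_ft: int) -> int:
    """Pack a simplified, hypothetical transponder reply into an integer.

    Field widths are illustrative only; the point is that the frame carries
    no signature or MAC, so any values pass as 'authentic'.
    """
    assert 0 <= icao24 < 2**24             # 24-bit unique address
    assert 0 <= squawk <= 0o7777           # SQUAWK: four octal digits (12 bits)
    alt_code = (altitude_ft + 1000) // 25  # toy altitude encoding (13 bits here)
    return (icao24 << 25) | (squawk << 13) | (alt_code & 0x1FFF)

# Any 24-bit address harvested from a public aircraft database "identifies"
# the sender -- exactly the weakness analyzed in the paper.
fake = pack_fake_reply(icao24=0x4840D6, squawk=0o7500, altitude_ft=35000)
print(hex(fake))
```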

Keywords: flight safety, hijack, Mode S transponder, security analysis

Procedia PDF Downloads 295
504 Effect of Cigarette Smoke on Micro-Architecture of Respiratory Organs with and without Dietary Probiotics

Authors: Komal Khan, Hafsa Zaneb, Saima Masood, Muhammad Younus, Sanan Raza

Abstract:

Cigarette smoke induces many physiological and pathological changes in the respiratory tract, such as goblet cell hyperplasia and regional distension of airspaces. It is also associated with elevated inflammatory profiles in different airway compartments. As probiotics are generally known to promote mucosal tolerance, it was postulated that prophylactic use of probiotics could help reduce the respiratory damage induced by cigarette smoke exposure. Twenty-four adult mice were randomly divided into three groups (cigarette-smoke (CS) group, cigarette-smoke + Lactobacillus (CS+P) group, and control (Cn) group), each having 8 mice. They were exposed to cigarette smoke for 28 days (6 cigarettes/day, 6 days/week). Wright-Giemsa staining of bronchoalveolar lavage fluid (BALF) was performed in three mice per group. Tissue samples of the trachea and lungs of 7 mice from each group were processed by the paraffin embedding technique for haematoxylin & eosin (H&E) and alcian blue-periodic acid-Schiff (AB-PAS) staining. The trachea (goblet cell number, ratio, and loss of cilia) and lungs (airspace distension) were then studied. The results showed that the number of goblet cells was increased in the CS group as a result of the defensive mechanism of the respiratory system against irritating substances. The study also revealed that cells containing acidic glycoprotein were more numerous in the CS group than cells containing neutral glycoprotein. However, the CS+P group showed a decrease in goblet cell index due to enhanced immunity from the prophylactically used probiotics. Moreover, H&E-stained tracheas showed significant loss of cilia in the CS group due to propelling of mucus, but little loss in the CS+P group owing to a well-protected tracheal epithelium. In the lungs, protection of airspaces was also much more evident in the CS+P group compared with the CS group, which showed distended airspaces, especially at 150 µm distance from the terminal bronchiole. In addition, a comprehensive analysis of the inflammatory cell population of BALF showed that neutrophilia and eosinophilia were significantly reduced in the CS+P group. This study showed that probiotics are useful for reducing changes in the micro-architecture of the respiratory system. Thus, dietary supplementation with probiotics as a prophylactic measure can be useful in achieving immunomodulatory effects.

Keywords: cigarette smoke, probiotics, goblet cells, airspace enlargement, BALF

Procedia PDF Downloads 364
503 Validation of an Impedance-Based Flow Cytometry Technique for High-Throughput Nanotoxicity Screening

Authors: Melanie Ostermann, Eivind Birkeland, Ying Xue, Alexander Sauter, Mihaela R. Cimpan

Abstract:

Background: New reliable and robust techniques to assess the biological effects of nanomaterials (NMs) in vitro are needed to speed up safety analysis and to identify the key physicochemical parameters of NMs that are responsible for their acute cytotoxicity. The central aim of this study was to validate and evaluate the applicability and reliability of an impedance-based flow cytometry (IFC) technique for the high-throughput screening of NMs. Methods: Eight inorganic NMs from the European Commission Joint Research Centre Repository were used: NM-302 and NM-300k (Ag: 200 nm rods and 16.7 nm spheres, respectively), NM-200 and NM-203 (SiO₂: 18.3 nm and 24.7 nm amorphous, respectively), NM-100 and NM-101 (TiO₂: 100 nm and 6 nm anatase, respectively), and NM-110 and NM-111 (ZnO: 147 nm and 141 nm, respectively). The aim was to assess the biological effects of these materials on human monoblastoid (U937) cells. Dispersions of NMs were prepared as described in the NANOGENOTOX dispersion protocol, and cells were exposed to NMs at relevant concentrations (2, 10, 20, 50, and 100 µg/mL) for 24 hrs. The change in electrical impedance was measured at 0.5, 2, 6, and 12 MHz using the IFC AmphaZ30 (Amphasys AG, Switzerland). A traditional toxicity assay, the Trypan Blue dye exclusion assay, together with dark-field microscopy, was used to validate the IFC method. Results: Spherical Ag particles (NM-300k) showed the highest toxic effect on U937 cells, followed by ZnO (NM-111 ≥ NM-110) particles. Silica particles were moderately to non-toxic at all used concentrations under these conditions. A higher toxic effect was seen with smaller-sized TiO₂ particles (NM-101) compared to their larger analogues (NM-100). No interferences between the IFC and the used NMs were seen. Uptake and internalization of NMs were observed after 24 hours of exposure, confirming actual NM-cell interactions. Conclusion: Results collected with the IFC demonstrate the applicability of this method for rapid nanotoxicity assessment, which proved to be less prone to nano-related interference issues compared to some traditional toxicity assays. Furthermore, this label-free and novel technique shows good potential for up-scaling toward automated high-throughput screening and for future NM toxicity assessment. This work was supported by the EC FP7 NANoREG (Grant Agreement NMP4-LA-2013-310584), the Research Council of Norway, project NorNANoREG (239199/O70), the EuroNanoMed II 'GEMN' project (246672), and the UH-Nett Vest project.

Keywords: cytotoxicity, high-throughput, impedance, nanomaterials

Procedia PDF Downloads 361
502 Evaluation of Immune Checkpoint Inhibitors in Cancer Therapy

Authors: Mir Mohammad Reza Hosseini

Abstract:

In recent years, immune checkpoint inhibitors have gained attention as one of the most promising forms of immunotherapy on the horizon. There has been a specific emphasis on the immune checkpoint molecules cytotoxic T-lymphocyte antigen-4 (CTLA-4) and programmed cell death protein 1 (PD-1). It is now recognized that established tumors have many mechanisms of suppressing the antitumor immune response, including production of inhibitory cytokines, recruitment of immunosuppressive immune cells, and upregulation of coinhibitory receptors known as immune checkpoints. In 2011, ipilimumab, the first antibody blocking an immune checkpoint (CTLA-4), was approved. This was quickly followed by the development of monoclonal antibodies targeting PD-1 (pembrolizumab and nivolumab) and PD-L1 (atezolizumab and durvalumab); anti-PD-1/PD-L1 antibodies have since become some of the most widely prescribed anticancer therapies. We compare and contrast their current place in cancer therapy and their patterns of immune-related toxicities, and discuss the role of dual immune checkpoint inhibition and strategies for the management of immune-related adverse events. In this review, the working mechanisms and current development of numerous immune checkpoint inhibitors are summarized, while the interaction mechanisms and recent progress of immune checkpoint inhibitor-based synergistic therapies with other immunotherapies, chemotherapy, phototherapy, and radiotherapy in preclinical and clinical studies over the past 5 years are described and highlighted. Finally, we critically evaluate these approaches and attempt to identify their strengths and weaknesses based on preclinical and clinical data.

Keywords: checkpoint, cancer therapy, PD-1, PD-L1, CTLA-4, immunosuppressive

Procedia PDF Downloads 168
501 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and Inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that the method provides precise estimates for the regression parameters. It is important to note that this approach can be applied to a dataset only if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. Mathematica code using the Fisher scoring algorithm as the iteration method obtains the maximum likelihood estimates (MLE) of the regression parameters.
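
The estimation step described here is the classical Fisher scoring iteration, β ← β + I(β)⁻¹U(β), where U is the score and I the expected information. As a generic illustration (not the paper's Mathematica code or its composite-distribution likelihood), here is a minimal sketch of Fisher scoring for a Poisson log-linear regression, where the update has a closed form:

```python
import numpy as np

def fisher_scoring_poisson(X, y, n_iter=25, tol=1e-8):
    """Fisher scoring for a Poisson log-linear model: beta <- beta + I^{-1} U.

    Score U = X^T (y - mu); Fisher information I = X^T W X with W = diag(mu).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)              # mean under the current beta
        score = X.T @ (y - mu)             # gradient of the log-likelihood
        info = X.T @ (mu[:, None] * X)     # expected (Fisher) information
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Toy usage: recover known coefficients from simulated counts.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3])))
print(fisher_scoring_poisson(X, y))
```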

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 33
500 Thermo-Mechanical Behavior of Steel-Wood Connections of Wooden Structures Under the Effect of a Fire

Authors: Ahmed Alagha, Belkacem Lamri, Abdelhak Kada

Abstract:

Steel-wood assemblies often have complex geometric configurations whose overall behavior under fire is conditioned by the thermal response of the combination of the two materials, steel and wood, whose thermal characteristics are greatly influenced by high temperatures. The objective of this work is to study the thermal behavior of a steel-wood connection, with or without insulating material, subjected to the ISO 834 standard fire model. The analysis is developed analytically, using the Eurocodes, and numerically, by the finite element method, through the ANSYS calculation code. The design of the connections is evaluated at room temperature for the cases of single shear and double shear. The thermal behavior of the connections is simulated in the transient state, taking into account heat transfer by convection and by radiation. The variation of temperature as a function of time is evaluated at different positions in the connections, accounting for the heat produced and the formation of the char layer. The results concern the temperature distributions in the connection elements as a function of fire duration. The thermal analysis shows that the temperature rises rapidly, exceeding 260 °C in the steel after one hour of fire exposure. The temperature development in the wood differs from that in steel because of its thermal properties: wood heats up on the outside and burns, and its surface can locally reach very high temperatures.
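
For reference, the ISO 834 standard fire model cited above prescribes the furnace gas temperature as Θg = 20 + 345·log₁₀(8t + 1) °C with t in minutes. A minimal sketch evaluating the curve (the standard formula, not the paper's ANSYS model):

```python
import math

def iso834_gas_temperature(t_min: float) -> float:
    """ISO 834 standard fire curve: gas temperature (deg C) at time t (minutes)."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

# After one hour the furnace gas temperature is ~945 deg C, which is why
# unprotected steel in the connection heats up so quickly.
for t in (15, 30, 60):
    print(t, "min ->", round(iso834_gas_temperature(t)), "deg C")
```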

Keywords: Eurocode 5, finite elements, ISO834, simple shear, thermal behaviour, wood-steel connection

Procedia PDF Downloads 85
499 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts

Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti

Abstract:

Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be found. Fused Filament Fabrication (FFF) is the selected demonstrative AM technology due to its great popularity in the industrial manufacturing world. A material model that considers the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-code file. An Adaptive Mesh Refinement (AMR) based on the octree strategy is used in order to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented due to the orthotropic and heterogeneous constitutive nature of FFF components and because of the differences between the strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.
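
For a single material point under plane stress, the Tsai-Wu criterion predicts failure when F₁σ₁ + F₂σ₂ + F₁₁σ₁² + F₂₂σ₂² + F₆₆τ₁₂² + 2F₁₂σ₁σ₂ ≥ 1. A minimal sketch of evaluating this index follows; the strength values are hypothetical placeholders, not the paper's material data:

```python
def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; failure is predicted when index >= 1.

    Strengths are positive magnitudes: Xt/Xc tension/compression along the
    filament direction, Yt/Yc transverse, S in-plane shear.
    """
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * (F11 * F22) ** 0.5        # common interaction-term estimate
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

# Hypothetical FFF material data (MPa), for illustration only.
print(tsai_wu_index(s1=30, s2=10, t12=5, Xt=45, Xc=40, Yt=30, Yc=35, S=25))
```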

Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization

Procedia PDF Downloads 63
498 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing

Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger

Abstract:

This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (g-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing.
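
As a rough illustration of the metric-extraction step (not the paper's actual computer-vision models), the sketch below summarizes temperatures along a printed-path mask of one thermal frame; the 105 °C glass-transition threshold is an assumed example value:

```python
import numpy as np

def path_thermal_metrics(thermal_img: np.ndarray, path_mask: np.ndarray) -> dict:
    """Summarize temperatures along the printed path of one calibration segment.

    thermal_img: 2D array of temperatures (deg C) from the low-cost sensor.
    path_mask:   boolean array of the same shape marking the extruded path.
    """
    t = thermal_img[path_mask]
    return {
        "mean": float(t.mean()),
        "max": float(t.max()),
        "std": float(t.std()),  # spread hints at uneven cooling
        "frac_above_glass_transition": float((t > 105.0).mean()),  # assumed Tg
    }

# Toy usage with synthetic data standing in for a real thermal frame.
img = 25 + 200 * np.exp(-((np.indices((64, 64))[0] - 32) ** 2) / 50.0)
mask = np.zeros((64, 64), dtype=bool)
mask[30:35, :] = True
print(path_thermal_metrics(img, mask))
```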

Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles

Procedia PDF Downloads 40
497 Anti-Acanthamoeba Activities of Fatty Acid Salts and Fatty Acids

Authors: Manami Masuda, Mariko Era, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita

Abstract:

Objectives: Fatty acid salts are a type of anionic surfactant produced from fatty acids and alkali, and they are known to have potent antibacterial activities. Acanthamoeba is ubiquitously distributed in the environment, including sea water, fresh water, soil, and even air. Although generally free-living, Acanthamoeba can be an opportunistic pathogen, which can cause a potentially blinding corneal infection known as Acanthamoeba keratitis. In this study, we therefore evaluated the anti-amoeba activity of fatty acid salts and fatty acids against Acanthamoeba castellanii ATCC 30010. Materials and Methods: The anti-amoeba activity of 9 fatty acid salts (potassium butyrate (C4K), caproate (C6K), caprylate (C8K), caprate (C10K), laurate (C12K), myristate (C14K), oleate (C18:1K), linoleate (C18:2K), and linolenate (C18:3K)) was tested on cells of Acanthamoeba castellanii ATCC 30010. Fatty acid salts (concentration of 175 mM, pH 10.5) were prepared by mixing the fatty acid with the appropriate amount of KOH; an amoeba suspension mixed with a pH-adjusted KOH solution was used as the control. Fatty acids (concentration of 175 mM) were prepared by mixing the fatty acid with Tween 80 (20%); an amoeba suspension mixed with Tween 80 (20%) was used as the control. For the anti-amoeba assay, the amoeba suspension (3.0 × 10⁴ trophozoites/ml) was mixed with the fatty acid potassium sample (final concentration of 175 mM). Samples were incubated at 30 °C for 10 min, 60 min, and 180 min, and the viability of A. castellanii was then evaluated using a plankton counting chamber and trypan blue staining. The minimum inhibitory concentration (MIC) against Acanthamoeba was determined using the two-fold dilution method; the MIC was defined as the minimal anti-amoeba concentration that inhibited visible amoeba growth following incubation (180 min). Results: C8K, C10K, and C12K showed an anti-amoeba effect of 4 log units (99.99% growth suppression of A. castellanii) after 180 min of incubation at 175 mM. After the amoeba suspension was mixed with C10K or C12K, destruction of the cell membrane was observed, whereas the pH-adjusted control solution did not exhibit any effect even after 180 min of incubation with A. castellanii. Moreover, the fatty acids C6, C8, and C18:3 showed a 4 log-unit effect after 60 min of incubation, and C4 and C18:2 exhibited a 4-log reduction after 180 min of incubation. Furthermore, the MICs of C10K, C12K, and C4 were all 2.7 mM. These results indicate that C10K, C12K, and C4 have high anti-amoeba activity against A. castellanii and suggest that they have great potential as anti-amoeba agents.
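
For orientation, a 4 log-unit effect corresponds to 99.99% suppression, and the reported MIC of 2.7 mM sits six two-fold dilutions below the 175 mM starting concentration (175/2⁶ ≈ 2.73). A minimal sketch of both calculations:

```python
import math

def log_reduction(n_control: float, n_treated: float) -> float:
    """Log10 reduction in viable trophozoite counts versus the control."""
    return math.log10(n_control / n_treated)

n0 = 3.0e4                           # cells/ml inoculum from the assay
print(log_reduction(n0, n0 * 1e-4))  # 4 log units = 99.99 % suppression

# Two-fold serial dilution series for the MIC reading, starting at 175 mM;
# the sixth dilution (~2.73 mM) matches the reported 2.7 mM MIC.
print([round(175 / 2**k, 2) for k in range(8)])
```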

Keywords: Fatty acid salts, anti-amoeba activities, Acanthamoeba, fatty acids

Procedia PDF Downloads 479
496 Impact of Audit Committee on Real Earnings Management: Cases of Netherlands

Authors: Sana Masmoudi Mardassi, Yosra Makni Fourati

Abstract:

Regulators highlight the importance of the Audit Committee (AC) as a key internal corporate governance mechanism, and one of its most important roles is to oversee the financial reporting process. The purpose of this paper is to examine the link between the characteristics of an audit committee and financial reporting quality by investigating whether audit committee characteristics are associated with improved financial reporting quality, in particular reduced Real Earnings Management. Panel data from 80 nonfinancial companies listed on the Amsterdam Stock Exchange during the period 2010-2017 were used. To measure audit committee characteristics, four proxies were used: audit committee independence, financial expertise, gender diversity, and AC meetings. A linear regression model was used to identify the influence of this set of audit committee characteristics on real earnings management after controlling for audit committee size, leverage, firm size, loss, growth, and board size. The research provides empirical evidence of the association between audit committee independence, financial expertise, gender diversity, and meetings and Real Earnings Management (REM) as a proxy of financial reporting quality. The study finds that independence and AC gender diversity are strongly related to financial reporting quality; in fact, these two characteristics constrain REM. The results also suggest that AC financial expertise reduces, to some extent, the likelihood of engaging in REM. These conclusions thus provide support for the audit committee requirements under the Dutch Corporate Governance Code regarding gender diversity and AC meetings.

Keywords: audit committee, financial expertise, independence, real earnings management

Procedia PDF Downloads 166
495 Earthquake Hazards in Manipur: Casual Factors and Remedial Measures

Authors: Kangujam Monika, Kiranbala Devi Thokchom, Soibam Sandhyarani Devi

Abstract:

Earthquake is a major natural hazard in India. Manipur, located in the North-Eastern Region of India, is one of the most earthquake-prone locations in the region, since it lies in an area where the Indian and Eurasian tectonic plates meet and falls in seismic Zone V, the most severe intensity zone according to the IS Code. Some recent earthquakes recorded in Manipur are the M 6.7 event with epicenter at Tamenglong (January 4, 2016), the M 5.2 event at Churachandpur (February 24, 2017), and most recently the M 4.4 event at Thoubal (June 19, 2017). In these recent earthquakes, some houses and buildings were damaged and landslides also occurred. A field study was carried out, and an overview of the various causal factors involved in earthquake damage in Manipur is discussed. It is found that improper planning, poor design, negligence, structural irregularities, poor-quality materials, construction of foundations without proper site soil investigation, non-implementation of remedial measures, etc., are possibly the main causal factors for damage in Manipur during earthquakes. The study also suggests that proper design of structures and foundations along with soil investigation, ground improvement methods, use of modern construction techniques, consultation with engineers, mass awareness, etc., might be effective in controlling the hazard in many locations. An overview of the analysis pertaining to earthquakes in Manipur, together with the ongoing detailed site-specific geotechnical investigation, is presented.

Keywords: Manipur, earthquake, hazard, structure, soil

Procedia PDF Downloads 210
494 The Culture of Journal Writing among Manobo Senior High School Students

Authors: Jessevel Montes

Abstract:

This study explored the culture of journal writing among Senior High School Manobo students. The purpose of this qualitative morpho-semantic and syntactic study was to discover the morphological, semantic, and syntactic features of their written output through the morphological, semantic, and syntactic categories present in their journal writings. Beliefs and practices embedded in their norms, values, and ideologies were also identified. The study was conducted among Manobo students in the Senior High Schools of Central Mindanao, particularly in the Division of North Cotabato. Findings revealed that, morphologically, the features that flourished are the following: subject-verb concordance, tenses, pronouns, prepositions, articles, and the use of adjectives. Semantically, the features are the following: word choice, idiomatic expression, borrowing, and vernacular. Syntactically, the features are the types of sentences according to structure and function, and the dominance of code switching and run-on sentences. Lastly, as to the beliefs and practices embedded in the norms, values, and ideologies of their journal writing, the major themes are: valuing education; family and friends as treasure; preservation of culture; and emancipation from the bondage of poverty. This study has shed light on the writing capabilities and weaknesses of Manobo students when it comes to the English language. Further, such an insight into language learning problems is useful to teachers because it provides information on common trouble spots in language learning, which can be used in the preparation of effective teaching materials.

Keywords: applied linguistics, culture, morpho-semantic and syntactic analysis, Manobo Senior High School, Philippines

Procedia PDF Downloads 121
493 Modelling Interactions between Saturated and Unsaturated Zones by Hydrus 1D, Plain of Kairouan, Central Tunisia

Authors: Mariem Saadi, Sabri Kanzari, Adel Zghibi

Abstract:

In semi-arid areas like the Kairouan region, where constant irrigation with saline water and overuse of groundwater resources are common, soil and aquifer salinization has become an increasing concern. In this study, a methodology has been developed to evaluate the groundwater contamination risk based on the hydraulic properties of the unsaturated zone. Two soil profiles (each 30 m deep) with different ranges of salinity, one located in the north of the plain and the other in the south, both characterized by direct recharge of the aquifer, were chosen. Simulations were conducted with the Hydrus-1D code using measured precipitation data for the period 1998-2003 and calculated evapotranspiration for both profiles. Four combinations of initial conditions of water content and salt concentration were used in the simulation process in order to find the best match between simulated and measured values. The successful calibration of Hydrus-1D allowed the investigation of several scenarios in order to assess the contamination risk under different natural conditions. The aquifer contamination risk is related to the natural conditions: it increases under climate change and rising temperatures and decreases in the presence of a clay layer in the unsaturated zone. Hydrus-1D proved a useful tool to predict the groundwater level and quality in the case of direct recharge and in the absence of any information on the soil layers except texture.
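
For reference, Hydrus-1D solves the standard one-dimensional Richards equation for variably saturated flow coupled with an advection-dispersion equation for solute transport; simplified forms (boundary and reaction terms omitted here) are:

```latex
% Richards equation for 1D variably saturated flow (theta: water content,
% h: pressure head, K: hydraulic conductivity, S: root-water-uptake sink):
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(h)

% Advection-dispersion equation for solute (salt) transport
% (c: concentration, D: dispersion coefficient, q: Darcy flux):
\frac{\partial (\theta c)}{\partial t}
  = \frac{\partial}{\partial z}\left( \theta D \frac{\partial c}{\partial z} \right)
  - \frac{\partial (q c)}{\partial z}
```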

Keywords: Hydrus-1D, Kairouan, salinization, semi-arid region, solute transport, unsaturated zone

Procedia PDF Downloads 183
492 The Formulation of the Mecelle and Other Codified Laws in the Ottoman Empire: Transformation Overturning the Sharia Principles

Authors: Tianqi Yin

Abstract:

The sharia had been the legislative basis of the Ottoman Empire since its emergence. The authority of sharia was superlative in Islamic society compared to the power of the sulta, the nominal ruler of the nation, and it regulated essentially every aspect of people's lives according to an ethical code. In modernity, however, as European powers used force to re-engineer the Islamic world into something more like their own, a society ruled by a state, the Ottoman legislative system faced the great challenge of adopting codified laws to replace sharia, with the formulation of the Mecelle being a prominent case. Interpretations of this transformation have been contentious, with the key debate revolving around whether these codified laws are authentic representations of sharia or alien legal formulations authorized by the modern nation-state under heavy European colonial influence. Because of the differences in methodology among the diverse theories, challenges toward reaching a universal conclusion on this issue remain. This paper argues that the formulation of the Mecelle and other codified laws is a discontinuity of sharia caused by European modernity's influence, and that the emphasis on elements of Islamic law was a tactic employed to promote this process. These codified laws signal a complete social transformation from an Islamic society ruled by sharia to a replication of the European society ruled by the comprehensive ruling system of the modern state. In addition to advancing the discussion on the characterization of the codification movement in the Ottoman Empire in modernity, the research also contributes to determining the nature of the modern codification movement globally.

Keywords: codification, mecelle, modernity, sharia, ottoman empire

Procedia PDF Downloads 90
491 Timber Urbanism: Assessing the Carbon Footprint of Mass-Timber, Steel, and Concrete Structural Prototypes for Peri-Urban Densification in the Hudson Valley’s Urban Fringe

Authors: Eleni Stefania Kalapoda

Abstract:

The current fossil-fuel-based urbanization pattern and the estimated human population growth are increasing the environmental footprint on our planet's precious resources. To mitigate the estimated skyrocketing of greenhouse gas emissions associated with the construction of new cities and infrastructure over the next 50 years, we need a radical rethink of our approach to construction to deliver a net-zero built environment. This paper assesses the carbon footprint of mass-timber, steel, and concrete structural alternatives for peri-urban densification in the Hudson Valley's urban fringe, and examines the updated policy and building code adjustments that support synergies between timber construction in city making and sustainable management of timber forests. The carbon footprint of a structural prototype is quantified for four different material assemblies, namely a concrete (post-tensioned), a mass-timber, a steel (composite), and a hybrid (timber/steel/concrete) assembly, applicable to the three updated building typologies of the IBC 2021 (Type IV-A, Type IV-B, Type IV-C), which range from nine to eighteen stories. By scaling up that structural prototype to the size of a neighborhood district, the paper presents a quantitative and qualitative approach to a forest-based construction economy, as well as a resilient and more just supply-chain framework that ensures the wellbeing of both the forest and its inhabitants.

Keywords: mass-timber innovation, concrete structure, carbon footprint, densification

Procedia PDF Downloads 108
490 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications

Authors: Jongbae Lee, Seongsoo Lee

Abstract:

Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for cost-sensitive designs. The SAE J2716 SENT (single edge nibble transmission) protocol transmits direct digital waveforms instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, a configuration register, a tick period generator, a CRC (cyclic redundancy check) generator/checker, and TX/RX (transmission/reception) buffers. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator derives tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffers store transmitted/received data. The designed SENT interface can send or receive digital data at 25-65 kbps with a 3 µs tick. Synthesized in a 0.18 µm fabrication technology, it occupies about 2,500 gates.
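
A back-of-envelope check of the reported data rate: per SAE J2716, the sync/calibration pulse is 56 ticks and each nibble pulse lasts 12 + value ticks (12 to 27). The sketch below (status and CRC nibble values assumed zero for simplicity) reproduces the order of magnitude of the 25-65 kbps range:

```python
TICK_US = 3.0     # tick period used in the paper
SYNC_TICKS = 56   # sync/calibration pulse (SAE J2716)

def frame_time_us(data_values: list[int]) -> float:
    """Duration of one SENT frame: sync + status + data nibbles + CRC.

    Each nibble pulse lasts 12 + value ticks; status and CRC nibble values
    are assumed zero here for simplicity.
    """
    pulses = [0] + data_values + [0]  # status, data..., CRC
    ticks = SYNC_TICKS + sum(12 + v for v in pulses)
    return ticks * TICK_US

# Payload rate for a 6-nibble (24-bit) frame, best vs. worst case:
best, worst = frame_time_us([0] * 6), frame_time_us([15] * 6)
print(24 / best * 1e3, "to", 24 / worst * 1e3, "kbps")  # ~52.6 to ~33.1 kbps
```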

Keywords: digital sensor interface, SAE J2716, SENT, verilog HDL

Procedia PDF Downloads 300
489 Simulation of the Collimator Plug Design for Prompt-Gamma Activation Analysis in the IEA-R1 Nuclear Reactor

Authors: Carlos G. Santos, Frederico A. Genezini, A. P. Dos Santos, H. Yorivaz, P. T. D. Siqueira

Abstract:

The Prompt-Gamma Activation Analysis (PGAA) is a valuable technique for investigating the elemental composition of various samples. However, installing a PGAA system entails specific conditions, such as filtering the neutron beam according to the target and providing adequate shielding for both users and detectors. These requirements incur substantial costs, exceeding $100,000 including manpower. Nevertheless, a cost-effective approach involves leveraging an existing neutron beam facility to create a hybrid system integrating PGAA and Neutron Tomography (NT). The IEA-R1 nuclear reactor at IPEN/USP possesses an NT facility with suitable conditions for adapting and implementing a PGAA device. The NT facility offers a slightly colder thermal flux and provides shielding for user protection. The key additional requirement involves designing detector shielding to mitigate the high gamma-ray background and safeguard the HPGe detector from neutron-induced damage. This study employs Monte Carlo simulations with the MCNP6 code to optimize the collimator plug for PGAA within the IEA-R1 NT facility. Three collimator models are proposed and simulated to assess their effectiveness in shielding the gamma and neutron radiation arising from fission. The aim is to achieve a focused prompt-gamma signal while shielding ambient gamma radiation. The simulation results indicate that one of the proposed designs is particularly suitable for the PGAA-NT hybrid system.

Keywords: MCNP6.1, neutron, prompt-gamma ray, prompt-gamma activation analysis

Procedia PDF Downloads 75
488 International Classification of Primary Care as a Reference for Coding the Demand for Care in Primary Health Care

Authors: Souhir Chelly, Chahida Harizi, Aicha Hechaichi, Sihem Aissaoui, Leila Ben Ayed, Maha Bergaoui, Mohamed Kouni Chahed

Abstract:

Introduction: The International Classification of Primary Care (ICPC) is part of the family of morbidity classification systems. It has 17 chapters, and each entry is coded by an alphanumeric code: the letter corresponds to the chapter, the number to a rubric within the chapter. The objective of this study is to show the utility of this classification in coding the reasons for demand for care in primary health care (PHC), along with its advantages and limits. Methods: This is a cross-sectional descriptive study conducted in 4 PHC centers in the Ariana district. Data on the demand for care during 2 days in the same week were collected. The coding of the information was done according to the ICPC. The data were entered and analyzed with the EPI Info 7 software. Results: A total of 523 demands for care were investigated. The patients who came for consultation were predominantly female (62.72%). Most of the consultants were young, with an average age of 35 ± 26 years. In the ICPC, there are 7 rubrics: 'infections' was the most common reason with 49.9%, followed by 'other diagnoses' with 40.2%, 'symptoms and complaints' with 5.5%, 'trauma' with 2.1%, 'procedures' with 2.1%, and 'neoplasm' with 0.3%. The main advantage of the ICPC is that it is a standardized tool, well suited to classifying the reasons for demand for care in PHC according to their specificity, with the capacity to be used in a computerized PHC medical file. Its current limitations relate to the difficulty of classifying some reasons for demand for care. Conclusion: The ICPC has been developed to provide primary healthcare with a coding reference that takes its specificity into account. The ICD, by comparison, is in its 10th revision; the ICPC would likewise gain, from revision to revision, in efficiency, so as to be generalized and used by PHC teams.
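
The alphanumeric structure makes the codes trivial to parse in software. A minimal sketch, using an illustrative subset of chapters rather than the full 17-chapter table:

```python
# Minimal sketch of ICPC-style coding: one letter selects the chapter, the
# digits select a rubric within it. Chapter names here are a partial,
# illustrative subset, not the full 17-chapter table.
ICPC_CHAPTERS = {
    "A": "General and unspecified",
    "D": "Digestive",
    "R": "Respiratory",
    "S": "Skin",
}

def parse_icpc(code: str) -> tuple[str, int]:
    """Split an ICPC code such as 'R74' into (chapter name, rubric number)."""
    chapter, number = code[0].upper(), int(code[1:])
    return ICPC_CHAPTERS.get(chapter, "unknown chapter"), number

print(parse_icpc("R74"))  # e.g. an upper-respiratory-infection rubric
```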

Keywords: international classification of primary care, medical file, primary health care, Tunisia

Procedia PDF Downloads 266
487 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given better, more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an accessible tool for that type of problem. Erosion wear behavior due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion on the bottom side of the pipeline.
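
CFD erosion predictions typically reduce, per wall face, to an expression of the form E = ṁ·K·f(α)·vⁿ, which is why velocity dominates. The sketch below uses hypothetical constants and a toy impact-angle function, not the paper's model setup, but it reproduces the velocity sensitivity (raising v from 2 to 4 m/s multiplies the rate by 2ⁿ):

```python
import math

def erosion_rate(m_dot: float, v: float, alpha_deg: float,
                 K: float = 2.0e-9, n: float = 2.6) -> float:
    """Generic impact-erosion estimate E = m_dot * K * f(alpha) * v**n.

    K, n and f(alpha) are illustrative assumptions, not the constants or
    angle function used in the paper.
    """
    a = math.radians(alpha_deg)
    f_alpha = math.sin(a) * (2.0 - math.sin(a))  # toy impact-angle function
    return m_dot * K * f_alpha * v**n

# Velocity dominance: going from 2 to 4 m/s raises the rate by 2**2.6 ~ 6x.
print(erosion_rate(0.05, 4.0, 30.0) / erosion_rate(0.05, 2.0, 30.0))
```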

Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε Model

Procedia PDF Downloads 408
486 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheets in perforation tests, based on experiments coupled with numerical simulation, is presented. Impact problems (penetration and perforation) of metallic plates have been of interest for a long time, and experimental, analytical, and numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor measuring initial and residual velocities is used during the experiments to obtain the ballistic curve and the ballistic limit. The energy balance is also reported, together with the energy absorbed by the aluminum, including the ballistic curve and ballistic limit. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, was covered during the tests. The mass of the conical-nose projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation processes. The ballistic curve obtained numerically was verified experimentally, and the failure patterns are presented using the optimal mesh densities, which provide stability of the results. Good agreement between the numerical and experimental results is observed.
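
The energy balance mentioned above follows directly from the kinetic energies before and after perforation, E_abs = ½m(V_i² − V_r²). A minimal sketch using the 28 g projectile mass from the tests (the residual velocity here is a hypothetical example, not a measured value):

```python
def absorbed_energy_j(m_kg: float, v_init: float, v_resid: float) -> float:
    """Energy absorbed by the plate from the kinetic-energy balance (J)."""
    return 0.5 * m_kg * (v_init**2 - v_resid**2)

# 28 g conical projectile from the tests; the residual velocity below is
# hypothetical -- in the experiments it comes from the laser sensor.
print(absorbed_energy_j(0.028, 120.0, 95.0))  # ~75 J absorbed
```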

Keywords: aluminum alloy, ballistic behavior, failure criterion, numerical simulation

Procedia PDF Downloads 312
485 Utilizing Minecraft Java Edition for the Application of Fire Disaster Procedures to Establish Fire Disaster Readiness for Grade 12 STEM students of DLSU-IS

Authors: Aravella Flores, Jose Rafael E. Sotelo, Luis Romulus Phillippe R. Javier, Josh Christian V. Nunez

Abstract:

This study analyzes the performance of Grade 12 STEM students of De La Salle University - Integrated School who have completed the Disaster Readiness and Risk Reduction (DRRR) course in handling fire hazards through Minecraft Java Edition. This platform is suitable because fire DRRR is challenging to learn in a practical setting, and it is questionable whether textbook knowledge successfully translates into actual practice. The purpose of this study is to determine whether Minecraft can be a suitable environment for familiarizing oneself with fire DRRR. The objectives are achieved by utilizing Minecraft to simulate fire scenarios in which the participants can freely act upon and practice fire DRRR. The experiment was divided into grounding and validation phases, in which the researchers observed the performance of the participants in the simulation. Pre-simulation and post-simulation surveys were given to capture the change in participants' perception of their ability to apply fire DRRR procedures and of their vulnerabilities. A paired t-test was used and showed significant differences between the pre-simulation and post-simulation survey scores, suggesting improved judgment of DRRR and reduced vulnerability should the participants encounter a fire hazard. This research poses a model for future research, which can gather more participants and dwell on more complex code, beyond command blocks and into the code of Minecraft itself.
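
The statistical step is a standard paired t-test on matched pre/post scores. A minimal sketch with hypothetical scores (the study's actual data are not reproduced here):

```python
from scipy import stats

# Hypothetical pre/post simulation survey scores, for illustration only.
pre = [12, 14, 11, 15, 13, 10, 14, 12]
post = [16, 17, 15, 18, 16, 14, 17, 15]

t_stat, p_value = stats.ttest_rel(pre, post)   # paired t-test, as in the study
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change
```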

Keywords: minecraft, DRRR, fire, disaster, simulation

Procedia PDF Downloads 137
484 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that may contain irregular shapes, digits, or characters. Objects and internal objects are quite difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects using the SASK algorithm. The main focus is recognizing the number of internal objects that exist in a given image, in a shadow-free and error-free manner. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally, hull detection. Detecting sub-regional hulls can increase machine learning capability in the detection of characters, and the approach can also be extended to obtain hull recognition even for irregularly shaped objects, such as black holes in space exploration with their intensities. Layered hulls are those having structured layers inside; they are useful in military services and traffic applications to identify the number of vehicles or persons. The proposed SASK algorithm is helpful in identifying such regions and can be useful in decision processes (e.g., clearing traffic, or identifying the number of persons on an opposing side in a war).
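
For orientation, the three-step pipeline maps naturally onto standard image-processing primitives. The sketch below illustrates the idea with OpenCV's contour and convex-hull functions; it is a generic stand-in, not the SASK algorithm itself:

```python
import cv2
import numpy as np

# Illustrative pipeline only (not the SASK algorithm): pre-process, extract
# boundaries, then detect hulls, mirroring the paper's three steps.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.putText(img, "8", (60, 140), cv2.FONT_HERSHEY_SIMPLEX, 3, 255, 5)

_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)       # pre-processing
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)   # boundary extraction
hulls = [cv2.convexHull(c) for c in contours]                     # hull detection

# RETR_CCOMP keeps a two-level hierarchy, so inner boundaries (e.g. the two
# holes of the digit "8") can be counted as sub-regional objects.
inner = sum(1 for h in hierarchy[0] if h[3] != -1)
print(f"{len(hulls)} hulls, {inner} internal regions")
```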

Keywords: chain code, Hull regions, Hough transform, Hull recognition, Layered Outline Extraction, SASK algorithm

Procedia PDF Downloads 348
483 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical in reducing time to market for chip multiprocessors (CMPs) and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip, and researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration having the fewest misses has been identified. To further reduce the cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types. Cache misses were reduced by 18.52%, 5.34%, and 3.91% by employing the above techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.
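
The loop-interchange optimization works because iterating in memory order keeps accesses within cached lines. The Python sketch below demonstrates the same cache principle on a row-major NumPy array (an illustration of the idea, not the C++ changes made to Booksim2.0):

```python
import time
import numpy as np

# NumPy arrays are row-major (C order), so traversing row-by-row touches
# contiguous memory while column-by-column strides across it -- the same
# cache effect that loop interchange exploits in compiled code.
a = np.random.rand(2000, 2000)

t0 = time.perf_counter()
s1 = sum(a[i, :].sum() for i in range(a.shape[0]))  # contiguous rows
t1 = time.perf_counter()
s2 = sum(a[:, j].sum() for j in range(a.shape[1]))  # strided columns
t2 = time.perf_counter()

print(f"row-major {t1 - t0:.3f}s vs column-major {t2 - t1:.3f}s")
```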

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 197
482 The Influence of Immunity on the Behavior and Dignity of Judges

Authors: D. Avnieli

Abstract:

Immunity of judges from liability represents a departure from the principle that all are equal under the law and that victims may be granted compensation from their offenders. The purpose of the study is to determine whether judicial immunity coincides with the need to ensure the existence of a highly independent and incorruptible judiciary. Judges are immune from civil and criminal liability for their judicial acts. Judicial immunity is justified by the need to maintain complete independence and discretion of the judiciary. Scholars and judges believe that absolute immunity is needed to shield judges from pressures, threats, or outside interference. It is commonly accepted that judges should be free to perform their judicial role in accordance with their assessment of the facts and their understanding of the law, without any restrictions, influences, inducements, or interferences. In most countries, immunity applies even when judges act in excess of jurisdiction; in some countries, it applies even when they act maliciously or corruptly. The only exception to absolute immunity applicable in all judicial systems is when judges act without jurisdiction over the subject matter. The Israeli Supreme Court recently decided to embrace absolute immunity and strike out the lawsuit of a refugee who was unlawfully incarcerated; the Court ruled that the plaintiff could not sue the State or the judge for damages, and the questions of malice, dignity, and public scrutiny were not discussed. This paper, based on a comparative analysis of many cases, aims to determine whether immunity affects the dignity and behavior of judges. It demonstrates that most judges maintain their dignity and ethical code of behavior, but some do not hesitate to act consciously in excess of jurisdiction, and in rare cases even corruptly. Therefore, in order to maintain an independent and incorruptible judiciary, immunity should not apply where judges act consciously in excess of jurisdiction or with malicious incentives.

Keywords: incorruptible judiciary, immunity, independent, judicial, judges, jurisdiction

Procedia PDF Downloads 103
481 A Close Study on the Nitrate Fertilizer Use and Environmental Pollution for Human Health in Iran

Authors: Saeed Rezaeian, M. Rezaee Boroon

Abstract:

Nitrogen accumulates in soils during fertilizer addition to promote plant growth. When the organic matter decomposes, the form of available nitrogen produced is nitrate, which is highly mobile. The most significant health effect of nitrate ingestion is methemoglobinemia in infants under six months of age (blue baby syndrome). Mobile nutrients like nitrate nitrogen are not stored in the soil in available forms for long periods or in large amounts; their availability depends on the needs of the crops, such as vegetables. Vegetables, moreover, compete actively for nitrate nitrogen, as a mobile nutrient, and for water; the mobile nutrients must be shared, and the fewer the plants, the larger the share for each plant. This nitrate nitrogen is also poisonous for the people who eat these vegetables: nitrate is converted to nitrite by bacteria in the stomach and the gastrointestinal (GI) tract, and when nitrite enters the blood cells, it converts hemoglobin to methemoglobin, which causes anoxemia and cyanosis. The increasing use of pesticides and chemical fertilizers, especially fertilizers with nitrate compounds, which have become common for increasing agricultural production, has caused nitrate pollution of soil, water, and the environment, with considerable damage to humans and animals. In this research, nitrate accumulation in different kinds of vegetables, such as green pepper, tomatoes, eggplants, watermelon, cucumber, and red pepper, was observed in the suburbs of the Mashhad, Neisabour, and Sabzevar cities. In some of these cities, information forms on agronomic practices were collected, covering fertilizer recommendations for the different vegetable crops, varieties, pesticides, irrigation schedules, etc.; these were filled out by colleagues in the research areas mentioned above. The samples were sent for analysis to the soil and water laboratory of our department in Mashhad. The final results of the chemical analysis showed that the mean nitrate levels in the fruit crop samples from the cities mentioned above were all lower than the critical levels: 35.91, 8.47, 24.81, 6.03, 46.43, and 2.06 mg/kg dry matter for tomato, cucumber, eggplant, watermelon, green pepper, and red pepper, respectively. Although this study was conducted with limited samples, judging by the mean levels, the use of these crops from a nutritional point of view should not cause nitrate poisoning in humans.

Keywords: environmental pollution, human health, nitrate accumulations, nitrate fertilizers

Procedia PDF Downloads 251
480 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware keeps rising to handle more complex processing and operation. The operating system, however, which provides the soul of the computer, stalled in its development for a time. Faced with the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System (DOS) was too simple and too difficult to innovate upon, so it was not a good choice, and MacOS is a special operating system for Apple computers that cannot be widely used on personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of earlier operating systems and is built around a modular kernel that is relatively powerful in its core architecture. The Linux system supports all Internet protocols, so it has very good network functionality. Linux supports multiple users, and each user has no influence on other users' files. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is also constantly upgraded and improved, and many different versions have been issued, suitable for community use and commercial use. The Linux system has good security, in part because of its file permission system. However, as vulnerabilities and hazards are constantly discovered, the security of the operating system in use also needs more attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 108
479 Herbal Medicinal Materials for Health/Functional Foods in Korea

Authors: Chang-Hwan Oh, Young-Jong Lee

Abstract:

In April 2015, the Ministry of Food and Drug Safety announced that only 10 of the 207 products listing Cynanchum Wilfordii Radix among their ingredients were confirmed to actually contain it rather than 'iyeobupiso', the counterfeit version of 'baeksuo'. The announcement caused confusion among consumers who had purchased health/functional foods supposedly containing the herbal medicinal material known in Korean as 'baeksuo'. Baeksuo is the main ingredient of the product 'EstroG-100' (NaturalEndoTech, S. Korea), which also contains Phlomis umbrosa and Angelica gigas. The hot-water extract of these herbal medicinal materials (HMMs) was approved as a product-specific Health/Functional Food (HFF) by the Korea Food & Drug Administration (now the Ministry of Food & Drug Safety) for its helpful function for women reaching menopause. The origin of 'baeksuo' is the root of Cynanchum wilfordii Hemsley in Korea (whereas in China, the root of Cynanchum auriculatum Royle ex Wight is considered the origin of 'baeksuo'). In Korea, about 116 HMMs are listed as food materials in the Korea Food Code, out of the 187 HMMs that may be used for food and medicinal purposes simultaneously. There is, however, a chance that HMMs approved for such shared use could be misused, with respect to the plant part used or through substitution by HMMs not permitted for HFFs, as in the 'baeksuo' case. In this study, some HMMs with shared food and medicine use were examined in order to reduce the chance of HMM misuse in HFFs in Korea. For this purpose, the origin, shape, edible parts, efficacy, and side effects of similar HMMs liable to be misused in HFFs were investigated.

Keywords: herbal medicinal materials, healthy/functional foods, misuse, shared use

Procedia PDF Downloads 291
478 Pre and Post IFRS Loss Avoidance in France and the United Kingdom

Authors: T. Miková

Abstract:

This paper analyzes the effect of a single uniform accounting rule on reporting quality by investigating the influence of IFRS on earnings management. The paper examines whether earnings management is reduced after IFRS adoption through the use of 'loss avoidance thresholds', a method that has been verified in earlier studies. The paper concentrates on two European countries: one that represents the continental code-law tradition with weak protection of investors (France) and one that represents the Anglo-American common-law tradition, which typically implies a strong enforcement system (the United Kingdom). The research investigates a sample of 526 companies (6,822 firm-year observations) during the years 2000-2013. The results differ between the two jurisdictions. The study demonstrates that a single set of accounting standards contributes to better reporting quality and reduces the pervasiveness of earnings management in France. In contrast, there is no evidence that a reduction in earnings management followed the implementation of IFRS in the United Kingdom. Given that IFRS benefit France but not the United Kingdom, other political and economic factors, such as the legal system or capital market strength, must play a significant role in influencing the comparability and transparency of cross-border companies' financial statements. Overall, the results suggest that IFRS moderately contribute to the accounting quality of reported financial statements and bring benefits for stakeholders, though the role played by other economic factors cannot be discounted.
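
The 'loss avoidance threshold' method looks for an asymmetry in the earnings distribution just around zero: significantly more small profits than small losses suggests that firms manage earnings across the threshold. A minimal sketch on hypothetical scaled earnings:

```python
import numpy as np

def loss_avoidance_ratio(scaled_earnings: np.ndarray, width: float = 0.01) -> float:
    """Small-profit vs. small-loss frequency around zero scaled earnings.

    A ratio well above 1 suggests firms manage earnings to cross from a
    small loss to a small profit.
    """
    small_profit = ((scaled_earnings >= 0) & (scaled_earnings < width)).sum()
    small_loss = ((scaled_earnings < 0) & (scaled_earnings >= -width)).sum()
    return small_profit / max(small_loss, 1)

# Hypothetical earnings scaled by lagged total assets, for illustration only.
rng = np.random.default_rng(1)
e = rng.normal(0.02, 0.05, 6822)
print(loss_avoidance_ratio(e))
```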

Keywords: accounting standards, earnings management, international financial reporting standards, loss avoidance, reporting quality

Procedia PDF Downloads 198
477 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir

Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede

Abstract:

Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is increasing. This can impose harmful effects on the quality of downstream water. Therefore, predicting the concentration of discharged pollutants at different times and distances in the area of interest is of high importance in order to carry out preventive and control measures, as well as to avoid consuming contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) is discretized over the study domain by the Backward Time-Backward Space (BTBS) scheme. The discretized equations for each node are then derived according to the initial condition, the boundary conditions, and the point contaminant sources. Finally, taking into account appropriate time and space steps, a computational code was set up in MATLAB, and the contaminant concentration was obtained at different times and distances. Simulation results show that the BTBS differencing scheme and the FVM as a numerical method for solving the partial differential equation of transport constitute an appropriate approach for two-dimensional contaminant transfer in an advective-diffusive flow.
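
For reference, the governing equation and a one-dimensional version of the discretization read as follows (the paper's actual scheme is the two-dimensional analogue; the diffusive term is shown here with central differences):

```latex
% 2D advection-diffusion equation (ADE) for a conservative solute:
\frac{\partial C}{\partial t}
  + u \frac{\partial C}{\partial x} + v \frac{\partial C}{\partial y}
  = D \left( \frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} \right)

% BTBS discretization of the advective term, shown in 1D for brevity
% (backward in time and space):
\frac{C_i^{n} - C_i^{n-1}}{\Delta t}
  + u \frac{C_i^{n} - C_{i-1}^{n}}{\Delta x}
  = D \frac{C_{i+1}^{n} - 2 C_i^{n} + C_{i-1}^{n}}{\Delta x^2}
```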

Keywords: BTBS differentiating scheme, contaminant concentration, finite volume, mass transfer, water pollution

Procedia PDF Downloads 135
476 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency, and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes of the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
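
The signal-handling idea generalizes beyond the iFDAQ. As a minimal Python sketch of the concept (the real tool is integrated into the C++/Qt iFDAQ processes, and SIGUSR1 handling assumes a Unix host), a handler can dump a report on demand while the process keeps running:

```python
import faulthandler
import os
import signal
import traceback

# Minimal sketch of signal-based debug reporting: on a fault or on demand,
# dump a report without disturbing normal operation of the process.
faulthandler.enable()  # tracebacks on SIGSEGV, SIGFPE, SIGABRT, ...

def write_report(signum, frame):
    """Append a stack report for later investigation; the process keeps running."""
    with open("daq_debug_report.txt", "a") as f:
        f.write(f"--- report for signal {signum} in pid {os.getpid()} ---\n")
        traceback.print_stack(frame, file=f)

signal.signal(signal.SIGUSR1, write_report)  # trigger a report on demand
print(f"send SIGUSR1 to pid {os.getpid()} to dump a report")
```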

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 284