Search results for: managing code blue
962 Applying Resilience Engineering to improve Safety Management in a Construction Site: Design and Validation of a Questionnaire
Authors: M. C. Pardo-Ferreira, J. C. Rubio-Romero, M. Martínez-Rojas
Abstract:
Resilience Engineering is a new paradigm of safety management that proposes to change the way safety is managed, focusing on the things that go well instead of the things that go wrong. Many complex and high-risk sectors, such as air traffic control, health care, nuclear power plants, railways and emergency services, have applied this new vision of safety and have obtained very positive results. In the construction sector, safety management continues to be a problem, as indicated by statistics on occupational injuries worldwide. It is therefore important to improve safety management in this sector, and for this reason it is proposed to apply Resilience Engineering to construction. The Construction Phase Health and Safety Plan emerges as a key element for the planning of safety management. One of the key tools of Resilience Engineering is the Resilience Assessment Grid, which allows measuring the four essential abilities (respond, monitor, learn and anticipate) for resilient performance. The purpose of this paper is to develop a questionnaire based on the Resilience Assessment Grid, specifically on the ability to learn, to assess whether a Construction Phase Health and Safety Plan helps companies on a construction site to implement this ability. The research process was divided into four stages: (i) initial design of a questionnaire, (ii) validation of the content of the questionnaire, (iii) redesign of the questionnaire, and (iv) application of the Delphi method. The questionnaire obtained could be used as a tool to help construction companies evolve from Safety-I to Safety-II. In this way, companies could begin to develop the ability to learn, which will serve as a basis for the development of the other abilities necessary for resilient performance.
The following steps in this research are intended to develop other questions that allow evaluating the remaining abilities for resilient performance, namely responding, monitoring and anticipating.
Keywords: resilience engineering, construction sector, resilience assessment grid, construction phase health and safety plan
Procedia PDF Downloads 137
961 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and Inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset only if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
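As a concrete illustration of the Fisher scoring iteration named in the keywords, the sketch below applies the update lam &lt;- lam + U(lam)/I(lam) (score U, expected information I) to a one-parameter exponential model, not to the composite regression model of the paper; the function name, starting value and sample size are illustrative assumptions.

```python
import random

def fisher_scoring_exponential(x, lam0=1.0, tol=1e-10, max_iter=100):
    """Fisher scoring for the rate parameter of an exponential model.

    Score:  U(lam) = n/lam - sum(x)
    Info :  I(lam) = n/lam**2
    Update: lam <- lam + U(lam)/I(lam)
    """
    n = len(x)
    s = sum(x)
    lam = lam0
    for _ in range(max_iter):
        score = n / lam - s        # U(lam)
        info = n / lam ** 2        # I(lam)
        step = score / info
        lam += step
        if abs(step) < tol:        # stop once the update is negligible
            break
    return lam

random.seed(0)
sample = [random.expovariate(2.0) for _ in range(10_000)]  # true rate 2.0
lam_hat = fisher_scoring_exponential(sample)
# For this model the MLE has the closed form n / sum(x) = 1 / mean(x),
# which the iteration should reproduce to numerical precision.
```

For the composite distributions of the paper, the same iteration would run on the vector of regression parameters with the score and information of the composite likelihood.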
Procedia PDF Downloads 34
960 Anti-Acanthamoeba Activities of Fatty Acid Salts and Fatty Acids
Authors: Manami Masuda, Mariko Era, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita
Abstract:
Objectives: Fatty acid salts are a type of anionic surfactant and are produced from fatty acids and alkali. Moreover, fatty acid salts are known to have potent antibacterial activities. Acanthamoeba is ubiquitously distributed in the environment, including sea water, fresh water, soil and even the air. Although generally free-living, Acanthamoeba can be an opportunistic pathogen, which can cause a potentially blinding corneal infection known as Acanthamoeba keratitis. In this study, we therefore evaluated the anti-amoeba activity of fatty acid salts and fatty acids against Acanthamoeba castellanii ATCC 30010. Materials and Methods: The anti-amoeba activity of 9 fatty acid salts (potassium butyrate (C4K), caproate (C6K), caprylate (C8K), caprate (C10K), laurate (C12K), myristate (C14K), oleate (C18:1K), linoleate (C18:2K), linolenate (C18:3K)) was tested on cells of Acanthamoeba castellanii ATCC 30010. Fatty acid salts (concentration of 175 mM and pH 10.5) were prepared by mixing the fatty acid with the appropriate amount of KOH; the amoeba suspension mixed with a pH-adjusted KOH solution was used as the control. Fatty acids (concentration of 175 mM) were prepared by mixing the fatty acid with Tween 80 (20 %); the amoeba suspension mixed with Tween 80 (20 %) was used as the control. In the anti-amoeba assay, the amoeba suspension (3.0 × 10⁴ trophozoites/ml) was mixed with the fatty acid potassium sample (final concentration of 175 mM). Samples were incubated at 30 °C for 10 min, 60 min, and 180 min, and the viability of A. castellanii was then evaluated using a plankton counting chamber and trypan blue staining. The minimum inhibitory concentration (MIC) against Acanthamoeba was determined using the two-fold dilution method; the MIC was defined as the minimal anti-amoeba concentration that inhibited visible amoeba growth following incubation (180 min). Results: C8K, C10K, and C12K showed an anti-amoeba effect of 4 log units (99.99 % growth suppression of A. castellanii) after 180 min of incubation at 175 mM. After the amoeba suspension was mixed with C10K or C12K, destruction of the cell membrane was observed, whereas the pH-adjusted control solution did not exhibit any effect even after 180 min of incubation with A. castellanii. Moreover, the fatty acids C6, C8, and C18:3 showed a 4-log-unit anti-amoeba effect after 60 min of incubation, and C4 and C18:2 exhibited a 4-log reduction after 180 min. Furthermore, the minimum inhibitory concentrations were determined: the MICs of C10K, C12K and C4 were 2.7 mM. These results indicate that C10K, C12K and C4 have high anti-amoeba activity against A. castellanii and suggest that they have great potential as anti-amoeba agents.
Keywords: fatty acid salts, anti-amoeba activities, Acanthamoeba, fatty acids
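The two-fold dilution method used above to determine the MIC can be sketched numerically: starting from the 175 mM stock, six successive halvings land on roughly the 2.7 mM value reported (the function name and number of steps are illustrative assumptions).

```python
def twofold_dilution_series(start_mM, steps):
    """Concentrations (mM) produced by serial two-fold dilution of a stock."""
    return [start_mM / 2 ** i for i in range(steps)]

series = twofold_dilution_series(175.0, 8)  # 175, 87.5, 43.75, ...
# The seventh concentration is 175 / 64 ≈ 2.73 mM, i.e. the 2.7 mM MIC
# reported for C10K, C12K and C4 corresponds to six halving steps.
```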
Procedia PDF Downloads 479
959 Thermo-Mechanical Behavior of Steel-Wood Connections of Wooden Structures Under the Effect of a Fire
Authors: Ahmed Alagha, Belkacem Lamri, Abdelhak Kada
Abstract:
Steel-wood assemblies often have complex geometric configurations whose overall behavior under fire is conditioned by the thermal response of the combination of the two materials, steel and wood, whose thermal characteristics are greatly influenced by high temperatures. The objective of this work is to study the thermal behavior of a steel-wood connection, with or without insulating material, subjected to the ISO 834 standard fire model. The analysis is developed analytically, using the Eurocodes, and numerically, by the finite element method, through the ANSYS code. The design of the connections is evaluated at room temperature for the cases of single shear and double shear. The thermal behavior of the connections is simulated in the transient state, taking into account heat transfer by convection and by radiation. The variation of temperature as a function of time is evaluated at different positions in the connections, taking into account the heat produced and the formation of the char layer. The results concern the temperature distributions in the connection elements as a function of the duration of the fire. The thermal analysis shows that the temperature increases rapidly and reaches more than 260 °C in the steel after an hour of exposure to fire. The temperature development in the wood differs from that in the steel because of its thermal properties: wood heats and burns at its outer surface, which can locally reach very high temperatures.
Keywords: Eurocode 5, finite elements, ISO834, simple shear, thermal behaviour, wood-steel connection
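For reference, the ISO 834 standard fire model named above prescribes the gas temperature as θg = 20 + 345·log₁₀(8t + 1), with t in minutes; a minimal sketch (the function name is an illustrative assumption):

```python
import math

def iso834_gas_temperature(t_minutes, ambient_c=20.0):
    """Gas temperature (°C) of the ISO 834 standard temperature-time curve.

    theta_g = ambient + 345 * log10(8 * t + 1), with t in minutes.
    """
    return ambient_c + 345.0 * math.log10(8.0 * t_minutes + 1.0)

t_start = iso834_gas_temperature(0)    # ambient, 20 °C at ignition
t_1h = iso834_gas_temperature(60)      # about 945 °C after one hour
```

The roughly 945 °C gas temperature at one hour is the furnace boundary condition; the 260 °C reached inside the steel parts reflects the thermal inertia of the assembly.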
Procedia PDF Downloads 86
958 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts
Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti
Abstract:
Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be found. Fused Filament Fabrication (FFF) is the selected demonstrative AM technology due to its great popularity in industrial manufacturing. A material model that considers the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-code file. An Adaptive Mesh Refinement (AMR) scheme based on the octree strategy is used to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is implemented because of the orthotropic and heterogeneous constitutive nature of FFF components and because of the differences between their strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator.
Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization
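The Tsai-Wu criterion named above can be sketched for plane stress as follows. The strength values are illustrative placeholders, not data from the paper, and the interaction term F12 uses the common -0.5·sqrt(F11·F22) estimate; failure is predicted when the index reaches 1.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Tsai-Wu failure index for plane stress.

    Strengths are given as positive magnitudes (Xt/Xc axial tension and
    compression, Yt/Yc transverse, S shear). Index >= 1 predicts failure.
    """
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S ** 2
    F12 = -0.5 * math.sqrt(F11 * F22)  # common default estimate
    return (F1 * s1 + F2 * s2
            + F11 * s1 ** 2 + F22 * s2 ** 2 + F66 * t12 ** 2
            + 2.0 * F12 * s1 * s2)

# Illustrative FFF-like strengths in MPa (assumed values, not from the paper)
Xt, Xc, Yt, Yc, S = 45.0, 40.0, 30.0, 35.0, 25.0
idx = tsai_wu_index(20.0, 5.0, 3.0, Xt, Xc, Yt, Yc, S)  # a safe stress state
```

Note that the linear F1, F2 terms are what let the criterion distinguish tension from compression, which is why it suits FFF parts with asymmetric strengths.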
Procedia PDF Downloads 63
957 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing
Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger
Abstract:
This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (G-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing.
Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles
Procedia PDF Downloads 40
956 Impact of Audit Committee on Real Earnings Management: Cases of Netherlands
Authors: Sana Masmoudi Mardassi, Yosra Makni Fourati
Abstract:
Regulators highlight the importance of the Audit Committee (AC) as a key internal corporate governance mechanism, one of whose most important roles is to oversee the financial reporting process. The purpose of this paper is to examine the link between the characteristics of an audit committee and financial reporting quality by investigating whether audit committee characteristics are associated with improved financial reporting quality, especially reduced Real Earnings Management. The current study uses panel data from 80 non-financial companies listed on the Amsterdam Stock Exchange during the period between 2010 and 2017. To measure audit committee characteristics, four proxies were used: audit committee independence, financial expertise, gender diversity and AC meetings. A linear regression model was used to identify the influence of this set of audit committee characteristics on real earnings management after controlling for audit committee size, leverage, firm size, loss, growth and board size. This research provides empirical evidence on the association between audit committee independence, financial expertise, gender diversity and meetings and Real Earnings Management (REM) as a proxy of financial reporting quality. The study finds that independence and AC gender diversity are strongly related to financial reporting quality; in fact, these two characteristics constrain REM. The results also suggest that AC financial expertise reduces, to some extent, the likelihood of engaging in REM. These conclusions thus support the audit committee requirements of the Dutch Corporate Governance Code regarding gender diversity and AC meetings.
Keywords: audit committee, financial expertise, independence, real earnings management
Procedia PDF Downloads 167
955 Earthquake Hazards in Manipur: Causal Factors and Remedial Measures
Authors: Kangujam Monika, Kiranbala Devi Thokchom, Soibam Sandhyarani Devi
Abstract:
Earthquake is a major natural hazard in India. Manipur, located in the North-Eastern Region of India, is one of the most earthquake-prone locations in the region, since it lies in an area where the Indian and Eurasian tectonic plates meet and falls in seismic Zone V, the most severe intensity zone according to the IS Code. Some recent earthquakes recorded in Manipur are an M 6.7 event with epicenter at Tamenglong (January 4, 2016), an M 5.2 event at Churachandpur (February 24, 2017) and, most recently, an M 4.4 event at Thoubal (June 19, 2017). In these recent earthquakes, some houses and buildings were damaged and landslides also occurred. A field study was carried out, and an overview of the various causal factors involved in earthquake damage in Manipur is discussed. It is found that improper planning, poor design, negligence, structural irregularities, poor quality materials, construction of foundations without proper site soil investigation and non-implementation of remedial measures, etc., are possibly the main causal factors for damage in Manipur during earthquakes. The study also suggests that proper design of structures and foundations along with soil investigation, ground improvement methods, use of modern construction techniques, consultation with engineers, mass awareness, etc., might be effective in controlling the hazard in many locations. An overview of the analysis pertaining to earthquakes in Manipur, together with the ongoing detailed site-specific geotechnical investigation, is presented.
Keywords: Manipur, earthquake, hazard, structure, soil
Procedia PDF Downloads 210
954 The Culture of Journal Writing among Manobo Senior High School Students
Authors: Jessevel Montes
Abstract:
This study explored the culture of journal writing among Senior High School Manobo students. The purpose of this qualitative morpho-semantic and syntactic study was to discover the morphological, semantic, and syntactic features of the students' written output through the morphological, semantic, and syntactic categories present in their journal writings. Beliefs and practices embedded in their norms, values, and ideologies were also identified. The study was conducted among Manobo students in the Senior High Schools of Central Mindanao, particularly in the Division of North Cotabato. Findings revealed that, morphologically, the features that flourished are the following: subject-verb concordance, tenses, pronouns, prepositions, articles, and the use of adjectives. Semantically, the features are the following: word choice, idiomatic expression, borrowing, and vernacular. Syntactically, the features are the types of sentences according to structure and function, and the dominance of code-switching and run-on sentences. Lastly, as to the beliefs and practices embedded in the norms, values, and ideologies of their journal writing, the major themes are: valuing education; family and friends as treasure; preservation of culture; and emancipation from the bondage of poverty. This study has shed light on the writing capabilities and weaknesses of the Manobo students when it comes to the English language. Further, such an insight into language learning problems is useful to teachers because it provides information on common trouble spots in language learning, which can be used in the preparation of effective teaching materials.
Keywords: applied linguistics, culture, morpho-semantic and syntactic analysis, Manobo Senior High School, Philippines
Procedia PDF Downloads 121
953 Studies on Climatic and Soil Site Suitability of Major Grapes-Growing Soils of Eastern and Southern Dry Zones of Karnataka
Authors: Harsha B. R., Anil Kumar K. S.
Abstract:
Climate and soils are the two most dynamic entities among the factors affecting grape growth and productivity. Studying the climate prevailing over the years in a region provides sufficient information on the management practices to be carried out in vineyards, and evaluating the suitability of vineyard soils under different climatic conditions serves as the yardstick for analysing the performance of grapevines. This study was formulated to study the climate and evaluate the site suitability of soils in vineyards of southern Karnataka, a region known for quality wine production. Ten soil profiles were excavated for suitability evaluation of soils, and six taluks were studied for climatic analysis. In almost all the regions studied, soil water recharge starts at the end of May or June and peaks in either September or October, and soils start drying from the middle of December. Bangalore North (Rajanukunte) soils were highly suited for grape cultivation, with no or slight limitations. Bangalore North (GKVK Farm) was moderately suited, with slight to moderate limitations of slope and available nitrogen content. Moderate suitability was observed in the rest of the profiles studied in the Eastern dry zone, with slight to moderate limitations of organic carbon, available nitrogen, or both. Magadi (Southern dry zone) soils were moderately suitable, with slight to moderate limitations of gravelliness, available nitrogen, organic carbon, and exchangeable sodium percentage. Sustainable performance of vineyards in terms of yield can be achieved in these taluks by managing the constraints existing in the soils.
Keywords: climatic analysis, dry zone, water recharge, growing period, suitability, sustainability
Procedia PDF Downloads 124
952 Modelling Interactions between Saturated and Unsaturated Zones by Hydrus 1D, Plain of Kairouan, Central Tunisia
Authors: Mariem Saadi, Sabri Kanzari, Adel Zghibi
Abstract:
In semi-arid areas like the Kairouan region, constant irrigation with saline water and the overuse of groundwater resources have made soil and aquifer salinization an increasing concern. In this study, a methodology was developed to evaluate the groundwater contamination risk based on the hydraulic properties of the unsaturated zone. Two soil profiles with different ranges of salinity, one located in the north of the plain and the other in the south, each 30 m deep and both characterized by direct recharge of the aquifer, were chosen. Simulations were conducted with the Hydrus-1D code using measured precipitation data for the period 1998-2003 and calculated evapotranspiration for both profiles. Four combinations of initial conditions of water content and salt concentration were used in the simulation process in order to find the best match between simulated and measured values. The successful calibration of Hydrus-1D allowed the investigation of several scenarios to assess the contamination risk under different natural conditions. The aquifer contamination risk is related to the natural conditions: it increased under climate change and rising temperatures and decreased in the presence of a clay layer in the unsaturated zone. Hydrus-1D proved a useful tool for predicting groundwater level and quality in the case of direct recharge and in the absence of any information on the soil layers other than their texture.
Keywords: Hydrus-1D, Kairouan, salinization, semi-arid region, solute transport, unsaturated zone
Procedia PDF Downloads 183
951 The Formulation of the Mecelle and Other Codified Laws in the Ottoman Empire: Transformation Overturning the Sharia Principles
Authors: Tianqi Yin
Abstract:
The sharia had been the legislative basis of the Ottoman Empire since its emergence. The authority of sharia was superlative in Islamic society compared to the power of the sulta, the nominal ruler of the nation, and it regulated essentially every aspect of people's lives according to an ethical code. In modernity, however, as European sovereignty employed its power to re-engineer the Islamic world into something more like its own, a society ruled by a state, the Ottoman legislative system encountered the great challenge of adopting codified laws to replace sharia, with the formulation of the Mecelle being a prominent case. Interpretations of this transformation have been contentious, with the key debate revolving around whether these codified laws are authentic representations of sharia or alien legal formulations authorized by the modern nation-state under heavy European colonial influence. Because the diverse theories differ in methodology, reaching a universal conclusion on this issue remains challenging. This paper argues that the formulation of the Mecelle and other codified laws represents a discontinuity of sharia due to the influence of European modernity, and that the emphasis on elements of Islamic law was a tactic employed to promote this process. These codified laws signal a complete social transformation from an Islamic society ruled by the sharia to a replication of European society, ruled by the comprehensive ruling system of the modern state. In addition to advancing the discussion on the characterization of the codification movement in the Ottoman Empire in modernity, the research also contributes to determining the nature of the modern codification movement globally.
Keywords: codification, Mecelle, modernity, sharia, Ottoman Empire
Procedia PDF Downloads 91
950 A Feasibility Study of Waste(d) Potential: Synergistic Effect Evaluation by Co-digesting Organic Wastes and Kinetics of Biogas Production
Authors: Kunwar Paritosh, Sanjay Mathur, Monika Yadav, Paras Gandhi, Subodh Kumar, Nidhi Pareek, Vivekanand Vivekanand
Abstract:
A significant fraction of energy is wasted every year by managing biodegradable organic waste inadequately, as if development and sustainability were inherent enemies. The management of these wastes is indispensable to boost their optimum utilization by converting them to a renewable energy resource (here, biogas) through anaerobic digestion and to mitigate greenhouse gas emissions. Food and yard wastes may prove to be appropriate and potential feedstocks for anaerobic co-digestion for biogas production. The present study was performed to explore the synergistic effect of co-digesting food waste and yard trimmings from the MNIT campus, in different ratios, for enhanced biogas production in batch tests (37±1 °C, 90 rpm, 45 days). The results were overwhelming and showed that blending two different organic wastes in the proper ratio improved biogas generation considerably, with the highest biogas yield (2044±24 mL g⁻¹VS) achieved at a 75:25 food waste to yard waste ratio on a volatile solids (VS) basis. This yield was 1.7 and 2.2 folds higher than the mono-digestion of food or yard waste (1172±34 and 1016±36 mL g⁻¹VS, respectively). The increase in biogas production may be credited to the optimum C/N ratio. Adding TiO₂ nanoparticles showed virtually no effect on biogas production, although nanoparticles have sometimes been reported to enhance it. ICP-MS and FTIR analyses were carried out to gain insight into the feedstocks. Modified Gompertz and logistic models were applied for the kinetic study of biogas production, with the modified Gompertz model showing the better goodness-of-fit (R²=0.9978) with the experimental results.
Keywords: anaerobic co-digestion, biogas, kinetics, nanoparticle, organic waste
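The modified Gompertz model used in the kinetic study has the closed form B(t) = P·exp(-exp(Rm·e/P·(λ - t) + 1)), with P the ultimate biogas potential, Rm the maximum production rate and λ the lag phase. The sketch below implements it and recovers known parameters from a synthetic batch curve with a coarse grid search; the parameter values, grids and function names are illustrative assumptions, not the paper's fitted values.

```python
import math

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas yield at time t by the modified Gompertz model."""
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))

# Synthetic 45-day batch curve generated from assumed "true" parameters
P_true, Rm_true, lam_true = 2044.0, 120.0, 2.5
days = list(range(0, 46, 3))
observed = [modified_gompertz(t, P_true, Rm_true, lam_true) for t in days]

def fit_gompertz(ts, ys, P_grid, Rm_grid, lam_grid):
    """Coarse grid-search least squares (illustrative, not Levenberg-Marquardt)."""
    best, best_sse = None, float("inf")
    for P in P_grid:
        for Rm in Rm_grid:
            for lam in lam_grid:
                sse = sum((modified_gompertz(t, P, Rm, lam) - y) ** 2
                          for t, y in zip(ts, ys))
                if sse < best_sse:
                    best, best_sse = (P, Rm, lam), sse
    return best

fit = fit_gompertz(days, observed,
                   [1800.0, 1900.0, 2044.0, 2100.0],
                   [100.0, 120.0, 140.0],
                   [1.5, 2.5, 3.5])
```

In practice a nonlinear least-squares solver would replace the grid search, but the model evaluation is identical.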
Procedia PDF Downloads 389
949 A Retrospective Study of Pain Management Strategies for Pediatric Hypospadias Surgery in a Tertiary Care Hospital in Western Rajasthan
Authors: Darshana Rathod, Kirtikumar Rathod, Kamlesh Kumari, Abhilasha Motghare
Abstract:
Background and Aims: Hypospadias is one of the common congenital anomalies in males. Various modalities are used for pain management, including caudal, penile, pudendal and ring blocks, and systemic analgesics. There has yet to be a consensus regarding the most effective and safe analgesic method for controlling pain in these children. We planned this study to determine our institute's pain management practices for hypospadias surgeries. Material and Methods: This retrospective cohort study reviewed 150 children with hypospadias undergoing surgery from January 2020 to December 2023. Data regarding the mode of pain management, postoperative opioid requirement, PACU discharge, and complications were collected from the records. Results: For postoperative pain, 33 (22%) children received a caudal block, 60 (40%) a penile block, and 57 (38%) were managed with intravenous analgesics. A significant difference was found among the three groups, with the IV analgesic group requiring significantly more opioid boluses in the PACU [43 (75.4%) required two boluses (p < 0.05)]. The difference in PACU discharge time among the three groups was also statistically significant (p < 0.05), with the IV analgesics group having the longest time (55 mins [47, 60]), the caudal group 35 mins (30, 40), and the dorsal penile block group 35 mins (25, 40). There was no significant difference in complications such as edema, meatal stenosis, urethrocutaneous fistula, or wound dehiscence among the three groups. Conclusion: Intravenous analgesics and regional blocks like caudal and penile blocks are the common pain management modalities in our institute. The regional blocks are effective in managing pain in the postoperative period and are not significantly associated with complications.
Keywords: caudal block, hypospadias, pain management, penile block
Procedia PDF Downloads 45
948 The Nexus of Decentralized Policy, Social Heterogeneity and Poverty in Equitable Forest Benefit Sharing in the Lowland Community Forestry Program of Nepal
Authors: Dhiraj Neupane
Abstract:
Decentralized policy and practices have largely concentrated on the transfer of decision-making authority from central to local institutions (or people) in the developing world. Such policy and practices have always aimed at the equitable and efficient management of resources in the line of poverty reduction. The transfer of forest decision-making autonomy has likewise been glorified as the best forest management alternative to maximize forest benefits and improve the livelihoods of local people living near forests. However, social heterogeneity and the poor decision-making capacity of local institutions (or people) create a nexus of problems in managing the resources and sharing the forest benefits among user households, despite the policy objectives. The situation is severe in the lowland of Nepal, where forest resources have higher economic potential and user households have heterogeneous socio-economic conditions. The study discovered that, utilizing their decision-making autonomy, user households were setting low values on timber, the most valuable product of community forests, in the name of equitable access to timber for all user households. The society being heterogeneous in socio-economic conditions, households in better economic conditions were always taking a higher share of the forest benefits. The low valuation of timber has negative consequences for equitable benefit sharing and gives poor support to the livelihood improvement of user households. Moreover, low valuation may increase local demand for timber and thus human pressure on forests.
Keywords: decentralized forest policy, Nepal, poverty, social heterogeneity, Terai
Procedia PDF Downloads 287
947 Timber Urbanism: Assessing the Carbon Footprint of Mass-Timber, Steel, and Concrete Structural Prototypes for Peri-Urban Densification in the Hudson Valley's Urban Fringe
Authors: Eleni Stefania Kalapoda
Abstract:
The current fossil-fuel-based urbanization pattern and the estimated growth of the human population are increasing the environmental footprint on our planet's precious resources. To mitigate the estimated skyrocketing of greenhouse gas emissions associated with the construction of new cities and infrastructure over the next 50 years, we need a radical rethink of our approach to construction to deliver a net-zero built environment. This paper assesses the carbon footprint of mass-timber, steel, and concrete structural alternatives for peri-urban densification in the Hudson Valley's urban fringe, and examines the updated policy and building code adjustments that support synergies between timber construction in city-making and the sustainable management of timber forests. The paper quantifies the carbon footprint of a structural prototype for four different material assemblies: a concrete (post-tensioned), a mass-timber, a steel (composite), and a hybrid (timber/steel/concrete) assembly, each applicable to the three updated building typologies of the IBC 2021 (Type IV-A, Type IV-B, Type IV-C), which range from nine to eighteen stories. By scaling up that structural prototype to the size of a neighborhood district, the paper presents a quantitative and qualitative approach to a forest-based construction economy as well as a resilient and more just supply-chain framework that ensures the well-being of both the forest and its inhabitants.
Keywords: mass-timber innovation, concrete structure, carbon footprint, densification
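At its core, a carbon-footprint comparison of the kind described reduces to multiplying material take-offs by cradle-to-gate (A1-A3) emission factors and summing per assembly; the factors and per-square-metre quantities below are illustrative placeholders, not values from the study.

```python
# Assumed cradle-to-gate emission factors, kgCO2e per kg of material.
# These are illustrative placeholders, not values from the study.
FACTORS = {"concrete": 0.15, "steel": 1.85, "mass_timber": 0.44}

def embodied_carbon(quantities_kg):
    """Total embodied carbon (kgCO2e) of a structural assembly."""
    return sum(FACTORS[m] * kg for m, kg in quantities_kg.items())

# Hypothetical per-square-metre material take-offs for three alternatives
concrete_frame = {"concrete": 1000.0, "steel": 60.0}   # rebar counted as steel
steel_frame = {"steel": 120.0, "concrete": 300.0}      # composite deck
timber_frame = {"mass_timber": 180.0, "steel": 15.0}   # connections as steel

footprints = {name: embodied_carbon(q)
              for name, q in [("concrete", concrete_frame),
                              ("steel", steel_frame),
                              ("timber", timber_frame)]}
```

A full assessment would also account for biogenic carbon storage in the timber and end-of-life scenarios, which this per-kilogram tally omits.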
Procedia PDF Downloads 108
946 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications
Authors: Jongbae Lee, Seongsoo Lee
Abstract:
Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy to allow price reduction. The SAE J2716 SENT (single edge nibble transmission) protocol transmits direct digital waveforms instead of complicated analog-modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, a configuration register, a tick period generator, a CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffers. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator derives tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffers store transmitted and received data. The designed SENT interface can send or receive digital data at 25-65 kbps with a 3 us tick. Synthesized in a 0.18 um fabrication technology, it occupies about 2,500 gates. Keywords: digital sensor interface, SAE J2716, SENT, verilog HDL
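The "direct digital waveform" idea can be illustrated with the J2716 pulse scheme, in which (per the standard) a calibration/sync pulse spans 56 ticks and a data nibble of value v spans 12 + v ticks. The hardware in the paper is Verilog; the Python sketch below only models the nibble-to-ticks mapping:

```python
SYNC_TICKS = 56      # calibration/sync pulse length in ticks
NIBBLE_OFFSET = 12   # a data nibble of value v is sent as a pulse of (12 + v) ticks

def encode_nibbles(nibbles):
    """Map 4-bit data values to pulse lengths in ticks, prefixed by the sync pulse."""
    assert all(0 <= n <= 15 for n in nibbles), "SENT nibbles are 4-bit values"
    return [SYNC_TICKS] + [NIBBLE_OFFSET + n for n in nibbles]

def decode_pulses(pulses):
    """Inverse mapping: drop the sync pulse and recover the nibble values."""
    assert pulses[0] == SYNC_TICKS, "frame must start with the sync pulse"
    return [p - NIBBLE_OFFSET for p in pulses[1:]]
```

A nibble of 0 is thus a 12-tick pulse and a nibble of 15 a 27-tick pulse; the receiver measures pulse length in ticks and subtracts the offset, which is what the tick period generator and frame decoder implement in hardware.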
Procedia PDF Downloads 301945 Simulation of the Collimator Plug Design for Prompt-Gamma Activation Analysis in the IEA-R1 Nuclear Reactor
Authors: Carlos G. Santos, Frederico A. Genezini, A. P. Dos Santos, H. Yorivaz, P. T. D. Siqueira
Abstract:
Prompt-Gamma Activation Analysis (PGAA) is a valuable technique for investigating the elemental composition of various samples. However, installing a PGAA system entails specific conditions, such as filtering the neutron beam according to the target and providing adequate shielding for both users and detectors. These requirements incur substantial costs, exceeding $100,000 including manpower. A cost-effective alternative is to leverage an existing neutron beam facility to create a hybrid system integrating PGAA and Neutron Tomography (NT). The IEA-R1 nuclear reactor at IPEN/USP possesses an NT facility with suitable conditions for adapting and implementing a PGAA device. The NT facility offers a slightly colder thermal neutron flux and already provides shielding for user protection. The key additional requirement is designing detector shielding to mitigate the high gamma-ray background and safeguard the HPGe detector from neutron-induced damage. This study employs Monte Carlo simulations with the MCNP6 code to optimize the collimator plug for PGAA within the IEA-R1 NT facility. Three collimator models are proposed and simulated to assess their effectiveness in shielding the gamma and neutron radiation produced by nuclear fission. The aim is to obtain a focused prompt-gamma signal while shielding ambient gamma radiation. The simulation results indicate that one of the proposed designs is particularly suitable for the PGAA-NT hybrid system. Keywords: MCNP6.1, neutron, prompt-gamma ray, prompt-gamma activation analysis
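A first-order sense of what a shielding plug must achieve comes from the narrow-beam attenuation law I = I0 exp(-mu x), which the full MCNP6 transport calculation refines with scattering and geometry. The attenuation coefficient below is an assumed placeholder, not a value from the study:

```python
import math

def attenuated_intensity(i0, mu_per_cm, x_cm):
    """Narrow-beam attenuation law: I = I0 * exp(-mu * x)."""
    return i0 * math.exp(-mu_per_cm * x_cm)

def tenth_value_layer(mu_per_cm):
    """Thickness that cuts the beam to one tenth: x = ln(10) / mu."""
    return math.log(10.0) / mu_per_cm

# mu = 0.5 /cm is an ASSUMED coefficient purely for illustration
MU = 0.5
```

Such back-of-the-envelope estimates only bound the required plug thickness; the comparative performance of the three collimator models still requires the Monte Carlo simulation.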
Procedia PDF Downloads 75944 International Classification of Primary Care as a Reference for Coding the Demand for Care in Primary Health Care
Authors: Souhir Chelly, Chahida Harizi, Aicha Hechaichi, Sihem Aissaoui, Leila Ben Ayed, Maha Bergaoui, Mohamed Kouni Chahed
Abstract:
Introduction: The International Classification of Primary Care (ICPC) is part of the morbidity classification system. It has 17 chapters, and each entry is identified by an alphanumeric code: the letter corresponds to the chapter, the number to a paragraph within the chapter. The objective of this study is to show the utility of this classification for coding the reasons for demand for care in primary health care (PHC), along with its advantages and limits. Methods: This is a cross-sectional descriptive study conducted in 4 PHC centers in the Ariana district. Data on the demand for care during 2 days of the same week were collected. The information was coded according to the ICPC. The data were entered and analyzed with the Epi Info 7 software. Results: A total of 523 demands for care were investigated. The patients who came for consultation were predominantly female (62.72%). Most consultants were young, with an average age of 35 ± 26 years. The demands fell into seven ICPC rubrics: 'infections' was the most common reason (49.9%), followed by 'other diagnoses' (40.2%), 'symptoms and complaints' (5.5%), 'trauma' (2.1%), 'procedures' (2.1%), and 'neoplasm' (0.3%). The main advantage of the ICPC is that it is a standardized tool: it is well suited to classifying the reasons for demand for care in PHC according to their specificity, and it can be used in a computerized medical file in PHC. Its current limitations relate to the difficulty of classifying some reasons for demand for care. Conclusion: The ICPC was developed to provide primary health care with a coding reference that takes its specificity into account. The ICD (CIM), by comparison, is already in its 10th revision; with each revision, the ICPC should likewise gain in efficiency so that it can be generalized and used by PHC teams. Keywords: international classification of primary care, medical file, primary health care, Tunisia
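The alphanumeric structure described above (chapter letter plus in-chapter number) lends itself to trivial machine parsing, which is part of why the ICPC fits a computerized medical file. A minimal sketch, with only an abridged subset of the 17 chapter labels:

```python
# Abridged subset of the 17 ICPC chapter letters (labels shortened).
ICPC_CHAPTERS = {
    "A": "General and unspecified",
    "D": "Digestive",
    "R": "Respiratory",
}

def parse_icpc(code):
    """Split an ICPC code into its chapter (letter) and component number."""
    chapter = ICPC_CHAPTERS.get(code[0].upper(), "Unknown chapter")
    return chapter, int(code[1:])
```

For example, a code beginning with "R" is filed under the respiratory chapter regardless of its number, so chapter-level tallies like the rubric percentages reported above reduce to grouping on the first character.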
Procedia PDF Downloads 267943 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
Advances in computer technology and commercial computational fluid dynamics (CFD) simulation now give more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields owing to their flexibility and performance. Pipeline erosion is a complex phenomenon to evaluate by numerical arithmetic alone, whereas CFD simulation is a convenient tool for this type of problem. The erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom-ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, evaluating the erosion wear rate at velocities of 2-4 m/s. The results show that the velocity of the solid-liquid mixture is the dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion on the bottom side of the pipeline. Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε Model
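The dominance of velocity reported above is often summarized in the erosion literature by an empirical power law, E = k * V^n, with n typically between 2 and 3. The constant and exponent below are assumed illustrative values, not fitted results from this study:

```python
def erosion_rate(velocity_mps, k=1.0e-6, n=2.6):
    """Empirical power-law erosion model E = k * V**n (k and n are ASSUMED values)."""
    return k * velocity_mps ** n
```

Under such a law, doubling the mixture velocity from 2 to 4 m/s multiplies the erosion rate by 2^n, about a factor of six for n = 2.6, which is consistent with velocity dwarfing the other parameters in the study's range.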
Procedia PDF Downloads 408942 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach
Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane
Abstract:
An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheet in perforation tests, based on experiments coupled with numerical simulation, is presented. Impact problems (penetration and perforation) of metallic plates have been of interest for a long time. Experimental, analytical, and numerical studies have been carried out to analyze the perforation process in detail, and through these approaches the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure initial and residual velocities, from which the ballistic curve and the ballistic limit are obtained. The energy balance is also reported, including the energy absorbed by the aluminum. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical-nose projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation processes. The ballistic curve obtained numerically was verified experimentally, and the failure patterns are presented using the optimal mesh densities that ensure stability of the results. Good agreement between the numerical and experimental results is observed. Keywords: aluminum alloy, ballistic behavior, failure criterion, numerical simulation
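The energy balance follows directly from the measured initial and residual velocities: the energy absorbed by the plate is the projectile's kinetic-energy loss. A minimal sketch using the 28 g projectile mass from the tests (the velocity values below are illustrative points within the tested 40-180 m/s range, not measured data):

```python
PROJECTILE_MASS_KG = 0.028  # 28 g conical-nose projectile from the tests

def absorbed_energy(mass_kg, v_initial, v_residual):
    """Energy absorbed by the plate: E = 0.5 * m * (V0**2 - Vr**2)."""
    return 0.5 * mass_kg * (v_initial ** 2 - v_residual ** 2)
```

At the ballistic limit the residual velocity is zero, so the absorbed energy equals the full initial kinetic energy; above the limit, plotting residual against initial velocity gives the ballistic curve.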
Procedia PDF Downloads 312941 Utilizing Minecraft Java Edition for the Application of Fire Disaster Procedures to Establish Fire Disaster Readiness for Grade 12 STEM students of DLSU-IS
Authors: Aravella Flores, Jose Rafael E. Sotelo, Luis Romulus Phillippe R. Javier, Josh Christian V. Nunez
Abstract:
This study analyzes the performance, in handling fire hazards through Minecraft Java Edition, of Grade 12 STEM students of De La Salle University - Integrated School who have completed the Disaster Readiness and Risk Reduction (DRRR) course. The platform is suitable because fire DRRR is challenging to learn in a practical setting, and it is questionable whether textbook knowledge alone translates into actual practice. The purpose of this study is to establish whether Minecraft can be a suitable environment for familiarizing oneself with fire DRRR. The objectives are achieved by using Minecraft to simulate fire scenarios in which the participants can freely act on and practice fire DRRR. The experiment was divided into a grounding phase and a validation phase, during which the researchers observed the performance of the participants in the simulation. Pre-simulation and post-simulation surveys were given to capture the change in participants' perception of their ability to apply fire DRRR procedures and of their vulnerabilities. A paired t-test showed significant differences between the pre-simulation and post-simulation survey scores, indicating improved judgment of DRRR and reduced vulnerability should participants encounter a fire hazard. This research poses a model for future work that could gather more participants and go beyond command blocks into the code of Minecraft itself. Keywords: minecraft, DRRR, fire, disaster, simulation
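The paired t-test used on the pre- and post-simulation scores is computed from the difference scores of each participant. A minimal sketch; the survey scores below are made up for illustration, not the study's data:

```python
import math

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom on difference scores (post - pre)."""
    assert len(pre) == len(post) and len(pre) > 1
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / (math.sqrt(var) / math.sqrt(n)), n - 1

# made-up survey scores for five hypothetical participants
t_stat, df = paired_t([10, 12, 9, 11, 13], [14, 15, 13, 15, 16])
```

The resulting t statistic is compared against the t distribution with n - 1 degrees of freedom; a large value, as in the study, indicates a significant pre-to-post change.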
Procedia PDF Downloads 137940 Interpreting Ecclesiastical Heritage: Meaning Making and Contentious Conversations
Authors: Alexis Thouki
Abstract:
In our post-Christian societies, ecclesiastical heritage has acquired a new, extrovert profile, aiming to reach out to an increasingly diverse audience. In this context, the various motivations, interests, personalities, and cultural exchanges found in the 'post-modern pilgrimage' bequeath a hybrid and multidimensional character to religious tourism education. Consequently, churches have acquired the challenging role of enriching visitors' cultural and spiritual capital. Despite this promising diversification to relate, reveal, and provoke constructive discourses, practitioners, faced with various 'conflicting interests', attempt to tame the religious environment, rich in symbolism and meaning, through 'neutral interpretations'. This paper presents the results of an ongoing strategy for presenting contentious meanings in English churches. It explores some of the underlying issues related to the capacity of 'neutrality' to spark, downplay, or eliminate contentious conversations on the cultural, religious, and social dimensions of Christian cultural heritage. To understand this issue, the paper examines the concept of neutrality and what it stands for, executing a discourse analysis of the semantic context in which the theological lexicon is interwoven with the cultural and social meanings of sacred sites. It then examines whether the preferred interpretive strategies meet the post-modern interpretative framework, which is marked by polysemy and critical, active engagement. The ultimate aim is to investigate the hypothesis that the preferred neutral strategies, managing the 'conflicting' demands of worshippers and visitors, result in uneven treatment of both the religious and the historical spirit of the place. Keywords: contentious dialogue, interpretation, meaning making, religious tourism
Procedia PDF Downloads 156939 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets
Authors: Kothuri Sriraman, Mattupalli Komal Teja
Abstract:
In this paper, a novel algorithm is proposed for recognizing hulls in handwritten images, whether of irregular, digit, or character shape. Objects and their internal objects are difficult to extract when the image structure contains a bulk of clusters. Estimation results are readily obtained by identifying the sub-regional objects with the proposed SASK algorithm. The focus is on recognizing the number of internal objects in a given image, shadow-free and error-free. Hard clustering and density clustering of the rough set obtained from the image are used to recognize any differentiated internal objects. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally the hull detection system. Detecting sub-regional hulls can increase machine learning capability in character detection, and the method can be extended to hull recognition even in irregularly shaped objects, for example black holes and their intensities in space exploration. Layered hulls are those having structured layers inside; they are useful in military services and traffic applications to identify the number of vehicles or persons. The proposed SASK algorithm thus helps identify such regions and can feed into decision processes (clearing traffic, or identifying the number of opposing persons in war). Keywords: chain code, Hull regions, Hough transform, Hull recognition, Layered Outline Extraction, SASK algorithm
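The SASK algorithm itself is not reproduced here, but the final hull-detection step of the three-step pipeline can be illustrated with a standard convex hull routine (Andrew's monotone chain), applied once pre-processing and boundary extraction have yielded candidate boundary points:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices of a 2-D point set, CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                     # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]    # endpoints are shared, drop duplicates
```

Interior points of a sub-region fall inside the returned polygon, so counting distinct hulls gives the number of sub-regional objects; layered hulls correspond to running the routine on nested boundary sets.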
Procedia PDF Downloads 349938 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations
Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar
Abstract:
Fast simulations are critical to reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip, and researchers and designers rely on them for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely with the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses was identified. To further reduce cache misses, we use software optimization techniques such as removing unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types; these reduced cache misses by 18.52%, 5.34%, and 3.91%, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used to parallelize and vectorize the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size with thread parallelization and vectorization, respectively. Keywords: cache behaviour, network-on-chip, performance profiling, vectorization
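Of the optimizations listed, loop interchange is the easiest to illustrate. The sketch below contrasts the two traversal orders and checks that interchanging the loops preserves the result; the cache-miss benefit shows up in row-major languages like the C++ of Booksim2.0, while Python is used here only to show the transformation:

```python
N = 256
matrix = [[(i * N + j) % 7 for j in range(N)] for i in range(N)]

def sum_row_major(m):
    """Inner loop walks one row contiguously: cache-friendly in row-major layouts."""
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_col_major(m):
    """Inner loop strides across rows: the access pattern loop interchange removes."""
    total = 0
    for j in range(len(m[0])):
        for i in range(len(m)):
            total += m[i][j]
    return total
```

Because addition is order-independent here, the interchange is semantics-preserving, which is the precondition for applying it to a hotspot loop nest.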
Procedia PDF Downloads 197937 The Influence of Immunity on the Behavior and Dignity of Judges
Authors: D. Avnieli
Abstract:
Immunity of judges from liability represents a departure from the principle that all are equal under the law and that victims may be granted compensation from their offenders. The purpose of the study is to determine whether judicial immunity coincides with the need to ensure a highly independent and incorruptible judiciary. Judges are immune from civil and criminal liability for their judicial acts. Judicial immunity is justified by the need to maintain the complete independence and discretion of the judiciary. Scholars and judges believe that absolute immunity is needed to shield judges from pressures, threats, or outside interference. It is commonly accepted that judges should be free to perform their judicial role in accordance with their assessment of the facts and their understanding of the law, without any restrictions, influences, inducements, or interferences. In most countries, immunity applies even when judges act in excess of jurisdiction; in some countries, it applies even when they act maliciously or corruptly. The only exception to absolute immunity applicable in all judicial systems is when judges act without jurisdiction over the subject matter. The Israeli Supreme Court recently decided to embrace absolute immunity and strike out the lawsuit of a refugee who was unlawfully incarcerated, ruling that the plaintiff could sue neither the State nor the judge for damages. The questions of malice, dignity, and public scrutiny were not discussed. This paper, based on a comparative analysis of many cases, aims to determine whether immunity affects the dignity and behavior of judges. It demonstrates that most judges maintain their dignity and ethical code of behavior, but sometimes do not hesitate to act consciously in excess of jurisdiction, and in rare cases even corruptly.
Therefore, in order to maintain an independent and incorruptible judiciary, immunity should not apply where judges act consciously in excess of jurisdiction or with malicious incentives. Keywords: incorruptible judiciary, immunity, independent, judicial, judges, jurisdiction
Procedia PDF Downloads 105936 A Close Study on the Nitrate Fertilizer Use and Environmental Pollution for Human Health in Iran
Authors: Saeed Rezaeian, M. Rezaee Boroon
Abstract:
Nitrogen accumulates in soils when fertilizer is added to promote plant growth. When organic matter decomposes, the available nitrogen produced is in the form of nitrate, which is highly mobile. The most significant health effect of nitrate ingestion is methemoglobinemia in infants under six months of age (blue baby syndrome). Mobile nutrients like nitrate nitrogen are not stored in the soil in available forms for long periods or in large amounts; their availability depends on the needs of crops such as vegetables. Vegetables compete actively for nitrate nitrogen, as a mobile nutrient, and for water. Mobile nutrients must be shared, so the fewer the plants, the larger each plant's share. This nitrate nitrogen is also poisonous for the people who eat the vegetables: nitrate is converted to nitrite by bacteria in the stomach and the gastrointestinal (GI) tract, and when nitrite enters the blood cells it converts hemoglobin to methemoglobin, causing anoxemia and cyanosis. The increasing use of pesticides and chemical fertilizers, especially nitrate fertilizers, which have become common for increasing agricultural production, has caused nitrate pollution of soil, water, and the environment, with considerable damage to humans and animals. In this research, nitrate accumulation in different kinds of vegetables, including green pepper, tomato, eggplant, watermelon, cucumber, and red pepper, was observed in the suburbs of the cities of Mashhad, Neisabour, and Sabzevar. In some of these cities, information on agronomic practices, such as fertilizer recommendations for the different vegetable crops, varieties, pesticides, and irrigation schedules, was collected on forms filled out by colleagues in the research areas mentioned above.
The samples were sent for chemical analysis to the soil and water laboratory of our department in Mashhad. The final results showed that the mean nitrate levels in the fruit-crop samples from the cities mentioned above were all below the critical levels: 35.91, 8.47, 24.81, 6.03, 46.43, and 2.06 mg/kg dry matter for tomato, cucumber, eggplant, watermelon, green pepper, and red pepper, respectively. Although this study was conducted with limited samples, judging by the mean levels, the use of these crops should not, from a nutritional point of view, cause poisoning in humans. Keywords: environmental pollution, human health, nitrate accumulations, nitrate fertilizers
Procedia PDF Downloads 251935 Young Female’s Heart Was Bitten by Unknown Ghost (Isolated Cardiac Sarcoidosis): A Case Report
Authors: Heru Al Amin
Abstract:
Sarcoidosis is a granulomatous inflammatory disorder of unclear etiology that can affect multiple organ systems. Isolated cardiac sarcoidosis is a very rare condition that causes lethal arrhythmia and heart failure, and a definite diagnosis remains challenging; multimodality imaging plays a pivotal role in diagnosing this entity. Case summary: In this report, we discuss the case of a 50-year-old woman who presented with recurrent palpitation, dizziness, vertigo, and presyncope. Electrocardiography revealed variable heart blocks, including first-degree AV block, second-degree AV block, high-degree AV block, complete AV block, and trifascicular block, and at times supraventricular arrhythmia. Twenty-four-hour Holter monitoring showed atrial bigeminy, first-degree AV block, and trifascicular block. Transthoracic echocardiography showed thinning of the basal anteroseptal and inferoseptal segments with LV dilatation and reduced global longitudinal strain. A dual-chamber pacemaker was implanted. CT coronary angiography showed no coronary artery disease. Cardiac magnetic resonance revealed basal anteroseptal and inferoseptal thinning with focal edema and LGE suggestive of sarcoidosis. Computed tomography of the chest showed no lymphadenopathy or pulmonary infiltration. Whole-body 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) was performed. We started steroids and followed up with the patient. Conclusion: This case highlights the challenges of identifying and managing isolated cardiac sarcoidosis in a patient with recurrent syncope and variable heart block. Early, or even late, initiation of steroids can improve the arrhythmia as well as left ventricular function. Keywords: cardiac sarcoidosis, conduction abnormality, syncope, cardiac MRI
Procedia PDF Downloads 91934 Relationship among Teams' Information Processing Capacity and Performance in Information System Projects: The Effects of Uncertainty and Equivocality
Authors: Ouafa Sakka, Henri Barki, Louise Cote
Abstract:
Uncertainty and equivocality are defined in the information processing literature as two task characteristics that require different information processing responses from managers. As uncertainty often stems from a lack of information, addressing it is thought to require the collection of additional data. On the other hand, as equivocality stems from ambiguity and a lack of understanding of the task at hand, addressing it is thought to require rich communication among those involved. Past research has provided only weak to moderate empirical support for these hypotheses. The present study contributes to this literature by defining uncertainty and equivocality at the project level and investigating their moderating effects on the association between several project information processing constructs and project performance. The information processing constructs considered are the amount of information collected by the project team, and the richness and frequency of the formal communications among team members discussing the project's follow-up reports. Data on 93 information system development (ISD) project managers were collected in a questionnaire survey and analyzed via the Fisher test for correlation differences. The results indicate that the highest project performance levels were observed in projects characterized by high uncertainty and low equivocality, in which project managers were provided with detailed and updated information on project costs and schedules. In addition, our findings show that information about user needs and technical aspects of the project is less useful for managing projects where uncertainty and equivocality are high.
Further, while the strongest positive effect of interactive use of follow-up reports on performance occurred in projects where both uncertainty and equivocality levels were high, its weakest effect occurred when both of these were low.Keywords: uncertainty, equivocality, information processing model, management control systems, project control, interactive use, diagnostic use, information system development
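The Fisher test for correlation differences works by transforming each correlation with the r-to-z (atanh) transform, whose sampling distribution is approximately normal, and comparing the difference to its standard error. A minimal sketch, with illustrative inputs rather than the study's correlations:

```python
import math

def fisher_z_diff(r1, n1, r2, n2):
    """z statistic for H0: rho1 == rho2, via Fisher's r-to-z transform."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of z1 - z2
    return (z1 - z2) / se
```

A |z| above 1.96 indicates, at the 5% level, that the correlation between an information processing construct and performance differs between two subgroups, e.g. high- versus low-uncertainty projects.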
Procedia PDF Downloads 294933 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects
Authors: Ma Yuzhe, Burra Venkata Durga Kumar
Abstract:
The computer is a great invention. As people use computers more and more frequently, demand for PCs is growing, and the performance of computer hardware keeps rising to handle more complex processing and operation. However, development of the operating system, which provides the soul of the computer, stalled for a time. Faced with the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. DOS (Disk Operating System) was too simple to support much innovation, so it was not a good choice either, and macOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems; built around a powerful kernel with many loadable modules, it is relatively strong in its core architecture. Linux supports all the major Internet protocols, so it has very good network functionality. Linux supports multiple users, and no user can interfere with another user's files. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers, and the system is constantly upgraded and improved, with many different versions suitable for community use and commercial use. The Linux system has good security because it relies on its file permission system. However, as vulnerabilities and threats are constantly discovered, the security of using the operating system also needs more attention. This article will focus on the analysis and discussion of Linux security issues. Keywords: Linux, operating system, system management, security
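The file permission system credited above for Linux's multi-user isolation can be inspected programmatically. A small sketch using Python's stat module to check whether a file is world-writable, a common misconfiguration in Linux security audits:

```python
import os
import stat
import tempfile

def world_writable(path):
    """True if 'other' users have write permission on the file at path."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

# demonstrate on a throwaway file
fd, tmp = tempfile.mkstemp()
os.close(fd)
os.chmod(tmp, 0o644)          # rw-r--r-- : owner writes, others only read
safe = world_writable(tmp)
os.chmod(tmp, 0o666)          # rw-rw-rw- : anyone may write
unsafe = world_writable(tmp)
os.unlink(tmp)
```

The same mode bits (S_IRUSR, S_IWGRP, and so on) underlie the familiar chmod/ls -l permission strings, and auditing them is a routine part of Linux security management.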
Procedia PDF Downloads 108