Search results for: licensing agreement
407 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor
Authors: D. Ramajo, S. Corzo, M. Nigro
Abstract:
A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed to simulate the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the reactor pressure vessel (RPV) type, detailed three-dimensional (3D) models of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) were coupled with an in-house finite volume one-dimensional (1D) code in order to model the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations were used to account for the in-channel distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) pressure losses. A local power distribution in each coolant channel was also taken into account. The heat transfer between the coolant and the surrounding moderator was accurately calculated using a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with the use of functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows estimation of the in-core steam generation under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions agree with design expectations, providing a first assessment of the coupled 1D/3D model. Results for the nominal condition were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and revealing low values of void fraction inside the upper plenum. It must be mentioned that the current results were obtained by imposing prescribed fission power functions from the literature. Therefore, the results are shown with the aim of pointing out the potential of the developed model.
Keywords: PHWR, CFD, thermo-hydraulic, two-phase flow
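As a rough illustration of the 1D channel pressure-loss bookkeeping the abstract describes, the sketch below combines distributed (Darcy friction) and concentrated (grid/throttle K-factor) losses for a single coolant channel. All numerical values, the Blasius friction correlation, and the channel geometry are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of a 1D channel pressure-loss balance, assuming single-phase
# Darcy friction plus form-loss (K-factor) terms. All values are illustrative.
import math

def channel_pressure_drop(m_dot, rho, mu, L, D, k_factors):
    """Distributed + concentrated pressure losses for one coolant channel."""
    area = math.pi * D**2 / 4.0
    v = m_dot / (rho * area)             # mean coolant velocity [m/s]
    re = rho * v * D / mu                # Reynolds number
    f = 0.316 * re**-0.25                # Blasius friction factor (smooth-pipe estimate)
    dp_friction = f * (L / D) * 0.5 * rho * v**2
    dp_form = sum(k_factors) * 0.5 * rho * v**2   # spacer grids, inlet/outlet throttles
    return dp_friction + dp_form

# Hypothetical channel: 6 m long, 8 cm hydraulic diameter, heavy-water-like
# properties, one inlet throttle and several spacer grids.
dp = channel_pressure_drop(m_dot=5.0, rho=1085.0, mu=2.8e-4,
                           L=6.0, D=0.08, k_factors=[2.5, 0.8, 0.8, 0.8, 1.2])
print(f"Total channel pressure drop: {dp/1e3:.2f} kPa")
```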
Procedia PDF Downloads 468
406 Stabilization of Metastable Skyrmion Phase in Polycrystalline Chiral β-Mn Type Co₇Zn₇Mn₆ Alloy
Authors: Pardeep, Yugandhar Bitla, A. K. Patra, G. A. Basheed
Abstract:
Topologically protected nanosized particle-like swirling spin textures, “skyrmions,” have been observed in various ferromagnets with chiral crystal structures such as MnSi, FeGe, and Cu₂OSeO₃; however, the magnetic ordering in these systems takes place at very low temperatures. For skyrmion-based spintronic devices, the skyrmion phase needs to be stabilized over a wide temperature-field (T-H) region. The equilibrium skyrmion phase (SkX) in Co₇Zn₇Mn₆ alloy exists in a narrow T-H region just below the transition temperature (TC ~ 215 K) and can be quenched by field cooling as a metastable skyrmion phase (MSkX) below the SkX region. To realize robust MSkX at 110 K, field sweep ac susceptibility χ(H) measurements were performed after zero field cooling (ZFC) and field cooling (FC). In the ZFC process, the sample was cooled from 320 K to 110 K in zero applied magnetic field, and then the field sweep measurement was performed (up to 2 T) in the positive direction (black curve). The real part of the ac susceptibility (χ′(H)) at 110 K in the positive field direction after ZFC confirms the helical-to-conical phase transition at a low field HC₁ (= 42 mT) and the conical-to-ferromagnetic (FM) transition at a higher field HC₂ (= 300 mT). After ZFC, FC measurements were performed: the sample was initially cooled in zero field from 320 K to 206 K and then field cooled in the presence of a 15 mT field down to 110 K. After the FC process, isothermal χ(H) was measured in the positive (+H, red curve) and negative (-H, blue curve) field directions with increasing and decreasing field up to 2 T. The hysteresis behavior in χ′(H) measured after the ZFC and FC processes indicates the stabilization of MSkX at 110 K, in close agreement with the literature. The asymmetry between the field-increasing curves measured after the FC process on both sides also confirms the stabilization of MSkX. On returning from the high-field polarized FM state, the helical state below HC₁ is destroyed and only the conical state is observed. Thus, the robust MSkX state is stabilized below its SkX phase over a much wider T-H region by FC in polycrystalline Co₇Zn₇Mn₆ alloy.
Keywords: skyrmions, magnetic susceptibility, metastable phases, topological phases
Procedia PDF Downloads 103
405 River Habitat Modeling for the Entire Macroinvertebrate Community
Authors: Pinna Beatrice, Laini Alex, Negro Giovanni, Burgazzi Gemma, Viaroli Pierluigi, Vezza Paolo
Abstract:
Habitat models rarely consider macroinvertebrates as ecological targets in rivers. Available approaches mainly focus on single macroinvertebrate species and do not address the ecological needs and functionality of the entire community. This research aimed to provide an approach to model the habitat of the whole macroinvertebrate community. The approach is based on the recently developed Flow-T index together with a Random Forest (RF) regression, which is employed to apply the Flow-T index at the mesohabitat scale. Using datasets gathered from both field data collection and 2D hydrodynamic simulations, the model was calibrated on the Trebbia River (2019 campaign) and then validated on the Trebbia, Taro, and Enza Rivers (2020 campaign). The three rivers are characterized by braided morphology, gravel riverbeds, and summer low flows. The RF model selected 12 mesohabitat descriptors as important for the macroinvertebrate community. These descriptors belong to different frequency classes of water depth, flow velocity, substrate grain size, and connectivity to the main river channel. The cross-validation R² coefficient (R²cv) of the training dataset is 0.71 for the Trebbia River (2019), whereas the R² coefficient for the validation datasets (Trebbia, Taro, and Enza Rivers, 2020) is 0.63. The agreement between the simulated results and the experimental data shows sufficient accuracy and reliability. The outcomes of the study reveal that the model can identify the ecological response of the macroinvertebrate community to possible flow regime alterations and river morphological modifications. Lastly, the proposed approach allows the MesoHABSIM methodology, widely used for fish habitat assessment, to be extended to a different ecological target community. Further applications of the approach relate to flow design in both perennial and non-perennial rivers, including river reaches in which fish fauna is absent.
Keywords: ecological flows, macroinvertebrate community, mesohabitat, river habitat modeling
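A minimal sketch of the Random Forest regression step described above, assuming a table of mesohabitat descriptors (shares of depth, velocity, grain size, and connectivity classes) with a Flow-T-like suitability value per mesohabitat; all descriptor names and data here are hypothetical placeholders.

```python
# Minimal sketch: RF regression of a habitat index on mesohabitat descriptors.
# Descriptor names and data are hypothetical placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # number of surveyed mesohabitats
X = rng.random((n, 4))  # e.g. shares of depth/velocity/substrate/connectivity classes
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(n)  # synthetic Flow-T-like target

rf = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="r2")   # cross-validated R²
rf.fit(X, y)

print(f"Mean cross-validation R²: {scores.mean():.2f}")
# Descriptor importance ranking, analogous to selecting the 12 key descriptors
for name, imp in zip(["depth", "velocity", "grain_size", "connectivity"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```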
Procedia PDF Downloads 94
404 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives using theoretical descriptors computed from the chemical structure. The fundamental aim of the reported investigation is to relate molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin layer chromatography (TLC) on silica gel impregnated with paraffin oil, with ethanol-water mobile phases (φ = 0.5-0.8, v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (³χ) in a second-degree polynomial equation. Numerical values of the correlation coefficient (r = 0.915), Fisher's value (F = 28.34), and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. In order to test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The parameters of the internal cross-validation analysis (r²CV = 0.79, r²adj = 0.81, PRESS = 1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. A multivariate classification technique, hierarchical cluster analysis (HCA), was applied in order to group molecules according to their molecular connectivity indices. HCA is a descriptive statistical method and is frequently used for classification, an important area of data processing. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters in which the compounds were grouped according to the number of atoms in the molecule. This is in agreement with the fact that these descriptors were calculated on the basis of the number of atoms in the molecules of the investigated s-triazine derivatives.
Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors
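The second-degree polynomial fit and leave-one-out validation described above can be sketched as follows; the connectivity-index values and retention data are synthetic stand-ins, not the paper's measurements.

```python
# Sketch: second-degree polynomial QSRR model RM0 = a + b*(3χ) + c*(3χ)²
# with leave-one-out cross-validation. Data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_predict

chi3 = np.linspace(1.0, 4.0, 14).reshape(-1, 1)          # third-order connectivity index
rm0 = 0.5 + 0.8 * chi3.ravel() - 0.1 * chi3.ravel()**2   # synthetic RM0 values
rm0 += np.random.default_rng(1).normal(0, 0.05, 14)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(chi3, rm0)

# Leave-one-out cross-validation: each compound predicted from the other 13
pred = cross_val_predict(model, chi3, rm0, cv=LeaveOneOut())
press = np.sum((rm0 - pred) ** 2)          # predictive residual sum of squares
r2_cv = 1 - press / np.sum((rm0 - rm0.mean()) ** 2)
print(f"PRESS = {press:.2f}, r2_cv = {r2_cv:.2f}")
```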
Procedia PDF Downloads 393
403 Developing Scaffolds for Tissue Regeneration using Low Temperature Plasma (LTP)
Authors: Komal Vig
Abstract:
Cardiovascular disease (CVD)-related deaths occur in 17.3 million people globally each year, accounting for 30% of all deaths worldwide, with the annual incidence of deaths predicted to reach 23.3 million globally by 2030. Autologous bypass grafts remain an important therapeutic option for the treatment of CVD, but the poor quality of the donor patient's blood vessels, the invasiveness of the resection surgery, and postoperative movement restrictions create issues. The present study aimed to improve the endothelialization of the intimal surface of grafts by using low temperature plasma (LTP) to increase cell attachment and proliferation. Polytetrafluoroethylene (PTFE) was treated with LTP. Air was used as the feed gas, and the pressure in the plasma chamber was kept at 800 mTorr. Scaffolds were also modified with gelatin and collagen by the dipping method. Human umbilical vein endothelial cells (HUVEC) were plated on the developed scaffolds, and cell proliferation was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay and by microscopy. mRNA expression levels of different cell markers were investigated using quantitative real-time PCR (qPCR). XPS confirmed the introduction of oxygenated functionalities from LTP. HUVEC cells showed 80% seeding efficiency on the scaffold. Microscopic and MTT assays indicated an increase in cell viability on LTP-treated scaffolds, especially when treated with gelatin or collagen, compared to untreated scaffolds. Gene expression studies show enhanced expression of the cell adhesion marker integrin-α5 gene after LTP treatment. LTP-treated scaffolds exhibited better cell proliferation and viability compared to untreated scaffolds. Protein treatment of the scaffold increased cell proliferation. Based on our initial results, more scaffold alternatives will be developed and investigated for cell growth and vascularization studies. Acknowledgments: This work is supported by the NSF EPSCoR RII-Track-1 Cooperative Agreement OIA-2148653.
Keywords: LTP, HUVEC cells, vascular graft, endothelialization
Procedia PDF Downloads 71
402 Reliability and Maintainability Optimization for Aircraft’s Repairable Components Based on Cost Modeling Approach
Authors: Adel A. Ghobbar
Abstract:
The airline industry continuously faces the challenge of how to safely increase the service life of aircraft with limited maintenance budgets. Operators are looking for the most qualified maintenance providers of aircraft components, offering the finest customer service. The component owner and maintenance provider offer an Abacus agreement (Aircraft Component Leasing) to increase the efficiency and productivity of the customer service. To improve customer service, the current focus on No Fault Found (NFF) units must shift to a focus on Early Failure (EF) units. Since EF units have a significant impact on customer satisfaction, the reliability of EF units needs to be increased at minimal cost, which is the goal of this paper. By identifying the reliability of early failure (EF) units relative to No Fault Found (NFF) units, and in particular by combining root cause analysis with an integrated cost analysis of EF units using a failure mode analysis tool and a cost model, a set of EF maintenance improvements is obtained. The data used for the investigation of the EF units were obtained from the Pentagon system, an Enterprise Resource Planning (ERP) system used by Fokker Services. The Pentagon system monitors components that need to be repaired from Fokker aircraft owners, the Abacus exchange pool, and commercial customers. The data were selected on several criteria: time span, failure rate, and cost driver. After the selected data had been acquired, the failure mode and root cause analysis of EF units was initiated. The failure analysis approach tool was implemented, resulting in the proposed failure solution for EF. This leads to specific EF maintenance improvements, which can be set up to decrease the number of EF units and, as a result, increase reliability. The investigated EFs, over a time period of ten years, showed a significant reliability impact of 32% on the total of 23,339 unscheduled failures, since the EFs comprise almost one-third of the entire population.
Keywords: supportability, no fault found, FMEA, early failure, availability, operational reliability, predictive model
Procedia PDF Downloads 127
401 Thermodynamic Analysis and Experimental Study of Agricultural Waste Plasma Processing
Authors: V. E. Messerle, A. B. Ustimenko, O. A. Lavrichshev
Abstract:
A large amount of manure and its irrational use negatively affect the environment. Compared with biomass fermentation, plasma processing of manure makes it possible to intensify the process of obtaining fuel gas, which consists mainly of synthesis gas (CO + H₂), and to increase plant productivity by 150-200 times. This is achieved due to the high temperature in the plasma reactor and a multiple reduction in waste processing time. This paper examines the plasma processing of biomass using the example of dried mixed animal manure (dung with a moisture content of 30%). Characteristic composition of dung, wt.%: Н₂О – 30, С – 29.07, Н – 4.06, О – 32.08, S – 0.26, N – 1.22, P₂O₅ – 0.61, K₂O – 1.47, СаО – 0.86, MgO – 0.37. The thermodynamic code TERRA was used to numerically analyze dung plasma gasification and pyrolysis. Plasma gasification and pyrolysis of dung were analyzed in the temperature range 300-3,000 K and at a pressure of 0.1 MPa for the following thermodynamic systems: 100% dung + 25% air (plasma gasification) and 100% dung + 25% nitrogen (plasma pyrolysis). Calculations were conducted to determine the composition of the gas phase, the degree of carbon gasification, and the specific energy consumption of the processes. At an optimum temperature of 1,500 K, which provides both complete gasification of dung carbon and the maximum yield of combustible components (99.4 vol.% during dung gasification and 99.5 vol.% during pyrolysis), as well as decomposition of toxic compounds of furan, dioxin, and benz(a)pyrene, the following composition of combustible gas was obtained, vol.%: СО – 29.6, Н₂ – 35.6, СО₂ – 5.7, N₂ – 10.6, H₂O – 17.9 (gasification) and СО – 30.2, Н₂ – 38.3, СО₂ – 4.1, N₂ – 13.3, H₂O – 13.6 (pyrolysis). The specific energy consumption of gasification and pyrolysis of dung at 1,500 K is 1.28 and 1.33 kWh/kg, respectively. An installation with a DC plasma torch with a rated power of 100 kW and a plasma reactor with a dung capacity of 50 kg/h was used for the dung processing experiments. The dung was gasified in an air (or, during pyrolysis, nitrogen) plasma jet, which provided a mass-average temperature in the reactor volume of at least 1,600 K. The organic part of the dung was gasified, and the inorganic part of the waste was melted. For pyrolysis and gasification of dung, the specific energy consumption was 1.5 kWh/kg and 1.4 kWh/kg, respectively. The maximum temperature in the reactor reached 1,887 K. At the outlet of the reactor, a gas of the following composition was obtained, vol.%: СO – 25.9, H₂ – 32.9, СO₂ – 3.5, N₂ – 37.3 (pyrolysis in nitrogen plasma); СO – 32.6, H₂ – 24.1, СO₂ – 5.7, N₂ – 35.8 (air plasma gasification). The specific heat of combustion of the combustible gas formed during pyrolysis and plasma-air gasification of agricultural waste is 10,500 and 10,340 kJ/kg, respectively. Comparison of the integral indicators of dung plasma processing showed satisfactory agreement between calculation and experiment.
Keywords: agricultural waste, experiment, plasma gasification, thermodynamic calculation
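For orientation, the heating value of a syngas mixture like the ones reported above can be estimated from the volumetric fractions of its combustible components. The sketch below does this with generic handbook lower heating values for CO and H₂ (roughly 12.6 and 10.8 MJ/Nm³); note the result is on a volumetric basis, so it is not directly comparable to the paper's per-kilogram figures.

```python
# Sketch: volumetric lower heating value (LHV) of a syngas mixture from its
# composition. Component LHVs are generic literature values in MJ/Nm3;
# the compositions are the experimental vol.% reported in the abstract.
LHV = {"CO": 12.6, "H2": 10.8}  # MJ per normal cubic metre (approximate)

def mixture_lhv(composition_vol_percent):
    """LHV of the gas mixture; non-combustible species contribute zero."""
    return sum(LHV.get(gas, 0.0) * frac / 100.0
               for gas, frac in composition_vol_percent.items())

gasification = {"CO": 32.6, "H2": 24.1, "CO2": 5.7, "N2": 35.8}
pyrolysis = {"CO": 25.9, "H2": 32.9, "CO2": 3.5, "N2": 37.3}

for name, comp in [("air plasma gasification", gasification),
                   ("nitrogen plasma pyrolysis", pyrolysis)]:
    print(f"{name}: ~{mixture_lhv(comp):.1f} MJ/Nm3")
```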
Procedia PDF Downloads 40
400 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
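A minimal sketch of the model-averaging idea described in this abstract, using the three algorithm families named (logistic regression, random forest, neural network) in a soft-voting ensemble; the data, feature meanings, and labels here are synthetic placeholders, not the EPA/RP4 datasets.

```python
# Sketch: average (soft-vote) the three named model families for air quality
# classification. Data and labels are synthetic placeholders, not EPA data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 5))  # e.g. season, weekday flag, forecast temp/wind, past PM2.5
y = (X[:, 2] + 0.5 * X[:, 4] > 1.0).astype(int)  # synthetic good/bad air quality label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
print(f"Ensemble accuracy: {ensemble.score(X_te, y_te):.2f}")
```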
Procedia PDF Downloads 125
399 Development of Interaction Diagram for Eccentrically Loaded Reinforced Concrete Sandwich Walls with Different Design Parameters
Authors: May Haggag, Ezzat Fahmy, Mohamed Abdel-Mooty, Sherif Safar
Abstract:
Sandwich sections have a very complex nature due to the variability of behavior of the different materials within the section. The cracking, crushing, and yielding capacities of the constituent materials impose high complexity on the section, and slippage between the different layers adds further complexity to its behavior. Conventional methods implemented in current industrial guidelines do not account for the above complexities. Thus, a thorough study is needed to understand the true behavior of sandwich panels and thereby increase the ability to use them effectively and efficiently. The purpose of this paper is to conduct a numerical investigation using ANSYS software of the structural behavior of a sandwich wall section under eccentric loading. The sandwich walls studied herein are composed of two RC faces, a foam core, and linking shear connectors. The faces are modeled using solid elements, and the reinforcement together with the connectors is modeled using link elements. The analysis conducted herein is a nonlinear static analysis incorporating material nonlinearity, cracking and crushing of concrete, and yielding of steel. The model is validated by comparing it to test results in the literature. After validation, the model is used to establish an extensive parametric analysis to investigate the effect of three key parameters on the axial force-bending moment interaction diagram of the walls. These parameters are the concrete compressive strength, face thickness, and number of shear connectors. Furthermore, the results of the parametric study are used to predict a coefficient α that links the interaction diagram of a solid wall to that of a sandwich wall. The equation for α is derived from the parametric study data using regression analysis. The predicted α was used to construct the interaction diagram of the investigated wall, and the results were compared with the ANSYS results and showed good agreement.
Keywords: sandwich walls, interaction diagrams, numerical modeling, eccentricity, reinforced concrete
Procedia PDF Downloads 403
398 A Molecular Dynamic Simulation Study to Explore Role of Chain Length in Predicting Useful Characteristic Properties of Commodity and Engineering Polymers
Authors: Lokesh Soni, Sushanta Kumar Sethi, Gaurav Manik
Abstract:
This work attempts to use molecular simulations to create equilibrated structures of a range of commercially used polymers. Equilibrated structures generated for polyvinyl acetate (isotactic), polyvinyl alcohol (atactic), polystyrene, polyethylene, polyamide 66, polydimethylsiloxane, polycarbonate, polyethylene oxide, polyamide 12, natural rubber, polyurethane, polycarbonate (bisphenol-A), and polyethylene terephthalate are employed to estimate the chain length that correctly predicts the chain parameters and properties. Further, the equilibrated structures are used to predict properties such as density, solubility parameter, cohesive energy density, surface energy, and the Flory-Huggins interaction parameter. The simulated densities for polyvinyl acetate, polyvinyl alcohol, polystyrene, polypropylene, and polycarbonate are 1.15 g/cm³, 1.125 g/cm³, 1.02 g/cm³, 0.84 g/cm³, and 1.223 g/cm³, respectively, in good agreement with the available literature estimates. However, the critical numbers of repeating units, i.e., the degrees of polymerization after which the solubility parameter showed saturation, were 15, 20, 25, 10, and 20, respectively. This also indicates that properties that dictate the miscibility of two or more polymers in their blends are strongly dependent on the chosen polymer and its characteristic properties. An attempt has been made to correlate such properties with polymer properties like Kuhn length, free volume, and the energy term, which play a vital role in predicting the mentioned properties. These results help us to screen and propose a useful library that may be used by research groups in estimating polymer properties using molecular simulations of chains with the predicted critical lengths. The library shall help obviate the need for researchers to spend effort finding the critical chain length needed for simulating the mentioned polymer properties.
Keywords: Kuhn length, Flory Huggins interaction parameter, cohesive energy density, free volume
Procedia PDF Downloads 193
397 Sustainable Zero Carbon Communities: The Role of Community-Based Interventions in Reducing Carbon Footprint
Authors: Damilola Mofikoya
Abstract:
Developed countries account for a large proportion of greenhouse gas emissions. In the last decade, countries including the United States and China have committed to cutting carbon emissions by signing the Paris Climate Agreement. However, carbon neutrality is a challenging issue to tackle at the country level because of the scale of the problem. To overcome this challenge, cities are at the forefront of these efforts. Many cities in the United States are taking strategic actions and proposing programs and initiatives focused on renewable energy, green transportation, reduced use of fossil fuel vehicles, etc. There have been concerns about the implications of those strategies and a lack of community engagement. This paper focuses on community-based efforts that help actualize the reduction of carbon footprint through sustained and inclusive action. Existing zero-carbon assessment tools are examined to understand the variables and indicators associated with zero-carbon goals. Based on a broad, systematic review of the literature on community strategies and existing zero-carbon assessment tools, a dashboard was developed to help simplify and demystify carbon neutrality goals at the community level. The literature sheds light on the key contributing factors responsible for the success of community efforts toward carbon neutrality. Stakeholder education is discussed as one of the strategies to help communities take action and generate momentum. Community-based efforts involving individuals and residents, such as reduction of food wastage, shopping preferences, transit mode choices, and healthy diets, play an important role in the context of zero-carbon initiatives. The proposed community-based dashboard emphasizes the importance of sustained, structured, and collective efforts at a communal scale. Finally, the present study discusses the relationship between life expectancy and quality of life and how it affects carbon neutrality in communities.
Keywords: carbon footprint, communities, life expectancy, quality of life
Procedia PDF Downloads 87
396 Unraveling the Complexities of Competitive Aggressiveness: A Qualitative Exploration in the Oil and Gas Industry
Authors: Salim Al Harthy, Alexandre A. Bachkirov
Abstract:
This study delves into the complexities of competitive aggressiveness in the oil and gas industry, focusing on the characteristics of the identified competitive actions. Current quantitative research on competitive aggressiveness lacks agreement on the connection between antecedents and outcomes, prompting a qualitative investigation. To address this gap, the research utilizes qualitative interviews with CEOs from Oman's oil and gas service industry to explore the dynamics of competitive aggressiveness. Using Noklenain's typology, the study categorizes and analyzes the identified actions, shedding light on the spectrum of competitive behaviors within the industry. Notably, actions predominantly fall under the "Bring about" and "Preserve" elements, with a notable absence in the "Forebear" and "Destroy" categories, possibly linked to the study's focus on service-oriented businesses. The study also explores the detectability of actions, revealing that "Bring about" actions are detectable, while those in "Preserve" and "Suppress" are not. This challenges conventional definitions of competitive aggressiveness, suggesting that not all actions are readily detectable despite being considered competitive. The presence of non-detectable actions introduces complexity to measurement methods that rely on visible empirical data. Moreover, the study contends that companies can adopt an aggressive competitive approach without directly challenging rivals. This challenges traditional views and emphasizes the innovative and entrepreneurial aspects of actions not explicitly aimed at competitors. By not revealing strategic intentions, such actions put rivals at a disadvantage, underscoring the need for a nuanced understanding of competitive aggressiveness. In summary, this study addresses the lack of consensus in the existing literature regarding the relationship between antecedents and outcomes in competitive aggressiveness. It reveals a spectrum of detectable and undetectable actions, posing challenges for measurement and emphasizing the need for alternative methods to assess undetectable actions in competitive behavior. This research contributes to a more nuanced understanding of competitive aggressiveness, acknowledging the diverse actions shaping a company's strategic positioning in dynamic business environments.
Keywords: competitive aggressiveness, qualitative exploration, Noklenain's typology, oil and gas industry
Procedia PDF Downloads 63
395 Performance of AquaCrop Model for Simulating Maize Growth and Yield Under Varying Sowing Dates in Shire Area, North Ethiopia
Authors: Teklay Tesfay, Gebreyesus Brhane Tesfahunegn, Abadi Berhane, Selemawit Girmay
Abstract:
Adjusting the sowing date of a crop at a particular location under a changing climate is an essential management option to maximize crop yield. However, determining the optimum sowing date for rainfed maize production through field experimentation requires repeated trials over many years under different weather conditions and crop management regimes. To avoid such long-term experimentation, crop models such as AquaCrop are useful. Therefore, the overall objective of this study was to evaluate the performance of the AquaCrop model in simulating maize productivity under varying sowing dates. A field experiment was conducted for two consecutive cropping seasons, deploying four maize sowing dates in a randomized complete block design with three replications. The input data required to run the model are stored as climate, crop, soil, and management files in the AquaCrop database and adjusted through the user interface. Observed data from separate field experiments were used to calibrate and validate the model. Based on the calibrated parameters, the AquaCrop model was validated for its performance in simulating the green canopy and aboveground biomass of maize for the varying sowing dates. The results showed good agreement (overall R² = , EF = , d = , RMSE = ) between measured and simulated values of canopy cover and biomass yields. Considering the overall values of the statistical test indicators, the model successfully predicted maize growth and biomass yield, making it a valuable tool for decision-making. Hence, this calibrated and validated model is suggested for determining the optimum maize sowing date for climate and soil conditions similar to those of the study area, instead of conducting long-term experimentation.
Keywords: AquaCrop model, calibration, validation, simulation
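The goodness-of-fit statistics named in the abstract (R², model efficiency EF, index of agreement d, and RMSE) are standard in crop model evaluation; a small sketch of how they are computed from paired observed/simulated values follows, with made-up example numbers since the abstract does not report its values.

```python
# Sketch: standard crop-model evaluation statistics for paired observed (O)
# and simulated (S) values. The example values are made up for illustration.
import numpy as np

O = np.array([2.1, 3.4, 4.8, 5.9, 6.5])  # observed biomass, e.g. t/ha
S = np.array([2.0, 3.6, 4.5, 6.1, 6.9])  # simulated biomass

rmse = np.sqrt(np.mean((S - O) ** 2))
# Nash-Sutcliffe model efficiency EF: 1 is perfect, <0 is worse than the mean
ef = 1 - np.sum((O - S) ** 2) / np.sum((O - O.mean()) ** 2)
# Willmott index of agreement d (0..1)
d = 1 - np.sum((S - O) ** 2) / np.sum((np.abs(S - O.mean()) + np.abs(O - O.mean())) ** 2)
r2 = np.corrcoef(O, S)[0, 1] ** 2

print(f"RMSE = {rmse:.2f}, EF = {ef:.2f}, d = {d:.2f}, R2 = {r2:.2f}")
```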
Procedia PDF Downloads 67
394 Criteria for Good Governance in Georgian Defense Sector: Standards and Principles
Authors: Vephkhvia Grigalashvili
Abstract:
This paper provides an overview of the criteria for good governance in the Georgian defense sector and the scientific outcomes of comparative research. Respect for good governance and its realization in the Georgian national defense sector represent a fundamental institutional necessity as well as the country's politico-legal obligation within the framework of the existing collaboration mechanisms with NATO (especially the Building Integrity (BI) Programme) and the Association Agreement between the EU and Georgia. Furthermore, good governance is considered a criterion for measuring democracy in the country's Euro-Atlantic integration process. Accordingly, the integration and further development of contemporary approaches to good governance in the Georgian defense management model is a burning issue for the country. The assessment of the country's existing model, the identification of defects, and the determination of the course of institutional reforms, in a format of mutual comparison with the good governance mechanisms of NATO and/or EU member Eastern European or Baltic countries positively assessed by international organizations, are considered preconditions for its effective realization. The scientific aims of this study are: (a) to conduct a comparative analysis of Georgian national principles and the generalized standards of NATO and/or EU member Eastern European and Baltic countries in the following segments of good governance: open governance; anti-corruption policy; conflict of interests; integrity; internal and external control bodies; (b) to formulate theoretical and practical recommendations on reforms to be implemented in the country's national defense sector. As the research reveals, although the institutional and legal pillars of good governance in the Georgian defense sector are generally in compliance with international principles, the quality of implementation of good governance norms still remains an area that needs further development through raising the awareness of public servants and the community.
Keywords: anti-corruption policy within Georgian defense governance, conflict of interests within Georgian defense governance, good governance in Georgian defense sector, principles of integrity in Georgian defense management
Procedia PDF Downloads 162
393 Simulation of Turbulent Flow in Channel Using Generalized Hydrodynamic Equations
Authors: Alex Fedoseyev
Abstract:
This study explores the Generalized Hydrodynamic Equations (GHE) for the simulation of turbulent flows. The GHE were derived from the Generalized Boltzmann Equation (GBE) by Alexeev (1994). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions (Alexeev, 1994). The GHE contain new terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These new terms have a timescale multiplier τ, and the GHE reduce to the NSE when τ is zero. The nondimensional τ is the product of the Reynolds number and the squared length scale ratio, τ = Re(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The turbulence phenomenon is not well understood and is not described by the NSE; an additional one or two equations are usually required for a turbulence model, which may have to be tuned for specific problems. We show that, in the case of the GHE, no additional turbulence model is needed, and the turbulent velocity profile is obtained directly from the GHE. Two-dimensional turbulent channel and circular pipe flows were investigated using numerical solutions of the GHE for several cases. The solutions are compared with the experimental data for circular pipes and 2D channels by Nikuradse (1932, Prandtl Lab), Hussain and Reynolds (1975), Wei and Willmarth (1989), and Van Doorne (2007), with the theory of Wosnik, Castillo, and George (2000), and with the relevant experiments on the Superpipe setup at Princeton (data by Zagarola (1996) and Zagarola and Smits (1998)); the Reynolds numbers range from Re = 7,200 to Re = 960,000. The numerical solution data compared well with the experimental data, as well as with the approximate analytical solution for turbulent flow in a channel (Fedoseyev, 2023). The obtained results confirm that the Alexeev generalized hydrodynamic theory (GHE) is in good agreement with the experiments for turbulent flows. The proposed approach is limited to 2D and 3D axisymmetric channel geometries. Further work will extend this approach to channels with square and rectangular cross-sections.
Keywords: comparison with experimental data, generalized hydrodynamic equations, numerical solution, turbulent boundary layer, turbulent flow in channel
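To make the scale of the extra GHE terms concrete, the sketch below evaluates the nondimensional multiplier τ = Re·(l/L)² from the abstract for the quoted Reynolds number range; the length-scale ratios are illustrative guesses, since the abstract does not give specific l/L values.

```python
# Sketch: evaluate the GHE timescale multiplier tau = Re * (l/L)^2 from the
# abstract. The l/L ratios below are illustrative assumptions only.
def tau(reynolds, length_ratio):
    """Nondimensional multiplier of the GHE fluctuation terms."""
    return reynolds * length_ratio**2

for re in (7_200, 960_000):           # Reynolds number range quoted in the abstract
    for l_over_L in (1e-3, 1e-2):     # assumed Kolmogorov-to-hydrodynamic scale ratios
        print(f"Re={re:>7}, l/L={l_over_L:.0e}: tau={tau(re, l_over_L):.3g}")
# tau -> 0 recovers the Navier-Stokes equations; larger tau means the
# fluctuation terms are non-negligible.
```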
Procedia PDF Downloads 65
392 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data
Authors: M. R. Akbari, H. Yousefnia, E. Mirrezaei
Abstract:
Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters often used to specify proton range are the water equivalent thickness (WET) and the water equivalent ratio (WER). Since WER values for a specific material are nearly constant at different proton energies, WER is the more useful parameter to compare. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS), and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical and experimental data and with simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials was simulated. A typical mono-energetic proton pencil beam over the wide range of incident energies usually applied in proton therapy (50 MeV to 225 MeV) impinges normally on the phantom. In order to obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In the TRIM calculations, the type of projectile, energy and angle of incidence, and type and thickness of the target material must be defined. The mode 'detailed calculation with full damage cascades' was selected for proton transport in the target material. The biggest differences in WER values between the codes were 3.19%, 1.9%, and 0.67% for Al, PMMA, and PS, respectively. In Al and PMMA, the biggest differences between each code and the experimental data were 1.08%, 1.26%, 2.55%, 0.94%, 0.77%, and 0.95% for SEICS, FLUKA, and SRIM, respectively. FLUKA and SEICS had the greatest agreement with the available experimental data in this study (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively). It is concluded that the FLUKA and TRIM codes are capable of simulating Bragg curves and calculating WER values in the studied materials. They can also predict the Bragg peak location and range of proton beams with acceptable accuracy.
Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations
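In range terms, WER is commonly taken as the ratio of the proton range in water to the range in the material (equivalently, WET = t·WER for a slab of thickness t). The sketch below computes WER from such range pairs; the range numbers are rough illustrative values at a single beam energy, not the paper's results.

```python
# Sketch: WER as the ratio of proton range in water to range in the material,
# and WET of a slab as thickness * WER. Range values are rough illustrative
# numbers, not taken from the paper.
def wer(range_water_cm, range_material_cm):
    return range_water_cm / range_material_cm

def wet(thickness_cm, wer_value):
    return thickness_cm * wer_value

# Hypothetical CSDA-style ranges at one beam energy (illustrative only)
r_water, r_pmma, r_al = 17.6, 15.2, 8.4   # cm
for name, r in (("PMMA", r_pmma), ("Al", r_al)):
    w = wer(r_water, r)
    print(f"{name}: WER ~ {w:.2f}, WET of a 2 cm slab ~ {wet(2.0, w):.2f} cm of water")
```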
Procedia PDF Downloads 323
391 Designing Nickel Coated Activated Carbon (Ni/AC) Based Electrode Material for Supercapacitor Applications
Authors: Zahid Ali Ghazi
Abstract:
Supercapacitors (SCs) have emerged as promising energy storage devices because of their fast charge-discharge characteristics and high power densities. In the current study, a simple approach is used to coat activated carbon (AC) with a thin layer of nickel (Ni) by an electroless deposition process to enhance the electrochemical performance of the SC. The synergistic combination of the large surface area and high electrical conductivity of the AC with the pseudocapacitive behavior of the metallic Ni has great potential to overcome the limitations of traditional SC materials. First, the materials were characterized using X-ray diffraction (XRD) for crystallography, scanning electron microscopy (SEM) for surface morphology, and energy dispersive X-ray (EDX) analysis for elemental composition. The electrochemical performance of the nickel-coated activated carbon (Ni/AC) was systematically evaluated through various techniques, including galvanostatic charge-discharge (GCD), cyclic voltammetry (CV), and electrochemical impedance spectroscopy (EIS). The GCD results revealed that Ni/AC has a higher specific capacitance (1559 F/g) than bare AC (222 F/g) at 1 A/g current density in a 2 M KOH electrolyte. Even at a higher current density of 20 A/g, the Ni/AC showed a high capacitance of 944 F/g, compared to 77 F/g for AC. The specific capacitance (1318 F/g) calculated from CV measurements for Ni/AC at 10 mV/s was in close agreement with the GCD data. Furthermore, the bare AC exhibited a low energy density of 15 Wh/kg at a power density of 356 W/kg, whereas an energy density of 111 Wh/kg at a power density of 360 W/kg was achieved by the Ni/AC-850 electrode, which also demonstrated a long cycle life with 94% capacitance retention over 50,000 charge/discharge cycles at 10 A/g. In addition, the EIS study disclosed that the Rs and Rct values of the Ni/AC electrodes were much lower than those of bare AC. The superior performance of Ni/AC is mainly attributed to the abundant redox-active sites, large electroactive surface area, and corrosion resistance of Ni. We believe that this study will provide new insights into the controlled coating of ACs and other porous materials with metals for developing high-performance SCs and other energy storage devices.
Keywords: supercapacitor, cyclic voltammetry, coating, energy density, activated carbon
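The GCD-derived figures quoted above follow from the standard relations C = I·Δt/(m·ΔV) for specific capacitance and E = C·ΔV²/(2·3.6) for gravimetric energy density in Wh/kg. The sketch below applies them to an illustrative discharge; the electrode mass, discharge time, and voltage window are made-up values, since the abstract does not report them.

```python
# Sketch: standard GCD relations for a supercapacitor electrode.
# C = I * dt / (m * dV)        specific capacitance [F/g]
# E = C * dV^2 / (2 * 3.6)     energy density [Wh/kg] (3600 J per Wh)
# P = E * 3600 / dt            power density [W/kg]
# The mass, discharge time and voltage window below are made-up values.
def gcd_metrics(current_a, discharge_time_s, mass_g, voltage_window_v):
    c = current_a * discharge_time_s / (mass_g * voltage_window_v)
    e = c * voltage_window_v**2 / (2 * 3.6)
    p = e * 3600 / discharge_time_s
    return c, e, p

c, e, p = gcd_metrics(current_a=0.002,        # 1 A/g on a hypothetical 2 mg electrode
                      discharge_time_s=1560,  # hypothetical discharge time
                      mass_g=0.002,
                      voltage_window_v=1.0)
print(f"C = {c:.0f} F/g, E = {e:.0f} Wh/kg, P = {p:.0f} W/kg")
```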
Procedia PDF Downloads 62
390 Facilitating Factors for the Success of Mobile Service Providers in Bangkok Metropolitan
Authors: Yananda Siraphatthada
Abstract:
The objectives of this research were to study the level of the influencing factors (leadership, supply chain management, innovation, and competitive advantage) and their effect on the business success of mobile phone system service providers in Bangkok Metropolitan. The research combined quantitative and qualitative approaches. In the quantitative part, questionnaires were used to collect data from 331 mobile service shop managers franchised by AIS, Dtac, and TrueMove. The shop managers were randomly stratified and proportionally allocated into subgroups according to the number of providers in each network. In the qualitative part, in-depth interviews were conducted with 6 mobile service providers/managers of Telewiz, Dtac, and TrueMove shops, and their agreement or disagreement with the questionnaire findings was examined using the content analysis method. Descriptive statistics, including frequency, percentage, means, and standard deviation, were employed, and the Structural Equation Model (SEM) was used as a tool for data analysis. The content analysis method was applied to identify key patterns emerging from the interview responses. The two data sets were brought together for comparison and contrast, providing triangulation to enrich the interpretation of the results. The analysis revealed that the influencing factors (leadership, innovation management, supply chain management, and business competitiveness) had an impact at a great level, while innovation and the business success (both financial and non-financial) of the mobile phone system service providers in Bangkok Metropolitan were at the highest level. Moreover, the business influencing factors and competitive advantages of the mobile system service providers (leadership, supply chain management, innovation management, business advantages, and business success) were statistically significant at the .01 level, which corresponded to the data from the interviews.
Keywords: mobile service providers, facilitating factors, Bangkok Metropolitan, business success
Procedia PDF Downloads 348
389 Elderly Health Care Process by Community Participation: A Sub-District in the Lower Northern Region of Thailand
Authors: Amaraporn Puraya, Roongtiva Boonpracom, Somsak Thojampa, Sirikanok Klankhajhon, Kittisak Kumpeera
Abstract:
The objective of this qualitative research was to study the elderly health care process by community participation. Data were collected by qualitative research methods, including secondary data study, observation, in-depth interviews, and focus group discussions, and analyzed by content analysis, reflection, and review of information. The results pointed out that the elderly health care process by community participation consists of two parts: the community participation development process in elderly health care, and the outcomes of that process. The community participation development process consists of four steps: 1) building the leadership team, an important social capital of the community, starting from searching for both formal and informal leaders, giving the opportunity for public participation, and creating clear agreements defining roles, duties, and responsibilities; 2) investigating the problems and needs of the community; 3) designing elderly health care activities under the concept of self-care potential development of the elderly, through participation in community forums and meetings to exchange knowledge with common goals, plans, and operations; and 4) developing a sustainable health care agreement at the local level, starting from opening communication channels to create awareness and participation in various activities at both individual and group levels, as well as pushing activities/projects into the community development plan consistent with local administration policy. The outcomes of the participation development process were as follows: 1) the elderly were integrated into elderly health care activities/projects in the community managed by the elderly themselves; 2) the service system changed from a passive to a proactive one, focusing on health promotion rather than treating diseases or illnesses; 3) registered nurses and public health officers can provide care for the elderly with chronic illnesses through the elderly health care activities/projects, so that the elderly can better access services; and 4) the local government organization became the main mechanism driving the elderly health care process by community participation.
Keywords: elderly health care process, community participation, elderly, Thailand
Procedia PDF Downloads 212
388 Support for Privilege Based on Nationality in Switched-At-Birth Scenario
Authors: Anne Lehner, Mostafa Salari Rad, Jeremy Ginges
Abstract:
Many of life’s privileges (and burdens) are thrust on us at birth. Someone born white or male in the United States is also born with a set of advantages over someone born non-white or female. One aspect of privileges conferred by birth is that they are so entrenched in social institutions and social norms that, until they are robustly challenged, they can be seen as a moral good. While American society increasingly confronts privileges based on gender and race, other types of privilege, like one's nationality, receive less attention. The nationality one is born into can have enormous effects on one’s personal life, work opportunities, and health outcomes. Yet, we predicted that although most Americans would regard it as absurd to think that white people have a right to protect their privileges and 'way of life', they would regard it as obvious that Americans have a right to protect the American way of life and its associated privileges. In a preregistered study, we randomly presented 300 Americans with one of three 'privilege scales' in order to assess their agreement with certain statements. The domains of the privilege scales were nationality, race, and gender. Next, all participants completed the switched-at-birth task, which assesses one's tendency to essentialize nationality. We found that Americans are more approving of privilege based on nationality than of privilege based on gender and race. In addition, we found an interaction of condition with ideology, showing that conservatives are in general more approving of privilege of any kind than liberals are, and that they especially approve of privilege based on nationality. For the switched-at-birth task, we found that liberals and conservatives are equally willing to grant the child 100% American nationality. Whether or not one chose 100% is unrelated to the expressed approval of privilege based on nationality: one might hesitate to fully grant the child 100% American nationality in the task, yet disapprove of privilege based on nationality. This shows that, just as holders of privilege can be oblivious to their status within other social categories, like gender or race, we seem to detect the same blindness for privilege based on nationality. Liberals, despite showing relatively less support for privilege based on nationality than conservatives, still refused to acknowledge the child as having become 100% American, thereby denying the child the privileges that nationality potentially bestows.
Keywords: thought experiment, anti-immigrant attitudes, privilege of nationality, immigration, moral circles, psychology
Procedia PDF Downloads 132
387 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry
Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim
Abstract:
Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDC) by 2020, and each country will be assessed for its performance in reducing GHG. After that, they shall propose a reduction target higher than the previous target every five years. Therefore, an accurate method for calculating greenhouse gas emissions is essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO2 GHGs (CF4, NF3, N2O, SF6, and so on) are widely used in the fabrication processes of semiconductor manufacturing and the etching/deposition processes of display manufacturing. The Global Warming Potential (GWP) values of non-CO2 gases are much higher than that of CO2, which means they have a greater effect on global warming than CO2. Therefore, GHG calculation methods for the electronics industry are provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they will be discussed at the ISO/TC 146 meeting. As discussed earlier, being precise and accurate in calculating non-CO2 GHG emissions is becoming more important. Thus, this study aims to discuss the implications of the calculation methods by comparing the methods of the IPCC and the EPA. In conclusion, after analyzing the methods of the IPCC and EPA, the EPA method is more detailed, and it also provides a calculation for N2O. In the case of the default emission factors, the IPCC provides more conservative results than the EPA; the IPCC factor was developed for calculating national GHG emissions, while the EPA factor was developed specifically for the U.S., which means it was developed to address the environmental issues of the U.S. Semiconductor factory 'A' measured F-gas emissions according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and it was observed that its emission factor shows a higher DRE compared to the default DRE factors of the IPCC and EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factors (if possible) when reporting Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method
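To make the GWP weighting concrete, the sketch below converts fab F-gas usage to CO2-equivalent emissions with a simplified mass balance: emissions = gas used × (1 − process utilization) × (1 − DRE × abated fraction), then weighting by 100-year GWP. The structure loosely follows the tier-style accounting discussed above, but the utilization, DRE, abatement coverage, and usage numbers are illustrative assumptions; the GWP100 values are approximate IPCC AR5 figures.

```python
# Sketch: simplified CO2-equivalent accounting for fab F-gases.
# emissions_kg = used_kg * (1 - utilization) * (1 - dre * abated_fraction)
# co2e_t = emissions_kg * gwp / 1000
# Utilization, DRE, abatement coverage and usage are illustrative assumptions;
# GWP100 values are approximate IPCC AR5 figures.
GWP100 = {"CF4": 6630, "NF3": 16100, "SF6": 23500, "N2O": 265}

def co2e_tonnes(used_kg, utilization, dre, abated_fraction, gwp):
    emitted_kg = used_kg * (1 - utilization) * (1 - dre * abated_fraction)
    return emitted_kg * gwp / 1000.0

usage = {  # gas: (kg used per year, fraction consumed in process, DRE, abated share)
    "CF4": (1200, 0.20, 0.90, 0.6),
    "NF3": (3000, 0.85, 0.95, 0.8),
    "SF6": (400, 0.30, 0.90, 0.5),
}

total = 0.0
for gas, (kg, util, dre, abated) in usage.items():
    t = co2e_tonnes(kg, util, dre, abated, GWP100[gas])
    total += t
    print(f"{gas}: {t:,.0f} t CO2e")
print(f"Total: {total:,.0f} t CO2e")
```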
Procedia PDF Downloads 288
386 Influence of Ammonia Emissions on Aerosol Formation in Northern and Central Europe
Authors: A. Aulinger, A. M. Backes, J. Bieser, V. Matthias, M. Quante
Abstract:
High concentrations of particles pose a threat to human health. Thus, the legal maximum concentrations of PM10 and PM2.5 in ambient air have been steadily decreased over the years. In central Europe, the inorganic species ammonium sulphate and ammonium nitrate make up a large fraction of fine particles. Many studies investigate the influence of emission reductions of sulphur and nitrogen oxides on aerosol concentrations. Here, we focus on the influence of ammonia (NH3) emissions. While emissions of sulphur and nitrogen oxides are quite well known, ammonia emissions are subject to high uncertainty. This is due to uncertainty about the location, amount, and timing of fertilizer application in agriculture, and about the storage and treatment of manure from animal husbandry. For this study, we implemented a crop growth model into the SMOKE emission model. Depending on temperature, local legislation, and crop type, individual temporal profiles for fertilizer and manure application are calculated for each model grid cell. Additionally, the diffusion from soils and plants and the direct release from open and closed barns are determined. The emission data was used as input for the Community Multiscale Air Quality (CMAQ) model. Comparisons to observations from the EMEP measurement network indicate that the new ammonia emission module leads to better agreement between model and observations (for both ammonia and ammonium). Finally, the ammonia emission model was used to create emission scenarios. These include emissions based on future European legislation, as well as a dynamic evaluation of the influence of different agricultural sectors on particle formation. It was found that a reduction of ammonia emissions by 50% led to a 24% reduction of total PM2.5 concentrations during wintertime in the model domain. The observed reduction was mainly driven by reduced formation of ammonium nitrate. Moreover, emission reductions during winter had a larger impact than during the rest of the year.Keywords: ammonia, ammonia abatement strategies, CTM, seasonal impact, secondary aerosol formation
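As a hedged illustration of what a temperature- and legislation-dependent temporal profile might look like, the toy sketch below distributes an annual NH3 fertilizer-emission total over the days of a year, allowing application only inside a legal window and above a temperature threshold. The window dates, threshold, and weighting are invented for this example and are not taken from the SMOKE implementation.

```python
import numpy as np

def daily_nh3_profile(annual_total_kg, daily_mean_temp_c,
                      window=(60, 300), t_min_c=5.0):
    """Distribute an annual NH3 emission total over 365 days (toy model)."""
    days = np.arange(365)
    temp = np.asarray(daily_mean_temp_c, dtype=float)
    # Application allowed only inside the legal window and above T_min.
    allowed = (days >= window[0]) & (days <= window[1]) & (temp > t_min_c)
    # Weight emission potential by temperature excess, since
    # volatilization increases with temperature.
    weights = np.where(allowed, np.maximum(temp - t_min_c, 0.0), 0.0)
    if weights.sum() == 0.0:
        raise ValueError("no allowed application days")
    return annual_total_kg * weights / weights.sum()

# Example with a crude sinusoidal annual temperature cycle.
temps = 10.0 + 12.0 * np.sin(2.0 * np.pi * (np.arange(365) - 105) / 365.0)
profile = daily_nh3_profile(5.0e4, temps)
print(f"peak day {profile.argmax()}, peak emission {profile.max():.0f} kg")
```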
Procedia PDF Downloads 351385 FEM Simulation of Tool Wear and Edge Radius Effects on Residual Stress in High Speed Machining of Inconel718
Authors: Yang Liu, Mathias Agmell, Aylin Ahadi, Jan-Eric Stahl, Jinming Zhou
Abstract:
Tool wear and tool geometry have significant effects on the residual stresses in components produced by high-speed machining. In this paper, a Coupled Eulerian and Lagrangian (CEL) model is adopted to investigate the residual stress in high-speed machining of Inconel718 with a CBN170 cutting tool. The results show that the mesh with the smallest element size of 5 μm yields cutting forces and chip morphology in close agreement with the experimental data. Analyses of thermal loading and mechanical loading are performed to study the effect of segmented chip morphology on the machined surface topography and residual stress distribution. The effects of cutting edge radius and flank wear on residual stress formation and distribution in the workpiece were also investigated. It is found that the temperature within a 100 μm depth of the machined surface increases drastically due to greater frictional heat generation as the tool-workpiece contact area increases with a larger edge radius and flank wear. As depth increases further, the temperature drops rapidly in all cases due to the low thermal conductivity of Inconel718. Consequently, higher and deeper tensile residual stress is generated in the superficial layer. Furthermore, an increased depth of plastic deformation and compressive residual stress is noticed in the subsurface, which is attributed to the reduction of the yield strength under the thermal effect. Besides, the ploughing effect produced by a larger tool edge radius contributes more than flank wear. The magnitudes of the compressive residual stress caused by varying edge radius and flank wear follow opposite trends, which depend on the magnitude of the ploughing and friction pressures acting on the machined surface.Keywords: Coupled Eulerian Lagrangian, segmented chip, residual stress, tool wear, edge radius, Inconel718
Procedia PDF Downloads 146384 Computational Fluid Dynamics (CFD) Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action
Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere
Abstract:
Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, LCPP postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and the evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, collected in the city of Hamburg, and its mock-up have been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for simulations is the transfer of obstacle geometry information to the format required by FDS. Thus, we have developed Python codes to automatically convert building and topographic data to the FDS input file. In order to evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE), and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS results demonstrate good agreement with measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within the acceptable tolerances.Keywords: numerical simulations, atmospheric dispersion, COST ES1006 action, CFD model, CUTE experiments, wind tunnel data, numerical results
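The three metrics have standard definitions in the dispersion-model evaluation literature (e.g., Chang and Hanna); a minimal sketch follows, with invented sample concentrations purely for illustration.

```python
import numpy as np

# Standard validation metrics named in the abstract; the sample data
# below are made up and do not come from the CUTE trials.

def fb(obs, pred):
    """Fractional bias: 0 is perfect, positive means underprediction."""
    return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

def nmse(obs, pred):
    """Normalized mean square error: 0 is perfect."""
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs = np.array([1.2, 0.8, 2.5, 3.1, 0.4])    # measured concentrations
pred = np.array([1.0, 1.1, 2.0, 2.6, 0.9])   # modelled concentrations
print(f"FB={fb(obs, pred):+.2f}, NMSE={nmse(obs, pred):.2f}, "
      f"FAC2={fac2(obs, pred):.2f}")
```

For urban dispersion studies, commonly cited acceptance criteria are roughly |FB| ≤ 0.67, NMSE ≤ 6, and FAC2 ≥ 0.3, though the exact tolerances applied in the COST Action may differ.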
Procedia PDF Downloads 133383 Human-Automation Interaction in Law: Mapping Legal Decisions and Judgments, Cognitive Processes, and Automation Levels
Authors: Dovile Petkeviciute-Barysiene
Abstract:
Legal technologies not only create new ways of accessing and providing legal services but also transform the role of legal practitioners. Both lawyers and users of legal services expect automated solutions to outperform people in objectivity and impartiality. Although the fairness of automated decisions is crucial, research on assessing the various characteristics of automated processes related to perceived fairness has only begun. One of the major obstacles to this research is the lack of a comprehensive understanding of which legal actions are automated or could be meaningfully automated, and to what extent. Oftentimes, neither the public nor legal practitioners can envision technological input due to the lack of illustrative examples. The aim of this study is to map the decision-making stages and automation levels which are and/or could be achieved in legal actions related to pre-trial and trial processes. Major legal decisions and judgments were identified during consultations with legal practitioners. The dual-process model of information processing is used to describe the cognitive processes taking place while making legal decisions and judgments during pre-trial and trial actions. Some of the existing legal technologies are incorporated into the analysis as well. Several published automation level taxonomies are considered, because none of them alone fits well into the legal context, as they were all created for avionics, teleoperation, unmanned aerial vehicles, etc. From the information processing perspective, the analysis of legal decisions and judgments exposes situations that are most sensitive to cognitive bias and, among other things, helps to identify the areas that would benefit from automation the most. Automation level analysis, in turn, provides a systematic approach to interaction and cooperation between humans and algorithms. Moreover, an integrated map of legal decisions and judgments, information processing characteristics, and automation levels all together provides groundwork for research on the perceived fairness and acceptance of legal technology. Acknowledgment: This project has received funding from the European Social Fund (project No 09.3.3-LMT-K-712-19-0116) under a grant agreement with the Research Council of Lithuania (LMTLT).Keywords: automation levels, information processing, legal judgment and decision making, legal technology
Procedia PDF Downloads 142382 Investment Development Path and Motivations for Foreign Direct Investment in Georgia
Authors: Vakhtang Charaia, Mariam Lashkhi
Abstract:
Foreign direct investment (FDI) plays a vital role in global business. It provides firms with new markets and advertising channels, cheaper production facilities, and access to new technology, products, skills, and financing. FDI can provide a recipient country or company with a source of new technologies, capital, practices, products, and management skills, and as such can be a powerful driver of economic development. It is one of the key elements of stable economic development in many countries, especially developing ones. Therefore, the size of FDI inflow is one of the most crucial factors for economic performance in small-economy countries (like Georgia), while most developed countries are net exporters of FDI. Because FDI provides firms with new markets, access to new technologies, products, and management skills, marketing channels, cheaper production facilities, and financing opportunities, it plays a significant role in Georgian economic development. The increasing FDI inflows from all over the world to Georgia in the last decade were achieved through the outstanding reforms managed by the Georgian government. However, such important events as the world financial crisis and the Georgian-Russian war affected the overall amount of FDI inflow to Georgia in recent years. It is important to mention that the biggest investor region for Georgia is the EU, which is interested in Georgia not only from an economic point of view but also from a political one. Case studies from the main EU investor countries show that Georgia has great investment potential in different areas, such as the financial sector, energy, construction, the tourism industry, and transport and communications. Moreover, the signing of the Association Agreement between Georgia and the EU will further boost all fields of the Georgian economy in both the short and long term. Last but not least is the calculation of annual FDI inflow to Georgia, which is computed differently by different organizations based on different methodologies; more importantly, all of them show a significant increase of FDI in the last decade, which gives a positive signal to investors and underlines the necessity of further improving the investment climate in the same direction.Keywords: foreign direct investment (FDI), Georgia, investment development path, investment climate
Procedia PDF Downloads 279381 Bimetallic MOFs Based Membrane for the Removal of Heavy Metal Ions from the Industrial Wastewater
Authors: Muhammad Umar Mushtaq, Muhammad Bilal Khan Niazi, Nouman Ahmad, Dooa Arif
Abstract:
Apart from organic dyes, heavy metals such as Pb, Ni, Cr, and Cu are present in textile effluent and pose a threat to humans and the environment. Many studies on removing heavy metal ions from textile wastewater have been conducted in recent decades using metal-organic frameworks (MOFs). In this study, a new polyethersulfone ultrafiltration membrane modified with Cu/Co- and Cu/Zn-based bimetallic metal-organic frameworks (MOFs) was produced. Phase inversion was used to produce the membrane, and atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to characterize it. The structure of the bimetallic MOF-based membrane is complex and can be comprehended using these characterization techniques. Bimetallic MOF-based filtration membranes are designed to selectively adsorb specific contaminants while allowing the passage of water molecules, improving the ultrafiltration efficiency. The adsorption capacity and selectivity of MOFs are enhanced by functionalizing them with particular chemical groups or by incorporating them into composite membranes with other materials, such as polymers. The morphology and performance of the bimetallic MOF-based membrane were investigated in terms of pure water flux and metal ion rejection. The advantages of the developed bimetallic MOF-based membranes for wastewater treatment include an enhanced adsorption capacity due to the presence of two metals in their structure, which provides additional binding sites for contaminants, leading to a higher adsorption capacity and more efficient removal of pollutants from wastewater. Based on the experimental findings, bimetallic MOF-based membranes are more capable of rejecting metal ions from industrial wastewater than previously developed conventional membranes. Furthermore, the operational parameters, including pressure gradients and velocity profiles, are simulated using Ansys Fluent software. The simulation results obtained for the operating parameters are in complete agreement with the experimental results.Keywords: bimetallic MOFs, heavy metal ions, industrial wastewater treatment, ultrafiltration
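For reference, the two performance measures named above (pure water flux and metal-ion rejection) are textbook quantities: J = V/(A·t) and R = 1 − Cp/Cf. A minimal sketch with hypothetical readings, none of which comes from the study:

```python
# Standard membrane performance measures; all sample numbers below are
# invented for illustration.

def water_flux(permeate_volume_l, area_m2, time_h):
    """Pure water flux J = V / (A * t), in L m^-2 h^-1 (LMH)."""
    return permeate_volume_l / (area_m2 * time_h)

def rejection(feed_conc_mg_l, permeate_conc_mg_l):
    """Observed rejection R = 1 - Cp/Cf, as a percentage."""
    return (1.0 - permeate_conc_mg_l / feed_conc_mg_l) * 100.0

# Hypothetical readings for a Pb(II) feed solution:
print(f"J = {water_flux(0.25, 0.0015, 1.0):.0f} LMH")
print(f"R = {rejection(50.0, 2.5):.1f} %")
```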
Procedia PDF Downloads 90380 Numerical Modelling of Prestressed Geogrid Reinforced Soil System
Authors: Soukat Kumar Das
Abstract:
Rapid industrialization and population growth have resulted in a scarcity of suitable ground conditions. This has driven the need for ground improvement by means of reinforcement with geosynthetics, with the minimum possible settlement and the maximum possible safety. Prestressing the geosynthetics offers an economical yet safe method of achieving this goal. The commercially available software PLAXIS 3D has made the analysis of prestressed geosynthetics simpler, with practical simulations of the ground. In this work, numerical analysis in PLAXIS 3D is used to study the effect of prestressing the geogrid, and of the interference of footings, on the load-bearing capacity and settlement characteristics of Unreinforced (UR), Geogrid Reinforced (GR), and Prestressed Geogrid Reinforced (PGR) soil. The results of the numerical analysis have been validated against those given in the reference paper and were found to be in very good agreement with the actual field values, with very small variation. The GR soil was found to improve the bearing pressure by 240%, whereas the PGR soil improves it by almost 500% at 1 mm settlement. In fact, the PGR soil enhances the bearing pressure of the GR soil by almost 200%. The settlement reduction was also found to be very significant: at 100 kPa bearing pressure, the settlement reduction of the PGR soil was about 88% with respect to UR soil and up to 67% with respect to GR soil. The prestressing force results in an enhanced reinforcement mechanism and hence an increased bearing pressure. The deformation at the geogrid layer was found to be 13.62 mm for GR soil, whereas it decreased to a mere 3.5 mm for PGR soil, which confirms the effect of prestressing on the geogrid layer. The improvement factor, conventionally known as the Bearing Capacity Ratio (BCR), which depicts the improvement of the PGR and GR soils with respect to UR soil at different settlements, was found to vary in the range 1.66-2.40 for GR soil and between 3.58 and 5.12 for PGR soil, both with respect to UR soil. The effect of prestressing was also observed in the case of two interfering square footings. The centre-to-centre distance between the two footings (SFD) was taken to be B, 1.5B, 2B, 2.5B, and 3B, where B is the width of the footing. It was found that for UR soil the improvement of the bearing pressure persisted up to 1.5B, after which it remained almost the same. For GR soil the zone of influence rose to 2B, and for PGR soil it extended further, to 2.5B. Thus, the zone of interference for PGR soil increased by 67% with respect to Unreinforced (UR) soil and by almost 25% with respect to GR soil.Keywords: bearing, geogrid, prestressed, reinforced
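The improvement factor (Bearing Capacity Ratio) used above is simply the bearing pressure of reinforced soil divided by that of unreinforced soil at the same settlement. A minimal sketch, with invented pressure-settlement data standing in for the PLAXIS 3D load-settlement curves:

```python
import numpy as np

# BCR at a given settlement, interpolating both pressure-settlement
# curves. The curves below are made up for illustration only.

def bcr(settlements, q_unreinforced, q_reinforced, s):
    """Bearing Capacity Ratio at settlement s."""
    q_ur = np.interp(s, settlements, q_unreinforced)
    q_r = np.interp(s, settlements, q_reinforced)
    return q_r / q_ur

s_mm = np.array([0.5, 1.0, 2.0, 5.0, 10.0])            # settlement, mm
q_ur = np.array([20.0, 35.0, 60.0, 110.0, 170.0])      # UR pressure, kPa
q_pgr = np.array([95.0, 175.0, 280.0, 460.0, 640.0])   # PGR pressure, kPa

print(f"BCR at 1 mm settlement: {bcr(s_mm, q_ur, q_pgr, 1.0):.2f}")
```

With these made-up curves, the BCR at 1 mm settlement is 5.0, i.e., the same order as the roughly fivefold improvement reported for PGR soil.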
Procedia PDF Downloads 402379 A New Co(II) Metal Complex Template with 4-dimethylaminopyridine Organic Cation: Structural, Hirshfeld Surface, Phase Transition, Electrical Study and Dielectric Behavior
Authors: Mohamed dammak
Abstract:
Great attention has been paid to the design and synthesis of novel organic-inorganic compounds in recent decades because of their structural variety and the large diversity of atomic arrangements. In this work, the structure of the novel dimethylaminopyridinium tetrachlorocobaltate (C₇H₁₁N₂)₂CoCl₄, prepared by the slow evaporation method at room temperature, is discussed. The X-ray diffraction results indicate that the hybrid material has a triclinic structure with a P space group and features a 0D structure containing isolated distorted [CoCl₄]²⁻ tetrahedra interposed between [C₇H₁₁N₂]⁺ cations, forming planes perpendicular to the c axis at z = 0 and z = ½. Beyond the effects of the synthesis conditions and the reactants used, the interactions between the cationic planes and the isolated [CoCl₄]²⁻ tetrahedra rely on N-H…Cl and C-H…Cl hydrogen bonding contacts. Inspection of the Hirshfeld surface analysis helps to discuss the strength of the hydrogen bonds and to quantify the inter-contacts. A phase transition was discovered by thermal analysis at 390 K, and comprehensive dielectric research was carried out, showing good agreement with the thermal data. Impedance spectroscopy measurements were used to study the electrical and dielectric characteristics over a wide range of frequencies and temperatures, 40 Hz–10 MHz and 313–483 K, respectively. The Nyquist plot (Z" versus Z') from the complex impedance spectrum revealed semicircular arcs described by a Cole-Cole model. An equivalent electrical circuit consisting of linked grain and grain-boundary elements is employed. The real and imaginary parts of the dielectric permittivity, as well as tan(δ), of (C₇H₁₁N₂)₂CoCl₄ at different frequencies reveal a distribution of relaxation times. The presence of grains and grain boundaries is confirmed by the modulus investigations. Electric and dielectric analyses highlight the good protonic conduction of this material.Keywords: organic-inorganic, phase transitions, complex impedance, protonic conduction, dielectric analysis
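For readers unfamiliar with the Cole-Cole description of the arcs, each contribution (grain, grain boundary) can be written as a resistance with a distributed relaxation time, Z(ω) = R / (1 + (jωτ)^(1−α)), with α = 0 recovering an ideal Debye semicircle. The sketch below sums a grain and a grain-boundary arc over the measured frequency window; all R, τ, and α values are invented for illustration.

```python
import numpy as np

# Cole-Cole impedance element: a depressed semicircle in the Nyquist
# plane. Parameter values below are illustrative, not fitted data.

def cole_cole_Z(omega, R, tau, alpha):
    return R / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

omega = 2.0 * np.pi * np.logspace(np.log10(40.0), 7.0, 200)  # 40 Hz - 10 MHz

# Series combination of a grain arc and a grain-boundary arc.
Z = (cole_cole_Z(omega, R=5.0e4, tau=1.0e-6, alpha=0.10)      # grain
     + cole_cole_Z(omega, R=2.0e5, tau=1.0e-3, alpha=0.20))   # grain boundary

# Nyquist convention: plot -Z'' versus Z'.
print(f"max -Z'' = {(-Z.imag).max():.3g} ohm at Z' = "
      f"{Z.real[(-Z.imag).argmax()]:.3g} ohm")
```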
Procedia PDF Downloads 85378 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor
Authors: Manish Chand, Subhrojit Bagchi, R. Kumar
Abstract:
A new dry tube (DT) has been installed in the tank of the KAMINI research reactor, Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and for the testing of neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating a flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used for the experimental thermal flux measurement. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, and the maximum flux was (1.02 ± 0.023) x 10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, without and with a cadmium cover of 1 mm thickness, were irradiated at the maximum flux position in the DT to determine irradiation-specific input parameters such as the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated using a theoretical model of the KAMINI reactor through the Monte Carlo N-Particle (MCNP) code. In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimal approximations for all the components. Continuous-energy cross-section data from ENDF-B/VII.1 as well as S(α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values theoretically predicted by MCNP, within ±10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters like neutron spectra, dose rates, etc., and that multi-elemental analysis can be carried out by irradiating samples at the maximum flux position, using the measured f and α parameters with k₀-NAA standardization.Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code
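The foil technique above rests on the standard activation equation, φ = A / (N σ (1 − e^(−λ t_irr))), where A is the measured ¹⁹⁸Au activity, N the number of ¹⁹⁷Au atoms, σ the thermal capture cross section, and the bracketed term the saturation factor. A minimal sketch, neglecting decay before counting and epithermal corrections; the foil mass and activity are invented, while the cross section and half-life are standard nuclear data.

```python
import numpy as np

# Thermal flux from gold-foil activation (simplified). The foil mass
# and activity are hypothetical; sigma and the half-life are standard.

NA = 6.022e23                    # Avogadro's number, 1/mol
M_AU = 196.967                   # molar mass of Au, g/mol
SIGMA = 98.65e-24                # thermal (n,gamma) cross section, cm^2
HALF_LIFE_S = 2.6947 * 86400.0   # 198Au half-life, s

def thermal_flux(activity_bq, foil_mass_g, t_irr_s):
    lam = np.log(2.0) / HALF_LIFE_S
    n_atoms = foil_mass_g / M_AU * NA           # 197Au is ~100% abundant
    saturation = 1.0 - np.exp(-lam * t_irr_s)   # build-up during irradiation
    return activity_bq / (n_atoms * SIGMA * saturation)

# Hypothetical: 10 mg foil, 1 h irradiation, 1.2e5 Bq at end of irradiation.
print(f"phi = {thermal_flux(1.2e5, 0.010, 3600.0):.2e} n cm^-2 s^-1")
```

With these made-up inputs the result lands in the 10⁹ n cm⁻²s⁻¹ range, consistent with the order of magnitude reported for the tube.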
Procedia PDF Downloads 163