Search results for: iterative calculation
377 Fenton Sludge's Catalytic Ability with Synergistic Effects During Reuse for Landfill Leachate Treatment
Authors: Mohd Salim Mahtab, Izharul Haq Farooqi, Anwar Khursheed
Abstract:
Advanced oxidation processes (AOPs) based on Fenton chemistry are versatile options for treating complex wastewaters containing refractory compounds. However, the classical Fenton process (CFP) has limitations, such as high sludge production and reagent dosage, which limit its broad use and result in secondary contamination. As a result, long-term solutions are required for process intensification and the removal of these impediments. This study shows that Fenton sludge can serve as a catalyst in the Fe³⁺/Fe²⁺ reductive pathway, allowing non-regenerated sludge to be reused for complex wastewater treatment, such as landfill leachate treatment, even in the absence of Fenton's reagents. Experiments with and without pH adjustment in stages I and II demonstrated that an acidic pH is desirable. Humic compounds in leachate could improve the Fe³⁺/Fe²⁺ cycle under optimal conditions, and the chemical oxygen demand (COD) removal efficiency was 22±2% and 62±2% in stages I and II, respectively. Furthermore, excellent total suspended solids (TSS) removal (> 95%) and color removal (> 80%) were obtained in stage II. The processes underlying the synergistic (oxidation/coagulation/adsorption) effects are addressed. Design of experiments (DOE) is growing increasingly popular and has thus been implemented in the chemical, water, and environmental domains. The relevance of the statistical model for the desired response was validated using the explicitly stated optimal conditions. The operational factors, characteristics of the reused sludge, toxicity analysis, cost calculation, and future research objectives are also discussed. According to the study's findings, reusing non-regenerated Fenton sludge can minimize hazardous toxic solid waste and total treatment costs.
Keywords: advanced oxidation processes, catalysis, Fe³⁺/Fe²⁺ cycle, Fenton sludge
Procedia PDF Downloads 89
376 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error
Authors: Seyedamir Makinejadsanij
Abstract:
One of the most important factors in the production of quality steel is knowing the exact weight of the steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead crane. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays real-time weight. The moving object has a variable apparent weight due to swinging, and the weighing system has an error of about ±5%. This means that when an object weighing about 80 tons is moved by a crane, the device (Disomat Tersus system) reads about 4 tons more or 4 tons less, and this is the biggest problem in calculating the real weight. The k-means algorithm, an unsupervised clustering method, was used here. The best result was obtained with 3 centers: compared with the plain average (one center) and with two, four, five, and six centers, three centers gave the best answer, which is logically due to the elimination of noise above and below the real weight. Every day, a standard weight is moved with the working cranes to test and calibrate them. The results show that the accuracy is about 40 kg per 60 tons (standard weight). As a result, with this method, the accuracy of the moving-weight estimate is 99.95%. K-means is used to calculate the exact mean of the objects; the stopping criterion of the algorithm is 1000 iterations or no points moving between the clusters. As a result of the implementation of this system, the crane operator does not stop while moving objects and continues his activity regardless of weight calculations. Also, production speed increased, and human error decreased.
Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem
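A minimal sketch of the clustering step described above, assuming 1-D weight readings from a swinging load: the three-cluster choice and the middle-cluster estimate follow the abstract, while the simulated readings, noise level, and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simulated readings (tons) from a swinging ~80 t load; the swing adds an
# oscillation of roughly +/-5% around the true weight, plus sensor noise.
rng = np.random.default_rng(0)
readings = 80.0 + 4.0 * np.sin(np.linspace(0.0, 20.0, 500)) + rng.normal(0.0, 0.5, 500)

# Cluster the 1-D readings into three groups: below, around, and above the
# true weight (k = 3, as found optimal; max_iter mirrors the 1000-iteration
# stopping criterion mentioned in the abstract).
km = KMeans(n_clusters=3, n_init=10, max_iter=1000, random_state=0)
km.fit(readings.reshape(-1, 1))
centers = np.sort(km.cluster_centers_.ravel())

# The outer clusters absorb the swing noise; the middle center is the estimate.
estimate = centers[1]
print(f"estimated weight: {estimate:.2f} t (true weight 80 t)")
```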
Procedia PDF Downloads 90
375 Outcome of Using Penpat Pinyowattanasilp Equation for Prediction of 24-Hour Uptake, First and Second Therapeutic Doses Calculation in Graves’ Disease Patient
Authors: Piyarat Parklug, Busaba Supawattanaobodee, Penpat Pinyowattanasilp
Abstract:
The radioactive iodine thyroid uptake (RAIU) has been widely used to differentiate the causes of thyrotoxicosis and to guide treatment. The 24-hour RAIU is routinely used to calculate the dose of radioactive iodine (RAI) therapy; however, a 2-day protocol is required. This study aims to evaluate a modified application of the Penpat Pinyowattanasilp equation, excluding outlier data (3-hour RAIU less than 20% or more than 80%), to improve the prediction of 24-hour uptake. The equation is: predicted 24-hour RAIU (P24RAIU) = 32.5 + 0.702 × (3-hour RAIU). First and second therapeutic doses were then calculated separately in Graves’ disease patients. Methods: This was a retrospective study at the Faculty of Medicine Vajira Hospital in Bangkok, Thailand. Inclusion criteria were Graves’ disease patients who visited the RAI clinic between January 2014 and March 2019. We divided subjects into 2 groups according to first and second therapeutic doses. Results: Our study had a total of 151 patients; 115 patients received a first RAI dose and 36 patients a second RAI dose. P24RAIU was highly correlated with the actual 24-hour RAIU for the first and second therapeutic doses (r = 0.913, 95% CI = 0.876 to 0.939 and r = 0.806, 95% CI = 0.649 to 0.897). Bland-Altman plots show that the mean differences between predicted and actual 24-hour RAIU for the first and second doses were 2.14% (95% CI 0.83-3.46) and 1.37% (95% CI -1.41-4.14). The mean first actual and predicted therapeutic doses were 8.33 ± 4.93 and 7.38 ± 3.43 milliCuries (mCi), respectively. The mean second actual and predicted therapeutic doses were 6.51 ± 3.96 and 6.01 ± 3.11 mCi, respectively. The predicted therapeutic doses were highly correlated with the actual doses for the first and second treatments (r = 0.907, 95% CI = 0.868 to 0.935 and r = 0.953, 95% CI = 0.909 to 0.976). Bland-Altman plots show that the mean differences between predicted and actual therapeutic doses for the first and second treatments were less than 1 mCi (-0.94 and -0.5 mCi). This modified equation is simple to use in clinical practice, especially for patients with a 3-hour RAIU in the range of 20-80%, in a Thai population. Before use in other populations, the equation should be tested for correlation.
Keywords: equation, Graves’ disease, prediction, 24-hour uptake
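A small sketch of the prediction step, using the regression reported above. The dose formula shown afterwards is the conventional fixed-activity-per-gram calculation, which is an assumption rather than the paper's own method, and the gland weight and target activity are illustrative values.

```python
def predict_24h_raiu(raiu_3h_percent: float) -> float:
    """Penpat Pinyowattanasilp equation, intended for 3-h RAIU in 20-80%."""
    if not 20.0 <= raiu_3h_percent <= 80.0:
        raise ValueError("equation applies to 3-h RAIU between 20% and 80%")
    return 32.5 + 0.702 * raiu_3h_percent

def rai_dose_mci(target_uci_per_g: float, gland_g: float, uptake_fraction: float) -> float:
    """Conventional fixed-dose formula (assumption, not from the abstract):
    dose [mCi] = target [uCi/g] * gland weight [g] / (uptake * 1000)."""
    return target_uci_per_g * gland_g / (uptake_fraction * 1000.0)

p24 = predict_24h_raiu(55.0)                   # e.g. a 3-hour uptake of 55%
dose = rai_dose_mci(150.0, 40.0, p24 / 100.0)  # illustrative 150 uCi/g, 40 g gland
print(f"predicted 24-h RAIU: {p24:.1f}%  ->  dose: {dose:.1f} mCi")
```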
Procedia PDF Downloads 139
374 Queer Anti-Urbanism: An Exploration of Queer Space Through Design
Authors: William Creighton, Jan Smitheram
Abstract:
Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring’s work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and desire to destroy the city towards a mode of queer critique that counters normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a “design as research” methodology, the design outputs become a vehicle to ask how we might live, otherwise, in architectural space. A design-as-research methodology – a non-linear, iterative process of questioning, designing and reflecting – establishes itself here through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the projects began by exploring the relations body to body, then body to known others, and finally body to unknown others. Moving through increasing scales was not meant to privilege the objective, the public and the large scale; instead, ‘intra-scaling’ acts as a tool to re-think how scale reproduces normative ideas of the identity of space. There was a queering of scale. Through this approach, the first result was an installation that brings two people together to co-author space, where the installation distorts the sensory experience and forces a more intimate and interconnected experience, challenging our socialized proxemics: knees might touch. To queer the home, the installation was then used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and as a way to process practical information about the site and the existing house – the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through “private” and “public” to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place – but is it queer enough?
Keywords: queer, queer anti-urbanism, design as research, design
Procedia PDF Downloads 176
373 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and the evaluation of efficiency indicators. One important efficiency criterion is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process included the following stages, which were reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the “weights” of the elements of the system. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements
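A minimal sketch of stages 3-6 for a toy five-element bridge network, assuming independent components. Instead of the orthogonalization algorithm itself, it evaluates the same reliability polynomial by exhaustive enumeration of component states, which gives identical results for small systems; the path sets and component reliabilities are invented for illustration.

```python
from itertools import product

# Shortest paths of successful functioning (DNF terms) of a 5-element bridge
# network: each tuple lists the components that must all work.
paths = [(1, 2), (3, 4), (1, 5, 4), (3, 5, 2)]
p = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.8, 5: 0.95}  # component reliabilities

def system_reliability(paths, p):
    comps = sorted(p)
    total = 0.0
    for states in product((0, 1), repeat=len(comps)):
        up = dict(zip(comps, states))
        if any(all(up[c] for c in path) for path in paths):  # DNF evaluates true
            prob = 1.0
            for c in comps:
                prob *= p[c] if up[c] else 1.0 - p[c]
            total += prob
    return total

def weight(i):
    """'Weight' of an element as Birnbaum importance: R(x_i=1) - R(x_i=0)."""
    return (system_reliability(paths, {**p, i: 1.0})
            - system_reliability(paths, {**p, i: 0.0}))

print(f"system reliability: {system_reliability(paths, p):.4f}")
print({i: round(weight(i), 4) for i in p})
```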
Procedia PDF Downloads 66
372 Rapid Assessment of the Ability of Forest Vegetation in Kulonprogo to Store Carbon Using Multispectral Satellite Imagery and a Vegetation Index
Authors: Ima Rahmawati, Nur Hafizul Kalam
Abstract:
The rapid development of the industrial and economic sectors in various countries has raised greenhouse gas (GHG) emissions. Greenhouse gases in the atmosphere are dominated by carbon dioxide (CO₂) and methane (CH₄), which continually increase the surface temperature of the earth. The increase in these gases is caused by the incomplete combustion of fossil fuels such as petroleum and coal, and also by a high rate of deforestation. Yogyakarta Special Province, a year-round tourist destination, has great potential for increasing greenhouse gas emissions, mainly from incomplete combustion. One effort to reduce the concentration of these gases in the atmosphere is to keep and empower the existing forests in the Province of Yogyakarta, especially the forest in Kulonprogo, maintaining its greenness so that it can absorb and store carbon maximally. Remote sensing technology can be used to determine the ability of forests to absorb carbon, which is connected to the density of vegetation. The purpose of this study is to determine the density of the biomass of the forest vegetation and the ability of the forest to store carbon through a photo-interpretation and Geographic Information System approach. The remote sensing imagery used in this study is a LANDSAT 8 OLI scene recorded in 2015. LANDSAT 8 OLI imagery has a 30-meter spatial resolution for the multispectral bands and can give a general overview of the carbon stored by each vegetation density class. The method combines a vegetation index transformation with allometric calculations from field data, followed by regression analysis. The results are model maps of the density of forest vegetation in Kulonprogo, Yogyakarta, and of its capability to store carbon.
Keywords: remote sensing, carbon, Kulonprogo, forest vegetation, vegetation index
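A minimal sketch of the vegetation-index step, assuming the NDVI transformation (the abstract does not name the specific index) and Landsat 8 OLI band numbering (band 4 = red, band 5 = near-infrared). The rasters, the allometric regression coefficients, and the biomass-to-carbon fraction are placeholders.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), Landsat 8 OLI bands 5 and 4."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-9, None)

# Toy reflectance arrays standing in for the 30 m multispectral bands.
rng = np.random.default_rng(1)
red = rng.uniform(0.02, 0.15, (100, 100))
nir = rng.uniform(0.20, 0.50, (100, 100))

v = ndvi(nir, red)

# Placeholder allometric regression from field plots: biomass (t/ha) as a
# linear function of NDVI; a and b would be fitted to the field data.
a, b = 250.0, -60.0
biomass = np.clip(a * v + b, 0.0, None)
carbon = 0.47 * biomass  # common default carbon fraction of dry biomass

print(f"mean NDVI {v.mean():.2f}, mean carbon stock {carbon.mean():.1f} t C/ha")
```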
Procedia PDF Downloads 397
371 The Effect of Foundation on the Earth Fill Dam Settlement
Authors: Masoud Ghaemi, Mohammadjafar Hedayati, Faezeh Yousefzadeh, Hoseinali Heydarzadeh
Abstract:
Careful monitoring of earth dams to measure deformation caused by settlement and movement has always been a concern for engineers in the field. To measure the settlement and deformation of earth dams, the precision combined settlement-inclinometer set, commonly referred to as the IS instrument, is usually used. In some dams, because the thickness of the alluvium is large and its removal is not possible (technically, economically, and in terms of performance), there is no possibility of placing the end of the IS instrument in the rock foundation. Inevitably, the pipes have to be installed in the weak and deformable alluvial foundation, which leads to errors in the calculation of the actual settlement (absolute settlement) in different parts of the dam body. The purpose of this paper is to present new and refined criteria for predicting settlement and deformation in earth dams. The study is based on conditions in three dams with highly deformable alluvial foundations (Agh Chai, Narmashir and Gilan-e Gharb), in order to provide settlement criteria that account for the alluvial foundation. To achieve this goal, the settlement of the dams was simulated using the finite difference method with FLAC3D software, and the modeling results were then compared with the IS instrument readings. Finally, the model was calibrated and the results validated using regression analysis techniques, the modeling parameters were scrutinized against real situations, and then, using MATLAB and its Curve Fitting Toolbox, new settlement criteria based on the elasticity modulus, cohesion, friction angle, and density of the earth dam and the alluvial foundation were obtained. The results of these studies show that, by using the new criteria, the amount of settlement and deformation for dams with an alluvial foundation can be corrected after the instrument readings, and the error rate in the IS instrument readings can be greatly reduced.
Keywords: earth-fill dam, foundation, settlement, finite difference, MATLAB, curve fitting
Procedia PDF Downloads 195
370 Understanding Inhibitory Mechanism of the Selective Inhibitors of Cdk5/p25 Complex by Molecular Modeling Studies
Authors: Amir Zeb, Shailima Rampogu, Minky Son, Ayoung Baek, Sang H. Yoon, Keun W. Lee
Abstract:
Neurotoxic insults activate calpain, which in turn produces truncated p25 from p35. p25 forms the hyperactivated Cdk5/p25 complex and thereby induces severe neuropathological aberrations, including hyperphosphorylated tau, neuroinflammation, apoptosis, and neuronal death. Inhibition of the Cdk5/p25 complex alleviates the aberrant phosphorylation of tau and mitigates AD pathology. PHA-793887 and Roscovitine have been investigated as selective inhibitors of Cdk5/p25 with IC50 values of 5 nM and 160 nM, respectively, but their inhibition mechanisms remain unknown. Herein, computational simulations have explored the binding mode and interaction mechanism of PHA-793887 and Roscovitine with Cdk5/p25. Docking results suggested that PHA-793887 and Roscovitine occupy the ATP-binding site of Cdk5, with the highest docking (GOLD) scores of 66.54 and 84.03, respectively. Furthermore, molecular dynamics (MD) simulation demonstrated that PHA-793887 and Roscovitine established stable RMSDs of 1.09 Å and 1.48 Å with Cdk5/p25, respectively. Profiling of the polar interactions suggested that each inhibitor formed hydrogen bonds (H-bonds) with catalytic residues of Cdk5 and could remain stable throughout the molecular dynamics simulation. Additionally, binding free energy calculation by molecular mechanics/Poisson–Boltzmann surface area (MM/PBSA) suggested that PHA-793887 and Roscovitine had the lowest binding free energies of -150.05 kJ/mol and -113.14 kJ/mol, respectively, with Cdk5/p25. Free energy decomposition demonstrated that the polar energy of the H-bond between Glu81 of Cdk5 and PHA-793887 is the essential factor making PHA-793887 highly selective towards Cdk5/p25. Overall, this study provides substantial evidence to explain the mechanistic interactions of the selective inhibitors of Cdk5/p25 and could serve as a fundamental consideration in the development of structure-based selective inhibitors of Cdk5/p25.
Keywords: Cdk5/p25 inhibition, molecular modeling of Cdk5/p25, PHA-793887 and roscovitine, selective inhibition of Cdk5/p25
Procedia PDF Downloads 139
369 A New Formulation of the M and M-Theta Integrals Generalized for Virtual Crack Closure in a Three-Dimensional Medium
Authors: Loïc Chrislin Nguedjio, S. Jerome Afoutou, Rostand Moutou Pitti, Benoit Blaysat, Frédéric Dubois, Naman Recho, Pierre Kisito Talla
Abstract:
The safety and durability of structures remain challenging fields that continue to draw the attention of designers. One widely adopted approach is fracture mechanics, which provides methods to evaluate crack stability in complex geometries and under diverse loading conditions. The global energy approach is particularly comprehensive, as it calculates the energy release rate required for crack initiation and propagation using path-independent integrals. This study aims to extend these invariant integrals, with the goal of enhancing the accuracy of failure predictions. The ultimate objective is to create more robust materials while optimizing structural safety and durability. By integrating the real and virtual field method with the virtual crack closure technique, a new formulation of the M-integral is introduced. This formulation establishes a direct relationship between the local stresses on the crack faces and the opening displacements, allowing for an accurate calculation of the fracture energy. The analytical calculations are grounded in the assumption that the energy needed to close a crack virtually is equal to the energy released during its opening. This novel integral is implemented in a finite element code using Cast3M to simulate cracking criteria in a wood material context. Initially, the numerical calculations focus on plane strain conditions, but they are later extended to three-dimensional environments, taking into account the orthotropic nature of wood.
Keywords: energy release rate, path-independent integrals, virtual crack closure, orthotropic material
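A minimal sketch of the virtual-crack-closure idea underlying the new formulation, assuming the classical two-dimensional VCCT relation G_I = F_y Δv / (2 B Δa): the energy spent closing the crack over one element equals the energy released while opening it. The nodal force and displacement values are illustrative, not taken from the paper.

```python
# Classical 2-D virtual crack closure technique (VCCT), per unit thickness B:
# mode I uses the normal closing force and the crack-opening displacement,
# mode II the shear force and the sliding displacement, over extension da.
def vcct_mode1(F_y: float, dv: float, da: float, B: float = 1.0) -> float:
    return F_y * dv / (2.0 * B * da)

def vcct_mode2(F_x: float, du: float, da: float, B: float = 1.0) -> float:
    return F_x * du / (2.0 * B * da)

# Illustrative nodal values from a 1 mm crack-tip element:
F_y, dv = 500.0, 8.0e-4   # closing force [N], opening displacement [m]
F_x, du = 150.0, 2.0e-4   # shear force [N], sliding displacement [m]
da = 1.0e-3               # virtual crack extension [m]

G_I, G_II = vcct_mode1(F_y, dv, da), vcct_mode2(F_x, du, da)
print(f"G_I = {G_I:.0f} J/m^2, G_II = {G_II:.0f} J/m^2, G = {G_I + G_II:.0f} J/m^2")
```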
Procedia PDF Downloads 5
368 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting the endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision-making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: linked open data, information integration, digital libraries, data mining
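A minimal sketch of the naive Bayes recommendation step, assuming each institutional risk profile is encoded as a numeric vector of risk factor values. The feature names, training data, and endangerment groups are invented for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [format obsolescence, software dependency, community support],
# scored 0-1 from the expert survey; labels are endangerment groups.
X = np.array([
    [0.9, 0.8, 0.2], [0.8, 0.9, 0.1],   # high risk
    [0.5, 0.4, 0.5], [0.6, 0.5, 0.4],   # medium risk
    [0.1, 0.2, 0.9], [0.2, 0.1, 0.8],   # low risk
])
y = ["high", "high", "medium", "medium", "low", "low"]

clf = GaussianNB().fit(X, y)

# A new institutional risk profile: the system recommends the matching
# endangerment group and shows the class probabilities to the expert.
profile = np.array([[0.7, 0.6, 0.3]])
print("recommended group:", clf.predict(profile)[0])
print(dict(zip(clf.classes_, clf.predict_proba(profile)[0].round(3))))
```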
Procedia PDF Downloads 426
367 Modelling Patient Condition-Based Demand for Managing Hospital Inventory
Authors: Esha Saha, Pradip Kumar Ray
Abstract:
A hospital inventory comprises a large number and great variety of items for the proper treatment and care of patients, such as pharmaceuticals, medical equipment, surgical items, etc. Improper management of these items, e.g. stockouts, may lead to delays in treatment or other fatal consequences, even the death of the patient. Hospitals therefore generally tend to overstock items to avoid the risk of stockout, which leads to unnecessary investment of money, difficulty in storage, more expiration and wastage, etc. Thus, in such a challenging environment, it is necessary for hospitals to follow an inventory policy that considers the stochasticity of demand in a hospital. Statistical analysis captures the correlation between patient condition, represented by bed occupancy, and patient demand, which changes stochastically. Due to this dependency on bed occupancy, a Markov model is developed that maps the changes in demand for hospital inventory onto the changes in patient condition, represented by movements between bed occupancy states (acute care, rehabilitative, and long-term care) during the patient's length of stay in the hospital. An inventory policy is developed for a hospital based on the fulfillment of patient demand, with the objective of minimizing the frequency and quantity of orders placed for inventoried items. The analytical structure of the model, based on probability calculations, is provided to show the optimal inventory-related decisions. A case study is illustrated in this paper for the development of the hospital inventory model based on patient demand for multiple inpatient pharmaceutical items. A sensitivity analysis is conducted to investigate the impact of the inventory-related parameters on the developed optimal inventory policy. The developed model and solution approach may therefore help hospital managers and pharmacists in managing hospital inventory under stochastic demand for inpatient pharmaceutical items.
Keywords: bed occupancy, hospital inventory, Markov model, patient condition, pharmaceutical items
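A minimal sketch of the bed-occupancy Markov chain described above, assuming the three states named in the abstract; the transition matrix and per-state demand rates are invented. The stationary distribution then gives the long-run expected daily demand per occupied bed.

```python
import numpy as np

states = ["acute", "rehabilitative", "long-term"]
# Invented daily transition probabilities between occupancy states.
P = np.array([
    [0.70, 0.25, 0.05],
    [0.10, 0.75, 0.15],
    [0.02, 0.08, 0.90],
])
# Invented mean demand (units/day of a pharmaceutical item) in each state.
demand = np.array([5.0, 2.0, 0.8])

# Stationary distribution pi solves pi @ P = pi with sum(pi) = 1:
# it is the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

expected_daily_demand = float(pi @ demand)
print(dict(zip(states, pi.round(3))))
print(f"expected demand per occupied bed: {expected_daily_demand:.2f} units/day")
```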
Procedia PDF Downloads 323
366 Direct Approach in Modeling Particle Breakage Using Discrete Element Method
Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang
Abstract:
The current study aims to develop an in-house discrete element method (DEM) code and link it with a direct breakage event, so that particle breakage and the resulting fragment size distribution can be determined simultaneously with the DEM simulation. The method applies particle breakage directly inside the DEM computation algorithm: if any breakage happens, the original particle is replaced with its daughter fragments. In this way, the calculation proceeds on an updated particle list, which closely resembles the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)–powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations. Fine particles in a particle bed increase mechanical energy loss and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
Keywords: particle bed, breakage models, breakage kinetics, discrete element method
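A minimal sketch of the replace-with-daughters step described above, assuming a simple force-threshold breakage criterion and equal-volume, mass-conserving fragments. A real DEM code resolves contacts and particle motion every time step, which is omitted here; the strength scaling of the daughters is an illustrative assumption (smaller particles are generally stronger).

```python
import random
from dataclasses import dataclass

@dataclass
class Particle:
    diameter: float   # mm
    strength: float   # breakage force threshold, N

def break_particle(parent: Particle, n_daughters: int = 4) -> list[Particle]:
    """Replace a broken parent with equal-volume daughter fragments."""
    d = parent.diameter / n_daughters ** (1.0 / 3.0)  # conserves total volume
    return [Particle(d, parent.strength * 1.4) for _ in range(n_daughters)]

def dem_breakage_step(particles: list[Particle], contact_force) -> list[Particle]:
    """If the contact force exceeds a particle's strength, swap in daughters."""
    updated: list[Particle] = []
    for p in particles:
        f = contact_force(p)                 # would come from the DEM contact solver
        updated.extend(break_particle(p) if f > p.strength else [p])
    return updated                           # updated particle list for next step

bed = [Particle(10.0, 200.0) for _ in range(5)]
bed = dem_breakage_step(bed, lambda p: random.uniform(0.0, 400.0))
print(f"{len(bed)} particles after the impact")
```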
Procedia PDF Downloads 199
365 Larger Diameter 22 mm-PDC Cutter Greatly Improves Drilling Efficiency of PDC Bit
Authors: Fangyuan Shao, Wei Liu, Deli Gao
Abstract:
With the increasing pace of oil and gas exploration, development and production at home and abroad, technology that speeds up drilling is becoming more and more critical for reducing development costs. A highly efficient, customized PDC bit is an important piece of equipment in the bottom hole assembly (BHA). Therefore, improving the rock-breaking efficiency of PDC bits will help reduce drilling time and drilling cost. Advances in PDC bit technology have resulted in a leapfrogging improvement in the rate of penetration (ROP) of PDC bits over roller cone bits in soft to medium-hard formations. Recently, with the development of PDC technology, the diameter of the PDC cutter has been further expanded; the maximum cutter diameter used in this paper is 22 mm. According to theoretical calculation, under the same depth of cut (DOC), the 22 mm-PDC cutter increases the exposure of the cutter, and the increase in cutter diameter helps to increase the cutting area of the cutter. In order to compare the cutting performance of the 22 mm-PDC cutter with the existing commonly used cutters, 16 mm, 19 mm and 22 mm PDC cutters were mounted on a vertical turret lathe (VTL) in the laboratory for cutting tests under different DOCs. The DOCs were 0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm, 2.5 mm and 3.0 mm. The rock sample used in the experiments was limestone. The results of the laboratory tests show that the new 22 mm-PDC cutter technology greatly improves cutting efficiency. On the one hand, as the DOC increases, the mechanical specific energy (MSE) of all cutters decreases, which means that the cutting efficiency increases. On the other hand, under the same DOC, the larger the cutter diameter, the larger the working area of the cutter, which leads to higher cutting efficiency. In view of the high performance of the 22 mm-PDC cutters, they were applied in full-scale bit field experiments. The results show that the bit with 22 mm-PDC cutters achieves a breakthrough improvement in ROP over bits with conventional 16 mm and 19 mm cutters in offset well drilling.
Keywords: polycrystalline diamond compact, 22 mm-PDC cutters, cutting efficiency, mechanical specific energy
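A minimal sketch of the theoretical cutting-area comparison, assuming a flat-faced circular cutter engaging the rock over a circular segment whose height equals the DOC. The segment-area formula is standard geometry; the comparison mirrors the 16/19/22 mm cutters and DOCs tested above.

```python
import math

def cutting_area_mm2(diameter_mm: float, doc_mm: float) -> float:
    """Area of the circular segment swept by a cutter at a given depth of cut:
    A = r^2 * arccos((r - d)/r) - (r - d) * sqrt(2*r*d - d^2)."""
    r, d = diameter_mm / 2.0, doc_mm
    if d > r:
        raise ValueError("DOC exceeds cutter radius")
    return r * r * math.acos((r - d) / r) - (r - d) * math.sqrt(2.0 * r * d - d * d)

for doc in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    areas = {D: cutting_area_mm2(D, doc) for D in (16, 19, 22)}
    gain = areas[22] / areas[16] - 1.0
    row = ", ".join(f"{D} mm -> {a:.2f} mm^2" for D, a in areas.items())
    print(f"DOC {doc:.1f} mm: {row}  (22 mm vs 16 mm: +{gain:.0%})")
```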
Procedia PDF Downloads 204
364 Design & Development of a Static-Thrust Test-Bench for Aviation/UAV Based Piston Engines
Authors: Syed Muhammad Basit Ali, Usama Saleem, Irtiza Ali
Abstract:
Internal combustion engines were pioneers in the aviation industry, with piston engines powering aircraft propulsion from propeller-driven biplanes to turboprop, commercial, and cargo airliners. To provide an adequate amount of thrust, the piston engine rotates the propeller at a specific rpm, producing enough mass airflow. Thrust is the only forward-acting force of an aircraft and is what allows heavier-than-air bodies to fly; its calculated value depends on the mathematical model, the variables included in it, and correct measurement. Test benches have been a benchmark in the aerospace industry for analysing results before a flight and hold paramount significance in reliability and safety engineering. The calculation of thrust from a piston engine also depends on environmental changes, the diameter of the propeller, and the density of air. The project is centered on piston engines used in the aviation industry for light aircraft and UAVs. A static thrust test bench involves various units, each performing a designated purpose in monitoring and display. Static thrust tests are performed on the ground, and safety concerns hold paramount importance. The execution of this study involved research, design, manufacturing, and results based on reverse engineering, starting from virtual design, analytical calculation, and simulations. The final evaluation correlates results gathered from various methods, such as the conventional mass-spring system and a digital load cell. On average, we measured 17.5 kg of thrust (25+ engine run-ups, around 40 hours of engine running), with only 10% deviation from the analytically calculated thrust, providing 90% accuracy.
Keywords: aviation, aeronautics, static thrust, test bench, aircraft maintenance
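A minimal sketch of the analytical thrust estimate against which the bench readings can be compared, assuming the standard propeller relation T = C_T ρ n² D⁴. The thrust coefficient, rpm, and propeller diameter are illustrative values chosen to land near the reported ~17.5 kg, not taken from the abstract.

```python
def static_thrust_n(c_t: float, rho: float, rpm: float, d: float) -> float:
    """Standard propeller relation T = C_T * rho * n^2 * D^4 (n in rev/s)."""
    n = rpm / 60.0
    return c_t * rho * n**2 * d**4

rho = 1.225            # air density at sea level, kg/m^3
c_t = 0.09             # illustrative static thrust coefficient
rpm, d = 2400.0, 1.0   # illustrative engine rpm and propeller diameter (m)

t_newton = static_thrust_n(c_t, rho, rpm, d)
print(f"predicted static thrust: {t_newton:.0f} N ({t_newton / 9.81:.1f} kgf)")
```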
Procedia PDF Downloads 413
363 Software Development for AASHTO and Ethiopian Roads Authority Flexible Pavement Design Methods
Authors: Amare Setegn Enyew, Bikila Teklu Wodajo
Abstract:
The primary aim of flexible pavement design is to ensure the development of economical and safe road infrastructure. However, failures can still occur due to improper or erroneous structural design. In Ethiopia, the design of flexible pavements relies on manual calculations and on selecting the pavement structure from a catalogue. The catalogue offers, in eight different charts, alternative structures for combinations of traffic and subgrade classes, as outlined in the Ethiopian Roads Authority (ERA) Pavement Design Manual 2001. Furthermore, design modification is allowed in accordance with the structural number principles outlined in the AASHTO 1993 Guide for Design of Pavement Structures. Nevertheless, the manual calculation and design process involves the use of nomographs, charts, tables, and formulas, which increases the likelihood of human errors and inaccuracies, and this may lead to unsafe or uneconomical road construction. To address this challenge, a software package called AASHERA has been developed for the AASHTO 1993 and ERA design methods, using the MATLAB language. The software accurately determines the required thicknesses of the flexible pavement surface, base, and subbase layers for the two methods. It also digitizes design inputs and references such as nomographs, charts, default values, and tables. Moreover, the software allows easier comparison of the two design methods in terms of results and construction cost. AASHERA's accuracy has been confirmed through comparisons with designs from handbooks and manuals. The software can aid in reducing human errors, inaccuracies, and time consumption compared to the conventional manual design methods employed in Ethiopia. AASHERA, with its validated accuracy, proves to be an indispensable tool for flexible pavement structure designers.
Keywords: flexible pavement design, AASHTO 1993, ERA, MATLAB, AASHERA
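A minimal sketch (in Python, not AASHERA's MATLAB) of the AASHTO 1993 structural number check that underlies the design modification step, assuming the standard relation SN = a₁D₁ + a₂D₂m₂ + a₃D₃m₃. The layer coefficients and drainage factors are typical textbook values, not taken from AASHERA.

```python
# AASHTO 1993 structural number: SN = a1*D1 + a2*D2*m2 + a3*D3*m3,
# with layer coefficients a_i, thicknesses D_i (inches), drainage factors m_i.
def structural_number(layers):
    return sum(a * D * m for a, D, m in layers)

# Typical textbook values (assumed, not from AASHERA):
surface = (0.44, 4.0, 1.0)    # asphalt concrete, a1 = 0.44, 4 in
base    = (0.14, 8.0, 1.0)    # granular base, a2 = 0.14, 8 in
subbase = (0.11, 10.0, 0.9)   # granular subbase, a3 = 0.11, m3 = 0.9

sn_provided = structural_number([surface, base, subbase])
sn_required = 3.7             # would come from the AASHTO design equation/nomograph
verdict = "OK" if sn_provided >= sn_required else "increase layer thicknesses"
print(f"SN provided = {sn_provided:.2f}, SN required = {sn_required:.2f} -> {verdict}")
```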
Procedia PDF Downloads 63
362 The Effects of Impact Forces and Kinematics of Two Different Stance Positions at Straight Punch Techniques in Boxing
Authors: Bergun Meric Bingul, Cigdem Bulgan, Ozlem Tore, Mensure Aydin, Erdal Bal
Abstract:
The aim of the study was to compare the impact forces and some kinematic parameters of two different stance positions for the straight punch in boxing. Nine elite boxers from the Turkish National Team (mean ± SD: age 19.33 ± 2.11 years, height 174.22 ± 3.79 cm, weight 66.0 ± 6.62 kg) participated in this study voluntarily. The athletes each performed one straight punch trial at a sandbag from each of the two stance positions (orthodox and southpaw). The trials were recorded at a frequency of 120 Hz using eight synchronized high-speed cameras (Oqus 7+), placed approximately at right angles to one another. The three-dimensional motion analysis was performed with a motion capture system (Qualisys, Sweden). Data were transferred to Windows-based data acquisition software, QTM (Qualisys Track Manager). An 11-segment model was used for the determination of the kinematic variables (calf, leg, punch, upper arm, lower arm, trunk). The sandbag was also markered for the calculation of the impact forces. The wand calibration method (with a T-stick) was used for field calibration. The mean velocity and acceleration of the punch, the mean acceleration of the sandbag, and the angles of the trunk, shoulder, hip and knee were calculated. Differences between the stances were compared with the Wilcoxon test using SPSS 20.0. According to the results, a statistically significant difference was found in the trunk angle on the sagittal plane (yz) (p < 0.05). Significant differences were also found in sandbag acceleration and impact forces between the stance positions (p < 0.05). The boxers achieved greater impact forces and accelerations in the orthodox stance position. It is therefore recommended to use the orthodox stance rather than the southpaw stance for the straight punch, especially for creating greater impact forces.
Keywords: boxing, impact force, kinematics, straight punch, orthodox, southpaw
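A minimal sketch of how the impact force can be recovered from the markered sandbag, assuming F = m·a with the acceleration taken as the second finite difference of the tracked marker position at 120 Hz. The sandbag mass and the displacement trajectory are invented for illustration.

```python
import numpy as np

fs = 120.0      # camera frame rate, Hz
m_bag = 35.0    # illustrative sandbag mass, kg

# Illustrative 1-D sandbag marker displacement (m) around an impact at
# t = 0.2 s: the bag is driven ~0.15 m away with an exponential rise.
t = np.arange(0.0, 0.5, 1.0 / fs)
x = np.where(t < 0.2, 0.0, 0.15 * (1.0 - np.exp(-30.0 * (t - 0.2))))

a = np.gradient(np.gradient(x, 1.0 / fs), 1.0 / fs)  # numerical x''(t)
force = m_bag * np.abs(a)                            # Newton's second law

print(f"peak sandbag acceleration: {np.abs(a).max():.0f} m/s^2")
print(f"peak impact force estimate: {force.max():.0f} N")
```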
Procedia PDF Downloads 326
361 Computational Fluid Dynamics Model of Various Types of Rocket Engine Nozzles
Authors: Konrad Pietrykowski, Michal Bialy, Pawel Karpinski, Radoslaw Maczka
Abstract:
The nozzle is the element of the rocket engine in which the conversion of the potential energy of the gases generated during combustion into the kinetic energy of the gas stream takes place. The design parameters of the nozzle have a decisive influence on the ballistic characteristics of the engine. Designing a nozzle assembly is, therefore, one of the most critical stages in developing a rocket engine design. The paper presents the results of simulations of three types of rocket propulsion nozzles, which differ in shape. The calculations were made using CFD (Computational Fluid Dynamics) in ANSYS Fluent software. The analysis covered a conical nozzle, a bell-type nozzle with a conical supersonic part, and a bell-type nozzle. The calculation results are presented in the form of pressure, velocity and turbulence kinetic energy distributions in the longitudinal section; the courses of these values along the nozzles are also presented. The results show that the conical nozzle generates strong turbulence in the critical section, which negatively affects the flow of the working medium. In the case of a bell nozzle, the change in wall shape eliminates the flow disturbances in the critical section, which reduces the probability of waves forming before or after the trailing edge. The most sophisticated construction is the bell-type nozzle, which maximizes performance without adding extra weight; thanks to these advantages, it can be used as a starter and auxiliary engine nozzle. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
Keywords: computational fluid dynamics, nozzle, rocket engine, supersonic flow
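A minimal sketch of the quasi-1-D isentropic relation that fixes a nozzle's area ratio before any CFD is run, assuming a calorically perfect gas. Solving A/A* for the exit Mach number is a standard preliminary design step, not part of the ANSYS Fluent study itself; the area ratio and γ are illustrative.

```python
from scipy.optimize import brentq

def area_ratio(M: float, gamma: float = 1.2) -> float:
    """Isentropic A/A* as a function of Mach number (perfect gas):
    A/A* = (1/M) * [ (2/(g+1)) * (1 + (g-1)/2 * M^2) ]^((g+1)/(2(g-1)))."""
    g = gamma
    return (1.0 / M) * ((2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M * M)) \
           ** ((g + 1.0) / (2.0 * (g - 1.0)))

# Supersonic exit Mach number for an exit-to-throat area ratio of 10
# (gamma ~ 1.2 is typical of hot combustion products).
eps = 10.0
M_exit = brentq(lambda M: area_ratio(M) - eps, 1.001, 20.0)
print(f"area ratio {eps:.0f} -> exit Mach number {M_exit:.2f}")
```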
Procedia PDF Downloads 158
360 Analysis of Thermal Effect on Functionally Graded Micro-Beam via Mixed Finite Element Method
Authors: Cagri Mollamahmutoglu, Ali Mercan, Aykut Levent
Abstract:
Studies concerning microstructures are becoming more important as the utilization of various micro-electro-mechanical systems (MEMS) increases. Thus, in recent years, the thermal buckling and vibration analysis of microstructures has been the subject of many investigations utilizing different numerical methods. In this study, the thermal effects on the mechanical response of a functionally graded (FG) Timoshenko micro-beam are presented in the framework of a mixed finite element formulation. Size effects are taken into consideration via modified couple stress theory. The mixed formulation is based on a functional which, in turn, is derived systematically via the Gateaux differential. After the resolution of all field equations of the beam, a potential operator is carefully constructed and then used to obtain the functional. The usual procedures of finite element approximation are utilized to derive the mixed finite element equations once the functional is obtained. The resulting finite element formulation allows the use of simple linear C₀ shape functions and avoids the shear-locking phenomenon, which is a common shortcoming of displacement-based formulations for moderately thick beams. The developed numerical scheme is used to obtain the effects of thermal loads on the static bending, free vibration and buckling of FG Timoshenko micro-beams for different power-law parameters, aspect ratios and boundary conditions. The versatility of the mixed formulation is demonstrated against other numerical methods, such as the generalized differential quadrature method (GDQM). Another attractive property of the formulation is that it allows direct calculation of the contribution of micro effects to the overall mechanical response.
Keywords: micro-beam, functionally graded materials, thermal effect, mixed finite element method
Procedia PDF Downloads 139
359 The Impact of Cognitive Load on Deceit Detection and Memory Recall in Children’s Interviews: A Meta-Analysis
Authors: Sevilay Çankaya
Abstract:
The detection of deception in children’s interviews is essential for assessing statement veracity. A widely used method for deception detection is building cognitive load, which is the logic of the cognitive interview (CI), and its effectiveness for adults is established. This meta-analysis delves into the effectiveness of inducing cognitive load as a means of enhancing veracity detection during interviews with children. Additionally, the effect of cognitive load on the total number of events children recall is assessed as a second part of the analysis. The current meta-analysis includes ten effect sizes from a database search. For the effect size calculation, Hedges’ g was used with a random-effects model in CMA version 2. A heterogeneity analysis was conducted to detect potential moderators. The overall result indicated that cognitive load had no significant effect on veracity outcomes (g = 0.052, 95% CI [-0.006, 1.25]). However, a high level of heterogeneity was found (I² = 92%). Age, participants’ characteristics, interview setting, and characteristics of the interviewer were coded as possible moderators to explain the variance. Age was a significant moderator (β = .021, p = .03, R² = 75%), but the analysis did not reveal statistically significant effects for the other potential moderators: participants’ characteristics (Q = 0.106, df = 1, p = .744), interview setting (Q = 2.04, df = 1, p = .154), and characteristics of the interviewer (Q = 2.96, df = 1, p = .086). For the second outcome, the total number of events recalled, the overall effect was significant (g = 4.121, 95% CI [2.256, 5.985]): cognitive load was effective for the total number of events recalled when interviewing children. All in all, while age plays a crucial role in determining the impact of cognitive load on veracity, the surrounding context, interviewer attributes, and inherent participant traits may not significantly alter the relationship. These findings highlight the need for more focused, age-specific methods when using cognitive load measures. It may be possible to improve the precision and dependability of deceit detection in children's interviews with the help of more studies in this field.
Keywords: deceit detection, cognitive load, memory recall, children interviews, meta-analysis
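A minimal sketch of the effect-size machinery named above: Hedges' g with its small-sample correction, pooled under a DerSimonian-Laird random-effects model with I². The per-study summary statistics are invented for illustration; CMA performs the equivalent computation internally.

```python
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample correction J."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    J = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)
    g = J * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2))  # sampling variance
    return g, var

# Invented per-study summaries: (m1, m2, sd1, sd2, n1, n2)
studies = [(5.2, 4.8, 1.1, 1.0, 30, 30),
           (6.0, 5.1, 1.4, 1.3, 25, 28),
           (4.9, 4.9, 1.2, 1.1, 40, 38)]
g, v = map(np.array, zip(*(hedges_g(*s) for s in studies)))

# DerSimonian-Laird random-effects pooling.
w = 1.0 / v
g_fixed = float(w @ g) / w.sum()
Q = float(w @ (g - g_fixed) ** 2)
df = len(g) - 1
tau2 = max(0.0, (Q - df) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1.0 / (v + tau2)
g_pooled = float(w_re @ g) / w_re.sum()
I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
print(f"pooled g = {g_pooled:.3f}, I^2 = {I2:.0f}%, tau^2 = {tau2:.3f}")
```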
Procedia PDF Downloads 55
358 Calculation of Electronic Structures of Nickel in Interaction with Hydrogen by Density Functional Theoretical (DFT) Method
Authors: Choukri Lekbir, Mira Mokhtari
Abstract:
Hydrogen–material interactions and mechanisms can be modeled at the nanoscale by quantum methods. In this work, the effect of hydrogen on the electronic properties of a cluster model of the material «nickel» has been studied using the density functional theoretical (DFT) method. Two types of clusters were optimized: nickel and the hydrogen–nickel system. In the case of nickel clusters (n = 1-6) without the presence of hydrogen, three types of electronic structures (neutral, cationic and anionic) were optimized at three levels of theory (B3LYP/LANL2DZ, PW91PW91/DGDZVP2, PBE/DGDZVP2). The comparison of the binding energies and bond lengths of the three structures of the nickel clusters (neutral, cationic and anionic) obtained at those levels shows that the results for the neutral and anionic nickel clusters are in good agreement with the experimental results. For the neutral and anionic nickel clusters, comparing the energies and bond lengths obtained at the three levels shows that PBE/DGDZVP2 matches the experimental results best. In the case of the anionic nickel clusters (n = 1-6) with the presence of hydrogen, optimization of the hydrogen–nickel (anionic) structures at the PBE/DGDZVP2 level shows that the binding energies and bond lengths increase compared to those obtained for the anionic nickel clusters without hydrogen, which reveals the armor effect exerted by hydrogen on the electronic structure of nickel, due to the storing of hydrogen energy within the nickel cluster structures. The comparison of the bond lengths of both cluster types shows an expansion of the cluster geometry due to the presence of hydrogen.
Keywords: binding energies, bond lengths, density functional theoretical, geometry optimization, hydrogen energy, nickel cluster
Procedia PDF Downloads 422
357 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces image detail by pooling; this operation overlooks the details of greatest concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results supported that the facial recognition algorithms based on deep learning detected structural and morphological information and rarely focused on specific markers such as stains and moles. The overall performance, the distributions of genuine scores and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
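A minimal sketch of score-level fusion between an algorithm score and an examiner score, assuming a simple weighted-sum rule and likelihood ratios estimated from Gaussian fits to the genuine and impostor score distributions. All scores, the fusion weight, and the operating point are simulated or assumed, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
# Simulated similarity scores: columns are (algorithm, examiner).
genuine = np.column_stack([rng.normal(0.75, 0.10, n), rng.normal(0.70, 0.12, n)])
impostor = np.column_stack([rng.normal(0.45, 0.10, n), rng.normal(0.40, 0.12, n)])

w = 0.6  # weight on the algorithm score (assumed; tuned on held-out data)
fuse = lambda s: w * s[:, 0] + (1.0 - w) * s[:, 1]
fg, fi = fuse(genuine), fuse(impostor)

# Likelihood ratio of a fused score under Gaussian models of each class.
def likelihood_ratio(s):
    return norm.pdf(s, fg.mean(), fg.std()) / norm.pdf(s, fi.mean(), fi.std())

# False reject rate at a fixed false accept rate of 0.1%.
threshold = np.quantile(fi, 0.999)
frr = float((fg < threshold).mean())
print(f"LR at score 0.65: {likelihood_ratio(0.65):.1f}; FRR @ FAR=0.1%: {frr:.3%}")
```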
Procedia PDF Downloads 130
356 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways
Authors: Anirudh Lahiri
Abstract:
Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.
Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity
Procedia PDF Downloads 43
355 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model ANUGA: Case Study of Jakarta, Indonesia
Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa
Abstract:
Catastrophe risk management can only be done if we are able to calculate the exposed risks. Jakarta is an important city economically, socially, and politically, and at the same time it is exposed to severe floods; on the other hand, flood risk calculation is still very limited in the area. This study calculated the flood risk for Jakarta using the two-dimensional model ANUGA together with the one-dimensional model HEC-RAS for 13 major rivers in Jakarta. ANUGA simulates the physical and dynamical processes of streamflow interacting with river geometry and land cover to produce a 1-meter-resolution inundation map. The streamflow input for the model was obtained from a hydrological analysis of rainfall data using the hydrologic model HEC-HMS. The probabilistic streamflow was derived from probabilistic rainfall using the Log-Pearson III, Normal and Gumbel statistical distributions, with compatibility testing by the Chi-square and Kolmogorov-Smirnov tests. The 2007 flood event was used as a comparison to evaluate the accuracy of the model output. Property damage estimates were calculated based on flood depth for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's flood loss estimates for the return periods of 1, 5, 10, 25, 50, and 100 years are, respectively, Rp 1.30 t, Rp 16.18 t, Rp 16.85 t, Rp 21.21 t, Rp 24.32 t, and Rp 24.67 t, against a total building value of Rp 434.43 t.
Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling
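A minimal sketch of the frequency-analysis step, assuming an annual-maximum rainfall series, scipy's Gumbel fit with a Kolmogorov-Smirnov check, and the standard quantile formula for a T-year return level. The sample data are invented for illustration.

```python
import numpy as np
from scipy import stats

# Invented annual-maximum daily rainfall series (mm).
rng = np.random.default_rng(7)
annual_max = stats.gumbel_r.rvs(loc=110, scale=35, size=40, random_state=rng)

# Fit the Gumbel distribution and test compatibility (Kolmogorov-Smirnov).
loc, scale = stats.gumbel_r.fit(annual_max)
ks = stats.kstest(annual_max, "gumbel_r", args=(loc, scale))
print(f"Gumbel fit: loc={loc:.1f}, scale={scale:.1f}, KS p-value={ks.pvalue:.2f}")

# Return level for return period T: quantile at non-exceedance 1 - 1/T.
for T in (1.01, 5, 10, 25, 50, 100):
    x_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
    print(f"T = {T:>6} yr -> design rainfall {x_T:.0f} mm")
```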
Procedia PDF Downloads 275
354 QSAR Modeling of Germination Activity of a Series of 5-(4-Substituent-Phenoxy)-3-Methylfuran-2(5H)-One Derivatives with Potential of Strigolactone Mimics toward Striga hermonthica
Authors: Strahinja Kovačević, Sanja Podunavac-Kuzmanović, Lidija Jevrić, Cristina Prandi, Piermichele Kobauri
Abstract:
The present study is based on the molecular modeling of a series of twelve 5-(4-substituent-phenoxy)-3-methylfuran-2(5H)-one derivatives which have potential as strigolactone mimics toward Striga hermonthica. The first step of the analysis included the calculation of molecular descriptors which numerically describe the structures of the analyzed compounds. The descriptors ALOGP (lipophilicity), AClogS (water solubility) and BBB (blood-brain barrier penetration) served as the input variables in multiple linear regression (MLR) modeling of germination activity toward S. hermonthica. Two MLR models were obtained. The first MLR model contains the ALOGP and AClogS descriptors, while the second one is based on these two descriptors plus the BBB descriptor. Despite breaking the Topliss-Costello rule, the second MLR model has much better statistical and cross-validation characteristics than the first one. The ALOGP and AClogS descriptors are often very suitable predictors of the biological activity of many compounds. They are very important descriptors of the biological behavior and availability of a compound in any biological system (i.e. the ability to pass through cell membranes). The BBB descriptor defines the ability of a molecule to pass through the blood-brain barrier. Besides the lipophilicity of a compound, this descriptor carries information about molecular bulkiness, on which its value strongly depends. According to the obtained results of the MLR modeling, these three descriptors are considered very good predictors of the germination activity of the analyzed compounds toward S. hermonthica seeds. This article is based upon work from COST Action (FA1206), supported by COST (European Cooperation in Science and Technology).
Keywords: chemometrics, germination activity, molecular modeling, QSAR analysis, strigolactones
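A minimal sketch of the MLR step, assuming descriptor vectors and measured germination activities for the twelve derivatives; the numeric values are invented, and leave-one-out cross-validation stands in for the cross-validation characteristics mentioned above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
# Invented descriptor matrix for 12 derivatives: columns [ALOGP, AClogS, BBB].
X = np.column_stack([rng.normal(2.5, 0.8, 12),
                     rng.normal(-3.0, 0.6, 12),
                     rng.normal(0.5, 0.2, 12)])
# Invented germination activities generated from the descriptors plus noise.
y = 0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0.0, 0.1, 12)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)

# Leave-one-out cross-validated predictions -> Q^2 (predictive R^2).
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1.0 - ((y - y_cv) ** 2).sum() / ((y - y.mean()) ** 2).sum()

print(f"coefficients (ALOGP, AClogS, BBB): {model.coef_.round(3)}")
print(f"R^2 = {r2:.3f}, Q^2(LOO) = {q2:.3f}")
```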
Procedia PDF Downloads 287
353 The Future of Insurance: P2P Innovation versus Traditional Business Model
Authors: Ivan Sosa Gomez
Abstract:
Digitalization has impacted the entire insurance value chain, and the growing movement towards P2P platforms and the collaborative economy is also beginning to have a significant impact. P2P insurance is defined as an innovation enabling policyholders to pool their capital, self-organize, and self-manage their own insurance. In this context, new InsurTech start-ups are emerging as peer-to-peer (P2P) providers, based on a model that differs from traditional insurance. As a result, although P2P platforms do not change the fundamental basis of insurance, they do enable potentially more efficient business models to be established in terms of ensuring the coverage of risk. It is therefore relevant to determine whether P2P innovation can have substantial effects on the future of the insurance sector. For this purpose, it is considered necessary to develop P2P innovation from a business perspective, as well as to build a comparison between a traditional model and a P2P model from an actuarial perspective. Objectives: The objectives are (1) to represent P2P innovation in the business model compared to the traditional insurance model and (2) to establish a comparison between a traditional model and a P2P model from an actuarial perspective. Methodology: The research design is defined as action research, in the sense of understanding and solving the problems of a collectivity linked to an environment, applying theory and best practices according to the approach. For this purpose, the study is carried out through the participatory variant, which involves the collaboration of the participants, given that in this design participants are considered experts. Prolonged immersion in the field is carried out as the main instrument for data collection. Finally, an actuarial premium calculation model is developed that allows projections of future scenarios and the drawing of conclusions about the two models. Main Contributions: From an actuarial and business perspective, we aim to contribute by developing a comparison of the two models in the coverage of risk, in order to determine whether P2P innovation can have substantial effects on the future of the insurance sector.
Keywords: InsurTech, innovation, business model, P2P, insurance
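A minimal sketch of one way the actuarial comparison can be framed, assuming a classical pure premium (expected frequency × severity) with an expense/profit loading for the traditional insurer, and a P2P pool that refunds the unclaimed share of contributions. All parameters are invented for illustration, and this is not the paper's own model.

```python
# Classical collective-risk pure premium: frequency * severity.
freq, severity = 0.08, 2500.0           # expected claims/policy/yr, avg claim
pure_premium = freq * severity          # 200.0

# Traditional model: expense + profit loading on top of the pure premium.
loading = 0.35
traditional_premium = pure_premium * (1.0 + loading)

# P2P model: members contribute to a pool; a smaller fee covers the platform
# (and stop-loss cover), and the unclaimed surplus is returned to members.
n_members, fee = 500, 0.12
contribution = pure_premium * (1.0 + fee)
claims_paid = 0.9 * n_members * pure_premium          # a favourable claims year
fee_amount = fee * n_members * pure_premium
refund = max(0.0, n_members * contribution - claims_paid - fee_amount) / n_members

print(f"traditional premium: {traditional_premium:.2f}")
print(f"P2P contribution:    {contribution:.2f}  expected refund: {refund:.2f}")
print(f"P2P net cost:        {contribution - refund:.2f}")
```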
Procedia PDF Downloads 92
352 Additive Manufacturing – Application to Next Generation Structured Packing (SpiroPak)
Authors: Biao Sun, Tejas Bhatelia, Vishnu Pareek, Ranjeet Utikar, Moses Tadé
Abstract:
Additive manufacturing (AM), commonly known as 3D printing, together with the continuing advances in parallel processing and computational modeling, has created a paradigm shift, involving significantly radical rethinking, in the design and operation of chemical processing plants, especially LNG plants. With rising energy demands, environmental pressures, and economic challenges, there is a continuing industrial need for disruptive technologies such as AM, which possess capabilities that can drastically reduce the cost of manufacturing and operating chemical processing plants in the future. However, the continuing challenge for 3D printing is its limited adaptability to re-designing process plant equipment, coupled with the non-existence of theory or models that could assist in selecting the optimal candidates out of the countless potential fabrications that are possible using AM. One of the most common packings used in the LNG process is structured packing in the packed column, a key unit operation in the process. In this work, we present an example of an optimum strategy for the application of AM to this important unit operation. Packed columns use a packing material through which the gas phase passes and comes into contact with the liquid phase flowing over the packing, typically performing the necessary mass transfer to enrich the products. Structured packing consists of stacks of corrugated sheets, typically inclined between 40° and 70° from the plane. Computational Fluid Dynamics (CFD) was used to test and model various geometries to study the governing hydrodynamic characteristics. The results demonstrate that the costly iterative experimental process can be minimized; furthermore, they also improve the understanding of the fundamental physics of the system at the multiscale level. SpiroPak, patented by Curtin University, represents an innovative structured packing solution currently at a technology readiness level (TRL) of 5~6. This packing exhibits remarkable characteristics, offering a substantial increase in surface area while significantly enhancing hydrodynamic and mass transfer performance. Recent studies have revealed that SpiroPak can reduce the pressure drop by 50~70% compared to commonly used commercial packings, and it can achieve 20~50% greater mass transfer efficiency (particularly in CO₂ absorption applications). The implementation of SpiroPak has the potential to reduce the overall size of columns and decrease power consumption, resulting in cost savings for both capital expenditure (CAPEX) and operational expenditure (OPEX) when applied to retrofitting existing systems or incorporated into new processes. Furthermore, pilot- to large-scale tests are currently underway to further advance and refine this technology.
Keywords: Additive Manufacturing (AM), 3D printing, Computational Fluid Dynamics (CFD), structured packing (SpiroPak)
Procedia PDF Downloads 87351 Molecular Modeling of Structurally Diverse Compounds as Potential Therapeutics for Transmissible Spongiform Encephalopathy
Authors: Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević, Lidija R. Jevrić
Abstract:
A prion is a protein, a particular form of which is considered an infectious agent. It is presumed to be the cause of the transmissible spongiform encephalopathies (TSEs). The protein it is composed of, called PrP, can fold in structurally distinct ways, and at least one of those 3D structures is transmissible to other prion proteins. Prions can be found in the brain tissue of healthy people and have a biological role. The structure of prions naturally occurring in healthy organisms is denoted PrPc, and the structure of the infectious prion is labeled PrPSc. PrPc may play a role in synaptic plasticity and neuronal development. It may also be required for maintenance of the neuronal myelin sheath, including a role in iron uptake and iron homeostasis. PrPSc can be considered an environmental pollutant. The main aim of this study was to carry out molecular modeling and calculation of molecular descriptors (lipophilicity, physico-chemical, and topological descriptors) of structurally diverse compounds which can be considered anti-prion agents. Molecular modeling was conducted with ChemBio3D Ultra version 12.0. The obtained 3D models were subjected to energy minimization using the MM2 molecular mechanics force field, with the cutoff for structure optimization set at a gradient of 0.1 kcal/(Å·mol). The Austin Model 1 (AM1) method was used for full geometry optimization of all structures. The obtained set of molecular descriptors is applied in the analysis of similarities and dissimilarities among the tested compounds. This study is an important step in the further development of quantitative structure-activity relationship (QSAR) models, which can be used for the prediction of anti-prion activity of newly synthesized compounds.Keywords: chemometrics, molecular modeling, molecular descriptors, prions, QSAR
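As an illustration of the descriptor-calculation step, the sketch below computes comparable lipophilicity, physico-chemical, and topological descriptors with the open-source RDKit library rather than the ChemBio3D workflow the authors used. The SMILES strings are placeholder molecules, not the anti-prion compounds from the study.

```python
# Illustrative alternative to the authors' ChemBio3D workflow:
# derive lipophilicity, physico-chemical, and topological descriptors
# with RDKit. The molecules below are placeholders only.
from rdkit import Chem
from rdkit.Chem import Descriptors

placeholder_smiles = {
    "compound_A": "CC(=O)Oc1ccccc1C(=O)O",        # aspirin, placeholder
    "compound_B": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",   # caffeine, placeholder
}

for name, smi in placeholder_smiles.items():
    mol = Chem.MolFromSmiles(smi)
    print(name,
          f"logP={Descriptors.MolLogP(mol):.2f}",      # lipophilicity
          f"MW={Descriptors.MolWt(mol):.1f}",          # physico-chemical
          f"TPSA={Descriptors.TPSA(mol):.1f}",         # polar surface area
          f"BalabanJ={Descriptors.BalabanJ(mol):.2f}"  # topological index
          )
```

A descriptor matrix of this kind is the typical input for the similarity/dissimilarity analysis and subsequent QSAR model building the abstract describes.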
Procedia PDF Downloads 322350 Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China
Authors: Enze Zhang
Abstract:
One of the most challenging steps in implementing virtual water content (VWC) analysis of crops is to properly obtain the total volume of consumptive water use (CWU), and therefore to choose a reliable crop CWU estimation method. In practice, many previous studies obtain crop CWU by following a classical procedure for calculating crop evapotranspiration, in which reference evapotranspiration is multiplied by appropriate coefficients, such as the crop coefficient and water stress coefficients. However, this manner of calculation requires large amounts of field experimental data at the point scale and, more seriously, may easily produce deviations between the calculated CWU and the actual CWU when current growing conditions differ from the standard conditions. Since evapotranspiration caused by crop planting always plays a vital role in the surface water-energy balance of an agricultural region, this study instead estimates crop evapotranspiration with the Budyko framework. After briefly introducing the development of the Budyko framework, we choose a modified unsteady-state Budyko framework to better evaluate the actual CWU and apply it to an agricultural irrigation area in the North China Plain that relies on groundwater for irrigation. Using agricultural statistical data, the calculated CWU was further converted into VWC and its subdivisions for crops at the annual scale. Results show that the average values of VWC, VWC_blue, and VWC_green all show a downward trend with increased agricultural production and improved acreage. Compared with previous research, the VWC calculated by the Budyko framework agrees well with some earlier studies, while for others the value is greater. Our research also suggests that this methodology and these findings may be reliable and convenient for the investigation of virtual water across agricultural regions of the world.Keywords: virtual water content, Budyko framework, consumptive water use, crop evapotranspiration
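For readers unfamiliar with the framework, the sketch below implements the classical steady-state Budyko curve in Fu's parametric form and converts the resulting consumptive water use into a virtual water content. The study itself uses a modified unsteady-state formulation, and all input values here are assumed for illustration.

```python
# Minimal sketch of the steady-state Budyko curve (Fu's equation) and
# the conversion from consumptive water use to virtual water content.
# The abstract uses a modified unsteady-state formulation; this
# illustrates only the core idea, with assumed input values.

def fu_actual_et(precip_mm: float, pet_mm: float, omega: float = 2.6) -> float:
    """Actual evapotranspiration (mm) from Fu's form of the Budyko curve:
    E/P = 1 + PET/P - (1 + (PET/P)**omega)**(1/omega)."""
    phi = pet_mm / precip_mm  # aridity index
    return precip_mm * (1 + phi - (1 + phi**omega) ** (1 / omega))

# Assumed illustrative values for one crop season on an irrigated plain.
precip = 480.0      # mm, growing-season precipitation (assumed)
pet = 950.0         # mm, potential evapotranspiration (assumed)
area_ha = 1_000.0   # harvested area, ha (assumed)
yield_t = 6_000.0   # crop production in tonnes (assumed)

et_mm = fu_actual_et(precip, pet)
cwu_m3 = et_mm / 1000 * area_ha * 10_000   # mm over ha -> m3
vwc = cwu_m3 / yield_t                     # m3 of water per tonne of crop
print(f"ET = {et_mm:.0f} mm, CWU = {cwu_m3:,.0f} m3, VWC = {vwc:.0f} m3/t")
```

The appeal of this route is visible in the inputs: only precipitation and potential evapotranspiration are needed, rather than the point-scale crop and stress coefficients required by the classical procedure.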
Procedia PDF Downloads 333349 The Effect of Metal Transfer Modes on Mechanical Properties of 3CR12 Stainless Steel
Authors: Abdullah Kaymakci, Daniel M. Madyira, Ntokozo Nkwanyana
Abstract:
The effect of metal transfer modes on the mechanical properties of welded 3CR12 stainless steel was investigated. This was achieved by butt welding 10 mm thick plates of 3CR12 while varying the welding position for different metal transfer modes. The ASME IX: 2010 (Welding and Brazing Qualifications) code was used as the basis for the welding variables. The material and thickness of the base metal were kept constant, together with the filler metal, shielding gas, and joint type. The effect of the metal transfer modes on the microstructure and mechanical properties of the 3CR12 steel was then investigated, as it was hypothesized that the change in welding position would affect the transfer modes, partly due to the effect of gravity. The microscopic examination revealed that the substrate was characterized by a dual-phase microstructure of alpha-phase and beta-phase grain structures. The spectroscopic examination results and the ferrite factor calculation showed that the microstructure was expected to be ferritic-martensitic after air cooling. The tensile strength and Charpy impact energy were measured as 498 MPa and 102 J, in line with the mechanical properties given in the material certificate. The heat input into the material was observed to exceed 1 kJ/mm, the limiting value for grain growth during the welding process, and grain growth was accordingly observed in the heat-affected zone of the welded material. A ferritic-martensitic microstructure was observed during the microscopic examination. The grain growth altered the mechanical properties of the test material: globular down-hand welding gave higher mechanical properties than spray down-hand, and globular vertical-up gave better mechanical properties than globular vertical-down.Keywords: welding, metal transfer modes, stainless steel, microstructure, hardness, tensile strength
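The heat input figure quoted above follows from the standard arc-welding relation Q = ηUI·60/(1000v). The sketch below evaluates it for two transfer modes; the voltage, current, travel speed, and arc efficiency values are assumptions, since the actual welding parameters are not given in the abstract.

```python
# Arc-welding heat input sketch. The abstract reports heat input above
# 1 kJ/mm; this shows the standard calculation with assumed GMAW
# parameters (the study's actual welding parameters are not given).

def heat_input_kj_per_mm(voltage_v: float, current_a: float,
                         travel_speed_mm_min: float,
                         efficiency: float = 0.8) -> float:
    """Heat input Q = eta * U * I * 60 / (1000 * v), in kJ/mm."""
    return efficiency * voltage_v * current_a * 60 / (1000 * travel_speed_mm_min)

# Assumed parameter sets for two transfer modes (illustrative only).
for mode, (u, i, v) in {
    "globular": (26.0, 220.0, 250.0),   # V, A, mm/min
    "spray":    (30.0, 280.0, 350.0),
}.items():
    q = heat_input_kj_per_mm(u, i, v)
    flag = "grain growth expected" if q > 1.0 else "below 1 kJ/mm limit"
    print(f"{mode:8s}: Q = {q:.2f} kJ/mm ({flag})")
```

Both assumed parameter sets land just above 1 kJ/mm, consistent with the grain growth the study observed in the heat-affected zone.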
Procedia PDF Downloads 252348 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project
Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen
Abstract:
This article is a first step in researching and outlining the structure of a geotechnical database for the geological survey of a power project; in this report, the database has been created for the Bac Ai pumped storage hydropower project. To provide a method of organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used to streamline the exploitation, use, and analysis of data in support of design work in power engineering consulting. Three-dimensional (3D) geotechnical models, covering stratigraphy, lithology, porosity, and similar properties, are created from the survey data. For the Bac Ai pumped storage hydropower project, the 3D geotechnical model comprises six closely stacked stratigraphic formations built with the Horizons method, whereas the engineering geological parameters are modeled by geostatistical methods. Accuracy and reliability are assessed through error statistics, empirical evaluation, and expert methods. Analysis of the three-dimensional model allows better visualization of volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping construction designers better identify and offer the most optimal design solutions for the project. The database ensures continuous updating and synchronization and enables 3D modeling of geological and topographic data that integrates with design data according to building information modeling (BIM), providing the base platform for BIM and GIS integration.Keywords: database, engineering geology, 3D Model, RockWorks, Bac Ai pumped storage hydropower project
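To make the database structure concrete, the following sketch sets up a minimal, hypothetical relational layout for borehole and lithology-interval data in SQLite. RockWorks manages its own datastore, so this is only an illustration of the kind of organization such a spatial database requires; all table names and records are invented.

```python
# Illustrative sketch of a minimal geotechnical survey database
# (boreholes plus lithology intervals). RockWorks manages its own
# datastore; this hypothetical SQLite layout only shows the kind of
# structure such a database needs to feed a 3D model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE borehole (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    easting   REAL NOT NULL,   -- collar X coordinate
    northing  REAL NOT NULL,   -- collar Y coordinate
    elevation REAL NOT NULL,   -- collar Z, m above datum
    depth     REAL NOT NULL    -- total drilled depth, m
);
CREATE TABLE lithology (
    borehole_id INTEGER REFERENCES borehole(id),
    top_m       REAL NOT NULL, -- interval top, measured depth
    base_m      REAL NOT NULL, -- interval base, measured depth
    unit        TEXT NOT NULL, -- stratigraphic/lithologic unit code
    porosity    REAL           -- lab result, optional
);
""")

# Hypothetical example records.
conn.execute("INSERT INTO borehole VALUES "
             "(1, 'BA-01', 575200.0, 1302450.0, 412.5, 80.0)")
conn.executemany("INSERT INTO lithology VALUES (?, ?, ?, ?, ?)",
                 [(1, 0.0, 6.5, 'Qal', 0.32),
                  (1, 6.5, 42.0, 'Granite-W', 0.11)])

for unit, thickness in conn.execute(
        "SELECT unit, base_m - top_m FROM lithology"):
    print(f"unit {unit}: thickness {thickness:.1f} m")
```

Keying every interval to a georeferenced collar is what lets a modeling package stack the six stratigraphic horizons and interpolate parameters geostatistically, and it is also the hook for the BIM and GIS integration mentioned above.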
Procedia PDF Downloads 168