Search results for: thermodynamic calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1609

1369 Numerical Calculation of Dynamic Response of Catamaran Vessels Based on 3D Green Function Method

Authors: Md. Moinul Islam, N. M. Golam Zakaria

Abstract:

Seakeeping analysis of catamaran vessels in the early stages of design has become an important issue, as it dictates the seakeeping characteristics and ensures safe navigation during the voyage. In the present paper, a 3D numerical method for the seakeeping prediction of catamaran vessels is presented using the 3D Green function method. Both the steady and unsteady potential flow problems are dealt with. Using 3D linearized potential theory, the dynamic wave loads and the subsequent response of the vessel are computed. For validation of the numerical procedure, a catamaran composed of twin Wigley-form demi-hulls is used. The results of the present calculation are compared with the available experimental data and with other calculations. The numerical procedure is also carried out for an NPL-based round-bilge catamaran, and hydrodynamic coefficients along with heave and pitch motion responses are presented for various Froude numbers. The results obtained by the present numerical method are found to be in fairly good agreement with the available data. The method can thus be used as a design tool for predicting the seakeeping behavior of catamaran ships in waves.
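
As a minimal sketch of the linear frequency-domain computation described above, the snippet below solves the coupled heave-pitch equations of motion for a single wave frequency. All coefficients are illustrative placeholders, not the paper's data; in the actual procedure the added mass, damping, and excitation come from the 3D Green function solver.

```python
import numpy as np

# Solve [-w^2 (M + A) + i w B + C] x = F for heave/pitch at one wave frequency.
# All coefficient values below are placeholders, not results from the paper.
M = np.array([[1.0e5, 0.0], [0.0, 5.0e6]])      # mass / inertia matrix
A = np.array([[4.0e4, 1.0e3], [1.0e3, 2.0e6]])  # added mass (from Green function solver)
B = np.array([[3.0e4, 5.0e2], [5.0e2, 1.5e6]])  # radiation damping
C = np.array([[9.0e5, 0.0], [0.0, 4.0e7]])      # hydrostatic restoring
F = np.array([2.0e5, 1.0e6])                    # wave excitation amplitudes (real here; complex in general)

w = 1.2                                          # wave frequency, rad/s
H = -w**2 * (M + A) + 1j * w * B + C             # frequency-domain impedance matrix
x = np.linalg.solve(H, F)                        # complex heave/pitch amplitudes
print("heave amplitude:", abs(x[0]), "phase:", np.angle(x[0]))
print("pitch amplitude:", abs(x[1]))
```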

Keywords: catamaran, hydrodynamic coefficients, motion response, 3D Green function

Procedia PDF Downloads 188
1368 Computational Insight into a Mechanistic Overview of Water Exchange Kinetics and Thermodynamic Stabilities of Bis and Tris-Aquated Complexes of Lanthanides

Authors: Niharika Keot, Manabendra Sarma

Abstract:

A thorough investigation of Ln³⁺ complexes with more than one inner-sphere water molecule is crucial for designing high-relaxivity contrast agents (CAs) used in magnetic resonance imaging (MRI). This study accomplished a comparative stability analysis of two hexadentate (H₃cbda and H₃dpaa) and two heptadentate (H₄peada and H₃tpaa) ligands with Ln³⁺ ions. The higher stability of the hexadentate H₃cbda and heptadentate H₄peada ligands has been confirmed by the binding affinity and Gibbs free energy analysis in aqueous solution. In addition, energy decomposition analysis (EDA) reveals the higher binding affinity of the peada⁴⁻ ligand than the cbda³⁻ ligand towards Ln³⁺ ions due to the higher charge density of the peada⁴⁻ ligand. Moreover, a mechanistic overview of water exchange kinetics has been carried out based on the strength of the metal–water bond. The strength of the metal–water bond follows the trend Gd–O47 (w) > Gd–O39 (w) > Gd–O36 (w) in the case of the tris-aquated [Gd(cbda)(H₂O)₃] and Gd–O43 (w) > Gd–O40 (w) for the bis-aquated [Gd(peada)(H₂O)₂]⁻ complex, which was confirmed by bond length, electron density (ρ), and electron localization function (ELF) at the corresponding bond critical points. Our analysis also predicts that the activation energy barrier decreases with the decrease in bond strength; hence kₑₓ increases. The ¹⁷O and ¹H hyperfine coupling constant values of all the coordinated water molecules were different, calculated by using the second-order Douglas–Kroll–Hess (DKH2) approach. Furthermore, the ionic nature of the bonding in the metal–ligand (M–L) bond was confirmed by the Quantum Theory of Atoms-In-Molecules (QTAIM) and ELF along with energy decomposition analysis (EDA). We hope that the results can be used as a basis for the design of highly efficient Gd(III)-based high-relaxivity MRI contrast agents for medical applications.
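
A reader can connect the reported barrier-rate trend through the Eyring relation, kₑₓ = (k_B T / h) exp(−ΔG‡/RT): a lower barrier gives a faster exchange. A minimal sketch with an assumed, illustrative activation free energy (not a value from the paper):

```python
import numpy as np

# Eyring equation: k_ex = (k_B * T / h) * exp(-dG_act / (R * T)).
# The barrier below is an illustrative assumption, not taken from the paper.
k_B, h, R = 1.380649e-23, 6.62607015e-34, 8.314462618  # SI units
T = 298.15            # temperature, K
dG_act = 45.0e3       # assumed activation free energy, J/mol

k_ex = (k_B * T / h) * np.exp(-dG_act / (R * T))
print(f"water exchange rate k_ex ~ {k_ex:.3e} s^-1")
```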

Keywords: MRI contrast agents, lanthanide chemistry, thermodynamic stability, water exchange kinetics

Procedia PDF Downloads 53
1367 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are established ways to calculate reliability, unreliability, failure density, and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows, and macOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to these functions alone. The program has many benefits over similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this approach is that no technical details are needed: it can be applied to any part whose time to failure must be known in order to plan appropriate maintenance, maximize usage, and minimize costs. In this case, calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure or until the end of the study (whichever came first) was recorded. The dataset consists of two variables: hours and status. Hours gives the running time of each fan, and status records the event: 1 = failed, 0 = censored. Censored data represent cases that could not be tracked to the end, so the fan could have failed or survived afterwards. Obtaining the result with R was easy and quick, and the program takes the censored data into account in the results, which is not so easy in a hand calculation. For the purposes of the paper, the results from the R program have been compared to hand calculations in two different cases: censored data treated as failures and censored data treated as successes. All three results differ significantly from one another. If the user decides to use R for further calculations, its handling of censored data will give more precise results than hand calculation.
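
The censoring-aware estimate the abstract describes can be illustrated with a simple exponential lifetime model; the paper itself uses R's survival tooling, and the data below are made-up stand-ins for the fan records:

```python
import numpy as np

# status: 1 = fan failed, 0 = censored (still running at the end of the study).
# Values are illustrative, not the study's actual records.
hours  = np.array([4500., 4600., 11500., 11500., 15600., 16000., 61000.])
status = np.array([1,     1,     0,      0,      1,      0,      0     ])

# MLE for the exponential rate: failures divided by total accumulated run time.
# Censored units contribute their run time but no failure event.
lam = status.sum() / hours.sum()
mttf = 1.0 / lam
print(f"failure rate = {lam:.3e} per hour, MTTF = {mttf:.0f} hours")

# Reliability at 8000 hours under the fitted model: R(t) = exp(-lam * t)
print("R(8000 h) =", np.exp(-lam * 8000.0))
```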

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 379
1366 Delivery System Design of Local Parts to Reduce Logistics Costs in the Automotive Industry

Authors: Alesandro Romero, Inaki Maulida Hakim

Abstract:

This research was conducted in an automotive company in Indonesia to overcome the problem of high logistics cost, which manifests as a large number of additional truck deliveries. From the breakdown of the problem, the route with the highest gap value, RE-04, was chosen. The research methodology starts from calculating the ideal condition, making a simulation, calculating the ideal logistics cost, and proposing an improvement. From the calculation of the ideal condition, the box arrangement on the truck was redone; the average efficiency was 97.4% with three truck deliveries per day. The route simulation uses Tecnomatix Plant Simulation software to visualize for the company how the system operates on route RE-04 under the ideal condition. Furthermore, the calculation of the logistics cost of the ideal condition shows savings of Rp53.011.800,00 per month. The last step is proposing improvements in the area of route RE-04. The route arrangement is done with the Saving Method, and the visiting sequence of each supplier with the Nearest Neighbor heuristic. The proposed improvements result in three new route groups, which are expected to decrease logistics cost by Rp3.966.559,40 per day and increase average truck efficiency by 8.78%.
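
The Saving Method referred to here is commonly the Clarke-Wright procedure; a hedged sketch with a hypothetical plant-supplier distance matrix (not the company's data):

```python
import numpy as np

# Clarke-Wright savings: merging suppliers i and j onto one milk run saves
# d(0,i) + d(0,j) - d(i,j) versus two separate round trips from the plant (node 0).
d = np.array([[0, 12, 18, 9, 20],
              [12, 0, 7, 15, 11],
              [18, 7, 0, 10, 6],
              [9, 15, 10, 0, 14],
              [20, 11, 6, 14, 0]], dtype=float)  # hypothetical distances, km

savings = []
n = d.shape[0]
for i in range(1, n):
    for j in range(i + 1, n):
        s = d[0, i] + d[0, j] - d[i, j]
        savings.append((s, i, j))

# Merge candidates in order of decreasing saving (subject to capacity in practice)
for s, i, j in sorted(savings, reverse=True):
    print(f"merge suppliers {i} and {j}: saving {s:.1f} km")
```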

Keywords: efficiency, logistics cost, milk run, saving method, simulation

Procedia PDF Downloads 421
1365 Evaluation of Low-Reducible Sinter in Blast Furnace Technology by Mathematical Model Developed at Centre ENET, VSB–Technical University of Ostrava

Authors: S. Jursová, P. Pustějovská, S. Brožová, J. Bilík

Abstract:

The paper deals with possibilities of interpreting iron ore reducibility tests. It presents a mathematical model developed at Centre ENET, VŠB–Technical University of Ostrava, Czech Republic, for evaluating metallurgical material of blast furnace feedstock such as iron ore, sinter, or pellets. From the test data, the model predicts the material's usage in blast furnace technology and its effects on the production parameters of the shaft aggregate. At the beginning, the paper sums up the general concepts and experience in mathematical modelling of iron ore reduction, and it presents the basic equations of the calculation and the main parts of the developed model. The experimental part gives an example of the model's usage for predictive calculation: it describes the material and the method of the iron ore reducibility test carried out, and then graphically interprets the effects of the material used on carbon consumption, the rate of direct reduction, and the overall reduction process.

Keywords: blast furnace technology, iron ore reduction, mathematical model, prediction of iron ore reduction

Procedia PDF Downloads 652
1364 Device Control Using Brain Computer Interface

Authors: P. Neeraj, Anurag Sharma, Harsukhpreet Singh

Abstract:

In recent years, Brain-Computer Interface (BCI) schemes based on the steady-state visual evoked potential (SSVEP) have earned much consideration. This study develops an SSVEP-based BCI scheme that can switch a device mock-up between two distinct states, ON and OFF. Two distinct flicker frequencies in the low-frequency band were used to evoke the SSVEPs and were shown on a liquid crystal display (LCD) screen using LabVIEW. Two stimulus colors, yellow and blue, were used to train the system on SSVEPs. The electroencephalogram (EEG) signals were recorded from the occipital region. Features were extracted from the brain signals using the discrete wavelet transform, and a prominent multilayer neural network algorithm (NNA) was used to classify the SSVEP signals. Regression plots obtained while training the network with different algorithms demonstrated that the Levenberg–Marquardt training algorithm reached an accuracy of 93.9%, which is superior to the other training algorithms.
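
The feature-extraction step can be sketched as follows; the synthetic signal, the wavelet choice, and the decomposition depth are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
import pywt  # PyWavelets

# Decompose an EEG epoch with the discrete wavelet transform and use
# sub-band energies as inputs to a classifier. The synthetic signal below
# stands in for a recorded occipital EEG epoch during 7.5 Hz stimulation.
fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.random.randn(t.size)

coeffs = pywt.wavedec(eeg, wavelet="db4", level=5)   # [cA5, cD5, ..., cD1]
energies = [float(np.sum(c ** 2)) for c in coeffs]   # one energy per sub-band
print("sub-band energies (A5, D5..D1):", [f"{e:.1f}" for e in energies])
# These energies would then feed the multilayer neural network classifier.
```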

Keywords: brain computer interface, electroencephalography, steady-state visual evoked potential, wavelet transform, neural network

Procedia PDF Downloads 314
1363 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses a minimum-distance calculation for codebook generation; this time-consuming calculation, performed on each pixel value, leads to high computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the vector differences between the vectors are taken as the new generation of codebook vectors. The codebook is updated by comparing each newly generated vector with a threshold value, seeking minimum error with respect to the parent vector; the minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is used to suppress the degraded portion of the image, thereby enhancing the image through the adaptive searching capability of vector quantization with the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for image enhancement, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
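
For contrast with the proposed scheme, here is a minimal sketch of the baseline minimum-distance codebook update (one LBG/k-means-style loop) that the vector difference algorithm modifies; data and codebook size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.random((1000, 4))                        # training vectors, e.g. 2x2 pixel blocks
codebook = vectors[rng.choice(1000, size=8, replace=False)]  # initial 8-entry codebook

for _ in range(10):
    # Assign each vector to its nearest codeword (the costly distance step)
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    # Move each codeword to the centroid of its assigned vectors
    for k in range(codebook.shape[0]):
        members = vectors[nearest == k]
        if members.size:
            codebook[k] = members.mean(axis=0)

dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
print("mean distortion:", dists.min(axis=1).mean())
```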

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 239
1362 Screen Method of Distributed Cooperative Navigation Factors for Unmanned Aerial Vehicle Swarm

Authors: Can Zhang, Qun Li, Yonglin Lei, Zhi Zhu, Dong Guo

Abstract:

To address the problem of factor screening in the distributed collaborative navigation of dense UAV swarms, an efficient distributed collaborative navigation factor screening method is proposed. The method considers the balance between computing load and positioning accuracy. The proposed algorithm uses a factor graph model to implement a distributed collaborative navigation algorithm, with the GNSS information of each UAV and the ranging information between UAVs serving as the positioning factors. In this distributed scheme, a local factor graph is established for each UAV, and the positioning factors of nodes with good geometric position distribution and small variance are selected to participate in the navigation calculation. To demonstrate and verify the proposed method, simulations and experiments in different scenarios were performed. Simulation results show that the proposed scheme achieves a good balance between computing load and positioning accuracy in the distributed cooperative navigation calculation of a UAV swarm. The proposed algorithm has important theoretical and practical value for both industry and academia.

Keywords: screen method, cooperative positioning system, UAV swarm, factor graph, cooperative navigation

Procedia PDF Downloads 50
1361 Stress Analysis of Ceramic Heads of Different Sizes under Destruction Tests

Authors: V. Fuis, P. Janicek, T. Navrat

Abstract:

The global problem solved here is the calculation of the parameters of the ceramic material from a set of destruction tests of ceramic heads of total hip joint endoprostheses. The standard way of calculating the material parameters consists in carrying out a set of three- or four-point bending tests on specimens cut out from parts of the ceramic material to be analysed. In the case of ceramic heads, it is not possible to cut out specimens of the required dimensions because the heads are too small (if the cut-out specimens were smaller than the normalized ones, the material parameters derived from them would exhibit higher strength values than the given ceramic material really has). A special device for head destruction was designed, and the local problem solved here is the modification of this destructive device based on the analysis of the tensile stress in the head for two different values of the depth of the conical hole in the head. The goal of the device modification is to shift the location of the extreme value of the maximum principal stress σ₁ from the region of the bottom of the head's hole to its opening. This modification will increase the credibility of the material properties of the bioceramics, which will be determined from a set of head destructions using the Weibull weakest-link theory.
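
The Weibull weakest-link evaluation can be sketched as a median-rank regression on head fracture loads; the loads below are illustrative, not the paper's measurements:

```python
import numpy as np

# Fit a two-parameter Weibull distribution to sorted fracture loads.
loads = np.sort(np.array([38.1, 41.5, 44.0, 46.2, 48.8, 51.0, 54.3]))  # kN, illustrative
n = loads.size
ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median-rank estimate

# Linearized Weibull CDF: ln(-ln(1 - F)) = m ln(x) - m ln(x0)
x = np.log(loads)
y = np.log(-np.log(1.0 - ranks))
m, c = np.polyfit(x, y, 1)
x0 = np.exp(-c / m)
print(f"Weibull modulus m = {m:.2f}, characteristic load x0 = {x0:.1f} kN")
```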

Keywords: ceramic heads, depth of the conical hole, destruction test, material parameters, principal stress, total hip joint endoprosthesis

Procedia PDF Downloads 394
1360 Experimental and Theoretical Characterization of Supramolecular Complexes between 7-(Diethylamino)Quinoline-2(1H)-One and Cucurbit[7]uril

Authors: Kevin A. Droguett, Edwin G. Pérez, Denis Fuentealba, Margarita E. Aliaga, Angélica M. Fierro

Abstract:

Supramolecular chemistry is a field of growing interest. In particular, studying the formation of host–guest complexes between macrocycles and dyes is highly attractive due to their potential applications, such as drug delivery, catalysis, and sensing, among others. Among the dyes of interest in the literature are the quinolinone derivatives: these molecules have good optical properties as well as chemical and thermal stability, making them suitable for developing fluorescent probes. Among the macrocycles described in the literature are the cucurbiturils, a water-soluble macromolecule family with a hydrophobic cavity and two identical carbonyl portals. The thermodynamic analysis of such supramolecular systems helps in understanding the affinity between host and guest, their interactions, and the main stabilization energy of the complex. In this work, two 7-(diethylamino)quinoline-2(1H)-one derivatives (QD1–2) and their interaction with cucurbit[7]uril (CB[7]) were studied from an experimental and in-silico point of view. Experimentally, the complexes showed a 1:1 stoichiometry by HRMS-ESI and isothermal titration calorimetry (ITC). Inclusion of the derivatives in the macrocycle leads to an increase in fluorescence intensity, and the pKa value of QD1–2 exhibits almost no variation after formation of the complex. The thermodynamics of the inclusion complexes was investigated using ITC; the results demonstrate a non-classical hydrophobic effect with a minimal contribution from the entropy term and binding constants on the order of 10⁶ for both ligands. Additionally, molecular dynamics simulations were carried out for 300 ns in explicit solvent at NTP conditions. Our findings show that the complex remains stable during the simulation (RMSD ~1 Å) and that hydrogen bonds contribute to the stabilization of the systems. Finally, thermodynamic parameters from MM-PBSA calculations were obtained to generate new computational insights for comparison with the experimental results.
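
The reported binding constants translate directly into binding free energies; a minimal sketch using the order of magnitude quoted above:

```python
import numpy as np

# Thermodynamics of the 1:1 inclusion complex from the ITC binding constant:
# dG = -R * T * ln(K); K ~ 1e6 is the order of magnitude reported for both ligands.
R, T = 8.314462618, 298.15         # J/(mol K), K
K = 1.0e6                           # binding constant, M^-1 (order of magnitude)

dG = -R * T * np.log(K) / 1000.0    # kJ/mol
print(f"binding free energy dG ~ {dG:.1f} kJ/mol")
# A non-classical hydrophobic effect means the enthalpy term dH (from ITC)
# supplies most of the driving force in dG = dH - T*dS, with -T*dS small.
```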

Keywords: host-guest complexes, molecular dynamics, quinolin-2(1H)-one derivatives dyes, thermodynamics

Procedia PDF Downloads 63
1359 Determination of Non-CO₂ Greenhouse Gas Emission in the Electronics Industry

Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim

Abstract:

Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDC) by 2020 and will be assessed on their performance in reducing GHG; after that, they shall propose a reduction target higher than the previous one every five years. An accurate method for calculating greenhouse gas emissions is therefore essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO₂ GHGs (CF₄, NF₃, N₂O, SF₆, and so on) are widely used in the fabrication processes of semiconductor manufacturing and the etching/deposition processes of display manufacturing. The global warming potential (GWP) values of these non-CO₂ gases are much higher than that of CO₂, which means they have a greater effect on global warming. GHG calculation methods for the electronics industry are therefore provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they are being discussed at the ISO/TC 146 meeting. As discussed earlier, precision and accuracy in calculating non-CO₂ GHG are becoming more important. Thus, this study aims to discuss the implications of the calculating methods by comparing the methods of the IPCC and the EPA. In conclusion, after analyzing the two methods, the EPA method is more detailed, and it also provides a calculation for N₂O. In the case of the default emission factors, the IPCC provides more conservative results than the EPA: the IPCC factor was developed for calculating national GHG emissions, while the EPA factor was developed specifically for the U.S. and its environmental issues. Semiconductor factory 'A' measured F-gas according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and it was observed that its emission factor shows a higher DRE than the default DRE factors of the IPCC and EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factor (if possible) at the time of reporting its Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
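
The generic accounting that both the IPCC and EPA methods build on can be sketched as follows; all numeric values are illustrative placeholders, not official default factors:

```python
# CO2-equivalent emission = gas used x (1 - fraction utilised in process)
#                           x (1 - DRE of abatement) x GWP.
# All numbers below are illustrative placeholders, not IPCC or EPA defaults.
gas_purchased_kg = 1000.0   # e.g. CF4 consumed by the fab in a year
fraction_utilised = 0.3     # share destroyed/converted in the process chamber
dre = 0.9                   # destruction and removal efficiency of abatement
gwp_cf4 = 6630.0            # 100-year GWP of CF4 (IPCC AR5 value)

emitted_kg = gas_purchased_kg * (1 - fraction_utilised) * (1 - dre)
co2e_tonnes = emitted_kg * gwp_cf4 / 1000.0
print(f"emission: {emitted_kg:.0f} kg CF4 = {co2e_tonnes:.0f} t CO2e")
```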

Keywords: non-CO₂ GHG, GHG emission, electronics industry, measuring method

Procedia PDF Downloads 262
1358 A New Social Vulnerability Index for Evaluating Social Vulnerability to Climate Change at the Local Scale

Authors: Cuong V Nguyen, Ralph Horne, John Fien, France Cheong

Abstract:

Social vulnerability to climate change is increasingly being acknowledged, and proposals to measure and manage it are emerging. Building upon this work, this paper proposes an approach to social vulnerability assessment using a new mechanism to aggregate and account for causal relationships among components of a Social Vulnerability Index (SVI). To operationalize this index, the authors propose a means to develop an appropriate primary dataset through the application of a specifically designed household survey questionnaire. The data collection and analysis, including calibration and calculation of the SVI, are demonstrated through application to a case study city in central coastal Vietnam. Calculating the SVI at the fine-grained local neighbourhood scale provides high resolution in vulnerability assessment and also obviates the need for secondary data, which may be unavailable or problematic, particularly at the local scale in developing countries. The SVI household survey is underpinned by the results of a Delphi survey, in-depth interviews, and focus group discussions with local environmental professionals and community members. The research reveals inherent limitations of existing SVIs but also indicates their potential for assessing social vulnerability and informing decisions on responding to climate change at the local scale.
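
A minimal sketch of the composite-index aggregation an SVI performs is given below; the indicator names and weights are hypothetical, whereas the paper derives its aggregation from causal relationships among components:

```python
import numpy as np

# Normalize indicators to [0, 1] and combine with weights into one score
# per household. Names, values, and weights are illustrative assumptions.
indicators = {
    "exposure":          np.array([0.2, 0.7, 0.5]),
    "sensitivity":       np.array([0.6, 0.4, 0.9]),
    "adaptive_capacity": np.array([0.8, 0.3, 0.5]),  # higher = less vulnerable
}
weights = {"exposure": 0.4, "sensitivity": 0.3, "adaptive_capacity": 0.3}

svi = (weights["exposure"] * indicators["exposure"]
       + weights["sensitivity"] * indicators["sensitivity"]
       + weights["adaptive_capacity"] * (1 - indicators["adaptive_capacity"]))
print("household SVI scores:", np.round(svi, 2))
```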

Keywords: climate change, local scale, social vulnerability, social vulnerability index

Procedia PDF Downloads 405
1357 Effects of Local Ground Conditions on Site Response Analysis Results in Hungary

Authors: Orsolya Kegyes-Brassai, Zsolt Szilvágyi, Ákos Wolf, Richard P. Ray

Abstract:

Local ground conditions have a substantial influence on the seismic response of structures, and their inclusion in seismic hazard assessment and structural design can be realized at different levels of sophistication. However, response results based on more advanced calculation methods, e.g., nonlinear or equivalent-linear site analysis, tend to show significant discrepancies when compared to simpler approaches. This project's main objective was to compare results from several 1-D site response programs to the Eurocode 8 design spectra. Data from in-situ site investigations were used for assessing local ground conditions at several locations in Hungary. After a discussion of the in-situ measurements and calculation methods used, a comprehensive evaluation of all major contributing factors for site response is given. While the Eurocode spectra should account for local ground conditions through soil classification, there is a wide variation in peak ground acceleration determined from the 1-D analyses versus Eurocode. Results show that the current Eurocode 8 design spectra may not be conservative enough to account for local ground conditions typical of Hungary.
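
Eurocode 8 classifies ground largely through Vs,30; a minimal sketch of that computation from a layered shear-wave velocity profile (illustrative values, e.g. from an MASW or seismic CPT survey):

```python
import numpy as np

# Vs,30 = 30 / sum(h_i / v_i) over the layers in the top 30 m.
h = np.array([3.0, 7.0, 10.0, 10.0])          # layer thicknesses, m (sum = 30)
vs = np.array([150.0, 220.0, 300.0, 400.0])   # shear-wave velocities, m/s

vs30 = 30.0 / np.sum(h / vs)
print(f"Vs,30 = {vs30:.0f} m/s")
# EC8 then maps Vs,30 to ground types, e.g. type C for 180-360 m/s.
```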

Keywords: 1-D site response analysis, multichannel analysis of surface waves (MASW), seismic CPT, seismic hazard assessment

Procedia PDF Downloads 228
1356 Multi-Tier Data Collection and Estimation Utilizing a Queue Model in Wireless Sensor Networks

Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh

Abstract:

In this paper, a target parameter is estimated with desirable precision in a hierarchical wireless sensor network (WSN) while the proposed algorithm also tries to prolong the network lifetime as much as possible through an efficient data-collecting algorithm. The target parameter's distribution function is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data-collecting algorithm, and the FC reconstructs the underlying phenomenon from the collected data. Considering the aggregation level x, the goal is to provide the infrastructure needed to find the best value of the aggregation level so as to prolong the network lifetime as much as possible while guaranteeing the desired accuracy (the required sample size depends entirely on the desired precision). First, the sample size calculation algorithm is discussed; second, the average queue length based on the M/M[x]/1/K queue model is determined and used for the energy consumption calculation. Nodes can decrease transmission cost by aggregating incoming data. Finally, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
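
The sample-size step can be sketched with the standard normal-approximation formula; sigma and the margin of error below are illustrative assumptions:

```python
import math

# For estimating a mean to within margin E at confidence level z with
# population standard deviation sigma: n >= (z * sigma / E)^2.
z, sigma, E = 1.96, 4.0, 0.5   # 95% confidence, illustrative sigma and margin
n = math.ceil((z * sigma / E) ** 2)
print(f"required sample size n = {n}")
# The aggregation level x then trades this accuracy target against the
# queueing delay and energy modeled by the M/M[x]/1/K queue.
```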

Keywords: aggregation, estimation, queuing, wireless sensor network

Procedia PDF Downloads 164
1355 Hohmann Transfer and Bi-Elliptic Hohmann Transfer in TRAPPIST-1 System

Authors: Jorge L. Nisperuza, Wilson Sandoval, Edward. A. Gil, Johan A. Jimenez

Abstract:

In orbital mechanics, an active research topic is the calculation of interplanetary trajectories that are efficient in terms of energy and time. In this sense, this work concerns the calculation of the orbital elements for sending interplanetary probes within the extrasolar system TRAPPIST-1. Specifically, using the mathematical expressions for the circular and elliptical trajectory parameters and the expressions for the flight time and the velocity increments required for transfer between orbits, the orbital parameters and trajectory plots of the Hohmann and bi-elliptic Hohmann transfers are obtained for sending a probe from the innermost planet to each of the other planets of the studied system. The ratio between the velocity increments and the ratio between the flight times of the two transfer types are found. The results show that, for all cases under consideration, the Hohmann transfer has the lowest energy and time cost, in agreement with the theory associated with Hohmann and bi-elliptic Hohmann transfers. Savings in the velocity increment of up to 87% were found, occurring for the transfer between the two innermost planets, whereas the flight time increases by a factor of up to 6.6 if the bi-elliptic transfer is used, this for the case of sending a probe from the innermost planet to the outermost.
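
The two transfers compared in the paper reduce to vis-viva evaluations; a minimal sketch with a placeholder gravitational parameter and radii (not TRAPPIST-1 values):

```python
import numpy as np

# Delta-v of Hohmann vs. bi-elliptic transfer between circular orbits of
# radii r1 < r2 around a body of gravitational parameter mu.
mu = 1.0e20            # m^3/s^2, assumed central-body parameter
r1, r2 = 1.0e9, 4.0e9  # illustrative orbit radii, m

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit of semi-major axis a."""
    return np.sqrt(mu * (2.0 / r - 1.0 / a))

# Hohmann: two burns on an ellipse with semi-major axis (r1 + r2) / 2
a_t = 0.5 * (r1 + r2)
dv_hohmann = ((vis_viva(r1, a_t) - np.sqrt(mu / r1))
              + (np.sqrt(mu / r2) - vis_viva(r2, a_t)))

# Bi-elliptic: three burns via an intermediate apoapsis rb > r2
rb = 10.0 * r2
a1, a2 = 0.5 * (r1 + rb), 0.5 * (rb + r2)
dv_bi = ((vis_viva(r1, a1) - np.sqrt(mu / r1))
         + abs(vis_viva(rb, a2) - vis_viva(rb, a1))
         + abs(np.sqrt(mu / r2) - vis_viva(r2, a2)))

print(f"Hohmann dv = {dv_hohmann:.1f} m/s, bi-elliptic dv = {dv_bi:.1f} m/s")
```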

Keywords: bi-elliptic Hohmann transfer, exoplanet, extrasolar system, Hohmann transfer, TRAPPIST-1

Procedia PDF Downloads 165
1354 Statistical Characteristics of Code Formula for Design of Concrete Structures

Authors: Inyeol Paik, Ah-Ryang Kim

Abstract:

In this research, a statistical analysis is carried out to examine the statistical properties of the formulas given in design codes for concrete structures. The design formulas of the Korea Highway Bridge Design Code, the limit state design method (KHBDC), which is the current national bridge design code, and of the design code for concrete structures by the Korea Concrete Institute (KCI) are applied in the analysis. The safety levels provided by the strength formulas of the design codes are defined based on probabilistic and statistical theory. KHBDC is a reliability-based design code: its load and resistance factors were calibrated to attain the target reliability index, and defining the statistical properties of the design formulas is essential in this calibration process. In general, the statistical characteristics of a member strength are due to three factors: first, the difference between the material strength of the actual construction and that used in the design calculation; second, the difference between the actual dimensions of the constructed sections and those used in the design calculation; and third, the difference between the strength of the actual member and the formula simplified for the design calculation. In this paper, the statistical study focuses on the third difference. The formulas for calculating the shear strength of concrete members are presented in different ways in KHBDC and KCI. In this study, the statistical properties of the design formulas were obtained through comparison with a database comprising experimental results from the reference publications; the test specimens were either reinforced with shear stirrups or not. For the applied database, the bias factor was about 1.12 and the coefficient of variation about 0.18. Applying the statistical properties of the design formula to the reliability analysis shows that the resistance factors of the current design codes satisfy the target reliability indexes of both codes. Also, the minimum resistance factors of the KHBDC, which is written in the material resistance factor format, and of the KCI code, which is in the member resistance format, are obtained, and the results are presented. Further research is underway to calibrate the resistance factors of the high-strength and high-performance concrete design guide.
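
The statistical properties in question are obtained by comparing code predictions with test results; a minimal sketch with illustrative test-to-prediction ratios:

```python
import numpy as np

# bias = mean(test / predicted), COV = sample std / mean, over a test database.
# The ratios below are illustrative stand-ins for the shear-test database.
ratio = np.array([0.85, 1.22, 0.98, 1.45, 1.10, 0.92, 1.38, 1.09])  # V_test / V_code

bias = ratio.mean()
cov = ratio.std(ddof=1) / bias
print(f"bias factor = {bias:.2f}, coefficient of variation = {cov:.2f}")
# The paper reports bias ~ 1.12 and COV ~ 0.18 for its shear-strength database.
```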

Keywords: concrete design code, reliability analysis, resistance factor, shear strength, statistical property

Procedia PDF Downloads 290
1353 Biosorption Kinetics, Isotherms, and Thermodynamic Studies of Copper (II) on Spirogyra sp.

Authors: Diwan Singh

Abstract:

The ability of non-living Spirogyra sp. biomass to biosorb copper(II) ions from aqueous solutions was explored. The effects of contact time, pH, initial copper ion concentration, biosorbent dosage, and temperature were investigated in batch experiments. Both the Freundlich and Langmuir isotherms were found applicable to the experimental data (R² > 0.98). Qₘₐₓ obtained from the Langmuir isotherm was found to be 28.7 mg/g of biomass. The values of the Gibbs free energy change (ΔG°) and enthalpy change (ΔH°) suggest that the sorption is spontaneous and endothermic at 20–40 °C.
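
A hedged sketch of fitting the Langmuir isotherm and evaluating the sorption free energy; the data points and the equilibrium constant below are illustrative stand-ins, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: q = Qmax * b * Ce / (1 + b * Ce)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium Cu(II) conc., mg/L
qe = np.array([8.1, 13.5, 19.2, 23.8, 26.5])   # uptake, mg/g (illustrative)

def langmuir(C, Qmax, b):
    return Qmax * b * C / (1.0 + b * C)

(Qmax, b), _ = curve_fit(langmuir, Ce, qe, p0=(30.0, 0.05))
print(f"Qmax = {Qmax:.1f} mg/g, b = {b:.3f} L/mg")  # paper reports Qmax ~ 28.7 mg/g

# Sorption free energy from the equilibrium constant: dG = -R * T * ln(K)
R, T = 8.314, 313.15        # J/(mol K), 40 C
K = 1000.0                  # assumed dimensionless equilibrium constant
print(f"dG = {-R * T * np.log(K) / 1000:.1f} kJ/mol (negative => spontaneous)")
```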

Keywords: biosorption, Spirogyra sp., contact time, pH, dose

Procedia PDF Downloads 398
1352 Self-Attention Mechanism for Target Hiding Based on Satellite Images

Authors: Hao Yuan, Yongjian Shen, Xiangjun He, Yuheng Li, Zhouzhou Zhang, Pengyu Zhang, Minkang Cai

Abstract:

Remote sensing data can support decision-making in disaster assessment and disaster relief. Traditional processing of sensitive targets in remote sensing mapping relies mainly on manual retrieval and image-editing tools, which is inefficient. Deep learning methods for sensitive target hiding are faster and more flexible, but they have disadvantages in training time and computational cost. This paper proposes a target hiding model, Self-Attention (SA) Deepfill, which uses self-attention modules to replace part of the gated convolution layers in image inpainting. This reduces the model's computational load and improves its performance. Free-form masks are also added to the model's training to enhance its generality. Experiments on an open remote sensing dataset demonstrate the efficiency of our method; moreover, experimental comparison shows that the proposed method can be trained longer without over-fitting. Finally, compared with existing methods, the proposed model has a lower computational weight and better performance.
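
The core operation the model's name refers to is scaled dot-product self-attention; a minimal numpy sketch (illustrative shapes, with random weights standing in for learned projections over feature maps):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 32                      # n feature-map positions, d channels
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                        # pairwise similarities
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)              # softmax over positions
out = attn @ V                                       # each position attends to all others
print(out.shape)                                     # (16, 32)
```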

Keywords: remote sensing mapping, image inpainting, self-attention mechanism, target hiding

Procedia PDF Downloads 100
1351 A New Intelligent, Dynamic and Real Time Management System of Sewerage

Authors: R. Tlili Yaakoubi, H. Nakouri, O. Blanpain, S. Lallahem

Abstract:

The current tools for real-time management of sewer systems are based on two software tools: weather forecasting software and hydraulic simulation software. The first is an important source of imprecision and uncertainty; the second imposes long decision time steps because of its computation time. As a result, the outcomes obtained generally differ from those expected. The major idea of this project is to change the basic paradigm by approaching the problem from the "automatic control" side rather than the "hydrology" side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), allowing weather forecasts to be replaced by the direct use of real-time measured rainfall data. The aim is to reach a system where decision-making is based on reliable data and where error correction is permanent. A first model of control laws was developed and tested with rainfalls of different return periods; the gains in discharged volume vary from 19 to 100%. A new algorithm was then developed to optimize calculation time and thus overcome the combinatorial problem inherent in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 40% in the total volume discharged to the natural environment and 65% in the number of discharges.

Keywords: automation, optimization, paradigm, RTC

Procedia PDF Downloads 280
1350 Comparative Study of Dose Calculation Accuracy in Bone Marrow Using Monte Carlo Method

Authors: Marzieh Jafarzadeh, Fatemeh Rezaee

Abstract:

Introduction: Ionizing radiation affects human health through its impact on genomic integrity and cell viability, and it increases the risk of cancer and malignancy. Therefore, X-ray behavior and absorbed dose calculation are considered. One applicable tool for calculating and evaluating the absorbed dose in human tissues is Monte Carlo simulation, which offers a straightforward way to simulate radiation transport and is easy to use. The BEAMnrc Monte Carlo code, one of the most common diagnostic X-ray simulation codes, is used in this study. Method: In one of the hospitals under study, a number of CT scan images of previously imaged patients were extracted from the hospital database. BEAMnrc software was used for the simulation. The head of the device was simulated at an energy of 0.09 MeV with 500 million particles, and the output data from the simulation were used for phantom construction with the CTCREATE software. The percentage depth dose (PDD) was then calculated using STATDOSE and compared with international standard values. Results and Discussion: The ratio of surface dose to depth dose (D/Ds) at the measured energy was estimated to be about 4% to 8% for bone and 3% to 7% for bone marrow. Conclusion: MC simulation is an efficient and accurate method for simulating bone marrow and calculating the absorbed dose.
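
The PDD evaluation itself is a simple normalization of the axial dose profile; a sketch with illustrative depth-dose values (in practice these are read from the simulation's dose output):

```python
import numpy as np

# Percentage depth dose: dose along the beam axis normalized to its maximum.
depth = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])       # cm, illustrative
dose = np.array([0.62, 0.95, 1.00, 0.81, 0.64, 0.42])  # relative dose, illustrative

pdd = 100.0 * dose / dose.max()
for z, p in zip(depth, pdd):
    print(f"z = {z:3.1f} cm: PDD = {p:5.1f} %")
```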

Keywords: Monte Carlo, absorption dose, BEAMnrc, bone marrow

Procedia PDF Downloads 189
1349 Conception of a Regulated, Dynamic and Intelligent Sewerage in Ostrevent

Authors: Rabaa Tlili Yaakoubi, Hind Nakouri, Olivier Blanpain

Abstract:

The current tools for real-time management of sewer systems are based on two software tools: weather forecasting software and hydraulic simulation software. The first is an important source of imprecision and uncertainty; the second imposes long decision time steps because of its computation time. As a result, the outcomes obtained generally differ from those expected. The major idea of the CARDIO project is to change the basic paradigm by approaching the problem from the "automatic control" side rather than the "hydrology" side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), allowing weather forecasts to be replaced by the direct use of real-time measured rainfall data. The aim is to reach a system where decision-making is based on reliable data and where error correction is permanent. A first model of control laws was developed and tested with rainfalls of different return periods; the gains in discharged volume vary from 40 to 100%. A new algorithm was then developed to optimize calculation time and thus overcome the combinatorial problem inherent in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 60% in the total volume discharged to the natural environment and 80% in the number of discharges.

Keywords: RTC, paradigm, optimization, automation

Procedia PDF Downloads 261
1348 Exergetic Optimization on Solid Oxide Fuel Cell Systems

Authors: George N. Prodromidis, Frank A. Coutelieris

Abstract:

Biogas can currently be considered an alternative option for electricity production, mainly due to its high energy content (as a hydrocarbon-rich source), its renewable status, and its relatively low utilization cost. Solid oxide fuel cell (SOFC) stacks convert a fuel's chemical energy to electricity with high efficiency and offer significant advantages in fuel flexibility combined with lower emission rates, especially when utilizing biogas. Electricity production from biogas constitutes a composite problem that calls for an extensive parametric analysis over numerous dynamic variables. The main scope of the presented study is to propose a detailed thermodynamic model for optimizing the operation of SOFC-based power plants, based on fundamental thermodynamics and energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies representing real-life operating conditions. THERMAS also offers the opportunity to choose among a great variety of values for each operational parameter individually, thus allowing studies within unexplored and experimentally impossible operational ranges. Finally, THERMAS innovatively incorporates a specific criterion, concluded from the extensive energy analysis, to identify the most optimal scenario per simulated system in exergy terms. Several dynamic parameters as well as several biogas mixture compositions have been taken into account to cover all possible cases. Through the optimization process in terms of an innovative OPF (OPtimization Factor) presented here, this study reveals that systems supplied by low-methane fuels can be comparable to those supplied by pure methane. To conclude, such a simulation model points toward the optimal design of SOFC stack based systems, in the direction of commercializing systems that utilize biogas.

Keywords: biogas, exergy, efficiency, optimization

Procedia PDF Downloads 345
1347 Topographic Survey of a Property Located near the Locality of Gircov, Romania

Authors: Carmen Georgeta Dumitrache

Abstract:

Terrestrial surveying studies the totality of operations and computations carried out to represent the land surface on a plan or map in a specific cartographic projection and at a topographic scale. With the development of society, land measurements have evolved, serving both the utilitarian goals of economic activity and the scientific purpose of determining the form and dimensions of the Earth. Field measurements, data processing, and the proper representation of planimetry and landforms on drawings and maps require topographic and geodetic instruments, calculation, and graphical reporting, and thus a knowledge of theoretical and practical concepts from different areas of science and technology. The proper practical use of topographic and geodetic instruments designed to measure angles and distances precisely requires knowledge of geometric optics, precision mechanics, strength of materials, and more. Processing the results of field measurements relies on calculation methods based on geometry, trigonometry, algebra, mathematical analysis, and computer science. To illustrate these topographic measurements, a survey was carried out of a property located near the locality of Gircov, Romania. We determine the total surface of plan sheet T30 and of each parcel/plot, and we also stake out the coordinates of a parcel in the field. The purpose of the planimetric survey consisted of: the exact determination of the bounding surface; the analytical calculation of the surface; comparison of the determined surface with the one registered in the existing documents; drawing up a location and delineation plan with the adjoining boundaries and contour distances, highlighting the parcels comprising this property; drawing up a location and delineation plan with the adjoining boundaries and contour distances for one parcel; and staking out the outline points of that plot in the field. The ultimate goal of this work was to determine and represent the surface, and also to detach a plot from the total surface while respecting the surface condition imposed by the beneficiary's property deed.
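
The analytical surface calculation mentioned above is typically the shoelace formula applied to the boundary coordinates; a minimal sketch with illustrative coordinates:

```python
import numpy as np

# Shoelace formula for the area of a closed parcel boundary.
# Points must be listed in order around the contour; values are illustrative.
x = np.array([1000.0, 1080.0, 1120.0, 1035.0])   # easting, m
y = np.array([2000.0, 2010.0, 2095.0, 2110.0])   # northing, m

area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print(f"parcel surface = {area:.1f} m^2")
# The surveyed value is then compared against the area registered in the deed.
```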

Keywords: topography, surface, coordinate, modeling

Procedia PDF Downloads 231
1346 Inversion of Gravity Data for Density Reconstruction

Authors: Arka Roy, Chandra Prakash Dubey

Abstract:

Inverse problems are generally used for recovering hidden information from available outside data. Here, the vertical component of the gravity field is used for calculating the underlying density structure. Ill-posedness is the main obstacle in any inverse problem. Linear regularization using the Tikhonov formulation is applied, with an appropriate choice of SVD and GSVD components. For handling real data, the noise level must be low enough to obtain a reliable solution. In our study, 2D and 3D synthetic models on rectangular grids are used for the gravity field calculation and the corresponding inversion for density reconstruction. A fine grid is also considered to accommodate irregular structures. Keeping the algebraic ambiguity in mind, the number of observation points should exceed the number of model parameters. A Picard plot is presented for choosing the appropriate, main controlling eigenvalues for a regularized solution. Another important tool is the depth resolution plot (DRP), which is generally used for studying how the inversion is influenced by the regularization or discretization. Our further study involves the inversion of real gravity data from the Vredefort Dome, South Africa. We apply our method to these data. The resulting density structure is in good agreement with the known formations in the region, which lends additional support to our method.
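
The Tikhonov/SVD machinery described here can be sketched compactly with filter factors and the quantities a Picard plot compares; the forward operator below is a random stand-in for the actual gravity kernel matrix:

```python
import numpy as np

# Tikhonov regularization via the SVD with filter factors:
#   m = sum_i f_i * (u_i . d) / s_i * v_i,  f_i = s_i^2 / (s_i^2 + lambda^2).
rng = np.random.default_rng(1)
G = rng.standard_normal((60, 40))                  # observations x model cells
m_true = rng.standard_normal(40)
d = G @ m_true + 0.05 * rng.standard_normal(60)    # noisy vertical gravity data

U, s, Vt = np.linalg.svd(G, full_matrices=False)
lam = 0.5                                  # regularization parameter
f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
m_reg = Vt.T @ (f * (U.T @ d) / s)         # regularized density model

# Picard-plot quantities: |u_i . d| against s_i shows which singular
# components are controlled by the data and which by the noise.
picard = np.abs(U.T @ d)
print("first five (s_i, |u_i.d|):",
      list(zip(np.round(s[:5], 2), np.round(picard[:5], 2))))
```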

Keywords: depth resolution plot, gravity inversion, Picard plot, SVD, Tikhonov formulation

Procedia PDF Downloads 184
1345 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation

Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone

Abstract:

Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions of the future status, to the Distribution System Operator (DSO), producers, and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the tools developed inside the application, tested on a real EDS, for grid observability, Smart Recursive Load Flow (SRLF) calculation, and state estimation of loads in MV feeders.

Keywords: digital twin, distributed energy resources, remote terminal units, supervisory control and data acquisition system, smart recursive load flow

Procedia PDF Downloads 75
1344 A Geometric Interpolation Scheme in Overset Meshes for the Piecewise Linear Interface Calculation Volume of Fluid Method in Multiphase Flows

Authors: Yanni Chang, Dezhi Dai, Albert Y. Tong

Abstract:

Piecewise linear interface calculation (PLIC) schemes are widely used in the volume-of-fluid (VOF) method to capture interfaces in numerical simulations of multiphase flows. Dynamic overset meshes can be especially useful in applications involving component motions and complex geometric shapes. In the present study, the VOF value of an acceptor cell is evaluated in a geometric way that transfers the fraction field between the meshes precisely with reconstructed interfaces from the corresponding donor elements. The acceptor cell value is evaluated by using a weighted average of its donors for most of the overset interpolation schemes for continuous flow variables. The weighting factors are obtained by different algebraic methods. Unlike the continuous flow variables, the VOF equation is a step function near the interfaces, which ranges from zero to unity rapidly. A geometric interpolation scheme of the VOF field in overset meshes for the PLIC-VOF method has been proposed in the paper. It has been tested successfully in quadrilateral/hexahedral overset meshes by employing several VOF advection tests with imposed solenoidal velocity fields. The proposed algorithm has been shown to yield higher accuracy in mass conservation and interface reconstruction compared with three other algebraic ones.

Keywords: interpolation scheme, multiphase flows, overset meshes, PLIC-VOF method

Procedia PDF Downloads 143
1343 Process of Dimensioning Small Type Annular Combustors

Authors: Saleh B. Mohamed, Mohamed H. Elhsnawi, Mesbah M. Salem

Abstract:

Current and future applications of small gas turbine engine annular combustors have requirements that present difficult challenges to the combustor designer. Reduced cost and fuel consumption and improved durability and reliability, as well as higher temperatures and pressures, are forecast for such applications. Coupled with these performance requirements, irrespective of engine size, is the demand to control pollutant emissions, namely the oxides of nitrogen, carbon monoxide, smoke, and unburned hydrocarbons. These technical and environmental challenges have made the design of small combustion systems a very hard task. Thus, the main target of this work is to generalize a calculation method for annular combustors of small gas turbine engines that makes it possible to understand the fundamental concepts of the coupled processes and to identify a proper procedure that formulates and solves combustion-field problems in as simplified and accurate a manner as possible. The combustion chamber in question is designed with a central vaporizing unit and delivers 516.3 kW of power. The geometrical constraints are an overall length of 142 mm and a casing diameter of 140 mm, with an airflow rate of 0.8 kg/s and a fuel flow rate of 0.012 kg/s. The relevant design equations are programmed in the Mathcad language for ease and speed of calculation.

Keywords: design of gas turbine, small engine design, annular type combustors, mechanical engineering

Procedia PDF Downloads 387
1342 Experimental Study of CO₂ Hydrate Formation in Presence of Different Promotors

Authors: Samaneh Soroush, Tommy Golczynski, Tony Spratt

Abstract:

One of the new technologies for CO₂ capture, storage, and utilization (CCSU) is clathrate hydrate formation. This technology has some unknowns and challenges that make it difficult to apply in the real world; the low formation rate is one of the main difficulties of CO₂ hydrate. In this work, the effect of different promotors on the hydrate formation rate has been studied. Two surfactants, sodium dodecyl sulfate (SDS) and tetra-n-butylammonium bromide (TBAB), together with cyclopentane (CP) as a thermodynamic promotor, and their combinations have been used in the experiments. The results showed that SDS is a powerful kinetic promotor and that its combination with CP helps to convert more CO₂ to hydrate in a short time.

Keywords: carbon capture, carbon dioxide, hydrate, promotor

Procedia PDF Downloads 217
1341 Application of the EU Commission Waste Management Methodology Level(s) to a Construction and a Demolition in North-West Romania

Authors: Valean Maria

Abstract:

Construction and demolition waste management is a timely topic due to the urgency of the sector's transition to sustainability. The sector is responsible for over a third of the waste generated in the E.U., while legislation requires at least 70% preparation for reuse and recycling, excluding backfilling. To this end, the E.U. Commission has provided the Level(s) methodology, allowing for the standardized planning and reporting of waste quantities across all levels of the construction process, from architecture to demolition and from the estimation stage to the actual measurements at the end of operations. We applied Level(s) for the first time to the Romanian context, a developing E.U. country in which illegal dumping of construction waste in nature and landfills is still common practice. We performed a desk study of the buildings' documents, followed by field studies of the sites, and finally the entry and calculation of statistical data on the construction and demolition waste. We learned that Romania is far from the E.U. average in terms of initial waste estimates, with some figures higher and others lower, and that the price of disposal to landfill is significantly lower in this developing country, a possible barrier to adopting the new regulations. Finally, we found that concrete is the predominant waste type, in terms of quantity as well as cost of disposal. Further directions of research are provided, such as mapping all the alternative facilities in the region and calculating the financial costs and the CO₂ footprint of preparing and delivering waste sustainably, toward a sounder and locally adapted model of waste management.

Keywords: construction, waste, management, levels, EU

Procedia PDF Downloads 54
1340 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
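
The surrogate idea can be sketched as follows; the design variables, the toy objective, and the network size are assumptions standing in for the paper's LPM/3D-simulation setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train an MLP on (design variables -> performance) pairs from an expensive
# simulation, then let the MOEA query the cheap surrogate instead.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 4))                    # e.g. grain/nozzle parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - X[:, 2] * X[:, 3]  # toy "simulation" output

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X[:400], y[:400])

# Inside the MOEA loop, objective evaluations call the surrogate rather than
# the 3D solver, cutting the cost of each of the many required NFEs.
score = surrogate.score(X[400:], y[400:])
print(f"held-out R^2 of surrogate: {score:.3f}")
```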

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 62