Search results for: exact analytical calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4070

3770 A New Approach for Solving Fractional Coupled PDEs

Authors: Prashant Pandey

Abstract:

In the present article, an effective Laguerre collocation method is used to obtain the approximate solution of a system of coupled fractional-order non-linear reaction-advection-diffusion equations with prescribed initial and boundary conditions. In the proposed scheme, Laguerre polynomials are used together with an operational matrix and the collocation method to obtain approximate solutions of the coupled system, so that the proposed model is converted into a system of algebraic equations which can be solved by the Newton method. The solution profiles of the coupled system are presented graphically for different particular cases. The salient features of the present article are the stability analysis of the proposed method and the demonstration that solute concentration varies less with column length in the fractional-order system than in the integer-order system. To show the high efficiency, reliability, and accuracy of the proposed scheme, a comparison between the numerical results for the coupled Burgers' system and its existing analytical result is reported. There is high compatibility and consistency between the approximate solution and the exact solution to a high order of accuracy. The error analysis for each case, presented through tables and graphs, confirms the super-linear convergence rate of the proposed method.
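The final step of the scheme, solving the resulting algebraic system with Newton's method, can be sketched in miniature. The 2x2 system below is a hypothetical stand-in for the discretised coupled system, not the paper's actual equations:

```python
# Newton's method for a small nonlinear algebraic system F(x) = 0, the kind
# produced when a collocation scheme discretises a PDE system.
# The 2x2 example below is a hypothetical stand-in for illustration.

def newton_2x2(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 for x = (x1, x2) given the Jacobian J."""
    x1, x2 = x0
    for _ in range(max_iter):
        f1, f2 = F(x1, x2)
        a, b, c, d = J(x1, x2)           # Jacobian entries [[a, b], [c, d]]
        det = a * d - b * c
        dx1 = (d * f1 - b * f2) / det    # solve J dx = F by Cramer's rule
        dx2 = (a * f2 - c * f1) / det
        x1, x2 = x1 - dx1, x2 - dx2
        if abs(f1) + abs(f2) < tol:
            break
    return x1, x2

# Example system: x1^2 + x2 - 2 = 0 and x1 + x2^2 - 2 = 0 (root at (1, 1))
F = lambda x1, x2: (x1 ** 2 + x2 - 2.0, x1 + x2 ** 2 - 2.0)
J = lambda x1, x2: (2.0 * x1, 1.0, 1.0, 2.0 * x2)
root = newton_2x2(F, J, (2.0, 0.5))
```

For a collocation discretisation the same iteration runs on a larger system, with the Jacobian assembled from the operational matrix.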

Keywords: fractional coupled PDE, stability and convergence analysis, diffusion equation, Laguerre polynomials, spectral method

Procedia PDF Downloads 130
3769 Analytical Approach to Study the Uncertainties Related to the Behavior of Structures Submitted to Differential Settlement

Authors: Elio El Kahi, Michel Khouri, Olivier Deck, Pierre Rahme, Rasool Mehdizadeh

Abstract:

Recent developments in civil engineering create multiple interaction problems between the soil and the structure. One of the major problems is the impact of ground movements on buildings. Consequently, managing the risks associated with these movements requires a determination of the different influencing factors and specific knowledge of their variability/uncertainty. The main purpose of this research is to study the behavior of structures submitted to differential settlement, in order to assess their vulnerability while taking into consideration the different sources of uncertainty. An analytical approach is applied to investigate, on the one hand, the influence of the uncertainties related to the soil and, on the other hand, the variation of structure stiffness with the presence of openings and the movement transmitted between soil and structure as related to the origin and shape of the free-field movement. Results reveal the effect of taking these uncertainties into consideration and specify the dominant and most significant parameters that control the ground movement associated with the Soil-Structure Interaction (SSI) phenomenon.

Keywords: analytical approach, building, damage, differential settlement, soil-structure interaction, uncertainties

Procedia PDF Downloads 211
3768 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia

Authors: Suzana Ramli, Wardah Tahir

Abstract:

Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to stream flow calculation and later predicts flood occurrences. GIS (Geographic Information System) is an advanced and appropriate tool for simulating hydrological models due to its realistic representation of topography. The paper discusses the calculation of surface runoff depth for two selected events by using GIS with the Curve Number method for the Upper Klang River basin. GIS enables map intersection between soil type and land use, which produces the curve number map. The results show good correlation between simulated and observed values, with R² above 0.7. Acceptable performance on statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
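The Curve Number step of the method follows the standard SCS-CN relation; a minimal sketch (in millimetres, with the common initial-abstraction ratio of 0.2) is:

```python
# SCS Curve Number runoff depth (mm): Q = (P - Ia)^2 / (P - Ia + S) for P > Ia,
# with potential retention S = 25400/CN - 254 and initial abstraction Ia = 0.2*S.
def scs_runoff(rainfall_mm, curve_number):
    s = 25400.0 / curve_number - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s                         # initial abstraction
    if rainfall_mm <= ia:
        return 0.0                       # all rainfall abstracted, no runoff
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)
```

In the GIS workflow, the curve number for each cell comes from the intersected soil-type and land-use maps, and this relation is applied cell by cell.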

Keywords: surface runoff, geographic information system, curve number method, environment

Procedia PDF Downloads 266
3767 Application of Double Side Approach Method on Super Elliptical Winkler Plate

Authors: Hsiang-Wen Tang, Cheng-Ying Lo

Abstract:

In this study, the static behavior of a super elliptical Winkler plate is analyzed by applying the double side approach method. The lack of information about super elliptical Winkler plates is the motivation for this study, and we use the double side approach method to solve this problem because of its superior ability to treat problems with complex boundary shapes efficiently. The double side approach method has the advantages of high accuracy, an easy calculation procedure, and a low computational load. Most important of all, it can give the error bound of the approximate solution. The numerical results not only show that the double side approach method works well on this problem but also provide knowledge of the static behavior of super elliptical Winkler plates in practical use.

Keywords: super elliptical Winkler plate, double side approach method, error bound, mechanics

Procedia PDF Downloads 336
3766 Simple Procedure for Probability Calculation of Tensile Crack Occurring in Rigid Pavement: A Case Study

Authors: Aleš Florian, Lenka Ševelová, Jaroslav Žák

Abstract:

The formation of tensile cracks in the concrete slabs of rigid pavement can be, among others, the initiation point of other, more serious failures which can ultimately lead to complete degradation of the concrete slab and thus the whole pavement. Two measures can be used for reliability assessment of this phenomenon: the probability of failure and/or the reliability index. Different methods can be used for their calculation; the simple ones are moment methods and simulation techniques. Two methods, the FOSM method and the Simple Random Sampling method, are verified and compared. The influence of information about the probability distribution and statistical parameters of input variables, as well as of the limit state function, on the calculated reliability index and failure probability is studied at three points on the lower surface of concrete slabs of the older type of rigid pavement formerly used in the Czech Republic.
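The two methods compared in the abstract can be sketched for the simplest case of a linear limit state g = R - S with independent normal variables; the numbers below are illustrative, not the pavement data:

```python
import math
import random

# FOSM: for a linear limit state g = R - S with independent normal R and S,
# beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2) and pf = Phi(-beta).
def fosm_beta(mu_r, sd_r, mu_s, sd_s):
    return (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)

def normal_cdf(x):
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

# Simple Random Sampling: count failures (g <= 0) over n random samples.
def srs_pf(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    rng = random.Random(seed)
    fails = sum(
        rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) <= 0.0
        for _ in range(n)
    )
    return fails / n

beta = fosm_beta(4.0, 0.5, 2.5, 0.4)   # reliability index, here about 2.34
pf_exact = normal_cdf(-beta)           # corresponding failure probability
```

For the linear-normal case both approaches agree; the pavement study probes how they diverge once distributions and limit state functions become less idealised.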

Keywords: failure, pavement, probability, reliability index, simulation, tensile crack

Procedia PDF Downloads 528
3765 Analysis of CO₂ Two-Phase Ejector with Taguchi and ANOVA Optimization and Refrigerant Selection with Enviro Economic Concerns by TOPSIS Analysis

Authors: Karima Megdouli, Bourhan Tachtouch

Abstract:

Ejector refrigeration cycles offer an alternative to conventional systems for producing cold from low-temperature heat. In this article, a thermodynamic model is presented. This model has the advantage of simplifying the calculation algorithm and describes the complex double-throttling mechanism that occurs in the ejector. The model assumptions and calculation algorithm are presented first, and the impact of each efficiency is evaluated. Validation is performed on several data sets. The ejector model is then used to simulate a refrigeration ejector system (RES), to validate its robustness and suitability for predicting thermodynamic cycle performance. A Taguchi and ANOVA optimization is carried out on the RES. TOPSIS analysis is applied to select the optimum refrigerants, considering cost, safety, environmental, and enviro-economic concerns along with thermophysical properties.
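The TOPSIS step can be sketched generically. The three-alternative, two-criterion decision matrix below is illustrative (one benefit criterion, one cost criterion), not the paper's refrigerant data:

```python
import math

# Minimal TOPSIS sketch: rank alternatives (rows) on criteria (columns).
# 'benefit' marks criteria where larger is better; weights sum to one.
def topsis(matrix, weights, benefit):
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, anti)    # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores  # higher closeness coefficient = better alternative
```

In the paper's setting the columns would carry cost, safety, environmental, enviro-economic, and thermophysical criteria for each candidate refrigerant.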

Keywords: ejector, velocity distribution, shock circle, Taguchi and ANOVA optimization, TOPSIS analysis

Procedia PDF Downloads 63
3764 Soil Moisture Regulation in Irrigated Agriculture

Authors: I. Kruashvili, I. Inashvili, K. Bziava, M. Lomishvili

Abstract:

Seepage capillary anomalies in the active soil layer, related to soil water movement, often cause variation of soil hydrophysical properties and are among the main objects of hydroecology. It should be mentioned that the existing equations for computing seepage flow, particularly from soil channels, through dams, bulkheads, and the foundations of hydraulic engineering structures, are mostly based on the linear seepage law. According to existing views, anomalous seepage rests on postulates whereby the fluid in the free volume exhibits resistance against shear deformation, expressed in the form of an initial gradient. On this basis, we have determined: an equation for calculating the seepage coefficient when the transition flow velocity is equal to the seepage flow velocity; equations, derived by means of a power function, for calculating the average and maximum velocities of seepage flow; and, taking the fluid continuity condition into consideration, an expression for the average velocity in a capillary tube.

Keywords: seepage, soil, velocity, water

Procedia PDF Downloads 445
3763 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation

Authors: Kausar Harun, Ahmad Azmin Mohamad

Abstract:

Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation proves more accurate and time-efficient. Thus, integration between these two methods is essential to closely resemble the properties of synthesized ZnO. In this study, experimentally-grown ZnO nanoparticles were prepared by the sol-gel storage method with zinc acetate dihydrate and methanol as precursor and solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO was produced with the wurtzite hexagonal structure. Further quantitative analysis was done via the Rietveld refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordination. The lattice dimensions were a=b=3.2498 Å and c=5.2068 Å, which were later used as the main input in the first-principles calculations. By applying density-functional theory (DFT) as embedded in the CASTEP computer code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized-gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) gave the structure with the lowest energy and lattice deviations. In this study, emphasis is also given to the modification of valence electron energy levels to overcome the underestimation in DFT calculations. The Zn and O valence energies were fixed at Ud=8.3 eV and Up=7.3 eV, respectively. Hence, the electronic and optical properties of the synthesized ZnO were calculated based on the GGA-PBE+U functional within the ultrasoft-pseudopotential method.
In conclusion, the incorporation of Rietveld analysis into first-principles calculation was valid as the resulting properties were comparable with those reported in literature. The time taken to evaluate certain properties via physical testing was then eliminated as the simulation could be done through computational method.
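As a quick consistency check on the refined cell, the abstract's lattice constants give an axial ratio close to, but slightly below, the ideal wurtzite value of sqrt(8/3) ≈ 1.633:

```python
import math

# Consistency check on the Rietveld-refined wurtzite cell reported in the
# abstract (a = b = 3.2498 Å, c = 5.2068 Å): axial ratio and hexagonal
# unit-cell volume V = (sqrt(3)/2) * a^2 * c.
a, c = 3.2498, 5.2068
axial_ratio = c / a                           # ideal wurtzite: sqrt(8/3) ≈ 1.633
volume = (math.sqrt(3.0) / 2.0) * a ** 2 * c  # hexagonal cell volume in Å^3
```

Such derived quantities are what make the refined constants directly usable as the starting geometry for the DFT optimization.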

Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles

Procedia PDF Downloads 296
3762 Theoretical and Experimental Study of Iron Oxide Thin Film

Authors: Fahima Djefaflia, M. Loutfi Benkhedir

Abstract:

The aim of this work was the development and characterisation of iron oxide thin films by the spray pyrolysis technique. The influence of deposition parameters, in particular substrate temperature, on structural and optical properties has been studied, and the thin films were analysed by various materials characterization techniques. Structural characterization by X-ray diffraction showed that films prepared at T = 350, 400, and 450 °C are crystalline, while films prepared at T = 300 °C are amorphous. Under particular conditions, two phases, hematite (Fe2O3) and magnetite (Fe3O4), have been observed. UV-Visible spectrophotometry of these films confirms that it is possible to obtain films with a transmittance of about 15-30% in the visible range. In addition, this analysis allowed us to determine the optical gap and the disorder of the films. We conclude that an increase in temperature is accompanied by a reduction of the optical gap and an increase in disorder. An ab initio calculation for this phase shows results in good agreement with the experimental results.

Keywords: spray pyrolysis technique, iron oxide, ab initio calculation, optical properties

Procedia PDF Downloads 540
3761 Investigation of the Working Processes in Thermocompressor Operating on Cryogenic Working Fluid

Authors: Evgeny V. Blagin, Aleksandr I. Dovgjallo, Dmitry A. Uglanov

Abstract:

This article investigates the working process in a thermocompressor operating on a cryogenic working fluid. A thermocompressor is a device for converting heat energy directly into the potential energy of pressure. The suggested thermocompressor is intended for operation during liquid natural gas (LNG) re-gasification and is placed after the evaporator. Such an application allows the LNG cold energy to be used to raise the working fluid pressure, which can then be used for electricity generation or other purposes. The thermocompressor consists of two chambers divided by a regenerative heat exchanger. A calculation algorithm for the unsteady simulation of the thermocompressor working process is suggested. The results of this investigation are the changes of chamber temperature and pressure during the working cycle. These distributions help to identify the parameters that significantly influence thermocompressor efficiency, including the regenerative heat exchanger coefficient of performance (COP), the dead volume of the chambers, and the working frequency of the thermocompressor. Exergy analysis was performed to estimate thermocompressor efficiency. A cryogenic thermocompressor operating on nitrogen was chosen as a prototype. The temperature and pressure changes were calculated taking into account heat fluxes through the regenerator and the thermocompressor walls. The cold chamber temperature differs significantly from the result of the steady calculation, which is caused by friction of the working fluid in the regenerator and heat fluxes from the hot chamber. The rise of the cold chamber temperature decreases the thermocompressor delivery volume. The hot chamber temperature differs negligibly, because losses due to heat fluxes to the cold chamber are compensated by friction of the working fluid in the regenerator. An optimal working frequency was selected.
Main results of the investigation: theoretical confirmation that the thermocompressor can operate on a cryogenic working fluid; determination of the optimal working frequency; the finding that the cold chamber temperature deviates from its starting value much more than the hot chamber temperature; and identification of the regenerative heat exchanger COP and the heat fluxes through the regenerator and thermocompressor walls as the main parameters influencing thermocompressor performance.

Keywords: cold energy, liquid natural gas, thermocompressor, regenerative heat exchanger

Procedia PDF Downloads 568
3760 A Novel Search Pattern for Motion Estimation in High Efficiency Video Coding

Authors: Phong Nguyen, Phap Nguyen, Thang Nguyen

Abstract:

High Efficiency Video Coding (HEVC), or the H.265 standard, fulfills the demand for high-resolution video storage and transmission, since it achieves a high compression ratio. However, it requires a huge amount of calculation. Since the Motion Estimation (ME) block accounts for about 80% of the computational load of HEVC, many studies have sought to reduce the computation cost. In this paper, we propose a new algorithm to lower the number of motion estimation search points. The number of computed points in the search pattern is reduced from 77 for the diamond pattern and 81 for the square pattern to only 31, while the Peak Signal to Noise Ratio (PSNR) and bit rate remain almost equal to those of the conventional patterns. The motion estimation time of the new algorithm is reduced by 68.23% and 65.83% compared to the recommended diamond and square search patterns, respectively.
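Search-pattern motion estimation of this kind evaluates a candidate set of displacements with a block-matching cost such as the Sum of Absolute Differences (SAD). The sketch below uses a small 5-point diamond for illustration; the paper's 31-point pattern is not reproduced here:

```python
# Block-matching motion estimation sketch: evaluate a small set of candidate
# displacements with the Sum of Absolute Differences (SAD) cost. The 5-point
# diamond used here is illustrative, not the paper's 31-point pattern.

def sad(ref, cur, dx, dy, bx, by, bs):
    """SAD between the current block at (bx, by) and the reference block
    displaced by (dx, dy), for a bs x bs block."""
    return sum(
        abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
        for i in range(bs) for j in range(bs)
    )

def best_motion(ref, cur, bx, by, bs, pattern):
    """Return the displacement in 'pattern' with the lowest SAD cost."""
    costs = {(dx, dy): sad(ref, cur, dx, dy, bx, by, bs) for dx, dy in pattern}
    return min(costs, key=costs.get)

DIAMOND = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
```

Reducing the candidate set from 77 or 81 points to 31, as the paper proposes, cuts the number of SAD evaluations per block roughly proportionally.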

Keywords: motion estimation, wide diamond, search pattern, H.265, test zone search, HM software

Procedia PDF Downloads 586
3759 Development of Paper Based Analytical Devices for Analysis of Iron (III) in Natural Water Samples

Authors: Sakchai Satienperakul, Manoch Thanomwat, Jutiporn Seedasama

Abstract:

Paper-based analytical devices (PADs) for the analysis of Fe(III) ions in natural water samples are developed, using a reagent from guava leaf extract. The extraction is simply performed in deionized water at pH 7, where a tannin extract is obtained and used as an alternative natural reagent. The PADs are fabricated by ink-jet printing using alkenyl ketene dimer (AKD) wax. The quantitation of Fe(III) is carried out using the guava leaf extract reagent prepared in acetate buffer at a ratio of 1:1. A color change to gray-purple is observed by the naked eye when a sample containing Fe(III) ions is dropped onto the PADs channel. Reflective absorption measurement is performed to create a standard curve. A linear calibration range is observed over the concentration range of 2-10 mg L-1. The detection limit for Fe(III) is 2 mg L-1. In its optimum form, the PADs are stable for up to 30 days under oxygen-free conditions. The small dimensions, low volume requirement, and alternative natural reagent make the proposed PADs attractive for on-site environmental monitoring and analysis.
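The standard-curve step can be sketched as an ordinary least-squares fit over the reported 2-10 mg L-1 range; the absorbance readings below are synthetic placeholders, not the paper's data:

```python
# Least-squares standard curve for the 2-10 mg/L Fe(III) range described in
# the abstract. The absorbance values below are synthetic, for illustration.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [2.0, 4.0, 6.0, 8.0, 10.0]            # mg/L calibration standards
absorbance = [0.11, 0.21, 0.30, 0.41, 0.50]  # hypothetical readings
slope, intercept = linear_fit(conc, absorbance)
unknown = (0.35 - intercept) / slope         # mg/L for a 0.35-absorbance sample
```

Inverting the fitted line, as in the last step, is how an unknown sample's reflective-absorption reading is converted to a concentration.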

Keywords: green chemical analysis, guava leaf extract, lab on a chip, paper based analytical device

Procedia PDF Downloads 223
3758 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variations within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach measures variability with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. It provides a systematic, statistically grounded validation technique and offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this method, pharmaceutical manufacturers can substantially advance their validation processes and thereby improve the overall quality and safety of their products. This paper delves deeper into the development, application, and advantages of the tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. The paper also discusses the potential implications and future applications of this method in enhancing pharmaceutical manufacturing practices and outcomes.
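One common way to express variability against tolerance or design margins is a capability index such as Cpk; the sketch below, with hypothetical replicate recoveries and 95-105% specification limits, illustrates that idea, though the paper's own tolerance-based DoE metric may differ:

```python
import statistics

# Tolerance-referenced variability sketch: relate method variability to the
# specification (tolerance) window. Cpk is one common such index; the
# replicate assay values and the 95-105% spec limits are hypothetical.

def cpk(values, lsl, usl):
    """Capability index: distance from the mean to the nearer spec limit,
    in units of three sample standard deviations."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return min(usl - mu, mu - lsl) / (3.0 * sd)

recoveries = [99.2, 100.1, 99.7, 100.4, 99.9, 100.2]  # % of nominal, replicates
index = cpk(recoveries, lsl=95.0, usl=105.0)
```

An index well above 1 indicates the method's variability sits comfortably inside the tolerance window, which is the kind of margin-referenced statement the abstract advocates.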

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 54
3757 Estimating Tree Height and Forest Classification from Multi-Temporal RISAT-1 HH and HV Polarized Synthetic Aperture Radar Interferometric Phase Data

Authors: Saurav Kumar Suman, P. Karthigayani

Abstract:

In this paper, tree height is estimated and forest types are classified from multi-temporal RISAT-1 Horizontal-Horizontal (HH) and Horizontal-Vertical (HV) polarized Synthetic Aperture Radar (SAR) data. The novelty of the proposed project is the combined use of the backscattering coefficients (sigma naught) and the coherence, within the Water Cloud Model (WCM). The approach has three main steps: (a) extraction of the different forest parameter data from the Product.xml, BAND-META file, and Grid-xxx.txt file that come with the HH and HV polarized data from ISRO (Indian Space Research Organisation); these files contain the parameters required during height estimation; (b) calculation of the vegetation and ground backscattering, coherence, and other forest parameters; (c) classification of forest types using the ENVI 5.0 tool and ROI (Region of Interest) calculation.
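The Water Cloud Model used in step (b) relates total backscatter to a vegetation term plus a ground term attenuated by the canopy; a simplified forward model and its inversion for height can be sketched as follows (the coefficients are illustrative, not calibrated RISAT-1 values):

```python
import math

# Simplified Water Cloud Model (WCM): total backscatter is a canopy term plus
# a canopy-attenuated ground term, sigma0 = sv*(1 - t) + sg*t, with two-way
# transmissivity t = exp(-2*b*h). Inverting for h gives a height estimate.
# The coefficients sv, sg, b below are illustrative, not RISAT-1 values.

def wcm_forward(h, sv, sg, b):
    t = math.exp(-2.0 * b * h)          # two-way canopy transmissivity
    return sv * (1.0 - t) + sg * t

def wcm_height(sigma0, sv, sg, b):
    t = (sigma0 - sv) / (sg - sv)       # recover transmissivity, then invert
    return -math.log(t) / (2.0 * b)

sv, sg, b = 0.20, 0.05, 0.04            # canopy/ground backscatter, attenuation
h_true = 12.0                           # metres
h_est = wcm_height(wcm_forward(h_true, sv, sg, b), sv, sg, b)
```

In practice the model coefficients are fitted from the extracted backscatter and coherence products before the inversion is applied per pixel.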

Keywords: RISAT-1, classification, forest, SAR data

Procedia PDF Downloads 384
3756 Mechanical-Reliability Coupling of Reinforced Concrete Structures and Vulnerability Analysis: Case Study

Authors: Kernou Nassim

Abstract:

The current study presents a coupled vulnerability and mechanical-reliability approach that focuses on evaluating the seismic performance of reinforced concrete structures to determine the probability of failure. Here, the performance function reflecting the non-linear behavior of the structure is modeled by a response surface to establish an analytical relationship between the random variables (concrete strength and steel yield strength) and the mechanical response of the structure (inter-story displacement) obtained from the pushover results of finite element simulations. The pushover analysis is executed with the software SAP2000. The results acquired prove that properly designed frames perform well under seismic loads. It is a comparative study of the behavior of the existing structure before and after reinforcement using the pushover method. Coupling mechanics and reliability indirectly through the response surface avoids prohibitive calculation times. Finally, the results of the proposed approach are compared with Monte Carlo simulation. The comparative study shows that the structure is more reliable after the introduction of new shear walls.

Keywords: finite element method, surface response, reliability, reliability mechanical coupling, vulnerability

Procedia PDF Downloads 106
3755 Calculation of the Normalized Difference Vegetation Index and the Spectral Signature of Coffee Crops: Benefits of Image Filtering on Mixed Crops

Authors: Catalina Albornoz, Giacomo Barbieri

Abstract:

Crop monitoring has been shown to reduce vulnerability to spreading plagues and pathologies in crops. Remote sensing with Unmanned Aerial Vehicles (UAVs) has made crop monitoring more precise, cost-efficient, and accessible. Nowadays, remote monitoring involves calculating maps of vegetation indices using software that takes either true-color (RGB) or multispectral images as input. These maps are then used to segment the crop into management zones. Finally, the spectral signature of a crop (the reflected radiation as a function of wavelength) can be used as an input for decision-making and crop characterization. The calculation of vegetation indices using software such as Pix4D has high precision for monoculture plantations. However, this paper shows that using such software on mixed crops may lead to errors resulting in an incorrect segmentation of the field. Within this work, the authors propose filtering out all elements other than the main crop before calculating the vegetation indices and the spectral signature. A filter based on the Sobel border-detection method is used to filter a coffee crop. Results show that the segmentation into management zones changes with respect to the traditional situation in which no filter is applied. In particular, it is shown that the values of the spectral signature change by up to 17% per spectral band. Future work will quantify the benefits of filtering through a comparison between in situ measurements and the vegetation indices obtained through remote sensing.
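The two computational ingredients, the vegetation index and the Sobel border filter, can be sketched as follows; the reflectance values and the 3x3 window example are illustrative:

```python
# Per-pixel NDVI from red and near-infrared reflectance:
# NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1]. The reflectance
# values below are illustrative; healthy vegetation has high NIR, low red.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

canopy = ndvi(0.50, 0.08)   # dense coffee canopy: high NDVI
soil = ndvi(0.25, 0.20)     # bare soil: NDVI near zero

# Sobel gradient magnitude at the centre of a 3x3 window (here of NDVI
# values), used to flag borders between the main crop and other elements.
def sobel_mag(w):
    gx = (w[0][2] + 2 * w[1][2] + w[2][2]) - (w[0][0] + 2 * w[1][0] + w[2][0])
    gy = (w[2][0] + 2 * w[2][1] + w[2][2]) - (w[0][0] + 2 * w[0][1] + w[0][2])
    return (gx ** 2 + gy ** 2) ** 0.5
```

Pixels with high gradient magnitude mark transitions between crop and non-crop cover; masking them before averaging is one way to realise the filtering the paper proposes.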

Keywords: coffee, filtering, mixed crop, precision agriculture, remote sensing, spectral signature

Procedia PDF Downloads 374
3754 Efficient Implementation of Finite Volume Multi-Resolution WENO Scheme on Adaptive Cartesian Grids

Authors: Yuchen Yang, Zhenming Wang, Jun Zhu, Ning Zhao

Abstract:

An easy-to-implement and robust finite volume multi-resolution Weighted Essentially Non-Oscillatory (WENO) scheme is proposed on adaptive Cartesian grids in this paper. This multi-resolution WENO scheme is combined with the ghost cell immersed boundary method (IBM) and a wall-function technique to solve the Navier-Stokes equations. Unlike k-exact finite volume WENO schemes, which involve large amounts of extra storage, repeatedly solving the matrix generated by a least-squares method, or calculating optimal linear weights on adaptive Cartesian grids, the present methodology adds only a very small overhead and can be easily implemented in existing edge-based computational fluid dynamics (CFD) codes with minor modifications. Moreover, the linear weights of this adaptive finite volume multi-resolution WENO scheme can be any positive numbers on the condition that their sum is one. This bypasses the calculation of the optimal linear weights and avoids dealing with negative linear weights on adaptive Cartesian grids. Some benchmark viscous problems are numerically solved to show the efficiency and good performance of this adaptive multi-resolution WENO scheme. Compared with a second-order edge-based method, the presented method can be implemented on an adaptive Cartesian grid with slight modification for high Reynolds number problems.

Keywords: adaptive mesh refinement method, finite volume multi-resolution WENO scheme, immersed boundary method, wall-function technique

Procedia PDF Downloads 136
3753 D-Care: Diabetes Care Application to Enhance Diabetics' Awareness of Diabetes in Indonesia

Authors: Samara R. Dania, Maulana S. Aji, Dewi Lestari

Abstract:

Diabetes is a common disease in Indonesia. One of the risk factors for diabetes is an unhealthy diet, that is, consuming food that contains too much glucose; one source of glucose is food containing carbohydrates. The purpose of this study is to identify the amount of glucose in the food consumed. The authors use literature studies as the research method. The authors expect diabetics to become more aware of diabetes by applying daily dietary regulation through D-Care, an application that can enhance awareness of diabetes in Indonesia. D-Care provides two menus: nutrition calculation and healthy food. The nutrition calculation menu is used to estimate the glucose intake level by calculating the food consumed each day, whereas the healthy food menu provides combinations of healthy menus for diabetics. In conclusion, D-Care is useful for reducing diabetes prevalence in Indonesia.

Keywords: D-Care, diabetes, awareness, healthy food

Procedia PDF Downloads 399
3752 Aerodynamic Design Optimization of High-Speed Hatchback Cars for Lucrative Commercial Applications

Authors: A. Aravind, M. Vetrivel, P. Abhimanyu, C. A. Akaash Emmanuel Raj, K. Sundararaj, V. R. S. Kumar

Abstract:

The choice of high-speed, low-budget hatchback cars with diversified options is increasing to meet the trend of new-generation buyers. This paper aims to augment the current speed of hatchback cars through an aerodynamic drag reduction technique. Inverted airfoils are fitted at the bottom of the car to generate downward force, negating lift while increasing the current speed range to achieve better road performance. The numerical simulations have been carried out using a 2D steady pressure-based k-ɛ realizable model with enhanced wall treatment. In our numerical studies, the Reynolds-averaged Navier-Stokes model and its solution code are used. The code is calibrated and validated using the exact solution of the 2D boundary layer displacement thickness at the Sanal flow choking condition for adiabatic flows. We observed through parametric analytical studies that inverted airfoils integrated with the bottom surface at various predesigned locations of hatchback cars can improve overall aerodynamic efficiency through drag reduction, which decreases fuel consumption significantly and ensures optimum road performance within the framework of the manufacturer's constraints.
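The downward force the inverted airfoils are meant to generate follows the usual dynamic-pressure relation F = ½ ρ v² C A; the sketch below uses illustrative hatchback-scale numbers, not values from the paper's simulations:

```python
# Aerodynamic force sketch: an inverted airfoil produces downforce (and drag)
# according to F = 0.5 * rho * v^2 * C * A. The coefficient, area, and speed
# below are illustrative hatchback-scale numbers, not the paper's results.

def aero_force(rho, v, coeff, area):
    """Force in newtons from dynamic pressure, a force coefficient, and a
    reference area."""
    return 0.5 * rho * v ** 2 * coeff * area

rho = 1.225                                 # air density at sea level, kg/m^3
v = 40.0                                    # speed, m/s (144 km/h)
downforce = aero_force(rho, v, 0.8, 0.3)    # N, one inverted-airfoil element
```

Because the force grows with v², the added downforce matters most exactly in the extended speed range the paper targets.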

Keywords: aerodynamics of commercial cars, downward force, hatchback car, inverted airfoil

Procedia PDF Downloads 257
3751 Modular Data and Calculation Framework for a Technology-based Mapping of the Manufacturing Process According to the Value Stream Management Approach

Authors: Tim Wollert, Fabian Behrendt

Abstract:

Value Stream Management (VSM) is a widely used methodology in the context of Lean Management for improving end-to-end material and information flows from a supplier to a customer from a company's perspective. Whereas the design principles, e.g., pull, value-adding, and customer orientation, remain valid against the background of an increasingly digitalized and dynamic environment, the methodology itself for mapping a value stream is time- and resource-intensive due to the high degree of manual activities. The digitalization of processes in the context of Industry 4.0 enables new opportunities to reduce these manual efforts and make the VSM approach more agile. This paper aims at providing a modular data and calculation framework that utilizes the available business data provided by information and communication technologies to automate the value stream mapping process, with a focus on the manufacturing process.

Keywords: lean management 4.0, value stream management (VSM) 4.0, dynamic value stream mapping, enterprise resource planning (ERP)

Procedia PDF Downloads 130
3750 An Analytical Method for Solving General Riccati Equation

Authors: Y. Pala, M. O. Ertas

Abstract:

In this paper, the general Riccati equation is solved analytically by a new transformation. With the method developed, one can readily determine from the transformed equation whether or not an explicit solution can be obtained. Since the present method does not require a proper (particular) solution to construct the general solution, it is especially suitable for equations whose proper solutions cannot be seen at first glance. Since the transformed second-order linear equation obtained by the present transformation has the simplest form it can have, it is immediately seen whether or not the original equation can be solved analytically. The method is illustrated by several examples.
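For context, the classical reduction of the Riccati equation to a second-order linear equation runs as follows (this is the generic textbook transformation; the paper's new transformation is not reproduced here):

```latex
% General Riccati equation
y' = P(x) + Q(x)\,y + R(x)\,y^{2}.
% Substituting y = -u'/(R u) cancels the quadratic term and yields the
% second-order linear equation
u'' - \left( Q + \frac{R'}{R} \right) u' + P\,R\,u = 0,
% whose solutions u recover y through y = -u'/(R u).
```

Whether this linear equation admits closed-form solutions then decides the analytical solvability of the original Riccati equation, which is the kind of criterion the abstract describes reading off from the transformed equation.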

Keywords: Riccati equation, analytical solution, proper solution, nonlinear

Procedia PDF Downloads 335
3749 A Perspective of Digital Formation in the Solar Community as a Prototype for Finding Sustainable Algorithmic Conditions on Earth

Authors: Kunihisa Kakumoto

Abstract:

“Purpose”: Global environmental issues are now being raised in a global dimension. By predicting, with algorithms, sprawl phenomena that exceed the limits of nature, we can expect to keep our social life within those limits. A sustainable state of the planet consists in maintaining a balance between the capacity of nature and the demands of our social life. The amount of water on earth is finite, so sustainability depends strongly on water capacity. A certain amount of water is stored in forests by planting and green space, so water capacity can be considered in relation to green area; CO2 is likewise absorbed by green plants. “Possible measurements and methods”: The concept of the solar community has been introduced in technical papers at many international conferences. The solar community concept is based on data collected from one solar model house. This algorithmic study simulates the amount of water stored by green vegetation. In addition, we calculated and compared the CO2 emissions of the solar community and the CO2 reduction from greening. Based on these trial calculations for solar communities, we simulate the sustainable state of the earth as an algorithmic result. We believe the composition of solar community groups should also be considered using digital technology as a control technology. “Conclusion”: We consider the solar community as a prototype for finding sustainable conditions for the planet. The role of water is very important, as the water supply capacity is limited, yet the circulation of our social life is not constructed according to the mechanisms of nature. The simulation trial calculation is explained using the total water supply volume as an example.
In this process, the algorithmic calculation considers the total water supply capacity together with the population and the habitable number of the area. Green vegetated land is very important both to retain enough water and to maintain the CO2 balance. A simulation trial calculation is possible from the relationship between the CO2 emissions of the solar community and the CO2 reduction due to greening. To find this total balance and the sustainable conditions, the algorithmic simulation takes green vegetation and the total water supply into account. The search for sustainable conditions is carried out by simulating an algorithmic model of the solar community as a prototype; in this one prototype example, the balance holds. The activities of our social life must take place within the permissible limits of natural mechanisms. We also aim for a more ideal balance by utilizing auxiliary digital control technology such as AI.

Keywords: solar community, sustainability, prototype, algorithmic simulation

Procedia PDF Downloads 44
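As a toy illustration of the water-balance calculation the abstract above describes, the following sketch relates the total water supply (rainfall plus storage in green space) to the population an area can sustain. All figures and function names are hypothetical placeholders, not data from the solar community study.

```python
# Toy water-balance sketch of the algorithmic calculation described above.
# All figures are hypothetical placeholders, not data from the solar community study.

def forest_storage(green_area_ha, storage_m3_per_ha):
    """Water retained by vegetated land (m^3/year)."""
    return green_area_ha * storage_m3_per_ha

def supportable_population(total_supply_m3, per_capita_use_m3):
    """Population that the total water supply capacity can sustain."""
    return int(total_supply_m3 // per_capita_use_m3)

rainfall_supply = 500_000.0  # m^3/year reaching the community (hypothetical)
stored = forest_storage(green_area_ha=120.0, storage_m3_per_ha=400.0)
total_supply = rainfall_supply + stored  # 548,000 m^3/year

pop = supportable_population(total_supply, per_capita_use_m3=130.0)
print(pop)  # -> 4215
```

The same balance structure extends to the CO2 side by comparing community emissions against the absorption attributed to the green area.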
3748 Comprehensive Validation of High-Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) for Quantitative Assessment of Caffeic Acid in Phenolic Extracts from Olive Mill Wastewater

Authors: Layla El Gaini, Majdouline Belaqziz, Meriem Outaki, Mariam Minhaj

Abstract:

In this study, we introduce and validate a high-performance liquid chromatography method with diode-array detection (HPLC-DAD) designed for the accurate quantification of caffeic acid in phenolic extracts obtained from olive mill wastewater. Separation of caffeic acid was achieved on an Acclaim Polar Advantage column (5 µm, 250 × 4.6 mm). A multi-step gradient mobile phase comprising water acidified with phosphoric acid (pH 2.3) and acetonitrile was employed to ensure optimal separation. Diode-array detection was performed across the UV-VIS spectrum (200–800 nm), facilitating precise analytical results. The method underwent comprehensive validation addressing the essential analytical parameters: specificity, repeatability, linearity, limits of detection and quantification, and measurement uncertainty. The standard curves were linear with high correlation coefficients, underscoring the method's efficacy and consistency. The validated approach is robust and demonstrates exceptional reliability for the analysis of caffeic acid within the intricate matrices of olive mill wastewater, offering significant potential for applications in environmental and analytical chemistry.

Keywords: high-performance liquid chromatography (HPLC-DAD), caffeic acid analysis, olive mill wastewater phenolics, analytical method validation

Procedia PDF Downloads 50
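The linearity and detection-limit checks described in this validation can be sketched with an ordinary least-squares calibration fit and the common ICH-style estimates LOD = 3.3·s/S and LOQ = 10·s/S (s: residual standard deviation, S: slope). The concentrations and peak areas below are hypothetical, not the study's data.

```python
# Sketch of calibration-linearity and detection-limit checks of the kind used in
# HPLC method validation. Concentrations and areas are hypothetical, not study data.
import statistics

conc = [1.0, 2.0, 5.0, 10.0, 20.0]       # caffeic acid standards, ug/mL (hypothetical)
area = [10.2, 20.1, 50.4, 100.9, 201.0]  # peak areas (hypothetical)

n = len(conc)
mx, my = statistics.mean(conc), statistics.mean(area)
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, area))
slope = sxy / sxx
intercept = my - slope * mx

residuals = [y - (slope * x + intercept) for x, y in zip(conc, area)]
s_res = (sum(r * r for r in residuals) / (n - 2)) ** 0.5  # residual std deviation

# ICH Q2-style estimates based on the residual std deviation of the regression line
lod = 3.3 * s_res / slope
loq = 10.0 * s_res / slope
print(round(slope, 3), round(lod, 4), round(loq, 4))
```

With real data the correlation coefficient of the fit would also be reported, as the abstract indicates.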
3747 A Multi-Family Offline SPE LC-MS/MS Analytical Method for Anionic, Cationic and Non-ionic Surfactants in Surface Water

Authors: Laure Wiest, Barbara Giroud, Azziz Assoumani, Francois Lestremau, Emmanuelle Vulliet

Abstract:

Due to their production at high tonnages and their extensive use, surfactants are among the contaminants determined at the highest concentrations in wastewater. However, analytical methods and occurrence data for river water are scarce and concern only a few families, mainly anionic surfactants. The objective of this study was to develop an analytical method to extract and analyze a wide variety of surfactants in a minimum of steps, with a sensitivity compatible with the detection of ultra-traces in surface waters. Twenty-seven substances from 12 surfactant families (anionic, cationic and non-ionic) were selected for method optimization. Different retention mechanisms for solid phase extraction (SPE) were tested and compared in order to improve their detection by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). The best results were obtained with a C18-grafted silica LC column and a polymer cartridge with hydrophilic-lipophilic balance (HLB), and the method developed allows the extraction of the three types of surfactants with satisfactory recoveries. The final analytical method comprises only one extraction and two LC injections. It was validated and applied to the quantification of surfactants in 36 river samples. The method's limits of quantification (LQ) and intra- and inter-day precision and accuracy were evaluated, and good performance was obtained for all 27 substances. Because these compounds have many areas of application, contamination of instrument and method blanks was observed and taken into account in the determination of the LQ. Nevertheless, with LQ between 15 and 485 ng/L and accuracy above 80%, the method is suitable for monitoring surfactants in surface waters. Application to French river samples revealed the presence of anionic, cationic and non-ionic surfactants, with median concentrations ranging from 24 ng/L for octylphenol ethoxylates (OPEO) to 4.6 µg/L for linear alkylbenzenesulfonates (LAS). The analytical method developed in this work will therefore be useful for future monitoring of surfactants in waters; moreover, since it performs well for anionic, non-ionic and cationic surfactants, it may easily be adapted to other surfactants.

Keywords: anionic surfactant, cationic surfactant, LC-MS/MS, non-ionic surfactant, SPE, surface water

Procedia PDF Downloads 126
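Since the abstract notes that blank contamination was taken into account when setting the LQ, here is a minimal sketch of one common blank-based convention, LQ = mean(blanks) + 10·SD(blanks). The blank values and the exact convention used in the study are assumptions for illustration.

```python
# Sketch of a blank-based limit of quantification, of the kind used when method
# blanks are contaminated: LQ is set from the mean and standard deviation of the
# blank signal (expressed here directly in ng/L). Blank values are hypothetical.
import statistics

def blank_based_lq(blank_values_ng_l, k=10.0):
    """LQ = mean(blanks) + k * stdev(blanks), a common blank-correction convention."""
    return statistics.mean(blank_values_ng_l) + k * statistics.stdev(blank_values_ng_l)

blanks = [4.0, 6.0, 5.0, 7.0, 3.0]  # ng/L, hypothetical method blanks
print(round(blank_based_lq(blanks), 1))  # -> 20.8
```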
3746 Uneven Habitat Characterisation by Using Geo-Gebra Software in the Lacewings (Insecta: Neuroptera), Knowing When to Calculate the Habitat: Creating More Informative Ecological Experiments

Authors: Hakan Bozdoğan

Abstract:

A wide variety of traditional methodologies has been developed for characterising regular habitats in pursuit of different environmental objectives. In this study, habitats are characterised by size and shape using GeoGebra software, an innovative approach to habitat characterisation in lacewing species. The approach is demonstrated using 'surface area' as the analytical concept, with the goal of increasing clarity for researchers and improving the quality of survey work. In conclusion, habitat characterisation using a mathematical programme offers a unique means of collecting more comprehensible, analytical information about irregularly shaped areas beyond the reach of direct observation methods. This research contributes a new perspective for assessing habitat structure, providing a novel mathematical tool for the study and management of such habitats and environments. Further surveys should be undertaken at additional sites within the Amanos Mountains for a comprehensive assessment of lacewing habitat characterisation in an analytical plane. This paper is supported by Ahi Evran University Scientific Research Projects Coordination Unit, Projects No: TBY.E2.17.001 and TBY.A4.16.001.

Keywords: uneven habitat shape, habitat assessment, lacewings, Geo-Gebra Software

Procedia PDF Downloads 267
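The 'surface area' concept can be illustrated with the shoelace formula, which gives the area of an irregular habitat outline digitised as an ordered polygon (GeoGebra's polygon area rests on the same principle). The coordinates below are hypothetical, not survey data.

```python
# Shoelace-formula sketch of the "surface area" idea for an uneven habitat outline
# digitised as polygon vertices. Coordinates are hypothetical, not survey data.

def shoelace_area(vertices):
    """Area of a simple polygon from ordered (x, y) vertices."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

habitat_outline = [(0, 0), (40, 5), (55, 30), (30, 45), (5, 25)]  # metres, hypothetical
print(shoelace_area(habitat_outline))  # -> 1512.5
```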
3745 Analytical Solution for Multi-Segmented Toroidal Shells under Uniform Pressure

Authors: Nosakhare Enoma, Alphose Zingoni

Abstract:

The requirements for various toroidal shell forms are increasing owing to new applications, available storage space and considerations of appearance. Because of the complexity of some of these structural forms, the finite element method is nowadays mainly used for their analysis, even for simple static studies. This paper presents an easy-to-use analytical algorithm for pressurized multi-segmented toroidal shells of revolution. The membrane solution, which acts as a particular solution of the bending-theory equations, is developed from the membrane theory of shells, and a general approach is formulated for quantifying discontinuity effects at the shell junctions using the well-known Geckeler approximation. On superimposing these effects and applying the ensuing solution to the problem of a pressurized toroid with four segments, closed-form stress results are obtained for the entire toroid. A numerical example is carried out using the developed method. The analytical results show excellent agreement with those from the finite element method, indicating that the proposed method can also be used for complementing and verifying FEM results and for providing insight into related problems.

Keywords: bending theory of shells, membrane hypothesis, pressurized toroid, segmented toroidal vessel, shell analysis

Procedia PDF Downloads 297
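The membrane solution that serves as the particular solution here has a standard closed form for a circular toroid under internal pressure; the sketch below evaluates those textbook membrane stress resultants at the crown and at the outer and inner equators of the tube section. The geometry and pressure values are hypothetical.

```python
# Membrane-solution sketch for a pressurised circular toroid: the standard
# membrane stress resultants onto which discontinuity effects are superimposed.
# Geometry and pressure values are hypothetical.
import math

def torus_membrane_forces(p, a, r, phi):
    """Membrane stress resultants (N_phi, N_theta) [force/length] for an internally
    pressurised circular toroid: centreline radius a, tube radius r, pressure p,
    with phi measured from the crown of the tube section."""
    n_phi = p * r * (2 * a + r * math.sin(phi)) / (2 * (a + r * math.sin(phi)))
    n_theta = p * r / 2.0
    return n_phi, n_theta

p, a, r = 0.5, 4.0, 1.0   # MPa, m, m (hypothetical)
for deg in (0, 90, -90):  # crown, outer equator, inner equator
    n_phi, n_theta = torus_membrane_forces(p, a, r, math.radians(deg))
    print(deg, round(n_phi, 4), round(n_theta, 4))
```

Note that N_phi is largest at the inner equator (the term a + r·sin(phi) is smallest there), which is where junction discontinuity effects typically matter most.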
3744 Calculation of Organs Radiation Dose in Cervical Carcinoma External Irradiation Beam Using Day’s Methods

Authors: Yousif M. Yousif Abdallah, Mohamed E. Gar-Elnabi, Abdoelrahman H. A. Bakary, Alaa M. H. Eltoum, Abdelazeem K. M. Ali

Abstract:

The study was established to measure the radiation dose outside the treatment field in external beam radiation therapy using Day's method of dose calculation. Data were collected from 89 patients with cervical carcinoma in order to determine whether the dose outside the irradiation field to the spleen, liver, both kidneys, small bowel, large colon and skin was within acceptable limits. The cervical field mainly included four organs, namely the bladder, rectum, part of the small bowel and the hip joint, which received mean doses of (4781.987 ± 281.321), (4736.91 ± 331.8), (4647.64 ± 387.1) and (4745.91 ± 321.11) cGy, respectively. The mean doses received by out-of-field organs were (77.69 ± 15.24) cGy to the large colon, (93.079 ± 12.31) cGy to the right kidney, (80.688 ± 12.644) cGy to the skin and (155.86 ± 17.69) cGy to the small bowel, the last being the most significant value noted.

Keywords: radiation dose, cervical carcinoma, day’s methods, radiation medicine

Procedia PDF Downloads 399
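The mean ± SD organ-dose figures above are ordinary sample statistics over the patient cohort. A minimal sketch of that summary step, using a short hypothetical dose list rather than the 89-patient data set, is:

```python
# Sketch of the mean +/- SD organ-dose summary reported above. The per-patient
# dose list is a short hypothetical example, not the 89-patient data set.
import statistics

def summarise(doses_cgy):
    """Return (mean, sample standard deviation) of organ doses in cGy."""
    return statistics.mean(doses_cgy), statistics.stdev(doses_cgy)

small_bowel = [150.0, 140.0, 170.0, 160.0]  # cGy, hypothetical out-of-field doses
mean, sd = summarise(small_bowel)
print(f"small bowel: {mean:.2f} +/- {sd:.2f} cGy")  # -> small bowel: 155.00 +/- 12.91 cGy
```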
3743 Analytical Study of Cobalt(II) and Nickel(II) Extraction with Salicylidene O-, M-, and P-Toluidine in Chloroform

Authors: Sana Almi, Djamel Barkat

Abstract:

The solvent extraction of cobalt(II) and nickel(II) from aqueous sulfate solutions was investigated by the analytical method of slope analysis, using salicylidene aniline and the three isomeric o-, m- and p-salicylidene toluidines diluted in chloroform at 25 °C. Statistical analysis of the extraction data indicated that the extracted species are CoL2 together with CoL2(HL), and NiL2 (HL denotes HSA, HSOT, HSMT, and HSPT). The extraction efficiency of Co(II) was higher than that of Ni(II), a tendency confirmed by the numerical extraction constants for each metal cation. Extraction efficiency followed the order HSMT > HSPT > HSOT > HSA for both Co2+ and Ni2+.

Keywords: solvent extraction, nickel(II), cobalt(II), salicylidene aniline, o-, m-, and p-salicylidene toluidine

Procedia PDF Downloads 464
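The slope analysis mentioned here can be sketched as a least-squares fit of log D against pH: for a divalent cation extracted by a chelating agent HL, a slope near 2 indicates two protons exchanged and hence an ML2-type extracted species. The data points below are hypothetical, not the study's measurements.

```python
# Slope-analysis sketch: fit log D vs pH; for a divalent metal M(2+) extracted by
# a chelating extractant HL, a slope ~2 indicates an ML2-type extracted species.
# Data points are hypothetical, not the study's measurements.

pH   = [3.0, 3.5, 4.0, 4.5, 5.0]
logD = [-1.9, -0.9, 0.1, 1.1, 2.1]  # hypothetical distribution-ratio data, slope ~2

n = len(pH)
mx = sum(pH) / n
my = sum(logD) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(pH, logD))
         / sum((x - mx) ** 2 for x in pH))
print(round(slope, 2))  # ~2 protons released per metal ion -> ML2 species
```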
3742 On the Stability Exact Analysis of Tall Buildings with Outrigger System

Authors: Mahrooz Abed, Amir R. Masoodi

Abstract:

Many structural lateral systems are used in tall buildings, such as rigid frames, braced frames, shear walls, tubular structures and core structures. Among the efficient options for drift control and base moment reduction are outrigger and belt truss systems. When adopting outrigger beams in building design, they should be placed at an optimum position for an economical design. A range of strategies has been employed to identify the optimum locations of outrigger beams under wind load; however, there is an absence of scientific research or case studies dealing with optimum outrigger location based on buckling analysis. In this paper, a single outrigger system is considered, initially placed at mid-height of the structure. The core of the structure is modeled as a clamped tapered beam, whose exact stiffness matrix is formulated based on the Euler-Bernoulli theory. Finally, the optimal location of the outrigger is found based on the buckling load limitation of the structure.

Keywords: tall buildings, outrigger system, buckling load, second-order effects, Euler-Bernoulli beam theory

Procedia PDF Downloads 379
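As a baseline for the buckling-load limitation, the Euler critical load of a uniform clamped-free column, P_cr = π²EI/(2L)², illustrates the quantity being limited; the paper itself uses an exact tapered-beam stiffness matrix, which this uniform-section sketch does not reproduce. All values are hypothetical.

```python
# Baseline sketch of the buckling criterion: Euler critical load of a uniform
# clamped-free (cantilever) core column. The paper uses an exact tapered-beam
# stiffness matrix; this uniform-section version only illustrates the quantity
# being limited. All values are hypothetical.
import math

def euler_cantilever_pcr(E, I, L):
    """Critical buckling load [N] of a fixed-free prismatic column: pi^2*E*I/(2L)^2."""
    return math.pi ** 2 * E * I / (2.0 * L) ** 2

E = 30e9   # Pa, concrete core modulus (hypothetical)
I = 250.0  # m^4, core second moment of area (hypothetical)
L = 150.0  # m, building height (hypothetical)
print(f"{euler_cantilever_pcr(E, I, L):.3e} N")
```

An outrigger restraining the core raises this critical load by shortening the effective length, which is why the outrigger position enters the buckling criterion.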
3741 Effect of Modeling of Hydraulic Form Loss Coefficient to Break on Emergency Core Coolant Bypass

Authors: Young S. Bang, Dong H. Yoon, Seung H. Yoo

Abstract:

Emergency Core Coolant Bypass (ECC bypass) has been regarded as an important phenomenon for the peak cladding temperature in large-break loss-of-coolant accidents (LBLOCA) in nuclear power plants (NPP). A modeling scheme to address the ECC bypass phenomenon and an LBLOCA calculation using that scheme are discussed in the present paper. The hydraulic form loss coefficient (HFLC) from the reactor vessel downcomer to the broken cold leg is predicted by a computational fluid dynamics (CFD) code while varying the void fraction incoming from the downcomer. The maximum, mean, and minimum values of the HFLC are derived from the CFD results and incorporated into the LBLOCA calculation using a system thermal-hydraulic code, MARS-KS. The HFLC to the break and its range are proposed as the relevant parameters addressing the ECC bypass phenomenon.

Keywords: CFD analysis, ECC bypass, hydraulic form loss coefficient, system thermal-hydraulic code

Procedia PDF Downloads 214
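A hydraulic form loss coefficient is commonly backed out of pressure-drop results via dP = K·ρ·v²/2. The sketch below does this per void-fraction case and then takes the minimum, mean, and maximum, mirroring the range of sensitivity values fed to the system code; all numbers are hypothetical, not the paper's CFD results.

```python
# Sketch of extracting a form loss coefficient from CFD-style pressure-drop data:
# K = 2*dP / (rho * v^2) per void-fraction case, then min/mean/max over the cases.
# All numbers are hypothetical, not the paper's CFD results.

def form_loss_coefficient(dp_pa, rho, v):
    """Form loss coefficient K from the pressure drop dp = K * rho * v^2 / 2."""
    return 2.0 * dp_pa / (rho * v ** 2)

# (void fraction, dP [Pa], mixture density [kg/m^3], velocity [m/s]) - hypothetical
cfd_cases = [(0.1, 54000.0, 900.0, 10.0),
             (0.5, 30000.0, 500.0, 10.0),
             (0.9, 13000.0, 100.0, 10.0)]

ks = [form_loss_coefficient(dp, rho, v) for _, dp, rho, v in cfd_cases]
print(min(ks), sum(ks) / len(ks), max(ks))
```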