Search results for: quantum optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3794


224 CO₂ Conversion by Low-Temperature Fischer-Tropsch

Authors: Pauline Bredy, Yves Schuurman, David Farrusseng

Abstract:

To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept, which combines CO₂ with hydrogen supplied from water electrolysis powered by renewable sources, is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The process discussed here is an upgrade of the well-known Fischer-Tropsch synthesis. The concept relies on two cascade reactions in one pot: first the conversion of CO₂ into CO via the reverse water gas shift (RWGS) reaction, followed by the Fischer-Tropsch synthesis (FTS). Instead of using an Fe-based catalyst, which can carry out both reactions, we chose to decouple the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS shifts the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy should enable optimization of the catalyst pair and thus allow the reaction temperature to be lowered, thanks to the equilibrium shift, in order to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst as well as in the ability of the FT catalyst to be highly selective. Methane production is the main concern, as the energetic barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report the study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behavior under mild process conditions using high-throughput experimentation. Our results show that at 250°C and 20 bar, cobalt catalysts mainly act as methanation catalysts. Indeed, CH₄ selectivity never drops below 80% despite the addition of various promoters (Nb, K, Pt, Cu) to the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to selectivities toward longer hydrocarbon chains (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system. Similar selectivity and conversion were obtained at reduction temperatures between 250 and 400°C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Better results are expected by coupling the RWGS catalyst with a more selective FT catalyst. This was achieved using a cobalt/iron FTS catalyst: the CH₄ selectivity dropped to 62% at 265°C, 20 bar, and a GHSV of 2500 ml/h/g_cat. We propose that the conditions used for the cobalt catalysts could have favored methanation, because these catalysts are known to perform best around 210°C in classical FTS, whereas iron catalysts are more flexible but are also known to have RWGS activity.
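For reference, a minimal sketch of the tandem reaction network discussed above (RWGS feeding FTS, with methanation as the competing pathway) and one common carbon-based definition of methane selectivity; the stoichiometry is standard textbook chemistry and the selectivity definition is an assumption, not restated verbatim in the abstract.

```latex
% Tandem scheme assumed above (standard stoichiometry) and a common
% carbon-based definition of methane selectivity
\begin{align*}
\text{RWGS:}\quad & \mathrm{CO_2 + H_2 \rightleftharpoons CO + H_2O} \\
\text{FTS:}\quad & n\,\mathrm{CO} + (2n{+}1)\,\mathrm{H_2} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \\
\text{Methanation:}\quad & \mathrm{CO + 3\,H_2 \rightarrow CH_4 + H_2O} \\
\text{Selectivity:}\quad & S_{\mathrm{CH_4}} = \frac{n_{\mathrm{CH_4}}}{\sum_j j\, n_{\mathrm{C}_j}} \times 100\,\%
\end{align*}
```

Here n_{C_j} denotes the moles of hydrocarbon product containing j carbon atoms, so the denominator counts all carbon ending up in hydrocarbons.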

Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process

Procedia PDF Downloads 57
223 Evaluation of Cardiac Rhythm Patterns after Open Surgical Maze-Procedures from Three Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, H. H. Sievers, B. Nasseri, S. A. Mohamed

Abstract:

In order to optimize the efficacy of medications, regular follow-up with long-term continuous monitoring of heart rhythm patterns has been facilitated since the clinical introduction of cardiac implantable electronic monitoring devices (CIMD). Extensive analysis of circadian rhythm properties can disclose the distribution of arrhythmic events, which may support appropriate medication according to a rate-/rhythm-control strategy and minimize consequent afflictions. 348 patients (69 ± 0.5 years, male 61.8%) with pre-existing atrial fibrillation (AF), undergoing primary ablation therapies combined with coronary or valve operations and secondary implantation of CIMDs, were involved and divided into 3 groups: PAAF (paroxysmal AF) (n=99, male 68.7%), PEAF (persistent AF) (n=94, male 62.8%), and LSPEAF (long-standing persistent AF) (n=155, male 56.8%). All patients participated in a three-year ambulatory follow-up (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation recurrence was assessed using the cardiac monitoring devices, and attack frequencies and their circadian patterns were systematically analyzed. Anticoagulants and regular anti-arrhythmic medications were evaluated, the latter being classified into rate-control and rhythm-control regimens. Patients in the PEAF group showed the lowest AF burden after surgical ablation procedures compared with the other two subtypes (p < 0.05). The recurrent AF episodes were predominantly shorter than one hour, mostly within 10 minutes (p < 0.05), regardless of AF subtype. Concerning the circadian distribution of the recurrence attacks, frequent AF attacks were mostly recorded in the morning in the PAAF group (p < 0.05), whereas patients with pre-existing PEAF reported fewer attack-induced discomforts in the latter half of the night, and those with LSPEAF only when they were not physically active after the primary surgical ablation. Different AF subtypes presented distinct therapeutic efficacies after appropriate surgical ablation procedures and distinct recurrence properties in terms of circadian distribution. Optimization of the medical regimen and drug dosages to maintain therapeutic success requires closer attention to detailed assessment during long-term follow-up. The rate-control strategy plays a much more important role than rhythm control in the ongoing follow-up examinations.

Keywords: atrial fibrillation, CIMD, MAZE, rate-control, rhythm-control, rhythm patterns

Procedia PDF Downloads 156
222 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS

Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan

Abstract:

Stress analysis plays an important role in the optimization of structures, and prior stress estimation helps in better design of products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. In the aircraft industry especially, the usage of composites is greater due to their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials have the drawback of delamination and debonding because the bonding materials are weaker than the parent materials, so composite joints should be analyzed properly before being used in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element SOLID46 for the composite plate and the solid element SOLID45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using the ANSYS expansion options. The results show the effect of pin diameter on joint strength: the deflection and the load shared by the pin increase, while other parameters such as overall stress, pin stress and contact pressure reduce due to the lower load carried by the plate material. The material study shows that a higher Young's modulus material deflects less, but the other parameters increase. The interference analysis shows an increase in overall stress, pin stress and contact stress along with pin bearing load. This increase should be understood properly when seeking to increase the load-carrying capacity of the joint. Generally, every structure is preloaded to increase the compressive stress in the joint and thus its load-carrying capacity, but for composites the stress increase should be analyzed carefully because of delamination and debonding caused by failure of the bonding materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows more uniform results with lower values for all parameters, mainly due to the applied layer angle combinations. All results are presented with the necessary pictorial plots.

Keywords: bearing force, frictional force, finite element analysis, ANSYS

Procedia PDF Downloads 334
221 The Path to Ruthium: Insights into the Creation of a New Element

Authors: Goodluck Akaoma Ordu

Abstract:

Ruthium (Rth) represents a theoretical superheavy element with an atomic number of 119, proposed within the context of advanced materials science and nuclear physics. The conceptualization of Rth involves theoretical frameworks that anticipate its atomic structure, including a hypothesized stable isotope, Rth-320, characterized by 119 protons and 201 neutrons. The synthesis of Ruthium (Rth) hinges on intricate nuclear fusion processes conducted in state-of-the-art particle accelerators, notably utilizing Calcium-48 (Ca-48) as a projectile nucleus and Einsteinium-253 (Es-253) as a target nucleus. These experiments aim to induce fusion reactions that yield Ruthium isotopes, such as Rth-301, accompanied by neutron emission. Theoretical predictions outline various physical and chemical properties attributed to Ruthium (Rth). It is envisaged to possess a high density, estimated at around 25 g/cm³, with melting and boiling points anticipated to be exceptionally high, approximately 4000 K and 6000 K, respectively. Chemical studies suggest potential oxidation states of +2, +3, and +4, indicating a versatile reactivity, particularly with halogens and chalcogens. The atomic structure of Ruthium (Rth) is postulated to feature an electron configuration of [Rn] 5f^14 6d^10 7s^2 7p^2, reflecting its position in the periodic table as a superheavy element. However, the creation and study of superheavy elements like Ruthium (Rth) pose significant challenges. These elements typically exhibit very short half-lives, posing difficulties in their stabilization and detection. Research efforts are focused on identifying the most stable isotopes of Ruthium (Rth) and developing advanced detection methodologies to confirm their existence and properties. Specialized detectors are essential in observing decay patterns unique to Ruthium (Rth), such as alpha decay or fission signatures, which serve as key indicators of its presence and characteristics. The potential applications of Ruthium (Rth) span across diverse technological domains, promising innovations in energy production, material strength enhancement, and sensor technology. Incorporating Ruthium (Rth) into advanced energy systems, such as the Arc Reactor concept, could potentially amplify energy output efficiencies. Similarly, integrating Ruthium (Rth) into structural materials, exemplified by projects like the NanoArc gauntlet, could bolster mechanical properties and resilience. Furthermore, Ruthium (Rth)--based sensors hold promise for achieving heightened sensitivity and performance in various sensing applications. Looking ahead, the study of Ruthium (Rth) represents a frontier in both fundamental science and applied research. It underscores the quest to expand the periodic table and explore the limits of atomic stability and reactivity. Future research directions aim to delve deeper into Ruthium (Rth)'s atomic properties under varying conditions, paving the way for innovations in nanotechnology, quantum materials, and beyond. The synthesis and characterization of Ruthium (Rth) stand as a testament to human ingenuity and technological advancement, pushing the boundaries of scientific understanding and engineering capabilities. In conclusion, Ruthium (Rth) embodies the intersection of theoretical speculation and experimental pursuit in the realm of superheavy elements. It symbolizes the relentless pursuit of scientific excellence and the potential for transformative technological breakthroughs. 
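As a sanity check on the proton and neutron bookkeeping implied above (not written out explicitly in the abstract), the proposed hot-fusion route balances as follows; the xn notation for evaporated neutrons is the standard convention for such reactions.

```latex
% Charge and mass balance for the proposed Ca-48 + Es-253 synthesis route
\mathrm{^{48}_{20}Ca} + \mathrm{^{253}_{99}Es} \;\rightarrow\; \mathrm{^{301}119^{*}} \;\rightarrow\; \mathrm{^{301-x}119} + x\,\mathrm{n},
\qquad Z = 20 + 99 = 119,\quad A = 48 + 253 = 301
```

On the same bookkeeping, the hypothesized isotope Rth-320 corresponds to 119 protons plus 201 neutrons (119 + 201 = 320), whereas the fusion product quoted above, Rth-301, would carry only 182 neutrons before any neutron evaporation.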
As research continues to unravel the mysteries of Ruthium (Rth), it holds the promise of reshaping materials science and opening new frontiers in technological innovation.

Keywords: superheavy element, nuclear fusion, bombardment, particle accelerator, nuclear physics, particle physics

Procedia PDF Downloads 36
220 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology to obtain the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
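As a minimal illustration of the updating idea (not the authors' MATLAB implementation), the sketch below corrects the linear frequency response of a two-DOF spring-mass-damper system for a localized cubic spring by iterating on a first-harmonic equivalent stiffness; all numerical values are invented for the example.

```python
import numpy as np

# Two-DOF spring-mass-damper with a cubic spring (k3 * x^3) attached at DOF 1.
# Illustrative parameters only, not those of the systems studied in the paper.
m1, m2 = 1.0, 1.0
k1, k2 = 1.0e4, 1.0e4
c1, c2 = 5.0, 5.0
k3 = 5.0e7                      # cubic stiffness of the localized nonlinearity
F = np.array([1.0, 0.0])        # harmonic force amplitude applied at DOF 0

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])

def nl_frf(omega, n_iter=50, tol=1e-8):
    """Correct the linear FRF for the local cubic spring via a
    first-harmonic equivalent stiffness (fixed-point iteration)."""
    x = np.linalg.solve(-omega**2 * M + 1j * omega * C + K, F)   # linear response
    for _ in range(n_iter):
        a = abs(x[1])                        # amplitude at the nonlinear DOF
        k_eq = 0.75 * k3 * a**2              # describing-function stiffness
        dK = np.zeros((2, 2))
        dK[1, 1] = k_eq
        x_new = np.linalg.solve(-omega**2 * M + 1j * omega * C + K + dK, F)
        if np.max(abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x                                  # last iterate if not converged

freqs = np.linspace(5.0, 40.0, 400) * 2 * np.pi
resp = np.array([abs(nl_frf(w)[1]) for w in freqs])
print("peak response at %.2f Hz" % (freqs[resp.argmax()] / (2 * np.pi)))
```

The key point mirrored from the abstract is that only the 2x2 linear matrices are built once; the nonlinearity enters solely as a local stiffness correction updated from the response amplitude.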

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 266
219 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal

Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota

Abstract:

This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios were also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, their poor spatial resolution constrains their application for impact studies at a regional or local level. The downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which included watershed and hydro-meteorological characterization, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being used in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime; the GAP optimization approach followed by calibration was used to obtain several parameter sets that reproduce the observed streamflow. The analysis showed increasing trends in annual as well as seasonal precipitation, except in summer, during the period 2001-2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season over the entire period from 2001 to 2060 for both the A1B and A2 scenarios. These warming trends are higher for maximum than for minimum temperatures throughout the year, indicating an increasing trend of the daily temperature range due to the recent global warming phenomenon. Furthermore, there is a decreasing trend in summer discharge in the Langtang Khola (Langtang region), whereas it is increasing in the Modi Khola (Annapurna region) and Dudh Koshi (Khumbu region) river basins. The changes in flow regime are more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm were observed in the Annapurna, Langtang and Khumbu regions, respectively.
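As an illustration of the bias-correction step mentioned above, here is a minimal linear-scaling sketch (monthly multiplicative factors for precipitation, additive offsets for temperature). The choice of method and the demo data are assumptions for illustration; the abstract does not specify which correction scheme was applied.

```python
import numpy as np
import pandas as pd

def linear_scaling(obs, sim, sim_future, kind="precip"):
    """Monthly linear-scaling bias correction.
    obs, sim: pandas Series with a DatetimeIndex over a common baseline period.
    sim_future: simulated series (e.g. 2001-2060 GCM output) to be corrected."""
    corrected = sim_future.copy()
    for month in range(1, 13):
        o = obs[obs.index.month == month]
        s = sim[sim.index.month == month]
        mask = sim_future.index.month == month
        if kind == "precip":                         # multiplicative factor
            factor = o.mean() / max(s.mean(), 1e-9)
            corrected[mask] = sim_future[mask] * factor
        else:                                        # additive offset (temperature)
            corrected[mask] = sim_future[mask] + (o.mean() - s.mean())
    return corrected

# Synthetic placeholder data, only to show the function running
idx = pd.date_range("1981-01-01", "2000-12-31", freq="D")
rng = np.random.default_rng(0)
obs = pd.Series(rng.gamma(2.0, 3.0, len(idx)), index=idx)
sim = pd.Series(rng.gamma(2.0, 2.4, len(idx)), index=idx)   # biased (too dry)
fut = pd.Series(rng.gamma(2.0, 2.6, len(idx)), index=idx)
print(linear_scaling(obs, sim, fut).mean())
```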

Keywords: temperature, precipitation, water discharge, water balance, global warming

Procedia PDF Downloads 344
218 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, the control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important because of the frequency and suddenness with which the reference can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2-DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that the state-space formulation provides for modelling MIMO systems, such controllers are expected to be easier to tune for disturbance rejection, provided the designer is experienced. An in-depth, multi-dimensional analysis of the preliminary results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
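To make the position-form PID structure concrete, here is a minimal discrete-time sketch regulating a simplified DC motor (first-order speed dynamics plus an integrator for position); the sample time, gains and motor constants are illustrative assumptions, not the values identified for the hardware modeled in the study.

```python
import numpy as np

# Simplified DC motor: first-order speed dynamics plus an integrator for position.
# Illustrative parameters (not the hardware modeled in the paper).
Ts = 0.001            # sample time [s]
tau = 0.05            # mechanical time constant [s]
Kdc = 2.0             # steady-state speed gain [rad/s per V]
a, b = np.exp(-Ts / tau), Kdc * (1 - np.exp(-Ts / tau))

# Position-form (1-DOF) discrete PID
Kp, Ki, Kd = 8.0, 20.0, 0.05
integ, e_prev = 0.0, 0.0
theta, omega = 0.0, 0.0
ref = 1.0             # position set-point [rad]

for k in range(3000):
    e = ref - theta
    integ += e * Ts
    deriv = (e - e_prev) / Ts
    u = np.clip(Kp * e + Ki * integ + Kd * deriv, -12.0, 12.0)  # voltage limit
    e_prev = e
    omega = a * omega + b * u          # discretized speed dynamics
    theta += Ts * omega                # position integration
    if k == 2999:
        print(f"position after {k * Ts:.2f} s: {theta:.4f} rad")
```

The 2-DOF (speed-form) variant discussed in the abstract differs mainly in weighting the set-point separately from the measurement, which avoids the derivative kick visible here at the set-point step.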

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 266
217 Scalable UI Test Automation for Large-scale Web Applications

Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani

Abstract:

This research concerns optimizing UI test automation for a large-scale web application. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for this web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm that there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize the execution time. The basis of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment; in order to introduce scalability efficiently, a scripting language was adopted. The scalability mechanism is implemented mainly with AWS serverless technology, the Elastic Container Service. Scalability here means the ability to automatically provision the computers that run the test automation and to increase or decrease their number. This mechanism allows test cases to run in parallel, so the test execution time decreases dramatically. Introducing scalable test automation is about more than reducing execution time: there is also a possibility of detecting challenging bugs, such as race conditions, since test cases can be executed at the same time. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach the front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed the reduction of test case execution time and the detection of a challenging bug. The paper first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of challenging issue detection.
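A minimal sketch of how test shards might be fanned out to ECS, assuming a Fargate cluster, a registered task definition for the Selenium test image, and suitable subnets and security groups already exist; every name below is a placeholder, not part of the system described in the abstract.

```python
import boto3

# Placeholders -- replace with the real cluster, task definition and network settings.
CLUSTER = "ui-test-cluster"
TASK_DEF = "selenium-ui-tests:1"
SUBNETS = ["subnet-aaaa1111"]
SECURITY_GROUPS = ["sg-bbbb2222"]

def shard(test_cases, n_shards):
    """Split the test-case list into roughly equal shards."""
    return [test_cases[i::n_shards] for i in range(n_shards)]

def launch_shards(test_cases, n_shards=8):
    ecs = boto3.client("ecs")
    for i, cases in enumerate(shard(test_cases, n_shards)):
        ecs.run_task(
            cluster=CLUSTER,
            taskDefinition=TASK_DEF,
            launchType="FARGATE",
            count=1,
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": SUBNETS,
                    "securityGroups": SECURITY_GROUPS,
                    "assignPublicIp": "ENABLED",
                }
            },
            overrides={
                "containerOverrides": [{
                    "name": "ui-tests",   # container name in the task definition
                    "environment": [{"name": "TEST_CASES",
                                     "value": ",".join(cases)}],
                }]
            },
        )
        print(f"launched shard {i} with {len(cases)} test cases")

# Example call (hypothetical test-case names):
# launch_shards([f"test_case_{n}" for n in range(346)], n_shards=20)
```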

Keywords: aws, elastic container service, scalability, serverless, ui automation test

Procedia PDF Downloads 106
216 Quantification and Detection of Non-Sewer Water Infiltration and Inflow in Urban Sewer Systems

Authors: M. Beheshti, S. Saegrov, T. M. Muthanna

Abstract:

Separate sewer systems are designed to transfer wastewater from households and industrial areas to wastewater treatment plants. Unwanted water in sewer systems is a well-known problem: storm-water inflow can be around 50% of the foul sewer flow, and groundwater infiltration into the sewer system can exceed 50% of the total wastewater volume in deteriorated networks. Infiltration and inflow of non-sewer water (I/I) is unfavorable in separate sewer systems and can overload the system and reduce the efficiency of wastewater treatment plants. Moreover, I/I has negative economic, environmental and social impacts on urban areas. Therefore, for sustainable management of urban sewer systems, I/I of unwanted water should be considered carefully, and maintenance and rehabilitation plans should be implemented for these water infrastructure assets. This study presents a methodology to identify and quantify the level of I/I into the sewer system. The amount of I/I is evaluated by accurate flow measurements in separate sewer systems for specified isolated catchments in the city of Trondheim (Norway). More detailed information about the characteristics of I/I is gained by CCTV inspection of sewer pipelines with a high I/I contribution. Enhanced knowledge about the detection and localization of non-sewer water in the foul sewer system during wet and dry weather conditions makes it possible to identify and prioritize problem areas and to take decisions for long-term rehabilitation and renewal planning. Furthermore, preventive measures and optimization of the sewer system's functionality and efficiency can be achieved through maintenance, so that the operation of the system can be improved by maintaining and rehabilitating existing pipelines in a more cost-effective and environmentally friendly way. The study is conducted on specified catchments with different properties in Trondheim. The Risvollan catchment is one of these; it has a measuring station that records hydrological parameters throughout the year and a good database. For assessing infiltration in a separate sewer system, the flow rate measurement method can be used to obtain a general view of the network condition from the infiltration point of view. This study discusses commonly used and advanced methods of localizing and quantifying I/I in sewer systems. A combination of these methods gives sewer operators the possibility to compare different techniques and obtain reliable and accurate I/I data, which is vital for long-term rehabilitation plans.
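One widely used screening approach consistent with the flow-measurement method described above is minimum-night-flow analysis: groundwater infiltration is estimated from the flow remaining in the early-morning hours of dry days, and storm inflow from the excess of wet-day over dry-day volumes. The sketch below is a generic illustration of that idea, with assumed column meanings and thresholds, not the authors' actual workflow.

```python
import pandas as pd

def infiltration_inflow(flow, rain, dry_thresh=0.2):
    """flow: pandas Series of sewer flow [l/s] with a DatetimeIndex.
    rain: rainfall series [mm] covering the same period.
    Returns rough estimates of infiltration (dry-weather night flow, l/s)
    and inflow (wet-day excess volume, m3/day)."""
    daily_rain = rain.resample("D").sum()
    dry_days = daily_rain[daily_rain < dry_thresh].index.date

    # Infiltration: minimum flow between 02:00 and 05:00 on dry days
    night = flow.between_time("02:00", "05:00")
    night_dry = night[pd.Index(night.index.date).isin(dry_days)]
    infiltration = night_dry.groupby(night_dry.index.date).min().mean()

    # Inflow: average wet-day volume minus average dry-day volume
    daily_vol = flow.resample("D").mean() * 86.4          # l/s -> m3/day
    dry_mask = pd.Index(daily_vol.index.date).isin(dry_days)
    inflow_excess = daily_vol[~dry_mask].mean() - daily_vol[dry_mask].mean()
    return infiltration, inflow_excess
```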

Keywords: flow rate measurement, infiltration and inflow (I/I), non-sewer water, separated sewer systems, sustainable management

Procedia PDF Downloads 333
215 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model referred to as CoMPaS and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows calculation of the different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' of the primary tumor; 2) the 'non-visible period' of the secondary distant metastases; 3) the 'visible period' of the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others are based on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trial data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of the secondary distant metastases. CoMPaS enables, for the first time, prediction of the 'whole natural history' of the primary tumor and the secondary distant metastases at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor size. In summary: a) CoMPaS correctly describes the primary tumor growth of stages IA, IIA, IIB, IIIB (T1-4N0M0) without metastases in lymph nodes (N0); b) it facilitates understanding of the period of appearance and inception of the secondary distant metastases.
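The doubling-number arithmetic that underlies an exponential growth description can be illustrated as below; the spherical-volume assumption, the ~10 µm starting cell size and the 10 mm visibility threshold are standard textbook figures used here for illustration, not values quoted from CoMPaS itself.

```python
import math

def volume_from_diameter(d_mm):
    """Spherical tumor volume in mm^3 from diameter in mm."""
    return math.pi * d_mm**3 / 6.0

def doublings(d_start_mm, d_end_mm):
    """Number of volume doublings needed to grow between two diameters."""
    return math.log2(volume_from_diameter(d_end_mm) / volume_from_diameter(d_start_mm))

def doubling_time_days(d1_mm, d2_mm, dt_days):
    """Volume doubling time from two measurements taken dt_days apart."""
    return dt_days / doublings(d1_mm, d2_mm)

# From a single ~10 micron cell to a 10 mm (clinically visible) primary tumor
print(round(doublings(0.01, 10.0), 1), "doublings to clinical visibility")   # ~30
# Primary grows from 10 mm to 20 mm diameter in 300 days
print(round(doubling_time_days(10.0, 20.0, 300), 1), "days per volume doubling")  # 100
```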

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 341
214 The Aromaticity of P-Substituted O-(N-Dialkyl)Aminomethylphenols

Authors: Khodzhaberdi Allaberdiev

Abstract:

Aromaticity, one of the most important concepts in organic chemistry, has attracted considerable interest from both experimentalists and theoreticians. Geometry optimizations of p-substituted o-(N-dialkyl)aminomethylphenols (o-DEAMPHs), XC₆H₅CH₂Y (X = p-OCH₃, CH₃, H, F, Cl, Br, COCH₃, COOCH₃, CHO, CN and NO₂; Y = o-N(C₂H₅)₂), have been performed in the gas phase at the B3LYP/6-311+G(d,p) level. The aromaticity of the considered molecules was investigated using different indices, including geometrical (HOMA and Bird), electronic (FLU, PDI and SA) and magnetic (NICS(0), NICS(1) and NICS(1)zz) indices. Linear dependencies were obtained between some aromaticity indices; the best correlation is observed between the Bird and PDI indices (R² = 0.9240). However, not all types of indices, or even different indices within the same type, correlate well with each other. Surprisingly, for the studied molecules, in which the geometrical and electronic indices cannot correctly describe the aromaticity of the ring, the magnetism-based indices successfully predict the aromaticity of the systems. ¹H NMR spectra of the compounds were obtained at the B3LYP/6-311+G(d,p) level using the GIAO method. The excellent linear correlation (R² = 0.9996) between the experimentally obtained ¹H NMR chemical shifts and those calculated at B3LYP/6-311+G(d,p) demonstrates a good assignment of the experimental chemical shift values to the calculated structures of o-DEAMPH. It is found that the best linear correlation with the Hammett substituent constants is observed for the NICS(1)zz index in comparison with the other indices: NICS(1)zz = -21.5552 + 1.1070 σp⁻ (R² = 0.9394). The presence of an intramolecular hydrogen bond in the studied molecules also changes the aromatic character of the substituted o-DEAMPHs. For R = NO₂, the HOMA index predicted a reduction in π-electron delocalization of 3.4%, about double that observed for p-nitrophenol. The influence of intramolecular H-bonding on the aromaticity of the benzene ring in the ground state (S0) is described by relations between NICS(1)zz and the H-bond energies: experimental, Eₑₓₚ, IR-spectroscopically predicted, Eν, and topological, EQTAIM, with correlation coefficients R² = 0.9666, R² = 0.9028 and R² = 0.8864, respectively. The NICS(1)zz index also correlates with the usual descriptors of the hydrogen bond, while the other indices do not give any meaningful results. The influence of intramolecular H-bond formation on the aromaticity of some substituted o-DEAMPHs is a criterion reflecting the multidimensional character of aromaticity. Linear relationships were also revealed between NICS(1)zz and both the pyramidality of the nitrogen atom, Σ N(C₂H₅)₂, and the dihedral angle φ(C_Ar-C_Ar-C_CH₂-N), which characterize out-of-plane properties. These results demonstrate the nonplanar structure of o-DEAMPHs. Finally, when considering the dependencies of NICS(1)zz, the data for R = H were excluded, because the NICS(1) and NICS(1)zz values are most negative for unsubstituted DEAMPH, indicating its highest aromaticity; this was not the case for the NICS(0) index.
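For readers less familiar with the indices involved, the geometric HOMA index referenced above has the standard literature definition below (it is not restated in the abstract itself), shown alongside the NICS(1)zz-Hammett fit quoted by the authors.

```latex
% Standard HOMA definition (Krygowski) and the NICS(1)zz--Hammett fit quoted above
\mathrm{HOMA} = 1 - \frac{\alpha}{n}\sum_{i=1}^{n}\bigl(R_{\mathrm{opt}} - R_i\bigr)^{2},
\qquad
\mathrm{NICS(1)}_{zz} = -21.5552 + 1.1070\,\sigma_p^{-} \quad (R^{2} = 0.9394)
```

Here R_i are the n ring bond lengths, R_opt the optimal aromatic bond length and α a normalization constant (for CC bonds the commonly used values are R_opt ≈ 1.388 Å and α ≈ 257.7), so HOMA = 1 for a fully aromatic ring and decreases as bond-length alternation grows.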

Keywords: aminomethylphenols, DFT, aromaticity, correlations

Procedia PDF Downloads 181
213 Optimization of Artisanal Fishing Waste Fermentation for Volatile Fatty Acids Production

Authors: Luz Stella Cadavid-Rodriguez, Viviana E. Castro-Lopez

Abstract:

Fish waste (FW) has a high content of potentially biodegradable components, so it is amenable to anaerobic digestion. In this line, anaerobic digestion (AD) of FW has been studied for biogas production. Nevertheless, intermediate products such as volatile fatty acids (VFA), generated during the acidogenic stage, have been scarcely investigated, even though they have a high potential as a renewable source of carbon. In the literature, there are few studies on the effect of the inoculum-to-substrate (I/S) ratio on acidogenesis. On the other hand, it is well known that pH is a critical factor in the production of VFA. The optimum pH for VFA production seems to change depending on the substrate and can vary between 5.25 and 11. Nonetheless, the literature on VFA production from protein-rich waste, such as FW, is scarce. In this context, it is necessary to investigate further the optimal operating conditions of acidogenic fermentation for VFA production from protein-rich waste. Therefore, the aim of this research was to optimize volatile fatty acid production from artisanal fishing waste by studying the effect of pH and the I/S ratio on the acidogenic process. For this research, the inoculum used was a methanogenic sludge (MS) obtained from a UASB reactor treating wastewater from a slaughterhouse plant, and the FW was collected in the port of Tumaco (Colombia) from local artisanal fishers. The acidogenic fermentation experiments were conducted in batch mode, in 500 mL glass bottles used as anaerobic reactors, equipped with rubber stoppers fitted with a valve to release biogas. The effective volume was 300 mL. The experiments were carried out for 15 days at a mesophilic temperature of 37 ± 2 °C and constant agitation of 200 rpm. The effect of three pH levels (5, 7, 9), coupled with five I/S ratios (0.20, 0.15, 0.10, 0.05, 0.00), was evaluated, taking VFA production as the response variable. A completely randomized block design was selected for the experiments in a 5x3 factorial arrangement, with two replicates per treatment. At the beginning of and during the process, the pH in the experimental reactors was adjusted to the corresponding values of 5, 7, and 9 using 1 M NaOH or 1 M H2SO4, as appropriate. In addition, once the optimum I/S ratio was determined, the process was evaluated at this condition without pH control. The results indicated that pH is the main factor in the production of VFA, with the highest concentration obtained at neutral pH. By reducing the I/S ratio to as low as 0.05, it was possible to maximize VFA production. Thus, the optimum conditions found were natural pH (6.6-7.7) and an I/S ratio of 0.05, with which it was possible to reach a maximum total VFA concentration of 70.3 g Ac/L, whose major components were acetic acid (35%) and butyric acid (32%). The findings showed that acidogenic fermentation of FW is an efficient way of producing VFA and that the operating conditions can be simple and economical.

Keywords: acidogenesis, artisanal fishing waste, inoculum to substrate ratio, volatile fatty acids

Procedia PDF Downloads 125
212 Sustainable Technology and the Production of Housing

Authors: S. Arias

Abstract:

New housing developments, and the technological changes they imply, adapt the living styles of their residents, as well as new family structures and forms of work, to the particular needs of a specific group of people; this involves different techniques for occupying, organizing, equipping and using a particular territory. Currently, owning one's own space is increasingly important, and cities are faced with the challenge of providing the opportunity to meet such demands, as well as the energy, water and waste removal required in the process of construction and occupation of new human settlements. To date, these demands and needs have not been fully met, resulting in cities that grow without control, badly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; environmental quality therefore links human comfort to the sustainable development of natural resources. Applied to architecture, this concept involves incorporating new technologies into the whole construction process of a dwelling and changing the customs of developers and users, which demands a greater effort in planning energy savings and thus reducing greenhouse gas (GHG) emissions, depending on the geographical location where development is planned. Since the techniques of territorial occupation are not the same everywhere, it must be taken into account that they depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, which are modified according to the degree of development reached. In the analysis undertaken to check the degree of sustainability of the place, it is necessary to estimate the energy used in artificial air conditioning and lighting. Likewise, it is necessary to assess the availability and distribution of the water resources used for hygiene and for cooling artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, it is possible to perform an energy audit and to propose sustainability recommendations for architectural spaces aimed at energy saving, rational use of water and optimization of natural resources. The above can be carried out through the development of a sustainable building code containing technical recommendations adapted to the regional characteristics of each study site. Such codes would seek to build a basis for promoting building regulations applicable to new human settlements, generating at the same time quality, protection and safety in them. These building regulations must be consistent with other national, state and municipal regulations, such as the laws on human settlements, urban development and zoning.

Keywords: building regulations, housing, sustainability, technology

Procedia PDF Downloads 347
211 Environmental Benefits of Corn Cob Ash in Lateritic Soil Cement Stabilization for Road Works in a Sub-Tropical Region

Authors: Ahmed O. Apampa, Yinusa A. Jimoh

Abstract:

The potential economic viability and environmental benefits of using a biomass waste such as corn cob ash (CCA) as a pozzolan in stabilizing soils for road pavement construction in a sub-tropical region were investigated. Corn cob was obtained from Maya in South West Nigeria and processed into an ash with characteristics similar to the Class C fly ash pozzolan specified in ASTM C618-12. This was then blended with ordinary Portland cement in CCA:OPC ratios of 1:1, 1:2 and 2:1. Each of these blends was then mixed with a lateritic soil of AASHTO classification A-2-6(3) in varying percentages from 0 to 7.5% at 1.5% intervals. The soil-CCA-cement mixtures were thereafter tested for geotechnical index properties, including the BS Proctor compaction, California Bearing Ratio (CBR) and unconfined compression strength tests. The tests were repeated for a soil-cement mix without any CCA blending. The cost of the binder inputs and the optimal CCA:OPC blends in the stabilized soil were thereafter analyzed by developing algorithms that relate the experimental data on strength parameters (unconfined compression strength, UCS, and California Bearing Ratio, CBR) to the bivariate independent variables CCA and OPC content, using Matlab R2011b. An optimization problem was then set up to minimize the cost of chemical stabilization of laterite with CCA and OPC, subject to the constraints of minimum strength specifications. The Evolutionary engine as well as the Generalized Reduced Gradient option of the MS Excel 2010 Solver were used separately to obtain the optimal CCA:OPC blend. The optimal blend attaining the required strength of 1800 kN/m² was determined as a 5.4% mix for the 1:2 CCA:OPC blend (OPC content 3.6%), compared with 4.2% for the OPC-only option, and as a 6.2% mix for the 1:1 blend (OPC content 3%). The 2:1 blend did not attain the required strength, though a gain of over 100% in UCS was obtained relative to the control sample with 0% binder. Given that 0.97 tonne of CO₂ is released for every tonne of cement used (OEE, 2001), the reduced OPC requirement to attain the same result indicates the possibility of reducing the net CO₂ contribution of the construction industry to the environment by 14-28.5% if CCA:OPC blends are widely used in soil stabilization, going by the results of this study. The paper concludes by recommending that Nigeria and other developing countries in the sub-tropics with an abundant stock of biomass waste should intensify the use of biomass waste as fuel and of the derived ash for the production of pozzolans for road works, thereby reducing overall greenhouse gas emissions in compliance with the objectives of the United Nations Framework Convention on Climate Change.
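The cost-minimization step described above can be reproduced in outline with a constrained optimizer. In the sketch below, the quadratic UCS response surface and the unit prices are placeholders standing in for the Matlab-fitted model and local binder costs, so only the problem structure (minimize binder cost subject to UCS ≥ 1800 kN/m² and a fixed CCA:OPC ratio) mirrors the study.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder response surface for UCS [kN/m^2] as a function of CCA and OPC
# contents [%]; in the study this model was fitted to the test data in Matlab.
def ucs(x):
    cca, opc = x
    return (250.0 + 120.0 * cca + 420.0 * opc
            - 8.0 * cca**2 - 15.0 * opc**2 + 30.0 * cca * opc)

# Placeholder unit costs per % of binder in the mix
COST_CCA, COST_OPC = 1.0, 4.0

def binder_cost(x):
    return COST_CCA * x[0] + COST_OPC * x[1]

ratio = 1.0 / 2.0     # CCA:OPC = 1:2 blend
constraints = [
    {"type": "ineq", "fun": lambda x: ucs(x) - 1800.0},      # strength requirement
    {"type": "eq",   "fun": lambda x: x[0] - ratio * x[1]},  # fixed blend ratio
]
res = minimize(binder_cost, x0=[2.0, 4.0], bounds=[(0, 7.5), (0, 7.5)],
               constraints=constraints, method="SLSQP")
print("CCA %.2f %%, OPC %.2f %%, UCS %.0f kN/m2" % (res.x[0], res.x[1], ucs(res.x)))
```

This reproduces in scipy what the study did with the Excel Solver's GRG and Evolutionary engines: the strength model acts as a constraint and the binder cost as the objective.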

Keywords: corn cob ash, biomass waste, lateritic soil, unconfined compression strength, CO2 emission

Procedia PDF Downloads 373
210 Improving Alkaline Water Electrolysis by Using an Asymmetrical Electrode Cell Design

Authors: Gabriel Wosiak, Felipe Staciaki, Eryka Nobrega, Ernesto Pereira

Abstract:

Hydrogen is an energy carrier with potential applications in various industries. Alkaline electrolysis is a commonly used method for hydrogen production; however, its energy cost remains relatively high compared to other methods. This is due in part to the interfacial pH changes that occur during the electrolysis process. Interfacial pH changes refer to the changes in pH that occur at the interface between the electrodes and the electrolyte solution. These changes are caused by the electrochemical reactions at both electrodes, which consume or produce hydroxide ions (OH⁻) from the electrolyte solution. This results in an important change in the local pH at the electrode surface, which can have several impacts on the energy consumption and durability of electrolysers. One impact of interfacial pH changes is an increase in the overpotential required for hydrogen production. The overpotential is the difference between the theoretical potential required for a reaction to occur and the actual potential that is applied to the electrodes. In the case of water electrolysis, the overpotential is caused by a number of factors, including the mass transport of reactants and products to and from the electrodes, the kinetics of the electrochemical reactions, and the interfacial pH. A decrease in the interfacial pH at the anode surface in alkaline conditions can lead to an increase in the overpotential required for hydrogen production, because the lower local pH makes it more difficult for the hydroxide ions to be oxidized. As a result, more energy is required for the process to occur. In addition to increasing the overpotential, interfacial pH changes can also lead to degradation of the electrodes, because the lower pH can make the electrode more susceptible to corrosion. As a result, the electrodes may need to be replaced more frequently, which can increase the overall cost of water electrolysis. The method presented in this paper addresses the issue of interfacial pH changes by introducing electrode asymmetry into the cell design. This design helps to mitigate the pH gradient at the anode/electrolyte interface, which reduces the overpotential and improves the energy efficiency of the electrolyser. The method was tested using a multivariate approach under both laboratory and industrial current density conditions, and the results were validated with numerical simulations. The results demonstrated a clear improvement (11.6%) in energy efficiency, providing an important contribution to the field of sustainable energy production. The findings have important implications for the development of cost-effective and sustainable hydrogen production methods. By mitigating interfacial pH changes, it is possible to improve the energy efficiency of alkaline electrolysis and make it a more competitive option for hydrogen production.
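The link between the local pH excursion and the extra voltage can be made explicit with the Nernstian pH dependence of the electrode potentials; this is a textbook relation added here for context, not a derivation from the paper: each unit of pH difference between the two electrode surfaces adds roughly 59 mV to the cell voltage at 25 °C.

```latex
% Nernstian shift of the reversible potential with local pH (25 C)
E_{\mathrm{rev}} = E^{0} - \frac{2.303\,RT}{F}\,\mathrm{pH}
\;\approx\; E^{0} - 0.059\,\mathrm{V}\times\mathrm{pH},
\qquad
\Delta E_{\mathrm{cell}} \approx 0.059\,\mathrm{V}\times
\bigl(\mathrm{pH}^{s}_{\mathrm{cathode}} - \mathrm{pH}^{s}_{\mathrm{anode}}\bigr)
```

In the bulk electrolyte the two shifts cancel, but when OH⁻ production raises the surface pH at the cathode and OH⁻ consumption lowers it at the anode, the difference in surface pH appears directly as an additional concentration overpotential, which is the penalty the asymmetric cell design seeks to reduce.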

Keywords: electrolyser, interfacial pH, numerical simulation, optimization, asymmetric cell

Procedia PDF Downloads 70
209 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor under Liquefaction and Scour

Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg, Christian Windt

Abstract:

When a structure is installed on a seabed, its presence influences the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, wave or current breaking, and pressure differentials around the seabed sediment. These changes allow the local seabed sediment to be carried away and result in scour (erosion), which is a threat to the structure's stability. In recent decades, research and knowledge on scour around fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, whereas very limited research has addressed scour and liquefaction for gravity anchors, particularly for floating Tension Leg Platform (TLP) substructures. Because of the importance of this topic and the need to enhance knowledge of scour and liquefaction around marine structures, MarTERA funded a three-year (2020-2023) research program called NuLIMAS (Numerical Modelling of Liquefaction Around Marine Structures), carried out by a group of European institutions (universities, laboratories, and consulting companies). The objective of this study is to build a numerical model that replicates reality, which helps to simulate (predict) underwater flow conditions and to study different marine scour and liquefaction situations. It supports the design of a heavyweight anchor for the TLP substructure and minimizes the time and expenditure spent on experiments. The achieved results and the numerical model will also be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM. A conceptual design of the heavyweight anchor for the TLP substructure is developed taking into consideration the available state-of-the-art knowledge on scour and liquefaction and references to previous existing designs. These conceptual designs are validated against available comparable experimental benchmark data as well as against CFD numerical benchmark standards (a CFD quality assurance study). A CFD optimization model/tool is designed to minimize the effects of fluid flow, scour, and liquefaction. A parameterized model is also developed to automate the calculation process and reduce user interaction. Parameters such as the anchor lowering process, flow-optimized outer contours, seabed interaction, and FSSI (Fluid-Structure-Seabed Interaction) are investigated and used to refine the model in order to build an optimized anchor.

Keywords: gravity anchor, liquefaction, scour, computational fluid dynamics

Procedia PDF Downloads 144
208 Market Segmentation of Cruise Ship Passengers: Implications for Marketing of Local Products and Services at Destination Points

Authors: Gunnar Oskarsson, Irena Georgsdottir

Abstract:

Tourism has been growing incredibly fast during the past years, including the cruise industry, which is gaining increasing popularity among various groups of travelers. It is a challenging task for companies serving cruise ship passengers with local products and services at the destination point to reach them in due time with information about their offerings, as well as to learn how to adapt their offerings and messages to the type of customers arriving on each particular occasion. Although some research has been conducted in this sphere, there is still limited knowledge about many specifics within this sector of the tourist industry. The objective of this research is to examine one of these, with the main goal of studying the segmentation of cruise passengers and learning about the marketing practices directed towards them. A qualitative research method based on in-depth interviews was used, as this provides an opportunity to gain insight into the participants' perspectives. Interviews were conducted with 10 respondents from different companies in the tourist industry in Iceland who interact with cruise passengers on a regular basis in their work environment. The aim was to gain an understanding of what distinguishes different customer groups, or segments, in this industry, and of the marketing approaches directed towards them. The main findings reveal that participants note the strongest differences between cruise passengers of different nationalities, passengers coming on different ships (size and type), and passengers arriving at different times of the year. A drastic difference was noticed between nationalities across four main segments, American, British, other European, and Asian customers, although some of these segments could be divided into further sub-segments. Other important differentiating factors were the size and type of ship, the quality or number of stars of the ship, and the time of year of travel. Companies serving cruise ship passengers, as well as the customers themselves, could benefit if service offerings were designed specifically for particular segments within the industry. Concerning marketing towards cruise passengers, the results indicate that it is carried out almost exclusively through the Internet, using a reliable website and search engine optimization, complemented by word-of-mouth. This research can assist practitioners by offering a deeper understanding of the approaches that may be effective in marketing local products and services to cruise ship passengers, based on their segmentation, and by identifying effective ways to reach them. The research furthermore provides a valuable contribution to marketing knowledge for the benefit of an increasingly important market segment in a fast-growing tourist industry.

Keywords: capabilities, global integration, internationalisation, SMEs

Procedia PDF Downloads 401
207 Investigations on the Influence of Optimized Charge Air Cooling for a Diesel Passenger Car

Authors: Christian Doppler, Gernot Hirschl, Gerhard Zsiga

Abstract:

Starting from 2020, an EU-wide CO2 limit of 95 g/km is scheduled for the average of an OEM's passenger car fleet. Considering that, further optimization measures on the diesel cycle will be necessary in order to reduce fuel consumption and emissions while at least keeping performance values adequate. The present article deals with charge air cooling (CAC) on the basis of a diesel passenger car model in a 0D/1D working-process calculation environment. The considered engine is a 2.4 litre EURO VI diesel engine with a variable geometry turbocharger (VGT) and low-pressure exhaust gas recirculation (LP EGR). The object of study was the impact of charge air cooling on the engine working process at constant boundary conditions, which was investigated with an available and validated engine model in AVL BOOST. Part load was realized with constant power and NOx emissions, whereas full load was run with a lambda control in order to obtain maximum engine performance. The results were used to implement a simulation model in Matlab/Simulink, which was further integrated into a full vehicle simulation environment via coupling with ICOS (Independent Co-Simulation Platform). Next, the dynamic engine behavior was validated and adjusted with load steps taken from the engine test bed. Due to the modular setup of the co-simulation, different CAC models could be simulated quickly, each with its own influence on the working process. In doing so, a new cooler variant does not need to be reproduced and implemented in the primary simulation model environment, but can be added quickly and easily as an independent component in the simulation. By combining the engine model, the longitudinal-dynamics vehicle model and different CAC models (air/air and water/air variants) in both steady-state and transient operating modes, conclusions are drawn regarding fuel consumption, NOx emissions and power behavior. The fact that a complex engine model is no longer needed is very advantageous for the overall simulation effort. Besides the simulations with the mentioned demonstrator engine, several experimental investigations were also conducted on the engine test bench. In particular, a standard CAC was compared with an intake-manifold-integrated CAC. Simulations as well as experimental tests showed benefits for the water/air CAC variant (on the test bed, especially the intake-manifold-integrated variant). The benefits are illustrated by a reduced pressure loss and a gain in air efficiency and CAC efficiency, all of which lead to minimized emissions and fuel consumption in stationary and transient operation.

Keywords: air/water-charge air cooler, co-simulation, diesel working process, EURO VI fuel consumption

Procedia PDF Downloads 269
206 Experimental Study on Granulated Steel Slag as an Alternative to River Sand

Authors: K. Raghu, M. N. Vathhsala, Naveen Aradya, Sharth

Abstract:

River sand is the most preferred fine aggregate for mortar and concrete. It is a product of the natural weathering of rocks over millions of years and is mined from river beds. Sand mining has disastrous environmental consequences, and the excessive mining of river beds is creating an ecological imbalance, which has led to restrictions on sand mining imposed by the ministry of environment. Driven by the acute need for sand, stone dust or manufactured sand, prepared by crushing and screening coarse aggregate, has been used as sand in the recent past. However, manufactured sand is also a natural material and has quarrying and quality issues. To reduce the burden on the environment, alternative materials to be used as fine aggregates are being extensively investigated all over the world. Considering the quantum of requirements, quality and properties, a global consensus has formed around one material: granulated slags. Granulated slag has been proven to be a suitable material for replacing natural sand or crushed fine aggregates. In developed countries, the use of granulated slag as a fine aggregate to replace natural sand is well established and in regular practice. In the present paper, granulated slag has been investigated for use in mortar. Slags are the main by-products generated during iron and steel production in the steel industry. Over the past decades, steel production has increased and, consequently, higher volumes of by-products and residues have been generated, which has driven the reuse of these materials in increasingly efficient ways. In recent years, new technologies have been developed to improve the recovery rates of slags. Increased slag recovery and its use in different fields of application, such as cement making, construction and fertilizers, help preserve natural resources. In addition to environmental protection, these practices produce economic benefits by providing sustainable solutions that can allow the steel industry to achieve its ambitious target of "zero waste" in the coming years. Slags are generated at two different stages of steel production, iron making and steel making, and are known as BF (blast furnace) slag and steel slag, respectively. Slagging agents or fluxes, such as limestone, dolomite and quartzite, are added to the BF or steel-making furnaces in order to remove impurities from the ore, scrap and other ferrous charges during smelting. Slag formation is the result of a complex series of physical and chemical reactions between the non-metallic charge (limestone, dolomite, fluxes), the energy sources (coal, coke, oxygen, etc.) and refractory materials. Because of the high temperatures (about 1500 °C) during their generation, slags do not contain any organic substances. Since slags are lighter than the liquid metal, they float and are easily removed; they also protect the metal bath from the atmosphere and help maintain its temperature. These slags are either solidified in air after dumping in a pit or granulated by impinging water systems. Generally, BF slags are granulated and used in cement making due to their high cementitious properties, whereas steel slags are mostly dumped due to their unfavourable physico-chemical characteristics. The increasing dumping of steel slag not only occupies a large amount of land but also wastes resources and can potentially affect the environment through water pollution. Since BF slag contains little Fe, it can be used directly and has found wide application outside the iron and steel making process, such as in cement production, road construction, civil engineering works, fertilizer production, landfill daily cover and soil reclamation.

Keywords: steel slag, river sand, granulated slag, environmental

Procedia PDF Downloads 244
205 Study of the Removal Efficiency of Azo-Dyes Using Xanthan as Sequestering Agent

Authors: Cedillo Ortiz Cesar Isaac, Marañón-Ruiz Virginia-Francisca, Lozano-Alvarez Juan Antonio, Jáuregui-Rincón Juan, Roger Chiu Zarate

Abstract:

Introduction: The contamination of water with azo-dyes is a worldwide problem since, even after treatment in a municipal sewage system, wastewater still contains a considerable amount of dyes. At present, there are different processes, denominated tertiary methods, by which it is possible to lower the concentration of the dye. One of these methods is adsorption onto various materials, which can be organic or inorganic. Xanthan is a biomaterial that can act as a removal agent to decrease the dye content in aqueous solution. The Zimm-Bragg model described the experimental isotherms obtained when this biopolymer was used in the removal of textile dyes. Nevertheless, it was not established whether a correlation between dye structure and removal efficiency exists. In this sense, the principal objective of this report is to propose a qualitative relationship between the structure of three azo-dyes (Congo Red (CR), Methyl Red (MR) and Methyl Orange (MO)) and their removal efficiency from the aqueous environment when xanthan is used as the dye-sequestering agent. Methods: The dyes were subjected to different pH and ionic strength values to obtain the conditions of maximum dye removal. Afterward, these conditions were used to perform the adsorption isotherms, as reported in a previous study of our group. The Zimm-Bragg model was used to describe the experimental data, and the parameters of nucleation (Ku) and cooperativity (U) were obtained by optimization using the R statistical software. The UV-Visible (aqueous solution), infrared absorption and Raman (dry samples) spectra were obtained from the biopolymer-dye complex. Results: The removal percentages with xanthan for each dye are as follows: CR reached 99.98% at pH 12 and ionic strength 10.12, MR reached 84.79% at pH 9.5 and ionic strength 43, and MO reached 30% at pH 4 and ionic strength 72. It can be seen that when xanthan is used to remove the dyes, there is a weak dependence between structure and removal efficiency. This may be due to the different tendency of each dye to form aggregates. This aggregation capacity and the charge of each dye, resulting from the pH and ionic strength values of the aqueous solutions, are key factors in dye removal. The experimental isotherm of MR was the only one adequately described by the Zimm-Bragg model: because CR was removed almost completely (close to 100%), it was very difficult to obtain its experimental isotherm, and the MO results fluctuated, making it impossible to obtain accurate data. Conclusions: The study of the removal of three dyes with xanthan as a dye-sequestering agent suggests that the aggregation capacity of the dyes and the charge resulting from structural characteristics such as molecular weight and functional groups are related to the removal efficiency. Acknowledgements: We gratefully acknowledge support for this project by Consejo Nacional de Ciencia y Tecnología, México (CONACyT, Grant No. 632694).
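The authors performed the (Ku, U) optimization in R; the snippet below is a minimal Python sketch of the same idea, assuming the classical Zimm-Bragg expression for the fraction of occupied binding sites with s = Ku·c as the growth parameter and U as the cooperativity parameter. This mapping, the scaling constant q_max and the data arrays are illustrative assumptions, not the authors' exact formulation or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def zimm_bragg_theta(c, Ku, U, q_max):
    """Adsorbed amount for free dye concentration c (classical Zimm-Bragg form).

    s = Ku * c is the growth parameter; U plays the role of the cooperativity
    parameter; q_max scales the site fraction to an adsorbed amount (mg/g).
    """
    s = Ku * c
    theta = 0.5 * (1.0 + (s - 1.0) / np.sqrt((s - 1.0) ** 2 + 4.0 * s / U))
    return q_max * theta

# Hypothetical isotherm: free dye concentration (mg/L) vs adsorbed amount (mg/g)
c_free = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
q_ads = np.array([12.0, 30.0, 85.0, 160.0, 205.0, 220.0])

popt, _ = curve_fit(zimm_bragg_theta, c_free, q_ads,
                    p0=[0.05, 10.0, 230.0],
                    bounds=([1e-6, 1e-6, 1e-6], [np.inf, np.inf, np.inf]))
Ku_fit, U_fit, qmax_fit = popt
print(f"Ku = {Ku_fit:.4f}, U = {U_fit:.2f}, q_max = {qmax_fit:.1f} mg/g")
```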

Keywords: adsorption, azo dyes, xanthan gum, Zimm Bragg theory

Procedia PDF Downloads 280
204 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model

Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis

Abstract:

Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced while the drug concentration at the targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles onto whose surface anti-tumor agents are loaded. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors that influence the efficiency of magnetic nanoparticles for biomedical applications in magnetic driving are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main static field as well as the magnetic gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces between the aggregated nanoparticles and the wall and the Stokes drag force on each particle are considered; only spherical particles are used in this study. In addition, the gravitational and buoyancy forces are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of particle motion. To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles navigated by the magnetic field produced by the MRI device. Under the influence of the fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency between 80 and 90%. On the other hand, a small number of particles get stuck to the walls and remain there for the rest of the simulation.
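A minimal sketch of the ask/evaluate/tell loop described above, using the open-source `cma` package. Here `run_openfoam_case` is a hypothetical stand-in for the coupled OpenFOAM particle simulation; in the sketch it is replaced by a placeholder objective so the loop runs on its own, and the three-component gradient parameterization and step size are assumptions.

```python
import numpy as np
import cma  # pip install cma

def run_openfoam_case(gradient_field):
    """Hypothetical wrapper: would write the gradient-coil settings (T/m),
    launch the coupled OpenFOAM particle simulation and return the mean
    distance (m) between the simulated particles and the desired trajectory.
    Placeholder objective used here so the sketch is runnable without OpenFOAM."""
    target = np.array([0.08, -0.02, 0.05])
    return float(np.linalg.norm(np.asarray(gradient_field) - target))

# Start from zero gradients with an initial step size of 0.05 T/m (assumed)
es = cma.CMAEvolutionStrategy(x0=[0.0, 0.0, 0.0], sigma0=0.05)

while not es.stop():
    candidates = es.ask()                                   # propose gradient-field settings
    fitness = [run_openfoam_case(x) for x in candidates]    # evaluate each by simulation
    es.tell(candidates, fitness)                            # update the search distribution
    es.disp()

print("best gradient field found:", es.result.xbest)
```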

Keywords: artery, drug, nanoparticles, navigation

Procedia PDF Downloads 107
203 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods

Authors: Mohammad Arabi

Abstract:

The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings, in order to prevent sudden breakdowns and operational issues. The feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals, and t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in the fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
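As an illustration of the pipeline described above, the sketch below extracts a few common temporal features (RMS, kurtosis, crest factor) and frequency features (band energies of the spectrum) from vibration segments and trains an SVM classifier with scikit-learn. The feature set, sampling rate and synthetic signals are assumptions for demonstration, not the exact features or data used in the study.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FS = 12_000  # sampling frequency in Hz (assumed)

def extract_features(segment):
    """Temporal + frequency features for one vibration segment."""
    rms = np.sqrt(np.mean(segment ** 2))
    krt = kurtosis(segment)
    crest = np.max(np.abs(segment)) / rms
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    bands = np.array_split(spectrum, 4)                 # 4 coarse frequency bands
    band_energy = [b.sum() / spectrum.sum() for b in bands]
    return [rms, krt, crest, *band_energy]

# Synthetic example data: healthy = noise, faulty = noise + periodic impulses
rng = np.random.default_rng(0)
def make_segment(faulty):
    x = rng.normal(0.0, 1.0, FS)
    if faulty:
        impulses = np.zeros(FS)
        impulses[::400] = 8.0                           # bearing-defect-like impacts
        x += impulses
    return x

X = np.array([extract_features(make_segment(f)) for f in [0, 1] * 100])
y = np.array([0, 1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```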

Keywords: electric motor, fault detection, frequency features, temporal features

Procedia PDF Downloads 47
202 Influence of Confinement on Phase Behavior in Unconventional Gas Condensate Reservoirs

Authors: Szymon Kuczynski

Abstract:

Poland is characterized by the presence of numerous sedimentary basins and hydrocarbon provinces. Since 2006, exploration for hydrocarbons in Poland has gradually become more focused on new unconventional targets, particularly on the shale gas potential of the Upper Ordovician and Lower Silurian in the Baltic-Podlasie-Lublin Basin. The first forecast, prepared by the US Energy Information Administration in 2011, indicated 5.3 Tcm of natural gas. In 2012, the Polish Geological Institute presented its own forecast, which estimated maximum reserves at 1.92 Tcm. The difference between the estimates was caused by problems with calculating the initial amount of adsorbed, as well as free, gas trapped in shale rocks (GIIP - Gas Initially in Place). This value depends on sorption capacity, gas saturation and the mutual interactions between gas, water, and rock. Determination of the reservoir type in the initial exploration phase brings essential knowledge, which has an impact on decisions related to production. Studying the impact of porosity on the phase envelope shift eliminates errors and improves production profitability. The confinement phenomenon affects flow characteristics, fluid properties, and phase equilibrium. The thermodynamic behavior of confined fluids in porous media is subject to the basic considerations for industrial applications such as hydrocarbon production. In particular, knowledge of the phase equilibrium and the critical properties of the contained fluid is essential for the design and optimization of such processes. In pores with a small diameter (nanopores), the effect of the wall interaction with the fluid particles becomes significant; this occurs in shale formations. The nanopore size is similar to the diameter of the fluid particles, so the region in which particles flow without interacting with the pore wall is almost equal in extent to the region where this interaction occurs. Molecular simulation studies have shown an effect of confinement on the pseudo-critical properties. Therefore, the critical parameters (pressure and temperature) and the flow characteristics of hydrocarbons at the nanoscale are strongly influenced by the interaction of the fluid particles with the pore wall. It can be concluded that the size of individual pores becomes crucial at the nanoscale, because only there is the above-described effect possible. Nanoporosity makes it difficult to predict the flow of reservoir fluid. Research is being conducted to explain the mechanisms of fluid flow in nanopores and of gas extraction from porous media by desorption.
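A minimal sketch, under the classical Kelvin-equation picture of capillary condensation (one of the confinement effects listed in the keywords), of how strongly the condensation pressure of a pure component can shift as the pore radius approaches the nanometre scale. The fluid properties are rough, illustrative values, and the Kelvin equation neglects the adsorbed film and molecular-level wall interactions that dominate in true nanopores.

```python
import numpy as np

R = 8.314  # J/(mol K), universal gas constant

def kelvin_pressure_ratio(r_pore, gamma, v_molar, temperature):
    """Kelvin equation: p/p_sat at which condensation occurs in a cylindrical
    pore of radius r_pore (m), for surface tension gamma (N/m) and liquid
    molar volume v_molar (m^3/mol). A fully wetting liquid is assumed."""
    return np.exp(-2.0 * gamma * v_molar / (r_pore * R * temperature))

# Illustrative values, loosely representative of a light condensate fraction
gamma = 0.012      # N/m
v_molar = 1.2e-4   # m^3/mol
T = 350.0          # K

for r_nm in (100.0, 20.0, 5.0, 2.0):
    ratio = kelvin_pressure_ratio(r_nm * 1e-9, gamma, v_molar, T)
    print(f"pore radius {r_nm:6.1f} nm -> condensation at p/p_sat = {ratio:.3f}")
```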

Keywords: adsorption, capillary condensation, phase envelope, nanopores, unconventional natural gas

Procedia PDF Downloads 337
201 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior from the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking the actual reaction conditions into consideration. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE - its melt flow behavior - is determined from the previously computed polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied; it serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
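The sketch below is a highly simplified stand-in for the Monte Carlo microstructure step described above: it samples individual chains from a most-probable (Flory) chain-length distribution and assigns long-chain branches as a Poisson process with a fixed branching frequency per monomer. The distribution parameters are hypothetical; in the actual approach they would be derived from the kinetic reaction network at the given reaction conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical kinetics-derived quantities
P_PROPAGATION = 0.999        # probability that a growing radical propagates
LCB_PER_MONOMER = 2.0e-4     # long-chain branching frequency per monomer unit
N_CHAINS = 100_000

# Most-probable (Flory) distribution: chain length is geometric in the
# propagation probability; number-average degree of polymerization = 1/(1 - p)
chain_lengths = rng.geometric(1.0 - P_PROPAGATION, size=N_CHAINS)

# Long-chain branches per chain: Poisson with mean proportional to chain length
lcb_counts = rng.poisson(LCB_PER_MONOMER * chain_lengths)

dp_n = chain_lengths.mean()
dp_w = (chain_lengths ** 2).sum() / chain_lengths.sum()
print(f"DP_n = {dp_n:.0f}, DP_w = {dp_w:.0f}, PDI = {dp_w / dp_n:.2f}")
print(f"average LCB per 1000 monomers = "
      f"{1000 * lcb_counts.sum() / chain_lengths.sum():.2f}")
```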

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 124
200 Cellulolytic and Xylanolytic Enzymes from Mycelial Fungi

Authors: T. Sadunishvili, L. Kutateladze, T. Urushadze, R. Khvedelidze, N. Zakariashvili, M. Jobava, G. Kvesitadze

Abstract:

The multiple soil-climatic zones in Georgia determine the diversity of its microorganisms. Hundreds of microscopic fungi of different genera have been isolated from different ecological niches, including some extreme environments, and the biosynthetic ability of these microscopic fungi has been studied. Trichoderma reesei, a representative of the Ascomycetes, secretes cellulolytic and xylanolytic enzymes that act in synergy to hydrolyze polysaccharide polymers to glucose, xylose and arabinose, which can be fermented to biofuels. Other mesophilic strains producing cellulases are Allesheria terrestris, Chaetomium thermophile, Fusarium oxysporium, Piptoporus betulinus, Penicillium echinulatum, P. purpurogenum, Aspergillus niger, A. wentii, A. versicolor, A. fumigatus, etc. In the majority of cases, the cellulases produced by strains of the genus Aspergillus usually have high β-glucosidase activity and average endoglucanase levels (with some exceptions), whereas strains representing Trichoderma have high endo-enzyme and low β-glucosidase activity and hence limited efficiency in cellulose hydrolysis. Six producers of stable cellulases and xylanases from mesophilic and thermophilic fungi have been selected. By optimizing the submerged cultivation conditions, high cellulase and xylanase activities were obtained. For enzyme purification, sedimentation by organic solvents such as ethyl alcohol, acetone and isopropanol, and by ammonium sulphate in different ratios, was carried out. The best results were obtained with precipitation by ethyl alcohol (1:3.5) and by ammonium sulphate; the enzyme yields, based on cellulase activities, were 80-85% in both cases. The cellulase activity of the enzyme preparation obtained from the strain Trichoderma viride X 33 is 126 U/g, from the strain Penicillium canescence D 85 it is 185 U/g, and from the strain Sporotrichum pulverulentum T 5-0 it is 110 U/g. The cellulase activity of the enzyme preparation obtained from the strain Aspergillus sp. Av10 is 120 U/g, while the xylanase activity of the enzyme preparation obtained from the strain Aspergillus niger A 7-5 is 1155 U/g and from the strain Aspergillus niger Aj 38 it is 1250 U/g. The optimum pH and temperature of operation and the thermostability of the enzyme preparations were established. The efficiency of hydrolysis of different agricultural residues by the microscopic fungi cellulases has been studied. The glucose yield from the residues as a result of enzymatic hydrolysis is largely determined by the enzyme-to-substrate ratio, pH, temperature, and duration of the process. Hydrolysis efficiency was significantly increased as a result of pretreating the residues by different methods. Acknowledgement: The study was supported by the ISTC project G-2117, funded by Korea.

Keywords: cellulase, xylanase, microscopic fungi, enzymatic hydrolysis

Procedia PDF Downloads 392
199 Optimization of Heat Source Assisted Combustion on Solid Rocket Motors

Authors: Minal Jain, Vinayak Malhotra

Abstract:

Solid propellant ignition consists of rapid and complex events comprising heat generation and heat transfer, with flames spreading over the entire burning surface area. Proper combustion, and thus propulsion, depends heavily on the modes of heat transfer and the cavity volume. Fire safety is an integral component of a successful rocket flight, failing which the whole mission may be lost, leading to an enormous forfeiture of resources, viz., money, time, and labor. When the propellant is ignited, thrust is generated and the casing heats up. This heat adds to the propellant heat and, if the casing is not at the proper orientation, it starts burning as well, leading to the whole rocket being destroyed. This has necessitated active research efforts emphasizing a comprehensive study of the inter-energy relations involved for effective utilization of solid rocket motors in better space missions. The present work focuses on one of the major influential aspects of this detrimental burning: the presence of an external heat source in addition to a pilot heat source that is already ignited. The study is motivated by the need to ensure better combustion and fire safety, and is presented experimentally as a simplified small-scale model of a rocket carrying a solid propellant inside a cavity. The experimental setup comprises a paraffin wax candle as the pilot fuel and an incense stick as the external heat source. The candle is fixed, and the position and location of the incense stick are varied to investigate the influence of the external heat source on the pilot fuel. Different configurations of the external heat source, at different separation distances, are tested. Regression rates of the thin solid pilot fuel are recorded to fundamentally understand the non-linear heat and mass transfer, which is the governing phenomenon. An attempt is made to understand this phenomenon and the mechanism governing it. Results so far indicate non-linear heat transfer accompanied by the occurrence of flaming transition at selected critical distances. With an increase in separation distance, the effect drops in a non-monotonic trend. The results of this parametric study are likely to provide useful physical insight into the governing physics and to aid proper testing, validation, material selection, and design of solid rocket motors with enhanced safety.
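For the regression-rate measurements mentioned above, a minimal sketch of how the mean rate could be extracted from periodic burned-length readings by a least-squares fit; the sample readings are hypothetical, not measurements from the study.

```python
import numpy as np

# Hypothetical readings: time (s) vs burned length of the pilot fuel (mm)
time_s = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0])
burned_mm = np.array([0.0, 2.1, 4.0, 6.2, 8.1, 10.3])

# Linear least-squares fit: the slope is the mean regression rate (mm/s)
rate, intercept = np.polyfit(time_s, burned_mm, 1)
print(f"regression rate ~ {rate:.3f} mm/s ({rate * 60:.1f} mm/min)")
```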

Keywords: combustion, propellant, regression, safety

Procedia PDF Downloads 161
198 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches

Authors: Guerich Mohamed, Assaf Samir

Abstract:

The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percent coverage, are studied. The results show that the proposed approach can be used as an effective tool to study the influence of the location and size of the treatment on the natural frequencies and the associated modal loss factors. A parametric study of the variation in the damping characteristics of partially covered beams has then been conducted. In these studies, the effects of the core shear modulus, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches are considered. In partial coverage, the spatial distribution of the additive damping provided by the viscoelastic material is as important as the thickness and material properties of the viscoelastic layer and the constraining layer. Indeed, to limit the added mass and to attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator. Following this approach, the damping patches are applied over the regions of the base structure with the highest modal strain energy to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over the regions of highest energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best locations for the damping patches, as well as the material thicknesses and material properties of the layers, that will yield optimal damping with the minimum area of coverage.
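A minimal sketch of the modal strain energy (MSE) indicator discussed above: given per-element strain energies for one mode, it estimates the modal loss factor from the fraction of strain energy stored in the viscoelastic core and ranks the elements as candidate patch locations. The per-element energies and core loss factor are hypothetical inputs that a finite element model would normally provide; this is not the paper's refined dissipation-based indicator.

```python
import numpy as np

def modal_loss_factor(core_energy, face_energy, eta_core):
    """MSE estimate: eta_r ~ eta_core * (strain energy in the viscoelastic
    core) / (total strain energy) for one vibration mode."""
    total = core_energy.sum() + face_energy.sum()
    return eta_core * core_energy.sum() / total

# Hypothetical per-element strain energies (J) for one mode of the fully
# covered sandwich beam, e.g. taken from an FE post-processing step
core_energy = np.array([0.02, 0.08, 0.15, 0.30, 0.15, 0.08, 0.02])
face_energy = np.array([0.10, 0.12, 0.14, 0.16, 0.14, 0.12, 0.10])
eta_core = 0.8  # loss factor of the viscoelastic material (assumed)

eta_mode = modal_loss_factor(core_energy, face_energy, eta_core)
print(f"estimated modal loss factor = {eta_mode:.3f}")

# Rank elements by energy stored in the core: best candidate patch locations
ranking = np.argsort(core_energy)[::-1]
print("elements ranked for patch placement:", ranking)
```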

Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam

Procedia PDF Downloads 147
197 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example

Authors: Masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen

Abstract:

Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used for the simulation of a membrane bioreactor (MBR), which integrates biological wastewater treatment and physical separation by membrane filtration. In this study, the MBR, with a useful volume of 12.5 L, was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h, corresponding to a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% COD removal. Moreover, the MBR generated a treated effluent that complies with the limits for discharge into the public sewer according to the Tunisian standards set in March 2018. For the nitrogenous compounds, the average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg.L-1 and 2.2±2.53 mg.L-1, respectively. The simulation of the MBR process was performed using the SIMBA software v 5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration was performed using the experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+ and N-NOx were compared with those obtained from the experiment. A good prediction was observed for the COD, N-NH4+ and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³ and 3.2 g N/m³ compared to the experimental data, which were 436.4 g COD/m³, 114.7 g N/m³ and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the results of the experiments obtained during the second treatment phase of 30 days were used. It was demonstrated that the model simulated the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, an underestimation of the N-NH4+ concentration was observed during the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems. In addition, more treatment time could be required by the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved for the simulation of nutrient removal over a longer treatment period.
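Not the full ASM3h, but a minimal sketch of the kind of mass-balance model being calibrated: a single-substrate Monod growth/decay model for a completely mixed MBR in which the membrane retains the biomass, integrated with scipy. The influent COD, kinetic constants and initial state are illustrative values, not the study's calibrated parameters.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters (not the calibrated ASM3h set)
MU_MAX = 4.0      # 1/d, maximum heterotrophic growth rate
K_S = 20.0        # g COD/m3, half-saturation constant
Y_H = 0.6         # g COD biomass formed per g COD substrate consumed
B_H = 0.3         # 1/d, decay rate
S_IN = 2000.0     # g COD/m3, influent readily biodegradable COD
HRT = 6.25 / 24   # d, hydraulic retention time (6.25 h)

def mbr_odes(t, y):
    s, x = y  # substrate and biomass concentrations (g COD/m3)
    growth = MU_MAX * s / (K_S + s) * x
    ds = (S_IN - s) / HRT - growth / Y_H
    dx = growth - B_H * x        # membrane retains biomass: no washout term
    return [ds, dx]

sol = solve_ivp(mbr_odes, (0.0, 20.0), [500.0, 3000.0],
                method="LSODA", max_step=0.01)
s_end, x_end = sol.y[:, -1]
print(f"effluent soluble COD ~ {s_end:.1f} g/m3, "
      f"biomass ~ {x_end:.0f} g COD/m3 after 20 d")
```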

Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughterhouse wastewater (PSWW), reuse

Procedia PDF Downloads 58
196 Mechanism Design and Dynamic Analysis of Active Independent Front Steering System

Authors: Cheng-Chi Yu, Yu-Shiue Wang, Kei-Lin Kuo

Abstract:

An Active Independent Front Steering (AIFS) system is a steering system which, according to the vehicle driving situation, adjusts the relation between the steering angles of the inner and outer wheels. In low-speed cornering, AIFS sets the steering angles of the inner and outer wheels to Ackermann steering geometry so that the vehicle has a smaller cornering radius. Furthermore, AIFS changes the steering geometry to parallel or even anti-Ackermann geometry to keep the vehicle stable in high-speed cornering. Therefore, based on an analysis of the vehicle steering behavior for different steering geometries, this study develops a new screw-type active independent front steering system to give vehicles the best cornering performance at any speed. The screw-type AIFS keeps the pinion and separates the rack into a main rack and a second rack, connected by a screw. An extra screw rotation, powered by an assistant motor through a coupler, moves the second rack relative to the main rack, which adjusts both the steering ratio and the steering geometry. First of all, this study characterizes the steering geometry using the Ackermann percentage and uses the ADAMS/Car software to construct models of the diverse steering geometries. The different steering geometries are compared in low-speed and high-speed cornering, and control strategies for the active independent front steering system are then formulated. Secondly, this study applies closed-loop equations to analyze the tire steering angles and carries out optimization calculations to bring the steering geometry of a traditional rack-and-pinion steering system close to Ackermann steering geometry. The steering characteristics of the optimum steering mechanism and the motion characteristics of a vehicle equipped with it are verified by ADAMS/Car models of the front suspension and the full vehicle, respectively. By adding a dual auxiliary rack and dual motors to the optimum steering mechanism, the active independent front steering system is developed to achieve the functions of variable steering ratio and variable steering geometry. Finally, this study uses ADAMS/Car and Matlab/Simulink to co-simulate the cornering motion of vehicles and confirms that a vehicle equipped with the Active Independent Front Steering (AIFS) system has better handling performance than one with an Active Front Steering (AFS) system or an Electric Power Steering (EPS) system. In low-speed cornering, the vehicles with the AIFS and AFS systems have better maneuverability and a smaller cornering radius than the traditional vehicle with the EPS system, because the AIFS and AFS systems both provide a variable steering ratio. However, there is a slight penalty in motor power consumption. In addition, because of the capability of variable steering geometry, the vehicle with the AIFS system has better high-speed cornering stability and trajectory keeping, and even lower motor power consumption, than those with the EPS or AFS systems.
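A minimal sketch of the Ackermann-percentage idea used above to distinguish the steering geometries: given the wheelbase, track and a mean steer angle, it computes the inner/outer wheel angles for pure Ackermann, parallel and anti-Ackermann geometries by interpolating the Ackermann fraction. The vehicle dimensions and the linear interpolation are illustrative assumptions, not the study's exact definition.

```python
import numpy as np

def wheel_angles(mean_angle_deg, wheelbase, track, ackermann_pct):
    """Inner/outer wheel steer angles (deg) for a given Ackermann percentage.

    100% Ackermann satisfies cot(d_out) - cot(d_in) = track / wheelbase;
    0% is parallel steer (d_in = d_out); negative values give anti-Ackermann.
    """
    d_mean = np.radians(mean_angle_deg)
    # Turning radius of a virtual centre wheel (bicycle model)
    r_center = wheelbase / np.tan(d_mean)
    # Full-Ackermann angles about that turning centre
    d_in_full = np.arctan(wheelbase / (r_center - track / 2.0))
    d_out_full = np.arctan(wheelbase / (r_center + track / 2.0))
    # Interpolate between parallel steer (0%) and full Ackermann (100%)
    f = ackermann_pct / 100.0
    d_in = d_mean + f * (d_in_full - d_mean)
    d_out = d_mean + f * (d_out_full - d_mean)
    return np.degrees(d_in), np.degrees(d_out)

WHEELBASE, TRACK = 2.6, 1.5  # m, illustrative passenger-car dimensions
for pct in (100, 0, -50):    # Ackermann, parallel, anti-Ackermann
    d_in, d_out = wheel_angles(20.0, WHEELBASE, TRACK, pct)
    print(f"{pct:>4d}% Ackermann: inner = {d_in:5.2f} deg, outer = {d_out:5.2f} deg")
```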

Keywords: active front steering system, active independent front steering system, steering geometry, steering ratio

Procedia PDF Downloads 189
195 Airon Project: IoT-Based Agriculture System for the Optimization of Irrigation Water Consumption

Authors: África Vicario, Fernando J. Álvarez, Felipe Parralejo, Fernando Aranda

Abstract:

The irrigation systems of traditional agriculture, such as gravity-fed irrigation, waste a great deal of water because, generally, there is no control over the amount of water supplied in relation to the amount actually needed. The AIRON Project tries to solve this problem by implementing an IoT-based system that instruments the irrigation plots with sensors so that the state of the crops and the amount of water used for irrigation can be known remotely. The IoT system consists of a sensor network that measures the soil humidity, the weather conditions (temperature, relative humidity, wind and solar radiation) and the irrigation water flow. The communication between this network and a central gateway is conducted by means of a long-range wireless technology chosen according to the characteristics of the irrigation plot. The main objective of the AIRON project is to deploy an IoT sensor network in two different plots of the irrigation community of Aranjuez in the Spanish region of Madrid. The first plot is 2 km away from the central gateway, so LoRa has been used as the base communication technology. The problem with this plot is the absence of mains electric power, so devices with energy-saving modes have had to be used to maximize the use time of the external batteries. An ESP32 SoC board with a LoRa module is employed in this case to gather data from the sensor network and send them to a gateway consisting of a Raspberry Pi with a LoRa hat. The second plot is located 18 km away from the gateway, a range that hampers the use of LoRa technology. In order to establish reliable communication in this case, the Long-Term Evolution (LTE) standard is used, which makes it possible to reach much greater distances by using the cellular network. As mains electric power is available in this plot, a Raspberry Pi has been used instead of the ESP32 board to collect the sensor data. All data received from the two plots are stored on a proprietary server located at the irrigation management company's headquarters. The analysis of these data by means of machine learning algorithms, currently under development, should allow a short-term prediction of the irrigation water demand that would significantly reduce the waste of this increasingly valuable natural resource. The major finding of this work is the real possibility of deploying a remote sensing system for irrigated plots by using commercial off-the-shelf (COTS) devices, easily scalable and adaptable to design requirements such as the distance to the control center or the availability of mains electrical power at the site.
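A small sketch of the kind of compact payload the plot-1 node could transmit over LoRa and the gateway could unpack. The field layout, scaling factors and node identifier are assumptions made for illustration, not the project's actual frame format.

```python
import struct
import time

# Assumed frame layout: node id (u8), unix time (u32), soil moisture in 0.1 %
# steps (u16), air temperature in 0.1 degC steps (i16), flow in 0.01 L/s (u16)
FRAME_FMT = "<BIHhH"  # little-endian, 11 bytes total

def pack_reading(node_id, soil_pct, temp_c, flow_lps):
    """Encode one sensor reading into a compact binary LoRa payload."""
    return struct.pack(FRAME_FMT, node_id, int(time.time()),
                       int(soil_pct * 10), int(temp_c * 10), int(flow_lps * 100))

def unpack_reading(frame):
    """Decode a received payload back into engineering units (gateway side)."""
    node_id, ts, soil, temp, flow = struct.unpack(FRAME_FMT, frame)
    return {"node": node_id, "timestamp": ts,
            "soil_moisture_pct": soil / 10, "temperature_c": temp / 10,
            "flow_lps": flow / 100}

frame = pack_reading(node_id=3, soil_pct=27.4, temp_c=31.6, flow_lps=1.25)
print(len(frame), "bytes ->", unpack_reading(frame))
```

Keeping the payload this small matters mainly for the battery-powered LoRa plot, where shorter air time reduces energy use per transmission.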

Keywords: internet of things, irrigation water control, LoRa, LTE, smart farming

Procedia PDF Downloads 84