Search results for: Optimization
233 A Low-Cost Memristor Based on Hybrid Structures of Metal-Oxide Quantum Dots and Thin Films
Authors: Amir Shariffar, Haider Salman, Tanveer Siddique, Omar Manasreh
Abstract:
According to recent studies on metal-oxide memristors, researchers tend to improve the stability, endurance, and uniformity of resistive switching (RS) behavior in memristors. Specifically, the main challenge is to prevent abrupt ruptures in the memristor’s filament during the RS process. To address this problem, we are proposing a low-cost hybrid structure of metal oxide quantum dots (QDs) and thin films to control the formation of filaments in memristors. We aim to use metal oxide quantum dots because of their unique electronic properties and quantum confinement, which may improve the resistive switching behavior. QDs have discrete energy spectra due to electron confinement in three-dimensional space. Because of Coulomb repulsion between electrons, only a few free electrons are contained in a quantum dot. This fact might guide the growth direction for the conducting filaments in the metal oxide memristor. As a result, it is expected that QDs can improve the endurance and uniformity of RS behavior in memristors. Moreover, we use a hybrid structure of intrinsic n-type quantum dots and p-type thin films to introduce a potential barrier at the junction that can smooth the transition between high and low resistance states. A bottom-up approach is used for fabricating the proposed memristor using different types of metal-oxide QDs and thin films. We synthesize QDs, including zinc oxide, molybdenum trioxide, and nickel oxide, combined with spin-coated thin films of titanium dioxide, copper oxide, and hafnium dioxide. We employ fluorine-doped tin oxide (FTO) coated glass as the substrate for deposition and as the bottom electrode. Then, the active layer, composed of one type of quantum dots and the opposite type of thin film, is spin-coated onto the FTO. Lastly, circular gold electrodes are deposited with a shadow mask by using electron-beam (e-beam) evaporation at room temperature. The fabricated devices are characterized using a probe station with a semiconductor parameter analyzer. The current-voltage (I-V) characterization is analyzed for each device to determine the conduction mechanism. We evaluate the memristor’s performance in terms of stability, endurance, and retention time to identify the optimal memristive structure. Finally, we assess the proposed hypothesis before we proceed to the optimization process for fabricating the memristor.
Keywords: memristor, quantum dot, resistive switching, thin film
Procedia PDF Downloads 122
232 A Quantitative Study on the “Unbalanced Phenomenon” of Mixed-Use Development in the Central Area of Nanjing Inner City Based on the Meta-Dimensional Model
Abstract:
Promoting urban regeneration in existing areas has been elevated to a national strategy in China. In this context, because of the multidimensional sustainable effect of the intensive use of land, mixed-use development has become an important objective for high-quality urban regeneration in the inner city. However, in the long period since China's reform and opening up, the "unbalanced phenomenon" of mixed-use development in China's inner cities has been very serious. On the one hand, the excessive focus on certain individual spaces has led to an increase in the level of mixed-use development in some areas, substantially ahead of others, resulting in a growing gap between different parts of the inner city; on the other hand, the excessive focus on a one-dimensional element of the spatial organization of mixed-use development, such as the enhancement of functional mix or spatial capacity, has led to lagging construction or neglect of other dimensional elements, such as pedestrian permeability, green environmental quality, social inclusion, etc. This phenomenon is particularly evident in the central area of the inner city, and it clearly runs counter to the need for sustainable development in China's new era. Therefore, a rational qualitative and quantitative analysis of the "unbalanced phenomenon" will help to identify the problem and provide a basis for the formulation of relevant optimization plans in the future. This paper builds a dynamic evaluation method of mixed-use development based on a meta-dimensional model and then uses spatial evolution analysis and spatial consistency analysis in ArcGIS to reveal the "unbalanced phenomenon" over the past 40 years in the central area of Nanjing, a typical Chinese city facing regeneration. The study finds that, compared with the increase in functional mix and capacity, the dimensions of residential space mix, public service facility mix, pedestrian permeability, and greenness in Nanjing's central area showed varying degrees of lagging improvement, and the unbalanced development problems differ across parts of the city center, so governance and planning for future mixed-use development need to address these problems in full. The research methodology provides a tool for comprehensively and dynamically identifying changes in the level of mixed-use development, and the results deepen knowledge of the evolution of mixed-use development patterns in China’s inner cities and provide a reference for future regeneration practices.
Keywords: mixed-use development, unbalanced phenomenon, the meta-dimensional model, over the past 40 years of Nanjing, China
Procedia PDF Downloads 106
231 CO₂ Conversion by Low-Temperature Fischer-Tropsch
Authors: Pauline Bredy, Yves Schuurman, David Farrusseng
Abstract:
To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears as part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied by water electrolysis powered by renewable sources, and is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The proposed process discussed here is an upgrading of the well-known Fischer-Tropsch synthesis. The concept deals with two cascade reactions in one pot: first the conversion of CO₂ into CO via the reverse water-gas shift (RWGS) reaction, which is then followed by the Fischer-Tropsch synthesis (FTS). Instead of using a Fe-based catalyst, which can carry out both reactions, we have chosen the strategy of decoupling the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS shall shift the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy shall enable optimization of the catalyst pair and, thanks to the equilibrium shift, allow the reaction temperature to be lowered to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst but also in the ability of the FT catalyst to be highly selective. Methane production is the main concern as the energetic barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal will be to minimize methane selectivity. Here we report the study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behaviors under mild process conditions using high-throughput experimentation. Our results show that at 250°C and 20 bars, cobalt catalysts mainly act as methanation catalysts. Indeed, CH₄ selectivity never drops under 80% despite the addition of various promoters (Nb, K, Pt, Cu) on the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to longer hydrocarbon chain (C₂⁺) selectivities of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system. Similar selectivity and conversion were obtained at reduction temperatures between 250-400°C. This leads to the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Thus, better results are expected when coupling it with a more selective FT catalyst. This was achieved using a cobalt/iron FTS catalyst. The CH₄ selectivity dropped to 62% at 265°C, 20 bars, and a GHSV of 2500 ml/h/gcat. We propose that the conditions used for the cobalt catalysts could have generated this methanation because these catalysts are known to have their best performance around 210°C in classical FTS, whereas the iron catalysts are more flexible but are also known to have RWGS activity.
Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process
Procedia PDF Downloads 58
230 Evaluation of Cardiac Rhythm Patterns after Open Surgical Maze-Procedures from Three Years' Experiences in a Single Heart Center
Authors: J. Yan, B. Pieper, B. Bucsky, H. H. Sievers, B. Nasseri, S. A. Mohamed
Abstract:
In order to optimize the efficacy of medications, regular follow-up with long-term continuous monitoring of heart rhythm patterns has been facilitated since the clinical introduction of cardiac implantable electronic monitoring devices (CIMD). Extensive analysis of circadian rhythm properties can disclose the distribution of arrhythmic events, which may support appropriate medication according to a rate-/rhythm-control strategy and minimize consequent afflictions. 348 patients (69 ± 0.5 years, 61.8% male) with predisposed atrial fibrillation (AF), undergoing primary ablation therapies combined with coronary or valve operations and secondary implantation of CIMDs, were involved and divided into three groups: PAAF (paroxysmal AF) (n=99, male 68.7%), PEAF (persistent AF) (n=94, male 62.8%), and LSPEAF (long-standing persistent AF) (n=155, male 56.8%). All patients participated in a three-year ambulatory follow-up (3, 6, 9, 12, 18, 24, 30 and 36 months). Burdens of atrial fibrillation recurrence were assessed using cardiac monitor devices, whereby attack frequencies and their circadian patterns were systematically analyzed. Anticoagulants and regular anti-arrhythmic medications were evaluated, and the latter were listed in terms of rate-control and rhythm-control regimens. Patients in the PEAF group showed the least AF burden after surgical ablation procedures compared to both of the other subtypes (p < 0.05). AF recurrences predominantly presented as attacks shorter than one hour, mostly within 10 minutes (p < 0.05), regardless of AF subtype. Concerning the circadian distribution of recurrence, frequent AF attacks were mostly recorded in the morning in the PAAF group (p < 0.05), whereas patients with predisposed PEAF reported fewer attack-induced discomforts in the latter half of the night, and those with LSPEAF only when they were not physically active after the primary surgical ablation. Different AF subtypes presented distinct therapeutic efficacies after appropriate surgical ablation procedures and distinct recurrence properties in terms of circadian distribution. Optimization of the medical regimen and drug dosages to maintain therapeutic success requires more attention to detailed assessment during long-term follow-up. The rate-control strategy plays a much more important role than rhythm-control in the ongoing follow-up examinations.
Keywords: atrial fibrillation, CIMD, MAZE, rate-control, rhythm-control, rhythm patterns
Procedia PDF Downloads 157
229 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS
Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan
Abstract:
Analysis of stresses plays an important role in the optimization of structures. Prior stress estimation helps in better design of the products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. Especially in the aircraft industry, the usage of composites is higher due to their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials have the drawback of delamination and debonding due to the weaker bond materials compared to the parent materials. So proper analysis of composite joints should be done before using them in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element Solid46 for the composite plate and the solid element Solid45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on the joint strength: the deflection and load sharing of the pin increase, while other parameters like overall stress, pin stress and contact pressure reduce due to the lower load on the plate material. The material effect shows that a higher Young's modulus material has less deflection, but the other parameters increase. Interference analysis shows an increase in overall stress, pin stress and contact stress along with pin bearing load. This increase should be understood properly for increasing the load carrying capacity of the joint. Generally, every structure is preloaded to increase the compressive stress in the joint and thereby increase the load carrying capacity. But the stress increase should be properly analysed for composites due to delamination and debonding effects arising from failure of the bond materials. When results for an isotropic combination are compared with the composite joint, the isotropic joint shows uniform results with lower values for all parameters. This is mainly due to the applied layer angle combinations. All the results are presented with the necessary pictorial plots.
Keywords: bearing force, frictional force, finite element analysis, ANSYS
Procedia PDF Downloads 334
228 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization
Authors: Wenqi Liu, Reginald Bailey
Abstract:
This study proposes a comprehensive and effective approach to business-to-business (B2B) sales forecasting by integrating advanced machine learning models with a rule-based decision-making framework. The methodology addresses the critical challenge of optimizing sales pipeline performance and improving conversion rates through predictive analytics and actionable insights. The first component involves developing a classification model to predict the likelihood of conversion, aiming to outperform traditional methods such as logistic regression in terms of accuracy, precision, recall, and F1 score. Feature importance analysis highlights key predictive factors, such as client revenue size and sales velocity, providing valuable insights into conversion dynamics. The second component focuses on forecasting sales value using a regression model, designed to achieve superior performance compared to linear regression by minimizing mean absolute error (MAE), mean squared error (MSE), and maximizing R-squared metrics. The regression analysis identifies primary drivers of sales value, further informing data-driven strategies. To bridge the gap between predictive modeling and actionable outcomes, a rule-based decision framework is introduced. This model categorizes leads into high, medium, and low priorities based on thresholds for conversion probability and predicted sales value. By combining classification and regression outputs, this framework enables sales teams to allocate resources effectively, focus on high-value opportunities, and streamline lead management processes. The integrated approach significantly enhances lead prioritization, increases conversion rates, and drives revenue generation, offering a robust solution to the declining pipeline conversion rates faced by many B2B organizations. Our findings demonstrate the practical benefits of blending machine learning with decision-making frameworks, providing a scalable, data-driven solution for strategic sales optimization. This study underscores the potential of predictive analytics to transform B2B sales operations, enabling more informed decision-making and improved organizational outcomes in competitive markets.Keywords: machine learning, XGBoost, regression, decision making framework, system engineering
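The abstract does not include the authors' code, but the combination it describes (a conversion classifier, a deal-value regressor, and threshold rules for lead triage) can be sketched as follows. The feature names, thresholds, and model choices below are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch of classifier + regressor + rule-based triage (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # e.g. client revenue size, sales velocity, engagement
converted = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
deal_value = np.exp(1.0 + 0.8 * X[:, 0] + rng.normal(scale=0.3, size=500))

clf = GradientBoostingClassifier().fit(X, converted)                                  # P(conversion)
reg = GradientBoostingRegressor().fit(X[converted == 1], deal_value[converted == 1])  # expected value

def prioritize(x, p_hi=0.6, p_lo=0.3, v_hi=50.0):
    """Rule-based triage on top of the two model outputs (thresholds are assumptions)."""
    p = clf.predict_proba(x.reshape(1, -1))[0, 1]
    v = reg.predict(x.reshape(1, -1))[0]
    if p >= p_hi and v >= v_hi:
        return "high"
    if p >= p_lo:
        return "medium"
    return "low"

print(prioritize(X[0]))
```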
Procedia PDF Downloads 21
227 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used in order to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy in the case of a single component than in the case of assemblies. Especially for structural dynamics studies, in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other side, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear, even if there is one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology to obtain the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the nonlinear SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
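The abstract does not give the authors' NL-SDMM equations, but the general idea of "updating" a linear FRF with a localized nonlinearity can be illustrated with a first-harmonic (describing-function) sketch: a cubic spring at one DOF is replaced by an amplitude-dependent equivalent stiffness, and the linear FRF is recomputed until the amplitude converges. All numerical values are assumptions.

```python
# Minimal sketch (not the authors' code): 2-DOF linear FRF updated with an equivalent
# stiffness for a localized cubic spring at DOF 1 (first-harmonic approximation).
import numpy as np

M = np.diag([1.0, 1.0])
K = np.array([[2e4, -1e4], [-1e4, 2e4]])
C = 1e-4 * K                                   # light proportional damping (assumed)
k_nl = 5e7                                     # cubic stiffness coefficient at DOF 1 (assumed)
F = np.array([10.0, 0.0])                      # harmonic force amplitude

amp_nl = []
for f in np.linspace(1.0, 40.0, 400):          # frequency sweep in Hz
    w = 2 * np.pi * f
    a = 0.0                                    # response amplitude at DOF 1
    for _ in range(50):                        # fixed-point update of the underlying linear model
        k_eq = 0.75 * k_nl * a**2              # describing-function stiffness of the cubic spring
        Kmod = K.copy()
        Kmod[0, 0] += k_eq                     # localized modification
        H = np.linalg.inv(Kmod + 1j * w * C - w**2 * M)
        a_new = abs((H @ F)[0])
        if abs(a_new - a) < 1e-10:
            break
        a = a_new
    amp_nl.append(a)
print(max(amp_nl))
```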
Procedia PDF Downloads 266
226 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal
Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota
Abstract:
This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios are also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application for impact studies at a regional or a local level. The dynamically downscaled precipitation and temperature data from Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection, under A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which includes watershed and hydro-meteorological characterization, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being used in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the Groups Algorithms Programming (GAP) optimization approach and subsequent calibration were used to obtain several parameter sets that finally reproduced the observed streamflow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001-2060 for both A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both A1B and A2 scenarios. These warming trends are higher in maximum than in minimum temperatures throughout the year, indicating an increasing trend in the daily temperature range due to the recent global warming phenomenon. Furthermore, there is a decreasing trend in summer discharge in the Langtang Khola (Langtang region), whereas it is increasing in the Modi Khola (Annapurna region) and Dudh Koshi (Khumbu region) river basins. The flow regime change is more pronounced during later parts of the future decades than during earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm are observed in the Annapurna, Langtang and Khumbu regions, respectively.
Keywords: temperature, precipitation, water discharge, water balance, global warming
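The specific bias-correction scheme used in the study is not given in the abstract; a simple monthly linear-scaling correction of the kind often applied to GCM output before rainfall-runoff modelling is sketched below as an illustration. All data are synthetic.

```python
# Illustrative monthly linear-scaling bias correction of GCM precipitation and temperature.
import numpy as np
import pandas as pd

idx = pd.date_range("1990-01-01", "1999-12-31", freq="D")
obs_p = pd.Series(np.random.gamma(2.0, 3.0, len(idx)), index=idx)          # observed precipitation (mm)
gcm_p = pd.Series(np.random.gamma(2.0, 2.0, len(idx)), index=idx)          # raw GCM precipitation (mm)
obs_t = pd.Series(10 + 8 * np.sin(2 * np.pi * idx.dayofyear / 365), index=idx)
gcm_t = obs_t - 1.5 + np.random.normal(0, 0.5, len(idx))                   # raw GCM temperature (deg C)

# Monthly correction factors/offsets estimated over the calibration period
p_factor = obs_p.groupby(obs_p.index.month).mean() / gcm_p.groupby(gcm_p.index.month).mean()
t_offset = obs_t.groupby(obs_t.index.month).mean() - gcm_t.groupby(gcm_t.index.month).mean()

# Applied here to the same series; in practice they are applied to the scenario period
gcm_p_corr = gcm_p * gcm_p.index.month.map(p_factor).values
gcm_t_corr = gcm_t + gcm_t.index.month.map(t_offset).values
print(round(gcm_p_corr.mean(), 2), round(gcm_t_corr.mean(), 2))
```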
Procedia PDF Downloads 344
225 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning that ultimately improve the performance of the controlled actuators. There is a vast range of options of compensation algorithms that could be used, although in the industry, most controllers used are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated on its two most common architectures: PID position form (1 DOF), and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: discrete state observer for non-measurable variables tracking, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The compared control systems’ performance is evaluated through simulations in the Simulink platform, in which it is attempted to model accurately each of the system’s hardware components. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, it is considered that the accurate tracking of the reference signal for a position control system is particularly important because of the frequency and the suddenness in which the control signal could change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected, ensuring reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature and the advantage which state space provides for modelling MIMO, it is expected that such controllers evince ease of tuning for disturbance rejection, assuming that the designer of such controllers is experienced. An in-depth multi-dimensional analysis of preliminary research results indicate that state feedback control method is more satisfactory, but PID control method exhibits easier implementation in most control applications.Keywords: control, DC motor, discrete PID, discrete state feedback
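As a small illustration of the 1-DOF position-form PID mentioned above, the sketch below closes a discrete PID loop around a crude discretized DC-motor position model. The gains, motor constants, and sample time are illustrative assumptions, not the values used in the paper.

```python
# Position-form discrete PID around a first-order velocity model integrated to position.
Ts, Kp, Ki, Kd = 0.001, 5.0, 40.0, 0.02        # sample time [s] and PID gains (assumed)
tau, K_motor = 0.05, 2.0                        # motor time constant [s] and gain (assumed)

ref = 1.0                                       # position reference [rad]
pos, vel = 0.0, 0.0
e_prev, e_int = 0.0, 0.0
for k in range(2000):
    e = ref - pos
    e_int += e * Ts
    u = Kp * e + Ki * e_int + Kd * (e - e_prev) / Ts   # PID position form (1 DOF)
    e_prev = e
    # first-order velocity dynamics plus integration to position (forward Euler)
    vel += Ts * (-vel + K_motor * u) / tau
    pos += Ts * vel
print(round(pos, 3))                            # should settle near the reference
```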
Procedia PDF Downloads 268
224 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns optimizing UI test automation for large-scale web applications. The test target application is the HHAexchange homecare management WEB application that seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionalities. This study focuses on user interface automation testing for the WEB application. The quality assurance team must execute many manual user interface test cases in the development process to confirm that there are no regression bugs. The team automated 346 test cases; the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the WEB UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language to write test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment. In order to introduce scalability into test automation efficiently, a scripting language was adopted. The scalability mechanism is mainly implemented with AWS serverless technology, the Elastic Container Service. The definition of scalability here is the ability to automatically set up computers for test automation and to increase or decrease the number of computers running those tests. This means the scalable mechanism can help test cases run in parallel, so the test execution time is dramatically decreased. Also, introducing scalable test automation is about more than just reducing test execution time: there is a possibility of detecting some challenging bugs, such as race conditions, since test cases can be executed at the same time. If API and unit tests are implemented, the test strategies can be adopted more efficiently for this scalability testing. However, in WEB applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach the front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system. It confirmed the optimization of the test case execution time and the detection of a challenging bug. This study first describes the detailed architecture of the scalable test automation environment, then describes the actual reduction in execution time and an example of challenging issue detection.
Keywords: aws, elastic container service, scalability, serverless, ui automation test
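The abstract describes fanning test execution out to container tasks on AWS ECS; the general pattern can be sketched as below. The cluster name, task definition, container name, subnet, and shard names are placeholders, not the project's real configuration.

```python
# Hedged sketch: launching one Fargate task per UI test shard via boto3.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
test_shards = ["suite_login", "suite_patients", "suite_billing"]   # hypothetical shard names

for shard in test_shards:
    ecs.run_task(
        cluster="ui-test-cluster",                       # placeholder
        taskDefinition="selenium-pytest-runner",         # placeholder
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"], # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {"name": "runner", "command": ["pytest", "-m", shard]}   # placeholder command
            ]
        },
    )
```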
Procedia PDF Downloads 108
223 Quantification and Detection of Non-Sewer Water Infiltration and Inflow in Urban Sewer Systems
Authors: M. Beheshti, S. Saegrov, T. M. Muthanna
Abstract:
Separated sewer systems are designed to transfer the wastewater from houses and industrial sections to wastewater treatment plants. Unwanted water in the sewer systems is a well-known problem, i.e. storm-water inflow is around 50% of the foul sewer, and groundwater infiltration to the sewer system can exceed 50% of total wastewater volume in deteriorated networks. Infiltration and inflow of non-sewer water (I/I) into sewer systems is unfavorable in separated sewer systems and can trigger overloading the system and reducing the efficiency of wastewater treatment plants. Moreover, I/I has negative economic, environmental, and social impacts on urban areas. Therefore, for having sustainable management of urban sewer systems, I/I of unwanted water into the urban sewer systems should be considered carefully and maintenance and rehabilitation plan should be implemented on these water infrastructural assets. This study presents a methodology to identify and quantify the level of I/I into the sewer system. Amount of I/I is evaluated by accurate flow measurement in separated sewer systems for specified isolated catchments in Trondheim city (Norway). Advanced information about the characteristics of I/I is gained by CCTV inspection of sewer pipelines with high I/I contribution. Achieving enhanced knowledge about the detection and localization of non-sewer water in foul sewer system during the wet and dry weather conditions will enable the possibility for finding the problem of sewer system and prioritizing them and taking decisions for rehabilitation and renewal planning in the long-term. Furthermore, preventive measures and optimization of sewer systems functionality and efficiency can be executed by maintenance of sewer system. In this way, the exploitation of sewer system can be improved by maintenance and rehabilitation of existing pipelines in a sustainable way by more practical cost-effective and environmental friendly way. This study is conducted on specified catchments with different properties in Trondheim city. Risvollan catchment is one of these catchments with a measuring station to investigate hydrological parameters through the year, which also has a good database. For assessing the infiltration in a separated sewer system, applying the flow rate measurement method can be utilized in obtaining a general view of the network condition from infiltration point of view. This study discusses commonly used and advanced methods of localizing and quantifying I/I in sewer systems. A combination of these methods give sewer operators the possibility to compare different techniques and obtain reliable and accurate I/I data which is vital for long-term rehabilitation plans.Keywords: flow rate measurement, infiltration and inflow (I/I), non-sewer water, separated sewer systems, sustainable management
Procedia PDF Downloads 334
222 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to try to describe growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve predicting accuracy of breast cancer progression using an original mathematical model referred to CoMPaS and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing adequate and precise CoMPaS which reflects relations between the primary tumor and the secondary distant metastases; 3) analyzing the CoMPaS scope of application; 4) implementing the model as a software tool. The foundation of the CoMPaS is the exponential tumor growth model, which is described by determinate nonlinear and linear equations. The CoMPaS corresponds to TNM classification. It allows to calculate different growth periods of the primary tumor and the secondary distant metastases: 1) ‘non-visible period’ for the primary tumor; 2) ‘non-visible period’ for the secondary distant metastases; 3) ‘visible period’ for the secondary distant metastases. The CoMPaS is validated on clinical data of 10-years and 15-years survival depending on the tumor stage and diameter of the primary tumor. The new predictive tool: 1) is a solid foundation to develop future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes forecast using only current patient data, the others are based on the additional statistical data. The CoMPaS model and predictive software: a) fit to clinical trials data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) make forecast of the period of the secondary distant metastases appearance; d) have higher average prediction accuracy than the other tools; e) can improve forecasts on survival of breast cancer and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for ‘non-visible’ and ‘visible’ growth period of the secondary distant metastases; tumor volume doubling time (days) for ‘non-visible’ and ‘visible’ growth period of the secondary distant metastases. The CoMPaS enables, for the first time, to predict ‘whole natural history’ of the primary tumor and the secondary distant metastases growth on each stage (pT1, pT2, pT3, pT4) relying only on the primary tumor sizes. Summarizing: a) CoMPaS describes correctly the primary tumor growth of IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in lymph nodes (N0); b) facilitates the understanding of the appearance period and inception of the secondary distant metastases.Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
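Although the CoMPaS code itself is not given, the doubling quantities mentioned above follow directly from the exponential growth model: for a spherical tumor, volume scales with the cube of diameter, so the number of volume doublings between two diameters is 3·log₂(d₁/d₀). The worked example below uses made-up input values.

```python
# Worked example of doublings and volume doubling time under exponential growth.
import math

def doublings(d0_mm, d1_mm):
    """Number of volume doublings to grow from diameter d0 to d1 (volume ~ d^3)."""
    return 3 * math.log2(d1_mm / d0_mm)

def doubling_time(d0_mm, d1_mm, elapsed_days):
    """Volume doubling time in days, assuming exponential growth over the elapsed period."""
    return elapsed_days / doublings(d0_mm, d1_mm)

n = doublings(5.0, 20.0)                       # e.g. 5 mm -> 20 mm
print(round(n, 2), "doublings")                # 3*log2(4) = 6 doublings
print(round(doubling_time(5.0, 20.0, 540), 1), "days per doubling")
```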
Procedia PDF Downloads 341
221 The Aromaticity of P-Substituted O-(N-Dialkyl)Aminomethylphenols
Authors: Khodzhaberdi Allaberdiev
Abstract:
Aromaticity, one of the most important concepts in organic chemistry, has attracted considerable interest from both experimentalists and theoreticians. The geometry optimization of p-substituted o-(N-dialkyl)aminomethylphenols, o-DEAMPH, XC₆H₅CH₂Y (X = p-OCH₃, CH₃, H, F, Cl, Br, COCH₃, COOCH₃, CHO, CN and NO₂; Y = o-N(C₂H₅)₂), has been performed in the gas phase at the B3LYP/6-311+G(d,p) level. Aromaticities of the considered molecules were investigated using different indices, including geometrical (HOMA and Bird), electronic (FLU, PDI and SA) and magnetic (NICS(0), NICS(1) and NICS(1)zz) indices. Linear dependencies were obtained between some aromaticity indices. The best correlation is observed between the Bird and PDI indices (R² = 0.9240). However, not all types of indices, or even different indices within the same type, correlate well with each other. Surprisingly, for the studied molecules, in which the geometrical and electronic indices cannot correctly give the aromaticity of the ring, the magnetism-based index successfully predicts the aromaticity of the systems. 1H NMR spectra of the compounds were obtained at the B3LYP/6-311+G(d,p) level using the GIAO method. An excellent linear correlation (R² = 0.9996) between the experimental 1H NMR chemical shifts of the hydrogen atom and those calculated using B3LYP/6-311+G(d,p) demonstrates a good assignment of the experimental chemical shift values to the calculated structures of o-DEAMPH. It is found that the best linear correlation with the Hammett substituent constants is observed for the NICS(1)zz index in comparison with the other indices: NICS(1)zz = -21.5552 + 1.1070 σp- (R² = 0.9394). The presence of an intramolecular hydrogen bond in the studied molecules also changes the aromatic character of the substituted o-DEAMPHs. The HOMA index predicted for R = NO₂ a reduction in the π-electron delocalization of 3.4%, about double that observed for p-nitrophenol. The influence of intramolecular H-bonding on the aromaticity of the benzene ring in the ground state (S0) is described by equations relating NICS(1)zz to the H-bond energies: experimental, Eₑₓₚ, predicted IR spectroscopic, Eν, and topological, EQTAIM, with correlation coefficients R² = 0.9666, R² = 0.9028 and R² = 0.8864, respectively. The NICS(1)zz index also correlates with the usual descriptors of the hydrogen bond, while the other indices do not give any meaningful results. The influence of intramolecular H-bond formation on the aromaticity of some substituted o-DEAMPHs is a criterion for considering the multidimensional character of aromaticity. Linear relationships were also revealed between NICS(1)zz and both the pyramidality of the nitrogen atom, ΣN(C₂H₅)₂, and the dihedral angle, φ CAr–CAr–CCH₂–N, characterizing out-of-plane properties. These results demonstrate the nonplanar structure of the o-DEAMPHs. Finally, when considering the dependencies of NICS(1)zz, the data for R = H were excluded, because the NICS(1) and NICS(1)zz values are the most negative for the unsubstituted DEAMPH, indicating its highest aromaticity; that was not the case for the NICS(0) index.
Keywords: aminomethylphenols, DFT, aromaticity, correlations
Procedia PDF Downloads 181
220 Optimization of Artisanal Fishing Waste Fermentation for Volatile Fatty Acids Production
Authors: Luz Stella Cadavid-Rodriguez, Viviana E. Castro-Lopez
Abstract:
Fish waste (FW) has a high content of potentially biodegradable components, so it is amenable to be digested anaerobically. In this line, anaerobic digestion (AD) of FW has been studied for biogas production. Nevertheless, intermediate products such as volatile fatty acids (VFA), generated during the acidogenic stage, have been scarce investigated, even though they have a high potential as a renewable source of carbon. In the literature, there are few studies about the Inoculum-Substrate (I/S) ratio on acidogenesis. On the other hand, it is well known that pH is a critical factor in the production of VFA. The optimum pH for the production of VFA seems to change depending on the substrate and can vary in a range between 5.25 and 11. Nonetheless, the literature about VFA production from protein-rich waste, such as FW, is scarce. In this context, it is necessary to deepen on the determination of the optimal operating conditions of acidogenic fermentation for VFA production from protein-rich waste. Therefore, the aim of this research was to optimize the volatile fatty acid production from artisanal fishing waste, studying the effect of pH and the I/S ratio on the acidogenic process. For this research, the inoculum used was a methanogenic sludge (MS) obtained from a UASB reactor treating wastewater of a slaughterhouse plant, and the FW was collected in the port of Tumaco (Colombia) from the local artisanal fishers. The acidogenic fermentation experiments were conducted in batch mode, in 500 mL glass bottles as anaerobic reactors, equipped with rubber stoppers provided with a valve to release biogas. The effective volume used was 300 mL. The experiments were carried out for 15 days at a mesophilic temperature of 37± 2 °C and constant agitation of 200 rpm. The effect of 3 pH levels: 5, 7, 9, coupled with five I/S ratios, corresponding to 0.20, 0.15, 0.10, 0.05, 0.00 was evaluated taking as a response variable the production of VFA. A complete randomized block design was selected for the experiments in a 5x3 factorial arrangement, with two repetitions per treatment. At the beginning and during the process, pH in the experimental reactors was adjusted to the corresponding values of 5, 7, and 9 using 1M NaOH or 1M H2SO4, as was appropriated. In addition, once the optimum I/S ratio was determined, the process was evaluated at this condition without pH control. The results indicated that pH is the main factor in the production of VFA, obtaining the highest concentration with neutral pH. By reducing the I/S ratio, as low as 0.05, it was possible to maximize VFA production. Thus, the optimum conditions found were natural pH (6.6-7.7) and I/S ratio of 0.05, with which it was possible to reach a maximum total VFA concentration of 70.3 g Ac/L, whose major components were acetic acid (35%) and butyric acid (32%). The findings showed that the acidogenic fermentation of FW is an efficient way of producing VFA and that the operating conditions can be simple and economical.Keywords: acidogenesis, artisanal fishing waste, inoculum to substrate ratio, volatile fatty acids
Procedia PDF Downloads 126
219 Sustainable Technology and the Production of Housing
Authors: S. Arias
Abstract:
New housing developments, and the technological changes they imply, adapt the living styles of their residents, as well as new family structures and forms of work arising from the particular needs of a specific group of people, which involves different techniques for dealing with, organizing, equipping and using a particular territory. Currently, owning one's own space is increasingly important, and cities are faced with the challenge of meeting such demands, as well as providing the energy, water and waste removal necessary in the process of construction and occupation of new human settlements. To date, it has not been possible to fully respond to these demands and needs, resulting in cities that grow without control, badly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; therefore, environmental quality links human comfort to the sustainable development of natural resources. Applied to architecture, this concept involves the incorporation of new technologies throughout the construction process of a dwelling, changing the customs of developers and users, which demands a greater effort in planning energy savings and thus reducing greenhouse gas (GHG) emissions depending on the geographical location where development is planned. Since the techniques of territorial occupation are not the same everywhere, it must be taken into account that these depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, which are modified according to the degree of development reached. In the analysis that must be undertaken to check the degree of sustainability of the place, it is necessary to make estimates of the energy used in artificial air conditioning and lighting. In the same way, it is required to diagnose the availability and distribution of the water resources used for hygiene and for the cooling of artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, it is possible to perform an energy audit when proposing sustainability recommendations for architectural spaces, aiming at energy savings, rational use of water and optimization of natural resources. The above can be carried out through the development of a sustainable building code that develops technical recommendations suited to the regional characteristics of each study site. These codes would seek to build a basis for promoting building regulations applicable to new human settlements, generating at the same time quality, protection and safety in them. Such building regulations must be consistent with other national, state and municipal regulations, such as human settlement laws, urban development and zoning regulations.
Keywords: building regulations, housing, sustainability, technology
Procedia PDF Downloads 347
218 Environmental Benefits of Corn Cob Ash in Lateritic Soil Cement Stabilization for Road Works in a Sub-Tropical Region
Authors: Ahmed O. Apampa, Yinusa A. Jimoh
Abstract:
The potential economic viability and environmental benefits of using a biomass waste, such as corn cob ash (CCA) as pozzolan in stabilizing soils for road pavement construction in a sub-tropical region was investigated. Corn cob was obtained from Maya in South West Nigeria and processed to ash of characteristics similar to Class C Fly Ash pozzolan as specified in ASTM C618-12. This was then blended with ordinary Portland cement in the CCA:OPC ratios of 1:1, 1:2 and 2:1. Each of these blends was then mixed with lateritic soil of ASHTO classification A-2-6(3) in varying percentages from 0 – 7.5% at 1.5% intervals. The soil-CCA-Cement mixtures were thereafter tested for geotechnical index properties including the BS Proctor Compaction, California Bearing Ratio (CBR) and the Unconfined Compression Strength Test. The tests were repeated for soil-cement mix without any CCA blending. The cost of the binder inputs and optimal blends of CCA:OPC in the stabilized soil were thereafter analyzed by developing algorithms that relate the experimental data on strength parameters (Unconfined Compression Strength, UCS and California Bearing Ratio, CBR) with the bivariate independent variables CCA and OPC content, using Matlab R2011b. An optimization problem was then set up minimizing the cost of chemical stabilization of laterite with CCA and OPC, subject to the constraints of minimum strength specifications. The Evolutionary Engine as well as the Generalized Reduced Gradient option of the Solver of MS Excel 2010 were used separately on the cells to obtain the optimal blend of CCA:OPC. The optimal blend attaining the required strength of 1800 kN/m2 was determined for the 1:2 CCA:OPC as 5.4% mix (OPC content 3.6%) compared with 4.2% for the OPC only option; and as 6.2% mix for the 1:1 blend (OPC content 3%). The 2:1 blend did not attain the required strength, though over a 100% gain in UCS value was obtained over the control sample with 0% binder. Upon the fact that 0.97 tonne of CO2 is released for every tonne of cement used (OEE, 2001), the reduced OPC requirement to attain the same result indicates the possibility of reducing the net CO2 contribution of the construction industry to the environment ranging from 14 – 28.5% if CCA:OPC blends are widely used in soil stabilization, going by the results of this study. The paper concludes by recommending that Nigeria and other developing countries in the sub-tropics with abundant stock of biomass waste should look in the direction of intensifying the use of biomass waste as fuel and the derived ash for the production of pozzolans for road-works, thereby reducing overall green house gas emissions and in compliance with the objectives of the United Nations Framework on Climate Change.Keywords: corn cob ash, biomass waste, lateritic soil, unconfined compression strength, CO2 emission
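The abstract describes minimizing binder cost subject to a minimum strength of 1800 kN/m², using response surfaces fitted to the experimental data. The sketch below illustrates that type of constrained minimization; the quadratic strength surface coefficients and unit prices are illustrative assumptions, not the regression results of the study.

```python
# Hedged sketch of cost minimization subject to a UCS >= 1800 kN/m^2 constraint.
import numpy as np
from scipy.optimize import minimize

def ucs(x):
    """Assumed quadratic response surface for UCS [kN/m^2] vs CCA and OPC content [%]."""
    cca, opc = x
    return 250.0 + 90.0 * cca + 380.0 * opc - 4.0 * cca**2 - 10.0 * opc**2

cost = np.array([0.02, 0.12])            # assumed unit cost per % of CCA and OPC

res = minimize(
    lambda x: cost @ x,                  # minimize binder cost
    x0=[3.0, 3.0],
    bounds=[(0.0, 7.5), (0.0, 7.5)],     # binder contents limited to the tested range
    constraints=[{"type": "ineq", "fun": lambda x: ucs(x) - 1800.0}],
    method="SLSQP",
)
print(res.x, round(ucs(res.x), 1))
```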
Procedia PDF Downloads 375
217 Improving Alkaline Water Electrolysis by Using an Asymmetrical Electrode Cell Design
Authors: Gabriel Wosiak, Felipe Staciaki, Eryka Nobrega, Ernesto Pereira
Abstract:
Hydrogen is an energy carrier with potential applications in various industries. Alkaline electrolysis is a commonly used method for hydrogen production; however, its energy cost remains relatively high compared to other methods. This is due in part to interfacial pH changes that occur during the electrolysis process. Interfacial pH changes refer to the changes in pH that occur at the interface between the electrodes and the electrolyte solution. These changes are caused by the electrochemical reactions at both electrodes, which consume or produce hydroxide ions (OH-) from the electrolyte solution. This results in an important change in the local pH at the electrode surface, which can have several impacts on the energy consumption and durability of electrolysers. One impact of interfacial pH changes is an increase in the overpotential required for hydrogen production. Overpotential is the difference between the theoretical potential required for a reaction to occur and the actual potential that is applied to the electrodes. In the case of water electrolysis, the overpotential is caused by a number of factors, including the mass transport of reactants and products to and from the electrodes, the kinetics of the electrochemical reactions, and the interfacial pH. A decrease in the interfacial pH at the anode surface in alkaline conditions can lead to an increase in the overpotential for hydrogen production, because the lower local pH makes it more difficult for the hydroxide ions to be oxidized. As a result, the energy required for the process to occur increases. In addition to increasing the overpotential, interfacial pH changes can also lead to the degradation of the electrodes. This is because the lower pH can make the electrode more susceptible to corrosion. As a result, the electrodes may need to be replaced more frequently, which can increase the overall cost of water electrolysis. The method presented in this paper addresses the issue of interfacial pH changes by using a different cell design that introduces electrode asymmetry. This design helps to mitigate the pH gradient at the anode/electrolyte interface, which reduces the overpotential and improves the energy efficiency of the electrolyser. The method was tested using a multivariate approach under both laboratory and industrial current density conditions, and the results were validated with numerical simulations. The results demonstrated a clear improvement (11.6%) in energy efficiency, providing an important contribution to the field of sustainable energy production. The findings of the paper have important implications for the development of cost-effective and sustainable hydrogen production methods. By mitigating interfacial pH changes, it is possible to improve the energy efficiency of alkaline electrolysis and make it a more competitive option for hydrogen production.
Keywords: electrolyser, interfacial pH, numerical simulation, optimization, asymmetric cell
Procedia PDF Downloads 70
216 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor under Liquefaction and Scour
Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg, Christian Windt
Abstract:
When a structure is installed on a seabed, the presence of the structure will influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, wave or current flow breaking, and pressure differentials around the seabed sediment. These changes allow the local seabed sediment to be carried off and result in scour (erosion), which is a threat to the structure's stability. In recent decades, research work and knowledge of scour on fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, whereas very limited research has been carried out on scour and liquefaction for gravity anchors, particularly for floating Tension Leg Platform (TLP) substructures. Due to its importance and the need to enhance knowledge of scour and liquefaction around marine structures, MarTERA funded a three-year (2020-2023) research program called NuLIMAS (Numerical Modeling of Liquefaction Around Marine Structures), a group consisting of European institutions (universities, laboratories, and consulting companies). The objective of this study is to build a numerical model that replicates reality, which helps to simulate (predict) underwater flow conditions and to study different marine scour and liquefaction situations. It helps to design a heavyweight anchor for the TLP substructure and to minimize the time and expenditure on experiments. The achieved results and the numerical model will also be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM. A conceptual design of a heavyweight anchor for the TLP substructure is developed by taking into consideration the available state-of-the-art knowledge on scour and liquefaction concepts and references to previous existing designs. These conceptual designs are validated with the available similar experimental benchmark data and also with the CFD numerical benchmark standards (CFD quality assurance study). A CFD optimization model/tool is designed to minimize the effect of fluid flow, scour, and liquefaction. A parameterized model is also developed to automate the calculation process and reduce user interactions. Parameters such as the anchor lowering process, flow-optimized outer contours, seabed interaction, and FSSI (fluid-structure-seabed interaction) are investigated and used to shape the model in order to build an optimized anchor.
Keywords: gravity anchor, liquefaction, scour, computational fluid dynamics
Procedia PDF Downloads 144
215 Market Segmentation of Cruise Ship Passengers: Implications for Marketing of Local Products and Services at Destination Points
Authors: Gunnar Oskarsson, Irena Georgsdottir
Abstract:
Tourism has been growing incredibly fast during the past years, including the cruise industry, which is gaining increasing popularity among various groups of travelers. It is a challenging task for companies serving cruise ship passengers with local products and services at the point of destination to reach them in due time with information about their offerings, as well as to learn how to adapt their offerings and messages to the type of customers arriving on each particular occasion. Although some research has been conducted in this sphere, there is still limited knowledge about many specifics within this sector of the tourist industry. The objective of this research is to examine one of these, with the main goal of studying the segmentation of cruise passengers and learning about marketing practices directed towards them. A qualitative research method, based on in-depth interviews, was used, as this provides an opportunity to gain insight into the participants’ perspectives. Interviews were conducted with 10 respondents from different companies in the tourist industry in Iceland, who interact with cruise passengers on a regular basis in their work environment. The main objective was to gain an understanding of what distinguishes different customer groups, or segments, in this industry, and of the marketing approaches directed towards them. The main findings reveal that participants note the strongest difference between cruise passengers of different nationalities, passengers coming on different ships (size and type), and passengers arriving at different times of the year. A drastic difference was noticed between nationalities in four main segments (American, British, other European, and Asian customers), although some of these segments could be divided into further sub-segments. Other important differentiating factors were the size and type of the ship, the quality or number of stars of the ship, and the time of year of travel. Companies serving cruise ship passengers, as well as the customers themselves, could benefit if the offerings of services were designed specifically for particular segments within the industry. Concerning marketing towards cruise passengers, the results indicate that it is carried out almost exclusively through the Internet, using a reliable website and search engine optimization, as well as by word-of-mouth. This research can assist practitioners by offering a deeper understanding of the approaches that may be effective in marketing local products and services to cruise ship passengers, based on their segmentation and by identifying effective ways to reach them. The research, furthermore, provides a valuable contribution to marketing knowledge for the benefit of an increasingly important market segment in a fast-growing tourist industry.
Keywords: capabilities, global integration, internationalisation, SMEs
Procedia PDF Downloads 401
214 Investigations on the Influence of Optimized Charge Air Cooling for a Diesel Passenger Car
Authors: Christian Doppler, Gernot Hirschl, Gerhard Zsiga
Abstract:
Starting from 2020, an EU-wide CO2 limit of 95 g/km is scheduled for the average of an OEM's passenger car fleet. Considering that, further optimization measures on the diesel cycle will be necessary in order to reduce fuel consumption and emissions while keeping performance values at least adequate. The present article deals with charge air cooling (CAC) on the basis of a diesel passenger car model in a 0D/1D working process calculation environment. The considered engine is a 2.4 litre EURO VI diesel engine with variable geometry turbocharger (VGT) and low-pressure exhaust gas recirculation (LP EGR). The object of study was the impact of charge air cooling on the engine working process at constant boundary conditions, which was investigated with an available and validated engine model in AVL BOOST. Part load was realized with constant power and NOx emissions, whereas full load was accomplished with a lambda control in order to obtain maximum engine performance. The informative results were used to implement a simulation model in Matlab/Simulink, which is further integrated into a full vehicle simulation environment via coupling with ICOS (Independent Co-Simulation Platform). Next, the dynamic engine behavior was validated and modified with load steps taken from the engine test bed. Due to the modular setup of the co-simulation, different CAC models have been simulated quickly with their different influences on the working process. In doing so, a new cooler variant does not need to be reproduced and implemented in the primary simulation model environment but is implemented quickly and easily as an independent component in the simulation entity. By coupling the engine model, the longitudinal dynamics vehicle model and different CAC models (air/air and water/air variants) in both steady-state and transient operational modes, statements are gained regarding fuel consumption, NOx emissions and power behavior. The fact that there is no longer a need for a complex engine model is very advantageous for the overall simulation volume. Besides the simulation with the mentioned demonstrator engine, several experimental investigations have also been conducted on the engine test bench. Here, the comparison of a standard CAC with an intake-manifold-integrated CAC was executed in particular. Simulative as well as experimental tests showed benefits for the water/air CAC variant (on the test bed, especially the intake-manifold-integrated variant). The benefits are illustrated by a reduced pressure loss and a gain in air efficiency and CAC efficiency, which all lead to minimized emissions and fuel consumption in stationary and transient operation.
Keywords: air/water-charge air cooler, co-simulation, diesel working process, EURO VI fuel consumption
Procedia PDF Downloads 270213 Study of the Removal Efficiency of Azo-Dyes Using Xanthan as Sequestering Agent
Authors: Cedillo Ortiz Cesar Isaac, Marañón-Ruiz Virginia-Francisca, Lozano-Alvarez Juan Antonio, Jáuregui-Rincón Juan, Roger Chiu Zarate
Abstract:
Introduction: The contamination of water with azo-dyes is a worldwide problem: although contaminated wastewater is treated in municipal sewage systems, it still contains a considerable amount of dyes. At present, there are different processes, denominated tertiary methods, by which it is possible to lower the dye concentration. One of these methods is adsorption onto various materials, which can be organic or inorganic. Xanthan is a biomaterial used as a removal agent to decrease the dye content of aqueous solutions. The Zimm-Bragg model described the experimental isotherms obtained when this biopolymer was used in the removal of textile dyes. Nevertheless, it was not established whether a correlation between dye structure and removal efficiency exists. In this sense, the principal objective of this report is to propose a qualitative relationship between the structure of three azo-dyes (Congo Red (CR), Methyl Red (MR) and Methyl Orange (MO)) and their removal efficiency from the aqueous environment when xanthan is used as a dye-sequestering agent. Methods: The dyes were subjected to different pH and ionic strength values to obtain the conditions of maximum dye removal. Afterward, these conditions were used to perform the adsorption isotherms, as reported in the previous study by our group. The Zimm-Bragg model was used to describe the experimental data, and the parameters of nucleation (Ku) and cooperativity (U) were obtained by optimization using the R statistical software. Spectra from UV-Visible (aqueous solution), infrared absorption and Raman spectroscopies (dry samples) were obtained from the biopolymer-dye complex. Results: The removal percentages with xanthan for each dye are as follows: CR, 99.98% at pH 12 and ionic strength 10.12; MR, 84.79% at pH 9.5 and ionic strength 43; and MO, 30% at pH 4 and ionic strength 72. It can be seen that when xanthan is used to remove the dyes, there is a lower dependence of removal efficiency on structure. This may be due to the different tendency of each dye to form aggregates. This aggregation capacity and the charge of each dye resulting from the pH and ionic strength values of the aqueous solutions are key factors in dye removal. The experimental isotherm of MR was the only one adequately described by the Zimm-Bragg model, because CR reached essentially 100% removal, which makes it very difficult to obtain the experimental isotherm, and the MO results fluctuated, so accurate data could not be obtained. Conclusions: The study of the removal of three dyes with xanthan as a dye-sequestering agent suggests that the aggregation capacity of the dyes and the charge resulting from structural characteristics such as molecular weight and functional groups are related to the removal efficiency. Acknowledgements: We gratefully acknowledge support for this project by Consejo Nacional de Ciencia y Tecnología, México (CONACyT, Grant No. 632694).Keywords: adsorption, azo dyes, xanthan gum, Zimm Bragg theory
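As an illustration of the optimization step described above (performed in the study with R), the sketch below fits nucleation and cooperativity parameters to an adsorption isotherm in Python. The bound-fraction expression is one commonly quoted Zimm-Bragg/Satake-Yang form for cooperative binding; its exact mapping onto the paper's Ku and U, as well as the isotherm data points, are assumptions made purely for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def zimm_bragg_theta(c_free, k_u, u):
    """Bound-site fraction vs. free dye concentration.

    One commonly quoted Zimm-Bragg / Satake-Yang form for cooperative
    binding, with s = k_u * u * c; the identification of (k_u, u) with the
    paper's (Ku, U) is an assumption."""
    s = k_u * u * c_free
    return 0.5 * (1.0 + (s - 1.0) / np.sqrt((1.0 - s) ** 2 + 4.0 * s / u))

# Illustrative (not measured) isotherm: free dye concentration and the
# fraction of polymer binding sites occupied.
c_data = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.2, 1.6])
theta_data = np.array([0.04, 0.10, 0.28, 0.62, 0.88, 0.94, 0.97])

popt, _ = curve_fit(zimm_bragg_theta, c_data, theta_data, p0=[1.0, 10.0],
                    bounds=([1e-6, 1e-6], [np.inf, np.inf]))
print("nucleation Ku =", popt[0], " cooperativity U =", popt[1])
```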
Procedia PDF Downloads 281212 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model
Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis
Abstract:
Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field that is required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors that influence the efficiency of magnetic nanoparticles in biomedical magnetic-driving applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main static magnetic field as well as the magnetic gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces between the aggregated nanoparticles and the wall and the Stokes drag force on each particle are considered, while only spherical particles are used in this study. In addition, the gravitational force and the force due to buoyancy are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion. To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used in order to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles when they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80-90%. On the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.Keywords: artery, drug, nanoparticles, navigation
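The outer optimization loop described above can be sketched with the cma package, which implements CMA-ES. In the sketch below, mean_distance_to_trajectory is a hypothetical stand-in for one OpenFOAM particle-tracking run (a simple quadratic surrogate so the example runs); the bounds, iteration limit and three-component gradient-field parameterization are illustrative assumptions.

```python
import numpy as np
import cma  # pip install cma

def mean_distance_to_trajectory(gradients):
    """Stand-in for one OpenFOAM particle run: given three gradient-field
    components [T/m], return the mean particle distance from the desired
    trajectory. A quadratic surrogate is used here so the example runs;
    the real cost would launch the CFD simulation and post-process the
    particle positions."""
    target = np.array([0.12, -0.05, 0.08])      # assumed 'best' gradients
    return float(np.sum((np.asarray(gradients) - target) ** 2))

# CMA-ES searches the gradient components within assumed bounds of +/-0.5 T/m.
es = cma.CMAEvolutionStrategy([0.0, 0.0, 0.0], 0.1,
                              {'bounds': [-0.5, 0.5], 'maxiter': 50})
while not es.stop():
    candidates = es.ask()                            # gradient settings to try
    costs = [mean_distance_to_trajectory(c) for c in candidates]
    es.tell(candidates, costs)                       # update search distribution
print("best gradient set:", es.result.xbest)
```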
Procedia PDF Downloads 107211 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.Keywords: electric motor, fault detection, frequency features, temporal features
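A minimal sketch of the feature-extraction and classification pipeline described above is given below, using NumPy/SciPy for temporal features (RMS, crest factor, kurtosis) and frequency features (band energies), and scikit-learn for the SVM. The sampling rate, band boundaries and the synthetic vibration signals are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FS = 12_000                       # assumed sampling rate [Hz]
rng = np.random.default_rng(0)

def extract_features(segment):
    """Temporal + frequency features for one 1-second vibration segment."""
    rms = np.sqrt(np.mean(segment ** 2))
    crest = np.max(np.abs(segment)) / rms
    kurt = kurtosis(segment)
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / FS)
    bands = [(0, 1000), (1000, 3000), (3000, 6000)]   # illustrative bands
    band_energy = [np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2)
                   for lo, hi in bands]
    return np.array([rms, crest, kurt, *band_energy])

def make_segment(faulty):
    """Synthetic signal: broadband noise, plus an impulse train for faults."""
    x = rng.normal(0.0, 1.0, FS)
    if faulty:
        x[::200] += 8.0
    return x

labels = np.array([0] * 100 + [1] * 100)
X = np.array([extract_features(make_segment(bool(y))) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```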
Procedia PDF Downloads 51210 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST was classified more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed to compare colored images and determine how much better they are than classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into grayscale 28 × 28-pixel images, and 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, we note that the QCNN approach is ~12% more effective than the traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
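A minimal PennyLane sketch of the encode-rotate-measure step described above is shown below: a 2x2 grayscale patch is angle-encoded, one parameterized layer with entangling gates is applied, and Pauli-Z expectation values are read out on the default.qubit simulator. The circuit layout, patch size and weights are illustrative assumptions and not the authors' exact QCNN.

```python
import numpy as np
import pennylane as qml

n_qubits = 4                                  # one qubit per pixel of a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_patch(patch, weights):
    """Angle-encode a 2x2 grayscale patch, apply one parameterized layer
    with entangling gates, and read out Pauli-Z expectation values."""
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)        # pixel in [0, 1] -> rotation angle
    for i in range(n_qubits):
        qml.RZ(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def quanvolve(image, weights):
    """Slide the 2x2 quantum filter over a grayscale image with stride 2,
    producing n_qubits feature maps for a classical classification head."""
    h, w = image.shape
    out = np.zeros((h // 2, w // 2, n_qubits))
    for r in range(0, h - 1, 2):
        for c in range(0, w - 1, 2):
            patch = image[r:r + 2, c:c + 2].reshape(4)
            out[r // 2, c // 2] = np.array(quantum_patch(patch, weights), dtype=float)
    return out

rng = np.random.default_rng(0)
img = rng.random((28, 28))                    # stand-in for one pre-processed image
features = quanvolve(img, weights=rng.uniform(0, 2 * np.pi, n_qubits))
print(features.shape)                         # (14, 14, 4)
```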
Procedia PDF Downloads 130209 Influence of Confinement on Phase Behavior in Unconventional Gas Condensate Reservoirs
Authors: Szymon Kuczynski
Abstract:
Poland is characterized by the presence of numerous sedimentary basins and hydrocarbon provinces. Since 2006, exploration for hydrocarbons in Poland has gradually become more focused on new unconventional targets, particularly on the shale gas potential of the Upper Ordovician and Lower Silurian in the Baltic-Podlasie-Lublin Basin. The first forecast, prepared by the US Energy Information Administration in 2011, indicated 5.3 Tcm of natural gas. In 2012, the Polish Geological Institute presented its own forecast, which estimated maximum reserves at 1.92 Tcm. The difference in the estimates was caused by problems with calculating the initial amount of adsorbed, as well as free, gas trapped in shale rocks (GIIP - Gas Initially in Place). This value depends on sorption capacity, gas saturation, and mutual interactions between gas, water, and rock. Determination of the reservoir type in the initial exploration phase brings essential knowledge, which has an impact on decisions related to production. Studying the impact of porosity on the phase envelope shift eliminates errors and improves production profitability. The confinement phenomenon affects flow characteristics, fluid properties, and phase equilibrium. The thermodynamic behavior of confined fluids in porous media is subject to the basic considerations for industrial applications such as hydrocarbon production. In particular, knowledge of the phase equilibrium and the critical properties of the confined fluid is essential for the design and optimization of such processes. In pores with a small diameter (nanopores), the effect of the wall interaction with the fluid particles becomes significant; this occurs in shale formations. The nanopore size is similar to the diameter of the fluid particles, and the region in which particles flow without interacting with the pore wall is almost equal in extent to the region where this interaction occurs. Molecular simulation studies have shown an effect of confinement on the pseudo-critical properties. Therefore, at the nanoscale, the critical parameters, pressure and temperature, and the flow characteristics of hydrocarbons are strongly influenced by the interaction of fluid particles with the pore wall. It can be concluded that the impact of a single pore size is crucial at the nanoscale because the above-described effect can occur. Nanoporosity makes it difficult to predict the flow of reservoir fluid. Research is being conducted to explain the mechanisms of fluid flow in nanopores and gas extraction from porous media by desorption.Keywords: adsorption, capillary condensation, phase envelope, nanopores, unconventional natural gas
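To illustrate how confinement-shifted critical properties would propagate into a phase-behavior calculation, the sketch below applies a pore-size-dependent shift of the Zarragoicoechea-Kuz type to the critical temperature and pressure and feeds the result into the standard Peng-Robinson a and b parameters. The shift correlation, its coefficients, the use of the same relative shift for both critical properties, and the methane example values are assumptions for illustration, not results from the study.

```python
import math

R = 8.314  # J/(mol K)

def confined_criticals(tc_bulk, pc_bulk, sigma_lj, r_pore):
    """Shift bulk critical properties for a cylindrical nanopore.

    Correlation of the Zarragoicoechea-Kuz type; the coefficients and the
    identical relative shift for Tc and Pc are illustrative assumptions."""
    x = sigma_lj / r_pore
    shift = 0.9409 * x - 0.2415 * x ** 2
    return tc_bulk * (1.0 - shift), pc_bulk * (1.0 - shift)

def peng_robinson_ab(tc, pc, omega, t):
    """Standard Peng-Robinson a(T) and b parameters."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(t / tc))) ** 2
    a = 0.45724 * (R * tc) ** 2 / pc * alpha
    b = 0.07780 * R * tc / pc
    return a, b

# Methane example: bulk vs. a 2 nm pore radius (sigma_LJ ~ 0.38 nm assumed).
tc, pc, omega = 190.6, 4.599e6, 0.011
tc_c, pc_c = confined_criticals(tc, pc, sigma_lj=0.38e-9, r_pore=2.0e-9)
print("bulk a, b:", peng_robinson_ab(tc, pc, omega, t=300.0))
print("pore a, b:", peng_robinson_ab(tc_c, pc_c, omega, t=300.0))
```

The reduced attraction and co-volume parameters obtained for the confined fluid would then shift the computed phase envelope relative to the bulk case, which is the effect the abstract describes qualitatively.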
Procedia PDF Downloads 339208 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE - its melt flow behavior - is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially taking into consideration that the applied multi-scale modelling approach does not involve parameter fitting of the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
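The microstructure step can be illustrated with a toy kinetic Monte Carlo that grows molecules unit by unit and inserts long-chain branches with an assumed per-monomer probability, as in the sketch below. This is not the authors' hybrid Monte Carlo model; the probabilities and the branching mechanism are deliberately simplified assumptions, meant only to show how branched-architecture statistics for a subsequent branch-on-branch rheology calculation can be sampled.

```python
import random
random.seed(1)

def grow_molecule(p_lcb=2e-4, p_term=1e-3, max_units=200_000):
    """Grow one molecule unit by unit.

    p_lcb  : assumed probability per added monomer of starting a long-chain branch
    p_term : assumed probability per added monomer that a growing arm terminates
    Returns (monomer units, long-chain branches)."""
    units, branches, active_ends = 0, 0, 1
    while active_ends > 0 and units < max_units:
        units += 1
        r = random.random()
        if r < p_lcb:
            branches += 1
            active_ends += 1          # branch point spawns a new growing arm
        elif r < p_lcb + p_term:
            active_ends -= 1          # one arm terminates
    return units, branches

# Sample an ensemble and report simple microstructure statistics.
sample = [grow_molecule() for _ in range(2000)]
mean_len = sum(u for u, _ in sample) / len(sample)
lcb_per_1000c = 1000 * sum(b for _, b in sample) / (2 * sum(u for u, _ in sample))
print(f"number-average chain length: {mean_len:.0f} monomer units")
print(f"long-chain branches per 1000 C: {lcb_per_1000c:.3f}")
```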
Procedia PDF Downloads 125207 Cellulolytic and Xylanolytic Enzymes from Mycelial Fungi
Authors: T. Sadunishvili, L. Kutateladze, T. Urushadze, R. Khvedelidze, N. Zakariashvili, M. Jobava, G. Kvesitadze
Abstract:
Multiple repeated soil-climatic zones in Georgia determine the diversity of microorganisms. Hundreds of microscopic fungi of different genera have been isolated from different ecological niches, including some extreme environments. The biosynthetic ability of these microscopic fungi has been studied. Trichoderma reesei, a representative of the Ascomycetes, secretes cellulolytic and xylanolytic enzymes that act in synergy to hydrolyze polysaccharide polymers to glucose, xylose and arabinose, which can be fermented to biofuels. Other mesophilic strains producing cellulases are Allesheria terrestris, Chaetomium thermophile, Fusarium oxysporum, Piptoporus betulinus, Penicillium echinulatum, P. purpurogenum, Aspergillus niger, A. wentii, A. versicolor, A. fumigatus, etc. In the majority of cases, the cellulases produced by strains of the genus Aspergillus usually have high β-glucosidase activity and average endoglucanase levels (with some exceptions), whereas strains representing Trichoderma have high endo-enzyme and low β-glucosidase activity and hence limited efficiency in cellulose hydrolysis. Six producers of stable cellulases and xylanases from mesophilic and thermophilic fungi have been selected. By optimization of submerged cultivation conditions, high cellulase and xylanase activities were obtained. For enzyme purification, precipitation by organic solvents such as ethyl alcohol, acetone and isopropanol, and by ammonium sulphate at different ratios, was carried out. The best results were obtained with precipitation by ethyl alcohol (1:3.5) and by ammonium sulphate. The enzyme yields according to cellulase activities were 80-85% in both cases. The cellulase activity of the enzyme preparation obtained from the strain Trichoderma viride X 33 is 126 U/g, from the strain Penicillium canescence D 85 it is 185 U/g, and from the strain Sporotrichum pulverulentum T 5-0 it is 110 U/g. The cellulase activity of the enzyme preparation obtained from the strain Aspergillus sp. Av10 is 120 U/g, the xylanase activity of the enzyme preparation obtained from the strain Aspergillus niger A 7-5 is 1155 U/g, and from the strain Aspergillus niger Aj 38 it is 1250 U/g. The optimum pH, operating temperature, and thermostability of the enzyme preparations were established. The efficiency of hydrolysis of different agricultural residues by the microscopic fungal cellulases has been studied. The glucose yield from the residues as a result of enzymatic hydrolysis is largely determined by the ratio of enzyme to substrate, pH, temperature, and duration of the process. Hydrolysis efficiency was significantly increased as a result of pretreatment of the residues by different methods. Acknowledgement: The study was supported by the ISTC project G-2117, funded by Korea.Keywords: cellulase, xylanase, microscopic fungi, enzymatic hydrolysis
Procedia PDF Downloads 393206 Optimization of Heat Source Assisted Combustion on Solid Rocket Motors
Authors: Minal Jain, Vinayak Malhotra
Abstract:
Solid propellant ignition consists of rapid and complex events comprising heat generation and heat transfer, with spreading of flames over the entire burning surface area. Proper combustion, and thus propulsion, depends heavily on the modes of heat transfer and on the cavity volume. Fire safety is an integral component of a successful rocket flight, the failure of which may lead to overall failure of the rocket. This leads to an enormous loss of resources, viz., money, time, and labor. When the propellant is ignited, thrust is generated and the casing heats up. This heat adds to the propellant heat, and the casing, if not at the proper orientation, starts burning as well, leading to the whole rocket being completely destroyed. This has necessitated active research efforts emphasizing a comprehensive study of the inter-energy relations involved, for effective utilization of solid rocket motors in better space missions. The present work is focused on one of the major influential aspects of this detrimental burning, which is the presence of an external heat source in addition to a potential heat source that is already ignited. The study is motivated by the need to ensure better combustion and fire safety, and is presented experimentally as a simplified small-scale model of a rocket carrying a solid propellant inside a cavity. The experimental setup comprises a paraffin wax candle as the pilot fuel and an incense stick as the external heat source. The candle is fixed, and the position and location of the incense stick are varied to investigate the influence of the external heat source. Different configurations of the external heat source at varying separation distances are tested. Regression rates of the pilot thin solid fuel are recorded to fundamentally understand the non-linear heat and mass transfer, which is the governing phenomenon. An attempt is made to understand the phenomenon fundamentally and the mechanism governing it. Results so far indicate non-linear heat transfer accompanied by the occurrence of flaming transition at selected critical distances. With an increase in separation distance, the effect is noted to drop in a non-monotonic trend. The results of the parametric study are likely to provide useful physical insight into the governing physics and to be useful in proper testing, validation, material selection, and design of solid rocket motors with enhanced safety.Keywords: combustion, propellant, regression, safety
Procedia PDF Downloads 162205 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches
Authors: Guerich Mohamed, Assaf Samir
Abstract:
The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percentages of coverage, are studied. The results show that the proposed approach can be used as an effective tool to study the influence of the location and treatment size on the natural frequencies and the associated modal loss factors. Then, a parametric study regarding the variation in the damping characteristics of partially covered beams has been conducted. In this study, the effects of the core shear modulus, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches are considered. In partial coverage, the spatial distribution of the added viscoelastic damping material is as important as the thickness and material properties of the viscoelastic layer and the constraining layer. Indeed, to limit the added mass and to attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator. Following this approach, the damping patches are applied over regions of the base structure with the highest modal strain energy in order to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over regions of high energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best locations for the damping patches as well as the thicknesses and material properties of the layers that will yield optimal damping with the minimum area of coverage.Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam
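The proposed energy-based indicator can be sketched as follows: per-element energies of the fully covered beam (which would come from the sandwich finite element model) are ranked, the highest-dissipation elements are selected up to the allowed coverage, and a rough modal-strain-energy estimate of the resulting loss factor is returned. The element energies, viscoelastic loss factor and coverage fraction below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rank_patch_regions(core_energy, total_energy, eta_v, coverage_fraction):
    """Rank candidate regions for damping patches for one mode.

    core_energy  : per-element strain energy stored in the viscoelastic core
                   of the fully covered beam (from the FE model)
    total_energy : per-element total modal strain energy
    eta_v        : loss factor of the viscoelastic material
    Returns the element indices to cover (best first) and a rough modal
    strain energy estimate of the resulting loss factor."""
    core_energy = np.asarray(core_energy, dtype=float)
    total_energy = np.asarray(total_energy, dtype=float)
    order = np.argsort(core_energy)[::-1]            # highest dissipation first
    n_cover = max(1, int(coverage_fraction * core_energy.size))
    chosen = order[:n_cover]
    # MSE-type estimate: eta_r ~ eta_v * U_core(covered) / U_total
    eta_r = eta_v * core_energy[chosen].sum() / total_energy.sum()
    return chosen, eta_r

# Illustrative element energies for one bending mode (not from the paper).
u_core = np.array([0.02, 0.10, 0.35, 0.30, 0.12, 0.05, 0.04, 0.02])
u_tot = np.array([0.50, 0.90, 1.40, 1.30, 0.95, 0.60, 0.55, 0.40])
patches, eta = rank_patch_regions(u_core, u_tot, eta_v=0.8, coverage_fraction=0.4)
print("cover elements:", patches, " estimated modal loss factor:", round(eta, 3))
```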
Procedia PDF Downloads 148204 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example
Authors: Masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen
Abstract:
Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used for the simulation of a membrane bioreactor (MBR), as this system integrates biological wastewater treatment with physical separation by membrane filtration. In this study, the MBR, with a useful volume of 12.5 L, was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h and a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% COD removal. Moreover, the MBR generated a treated effluent that complies with the limits for discharge into the public sewer according to the Tunisian standards set in March 2018. In fact, for the nitrogenous compounds, the average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg.L-1 and 2.2±2.53 mg.L-1, respectively. The simulation of the MBR process was performed using SIMBA software v 5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration was performed using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+ and N-NOx were compared with those obtained from the experiment. A good prediction was observed for the COD, N-NH4+ and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³ and 3.2 g N/m³ compared to the experimental data, which were 436.4 g COD/m³, 114.7 g N/m³ and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the results of the experiments obtained during the second treatment phase of 30 days were used. It was demonstrated that the model simulated the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, an underestimation of the N-NH4+ concentration was observed during the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems. In addition, more treatment time could be required by the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved for the simulation of nutrient removal over a longer treatment period.Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughter wastewater (PSWW), reuse
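The sketch below is not ASM3h, which carries many more state variables, but a reduced Monod-type substrate/biomass balance for a completely mixed, continuously fed MBR with full biomass retention, integrated with SciPy. It only illustrates the kind of mass balances the calibrated model solves; the influent COD split, kinetic constants and initial biomass are assumptions, not values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Operating data loosely following the abstract (V = 12.5 L, Q = 2 L/h, 50 d).
V, Q = 12.5, 2.0                       # reactor volume [L], feed rate [L/h]
S_in_bio, S_in_inert = 780.0, 120.0    # influent COD split [g/m3] -- assumed
mu_max, K_s = 0.25, 60.0               # Monod kinetics [1/h, g COD/m3] -- assumed
Y, b = 0.6, 0.01                       # yield [g X / g COD], decay [1/h] -- assumed

def mbr_balances(t, y):
    S, X = y                                   # biodegradable soluble COD, biomass
    mu = mu_max * S / (K_s + S)
    dS = Q / V * (S_in_bio - S) - mu * X / Y   # membrane passes S with the permeate
    dX = (mu - b) * X                          # full biomass retention, no wasting
    return [dS, dX]

sol = solve_ivp(mbr_balances, (0.0, 50 * 24), [S_in_bio, 500.0],
                method="BDF", t_eval=np.linspace(0, 50 * 24, 200))
S_out = sol.y[0, -1] + S_in_inert              # inert COD passes through unchanged
removal = 1.0 - S_out / (S_in_bio + S_in_inert)
print(f"permeate COD after 50 d: {S_out:.0f} g/m3, removal: {100 * removal:.1f} %")
```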
Procedia PDF Downloads 60