Search results for: numerical weather prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6234

864 The Study of the Correlation of Future-Oriented Thinking and Retirement Planning: The Analysis of Two Professions

Authors: Ya-Hui Lee, Ching-Yi Lu, Chien Hung, Hsieh

Abstract:

The purpose of this study is to explore the differences between state-owned-enterprise employees and civil servants regarding their future-oriented thinking and retirement planning. The researchers surveyed 687 middle-aged and older adults (345 state-owned-enterprise employees and 342 civil servants) to understand the correlation between future-oriented thinking and retirement planning, and the predictive power of the former. The findings of this study are: 1. There are significant differences between the two professions in future-oriented thinking but not in retirement planning; civil servants score higher overall on future-oriented thinking than state-owned-enterprise employees. 2. Among civil servants, there are significant age differences in both future-oriented thinking and retirement planning: scores at ages 55 and above are higher than at ages 45 or under. For state-owned-enterprise employees, no significant age difference was found in future-oriented thinking, but there was one in retirement planning, with scores higher at ages 55 and above than at other ages. 3. With regard to education, there is no correlation with future-oriented thinking or retirement planning for civil servants. For state-owned-enterprise employees, however, level of education directly affects future-oriented thinking: those with a master's degree or above show greater future-oriented thinking than those with other educational degrees. As for retirement planning, there is no correlation. 4. Self-assessed economic status significantly affects the future-oriented thinking and retirement planning of both civil servants and state-owned-enterprise employees: those who assess themselves as more affluent are more inclined toward future-oriented thinking and retirement planning. 5. For civil servants, monthly income is significantly related to retirement planning but not to future-oriented thinking. For state-owned-enterprise employees, monthly income is significantly related to both retirement planning and future-oriented thinking: those with higher monthly incomes (1,960 euros and above) show greater future-oriented thinking and retirement planning than those with lower monthly incomes (1,469 euros and below). 6. For middle-aged and older adults of both professions, future-oriented thinking and retirement planning are positively correlated, and stepwise multiple regression analysis indicates that future-oriented thinking positively predicts retirement planning. The authors present these findings as references for state-owned enterprises, public authorities, and older adult educational program design in Taiwan.

Keywords: state-owned-enterprise employees, civil servants, future-oriented thinking, retirement planning

Procedia PDF Downloads 366
863 Downregulation of Epidermal Growth Factor Receptor in Advanced Stage Laryngeal Squamous Cell Carcinoma

Authors: Sarocha Vivatvakin, Thanaporn Ratchataswan, Thiratest Leesutipornchai, Komkrit Ruangritchankul, Somboon Keelawat, Virachai Kerekhanjanarong, Patnarin Mahattanasakul, Saknan Bongsebandhu-Phubhakdi

Abstract:

Much attention has been drawn to molecular biomarkers that may have the potential to predict the progression of cancer. Epidermal growth factor receptor (EGFR) is the classic member of the ErbB family of membrane-associated intrinsic tyrosine kinase receptors. EGFR expression is found in several organs throughout the body, as its roles involve the regulation of cell proliferation, survival, and differentiation under normal physiologic conditions. However, anomalous expression, whether over- or under-expression, is believed to be the underlying mechanism of pathologic conditions, including carcinogenesis. Even though the role of EGFR as a prognostic tool in head and neck cancer has been discussed extensively, no consensus has yet been reached. The aims of the present study are to assess the correlation between the level of EGFR expression and demographic data as well as clinicopathological features, to evaluate the ability of EGFR to serve as a reliable prognostic marker, and to investigate the probable pathophysiology that explains the findings. This retrospective study included 30 squamous cell laryngeal carcinoma patients treated at King Chulalongkorn Memorial Hospital from January 1, 2000, to December 31, 2004. EGFR expression was observed to be significantly downregulated with the progression of laryngeal cancer stage (one-way ANOVA, p = 0.001), and a statistically significantly lower EGFR expression in the late stage of the disease compared to the early stage was recorded (unpaired t-test, p = 0.041). EGFR overexpression also showed a tendency toward increased recurrence of cancer (unpaired t-test, p = 0.128). A significant downregulation of EGFR expression was documented in advanced-stage laryngeal cancer. The results indicate that EGFR level correlates with prognosis in terms of stage progression. Thus, EGFR expression might be used as a biomarker for prognostic prediction in laryngeal squamous cell carcinoma.
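
As an illustrative note, the two comparisons named above can be reproduced with standard tools; the sketch below uses hypothetical expression-score arrays in place of the study's data.

```python
# Sketch of the statistical tests named in the abstract: a one-way ANOVA
# across stages and an unpaired t-test of early vs. late stage. The score
# arrays are hypothetical placeholders, not data from the study.
from scipy import stats

egfr_by_stage = {
    "I":   [72, 65, 80, 77],    # hypothetical IHC expression scores
    "II":  [60, 58, 66, 71],
    "III": [45, 50, 41, 48],
    "IV":  [30, 35, 28, 33],
}

# One-way ANOVA across all four stages
f_stat, p_anova = stats.f_oneway(*egfr_by_stage.values())

# Unpaired (independent) t-test: early (I-II) vs. late (III-IV) stage
early = egfr_by_stage["I"] + egfr_by_stage["II"]
late = egfr_by_stage["III"] + egfr_by_stage["IV"]
t_stat, p_ttest = stats.ttest_ind(early, late)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"t-test (early vs. late): t={t_stat:.2f}, p={p_ttest:.4f}")
```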

Keywords: downregulation, epidermal growth factor receptor, immunohistochemistry, laryngeal squamous cell carcinoma

Procedia PDF Downloads 108
862 Modeling Operating Theater Scheduling and Configuration: An Integrated Model in Health-Care Logistics

Authors: Sina Keyhanian, Abbas Ahmadi, Behrooz Karimi

Abstract:

We present a multi-objective binary programming model that simultaneously considers the scheduling of surgical cases among operating rooms and the configuration of surgical instruments in limited-capacity hospital trays. Many mathematical models have been developed previously in the literature to address different challenges in health-care logistics, such as assigning operating rooms and leveling beds, but what happens inside the operating rooms, the inventory management of the instruments required for various operations, and their integration with surgical scheduling have been poorly discussed. Our model minimizes the movements between trays during a surgery, which recalls the famous cell formation problem in group technology; this assumption can also provide a major potential contribution to robotic surgeries. The tray configuration problem, which takes as inputs the surgical instrument requirement plan (SIRP) and the sequence of surgical procedures based on required instruments (SIRO), is nested inside the bin packing problem. This modeling approach helps us understand that solutions with the same objective value will not necessarily be identical when it comes to the rearrangement of surgeries among rooms. A numerical example is solved via a proposed nested simulated annealing (SA) optimization approach, which provides insights into how various configurations inside a solution can alter the optimal condition.
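
A minimal sketch of a nested simulated-annealing structure of the kind described (an outer loop over room schedules, an inner loop over tray configurations) is given below; all function names and the cost functions are illustrative placeholders, not the authors' code.

```python
# Nested SA sketch: the outer level perturbs the surgery-to-room schedule,
# the inner level perturbs the tray configuration for that schedule.
import math
import random

def anneal(initial, neighbor, cost, t0=1.0, cooling=0.95, steps=200):
    """Generic simulated-annealing loop (minimization)."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        candidate_cost = cost(candidate)
        if (candidate_cost < current_cost
                or random.random() < math.exp((current_cost - candidate_cost) / t)):
            current, current_cost = candidate, candidate_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        t *= cooling
    return best

def nested_sa(initial_schedule, schedule_neighbor, schedule_cost,
              initial_trays, tray_neighbor, tray_cost):
    """Outer SA over schedules; each evaluation runs an inner SA over trays."""
    def total_cost(schedule):
        trays = anneal(initial_trays, tray_neighbor,
                       lambda tr: tray_cost(tr, schedule), steps=50)
        # Combined objective: scheduling cost plus inter-tray movements
        return schedule_cost(schedule) + tray_cost(trays, schedule)
    return anneal(initial_schedule, schedule_neighbor, total_cost)
```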

Keywords: health-care logistics, hospital tray configuration, off-line bin packing, simulated annealing optimization, surgical case scheduling

Procedia PDF Downloads 281
861 Lattice Boltzmann Simulation of Fluid Flow and Heat Transfer Through Porous Media by Means of Pore-Scale Approach: Effect of Obstacles Size and Arrangement on Tortuosity and Heat Transfer for a Porosity Degree

Authors: Annunziata D’Orazio, Arash Karimipour, Iman Moradi

Abstract:

The size and arrangement of the obstacles in a porous medium have an influential effect on the fluid flow and heat transfer, even at the same porosity. To examine this, in the present study several different numbers of obstacles, in both regular and staggered arrangements at the same porosity, have been simulated in a channel, in order to compare the effects of the arrangement and of the number of obstacles on fluid flow and heat transfer. The Single Relaxation Time Lattice Boltzmann Method, with the Bhatnagar-Gross-Krook (BGK) approximation and the D2Q9 model, is implemented for the numerical simulation, and the temperature field is modeled through a Double Distribution Function (DDF) approach. Results are presented in terms of velocity and temperature fields, streamlines, percentage of pressure drop, and Nusselt number of the obstacle walls, and a correlation between tortuosity and the Nusselt number of the obstacle walls is proposed for both regular and staggered arrangements. The results show that increasing the number of obstacles, as well as changing their arrangement from regular to staggered at the same porosity, increases both the tortuosity and the Nusselt number of the obstacle walls.
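
For readers unfamiliar with the method, the following is a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann sketch of the collision-streaming cycle; the obstacle treatment, the double-distribution-function thermal model, and the boundary conditions used in the study are omitted, and all parameters are illustrative.

```python
# Minimal D2Q9 BGK lattice Boltzmann cycle: compute moments, collide, stream.
import numpy as np

nx, ny, tau = 100, 50, 0.6
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

f = np.ones((9, nx, ny)) * w[:, None, None]   # initial distributions

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None]*ux + c[:, 1, None, None]*uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5*cu**2 - usq)

for step in range(1000):
    rho = f.sum(axis=0)                                 # density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho    # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau           # BGK collision
    for i in range(9):                                  # streaming step
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
```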

Keywords: lattice Boltzmann method, heat transfer, porous media, pore-scale, porosity, tortuosity

Procedia PDF Downloads 84
860 Predictive Relationship between Motivation Strategies and Musical Creativity of Secondary School Music Students

Authors: Lucy Lugo Mawang

Abstract:

Educational psychologists have highlighted the significance of creativity in education. Likewise, a fundamental objective of music education concerns the development of students' musical creativity potential. The purpose of this study was to determine the relationship between motivation strategies and musical creativity, and to establish the equation for predicting musical creativity. The study used purposive sampling and a census to select 201 fourth-form music students (139 females, 62 males), mainly from public secondary schools in Kenya. The mean age of the participants was 17.24 years (SD = .78). Framed upon self-determination theory and the dichotomous model of achievement motivation, the study adopted an ex post facto research design. A self-report measure, the Achievement Goal Questionnaire-Revised (AGQ-R), was used to collect data for the independent variable. Musical creativity was based on a creative music composition task and measured by the Consensual Musical Creativity Assessment Scale (CMCAS). Data were collected in two separate sessions within an interval of one month: the questionnaire was administered in the first session, lasting approximately 20 minutes, and the second session was used for notation of the participants' creative compositions. The results indicated a positive correlation r(199) = .39, p < .01 between musical creativity and intrinsic music motivation. Conversely, a negative correlation r(199) = -.19, p < .01 was observed between musical creativity and extrinsic music motivation. The equation for predicting musical creativity from music motivation strategies was significant, F(2, 198) = 20.8, p < .01, with R² = .17; motivation strategies accounted for approximately 17% of the variance in participants' musical creativity. Intrinsic music motivation had the highest significant predictive value (β = .38, p < .01) for musical creativity. In an exploratory analysis, a significant mean difference t(118) = 4.59, p < .01 in musical creativity between intrinsically and extrinsically motivated participants was observed, in favour of the intrinsically motivated participants. Further, a significant gender difference t(93.47) = 4.31, p < .01 in musical creativity was observed, with male participants scoring higher than females; however, there was no significant difference in participants' musical creativity based on age. The study recommends that music educators strive to enhance intrinsic music motivation among students. Specifically, schools should create conducive environments and provide interventions for the development of intrinsic music motivation, since it is the motivation strategy most facilitative of musical creativity.
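
As an illustrative sketch of the reported prediction equation, the regression below uses hypothetical score arrays; only the signs and rough sizes of the coefficients echo the study's β values.

```python
# Regression of creativity on intrinsic and extrinsic motivation scores.
# The data are synthetic placeholders, not the study's sample.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
intrinsic = rng.normal(3.5, 0.6, 201)    # hypothetical scale scores, n = 201
extrinsic = rng.normal(3.0, 0.7, 201)
creativity = 0.38*intrinsic - 0.19*extrinsic + rng.normal(0, 0.5, 201)

X = np.column_stack([intrinsic, extrinsic])
model = LinearRegression().fit(X, creativity)
print("coefficients:", model.coef_, "R^2:", model.score(X, creativity))
```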

Keywords: extrinsic music motivation, intrinsic music motivation, musical creativity, music composition

Procedia PDF Downloads 153
859 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under a heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study with fully dynamic rupture modeling, and a set of spontaneous source models was then generated over a large magnitude range (Mw > 7.0). To validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave, and the slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with GMPE values, and we obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed the parameters that are critical for ground motion simulations, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied the cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on the outer edge of, the large-slip areas; (2) ruptures have a tendency to initiate in small-Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
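
For reference, a standard form of the rate-and-state friction law (the Dieterich-Ruina form with the aging law) is shown below; in the study, heterogeneity enters through spatial variation of the weakening distance Dc. The notation is the standard one, not necessarily that of the paper.

```latex
% Dieterich-Ruina rate-and-state friction with the aging state evolution law
\mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```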

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 143
858 Comparative Fragility Analysis of Shallow Tunnels Subjected to Seismic and Blast Loads

Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed

Abstract:

Underground structures are crucial components that require detailed analysis and design. Tunnels, for instance, are massively constructed as transportation infrastructure and utility networks, especially in urban environments. Considering their prime importance to the economy and to public safety, any instability in these tunnels will be highly detrimental to their performance. Recent experience suggests that tunnels become vulnerable during earthquake and blast scenarios, yet only a very limited number of studies have been carried out to understand the dynamic response and performance of underground tunnels under such unpredictable extreme hazards. In view of the importance of enhancing the resilience of these structures, the overall aim of the study is to evaluate the probabilistic future performance of shallow tunnels subjected to seismic and blast loads by developing a detailed fragility analysis. Critical non-linear time-history numerical analyses were performed using the finite element software Midas GTS NX, taking into consideration structural typology, ground motion and explosive characteristics, the effect of soil conditions, and other associated uncertainties on the tunnel integrity, which may ultimately lead to catastrophic failure of the structures. The proposed fragility curves for both extreme loadings are discussed and compared, providing significant information on the performance of the tunnel under extreme hazards that may be beneficial for future risk assessment and loss estimation.
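
As an illustration of the fragility-curve form commonly used in such studies (the abstract does not specify the functional form adopted), a lognormal fragility function can be sketched as follows, with placeholder median and dispersion values.

```python
# Lognormal fragility curve: probability of exceeding a damage state given
# an intensity measure (IM). Theta (median) and beta (dispersion) are
# illustrative placeholders, not values from the study.
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """P(damage state exceeded | IM = im), lognormal form."""
    return norm.cdf(np.log(im / theta) / beta)

pga = np.linspace(0.05, 2.0, 50)               # e.g. peak ground acceleration (g)
p_seismic = fragility(pga, theta=0.8, beta=0.5)
p_blast = fragility(pga, theta=0.5, beta=0.4)  # hypothetical blast-equivalent IM
```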

Keywords: fragility analysis, seismic loads, shallow tunnels, blast loads

Procedia PDF Downloads 343
857 Large Eddy Simulations for Flow Blurring Twin-Fluid Atomization Concept Using Volume of Fluid Method

Authors: Raju Murugan, Pankaj S. Kolhe

Abstract:

The present study focuses on the numerical simulation of the Flow Blurring (FB) twin-fluid injection concept proposed by Gañan-Calvo, which involves back-flow atomization based on global bifurcation of the liquid and gas streams, thus creating two-phase flow near the injector exit. An interesting feature of the FB injector spray is the insignificant effect of variation in the atomizing air-to-liquid ratio (ALR) on the spray cone angle. Besides, FB injectors produce a nearly uniform spatial distribution of mean droplet diameter and are least susceptible to variation in the thermo-physical properties of fuels, making them a perfect candidate for fuel-flexible combustor development. The FB injector working principle has so far been realized through experimental flow visualization techniques only. The present study explores the potential of ANSYS Fluent based Large Eddy Simulation (LES) with the volume of fluid (VOF) method to investigate the two-phase flow just upstream of the injector dump plane and the spray quality immediately downstream of it. Note that water and air represent the liquid and gas phases in all simulations, and the ALR is varied by changing the air mass flow rate alone. Preliminary results capture the two-phase flow just upstream of the injector dump plane, and qualitative agreement is observed with the available experimental literature.

Keywords: flow blurring twin fluid atomization, large eddy simulation, volume of fluid, air to liquid ratio

Procedia PDF Downloads 213
856 Room Temperature Lasing from InGaAs Quantum Well Nanowires on Silicon-On-Insulator Substrates

Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li

Abstract:

Quantum confinement can be used to increase efficiency and control the emitted spectra in lasers and LEDs. In semiconductor nanowires, quantum confinement can be achieved in the axial direction by stacking multiple quantum disks or in the radial direction by forming a core-shell structure. In this work we demonstrate room-temperature lasing in topological photonic crystal nanowire array lasers using an InGaAs radial quantum well as the gain material. The nanowires with the GaAs/InGaAs/InGaP quantum well structure are arranged in a deformed honeycomb lattice, forming a photonic crystal surface emitting laser (PCSEL). Under optical pumping we show that the PCSEL lases at a wavelength of 1001 nm (undeformed pattern) and 966 nm (stretched pattern), with a lasing threshold of 103 µJ/cm². We compare the lasing wavelengths from devices with three different nanowire diameters for undeformed, compressed, and stretched devices, showing that the lasing wavelength increases as the nanowire diameter increases. The impact of deforming the honeycomb pattern is studied, where it is found that the lasing wavelengths of undeformed devices are always larger than those of the corresponding stretched or compressed devices with the same nanowire diameter. Using photoluminescence results and numerical simulations of the field profile and the quality factors of the devices, we establish that the lasing of the device originates from the radial quantum well structure.

Keywords: honeycomb PCSEL, nanowire laser, photonic crystal laser, quantum well laser

Procedia PDF Downloads 8
855 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique

Authors: Prabha Rohatgi

Abstract:

To obtain efficient control over a huge inventory of drugs in the pharmacy department of a hospital, medicines are generally categorized first on the basis of their cost, using 'ABC' (Always Better Control) analysis, and then on the basis of their criticality, using 'VED' (Vital, Essential, Desirable) analysis, for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize inventory investment, hospital management may like to keep the medicines inventory low, as medicines are perishable items. The main aim of every hospital is to provide better services to patients under limited resources. To achieve a satisfactory level of health care for outdoor patients, a hospital has to keep an eye on the wastage of medicines, because expired medicines cause a great loss of money from a budget that is limited and allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to the expiry of medicines, an inventory control model is used as an estimation tool, and then a nonlinear programming technique is applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given, showing that by using scientific methods in hospital services, inventory can be managed more effectively under limited resources and better health care services can be provided. The secondary data were collected from a hospital to provide empirical evidence.
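
A minimal sketch of this kind of constrained multi-item EOQ problem, with hypothetical demand, cost, and budget figures in place of the hospital's data, might look as follows.

```python
# Multi-item EOQ: minimize ordering plus holding cost subject to a budget
# limit on average inventory investment, via nonlinear programming.
import numpy as np
from scipy.optimize import minimize

D = np.array([1200.0, 800.0, 500.0])   # annual demand per drug (hypothetical)
S = np.array([50.0, 40.0, 60.0])       # ordering cost per order
H = np.array([2.0, 1.5, 3.0])          # holding cost per unit per year
c = np.array([10.0, 8.0, 20.0])        # unit purchase cost
budget = 4000.0                        # limit on average inventory investment

def total_cost(Q):
    return np.sum(D / Q * S + Q / 2.0 * H)   # ordering + holding cost

cons = {"type": "ineq", "fun": lambda Q: budget - np.sum(c * Q / 2.0)}
res = minimize(total_cost, x0=np.sqrt(2 * D * S / H),  # unconstrained EOQ start
               bounds=[(1e-3, None)] * 3, constraints=cons)
print("optimal order quantities:", res.x)
```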

Keywords: ABC-VED inventory classification, multi item inventory problem, nonlinear programming technique, optimization of EOQ

Procedia PDF Downloads 254
854 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially for the stock market, is a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has always been a human task; with the help of AI, however, machine learning models can produce more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Evaluating test-set accuracy for four different models (linear regression, neural network, decision tree, and naïve Bayes) on the stocks Apple, Google, Tesla, Amazon, UnitedHealthcare, Exxon Mobil, JPMorgan Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best: the naïve Bayes model had the highest test-set accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions, which means the decision tree model likely overfitted the training set.
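
A minimal sketch of the four-model comparison described, using scikit-learn on synthetic placeholder features and a simplified up/down target rather than the paper's actual price data, is shown below.

```python
# Compare train vs. test accuracy for four model families. Linear regression
# is replaced here by logistic regression so all models fit a binary target.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))                  # placeholder daily features
y = (X @ rng.normal(size=8) + rng.normal(scale=0.5, size=1000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "linear": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(),   # prone to overfit the train set
    "neural network": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: train={model.score(X_tr, y_tr):.3f}, "
          f"test={model.score(X_te, y_te):.3f}")
```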

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 94
853 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia

Authors: Zeinu Ahmed Rabba, Derek D Stretch

Abstract:

Remote sensing contributes valuable information to streamflow estimation. Usually, streamflow is measured directly through ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits water resources management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means to acquire such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows, and the results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures (Bias, R², NS, and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high Bias and high RMSE) the extreme precipitation events and the corresponding simulated streamflow outputs, particularly during the wet months (June-September), and to underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the streamflow simulated using gauged rainfall is superior to that obtained from remotely sensed rainfall products, including the bias-adjusted ones.
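
The four evaluation statistics named above (Bias, R², Nash-Sutcliffe efficiency, RMSE) can be computed as in the following sketch, shown here with placeholder flow values rather than the Gilgel Ghibe data.

```python
# Standard goodness-of-fit statistics for simulated vs. observed streamflow.
import numpy as np

def evaluate(obs, sim):
    bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return {"Bias": bias, "RMSE": rmse, "NS": nse, "R2": r2}

obs = np.array([12.0, 30.5, 55.2, 40.1, 22.3])   # hypothetical gauged flow (m3/s)
sim = np.array([10.8, 33.0, 50.9, 43.5, 20.7])   # hypothetical modeled flow
print(evaluate(obs, sim))
```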

Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase

Procedia PDF Downloads 283
852 Finite Difference Modelling of Temperature Distribution around Fire Generated Heat Source in an Enclosure

Authors: A. A. Dare, E. U. Iniegbedion

Abstract:

Industrial furnaces generally involve enclosures of fire typically initiated by the combustion of gases. The fire leads to a temperature distribution inside the enclosure, and a proper understanding of the temperature and velocity distributions within the enclosure is often required for optimal design and use of the furnace. This study was therefore directed at numerical modeling of the temperature distribution inside an enclosure, as is typical in a furnace. A mathematical model was developed from the conservation of mass, momentum, and energy. The stream function-vorticity formulation of the governing equations was solved by an alternating direction implicit (ADI) finite difference technique, and the resulting finite difference formulation was developed into a computer code (a MATLAB program) used to determine the temperature, velocities, stream function, and vorticity. The effect of wall heat conduction was also considered by assuming one-dimensional heat flow through the wall. The results obtained showed that the transient temperature distribution assumed a uniform profile that becomes more chaotic with increasing time; the vertical velocity showed increasingly turbulent behavior with time, while the horizontal velocity assumed decreasingly laminar behavior with time. All of these behaviors agree with those reported in the literature. The developed model has provided an understanding of the heat transfer process in an industrial furnace.
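
For reference, the stream function-vorticity form of the governing equations, with a Boussinesq buoyancy term assumed for the fire-driven flow (standard notation, not necessarily the paper's exact formulation), is:

```latex
u = \frac{\partial \psi}{\partial y}, \qquad
v = -\frac{\partial \psi}{\partial x}, \qquad
\nabla^2 \psi = -\omega,
\qquad
\frac{\partial \omega}{\partial t}
  + u\frac{\partial \omega}{\partial x}
  + v\frac{\partial \omega}{\partial y}
  = \nu \nabla^2 \omega + g\beta \frac{\partial T}{\partial x},
\qquad
\frac{\partial T}{\partial t}
  + u\frac{\partial T}{\partial x}
  + v\frac{\partial T}{\partial y}
  = \alpha \nabla^2 T
```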

Keywords: heat source, modelling, enclosure, furnace

Procedia PDF Downloads 254
851 Two-Dimensional CFD Simulation of the Behaviors of Ferromagnetic Nanoparticles in Channel

Authors: Farhad Aalizadeh, Ali Moosavi

Abstract:

This paper presents a two-dimensional Computational Fluid Dynamics (CFD) simulation of steady particle tracking, with the purpose of studying the effect of an applied magnetic field on the velocity distribution of magnetic nanoparticles. It is shown that the permeability of the particles determines the effect of the magnetic field on their deposition, and that the deposition of the particles is inversely proportional to the Reynolds number. Using MHD and its properties, it is possible to control the flow velocity, remove the fouling on the walls, and return the system to its original form. We consider a 2D channel geometry and solve for the resulting spatial distribution of particles. According to the results obtained, when magnetic fields are applied perpendicular to the flow alone, the local particle velocity is decreased due to the direct effect of the magnetic field, returning the system to its original form. In the method, the ferromagnetic particles are first covered with a gel-like chemical composition, in order to avoid mixing with blood, and are injected into the blood vessels; a magnetic field source at a specified distance from the vessel is then used to guide the particles to the affected area. The simulation considers the steady, laminar flow of an incompressible magnetorheological (MR) fluid between two fixed parallel plates in the presence of a uniform magnetic field. The purpose of this study is to develop a numerical tool able to simulate MR fluid flow in valve mode and to determine the effect of B0, the applied magnetic field, on flow velocities and pressure distributions.
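
A minimal particle-tracking sketch consistent with this setup, using an overdamped force balance between Stokes drag and a placeholder magnetic gradient force (all values hypothetical, not the paper's), is given below.

```python
# Overdamped nanoparticle tracking in a channel: drift velocity equals the
# local fluid velocity plus the magnetic force divided by the drag coefficient.
import numpy as np

dt, steps = 1e-4, 5000
gamma = 1e-9                    # Stokes drag coefficient 6*pi*mu*r (hypothetical)
h, u_max = 1e-3, 0.01           # channel half-height (m), centerline speed (m/s)

def fluid_velocity(pos):
    """Parabolic (Poiseuille) velocity profile across the channel."""
    return np.array([u_max * (1.0 - (pos[1] / h) ** 2), 0.0])

def magnetic_force(pos):
    """Placeholder gradient force pulling particles toward the lower wall."""
    return np.array([0.0, -2e-12 * np.exp(pos[1] / h)])

pos = np.array([0.0, 5e-4])
trajectory = [pos.copy()]
for _ in range(steps):
    vel = fluid_velocity(pos) + magnetic_force(pos) / gamma
    pos = pos + vel * dt
    trajectory.append(pos.copy())
```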

Keywords: MHD, channel clots, magnetic nanoparticles, simulations

Procedia PDF Downloads 367
850 Numerical Investigation of Material Behavior During Non-Equal Channel Multi Angular Extrusion

Authors: Mohamed S. El-Asfoury, Ahmed Abdel-Moneim, Mohamed N. A. Nasr

Abstract:

The current study uses finite element modeling to investigate and analyze a modified form of the conventional equal channel multi-angular pressing (ECMAP), using non-equal channels, and its effect on workpiece plastic deformation. The modified process, non-equal channel multi-angular extrusion (NECMAE), is modeled using a two-dimensional plane-strain finite element model built in the commercial software ABAQUS. The workpiece material used is pure aluminum. The model was first validated by comparing its results to analytical solutions for single-pass equal channel angular extrusion (ECAP), as well as to previously published data. After that, the model was used to examine the effects of different percentages of reduction of area (in the second stage) on material plastic deformation, the corner gap, and the required load. Three levels of reduction of area were modeled (10%, 30%, and 50%) and compared to single-pass and double-pass ECAP. Cases with a higher reduction of area were found to have smaller corner gaps, higher and much more uniform plastic deformation, as well as higher required loads. The current results are mainly attributed to the back-pressure effect exerted by the second stage, as well as the strain-hardening effects experienced during the first stage.

Keywords: non-equal channel angular extrusion, multi-pass, severe plastic deformation, back pressure, Finite Element Modelling (FEM)

Procedia PDF Downloads 421
849 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator

Authors: Swati Swati, Yuhang Chen, Robert Reuben

Abstract:

Coronary stents are tube-shaped devices placed in coronary arteries to keep the arteries open in the treatment of coronary artery disease, and they are routinely deployed to clear atheromatous plaque. The stent essentially applies an internal pressure to the artery because its structure is cylindrically symmetrical, and this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial compliance measuring devices which can (eventually) be deployed in vivo. The segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose will be to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Arterial distensibility measurement is of utmost importance for diagnosing cardiovascular diseases and for the prediction of future cardiac events or coronary artery disease. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at macro-scale. The measurement methodology consists of a strain gauge system monitored by LABVIEW software in real time. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it. The instrument allows automated measurement of the effect of air pressure on the gelatine and measurement of strain with respect to time and pressure during inflation. A simple creep compliance model has been applied to the results for the purpose of extracting some measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that with decreasing arterial volume and increasing arterial pressure, arterial strain increases, thereby decreasing the arterial compliance. The measurement system could lead to the development of portable, inexpensive and small equipment and could prove to be an efficient automated compliance measurement device.

Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement

Procedia PDF Downloads 279
848 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares these two approaches by means of geographical error analysis for the site of Kankesanturai using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated using the Levenberg-Marquardt method, and multiple linear regression was applied to calibrate the log-linear inversion model. Single Beam Echo Sounding (SBES) data in the study area were used as reference points for calibrating both models. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of the model residuals was mapped. The spatial autocorrelation of the residuals was calculated to compare the performance of the bathymetric models and to show the geographic errors for both. A spatial error model was then constructed from the initial bathymetry estimates and the estimates of autocorrelation; this model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of the model error and incorporating it into an improved regression model. The log-linear model (R² = 0.846) performs better than the non-linear model (R² = 0.692), and the spatial error models improve the bathymetric estimates derived from the linear and non-linear models up to R² = 0.854 and R² = 0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges; the magnitude of the prediction error increases with depth for both inversion models, with overall RMSEs of ±1.532 m for the log-linear and ±2.089 m for the non-linear inversion model.
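
A minimal sketch of a log-linear (band-ratio) calibration by multiple linear regression is shown below, with synthetic reflectances and depths in place of the WorldView-2 and SBES data; the exact band-ratio form used by the paper may differ.

```python
# Fit depth = m0 + m1 * ln(n*R_blue) / ln(n*R_green) by least squares
# (a Stumpf-style log-ratio model), then report RMSE against the reference.
import numpy as np

rng = np.random.default_rng(1)
n = 200
blue = rng.uniform(0.02, 0.2, n)               # placeholder band reflectances
green = rng.uniform(0.02, 0.2, n)
depth_sbes = 5.0 + 12.0 * np.log(1000*blue) / np.log(1000*green) \
             + rng.normal(0, 0.5, n)           # synthetic "reference" depths

x = np.log(1000 * blue) / np.log(1000 * green)
A = np.column_stack([np.ones(n), x])
coeffs, *_ = np.linalg.lstsq(A, depth_sbes, rcond=None)
pred = A @ coeffs
rmse = np.sqrt(np.mean((pred - depth_sbes) ** 2))
print(f"m0={coeffs[0]:.2f}, m1={coeffs[1]:.2f}, RMSE=±{rmse:.3f} m")
```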

Keywords: log-linear model, multi spectral, residuals, spatial error model

Procedia PDF Downloads 295
847 Experimental Characterization of Anti-Icing System and Accretion of Re-Emitted Droplets on Turbojet Engine Blades

Authors: Guillaume Linassier, Morgan Balland, Hugo Pervier, Marie Pervier, David Hammond

Abstract:

Atmospheric icing of turbojets is caused by the ingestion of super-cooled water droplets. To prevent operability risks, manufacturers can implement ice protection systems. Thermal systems are commonly used for this purpose, but their activation can cause the formation of a liquid water film that can freeze downstream of the heated surface or even on other components. In the framework of STORM, a European project dedicated to icing physics in turbojet engines, a cascade rig representative of engine inlet blades was built and tested in an icing wind tunnel. This mock-up integrates two rows of blades, the upstream one anti-iced using an electro-thermal device and the downstream one unheated. Under icing conditions, the anti-icing system is activated and set at a power level chosen to observe a liquid film on the surface and droplet re-emission at the trailing edge. These re-emitted droplets impinge on the downstream row and contribute to ice accretion. A complete experimental database was generated, including the characterization of ice accretion shapes and of the electro-thermal anti-icing system (power limits for the appearance of runback water or ice accretion). These data will be used for the validation of numerical tools for modeling thermal anti-icing systems in the scope of engine applications, as well as for the validation of re-emitted droplet models for stator parts.

Keywords: turbomachine, anti-icing, cascade rig, runback water

Procedia PDF Downloads 181
846 Uncertainty Assessment in Building Energy Performance

Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud

Abstract:

The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a matter of scientific, technological, and sociological interest. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When valuing this performance, many buildings show significant differences between calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data through the model being used. The evaluation of measurement uncertainty is based both on knowledge of the measurement process and on the input quantities which influence the measurement result. Measurement uncertainty can be evaluated within the framework of conventional statistics, as presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian Statistical Theory (BST); another choice is the use of numerical methods such as Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for estimating the energy consumption of a given building, and a detailed review and discussion of these three approaches (GUM, MCS, and BST) is given. An office building was monitored, with multiple sensors mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window openings, and occupancy rate are the features of our research work.
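
As an illustration of the MCS route, the following sketch propagates input uncertainties through a deliberately simplified consumption model (a stand-in, not the paper's model) and reports a mean and a 95% coverage interval.

```python
# Monte Carlo uncertainty propagation through a toy consumption model
# Q = U * A * dT * hours. All distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
U = rng.normal(0.8, 0.05, n)     # wall U-value (W/m^2K), assumed uncertainty
A = rng.normal(102.0, 1.0, n)    # monitored zone area (m^2)
dT = rng.normal(12.0, 1.5, n)    # indoor-outdoor temperature difference (K)
hours = 24 * 30                  # one month of operation

Q = U * A * dT * hours / 1000.0  # energy (kWh)
lo, hi = np.percentile(Q, [2.5, 97.5])
print(f"energy use: {Q.mean():.0f} kWh, 95% interval [{lo:.0f}, {hi:.0f}] kWh")
```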

Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method

Procedia PDF Downloads 457
845 The Effect of a Saturated Kink on the Dynamics of Tungsten Impurities in the Plasma Core

Authors: H. E. Ferrari, R. Farengo, C. F. Clauser

Abstract:

Tungsten (W) will be used in ITER as one of the plasma-facing components (PFCs). The W could migrate to the plasma center, which could have a potentially deleterious effect on plasma confinement. Electron cyclotron resonance heating (ECRH) can be used to prevent W accumulation. We simulated a series of H-mode discharges in ASDEX Upgrade with PFCs containing W, where central ECRH was used to prevent W accumulation in the plasma center. The experiments showed that the W density profiles were flat after a sawtooth crash and became hollow between sawtooth crashes when ECRH was applied; it was also observed that a saturated kink mode was active in these conditions. We studied the effect of saturated kink-like instabilities on the redistribution of W impurities. The kink was modeled as the sum of a simple analytical equilibrium (large aspect ratio, circular cross section) plus the perturbation produced by the kink. A numerical code that follows the exact trajectories of the impurity ions in the total fields and includes collisions was employed. The code is written in CUDA C and runs on Graphics Processing Units (GPUs), allowing simulations with a large number of particles using modest resources. Our simulations show that when the W ions have a thermal velocity distribution, the kink has no effect on the W density; when we consider plasma rotation, the kink can affect the W density. When the average passing frequency of the W particles is similar to the frequency of the kink mode, the expulsion of W ions from the plasma core is maximal, and the W density shows a hollow structure. This could have implications for the mitigation of W accumulation.

Keywords: impurity transport, kink instability, tungsten accumulation, tungsten dynamics

Procedia PDF Downloads 169
844 Evaluation of Residual Stresses in Human Face as a Function of Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design and computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting a body organ's shape from medical imaging does not produce any information regarding the existing residual stresses in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth, and ignoring such residual stresses might cause erroneous results in numerical simulations. Accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images and compute the distribution of residual stress in facial tissues, which can counteract the effect of gravity and maintain tissue firmness. Our assumption is that tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery, as it would help surgeons estimate tissue changes after surgery.

Keywords: finite element method, growth, residual stress, soft tissue

Procedia PDF Downloads 268
843 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective

Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli

Abstract:

In this paper, an energy-aware method is presented that integrates energy-efficient relay-augmented techniques for correlated data routing, with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by exploiting data correlation patterns. The paper introduces a game-theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with best-response dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption, by choosing routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method: the outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.

Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks

Procedia PDF Downloads 81
842 Bioinformatic Design of a Non-toxic Modified Adjuvant from the Native A1 Structure of Cholera Toxin with Membrane Synthetic Peptide of Naegleria fowleri

Authors: Frida Carrillo Morales, Maria Maricela Carrasco Yépez, Saúl Rojas Hernández

Abstract:

Naegleria fowleri is the causative agent of primary amebic meningoencephalitis, an acute and fulminant disease that affects humans. It has been reported that, despite the existence of therapeutic options against this disease, its mortality rate is 97%. Therefore, there is a need for vaccines that confer protection against this disease and for adjuvants that enhance the immune response. In this regard, our work group previously obtained a peptide designed from the membrane protein MP2CL5 of Naegleria fowleri, called Smp145, that was shown to be immunogenic; it would be of great importance to enhance its immunological response by co-administering it with a non-toxic adjuvant. Therefore, the objective of this work was to carry out the bioinformatic design of a peptide of the Naegleria fowleri membrane protein MP2CL5 conjugated with a non-toxic adjuvant modified from the native A1 structure of Cholera Toxin. Different bioinformatics tools were used to obtain a model with a modification at amino acid 61 of the A1 subunit of the CT (CTA1), to which the Smp145 peptide was attached, the two molecules being joined with a 13-glycine linker. As for the results obtained, the modification in CTA1 bound to the peptide produces a reduction in the toxicity of the molecule in in silico experiments; likewise, the predicted binding of Smp145 to the B-cell receptor suggests that the molecule is directed specifically to the BCR, decreasing the native enzymatic activity. The stereochemical evaluation showed that the generated model has a high number of adequately predicted residues, and in the ERRAT test, which evaluates the confidence with which regions exceeding the error values can be rejected, the model obtained a high score, indicating a good structural resolution. The design of the conjugated peptide in this work will therefore allow us to proceed with its chemical synthesis and subsequently use it in the mouse model of protection against meningitis caused by N. fowleri.

Keywords: immunology, vaccines, pathogens, infectious disease

Procedia PDF Downloads 90
841 Numerical Investigation of a New Two-Fluid Model for Semi-Dilute Polymer Solutions

Authors: Soroush Hooshyar, Mohamadali Masoudian, Natalie Germann

Abstract:

Many soft materials, such as polymer solutions, can develop localized bands with different shear rates, known as shear bands. Using the generalized bracket approach of nonequilibrium thermodynamics, we recently developed a new two-fluid model to study shear banding in semi-dilute polymer solutions. The two-fluid approach is an appropriate means of describing diffusion processes such as Fickian diffusion and stress-induced migration. In this approach, it is assumed that local gradients in concentration and, if accounted for, also in stress generate a nontrivial velocity difference between the components. Since the differential velocity is treated as a state variable in our model, the implementation of the boundary conditions arising from the derivative diffusive terms is straightforward. Our model is a good candidate for benchmark simulations because of its simplicity. We analyzed its behavior in cylindrical Couette flow, rectilinear channel flow, and 4:1 planar contraction flow. The latter problem was solved using the OpenFOAM finite volume package, and the impact of shear banding on the lip and salient vortices was investigated. For the other, smooth geometries, we employed a standard Chebyshev pseudospectral collocation method. The results showed that the steady-state solution is unique with respect to initial conditions, deformation history, and the value of the diffusivity constant. However, the smaller the value of the diffusivity constant, the more time it takes to reach the steady state.

Keywords: nonequilibrium thermodynamics, planar contraction, polymer solutions, shear banding, two-fluid approach

Procedia PDF Downloads 330
840 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge to the efficient implementation of quantum chemical software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. The integrals are approximated using linear combinations of a small number of moments, with machine learning algorithms applied to estimate the coefficients in the linear combination. A random forest approach with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature-masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared to a single network.
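
A minimal sketch of this kind of pipeline (random-forest recursive feature elimination followed by a small median-fused ensemble of two-hidden-layer networks), on synthetic placeholder features rather than actual moment data, is shown below.

```python
# RFE with a random forest to pick features, then an ensemble of small
# neural networks whose median prediction is used (decision fusion).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 40))              # placeholder moment features
y = X[:, :5] @ rng.normal(size=5)           # target depends on 5 features

selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=10).fit(X, y)
X_sel = selector.transform(X)

ensemble = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=i).fit(X_sel, y) for i in range(5)]
pred = np.median([m.predict(X_sel) for m in ensemble], axis=0)  # median fusion
```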

Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction

Procedia PDF Downloads 112
839 Acceleration and Deceleration Behavior in the Vicinity of a Speed Camera, and Speed Section Control

Authors: Jean Felix Tuyisingize

Abstract:

Speeding, or inappropriate speed, is a major problem worldwide, contributing to 10-15% of road crashes and 30% of fatal injury crashes. The consequences of speeding put at risk the life of the driver and the lives of other road users such as motorists, cyclists, and pedestrians. To control vehicle speeds, governments and traffic authorities enforce speed regulations through speed cameras and speed section control, which monitor vehicle speeds and detect plate numbers to levy penalties. However, speed limit violations are prevalent, even on motorways with speed cameras. The problem with speed cameras is that they alter driver behavior only locally, and their effect declines with increasing distance from the camera location: drivers decelerate a short distance before the camera and accelerate vigorously above the speed limit just after passing it. The sudden deceleration near cameras leads drivers to try to make up for lost time after passing, which they do by speeding up, resulting in a phenomenon known as the "kangaroo jump" or "V-profile" around camera/ASSC areas. This study investigated the impact of speed enforcement devices, specifically Average Speed Section Control (ASSC) and fixed cameras, on acceleration and deceleration events in their vicinity. The research employed advanced statistical and Geographic Information System (GIS) analysis of naturalistic driving data to uncover speeding patterns near speed enforcement systems. The study revealed a notable concentration of events within a 600-meter radius of enforcement devices, suggesting their influence on driver behavior within a specific range. However, most of these events are of low severity, suggesting that drivers may not significantly alter their speed upon encountering these devices. This behavior could be attributed to several factors, such as consistently maintaining safe speeds or using real-time in-vehicle intervention systems, and it highlights the complexity of driver behavior, indicating the potential influence of traffic density, road conditions, weather, time of day, and driver characteristics. Further, the study found that high-severity events often occurred outside speed enforcement zones, particularly around intersections, indicating these as potential hotspots for drastic speed changes. These findings call for a broader perspective on traffic safety interventions beyond reliance on speed enforcement devices. The study acknowledges certain limitations, such as its specific geographical focus, which may limit the broad applicability of the findings, and its categorization of event severity into low, medium, and high, which could oversimplify the continuum of speed changes and potentially mask trends within each category. This research contributes valuable insights to the traffic safety and driver behavior literature, illuminating the complexity of driver behavior and the potential influence of factors beyond the presence of speed enforcement devices. Future research may employ finer categories of event severity and explore the role of in-vehicle technologies, driver characteristics, and a broader set of environmental variables in driving behavior and traffic safety.

Keywords: acceleration, deceleration, speeding, inappropriate speed, speed enforcement cameras

Procedia PDF Downloads 31
838 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based, and MPFS algorithms, accompanied by different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter association. In total, 5760 simulations were run to quantify the relative contribution of each factor to simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when the channel sand width is at least 1.5 times the well spacing, under whatever condition, for the SIS and MPFS methods. When well density is low, incorporating a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model, whereas when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on a variogram function alone. For the object-based method, modeling accuracy does not increase with data density as obviously as for the SIS method, but it retains a rational appearance when data density is low. MPFS methods show a similar trend to the SIS method, but the use of a proper geological trend together with a rational variogram may give better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized through evaluation of the dataset, geological complexity, geological constraint information, and the modeling objective.
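
For reference, an experimental variogram of the kind underlying SIS modeling can be computed as in the following sketch, with random placeholder well data.

```python
# Omnidirectional experimental variogram: average semivariance per lag bin.
import numpy as np

rng = np.random.default_rng(5)
coords = rng.uniform(0, 1000, size=(300, 2))    # well locations (m), placeholder
values = rng.normal(size=300)                   # e.g. a facies indicator/property

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sq = 0.5 * (values[:, None] - values[None, :]) ** 2
bins = np.arange(0, 600, 50)
gamma = [sq[(d > lo) & (d <= hi)].mean()        # d > lo excludes self-pairs
         for lo, hi in zip(bins[:-1], bins[1:])]
```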

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 262
837 Experimental Study of Sand-Silt Mixtures with Torsional and Flexural Resonant Column Tests

Authors: Meghdad Payan, Kostas Senetakis, Arman Khoshghalb, Nasser Khalili

Abstract:

Dynamic properties of soils, especially in the range of very small strains, are of particular interest in geotechnical engineering practice for characterizing the behavior of geo-structures subjected to a variety of stress states. This study reports on the small-strain dynamic properties of sand-silt mixtures, with particular emphasis on the effect of non-plastic fines content on the small-strain shear modulus (Gmax), Young's modulus (Emax), material damping (Ds,min), and Poisson's ratio (v). Several clean sands with a wide range of grain-size characteristics and particle shapes are mixed with varying percentages of a non-plastic silica silt as fines content. Specimens of sand-silt mixtures prepared at different initial void ratios are subjected to sequential torsional and flexural resonant column tests, with elastic dynamic properties measured along an isotropic stress path up to 800 kPa. It is shown that, while at low percentages of fines content there is a significant difference between the dynamic properties of the various samples due to the different characteristics of the sand portion of the mixtures, this variance diminishes as the fines content increases and the soil behavior becomes silt-dominant, leaving no significant influence of sand properties on the elastic dynamic parameters. Indeed, beyond a specific fines content, around 20% to 30%, typically denoted the threshold fines content, silt controls the behavior of the mixture. Using the experimental results, new expressions for the prediction of the small-strain dynamic properties of sand-silt mixtures are developed, accounting for the percentage of silt and the characteristics of the sand portion. These expressions are general in nature and are capable of evaluating the elastic dynamic properties of sand-silt mixtures with any type of parent sand over the whole range of silt percentages. The inefficiency of the skeleton void ratio concept in estimating the small-strain stiffness of sand-silt mixtures is also illustrated.
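One way the paired torsional and flexural measurements combine is through isotropic elasticity, E = 2G(1 + v); the snippet below rearranges this relation for Poisson's ratio. The numerical values are illustrative only, not results from the paper.

```python
def poisson_ratio(e_max, g_max):
    """Poisson's ratio from isotropic elasticity, E = 2G(1 + v),
    using Emax from flexural and Gmax from torsional resonance.
    Inputs must share units (e.g., MPa)."""
    return e_max / (2.0 * g_max) - 1.0

# Illustrative values only (not from the paper): a clean quartz sand at
# moderate confining stress might show Gmax ~ 110 MPa and Emax ~ 280 MPa.
print(f"v = {poisson_ratio(280.0, 110.0):.3f}")  # ~0.273
```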

Keywords: damping ratio, Poisson’s ratio, resonant column, sand-silt mixture, shear modulus, Young’s modulus

Procedia PDF Downloads 249
836 Imputing the Minimum Social Value of Public Healthcare: A General Equilibrium Model of Israel

Authors: Erez Yerushalmi, Sani Ziv

Abstract:

The rising demand for healthcare services, without a corresponding rise in public supply, has led to a debate on whether to increase private healthcare provision, especially in hospital services and second-tier healthcare. Proponents of increasing private healthcare highlight gains in efficiency, while opponents stress its risk to social welfare. Neither side, however, provides a monetary measure of the social value and its impact on the economy. In this paper, we impute a minimum social value of public healthcare that corresponds to indifference between gains in efficiency and losses to social welfare. Our approach resembles contingent valuation methods, which introduce a hypothetical market for non-commodities, but differs from them in that we use numerical simulation techniques to exploit certain market failure conditions. We develop a general equilibrium model that distinguishes between public-private healthcare services and public-private financing. Furthermore, the social value is modelled as a by-product of healthcare services. The model is then calibrated to our unique health-focused Social Accounting Matrix of Israel and simulates the introduction of a hypothetical health-labour market, given that it is heavily regulated in the baseline (i.e., the true situation in Israel today). For baseline parameters, we estimate the minimum social value at around 18% of public healthcare financing. The intuition is that the gain in economic welfare from improved efficiency is offset by the loss in social welfare due to a reduction in available social value. We furthermore simulate a deregulated healthcare scenario that internalizes the imputed social value and searches for the optimal weight of public and private healthcare provision.
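The indifference condition can be caricatured as a one-dimensional root-finding problem: find the social value at which the efficiency gain from deregulation exactly offsets the welfare loss. The sketch below uses toy stand-in functions chosen so the root lands near the paper's 18% figure; it is in no way the calibrated CGE model, only an illustration of the search logic.

```python
from scipy.optimize import brentq

# Toy stand-ins for the paper's CGE experiment: for a social value s of
# public healthcare (a share of public financing), assume deregulation
# yields a fixed efficiency gain and a welfare loss rising linearly in s.
# Both functional forms and numbers are illustrative assumptions.
def net_welfare_change(s, efficiency_gain=0.9, loss_slope=5.0):
    return efficiency_gain - loss_slope * s

# The imputed minimum social value is the s at which the economy is
# indifferent to deregulation: net welfare change = 0.
s_star = brentq(net_welfare_change, 0.0, 1.0)
print(f"imputed minimum social value: {s_star:.0%}")  # 18% with these toy numbers
```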

Keywords: contingent valuation method (CVM), general equilibrium model, hypothetical market, private-public healthcare, social value of public healthcare

Procedia PDF Downloads 146
835 Analysis of a Differential System to Get Insights on the Potential Establishment of Microsporidia MB in the Mosquito Population for Malaria Control

Authors: Charlene N. T. Mfangnia, Henri E. Z. Tonnang, Berge Tsanou, Jeremy Herren

Abstract:

Microsporidia MB is a recently discovered symbiont capable of blocking the transmission of Plasmodium from mosquitoes to humans. The symbiont can spread both horizontally and vertically, and this dual transmission gives it the ability to invade the mosquito population. Replacing the mosquito population with symbiont-infected mosquitoes therefore appears to be a promising strategy for malaria control. In this context, the present study uses differential equations to model the transmission dynamics of Microsporidia MB in the population of female Anopheles mosquitoes. Long-term propagation scenarios of the symbiont, such as extinction, persistence, or total infection, are obtained through the determination of the target and basic reproduction numbers, the equilibria, and the study of their stability. The stability is illustrated numerically, and the contributions of vertical and horizontal transmission to the spread of the symbiont are assessed. Data obtained from laboratory experiments are then used to explain the low prevalence observed in nature. The study also shows that the male death rate, the mating rate, and the attractiveness of MB-positive mosquitoes are the factors that most influence the transmission of the symbiont. In addition, the introduction of temperature and the study of bifurcations show the significant influence of environmental conditions on the propagation of Microsporidia MB. This finding underscores the need to take environmental variables into account for the potential establishment of the symbiont in a new area.
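A deliberately simplified two-compartment caricature of such a system, with vertical and horizontal transmission terms, can be integrated numerically as below; the parameter values and functional forms are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified caricature: uninfected (S) and MB-infected (I) female
# mosquitoes, with vertical transmission at efficiency v and horizontal
# transmission at rate beta. All parameter values are illustrative.
b, mu = 0.10, 0.10    # per-capita birth and death rates (per day)
v, beta = 0.45, 0.08  # vertical transmission efficiency, horizontal rate

def rhs(t, y):
    S, I = y
    N = S + I
    horiz = beta * S * I / N                       # horizontal (e.g., mating) transfer
    dS = b * (S + (1.0 - v) * I) - horiz - mu * S  # uninfected births and losses
    dI = b * v * I + horiz - mu * I                # vertically infected births
    return [dS, dI]

# Invasion criterion near the symbiont-free state: (b*v + beta)/mu > 1
print(f"invasion number: {(b * v + beta) / mu:.2f}")  # 1.25 -> MB can spread here

sol = solve_ivp(rhs, (0.0, 400.0), [990.0, 10.0], rtol=1e-8)
S_end, I_end = sol.y[:, -1]
print(f"MB prevalence after 400 days: {I_end / (S_end + I_end):.2%}")
```

With these toy parameters the symbiont persists at an intermediate prevalence rather than fixing or going extinct, mirroring the coexistence scenarios the stability analysis distinguishes.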

Keywords: differential equations, stability analysis, malaria, microsporidia MB, horizontal transmission, vertical transmission, numerical illustration

Procedia PDF Downloads 112