Search results for: reduced order models
20996 A Unified Model for Longshore Sediment Transport Rate Estimation
Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza
Abstract:
Wind wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none is widely accepted, because the process is not fully understood. Another problem is the lack of sufficient measurement data to verify the proposed hypotheses. There are different types of models for longshore sediment transport (LST, which is discussed in this work) and cross-shore transport, reflecting the different time and space scales of the processes. There are models describing bed-load transport (discussed in this work), suspended transport, and total sediment transport. LST models use, among other things, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes more complicated for heterogeneous sediment. Moreover, the LST rate is strongly dependent on local environmental conditions. To organize the existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone of Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the estimated values of longshore instantaneous mass sediment transport are in general agreement with earlier studies and measurements conducted in the area of interest.
However, none of the formulas really stands out as particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by a single coefficient (in the form of a constant or a function), so the model can be adjusted to local conditions quite easily. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed-load transport in storm conditions. Thus, the accuracy of the results depends less on the sediment transport model itself and more on appropriate modeling of the near-bottom velocities.Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone
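The unified formula itself is not reproduced in the abstract, but the class of model it describes, a transport rate driven by the excess of bed shear stress over its critical value and scaled by a single tunable coefficient, can be sketched as follows. The functional form, coefficient value, and sediment parameters below are illustrative assumptions in the style of Meyer-Peter & Müller, not the authors' actual formula:

```python
import math

def bedload_transport_rate(tau, tau_c, coeff=8.0, rho=1000.0, rho_s=2650.0,
                           g=9.81, d50=2e-4):
    """Excess-shear-stress bedload formula of Meyer-Peter & Muller type.

    tau: bed shear stress [Pa]; tau_c: critical bed shear stress [Pa].
    coeff plays the role of the single tunable coefficient described in
    the abstract; all other parameter values here are illustrative.
    Returns the volumetric transport rate per unit width [m^2/s].
    """
    if tau <= tau_c:
        return 0.0  # no sediment motion below the critical shear stress
    # Dimensionless (Shields) stresses
    denom = (rho_s - rho) * g * d50
    theta, theta_c = tau / denom, tau_c / denom
    # Dimensionless transport, scaled back to physical units
    q_star = coeff * (theta - theta_c) ** 1.5
    return q_star * math.sqrt((rho_s / rho - 1.0) * g * d50 ** 3)
```

Adjusting `coeff` (or replacing it with a function of local conditions) mimics the single-coefficient calibration the abstract describes.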
Procedia PDF Downloads 385
20995 A New Mathematical Model of Human Olfaction
Authors: H. Namazi, H. T. N. Kuan
Abstract:
It is known that in humans, adaptation to a given odor occurs within a quite short span of time (typically one minute) after the odor is presented to the brain. Different models of human olfaction have been developed by scientists, but none of these models considers the diffusion phenomenon in olfaction. A novel microscopic model of human olfaction is presented in this paper. We develop this model by incorporating transient diffusivity. In fact, the mathematical model is written based on diffusion of the odorant within the mucus layer. Using the model developed in this paper, it becomes possible to quantify the objective strength of an odor.Keywords: diffusion, microscopic model, mucus layer, olfaction
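As a rough illustration of the diffusion mechanism the model is built on, the following sketch advances a 1-D finite-difference approximation of odorant diffusion across the mucus layer. It uses a constant diffusivity for brevity, whereas the paper's model incorporates transient diffusivity; all parameter values are hypothetical:

```python
def diffuse_odorant(c, D, dx, dt, steps):
    """Explicit finite-difference stepping of 1-D diffusion dc/dt = D d2c/dx2.

    c: list of odorant concentrations across the mucus layer; the boundary
    values are held fixed (odor source on one side, epithelium on the other).
    The explicit scheme is stable only when D*dt/dx**2 <= 0.5.
    """
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx choice"
    for _ in range(steps):
        # Interior update; endpoints are Dirichlet boundaries
        c = [c[0]] + [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
                      for i in range(1, len(c) - 1)] + [c[-1]]
    return c
```

A time-dependent diffusivity, as in the paper, would amount to recomputing `r` at each step from D(t).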
Procedia PDF Downloads 504
20994 Comparison of Volume of Fluid Model: Experimental and Empirical Results for Flows over Stacked Drop Manholes
Authors: Ramin Mansouri
Abstract:
The manhole is a type of structure installed where the flow direction or the pipe diameter of sewage pipes changes, as well as in steep-slope areas, to reduce the flow velocity. In this study, the flow characteristics of a manhole structure have been investigated with a numerical model. Coarse, medium, and fine computational grids have been used for the simulation. To simulate the flow, the k-ε model (standard, RNG, Realizable) and the k-ω model (standard, SST) are used. Also, to find the best wall treatment, two types of wall functions, standard and non-equilibrium, were investigated. Of all models, the k-ε turbulence model has the highest correlation with the experimental results. In terms of boundary conditions, a constant velocity is set at the flow inlet boundary, a pressure outlet is set at the boundaries in contact with the air, and the standard wall function is used to model the wall effect. In the numerical model, the depth at the outlet of the second manhole and the outlet jet from the span are estimated to be less than in the laboratory. In the second regime, the jet flow collides with the manhole wall and divides into two parts, so the hydraulic characteristics are the same as those of a large vertical shaft. In this situation, the turbulence is in a high range, since more energy loss can be seen in it. According to the results, the energy loss in the numerical model is estimated at 9.359%, which is more than in the experimental data.Keywords: manhole, energy, depreciation, turbulence model, wall function, flow
Procedia PDF Downloads 80
20993 A Comparative Evaluation of Finite Difference Methods for the Extended Boussinesq Equations and Application to Tsunamis Modelling
Authors: Aurore Cauquis, Philippe Heinrich, Mario Ricchiuto, Audrey Gailler
Abstract:
In this talk, we look for an accurate time scheme to model the propagation of waves. Several numerical schemes have been developed to solve the extended weakly nonlinear, weakly dispersive Boussinesq equations. The temporal schemes used are two Lax-Wendroff schemes, second- or third-order accurate, two Runge-Kutta schemes of second and third order, and a simplified third-order accurate Lax-Wendroff scheme. Spatial derivatives are evaluated with fourth-order accuracy. The numerical model is applied to two one-dimensional benchmarks on a flat bottom. It is also applied to the simulation of the Algerian tsunami generated by an Mw = 6 earthquake on 18 March 2021. The tsunami was highly dispersive and propagated across the Mediterranean Sea. We study here the effects of the order of temporal discretization on the accuracy of the results and on the computation time.Keywords: numerical analysis, tsunami propagation, water wave, Boussinesq equations
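As an illustration of one of the temporal schemes compared, a common third-order Runge-Kutta variant (the strong-stability-preserving Shu-Osher form) can be sketched for a generic ODE du/dt = f(t, u). The paper applies such schemes to the discretized Boussinesq system, not to a scalar ODE, so this is only a sketch of the time stepping itself:

```python
def rk3_step(f, u, t, dt):
    """One step of the third-order strong-stability-preserving Runge-Kutta
    scheme (Shu-Osher form) for du/dt = f(t, u)."""
    k1 = u + dt * f(t, u)                                   # forward Euler stage
    k2 = 0.75 * u + 0.25 * (k1 + dt * f(t + dt, k1))        # convex combination
    return u / 3.0 + 2.0 / 3.0 * (k2 + dt * f(t + 0.5 * dt, k2))
```

For the linear test problem du/dt = -u this reproduces the exact decay to third-order accuracy per step.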
Procedia PDF Downloads 238
20992 Liesegang Phenomena: Experimental and Simulation Studies
Authors: Vemula Amalakrishna, S. Pushpavanam
Abstract:
Change and motion characterize and persistently reshape the world around us, on scales from molecular to global. The subtle interplay between change (reaction) and motion (diffusion) gives rise to astonishingly intricate spatial or temporal patterns. Such pattern formation in nature has been intellectually appealing to many scientists since antiquity. Periodic precipitation patterns, also known as Liesegang patterns (LP), are one of the stimulating examples of such self-assembling reaction-diffusion (RD) systems. LP formation has great potential in micro- and nanotechnology. So far, research on LPs has concentrated mostly on how these patterns form, retrieving information to build a universal mathematical model for them. Researchers have developed various theoretical models to comprehensively describe the geometrical diversity of LPs. To the best of our knowledge, simulation studies of LPs assume arbitrary values of the RD parameters to explain experimental observations qualitatively. In this work, existing models were studied to understand the mechanism behind this phenomenon, and the challenges pertaining to these models were understood and explained. These models are not computationally efficient due to the presence of a discontinuous precipitation rate in the RD equations. To overcome the computational challenges, smoothened Heaviside functions have been introduced, which also reduce the computational time. Experiments were performed using a conventional LP system (AgNO₃-K₂Cr₂O₇) to understand the effects of different gels and temperatures on the formed LPs. The model is extended to real parameter values to compare the simulated results with experimental data for both 1-D (Cartesian test tubes) and 2-D (cylindrical and Petri dish) geometries.Keywords: reaction-diffusion, spatio-temporal patterns, nucleation and growth, supersaturation
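The smoothened Heaviside regularization mentioned above can be sketched as follows; the tanh form and the illustrative precipitation-rate law are assumptions, since the abstract does not specify the exact smoothing used:

```python
import math

def smoothed_heaviside(x, eps=1e-3):
    """Smooth approximation of the Heaviside step, used to regularize the
    discontinuous precipitation rate in the RD equations.
    Tends to the sharp step as eps -> 0."""
    return 0.5 * (1.0 + math.tanh(x / eps))

def precipitation_rate(c, c_sat, k=1.0, eps=1e-3):
    """Precipitation switches on smoothly once the local concentration c
    exceeds the supersaturation threshold c_sat (illustrative form)."""
    return k * (c - c_sat) * smoothed_heaviside(c - c_sat, eps)
```

Because the smoothed rate is differentiable, stiff but standard time integrators can be applied, which is what reduces the computational time relative to handling the discontinuity directly.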
Procedia PDF Downloads 151
20991 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal
Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das
Abstract:
The rapid depletion of high-grade iron ore (Fe2O3) has drawn attention to the use of other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore with a high titania content (23.23% TiO2 in this case). Due to the high TiO2 content and high density, TMO cannot be treated by conventional smelting reduction. In the present work, the TMO has been collected from the high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex situated in the eastern part of India (Shaltora area, Bankura district, West Bengal), and the hematite ore has been collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam. At VSP, iron ore is received from the Bailadila mines, Chhattisgarh, of M/s. National Mineral Development Corporation. The preliminary characterization of the TMO and hematite ore (HMO) has been investigated by WDXRF, XRD and FESEM analyses. Similarly, good quality coal (mainly coking coal) is also being depleted fast. The basic purpose of this work is to find out how lean grade coal can be utilised along with TMO for smelting to produce pig iron. The lean grade coal has been characterised using TG/DTA, proximate and ultimate analyses. The boiler grade coal has been found to contain 28.08% fixed carbon and 28.31% volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) have been separately agglomerated with lean grade coal fines (below 75 μm) in the form of briquettes, using binders such as bentonite and molasses. These green briquettes are dried first in an oven at 423 K for 30 min and then reduced isothermally in a tube furnace at temperatures of 1323 K, 1373 K and 1423 K for 30 min and 60 min. After reduction, the reduced briquettes are characterized by XRD and FESEM analyses. The best reduced TMO and HMO samples are taken and blended in three different weight ratios of 1:4, 1:8 and 1:12 of TMO:HMO.
The chemical analysis of the three blended samples is carried out, and the degree of metallisation of iron is found to be 89.38%, 92.12% and 93.12%, respectively. These three blended samples are briquetted using binders such as bentonite and lime. Thereafter, these blended briquettes are separately smelted in a raising hearth furnace at 1773 K for 30 min. The pig iron formed is characterized using XRD and microscopic analysis. It can be concluded that a 90% yield of pig iron can be achieved when the blend ratio of TMO:HMO is 1:4.5. This means that, for a 90% yield, the maximum TMO that could be used in the blend is about 18%.Keywords: briquetting reduction, lean grade coal, smelting reduction, TMO
Procedia PDF Downloads 319
20990 Reduced Glycaemic Impact by Kiwifruit-Based Carbohydrate Exchanges Depends on Both Available Carbohydrate and Non-Digestible Fruit Residue
Authors: S. Mishra, J. Monro, H. Edwards, J. Podd
Abstract:
When a fruit such as kiwifruit is consumed, its tissues are released from the physical/anatomical constraints existing in the fruit. During digestion they may expand several-fold to achieve a hydrated solids volume far greater than that of the original fruit, and occupy the available space in the gut, where they surround and interact with other food components. Within the cell wall dispersion, in vitro digestion of co-consumed carbohydrate, diffusion of digestion products, and the mixing responsible for mass transfer of nutrients to the gut wall for absorption were all retarded. All of the foregoing processes may be involved in the glycaemic response to carbohydrate foods consumed with kiwifruit, such as breakfast cereal. To examine their combined role in reducing the glycaemic response to wheat cereal consumed with kiwifruit, we formulated diets containing equal amounts of breakfast cereal, with the addition of either kiwifruit, or sugars of the same composition and quantity as in kiwifruit. Therefore, the only difference between the diets was the presence of non-digestible fruit residues. The diet containing the entire dispersed kiwifruit significantly reduced the glycaemic response amplitude and the area under the 0-120 min incremental blood glucose response curve (IAUC), compared with the equicarbohydrate diet containing the added kiwifruit sugars. It also slightly but significantly increased the 120-180 min IAUC by preventing a postprandial overcompensation, indicating improved homeostatic blood glucose control. In a subsequent study, in which we used kiwifruit in a carbohydrate exchange format where the kiwifruit carbohydrate partially replaced breakfast cereal in equal-carbohydrate meals, blood glucose was further reduced without a loss of satiety, and with a reduction in insulin demand. The results show that kiwifruit may be a valuable component in low glycaemic impact diets.Keywords: carbohydrate, digestion, glycaemic response, kiwifruit
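The 0-120 min IAUC reported above is conventionally computed with the trapezoidal rule over the blood glucose increments above the fasting baseline; a minimal sketch (the function name and defaults are ours):

```python
def incremental_auc(times, glucose, baseline=None):
    """Incremental area under the blood glucose curve (IAUC) by the
    trapezoidal rule, counting only area above the fasting baseline,
    as commonly used for glycaemic response comparisons.

    times in minutes, glucose in mmol/L; baseline defaults to the
    first (fasting) sample.
    """
    if baseline is None:
        baseline = glucose[0]
    inc = [max(g - baseline, 0.0) for g in glucose]  # clip below baseline
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (inc[i] + inc[i - 1]) * (times[i] - times[i - 1])
    return area
```

Restricting `times` to the 0-120 min or 120-180 min window reproduces the two IAUC intervals compared in the abstract.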
Procedia PDF Downloads 492
20989 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine
Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski
Abstract:
The paper presents a description of geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed internal combustion Diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports, as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. First, to calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were calculated as the mean over each element. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine
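The swirl-ratio computation described in the abstract (element-wise tangential velocity converted to angular velocity, then averaged over the charge) can be sketched as below. Normalizing by the crankshaft angular velocity and omitting volume weighting of the elements are our assumptions:

```python
import math

def swirl_ratio(tangential_velocities, radii, engine_rpm):
    """Swirl ratio estimated from CFD cell data: the tangential velocity
    v [m/s] of each mesh element at radius r [m] gives an angular velocity
    omega = v / r [rad/s]; their mean is normalized by the crankshaft
    angular velocity. Volume weighting is omitted here for brevity.
    """
    omegas = [v / r for v, r in zip(tangential_velocities, radii)]
    omega_mean = sum(omegas) / len(omegas)
    omega_engine = engine_rpm * 2.0 * math.pi / 60.0  # rpm -> rad/s
    return omega_mean / omega_engine
```

In practice the element values would come from the FLUENT solution fields rather than plain lists.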
Procedia PDF Downloads 196
20988 Experimental Investigation of the Effect of Glass Granulated Blast Furnace Slag on Pavement Quality Concrete Pavement Made of Recycled Asphalt Pavement Material
Authors: Imran Altaf Wasil, Dinesh Ganvir
Abstract:
Due to a scarcity of virgin aggregates, the use of reclaimed asphalt pavement (RAP) as a substitute for natural aggregates has gained popularity. Although RAP is recycled in asphalt pavement, there is still excess RAP, and its use in concrete pavements has expanded in recent years. According to a survey, 98 percent of India's pavements are flexible. As a result, the maintenance and reconstruction of such pavements generate RAP, which can be reused in concrete pavements as well as in the surface course, base course, and sub-base of flexible pavements. Various studies on the properties of reclaimed asphalt pavement and its optimal requirements for use in concrete have been conducted over the years. In this study, a total of four different mixes were prepared by partially replacing natural aggregates with RAP in different proportions. It was found that as the replacement level of natural aggregates by RAP increased, the mechanical and durability properties were reduced. In order to increase the mechanical strength of the mixes, 40% Glass Granulated Blast Furnace Slag (GGBS) was used, and it was found that with the replacement of cement by 40% GGBS there was an enhancement in the mechanical and durability properties of the RAP-inclusive PQC mixes. The reason behind the improvement in the properties is the processing technique used to remove the contaminant layers present in the coarse RAP aggregates. The replacement of natural aggregate with RAP was done in proportions of 20%, 40% and 60%, along with the partial replacement of cement by 40% GGBS. It was found that all the mixes surpassed the design target value of 40 MPa in compression and 4.5 MPa in flexure, making the approach much more economical and feasible.Keywords: reclaimed asphalt pavement, pavement quality concrete, glass granulated blast furnace slag, mechanical and durability properties
Procedia PDF Downloads 112
20987 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation
Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim
Abstract:
In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to calculate the numerical solutions of nonlinear advection-diffusion equations. The primary aims of this paper are to remove any iteration process and to obtain an efficient algorithm with second-order convergence in time. To achieve these objectives, we use the second-order central finite difference and B-spline approximations of degree 2 and 3 to approximate the diffusion term and the spatial discretization, respectively. For the temporal discretization, the second-order backward difference formula is applied. To calculate the numerical solution at the starting point of the characteristic curves, we use the error correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. Also, the adaptability of the proposed method is demonstrated by numerical simulations for Burgers’ equations. Throughout these numerical simulations, it is shown that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy than other existing numerical schemes.Keywords: semi-Lagrangian method, iteration-free method, nonlinear advection-diffusion equation, second-order backward difference formula
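A minimal sketch of the BDF2 temporal discretization named above, applied to the linear test problem du/dt = λu. The paper applies BDF2 within a backward semi-Lagrangian framework to nonlinear advection-diffusion, which is not reproduced here; for the linear problem the implicit equation can be solved in closed form:

```python
def bdf2_linear(lam, u0, dt, n_steps):
    """Second-order backward difference formula (BDF2) for du/dt = lam*u.

    The two-step recurrence (3u_{n+1} - 4u_n + u_{n-1})/(2 dt) = lam*u_{n+1}
    is bootstrapped with one backward Euler step and, being linear,
    is solved for u_{n+1} in closed form each step.
    """
    u_prev = u0
    u_curr = u0 / (1.0 - lam * dt)  # backward Euler start-up step
    for _ in range(n_steps - 1):
        u_next = (4.0 * u_curr - u_prev) / (3.0 - 2.0 * lam * dt)
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

For a nonlinear right-hand side the same recurrence yields an implicit equation per step, which is exactly the iteration the paper's error correction methodology is designed to avoid.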
Procedia PDF Downloads 319
20986 Image Captioning with Vision-Language Models
Authors: Promise Ekpo Osaine, Daniel Melesse
Abstract:
Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community, as it connects vision and language understanding, especially in settings where a model is required to understand the content shown in an image and generate semantically and grammatically correct descriptions. In this project, we followed a standard approach to a deep learning-based image captioning model, an inject architecture for the encoder-decoder setup, where the encoder extracts image features and the decoder generates a sequence of words that represents the image content. As image encoders, we investigated ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. As a caption generation structure, we explored long short-term memory (LSTM). The CLIP-LSTM model demonstrated superior performance compared to the other encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance, with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM.Keywords: multi-modal AI systems, image captioning, encoder, decoder, BLEU score
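The BLEU-1 metric reported above is, for a single caption, a clipped unigram precision with a brevity penalty; a minimal single-reference sketch follows (corpus-level BLEU, as used in practice, aggregates counts over many captions and references):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU (BLEU-1) with brevity penalty for a single
    candidate/reference pair of token lists."""
    cand_counts, ref_counts = Counter(candidate), Counter(reference)
    # Clipped unigram matches: a candidate word only counts as many
    # times as it appears in the reference
    overlap = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = overlap / len(candidate)
    # Brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1.0 - len(reference) / len(candidate))
    return bp * precision
```

BLEU-4 extends the same idea to a geometric mean of 1- to 4-gram precisions.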
Procedia PDF Downloads 75
20985 Empirical Analyses of Students’ Self-Concepts and Their Mathematics Achievements
Authors: Adetunji Abiola Olaoye
Abstract:
The study examined students’ self-concepts and mathematics achievement vis-à-vis three existing theoretical models: the humanist self-concept (M1), the contemporary self-concept (M2) and the skills development self-concept (M3). As a qualitative research study, it comprised one research question, which was transformed into hypotheses vis-à-vis the existing theoretical models. The sample comprised twelve public secondary schools, from which twenty-five mathematics teachers, twelve counselling officers and one thousand students of Upper Basic II were selected based on intact classes, as the school administrations and system did not allow randomization. Two instruments, a 10-item Achievement Test in Mathematics (r1 = 0.81) and a 10-item student self-concept questionnaire (r2 = 0.75), were adapted, validated and used for the study. Data were analysed through descriptive statistics, one-way ANOVA, t-test and correlation statistics at the 5% level of significance. Findings revealed means and standard deviations of pre-achievement test scores of (51.322, 16.10), (54.461, 17.85) and (56.451, 18.22) for the Humanist Self-Concept, Contemporary Self-Concept and Skill Development Self-Concept, respectively. Apart from that, the study showed that there was a significant difference in the academic performance of students across the existing models (F-cal > F-value, df = (2,997); P < 0.05). Furthermore, the study revealed students’ achievement in mathematics and self-concept questionnaire scores with means and standard deviations of (57.4, 11.35) and (81.6, 16.49), respectively. The results confirmed an affirmative relationship with the Contemporary Self-Concept model, which identifies an individual's subject-specific self-concept as the primary determinant of higher academic achievement in the subject, as there is a statistical correlation between students’ self-concept and mathematics achievement vis-à-vis the Contemporary model (M2), with -Z_cal < -Z_val, df = 998: P < 0.05*.
The implications of the study were discussed, and recommendations and suggestions for further studies were proffered.Keywords: contemporary, humanists, self-concepts, skill development
Procedia PDF Downloads 236
20984 Influence of Gestational Diabetes Mellitus on the Activity of Steroid C17-Hydroxylase-C17,20-Lyase in Patients with Intrahepatic Cholestasis of Pregnancy
Authors: Leona Ondrejikova, Martin Hill, Antonin Parizek
Abstract:
The incidence of gestational diabetes mellitus (GDM) is higher in women predisposed to developing intrahepatic cholestasis of pregnancy (ICP). Both diseases are associated with altered steroidogenesis when compared with non-ICP controls. However, the effect of GDM on circulating steroids in ICP patients remains unclear. The question remains whether the levels of circulating steroids differ between ICP patients with and without GDM. In total, 10 ICP patients without GDM (ICP+GDM-), 7 ICP patients with GDM (ICP+GDM+), and 15 controls (ICP-GDM-) were monitored during late gestation, at labor, and during three periods postpartum (day 5, week 3, and week 6 postpartum) (Šimják et al., 2018). The relationships between steroid profiles and patient status were evaluated using an ANOVA model consisting of a subject factor, the between-subject factors Group (ICP+GDM+, ICP+GDM-, ICP-GDM-), gestational age at the diagnosis of ICP and gestational age at labor, and the within-subject factor Stage and the ICP × Stage interaction. The levels of the C21 and C19 Δ5 steroids and 5α/β-reduced C19 steroids were highest in ICP+GDM+, while those for the ICP-GDM- and ICP+GDM- groups were lower. For the C21 Δ4 steroids and their 5α/β-reduced metabolites, the steroid levels were highest in the ICP+GDM- group, intermediate in ICP-GDM- and lowest in ICP+GDM+. This higher concentration in the ICP+GDM- group may be of importance, as 5α-pregnane-3α,20α-diol disulfate is considered the substance inducing ICP. In general, these data show that comorbidity with GDM substantially changes the steroidome in ICP patients towards higher activity of the steroid CYP17A1 lyase step in the adrenal zona reticularis and reduced activity of the CYP17A1 hydroxylase step in the zona fasciculata. This is consistent with our previously published hypothesis about the critical role of the maternal zona reticularis in the pathophysiology of ICP.
Our present data also indicate that comorbidity with GDM might moderate the severity of ICP in this way.Keywords: CYP17A1, GC-MS, gestational diabetes mellitus, intrahepatic cholestasis of pregnancy
Procedia PDF Downloads 136
20983 Optimized Text Summarization Model on Mobile Screens for Sight-Interpreters: An Empirical Study
Authors: Jianhua Wang
Abstract:
To obtain key information quickly from long texts on the small screens of mobile devices, sight-interpreters need an optimized summarization model for fast information retrieval. Four summarization models based on previous studies were examined: title+key words (TKW), title+topic sentences (TTS), key words+topic sentences (KWTS) and title+key words+topic sentences (TKWTS). Psychological experiments were conducted on the four models for three different genres of interpreting texts to establish the optimized summarization model for sight-interpreters. This empirical study shows that the optimized summarization model for sight-interpreters to quickly grasp the key information of the texts they interpret is title+key words (TKW) for cultural texts, title+key words+topic sentences (TKWTS) for economic texts and key words+topic sentences (KWTS) for political texts.Keywords: different genres, mobile screens, optimized summarization models, sight-interpreters
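A minimal sketch of the TKW (title + key words) summary format found optimal for cultural texts; approximating key words by raw token frequency is our assumption, since the study does not state how its key words were extracted:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real system would use a fuller one
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def tkw_summary(title, body, k=5):
    """Title + key words (TKW) summary. Key words are approximated here
    by the k most frequent non-stopword tokens of the body text."""
    tokens = re.findall(r"[a-z]+", body.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    keywords = [w for w, _ in counts.most_common(k)]
    return {"title": title, "keywords": keywords}
```

The TTS, KWTS and TKWTS variants would swap in or add extracted topic sentences alongside (or instead of) the keyword list.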
Procedia PDF Downloads 313
20982 Biodegradable Cellulose-Based Materials for the Use in Food Packaging
Authors: Azza A. Al-Ghamdi, Abir S. Abdel-Naby
Abstract:
Cellulose acetate (CA) is a natural biodegradable polymer. It forms transparent films by the casting technique. CA suffers from a high degree of water permeability as well as low thermal stability at high temperatures. To adapt CA polymeric films to the manufacture of food packaging, their thermal and mechanical properties should be improved. The modification of CA by grafting it with N-Amino phenyl maleimide (N-APhM) led to the construction of hydrophobic branches throughout the polymeric matrix, which reduced its wettability compared to the parent CA. The branches built onto the polymeric chains were characterized by UV/Vis, 13C-NMR and ESEM. The improvement of the thermal properties was investigated and compared to the parent CA using thermal gravimetric analysis (TGA), differential scanning calorimetry (DSC), differential thermal analysis (DTA), contact angle and mechanical testing measurements. The results revealed that the water uptake was reduced by increasing the graft percentage. The thermal and mechanical properties were also improved.Keywords: cellulose acetate, food packaging, graft copolymerization, thermal properties
Procedia PDF Downloads 220
20981 Model Observability – A Monitoring Solution for Machine Learning Models
Authors: Amreth Chandrasehar
Abstract:
Machine Learning (ML) models are developed and run in production to solve various use cases that help organizations be more efficient and help drive the business. But this comes at a massive development cost and in lost business opportunities. According to a Gartner report, 85% of data science projects fail, and one of the factors impacting this is not paying attention to model observability. Model observability helps developers and operators pinpoint model performance issues such as data drift and helps identify the root cause of issues. This paper focuses on providing insights into incorporating model observability in model development and operationalizing it in production.Keywords: model observability, monitoring, drift detection, ML observability platform
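One common data-drift signal used in model observability tooling is the Population Stability Index (PSI); the paper does not prescribe a specific detector, so the following is an illustrative sketch:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI) between a reference sample
    (e.g. training-time feature values) and a production sample.
    Values above ~0.2 are usually read as significant drift
    (a common rule of thumb, not from the paper)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def frac(xs, b):
        # Fraction of xs falling in bin b, floored to avoid log(0)
        n = sum(1 for x in xs if b == min(int((x - lo) / width), bins - 1))
        return max(n / len(xs), 1e-6)

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))
```

An observability platform would compute such a statistic per feature and per prediction window, alerting when it crosses a threshold.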
Procedia PDF Downloads 110
20980 Interest Rate Prediction with Taylor Rule
Authors: T. Bouchabchoub, A. Bendahmane, A. Haouriqui, N. Attou
Abstract:
This paper presents simulation results of Forex prediction model equations in order to give an approximate forecast of interest rates. First, Hall-Taylor (HT) equations have been used with the Taylor rule (TR) to adapt them to the European and American Forex markets. Indeed, the initial Taylor rule equation is conceived for all Forex transactions in every state: it includes only one equation and six parameters. Here, the model has been used with the Hall-Taylor equations, initially including twelve equations, which have been reduced to only three equations. The analysis has been developed on the following basic macroeconomic variables: real exchange rate, investment wages, anticipated inflation, realized inflation, real production, interest rates, production gap and potential production. This model has been used to specifically study the impact of an inflation shock on key macroeconomic interest rates.Keywords: interest rate, Forex, Taylor rule, production, European Central Bank (ECB), Federal Reserve System (FED).
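The Taylor rule referred to above has the classic one-equation form i = r* + π + a_π(π − π*) + a_y·(output gap); a sketch with Taylor's original 0.5/0.5 coefficients follows (the paper's adapted six-parameter HT variant is not reproduced here):

```python
def taylor_rule(inflation, target_inflation, output_gap,
                neutral_real_rate=2.0, a_pi=0.5, a_y=0.5):
    """Classic Taylor (1993) rule for the nominal policy interest rate:
    i = r* + pi + a_pi*(pi - pi*) + a_y*(output gap), all in percent.
    The 0.5/0.5 coefficients and r* = 2% follow Taylor's original
    specification; central banks calibrate these to local conditions."""
    return (neutral_real_rate + inflation
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)
```

With inflation on target and a closed output gap, the rule returns the neutral nominal rate r* + π; an inflation shock raises the recommended rate more than one-for-one.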
Procedia PDF Downloads 525
20979 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix
Authors: Wesley Teskey, Vedran Glavas, Julian Wegener
Abstract:
Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries are manufactured and tested to verify the simulation results. Unfortunately, a key drawback of FEA is that this method of simulation is very slow in terms of computational runtime. Generative AI offers the key advantage of speed compared to FEA, and because of this, generative AI is capable of evaluating very large numbers of candidate microstructures. Given AI-generated candidate microstructures, a subset of the promising microstructures can be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructural selection, because high speed allows for the evaluation of many more candidate microstructures. In the approach presented, 3D battery cathode candidate microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. This approach first uses GMMs to generate a population of spheres (representing the “active material” of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps over the 3D microstructure iteratively, slice by slice, and adds details to determine which portions of the microstructure will become electrolyte and which will become binder. In this manner, each subsequent slice of the microstructure is evaluated using pix2pix, where the inputs into pix2pix are the previously processed layers of the microstructure. By feeding pix2pix previously fully processed layers of the microstructure, pix2pix can be used to ensure candidate microstructures represent a realistic physical reality.
More specifically, for the microstructure to represent a realistic physical reality, the locations of electrolyte and binder in each layer of the microstructure must reasonably match their locations in previous layers to ensure geometric continuity. Using the approach outlined above, a 10x to 100x speed increase was achieved when generating candidate microstructures with AI compared to an FEA-only approach. A key metric for evaluating microstructures was the specific power the battery microstructures would be able to produce. The best generative AI result obtained was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach was capable of producing. This 12% increase in specific power was verified by FEA simulation.Keywords: finite element analysis, Gaussian mixture models, generative design, Pix2Pix, structural design
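The sphere-placement step described above can be sketched in a few lines. The mixture weights, mean radii, and sphere count below are invented for illustration; the actual GMM in the study is fitted to real cathode data, and the pix2pix stage is omitted here.

```python
import random

# Toy sketch (not the authors' code): sample sphere radii for the "active
# material" from a two-component Gaussian mixture, then place the spheres
# at random centers inside a unit cube.
random.seed(0)

MIXTURE = [  # (weight, mean radius, std dev) -- illustrative values only
    (0.7, 0.05, 0.01),
    (0.3, 0.10, 0.02),
]

def sample_radius():
    """Draw one radius from the Gaussian mixture (clamped to stay positive)."""
    r, acc = random.random(), 0.0
    for w, mu, sigma in MIXTURE:
        acc += w
        if r <= acc:
            return max(1e-3, random.gauss(mu, sigma))
    return max(1e-3, random.gauss(*MIXTURE[-1][1:]))

def sample_microstructure(n_spheres):
    """Place n_spheres spheres at uniform random centers in the unit cube."""
    return [(tuple(random.random() for _ in range(3)), sample_radius())
            for _ in range(n_spheres)]

spheres = sample_microstructure(200)
print(len(spheres))  # 200
```

In the full pipeline, this sphere population would then be voxelized and handed to pix2pix, which fills in electrolyte and binder slice by slice.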
Procedia PDF Downloads 105
20978 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level
Authors: Pedro M. Abreu, Bruno R. Mendes
Abstract:
The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare ever since. Co-payments lead to a rationing and rationalization of users’ access to healthcare services and products and, simultaneously, to a qualification and improvement of those services and products for the end-user. This analysis, of hospital practices in particular and co-payment strategies in general, covered all European regions and identified four reference countries that apply this tool repeatedly and with different approaches. The structure, content, and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, and the results were expressed in a scorecard, allowing the conclusion that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) can achieve more compliance and effectiveness, the English models (total score of 50%) can be more accessible, and the French models (total score of 50%) can be more adequate to the socio-economic and legal framework. Other European models did not show the same quality and/or performance and so were not taken as a standard in the future design of co-payment strategies.
In this sense, co-payments can be seen not only as a strategy to moderate the consumption of healthcare products and services, but especially as a strategy to improve them, as well as to increase the value that the end-user assigns to these services and products, such as medicines.Keywords: clinical pharmacy, co-payments, healthcare, medicines
Procedia PDF Downloads 250
20977 Mixed Treatment (Physical-Chemical and Biological) of Ouled Fayet Landfill Leachates
Authors: O. Balamane-Zizi, L. M. Rouidi, A. Boukhrissa, N. Daas, H. Ait-amar
Abstract:
The objective of this study was to test the possibility of a mixed (physical-chemical and biological) treatment of Ouled Fayet leachates, which are about 10 years old and have a large fraction of hard COD that can be reduced by coagulation-flocculation. Previous batch tests showed the possibility of applying the physical-chemical and biological treatments separately; the removal efficiencies obtained in that case were not satisfactory. We therefore propose to test a combined treatment in order to improve the quality of the leachates. The treatment’s effectiveness was estimated by analysis of pollution parameters such as COD, suspended solids, and heavy metals (particularly iron and nickel). The main results obtained after the combination of treatments show reduction rates of about 63% for COD, 73% for suspended solids, and 80% for iron and nickel. We also noted an improvement in the turbidity of the treated leachates.Keywords: landfill leachates, COD, physical-chemical treatment, biological treatment
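For reference, the reduction rates quoted above follow the standard removal-efficiency formula. The inlet/outlet concentrations in this sketch are assumed values chosen to reproduce the ~63% COD figure, not measurements from the study.

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a pollution parameter after treatment."""
    return 100.0 * (c_in - c_out) / c_in

# Illustrative concentrations (assumed, not the study's measured values):
# a COD drop from 4000 to 1480 mg O2/L corresponds to the ~63% reported.
print(round(removal_efficiency(4000.0, 1480.0), 1))  # 63.0
```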
Procedia PDF Downloads 470
20976 Fuzzy-Machine Learning Models for the Prediction of Fire Outbreak: A Comparative Analysis
Authors: Uduak Umoh, Imo Eyoh, Emmauel Nyoho
Abstract:
This paper compares fuzzy-machine learning algorithms, namely Support Vector Machine (SVM) and K-Nearest Neighbour (KNN), for predicting cases of fire outbreak. The paper uses a fire outbreak dataset with three features (temperature, smoke, and flame). The data is pre-processed using an Interval Type-2 Fuzzy Logic (IT2FL) algorithm, Min-Max normalization, and Principal Component Analysis (PCA), which predict feature labels in the dataset, normalize the dataset, and select relevant features, respectively. The output of the pre-processing is a dataset with two principal components (PC1 and PC2). The pre-processed dataset is then used to train the aforementioned machine learning models. The K-fold (with K=10) cross-validation method is used to evaluate the performance of the models using the metrics ROC (Receiver Operating Characteristic curve), specificity, and sensitivity. The models are also tested with 20% of the dataset. The validation results show that KNN is the better model for fire outbreak detection, with an ROC value of 0.99878, followed by SVM with an ROC value of 0.99753.Keywords: machine learning algorithms, Interval Type-2 Fuzzy Logic, fire outbreak, Support Vector Machine, K-Nearest Neighbour, Principal Component Analysis
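The KNN stage after PCA reduces to majority voting among the nearest training points in the (PC1, PC2) plane. A minimal sketch, with invented toy points and labels rather than the paper's dataset:

```python
import math
from collections import Counter

# Toy post-PCA training data: ((PC1, PC2), label). Values are invented.
train = [
    ((0.1, 0.2), "no_fire"), ((0.2, 0.1), "no_fire"), ((0.3, 0.3), "no_fire"),
    ((2.0, 2.1), "fire"),    ((2.2, 1.9), "fire"),    ((1.8, 2.0), "fire"),
]

def knn_predict(query, k=3):
    """Classify a query point by majority vote among its k nearest neighbours."""
    nearest = sorted(train, key=lambda p: math.dist(query, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((1.9, 2.0)))  # fire
print(knn_predict((0.2, 0.2)))  # no_fire
```

In the actual study, the same classification step would be wrapped in 10-fold cross-validation and scored by ROC, specificity, and sensitivity.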
Procedia PDF Downloads 179
20975 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Authors: H. Hruschka
Abstract:
This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables. Note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves. One half is used for estimation; the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. Hidden variables characterized by the product categories to which they are related differ strongly between these three models. To derive managerial implications we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research by appropriate extensions. To include predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models
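The layer structure described above rests on the RBM's conditional independence: given a basket vector v, each hidden unit j fires independently with probability sigmoid(c_j + Σ_i W_ij v_i). A minimal sketch with invented weights (not trained on the grocery data):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative (untrained) parameters: 3 product categories, 2 hidden units.
W = [[1.5, -0.5],   # category 0 -> hidden 0, hidden 1
     [1.0,  0.2],   # category 1
     [-2.0, 0.8]]   # category 2
c = [-0.5, 0.1]     # hidden biases

def hidden_probs(v):
    """P(h_j = 1 | v) for each hidden unit, given a binary basket v."""
    return [sigmoid(c[j] + sum(W[i][j] * v[i] for i in range(len(v))))
            for j in range(len(c))]

basket = [1, 1, 0]  # purchased categories 0 and 1
print([round(p, 3) for p in hidden_probs(basket)])  # -> [0.881, 0.45]
```

A deep belief net stacks this step: the sampled hidden activations of one RBM become the visible input of the next layer's RBM.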
Procedia PDF Downloads 199
20974 Elastoplastic and Ductile Damage Model Calibration of Steels for Bolt-Sphere Joints Used in China’s Space Structure Construction
Authors: Huijuan Liu, Fukun Li, Hao Yuan
Abstract:
The bolted spherical node is a common type of joint in space steel structures. The bolt-sphere joint portion almost always controls the bearing capacity of the bolted spherical node. Investigating the bearing performance and progressive failure in service often requires high-fidelity numerical models. This paper focuses on the constitutive models of the bolt steel and sphere steel used in China’s space structure construction. The elastoplastic model is determined from a standard tensile test with a calibrated Voce saturation hardening rule. Ductile damage is found to be dominant based on fractography analysis. The Rice-Tracey ductile fracture rule is then selected, and the model parameters are calibrated based on tensile tests of notched specimens. These calibrated material models can benefit research or engineering work in similar fields.Keywords: bolt-sphere joint, steel, constitutive model, ductile damage, model calibration
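The Voce saturation hardening rule mentioned above takes the form σ = σ_y + Q(1 − e^(−b·ε_p)): flow stress rises from the yield stress and saturates at σ_y + Q. The parameter values in this sketch are illustrative, not the calibrated values from the paper.

```python
import math

def voce_stress(eps_p, sigma_y=355.0, Q=180.0, b=20.0):
    """Voce flow stress (MPa) at plastic strain eps_p.

    sigma_y: yield stress; Q: saturation hardening increment;
    b: saturation rate. All values here are assumed, not calibrated.
    """
    return sigma_y + Q * (1.0 - math.exp(-b * eps_p))

print(round(voce_stress(0.0), 1))  # 355.0, the yield stress
print(round(voce_stress(1.0), 1))  # 535.0, near saturation sigma_y + Q
```

Calibration would fit sigma_y, Q, and b to the true stress-strain curve from the standard tensile test.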
Procedia PDF Downloads 135
20973 Modeling Core Flooding Experiments for CO₂ Geological Storage Applications
Authors: Avinoam Rabinovich
Abstract:
CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow
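At its simplest, the forward model behind such an inversion is Darcy's law; fitting an effective permeability to pressure-drop/flow-rate pairs is a zero-dimensional analogue of the three-dimensional inverse problem described above. The core geometry and synthetic data below are assumed for illustration.

```python
# Least-squares fit of an effective permeability k from core flooding data
# via Darcy's law, Q = k * A * dP / (mu * L). All numbers are synthetic.
A, L, mu = 1.14e-3, 0.10, 5.0e-4   # core area m^2, length m, viscosity Pa*s

# (dP in Pa, measured Q in m^3/s); generated with k = 1e-13 m^2, no noise.
data = [(1.0e5, 2.28e-7), (2.0e5, 4.56e-7), (3.0e5, 6.84e-7)]

# Closed-form least squares for Q = s * dP, with slope s = k * A / (mu * L):
s = sum(dp * q for dp, q in data) / sum(dp * dp for dp, _ in data)
k = s * mu * L / A
print(f"{k:.3e}")  # -> 1.000e-13
```

The actual method solves for a full 3D permeability field, together with relative permeability and capillary pressure functions, rather than a single scalar.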
Procedia PDF Downloads 66
20972 Developing a Third Degree of Freedom for Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model's properties are usually explored by testing two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. It can be used to turn one model into another or to change a model’s output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model’s dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. Thus, in our work, we analyze how such a scale transformation may affect opinion dynamics models. We perform our analysis both with mathematical modeling and by validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics up to the qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by 100%.
By using two models from the standard literature, we show that a scale transformation can turn one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
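A toy illustration of the effect (our own sketch, not the authors' code): in a Deffuant-style bounded-confidence update, two agents who interact on the original scale can fall outside the confidence bound after a monotonic rescaling such as f(x) = x², so the dynamics change.

```python
# Bounded-confidence (Deffuant-style) update with confidence bound EPS and
# convergence rate MU. Parameter values are illustrative.
EPS, MU = 0.3, 0.5

def step(x, i, j):
    """One pairwise update: agents i and j converge only if within EPS."""
    if abs(x[i] - x[j]) < EPS:
        shift = MU * (x[j] - x[i])
        x[i], x[j] = x[i] + shift, x[j] - shift
    return x

raw = [0.5, 0.75]          # opinions on the original scale
print(step(raw[:], 0, 1))  # |diff| = 0.25 < 0.3 -> both move to 0.625

squared = [v * v for v in raw]   # the same opinions after f(x) = x^2
print(step(squared[:], 0, 1))    # |0.25 - 0.5625| > 0.3 -> no interaction
```

The same agents thus converge on one scale and ignore each other on the transformed scale, which is exactly the sensitivity the abstract quantifies.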
Procedia PDF Downloads 154
20971 Understanding the Role of Gas Hydrate Morphology on the Producibility of a Hydrate-Bearing Reservoir
Authors: David Lall, Vikram Vishal, P. G. Ranjith
Abstract:
Numerical modeling of gas production from hydrate-bearing reservoirs requires the solution of various thermal, hydrological, chemical, and mechanical phenomena in a coupled manner. Among the various reservoir properties that influence gas production estimates, the distribution of permeability across the domain is one of the most crucial parameters since it determines both heat transfer and mass transfer. The aspect of permeability in hydrate-bearing reservoirs is particularly complex compared to conventional reservoirs since it depends on the saturation of gas hydrates and hence, is dynamic during production. The dependence of permeability on hydrate saturation is mathematically represented using permeability-reduction models, which are specific to the expected morphology of hydrate accumulations (such as grain-coating or pore-filling hydrates). In this study, we demonstrate the impact of various permeability-reduction models, and consequently, different morphologies of hydrate deposits on the estimates of gas production using depressurization at the reservoir scale. We observe significant differences in produced water volumes and cumulative mass of produced gas between the models, thereby highlighting the uncertainty in production behavior arising from the ambiguity in the prevalent gas hydrate morphology.Keywords: gas hydrate morphology, multi-scale modeling, THMC, fluid flow in porous media
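One common family of permeability-reduction models is the Masuda-type power law k/k₀ = (1 − S_h)^N, where the exponent N encodes the assumed hydrate morphology (pore-filling and grain-coating hydrates are assigned different exponents). The exponents used below are illustrative, not calibrated values.

```python
def k_ratio(s_h, N):
    """Fraction of absolute permeability remaining at hydrate saturation s_h,
    using a Masuda-type power law; N depends on the assumed morphology."""
    return (1.0 - s_h) ** N

# Compare two illustrative morphology exponents across saturations:
for s_h in (0.2, 0.5, 0.8):
    print(s_h, round(k_ratio(s_h, N=3), 4), round(k_ratio(s_h, N=10), 6))
```

Because S_h evolves during depressurization, the permeability field computed this way is dynamic, which is the coupling the abstract highlights.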
Procedia PDF Downloads 218
20970 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area
Authors: Michelle Eliane Hernández-García, Angélica Lozano
Abstract:
Traffic accident studies usually consider the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors of greater or lesser relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land uses, and just 3% industrial use. The traffic mix is composed mainly of non-trucks; 39 traffic and speed sensors are located on main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, it is necessary to identify the elements of the traffic mix that are linked to traffic accidents. Models that attempt to explain which factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector that provides an estimate of the natural logarithm of the mean number of accidents per period; this estimate is obtained by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors.
In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1 through C6, standing respectively for cars; microbuses and vans; buses; unitary trucks (2 to 6 axles); articulated trucks (3 to 6 axles); and bi-articulated trucks (5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the first, industrial zone than in the expanded zone.Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks
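The maximum-likelihood machinery behind these models can be sketched for the intercept-only case, where the fitted Poisson mean must equal the sample mean of the counts. The accident counts below are toy values, not the study's 17,520 observations.

```python
import math

# Intercept-only Poisson regression: log(mean accidents per period) = beta0.
# The MLE found by gradient ascent equals the log of the sample mean.
counts = [0, 1, 0, 2, 1, 0, 0, 3, 1, 2]   # toy accident counts per period

def log_likelihood(beta0):
    lam = math.exp(beta0)
    return sum(y * beta0 - lam - math.lgamma(y + 1) for y in counts)

beta0 = -1.0
for _ in range(2000):                               # plain gradient ascent
    grad = sum(counts) - len(counts) * math.exp(beta0)
    beta0 += 1e-3 * grad

print(round(math.exp(beta0), 4))  # fitted mean; the sample mean is 1.0
```

The study's full models simply extend beta0 to a linear combination of the C1-C6 traffic-mix vectors and average speed under the same log link.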
Procedia PDF Downloads 126
20969 A Study on Weight-Reduction of Double Deck High-Speed Train Using Size Optimization Method
Authors: Jong-Yeon Kim, Kwang-Bok Shin, Tae-Hwan Ko
Abstract:
The purpose of this paper is to suggest a weight-reduction design method for the aluminum extrusion carbody structure of a double deck high-speed train using the size optimization method. The size optimization method was used to optimize the thicknesses of the skin and ribs of the aluminum extrusions for the carbody structure. The thicknesses of the 1st underframe, 2nd underframe, solebar, and roof frame were selected as design variables for the size optimization. The results of the size optimization analysis showed that the weight of the aluminum extrusion could be reduced by 0.61 tons (5.60%) compared to the weight of the original carbody structure.Keywords: double deck high-speed train, size optimization, weight-reduction, aluminum extrusion
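The idea of size optimization can be reduced to a one-variable sketch: find the smallest member thickness that still satisfies a stress constraint, and report the resulting weight saving. The load, geometry, and allowable stress below are invented, and far simpler than the FE-based optimization of the actual carbody.

```python
# One-variable size-optimization sketch: thinnest plate thickness t such
# that the direct stress F / (w * t) stays below the allowable stress.
F = 5.0e4            # applied load, N (assumed)
w = 0.5              # load-carrying width, m (assumed)
sigma_allow = 1.6e8  # allowable stress, Pa (assumed)
t_orig = 0.00075     # original thickness, m (assumed)

t_min = F / (w * sigma_allow)               # the stress constraint is active
saving = 100.0 * (t_orig - t_min) / t_orig  # weight scales linearly with t
print(round(t_min * 1000, 3), "mm,", round(saving, 1), "% lighter")
```

The real problem does this simultaneously for the skin and rib thicknesses of several extrusions, with FE-computed stresses instead of the closed-form one.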
Procedia PDF Downloads 288
20968 Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelet Transform
Authors: Bamidele Samson Alobalorun, Ifedotun Roseline Idowu
Abstract:
Biometric technologies use human body parts for unique and reliable identification based on physiological traits. An iris recognition system is a biometric-based method of identification. The human iris has discriminating characteristics that make the method efficient. To achieve this efficiency, the distinct features of the human iris must be extracted in order to generate accurate authentication of persons. In this study, an approach to iris recognition using 2D Gabor filters for feature extraction is applied to iris templates. The 2D Gabor filter formulated the patterns that were used for training and were then sent to the Hamming distance matching technique for recognition. A comparison of results is presented using two iris image subjects with different matching indices (1 to 5) of the filter, based on the CASIA iris image database. By comparing the two subjects' results, the actual computational time of the developed models, measured in terms of training time and average testing time of the Hamming distance classifier, is found, with a best recognition accuracy of 96.11%. Iris localization and segmentation use Daugman's integro-differential operator, and normalization is confined to Daugman's rubber sheet model.Keywords: Daugman rubber sheet, feature extraction, Hamming distance, iris recognition system, 2D Gabor wavelet transform
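The matching stage reduces to a normalized Hamming distance taken over circular shifts of the iris code (the shifts compensate for eye rotation between captures). The 16-bit codes below are toy strings, far shorter than real iris codes:

```python
def hamming(a, b):
    """Normalized Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def match_score(code, template, max_shift=2):
    """Best (lowest) Hamming distance over circular shifts of the template."""
    return min(hamming(code, template[s:] + template[:s])
               for s in range(-max_shift, max_shift + 1))

enrolled = "1011001110100101"
probe    = "0110011101001011"   # the enrolled code rotated left by one bit

print(match_score(probe, enrolled))  # 0.0 -> same iris, just rotated
print(match_score("0000111100001111", enrolled))
```

In a full system the bits would come from quantizing the phase of the 2D Gabor responses over the rubber-sheet-normalized iris, and a score below a decision threshold would count as a match.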
Procedia PDF Downloads 63
20967 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study
Authors: Ignatio Madanhire, Charles Mbohwa
Abstract:
This study looks at the best model of aggregate planning activity in an industrial entity and uses the trial-and-error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from the models. Finally, some benchmarking against other furniture manufacturing industries was undertaken to assess the relevance and level of use of the models in other furniture firms.Keywords: aggregate production planning, trial and error, linear programming, furniture industry
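The trial-and-error spreadsheet approach can be mimicked by brute-force enumeration over a grid of candidate production levels; for problems of realistic size, the linear programming model replaces this search. The demands and costs below are invented for illustration.

```python
from itertools import product

# Tiny aggregate-production-planning sketch: enumerate candidate production
# levels per period and keep the cheapest plan that meets demand from stock.
demand = [100, 150, 120]            # units required per period (assumed)
prod_cost, hold_cost = 8.0, 2.0     # per unit produced / held (assumed)
levels = range(0, 201, 10)          # candidate production quantities

best = None
for plan in product(levels, repeat=len(demand)):
    inv, cost, feasible = 0, 0.0, True
    for p, d in zip(plan, demand):
        inv += p - d
        if inv < 0:                 # demand must be met each period
            feasible = False
            break
        cost += prod_cost * p + hold_cost * inv
    if feasible and (best is None or cost < best[0]):
        best = (cost, plan)

print(best)  # -> (2960.0, (100, 150, 120)): chase demand exactly
```

With level production costs and positive holding costs, producing exactly to demand is optimal here; adding capacity limits or overtime costs would shift the answer, which is where the LP formulation earns its keep.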
Procedia PDF Downloads 555