Search results for: cluster model approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26816

20156 A Three-Dimensional Investigation of Stabilized Turbulent Diffusion Flames Using Different Types of Fuel

Authors: Moataz Medhat, Essam E. Khalil, Hatem Haridy

Abstract:

In the present study, a three-dimensional numerical simulation is used to model the steady-state combustion of a staged natural gas flame in a 300 kW swirl-stabilized burner, using the ANSYS solver to find the highest combustion efficiency by varying the inlet air swirl number and burner quarl angle in a furnace, and to show the effects of flue gas recirculation, fuel type, and staging. The combustion chamber of the gas turbine is a cylinder of 1006.8 mm diameter and 1651 mm height, ending with a hood that leads to the exhaust cylinder, through which the combustion products exit, of 300 mm diameter and 751 mm height. A 15-degree sector of the circumference was modeled owing to the axisymmetry of the geometry and divided into a mesh of about 1.1 million cells. The numerical simulations were performed by solving the governing equations in a three-dimensional model, using the realizable k-epsilon equations to express the turbulence and the non-premixed flamelet combustion model, taking radiation effects into consideration. The results were validated by comparison with experimental data to ensure agreement. The study showed two recirculation zones. The primary one is at the center of the furnace, and the location of the secondary one varies with the quarl angle of the burner. It is found that the increase in temperature in the external recirculation zone is a result of increasing the swirl number of the inlet air stream. It was also found that recirculating part of the combustion products back to the combustion zone decreases pollutant formation, especially nitrogen monoxide.
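
As a rough illustration of the swirl characterization central to such studies, the sketch below computes the inlet swirl number from radial velocity profiles. The profiles and dimensions are invented for illustration; the paper does not provide them.

```python
import numpy as np

# Swirl number S = G_theta / (R * G_x): the axial flux of angular
# momentum over the axial flux of axial momentum times the exit radius.
# Profiles and dimensions below are invented for illustration.
R = 0.05                          # burner exit radius [m]
r = np.linspace(0.0, R, 50)       # radial positions [m]
u = 20.0 * (1 - (r / R) ** 2)     # axial velocity profile [m/s]
w = 12.0 * (r / R)                # tangential (swirl) velocity [m/s]
rho = 1.2                         # air density [kg/m^3]

G_theta = np.trapz(rho * u * w * r**2, r)   # angular momentum flux
G_x = np.trapz(rho * u**2 * r, r)           # axial momentum flux
S = G_theta / (R * G_x)
print(f"swirl number S = {S:.2f}")
```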

Keywords: burner selection, natural gas, analysis, recirculation

Procedia PDF Downloads 151
20155 Case Study Approach Using Scenario Analysis to Analyze Unabsorbed Head Office Overheads

Authors: K. C. Iyer, T. Gupta, Y. M. Bindal

Abstract:

Head office overhead (HOOH) is an indirect cost and is recovered through individual project billings by the contractor. Delay in a project impacts the absorption of the HOOH cost allocated to that particular project and thus diminishes the expected profit of the contractor. This unabsorbed HOOH cost is later claimed by contractors as damages. The subjective nature of the available formulae to compute unabsorbed HOOH is a difficulty that contractors and owners face and a source of dispute. The paper attempts to bring together the rationale of the various HOOH formulae by gathering a contractor's HOOH cost data on all of its projects, using a case study approach, and comparing variations in HOOH values using scenario analysis. The case study approach uses project data collected from four construction projects of a contractor in India to calculate unabsorbed HOOH costs from the various available formulae. Scenario analysis provides further variations in HOOH values after considering two independent situations, namely scope changes and new projects during the delay period. Interestingly, one of the findings of this study reveals that, in spite of HOOH being absorbed by additional works available during the period of delay, a few formulae depict an increase in the value of unabsorbed HOOH, neglecting any absorption by the increase in scope. This indicates that these formulae are inappropriate for use in case of a change to the scope of work. The results of this study can help both parties decide on an appropriate formula more objectively, considering the events causing the delay on a project and the contractor's position in respect of obtaining new projects.
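
For context, the best-known formulae in this area are the Hudson, Emden, and Eichleay formulae. The sketch below evaluates unabsorbed HOOH under each of them; all input figures are invented for illustration and are not data from the study.

```python
# Hudson, Emden, and Eichleay formulae for unabsorbed head office
# overheads; all inputs below are illustrative, not from the study.

def hudson(ho_profit_pct, contract_sum, contract_period, delay):
    # Hudson: (HO & profit % / 100) x (contract sum / period) x delay
    return ho_profit_pct / 100 * contract_sum / contract_period * delay

def emden(total_ho, total_turnover, contract_sum, contract_period, delay):
    # Emden: the actual HO/turnover ratio replaces the tendered percentage.
    return total_ho / total_turnover * contract_sum / contract_period * delay

def eichleay(contract_billings, total_billings, total_ho,
             days_performed, delay_days):
    # Eichleay: allocate HO to the contract pro rata by billings,
    # convert to a daily rate, and multiply by the delay.
    allocable = contract_billings / total_billings * total_ho
    daily_rate = allocable / days_performed
    return daily_rate * delay_days

print(hudson(7.5, 10_000_000, 365, 60))
print(emden(1_200_000, 18_000_000, 10_000_000, 365, 60))
print(eichleay(10_000_000, 40_000_000, 1_200_000, 425, 60))
```

Because Hudson and Emden scale with the delay period regardless of replacement work, they can show unabsorbed HOOH growing even when additional scope absorbs overheads, which is the anomaly the study highlights.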

Keywords: absorbed and unabsorbed overheads, head office overheads, scenario analysis, scope variation

Procedia PDF Downloads 154
20154 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes

Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales

Abstract:

In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: kernel recursive least squares (KRLS) and support vector regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the medical guidewire end product, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.
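
A minimal benchmarking sketch in the spirit of the paper's comparison is shown below. Since KRLS has no scikit-learn implementation, its batch analogue, kernel ridge regression, stands in for it here; the process data and hyperparameters are synthetic assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for process data: inputs could be line speed,
# die temperature, laser power; target is the resulting diameter.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = (0.8 * X[:, 0] - 0.3 * X[:, 1] ** 2 + 0.1 * np.sin(6 * X[:, 2])
     + rng.normal(0, 0.01, 200))

models = {
    "KRR (batch analogue of KRLS)": KernelRidge(kernel="rbf", alpha=1e-3),
    "SVR": SVR(kernel="rbf", C=10.0, epsilon=0.005),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```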

Keywords: calibration, data modeling, industrial processes, machine learning

Procedia PDF Downloads 275
20153 Liver Tumor Detection by Classification through FD Enhancement of CT Image

Authors: N. Ghatwary, A. Ahmed, H. Jalab

Abstract:

In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differential (FD) enhancement is applied to the liver CT images, with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a variety of feature improvements, which increases the accuracy of classification. Each image is divided into NxN non-overlapping blocks to extract the desired features. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the one tested. Finally, each block is classified as tumor or non-tumor. Our approach is validated on a group of patients’ CT liver tumor datasets. The experimental results demonstrated the detection efficiency of the proposed technique.
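
The block-wise classification pipeline can be sketched as follows. The texture features below (block mean, standard deviation, gradient energy) are simple stand-ins for the paper's FD-enhanced and fused features, and the images and labels are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Split a CT slice into N x N non-overlapping blocks and extract simple
# per-block texture features; hypothetical stand-ins for the paper's
# FD-enhanced features.
def block_features(img, n=16):
    feats = []
    h, w = img.shape
    for i in range(0, h - n + 1, n):
        for j in range(0, w - n + 1, n):
            b = img[i:i + n, j:j + n]
            gy, gx = np.gradient(b.astype(float))
            feats.append([b.mean(), b.std(), np.mean(gx**2 + gy**2)])
    return np.array(feats)

rng = np.random.default_rng(1)
train_img = rng.integers(0, 256, (256, 256))        # placeholder CT slice
X_train = block_features(train_img)
y_train = rng.integers(0, 2, len(X_train))          # placeholder labels

clf = SVC(kernel="rbf").fit(X_train, y_train)
test_img = rng.integers(0, 256, (256, 256))
print(clf.predict(block_features(test_img))[:10])   # tumor / non-tumor per block
```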

Keywords: fractional differential (FD), computed tomography (CT), fusion, alpha, texture features

Procedia PDF Downloads 344
20152 Reacting Numerical Simulation of Axisymmetric Trapped Vortex Combustors for Methane, Propane and Hydrogen

Authors: Heval Serhat Uluk, Sam M. Dakka, Kuldeep Singh, Richard Jefferson-Loveday

Abstract:

The aviation sector accounted for 3.8% of total carbon emissions in 2017, and its footprint is expected to triple by 2050. New combustion approaches and fuel types are necessary to prevent this. This paper focuses on using propane, methane, and hydrogen as fuel replacements for kerosene and implements a trapped vortex combustor design to increase efficiency. Reacting simulations were conducted for an axisymmetric trapped vortex combustor to investigate the static pressure drop, combustion efficiency, and pattern factor for cavity aspect ratios of 0.3, 0.6, and 1 and air inlet velocities of 14 m/s, 28 m/s, and 42 m/s. Propane, methane, and hydrogen are used as alternative fuels. The combustion model was anchored on a swirl flame configuration with an emphasis on high-fidelity boundary conditions, with favorable results from the eddy dissipation model implementation. The Reynolds-averaged Navier-Stokes (RANS) k-ε turbulence model was used for the validation effort. A grid independence study was conducted for the three-dimensional model to reduce computational time. Preliminary results for a 24 m/s air inlet velocity provided a temperature profile inside the cavity close to that of the experimental study. The investigation will examine the effect of air inlet velocity and cavity aspect ratio on the combustion efficiency, pattern factor, and static pressure drop in the combustor. A comparison among pure methane, propane, and hydrogen will be conducted to investigate their suitability for trapped vortex combustors and to establish their advantages and disadvantages as fuel replacements. The study is thus one of the milestones towards achieving zero carbon emissions by 2050 or reducing carbon emissions.
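
Of the three metrics tracked, the pattern factor has a compact definition worth spelling out; a small computation sketch with illustrative exit-plane temperatures (not the paper's data):

```python
import numpy as np

# Pattern factor PF = (T_max - T_mean_exit) / (T_mean_exit - T_inlet):
# a measure of exit-temperature non-uniformity. Values are illustrative.
T_exit = np.array([1650., 1720., 1800., 1770., 1690., 1660.])  # K
T_inlet = 550.0                                                 # K

T_mean = T_exit.mean()
pf = (T_exit.max() - T_mean) / (T_mean - T_inlet)
print(f"pattern factor = {pf:.3f}")

# Non-dimensional static pressure drop across the combustor.
p_in, p_out = 101325.0 * 4, 101325.0 * 3.85                     # Pa
print(f"pressure drop = {(p_in - p_out) / p_in:.2%} of inlet pressure")
```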

Keywords: computational fluid dynamics, aerodynamic, aerospace, propulsion, trapped vortex combustor

Procedia PDF Downloads 75
20151 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data

Authors: Sachin Nagargoje

Abstract:

Completely labeled data is often difficult to obtain in practice, and even when one manages to obtain it, its quality is always in question. In the shopping vertical, the input data are offers, which are provided by advertisers with or without good-quality information. In this paper, the author investigated the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (those with badly written offer titles or partial product details) in the shopping vertical domain. The author found that the semi-supervised learning method improved recall in the smartphone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. It also produced a significant increase in revenue, which cannot be publicly disclosed.
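
The paper does not name the algorithm beyond "a very simple semi-supervised approach"; one common realization is self-training, sketched below with scikit-learn on synthetic offer features (all data and thresholds are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic offer features (e.g., title length, detail completeness).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = unhealthy offer

# Only 5% of offers are labeled; -1 marks the unlabeled rest.
y = np.full(1000, -1)
labeled = rng.choice(1000, size=50, replace=False)
y[labeled] = y_true[labeled]

# Self-training: iteratively pseudo-labels confident unlabeled offers.
base = LogisticRegression()
model = SelfTrainingClassifier(base, threshold=0.9).fit(X, y)
pred = model.predict(X)
recall = (pred[y_true == 1] == 1).mean()
print(f"recall on unhealthy offers: {recall:.2f}")
```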

Keywords: semi-supervised learning, clustering, recall, coverage

Procedia PDF Downloads 107
20150 Modeling Flow and Deposition Characteristics of Solid CO2 during Choked Flow of CO2 Pipeline in CCS

Authors: Teng Lin, Li Yuxing, Han Hui, Zhao Pengfei, Zhang Datong

Abstract:

With the development of carbon capture and storage (CCS), the flow assurance of CO2 transportation becomes more important, particularly for supercritical CO2 pipelines. A relieving system using a choke valve is applied to control the pressure in the CO2 pipeline. However, the temperature of the fluid can drop rapidly because of Joule-Thomson cooling (JTC), which may cause solid CO2 to form and block the pipe. In this paper, a computational fluid dynamics (CFD) model, using a modified Lagrangian method, the Reynolds Stress Transport model (RSM) for turbulence, and a stochastic tracking model (STM) for particle trajectories, was developed to predict the deposition characteristics of solid carbon dioxide. The model predictions were in good agreement with experimental data published in the literature. It can be observed that the particle distribution affected the deposition behavior. In the region of the sudden expansion, the smaller particles, accumulated tightly on the wall, were dominant for pipe blockage. By contrast, the solid CO2 particles deposited near the outlet were usually bigger, and their stacked structure was looser. According to the calculation results, the particle movement can be classified into four main types: turbulent motion close to the sudden expansion structure, balanced motion in the middle of the sudden expansion region, inertial motion near the outlet, and escape. Consequently, the particle deposits accumulated primarily in the sudden expansion region, the reattachment region, and the outlet region. The Stokes number also had an effect on the deposition ratio, and it is recommended that the Stokes number range of 3-8 be avoided.
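
The Stokes number in the closing recommendation compares the particle response time with a characteristic flow time, St = rho_p * d_p^2 * U / (18 * mu * L); a quick calculation sketch with assumed property values:

```python
# Stokes number St = t_particle / t_flow, with the particle response
# time t_p = rho_p * d_p**2 / (18 * mu) and flow time t_f = L / U.
# All property values below are illustrative assumptions.
rho_p = 1562.0    # solid CO2 density [kg/m^3]
mu = 1.5e-5       # gas dynamic viscosity [Pa.s]
U = 30.0          # characteristic gas velocity [m/s]
L = 0.05          # characteristic length, e.g. expansion step height [m]

for d_p in (1e-6, 1e-5, 4e-5):          # particle diameters [m]
    t_p = rho_p * d_p**2 / (18 * mu)
    st = t_p * U / L
    flag = "  <-- in the 3-8 range to avoid" if 3 <= st <= 8 else ""
    print(f"d_p = {d_p:.0e} m: St = {st:.2f}{flag}")
```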

Keywords: carbon capture and storage, carbon dioxide pipeline, gas-particle flow, deposition

Procedia PDF Downloads 353
20149 Gender-Based Violence in Pakistan: Addressing the Root Causes

Authors: Hafiz Awais Ahmad

Abstract:

This paper aims to examine the root causes of gender-based violence (GBV) in Pakistan and proposes strategies to address this issue. Using a qualitative approach, this study analyzed data from various sources, including interviews with survivors of GBV and experts in the field. The findings revealed that GBV in Pakistan is deeply rooted in patriarchal attitudes and practices, economic insecurity, lack of education, and limited access to justice. The study recommends a multi-faceted approach to address GBV, including legislative reforms, awareness-raising campaigns, economic empowerment, and improved access to justice for survivors. Furthermore, the study highlights the importance of engaging men and boys in efforts to address GBV and promote gender equality. The findings of this study have important implications for policy-makers, practitioners, and researchers working towards ending GBV in Pakistan.

Keywords: gender-based violence, Pakistan, legislative reforms, advocacy

Procedia PDF Downloads 131
20148 Solar Power Satellites: Reconsideration Based on Novel Approaches

Authors: Alex Ellery

Abstract:

Solar power satellites (SPS), despite their promise as a clean energy source, have been relegated out of consideration due to their enormous cost and technological challenge. It has been suggested that for solar power satellites to become economically feasible, launch costs must decrease from their current $20,000/kg to < $200/kg. Even with the advent of single-stage-to-orbit launchers which propose launch costs dropping to $2,000/kg, this will not be realized. Yet, the advantages of solar power satellites are many. Here, I present a novel approach to reduce the specific cost of solar power satellites to ~$1/kg by leveraging two enabling technologies – in-situ resource utilization and 3D printing. The power of such technologies will open up enormous possibilities for providing additional options for combating climate change whilst meeting demands for global energy. From the constraints imposed by in-situ resource utilization, a novel approach to solar energy conversion in SPS may be realized.

Keywords: clean energy sources, in-situ resource utilisation, solar power satellites, thermionic emission

Procedia PDF Downloads 411
20147 Agent-Based Modeling to Simulate the Dynamics of Health Insurance Markets

Authors: Haripriya Chakraborty

Abstract:

The healthcare system in the United States is considered to be one of the most inefficient and expensive systems when compared to other developed countries. Consequently, there are persistent concerns regarding the overall functioning of this system. For instance, the large number of uninsured individuals and high premiums are pressing issues that are shown to have a negative effect on health outcomes with possible life-threatening consequences. The Affordable Care Act (ACA), which was signed into law in 2010, was aimed at improving some of these inefficiencies. This paper aims at providing a computational mechanism to examine some of these inefficiencies and the effects that policy proposals may have on reducing these inefficiencies. Agent-based modeling is an invaluable tool that provides a flexible framework to model complex systems. It can provide an important perspective into the nature of some interactions that occur and how the benefits of these interactions are allocated. In this paper, we propose a novel and versatile agent-based model with realistic assumptions to simulate the dynamics of a health insurance marketplace that contains a mixture of private and public insurers and individuals. We use this model to analyze the characteristics, motivations, payoffs, and strategies of these agents. In addition, we examine the effects of certain policies, including some of the provisions of the ACA, aimed at reducing the uninsured rate and the cost of premiums to move closer to a system that is more equitable and improves health outcomes for the general population. Our test results confirm the usefulness of our agent-based model in studying this complicated issue and suggest some implications for public policies aimed at healthcare reform.
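
A toy version of such a marketplace, far simpler than the paper's model and with entirely illustrative parameters, conveys the basic feedback loop between enrollment and premiums:

```python
import random

random.seed(0)

# Toy marketplace: individuals buy the cheapest affordable plan;
# insurers re-price toward their enrollees' expected costs.
# All parameters are illustrative, not from the paper.
N_PEOPLE, N_YEARS, SUBSIDY = 1000, 10, 0.0

people = [{"risk": random.uniform(500, 5000),
           "budget": random.uniform(1000, 6000)} for _ in range(N_PEOPLE)]
insurers = [{"premium": 3000.0}, {"premium": 3500.0}]

for year in range(N_YEARS):
    enrollments = [[] for _ in insurers]
    uninsured = 0
    for p in people:
        affordable = [(ins["premium"], i) for i, ins in enumerate(insurers)
                      if ins["premium"] - SUBSIDY <= p["budget"]]
        if affordable:
            enrollments[min(affordable)[1]].append(p)
        else:
            uninsured += 1
    for ins, members in zip(insurers, enrollments):
        if members:  # price toward expected claims plus a 10% load
            avg_risk = sum(m["risk"] for m in members) / len(members)
            ins["premium"] = 0.5 * ins["premium"] + 0.5 * 1.1 * avg_risk
    print(f"year {year}: uninsured rate = {uninsured / N_PEOPLE:.1%}")
```

Raising SUBSIDY above zero mimics an ACA-style premium subsidy and visibly lowers the uninsured rate in this toy setting.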

Keywords: agent-based modeling, healthcare reform, insurance markets, public policy

Procedia PDF Downloads 122
20146 Genetic Algorithm Optimization of the Economical, Ecological and Self-Consumption Impact of the Energy Production of a Single Building

Authors: Ludovic Favre, Thibaut M. Schafer, Jean-Luc Robyr, Elena-Lavinia Niederhäuser

Abstract:

This paper presents an optimization method based on a genetic algorithm for the energy management inside buildings, developed in the frame of the project Smart Living Lab (SLL) in Fribourg (Switzerland). This algorithm optimizes the interaction between renewable energy production, storage systems, and energy consumers. In comparison with standard algorithms, the innovative aspect of this project is the extension of the smart regulation over three simultaneous criteria: energy self-consumption, the decrease of greenhouse gas emissions, and operating costs. The genetic algorithm approach was chosen due to the large quantity of optimization variables and the non-linearity of the optimization function. The optimization process also includes real-time data of the building as well as weather forecasts and user habits. This information is used by a physical model of the building's energy resources to predict future energy production and needs, to select the best energy strategy, and to combine production and storage of energy in order to guarantee the demand for electrical and thermal energy. The principle of operation of the algorithm, as well as a typical output example, is presented.
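
A stripped-down sketch of a genetic algorithm over the three weighted criteria is given below; the tariff, CO2 factor, weights, and building profiles are invented placeholders, not the SLL values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the three-criteria optimization: each gene is the
# battery charge/discharge decision for one hour (-1..1 kW).
HOURS, POP, GENS = 24, 60, 200
pv = np.clip(np.sin(np.linspace(0, np.pi, HOURS)), 0, None) * 5   # kW
load = 2 + rng.random(HOURS)                                       # kW

def fitness(plan):
    grid = load - pv + plan                  # + = import, - = export
    imports = np.clip(grid, 0, None)
    cost = imports.sum() * 0.20              # flat tariff (illustrative)
    co2 = imports.sum() * 0.12               # kg CO2 per kWh imported
    self_cons = 1 - imports.sum() / load.sum()
    return -(0.4 * cost + 0.3 * co2) + 0.3 * 10 * self_cons  # toy weights

pop = rng.uniform(-1, 1, (POP, HOURS))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]            # selection
    idx = rng.integers(0, len(parents), (POP // 2, 2))
    cut = rng.integers(1, HOURS, POP // 2)
    children = np.array([np.concatenate([parents[i][:c], parents[j][c:]])
                         for (i, j), c in zip(idx, cut)])    # crossover
    children += rng.normal(0, 0.05, children.shape)          # mutation
    pop = np.vstack([parents, np.clip(children, -1, 1)])

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"best weighted score: {fitness(best):.2f}")
```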

Keywords: building's energy, control system, energy management, energy storage, genetic optimization algorithm, greenhouse gases, modelling, renewable energy

Procedia PDF Downloads 240
20145 Optimizing Recycling and Reuse Strategies for Circular Construction Materials with Life Cycle Assessment

Authors: Zhongnan Ye, Xiaoyi Liu, Shu-Chien Hsu

Abstract:

Rapid urbanization has led to a significant increase in construction and demolition waste (C&D waste), underscoring the need for sustainable waste management strategies in the construction industry. Aiming to enhance the sustainability of urban construction practices, this study develops an optimization model to effectively suggest the optimal recycling and reuse strategies for C&D waste, including concrete and steel. By employing Life Cycle Assessment (LCA), the model evaluates the environmental impacts of adopted construction materials throughout their lifecycle. The model optimizes the quantity of materials to recycle or reuse, the selection of specific recycling and reuse processes, and logistics decisions related to the transportation and storage of recycled materials with the objective of minimizing the overall environmental impact, quantified in terms of carbon emissions, energy consumption, and associated costs, while adhering to a range of constraints. These constraints include capacity limitations, quality standards for recycled materials, compliance with environmental regulations, budgetary limits, and temporal considerations such as project deadlines and material availability. The strategies are expected to be both cost-effective and environmentally beneficial, promoting a circular economy within the construction sector, aligning with global sustainability goals, and providing a scalable framework for managing construction waste in densely populated urban environments. The model is helpful in reducing the carbon footprint of construction projects, conserving valuable resources, and supporting the industry’s transition towards a more sustainable future.
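
A highly simplified instance of such an optimization, with invented LCA factors and capacities, can be phrased as a linear program; see the sketch below:

```python
import numpy as np
from scipy.optimize import linprog

# Decide how many tonnes of C&D concrete and steel to recycle, reuse,
# or landfill, minimizing total kg CO2e. All factors are illustrative.
#                 concrete            steel
# options:  recycle reuse landfill  recycle reuse landfill
co2 = np.array([12.0,  4.0, 25.0,    300.0, 80.0, 900.0])  # kg CO2e/t
A_eq = np.array([[1, 1, 1, 0, 0, 0],      # all concrete is assigned
                 [0, 0, 0, 1, 1, 1]])     # all steel is assigned
b_eq = [1000.0, 200.0]                     # tonnes of waste on hand
# Capacity/quality constraints: recycling intake, reuse quality caps.
A_ub = np.array([[1, 0, 0, 1, 0, 0],      # shared recycling capacity
                 [0, 1, 0, 0, 0, 0],      # reusable concrete limited
                 [0, 0, 0, 0, 1, 0]])     # reusable steel limited
b_ub = [900.0, 300.0, 120.0]

res = linprog(co2, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x, f"total impact = {res.fun:.0f} kg CO2e")
```

The study's model adds logistics and temporal constraints and a richer objective, but the structure (assignment variables, an emissions objective, capacity and quality bounds) is analogous.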

Keywords: circular construction, construction and demolition waste, material recycling, optimization modeling

Procedia PDF Downloads 42
20144 Investigations on the Influence of Optimized Charge Air Cooling for a Diesel Passenger Car

Authors: Christian Doppler, Gernot Hirschl, Gerhard Zsiga

Abstract:

Starting from 2020, an EU-wide CO2 limit of 95 g/km is scheduled for the average of an OEM's passenger car fleet. Considering that, further optimization measures on the diesel cycle will be necessary in order to reduce fuel consumption and emissions while keeping performance values at least adequate. The present article deals with charge air cooling (CAC) on the basis of a diesel passenger car model in a 0D/1D working-process calculation environment. The considered engine is a 2.4 litre EURO VI diesel engine with a variable geometry turbocharger (VGT) and low-pressure exhaust gas recirculation (LP EGR). The object of study was the impact of charge air cooling on the engine working process at constant boundary conditions, which was investigated with an available and validated engine model in AVL BOOST. Part load was realized with constant power and NOx emissions, whereas full load was accomplished with a lambda control in order to obtain maximum engine performance. The informative results were used to implement a simulation model in Matlab/Simulink, which is further integrated into a full vehicle simulation environment via coupling with ICOS (Independent Co-Simulation Platform). Next, the dynamic engine behavior was validated and modified with load steps taken from the engine test bed. Due to the modular setup of the co-simulation, different CAC models were simulated quickly with their different influences on the working process. In doing so, a new cooler variant does not need to be reproduced and implemented in the primary simulation model environment, but is added quickly and easily as an independent component of the simulation entity. By coupling the engine model, the longitudinal-dynamics vehicle model, and different CAC models (air/air and water/air variants) in both steady-state and transient operating modes, statements are gained regarding fuel consumption, NOx emissions, and power behavior. The fact that a complex engine model is no longer needed is very advantageous for the overall simulation volume. Besides the simulations with the mentioned demonstrator engine, several experimental investigations were also conducted on the engine test bench. Here, in particular, a comparison of a standard CAC with an intake-manifold-integrated CAC was executed. Simulative as well as experimental tests showed benefits for the water/air CAC variant (on the test bed, especially the intake-manifold-integrated variant). The benefits are illustrated by a reduced pressure loss and gains in air efficiency and CAC efficiency, all of which lead to minimized emissions and fuel consumption in stationary and transient operation.
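
As a reference point for how the cooler variants are compared, the sketch below computes charge-air-cooler effectiveness, eps = (T_in - T_out) / (T_in - T_coolant), alongside pressure loss; the temperatures and losses are illustrative, not test-bed data:

```python
# Quick comparison sketch of charge-air-cooler effectiveness, the kind
# of figure cooler variants are judged by. All values are illustrative.
def cac_effectiveness(t_in, t_out, t_coolant):
    return (t_in - t_out) / (t_in - t_coolant)

variants = {
    # name: (T_in [C], T_out [C], T_coolant [C], dp [mbar])
    "air/air CAC":                   (160.0, 52.0, 25.0, 120.0),
    "water/air CAC":                 (160.0, 48.0, 25.0, 80.0),
    "manifold-integrated water/air": (160.0, 46.0, 25.0, 45.0),
}
for name, (t_in, t_out, t_cool, dp) in variants.items():
    eff = cac_effectiveness(t_in, t_out, t_cool)
    print(f"{name}: effectiveness = {eff:.2f}, pressure loss = {dp:.0f} mbar")
```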

Keywords: air/water charge air cooler, co-simulation, diesel working process, EURO VI, fuel consumption

Procedia PDF Downloads 254
20143 Explainable Graph Attention Networks

Authors: David Pham, Yongfeng Zhang

Abstract:

Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs such as Graph Neural Networks (GNN) on various data mining and machine learning tasks. However, most of the deep learning models on graphs cannot easily explain their predictions and are thus often labelled as “black boxes.” For example, Graph Attention Network (GAT) is a frequently used GNN architecture, which adopts an attention mechanism to carefully select the neighborhood nodes for message passing and aggregation. However, it is difficult to explain why certain neighbors are selected while others are not and how the selected neighbors contribute to the final classification result. In this paper, we present a graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. We use a single model to target both the accuracy and explainability of problem spaces and show that in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show an increase in both accuracy and explainability.
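
The attention weights that XGAT's explanations build on come from the standard GAT scoring rule; a single attention head in plain numpy (random toy graph and features) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single GAT attention head: scores e_ij are computed only for edges,
# softmax-normalized per node, and expose which neighbors drive the
# update; explanation methods can read these weights directly.
def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

N, F_in, F_out = 4, 5, 3
H = rng.normal(size=(N, F_in))               # node features
W = rng.normal(size=(F_in, F_out))           # shared linear transform
a = rng.normal(size=2 * F_out)               # attention vector
adj = np.array([[1, 1, 0, 1],                # adjacency with self-loops
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [1, 0, 1, 1]])

Wh = H @ W
alpha = np.zeros((N, N))
for i in range(N):
    e = np.array([leaky_relu(a @ np.concatenate([Wh[i], Wh[j]]))
                  if adj[i, j] else -np.inf for j in range(N)])
    ex = np.exp(e - e[np.isfinite(e)].max())
    alpha[i] = ex / ex.sum()                 # attention over neighbors
out = alpha @ Wh                             # aggregated node features
print(np.round(alpha, 2))                    # row i: who node i attends to
```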

Keywords: explainable AI, graph attention network, graph neural network, node classification

Procedia PDF Downloads 169
20142 Studies on Dye Removal by Aspergillus niger Strain

Authors: M. S. Mahmoud, Samah A. Mohamed, Neama A. Sobhy

Abstract:

For color removal from wastewater containing organic contaminants, treatment systems ranging from physical and chemical methods, such as flocculation and coagulation, to biological systems have been widely used. Fungal decolorization of dye-containing wastewater is an important goal in industrial wastewater treatment. This work aimed to characterize an Aspergillus niger strain for dye removal from aqueous solution and from raw textile wastewater. Batch experiments studied color removal using the fungal isolate biomass under different conditions. Environmental conditions like pH, contact time, adsorbent dose, and initial dye concentration were studied. The influence of pH on the removal of azo dye by Aspergillus niger was examined between pH 1.0 and pH 11.0; the optimum pH for red dye decolorization was 9.0. Results showed that the decolorization of the dye decreased with increasing initial dye concentration. The adsorption data were analyzed with equilibrium isotherm models (the Freundlich and Langmuir models); during the adsorption isotherm studies, dye removal was better fitted by the Freundlich model. The isolated fungal biomass was characterized according to its surface area both before and after the decolorization process by scanning electron microscope (SEM) analysis. The results indicate that the isolated fungal biomass showed a high affinity for the dye in the decolorization process.
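
The two isotherms can be fitted directly in their nonlinear forms; a sketch with illustrative equilibrium data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir and Freundlich fits of the kind used in the study.
# Equilibrium data below are illustrative, not the paper's measurements.
Ce = np.array([5., 10., 20., 40., 80.])       # residual dye conc. [mg/L]
qe = np.array([8.5, 14.0, 21.0, 27.5, 32.0])  # uptake [mg/g]

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1 / n)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(40, 0.05))
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=(3, 2))
for name, fit in [("Langmuir", langmuir(Ce, qm, KL)),
                  ("Freundlich", freundlich(Ce, KF, n))]:
    ss_res = np.sum((qe - fit) ** 2)
    r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: R^2 = {r2:.3f}")
```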

Keywords: biomass, biosorption, dye, isotherms

Procedia PDF Downloads 294
20141 3D Microbubble Dynamics in a Weakly Viscous Fluid Near a Rigid Boundary Subject to Ultrasound

Authors: K. Manmi, Q. X. Wang

Abstract:

This paper investigates microbubble dynamics subject to ultrasound in a weakly viscous fluid near a rigid boundary. The phenomenon is simulated using a boundary integral method. The weak viscous effects are incorporated into the model through the normal stress balance across the bubble surface. The model agrees well with the Rayleigh-Plesset equation for a spherical bubble over several cycles. The effects of fluid viscosity on the bubble dynamics are analyzed, including jet development, centroid movement, and bubble volume.
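
The spherical benchmark the model is checked against can be reproduced in a few lines: the Rayleigh-Plesset equation under acoustic forcing, with illustrative water properties and drive parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rayleigh-Plesset equation for a spherical bubble under an acoustic
# drive; the benchmark the boundary-integral model is compared with.
# Parameter values are illustrative.
rho, mu, sigma = 998.0, 1e-3, 0.0725        # water properties (SI)
p0, R0, kappa = 101325.0, 5e-6, 1.4         # ambient pressure, radius
pa, f = 0.4 * p0, 100e3                     # drive amplitude, frequency

def rp(t, y):
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_inf = p0 + pa * np.sin(2 * np.pi * f * t)
    Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2 / R) / R
    return [Rdot, Rddot]

sol = solve_ivp(rp, (0, 5 / f), [R0, 0.0], max_step=1 / (200 * f))
print(f"max radius: {sol.y[0].max() / R0:.2f} R0")
```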

Keywords: microbubble dynamics, bubble jetting, viscous effect, boundary integral method

Procedia PDF Downloads 469
20140 Impact of Health Indicators on Economic Growth: Application of ARDL Model on Pakistan’s Data Set

Authors: Sheraz Ahmad Choudhary

Abstract:

Health plays a vital role in growth. This study examined the effect of health indicators on the growth of Pakistan. An ARDL model is used to estimate the growth effects of health, using time series data for Pakistan from 1990 to 2017. The variables are health indicators: fertility rate, life expectancy, foreign direct investment, and infant mortality rate. Unit root tests are applied to check the stationarity of the series. The results find a significant relationship between GDP and foreign direct investment, fertility rate, and life expectancy in the short run, whereas the mortality rate affects economic growth negatively but significantly. In the long run, foreign direct investment (FDI) and the fertility rate (FR) significantly influence GDP. The results show that economic growth is positively stimulated by most of the health indicators. The study concludes that nations can achieve a high level of economic growth by increasing the wellbeing of their human capital.
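
The ARDL structure behind such estimates can be made concrete with a hand-rolled ARDL(1,1) on synthetic series standing in for the study's data; note how the short-run coefficients combine into the long-run effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-rolled ARDL(1,1): GDP growth regressed on its own lag and on
# current + lagged life expectancy. All series are synthetic.
n = 28                                       # ~1990-2017 annual span
life_exp = np.linspace(59, 67, n) + rng.normal(0, 0.2, n)
gdp = np.empty(n)
gdp[0] = 4.0
for t in range(1, n):
    gdp[t] = (0.5 * gdp[t - 1] + 0.3 * life_exp[t]
              - 0.25 * life_exp[t - 1] + rng.normal(0, 0.3))

Y = gdp[1:]
X = np.column_stack([np.ones(n - 1), gdp[:-1], life_exp[1:], life_exp[:-1]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("const, GDP(-1), LE, LE(-1):", np.round(beta, 3))
# Long-run effect of life expectancy: (b2 + b3) / (1 - b1)
print("long-run coefficient:", round((beta[2] + beta[3]) / (1 - beta[1]), 3))
```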

Keywords: economic growth, health expenditures, fertility rate, human capital, life expectancy, foreign direct investment, infant mortality rate

Procedia PDF Downloads 113
20138 Integrated Nested Laplace Approximations for Quantile Regression

Authors: Kajingulu Malandala, Ranganai Edmore

Abstract:

The asymmetric Laplace distribution (ALD) is commonly used as the likelihood function of Bayesian quantile regression, and it offers a family of likelihood-based methods for quantile regression. Notwithstanding its popularity and practicality, the ALD is not smooth, which makes its likelihood difficult to maximize. Furthermore, Bayesian inference is time-consuming, and the selection of the likelihood may mislead the inference, as Bayes' theorem does not automatically establish the posterior inference. The ALD also does not account for greater skewness and kurtosis. This paper develops a new quantile regression approach for count data based on the inverse of the cumulative distribution functions of the Poisson, binomial, and Delaporte distributions, using integrated nested Laplace approximations. Our results validate the benefit of using the integrated nested Laplace approximations and support the approach for count data.
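
For context, the asymmetric Laplace density underlying standard Bayesian quantile regression is typically written as (a textbook form, not taken from the paper):

```latex
f(y \mid \mu, \sigma, \tau)
  = \frac{\tau(1-\tau)}{\sigma}
    \exp\!\left( -\rho_\tau\!\left( \frac{y-\mu}{\sigma} \right) \right),
\qquad
\rho_\tau(u) = u \left( \tau - \mathbf{1}\{u < 0\} \right)
```

Maximizing this likelihood at fixed tau amounts to minimizing the check loss sum_i rho_tau(y_i - mu_i), which is non-differentiable at zero; this is the smoothness difficulty the abstract points to.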

Keywords: quantile regression, Delaporte distribution, count data, integrated nested Laplace approximation

Procedia PDF Downloads 147
20137 Application of the LAWA Classification for Water Assessment by Nutrient Elements: The Case of the Oran Sebkha Basin

Authors: Boualla Nabila

Abstract:

The increasing demand for water, whether for drinking water supply or for agricultural and industrial use, requires a very thorough hydrochemical study to better protect and manage this resource. Oran is a city with comparatively poor water quality, and its growing population may recently have put stress on natural waters by impairing their quality. A water sampling campaign covering 55 points at different levels of the aquifer system was carried out for chemical analyses of nutrient elements. The results allowed us to approach the contamination problem with the largely uniform nationwide LAWA (Länderarbeitsgruppe Wasser) approach, based on the EU CIS guidance, which has been applied for the identification of pressures and impacts and allows for easy comparison. Groundwater samples were also analyzed for physico-chemical parameters such as pH, sodium, potassium, calcium, magnesium, chloride, sulphate, carbonate, and bicarbonate. The analytical results obtained in this hydrochemical study were interpreted using a Durov diagram. Based on these representations, the anomaly of high groundwater salinity observed in the Oran Sebkha basin was explained by the high chloride concentration and the presence of an inverse cation exchange reaction. The Durov diagram plot revealed that the groundwater evolved from Ca-HCO3 recharge water through mixing with pre-existing groundwater to give mixed water of Mg-SO4 and Mg-Cl types that eventually reached a final stage of evolution represented by a Na-Cl water type.
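
The water types quoted at the end follow from a dominant-ion rule on milliequivalent concentrations; a small classification sketch with an illustrative sample (not one of the 55 measured points):

```python
# Sketch of the dominant-ion classification behind water types such as
# Ca-HCO3, Mg-SO4, Mg-Cl and Na-Cl. Concentrations are illustrative.
EQ_WEIGHT = {"Ca": 20.04, "Mg": 12.15, "Na": 22.99,   # mg per meq
             "Cl": 35.45, "SO4": 48.03, "HCO3": 61.02}

def water_type(sample_mg_per_l):
    meq = {ion: c / EQ_WEIGHT[ion] for ion, c in sample_mg_per_l.items()}
    cation = max(("Ca", "Mg", "Na"), key=lambda i: meq[i])
    anion = max(("Cl", "SO4", "HCO3"), key=lambda i: meq[i])
    return f"{cation}-{anion}"

sample = {"Ca": 90, "Mg": 40, "Na": 460, "Cl": 710, "SO4": 250, "HCO3": 180}
print(water_type(sample))   # -> Na-Cl, the evolved end-member in the study
```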

Keywords: contamination, water quality, nutrient elements, LAWA approach, Durov diagram

Procedia PDF Downloads 261
20136 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method to find the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process to navigate the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, where meticulous error identification, correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model. Originally starting at an accuracy of 0.32, the model underwent substantial refinement, ultimately achieving an accuracy of 0.88. This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 46
20135 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

We present a new model based on viscoelasticity for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify the application of the operational matrix formulation and reduce the computational cost. Convergence analysis, error estimation, and numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.

Keywords: Legendre Wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 315
20134 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
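
The upper-quantile computation at the heart of the method can be imitated by Monte Carlo in a few lines; the sketch below draws jointly Gaussian GIC values with an invented mean and covariance, rather than performing the paper's exact multivariate Gaussian integration:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Upper quantile of the minimum of jointly Gaussian GIC values across
# K candidate models, via Monte Carlo. Mean/covariance are illustrative.
rng = np.random.default_rng(0)
K = 5
mu = np.array([100., 101., 103., 104., 108.])   # asymptotic GIC means
A = rng.normal(size=(K, K))
cov = A @ A.T + K * np.eye(K)                   # a valid covariance

draws = multivariate_normal(mu, cov).rvs(200_000, random_state=1)
min_gic = draws.min(axis=1)
q95 = np.quantile(min_gic, 0.95)
print(f"95% upper quantile of min GIC: {q95:.2f}")
# Models whose observed GIC falls below q95 stay in the uncertainty band.
```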

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 71
20133 A Collective Intelligence Approach to Safe Artificial General Intelligence

Authors: Craig A. Kaplan

Abstract:

If AGI proves to be a “winner-take-all” scenario where the first company or country to develop AGI dominates, then the first AGI must also be the safest. The safest, and fastest, path to Artificial General Intelligence (AGI) may be to harness the collective intelligence of multiple AI and human agents in an AGI network. This approach has roots in seminal ideas from four of the scientists who founded the field of Artificial Intelligence: Allen Newell, Marvin Minsky, Claude Shannon, and Herbert Simon. Extrapolating key insights from these founders of AI, and combining them with the work of modern researchers, results in a fast and safe path to AGI. The seminal ideas discussed are: 1) Society of Mind (Minsky), 2) Information Theory (Shannon), 3) Problem Solving Theory (Newell & Simon), and 4) Bounded Rationality (Simon). Society of Mind describes a collective intelligence approach that can be used with AI and human agents to create an AGI network. Information theory helps address the critical issue of how an AGI system will increase its intelligence over time. Problem Solving Theory provides a universal framework that AI and human agents can use to communicate efficiently, effectively, and safely. Bounded Rationality helps us better understand not only the capabilities of SuperIntelligent AGI but also how humans can remain relevant in a world where the intelligence of AGI vastly exceeds that of its human creators. Each key idea can be combined with recent work in the fields of Artificial Intelligence, Machine Learning, and Large Language Models to accelerate the development of a working, safe, AGI system.

Keywords: AI agents, collective intelligence, Minsky, Newell, Shannon, Simon, AGI, AGI safety

Procedia PDF Downloads 69
20132 Studying the Impact of Agricultural Producers Support Policy in Export Market

Authors: Yazdani Saeed, Rafiei Hamed, Nekoofar Farahnaz

Abstract:

Government policies play a major role in national and international markets. Pistachio is one of the most important non-oil export commodities of Iran. Therefore, this study examined the relation between producer support policies and the export of pistachio. An econometric model (VAR) was applied to test the study hypothesis. According to the estimated coefficients in the VAR model, the lag of the producer support index has a significant negative effect on the variation of pistachio exports in the short term. In other words, in the short term, the export advantage index depends on the amount of producer support in the previous period.
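
A sketch of this kind of VAR estimation with statsmodels, on synthetic series standing in for the producer support index and pistachio export advantage:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-ins for the study's annual series.
rng = np.random.default_rng(0)
n = 40
support = rng.normal(0, 1, n)
export = 0.5 * np.roll(support, 1) + rng.normal(0, 0.5, n)
data = pd.DataFrame({"support": support, "export": export}).iloc[1:]

results = VAR(data).fit(maxlags=2, ic="aic")
# The coefficient on the first lag of `support` in the `export`
# equation is the short-run effect analogous to the one reported.
print(results.params)
```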

Keywords: producer support, export advantage, pistachio, Iran

Procedia PDF Downloads 28
20131 Theology and Music in the XXI. Century: An Exploratory Study of Current Interrelation

Authors: Andrzej Kesiak

Abstract:

Contemporary theology is often accused of answering questions that nobody is asking and of employing hermetic language that has lost its communicative capacity. There is also a question that theology is asking itself: how theological discourse can still influence other disciplines, and how to overcome the separation of theology and belief. Undoubtedly, in the wider spectrum, theological discourse has been and will remain needed. The difficulty is how to find the right model for it, the model that would help theology enter into dialogue with culture, art, science, and politics. Presumably, there is no single such model; theology constantly needs to seek such models, and this is probably a never-ending journey. In other words, theology should adopt the profile of ‘a restless being’ if it wants to remain influential. Music, on the other hand, has always been very close to theology; in fact, a huge part of classical music is either sacred or religious. Many composers sought inspiration in religion, liturgy, religious painting, and sacred texts. This paper argues that, despite all this, a proper and substantive dialogue still seems to be in its starting phase. Such a thing as a reciprocal relationship between theology and music definitely exists, but it has not yet been sufficiently developed theoretically. The correlation between musical and theological disciplines constitutes a very broad and complex discourse; therefore, this study narrows the subject and puts it in a specific context: theology and music in the XXI century. This paper is a text-based study; therefore, it relies on textual analysis with elements of text hermeneutics.

Keywords: music, theology, reciprocal relationship between theology and music, XXI Century

Procedia PDF Downloads 145
20130 Spontaneous and Posed Smile Detection: Deep Learning, Traditional Machine Learning, and Human Performance

Authors: Liang Wang, Beste F. Yuksel, David Guy Brizan

Abstract:

A computational model of affect that can distinguish between spontaneous and posed smiles with no errors on a large, popular data set using deep learning techniques is presented in this paper. A Long Short-Term Memory (LSTM) classifier, a type of recurrent neural network, is utilized and compared to human classification. Results showed that while human classification (mean of 0.7133) was above chance, the LSTM model was more accurate than both human classification and other comparable state-of-the-art systems. Additionally, a high accuracy rate was maintained with small amounts of training videos (70 instances). Important features were derived and analyzed to further understand the success of the computational model, and it was inferred that thousands of pairs of points within the eyes and mouth are important throughout all time segments in a smile. This suggests that distinguishing between a posed and a spontaneous smile is a complex task, which may account for the difficulty and lower accuracy of human classification compared to machine learning models.
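
A minimal stand-in for the paper's classifier, an LSTM over per-frame facial features ending in a posed/spontaneous probability, might look like this in Keras (shapes and data are illustrative, echoing the 70-instance training size):

```python
import numpy as np
import tensorflow as tf

# Toy LSTM smile classifier; data shapes are assumptions, not the
# paper's dataset (e.g., 68 facial landmarks -> 136 features/frame).
T, F = 60, 136                     # frames per clip, features per frame
X = np.random.rand(70, T, F).astype("float32")   # 70 training clips
y = np.random.randint(0, 2, 70)                  # 1 = spontaneous

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, F)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=8, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```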

Keywords: affective computing, affect detection, computer vision, deep learning, human-computer interaction, machine learning, posed smile detection, spontaneous smile detection

Procedia PDF Downloads 112
20129 The Model Establishment and Analysis of TRACE/FRAPTRAN for Chinshan Nuclear Power Plant Spent Fuel Pool

Authors: J. R. Wang, H. T. Lin, Y. S. Tseng, W. Y. Li, H. C. Chen, S. W. Chen, C. Shih

Abstract:

TRACE is developed by the U.S. NRC for nuclear power plant (NPP) safety analysis. We focus on the establishment and application of TRACE/FRAPTRAN/SNAP models for the Chinshan NPP (BWR/4) spent fuel pool in this research. The geometry of the spent fuel pool is 12.17 m × 7.87 m × 11.61 m. In this study, there are three TRACE/SNAP models: a one-channel, a two-channel, and a multi-channel TRACE/SNAP model. Additionally, a cooling system failure of the spent fuel pool was simulated and analyzed using the above models. According to the analysis results, the peak cladding temperature response was more accurate in the multi-channel TRACE/SNAP model. The results depicted that uncovery of the fuel occurred 2.7 days after the cooling system failed. In order to estimate the detailed fuel rod performance, the FRAPTRAN code was used in this research. According to the FRAPTRAN results, the highest cladding temperature was located at node 21 of the fuel rod (the highest node being node 23), and the cladding burst roughly after 3.7 days.
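
The day-scale timings have a back-of-envelope counterpart in a simple energy balance, of the kind a TRACE model refines; all heat loads and margins below are rough assumptions, not values from the study:

```python
# Rough energy balance: time for the pool to heat to saturation and
# boil down toward fuel uncovery after cooling fails. Only the pool
# dimensions come from the abstract; everything else is assumed.
VOLUME = 12.17 * 7.87 * 11.61      # pool volume from the paper [m^3]
RHO, CP, H_FG = 960.0, 4200.0, 2.26e6   # water properties near 100 C
Q_DECAY = 7.0e6                    # assumed decay heat load [W]
DT = 100.0 - 40.0                  # heat-up from 40 C to saturation [K]
BOILABLE = 0.6                     # assumed fraction boilable before uncovery

m = RHO * VOLUME
t_heatup = m * CP * DT / Q_DECAY
t_boiloff = BOILABLE * m * H_FG / Q_DECAY
print(f"heat-up: {t_heatup / 86400:.1f} d, boil-off: {t_boiloff / 86400:.1f} d")
```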

Keywords: TRACE, FRAPTRAN, BWR, spent fuel pool

Procedia PDF Downloads 340
20128 Audio-Visual Recognition Based on Effective Model and Distillation

Authors: Heng Yang, Tao Luo, Yakun Zhang, Kai Wang, Wei Qin, Liang Xie, Ye Yan, Erwei Yin

Abstract:

Recent years have seen audio-visual recognition show great potential in strong noise environments. Existing methods of audio-visual recognition have explored approaches with ResNet and feature fusion. However, on the one hand, ResNet always occupies a large amount of memory resources, restricting application in engineering. On the other hand, feature merging also brings interference in high-noise environments. In order to solve these problems, we propose an effective framework with bidirectional distillation. First, in consideration of its good performance in feature extraction, we chose a light model, EfficientNet, as our spatial feature extractor. Second, self-distillation was applied to learn more information from the raw data. Finally, we propose a bidirectional distillation in decision-level fusion. In more detail, our experimental results are based on a multi-modal dataset from 24 volunteers. Ultimately, the lipreading accuracy of our framework increased by 2.3% compared with existing systems, and our framework made progress in audio-visual fusion in high-noise environments compared with an audio-only recognition system without visual input.
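
The decision-level bidirectional distillation can be expressed as a pair of softened KL terms plus the usual supervised losses; a PyTorch sketch with an assumed temperature and weighting:

```python
import torch
import torch.nn.functional as F

# Bidirectional distillation between two branches (e.g., audio and
# visual heads): each branch learns from the other's softened logits.
# Temperature and weighting are illustrative assumptions.
def bidirectional_distillation_loss(logits_a, logits_v, labels,
                                    T=2.0, alpha=0.5):
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_v, labels)
    kl_av = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                     F.softmax(logits_v / T, dim=1).detach(),
                     reduction="batchmean") * T * T
    kl_va = F.kl_div(F.log_softmax(logits_v / T, dim=1),
                     F.softmax(logits_a / T, dim=1).detach(),
                     reduction="batchmean") * T * T
    return (1 - alpha) * ce + alpha * (kl_av + kl_va)

logits_a = torch.randn(8, 30)   # audio branch, 30 classes
logits_v = torch.randn(8, 30)   # visual branch
labels = torch.randint(0, 30, (8,))
print(bidirectional_distillation_loss(logits_a, logits_v, labels))
```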

Keywords: lipreading, audio-visual, EfficientNet, distillation

Procedia PDF Downloads 117
20127 Prioritization of Mutation Test Generation with Centrality Measure

Authors: Supachai Supmak, Yachai Limpiyakorn

Abstract:

Mutation testing can be applied for the quality assessment of test cases. Prioritization of mutation test generation has been a critical element of industry practice that contributes to the evaluation of test cases. The industry generally delivers products under time-to-market pressure and thus inevitably sacrifices software testing tasks, even though many test cases are required for software verification. This paper presents an approach that applies a social network centrality measure, PageRank, to prioritize mutation test generation. The source code with the highest PageRank values is focused on first when developing test cases, as these modules are vulnerable to defects or anomalies which may cause consequent defects in many other associated modules. Moreover, the approach helps identify the reducible test cases in the test suite, while still maintaining the same criteria as the original set of test cases.
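
A compact sketch of the prioritization step with networkx, over a made-up module dependency graph:

```python
import networkx as nx

# Rank source modules by PageRank over their dependency graph, then
# generate mutation tests for the highest-ranked modules first.
# The call graph below is a made-up example.
G = nx.DiGraph([
    ("checkout", "payment"), ("checkout", "cart"),
    ("cart", "pricing"), ("payment", "pricing"),
    ("admin", "pricing"), ("pricing", "db"),
])
rank = nx.pagerank(G)
order = sorted(rank, key=rank.get, reverse=True)
for module in order:
    print(f"{module}: {rank[module]:.3f}")
# Modules like `pricing` and `db`, which many others depend on, surface
# first, so their mutants are generated and executed earliest.
```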

Keywords: software testing, mutation test, network centrality measure, test case prioritization

Procedia PDF Downloads 91