Search results for: analyst forecast dispersion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1110

840 Neural Networks Based Prediction of Long Term Rainfall: Nine Pilot Study Zones over the Mediterranean Basin

Authors: Racha El Kadiri, Mohamed Sultan, Henrique Momm, Zachary Blair, Rachel Schultz, Tamer Al-Bayoumi

Abstract:

The Mediterranean Basin is a very diverse region of nationalities and climate zones, with a strong dependence on agricultural activities. Predicting long-term rainfall (with a lead of 1 to 12 months) and future droughts could contribute to sustainable management of water resources and economic activities. In this study, an integrated approach was adopted to construct predictive tools with lead times of 0 to 12 months to forecast rainfall amounts over nine subzones of the Mediterranean Basin region. The following steps were conducted: (1) acquire, assess and intercorrelate temporal remote sensing-based rainfall products (e.g., the CPC Merged Analysis of Precipitation [CMAP]) throughout the investigation period (1979 to 2016); (2) acquire and assess monthly values for all of the climatic indices influencing the regional and global climatic patterns (e.g., Northern Atlantic Oscillation [NOI], Southern Oscillation Index [SOI], and Tropical North Atlantic Index [TNA]); (3) delineate homogeneous climatic regions and select nine pilot study zones; (4) apply data mining methods (e.g., neural networks, principal component analyses) to extract relationships between the observed rainfall and the controlling factors (i.e., climatic indices with multiple lead-time periods); and (5) use the constructed predictive tools to forecast monthly rainfall and dry and wet periods. Preliminary results indicate that rainfall and dry/wet periods were successfully predicted with lead times of 0 to 12 months using the adopted methodology, and that the approach is more accurately applicable in the southern Mediterranean region.
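
As a rough illustration of step (4), the sketch below trains a small neural network to map lagged climatic indices to monthly rainfall for one zone. It is not the authors' code: the data are synthetic, and the indices, lead time and network size are placeholders.

```python
# Illustrative sketch (not the authors' code): predicting monthly rainfall
# from lagged climatic indices with a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical monthly data, 1979-2016 (456 months): three climatic indices
# (stand-ins for NOI, SOI, TNA) and a rainfall series for one pilot zone.
n_months = 456
indices = rng.standard_normal((n_months, 3))
rainfall = 50 + 10 * indices[:, 0] + rng.standard_normal(n_months)  # mm/month

def lagged_features(x, y, lead):
    """Pair index values at month t with rainfall at month t + lead."""
    return x[:len(x) - lead], y[lead:]

X, y = lagged_features(indices, rainfall, lead=3)  # 3-month lead time
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 at 3-month lead:", model.score(X_te, y_te))
```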

Keywords: rainfall, neural networks, climatic indices, Mediterranean

Procedia PDF Downloads 285
839 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir

Authors: Ahmad Fahim Nasiry, Shigeo Honma

Abstract:

We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplace equation and Darcy’s law are used to study fluid flow through the porous media, and the Laplacian that defines the dispersion and diffusion of fluid in the sand using heavy oil is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the governing partial differential equation. The two main quantities observed are the water saturation and the pressure distribution in the reservoir, and they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. Based on the numerical simulations, the saturation and pressure equations are calculated by the iterative alternating direction implicit method and the iterative alternating direction explicit method, respectively, under the finite difference assumption. However, to better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to calculate the pressure and water saturation equations and to draw the pressure and water distribution contour lines for the reservoir.
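
The pressure solve at the heart of such a model can be pictured with a minimal quarter five-spot example: Laplace's equation with fixed injector and producer pressures. The paper uses an iterative alternating direction scheme; this simpler Gauss-Seidel sketch with invented pressures only shows the setup.

```python
# Minimal sketch of the pressure solve in a quarter five-spot pattern:
# Laplace's equation with fixed injector/producer pressures, relaxed by
# Gauss-Seidel sweeps (a deliberate simplification of the paper's iterative
# ADI scheme; grid size and pressures are illustrative).
import numpy as np

n = 30
p = np.zeros((n, n))
p_inj, p_prod = 10.0, 1.0                  # MPa, hypothetical well pressures
p[0, 0], p[-1, -1] = p_inj, p_prod

for _ in range(1500):                      # Gauss-Seidel sweeps
    for i in range(n):
        for j in range(n):
            if (i, j) in ((0, 0), (n - 1, n - 1)):
                continue                   # wells held at fixed pressure
            # average of available neighbours (no-flow outer boundary)
            nbrs = [p[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < n and 0 <= b < n]
            p[i, j] = sum(nbrs) / len(nbrs)

print("pressure at domain centre:", p[n // 2, n // 2])
```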

Keywords: numerical simulation, immiscible, finite difference, IADI, IDE, waterflooding

Procedia PDF Downloads 297
838 Formulation and Evaluation of TDDS for Sustained Release Ondansetron HCL Patches

Authors: Baljinder Singh, Navneet Sharma

Abstract:

The skin can be used as the site of drug administration for continuous transdermal drug infusion into the systemic circulation. For continuous diffusion/penetration of drugs through the intact skin surface, membrane-moderated systems, matrix dispersion type systems, adhesive diffusion controlled systems and micro reservoir systems have been developed. Various penetration enhancers are used to promote drug diffusion through the skin. In matrix dispersion type systems, the drug is dispersed in the solvent along with the polymers, and the solvent is allowed to evaporate, forming a homogeneous drug-polymer matrix. Matrix type systems were developed in the present study. In the present work, an attempt has been made to develop a matrix-type transdermal therapeutic system comprising ondansetron HCl with different ratios of hydrophilic and hydrophobic polymeric combinations using the solvent evaporation technique. The physicochemical compatibility of the drug and the polymers was studied by infrared spectroscopy. The results obtained showed no physicochemical incompatibility between the drug and the polymers. The patches were further subjected to various physical evaluations along with in-vitro permeation studies using rat skin. On the basis of the results obtained from the in vitro study and the physical evaluation, the patches containing hydrophilic polymers, i.e. polyvinyl alcohol and polyvinyl pyrrolidone with oleic acid as the penetration enhancer (5%), were considered suitable for large-scale manufacturing with a backing layer and a suitable adhesive membrane.

Keywords: transdermal drug delivery, penetration enhancers, hydrophilic and hydrophobic polymers, ondansetron HCl

Procedia PDF Downloads 296
837 Finding DEA Targets Using Multi-Objective Programming

Authors: Farzad Sharifi, Raziyeh Shamsi

Abstract:

In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using the multi-objective programming (MOP) structure. In some problems, the inputs might be stochastic while the outputs are deterministic, and vice versa. In such cases, we propose a multi-objective DEA-R model, because in some cases (e.g., when unnecessary and irrational weights by the BCC model reduce the efficiency score), an efficient DMU is introduced as inefficient by the BCC model, whereas the DMU is considered efficient by the DEA-R model. In other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). Thus, we provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model in the invariable return to scale case can be replaced by the MOP-DEA model without explicit outputs in the variable return to scale case, and vice versa. Using interactive methods to solve the proposed model yields a projection corresponding to the viewpoint of the DM and the analyst, which is nearer to reality and more practical. Finally, an application is provided.
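
For orientation, the sketch below solves a plain input-oriented BCC envelopment LP for one DMU, the deterministic building block that the paper extends with MOP, DEA-R ratios and stochastic data. The data and the evaluated DMU are invented; this is not the proposed model itself.

```python
# Hedged sketch: input-oriented BCC envelopment LP for a single DMU.
# Variables are [theta, lambda_1..lambda_n]; data are made up.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 6.0]])      # 1 input, 4 DMUs
Y = np.array([[1.0, 3.0, 2.0, 2.5]])      # 1 output, 4 DMUs
o = 3                                     # DMU under evaluation

n = X.shape[1]
c = np.r_[1.0, np.zeros(n)]               # minimize theta
# inputs:  X @ lam - theta * x_o <= 0 ; outputs: -Y @ lam <= -y_o
A_ub = np.block([[-X[:, [o]], X], [np.zeros((Y.shape[0], 1)), -Y]])
b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
A_eq = np.r_[[0.0], np.ones(n)][None, :]  # BCC convexity: sum(lam) = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1))
theta = res.x[0]
print("efficiency:", theta, "projected input target:", theta * X[:, o])
```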

Keywords: DEA, MOLP, stochastic data, DEA-R

Procedia PDF Downloads 376
836 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling

Authors: Dong Wu, Michael Grenn

Abstract:

Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction and development metrics, as its input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's significant role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management, and these quantitative results suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.

Keywords: software project management, time prediction algorithms, large language models (LLMs), forecast accuracy, project progress prediction

Procedia PDF Downloads 44
835 Forecast Financial Bubbles: Multidimensional Phenomenon

Authors: Zouari Ezzeddine, Ghraieb Ikram

Abstract:

Starting from results in the academic literature that point to the limitations of previous studies, this article sets out the reasons why the prediction of financial bubbles is multidimensional. A new framework for modeling and predicting financial bubbles links a set of variables spread over several dimensions, which dictates the phenomenon's multidimensional character, and takes into account the preferences of financial actors. A multicriteria anticipation of the appearance of bubbles in international financial markets helps to guard against a possible crisis.

Keywords: classical measures, predictions, financial bubbles, multidimensional, artificial neural networks

Procedia PDF Downloads 540
834 Electrochemical Synthesis of Copper Nanoparticles

Authors: Juan Patricio Ibáñez, Exequiel López

Abstract:

A method for synthesizing copper nanoparticles through an electrochemical approach is proposed, employing surfactants to stabilize the size of the newly formed nanoparticles. The electrolyte was made up of a matrix of H₂SO₄ (190 g/L) containing Cu²⁺ (from 3.2 to 9.5 g/L), sodium dodecyl sulfate -SDS- (from 0.5 to 1.0 g/L) and Tween 80 (from 0 to 7.5 mL/L). Tween 80 was used in a molar ratio of 1 to 1 with SDS. A glass cell kept in a thermostatic water bath to maintain the system temperature was used, with cathodic copper as the anode and stainless steel 316-L as the cathode. The process was controlled through the initial copper concentration in the electrolyte and the applied current density. Copper nanoparticles of electrolytic purity, exhibiting a spherical morphology of varying sizes with low dispersion, were successfully produced, contingent upon the chemical composition of the electrolyte and the current density. The minimum size achieved was 3.0 nm ± 0.9 nm, with an average standard deviation of 2.2 nm throughout the entire process. The deposited copper mass ranged from 0.394 g to 1.848 g per hour (over an area of 25 cm²), accompanied by an average Faradaic efficiency of 30.8% and an average specific energy consumption of 4.4 kWh/kg. The chemical analysis of the product employed X-ray powder diffraction (XRD), while physical characteristics such as size and morphology were assessed using atomic force microscopy (AFM). The initial copper concentration and the current density were identified as the variables defining the size and dispersion of the nanoparticles, as they serve as reactants in the cathodic half-reaction. The presence of surfactants stabilizes the nanoparticle size: their molecules adsorb onto the nanoparticle surface, forming a thick barrier that prevents mass transfer with the exterior and halts further growth.
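
A quick consistency check on the reported Faradaic efficiency can be made with Faraday's law, m = M·I·t/(z·F). The sketch below does this for a deposit mass quoted in the abstract under an assumed total current; the actual current is not given here, so the result is illustrative only.

```python
# Back-of-envelope check (our illustration, not the authors' code):
# Faraday's law for copper deposition, rearranged to estimate Faradaic
# efficiency from a measured deposit mass. The 5 A current is assumed.
M_CU = 63.55      # g/mol, molar mass of copper
F = 96485.0       # C/mol, Faraday constant
Z = 2             # electrons transferred per Cu2+ ion

def faradaic_efficiency(mass_g, current_a, time_s):
    """Measured mass divided by the mass predicted by Faraday's law."""
    theoretical = M_CU * current_a * time_s / (Z * F)
    return mass_g / theoretical

# e.g., 1.848 g deposited in 1 h at a hypothetical 5 A total current:
print(f"{faradaic_efficiency(1.848, 5.0, 3600):.1%}")
```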

Keywords: copper nanopowder, electrochemical synthesis, current density, surfactant stabilizer

Procedia PDF Downloads 30
833 Effect of Bi-Dispersity on Particle Clustering in Sedimentation

Authors: Ali Abbas Zaidi

Abstract:

In free settling or sedimentation, particles form clusters at high Reynolds numbers and in dilute suspensions. This is due to the entrapment of particles in the wakes of upstream particles. In this paper, the effect of the bi-dispersity of settling particles on particle clustering is investigated using particle-resolved direct numerical simulation. The immersed boundary method is used for particle-fluid interactions and the discrete element method is used for particle-particle interactions. The solid volume fraction used in the simulation is 1% and the Reynolds number based on the Sauter mean diameter is 350. Both the solid volume fraction and the Reynolds number lie in the clustering regime of sedimentation. In the simulations, the particle diameter ratio (i.e., the diameter of the larger particle to that of the smaller particle, d₁/d₂) is varied over 2:1, 3:1 and 4:1. For each particle diameter ratio, the solid volume fraction ratio of the two particle sizes (φ₁/φ₂) is varied over 1:1, 1:2 and 2:1. For comparison, simulations are also performed for monodisperse particles. To study particle clustering, the radial distribution function and the instantaneous locations of particles in the computational domain are studied. It is observed that the degree of particle clustering decreases with increasing bi-dispersity of the settling particles. The smallest degree of particle clustering, or the greatest dispersion of particles, is observed for particles with d₁/d₂ equal to 4:1 and φ₁/φ₂ equal to 1:2. The simulations showed that the reduction in particle clustering with increasing bi-dispersity is due to the difference in the settling velocity of the particles: larger particles settle faster and knock the smaller particles out of the clustered regions in the computational domain.
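
Clustering of this kind is typically quantified with the radial distribution function; a minimal g(r) sketch for periodic particle positions is given below. It is our illustration of the statistic, not the authors' post-processing code, and the positions are random.

```python
# Sketch (assumed post-processing, not the authors' code): radial
# distribution function g(r) from particle centre positions in a cubic
# periodic box; g(r) > 1 at short range indicates clustering.
import numpy as np

def rdf(pos, box, r_max, n_bins=50):
    """g(r) for points in a periodic cubic box of side `box`."""
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                 # minimum-image convention
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    rho = n / box**3
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    # normalize pair counts by the ideal-gas (uniform) expectation
    return edges[:-1], hist / (0.5 * n * rho * shell)

pos = np.random.default_rng(1).uniform(0, 10.0, (500, 3))  # random positions
r, g = rdf(pos, box=10.0, r_max=5.0)
print("g(r) near r=2:", g[np.searchsorted(r, 2.0)])        # ~1 for no clustering
```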

Keywords: dispersion in bi-disperse settling particles, particle microstructures in bi-disperse suspensions, particle resolved direct numerical simulations, settling of bi-disperse particles

Procedia PDF Downloads 174
832 Evolution of Predator-prey Body-size Ratio: Spatial Dimensions of Foraging Space

Authors: Xin Chen

Abstract:

It has been widely observed that marine food webs have significantly larger predator–prey body-size ratios compared with their terrestrial counterparts. A number of hypotheses have been proposed to account for this difference on the basis of primary productivity, trophic structure, biophysics, bioenergetics, habitat features, energy efficiency, etc. In this study, an alternative explanation is suggested based on the difference in the spatial dimensions of foraging arenas: terrestrial animals primarily forage in two-dimensional arenas, while marine animals mostly forage in three-dimensional arenas. Using 2-dimensional and 3-dimensional random walk simulations, it is shown that marine predators with 3-dimensional foraging would normally have a greater foraging efficiency than terrestrial predators with 2-dimensional foraging. Marine prey with 3-dimensional dispersion usually forms greater swarms or aggregations than terrestrial prey with 2-dimensional dispersion, which again favours a greater predator foraging efficiency in marine animals. As an analytical tool, a Lotka-Volterra-based adaptive dynamical model is developed with the predator-prey ratio embedded as an adaptive variable. The model predicts that high predator foraging efficiency and high prey conversion rate will dynamically lead to the evolution of a greater predator-prey ratio. Therefore, marine food webs with 3-dimensional foraging space, which generally have higher predator foraging efficiency, will evolve a greater predator-prey ratio than terrestrial food webs.
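
The kind of 2-dimensional versus 3-dimensional random-walk comparison described can be set up as below: a predator taking unit-length steps and detecting uniformly scattered prey within a fixed radius. All parameters are arbitrary, so the sketch illustrates the simulation setup rather than reproducing the paper's results.

```python
# Toy sketch of the dimensional comparison (our illustration): a
# random-walking predator detecting scattered prey within a fixed radius,
# run in 2D and in 3D arenas. Parameters are arbitrary.
import numpy as np

def encounters(dim, n_prey=2000, steps=5000, box=100.0, radius=1.0, seed=2):
    rng = np.random.default_rng(seed)
    prey = rng.uniform(0, box, (n_prey, dim))      # uniformly scattered prey
    pos = np.full(dim, box / 2)                    # predator start position
    found = np.zeros(n_prey, dtype=bool)
    for _ in range(steps):
        step = rng.standard_normal(dim)
        pos = (pos + step / np.linalg.norm(step)) % box   # unit-length step
        found |= np.linalg.norm(prey - pos, axis=1) < radius
    return found.sum()

print("2D encounters:", encounters(2))
print("3D encounters:", encounters(3))
```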

Keywords: predator-prey, body size, lotka-volterra, random walk, foraging efficiency

Procedia PDF Downloads 46
831 Multi-Scale Modeling of Ti-6Al-4V Mechanical Behavior: Size, Dispersion and Crystallographic Texture of Grains Effects

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vidal, Farhad Rezai-Aria, Christine Boher

Abstract:

Ti-6Al-4V titanium alloy is one of the most widely used materials in the aeronautical and aerospace industries. Because of its high specific strength and good fatigue and corrosion resistance, this alloy is very suitable for moderate-temperature applications. At room temperature, the mechanical behavior of Ti-6Al-4V is generally controlled by the behavior of the alpha phase (the beta phase fraction is less than 8%). The plastic strain of this phase, notably based on crystallographic slip, can be hindered by various obstacles and mechanisms (crystal lattice friction, sessile dislocations, strengthening by solute atoms, grain boundaries…). The grain aspect of the alpha phase (its morphology and texture) and the nature of its crystallographic lattice (hexagonal close-packed) give the plastic strain heterogeneous, discontinuous and anisotropic characteristics at the local scale. The aim of this work is to develop a multi-scale model of Ti-6Al-4V mechanical behavior using a crystal plasticity approach; this multi-scale model is then used to investigate the effects of grain size, dispersion of grain size, crystallographic texture and slip system activation on Ti-6Al-4V mechanical behavior under monotonic quasi-static loading. Nine representative elementary volumes (REVs) are built to take into account the physical elements mentioned above (grain size, dispersion and crystallographic texture), and the boundary conditions of a tension test are then applied. Finally, the simulation of the mechanical behavior of Ti-6Al-4V and a study of slip system activation in the alpha phase are reported. The results show that the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and thus an anisotropic macroscopic mechanical behavior of the modeled Ti-6Al-4V alloy. Grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress: as the grain size decreases, the yield strength increases, as illustrated in the sketch below. Finally, the grain-size distribution, which characterizes the morphology (homogeneous or heterogeneous), makes the deformation fields quite heterogeneous, because crystallographic slip is easier in large grains than in small grains; this generates a localization of plastic deformation in certain areas and a concentration of stresses in others.
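
The reported grain-size effect on yield stress follows the classical Hall-Petch trend, σy = σ0 + k/√d. The sketch below evaluates it with generic placeholder constants, not values fitted to Ti-6Al-4V in this work.

```python
# Illustration of the grain-size effect via the Hall-Petch relation,
# sigma_y = sigma_0 + k / sqrt(d). Constants are generic placeholders,
# not fitted Ti-6Al-4V values from the paper.
import numpy as np

SIGMA_0 = 780.0      # MPa, friction stress (assumed)
K_HP = 12.0          # MPa*mm^0.5, Hall-Petch coefficient (assumed)

for d_um in (2.0, 5.0, 10.0, 20.0):                 # grain size, micrometres
    d_mm = d_um * 1e-3
    sigma_y = SIGMA_0 + K_HP / np.sqrt(d_mm)        # yield stress estimate
    print(f"d = {d_um:5.1f} um -> yield stress ~ {sigma_y:6.0f} MPa")
```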

Keywords: multi-scale modeling, Ti-6Al-4V alloy, crystal plasticity, grains size, crystallographic texture

Procedia PDF Downloads 134
830 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
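
The averaging of the three top-performing models can be illustrated with a soft-voting ensemble. In this hedged sketch the model families match the abstract, but the data are synthetic stand-ins for the Los Angeles datasets, and the settings are placeholders.

```python
# Illustrative sketch of the ensembling idea (model families match the
# abstract; data and hyperparameters are placeholders, not the study's).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# stand-ins for timing variables, weather forecasts and past measurements
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(random_state=0)),
        ("nnet", MLPClassifier(max_iter=1000, random_state=0)),
    ],
    voting="soft",   # average the predicted probabilities of the 3 models
)
ensemble.fit(X_tr, y_tr)
print("combined accuracy:", ensemble.score(X_te, y_te))
```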

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 93
829 Wood as a Climate Buffer in a Supermarket

Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø

Abstract:

Natural materials like wood absorb and release moisture, so wood can buffer the indoor climate. When used wisely, this buffer potential can counteract the influence of the outdoor climate on the building. The mass of moisture used in the buffer is defined as the potential hygrothermal mass, which can act as energy storage in a building. This works like a natural heat pump, where the moisture actively damps the diurnal changes. In Norway, the ability of wood to buffer the climate is tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential of hygrothermal mass in a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. It was built in 2015 and has a shopping area of 975 m², including toilets and the entrance. The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, wood needs dynamic influence to activate its hygrothermal mass. Drying and moistening of wood are energy intensive, and this energy potential can be exploited; examples are using solar heat for drying instead of heating the indoor air, and using raw air with high enthalpy that allows dry wooden surfaces to absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating, so the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord includes the weather forecast and historical data: a five-day forecast and a two-day history, to prevent adjustments to smaller weather changes. The ventilation control has three zones. During summer, moisture is retained to damp the effect of solar radiation through drying. In the wintertime, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling; the ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing moisture or drying out according to the weather prognosis is defined, as sketched below. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge on the hygrothermal mass potential of materials is promising: when the time-dependent buffer capacity of materials is included, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
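
Our reading of the three-zone control logic can be condensed into a small rule-based sketch. The thresholds, inputs and mode names are invented; this is not the building's actual control code.

```python
# Hedged sketch of the described three-zone ventilation logic (our reading
# of the abstract, not the building's controller). Thresholds are invented.
def ventilation_mode(season, forecast_5d, history_2d, co2_ppm, voc_index):
    """Pick a hygrothermal strategy from weather prognosis and air quality."""
    if co2_ppm > 1000 or voc_index > 3:       # indoor air quality overrules
        return "ventilate for air quality"
    if season == "summer":
        return "retain moisture to damp solar heating (dry the wood)"
    if season == "winter":
        return "admit moist air to contribute latent heat"
    # spring/autumn: store moisture or dry out according to the prognosis
    warming = (sum(forecast_5d) / len(forecast_5d)
               > sum(history_2d) / len(history_2d))
    return "dry out wood" if warming else "store moisture"

# forecast/history are daily mean temperatures (degrees C), hypothetical
print(ventilation_mode("autumn", [8, 9, 10, 11, 12], [6, 7], 650, 1))
```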

Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast

Procedia PDF Downloads 182
828 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy

Authors: Giorgio Visentin, Alexei A. Buchachenko

Abstract:

Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact 2-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of the correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow range of 734 ± 4 cm⁻¹, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as the reference for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide the means for their uncertainty estimation. The work is supported by Russian Science Foundation grant # 17-13-01466.
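
A semi-analytical potential of the stated form, a short-range exponential joined to damped −C6/R⁶ − C8/R⁸ dispersion, can be sketched as below using Tang-Toennies damping. All coefficients are placeholders rather than the paper's fitted constants, so the resulting well depth is illustrative only.

```python
# Sketch of a semi-analytical potential: short-range exponential plus
# Tang-Toennies-damped -C6/R^6 - C8/R^8 dispersion. Parameter values are
# placeholders, not the paper's fitted constants.
import math

def tang_toennies(n, b, r):
    """Damping f_n(bR) = 1 - exp(-bR) * sum_{k=0}^{n} (bR)^k / k!."""
    s = sum((b * r) ** k / math.factorial(k) for k in range(n + 1))
    return 1.0 - math.exp(-b * r) * s

def potential(r, a=1.5e6, b=1.8, c6=2.0e6, c8=1.0e8):
    """V(r) in cm^-1 with r in angstrom (all constants assumed)."""
    short = a * math.exp(-b * r)
    disp = (tang_toennies(6, b, r) * c6 / r**6
            + tang_toennies(8, b, r) * c8 / r**8)
    return short - disp

# crude scan for the well depth of this illustrative curve
depths = [(potential(r / 100), r / 100) for r in range(300, 1500)]
print("well depth (cm^-1):", min(depths)[0])
```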

Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer

Procedia PDF Downloads 126
827 Matrix Method Posting

Authors: Varong Pongsai

Abstract:

The objective of this paper is to introduce a new method of accounting posting called Matrix Method Posting. This method is based on the matrix operations of pure mathematics. Although accounting is classified as a social science, many accounting operations are expressed with mathematical signs and operations, so it seems that the operations of mathematics can be applied to accounting. This paper therefore tries to map mathematical logic onto accounting logic smoothly. A deductive approach is employed to prove a simultaneously logical concept of both mathematics and accounting. The result shows that matrices can represent accounting operations well, because matrix logic and accounting logic share the concept of balancing two sides during operations. Moreover, matrix posting has many benefits: it can help financial analysts calculate financial ratios conveniently. Furthermore, the matrix determinant, a signature operation in itself, also helps auditors check the correctness of clients' records: according to this method, a determinant that does not equal 0 points to a problem in the clients' recording process. Finally, matrices may make it easier to handle the concepts of merger and consolidation far beyond present-day practice.
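
One possible concrete reading of matrix posting is sketched below: a posting matrix whose entry (i, j) is the amount debited to account i and credited to account j, with row and column sums giving total debits and credits. The account names and amounts are invented, and the paper's determinant-based audit check is not reproduced here.

```python
# Our illustrative reading of matrix posting (not the paper's notation):
# P[i, j] holds the amount debited to account i and credited to account j.
import numpy as np

accounts = ["Cash", "Inventory", "Sales", "Equity"]
P = np.zeros((4, 4))

# owner invests 1000 cash; buy 300 of inventory; cash sale of 500
P[0, 3] += 1000.0   # debit Cash,      credit Equity
P[1, 0] += 300.0    # debit Inventory, credit Cash
P[0, 2] += 500.0    # debit Cash,      credit Sales

debits, credits = P.sum(axis=1), P.sum(axis=0)
net = debits - credits                    # net movement per account
print(dict(zip(accounts, net)))
# double-entry balance check: total debits always equal total credits,
# so the net movements must sum to zero
assert abs(net.sum()) < 1e-9
```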

Keywords: matrix method posting, deductive approach, determinant, accounting application

Procedia PDF Downloads 337
826 Improvement of Environment and Climate Change Canada’s Gem-Hydro Streamflow Forecasting System

Authors: Etienne Gaborit, Dorothy Durnford, Daniel Deacu, Marco Carrera, Nathalie Gauthier, Camille Garnaud, Vincent Fortin

Abstract:

A new experimental streamflow forecasting system was recently implemented at Environment and Climate Change Canada’s (ECCC) Canadian Centre for Meteorological and Environmental Prediction (CCMEP). It relies on CaLDAS (Canadian Land Data Assimilation System) for the assimilation of surface variables, and on a surface prediction system that feeds a routing component. The surface energy and water budgets are simulated with the SVS (Soil, Vegetation, and Snow) Land-Surface Scheme (LSS) at 2.5-km grid spacing over Canada. The routing component is based on the Watroute routing scheme at 1-km grid spacing for the Great Lakes and Nelson River watersheds. The system is run in two distinct phases: an analysis phase and a forecast phase. During the analysis phase, CaLDAS outputs are used to force the routing system, which performs streamflow assimilation. In forecast mode, the surface component is forced with the Canadian GEM atmospheric forecasts and is initialized with a CaLDAS analysis. The streamflow performance of this new system is presented for 2019. Performance is compared to ECCC’s current operational streamflow forecasting system, which differs from the new experimental system in many aspects. These new streamflow forecasts are also compared to persistence. Overall, the new streamflow forecasting system presents promising results, highlighting the need for an elaborate assimilation phase before performing the forecasts. However, the system is still experimental and is continuously being improved. Some major recent improvements are presented here, including, for example, the assimilation of snow cover data from remote sensing, a backward propagation of assimilated flow observations, a new numerical scheme for the routing component, and a new reservoir model.

Keywords: assimilation system, distributed physical model, offline hydro-meteorological chain, short-term streamflow forecasts

Procedia PDF Downloads 111
825 Photocatalytic Hydrogen Production, Effect of Metal Particle Size and Their Electronic/Optical Properties on the Reaction

Authors: Hicham Idriss

Abstract:

Hydrogen production from water is one of the most promising methods to secure renewable sources or vectors of energy for societies in general and for chemical industries in particular. At present, over 90% of the total amount of hydrogen produced in the world is made from non-renewable fossil fuels (via methane reforming). There are many methods for producing hydrogen from water, including reducible oxide materials (solar thermal production), combined PV/electrolysis, artificial photosynthesis and photocatalysis. The most promising of these processes is the one relying on photocatalysis, yet serious challenges have hindered its success so far. To make this process viable, considerable improvement of the photon conversion is needed. Among the key studies that our group has been conducting in the last few years are those focusing on synergism between semiconductor phases, photonic band gap materials, pn junctions, plasmonic resonance responses, charge transfer to metal cations, as well as metal dispersion and band gap engineering. In this work, results related to the anatase-to-rutile phase transformation of TiO2 (synergism), to Au and Ag dispersion (electron trapping and hydrogen-hydrogen recombination centers), and to their plasmon resonance response (visible light conversion) are presented and discussed. It is found, for example, that synergism between the two common phases of TiO2 (anatase and rutile) is sensitive to the initial particle size. It is also found, in agreement with previous results, that the rate is very sensitive to the amount of metals (of similar particle size) on the surface, unlike the case of thermal heterogeneous catalysis.

Keywords: photo-catalysis, hydrogen production, water splitting, plasmonic

Procedia PDF Downloads 222
824 Engineering a Band Gap Opening in Dirac Cones on Graphene/Tellurium Heterostructures

Authors: Beatriz Muñiz Cano, J. Ripoll Sau, D. Pacile, P. M. Sheverdyaeva, P. Moras, J. Camarero, R. Miranda, M. Garnica, M. A. Valbuena

Abstract:

Graphene, in its pristine state, is a zero-band-gap semiconductor with massless Dirac fermion carriers, which conducts electrons like a metal. Nevertheless, the absence of a band gap makes it impossible to control the material’s electrons, something that is essential for performing on-off switching operations in transistors. Therefore, it is necessary to generate a finite gap in the energy dispersion at the Dirac point. Intense research has been devoted to engineering band gaps while preserving the exceptional properties of graphene, and different strategies have been proposed, among them quantum confinement in 1D nanoribbons or the introduction of a superperiodic potential in graphene. Besides, in the context of developing new 2D materials and Van der Waals heterostructures with exciting emerging properties, such as 2D transition metal chalcogenide monolayers, it is fundamental to know any possible interaction between chalcogen atoms and graphene-supporting substrates. In this work, we report a combined Scanning Tunneling Microscopy (STM), Low Energy Electron Diffraction (LEED), and Angle-Resolved Photoemission Spectroscopy (ARPES) study of a new superstructure formed when Te is evaporated (and intercalated) onto graphene on Ir(111). This new superstructure leads to electronic doping of the Dirac cone while the linear dispersion of the massless Dirac fermions is preserved. Very interestingly, our ARPES measurements evidence a large band gap (~400 meV) at the Dirac point of the graphene Dirac cones, below but close to the Fermi level. We have also observed signatures of the Dirac point binding energy being tuned (upwards or downwards) as a function of Te coverage.
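
The size of such a gap can be pictured with the standard gapped Dirac dispersion E±(k) = ±√((ħvF k)² + (Δ/2)²). The sketch below uses the ~400 meV gap quoted above together with a typical graphene Fermi velocity, which is an assumption rather than a value measured in this work.

```python
# Quick numerical illustration (ours): gapped Dirac dispersion
# E(k) = +/- sqrt((hbar * v_F * k)^2 + (Delta / 2)^2).
import numpy as np

HBAR = 6.582e-16          # eV s
V_F = 1.0e6               # m/s, typical graphene Fermi velocity (assumed)
DELTA = 0.4               # eV, gap at the Dirac point (from the abstract)

k = np.linspace(-2e9, 2e9, 5)                 # 1/m, momenta around K
e_plus = np.sqrt((HBAR * V_F * k) ** 2 + (DELTA / 2) ** 2)
print("upper band (eV):", np.round(e_plus, 3))
print("gap (eV):", 2 * e_plus.min())          # -> 0.4 eV at k = 0
```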

Keywords: angle resolved photoemission spectroscopy, ARPES, graphene, spintronics, spin-orbitronics, 2D materials, transition metal dichalcogenides, TMDCs, TMDs, LEED, STM, quantum materials

Procedia PDF Downloads 47
823 [Keynote Talk]: Three Dimensional Finite Element Analysis of Functionally Graded Radiation Shielding Nanoengineered Sandwich Composites

Authors: Nasim Abuali Galehdari, Thomas J. Ryan, Ajit D. Kelkar

Abstract:

In recent years, nanotechnology has played an important role in the design of efficient radiation-shielding polymeric composites. It is well known that high loadings of nanomaterials with radiation absorption properties can enhance the radiation attenuation efficiency of shielding structures. However, due to difficulties in dispersing nanomaterials into polymer matrices, there has been a limitation on higher loading percentages of nanoparticles in the polymer matrix. Therefore, the objective of the present work is to provide a methodology to fabricate and then characterize functionally graded radiation-shielding structures, which can provide efficient radiation absorption along with good structural integrity. Sandwich structures composed of Ultra High Molecular Weight Polyethylene (UHMWPE) fabric as face sheets and functionally graded epoxy nanocomposite as the core material were fabricated. A method to fabricate a functionally graded core panel with a controllable gradient dispersion of nanoparticles is discussed. In order to optimize the design of the functionally graded sandwich composites and to analyze the stress distribution throughout the sandwich composite thickness, a finite element method was used. The sandwich panels were discretized using 3-dimensional 8-noded brick elements. Classical laminate analysis in conjunction with simplified micromechanics equations was used to obtain the properties of the face sheets. The presented finite element model provides insight into the deformation and damage mechanics of functionally graded sandwich composites from the structural point of view.

Keywords: nanotechnology, functionally graded material, radiation shielding, sandwich composites, finite element method

Procedia PDF Downloads 445
822 The Study of the Concept of Aesthetics in Architecture Derived from the Ideas of Jörg Kurt Greuther

Authors: Mana Pirhadi, Maryam Pirhadi, Fatemeh Tavakoli

Abstract:

As there are several styles and attitudes among practitioners at the present time, it is difficult to achieve a definition of beauty for contemporary architecture, and aesthetic concepts have different frameworks in various disciplines. Beauty can be regarded as one of the most important elements of architecture; therefore, having a clear understanding of beauty can help architects and audiences create or analyze an architectural work. This paper investigates the assumption that we can reach a clearer understanding of the concept of aesthetics in architecture by analyzing the ideas of the contemporary analyst of architectural aesthetics, Jörg Greuther. Thus, the question is how the concept of aesthetics in architecture is analyzed in his thought. In general, the paper aims to examine aesthetic concepts in the contemporary era as expressed in Greuther's views. The paper adopts a descriptive-analytic approach in terms of methodology. Finally, through the study of the viewpoints of various scholars, and specifically considering Greuther's definition, which focuses on the effect of psychological-social factors on human perception and the formation of the schema, it could be said that aesthetics means having a good knowledge of truth and understanding it.

Keywords: aesthetics, beauty perception, contemporary architecture, Jörg Greuther

Procedia PDF Downloads 302
821 Influence of Organic Modifier Loading on Particle Dispersion of Biodegradable Polycaprolactone/Montmorillonite Nanocomposites

Authors: O. I. H. Dimitry, N. A. Mansour, A. L. G. Saad

Abstract:

Natural sodium montmorillonite (NaMMT), Cloisite Na+, and two organophilic montmorillonites (OMMTs), Cloisites 20A and 15A, were used. Polycaprolactone (PCL)/MMT composites containing 1, 3, 5, and 10 wt% of Cloisite Na+ and PCL/OMMT nanocomposites containing 5 and 10 wt% of Cloisites 20A and 15A were prepared via the solution intercalation technique to study the influence of organic modifier loading on particle dispersion in PCL/NaMMT composites. The thermal stability of the obtained composites was characterized by thermogravimetric analysis (TGA), which showed that under nitrogen flow the incorporation of 5 and 10 wt% of filler brings some decrease in PCL thermal stability in the sequence: Cloisite Na+ > Cloisite 15A > Cloisite 20A, while under air flow these fillers scarcely influenced the thermo-oxidative stability of PCL, slightly accelerating the process. The interaction between PCL and the silicate layers was studied by Fourier transform infrared (FTIR) spectroscopy, which confirmed moderate interactions between the nanometric silicate layers and the PCL segments. The electrical conductivity (σ), which describes the ionic mobility of the systems, was studied as a function of temperature; σ of PCL was enhanced by increasing the modifier loading at a filler content of 5 wt%, especially at higher temperatures, in the sequence: Cloisite Na+ < Cloisite 20A < Cloisite 15A, and then decreased to some extent with a further increase to 10 wt%. The activation energy Eσ, obtained from the dependence of σ on temperature using the Arrhenius equation, was found to be lowest for the nanocomposite containing 5 wt% of Cloisite 15A. The dispersion behavior of clay in the PCL matrix was evaluated by X-ray diffraction (XRD) and scanning electron microscopy (SEM) analyses, which revealed partially intercalated structures in the PCL/NaMMT composites and semi-intercalated/semi-exfoliated structures in the PCL/OMMT nanocomposites containing 5 wt% of Cloisite 20A or Cloisite 15A.
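
Extracting Eσ from σ(T) via the Arrhenius equation, σ = σ0·exp(−Eσ/(kB·T)), amounts to a linear fit of ln σ against 1/T. The sketch below demonstrates this on synthetic conductivity data, not the measured values from the paper.

```python
# Sketch of extracting an activation energy from sigma(T) with the
# Arrhenius equation, sigma = sigma0 * exp(-E_sigma / (k_B * T)).
# Conductivity values below are synthetic, not the paper's measurements.
import numpy as np

K_B = 8.617e-5                                     # eV/K
T = np.array([300.0, 320.0, 340.0, 360.0])         # K
E_TRUE = 0.35                                      # eV, assumed
sigma = 1e-3 * np.exp(-E_TRUE / (K_B * T))         # S/cm, synthetic data

# ln(sigma) is linear in 1/T with slope -E_sigma / k_B
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
print("E_sigma (eV):", -slope * K_B)               # recovers ~0.35
```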

Keywords: electrical conductivity, montmorillonite, nanocomposite, organoclay, polycaprolactone

Procedia PDF Downloads 351
820 The Comparison between Modelled and Measured Nitrogen Dioxide Concentrations in Cold and Warm Seasons in Kaunas

Authors: A. Miškinytė, A. Dėdelė

Abstract:

Road traffic is one of the main sources of air pollution in urban areas and is associated with adverse effects on human health and the environment. Nitrogen dioxide (NO₂) is considered a traffic-related air pollutant whose concentrations tend to be higher near highways, along busy roads and in city centres, and exceedances are mainly observed at air quality monitoring stations located close to traffic. Atmospheric dispersion models can be used to examine emissions from many different sources and to predict the concentration of pollutants emitted from these sources into the atmosphere. The aim of the study was to compare modelled concentrations of nitrogen dioxide obtained with the ADMS-Urban dispersion model against the air quality monitoring network in the cold and warm seasons in the city of Kaunas. Modelled average seasonal concentrations of nitrogen dioxide for the year 2011 were verified against automatic air quality monitoring data from two stations in the city: a traffic station located near a high-traffic street in an industrial district, and a background station far away from the main sources of nitrogen dioxide pollution. The modelling results showed that the highest nitrogen dioxide concentration was modelled and measured at the station located near the intensive-traffic street, in both the cold and warm seasons. The modelled and measured nitrogen dioxide concentrations there were, respectively, 25.7 and 25.2 µg/m3 in the cold season and 15.5 and 17.7 µg/m3 in the warm season, while the lowest modelled and measured NO₂ concentrations were determined at the background monitoring station: 12.2 and 13.3 µg/m3 in the cold season and 6.1 and 7.6 µg/m3 in the warm season, respectively. The comparison between the station located near the high-traffic street and the background monitoring station showed that better agreement between modelled and measured NO₂ concentrations was observed at the traffic monitoring station.
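
The agreement between the modelled and measured values quoted above can be summarized with simple statistics. The sketch below computes the mean bias and RMSE from those four season/station pairs; this is a check we add for illustration, as the paper reports concentrations rather than these statistics.

```python
# Agreement metrics for the modelled vs measured NO2 values quoted in the
# abstract (traffic cold/warm, background cold/warm). Our added check.
import numpy as np

modelled = np.array([25.7, 15.5, 12.2, 6.1])   # µg/m3
measured = np.array([25.2, 17.7, 13.3, 7.6])   # µg/m3

bias = np.mean(modelled - measured)            # negative -> underprediction
rmse = np.sqrt(np.mean((modelled - measured) ** 2))
print(f"mean bias: {bias:.2f} µg/m3, RMSE: {rmse:.2f} µg/m3")
```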

Keywords: air pollution, nitrogen dioxide, modelling, ADMS-Urban model

Procedia PDF Downloads 386
819 Dynamic Model for Forecasting Rainfall Induced Landslides

Authors: R. Premasiri, W. A. H. A. Abeygunasekara, S. M. Hewavidana, T. Jananthan, R. M. S. Madawala, K. Vaheeshan

Abstract:

Forecasting the potential for disastrous events such as landslides has become one of the major necessities in the current world. Most of the landslides that occur in Sri Lanka are found to be triggered by intense rainfall events. The study area is the landslide near the Gerandiella waterfall, located by the 41st kilometer post on the Nuwara Eliya-Gampala main road in the Kotmale Division in Sri Lanka. The landslide endangers the entire Kotmale town beneath the slope. A Geographic Information System (GIS) platform is very useful when it comes to emulating real-world processes; such models are used in a wide array of applications ranging from simple evaluations to forecasts of future events. This project investigates the possibility of developing a dynamic model to map the spatial distribution of slope stability. The model incorporates several theoretical models, including the infinite slope model, the Green-Ampt infiltration model and a perched groundwater flow model. A series of rainfall values can be fed to the model as the main input to simulate the dynamics of slope stability. A hydrological model developed using GIS is used to quantify the perched water table height, which is one of the most critical parameters affecting slope stability. The infinite slope stability model is used to quantify the degree of slope stability in terms of the factor of safety, as sketched below. The DEM was built using digitized contour data. The stratigraphy was modeled in Surfer using borehole data and resistivity images. Data available from rainfall gauges and piezometers were used to calibrate the model: during calibration, the parameters were adjusted until a good fit between the simulated groundwater levels and the piezometer readings was obtained. This model, fed with predicted rainfall values, can be used to forecast the slope dynamics of the area of interest. Therefore, the stability of slopes subject to rainfall-induced landslides can be investigated by adjusting the temporal dimensions.
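
The infinite slope model referred to above has the closed form FS = [c′ + (γz − γw·hw)·cos²β·tanφ′] / (γz·sinβ·cosβ) for a perched water table of height hw above the failure plane. The sketch below evaluates it with illustrative soil parameters, not the Gerandiella site values.

```python
# Infinite-slope factor of safety with a perched water table (standard
# textbook form; all parameter values are illustrative, not site data).
import math

def factor_of_safety(c=10e3, phi_deg=30.0, beta_deg=35.0,
                     gamma=18e3, gamma_w=9.81e3, z=4.0, h_w=1.5):
    """c [Pa], angles [deg], unit weights [N/m3], depths [m]."""
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    resisting = c + (gamma * z - gamma_w * h_w) * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

for h_w in (0.0, 1.0, 2.0, 3.0):     # rising perched water table destabilizes
    print(f"h_w = {h_w} m -> FS = {factor_of_safety(h_w=h_w):.2f}")
```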

Keywords: factor of safety, geographic information system, hydrological model, slope stability

Procedia PDF Downloads 392
818 Optimization of Sodium Lauryl Surfactant Concentration for Nanoparticle Production

Authors: Oluwatoyin Joseph Gbadeyan, Sarp Adali, Bright Glen, Bruce Sithole

Abstract:

Optimization of the sodium lauryl surfactant concentration for nanoparticle production provided the platform for advanced research studies. Different concentrations (0.05%, 0.1%, and 0.2%) of sodium lauryl surfactant were added to snail shell powder during the milling process to produce CaCO3 with a smaller particle size. Epoxy nanocomposites with a filler content of 2 wt.%, synthesized with different amounts of sodium lauryl surfactant, were fabricated using a conventional resin casting method. Mechanical properties such as tensile strength, stiffness, and hardness of the prepared nanocomposites were investigated to determine the effect of the sodium lauryl surfactant concentration on the nanocomposite properties. It was observed that loading the synthesized nano-calcium carbonate improved the mechanical properties of neat epoxy at the lower sodium lauryl surfactant concentration of 0.05%. Notably, loading Achatina fulica snail shell nanoparticles manufactured with a small concentration of sodium lauryl surfactant (0.05%) increased the neat epoxy tensile strength by 26%, stiffness by 55%, and hardness by 38%. Homogeneous dispersion, facilitated by the addition of sodium lauryl surfactant during the milling process, improved the mechanical properties. The research evidence suggests that nano-CaCO3 synthesized from Achatina fulica snail shell possesses suitable reinforcement properties for nanocomposite fabrication, and shows that adding a small concentration of sodium lauryl surfactant (0.05%) improved the dispersion of the nanoparticles in the polymer matrix, which provided the improvement in mechanical properties.

Keywords: sodium lauryl surfactant, mechanical properties, Achatina fulica snail shell, calcium carbonate nanopowder

Procedia PDF Downloads 115
817 Finding Data Envelopment Analysis Targets Using Multi-Objective Programming in DEA-R with Stochastic Data

Authors: R. Shamsi, F. Sharifi

Abstract:

In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using the multi-objective programming (MOP) structure. In some problems, the inputs might be stochastic while the outputs are deterministic, and vice versa. In such cases, we propose a multi-objective DEA-R model because in some cases (e.g., when unnecessary and irrational weights by the BCC model reduce the efficiency score), an efficient decision-making unit (DMU) is introduced as inefficient by the BCC model, whereas the DMU is considered efficient by the DEA-R model. In some other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). Thus, we provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model in the invariable return to scale case can be replaced by the MOP-DEA model without explicit outputs in the variable return to scale and vice versa. Using the interactive methods for solving the proposed model yields a projection corresponding to the viewpoint of the DM and the analyst, which is nearer to reality and more practical. Finally, an application is provided.

Keywords: DEA-R, multi-objective programming, stochastic data, data envelopment analysis

Procedia PDF Downloads 79
816 Spectral Properties of Fiber Bragg Gratings

Authors: Y. Hamaizi, H. Triki, A. El-Akrmi

Abstract:

In this paper, the reflection spectra, group delay and dispersion of a uniform fiber Bragg grating (FBG) are obtained. FBGs with two types of apodized variations of the refractive index were modeled to show how the side-lobes can be suppressed. Apodization techniques are used to get optimized reflection spectra. The simulation is based on solving coupled mode equations together with the transfer matrix method.
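
For a uniform grating, coupled-mode theory gives a closed-form reflection coefficient (equivalently, a single transfer-matrix section). The sketch below computes the reflection spectrum for typical grating parameters, which are assumptions rather than the values modeled in the paper.

```python
# Hedged sketch: reflection spectrum of a uniform FBG from coupled-mode
# theory (closed form of the single-section transfer matrix). Grating
# parameters are typical values, not those of the paper.
import numpy as np

N_EFF = 1.447           # effective index (assumed)
LAM_B = 1550e-9         # Bragg wavelength, m (assumed)
KAPPA = 300.0           # coupling coefficient, 1/m (assumed)
L = 0.01                # grating length, m (assumed)

lam = np.linspace(1549e-9, 1551e-9, 2001)
delta = 2 * np.pi * N_EFF * (1 / lam - 1 / LAM_B)     # detuning from Bragg
gamma = np.sqrt(KAPPA**2 - delta**2 + 0j)             # complex for |delta| > kappa
rho = -KAPPA * np.sinh(gamma * L) / (delta * np.sinh(gamma * L)
                                     + 1j * gamma * np.cosh(gamma * L))
R = np.abs(rho) ** 2
print("peak reflectivity:", R.max())                  # ~tanh(kappa * L)^2
```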

Keywords: fiber Bragg gratings, coupled-mode theory, reflectivity, apodization

Procedia PDF Downloads 680
815 Rheological Evaluation of Wall Materials and β-Carotene Loaded Microencapsules

Authors: Gargi Ghoshal, Ashay Jain, Deepika Thakur, U. S. Shivhare, O. P. Katare

Abstract:

The main objectives of this work were the rheological characterization of the dispersions and of the emulsions at different pH used in microcapsule preparation, and of the microcapsules obtained from gum arabic (A), guar gum (G), casein (C) and whey protein isolate (W), prepared to keep β-carotene protected from degradation using the complex coacervation microencapsulation (CCM) technique. The evaluation of the rheological properties of the dispersions, the emulsions at different pH and the resulting microcapsules reveals the changes occurring in the molecular structure of the wall materials during the encapsulation of β-carotene. These dispersions, emulsions at different pH and formulated microcapsules were subjected to various experiments (flow curve test, amplitude sweep, and frequency sweep test) using a controlled stress dynamic rheometer. Flow properties were evaluated in terms of apparent viscosity under steady shear rates ranging from 0.1 to 100 s⁻¹. The frequency sweep test was conducted to determine the extent of viscosity and elasticity present in the samples at constant strain under angular frequencies ranging from 0.1 to 100 rad/s at 25ºC. The dispersions and emulsions exhibited shear-thinning non-Newtonian behavior, whereas the microcapsules were shear-thickening. The apparent viscosity of the dispersions and emulsions decreased at shear rates up to 20 s⁻¹; for the microcapsules it decreased up to ~50 s⁻¹, beyond which it remained constant. Oscillatory shear experiments showed a predominantly viscous liquid behavior up to crossover frequencies of 49.47 rad/s, 57.60 rad/s and 21.45 rad/s for the C, W and A dispersions, respectively; for the AW emulsion sample at pH 5.0 the crossover was at 17.85 rad/s, and for the GW microcapsules at 61.40 rad/s, whereas no such crossover was found for the G dispersion, its emulsion with C, or their microcapsules, which showed more viscous behavior. Storage and loss moduli decreased with time, and a shift of the crossover towards lower frequencies was observed for A, W and C. However, the microcapsules showed more viscous behavior compared to the samples prior to blending.
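
The shear-thinning flow curves described are commonly summarized with the power-law (Ostwald-de Waele) model, η = K·γ̇ⁿ⁻¹, fitted on log-log axes. The sketch below shows this fit on synthetic data; K and n are assumed, not the measured values.

```python
# Sketch of fitting the power-law (Ostwald-de Waele) model,
# eta = K * gamma_dot**(n - 1), to a flow curve; n < 1 indicates the
# shear-thinning behavior reported. Data points are synthetic.
import numpy as np

gamma_dot = np.logspace(-1, 2, 20)                  # shear rate, 1/s
K_TRUE, N_TRUE = 2.0, 0.6                           # assumed model constants
eta = K_TRUE * gamma_dot ** (N_TRUE - 1)            # apparent viscosity, Pa.s

# log(eta) is linear in log(gamma_dot) with slope n - 1
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n, K = slope + 1, np.exp(intercept)
print(f"flow behavior index n = {n:.2f} (< 1: shear-thinning), K = {K:.2f}")
```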

Keywords: viscosity, gums, proteins, frequency sweep test, apparent viscosity

Procedia PDF Downloads 215
814 Preparation of Novel Silicone/Graphene-based Nanostructured Surfaces as Fouling Release Coatings

Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Zhifeng Hao, Ping Jing Mo

Abstract:

As marine fouling-release (FR) surfaces, two new superhydrophobic nanocomposite series were created from polydimethylsiloxane (PDMS) loaded with reduced graphene oxide (RGO) and with graphene oxide/boehmite nanorod (GO-γ-AlOOH) nanofillers. The self-cleaning and antifouling capabilities were tuned by controlling the nanofillers' shapes and their distribution in the silicone matrix. The γ-AlOOH nanorods, with an average diameter of 10-20 nm and a length of 200 nm, were single-crystalline. RGO was made by a hydrothermal process, whereas the GO-γ-AlOOH nanocomposites were made by chemical deposition, for use as fouling-release coating materials. These nanofillers were dispersed in the silicone matrix by solution casting to explore the synergistic effects of graphene-based materials on the surface, mechanical, and FR characteristics. Water contact angle (WCA) measurements together with scanning electron and atomic force microscopy (SEM and AFM) were used to investigate the surface's hydrophobicity and antifouling capabilities. The roughness, superhydrophobicity, and surface mechanical characteristics of the coatings all increased with the homogeneity of the nanocomposite dispersion. To examine the antifouling performance of the coating systems, laboratory tests were conducted for 30 days using selected bacteria. The PDMS/GO-γ-AlOOH nanorod composite demonstrated superior antibacterial efficacy against several bacterial strains compared with the PDMS/RGO nanocomposite, which is attributed to the high surface area and stabilizing effect of the GO-γ-AlOOH hybrid nanofillers. The PDMS/GO-γ-AlOOH nanorod composite (3 wt.%) showed the lowest biodegradability percentage (1.6%), while its microbial endurability percentages against gram-positive bacteria, gram-negative bacteria, and fungi were 86.42%, 97.94%, and 85.97%, respectively. The homogeneous GO-γ-AlOOH (3 wt.%) dispersion, with a WCA of 151° and a rough surface, yielded the most effective superhydrophobic antifouling nanostructured coating.

Keywords: superhydrophobic nanocomposite, fouling release, nanofillers, surface coating

Procedia PDF Downloads 203
813 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on expected returns as inputs for the asset classes in the investment universe and use only risk information. They include the Minimum Variance strategy (MV strategy), the traditional volatility-based Risk Parity strategy (SRP strategy), the Most Diversified Portfolio strategy (MDP strategy) and, for many, the Equally Weighted strategy (EW strategy). All of these approaches rely on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they use the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches, with two steps forward. First, a new and more flexible objective function is introduced, based on a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis, which can alternatively serve a risk-minimization goal or a homogeneous risk-distribution goal. The new basic idea thus consists in extending the goals of typical risk-based approaches to a combined risk measure. The rationale for operating with such a measure is that volatility and kurtosis are both expressions of uncertainty, read as the dispersion of returns around the mean; both preserve a symmetric framework and take the entire return distribution into account, but they differ in that the former captures the "normal" or ordinary dispersion of returns, while the latter captures extreme dispersion. A combined risk metric built from two individual metrics that focus on the same phenomenon but are differently sensitive to its intensity therefore allows the asset manager, by varying the relevance coefficient assigned to each metric in the objective function, to express a wide set of plausible investment goals for the portfolio construction process and to serve investors differently concerned with tail risk and with traditional risk. Since this is the first study to implement risk-based approaches using a combined risk measure, it is of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK and KRP strategies, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV and SRP strategies. Previous literature established an increasing ordering in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided mathematical proofs of an "equalization effect" concerning marginal risks under the MV strategy and risk contributions under the SRP strategy. A theoretical demonstration of whether similar conclusions hold for the MK and KRP strategies is still pending; this paper fills that gap.
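
A minimal sketch of such a combined risk measure follows: a convex combination of portfolio volatility and the fourth root of the portfolio fourth moment (both homogeneous of degree one), minimized over long-only weights. The fat-tailed sample returns and the relevance coefficient alpha are illustrative assumptions, not the paper's data or calibration.

```python
# A minimal sketch of minimizing a combined volatility-kurtosis risk measure.
# Sample returns and alpha are assumed for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.standard_t(df=5, size=(1000, 4)) * 0.01   # fat-tailed sample returns

def combined_risk(w, alpha=0.5):
    rp = R @ w - (R @ w).mean()                    # demeaned portfolio returns
    vol = np.sqrt((rp ** 2).mean())                # portfolio volatility
    kurt = ((rp ** 4).mean()) ** 0.25              # 4th root of 4th moment
    return alpha * vol + (1 - alpha) * kurt        # combined measure

n = R.shape[1]
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(combined_risk, np.full(n, 1 / n),
               bounds=[(0, 1)] * n, constraints=cons)
print("minimum-combined-risk weights:", np.round(res.x, 3))
```

Varying alpha between 0 and 1 moves the solution between a minimum-kurtosis-style portfolio and a minimum-variance-style portfolio, which is precisely the flexibility the combined objective is meant to provide.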

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 39
812 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the disease first occurred in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new daily cases, total deaths registered, and daily deaths due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing in order to forecast future new COVID cases. Support Vector Machine (SVM), Random Forest, and linear regression algorithms were chosen to study model performance in predicting new COVID-19 cases. The statistical performance of each model in predicting new COVID cases was evaluated with metrics such as the r-squared value and the mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. The experimental analysis shows that the Random Forest algorithm performs the most effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant measures to control the spread of the virus.
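
The evaluation workflow described above can be sketched with scikit-learn as follows. The synthetic daily-cases series stands in for the WHO data, and n_estimators=30 is an assumption mirroring the reported n=30; a time-aware split could be substituted for the random one used here.

```python
# A minimal sketch of the 8:2 split plus Random Forest workflow, scored with
# R-squared and mean squared error. The data series is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

days = np.arange(426)                                   # Jan 2020 - Mar 2021
cases = 1000 + 50 * days + 5000 * np.sin(days / 30.0)   # hypothetical series
X = days.reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, cases, test_size=0.2,
                                          random_state=42)  # 8:2 split
model = RandomForestRegressor(n_estimators=30, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.3f}, "
      f"MSE = {mean_squared_error(y_te, pred):.1f}")
```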

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 87
811 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow

Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather

Abstract:

The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows, with relevance to processes employed in a wide range of applications such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid-phase behavior on particle dispersion in turbulent horizontal channel flow is investigated. The mathematical modeling technique is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with the flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM), which is used to predict particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three channel flows with shear Reynolds numbers Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research has focused on predicting the conditions favoring particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example air conditioning and refrigeration units, heat exchangers, and oil and gas suction and pressure lines. The particle sizes selected are 45.6, 102 and 150 µm; the densities 250, 1000 and 2159 kg m-3; the surface energies 50, 500 and 5000 mJ m-2; and the volume fractions 7.84 × 10-6, 2.8 × 10-5 and 1 × 10-4. Such particle properties are associated with particles found in soil, as well as with metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It was found that the turbulence structure of the flow dominates the motion of the particles, creating particle-particle interactions, with most of these interactions taking place close to the channel walls and in regions of high turbulence, where agglomeration is aided both by the high levels of turbulence and by the high concentration of particles. A positive relationship between agglomeration and particle surface energy, concentration, size and density was observed. Moreover, the results for the three Reynolds numbers considered show that, for high surface energy particles, the rate of agglomeration is strongly influenced by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes as the flow turbulence intensity increases.
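
As a back-of-the-envelope companion to the particle properties listed above, the sketch below computes the Stokes relaxation time, tau_p = rho_p d^2 / (18 mu), and a wall-unit Stokes number, St+ = tau_p u_tau^2 / nu, for each size-density pair. The carrier-fluid properties (air) and the friction velocity are illustrative assumptions, not values taken from the simulations.

```python
# A minimal sketch of particle inertial response in wall turbulence; fluid
# properties and u_tau are assumed, particle values are those listed above.
import numpy as np

rho_f, nu = 1.2, 1.5e-5        # air: density (kg/m3), kinematic viscosity (m2/s)
mu = rho_f * nu                # dynamic viscosity (Pa s)
u_tau = 0.3                    # friction velocity, assumed (m/s)

d   = np.array([45.6e-6, 102e-6, 150e-6])   # particle diameters (m)
rho = np.array([250.0, 1000.0, 2159.0])     # particle densities (kg/m3)

for di in d:
    for ri in rho:
        tau_p = ri * di**2 / (18 * mu)      # Stokes relaxation time (s)
        st = tau_p * u_tau**2 / nu          # Stokes number in wall units
        print(f"d={di*1e6:5.1f} um, rho={ri:6.0f} kg/m3 -> St+ = {st:7.1f}")
```

Larger St+ implies a slower particle response to turbulent fluctuations, which is consistent with the observed dominance of the near-wall turbulence structure in driving particle-particle interactions.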

Keywords: agglomeration, channel flow, DEM, LES, turbulence

Procedia PDF Downloads 291