Search results for: computational finance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2586

546 Assessment of Air Pollutant Dispersion and Soil Contamination: The Critical Role of MATLAB Modeling in Evaluating Emissions from the Covanta Municipal Solid Waste Incineration Facility

Authors: Jadon Matthias, Cindy Dong, Ali Al Jibouri, Hsin Kuo

Abstract:

The environmental impact of emissions from the Covanta Waste-to-Energy facility in Burnaby, BC, was comprehensively evaluated, focusing on the dispersion of air pollutants and the subsequent assessment of heavy metal contamination in surrounding soils. A Gaussian Plume Model, implemented in MATLAB, was utilized to simulate the dispersion of key pollutants to understand their atmospheric behaviour and potential deposition patterns. The MATLAB code developed for this study enhanced the accuracy of pollutant concentration predictions and provided capabilities for visualizing pollutant dispersion in 3D plots. Furthermore, the code could predict the maximum concentration of pollutants at ground level, eliminating the need to use the Ranchoux model for predictions. Complementing the modelling approach, empirical soil sampling and analysis were conducted to evaluate heavy metal concentrations in the vicinity of the facility. This integrated methodology underscored the importance of computational modelling in air pollution assessment and highlighted the necessity of soil analysis to obtain a holistic understanding of environmental impacts. The findings emphasized the effectiveness of current emissions controls while advocating for ongoing monitoring to safeguard public health and environmental integrity.
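
As a rough illustration of the kind of calculation described, and not the authors' MATLAB code, the following Python sketch evaluates the standard Gaussian plume equation for the concentration downwind of an elevated point source; the emission rate, stack height, wind speed and the Briggs-type dispersion-coefficient fits are placeholder assumptions.

```python
import numpy as np

def gaussian_plume(Q, u, H, x, y, z, a=0.08, b=0.0001, c=0.06, d=0.0015):
    """Steady-state Gaussian plume concentration (kg/m^3).

    Q: emission rate (kg/s), u: wind speed (m/s), H: effective stack height (m).
    sigma_y and sigma_z follow simple Briggs-style fits; coefficients are illustrative only.
    """
    sigma_y = a * x / np.sqrt(1.0 + b * x)      # lateral dispersion (m)
    sigma_z = c * x / np.sqrt(1.0 + d * x)      # vertical dispersion (m)
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # Ground reflection: image-source term exp(-(z+H)^2 / (2 sigma_z^2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration from 100 m to 5 km downwind (illustrative inputs)
x = np.linspace(100, 5000, 200)
conc = gaussian_plume(Q=0.5, u=4.0, H=60.0, x=x, y=0.0, z=0.0)
print(f"max ground-level concentration: {conc.max():.3e} kg/m^3 "
      f"at x = {x[conc.argmax()]:.0f} m")
```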

Keywords: air emissions, Gaussian Plume Model, MATLAB, soil contamination, air pollution monitoring, waste-to-energy, pollutant dispersion visualization, heavy metal analysis, environmental impact assessment, emission control effectiveness

Procedia PDF Downloads 23
545 Heat Sink Optimization for a High Power Wearable Thermoelectric Module

Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras

Abstract:

As a result of current energy and environmental issues, the human body is regarded as one of the promising candidates for converting wasted heat to electricity (Seebeck effect). The thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it to eco-friendly electrical power. However, the uneven distribution of body heat and the body's curved geometry restrict the harvesting of an adequate amount of energy. To transform the heat radiated by the body into power effectively, the most direct solution is to conform the thermoelectric generator (TEG) to the arbitrary surface of the body and increase the temperature difference across the thermoelectric legs. To this end, a computational survey through COMSOL Multiphysics is presented in this paper, with the main focus on the impact of integrating a flexible wearable TEG with a corrugated-shaped heat sink on the module power output. To eliminate external parameters (temperature, air flow, humidity), the simulations are conducted at indoor thermal levels and with a stationary wearer. The full thermoelectric characterization of the proposed TEG with a wavy-shaped heat sink has been computed, leading to a maximum power output of 25 µW/cm² at a temperature difference of nearly 13 °C. It is noteworthy that, owing to the flexibility of the proposed TEG and heat sink, the applicability and efficiency of the module remain high even on the curved surfaces of the body. Consequently, the results demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated with no heat sink and offer a new train of thought for the development of self-sustained and unobtrusive wearable power supplies that generate energy from low-grade heat dissipated by the body.
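
A back-of-the-envelope sketch of the quantity the simulations target, the matched-load power density for a given temperature difference, is given below; the Seebeck coefficient, internal resistance and module area are invented placeholders chosen only to land near the reported order of magnitude, and are not parameters of the COMSOL model.

```python
def teg_max_power_density(seebeck_total, delta_T, R_internal, area_cm2):
    """Maximum electrical power density (W/cm^2) of a TEG at matched load.

    P_max = (S * dT)^2 / (4 * R_int), divided by the module footprint.
    """
    p_max = (seebeck_total * delta_T) ** 2 / (4.0 * R_internal)
    return p_max / area_cm2

# Illustrative numbers only (roughly the scale of a small wearable module)
p = teg_max_power_density(seebeck_total=0.012,  # V/K, total module Seebeck coefficient
                          delta_T=13.0,          # K, as in the reported temperature difference
                          R_internal=10.0,       # ohm, internal resistance
                          area_cm2=25.0)         # cm^2, module footprint
print(f"~{p * 1e6:.1f} uW/cm^2")
```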

Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat

Procedia PDF Downloads 153
544 Modeling and Simulation of Secondary Breakup and Its Influence on Fuel Spray in High Torque Low Speed Diesel Engine

Authors: Mohsin Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

High torque low-speed diesel engines have a wide range of industrial and commercial applications. In the literature, a great deal of work is found on high-speed diesel engines, whereas research on high torque low-speed engines is rare. Fuel injection plays a key role in engine efficiency and in the reduction of exhaust emissions, and fuel breakup plays a critical role in air-fuel mixing and spray combustion. The current study numerically examines an important phenomenon in spray combustion, namely the deformation and breakup of liquid drops in a compression ignition internal combustion engine. The secondary breakup and its influence on the spray and on the characteristics of the compressed in-cylinder gas have been calculated using simulation software under high torque low-speed diesel-like conditions. The secondary spray breakup is modeled with KH-RT instabilities. The continuous field is described by a turbulence model, and the dynamics of the dispersed droplets are modeled by a Lagrangian tracking scheme. The results obtained with the KH-RT model are compared against other default breakup methods available in OpenFOAM and against published experimental data. The numerical simulations, carried out in OpenFOAM and MATLAB, are analyzed over the complete 720-degree four-stroke engine cycle at low engine speed so that favorable agreement can be achieved. The results thus obtained will be analyzed for better evaporation in the near-nozzle region. The proposed analyses will further help achieve better engine efficiency, lower emissions and improved fuel economy.

Keywords: diesel fuel, KH-RT, Lagrangian, OpenFOAM, secondary breakup

Procedia PDF Downloads 267
543 Analyzing Emerging Scientific Domains in Biomedical Discourse: Case Study Comparing Microbiome, Metabolome, and Metagenome Research in Scientific Articles

Authors: Kenneth D. Aiello, M. Simeone, Manfred Laubichler

Abstract:

It is increasingly difficult to analyze emerging scientific fields, as contemporary fields are more dynamic, their boundaries are more porous, and the relational possibilities have increased due to Big Data and new information sources. In biomedicine, where funding, medical categories, and medical jurisdiction are determined by distinct boundaries on biomedical research fields and definitions of concepts, ambiguity persists between the microbiome, metabolome, and metagenome research fields. This ambiguity continues despite efforts by institutions and organizations to establish parameters around the core concepts and research discourses. Further, the explosive growth of microbiome, metabolome, and metagenomic research has led to unknown variation and covariation, making the application of findings across subfields, or coming to a consensus, difficult. This study explores the evolution and variation of knowledge within the microbiome, metabolome, and metagenome research fields related to ambiguous scholarly language and commensurable theoretical frameworks via a semantic analysis of key concepts and narratives. A computational historical framework of cultural evolution and large-scale publication data highlight the boundaries and overlaps between the competing scientific discourses surrounding the three research areas. The results of this study highlight how discourse and language distribute power within scholarly and scientific networks, specifically the power to set and define norms, central questions, methods, and knowledge.

Keywords: biomedicine, conceptual change, history of science, philosophy of science, science of science, sociolinguistics, sociology of knowledge

Procedia PDF Downloads 135
542 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning

Authors: Joseph George, Anne Kotteswara Roa

Abstract:

Skin disease is one of the most common health issues faced by people nowadays. Skin cancer (SC) is one of them, and its detection relies on skin biopsy outputs and the expertise of doctors, which consumes considerable time and can give inaccurate results. Skin cancer detection at the early stage is a challenging task, yet the disease easily spreads to the whole body and leads to an increase in the mortality rate; it is curable when detected early. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is largely based on disease features such as shape, size, color and symmetry. Many skin diseases share similar characteristics; hence, selecting important features from skin cancer image datasets is a challenging issue. An automated skin cancer detection and classification framework is therefore required to improve diagnostic accuracy and to cope with the scarcity of human experts. Recently, deep learning techniques such as convolutional neural networks (CNN), deep belief networks (DBN), artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases, along with the mitigation of computational complexity and time consumption.
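
The evaluation metrics listed in the abstract follow directly from the confusion matrix of a classifier; the sketch below computes them for hypothetical counts (the numbers are invented, not results from any surveyed model).

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics used to compare SC classifiers."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)          # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f_measure   = 2 * precision * recall / (precision + recall)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                sensitivity=recall, specificity=specificity, f_measure=f_measure)

# Hypothetical confusion-matrix counts for a melanoma-vs-benign classifier
print(classification_metrics(tp=180, fp=20, tn=760, fn=40))
```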

Keywords: skin cancer, deep learning, performance measures, accuracy, datasets

Procedia PDF Downloads 134
541 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and two-fold virtual property rights system. Based on the “bundle of right” theory, this paper establishes specific three-level data rights. This paper analyzes the cases: Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz. 
This paper concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial in establishing the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, Big data

Procedia PDF Downloads 44
540 Fire and Explosion Consequence Modeling Using Fire Dynamic Simulator: A Case Study

Authors: Iftekhar Hassan, Sayedil Morsalin, Easir A Khan

Abstract:

Accidents involving fire occur frequently in recent times, and their causes show a great deal of variety, so that the intervention methods and risk assessment strategies required are unique in each case. On September 4, 2020, a fire and explosion occurred in a confined space, caused by a methane gas leak from an underground pipeline, in the Baitus Salat Jame mosque during night (Esha) prayer in Narayanganj District, Bangladesh, killing 34 people. In this research, this incident is simulated using the Fire Dynamics Simulator (FDS) software to analyze and understand the nature of the accident and the associated consequences. FDS is an advanced computational fluid dynamics (CFD) code for fire-driven fluid flow, which numerically solves a large eddy simulation form of the Navier-Stokes equations to simulate fire and smoke spread and to predict thermal radiation, toxic substance concentrations and other relevant fire parameters. This study focuses on understanding the nature of the fire and on evaluating the consequences of the thermal radiation caused by the vapor cloud explosion. An evacuation model was constructed to visualize the effect of evacuation time and the fractional effective dose (FED) for different types of agents. The results are presented as 3D animations, sliced pictures and graphical representations to understand the fire hazards caused by thermal radiation or smoke due to the vapor cloud explosion. This study will help in designing and developing appropriate response strategies for preventing similar accidents.

Keywords: consequence modeling, fire and explosion, fire dynamics simulation (FDS), thermal radiation

Procedia PDF Downloads 230
539 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard, which causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles for representing the urban area topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation in the Reyran river valley following the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with notable efficiency speedup due to parallelization.

Keywords: dam-break, discontinuous Galerkin scheme, flood modeling, shallow water equations

Procedia PDF Downloads 175
538 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling

Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines

Abstract:

The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation into the wind; however, the power developed is lower than that of other types of wind turbines such as the Darrieus. To improve this performance, several studies have been carried out, such as optimizing blade shape, using passive controls and minimizing sources of power loss such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing into the CFD model a user-defined function that accounts for the resisting torque. This user-defined function is developed to simulate the action of the wind on the rotor; it receives the moment coefficient as an input and computes the rotational velocity to be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied on the turbine and on the resisting torque, which is modelled as a linear function. Linking the implemented user-defined function with the CFD solver allows the real operation of a Savonius turbine exposed to the wind to be simulated. It is noticed that the wind turbine takes a while to reach the stationary regime, where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
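
The rotor dynamics that the user-defined function reproduces, an aerodynamic moment driving the rotor against a resisting torque that is linear in rotational speed, can be sketched outside any CFD solver as a simple time integration; the inertia, friction coefficients and moment-coefficient curve below are invented placeholders, not the values used in the study.

```python
# Placeholder rotor / flow parameters (illustrative only)
rho, U, R_rotor, A = 1.2, 7.0, 0.5, 1.0       # air density, wind speed, rotor radius, swept area
I = 0.8                                        # rotor inertia (kg m^2)
a_fric, b_fric = 0.05, 0.02                    # resisting torque T_r = a + b*omega (N m)

def moment_coefficient(tsr):
    """Invented Cm(lambda) curve standing in for the value returned by the CFD solver."""
    return max(0.0, 0.30 - 0.18 * tsr)

omega, dt = 0.1, 1e-3
for _ in range(200_000):                       # integrate until the stationary regime
    tsr = omega * R_rotor / U                  # tip speed ratio
    M_aero = 0.5 * rho * A * R_rotor * U**2 * moment_coefficient(tsr)
    M_res = a_fric + b_fric * omega            # linear resisting torque
    domega = (M_aero - M_res) / I * dt
    omega += domega
    if abs(domega) < 1e-12:                    # rotational velocity has become invariable
        break

tsr = omega * R_rotor / U
Cp = moment_coefficient(tsr) * tsr             # power coefficient = Cm * lambda
print(f"steady omega = {omega:.2f} rad/s, TSR = {tsr:.2f}, Cp = {Cp:.3f}")
```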

Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances

Procedia PDF Downloads 161
537 Modeling Flow and Deposition Characteristics of Solid CO2 during Choked Flow of CO2 Pipeline in CCS

Authors: Teng Lin, Li Yuxing, Han Hui, Zhao Pengfei, Zhang Datong

Abstract:

With the development of carbon capture and storage (CCS), the flow assurance of CO2 transportation becomes more important, particularly for supercritical CO2 pipelines. A relieving system using a choke valve is applied to control the pressure in the CO2 pipeline. However, the temperature of the fluid drops rapidly because of Joule-Thomson cooling (JTC), which may cause solid CO2 to form and block the pipe. In this paper, a computational fluid dynamics (CFD) model, using a modified Lagrangian method, the Reynolds stress transport model (RSM) for turbulence and a stochastic tracking model (STM) for particle trajectories, was developed to predict the deposition characteristics of solid carbon dioxide. The model predictions were in good agreement with the experimental data published in the literature. It can be observed that the particle distribution affected the deposition behavior. In the region of the sudden expansion, the smaller particles that accumulated tightly on the wall were dominant in pipe blockage. By contrast, the solid CO2 particles deposited near the outlet were usually bigger and their stacked structure was looser. According to the calculation results, the movement of the particles can be classified into four main types: turbulent motion close to the sudden expansion structure, balanced motion in the region from the sudden expansion to the middle of the pipe, inertial motion near the outlet, and escape. Because of these four types of motion, the particle deposits accumulated primarily in the sudden expansion region, the reattachment region and the outlet region. The Stokes number also had an effect on the deposition ratio, and it is recommended that Stokes numbers in the range of 3-8 be avoided.
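
The Stokes number used to characterise which particles deposit can be estimated directly from particle and flow properties, as in this short sketch; the dry-ice particle size, gas viscosity, velocity and pipe dimension are rough illustrative assumptions, not values from the study.

```python
def stokes_number(rho_p, d_p, u, mu, L):
    """St = particle relaxation time / flow characteristic time
         = rho_p * d_p^2 * u / (18 * mu * L)."""
    tau_p = rho_p * d_p**2 / (18.0 * mu)   # particle relaxation time (s)
    return tau_p * u / L

# Illustrative values: ~1560 kg/m^3 dry-ice particle, 20 um diameter, 30 m/s jet,
# 50 mm characteristic length, gas viscosity ~1.4e-5 Pa s at low temperature
st = stokes_number(rho_p=1560.0, d_p=20e-6, u=30.0, mu=1.4e-5, L=0.05)
print(f"St = {st:.2f}")  # the abstract recommends avoiding the 3-8 St range
```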

Keywords: carbon capture and storage, carbon dioxide pipeline, gas-particle flow, deposition

Procedia PDF Downloads 372
536 Agent-Based Modeling to Simulate the Dynamics of Health Insurance Markets

Authors: Haripriya Chakraborty

Abstract:

The healthcare system in the United States is considered to be one of the most inefficient and expensive among developed countries. Consequently, there are persistent concerns regarding the overall functioning of this system. For instance, the large number of uninsured individuals and high premiums are pressing issues that have been shown to have a negative effect on health outcomes, with possible life-threatening consequences. The Affordable Care Act (ACA), which was signed into law in 2010, was aimed at improving some of these inefficiencies. This paper aims to provide a computational mechanism to examine some of these inefficiencies and the effects that policy proposals may have on reducing them. Agent-based modeling is an invaluable tool that provides a flexible framework to model complex systems. It can provide an important perspective into the nature of the interactions that occur and how the benefits of these interactions are allocated. In this paper, we propose a novel and versatile agent-based model with realistic assumptions to simulate the dynamics of a health insurance marketplace that contains a mixture of private and public insurers and individuals. We use this model to analyze the characteristics, motivations, payoffs, and strategies of these agents. In addition, we examine the effects of certain policies, including some of the provisions of the ACA, aimed at reducing the uninsured rate and the cost of premiums, to move closer to a system that is more equitable and improves health outcomes for the general population. Our test results confirm the usefulness of our agent-based model in studying this complicated issue and suggest some implications for public policies aimed at healthcare reform.
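
A minimal skeleton of the kind of agent-based simulation the abstract proposes is sketched below; the agent attributes, premium rule, subsidy and purchase decision are toy assumptions, far simpler than the proposed model, and are meant only to show how insurer and individual agents can interact over repeated rounds (here producing a simple adverse-selection spiral).

```python
import random

random.seed(0)

class Individual:
    def __init__(self):
        self.health_risk = random.uniform(0.0, 1.0)   # expected annual claim fraction
        self.insured = False

    def decide(self, premium, subsidy):
        # Toy rule: buy if the (subsidised) premium is below the perceived expected loss
        expected_loss = self.health_risk * 10_000
        self.insured = (premium - subsidy) < expected_loss

class Insurer:
    def __init__(self, premium=3_000.0):
        self.premium = premium

    def update_premium(self, customers):
        # Adjust the premium toward average claims of the current risk pool plus a margin
        if customers:
            avg_claim = sum(c.health_risk * 10_000 for c in customers) / len(customers)
            self.premium = 0.5 * self.premium + 0.5 * (1.1 * avg_claim)

population = [Individual() for _ in range(10_000)]
insurer = Insurer()
subsidy = 1_000.0    # stand-in for an ACA-style premium subsidy

for year in range(10):
    for person in population:
        person.decide(insurer.premium, subsidy)
    pool = [p for p in population if p.insured]
    insurer.update_premium(pool)
    uninsured_rate = 1.0 - len(pool) / len(population)
    print(f"year {year}: premium = {insurer.premium:,.0f}, uninsured = {uninsured_rate:.1%}")
```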

Keywords: agent-based modeling, healthcare reform, insurance markets, public policy

Procedia PDF Downloads 142
535 Thermal Transport Properties of Common Transition Single Metal Atom Catalysts

Authors: Yuxi Zhu, Zhenqian Chen

Abstract:

It is of great interest to investigate the thermal properties of non-precious metal catalysts for proton exchange membrane fuel cells (PEMFC) in view of their thermal management requirements. Because of the low symmetry of these materials, accurately obtaining their thermal conductivity requires the second- and third-order force constants, which are obtained by combining density functional theory with a machine-learning interatomic potential. Specifically, the interatomic force constants are obtained with a moment tensor potential (MTP), which is trained on ab initio molecular dynamics (AIMD) trajectories computed at 50, 300, 600, and 900 K for 1 ps each, with a time step of 1 fs. The thermal conductivity is then obtained by solving the Boltzmann transport equation. In this paper, the thermal transport properties of single metal atom catalysts are, to the best of our knowledge, studied for the first time using a machine-learning interatomic potential (MLIP). The results show that the single metal atom catalysts exhibit anisotropic thermal conductivities, and some of them exhibit good thermal conductivity. The average lattice thermal conductivities of G-FeN₄, G-CoN₄ and G-NiN₄ at 300 K are 88.61 W/mK, 205.32 W/mK and 210.57 W/mK, respectively, while the other single metal atom catalysts show low thermal conductivity due to their short phonon lifetimes. The results also show that low-frequency phonons (0-10 THz) dominate the thermal transport properties. These results provide theoretical insights into the application of single metal atom catalysts in thermal management.

Keywords: proton exchange membrane fuel cell, single metal atom catalysts, density functional theory, thermal conductivity, machine-learning interatomic potential

Procedia PDF Downloads 32
534 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory

Authors: Hiba El Assibi

Abstract:

This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with Alpha-Beta Pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project will explore additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights on performance and computational efficiency. We also discuss the scalability of our approach to the game, considering different board sizes (number of pits and stones) and rules (different variations) and studying how that affects performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhancing our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
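
The core search the study optimizes, depth-limited minimax with alpha-beta pruning, can be sketched generically as follows; the game-state interface (is_terminal, evaluate, legal_moves, apply) is an assumed abstraction, not the authors' Kalah implementation, and enhancements such as transposition tables and symmetry checks are omitted.

```python
import math

def minimax(state, depth, alpha, beta, maximizing):
    """Depth-limited minimax with alpha-beta pruning.

    `state` is assumed to expose: is_terminal(), evaluate(), legal_moves(), apply(move).
    Returns (value, best_move).
    """
    if depth == 0 or state.is_terminal():
        return state.evaluate(), None

    best_move = None
    if maximizing:
        value = -math.inf
        for move in state.legal_moves():
            child_value, _ = minimax(state.apply(move), depth - 1, alpha, beta, False)
            if child_value > value:
                value, best_move = child_value, move
            alpha = max(alpha, value)
            if alpha >= beta:        # beta cut-off: the opponent will avoid this branch
                break
    else:
        value = math.inf
        for move in state.legal_moves():
            child_value, _ = minimax(state.apply(move), depth - 1, alpha, beta, True)
            if child_value < value:
                value, best_move = child_value, move
            beta = min(beta, value)
            if beta <= alpha:        # alpha cut-off
                break
    return value, best_move

# Usage sketch (KalahState is hypothetical):
# value, move = minimax(KalahState.initial(), depth=12,
#                       alpha=-math.inf, beta=math.inf, maximizing=True)
```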

Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory

Procedia PDF Downloads 58
533 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle errors and miss distances. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is discussed in this paper.

Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 430
532 A Rationale to Describe Ambident Reactivity

Authors: David Ryan, Martin Breugst, Turlough Downes, Peter A. Byrne, Gerard P. McGlacken

Abstract:

An ambident nucleophile is a nucleophile that possesses two or more distinct nucleophilic sites that are linked through resonance and are effectively “in competition” for reaction with an electrophile. Examples include enolates, pyridone anions, and nitrite anions, among many others. Reactions of ambident nucleophiles and electrophiles are extremely prevalent at all levels of organic synthesis. The principle of hard and soft acids and bases (the “HSAB principle”) is most commonly cited in the explanation of selectivities in such reactions. Although this rationale is pervasive in any discussion on ambident reactivity, the HSAB principle has received considerable criticism. As a result, the principle’s supplantation has become an area of active interest in recent years. This project focuses on developing a model for rationalizing ambident reactivity. Presented here is an approach that incorporates computational calculations and experimental kinetic data to construct Gibbs energy profile diagrams. The preferred site of alkylation of nitrite anion with a range of ‘hard’ and ‘soft’ alkylating agents was established by ¹H NMR spectroscopy. Pseudo-first-order rate constants were measured directly by ¹H NMR reaction monitoring, and the corresponding second-order constants and Gibbs energies of activation were derived. These, in combination with computationally derived standard Gibbs energies of reaction, were sufficient to construct Gibbs energy wells. By representing the ambident system as a series of overlapping Gibbs energy wells, a more intuitive picture of ambident reactivity emerges. Here, previously unexplained switches in reactivity in reactions involving closely related electrophiles are elucidated.
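
The step from a measured rate constant to a Gibbs energy of activation, which underpins the construction of the energy wells, is the Eyring equation; the sketch below applies it at 298.15 K to an illustrative rate constant (the value is a placeholder, not one of the reported measurements).

```python
import math

R  = 8.314462618      # J K^-1 mol^-1
kB = 1.380649e-23     # J K^-1
h  = 6.62607015e-34   # J s

def gibbs_activation(k, T):
    """Eyring equation: dG_act = -R T ln(k h / (kB T))."""
    return -R * T * math.log(k * h / (kB * T))

# Illustrative pseudo-first-order rate constant at 298.15 K (placeholder value)
k_obs = 2.5e-4        # s^-1
dG = gibbs_activation(k_obs, 298.15)
print(f"Gibbs energy of activation ~ {dG / 1000:.1f} kJ/mol")
```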

Keywords: ambident, Gibbs, nucleophile, rates

Procedia PDF Downloads 90
531 Pareto System of Optimal Placement and Sizing of Distributed Generation in Radial Distribution Networks Using Particle Swarm Optimization

Authors: Sani M. Lawal, Idris Musa, Aliyu D. Usman

Abstract:

The Pareto approach to optimal solutions, which evolved for multi-objective optimization problems and stands for a set of solutions in the search space, is adopted in this paper. The paper presents the optimal placement and sizing of distributed generation (DG) in radial distribution networks for the minimization of power loss and voltage deviation as well as the maximization of the voltage profile of the networks. These problems are formulated using particle swarm optimization (PSO) as a constrained nonlinear optimization problem with both the locations and the sizes of DG being continuous. The objective functions adopted are the total active power loss function and the voltage deviation function. The multi-objective nature of the problem made it necessary to form a combined objective function whose solution consists of both the DG location and size. The proposed PSO algorithm is used to determine the optimal placement and size of DG in a distribution network. The output indicates that the PSO technique has an edge over other types of search methods due to its effectiveness and computational efficiency. The proposed method is tested on the standard IEEE 34-bus distribution network and validated with the 33-bus test system. Results indicate that the sizing and location of DG are system dependent and should be optimally selected before installing distributed generators in the system; an improvement in the voltage profile and a reduction in power loss have also been achieved.
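
A compact sketch of the particle swarm optimizer at the core of the method is shown below, applied to a stand-in objective function; in the real problem the objective combines network power loss and voltage deviation evaluated by a load-flow calculation, so the toy quadratic used here is only a placeholder.

```python
import random

random.seed(1)

def objective(x):
    """Placeholder for the weighted sum of active power loss and voltage deviation."""
    location, size = x
    return (location - 17.0) ** 2 + 4.0 * (size - 1.2) ** 2   # toy minimum at (17, 1.2)

bounds = [(2.0, 33.0), (0.1, 3.0)]          # (candidate bus index, DG size in MW), illustrative
n_particles, n_iter, w, c1, c2 = 30, 200, 0.72, 1.5, 1.5

pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]
pbest_val = [objective(p) for p in pos]
gbest = min(pbest, key=objective)

for _ in range(n_iter):
    for i in range(n_particles):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            # Clamp each coordinate to its bounds after the velocity update
            pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
        val = objective(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i][:], val
            if val < objective(gbest):
                gbest = pos[i][:]

print(f"best DG placement ~ bus {gbest[0]:.1f}, size {gbest[1]:.2f} MW, "
      f"loss metric {objective(gbest):.4f}")
```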

Keywords: distributed generation, pareto, particle swarm optimization, power loss, voltage deviation

Procedia PDF Downloads 369
530 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence

Authors: K. N. Kiran, S. Anish

Abstract:

It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which might lead to an increase in secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effects of the end-wall fence on the flow field are calculated from RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulations. The aim of fencing in the blade passage is to get the maximum benefit from flow deviation and to destroy the passage vortex, in terms of loss reduction. It is observed that, for the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps in reducing the underturning by 7° in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure side leg of the horseshoe vortex and helps in breaking the passage vortex. Computations are carried out for different fence heights whose curvature differs from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.

Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow

Procedia PDF Downloads 352
529 Tracing Digital Traces of Phatic Communion in #Mooc

Authors: Judith Enriquez-Gibson

Abstract:

This paper meddles with the notion of phatic communion, introduced 90 years ago by Malinowski, a Polish-born British anthropologist. It explores the phatic in Twitter within the contents of tweets related to moocs (massive open online courses) as a topic or trend. It is not about moocs, though. It is about practices that could easily be hidden or neglected if we let big or massive topics take the lead, or if we simply follow the computational or secret codes behind Twitter itself and third-party software analytics. It draws from media and cultural studies. Though at first it appears data-driven, as I submitted data collection and analytics into the hands of a third-party software tool, Twitonomy, the aim is to follow how phatic communion might be practised in a social media site such as Twitter. Lurking becomes its research method for analysing mooc-related tweets. A total of 3,000 tweets were collected on 11 October 2013 (UK time zone). The emphasis of lurking is to engage with Twitter as a system of connectivity. One interesting finding is that a click is in fact a phatic practice. A click breaks the silence. A click on one of the mooc websites is actually a tweet: a tweet was posted on behalf of a user who simply chose to click, without formulating the text and perhaps without knowing that it contains #mooc. Surely, this mechanism is not about reciprocity. To break the silence, users did not use words; they just clicked the 'tweet button' on a mooc website. A click performs and maintains connectivity, with Twitter as the medium in attendance in our everyday, available when needed to be of service. In conclusion, the phatic culture of breaking silence in Twitter does not have to submit to the power of code and analytics. It is a matter of human code.

Keywords: click, Twitter, phatic communion, social media data, mooc

Procedia PDF Downloads 415
528 Reworking of the Anomalies in the Discounted Utility Model as a Combination of Cognitive Bias and Decrease in Impatience: Decision Making in Relation to Bounded Rationality and Emotional Factors in Intertemporal Choices

Authors: Roberta Martino, Viviana Ventre

Abstract:

Every day we face choices whose consequences are deferred in time. These are intertemporal choices, and they play an important role in the social, economic, and financial world. The discounted utility model is the reference mathematical model for calculating the utility of intertemporal prospects. The discount rate is the main element of the model, as it describes how the individual perceives the indeterminacy of subsequent periods. Empirical evidence has shown a discrepancy between the behavior expected from the predictions of the model and the actual choices made by decision makers. In particular, the term temporal inconsistency denotes choices that do not remain optimal with the passage of time. This phenomenon has been described with hyperbolic models of the discount rate, which, unlike the linear or exponential form assumed by the discounted utility model, is not constant over time. This paper explores the problem of inconsistency by tracing the decision-making process through the concept of impatience. The degree of impatience and the degree of decrease of impatience are two parameters that allow the weight of emotional factors and cognitive limitations during the evaluation and selection of alternatives to be quantified. Although the theory assumes perfectly rational decision makers, behavioral finance and cognitive psychology have made it possible to understand that distortions in the decision-making process and emotional influence have an inevitable impact on decisions. The degree to which impatience decreases is the focus of the first part of the study. By comparing preferences that are consistent and inconsistent over time, it was possible to verify that some anomalies in the discounted utility model result from the combination of cognitive biases and emotional factors. In particular, the delay effect and the interval effect are compared through the concept of misperception of time; starting from psychological considerations, a criterion is proposed to identify the causes of the magnitude effect that considers the differences between outcomes rather than their ratio; and the sign effect is analyzed by integrating into the evaluation of prospects with negative outcomes the psychological aspects of loss aversion provided by prospect theory. An experiment confirms three findings: the greatest variation in the degree of decrease in impatience corresponds to shorter intervals close to the present; the greatest variation in the degree of impatience occurs for outcomes of lower magnitude; and the variation in the degree of impatience is greatest for negative outcomes. The experimental phase was implemented by constructing the hyperbolic factor through the administration of questionnaires designed for each anomaly. This work formalizes the underlying causes of the discrepancy between the discounted utility model and the empirical evidence of preference reversal.
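
The contrast between the constant-rate discounting assumed by the discounted utility model and hyperbolic discounting, which produces the preference reversals discussed here, can be made concrete in a few lines; the discount parameters and amounts below are illustrative, not estimates from the questionnaire data.

```python
import math

def exponential_value(amount, delay, r=0.10):
    """Discounted utility model: constant discount rate r per period."""
    return amount * math.exp(-r * delay)

def hyperbolic_value(amount, delay, k=0.25):
    """Hyperbolic discounting: V = A / (1 + k * D); impatience decreases with delay."""
    return amount / (1.0 + k * delay)

# Preference reversal example: 100 now vs 120 in one period, then both shifted 10 periods out.
for shift in (0, 10):
    ss = hyperbolic_value(100, 0 + shift)   # smaller-sooner option
    ll = hyperbolic_value(120, 1 + shift)   # larger-later option
    choice = "smaller-sooner" if ss > ll else "larger-later"
    print(f"delay shift {shift:2d}: V(SS)={ss:6.2f}, V(LL)={ll:6.2f} -> prefers {choice}")
```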

Keywords: decreasing impatience, discount utility model, hyperbolic discount, hyperbolic factor, impatience

Procedia PDF Downloads 106
527 Variant Selection and Pre-transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel

Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury

Abstract:

Austenitic stainless steels are widely used and give a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face centred cubic (FCC) austenite to the stable body centred cubic (α') or to the hexagonal close packed (ε) martensite may occur, leading to an enhancement of mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation during deformation of the parent austenite FCC phase to form the product HCP and BCC martensite phases separately, obeying their respective orientation relationships. The automated reconstruction of the parent phase orientation from the EBSD data of the product phase orientation is carried out using MATLAB and the TSL-OIM software. The method of triplets was used, which involves forming a triplet of neighboring product grains having a common variant and linking them using a misorientation-based criterion. This led to a proper reconstruction of the pre-transformation phase orientation data and thus of its microstructure and texture. The computational speed of the current method is better than that of previously used reconstruction methods. The reconstruction of austenite from ε and α' martensite was carried out for multiple samples, and their IPF maps, pole figures, inverse pole figures and ODFs were compared. Similar results were observed for all samples. The comparison helps in estimating the correct sequence of the transformation, i.e. γ → ε → α' or γ → α', during deformation of AISI 304 austenitic stainless steel.

Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation

Procedia PDF Downloads 492
526 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidents from the North and Northwest, respectively, while good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. The CFD results are in very good agreement with the field measurements.
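
The validation statistics named in the abstract have standard definitions, sketched below for a hypothetical set of paired observed and predicted concentrations; the numbers are made up and are not the MUST or campus data.

```python
import numpy as np

def validation_metrics(obs, pred):
    """Fractional bias, geometric mean bias and normalized mean square error."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb   = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    mg   = np.exp(np.log(obs).mean() - np.log(pred).mean())
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    return fb, mg, nmse

# Hypothetical paired concentrations (arbitrary units)
obs  = [1.2, 0.8, 2.5, 3.1, 0.6, 1.9]
pred = [1.0, 0.9, 2.2, 3.5, 0.5, 2.1]
fb, mg, nmse = validation_metrics(obs, pred)
print(f"FB = {fb:.3f}, MG = {mg:.3f}, NMSE = {nmse:.3f}")
# A perfect model gives FB = 0, MG = 1, NMSE = 0.
```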

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow

Procedia PDF Downloads 140
525 Generic Early Warning Signals for Program Student Withdrawals: A Complexity Perspective Based on Critical Transitions and Fractals

Authors: Sami Houry

Abstract:

Complex systems exhibit universal characteristics as they near a tipping point. Among them are common generic early warning signals which precede critical transitions. These signals include: critical slowing down in which the rate of recovery from perturbations decreases over time; an increase in the variance of the state variable; an increase in the skewness of the state variable; an increase in the autocorrelations of the state variable; flickering between different states; and an increase in spatial correlations over time. The presence of the signals has management implications, as the identification of the signals near the tipping point could allow management to identify intervention points. Despite the applications of the generic early warning signals in various scientific fields, such as fisheries, ecology and finance, a review of literature did not identify any applications that address the program student withdrawal problem at the undergraduate distance universities. This area could benefit from the application of generic early warning signals as the program withdrawal rate amongst distance students is higher than the program withdrawal rate at face-to-face conventional universities. This research specifically assessed the generic early warning signals through an intensive case study of undergraduate program student withdrawal at a Canadian distance university. The university is non-cohort based due to its system of continuous course enrollment where students can enroll in a course at the beginning of every month. The assessment of the signals was achieved through the comparison of the incidences of generic early warning signals among students who withdrew or simply became inactive in their undergraduate program of study, the true positives, to the incidences of the generic early warning signals among graduates, the false positives. This was achieved through significance testing. Research findings showed support for the signal pertaining to the rise in flickering which is represented in the increase in the student’s non-pass rates prior to withdrawing from a program; moderate support for the signals of critical slowing down as reflected in the increase in the time a student spends in a course; and moderate support for the signals on increase in autocorrelation and increase in variance in the grade variable. The findings did not support the signal on the increase in skewness of the grade variable. The research also proposes a new signal based on the fractal-like characteristic of student behavior. The research also sought to extend knowledge by investigating whether the emergence of a program withdrawal status is self-similar or fractal-like at multiple levels of observation, specifically the program level and the course level. In other words, whether the act of withdrawal at the program level is also present at the course level. The findings moderately supported self-similarity as a potential signal. Overall, the assessment of the signals suggests that the signals, with the exception with the increase of skewness, could be utilized as a predictive management tool and potentially add one more tool, the fractal-like characteristic of withdrawal, as an additional signal in addressing the student program withdrawal problem.
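
The generic early warning indicators examined here, rising variance, skewness and lag-1 autocorrelation computed over a rolling window, are straightforward to evaluate on any student-level time series; the synthetic grade trajectory below is a stand-in for illustration only and does not reflect the study data.

```python
import numpy as np

def rolling_signals(series, window=10):
    """Rolling variance, skewness and lag-1 autocorrelation as generic early warning signals."""
    series = np.asarray(series, float)
    out = []
    for end in range(window, len(series) + 1):
        w = series[end - window:end]
        var = w.var()
        skew = ((w - w.mean()) ** 3).mean() / (w.std() ** 3 + 1e-12)
        ac1 = np.corrcoef(w[:-1], w[1:])[0, 1]
        out.append((var, skew, ac1))
    return out

# Synthetic grade trajectory drifting toward failure (illustrative, not study data)
rng = np.random.default_rng(0)
grades = 75 - np.linspace(0, 25, 30) + rng.normal(0, 3 + np.linspace(0, 4, 30))
for i, (var, skew, ac1) in enumerate(rolling_signals(grades)):
    print(f"window {i:2d}: variance={var:6.2f}, skewness={skew:+.2f}, lag-1 AC={ac1:+.2f}")
```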

Keywords: critical transitions, fractals, generic early warning signals, program student withdrawal

Procedia PDF Downloads 187
524 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms, namely linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks, for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
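
A minimal version of the model-comparison loop is sketched below with scikit-learn; the random feature matrix stands in for the 189 MACCS/RDKit/structural descriptors, and because AqSolDB solubility is a continuous quantity, regressors and the default R²-style score are used here, which is an assumption about how the reported accuracy score was defined.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 189-dimensional descriptor matrix and solubility labels
rng = np.random.default_rng(42)
X = rng.random((1000, 189))
y = X[:, :20].sum(axis=1) + rng.normal(0, 0.3, 1000)   # toy structure-solubility relation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "SVM": SVR(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
    "neural network": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:18s} R^2 = {model.score(X_te, y_te):.3f}")
```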

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 45
523 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of evidence theory (a.k.a. Dempster-Shafer and Dezert-Smarandache theories) for representing and managing uncertainty and conflict to achieve fast change detection and to deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
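
The quickest-change-detection core of the scheme, a CUSUM statistic driven by the log-likelihood ratio between post- and pre-change models (whose expected increment after the change is the KL divergence), can be sketched for Gaussian models as follows; the distributions, threshold and change point are illustrative, and the evidence-combination layer (mass functions, pignistic probabilities) is omitted.

```python
import numpy as np

def cusum_detect(x, mu0, sigma0, mu1, sigma1, threshold=10.0):
    """One-sided CUSUM on the log-likelihood ratio log f1(x)/f0(x) for Gaussian models.

    Returns the first index at which the statistic exceeds the threshold, or None.
    """
    s = 0.0
    for t, xt in enumerate(x):
        llr = (np.log(sigma0 / sigma1)
               - (xt - mu1) ** 2 / (2 * sigma1 ** 2)
               + (xt - mu0) ** 2 / (2 * sigma0 ** 2))
        s = max(0.0, s + llr)          # reset at zero, accumulate evidence of change
        if s > threshold:
            return t
    return None

# Synthetic sensor stream: pre-change N(0, 1), post-change N(1.5, 1) starting at t = 300
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 200)])
alarm = cusum_detect(data, mu0=0.0, sigma0=1.0, mu1=1.5, sigma1=1.0)
print(f"change declared at sample {alarm} (true change point: 300)")
```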

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 337
522 Next Generation UK Storm Surge Model for the Insurance Market: The London Case

Authors: Iacopo Carnacina, Mohammad Keshtpoor, Richard Yablonsky

Abstract:

Non-structural protection measures against flooding are becoming increasingly popular flood risk mitigation strategies. In particular, coastal flood insurance impacts not only private citizens but also insurance and reinsurance companies, who may require it to retain solvency and better understand the risks they face from a catastrophic coastal flood event. In this context, a framework is presented here to assess the risk for coastal flooding across the UK. The area has a long history of catastrophic flood events, including the Great Flood of 1953 and the 2013 Cyclone Xaver storm, both of which led to significant loss of life and property. The current framework will leverage a technology based on a hydrodynamic model (Delft3D Flexible Mesh). This flexible mesh technology, coupled with a calibration technique, allows for better utilisation of computational resources, leading to higher resolution and more detailed results. The generation of a stochastic set of extra tropical cyclone (ETC) events supports the evaluation of the financial losses for the whole area, also accounting for correlations between different locations in different scenarios. Finally, the solution shows a detailed analysis for the Thames River, leveraging the information available on flood barriers and levees. Two realistic disaster scenarios for the Greater London area are simulated: In the first scenario, the storm surge intensity is not high enough to fail London’s flood defences, but in the second scenario, London’s flood defences fail, highlighting the potential losses from a catastrophic coastal flood event.

Keywords: storm surge, stochastic model, levee failure, Thames River

Procedia PDF Downloads 236
521 Combustion and Emissions Performance of Syngas Fuels Derived from Palm Kernel Shell and Polyethylene (PE) Waste via Catalytic Steam Gasification

Authors: Chaouki Ghenai

Abstract:

A computational fluid dynamics analysis of the burning of syngas fuels derived from a biomass and plastic solid waste mixture through a gasification process is presented in this paper. The syngas fuel is burned in a gas turbine can combustor. A gas turbine can combustor with swirl is designed to burn the fuel efficiently and reduce the emissions. The main objective is to test the impact of the alternative syngas fuel compositions and lower heating value on the combustion performance and emissions. The syngas fuel is produced by blending palm kernel shell (PKS) with polyethylene (PE) waste via catalytic steam gasification (fluidized bed reactor). A high hydrogen content syngas fuel was obtained by mixing 30% PE waste with PKS. The syngas composition obtained through the gasification process is 76.2% H2, 8.53% CO, 4.39% CO2 and 10.90% CH4, and the lower heating value of the syngas fuel is LHV = 15.98 MJ/m3. Three fuels were tested in this study: natural gas (100% CH4), the syngas fuel and pure hydrogen (100% H2). The power from the combustor was kept constant for all the fuels tested in this study. The effects of syngas fuel composition and lower heating value on the flame shape, gas temperature, and mass of carbon dioxide (CO2) and nitrogen oxides (NOx) per unit of energy generation are presented in this paper. The results show an increase in the peak flame temperature and NO mass fractions for the syngas and hydrogen fuels compared to natural gas combustion, while lower average CO2 emissions at the exit of the combustor are obtained for the syngas compared to the natural gas fuel.

Keywords: CFD, combustion, emissions, gas turbine combustor, gasification, solid waste, syngas, waste to energy

Procedia PDF Downloads 596
520 Using Human-Centred Service Design and Partnerships as a Model to Promote Cross-Sector Social Responsibility in Disaster Resilience: An Australian Case Study

Authors: Keith Diamond, Tracy Collier, Ciara Sterling, Ben Kraal

Abstract:

The increased frequency and intensity of disaster events in the Asia-Pacific region is likely to require organisations to better understand how their initiatives, and the support they provide to their customers, intersect with other organisations aiming to support communities in achieving disaster resilience. While there is a growing awareness that disaster response and recovery rebuild programmes need to adapt to more integrated, community-led approaches, there is often a discrepancy between how programmes intend to work and how they are collectively experienced in the community, creating undesired effects on community resilience. Following Australia’s North Queensland Monsoon Disaster of 2019, this research set out to understand and evaluate how the service and support ecosystem impacted on the local community’s experience and influenced their ability to respond and recover. The purpose of this initiative was to identify actionable, cross-sector, people-centered improvements that support communities to recover and thrive when faced with disaster. The challenge arose as a group of organisations, including utility providers, banks, insurers, and community organisations, acknowledged that improving their own services would have limited impact on community wellbeing unless the other services people need are also improved and aligned. The research applied human-centred service design methods, typically applied to single products or services, to design a new way to understand a whole-of-community journey. Phase 1 of the research conducted deep contextual interviews with residents and small business owners impacted by the North Queensland Monsoon and qualitative data was analysed to produce community journey maps that detailed how individuals navigated essential services, such as accommodation, finance, health, and community. Phase 2 conducted interviews and focus groups with frontline workers who represented industries that provided essential services to assist the community. Data from Phase 1 and Phase 2 of the research was analysed and combined to generate a systems map that visualised the positive and negative impacts that occurred across the disaster response and recovery service ecosystem. Insights gained from the research has catalysed collective action to address future Australian disaster events. The case study outlines a transformative way for sectors and industries to rethink their corporate social responsibility activities towards a cross-sector partnership model that shares responsibility and approaches disaster response and recovery as a single service that can be designed to meet the needs of communities.

Keywords: corporate social responsibility, cross sector partnerships, disaster resilience, human-centred design, service design, systems change

Procedia PDF Downloads 157
519 Computational Fluid Dynamics Simulations and Analysis of Air Bubble Rising in a Column of Liquid

Authors: Baha-Aldeen S. Algmati, Ahmed R. Ballil

Abstract:

Multiphase flows occur widely in many engineering and industrial processes, as well as in the environment we live in. In particular, bubbly flows are crucial phenomena in fluid flow applications and can be studied and analyzed experimentally, analytically, and computationally. In the present paper, the dynamic motion of an air bubble rising within a column of liquid is numerically simulated using the open-source CFD modeling tool OpenFOAM. An interface-tracking numerical algorithm called the MULES algorithm, which is built into OpenFOAM, is chosen to solve an appropriate mathematical model based on the volume of fluid (VOF) numerical method. The bubbles initially have a spherical shape and start from rest in the stagnant column of liquid. The algorithm is first verified against numerical results and then validated against available experimental data. The comparison revealed that this algorithm provides results that are in very good agreement with the 2D numerical data of other CFD codes. Also, the results for the bubble shape and terminal velocity obtained from the 3D numerical simulation showed very good qualitative and quantitative agreement with the experimental data. The simulated rising bubbles yield a very small percentage error in the bubble terminal velocity compared with the experimental data. The obtained results prove the capability of OpenFOAM as a powerful tool to predict the behavior of rising spherical bubbles in a stagnant column of liquid. This will pave the way for a deeper understanding of the phenomenon of the rise of bubbles in liquids.

Keywords: CFD simulations, multiphase flows, OpenFOAM, rise of bubble, volume of fluid method, VOF

Procedia PDF Downloads 127
518 Prediction of California Bearing Ratio of a Black Cotton Soil Stabilized with Waste Glass and Eggshell Powder using Artificial Neural Network

Authors: Biruhi Tesfaye, Avinash M. Potdar

Abstract:

The laboratory test process to determine the California bearing ratio (CBR) of black cotton soils is not only expensive but also time-consuming. Hence, advance prediction of CBR plays a significant role, as it is applicable in pavement design. The prediction of the CBR of the treated soil was executed with artificial neural networks (ANNs), a computational tool based on the properties of the biological neural system. To observe the CBR values, combined eggshell and waste glass powder was added to the soil at 4, 8, 12, and 16% of the weight of the soil samples. Accordingly, the related laboratory tests were conducted to obtain the data required for the best model. The maximum CBR value of 5.8 was found at 8% eggshell-waste glass powder addition. The model was developed using CBR as the output layer variable. CBR was considered a function of the joint effect of the liquid limit, plastic limit, plasticity index, optimum moisture content and maximum dry density. The best model found was an ANN with 5, 6 and 1 neurons in the input, hidden and output layers, respectively. The performance of the selected ANN was 0.99996, 4.44E-05, 0.00353 and 0.0067 for the correlation coefficient (R), mean square error (MSE), mean absolute error (MAE) and root mean square error (RMSE), respectively. The research presented above throws light on the future scope of stabilization with waste glass combined with different percentages of eggshell, which leads to an economical design with a CBR acceptable for pavement sub-base or base, as desired.
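
The 5-6-1 network described (five soil-index inputs, one hidden layer of six neurons, one CBR output) can be reproduced in outline with scikit-learn; the tiny dataset below is fabricated purely to show the interface and the reported error metrics and is not the laboratory data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Columns: liquid limit, plastic limit, plasticity index, OMC, max dry density (fabricated)
X = np.array([
    [62.0, 30.0, 32.0, 24.0, 1.52],
    [58.0, 28.5, 29.5, 22.5, 1.56],
    [55.0, 27.0, 28.0, 21.0, 1.60],
    [52.0, 26.0, 26.0, 20.0, 1.63],
    [49.0, 25.0, 24.0, 19.0, 1.66],
])
y = np.array([2.9, 3.8, 4.6, 5.4, 5.8])     # CBR (%), fabricated to mimic the trend

model = MLPRegressor(hidden_layer_sizes=(6,),   # 5 inputs -> 6 hidden neurons -> 1 output
                     activation="tanh", solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X, y)
pred = model.predict(X)

mse = mean_squared_error(y, pred)
print(f"R = {np.corrcoef(y, pred)[0, 1]:.5f}, MSE = {mse:.2e}, "
      f"MAE = {mean_absolute_error(y, pred):.4f}, RMSE = {np.sqrt(mse):.4f}")
```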

Keywords: CBR, artificial neural network, liquid limit, plastic limit, maximum dry density, OMC

Procedia PDF Downloads 196
517 The Assessment of Natural Ventilation Performance for Thermal Comfort in Educational Space: A Case Study of Design Studio in the Arab Academy for Science and Technology, Alexandria

Authors: Alaa Sarhan, Rania Abd El Gelil, Hana Awad

Abstract:

Over the last decades, the impact of thermal comfort on the working performance of users and occupants of an indoor space has been a concern. Research papers have concluded that natural ventilation quality directly impacts the level of thermal comfort, so natural ventilation must be taken into account during the design process in order to improve the occupants' efficiency and productivity. One example of spaces with daily long-term occupancy is educational facilities, where many individuals spend long periods receiving a considerable amount of knowledge and additional time applying that knowledge. Thus, this research is concerned with users' level of thermal comfort in the design studios of educational facilities. The natural ventilation quality in a space is affected by a number of parameters, including orientation, opening design, and many other factors. This research aims to investigate the conscious manipulation of the physical parameters of spaces and its impact on natural ventilation performance, which subsequently affects the thermal comfort of users. The current research uses inductive and deductive methods to define natural ventilation design considerations, which are then used in a field study of a design studio in the university building in Alexandria (AAST) to evaluate natural ventilation performance, by analyzing and comparing the current case against the developed framework and conducting a computational fluid dynamics simulation. The results show that natural ventilation performance satisfies only 50% of the natural ventilation design framework; these results are supported by the CFD simulation.

Keywords: educational buildings, natural ventilation, Mediterranean climate, thermal comfort

Procedia PDF Downloads 228