Search results for: prediction equations
875 Application of the Finite Window Method to a Time-Dependent Convection-Diffusion Equation
Authors: Raoul Ouambo Tobou, Alexis Kuitche, Marcel Edoun
Abstract:
The FWM (Finite Window Method) is a new meshfree numerical technique for solving problems defined either in terms of PDEs (Partial Differential Equations) or by a set of conservation/equilibrium laws. The principle behind the FWM is that in such problems, each element of the domain concerned interacts with its neighbors and will always try to adapt in order to stay in equilibrium with respect to those neighbors. This leads to a very simple and robust problem-solving scheme, well suited for transfer problems. In this work, we have applied the FWM to an unsteady scalar convection-diffusion equation. Despite its simplicity, it is well known that convection-diffusion problems can be challenging to solve numerically, especially when convection is highly dominant. This has led researchers to adopt the scalar convection-diffusion equation as a benchmark used to analyze and derive the conditions or artifacts required to numerically solve problems where convection and diffusion occur simultaneously. We have shown here that the standard FWM can be used to solve convection-diffusion equations in a robust manner, as no adjustments (upwinding or artificial diffusion addition) were required to obtain good results, even for high Peclet numbers and coarse space and time steps. A comparison was performed between the FWM scheme and both a first-order implicit Finite Volume scheme (Upwind scheme) and a third-order implicit Finite Volume scheme (QUICK scheme). The comparison showed that for equal space and time grid spacing, the FWM yields much better precision than the Finite Volume schemes used, all having similar computational cost and condition number.
Keywords: Finite Window Method, Convection-Diffusion, Numerical Technique, Convergence
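For reference, a minimal sketch of the benchmark problem and of the first-order implicit upwind finite volume scheme that the abstract uses as one comparison baseline; the grid size, velocity, diffusivity, and boundary values are illustrative assumptions, and the FWM itself is not reproduced here:

    import numpy as np

    # 1D unsteady convection-diffusion: d(phi)/dt + u d(phi)/dx = D d2(phi)/dx2,
    # first-order implicit upwind in space, backward Euler in time.
    N = 50
    L = 1.0
    u, D = 1.0, 0.002                      # Peclet = u*L/D = 500, convection-dominated
    dx, dt = L / N, 0.01
    aW = dt * (u / dx + D / dx**2)         # upwind (u > 0) plus diffusion, west face
    aE = dt * (D / dx**2)                  # diffusion only, east face
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 1.0 + aW + aE
        if i > 0:
            A[i, i - 1] = -aW
        if i < N - 1:
            A[i, i + 1] = -aE

    phi = np.zeros(N)                      # initial field, phi = 0 everywhere
    for step in range(300):                # backward-Euler time marching
        rhs = phi.copy()
        rhs[0] += aW * 1.0                 # Dirichlet inlet phi = 1; outlet phi = 0
        phi = np.linalg.solve(A, rhs)
    print(phi[:10].round(3))               # front convected into the domain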
Procedia PDF Downloads 332
874 Theoretical Framework for Value Creation in Project Oriented Companies
Authors: Mariusz Hofman
Abstract:
The paper ‘Theoretical Framework for Value Creation in Project-Oriented Companies’ is designed to determine how organisations create value and whether this allows them to achieve market success. An assumption has been made that there are two routes to achieving this value. The first is to create intangible assets (i.e. the resources of human, structural and relational capital), while the other is to create added value (understood as the surplus of revenue over costs). It has also been assumed that the combination of the achieved added value and unique intangible assets translates into the success of a project-oriented company. The purpose of the paper is to present a hypothetico-deductive model describing the modus operandi of such companies, together with an approach to operationalising the model. All the latent variables included in the model are theoretical constructs with observational indicators (measures). The existence of each latent variable (construct), and of the submodels, will be confirmed based on a covariance matrix which in turn is based on empirical data, being a set of observational indicators (measures). This will be achieved with a confirmatory factor analysis (CFA). Through this statistical procedure, it will be verified whether the matrix arising from the adopted theoretical model differs statistically from the empirical covariance matrix arising from the system of equations. The fit of the model to the empirical data will be evaluated using χ2, RMSEA and CFI (Comparative Fit Index). If the theoretical conjectures are confirmed, an interesting development path can be defined for project-oriented companies. This will let such organisations perform efficiently in the face of growing competition and pressure for innovation.
Keywords: value creation, project-oriented company, structural equation modelling
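A minimal sketch of how the two χ2-based fit indices named above are typically computed once the model and baseline (independence) χ2 statistics are available; the numeric inputs are placeholders, not estimates from this paper:

    import math

    def rmsea(chi2, df, n):
        # Root Mean Square Error of Approximation
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    def cfi(chi2, df, chi2_base, df_base):
        # Comparative Fit Index relative to the baseline (independence) model
        num = max(chi2 - df, 0.0)
        den = max(chi2 - df, chi2_base - df_base, 0.0)
        return 1.0 - num / den if den > 0 else 1.0

    print(rmsea(chi2=95.4, df=48, n=250))              # < 0.06 is a common cutoff
    print(cfi(95.4, 48, chi2_base=880.0, df_base=66))  # > 0.95 is a common cutoff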
Procedia PDF Downloads 297
873 Analysis on Solar Panel Performance and PV-Inverter Configuration for Tropical Region
Authors: Eko Adhi Setiawan, Duli Asih Siregar, Aiman Setiawan
Abstract:
Solar energy is abundant in nature, particularly in the tropics, where the peak sun hours can reach 8 hours per day. In the fabrication process, photovoltaic (PV) modules are tested under standard test conditions (STC), which specify a module temperature of 25°C, an irradiance of 1000 W/m² with an air mass 1.5 (AM1.5) spectrum, and zero wind speed. Thus, the results of PV performance testing at STC cannot fully represent the performance of PV in the tropics, for example in Indonesia, where temperatures range between 20-40°C. In this paper, the effect of temperature on the choice of a 5 kW AC inverter topology for the PV system, such as the central inverter, string inverter and AC-module, specifically for the tropics, is discussed. The proper inverter topology can be determined by analyzing the effect of temperature and irradiation on the PV panel. The effects of temperature and irradiation are represented in the characteristic I-V and P-V curves. PV characteristics at high temperature were analyzed through solar panel modeling in MATLAB Simulink, based on the mathematical equations that form the solar panel's characteristic curves. Based on the PV simulation, the temperature coefficients of short-circuit current (ISC), open-circuit voltage (VOC), and maximum output power (PMAX) are 0.56%/°C, -0.31%/°C and -0.4%/°C, respectively. These coefficients can be used to calculate the PV electrical parameters ISC, VOC, and PMAX at a given point on the earth's surface. From these parameters, the utility of the 5 kW AC inverter system can be determined. As a result, for tropical areas, the string inverter topology has the highest utility rate, at 98.80%, while the central inverter and AC-module topologies have utility rates of 92.69% and 87.7%, respectively.
Keywords: Photovoltaic, PV-Inverter Configuration, PV Modeling, Solar Panel Characteristics.
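A small sketch of the temperature correction implied by the quoted coefficients, applied relative to the 25°C STC reference; only the three coefficients come from the abstract, while the STC ratings of the example module are assumptions:

    # Temperature coefficients reported above (% per deg C)
    K_ISC, K_VOC, K_PMAX = 0.56, -0.31, -0.40

    def at_temperature(value_stc, coeff_pct_per_degC, t_cell):
        # Scale an STC-rated parameter to a cell temperature t_cell (deg C)
        return value_stc * (1.0 + coeff_pct_per_degC / 100.0 * (t_cell - 25.0))

    # Assumed STC ratings of a sample module (not from the paper): A, V, W
    isc_stc, voc_stc, pmax_stc = 8.5, 37.0, 250.0
    for t in (25, 40, 60):
        print(t,
              round(at_temperature(isc_stc, K_ISC, t), 2),
              round(at_temperature(voc_stc, K_VOC, t), 2),
              round(at_temperature(pmax_stc, K_PMAX, t), 1))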
Procedia PDF Downloads 379
872 Numerical Modelling of Immiscible Fluids Flow in Oil Reservoir Rocks during Enhanced Oil Recovery Processes
Authors: Zahreddine Hafsi, Manoranjan Mishra, Sami Elaoud
Abstract:
Ensuring the maximum recovery rate of oil from reservoir rocks is a challenging task that requires preliminary numerical analysis of the different techniques used to enhance the recovery process. After conventional oil recovery processes, and in order to retrieve the oil left behind after the primary recovery phase, water flooding is one of several techniques used for enhanced oil recovery (EOR). In this research work, EOR via water flooding is numerically modeled, and the hydrodynamic instabilities resulting from immiscible oil-water flow in reservoir rocks are investigated. An oil reservoir is a porous medium consisting of many fractures of tiny dimensions. For modeling purposes, the oil reservoir is considered as a collection of capillary tubes, which provides useful insights into how fluids behave in the reservoir pore spaces. The equations governing oil-water flow in oil reservoir rocks are developed and numerically solved following a finite element scheme. Numerical results are obtained using COMSOL Multiphysics software. The two-phase Darcy module of COMSOL Multiphysics allows modelling of the imbibition process by the injection of water (as the wetting phase) into an oil reservoir. The van Genuchten, Brooks-Corey and Leverett models were considered as retention models, the resulting flow configurations are compared, and the governing parameters are discussed. For the considered retention models, it was found that the onset of instabilities, viz. the fingering phenomenon, is highly dependent on the capillary pressure as well as on the boundary conditions, i.e., the inlet pressure and the injection velocity.
Keywords: capillary pressure, EOR process, immiscible flow, numerical modelling
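For orientation, a minimal sketch of two of the retention (capillary pressure versus saturation) models named above, in their common textbook forms; the entry pressure and shape parameters are illustrative assumptions, not values from the study:

    import numpy as np

    def pc_brooks_corey(se, p_entry=1.0e3, lam=2.0):
        # Brooks-Corey: pc = p_entry * Se^(-1/lambda), in Pa
        return p_entry * se ** (-1.0 / lam)

    def pc_van_genuchten(se, alpha=1.0e-4, n=2.0):
        # van Genuchten: Se = [1 + (alpha*pc)^n]^(-m), inverted for pc, m = 1 - 1/n
        m = 1.0 - 1.0 / n
        return (1.0 / alpha) * (se ** (-1.0 / m) - 1.0) ** (1.0 / n)

    se = np.linspace(0.05, 0.99, 5)        # effective wetting-phase saturation
    print(np.round(pc_brooks_corey(se), 1))
    print(np.round(pc_van_genuchten(se), 1))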
Procedia PDF Downloads 131
871 The Role of Androgens in Prediction of Success in Smoking Cessation in Women
Authors: Michaela Dušková, Kateřina Šimůnková, Martin Hill, Hana Hruškovičová, Hana Pospíšilová, Eva Králíková, Luboslav Stárka
Abstract:
Smoking represents the most widespread substance dependence in the world. Several studies have shown nicotine's ability to alter women's hormonal homeostasis: women smokers have higher testosterone and lower estradiol levels throughout life compared to non-smoking women. We monitored the effect of smoking discontinuation on the steroid spectrum in 40 premenopausal and 60 postmenopausal women smokers. These women were examined before they discontinued smoking and again after 6, 12, 24, and 48 weeks of abstinence. At each examination, blood was collected to determine the steroid spectrum (measured by GC-MS) and LH, FSH, and SHBG (measured by IRMA). A repeated measures ANOVA model was used for evaluation of the data. The study was approved by the local Ethics Committee. Given the small number of premenopausal women who managed to abstain from smoking, only the data from the first 6-week period could be analyzed; a slight increase in androgens after smoking discontinuation occurred. In postmenopausal women, an increase in testosterone, dihydrotestosterone, dehydroepiandrosterone, and other androgens occurred as well. Nicotine replacement therapy, weight changes, and age played no role in the androgen level increase. Higher androgen levels correlated with failure in smoking cessation. Women smokers have higher androgen levels, which might play a role in the development of smoking dependence. Women successful in smoking cessation, compared to unsuccessful ones, have lower androgen levels initially and also after smoking discontinuation. The question remains what androgen levels women have before they start smoking.
Keywords: addiction, smoking, cessation, androgens
Procedia PDF Downloads 381
870 Ten Patterns of Organizational Misconduct and a Descriptive Model of Interactions
Authors: Ali Abbas
Abstract:
This paper presents a descriptive model of organizational misconduct based on observed patterns that occur before and after an ethical collapse. The patterns were classified by categorizing media articles on both for-profit and not-for-profit organizations. Based on the model parameters, the paper provides a descriptive model of various organizational deflection strategies under numerous scenarios, including situations where ethical complaints build up, situations under which whistleblowers become more prevalent, situations where large scandals related to leadership occur, and strategies by which organizations deflect blame when pressure builds up or when the media find out. The model parameters start with the premise of a tolerance for double standards in unethical acts when conducted by leadership or by members of corporate governance. Following this premise, the model explains how organizations engage in discursive strategies to cover up the potential conflicts that arise, including secret agreements and the weakening of stakeholders who may oppose the organizational acts. Deflection strategies include 'preemptive' and 'post-complaint' secret agreements, absent (or vague) documented procedures, engaging in blame and scapegoating, remaining silent on complaints until the media find out, as well as being slow (if at all) to acknowledge misconduct and fast to cover it up. The results of this paper may be used to alert organizational leaders to the implications of such shortsighted strategies toward unethical acts, even if they are deemed legal. Validation of the model assumptions through numerous media articles is provided.
Keywords: ethical decision making, prediction, scandals, organizational strategies
Procedia PDF Downloads 125
869 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach
Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma
Abstract:
Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics for computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous neural network (NARX) model. This architecture aims to achieve a relative RMSE of the order of a few 100 ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high field and low field cycles. The NARX-based architecture is compared with the state-of-the-art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.
Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX
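A minimal sketch of the NARX idea described above: the next field sample is regressed on lagged field values (the autoregressive part) and lagged excitation values (the exogenous part). The signal, lag depth, and network shape are assumptions, and a generic scikit-learn MLP stands in for the paper's architecture:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 40.0, 4000)
    u = np.sin(t) + 0.3 * np.sin(3.1 * t)          # excitation current (synthetic)
    y = np.tanh(u) + 0.2 * np.tanh(np.roll(u, 8))  # response with memory (synthetic)

    lags = 4
    cols = [np.roll(y, k) for k in range(1, lags + 1)]   # y(t-1) ... y(t-lags)
    cols += [np.roll(u, k) for k in range(0, lags)]      # u(t) ... u(t-lags+1)
    X, Y = np.column_stack(cols)[lags:], y[lags:]

    narx = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
    narx.fit(X[:3000], Y[:3000])
    pred = narx.predict(X[3000:])
    rel_rmse = np.sqrt(np.mean((pred - Y[3000:]) ** 2)) / (Y.max() - Y.min())
    print(f"relative RMSE = {rel_rmse:.2e}")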
Procedia PDF Downloads 130
868 Development of Probability Distribution Models for Degree of Bending (DoB) in Chord Member of Tubular X-Joints under Bending Loads
Authors: Hamid Ahmadi, Amirreza Ghaffari
Abstract:
Fatigue life of tubular joints in offshore structures is not only dependent on the value of the hot-spot stress, but is also significantly influenced by the through-the-thickness stress distribution characterized by the degree of bending (DoB). The DoB exhibits considerable scatter, calling for greater emphasis on the accurate determination of its governing probability distribution, which is a key input for the fatigue reliability analysis of a tubular joint. Although tubular X-joints are commonly found in offshore jacket structures, as far as the authors are aware, no comprehensive research has been carried out on the probability distribution of the DoB in tubular X-joints. What has been used so far as the probability distribution of the DoB in reliability analyses is mainly based on assumptions and limited observations, especially in terms of distribution parameters. In the present paper, results of parametric equations available for the calculation of the DoB have been used to develop probability distribution models for the DoB in the chord member of tubular X-joints subjected to four types of bending loads. Based on a parametric study, a set of samples was prepared and density histograms were generated for these samples using the Freedman-Diaconis method. Twelve different probability density functions (PDFs) were fitted to these histograms. The maximum likelihood method was utilized to determine the parameters of the fitted distributions. In each case, the Kolmogorov-Smirnov test was used to evaluate the goodness of fit. Finally, after substituting the values of the estimated parameters for each distribution, a set of fully defined PDFs has been proposed for the DoB in tubular X-joints subjected to bending loads.
Keywords: tubular X-joint, degree of bending (DoB), probability density function (PDF), Kolmogorov-Smirnov goodness-of-fit test
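A compact sketch of the fitting pipeline described above (Freedman-Diaconis histogram, maximum likelihood fits, Kolmogorov-Smirnov test); the sample data and the three candidate families shown are stand-ins, not the paper's twelve fitted PDFs:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    dob = rng.lognormal(mean=-0.5, sigma=0.3, size=2000)  # stand-in for DoB samples

    # Density histogram with Freedman-Diaconis bin widths
    counts, edges = np.histogram(dob, bins='fd', density=True)

    candidates = {'lognorm': stats.lognorm,
                  'gamma': stats.gamma,
                  'weibull_min': stats.weibull_min}
    for name, dist in candidates.items():
        params = dist.fit(dob)                 # maximum likelihood estimates
        D, p = stats.kstest(dob, name, args=params)
        print(f"{name:12s} KS D = {D:.4f}  p = {p:.3f}")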
Procedia PDF Downloads 719
867 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation
Authors: A. Yanik, U. Aldemir
Abstract:
This study investigates the benefits of implementing semi-active devices, in relation to passive viscous damping, in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is considered alongside passive linear viscous damping and an optimal non-causal semi-active control strategy. The optimal strategy requires optimization, during which the Euler-Lagrange equations are solved numerically. The optimal closed-loop performance is evaluated for an idealized controllable dashpot. A simplified single-degree-of-freedom model of an isolated bridge is used as the numerical example. Two bridge cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity and displacement response transmissibility ratios Ta(w), Tv(w), and Td(w) are defined. To fully investigate the behavior of the structure subjected to sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under external excitation, the bridge deck with semi-active control exhibits better structural performance than the passive bridge deck case.
Keywords: bridge structures, passive control, seismic, semi-active control, viscous damping
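As a point of comparison, a textbook sketch of the frequency-dependent transmissibility of a harmonically base-excited SDOF isolated deck; this is the standard linear passive form, not the paper's controllable-dashpot model, and the natural frequency and damping levels are assumed:

    import numpy as np

    def transmissibility(w, wn=2 * np.pi * 0.5, zeta=0.1):
        # Displacement transmissibility of a base-excited SDOF system;
        # for harmonic base motion, the absolute-acceleration ratio Ta
        # takes the same form, and Tv scales similarly.
        r = w / wn
        return np.sqrt((1 + (2 * zeta * r) ** 2) /
                       ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

    w = np.linspace(0.1, 20.0, 500)
    for zeta in (0.05, 0.2, 0.5):          # different damping levels, as in the study
        Td = transmissibility(w, zeta=zeta)
        print(f"zeta = {zeta}: peak Td = {Td.max():.2f}")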
Procedia PDF Downloads 241
866 Fault Tolerant Control of the Dynamical Systems Based on Internal Structure Systems
Authors: Seyed Mohammad Hashemi, Shahrokh Barati
Abstract:
The problem of fault-tolerant control (FTC) by the accommodation method is studied in this paper. A fault may occur in any system component, such as the actuators, the sensors, or the internal structure of the system, and leads to loss of performance and instability of the system. When a fault occurs, the purpose of fault-tolerant control is to devise a strategy that can keep the control loop stable and preserve system performance as far as possible, without shutting down the system. Here, the fault detection and isolation (FDI) part of the system is evaluated with regard to actuator faults. The design of a fault detection and isolation system for a multi-input multi-output (MIMO) plant is carried out with an unknown input observer: the system is divided into several subsystems, with the effects of the other inputs treated as disturbances in the given state equations. In this observer design method, the effect of these disturbances is attenuated, and a fault is detected only on a specific input. The simulation results of this approach confirm the capability of the fault detection and isolation system design. After fault detection and isolation, it is necessary to redesign the controller based on a suitable modification. In this regard, after applying unknown input observer theory to obtain and evaluate the residual signal, the PID controller parameters are iteratively redesigned. Stability of the closed-loop system in the presence of this method is proved. Also, in order to soften the volatility caused by variations of the PID controller parameters, a sigma modification is used as an acceptable solution. Finally, the simulation results for the popular three-tank example confirm the accuracy of the performance.
Keywords: fault tolerant control, fault detection and isolation, actuator fault, unknown input observer
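A minimal sketch of residual-based actuator fault detection with a full-order observer; this is a simplified Luenberger-style stand-in for the unknown input observer described above, and the plant matrices, observer gain, and fault magnitude are all assumptions:

    import numpy as np

    # Stable discrete-time plant; an additive actuator fault appears at k = 30.
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    L_gain = np.array([[0.5], [0.3]])         # observer gain (A - L*C is stable)

    x = np.zeros((2, 1))                      # true state
    xh = np.zeros((2, 1))                     # observer state
    for k in range(60):
        u = np.array([[1.0]])
        f = np.array([[0.4]]) if k >= 30 else np.array([[0.0]])  # actuator fault
        x = A @ x + B @ (u + f)
        y = C @ x
        r = y - C @ xh                        # residual: near zero until the fault
        xh = A @ xh + B @ u + L_gain @ r
        if k % 10 == 0:
            print(f"k = {k:2d}  |residual| = {abs(r[0, 0]):.4f}")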
Procedia PDF Downloads 452
865 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest; such pneumonic conditions can lead to loss of life when the congestion is severe. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features automatically from the pre-processed CT image. This CNN model provides effective feature learning, yielding a 1D feature vector for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. The simulation outcomes using a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scan, COVID-19, deep learning, image processing, lung disease classification
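A minimal sketch of the feature-extraction step described above: a small 2D CNN flattens its feature maps into a 1D vector and applies Min-Max normalization per sample. The layer sizes and input resolution are assumptions, not the HDLA architecture itself:

    import torch
    import torch.nn as nn

    class CTFeatureExtractor(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.flatten = nn.Flatten()        # 2D feature maps -> 1D vector

        def forward(self, x):
            feats = self.flatten(self.conv(x))
            fmin = feats.min(dim=1, keepdim=True).values
            fmax = feats.max(dim=1, keepdim=True).values
            return (feats - fmin) / (fmax - fmin + 1e-8)  # Min-Max normalization

    x = torch.randn(4, 1, 64, 64)              # batch of pre-processed CT patches
    features = CTFeatureExtractor()(x)
    print(features.shape)                      # (4, 8192): input to the classifiers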
Procedia PDF Downloads 154
864 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
The modernization of computer technology and of commercial computational fluid dynamics (CFD) simulation has given more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Pipeline erosion is a complex phenomenon to evaluate by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate at velocities varying from 2-4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulate, which leads to high erosion on the bottom side of the pipeline.
Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε model
Procedia PDF Downloads 408
863 Airline Choice Model for Domestic Flights: The Role of Airline Flexibility
Authors: Camila Amin-Puello, Lina Vasco-Diaz, Juan Ramirez-Arias, Claudia Munoz, Carlos Gonzalez-Calderon
Abstract:
Operational flexibility is a fundamental aspect in the airline industry because, although demand is constantly changing, it is the duty of companies to provide users a service that satisfies their needs in an efficient manner without sacrificing factors such as comfort, safety and other perception variables. The objective of this research is to understand the factors that describe and explain operational flexibility by implementing advanced analytical methods such as exploratory factor analysis and structural equation modeling, examining multiple levels of operational flexibility and understanding how this variable influences users' decision-making when choosing an airline, and, in turn, how it affects the airlines themselves. The use of a hybrid model with latent variables improves the efficiency and accuracy of airline performance prediction in the unpredictable Colombian market. This pioneering study delves into traveler motivations and their impact on domestic flight demand, offering valuable insights to optimize resources and improve the overall traveler experience. Applying these methods, it was identified that low-cost airlines were not found useful in terms of flexibility, while users, especially women, found airlines with greater flexibility in ticket prices and flight schedules to be more useful. All of this allows airlines to anticipate and adapt to their customers' needs efficiently: to plan flight capacity appropriately, adjust pricing strategies and improve the overall passenger experience.
Keywords: hybrid choice model, airline, business travelers, domestic flights
Procedia PDF Downloads 12
862 Numerical Approach for Characterization of Flow Field in Pump Intake Using Two Phase Model: Detached Eddy Simulation
Authors: Rahul Paliwal, Gulshan Maheshwari, Anant S. Jhaveri, Channamallikarjun S. Mathpati
Abstract:
Large pumping facilities are a necessary requirement of the cooling water systems for power plants, process and manufacturing facilities, flood control, and water or wastewater treatment plants. With large capacities of a few hundred to 50,000 m3/hr, care must be taken to ensure uniform flow to the pump to limit vibration, flow-induced cavitation, and performance problems due to the formation of air-entrained vortices and swirl flow. Successful prediction of these phenomena requires a numerical method and a turbulence model able to characterize the dynamics of these flows. In past years, single-phase Reynolds-averaged Navier-Stokes models (such as k-ε, k-ω shear stress transport (SST), and RSM) were used to predict the behavior of the flow. A literature study showed that a two-phase model would be more accurate than a single-phase model. In this paper, a 3D geometry simulated using detached eddy simulation (DES) is used to predict the behavior of the fluid, and the results are compared with experimental results. The effects of different grid structures and boundary conditions are also studied. It is observed that the two-phase flow model can predict the mean flow and turbulence statistics more accurately than the steady SST model. This validated model will be used for further analysis of the vortex structure in a lab-scale model, to generate frequency plots and intensities at different locations in the set-up. This study will help in minimizing the ill effect of vortices on pump performance.
Keywords: grid structure, pump intake, simulation, vibration, vortex
Procedia PDF Downloads 175
861 Budget Optimization for Maintenance of Bridges in Egypt
Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham
Abstract:
Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge represent challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating, and in many cases, maintenance works are performed only in response to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network given the limited available budget, using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, element importance for structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire was conducted to complete the research scope. The proposed model is implemented in software, which provides a friendly user interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from transportation authorities in Egypt. Different scenarios are presented. The results are reasonable, feasible, and within an acceptable domain.
Keywords: bridge management systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain
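A toy sketch of the optimization core described above: a genetic algorithm searches for the set of repairs that maximizes the average network condition under a budget cap. Costs, condition gains, population size, and operator rates are all made-up illustrative values:

    import numpy as np

    rng = np.random.default_rng(42)
    n, budget = 10, 100.0                        # ten bridges, budget cap (assumed)
    cost = rng.uniform(5.0, 40.0, n)             # repair cost per bridge (made up)
    gain = rng.uniform(0.5, 2.0, n)              # condition gain if repaired (made up)
    base = rng.uniform(4.0, 7.0, n)              # current condition ratings (made up)

    def fitness(plan):                           # plan: 0/1 repair decision per bridge
        if plan @ cost > budget:
            return -1.0                          # infeasible plan: over budget
        return float(np.mean(base + plan * gain))

    pop = rng.integers(0, 2, size=(40, n))
    for _ in range(150):
        ranked = pop[np.argsort([fitness(p) for p in pop])[::-1]]
        kids = ranked[:20][rng.integers(0, 20, 40)].copy()  # truncation selection
        for i in range(0, 40, 2):                # one-point crossover
            cut = rng.integers(1, n)
            kids[i, cut:], kids[i + 1, cut:] = (kids[i + 1, cut:].copy(),
                                                kids[i, cut:].copy())
        flip = rng.random(kids.shape) < 0.05     # bit-flip mutation
        kids[flip] = 1 - kids[flip]
        kids[0] = ranked[0]                      # elitism: keep the best plan
        pop = kids
    best = max(pop, key=fitness)
    print(best, round(fitness(best), 3))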
Procedia PDF Downloads 291
860 Liquid Biopsy and Screening Biomarkers in Glioma Grading
Authors: Abdullah Abdu Qaseem Shamsan
Abstract:
Background: Gliomas represent the most frequent and heterogeneous group of tumors arising from glial cells, characterized by difficult monitoring, poor prognosis, and fatality. Tissue biopsy is an established procedure for tumor cell sampling that aids diagnosis, tumor grading, and prediction of prognosis. We studied and compared the levels of liquid biopsy markers in patients with different grades of glioma, and tried to establish a potential association between glioma and specific blood group antigens. Results: 78 patients were identified, among whom the largest proportion with glioblastoma possessed blood group O+ (53.8%). The second highest frequency had blood group A+ (20.4%), followed by B+ (9.0%) and A- (5.1%), with O- the least frequent. The liquid biopsy biomarkers comprised ALT, LDH, lymphocytes, urea, alkaline phosphatase, AST, neutrophils, and CRP. The levels of all the components increased significantly with the severity of the glioma, with the maximum levels seen in glioblastoma (grade IV), followed by grade III and grade II, respectively. Conclusion: Gliomas pose significant clinical challenges due to their heterogeneous nature and aggressive behavior. Liquid biopsy is a non-invasive approach which helps establish the status of the patient and determine the tumor grade, and therefore may show diagnostic and prognostic utility. Additionally, our study provides evidence for a role of ABO blood group antigens in the development of glioma. However, future clinical research on liquid biopsy will improve the sensitivity and specificity of these tests and validate their clinical usefulness to guide treatment approaches.
Keywords: GBM: glioblastoma multiforme, CT: computed tomography, MRI: magnetic resonance imaging, ctRNA: circulating tumor RNA
Procedia PDF Downloads 51
859 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin
Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji
Abstract:
Modeling of the dry compressive strength of sodium silicate-bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit at coordinates 7º 49´N and 5º 5´E, and sodium silicate obtained from the open market in Lagos at coordinates 6°27′11″N 3°23′45″E, all in the south-western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using a laboratory pulveriser, ball mill and sieve shaker, respectively, to obtain 100 μm diameter particles. A manual pipe extruder of dimensions 30 mm diameter by 43.30 mm height was used to prepare the samples, with the percentage volume of sodium silicate varied at 5%, 7.5%, 10%, 12.5%, 15%, 17.5%, 20% and 22.5%, while kaolin and water were kept at 50% and 5%, respectively, for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace. Firing was done at the following temperatures: 700ºC, 750ºC, 800ºC, 850ºC, 900ºC, 950ºC, 1000ºC and 1100ºC. A compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; an optimum compression of 4.41 kN/mm² was obtained at 12.5% sodium silicate. The experimental results were modeled with the MATLAB and Origin packages using polynomial regression equations that predicted the estimated values of the dry compressive strength, later validated with Pearson's rank correlation coefficient, thereby obtaining a very high positive correlation value of 0.97.
Keywords: dry compressive strength, kaolin, modeling, sodium silicate
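A small sketch of the regression-and-validation step described above; the binder percentages match the abstract, and only the 12.5% optimum of 4.41 is taken from the reported results, while the other strength values are invented to illustrate the fit:

    import numpy as np

    silicate = np.array([5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5])  # % binder
    # Illustrative strengths (kN/mm^2); only the 4.41 at 12.5% is from the abstract.
    strength = np.array([2.10, 3.05, 3.90, 4.41, 4.05, 3.40, 2.70, 2.05])

    coeffs = np.polyfit(silicate, strength, deg=2)      # quadratic regression
    fitted = np.polyval(coeffs, silicate)
    r = np.corrcoef(strength, fitted)[0, 1]             # Pearson correlation
    optimum = -coeffs[1] / (2 * coeffs[0])              # vertex of the parabola
    print(f"optimum binder ~ {optimum:.1f}%, Pearson r = {r:.3f}")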
Procedia PDF Downloads 455
858 Fluid–Structure Interaction Modeling of Wind Turbines
Authors: Andre F. A. Cyrino
Abstract:
Knowing that technological advance focuses on the efficient extraction of energy from wind, and therefore on the design of wind turbine structures, this work aims to study the fluid-structure interaction of an idealized wind turbine. The blade was studied as a beam attached to a cylindrical hub whose rotation axis points into the air flow that passes through the rotor. Using the calculus of variations and the finite difference method, the blade is represented by a discrete number of nodes, and the aerodynamic forces are evaluated at them. The study presented here was written in Matlab and performs a numerical simulation of a simplified windmill model containing a hub and three blades, modeled as Euler-Bernoulli beams for small strains, under constant and uniform wind. The mathematical approach relies on Hamilton's Extended Principle, with the aerodynamic loads applied at the nodes according to the local relative wind speed, angle of attack, and aerodynamic lift and drag coefficients. Because of the wide range of angles of attack at which a wind turbine blade operates, the airfoil used in the model was the NREL SERI S809, which allowed equations for Cl and Cd as functions of the angle of attack to be obtained, based on a NASA study. Three-dimensional flow effects were not taken into account, nor was torsion of the beam, which only bends. The results showed the dynamic response of the system in terms of displacement and rotational speed as the turbine reached its final speed. Although the results were not compared to real windmills or more complete models, the resulting values were consistent with the size of the system and the wind speed.
Keywords: blade aerodynamics, fluid–structure interaction, wind turbine aerodynamics, wind turbine blade
Procedia PDF Downloads 268
857 Reliability Based Analysis of Multi-Lane Reinforced Concrete Slab Bridges
Authors: Ali Mahmoud, Shadi Najjar, Mounir Mabsout, Kassim Tarhini
Abstract:
Empirical expressions for estimating the wheel load distribution and live-load bending moment are typically specified in highway bridge codes such as the AASHTO procedures. The purpose of this paper is to analyze the reliability levels that are inherent in reinforced concrete slab bridges designed according to the simplified empirical live load equations in the AASHTO LRFD procedures. To achieve this objective, multi-lane bridges (three and four lanes) with different spans are modeled using finite-element analysis (FEA) subjected to HS20 truck loading, tandem loading, and standard lane loading per the AASHTO LRFD procedures. The FEA results are compared with the AASHTO LRFD moments in order to quantify the biases that might result from the simplifying assumptions adopted in AASHTO. A reliability analysis is conducted to quantify the reliability index for bridges designed using the AASHTO procedures. To reach a consistent level of safety for three- and four-lane bridges, following a previous study restricted to one- and two-lane bridges, the live load factor in the design equation proposed by AASHTO LRFD will be assessed and, if needed, revised by adjusting the live load factor for these lanes. The results will provide structural engineers with more consistent provisions to design concrete slab bridges or to evaluate the load-carrying capacity of existing bridges.
Keywords: reliability analysis of concrete bridges, finite element modeling, reliability analysis, reinforced concrete bridge design, load carrying capacity
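For orientation, a first-order sketch of the reliability index computed for a single moment check, comparing a demand statistic (as might come from the FEA moments) with a resistance statistic; the means and coefficients of variation below are assumptions, not values from the paper:

    import numpy as np

    # First-order reliability index for normal resistance R and demand S:
    # beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2)
    mu_S, cov_S = 480.0, 0.18      # mean live-load moment (kN*m) and COV (assumed)
    mu_R, cov_R = 900.0, 0.12      # mean moment resistance and COV (assumed)
    sig_S, sig_R = mu_S * cov_S, mu_R * cov_R
    beta = (mu_R - mu_S) / np.hypot(sig_R, sig_S)
    print(f"beta = {beta:.2f}")    # AASHTO LRFD calibration targets beta ~ 3.5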
Procedia PDF Downloads 340
856 Contemporary Challenges in Public Relations in the Context of Globalization
Authors: Marine Kobalava, Eter Narimanishvili, Nino Grigolaia
Abstract:
The paper analyzes contemporary problems of public relations in Georgia. Approaches to public attitudes and to the relationship with the population of the country are studied on a global scale, and the importance of forming a public relations concept in Georgia under globalization is justified. The basic components of public relations are characterized through the RACE system, namely Research, Action, Communication, Evaluation. The main challenges of public relations are identified in the research process; taking into consideration the scope of globalization, the influence of social, economic, and political changes in Georgia on PR development is identified. The article discusses public relations as a strategic management function that facilitates communication with society and the recognition and prediction of public interests. In addition, the feminization of the sector is considered to be among the most important achievements of public relations in the modern world; the conclusion is that the feminization indicator of the field is an unconditional increase in the employment rates of women. In the paper, the problems of globalization and public relations in industrial countries are studied, and directions for improving public relations are proposed against the background of the peculiarities of different countries and the globalization process. Public relations under globalization are assessed in accordance with the theory of benefits and requirements, and the requirements are classified into informational, self-identification, integration, social interaction, and other types. In the article, conclusions on the current challenges of public relations in Georgia are drawn, and recommendations for their solution, taking into consideration globalization processes in the world, are proposed.
Keywords: public relations, globalization, RACE system, public relationship concept, feminization
Procedia PDF Downloads 171
855 A Literature Review on Emotion Recognition Using Wireless Body Area Network
Authors: Christodoulou Christos, Politis Anastasios
Abstract:
The utilization of Wireless Body Area Network (WBAN) is experiencing a notable surge in popularity as a result of its widespread implementation in the field of smart health. WBANs utilize small sensors implanted within the human body to monitor and record physiological indicators. These sensors transmit the collected data to hospitals and healthcare facilities through designated access points. Bio-sensors exhibit a diverse array of shapes and sizes, and their deployment can be tailored to the condition of the individual. Multiple sensors may be strategically placed within, on, or around the human body to effectively observe, record, and transmit essential physiological indicators. These measurements serve as a basis for subsequent analysis, evaluation, and therapeutic interventions. In conjunction with physical health concerns, numerous smartwatches are engineered to employ artificial intelligence techniques for the purpose of detecting mental health conditions such as depression and anxiety. The utilization of smartwatches serves as a secure and cost-effective solution for monitoring mental health. Physiological signals are widely regarded as a highly dependable method for the recognition of emotions due to the inherent inability of individuals to deliberately influence them over extended periods of time. The techniques that WBANs employ to recognize emotions are thoroughly examined in this article.
Keywords: emotion recognition, wireless body area network, WBAN, ERC, wearable devices, psychological signals, emotion, smart-watch, prediction
Procedia PDF Downloads 50
854 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics
Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh
Abstract:
In the last decade, there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have presented new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient, and, therefore, vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents a hypothesis for a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.
Keywords: bond ball mill, population balance model, product size distribution, vertical stirred mill
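A minimal sketch of the batch-grinding population balance that underlies the PBM approach named above, dm_i/dt = -S_i m_i + sum over j of b_ij S_j m_j; the selection function values and the breakage distribution used here are illustrative, not fitted parameters from the study:

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 5                                          # number of size classes
    S = np.array([0.60, 0.35, 0.20, 0.10, 0.0])    # selection rates, 1/min (assumed)
    b = np.zeros((n, n))                           # breakage distribution matrix
    for j in range(n - 1):
        b[j + 1:, j] = 1.0 / (n - j - 1)           # uniform redistribution (assumed)

    def rhs(t, m):
        # disappearance by breakage plus appearance from coarser classes
        return -S * m + b @ (S * m)

    m0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])       # all mass in the top size class
    sol = solve_ivp(rhs, (0.0, 10.0), m0, t_eval=[0, 2, 5, 10])
    print(np.round(sol.y.T, 3))                    # size distribution versus time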
Procedia PDF Downloads 291
853 The Importance of Clinicopathological Features for Differentiation Between Crohn's Disease and Ulcerative Colitis
Authors: Ghada E. Esheba, Ghadeer F. Alharthi, Duaa A. Alhejaili, Rawan E. Hudairy, Wafaa A. Altaezi, Raghad M. Alhejaili
Abstract:
Background: Inflammatory bowel disease (IBD) consists of two specific gastrointestinal disorders: ulcerative colitis (UC) and Crohn's disease (CD). Despite their distinct natures, these two diseases share many similar etiologic, clinical, and pathological features; as a result, their accurate differential diagnosis may sometimes be difficult. Correct diagnosis is important because surgical treatment and long-term prognosis differ between UC and CD. Aim: This study aims to examine the characteristic clinicopathological features which help in the differential diagnosis between UC and CD, and to assess disease activity in ulcerative colitis. Materials and methods: This study was carried out on 50 selected cases, comprising 27 cases of UC and 23 cases of CD. All the cases were examined using H&E and immunohistochemically for Bcl-2 expression. Results: Characteristic features of UC include a decrease in mucous content, an irregular or villous surface, crypt distortion, and cryptitis, whereas the main cardinal histopathological features seen in CD were epithelioid granuloma, transmural chronic inflammation, and absence of mucin depletion, irregular surface, or crypt distortion. Three cases of UC were found to be associated with dysplasia. UC mucosa contains fewer Bcl-2+ cells compared with CD mucosa. Conclusion: This study, using multiple parameters such as clinicopathological features and Bcl-2 expression as studied by immunohistochemical staining, helped to achieve an accurate differentiation between UC and CD. Furthermore, this work shed light on the activity and different grades of UC, which could be important for the prediction of relapse.
Keywords: Crohn's disease, dysplasia, inflammatory bowel disease, ulcerative colitis
Procedia PDF Downloads 191
852 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as the Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation
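A minimal sketch of the two parallel 2D views described above: a spectrogram for the frequency-domain view and a stacked-derivative heatmap for sharp changes in the time domain. The input series is synthetic, and the exact Times2D transforms are paraphrased here, not reproduced:

    import numpy as np
    from scipy.signal import spectrogram

    rng = np.random.default_rng(7)
    fs = 100.0
    t = np.arange(0.0, 20.0, 1.0 / fs)
    x = (np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 7.0 * t)
         + 0.1 * rng.standard_normal(t.size))

    f, seg_t, Sxx = spectrogram(x, fs=fs, nperseg=128)  # 2D time-frequency view
    d1 = np.gradient(x)                                 # first derivative: turning points
    d2 = np.gradient(d1)                                # second derivative: curvature
    deriv_map = np.stack([d1, d2])                      # 2D time-domain view
    print(Sxx.shape, deriv_map.shape)                   # inputs for 2D (vision) models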
Procedia PDF Downloads 42
851 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria
Authors: R. Santos Alimi
Abstract:
Using time-series techniques, this study empirically tested the validity of existing theory, which stipulates that there is a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. The study employed three estimation equations. First, for the size of government, two measures were considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component) as a variant measure of economic growth, besides real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigerian country-level data for the period 1970 to 2012. Estimation results show that an inverted U-shaped curve exists for the two measures of government size, with estimated optimum shares of 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP. Our analysis shows that the actual share of government spending on average (2000-2012) is about 13.4%. This study adds to the literature by confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
Keywords: public expenditure, economic growth, optimum level, fully modified OLS
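A toy sketch of how an inverted-U (Armey-curve) optimum of this kind is located: regress growth on the government share and its square, then take the turning point at -b1/(2*b2). The data below are synthetic; the abstract's estimates (19.81%, 10.98%, 12.58%) come from the Nigerian series, not from this sketch:

    import numpy as np

    rng = np.random.default_rng(3)
    g = rng.uniform(5.0, 35.0, 43)                    # government share of GDP, %
    growth = 2.0 + 0.50 * g - 0.0125 * g**2 + rng.normal(0, 0.4, g.size)

    X = np.column_stack([np.ones_like(g), g, g**2])
    b = np.linalg.lstsq(X, growth, rcond=None)[0]     # OLS coefficients
    print(f"optimum share ~ {-b[1] / (2 * b[2]):.1f}% of GDP")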
Procedia PDF Downloads 420
850 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
An estimating equation technique is an alternative to the widely used maximum likelihood methods, which enables us to ease some of the complexity arising from the complex characteristics of time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, in this study, modified estimating equations, which have received considerable attention from many researchers, were proposed under a semiparametric transformation model. The purpose of this article is to develop modified estimating equations under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models in order to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators were derived via the expectation-maximization (EM) algorithm. The finite-sample behavior of the estimators for the proposed model was illustrated via simulation studies and the Stanford heart transplant real data example. To sum up the study, the bias for covariates has been adjusted by estimating the density function for the truncation time variable. Then the effect of possibly time-varying covariates was evaluated in some special semiparametric transformation models.
Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time varying covariate
Procedia PDF Downloads 152
849 Efficient Implementation of Finite Volume Multi-Resolution WENO Scheme on Adaptive Cartesian Grids
Authors: Yuchen Yang, Zhenming Wang, Jun Zhu, Ning Zhao
Abstract:
An easy-to-implement and robust finite volume multi-resolution Weighted Essentially Non-Oscillatory (WENO) scheme is proposed on adaptive Cartesian grids in this paper. The multi-resolution WENO scheme is combined with the ghost-cell immersed boundary method (IBM) and a wall-function technique to solve the Navier-Stokes equations. Unlike the k-exact finite volume WENO schemes, which involve large amounts of extra storage, repeated solution of the matrices generated by a least-squares method, or the calculation of optimal linear weights on adaptive Cartesian grids, the present methodology adds only a very small overhead and can be easily implemented in existing edge-based computational fluid dynamics (CFD) codes with minor modifications. Also, the linear weights of this adaptive finite volume multi-resolution WENO scheme can be any positive numbers on the condition that their sum is one. This bypasses the calculation of the optimal linear weights, and such a multi-resolution WENO scheme avoids dealing with negative linear weights on adaptive Cartesian grids. Some benchmark viscous problems are numerically solved to show the efficiency and good performance of this adaptive multi-resolution WENO scheme. Compared with a second-order edge-based method, the presented method can be implemented on an adaptive Cartesian grid with slight modifications for high Reynolds number problems.
Keywords: adaptive mesh refinement method, finite volume multi-resolution WENO scheme, immersed boundary method, wall-function technique
Procedia PDF Downloads 148
848 Prediction of Pounding between Two SDOF Systems by Using Link Element Based On Mathematic Relations and Suggestion of New Equation for Impact Damping Ratio
Authors: Seyed M. Khatami, H. Naderpour, R. Vahdani, R. C. Barros
Abstract:
Many previous studies have been carried out to calculate the impact force and the energy dissipated between two neighboring buildings during seismic excitation, when they collide with each other. Numerical studies are an important part of impact analysis, and several researchers have tried to simulate impact by using different formulas. Estimation of the impact force and the dissipated energy depends significantly on several parameters of impact: the mass of the bodies, the stiffness of the spring, the coefficient of restitution, the damping ratio of the dashpot, and the impact velocity are some of the known and unknown parameters used to simulate impact and measure the energy dissipated during collision. Collision is usually represented by a force-displacement hysteresis curve, whose enclosed area gives the energy dissipated during impact. In this paper, the effect of using different types of impact models is investigated in order to calculate the impact force. To increase the accuracy of the impact model and to optimize the results of the simulations, a new damping equation is assumed and validated to obtain the best estimates of the impact force and dissipated energy, demonstrating the accuracy of the suggested equation of motion in comparison with other formulas. This relation is called "n-m". Based on this mathematical relation, an initial value is selected for the mentioned coefficients and the kinetic energy loss is calculated. After each simulation, the kinetic energy loss and the dissipated energy are compared with each other. If they are equal, the selected parameters are accepted; if not, the parameter constants are modified and a new analysis is performed. Finally, two unknown parameters are suggested to estimate the impact force and calculate the dissipated energy.
Keywords: impact force, dissipated energy, kinetic energy loss, damping relation
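A toy sketch of the calibration loop described above: choose a dashpot constant c, simulate one contact with a generic Kelvin-type link element F = k*d + c*v, integrate the dashpot work (the dissipated energy), and adjust c until it matches the kinetic energy loss implied by the coefficient of restitution. All numbers are illustrative, and this generic element stands in for, rather than reproduces, the paper's "n-m" relation:

    import numpy as np

    m1, m2, v0, CR = 3.0e5, 4.0e5, 1.0, 0.65     # masses (kg), closing speed, restitution
    k = 2.5e8                                    # impact spring stiffness (N/m), assumed
    m_eff = m1 * m2 / (m1 + m2)
    target = 0.5 * m_eff * v0**2 * (1.0 - CR**2) # kinetic energy loss to reproduce

    def dissipated(c, dt=1.0e-5):
        d, v, E = 0.0, v0, 0.0                   # relative penetration, velocity, energy
        while True:
            F = k * d + c * v
            E += c * v * v * dt                  # dashpot work = dissipated energy
            v += -F / m_eff * dt
            d += v * dt
            if d <= 0.0:                         # bodies separate, contact ends
                return E

    lo, hi = 0.0, 1.0e7                          # bracket for the damping constant
    for _ in range(40):                          # bisection on c
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dissipated(mid) < target else (lo, mid)
    print(f"calibrated c = {0.5 * (lo + hi):.3e} N*s/m")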
Procedia PDF Downloads 552
847 Architecture - Performance Relationship in GPU Computing - Composite Process Flow Modeling and Simulations
Authors: Ram Mohan, Richard Haney, Ajit Kelkar
Abstract:
Current developments in computing have shown the advantage of using one or more Graphics Processing Units (GPUs) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that contribute to the limitations of GPU(s) for High Performance Computing (HPC) can be categorized as hardware- and software-oriented in nature. Understanding how these factors affect performance is essential to developing efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. This research and technical presentation focuses on the analysis and understanding of the intrinsic interrelationship of both hardware and software categories on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as the computational core, the characteristics and results of which reflect many other HPC applications via the sparse matrix system used for the solution of linear systems of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of High Performance Computing applications, including current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.
Keywords: graphical processing unit, software development and engineering, performance analysis, system architecture and software performance
Procedia PDF Downloads 363
846 The Analysis of Competitive Balance Progress among Five Most Valuable Football Leagues from 1966 to 2015
Authors: Seyed Salahedin Naghshbandi, Zahra Bozorgzadeh, Leila Zakizadeh, Siavash Hamidzadeh
Abstract:
From the point of view of sports economics experts, the existence of competitive balance among sport leagues and its numerous effects on a league is an important and undeniable issue. In general, fans of sporting events are eager for unpredictable competition results, which provide the excitement and motivation needed to follow competitions. The purpose of this research is to consider and compare competitive balance among five professional European football leagues (Spain, England, Italy, France and Germany) during the 1966-2015 seasons. The research data are secondary and were obtained from the top-division final tables of the selected countries for the 1966-2015 seasons. For analyzing the data, the C5 and C5ICB indicators were used: the lower these indicators are, the more balance is established in the league, and vice versa. The results showed that the French Ligue 1 moved from 1.259 to 1.395, the Italian Serie A from 1.316 to 1.432, the English Premier League from 1.342 to 1.455, the German Bundesliga from 1.238 to 1.465, and the Spanish La Liga from 1.295 to 1.495. Comparing the C5ICB charts over the 1966-2015 seasons, La Liga moved furthest toward imbalance and enjoyed less balance than the other European leagues, while the French league showed the least imbalance, preserving its relative balance through a steady process. It seems that football in France remained stable from 1966 to 2015, so that prediction of results was more difficult and competitions stayed attractive for spectators, whereas in Italy, England, Germany, and Spain there was progressively less balance.
Keywords: competitive balance, professional football league, competition, C5ICB
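For orientation, a sketch of one common formulation of the indicator used above (after Michie and Oughton): C5 is the points share of the top five clubs, and C5ICB divides it by the share a perfectly balanced N-team league would give (5/N). The x100 scaling sometimes applied in the literature is omitted here so that the magnitudes match the values quoted above; the final table below is invented, not real league data:

    def c5icb(points):
        pts = sorted(points, reverse=True)
        c5 = sum(pts[:5]) / sum(pts)          # top-5 concentration ratio
        return c5 / (5.0 / len(pts))          # = 1.0 for a perfectly balanced league

    table = [87, 80, 75, 71, 66, 60, 57, 54, 52, 50,
             48, 45, 44, 42, 40, 38, 36, 33, 30, 26]  # 20-team season (made up)
    print(f"C5ICB = {c5icb(table):.3f}")              # ~1.47, within the range above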
Procedia PDF Downloads 142