Search results for: time series models
23654 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply
Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele
Abstract:
In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal operating schedule of decentralized power and heat generators and storage systems; linear and mixed-integer linear optimization are typical formulations. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first combines n similar installation types into one aggregated unit. This aggregated unit is described by the same constraints and objective function terms as a single plant, which reduces the number of decision variables per time step, and hence the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization such that the output of the individual plants corresponds to the calculated output of the aggregated unit. The second way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined for the resulting calculation time as well as for optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as references for investigating both procedures with regard to calculation duration and optimality.
Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant
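As an illustration of the aggregation idea, here is a minimal mixed-integer sketch (not the authors' model): n identical units are replaced by one aggregated unit whose integer variable counts how many units are on. All parameter values and names are hypothetical.
```python
# Minimal MILP sketch (PuLP): aggregate n identical CHP units into one unit.
# Hypothetical parameters; not the paper's actual model.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

n, T = 10, 24                  # number of identical units, time steps
p_min, p_max = 20.0, 50.0      # per-unit output limits [kW]
cost = 0.08                    # cost per kWh produced
demand = [300.0] * T           # demand per time step (toy single-source balance)

prob = LpProblem("aggregated_vpp", LpMinimize)
# One integer "units on" variable and one output variable per time step,
# instead of n binary + n continuous variables per time step.
on = [LpVariable(f"on_{t}", 0, n, cat="Integer") for t in range(T)]
p = [LpVariable(f"p_{t}", 0) for t in range(T)]

prob += lpSum(cost * p[t] for t in range(T))   # total production cost
for t in range(T):
    prob += p[t] >= p_min * on[t]              # aggregated minimum-load constraint
    prob += p[t] <= p_max * on[t]              # aggregated capacity constraint
    prob += p[t] >= demand[t]                  # meet demand

prob.solve()
# A second, per-unit optimization would then split p[t] among the on[t] units.
```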
Procedia PDF Downloads 178
23653 Approximation of Periodic Functions Belonging to Lipschitz Classes by Product Matrix Means of Fourier Series
Authors: Smita Sonker, Uaday Singh
Abstract:
Various investigators have determined the degree of approximation of functions belonging to the classes W(L_r, ξ(t)), Lip(ξ(t), r), Lip(α, r), and Lip α using different summability methods under monotonicity conditions. Recently, Lal determined the degree of approximation of functions belonging to the Lip α and W(L_r, ξ(t)) classes by using Cesàro–Nörlund (C¹·N_p) summability with non-increasing weights {p_n}. In this paper, we determine the degree of approximation of 2π-periodic functions f belonging to the classes Lip α and W(L_r, ξ(t)) by C¹·T means of the Fourier series of f. Our theorems generalize the results of Lal and also improve them. From our results, we also derive some corollaries.
Keywords: Lipschitz classes, product matrix operator, signals, trigonometric Fourier approximation
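For reference, these function classes are usually defined as follows; this is a sketch of the standard conventions in this literature, and the paper may use slightly different variants.
```latex
% Standard definitions (assumed conventions), for 0 < \alpha \le 1, \beta \ge 0:
f \in \operatorname{Lip}\alpha \iff |f(x+t)-f(x)| = O(|t|^{\alpha}),
f \in \operatorname{Lip}(\alpha, r) \iff
   \Big(\int_0^{2\pi} |f(x+t)-f(x)|^{r}\,dx\Big)^{1/r} = O(|t|^{\alpha}),
f \in \operatorname{Lip}(\xi(t), r) \iff
   \big\|f(\cdot+t)-f(\cdot)\big\|_{r} = O(\xi(t)),
f \in W(L_{r}, \xi(t)) \iff
   \big\|(f(\cdot+t)-f(\cdot))\,\sin^{\beta}(\cdot/2)\big\|_{r} = O(\xi(t)).
```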
Procedia PDF Downloads 477
23652 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth become increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on climate change and water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: climate change, downscaling, GCM, RCM
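As an illustration of the predictor-predictand idea, here is a minimal statistical-downscaling sketch on synthetic data; the predictors, coefficients, and calibration split are hypothetical, not taken from the review.
```python
# Minimal statistical-downscaling sketch: regress a station-scale predictand
# on GCM-scale predictors. Synthetic data; hypothetical variable names.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_months = 360
# Hypothetical large-scale predictors (e.g., mean sea-level pressure, humidity)
predictors = rng.normal(size=(n_months, 2))
# Hypothetical station rainfall with a linear dependence plus noise
rainfall = (3.0 * predictors[:, 0] - 1.5 * predictors[:, 1]
            + rng.normal(scale=0.5, size=n_months))

model = LinearRegression().fit(predictors[:240], rainfall[:240])  # calibrate
projected = model.predict(predictors[240:])          # downscale future GCM output
print("calibration R^2:", model.score(predictors[:240], rainfall[:240]))
```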
Procedia PDF Downloads 406
23651 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques
Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa
Abstract:
This study considered the selection of in silico molecular descriptors and models for describing newly synthesized steroid derivatives and characterizing them using chemometric techniques. Multiple linear regression (MLR) models were established and gave the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models were free of multicollinearity among the selected molecular descriptors according to the variance inflation factor (VIF) values. The molecular descriptors used were ranked using the generalized pair correlation method (GPCM). With this method, significant differences between independent variables can be detected even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were kept. Models were ranked using the sum of ranking differences (SRD) method. With this method, the most consistent QSRR model can be found, and similarity or dissimilarity between the models can be assessed. In this study, SRD was performed using average values of the experimentally observed data as the gold standard. Chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation regarding their potential biological activity and further synthesis. This article is based upon work from COST Action (CM1105), supported by COST (European Cooperation in Science and Technology).
Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences
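The VIF screening step can be sketched as follows; the descriptor names and data are hypothetical stand-ins, not the study's descriptors.
```python
# Minimal sketch: screen MLR descriptors for multicollinearity via VIF.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "logP": rng.normal(size=50),                 # hypothetical descriptors
    "polar_surface_area": rng.normal(size=50),
    "molar_refractivity": rng.normal(size=50),
})
Xc = sm.add_constant(X)                          # VIF needs an intercept column
vif = {col: variance_inflation_factor(Xc.values, i)
       for i, col in enumerate(Xc.columns) if col != "const"}
print(vif)  # a common rule of thumb flags VIF > 10 as serious multicollinearity
```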
Procedia PDF Downloads 347
23650 Unified Power Quality Conditioner Presentation and Dimensioning
Authors: Abderrahmane Kechich, Othmane Abdelkhalek
Abstract:
Static converters behave as nonlinear loads that inject harmonic currents into the grid and increase the consumption of reactive power. On the other hand, the increased use of sensitive equipment requires the supply of sinusoidal voltages. As a result, electrical power quality control has become a major concern in the field of power electronics. In this context, the unified power quality conditioner (UPQC) was developed. It combines both series and parallel structures: the series filter can protect sensitive loads and compensate for voltage disturbances such as voltage harmonics, voltage dips or flicker, while the shunt filter compensates for current disturbances such as current harmonics, reactive currents and imbalance. This dual capability makes it one of the most appropriate devices. Calculating the parameters is an important step and, at the same time, not an easy one; for that reason, several researchers have relied on trial-and-error methods, which are difficult for beginning researchers, especially for the controller parameters. This paper therefore gives a mathematical way to calculate almost all UPQC parameters without trial and error. It also gives a new approach for calculating the PI regulator parameters, with the purpose of obtaining a stable UPQC able to compensate for disturbances acting on the waveforms of the line voltage and load current, in order to improve electrical power quality.
Keywords: UPQC, shunt active filter, series active filter, PI controller, PWM control, dual-loop control
Procedia PDF Downloads 403
23649 Multi-Objective Simulated Annealing Algorithms for Scheduling Just-In-Time Assembly Lines
Authors: Ghorbanali Mohammadi
Abstract:
New approaches to sequencing mixed-model manufacturing systems are presented. These approaches have attracted considerable attention due to their potential to deal with difficult optimization problems. This paper presents Multi-Objective Simulated Annealing Algorithm (MOSAA) approaches to the Just-In-Time (JIT) sequencing problem, where workload smoothing (WL) and the number of set-ups (St) are to be optimized simultaneously. Mixed-model assembly lines are production lines on which a variety of product models with similar product characteristics are assembled; moreover, this type of problem is NP-hard. Two annealing methods are proposed to solve the multi-objective problem and find an efficient frontier of all design configurations. The performance of the two methods is tested on several problems from the literature. Experimentation demonstrates the desirable relative performance of the presented methodology.
Keywords: scheduling, just-in-time, mixed-model assembly line, sequencing, simulated annealing
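A minimal sketch of the annealing idea is shown below. It scalarizes the two objectives with a fixed weight rather than building the full Pareto frontier of a true MOSAA, and the demand data and cooling schedule are hypothetical.
```python
# Minimal simulated-annealing sketch for mixed-model sequencing.
# Toy objectives: workload smoothing (deviation from ideal rates) and setups.
import math, random

demand = {"A": 4, "B": 3, "C": 3}            # units of each model to sequence
seq = [m for m, d in demand.items() for _ in range(d)]
total = len(seq)
rates = {m: d / total for m, d in demand.items()}

def smoothing(s):   # squared deviation of actual from ideal cumulative output
    return sum((s[:k + 1].count(m) - (k + 1) * rates[m]) ** 2
               for k in range(total) for m in demand)

def setups(s):      # number of adjacent model changes
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

def cost(s, w=0.5): # scalarized bi-objective cost
    return w * smoothing(s) + (1 - w) * setups(s)

random.shuffle(seq)
temp, best = 10.0, list(seq)
for _ in range(5000):
    i, j = random.sample(range(total), 2)
    cand = list(seq)
    cand[i], cand[j] = cand[j], cand[i]       # swap move
    delta = cost(cand) - cost(seq)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        seq = cand
        if cost(seq) < cost(best):
            best = list(seq)
    temp *= 0.999                             # geometric cooling
print(best, cost(best))
```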
Procedia PDF Downloads 128
23648 Parametric Non-Linear Analysis of Reinforced Concrete Frames with Supplemental Damping Systems
Authors: Daniele Losanno, Giorgio Serino
Abstract:
This paper focuses on the parametric analysis of reinforced concrete structures equipped with supplemental damping braces. Practitioners still lack sufficient data for the routine design of damper-added structures and often reduce the real model to a pure damper-braced structure, even if this assumption is neither realistic nor conservative. In the present study, the damping brace is modelled as a linear supporting brace connected in series with the viscous/hysteretic damper. The deformation capacity of existing structures is usually not adequate to undergo the design earthquake. In spite of this, additional dampers could be introduced, strongly limiting structural damage to acceptable values or, in some cases, keeping the frame response elastic. This work is aimed at providing useful considerations for the retrofit of existing buildings by means of supplemental damping braces. The study explicitly takes into consideration the variability of (a) the relative frame-to-supporting-brace stiffness, (b) the dampers' coefficient (viscous coefficient or yielding force) and (c) non-linear frame behavior. Non-linear time history analyses have been run to account for both the dampers' behavior and non-linear plastic hinges modelled with the Pivot hysteretic model. Parametric analysis based on previous studies of SDOF or MDOF linear frames provides reference values for nearly optimal damping system design. With respect to the bare frame configuration, the seismic response of the damper-added frame is strongly improved, limiting deformations to acceptable values far below the ultimate capacity. Results of the analysis also demonstrated the beneficial effect of stiffer supporting braces, thus highlighting the inadequacy of simplified pure damper models. At the same time, the effect of variable damping coefficient and yielding force has to be treated as an optimization problem.
Keywords: brace stiffness, dissipative braces, non-linear analysis, plastic hinges, reinforced concrete frames
Procedia PDF Downloads 290
23647 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are emerging concerns in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scan, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart and IoT applications, there is a necessity for intelligent processing and analysis of data; our approach, therefore, is to secure them. We train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, using ANOVA-based feature selection so that fewer features feed the prediction models that evaluate network traffic, helping protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, DT, and RF, selected for satisfactory test accuracy with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, geometric mean (GM), MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with the least prediction time, at an accuracy of 99.98%.
Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
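The feature-selection-plus-classifier pipeline can be sketched as follows; the synthetic data stands in for IoT traffic records, and the feature counts are hypothetical.
```python
# Minimal sketch: ANOVA F-test feature selection feeding a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)   # stand-in for traffic records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(
    SelectKBest(f_classif, k=10),            # ANOVA F-score keeps top features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```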
Procedia PDF Downloads 125
23646 Diminishing Voices of Children in Mandatory Mediation Schemes
Authors: Yuliya Radanova, Agnė Tvaronavičienė
Abstract:
With the growing trend of mandating parties to family conflicts into out-of-court processes, the adopted statutory regulations often remain silent on how the voice of the child is integrated into the procedure. The Convention on the Rights of the Child (Art. 12) clearly states the obligation to assure to the child who can form his or her own views the right to express those views freely in all matters affecting him or her. This article seeks to explore the way children participate in the mandatory mediation schemes applicable to family disputes in the European Union. A review of scientific literature and empirical data has been conducted on those EU Member States that mandate parties to family mediation, establishing that different models of practice are deployed and that there is a lack of synchronicity in how children's role in mediation is viewed. Child-inclusive mediation processes are deemed to produce sustainable results over time but require professional qualifications and skills on the part of mediators to ensure that such discussions are aligned with the best interests of the child. However, there is no unanimous guidance, and there are no standards or protocols on the particular manner in which children are involved in mediation. It is therefore suggested that the lack of such rigorous approaches and coherence in an ever-changing mediation setting transitioning towards mandatory mediation models jeopardizes the importance of children's voices in the process. Thus, there is a need to consider the adoption of uniform guidelines on the specific role children have in mediation, particularly in its mandatory models.
Keywords: family mediation, child involvement, mandatory mediation, child-inclusive, child-focused
Procedia PDF Downloads 74
23645 The Role of Macroeconomic Condition and Volatility in Credit Risk: An Empirical Analysis of Credit Default Swap Index Spread on Structural Models in U.S. Market during Post-Crisis Period
Authors: Xu Wang
Abstract:
This research builds linear regressions of investment-grade and high-yield Credit Default Swap index (CDX) spreads on U.S. macroeconomic condition and volatility measures, using monthly data from March 2009 to July 2016, to study the relationship between different dimensions of the macroeconomy and overall credit quality. The most significant contribution of this research is its systematic examination of the individual and joint effects of macroeconomic condition and volatility on CDX spreads, by including macroeconomic time series that capture different dimensions of the U.S. economy. Industrial production index growth, non-farm payroll growth, consumer price index growth, the 3-month Treasury rate and consumer sentiment are introduced to capture the condition of real economic activity, employment, inflation, monetary policy and risk aversion, respectively. The conditional variance of each macroeconomic series is constructed using an ARMA-GARCH model and is used to measure macroeconomic volatility. A linear regression model is estimated to capture the relationships between monthly average CDX spreads and the macroeconomic variables. The Newey–West estimator is used to control for autocorrelation and heteroskedasticity in the error terms. Furthermore, a sensitivity factor analysis and a standardized coefficients analysis are conducted to compare the sensitivity of CDX spreads to different macroeconomic variables and to compare the relative effects of macroeconomic condition versus macroeconomic uncertainty, respectively. This research shows that macroeconomic condition has a negative effect on the CDX spread, while macroeconomic volatility has a positive effect. Macroeconomic condition and volatility variables jointly explain more than 70% of the variation in the CDX spread. In addition, the sensitivity factor analysis shows that the CDX spread is most sensitive to the Consumer Sentiment index. Finally, the standardized coefficients analysis shows that both macroeconomic condition and volatility variables are important in determining the CDX spread, but the condition variables have more relative importance than the volatility variables. These findings suggest that individual investors and the government should regard the CDX spread as a measure of overall credit risk with care, because it is influenced by the macroeconomy; the significance of variables such as non-farm payroll growth and industrial production index growth volatility suggests that the government should pay more attention to overall credit quality in the market when the macroeconomy is weak or volatile.
Keywords: autoregressive moving average model, credit spread puzzle, credit default swap spread, generalized autoregressive conditional heteroskedasticity model, macroeconomic conditions, macroeconomic uncertainty
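The two-step estimation can be sketched as follows, assuming one macro series for brevity; the series, coefficients, and lag length are hypothetical, not the paper's estimates.
```python
# Minimal sketch: (1) ARMA-GARCH conditional variance of a macro series,
# (2) OLS of spreads on condition + volatility with Newey-West (HAC) errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

rng = np.random.default_rng(2)
n = 88                                        # ~ monthly obs, Mar 2009 - Jul 2016
ip_growth = pd.Series(rng.normal(0.2, 1.0, n))      # stand-in macro series

# Step 1: AR(1)-GARCH(1,1) conditional volatility as the volatility measure
res = arch_model(ip_growth, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
macro_vol = res.conditional_volatility

# Step 2: regress CDX spread on condition and volatility, HAC standard errors
cdx = 100 - 5 * ip_growth + 20 * macro_vol + rng.normal(0, 5, n)  # synthetic
X = sm.add_constant(pd.DataFrame({"condition": ip_growth, "volatility": macro_vol}))
ols = sm.OLS(cdx, X, missing="drop").fit(cov_type="HAC", cov_kwds={"maxlags": 3})
print(ols.summary())
```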
Procedia PDF Downloads 167
23644 Learning Predictive Models for Efficient Energy Management of Exhibition Hall
Authors: Jeongmin Kim, Eunju Lee, Kwang Ryel Ryu
Abstract:
This paper addresses the problem of predictive control for the energy management of large-scale exhibition halls, where a lot of energy is consumed to maintain the internal atmosphere under certain required conditions. Predictive control achieves better energy efficiency by optimizing the operation of air-conditioning facilities with not only the current but also some future status taken into account. In this paper, we propose to use predictive models learned from past sensor data of the hall environment to optimize the operating plan for the air-conditioning facilities by simulating future environmental change. We have implemented an emulator of an exhibition hall using EnergyPlus, a widely used building energy simulation tool, to collect data for learning environment-change models. Experimental results show that the learned models predict future change highly accurately on a short-term basis.
Keywords: predictive control, energy management, machine learning, optimization
Procedia PDF Downloads 274
23643 The Volume–Volatility Relationship Conditional to Market Efficiency
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
The relation between stock price volatility and trading volume is a controversial issue that has received remarkable attention over the past decades. An extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship that originates such an association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t₀ (measured by the local Hurst–Hölder exponent in a neighborhood of t₀) equals the exponent of a fractional Brownian motion of parameter H(t₀). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamical exponent as a measure of market efficiency. This allows one to measure, at any time t, the market's departures from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets: using data for the S&P 500 index from 1978 to 2017, we find, on the one hand, that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent
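A crude stand-in for the pointwise regularity estimate is a rolling-window estimator based on the scaling of mean absolute increments; the sketch below is our illustration under simplifying assumptions, not the authors' Hurst–Hölder estimator.
```python
# Minimal sketch: rolling estimate of a time-varying Hurst exponent from the
# scaling E|X(t+2d)-X(t)| ~ 2^H * E|X(t+d)-X(t)| of an fBm-like path.
import numpy as np

def local_hurst(x, window=250):
    x = np.asarray(x, dtype=float)
    h = np.full(x.size, np.nan)
    for t in range(window, x.size):
        seg = x[t - window:t]
        m1 = np.mean(np.abs(np.diff(seg)))          # lag-1 increments
        m2 = np.mean(np.abs(seg[2:] - seg[:-2]))    # lag-2 increments
        if m1 > 0 and m2 > 0:
            h[t] = np.log2(m2 / m1)                 # H from the scaling ratio
    return h

# Usage on a synthetic log-price path; H near 0.5 signals efficiency
rng = np.random.default_rng(3)
log_price = np.cumsum(rng.normal(0, 0.01, 2000))
print(np.nanmean(local_hurst(log_price)))           # approx 0.5 for a random walk
```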
Procedia PDF Downloads 78
23642 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia
Authors: Nguyen-Thanh Son
Abstract:
Floods are among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at national and regional levels, because policymakers need reliable spatial and temporal information on flood-prone areas to form successful monitoring programs and reduce possible impacts on the country's economy and people's livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time series of vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The flood-mapping results were verified against ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by close agreement between the flood-mapped area and the ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by impacts of climate change. Ultimately, although several sources potentially lowered the mapping accuracy of flood-prone areas, including image cloud contamination, mixed-pixel issues, and the resolution mismatch between the mapping results and the ground reference data, our methods yielded satisfactory results for delineating the spatiotemporal evolution of floods. The results, in the form of quantitative information on spatiotemporal flood distributions, could help policymakers evaluate their management strategies for mitigating the negative effects of floods on agriculture and people's livelihoods in the country.
Keywords: MODIS, flood, mapping, Cambodia
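The accuracy-assessment step (overall accuracy and Kappa against ground reference samples) can be sketched as follows on synthetic labels.
```python
# Minimal sketch: overall accuracy and Cohen's kappa between mapped flood
# classes and ground reference samples (synthetic stand-in data).
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(4)
reference = rng.integers(0, 2, 500)                  # 0 = non-flood, 1 = flood
mapped = np.where(rng.random(500) < 0.9, reference, 1 - reference)  # ~90% match

print(confusion_matrix(reference, mapped))
print("overall accuracy:", accuracy_score(reference, mapped))
print("kappa:", cohen_kappa_score(reference, mapped))
```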
Procedia PDF Downloads 126
23641 Using Machine Learning as an Alternative for Predicting Exchange Rates
Authors: Pedro Paulo Galindo Francisco, Eli Dhadad Junior
Abstract:
This study addresses the Meese-Rogoff puzzle by introducing the latest machine learning techniques as alternatives for predicting exchange rates. Using RMSE as a comparison metric, Meese and Rogoff discovered that economic models are unable to outperform the random walk model as short-term exchange rate predictors. In the decades since that study, no statistical prediction technique has proven effective in overcoming this obstacle; although there have been positive results, they did not apply to all currencies and sample periods. Recent advances in artificial intelligence have paved the way for a new approach to exchange rate prediction. Leveraging this technology, we applied five machine learning techniques in an attempt to overcome the Meese-Rogoff puzzle. We considered daily data for the Brazilian real, Japanese yen, British pound, euro, and Chinese yuan against the US dollar over the time horizon from 2010 to 2023. Our results showed that none of the presented techniques was able to produce an RMSE lower than the random walk model. However, some models, particularly LSTM and N-BEATS, were able to outperform the ARIMA model. The results also suggest that machine learning models have untapped potential and could represent an effective long-term possibility for overcoming the Meese-Rogoff puzzle.
Keywords: exchange rate, prediction, machine learning, deep learning
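The benchmark comparison can be sketched as follows; a gradient-boosting model stands in for the heavier LSTM/N-BEATS architectures, and the synthetic series and lag features are hypothetical.
```python
# Minimal sketch of the Meese-Rogoff style comparison: RMSE of a random-walk
# forecast versus a learned model on a synthetic daily exchange-rate series.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
rate = 5.0 + np.cumsum(rng.normal(0, 0.01, 1500))    # synthetic BRL/USD path

# One-step-ahead targets; the last 5 rates as features
X = np.column_stack([rate[i:-(5 - i)] for i in range(5)])
y = rate[5:]
split = 1200
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])

rmse_model = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
rmse_rw = mean_squared_error(y[split:], X[split:, -1]) ** 0.5  # RW: tomorrow = today
print(f"model RMSE {rmse_model:.5f} vs random walk RMSE {rmse_rw:.5f}")
```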
Procedia PDF Downloads 31
23640 Statistical Data Analysis of Migration Impact on the Spread of HIV Epidemic Model Using Markov Monte Carlo Method
Authors: Ofosuhene O. Apenteng, Noor Azina Ismail
Abstract:
Over the last several years, concern has developed over how to minimize the spread of the HIV/AIDS epidemic in many countries. The AIDS epidemic has tremendously stimulated the development of mathematical models of infectious diseases, and the transmission dynamics of HIV infection that eventually develops into AIDS have played a pivotal role in building such models. From the initial HIV and AIDS models introduced in the 1980s, various improvements have been made in how HIV/AIDS frameworks are modelled. In this paper, we present the impact of migration on the spread of HIV/AIDS. The epidemic model is formulated as a system of nonlinear differential equations to supplement the statistical approach. The model is calibrated using HIV incidence data from Malaysia between 1986 and 2011. Bayesian inference based on Markov Chain Monte Carlo is used to validate the model by fitting it to the data and to estimate the unknown model parameters. The results suggest that migrants who stay for a long time contribute to the spread of HIV. The model also indicates that susceptible individuals become infected and move to the HIV compartment at a rate that is more significant than the removal rate from the HIV compartment to the AIDS compartment. The disease-free steady state is unstable since the basic reproduction number is 1.627309. This is a big concern and not a good indicator from the public health point of view, since the aim is to stabilize the epidemic at the disease-free equilibrium.
Keywords: epidemic model, HIV, MCMC, parameter estimation
Procedia PDF Downloads 600
23639 Utilizing Federated Learning for Accurate Prediction of COVID-19 from CT Scan Images
Authors: Jinil Patel, Sarthak Patel, Sarthak Thakkar, Deepti Saraswat
Abstract:
Recently, the COVID-19 outbreak spread across the world, leading the World Health Organization to classify it as a global pandemic. To save patients' lives, COVID-19 symptoms have to be identified, but using an Artificial Intelligence (AI) model to identify COVID-19 symptoms within the allotted time is challenging, and the RT-PCR test was found to be inadequate in determining the COVID status of a patient. To determine whether a patient has COVID-19, a Computed Tomography scan (CT scan) of the patient is a better alternative. It would be challenging to compile and store all the data from various hospitals on a central server, though; federated learning helps to resolve this problem. Certain deep learning models help to classify COVID-19. This paper presents detailed work on deep learning models such as VGG19, ResNet50, MobileNetv2, and Deep Learning Aggregation (DLA), while maintaining privacy with encryption.
Keywords: federated learning, COVID-19, CT scan, homomorphic encryption, ResNet50, VGG-19, MobileNetv2, DLA
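The federated averaging idea can be sketched as follows; a plain logistic model stands in for the CNNs named above, the client data are synthetic, and the encryption layer is omitted.
```python
# Minimal FedAvg sketch: hospitals train locally, the server averages weights.
import numpy as np

rng = np.random.default_rng(6)

def local_step(w, X, y, lr=0.1, epochs=20):
    for _ in range(epochs):                   # local gradient descent
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

clients = [(rng.normal(size=(100, 10)), rng.integers(0, 2, 100))
           for _ in range(3)]                 # 3 hospitals' private data
w_global = np.zeros(10)
for _ in range(10):                           # communication rounds
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local_ws, axis=0, weights=sizes)  # FedAvg step
print(w_global)
```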
Procedia PDF Downloads 73
23638 Consumption of Fat Burners Leads to Acute Liver Failure: A Systematic Review Protocol
Authors: Anjana Aggarwal, Sheilja Walia
Abstract:
The prevalence of obesity and overweight is increasing due to sedentary lifestyles and the busy schedules of people who spend little time on physical exercise. To reduce weight, people look for easier and more convenient options, the easiest being dietary supplements and fat burners: products that decrease body weight by increasing the basal metabolic rate. Various reports have been published on the consumption of fat burners leading to heart palpitations, seizures, anxiety, depression, psychosis, bradycardia, insomnia, muscle contractions, hepatotoxicity, and even liver failure. Case reports and case series indicate that the ingredients present in fat burners caused acute liver failure (ALF) and hepatic toxicity in many cases. Another contributing factor is the absence of Food and Drug Administration regulation of these products, leading to increased consumption and a higher risk of liver disease in the population. This systematic review aims to attain a better understanding of the dietary supplements used globally to reduce weight and to document the case reports and series of acute liver failure caused by the consumption of fat burners. Electronic databases like PubMed, Cochrane, Google Scholar, etc., will be systematically searched for relevant articles. Websites of dietary products and brands that sell such supplements, journals of hepatology, and national and international projects on ALF and their reports, along with a review of the grey literature, will also be examined to get a better understanding of the topic. After discussion with the co-author, the selection and screening of the articles will be performed by the author. The studies will be selected based on predefined inclusion and exclusion criteria. The case reports and case series included in the final list of studies will be assessed for methodological quality using the CARE guidelines. The results of this study will provide insights into and a better understanding of fat burners. Since these supplements are easily available on the market without any restrictions on their sale, people are unaware of their adverse effects, and their consumption can cause acute liver failure. Thus, this review will provide a platform for future, larger studies.
Keywords: acute liver failure, dietary supplements, fat burners, weight loss supplements
Procedia PDF Downloads 84
23637 Simulation of Photovoltaic Array for Specified Ratings of Converter
Authors: Smita Pareek, Ratna Dahiya
Abstract:
The power generated by a solar photovoltaic (PV) module depends on the surrounding irradiance, temperature, shading conditions, and shading pattern. This paper presents a simulation of a photovoltaic module using Matlab/Simulink. A PV array is also simulated through series and parallel connections of modules, and their characteristic curves are given. Further, a PV module topology/configuration is proposed for a 5.5 kW inverter available in the literature. Shading of a PV array, whether complete or partial, can have a significant impact on its power output and energy yield; therefore, the simulated model's characteristic curves (I-V and P-V) are drawn for uniform shading conditions (USC), and then the output power, voltage and current are calculated for variations in insolation under shading conditions. Additionally, the characteristic curves are given for a predetermined shadowing condition.
Keywords: array, series, parallel, photovoltaic, partial shading
Procedia PDF Downloads 566
23636 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
The solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at the different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
Procedia PDF Downloads 207
23635 Smartphone-Based Human Activity Recognition by Machine Learning Methods
Authors: Yanting Cao, Kazumitsu Nawata
Abstract:
As smartphones are upgraded, their software and hardware become smarter, so smartphone-based human activity recognition can be described as more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector of time- and frequency-domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature selection and parameter adjustment steps, a well-performing SVM classifier was trained.
Keywords: smart sensors, human activity recognition, artificial intelligence, SVM
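A minimal sketch of such a pipeline is shown below; the synthetic data stands in for the 561-feature smartphone dataset, and the selected feature count and parameter grid are hypothetical.
```python
# Minimal HAR sketch: scale features, select a subset, tune and train an SVM.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=561, n_informative=40,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=100),
                     SVC(kernel="rbf"))
grid = GridSearchCV(pipe, {"svc__C": [1, 10], "svc__gamma": ["scale", 0.01]},
                    cv=3).fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))
```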
Procedia PDF Downloads 144
23634 Modification of Rubber Swab Tool with Brush to Reduce Rubber Swab Fraction Fishing Time
Authors: T. R. Hidayat, G. Irawan, F. Kurniawan, E. H. I. Prasetya, Suharto, T. F. Ridwan, A. Pitoyo, A. Juniantoro, R. T. Hidayat
Abstract:
Swabbing is an activity to lift fluid from inside a well using a sand line, with the aim of finding the fluid influx after perforation or of lowering the fluid level to obtain a difference between the formation pressure and the hydrostatic pressure in the well for underbalanced perforation. During swab activities, problems frequently occur with the rubber swab: it often breaks and becomes a fish inside the well. This rubber swab fishing makes the rig operation take longer, delays the swab result data, and creates potential losses of well operation for the company. The average time needed to fish the fractions of a rubber swab, plus the swab work itself, is 42 hours. The innovation made for this problem is to modify the rubber swab tool. The tool is modified by providing a series of brushes at its end with a threaded connection in order to improve work safety: when the rubber swab breaks, the broken swab is lifted by the brush underneath, which reduces the time lost to rubber swab fishing. This tool has been applied, and it is proven that with this modification the rig operation becomes more efficient, because the rubber swab fishing activity is no longer carried out; the fish fractions of the rubber swab are lifted to the surface. Therefore, it saves fuel costs, and well production potential is obtained. The average time to do swab work after the application of this modified tool is 8 hours.
Keywords: rubber swab, swab modification, brush, fishing rubber swab, cost saving
Procedia PDF Downloads 167
23633 Exposing Latent Fingermarks on Problematic Metal Surfaces Using Time of Flight Secondary Ion Mass Spectroscopy
Authors: Tshaiya Devi Thandauthapani, Adam J. Reeve, Adam S. Long, Ian J. Turner, James S. Sharp
Abstract:
Fingermarks are a crucial form of evidence for identifying a person at a crime scene. However, visualising latent (hidden) fingermarks can be difficult, and the correct choice of techniques is essential to develop and preserve any fingermarks that might be present. Knives, firearms and other metal weapons have proven to be challenging substrates (stainless steel in particular) from which to reliably obtain fingermarks. In this study, time of flight secondary ion mass spectroscopy (ToF-SIMS) was used to image fingermarks on metal surfaces. This technique was compared to a conventional superglue-based fuming technique accompanied by a series of contrast-enhancing dyes (basic yellow 40 (BY40), crystal violet (CV) and Sudan black (SB)) on three different metal surfaces. The conventional techniques showed little to no evidence of fingermarks being present on the metal surfaces after a few days. However, ToF-SIMS images revealed fingermarks on the same and similar substrates with an exceptional level of detail, demonstrating clear ridge definition as well as detail about sweat pore position and shape that persisted for over 26 days after deposition when the samples were stored under ambient conditions.
Keywords: conventional techniques, latent fingermarks, metal substrates, time of flight secondary ion mass spectroscopy
Procedia PDF Downloads 164
23632 Improvement of Transient Voltage Response Using PSS-SVC Coordination Based on ANFIS-Algorithm in a Three-Bus Power System
Authors: I Made Ginarsa, Agung Budi Muljono, I Made Ari Nrartha
Abstract:
A transient voltage response appears in power system operation when additional loading is applied to a load bus of the power system. In this research, the transient voltage response is improved by using power system stabilizer-static var compensator (PSS-SVC) coordination based on an adaptive neuro-fuzzy inference system (ANFIS) algorithm. The main function of the PSS is to add a damping component to damp the rotor oscillation through the automatic voltage regulator (AVR) and excitation system. The learning process of the ANFIS is carried out using an off-line method, where the training data for the ANFIS model are obtained by simulating the conventional PSS-SVC. The ANFIS model uses 7 Gaussian membership functions on each of its two inputs and 49 rules at the output. The ANFIS-PSS and ANFIS-SVC models are then applied to the power system. Simulation results show that the transient voltage response is improved, with a settling time of 4.25 s.
Keywords: improvement, transient voltage, PSS-SVC, ANFIS, settling time
Procedia PDF Downloads 577
23631 Flow Transformation: An Investigation on Theoretical Aspects and Numerical Computation
Authors: Abhisek Sarkar, Abhimanyu Gaur
Abstract:
In this report, we discuss the theoretical aspects of flow transformation occurring through a series of bifurcations. The parameters and their continuous variation, the intermittent bursts in the transition zone, the variation of velocity and pressure with time, the effect of roughness in the turbulent zone, and the changes in the friction factor and head loss coefficient as functions of the Reynolds number for transverse flow across a cylinder are discussed. An analysis of the variation of the wake length with the Reynolds number was carried out in FORTRAN.
Keywords: bifurcation, attractor, intermittence, energy cascade, energy spectra, vortex stretching
Procedia PDF Downloads 398
23630 Predicting Options Prices Using Machine Learning
Authors: Krishang Surapaneni
Abstract:
The goal of this project is to determine how to predict important aspects of options, including the ask price. We compare different machine learning models to learn the best model, and the best hyperparameters for that model, for this purpose and data set. Option pricing is a relatively new field, and it can be very complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid future research. We tested multiple models and experimented with hyperparameter tuning, trying to find some of the best parameters for a machine learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The most important feature in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction there is a large potential for error, so we cannot judge the models on whether they predict the price perfectly. For this reason, we determined the accuracy of each model by finding the average percentage difference between the predicted and actual values, comparing the actual results in the testing data with the predictions made by the models. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
Keywords: finance, linear regression model, machine learning model, neural network, stock price
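The evaluation described above can be sketched as follows; the option features, pricing rule, and noise level are synthetic stand-ins, not the project's data.
```python
# Minimal sketch: compare three regressors on a synthetic option-quote table
# using mean absolute percentage error against the ask price.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.uniform(50, 150, n),    # hypothetical feature: strike
    rng.uniform(0.1, 2.0, n),   # hypothetical feature: time to expiry (years)
    rng.uniform(0.1, 0.6, n),   # hypothetical feature: implied volatility
])
ask = 2.0 + 0.4 * X[:, 2] * X[:, 0] * np.sqrt(X[:, 1]) + rng.normal(0, 0.3, n)
X_tr, X_te, y_tr, y_te = train_test_split(X, ask, random_state=0)

for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                           random_state=0),
              RandomForestRegressor(n_estimators=300, random_state=0)):
    model.fit(X_tr, y_tr)
    err = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(type(model).__name__, f"{100 * err:.2f}% avg. error")
```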
Procedia PDF Downloads 75
23629 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams
Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin
Abstract:
Fire incidents have steadily increased over the last year according to the National Emergency Management Agency of South Korea. Even though most fire incidents with property damage have occurred in buildings, rehabilitation has not been properly done with consideration of structural safety. Therefore, this study aims at evaluating rehabilitation effects on fire-damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a design concrete strength of 21 MPa. Two different cover thicknesses were used: 40 mm and 50 mm. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO 834 standard time-temperature curve. Rehabilitation was done by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Both the fire-damaged beams and the rehabilitated beams were tested with a four-point loading system to observe structural behaviors and the rehabilitation effect. To verify the experiments, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain the geometries of the fire-damaged beams. In addition to the fire-damaged beam models, the rehabilitated part was added with the material properties of polymeric mortar. Three-dimensional continuum brick elements were used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and non-linear geometrical analyses were performed. Test results showed that the maximum loads of the rehabilitated beams were 8~10% higher than those of the non-rehabilitated beams and even 1~6% higher than those of the non-fire-damaged beam. The stiffness of the rehabilitated beams was also larger than that of the non-rehabilitated beams but smaller than that of the non-fire-damaged beams. In addition, the structural behaviors predicted by the analyses also showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. From this study, both the experimental and analytical results demonstrated a good rehabilitation effect on fire-damaged normal strength concrete beams. In the future, the proposed analytical method can be used to predict the structural behavior of rehabilitated, fire-damaged concrete beams accurately without the time- and cost-consuming experimental process.
Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 508
23628 The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models
Authors: H. C. Chinwenyi, H. D. Ibrahim, F. A. Ahmed
Abstract:
In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair and correct future prices are probabilistic. This paper examines three different stochastic differential equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. Also, the numerical solution of the derived martingale option price valuation equations for the SDE models was carried out using the Monte Carlo method, implemented in MATLAB. Furthermore, results from numerical examples using published All-Share Index data from the Nigerian Stock Exchange (NSE) show the effect of an increase in the underlying asset value (stock price) on the value of the European put option for these models. From the results obtained, we see that an increase in the stock price yields a decrease in the value of the European put option. Hence, this guides the option holder in making a quality decision by not exercising his right on the option.
Keywords: equivalent martingale measure, European put option, Girsanov theorem, martingales, Monte Carlo method, option price valuation formula
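A minimal Monte Carlo sketch of the martingale pricing idea for a European put is shown below; it uses geometric Brownian motion for brevity (the paper treats the CEV, Black-Karasinski, and Heston models), with hypothetical parameter values.
```python
# Minimal Monte Carlo sketch: European put priced as a discounted expectation
# under the equivalent martingale (risk-neutral) measure, GBM dynamics.
import numpy as np

rng = np.random.default_rng(8)
S0, K, r, sigma, T = 100.0, 95.0, 0.05, 0.25, 1.0
n_paths = 200_000

# Risk-neutral terminal price: S_T = S0 * exp((r - sigma^2/2)T + sigma*sqrt(T)*Z)
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoff = np.maximum(K - ST, 0.0)              # European put payoff
price = np.exp(-r * T) * payoff.mean()        # discounted expectation
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"put price approx {price:.4f} +/- {1.96 * stderr:.4f}")
# Increasing S0 lowers the put value, matching the paper's observation.
```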
Procedia PDF Downloads 134
23627 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models
Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo De Magalhães
Abstract:
This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. In order to face this enormous task, the proposed resolution method adopts a smoothing strategy using a special C∞ differentiable class of functions. The final estimation solution is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called the Hyperbolic Smoothing Method (HSM), makes possible the application of the most powerful minimization algorithms and also allows the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented to illustrate both the reliability and the efficiency of the proposed approach.
Keywords: rainfall-runoff models, automatic calibration, hyperbolic smoothing method
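A common form of hyperbolic smoothing replaces the non-differentiable threshold max(0, x) with a C∞ approximation whose accuracy is controlled by a parameter τ; the sketch below is our illustration of the idea, not necessarily the paper's exact formulation.
```python
# Minimal sketch: hyperbolic smoothing of the threshold max(0, x), the kind of
# kink that appears in CRR spill/overflow terms. As tau -> 0 the smooth
# function approaches the original, so a solver can decrease tau gradually.
import numpy as np

def phi(x, tau):
    """C-infinity approximation of max(0, x): (x + sqrt(x^2 + tau^2)) / 2."""
    return 0.5 * (x + np.sqrt(x * x + tau * tau))

x = np.linspace(-2, 2, 9)
for tau in (1.0, 0.1, 0.001):        # sequence of differentiable subproblems
    print(tau, np.max(np.abs(phi(x, tau) - np.maximum(x, 0.0))))
# The maximum error is tau/2 (attained at x = 0), so the subproblems converge
# to the original non-smooth problem as tau shrinks.
```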
Procedia PDF Downloads 149
23626 Machine Learning-Enabled Classification of Climbing Using Small Data
Authors: Nicholas Milburn, Yu Liang, Dalei Wu
Abstract:
Athlete performance scoring within the climbing domain presents interesting challenges, as the sport does not have an objective way to assign skill. Assessing skill levels within any sport is valuable, as it can be used to mark progress while training, and it can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The dataset available was composed of dynamic force data recorded during climbing; however, this dataset came with challenges such as data scarcity, imbalance, and temporal heterogeneity. The investigated solutions to these challenges include data augmentation, temporal normalization, conversion of the time series to the spectral domain, and cross-validation strategies. The investigated solutions to the classification problem included the lightweight machine learning classifiers KNN and SVM, as well as deep learning with a CNN. The best performing model had an 80% accuracy. In conclusion, there seems to be enough information within climbing force data to accurately categorize climbers by skill.
Keywords: classification, climbing, data imbalance, data scarcity, machine learning, time sequence
Procedia PDF Downloads 143
23625 Effect of Cooking Time, Seed-To-Water Ratio and Soaking Time on the Proximate Composition and Functional Properties of Tetracarpidium conophorum (Nigerian Walnut) Seeds
Authors: J. O. Idoko, C. N. Michael, T. O. Fasuan
Abstract:
This study investigated the effects of cooking time, seed-to-water ratio and soaking time on the proximate and functional properties of African walnut seed using a Box-Behnken design and response surface methodology (BBD-RSM), with a view to increasing its utilization in the food industry. African walnut seeds were sorted, washed, soaked, cooked, dehulled, sliced, dried and milled. Proximate analysis and functional properties of the samples were evaluated using standard procedures. Data obtained were analyzed using descriptive and inferential statistics. Quadratic models were obtained to predict the proximate and functional qualities as functions of cooking time, seed-to-water ratio and soaking time. The results showed that crude protein ranged between 11.80% and 23.50%, moisture content between 1.00% and 4.66%, ash content between 3.35% and 5.25%, crude fibre from 0.10% to 7.25%, and carbohydrate from 1.22% to 29.35%. The functional properties showed that soluble protein ranged from 16.26% to 42.96%, viscosity from 23.43 mPa·s to 57 mPa·s, emulsifying capacity from 17.14% to 39.43%, and water absorption capacity from 232% to 297%. An increase in the volume of water used during cooking resulted in loss of water-soluble protein through leaching; the length of the soaking time and the moisture content of the dried product are inversely related; ash content is inversely related to the cooking time and the amount of water used; extraction of fat is enhanced by an increase in soaking time; and increases in cooking and soaking times result in a decrease in fibre content. The results obtained indicated that African walnut could be used in several food formulations as a protein supplement and binder.
Keywords: African walnut, functional properties, proximate analysis, response surface methodology
Procedia PDF Downloads 396