Search results for: gaussian mixture models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8012

6572 Object-Based Flow Physics for Aerodynamic Modelling in Real-Time Environments

Authors: William J. Crowther, Conor Marsh

Abstract:

Object-based flow simulation allows fast computation of arbitrarily complex aerodynamic models made up of simple objects with limited flow interactions. The proposed approach is universally applicable to objects made from arbitrarily scaled ellipsoid primitives at arbitrary aerodynamic attitude and angular rate. The component-based aerodynamic modelling approach increases efficiency by allowing selective inclusion of different physics models at run-time and allows extensibility through the development of new models. Insight into the numerical stability of the model under first-order fixed-time-step integration schemes is provided by a stability analysis of the drag component. The compute cost of model components and functions is evaluated and compared against numerical benchmarks. Static model outputs are verified against theoretical expectations, and dynamic behaviour is validated using falling-plate data from the literature. The model is applied to a range of case studies to demonstrate its extensibility, ease of use, and low computational cost. Dynamically complex multi-body systems can be implemented in a transparent and efficient manner, and we successfully demonstrate large scenes with hundreds of objects interacting with diverse flow fields.
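The stability claim for the drag component under first-order fixed-time-step integration can be illustrated with a minimal sketch (an illustration of the general principle, not the authors' model): for linear drag dv/dt = -k*v, the explicit Euler update multiplies the velocity by (1 - k*dt) each step, so the scheme is stable only when dt < 2/k.

```python
def euler_drag(v0, k, dt, steps):
    """Explicit (first-order, fixed-time-step) Euler integration of linear
    drag dv/dt = -k*v. Each step multiplies v by (1 - k*dt), so the scheme
    decays only while |1 - k*dt| < 1, i.e. dt < 2/k; beyond that it
    oscillates and diverges."""
    v = v0
    for _ in range(steps):
        v += -k * v * dt
    return v
```

With k = 1, a time step of 0.5 decays toward zero as physics requires, while a time step of 3 blows up numerically. This is the kind of constraint a real-time engine with a fixed frame step must respect.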

Keywords: aerodynamics, real-time simulation, low-order model, flight dynamics

Procedia PDF Downloads 81
6571 Modeling the Effects of Temperature on Ambient Air Quality Using AERMOD

Authors: Mustapha Babatunde, Bassam Tawabini, Ole John Nielson

Abstract:

Air dispersion (AD) models such as AERMOD are important tools for estimating the environmental impacts of air pollutant emissions into the atmosphere from anthropogenic sources. The outcome of these models is strongly linked to climate conditions such as air temperature, which is expected to change in the future due to global warming. Given scientific projections of impending changes to the climate of Saudi Arabia, especially the anticipated temperature rise, there is a potential direct impact on the pollutant dispersion patterns produced by AD models. To our knowledge, no similar studies have been carried out in Saudi Arabia to investigate such an impact. This research therefore investigates the effects of temperature change on air quality in the Dammam Metropolitan Area, Saudi Arabia, using AERMOD coupled with station data, with sulphur dioxide (SO₂) as a model air pollutant. AERMOD is used to predict SO₂ dispersion trends in the surrounding area. Emissions from five (5) industrial stacks onto twenty-eight (28) receptors in the study area were considered for the current climate period (2010-2019) and a future mid-century period (2040-2060) under different scenarios of elevated temperature (+1°C, +3°C and +5°C) across averaging periods of 1 h, 4 h and 8 h. Results showed that SO₂ levels at the receiving sites under both current and simulated future climatic conditions fall within the allowable limits of the WHO and KSA air quality standards. Results also revealed that the projected rise in temperature would cause only a mild increase in SO₂ concentration levels: the average increase was 0.04%, 0.14%, and 0.23% for temperature increases of 1, 3, and 5 degrees, respectively. In conclusion, the outcome of this work elucidates the degree to which global warming and climate change affect air quality and can help policymakers in their decision-making, given the significant health challenges associated with ambient air pollution in Saudi Arabia.

Keywords: air quality, sulfur dioxide, dispersion models, global warming, KSA

Procedia PDF Downloads 58
6570 Interaction of Phytochemicals Present in Green Tea, Honey and Cinnamon to Human Melanocortin 4 Receptor

Authors: Chinmayee Choudhury

Abstract:

The human melanocortin 4 receptor (HMC4R) is one of the most promising drug targets for the treatment of obesity, as it controls appetite. A deletion of residues 88-92 in HMC4R is sometimes the cause of severe obesity in humans. In this study, two homology models were constructed, for the normal and the mutated HMC4R, using Modeller9v7. Phytochemicals present in green tea, honey, and cinnamon that have appetite-suppressant activity were built, minimized, and docked to the normal and mutated HMC4R models using ArgusLab 4.0.1, to study their differential binding compared with the natural agonist α-melanocyte stimulating hormone (α-MSH). The modes of binding of the phytochemicals to the normal and mutated HMC4Rs were compared, as was the binding of α-MSH to both receptor models. It was observed that kaempferol and epigallocatechin-3-gallate (EGCG), present in green tea and honey; isorhamnetin, chlorogenic acid, chrysin, galangin, and pinocembrin, present in honey; and cinnamaldehyde, cinnamyl acetate, and cinnamyl alcohol, present in cinnamon, can form more stable complexes with the mutated HMC4R than α-MSH does. They may therefore be potential HMC4R agonists for appetite suppression.

Keywords: HMC4R, α-MSH, docking, phytochemical, appetite suppressant, homology modelling

Procedia PDF Downloads 173
6569 A System Dynamics Approach to Technological Learning Impact for Cost Estimation of Solar Photovoltaics

Authors: Rong Wang, Sandra Hasanefendic, Elizabeth von Hauff, Bart Bossink

Abstract:

Technological learning and learning curve models have long been used to estimate photovoltaic (PV) cost development over time for climate mitigation targets. They can integrate a number of technological learning sources that influence the learning process. Yet accurate, realistic cost estimates for PV development remain difficult to achieve. This paper develops four hypothetical-alternative learning curve models by proposing different combinations of technological learning sources, including local and global technology experience and the knowledge stock. It focuses on the non-linear relationship between costs and technological learning sources and their dynamic interaction, and uses the system dynamics approach to produce more accurate PV cost estimates for future development. As a case study, data from China are gathered to show that the learning curve model incorporating both global and local experience is more accurate and realistic than the other three models for PV cost estimation. Further, absorbing and integrating global experience into the local industry has a positive impact on PV cost reduction. Although the learning curve model incorporating the knowledge stock is not realistic for current PV deployment in China, it still plays an effective positive role in future PV cost reduction.
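The single-factor learning curve (Wright's law) is the usual starting point for such models; the paper's system-dynamics formulation layers several learning sources on top of it. A minimal sketch, with parameter names of my own choosing:

```python
import math

def learning_curve_cost(c0, x0, x, learning_rate):
    """One-factor (Wright) learning curve: unit cost falls by a fixed
    fraction `learning_rate` for every doubling of cumulative deployed
    capacity x. c0 is the cost at reference capacity x0."""
    b = -math.log2(1.0 - learning_rate)  # experience exponent
    return c0 * (x / x0) ** (-b)
```

With a 20% learning rate, cost drops to 80% of its reference value after one doubling of capacity and to 64% after two, which is the compounding effect the multi-source models generalize.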

Keywords: photovoltaic, system dynamics, technological learning, learning curve

Procedia PDF Downloads 79
6568 Mathematical Programming Models for Portfolio Optimization Problem: A Review

Authors: Mazura Mokhtar, Adibah Shuib, Daud Mohamad

Abstract:

The portfolio optimization problem has received a great deal of attention from both researchers and practitioners over the last six decades. This paper provides an overview of the current state of research in portfolio optimization supported by mathematical programming techniques. It also surveys the solution algorithms for portfolio optimization models, classifying them by nature into heuristic and exact methods. To serve these purposes, 40 related articles appearing in international journals from 2003 to 2013 were gathered and analyzed. Based on the literature review, stochastic programming and goal programming constitute the mathematical programming techniques most frequently employed to tackle the portfolio optimization problem. It is hoped that the paper can serve researchers and practitioners as an easy reference on portfolio optimization.
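Among the mathematical-programming formulations such a review covers, the mean-variance model is the canonical one. A minimal two-asset sketch (a generic illustration, not drawn from the surveyed papers), where the minimum-variance weight has a closed form:

```python
def min_variance_weight(var1, var2, cov12):
    """Weight on asset 1 in the two-asset minimum-variance portfolio,
    obtained by setting the derivative of portfolio variance to zero:
    w* = (var2 - cov12) / (var1 + var2 - 2*cov12)."""
    return (var2 - cov12) / (var1 + var2 - 2.0 * cov12)

def portfolio_variance(w, var1, var2, cov12):
    """Variance of a portfolio holding weight w in asset 1 and (1-w) in asset 2."""
    return w * w * var1 + (1.0 - w) ** 2 * var2 + 2.0 * w * (1.0 - w) * cov12
```

Larger formulations add expected-return targets, cardinality limits, and transaction costs, which is where the heuristic and exact solution methods surveyed in the paper come in.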

Keywords: portfolio optimization, mathematical programming, multi-objective programming, solution approaches

Procedia PDF Downloads 331
6567 Reliability Evaluation of a Payment Model in Mobile E-Commerce Using Colored Petri Net

Authors: Abdolghader Pourali, Mohammad V. Malakooti, Muhammad Hussein Yektaie

Abstract:

A mobile payment system in mobile e-commerce must generally offer high security so that users can trust it for business deals, sales, payments, and other financial transactions. However, an architecture or payment model in e-commerce only shows the pattern of interaction and collaboration among users and merchants; it does not give stakeholders any evaluation of the effectiveness and trustworthiness of financial transactions. In this paper, we present a detailed assessment of the reliability of a mobile payment model in mobile e-commerce using formal models and colored Petri nets. Finally, we demonstrate that the reliability of this system is high (case study: a secure payment model in mobile commerce).

Keywords: reliability, colored Petri net, assessment, payment models, m-commerce

Procedia PDF Downloads 523
6566 Applying ARIMA Data Mining Techniques to ERP to Generate Sales Demand Forecasting: A Case Study

Authors: Ghaleb Y. Abbasi, Israa Abu Rumman

Abstract:

This paper modeled sales history, archived from 2012 to 2015 and aggregated into monthly bins, for five products of a medical supply company in Jordan. Consistent patterns extracted from the sales demand history in the Enterprise Resource Planning (ERP) system were used to generate sales demand forecasts with the time series technique known as the Auto Regressive Integrated Moving Average (ARIMA). This was used to model realistic sales demand patterns, predict future demand, and select the best model for each of the five products. The analysis revealed that the current replenishment system leads to inventory overstocking.
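The simplest member of the ARIMA family the paper applies is ARIMA(1,0,0), i.e. a plain AR(1) model. A self-contained least-squares sketch (not the authors' R implementation, which the keywords reference):

```python
def fit_ar1(series):
    """Least-squares fit of an AR(1) model x_t = c + phi * x_{t-1} + e_t,
    the simplest special case of the ARIMA family. Returns (c, phi)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - phi * mx, phi

def forecast_ar1(c, phi, last, steps):
    """Iterate the fitted recursion to forecast `steps` periods ahead."""
    out = []
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

Full ARIMA adds differencing (the "I") and moving-average error terms (the "MA"), which is what makes it suitable for the trending, seasonal demand series described in the abstract.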

Keywords: ARIMA models, sales demand forecasting, time series, R code

Procedia PDF Downloads 365
6565 Photoluminescent Properties of Noble Metal Nanoparticles Supported Yttrium Aluminum Garnet Nanoparticles Doped with Cerium (Ⅲ) Ions

Authors: Mitsunobu Iwasaki, Akifumi Iseda

Abstract:

Yttrium aluminum garnet doped with cerium(III) ions (Y₃Al₅O₁₂:Ce³⁺, YAG:Ce³⁺) has attracted great attention because it can efficiently convert blue light into a very broad yellow emission band, which produces white light-emitting diodes and is applied in panel displays. To improve the brightness and resolution of displays, considerable attention has been directed to developing fine phosphor particles. We have prepared YAG:Ce³⁺ nanophosphors by an environmentally friendly wet process. The absorption maximum of the surface plasmon of Ag nanoparticles is close to the excitation maximum (460 nm) of YAG:Ce³⁺, so Ag nanoparticles supported on the surface of YAG:Ce³⁺ (Ag-YAG:Ce³⁺) can be expected to enhance the absorption of the Ce³⁺ ions. In this study, we prepared Ag-YAG:Ce³⁺ nanophosphors and investigated their photoluminescent properties. YCl₃·6H₂O and AlCl₃·6H₂O with a molar ratio of Y:Al = 3:5 were dissolved in ethanol (100 ml), and CeCl₃·7H₂O (0.3 mol%) was added to the solution. Then, NaOH (4.6×10⁻² mol) dissolved in ethanol (50 ml) was added dropwise to the mixture under reflux over 2 hours, and the solution was refluxed for a further hour. After cooling to room temperature, the precipitates in the reaction mixture were heated at 673 K for 1 hour. After calcination, the particles were immersed in AgNO₃ solution for 1 hour, followed by sintering at 1123 K for 1 hour. The YAG:Ce³⁺ particles were confirmed to be nanocrystals 50-80 nm in diameter, and the Ag nanoparticles supported on them were single nanometers in diameter. The excitation and emission maxima were at 454 nm and 539 nm, respectively. The emission intensity was highest for Ag-YAG:Ce³⁺ immersed in 0.5 mM AgCl (Ag-YAG:Ce (0.5 mM)). The absorption maximum (461 nm) increased for Ag-YAG:Ce³⁺ in comparison with YAG:Ce³⁺, indicating that absorption was enhanced by the addition of Ag. The external and internal quantum efficiencies of Ag-YAG:Ce (0.5 mM) were 11.2% and 36.9%, respectively. The emission intensity and absorption maximum of Ag-YAG:Ce (0.5 mM)×n (n = 1, 2, 3) increased with the number of supporting cycles n, as did the external and internal quantum efficiencies. The external quantum efficiency of Ag-YAG:Ce (0.5 mM) (n = 3) became twice as large as that of YAG:Ce. In conclusion, Ag nanoparticles supported on YAG:Ce³⁺ increased both absorption and quantum efficiency; the support of Ag nanoparticles thus enhanced the photoluminescent properties of YAG:Ce³⁺.

Keywords: plasmon, quantum efficiency, silver nanoparticles, yttrium aluminum garnet

Procedia PDF Downloads 254
6564 Kinetic Study of Municipal Plastic Waste

Authors: Laura Salvia Diaz Silvarrey, Anh Phan

Abstract:

Municipal plastic waste (MPW) comprises a mixture of thermoplastics such as high- and low-density polyethylene (HDPE and LDPE), polypropylene (PP), polystyrene (PS) and polyethylene terephthalate (PET). The recycling rate of these plastics is low, e.g. only 27% in 2013; the remainder was incinerated or disposed of in landfills. As MPW generation increases by approximately 5% per annum, MPW management technologies have to be developed to comply with legislation. Pyrolysis, i.e. thermochemical decomposition, provides an excellent alternative for converting MPW into valuable resources such as fuels and chemicals. Most studies on waste plastic kinetics have focused only on HDPE and LDPE, with the simple assumption of first-order decomposition, which is not the real reaction mechanism. The aim of this study was to develop a kinetic study for each of the polymers in the MPW mixture using thermogravimetric analysis (TGA) over a range of heating rates (5, 10, 20 and 40°C/min) in an N₂ atmosphere with a sample size of 1-4 mm. A model-free kinetic method was applied to quantify the activation energy at each level of conversion. The Kissinger-Akahira-Sunose (KAS) and Flynn-Wall-Ozawa (FWO) equations, jointly with master plots, confirmed that the activation energy was not constant throughout the reaction for all five plastics studied, showing that MPW decomposes through a complex mechanism and not by first-order kinetics. Master plots confirmed that MPW decomposes by a random scission mechanism at conversions above 40%. In random scission, different radicals formed along the backbone produce cleavage of bonds by chain scission into molecules of different lengths. The cleavage of bonds during random scission follows first-order kinetics and is related to the conversion. When a bond is broken, one part of the initial molecule becomes an unsaturated molecule and the other a terminal free radical. The latter can react with hydrogen from an adjacent carbon, releasing another free radical and a saturated molecule, or react with another free radical to form an alkane. Not every bond cleavage releases a molecule that evaporates: at early stages of the reaction (conversion and temperature below 40% and 300°C), most products are not short enough to evaporate. Only at higher degrees of conversion does most of the bond cleavage release molecules small enough to evaporate.
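The model-free KAS method mentioned above reduces to a straight-line fit: at a fixed conversion level, ln(β/T²) plotted against 1/T across the heating rates β has slope -Ea/R. A sketch with synthetic data (not the paper's measurements):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def kas_activation_energy(heating_rates, temps):
    """Kissinger-Akahira-Sunose estimate: ln(beta/T^2) = const - Ea/(R*T)
    at a fixed conversion. A least-squares line of ln(beta/T^2) against 1/T
    over several heating rates has slope -Ea/R, so Ea = -slope * R."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(b / t ** 2) for b, t in zip(heating_rates, temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return -slope * R  # activation energy in J/mol
```

Repeating this fit at each conversion level is what reveals the non-constant activation energy reported in the abstract.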

Keywords: kinetic, municipal plastic waste, pyrolysis, random scission

Procedia PDF Downloads 338
6563 The Use of Drones in Measuring Environmental Impacts of the Forest Garden Approach

Authors: Andrew J. Zacharias

Abstract:

The forest garden approach (FGA) was established by Trees for the Future (TREES) over the organization’s 30 years of agroforestry projects in Sub-Saharan Africa. This method transforms traditional agricultural systems into highly managed gardens that produce food and marketable products year-round. The effects of the FGA on food security, dietary diversity, and economic resilience have been measured closely, and TREES has begun to monitor its environmental impacts through sensors mounted on unmanned aerial vehicles, commonly known as 'drones'. These drones collect thousands of pictures to create 3-D models in both the visible and the near-infrared wavelengths. Analysis of these models provides TREES with quantitative and qualitative evidence of improvements in annual above-ground biomass and leaf area indices, as measured in-situ using NDVI calculations.
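The NDVI mentioned above is a simple per-pixel band ratio of the drone's near-infrared and red channels; a minimal sketch:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance: (NIR - Red) / (NIR + Red). Healthy vegetation reflects
    strongly in NIR and absorbs red, giving values toward +1; bare soil
    and water give values near or below zero."""
    return (nir - red) / (nir + red)
```

Averaging NDVI over a mapped plot, season after season, is one way such programs track above-ground biomass trends.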

Keywords: agroforestry, biomass, drones, NDVI

Procedia PDF Downloads 141
6562 An Adjusted Network Information Criterion for Model Selection in Statistical Neural Network Models

Authors: Christopher Godwin Udomboso, Angela Unna Chukwu, Isaac Kwame Dontwi

Abstract:

In selecting a statistical neural network model, the Network Information Criterion (NIC) has been observed to be sample-biased because it does not account for sample size. The selection of a model from a set of fitted candidate models requires objective, data-driven criteria. In this paper, we derive and investigate the Adjusted Network Information Criterion (ANIC), based on Kullback’s symmetric divergence, which is designed to be an asymptotically unbiased estimator of the expected Kullback-Leibler information of a fitted model. The analyses show that, overall, the ANIC improves model selection over a wider range of sample sizes than the NIC does.
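The abstract does not give the NIC/ANIC formulas, but the idea of a small-sample correction that vanishes as the sample grows can be illustrated with the analogous and better-known AIC/AICc pair (an analogy only, not the authors' criterion):

```python
def aic(log_likelihood, k):
    """Akaike information criterion: -2*lnL plus a complexity penalty 2k
    for a model with k parameters."""
    return -2.0 * log_likelihood + 2.0 * k

def aicc(log_likelihood, k, n):
    """Small-sample corrected AIC. The extra term 2k(k+1)/(n-k-1) inflates
    the penalty when n is small and vanishes as n grows, analogous to how
    a sample-size adjustment removes the bias the paper attributes to NIC."""
    return aic(log_likelihood, k) + 2.0 * k * (k + 1) / (n - k - 1)
```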

Keywords: statistical neural network, network information criterion, adjusted network information criterion, transfer function

Procedia PDF Downloads 544
6561 Experimental Parameters’ Effects on the Electrical Discharge Machining Performances

Authors: Asmae Tafraouti, Yasmina Layouni, Pascal Kleimann

Abstract:

The growing market for microsystems (MST) and micro-electromechanical systems (MEMS) is driving the search for manufacturing techniques that are alternatives to microelectronics-based technologies, which are generally expensive and time-consuming. Hot embossing and micro-injection moulding of thermoplastics appear to be industrially viable processes. However, both require the use of master models, usually made of hard materials such as steel. These master models cannot be fabricated using standard microelectronics processes, so other micromachining processes are used, such as laser machining or micro-electrical discharge machining (µEDM). In this work, µEDM has been used. The principle of µEDM is based on a thin cylindrical micro-tool that erodes the workpiece surface. The two electrodes are immersed in a dielectric at a distance of a few micrometers from each other (the gap). When an electrical voltage is applied between the two electrodes, electrical discharges are generated that machine the material. In order to produce master models with high resolution and smooth surfaces, the discharge mechanism must be well controlled. However, several problems are encountered, such as the randomness of the electrical discharge process, fluctuation of the discharge energy, inversion of the electrodes' polarity, and wear of the micro-tool. The effect of different parameters, such as the applied voltage, the working capacitance, the micro-tool diameter, and the initial gap, has been studied. This analysis helps to improve machining performance, such as the workpiece surface condition and the lateral gap of the craters.
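For a capacitive discharge circuit of the kind described, the energy of a single spark is set by the working capacitance and the applied voltage via E = CV²/2, which is why those two parameters dominate crater size. A one-line sketch (a textbook relation, not the authors' process model):

```python
def discharge_energy(capacitance_f, voltage_v):
    """Energy of one capacitive discharge, E = C * V^2 / 2 (joules).
    Doubling the voltage quadruples the spark energy, while halving the
    working capacitance halves it, so fine finishes use small C and V."""
    return 0.5 * capacitance_f * voltage_v ** 2
```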

Keywords: craters, electrical discharges, micro-electrical discharge machining, microsystems

Procedia PDF Downloads 56
6560 Electrical Load Estimation Using Estimated Fuzzy Linear Parameters

Authors: Bader Alkandari, Jamal Y. Madouh, Ahmad M. Alkandari, Anwar A. Alnaqi

Abstract:

A new formulation of the fuzzy linear estimation problem is presented as a linear programming problem. The objective is to minimize the spread of the data points, taking into consideration the type of membership function of the fuzzy parameters, so as to satisfy the constraints at each measurement point and to ensure that the original membership is included in the estimated membership. Different models are developed for a fuzzy triangular membership. The proposed models are applied to examples from the area of fuzzy linear regression and finally to examples of estimating the electrical load on a busbar. It has been found that the proposed technique is well suited for electrical load estimation, since the nature of the load is characterized by uncertainty and vagueness.

Keywords: fuzzy regression, load estimation, fuzzy linear parameters, electrical load estimation

Procedia PDF Downloads 520
6559 Finite Element-Based Stability Analysis of Roadside Settlements Slopes from Barpak to Yamagaun through Laprak Village of Gorkha, an Epicentral Location after the 7.8Mw 2015 Barpak, Gorkha, Nepal Earthquake

Authors: N. P. Bhandary, R. C. Tiwari, R. Yatabe

Abstract:

This research employs the finite element method to evaluate the stability of roadside settlement slopes from Barpak to Yamagaun through Laprak village of Gorkha, Nepal, after the 7.8 Mw 2015 Barpak, Gorkha, Nepal earthquake. It covers three major villages of Gorkha, i.e., Barpak, Laprak and Yamagaun, that were devastated by the 2015 Gorkha earthquake. The road-head distances from Barpak to Laprak and from Laprak to Yamagaun are about 14 and 29 km, respectively. The epicentral distances of the main shock of magnitude 7.8 and the aftershock of magnitude 6.6 were about 7 and 11 kilometers (south-east) from Barpak village, nearer to Laprak and Yamagaun. It is also believed that the epicenter of the main shock, contrary to what has been reported until now, was not in Barpak village but somewhere nearer to Yamagaun village; the shaking experienced during the earthquake in Yamagaun was much stronger than in Barpak. In this context, we carried out a detailed study of the stability of the Yamagaun settlement slope as a case study, where ground fissures, ground settlement, multiple cracks and toe failures are most severe. The stability issues of the existing settlements and the proposed road alignment on the Yamagaun village slope, which is surrounded by many newly activated landslides, are addressed. Given the importance of this issue, a field survey was carried out to understand the behavior of the ground fissures and the multiple failure characteristics of the slopes. The results suggest that the Yamagaun slope at Profiles 2-2, 3-3 and 4-4 is not safe enough for infrastructure development, even under normal soil slope conditions, for material models 2, 3 and 4, whereas the slope at Profile 1-1 appears safe for all four material models. The results also indicate that the first three profiles are only marginally safe for material models 2, 3 and 4, respectively, while Profile 4-4 is not safe for any of the four material models. Thus, Profile 4-4 needs special care to make the slope stable.

Keywords: earthquake, finite element method, landslide, stability

Procedia PDF Downloads 329
6558 Epigenetic Drugs for Major Depressive Disorder: A Critical Appraisal of Available Studies

Authors: Aniket Kumar, Jacob Peedicayil

Abstract:

Major depressive disorder (MDD) is a common and important psychiatric disorder. Several clinical features of MDD suggest an epigenetic basis for its pathogenesis. Since epigenetics (heritable changes in gene expression not involving changes in DNA sequence) may underlie the pathogenesis of MDD, epigenetic drugs such as DNA methyltransferase inhibitors (DNMTi) and histone deacetylase inhibitors (HDACi) may be useful for treating MDD. The available literature indexed in PubMed on preclinical trials of epigenetic drugs for the treatment of MDD was investigated. The search terms used were 'depression' or 'depressive' and 'HDACi' or 'DNMTi'. It was found that there were three preclinical trials using HDACi and three using DNMTi for the treatment of MDD. All the trials were conducted on rodents (mice or rats). The animal models of depression used were the learned-helplessness model, the forced swim test, the open field test, and the tail suspension test; one study used a genetic rat model of depression (the Flinders Sensitive Line). The HDACi tested were sodium butyrate, compound 60 (Cpd-60), and valproic acid; the DNMTi tested were 5-azacytidine and decitabine. All three preclinical trials using HDACi, and likewise all three using DNMTi, showed an antidepressant effect in animal models of depression. Thus, epigenetic drugs, namely HDACi and DNMTi, may prove useful in the treatment of MDD and merit further investigation for this disorder.

Keywords: DNA methylation, drug discovery, epigenetics, major depressive disorder

Procedia PDF Downloads 173
6557 A Biomechanical Model for the Idiopathic Scoliosis Using the Antalgic-Trak Technology

Authors: Joao Fialho

Abstract:

The mathematical modelling of idiopathic scoliosis has been studied over the years. The models presented in those papers are based on orthotic stabilization of idiopathic scoliosis, in which a transversal force is applied to the human spine in a continuous manner. When considering the ATT (Antalgic-Trak Technology) device, the existing models cannot be used, as the forces applied are no longer transversal nor applied continuously; in this device, vertical traction is applied. In this study, we propose to model idiopathic scoliosis under the ATT device and, with the parameters obtained from the mathematical modelling, set up a case-by-case individualized therapy plan for each patient.

Keywords: idiopathic scoliosis, mathematical modelling, human spine, Antalgic-Trak technology

Procedia PDF Downloads 249
6556 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme

Authors: Shahram Jamali, Samira Hamed

Abstract:

One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under the dynamic conditions of the network. Although the original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm tune RED's parameters for better efficiency and performance. Extensive packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization and dropped packet count.
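The RED mechanism whose parameters are being tuned can be summarized by its classic drop-probability ramp (a sketch of standard textbook RED, not of the Markov-RED variant itself):

```python
def red_drop_prob(avg_q, min_th, max_th, max_p):
    """Classic RED drop probability as a function of the average queue
    length: zero below min_th, a linear ramp up to max_p at max_th, and
    a certain drop above max_th. min_th, max_th and max_p are exactly
    the fixed parameters a model-based scheme would adapt at run time."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

Because the right thresholds depend on traffic load, predicting the queue's near-future state (as the paper does with a Markov model) gives a principled way to move them instead of guessing fixed values.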

Keywords: active queue management, RED, Markov model, random early detection algorithm

Procedia PDF Downloads 523
6555 Using Historical Data for Stock Prediction

Authors: Sofia Stoica

Abstract:

In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-Nearest Neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that historical data alone is enough to obtain high accuracy, and that a simple algorithm like linear regression can perform similarly to more sophisticated models while taking less time and fewer resources to implement.
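Of the baselines listed, k-nearest-neighbour regression is easy to sketch in a few lines (a one-dimensional illustration with made-up numbers, not the paper's dataset or feature set):

```python
def knn_predict(train_x, train_y, query, k):
    """k-nearest-neighbour regression: find the k training points closest
    to the query (here by 1-D absolute distance) and average their targets.
    No training phase is needed; all work happens at prediction time."""
    pairs = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    return sum(y for _, y in pairs[:k]) / k
```

In practice the inputs would be feature vectors built from recent price history rather than scalars, but the averaging principle is the same.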

Keywords: finance, machine learning, opening price, stock market

Procedia PDF Downloads 160
6554 Downscaling GRACE Gravity Models Using Spectral Combination Techniques for Terrestrial Water Storage and Groundwater Storage Estimation

Authors: Farzam Fatolazadeh, Kalifa Goita, Mehdi Eshagh, Shusen Wang

Abstract:

The Gravity Recovery and Climate Experiment (GRACE) is a twin-satellite mission for the precise determination of spatial and temporal variations in the Earth’s gravity field. The products of this mission are monthly global gravity models containing the spherical harmonic coefficients and their errors. These GRACE models can be used to estimate terrestrial water storage (TWS) variations across the globe at large scales, thereby offering an opportunity for surface water and groundwater storage (GWS) assessments. Yet the ability of GRACE to monitor changes at smaller scales is too limited for local water management authorities, largely because of the low spatial and temporal resolutions of its models (~200,000 km² and one month, respectively). High-resolution GRACE data products would substantially enrich the information needed by local-scale decision-makers while providing data for regions that lack adequate in-situ monitoring networks, including northern parts of Canada. Such products could eventually be obtained through downscaling. In this study, we extended the spectral combination theory to simultaneously downscale GRACE spatially, from its coarse 3° resolution to 0.25°, and temporally, from monthly to daily resolution. This method combines the monthly gravity field solutions of GRACE with daily hydrological model products, in the form of both low- and high-frequency signals, to produce high-spatiotemporal-resolution TWSA and GWSA products. The main contribution and originality of this study is to comprehensively and simultaneously consider GRACE and hydrological variables and their uncertainties in forming the estimator in the spectral domain. We therefore expect to obtain downscaled products with acceptable accuracy.

Keywords: GRACE satellite, groundwater storage, spectral combination, terrestrial water storage

Procedia PDF Downloads 66
6553 The Factors Affecting the Use of Massive Open Online Courses in Blended Learning by Lecturers in Universities

Authors: Taghreed Alghamdi, Wendy Hall, David Millard

Abstract:

Massive Open Online Courses (MOOCs) have recently gained widespread interest in the academic world, prompting discussion of a number of issues. One of these is the use of MOOCs in teaching and learning in higher education by integrating MOOC content with traditional face-to-face activities in a blended learning format, called blended MOOCs (bMOOCs), which is intended not to replace traditional learning but to enhance student learning. Most research on MOOCs has focused on students' perceptions and institutional threats, whereas there is a lack of published research on academics' experiences and practices. Thus, the first aim of this study is to develop a classification of blended MOOC models by conducting a systematic literature review, classifying 19 case studies, and identifying the broad types of bMOOC models, namely the Supplementary Model and the Integrated Model. The analysis phase will therefore emphasize these types of bMOOC models in terms of the adoption of MOOCs by lecturers. The second aim of the study is to improve the understanding of lecturers' acceptance of bMOOCs by investigating the factors that influence academics' acceptance of using MOOCs in traditional learning, through an online survey distributed to lecturers who participate in MOOC platforms. These factors can help institutions encourage their lecturers to integrate MOOCs with their traditional courses in universities.

Keywords: acceptance, blended learning, blended MOOCs, higher education, lecturers, MOOCs, professors

Procedia PDF Downloads 119
6552 Study of the Nanostructured Fe₅₀Cr₃₅Ni₁₅ Powder Alloy Developed by Mechanical Alloying

Authors: Salim Triaa, Fella Kali-Ali

Abstract:

Nanostructured Fe₅₀Cr₃₅Ni₁₅ alloys were prepared from pure elemental powders using high-energy mechanical alloying. The powder mixtures obtained were characterized by several techniques. X-ray diffraction analysis revealed the formation of the Fe₁Cr₁ compound with a BCC structure after one hour of milling. A second compound, Fe₃Ni₂ with an FCC structure, was observed after 12 hours of milling. The crystallite size determined by the Williamson-Hall method was about 5.1 nm after 48 h of milling. SEM observations confirmed the growth of the crushed particles as a function of milling time, while the homogeneous distribution of the constituent elements in the powders was verified by EDX analysis.
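The Williamson-Hall estimate of crystallite size mentioned above fits the line β·cos(θ) = Kλ/D + 4ε·sin(θ) to peak-broadening data: the intercept gives the size D and the slope the microstrain ε. A minimal sketch follows, with an assumed shape factor K = 0.9 and Cu Kα wavelength; the abstract does not state the actual diffraction conditions.

```python
import numpy as np

# Williamson-Hall: beta * cos(theta) = K*lambda/D + 4*strain*sin(theta).
# A straight-line fit of beta*cos(theta) against 4*sin(theta) yields the
# microstrain as the slope and K*lambda/D as the intercept.
K = 0.9          # crystallite shape factor (assumed)
LAM = 0.15406    # X-ray wavelength in nm (Cu K-alpha, assumed)

def williamson_hall(two_theta_deg, fwhm_rad):
    """Return (crystallite size in nm, microstrain) from peak positions
    (2-theta in degrees) and peak breadths (radians)."""
    theta = np.radians(np.asarray(two_theta_deg)) / 2.0
    x = 4.0 * np.sin(theta)
    y = np.asarray(fwhm_rad) * np.cos(theta)
    slope, intercept = np.polyfit(x, y, 1)   # highest degree first
    return K * LAM / intercept, slope
```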

Keywords: Fe-Cr-Ni alloy, mechanical alloying, nanostructure, SEM, XRD

Procedia PDF Downloads 163
6551 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood

Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty

Abstract:

We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. First, the spectra were pre-processed to eliminate uninformative variation. Then, a prediction model was constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties in NIR analysis of wood samples, we applied various pre-treatment methods: straight line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scatter correction, first derivative, second derivative, and their combinations, such as first derivative + straight line subtraction, first derivative + vector normalization, and first derivative + multiplicative scatter correction. For each combination of pre-processing method and NIR region, RMSECV, RMSEP, and the optimum number of factors (rank) were obtained through the model-development optimization process. More than 350 combinations were evaluated during this process. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm⁻¹ with straight line subtraction, constant offset elimination, first derivative, or second derivative pre-processing, which were found to be most appropriate for model development.
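A few of the pre-treatments named above can be sketched in a few lines each. These are generic textbook formulations, not the exact implementations used in the study:

```python
import numpy as np

def snv(spectrum):
    """Standard normal variate: per-spectrum centering and scaling,
    a common correction for multiplicative scatter effects."""
    s = np.asarray(spectrum, dtype=float)
    return (s - s.mean()) / s.std()

def straight_line_subtraction(spectrum):
    """Remove a linear baseline fitted to the whole spectrum."""
    s = np.asarray(spectrum, dtype=float)
    x = np.arange(len(s))
    slope, intercept = np.polyfit(x, s, 1)
    return s - (slope * x + intercept)

def first_derivative(spectrum):
    """Simple finite-difference first derivative; in practice a
    Savitzky-Golay derivative is often preferred for noisy spectra."""
    return np.gradient(np.asarray(spectrum, dtype=float))
```

Each function maps a 1-D absorbance spectrum to its corrected form, after which a PLS model would be calibrated on the treated spectra.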

Keywords: FT-NIR, mechanical properties, pre-processing, PLS

Procedia PDF Downloads 327
6550 Economic Development Impacts of Connected and Automated Vehicles (CAV)

Authors: Rimon Rafiah

Abstract:

This paper will present a combination of two seemingly unrelated models: one for estimating the economic development impacts of transportation investment, and another for increasing CAV penetration in order to reduce congestion. Measuring the economic development impacts of transportation investments is becoming more widely recognized around the world. Examples include the UK’s Wider Economic Benefits (WEB) model, Economic Impact Assessments in the USA, various input-output models, and additional models around the world. The economic impact model used here is based on WEB and rests on the following premise: investments in transportation reduce the cost of personal travel, enabling firms to be more competitive, creating additional throughput (the same road allows more people to travel), and reducing the cost of workers' travel to a new workplace. This reduction in travel costs was estimated in out-of-pocket terms for a given localized area and then translated into additional employment based on regional labor supply elasticity. This additional employment was conservatively assumed to be at minimum-wage levels, translated into GDP terms, and from there into direct taxation (i.e., an increase in tax revenue collected by the government). The CAV model is based on economic principles such as CAV usage, supply, and demand. CAV usage can increase capacity by a variety of means, including increased automation (Levels 1 through 4) and increased penetration, which several forecasts predict will reach 50% by 2030, with possible full conversion by 2045-2050. Several countries have also passed policies and/or legislation ending sales of new gasoline-powered vehicles from 2030 onward. Supply was measured via increased capacity on given infrastructure as a function of both CAV penetration and the implemented technologies.
The CAV model, as implemented in the USA, has shown significant savings in travel time and in vehicle operating costs, which can be translated into economic development impacts in terms of job creation, GDP growth, and salaries. The models have policy implications and can be adapted for use in Japan as well.
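The WEB-style chain of reasoning described above (travel cost saving → additional employment via labor supply elasticity → GDP at minimum wage → tax revenue) can be sketched as follows. All numeric inputs are illustrative placeholders, not values from the paper:

```python
# Hedged sketch of the WEB-style impact chain: an out-of-pocket travel cost
# saving is converted into additional employment via a regional labor supply
# elasticity, then into GDP at minimum-wage levels (the conservative wage
# assumption described in the abstract), and finally into tax revenue.

def economic_impact(cost_saving_frac, labor_elasticity,
                    baseline_jobs, annual_min_wage, tax_rate):
    """Return (extra jobs, GDP gain, tax gain) for a travel cost saving."""
    extra_jobs = baseline_jobs * labor_elasticity * cost_saving_frac
    gdp_gain = extra_jobs * annual_min_wage   # conservative wage assumption
    tax_gain = gdp_gain * tax_rate
    return extra_jobs, gdp_gain, tax_gain

jobs, gdp, tax = economic_impact(
    cost_saving_frac=0.05,    # 5% reduction in out-of-pocket travel cost
    labor_elasticity=0.1,     # assumed regional labor supply elasticity
    baseline_jobs=100_000,
    annual_min_wage=15_000,
    tax_rate=0.2)
```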

Keywords: CAV, economic development, WEB, transport economics

Procedia PDF Downloads 60
6549 AutoML: Comprehensive Review and Application to Engineering Datasets

Authors: Parsa Mahdavi, M. Amin Hariri-Ardebili

Abstract:

The development of accurate machine learning and deep learning models traditionally demands hands-on expertise and a solid background to fine-tune hyperparameters. With the continuous expansion of datasets in various scientific and engineering domains, researchers increasingly turn to machine learning methods to unveil hidden insights that may elude classic regression techniques. This surge in adoption raises concerns about the adequacy of the resultant meta-models and, consequently, the interpretation of the findings. In response to these challenges, automated machine learning (AutoML) emerges as a promising solution, aiming to construct machine learning models with minimal intervention or guidance from human experts. AutoML encompasses crucial stages such as data preparation, feature engineering, hyperparameter optimization, and neural architecture search. This paper provides a comprehensive overview of the principles underpinning AutoML, surveying several widely-used AutoML platforms. Additionally, the paper offers a glimpse into the application of AutoML on various engineering datasets. By comparing these results with those obtained through classical machine learning methods, the paper quantifies the uncertainties inherent in the application of a single ML model versus the holistic approach provided by AutoML. These examples showcase the efficacy of AutoML in extracting meaningful patterns and insights, emphasizing its potential to revolutionize the way we approach and analyze complex datasets.
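One core AutoML stage, hyperparameter optimization, can be illustrated with a toy grid search over model configurations scored on a held-out split. Real AutoML platforms automate this (plus feature engineering and architecture search) far more extensively; this numpy-only sketch is purely illustrative:

```python
import numpy as np

def fit_ridge_poly(x, y, degree, lam):
    """Fit a ridge-penalized polynomial via the normal equations."""
    X = np.vander(x, degree + 1)
    A = X.T @ X + lam * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ y)

def automl_grid_search(x_tr, y_tr, x_val, y_val,
                       degrees=(1, 2, 3), lams=(0.0, 0.1, 1.0)):
    """Score every (degree, penalty) configuration on the validation split
    and keep the best, mimicking the model-selection loop of AutoML."""
    best = None
    for d in degrees:
        for lam in lams:
            w = fit_ridge_poly(x_tr, y_tr, d, lam)
            err = np.mean((np.vander(x_val, d + 1) @ w - y_val) ** 2)
            if best is None or err < best[0]:
                best = (err, d, lam, w)
    return best  # (val_error, degree, lambda, weights)
```

On noise-free quadratic data the search correctly favors a configuration of degree at least two with negligible validation error.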

Keywords: automated machine learning, uncertainty, engineering dataset, regression

Procedia PDF Downloads 42
6548 Regularization of Gene Regulatory Networks Perturbed by White Noise

Authors: Ramazan I. Kadiev, Arcady Ponosov

Abstract:

Mathematical models of gene regulatory networks can in many cases be described by ordinary differential equations with switching nonlinearities, where the initial value problem is ill-posed. Several regularization methods are known in the case of deterministic networks, but the presence of stochastic noise leads to several technical difficulties. In the presentation, it is proposed to apply the methods of the stochastic singular perturbation theory going back to Yu. Kabanov and Yu. Pergamentshchikov. This approach is used to regularize the above ill-posed problem, which, e.g., makes it possible to design stable numerical schemes. Several examples are provided in the presentation, which support the efficiency of the suggested analysis. The method can also be of interest in other fields of biomathematics, where differential equations contain switchings, e.g., in neural field models.
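The regularization idea can be sketched as follows: the discontinuous switch in the drift is replaced by a steep smooth sigmoid, after which a standard Euler-Maruyama scheme integrates the resulting stochastic equation stably. All parameters are illustrative, and this naive smoothing is only a cartoon of the stochastic singular perturbation analysis the abstract actually relies on:

```python
import numpy as np

def sigmoid_switch(x, theta, q):
    """Smooth approximation of the Heaviside step H(x - theta);
    the limit q -> 0 recovers the discontinuous switch."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / q))

def simulate_gene(x0=0.0, theta=0.5, q=0.05, sigma=0.05,
                  k_prod=1.0, k_deg=1.0, dt=1e-3, n_steps=5000, seed=0):
    """Euler-Maruyama integration of a toy one-gene model with a
    regularized switch, perturbed by white noise."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_steps):
        drift = k_prod * sigmoid_switch(x, theta, q) - k_deg * x
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

The smoothed system is bistable: trajectories started below the threshold settle near the "off" state, those started above it near the "on" state, despite the noise.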

Keywords: ill-posed problems, singular perturbation analysis, stochastic differential equations, switching nonlinearities

Procedia PDF Downloads 178
6547 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple, and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as Ubidecarenone or Ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, applying the first derivative allowed Q and E to be determined alternately, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL⁻¹ for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, covering wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230-300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration set (x-block) to the corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, within the defined range, was used to validate the proposed network. No chemical separation, preparation stage, or mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied to the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet with excellent recoveries.
The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and requires no specialist analyst. Although the ANN technique needed a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
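The zero-crossing principle of the first method can be illustrated with synthetic Gaussian bands: the mixture's first derivative is read at the wavelength where the interfering component's derivative crosses zero (its band maximum), so the reading isolates the analyte's contribution. The band positions and widths below are hypothetical, not the actual Q and E spectra:

```python
import numpy as np

wl = np.linspace(220.0, 320.0, 1001)   # wavelength grid in nm (synthetic)

def band(center, width):
    """Synthetic Gaussian absorption band."""
    return np.exp(-((wl - center) ** 2) / (2.0 * width ** 2))

def d1_at(spectrum, wavelength):
    """First-derivative amplitude of a spectrum at a given wavelength."""
    d1 = np.gradient(spectrum, wl)
    return d1[np.argmin(np.abs(wl - wavelength))]

specA = band(275.0, 12.0)   # hypothetical analyte A, band maximum at 275 nm
specB = band(250.0, 10.0)   # hypothetical analyte B, band maximum at 250 nm

# B's derivative crosses zero at its own maximum (250 nm), so reading the
# mixture's derivative there recovers A's contribution alone.
mix = 0.7 * specA + 0.3 * specB
ratioA = d1_at(mix, 250.0) / d1_at(specA, 250.0)   # estimate of A's fraction
```

Because the derivative operator is linear and B's derivative vanishes at 250 nm, the ratio recovers the 0.7 fraction of A in the mixture.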

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 429
6546 Working Capital Management and Profitability of Uk Firms: A Contingency Theory Approach

Authors: Ishmael Tingbani

Abstract:

This paper adopts a contingency theory approach to investigate the relationship between working capital management and profitability using data of 225 listed British firms on the London Stock Exchange for the period 2001-2011. The paper employs a panel data analysis on a series of interactive models to estimate this relationship. The findings of the study confirm the relevance of the contingency theory. Evidence from the study suggests that the impact of working capital management on profitability varies and is constrained by organizational contingencies (environment, resources, and management factors) of the firm. These findings have implications for a more balanced and nuanced view of working capital management policy for policy-makers.
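The interactive models referred to above can be illustrated by a moderated regression in which the effect of working capital on profitability depends on a contingency variable, captured by an interaction term. The variable names and synthetic data below are placeholders, not the study's panel dataset:

```python
import numpy as np

# Sketch of an interactive (moderated) regression: profitability regressed
# on a working capital measure (here labeled CCC, cash conversion cycle),
# a contingency variable, and their interaction. All data are synthetic.

def ols(X, y):
    """Ordinary least squares via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

rng = np.random.default_rng(42)
n = 500
ccc = rng.normal(size=n)          # working capital measure (hypothetical)
env = rng.normal(size=n)          # contingency variable, e.g., environment
# True model: the effect of CCC on profitability varies with the environment.
profit = (1.0 - 0.5 * ccc + 0.3 * env + 0.4 * ccc * env
          + 0.05 * rng.normal(size=n))

X = np.column_stack([np.ones(n), ccc, env, ccc * env])
beta = ols(X, profit)   # [intercept, b_ccc, b_env, b_interaction]
```

A significant interaction coefficient is what a contingency-theory reading of working capital management predicts; a true panel analysis would add firm and time effects.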

Keywords: working capital management, profitability, contingency theory approach, interactive models

Procedia PDF Downloads 317
6545 Experimental Parameters’ Effects on the Electrical Discharge Machining Performances (µEDM)

Authors: Asmae Tafraouti, Yasmina Layouni, Pascal Kleimann

Abstract:

The growing market for Microsystems (MST) and Micro-Electromechanical Systems (MEMS) is driving the search for alternatives to microelectronics-based manufacturing techniques, which are generally expensive and time-consuming. Hot embossing and micro-injection molding of thermoplastics appear to be industrially viable processes. However, both require the use of master models, usually made of hard materials such as steel. These master models cannot be fabricated using standard microelectronics processes. Thus, other micromachining processes are used, such as laser machining or micro-electrical discharge machining (µEDM). In this work, µEDM has been used. The principle of µEDM is based on the use of a thin cylindrical micro-tool that erodes the workpiece surface. The two electrodes are immersed in a dielectric at a distance of a few micrometers from each other (the gap). When an electrical voltage is applied between the two electrodes, electrical discharges are generated, which machine the material. In order to produce master models with high resolution and smooth surfaces, it is necessary to control the discharge mechanism well. However, several problems are encountered, such as the randomness of the electrical discharge process, fluctuation of the discharge energy, inversion of the electrodes' polarity, and wear of the micro-tool. The effects of different parameters, such as the applied voltage, the working capacitor, the micro-tool diameter, and the initial gap, have been studied. This analysis helps to improve machining performance, such as the workpiece surface condition and the lateral gap of the craters.

Keywords: craters, electrical discharges, micro-electrical discharge machining (µEDM), microsystems

Procedia PDF Downloads 80
6544 Selective Immobilization of Fructosyltransferase onto Glutaraldehyde Modified Support and Its Application in the Production of Fructo-Oligosaccharides

Authors: Milica B. Veljković, Milica B. Simović, Marija M. Ćorović, Ana D. Milivojević, Anja I. Petrov, Katarina M. Banjanac, Dejan I. Bezbradica

Abstract:

In recent decades, the scientific community has recognized the growing importance of prebiotics, and numerous studies have therefore focused on their economic production, given their low abundance in natural resources. It has been confirmed that prebiotics are a source of energy for probiotics in the gastrointestinal tract (GIT) and enable their proliferation, consequently supporting the normal functioning of the intestinal microbiota. The products of their fermentation are short-chain fatty acids (SCFA), which play a key role in maintaining and improving the health not only of the GIT but also of the whole organism. Among the confirmed prebiotics, fructooligosaccharides (FOS) are considered interesting candidates for use in a wide range of products in the food industry. They are characterized as low-calorie and non-cariogenic substances that represent an adequate sugar substitute and can be considered suitable for use in products intended for diabetics. The subject of this research is the production of FOS by transforming sucrose using a fructosyltransferase (FTase) present in the commercial preparation Pectinex® Ultra SP-L, with special emphasis on the development of an adequate FTase immobilization method that would enable selective isolation of the enzyme responsible for the synthesis of FOS from the complex enzymatic mixture. This would lead to considerable enzyme purification and allow its direct incorporation into different sucrose-based products without the fear that the action of the other hydrolytic enzymes might adversely affect the products' functional characteristics. Accordingly, the possibility of selective immobilization of the enzyme was investigated using a support with primary amino groups, Purolite® A109, previously activated and modified using glutaraldehyde (GA).
In the initial phase of the research, the effects of individual immobilization parameters, such as pH, enzyme concentration, and immobilization time, were investigated to optimize the process, using support chemically activated with 15% and 0.5% GA to form dimers and monomers, respectively. It was determined that highly active immobilized preparations (371.8 IU/g of support for the dimer and 213.8 IU/g of support for the monomer) were achieved under acidic conditions (pH 4) at an enzyme concentration of 50 mg/g of support after 7 h and 3 h, respectively. These activity results show that the dimer form was more reactive than the monomer form. Also, in the case of support modification using 15% GA, the ratio of the activity immobilization yields of FTase to pectinase (the dominant component of the enzyme mixture) was 16.45, indicating the high feasibility of selective immobilization of FTase on the modified polystyrene resin. The immobilized preparations, having satisfactory features, were then tested in the FOS synthesis reaction under the determined optimal conditions. A maximum FOS yield of approximately 50% of total carbohydrates in the reaction mixture was recorded after 21 h. Finally, it can be concluded that the examined immobilization method yielded a highly active, stable, and, more importantly, purified enzyme preparation that can be further utilized on a larger scale for the development of continuous processes for FOS synthesis, as well as for the modification of different sucrose-based media.

Keywords: chemical modification, fructooligosaccharides, glutaraldehyde, immobilization of fructosyltransferase

Procedia PDF Downloads 168
6543 Mathematics Model Approaching: Parameter Estimation of Transmission Dynamics of HIV and AIDS in Indonesia

Authors: Endrik Mifta Shaiful, Firman Riyudha

Abstract:

Acquired Immunodeficiency Syndrome (AIDS) is one of the world's deadliest diseases, caused by the Human Immunodeficiency Virus (HIV), which infects white blood cells and causes a decline in the immune system. AIDS quickly became a worldwide epidemic affecting almost all countries. A mathematical modeling approach to the spread of HIV and AIDS is therefore needed to anticipate its continued spread. The purpose of this study is to estimate the parameters of a mathematical model of HIV and AIDS transmission using cumulative annual data on people with HIV and AIDS in Indonesia. The model contains a parameter r ∈ [0,1) representing the effectiveness of treatment in patients with HIV. If the value of r is close to 1, the number of people with HIV and AIDS will decline toward zero. The estimation results indicate that when r is close to unity, the number of HIV patients declines significantly, while the number of AIDS patients decreases steadily toward zero.
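The role of the treatment-effectiveness parameter r can be illustrated with a toy one-compartment version of such a model, in which new infections enter at a rate scaled by (1 - r): as r approaches 1, the infection term vanishes and the compartment decays toward zero, matching the behavior described above. This is a hypothetical simplification, not the paper's full HIV/AIDS transmission system, and all rates are illustrative:

```python
# Toy sketch: HIV compartment H with infection rate beta*(1 - r) and
# progression/removal rate gamma, integrated by forward Euler. The
# parameter r in [0, 1) is the treatment effectiveness; r close to 1
# drives the compartment toward zero.

def simulate_hiv(r, beta=0.5, gamma=0.2, H0=1000.0, dt=0.01, years=50):
    """Return the HIV compartment size after the given number of years."""
    H = H0
    for _ in range(int(years / dt)):
        dH = beta * (1.0 - r) * H - gamma * H   # net change per unit time
        H += dH * dt
    return H
```

With these illustrative rates, effectiveness above beta/(beta + 0) matters only through the sign of beta*(1 - r) - gamma: for r large enough the net rate is negative and the epidemic dies out.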

Keywords: HIV, AIDS, parameter estimation, mathematical models

Procedia PDF Downloads 232