Search results for: path loss model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20141

15041 Bathymetric Change of Brahmaputra River and Its Influence on Flooding Scenario

Authors: Arup Kumar Sarma, Rohan Kar

Abstract:

The development of a physical model of a river like the Brahmaputra, which originates in the Chema Yundung glacier of Tibet and flows through India and Bangladesh, is always expensive and very time consuming. With the advancement of computational techniques, mathematical modeling has found wide application. MIKE 21C is one such commercial software package, developed by the Danish Hydraulic Institute (DHI): a depth-averaged, two-dimensional curvilinear finite-difference model capable of simulating hydrodynamic and morphological processes, with some limitations. The main purposes of this study are to generate the bathymetry of the River Brahmaputra from “Sadia” upstream to “Dhubri” downstream, a stretch of approximately 695 km, for four different years (1957, 1971, 1977, and 1981) over the grid generated in MIKE 21C, and to carry out hydrodynamic simulations for these years to analyze the effect of bathymetry change on surface water elevation. The study establishes that bathymetric change can influence the flood level significantly in some river reaches, and therefore regular modification or updating of the bathymetry is essential for reliable flood routing in alluvial rivers.

Keywords: bathymetry, brahmaputra river, hydrodynamic model, surface water elevation

Procedia PDF Downloads 449
15040 Supersymmetry versus Compositeness: 2-Higgs Doublet Models Tell the Story

Authors: S. De Curtis, L. Delle Rose, S. Moretti, K. Yagyu

Abstract:

Supersymmetry and compositeness are the two prevalent paradigms that both provide a solution to the hierarchy problem and motivate a light Higgs boson state. An open door towards the solution is found in the context of 2-Higgs Doublet Models (2HDMs), which are required in supersymmetry and arise naturally in compositeness in order to enable Electro-Weak Symmetry Breaking. In composite scenarios, the two isospin doublets arise as pseudo Nambu-Goldstone bosons from the breaking of SO(6). By calculating the Higgs potential at one-loop level through the Coleman-Weinberg mechanism, from the explicit breaking of the global symmetry induced by the partial compositeness of fermions and gauge bosons, we derive the phenomenological properties of the Higgs states and highlight the main signatures of this Composite 2-Higgs Doublet Model at the Large Hadron Collider. These include modifications to the SM-like Higgs couplings as well as production and decay channels of heavier Higgs bosons. We contrast the properties of this composite scenario with the well-known ones established in supersymmetry, the MSSM being the most prominent example. We show how 2HDM spectra of masses and couplings accessible at the Large Hadron Collider may allow one to distinguish between the two paradigms.

Keywords: beyond the standard model, composite Higgs, supersymmetry, Two-Higgs Doublet Model

Procedia PDF Downloads 123
15039 Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor

Authors: Yash Jain

Abstract:

The US Centers for Disease Control and Prevention has recommended wearing masks to slow the spread of the virus. This research uses a video feed from a camera to conduct real-time classification of whether a person is wearing a mask correctly, wearing a mask incorrectly, or not wearing a mask at all. A mask detection network was trained on two distinct datasets from the open-source website Kaggle: the first, titled 'Face Mask Detection', was used to train the two-stage model, and the second, titled 'Face Mask Dataset', provided the data in YOLO format so that the TinyYoloV3 model could be trained. Based on these data, two machine learning models were implemented and trained: a TinyYoloV3 real-time model and a two-stage neural network classifier. The two-stage classifier first identifies the distinct faces within the image and then classifies the state of the mask on each face: worn correctly, worn incorrectly, or no mask at all. TinyYoloV3 was used for the live feed as well as for comparison against the two-stage classifier and was trained using the darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the TinyYoloV3 real-time detection model attained a mean average precision (mAP) of 59%. Overall, both models were able to correctly classify the no-mask, mask, and incorrectly worn mask scenarios.
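
A minimal sketch of the two-stage approach described above is given below: stage one detects faces in the camera frame, stage two classifies each face crop. The OpenCV Haar-cascade detector is a generic stand-in, and the classify_mask_state() helper is a hypothetical placeholder for the paper's trained classifier.

```python
# Sketch of the two-stage pipeline: detect faces, then classify the mask state of each face.
# The stage-2 classifier here is a placeholder; the paper trained its own models on Kaggle data.
import cv2

LABELS = ["with_mask", "without_mask", "mask_worn_incorrectly"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_mask_state(face_bgr):
    """Placeholder for the stage-2 classifier (e.g. a small CNN trained on face crops)."""
    return LABELS[0]  # hypothetical constant prediction, for illustration only

cap = cv2.VideoCapture(0)                       # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 4):
        label = classify_mask_state(frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("mask monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```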

Keywords: datasets, classifier, mask-detection, real-time, TinyYoloV3, two-stage neural network classifier

Procedia PDF Downloads 156
15038 Effect of Installation Method on the Ratio of Tensile to Compressive Shaft Capacity of Piles in Dense Sand

Authors: A. C. Galvis-Castro, R. D. Tovar, R. Salgado, M. Prezzi

Abstract:

It is generally accepted that the shaft capacity of piles in sand is lower for tensile loading than for compressive loading. So far, very little attention has been paid to the influence of the installation method on the tensile to compressive shaft capacity ratio. The objective of this paper is to analyze the effect of the installation method on the tensile to compressive shaft capacity of piles in dense sand, as observed in tests on half-circular model piles in a half-circular calibration chamber with digital image correlation (DIC) capability. Model piles are either monotonically jacked, jacked with multiple strokes, or pre-installed into the dense sand samples. Digital images of the model pile and sand are taken during both the installation and loading stages of each test and processed using the DIC technique to obtain the soil displacement and strain fields. The study provides key insights into the mobilization of shaft resistance in tensile and compressive loading for both displacement and non-displacement piles.

Keywords: digital image correlation, piles, sand, shaft resistance

Procedia PDF Downloads 268
15037 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
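
The abstract notes that CnPC is computationally equivalent to ordinary PC applied to a regularized covariance matrix. The sketch below illustrates that computation for an N > T panel; the simple diagonal-shrinkage form of the regularization is an illustrative assumption, not the paper's exact constraint.

```python
# Principal-components factor extraction on a regularized covariance matrix.
# The diagonal-shrinkage regularization used here is an assumption for illustration.
import numpy as np

def pc_factors(X, r, rho=0.2):
    """X: T x N panel, r: number of factors, rho: shrinkage weight (assumed)."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / T                                    # N x N sample covariance
    S_reg = (1 - rho) * S + rho * np.diag(np.diag(S))    # regularized covariance
    eigval, eigvec = np.linalg.eigh(S_reg)
    loadings = eigvec[:, ::-1][:, :r] * np.sqrt(N)       # top-r eigenvectors, PC normalization
    factors = Xc @ loadings / N                          # estimated common factors
    return factors, loadings

# Example with N > T, the case the method is designed to handle
rng = np.random.default_rng(0)
F = rng.standard_normal((50, 2))                         # T = 50 true factors
L = rng.standard_normal((200, 2))                        # N = 200 loadings
X = F @ L.T + 0.5 * rng.standard_normal((50, 200))
factors, loadings = pc_factors(X, r=2)
print(factors.shape, loadings.shape)                     # (50, 2) (200, 2)
```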

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 146
15036 Reaction Kinetics of Biodiesel Production from Refined Cottonseed Oil Using Calcium Oxide

Authors: Ude N. Callistus, Amulu F. Ndidi, Onukwuli D. Okechukwu, Amulu E. Patrick

Abstract:

A power law approximation was used in this study to evaluate the reaction orders of the calcium oxide (CaO) catalyzed transesterification of refined cottonseed oil and methanol. The kinetics study was carried out at temperatures of 45, 55 and 65 °C. The kinetic parameters obtained at 65 °C, a reaction order of 2.02 and a rate constant of 2.8 hr⁻¹ g⁻¹cat, best fitted the kinetic model. The activation energy, Ea, obtained was 127.744 kJ/mol. The results indicate that the transesterification reaction of refined cottonseed oil using a calcium oxide catalyst is approximately a second-order reaction.
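
As a sketch of how the activation energy follows from rate constants at the three study temperatures via the Arrhenius relation ln k = ln A - Ea/(RT): only the 65 °C rate constant (2.8 hr⁻¹ g⁻¹cat) comes from the abstract, so the values at 45 °C and 55 °C below are hypothetical placeholders.

```python
# Arrhenius fit ln k = ln A - Ea/(R T) over the three study temperatures.
# Only the 65 C rate constant is from the abstract; the other two are assumed.
import numpy as np
from scipy.stats import linregress

R = 8.314                                     # J mol^-1 K^-1
T = np.array([45.0, 55.0, 65.0]) + 273.15     # K
k = np.array([0.15, 0.70, 2.8])               # hr^-1 g_cat^-1 (first two values assumed)

fit = linregress(1.0 / T, np.log(k))
Ea = -fit.slope * R / 1000.0                  # kJ/mol
A = np.exp(fit.intercept)
print(f"Ea ~ {Ea:.1f} kJ/mol, pre-exponential A ~ {A:.3g} hr^-1 g_cat^-1")
```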

Keywords: refined cottonseed oil, transesterification, CaO, heterogeneous catalysts, kinetic model

Procedia PDF Downloads 535
15035 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, to view SAR data in an image domain comparable to what a human would see and so ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data are then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
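
A minimal CPU sketch of the per-point backprojection sum described above is shown below; the geometry handling, sampling convention, and far-field phase term are illustrative assumptions rather than the authors' GPU implementation.

```python
# Time-domain backprojection sketch: for every 3D reference point, compute the
# range to the antenna at each pulse, interpolate the range-compressed sample,
# apply the matched phase, and sum over all pulses.
import numpy as np

def backproject(rc_data, ant_pos, voxels, r0, dr, fc, c=3e8):
    """
    rc_data : (n_pulses, n_range_bins) complex range-compressed phase history
    ant_pos : (n_pulses, 3) antenna position per pulse
    voxels  : (n_voxels, 3) 3D reference points (e.g. from a DEM or point cloud)
    r0, dr  : range of the first bin and range-bin spacing
    fc      : center frequency
    """
    n_pulses, n_bins = rc_data.shape
    image = np.zeros(len(voxels), dtype=complex)
    for p in range(n_pulses):                             # each pulse (and each voxel) is
        r = np.linalg.norm(voxels - ant_pos[p], axis=1)   # independent, hence easy to parallelize
        bin_f = (r - r0) / dr
        lo = np.clip(bin_f.astype(int), 0, n_bins - 2)
        w = bin_f - lo                                    # linear interpolation weight
        sample = (1 - w) * rc_data[p, lo] + w * rc_data[p, lo + 1]
        image += sample * np.exp(1j * 4 * np.pi * fc * r / c)   # matched phase correction
    return np.abs(image)                                  # per-voxel reflectivity magnitude
```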

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 74
15034 Thermal Instability in Rivlin-Ericksen Elastico-Viscous Nanofluid with Connective Boundary Condition: Effect of Vertical Throughflow

Authors: Shivani Saini

Abstract:

The effect of vertical throughflow on the onset of convection in a Rivlin-Ericksen elastico-viscous nanofluid with a convective boundary condition is investigated. The flow is simulated with a modified Darcy model under the assumption that the nanoparticle volume fraction is not actively managed on the boundaries. The heat conservation equation is formulated by introducing the convective term of the nanoparticle flux. A linear stability analysis based upon normal modes is performed, and an approximate solution of the eigenvalue problem is obtained using the Galerkin weighted residual method. The dependence of the Rayleigh number on the various viscous and nanofluid parameters is investigated. It is found that the throughflow and nanofluid parameters hasten the convection, while the capacity ratio, kinematic viscoelasticity, and Vadasz number do not govern the stationary convection. With the convective component of the nanoparticle flux, the critical wave number is a function of the nanofluid parameters as well as the throughflow parameter. The obtained solution provides important physical insight into the behavior of this model.
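
As an illustration of the Galerkin weighted-residual step used here, the sketch below applies a one-term expansion to the classical Rayleigh-Benard problem with stress-free boundaries, where trial functions sin(pi z) give Ra(a) = (pi^2 + a^2)^3 / a^2 and a critical value of 27 pi^4 / 4; the paper's full eigenvalue problem additionally contains the nanofluid and throughflow terms, so this is only a simplified benchmark.

```python
# One-term Galerkin result for classical Rayleigh-Benard convection (free boundaries):
# minimize Ra(a) = (pi^2 + a^2)^3 / a^2 over the wavenumber a to get the critical values.
import numpy as np

def rayleigh(a):
    return (np.pi**2 + a**2) ** 3 / a**2

a_grid = np.linspace(0.5, 6.0, 2000)
Ra = rayleigh(a_grid)
i = np.argmin(Ra)
print(f"critical wavenumber a_c ~ {a_grid[i]:.3f} (exact pi/sqrt(2) ~ {np.pi/np.sqrt(2):.3f})")
print(f"critical Rayleigh number Ra_c ~ {Ra[i]:.1f} (exact 27*pi^4/4 ~ {27*np.pi**4/4:.1f})")
```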

Keywords: Darcy model, nanofluid, porous layer, throughflow

Procedia PDF Downloads 131
15033 The Internet of Things Ecosystem: Survey of the Current Landscape, Identity Relationship Management, Multifactor Authentication Mechanisms, and Underlying Protocols

Authors: Nazli W. Hardy

Abstract:

A critical component in the Internet of Things (IoT) ecosystem is the need for secure and appropriate transmission, processing, and storage of the data. Our current forms of authentication, and identity and access management do not suffice because they are not designed to service cohesive, integrated, interconnected devices, and service applications. The seemingly endless opportunities of IoT are in fact circumscribed on multiple levels by concerns such as trust, privacy, security, loss of control, and related issues. This paper considers multi-factor authentication (MFA) mechanisms and cohesive identity relationship management (IRM) standards. It also surveys messaging protocols that are appropriate for the IoT ecosystem.

Keywords: identity relation management, multifactor authentication, protocols, survey of internet of things ecosystem

Procedia PDF Downloads 346
15032 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control

Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol

Abstract:

Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex due to the several biological processes taking place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating the complex model for activated sludge kinetics was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both the influent and effluent plant data to reconcile and rectify the results forecasted by the BioWin model. The amount of mixed liquor suspended solids in the oxidation ditch, the aeration rates, and the recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time for extended periods. The lumped dynamic model development process was coupled with computational fluid dynamics (CFD) modeling of key units such as the oxidation ditches in the plant. Several CFD models that incorporate the nitrification-denitrification kinetics as well as the hydrodynamics were developed and are being tested using the ANSYS Fluent software platform. These realistic and verified models developed using BioWin and ANSYS were used to plan the operating policies and control strategies for the biological wastewater plant in advance, which further allows regulatory compliance at minimum operational cost. These models, with a little tuning, can be used for other biological wastewater treatment plants as well. The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the discharge limits of the plant. Also, with the help of this model, we were able to identify the key kinetic and stoichiometric parameters that matter most for modeling of biological wastewater treatment plants. Another important finding from this model was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentrations of various parameters such as total nitrogen, ammonia, nitrate, nitrite, etc. The ANSYS model showed how the formation of dead zones increases along the length of the oxidation ditches compared with the regions near the aerators. These profiles were also very useful in studying the mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps in optimizing plant performance.

Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics

Procedia PDF Downloads 202
15031 Supply Air Pressure Control of HVAC System Using MPC Controller

Authors: P. Javid, A. Aeenmehr, J. Taghavifar

Abstract:

In this paper, the supply air pressure of an HVAC system is modeled with a second-order transfer function plus dead time. In an HVAC system, the desired input undergoes step changes, and the output of the proposed control system should be able to follow the input reference, so a model-based predictive control scheme is designed in this paper. The closed-loop control system is implemented in MATLAB, and the simulation results are provided. The simulation results show that the model-based predictive control is able to control the plant properly.
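
A minimal Python sketch of a receding-horizon (dynamic-matrix style) predictive controller for a second-order-plus-dead-time plant is given below; the gain, time constants, dead time, horizons, and weighting are hypothetical values chosen only to illustrate the computation, not the paper's MATLAB implementation.

```python
# Dynamic-matrix MPC sketch for a hypothetical SOPDT plant K e^{-theta s}/((tau1 s+1)(tau2 s+1)).
import numpy as np

K, tau1, tau2, theta, Ts = 1.0, 8.0, 3.0, 2.0, 0.5
d = int(round(theta / Ts))                     # dead time in samples

def step_coeffs(n):
    """Unit-step response coefficients s[0..n] of the discretized plant (explicit Euler)."""
    x1 = x2 = 0.0
    s = np.zeros(n + 1)
    for j in range(1, n + 1):
        x1 += Ts * (K - x1) / tau1
        x2 += Ts * (x1 - x2) / tau2
        s[j] = x2
    return np.concatenate([np.zeros(d), s])[: n + 1]   # shift by the dead time

P, M, lam, Nsim = 40, 5, 0.05, 120             # prediction/control horizons, move penalty
s = step_coeffs(P + Nsim)

A = np.zeros((P, M))                           # dynamic matrix of step coefficients
for i in range(1, P + 1):
    for j in range(min(i, M)):
        A[i - 1, j] = s[i - j]
gain = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T)   # unconstrained MPC gain

du = np.zeros(Nsim)                            # applied control moves
y = np.zeros(Nsim)
r = 1.0                                        # set-point step in supply-air pressure
for k in range(Nsim):
    # free response over the horizon due to the moves already applied
    f = np.array([sum(s[i + k - m] * du[m] for m in range(k)) for i in range(1, P + 1)])
    du[k] = (gain @ (r - f))[0]                # receding horizon: apply only the first move
    y[k] = sum(s[k + 1 - m] * du[m] for m in range(k + 1))
print("closed-loop output approaches the set-point:", np.round(y[-5:], 3))
```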

Keywords: air conditioning system, GPC, dead time, air supply control

Procedia PDF Downloads 523
15030 Process Mining as an Ecosystem Platform to Mitigate a Deficiency of Processes Modelling

Authors: Yusra Abdulsalam Alqamati, Ahmed Alkilany

Abstract:

The teaching staff is a distinct group whose impact on the educational process plays an important role in enhancing the quality of academic education. To improve the management effectiveness of the academy, the Teaching Staff Management System (TSMS) proposes that all teacher processes be digitized. Although the BPMN approach can accurately describe processes, it lacks a clear picture of the process flow map, something the process mining approach provides by extracting information from event logs for discovery, monitoring, and model enhancement. Therefore, these two methodologies were combined to create the most accurate representation of system operations: the ability to extract data records, mine the processes, recreate them in the form of a Petri net, and then generate a BPMN model for a more in-depth view of the process flow. Additionally, the TSMS processes will be orchestrated to handle all requests within a guaranteed short time thanks to the integration of the Google Cloud Platform (GCP) and the BPM engine, allowing business owners to take part throughout the entire TSMS project development lifecycle.
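
As a small illustration of the discovery step in process mining, the sketch below builds a directly-follows graph from an event log, which is the usual starting point before a Petri net or BPMN model is derived (in practice with a library such as pm4py). The teaching-staff event log shown is an invented example.

```python
# Build a directly-follows graph from an event log (one trace of activities per case).
from collections import Counter

event_log = [                      # invented traces, ordered by timestamp
    ["submit request", "review", "approve", "notify"],
    ["submit request", "review", "reject", "notify"],
    ["submit request", "review", "approve", "notify"],
]

dfg = Counter()
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1           # count how often activity a is directly followed by b

starts = Counter(t[0] for t in event_log)
ends = Counter(t[-1] for t in event_log)

for (a, b), n in sorted(dfg.items(), key=lambda kv: -kv[1]):
    print(f"{a!r} -> {b!r}  ({n} times)")
print("start activities:", dict(starts), "| end activities:", dict(ends))
```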

Keywords: process mining, BPM, business process model and notation, Petri net, teaching staff, Google Cloud Platform

Procedia PDF Downloads 136
15029 Designing Price Stability Model of Red Cayenne Pepper Price in Wonogiri District, Centre Java, Using ARCH/GARCH Method

Authors: Fauzia Dianawati, Riska W. Purnomo

Abstract:

The food and agricultural sector is the biggest contributor to inflation in Indonesia. In Wonogiri district in particular, red cayenne pepper was the biggest contributor to inflation in 2016. National statistics show that over the last five years red cayenne pepper has had the highest average level of price fluctuation among all commodities. Several factors, such as the supply chain, price disparity, production quantity, crop failure, and the oil price, are possible causes of the high volatility of the red cayenne pepper price. Therefore, this research tries to find the key factors causing the fluctuation in the red cayenne pepper price by using the ARCH/GARCH method, which can accommodate the presence of heteroscedasticity in time series data. At the end of the research, it is statistically found that the second level of the supply chain is the biggest contributor to inflation, with a coefficient of 3.35 in the fluctuation forecasting model of the red cayenne pepper price. This model could serve as a reference for the government in determining appropriate policy to maintain the price stability of red cayenne pepper.
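
A sketch of an ARCH/GARCH volatility fit of the kind used above is shown below, assuming the Python `arch` package; the return series is simulated, since the Wonogiri price data are not reproduced here.

```python
# GARCH(1,1) fit on a simulated heteroscedastic return series using the `arch` package.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.standard_normal(500) * np.sqrt(
    0.5 + 0.3 * np.abs(np.sin(np.arange(500) / 25)))   # toy series with time-varying variance

am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.params)                                      # omega, alpha[1], beta[1]
print(res.forecast(horizon=5).variance.iloc[-1])       # 5-step-ahead variance forecast
```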

Keywords: ARCH/GARCH, forecasting, red cayenne pepper, volatility, supply chain

Procedia PDF Downloads 183
15028 Remaining Useful Life (RUL) Assessment Using Progressive Bearing Degradation Data and ANN Model

Authors: Amit R. Bhende, G. K. Awari

Abstract:

Remaining useful life (RUL) prediction is one of the key technologies for realizing prognostics and health management, which is widely applied in many industrial systems to ensure high system availability over their life cycles. The present work proposes a data-driven method of RUL prediction based on multiple health state assessment for rolling element bearings. Bearing degradation data from run to failure at three different conditions are used, and a RUL prediction model is built separately for each condition. Feed-forward back-propagation neural network models are developed for the prediction modeling.
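
A minimal sketch of the feed-forward (backpropagation-trained) RUL regression described above is given below using scikit-learn; the features and degradation trend are simulated placeholders standing in for run-to-failure bearing measurements.

```python
# Feed-forward neural network regression of remaining useful life from health features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
life = rng.uniform(0, 1, n)                          # fraction of life consumed (simulated)
X = np.column_stack([
    0.2 + life**2 + 0.05 * rng.standard_normal(n),   # e.g. vibration RMS feature
    1.0 + 3 * life + 0.10 * rng.standard_normal(n),  # e.g. kurtosis feature
])
y = 1.0 - life                                       # normalized remaining useful life

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```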

Keywords: bearing degradation data, remaining useful life (RUL), back propagation, prognosis

Procedia PDF Downloads 433
15027 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, to totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blue shifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect can be the observed CMB dipole: The earth travels at about 368 km/s (600 km/s) relative to the CMB. In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the Stefan-Boltzmann constant. The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
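
A rough back-of-the-envelope check of the claim that the CMB drag on the Earth is tiny can be made from P = aT⁴/3 and the quoted ±0.35 mK dipole; treating the Earth's geometric cross-section as fully absorbing is an illustrative assumption, not the paper's detailed estimate.

```python
# Order-of-magnitude estimate of the CMB radiation-pressure drag on the Earth.
import math

sigma, c = 5.670e-8, 2.998e8          # Stefan-Boltzmann constant, speed of light (SI)
a = 4 * sigma / c                     # radiation constant, J m^-3 K^-4
T, dT = 2.725, 0.35e-3                # mean CMB temperature and dipole amplitude (K)
R_earth, M_earth = 6.371e6, 5.972e24  # m, kg

dP = (4.0 / 3.0) * a * T**3 * (2 * dT)        # pressure difference, hot side minus cold side
force = dP * math.pi * R_earth**2             # net force on the geometric cross-section
print(f"pressure difference ~ {dP:.2e} Pa")
print(f"net force ~ {force:.2e} N, deceleration ~ {force / M_earth:.1e} m/s^2")
```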

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 130
15026 An Analytical Wall Function for 2-D Shock Wave/Turbulent Boundary Layer Interactions

Authors: X. Wang, T. J. Craft, H. Iacovides

Abstract:

When handling the near-wall regions of turbulent flows, it is necessary to account for the viscous effects which are important over the thin near-wall layers. Low-Reynolds-number turbulence models do this by including explicit viscous and damping terms which become active in the near-wall regions, and by using very fine near-wall grids to properly resolve the steep gradients present. In order to overcome the cost associated with low-Re turbulence models, a more advanced wall function approach has been implemented within OpenFOAM and tested, together with a standard log-law based wall function, in the prediction of flows which involve 2-D shock wave/turbulent boundary layer interactions (SWTBLIs). On the whole, in the calculation of the impinging shock interaction, the three turbulence modelling strategies, the Launder-Sharma k-ε model with Yap correction (LS), the high-Re k-ε model with the standard wall function (SWF), and the analytical wall function (AWF), display good predictions of wall pressure. However, the SWF approach tends to underestimate the tendency of the flow to separate as a result of the SWTBLI. The analytical wall function, on the other hand, is able to reproduce the shock-induced flow separation and returns predictions similar to those of the low-Re model, using a much coarser mesh.
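
For reference, the sketch below shows the core of the standard log-law wall function used as the SWF baseline: given the velocity at the first near-wall cell, U/uτ = ln(E y⁺)/κ is solved for the friction velocity by fixed-point iteration; the flow values and the simple switch at y⁺ ≈ 11 are illustrative assumptions, not the OpenFOAM implementation.

```python
# Standard log-law wall function: solve U/u_tau = ln(E * y * u_tau / nu) / kappa for u_tau.
import math

def friction_velocity(U, y, nu, kappa=0.41, E=9.8, iters=50):
    u_tau = max(1e-6, math.sqrt(nu * U / y))          # viscous-sublayer initial guess
    for _ in range(iters):
        y_plus = y * u_tau / nu
        u_tau = kappa * U / math.log(E * y_plus) if y_plus > 11.0 else math.sqrt(nu * U / y)
    return u_tau

U, y, nu = 150.0, 2e-4, 1.5e-5                        # near-wall velocity (m/s), distance (m), viscosity (m^2/s)
u_tau = friction_velocity(U, y, nu)
print(f"u_tau ~ {u_tau:.2f} m/s, wall shear stress ~ {1.2 * u_tau**2:.1f} Pa (rho = 1.2 kg/m^3)")
```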

Keywords: SWTBLIs, skin-friction, turbulence modeling, wall function

Procedia PDF Downloads 344
15025 Kinetic, Equilibrium and Thermodynamic Studies of the Adsorption of Crystal Violet Dye Using Groundnut Hulls

Authors: Olumuyiwa Ayoola Kokapi, Olugbenga Solomon Bello

Abstract:

Dyes are organic compounds with complex aromatic molecular structures that produce a fast colour on a substance. Dye effluent found in wastewater generated by the dyeing industries is one of the greatest contributors to water pollution. Groundnut hull (GH) is an agricultural material that constitutes waste in the environment. Environmental contamination by hazardous organic chemicals is an urgent problem, which is partially solved through adsorption technologies. The choice of groundnut hull was premised on the understanding that some materials of agricultural origin have shown potential to act as adsorbents for hazardous organic chemicals. The aim of this research is to evaluate the potential of groundnut hulls to adsorb crystal violet dye through kinetic, isotherm, and thermodynamic studies. The prepared groundnut hulls were characterized using Brunauer-Emmett-Teller (BET) analysis, Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). Operational parameters such as contact time, initial dye concentration, pH, and temperature were studied. Equilibrium for the adsorption process was attained in 80 minutes. The adsorption data were tested with the Langmuir and Freundlich isotherm models. Thermodynamic parameters such as ∆G°, ∆H°, and ∆S° of the adsorption processes were determined. The results showed that the uptake of dye by groundnut hulls occurred at a fast rate, corresponding to an increase in adsorption capacity at the equilibrium time of 80 min from 0.78 to 4.45 mg/g and from 0.77 to 4.45 mg/g as the initial dye concentration increased from 10 to 50 mg/L at pH 3.0 and 8.0, respectively. High regression values obtained for the pseudo-second-order kinetic model and its sum of square error (SSE%) values, along with the strong agreement between experimental and calculated values of qe, proved that the pseudo-second-order kinetic model fitted the data better than the pseudo-first-order kinetic model. The Langmuir and Freundlich results showed that the adsorption data fit the Langmuir model better than the Freundlich model. The thermodynamic study demonstrated the feasible, spontaneous, and endothermic nature of the adsorption process, as indicated by the negative values of the free energy change (∆G) at all temperatures and the positive value of the enthalpy change (∆H), respectively. The positive values of ∆S showed increased disorderliness and randomness at the solid/solution interface between crystal violet dye and groundnut hulls. The present investigation showed that groundnut hulls (GH) are a good low-cost alternative adsorbent for the removal of crystal violet (CV) dye from aqueous solution.
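
The linearized pseudo-second-order fit used in kinetic studies such as this one is t/qₜ = 1/(k₂qe²) + t/qe, so a line of t/qₜ against t gives qe from the slope and k₂ from the intercept. The (t, qₜ) points below are hypothetical placeholders, not the paper's measurements.

```python
# Linearized pseudo-second-order kinetic fit: t/q_t = 1/(k2*qe^2) + t/qe.
import numpy as np
from scipy.stats import linregress

t = np.array([5, 10, 20, 40, 60, 80], dtype=float)      # min
qt = np.array([1.9, 2.9, 3.7, 4.2, 4.4, 4.45])          # mg/g (assumed values)

fit = linregress(t, t / qt)
qe = 1.0 / fit.slope                                     # equilibrium capacity from the slope
k2 = fit.slope**2 / fit.intercept                        # rate constant from the intercept
print(f"qe ~ {qe:.2f} mg/g, k2 ~ {k2:.3f} g mg^-1 min^-1, R^2 ~ {fit.rvalue**2:.4f}")
```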

Keywords: adsorption, crystal violet dye, groundnut hulls, kinetics

Procedia PDF Downloads 367
15024 Component-Based Approach in Assessing Sewer Manholes

Authors: Khalid Kaddoura, Tarek Zayed

Abstract:

Sewer networks are constructed to protect communities and the environment from any contact with the sewer medium. Pipelines, whether laterals or sewer mains, and manholes form a huge underground infrastructure in every urban city. Due to the importance of sewer networks, the infrastructure asset management field has seen extensive advancement in condition assessment and rehabilitation decision models. However, most of the focus has been devoted to pipelines, giving little attention to manhole condition assessment. In fact, recent studies have started to emerge in this area to preserve manholes from malfunction. Therefore, the main objective of this study is to propose a condition assessment model for sewer manholes. The model divides the manhole into several components and determines the relative importance weight of each component using the Analytic Network Process (ANP) decision-making method. The condition of the manhole is then computed by aggregating the condition of each component with its corresponding weight. Accordingly, the proposed assessment model will enable decision-makers to obtain a final index suggesting the overall condition of the manhole, together with a backward analysis to check the condition of each component. Consequently, better decisions can be made regarding maintenance, rehabilitation, and replacement actions.
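
A simplified illustration of the weighting-and-aggregation idea is sketched below: component weights are taken as the principal eigenvector of a pairwise-comparison matrix (the AHP core on which ANP builds, ignoring ANP's interdependency supermatrix), and the manhole index is the weighted sum of component condition scores. The component list, comparison matrix, and scores are invented.

```python
# Derive component weights from a pairwise-comparison matrix, then aggregate conditions.
import numpy as np

components = ["cover", "frame", "chimney", "wall", "bench/channel"]
pairwise = np.array([                      # Saaty-style judgements (row vs column), invented
    [1,   2,   3,   1/2, 1  ],
    [1/2, 1,   2,   1/3, 1/2],
    [1/3, 1/2, 1,   1/4, 1/3],
    [2,   3,   4,   1,   2  ],
    [1,   2,   3,   1/2, 1  ],
])
eigval, eigvec = np.linalg.eig(pairwise)
w = np.real(eigvec[:, np.argmax(np.real(eigval))])
weights = w / w.sum()                      # relative importance of each component

condition = np.array([2, 3, 4, 2, 3])      # per-component scores, e.g. 1 (best) to 5 (worst)
index = float(weights @ condition)
for name, wt in zip(components, weights):
    print(f"{name:14s} weight = {wt:.3f}")
print(f"overall manhole condition index ~ {index:.2f}")
```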

Keywords: Analytic Network Process (ANP), condition assessment, decision-making, manholes

Procedia PDF Downloads 348
15023 Carbohydrate Intake Estimation in Type I Diabetic Patients Described by UVA/Padova Model

Authors: David A. Padilla, Rodolfo Villamizar

Abstract:

In recent years, closed-loop control strategies have been developed in order to establish a healthy glucose profile in type 1 diabetes mellitus (T1DM) patients. However, the controller itself is unable to define a suitable reference trajectory for glucose. In this paper, a control strategy is proposed in which the shape of the reference trajectory is generated based on the amount of carbohydrates present during the digestive process, due to the effect of carbohydrate intake. Since no sensor exists to measure the amount of carbohydrates consumed, an estimator is proposed. Thus, this paper presents the entire process of designing a carbohydrate estimator, which allows the disturbance to be estimated for a model predictive controller (MPC) in a T1DM patient; the estimate is used to establish a reference profile and to improve the response of the controller by providing information on the ingested carbohydrates. The dynamics of the diabetic model are given by the equations of the UVA/Padova model from the T1DMS simulator. The system was developed and simulated in Simulink, taking into account the noise and limitations of the glucose control system actuators.

Keywords: estimation, glucose control, predictive controller, MPC, UVA/Padova

Procedia PDF Downloads 259
15022 Analyzing the Market Growth in Application Programming Interface Economy Using Time-Evolving Model

Authors: Hiroki Yoshikai, Shin’ichi Arakawa, Tetsuya Takine, Masayuki Murata

Abstract:

The API (Application Programming Interface) economy is expected to create new value by converting corporate services such as information processing and data provision into APIs and using these APIs to connect services. Understanding the dynamics of an API economy market under the strategies of its participants is crucial to fully maximizing the value of the API economy. To capture the behavior of a market in which the number of participants changes over time, we present a time-evolving market model for a platform in which API providers, who provide APIs to service providers, participate in addition to service providers and consumers. We then use the market model to clarify the role API providers play in expanding market participation and forming ecosystems. The results show that the platform with API providers increased the number of market participants by 67% and decreased the cost of developing services by 25% compared to the platform without API providers. Furthermore, during the expansion phase of the market, the profits of participants are found to be mostly the same when 70% of the revenue from consumers is distributed to service providers and API providers. It is also found that when the market is mature, the profits of the service providers and API providers decrease significantly due to competition between them, while the profit of the platform increases.

Keywords: API economy, ecosystem, platform, API providers

Procedia PDF Downloads 86
15021 Mathematical Modeling of Nonlinear Process of Assimilation

Authors: Temur Chilachava

Abstract:

This work proposes a new nonlinear mathematical model describing the assimilation of a people (population) speaking a less widespread language by two states with two different widespread languages, taking the demographic factor into account. Three subjects are considered in the model: the population and government institutions with the first widespread language, acting through state and administrative resources on the third population with the less widespread language for the purpose of its assimilation; the population and government institutions with the second widespread language, acting likewise on the third population; and the third population itself (possibly a small state formation or an autonomy), exposed to bilateral assimilation by the two more powerful states. We showed earlier that, when the demographic factor of all three subjects is zero, the population with the less widespread language is completely assimilated by the states with the two widespread languages, and the result of assimilation (the redistribution of the assimilated population) depends on the initial sizes and the technological and economic capabilities of the assimilating states. In the model considered here, which accounts for the demographic factor, a natural decrease in the populations of the assimilating states and a natural increase in the population undergoing bilateral assimilation are assumed. For certain ratios between the coefficients of natural population change of the assimilating states and the assimilation coefficients, two first integrals are obtained for the nonlinear system of three differential equations. The cases of two powerful states assimilating the population of a small state formation (autonomy), with different population sizes and with either identical or different economic and technological capabilities, are considered. It is shown that in the first case the problem reduces to a nonlinear system of two differential equations describing the classical "predator-victim" model; naturally, the role of the victim is played by the population undergoing assimilation and the role of the predator by the population of one of the assimilating states. In this case the population of the second assimilating state changes in proportion to the population of the first assimilator, the coefficient of proportionality being the ratio of the assimilators' populations at the initial time. In the second case the problem again reduces to a nonlinear system of two differential equations of the "predator-victim" type, with closed integral curves on the phase plane. In both cases there is no complete assimilation of the population with the less widespread language. The intervals of change of the population sizes of all three objects of the model are found. The considered mathematical models, which to some approximation can describe real situations involving real assimilating countries and state formations (autonomies or formations with unrecognized status) undergoing bilateral assimilation, show that the only possibility for the latter to avoid assimilation is natural demographic growth of their own population together with natural decrease in the populations of the assimilating states.
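
Since the abstract states that, under the stated parameter ratios, the three-equation model reduces to a classical predator-victim system, the sketch below integrates a generic Lotka-Volterra system of that type with invented coefficients, purely to illustrate the closed orbits referred to; it is not the paper's actual system.

```python
# Generic Lotka-Volterra ("predator - victim") integration with invented coefficients.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 0.9, 0.4, 0.8, 0.3   # invented coefficients

def lotka_volterra(t, z):
    x, y = z            # x: population undergoing assimilation (victim), y: assimilator (predator)
    return [alpha * x - beta * x * y, -gamma * y + delta * x * y]

sol = solve_ivp(lotka_volterra, (0, 60), [2.0, 1.0], max_step=0.05)
x, y = sol.y
print("victim population range:", round(x.min(), 2), "-", round(x.max(), 2))
print("predator population range:", round(y.min(), 2), "-", round(y.max(), 2))
```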

Keywords: nonlinear mathematical model, bilateral assimilation, demographic factor, first integrals, result of assimilation, intervals of change of number of the population

Procedia PDF Downloads 465
15020 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction

Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour

Abstract:

In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), installed and designed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters such as the weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the evaluation of the bit damage must be recorded very carefully to guide engineers in selecting bits for further planned wells. Drilling with a worn bit may cause severe damage to the bit, leading to cutter or cone losses at the bottom of the hole, where a fishing job will have to take place; all of this increases the operating cost. The main way to reduce the cost of the drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for the prediction of the rate of penetration based on drilling parameters, mostly relying on empirical approaches. One of the most commonly used is the Bourgoyne and Young model, in which the rate of penetration is estimated from the drilling parameters and a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool GPE (Version 4.2.0). Real data collected from similar formations (12 ¼ in. sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the equations and applied to nearby wells in the same field to predict bit wear.
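
The Bourgoyne and Young model is log-linear in its eight coefficients, ROP = exp(a₁ + Σⱼ₌₂..₈ aⱼxⱼ), so once the drilling-parameter functions x₂..x₈ are assembled per record, the coefficients can in principle be estimated by least squares on ln(ROP). The sketch below uses ordinary least squares on simulated data; the paper instead performs the estimation in gPROMS.

```python
# Least-squares estimation of the eight Bourgoyne & Young coefficients from ln(ROP).
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n)] + [rng.uniform(0, 1, n) for _ in range(7)])  # [1, x2..x8], simulated
a_true = np.array([1.2, -0.3, 0.5, 0.2, 0.4, -0.2, 0.1, -0.4])
rop = np.exp(X @ a_true + 0.05 * rng.standard_normal(n))                      # synthetic ROP, ft/hr

a_hat, *_ = np.linalg.lstsq(X, np.log(rop), rcond=None)
print("estimated coefficients a1..a8:", np.round(a_hat, 3))
```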

Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration

Procedia PDF Downloads 150
15019 Effect of Springback Analysis on Influences of the Steel Demoulding Using FEM

Authors: Byeong-Sam Kim, Jongmin Park

Abstract:

The present work is motivated by the industrial challenge of producing complex composite shapes cost-effectively. An anisotropic thermoviscoelastic model is analyzed with an implemented finite element solver. The stress relaxation is represented by a Prony series within the nonlinear thermoviscoelastic model. The process-induced internal stress relaxation during the cooling stage of the manufacturing cycle was evaluated from the springback phenomena observed in a part containing a cylindrical segment. The finite element results obtained from the present formulation are compared with experimental data, and the results show good correlation.
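
The Prony-series representation of stress relaxation mentioned above has the form E(t) = E∞ + Σᵢ Eᵢ exp(-t/τᵢ); the sketch below fits a two-term series to a synthetic relaxation curve, with the term count and parameter values assumed for illustration.

```python
# Fit a two-term Prony series E(t) = E_inf + E1*exp(-t/tau1) + E2*exp(-t/tau2) to relaxation data.
import numpy as np
from scipy.optimize import curve_fit

def prony(t, E_inf, E1, tau1, E2, tau2):
    return E_inf + E1 * np.exp(-t / tau1) + E2 * np.exp(-t / tau2)

t = np.linspace(0, 200, 80)                                   # s
E_data = prony(t, 1.5, 2.0, 5.0, 1.0, 60.0) * (
    1 + 0.01 * np.random.default_rng(0).standard_normal(80))  # synthetic measurements

popt, _ = curve_fit(prony, t, E_data, p0=[1.0, 1.0, 10.0, 1.0, 50.0], maxfev=10000)
print("fitted [E_inf, E1, tau1, E2, tau2] ~", np.round(popt, 2))
```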

Keywords: thermoviscoelastic, springback phenomena, FEM analysis, thermoplastic composite structures

Procedia PDF Downloads 357
15018 India’s Energy Transition, Pathways for Green Economy

Authors: B. Sudhakara Reddy

Abstract:

In the modern economy, energy is fundamental to virtually every product and service in use, and it has developed in dependence on abundant, easy-to-transform, polluting fossil fuels. On one hand, the increase in population and income levels, combined with increased per capita energy consumption, requires energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. Hence, it is important to decouple economic growth from environmental degradation, and the search for green energy involving affordable, low-carbon, and renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which not only require the development and use of new technologies but also changes in user behaviour, policy, and regulation. The scenarios developed are a baseline business-as-usual (BAU) scenario and a green energy (GE) scenario. The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India's population is projected to grow by 23% during 2010-2030, reaching 1.47 billion. Real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion or $3,586 per capita (base year 2010). Due to the increase in population and GDP, primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036. Carbon emissions are projected to increase by 2.5 times from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons per annum; however, the carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. As per the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 30% over BAU. The penetration of renewable energy resources will reduce the total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The study develops new 'pathways out of poverty' by creating more than 10 million jobs and thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies; the challenges to this path lie in the socio-economic-political domain. However, to attain a green economy, an appropriate policy package should be in place, which will be critical in determining the kind of investments needed and the incidence of costs and benefits. These results provide a basis for policy discussions on the investments, policies, and incentives to be put in place by national and local governments.

Keywords: energy, renewables, green technology, scenario

Procedia PDF Downloads 245
15017 Parasitic Capacitance Modeling in Pulse Transformer Using FEA

Authors: D. Habibinia, M. R. Feyzi

Abstract:

Nowadays, specialized software is vastly used to verify the performance of an electric machine prototype by evaluating a model of the system. These models mainly consist of electrical parameters such as inductances and resistances. However, when the operating frequency of the device is above one kHz, the effect of parasitic capacitances grows significantly. In this paper, a software-based procedure is introduced to model these capacitances within the electromagnetic simulation of the device. The case study is a high-frequency high-voltage pulse transformer. The Finite Element Analysis (FEA) software with coupled field analysis is used in this method.

Keywords: finite element analysis, parasitic capacitance, pulse transformer, high frequency

Procedia PDF Downloads 512
15016 Therapeutic Management of Toxocara canis Induced Hepatitis in Dogs

Authors: Milind D. Meshram

Abstract:

Ascarids are the most frequent worm parasites of dogs and cats. Two species commonly infect dogs: Toxocara canis and Toxascaris leonina. Adult roundworms live in the stomach and intestines and can grow to 7 inches (18 cm) long. A female may lay 200,000 eggs in a day. The eggs are protected by a hard shell; they are extremely hardy and can live for months or years in the soil. A dog aged about 6 years from Satara was referred to the Teaching Veterinary Clinical Complex (TVCC) with a complaint of abdominal pain, anorexia, loss of condition, a dull body coat, and pale mucous membranes. Clinical examination revealed anaemia, and palpation of the abdomen revealed enlargement of the liver, a slimy feel of the intestinal loops, and diarrhoea.

Keywords: therapeutic management, Toxocara canis, induced hepatitis, dogs

Procedia PDF Downloads 586
15015 A Robust Model Predictive Control for a Photovoltaic Pumping System Subject to Actuator Saturation Nonlinearity and Parameter Uncertainties: A Linear Matrix Inequality Approach

Authors: Sofiane Bououden, Ilyes Boulkaibet

Abstract:

In this paper, a robust model predictive controller (RMPC) for an uncertain nonlinear system under actuator saturation is designed to control a DC-DC buck converter in a PV pumping application, where the system is subject to actuator saturation and parameter uncertainties. The considered nonlinear system contains a linear constant part perturbed by an additive state-dependent nonlinear term. Based on the saturating actuator property, an appropriate linear feedback control law is constructed and used to minimize an infinite horizon cost function within the framework of linear matrix inequalities. The proposed approach successfully provides a solution to the optimization problem that stabilizes the nonlinear plant. Furthermore, sufficient conditions for the existence of the proposed controller guarantee the robust stability of the system in the presence of polytopic uncertainties. In addition, the simulation results demonstrate the efficiency of the proposed control scheme.
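
As a small illustration of the linear-matrix-inequality machinery such controllers rely on, the sketch below solves a basic Lyapunov feasibility problem (find P ≻ 0 with AᵀP + PA ≺ 0) with cvxpy; it is not the paper's full RMPC/saturation synthesis, and the system matrix is an invented stable example.

```python
# Lyapunov LMI feasibility with cvxpy: find P > 0 such that A^T P + P A < 0.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                      # invented stable dynamics

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2),                    # P positive definite (>= I)
               A.T @ P + P @ A << -1e-3 * np.eye(2)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print("status:", prob.status)
print("P =\n", np.round(P.value, 3))
```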

Keywords: PV pumping system, DC-DC buck converter, robust model predictive controller, nonlinear system, actuator saturation, linear matrix inequality

Procedia PDF Downloads 177
15014 A Case Study to Observe How Students’ Perception of the Possibility of Success Impacts Their Performance in Summative Exams

Authors: Rochelle Elva

Abstract:

Faculty in higher education today are faced with the challenge of convincing their students of the importance of learning and the mastery of skills. This is because most students often have a single motivation: to get high grades. If it appears that this goal will not be met, they lose their motivation, and their academic efforts wane. This is true even for students in the competitive fields of STEM, including computer science majors. As educators, we have to understand our students and leverage what motivates them to achieve our learning outcomes. This paper presents a case study that utilizes cognitive psychology's Expectancy Value Theory and Motivation Theory to investigate the effect of sustained expectancy for success on students' learning outcomes. In our case study, we explore how students' motivation and persistence in their academic efforts are impacted by providing them with an unexpected possible path to success that continues to the end of the semester. The approach was tested in an undergraduate computer science course with n = 56. The results of the study indicate that when presented with a real possibility of success, despite existing low grades, both low- and high-scoring students persisted in their efforts to improve their performance. Their final grades were, on average, one place higher on the +/- letter grade scale, with some students scoring as high as three places above their predicted grade.

Keywords: expectancy for success and persistence, motivation and performance, computer science education, motivation and performance in computer science

Procedia PDF Downloads 73
15013 Importance of Human Capital Development and Management in Industries

Authors: Birce Boga Bakirli

Abstract:

In this paper, we investigate ideas on human capital development and management in industries. We structured a model to gather data from interviews conducted with workers, specialists, and owners of companies. Different aspects of the situation emerge from these interviews, and we used this information to model the benefits from the perspectives of both business owners and workers. These are modelled as a bi-level programming problem. Several instances of the generic cases are solved. The results show the importance of education, within and outside the company, for workers, and its returns for the company.

Keywords: bi-level programming, corporate strategy, cost tradeoffs, human capital, mixed integer programming, Stackelberg game, supplier relations, strategic planning

Procedia PDF Downloads 349
15012 Killing for the Great Peace: An Internal Perspective on the Anti-Manchu Theme in the Taiping Movement

Authors: Zihao He

Abstract:

The majority of existing studies on the Taiping Movement (1851-1864) view its anti-Manchu attitudes as a nationalist agenda: the Taiping aimed to revolt against the Manchu government and establish a new political regime. To explain these aggressive and violent attitudes towards the Manchus, these studies mainly point to socio-economic factors and stress the status of "being deprived". Even the 'demon-slaying' narrative by which the Taiping dehumanized the Manchus tends to be viewed as a "religious tool" for achieving their political, nationalist aim. This paper argues that such analyses of Taiping anti-Manchu attitudes and behavior take an external angle and have two major problems. Firstly, they distinguish "religion" from the "nationalist" or "political", focusing on the "political" nature of the movement; "religion" and the religious experience within the Taiping are largely ignored. This paper argues that there was no separable and independent "religion" in the Taiping Movement standing in opposition to secular, nationalist politics. Secondly, these analyses hold an external perspective on the Taiping's anti-Manchu agenda, in which demonizing and killing Manchus are viewed as purely political actions. By contrast, this paper focuses on the internal perspective of the anti-Manchu narratives in the Taiping Movement. The method is mainly textual analysis, focusing on the official documents, edicts, and proclamations of the Taiping Movement, and it reads the writings of the Taiping as a coherent narrative and rhetoric that was attractive and convincing for its followers. As for the main findings, firstly, the internal and external perspectives on anti-Manchu violence differ. Externally, violence is viewed as a tool and a necessary step towards the political goal. Internally, however, in the Taiping's writing, violence is a result of Godlessness, which would be resolved once faith in God was restored in China. Within a framework of universal love among human beings as sons and daughters of the Heavenly Father, in which killing was forbidden, the Taiping excluded the Manchus from the family of human beings and demonized them. "Demon-slaying" was not violence; it was constructed as a necessary step towards achieving the Great Peace. Moreover, the Taiping's anti-Manchu violence was not merely "political". Rather, the category "religion" and its binary opposite, the "secular", is not suitable for the Taiping. A key point related to this argument is the revolutionary violence against the Manchu government, which inherited the traditional "Heavenly Mandate" model: from an internal, theological perspective, anti-Manchu action was ordained and commanded by the Heavenly Father, and the Manchus, as a regime, stood as a hindrance on the path toward God. Besides, the Manchus were not only viewed as a regime; they were also "demons". Therefore, the paper examines how the Manchus were dehumanized in the Taiping's writings and placed outside the scope of nonviolence and love. "Manchu as a regime" and "Manchu as demons" stand in a dynamic relationship: as a regime, the Manchu government was preventing the Chinese people from worshipping the Heavenly Father, so it was demonized; and once they were demons, killing Manchus during the revolt was justified and was not viewed as contradicting the universal love among human beings.

Keywords: anti-Manchu, demon-slaying, heavenly mandate, religion and violence, the Taiping movement

Procedia PDF Downloads 65