Search results for: computable general equilibrium model
16460 Remote Criminal Proceedings as Implication to Rethink the Principles of Criminal Procedure
Authors: Inga Žukovaitė
Abstract:
This paper presents postdoctoral research on remote criminal proceedings in court. In a period when most countries have introduced the possibility of remote criminal proceedings into their procedural laws, it is possible not only to identify the weaknesses and strengths of the legal regulation but also to assess the effectiveness of the instrument and to develop an approach to the process. The example of some countries (for example, Italy) shows, on the one hand, that a criminal procedure based on orality and immediacy does not lend itself to easy modifications that pose even a slight threat of devaluing these principles in a society with well-established traditions of this procedure. On the other hand, such strong opposition and criticism prompt the question of whether we face the possibility of rethinking the traditional ways of understanding procedural safeguards in order to preserve their essence, not by devaluing their traditional package but by seeking new components to replace or compensate for the so-called “loss” of safeguards. Reflection on technological progress in the field of criminal procedural law indicates the need to rethink, on the basis of fundamental procedural principles, the safeguards that can replace or compensate for those put in crisis by the intervention of technological progress. Discussions in academic doctrine on the impact of technological interventions on the proceedings as such, or on the limits of such interventions, refer to the principles of criminal procedure as a point of reference. Scholarly debate still addresses the issue of whether the court will gradually become a mere site for the exercise of penal power, with the resultant consequence of deforming the procedure itself as a physical ritual.
In this context, this work seeks to illustrate the relationship between remote criminal proceedings in court and the principle of immediacy, whose conception depends on the model of criminal procedure applied (inquisitorial or adversarial), and to assess the challenges posed for legal regulation by the interaction of technological progress with the principles of criminal procedure. The main hypothesis to be tested is that the acceptance of remote proceedings is directly linked to the prevailing model of criminal procedure: the more the criminal process draws on the inquisitorial model, the more acceptable a remote criminal trial is; conversely, the more the criminal process is based on an adversarial model, the more a remote criminal process is seen as incompatible with the principle of immediacy. To achieve this goal, the following tasks are set: to identify whether assessments of remote proceedings against the immediacy principle differ between the adversarial and inquisitorial models, and to analyse the main aspects of the regulation of remote criminal proceedings based on the examples of different countries (for example, Lithuania, Italy, etc.).
Keywords: remote criminal proceedings, principle of orality, principle of immediacy, adversarial model, inquisitorial model
Procedia PDF Downloads 68
16459 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract:
This study introduces the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the Inverse Distance Weighting model (IDW). ML-IDW leverages ML-NNs' processing capabilities, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW, comparing its performance against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. The results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, exhibiting lower mean square error in regression and higher F1 score in classification.
Keywords: deep learning, multi-layer neural networks, gradient descent, spatial interpolation, inverse distance weighting
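As a rough illustration of the anisotropic IDW building block described above, the sketch below implements inverse distance weighting with a per-axis scale; the coarse grid-search "fitting" is a stand-in for the paper's gradient-descent training of the full multi-layer model, and all names are ours, not the authors':

```python
def idw_predict(x_query, xs, ys, scale, power=2.0, eps=1e-12):
    """Anisotropic inverse distance weighting: the per-axis 'scale' stretches
    distances, letting the model express directional (anisotropic) dependence."""
    num, den = 0.0, 0.0
    for xi, yi in zip(xs, ys):
        d2 = sum((s * (a - b)) ** 2 for s, a, b in zip(scale, x_query, xi))
        w = 1.0 / (d2 ** (power / 2) + eps)
        num += w * yi
        den += w
    return num / den

def fit_scale(xs, ys, candidates):
    """Crude stand-in for training: pick the anisotropy scale with the lowest
    leave-one-out squared error (the paper uses gradient descent instead)."""
    def loo_err(scale):
        err = 0.0
        for i in range(len(xs)):
            rest_x = xs[:i] + xs[i + 1:]
            rest_y = ys[:i] + ys[i + 1:]
            err += (idw_predict(xs[i], rest_x, rest_y, scale) - ys[i]) ** 2
        return err
    return min(candidates, key=loo_err)
```

For a surface that varies only along the first axis, the leave-one-out error favors a scale that shrinks the second axis, which is the anisotropy-learning effect the abstract refers to.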
Procedia PDF Downloads 52
16458 Automating 2D CAD to 3D Model Generation Process: Wall Pop-Ups
Authors: Mohit Gupta, Chialing Wei, Thomas Czerniawski
Abstract:
In this paper, we build a neural network that detects walls on 2D sheets and subsequently creates a 3D model in Revit using Dynamo. The training set includes 3,500 labeled images, and the detection algorithm used is YOLO. Typically, engineers and designers spend considerable time and human effort converting 2D CAD drawings into 3D models. This paper contributes to automating 3D wall modeling by (1) detecting walls in 2D CAD drawings and generating 3D pop-ups in Revit, and (2) saving designers the time spent drafting elements such as walls from 2D CAD into a 3D representation. The YOLO object detection algorithm is used for wall detection and localization. The neural network is trained on 3,500 labeled images of size 256x256x3. Dynamo is then interfaced with the output of the neural network to pop up 3D walls in Revit. The research uses deep learning and artificial intelligence to automate the generation of 3D walls without requiring humans to model them manually, thereby saving time, human effort, and money.
Keywords: neural networks, YOLO, 2D to 3D transformation, CAD object detection
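The hand-off from detector output to wall geometry can be sketched as below; the function names, the assumption that a wall reads as a long thin bounding box, and the pixel-to-model scaling are illustrative, not the paper's actual Dynamo script:

```python
def bbox_to_wall_line(bbox, sheet_scale, origin=(0.0, 0.0)):
    """Convert a detected wall bounding box (xmin, ymin, xmax, ymax, in pixels)
    into a wall centerline (start/end points in model units) plus a thickness.
    A wall appears as a long thin box, so the centerline runs along the
    box's longer side."""
    xmin, ymin, xmax, ymax = bbox
    w, h = xmax - xmin, ymax - ymin
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    if w >= h:   # horizontal wall
        start, end, thickness = (xmin, cy), (xmax, cy), h
    else:        # vertical wall
        start, end, thickness = (cx, ymin), (cx, ymax), w

    def to_model(p):
        return (origin[0] + p[0] * sheet_scale, origin[1] + p[1] * sheet_scale)

    return to_model(start), to_model(end), thickness * sheet_scale
```

A downstream Dynamo graph would consume the start/end points to place `Wall.ByCurveAndHeight`-style elements; that call name is an assumption about the workflow, not a detail from the abstract.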
Procedia PDF Downloads 144
16457 Bathymetric Change of Brahmaputra River and Its Influence on Flooding Scenario
Authors: Arup Kumar Sarma, Rohan Kar
Abstract:
The development of a physical model of a river like the Brahmaputra, which rises in the Chema Yundung glacier of Tibet and flows through India and Bangladesh, is expensive and time-consuming. With the advancement of computational techniques, mathematical modeling has found wide application. MIKE 21C is one such commercial software package, developed by the Danish Hydraulic Institute (DHI): a depth-averaged, two-dimensional curvilinear finite-difference model capable of simulating hydrodynamic and morphological processes, with some limitations. The main purposes of this study are to generate the bathymetry of the River Brahmaputra from “Sadia” upstream to “Dhubri” downstream, a stretch of approximately 695 km, for four different years (1957, 1971, 1977, and 1981) over the grid generated in MIKE 21C, and to carry out hydrodynamic simulations for these years to analyze the effect of bathymetry change on surface water elevation. The study establishes that bathymetric change can influence the flood level significantly in some river reaches, so modifying or regularly updating the bathymetry is essential for reliable flood routing in alluvial rivers.
Keywords: bathymetry, Brahmaputra River, hydrodynamic model, surface water elevation
Procedia PDF Downloads 455
16456 Supersymmetry versus Compositeness: 2-Higgs Doublet Models Tell the Story
Authors: S. De Curtis, L. Delle Rose, S. Moretti, K. Yagyu
Abstract:
Supersymmetry and compositeness are the two prevalent paradigms providing both a solution to the hierarchy problem and motivation for a light Higgs boson state. An open door towards the solution is found in the context of 2-Higgs Doublet Models (2HDMs), which are necessary in supersymmetry and arise naturally in compositeness in order to enable Electro-Weak Symmetry Breaking. In composite scenarios, the two isospin doublets arise as pseudo Nambu-Goldstone bosons from the breaking of SO(6). By calculating the Higgs potential at the one-loop level through the Coleman-Weinberg mechanism, from the explicit breaking of the global symmetry induced by the partial compositeness of fermions and gauge bosons, we derive the phenomenological properties of the Higgs states and highlight the main signatures of this Composite 2-Higgs Doublet Model at the Large Hadron Collider. These include modifications to the SM-like Higgs couplings as well as production and decay channels of heavier Higgs bosons. We contrast the properties of this composite scenario with the well-known ones established in supersymmetry, the MSSM being the best-known example. We show how 2HDM spectra of masses and couplings accessible at the Large Hadron Collider may allow one to distinguish between the two paradigms.
Keywords: beyond the standard model, composite Higgs, supersymmetry, Two-Higgs Doublet Model
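The one-loop potential invoked above follows the standard Coleman-Weinberg form; as a schematic reminder (the field-dependent masses m_i(h), degrees of freedom n_i, and scheme constants c_i are model-dependent quantities, not details taken from this abstract):

```latex
V_{1}(h) \;=\; \frac{1}{64\pi^{2}} \sum_{i} (-1)^{2s_{i}}\, n_{i}\, m_{i}^{4}(h)
\left[ \ln\!\frac{m_{i}^{2}(h)}{\mu^{2}} - c_{i} \right]
```

where s_i is the spin of state i, μ is the renormalization scale, and in the MS-bar scheme c_i = 3/2 for scalars and fermions and 5/6 for gauge bosons; in the composite model the explicit-breaking spurions enter through the m_i(h).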
Procedia PDF Downloads 126
16455 Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor
Authors: Yash Jain
Abstract:
The US Centers for Disease Control has recommended wearing masks to slow the spread of the virus. This research uses a video feed from a camera to classify in real time whether a person is wearing a mask correctly, wearing a mask incorrectly, or not wearing a mask at all. A mask detection network was trained using two distinct datasets from the open-source website Kaggle: the first, titled 'Face Mask Detection', was used to train the two-stage model, while the second, titled 'Face Mask Dataset', provided the data in YOLO format so that the TinyYoloV3 model could be trained. Based on the Kaggle data, two machine learning models were implemented and trained: a real-time TinyYoloV3 model and a two-stage neural network classifier. The two-stage classifier first identifies the distinct faces within the image and then classifies the state of the mask on each face: worn correctly, worn incorrectly, or absent. TinyYoloV3 was used for the live feed as well as for comparison against the two-stage classifier, and was trained using the darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the TinyYoloV3 real-time detector attained an mAP of 59%. Overall, both models were able to correctly classify the no-mask, mask, and incorrectly worn mask scenarios.
Keywords: datasets, classifier, mask-detection, real-time, TinyYoloV3, two-stage neural network classifier
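The two-stage pipeline described above can be sketched as follows, with the trained face detector and mask-state classifier replaced by stand-in callables (all names here are hypothetical, not the paper's code):

```python
def classify_masks(image, detect_faces, classify_state):
    """Two-stage mask monitor: stage 1 (`detect_faces`) returns face boxes
    (x0, y0, x1, y1); stage 2 (`classify_state`) labels each cropped face
    'mask', 'incorrect', or 'no_mask'.  The two callables stand in for the
    trained neural networks."""
    results = []
    for box in detect_faces(image):
        x0, y0, x1, y1 = box
        crop = [row[x0:x1] for row in image[y0:y1]]   # crop the face region
        results.append((box, classify_state(crop)))
    return results
```

In the real system, stage 1 would be a face detection network and stage 2 a CNN classifier; splitting them keeps each model's task simple, which is the usual rationale for a two-stage design.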
Procedia PDF Downloads 163
16454 Effect of Installation Method on the Ratio of Tensile to Compressive Shaft Capacity of Piles in Dense Sand
Authors: A. C. Galvis-Castro, R. D. Tovar, R. Salgado, M. Prezzi
Abstract:
It is generally accepted that the shaft capacity of piles in sand is lower for tensile loading than for compressive loading. So far, very little attention has been paid to the influence of the installation method on the tensile-to-compressive shaft capacity ratio. The objective of this paper is to analyze the effect of the installation method on the tensile-to-compressive shaft capacity of piles in dense sand, as observed in tests on half-circular model piles in a half-circular calibration chamber with digital image correlation (DIC) capability. Model piles are either monotonically jacked, jacked with multiple strokes, or pre-installed into the dense sand samples. Digital images of the model pile and sand are taken during both the installation and loading stages of each test and processed using the DIC technique to obtain the soil displacement and strain fields. The study provides key insights into the mobilization of shaft resistance in tensile and compressive loading for both displacement and non-displacement piles.
Keywords: digital image correlation, piles, sand, shaft resistance
Procedia PDF Downloads 272
16453 Reaction Kinetics of Biodiesel Production from Refined Cottonseed Oil Using Calcium Oxide
Authors: Ude N. Callistus, Amulu F. Ndidi, Onukwuli D. Okechukwu, Amulu E. Patrick
Abstract:
A power-law approximation was used in this study to evaluate the reaction orders of the calcium oxide (CaO) catalyzed transesterification of refined cottonseed oil with methanol. The kinetics study was carried out at temperatures of 45, 55, and 65 °C. The kinetic parameters obtained at 65 °C, a reaction order of 2.02 and a rate constant of 2.8 h⁻¹·g⁻¹ (per gram of catalyst), best fitted the kinetic model. The activation energy Ea obtained was 127.744 kJ/mol. The results indicate that the transesterification of refined cottonseed oil over a calcium oxide catalyst is approximately a second-order reaction.
Keywords: refined cottonseed oil, transesterification, CaO, heterogeneous catalysts, kinetic model
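As a sketch of how an activation energy of this magnitude relates to rate constants at two temperatures, the two-point Arrhenius estimate can be written as below; the 55 °C rate constant is a hypothetical value back-computed from the reported Ea, not a measured one:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def activation_energy(k1, T1, k2, T2):
    """Two-point Arrhenius estimate: from k = A exp(-Ea/(R T)) it follows that
    ln(k2/k1) = -(Ea/R) (1/T2 - 1/T1), solved here for Ea (in J/mol)."""
    return -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)

# 65 °C rate constant from the abstract; the 55 °C value is hypothetical,
# chosen to reproduce the reported Ea of ~127.744 kJ/mol.
T1, T2 = 328.15, 338.15   # 55 °C and 65 °C in kelvin
k2 = 2.8                  # h^-1 g^-1 (per gram of catalyst) at 65 °C
k1 = k2 * math.exp(-127744.0 / R * (1.0 / T1 - 1.0 / T2))
```

In the actual study, Ea would come from the slope of an ln(k) versus 1/T regression over all three temperatures rather than from two points.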
Procedia PDF Downloads 543
16452 Tool for Fast Detection of Java Code Snippets
Authors: Tomáš Bublík, Miroslav Virius
Abstract:
This paper presents general results on the Java source code snippet detection problem. We propose a tool that uses graph and subgraph isomorphism detection. A number of solutions to these tasks have been proposed in the literature; however, although these solutions are fast, they compare only constant, static trees. Our solution allows an input sample to be entered dynamically, using the Scripthon language, while preserving acceptable speed. We use several optimizations to achieve a very low number of comparisons during the matching algorithm.
Keywords: AST, Java, tree matching, Scripthon, source code recognition
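A minimal flavor of structural (rather than textual) snippet matching, in the spirit of the tree comparison described above but using Python's ast module instead of Java ASTs and Scripthon patterns:

```python
import ast

def node_shape(node):
    """Reduce an AST node to its type plus child shapes, ignoring identifier
    names and constant values, so structurally identical snippets compare equal."""
    return (type(node).__name__,
            tuple(node_shape(c) for c in ast.iter_child_nodes(node)))

def contains_snippet(source, snippet):
    """True if the snippet's statement structure occurs anywhere in `source`."""
    pattern = node_shape(ast.parse(snippet).body[0])
    return any(node_shape(n) == pattern for n in ast.walk(ast.parse(source)))
```

The paper's tool goes further (dynamic patterns, subgraph isomorphism, pruning optimizations); this sketch only shows why type-shape comparison matches a renamed but structurally identical loop.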
Procedia PDF Downloads 425
16451 Carbon Pool Assessment in Community Forests, Nepal
Authors: Medani Prasad Rijal
Abstract:
A forest is both a factory and a product: it supplies tangible and intangible goods and services, timber, fuelwood, fodder, grass, and leaf litter as well as non-timber edible goods and medicinal and aromatic products, and additionally provides environmental services. These environmental services are of local, national, or even global importance. In Nepal, more than 19 thousand community forests provide environmental services at less economic benefit than their actual potential. There is a risk that the cost of managing those forests will exceed the benefits and that the forests will become open-access resources in the future. Most environmental goods and services have no markets, and hence no prices at which they are available to consumers; therefore, valuing these goods and services, establishing a payment mechanism for them, and ensuring the benefit reaches the community is relevant at both local and global scales. There are a few examples of domestic carbon trading to meet country-wide emission goals. In this context, the study aims to explore public attitudes towards carbon offsetting and responsibility towards service providers. This study helps promote awareness of environmental services among the general public, service providers, and community forests. The research unveils the carbon pool scenario in community forests and the willingness to pay for carbon offsetting of people who consume more energy and emit relatively more carbon into the atmosphere than the general public. The study assessed the carbon pool status in two community forests and valued the carbon service from community forests through willingness to pay in Dharan municipality in eastern Nepal. In the two community forests, carbon pools were assessed following the “Forest Carbon Inventory Guideline 2010” prescribed by the Ministry of Forests and Soil Conservation, Nepal.
The analysis recorded a carbon density of 103.58 t C/ha in the intensively managed area of Hokse CF, with a total carbon stock of 6,173.30 t. Similarly, in Hariyali CF the carbon density was recorded as 251.72 Mg C/ha, and the total carbon stock of the intensively managed blocks is 35,839.62 t.
Keywords: carbon, offsetting, sequestration, valuation, willingness to pay
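The relation between per-hectare carbon density and block-level stock used above is simple arithmetic; a sketch (the implied block area is back-calculated from the reported figures, since the abstract does not state it):

```python
def total_stock(density_t_per_ha, area_ha):
    """Total carbon stock (t) = carbon density (t C/ha) x block area (ha)."""
    return density_t_per_ha * area_ha

def co2_equivalent(carbon_t):
    """Convert carbon mass to CO2 equivalent via the molar-mass ratio 44/12,
    the figure a payment-for-offsetting scheme would actually price."""
    return carbon_t * 44.0 / 12.0
```

With the Hokse CF numbers, a stock of 6,173.30 t at 103.58 t C/ha implies an intensively managed area of roughly 60 ha.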
Procedia PDF Downloads 355
16450 Thick Data Analytics for Learning Cataract Severity: A Triplet Loss Siamese Neural Network Model
Authors: Jinan Fiaidhi, Sabah Mohammed
Abstract:
Diagnosing cataract severity is an important factor in deciding whether to undertake surgery. It is usually done by an ophthalmologist, or by taking fundus photographs that the ophthalmologist then examines. This paper investigates a Siamese neural network that can be trained with a small number of anchor samples to score cataract severity. The model is based on a triplet loss function that encodes the ophthalmologist's expertise in rating positive and negative anchors against a specific cataract scaling system. This approach, which captures the ophthalmologist's heuristics, is generally called the thick data approach: a kind of machine learning that learns from only a few examples (few-shot learning). Clinical relevance: the lens of the eye is mostly made up of water and proteins. A cataract occurs when these proteins start to clump together and block light, impairing vision. This research employs thick data machine learning techniques to rate the severity of cataracts using a Siamese neural network.
Keywords: thick data analytics, siamese neural network, triplet-loss model, few shot learning
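The triplet loss at the heart of such a model has the standard hinge form; a minimal sketch (Euclidean embedding distance and a margin of 1.0 are our assumptions, not values from the paper):

```python
import math

def euclid(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge form of the triplet loss: zero once the positive sits at least
    `margin` closer to the anchor than the negative, otherwise the shortfall.
    Training the Siamese network on ophthalmologist-rated (anchor, positive,
    negative) triples shapes the embedding so that severity grades separate."""
    return max(0.0, euclid(anchor, positive) - euclid(anchor, negative) + margin)
```

At inference time, a new fundus image is scored by embedding it and comparing distances to the few labeled anchors per severity grade, which is what makes the approach few-shot.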
Procedia PDF Downloads 111
16449 Thermal Instability in Rivlin-Ericksen Elastico-Viscous Nanofluid with Convective Boundary Condition: Effect of Vertical Throughflow
Authors: Shivani Saini
Abstract:
The effect of vertical throughflow on the onset of convection in a Rivlin-Ericksen elastico-viscous nanofluid with a convective boundary condition is investigated. The flow is simulated with a modified Darcy model under the assumption that the nanoparticle volume fraction is not actively managed on the boundaries. The heat conservation equation is formulated by introducing the convective term of the nanoparticle flux. A linear stability analysis based on normal modes is performed, and an approximate solution of the eigenvalue problems is obtained using the Galerkin weighted residual method. The dependence of the Rayleigh number on various viscous and nanofluid parameters is investigated. It is found that the throughflow and nanofluid parameters hasten the convection, while the capacity ratio, kinematic viscoelasticity, and Vadasz number do not govern the stationary convection. With the convective component of the nanoparticle flux, the critical wave number is a function of the nanofluid parameters as well as the throughflow parameter. The obtained solution provides important physical insight into the behavior of this model.
Keywords: Darcy model, nanofluid, porous layer, throughflow
Procedia PDF Downloads 137
16448 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control
Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol
Abstract:
Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex because several biological processes take place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating a complex model of activated sludge kinetics was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin model; the amount of mixed liquor suspended solids in the oxidation ditch, aeration rates, and recycle rates were adjusted accordingly. Experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time over extended periods. The lumped dynamic model development was coupled with computational fluid dynamics (CFD) modeling of key units such as the oxidation ditches. Several CFD models incorporating nitrification-denitrification kinetics as well as hydrodynamics were developed and are being tested using the ANSYS Fluent software platform. These realistic and verified models, developed using BioWin and ANSYS, were used to plan operating policies and control strategies for the biological wastewater plant in advance, allowing regulatory compliance at minimum operational cost. With a little tuning, these models can be used for other biological wastewater treatment plants as well.
The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the discharge limits of the plant. With the help of this model, we were also able to identify the key kinetic and stoichiometric parameters that matter most when modeling biological wastewater treatment plants. Another important finding was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentrations of parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model revealed, for example, that dead-zone formation increases along the length of the oxidation ditches compared with the regions near the aerators. These profiles were also very useful in studying mixing patterns and the effects of aerator speed and baffles, which in turn helps optimize plant performance.
Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics
Procedia PDF Downloads 210
16447 Supply Air Pressure Control of HVAC System Using MPC Controller
Authors: P. Javid, A. Aeenmehr, J. Taghavifar
Abstract:
In this paper, the supply air pressure of an HVAC system is modeled as a second-order transfer function plus dead time. In an HVAC system, the desired input undergoes step changes, and the output of the proposed control system should follow the input reference, so a model-based predictive controller is designed. The closed-loop control system is implemented in MATLAB, and simulation results are provided. The simulation results show that model-based predictive control is able to control the plant properly.
Keywords: air conditioning system, GPC, dead time, air supply control
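The plant model itself, a second-order transfer function plus dead time, can be sketched as a discrete-time simulation that a predictive controller would run internally; the gain and time constants below are placeholders, not the paper's identified values:

```python
def simulate_sopdt(K, tau1, tau2, theta, u, dt):
    """Euler simulation of K / ((tau1 s + 1)(tau2 s + 1)) * e^{-theta s}:
    two first-order lags in series, fed by the input delayed `theta` seconds."""
    delay = int(round(theta / dt))   # dead time in samples
    x1 = x2 = 0.0
    y = []
    for k in range(len(u)):
        ud = u[k - delay] if k >= delay else 0.0
        x1 += dt * (K * ud - x1) / tau1   # first lag
        x2 += dt * (x1 - x2) / tau2      # second lag
        y.append(x2)
    return y
```

An MPC/GPC controller uses exactly this kind of model to predict the pressure response over a horizon and pick the input sequence minimizing tracking error; the dead time is what makes prediction (rather than pure feedback) valuable.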
Procedia PDF Downloads 527
16446 Maackiain Attenuates Alpha-Synuclein Accumulation and Improves 6-OHDA-Induced Dopaminergic Neuron Degeneration in Parkinson's Disease Animal Model
Authors: Shao-Hsuan Chien, Ju-Hui Fu
Abstract:
Parkinson’s disease (PD) is a degenerative disorder of the central nervous system characterized by progressive loss of dopaminergic neurons in the substantia nigra pars compacta and by motor impairment. Aggregation of α-synuclein in neuronal cells plays a key role in this disease. At present, therapeutics for PD provide moderate symptomatic benefit but are not able to delay the development of the disease. Current efforts in PD treatment aim to identify new drugs that slow or arrest the progressive course of PD by interfering with a disease-specific pathogenetic process in PD patients. Maackiain is a bioactive compound isolated from the roots of the Chinese herb Sophora flavescens. The purpose of the present study was to assess the potential of maackiain to ameliorate PD in Caenorhabditis elegans models. Our data reveal that maackiain prevents α-synuclein accumulation in the transgenic Caenorhabditis elegans model and also improves dopaminergic neuron degeneration, food-sensing behavior, and lifespan in the 6-hydroxydopamine-induced Caenorhabditis elegans model, indicating its potential as a candidate antiparkinsonian drug.
Keywords: maackiain, Parkinson’s disease, dopaminergic neurons, α-Synuclein
Procedia PDF Downloads 199
16445 Process Mining as an Ecosystem Platform to Mitigate a Deficiency of Processes Modelling
Authors: Yusra Abdulsalam Alqamati, Ahmed Alkilany
Abstract:
The teaching staff is a distinct group whose impact on the educational process plays an important role in enhancing the quality of academic education. To improve the management effectiveness of the academy, the Teaching Staff Management System (TSMS) proposes that all teacher processes be digitized. Although the BPMN approach can accurately describe processes, it lacks a clear picture of the process flow map, something the process mining approach provides by extracting information from event logs for discovery, monitoring, and model enhancement. Therefore, the two methodologies were combined to create the most accurate representation of system operations: extracting data records, mining the processes, recreating them in the form of a Petri net, and then generating a BPMN model for a more in-depth view of the process flow. Additionally, the TSMS processes will be orchestrated to handle all requests within a guaranteed short time thanks to the integration of the Google Cloud Platform (GCP) and the BPM engine, allowing business owners to take part throughout the entire TSMS project development lifecycle.
Keywords: process mining, BPM, business process model and notation, Petri net, teaching staff, Google Cloud Platform
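The discovery step of process mining can be illustrated with its simplest artifact, the directly-follows graph, which Petri-net discovery algorithms such as the inductive miner build upon; the event log here is a made-up example, not TSMS data:

```python
from collections import Counter

def directly_follows(log):
    """Discover the directly-follows graph from an event log: count how often
    activity b immediately follows activity a across all traces.  Each trace
    is the ordered list of activities for one case (e.g. one staff request)."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg
```

From these counts, a discovery algorithm infers sequence, choice, and concurrency relations and assembles the Petri net that the abstract describes converting onward into BPMN.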
Procedia PDF Downloads 142
16444 Designing Price Stability Model of Red Cayenne Pepper Price in Wonogiri District, Centre Java, Using ARCH/GARCH Method
Authors: Fauzia Dianawati, Riska W. Purnomo
Abstract:
The food and agricultural sector is the biggest contributor to inflation in Indonesia. In Wonogiri district in particular, red cayenne pepper was the biggest contributor to inflation in 2016. National statistics show that over the last five years red cayenne pepper has had the highest average level of fluctuation among all commodities. Several factors, such as the supply chain, price disparity, production quantity, crop failure, and the oil price, are possible causes of the high volatility of the red cayenne pepper price. This research therefore tries to find the key factor causing the fluctuation by using the ARCH/GARCH method, which can accommodate the presence of heteroscedasticity in time series data. The study finds, statistically, that the second level of the supply chain is the biggest contributor to inflation, with a coefficient of 3.35 in the forecasting model of red cayenne pepper price fluctuation. This model could serve as a reference for the government in determining appropriate policy for maintaining the price stability of red cayenne pepper.
Keywords: ARCH/GARCH, forecasting, red cayenne pepper, volatility, supply chain
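The GARCH(1,1) variance recursion underlying the method can be sketched as follows; the parameter values in the usage below are illustrative, not the fitted pepper-price coefficients:

```python
def garch_variance(returns, omega, alpha, beta, init_var):
    """GARCH(1,1) recursion: sigma2_t = omega + alpha * r_{t-1}^2
    + beta * sigma2_{t-1}.  A large shock raises the next period's
    conditional variance, capturing the volatility clustering that plain
    constant-variance regression cannot (heteroscedasticity)."""
    sigma2 = [init_var]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

In practice, omega, alpha, and beta are estimated by maximum likelihood on the price-return series, and exogenous drivers such as the supply-chain level enter the mean equation of the model.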
Procedia PDF Downloads 186
16443 Remaining Useful Life (RUL) Assessment Using Progressive Bearing Degradation Data and ANN Model
Authors: Amit R. Bhende, G. K. Awari
Abstract:
Remaining useful life (RUL) prediction is one of the key technologies for realizing prognostics and health management, which is widely applied in many industrial systems to ensure high system availability over their life cycles. The present work proposes a data-driven method of RUL prediction based on multiple health state assessment for rolling element bearings. Run-to-failure bearing degradation data at three different conditions are used, and a RUL prediction model is built separately for each condition. Feed-forward back-propagation neural network models are developed for prediction modeling.
Keywords: bearing degradation data, remaining useful life (RUL), back propagation, prognosis
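A minimal feed-forward back-propagation network of the kind described can be sketched as below, fitted to a synthetic health-indicator-to-RUL mapping; the architecture, learning rate, and data are placeholders, not the paper's:

```python
import math, random

def train_rul_mlp(xs, ys, hidden=4, lr=0.05, epochs=500, seed=0):
    """Minimal feed-forward net (1 input, one tanh hidden layer, linear
    output) trained by stochastic back-propagation to map a normalized
    health indicator to normalized remaining useful life."""
    rnd = random.Random(seed)
    w1 = [rnd.uniform(-0.5, 0.5) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rnd.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y                          # dLoss/dout for 0.5*(out-y)^2
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err
    return lambda x: forward(x)[1]
```

In the actual method, the input would be a vector of degradation features per health state and the target the measured time to failure; the training loop is the same back-propagation idea.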
Procedia PDF Downloads 436
16442 A Model of the Universe without Expansion of Space
Authors: Jia-Chao Wang
Abstract:
A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesis of the model is that photons traveling through space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, to experience redshift. The interaction also scatters some of the photons off their track toward an observer and, therefore, attenuates the beam intensity. As observed, the CMB exists everywhere in space, and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) reduces the energies of traveling photons gradually and does not alter their momenta drastically as in, for example, Compton scattering, so the images of distant objects are not totally blurred. An object moving through a thermalized photon gas, such as the CMB, experiences a drag: the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the earth travels at about 368 km/s (600 km/s) relative to the CMB. In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = aT⁴/3, where Eγ is the energy density of the photon gas and a is the radiation constant. The observed CMB dipole therefore implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction
Procedia PDF Downloads 134
16441 An Analytical Wall Function for 2-D Shock Wave/Turbulent Boundary Layer Interactions
Authors: X. Wang, T. J. Craft, H. Iacovides
Abstract:
When handling the near-wall regions of turbulent flows, it is necessary to account for the viscous effects that are important over the thin near-wall layers. Low-Reynolds-number turbulence models do this by including explicit viscous and damping terms that become active in the near-wall regions, and by using very fine near-wall grids to properly resolve the steep gradients present. In order to overcome the cost associated with low-Re turbulence models, a more advanced wall function approach has been implemented within OpenFOAM and tested, together with a standard log-law based wall function, in the prediction of flows involving 2-D shock wave/turbulent boundary layer interactions (SWTBLIs). On the whole, in the calculation of the impinging shock interaction, the three turbulence modelling strategies, the Launder-Sharma k-ε model with Yap correction (LS) and the high-Re k-ε model with either the standard wall function (SWF) or the analytical wall function (AWF), display good predictions of wall pressure. However, the SWF approach tends to underestimate the tendency of the flow to separate as a result of the SWTBLI. The analytical wall function, on the other hand, is able to reproduce the shock-induced flow separation and returns predictions similar to those of the low-Re model, using a much coarser mesh.
Keywords: SWTBLIs, skin-friction, turbulence modeling, wall function
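The standard log-law wall function referred to above can be sketched as follows; the constants κ = 0.41 and E = 9.0 and the fixed-point solver are conventional textbook choices, not details from the paper:

```python
import math

KAPPA, E = 0.41, 9.0   # von Karman constant and log-law roughness constant

def u_plus(y_plus):
    """Standard log-law wall function: U+ = (1/kappa) ln(E y+)."""
    return math.log(E * y_plus) / KAPPA

def friction_velocity(U, y, nu, iters=50):
    """Solve U / u_tau = (1/kappa) ln(E y u_tau / nu) for the friction
    velocity u_tau by fixed-point iteration, as a standard-wall-function
    implementation does for the near-wall cell (centroid at distance y,
    tangential velocity U, kinematic viscosity nu)."""
    u_tau = max(1e-6, math.sqrt(nu * U / y))   # viscous-sublayer-style guess
    for _ in range(iters):
        u_tau = U * KAPPA / math.log(E * y * u_tau / nu)
    return u_tau
```

The analytical wall function of the paper replaces this equilibrium log-law assumption with an integrated near-wall momentum balance, which is why it copes better with the non-equilibrium, separating flow behind the shock.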
Procedia PDF Downloads 346
16440 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, after it was found that food heated instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material’s ability to absorb electromagnetic energy and dissipate it in the form of heat. Many energy-intensive industries could benefit from electromagnetic heating, since many of their raw materials are dielectric at high temperatures. Limestone is a sedimentary rock and a dielectric material used intensively in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study continuous limestone processing under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell’s equations, as well as the coupling between the heat transfer and chemistry interfaces. Complementarily, a controller was implemented to optimize the overall heating efficiency and to keep the numerical model stable. This was done by continuously matching the cavity impedance and predicting the energy required by the system, avoiding energy inefficiencies. The controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone.
The numerical model considered this effect and took advantage of this interaction. The model proved highly unstable when solving non-linear temperature distributions. Limestone has a dielectric loss response that increases with temperature and a low thermal conductivity. For this reason, limestone is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five scenarios were tested, with material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating the tube rotation for mixing enhancement proved beneficial and crucial for all loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, supporting the conclusion that the tube has an optimal fill of material. Electric field peak detachment was observed for the case with 100% fill ratio, explaining its lower efficiency compared to the 80% case. Microwave technology has thus been demonstrated to be an important ally for the decarbonization of the cement industry.
Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing
Procedia PDF Downloads 175
16439 Factorization of Computations in Bayesian Networks: Interpretation of Factors
Authors: Linda Smail, Zineb Azouz
Abstract:
Given a Bayesian network relative to a set I of discrete random variables, we are interested in computing the probability distribution P(S), where S is a subset of I. The general idea is to write the expression of P(S) as a product of factors, where each factor is easy to compute. More importantly, it is very useful to give an interpretation of each of the factors in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such semantic interpretations is particularly interesting and relevant in the case of large Bayesian networks.
Keywords: Bayesian networks, D-separation, level two Bayesian networks, factorization of computation
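The factorization idea can be illustrated on the smallest possible network, a chain A → B → C with hypothetical probability tables: the joint factorizes as P(A, B, C) = P(A) P(B|A) P(C|B), and the marginal P(C) is obtained by summing out one variable per factor, each intermediate factor being itself a probability distribution.

```python
# Minimal illustration (hypothetical numbers): chain A -> B -> C with
# joint P(A, B, C) = P(A) P(B|A) P(C|B); marginalize to get P(C).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}  # p_c_given_b[b][c]

# First factor: sum out A to get P(B). The intermediate factor is itself
# a probability distribution, which is the semantic point of the paper.
p_b = {b: sum(p_a[a] * p_b_given_a[a][b] for a in p_a) for b in (0, 1)}

# Second factor: sum out B to get the marginal P(C).
p_c = {c: sum(p_b[b] * p_c_given_b[b][c] for b in p_b) for c in (0, 1)}
```

Each step costs only the size of one conditional table, rather than enumerating the full joint, which is what makes the factorized computation scale to large networks.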
Procedia PDF Downloads 530
16438 Component-Based Approach in Assessing Sewer Manholes
Authors: Khalid Kaddoura, Tarek Zayed
Abstract:
Sewer networks are constructed to protect communities and the environment from any contact with the sewer medium. Pipelines, whether laterals or sewer mains, and manholes form a huge underground infrastructure in every urban city. Owing to the importance of sewer networks, the infrastructure asset management field has advanced extensively in condition assessment and rehabilitation decision models. However, most of the focus has been devoted to pipelines, giving little attention to manhole condition assessment. In fact, studies have only recently started to emerge in this area to preserve manholes from malfunction. Therefore, the main objective of this study is to propose a condition assessment model for sewer manholes. The model divides the manhole into several components and determines the relative importance weight of each component using the Analytic Network Process (ANP) decision-making method. The condition of the manhole is then computed by aggregating the condition of each component with its corresponding weight. Accordingly, the proposed assessment model gives decision-makers both a final index suggesting the overall condition of the manhole and a backward analysis to check the condition of each component. Consequently, better decisions can be made concerning maintenance, rehabilitation, and replacement actions.
Keywords: Analytic Network Process (ANP), condition assessment, decision-making, manholes
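The aggregation step can be sketched as a weighted sum. The component names, weights, and condition scores below are hypothetical, not the paper's ANP-derived values; in the actual model the weights come from the ANP supermatrix.

```python
# Hypothetical manhole components with ANP-style importance weights
# (summing to 1) and condition scores (0 = failed .. 10 = excellent).
weights = {"cover": 0.15, "frame": 0.10, "chimney": 0.20,
           "wall": 0.30, "bench": 0.10, "channel": 0.15}
conditions = {"cover": 8, "frame": 7, "chimney": 5,
              "wall": 6, "bench": 9, "channel": 4}

assert abs(sum(weights.values()) - 1.0) < 1e-9

# Overall condition index: weighted sum of component conditions.
overall = sum(weights[c] * conditions[c] for c in weights)

# Backward analysis: identify the component dragging the index down.
worst = min(conditions, key=conditions.get)
```

The single index supports network-level prioritization, while the backward pass points maintenance crews to the specific component (here the channel) that needs attention.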
Procedia PDF Downloads 354
16437 Turbulent Flow Characteristics and Bed Morphology around Circular Bridge Pier
Authors: Pratik Acharya
Abstract:
Scour is a natural phenomenon brought about by the erosive action of the flowing stream in alluvial channels. Frequent scouring around bridge piers may cause damage to the structures. In alluvial channels, a complex interaction between the stream flow and the bed particles results in scouring around piers, so studying the characteristics of the flow around piers can give sound knowledge of the scouring process. The present research investigates the turbulent flow characteristics around bridge piers and the corresponding changes in bed morphology. Laboratory experiments were carried out in a tilting flume with a sand bed, with velocities around the pier measured by an acoustic Doppler velocimeter. Measurements show that velocity and Reynolds stresses are negative near the bed upstream of the pier and near the free surface downstream of the pier. Downstream of the pier, Reynolds stresses change rapidly due to the formation of wake vortices, and secondary currents are more predominant. As the flowing stream hits the pier, the flow separates into a downflow along the face of the pier, driven by a strong pressure gradient, and into flow along the sides of the pier. This flow separation scours the bed material and develops vortices: the downflow hits the bed and removes bed material, which is then carried forward by the flow circulations along the sides of the pier. The eroded bed material is deposited along the centerline at the rear side of the pier and produces a hump in the downstream region. Initially, the rate of scouring is high, and it reduces gradually with increasing time; after a certain point, an equilibrium sets in between the erosive capacity of the flowing stream and the resistance to motion of the bed particles.
Keywords: acoustic Doppler velocimeter, pier, Reynolds stress, scour depth, velocity
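The Reynolds stresses reported from the ADV measurements come from the Reynolds decomposition of the velocity time series; a minimal sketch of that computation (with hypothetical sample values, not the paper's data) is:

```python
from statistics import mean

# Sketch: Reynolds shear stress -rho * <u'w'> from ADV velocity samples.
rho = 1000.0                          # water density, kg/m^3
u = [0.32, 0.28, 0.35, 0.30, 0.25]    # streamwise samples, m/s (hypothetical)
w = [0.02, -0.01, 0.03, 0.00, -0.04]  # vertical samples, m/s (hypothetical)

u_mean, w_mean = mean(u), mean(w)
# Fluctuations about the mean (Reynolds decomposition u = U + u').
u_fluc = [ui - u_mean for ui in u]
w_fluc = [wi - w_mean for wi in w]
# Reynolds shear stress: density times the negative mean of the
# fluctuation products.
tau = -rho * mean(uf * wf for uf, wf in zip(u_fluc, w_fluc))
```

A negative tau near the bed upstream of the pier is the kind of signature the abstract describes; in practice the averages are taken over thousands of ADV samples per point.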
Procedia PDF Downloads 148
16436 Symbolic Computation and Abundant Travelling Wave Solutions to Modified Burgers' Equation
Authors: Muhammad Younis
Abstract:
In this article, the novel (G′/G)-expansion method is successfully applied to construct abundant travelling wave solutions to the modified Burgers’ equation with the aid of symbolic computation. The method is reliable and useful, and gives more general exact travelling wave solutions than existing methods. The obtained solutions take the form of hyperbolic, trigonometric, and rational functions, including solitary, singular, and periodic solutions, which have many potential applications in physical science and engineering. Some of these solutions are new, and some have been constructed before. Additionally, the constraint conditions for the existence of the solutions are also listed.
Keywords: travelling wave solutions, NLPDE, computation, integrability
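The travelling wave reduction underlying the method can be sketched as follows. The modified Burgers' equation appears in several forms in the literature; a common form, u_t + u²u_x = ν u_xx, is assumed here for illustration.

```latex
% Travelling-wave ansatz: seek solutions depending on a single variable.
u(x,t) = U(\xi), \qquad \xi = x - ct .
% Substituting into u_t + u^2 u_x = \nu u_{xx} gives an ODE in \xi:
-c\,U' + U^2 U' = \nu\,U'' ,
% which integrates once (K an integration constant) to
-c\,U + \tfrac{1}{3}U^3 = \nu\,U' + K .
% The (G'/G)-expansion method then posits a finite series
U(\xi) = \sum_{i=0}^{m} a_i \left( \frac{G'}{G} \right)^{i},
% where G(\xi) satisfies the linear auxiliary equation
G'' + \lambda G' + \mu G = 0 ,
% and balancing the highest-order terms fixes m before the coefficients
% a_i, c are determined symbolically.
```

Hyperbolic, trigonometric, or rational solutions then arise according to the sign of the discriminant λ² − 4μ of the auxiliary equation.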
Procedia PDF Downloads 434
16435 Carbohydrate Intake Estimation in Type I Diabetic Patients Described by UVA/Padova Model
Authors: David A. Padilla, Rodolfo Villamizar
Abstract:
In recent years, closed-loop control strategies have been developed to establish a healthy glucose profile in type 1 diabetes mellitus (T1DM) patients. However, the controller itself is unable to define a suitable reference trajectory for glucose. In this paper, a control strategy is proposed in which the shape of the reference trajectory is generated based on the amount of carbohydrates present during the digestive process, due to the effect of carbohydrate intake. Since no sensor exists to measure the amount of carbohydrates consumed, an estimator is proposed. This paper thus presents the entire process of designing a carbohydrate estimator, which allows the disturbance to be estimated for a model predictive controller (MPC) in a T1DM patient; the estimate is used to establish a reference profile and to improve the response of the controller by providing it with information about the ingested carbohydrates. The dynamics of the diabetic model used are given by the equations of the UVA/Padova model from the T1DMS simulator. The system was developed and simulated in Simulink, taking into account the noise and limitations of the actuators of the glucose control system.
Keywords: estimation, glucose control, predictive controller, MPC, UVA/Padova
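The kind of meal disturbance such an estimator must recover can be sketched with a simple absorption model. This is NOT the UVA/Padova oral-glucose subsystem; it is a commonly used gamma-shaped rate-of-appearance curve, with hypothetical meal size and time-to-peak.

```python
import math

# Simple meal-absorption sketch: rate of glucose appearance
#   Ra(t) = D * t * exp(-t / t_max) / t_max**2,
# where D is the ingested carbohydrate mass (g) and t_max the time-to-peak.
def ra(t: float, D: float, t_max: float) -> float:
    """Rate of appearance (g/min) at time t (min)."""
    return D * t * math.exp(-t / t_max) / t_max ** 2

# An estimator can recover D from a reconstructed Ra curve, because the
# integral of Ra over [0, inf) equals D; approximate with a Riemann sum.
def estimate_carbs(samples, dt: float) -> float:
    return sum(samples) * dt

D, t_max, dt = 60.0, 40.0, 1.0  # hypothetical: 60 g meal, 40 min peak
curve = [ra(k * dt, D, t_max) for k in range(5000)]
d_hat = estimate_carbs(curve, dt)
```

In the closed loop, the recovered carbohydrate amount shapes the glucose reference trajectory fed to the MPC rather than being used directly as a control input.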
Procedia PDF Downloads 261
16434 Analyzing the Market Growth in Application Programming Interface Economy Using Time-Evolving Model
Authors: Hiroki Yoshikai, Shin’ichi Arakawa, Tetsuya Takine, Masayuki Murata
Abstract:
The API (Application Programming Interface) economy is expected to create new value by converting corporate services such as information processing and data provision into APIs and using these APIs to connect services. Understanding the dynamics of an API economy market under the strategies of its participants is crucial to fully realizing the value of the API economy. To capture the behavior of a market in which the number of participants changes over time, we present a time-evolving market model for a platform in which API providers, who provide APIs to service providers, participate in addition to service providers and consumers. We then use the market model to clarify the role API providers play in expanding market participation and forming ecosystems. The results show that the platform with API providers increased the number of market participants by 67% and decreased the cost of developing services by 25% compared to the platform without API providers. Furthermore, during the expansion phase of the market, the profits of the participants are found to be roughly equal when 70% of the revenue from consumers is distributed to the service providers and API providers. It is also found that when the market matures, the profits of the service providers and API providers decrease significantly due to competition between them, while the profit of the platform increases.
Keywords: API economy, ecosystem, platform, API providers
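The 70% revenue-sharing split reported above can be made concrete with back-of-envelope arithmetic. All monetary figures are hypothetical, and the 60/40 division of the provider share between service providers and API providers is an assumption for illustration, not a result from the paper.

```python
# Back-of-envelope sketch of the revenue-sharing split (hypothetical figures).
consumer_revenue = 1000.0
provider_share = 0.70          # fraction passed to service + API providers
platform_profit = consumer_revenue * (1 - provider_share)

# Assume (for illustration only) the provider share splits 60/40 between
# service providers and API providers.
service_profit = consumer_revenue * provider_share * 0.6
api_profit = consumer_revenue * provider_share * 0.4

# Revenue is fully accounted for across the three parties.
assert abs(platform_profit + service_profit + api_profit
           - consumer_revenue) < 1e-9
```

The time-evolving model in the paper additionally lets the participant counts, and hence these revenue flows, change from one period to the next.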
Procedia PDF Downloads 91
16433 Mathematical Modeling of Nonlinear Process of Assimilation
Authors: Temur Chilachava
Abstract:
This work proposes a new nonlinear mathematical model describing the assimilation of a population speaking a less widespread language by two states with two different widespread languages, taking the demographic factor into account. The model considers three subjects: the population and government institutions of the state with the widespread first language, which use state and administrative resources to assimilate the third population with the less widespread language; the population and government institutions of the state with the widespread second language, acting likewise; and the third population (possibly a small state formation or an autonomy) exposed to bilateral assimilation by the two more powerful states. We previously showed that, with a zero demographic factor for all three subjects, the population with the less widespread language is completely assimilated by the states with the two widespread languages, and the result of assimilation (the redistribution of the assimilated population) depends on the initial population sizes and the technological and economic capabilities of the assimilating states. The present model, which accounts for the demographic factor, assumes a natural decrease in the populations of the assimilating states and a natural increase in the population undergoing bilateral assimilation. For certain ratios between the coefficients of natural population change of the assimilating states and the assimilation coefficients, two first integrals are obtained for the nonlinear system of three differential equations. Cases of two powerful states assimilating the population of a small state formation (autonomy) are considered, with different population sizes and with both identical and different economic and technological capabilities.
It is shown that in the first case the problem reduces to a nonlinear system of two differential equations describing the classical predator-prey model; the population undergoing assimilation naturally plays the role of the prey, and the population of one of the assimilating states the role of the predator. In this case, the population of the second assimilating state changes in proportion to that of the first, with a proportionality coefficient equal to the ratio of the assimilators' populations at the initial time. In the second case the problem reduces to a nonlinear system of two differential equations of predator-prey type, with closed integral curves on the phase plane. In neither case is the population with the less widespread language fully assimilated. Intervals of change in the population sizes of all three subjects of the model are found. The considered mathematical models, which can approximate real situations involving actual assimilating countries and state formations (autonomies or formations with unrecognized status) subject to bilateral assimilation, show that the only way for such populations to avoid assimilation is natural demographic growth combined with a natural decrease in the populations of the assimilating states.
Keywords: nonlinear mathematical model, bilateral assimilation, demographic factor, first integrals, result of assimilation, intervals of change of number of the population
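The classical predator-prey (Lotka-Volterra) system that the first case reduces to can be sketched with a minimal Euler integration. The rate constants and initial populations below are hypothetical, chosen only to show the closed-orbit behavior.

```python
# Classical Lotka-Volterra predator-prey sketch (hypothetical rates):
#   x' =  a*x - b*x*y   (prey: the population undergoing assimilation)
#   y' = -c*y + d*x*y   (predator: an assimilating state's population)
a, b, c, d = 1.0, 0.1, 1.5, 0.075
x, y = 10.0, 5.0        # initial populations, arbitrary units
dt, steps = 0.001, 20000  # integrate to t = 20 with explicit Euler

for _ in range(steps):
    dx = (a * x - b * x * y) * dt
    dy = (-c * y + d * x * y) * dt
    x, y = x + dx, y + dy

# Both populations remain positive throughout: neither full assimilation
# nor extinction, mirroring the closed phase-plane orbits in the abstract.
```

Explicit Euler slowly drifts off the exact closed orbit, so for quantitative phase portraits a symplectic or higher-order integrator would be preferable; the qualitative oscillation is already visible here.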
Procedia PDF Downloads 470
16432 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction
Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour
Abstract:
In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), installed and designed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters, such as weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the bit damage must be evaluated and recorded very carefully to guide engineers in selecting bits for further planned wells. Drilling with a worn bit may cause severe damage to the bit, leading to cutter or cone losses at the bottom of the hole, where a fishing job will then have to take place; all of this increases the operating cost. The main way to reduce the cost of the drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for predicting the rate of penetration from drilling parameters, mostly based on empirical approaches. One of the most commonly used is the Bourgoyne and Young model, in which the rate of penetration is estimated from the drilling parameters and a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool GPE (version 4.2.0). Real data collected from similar formations (12 ¼’ sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the model equations and applied to nearby wells in the same fields to predict bit wear.
Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration
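Once the eight coefficients are estimated, evaluating the Bourgoyne and Young rate of penetration is straightforward. This sketch assumes the usual exponential form ROP = exp(a₁ + Σ aⱼxⱼ); the specific labeling of the parameter functions x₂..x₈ in the comment is my reading of the standard model, and all numerical values are hypothetical.

```python
import math

# Bourgoyne & Young ROP evaluation sketch:
#   ROP = exp(a1 + sum_{j=2}^{8} a_j * x_j),
# where x2..x8 are drilling-parameter functions (commonly: depth,
# compaction, differential pressure, weight on bit, rotary speed,
# tooth wear, bit hydraulics). All numbers below are hypothetical.
def rop(a, x):
    """a: coefficients a1..a8; x: parameter functions x2..x8."""
    assert len(a) == 8 and len(x) == 7
    return math.exp(a[0] + sum(ai * xi for ai, xi in zip(a[1:], x)))

a = [3.0, 0.5, 0.2, 0.1, 0.8, 0.4, -0.3, 0.2]   # hypothetical coefficients
x = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.2]          # hypothetical x2..x8
rop_value = rop(a, x)
```

Coefficient estimation then amounts to fitting a₁..a₈ against measured ROP data, which is linear in the coefficients after taking logarithms; the paper performs this fit with gPROMS rather than a hand-rolled regression.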
Procedia PDF Downloads 154
16431 Effect of Springback Analysis on Influences of the Steel Demoulding Using FEM
Authors: Byeong-Sam Kim, Jongmin Park
Abstract:
The present work is motivated by the industrial challenge of producing complex composite shapes cost-effectively. An anisotropic thermoviscoelastic model is analyzed with an implemented finite element solver, with the stress relaxation of the nonlinear thermoviscoelastic model represented by a Prony series. The relaxation of process-induced internal stresses during the cooling stage of the manufacturing cycle was calculated from the springback phenomenon observed in a part containing a cylindrical segment. The finite element results obtained from the present formulation are compared with experimental data, and they show good correlation.
Keywords: thermoviscoelastic, springback phenomena, FEM analysis, thermoplastic composite structures
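The Prony-series representation of stress relaxation mentioned above can be sketched directly. The relaxation modulus is a sum of decaying exponentials over a long-term equilibrium value; the moduli and relaxation times below are hypothetical, not the material parameters of the paper.

```python
import math

# Prony-series relaxation modulus (hypothetical parameters):
#   G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
def relaxation_modulus(t, g_inf, terms):
    """terms: list of (G_i, tau_i) pairs for the Prony branches."""
    return g_inf + sum(g_i * math.exp(-t / tau_i) for g_i, tau_i in terms)

g_inf = 1.0                                          # equilibrium modulus, GPa
terms = [(2.0, 10.0), (1.5, 100.0), (0.5, 1000.0)]   # (G_i, tau_i in s)

g0 = relaxation_modulus(0.0, g_inf, terms)       # instantaneous modulus
g_late = relaxation_modulus(1e6, g_inf, terms)   # relaxes toward g_inf
```

In the FE solver, this series form is convenient because the hereditary stress integral for each exponential branch can be updated recursively at every time step during the cooling simulation.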
Procedia PDF Downloads 358