Search results for: anaerobic modeling
3403 Post-Earthquake Damage Detection Using System Identification with a Pair of Seismic Recordings
Authors: Lotfi O. Gargab, Ruichong R. Zhang
Abstract:
A wave-based framework is presented for modeling seismic motion in multistory buildings and for using the measured response for system identification, which can be utilized to extract important information regarding structural integrity. With one pair of building responses at two locations, a generalized model response is formulated based on wave propagation features and expressed as frequency and time response functions, denoted respectively as GFRF and GIRF. In particular, the GIRF is fundamental in tracking arrival times of impulsive wave motion initiated at the response level, which depend on local model properties. Matching model and measured-structure responses can help identify model parameters and infer building properties. To show the effectiveness of this approach, the Millikan Library in Pasadena, California is identified with recordings of the Yorba Linda earthquake of September 3, 2002.
Keywords: system identification, continuous-discrete mass modeling, damage detection, post-earthquake
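The GFRF/GIRF idea above (divide the spectra of two recorded responses, then transform back to obtain an impulse response whose peak marks the wave travel time between the two recording locations) can be sketched as follows. This is a toy illustration with a naive DFT and made-up signals, not the authors' formulation:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def impulse_response(x_base, y_upper, eps=1e-9):
    """Generalized IRF: inverse transform of Y(f)/X(f), with a small
    regularization term to avoid division by near-zero spectral values."""
    X, Y = dft(x_base), dft(y_upper)
    H = [Y[k] * X[k].conjugate() / (abs(X[k]) ** 2 + eps) for k in range(len(X))]
    return idft(H)

# toy example: the upper-floor motion is the base motion delayed by 3 samples
x = [0.0] * 16
x[0], x[1], x[2] = 1.0, 0.5, 0.25
y = [0.0] * 3 + x[:-3]
h = impulse_response(x, y)
peak = max(range(len(h)), key=lambda n: h[n])
print(peak)  # wave travel time between the two sensors, in samples
```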
Procedia PDF Downloads 369
3402 Numerical Study of the Influence of the Primary Stream Pressure on the Performance of the Ejector Refrigeration System Based on Heat Exchanger Modeling
Authors: Elhameh Narimani, Mikhail Sorin, Philippe Micheau, Hakim Nesreddine
Abstract:
Numerical models of the heat exchangers in an ejector refrigeration system (ERS) were developed and validated with experimental data. The models were based on the switched heat exchanger model using the moving boundary method and were capable of estimating the zones' lengths, the outlet temperatures of both sides, and the heat loads at various experimental points. The developed models were utilized to investigate the influence of the primary flow pressure on the performance of an R245fa ERS based on its coefficient of performance (COP) and exergy efficiency. It was illustrated numerically and proved experimentally that increasing the primary flow pressure slightly reduces the COP, while the exergy efficiency goes through a maximum before decreasing.
Keywords: coefficient of performance (COP), ejector refrigeration system (ERS), exergy efficiency (ηII), heat exchangers modeling, moving boundary method
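As a small illustration of the COP figure of merit used above: the thermal COP of an ERS relates the cooling duty to the generator heat input plus any pump work. The numbers below are illustrative, not the study's data:

```python
def cop_thermal(q_evap, q_gen, w_pump=0.0):
    """Thermal COP of an ejector refrigeration system: cooling delivered
    per unit of heat (and pump work) supplied to drive the primary stream."""
    return q_evap / (q_gen + w_pump)

# hypothetical duties in kW
print(round(cop_thermal(q_evap=3.5, q_gen=12.0, w_pump=0.1), 3))
```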
Procedia PDF Downloads 202
3401 PWM-Based Control of D-STATCOM for Voltage Sag and Swell Mitigation in Distribution Systems
Authors: A. Assif
Abstract:
This paper presents the modeling of a prototype distribution static compensator (D-STATCOM) for voltage sag and swell mitigation in an unbalanced distribution system. The concept that an inverter can be used as a generalized impedance converter to realize either inductive or capacitive reactance has been used to mitigate power quality issues in distribution networks. The D-STATCOM is here intended to replace the widely used Static Var Compensator (SVC). The scheme is based on the Voltage Source Converter (VSC) principle. In this model, a PWM-based control scheme has been implemented to control the electronic valves of the VSC, and a phase-shift control algorithm is used for converter control. The D-STATCOM injects a current into the system to mitigate voltage sags. The D-STATCOM model has been designed in MATLAB Simulink. Accordingly, simulations are first carried out to illustrate the use of the D-STATCOM in mitigating voltage sag in a distribution system. Simulation results prove that the D-STATCOM is capable of mitigating voltage sag as well as improving the power quality of a system.
Keywords: D-STATCOM, voltage sag, voltage source converter (VSC), phase shift control
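A sinusoidal PWM scheme of the kind used to drive the VSC valves can be sketched by comparing a sinusoidal reference against a triangular carrier. This is a generic textbook sketch with assumed frequencies, not the paper's controller:

```python
import math

def pwm_gate(t, m=0.8, f_mod=50.0, f_carrier=5000.0):
    """Sinusoidal PWM: the gate signal is ON when the modulating sine
    (modulation index m) exceeds a triangular carrier in [-1, 1]."""
    ref = m * math.sin(2 * math.pi * f_mod * t)
    phase = (t * f_carrier) % 1.0
    carrier = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
    return 1 if ref > carrier else 0

# over one full modulating cycle the average duty of a sine reference is 0.5
samples = [pwm_gate(n / 100000.0) for n in range(2000)]  # 20 ms at 10 us steps
duty = sum(samples) / len(samples)
print(abs(duty - 0.5) < 0.05)
```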
Procedia PDF Downloads 343
3400 Instant Fire Risk Assessment Using Artificial Neural Networks
Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan
Abstract:
Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fire are very effective in preventing a fire from becoming dangerous in its initial stage. These indices provide the opportunity to prevent or intervene early by determining the stage of the fire, the potential for hazard, and the type of combustion agent from the percentage values of the ambient air components. In this system, a multi-layer perceptron (teacher-learning) artificial neural network will be modeled and trained with the Levenberg-Marquardt algorithm on input data determined from these indices, following the modeling methods in the literature. The actual values produced by the indices will be compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination will be investigated.
Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index
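The hidden-fire indices mentioned above are simple ratios of ambient air components. For instance, Graham's ratio compares CO production with the oxygen deficiency inferred from the nitrogen balance. This is the standard formulation, shown here with illustrative gas readings rather than the study's data:

```python
def grahams_ratio(co, o2, n2):
    """Graham's ratio (CO index): CO produced per unit of oxygen consumed,
    with the oxygen deficiency estimated from the nitrogen balance.
    All inputs are volume percentages of the sampled air."""
    o2_deficiency = 0.265 * n2 - o2  # 0.265 ~ O2/N2 ratio in fresh air
    if o2_deficiency <= 0:
        return 0.0  # no measurable oxygen consumption
    return 100.0 * co / o2_deficiency

# nearly fresh air: a trace of CO and a slight O2 drop
print(round(grahams_ratio(co=0.002, o2=20.0, n2=79.04), 2))
```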
Procedia PDF Downloads 137
3399 Energy Consumption Modeling for Strawberry Greenhouse Crop by Adaptive Neuro-Fuzzy Inference System Technique: A Case Study in Iran
Authors: Azar Khodabakhshi, Elham Bolandnazar
Abstract:
Agriculture, as the most important food-manufacturing sector, is not only an energy consumer but is also known as an energy supplier. Energy use is considered a helpful parameter for analyzing and evaluating agricultural sustainability. In this study, the pattern of energy consumption of strawberry greenhouses of Jiroft in Kerman province of Iran was surveyed. The total input energy required for strawberry production was calculated as 113314.71 MJ/ha. Electricity, with a 38.34% share of the total energy, was the largest energy consumer in strawberry production. In this study, neuro-fuzzy networks were used for modeling the strawberry yield function. Results showed that the best model for predicting strawberry yield had a correlation coefficient, root mean square error (RMSE), and mean absolute percentage error (MAPE) equal to 0.9849, 0.0154 kg/ha, and 0.11%, respectively. Based on these results, it can be said that the neuro-fuzzy method can predict and model strawberry crop yield well.
Keywords: crop yield, energy, neuro-fuzzy method, strawberry
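The RMSE and MAPE figures reported above are computed as follows, shown with hypothetical yield values rather than the study's data:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

yields_actual = [10.0, 12.0, 11.0, 13.0]  # hypothetical strawberry yields, t/ha
yields_pred = [10.5, 11.5, 11.0, 13.5]
print(round(rmse(yields_actual, yields_pred), 3), round(mape(yields_actual, yields_pred), 2))
```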
Procedia PDF Downloads 381
3398 NMR-Based Metabolomics Reveals Dietary Effects in Liver Extracts of Arctic Charr (Salvelinus alpinus) and Tilapia (Oreochromis mossambicus) Fed Different Levels of Starch
Authors: Rani Abro, Ali Ata Moazzami, Jan Erik Lindberg, Torbjörn Lundh
Abstract:
The effect of dietary starch level on liver metabolism in Arctic charr (Salvelinus alpinus) and tilapia (Oreochromis mossambicus) was studied using 1H-NMR based metabolomics. Fingerlings were fed iso-nitrogenous diets containing 0, 10 and 20% starch for two months before liver samples were collected for metabolite analysis. Metabolite profiling was performed at 600 MHz using Chenomx NMR software. In total, 48 metabolites were profiled in liver extracts from both fish species. Following the profiling, principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) were performed. These revealed that differences in the concentration of significant metabolites were correlated to the dietary starch level in both species. The most prominent difference in metabolic response to starch feeding between the omnivorous tilapia and the carnivorous Arctic charr was an indication of higher anaerobic metabolism in Arctic charr. The data also indicated that amino acid and pyrimidine metabolism was higher in Arctic charr than in tilapia.
Keywords: arctic charr, metabolomics, starch, tilapia
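PCA, the first of the two multivariate steps above, reduces to mean-centering the metabolite matrix and extracting the dominant eigenvector of its covariance. A minimal power-iteration sketch, with invented two-metabolite data:

```python
def first_principal_component(data, iters=200):
    """First PC via mean-centering and power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1) for b in range(d)]
         for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# two hypothetical metabolite concentrations that co-vary with starch level
data = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 7.8], [5.0, 10.1]]
pc1 = first_principal_component(data)
print(round(pc1[1] / pc1[0], 1))  # slope of the dominant variation direction
```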
Procedia PDF Downloads 457
3397 Modeling of Oxygen Supply Profiles in Stirred-Tank Aggregated Stem Cells Cultivation Process
Authors: Vytautas Galvanauskas, Vykantas Grincas, Rimvydas Simutis
Abstract:
This paper investigates a possible practical solution for reasonable oxygen supply during pluripotent stem cell expansion processes, in which the stem cells propagate as aggregates in stirred-suspension bioreactors. Low glucose and low oxygen concentrations are preferred for efficient proliferation of pluripotent stem cells. However, strong oxygen limitation, especially inside cell aggregates, can lead to cell starvation and death. In this research, the oxygen concentration profile inside stem cell aggregates in an expansion process was predicted using a modified oxygen diffusion model. This profile can be realized during cultivation by manipulating the oxygen concentration in the inlet gas or the inlet gas flow. The proposed approach is relatively simple and may be attractive for installation in real pluripotent stem cell expansion processes.
Keywords: aggregated stem cells, dissolved oxygen profiles, modeling, stirred-tank, 3D expansion
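A classic closed-form version of such a diffusion model, assuming steady state and zero-order oxygen uptake in a spherical aggregate, is C(r) = C_s - q(R^2 - r^2)/(6D). The parameter values below are illustrative, not the study's:

```python
def oxygen_profile(r, R, c_surface, q, D):
    """Steady-state O2 concentration at radius r inside a spherical aggregate,
    assuming zero-order volumetric uptake q (mol/m^3/s) and diffusivity D (m^2/s)."""
    return c_surface - q * (R ** 2 - r ** 2) / (6.0 * D)

# assumed values for a 200 um diameter aggregate
R = 100e-6    # aggregate radius, m
D = 2e-9      # O2 diffusivity in tissue, m^2/s
c_s = 0.05    # surface O2 concentration, mol/m^3
q = 0.02      # volumetric uptake rate, mol/(m^3 s)
center = oxygen_profile(0.0, R, c_s, q, D)
print(center < c_s, center > 0)  # depleted at the center, but not anoxic here
```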
Procedia PDF Downloads 305
3396 Solid-Liquid-Solid Interface of Yakam Matrix: Mathematical Modeling of the Contact Between an Aircraft Landing Gear and a Wet Pavement
Authors: Trudon Kabangu Mpinga, Ruth Mutala, Shaloom Mbambu, Yvette Kalubi Kashama, Kabeya Mukeba Yakasham
Abstract:
A mathematical model is developed to describe the contact dynamics between the landing gear wheels of an aircraft and a wet pavement during landing. The model is based on nonlinear partial differential equations, using the Yakam Matrix to account for the interaction between solid, liquid, and solid phases. This framework incorporates the influence of environmental factors, particularly water or rain on the runway, on braking performance and aircraft stability. Given the absence of exact analytical solutions, our approach enhances the understanding of key physical phenomena, including Coulomb friction forces, hydrodynamic effects, and the deformation of the pavement under the aircraft's load. Additionally, the dynamics of aquaplaning are simulated numerically to estimate the braking performance limits on wet surfaces, thereby contributing to strategies aimed at minimizing risk during landing on wet runways.
Keywords: aircraft, modeling, simulation, yakam matrix, contact, wet runway
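For the aquaplaning regime discussed above, a widely quoted empirical rule of thumb (Horne's formula, not the authors' PDE model) estimates the dynamic hydroplaning onset speed from tire inflation pressure:

```python
import math

def hydroplaning_speed_knots(tire_pressure_psi):
    """Horne's empirical rule of thumb: dynamic hydroplaning onset speed
    is roughly 9 * sqrt(tire inflation pressure in psi), in knots.
    A coarse screening estimate only, not a substitute for contact modeling."""
    return 9.0 * math.sqrt(tire_pressure_psi)

# a transport-aircraft main tire at roughly 200 psi (illustrative)
print(round(hydroplaning_speed_knots(200.0)))
```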
Procedia PDF Downloads 8
3395 Evaluation of Capacity of Bed Planted with Macrophytes for Wastewater Treatment of Biskra City, Algeria
Authors: Mimeche Leila, Debabeche Mahmoud
Abstract:
This study evaluates the feasibility of a plant-based purification process (constructed wetland) to treat the domestic wastewater of Biskra, a city in a semi-arid environment facing serious wastewater problems. According to the literature, treatment by plants is considered more advantageous than classic techniques. It uses beds planted with macrophytes, where purification is achieved by the combined action of plants and micro-organisms in a filtering bed. The micro-organisms, aerobic and/or anaerobic bacteria, have the main function of degrading the polluting materials, while the plants in the macrophyte beds serve as a support for bacterial growth and also favour their development. In this study, we present a preliminary experimental analysis of the treatment potential of some macrophyte plants implanted in basins filled with gravel. Physico-chemical and bacteriological analyses of the wastewater indicate a good elimination of the polluting materials and demonstrate the purifying power of these plants in association with bacteria. The results obtained seem interesting and encourage deepening the study for other types of plants under other conditions.
Keywords: constructed wetlands, macrophytes, sewage treatment, wastewater
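The "good elimination of the polluting materials" reported above is typically quantified as a percent removal between influent and effluent concentrations. The BOD5 values below are illustrative, not measurements from the study:

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a pollutant across the planted bed."""
    return 100.0 * (c_in - c_out) / c_in

# hypothetical influent/effluent BOD5, mg/L
print(round(removal_efficiency(250.0, 45.0), 1))
```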
Procedia PDF Downloads 400
3394 Physical Modeling of Woodwind Ancient Greek Musical Instruments: The Case of Plagiaulos
Authors: Dimitra Marini, Konstantinos Bakogiannis, Spyros Polychronopoulos, Georgios Kouroupetroglou
Abstract:
Archaeomusicology cannot depend entirely on the study of excavated ancient musical instruments, as their condition is most often not ideal (i.e., missing or eroded parts) and because of the concern of damaging the originals during experiments. To overcome these obstacles, researchers build replicas. This technique is still the most popular one, although it is rather expensive and time-consuming. Throughout the last decades, the development of physical modeling techniques has provided tools that enable the study of musical instruments through their digitally simulated models. This is not only more cost- and time-efficient but also provides additional flexibility, as the user can easily modify parameters such as geometrical features and materials. This paper thoroughly describes the steps to create a physical model of a woodwind ancient Greek instrument, the Plagiaulos. This instrument can be considered the ancestor of the modern flute due to their common geometry and air-jet excitation mechanism. The Plagiaulos comprises a single resonator with an open end and a number of tone holes; the combination of closed and open tone holes produces the pitch variations. In this work, the effects of all the instrument's components are described by means of physics and then simulated based on digital waveguides. The synthesized sound of the proposed model complies with the theory, highlighting its validity. Further, the synthesized sound of the model simulating the Plagiaulos of Koile (2nd century BCE) was compared with its replica, built in our laboratory following the scientific methodologies of archaeomusicology. The aforementioned results verify that robust dynamic digital tools can be introduced in the field of computational, experimental archaeomusicology.
Keywords: archaeomusicology, digital waveguides, musical acoustics, physical modeling
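As a first-order check on such a model, the resonator behaves like an open-open pipe whose fundamental is f = c/(2L), and opening a tone hole roughly shortens the effective bore. This is a crude approximation that ignores end corrections and hole impedance, with assumed dimensions:

```python
def effective_fundamental(bore_length_m, first_open_hole_m=None, c=343.0):
    """Open-open pipe fundamental f = c / (2L); an open tone hole crudely
    shortens the effective bore to the distance down to that hole."""
    L = first_open_hole_m if first_open_hole_m is not None else bore_length_m
    return c / (2.0 * L)

f_closed = effective_fundamental(0.35)      # all tone holes closed
f_open = effective_fundamental(0.35, 0.28)  # first open hole 28 cm down the bore
print(round(f_closed, 1), f_open > f_closed)
```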
Procedia PDF Downloads 113
3393 Assessing the Effectiveness of Machine Learning Algorithms for Cyber Threat Intelligence Discovery from the Darknet
Authors: Azene Zenebe
Abstract:
Deep learning is a subset of machine learning which incorporates techniques for the construction of artificial neural networks and has been found useful for modeling complex problems with large datasets. Deep learning requires very high computational power and long training times. By aggregating computing power, high-performance computing (HPC) has emerged as an approach to resolving advanced problems and performing data-driven research activities. Cyber threat intelligence (CTI) is actionable information or insight an organization or individual uses to understand the threats that have targeted, will target, or are currently targeting the organization. Results of a literature review will be presented, along with results of an experimental study that compares the performance of tree-based and function-based machine learning, including deep learning algorithms, using a secondary dataset collected from the darknet.
Keywords: deep learning, cyber security, cyber threat modeling, tree-based machine learning, function-based machine learning, data science
Procedia PDF Downloads 154
3392 Application of Causal Inference and Discovery in Curriculum Evaluation and Continuous Improvement
Authors: Lunliang Zhong, Bin Duan
Abstract:
The undergraduate graduation project is a vital part of the higher education curriculum, crucial for engineering accreditation. Current evaluations often summarize data without identifying underlying issues. This study applies the Peter-Clark algorithm to analyze causal relationships within the graduation project data of an Electronics and Information Engineering program, creating a causal model. Structural equation modeling confirmed the model's validity. The analysis reveals key teaching stages affecting project success, uncovering problems in the process. Introducing causal discovery and inference into project evaluation helps identify issues and propose targeted improvement measures. The effectiveness of these measures is validated by comparing the learning outcomes of two student cohorts, stratified by confounding factors, leading to improved teaching quality.
Keywords: causal discovery, causal inference, continuous improvement, Peter-Clark algorithm, structural equation modeling
Procedia PDF Downloads 18
3391 The Impact of Gamification on Self-Assessment for English Language Learners in Saudi Arabia
Authors: Wala A. Bagunaid, Maram Meccawy, Arwa Allinjawi, Zilal Meccawy
Abstract:
Continuous self-assessment becomes crucial in self-paced online learning environments. Students often depend on themselves to assess their progress, which is considered an essential requirement for any successful learning process. Today's education institutions face major problems around student motivation and engagement. Thus, personalized e-learning systems aim to help and guide the students. Gamification provides an opportunity to help students with self-assessment and social comparison with other students by attempting to harness the motivational power of games and apply it to the learning environment. Furthermore, Open Social Student Modeling (OSSM), considered one of the latest user modeling technologies, is believed to improve students' self-assessment and to allow them to compare themselves socially with other students. This research integrates the OSSM approach and gamification concepts in order to provide self-assessment for English language learners at King Abdulaziz University (KAU). This is achieved through an interactive visual representation of their learning progress.
Keywords: e-learning system, gamification, motivation, social comparison, visualization
Procedia PDF Downloads 152
3390 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered as an operation in two phases: (1) observation/measuring, i.e., the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform some critical functions, such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain.
In conclusion, this work aims to demonstrate that we can improve the design activity of complex systems based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via models and a simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 177
3389 Computational Diagnostics for Dielectric Barrier Discharge Plasma
Authors: Zainab D. Abd Ali, Thamir H. Khalaf
Abstract:
In this paper, the characteristics of the electric discharge in the gap between two (parallel-plate) dielectric plates are studied. The gap is filled with argon gas at atmospheric pressure and ambient temperature; its thickness is typically less than 1 mm, and the dielectric may be up to 10 cm in diameter. A sinusoidal voltage at RF frequency is applied to one of the dielectric plates, while the other plate is electrically grounded. The simulation in this work depends on a Boltzmann equation solver in the first few moments, a fluid model, and plasma chemistry, in one-dimensional modeling. This modeling gives insight into the characteristics of the dielectric barrier discharge through studying the properties of gas breakdown, the electric field, and the electric potential, and by calculating the electron density, mean electron energy, electron current density, ion current density, and total plasma current density. The investigation also includes: 1. the influence of doubling or halving the gap thickness between the two plates; 2. the effect of the thickness of the dielectric plates; 3. the influence of the type and properties of the dielectric material (glass, silicon, Teflon).
Keywords: computational diagnostics, Boltzmann equation, electric discharge, electron density
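The gas-breakdown behavior studied above is often summarized by Paschen's law, V_b = B·pd / (ln(A·pd) - ln(ln(1 + 1/γ))). The sketch below uses assumed argon-like constants, not values from the paper's solver:

```python
import math

def paschen_breakdown(pd, A=13.6, B=235.0, gamma=0.01):
    """Paschen-law breakdown voltage for a gas gap, pd in Torr*cm.
    A (1/(cm*Torr)), B (V/(cm*Torr)) and the secondary-emission
    coefficient gamma are illustrative argon-like values, not fitted data."""
    return B * pd / (math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma)))

v_short = paschen_breakdown(760 * 0.01)  # 0.1 mm gap at 1 atm
v_long = paschen_breakdown(760 * 0.1)    # 1 mm gap at 1 atm
print(v_short > 0 and v_long > v_short)  # above the Paschen minimum, V_b grows with pd
```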
Procedia PDF Downloads 777
3388 Realistic Modeling of the Preclinical Small Animal Using Commercial Software
Authors: Su Chul Han, Seungwoo Park
Abstract:
With the increasing incidence of cancer, the technology and modalities of radiotherapy have advanced, and the importance of preclinical models in cancer research is growing. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose using commercial software. The small animal phantom was modeled from the Moby 4D digital mouse whole-body phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files using Matlab; the two-dimensional CT images were then converted to a three-dimensional image that can be segmented and cropped in sagittal, coronal, and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line value, thresholding was carried out to make a mask connecting all regions within the same threshold range. Using this thresholding method, we segmented the images into three parts (bone, body tissue, lung); to separate neighboring pixels between lung and body tissue, we used the region-growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation (smoothing factor 0.4, 5 iterations). The edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees, and 5 iterations. The processed 3D object file was converted to an STL file for output on a 3D printer. We then modified the 3D small animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips, and thus acquired a 3D object of a realistic small animal phantom.
The width of the small animal phantom was 2.631 cm, its thickness 2.361 cm, and its length 10.817 cm. The Mimics software provided efficient 3D object generation and convenient conversion to STL files. The development of small preclinical animal phantoms should increase the reliability of absorbed dose verification in small animals for preclinical studies.
Keywords: mimics, preclinical small animal, segmentation, 3D printer
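The thresholding-plus-region-growing segmentation step can be sketched on a toy 2D "CT slice": starting from a seed pixel, 4-connected neighbors within the intensity window are added to the region. This is a generic illustration, not the Mimics implementation:

```python
from collections import deque

def region_grow(image, seed, lo, hi):
    """Grow a region from `seed`, adding 4-connected neighbors whose
    intensity lies within the threshold window [lo, hi]."""
    rows, cols = len(image), len(image[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region and lo <= image[nr][nc] <= hi):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# toy slice: the top-left blob is in the window, the right blob is disconnected
ct = [
    [100, 100, 10, 10],
    [100, 100, 10, 100],
    [10, 10, 10, 100],
    [100, 10, 100, 100],
]
print(sorted(region_grow(ct, (0, 0), 90, 110)))
```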
Procedia PDF Downloads 366
3387 Building Capacity and Personnel Flow Modeling for Operating amid COVID-19
Authors: Samuel Fernandes, Dylan Kato, Emin Burak Onat, Patrick Keyantuo, Raja Sengupta, Amine Bouzaghrane
Abstract:
The COVID-19 pandemic has spread across the United States, forcing cities to impose stay-at-home and shelter-in-place orders. Building operations had to adjust as non-essential personnel worked from home. But as buildings prepare for personnel to return, they need to plan for safe operations amid new COVID-19 guidelines. In this paper we propose a methodology for capacity and flow modeling of personnel within buildings to safely operate under COVID-19 guidelines. We model personnel flow within buildings by network flows with queuing constraints. We study maximum flow, minimum cost, and minimax objectives. We compare our network flow approach with a simulation model through a case study and present the results. Our results showcase various scenarios of how buildings could be operated under new COVID-19 guidelines and provide a framework for building operators to plan and operate buildings in this new paradigm.
Keywords: network analysis, building simulation, COVID-19
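The network-flow formulation above rests on a max-flow computation. A minimal Edmonds-Karp sketch on a hypothetical four-node building graph (the capacities, in persons per minute, are invented):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:  # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total
        v, bottleneck = sink, float("inf")
        while v != source:  # find the bottleneck residual capacity
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:  # augment along the path (with residual edges)
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# nodes: 0 = entrance, 1 = lobby, 2 = stairwell, 3 = office floor
cap = [
    [0, 10, 5, 0],
    [0, 0, 4, 8],
    [0, 0, 0, 6],
    [0, 0, 0, 0],
]
print(max_flow(cap, 0, 3))  # people per minute admissible under the limits
```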
Procedia PDF Downloads 160
3386 Extracellular Polymeric Substances (EPS) Attribute to Biofouling of Anaerobic Membrane Bioreactor: Adhesion and Viscoelastic Properties
Authors: Kbrom Mearg Haile
Abstract:
Introduction: Membrane fouling is the bottleneck for robust continuous operation of the anaerobic membrane bioreactor (AnMBR). It is primarily caused by the characteristics of the mixed liquor suspended solids (MLSS), which are formed by aggregated flocs and a scaffold of microbial self-produced extracellular polymeric substances (EPS) that dictates the flocs' integrity. Accordingly, the adhesion of EPS to the membrane surface, versus their role in forming firm, elastic, and mechanically stable flocs under the reactor's hydraulic shear, is critical for minimizing interactions of EPS and colloids originating from the MLSS flocs with the membrane. This study aims to gain insight into the effect of MLSS floc properties, EPS adhesion and viscoelasticity, and the viscoelastic properties of the sludge on membrane fouling propensity. Experimental: As a working hypothesis, either a coagulant or a surfactant was added during AnMBR operation to alter the aforementioned floc and EPS properties. Two flat-sheet 300 kDa pore size polyethersulfone (PES) membranes with a total filtration area of 352 cm2 were immersed in the AnMBR system treating municipal wastewater of Midreshet Ben-Gurion village in the Negev highlands, Israel. The system temperature, pH, biogas recirculation, and hydraulic retention time were regulated. TMP fluctuations during a 30-day experiment were recorded under three operating conditions: baseline (without the addition of a coagulating or dispersing agent), coagulant addition (FeCl3), and surfactant addition (sodium dodecyl sulfate). At the end of each experiment, EPS were extracted from the MLSS and from the fouled membrane, characterized for their protein, polysaccharide, and DOC contents, and correlated with the fouling tendency of the submerged UF membrane.
The EPS adherence and viscoelastic properties were revealed using QCM-D, with a PES-coated gold sensor used as a membrane-mimicking surface providing detailed real-time EPS adhesion data. The associated shifts in the resonance frequency and dissipation at different overtones were further modeled using the Voigt-based viscoelastic model (using Dfind software, Q-Sense Biolin Scientific), from which the thickness, shear modulus, and shear viscosity of the EPS layers adsorbed on the PES-coated sensor were calculated. Results and discussion: The QCM-D analysis indicates a greater decrease in the frequency shift for the elevated membrane fouling scenarios, likely due to an observed decrease in the calculated shear viscosity and shear modulus of the adsorbed EPS layer, coupled with an increase in the EPS layer's hydrated thickness and fluidity (ΔD/Δf slopes). Further analysis is being conducted for the three major operating conditions, analyzing their effects on sludge rheology, dewaterability (capillary suction time, CST), and settleability (sludge volume index, SVI). The biofouling layer is further characterized microscopically using a confocal laser scanning microscope (CLSM) and a scanning electron microscope (SEM), to check the consistency of the biofouling layer's development with the sludge characteristics: a thicker biofouling layer on the membrane surface when operated with surfactant addition, due to flocs with reduced integrity and greater availability of EPS/colloids to the membrane, and conversely a thinner layer when operated with coagulant, due to elevated floc integrity, compared to the baseline experiment.
Keywords: viscoelasticity, biofouling, AnMBR, EPS, floc integrity
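For context on the QCM-D frequency shifts above: for thin rigid films, the Sauerbrey relation converts a frequency shift into areal mass, Δm = -C·Δf/n. Soft hydrated EPS layers violate the rigidity assumption, which is why the study uses the Voigt model instead; the constant below applies to a 5 MHz crystal:

```python
def sauerbrey_mass(delta_f_hz, overtone=3, c_sens=17.7):
    """Sauerbrey areal mass (ng/cm^2) from a QCM-D frequency shift.
    c_sens = 17.7 ng/(cm^2*Hz) for a 5 MHz AT-cut crystal; valid only for
    thin, rigid films, so this is a lower-bound sanity check for soft EPS."""
    return -c_sens * delta_f_hz / overtone

# a hypothetical -30 Hz shift at the third overtone during EPS adsorption
print(round(sauerbrey_mass(-30.0, overtone=3), 1))
```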
Procedia PDF Downloads 22
3385 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network
Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem
Abstract:
The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance; these residuals were predicted using two different approaches. In the first approach, a local linear wavelet neural network model (LLWNN) was developed to predict the conditional variance using the backpropagation learning algorithm. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity process (G-GARCH) was adopted, and the parameters of the k-factor GARMA-G-GARCH model were estimated using the wavelet methodology based on the discrete wavelet packet transform (DWPT) approach. The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting
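The conditional-variance recursion at the core of any GARCH-type step can be sketched with a plain GARCH(1,1), a simplification of the G-GARCH process used in the paper. The residuals and parameters below are invented:

```python
def garch_variance(residuals, omega, alpha, beta):
    """Conditional variance recursion:
    sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1},
    started from the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for e in residuals[:-1]:
        sigma2.append(omega + alpha * e * e + beta * sigma2[-1])
    return sigma2

# hypothetical conditional-mean residuals for an electricity price series
eps = [0.1, -0.4, 0.9, -0.2, 0.05]
s2 = garch_variance(eps, omega=0.02, alpha=0.1, beta=0.8)
print(len(s2) == len(eps), all(v > 0 for v in s2))
```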
Procedia PDF Downloads 231
3384 PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria
Authors: Snezhana G. Gocheva-Ilieva, Maya P. Stoimenova
Abstract:
Ambient air pollution with fine particulate matter (PM10) is a systematic permanent problem in many countries around the world. The accumulation of a large number of measurements of both the PM10 concentrations and the accompanying atmospheric factors allows for their statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria, for a period of 5 years are used. Predictors in the models are seven meteorological variables and time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days ahead of the last date in the modeling procedure and show very accurate results.
Keywords: cross-validation, decision tree, lagged variables, short-term forecasting
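Constructing the lagged predictors described above is a simple windowing operation. The PM10 values below are illustrative:

```python
def lagged_features(series, lags=(1, 2)):
    """Build (features, target) pairs where the features are the series
    delayed by each lag, as with the lagged CART predictors."""
    start = max(lags)
    rows = []
    for t in range(start, len(series)):
        rows.append(([series[t - lag] for lag in lags], series[t]))
    return rows

pm10 = [31.0, 28.5, 40.2, 37.8, 45.1]  # daily mean PM10, ug/m^3 (made up)
for features, target in lagged_features(pm10):
    print(features, target)
```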
Procedia PDF Downloads 194
3383 Modeling of the Random Impingement Erosion Due to the Impact of Solid Particles
Authors: Siamack A. Shirazi, Farzin Darihaki
Abstract:
Solid particles can be found in many multiphase flows, including transport pipelines and pipe fittings. Such particles interact with the pipe material and cause erosion, which threatens the integrity of the system. Therefore, predicting the erosion rate is an important factor in the design and monitoring of such systems. Mechanistic models can provide reliable predictions for many conditions while demanding only relatively low computational cost. Mechanistic models utilize a representative particle trajectory to predict the impact characteristics of the majority of the particle impacts that cause the maximum erosion rate in the domain. The erosion caused by particle impacts is due not only to direct impacts but also to random impingements. In the present study, an alternative model has been introduced to describe the erosion due to random impingement of particles. The present model provides a realistic trend for erosion with changes in the particle size and particle Stokes number. The present model is examined against experimental data and CFD simulation results and indicates better agreement with the data in comparison to the available models in the literature.
Keywords: erosion, mechanistic modeling, particles, multiphase flow, gas-liquid-solid
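Mechanistic erosion correlations of the kind discussed above typically take the form ER = K·F(θ)·V^n, with an impact-angle function F that peaks at shallow angles for ductile materials. The constants and angle function below are illustrative only, not the authors' model:

```python
import math

def erosion_ratio(velocity, angle_rad, K=2e-9, n=2.41):
    """Generic mechanistic-style erosion correlation ER = K * F(theta) * V^n.
    K, n and the Finnie-like angle function are placeholder values chosen
    only to show the qualitative shape, not a calibrated model."""
    f_theta = math.sin(2 * angle_rad) - 0.5 * math.sin(angle_rad) ** 2
    return K * max(f_theta, 0.0) * velocity ** n

# for ductile materials, shallow impacts erode more than normal impacts
shallow = erosion_ratio(20.0, math.radians(30))
normal = erosion_ratio(20.0, math.radians(90))
print(shallow > normal)
```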
Procedia PDF Downloads 169
3382 Approaches to Valuing Ecosystem Services in Agroecosystems from the Perspectives of Ecological Economics and Agroecology
Authors: Sandra Cecilia Bautista-Rodríguez, Vladimir Melgarejo
Abstract:
Climate change, loss of ecosystems, increasing poverty, increasing marginalization of rural communities and declining food security are global issues that require urgent attention. In this regard, a great deal of research has focused on how agroecosystems respond to these challenges, as they provide ecosystem services (ES) that lead to higher levels of resilience, adaptation, productivity and self-sufficiency. Hence, the valuation of ecosystem services plays an important role in the decision-making process for the design and management of agroecosystems. This paper aims to define the link between ecosystem service valuation methods and ES value dimensions in agroecosystems from the perspectives of ecological economics and agroecology. The valuation methodologies were identified through a literature review in the fields of agroecology and ecological economics, based on a strategy of information search and classification. The conceptual framework of the work is based on the multidimensionality of value, considering the social, ecological, political, technological and economic dimensions. Likewise, the valuation process requires consideration of the ecosystem functions associated with ES, such as regulation, habitat, production and information functions. In this way, valuation methods for ES in agroecosystems can integrate more than one value dimension and at least one ecosystem function. The results make it possible to correlate the ecosystem functions with the ecosystem services valued, the specific tools or models used, the value dimensions and the valuation methods. The main methodologies identified are: (1) multi-criteria valuation; (2) deliberative-consultative valuation; (3) valuation based on system dynamics modeling; (4) valuation through energy or biophysical balances; (5) valuation through fuzzy logic modeling; and (6) valuation based on agent-based modeling.
Amongst the main conclusions, it is highlighted that the system dynamics modeling approach has high potential for development in valuation processes, due to its ability to integrate other methods, especially multi-criteria valuation and energy or biophysical balances, and to describe through causal loops the interrelationships between ecosystem services and the dimensions of value in agroecosystems, thus showing the relationships between the value of ecosystem services and the welfare of communities. As for methodological challenges, it remains relevant to integrate the tools and models provided by the different methods so as to capture the characteristics of a complex system such as the agroecosystem, thereby reducing the limitations of ES valuation processes. Keywords: ecological economics, agroecosystems, ecosystem services, valuation of ecosystem services
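The system dynamics valuation approach highlighted above can be illustrated with a minimal stock-flow sketch: one stock (here soil organic matter, standing in for a regulation service) driven by an inflow and a proportional loss. The model structure and all parameter values are assumptions for illustration only:

```python
# Minimal system-dynamics sketch: one stock with inflow and outflow,
# integrated with Euler steps. Parameters are illustrative assumptions.
def simulate(stock=100.0, inflow=5.0, loss_rate=0.04, years=50, dt=1.0):
    """Integrate dS/dt = inflow - loss_rate * S and return the trajectory."""
    trajectory = [stock]
    for _ in range(int(years / dt)):
        stock += (inflow - loss_rate * stock) * dt
        trajectory.append(stock)
    return trajectory

traj = simulate()
# The stock rises toward its equilibrium inflow / loss_rate = 125.
```

In a full valuation model, causal loops would couple several such stocks (services) to the social and economic value dimensions.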
Procedia PDF Downloads 123
3381 2D-Modeling with Lego Mindstorms
Authors: Miroslav Popelka, Jakub Nozicka
Abstract:
This work is based on the possibility of using the Lego Mindstorms robotics system to reduce costs. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, program and test robotic systems in practice. The algorithm that scans the space with the ultrasonic sensor was programmed in the development environment supplied with the kit. The software Matlab was used to render the values after they were measured by the ultrasonic sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by an ultrasonic sensor that scans the 2D space in front of it. The ultrasonic sensor is placed on a moving arm of the robot, which provides horizontal movement of the sensor, while vertical movement is provided by the wheel drive. The robot follows a map in order to obtain correct positioning of the measured data. Based on the discovered facts, Lego Mindstorms can be considered a low-cost and capable kit for real-time modelling. Keywords: LEGO Mindstorms, ultrasonic sensor, real-time modeling, 2D object, low-cost robotics systems, sensors, Matlab, EV3 Home Edition Software
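The core post-processing step such a sweep requires is converting (arm angle, measured distance) pairs into 2D coordinates of the scanned space. A hedged sketch follows (in Python rather than the authors' Matlab; the sample sweep values are assumptions):

```python
# Sketch: mapping ultrasonic range readings taken during a horizontal
# sensor sweep into 2D points. Sample readings below are assumptions.
import math

def polar_to_points(readings):
    """readings: (angle_deg, distance_cm) pairs from the sweeping sensor."""
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for a, d in readings]

scan = [(0, 50), (45, 70), (90, 50)]   # assumed sample sweep
points = polar_to_points(scan)
```

Plotting such points for each vertical position of the sensor yields the 2D model of the obstacle field in front of the robot.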
Procedia PDF Downloads 473
3380 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the techniques that have been used to advantage in modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to the information processing of a one-dimensional task. We aim to introduce a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. To show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the number of neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for computers with large memory. Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories
Procedia PDF Downloads 283
3379 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process included the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the “weights” of the elements of the system. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced.
As a result, structural analysis of systems, research, and the design of systems with an optimal structure are carried out. Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements
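The passage from minimal success paths to a reliability value can be sketched with inclusion-exclusion over the path events (a stand-in for the orthogonalization algorithm, which achieves the same polynomial more economically). The example structure and component reliabilities are assumptions:

```python
# Sketch: system reliability from minimal path sets via inclusion-exclusion.
# The two-branch redundant structure and probabilities are assumptions.
from itertools import combinations

def system_reliability(paths, p):
    """paths: minimal path sets (sets of component ids);
    p: per-component reliability. Returns P(at least one path works)."""
    total = 0.0
    for k in range(1, len(paths) + 1):
        for combo in combinations(paths, k):
            union = set().union(*combo)     # components in this intersection event
            term = 1.0
            for c in union:
                term *= p[c]
            total += (-1) ** (k + 1) * term
    return total

# Two redundant series branches: {1, 2} and {3, 4}.
p = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.8}
R = system_reliability([{1, 2}, {3, 4}], p)
# R = 0.81 + 0.64 - 0.81 * 0.64 = 0.9316
```
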
Procedia PDF Downloads 66
3378 Modeling the Risk Perception of Pedestrians Using a Nested Logit Structure
Authors: Babak Mirbaha, Mahmoud Saffarzadeh, Atieh Asgari Toorzani
Abstract:
Pedestrians are the most vulnerable road users since they do not have a protective shell. One of the most common collisions they are involved in is the pedestrian-vehicle collision at intersections. In order to develop appropriate countermeasures to improve their safety, research has to be conducted to identify the factors that affect the risk of being involved in such collisions. More specifically, this study investigates the influence of factors such as walking alone or carrying a baby while crossing the street, the observable age of the pedestrian, the speed of the pedestrian and the speed of approaching vehicles on the risk perception of pedestrians. A nested logit model was used for modeling the behavioral structure of pedestrians. The results show that the presence of more lanes at intersections and not being alone, especially carrying a baby while crossing, decrease the probability of risk taking among pedestrians. Also, teenagers appear to show riskier behavior in crossing the street in comparison to other age groups. The speed of approaching vehicles was also found significant: the probability of risk taking among pedestrians decreases as the speed of the approaching vehicle increases, in both the first and the second lanes of crossings. Keywords: pedestrians, intersection, nested logit, risk
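The nested logit form groups similar alternatives into nests and computes the choice probability as P(nest) × P(alternative | nest). A hedged sketch follows; the alternative labels, utilities and nest scale parameter are illustrative assumptions, not the study's estimates:

```python
# Sketch of the nested logit choice probabilities. Utilities, nests and
# the scale parameter lam are illustrative assumptions.
import math

def nested_logit(utilities, nests, lam):
    """utilities: alt -> deterministic utility; nests: nest -> list of alts;
    lam: within-nest scale (0 < lam <= 1)."""
    # Inclusive value (logsum) of each nest.
    iv = {n: math.log(sum(math.exp(utilities[a] / lam) for a in alts))
          for n, alts in nests.items()}
    denom = sum(math.exp(lam * v) for v in iv.values())
    probs = {}
    for n, alts in nests.items():
        p_nest = math.exp(lam * iv[n]) / denom
        within = sum(math.exp(utilities[a] / lam) for a in alts)
        for a in alts:
            probs[a] = p_nest * math.exp(utilities[a] / lam) / within
    return probs

# Assumed crossing behaviours: a "risky" nest and a "cautious" nest.
probs = nested_logit({"run": 0.2, "rolling_gap": 0.0, "wait": 0.5},
                     {"risky": ["run", "rolling_gap"], "cautious": ["wait"]},
                     lam=0.6)
```
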
Procedia PDF Downloads 186
3377 Perspectives of Computational Modeling in Sanskrit Lexicons
Authors: Baldev Ram Khandoliyan, Ram Kishor
Abstract:
India has a classical tradition of Sanskrit lexicons, and research work has been done on the study of Indian lexicography. India has seen great strides in Information and Communication Technology (ICT) applications for Indian languages in general and for Sanskrit in particular. Since machine translation from Sanskrit to other Indian languages is often the desired goal, traditional Sanskrit lexicography has attracted a lot of attention from the ICT and computational linguistics community. From Nighaŋţu and Nirukta to Amarakośa and Medinīkośa, Sanskrit owns a rich history of lexicography. As these kośas do not follow the same typology or standard in the selection and arrangement of the words and the information related to them, several types of kośa styles have emerged in this tradition. The model of grammar given by Aṣṭādhyāyī is well appreciated by Indian and western linguists and grammarians, but the different models provided by the lexicographic tradition are also important. The general usefulness of the traditional Sanskrit kośas, that is, most of the material made available in the texts, has been discussed by some scholars, and some have also discussed the arrangement of the lexica. This paper aims to discuss further uses of the different models of Sanskrit lexicography, especially focusing on their computational modeling and its use in different computational operations. Keywords: computational lexicography, Sanskrit Lexicons, nighanṭu, kośa, Amarkosa
Procedia PDF Downloads 164
3376 Geochemical Modeling of Mineralogical Changes in Rock and Concrete in Interaction with Groundwater
Authors: Barbora Svechova, Monika Licbinska
Abstract:
Geochemical modeling of the mineralogical changes of various materials in contact with an aqueous solution is an important tool for predicting the processes and development of given materials at a site. The modeling focused on the interaction of groundwater with the rock mass and its subsequent influence on concrete structures. The studied locality is situated in Slovakia in the area of the Liptov Basin, a significant inter-mountain lowland bordered on the north and south by the core mountain belt of the Tatras, where in the center the crystalline basement rises to the surface accompanied by its Mesozoic cover. Groundwater in the area is bound to structures with a complicated geological setting. From the hydrogeological point of view, it is an environment of crack-fracture character. The area is characterized by a shallow surface circulation of groundwater without a significant collector structure, and from the chemical point of view, the groundwater has been classified as calcium bicarbonate water with a high content of CO2 and SO4 ions. According to the European standard EN 206-1, these are waters with medium aggressiveness towards concrete. Three rock samples were taken from the area; based on petrographic and mineralogical research, they were evaluated as calcareous shale, micritic limestone and crystalline shale. These three rock samples were placed in demineralized water for one month, and the change in the chemical composition of the water was monitored. During the solution-rock interaction, the concentrations of all major ions except nitrates increased; there was an increase in concentration after a week, but at the end of the experiment the concentration was lower than the initial value. Another experiment was the interaction of groundwater from the studied locality with a concrete structure. The concrete sample was likewise left in the water for one month.
The results of the experiment confirmed the assumption of a reduction in the concentrations of calcium and bicarbonate ions in the water due to the precipitation of amorphous forms of CaCO3 on the surface of the sample. Conversely, it was surprising that the concentrations of sulphates, sodium, iron and aluminum increased due to the leaching of the concrete. The chemical analyses from these experiments were processed in the PHREEQC program, which calculated the probability of the formation of amorphous forms of minerals. From the results of the chemical analyses and the hydrochemical modeling of the water collected in situ and the water from the experiments, it was found that the groundwater at the site is unsaturated and shows medium aggressiveness towards reinforced concrete structures according to EN 206-1, which will affect the homogeneity and integrity of concrete structures, and that the rocks in the given area release Ca, Na, Fe, HCO3 and SO4 ions. Unsaturated waters dissolve everything as soon as they come into contact with the solid matrix; the speed of this process then depends on the physicochemical parameters of the environment (T, ORP, p, n, water retention time in the environment, etc.). Keywords: geochemical modeling, concrete, dissolution, PHREEQc
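The saturation check at the heart of this kind of PHREEQC modeling can be sketched with the saturation index, SI = log10(IAP/Ksp), here for calcite. The ion activities below are illustrative assumptions, not the measured site chemistry:

```python
# Sketch: calcite saturation index, SI = log10(IAP) - log10(Ksp).
# Activities are illustrative assumptions, not the site data.
import math

def saturation_index(a_ca, a_co3, log_ksp=-8.48):
    """SI < 0: water is undersaturated and tends to dissolve calcite;
    SI > 0: calcite tends to precipitate. log_ksp is for calcite at 25 C."""
    iap = a_ca * a_co3          # ion activity product for CaCO3
    return math.log10(iap) - log_ksp

si = saturation_index(a_ca=1e-3, a_co3=1e-6)
# SI = -9 - (-8.48) = -0.52 < 0: undersaturated, i.e. aggressive water.
```
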
Procedia PDF Downloads 197
3375 Development of a Multi-Factorial Instrument for Accident Analysis Based on Systemic Methods
Authors: C. V. Pietreanu, S. E. Zaharia, C. Dinu
Abstract:
The present research is built on three major pillars, commencing with some considerations on accident investigation methods and pointing out both the defining aspects of and the differences between linear and non-linear analysis. The traditional linear approach to accident analysis describes accidents as a sequence of events, while the latest systemic models outline the interdependencies between different factors and define the evolution of processes relative to a specific (normal) situation. Linear and non-linear accident analysis methods have specific limitations, so the second point of interest is the aim to uncover the drawbacks of systemic models, which becomes a starting point for developing new directions to identify risks or data closer to the cause of incidents and accidents. Since communication represents a critical issue in the interaction of the human factor and has been shown to lie behind problems caused by breakdowns in different communication procedures, the third pillar elaborates a new error-modeling instrument suitable for risk assessment and accident analysis. Keywords: accident analysis, multi-factorial error modeling, risk, systemic methods
Procedia PDF Downloads 208
3374 Roadmaps as a Tool of Innovation Management: System View
Authors: Matich Lyubov
Abstract:
Today roadmaps are becoming commonly used tools for detecting and designing a desired future for companies, states and the international community. The growing popularity of this method puts on the agenda tasks such as identifying basic roadmapping principles, creating concepts and determining the characteristics of the use of roadmaps depending on the objectives, as well as the restrictions and opportunities specific to the study area. However, the system approach, i.e. the elements recognized as essential for high-quality roadmapping, remains one of the main fields for improving the methodology and practice of roadmap development, as only limited research has been devoted to a detailed analysis of roadmaps from the view of the system approach. Therefore, this article is an attempt to examine roadmaps from the view of systems analysis and to compare the areas where, as a rule, roadmaps and systems analysis are considered the most effective tools. To compare the structure and composition of roadmaps and systems models, the identification of common points between the construction stages of roadmaps and system modeling, and the determination of future directions for researching roadmaps from a systems perspective, are of special importance. Keywords: technology roadmap, roadmapping, systems analysis, system modeling, innovation management
Procedia PDF Downloads 311