Search results for: modeling of geomaterials
3301 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System
Authors: Karima Qayumi, Alex Norta
Abstract:
The rapid generation of high-volume and highly varied data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. The resulting decentralized data-management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, implementing distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and from all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business-intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)
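As a purely illustrative note on the distributed data-mining idea described above (not the authors' AOM artifact), the following minimal Python sketch has local "agent" workers mine their own data partitions and a coordinator merge the resulting local models; the data and names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# hypothetical stand-in for multimedia metadata partitions held at different sites
partitions = [
    ["sports", "news", "sports", "music"],
    ["music", "music", "news"],
    ["sports", "news", "news", "news"],
]

def mining_agent(local_data):
    """Each agent mines only its local dataset and returns a compact local model (here: tag counts)."""
    return Counter(local_data)

# each agent works on its own partition; only the mined models travel to the coordinator
with ThreadPoolExecutor() as pool:
    local_models = list(pool.map(mining_agent, partitions))

# a coordinator agent aggregates the local models into a global business-intelligence view
global_model = sum(local_models, Counter())
print(global_model.most_common())
```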
Procedia PDF Downloads 432
3300 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field
Authors: Yana Snegireva
Abstract:
Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may cause great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured; such investigation can be difficult due to the complexity of the fracture networks and can lead to geological uncertainties that matter for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves a hybrid fracture modeling approach, combining the discrete fracture network (DFN) method and an implicit fracture network, which offers enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to prove the appropriateness of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method considers fractures as discrete entities, capturing their geometry, orientation, and connectivity. However, the method has a significant disadvantage: the number of fractures in the field can be very high and, due to limitations in the amount of main memory, it is very difficult to represent all of them explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a DFN model can be constructed to represent the characteristics of the hydraulically relevant fractures. The results obtained from the DFN modeling approach provide valuable insights into the behavior of the East Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and of optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual-porosity and dual-permeability models. This approach considers fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid modeling method in accurately representing fracture systems and predicting reservoir behavior.
The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.
Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model
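As an illustrative aside (not the workflow used in the study), the sketch below shows the core idea of a stochastic DFN in 2D: fractures are generated as line segments from assumed statistics for position, orientation, and length, and intersections between segments are counted as a crude proxy for network connectivity.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_frac = 50
# random fracture centres, orientations and half-lengths in a 100 m x 100 m block (illustrative statistics)
cx, cy = rng.uniform(0, 100, n_frac), rng.uniform(0, 100, n_frac)
theta = rng.vonmises(np.radians(60), 4.0, n_frac)          # dominant strike with some scatter
half_len = rng.lognormal(mean=1.0, sigma=0.5, size=n_frac)

# each fracture as a segment with two endpoints
segs = np.stack([np.column_stack([cx - half_len * np.cos(theta), cy - half_len * np.sin(theta)]),
                 np.column_stack([cx + half_len * np.cos(theta), cy + half_len * np.sin(theta)])], axis=1)

def intersects(p1, p2, p3, p4):
    # proper segment-segment intersection test via cross-product signs
    d = lambda a, b, c: (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d(p1, p2, p3) * d(p1, p2, p4) < 0) and (d(p3, p4, p1) * d(p3, p4, p2) < 0)

connections = sum(intersects(*segs[i], *segs[j]) for i, j in combinations(range(n_frac), 2))
print(f"{n_frac} fractures, {connections} intersections (a rough proxy for network connectivity)")
```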
Procedia PDF Downloads 76
3299 Material Chemistry Level Deformation and Failure in Cementitious Materials
Authors: Ram V. Mohan, John Rivas-Murillo, Ahmed Mohamed, Wayne D. Hodo
Abstract:
Cementitious materials are an excellent example of highly complex, heterogeneous material systems. These cement-based systems, which include cement paste, mortar, and concrete, are heavily used in civil infrastructure; though commonly used, they are among the most complex materials in terms of morphology and structure, far more so than, for example, crystalline metals. Processes and features occurring at nanometer-sized morphological structures affect the performance and the deformation/failure behavior at larger length scales. In addition, cementitious materials undergo chemical and morphological changes, gaining strength during the transient hydration process. Hydration in cement is a very complex process, creating complex microstructures and associated molecular structures that vary with hydration. A fundamental understanding of the behavior and properties of cementitious materials can be gained through multi-scale modeling, starting from the material-chemistry (atomistic) level, to further explore their role and the manifested effects at larger length and engineering scales. This predictive modeling enables understanding and studying the influence of material-chemistry-level changes and nanomaterial additives on the expected material characteristics and deformation behavior. Atomistic molecular-dynamics-level modeling is required to couple material science to engineering mechanics. Starting at the molecular level, a comprehensive description of the material's chemistry is required to understand the fundamental properties that govern behavior across each relevant length scale. Material-chemistry-level models and molecular dynamics simulations are employed in our work to describe the molecular-level chemistry features of calcium-silicate-hydrate (CSH), one of the key hydrated constituents of cement paste, and its associated deformation and failure. The molecular-level atomic structure of CSH can be represented by the Jennite mineral structure, which has been widely accepted by researchers and is typically used to represent the molecular structure of the CSH gel formed during the hydration of cement clinkers. This paper focuses on our recent work on the shear and compressive deformation and failure behavior of CSH represented by this Jennite structure. The deformation and failure behavior of traditionally hydrated CSH under shear and compression loading, the effect of material-chemistry changes on the predicted stress-strain behavior, the transition from linear to non-linear behavior, and the identification of the onset of failure based on the material chemistry structure of CSH Jennite and changes to that structure will be discussed.
Keywords: cementitious materials, deformation, failure, material chemistry modeling
Procedia PDF Downloads 287
3298 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies are facing many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which can create challenges around data management and governance. Furthermore, integrating data from multiple systems and technologies is also a challenge. Despite these pains, companies are still pursuing digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating different business models and revenue streams. This paper focuses on the issue that data are stored in data silos with different schemas and structures. The conventional approaches to addressing this issue involve utilizing data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates the Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling
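Purely as an illustration of the kind of semantic lifting and knowledge-graph storage described above, the following sketch builds a tiny RDF graph with rdflib; the namespaces, concept names, and the asset are hypothetical placeholders, not actual AAS or ECLASS identifiers.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# hypothetical namespaces and concept names -- placeholders, not real AAS/ECLASS identifiers
AAS = Namespace("https://example.org/aas/")
ECLASS = Namespace("https://example.org/eclass/")

g = Graph()
motor = URIRef("https://example.org/assets/motor-42")

g.add((motor, RDF.type, AAS.AssetAdministrationShell))
g.add((motor, RDFS.label, Literal("Drive motor 42")))

# "semantic lifting": attach a plant-floor property and point it at a dictionary concept
g.add((motor, AAS.hasProperty, ECLASS.ratedRotationalSpeed))
g.add((ECLASS.ratedRotationalSpeed, RDFS.label, Literal("rated rotational speed")))
g.add((ECLASS.ratedRotationalSpeed, AAS.value, Literal(1450)))

print(g.serialize(format="turtle"))
```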
Procedia PDF Downloads 94
3297 Post-Earthquake Damage Detection Using System Identification with a Pair of Seismic Recordings
Authors: Lotfi O. Gargab, Ruichong R. Zhang
Abstract:
A wave-based framework is presented for modeling seismic motion in multistory buildings and using the measured response for system identification, which can be utilized to extract important information regarding structural integrity. With one pair of building responses recorded at two locations, a generalized model response is formulated based on wave-propagation features and expressed as frequency- and time-domain response functions, denoted GFRF and GIRF, respectively. In particular, the GIRF is fundamental in tracking the arrival times of impulsive wave motion initiated at the response level, which depend on local model properties. Matching model and measured-structure responses can help in identifying model parameters and inferring building properties. To show the effectiveness of this approach, the Millikan Library in Pasadena, California is identified with recordings of the Yorba Linda earthquake of September 3, 2002.
Keywords: system identification, continuous-discrete mass modeling, damage detection, post-earthquake
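As a generic illustration of how an impulse response function can be extracted from a pair of recordings (a common interferometric deconvolution step, not necessarily the authors' exact formulation), a minimal numpy sketch with synthetic data might look like this:

```python
import numpy as np

def gfrf_girf(resp_top, resp_bottom, dt, water_level=0.05):
    """Regularized spectral division of two recordings -> GFRF, then inverse FFT -> GIRF."""
    n = len(resp_top)
    top = np.fft.rfft(resp_top)
    bot = np.fft.rfft(resp_bottom)
    reg = water_level * np.mean(np.abs(bot) ** 2)          # water level avoids division by ~0
    gfrf = top * np.conj(bot) / (np.abs(bot) ** 2 + reg)   # frequency response function
    girf = np.fft.irfft(gfrf, n)                           # impulse response function
    return np.arange(n) * dt, gfrf, girf

# synthetic check: the "top" record is the "bottom" record delayed by 0.25 s (wave travel time)
dt, n = 0.01, 2048
bottom = np.random.default_rng(0).normal(size=n)
top = np.roll(bottom, int(0.25 / dt))
t, gfrf, girf = gfrf_girf(top, bottom, dt)
print("estimated travel time:", t[np.argmax(girf)], "s")
```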
Procedia PDF Downloads 370
3296 Numerical Study of the Influence of the Primary Stream Pressure on the Performance of the Ejector Refrigeration System Based on Heat Exchanger Modeling
Authors: Elhameh Narimani, Mikhail Sorin, Philippe Micheau, Hakim Nesreddine
Abstract:
Numerical models of the heat exchangers in an ejector refrigeration system (ERS) were developed and validated with experimental data. The models were based on the switched heat-exchanger model using the moving boundary method and were capable of estimating the zones' lengths, the outlet temperatures on both sides, and the heat loads at various experimental points. The developed models were utilized to investigate the influence of the primary flow pressure on the performance of an R245fa ERS based on its coefficient of performance (COP) and exergy efficiency. It was illustrated numerically and proved experimentally that increasing the primary flow pressure slightly reduces the COP, while the exergy efficiency goes through a maximum before decreasing.
Keywords: Coefficient of Performance, COP, Ejector Refrigeration System, ERS, exergy efficiency (ηII), heat exchangers modeling, moving boundary method
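For readers unfamiliar with the two performance metrics compared above, a minimal sketch of how COP and exergy efficiency can be evaluated from cycle heat loads is given below; all numerical values are assumed for illustration and are not the experimental data of the study.

```python
# illustrative energy balance for an ejector refrigeration cycle (values assumed, not measured)
Q_gen, Q_evap, W_pump = 9.0, 3.2, 0.15            # generator heat, cooling load, pump work [kW]
T0, T_evap, T_gen = 298.15, 278.15, 368.15        # ambient, evaporator, generator temperatures [K]

COP = Q_evap / (Q_gen + W_pump)

# exergy efficiency: cooling exergy delivered over exergy supplied by the heat source and pump
Ex_cooling = Q_evap * (T0 / T_evap - 1.0)         # exergy of heat removed below ambient temperature
Ex_supplied = Q_gen * (1.0 - T0 / T_gen) + W_pump
eta_II = Ex_cooling / Ex_supplied
print(f"COP = {COP:.3f}, exergy efficiency = {eta_II:.3f}")
```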
Procedia PDF Downloads 202
3295 PWM-Based Control of D-STATCOM for Voltage Sag and Swell Mitigation in Distribution Systems
Authors: A. Assif
Abstract:
This paper presents the modeling of a prototype distribution static compensator (D-STATCOM) for voltage sag and swell mitigation in an unbalanced distribution system. Here, the concept that an inverter can be used as a generalized impedance converter to realize either inductive or capacitive reactance has been used to mitigate power quality issues of distribution networks. The D-STATCOM is here intended to replace the widely used Static Var Compensator (SVC). The scheme is based on the Voltage Source Converter (VSC) principle. In this model, a PWM-based control scheme has been implemented to control the electronic valves of the VSC, and a phase-shift control algorithm is used for converter control. The D-STATCOM injects a current into the system to mitigate the voltage sags. In this paper, the modeling of the D-STATCOM has been carried out in MATLAB/Simulink. Accordingly, simulations are first carried out to illustrate the use of the D-STATCOM in mitigating voltage sag in a distribution system. Simulation results prove that the D-STATCOM is capable of mitigating voltage sag as well as improving the power quality of a system.
Keywords: D-STATCOM, voltage sag, voltage source converter (VSC), phase shift control
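As a small illustration of the PWM principle used to drive the VSC valves (a generic sine-triangle comparison, not the specific phase-shift controller of the paper), consider the following sketch; all parameter values are assumed.

```python
import numpy as np

# sine-triangle PWM for one VSC leg; all parameter values are assumed for illustration
f_ref, f_carrier, m_a = 50.0, 3000.0, 0.8          # fundamental [Hz], carrier [Hz], modulation index
t = np.arange(0.0, 0.04, 1e-6)                     # two fundamental cycles

reference = m_a * np.sin(2 * np.pi * f_ref * t)    # sinusoidal reference (phase-shift it to change injected current)
carrier = 4 * np.abs((f_carrier * t) % 1.0 - 0.5) - 1.0   # triangular carrier in [-1, 1]
gate_upper = (reference > carrier).astype(int)     # firing signal of the upper valve (lower valve is complementary)

print("average duty over the window:", gate_upper.mean())   # ~0.5 for a symmetric reference
```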
Procedia PDF Downloads 344
3294 Instant Fire Risk Assessment Using Artificial Neural Networks
Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan
Abstract:
Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fires are very effective in preventing a fire from becoming dangerous in its initial stage. These indices provide the opportunity to prevent or intervene early by determining the stage of the fire, the potential for hazard, and the type of combustion agent from the percentage values of the ambient air components. In this system, an artificial neural network of the multi-layer perceptron (supervised learning) type is trained on the input data with the Levenberg-Marquardt algorithm, following the modeling methods in the literature. The actual values produced by the indices will be compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination will be investigated.
Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index
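A minimal sketch of Levenberg-Marquardt training of a small multi-layer perceptron is shown below, using scipy's least-squares solver; the synthetic data and network size are assumptions for illustration only and do not reproduce the fire indices of the study.

```python
import numpy as np
from scipy.optimize import least_squares

# synthetic stand-in: three scaled index values -> a risk score (assumed relationship)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2] + 0.05 * rng.normal(size=200)

n_in, n_hid = X.shape[1], 6

def unpack(p):
    i = 0
    W1 = p[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid]; i += n_hid
    b2 = p[i]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)          # single hidden layer
    return h @ W2 + b2

def residuals(p):
    return forward(p, X) - y

p0 = 0.1 * rng.normal(size=n_in * n_hid + n_hid + n_hid + 1)
sol = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt optimization of the weights
pred = forward(sol.x, X)
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```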
Procedia PDF Downloads 138
3293 Energy Consumption Modeling for Strawberry Greenhouse Crop by Adaptive Neuro-Fuzzy Inference System Technique: A Case Study in Iran
Authors: Azar Khodabakhshi, Elham Bolandnazar
Abstract:
Agriculture, as the most important food-producing sector, is not only an energy consumer but is also known as an energy supplier. Energy use is considered a helpful parameter for analyzing and evaluating agricultural sustainability. In this study, the pattern of energy consumption of strawberry greenhouses in Jiroft, Kerman province, Iran, was surveyed. The total input energy required in strawberry production was calculated as 113314.71 MJ/ha. Electricity, contributing 38.34% of the total energy, was the largest energy consumer in strawberry production. In this study, neuro-fuzzy networks were used for modeling the strawberry yield function. Results showed that the best model for predicting strawberry yield had a correlation coefficient, root mean square error (RMSE), and mean absolute percentage error (MAPE) of 0.9849, 0.0154 kg/ha, and 0.11%, respectively. Given these results, it can be said that the neuro-fuzzy method can predict and model the strawberry crop yield well.
Keywords: crop yield, energy, neuro-fuzzy method, strawberry
Procedia PDF Downloads 383
3292 Modeling of Oxygen Supply Profiles in Stirred-Tank Aggregated Stem Cells Cultivation Process
Authors: Vytautas Galvanauskas, Vykantas Grincas, Rimvydas Simutis
Abstract:
This paper investigates a possible practical solution for reasonable oxygen supply during pluripotent stem cell expansion processes, where the stem cells propagate as aggregates in stirred-suspension bioreactors. Low glucose and low oxygen concentrations are preferred for efficient proliferation of pluripotent stem cells. However, strong oxygen limitation, especially inside cell aggregates, can lead to cell starvation and death. In this research, the oxygen concentration profile inside stem cell aggregates in a stem cell expansion process was predicted using a modified oxygen diffusion model. This profile can be realized during the cultivation process by manipulating the oxygen concentration in the inlet gas or the inlet gas flow. The proposed approach is relatively simple and may be attractive for installation in real pluripotent stem cell expansion processes.
Keywords: aggregated stem cells, dissolved oxygen profiles, modeling, stirred-tank, 3D expansion
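To give a feel for the kind of intra-aggregate profile such a diffusion model predicts, here is a minimal sketch of the classical steady-state solution for a spherical aggregate with zeroth-order oxygen uptake; all parameter values are illustrative assumptions, not the authors' modified model or data.

```python
import numpy as np

# steady-state O2 profile in a spherical cell aggregate with (near) zeroth-order uptake
# all parameter values below are illustrative assumptions, not measured data
D = 2.0e-9          # effective O2 diffusivity in the aggregate [m^2/s]
q = 2.0e-3          # volumetric O2 consumption rate [mol/(m^3 s)]
C_s = 0.10          # O2 concentration maintained at the aggregate surface [mol/m^3] (low-oxygen culture)
R = 300e-6          # aggregate radius [m]

r = np.linspace(0.0, R, 101)
C = np.maximum(C_s - q * (R**2 - r**2) / (6.0 * D), 0.0)   # analytic solution of D * laplacian(C) = q

R_crit = np.sqrt(6.0 * D * C_s / q)    # radius at which the centre just becomes anoxic
print(f"centre O2: {C[0]*1e3:.1f} mmol/m^3, critical aggregate radius: {R_crit*1e6:.0f} um")
```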
Procedia PDF Downloads 306
3291 Solid-Liquid-Solid Interface of Yakam Matrix: Mathematical Modeling of the Contact Between an Aircraft Landing Gear and a Wet Pavement
Authors: Trudon Kabangu Mpinga, Ruth Mutala, Shaloom Mbambu, Yvette Kalubi Kashama, Kabeya Mukeba Yakasham
Abstract:
A mathematical model is developed to describe the contact dynamics between the landing gear wheels of an aircraft and a wet pavement during landing. The model is based on nonlinear partial differential equations, using the Yakam Matrix to account for the interaction between solid, liquid, and solid phases. This framework incorporates the influence of environmental factors, particularly water or rain on the runway, on braking performance and aircraft stability. Given the absence of exact analytical solutions, our approach enhances the understanding of key physical phenomena, including Coulomb friction forces, hydrodynamic effects, and the deformation of the pavement under the aircraft's load. Additionally, the dynamics of aquaplaning are simulated numerically to estimate the braking performance limits on wet surfaces, thereby contributing to strategies aimed at minimizing risk during landing on wet runways.
Keywords: aircraft, modeling, simulation, yakam matrix, contact, wet runway
Procedia PDF Downloads 15
3290 Physical Modeling of Woodwind Ancient Greek Musical Instruments: The Case of Plagiaulos
Authors: Dimitra Marini, Konstantinos Bakogiannis, Spyros Polychronopoulos, Georgios Kouroupetroglou
Abstract:
Archaeomusicology cannot entirely depend on the study of excavated ancient musical instruments, as their condition is most of the time not ideal (i.e., missing or eroded parts) and, moreover, because of the concern of damaging the originals during experiments. Researchers, in order to overcome the above obstacles, build replicas. This technique is still the most popular one, although it is rather expensive and time-consuming. Throughout the last decades, the development of physical modeling techniques has provided tools that enable the study of musical instruments through their digitally simulated models. This is not only a more cost- and time-efficient technique but also provides additional flexibility, as the user can easily modify parameters such as geometrical features and materials. This paper thoroughly describes the steps to create a physical model of a woodwind ancient Greek instrument, the Plagiaulos. This instrument could be considered the ancestor of the modern flute due to the common geometry and air-jet excitation mechanism. The Plagiaulos comprises a single resonator with an open end and a number of tone holes; the combination of closed and open tone holes produces the pitch variations. In this work, the effects of all the instrument's components are described by means of physics and then simulated based on digital waveguides. The synthesized sound of the proposed model complies with the theory, highlighting its validity. Further, the synthesized sound of the model simulating the Plagiaulos of Koile (2nd century BCE) was compared with that of its replica built in our laboratory following the scientific methodologies of archaeomusicology. The aforementioned results verify that robust dynamic digital tools can be introduced in the field of computational, experimental archaeomusicology.
Keywords: archaeomusicology, digital waveguides, musical acoustics, physical modeling
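To illustrate the delay-line principle behind digital waveguide synthesis (in its simplest Karplus-Strong form, not the full air-jet and tone-hole model developed in the paper), a minimal sketch is:

```python
import numpy as np

def waveguide_tone(freq, fs=44100, dur=2.0, damping=0.995):
    """Karplus-Strong style tone: a delay line whose length sets the pitch, with a lossy loop filter."""
    n = int(fs / freq)                  # delay-line length in samples
    buf = np.random.uniform(-1, 1, n)   # broadband burst stands in for the air-jet excitation
    out = np.empty(int(fs * dur))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # simple low-pass loop filter: average of two neighbouring samples, slightly damped
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = waveguide_tone(440.0)            # 440 Hz example; write to a WAV file to listen
print("samples synthesized:", len(tone))
```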
Procedia PDF Downloads 116
3289 Assessing the Effectiveness of Machine Learning Algorithms for Cyber Threat Intelligence Discovery from the Darknet
Authors: Azene Zenebe
Abstract:
Deep learning is a subset of machine learning which incorporates techniques for the construction of artificial neural networks and has been found useful for modeling complex problems with large datasets. Deep learning requires very high computational power and long training times. By aggregating computing power, high-performance computing (HPC) has emerged as an approach to resolving advanced problems and performing data-driven research activities. Cyber threat intelligence (CTI) is actionable information or insight that an organization or individual uses to understand the threats that have targeted, will target, or are currently targeting the organization. Results of a literature review will be presented along with the results of an experimental study that compares the performance of tree-based and function-based machine learning, including deep learning algorithms, using a secondary dataset collected from the darknet.
Keywords: deep learning, cyber security, cyber threat modeling, tree-based machine learning, function-based machine learning, data science
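The kind of comparison described above can be organized as in the following scikit-learn sketch; the synthetic feature matrix is only a stand-in for a labeled darknet dataset, and the two model families are examples of the tree-based and function-based classes rather than the exact algorithms evaluated in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# synthetic placeholder for a labeled darknet-traffic feature set
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12, random_state=0)

models = {
    "tree-based (random forest)": RandomForestClassifier(n_estimators=200, random_state=0),
    "function-based (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")   # same folds for a fair comparison
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```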
Procedia PDF Downloads 155
3288 Application of Causal Inference and Discovery in Curriculum Evaluation and Continuous Improvement
Authors: Lunliang Zhong, Bin Duan
Abstract:
The undergraduate graduation project is a vital part of the higher education curriculum, crucial for engineering accreditation. Current evaluations often summarize data without identifying underlying issues. This study applies the Peter-Clark algorithm to analyze causal relationships within the graduation project data of an Electronics and Information Engineering program, creating a causal model. Structural equation modeling confirmed the model's validity. The analysis reveals key teaching stages affecting project success, uncovering problems in the process. Introducing causal discovery and inference into project evaluation helps identify issues and propose targeted improvement measures. The effectiveness of these measures is validated by comparing the learning outcomes of two student cohorts, stratified by confounding factors, leading to improved teaching quality.
Keywords: causal discovery, causal inference, continuous improvement, Peter-Clark algorithm, structural equation modeling
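For orientation, the Peter-Clark (PC) algorithm starts from a complete undirected graph and removes an edge whenever the two variables are found conditionally independent given some subset of the others. The simplified sketch below implements only the skeleton-discovery phase with Fisher-z partial-correlation tests on synthetic data; it is a didactic approximation, not the implementation used in the study.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def ci_test(data, i, j, cond, alpha=0.05):
    """Fisher-z conditional-independence test via partial correlation."""
    n = data.shape[0]
    if cond:
        Z = np.column_stack([np.ones(n)] + [data[:, k] for k in cond])
        beta_i, *_ = np.linalg.lstsq(Z, data[:, i], rcond=None)
        beta_j, *_ = np.linalg.lstsq(Z, data[:, j], rcond=None)
        xi, xj = data[:, i] - Z @ beta_i, data[:, j] - Z @ beta_j   # residualize on the conditioning set
    else:
        xi, xj = data[:, i], data[:, j]
    r = np.clip(np.corrcoef(xi, xj)[0, 1], -0.9999, 0.9999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p > alpha            # True -> independent, so the edge can be removed

def pc_skeleton(data, alpha=0.05, max_cond=1):
    d = data.shape[1]
    edges = {frozenset(e) for e in combinations(range(d), 2)}   # start from the complete graph
    for size in range(max_cond + 1):
        for i, j in combinations(range(d), 2):
            if frozenset((i, j)) not in edges:
                continue
            others = [k for k in range(d) if k not in (i, j)]
            for cond in combinations(others, size):
                if ci_test(data, i, j, list(cond), alpha):
                    edges.discard(frozenset((i, j)))
                    break
    return edges

# toy example with a known chain X0 -> X1 -> X2 (a stand-in for consecutive teaching-stage indicators)
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 0.8 * x0 + rng.normal(size=500)
x2 = 0.8 * x1 + rng.normal(size=500)
print(pc_skeleton(np.column_stack([x0, x1, x2])))   # expect edges {0,1} and {1,2}, but not {0,2}
```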
Procedia PDF Downloads 20
3287 The Impact of Gamification on Self-Assessment for English Language Learners in Saudi Arabia
Authors: Wala A. Bagunaid, Maram Meccawy, Arwa Allinjawi, Zilal Meccawy
Abstract:
Continuous self-assessment becomes crucial in self-paced online learning environments. Students often depend on themselves to assess their progress, which is considered an essential requirement for any successful learning process. Today's education institutions face major problems around student motivation and engagement. Thus, personalized e-learning systems aim to help and guide the students. Gamification provides an opportunity to support students' self-assessment and social comparison with other students by attempting to harness the motivational power of games and apply it to the learning environment. Furthermore, Open Social Student Modeling (OSSM), considered one of the latest user modeling technologies, is believed to improve students' self-assessment and to allow them social comparison with other students. This research integrates the OSSM approach and gamification concepts in order to provide self-assessment for English language learners at King Abdulaziz University (KAU). This is achieved through an interactive visual representation of their learning progress.
Keywords: e-learning system, gamification, motivation, social comparison, visualization
Procedia PDF Downloads 154
3286 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A sensor network (SN) operation can be considered in two phases: (1) observation/measuring, i.e., the accumulation of the gathered data at each sensor node; and (2) transferring the collected data to a processing center (e.g., fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between these components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components that are used in these observatories perform some critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity for complex systems based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 178
3285 Computational Diagnostics for Dielectric Barrier Discharge Plasma
Authors: Zainab D. Abd Ali, Thamir H. Khalaf
Abstract:
In this paper, the characteristics of the electric discharge in the gap between two parallel-plate dielectrics are studied. The gap is filled with argon gas at atmospheric pressure and ambient temperature; the thickness of the gap is typically less than 1 mm, and the dielectric may be up to 10 cm in diameter. A sinusoidal voltage at RF frequency is applied to one of the dielectric plates, while the other plate is electrically grounded. The simulation in this work relies on a Boltzmann equation solver for the first few moments, a fluid model, and plasma chemistry, in a one-dimensional model. This modeling gives insight into the characteristics of the dielectric barrier discharge by studying the properties of the gas breakdown, the electric field, and the electric potential, and by calculating the electron density, mean electron energy, electron current density, ion current density, and total plasma current density. The investigation also includes: 1. the influence of changing the thickness of the gap between the two plates, when the gap is doubled or halved; 2. the effect of the thickness of the dielectric plates; 3. the influence of changing the type and properties of the dielectric material (glass, silicon, Teflon).
Keywords: computational diagnostics, Boltzmann equation, electric discharge, electron density
Procedia PDF Downloads 777
3284 Realistic Modeling of the Preclinical Small Animal Using Commercial Software
Authors: Su Chul Han, Seungwoo Park
Abstract:
With the increasing incidence of cancer, radiotherapy technology and modalities have advanced, and the importance of preclinical models in cancer research is growing. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the delivered dose using commercial software. The small animal phantom was modeled from the 4D digital Moby whole-body mouse phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files with MATLAB; the two-dimensional CT images were then converted to a three-dimensional image, which can be segmented and cropped in sagittal, coronal, and axial views. The CT images of the small animal were processed as follows. Based on the profile line values, thresholding was carried out to create a mask connecting all regions within the same threshold range. Using this thresholding, we segmented the images into three parts (bone, body (tissue), and lung); to separate neighboring pixels between lung and body (tissue), we used the region-growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation and a smoothing operation with a factor of 0.4 and 5 iterations. The edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees, and 5 iterations. The processed 3D object was converted to an STL file for output on a 3D printer. We modified the 3D small animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. We acquired a 3D object of a realistic small animal phantom: its width was 2.631 cm, its thickness 2.361 cm, and its length 10.817 cm. The Mimics software supported efficient 3D object generation and easy conversion to STL files. The development of small preclinical animal phantoms should increase the reliability of absorbed dose verification in small animals for preclinical studies.
Keywords: mimics, preclinical small animal, segmentation, 3D printer
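The segmentation-to-surface pipeline described above (thresholding, region selection, triangulated surface for printing) can be sketched in open-source terms as follows; the synthetic volume and the threshold value are placeholders, and the study itself used Mimics/3-Matic rather than this stack.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

# `ct` stands in for the 3D CT volume converted from the Moby phantom; here it is synthetic noise
ct = np.random.default_rng(0).normal(-300, 400, size=(64, 64, 64))

# 1) thresholding: keep voxels above an assumed body threshold (value is illustrative, not from the study)
body_mask = ct > -200

# 2) stand-in for region growing: keep only the largest connected component
labels, n_labels = ndimage.label(body_mask)
sizes = ndimage.sum(body_mask, labels, index=range(1, n_labels + 1))
body = labels == (int(np.argmax(sizes)) + 1)

# 3) extract a triangle surface that could then be written to STL by any mesh library
verts, faces, normals, values = measure.marching_cubes(body.astype(float), level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```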
Procedia PDF Downloads 367
3283 Building Capacity and Personnel Flow Modeling for Operating amid COVID-19
Authors: Samuel Fernandes, Dylan Kato, Emin Burak Onat, Patrick Keyantuo, Raja Sengupta, Amine Bouzaghrane
Abstract:
The COVID-19 pandemic has spread across the United States, forcing cities to impose stay-at-home and shelter-in-place orders. Building operations had to adjust as non-essential personnel worked from home. But as buildings prepare for personnel to return, they need to plan for safe operations amid new COVID-19 guidelines. In this paper we propose a methodology for capacity and flow modeling of personnel within buildings to safely operate under COVID-19 guidelines. We model personnel flow within buildings by network flows with queuing constraints. We study maximum flow, minimum cost, and minimax objectives. We compare our network flow approach with a simulation model through a case study and present the results. Our results showcase various scenarios of how buildings could be operated under new COVID-19 guidelines and provide a framework for building operators to plan and operate buildings in this new paradigm.
Keywords: network analysis, building simulation, COVID-19
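As a toy illustration of the maximum-flow objective mentioned above (the building graph, node names, and capacities are invented for the example, not taken from the case study), a networkx sketch could look like this:

```python
import networkx as nx

# toy building graph: nodes are spaces, capacities are max occupants per hour under distancing rules (assumed)
G = nx.DiGraph()
edges = [("entrance", "lobby", 60), ("lobby", "stairs", 30), ("lobby", "elevator", 10),
         ("stairs", "floor2", 30), ("elevator", "floor2", 10), ("floor2", "offices", 35)]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

flow_value, flow_dict = nx.maximum_flow(G, "entrance", "offices")
print("maximum personnel throughput:", flow_value)
print("flow on lobby -> stairs:", flow_dict["lobby"]["stairs"])
```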
Procedia PDF Downloads 160
3282 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network
Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem
Abstract:
The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance; these residuals were predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model was developed to predict the conditional variance using the backpropagation learning algorithm. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process was adopted, and the parameters of the k-factor GARMA-G-GARCH model were estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT). The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is found to be more appropriate for forecasting.
Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting
Procedia PDF Downloads 232
3281 PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria
Authors: Snezhana G. Gocheva-Ilieva, Maya P. Stoimenova
Abstract:
Ambient air pollution with fine particulate matter (PM10) is a systematic, permanent problem in many countries around the world. The accumulation of a large number of measurements of both the PM10 concentrations and the accompanying atmospheric factors allows for their statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria, for a period of 5 years are used. Predictors in the models are seven meteorological variables and time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days ahead of the last date in the modeling procedure and show very accurate results.
Keywords: cross-validation, decision tree, lagged variables, short-term forecasting
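The modeling setup described above (a regression tree fed with current and lagged predictors) can be sketched as follows; the synthetic series and the chosen predictors are illustrative stand-ins for the Pleven data and the seven meteorological variables.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

# synthetic daily series standing in for PM10 and one meteorological driver (illustrative only)
rng = np.random.default_rng(1)
n = 1500
temp = 10 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 2, n)
pm10 = 40 - 1.2 * temp + rng.normal(0, 8, n)

df = pd.DataFrame({"pm10": pm10, "temp": temp})
df["pm10_lag1"] = df["pm10"].shift(1)     # lagged PM10 variables
df["pm10_lag2"] = df["pm10"].shift(2)
df["temp_lag1"] = df["temp"].shift(1)     # lagged meteorological variable
df = df.dropna().reset_index(drop=True)

X, y = df[["temp", "temp_lag1", "pm10_lag1", "pm10_lag2"]], df["pm10"]
split = int(0.8 * len(df))
tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20).fit(X.iloc[:split], y.iloc[:split])
print("hold-out MAE:", mean_absolute_error(y.iloc[split:], tree.predict(X.iloc[split:])))
```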
Procedia PDF Downloads 196
3280 Modeling of the Random Impingement Erosion Due to the Impact of the Solid Particles
Authors: Siamack A. Shirazi, Farzin Darihaki
Abstract:
Solid particles can be found in many multiphase flows, including those in transport pipelines and pipe fittings. Such particles interact with the pipe material and cause erosion, which threatens the integrity of the system. Therefore, predicting the erosion rate is an important factor in the design and monitoring of such systems. Mechanistic models can provide reliable predictions for many conditions while demanding only relatively low computational cost. Mechanistic models utilize a representative particle trajectory to predict the impact characteristics of the majority of the particle impacts that cause the maximum erosion rate in the domain. The erosion caused by particle impacts is due not only to the direct impacts but also to random impingements. In the present study, an alternative model has been introduced to describe the erosion due to random impingement of particles. The present model provides a realistic trend for erosion with changes in particle size and particle Stokes number. The present model is examined against experimental data and CFD simulation results and indicates better agreement with the data in comparison to the available models in the literature.
Keywords: erosion, mechanistic modeling, particles, multiphase flow, gas-liquid-solid
Procedia PDF Downloads 169
3279 Approaches to Valuing Ecosystem Services in Agroecosystems From the Perspectives of Ecological Economics and Agroecology
Authors: Sandra Cecilia Bautista-Rodríguez, Vladimir Melgarejo
Abstract:
Climate change, loss of ecosystems, increasing poverty, increasing marginalization of rural communities, and declining food security are global issues that require urgent attention. In this regard, a great deal of research has focused on how agroecosystems respond to these challenges as they provide ecosystem services (ES) that lead to higher levels of resilience, adaptation, productivity, and self-sufficiency. Hence, the valuation of ecosystem services plays an important role in the decision-making process for the design and management of agroecosystems. This paper aims to define the link between ecosystem service valuation methods and ES value dimensions in agroecosystems from the perspectives of ecological economics and agroecology. The method used to identify valuation methodologies was a literature review in the fields of agroecology and ecological economics, based on a strategy of information search and classification. The conceptual framework of the work is based on the multidimensionality of value, considering the social, ecological, political, technological, and economic dimensions. Likewise, the valuation process requires consideration of the ecosystem functions associated with ES, such as regulation, habitat, production, and information functions. In this way, valuation methods for ES in agroecosystems can integrate more than one value dimension and at least one ecosystem function. The results allow correlating the ecosystem functions with the ecosystem services valued, the specific tools or models used, the value dimensions, and the valuation methods. The main methodologies identified are: (1) multi-criteria valuation, (2) deliberative-consultative valuation, (3) valuation based on system dynamics modeling, (4) valuation through energy or biophysical balances, (5) valuation through fuzzy logic modeling, and (6) valuation based on agent-based modeling. Among the main conclusions, it is highlighted that the system dynamics modeling approach has high potential for development in valuation processes, due to its ability to integrate other methods, especially multi-criteria valuation and energy and biophysical balances, and to describe through causal cycles the interrelationships between ecosystem services and the dimensions of value in agroecosystems, thus showing the relationships between the value of ecosystem services and the welfare of communities. As for methodological challenges, it is relevant to achieve the integration of tools and models provided by the different methods and to incorporate the characteristics of a complex system such as the agroecosystem, which would reduce the limitations in ES valuation processes.
Keywords: ecological economics, agroecosystems, ecosystem services, valuation of ecosystem services
Procedia PDF Downloads 125
3278 2D-Modeling with Lego Mindstorms
Authors: Miroslav Popelka, Jakub Nozicka
Abstract:
The whole work is based on the possibility of using Lego Mindstorms robotics systems to reduce costs. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, program, and test robotic systems in practice. The algorithm that maps the space using the ultrasonic sensor was programmed in the development environment supplied with the kit, and MATLAB was then used to render the values measured by the ultrasonic sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by an ultrasonic sensor that scans the 2D space in front of it. The ultrasonic sensor is placed on the moving arm of the robot, which provides horizontal movement of the sensor; vertical movement of the sensor is provided by the wheel drive. The robot follows a map in order to obtain the correct positioning of the measured data. Based on the discovered facts, it is possible to consider Lego Mindstorms a low-cost and capable kit for real-time modeling.
Keywords: LEGO Mindstorms, ultrasonic sensor, real-time modeling, 2D object, low-cost robotics systems, sensors, Matlab, EV3 Home Edition Software
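Rendering such a scan boils down to converting each (arm angle, measured distance) pair into Cartesian coordinates. A minimal Python sketch of this step is shown below; the scan values and the degrees/centimetres format are assumptions for illustration, and the paper itself performs the rendering in MATLAB.

```python
import numpy as np
import matplotlib.pyplot as plt

# assumed scan format: (angle of the sensor arm in degrees, distance from the ultrasonic sensor in cm)
scan = np.array([[0, 55], [10, 53], [20, 50], [30, 48], [40, 47], [50, 49], [60, 52]])

theta = np.radians(scan[:, 0])
x = scan[:, 1] * np.cos(theta)       # polar -> Cartesian conversion
y = scan[:, 1] * np.sin(theta)

plt.plot(x, y, "o-")
plt.xlabel("x [cm]"); plt.ylabel("y [cm]"); plt.title("2D map from ultrasonic scan")
plt.axis("equal"); plt.show()
```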
Procedia PDF Downloads 473
3277 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for computers with high memory usage.
Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories
Procedia PDF Downloads 283
3276 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, based on which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process included the following stages, which were reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in the form of disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the "weights" of the elements of the system. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, "weights" of elements
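To make the idea of computing system reliability and element "weights" from shortest success paths concrete, here is a brute-force sketch for a small invented system; it enumerates component states rather than using the orthogonalization algorithm of the paper, and the path sets and element reliabilities are illustrative.

```python
from itertools import product

# minimal success paths of a small assumed system, elements numbered 1..5
paths = [{1, 3}, {2, 4}, {1, 5, 4}, {2, 5, 3}]
p = {1: 0.95, 2: 0.90, 3: 0.92, 4: 0.88, 5: 0.85}   # element reliabilities (illustrative)
elements = sorted(p)

def system_reliability(prob):
    """Sum the probabilities of all component states in which at least one success path is intact."""
    rel = 0.0
    for state in product([0, 1], repeat=len(elements)):
        up = {e for e, s in zip(elements, state) if s}
        pr = 1.0
        for e, s in zip(elements, state):
            pr *= prob[e] if s else 1 - prob[e]
        if any(path <= up for path in paths):
            rel += pr
    return rel

R = system_reliability(p)
print("system reliability:", round(R, 4))

# Birnbaum importance ("weight") of each element: sensitivity of system reliability to that element
for e in elements:
    weight = system_reliability({**p, e: 1.0}) - system_reliability({**p, e: 0.0})
    print(f"element {e} weight: {weight:.4f}")
```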
Procedia PDF Downloads 66
3275 Modeling the Risk Perception of Pedestrians Using a Nested Logit Structure
Authors: Babak Mirbaha, Mahmoud Saffarzadeh, Atieh Asgari Toorzani
Abstract:
Pedestrians are the most vulnerable road users, since they do not have a protective shell. One of the most common collisions involving them is the pedestrian-vehicle collision at intersections. In order to develop appropriate countermeasures to improve their safety, research has to be conducted to identify the factors that affect the risk of getting involved in such collisions. More specifically, this study investigates factors such as the influence of walking alone or carrying a baby while crossing the street, the observable age of the pedestrian, the speed of the pedestrian, and the speed of approaching vehicles on pedestrians' risk perception. A nested logit model was used for modeling the behavioral structure of pedestrians. The results show that the presence of more lanes at intersections and not being alone, especially carrying a baby while crossing, decrease the probability of risk-taking among pedestrians. Also, it seems that teenagers show more risky behaviors in crossing the street in comparison to other age groups. The speed of approaching vehicles was also found significant: the probability of risk-taking among pedestrians decreases with increasing speed of the approaching vehicle in both the first and the second lanes of the crossing.
Keywords: pedestrians, intersection, nested logit, risk
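For readers unfamiliar with the nested logit structure, the sketch below computes choice probabilities for an invented two-nest example (a "risky crossing" nest and a "cautious" alternative); the utilities and nesting parameters are illustrative numbers, not the coefficients estimated in the study.

```python
import numpy as np

# choice set: nest "risky" = {cross_now, cross_running}, nest "cautious" = {wait}
# utilities and nesting (dissimilarity) parameters below are illustrative, not estimated values
V = {"cross_now": -0.4, "cross_running": -0.9, "wait": -0.6}
nests = {"risky": ["cross_now", "cross_running"], "cautious": ["wait"]}
lam = {"risky": 0.5, "cautious": 1.0}

# inclusive value (logsum) of each nest
iv = {m: np.log(sum(np.exp(V[j] / lam[m]) for j in alts)) for m, alts in nests.items()}
# probability of choosing each nest, then each alternative within its nest
p_nest = {m: np.exp(lam[m] * iv[m]) / sum(np.exp(lam[k] * iv[k]) for k in nests) for m in nests}
prob = {j: p_nest[m] * np.exp(V[j] / lam[m]) / np.exp(iv[m]) for m, alts in nests.items() for j in alts}
print(prob)   # probabilities sum to 1 across the three alternatives
```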
Procedia PDF Downloads 187
3274 Perspectives of Computational Modeling in Sanskrit Lexicons
Authors: Baldev Ram Khandoliyan, Ram Kishor
Abstract:
India has a classical tradition of Sanskrit lexicons, and considerable research work has been done on the study of Indian lexicography. India has seen amazing strides in Information and Communication Technology (ICT) applications for Indian languages in general and for Sanskrit in particular. Since machine translation from Sanskrit to other Indian languages is often the desired goal, traditional Sanskrit lexicography has attracted a lot of attention from the ICT and computational linguistics community. From Nighaŋţu and Nirukta to Amarakośa and Medinīkośa, Sanskrit has a rich history of lexicography. As these kośas do not follow the same typology or standard in the selection and arrangement of the words and the information related to them, several types of kośa styles have emerged in this tradition. The model of grammar given by the Aṣṭādhyāyī is well appreciated by Indian and western linguists and grammarians, but the different models provided by the lexicographic tradition are also important. The general usefulness of traditional Sanskrit kośas has been discussed by some scholars, mostly with respect to the material made available in the texts; some have also discussed the arrangement of the lexica. This paper aims to discuss further uses of the different models of Sanskrit lexicography, especially focusing on their computational modeling and their use in different computational operations.
Keywords: computational lexicography, Sanskrit Lexicons, nighanṭu, kośa, Amarkosa
Procedia PDF Downloads 165
3273 Geochemical Modeling of Mineralogical Changes in Rock and Concrete in Interaction with Groundwater
Authors: Barbora Svechova, Monika Licbinska
Abstract:
Geochemical modeling of the mineralogical changes of various materials in contact with an aqueous solution is an important tool for predicting the processes and evolution of those materials at a site. The modeling focused on the interaction of groundwater with the rock mass and its subsequent influence on concrete structures. The studied locality is situated in Slovakia in the area of the Liptov Basin, a significant inter-mountain lowland bordered on the north and south by the core mountain belts of the Tatras, where the crystalline basement rises to the surface in the center, accompanied by a Mesozoic cover. Groundwater in the area is bound to structures with a complicated geological build-up; from a hydrogeological point of view, it is an environment of fracture-controlled flow. The area is characterized by shallow surface circulation of groundwater without a significant collector structure, and, chemically, the groundwater in the area has been classified as calcium bicarbonate water with a high content of CO2 and SO4 ions. According to the European standard EN 206-1, these are waters with medium aggressiveness towards concrete. Three rock samples were taken from the area and, based on petrographic and mineralogical research, were identified as calcareous shale, micritic limestone, and crystalline shale. These three rock samples were placed in demineralized water for one month, and the change in the chemical composition of the water was monitored. During the solution-rock interaction, there was an increase in the concentrations of all major ions except nitrates; the concentrations increased after a week, but at the end of the experiment they were lower than this initial value. Another experiment was the interaction of groundwater from the studied locality with a concrete structure. The concrete sample was likewise left in the water for one month. The results of the experiment confirmed the assumed reduction in the concentrations of calcium and bicarbonate ions in the water due to the precipitation of amorphous forms of CaCO3 on the surface of the sample. Conversely, it was surprising that the concentrations of sulphates, sodium, iron, and aluminum increased due to leaching of the concrete. Chemical analyses from these experiments were performed in the PHREEQC program, which calculated the probability of the formation of amorphous mineral phases. From the results of the chemical analyses and the hydrochemical modeling of the water collected in situ and the water from the experiments, it was found that: the groundwater at the site is unsaturated and shows moderate aggressiveness towards reinforced concrete structures according to EN 206-1a, which will affect the homogeneity and integrity of concrete structures; and Ca, Na, Fe, HCO3, and SO4 are released from the rocks in the given area. Unsaturated waters will dissolve everything as soon as they come into contact with the solid matrix; the speed of this process then depends on the physicochemical parameters of the environment (T, ORP, p, n, water retention time in the environment, etc.).
Keywords: geochemical modeling, concrete, dissolution, PHREEQC
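One elementary building block of such hydrochemical modeling is the saturation index, which indicates whether a phase such as calcite tends to precipitate or dissolve; a minimal sketch is given below, with assumed example activities rather than the measured site data (the full aqueous speciation is, of course, what PHREEQC computes).

```python
import math

# illustrative calcite saturation index: SI = log10(IAP / Ksp)
# the ion activities below are assumed example values, not measurements from the Liptov Basin site
a_Ca = 2.0e-3             # activity of Ca2+
a_CO3 = 5.0e-6            # activity of CO3^2-
log_Ksp_calcite = -8.48   # calcite solubility product at 25 degC

SI = math.log10(a_Ca * a_CO3) - log_Ksp_calcite
state = "oversaturated (precipitation favoured)" if SI > 0 else "undersaturated (dissolution favoured)"
print(f"SI(calcite) = {SI:.2f} -> {state}")
```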
Procedia PDF Downloads 198
3272 Development of a Multi-Factorial Instrument for Accident Analysis Based on Systemic Methods
Authors: C. V. Pietreanu, S. E. Zaharia, C. Dinu
Abstract:
The present research is built on three major pillars, commencing with some considerations on accident investigation methods and pointing out both the defining aspects of, and the differences between, linear and non-linear analysis. The traditional linear focus of accident analysis describes accidents as a sequence of events, while the latest systemic models outline interdependencies between different factors and define the evolution of processes relative to a specific (normal) situation. Linear and non-linear accident analysis methods have specific limitations, so the second pillar concerns the drawbacks of systemic models, which become a starting point for developing new directions to identify risks or data closer to the cause of incidents and accidents. Since communication represents a critical issue in human-factor interaction and has proved to be at the root of problems caused by breakdowns in different communication procedures, the third pillar elaborates, from this focus point, a new error-modeling instrument suitable for risk assessment and accident analysis.
Keywords: accident analysis, multi-factorial error modeling, risk, systemic methods
Procedia PDF Downloads 208