Search results for: data modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26483

26453 Knowledge Representation and Inconsistency Reasoning of Class Diagram Maintenance in Big Data

Authors: Chi-Lun Liu

Abstract:

Requirements modeling and analysis are important for the successful maintenance of information systems. Unified Modeling Language (UML) class diagrams are a useful standard for modeling information systems. To the best of our knowledge, there is a lack of a systems development methodology described by the organism metaphor, whose core concept is adaptation. Knowledge representation and reasoning approaches that use ontologies to accommodate new requirements have emerged in recent years. This paper proposes an organic methodology based on constructivism theory: a knowledge representation and reasoning approach to analyzing new requirements during class diagram maintenance. The process and rules in the proposed methodology automatically detect inconsistencies in the class diagram. In the big data era, developing an automatic tool based on the proposed methodology to analyze large amounts of class diagram data is an important topic for future research.
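
The abstract does not reproduce the methodology's rule set; as a rough, hypothetical illustration of rule-based inconsistency checking over class-diagram facts, the following Python sketch checks for cyclic generalization, one classic class-diagram inconsistency. The class names and the single rule are invented for the example.

```python
# Hedged sketch: rule-based inconsistency checking over class-diagram facts.
# The rule (no cyclic generalization) and the example classes are hypothetical;
# the paper's actual ontology and rule set are not given in the abstract.

def has_generalization_cycle(generalizations):
    """Return the classes involved in a child -> parent cycle, if any."""
    def reaches_itself(start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for parent in generalizations.get(node, []):
                if parent == start:
                    return True
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return False
    return [c for c in generalizations if reaches_itself(c)]

# An existing diagram plus a new requirement that closes a cycle.
gens = {"Invoice": ["Document"], "Document": ["Artifact"], "Artifact": ["Invoice"]}
print(has_generalization_cycle(gens))  # ['Invoice', 'Document', 'Artifact']
```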

Keywords: knowledge representation, reasoning, ontology, class diagram, software engineering

Procedia PDF Downloads 210
26452 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen

Abstract:

To provide a complete analysis of the organization and to help decision-making, leaders need relevant data; Data Warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a model-driven approach to the design of data warehouses. We describe a multidimensional meta-model and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second one through transformation rules carried out by the Query/View/Transformation (QVT) language. This proposal is validated by applying our approach to generating a multidimensional schema for a Balanced Scorecard (BSC) DW. We are interested in the BSC perspectives, which are strongly linked to the vision and strategies of an organization.

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 128
26451 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high-volume and highly varied data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. Consequently, today's decentralized data-management environments rely on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and from all the datasets stemming from distributed resources. As agent technologies offer significant contributions for managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 391
26450 Drying Modeling of Banana Using Cellular Automata

Authors: M. Fathi, Z. Farhaninejad, M. Shahedi, M. Sadeghi

Abstract:

Drying is one of the oldest preservation methods for food and agricultural products. Appropriate control of the operation can be achieved through modeling. The limitations of continuous models for complex boundary conditions and irregular geometries have led to the emergence of novel discrete methods such as cellular automata (CA), which provide a platform for fast predictions using rule-based mathematics. In this research, a one-dimensional CA was used to simulate thin-layer drying of banana. Banana slices were dried in a convective air dryer, and experimental data were recorded for validating the final model. The model was programmed in MATLAB and run for 70,000 iterations with a von Neumann neighborhood. The validation results showed good agreement between experimental and predicted data (R=0.99). Cellular automata can reproduce the expected drying pattern and have powerful potential for solving physical problems with reasonable accuracy and low computational cost.
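
The transition rule and coefficients of the paper's CA are not given in the abstract; the following Python sketch shows the general shape of such a one-dimensional, von Neumann-neighborhood drying model, with a hypothetical linear moisture-exchange rule and made-up rate constants.

```python
import numpy as np

# Minimal sketch of a 1-D cellular-automaton drying model. The actual transition
# rule and coefficients of the paper are not given; these values are hypothetical.
n_cells, k_diff, k_evap = 50, 0.2, 0.0005  # lattice size, exchange and surface-loss rates
m = np.ones(n_cells)                       # normalized moisture content per cell

for step in range(70000):                  # iteration count taken from the abstract
    left = np.roll(m, 1);  left[0] = m[0]      # von Neumann neighbors in 1-D,
    right = np.roll(m, -1); right[-1] = m[-1]  # with reflective interior boundaries
    m = m + k_diff * (left + right - 2 * m)    # local moisture-exchange rule
    m[0] -= k_evap * m[0]                      # evaporation at the exposed surfaces
    m[-1] -= k_evap * m[-1]

print(f"mean residual moisture: {m.mean():.4f}")
```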

Keywords: banana, cellular automata, drying, modeling

Procedia PDF Downloads 406
26449 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis

Authors: Meng Su

Abstract:

High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
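
As a minimal sketch of the Diffusion Map step described above, using the row-normalized random-walk construction (closely related to the graph Laplacian the abstract mentions), with toy data and a common median-distance bandwidth heuristic:

```python
import numpy as np

# Sketch of a Diffusion Map embedding: Gaussian affinity matrix, normalization
# into a Markov transition matrix, and embedding by the leading nontrivial
# eigenvectors. The data and bandwidth heuristic are illustrative choices.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                       # toy high-dimensional data

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
eps = np.median(d2)                                  # median-distance bandwidth
K = np.exp(-d2 / eps)                                # Gaussian kernel (affinity)
P = K / K.sum(axis=1, keepdims=True)                 # row-normalized random walk

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)                       # sort by decreasing eigenvalue
# Skip the trivial constant eigenvector; keep the next two diffusion coordinates.
embedding = vecs[:, order[1:3]].real * vals[order[1:3]].real
print(embedding.shape)                               # (200, 2)
```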

Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis

Procedia PDF Downloads 67
26448 Causal Modeling of the Glucose-Insulin System in Type-I Diabetic Patients

Authors: J. Fernandez, N. Aguilar, R. Fernandez de Canete, J. C. Ramos-Diaz

Abstract:

In this paper, a simulation model of the glucose-insulin system for a patient with type 1 diabetes is developed using a causal modeling approach under system dynamics. The OpenModelica simulation environment was employed to build the so-called causal model, and the glucose-insulin model parameters were adjusted to fit recorded mean data from a diabetic patient database. Model results were obtained under different three-meal glucose and exogenous insulin intake patterns. This simulation model can be useful for evaluating glucose-insulin performance in several circumstances, including open-loop insulin infusion algorithms and closed-loop decision support systems.
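
The paper's OpenModelica equations are not reproduced in the abstract; a Python sketch in the same spirit, using the classic Bergman minimal model as a stand-in, with illustrative parameters and a toy three-meal input, might look like this.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a causal glucose-insulin model using the Bergman minimal model as
# a stand-in; the paper's own equations and fitted parameters are not given in
# the abstract, and all values here are illustrative.

p1, p2, p3 = 0.03, 0.02, 1.3e-5   # glucose effectiveness / insulin action rates
Gb, Ib = 90.0, 10.0               # basal glucose (mg/dL) and insulin (uU/mL)

def meal(t):                      # toy three-meal glucose appearance (mg/dL/min)
    return sum(2.0 * np.exp(-(t - tm) ** 2 / 200.0) for tm in (480, 780, 1140))

def rhs(t, y):
    G, X, I = y
    dG = -p1 * (G - Gb) - X * G + meal(t)
    dX = -p2 * X + p3 * (I - Ib)
    dI = -0.1 * (I - Ib)          # insulin relaxes toward basal (no pancreas in T1D)
    return [dG, dX, dI]

sol = solve_ivp(rhs, (0, 1440), [Gb, 0.0, Ib], max_step=1.0)  # one day, in minutes
print(f"peak glucose: {sol.y[0].max():.1f} mg/dL")
```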

Keywords: causal modeling, diabetes, glucose-insulin system, OpenModelica software

Procedia PDF Downloads 306
26447 Validation of Electrical Field Effect on Electrostatic Desalter Modeling with Experimental Laboratory Data

Authors: Fatemeh Yazdanmehr, Iulian Nistor

Abstract:

The scope of the current study is to evaluate the effect of the electric field on electrostatic desalting through mathematical modeling validated with laboratory data. This research was focused on developing a model for an existing desalting unit of one of the Iranian heavy oil fields with a production capacity of 75 MBPD. The high temperature of the inlet oil to the dehydration unit reduces oil recovery, so the mathematical modeling of the desalter operating parameters is very significant. Operating data from the existing production unit were used to verify the accuracy of the mathematical desalting plant model. The inlet oil temperature to the desalter was decreased from 110 to 80°C, and the desalter electrical field was increased from 0.75 to 2.5 kV/cm. The model results show that these parameter changes meet the water-oil specification, while oil production, and consequently annual income, is increased. In addition, changing the desalter operating conditions reduces the environmental footprint because of flare gas reduction. Laboratory data were then used to confirm the effect of the selected electrical field change on the desalter, so a lab test was performed on a crude oil sample. The results include the dehydration efficiency in the presence of a demulsifier and under an electrical field (0.75 kV) at various temperatures. Comparing the lab experiments with the electrostatic desalter mathematical model shows an acceptable error of 1-3 percent, which confirms the validity of the proposed changes to the desalter specification and operating conditions.

Keywords: desalter, electrical field, demulsification, mathematical modeling, water-oil separation

Procedia PDF Downloads 96
26446 Development of Management System of the Experience of Defensive Modeling and Simulation by Data Mining Approach

Authors: D. Nam Kim, D. Jin Kim, Jeonghwan Jeon

Abstract:

Defensive Modeling and Simulation (M&S) is a system that enables training that would otherwise be impracticable, by reducing the constraints of time, space and financial resources. The necessity of defensive M&S has been increasing not only for education and training but also for virtual combat. Soldiers who use defensive M&S for education and training obtain empirical knowledge and know-how. However, the knowledge obtained by individual soldiers has not been managed and utilized, owing to the nature of military organizations: confidentiality and frequent change of members. Therefore, this study aims to develop a management system for the experience gained with defensive M&S based on a data mining approach. Since the empirical knowledge individuals gain through using defensive M&S is both quantitative and qualitative, a data mining approach is appropriate for dealing with it. This research is expected to be helpful for soldiers and military policy makers.

Keywords: data mining, defensive M&S, management system, knowledge management

Procedia PDF Downloads 226
26445 Analysis of School Burnout and Academic Motivation through Structural Equation Modeling

Authors: Ismail Seçer

Abstract:

The purpose of this study is to analyze the relationship between school burnout and academic motivation in high school students. The study group consists of 455 students from high schools in the Erzurum city center, selected with an appropriate sampling method. The School Burnout Scale and the Academic Motivation Scale were used to collect data. Correlation analysis and structural equation modeling were used to analyze the collected data. As a result, it was determined that there are significant negative relationships between school burnout and academic motivation, and that school burnout has significant direct and indirect effects on the "getting over himself", "using knowledge" and "exploration" dimensions through the latent variable of academic motivation. Lastly, it was determined that school burnout is a significant predictor of academic motivation.

Keywords: school burnout, motivation, structural equation modeling, university

Procedia PDF Downloads 282
26444 A Review on 3D Smart City Platforms Using Remotely Sensed Data to Aid Simulation and Urban Analysis

Authors: Slim Namouchi, Bruno Vallet, Imed Riadh Farah

Abstract:

3D urban models provide powerful tools for decision making, urban planning, and smart city services. The accuracy of these 3D-based systems is directly related to the quality of the models. Since manual large-scale modeling, e.g., of cities or countries, is a highly time-intensive and very expensive process, fully automatic 3D building generation is needed. However, the result of the 3D modeling process depends on the input data, the properties of the captured objects, and the required characteristics of the reconstructed 3D model. Nowadays, producing 3D real-world models is no longer a problem. Remotely sensed data have increased remarkably in recent years, especially data acquired by unmanned aerial vehicles (UAV). As scanning techniques develop, the amount of captured data and its resolution keep growing. This paper presents a literature review that aims to identify different methods of automatic 3D building extraction, either from LiDAR alone or from LiDAR combined with satellite or aerial images. We then present open-source technologies and data models (e.g., CityGML, PostGIS, Cesiumjs) used to integrate these models into geospatial base layers for smart city services.

Keywords: CityGML, LiDAR, remote sensing, GIS, smart city, 3D urban modeling

Procedia PDF Downloads 102
26443 Unsupervised Text Mining Approach to Early Warning System

Authors: Ichihan Tai, Bill Olson, Paul Blessner

Abstract:

Traditional early warning systems that warn against crises are generally based on structured or numerical data; therefore, a system that can make predictions based on unstructured textual data, an uncorrelated data source, is a great complement to them. The Chicago Board Options Exchange (CBOE) Volatility Index (VIX), commonly referred to as the fear index, measures the cost of insurance against a market crash and spikes in the event of crisis. In this study, news data are consumed to predict whether there will be a market-wide crisis by predicting the movement of the fear index, and historical references to similar events are presented in an unsupervised manner. Topic modeling-based prediction and representation are built on daily news data between 1990 and 2015 from The Wall Street Journal against VIX index data from the CBOE.
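
As an illustrative sketch of the topic-modeling stage (not the authors' pipeline), LDA topic mixtures over daily news text can serve as features for predicting fear-index movements; the corpus, topic count, and downstream use below are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Sketch: fit LDA on daily news text, then use the per-day topic mixtures as
# features for a classifier of up/down moves in the VIX. The toy corpus and
# two-topic setting stand in for 25 years of Wall Street Journal data.

docs = [
    "bank credit losses widen as mortgage defaults rise",
    "fed cuts rates to calm markets after sharp selloff",
    "tech earnings beat forecasts lifting major indexes",
    "oil prices surge on supply concerns in middle east",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

theta = lda.transform(counts)   # per-day topic mixtures -> downstream features
print(theta.round(2))
```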

Keywords: early warning system, knowledge management, market prediction, topic modeling

Procedia PDF Downloads 301
26442 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling

Authors: Sushma Ghogale

Abstract:

With the surge of user-generated content, feedback, and reviews on the internet, it has become possible and important to know consumers' opinions about products and services. These data are important both for potential customers and for the businesses providing the services. Data from social media are attracting significant attention and have become the most prominent channel for expressing unregulated opinion. Prospective customers look for reviews from experienced customers before deciding to buy a product or service. Several websites provide a platform for users to post their feedback for providers and potential customers. However, the biggest challenge in analyzing such data lies in extracting latent features and providing term-level analysis. This paper proposes an approach that uses topic modeling to classify reviews into topics and sentiment analysis to mine the opinions. The approach can analyze and classify latent topics mentioned by reviewers on business sites, review sites, or social media, using topic modeling to identify the importance of each topic, followed by sentiment analysis to assess the satisfaction level for each topic. It provides a classification of hotel reviews using multiple machine learning techniques, comparing different classifiers to mine the opinions in user reviews through sentiment analysis. The experiment concludes that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
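
A minimal sketch of the classification stage, using the Multinomial Naïve Bayes classifier the abstract reports as most accurate; the reviews and labels are toy stand-ins for the real hotel-review data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Sketch: a Multinomial Naive Bayes sentiment classifier over hotel reviews.
# The four reviews and their labels are invented examples, not the dataset.

reviews = [
    "spotless room and friendly staff, great stay",
    "the pool area was clean and the breakfast excellent",
    "dirty bathroom, rude reception, never again",
    "noisy room and the air conditioning was broken",
]
labels = ["pos", "pos", "neg", "neg"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(reviews, labels)
print(model.predict(["clean comfortable room, helpful staff"]))  # ['pos']
```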

Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis

Procedia PDF Downloads 75
26441 Hybrid Knowledge Approach for Determining Health Care Provider Specialty from Patient Diagnoses

Authors: Erin Lynne Plettenberg, Jeremy Vickery

Abstract:

In an access-control situation, the role of a user determines whether a data request is appropriate. This paper combines vetted web mining and logic modeling to build a lightweight system for determining the role of a health care provider based only on their prior authorized requests. The model identifies provider roles with 100% recall from very little data. This shows the value of vetted web mining in AI systems, and suggests the impact of the ICD classification on medical practice.

Keywords: electronic medical records, information extraction, logic modeling, ontology, vetted web mining

Procedia PDF Downloads 149
26440 Collision Detection Algorithm Based on Data Parallelism

Authors: Zhen Peng, Baifeng Wu

Abstract:

Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it gathers more and more computing power by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched by traditional algorithms.
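
As a hedged sketch of the data-parallel idea (not the paper's algorithm): once complex objects are decomposed into simple axis-aligned boxes, a single vectorized test can check all box pairs at once, SIMD-style. The boxes below are toy values.

```python
import numpy as np

# Sketch: complex building elements decomposed into simple axis-aligned boxes,
# with one vectorized test checking every box of A against every box of B.

def aabb_overlaps(a, b):
    """a: (n,6), b: (m,6) boxes as [xmin,ymin,zmin,xmax,ymax,zmax].
    Returns an (n,m) boolean matrix of pairwise overlaps, with no Python loops."""
    a_min, a_max = a[:, None, :3], a[:, None, 3:]
    b_min, b_max = b[None, :, :3], b[None, :, 3:]
    return ((a_min <= b_max) & (b_min <= a_max)).all(axis=2)

A = np.array([[0, 0, 0, 1, 1, 1], [5, 5, 5, 6, 6, 6]], float)
B = np.array([[0.5, 0.5, 0.5, 2, 2, 2]], float)
print(aabb_overlaps(A, B))   # [[ True], [False]]
```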

Keywords: data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability

Procedia PDF Downloads 257
26439 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations hold structured and unstructured information in different formats, sources, and systems. Part of it comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capabilities to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision making and research. This paper presents a structured, simple methodology inspired by agile development models such as Scrum, XP and AUP, combined with object-relational models, spatial data models, and a baseline of data modeling under UML and Big Data. In this way it seeks to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account the application of processes for information analysis, visualization and data mining, particularly for generating patterns and models derived from structured fact objects.

Keywords: data warehouse, data model, big data, object fact, object-relational fact, data warehouse development process

Procedia PDF Downloads 381
26438 Multi-Level Meta-Modeling for Enabling Dynamic Subtyping for Industrial Automation

Authors: Zoltan Theisz, Gergely Mezei

Abstract:

Modern industrial automation relies on service-oriented concepts of Internet of Things (IoT) device modeling to provide a flexible and extendable environment for a service meta-repository. However, state-of-the-art meta-modeling techniques prefer design-time modeling, which results in heavy use of sometimes unnecessary static subtyping of classes. Although this approach benefits from clear-cut object-oriented design principles, it also seals the model repository against further dynamic extensions. In this paper, a dynamic multi-level modeling approach is introduced that enables dynamic subtyping through a more relaxed partial instantiation mechanism. The approach is demonstrated on a simple sensor network example.

Keywords: meta-modeling, dynamic subtyping, DMLA, industrial automation, Arrowhead

Procedia PDF Downloads 330
26437 A Comprehensive Method of Fault Detection and Isolation Based on Testability Modeling Data

Authors: Junyou Shi, Weiwei Cui

Abstract:

Testability modeling is a commonly used method in the testability design and analysis of systems. Testability modeling yields a dependency matrix, from which a quantitative evaluation of fault detection and isolation can be given. Based on the dependency matrix, we can obtain a diagnosis tree that provides the procedures for fault detection and isolation. In practice, however, the dependency matrix usually includes both built-in test (BIT) and manual test items. BIT runs tests automatically and is not limited by the procedures, so the method above cannot exploit the advantages of BIT to give a more efficient diagnosis. A comprehensive method of fault detection and isolation is therefore proposed, which combines the advantages of BIT and manual tests by splitting the matrix. The result of the case study shows that the method is effective.
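
A small sketch of dependency-matrix diagnosis with the matrix split into BIT and manual parts, in the spirit of the proposed method: BIT outcomes are applied first, and manual tests refine only the remaining ambiguity. The matrix and test outcomes are invented for illustration.

```python
import numpy as np

# Sketch: dependency matrix split into BIT and manual tests. BIT rows run
# automatically and are evaluated first; manual tests resolve what remains.

faults = ["F1", "F2", "F3", "F4"]
# D[i, j] = 1 if test i detects fault j (hypothetical matrices).
D_bit    = np.array([[1, 1, 0, 0],      # BIT test B1
                     [0, 0, 1, 1]])     # BIT test B2
D_manual = np.array([[1, 0, 0, 0],      # manual test M1
                     [0, 0, 1, 0]])     # manual test M2

def isolate(D, results, candidates):
    """Keep only faults whose column matches the observed pass/fail pattern."""
    for row, failed in zip(D, results):
        candidates = [j for j in candidates if bool(row[j]) == failed]
    return candidates

cand = isolate(D_bit, [True, False], range(len(faults)))  # BIT: B1 fails, B2 passes
print([faults[j] for j in cand])                          # ['F1', 'F2'] -> ambiguous
cand = isolate(D_manual, [True, False], cand)             # manual: M1 fails, M2 passes
print([faults[j] for j in cand])                          # ['F1'] -> isolated
```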

Keywords: fault detection, fault isolation, testability modeling, BIT

Procedia PDF Downloads 298
26436 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement

Authors: Hadi Ardiny, Amir Mohammad Beigzadeh

Abstract:

Through the utilization of a combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). To run this experiment, three mobile robots were utilized, one of them equipped with a radioactive source. An algorithm was developed for identifying the contaminated robot through correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY plane coordinate system, and the detector system records the gamma-ray count. The positions of the robots and the corresponding counts of the moving source were modeled using the MCNPX simulation code while considering the experimental geometry. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. The modeling techniques prove valuable for designing different scenarios and intelligent systems before initiating any experiments.
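
A hypothetical sketch of the fusion step: given tracked XY trajectories and a synchronized count rate, the robot whose inverse-square distance to the detector best correlates with the counts is flagged as carrying the source. All trajectories and the count model below are simulated, not the experiment's data.

```python
import numpy as np

# Sketch: correlate each robot's inverse-square distance to the detector with
# the gamma count rate; the best-correlated robot is the source carrier.

rng = np.random.default_rng(1)
t = np.arange(200)
detector = np.array([0.0, 0.0])

# Three toy XY trajectories; robot index 2 carries the source.
paths = [np.c_[5 + np.sin(t / 20), 5 + np.cos(t / 30)],
         np.c_[t / 40, np.full_like(t, 3.0, dtype=float)],
         np.c_[4 - t / 60, 2 + np.sin(t / 15)]]

d2_true = ((paths[2] - detector) ** 2).sum(axis=1)
counts = 5000.0 / d2_true + rng.normal(0, 5, t.size)   # ~inverse-square + noise

scores = []
for p in paths:
    inv_d2 = 1.0 / ((p - detector) ** 2).sum(axis=1)
    scores.append(np.corrcoef(inv_d2, counts)[0, 1])
print(np.argmax(scores), np.round(scores, 2))          # robot 2 is flagged
```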

Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems

Procedia PDF Downloads 69
26435 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes

Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales

Abstract:

In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the diameter and flexibility of the resulting guidewire while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.

Keywords: calibration, data modeling, industrial processes, machine learning

Procedia PDF Downloads 252
26434 Implications of Meteorological Parameters in Decision Making for Public Protective Actions during a Nuclear Emergency

Authors: M. Hussain, K. Mahboob, S. Z. Ilyas, S. Shaheen

Abstract:

Plume dispersion modeling is a computational procedure that establishes a relationship between emissions, meteorology, atmospheric concentrations, deposition and other factors. The emission characteristics (stack height, stack diameter, release velocity, heat content, chemical and physical properties of the gases/particles released, etc.), terrain (surface roughness, local topography, nearby buildings) and meteorology (wind speed, stability, mixing height, etc.) are required for modeling plume dispersion and estimating ground and air concentrations. During the early phase of the Fukushima accident, plume dispersion modeling was performed and decisions were taken for the implementation of protective measures. Differences in the estimated results and in the decisions different countries made on protective actions created concern in the local and international community regarding the exact identification of the safe zone. The current study highlights the importance of accurate and exact weather data availability, a scientific approach to decision making on urgent protective actions, and a compatible, harmonized approach to plume dispersion modeling during a nuclear emergency. As a case study, the influence of meteorological data on plume dispersion modeling and the decision-making process has been assessed.
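
As a reference sketch of the relationship the abstract describes, a ground-level Gaussian plume estimate (with ground reflection) in Python; the power-law dispersion coefficients are crude placeholders for stability-class-dependent Pasquill-Gifford values.

```python
import numpy as np

# Sketch: Gaussian plume model turning emission characteristics and meteorology
# into a ground-level concentration. Dispersion coefficients are hypothetical
# power-law fits; real assessments use full Pasquill-Gifford tables and site data.

Q, u, H = 10.0, 4.0, 50.0   # source strength (g/s), wind speed (m/s), stack height (m)

def ground_concentration(x, y):
    """Ground-level concentration (g/m^3) at downwind distance x, crosswind y,
    including the ground-reflection term folded into the z = 0 form."""
    sigma_y = 0.08 * x ** 0.9            # hypothetical dispersion coefficients
    sigma_z = 0.06 * x ** 0.85
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2 * sigma_y ** 2))
            * np.exp(-H ** 2 / (2 * sigma_z ** 2)))

for x in (500, 1000, 2000):
    print(x, f"{ground_concentration(x, 0.0):.2e}")
```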

Keywords: decision making process, radiation doses, nuclear emergency, meteorological implications

Procedia PDF Downloads 158
26433 Modeling Residual Modulus of Elasticity of Self-Compacted Concrete Using Artificial Neural Networks

Authors: Ahmed M. Ashteyat

Abstract:

Artificial Neural Network (ANN) models have been widely used in material modeling and inter-correlation, as well as in behavior and trend prediction, when the nonlinear relationship between system parameters cannot be quantified explicitly and mathematically. In this paper, ANN was used to predict the residual modulus of elasticity (RME) of self-compacted concrete (SCC) damaged by heat. The ANN model was built, trained, tested and validated using a total of 112 experimental data sets gathered from the available literature. The data used in model development included temperature, relative humidity conditions, mix proportions, filler type, and fiber type. The results of ANN training, testing, and validation indicated that the RME of SCC exposed to different temperature and relative humidity levels can be predicted accurately with ANN techniques: the reliability between the predicted outputs and the actual experimental data was 99%. This shows that ANN has strong potential as a feasible tool for predicting the residual elastic modulus of heat-damaged SCC within the range of the input parameters. The ANN model could be used to estimate the RME of SCC as a rapid, inexpensive substitute for the much more complicated and time-consuming direct measurement.
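
A minimal sketch of this kind of ANN setup using scikit-learn; the network size, the two inputs, and the synthetic data are placeholders for the paper's 112 literature data sets, which the abstract does not reproduce.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch: map inputs such as exposure temperature and mix proportions to the
# residual modulus of elasticity. Data and architecture are illustrative only.

rng = np.random.default_rng(0)
X = np.c_[rng.uniform(20, 800, 112),       # exposure temperature (deg C)
          rng.uniform(0.3, 0.5, 112)]      # water/binder ratio (hypothetical input)
y = 35.0 * (1 - X[:, 0] / 1000) - 20 * (X[:, 1] - 0.4) + rng.normal(0, 0.5, 112)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(f"R^2 on training data: {model.score(X, y):.3f}")
```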

Keywords: residual modulus of elasticity, artificial neural networks, self-compacted concrete, material modeling

Procedia PDF Downloads 504
26432 Variability of Hydrological Modeling of the Blue Nile

Authors: Abeer Samy, Oliver C. Saavedra Valeriano, Abdelazim Negm

Abstract:

The Blue Nile is the most important tributary of the Nile River, and Egypt and Sudan are almost entirely dependent on water originating from it. This multi-dependency creates conflicts among the three countries, Egypt, Sudan, and Ethiopia, making the management of these conflicts an international issue. A good assessment of the water resources of the Blue Nile is important to help manage such conflicts, and hydrological models are a good tool for such assessment. This paper presents a critical review of the nature and variability of the climate and hydrology of the Blue Nile Basin as a first step toward using hydrological modeling to assess the water resources of the Blue Nile. Several attempts have been made to develop basin-scale hydrological models of the Blue Nile. Lumped and semi-distributed models use averages of meteorological inputs and watershed characteristics in hydrological simulation to analyze runoff for flood control and water resource management. Distributed models include the temporal and spatial variability of catchment conditions and meteorological inputs, allowing better representation of the hydrological process. The main challenge for all the models used to assess the water resources of the basin is the shortage of data needed for model calibration and validation. It is recommended to use a distributed model, for its higher accuracy, to cope with the great variability and complexity of the Blue Nile basin, and to collect sufficient data for more sophisticated and accurate hydrological modeling.

Keywords: Blue Nile Basin, climate change, hydrological modeling, watershed

Procedia PDF Downloads 332
26431 Decision Tree Modeling in Emergency Logistics Planning

Authors: Yousef Abu Nahleh, Arun Kumar, Fugen Daver, Reham Al-Hindawi

Abstract:

Despite the availability of natural-disaster-related time series data for the last 110 years, humanitarian relief organizations have no forecasting tool for emergency logistics planning. This study develops a forecasting tool based on identifying the probability of disaster for each country in the world using decision tree modeling. The determination of aggregate forecasts then leads to efficient pre-disaster planning. Based on the research findings, relief agencies can optimize the allocation of various resources in emergency logistics planning.

Keywords: decision tree modeling, forecasting, humanitarian relief, emergency supply chain

Procedia PDF Downloads 450
26430 Defining a Holistic Approach for Model-Based System Engineering: Paradigm and Modeling Requirements

Authors: Hycham Aboutaleb, Bruno Monsuez

Abstract:

The complexity of current systems has reached a degree that requires addressing conception and design issues while taking into account all the necessary aspects. Therefore, one of the main challenges is the way complex systems are specified and designed. The exponentially growing effort, cost and time investment of complex systems in the modeling phase emphasizes the need for a paradigm, a framework and an environment to handle the complexity of system models. For that, it is necessary to understand the expectations of the human users of the model and their limits. This paper presents a generic framework for designing complex systems, highlights the requirements a system model needs to fulfill to meet human user expectations, and defines the refined functional as well as non-functional requirements that modeling tools need to meet to be useful in model-based system engineering.

Keywords: system modeling, modeling language, modeling requirements, framework

Procedia PDF Downloads 505
26429 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. Therefore, the purpose of stochastic modeling is to estimate the probability of outcomes within a forecast, i.e., to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the two-parameter Weibull density function (its trend is proportional to the Weibull probability density function). In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model: the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown by Ricciardi's theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyze, with simulated data, the computational problems associated with the parameters, an issue of great importance for applying the model to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects a pragmatic decision on the part of the modeler: given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
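
For reference, the two-parameter Weibull density whose shape drives the trend of the proposed process is, in a standard parameterization (the paper may use another):

```latex
f(x;\eta,\beta) \;=\; \frac{\beta}{\eta}\left(\frac{x}{\eta}\right)^{\beta-1}
\exp\!\left[-\left(\frac{x}{\eta}\right)^{\beta}\right], \qquad x \ge 0,\;\; \eta,\beta > 0,
```

where η is the scale parameter and β the shape parameter.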

Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter Weibull density function

Procedia PDF Downloads 267
26428 Multi-Source Data Fusion for Urban Comprehensive Management

Authors: Bolin Hua

Abstract:

City governance involves various data, including city component data, demographic data, housing data and all kinds of business data. These data reflect different aspects of people, events and activities. Data generated by various systems differ in form and in source, because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues which need to be resolved, including data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data and space data. Metadata must be referred to and read whenever an application needs to access, manipulate or display the data; uniform metadata management ensures the effectiveness and consistency of data in the processes of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search and data delivery.

Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data

Procedia PDF Downloads 349
26427 Optimal Design of Propellant Grain Shape Based on Structural Strength Analysis

Authors: Chen Xiong, Tong Xin, Li Hao, Xu Jin-Sheng

Abstract:

Experimental and simulation research on the structural integrity of propellant grain with a high volumetric fraction in a solid rocket motor (SRM) was conducted. First, using SRM parametric modeling functions written in Python, the secondary development tool of ABAQUS, three-dimensional parameterized modeling programs for star-shaped, wheel-shaped and wing-cylindrical grains were developed. Then, the mechanical properties of the star-shaped grain under different loads were obtained by applying the automatically established finite element model in ABAQUS. Next, several optimization algorithms were introduced to optimize the star-shaped, wheel-shaped and wing-cylindrical grains. After meeting the demands on burning surface changes and volumetric fraction, the optimum three-dimensional grain shapes were obtained. Finally, by means of the parametric modeling functions, pressure data from the SRM's cold pressurization test were directly applied to the simulation of the grain's mechanical performance. The results verify the reliability and practicality of the parameterized modeling program for SRMs.

Keywords: cold pressurization test, parametric modeling, structural integrity, propellant grain, SRM

Procedia PDF Downloads 324
26426 Monomial Form Approach to Rectangular Surface Modeling

Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong

Abstract:

Geometric modeling plays an important role in the construction and manufacturing of curves, surfaces and solids. Its algorithms are critically important not only in the automobile, ship and aircraft manufacturing businesses, but also in a wide variety of modern applications, e.g., robotics, optimization, computer vision, data analytics and visualization. The calculation and display of geometric objects can be accomplished by six techniques: polynomial basis, recursive, iterative, coefficient matrix, polar form approach and pyramidal algorithms. In this research, the coefficient matrix (simply called the monomial form approach) is used to model polynomial rectangular patches, i.e., Said-Ball, Wang-Ball, DP, Dejdumrong and NB1 surfaces. Examples of the monomial forms for these surface models are illustrated in many aspects, e.g., construction, derivatives, model transformation, degree elevation and degree reduction.
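
A small sketch of the monomial (coefficient-matrix) form for a rectangular patch, S(u,v) = U(u) A V(v)^T, where U and V are monomial row vectors and A is the coefficient matrix; derivatives follow by differentiating the monomial basis. The bicubic coefficients below are arbitrary illustrative values.

```python
import numpy as np

# Sketch: evaluating a rectangular polynomial patch in monomial form.
# A holds the coefficients a_ij of u^i v^j; values are illustrative only.

A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0]])

def surface(u, v, A):
    n, m = A.shape
    U = u ** np.arange(n)               # monomial basis [1, u, u^2, ...]
    V = v ** np.arange(m)
    return U @ A @ V                    # one matrix product per evaluation

def du_surface(u, v, A):
    """Partial derivative in u: differentiate the monomial basis once."""
    n, m = A.shape
    dU = np.arange(n) * u ** np.maximum(np.arange(n) - 1, 0)
    dU[0] = 0.0                         # derivative of the constant term
    return dU @ A @ (v ** np.arange(m))

print(surface(0.5, 0.5, A), du_surface(0.5, 0.5, A))
```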

Keywords: monomial forms, rectangular surfaces, CAGD curves, monomial matrix applications

Procedia PDF Downloads 123
26425 Transportation Accidents Mortality Modeling in Thailand

Authors: W. Sriwattanapongse, S. Prasitwattanaseree, S. Wongtrangan

Abstract:

Transportation accident mortality is a major problem that leads to the loss of human lives and economic losses. The objective was to identify patterns of statistical modeling for estimating mortality rates due to transportation accidents in Thailand, using data from 2000 to 2009. The data were taken from death certificates in the vital registration database. The numbers of deaths and the mortality rates were computed by gender, age, year and region. There were 114,790 transportation accident deaths. The highest average age-specific transport accident mortality rate was 3.11 per 100,000 per year, among males in the Southern region, and the lowest was 1.79 per 100,000 per year, among females in the North-East region. Linear, Poisson and negative binomial models were fitted, and the best was chosen based on the analysis of deviance and AIC. The negative binomial model was clearly the most appropriate fit.
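
A sketch of the model comparison described above, fitting Poisson and negative binomial GLMs to death counts with a population offset and comparing AICs; the counts and populations are simulated, not the Thai vital-registration data.

```python
import numpy as np
import statsmodels.api as sm

# Sketch: compare Poisson and negative binomial GLMs by AIC, with log(population)
# as an exposure offset. All data below are simulated placeholders.

rng = np.random.default_rng(0)
years = np.arange(2000, 2010)
pop = np.full(years.size, 2_000_000)
rate = 2.5e-5 * (1 + 0.03 * (years - 2000))          # slowly rising true rate
deaths = rng.negative_binomial(n=10, p=10 / (10 + rate * pop))

X = sm.add_constant(years - 2000)
poisson = sm.GLM(deaths, X, family=sm.families.Poisson(),
                 offset=np.log(pop)).fit()
negbin = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(),
                offset=np.log(pop)).fit()
print(f"Poisson AIC: {poisson.aic:.1f}  NegBin AIC: {negbin.aic:.1f}")
```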

Keywords: transportation accidents, mortality, modeling, analysis of deviance

Procedia PDF Downloads 213
26424 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. Too many features are known not only to slow algorithms down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider when buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (R-squared) and root mean square error (RMSE).
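
A sketch of the described pipeline using the third-party Python 'boruta' package (BorutaPy) as an assumed stand-in for the original R implementation, on synthetic data in place of the 79-feature housing set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from boruta import BorutaPy   # assumes the third-party 'boruta' package (BorutaPy)

# Sketch: Boruta feature selection wrapped around a random forest, then a
# random forest fit on the confirmed features. Data are synthetic stand-ins.

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.1, 300)  # two informative features

rf = RandomForestRegressor(n_jobs=-1, max_depth=5, random_state=0)
selector = BorutaPy(rf, n_estimators="auto", random_state=0)
selector.fit(X, y)                       # BorutaPy expects numpy arrays

X_sel = X[:, selector.support_]          # keep confirmed features only
model = RandomForestRegressor(random_state=0).fit(X_sel, y)
print("confirmed features:", np.where(selector.support_)[0])
print(f"training R^2: {model.score(X_sel, y):.3f}")
```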

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 287