Search results for: elemental graph data model

12357 Simulating and Forecasting Qualitative Macroeconomic Models Using Rule-Based Fuzzy Cognitive Maps

Authors: Spiros Mazarakis, George Matzavinos, Peter P. Groumpos

Abstract:

Economic models are complex dynamic systems with many uncertainties and fuzzy data. Conventional modeling approaches using well-known methods and techniques cannot provide realistic and satisfactory answers to today's challenging economic problems. Qualitative modeling using fuzzy logic and intelligent system theories can be used to model macroeconomic systems. Fuzzy Cognitive Maps (FCMs) are a new method for modeling the dynamic behavior of complex systems. For the first time, FCMs and the Mamdani model of intelligent control are used together to model macroeconomic systems. This new model is referred to as the Mamdani Rule-Based Fuzzy Cognitive Map (MBFCM) and provides the academic and research community with a new, promising, integrated advanced computational model. A new economic model is developed for a qualitative approach to macroeconomic modeling. Fuzzy controllers for such models are designed. Simulation results for an economic scenario are provided and extensively discussed.
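
Since the abstract describes the FCM mechanism only verbally, a minimal sketch of the basic (non-Mamdani) FCM update loop is given below; the concepts, weight matrix, and squashing function are hypothetical stand-ins, not the authors' MBFCM.

```python
import numpy as np

# Hypothetical macroeconomic concepts and causal weight matrix (illustrative only).
concepts = ["inflation", "interest_rate", "investment", "GDP_growth"]
W = np.array([
    [0.0,  0.7, -0.4, 0.0],   # influence of inflation on the others
    [0.0,  0.0, -0.6, 0.0],   # influence of interest_rate
    [0.0,  0.0,  0.0, 0.8],   # influence of investment
    [0.3,  0.0,  0.0, 0.0],   # influence of GDP_growth
])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Standard FCM inference: propagate activations through the weight matrix,
# keeping a self-memory term, until the state settles.
state = np.array([0.6, 0.4, 0.5, 0.5])
for _ in range(20):
    state = sigmoid(state @ W + state)
print(dict(zip(concepts, state.round(3))))
```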

Keywords: Macroeconomic Models, Mamdani Rule-Based FCMs (MBFCMs), Qualitative and Dynamic Systems, Simulation.

12356 Classification of Soil Aptness for Establishment of Panicum virgatum in Mississippi using Sensitivity Analysis and GIS

Authors: Eduardo F. Arias, William Cooke III, Zhaofei Fan, William Kingery

Abstract:

During the last decade Panicum virgatum, known as Switchgrass, has been broadly studied because of its remarkable attributes as a substitute pasture and as a functional biofuel source. The objective of this investigation was to establish soil suitability for Switchgrass in the State of Mississippi. A linear weighted additive model was developed to forecast soil suitability. Multicriteria analysis and sensitivity analysis were utilized to adjust and optimize the model. The model was fit using seven years of field data associated with soil characteristics collected from the Natural Resources Conservation Service of the United States Department of Agriculture (NRCS-USDA). The best model was selected by correlating calculated biomass yield with each model's soils-based output for Switchgrass suitability. The coefficient of determination (r2) was the decisive factor used to select the 'best' soil suitability model. Coefficients associated with the 'best' model were implemented within a Geographic Information System (GIS) to create a map of relative soil suitability for Switchgrass in Mississippi. A geodatabase of soil parameters was built and is available for future GIS use.
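
A linear weighted additive model with one-at-a-time sensitivity analysis can be written in a few lines; the criteria, weights, and perturbation size below are hypothetical, not the study's fitted values.

```python
# Hypothetical soil criteria for one map cell, each rescaled to [0, 1] (illustrative only).
criteria = {"pH_suitability": 0.8, "drainage": 0.6, "depth": 0.9, "slope": 0.7}
weights  = {"pH_suitability": 0.3, "drainage": 0.3, "depth": 0.2, "slope": 0.2}

def suitability(criteria, weights):
    """Linear weighted additive model: S = sum_i w_i * x_i, with the weights summing to 1."""
    return sum(weights[k] * criteria[k] for k in criteria)

base = suitability(criteria, weights)

# One-at-a-time sensitivity analysis: bump each weight by 10%, renormalize, re-score.
for k in weights:
    w = dict(weights)
    w[k] *= 1.10
    total = sum(w.values())
    w = {kk: v / total for kk, v in w.items()}
    print(f"{k}: base={base:.3f}, perturbed={suitability(criteria, w):.3f}")
```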

Keywords: Aptness, GIS, sensitivity analysis, switchgrass, soil.

12355 Fatigue Properties and Strength Degradation of Carbon Fiber Reinforced Composites

Authors: Pasquale Verde, Giuseppe Lamanna

Abstract:

A two-parameter fatigue model explicitly accounting for the cyclic as well as the mean stress was used to fit static and fatigue data available in the literature on carbon fiber reinforced composite laminates subjected to tension-tension fatigue. The model confirms the strength–life equal rank assumption and reasonably predicts the probability of failure under cyclic loading. The model parameters were found by best-fitting procedures and required a minimum of experimental tests.

Keywords: Fatigue life, strength, composites, Weibull distribution.

12354 Artificial Visual Percepts for Image Understanding

Authors: Jeewanee Bamunusinghe, Damminda Alahakoon

Abstract:

Visual inputs are one of the key sources from which humans perceive the environment and 'understand' what is happening. Artificial systems perceive visual inputs as digital images, which need to be processed and analysed. Within the human brain, the processing of visual inputs and the subsequent development of perception is one of its major functionalities. In this paper we present part of our research project, which aims at the development of an artificial model for visual perception (or 'understanding') based on the human perceptive and cognitive systems. We propose a new model for perception from visual inputs and a way of understanding or interpreting images using the model. We demonstrate the implementation and use of the model with a real image data set.

Keywords: Image understanding, percept, visual perception.

12353 A Study on Metal Hexagonal Honeycomb Crushing Under Quasi-Static Loading

Authors: M. Zarei Mahmoudabadi, M. Sadighi

Abstract:

In the study of honeycomb crushing under quasi-static loading, two parameters are important: the mean crushing stress and the wavelength of the folding mode. Previous theoretical models did not consider the true cylindrical curvature effects and the flow stress in the folding mode of the honeycomb material. The present paper introduces a modification of Wierzbicki's model that accounts for the two above-mentioned parameters when estimating the mean crushing stress and the wavelength through the energy method. Comparison of the results obtained by the new model and by Wierzbicki's model with existing experimental data shows better prediction by the model presented in this paper.

Keywords: Crush strength, Flow stress, Honeycomb, Quasi-static load.

12352 Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. Too many features are known not only to slow algorithms down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios; their impact on model accuracy is captured using the coefficient of determination (r-square) and the root mean square error (RMSE).
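
A sketch of the described pipeline, assuming the open-source BorutaPy package and scikit-learn; the file name housing.csv, the SalePrice column, and the single 80/20 split are placeholders for the study's data and its five partitioning ratios.

```python
import pandas as pd
from boruta import BorutaPy
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical housing dataset with a SalePrice target (illustrative only).
df = pd.read_csv("housing.csv")
X = df.drop(columns=["SalePrice"]).values
y = df["SalePrice"].values

# Wrapper feature selection: Boruta compares real features against shuffled "shadow" copies.
rf = RandomForestRegressor(n_jobs=-1, max_depth=5)
selector = BorutaPy(rf, n_estimators="auto", random_state=42)
selector.fit(X, y)
X_sel = X[:, selector.support_]          # keep only the confirmed features

# Refit a random forest on the confirmed features and evaluate one partitioning ratio.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=500, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("r-square:", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```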

Keywords: Housing data, feature selection, random forest, Boruta algorithm, root mean square error.

12351 Estimation of Human Absorbed Dose Using Compartmental Model

Authors: M. Mousavi-Daramoroudi, H. Yousefnia, F. Abbasi-Davani, S. Zolghadri

Abstract:

Dosimetry is an indispensable and precious factor in patient treatment planning to minimize the absorbed dose in vital tissues. In this study, a compartmental model was used to estimate the human absorbed dose of 177Lu-DOTATOC from biodistribution data in wild-type rats. For this purpose, 177Lu-DOTATOC was prepared under optimized conditions and its biodistribution was studied in male Syrian rats for up to 168 h. The compartmental model was applied to give a mathematical description of the drug behaviour in tissue at different times. Dosimetric estimation of the complex was performed using the radiation absorbed dose assessment resource (RADAR). The biodistribution data showed high accumulation in the adrenal and pancreas, the major expression sites for the somatostatin receptor (SSTR). While the kidneys, as the major route of excretion, receive 0.037 mSv/MBq, the pancreas and adrenal receive 0.039 and 0.028 mSv/MBq, respectively. This method increased the number of accumulated-activity data points and yielded further information on tissue uptake, which leads to improved precision in the dosimetric calculations.
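
A sketch of fitting a simple two-compartment model to biodistribution time points, assuming SciPy; the compartments, rate constants, and measurements below are hypothetical, and the study's actual model structure and RADAR dose step are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical two-compartment model, blood -> tissue -> elimination, with the
# closed-form tissue activity A(t) = k12/(k20 - k12) * (exp(-k12*t) - exp(-k20*t))
# for a unit activity in blood at t = 0.
def tissue_activity(t, k12, k20):
    return k12 / (k20 - k12) * (np.exp(-k12 * t) - np.exp(-k20 * t))

# Hypothetical tissue measurements at sampling times up to 168 h.
t_obs = np.array([1, 4, 24, 48, 96, 168], dtype=float)   # hours
a_obs = np.array([0.05, 0.18, 0.30, 0.26, 0.15, 0.07])   # fraction of injected activity

(k12, k20), _ = curve_fit(tissue_activity, t_obs, a_obs, p0=[0.1, 0.01])
print(f"fitted k12 = {k12:.4f} 1/h, k20 = {k20:.4f} 1/h")

# Cumulated activity (the quantity a dose engine such as RADAR consumes) is the
# area under the fitted time-activity curve.
t_fine = np.linspace(0, 168, 2000)
print("cumulated activity:", np.trapz(tissue_activity(t_fine, k12, k20), t_fine))
```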

Keywords: Compartmental modeling, human absorbed dose, 177Lu-DOTATOC, Syrian rats.

12350 Quantitative Study for Exchange of Gases from Open Sewer Channel to Atmosphere

Authors: Asif Mansoor, Nasiruddin Khan, Noreen Jamil

Abstract:

In this communication a quantitative modeling approach is applied to construct a model for the exchange of gases from an open sewer channel to the atmosphere. Gas-exchange data for the open sewer channel from January 1979 to December 2006 are utilized for the construction of the model. The study reveals that the stream flow of the open sewer channel exchanges toxic gases continuously on a time-varying scale. We find that the quantitative modeling approach yields a more parsimonious model for these exchanges. The usual diagnostic tests are applied for model adequacy. This model is beneficial for planning and managerial bodies for the improvement of implemented policies to overcome future environmental problems.
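
The abstract names stochastic models and diagnostic checks but not the exact model class; as one illustration, a monthly time-series fit with a residual diagnostic, assuming statsmodels. The file sewer_gas.csv, the column name, and the ARIMA(1,1,1) order are hypothetical.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Hypothetical monthly gas-concentration series, Jan 1979 to Dec 2006 (illustrative file).
series = pd.read_csv("sewer_gas.csv", index_col="month", parse_dates=True)["h2s_ppm"]

# Fit a parsimonious stochastic model to the monthly exchange series.
fit = ARIMA(series, order=(1, 1, 1)).fit()
print(fit.summary())

# Usual diagnostic check for model adequacy: Ljung-Box test on the residuals.
print(acorr_ljungbox(fit.resid, lags=[12]))
```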

Keywords: Open sewer channel, Industrial waste, Municipal waste, Gases exchange, Atmosphere, Stochastic models, Diagnostic checks.

12349 Definition of Foot Size Model using Kohonen Network

Authors: Khawla Ben Abderrahim

Abstract:

In order to define a new model of Tunisian foot sizes and to build the most comfortable shoes, Tunisian industrialists must be able to offer their customers products that fit and adjust to the majority of the target population. Moreover, the use of shoe models from other countries causes a mismatch between the foot and the comfort of Tunisian shoes: every foot is unique, so these models become uncomfortable for the Tunisian foot. We have a set of measurements produced from a 3D scan of the feet of a diverse population (women, men, ...) and we analyze these data to define a foot-size model specific to Tunisian footwear design. In this paper we propose two new approaches to modeling a new foot-size model. We use neural networks, specifically the Kohonen network. Next, we combine neural networks with the concept of the half-foot size to improve the models already found. Finally, we compare the results obtained by applying each approach and decide which approach gives the best foot-size model for designing more comfortable shoes.
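
A sketch of the Kohonen (self-organizing map) grouping step, assuming the MiniSom package; the synthetic measurements and the 4×3 grid are illustrative stand-ins for the 3D-scan data and the final size model.

```python
import numpy as np
from minisom import MiniSom

# Hypothetical 3D-scan foot measurements (length, width, girth in mm), one row per subject.
rng = np.random.default_rng(0)
feet = rng.normal(loc=[255, 98, 240], scale=[12, 6, 11], size=(500, 3))
feet_norm = (feet - feet.mean(axis=0)) / feet.std(axis=0)

# A small Kohonen (self-organizing) map; each node becomes a candidate foot-size class.
som = MiniSom(x=4, y=3, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(feet_norm)
som.train_random(feet_norm, num_iteration=5000)

# Assign every foot to its best-matching node, i.e. its size group.
groups = [som.winner(f) for f in feet_norm]
print("subjects in node (0, 0):", sum(1 for g in groups if g == (0, 0)))
```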

Keywords: Morphology of the foot, foot size, half foot size, neural network, Kohonen network, model of foot size.

12348 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model as a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, where a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m2 on one side; the test was performed with ignition at the center of the wall opposite the vent, using hydrogen-air mixtures with hydrogen concentrations close to 18 vol%. The results from the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data are in good agreement with the results of the two explosion tests.

Keywords: Deflagration, Large Eddy Simulation, Turbulent combustion, Vented enclosure.

12347 Phenomenological and Semi-microscopic Analysis for Elastic Scattering of Protons on 6,7Li

Authors: A. Amar, N. Burtebayev, Sh. Hamada, Kerimkulov Zhambul, N. Amangieldy

Abstract:

An analysis of the elastic scattering of protons on 6,7Li nuclei has been done in the framework of the optical model at beam energies up to 50 MeV. Differential cross sections for 6,7Li + p scattering were measured over the proton laboratory-energy range from 400 to 1050 keV. The elastic scattering 6,7Li + p data at different proton incident energies have been analyzed using the single-folding model. In each case the real potential obtained from the folding model was supplemented by a phenomenological imaginary potential, and during the fitting process the real potential was normalized and the imaginary potential optimized. The normalization factor NR is calculated in the range between 0.70 and 0.84.

Keywords: scattering of protons on 6,7Li nuclei, Esis88 code, single-folding model, phenomenological.

12346 Order Partitioning in Hybrid MTS/MTO Contexts using Fuzzy ANP

Authors: H. Rafiei, M. Rabbani

Abstract:

The hybrid MTS/MTO production context is a novel concept for balancing the trade-off between make-to-stock and make-to-order. One of the most important decisions in a hybrid MTS/MTO environment is determining whether a product is manufactured to stock, to order, or under a hybrid MTS/MTO strategy. In this paper, a model based on the analytic network process is developed to tackle this decision. Since the decision deals with the uncertainty and ambiguity of data as well as experts' and managers' linguistic judgments, the proposed model is equipped with fuzzy set theory. An important attribute of the model is its generality, owing to the diverse decision factors which are elicited from the literature and developed by the authors. Finally, the model is validated by application to a real case study, which reveals how the proposed model can actually be implemented.

Keywords: Fuzzy analytic network process, Hybrid make-to-stock/make-to-order, Order partitioning, Production planning.

12345 Improved Data Warehousing: Lessons Learnt from the Systems Approach

Authors: Roelien Goede

Abstract:

Data warehousing success rates are not high enough. User dissatisfaction and failure to adhere to time frames and budgets are too common. Most traditional information systems practices are rooted in hard systems thinking, and today the great systems thinkers are forgotten by information systems developers. A data warehouse is still a system, and it is worth investigating whether systems thinkers such as Churchman can enhance our practices. This paper investigates data warehouse development practices from a systems thinking perspective. An empirical investigation is done in order to understand the everyday practices of data warehousing professionals from a systems perspective. The paper presents a model for the application of Churchman's systems approach in data warehouse development.

Keywords: Data warehouse development, Information systems development, Interpretive case study, Systems thinking.

12344 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan

Abstract:

The most important stage of the water treatment process is coagulation, which uses alum and polyaluminum chloride (PACL); determining the dosage of alum and PACL is therefore the most important factor to be prescribed. This research applies an artificial neural network (ANN) using the Levenberg–Marquardt algorithm to create a mathematical model (Soft Jar Test) for predicting the chemical doses used for coagulation, such as alum and PACL, with input data consisting of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen Water Treatment Plant (BKWTP), under the authority of the Metropolitan Waterworks Authority of Thailand. The data were collected from 1 January 2019 to 31 December 2019 in order to cover the changing seasons of Thailand. The input data of the ANN are divided into three groups: a training set, a test set, and a validation set. The coefficient of determination and the mean absolute error of the alum model are 0.73 and 3.18, and those of the PACL model are 0.59 and 3.21, respectively.
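
As a rough illustration of the training step, the sketch below fits a tiny one-hidden-layer network with SciPy's least_squares(method="lm"), which implements the Levenberg-Marquardt algorithm; the synthetic data stand in for the five listed inputs and the alum dose, and the network size is arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical jar-test records: turbidity, pH, alkalinity, conductivity, OC -> alum dose.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 5))            # stand-in for normalized raw-water data
y = X @ np.array([2.0, -1.0, 0.5, 0.3, 1.2]) + 0.1 * rng.normal(size=200)

H = 6  # hidden neurons

def unpack(theta):
    W1 = theta[: 5 * H].reshape(5, H)
    b1 = theta[5 * H : 6 * H]
    W2 = theta[6 * H : 7 * H]
    b2 = theta[-1]
    return W1, b1, W2, b2

def residuals(theta):
    W1, b1, W2, b2 = unpack(theta)
    hidden = np.tanh(X @ W1 + b1)
    return hidden @ W2 + b2 - y

# method="lm" is SciPy's Levenberg-Marquardt implementation.
theta0 = 0.1 * rng.normal(size=7 * H + 1)
fit = least_squares(residuals, theta0, method="lm")
print("training RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```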

Keywords: Soft jar test, jar test, water treatment plant process, artificial neural network.

12343 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory

Authors: Chiung-Hui Chen

Abstract:

The Internet of Things (IoT) was designed for widespread convenience. With smart tags and the sensing network, a large quantity of dynamic information is immediately presented in the IoT. Through internal communication and interaction, meaningful objects provide real-time services for users. Therefore, service with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors and adopted the probability module of the hierarchical Hidden Markov Model for inference. The statistical analysis was conducted to achieve the following objectives: first, define user behaviors and predict user behavior routes with the environment model to analyze user purposes; second, construct the hierarchical Hidden Markov Model according to the logic framework, and establish the sequential intensity among behaviors to get acquainted with the use and activity fabric of the intelligent environment; third, establish the intensity of the relation between the probability of objects' being used and the objects, an indicator that can describe the possible limitations of the mechanism. As the process is recorded in the information system created in this study, these data can be reused to adjust the procedure of intelligent design services.

Keywords: Behavior, big data, hierarchical Hidden Markov Model, intelligent object.

12342 Prediction Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy

Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie

Abstract:

In recent years, there has been an explosion in the use of technology that helps in discovering diseases. For example, DNA microarrays allow us for the first time to obtain a "global" view of the cell. They have great potential to provide accurate medical diagnosis and to help in finding the right treatment and cure for many diseases. Various classification algorithms can be applied to such microarray datasets to devise methods that can predict the occurrence of leukemia. In this study, we compared the classification accuracy and response time of eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that Random Tree produces better results and takes the lowest time to build a model among the tree classifiers. Among the classification rule algorithms, the nearest-neighbor-like algorithm (NNge) is the best owing to its high accuracy and the lowest model-building time.
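
Random Tree and NNge are WEKA learners; as an illustrative analog of the comparison, the sketch below times and cross-validates rough scikit-learn counterparts (ExtraTree as a Random Tree analog, k-NN as an NNge analog) on synthetic microarray-like data.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for a leukemia microarray dataset: many features, few samples (illustrative only).
X, y = make_classification(n_samples=72, n_features=2000, n_informative=50, random_state=0)

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomTree (ExtraTree analog)": ExtraTreeClassifier(random_state=0),
    "NNge (k-NN analog)": KNeighborsClassifier(n_neighbors=3),
}

for name, clf in models.items():
    start = time.perf_counter()
    acc = cross_val_score(clf, X, y, cv=5).mean()   # classification accuracy
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={acc:.3f}, time={elapsed:.2f}s")
```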

Keywords: Data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data.

12341 Application of a Company Financial Crisis Early Warning Model: Use of the "Financial Reference Database"

Authors: Chiung-ying Lee, Chia-hua Chang

Abstract:

On July 1, 2007, the Taiwan Stock Exchange (TWSE) added a new "Financial reference database" to its Market Observation Post System (MOPS) for investors to use as an investment reference. The database serves as a warning based on the public financial information of listed companies and originally comprised eight indicators. In this paper, the indicators provided by this database are applied to a company financial crisis early warning model, to verify whether these indicators forecast financial crises with as high an accuracy rate as the positive results reported by domestic and foreign scholars. A logistic regression model is used for the financial early warning model: the first model excludes the additional conditions and the second model includes them. Companies that experienced a financial crisis were taken as research samples, and sample data from the two periods before the crisis point (T-1 and T-2) were used for the empirical analysis. The results show that the debt ratio and the net value per share provided by this database are the best forecast variables.
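
A minimal sketch of the logistic-regression early-warning step, assuming scikit-learn; the file name and column names are hypothetical, with the features mirroring the two best variables named in the abstract.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical sample: indicators at T-1 (one period before the crisis point),
# with crisis = 1 for companies that later experienced a financial crisis.
df = pd.read_csv("mops_t1_sample.csv")          # illustrative file name
X = df[["debt_ratio", "net_value_per_share"]]   # the two best forecast variables reported
y = df["crisis"]

clf = LogisticRegression(max_iter=1000)
print("T-1 classification accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```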

Keywords: Financial reference database, Financial early warning model, Logistic Regression.

12340 Artificial Neural Network-Based Short-Term Load Forecasting for Mymensingh Area of Bangladesh

Authors: S. M. Anowarul Haque, Md. Asiful Islam

Abstract:

Electrical load forecasting is considered to be one of the most indispensable parts of a modern-day electrical power system. To ensure a reliable and efficient supply of electric energy, special emphasis should be placed on the predictive capability of the electricity supply. Artificial Neural Network-based approaches have emerged as a significant area of interest for electric load forecasting research. This paper proposes an Artificial Neural Network model based on the particle swarm optimization algorithm for improved electric load forecasting for Mymensingh, Bangladesh. The forecasting model is developed and simulated in the MATLAB environment with a large number of training datasets. The model is trained on eight input parameters, including historical load and weather data. The predicted load data are then compared with an available dataset for validation. The proposed neural network model proves to be more reliable in terms of day-wise load forecasting for Mymensingh, Bangladesh.
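
The study works in MATLAB; as a language-neutral illustration, the sketch below shows a plain global-best PSO searching the weights of a small eight-input network on synthetic data. All parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: eight inputs (historical load + weather) -> next-day load.
X = rng.uniform(0, 1, size=(300, 8))
y = X.sum(axis=1) / 8 + 0.05 * rng.normal(size=300)   # stand-in target

H = 10
DIM = 8 * H + H + H + 1   # weights and biases of an 8-H-1 network

def predict(theta, X):
    W1 = theta[: 8 * H].reshape(8, H)
    b1 = theta[8 * H : 9 * H]
    W2 = theta[9 * H : 10 * H]
    b2 = theta[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def cost(theta):
    return np.mean((predict(theta, X) - y) ** 2)

# Plain global-best PSO over the network weights (a minimal, illustrative variant).
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.normal(size=(n, DIM)); vel = np.zeros((n, DIM))
pbest = pos.copy(); pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, DIM)), rng.random((n, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best training MSE:", pbest_cost.min())
```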

Keywords: Load forecasting, artificial neural network, particle swarm optimization.

12339 A New Integer Programming Formulation for the Chinese Postman Problem with Time Dependent Travel Times

Authors: Jinghao Sun, Guozhen Tan, Guangjian Hou

Abstract:

The Chinese Postman Problem (CPP) is one of the classical problems in graph theory and is applicable in a wide range of fields. With the rapid development of hybrid systems and model-based testing, the Chinese Postman Problem with Time Dependent Travel Times (CPPTDT) becomes more realistic than the classical problem. In previous work, we proposed the first integer programming formulation for the CPPTDT, namely the circuit formulation, based on which some polyhedral results were investigated and a cutting plane algorithm was designed. However, there exists a main drawback: the circuit formulation is only available for solving the special instances in which all circuits pass through the origin. Therefore, this paper proposes a new integer programming formulation for solving all the general instances of the CPPTDT. Moreover, the size of the circuit formulation, which was too large, is reduced dramatically here. Thus, it is possible to design more efficient algorithms for solving the CPPTDT in future research.

Keywords: Chinese Postman Problem, Time Dependent, Integer Programming, Upper Bound Analysis.

12338 Measurement of Operational and Environmental Performance of the Coal-Fired Power Plants in India by Using Data Envelopment Analysis

Authors: Vijay Kumar Bajpai, Sudhir Kumar Singh

Abstract:

In this study, performance analyses of twenty-five Coal-Fired Power Plants (CFPPs) used for electricity generation are carried out through various Data Envelopment Analysis (DEA) models. Three efficiency indices are defined and pursued. In the calculation of operational performance, energy and non-energy variables are used as inputs, and net electricity produced is used as the desired output (Model 1). CO2 emitted to the environment is used as the undesired output (Model 2) in the computation of pure environmental performance, while in Model 3 CO2 emissions are considered a detrimental input in the calculation of operational and environmental performance. Empirical results show that most of the plants are operating in the increasing returns-to-scale region and that the Mettur plant is the efficient one with regard to energy use and the environment. The results also indicate that the undesirable output effect is insignificant in the research sample. The present study will provide clues to plant operators towards raising the operational and environmental performance of CFPPs.
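
Model 1 (inputs and a desired output) corresponds to a standard input-oriented CCR efficiency score, which can be computed as one linear program per plant; the sketch below uses scipy.optimize.linprog with hypothetical plant data. The undesired-output variants (Models 2 and 3) are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical plant data: inputs (coal use, auxiliary energy) and output (net electricity).
X = np.array([[120, 15], [150, 20], [100, 12], [180, 30]], dtype=float)  # one row per plant
Y = np.array([[300], [340], [290], [360]], dtype=float)                  # desired output

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of plant k (envelopment form): minimize theta."""
    n, m = X.shape          # plants, inputs
    s = Y.shape[1]          # outputs
    c = np.r_[1.0, np.zeros(n)]                 # decision vars: [theta, lambda_1..lambda_n]
    # inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    # outputs: -sum_j lambda_j * y_rj <= -y_rk  (composite output at least plant k's level)
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(len(X)):
    print(f"plant {k}: efficiency = {ccr_efficiency(k):.3f}")
```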

Keywords: Coal fired power plants, environmental performance, data envelopment analysis, operational performance.

12337 Optimizing Telehealth Internet of Things Integration: A Sustainable Approach through Fog and Cloud Computing Platforms for Energy Efficiency

Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo

Abstract:

The swift proliferation of telehealth Internet of Things (IoT) devices has sparked concerns regarding energy consumption and the need for streamlined data processing. This paper presents an energy-efficient model that integrates telehealth IoT devices into a platform based on fog and cloud computing. This integrated system provides a sustainable and robust solution to address these challenges. Our model strategically utilizes fog computing as a localized data-processing layer and leverages cloud computing for resource-intensive tasks, resulting in a significant reduction in overall energy consumption. The incorporation of adaptive energy-saving strategies further enhances the efficiency of our approach. Simulation analysis validates the effectiveness of our model in improving energy efficiency for telehealth IoT systems, particularly when integrated with localized fog nodes and both private and public cloud infrastructures. Subsequent research endeavors will concentrate on refining the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability across various healthcare and industry sectors.

Keywords: Energy-efficient, fog computing, IoT, telehealth.

12336 A Mixed Integer Programming Model for the Port Anzali Development Plan

Authors: Mahdieh Allahviranloo

Abstract:

This paper introduces a mixed integer programming model to find the optimum development plan for the port of Anzali. The model minimizes total system costs, taking into account both port infrastructure costs and shipping costs. Due to the multipurpose function of the port, the model consists of 1020 decision variables and 2490 constraints. The results of the model determine the optimum number of berths that should be constructed in each period and for each type of cargo. In addition, the results of a sensitivity analysis on port operation quantity provide useful information for managers to choose the best scenario for port planning with the lowest investment risk. Despite all limitations due to data availability, the model offers a straightforward decision tool to port planners aspiring to achieve optimum port planning steps.

Keywords: MILP, Multipurpose Terminal, Port Operation Optimization, Port Anzali.

12335 A Specification-Based Approach for Retrieval of Reusable Business Component for Software Reuse

Authors: Meng Fanchao, Zhan Dechen, Xu Xiaofei

Abstract:

Software reuse can be considered the most realistic and promising way to improve software engineering productivity and quality. Automated assistance for software reuse involves the representation, classification, retrieval, and adaptation of components. The representation and retrieval of components are important to software reuse in Component-Based Software Development (CBSD). However, current industrial component models mainly focus on implementation techniques and ignore the semantic information about components, so it is difficult to retrieve components that satisfy users' requirements. This paper presents a method of business component retrieval based on specification matching to support software reuse in enterprise information systems. First, a reuse-oriented business component model is proposed. In our model, a business data type is represented as a sign data type based on XML, which can express the variable business data types that describe the variety of business operations. Based on this model, we propose specification match relationships at two levels: the business operation level and the business component level. At the business operation level, we use input business data types, output business data types, and the taxonomy of business operations to evaluate the similarity between business operations. At the business component level, we propose five specification matches between business components. To retrieve reusable business components, we propose a measure of similarity degrees to calculate the similarities between business components. Finally, an SQL-like business component retrieval command is proposed to help users retrieve approximate business components from a component repository.

Keywords: Business component, business operation, business data type, specification matching.

12334 Dynamic Model of a Buck Converter with a Sliding Mode Control

Authors: S. Chonsatidjamroen, K-N. Areerak, K-L. Areerak

Abstract:

This paper presents an averaged model of a buck converter derived from the generalized state-space averaging method. Sliding mode control is used to regulate the output voltage of the converter and is taken into account in the model. The proposed model requires less computational time than the full topology model. Intensive time-domain simulations via the exact topology model are used as the benchmark. The results show that good agreement between the proposed model and the switching model is achieved in both transient and steady-state responses. The reported model is suitable for optimal controller design using artificial intelligence techniques.
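
A minimal sketch of the idea, assuming SciPy: an averaged buck model driven by a sign-type sliding-mode law on the output-voltage error. Component values and the sliding-surface coefficient are illustrative, not the paper's, and the discontinuous switching is simulated directly rather than through the generalized state-space averaging of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative buck converter parameters (not the paper's values).
Vin, L, C, R, Vref = 24.0, 1e-3, 470e-6, 10.0, 12.0
c1 = 1e-3  # sliding-surface coefficient

def buck_with_smc(t, x):
    i, v = x
    dv = (i - v / R) / C
    # Sliding surface on the voltage error and its derivative; u is the switch command.
    s = (Vref - v) - c1 * dv
    u = 0.5 * (1 + np.sign(s))
    di = (u * Vin - v) / L
    return [di, dv]

sol = solve_ivp(buck_with_smc, [0, 0.05], [0.0, 0.0], max_step=1e-5)
print("final output voltage:", sol.y[1, -1])   # should settle near Vref
```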

Keywords: Generalized state-space averaging method, buck converter, sliding mode control, modeling, simulation.

12333 Migration of the Relational Database (RDB) to the Object-Relational Database (ORDB)

Authors: Alae El Alami, Mohamed Bahaj

Abstract:

This paper proposes an approach for translating an existing relational database (RDB) schema into an object-relational database (ORDB) schema. The transition is done with methods that can extract various features from the RDB based on aggregations, associations between the various tables, and reflexive relationships. These methods can even extract inheritance, although no reverse-engineering process can detect on its own that a relationship is an inheritance; our approach therefore goes beyond all previous studies on the transition from RDB to ORDB. In summary, we create a New Data Model (NDM) that stores the RDB in the form of a structured table; from the NDM we create our navigational model in order to simplify object implementation, from which we develop the different types. Through these types we proceed to the last step, the creation of tables.

The steps mentioned above do not require any human interference. All this is done automatically, and a prototype has already been created which proves the effectiveness of this approach.

Keywords: Relational databases, Object-relational databases, Semantic enrichment.

12332 Automated Fact-Checking By Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study presents a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive and authoritative data; 2) developing a search function to automatically select relevant, new and credible references; 3) focusing on the parts of the representations of a claim and its references that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include I) exploring the knowledge graph in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.

Keywords: Fact checking, claim verification, Deep Learning, Natural Language Processing.

12331 A Dynamic Hybrid Option Pricing Model by Genetic Algorithm and Black-Scholes Model

Authors: Yi-Chang Chen, Shan-Lin Chang, Chia-Chun Wu

Abstract:

Unlike previous research, which has concentrated on model-driven option pricing, this study focuses extensively on the trading behavior of the option market. For example, the Black-Scholes (B-S) model is one of the most famous option pricing models, yet its assumptions have been questioned in reviews of pricing models. This paper therefore emphasizes the importance of dynamic behavior for option pricing, which is also the reason for using a genetic algorithm (GA), with its natural selection and species evolution. This study proposes a hybrid model, the Genetic-BS model, which combines the GA and the B-S model to estimate prices more accurately. In the final experiments, the results show that the estimated prices have a lower MAE than the prices calculated by either the B-S model or its enhanced variant, the Gram-Charlier GARCH (G-C GARCH) model. This work concludes that the Genetic-BS pricing model is practical.
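
The abstract does not spell out how the GA and B-S are coupled; one plausible minimal reading, sketched below, is a GA calibrating the B-S volatility to quoted prices by minimizing MAE. The quotes and GA settings are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Hypothetical market quotes for one underlying (illustrative only).
S, r, T = 100.0, 0.02, 0.5
strikes = np.array([90.0, 100.0, 110.0])
market = np.array([12.10, 5.40, 1.90])

def mae(sigma):
    return np.mean(np.abs(bs_call(S, strikes, T, r, sigma) - market))

# Minimal GA: evolve a population of volatilities by elitist selection and mutation.
rng = np.random.default_rng(0)
pop = rng.uniform(0.05, 0.80, size=50)
for _ in range(100):
    fitness = np.array([mae(s) for s in pop])
    parents = pop[np.argsort(fitness)[:10]]                          # keep the 10 best
    children = rng.choice(parents, 40) + 0.02 * rng.normal(size=40)  # mutate offspring
    pop = np.r_[parents, np.clip(children, 0.01, 1.5)]

best = pop[np.argmin([mae(s) for s in pop])]
print(f"GA-calibrated volatility: {best:.4f}, MAE: {mae(best):.4f}")
```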

Keywords: genetic algorithm, Genetic-BS, option pricing model.

12330 Developing New Algorithm and Its Application on Optimal Control of Pumps in Water Distribution Network

Authors: R. Rajabpour, N. Talebbeydokhti, M. H. Ahmadi

Abstract:

In recent years, new techniques for solving complex engineering problems have been proposed. One of these techniques is the JPSO algorithm. With innovative changes in the nature of the jump operation in JPSO, it is possible to construct a graph-based solution with a new algorithm called G-JPSO. In this paper, a new algorithm is evaluated for solving the Fletcher-Powell optimal control problem and the optimal control of pumps in a water distribution network. Optimal control of the pumps comprises the optimum operation timetable (on and off status) for each of the pumps at the desired time intervals. The maximum number of on/off statuses for each pump is imposed on the objective function as another constraint. To determine the optimal operation of the pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The proposed algorithm's results compared well with those of the ant colony, genetic, and JPSO algorithms. This shows the robustness of the proposed algorithm in finding near-optimum solutions at reasonable computational cost.

Keywords: G-JPSO, operation, optimization, pumping station, water distribution networks.

12329 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

12328 Development of a Non-invasive System to Measure the Thickness of the Subcutaneous Adipose Tissue Layer for Human

Authors: Hyuck Ki Hong, Young Chang Jo, Yeon Shik Choi, Beom Joon Kim, Hyo Derk Park

Abstract:

To measure the thickness of the subcutaneous adipose tissue layer, a non-invasive optical measurement system (λ = 1300 nm) is introduced. Animal and human subjects are used for the experiments. The results for human subjects are compared with data from ultrasound device measurements, and a high correlation (r = 0.94 for n = 11) is observed. There are two modes in the corresponding signals measured by the optical system, which can be explained by two-layered and three-layered tissue models. If the target tissue is thinner than the critical thickness, the data detected using the diffuse reflectance method follow the three-layered tissue model, so the data increase as the thickness increases. On the other hand, if the target tissue is thicker than the critical thickness, the data follow the two-layered tissue model, so they decrease as the thickness increases.

Keywords: Subcutaneous adipose tissue layer, non-invasive measurement system, two-layered and three-layered tissue models.
