Search results for: Process Models.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7444

6874 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are flowing into the wind power generation industry as a leading source of clean energy. The wind energy sector depends entirely on the prediction of wind speed, which by its nature is highly stochastic. This study employs deep multi-fidelity Gaussian process regression (GPR) to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark; they represent the wind speed across the study area between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal with the empirical wavelet transform (EWT) and of including the vector components of wind speed to increase the number of input data layers for data fusion with deep multi-fidelity GPR. The outcomes were compared using the root mean square error (RMSE). The results show a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, underlining the importance of including all sub-data and of pre-processing the signals in wind speed forecasting models.

Keywords: Data fusion, Gaussian process regression, signal denoise, temporal extrapolation.
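
As an illustration of the regression step only, the sketch below fits a single-fidelity Gaussian process (scikit-learn) to synthetic placeholder data, with and without the u/v vector components as additional predictors, and compares the RMSE; the authors' deep multi-fidelity GPR and the RUNE data are not reproduced here.

```python
# Minimal single-fidelity sketch (not the authors' deep multi-fidelity model):
# compare GPR wind-speed predictions with and without the u/v vector components
# as extra inputs. Data here are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 400)                           # time axis (arbitrary units)
u = np.sin(t) + 0.1 * rng.normal(size=t.size)         # synthetic east-west component
v = np.cos(0.7 * t) + 0.1 * rng.normal(size=t.size)   # synthetic north-south component
speed = np.hypot(u, v)                                # wind speed magnitude

X_scalar = t.reshape(-1, 1)                           # time only
X_vector = np.column_stack([t, u, v])                 # time + vector components
train, test = slice(0, 300), slice(300, 400)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
for name, X in [("time only", X_scalar), ("time + u,v", X_vector)]:
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(X[train], speed[train])
    pred = gpr.predict(X[test])
    rmse = np.sqrt(mean_squared_error(speed[test], pred))
    print(f"{name}: RMSE = {rmse:.3f}")
```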

6873 Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models

Authors: Rohitash Chandra, Christian W. Omlin

Abstract:

We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. The architecture is trained on a set of deterministic finite-state automaton strings, and its generalization performance is observed on a new set of strings not present in the training data set. In this way, we show that the hybrid system of HMM and RNN can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm and also investigated which weight initializations were best for training the hybrid architecture. The results show that the hybrid architecture of recurrent neural networks inspired by hidden Markov models can learn and represent dynamical systems. The best training and generalization performance is achieved when the hybrid architecture is initialized with random real-valued weights in the range -15 to 15.

Keywords: Deterministic finite-state automata, genetic algorithm, hidden Markov models, hybrid systems and recurrent neural networks.
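
A minimal sketch of the general idea, under the assumption of a plain Elman-style RNN and a simple generational genetic algorithm (not the authors' exact HMM-inspired hybrid), evolving weights in the -15 to 15 range to classify strings from a toy deterministic finite-state automaton:

```python
# Sketch: evolve the weights of a small RNN with a genetic algorithm to
# classify strings from a toy DFA (here: "even number of 1s").
# The network and GA settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
H = 4                                    # hidden units
N_W = H + H * H + H + H + 1              # Wx, Wh, bh, Wo, bo flattened

def unpack(w):
    i = 0
    Wx = w[i:i + H]; i += H
    Wh = w[i:i + H * H].reshape(H, H); i += H * H
    bh = w[i:i + H]; i += H
    Wo = w[i:i + H]; i += H
    bo = w[i]
    return Wx, Wh, bh, Wo, bo

def rnn_output(w, bits):
    Wx, Wh, bh, Wo, bo = unpack(w)
    h = np.zeros(H)
    for b in bits:                       # unfold over the input string
        h = np.tanh(Wx * b + Wh @ h + bh)
    return 1 / (1 + np.exp(-(Wo @ h + bo)))   # sigmoid output

def make_data(n, length):
    X = rng.integers(0, 2, size=(n, length))
    y = (X.sum(axis=1) % 2 == 0).astype(float)   # DFA: even number of 1s
    return X, y

def fitness(w, X, y):
    preds = np.array([rnn_output(w, x) for x in X])
    return -np.mean((preds - y) ** 2)    # negative MSE (higher is better)

X_train, y_train = make_data(200, 8)
pop = rng.uniform(-15, 15, size=(60, N_W))       # weight range used in the paper
for gen in range(100):
    scores = np.array([fitness(w, X_train, y_train) for w in pop])
    elite = pop[np.argsort(scores)[-20:]]        # keep the best 20
    parents = elite[rng.integers(0, 20, size=(60, 2))]
    cut = rng.integers(1, N_W, size=60)
    mask = np.arange(N_W) < cut[:, None]         # one-point crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0, 0.5, size=pop.shape) * (rng.random(pop.shape) < 0.1)
print("best training fitness (last generation):", scores.max())
```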

6872 Seismic Analysis of a S-Curved Viaduct using Stick and Finite Element Models

Authors: Sourabh Agrawal, Ashok K. Jain

Abstract:

Stick models are widely used in preliminary studies of the behaviour of straight as well as skewed bridges and viaducts subjected to earthquakes. The application of such models to highly curved bridges continues to pose challenging problems. A viaduct proposed in the foothills of the Himalayas in Northern India is chosen for the study. It has 8 simply supported spans at 30 m centre-to-centre. It is doubly curved in the horizontal plane with a 20 m radius and is also inclined in the vertical plane. The superstructure consists of a box section. Three models have been used: a conventional stick model, an improved stick model, and a 3D finite element model. The improved stick model makes use of body constraints in order to study its capabilities. The first 8 frequencies of the latter two models differ by about 9.71%; the difference increases to 80% by the 50th mode. The viaduct was subjected to all three components of the El Centro earthquake of May 1940. The numerical integration was carried out using the Hilber-Hughes-Taylor method as implemented in SAP2000. Axial forces and moments in the bridge piers as well as lateral displacements at the bearing levels are compared for the three models. The maximum differences in axial forces, bending moments, and displacements are about 25% between the improved stick model and the finite element model, whereas the maximum differences in axial forces, moments, and displacements in various sections are about 35% between the improved stick model and the equivalent straight stick model. The difference in torsional moment was as high as 75%. It is concluded that a stick model with body constraints to model the bearings and expansion joints is not desirable for very sharply S-curved viaducts even for preliminary analysis. Such a model can be used only to determine the first 10 frequencies and mode shapes, but not member forces. A 3D finite element analysis must be carried out for meaningful results.

Keywords: Bearing, body constraint, box girder, curved viaduct, expansion joint, finite element, link element, seismic, stick model, time history analysis.
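
For context, the sketch below computes the natural frequencies of a generic lumped-mass stick model from the generalized eigenvalue problem K x = w^2 M x, the quantity compared across the three models; the mass and stiffness values are assumed and do not represent the authors' SAP2000 viaduct models.

```python
# Generic sketch (not the authors' viaduct models): natural frequencies of a
# simple lumped-mass "stick" (chain) model from K x = w^2 M x.
import numpy as np
from scipy.linalg import eigh

m = 2.0e5          # assumed lumped mass per node, kg
k = 5.0e7          # assumed lateral stiffness per segment, N/m
n = 8              # number of lumped masses along the stick

M = np.eye(n) * m
K = np.zeros((n, n))
for i in range(n):                       # assemble a shear-type stiffness matrix
    K[i, i] += k                         # spring below node i
    if i + 1 < n:                        # spring above node i
        K[i, i] += k
        K[i, i + 1] -= k
        K[i + 1, i] -= k

w2, modes = eigh(K, M)                   # generalized eigenproblem
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("first 8 natural frequencies (Hz):", np.round(freqs_hz[:8], 3))
```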

6871 Establishing Econometric Modeling Equations for Lumpy Skin Disease Outbreaks in the Nile Delta of Egypt under Current Climate Conditions

Authors: Abdelgawad, Salah El-Tahawy

Abstract:

This paper aims to establish econometric equation models for the Nile Delta region in Egypt, which will serve as a basis for future predictions of lumpy skin disease (LSD) outbreaks and their pathways in relation to climate change. Data on LSD outbreaks were collected from cattle farms located in the provinces of the Nile Delta region from January 2015 to December 2015. The results indicated a significant association between the degree of LSD outbreaks and the investigated climate factors (temperature, wind speed, and humidity); the outbreaks peaked during June, July, and August and gradually decreased to their lowest rate in January, February, and December. The model obtained showed that increases in these climate factors were associated with a clear increase in LSD outbreaks in the Nile Delta of Egypt. Model validation was performed using the root mean square error (RMSE) and mean bias (MB), which compare the number of expected LSD outbreaks with the number of observed outbreaks and estimate the confidence level of the model. The RMSE was 1.38% and the MB was 99.50%, confirming that the established model describes the current association between LSD outbreaks and changes in climate factors and can be used as a basis for predicting LSD outbreaks from future climatic change.

Keywords: LSD, climate factors, econometric models, Nile Delta.
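
A minimal sketch of the modeling and validation steps, assuming an ordinary least-squares specification and placeholder monthly data (the paper's exact econometric equations and records are not given here):

```python
# Sketch: OLS fit of monthly outbreak counts on climate factors, validated with
# RMSE and mean bias as named in the abstract. All values are placeholders.
import numpy as np

# hypothetical monthly records: temperature (C), wind speed (m/s), humidity (%)
X = np.array([[18, 3.1, 60], [20, 3.4, 58], [24, 3.0, 55], [28, 2.8, 50],
              [32, 2.6, 45], [35, 2.5, 42], [36, 2.4, 40], [35, 2.5, 43],
              [31, 2.7, 48], [27, 2.9, 52], [22, 3.2, 57], [19, 3.3, 61]], float)
y = np.array([2, 3, 5, 8, 12, 17, 19, 18, 11, 7, 4, 2], float)  # outbreaks/month

A = np.column_stack([np.ones(len(X)), X])        # add intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # OLS estimate
y_hat = A @ coef

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
mean_bias = np.mean(y_hat - y)                   # one common definition of MB
print("coefficients:", np.round(coef, 3))
print(f"RMSE = {rmse:.3f}, mean bias = {mean_bias:.3f}")
```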

6870 Analysis of an Electrical Transformer: A Bond Graph Approach

Authors: Gilberto Gonzalez-A

Abstract:

Bond graph models of an electrical transformer including nonlinear saturation are presented. Using the properties of a bond graph, these models determine the relation between the self and mutual inductances, and between the leakage and magnetizing inductances, of power transformers with two and three windings. The modelling and analysis methodology can be extended to three-phase power transformers or to transformers with internal incipient faults.

Keywords: Bond graph, electrical transformer, nonlinear saturation
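
As a small numerical companion, the sketch below applies the standard two-winding T-model relations (not the bond-graph derivation itself) to obtain the leakage and magnetizing inductances from assumed self and mutual inductances:

```python
# Standard two-winding T-model relations (not the bond-graph derivation):
# leakage and magnetizing inductances from self and mutual inductances.
# Inductance values are assumed for illustration.
L1, L2, M = 0.40, 0.10, 0.19   # self inductances and mutual inductance, H
a = (L1 / L2) ** 0.5           # approximate turns ratio N1/N2

Lm  = a * M                    # magnetizing inductance referred to the primary
Ll1 = L1 - a * M               # primary leakage inductance
Ll2 = L2 - M / a               # secondary leakage inductance
k   = M / (L1 * L2) ** 0.5     # coupling coefficient

print(f"turns ratio a = {a:.3f}, coupling k = {k:.3f}")
print(f"Lm = {Lm:.4f} H, Ll1 = {Ll1:.4f} H, Ll2 = {Ll2:.4f} H")
```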

6869 Nonlinear Estimation Model for Rail Track Deterioration

Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami

Abstract:

Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. Under economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation from such methods is to develop models that minimize the human error strongly associated with manual prediction. Such models will help them understand how track degradation occurs over time under changing conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize maintenance cost and time and to ensure vehicle safety. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, they may contain errors, and sometimes these errors make the data unusable for prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks: accurate models increase track safety and decrease maintenance costs in the long term. In this research, a short review of rail track degradation prediction models is given before estimating rail track degradation for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.

Keywords: ANFIS, MGT, Prediction modeling, rail track degradation.

6868 MCDM Spectrum Handover Models for Cognitive Wireless Networks

Authors: Cesar Hernández, Diego Giral, Fernando Santa

Abstract:

Spectrum handover is a significant topic in cognitive radio networks for ensuring efficient data transmission in the cognitive radio users' communications. This paper presents a comparison between three spectrum handover models: VIKOR, SAW, and MEW. Four evaluation metrics are used: the accumulative average of failed handovers, the accumulative average of handovers performed, the accumulative average of transmission bandwidth, and the accumulative average of transmission delay. In contrast to related work, the performance of the three spectrum handover models was validated with spectrum occupancy data captured in experiments performed in the GSM frequency band (824 MHz - 849 MHz). These data represent the actual behavior of the licensed users of this wireless frequency band. The results of the comparison show that the VIKOR algorithm provides a 15.8% performance improvement compared to the SAW algorithm and is 12.1% better than the MEW algorithm.

Keywords: Cognitive radio, decision making, MEW, SAW, spectrum handover, VIKOR.
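
A minimal sketch of two of the compared rankings, SAW (weighted sum) and MEW (weighted product), on a toy decision matrix; the criteria, weights, and values are assumptions, not the paper's captured GSM data, and VIKOR is omitted:

```python
# SAW and MEW rankings on an assumed decision matrix of candidate channels.
import numpy as np

# rows = candidate spectrum channels; columns = criteria
# [availability, bandwidth (MHz), delay (ms), failed-handover rate]
X = np.array([[0.90, 5.0, 20.0, 0.05],
              [0.75, 8.0, 35.0, 0.10],
              [0.85, 6.0, 15.0, 0.08]])
weights = np.array([0.4, 0.3, 0.2, 0.1])
benefit = np.array([True, True, False, False])   # delay and failures: lower is better

# linear normalization to [0, 1], flipping cost criteria
norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)

saw = norm @ weights                      # Simple Additive Weighting
mew = np.prod(norm ** weights, axis=1)    # Multiplicative Exponential Weighting
print("SAW scores:", np.round(saw, 3), "-> best channel:", int(np.argmax(saw)))
print("MEW scores:", np.round(mew, 3), "-> best channel:", int(np.argmax(mew)))
```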

6867 Definition of Foot Size Model using Kohonen Network

Authors: Khawla Ben Abderrahim

Abstract:

In order to define a new model of Tunisian foot sizes and to build more comfortable shoes, Tunisian manufacturers must be able to offer products that fit the majority of the target population. Moreover, the use of shoe models coming mainly from other countries causes a mismatch between the foot and the comfort of Tunisian shoes; since every foot is unique, these models become uncomfortable for the Tunisian foot. We have a set of measurements produced by a 3D scan of the feet of a diverse population (women, men, ...), and we analyze these data to define a foot size model specific to Tunisian footwear design. In this paper, we propose two new approaches to building such a model. First, we use neural networks, and specifically the Kohonen network. Next, we combine neural networks with the concept of half foot sizes to improve the models already found. Finally, we compare the results obtained by each approach and determine which one gives the best foot size model for producing more comfortable shoes.

Keywords: Morphology of the foot, foot size, half foot size, neural network, Kohonen network, model of foot size.
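
A from-scratch sketch of a small Kohonen self-organizing map trained on synthetic foot measurements; the grid size, features, and training schedule are illustrative assumptions only:

```python
# Minimal Kohonen self-organizing map on synthetic foot measurements
# (length, width, instep girth in mm). All settings are assumptions.
import numpy as np

rng = np.random.default_rng(2)
data = np.column_stack([rng.normal(260, 15, 500),   # foot length
                        rng.normal(100, 6, 500),    # foot width
                        rng.normal(240, 12, 500)])  # instep girth
data = (data - data.mean(0)) / data.std(0)          # standardize features

rows, cols, dim = 6, 6, data.shape[1]
W = rng.normal(size=(rows, cols, dim))              # codebook (prototype) vectors
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)

n_iter, lr0, sigma0 = 3000, 0.5, 2.0
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # best matching unit (BMU)
    bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (rows, cols))
    # decaying learning rate and neighbourhood radius
    lr = lr0 * np.exp(-t / n_iter)
    sigma = sigma0 * np.exp(-t / n_iter)
    dist2 = ((grid - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]    # Gaussian neighbourhood
    W += lr * h * (x - W)                               # pull neighbours toward x

# each SOM node is a prototype foot that can be read back as a candidate size
print("trained codebook shape:", W.shape)
```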

6866 Time Series Regression with Meta-Clusters

Authors: Monika Chuchro

Abstract:

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. Clustering was used to obtain subgroups of time series data with normal distributions from the inflow data of a wastewater treatment plant, composed of several groups differing by mean value. Two simple algorithms, K-means and EM, were chosen as clustering methods, and the Rand index was used to measure similarity. After this simple meta-clustering, a regression model was fitted for each subgroup; the final model was the sum of the subgroup models. The quality of the obtained model was compared with a regression model built using the same explanatory variables but with no clustering of the data. Results were compared using the coefficient of determination (R2), a measure of prediction accuracy, the mean absolute percentage error (MAPE), and a comparison on a linear chart. Preliminary results allow us to foresee the potential of the presented technique.

Keywords: Clustering, Data analysis, Data mining, Predictive models.
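
A minimal sketch of the cluster-then-regress idea on synthetic data (not the wastewater plant inflow records): cluster the observations, fit one linear model per cluster, and compare R2 and MAPE against a single global regression:

```python
# Cluster-then-regress sketch on synthetic two-regime data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(3)
n = 600
x = rng.uniform(0, 10, size=(n, 1))                 # explanatory variable
group = rng.integers(0, 2, size=n)                  # two regimes with different means
y = 50 + 30 * group + 2.0 * x[:, 0] + rng.normal(0, 3, n)

# global model (no clustering)
global_pred = LinearRegression().fit(x, y).predict(x)

# meta-clustering on the response, then one regression per cluster
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))
cluster_pred = np.empty(n)
for c in np.unique(labels):
    idx = labels == c
    cluster_pred[idx] = LinearRegression().fit(x[idx], y[idx]).predict(x[idx])

for name, pred in [("global", global_pred), ("per-cluster", cluster_pred)]:
    print(f"{name}: R2 = {r2_score(y, pred):.3f}, "
          f"MAPE = {mean_absolute_percentage_error(y, pred):.3%}")
```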

6865 Simulating Action Potential as a Linear Combination of Gating Dynamics

Authors: S. H. Sabzpoushan

Abstract:

In this research we show that the dynamics of an action potential in a cell can be modeled as a linear combination of the dynamics of the gating state variables. It is shown that the modeling error is negligible. Our findings can be used to simplify cell models and reduce the computational burden, i.e. they are useful for simulating action potential propagation in large-scale computations such as tissue modeling. We have verified our finding using several cell models.

Keywords: Linear model, Action potential, gating dynamics.
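
An illustrative sketch of the fitting step only: approximating a membrane-potential trace as a linear combination of gating-variable traces by least squares; the traces below are synthetic placeholders, not from a published cell model:

```python
# Least-squares fit of V(t) as a linear combination of gating-variable traces.
# All traces are synthetic stand-ins for illustration.
import numpy as np

t = np.linspace(0, 50, 1000)                       # ms
m = 1 / (1 + np.exp(-(t - 10)))                    # stand-in activation gate
h = 1 / (1 + np.exp((t - 15) / 2))                 # stand-in inactivation gate
n_gate = 1 / (1 + np.exp(-(t - 20) / 3))           # stand-in K+ gate
V = -65 + 100 * m * h - 30 * n_gate + np.random.default_rng(4).normal(0, 0.5, t.size)

G = np.column_stack([np.ones_like(t), m, h, n_gate])   # design matrix with offset
coef, *_ = np.linalg.lstsq(G, V, rcond=None)
V_fit = G @ coef

rel_err = np.linalg.norm(V - V_fit) / np.linalg.norm(V)
print("coefficients:", np.round(coef, 2), f" relative error = {rel_err:.3%}")
```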

6864 Assessing Community Participation in Decision-Making Process under Co-Management: A Case Study on Hail Haor, Bangladesh

Authors: R. Ferdous

Abstract:

Power and responsibility sharing and democratic decision-making are the central ethos of co-management. It is assumed that involving the local community in the decision-making process can create a sense of ownership and responsibility in that community and motivate it towards collective action. But this paper demonstrates that the process of involving the local community is not simple and straightforward, as it is influenced by structural aspects, power relations among the actors, and socially embedded institutions. These factors shape who participates, how they participate, and how the local community maneuvers its agency in the decision-making process. To grasp the complexities that materialize in the process of participation and to understand the inclusionary and exclusionary nature of participation, this paper examines the subjective understanding of different stakeholders concerning participation and, furthermore, observes the enabling or constraining factors that affect the community's exercise of its agency.

Keywords: Participation, social embeddedness, power, structure.

6863 Quality Based Approach for Efficient Biologics Manufacturing

Authors: Takashi Kaminagayoshi, Shigeyuki Haruyama

Abstract:

To improve the manufacturing efficiency of biologics, such as antibody drugs, a quality engineering framework was designed. Within this framework, critical steps and parameters in the manufacturing process were studied. Identification of these critical steps and critical parameters allows a deeper understanding of manufacturing capabilities and suggests process control standards, based on actual manufacturing capabilities, to the process development department as part of a PDCA (plan-do-check-act) cycle. This cycle can be applied to each manufacturing process so that it can be standardized, reducing the time needed to establish each new process.

Keywords: Antibody drugs, biologics, manufacturing efficiency, PDCA cycle, quality engineering.

6862 Averaging Model of a Three-Phase Controlled Rectifier Feeding an Uncontrolled Buck Converter

Authors: P. Ruttanee, K-N. Areerak, K-L. Areerak

Abstract:

Dynamic models of power converters are normally time-varying because of their switching actions. Several approaches are applied to analyze power converters in order to obtain time-invariant models suitable for system analysis and design via classical control theory. This paper presents how to derive dynamic models of a power system consisting of a three-phase controlled rectifier feeding an uncontrolled buck converter by combining two well-known techniques, the DQ method and the generalized state-space averaging method. Intensive time-domain simulations of the exact topology model are used to support the accuracy of the reported model. The results show that the proposed model provides good accuracy in both transient and steady-state responses.

Keywords: DQ method, Generalized state-space averaging method, Three-phase controlled rectifier, Uncontrolled buck converter, Averaging model, Modeling, Simulation.
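
A minimal averaged-model sketch of the buck-converter stage alone (the DQ model of the three-phase controlled rectifier is not reproduced): replacing the switching cell by its duty-cycle average yields a time-invariant ODE, assuming the parameter values shown:

```python
# Averaged model of an uncontrolled buck converter fed from an assumed DC link.
import numpy as np
from scipy.integrate import solve_ivp

L, C, R = 1e-3, 470e-6, 10.0      # assumed inductance (H), capacitance (F), load (ohm)
Vdc, d = 100.0, 0.5               # assumed DC-link voltage and duty cycle

def averaged_buck(t, x):
    iL, vC = x
    diL = (d * Vdc - vC) / L      # averaged switch voltage = d * Vdc
    dvC = (iL - vC / R) / C
    return [diL, dvC]

sol = solve_ivp(averaged_buck, (0, 0.05), [0.0, 0.0], max_step=1e-5)
print(f"steady-state output voltage ~ {sol.y[1, -1]:.2f} V "
      f"(ideal d*Vdc = {d * Vdc:.1f} V)")
```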

6861 A Comparative Study on the Dimensional Error of 3D CAD Model and SLS RP Model for Reconstruction of Cranial Defect

Authors: L. Siva Rama Krishna, Sriram Venkatesh, M. Sastish Kumar, M. Uma Maheswara Chary

Abstract:

Rapid Prototyping (RP) is a technology that produces models and prototype parts from 3D CAD model data, CT/MRI scan data, and model data created by 3D object digitizing systems. There are several RP processes, such as Stereolithography (SLA), Solid Ground Curing (SGC), Selective Laser Sintering (SLS), Fused Deposition Modeling (FDM), and 3D Printing (3DP); among them, the SLS and FDM processes are used to fabricate patterns for custom cranial implants. RP technology is useful in engineering and biomedical applications. In engineering it supports product design, tooling, and manufacturing; biomedical applications include the design and development of medical devices, instruments, prosthetics, and implants, as well as the planning of complex surgical operations. The traditional approach limits the full appreciation of the movements of various bony structures, and with the custom implants produced this way it is therefore difficult to measure the anatomy of the parts and to analyze changes in facial appearance accurately. Cranioplasty is the surgical correction of a defect in the cranial bone by implanting a metal or plastic replacement to restore the missing part. This paper presents a comparative study of the dimensional error of 3D CAD and SLS RP models for the reconstruction of a cranial defect by comparing the virtual CAD model with the physical RP model of the defect.

Keywords: Rapid Prototyping, Selective Laser Sintering, Cranial defect, Dimensional Error.

6860 The Impact of ISO 9001 Certification on Brazilian Firms’ Performance: Insights from Multiple Case Studies

Authors: Matheus Borges Carneiro, Fabiane Letícia Lizarelli, José Carlos de Toledo

Abstract:

The evolution of quality management in companies was strongly enabled by, among others, ISO 9001 certification, which is considered a crucial requirement by several customers. Likewise, performance measurement provides useful insights for companies to identify how their decision-making process is reflected in their improvement. One of the most used performance measurement models is the balanced scorecard (BSC), which addresses a firm's performance through four perspectives: financial, internal process, customer satisfaction, and learning and growth. Since ISO 9001 certified firms are likely to measure their performance through the BSC approach, it is important to verify whether the certificate influences firm performance or not. Therefore, this paper aims to verify the impact of ISO 9001:2015 on Brazilian firms' performance from the BSC perspective. Nine certified companies located in the Southeast region of Brazil were studied through a multiple case study approach. Within this study, it was possible to identify a positive impact of ISO 9001 on the firms' overall performance, and four critical success factors (CSFs) were identified as relevant to the link between ISO 9001 and firm performance: employee involvement, top management, process management, and customer focus. Due to the COVID-19 pandemic, the interviews were limited to the quality manager specialists, and the sample was limited since several companies were closed during the period of the study. This study presents an in-depth analysis of the relationship between ISO 9001 certification and firms' performance in a developing country.

Keywords: Balanced scorecard, Brazilian firms’ performance, critical success factors, ISO 9001 certification, performance measurement.

6859 Comprehensive Assessment of Energy Efficiency within the Production Process

Authors: S. Kreitlein, N. Eder, A. Syed-Khaja, J. Franke

Abstract:

The importance of energy efficiency within production processes increases steadily. Unfortunately, no tools yet exist for a comprehensive assessment of energy efficiency within the production process. Therefore, the Institute for Factory Automation and Production Systems at the Friedrich-Alexander-University Erlangen-Nuremberg has developed two methods with the goal of achieving transparency and a quantitative assessment of energy efficiency, namely the EEV (Energy Efficiency Value) and the EPE (Energetic Process Efficiency). This paper describes the basics and the state of the art as well as the developed approaches.

Keywords: Energy efficiency, energy efficiency value, energetic process efficiency, production.

6858 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. However, calibrating such a dynamical system is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data, final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (random forest, k-nearest neighbor, artificial neural network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, random forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several complementary perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.

Keywords: Crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest.
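
A minimal sketch of the data-driven baseline named in the abstract, a random forest evaluated with 5-fold cross-validation and the RMSEP/MAEP metrics, run on a synthetic stand-in for the USDA county yield and climate dataset:

```python
# Random forest with 5-fold CV, reporting RMSEP and MAEP on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(5)
n = 720                                            # same record count as the paper
X = rng.normal(size=(n, 6))                        # placeholder climate features
y = 10 + X[:, 0] * 2 - X[:, 1] + rng.normal(0, 0.5, n)   # placeholder yield

rmse_folds, mae_folds = [], []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    rmse_folds.append(np.sqrt(mean_squared_error(y[test], pred)))
    mae_folds.append(mean_absolute_error(y[test], pred))

print(f"RMSEP = {np.mean(rmse_folds):.3f}, MAEP = {np.mean(mae_folds):.3f}")
```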

6857 Evaluation of Algorithms for Sequential Decision in Biosonar Target Classification

Authors: Turgay Temel, John Hallam

Abstract:

A sequential decision problem, based on the task of identifying the species of trees given acoustic echo data collected from them, is considered with well-known stochastic classifiers, including single and mixture Gaussian models. Echoes are processed with a preprocessing stage based on a model of mammalian cochlear filtering, using a new discrete low-pass filter characteristic. Stopping-time performance of the sequential decision process is evaluated and compared. It is observed that the new low-pass filter processing results in faster sequential decisions.

Keywords: Classification, neuro-spike coding, parametric model, Gaussian mixture with EM algorithm, sequential decision.
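
A minimal sketch of a sequential decision between two hypothetical Gaussian echo models, accumulating the log-likelihood ratio echo by echo until a threshold is crossed (an SPRT-style stopping rule; the paper's actual classifiers, features, and cochlear preprocessing are not shown):

```python
# Sequential decision between two assumed Gaussian models of a 1-D echo feature.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
model_A = norm(loc=0.0, scale=1.0)     # hypothetical species A model
model_B = norm(loc=0.8, scale=1.0)     # hypothetical species B model
threshold = np.log(19)                 # ~95% confidence either way

echoes = model_B.rvs(size=100, random_state=rng)   # stream of echoes from species B
llr = 0.0
for k, e in enumerate(echoes, start=1):
    llr += model_B.logpdf(e) - model_A.logpdf(e)   # accumulate evidence
    if abs(llr) > threshold:
        decision = "B" if llr > 0 else "A"
        print(f"decided '{decision}' after {k} echoes (LLR = {llr:.2f})")
        break
else:
    print(f"no decision after {len(echoes)} echoes (LLR = {llr:.2f})")
```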

6856 Ranking Alternatives in Multi-Criteria Decision Analysis using Common Weights Based on Ideal and Anti-ideal Frontiers

Authors: Saber Saati Mohtadi, Ali Payan, Azizallah Kord

Abstract:

One of the most important issues in multi-criteria decision analysis (MCDA) is determining the weights of the criteria so that all alternatives can be compared based on the collective performance of the criteria. In this paper, one of the most popular methods in data envelopment analysis (DEA), known as common weights (CWs), is used to determine the weights in MCDA. Two frontiers, named the ideal and anti-ideal frontiers, are defined based on two newly proposed CWs models, instead of ideal and anti-ideal alternatives. Ideal and anti-ideal frontiers are more flexible than ideal and anti-ideal alternatives. According to the optimal solutions of these two models, the distances of an alternative from the ideal and anti-ideal frontiers are derived. Then, a relative distance is introduced to measure the value of each alternative. The suggested models are linear and remain feasible despite weight restrictions. An example is presented to explain the method and to compare it with the existing literature.

Keywords: Anti-ideal frontier, Common weights (CWs), Ideal frontier, Multi-criteria decision analysis (MCDA)

6855 A New Quantile Based Fuzzy Time Series Forecasting Model

Authors: Tahseen A. Jilani, Aqil S. Burney, C. Ardil

Abstract:

Time series models have been used to make predictions of academic enrollments, weather, road accident casualties, stock prices, etc. Based on the concepts of quantile regression models, we have developed a simple time-variant quantile-based fuzzy time series forecasting method. The proposed method bases the forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we convert the statistical concept into a fuzzy concept by using fuzzy quantiles constructed with a fuzzy membership function ensemble. We provide a fuzzy metric to use the trend forecast and calculate the future value. The proposed model is applied to TAIFEX forecasting. It is shown that the proposed method works best compared to other models with respect to model complexity and forecasting accuracy.

Keywords: Quantile regression, fuzzy time series, fuzzy logical relationship groups, heuristic trend prediction.

6854 Experimental Investigation on Freeze-Concentration Process Desalting for Highly Saline Brines

Authors: H. Al-Jabli

Abstract:

The aim of this paper was to assess the use of the freeze-melting process for the disposal of highly saline brines by confirming the performance estimate of the treatment system. A laboratory bench-scale freezing test unit was designed, constructed, and tested at the Doha Research Plant (DRP) in Kuwait. The principal unit operations considered for the laboratory study are ice crystallization, separation, washing, and melting. The applied process is characterized as secondary-refrigerant indirect freezing, which utilizes the normal freezing concept. High-salinity brine from Kuwait desalination plants, with an average TDS of 250,000 ppm, was used as the feed water in the experimental study to measure the performance of the proposed treatment system. The experimental analysis shows that the freeze-melting process is capable of reducing the TDS of the feed water from 249,482 ppm to 56,880 ppm over the two phases of the process, with overall recovery, salt passage, and salt rejection of 31.11%, 19.05%, and 80.95%, respectively. The freeze-melting process is therefore encouraging for the proposed application, as the results confirm its capability of removing a major portion of the dissolved salts of highly saline brine with reasonable recovery. This process may also compare reasonably with other brine disposal processes.

Keywords: High saline brine, freeze-melting process, ice crystallization, brine disposal process.

6853 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on a U-shaped architecture, which quantifies the presence of unseen data in each image in order to find the images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a model with better performance than a model produced by sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.

Keywords: Computer vision, deep learning, object detection, semiconductor.

6852 Comparison of Fundamental Frequency Model and PWM Based Model of UPFC

Authors: S.A. Al-Qallaf, S.A. Al-Mawsawi, A. Haider

Abstract:

Among all FACTS devices, the unified power flow controller (UPFC) is considered to be the most versatile device, due to its capability to control all the transmission system parameters (impedance, voltage magnitude, and phase angle). With the growing interest in the UPFC, the attention devoted to developing mathematical models for it has increased, and several models have been introduced in the literature for different types of power system studies. This paper presents a novel comparative study between two dynamic models of the UPFC together with their proposed control strategies.

Keywords: FACTS, UPFC, Dynamic Modeling, PWM, Fundamental Frequency.

6851 A Simulation Model for the H-gate PDSOI MOSFET

Authors: Bu Jianhui, Bi Jinshun, Liu Mengxin, Luo Jiajun, Han Zhengsheng

Abstract:

The floating body effect is a serious problem for the PDSOI MOSFET, and the H-gate layout is frequently used as the body contact to eliminate this effect. Unfortunately, most standard commercial SOI MOSFET models are for devices with finger gates, so the need for new models for the H-gate device arises. A simulation model for the H-gate PDSOI MOSFET is proposed based on the 0.35 μm PDSOI process developed by the Institute of Microelectronics of the Chinese Academy of Sciences (IMECAS), and the model is then verified with a ring oscillator.

Keywords: PDSOI, H-gate, device model, body contact.

6850 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems

Authors: Kyoung-jae Kim

Abstract:

Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. This information includes user context, such as location, time, and interest, for the personalization of mobile users. Information about location and time can easily be collected because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be easily collected, because user interest cannot be captured automatically without the user's approval process. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research. This study investigates the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models in classifying user need type. This study performs the McNemar test to examine the statistical significance of the differences in the classification results. The results of the McNemar test also show that CHAID performs better than the other models with statistical significance.

Keywords: Customer need type, Data mining techniques, Recommender system, Personalization, Mobile user.
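
A minimal sketch of the evaluation idea on synthetic data: train two classifiers and test whether their accuracies differ significantly with McNemar's test; CHAID itself is not available in scikit-learn, so a CART decision tree stands in for it here:

```python
# Compare two classifiers on synthetic "need type" data and apply McNemar's test.
# A CART decision tree stands in for CHAID, which scikit-learn does not provide.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from statsmodels.stats.contingency_tables import mcnemar

X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=2, random_state=0)   # stand-in need types
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

ok_tree = tree.predict(X_te) == y_te
ok_ann = ann.predict(X_te) == y_te
# 2x2 contingency table of agreement/disagreement between the two classifiers
table = [[np.sum(ok_tree & ok_ann), np.sum(ok_tree & ~ok_ann)],
         [np.sum(~ok_tree & ok_ann), np.sum(~ok_tree & ~ok_ann)]]
result = mcnemar(table, exact=True)
print(f"tree acc = {ok_tree.mean():.3f}, ANN acc = {ok_ann.mean():.3f}, "
      f"McNemar p = {result.pvalue:.3f}")
```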

6849 Role-Governed Categorization and Category Learning as a Result from Structural Alignment: The RoleMap Model

Authors: Yolina A. Petrova, Georgi I. Petkov

Abstract:

The paper presents a symbolic model for category learning and categorization (called RoleMap). Unlike other models, which implement learning in a separate working mode, role-governed category learning and categorization emerge in RoleMap while it does its usual reasoning. The model is based on several basic mechanisms known to reflect the sub-processes of analogy-making. It builds on the assumption that in their everyday life people constantly compare what they experience with what they know. Various commonalities between the incoming information (current experience) and the stored one (long-term memory) emerge from those comparisons. Some of those commonalities are considered to be highly important, and they are transformed into concepts for further use. This process denotes category learning. When there is missing knowledge in the incoming information (i.e. the perceived object is still not recognized), the model makes anticipations about what is missing, based on similar episodes from its long-term memory. Various such anticipations may emerge for different reasons. However, with time only one of them wins and is transformed into a category member. This process denotes the act of categorization.

Keywords: Categorization, category learning, role-governed category, analogy-making, cognitive modeling.

6848 Peakwise Smoothing of Data Models using Wavelets

Authors: D Sudheer Reddy, N Gopal Reddy, P V Radhadevi, J Saibaba, Geeta Varadan

Abstract:

Smoothing or filtering of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data; its generalization led to the development of the Savitzky-Golay filter. Many window smoothing methods were developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and Kaiser windows. Function approximation of the data by polynomial regression, Fourier expansion, or wavelet expansion also gives smoothed data. Wavelets also smooth the data to a great extent by thresholding the wavelet coefficients. Almost all smoothing methods destroy the peaks and flatten them when the support of the window is increased. In certain applications it is desirable to retain peaks while smoothing the data as much as possible. In this paper we present a methodology, called peakwise smoothing, that smooths the data to any desired level without losing the major peak features.

Keywords: Smoothing, moving average, peakwise smoothing, spatial density models, planar shape models, wavelets.
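
A sketch comparing the standard smoothers mentioned above (moving average, Savitzky-Golay, and wavelet-coefficient thresholding) on a noisy synthetic peaky signal; the proposed peakwise method itself is not reproduced:

```python
# Compare standard smoothers on a synthetic peaky signal and report how much
# of the main peak height each one retains.
import numpy as np
from scipy.signal import savgol_filter
import pywt

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 512)
clean = np.exp(-((t - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.02) ** 2)
noisy = clean + 0.05 * rng.normal(size=t.size)

win = 21
moving_avg = np.convolve(noisy, np.ones(win) / win, mode="same")
savgol = savgol_filter(noisy, window_length=win, polyorder=3)

coeffs = pywt.wavedec(noisy, "db4", level=5)
thr = 0.05 * np.sqrt(2 * np.log(noisy.size))        # universal-threshold guess
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
wavelet_smooth = pywt.waverec(coeffs, "db4")[:t.size]

for name, s in [("moving average", moving_avg), ("Savitzky-Golay", savgol),
                ("wavelet threshold", wavelet_smooth)]:
    print(f"{name:18s} peak height retained: {s.max() / clean.max():.2%}")
```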

6847 The Effect of Multi-Layer Bandage on the Interface Pressure Applied by Compression Bandages

Authors: Jawad Al Khaburi, Abbas A. Dehghani-Sanij, E. Andrea Nelson, Jerry Hutchinson

Abstract:

Medical compression bandages are widely used in the treatment of chronic venous disorders. In order to design effective compression bandages, researchers have attempted to describe the interface pressure applied by multi-layer bandages using mathematical models. This paper reports on work carried out to compare and validate the mathematical models used to describe the interface pressure applied by multi-layer bandages. Both analytical and experimental results showed that simply multiplying the number of bandage layers by the pressure applied by one layer of bandage, or ignoring the increase in limb radius due to the underlying layers of bandage, results in overestimating the pressure. Experimental results showed that the mathematical models which take into consideration the increase in limb radius due to the underlying bandage layers are more accurate than the one which does not.

Keywords: Compression bandages, FlexiForce, interface pressure, venous ulcer
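
A numerical sketch of the comparison described above, using the standard Laplace relation P = T/r (T being the bandage tension per unit width) for a cylindrical limb; the tension, radius, and thickness values are assumed for illustration:

```python
# Multi-layer interface pressure from Laplace's law P = T / r, either ignoring
# or accounting for the radius added by previous layers. All values assumed.
T = 100.0          # bandage tension per unit width, N/m
r0 = 0.04          # limb radius, m
t_b = 0.001        # thickness of one bandage layer, m
n_layers = 4

p_naive = n_layers * T / r0                       # simple multiplication of one layer
p_corrected = sum(T / (r0 + i * t_b)              # layer i lies on radius r0 + i*t_b
                  for i in range(n_layers))

to_mmHg = 1 / 133.322
print(f"naive estimate:   {p_naive * to_mmHg:.1f} mmHg")
print(f"radius-corrected: {p_corrected * to_mmHg:.1f} mmHg")
```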

6846 Self-Supervised Pretraining on Paired Sequences of fMRI Data for Transfer Learning to Brain Decoding Tasks

Authors: Sean Paulsen, Michael Casey

Abstract:

In this work, we present a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data. First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening. Our pretraining results are the first to suggest a synergistic effect of multitask training on fMRI data. Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task. We observe significantly improved accuracy on held-out runs with the finetuned models, which demonstrates the ability of our pretraining tasks to facilitate transfer learning. This work contributes to the growing body of literature on transformer architectures for pretraining and transfer learning with fMRI data, and serves as a proof of concept for our pretraining tasks and multitask pretraining on fMRI data.

Keywords: Transfer learning, fMRI, self-supervised, brain decoding, transformer, multitask training.

6845 Hydrodynamic Modeling of a Surface Water Treatment Pilot Plant

Authors: C.-M. Militaru, A. Pǎcalǎ, I. Vlaicu, K. Bodor, G.-A. Dumitrel, T. Todinca

Abstract:

A mathematical model for the hydrodynamics of a surface water treatment pilot plant was developed and validated by determining the residence time distribution (RTD) for the main equipment of the unit. The well-known models of ideal/real mixing, ideal displacement (plug flow), and (one-dimensional axial) dispersion were combined in order to identify the structure that best fits the experimental data for each piece of equipment in the pilot plant. The RTD experimental results have shown that the pilot plant hydrodynamics can be approximated quite well by a combination of simple mathematical models, a structure which is suitable for engineering applications. The validated hydrodynamic models will be further used in the evaluation and selection of the most suitable coagulation-flocculation reagents and the optimum operating conditions (injection point, reaction times, etc.), in order to improve the quality of the drinking water.

Keywords: drinking water, hydrodynamic modeling, pilot plant, residence time distribution, surface water.
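
As an illustration of one standard RTD characterization step (not the authors' full combined model), the sketch below computes the mean residence time, the RTD variance, and the equivalent number of ideal tanks in series N = tm^2/sigma^2 from a synthetic tracer-response curve (N = 1 corresponds to ideal mixing, N tending to infinity to plug flow):

```python
# Characterize a synthetic tracer-response curve: mean residence time,
# RTD variance, and equivalent number of ideal tanks in series.
import math
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0, 60, 601)                       # time, min
tau_true, N_true = 15.0, 4                        # synthetic tanks-in-series response
theta = tau_true / N_true
E = t ** (N_true - 1) * np.exp(-t / theta) / (math.factorial(N_true - 1) * theta ** N_true)

E = E / trapezoid(E, t)                           # normalize so integral of E dt = 1
tm = trapezoid(t * E, t)                          # mean residence time
var = trapezoid((t - tm) ** 2 * E, t)             # variance of the RTD
N_est = tm ** 2 / var                             # equivalent number of ideal tanks
print(f"mean residence time = {tm:.2f} min, N (tanks in series) = {N_est:.2f}")
```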
