Search results for: binary data matrix model
12078 Dispersed Error Control based on Error Filter Design for Improving Halftone Image Quality
Authors: Sang-Chul Kim, Sung-Il Chien
Abstract:
The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous gray-scale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradient of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
Keywords: Artifact suppression, Edge enhancement, Error diffusion method, Halftone image.
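As a rough illustration of the noise-perturbation idea, the sketch below applies classic Floyd-Steinberg error diffusion and adds uniform random noise to the four filter weights before renormalizing them; the paper's edge-enhancing filter is not reproduced, and the noise amplitude `noise_amp` is an assumed parameter.

```python
import numpy as np

def error_diffusion_noisy(gray, noise_amp=0.05, seed=0):
    """Floyd-Steinberg error diffusion with randomly perturbed filter
    weights (illustrative stand-in for the paper's adaptive noise)."""
    rng = np.random.default_rng(seed)
    img = gray.astype(float) / 255.0
    h, w = img.shape
    out = np.zeros((h, w))
    base = np.array([7.0, 3.0, 5.0, 1.0]) / 16.0   # classic FS weights
    offsets = [(0, 1), (1, -1), (1, 0), (1, 1)]    # right, down-left, down, down-right
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0       # binary quantization
            out[y, x] = new
            err = old - new
            # perturb the weights, then renormalize so the error is conserved
            wts = np.clip(base + rng.uniform(-noise_amp, noise_amp, 4), 0, None)
            wts /= wts.sum()
            for (dy, dx), wt in zip(offsets, wts):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    img[yy, xx] += err * wt
    return (out * 255).astype(np.uint8)
```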
12077 Knowledge Management Model for Research Projects Masters Program
Authors: Víctor Hugo Medina García, Darío Alejandro Segura Torres
Abstract:
This paper presents the adaptation of the NOVA knowledge management and intellectual capital measurement model to the needs of the research projects that must be developed in a master's-level graduate program. Brackets are added to each of the blocks represented in the original NOVA model, which makes it possible to represent those involved in each of them.
Keywords: Knowledge management, masters programs, Nova model, research projects
12076 Toward a Risk Assessment Model Based On Multi-Agent System for Cloud Consumer
Authors: Saadia Drissi, Siham Benhadou, Hicham Medromi
Abstract:
Cloud computing is an innovative paradigm that introduces several technological changes, resulting in new ways for cloud providers to deliver their services to cloud consumers, particularly with respect to security risk assessment. Adapting current risk assessment tools to cloud computing is therefore a very difficult task, because several characteristics of the cloud challenge the effectiveness of existing risk assessment approaches. Consequently, a risk assessment model adapted to cloud computing is needed. This paper proposes a new risk assessment model based on a multi-agent system and the AHP model as fundamental steps towards the development of a flexible risk assessment approach for cloud consumers.
Keywords: Cloud computing, risk assessment model, multi-agent system, AHP model, cloud consumer.
12075 Improving Spatiotemporal Change Detection: A High Level Fusion Approach for Discovering Uncertain Knowledge from Satellite Image Database
Authors: Wadii Boulila, Imed Riadh Farah, Karim Saheb Ettabaa, Basel Solaiman, Henda Ben Ghezala
Abstract:
This paper investigates the problem of tracking spatiotemporal changes of a satellite image through the use of Knowledge Discovery in Database (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfections. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view of the spatiotemporal evolution of a query model (which represents an extracted object from a satellite image). In order to combine these methods, we use the evidence fusion theory, which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results on satellite images representing the region of Auckland in New Zealand depict the improvement in the overall change detection as compared to using classical methods.
Keywords: Knowledge discovery in satellite databases, knowledge fusion, data imperfection, data mining, spatiotemporal change detection.
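For the evidence-fusion step, the following is a minimal sketch of Dempster's rule of combination, the core operator of evidence theory. The two mass functions stand in for the outputs of two KDD methods voting on "change" vs. "no change" of a query model; the numbers are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.
    Masses are dicts mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# two 'KDD methods' voting on change vs. no-change of a query model
m_method1 = {frozenset({"change"}): 0.6, frozenset({"change", "no_change"}): 0.4}
m_method2 = {frozenset({"change"}): 0.5, frozenset({"no_change"}): 0.2,
             frozenset({"change", "no_change"}): 0.3}
print(dempster_combine(m_method1, m_method2))
```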
12074 Investigation Bubble Growth and Nucleation Rates during the Pool Boiling Heat Transfer of Distilled Water Using Population Balance Model
Authors: V. Nikkhah Rashidabad, M. Manteghian, M. Masoumi, S. Mousavian
Abstract:
In this research, the changes in bubble diameter and number that occur due to changes in the heat flux of pure water during the pool boiling process were investigated. For this purpose, test equipment was designed and built to collect the test data. The bubbles were graded using Caliper Screen software. To calculate the growth and nucleation rates of the bubbles under different heat fluxes, a population balance model was employed. The results show that increasing the heat flux from q = 20 kW/m² to q = 102 kW/m² raised the growth and nucleation rates of the bubbles.
Keywords: Heat flux, bubble growth, bubble nucleation, population balance model.
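To make the population-balance idea concrete, here is a minimal first-order upwind sketch of a one-dimensional balance with a constant growth rate G and boundary nucleation B0; the grid, the rates and the boundary handling are assumptions for illustration, not the paper's discretization.

```python
import numpy as np

def evolve_population(n, L, G, B0, dt, steps):
    """First-order upwind solution of dn/dt + G*dn/dL = 0 with a
    nucleation boundary condition n(L=0) = B0/G at the smallest size.
    n: number density per size bin, L: uniform bin centers,
    G: growth rate (m/s), B0: nucleation rate (#/m^3/s)."""
    dL = L[1] - L[0]
    assert G * dt / dL <= 1.0, "CFL condition violated"
    for _ in range(steps):
        upstream = np.empty_like(n)
        upstream[0] = B0 / G            # new bubbles enter at the smallest size
        upstream[1:] = n[:-1]
        n = n - (G * dt / dL) * (n - upstream)
    return n

L = np.linspace(0, 2e-3, 101)[:-1]      # bubble sizes up to 2 mm (stand-in grid)
n0 = np.zeros_like(L)
n_final = evolve_population(n0, L, G=1e-4, B0=1e9, dt=1e-3, steps=5000)
```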
12073 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements
Authors: Shagufta Tabassum
Abstract:
The study of the dielectric properties of a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperative and molecular dynamics of H-bonded systems. Here we discuss the basic calibration and normalization procedures for TDR measurements. Our aim is to explain the different types of errors that occur during TDR measurements and how to minimize them.
Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique
12072 Oil Refineries Emissions: Source and Impact: A Study using AERMOD
Authors: Amir Al-Haddad, Hisham Ettouney, Samiya Saqer
Abstract:
The main objectives of this paper are to measure pollutant concentrations in the oil refinery area in Kuwait over three periods during one year, to obtain a recent emission inventory for the three refineries of Kuwait, to use AERMOD together with the emission inventory to predict pollutant concentrations and their distribution, to compare the model predictions against measured data, and to perform numerical experiments to determine the conditions at which the emission rates and the resulting pollutant dispersion are below the maximum allowable limits.
Keywords: Emissions, ISCST3 model, Modeling, Pollutants, Refinery.
12071 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual-head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied in two different settings, namely at the single-participant level and at the group level. At the single-participant level, the EEG datasets used in the two stages originated from the same participant. At the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction was 70.03 ± 8.14% at the single-participant level and 62.63 ± 6.07% at the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: Brain-computer interface, BCI, electroencephalography, EEG, finger motion decoding, independent component analysis, pseudo-real-time motion decoding.
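The two-stage pipeline can be sketched with off-the-shelf tools, substituting scikit-learn's FastICA for the ICA decomposition and an L1-penalized logistic regression for SLR (the paper's SLR is a Bayesian sparse method, so this is only an approximation); the data shapes, the number of components, and the list of retained "brain" ICs are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

# Stage 1: learn the unmixing matrix and a sparse (L1) classifier
X_train = np.random.randn(4000, 64)       # samples x channels (stand-in for EEG)
y_train = np.random.randint(0, 8, 4000)   # eight movement directions

ica = FastICA(n_components=20, random_state=0)
S_train = ica.fit_transform(X_train)      # independent components
keep = [0, 2, 5, 7]                       # 'brain' ICs, selected by inspection
clf = LogisticRegression(penalty="l1", solver="saga", C=0.1,
                         max_iter=2000).fit(S_train[:, keep], y_train)

# Stage 2: reuse the frozen unmixing matrix on a different EEG dataset
X_test = np.random.randn(1000, 64)
S_test = ica.transform(X_test)            # same unmixing, no refit
pred = clf.predict(S_test[:, keep])
```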
12070 2D and 3D Unsteady Simulation of the Heat Transfer in the Sample during Heat Treatment by Moving Heat Source
Authors: Z. Veselý, M. Honner, J. Mach
Abstract:
The aim of this work is to establish 2D and 3D models of the direct unsteady task of sample heat treatment by a moving heat source, employing a computer model based on the finite element method. The complex boundary condition on the heat-loaded sample surface is the essential feature of the task. The computer model describes the heat treatment of the sample during the movement of the heat source over the sample surface. The 2D task of the sample cross section is taken as the basic model, and the possibilities of extending it to a 3D task are discussed. The effect of adding the third model dimension on the temperature distribution in the sample is shown, and the influence of various model parameters on the sample temperatures is compared. The influence of the heat source motion on the depth of the material heat treatment is shown for several velocities of the movement. The presented computer model is prepared for utilization in the laser treatment of machine parts.
Keywords: Computer simulation, unsteady model, heat treatment, complex boundary condition, moving heat source.
12069 Two-Channels Thermal Energy Storage Tank: Experiments and Short-Cut Modelling
Authors: M. Capocelli, A. Caputo, M. De Falco, D. Mazzei, V. Piemonte
Abstract:
This paper presents the experimental results and the related modelling of a thermal energy storage (TES) facility, conceived and built by ENEA, which realizes the thermocline with an innovative geometry. First, the thermal energy exchange model of an equivalent shell-and-tube heat exchanger is described and tested to reproduce the performance of the spiral exchanger installed in the TES. Through regression of the experimental data, a first-order thermocline model is then validated to provide an analytical function of the thermocline, useful for performance evaluation, for comparison with other systems, and for implementation in simulations of integrated systems (e.g. power plants). The experimental data obtained from the plant start-up and the short-cut modelling of the system can be useful for process analysis, for the scale-up of the thermal storage system, and for investigating the feasibility of its implementation in actual case studies.
Keywords: Thermocline, modelling, heat exchange, spiral, shell, tube.
12068 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning
Authors: R. Abdulrahman, A. Eardley, A. Soliman
Abstract:
The proliferation in the development of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage the development of learning and to aid knowledge transfer in a number of areas, by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources ‘anytime and anywhere’. However, acceptance of m-learning by students is critical to the successful use of m-learning systems. Thus, there is a need to study the factors that influence students’ intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to the subject of user acceptance in relation to m-learning activity in nurse education. The model integrates the significant components across eight prominent user acceptance models and therefore introduces a standard measure with core determinants of user behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by modifying its original structure and adding individual innovativeness (II) and quality of service (QoS). The paper goes on to add the factors of previous experience (of using mobile devices in similar applications) and the nursing students’ readiness (to use the technology) as influences on their behavioural intentions to use m-learning. The study uses convenience sampling, with student volunteers as participants, in order to collect numerical data. A quantitative method of data collection was selected, involving an online survey with a questionnaire of 33 questions measuring the six constructs on a 5-point Likert scale. A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested using structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. It also demonstrates satisfactory, dependable and valid scales of the model constructs. This suggests further analysis to confirm the model as a valuable instrument for evaluating the user acceptance of m-learning activity.
Keywords: Mobile learning, nursing institute, unified theory of acceptance and use of technology model.
12067 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions with respect to the measured NOx, which limits the predictions of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperatures in both the burned and unburned zones are considered during the combustion period, i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The advantages of high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration establish the platform where the model-based approach can be used for the engine calibration and development process. Moreover, the focus of this work is towards establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: Diesel engine, machine learning, NOx emission, semi-empirical.
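As a sketch of the ensemble part, the snippet below stacks two regressors over placeholder in-cylinder features of the kind the abstract names (burned-zone temperature, O2 concentration, trapped fuel mass); the feature set, the learners and the data are illustrative, not the reported model.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import RidgeCV

# Placeholder steady-state matrix: columns stand in for in-cylinder
# features (burned-zone temperature, burned-zone O2 concentration,
# trapped fuel mass, ...) at each operating point.
X = np.random.rand(500, 5)
y_nox = np.random.rand(500)          # measured NOx (stand-in)

# ensemble of individual learners combined by a linear meta-model
model = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=RidgeCV())
model.fit(X, y_nox)
nox_pred = model.predict(X[:10])
```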
12066 Angles of Arrival Estimation with Unitary Partial Propagator
Authors: Youssef Khmou, Said Safi
Abstract:
In this paper, we investigate the effect of a real-valued transformation of the spectral matrix of the received data on the Angles of Arrival estimation problem. The Unitary transformation of the Partial Propagator (UPP) for narrowband sources is proposed and applied to a Uniform Linear Array (ULA).
Monte Carlo simulations demonstrate the performance of the UPP spectrum in comparison with the Forward Backward Partial Propagator (FBPP) and the Unitary Propagator (UP). The results show that when some of the sources are fully correlated and closer than the Rayleigh angular resolution limit of the broadside array, the UPP method outperforms the FBPP in both spatial resolution and complexity.
Keywords: DOA, Uniform Linear Array, Narrowband, Propagator, Real valued transformation, Subspace, Unitary Operator.
12065 Adaptive Gaussian Mixture Model for Skin Color Segmentation
Authors: Reza Hassanpour, Asadollah Shahbahrami, Stephan Wong
Abstract:
Skin color based tracking techniques often assume a static skin color model obtained either from an offline set of library images or from the first few frames of a video stream. These models can perform weakly in the presence of changing lighting or imaging conditions. We propose an adaptive skin color model based on the Gaussian mixture model to handle such changing conditions. Initial estimates of the number and weights of the skin color clusters are obtained using a modified form of the general expectation-maximization algorithm. The model then adapts to changes in imaging conditions and refines its parameters dynamically using spatial and temporal constraints. Experimental results show that the method can be used for effective tracking of hand and face regions.
Keywords: Face detection, Segmentation, Tracking, Gaussian Mixture Model, Adaptation.
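A minimal sketch of the adaptation step, assuming a simple exponential-forgetting update of the mixture weights and means toward each new frame; the paper's spatial and temporal constraints are not reproduced, and the blending rate `rho` is an assumed parameter.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def update_skin_model(gmm, new_pixels, rho=0.05):
    """Blend a fitted skin-color GMM toward pixels from the newest
    frame (illustrative exponential-forgetting update, not the
    paper's exact scheme)."""
    resp = gmm.predict_proba(new_pixels)             # soft cluster assignments
    nk = resp.sum(axis=0) + 1e-12
    mu_new = (resp.T @ new_pixels) / nk[:, None]     # per-cluster frame means
    gmm.weights_ = (1 - rho) * gmm.weights_ + rho * (nk / nk.sum())
    gmm.means_ = (1 - rho) * gmm.means_ + rho * mu_new
    return gmm

# initial model from the first frames, e.g. pixels in a 2-D chroma space
first_frames = np.random.rand(5000, 2)
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(first_frames)
gmm = update_skin_model(gmm, np.random.rand(800, 2))
```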
12064 Using the Technology Acceptance Model to Examine Seniors’ Attitudes toward Facebook
Authors: Chien-Jen Liu, Shu Ching Yang
Abstract:
Using the technology acceptance model (TAM), this study examined the external variables of technological complexity (TC) to acquire a better understanding of the factors that influence the acceptance of computer application courses by learners at Active Aging Universities. After the learners in this study had completed a 27-hour Facebook course, 44 learners responded to a modified TAM survey. Data were collected to examine the path relationships among the variables that influence the acceptance of Facebook-mediated community learning. The partial least squares (PLS) method was used to test the measurement and the structural model. The study results demonstrated that attitudes toward Facebook use directly influence behavioral intentions (BI) with respect to Facebook use, evincing a high prediction rate of 58.3%. In addition to the perceived usefulness (PU) and perceived ease of use (PEOU) measures that are proposed in the TAM, other external variables, such as TC, also indirectly influence BI. These four variables can explain 88% of the variance in BI and demonstrate a high level of predictive ability. Finally, limitations of this investigation and implications for further research are discussed.
Keywords: Technology acceptance model (TAM), technological complexity, partial least squares (PLS), perceived usefulness.
12063 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential
Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag
Abstract:
Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of the meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models that combines data collected at surface stations, ocean buoys, satellites and radiosondes, allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among them is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), whose data have a spatial resolution of 0.5° x 0.5°. To overcome the scarcity of observations, this study aims to evaluate the performance of solar radiation estimation from alternative databases, such as reanalysis and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well relative to the observed data, with a determination coefficient around 0.90. It is therefore concluded that these data have the potential to be used as an alternative source at locations without stations or without long series of solar radiation observations, which is important for the evaluation of solar energy potential.
Keywords: Climate, reanalysis, renewable energy, solar radiation.
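The reported agreement can be quantified with the coefficient of determination; the sketch below computes R² between stand-in station observations and CFSR estimates (the radiation values are invented for illustration).

```python
import numpy as np

def coefficient_of_determination(observed, estimated):
    """R^2 between station observations and reanalysis estimates."""
    ss_res = np.sum((observed - estimated) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - ss_res / ss_tot

obs = np.array([18.2, 21.5, 24.1, 19.8, 16.3])   # MJ/m^2/day, stand-in values
cfsr = np.array([17.9, 22.0, 23.5, 20.4, 15.8])
print(coefficient_of_determination(obs, cfsr))    # ~0.96 for these numbers
```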
12062 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution
Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang
Abstract:
The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance under inlet distortions. The PCM calculation assumes that the sub-compressors’ outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor’s outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum and continuity equations to acquire the needed parameters and replaces the equal-static-pressure assumption. Based on the revised method, the PCM is applied to two compression systems with different blade types, and their performance under non-uniform inlet conditions is predicted with the revised calculation method in order to evaluate its efficiency. Validating the results against experimental data, it is found that, despite small deviations, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This proves that the revised PCM calculation method possesses great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
Keywords: Parallel Compressor Model (PCM), Revised Calculation Method, Inlet Distortion, Outlet Unequal Pressure Distribution.
12061 Prediction of Slump in Concrete using Artificial Neural Networks
Authors: V. Agrawal, A. Sharma
Abstract:
High Strength Concrete (HSC) is defined as concrete that meets a special combination of performance and uniformity requirements that cannot be achieved routinely using conventional constituents and normal mixing, placing, and curing procedures. It is a highly complex material, which makes modeling its behavior a very difficult task. This paper aims to show the possible applicability of Neural Networks (NN) to predicting the slump in HSC. Neural Network models are constructed, trained and tested using the available test data of 349 different concrete mix designs of HSC gathered from a particular Ready Mix Concrete (RMC) batching plant. The most versatile Neural Network model is selected to predict the slump in concrete. The data used in the Neural Network models are arranged in a format of eight input parameters covering Cement, Fly Ash, Sand, Coarse Aggregate (10 mm), Coarse Aggregate (20 mm), Water, Super-Plasticizer and Water/Binder ratio. Furthermore, to test the accuracy of the slump prediction, the final selected model is used on the data of 40 different HSC mix designs taken from another batching plant. The results are compared on the basis of an error (or performance) function.
Keywords: Artificial Neural Networks, Concrete, prediction of slump, slump in concrete.
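A minimal sketch of such a slump model, assuming scikit-learn's MLPRegressor as the network and random stand-in data in place of the plant's 349 mix designs; the eight input columns follow the abstract, while the hidden-layer size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Eight inputs per the abstract: cement, fly ash, sand, CA 10 mm,
# CA 20 mm, water, superplasticizer, water/binder ratio.
X = np.random.rand(349, 8)    # stand-in for the 349 HSC mix designs
y = np.random.rand(349)       # measured slump (stand-in)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
slump_pred = model.predict(X[:5])
```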
12060 Topographic Arrangement of 3D Design Components on 2D Maps by Unsupervised Feature Extraction
Authors: Stefan Menzel
Abstract:
As a result of the daily workflow in the design development departments of companies, databases containing huge numbers of 3D geometric models are generated. According to the given problem, engineers create CAD drawings based on their design ideas and evaluate the performance of the resulting design, e.g. by computational simulations. Usually, new geometries are built either by utilizing and modifying sets of existing components or by adding single newly designed parts to a more complex design. The present paper addresses the two facets of acquiring components from large design databases automatically and providing a reasonable overview of the parts to the engineer. A unified framework based on the topographic non-negative matrix factorization (TNMF) is proposed which solves both aspects simultaneously. First, meaningful components are extracted from a given database into a parts-based representation in an unsupervised manner. Second, the extracted components are organized and visualized on square-lattice 2D maps. It is shown on the example of turbine-like geometries that these maps efficiently provide a well-structured overview of the database content and, at the same time, define a measure of spatial similarity allowing easy access and reuse of components in the process of design development.
Keywords: Design decomposition, topographic non-negative matrix factorization, parts-based representation, self-organization, unsupervised feature extraction.
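Plain non-negative matrix factorization already yields the parts-based representation; the sketch below uses scikit-learn's NMF and then naively lays the components out on a square lattice. The topographic variant additionally couples neighbouring lattice nodes during the factorization, which this stand-in omits, and the data shapes are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows are vectorized 3D design representations (e.g. voxel grids);
# values must be non-negative for a parts-based factorization.
V = np.abs(np.random.rand(200, 4096))     # 200 designs, stand-in data

nmf = NMF(n_components=25, init="nndsvd", max_iter=500, random_state=0)
W = nmf.fit_transform(V)    # activations: design x component
H = nmf.components_         # 25 extracted 'parts', one per map node

# naive topographic layout: place the 25 components on a 5x5 lattice
grid = H.reshape(5, 5, -1)
```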
12059 Javanese Character Recognition Using Hidden Markov Model
Authors: Anastasia Rita Widiarti, Phalita Nari Wastu
Abstract:
The Hidden Markov Model (HMM) is a stochastic method which has been used in various signal processing and character recognition applications. This study proposes to use HMMs to recognize Javanese characters from a number of different handwritings, whereby the number of states and the feature extraction are optimized. An 85.7% accuracy is obtained as the best result with a 16-state vertical model using a pure HMM. This initial result is satisfactory enough to prompt further research.
Keywords: Character recognition, off-line handwriting recognition, Hidden Markov Model.
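A minimal sketch of HMM-based character classification with the hmmlearn package: one 16-state Gaussian HMM per character class, with the maximum-likelihood model chosen at test time. The class labels, feature dimension and frame counts are placeholders, and the paper's actual feature extraction is not reproduced.

```python
import numpy as np
from hmmlearn import hmm

# one 16-state Gaussian HMM per character class; observations stand in
# for per-frame feature vectors extracted from the character images
models = {}
for char in ["ha", "na", "ca"]:                  # placeholder class labels
    X = np.random.rand(30 * 20, 8)               # 30 samples x 20 frames, 8 features
    lengths = [20] * 30                          # frame count of each sample
    m = hmm.GaussianHMM(n_components=16, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[char] = m

unknown = np.random.rand(20, 8)                  # one unseen character
best = max(models, key=lambda c: models[c].score(unknown))  # max-likelihood class
```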
12058 Solubility of Water in CO2 Mixtures at Pipeline Operation Conditions
Authors: Mohammad Ahmad, Sander Gersen, Erwin Wilbers
Abstract:
Carbon capture, transport and underground storage have become a major solution to reduce CO2 emissions from power plants and other large CO2 sources. A large part of this captured CO2 stream is transported at high-pressure dense-phase conditions and stored in offshore underground depleted oil and gas fields. CO2 is also transported in offshore pipelines to be used for enhanced oil and gas recovery. The captured CO2 stream with impurities may contain water, which causes severe corrosion problems and flow assurance failures and might damage valves and instrumentation. Thus, free water formation should be strictly prevented. The purpose of this work is to study the solubility of water in pure CO2 and in CO2 mixtures under real pipeline pressures (90-150 bar) and temperature operating conditions (5-35°C). A setup was constructed to generate experimental data. The results show that the solubility of water in CO2 mixtures increases with increasing temperature and/or pressure, and a drop in water solubility is observed in the presence of impurities. The data generated were then used to assess the capabilities of two mixture models: the GERG-2008 model and the EOS-CG model. By generating these solubility data, this study contributes to determining the maximum allowable water content in CO2 pipelines.
Keywords: Carbon capture and storage, water solubility, equations of state.
12057 Platform-as-a-Service Sticky Policies for Privacy Classification in the Cloud
Authors: Maha Shamseddine, Amjad Nusayr, Wassim Itani
Abstract:
In this paper, we present a Platform-as-a-Service (PaaS) model for controlling the privacy enforcement mechanisms applied on user data when stored and processed in Cloud data centers. The proposed architecture consists of establishing user configurable ‘sticky’ policies on the Graphical User Interface (GUI) data-bound components during the application development phase to specify the details of privacy enforcement on the contents of these components. Various privacy classification classes on the data components are formally defined to give the user full control on the degree and scope of privacy enforcement, including the type of execution containers to process the data in the Cloud. This not only enhances the privacy-awareness of the developed Cloud services, but also results in major savings in performance and energy efficiency due to the fact that the privacy mechanisms are solely applied on sensitive data units and not on all the user content. The proposed design is implemented in a real PaaS cloud computing environment on the Microsoft Azure platform.
Keywords: Privacy enforcement, Platform-as-a-Service privacy awareness, cloud computing privacy.
12056 Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models
Authors: [email protected]
Abstract:
Breast cancer is one of the most common types of cancer in women. Early prediction of breast cancer helps physicians detect cancer in its early stages. Big cancer data require very powerful tools for analysis and prediction. Machine learning and deep learning are two of the most efficient tools for predicting cancer from textual data. In this study, we developed a fusion of machine learning and deep learning models. To obtain the final prediction, Long Short-Term Memory (LSTM), ensemble learning with hyperparameter optimization, and score-level fusion are used. Experiments are conducted on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved the performance by 3.3% compared to the individual models.
Keywords: Machine learning, deep learning, cancer prediction, breast cancer, LSTM, score-level fusion.
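Score-level fusion itself is a small operation: given the class-probability outputs of the two already-trained models, the fused score is a weighted sum. The sketch below assumes equal weights; the fusion weight used in the study may differ.

```python
import numpy as np

def score_level_fusion(p_lstm, p_ensemble, w=0.5):
    """Weighted-sum score-level fusion of two class-probability
    matrices (rows: samples, columns: classes)."""
    fused = w * p_lstm + (1 - w) * p_ensemble
    return fused.argmax(axis=1), fused

# stand-in probability outputs of the two already-trained models
p_lstm = np.array([[0.7, 0.3], [0.4, 0.6]])
p_ens = np.array([[0.6, 0.4], [0.2, 0.8]])
labels, fused = score_level_fusion(p_lstm, p_ens)
print(labels)   # fused prediction per sample -> [0 1]
```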
12055 Novel Method for Elliptic Curve Multi-Scalar Multiplication
Authors: Raveen R. Goundar, Ken-ichi Shiota, Masahiko Toyonaga
Abstract:
The major building block of most elliptic curve cryptosystems is the computation of multi-scalar multiplications. This paper proposes a novel algorithm for simultaneous multi-scalar multiplication that employs addition chains. The previously known methods utilize the double-and-add algorithm with binary representations. To accomplish our purpose, an efficient empirical method for finding addition chains for multi-exponents is proposed.
Keywords: elliptic curve cryptosystems, multi-scalar multiplication, addition chains, Fibonacci sequence.
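For contrast with the proposed addition-chain approach, the sketch below shows the previously known baseline the abstract refers to: simultaneous double-and-add (Shamir's trick) over binary representations. The group operations are passed in as callables; integers mod p stand in for elliptic-curve points, which expose the same interface.

```python
def simultaneous_double_and_add(k1, k2, P, Q, add, dbl, inf):
    """Classic simultaneous (Shamir) double-and-add for k1*P + k2*Q,
    the binary baseline that addition-chain methods improve on.
    `add`, `dbl` are the group operations; `inf` is the identity."""
    pre = {(0, 1): Q, (1, 0): P, (1, 1): add(P, Q)}   # precomputed table
    n = max(k1.bit_length(), k2.bit_length())
    R = inf
    for i in reversed(range(n)):                       # scan bits left to right
        R = dbl(R)
        bits = ((k1 >> i) & 1, (k2 >> i) & 1)
        if bits != (0, 0):
            R = add(R, pre[bits])
    return R

# sanity check over the additive group of integers mod p
p = 2**13 - 1
res = simultaneous_double_and_add(
    181, 77, 5, 9,
    add=lambda a, b: (a + b) % p,
    dbl=lambda a: (2 * a) % p,
    inf=0)
assert res == (181 * 5 + 77 * 9) % p
```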
12054 Management Pattern for Lodging Business in Bang Khonthi Samut Songkram with Sufficient Economy Approach
Authors: Krisada Sungkhamanee
Abstract:
The objectives of this research are to identify management patterns that follow the sufficient economy approach for lodging entrepreneurs in Bang Khonthi, to understand the threats that affect this sector, and to design a fitting management model to sustain their businesses in the Samut Songkram style. What will happen if they do not use this approach? Will they face a financial crisis? The data and information were collected through informal discussions with 8 managers and 400 questionnaires. A mixed method of both qualitative and quantitative research is used. Bent Flyvbjerg's phronesis is utilized for the analysis. Our research will demonstrate that the sufficient economy approach can help small business firms to solve their problems. We expect the results of our research to form a financial model that solves many of the entrepreneurs' problems and that can serve as a model for other provinces of Thailand.
Keywords: Bang Khonthi, Lodging Business, Sufficient Economy.
12053 Simultaneous Term Structure Estimation of Hazard and Loss Given Default with a Statistical Model using Credit Rating and Financial Information
Authors: Tomohiro Ando, Satoshi Yamashita
Abstract:
The objective of this study is to propose a statistical modeling method which enables simultaneous term structure estimation of the risk-free interest rate, the hazard rate and the loss given default, incorporating characteristics of the bond-issuing company such as credit rating and financial information. A reduced-form model is used for this purpose. Statistical techniques such as spline estimation and the Bayesian information criterion are employed for parameter estimation and model selection. An empirical analysis is conducted using information on the Japanese bond market, and its results confirm the usefulness of the proposed method.
Keywords: Empirical Bayes, Hazard term structure, Loss given default.
12052 An Automatic Tool for Checking Consistency between Data Flow Diagrams (DFDs)
Authors: Rosziati Ibrahim, Siow Yen Yen
Abstract:
The system development life cycle (SDLC) is a process used during the development of any system. The SDLC consists of four main phases: analysis, design, implementation and testing. During the analysis phase, a context diagram and data flow diagrams are used to produce the process model of a system. Consistency between the context diagram and the lower-level data flow diagrams is very important for a smooth system development process. However, manually checking the consistency from the context diagram to the lower-level data flow diagrams using a checklist is time-consuming. At the same time, the limitation of the human ability to spot errors is one of the factors that influence the correctness and balancing of the diagrams. This paper presents a tool that automates the consistency check between Data Flow Diagrams (DFDs) based on the rules of DFDs. The tool serves two purposes: as an editor to draw the diagrams and as a checker to check the correctness of the diagrams drawn. The consistency check from the context diagram to the lower-level data flow diagrams is embedded inside the tool to overcome the manual checking problem.
Keywords: Data Flow Diagram, Context Diagram, Consistency Check, Syntax and Semantic Rules.
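One of the DFD balancing rules can be sketched directly: the data flows entering and leaving a parent process must match those of its decomposition. The flow names below are invented for illustration.

```python
def balanced(parent_inputs, parent_outputs, child_inputs, child_outputs):
    """DFD balancing rule: a decomposed process must consume and
    produce exactly the data flows of its parent."""
    return (set(parent_inputs) == set(child_inputs) and
            set(parent_outputs) == set(child_outputs))

# context-diagram process vs. its level-0 decomposition (toy flows)
ok = balanced(parent_inputs={"order"}, parent_outputs={"invoice", "receipt"},
              child_inputs={"order"}, child_outputs={"invoice"})
print(ok)   # False: 'receipt' is produced at the context level only
```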
12051 Limiting Fiber Extensibility as Parameter for Damage in Venous Wall
Authors: Lukas Horny, Rudolf Zitny, Hynek Chlup, Tomas Adamek, Michal Sara
Abstract:
An inflation-extension test with a human vena cava inferior was performed with the aim of fitting a material model. The vein was modeled as a thick-walled tube loaded by internal pressure and axial force. The material was assumed to be an incompressible hyperelastic fiber-reinforced continuum, with the fibers arranged in two families of anti-symmetric helices; the considered anisotropy corresponds to local orthotropy. The strain energy density function used was based on the concept of limiting strain extensibility. The pressurization comprised four pre-cycles under physiological venous loading (0-4 kPa) and four cycles under non-physiological loading (0-21 kPa), with each overloading cycle performed with a different value of axial weight. The overloading data were used in a regression analysis to fit the material model. The considered model did not fit the experimental data well; in particular, the predictions of the axial force failed. It was hypothesized that, due to the non-physiological loading pressures and the different values of axial weight, the material was not preconditioned enough and some damage occurred inside the wall. The limiting fiber extensibility parameter Jm was assumed to be related to the supposed damage, and each overloading cycle was fitted separately with a different value of Jm while the other parameters were held the same. This approach turned out to be successful: a variable value of Jm can describe the changes in the axial force-axial stretch response and satisfy the pressure-radius dependence simultaneously.
Keywords: Constitutive model, damage, fiber reinforced composite, limiting fiber extensibility, preconditioning, vena cava inferior.
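The limiting-extensibility concept can be illustrated with a Gent-type strain energy, written here for the isotropic invariant I1; the paper's actual function is anisotropic with two fiber families, so this is only a sketch of how Jm caps the admissible strain.

```python
import numpy as np

def gent_energy(I1, mu, Jm):
    """Gent-type strain energy with limiting extensibility parameter Jm:
    W = -(mu*Jm/2) * ln(1 - (I1 - 3)/Jm).  W diverges as (I1 - 3) -> Jm,
    modeling the locking of the (fiber-reinforced) network."""
    x = (I1 - 3.0) / Jm
    if np.any(x >= 1.0):
        raise ValueError("deformation exceeds the limiting extensibility")
    return -0.5 * mu * Jm * np.log(1.0 - x)

# stiffening response under uniaxial stretch lam: I1 = lam^2 + 2/lam
lam = np.linspace(1.0, 1.8, 5)
I1 = lam**2 + 2.0 / lam
print(gent_energy(I1, mu=20.0, Jm=2.0))  # energy grows steeply near the limit
```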
12050 HIV Treatment Planning on a Case-by-Case Basis
Authors: Marios M. Hadjiandreou, Raul Conejeros, Ian Wilson
Abstract:
This study presents a mathematical modeling approach to the planning of HIV therapies on an individual basis. The model replicates clinical data from typical progressors to AIDS for all stages of the disease with good agreement. Clinical data from rapid progressors and long-term non-progressors are also matched by estimation of immune system parameters only. The ability of the model to reproduce these phenomena validates the formulation, a fact which is exploited in the investigation of effective therapies. The therapy investigation suggests that, unlike continuous therapy, structured treatment interruptions (STIs) are able to control the increase in both the drug-sensitive and drug-resistant virus populations and, hence, prevent the ultimate progression from HIV to AIDS. The optimization results further suggest that even patients characterised by the same progression type can respond very differently to the same treatment and that the latter should be designed on a case-by-case basis. Such a methodology is presented here.
Keywords: AIDS, chemotherapy, mathematical modeling, optimal control, progression.
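A minimal sketch of how structured treatment interruptions enter such a model: a basic target-cell-limited ODE system whose drug efficacy switches on and off on a fixed schedule. The parameter values are generic literature-style numbers, and the sketch omits the drug-sensitive/drug-resistant split that the study tracks.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hiv_rhs(t, y, eps):
    """Basic target-cell model: T healthy cells, I infected, V virus.
    eps(t) in [0, 1] is the time-varying drug efficacy."""
    T, I, V = y
    lam, d, beta, delta, p, c = 1e4, 0.01, 2.4e-8, 1.0, 3e3, 23.0
    e = eps(t)
    dT = lam - d * T - (1 - e) * beta * T * V
    dI = (1 - e) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

# structured treatment interruption: 30 days on, 30 days off
eps_sti = lambda t: 0.85 if (t // 30) % 2 == 0 else 0.0
sol = solve_ivp(hiv_rhs, (0, 360), [1e6, 1e3, 1e5], args=(eps_sti,),
                max_step=0.5, dense_output=True)
viral_load = sol.y[2]          # virus trajectory under the STI schedule
```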
12049 Analytical Model to Predict the Shear Capacity of Reinforced Concrete Beams Externally Strengthened with CFRP Composites
Authors: Rajai Al-Rousan
Abstract:
This paper presents a proposed analytical model for predicting the shear strength of reinforced concrete beams strengthened with CFRP composites as external reinforcement. The proposed analytical model can predict the shear contribution of the CFRP composites of RC beams with an acceptable coefficient of correlation with the tested results. Based on a comparison of the proposed model with well-known published models (the ACI, Triantafillou, and Colotti models), the ACI model shows a wide range of 0.16 to 10.08 for the ratio between tested and predicted ultimate shear at failure, while the Triantafillou model shows an acceptable range of 0.27 to 2.78. The best prediction of the ultimate shear capacity is observed with the Colotti model, with a range of 0.20 to 1.78 for the ratio between the tested and predicted values. Thus, the contribution of the CFRP composites as external reinforcement can be predicted with high accuracy using the proposed analytical model.
Keywords: Predicting, shear capacity, reinforced concrete, beams, strengthened, externally, CFRP composites.