Search results for: joint model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17618


14468 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed for customers to find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock suitable movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral, and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were built to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions of the learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM model correctly predicted movie genre preferences in 85% of positive cases, and its accuracy increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers’ preferences from a small data set and how to design prediction tools for such enterprises.
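The comparison described above can be sketched as follows. This is a minimal stand-in using synthetic data: the real study used customer demographic, behavioral, and social features, which are not available here, so the sample sizes and feature counts below are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the customer feature data (sizes are illustrative)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)     # Gaussian kernel SVM
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # logistic regression baseline

# Out-of-sample error, the quantity the abstract compares across models
svm_err = 1 - svm.score(X_te, y_te)
logit_err = 1 - logit.score(X_te, y_te)
```

Restricting the SVM's effective capacity (e.g., a smaller `gamma` or stronger regularization via `C`) plays the role of the smaller VC dimension discussed in the abstract.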

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 260
14467 Predicting Photovoltaic Energy Profile of Birzeit University Campus Based on Weather Forecast

Authors: Muhammad Abu-Khaizaran, Ahmad Faza’, Tariq Othman, Yahia Yousef

Abstract:

This paper presents a study to provide sufficient and reliable information for constructing a photovoltaic energy profile of the Birzeit University (BZU) campus based on the weather forecast. The developed profile helps predict the energy yield of the photovoltaic systems from the weather forecast and hence helps in planning energy production and consumption. Two models are developed in this paper: a Clear Sky Irradiance model and a Cloud-Cover Radiation Model, to predict the irradiance for a clear day and a cloudy day, respectively. The adopted procedure takes into consideration two levels of abstraction. First, irradiance and weather data were acquired by a sensory (measurement) system installed on the rooftop of the Information Technology College building on campus. Second, power readings of a fully operational 51 kW commercial photovoltaic system installed on the rooftop of the adjacent College of Pharmacy, Nursing and Health Professions building are used to validate the output of a simulation model and to help refine its structure. A comparison between a mathematical model, which calculates the clear-sky irradiance for the University's location, and two sets of accumulated measured data shows that the simulation system closely matches the installed PV power station on clear days. However, the comparisons reveal a divergence between expected and actual energy yield in extreme weather conditions, including clouding and soiling effects. Therefore, a more accurate irradiance prediction model, the Cloud-Cover Radiation Model (CRM), was developed; it takes into consideration weather factors that affect irradiance, such as relative humidity and cloudiness. The equivalent mathematical formulas implement corrections to provide more accurate inputs to the simulation system. The results of the CRM match the measured irradiance on a cloudy day very well. The developed photovoltaic profile thus helps predict the output energy yield of the photovoltaic system installed at the University campus from the predicted weather conditions, and the simulation and practical results for both models agree closely.
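The abstract does not give the exact formulation of the Clear Sky Irradiance model, but the idea can be illustrated with a minimal generic clear-sky sketch, the Haurwitz model, which estimates global horizontal irradiance from the solar zenith angle alone:

```python
import math

def haurwitz_ghi(zenith_deg):
    """Clear-sky global horizontal irradiance (W/m^2), Haurwitz model.
    A generic stand-in, not the paper's own formulation."""
    cz = math.cos(math.radians(zenith_deg))
    if cz <= 0:
        return 0.0  # sun below the horizon
    return 1098.0 * cz * math.exp(-0.057 / cz)

# Example: near solar noon (zenith 20 deg) vs. late afternoon (75 deg)
noon = haurwitz_ghi(20.0)
evening = haurwitz_ghi(75.0)
```

A cloud-cover correction of the kind the CRM applies would then scale such clear-sky values down as a function of forecast cloudiness and humidity.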

Keywords: clear-sky irradiance model, cloud-cover radiation model, photovoltaic, weather forecast

Procedia PDF Downloads 133
14466 BIM-Based Tool for Sustainability Assessment and Certification Documents Provision

Authors: Taki Eddine Seghier, Mohd Hamdan Ahmad, Yaik-Wah Lim, Samuel Opeyemi Williams

Abstract:

Assessing a building's sustainability against a specific green benchmark and preparing the documents required to receive a green building certification are both major challenges for the green building design team. However, this labor-intensive and time-consuming process can take advantage of available Building Information Modeling (BIM) features such as material take-off and scheduling. Furthermore, the workflow can be automated to track potentially achievable credit points and provide rating feedback for several design options by using integrated Visual Programming (VP) to handle the parameters stored within the BIM model. Hence, this study proposes a BIM-based tool that uses the Green Building Index (GBI) rating system requirements as its input case to evaluate building sustainability in the design stage of the project life cycle. The tool comprises two key models: first, a model for data extraction, calculation, and classification of achievable credit points into a green template; second, a model for generating the documents required for green building certification. The tool was validated on a BIM model of a residential building and serves as proof of concept that building sustainability assessment for GBI certification can be automatically evaluated and documented through BIM.
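The credit-tracking idea can be sketched as follows: parameters extracted from the BIM model are checked against rating-system thresholds to tally potentially achievable points. The criteria names, thresholds, and point values below are illustrative placeholders, not actual GBI requirements.

```python
criteria = [
    # (name, predicate over extracted BIM parameters, points) -- all illustrative
    ("recycled_material_ratio", lambda p: p["recycled_ratio"] >= 0.30, 2),
    ("daylit_floor_area",       lambda p: p["daylit_area_pct"] >= 50,  3),
    ("low_voc_finishes",        lambda p: p["low_voc"],                1),
]

def tally_credits(params):
    """Return the list of achieved criteria and the total credit points."""
    achieved = [(name, pts) for name, ok, pts in criteria if ok(params)]
    return achieved, sum(p for _, p in achieved)

# Hypothetical parameters extracted from a BIM material take-off
extracted = {"recycled_ratio": 0.35, "daylit_area_pct": 42, "low_voc": True}
achieved, total = tally_credits(extracted)
```

In the actual tool this logic lives in the VP layer, which reads the parameters directly from the BIM model and feeds the tally into the green template and certification documents.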

Keywords: green building rating system, GBRS, building information modeling, BIM, visual programming, VP, sustainability assessment

Procedia PDF Downloads 326
14465 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches

Authors: Vahid Nourani, Atefeh Ashrafi

Abstract:

Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. To this end, the artificial neural network (ANN), a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model on appropriate input variables is a major concern, owing to the numerous parameters collected from the treatment process, whose number keeps increasing as electronic sensors develop. Various studies have used different clustering methods to classify the most related and effective input variables, but the selection of dominant input variables among wastewater treatment parameters, which could lead to more accurate prediction of water quality, has often been overlooked. In the present study, two ANN models were developed to forecast the effluent quality of the Tabriz city wastewater treatment plant, with biochemical oxygen demand (BOD) as the target water quality parameter. Model A used principal component analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. The optimal ANN structure of model B showed up to a 15% increase in determination coefficient (DC) over model A. Thus, this study highlights the advantage of the MI measure in selecting dominant input variables for ANN modeling of wastewater plant performance.
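The MI-based input-selection step (model B) can be sketched as follows: mutual information between each candidate input and the BOD target is estimated from a 2-D histogram and used to rank inputs. The data and variable names here (`flow_rate`, `noise_var`) are synthetic stand-ins; the study used real plant measurements.

```python
import numpy as np

def mi_histogram(x, y, bins=16):
    """Histogram-based mutual information estimate (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
bod = rng.normal(size=2000)                        # stand-in for the BOD target
inputs = {
    "flow_rate": bod + 0.3 * rng.normal(size=2000),  # informative candidate
    "noise_var": rng.normal(size=2000),              # unrelated candidate
}
ranking = sorted(inputs, key=lambda k: mi_histogram(inputs[k], bod), reverse=True)
```

Unlike PCA, which only captures linear variance structure, the MI ranking also rewards nonlinear dependence on the target, which is one plausible reason model B outperformed model A.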

Keywords: Artificial Neural Networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant

Procedia PDF Downloads 128
14464 Development of a Classification Model for Value-Added and Non-Value-Added Operations in Retail Logistics: Insights from a Supermarket Case Study

Authors: Helena Macedo, Larissa Tomaz, Levi Guimarães, Luís Cerqueira-Pinto, José Dinis-Carvalho

Abstract:

In retail logistics, the pursuit of operational efficiency and cost optimization requires a rigorous distinction between value-added and non-value-added activities, and in today's competitive market both are paramount for retail businesses. This research paper focuses on the development of a classification model adapted to the retail sector, specifically examining internal logistics processes. Based on a comprehensive analysis conducted in a retail supermarket in the north of Portugal, covering various aspects of internal retail logistics, this study questions the concept of value and the definition of waste traditionally applied in a manufacturing context and proposes a new way to assess activities in internal logistics. The study combines quantitative data analysis with qualitative evaluations. The proposed classification model offers a systematic approach to categorizing operations within the retail logistics chain, providing actionable insights for decision-makers to streamline processes, enhance productivity, and allocate resources more effectively. The model contributes not only to academic discourse but also serves as a practical tool for retail businesses, aiding the enhancement of their internal logistics dynamics.

Keywords: lean retail, lean logistics, retail logistics, value-added and non-value-added

Procedia PDF Downloads 66
14463 Thermodynamic Properties of Binary Gold-Rare Earth Compounds (Au-RE)

Authors: H. Krarchaa, A. Ferroudj

Abstract:

This work presents results for the thermodynamic properties of intermetallic rare earth-gold compounds at different stoichiometries. The AuRE, AuRE2, Au2RE, Au51RE14, Au6RE, Au3RE, and Au4RE phases appear in the majority of Au-RE phase diagrams. It is observed that the equiatomic composition is common to all gold-rare earth alloys and has the highest melting temperature. Enthalpies of formation of the studied compounds are calculated based on a new reformulation of Miedema's model.
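The paper's reformulation is not given in the abstract, but the sign-determining bracket of the classic Miedema model can be sketched for orientation. The constants P and Q below are illustrative values of the order used for transition-metal pairs, not the paper's parameters; phi is the work-function-like electronegativity (V) and nws^(1/3) the cube root of the electron density at the Wigner-Seitz cell boundary.

```python
def miedema_interfacial_term(phi_a, phi_b, nws13_a, nws13_b, P=14.2, Q=132.5):
    """Sign-determining bracket of the classic Miedema enthalpy model:
    -P*(d phi)^2 + Q*(d nws^(1/3))^2. A negative value favors compound
    formation. Constants are illustrative, not the paper's reformulation."""
    return -P * (phi_a - phi_b) ** 2 + Q * (nws13_a - nws13_b) ** 2
```

For Au-RE pairs the electronegativity difference is large, so this bracket is strongly negative, consistent with the compound-rich Au-RE phase diagrams described above.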

Keywords: rare earth element, enthalpy of formation, thermodynamic properties, macroscopic model

Procedia PDF Downloads 21
14462 3D CFD Modelling of the Airflow and Heat Transfer in Cold Room Filled with Dates

Authors: Zina Ghiloufi, Tahar Khir

Abstract:

A transient three-dimensional computational fluid dynamics (CFD) model is developed to determine the velocity and temperature distributions at different positions in a cold room during pre-cooling of dates. The turbulence model used is the k-ω Shear Stress Transport (SST) model with the standard wall function. The numerical results show that the cooling rate is not uniform inside the room; the product at the middle of the room cools more slowly. This cooling heterogeneity has a large effect on energy consumption during cold storage.

Keywords: CFD, cold room, cooling rate, dates, numerical simulation, k-ω (SST)

Procedia PDF Downloads 235
14461 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care. A better understanding of increased health service utilization is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care, such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or excess zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Moreover, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean 1.33 (expected frequency of zero counts = 156), suggesting that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the approximate maximum likelihood estimator (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows the standard normal distribution under H0, and it coincides with the test proposed for zero inflation when there is no measurement error.
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, with or without considering measurement error in covariates, both existing tests and our proposed test imply that H0 should be rejected with a p-value less than 0.001; i.e., the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for these data. However, if measurement error in covariates is ignored, only one covariate is significant, whereas if it is considered, only another covariate is significant; moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: Compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account yield statistically more reliable and precise information.
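The zeros comparison above, and the simpler no-covariate case of a zero-inflation score test, can be sketched numerically. The statistic below is the classic van den Broek score test against a plain Poisson null, not the paper's test (which additionally handles covariates measured with error); the sample size n is inferred from the reported expected zero count and is an assumption.

```python
import numpy as np

def vandenbroek_score(y):
    """Van den Broek score statistic for zero inflation against a
    Poisson null (asymptotically chi-square with 1 df)."""
    y = np.asarray(y)
    n = y.size
    lam = y.mean()              # Poisson MLE of the mean
    p0 = np.exp(-lam)           # P(Y = 0) under the fitted Poisson
    n0 = np.sum(y == 0)         # observed zeros
    num = (n0 / p0 - n) ** 2
    den = n * (1.0 - p0) / p0 - n * lam
    return num / den

# The abstract's heuristic check: expected zeros under Poisson(1.33).
# n = 590 is a hypothetical sample size chosen so the expected count
# matches the reported 156.
expected_zeros = 590 * np.exp(-1.33)
```

A large statistic (relative to the chi-square critical value) plays the same role as the 206-versus-156 comparison: evidence that a ZIP model fits better than a plain Poisson.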

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 288
14460 Characteristics of Business Models of Industrial-Internet-of-Things Platforms

Authors: Peter Kress, Alexander Pflaum, Ulrich Loewen

Abstract:

The number of Internet-of-Things (IoT) platforms is steadily increasing across various industries, especially for smart factories, smart homes, and smart mobility. In the manufacturing industry, too, the number of Industrial-IoT platforms is growing. IT players, start-ups, and increasingly also established industry players and small and medium-sized enterprises introduce offerings for connecting industrial equipment on platforms, enabled by advanced information and communication technology. Besides the offered functionalities, the ecosystem of partners established around a platform is one of the key differentiators for generating a competitive advantage. The key question is how platform operators design the business model around their platform to attract a high number of customers and partners to co-create value for the entire ecosystem. The present research tries to answer this question by determining the key characteristics of the business models of successful platforms in the manufacturing industry. To this end, the authors selected an explorative qualitative research approach and conducted an inductive comparative case study. The authors generated valuable descriptive insights into the business model elements (e.g., value proposition, pricing model, or partnering model) of various established platforms. Furthermore, patterns across the cases were identified to derive propositions for the successful design of business models of platforms in the manufacturing industry.

Keywords: industrial-internet-of-things, business models, platforms, ecosystems, case study

Procedia PDF Downloads 243
14459 A Biophysical Model of CRISPR/Cas9 on- and off-Target Binding for Rational Design of Guide RNAs

Authors: Iman Farasat, Howard M. Salis

Abstract:

The CRISPR/Cas9 system has revolutionized genome engineering by enabling site-directed and high-throughput genome editing, genome insertion, and gene knockdowns in several species, including bacteria, yeast, flies, worms, and human cell lines. This technology has the potential to enable human gene therapy to treat genetic diseases and cancer at the molecular level; however, the current CRISPR/Cas9 system suffers from seemingly sporadic off-target genome mutagenesis that prevents its use in gene therapy. A comprehensive mechanistic model of how CRISPR/Cas9 functions would enable the rational design of the guide RNAs responsible for target site selection while minimizing unexpected genome mutagenesis. Here, we present the first quantitative model of the CRISPR/Cas9 genome mutagenesis system that predicts how guide RNA sequences (crRNAs) control target site selection and cleavage activity. We used statistical thermodynamics and the law of mass action to develop a five-step biophysical model of Cas9 cleavage and examined it in vivo and in vitro. To predict a crRNA's binding specificities and cleavage rates, we then compiled a nearest-neighbor (NN) energy model that accounts for all possible base pairings and mismatches between the crRNA and candidate genomic DNA sites. These calculations correctly predicted crRNA specificity across 5,518 sites. Our analysis reveals that Cas9 activity and specificity are anti-correlated, and that the trade-off between them is the determining factor in performing RNA-mediated cleavage with minimal off-target activity. To find an optimal solution, we first created a set of safe-design criteria for Cas9 target selection through systematic analysis of available high-throughput measurements. We then used our biophysical model to determine the optimal Cas9 expression levels and timing that maximize on-target cleavage and minimize off-target activity.
We successfully applied this approach in bacterial and mammalian cell lines to reduce off-target activity to near the background mutagenesis level while maintaining a high on-target cleavage rate.
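The statistical-thermodynamic core of such a model can be sketched as a Boltzmann-weighted partition of Cas9 occupancy across candidate genomic sites. The free energies below are illustrative numbers, not values from the paper's nearest-neighbor parameter set.

```python
import math

RT = 0.616  # kcal/mol at 37 C (R = 1.987e-3 kcal/mol/K, T = 310 K)

def occupancy(dG_sites):
    """Relative Cas9 occupancy of each candidate site from its
    crRNA:DNA binding free energy (kcal/mol, more negative = tighter)."""
    weights = [math.exp(-g / RT) for g in dG_sites]
    z = sum(weights)  # partition function over the listed sites
    return [w / z for w in weights]

# On-target (most negative dG) vs. two mismatch-penalized off-targets
p_on, p_off1, p_off2 = occupancy([-12.0, -8.5, -6.0])
```

In the full model, mismatches between the crRNA and a genomic site add nearest-neighbor energy penalties to that site's dG, which is how sequence differences translate into the predicted on/off-target split.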

Keywords: biophysical model, CRISPR, Cas9, genome editing

Procedia PDF Downloads 406
14458 Adhesive Connections in Timber: A Comparison between Rough and Smooth Wood Bonding Surfaces

Authors: Valentina Di Maria, Anton Ianakiev

Abstract:

The use of adhesive anchors in wooden constructions is an efficient technology for connecting and designing timber members in new timber structures and for rehabilitating damaged structural members of historical buildings. Due to the lack of standard regulation in this specific area of structural design, designers' choices are still supported by test analyses that enable understanding and prediction of the structural behavior of glued-in rod joints. The paper outlines an experimental research activity aimed at identifying the tensile resistance capacity of several new adhesive joint prototypes made of epoxy resin, steel bar, and timber of oak and Douglas fir species. The new adhesive connectors were produced by using epoxy to glue stainless steel bars into pre-drilled holes, with smooth and rough internal surfaces, in timber samples. Creating a threaded contact surface with a specific drill bit improved the bond between wood and epoxy, and the applied changes also reduced the production cost of the joints. The paper presents the results of this parametric analysis and of a finite element analysis that identifies and studies the internal stress distribution in the proposed adhesive anchors.

Keywords: glued in rod joints, adhesive anchors, timber, epoxy, rough contact surface, threaded hole shape

Procedia PDF Downloads 551
14457 Effect of Two Bouts of Eccentric Exercise on Knee Flexors Changes in Muscle-Tendon Lengths

Authors: Shang-Hen Wu, Yung-Chen Lin, Wei-Song Chang, Ming-Ju Lin

Abstract:

This study investigated whether the repeated bout effect (RBE) of knee flexor (KF) eccentric exercise would be reflected in changes in muscle-tendon lengths. Eight healthy male university students performed a bout of 60 maximal isokinetic (30°/s) eccentric contractions (MaxECC1) with the KF of the non-dominant leg. A week after MaxECC1, all subjects used the same KF to perform a subsequent bout (MaxECC2). Changes in maximal isokinetic voluntary contraction torque (MVC-CON), muscle soreness (SOR), relaxed knee joint angle (RANG), leg circumference (CIR), and ultrasound images (UI; muscle-tendon length and muscle angle) were measured before, immediately after, and 1-5 days after each bout. Two-way ANOVA was used to analyze all dependent variables. After MaxECC1, all dependent variables changed significantly (e.g., MVC-CON: ↓30%, muscle-tendon length: ↑24%, muscle angle: ↑15%). Following MaxECC2, the changes in all of these variables (e.g., MVC-CON: ↓21%, tendon length: ↑16%, muscle angle: ↑6%) were significantly smaller than those after MaxECC1. These results indicate a protective effect conferred by MaxECC1 against MaxECC2: changes in muscle damage indicators, muscle-tendon length, and muscle angle following MaxECC2 were smaller than after MaxECC1. Thus, the amount of shift in muscle-tendon length and muscle angle was related to the RBE.

Keywords: eccentric exercise, maximal isokinetic voluntary contraction torque, repeated bout effect, ultrasound

Procedia PDF Downloads 332
14456 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing's test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between human and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. Case loads in public administration could be divided between a manual and an automated decision track, the automated track being an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the case load would be dealt with by both a human case worker and the automated recommender system. In those cases, an experienced human case worker would act as an evaluator, choosing between the two decisions. This model ensures that the algorithmic recommender system does not compromise the quality of legal decision-making in the institution. It also enhances the legitimacy of algorithmic decision support, because it provides justification for its use: the system is seen as superior to human decisions when experienced case workers prefer its recommendations. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issue that legal decision-making is subject to legislative and judicial changes and that legal interpretation is context-sensitive. Both of these issues require continuous supervision of, and adjustments to, algorithmic recommender systems used for legal decision-making.
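The dual-track protocol described above can be sketched as routing logic. The audit share, function names, and decision values are illustrative assumptions; the paper does not specify an implementation.

```python
import random

AUDIT_SHARE = 0.2  # illustrative share of cases decided by both tracks

def route_case(case, recommender, case_worker, evaluator, rng=random):
    """Route a case: most go to the automated track; a sampled share is
    decided by both tracks, with a senior evaluator picking the outcome."""
    if rng.random() < AUDIT_SHARE:
        machine = recommender(case)
        human = case_worker(case)
        return evaluator(case, machine, human)  # dual track
    return recommender(case)                    # automated track

# Toy stand-ins for the three roles
decision = route_case(
    {"id": 1},
    recommender=lambda c: "grant",
    case_worker=lambda c: "deny",
    evaluator=lambda c, m, h: h,  # this evaluator prefers the human decision
)
```

The audit share is the design lever here: a larger share gives more evaluator comparisons (and thus more evidence about the recommender's quality, limiting feedback loops) at the cost of less automation.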

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 217
14455 Analysis of Structural Modeling on Digital English Learning Strategy Use

Authors: Gyoomi Kim, Jiyoung Bae

Abstract:

The purpose of this study was to propose a framework that verifies the structural relationships among students' use of digital English learning strategies (DELS), affective domains, and individual variables. The study developed a hypothetical model based on previous studies of language learning strategy use and digital language learning. The participants were 720 Korean high school students and 430 university students. The instrument was a self-report questionnaire of 70 items, based on Oxford's Strategy Inventory for Language Learning (SILL) and on previous studies of language learning strategies in digital learning environments, measuring DELS and affective domains. The collected data were analyzed through structural equation modeling (SEM) using two quantitative procedures: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). First, the EFA was conducted to examine the hypothetical model; factor analysis was carried out first to identify the underlying relationships between the measured variables of DELS and the affective domain. The hypothetical model was established with six indicators of learning strategies (memory, cognitive, compensation, metacognitive, affective, and social strategies) under the latent variable of DELS use, and four indicators (self-confidence, interest, self-regulation, and attitude toward digital learning) under the latent variable of learners' affective domain. Second, the CFA was used to determine the suitability of the data and the research model, so all data from the present study were used to assess model fit.
Lastly, the model included individual learner factors as covariates; the five constructs selected were learners' gender, English proficiency level, duration of English learning, period of using digital devices, and previous experience of digital English learning. The SEM analysis yielded a theoretical model showing the structural relationships between Korean students' use of DELS and their affective domains. The results help ESL/EFL teachers understand how learners use and develop appropriate learning strategies in digital learning contexts. Pedagogical implications and suggestions for further study are also presented.
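The EFA extraction step can be sketched numerically: factor loadings are obtained from the item correlation matrix by eigendecomposition (principal component extraction, without rotation). Real analyses would use dedicated EFA/SEM software; the data below are synthetic stand-ins for the questionnaire items, and the two-factor structure is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(430, 2))                   # two latent traits (stand-ins)
loadings_true = rng.normal(size=(2, 10))
items = latent @ loadings_true + 0.5 * rng.normal(size=(430, 10))  # 10 items

R = np.corrcoef(items, rowvar=False)                 # item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                    # sort factors by variance
k = 2                                                # retain two factors
loadings = eigvecs[:, order[:k]] * np.sqrt(eigvals[order[:k]])
```

The squared loadings summed per item give the communalities, i.e., the share of each item's variance explained by the retained factors, which is the quantity inspected when deciding which indicators belong under which latent variable.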

Keywords: Digital English Learning Strategy, DELS, individual variables, learners' affective domains, Structural Equation Modeling, SEM

Procedia PDF Downloads 125
14454 The Planner's Pentangle: A Proposal for a 21st-Century Model of Planning for Sustainable Development

Authors: Sonia Hirt

Abstract:

The Planner's Triangle, an oft-cited model that visually defined planning as the search for sustainability balancing the three basic priorities of equity, economy, and environment, has influenced planning theory and practice for a quarter of a century. In this essay, we argue that the triangle requires updating and expansion. Even if planners keep sustainability as the core aspiration at the center of their imaginary geometry, the triangle's vertices have to be rethought. Planners should move on to a 21st-century concept. We propose a Planner's Pentangle with five basic priorities as the vertices of a new conceptual polygon: Wellbeing, Equity, Economy, Environment, and Esthetics (WE⁴). The WE⁴ concept more accurately and fully represents planning's history. This is especially true in the United States, where public art and public health played pivotal roles in the establishment of the profession in the late 19th and early 20th centuries. It also more accurately represents planning's future, as both health/wellness and aesthetic concerns are becoming increasingly important in the 21st century. The pentangle can become an effective tool for understanding and visualizing planning's history and present. Planning has a long history of representing urban presents and futures as conceptual models in visual form, and such models can play an important role in understanding and shaping practice. For over two decades, one such model, the Planner's Triangle, stood apart as the expression of planning's pursuit of sustainability. But if the model is outdated and insufficiently robust, it diminishes our understanding of planning practice, as well as the appreciation of the profession among non-planners. Thus, we argue for a new conceptual model of what planners do.

Keywords: sustainable development, planning for sustainable development, planner's triangle, planner's pentangle, planning and health, planning and art, planning history

Procedia PDF Downloads 141
14453 Health Percentage Evaluation for Satellite Electrical Power System Based on Linear Stresses Accumulation Damage Theory

Authors: Lin Wenli, Fu Linchun, Zhang Yi, Wu Ming

Abstract:

To meet the demands of long life and high intelligence for satellites, the electrical power system should be provided with a self-evaluation capability for its health condition, and any over-stress events in operation should be recorded. Based on linear stress accumulation damage theory, accumulative damage analysis was performed on the combined thermal, mechanical, and electrical stresses for three components: the solar array, the batteries, and the power conditioning unit. An overall health percentage evaluation model for the satellite electrical power system was then built. To obtain an accurate quantity for the system health percentage, an automatic closed-loop feedback correction method for all coefficients in the evaluation model is presented. The evaluation outputs can serve as a reference for earlier fault forecasting and intervention by the ground control center or by the satellite itself.
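The linear accumulation idea underlying the model can be sketched with Miner's rule: each stress event consumes a fraction of component life, and the health percentage is one minus the accumulated damage. The cycle counts and limits below are illustrative, not satellite data.

```python
def health_percentage(events):
    """Miner's-rule health estimate.
    events: list of (cycles_experienced_at_stress_level, cycles_to_failure
    at that level). Damage fractions add linearly; 1.0 means end of life."""
    damage = sum(n / N for n, N in events)
    return max(0.0, 1.0 - damage) * 100.0

# Hypothetical battery history: nominal cycling plus one over-stress event
battery = [(500, 10_000), (20, 400)]
hp = health_percentage(battery)
```

The paper's closed-loop correction would then adjust the per-level life limits (the N values) as telemetry accumulates, so the computed percentage tracks the component's actual condition.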

Keywords: satellite electrical power system, health percentage, linear stresses accumulation damage, evaluation model

Procedia PDF Downloads 412
14452 Spectral Analysis Applied to Variables of Oil Wells Profiling

Authors: Suzana Leitão Russo, Mayara Laysa de Oliveira Silva, José Augusto Andrade Filho, Vitor Hugo Simon

Abstract:

Seismic and prospecting methods are commonly applied in the oil industry. Oil is a non-renewable energy source, so it is easy to understand why ownership of oil extraction areas is coveted by many nations, and it is necessary to think about ways of maximizing oil production. The technique of spectral analysis can be used to analyze the behavior of the variables defined in the oil well profile. The main objective is to verify the serial dependence of the variables and to model them in the frequency domain, observing the model residuals.

Keywords: oil, well, spectral analysis, oil extraction

Procedia PDF Downloads 535
14451 Study of a Crude Oil Desalting Plant of the National Iranian South Oil Company in Gachsaran by Using Artificial Neural Networks

Authors: H. Kiani, S. Moradi, B. Soltani Soulgani, S. Mousavian

Abstract:

Desalting/dehydration plants (DDP) are often installed in crude oil production units in order to remove water-soluble salts from an oil stream. To optimize this process, the desalting unit should be modeled. In this research, an artificial neural network is used to model the efficiency of the desalting unit as a function of the input parameters. The results show that the model agrees well with experimental data.

Keywords: desalting unit, crude oil, neural networks, simulation, recovery, separation

Procedia PDF Downloads 450
14450 Forecasting Etching Behavior Silica Sand Using the Design of Experiments Method

Authors: Kefaifi Aissa, Sahraoui Tahar, Kheloufi Abdelkrim, Anas Sabiha, Hannane Farouk

Abstract:

The aim of this study is to show how the Design of Experiments method (DOE) can serve as a practical approach for modeling the etching behavior of silica sand during its primary leaching step. In the present work, we studied the effect of etching on particle size during primary leaching of Algerian silica sand with hydrofluoric acid (HF) at 20% and 30% for 4 and 8 hours; a new sand purity is obtained depending on the leaching time. The study was then extended by a numerical approach using the experiment design method, which shows the influence of each parameter and the interactions between them in the process and confirms the experimental results. Based on the parameters measured experimentally inside the model's domain, the DOE method makes it possible to predict the response at untested parameter settings and can give the optimized response without additional experimental measurements.
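The DOE analysis can be sketched as a 2² full factorial over the two coded factors above (HF concentration 20/30% and leaching time 4/8 h), with main effects and the two-way interaction estimated by least squares. The purity responses below are made-up numbers for illustration, not the study's measurements.

```python
import itertools
import numpy as np

levels = [-1, 1]  # coded factor levels: low/high concentration and time
runs = np.array(list(itertools.product(levels, repeat=2)))  # (conc, time)
# Illustrative purity responses for (conc, time) = (-,-), (-,+), (+,-), (+,+)
y = np.array([97.2, 98.1, 98.5, 99.6])

# Model: y = b0 + b1*conc + b2*time + b12*conc*time
X = np.column_stack([np.ones(4), runs[:, 0], runs[:, 1], runs[:, 0] * runs[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_conc, b_time, b_inter = coef
```

Because the design columns are orthogonal, each coefficient is simply half the corresponding effect; the fitted polynomial can then predict the purity at untested settings and locate the optimum, as described above.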

Keywords: acid leaching, design of experiments method (DOE), silica purity, silica etching

Procedia PDF Downloads 286
14449 The Main Steamline Break Transient Analysis for Advanced Boiling Water Reactor Using TRACE, PARCS, and SNAP Codes

Authors: H. C. Chang, J. R. Wang, A. L. Ho, S. W. Chen, J. H. Yang, C. Shih, L. C. Wang

Abstract:

To confirm the reactor and containment integrity of the Advanced Boiling Water Reactor (ABWR), we analyze the main steamline break (MSLB) transient using the TRACE, PARCS, and SNAP codes. The research process has four steps. First, the ABWR nuclear power plant (NPP) model is developed using the above codes. Second, a steady-state analysis is performed with this model. Third, the ABWR model is used to run the MSLB transient analysis. Fourth, the predictions of TRACE and PARCS are compared with the FSAR data. The TRACE/PARCS results and the FSAR data are similar. According to the TRACE/PARCS results, the reactor and containment integrity of the ABWR can be maintained in a safe condition during an MSLB.

Keywords: advanced boiling water reactor, TRACE, PARCS, SNAP

Procedia PDF Downloads 207
14448 Improvements in Double Q-Learning for Anomalous Radiation Source Searching

Authors: Bo-Bin Xiao, Chia-Yi Liu

Abstract:

In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search by machines is a necessary development. This research applies several deep reinforcement learning algorithms, namely double Q-learning, dueling networks, and NoisyNet, to the source-searching task. The simulation environment is a 10×10 grid with one shielding wall, and each AI model is developed by training for one million episodes. In each training episode, the radiation source position, the radiation source intensity, the agent position, the shielding wall position, and the shielding wall length are all set randomly. The three algorithms are applied to train AI models in four environments, in which the shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead or concrete wall appearing randomly. The 12 best-performing AI models are selected by observing the reward value during training and are evaluated by comparison with a gradient search algorithm. The results show that the performance of the AI models, regardless of algorithm, is far better than that of the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model applying Double DQN combined with Dueling and NoisyNet performs better.
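The core double Q-learning update behind these agents can be sketched in tabular form. The grid encoding, rewards, and hyperparameters below are assumptions for illustration, not the paper's deep-network setup.

```python
import numpy as np

# Tabular double Q-learning sketch: two tables, where one selects the
# greedy action and the other evaluates it, reducing maximization bias.
rng = np.random.default_rng(1)
n_states, n_actions = 100, 4   # e.g. a 10x10 grid flattened, 4 moves
QA = np.zeros((n_states, n_actions))
QB = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def double_q_update(s, a, r, s_next):
    """Randomly update one table, evaluating the greedy action with the other."""
    if rng.random() < 0.5:
        a_star = int(np.argmax(QA[s_next]))
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:
        a_star = int(np.argmax(QB[s_next]))
        QB[s, a] += alpha * (r + gamma * QA[s_next, a_star] - QB[s, a])

# Toy experience stream: reward only in state 99 (a stand-in "source" cell)
for _ in range(5000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    r = 1.0 if s == 99 else 0.0
    double_q_update(s, a, r, int(rng.integers(n_states)))

print(max(QA[99].max(), QB[99].max()))
```

The dueling and NoisyNet components from the abstract would replace these tables with a network head and noisy linear layers, but the update rule keeps this same two-estimator structure.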

Keywords: double Q learning, dueling network, NoisyNet, source searching

Procedia PDF Downloads 113
14447 Subspace Rotation Algorithm for Implementing Restricted Hopfield Network as an Auto-Associative Memory

Authors: Ci Lin, Tet Yeap, Iluju Kiringa

Abstract:

This paper introduces the subspace rotation algorithm (SRA) to train the Restricted Hopfield Network (RHN) as an auto-associative memory. The subspace rotation algorithm is a gradient-free subspace tracking approach based on the singular value decomposition (SVD). In comparison with Backpropagation Through Time (BPTT) for training the RHN, it is observed that SRA always converges to the optimal solution, whereas BPTT cannot achieve the same performance when the model becomes complex and the number of patterns is large. The AUTS case study showed that the RHN model trained by SRA achieves a better-structured attraction basin, with a larger radius in general, than the Hopfield Network (HNN) model trained by the Hebbian learning rule. By learning 10,000 patterns from the MNIST dataset with RHN models with different numbers of hidden nodes, it is observed that several components can be adjusted to achieve a balance between recovery accuracy and noise resistance.
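The Hebbian-rule baseline the RHN is compared against can be sketched directly: store bipolar patterns in a Hopfield weight matrix and let the dynamics clean up a corrupted probe. Pattern count, size, and noise level below are illustrative assumptions.

```python
import numpy as np

# Hebbian learning rule for a classical Hopfield network:
# W = (1/n) * sum_p x_p x_p^T, with a zeroed diagonal.
rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))   # 3 random bipolar patterns

W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def recall(x, steps=10):
    """Synchronous sign updates until a fixed point (a common simplification)."""
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Flip 5 bits of the first pattern and let the network restore it
noisy = patterns[0].copy()
flip = rng.choice(n, size=5, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
print(int((recovered == patterns[0]).sum()))
```

The attraction-basin radius discussed in the abstract corresponds to how many such bit flips the stored pattern can absorb before recall lands in a different basin.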

Keywords: hopfield neural network, restricted hopfield network, subspace rotation algorithm, hebbian learning rule

Procedia PDF Downloads 118
14446 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it on the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for the Canny edge features was adopted, as this approach gave good results. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with the Canny edge features applied than with a model that used only the original grayscale images. When tested on each language separately, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada characters. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
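The edge-feature channel idea can be sketched as follows. Full Canny detection adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding (e.g. OpenCV's `cv2.Canny`); only the underlying Sobel gradient-magnitude step is shown here in plain NumPy, and the toy "character" is an assumption.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude map from 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

# Toy "character": a vertical stroke on a blank 8x8 canvas
img = np.zeros((8, 8)); img[:, 3:5] = 1.0
edges = sobel_edges(img)

# Stack grayscale + normalized edge map as a 2-channel input for a CNN/VAN
stacked = np.stack([img[1:-1, 1:-1], edges / max(edges.max(), 1e-9)])
print(stacked.shape)
```

The stacked array plays the role of the "additional channels" the abstract describes: the network sees both the raw glyph and its stroke boundaries.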

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 53
14445 Determination of Anchor Lengths by Retaining Walls

Authors: Belabed Lazhar

Abstract:

Dimensioning anchored retaining screens always requires an analysis of their stability. The anchoring lengths are usually calculated according to the mechanical model proposed by Kranz, which is often criticized. Safety is evaluated by comparing the internal and external forces; the anchoring force over the length cut off behind the failure solid is neglected, and the failure surface cuts the anchor at the mid-length of its sealing. This article proposes a new mechanical model that overcomes these simplifications and gives interesting results.

Keywords: retaining walls, anchoring, stability, mechanical modeling, safety

Procedia PDF Downloads 352
14444 Design and Validation of an Aerodynamic Model of the Cessna Citation X Horizontal Stabilizer Using both OpenVSP and Digital Datcom

Authors: Marine Segui, Matthieu Mantilla, Ruxandra Mihaela Botez

Abstract:

This research is part of a major project at the Research Laboratory in Active Controls, Avionics and Aeroservoelasticity (LARCASE) aiming to improve the cruise performance of a Cessna Citation X aircraft by applying morphing wing technology to its horizontal tail. The horizontal stabilizer of the Cessna Citation X rotates about its span axis through an angle between -8 and 2 degrees; within this range, it certainly generates some unwanted drag. To cancel this drag, the LARCASE proposes to trim the aircraft with a horizontal stabilizer equipped with morphing wing technology, which optimizes aerodynamic performance by changing the conventional horizontal tail shape during flight. As a consequence, this technology will be able to generate enough lift on the horizontal tail to balance the aircraft without generating unwanted drag. To conduct this project, an accurate aerodynamic model of the horizontal tail is first required; this model will allow a precise comparison between results for a conventional and a morphed horizontal tail. This paper presents how the aerodynamic model was designed: it shows how the 2D geometry of the horizontal tail was collected and how the unknown airfoil shape of the horizontal tail was recovered. Finally, the complete horizontal tail airfoil shape was found, and a comparison between the aerodynamic polars of the real horizontal tail and of the one found in this paper shows a maximum difference of 0.04 in the lift or drag coefficient, which is very good agreement. The aerodynamic polar data of the aircraft's horizontal tail were obtained from the CAE Inc. level D research aircraft flight simulator of the Cessna Citation X.

Keywords: aerodynamic, Cessna, citation, coefficient, Datcom, drag, lift, longitudinal, model, OpenVSP

Procedia PDF Downloads 374
14443 Transient and Persistent Efficiency Estimation for Electric Grid Utilities Based on Meta-Frontier: Comparative Analysis of China and Japan

Authors: Bai-Chen Xie, Biao Li

Abstract:

With the deepening of international exchange and investment, international comparison of power grid firms has become a focus of regulatory authorities. Ignoring differences in the economic environment, resource endowment, technology, and other aspects of different countries or regions may bias efficiency estimates. Based on the meta-frontier model, this paper divides China and Japan into two groups using data from 2006 to 2020. While preserving the differences between the two countries, it analyzes and compares the efficiency of their transmission and distribution industries. Combined with the four-component stochastic frontier model, efficiency is decomposed into transient and persistent efficiency. We found obvious differences between the transmission and distribution sectors in China and Japan. On the one hand, the inefficiency in both countries is mostly caused by long-term and structural problems, so the key to improving efficiency is to focus on solving them. On the other hand, the long-term and structural problems causing inefficiency are not the same in the two countries. Quality factors have different effects on the efficiency of the two countries; this differing effect is captured by the common frontier model but is offset in the overall model. Based on these findings, this paper proposes some targeted policy recommendations.

Keywords: transmission and distribution industries, transient efficiency, persistent efficiency, meta-frontier, international comparison

Procedia PDF Downloads 100
14442 Complete Ensemble Empirical Mode Decomposition with Adaptive Noise Temporal Convolutional Network for Remaining Useful Life Prediction of Lithium Ion Batteries

Authors: Jing Zhao, Dayong Liu, Shihao Wang, Xinghua Zhu, Delong Li

Abstract:

Unmanned underwater vehicles generally operate in the deep sea, which has its own unique working conditions. Lithium-ion power batteries must have the necessary stability and endurance for use as an underwater vehicle's power source; therefore, it is essential to accurately forecast how long lithium-ion batteries will last in order to maintain the system's reliability and safety. To model and forecast lithium battery Remaining Useful Life (RUL), this research suggests a model based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and a Temporal Convolutional Network (CEEMDAN-TCN). Two datasets, NASA and CALCE, whose capacity data exhibit distinctly different fluctuations, are used to verify the model and examine the experimental results in order to demonstrate the generalizability of the concept. The experiments demonstrate the network structure's strong universality and its ability to achieve good fits on the test set for various battery dataset types. The evaluation metrics reveal that the prediction performance of CEEMDAN-TCN is 25% to 35% better than that of a single neural network, proving that feature expansion and modal decomposition can both enhance the model's generalizability and be extremely useful in industrial settings.
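The TCN half of the pipeline rests on causal dilated convolutions, which can be sketched in isolation. A real CEEMDAN-TCN would first decompose the capacity series into modes (e.g. with the PyEMD library's CEEMDAN class) and feed them to stacked layers of this operation; the toy fade curve and kernel below are assumptions.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """y[t] depends only on x[t], x[t-d], x[t-2d], ... (no future leakage)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # left-pad so output stays causal
    return np.array([
        sum(kernel[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

capacity = np.linspace(1.0, 0.7, 50)          # toy monotone capacity-fade curve
smoothed = causal_dilated_conv(capacity, kernel=[0.25, 0.25, 0.5], dilation=2)
print(len(smoothed))
```

Causality matters for RUL work: a prediction at cycle t must not peek at later cycles, which is exactly what the left-only padding enforces.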

Keywords: lithium-ion battery, remaining useful life, complete EEMD with adaptive noise, temporal convolutional net

Procedia PDF Downloads 154
14441 Green Supply Chain Management and Corporate Performance: The Mediation Mechanism of Information Sharing among Firms

Authors: Seigo Matsuno, Yasuo Uchida, Shozo Tokinaga

Abstract:

This paper proposes and empirically tests a model of the relationships between green supply chain management (GSCM) activities and corporate performance. From the literature review, we identified five constructs: environmental commitment, supplier collaboration, supplier assessment, information sharing among suppliers, and business process improvement. These explanatory variables form a structural model explaining environmental and economic performance. The model was analyzed using data from a survey of a sample of manufacturing firms in Japan. The results suggest that the degree of supplier collaboration influences environmental performance directly, while the impact of supplier assessment on environmental performance is mediated by information sharing and/or business process improvement. Environmental performance, in turn, has a positive relationship with economic performance. Academic and managerial implications of our findings are discussed.

Keywords: corporate performance, empirical study, green supply chain management, path modeling

Procedia PDF Downloads 393
14440 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter

Authors: Zhu Xinxin, Wang Hui, Yang Kai

Abstract:

Heat flux is one of the most important test parameters in ground thermal protection testing. The slug calorimeter is selected as the main heat flux sensor in arc wind tunnel tests because of its convenience and low cost. However, because of excessive lateral heat transfer and the shortcomings of the calculation method, the heat flux measurement error of the slug calorimeter is large. To enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreases lateral heat transfer, and the slug with the hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduces heat loss and ensures that the heat transfer characteristics are almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. Meanwhile, a simulation model of the slug calorimeter was built, and the heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature-rise-rate data as early as possible leads to a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model, and the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct-comparison calibration correction method was proposed based on heat flux calibration alone, and a numerical calculation correction method was proposed based on the heat flux calibration together with the simulation model, after identifying the contact resistance between the slug and the insulating sleeve. The simulation and test results show that both methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability of the improved slug calorimeter is within 3%, that the deviation between measurements from different slug calorimeters is less than 3% in the same flow field, and that the deviation between the slug calorimeter and a Gordon gage is less than 4% in the same flow field.
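The basic slug-calorimeter calculation behind this abstract is the energy balance q = (m·c/A)·dT/dt, with the slope taken from the early temperature rise, before lateral losses bend the curve. The copper-slug numbers below are illustrative assumptions, not the paper's hardware.

```python
import numpy as np

# Assumed slug properties: mass (kg), specific heat (J/(kg K)), face area (m^2)
m, c, A = 0.010, 385.0, 1.0e-4

t = np.linspace(0.0, 2.0, 21)            # s
q_true = 5.0e5                           # W/m^2, assumed constant flux
T = 300.0 + (q_true * A / (m * c)) * t   # ideal lumped-slug temperature response

# Fit dT/dt on only the first few samples, i.e. extract the rise rate
# as early as possible, as the abstract recommends
slope = np.polyfit(t[:6], T[:6], 1)[0]
q_est = m * c * slope / A
print(q_est)
```

On real data the late-time slope flattens as lateral heat transfer grows, which is why restricting the fit window (and correcting for contact resistance) reduces the calculation error.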

Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter

Procedia PDF Downloads 118
14439 A Numerical and Experimental Study on Fast Pyrolysis of a Single Wood Particle

Authors: Hamid Rezaei, Xiaotao Bi, C. Jim Lim, Anthony Lau, Shahab Sokhansanj

Abstract:

A one-dimensional heat transfer model coupled with kinetic information was used to predict the overall pyrolysis mass loss of a single wood particle. The kinetic parameters were determined experimentally, and the regime and characteristics of the conversion were evaluated in terms of particle size and reactor temperature. The order of the overall mass loss changed from n = 1 at temperatures lower than 350 °C to n = 0.5 at temperatures higher than 350 °C. Conversion time analysis showed that particles larger than 0.5 mm were controlled by internal thermal resistances. The valid particle-size range for the simplified lumped model depends on the fluid temperature around the particles: the critical particle size was 0.6-0.7 mm for a fluid temperature of 500 °C and 0.9-1.0 mm for a fluid temperature of 100 °C. Experimental pyrolysis of moist particles did not show distinct drying and pyrolysis stages, so the process was divided into two hypothetical drying-dominated and pyrolysis-dominated zones, and empirical correlations were developed to predict the rate of mass loss in each zone.
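The lumped-model validity question behind the critical particle sizes is commonly framed with the Biot number, Bi = h·Lc/k, with the lumped model taken to hold for Bi below about 0.1. The heat transfer coefficient and wood conductivity below are illustrative assumptions, so the resulting threshold is a sketch, not the paper's result.

```python
# Assumed properties: convective coefficient h (W/(m^2 K)) and
# wood thermal conductivity k (W/(m K))
h = 100.0
k = 0.15

def biot_sphere(d_particle):
    """Biot number for a sphere of diameter d, with Lc = (d/2)/3."""
    Lc = (d_particle / 2.0) / 3.0
    return h * Lc / k

for d_mm in (0.3, 0.7, 1.5):
    Bi = biot_sphere(d_mm * 1e-3)
    regime = "lumped OK" if Bi < 0.1 else "internal resistance matters"
    print(f"d = {d_mm} mm: Bi = {Bi:.3f} ({regime})")
```

Since h rises with fluid temperature, the Bi < 0.1 threshold is crossed at a smaller diameter in hotter fluid, consistent with the abstract's smaller critical size at 500 °C than at 100 °C.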

Keywords: pyrolysis, kinetics, model, single particle

Procedia PDF Downloads 320