Search results for: Grey prediction model

16548 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction

Authors: Mingxin Li, Liya Ni

Abstract:

Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot can park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped with only a front camera, because the camera view is limited by the robot’s height and by the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with only a front camera was achieved in a four-step process. Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots taken from the view angle of a small-scale robot. The dataset was divided into 70% of the images for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and the center of the available parking space were predicted from the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from the camera view due to the height and FOV limitations. Based on its kinematic model, the robot used its wheel speeds to compute the positions of the parking space with respect to its changing local frame as it moved along. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center once that became visible to the robot again. The linear and angular velocities of the robot chassis center were computed from the error between the current chassis center and the reference point, and the left and right wheel speeds were then obtained using inverse kinematics and sent to the motor driver. All four subtasks were successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential-drive mobile robot. The robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform, with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed.
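
As a rough illustration of the final control step, the sketch below computes chassis velocities from the tracking error and converts them to wheel speeds through differential-drive inverse kinematics. This is a minimal Python sketch, not the authors' MATLAB code; the controller gains, wheel radius, and track width are hypothetical placeholders.

```python
import numpy as np

def diff_drive_inverse_kinematics(v, omega, wheel_radius, track_width):
    """Convert chassis linear velocity v (m/s) and angular velocity
    omega (rad/s) into left/right wheel angular speeds (rad/s)."""
    v_left = v - omega * track_width / 2.0   # linear speed at left wheel
    v_right = v + omega * track_width / 2.0  # linear speed at right wheel
    return v_left / wheel_radius, v_right / wheel_radius

def proportional_controller(chassis_xy, target_xy, heading, k_v=0.5, k_w=1.5):
    """P-control on the error between the chassis center and the reference
    point (the predicted entrance point, later the actual slot center)."""
    error = np.asarray(target_xy, dtype=float) - np.asarray(chassis_xy, dtype=float)
    distance = np.linalg.norm(error)
    bearing = np.arctan2(error[1], error[0]) - heading
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
    return k_v * distance, k_w * bearing  # (v, omega)

# Drive toward a predicted entrance point 1.2 m ahead and 0.4 m to the left
v, omega = proportional_controller((0.0, 0.0), (1.2, 0.4), heading=0.0)
wl, wr = diff_drive_inverse_kinematics(v, omega, wheel_radius=0.03, track_width=0.15)
print(f"left wheel: {wl:.1f} rad/s, right wheel: {wr:.1f} rad/s")
```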

Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning

Procedia PDF Downloads 127
16547 Towards Efficient Reasoning about Families of Class Diagrams Using Union Models

Authors: Tejush Badal, Sanaa Alwidian

Abstract:

Class diagrams are useful tools within the Unified Modelling Language (UML) for modelling and visualizing the relationships between, and properties of, objects within a system. As a system evolves over time and space (e.g., across products), a series of models with several commonalities and variabilities creates what is known as a model family. When there are several versions of a model, examining each model individually becomes expensive in terms of computation resources. To avoid performing redundant operations, this paper proposes an approach that represents a family of class diagrams as a single generic union model. The paper aims to analyze and reason about a family of class diagrams using union models, as opposed to analyzing each member model of the family individually. The union algorithm provides a holistic view of the model family that cannot otherwise be obtained from an individual-analysis approach; this, in turn, speeds up the analysis of a family of models compared with analyzing the individual models one at a time.
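
A union construction of this kind can be pictured, at its simplest, as a set union with provenance annotations. The Python sketch below works under that simplifying assumption (diagram elements reduced to flat tuples rather than full UML structures) and is only an illustration, not the authors' algorithm.

```python
# Minimal sketch: each class diagram is a set of typed elements (classes,
# associations); the union model keeps each element once, annotated with
# the member models it appears in.
def union_model(family):
    union = {}
    for model_name, elements in family.items():
        for element in elements:
            union.setdefault(element, set()).add(model_name)
    return union

# Three versions of a diagram sharing most elements (hypothetical family)
family = {
    "v1": {("class", "Order"), ("class", "Customer"), ("assoc", "Order-Customer")},
    "v2": {("class", "Order"), ("class", "Customer"), ("class", "Invoice"),
           ("assoc", "Order-Customer")},
    "v3": {("class", "Order"), ("class", "Invoice")},
}

union = union_model(family)
# A property can now be checked once over the union instead of once per model
common = [e for e, owners in union.items() if len(owners) == len(family)]
print("elements common to all versions:", common)
```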

Keywords: analysis, class diagram, model family, unified modeling language, union model

Procedia PDF Downloads 58
16546 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. It is therefore important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development: identifying critical-to-reliability parameters, performing full-blown characterization to embed margin into product reliability, and establishing controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework, covering the SSD end to end from design to assembly, in-line inspection, and in-line testing, that can predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability-margin investigation focused on assembly process attributes, process equipment control, and in-process metrology, while also taking the forward-looking product roadmap into account. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for the design validation process, the reliability prediction, specifically a solder-joint simulator, is established. The SSDs are stratified into non-operating and operating tests focused on solder-joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder-joint analyses, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions needed to further improve the design. Once the SSD is validated and proven to work, the monitor phase is implemented, whereby the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time delivery of qualification samples to the customer, optimizes development validation and the effective use of development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows the focus to shift to increasing the product margin, which increases customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 161
16545 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery

Authors: Marlin Mubarak

Abstract:

Background: Endometriosis is a complex, disabling disease that mainly affects young females of reproductive age. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help improve the pre-operative diagnostic accuracy of stage 3 and 4 endometriosis and, as a result, allow referral of the relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society of Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination, and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. As part of the routine preoperative assessment, patients also had a routine ultrasound scan, and when recto-vaginal or deep infiltrating endometriosis was suspected, an MRI was performed. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. Ages ranged from 17 to 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of endometriosis diagnosis predicted from symptoms based on the BSGE pain questionnaire, clinical examination, and imaging. Findings: The prevalence of diagnosed endometriosis was 76.8%, and the prevalence of advanced stage was 55.4%. Deep infiltrating endometriosis (DIE) in various locations was diagnosed in 32/56 women (57.1%), some with DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%. The positive likelihood ratio was 20.97 and the negative likelihood ratio 0.17, indicating that the model has good predictive value and could be useful in predicting advanced-stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator; future research would validate the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce the risks associated with surgery, and improve outcomes. It could be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess whether the initial results are reproducible.
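
For readers who want to reproduce the headline statistics, the snippet below recovers the reported sensitivity, specificity, and likelihood ratios from a 2x2 table. The exact cell counts are back-calculated assumptions (31 advanced-stage cases and 25 others, consistent with n = 56 and the 55.4% prevalence), not figures quoted in the paper.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical counts consistent with the reported performance
sens, spec, lr_p, lr_n = diagnostic_metrics(tp=26, fn=5, fp=1, tn=24)
print(f"sensitivity={sens:.2%}  specificity={spec:.2%}  "
      f"LR+={lr_p:.2f}  LR-={lr_n:.2f}")  # 83.87%, 96%, 20.97, 0.17
```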

Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound

Procedia PDF Downloads 339
16544 Application of Artificial Neural Network for Prediction of High Tensile Steel Strands in Post-Tensioned Slabs

Authors: Gaurav Sancheti

Abstract:

This study presents an application of Artificial Neural Networks (ANNs) to determining the quantity of High Tensile Steel (HTS) strands required in post-tensioned (PT) slabs. Various PT slab configurations were generated by varying the span and depth of the slab, and for each configuration the quantity of required HTS strands was recorded. ANNs with the backpropagation algorithm and varying architectures were developed, and their performance was evaluated in terms of Mean Square Error (MSE). The recorded data on the quantity of HTS strands served as the feeder database for training the developed ANNs. The networks were validated using various validation techniques. The results show that the proposed ANNs have great potential, with good prediction and generalization capability.
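
A minimal sketch of this workflow in Python is shown below: a small backpropagation-trained network maps (span, depth) to strand quantity and is scored by validation MSE. The data-generating relationship and network architecture are invented placeholders, since the paper's feeder database is not public.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical feeder database: span (m) and depth (mm) -> strand quantity.
rng = np.random.default_rng(0)
X = rng.uniform([6.0, 150.0], [12.0, 300.0], size=(200, 2))
y = 0.9 * X[:, 0] ** 2 / (X[:, 1] / 100.0) + rng.normal(0.0, 1.0, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Backpropagation-trained network; varying hidden_layer_sizes and comparing
# validation MSE mirrors the architecture study described in the abstract.
ann = MLPRegressor(hidden_layer_sizes=(8, 4), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("validation MSE:", mean_squared_error(y_te, ann.predict(X_te)))
```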

Keywords: artificial neural networks, back propagation, conceptual design, high tensile steel strands, post tensioned slabs, validation techniques

Procedia PDF Downloads 207
16543 Predicting Bridge Pier Scour Depth with SVM

Authors: Arun Goel

Abstract:

Prediction of the maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts are made to compute the local scour depth around a bridge pier in dimensional and non-dimensional form using linear regression, simple regression, and SVM (Poly and Rbf) techniques, along with a few conventional empirical equations. The outcome of this study suggests that SVM-based (Poly and Rbf) modeling can be employed as an alternative to linear regression, simple regression, and the conventional empirical equations for predicting the scour depth of bridge piers. The results also indicate that SVM (Poly and Rbf) performs better on the non-dimensional form of bridge pier scour than on the dimensional form.
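
The kernel comparison can be sketched in a few lines with scikit-learn's SVR, as below. The non-dimensional predictors and the synthetic relationship are hypothetical stand-ins for the paper's laboratory and field datasets.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Hypothetical non-dimensional scour data: flow intensity V/Vc, flow
# shallowness y/b, sediment coarseness b/d50; target is scour depth ds/b.
rng = np.random.default_rng(1)
X = rng.uniform(0.4, 1.2, size=(120, 3))
ds_b = 2.0 * np.tanh(X[:, 1]) * X[:, 0] + rng.normal(0.0, 0.05, 120)

for kernel in ("poly", "rbf"):
    model = SVR(kernel=kernel, C=10.0, epsilon=0.01)
    r2 = cross_val_score(model, X, ds_b, cv=5, scoring="r2").mean()
    print(f"SVR ({kernel}): mean cross-validated R^2 = {r2:.3f}")
```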

Keywords: modeling, pier scour, regression, prediction, SVM (Poly and Rbf kernels)

Procedia PDF Downloads 439
16542 Contribution in Fatigue Life Prediction of Composite Material

Authors: Mostefa Bendouba, Djebli Abdelkader, Abdelkrim Aid, Mohamed Benguediab

Abstract:

The damage evolution mechanism is one of the important focuses of fatigue behaviour investigations of composite materials and is also the foundation for predicting the fatigue life of composite structures in engineering applications. This paper is dedicated to a damage investigation of a composite material under two-block loading fatigue conditions. The loading-sequence effect and the influence of the cycle ratio of the first stage on the cumulative fatigue life are studied. Two loading sequences, i.e., high-to-low and low-to-high, are considered. The proposed damage indicator is connected cycle by cycle to the S-N curve, and the experimental results are in agreement with the model expectations. Several experimental studies are used to validate this proposition.
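
As a point of reference for what such damage indicators replace, the sketch below computes residual life for a high-to-low two-block sequence with the classical Palmgren-Miner linear rule on a Basquin S-N curve. The S-N constants are hypothetical, and this baseline deliberately ignores the sequence effect that the paper's cycle-by-cycle indicator is designed to capture.

```python
import numpy as np

def cycles_to_failure(stress, s1=1000.0, b=-0.12):
    """Basquin-type S-N curve N(S) = (S / s1)**(1/b) -- hypothetical constants."""
    return (stress / s1) ** (1.0 / b)

def two_block_residual_life(s_first, n_first, s_second):
    """Palmgren-Miner baseline: damage accumulated in block 1, then the
    cycles remaining in block 2. Miner's linear rule is sequence-blind,
    which is exactly the limitation the paper's indicator addresses."""
    d1 = n_first / cycles_to_failure(s_first)        # damage from block 1
    return (1.0 - d1) * cycles_to_failure(s_second)  # remaining cycles

# High-to-low sequence: 10,000 cycles at 300 MPa, then continue at 200 MPa
n_res = two_block_residual_life(300.0, 1.0e4, 200.0)
print(f"residual life at 200 MPa: {n_res:,.0f} cycles")
```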

Keywords: fatigue, damage accumulation, composite, evolution

Procedia PDF Downloads 486
16541 The Social Change Leadership Model for Administrators and Teachers Development in Northeast Thailand

Authors: D. Thawinkarn, S. Wongbutlee

Abstract:

The Social Change Leadership model is strongly aligned with the administration’s mission. This research aims to examine the elements of social change leadership, to build and develop leadership for social change, and to evaluate the effectiveness of a leadership development model for social change. The research had three phases: a model study based on in-depth interviews and survey research; drafting and creation of the model, which was verified by experts; and a trial of the model in schools. The results showed that administrators and teachers possess the elements of leadership for social change at a moderate level. Ranked in descending order, these elements are consciousness of self, common purpose, congruence, collaboration, commitment, citizenship, and controversy with civility. The model of leadership for social change comprises principles, objectives, content, and a workshop process. The trial results show that applying the leadership development model for social change to administrators and teachers leads to higher scores in leadership evaluation than before the intervention.

Keywords: leadership, social change model, organization, administrators

Procedia PDF Downloads 403
16540 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers

Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist

Abstract:

Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. As a health issue, dangerously high indoor radon arose within the 20th century to become the second leading cause of lung cancer. While 21st-century building metrics and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the second most radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) of radiation per week, whereas the Canadian Nuclear Safety Commission sets the maximum 5-year occupational limit for atomic workplace exposure at only 20 mSv. The situation is also deteriorating over time, with newer housing stocks containing higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels over the known past (1990-2020) and the projected future (2021-2050). The findings showed a gradual downward pattern in Sweden, whereas levels in Canada would continue to rise from high to higher over time. The main contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to human window-opening behaviour. Building codes must consider these factors to ensure adequate indoor ventilation and healthy living that can prevent lung cancer in non-smokers.
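
The forecasting step can be pictured as windowed sequence regression. Below is a minimal Keras sketch assuming a univariate series of annual mean radon levels; the synthetic data and network size are placeholders, not the multi-feature Canadian/Swedish survey data used in the study.

```python
import numpy as np
import tensorflow as tf

# Synthetic annual mean radon levels (Bq/m^3), framed as sliding windows --
# a stand-in for the survey features used in the study, not real data.
series = 100 + 1.5 * np.arange(60) + np.random.default_rng(0).normal(0, 5, 60)
window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

next_year = model.predict(series[-window:].reshape(1, window, 1), verbose=0)
print("projected next-year mean radon:", float(next_year[0, 0]))
```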

Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, Canada, Sweden

Procedia PDF Downloads 101
16539 Prediction of Heavy-Weight Impact Noise and Vibration of Floating Floor Using Modified Impact Spectrum

Authors: Ju-Hyung Kim, Dae-Ho Mun, Hong-Gun Park

Abstract:

When an impact is applied to a floating floor, the noise and vibration response in the high-frequency range is reduced effectively, while the response in the low-frequency range is amplified. This means that a floating floor can worsen the noise condition when a heavy-weight impact is applied. The amplified response is the result of interaction between the finishing layer (mortar plate) and the concrete slab. Because the impact force is not delivered directly to the concrete slab, the impact force waveform, and hence its spectrum, is changed. In this paper, the changed impact spectrum was derived from several floating-floor vibration tests. Based on the measured data, numerical modeling can describe the floating-floor response, especially in the low-frequency range. As a result, heavy-weight impact noise can be predicted using the modified impact spectrum.

Keywords: floating floor, heavy-weight impact, prediction, vibration

Procedia PDF Downloads 359
16538 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder

Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa

Abstract:

Earthquakes are natural hazards that can trigger the most dangerous hazard of all: tsunamis. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami that struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia, and Singapore, killing more than twenty thousand people. The occurrence of earthquakes and tsunamis cannot be avoided, but the hazard can be mitigated by earthquake forecasting, and early preparation is the key factor in reducing damage and consequences. We aim to investigate earthquake patterns quantitatively in order to identify trends, studying the earthquakes that occurred around Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity, with more than a thousand events occurring per year, because it lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed for mitigation action, and a tsunami forecasting method is therefore presented in this work. Neural networks have been used widely in research to estimate earthquakes, and it is established that earthquakes can be predicted using the backpropagation method. First, an ANN is trained to predict the tsunami of 26 December 2004 using the earthquake data preceding it. The trained ANN is then applied to predict subsequent earthquakes. Not every earthquake triggers a tsunami; certain characteristics of an earthquake determine whether it can cause one, and a wrong decision can create further problems for society. A method is therefore needed to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method widely used as a decision-support tool that ranks alternatives against given parameters, and it can make the best decision on whether an earthquake will cause a tsunami or not. This work combines earthquake prediction using the neural network method with Fuzzy TOPSIS to decide whether a predicted earthquake will trigger a tsunami wave. The neural network model is capable of capturing non-linear relationships, and Fuzzy TOPSIS determines the best decision better than other statistical methods in tsunami prediction.
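
The decision stage can be illustrated with the classical (crisp) TOPSIS ranking below; the fuzzy variant used in the paper replaces crisp scores with fuzzy numbers but keeps the same closeness-to-ideal logic. The criteria, weights, and event values are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classical (crisp) TOPSIS. Rows = alternatives (earthquake events),
    columns = criteria; benefit marks criteria where larger is worse-case
    (more tsunamigenic)."""
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalisation
    v = norm * weights                               # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness: higher = more tsunamigenic

# Hypothetical criteria: magnitude (benefit), focal depth in km (cost,
# shallower is more dangerous), and dip-slip component of the mechanism.
events = np.array([[9.1, 30.0, 0.9],
                   [7.2, 90.0, 0.3],
                   [8.0, 25.0, 0.7]])
score = topsis(events, weights=np.array([0.5, 0.2, 0.3]),
               benefit=np.array([True, False, True]))
print("tsunami-risk closeness:", score.round(3))
```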

Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami

Procedia PDF Downloads 475
16537 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data

Authors: Ruchika Malhotra, Megha Khanna

Abstract:

The development of change prediction models can help software practitioners plan testing and inspection resources in the early phases of software development. However, a major challenge faced during the training of any classification model is the imbalanced nature of software quality data. Data with very few instances of the minority outcome categories lead to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change-prone whereas the majority are non-change-prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics when dealing with imbalanced data. To empirically validate the different alternatives, the study uses change data from three application packages of the open-source Android dataset and evaluates the performance of six different machine learning techniques. The results indicate extensive improvement in the performance of the classification models when using resampling methods and robust performance measures.
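
A minimal sketch of the resampling-plus-robust-metrics idea is given below, using SMOTE from the imbalanced-learn package on synthetic data; the classifier, imbalance ratio, and features are placeholders rather than the Android change data used in the study.

```python
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for change data: ~10% of classes are change-prone
X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, (Xt, yt) in {
    "no resampling": (X_tr, y_tr),
    "SMOTE": SMOTE(random_state=0).fit_resample(X_tr, y_tr),
}.items():
    clf = RandomForestClassifier(random_state=0).fit(Xt, yt)
    # AUC and minority-class recall stay meaningful under imbalance,
    # unlike plain accuracy.
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    rec = recall_score(y_te, clf.predict(X_te))
    print(f"{name:14s} AUC={auc:.3f}  minority recall={rec:.3f}")
```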

Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics

Procedia PDF Downloads 408
16536 Predicting and Obtaining New Solvates of Curcumin, Demethoxycurcumin and Bisdemethoxycurcumin Based on the Ccdc Statistical Tools and Hansen Solubility Parameters

Authors: J. Ticona Chambi, E. A. De Almeida, C. A. Andrade Raymundo Gaiotto, A. M. Do Espírito Santo, L. Infantes, S. L. Cuffini

Abstract:

The solubility of active pharmaceutical ingredients (APIs) is challenging for the pharmaceutical industry. New multicomponent crystalline forms, such as cocrystals and solvates, present an opportunity to improve the solubility of APIs. Commonly, the procedure to obtain multicomponent crystalline forms of a drug starts by screening the drug molecule against different coformers/solvents. However, it is necessary to develop methods that obtain multicomponent forms efficiently and with the least possible environmental impact. The Hansen Solubility Parameters (HSPs) are a tool for gaining theoretical knowledge of the solubility of a target compound in a chosen solvent. H-Bond Propensity (HBP), Molecular Complementarity (MC), and Coordination Values (CV) are tools for the statistical prediction of cocrystals developed by the Cambridge Crystallographic Data Centre (CCDC). Both the HSPs and the CCDC tools are based on inter- and intra-molecular interactions. Curcumin (Cur), the target molecule, is commonly used as an anti-inflammatory. Demethoxycurcumin (Demcur) and bisdemethoxycurcumin (Biscur) are natural analogues of Cur from turmeric, and the three target molecules differ in their solubilities. This work aimed to analyze and compare different tools for predicting multicomponent forms (solvates) of Cur, Demcur, and Biscur. The HSP values were calculated for Cur, Demcur, and Biscur using chemical group contribution methods and statistical optimization from experimental data, with the HSPmol software. From the HSPs of the target molecules and fifty solvents (listed in the HSP books), the relative energy difference (RED) was determined. The probability that the target molecules would interact with each solvent molecule was determined using the CCDC tools. A dataset of fifty organic solvents was ranked for each prediction method and by consensus rankings of different combinations of the HSP, CV, HBP, and MC values. Based on the predictions, 15 solvents were selected, including Dimethyl Sulfoxide (DMSO), Tetrahydrofuran (THF), Acetonitrile (ACN), and 1,4-Dioxane (DOX). In an initial analysis, the slow evaporation technique, from 50°C, at room temperature, and at 4°C, was used to obtain solvates. The single crystals were collected using a Bruker D8 Venture diffractometer with a Photon100 detector, and data processing and crystal structure determination were performed using the APEX3 and Olex2-1.5 software. From these results, the HSPs (theoretical and optimized) and the Hansen solubility spheres for Cur, Demcur, and Biscur were obtained. The prediction analyses were evaluated through the ranking and consensus-ranking positions of solvates already reported in the literature, and the combination HSP-CV was observed to give the best results compared with the other methods. Furthermore, from the selected solvents, six new solvates (Cur-DOX, Cur-DMSO, Biscur-DOX, Biscur-THF, Demcur-DOX, Demcur-ACN) and a new Biscur hydrate were obtained. Crystal structures were determined for Cur-DOX, Biscur-DOX, Demcur-DOX, and Biscur-Water, and unit-cell parameters were obtained for Cur-DMSO, Biscur-THF, and Demcur-ACN. These preliminary results show that the prediction method is a promising strategy for evaluating the possibility of forming multicomponent crystals; work on obtaining further multicomponent single crystals is ongoing.
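
The RED screening step reduces to the standard Hansen distance formula, sketched below. The solvent HSPs are tabulated Hansen values, but the curcumin parameters and interaction radius in the example are placeholders, not the optimized values obtained in the study.

```python
import numpy as np

def red(api_hsp, api_r0, solvent_hsp):
    """Relative Energy Difference: RED = Ra / R0, with
    Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2 (units: MPa^0.5)."""
    dd, dp, dh = np.asarray(api_hsp) - np.asarray(solvent_hsp)
    ra = np.sqrt(4.0 * dd**2 + dp**2 + dh**2)
    return ra / api_r0

# Placeholder HSPs (dD, dP, dH) and interaction radius R0 for curcumin --
# NOT the optimized values from the study; solvent HSPs are the tabulated
# Hansen values.
cur, r0 = (20.0, 10.0, 12.0), 9.0
solvents = {"DMSO": (18.4, 16.4, 10.2), "THF": (16.8, 5.7, 8.0),
            "ACN": (15.3, 18.0, 6.1), "1,4-dioxane": (19.0, 1.8, 7.4)}
for name, hsp in sorted(solvents.items(), key=lambda s: red(cur, r0, s[1])):
    print(f"{name:12s} RED = {red(cur, r0, hsp):.2f}")  # RED < 1: good affinity
```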

Keywords: curcumin, HSPs, prediction, solvates, solubility

Procedia PDF Downloads 49
16535 SiC Merged PiN and Schottky (MPS) Power Diodes Electrothermal Modeling in SPICE

Authors: A. Lakrim, D. Tahri

Abstract:

This paper sets out a behavioral macro-model of a Merged PiN and Schottky (MPS) diode based on silicon carbide (SiC). The model holds for both static and dynamic electrothermal simulations for industrial applications. Its parameters were extracted from datasheet curves using the Simulated Annealing (SA) optimization method for SiC MPS diodes available in industry. The model is implemented in PSPICE using its Analog Behavioral Modeling (ABM) facilities. The thermal behavior of the device is also taken into consideration by means of a Foster canonical network, identified from electrothermal measurements provided by the device manufacturer.
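
The Foster network side of such a model amounts to a sum of RC exponentials for the transient thermal impedance. The short sketch below evaluates it in Python, assuming a hypothetical three-cell fit of a datasheet Zth curve; it mirrors, rather than reproduces, what the SPICE ABM blocks compute.

```python
import numpy as np

def foster_zth(t, r, tau):
    """Transient thermal impedance of a Foster canonical network:
    Zth(t) = sum_i R_i * (1 - exp(-t / tau_i))."""
    return float(np.sum(r * (1.0 - np.exp(-t / tau))))

# Hypothetical three-cell fit of a datasheet Zth curve (K/W, seconds)
r = np.array([0.15, 0.45, 0.60])
tau = np.array([1e-4, 1e-2, 0.5])

p_loss = 20.0  # W, a dissipation step applied to the diode
for t in (1e-4, 1e-2, 1.0):
    zth = foster_zth(t, r, tau)
    print(f"t={t:g} s  Zth={zth:.3f} K/W  junction rise={p_loss * zth:.1f} K")
```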

Keywords: SiC MPS diode, electro-thermal, SPICE model, behavioral macro-model

Procedia PDF Downloads 397
16534 Prediction of in situ Permeability for Limestone Rock Using Rock Quality Designation Index

Authors: Ahmed T. Farid, Muhammed Rizwan

Abstract:

Permeability is a highly important parameter in geotechnical studies for evaluating soil or rock. Permeability values are more difficult to determine for rock formations than for soils, as they depend on rock quality and the degree of fracturing. In this research, the in-situ permeability of limestone rock formations was predicted. The limestone permeability was evaluated using Lugeon tests (in-situ packer permeability) at different sites spread over the Riyadh region of Saudi Arabia. Correlations were deduced between the in-situ permeability values of the limestone and the rock quality designation (RQD) values calculated during drilling of the boreholes in the study areas. The study covered the different ranges of RQD values measured during drilling of the site boreholes. The developed correlations are recommended for the on-site determination of in-situ permeability of limestone rock only; for other sedimentary rock formations, more studies are needed to establish the correlations appropriate to each type.

Keywords: in situ, packer, permeability, rock quality

Procedia PDF Downloads 362
16533 Lee-Carter Mortality Forecasting Method with Dynamic Normal Inverse Gaussian Mortality Index

Authors: Funda Kul, İsmail Gür

Abstract:

Pension scheme providers have to price mortality risk using accurate mortality forecasting methods, and many such methods have been constructed and used in the literature. The Lee-Carter model was the first to consider stochastic improvement trends in life expectancy and is still widely used. In the Lee-Carter model, forecasting is carried out through the mortality index, which is conventionally assumed to follow an ARIMA time-series model. In this paper, we propose and use the dynamic normal inverse Gaussian distribution to model the mortality index in the Lee-Carter model. Using population mortality data for Italy, France, and Turkey, the model's forecasting capability is investigated, and a comparative analysis with other models is carried out using several well-known benchmarking criteria.
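
The Lee-Carter decomposition log m(x,t) = a_x + b_x k_t is conventionally estimated by an SVD of the centred log death rates; the sketch below shows that step on toy data. The study's contribution, replacing the ARIMA law for k_t with a dynamic normal inverse Gaussian process, happens after this fit and is only noted in the final comment.

```python
import numpy as np

def fit_lee_carter(m_xt):
    """Estimate a_x, b_x, k_t in log m(x,t) = a_x + b_x * k_t via the usual
    SVD of the centred log death rates (rows = ages, columns = years)."""
    log_m = np.log(m_xt)
    a_x = log_m.mean(axis=1)                    # average age pattern
    U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()               # normalise sum(b_x) = 1
    k_t = s[0] * Vt[0] * U[:, 0].sum()          # mortality index
    return a_x, b_x, k_t

# Toy data: mortality improving 2% per year at five ages over 30 years
rng = np.random.default_rng(0)
base = np.array([0.001, 0.002, 0.005, 0.02, 0.08])[:, None]
m = base * np.exp(-0.02 * np.arange(30)) * rng.lognormal(0.0, 0.02, (5, 30))

a_x, b_x, k_t = fit_lee_carter(m)
print("fitted mortality index k_t (first 5 years):", k_t[:5].round(2))
# The paper then models k_t with a dynamic normal inverse Gaussian process
# instead of ARIMA before extrapolating it forward.
```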

Keywords: mortality, forecasting, Lee-Carter model, normal inverse Gaussian distribution

Procedia PDF Downloads 345
16532 Combining the Dynamic Conditional Correlation and Range-GARCH Models to Improve Covariance Forecasts

Authors: Piotr Fiszeder, Marcin Fałdziński, Peter Molnár

Abstract:

The dynamic conditional correlation (DCC) model of Engle (2002) is one of the most popular multivariate volatility models. However, it is based solely on closing prices, even though it has been documented in the literature that the high and low prices of the day can be used for efficient volatility estimation. We therefore suggest a model that incorporates high and low prices into the dynamic conditional correlation framework. An empirical evaluation of this model is conducted on three datasets: currencies, stocks, and commodity exchange-traded funds. Using realized variances and covariances as proxies for the true variances and covariances, we reach the strong conclusion that our model outperforms not only the standard dynamic conditional correlation model but also a competing range-based dynamic conditional correlation model.
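
The extra information carried by the daily range is easy to demonstrate with the Parkinson (1980) estimator, which the range-based family of models builds on. The sketch below compares it with squared close-to-close returns; the high/low marks are synthetic rather than a true intraday path, so the levels are indicative only.

```python
import numpy as np

def parkinson_var(high, low):
    """Parkinson (1980) daily variance from the high-low range -- the extra
    information that range-based DCC models exploit."""
    return np.log(high / low) ** 2 / (4.0 * np.log(2.0))

def close_to_close_var(close):
    r = np.diff(np.log(close))
    return r ** 2

# Stylised prices with synthetic high/low marks
rng = np.random.default_rng(2)
n, sigma = 500, 0.01
close = np.exp(np.cumsum(rng.normal(0.0, sigma, n)))
high = close * np.exp(np.abs(rng.normal(0.0, sigma, n)))
low = close * np.exp(-np.abs(rng.normal(0.0, sigma, n)))

print("mean Parkinson variance:     ", parkinson_var(high, low).mean())
print("mean close-to-close variance:", close_to_close_var(close).mean())
```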

Keywords: volatility, DCC model, high and low prices, range-based models, covariance forecasting

Procedia PDF Downloads 169
16531 The Establishment of RELAP5/SNAP Model for Kuosheng Nuclear Power Plant

Authors: C. Shih, J. R. Wang, H. C. Chang, S. W. Chen, S. C. Chiang, T. Y. Yu

Abstract:

After the measurement uncertainty recapture (MUR) power uprate, the power of the Kuosheng nuclear power plant (NPP) was raised from 2894 MWt to 2943 MWt. For the power uprate, several codes (e.g., TRACE, RELAP5, etc.) were applied to assess the safety of Kuosheng NPP. Hence, the main work of this research is to establish a RELAP5/MOD3.3 model of Kuosheng NPP with the SNAP interface. The establishment of the RELAP5/SNAP model was based on the FSAR, training documents, and a TRACE model that had been developed and verified previously. After the model establishment was completed, startup test scenarios were applied to the RELAP5/SNAP model. By comparison with the startup test data and the TRACE analysis results, the applicability of the RELAP5/SNAP model was assessed.

Keywords: RELAP5, TRACE, SNAP, BWR

Procedia PDF Downloads 416
16530 The Evaluation of Current Pile Driving Prediction Methods for Driven Monopile Foundations in London Clay

Authors: John Davidson, Matteo Castelletti, Ismael Torres, Victor Terente, Jamie Irvine, Sylvie Raymackers

Abstract:

The current industry approach to pile driving predictions consists of developing a model of the hammer-pile-soil system which simulates the relationship between soil resistance to driving (SRD) and blow counts (or pile penetration per blow). The SRD methods traditionally used are broadly based on static pile capacity calculations. The SRD is used in combination with a one-dimensional wave equation model to indicate the anticipated blow counts with depth for specific hammer energy settings. This approach has predominantly been calibrated on the relatively long, slender piles used in the oil and gas industry, but is now being extended to allow calculations for relatively short, rigid, large-diameter monopile foundations. This paper evaluates the accuracy of current industry practice when applied to a site where large-diameter monopiles were installed in predominantly stiff fissured clay. Actual geotechnical and pile installation data, including pile driving records and signal matching analysis (based on pile driving monitoring techniques), were used for the assessment of the case study site.

Keywords: driven piles, fissured clay, London clay, monopiles, offshore foundations

Procedia PDF Downloads 211
16529 QoS-CBMG: A Model for e-Commerce Customer Behavior

Authors: Hoda Ghavamipoor, S. Alireza Hashemi Golpayegani

Abstract:

An approach to modelling customer interaction with e-commerce websites is presented. Considering the service quality level as a predictive feature, we offer an improved method based on the Customer Behavior Model Graph (CBMG), a state-transition graph model. To derive the Quality-of-Service-sensitive CBMG (QoS-CBMG) model, process-mining techniques are applied to pre-processed website server logs, categorized as ‘buy’ or ‘visit’ sessions. Experimental results on data from an e-commerce website confirm that the proposed method outperforms the CBMG-based method.
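
At its core, a CBMG is a transition-probability matrix over page-group states estimated from session logs. The sketch below builds one from hypothetical counts; the paper's QoS-CBMG additionally conditions these transitions on the observed service quality level.

```python
import numpy as np

# Minimal CBMG sketch: states are page groups, transition probabilities
# are estimated from pre-processed server-log sessions (hypothetical counts).
states = ["entry", "browse", "search", "cart", "buy", "exit"]
counts = np.array([
    [0, 60, 30, 0, 0, 10],    # from entry
    [0, 20, 25, 30, 0, 25],   # from browse
    [0, 35, 10, 25, 0, 30],   # from search
    [0, 10, 5, 0, 55, 30],    # from cart
    [0, 0, 0, 0, 0, 100],     # from buy
    [0, 0, 0, 0, 0, 0],       # exit is absorbing
], dtype=float)

row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print("P(browse -> cart) =", P[states.index("browse"), states.index("cart")])
```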

Keywords: customer behavior model, electronic commerce, quality of service, customer behavior model graph, process mining

Procedia PDF Downloads 399
16528 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

This research presents the design and creation of a drone gas analyser, aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit included two gas line inlets connected to glass wool filters, a pump with regulated flow controlled by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, as well as McCaffrey probes combined with pressure transducers to monitor air velocity and wind direction. These additions allow for monitoring of the large fire and can be used for predictions of fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.

Keywords: fire prediction, drone, smoke toxicity, analyser, fire management

Procedia PDF Downloads 73
16527 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (the exponential and logistic tumor growth models, and the Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the prediction accuracy of breast cancer progression using an original mathematical model referred to as CoMPaS and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by determinate nonlinear and linear equations, and the model corresponds to the TNM classification. It allows different growth periods of the primary tumor and the secondary distant metastases to be calculated: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for the secondary distant metastases; 3) the ‘visible period’ for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes its forecast using only current patient data, whereas the others also rely on additional statistical data. The CoMPaS model and predictive software: a) fit the clinical trial data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. CoMPaS calculates the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases, and the tumor volume doubling time (in days) for those periods. CoMPaS makes it possible, for the first time, to predict the ‘whole natural history’ of the primary tumor and the secondary distant metastases at each stage (pT1, pT2, pT3, pT4) relying only on the size of the primary tumor. In summary: a) CoMPaS correctly describes the primary tumor growth of the IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in the lymph nodes (N0); b) it facilitates understanding of the period of appearance and inception of the secondary distant metastases.
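
Under the exponential growth assumption that underlies CoMPaS, doubling counts and period lengths follow from elementary volume arithmetic. The sketch below illustrates that arithmetic for a spherical tumor; the starting size, detection threshold, and doubling time are illustrative numbers, not the model's fitted values.

```python
import numpy as np

def doublings_between(d_start_mm, d_end_mm):
    """Volume doublings for a sphere growing exponentially from diameter
    d_start to d_end (volume scales with diameter cubed)."""
    return 3.0 * np.log2(d_end_mm / d_start_mm)

def growth_period_days(d_start_mm, d_end_mm, doubling_time_days):
    return doublings_between(d_start_mm, d_end_mm) * doubling_time_days

# From a single cell (~0.01 mm) to a 10 mm detectable tumor: ~30 doublings
n = doublings_between(0.01, 10.0)
days = growth_period_days(0.01, 10.0, doubling_time_days=100.0)
print(f"{n:.0f} doublings, 'non-visible' period of about {days:.0f} days")
```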

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 332
16526 Model Based Simulation Approach to a 14-Dof Car Model Using Matlab/Simulink

Authors: Ishit Sheth, Chandrasekhar Jinendran, Chinmaya Ranjan Sahu

Abstract:

A fourteen-degree-of-freedom (DOF) ride and handling mathematical model is developed for a car using the generalized Boltzmann-Hamel equation, which creates a basis for the design of a ride and handling controller. The mathematical model yields the equations of motion for non-holonomically constrained systems in quasi-coordinates, and the governing differential equations integrate the ride and handling control of the car. A model-based systems engineering approach is implemented for simulation using MATLAB/Simulink; the vehicle’s response in the different DOFs is examined and later validated using commercial software (ADAMS). This manuscript involves a detailed derivation of the full-car vehicle model, which provides the response in longitudinal, lateral, and yaw motion, to demonstrate the advantages of the developed model over the existing dynamic model. The dynamic behaviour of the developed ride and handling model is simulated for different road conditions.

Keywords: full vehicle model, MBSE, non-holonomic constraints, Boltzmann-Hamel equation

Procedia PDF Downloads 201
16525 Comprehensive Risk Assessment Model in Agile Construction Environment

Authors: Jolanta Tamošaitienė

Abstract:

The article focuses on a comprehensive model for risk assessment and selection, based on multi-attribute methods, to be used in an agile construction environment. The model rests on a multi-attribute evaluation of risk in construction, with the optimality criterion values calculated using complex Multiple Criteria Decision-Making methods; it may be further applied to risk assessment in an agile construction environment. The attributes of risk in a construction project are selected by applying the risk assessment conditions of the construction sector, with construction process efficiency in the industry accounting for the agile environment. The paper presents the comprehensive risk assessment model for an agile construction environment, providing the background, a description of the proposed model and its criteria, and the analysis developed with it.

Keywords: assessment, environment, agile, model, risk

Procedia PDF Downloads 242
16524 An Online Priority-Configuration Algorithm for Obstacle Avoidance of the Unmanned Air Vehicles Swarm

Authors: Lihua Zhu, Jianfeng Du, Yu Wang, Zhiqiang Wu

Abstract:

Collision avoidance problems for a swarm of unmanned air vehicles (UAVs) flying in an obstacle-laden environment are investigated in this paper. Given that the UAV swarm needs to adapt to the obstacle distribution during dynamic operation, a priority configuration is designed to guide the UAVs through the obstacles in turn. Based on the collision cone approach and the prediction of the collision time, a collision evaluation model is established to judge the urgency of the imminent collision of each UAV, and the evaluation result is used to assign each UAV’s priority and instruct the swarm to pass through the obstacles in descending order of urgency. Finally, the simulation results provide promising validation of the efficiency and scalability of the proposed approach.
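
The urgency evaluation can be pictured with the standard collision-cone test plus time-to-closest-approach, as in the sketch below; the geometry, safety radius, and UAV states are hypothetical, and the paper's full evaluation model may weight additional terms.

```python
import numpy as np

def collision_urgency(p_rel, v_rel, safe_radius):
    """Collision-cone style check with predicted time to closest approach.
    Returns (imminent, t_cpa): imminent is True when the relative velocity
    points inside the cone subtended by the safety disc."""
    t_cpa = -np.dot(p_rel, v_rel) / np.dot(v_rel, v_rel)  # closest approach
    miss = np.linalg.norm(p_rel + t_cpa * v_rel)          # miss distance
    return (t_cpa > 0) and (miss < safe_radius), t_cpa

# Two UAVs relative to one obstacle: the smaller positive t_cpa is the more
# urgent collision, so that UAV gets the higher avoidance priority.
for name, p, v in [("UAV-1", (40.0, 5.0), (-10.0, 0.0)),
                   ("UAV-2", (60.0, 2.0), (-6.0, 0.0))]:
    imminent, t = collision_urgency(np.array(p), np.array(v), safe_radius=8.0)
    print(f"{name}: imminent={imminent}, t_cpa={t:.1f} s")
```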

Keywords: UAV swarm, collision avoidance, complex environment, online priority design

Procedia PDF Downloads 200
16523 Formal Verification of Cache System Using a Novel Cache Memory Model

Authors: Guowei Hou, Lixin Yu, Wei Zhuang, Hui Qin, Xue Yang

Abstract:

Formal verification is proposed to ensure the correctness of the design and to make functional verification more efficient. Since the cache plays a vital role in System on Chip (SoC) design, and a cache with a Memory Management Unit (MMU) and cache memory unit makes the state space too large to verify by simulation, a formal verification approach is presented for such system designs. In this paper, a formal model-checking verification flow is suggested, and a new cache memory model, called the “exhaustive search model”, is proposed. Instead of using a large RAM to represent the whole cache memory, the exhaustive search model employs just two cache blocks. For a cache system containing a data cache (Dcache) and an instruction cache (Icache), the Dcache and Icache memory models are established separately using the same mechanism. Finally, the novel model is employed in the verification of the cache module of a custom-built SoC system that has been applied in practice. The results show that the cache system is verified correctly using the exhaustive search model, which makes the verification much more manageable and flexible.

Keywords: cache system, formal verification, novel model, system on chip (SoC)

Procedia PDF Downloads 481
16522 Development of Simple-To-Apply Biogas Kinetic Models for the Co-Digestion of Food Waste and Maize Husk

Authors: Owamah Hilary, O. C. Izinyon

Abstract:

Many existing biogas kinetic models are substrate-specific and therefore difficult to apply to substrates they were not developed for. A biodegradability kinetic (BIK) model and a maximum biogas production potential and stability assessment (MBPPSA) model were therefore developed in this study for the anaerobic co-digestion of food waste and maize husk. The biodegradability constant (k) was estimated as 0.11 d⁻¹ using the BIK model. The maximum biogas production potential (A) obtained using the MBPPSA model corresponded well with the results obtained using the popular but complex modified Gompertz model for digesters B-1, B-2, B-3, B-4, and B-5. The (If) value of the MBPPSA model also showed that digesters B-3, B-4, and B-5 were stable, while B-1 and B-2 were unstable; a similar stability observation was obtained using the modified Gompertz model. The MBPPSA model can therefore be used as an alternative model for anaerobic digestion feasibility studies and plant design.
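
For reference, the modified Gompertz benchmark against which the MBPPSA results were checked can be fitted in a few lines, as sketched below on synthetic cumulative-yield data; the parameter values and noise level are invented, not the digester measurements from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, A, Rm, lam):
    """Cumulative biogas yield: A = production potential (mL/gVS),
    Rm = maximum production rate (mL/gVS/d), lam = lag phase (d)."""
    return A * np.exp(-np.exp(Rm * np.e / A * (lam - t) + 1.0))

# Hypothetical daily cumulative yields from one co-digestion digester
t = np.arange(0.0, 30.0, 1.0)
y = modified_gompertz(t, 420.0, 35.0, 2.5) \
    + np.random.default_rng(0).normal(0.0, 5.0, t.size)

(A, Rm, lam), _ = curve_fit(modified_gompertz, t, y, p0=[400.0, 30.0, 2.0])
print(f"A = {A:.0f} mL/gVS, Rm = {Rm:.1f} mL/gVS/d, lag = {lam:.1f} d")
```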

Keywords: biogas, inoculum, model development, stability assessment

Procedia PDF Downloads 412
16521 Reflection on Using Bar Model Method in Learning and Teaching Primary Mathematics: A Hong Kong Case Study

Authors: Chui Ka Shing

Abstract:

This case study research examines the use of the Bar Model Method in learning and teaching mathematics in a primary school in Hong Kong. The objectives of the study are to find out to what extent (a) the Bar Model Method enhances the construction of students’ mathematics concepts, and (b) school-based mathematics curriculum development can adopt the Bar Model Method. The case study illuminates the effectiveness of using the Bar Model Method to solve mathematics problems from Primary 1 to Primary 6. Effective pedagogies and assessments were developed to strengthen the use of the Bar Model Method across year levels. Suggestions, including school-based curriculum development for using the Bar Model Method and directions for further study, are discussed.

Keywords: bar model method, curriculum development, mathematics education, problem solving

Procedia PDF Downloads 207
16520 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to ensure SCS transparency, security, durability, and process integrity where SCS data are not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem for organizations, and they must be estimated because they can affect existing cost-control strategies. The problem is that the costs of developing and running a BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total cost of the SCS, and predicting it may help managers decide whether BT offers an economic advantage. The first purpose of this research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis, and to categorize the main groups of cost components in more detail for use in the prediction process. The second objective is to determine a suitable supervised learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running cost of BT enters the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, supervised learning is a method of framing the data, treating it, and training the chosen model; it is a learning approach directed at predicting an outcome measurement from previously unseen input data. The following steps are conducted to pursue the objectives. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, supervised learning methods suitable for BT installation cost prediction in an SCS are chosen; according to the review, algorithms that provide a powerful tool for classifying BT installation components and predicting BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). The third step is choosing a case study to feed data into the models. Finally, we propose the model with the best predictive performance to find the minimum BT installation costs in an SCS. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in an SCS with the help of supervised learning algorithms. We will first select a case study in the field of BT-enabled SCS and then use supervised learning algorithms to predict the BT installation cost. We continue by finding the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 110
16519 Investigation of Single Particle Breakage inside an Impact Mill

Authors: E. Ghasemi Ardi, K. J. Dong, A. B. Yu, R. Y. Yang

Abstract:

In the current work, a numerical model based on the discrete element method (DEM) was developed to provide information about particle dynamics and impact event conditions inside a laboratory-scale impact mill (Fritsch). It showed that each particle mostly experiences three impacts inside the mill. While the first impact frequently happens at the front surface of the rotor’s rib, the second impact most frequently occurs on the side surfaces of the rotor’s rib. It was also shown that while the first impact happens at a small impact angle, mostly around 35°, the second impact happens at around 70°, which is close to a normal impact condition. Analysis of the impact energy revealed that, as the mill speed varied from 6000 to 14000 rpm, the ratio of the first impact’s average impact energy to the minimum energy required to break a particle (Wₘᵢₙ) increased from 0.30 to 0.85. Moreover, the second impact was seen to impose intense impact energy on the particle, which can be considered the main cause of particle splitting. Finally, the information obtained from the DEM simulations, along with data from the conducted experiments, was implemented in semi-empirical equations to find the selection and breakage functions. Then, using a back-calculation approach, these parameters were used to predict the PSDs of the ground particles under different impact energies. The results were compared with the experimental results and showed reasonable accuracy and prediction ability.

Keywords: single particle breakage, particle dynamic, population balance model, particle size distribution, discrete element method

Procedia PDF Downloads 277