Search results for: vector error correction model (VECM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18897

15237 Mine Project Evaluations in the Rising of Uncertainty: Real Options Analysis

Authors: I. Inthanongsone, C. Drebenstedt, J. C. Bongaerts, P. Sontamino

Abstract:

The major concern in evaluating mining projects relates to the deficiency of the traditional discounted cash flow (DCF) method. This method does not take uncertainties into account and hence does not allow for an economic assessment of managerial flexibility and operational adaptability, which increasingly determine long-term corporate success. Such an assessment can be performed with the real options valuation (ROV) approach, since it allows for a comparative evaluation of unforeseen uncertainties over a project's life cycle. This paper presents an economic evaluation model for open pit mining projects based on the real options valuation approach. Uncertainties in the model arise from metal prices and costs, and the system dynamics (SD) modeling method is used to structure and solve the real options model. The model is applied to a case study. It can be shown that managerial flexibility in reacting to uncertainties may create additional value for a mining project in comparison to the outcomes of a DCF method. One important insight for management dealing with uncertainty is choosing the optimal time to exercise strategic options.
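The core ROV intuition in this abstract, that flexibility under price uncertainty adds value beyond a static DCF figure, can be sketched with a standard binomial lattice for an option to defer investment. This is a generic illustration with made-up numbers, not the authors' system dynamics model or case-study data:

```python
import math

def defer_option_value(v0, cost, sigma, r, years, steps):
    """Value of the option to pay `cost` now or later for a project worth v."""
    dt = years / steps
    u = math.exp(sigma * math.sqrt(dt))    # up factor for project value
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option payoffs at the final step
    payoff = [max(v0 * u**j * d**(steps - j) - cost, 0.0)
              for j in range(steps + 1)]
    # roll back: at each node choose max(invest now, keep waiting)
    for step in range(steps - 1, -1, -1):
        payoff = [max(v0 * u**j * d**(step - j) - cost,
                      disc * (p * payoff[j + 1] + (1 - p) * payoff[j]))
                  for j in range(step + 1)]
    return payoff[0]

dcf_npv = 100.0 - 95.0   # static DCF: invest now, no flexibility
rov = defer_option_value(v0=100.0, cost=95.0, sigma=0.3, r=0.05,
                         years=2.0, steps=50)
print(round(dcf_npv, 2), round(rov, 2))  # flexibility adds value over static NPV
```

The gap between the two numbers is the value of managerial flexibility; it grows with volatility, which is the abstract's central point.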

Keywords: DCF methods, ROV approach, system dynamics modeling methods, uncertainty

Procedia PDF Downloads 506
15236 Factors Influencing the Choice of Multi-Month Drug Dispensing Model Amongst Children and Adolescents Living with HIV (C/ALHIV) in Eswatini

Authors: Mbuso Siwela

Abstract:

Background: The Sub-Saharan Africa region has the greatest number of people eligible to receive antiretroviral treatment (ART). Multi-month drug dispensing (MMD) of ART aims to reduce patient-related barriers to accessing long-term treatment and to improve health system efficiency. In Eswatini, however, few children and adolescents are on MMD. Young Heroes is implementing an HIV program that aims to avert new HIV infections in children and youth and to improve treatment outcomes for children and adolescents living with HIV (C/ALHIV: 0-19 years) and OVC caregivers, through HIV prevention and impact mitigation interventions that prevent new infections and reduce vulnerability. Aim of the study: The study aimed to ascertain factors associated with the assignment of the MMD model to C/ALHIV. Methodology: The project provides treatment adherence support through well-trained community cadres (Home Visitors - HVs) at both community and health facility levels. During door-to-door visits, HVs track all C/ALHIV enrolled in the project monthly and refer any who might have stopped or interrupted treatment. C/ALHIV with an unsuppressed viral load are supported through case conferencing and teen clubs. A quantitative cross-sectional analysis was conducted in STATA for children and adolescents living with HIV enrolled in the project. Bivariate analysis was conducted, and a logistic regression model was used to ascertain the effect of duration on ART on the choice of MMD model. Results: Data for 544 C/ALHIV (0-19 years) were analyzed in STATA. Results show a strong association between duration on ART, age, and teen club membership on the one hand and enrolment in an MMD model on the other. Duration on ART is a major predictor of the choice of MMD model (95% CI: 0.0012905 – 0.0039812; p < 0.0001). C/ALHIV who have been on ART for less than a year are less likely to be on MMD. C/ALHIV who have been on ART for one or more years are more likely to be on 3-month dispensing, while those on ART for five years or more are most likely to be on the 6-month model.
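The logistic-regression step described above can be sketched as follows. This is not the authors' STATA code; it fits the same kind of model on synthetic data that imitates the reported pattern (longer duration on ART raising the odds of MMD enrolment):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 544                                                   # same sample size as the study
duration_months = rng.integers(0, 120, n).astype(float)   # time on ART
age = rng.integers(0, 20, n).astype(float)
teen_club = rng.integers(0, 2, n).astype(float)

# simulate the reported pattern: longer on ART -> higher odds of MMD enrolment
logit = -2.0 + 0.03 * duration_months + 0.05 * age + 0.8 * teen_club
on_mmd = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([duration_months, age, teen_club])
model = LogisticRegression(max_iter=1000).fit(X, on_mmd)
coef_duration = model.coef_[0][0]
print(f"duration-on-ART coefficient: {coef_duration:.4f}")  # positive, as reported
```

A positive coefficient on duration reproduces the study's headline finding in direction only; the magnitudes here are arbitrary.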

Keywords: C/ALHIV, OVC, HIV, treatment

Procedia PDF Downloads 50
15235 Quantitative Structure Activity Relationship Model for Predicting the Aromatase Inhibition Activity of 1,2,3-Triazole Derivatives

Authors: M. Ouassaf, S. Belaidi

Abstract:

Aromatase is an estrogen biosynthetic enzyme belonging to the cytochrome P450 family, which catalyzes the rate-limiting step in the conversion of androgens to estrogens and is therefore relevant to the promotion of tumor cell growth. A set of thirty 1,2,3-triazole derivatives was used in a quantitative structure activity relationship (QSAR) study using multiple linear regression (MLR). We divided the data into a training set and a test set. The results showed a good predictive ability of the MLR model: the model was statistically robust internally (R² = 0.982), and its predictability was tested by several parameters, including external criteria (R²pred = 0.851, CCC = 0.946). The knowledge gained in this study should provide relevant information on the origins of aromatase inhibitory activity and, therefore, facilitate our ongoing quest for aromatase inhibitors with robust properties.
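The train/test MLR workflow the abstract describes can be sketched in a few lines. The descriptors and activities below are synthetic stand-ins, not the 1,2,3-triazole data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))                     # 30 compounds, 4 descriptors
true_w = np.array([1.2, -0.8, 0.5, 0.3])
y = X @ true_w + rng.normal(scale=0.1, size=30)  # pIC50-like activity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
mlr = LinearRegression().fit(X_tr, y_tr)
r2_train = r2_score(y_tr, mlr.predict(X_tr))     # internal robustness
r2_pred = r2_score(y_te, mlr.predict(X_te))      # external predictivity
print(round(r2_train, 3), round(r2_pred, 3))
```

Reporting both an internal R² and an external R²pred, as the paper does, guards against a model that merely memorizes the training compounds.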

Keywords: aromatase inhibitors, QSAR, MLR, 1,2,3-triazole

Procedia PDF Downloads 117
15234 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery

Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori

Abstract:

Unmanned Aircraft Systems (UAS) usually navigate using the Global Navigation Satellite System (GNSS) combined with an Inertial Navigation System (INS). However, GNSS accuracy can degrade at any time, or the GNSS signal can be lost entirely. In addition, there is the possibility of malicious interference, known as jamming. An image navigation system can solve this autonomy problem: if GNSS is disabled or degraded, the image navigation system continues to provide coordinate information to the INS, preserving the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference representing the object space, and photographs obtained during the flight representing the image space. To calculate the coordinates of the perspective center and the camera attitude, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and line of the photograph). Thus, if the homologous points can be identified automatically in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors on the order of 0.5 m in positioning and 0.6° in camera attitude were verified, so navigation through imagery can reach accuracies equal to or better than GNSS receivers without differential correction. Therefore, navigating through imagery is a good alternative for enabling autonomous navigation.

Keywords: autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS

Procedia PDF Downloads 195
15233 Assessing Readiness Model for Business Intelligence Implementation in Organization

Authors: Abdul Razak Rahmat, Azizah Ahmad, Azman Ta’aa

Abstract:

The deployment of Business Intelligence (BI) in an organization is very crucial at the beginning phase. Previous studies found that more than half of BI projects fail to meet their objectives even though a lot of money is spent. Given that problem, it is important to identify the organization's BI readiness level in order to reduce the risk before the actual BI project is implemented. In this paper, a rigorous literature review of success factors such as Critical Success Factors (CSFs), Readiness Factors (RFs), and Success Factors (SFs), as discussed by different authors, is presented. The paper also adopts several models from previous studies as a guide for the assessment of BI readiness. The expected outcome of this research is the Business Intelligence Readiness Model (BiRM), a guide to be applied before implementing a BI system.

Keywords: business intelligence readiness model, business intelligence for higher learning, BI readiness factors, BI critical success factors (CSF)

Procedia PDF Downloads 377
15232 In vitro and in vivo Anticancer Activity of Nanosize Zinc Oxide Composites of Doxorubicin

Authors: Emma R. Arakelova, Stepan G. Grigoryan, Flora G. Arsenyan, Nelli S. Babayan, Ruzanna M. Grigoryan, Natalia K. Sarkisyan

Abstract:

Novel nanosize zinc oxide composites of doxorubicin, obtained by deposition of a 180 nm thick zinc oxide film on the drug surface using DC-magnetron sputtering of a zinc target, in the form of gels (PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO), were studied for drug delivery applications. Cancer specificity was revealed in both in vitro and in vivo models. The cytotoxicity of the test compounds was analyzed against human cancer (HeLa) and normal (MRC5) cell lines using the MTT colorimetric cell viability assay. IC50 values were determined and compared to reveal the cancer specificity of the test samples. The mechanism of the most active compound was investigated using flow cytometric analysis of DNA content after PI (propidium iodide) staining. Data were analyzed with Tree Star FlowJo software using the Dean-Jett-Fox cell cycle analysis module. The in vivo anticancer activity experiments were carried out on mice with inoculated ascitic Ehrlich's carcinoma at intraperitoneal introduction of doxorubicin and its zinc oxide compositions. It was shown that nanosize zinc oxide film deposition on the drug surface leads to selective anticancer activity of the composites at the cellular level, with selectivity indices (SI) ranging from 4 (Starch+NaCMC+Dox+ZnO) to 200 (PEO(gel)+Dox+ZnO), the latter higher than that of free Dox (SI = 56). A significant increase in in vivo antitumor activity (by a factor of 2-2.5) and a decrease in the general toxicity of the zinc oxide compositions of doxorubicin in the form of the above-mentioned gels, compared to free doxorubicin, were shown on the model of inoculated Ehrlich's ascitic carcinoma. Mechanistic studies of the anticancer activity revealed a cytostatic effect based on a high level of DNA biosynthesis inhibition at considerably low concentrations of the zinc oxide compositions of doxorubicin. The results of the in vitro and in vivo studies of the PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO composites confirm the high potential of nanosize zinc oxide composites as a vector delivery system for future application in cancer chemotherapy.

Keywords: anticancer activity, cancer specificity, doxorubicin, zinc oxide

Procedia PDF Downloads 416
15231 Assessing Available Power from a Renewable Energy Source in the Southern Hemisphere using Anisotropic Model

Authors: Asowata Osamede, Trudy Sutherland

Abstract:

The purpose of this paper is to assess the available power from a renewable energy source (an off-grid photovoltaic (PV) panel) in the Southern Hemisphere using an anisotropic model. Direct solar radiation is the driving force in photovoltaics: the PV cells convert solar energy into electrical energy. In this research, results were determined for a 6-month period from September 2022 through February 2023. Preliminary results, which include a normal probability plot, data analysis (R² value), effective conversion time per week and work time per day, indicate a favorable comparison between the empirical results and the simulation results.
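For readers unfamiliar with anisotropic sky models, the following sketch shows the general form of one such model (a Hay-Davies-type split of diffuse irradiance into circumsolar and isotropic parts). The abstract does not specify which anisotropic model the paper uses, and all input values here are illustrative:

```python
import math

def anisotropic_diffuse(dhi, dni, i0, tilt_deg, rb):
    """Diffuse irradiance on a tilted plane (W/m^2), Hay-Davies form.
    dhi: diffuse horizontal, dni: direct normal, i0: extraterrestrial normal,
    tilt_deg: panel tilt, rb: beam geometry factor cos(aoi)/cos(zenith)."""
    a = dni / i0                                      # anisotropy index
    iso = (1 + math.cos(math.radians(tilt_deg))) / 2  # isotropic sky-view factor
    return dhi * (a * rb + (1 - a) * iso)             # circumsolar + isotropic

# illustrative single instant: 30-degree tilt, fairly clear sky
d_tilt = anisotropic_diffuse(dhi=120.0, dni=700.0, i0=1361.0,
                             tilt_deg=30.0, rb=1.1)
print(round(d_tilt, 1))
```

Under clear skies the anisotropy index approaches 1 and the circumsolar term dominates, which is why anisotropic models track direct-radiation-driven PV output better than a purely isotropic sky assumption.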

Keywords: power-conversion, mathematical model, PV panels, DC-DC converters, direct solar radiation

Procedia PDF Downloads 95
15230 Knowledge Sharing Model Based on Individual and Organizational Factors Related to Faculty Members of University

Authors: Mitra Sadoughi

Abstract:

This study presents a knowledge-sharing model based on individual and organizational factors related to faculty members. To achieve this goal, individual and organizational factors were identified through qualitative research in the form of open, axial, and selective coding; then, the final model was obtained using structural equation modeling. Participants included 1,719 faculty members of the Azad Universities, Mazandaran Province, Region 3. The sample for the qualitative survey included 25 faculty members experienced in teaching, and the sample for the quantitative survey included 326 faculty members selected by multistage cluster sampling. A 72-item questionnaire was used to measure the quantitative variables. The reliability of the questionnaire was 0.93, and its content and face validity were determined with the help of faculty members, consultants, and other experts. SPSS and LISREL were used for the analysis of the quantitative data obtained from the structural model and regression. The results showed that the status of knowledge sharing is moderate in the universities. Individual factors influencing knowledge sharing included the sharing of educational materials, perception, confidence, and knowledge self-efficacy; organizational factors included structural social capital, cognitive social capital, relational social capital, organizational communication, organizational structure, organizational culture, IT infrastructure, and reward systems. Finally, it was found that individual factors contributed more to knowledge sharing than organizational factors; therefore, a model was presented in which the contributions of individual and organizational factors were determined.

Keywords: knowledge sharing, social capital, organizational communication, knowledge self-efficacy, perception, trust, organizational culture

Procedia PDF Downloads 395
15229 A Finite Element Model to Study the Behaviour of Corroded Reinforced Concrete Beams Repaired with near Surface Mounted Technique

Authors: B. Almassri, F. Almahmoud, R. Francois

Abstract:

The near surface mounted (NSM) reinforcement technique is one of the promising techniques used nowadays to strengthen reinforced concrete (RC) structures. In the NSM technique, Carbon Fibre Reinforced Polymer (CFRP) rods are placed inside pre-cut grooves and bonded to the concrete with epoxy adhesive. This paper studies the non-classical failure mode of separation of the concrete cover, based on experimental results and numerical finite element (FE) modelling. Experimental results, together with results from a 3D FE model built in the commercial software Abaqus and a 2D FE model built in FEMIX, were obtained on two beams, one corroded (25 years of a corrosion procedure) and one control (A1CL3-R and A1T-R); each was repaired in bending using an NSM CFRP rod and then tested up to failure. The results showed that the NSM technique increased the overall capacity of both control and corroded beams, despite the non-classical failure mode of concrete cover separation occurring in the corroded beam due to corrosion-induced damage. Another FE model added external steel stirrups around the repaired corroded beam A1CL3-R, which had failed by separation of the concrete cover; this model showed a change in failure mode from the non-classical separation of the concrete cover to the failure mode of the repaired control beam, crushing of the compressed concrete.

Keywords: corrosion, repair, reinforced concrete, FEM, CFRP, FEMIX

Procedia PDF Downloads 170
15228 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, with early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data, from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered as they could have an impact on the influence of patents.
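The SHAP attributions used in the study have a simple closed form for linear models, which makes the idea easy to illustrate without the library: the Shapley value of feature i is w_i(x_i − E[x_i]). The toy example below, with synthetic stand-ins for patent features, checks SHAP's local-accuracy property (base value plus contributions equals the prediction):

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic early indicators, e.g. independent claims, backward citations, owners
X = rng.poisson(lam=[10.0, 8.0, 2.0], size=(500, 3)).astype(float)
w = np.array([0.5, 0.3, 0.2])          # weights of a linear "impact" model
b = 1.0

def predict(M):
    return M @ w + b

base_value = predict(X).mean()         # expected model output over the dataset
x = X[0]                               # one patent to explain
phi = w * (x - X.mean(axis=0))         # exact Shapley values for a linear model

print("feature contributions:", np.round(phi, 3))
# local accuracy: base value + contributions reproduces the prediction
print(bool(np.isclose(base_value + phi.sum(), predict(x))))
```

For a gradient-boosted model such as LightGBM there is no closed form, and the SHAP library computes the attributions with TreeSHAP instead, but the same additivity check applies.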

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 53
15227 A Model of Applied Psychology Research Defining Community Participation and Collective Identity as a Major Asset for Strategic Planning and Political Decision: The Project SIA (Social Inclusion through Accessibility)

Authors: Rui Serôdio, Alexandra Serra, José Albino Lima, Luísa Catita, Paula Lopes

Abstract:

We will present the outline of the Project SIA (Social Inclusion through Accessibility), focusing on one of its core components: how our applied research model contributes to defining community participation as a pillar of the strategic and political agenda amongst local authorities. Project SIA, supported by EU regional funding, was designed as part of a broader model developed by SIMLab (Social Inclusion Monitoring Laboratory), in which the University-Community relation is a core element. The project illustrates how the University of Porto developed a large-scale applied psychology research project in close partnership with 18 municipalities, covering almost all regions of Portugal, and with a private architecture firm specialized in inclusive accessibility and “design for all”. Three fundamental goals were defined: (1) creation of a model that would promote the effective civic participation of local citizens; (2) the “voice” of such participation should be both individual and collective; (3) the scientific and technical framework should serve as one of the bases for political decisions on inclusive accessibility local planning. The two main studies were run under a standardized model across all municipalities, and the samples for the three modalities of community participation were the following: individual participation based on 543 semi-structured interviews and 6,373 questionnaires; collective participation based on group sessions with 302 local citizens. We present some of the broader findings of Project SIA and discuss how they relate to our applied research model.

Keywords: applied psychology, collective identity, community participation, inclusive accessibility

Procedia PDF Downloads 454
15226 Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images

Authors: Jie Huo, Jonathan Wu

Abstract:

Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment, and assessment. As a method of incorporating prior knowledge and the anatomical structure similarity between a target image and atlases, multi-atlas segmentation has been successfully applied to a variety of medical images, including brain, cardiac, and abdominal images. The basic idea of multi-atlas segmentation is to transfer the labels in the atlases to the coordinates of the target image by matching each target patch to atlas patches in its neighborhood. However, this technique is limited by the pairwise registration between target image and atlases. In this paper, a novel multi-atlas segmentation approach is proposed by introducing a dynamic energy model. First, the target is mapped to each atlas image by minimizing the dynamic energy function; then the segmentation of the target image is generated by weighted fusion based on the energy. The method is tested on the MICCAI 2012 Multi-Atlas Labeling Challenge dataset, which includes 20 target images and 15 atlas images. The paper also analyzes the influence of different parameters of the dynamic energy model on segmentation accuracy and measures the Dice coefficient obtained using different feature terms with the energy model. The highest mean Dice coefficient obtained with the proposed method is 0.861, which is competitive with recently published methods.
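Two ingredients of the approach, energy-weighted label fusion and the Dice score used to evaluate it, can be sketched on toy 1-D data. The weighting rule exp(−βE) is an illustrative assumption, not necessarily the paper's exact fusion weight:

```python
import numpy as np

def weighted_fusion(atlas_labels, energies, beta=1.0):
    """Fuse binary atlas labels with weights exp(-beta * energy)."""
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    w /= w.sum()
    votes = (w[:, None] * np.asarray(atlas_labels)).sum(axis=0)
    return (votes >= 0.5).astype(int)

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

truth = np.array([0, 1, 1, 1, 0, 0, 1, 0])
atlases = np.array([[0, 1, 1, 1, 0, 0, 1, 0],   # low energy: good match
                    [0, 1, 1, 0, 0, 1, 1, 0],
                    [1, 0, 1, 1, 1, 0, 0, 0]])  # high energy: poor match
energies = [0.2, 1.0, 3.0]                      # lower = better atlas-target fit
fused = weighted_fusion(atlases, energies)
print("dice vs truth:", round(dice(fused, truth), 3))
```

Because the worst-matching atlas gets a tiny weight, the fused labels follow the well-registered atlases, which is the point of energy-based weighting.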

Keywords: brain MRI segmentation, dynamic energy model, multi-atlas segmentation, energy minimization

Procedia PDF Downloads 339
15225 IOT Based Process Model for Heart Monitoring Process

Authors: Dalyah Y. Al-Jamal, Maryam H. Eshtaiwi, Liyakathunisa Syed

Abstract:

Connecting health services with technology is in huge demand as people's health situations worsen day by day. Engaging new technologies such as the Internet of Things (IoT) in medical services can enhance patient care. Specifically, patients suffering from chronic diseases, such as cardiac patients, need special care and monitoring. Some efforts have previously been made to automate and improve patient monitoring systems. However, these efforts have limitations and lack the real-time feature needed for chronic diseases. In this paper, an improved process model for a patient monitoring system specialized for cardiac patients is presented. A survey was distributed and interviews were conducted to gather the requirements needed to improve the cardiac patient monitoring system. The Business Process Model and Notation (BPMN) language was used to model the proposed process. The proposed system uses IoT technology to assist doctors in remotely monitoring and following up with their heart patients in real time. To validate the effectiveness of the proposed solution, simulation analysis was performed using the Bizagi Modeler tool. Analysis results show performance improvements in the heart monitoring process. For the future, the authors suggest enhancing the proposed system to cover all chronic diseases.

Keywords: IoT, process model, remote patient monitoring system, smart watch

Procedia PDF Downloads 335
15224 Constrained RGBD SLAM with a Prior Knowledge of the Environment

Authors: Kathia Melbouci, Sylvie Naudet Collette, Vincent Gay-Bellile, Omar Ait-Aider, Michel Dhome

Abstract:

In this paper, we handle the problem of real-time localization and mapping in an indoor environment assisted by a partial prior 3D model, using an RGBD sensor. The proposed solution relies on a feature-based RGBD SLAM algorithm to localize the camera and update the 3D map of the scene. To improve the accuracy and robustness of the localization, we propose to combine, in a local bundle adjustment process, geometric information provided by a prior coarse 3D model of the scene (e.g. generated from the 2D floor plan of the building) along with RGBD data from a Kinect camera. The proposed approach is evaluated on a public benchmark dataset as well as on a real scene acquired by a Kinect sensor.

Keywords: SLAM, global localization, 3D sensor, bundle adjustment, 3D model

Procedia PDF Downloads 418
15223 The Influence of Demographic on Tea Consumption in China

Authors: Xiguan Jiangfan Yang

Abstract:

This study investigates tea consumption based on the double-hurdle model. The results from a CHNS survey of 12,745 samples in China offer two preliminary insights. First, conclusions obtained using all samples cannot be applied directly to the men or women subgroups. Second, men and women are affected by different demographic factors, both in the intention to drink tea and in the quantities of tea consumed. These two findings suggest that appropriate marketing strategies should be developed to target the different groups of tea consumers.
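A double-hurdle analysis separates the decision to consume from the amount consumed. The simplified two-part sketch below (a logistic participation equation plus a linear quantity equation on drinkers, fitted to synthetic data) illustrates the structure; the full double-hurdle is estimated jointly by maximum likelihood, which this sketch omits:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)
n = 2000
age = rng.uniform(18, 70, n)
log_income = rng.normal(10, 0.5, n)
X = np.column_stack([age, log_income])

# hurdle 1: whether the person drinks tea at all depends on demographics
p_drink = 1 / (1 + np.exp(-(-6 + 0.03 * age + 0.5 * log_income)))
drinks = (rng.random(n) < p_drink).astype(int)
# hurdle 2: quantity (g/week) among drinkers only
qty = np.where(drinks == 1, 5 + 0.08 * age + rng.normal(0, 1, n), 0.0)

hurdle1 = LogisticRegression(max_iter=1000).fit(X, drinks)
mask = drinks == 1
hurdle2 = LinearRegression().fit(X[mask], qty[mask])
print("participation coefs (age, log income):", np.round(hurdle1.coef_[0], 3))
print("quantity coefs (age, log income):", np.round(hurdle2.coef_, 3))
```

Fitting the two hurdles separately for men and women would reproduce the paper's subgroup comparison: the same demographic can matter for one hurdle but not the other.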

Keywords: Chinese, CHNS, Double-Hurdle model, tea consumption

Procedia PDF Downloads 416
15222 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data

Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao

Abstract:

Accurate calculation of wellbore pressure is of great significance for preventing wellbore risk during drilling. The traditional mechanism model requires many iterative solving procedures, which reduces calculation efficiency and makes it difficult to meet the demands of dynamic wellbore pressure control. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, which significantly improves calculation efficiency and accuracy. However, due to the ‘black box’ property of intelligent algorithms, existing intelligent wellbore pressure models struggle outside the scope of their training data and overreact to data noise, often producing abnormal calculation results. In this study, the multiphase flow mechanism is embedded into the objective function of the neural network model as a constraint condition, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of pressure measurement while drilling (MPD) data. The multiphase flow constraint makes the predictions of the neural network model more consistent with the distribution law of wellbore pressure, which overcomes the black-box attribute of the neural network model to some extent. The main improvements are that accuracy on an independent test data set is further increased and abnormal calculation values essentially disappear. This method, driven by both MPD data and the multiphase flow mechanism, is the main way to predict wellbore pressure accurately and efficiently in the future.
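The idea of embedding a mechanism constraint in the training objective can be illustrated with a toy penalty. Here the 'mechanism' is the simple rule that wellbore pressure should increase with depth; the actual study constrains a multiphase-flow model, so this is only a structural sketch:

```python
import numpy as np

def constrained_loss(pred, target, depths, lam=10.0):
    """Data misfit plus a penalty for any pressure decrease with depth."""
    mse = np.mean((pred - target) ** 2)
    order = np.argsort(depths)
    dp = np.diff(pred[order])                        # pressure change going deeper
    violation = np.mean(np.maximum(-dp, 0.0) ** 2)   # penalize decreases only
    return mse + lam * violation

depths = np.array([1000.0, 1500.0, 2000.0, 2500.0])  # m
target = np.array([12.0, 17.5, 23.0, 28.6])          # MPa
physical = np.array([12.1, 17.4, 23.2, 28.4])        # monotone prediction
unphysical = np.array([12.1, 23.2, 17.4, 28.4])      # pressure dips with depth

loss_ok = constrained_loss(physical, target, depths)
loss_bad = constrained_loss(unphysical, target, depths)
print(round(float(loss_ok), 4), round(float(loss_bad), 2))
```

Minimizing such a loss steers the network toward predictions that respect the mechanism even in regions where training data are sparse, which is the behavior the abstract reports.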

Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive

Procedia PDF Downloads 177
15221 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape

Authors: Chen Bo, Wen Zengping

Abstract:

Building on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately to make them compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established that uses not only a Monte Carlo simulation method to create stochastic simulation spectra, considering the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its application to structural fragility analysis is demonstrated through case studies. Compared to previous schemes that do not consider the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation, and spectral correlation of the target spectrum, and it fully reveals the uncertainty of the site-specific hazard level. Meanwhile, it helps improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work provides a reasonable and reliable basis for structural seismic evaluation under scenario earthquake environments.
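The Monte Carlo portion of such a scheme can be sketched as follows: simulate target spectra from a multivariate lognormal distribution and keep the candidate records that best match them. Medians, dispersions, and the correlation structure below are illustrative placeholders, not hazard-consistent values:

```python
import numpy as np

rng = np.random.default_rng(3)
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])        # s
ln_median = np.log([0.8, 1.0, 0.6, 0.3, 0.12])       # target spectral accel. (g)
sigma = np.full(5, 0.3)                              # lognormal dispersion
corr = 0.7 ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
cov = corr * np.outer(sigma, sigma)

# Monte Carlo simulation of 200 target spectra
sim = np.exp(rng.multivariate_normal(ln_median, cov, size=200))

# candidate "recorded" spectra; keep the 30 closest to any simulated spectrum
candidates = np.exp(rng.multivariate_normal(ln_median, 2 * cov, size=500))
sq_err = ((np.log(candidates)[:, None, :] - np.log(sim)[None, :, :]) ** 2).sum(-1)
best = np.argsort(sq_err.min(axis=1))[:30]
selected = candidates[best]

mismatch = np.abs(np.log(selected).mean(axis=0) - ln_median).max()
print("selected:", selected.shape, "worst log-median mismatch:", round(float(mismatch), 3))
```

Because the simulated spectra carry the target's dispersion and period-to-period correlation, the selected suite inherits that variability rather than clustering tightly on the median, which is the improvement the abstract emphasizes.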

Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape

Procedia PDF Downloads 298
15220 Dynamic Response and Damage Modeling of Glass Fiber Reinforced Epoxy Composite Pipes: Numerical Investigation

Authors: Ammar Maziz, Mostapha Tarfaoui, Said Rechak

Abstract:

The high mechanical performance of composite pipes can be adversely affected by their low resistance to impact loads. Loads of dynamic origin are dangerous and have consequences for the operation of pipes, because the damage is often not detected and can affect the structural integrity of composite pipes. In this work, an advanced 3D finite element (FE) model based on intralaminar damage models was developed and used to predict damage under low-velocity impact. The performance of the numerical model is validated by comparison with the results of experimental tests. The results show that at low impact energy, damage occurs mainly by matrix cracking and delamination. The model's capability to simulate low-velocity impact events on full-scale composite structures was proved.

Keywords: composite materials, low velocity impact, FEA, dynamic behavior, progressive damage modeling

Procedia PDF Downloads 175
15219 Adapted Intersection over Union: A Generalized Metric for Evaluating Unsupervised Classification Models

Authors: Prajwal Prakash Vasisht, Sharath Rajamurthy, Nishanth Dara

Abstract:

In a supervised machine learning approach, metrics such as precision, accuracy, and coverage can be calculated using ground truth labels to help in model tuning, evaluation, and selection. In an unsupervised setting, however, where the data has no ground truth, there are few interpretable metrics that can guide us in the same way. Our approach creates a framework that adapts the Intersection over Union metric, usually used to evaluate supervised learning models, to the unsupervised domain; the resulting metric, referred to as Adapted IoU, solves the problem by factoring in subject matter expertise and intuition about the ideal output of the model. This metric essentially provides a scale that allows us to compare performance across numerous unsupervised models or to tune hyper-parameters and compare different versions of the same model.
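A minimal sketch of the adapted-IoU idea: score each cluster from an unsupervised model by its best intersection-over-union against expert-defined ideal groups, then average. The matching and averaging rules here are illustrative assumptions, not the authors' exact formulation:

```python
def iou(a, b):
    """Intersection over union (Jaccard index) of two item sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def adapted_iou(clusters, ideal_groups):
    """Average over model clusters of the best IoU against any ideal group."""
    return sum(max(iou(c, g) for g in ideal_groups) for c in clusters) / len(clusters)

model_a = [[1, 2, 3], [4, 5, 6, 7]]     # one unsupervised model's grouping
model_b = [[1, 2], [3, 4, 5], [6, 7]]   # a competing model's grouping
ideal = [[1, 2, 3], [4, 5], [6, 7]]     # expert intuition about the ideal output

print(round(adapted_iou(model_a, ideal), 3))   # 0.75
print(round(adapted_iou(model_b, ideal), 3))   # higher: closer to the ideal
```

The score needs no ground-truth labels, only the expert's notion of an ideal grouping, so it can rank competing unsupervised models or hyper-parameter settings on the same scale.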

Keywords: general metric, unsupervised learning, classification, intersection over union

Procedia PDF Downloads 54
15218 Software Quality Measurement System for Telecommunication Industry in Malaysia

Authors: Nor Fazlina Iryani Abdul Hamid, Mohamad Khatim Hasan

Abstract:

The evolution of software quality measurement began when McCall introduced his quality model in 1977. Since then, several software quality models and measurement methods have emerged, but none of them focused on the telecommunication industry. In this paper, a software quality measurement system for the telecommunication industry is implemented to accommodate the industry's rapid growth. The quality value of telecommunication-related software can be calculated with this system by entering the required parameters. The system calculates the quality value of the measured software based on predefined quality metrics, aggregated by reference to the quality model, and classifies the quality level of the software based on the Net Satisfaction Index (NSI). Such a software quality measurement system is thus important to both developers and users in producing high-quality software products for the telecommunication industry.
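The calculation the system performs can be illustrated with a minimal sketch, assuming a weighted-average aggregation over the quality model's metrics and illustrative NSI bands; none of the metric names, weights, or band boundaries below are taken from the paper:

```python
def quality_value(metric_scores, weights):
    """Aggregate per-metric scores (0-100) into one quality value as a
    weighted average; the weights would come from the quality model."""
    total_w = sum(weights.values())
    return sum(metric_scores[m] * w for m, w in weights.items()) / total_w

def nsi_level(value):
    """Classify the quality value into an NSI band.
    The thresholds here are illustrative, not from the paper."""
    if value >= 80:
        return "high"
    if value >= 50:
        return "medium"
    return "low"

# Hypothetical metric scores and quality-model weights.
scores = {"reliability": 85, "usability": 70, "efficiency": 90}
weights = {"reliability": 0.5, "usability": 0.3, "efficiency": 0.2}
q = quality_value(scores, weights)  # 0.5*85 + 0.3*70 + 0.2*90 = 81.5
```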

Keywords: software quality, quality measurement, quality model, quality metric, net satisfaction index

Procedia PDF Downloads 596
15217 Electro-Fenton Degradation of Erythrosine B Using Carbon Felt as a Cathode: Doehlert Design as an Optimization Technique

Authors: Sourour Chaabane, Davide Clematis, Marco Panizza

Abstract:

This study investigates the oxidation of the food dye Erythrosine B (EB) by a homogeneous electro-Fenton process using iron (II) sulfate heptahydrate as a catalyst, carbon felt as the cathode, and Ti/RuO2 as the anode. The treated synthetic wastewater contains 100 mg L⁻¹ of EB at pH 3. Three independent variables were considered for process optimization: applied current intensity (0.1 – 0.5 A), iron concentration (1 – 10 mM), and stirring rate (100 – 1000 rpm). Their interactions were investigated by response surface methodology (RSM) based on a Doehlert design. EB removal efficiency and energy consumption after 30 minutes of electrolysis were taken as the model responses. Analysis of variance (ANOVA) revealed that the quadratic models fitted the experimental data adequately: R² (0.9819), adj-R² (0.9276), and a low Fisher probability (< 0.0181) for the EB removal model, and R² (0.9968), adj-R² (0.9872), and a low Fisher probability (< 0.0014) for the energy consumption model, reflecting robust statistical significance. The energy consumption model depends significantly on the current density, as expected. The foregoing RSM results led to the following optimal conditions for EB degradation: a current intensity of 0.2 A, an iron concentration of 9.397 mM, and a stirring rate of 500 rpm, which gave a maximum decolorization rate of 98.15% with a minimum energy consumption of 0.74 kWh m⁻³ at 30 min of electrolysis.
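The quadratic response-surface fit underlying the ANOVA above can be sketched with ordinary least squares; for brevity, this sketch uses only two of the three factors (current intensity and iron concentration), and all data points are invented for illustration rather than taken from the Doehlert design:

```python
import numpy as np

# Illustrative data only: removal efficiency (%) observed at a few
# (current I in A, Fe2+ concentration c in mM) settings.
X = np.array([[0.1, 1], [0.1, 10], [0.3, 5.5], [0.5, 1],
              [0.5, 10], [0.3, 1], [0.3, 10]])
y = np.array([60.0, 70.0, 92.0, 75.0, 85.0, 80.0, 88.0])

# Design matrix for a full second-order (quadratic) response surface:
# 1, I, c, I*c, I^2, c^2
I, c = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(I), I, c, I * c, I**2, c**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(I, c):
    """Evaluate the fitted quadratic surface at one factor setting."""
    return coef @ np.array([1.0, I, c, I * c, I**2, c**2])
```

Maximizing such a fitted surface over the factor ranges is what yields an optimal operating point of the kind reported above.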

Keywords: electro-Fenton, erythrosine B, dye, response surface methodology, carbon felt

Procedia PDF Downloads 78
15216 Formal Verification for Ethereum Smart Contract Using Coq

Authors: Xia Yang, Zheng Yang, Haiyong Sun, Yan Fang, Jingyu Liu, Jia Song

Abstract:

The smart contract in Ethereum is a unique program deployed on the Ethereum Virtual Machine (EVM) to help manage cryptocurrency. The security of these smart contracts is critical to Ethereum's operation and highly sensitive. In this paper, we present a formal model for smart contracts, using the separated term-obligation (STO) strategy to formalize and verify them. We use the IBM smart sponsor contract (SSC) as an example to elaborate the details of the formalization process. We also propose a formal smart sponsor contract model (FSSCM) and verify the SSC's security properties with the interactive theorem prover Coq. Using our formal model and verification method, we found the 'Unchecked-Send' vulnerability in the SSC. Finally, we demonstrate how other smart contracts can be formalized and verified with this approach; our work indicates that formal verification can effectively establish the correctness and security of smart contracts.

Keywords: smart contract, formal verification, Ethereum, Coq

Procedia PDF Downloads 698
15215 An Approach to Analyze Testing of Nano On-Chip Networks

Authors: Farnaz Fotovvatikhah, Javad Akbari

Abstract:

The test time of a test architecture is an important factor that depends on the architecture's delay and the test patterns. Here, a new network-on-chip-based architecture for storing test results is presented. In addition, a simple analytical model is proposed to calculate link test time for a built-in self-tester (BIST) and an external tester (Ext) in multiprocessor systems. The results extracted from the model are verified using an FPGA implementation and experimental measurements. Systems consisting of 16, 25, and 36 processors are implemented and simulated, and their test times are calculated. In addition, BIST and Ext are compared in terms of test time under different conditions, such as different numbers of test patterns and nodes. Using the model, the maximum testing frequency can be calculated and the test structure can be optimized for high-speed testing.
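A minimal sketch of how such an analytical link-test-time model might look, assuming a simple linear per-pattern cost in which the external tester additionally pays a transfer overhead; the functional form and the timings are illustrative assumptions, not the paper's actual model:

```python
def link_test_time(n_patterns, t_apply, t_transfer=0.0):
    """Illustrative link test time: each pattern is applied once; an
    external tester (Ext) also pays a per-pattern transfer cost, while
    a built-in self-tester (BIST) applies patterns locally."""
    return n_patterns * (t_apply + t_transfer)

# Hypothetical timings in microseconds per pattern.
bist = link_test_time(1000, t_apply=0.5)                  # 500.0
ext = link_test_time(1000, t_apply=0.5, t_transfer=2.0)   # 2500.0
```

Under this kind of model, the BIST/Ext comparison at different pattern counts reduces to comparing the two per-pattern cost terms.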

Keywords: test, nano on-chip network, JTAG, modelling

Procedia PDF Downloads 492
15214 Synthesis of a Model Predictive Controller for Artificial Pancreas

Authors: Mohamed El Hachimi, Abdelhakim Ballouk, Ilyas Khelafa, Abdelaziz Mouhou

Abstract:

Introduction: Type 1 diabetes occurs when beta cells are destroyed by the body's own immune system. Treatment of type 1 diabetes mellitus could be greatly improved by applying a closed-loop control strategy to insulin delivery, also known as an Artificial Pancreas (AP). Method: In this paper, we present a new formulation of the cost function for a Model Predictive Control (MPC) scheme, using a technique that accelerates the control of the AP and tackles the nonlinearity of the control problem via asymmetric objective functions. Finding: The result of this work is a new Model Predictive Control algorithm that achieves good performance, such as reducing the time spent in hyperglycaemia and avoiding hypoglycaemia. Conclusion: These performances are validated in in silico trials.
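The asymmetric-objective idea can be sketched as a stage cost that penalizes deviations toward hypoglycaemia far more heavily than deviations toward hyperglycaemia; the target and weights below are illustrative assumptions, not the paper's tuned values:

```python
def asymmetric_cost(glucose, target=110.0, w_hypo=10.0, w_hyper=1.0):
    """Asymmetric stage cost for glucose regulation (mg/dL): errors
    below target (toward hypoglycaemia) are weighted far more heavily
    than errors above it. Weights and target are illustrative."""
    dev = glucose - target
    w = w_hypo if dev < 0 else w_hyper
    return w * dev**2

# The same 30 mg/dL error costs 10x more on the hypoglycaemic side.
low = asymmetric_cost(80.0)    # 10 * 30^2 = 9000.0
high = asymmetric_cost(140.0)  # 1 * 30^2 = 900.0
```

Summing such a stage cost over the MPC prediction horizon biases the optimizer toward insulin dosing that avoids hypoglycaemia.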

Keywords: artificial pancreas, control algorithm, biomedical control, MPC, objective function, nonlinearity

Procedia PDF Downloads 309
15213 A Study on the Quantitative Evaluation Method of Asphalt Pavement Condition through the Visual Investigation

Authors: Sungho Kim, Jaechoul Shin, Yujin Baek

Abstract:

In recent years, due to environmental impacts, aging, and other factors, various types of pavement deterioration, such as cracking, potholes, rutting, and roughness degradation, have been increasing rapidly. In Korea, the Ministry of Land, Infrastructure and Transport regularly maintains the pavement condition of highways and national highways using pavement condition survey equipment and structural survey equipment. Local governments that maintain local roads, farm roads, etc. find it difficult to monitor pavement condition with such equipment because of economic constraints, skills shortages, and local conditions such as narrow roads. This study presents a quantitative method for evaluating pavement condition through visual inspection to overcome these problems on roads managed by local governments. Rutting and roughness are difficult to evaluate with the naked eye, but the condition of cracks can be assessed visually. Linear cracks (m), area cracks (m²), and potholes (number, m²) were investigated with the naked eye every 100 meters. In this paper, the crack ratio was calculated from the observed crack condition, and the pavement condition was evaluated from the calculated crack ratio. Pavement condition survey equipment also surveyed the same sections in order to evaluate the reliability of the condition evaluation based on the calculated crack ratio. The pavement condition was evaluated through the SPI (Seoul Pavement Index) and the calculated crack ratio using the field survey results. A comparison between 'the SPI considering only the crack ratio' and 'the SPI considering rutting and roughness as well', using the equipment survey data, showed a margin of error below 5% when the SPI is less than 5. An SPI of 5 is considered the threshold for deciding whether to maintain the pavement. This showed that the pavement condition can be evaluated using the crack ratio alone. According to the analysis of the crack ratio between the visual inspection and the equipment survey, the average error is 1.86% (minimum 0.03%, maximum 9.58%). Economically, the visual inspection costs only 10% of the equipment survey and will also help the economy by creating new jobs. This paper therefore advises local governments to maintain pavement condition through visual investigations, although more research is needed to improve reliability. Acknowledgment: The author would like to thank the MOLIT (Ministry of Land, Infrastructure, and Transport). This work was carried out through a project funded by the MOLIT, named 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
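A crack ratio of the kind used above can be sketched as the total cracked area over the pavement area of one survey section; the conversion of linear cracks to an equivalent area via an assumed influence width, and all the numbers below, are illustrative assumptions rather than the study's definitions:

```python
def crack_ratio(linear_m, area_m2, pothole_m2, length_m, width_m,
                linear_crack_width_m=0.3):
    """Crack ratio (%) for one survey section: total cracked area over
    pavement area. Linear cracks are converted to an equivalent area
    using an assumed influence width (0.3 m here is illustrative)."""
    cracked = linear_m * linear_crack_width_m + area_m2 + pothole_m2
    return 100.0 * cracked / (length_m * width_m)

# Hypothetical 100 m survey section of a 3.5 m wide lane.
r = crack_ratio(linear_m=20, area_m2=4.0, pothole_m2=0.5,
                length_m=100, width_m=3.5)  # 10.5 / 350 * 100 = 3.0 %
```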

Keywords: asphalt pavement maintenance, crack ratio, evaluation of asphalt pavement condition, SPI (Seoul Pavement Index), visual investigation

Procedia PDF Downloads 168
15212 A Self-Coexistence Strategy for Spectrum Allocation Using Selfish and Unselfish Game Models in Cognitive Radio Networks

Authors: Noel Jeygar Robert, V. K. Vidya

Abstract:

Cognitive radio is a software-defined radio technology that allows cognitive users to operate in the vacant bands of spectrum allocated to licensed users. Cognitive radio plays a vital role in the efficient utilization of the wireless radio spectrum shared between cognitive users and licensed users, without causing interference to the licensed users. Spectrum allocation, followed by spectrum sharing, is done in such a way that a cognitive user has to wait until spectrum holes are identified and allocated once the licensed user vacates its allocated spectrum. In this paper, we propose a self-coexistence strategy using bargaining and Cournot game models for spectrum allocation in cognitive radio networks. The game-theoretic model analyses the behaviour of cognitive users in both cooperative and non-cooperative scenarios and provides an equilibrium level of spectrum allocation. Game-theoretic models such as the bargaining game and the Cournot game produce a balanced distribution of spectrum resources and energy consumption. Simulation results show that both game models achieve better performance than other popular techniques.
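The Cournot side of the analysis can be sketched with the textbook two-player equilibrium under linear inverse demand; interpreting the quantities as spectrum demands of two cognitive users is an illustrative mapping, not the paper's exact formulation:

```python
def cournot_equilibrium(a, b, c1, c2):
    """Nash equilibrium of a two-player Cournot game with linear
    inverse demand p = a - b*(q1 + q2) and per-unit costs c1, c2.
    Here q_i is read as the spectrum demand of cognitive user i;
    that interpretation is illustrative."""
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    return q1, q2

# Symmetric users split the equilibrium demand evenly.
q1, q2 = cournot_equilibrium(a=10, b=1, c1=1, c2=1)  # (3.0, 3.0)
```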

Keywords: cognitive radio, game theory, bargaining game, Cournot game

Procedia PDF Downloads 304
15211 Modeling and Experimental Verification of Crystal Growth Kinetics in Glass Forming Alloys

Authors: Peter K. Galenko, Stefanie Koch, Markus Rettenmayr, Robert Wonneberger, Evgeny V. Kharanzhevskiy, Maria Zamoryanskaya, Vladimir Ankudinov

Abstract:

We analyze the structure of undercooled melts, the crystal growth kinetics, and the amorphous/crystalline microstructure of rapidly solidifying glass-forming Pd-based and CuZr-based alloys. A dendrite growth model is developed using a combination of the kinetic phase-field model and a mesoscopic sharp-interface model. The model predicts features of crystallization kinetics in alloys from thermodynamically controlled growth (governed by the Gibbs free energy change on solidification) to the kinetically limited regime (governed by atomic attachment-detachment processes at the solid/liquid interface). Comparing the critical undercoolings observed in the crystallization kinetics with experimental data on melt viscosity, atomistic simulation data on liquid microstructure, and the theoretically predicted dendrite growth velocity allows us to conclude that the dendrite growth kinetics strongly depends on changes in the cluster structure of the melt. The data from these theoretical and experimental investigations are used to interpret the microstructure of samples processed in an electromagnetic levitator on board the International Space Station within the projects "MULTIPHAS" (European Space Agency and German Aerospace Center, 50WM1941) and "KINETIKA" (ROSKOSMOS).

Keywords: dendrite, kinetics, model, solidification

Procedia PDF Downloads 125
15210 Mathematical Modeling Pressure Losses of Trapezoidal Labyrinth Channel and Bi-Objective Optimization of the Design Parameters

Authors: Nina Philipova

Abstract:

The influence of the geometric parameters of a trapezoidal labyrinth channel on the pressure losses along the labyrinth length is investigated in this work. The impact of the dentate height is studied at fixed values of the dentate angle and the dentate spacing. The objective of the work presented in this paper is to derive a mathematical model of the pressure losses along the labyrinth length as a function of the dentate height. Numerical simulations of the water flow are performed using the commercial codes ANSYS GAMBIT and FLUENT, with the dripper inlet pressure set to 1 bar. As a result, the mathematical model of the pressure losses is determined as a second-order polynomial by means of the commercial code STATISTICA. Bi-objective optimization is performed using the mean algebraic utility function. The optimum value of the dentate height is defined at fixed values of the dentate angle and the dentate spacing. The derived model of the pressure losses and the optimum value of the dentate height serve as a basis for a more successful emitter design.
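The two building blocks of the approach, a second-order polynomial loss model and a mean algebraic utility for the bi-objective step, can be sketched as follows; all coefficients and candidate dentate heights are illustrative, not the regression results of the paper:

```python
def pressure_loss(h, a0, a1, a2):
    """Second-order polynomial model of pressure losses along the
    labyrinth as a function of dentate height h; the coefficients
    would come from the regression (values below are illustrative)."""
    return a0 + a1 * h + a2 * h**2

def mean_utility(u1, u2):
    """Mean algebraic utility of two normalized objectives (0..1,
    higher is better), used to pick a compromise design point."""
    return 0.5 * (u1 + u2)

# Hypothetical coefficients evaluated at two candidate dentate heights.
losses = [pressure_loss(h, a0=1.0, a1=-0.8, a2=0.5) for h in (0.5, 1.0)]
```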

Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model

Procedia PDF Downloads 156
15209 Earnings vs Cash Flows: The Valuation Perspective

Authors: Megha Agarwal

Abstract:

This research paper is an effort to compare earnings-based and cash-flow-based methods of valuing an enterprise. The theoretically equivalent methods, based either on earnings, such as the Residual Earnings Model (REM), the Abnormal Earnings Growth Model (AEGM), the Residual Operating Income Method (ReOIM), the Abnormal Operating Income Growth Model (AOIGM) and their extensions to multipliers such as the price/earnings ratio and price/book value ratio, or on cash flows, such as the Dividend Valuation Method (DVM) and the Free Cash Flow Method (FCFM), all provide different valuation estimates for the Indian corporate giant Reliance Industries Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years, 2008-09 to 2011-12, has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. The estimates differ because of inconsistencies in the discount rate, growth rates, and the other forecasted variables. Although inputs for earnings-based models may be available to investors and analysts through published statements, free cash flows may be estimated more precisely by internal management. Valuation from the more stable parameters, residual operating income and RNOA, could be considered superior to valuations from the more volatile return on equity.
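Of the models compared, the Residual Earnings Model has a particularly compact form: equity value equals current book value plus discounted residual earnings (earnings minus a capital charge on opening book value). The sketch below assumes clean surplus accounting with zero payout, and all inputs are invented for illustration:

```python
def residual_earnings_value(book_value, earnings, r):
    """Residual Earnings Model: equity value = current book value plus
    the present value of residual earnings over the forecast horizon.
    Book value is rolled forward under a clean surplus assumption with
    no dividends, for illustration."""
    value, b = book_value, book_value
    for t, e in enumerate(earnings, start=1):
        residual = e - r * b          # earnings minus capital charge
        value += residual / (1 + r) ** t
        b += e                        # clean surplus, zero payout
    return value

# Hypothetical firm: book value 100, two years of forecast earnings,
# 10% cost of equity capital.
v = residual_earnings_value(book_value=100.0, earnings=[15.0, 16.0], r=0.10)
```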

Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)

Procedia PDF Downloads 381
15208 Fully Coupled Porous Media Model

Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li

Abstract:

This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow, finite element software tool. The fully implicit software accurately reproduces classical analytical solutions such as the Terzaghi consolidation problem. Furthermore, it captures analytical solutions less well known in the literature, such as Gibson's sedimentation rate problem and Coussy's wellbore stability problems for poroelastic rocks. The mechanical volume strains are transferred to the governing porous flow equation within an implicit framework. This overcomes several current industrial issues arising from the use of explicit solvers for the mechanical governing equations combined with implicit solvers on the porous flow side, which can lead to instability and non-convergence in the coupled system, as well as results with an appreciable degree of error. A fully monolithic implicit-implicit coupled porous media code solves the seepage and mechanical equations in one matrix system under a unified time-stepping scheme, which makes the problem definition much easier. An explicit solver requires additional input, such as a damping coefficient and a mass-scaling factor, which a fully implicit solution circumvents. Accuracy is also improved because the pore fluid pressure solution does not depend on predictor-corrector methods, though at the potential cost of reduced stability. To test this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme across a range of geotechnical problems. These cases include 1) Biot coefficient calculation, 2) consolidation theory with the Terzaghi analytical solution, 3) sedimentation theory with the Gibson analytical solution, and 4) Coussy's wellbore poroelastic analytical solutions.
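Case 2 in the list above checks against the standard Terzaghi series solution for the average degree of consolidation, which can be sketched directly as a function of the dimensionless time factor:

```python
import math

def degree_of_consolidation(Tv, terms=100):
    """Terzaghi 1-D consolidation: average degree of consolidation U
    as a function of the dimensionless time factor Tv, from the
    standard series U = 1 - sum_m 2/M^2 * exp(-M^2 * Tv),
    with M = (pi/2) * (2m + 1)."""
    U = 1.0
    for m in range(terms):
        M = math.pi / 2 * (2 * m + 1)
        U -= 2.0 / M**2 * math.exp(-M**2 * Tv)
    return U

# Classical reference point: Tv ≈ 0.197 gives U ≈ 50 %.
u50 = degree_of_consolidation(0.197)
```

Comparing the numerical pore-pressure history of a one-dimensional column against this curve is the standard verification for a coupled poromechanics solver.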

Keywords: coupled, implicit, monolithic, porous media

Procedia PDF Downloads 141