Search results for: threshold models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7208

6638 Dry Relaxation Shrinkage Prediction of Bordeaux Fiber Using a Feed Forward Neural Network

Authors: Baeza S. Roberto

Abstract:

Knitted fabric is deformed in its dimensions by transverse stretching and longitudinal tension during processing on rectilinear knitting machines, so a dry relaxation shrinkage procedure, followed by a thermal prefixing action, is performed to obtain stable conditions in the knit. This paper presents a dry relaxation shrinkage prediction of Bordeaux fiber using a feed-forward neural network and linear regression models. Six operational alternatives of shrinkage were predicted, and a comparison of the results found that the neural network models gave higher levels of explanation of the variability and better prediction. Different repose conditions are also included. The models were obtained with the neural toolbox of Matlab and with Minitab, using real data from a knitting company in southern Guanajuato. The results allow the dry relaxation shrinkage of each alternative operation to be predicted.
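
As a sketch of the comparison the abstract describes — linear regression versus a feed-forward network judged by their level of explanation of variability (R²) — the following minimal example fits both models to synthetic shrinkage-like data (the variable names and data are illustrative stand-ins, not the company's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for process settings (tension, stretch) and shrinkage (%).
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * np.tanh(3 * X[:, 0] * X[:, 1]) \
    + rng.normal(0, 0.05, 200)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Linear regression via ordinary least squares (with intercept).
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2_lin = r2(y, A @ beta)

# One-hidden-layer feed-forward network trained by batch gradient descent.
H = 8
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # network output
    err = pred - y                     # gradient of MSE/2 w.r.t. pred
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
r2_nn = r2(y, np.tanh(X @ W1 + b1) @ W2 + b2)

print(f"R2 linear: {r2_lin:.3f}  R2 neural net: {r2_nn:.3f}")
```

On data with an interaction term like this, the closed-form least-squares fit captures only the linear part, while the small network can also absorb the nonlinearity — the kind of gap in explained variability the paper reports.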

Keywords: neural network, dry relaxation, knitting, linear regression

Procedia PDF Downloads 567
6637 Convective Hot Air Drying of Different Varieties of Blanched Sweet Potato Slices

Authors: M. O. Oke, T. S. Workneh

Abstract:

The drying behaviour of blanched sweet potato in a cabinet dryer was investigated at five air temperatures (40–80 °C) for ten sweet potato varieties sliced to 5 mm thickness. The drying data were fitted to eight thin-layer models. The Modified Henderson and Pabis model gave the best fit to the experimental moisture ratio data for all the varieties, while the Newton (Lewis) and the Wang and Singh models gave the poorest fit. The effective moisture diffusivity D_eff was lowest for the Bophelo variety (1.27 × 10⁻⁹ to 1.77 × 10⁻⁹ m²/s) and highest for S191 (1.93 × 10⁻⁹ to 2.47 × 10⁻⁹ m²/s), which indicates that moisture diffusivity in sweet potato is affected by genetic factors. Activation energy values ranged from 0.27 to 6.54 kJ/mol; these low values indicate that drying sweet potato slices requires little energy and is hence a cost- and energy-saving method. Drying time decreased considerably with increasing hot air temperature.
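
The model-fitting step described above can be sketched with non-linear least squares. The example below uses synthetic moisture-ratio data and, for brevity, the two-parameter Henderson and Pabis form rather than the six-parameter modified version, then backs out an effective diffusivity from the slab solution of Fick's law; all numbers are illustrative, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic thin-layer drying data: moisture ratio decaying over time
# (minutes) with a little measurement noise.
rng = np.random.default_rng(1)
t = np.arange(0, 300, 10.0)
mr = 1.05 * np.exp(-0.02 * t) + rng.normal(0, 0.005, t.size)

def newton(t, k):               # Newton (Lewis): MR = exp(-k t)
    return np.exp(-k * t)

def henderson_pabis(t, a, k):   # Henderson and Pabis: MR = a exp(-k t)
    return a * np.exp(-k * t)

(k_n,), _ = curve_fit(newton, t, mr, p0=[0.01])
(a_hp, k_hp), _ = curve_fit(henderson_pabis, t, mr, p0=[1.0, 0.01])

rmse = lambda pred: float(np.sqrt(np.mean((mr - pred) ** 2)))
rmse_n, rmse_hp = rmse(newton(t, k_n)), rmse(henderson_pabis(t, a_hp, k_hp))

# Effective diffusivity from the slab solution
# MR ~ (8/pi^2) exp(-pi^2 D_eff t / (4 L^2)), identifying the fitted rate k
# with pi^2 D_eff / (4 L^2); L = half the 5 mm slice thickness.
L = 2.5e-3                       # m
k_per_s = k_hp / 60.0            # convert 1/min -> 1/s
d_eff = k_per_s * 4 * L ** 2 / np.pi ** 2

print(f"Newton RMSE {rmse_n:.4f}, Henderson-Pabis RMSE {rmse_hp:.4f}, "
      f"D_eff ~ {d_eff:.2e} m2/s")
```

Because the Henderson and Pabis form nests the Newton model (a = 1), its least-squares fit can never be worse — the ranking reported in the abstract reflects the same nesting logic applied across eight candidate models.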

Keywords: sweet potato slice, drying models, moisture ratio, moisture diffusivity, activation energy

Procedia PDF Downloads 504
6636 The Changing Face of Pedagogy and Curriculum Development Sub-Components of Teacher Education in Nigeria: A Comparative Evaluation of the University of Lagos, Lagos State University, and Sokoto State University Models

Authors: Saheed A. Rufai

Abstract:

Courses in pedagogy and curriculum development expectedly occupy a core place in the professional education components of teacher education at the Lagos, Lagos State, and Sokoto State Universities. This is in keeping with the National Teacher Education Policy statement stipulating that, for student teachers to learn effectively, teacher education institutions must be equipped to prepare them adequately. However, there is a growing concern over the unfaithfulness of some of the dominant Nigerian models of teacher education to this policy statement on teacher educators’ knowledge and skills. The purpose of this paper is to comparatively evaluate both the curricular provisions and the manpower for the pedagogy and curriculum development sub-components of the Lagos, Lagos State, and Sokoto State models of teacher preparation. The paper employs a combination of quantitative and qualitative methods. Preliminary analysis revealed a new trend in teacher educators’ pedagogical knowledge and understanding with regard to the two intertwined sub-components. The significance of the study lies in its potential to determine the degree of conformity of each of the three models to the stipulated standards. Its contribution to scholarship lies in correlating deficiencies in teacher educators’ professional knowledge and skills with their implications for the professional knowledge and skills of prospective teachers, with a view to providing a framework for reforms.

Keywords: curriculum development, pedagogy, teacher education, dominant Nigerian teacher preparation models

Procedia PDF Downloads 431
6635 Statistical Analysis of Natural Images after Applying ICA and ISA

Authors: Peyman Sheikholharam Mashhadi

Abstract:

Difficulties in analyzing real-world images within classical image processing and machine vision frameworks have motivated researchers to consider biology-based vision. It is a common belief that the mammalian visual cortex has been adapted to the statistics of real-world images through evolution. There are two well-known, successful models of mammalian visual cortical cells: Independent Component Analysis (ICA) and Independent Subspace Analysis (ISA). In this paper, we statistically analyze the dependencies that remain among the components after applying these models to natural images. We also investigate the response of the feature detectors to gratings with various parameters in order to find the detectors' optimal parameters. Finally, the phase selectivity of the feature detectors in both models is considered.
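
A minimal illustration of the ICA step: natural image patches are famously super-Gaussian (sparse), so the sketch below substitutes Laplacian sources for patches, mixes them, and checks that FastICA recovers independent, heavy-tailed components. It assumes scikit-learn is available; the data are synthetic, not natural images:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

# Two independent, sparse (Laplacian) sources stand in for the super-Gaussian
# statistics that natural image patches exhibit.
S = rng.laplace(size=(2000, 2))
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix
X = S @ A.T                               # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Each recovered component should correlate strongly with one true source
# (up to permutation and sign).
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
match = corr.max(axis=1)

# Excess kurtosis (super-Gaussianity) of the recovered components.
def excess_kurtosis(x):
    x = (x - x.mean()) / x.std()
    return float(np.mean(x ** 4) - 3.0)

print("source/component correlations:", match.round(3))
print("kurtosis:", [round(excess_kurtosis(S_hat[:, i]), 2) for i in range(2)])
```

The residual correlations between the *squared* (energy) components are exactly the dependencies the paper measures: ICA removes linear dependence but not energy dependence, which is what motivates ISA's subspaces.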

Keywords: statistics, independent component analysis, independent subspace analysis, phase, natural images

Procedia PDF Downloads 330
6634 The Use of the TRIGRS Model and Geophysical Methodologies to Identify Landslide-Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil

Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes

Abstract:

Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they become disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES), which analyze the soil properties with minimal impact on its initial structure. Statistical analyses and physically based mathematical models are used to calculate the Factor of Safety for steep slope areas; in general, such models combine slope stability models with hydrological models. One example is TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a study area based on changes in pore pressure and soil moisture during a rainfall event. TRIGRS is written in Fortran and couples a hydrological model based on the Richards equation with a stability model based on the limit equilibrium principle. The aim of this work is therefore to model the slope stability of Campos do Jordao with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State, in the Mantiqueira Mountains, and has a historical record of landslides. During the fieldwork, soil samples were collected and the VES method applied; these procedures provided the soil properties used as input data for the TRIGRS model.
The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope declivities. The analyzed period is from March 6th to March 8th, 2017. The TRIGRS model calculated the variation of the Factor of Safety over a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall event, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, demonstrating its efficiency in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80 mm/m² in 72 hours may trigger landslides on urban and natural slopes. The geotechnical and geophysical methods proved very useful for identifying the soil properties and providing the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with the modeling of landslide-susceptible areas by TRIGRS is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts to prevent disasters on urban slopes.
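
TRIGRS couples a Richards-equation infiltration model with an infinite-slope stability analysis. The sketch below implements only the second ingredient, the infinite-slope Factor of Safety FS = tanφ/tanβ + [c − ψγ_w·tanφ]/(γ_s·z·sinβ·cosβ), with illustrative soil parameters (not the Campos do Jordao values) to show how a rising pressure head ψ during rainfall drives FS below 1:

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, z,
                     gamma_s=20e3, gamma_w=9.81e3, psi=0.0):
    """Infinite-slope factor of safety, the stability half of a TRIGRS-style
    analysis.

    c         soil cohesion [Pa]
    phi_deg   internal friction angle [deg]
    slope_deg slope angle [deg]
    z         depth of the candidate failure surface [m]
    gamma_s   soil unit weight [N/m^3]
    gamma_w   water unit weight [N/m^3]
    psi       pressure head [m]; rises as rainfall infiltrates
    """
    phi = math.radians(phi_deg)
    beta = math.radians(slope_deg)
    frictional = math.tan(phi) / math.tan(beta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(beta) * math.cos(beta))
    return frictional + cohesive

# Dry slope vs. the same slope after infiltration has raised the pressure head.
fs_dry = factor_of_safety(c=5e3, phi_deg=30, slope_deg=35, z=2.0)
fs_wet = factor_of_safety(c=5e3, phi_deg=30, slope_deg=35, z=2.0, psi=1.5)
print(f"FS dry: {fs_dry:.2f}  FS after rainfall: {fs_wet:.2f}")
```

FS > 1 indicates stability; the drop below 1 after the pressure head rises is the mechanism behind the declining Factor of Safety the study observed after each rainfall event.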

Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey

Procedia PDF Downloads 160
6633 Modeling and Shape Prediction for Elastic Kinematic Chains

Authors: Jiun Jeon, Byung-Ju Yi

Abstract:

This paper investigates modeling and shape prediction of elastic kinematic chains, such as a colonoscope. 2D and 3D models of elastic kinematic chains are suggested, and their behaviors are demonstrated through simulation. To corroborate the effectiveness of these models, experimental work is performed using a magnetic sensor system.

Keywords: elastic kinematic chain, shape prediction, colonoscopy, modeling

Procedia PDF Downloads 593
6632 Models of Character Development in the Bali Police to Improve the Moral Quality of Members at Bali Police Headquarters

Authors: Agus Masrukhin

Abstract:

This research aims to find and analyze models of character building in the Bali Police Headquarters, with a case study of Muslim members, for improving the moral quality of its members. The formation of patterns of thinking, behavior, mentality, and noble character among police officers can later be used as a solution to reduce hedonistic tendencies amid the challenges of the era of globalization. The expected benefit of this study is a positive recommendation toward a constructive character-building model for police officers in the Republic of Indonesia, especially the Bali Police. In the long term, the models discovered can be developed for the entire police force in Indonesia. The researchers mix qualitative methods, based on narratives of the subjects' concrete experience gathered in field research, with quantitative methods applied to 92 respondents from the Bali regional police. The research used descriptive analysis and SWOT analysis, presented in a focus group discussion (FGD). The results indicate that the leadership modeling of the police and police office culture have a significant influence on the implementation of spiritual development.

Keywords: positive constructive, hedonistic, character models, morality

Procedia PDF Downloads 350
6631 Comparative Mesh Sensitivity Study of Different Reynolds Averaged Navier Stokes Turbulence Models in OpenFOAM

Authors: Zhuoneng Li, Zeeshan A. Rana, Karl W. Jenkins

Abstract:

In industry, validating a case often requires a multitude of simulations, so users tend to use a coarser mesh to keep the process affordable; it is therefore imperative to establish the coarsest mesh that can be used while keeping reasonable simulation accuracy. To date, the two most reliable, affordable and broadly used advanced simulation approaches are hybrid RANS (Reynolds Averaged Navier-Stokes)/LES (Large Eddy Simulation) and wall-modelled LES. The potential of these two approaches will continue to be developed in the coming decades, mainly because of the unaffordable computational cost of DNS (Direct Numerical Simulation). In wall-modelled LES, the turbulence model is applied as a sub-grid scale model in the innermost layer near the wall. In hybrid RANS/LES (Detached Eddy Simulation) and its variants, the RANS turbulence models cover the entire boundary layer region; RANS therefore still plays a very important role in state-of-the-art simulations. This research focuses on turbulence-model mesh sensitivity analysis, where various turbulence models such as S-A (Spalart-Allmaras), SSG (Speziale-Sarkar-Gatski), K-Omega transitional SST (Shear Stress Transport), K-kl-Omega, the γ-Reθ transitional model, and v2-f are evaluated within OpenFOAM. The simulations are conducted for fully developed turbulent flow over a flat plate, where the skin friction coefficient as well as velocity profiles are obtained and compared against experimental values and DNS results. A concrete conclusion is drawn to clarify the mesh sensitivity of the different turbulence models.
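
For the flat-plate case described above, a common sanity check is to compare the computed skin friction coefficient against an empirical correlation. The sketch below evaluates the classical power-law correlation Cf ≈ 0.0592·Re_x^(−1/5), valid roughly for 5×10⁵ < Re_x < 10⁷; the flow conditions are illustrative, not those of the paper's setup:

```python
def cf_turbulent(re_x):
    """Empirical local skin-friction coefficient for a turbulent flat-plate
    boundary layer (power-law correlation, ~5e5 < Re_x < 1e7)."""
    return 0.0592 * re_x ** -0.2

# Example: air at U = 20 m/s, nu = 1.5e-5 m^2/s, station x = 1 m.
U, nu, x = 20.0, 1.5e-5, 1.0
re_x = U * x / nu
cf = cf_turbulent(re_x)

# On a sufficiently fine mesh, a solver's Cf at this station would typically
# be expected to land within a few percent of the correlation; on coarse
# meshes the deviation is one symptom of mesh sensitivity.
print(f"Re_x = {re_x:.2e}, correlation Cf = {cf:.5f}")
```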

Keywords: mesh sensitivity, turbulence models, OpenFOAM, RANS

Procedia PDF Downloads 245
6630 Bayesian Value at Risk Forecast Using Realized Conditional Autoregressive Expectile Model with an Application to Cryptocurrency

Authors: Niya Chen, Jennifer Chan

Abstract:

In the financial market, risk management helps to minimize potential loss and maximize profit. There are two ways to assess risk. The first is to calculate the risk directly from the volatility; the most common such measurements are Value at Risk (VaR), the Sharpe ratio, and beta. Alternatively, we can look at the quantiles of the return distribution to assess risk. Popular return models such as GARCH and stochastic volatility (SV) focus on modeling the return distribution by capturing the volatility dynamics, whereas quantile/expectile methods describe the distribution at extreme return values and allow us to forecast VaR directly from returns. The advantage of these non-parametric methods is that they are not bound by the distributional assumptions of parametric methods; the difference between them is that expectile regression uses a second-order loss function while quantile regression uses a first-order loss function. We consider several quantile functions, different volatility measures, and estimates from some volatility models. To estimate the expectiles, we use the Realized Conditional Autoregressive Expectile (CARE) model with a Bayesian estimation method. We would like to see whether our proposed models outperform existing models on cryptocurrency data, testing mainly on Bitcoin as well as Ethereum.
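
The second-order (asymmetric squared) loss mentioned above can be made concrete: a sample τ-expectile is the fixed point of an iteratively reweighted least-squares step. The sketch below computes expectiles of synthetic fat-tailed returns, a stand-in for cryptocurrency data; it is not the CARE model itself, which adds an autoregressive structure over realized measures and Bayesian estimation:

```python
import numpy as np

def expectile(x, tau, n_iter=200):
    """Sample tau-expectile via iteratively reweighted (asymmetric) least
    squares: minimizes sum w_i (x_i - m)^2 with w_i = tau if x_i > m,
    else 1 - tau. tau = 0.5 recovers the ordinary mean."""
    m = x.mean()
    for _ in range(n_iter):
        w = np.where(x > m, tau, 1.0 - tau)
        m_new = np.average(x, weights=w)
        if abs(m_new - m) < 1e-12:
            break
        m = m_new
    return m

rng = np.random.default_rng(3)
# Fat-tailed synthetic daily returns (Student-t), illustrative only.
returns = rng.standard_t(df=4, size=50_000) * 0.02

# tau = 0.5 gives the mean; a small tau probes the loss tail relevant to VaR.
print("0.50-expectile:", round(expectile(returns, 0.5), 5))
print("0.05-expectile:", round(expectile(returns, 0.05), 5))
```

Under a distributional assumption, each τ-expectile corresponds to a quantile level, which is how expectile estimates are mapped back to a VaR forecast.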

Keywords: expectile, CARE Model, CARR Model, quantile, cryptocurrency, Value at Risk

Procedia PDF Downloads 99
6629 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electric technologies into vehicle design, with the primary hope of improving mobility and road safety while reducing transportation’s environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates CAVs’ fuel consumption and air pollutant emissions (CO, PM, and NOx) and develops linear regression models to predict CAVs’ environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. The amounts of pollutants and fuel consumption were obtained from the macro simulation for the interval 2010 to 2021. Four linear regression models were then proposed to predict the future amounts of CO, NOx, and PM pollutants and of fuel consumption. The results highlight that CAVs’ pollutant emissions and fuel consumption are significantly correlated with the income, age, and race of CAV customers. Furthermore, the reliability of the four statistical models was compared with that of the macro simulation outputs for the year 2030; the error for the three pollutants and fuel consumption obtained by the statistical models in SPSS was less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in Maryland.

Keywords: connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models

Procedia PDF Downloads 429
6628 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation

Authors: Carl van Walraven, Meltem Tuna

Abstract:

Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population (‘concurrent external validation’). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (accuracy of random guessing) to 1 (accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA score correctly ranked all simulated prediction models by the extent of bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.
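
A sketch of the Scaled Brier Score used above, on simulated binary events. The scaling convention is assumed here to be 1 − BS/BS_ref with BS_ref taken from an event-rate-only model, which yields 0 for an uninformative model and 1 for perfect predictions:

```python
import numpy as np

def scaled_brier_score(y, p):
    """Scaled Brier Score: 1 - BS/BS_ref, where BS_ref is the Brier score of
    an uninformative model that predicts the observed event rate for everyone.
    0 = no better than the event rate; 1 = perfect predictions."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    bs = np.mean((p - y) ** 2)
    bs_ref = np.mean((y.mean() - y) ** 2)
    return 1.0 - bs / bs_ref

rng = np.random.default_rng(4)
true_p = rng.uniform(0.05, 0.95, 10_000)               # known risk function
y = (rng.uniform(size=true_p.size) < true_p).astype(float)

sbs_true = scaled_brier_score(y, true_p)               # well-calibrated model
sbs_flat = scaled_brier_score(y, np.full_like(y, y.mean()))  # event-rate model
biased = np.clip(true_p + 0.25, 0, 1)                  # deliberately biased model
sbs_bias = scaled_brier_score(y, biased)

print(f"true model {sbs_true:.3f}, event-rate model {sbs_flat:.3f}, "
      f"biased model {sbs_bias:.3f}")
```

Biasing the risk function lowers the SBS, which is the behaviour the NeRMA ranking in the Results relies on.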

Keywords: prediction model accuracy, scaled brier score, fixed effects methods, concurrent external validation

Procedia PDF Downloads 216
6627 Investigating the Factors Affecting Generalization of Deep Learning Models for Plant Disease Detection

Authors: Praveen S. Muthukumarana, Achala C. Aponso

Abstract:

A large percentage of the global crop harvest is lost to crop diseases. Timely identification and treatment of crop diseases is difficult in many developing nations due to a shortage of trained professionals in agriculture. Many crop diseases can be accurately diagnosed from visual symptoms. In the past decade, deep learning has been successfully utilized in domains such as healthcare, but its adoption in agriculture for plant disease detection is rare. The literature shows that models trained with popular datasets such as PlantVillage do not generalize well on real-world images. This paper attempts to find out how to build plant disease identification models that generalize well to real-world images.

Keywords: agriculture, convolutional neural network, deep learning, plant disease classification, plant disease detection, plant disease diagnosis

Procedia PDF Downloads 131
6626 Deep Learning Based, End-to-End Metaphor Detection in Greek with Recurrent and Convolutional Neural Networks

Authors: Konstantinos Perifanos, Eirini Florou, Dionysis Goutsos

Abstract:

This paper presents and benchmarks a number of end-to-end deep learning models for metaphor detection in Greek. We combine Convolutional Neural Networks and Recurrent Neural Networks with representation learning to bear on the metaphor detection problem for the Greek language. The models presented achieve exceptional accuracy scores, significantly improving on the previous state-of-the-art results, which had already achieved an accuracy of 0.82. Furthermore, no special preprocessing, feature engineering or linguistic knowledge is used in this work. The methods presented achieve an accuracy of 0.92 and an F-score of 0.92 with Convolutional Neural Networks (CNNs) and bidirectional Long Short-Term Memory networks (LSTMs). Comparable results of 0.91 accuracy and 0.91 F-score are also achieved with bidirectional Gated Recurrent Units (GRUs) and Convolutional Recurrent Neural Nets (CRNNs). The models are trained and evaluated only on the basis of training tuples, the related sentences and their labels. The outcome is a state-of-the-art collection of metaphor detection models, trained on limited labelled resources, which can be extended to other languages and similar tasks.

Keywords: metaphor detection, deep learning, representation learning, embeddings

Procedia PDF Downloads 137
6625 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and has become a very active research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not been well solved yet, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination. The algorithms need to consider a variety of possible environmental factors, as the colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce the appropriate result due to variations in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.

Keywords: image processing, illumination equalization, shadow filtering, object detection

Procedia PDF Downloads 205
6624 Applications of the Morphological Variability in River Management: A Study of West Rapti River

Authors: Partha Sarathi Mondal, Srabani Sanyal

Abstract:

Different geomorphic agents produce different landform patterns, and rivers likewise have distinct and diverse landform patterns. Even within a single river course, distinct assemblages of landforms, i.e. morphological variability, are seen. This morphological variability is produced by different river processes, and channel and floodplain morphology helps to interpret those processes. Consequently, morphological variability can be used as an important tool for assessing river processes, hydrological connectivity and river health, which will help us to draw inferences about river processes and, therefore, about the management of river health. The present study documents the West Rapti river, a trans-boundary river flowing through Nepal and India, from its source to its confluence with the Ghaghra river in India. The river shows significant morphological variability throughout its course. The study tries to find out which factors and processes are responsible for the morphological variability of the river and how this variability can be applied in river management practices. For this purpose, the channel and floodplain morphology of the West Rapti river was mapped as accurately as possible, and inferences were then drawn from process-form interactions to understand the factors behind the morphological variability. The study shows that the valley setting of the West Rapti river in the Himalayan region is confined, and in places partly confined, while the channel is single-thread in most parts of the Himalayan region and braided in the valley reaches. In the foothill region the valley is unconfined and the channel is braided; in the middle part the channel is meandering and the valley is unconfined; and the channel is anthropogenically altered in the lower part of the course. The morphology of the West Rapti river is therefore highly diverse, and this variability is produced by different geomorphic processes.
Therefore, for any river management it is essential to sustain this morphological variability, so that the river does not cross a geomorphic threshold and the environmental flow of the river, along with the biodiversity of the riparian region, is maintained.

Keywords: channel morphology, environmental flow, floodplain morphology, geomorphic threshold

Procedia PDF Downloads 358
6623 Chemometric QSRR Evaluation of Behavior of s-Triazine Pesticides in Liquid Chromatography

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

This study considers the selection of the most suitable in silico molecular descriptors for characterizing s-triazine pesticides. Suitable descriptors from among topological, geometrical and physicochemical ones are used to establish quantitative structure-retention relationship (QSRR) models. The models were obtained using linear regression (LR) and multiple linear regression (MLR) analysis; the MLR models were established while avoiding multicollinearity among the selected molecular descriptors. The statistical quality of the established models was evaluated by standard and cross-validation statistical parameters. For detecting similarity or dissimilarity among the investigated s-triazine pesticides and for their classification, principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used and gave similar groupings. This study is financially supported by COST action TD1305.

Keywords: chemometrics, classification analysis, molecular descriptors, pesticides, regression analysis

Procedia PDF Downloads 381
6622 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, which are enhanced with gradient data of different degrees of accuracy, are constructed. Then these Gradient enhanced Kriging surrogate models are strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest fidelity data. While, intuitively, gradient data is useful to enhance the accuracy of surrogate models, the primary motivation behind this work is to investigate if it is also worthwhile incorporating gradient data of varying degrees of accuracy.
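
The single-fidelity building block can be sketched in a few lines: simple Kriging with a squared-exponential covariance is a linear predictor whose weights solve the Gram system. The example below interpolates noise-free samples of a 1D function on synthetic data; it is not the recursive CoKriging coupling itself, and the hyperparameters are fixed rather than estimated:

```python
import numpy as np

def kriging_predict(x_train, y_train, x_new, length=0.2, nugget=1e-10):
    """Simple Kriging (zero-mean GP regression) with a squared-exponential
    covariance; the building block that CoKriging schemes couple recursively.
    A tiny nugget keeps the Gram matrix well conditioned."""
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
    K = k(x_train, x_train) + nugget * np.eye(x_train.size)
    weights = np.linalg.solve(K, k(x_train, x_new))   # Kriging weights
    return weights.T @ y_train

# Cheap samples of an "expensive" function.
x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x)

x_test = np.array([0.25, 0.5, 0.8])
pred = kriging_predict(x, y, x_test)
print(pred.round(3))
```

With zero noise the predictor interpolates the training data exactly (up to the nugget), which is the property that makes Kriging attractive for deterministic simulation data; gradient enhancement adds derivative observations to the same linear system.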

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 541
6621 Measurement of CES Production Functions Considering Energy as an Input

Authors: Donglan Zha, Jiansong Si

Abstract:

Because of its flexibility, the CES (constant elasticity of substitution) production function attracts much interest in economic growth and programming models, and in macroeconomic or micro-macro models. This paper focuses on the development of, and estimation methods for, CES production functions that consider energy as an input. We leave for future research the relaxation of the assumption of constant returns to scale, the introduction of potential input factors, and the generalization of the optimal nested form of multi-factor production functions.
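
For concreteness, a three-factor CES production function with energy as an input, under the constant-returns-to-scale assumption the paper retains (the parameter values below are illustrative, not estimates):

```python
def ces_output(K, L, E, A=1.0, deltas=(0.3, 0.4, 0.3), rho=0.5):
    """Three-factor CES production function with capital K, labour L and
    energy E:
        Q = A * (d_K K^(-rho) + d_L L^(-rho) + d_E E^(-rho))^(-1/rho)
    The elasticity of substitution is sigma = 1 / (1 + rho)."""
    dK, dL, dE = deltas
    return A * (dK * K ** -rho + dL * L ** -rho + dE * E ** -rho) ** (-1 / rho)

q = ces_output(K=100.0, L=50.0, E=20.0)
sigma = 1 / (1 + 0.5)

# Constant returns to scale: doubling all inputs doubles output.
q2 = ces_output(K=200.0, L=100.0, E=40.0)
print(f"Q = {q:.2f}, sigma = {sigma:.2f}, Q(2x inputs)/Q = {q2 / q:.2f}")
```

Estimation in practice fits A, the distribution parameters δ and the substitution parameter ρ to data, often after linearising (e.g. a Kmenta approximation) or via non-linear least squares.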

Keywords: bias of technical change, CES production function, elasticity of substitution, energy input

Procedia PDF Downloads 269
6620 Analysis of Risk Factors Affecting the Motor Insurance Pricing with Generalized Linear Models

Authors: Puttharapong Sakulwaropas, Uraiwan Jaroengeratikun

Abstract:

In the casualty insurance business, optimal premium pricing and adequate costing are important to an insurance company’s risk management. Normally, the insurance pure premium can be determined by multiplying the claim frequency by the claim cost. The aim of this research was to study the application of generalized linear models in selecting the risk factors for models of claim frequency and claim cost used in estimating the pure premium. The data set was the claims record of comprehensive motor insurance provided by an insurance company in Thailand. The study found that the risk factors significantly related to the pure premium at the 0.05 level consisted of the no-claim bonus (NCB) and the use of the car (car code).
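
The pure-premium decomposition stated above (claim frequency times claim cost) can be sketched directly. The GLM step in the paper estimates these two components with rating factors as covariates; the toy example below just computes them per level of a single hypothetical rating factor, with all figures invented:

```python
# Synthetic policy-level aggregates (illustrative): exposure-years, claim
# counts and total claim costs for two levels of a rating factor such as
# the no-claim bonus (NCB).
groups = {
    "ncb_yes": {"exposure": 4000.0, "claims": 160, "total_cost": 2_400_000.0},
    "ncb_no":  {"exposure": 1000.0, "claims": 90,  "total_cost": 1_800_000.0},
}

for name, g in groups.items():
    frequency = g["claims"] / g["exposure"]    # expected claims per policy-year
    severity = g["total_cost"] / g["claims"]   # mean cost per claim
    pure_premium = frequency * severity        # frequency x severity
    print(f"{name}: freq={frequency:.3f}, sev={severity:,.0f}, "
          f"pure premium={pure_premium:,.0f}")
```

Note the identity frequency × severity = total cost / exposure; the value of the GLM formulation (typically Poisson for frequency, Gamma for severity) is in sharing information across many rating factors simultaneously, not in this arithmetic.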

Keywords: generalized linear models, risk factor, pure premium, regression model

Procedia PDF Downloads 455
6619 Ontologies for Social Media Digital Evidence

Authors: Edlira Kalemi, Sule Yildirim-Yayilgan

Abstract:

Online Social Networks (OSNs) are nowadays used widely and intensively for crime investigation and prevention activities; as they provide a lot of information, they are used by law enforcement and intelligence agencies. An extensive review of existing solutions and models for collecting intelligence from this source of information and making use of it for solving crimes is presented in this article. The main focus is on smart solutions and models in which ontologies have been used as the main approach for representing criminal domain knowledge. A framework for a prototype ontology named SC-Ont is described; it defines the terms of the criminal domain ontology and the relations between them. The terms and relations were extracted both during this review and in discussions carried out with domain experts. The development of SC-Ont is still ongoing; in this paper, we report mainly on the motivation for using smart ontology models and the possible benefits of using them for solving crimes.

Keywords: criminal digital evidence, social media, ontologies, reasoning

Procedia PDF Downloads 378
6618 Groundwater Pollution Models for Hebron/Palestine

Authors: Hassan Jebreen

Abstract:

The models presented here of a conservative pollutant in groundwater do not include representation of processes in soils and in the unsaturated zone, or of biogeochemical processes in groundwater. These demonstration models can be used as the basis for more detailed simulations of the impacts of pollution sources at a local scale, but such studies should address processes related to specific pollutant species and should consider local hydrogeology in more detail, particularly in relation to possible impacts on shallow systems, which are likely to respond more quickly to changes in pollutant inputs. The results demonstrate the interaction between groundwater flow fields and pollution sources in abstraction areas, and help to emphasise that wadi development is one of the key elements of water resources planning. The quality of groundwater in the Hebron area shows a gradual increase in chloride and nitrate with time. Since the aquifers in the Hebron district are highly vulnerable due to their karstic nature, continued disposal of untreated domestic and industrial wastewater into the wadi will lead to unacceptably poor drinking water quality, which may ultimately require expensive treatment if significant health problems are to be avoided. Improvements are required in wastewater treatment at the municipal and domestic levels, the latter requiring increased public awareness of the issues, as well as improved understanding of the hydrogeological behaviour of the aquifers.

Keywords: groundwater, models, pollutants, wadis, Hebron

Procedia PDF Downloads 421
6617 Modeling of Daily Global Solar Radiation Using ANN Techniques: A Case Study

Authors: Said Benkaciali, Mourad Haddadi, Abdallah Khellaf, Kacem Gairaa, Mawloud Guermoui

Abstract:

In this study, many experiments were carried out to assess the influence of the input parameters on the performance of the multilayer perceptron, which is one configuration of artificial neural networks. To estimate the daily global solar radiation on a horizontal surface, we developed several models using seven combinations of twelve meteorological and geographical input parameters collected from a radiometric station installed at Ghardaïa city (southern Algeria). To select the best combination, that is, the one providing the best accuracy, six statistical indicators were evaluated, including the root mean square error, mean absolute error, correlation coefficient, and determination coefficient. We noted that multilayer perceptron techniques give the best performance, except when the sunshine duration parameter is not included in the input variables. The maxima of the determination and correlation coefficients are 98.20% and 99.11%, respectively. On the other hand, some empirical models were developed to compare their performances with those of the multilayer perceptron neural networks. The results show that the neural network techniques give the best performance compared to the empirical models.
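The statistical indicators used to rank the models above can be computed as follows. This is a minimal sketch with made-up radiation values; the real study evaluates them over the full Ghardaïa dataset:

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def pearson_r(obs, pred):
    """Correlation coefficient between observed and predicted values."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def r_squared(obs, pred):
    """Coefficient of determination, 1 - SSE/SST."""
    mo = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mo) ** 2 for o in obs)
    return 1 - sse / sst

# Illustrative daily global solar radiation values (MJ/m^2)
observed  = [18.2, 21.5, 24.1, 19.8, 22.7]
predicted = [18.0, 21.9, 23.6, 20.1, 22.4]
print(rmse(observed, predicted), mae(observed, predicted))
print(pearson_r(observed, predicted), r_squared(observed, predicted))
```

Lower RMSE/MAE and higher r/R² indicate a better input-parameter combination.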

Keywords: empirical models, multilayer perceptron neural network, solar radiation, statistical formulas

Procedia PDF Downloads 329
6616 E-Consumers’ Attribute Non-Attendance Switching Behavior: Effect of Providing Information on Attributes

Authors: Leonard Maaya, Michel Meulders, Martina Vandebroek

Abstract:

Discrete Choice Experiments (DCE) are used to investigate how product attributes affect decision-makers’ choices. In DCEs, choice situations consisting of several alternatives are presented from which choice-makers select the preferred alternative. Standard multinomial logit models based on random utility theory can be used to estimate the utilities for the attributes. The overarching principle in these models is that respondents understand and use all the attributes when making choices. However, studies suggest that respondents sometimes ignore some attributes (commonly referred to as Attribute Non-Attendance/ANA). The choice modeling literature presents ANA as a static process, i.e., respondents’ ANA behavior does not change throughout the experiment. However, respondents may ignore attributes due to changing factors like availability of information on attributes, learning/fatigue in experiments, etc. We develop a dynamic mixture latent Markov model to model changes in ANA when information on attributes is provided. The model is illustrated on e-consumers’ webshop choices. The results indicate that the dynamic ANA model describes the behavioral changes better than modeling the impact of information using changes in parameters. Further, we find that providing information on attributes leads to an increase in the attendance probabilities for the investigated attributes.
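The standard multinomial logit choice probabilities mentioned above, and the way attribute non-attendance is usually represented (a zero coefficient on the ignored attribute), can be sketched as follows. The attribute levels and taste weights are hypothetical, not the authors' estimates:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                       # subtract max for numerical stability
    expv = [math.exp(v - m) for v in utilities]
    total = sum(expv)
    return [e / total for e in expv]

def utility(attributes, weights):
    """Linear-in-parameters systematic utility V = sum_k beta_k * x_k."""
    return sum(w * x for w, x in zip(weights, attributes))

# Hypothetical webshop alternatives: (price, delivery days, rating)
alternatives = [(20.0, 2, 4.5), (18.0, 5, 4.0), (25.0, 1, 4.8)]
betas = (-0.10, -0.30, 0.80)                 # assumed taste weights

# ANA: a respondent who ignores price behaves as if beta_price = 0
betas_ana = (0.0, -0.30, 0.80)

probs_full = mnl_probabilities([utility(a, betas) for a in alternatives])
probs_ana = mnl_probabilities([utility(a, betas_ana) for a in alternatives])
print(probs_full)
print(probs_ana)
```

The paper's latent Markov extension then lets the attendance state (full betas vs. zeroed betas) switch between choice tasks, e.g. after information on an attribute is provided.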

Keywords: choice models, discrete choice experiments, dynamic models, e-commerce, statistical modeling

Procedia PDF Downloads 123
6615 Mathematical Models for Drug Diffusion Through the Compartments of Blood and Tissue Medium

Authors: M. A. Khanday, Aasma Rafiq, Khalid Nazir

Abstract:

This paper is an attempt to establish mathematical models to understand the distribution of drug administration in the human body through oral and intravenous routes. Three models were formulated based on the diffusion process using Fick's principle and the law of mass action. The rate constants governing the law of mass action were chosen on the basis of the drug efficacy at the different interfaces. The Laplace transform and eigenvalue methods were used to obtain the solution of the ordinary differential equations describing the rate of change of concentration in the different compartments, viz. blood and tissue medium. The drug concentration in the different compartments was computed using numerical parameters, and the variation of drug concentration with respect to time was illustrated using MATLAB software. It has been observed from the results that the drug concentration decreases in the first compartment and gradually increases in the subsequent compartments.
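The eigenvalue solution of a linear two-compartment system can be sketched as follows: for dC1/dt = -k1·C1 and dC2/dt = k1·C1 - k2·C2, the eigenvalues of the system matrix are -k1 and -k2, giving the closed-form concentrations below. The rate constants and initial dose are assumed for illustration, not the paper's fitted parameters:

```python
import math

def two_compartment(c0, k1, k2, t):
    """Analytic (eigenvalue) solution of the linear two-compartment model:
       dC1/dt = -k1*C1            (drug leaving the blood compartment)
       dC2/dt =  k1*C1 - k2*C2    (drug entering/leaving the tissue compartment)
    Eigenvalues of the system matrix are -k1 and -k2 (k1 != k2 assumed)."""
    c1 = c0 * math.exp(-k1 * t)
    c2 = c0 * k1 * (math.exp(-k1 * t) - math.exp(-k2 * t)) / (k2 - k1)
    return c1, c2

# Assumed rate constants (1/h) and initial blood concentration (mg/L)
k1, k2, c0 = 0.8, 0.2, 10.0
for t in (0.0, 1.0, 2.0, 4.0):
    c1, c2 = two_compartment(c0, k1, k2, t)
    print(f"t={t:4.1f} h  blood={c1:6.3f}  tissue={c2:6.3f}")
```

Running this reproduces the qualitative behaviour the abstract reports: the blood concentration decays monotonically while the tissue concentration first rises and later falls.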

Keywords: Laplace transform, diffusion, eigenvalue method, mathematical model

Procedia PDF Downloads 321
6614 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant analysed comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used in the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The inclusion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted power supply over time. As a consequence, a photovoltaic and thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable fuel supplies. To manage the plant efficiently, an energy dispatch strategy is created to control the flow of energy between the power sources and the thermal and electric storages. In this article, we elaborate on models of the equipment, and from these models we extract parameters useful for building load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized 25% above the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat.
The combination of the PVT panel and electrical storage to support the prioritized loads in an emergency scenario is evaluated on the two days of the year with the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only on the day with very low irradiation levels does it also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice builder and air conditioning energy consumption by 40%.
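The minimum-load protection described above can be sketched as a simple dispatch rule: whenever the genset would run below its minimum acceptable load, switch in enough small dump resistors to lift the load, turning the excess electricity into useful heat. The threshold fraction and resistor step size below are assumptions for illustration, not the plant's actual values:

```python
import math

def dispatch_dump_loads(electric_load_kw, genset_rated_kw,
                        min_load_fraction=0.4, resistor_step_kw=0.5):
    """Return the dump-load power (kW) needed to keep a diesel genset above
    its minimum acceptable load threshold. Resistors switch in discrete
    steps, so the deficit is rounded up to a whole number of steps."""
    min_load_kw = min_load_fraction * genset_rated_kw
    if electric_load_kw >= min_load_kw:
        return 0.0                        # genset already loaded enough
    deficit = min_load_kw - electric_load_kw
    steps = math.ceil(deficit / resistor_step_kw)
    return steps * resistor_step_kw       # excess power becomes useful heat

# Hypothetical sizing: genset 25% above a 16 kW peak demand -> 20 kW rated
print(dispatch_dump_loads(5.0, 20.0))     # below the 8 kW threshold
print(dispatch_dump_loads(9.0, 20.0))     # above threshold, no dump load
```

A full dispatch strategy would layer this rule under the storage and PVT priority logic described in the article.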

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 162
6613 Deep Learning Approach for Chronic Kidney Disease Complications

Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia

Abstract:

Quantification of the risks associated with the development of complications from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort study was carried out that included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, >25% decrease in estimated glomerular filtration rate (eGFR), and renal replacement therapy (RRT). Models were evaluated and compared with Random Survival Forest (RSF) based on the concordance index (C-index) metric. 2,143 patients were included. Two models were developed for each outcome. The Deep Neural Network (DNN) model reported a C-index of 0.9867 for CKD stage progression, 0.9905 for the reduction in eGFR, and 0.9867 for RRT. The RSF model reached a C-index of 0.6650 for CKD stage progression, 0.6759 for decreased eGFR, and 0.8926 for RRT. DNN models applied in a survival analysis context, with longitudinal covariates considered at the start of follow-up, can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
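The concordance index used to compare the models above can be computed as follows. This is a minimal pure-Python version of Harrell's C-index handling right censoring, with made-up follow-up data; real studies would use a survival analysis library:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs (the earlier time must be
    an observed event, not a censoring), count pairs where the subject who
    failed earlier has the higher risk score; score ties count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if subject i has an event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Follow-up times (years), event indicator (1 = progression), model risk score
times  = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1,   1,   0,   1,   0]
risk   = [0.9, 0.8, 0.3, 0.05, 0.1]
print(concordance_index(times, events, risk))  # -> 0.875
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, which is why the DNN values near 0.99 dominate the RSF values near 0.67.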

Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis

Procedia PDF Downloads 121
6612 Modelling Conceptual Quantities Using Support Vector Machines

Authors: Ka C. Lam, Oluwafunmibi S. Idowu

Abstract:

Uncertainty in cost is a major factor affecting the performance of construction projects. Several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early pre-design cost estimates. Hence, the aim of the current research is the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used to construct conceptual quantity models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. The R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were found to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
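The bootstrap resampling step mentioned above can be sketched as follows, with a simple one-variable least-squares fit standing in for the support vector regression (linear regression is also among the techniques the study combines) and made-up footprint/quantity data:

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def bootstrap_slope(xs, ys, n_boot=1000, seed=42):
    """Resample (x, y) pairs with replacement and refit, giving an
    empirical 95% confidence interval for the slope estimate."""
    rng = random.Random(seed)
    n = len(xs)
    slopes = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        sx = [xs[i] for i in idx]
        if len(set(sx)) < 2:          # degenerate resample: skip
            continue
        slopes.append(fit_slope(sx, [ys[i] for i in idx]))
    slopes.sort()
    k = len(slopes)
    return slopes[int(0.025 * k)], slopes[min(k - 1, int(0.975 * k))]

# Hypothetical data: building footprint (m^2) vs concrete quantity (m^3)
footprint = [200, 350, 500, 650, 800, 950]
concrete  = [55, 92, 130, 171, 205, 248]
lo, hi = bootstrap_slope(footprint, concrete)
print(f"95% bootstrap CI, m^3 concrete per m^2 footprint: [{lo:.3f}, {hi:.3f}]")
```

The same resampling wrapper applies unchanged around an SVR fit; only `fit_slope` would be swapped for the SVR prediction model.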

Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression

Procedia PDF Downloads 200
6611 Performance Analysis of Double Gate FinFET at Sub-10 nm Node

Authors: Suruchi Saini, Hitender Kumar Tyagi

Abstract:

With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As a device is scaled down, several short-channel effects occur. To minimize these scaling limitations, some device architectures have been developed in the semiconductor industry, and FinFET is one of the most promising structures. The double-gate 2D Fin field-effect transistor has the benefit of suppressing short-channel effects (SCEs) and functions well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, the device performance is analyzed for different structure parameters. The Id-Vg curve is a robust tool of central importance for transistor modeling, circuit design, performance optimization, and quality control in electronic devices and integrated circuits, and for understanding field-effect transistors. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg curve and transconductance is examined. Device performance was affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes.
For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; it is therefore crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized with a channel width of 3 nm for a gate length of 10 nm, but source and drain length have no significant effect on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. In this research, it is also concluded that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and of 2380 S/m for a source and drain extension length of 5 nm.
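Extracting the on-off ratio and peak transconductance from an Id-Vg sweep, as done above, can be sketched as follows. The sweep below is synthetic illustrative data (a subthreshold exponential followed by an on-region), not MuGFET output:

```python
def on_off_ratio(id_curve):
    """Ratio of maximum (on-state) to minimum (off-state) drain current."""
    return max(id_curve) / min(id_curve)

def peak_transconductance(vg, id_curve):
    """Maximum gm = dId/dVg, estimated by finite differences."""
    gms = [(id_curve[i + 1] - id_curve[i]) / (vg[i + 1] - vg[i])
           for i in range(len(vg) - 1)]
    return max(gms)

# Synthetic Id-Vg sweep at fixed Vd: gate voltage (V) and drain current (A)
vg   = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
id_a = [1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 2e-6, 8e-6, 1.6e-5]

print(f"Ion/Ioff = {on_off_ratio(id_a):.2e}")
print(f"peak gm  = {peak_transconductance(vg, id_a):.2e} S")
```

On real simulation output, gm is usually normalized by fin width (hence units such as S/m), and the off-current is read at a defined Vg rather than the raw minimum.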

Keywords: current on-off ratio, FinFET, short-channel effects, transconductance

Procedia PDF Downloads 53
6610 Cucurbita pepo L. Attenuates Diabetic Neuropathy by Targeting Oxidative Stress in STZ-Nicotinamide Induced Diabetic Rats

Authors: Navpreet Kaur, Randhir Singh

Abstract:

Diabetic neuropathy is one of the most common microvascular complications of diabetes mellitus, affecting more than 50% of diabetic patients. The present study targeted oxidative-stress-mediated nerve damage in diabetic rats using a hydro-alcoholic extract of Cucurbita pepo L. (Family: Cucurbitaceae) and assessed its potential in the treatment of diabetic neuropathy. Diabetic neuropathy was induced in Wistar rats by injection of streptozotocin (65 mg/kg, i.p.) 15 min after nicotinamide (230 mg/kg, i.p.) administration. The hydro-alcoholic extract of C. pepo seeds was assessed by oral administration at 100, 200 and 400 mg/kg in STZ-nicotinamide induced diabetic rats. Thermal hyperalgesia (Eddy's hot plate and tail immersion), mechanical hyperalgesia (Randall-Selitto) and tactile allodynia (von Frey hair test) were evaluated in all groups of streptozotocin diabetic rats to assess the extent of neuropathy. Tissue (sciatic nerve) antioxidant enzyme levels (SOD, CAT, GSH and LPO) were measured, along with the formation of AGEs in serum, to assess the effect of the hydro-alcoholic extract of C. pepo in ameliorating oxidative stress. Diabetic rats exhibited significantly decreased tail-flick latency in the tail-immersion test and decreased paw-withdrawal threshold in both the Randall-Selitto and von Frey hair tests. The decrease in nociceptive threshold was accompanied by significantly increased oxidative stress in the sciatic nerve of diabetic rats. Treatment with the C. pepo hydro-alcoholic extract significantly attenuated all the behavioral and biochemical alterations in a dose-dependent manner. C. pepo attenuated the diabetic condition and also reversed neuropathic pain through modulation of oxidative stress, and thus it may find application as a possible therapeutic agent against diabetic neuropathy.

Keywords: advanced glycation end products, antioxidant enzymes, Cucurbita pepo, hyperglycemia

Procedia PDF Downloads 279
6609 Models of Environmental Crack Propagation of Some Aluminium Alloys (7xxx)

Authors: H. A. Jawan

Abstract:

This review describes the models of environment-related crack propagation in 7xxx aluminium alloys developed over the last few decades. Knowledge of the effects of different factors on the susceptibility to stress corrosion cracking (SCC) makes it possible to propose sound mechanisms of crack advancement. A reliable cracking mechanism, in turn, allows the optimum chemical composition and thermal treatment conditions to be proposed, resulting in the microstructure best suited to the real environmental conditions and stress state.

Keywords: microstructure, environmental, propagation, mechanism

Procedia PDF Downloads 408