Search results for: generative models
5530 Model Averaging in a Multiplicative Heteroscedastic Model
Authors: Alan Wan
Abstract:
In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.
Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk
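The weight-choice step lends itself to a short numerical sketch. The snippet below is a minimal illustration rather than the paper's exact plug-in criterion: it picks averaging weights on the simplex that minimise an estimated squared prediction risk, here approximated by in-sample squared error on synthetic data.

```python
import numpy as np
from scipy.optimize import minimize

def averaging_weights(preds, y):
    """preds: (n_obs, n_models) candidate-model predictions; y: observed response."""
    m = preds.shape[1]
    risk = lambda w: np.mean((y - preds @ w) ** 2)   # simplified plug-in risk estimate
    res = minimize(risk, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
    return res.x

rng = np.random.default_rng(0)
y = rng.normal(size=50)
preds = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(50, 3))
print(averaging_weights(preds, y))  # the lowest-risk candidate attracts most weight
```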
Procedia PDF Downloads 385

5529 Competency Model as a Key Tool for Managing People in Organizations: Presentation of a Model
Authors: Andrea Čopíková
Abstract:
Competency-based management is a new approach to management that addresses the complexity organizations face, with the aim of finding and solving an organization's problems and learning how to avoid them in the future. It teaches organizations to create, beyond a merely temporary state of stability, a vital organization that is permanently able to utilize and profit from internal and external opportunities. The aim of this paper is to propose a process of competency model design, on the basis of which a competency model for a financial department manager in a production company will be created. Competency models are a very useful tool in many personnel processes in any organization. They are used for the recruitment and selection of employees, for designing training and development activities, and for employee evaluation, and they can serve as a guide for career planning and as a tool for succession planning, especially for managerial positions. When creating the competency model, the AHP (Analytic Hierarchy Process) method and quantitative pairwise comparison (Saaty's method) are used; Saaty's method is among the most widely used methods for determining weights and is part of the AHP procedure. The introductory part of the paper presents research results on the use of competency models in practice and then explains the notions of competency and competency model. The application part describes in detail the proposed methodology for creating competency models, on the basis of which the competency model for the position of financial department manager in a foreign manufacturing company is created. The conclusion of the paper presents the final competency model for the above-mentioned position. The competency model divides the selected competencies into three groups: managerial, interpersonal, and functional. The model describes in detail the individual levels of each competency, their target value (required level), and their level of importance.
Keywords: analytic hierarchy process, competency, competency model, quantitative pairwise comparison
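The Saaty pairwise-comparison step can be illustrated briefly. The sketch below uses an assumed 3×3 judgment matrix for the three competency groups (not the study's actual judgments) to derive priority weights from the principal eigenvector and check consistency.

```python
import numpy as np

def ahp_weights(A):
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # normalized priority weights
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index (small n only)
    return w, ci / ri                           # weights, consistency ratio

# managerial vs. interpersonal vs. functional competency groups (assumed judgments)
A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]])
w, cr = ahp_weights(A)
print(w, cr)  # CR < 0.1 indicates acceptable consistency
```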
Procedia PDF Downloads 244

5528 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, stacking different impairment-compensation algorithms increases transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on a DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation-mapped signals at the transmitter and compensate for multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate effectively for multiple impairments in fiber transmission.
Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
Procedia PDF Downloads 143

5527 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach
Authors: Chen-Yin Kuo, Yung-Hsin Lee
Abstract:
Previous studies applying the residual income valuation (RIV) model generally use panel data and single-equation models to forecast stock prices. Unlike these, this paper uses Taiwanese longitudinal data to estimate multi-equation time-series models such as the vector autoregression (VAR) and the vector error correction model (VECM) and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance with two instruments. Consistent with extant research, the major finding is that the VECM outperforms the other three models in forecasting for three stock sectors over all horizons. This implies that an error correction term containing long-run information helps improve forecasting accuracy. Moreover, the composite pattern shows that at longer horizons the VECM produces the greater reduction in errors and performs substantially better than the VAR.
Keywords: residual income valuation model, vector error correction model, out-of-sample forecasting, forecasting accuracy
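A hedged sketch of the VAR-versus-VECM out-of-sample comparison, using statsmodels on simulated cointegrated series rather than the Taiwanese data; lag orders and cointegration rank are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(size=120))              # shared stochastic trend
df = pd.DataFrame({"price": trend + rng.normal(size=120),
                   "fundamental": 0.8 * trend + rng.normal(size=120)})
train, test = df.iloc[:-8], df.iloc[-8:]             # hold out 8 quarters

vecm_fc = VECM(train, k_ar_diff=2, coint_rank=1).fit().predict(steps=8)
var_res = VAR(train).fit(3)
var_fc = var_res.forecast(train.values[-var_res.k_ar:], steps=8)

for name, fc in (("VECM", vecm_fc), ("VAR", var_fc)):
    print(name, "RMSE:", np.sqrt(np.mean((test.values - fc) ** 2)))
```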
Procedia PDF Downloads 316

5526 Estimation of Noise Barriers for Arterial Roads of Delhi
Authors: Sourabh Jain, Parul Madan
Abstract:
Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, a rising number of vehicles, and transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) exceeds the standard permissible value at all locations. Noise barriers or enclosures are useful in obtaining an effective reduction of traffic noise disturbance in urbanized areas. The US Federal Highway Administration (FHWA) model and the UK's Calculation of Road Traffic Noise (CORTN) were used to develop spreadsheets for noise prediction. Spreadsheets were also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise and for redesigning them as noise barriers. The study also examined the changes in noise level due to the designed noise barriers using both the FHWA and CORTN models. During data collection, it was found that receivers are located far away from the road at the Rithala and Moolchand sites; hence, the extra barrier height needed to meet the prescribed limits was small, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33 and CORTN with an average percentage error of -8.5. At all sites, noise levels at the receivers exceeded the standard limit of 55 dB. The calculations showed that existing walls are reducing noise levels: the average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height because of the residential buildings abutting the road; a large traffic volume was observed there since it is a national highway, and the attenuation of noise due to the propagation effect was very small at this site. Since the FHWA and CORTN models were implemented in an Excel program, laborious noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.
Keywords: FHWA, CORTN, noise sources, noise barriers
Procedia PDF Downloads 133

5525 Logistics Model for Improving Quality in Railway Transport
Authors: Eva Nedeliakova, Juraj Camaj, Jaroslav Masek
Abstract:
This contribution focuses on a methodology for identifying levels of quality and improving quality through a new logistics model in railway transport. It is oriented towards the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process within the logistics chain can be taken into account. Various models describe the improvement of quality while emphasizing the time factor throughout the whole transportation logistics chain. The quality of services in railway transport can be determined from the existing level of service quality by detecting the causes of dissatisfaction among employees as well as customers, and by uncovering strengths and weaknesses. The new logistics model is able to recognize critical processes in the logistics chain. It includes a service quality rating that must respect the specific properties of services, namely their unrepeatability, impalpability, consumption at the time they are provided, and, in particular, their changeability, which is a significant factor in the conditions of rail transport as well. These peculiarities influence service quality in the face of constantly increasing requirements, and they call for new, progressive attitudes towards service quality rating.
Keywords: logistics model, quality, railway transport
Procedia PDF Downloads 568

5524 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models
Authors: Chad Goldsworthy, B. Rajeswari Matam
Abstract:
The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve, and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing those of manual human classification. The high class imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of focal loss, in comparison to the traditional cross-entropy loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although focal loss slightly decreased the accuracy on the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five, and ten (minority) species by 37.5%, 15.7%, and 10.8%, respectively, as well as improving the overall accuracy by 2.96%.
Keywords: convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation
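The loss function itself is compact. A minimal NumPy sketch, assuming softmax outputs and integer labels, shows how focal loss down-weights well-classified (majority) examples relative to cross-entropy.

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=1.0, eps=1e-12):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).
    probs: (n, n_classes) softmax outputs; labels: (n,) integer classes."""
    p_t = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t))

probs = np.array([[0.95, 0.05],    # easy majority-class example
                  [0.40, 0.60]])   # hard minority-class example
labels = np.array([0, 1])
print(focal_loss(probs, labels, gamma=0))  # gamma=0 recovers cross-entropy
print(focal_loss(probs, labels, gamma=2))  # the easy example contributes far less
```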
Procedia PDF Downloads 191

5523 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] at the most efficient conversion. First, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate predictions, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
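The outlier-removal step can be sketched briefly. The snippet below, with illustrative eps/min_samples values rather than the study's tuned settings, drops DBSCAN noise points before fitting an RF regressor on synthetic stand-in data.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))                    # e.g. temperature, feed ratios
y = X @ np.array([1.0, -0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=200)
X[:5] += 8.0                                     # inject a few gross outliers

# DBSCAN labels low-density points as noise (-1); keep only clustered points
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(StandardScaler().fit_transform(X))
mask = labels != -1
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[mask], y[mask])
print(mask.sum(), "of", len(X), "points kept; R^2 =", rf.score(X[mask], y[mask]))
```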
Procedia PDF Downloads 86

5522 Transportation Accidents Mortality Modeling in Thailand
Authors: W. Sriwattanapongse, S. Prasitwattanaseree, S. Wongtrangan
Abstract:
Transportation accident mortality is a major problem that leads to the loss of human lives and economic losses. The objective was to identify suitable statistical models for estimating mortality rates due to transportation accidents in Thailand, using data from 2000 to 2009. The data were taken from death certificates in the vital registration database. The number of deaths and the mortality rates were computed, classified by gender, age, year, and region. There were 114,790 transportation accident deaths. The highest average age-specific transport accident mortality rate was 3.11 per 100,000 per year, in males in the Southern region, and the lowest was 1.79 per 100,000 per year, in females in the North-East region. Linear, Poisson, and negative binomial models were fitted, and the best model was chosen based on the analysis of deviance and AIC. The negative binomial model was clearly the most appropriate fit.
Keywords: transportation accidents, mortality, modeling, analysis of deviance
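A minimal sketch of the model-choice step: fit Poisson and negative binomial GLMs with a population offset to simulated counts (not the registry data) and compare AIC and deviance.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
X = sm.add_constant(rng.normal(size=(n, 2)))       # e.g. coded gender, region
pop = rng.integers(50_000, 500_000, size=n)        # population at risk
mu = np.exp(X @ [-9.5, 0.4, -0.2]) * pop
y = rng.negative_binomial(n=2, p=2 / (2 + mu))     # overdispersed death counts

pois = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(pop)).fit()
negb = sm.GLM(y, X, family=sm.families.NegativeBinomial(), offset=np.log(pop)).fit()
print("Poisson AIC:", pois.aic, " deviance:", pois.deviance)
print("NegBin  AIC:", negb.aic, " deviance:", negb.deviance)  # NB fits better here
```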
Procedia PDF Downloads 244

5521 Numerical Simulation of Axially Loaded to Failure Large Diameter Bored Pile
Authors: M. Ezzat, Y. Zaghloul, T. Sorour, A. Hefny, M. Eid
Abstract:
The ultimate capacity of large diameter bored piles is usually determined from pile loading tests, as recommended by several international codes and foundation design standards. However, piles of this type are seldom loaded until apparent failure is achieved in practice. In this paper, numerical analyses are carried out to simulate the load test of a large diameter bored pile performed at the location of the Alzey highway bridge project (Germany). Test results for the pile load-settlement relationship up to failure, as well as results for the base and shaft resistances, are available. Apparent failure was indicated in this test by the significant increase of the induced settlement during the last load increment applied to the pile head. Measurements from this pile load test are used to assess the quality of the numerical models investigated. Three different soil material models are implemented in the analyses: Mohr-Coulomb (MC), Soft Soil (SS), and Modified Mohr-Coulomb (MMC). Very good agreement is obtained between the field-measured settlement and the settlement calculated using the MMC model. The results also show that the MMC constitutive model is superior to the MC and SS models in predicting the ultimate base and shaft resistances of the large diameter bored pile. After calibrating the numerical model, the behavior of large diameter bored piles under axial loads is discussed and the formation of the plastic zone around the pile is explored. The results show that the plastic zone below the base of the pile at failure extends laterally to about four times the pile diameter and vertically to about three times the pile diameter.
Keywords: ultimate capacity, large diameter bored piles, plastic zone, failure, pile load test
Procedia PDF Downloads 143

5520 Quantitative Structure-Activity Relationship Modeling of Detoxication Properties of Some 1,2-Dithiole-3-Thione Derivatives
Authors: Nadjib Melkemi, Salah Belaidi
Abstract:
Quantitative structure-activity relationship (QSAR) studies have been performed on nineteen molecules of 1,2-dithiole-3-thione analogues. The compounds used are potent inducers of enzymes involved in the maintenance of reduced glutathione pools, as well as of phase-2 enzymes important to electrophile detoxication. A multiple linear regression (MLR) procedure was used to design the relationships between molecular descriptors and the detoxication properties of the 1,2-dithiole-3-thione derivatives. The predictivity of the model was estimated by cross-validation with the leave-one-out method. Our results suggest a QSAR model based on the following descriptors: qS2, qC3, qC5, qS6, DM, Pol, log P, MV, SAG, HE, and EHOMO for the specific activity of quinone reductase; and qS1, qS2, qC3, qC4, qC5, qS6, DM, Pol, log P, MV, SAG, HE, and EHOMO for the production of growth hormone. To confirm the predictive power of the models, an external set of molecules was used. High correlation between experimental and predicted activity values was observed, indicating the validation and good quality of the derived QSAR models.
Keywords: QSAR, quinone reductase activity, production of growth hormone, MLR
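The MLR-with-LOO workflow is easily sketched. The snippet below uses random placeholder descriptors (not the dithiolethione data) and reports the leave-one-out cross-validated Q².

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(19, 5))                 # stand-ins for e.g. qS2, DM, log P
y = X @ [0.8, -0.4, 0.3, 0.1, 0.5] + rng.normal(scale=0.2, size=19)

pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - pred) ** 2)                       # predictive residual sum of squares
q2 = 1 - press / np.sum((y - y.mean()) ** 2)          # LOO cross-validated Q^2
print("Q^2 =", round(q2, 3))
```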
Procedia PDF Downloads 350

5519 Named Entity Recognition System for Tigrinya Language
Authors: Sham Kidane, Fitsum Gaim, Ibrahim Abdella, Sirak Asmerom, Yoel Ghebrihiwot, Simon Mulugeta, Natnael Ambassager
Abstract:
The lack of annotated datasets is a bottleneck to the progress of NLP in low-resourced languages. The work presented here consists of large-scale annotated datasets and models for a named entity recognition (NER) system for the Tigrinya language. Our manually constructed corpus comprises over 340K words tagged for NER, with over 118K of the tokens also having part-of-speech (POS) tags, annotated with 12 distinct classes of entities represented using several types of tagging schemes. We conducted extensive experiments covering convolutional neural networks and transformer models; the highest performance achieved is an 88.8% weighted F1-score. These results are especially noteworthy given the unique challenges posed by Tigrinya’s distinct grammatical structure and complex word morphology. The system can be an essential building block for the advancement of NLP systems in Tigrinya and other related low-resourced languages, and it can serve as a bridge for cross-referencing against higher-resourced languages.
Keywords: Tigrinya NER corpus, TiBERT, TiRoBERTa, BiLSTM-CRF
Procedia PDF Downloads 131

5518 Prediction of Compressive Strength Using Artificial Neural Network
Authors: Vijay Pal Singh, Yogesh Chandra Kotiyal
Abstract:
Structures are a combination of various load-carrying members that transfer the loads from the superstructure to the foundation safely. At the design stage, the loading of the structure is defined and appropriate material choices are made based upon material properties, mainly related to strength. The strength of materials keeps reducing with time because of many factors, such as environmental exposure and deformation caused by unpredictable external loads. Hence, various techniques are used to predict the strength of materials in structures. Among these, non-destructive techniques (NDT) are the ones that can predict the strength without damaging the structure. In the present study, the compressive strength of concrete has been predicted using an artificial neural network (ANN). The predicted strength was compared with the experimentally obtained compressive strength of concrete, and equations were developed for different models. A good correlation has been obtained between the strength predicted by these models and the experimental values. Further, a correlation has been developed using two NDT techniques for the prediction of strength by regression analysis. It was found that the percentage error is reduced when the techniques are combined rather than used singly.
Keywords: rebound, ultrasonic pulse, penetration, ANN, NDT, regression
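A hedged sketch of the ANN step: a small MLP maps two NDT readings (rebound number and ultrasonic pulse velocity) to compressive strength, with a synthetic relation standing in for the experimental calibration data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
rebound = rng.uniform(20, 50, 300)                     # rebound hammer number
upv = rng.uniform(3.5, 5.0, 300)                       # pulse velocity, km/s
strength = 0.9 * rebound + 8.0 * upv - 20 + rng.normal(scale=2, size=300)  # MPa

X = np.column_stack([rebound, upv])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000,
                                 random_state=0)).fit(X, strength)
print(ann.predict([[40, 4.5]]))   # predicted strength for one specimen, MPa
```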
Procedia PDF Downloads 428

5517 Regression for Doubly Inflated Multivariate Poisson Distributions
Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta
Abstract:
Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, when the number of observations in some cells is much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher compared to the other cells, and we develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for doubly inflated multivariate count data. To illustrate the proposed methodologies, we present real data containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation of the unknown parameters of the models.
Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions
Procedia PDF Downloads 156

5516 Modelling the Effect of Biomass Appropriation for Human Use on Global Biodiversity
Authors: Karina Reiter, Stefan Dullinger, Christoph Plutzar, Dietmar Moser
Abstract:
Due to population growth and changing patterns of production and consumption, the demand for natural resources and, as a result, the pressure on Earth’s ecosystems are growing. Biodiversity mapping can be a useful tool for assessing species endangerment or detecting hotspots of extinction risk. This paper explores the benefits of using the change in trophic energy flows caused by the human alteration of the biosphere in biodiversity mapping. To this end, multiple linear regression models were developed to explain species richness in areas with no human influence (i.e., wilderness) for three taxonomic groups (birds, mammals, amphibians). The models were then applied to predict (I) potential global species richness using the net primary production of potential natural vegetation (NPPpot) and (II) global ‘actual’ species richness after biomass appropriation using the NPP remaining in ecosystems after harvest (NPPeco). By calculating the difference between predicted potential and predicted actual species numbers, maps of estimated species richness loss were generated. The results show that biomass appropriation for human use can indeed be linked to biodiversity loss. Areas for which the models predicted high species loss coincide with areas where species endangerment and extinctions are recorded as particularly high by the International Union for Conservation of Nature and Natural Resources (IUCN). Furthermore, the analysis revealed that while the species distribution maps of the IUCN Red List of Threatened Species used for this research can identify hotspots of biodiversity loss in large parts of the world, the classification system for threatened and extinct species needs to be revised to better reflect local risks of extinction.
Keywords: biodiversity loss, biomass harvest, human appropriation of net primary production, species richness
Procedia PDF Downloads 130

5515 Improvement of the Aerodynamic Behaviour of a Land Rover Discovery 4 in Turbulent Flow Using Computational Fluid Dynamics (CFD)
Authors: Ahmed Al-Saadi, Ali Hassanpour, Tariq Mahmud
Abstract:
The main objective of this study is to investigate ways to reduce the aerodynamic drag coefficient and increase the stability of a full-size sport utility vehicle using three-dimensional computational fluid dynamics (CFD) simulation. The baseline model in the simulation was the Land Rover Discovery 4. Many aerodynamic devices and external design modifications were used in this study, and these drag-reduction techniques were tested individually and in combination to obtain the best design. All new models have the same capacity and comfort as the baseline model. A uniform freestream air velocity at the inlet, ranging from 28 m/s to 40 m/s, was used. ANSYS Fluent software (version 16.0) was used to simulate all models. The drag coefficient obtained from ANSYS Fluent for the baseline model was validated against experimental data. It is found that the use of modern aerodynamic add-on devices and modifications has a significant effect in reducing the aerodynamic drag coefficient.
Keywords: aerodynamics, RANS, sport utility vehicle, turbulent flow
Procedia PDF Downloads 316

5514 Analysis Of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington’s Disease and Amyotrophic Lateral Sclerosis
Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi
Abstract:
Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed at counterbalancing the motor disability have great market potential. Animal models of chronic neurodegenerative diseases display a number of deteriorating motor phenotypes during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods are often limited to evaluating gross motor functions, and only at advanced stages of the disease phenotype. The traditional motor assays most commonly applied in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture subtle motor dysfunction and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic, applied both in diagnosis and in determining the therapeutic response to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three different chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) of amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models of Huntington’s disease (HD). The readouts from walking behavior included gait properties with kinematic data and body movement trajectories, including analysis of various points of interest such as the movement and position of landmarks on the torso, tail, and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction was evident with conventional motor assays. Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets from the HD models were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller, focused set of mutually uncorrelated gait parameters showing a strong genotype difference. The kinematic fine motor analysis of the transgenic animal models described in this presentation shows that this method is a sensitive, objective, and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. The analysis yields a comprehensive set of fine motor parameters for each model, and these parameters provide a better understanding of disease progression and greater assay sensitivity for therapeutic testing compared to classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, the alterations in gait were evident several weeks earlier than with traditional gross motor assays.
Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole-body movement patterns, such as in relation to joints and various body parts, longitudinally, providing a sophisticated and translatable method for disseminating motor components in rodent disease models and evaluating therapeutic interventions.
Keywords: gait analysis, kinematics, motor impairment, inherent feature
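The PCA step can be sketched as follows; the gait matrix is simulated, standing in for the ~80 measured parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_mice, n_params = 40, 80
latent = rng.normal(size=(n_mice, 3))              # a few underlying gait factors
gait = latent @ rng.normal(size=(3, n_params)) \
       + rng.normal(scale=0.3, size=(n_mice, n_params))   # correlated parameters

# compress correlated parameters into a few uncorrelated component scores
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(gait))
genotype = np.repeat([0, 1], n_mice // 2)          # 0 = wild-type, 1 = transgenic
print("PC1 mean, WT vs TG:",
      scores[genotype == 0, 0].mean(), scores[genotype == 1, 0].mean())
```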
Procedia PDF Downloads 355

5513 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: the UK’s Control of Substances Hazardous to Health (COSHH), Europe’s Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Netherlands’ Dose-Related Effect Assessment Model (DREAM), the Netherlands’ Stoffenmanager (STOFFEN), Nicaragua’s Dermal Exposure Ranking Method (DERM), and the USA/Canada Public Health Engineering Department (PHED) tool. Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson’s test. The results show that COSHH could not discriminate the strength of its decision factors, because all industries evaluated fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency have a positive correlation. There is a positive correlation between skin exposure, relative working time, and the working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and the exposure time have a positive correlation. We also found high correlation for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models have poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
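The validation step reduces to Pearson's test between semi-quantitative scores and quantitative estimates. A minimal sketch with placeholder values for the five industries (not the study's measurements):

```python
from scipy.stats import pearsonr

quantitative = [12.1, 30.5, 8.2, 45.0, 22.3]   # modelled dermal exposure (illustrative)
derm_scores = [140, 310, 95, 470, 240]         # DERM semi-quantitative scores
stoffen_scores = [3, 2, 3, 3, 2]               # STOFFEN risk bands

for name, scores in (("DERM", derm_scores), ("STOFFEN", stoffen_scores)):
    r, p = pearsonr(quantitative, scores)      # correlation and its p-value
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```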
Procedia PDF Downloads 169

5512 Provenance in Scholarly Publications: Introducing the provCite Ontology
Authors: Maria Joseph Israel, Ahmed Amer
Abstract:
Our work aims to broaden the application of provenance technology beyond its traditional domains of scientific workflow management and database systems by offering a general provenance framework to capture richer and extensible metadata in unstructured textual data sources such as literary texts, commentaries, translations, and the digital humanities. Specifically, we demonstrate the feasibility of capturing and representing expressive provenance metadata, including more of the context for citing scholarly works (e.g., the authors’ explicit or inferred intentions at the time of developing their research content for publication), while also supporting subsequent augmentation with similar additional metadata (by third parties, be they human or automated). To better capture the nature and types of possible citations, our proposed provenance scheme, metaScribe, extends standard provenance conceptual models to form the proposed provCite ontology. This provides a conceptual framework that can capture and describe more of the functional and rhetorical properties of a citation than can be achieved with any current model.
Keywords: knowledge representation, provenance architecture, ontology, metadata, bibliographic citation, semantic web annotation
Procedia PDF Downloads 117

5511 Development of E-Tendering Models for Nigerian Public Procuring Entities
Authors: Bello Abdullahi, Kabir Bala, Yahaya M. Ibrahim, Ahmed D. Ibrahim
Abstract:
Public sector tendering has traditionally been conducted using manual paper-based processes, which are known to be inefficient, less transparent, and more prone to manipulation and error. However, the advent of the Internet and its associated technologies has led to the development of numerous e-Tendering systems that address many of the problems associated with the manual paper-based tendering system. Currently, in Nigeria, public tendering processes are largely conducted through a manual paper-based system that is bedevilled by a number of problems, such as inordinate delays, inefficiencies, manipulation of the tender evaluation process, corruption, and lack of transparency and competition. These problems can be addressed through the adoption of existing web-based e-Tendering systems, which are known to address most of them. However, the existing e-Tendering systems are not based on the Nigerian legal procurement processes, so their suitability for local application is very limited. This paper is part of a larger study that attempts to address this problem through the development of an e-Tendering system based on the requirements of Nigerian public procuring entities. In this paper, the identified tendering processes commonly used by Nigerian public procuring entities in the selection of construction sources are presented. A multi-method research approach was used to identify those tendering processes. Specifically, 19 existing business use cases used by Nigerian public procuring entities were identified, and 61 system use cases were prescribed based on the identified business use cases. The use cases were used as the basis for the development of domain and software conceptual models. The models were successfully used to guide the development of an e-Tendering system called NPS-eTender. Ripple and the Unified Process were adopted as the software development methodologies.
Keywords: e-tendering, e-procurement, requirement model, conceptual model, public sector tendering, public procurement
Procedia PDF Downloads 195

5510 Enhance the Power of Sentiment Analysis
Authors: Yu Zhang, Pedro Desouza
Abstract:
Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction, and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept for selecting appropriate classifiers based on the features and qualities of data sources, by comparing the performance of five classifiers on three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduce a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and we provide insights on how to further improve the predictive power of sentiment analysis. The modelling and testing work was done in R and Greenplum in-database analytic tools.
Keywords: sentiment analysis, social media, Twitter, Amazon, data mining, machine learning, text mining
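The paper's modelling was done in R and Greenplum; the sketch below restates the core classifier-comparison idea in Python, benchmarking several classifiers on a bag-of-words representation (toy data, not the study's corpora).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

texts = ["great phone, love it", "terrible battery", "awesome movie",
         "boring plot", "fast shipping", "broke after a week"] * 20
labels = [1, 0, 1, 0, 1, 0] * 20               # 1 = positive, 0 = negative

X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000), LinearSVC()):
    f1 = cross_val_score(clf, X, labels, cv=5, scoring="f1").mean()
    print(type(clf).__name__, round(f1, 3))    # pick a winner per data source
```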
Procedia PDF Downloads 353

5509 Ontology-Based Backpropagation Neural Network Classification and Reasoning Strategy for NoSQL and SQL Databases
Authors: Hao-Hsiang Ku, Ching-Ho Chi
Abstract:
Big data applications have become an imperative for many fields, and many researchers have devoted themselves to increasing correct classification rates and reducing time complexity. Hence, this study designs and proposes an ontology-based backpropagation neural network classification and reasoning strategy for NoSQL big data applications, called ON4NoSQL. ON4NoSQL is responsible for enhancing classification performance in NoSQL and SQL databases in order to build mass behavior models. The mass behavior models are built with MapReduce techniques and the Hadoop distributed file system on a Hadoop service platform. The inference engine of ON4NoSQL is the ontology-based backpropagation neural network classification and reasoning strategy. Simulation results indicate that ON4NoSQL can efficiently construct a high-performance environment for data storing, searching, and retrieving.
Keywords: Hadoop, NoSQL, ontology, backpropagation neural network, Hadoop distributed file system
Procedia PDF Downloads 262

5508 Experimental Assessment of Micromechanical Models for Mechanical Properties of Recycled Short Fiber Composites
Authors: Mohammad S. Rouhi, Magdalena Juntikka
Abstract:
The processing of polymer fiber composites has a remarkable influence on their mechanical performance, and the mechanical properties are influenced even more when recycled reinforcement is used. We therefore pay particular attention to the evaluation of micromechanical models for estimating the mechanical properties, and we compare the estimates against experimental results for the manufactured composites. In the manufacturing process, an epoxy matrix and carbon fiber production cut-offs as reinforcing material are combined using a vacuum infusion process. In addition, continuous textile reinforcement in combination with the epoxy matrix is used as a reference material to evaluate the knock-down in mechanical performance of the recycled composite. The experimental results show less degradation of the composite stiffness than of the strength properties. Observations from the modeling show the same trend: the error between the theoretical and experimental results is lower for the stiffness comparisons than for the strength calculations. Still, good mechanical performance can be expected from these materials for specific applications.
Keywords: composite recycling, carbon fibers, mechanical properties, micromechanics
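The abstract does not name the specific micromechanical models; a common choice for short-fibre stiffness is the Halpin-Tsai estimate combined with an orientation average, sketched here with assumed constituent properties.

```python
def halpin_tsai(Ef, Em, xi, vf):
    """Halpin-Tsai estimate of a composite modulus from fiber/matrix moduli."""
    eta = (Ef / Em - 1) / (Ef / Em + xi)
    return Em * (1 + xi * eta * vf) / (1 - eta * vf)

Ef, Em = 230e9, 3.0e9        # carbon fiber / epoxy moduli (Pa), assumed values
vf, aspect = 0.3, 20         # fiber volume fraction and aspect ratio l/d, assumed

E11 = halpin_tsai(Ef, Em, 2 * aspect, vf)   # longitudinal modulus (xi = 2 l/d)
E22 = halpin_tsai(Ef, Em, 2.0, vf)          # transverse modulus (xi = 2)
E_random2D = 3 / 8 * E11 + 5 / 8 * E22      # in-plane random fiber orientation
print(f"E11 = {E11/1e9:.1f} GPa, E22 = {E22/1e9:.1f} GPa, "
      f"random mat = {E_random2D/1e9:.1f} GPa")
```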
Procedia PDF Downloads 161

5507 Investigation of the Multiaxial Pedicle Screw Tulip Design Using Finite Element Analysis
Authors: S. Daqiqeh Rezaei, S. Mohajerzadeh, M. R. Sharifi
Abstract:
Pedicle screws are used to stabilize vertebrae and treat several types of spinal diseases and injuries. Multiaxial pedicle screws are a type of pedicle screw that increases surgical versatility but also increases design complexity. Failure of multiaxial pedicle screws caused by static loading, dynamic loading, and fatigue can lead to irreparable damage to the patient. Inappropriate deformation of the multiaxial pedicle screw tulip can cause system failure, so investigation of the deformation and stress in these tulips can be employed to optimize multiaxial pedicle screw design. The sensitivity of this matter necessitates precise analysis and modeling of pedicle screws. In this work, three commercial multiaxial pedicle screw tulips and a newly designed tulip are investigated using finite element analysis. The tulips are modeled using a video measuring machine (VMM), and static analysis is then performed on the models in ANSYS. Finally, the stresses and displacements of the models are compared.
Keywords: pedicle screw, multiaxial pedicle screw, finite element analysis, static analysis
Procedia PDF Downloads 368

5506 A Generic Approach to Reuse Unified Modeling Language Components Following an Agile Process
Authors: Rim Bouhaouel, Naoufel Kraïem, Zuhoor Al Khanjari
Abstract:
The Unified Modeling Language (UML) is considered one of the most widespread modeling languages, standardized by the Object Management Group (OMG). The model-driven engineering (MDE) community therefore attempts to provide for the reuse of UML diagrams rather than constructing them from scratch. A UML model takes shape according to a specific software development process, yet existing model generation methods focus on transformation techniques without considering the development process. Our work aims to construct a UML component from fragments of UML diagrams based on an agile method. We define a UML fragment as a portion of a UML diagram that expresses a business target. To guide the generation of fragments of UML models using an agile process, we need a flexible approach that adapts to agile changes and covers all agile activities. We use the software product line (SPL) approach to derive fragments of the agile method process. This paper explains our approach, named RECUP, for generating UML fragments following an agile process, gives an overview of its different aspects, and defines the different phases and artifacts.
Keywords: UML, component, fragment, agile, SPL
Procedia PDF Downloads 397

5505 The Moderating Effect of Pathological Narcissism in the Relationship between Victim Justice Sensitivity and Anger Rumination
Authors: Isil Coklar-Okutkan, Miray Akyunus
Abstract:
Victim sensitivity is a form of justice sensitivity that reflects the tendency to perceive injustice to one’s disadvantage. Victim sensitivity is considered a dysfunctional trait that predicts anger, aggression, uncooperative behavior, depression, and anxiety. Indeed, exploring the mechanism of the association between victim sensitivity and anger is clinically important, since it can lead to externalizing and internalizing problems. This study aims to investigate the moderating role of pathological narcissism in the relationship between victim sensitivity and anger rumination. Through testing different models in which subtypes of narcissism and components of anger rumination are included independently, the specific mechanism of different ruminative processes in anger is investigated. The sample consisted of 311 undergraduate students from Turkey, 107 of whom were male and 204 female. Participants completed the Justice Sensitivity Inventory-Victim Subscale, the Pathological Narcissism Inventory, and the Anger Rumination Scale. In the proposed double moderation model, vulnerable and grandiose narcissism were the moderators in the relationship between victim justice sensitivity and anger rumination. Four separate models were tested, in which one of the four components of anger rumination (angry afterthoughts, thoughts of revenge, angry memories, understanding of causes) was the dependent variable in each model. Results revealed that two of the moderation models are significant. First, grandiose narcissism is the only moderator in the relationship between victim sensitivity and thoughts of revenge. Second, vulnerable narcissism is the only moderator in the relationship between victim sensitivity and understanding causes. Accordingly, grandiose narcissism is positively associated with thoughts of revenge, and vulnerable narcissism is positively associated with understanding causes, only when the level of victim sensitivity is high. To summarize, increased victim sensitivity leads to ruminative thoughts of revenge in individuals with grandiose narcissism, whereas it leads to rumination on the causes of the incident in individuals with vulnerable narcissism. The clinical implications of these findings are discussed.
Keywords: anger rumination, victim sensitivity, grandiose narcissism, vulnerable narcissism
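The moderation test amounts to a regression with a product term; a significant interaction indicates moderation. A sketch on simulated data (the variable names and effect sizes are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 311
df = pd.DataFrame({"victim": rng.normal(size=n),      # victim justice sensitivity
                   "grandiose": rng.normal(size=n)})  # grandiose narcissism
df["revenge"] = (0.2 * df.victim + 0.1 * df.grandiose
                 + 0.3 * df.victim * df.grandiose + rng.normal(size=n))

# "victim * grandiose" expands to both main effects plus their interaction
fit = smf.ols("revenge ~ victim * grandiose", data=df).fit()
print(fit.summary().tables[1])   # the victim:grandiose term tests moderation
```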
Procedia PDF Downloads 203

5504 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II
Authors: Heerak Banerjee, Sourov Roy
Abstract:
Supersymmetry, one of the most celebrated fields of study for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular the gauged Lμ-Lτ symmetric models, has also generated significant interest. Such models have been extensively proposed to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + missing energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints on gauged Lμ-Lτ models from several low-energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of the stau to smuon mass in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. The finding that the signal significance is independent of individual slepton masses is an exciting prospect, as is the revelation that signatures of even superheavy SUSY particles that may have escaped detection at the LHC may show up at the Belle-II detector.
Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry
Procedia PDF Downloads 127

5503 Advantages of Fuzzy Control Application in Fast and Sensitive Technological Processes
Authors: Radim Farana, Bogdan Walek, Michal Janosek, Jaroslav Zacek
Abstract:
This paper presents the advantages of using fuzzy control in the control of technological processes. It describes a real application of linguistic fuzzy-logic control (LFLC), developed at the University of Ostrava for the control of physical models in the Intelligent Systems Laboratory. The paper presents an example of a sensitive non-linear system, a magnetic levitation model, and the obtained results, which show how modern information technologies can help to solve actual technical problems. A special method based on the LFLC controller with partial components is presented, followed by a method of automatic context change, which is very helpful for achieving more accurate control results. The main advantage of the system is its robustness under changing conditions, demonstrated by comparison with a conventional PID controller. This technology and the real models are also used as a background for problem-oriented teaching, realized at the department through master students' collaborative as well as individual final projects.
Keywords: control, fuzzy logic, sensitive system, technological processes
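The linguistic-rule idea behind LFLC can be illustrated with a toy Mamdani-style controller (three rules, centroid defuzzification); this is a generic sketch, not the Ostrava implementation.

```python
import numpy as np

def tri(x, a, b, c):                      # triangular membership function
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0)

def fuzzy_control(error):                 # error = setpoint - measured output
    u = np.linspace(-1, 1, 201)           # candidate control corrections
    # rule firing strengths: error is "negative" / "zero" / "positive"
    w = [tri(error, -2, -1, 0), tri(error, -1, 0, 1), tri(error, 0, 1, 2)]
    consequents = [(-2, -1, 0), (-1, 0, 1), (0, 1, 2)]   # push down / hold / push up
    out = np.maximum.reduce([np.minimum(wi, tri(u, *abc))
                             for wi, abc in zip(w, consequents)])
    return np.sum(u * out) / np.sum(out)  # centroid defuzzification

for e in (-0.8, 0.0, 0.5):
    print(e, round(float(fuzzy_control(e)), 3))
```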
Procedia PDF Downloads 469

5502 Time Series Analysis on the Production of Fruit Juice: A Case Study of National Horticultural Research Institute (Nihort) Ibadan, Oyo State
Authors: Abiodun Ayodele Sanyaolu
Abstract:
The research was carried out to investigate the time series of the quarterly production of fruit juice at the National Horticultural Research Institute, Ibadan, from 2010 to 2018. A documentary method of data collection was used, and the methods of least squares and moving averages were used in the analysis. From the calculations and the graph, it was clear that there were increasing, decreasing, and uniform movements both in the graph of the original data and in the tabulated quarterly values. Time series analysis was used to detect the trend in fruit juice production, which appears favourable over the period, and additive and multiplicative models were used for forecasting. Since the production of fruit juice is usually highest in January of every year, it is strongly advised that the National Horticultural Research Institute make more provision for fruit juice storage outside this period of the year.
Keywords: fruit juice, least squares, multiplicative models, time series
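The two named methods are easily sketched. Below, a least-squares trend and a centred four-quarter moving average are computed on a placeholder series (not the NIHORT data), and a multiplicative seasonal index exposes the first-quarter peak.

```python
import numpy as np
import pandas as pd

q = pd.period_range("2010Q1", "2018Q4", freq="Q")
prod = pd.Series(100 + 2 * np.arange(len(q)) + 10 * (q.quarter == 1), index=q)

t = np.arange(len(prod))
slope, intercept = np.polyfit(t, prod.values, 1)   # least-squares trend line
trend = intercept + slope * t
ma = prod.rolling(window=4, center=True).mean()    # 4-quarter moving average
print(ma.dropna().head(3))

seasonal = (prod / trend).groupby(prod.index.quarter).mean()  # multiplicative model
print(seasonal)   # a Q1 index > 1 reflects the January production peak
```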
Procedia PDF Downloads 142

5501 Cantilever Secant Pile Constructed in Sand: Capping Beam Analysis and Deformation Limitations
Authors: Khaled R. Khater
Abstract:
This paper falls within the field of soil-structure interaction; its theme is soil-retaining structures, and specifically the cantilever secant-pile wall, with focus on the capping beam. Four research questions are prompted and beg an answer. How should the forces that control capping beam design be calculated? What is the statical system of the 'capping beam-secant pile' combination as one unit? Is it possible to design the beam to satisfy a pre-specified lateral deformation? Is it possible to suggest permissible lateral deformation limits? Briefly, pile head displacements computed in Plaxis 2D are converted to the forces needed for STAAD.Pro 3D models, which are constructed based on the proposed structural system. This is the paper's idea and methodology. The parametric study considered three sand densities, one pile rigidity, and two excavation depths, i.e., 3.0 m and 5.0 m. The research questions are satisfactorily answered. This paper could be a first step towards standardizing the analysis, design, and lateral deformation checks.
Keywords: capping beam, secant pile, numerical, design aids, sandy soil
Procedia PDF Downloads 108