Search results for: hidden Markov models (HMM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7280


5750 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite their great potential in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, these SPICE-based simulators are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for such spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that, for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation over ultra-wide frequency ranges. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells which discretize the lumped components. In this way, the aim is to find an ideal cell size so that the FDTD analysis agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. Mur's boundary condition is used as the absorbing boundary of the FDTD method. The model is validated by comparing the results obtained by the FDTD method, in terms of the electric field values and the currents in the components, with analytical results obtained using circuit parameters.
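
As a minimal illustration of the stability constraint discussed above, the Python sketch below computes the Courant time-step limit for a given Yee cell size so that a parametric sweep over cell sizes can check which discretizations remain stable; the cell dimensions and sweep values are hypothetical and not taken from the paper.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def courant_dt(dx, dy, dz, c=C0):
    """Maximum stable time step for the 3-D Yee leapfrog scheme."""
    return 1.0 / (c * np.sqrt(1.0/dx**2 + 1.0/dy**2 + 1.0/dz**2))

# Parametric sweep over (hypothetical) cubic Yee cell sizes discretizing a lumped element
for cell in (0.5e-3, 1.0e-3, 2.0e-3):          # cell edge length in metres
    dt_max = courant_dt(cell, cell, cell)
    print(f"cell = {cell*1e3:4.1f} mm  ->  dt_max = {dt_max*1e12:6.3f} ps")
```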

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 155
5749 PM₁₀ and PM₂.₅ Concentrations in Bangkok over the Last 10 Years: Implications for Air Quality and Health

Authors: Tin Thongthammachart, Wanida Jinsart

Abstract:

Atmospheric particulate matter with a diameter of less than 10 microns (PM₁₀) and less than 2.5 microns (PM₂.₅) has adverse health effects. The impact of PM was studied from both health and regulatory perspectives. Ambient PM data were collected over ten years, from 2007 to 2017, in Bangkok and the vicinity areas of Thailand. Statistical models were used to forecast PM concentrations from 2018 to 2020. Monthly averaged monitoring concentrations of PM₁₀ and PM₂.₅ were used as input to forecast the monthly average concentration of PM. The forecasting results were validated by the root mean square error (RMSE). The predicted results were used to determine the hazard risk for carcinogenic disease. The health risk values were interpolated in GIS with the ordinary kriging technique to create hazard maps of Bangkok and the vicinity area. The GIS-based maps illustrated the variability of the PM distribution and the high-risk locations. These results could support national policy for the sake of human health.
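
As a minimal sketch of the validation step described above, the code below compares a forecast series of monthly PM concentrations with observed values and reports the root mean square error (RMSE); the arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between observed and forecast concentrations."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((observed - predicted) ** 2))

# Hypothetical monthly PM2.5 averages (ug/m3): observations vs. model forecast
obs  = [38.1, 42.5, 35.0, 28.7, 22.4, 20.9]
pred = [36.0, 44.2, 33.5, 30.1, 23.8, 19.5]
print(f"RMSE = {rmse(obs, pred):.2f} ug/m3")
```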

Keywords: PM₁₀, PM₂.₅, statistical models, atmospheric particulate matter

Procedia PDF Downloads 162
5748 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions without testing whether differences exist between them. Indeed, it is possible that different topics show different dynamics. For example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed participants someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics. This suggests that the same micro-dynamic rule can be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of jumping directly to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data. This means that exposing someone to 'agree' or 'disagree' influenced participants towards respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree' even though they were exposed to 'agree.' This phenomenon is surprising, as, in the standard literature, the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. It also allows us to build models which are directly grounded in experimental results.
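
The opinion encoding and the typical averaging-type micro-dynamic rule mentioned above can be sketched as follows; the convergence parameter mu and the example values are hypothetical and only illustrate the kind of intermediate update that the experimental data contradict.

```python
def continuous_opinion(answer, certainty):
    """Encode ('agree'/'disagree', certainty 1-10) as a value in [-10, 10]."""
    sign = 1 if answer == "agree" else -1
    return sign * certainty

def averaging_update(o_self, o_peer, mu=0.5):
    """Typical micro-dynamic rule: move part-way towards the peer's opinion."""
    return o_self + mu * (o_peer - o_self)

o_i = continuous_opinion("disagree", 7)   # -7
o_j = continuous_opinion("agree", 4)      #  4
print(averaging_update(o_i, o_j))         # -1.5: an intermediate value,
# whereas participants in the study tended to keep their certainty and
# either stay put or jump to the opposite sign.
```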

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 167
5747 The Prognostic Prediction Value of Positive Lymph Node Number for Hypopharyngeal Squamous Cell Carcinoma

Authors: Wendu Pang, Yaxin Luo, Junhong Li, Yu Zhao, Danni Cheng, Yufang Rao, Minzi Mao, Ke Qiu, Yijun Dong, Fei Chen, Jun Liu, Jian Zou, Haiyang Wang, Wei Xu, Jianjun Ren

Abstract:

We aimed to compare the prognostic prediction value of positive lymph node number (PLNN) with that of the American Joint Committee on Cancer (AJCC) tumor, lymph node, and metastasis (TNM) staging system for patients with hypopharyngeal squamous cell carcinoma (HPSCC). A total of 826 patients with HPSCC from the Surveillance, Epidemiology, and End Results database (2004–2015) were identified and split into two independent cohorts: training (n=461) and validation (n=365). Univariate and multivariate Cox regression analyses were used to evaluate the prognostic effects of PLNN in patients with HPSCC. We further applied six Cox regression models to compare the survival predictive values of PLNN and the AJCC TNM staging system. PLNN showed a significant association with overall survival (OS) and cancer-specific survival (CSS) (P < 0.001) in both univariate and multivariable analyses, and was divided into three groups (PLNN 0, PLNN 1-5, and PLNN > 5). In the training cohort, multivariate analysis revealed that increased PLNN of HPSCC gave rise to significantly poorer OS and CSS after adjusting for age, sex, tumor size, and cancer stage; this trend was also verified in the validation cohort. Additionally, the survival model incorporating a composite of PLNN and TNM classification (C-index, 0.705, 0.734) performed better than the PLNN and AJCC TNM models alone. PLNN can serve as a powerful survival predictor for patients with HPSCC and is a surrogate supplement for cancer staging systems.
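
A minimal sketch of the kind of Cox regression used above, written with the lifelines package; the data frame and column names (PLNN group, age, survival months, event indicator) are hypothetical placeholders for the SEER-derived variables, not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis frame: one row per patient
df = pd.DataFrame({
    "survival_months": [34, 12, 60, 8, 45, 22, 50, 15, 40, 28],
    "event":           [1,  1,  0,  1,  0,  1,  0,  1,  1,  0],   # 1 = death observed
    "plnn_group":      [0,  2,  0,  2,  1,  1,  0,  2,  1,  0],   # 0: PLNN 0, 1: PLNN 1-5, 2: PLNN > 5
    "age":             [61, 70, 55, 66, 59, 72, 63, 68, 64, 58],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event")
cph.print_summary()                      # hazard ratios for PLNN group and age
print("C-index:", cph.concordance_index_)
```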

Keywords: hypopharyngeal squamous cell carcinoma, positive lymph node number, prognosis, prediction models, survival predictive values

Procedia PDF Downloads 158
5746 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 392
5745 Competency Model as a Key Tool for Managing People in Organizations: Presentation of a Model

Authors: Andrea Čopíková

Abstract:

Competency-based management is a new approach to management which addresses the organization's challenges with complexity, with the aim of finding and solving the organization's problems and learning how to avoid them in the future. It teaches organizations to create, beyond the temporary state of stability, a vital organization that is permanently able to utilize and profit from internal and external opportunities. The aim of this paper is to propose a process of competency model design, on the basis of which a competency model for a financial department manager in a production company will be created. Competency models are a very useful tool in many personnel processes in any organization. They are used for the acquisition and selection of employees, designing training and development activities, and employee evaluation, and they can be used as a guide for career planning and as a tool for succession planning, especially for managerial positions. When creating the competency model, the AHP (Analytic Hierarchy Process) method and quantitative pairwise comparison (Saaty's method) will be used; these belong among the most widely used methods for the determination of weights, and pairwise comparison is used within the AHP procedure. The introductory part of the paper presents research results pertaining to the use of competency models in practice, and then the issue of competencies and competency models is explained. The application part describes in detail the proposed methodology for the creation of competency models, on the basis of which the competency model for the position of financial department manager in a foreign manufacturing company will be created. In the conclusion of the paper, the final competency model for the above-mentioned position is presented. The competency model divides the selected competencies into three groups: managerial, interpersonal, and functional. The model describes in detail the individual levels of the competencies, their target value (required level), and their level of importance.
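
A minimal Python sketch of the Saaty pairwise-comparison step used for weighting: the principal eigenvector of a reciprocal comparison matrix gives the priority weights, and the consistency ratio checks the judgments. The comparison matrix over three competency groups is a hypothetical example, not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise comparison of three competency groups
# (managerial, interpersonal, functional) on Saaty's 1-9 scale
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1) # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
print("weights:", np.round(w, 3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```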

Keywords: analytic hierarchy process, competency, competency model, quantitative pairwise comparison

Procedia PDF Downloads 246
5744 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers from various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, the different impairment compensation algorithms have caused an increase in transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of the 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models can optimize the constellation mapping signals at the transmitter and compensate for multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for the 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and effectively compensate for multiple impairments in fiber transmission.

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 149
5743 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach

Authors: Chen-Yin Kuo, Yung-Hsin Lee

Abstract:

Previous studies applying the residual income valuation (RIV) model generally use panel data and single-equation models to forecast stock prices. Unlike these, this paper uses Taiwanese longitudinal data to estimate multi-equation time-series models such as the vector autoregressive (VAR) model and the vector error correction model (VECM), and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance using two instruments. In line with extant research, the major finding is that the VECM outperforms the other three models in forecasting for three stock sectors over all horizons. This implies that an error correction term containing long-run information contributes to improved forecasting accuracy. Moreover, the pattern of the composite measure shows that, at longer horizons, the VECM produces the greater reduction in errors and performs substantially better than the VAR.
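
A minimal statsmodels sketch of estimating a VECM and producing out-of-sample forecasts; the data frame of prices and fundamentals is a hypothetical placeholder, and the lag order and cointegration rank would in practice be chosen by the usual selection procedures rather than fixed as below.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Hypothetical quarterly series: price, book value, residual income (levels)
rng = np.random.default_rng(0)
n = 80
common = np.cumsum(rng.normal(size=n))                   # shared stochastic trend
data = pd.DataFrame({
    "price": common + rng.normal(scale=0.5, size=n),
    "book_value": 0.8 * common + rng.normal(scale=0.5, size=n),
    "residual_income": 0.3 * common + rng.normal(scale=0.5, size=n),
})

train, test = data.iloc[:-8], data.iloc[-8:]
model = VECM(train, k_ar_diff=2, coint_rank=1, deterministic="ci")
res = model.fit()
forecast = res.predict(steps=8)                          # out-of-sample forecast

rmse = np.sqrt(((forecast - test.values) ** 2).mean(axis=0))
print(dict(zip(data.columns, np.round(rmse, 3))))
```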

Keywords: residual income valuation model, vector error correction model, out-of-sample forecasting, forecasting accuracy

Procedia PDF Downloads 321
5742 Estimation of Noise Barriers for Arterial Roads of Delhi

Authors: Sourabh Jain, Parul Madan

Abstract:

Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, and the rising number of vehicles and transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) is found to exceed the standard permissible value at all locations. Noise barriers or enclosures are definitely useful in obtaining an effective reduction of traffic noise disturbances in urbanized areas. The US Federal Highway Administration (FHWA) model and the UK's Calculation of Road Traffic Noise (CORTN) were used to develop spreadsheets for noise prediction. Spreadsheets were also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise and for redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barrier, using both the FHWA and CORTN models. During the collection of data it was found that receivers are located far away from the road at the Rithala and Moolchand sites, and hence the extra barrier height needed to meet the prescribed limits was small, as seen from the calculations, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels. The FHWA model predicted noise levels with an average percentage error of -7.33 and CORTN with an average percentage error of -8.5. It was observed that at all sites the noise levels at receivers exceeded the standard limit of 55 dB. The calculations showed that the existing walls reduce noise levels. The average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower noise reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height because of the residential buildings abutting the road. A great amount of traffic was observed at Friends Colony since it is on a national highway, and at this site the reduction of noise due to the propagation effect was very small. As the FHWA and CORTN models were developed in an Excel programme, laborious noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.

Keywords: FHWA, CORTN, noise sources, noise barriers

Procedia PDF Downloads 136
5741 Logistics Model for Improving Quality in Railway Transport

Authors: Eva Nedeliakova, Juraj Camaj, Jaroslav Masek

Abstract:

This contribution focuses on a methodology for identifying levels of quality and improving quality through a new logistics model in railway transport. It is oriented towards the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process within the logistics chain can be taken into account. Various models describe the improvement of quality with an emphasis on the time factor throughout the whole transportation logistics chain. The quality of services in railway transport can be determined from the existing level of service quality, by detecting the causes of dissatisfaction among employees as well as customers, and by uncovering strengths and weaknesses. The new logistics model is able to recognize critical processes in the logistics chain. It includes a service quality rating that must respect the specific properties of services, which are unrepeatability, impalpability, their use right at the time they are provided, and particularly changeability, which is a significant factor in the conditions of rail transport as well. These peculiarities influence the quality of service with regard to constantly increasing requirements, and they result in new ways of finding progressive attitudes towards service quality rating.

Keywords: logistics model, quality, railway transport

Procedia PDF Downloads 573
5740 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models

Authors: Chad Goldsworthy, B. Rajeswari Matam

Abstract:

The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing that of manual human classification. The high imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy Loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy on the majority species, it was able to increase the F1-score by 0.06 and improve the identification of the bottom two, five and ten (minority) species by 37.5%, 15.7% and 10.8%, respectively, as well as improving the overall accuracy by 2.96%.
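
A minimal NumPy sketch of the focal loss compared with cross entropy: gamma and alpha are the usual focusing and weighting parameters, and the probabilities shown are hypothetical examples, not values from the paper.

```python
import numpy as np

def cross_entropy(p_true):
    """Standard cross-entropy term for the probability assigned to the true class."""
    return -np.log(p_true)

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    """Focal loss: down-weights well-classified examples by (1 - p_t)^gamma."""
    return -alpha * (1.0 - p_true) ** gamma * np.log(p_true)

for p in (0.9, 0.6, 0.1):          # well-classified ... badly mis-classified example
    print(f"p_t={p:.1f}  CE={cross_entropy(p):.3f}  FL={focal_loss(p):.3f}")
# The loss for easy examples (p_t = 0.9) is suppressed far more strongly than the
# loss for hard, typically minority-class examples (p_t = 0.1).
```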

Keywords: convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation

Procedia PDF Downloads 199
5739 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently would not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
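
A minimal scikit-learn sketch of the first two steps described above, removing outliers with DBSCAN and fitting a random forest on the cleaned data; the feature and target arrays are hypothetical placeholders for the DRM operating conditions and outputs, not the experimental dataset.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))             # e.g. temperature, CH4/CO2 ratio, space velocity
y = X @ np.array([0.5, -0.3, 0.8]) + rng.normal(scale=0.05, size=200)  # e.g. H2/CO ratio
X[:5] += 8.0                              # inject a few gross outliers

# Step 1: flag outliers (label -1) with DBSCAN on standardised features
labels = DBSCAN(eps=0.9, min_samples=5).fit_predict(StandardScaler().fit_transform(X))
mask = labels != -1
X_clean, y_clean = X[mask], y[mask]

# Step 2: fit a random forest on the error-free dataset
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_clean, y_clean)
print(f"kept {mask.sum()} of {len(X)} points, train R2 = {rf.score(X_clean, y_clean):.3f}")
```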

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 92
5738 Transportation Accidents Mortality Modeling in Thailand

Authors: W. Sriwattanapongse, S. Prasitwattanaseree, S. Wongtrangan

Abstract:

Transportation accident mortality is a major problem that leads to the loss of human lives and economic losses. The objective was to identify patterns of statistical modeling for estimating mortality rates due to transportation accidents in Thailand, using data from 2000 to 2009. The data were taken from death certificates in the vital registration database. The number of deaths and mortality rates were computed and classified by gender, age, year, and region. There were 114,790 transportation accident deaths. The highest average age-specific transport accident mortality rate was 3.11 per 100,000 per year, in males in the Southern region, and the lowest average age-specific transport accident mortality rate was 1.79 per 100,000 per year, in females in the North-East region. Linear, Poisson, and negative binomial models were chosen for fitting the statistical model. Among the models fitted, the best was chosen based on the analysis of deviance and AIC. The negative binomial model was clearly the most appropriate fit.
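
A minimal statsmodels sketch of the kind of model comparison described above: fitting Poisson and negative binomial models to death counts (with a population offset, a common way of modelling rates) and comparing deviance and AIC. The counts, populations, and grouping variables below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical deaths and population by sex and region
df = pd.DataFrame({
    "deaths":     [120, 95, 310, 260, 150, 130],
    "population": [4.1e6, 4.3e6, 9.8e6, 10.2e6, 5.5e6, 5.7e6],
    "sex":        ["f", "m", "f", "m", "f", "m"],
    "region":     ["North", "North", "South", "South", "NorthEast", "NorthEast"],
})

poisson = smf.glm("deaths ~ sex + region", data=df,
                  offset=np.log(df["population"]),
                  family=sm.families.Poisson()).fit()
negbin = smf.glm("deaths ~ sex + region", data=df,
                 offset=np.log(df["population"]),
                 family=sm.families.NegativeBinomial()).fit()

print("Poisson   deviance/AIC:", round(poisson.deviance, 1), round(poisson.aic, 1))
print("Neg. bin. deviance/AIC:", round(negbin.deviance, 1), round(negbin.aic, 1))
```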

Keywords: transportation accidents, mortality, modeling, analysis of deviance

Procedia PDF Downloads 249
5737 Numerical Simulation of Axially Loaded to Failure Large Diameter Bored Pile

Authors: M. Ezzat, Y. Zaghloul, T. Sorour, A. Hefny, M. Eid

Abstract:

The ultimate capacity of large diameter bored piles is usually determined from pile loading tests, as recommended by several international codes and foundation design standards. However, loading piles of this type until apparent failure is achieved is practically rare. In this paper, numerical analyses are carried out to simulate the load test of a large diameter bored pile performed at the location of the Alzey highway bridge project (Germany). Test results for the pile load-settlement relationship up to failure, as well as results for the base and shaft resistances, are available. Apparent failure was indicated in this test by the significant increase of the induced settlement during the last load increment applied to the pile head. Measurements from this pile load test are used to assess the quality of the numerical models investigated. Three different soil material models are implemented in the analyses: Mohr-Coulomb (MC), Soft Soil (SS), and Modified Mohr-Coulomb (MMC). Very good agreement is obtained between the field-measured settlement and the settlement calculated using the MMC model. The results of the analysis also show that the MMC constitutive model is superior to the MC and SS models in predicting the ultimate base and shaft resistances of the large diameter bored pile. After calibrating the numerical model, the behavior of large diameter bored piles under axial loads is discussed and the formation of the plastic zone around the pile is explored. The results show that the plastic zone below the base of the pile at failure extended laterally to about four times the pile diameter and vertically to about three times the pile diameter.

Keywords: ultimate capacity, large diameter bored piles, plastic zone, failure, pile load test

Procedia PDF Downloads 146
5736 Quantitative Structure-Activity Relationship Modeling of Detoxication Properties of Some 1,2-Dithiole-3-Thione Derivatives

Authors: Nadjib Melkemi, Salah Belaidi

Abstract:

Quantitative Structure-Activity Relationship (QSAR) studies have been performed on nineteen molecules of 1,2-dithiole-3-thione analogues. The compounds used are potent inducers of enzymes involved in the maintenance of reduced glutathione pools as well as of phase-2 enzymes important for electrophile detoxication. A multiple linear regression (MLR) procedure was used to model the relationships between molecular descriptors and the detoxication properties of the 1,2-dithiole-3-thione derivatives. The predictivity of the model was estimated by cross-validation with the leave-one-out method. Our results suggest QSAR models based on the following descriptors: qS2, qC3, qC5, qS6, DM, Pol, log P, MV, SAG, HE and EHOMO for the specific activity of quinone reductase; and qS1, qS2, qC3, qC4, qC5, qS6, DM, Pol, log P, MV, SAG, HE and EHOMO for the production of growth hormone. To confirm the predictive power of the models, an external set of molecules was used. High correlation between experimental and predicted activity values was observed, indicating the validation and good quality of the derived QSAR models.
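
A minimal scikit-learn sketch of an MLR with leave-one-out cross-validation, the procedure used above to estimate predictivity; the descriptor matrix and activity values are hypothetical placeholders for the computed descriptors (qS2, qC3, ..., log P, EHOMO) and measured activities.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(19, 5))                     # 19 molecules x 5 descriptors
y = X @ np.array([0.7, -0.4, 0.2, 0.5, -0.1]) + rng.normal(scale=0.1, size=19)

mlr = LinearRegression().fit(X, y)
r2 = r2_score(y, mlr.predict(X))                 # conventional R^2

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = r2_score(y, y_loo)                          # cross-validated Q^2 (leave-one-out)

print(f"R2 = {r2:.3f}, Q2(LOO) = {q2:.3f}")
```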

Keywords: QSAR, quinone reductase activity, production of growth hormone, MLR

Procedia PDF Downloads 354
5735 Named Entity Recognition System for Tigrinya Language

Authors: Sham Kidane, Fitsum Gaim, Ibrahim Abdella, Sirak Asmerom, Yoel Ghebrihiwot, Simon Mulugeta, Natnael Ambassager

Abstract:

The lack of annotated datasets is a bottleneck to the progress of NLP in low-resourced languages. The work presented here consists of large-scale annotated datasets and models for the named entity recognition (NER) system for the Tigrinya language. Our manually constructed corpus comprises over 340K words tagged for NER, with over 118K of the tokens also having parts-of-speech (POS) tags, annotated with 12 distinct classes of entities, represented using several types of tagging schemes. We conducted extensive experiments covering convolutional neural networks and transformer models; the highest performance achieved is 88.8% weighted F1-score. These results are especially noteworthy given the unique challenges posed by Tigrinya’s distinct grammatical structure and complex word morphologies. The system can be an essential building block for the advancement of NLP systems in Tigrinya and other related low-resourced languages and serve as a bridge for cross-referencing against higher-resourced languages.

Keywords: Tigrinya NER corpus, TiBERT, TiRoBERTa, BiLSTM-CRF

Procedia PDF Downloads 139
5734 Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks

Authors: Philipp Ruf, Massiwa Chabbi, Christoph Reich, Djaffar Ould-Abdeslam

Abstract:

In recent years, convolutional neural networks (CNN) have demonstrated high performance in image analysis, but oftentimes only structured data are available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. By applying a single neural network to analyze multimodal data, e.g., both structured and unstructured information, significant advantages in terms of time complexity and energy efficiency can be achieved. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNNs to multimodal datasets, as they often occur in a medical context. By employing suitable preprocessing techniques, the structured data are transformed into image representations, where the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. The resulting image is then analyzed using a CNN.

Keywords: CNN, image processing, tabular data, mixed dataset, data transformation, multimodal fusion

Procedia PDF Downloads 128
5733 Prediction of Compressive Strength Using Artificial Neural Network

Authors: Vijay Pal Singh, Yogesh Chandra Kotiyal

Abstract:

Structures are a combination of various load-carrying members which safely transfer the loads from the superstructure to the foundation. At the design stage, the loading of the structure is defined and appropriate material choices are made based upon their properties, mainly related to strength. The strength of materials keeps reducing with time because of many factors, such as environmental exposure and deformation caused by unpredictable external loads. Hence, various techniques are used to predict the strength of materials used in structures. Among these techniques, non-destructive techniques (NDT) are the ones that can be used to predict the strength without damaging the structure. In the present study, the compressive strength of concrete has been predicted using an artificial neural network (ANN). The predicted strength was compared with the experimentally obtained actual compressive strength of concrete, and equations were developed for different models. A good correlation has been obtained between the strength predicted by these models and the experimental values. Further, a correlation has been developed using two NDT techniques for the prediction of strength by regression analysis. It was found that the percentage error of the predicted strength was reduced by using combined techniques in place of single techniques.
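
A minimal scikit-learn sketch of an ANN that maps the two NDT readings named in the keywords (rebound number and ultrasonic pulse velocity) to compressive strength; the training values are hypothetical placeholders, not the experimental data of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical NDT measurements: [rebound number, ultrasonic pulse velocity (km/s)]
X = np.array([[28, 3.9], [32, 4.1], [36, 4.3], [40, 4.5], [44, 4.7], [48, 4.9]])
y = np.array([22.0, 27.5, 33.0, 38.5, 45.0, 52.0])   # compressive strength (MPa)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0),
)
ann.fit(X, y)
print(ann.predict([[38, 4.4]]))   # predicted strength for a new specimen
```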

Keywords: rebound, ultra-sonic pulse, penetration, ANN, NDT, regression

Procedia PDF Downloads 432
5732 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic

Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi

Abstract:

In the genomics field, huge amounts of data have been produced by next-generation sequencers (NGS). Data volumes are growing very rapidly, and it is postulated that more than one billion bases will be produced per year in 2020. The growth rate of the produced data is much faster than Moore's law in computer technology. This makes it more difficult to deal with genomics data, for tasks such as storing data, searching for information, and finding hidden information. It is necessary to develop an analysis platform for genomics big data. Recently developed cloud computing enables us to deal with big data more efficiently. Hadoop is one of the distributed computing frameworks and forms the core of Big Data as a Service (BDaaS). Although many services, e.g., Amazon, have adopted this technology, there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data, e.g., sequencing data. Our algorithm consists of two parts: first, BDaaS is applied for handling the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing. This step can be parallelized in implementation. Our algorithm has great potential in the computational analysis of genomics big data, e.g., de novo genome assembly and sequence similarity search. We will discuss our algorithm and its feasibility.

Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing

Procedia PDF Downloads 303
5731 Regression for Doubly Inflated Multivariate Poisson Distributions

Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta

Abstract:

Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, the number of observations in some cells is much larger than in other cells, then the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher compared to the other cells, and we develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. For illustrating the proposed methodologies, we present a real data set containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation to estimate the unknown parameters of the models.

Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions

Procedia PDF Downloads 161
5730 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks

Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha

Abstract:

Though there exists a plethora of activation functions (AFs) used in single and multiple hidden layer neural networks (NN), their behavior has always raised curiosity, whether used in combination or singly. The popular AFs – Sigmoid, ReLU, and Tanh – have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance is never studied and analyzed deeply when used in combination. In this manuscript, we experiment with multi-layered NN architectures (both shallow and deep architectures; convolutional NN and VGG16) and investigate how well the network responds to using two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) alternately, against a traditional, single combination (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but we also accomplished convergence twice as fast. We present a case study to investigate the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we theoretically showed that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
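
A minimal PyTorch sketch contrasting a single-AF network with one that alternates two different AFs between layers, as in the Tanh-ReLU combination studied above; the layer sizes are hypothetical and the networks are shown only to illustrate the structural difference.

```python
import torch.nn as nn

def single_af_net():
    """Traditional combination: the same activation (Tanh) after every hidden layer."""
    return nn.Sequential(
        nn.Linear(32, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, 10),
    )

def alternating_af_net():
    """Two different activations (Tanh, ReLU) used alternately across hidden layers."""
    return nn.Sequential(
        nn.Linear(32, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )

print(alternating_af_net())
```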

Keywords: activation function, universal approximation function, neural networks, convergence

Procedia PDF Downloads 161
5729 Modelling the Effect of Biomass Appropriation for Human Use on Global Biodiversity

Authors: Karina Reiter, Stefan Dullinger, Christoph Plutzar, Dietmar Moser

Abstract:

Due to population growth and changing patterns of production and consumption, the demand for natural resources and, as a result, the pressure on Earth’s ecosystems are growing. Biodiversity mapping can be a useful tool for assessing species endangerment or detecting hotspots of extinction risks. This paper explores the benefits of using the change in trophic energy flows as a consequence of the human alteration of the biosphere in biodiversity mapping. To this end, multiple linear regression models were developed to explain species richness in areas where there is no human influence (i.e. wilderness) for three taxonomic groups (birds, mammals, amphibians). The models were then applied to predict (I) potential global species richness using potential natural vegetation (NPPpot) and (II) global ‘actual’ species richness after biomass appropriation using NPP remaining in ecosystems after harvest (NPPeco). By calculating the difference between predicted potential and predicted actual species numbers, maps of estimated species richness loss were generated. Results show that biomass appropriation for human use can indeed be linked to biodiversity loss. Areas for which the models predicted high species loss coincide with areas where species endangerment and extinctions are recorded to be particularly high by the International Union for Conservation of Nature and Natural Resources (IUCN). Furthermore, the analysis revealed that while the species distribution maps of the IUCN Red List of Threatened Species used for this research can determine hotspots of biodiversity loss in large parts of the world, the classification system for threatened and extinct species needs to be revised to better reflect local risks of extinction.

Keywords: biodiversity loss, biomass harvest, human appropriation of net primary production, species richness

Procedia PDF Downloads 133
5728 Identity Management in Virtual Worlds Based on Biometrics Watermarking

Authors: S. Bader, N. Essoukri Ben Amara

Abstract:

With the technological development and rise of virtual worlds, these spaces are becoming more and more attractive for cybercriminals, hidden behind avatars and fictitious identities. Since access to these spaces is not restricted or controlled, some impostors take advantage of gaining unauthorized access and practicing cyber criminality. This paper proposes an identity management approach for securing access to virtual worlds. The major purpose of the suggested solution is to install a strong security mechanism to protect virtual identities represented by avatars. Thus, only legitimate users, through their corresponding avatars, are allowed to access the platform resources. Access is controlled by integrating an authentication process based on biometrics. In the registration request process, a user fingerprint is enrolled and then encrypted into a watermark, using a cancelable and non-invertible algorithm, for its protection. After a user personalizes their representative character, the biometric mark is embedded into the avatar through a watermarking procedure. The authenticity of the avatar identity is verified when it requests authorization for access. We have evaluated the proposed approach on a dataset of avatars from various virtual worlds, and we have registered promising performance results in terms of authentication accuracy and acceptance and rejection rates.

Keywords: identity management, security, biometrics authentication and authorization, avatar, virtual world

Procedia PDF Downloads 269
5727 How Rational Decision-Making Mechanisms of Individuals Are Corrupted in the Presence of Others and the Reflection of This on Financial Crisis Management Situations

Authors: Gultekin Gurcay

Abstract:

It is known that the most crucial influence of the psychological, social and emotional factors that affect any human behavior is to corrupt the rational decision-making mechanism of individuals and cause them to display irrational behaviors. In this regard, the social context of human beings influences the rationality of our decisions, and people tend to display different behaviors when they are alone compared to when they are surrounded by others. At this point, the interaction and interdependence of behavioral finance and economics with the area of social psychology, where the intentions and behaviors of individuals are analyzed in the actual or implied presence of others, comes into prominence. Within the context of this study, the prevalent theories of behavioral finance, namely the Prospect Theory, the Utility Theory Given Uncertainty and the Five Axioms of Choice under Uncertainty, Veblen's Hidden Utility Theory, and the concept of 'overreaction', have been examined and demonstrated, and the meaning, existence and validity of these theories together with the social context have been assessed. Finally, this study analyzes the behavior of individuals in financial crisis situations, where the majority of society is affected by the same negative conditions at the same time, by taking into account how individual behavior changes according to the presence of others.

Keywords: conditional variance coefficient, financial crisis, GARCH model, stock market

Procedia PDF Downloads 243
5726 Inherited Intergenerational Trauma – The Society for Black People in South Central Los Angeles

Authors: Kevin R. Collins Sr.

Abstract:

In South Central Los Angeles, Black people have endured various forms of trauma that span generations. This includes the horrors of slavery and the aftermath of the Jim Crow laws, institutionalized racism, and legislative segregation, to name a few. The individuals born from the 1900s until today have continued to transmit the traumas experienced across generations. Parents unconsciously transmit the hidden trauma, and the children take these experiences and apply them to the society they live in. Although there are some who attempt to break the cycle of transmitted trauma, its remnants still remain and play a huge role in how they interact with others. The aim of this discussion is to bring these traumatic experiences to the surface and attack them head on. It is important that we do this to allow not only the suffering individuals but the suffering society to heal. As a society, this means looking at the humane side of it and attempting to stop the racial injustice placed on Black people, to relieve them of the stress that some, if not all, endure in this great United States of America. It means changing our behavior as a country to create an improved sense of common unity within. If we solve our own racial and social issues within this country, maybe we can solve these same issues that have been the footstool to the many wars we see around the world, thus breaking the cycle of inherited intergenerational trauma.

Keywords: intergenerational trauma, inherited trauma, transmission of trauma, blacks in South central LA, black trauma in America

Procedia PDF Downloads 102
5725 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network

Authors: Ziying Wu, Danfeng Yan

Abstract:

Multi-access edge computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IOV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based vehicle-aware multi-access edge computing network (VAMECN) and propose a joint optimization problem of minimizing the total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent multiple computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov decision process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
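
A minimal PyTorch sketch of the deep Q-network component such an offloading agent might use: the state could encode task sizes, available edge resources, and bandwidth, and each action an offloading or migration decision. The dimensions and the single temporal-difference step shown are hypothetical illustrations, not the JCOTM algorithm itself.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 12, 6, 0.95   # hypothetical sizes

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(state, action, reward, next_state):
    """One temporal-difference step: pull Q(s, a) towards r + gamma * max_a' Q(s', a')."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + GAMMA * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
print(td_update(s, action=2, reward=torch.tensor(-1.3), next_state=s_next))
```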

Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep Q-network

Procedia PDF Downloads 123
5724 Improvement of the Aerodynamic Behaviour of a Land Rover Discovery 4 in Turbulent Flow Using Computational Fluid Dynamics (CFD)

Authors: Ahmed Al-Saadi, Ali Hassanpour, Tariq Mahmud

Abstract:

The main objective of this study is to investigate ways to reduce the aerodynamic drag coefficient and to increase the stability of a full-size sport utility vehicle using three-dimensional computational fluid dynamics (CFD) simulation. The baseline model in the simulation was the Land Rover Discovery 4. Many aerodynamic devices and external design modifications were used in this study. These aerodynamic drag-reduction techniques were tested individually or in combination to obtain the best design. All new models have the same capacity and comfort as the baseline model. A uniform freestream air velocity at the inlet, ranging from 28 m/s to 40 m/s, was used. ANSYS Fluent software (version 16.0) was used to simulate all models. The drag coefficient obtained from ANSYS Fluent for the baseline model was validated with experimental data. It is found that the use of modern aerodynamic add-on devices and modifications has a significant effect in reducing the aerodynamic drag coefficient.

Keywords: aerodynamics, RANS, sport utility vehicle, turbulent flow

Procedia PDF Downloads 319
5723 Analysis of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington’s Disease and Amyotrophic Lateral Sclerosis

Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi

Abstract:

Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed at counterbalancing the motor disability have great market potential. Animal models of chronic neurodegenerative diseases display a number of deteriorating motor phenotypes during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods to study motor functions in rodents are often limited to evaluating gross motor functions only at advanced stages of the disease phenotype. The most commonly applied traditional motor assays used in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture more subtle motor dysfunctions and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic and applied both in diagnosis and in determination of the therapeutic response to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three different chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) of amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models of Huntington’s disease (HD). The readouts from walking behavior included gait properties with kinematic data and body movement trajectories, including analysis of various points of interest such as the movement and position of landmarks in the torso, tail, and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction was evident with conventional motor assays. Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than what is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets of the HD models were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller and focused set of mutually uncorrelated gait parameters showing a strong genotype difference. The kinematic fine motor analysis of the transgenic animal models described in this presentation shows that this method is a sensitive, objective, and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. As a result of the analysis, a comprehensive set of fine motor parameters for each model is created; these parameters provide a better understanding of the disease progression and enhance the sensitivity of this assay for therapeutic testing compared to classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, the alterations in gait were evident several weeks earlier than with traditional gross motor assays. Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole body movement patterns, such as in relation to joints and various body parts, longitudinally, providing a sophisticated and translatable method for disseminating motor components in rodent disease models and evaluating therapeutic interventions.

Keywords: gait analysis, kinematics, motor impairment, inherent feature

Procedia PDF Downloads 357
5722 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore chemical dermal exposure assessment models that have been developed abroad and to evaluate the feasibility of chemical dermal exposure assessment models for the manufacturing industry in Taiwan. We conducted and analyzed six semi-quantitative risk management tools, including the UK Control of Substances Hazardous to Health (COSHH), the European Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Netherlands' Dose-Related Effect Assessment Model (DREAM), the Netherlands' Stoffenmanager (STOFFEN), Nicaragua's Dermal Exposure Ranking Method (DERM), and the USA/Canada Public Health Engineering Department (PHED) tool. Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to understand the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure results using a prediction model and verified the correlation via Pearson's test. The results show that COSHH was unable to determine the strength of its decision factors because the results evaluated in all industries belonged to the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency have a positive correlation. There is a positive correlation between skin exposure, relative working time, and the working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation. We also found a high correlation for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models have poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to the results, both the DERM and RISKOFDERM models are suitable for use in these selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated to reduce uncertainty and enhance applicability in the future.
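
A minimal SciPy sketch of the verification step described above: correlating each semi-quantitative model's scores against the quantitative exposure estimates with Pearson's test. The scores and estimates below are hypothetical, not the study's measurements.

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same set of workplaces
quantitative_estimate = [0.8, 1.5, 2.3, 3.1, 4.0]   # e.g. mg/day from the prediction model
derm_score            = [12, 19, 27, 33, 41]        # semi-quantitative DERM ranking
stoffen_score         = [30, 22, 35, 28, 31]        # semi-quantitative STOFFEN ranking

for name, score in [("DERM", derm_score), ("STOFFEN", stoffen_score)]:
    r, p = pearsonr(quantitative_estimate, score)
    print(f"{name:8s} r = {r:.2f}, p = {p:.3f}")
```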

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 173
5721 Provenance in Scholarly Publications: Introducing the provCite Ontology

Authors: Maria Joseph Israel, Ahmed Amer

Abstract:

Our work aims to broaden the application of provenance technology beyond its traditional domains of scientific workflow management and database systems by offering a general provenance framework to capture richer and extensible metadata in unstructured textual data sources such as literary texts, commentaries, translations, and digital humanities. Specifically, we demonstrate the feasibility of capturing and representing expressive provenance metadata, including more of the context for citing scholarly works (e.g., the authors’ explicit or inferred intentions at the time of developing his/her research content for publication), while also supporting subsequent augmentation with similar additional metadata (by third parties, be they human or automated). To better capture the nature and types of possible citations, in our proposed provenance scheme metaScribe, we extend standard provenance conceptual models to form our proposed provCite ontology. This provides a conceptual framework which can accurately capture and describe more of the functional and rhetorical properties of a citation than can be achieved with any current models.

Keywords: knowledge representation, provenance architecture, ontology, metadata, bibliographic citation, semantic web annotation

Procedia PDF Downloads 121