Search results for: models error comparison
10142 The Prognostic Prediction Value of Positive Lymph Nodes Numbers for the Hypopharyngeal Squamous Cell Carcinoma
Authors: Wendu Pang, Yaxin Luo, Junhong Li, Yu Zhao, Danni Cheng, Yufang Rao, Minzi Mao, Ke Qiu, Yijun Dong, Fei Chen, Jun Liu, Jian Zou, Haiyang Wang, Wei Xu, Jianjun Ren
Abstract:
We aimed to compare the prognostic prediction value of positive lymph node number (PLNN) to the American Joint Committee on Cancer (AJCC) tumor, lymph node, and metastasis (TNM) staging system for patients with hypopharyngeal squamous cell carcinoma (HPSCC). A total of 826 patients with HPSCC from the Surveillance, Epidemiology, and End Results database (2004–2015) were identified and split into two independent cohorts: training (n=461) and validation (n=365). Univariate and multivariate Cox regression analyses were used to evaluate the prognostic effects of PLNN in patients with HPSCC. We further applied six Cox regression models to compare the survival predictive values of the PLNN and AJCC TNM staging system. PLNN showed a significant association with overall survival (OS) and cancer-specific survival (CSS) (P < 0.001) in both univariate and multivariable analyses, and was divided into three groups (PLNN 0, PLNN 1-5, and PLNN > 5). In the training cohort, multivariate analysis revealed that the increased PLNN of HPSCC gave rise to significantly poor OS and CSS after adjusting for age, sex, tumor size, and cancer stage; this trend was also verified by the validation cohort. Additionally, the survival model incorporating a composite of PLNN and TNM classification (C-index, 0.705, 0.734) performed better than the PLNN and AJCC TNM models. PLNN can serve as a powerful survival predictor for patients with HPSCC and is a surrogate supplement for cancer staging systems.
Keywords: hypopharyngeal squamous cell carcinoma, positive lymph nodes number, prognosis, prediction models, survival predictive values
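For readers who want to reproduce this kind of comparison, a minimal Python sketch (not the authors' code) using the lifelines library is given below; all column names and the synthetic survival data are invented for illustration.

```python
# Hypothetical sketch, not the authors' code: fit Cox models with PLNN, TNM, and both,
# then compare discrimination by concordance index (C-index).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "time": rng.exponential(24.0, n),        # months of follow-up (synthetic)
    "event": rng.integers(0, 2, n),          # 1 = death observed
    "plnn_group": rng.integers(0, 3, n),     # 0, 1-5, >5 positive nodes
    "tnm_stage": rng.integers(1, 5, n),
    "age": rng.integers(40, 85, n),
    "sex": rng.integers(0, 2, n),
    "tumor_size": rng.uniform(5.0, 60.0, n),
})

def c_index(data, covariates):
    cph = CoxPHFitter()
    cph.fit(data[covariates + ["time", "event"]], duration_col="time", event_col="event")
    return cph.concordance_index_

adjusters = ["age", "sex", "tumor_size"]
for label, covs in [("PLNN", ["plnn_group"]), ("TNM", ["tnm_stage"]),
                    ("PLNN + TNM", ["plnn_group", "tnm_stage"])]:
    print(label, round(c_index(df, covs + adjusters), 3))
```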
Procedia PDF Downloads 154
10141 RAFU Functions in Robotics and Automation
Authors: Alicia C. Sanchez
Abstract:
This paper investigates the implementation of RAFU functions (radical functions) in robotics and automation. Specifically, the main goal is to show how these functions may be useful in lane-keeping control and the lateral control of autonomous machines, vehicles, robots or the like. From the knowledge of several points of a certain route, the RAFU functions are used to achieve the lateral control purpose and maintain the lane-keeping errors within the fixed limits. The stability that these functions provide, their ease of approaching any continuous trajectory and the control of the possible error made on the approximation may be useful in practice.
Keywords: automatic navigation control, lateral control, lane-keeping control, RAFU approximation
Procedia PDF Downloads 302
10140 Model Averaging in a Multiplicative Heteroscedastic Model
Authors: Alan Wan
Abstract:
In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.
Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk
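A generic illustration of the weight-choice idea, not the paper's exact criterion: below, a Mallows-type plug-in risk estimate (in the spirit of Hansen's model averaging) is minimised over the weight simplex for nested least-squares candidate models fitted to synthetic heteroscedastic data.

```python
# Illustrative only: frequentist model averaging with a Mallows-type plug-in risk criterion,
# standing in for the paper's ML-based estimator of squared prediction risk.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
# Multiplicative heteroscedasticity in the errors, as in the model class studied.
y = 1.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n) * np.exp(0.3 * X[:, 2])

subsets = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]      # nested candidate mean models
fits, sizes = [], []
for s in subsets:
    Z = np.column_stack([np.ones(n), X[:, s]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fits.append(Z @ beta)
    sizes.append(Z.shape[1])
P, k = np.column_stack(fits), np.array(sizes)
sigma2 = np.mean((y - P[:, -1]) ** 2)                 # error variance from the largest model

def criterion(w):                                     # plug-in estimate of squared prediction risk
    return np.sum((y - P @ w) ** 2) + 2.0 * sigma2 * (w @ k)

res = minimize(criterion, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
print("averaging weights:", np.round(res.x, 3))
```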
Procedia PDF Downloads 385
10139 Reliability-Based Life-Cycle Cost Model for Engineering Systems
Authors: Reza Lotfalian, Sudarshan Martins, Peter Radziszewski
Abstract:
The effect of reliability on life-cycle cost, including the initial and maintenance costs of a system, is studied. The failure probability of a component is used to calculate the average maintenance cost during the operation cycle of the component. The standard deviation of the life-cycle cost is also calculated as an error measure for the average life-cycle cost. As a numerical example, the model is used to study the average life-cycle cost of an electric motor.
Keywords: initial cost, life-cycle cost, maintenance cost, reliability
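A minimal sketch of the idea with assumed costs and failure probability (not the paper's motor data): the expected life-cycle cost and its standard deviation, used as the error measure, are obtained by simulating random failures over the operating period.

```python
# Illustrative sketch: life-cycle cost = initial cost + maintenance cost at random failures.
# All numbers below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
initial_cost = 1200.0           # purchase + installation (assumed)
repair_cost = 300.0             # cost per failure event (assumed)
p_fail_per_year = 0.08          # annual failure probability (assumed)
years = 15

failures = rng.binomial(1, p_fail_per_year, size=(100_000, years)).sum(axis=1)
lcc = initial_cost + repair_cost * failures
print(f"mean life-cycle cost: {lcc.mean():.1f}, std (error measure): {lcc.std():.1f}")
```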
Procedia PDF Downloads 605
10138 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education
Authors: Gülay Dalgıç, Gildis Tachir
Abstract:
Architectural design studios are the ambience in which architectural design is realized as a palpable product in architectural education. In the design studios, the information the architect candidate will use in the design process, the methods of approaching the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process starts with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and then ends with the creation of an acceptable design/result product. Among the many design and thought relationships evaluated in the studio ambience, the most important step is the early design phase. In the early design phase, the first steps of converting the information are taken, and the converted information is used in the constitution of the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. The data for the defined early design phase criteria were obtained with feedback graphics created for architect candidates who performed e-learning in the first year of architectural education and continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program. It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education.
Keywords: education modeling, architecture education, design education, design process
Procedia PDF Downloads 138
10137 Improvement of the Numerical Integration's Quality in Meshless Methods
Authors: Ahlem Mougaida, Hedi Bel Hadj Salah
Abstract:
Several methods are suggested to improve the numerical integration in the Galerkin weak form for meshless methods. In fact, integrating without taking into account the characteristics of the shape functions reproduced by meshless methods (rational functions with compact support, etc.) causes a large integration error that influences the PDE's approximate solution. Different methods of numerical integration for rational functions are discussed and compared. The algorithms are implemented in Matlab. Finally, numerical results are presented to prove the efficiency of our algorithms in improving results.
Keywords: adaptive methods, meshless, numerical integration, rational quadrature
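A small illustration of the underlying problem (not the paper's algorithms, and in Python rather than Matlab): a fixed-order Gauss-Legendre rule, exact for polynomials, loses accuracy on a rational integrand of the kind produced by meshless shape functions, whereas an adaptive rule recovers it.

```python
# Compare a fixed 4-point Gauss-Legendre rule with adaptive quadrature on a rational integrand.
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)             # rational test integrand on [-1, 1]
exact = (2.0 / 5.0) * np.arctan(5.0)                # closed-form value of the integral

nodes, weights = np.polynomial.legendre.leggauss(4) # 4-point Gauss-Legendre rule
gauss = np.dot(weights, f(nodes))
adaptive, _ = quad(f, -1.0, 1.0)                    # adaptive quadrature

print(f"Gauss-4 error:  {abs(gauss - exact):.2e}")
print(f"adaptive error: {abs(adaptive - exact):.2e}")
```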
Procedia PDF Downloads 364
10136 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications
Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain
Abstract:
In this paper, we present a hardware module dedicated to understanding the failure reasons of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has increased in the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. The SEE created by the impact of a particle on an SoC may have consequences that can end in instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors occur is another challenge because of the complexity of the operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we have carried out a fault injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor agnostic, it has been interfaced to a RISC-V by probing some of the processor registers. Probes are then recorded in a ring buffer. Associated hardware watchpoints allow some control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; typical usage is operating system context switch recording. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or system-bus stall, the recorded information can still be recovered, providing the failure reason. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domain is automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results.
Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application
Procedia PDF Downloads 229
10135 Investigation of Delivery of Triple Play Service in GE-PON Fiber to the Home Network
Authors: Anurag Sharma, Dinesh Kumar, Rahul Malhotra, Manoj Kumar
Abstract:
Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that have emerged in recent years is the Passive Optical Network. This paper is targeted at showing the simultaneous delivery of triple play services (data, voice and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to an increase in the bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 734
10134 Logistics Model for Improving Quality in Railway Transport
Authors: Eva Nedeliakova, Juraj Camaj, Jaroslav Masek
Abstract:
This contribution is focused on a methodology for identifying levels of quality and improving quality through a new logistics model in railway transport. It is oriented to the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process within the logistics chain can be taken into account. Various models describe the improvement of quality with emphasis on the time factor throughout the whole transportation logistics chain. Quality of services in railway transport can be determined from the existing level of service quality, by detecting the causes of dissatisfaction among employees as well as customers, and by uncovering strengths and weaknesses. This new logistics model is able to recognize critical processes in the logistics chain. It includes a service quality rating that must respect its specific properties, which are unrepeatability, impalpability, use right at the time services are provided, and particularly changeability, which is a significant factor in the conditions of rail transport as well. These peculiarities influence the quality of service regarding the constantly increasing requirements, and they result in new ways of finding progressive attitudes towards service quality rating.
Keywords: logistics model, quality, railway transport
Procedia PDF Downloads 568
10133 Big Data Analysis Approach for Comparison New York Taxi Drivers' Operation Patterns between Workdays and Weekends Focusing on the Revenue Aspect
Authors: Yongqi Dong, Zuo Zhang, Rui Fu, Li Li
Abstract:
The records generated by taxicabs equipped with GPS devices are of vital importance for studying human mobility behavior; here, however, we focus on taxi drivers' operation strategies between workdays and weekends, temporally and spatially. We identify a group of valuable characteristics from large-scale drivers' behavior in a complex metropolitan environment. Based on the daily operations of 31,000 taxi drivers in New York City, we classify drivers into top, ordinary and low-income groups according to their monthly working load, daily income, daily ranking and the variance of the daily rank. Then, we apply big data analysis and visualization methods to compare the different characteristics among top, ordinary and low-income drivers in the selection of working time and working area, as well as strategies between workdays and weekends. The results verify that top drivers do have special operation tactics that help them serve more passengers and travel faster, thus making more money per unit time. This research provides new possibilities for fully utilizing the information obtained from urban taxicab data for estimating human behavior, which is useful not only for individual taxicab drivers but also for policy-makers in city authorities.
Keywords: big data, operation strategies, comparison, revenue, temporal, spatial
Procedia PDF Downloads 227
10132 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors
Authors: Jakob Krause
Abstract:
The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and therefore past data points only have limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, incentivizing us to use as much data as possible, and the aforementioned non-representativeness doing the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, in this paper, we give a quantitative theory on how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results in order to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling
Procedia PDF Downloads 148
10131 Visibility of the Borders of the Mandibular Canal: A Comparative in Vitro Study Using Digital Panoramic Radiography, Reformatted Panoramic Radiography and Cross Sectional Cone Beam Computed Tomography
Authors: Keerthilatha Pai, Sakshi Kamra
Abstract:
Objectives: Determining the position of the mandibular canal prior to implant placement and surgeries of the posterior mandible is important to avoid nerve injury. The visibility of the mandibular canal varies according to the imaging modality. Although panoramic radiography is the most common, cone beam computed tomography is slowly replacing it. This study was conducted with an aim to determine and compare the visibility of the superior and inferior borders of the mandibular canal in digital panoramic radiographs, reformatted panoramic radiographs and cross-sectional images of cone beam computed tomography. Study design: Digital panoramic, reformatted panoramic and cross-sectional CBCT images of 25 human mandibles were evaluated for the visibility of the superior and inferior borders of the mandibular canal according to a 5-point scoring criterion. Also, the canal was evaluated as completely visible, partially visible and not visible. The mean scores and visibility percentages of all the imaging modalities were determined and compared. The interobserver and intraobserver agreement in the visualization of the superior and inferior borders of the mandibular canal were determined. Results: The superior and inferior borders of the mandibular canal were completely visible in 47% of the samples in digital panoramic, 63% in reformatted panoramic and 75.6% in CBCT cross-sectional images. The mandibular canal was invisible in 24% of samples in digital panoramic, 19% in reformatted panoramic and 2% in cross-sectional CBCT images. Maximum visibility was seen in Zone 5 and least visibility in Zone 1. On comparison of all the imaging modalities, CBCT cross-sectional images showed better visibility of the superior border in Zones 2, 3, 4, and 6 and of the inferior border in Zones 2, 3, 4, and 6. The difference was statistically significant. Conclusion: CBCT cross-sectional images were much superior in the visualization of the mandibular canal in comparison to reformatted and digital panoramic radiographs. The inferior border was better visualized than the superior border in digital panoramic imaging. The mandibular canal was most visible in the posterior one-third region of the mandible, and visibility decreased towards the mental foramen.
Keywords: cone beam computed tomography, mandibular canal, reformatted panoramic radiograph, visualization
Procedia PDF Downloads 127
10130 Transportation Accidents Mortality Modeling in Thailand
Authors: W. Sriwattanapongse, S. Prasitwattanaseree, S. Wongtrangan
Abstract:
Transportation accident mortality is a major problem that leads to loss of human lives and economic losses. The objective was to identify patterns of statistical modeling for estimating mortality rates due to transportation accidents in Thailand using data from 2000 to 2009. The data were taken from the death certificate, vital registration database. The numbers of deaths and mortality rates were computed, classified by gender, age, year and region. There were 114,790 transportation accident deaths. The highest average age-specific transport accident mortality rate was 3.11 per 100,000 per year, in males in the Southern region, and the lowest was 1.79 per 100,000 per year, in females in the North-East region. Linear, Poisson and negative binomial models were chosen for fitting the statistical model. Among the models fitted, the best was chosen based on the analysis of deviance and AIC. The negative binomial model was clearly the most appropriate fit.
Keywords: transportation accidents, mortality, modeling, analysis of deviance
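A hedged sketch of the model-comparison step with synthetic counts (not the Thai registry data): Poisson and negative binomial GLMs with a population offset are fitted and compared by AIC.

```python
# Illustrative only: compare Poisson and negative binomial fits to yearly death counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
years = np.arange(2000, 2010)
population = np.full(10, 1_000_000)                       # assumed exposure per year
deaths = rng.negative_binomial(n=5, p=5 / (5 + 20), size=10)  # overdispersed counts, mean ~20

X = sm.add_constant(years - 2000)
offset = np.log(population)
poisson = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=offset).fit()
negbin = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(alpha=1.0), offset=offset).fit()
print("Poisson AIC:", round(poisson.aic, 1), " NegBin AIC:", round(negbin.aic, 1))
```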
Procedia PDF Downloads 244
10129 Numerical Simulation of Axially Loaded to Failure Large Diameter Bored Pile
Authors: M. Ezzat, Y. Zaghloul, T. Sorour, A. Hefny, M. Eid
Abstract:
The ultimate capacity of large diameter bored piles is usually determined from pile loading tests, as recommended by several international codes and foundation design standards. However, loading this type of pile until apparent failure is achieved is practically seldom done. In this paper, numerical analyses are carried out to simulate the load test of a large diameter bored pile performed at the location of the Alzey highway bridge project (Germany). Test results of the pile load-settlement relationship up to failure, as well as results of the base and shaft resistances, are available. Apparent failure was indicated in this test by the significant increase of the induced settlement during the last load increment applied to the pile head. Measurements of this pile load test are used to assess the quality of the numerical models investigated. Three different soil material models are implemented in the analyses: Mohr-Coulomb (MC), Soft Soil (SS), and Modified Mohr-Coulomb (MMC). Very good agreement is obtained between the field-measured settlement and the settlement calculated using the MMC model. Results of the analysis also showed that the MMC constitutive model is superior to the MC and SS models in predicting the ultimate base and shaft resistances of the large diameter bored pile. After calibrating the numerical model, the behavior of large diameter bored piles under axial loads is discussed and the formation of the plastic zone around the pile is explored. Results obtained showed that the plastic zone below the base of the pile at failure extended laterally to about four times the pile diameter and vertically to about three times the pile diameter.
Keywords: ultimate capacity, large diameter bored piles, plastic zone, failure, pile load test
Procedia PDF Downloads 143
10128 Quantitative Structure-Activity Relationship Modeling of Detoxication Properties of Some 1,2-Dithiole-3-Thione Derivatives
Authors: Nadjib Melkemi, Salah Belaidi
Abstract:
Quantitative Structure-Activity Relationship (QSAR) studies have been performed on nineteen molecules of 1,2-dithiole-3-thione analogues. The compounds used are potent inducers of enzymes involved in the maintenance of reduced glutathione pools, as well as of phase-2 enzymes important to electrophile detoxication. A multiple linear regression (MLR) procedure was used to design the relationships between molecular descriptors and detoxication properties of the 1,2-dithiole-3-thione derivatives. The predictivity of the model was estimated by cross-validation with the leave-one-out method. Our results suggest a QSAR model based on the following descriptors: qS2, qC3, qC5, qS6, DM, Pol, log P, MV, SAG, HE and EHOMO for the specific activity of quinone reductase; and qS1, qS2, qC3, qC4, qC5, qS6, DM, Pol, log P, MV, SAG, HE and EHOMO for the production of growth hormone. To confirm the predictive power of the models, an external set of molecules was used. High correlation between experimental and predicted activity values was observed, indicating the validation and good quality of the derived QSAR models.
Keywords: QSAR, quinone reductase activity, production of growth hormone, MLR
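An illustrative sketch of the MLR-plus-leave-one-out workflow, with a random descriptor matrix standing in for the real descriptors (qC3, DM, log P, ...).

```python
# Illustrative only: MLR between molecular descriptors and activity, validated by leave-one-out.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.normal(size=(19, 5))                   # 19 molecules x 5 descriptors (synthetic)
activity = X @ np.array([0.8, -0.4, 0.3, 0.0, 0.1]) + rng.normal(scale=0.2, size=19)

model = LinearRegression().fit(X, activity)
loo_pred = cross_val_predict(LinearRegression(), X, activity, cv=LeaveOneOut())
print("R2 (fit):", round(model.score(X, activity), 3))
print("Q2 (leave-one-out):", round(r2_score(activity, loo_pred), 3))
```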
Procedia PDF Downloads 350
10127 Meteosat Second Generation Image Compression Based on the Radon Transform and Linear Predictive Coding: Comparison and Performance
Authors: Cherifi Mehdi, Lahdir Mourad, Ameur Soltane
Abstract:
Image compression is used to reduce the number of bits required to represent an image. The Meteosat Second Generation (MSG) satellite allows the acquisition of 12 image files every 15 minutes, which results in large database sizes. The transform selected for image compression should contribute to reducing the data representing the images. The Radon transform retrieves the Radon points that represent the sum of the pixels along a given angle for each direction. Linear predictive coding (LPC) with filtering provides a good decorrelation of Radon points using a predictor constituted by the Symmetric Nearest Neighbor (SNN) filter coefficients, which results in losses during decompression. Finally, Run Length Coding (RLC) gives a high and fixed compression ratio regardless of the input image. In this paper, a novel image compression method based on the Radon transform and linear predictive coding (LPC) for MSG images is proposed. MSG image compression based on the Radon transform and LPC provides a good compromise between compression and quality of reconstruction. A comparison of our method with others, two of which are based on the DCT and one on DWT bi-orthogonal filtering, is carried out to show the power of the Radon transform in its resistance to quantization noise and to evaluate the performance of our method. Evaluation criteria such as PSNR and the compression ratio show the efficiency of our compression method.
Keywords: image compression, Radon transform, linear predictive coding (LPC), run length coding (RLC), Meteosat Second Generation (MSG)
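A partial sketch of two building blocks of such a pipeline, the Radon projection and the PSNR criterion; the SNN-filter LPC prediction and run-length coding stages are not reproduced here, and the phantom image merely stands in for MSG data.

```python
# Radon projection / inverse projection round-trip with PSNR as the quality measure.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                       # test image with values in [0, 1]
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)               # Radon points: line sums per angle
reconstruction = iradon(sinogram, theta=angles)

mse = np.mean((image - reconstruction) ** 2)
psnr = 10.0 * np.log10(1.0 / mse)                   # peak value is 1.0 for this image
print(f"PSNR of Radon round-trip: {psnr:.1f} dB")
```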
Procedia PDF Downloads 421
10126 Comparison of Different Activators Impact on the Alkali-Activated Aluminium-Silicate Composites
Authors: Laura Dembovska, Ina Pundiene, Diana Bajare
Abstract:
Alkali-activated aluminium-silicate composites (AASC) can be used in the production of innovative materials with a wide range of properties and applications. AASC are associated with low CO₂ emissions; in the production process, it is possible to use industrial by-products and waste, thereby minimizing the use of non-renewable natural resources. This study deals with the preparation of heat-resistant porous AASC based on chamotte for high-temperature applications up to 1200°C. Different fillers, aluminium scrap recycling waste as a pore-forming agent, and alkali activation with 6M sodium hydroxide (NaOH) and potassium hydroxide (KOH) solutions were used. Sodium hydroxide (NaOH) is widely used for the synthesis of AASC compared to potassium hydroxide (KOH), but a comparison of the use of different activators for geopolymer synthesis is not well established. Changes in the chemical composition of AASC during heating were identified and quantitatively analyzed using DTA, dimension changes during heating were determined using HTOM, pore microstructure was examined by SEM, and mineralogical composition of AASC was determined by XRD. Lightweight porous AASC activated with NaOH were obtained with density in the range from 600 to 880 kg/m³ and compressive strength from 0.8 to 2.7 MPa, while for AASC activated with KOH the density was in the range from 750 to 850 kg/m³ and compressive strength from 0.7 to 2.1 MPa.
Keywords: alkali activation, alkali activated materials, elevated temperature application, heat resistance
Procedia PDF Downloads 178
10125 Named Entity Recognition System for Tigrinya Language
Authors: Sham Kidane, Fitsum Gaim, Ibrahim Abdella, Sirak Asmerom, Yoel Ghebrihiwot, Simon Mulugeta, Natnael Ambassager
Abstract:
The lack of annotated datasets is a bottleneck to the progress of NLP in low-resourced languages. The work presented here consists of large-scale annotated datasets and models for the named entity recognition (NER) system for the Tigrinya language. Our manually constructed corpus comprises over 340K words tagged for NER, with over 118K of the tokens also having parts-of-speech (POS) tags, annotated with 12 distinct classes of entities, represented using several types of tagging schemes. We conducted extensive experiments covering convolutional neural networks and transformer models; the highest performance achieved is 88.8% weighted F1-score. These results are especially noteworthy given the unique challenges posed by Tigrinya’s distinct grammatical structure and complex word morphologies. The system can be an essential building block for the advancement of NLP systems in Tigrinya and other related low-resourced languages and serve as a bridge for cross-referencing against higher-resourced languages.
Keywords: Tigrinya NER corpus, TiBERT, TiRoBERTa, BiLSTM-CRF
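A toy illustration (invented tags, not the Tigrinya corpus) of the weighted F1-score used to report performance, computed here at the token level with scikit-learn; entity-level scoring would typically use a dedicated library such as seqeval.

```python
# Token-level weighted F1 over BIO tags; the sequences below are invented for illustration.
from sklearn.metrics import f1_score

gold = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-ORG", "O", "O"]
pred = ["B-PER", "O",     "O", "B-LOC", "O", "B-PER", "O", "O"]
print("weighted F1:", round(f1_score(gold, pred, average="weighted"), 3))
```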
Procedia PDF Downloads 131
10124 Regression for Doubly Inflated Multivariate Poisson Distributions
Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta
Abstract:
Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, the number of observations in some cells is much larger than in other cells, then the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher compared to the other cells and develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. For illustrating the proposed methodologies, we present real data containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation to estimate the unknown parameters of the models.
Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions
Procedia PDF Downloads 156
10123 Modelling the Effect of Biomass Appropriation for Human Use on Global Biodiversity
Authors: Karina Reiter, Stefan Dullinger, Christoph Plutzar, Dietmar Moser
Abstract:
Due to population growth and changing patterns of production and consumption, the demand for natural resources and, as a result, the pressure on Earth’s ecosystems are growing. Biodiversity mapping can be a useful tool for assessing species endangerment or detecting hotspots of extinction risks. This paper explores the benefits of using the change in trophic energy flows as a consequence of the human alteration of the biosphere in biodiversity mapping. To this end, multiple linear regression models were developed to explain species richness in areas where there is no human influence (i.e. wilderness) for three taxonomic groups (birds, mammals, amphibians). The models were then applied to predict (I) potential global species richness using potential natural vegetation (NPPpot) and (II) global ‘actual’ species richness after biomass appropriation using NPP remaining in ecosystems after harvest (NPPeco). By calculating the difference between predicted potential and predicted actual species numbers, maps of estimated species richness loss were generated. Results show that biomass appropriation for human use can indeed be linked to biodiversity loss. Areas for which the models predicted high species loss coincide with areas where species endangerment and extinctions are recorded to be particularly high by the International Union for Conservation of Nature and Natural Resources (IUCN). Furthermore, the analysis revealed that while the species distribution maps of the IUCN Red List of Threatened Species used for this research can determine hotspots of biodiversity loss in large parts of the world, the classification system for threatened and extinct species needs to be revised to better reflect local risks of extinction.
Keywords: biodiversity loss, biomass harvest, human appropriation of net primary production, species richness
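A schematic of the mapping idea with synthetic grid cells and invented coefficients: a richness model calibrated on wilderness cells is applied to NPPpot and NPPeco, and the difference is read as estimated species loss.

```python
# Illustrative only: regress wilderness richness on potential NPP, then predict the
# richness difference between NPPpot and NPPeco per grid cell.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
npp_pot = rng.uniform(100, 1500, size=500)                       # potential NPP per cell (synthetic)
richness = 5 + 0.05 * npp_pot + rng.normal(scale=5, size=500)    # wilderness calibration data (synthetic)

model = LinearRegression().fit(npp_pot.reshape(-1, 1), richness)
npp_eco = npp_pot * rng.uniform(0.4, 1.0, size=500)              # NPP remaining after appropriation
loss = model.predict(npp_pot.reshape(-1, 1)) - model.predict(npp_eco.reshape(-1, 1))
print("mean predicted richness loss per cell:", round(loss.mean(), 2))
```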
Procedia PDF Downloads 130
10122 Improvement of the Aerodynamic Behaviour of a Land Rover Discovery 4 in Turbulent Flow Using Computational Fluid Dynamics (CFD)
Authors: Ahmed Al-Saadi, Ali Hassanpour, Tariq Mahmud
Abstract:
The main objective of this study is to investigate ways to reduce the aerodynamic drag coefficient and to increase the stability of a full-size Sport Utility Vehicle using three-dimensional Computational Fluid Dynamics (CFD) simulation. The baseline model in the simulation was the Land Rover Discovery 4. Many aerodynamic devices and external design modifications were used in this study. These drag reduction techniques were tested individually or in combination to obtain the best design. All new models have the same capacity and comfort as the baseline model. A uniform freestream velocity of the air at the inlet, ranging from 28 m/s to 40 m/s, was used. ANSYS Fluent software (version 16.0) was used to simulate all models. The drag coefficient obtained from ANSYS Fluent for the baseline model was validated with experimental data. It is found that the use of modern aerodynamic add-on devices and modifications has a significant effect in reducing the aerodynamic drag coefficient.
Keywords: aerodynamics, RANS, sport utility vehicle, turbulent flow
Procedia PDF Downloads 316
10121 Analysis Of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington’s Disease and Amyotrophic Lateral Sclerosis
Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi
Abstract:
Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed to counterbalance the motor disability have a great market potential. Animal models of chronic neurodegenerative diseases display a number of deteriorating motor phenotypes during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods to study motor functions in rodents are often limited to evaluating gross motor functions only at advanced stages of the disease phenotype. The most commonly applied traditional motor assays used in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture subtle motor dysfunctions and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic, applied both in diagnosis and in determination of the therapeutic response to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) of amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models of Huntington’s disease (HD). The readouts from walking behavior included gait properties with kinematic data and body movement trajectories, including analysis of various points of interest such as the movement and position of landmarks on the torso, tail and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction was evident with conventional motor assays. Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets from the HD models were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller, focused set of mutually uncorrelated gait parameters showing a strong genotype difference. Kinematic fine motor analysis of the transgenic animal models described in this presentation shows that this method is a sensitive, objective and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. As a result of the analysis, a comprehensive set of fine motor parameters for each model is created; these parameters provide better understanding of the disease progression and enhanced sensitivity of this assay for therapeutic testing compared to classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, the alterations in gait were evident already several weeks earlier than with traditional gross motor assays. Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole-body movement patterns, such as in relation to joints and various body parts, longitudinally, providing a sophisticated and translatable method for disseminating motor components in rodent disease models and evaluating therapeutic interventions.
Keywords: gait analysis, kinematic, motor impairment, inherent feature
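A brief sketch of the dimensionality-reduction step, with random numbers standing in for the ~80 measured gait parameters: readouts are standardised and reduced with PCA to a few uncorrelated components on which genotype differences can then be tested.

```python
# Illustrative only: PCA on a synthetic animals-by-parameters gait matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
gait = rng.normal(size=(40, 80))          # 40 animals x 80 gait parameters (synthetic)
scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(gait))
print("component scores shape:", scores.shape)  # genotype differences would be tested on these
```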
Procedia PDF Downloads 355
10120 Aging Evaluation of Ammonium Perchlorate/Hydroxyl Terminated Polybutadiene-Based Solid Rocket Engine by Reactive Molecular Dynamics Simulation and Thermal Analysis
Authors: R. F. B. Gonçalves, E. N. Iwama, J. A. F. F. Rocco, K. Iha
Abstract:
Propellants based on Hydroxyl Terminated Polybutadiene/Ammonium Perchlorate (HTPB/AP) are the most commonly used in most of the rocket engines of the Brazilian Armed Forces. This work examined the possibility of extending their useful life (currently 10 years) by performing kinetic-chemical analyses of the energetic material via Differential Scanning Calorimetry (DSC) and by computer simulation of the aging process using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software. Thermal analysis via DSC was performed in triplicate and at three heating rates (5 ºC, 10 ºC, and 15 ºC) on a rocket motor with 11 years of shelf-life, using the Arrhenius equation to obtain its activation energy via the Ozawa and Kissinger kinetic methods, allowing comparison with data from the manufacturing period (standard motor). In addition, the kinetic parameters of internal pressure of the combustion chamber in 08 rocket engines with 11 years of shelf-life were also acquired, for comparison with the engine start-up data.
Keywords: shelf-life, thermal analysis, Ozawa method, Kissinger method, LAMMPS software, thrust
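A hedged numerical example of the Kissinger method with assumed peak temperatures (not the measured DSC data): the activation energy follows from the slope of ln(β/Tp²) against 1/Tp across the three heating rates.

```python
# Kissinger method: ln(beta / Tp^2) = -Ea/(R*Tp) + const, so Ea = -slope * R.
# Peak temperatures below are assumed for illustration only.
import numpy as np

R = 8.314                                   # J/(mol K)
beta = np.array([5.0, 10.0, 15.0])          # heating rates, as in the abstract
Tp = np.array([601.0, 611.0, 617.0])        # exothermic peak temperatures in K (assumed)

slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R / 1000.0                    # activation energy in kJ/mol
print(f"Kissinger activation energy: {Ea:.0f} kJ/mol")
```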
Procedia PDF Downloads 127
10119 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate the feasibility of chemical dermal exposure assessment models for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: the UK Control of Substances Hazardous to Health (COSHH), the European Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Dutch Dose-Related Effect Assessment Model (DREAM), the Dutch Stoffenmanager (STOFFEN), the Nicaraguan Dermal Exposure Ranking Method (DERM), and the USA/Canada Public Health Engineering Department (PHED) model. Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results evaluated in all industries belong to the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency have a positive correlation. There is a positive correlation between skin exposure, relative working time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation. We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p < 0.05), respectively. The STOFFEN and DREAM models have poor correlation; the coefficients are 0.24 and 0.29 (p > 0.05), respectively. According to the results, both the DERM and RISKOFDERM models are suitable for use in these selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated to reduce uncertainty and enhance applicability in the future.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
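A simple sketch of the validation step with fabricated scores: a semi-quantitative model score is correlated against a quantitative exposure estimate using Pearson's test, the comparison used to judge models such as DERM and RISKOFDERM.

```python
# Illustrative only: Pearson correlation between model scores and quantitative estimates.
from scipy.stats import pearsonr

semi_quantitative = [12, 25, 31, 44, 58]     # model score per workplace (assumed values)
quantitative = [0.8, 1.9, 2.4, 3.6, 4.9]     # quantitative exposure estimate (assumed values)
r, p = pearsonr(semi_quantitative, quantitative)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```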
Procedia PDF Downloads 169
10118 Provenance in Scholarly Publications: Introducing the provCite Ontology
Authors: Maria Joseph Israel, Ahmed Amer
Abstract:
Our work aims to broaden the application of provenance technology beyond its traditional domains of scientific workflow management and database systems by offering a general provenance framework to capture richer and extensible metadata in unstructured textual data sources such as literary texts, commentaries, translations, and digital humanities. Specifically, we demonstrate the feasibility of capturing and representing expressive provenance metadata, including more of the context for citing scholarly works (e.g., the authors’ explicit or inferred intentions at the time of developing his/her research content for publication), while also supporting subsequent augmentation with similar additional metadata (by third parties, be they human or automated). To better capture the nature and types of possible citations, in our proposed provenance scheme metaScribe, we extend standard provenance conceptual models to form our proposed provCite ontology. This provides a conceptual framework which can accurately capture and describe more of the functional and rhetorical properties of a citation than can be achieved with any current models.
Keywords: knowledge representation, provenance architecture, ontology, metadata, bibliographic citation, semantic web annotation
Procedia PDF Downloads 117
10117 Development of E-Tendering Models for Nigerian Public Procuring Entities
Authors: Bello Abdullahi, Kabir Bala, Yahaya M. Ibrahim, Ahmed D. Ibrahim
Abstract:
Public sector tendering has traditionally been conducted using manual paper-based processes, which are known to be inefficient, less transparent, and more prone to manipulations and errors. However, the advent of the Internet and its associated technologies has led to the development of numerous e-Tendering systems that addressed many of the problems associated with the manual paper-based tendering system. Currently, in Nigeria, the public tendering processes are largely conducted based on a manual paper-based system that is bedevilled by a number of problems such as inordinate delays, inefficiencies, manipulation of the tender evaluation process, corruption, and lack of transparency and competition, among others. These problems can be addressed through the adoption of existing web-based e-Tendering systems, which are known to address most of these problems. However, the existing e-Tendering systems that have been developed are not based on the Nigerian legal procurement processes, and as such their suitability for local application is very limited. This paper is part of a larger study that attempts to address this problem through the development of an e-Tendering system that is based on the requirements of Nigerian public procuring entities. In this paper, the identified tendering processes commonly used by Nigerian public procuring entities in the selection of construction sources are presented. A multi-methods research approach was used to identify those tendering processes. Specifically, 19 existing business use cases used by Nigerian public procuring entities were identified, and 61 system use cases were prescribed based on the identified business use cases. The use cases were used as the basis for the development of domain and software conceptual models. The models were successfully used to guide the development of an e-Tendering system called NPS-eTender. Ripple and the Unified Process were adopted as the software development methodologies.
Keywords: e-tendering, e-procurement, requirement model, conceptual model, public sector tendering, public procurement
Procedia PDF Downloads 195
10116 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore
Authors: Qiao-Yu Warren Cai
Abstract:
Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners' language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of the research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can be used to make summations about the subjects or samples that research has actually measured to describe the numerical data, but the collected data cannot be generalized to the population. Comte, a French positivist, has argued since the 19th century that human beings' knowledge, whether the discipline is humanistic and social science or natural science, should be verified in a scientific way to construct a universal theory to explain the truth and human beings' behaviors. Inferential statistics, able to make judgments of the probability of whether a difference observed between groups is dependable or caused by chance (Free Geography Notes, 2015) and to infer from the subjects or examples what the population might think or how it might behave, is just the right method to support Comte's argument in the field of TCSOL. Also, inferential statistics is a core of quantitative research, but little research has been conducted by combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners' language corpus output errors by using the one-way ANOVA, so the findings of previous research are limited to inferring the population's Chinese errors according to the given samples' Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study aims to utilize the one-way ANOVA to analyze corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in 'shì (是) sentence' and word order errors, but compared with Americans and Singaporeans, it is significantly easier for Myanmar learners to produce 'sentence blends.' Based on the above results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can have (and use) learning strategies to lower errors.
Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans
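A toy illustration of the inferential step the study advocates, with invented error counts: a one-way ANOVA comparing mean error counts across the three L1 groups.

```python
# Illustrative only: one-way ANOVA on error counts per learner, grouped by L1.
from scipy.stats import f_oneway

american = [3, 5, 4, 6, 2]
myanmar = [9, 7, 8, 10, 6]
singaporean = [4, 3, 5, 2, 4]
F, p = f_oneway(american, myanmar, singaporean)
print(f"F = {F:.2f}, p = {p:.4f}")  # p < 0.05 would indicate a significant group difference
```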
Procedia PDF Downloads 106
10115 Enhance the Power of Sentiment Analysis
Authors: Yu Zhang, Pedro Desouza
Abstract:
Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept of selecting appropriate classifiers based on the features and qualities of data sources by comparing the performances of five classifiers with three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduced a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and provide insights on how to further improve the predictive power of sentiment analysis. The modelling and testing work was done in R and Greenplum in-database analytic tools.
Keywords: sentiment analysis, social media, Twitter, Amazon, data mining, machine learning, text mining
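A minimal sketch of the kind of per-source classifier benchmarking described, with a handful of invented reviews in place of the Twitter, Amazon and movie-review corpora, and scikit-learn standing in for the R/Greenplum tooling used in the paper.

```python
# Illustrative only: compare two common sentiment classifiers on the same bag-of-words features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, loved it", "terrible, broke in a day", "works fine",
         "awful support", "excellent value", "not worth the money"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative (invented)

for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    acc = make_pipeline(TfidfVectorizer(), clf).fit(texts, labels).score(texts, labels)
    print(type(clf).__name__, "training accuracy:", round(acc, 2))
```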
Procedia PDF Downloads 353
10114 Ontology-Based Backpropagation Neural Network Classification and Reasoning Strategy for NoSQL and SQL Databases
Authors: Hao-Hsiang Ku, Ching-Ho Chi
Abstract:
Big data applications have become an imperative for many fields. Many researchers have been devoted to increasing correct rates and reducing time complexities. Hence, this study designs and proposes an ontology-based backpropagation neural network classification and reasoning strategy for NoSQL big data applications, called ON4NoSQL. ON4NoSQL is responsible for enhancing the performance of classification in NoSQL and SQL databases to build up mass behavior models. Mass behavior models are made with MapReduce techniques and the Hadoop distributed file system based on the Hadoop service platform. The reference engine of ON4NoSQL is the ontology-based backpropagation neural network classification and reasoning strategy. Simulation results indicate that ON4NoSQL can efficiently construct a high-performance environment for data storing, searching, and retrieving.
Keywords: Hadoop, NoSQL, ontology, back propagation neural network, high distributed file system
Procedia PDF Downloads 262
10113 Water Quality Calculation and Management System
Authors: H. M. B. N Jayasinghe
Abstract:
Water is found almost everywhere on Earth, and water resources contain a lot of pollution. Some diseases can be spread through water to living beings, so water should undergo a number of treatments necessary to make it drinkable. Purification technology for wastewater is therefore a must, and wastewater treatment plants play a major role in these issues. The procedures carried out after the water treatment process have always been based on manual calculations and recordings. Water purification plants may involve many manual processes, which makes the process very time-consuming, so the final evaluation and the chemical and biological treatment processes get delayed. To prevent these drawbacks, computerized, programmable calculation and analytical techniques are to be introduced to the laboratory staff. An automated system will be a solution that guarantees rational selection. A decision support system is a way to model data and make quality decisions based upon it. It is widely used in the world for various kinds of process automation. Decision support systems that just collect data and organize it effectively are usually called passive models; they do not suggest a specific decision but only reveal information. This web-based system is based on global positioning data, adding a facility with map location. The most valuable feature is the SMS and e-mail alert service to inform the appropriate person of a critical issue. The technologies behind the system are HTML, MySQL, PHP, and some other web development technologies. Existing computerized water chemistry analysis tools are not very advanced; one example is the swimming pool water quality calculator. The validity of the system has been verified by test running and comparison with existing plant data. The automated system will make life easier, both productively and qualitatively.
Keywords: automated system, wastewater, purification technology, map location
Procedia PDF Downloads 247