Search results for: estimating of trajectory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1071

501 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction, which is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. Of the many algorithms available for estimating these model parameters, the most popular is the Expectation Maximization (EM) algorithm. The model parameters are estimated from protein datasets such as RS126 using a Bayesian probabilistic method, the data set being categorical. This work can then be extended to compare the efficiency of the EM algorithm with other parameter-estimation algorithms, which will in turn yield an efficient component for protein secondary structure prediction. Further, this paper provides scope for using these parameters to predict the secondary structure of proteins with machine learning techniques such as neural networks and fuzzy logic. The ultimate objective is to achieve greater accuracy than previously attained.
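
As an illustration of the parameter-estimation step described above, the sketch below estimates transition and emission probabilities from labeled (residue, structure) sequences, with add-alpha (Laplace) smoothing standing in for a simple Bayesian prior over categorical data. This is a minimal sketch, not the authors' implementation; the toy sequence and the H/E/C state labels are assumptions.

```python
from collections import defaultdict

def estimate_hmm_params(sequences, alpha=1.0):
    """Estimate transition and emission probabilities from labeled
    (observation, state) sequences, with add-alpha smoothing so that
    unseen events keep non-zero probability."""
    trans = defaultdict(lambda: defaultdict(float))
    emit = defaultdict(lambda: defaultdict(float))
    states, symbols = set(), set()
    for seq in sequences:                          # seq = [(residue, structure), ...]
        for (obs, s) in seq:
            states.add(s); symbols.add(obs)
            emit[s][obs] += 1
        for (_, s1), (_, s2) in zip(seq, seq[1:]):
            trans[s1][s2] += 1
    A = {s: {t: (trans[s][t] + alpha) / (sum(trans[s].values()) + alpha * len(states))
             for t in states} for s in states}
    B = {s: {o: (emit[s][o] + alpha) / (sum(emit[s].values()) + alpha * len(symbols))
             for o in symbols} for s in states}
    return A, B

# Toy example: residues paired with secondary-structure labels (H/E/C)
toy = [[("A", "H"), ("L", "H"), ("G", "C"), ("V", "E")]]
A, B = estimate_hmm_params(toy)
```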

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 452
500 Forecast of Polyethylene Properties in the Gas Phase Polymerization Aided by Neural Network

Authors: Nasrin Bakhshizadeh, Ashkan Forootan

Abstract:

A major problem affecting quality control in industrial polymerization is the lack of suitable on-line measurement tools for evaluating polymer properties such as the melt and density indices. Conventionally, the polymerization is controlled manually by taking samples, measuring the polymer quality in the laboratory, and recording the results. This procedure is highly time-consuming and leads to the production of large quantities of off-specification product. The on-line application for estimating melt index and density proposed in this study is a neural network built on input-output data from a polyethylene production plant. Reactor temperature, reactor bed level, the mass flow rates of ethylene, hydrogen, and butene-1, and the molar concentrations of ethylene, hydrogen, and butene-1 are used to establish the neural model of the process. The neural network is trained on actual operational data using back-propagation and Levenberg-Marquardt techniques. The simulated results indicate that the neural network process model, established with three layers (one hidden layer) for forecasting the density and four layers for the melt index, successfully predicts those quality properties.
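
A minimal sketch of the kind of neural model described above, using scikit-learn's MLPRegressor; scikit-learn does not provide Levenberg-Marquardt training, so the 'lbfgs' solver stands in for it here, and the synthetic data and feature ordering are placeholders, not the actual plant data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical feature ordering: reactor temperature, bed level, mass flows of
# ethylene / hydrogen / butene-1, and their molar concentrations.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                               # stand-in plant data
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)     # stand-in melt index

# One hidden layer, as the paper reports for the density model
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```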

Keywords: polyethylene, polymerization, density, melt index, neural network

Procedia PDF Downloads 124
499 Ensemble Sampler For Infinite-Dimensional Inverse Problems

Authors: Jeremie Coullon, Robert J. Webber

Abstract:

We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, MCMC methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal. This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen-Loève (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler that is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters. We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
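
The AIES building block is the Goodman-Weare "stretch move". The sketch below is a minimal finite-dimensional version of that move, as one might apply it to the low-wavenumber KL coefficients; the Gaussian log-posterior, walker count, and dimensionality are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

def stretch_move(walkers, log_post, a=2.0, rng=np.random.default_rng()):
    """One sweep of the affine-invariant 'stretch move' (Goodman & Weare)."""
    n, d = walkers.shape
    for k in range(n):
        j = rng.choice([i for i in range(n) if i != k])    # partner walker
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a      # stretch factor, g(z) ~ 1/sqrt(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        log_ratio = (d - 1) * np.log(z) + log_post(proposal) - log_post(walkers[k])
        if np.log(rng.random()) < log_ratio:
            walkers[k] = proposal
    return walkers

# Toy target: standard Gaussian posterior on 5 KL coefficients, 16 walkers
log_post = lambda x: -0.5 * x @ x
walkers = np.random.default_rng(1).normal(size=(16, 5))
for _ in range(100):
    walkers = stretch_move(walkers, log_post)
```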

Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction

Procedia PDF Downloads 138
498 Exploring Public Trust in Democracy

Authors: Yaron Katz

Abstract:

The investigation of immigrants' electoral choices has remained relatively uncharted territory despite the fact that numerous nations extend political rights to their expatriates. This paper centers its attention on the matter of public trust in democracy, with a focus on the intricacies of Israeli politics as a divided system. It delves into the potential implications of political and social transformations stemming from the involvement of expatriate voters in elections taking place in their country of origin. In doing so, the article endeavors to explore a pathway for resolving a persistent challenge facing the stability of the Israeli political landscape over the past decade: the difficulty in forming a resilient government that genuinely represents the majority of voters. An examination is conducted into the role played by a demographic with the capacity to exert significant influence on election outcomes, namely, individuals residing outside of Israel. The objective of this research is to delve into this subject, dissecting social developments and political prospects that may shape the country's trajectory in the coming decades. This inquiry is especially pertinent given the extensive engagement of migrants in Israeli politics and the link between Israelis living abroad and their home country. Nevertheless, the study's findings reveal that while former citizens exhibit extensive involvement in Israeli politics and are cognizant of the potential consequences of permitting them to participate in elections, they maintain steadfastly unfavorable views regarding the inclusion of Israelis living overseas in their home country's electoral processes.

Keywords: trust, globalization, policy, democracy

Procedia PDF Downloads 27
497 Modeling Pan Evaporation Using Intelligent Methods of ANN, LSSVM and Tree Model M5 (Case Study: Shahroud and Mayamey Stations)

Authors: Hamidreza Ghazvinian, Khosro Ghazvinian, Touba Khodaiean

Abstract:

The importance of evaporation estimation in water resources and agricultural studies is undeniable. Pan evaporation is used as an indicator to determine the evaporation of lakes and reservoirs around the world because its data are easy to interpret. In this research, intelligent models were investigated for estimating daily pan evaporation. Shahroud and Mayamey, two cities in Semnan province, Iran, were studied; both have dry climates with high evaporation potential. Eleven years of meteorological data from the synoptic stations of Shahroud and Mayamey were used. The intelligent models applied in this study are the Artificial Neural Network (ANN), Least Squares Support Vector Machine (LSSVM), and M5 tree models. The meteorological parameters of minimum and maximum air temperature (Tmin, Tmax), wind speed (WS), sunshine hours (SH), air pressure (PA), and relative humidity (RH) were selected as input data, and pan evaporation (EP) was taken as the output. 70% of the data were used for training and 30% for testing. The models were evaluated with the coefficient of determination (R²), root mean square error (RMSE), and mean absolute error (MAE). The results for the Shahroud and Mayamey stations showed that all three models performed reasonably well.
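
A short sketch of the three evaluation metrics named above (R², RMSE, MAE), as they might be computed when comparing the ANN, LSSVM, and M5 predictions against observed pan evaporation; the example values are hypothetical.

```python
import numpy as np

def evaluate(obs, pred):
    """R^2, RMSE, and MAE, as used to compare the pan-evaporation models."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return {"R2": 1.0 - ss_res / ss_tot,
            "RMSE": np.sqrt(np.mean((obs - pred) ** 2)),
            "MAE": np.mean(np.abs(obs - pred))}

# Hypothetical daily pan-evaporation values (mm/day)
print(evaluate([5.1, 6.3, 7.0, 4.8], [5.4, 6.0, 6.7, 5.0]))
```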

Keywords: pan evaporation, intelligent methods, Shahroud, Mayamey

Procedia PDF Downloads 58
496 Anthropometric Data Variation within Gari-Frying Population

Authors: T. M. Samuel, O. O. Aremu, I. O. Ismaila, L. I. Onu, B. O. Adetifa, S. E. Adegbite, O. O. Olokoshe

Abstract:

The imperative of anthropometry in designing to fit cannot be overemphasized. Of essence is the variability of measurements within the population for which data are collected. In this paper, anthropometric data were collected for the design of a gari-frying facility, so that the work system would fit the gari-frying population in the Southwestern states of Nigeria, comprising Lagos, Ogun, Oyo, Osun, Ondo, and Ekiti. Twenty-seven body dimensions were measured on 120 gari-frying processors. Statistical analysis was performed using the SPSS package to determine the mean, standard deviation, minimum value, maximum value, and percentiles (2nd, 5th, 25th, 50th, 75th, 95th, and 98th) of the different anthropometric parameters. A one-sample t-test was conducted to determine the variation within the population. The 50th percentiles of some of the anthropometric parameters were compared with those from other populations in the literature. The correlation between the workers' age and body anthropometry was also investigated. The mean weight, height, shoulder height (sitting), eye height (standing), and eye height (sitting) are 63.37 kg, 1.57 m, 0.55 m, 1.45 m, and 0.67 m, respectively. The results also show a high correlation with other populations and a statistically significant difference in the variability of data within the population for all the body dimensions measured. With a mean age of 42.36 years, the results show that age would be a poor indicator for estimating the anthropometry of this population.
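
A minimal sketch of the percentile summary and one-sample t-test described above, on a single hypothetical body dimension; the simulated stature values and the reference mean are assumptions, not the survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical stature measurements (m) for 120 gari-frying processors
rng = np.random.default_rng(42)
stature = rng.normal(loc=1.57, scale=0.06, size=120)

percentiles = np.percentile(stature, [2, 5, 25, 50, 75, 95, 98])
print("mean %.3f  sd %.3f" % (stature.mean(), stature.std(ddof=1)))
print("percentiles:", np.round(percentiles, 3))

# One-sample t-test against a reference value (e.g., a published 50th percentile)
t, p = stats.ttest_1samp(stature, popmean=1.60)
print("t = %.2f, p = %.4f" % (t, p))
```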

Keywords: anthropometry, cassava processing, design to fit, gari-frying, workstation design

Procedia PDF Downloads 234
495 Linguistic Features for Sentence Difficulty Prediction in Aspect-Based Sentiment Analysis

Authors: Adrian-Gabriel Chifu, Sebastien Fournier

Abstract:

One of the challenges of natural language understanding is dealing with the subjectivity of sentences, which may express opinions and emotions that add layers of complexity and nuance. Sentiment analysis is a field that aims to extract and analyze these subjective elements from text, and it can be applied at different levels of granularity, such as document, paragraph, sentence, or aspect. Aspect-based sentiment analysis is a well-studied topic with many available data sets and models. However, there is no clear definition of what makes a sentence difficult for aspect-based sentiment analysis. In this paper, we explore this question by conducting an experiment with three data sets, "Laptops", "Restaurants", and "MTSC" (Multi-Target-dependent Sentiment Classification), together with a merged version of all three. We study the impact of domain diversity and syntactic diversity on difficulty. We use a combination of classifiers to identify the most difficult sentences and analyze their characteristics. We employ two ways of defining sentence difficulty. The first is binary and labels a sentence as difficult if the classifiers fail to correctly predict the sentiment polarity. The second is a six-level scale based on how many of the top five best-performing classifiers can correctly predict the sentiment polarity. We also define nine linguistic features that, in combination, aim to estimate difficulty at the sentence level.
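
The six-level difficulty scale described above can be sketched as follows: a sentence's difficulty is five minus the number of the top five classifiers that predict its polarity correctly. The classifier outputs below are invented for illustration.

```python
import numpy as np

def difficulty_levels(gold, preds):
    """Six-level difficulty per sentence: 5 minus the number of the top-5
    classifiers that predict its sentiment polarity correctly (0 = easiest)."""
    gold = np.asarray(gold)
    preds = np.asarray(preds)            # shape: (5 classifiers, n sentences)
    correct = (preds == gold).sum(axis=0)
    return 5 - correct

gold = ["pos", "neg", "neu", "pos"]
preds = [["pos", "neg", "neu", "neg"],   # classifier 1
         ["pos", "neg", "pos", "neg"],
         ["pos", "pos", "neu", "neg"],
         ["pos", "neg", "neu", "neg"],
         ["neg", "neg", "neu", "neg"]]
print(difficulty_levels(gold, preds))    # -> [1 1 1 5]
```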

Keywords: sentiment analysis, difficulty, classification, machine learning

Procedia PDF Downloads 56
494 Numerical Response of Planar HPGe Detector for 241Am Contamination of Various Shapes

Authors: M. Manohari, Himanshu Gupta, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Injection is one of the potential routes of intake in a radioactive facility. The internal dose due to such an intake is monitored at the Radiation Emergency Medical Centre, IGCAR, using a portable planar HPGe detector. A contaminated wound may have various shapes, and in a reprocessing facility the potential for wound contamination with actinides is higher. Efficiency is one of the input parameters for the estimation of internal dose; estimating these efficiencies experimentally would be tedious and cumbersome, and numerical estimation can supplement experiment. As an initial step in this study, 241Am contamination of different shapes is considered. The portable planar HPGe detector was modeled using the Monte Carlo code FLUKA, and the effects of parameters such as the distance of the contamination from the detector and the radius of a circular contamination were studied. Efficiency values for point and surface contamination located at different distances were estimated. The effect of the surface-source radius on efficiency was more pronounced when the source was at 1 cm than at a source-to-detector distance of 10 cm: at 1 cm the efficiency decreased quadratically as the radius increased, while at 10 cm it decreased linearly. The point-source efficiency varied exponentially with source-to-detector distance.
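
A hedged sketch of fitting the reported trends, an exponential decay of point-source efficiency with distance and a quadratic decrease with surface-source radius at 1 cm; the efficiency values below are invented placeholders, not FLUKA output.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical point-source efficiencies at several source-to-detector distances
dist = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])           # cm
eff_d = np.array([0.20, 0.12, 0.05, 0.025, 0.014, 0.009])

# Point-source efficiency is reported to vary exponentially with distance
exp_model = lambda d, a, b: a * np.exp(-b * d)
(a, b), _ = curve_fit(exp_model, dist, eff_d, p0=(0.3, 0.3))
print("efficiency ~ %.3f * exp(-%.3f * d)" % (a, b))

# At 1 cm, efficiency is reported to fall quadratically with source radius
radius = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                # cm
eff_r = np.array([0.20, 0.19, 0.16, 0.12, 0.07])
quad = np.polyfit(radius, eff_r, deg=2)
print("quadratic fit coefficients:", quad)
```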

Keywords: planar HPGe, efficiency value, injection, surface source

Procedia PDF Downloads 23
493 Features of Calculating Structures for Frequent Weak Earthquakes

Authors: M. S. Belashov, A. V. Benin, Lin Hong, Sh. Sh. Nazarova, O. B. Sabirova, A. M. Uzdin

Abstract:

The features of calculating structures for the action of weak earthquakes are analyzed. Earthquakes with return periods of 30 years and 50 years are considered. In the first case, the structure should operate normally without damage after the earthquake; in the second, damage is allowed provided it does not affect the structure's ability to operate. Three issues are emphasized: setting the elastic and damping characteristics of reinforced concrete, formalizing limit states, and combining loads. The dependence of damping on the reinforcement coefficient is estimated. When evaluating limit states, in addition to calculations for crack resistance and strength, a human factor, i.e., the possibility of panic among people, was considered. To avoid panic, it is proposed to limit the floor-by-floor velocity level in certain octave ranges. Proposals have been developed for estimating the coefficients of combination of various loads with the seismic load. As an example, the combination coefficients for seismic and ice loads are estimated. It is shown that for strong actions the combination coefficients for different regions turn out to be close, while for weak actions they may differ.

Keywords: weak earthquake, frequent earthquake, damage, limit state, reinforcement, crack resistance, strength resistance, floor-by-floor velocity, combination coefficients

Procedia PDF Downloads 65
492 Experimental Study on Free and Forced Heat Transfer and Pressure Drop of Copper Oxide-Heat Transfer Oil Nanofluid in Horizontal and Inclined Microfin Tubes

Authors: F. Hekmatipour, M. A. Akhavan-Behabadi, B. Sajadi

Abstract:

In this paper, the combined free and forced convection heat transfer of a Copper Oxide-Heat Transfer Oil (CuO-HTO) nanofluid flowing in horizontal and inclined microfin tubes is studied experimentally. The flow regime is laminar, and the pipe surface temperature is constant. The effects of the nanoparticles and the microfin tube on the heat transfer rate are investigated for Richardson numbers between 0.1 and 0.7. The results show that increasing the nanoparticle concentration from 0% to 1.5% enhances the combined free and forced convection heat transfer rate. Based on the results, five correlations are proposed for estimating the combined heat transfer rate as the Richardson number increases from 0.1 to 0.7; the maximum deviation of these correlations is less than 16%. Moreover, four correlations are suggested for assessing the Nusselt number from the Rayleigh number in inclined tubes over the range 1,800,000 to 7,000,000, with a maximum deviation of almost 16%. The Darcy friction factor of the nanofluid flow in the inclined microfin tubes was also investigated.
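
For reference, the Richardson number that frames these correlations is Ri = Gr/Re²; a small sketch with hypothetical property values for heat transfer oil (not the paper's measured values) follows.

```python
def richardson(g, beta, dT, D, velocity, nu):
    """Ri = Gr / Re^2 for mixed (combined free and forced) convection."""
    Gr = g * beta * dT * D**3 / nu**2      # Grashof number
    Re = velocity * D / nu                 # Reynolds number
    return Gr / Re**2

# Hypothetical values for heat-transfer oil in a microfin tube
Ri = richardson(g=9.81, beta=7e-4, dT=20.0,    # m/s^2, 1/K, K
                D=0.0095,                      # tube diameter, m
                velocity=0.05, nu=3e-5)        # m/s, m^2/s
print("Richardson number: %.2f" % Ri)          # ~0.5, inside the 0.1-0.7 range
```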

Keywords: nanofluid, heat transfer oil, mixed convection, inclined tube, laminar flow

Procedia PDF Downloads 241
491 Wellbore Stability Evaluation of Ratawi Shale Formation

Authors: Raed Hameed Allawi

Abstract:

Wellbore instability problems are the major challenge for several wells in the Ratawi shale formation, resulting in non-productive time (NPT) and increased well-drilling expenditures. This work aims to construct an integrated mechanical earth model (MEM) to predict wellbore failure and design the optimum mud weight to improve the drilling efficiency of future wells. The MEM was based on field data, including open-hole wireline logging and measurement data. Several failure criteria were applied in this work, including Modified Lade, Mogi-Coulomb, and Mohr-Coulomb, which were utilized to calculate the proper mud weight and practical drilling paths and orientations. Results showed that the leading cause of wellbore instability problems was inadequate mud weight; improper drilling practices and the heterogeneity of the Ratawi formation further increased the risk of instability. Therefore, the suitable mud weight for safe drilling in the Ratawi shale formation should be 11.5-13.5 ppg, increased as required depending on the trajectory of the planned well. The outcomes of this study serve as practical tools to reduce non-productive time and well costs and to design future neighboring deviated wells for high drilling efficiency. In addition, the current results serve as a reference for similar fields in the region, given the lack of published studies on wellbore instability problems of the Ratawi Formation in southern Iraqi oilfields.

Keywords: wellbore stability, hole collapse, horizontal stress, MEM, mud window

Procedia PDF Downloads 166
490 Enhanced Method of Conceptual Sizing of Aircraft Electro-Thermal De-Icing System

Authors: Ahmed Shinkafi, Craig Lawson

Abstract:

There is great momentum toward All-Electric Aircraft (AEA) technology. The AEA concept assumes that all aircraft systems will, in the future, be integrated into one electrical power source. The principle of the electro-thermal system is to transfer the energy required for anti/de-icing to the protected areas in electrical form. However, powering a large aircraft anti-icing system electrically could be quite excessive in cost and system weight. Hence, maximising the anti/de-icing efficiency of the electro-thermal system in order to minimise its power demand has become crucial to electro-thermal de-icing system sizing. In this work, an enhanced methodology has been developed for conceptual sizing of aircraft electro-thermal de-icing systems. The work factors in terms overlooked in previous studies that are critical to de-icing energy consumption. A case study of a typical large aircraft wing de-icing was used to test and validate the model. The model was used to optimise the system performance by trading off the de-icing peak power against system energy consumption. The optimum melting surface temperatures and energy flux predicted enabled a reduction in the power required for de-icing. With this method, the weight penalty associated with the electro-thermal anti-icing/de-icing method could be eliminated without underestimating the de-icing power requirement.

Keywords: aircraft, de-icing system, electro-thermal, in-flight icing

Procedia PDF Downloads 489
489 The Relevance of the U-Shaped Learning Model to the Acquisition of the Difference between C'est and Il Est in the English Learners of French Context

Authors: Pooja Booluck

Abstract:

A U-shaped learning curve entails a three-step process: a good performance, followed by a bad performance, followed by a good performance again. U-shaped curves have been observed not only in language acquisition but also in various fields such as temperature, face recognition, and object permanence, to name a few. Building on previous studies of the curve in child language acquisition and second language acquisition (SLA), this empirical study investigates the relevance of the U-shaped learning model to the acquisition of the difference between c'est and il est in the English Learners of French (ELF) context. The present study assesses whether older learners of French in the ELF context follow the same acquisition pattern. The empirical study, which lasted six weeks, was conducted on 15 English learners of French. Compositions and questionnaires were collected from each subject at three time intervals (after one week, after three weeks, and after six weeks), after which the students' work was graded as either correct or incorrect. The data indicate that there is evidence of a U-shaped learning curve in the acquisition of c'est and il est, and that the students followed the same acquisition pattern as children with regard to rote-learned terms and subject clitics. This paper also discusses the need to introduce modules on the U-shaped learning curve into teaching curricula, as many teachers are unaware of the trajectory learners undertake while acquiring core components of grammar. In addition, this study addresses the need for more research on the acquisition of rote-learned terms and subject clitics in SLA.

Keywords: child language acquisition, rote-learning, subject clitics, u-shaped learning model

Procedia PDF Downloads 273
488 Design of a Permanent Magnet Based Focusing Lens for a Miniature Klystron

Authors: Kumud Singh, Janvin Itteera, Priti Ukarde, Sanjay Malhotra, P. P. Marathe, Ayan Bandyopadhay, Rakesh Meena, Vikram Rawat, L. M. Joshi

Abstract:

Application of permanent magnet technology to high-frequency miniature klystron tubes intended for space applications improves the efficiency and operational reliability of these tubes. Nevertheless, generating the magnetic focusing forces needed to eliminate beam divergence once the beam crosses the electrostatic focusing regime and enters the drift region in the RF section of the tube poses several challenges. Building a high-quality magnetic focusing lens that meets the beam-optics requirements in the cathode gun and RF interaction region is considered one of the critical issues for these high-frequency miniature tubes. In this paper, electromagnetic design and particle trajectory studies in combined electric and magnetic fields, carried out to optimize the magnetic circuit using 3D finite element method (FEM) analysis software, are presented. A rectangular configuration of the magnet was constructed to accommodate apertures for the input and output waveguide sections and to facilitate coupling of electromagnetic fields into the input klystron cavity and out of the output klystron cavity through coupling loops. Prototype lenses have been built and tested after integration with the klystron tube. We discuss the design requirements and challenges, and the results from beam transmission of the prototype lens.

Keywords: beam transmission, Brillouin, confined flow, miniature klystron

Procedia PDF Downloads 423
487 Big Data Analysis Approach for Comparing New York Taxi Drivers' Operation Patterns between Workdays and Weekends Focusing on the Revenue Aspect

Authors: Yongqi Dong, Zuo Zhang, Rui Fu, Li Li

Abstract:

The records generated by GPS-equipped taxicabs are of vital importance for studying human mobility behavior; here, however, we focus on taxi drivers' operation strategies between workdays and weekends, both temporally and spatially. We identify a group of valuable characteristics through large-scale analysis of drivers' behavior in a complex metropolitan environment. Based on the daily operations of 31,000 taxi drivers in New York City, we classify drivers into top, ordinary, and low-income groups according to their monthly working load, daily income, daily ranking, and the variance of the daily rank. Then, we apply big data analysis and visualization methods to compare the characteristics of the top, ordinary, and low-income drivers in their selection of working time and working area, as well as their strategies on workdays versus weekends. The results verify that top drivers do have special operation tactics that help them serve more passengers and travel faster, thus making more money per unit time. This research provides new possibilities for fully utilizing the information obtained from urban taxicab data for estimating human behavior, which is useful not only for individual taxicab drivers but also for policy-makers in city authorities.
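
A minimal pandas sketch of the driver-classification step, splitting drivers into low/ordinary/top groups by terciles of mean daily income; the column names and records are hypothetical, not the New York dataset.

```python
import pandas as pd

# Hypothetical per-driver daily records: driver id and daily income (USD)
df = pd.DataFrame({"driver": ["a", "a", "b", "b", "c", "c"],
                   "income": [420.0, 460.0, 280.0, 300.0, 150.0, 170.0]})

stats = df.groupby("driver")["income"].agg(["mean", "std", "count"])

# Split into low / ordinary / top income groups by mean daily income terciles
stats["group"] = pd.qcut(stats["mean"], q=3,
                         labels=["low", "ordinary", "top"])
print(stats)
```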

Keywords: big data, operation strategies, comparison, revenue, temporal, spatial

Procedia PDF Downloads 210
486 Modeling of Diurnal Pattern of Air Temperature in a Tropical Environment: Ile-Ife and Ibadan, Nigeria

Authors: Rufus Temidayo Akinnubi, M. O. Adeniyi

Abstract:

Existing diurnal air temperature models simulate night-time air temperature over Nigeria with high biases. An improved parameterization is presented for modeling the diurnal pattern of air temperature (Ta), applicable to the calculation of turbulent heat fluxes in global climate models, based on surface-layer observations from the Nigeria Micrometeorological Experimental site (NIMEX). Five diurnal Ta models for estimating hourly Ta from daily maximum, daily minimum, and daily mean air temperature were validated using the root mean square error (RMSE), mean bias error (MBE), and scatter graphs. The original Fourier series model performed better for unstable air temperature parameterizations, while stable Ta was strongly overestimated with a large error. The model was improved by including the atmospheric cooling rate, which accounts for the temperature inversion that occurs under nocturnal boundary layer conditions. The MBE and RMSE estimated by the modified Fourier series model were reduced by 4.45 °C and 3.12 °C, respectively, during the transitional period from dry to wet stable atmospheric conditions. The modified Fourier series model gave good estimates of the diurnal patterns of Ta when compared with other existing models for a tropical environment.
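
A minimal sketch of a truncated Fourier-series diurnal Ta model of the kind validated above; the coefficients are illustrative, not NIMEX-fitted values, and the paper's cooling-rate correction is only indicated by a comment.

```python
import numpy as np

def diurnal_ta(t_hours, t_mean, amp1, phase1, amp2, phase2):
    """Truncated Fourier series for hourly air temperature Ta(t)."""
    w = 2.0 * np.pi / 24.0
    return (t_mean
            + amp1 * np.cos(w * t_hours + phase1)
            + amp2 * np.cos(2.0 * w * t_hours + phase2))

hours = np.arange(24)
# Illustrative coefficients; the paper's modification would additionally
# subtract an atmospheric cooling-rate term during the nocturnal hours
ta = diurnal_ta(hours, t_mean=26.0, amp1=4.0, phase1=np.pi, amp2=1.0, phase2=0.5)
print(np.round(ta, 1))
```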

Keywords: air temperature, mean bias error, Fourier series analysis, surface energy balance

Procedia PDF Downloads 209
485 Estimating Knowledge Flow Patterns of Business Method Patents with a Hidden Markov Model

Authors: Yoonjung An, Yongtae Park

Abstract:

Knowledge flows are a critical source of faster technological progress and stronger economic growth. They have accelerated dramatically with the establishment of the patent system, in which each patent is required by law to disclose sufficient technical information for the invention to be recreated. Patent analysis has thus been widely used to investigate technological knowledge flows. However, existing research is limited in terms of both subject and approach. In particular, most previous studies did not cover business method (BM) patents, although they are as important a driver of knowledge flows as other patents. In addition, these studies usually focus on static analysis of knowledge flows; some use approaches that incorporate the time dimension, yet they still fail to trace the true dynamic process of knowledge flows. Therefore, we investigate dynamic patterns of knowledge flows driven by BM patents using a Hidden Markov Model (HMM). An HMM is a popular statistical tool for modeling a wide range of time series data, with no general theoretical limit in regard to statistical pattern classification. Accordingly, it enables characterizing knowledge patterns that may differ by patent, sector, country, and so on. We run the model on sets of backward citations and forward citations to compare the patterns of knowledge utilization and knowledge dissemination.
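
Underlying any HMM analysis of citation sequences is the forward algorithm for sequence likelihood; a minimal scaled-forward sketch in numpy follows, with an invented two-state model over integer-coded citation events, not the fitted model from the study.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under an HMM via the
    scaled forward algorithm. pi: initial state probabilities,
    A: transition matrix, B: emission matrix, obs: integer-coded symbols."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Illustrative two-state model over coded citation events (0/1/2)
pi = np.array([0.6, 0.4])
A = np.array([[0.8, 0.2], [0.3, 0.7]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(forward_loglik([0, 1, 2, 2], pi, A, B))
```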

Keywords: business method patents, dynamic pattern, Hidden Markov Model, knowledge flow

Procedia PDF Downloads 310
484 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels; in this situation, both should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining IPD and AD moderates the bias of the treatment-effect estimates, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
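
A sketch of the pooling step in the mixed-data (MD) scenario: IPD studies reduced to study-level estimates are combined with AD studies by fixed-effect inverse-variance weighting; all the numbers are illustrative.

```python
import numpy as np

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    est, se = np.asarray(estimates, float), np.asarray(std_errors, float)
    w = 1.0 / se**2
    pooled = np.sum(w * est) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# AD studies report effects directly; IPD studies are first reduced to
# study-level estimates, then all are combined (the mixed-data scenario)
effects = [0.32, 0.28, 0.41, 0.25]     # illustrative treatment effects
ses = [0.10, 0.08, 0.12, 0.09]         # illustrative standard errors
print(pool_fixed_effect(effects, ses))
```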

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 355
483 Reliability and Validity for Measurement of Body Composition: A Field Method

Authors: Ahmad Hashim, Zarizi Ab Rahman

Abstract:

Field methods for measuring body composition offer several popular instruments for estimating the percentage of body fat. Among the instruments used are the Body Mass Index, Bio Impedance Analysis, and the skinfold test. None of these three instruments involves high costs or demands high technical skill, and all are mobile, save time, and are suitable for use in large populations. Since all three can estimate the percentage of body fat, it is important to identify the most appropriate instrument with high reliability. Hence, this study was conducted to determine the reliability and convergent validity of the instruments. A total of 40 students, males and females aged between 13 and 14 years, participated in this study. The study found that the test-retest Pearson correlation coefficient of reliability for the three instruments is very high, r = .99, while the inter-class reliability is also at a high level, with r = .99 for Body Mass Index and Bio Impedance Analysis and r = .96 for the skinfold test. The intra-class reliability coefficient for the three instruments is likewise high: Body Mass Index r = .99, Bio Impedance Analysis r = .97, and skinfold test r = .90. However, the Standard Error of Measurement values for the three instruments indicate that the Body Mass Index is the most appropriate instrument, with a mean value of .000672, compared with the other instruments. The findings show that the Body Mass Index is the most accurate and reliable instrument for estimating body-fat percentage in the population studied.
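
A small sketch of the quantities reported above: BMI itself, the test-retest Pearson correlation, and a standard error of measurement derived from it (SEM = SD·sqrt(1 − r)); the measurements are invented, not the study data.

```python
import numpy as np
from scipy import stats

def bmi(weight_kg, height_m):
    """Body Mass Index = weight / height^2 (kg/m^2)."""
    return weight_kg / height_m**2

# Hypothetical test-retest BMI measurements for a handful of participants
trial1 = np.array([21.3, 18.9, 24.1, 20.5, 22.8])
trial2 = np.array([21.1, 19.2, 24.0, 20.7, 22.6])

r, _ = stats.pearsonr(trial1, trial2)        # test-retest reliability
sem = trial1.std(ddof=1) * np.sqrt(1 - r)    # standard error of measurement
print("r = %.3f, SEM = %.4f" % (r, sem))
print("BMI example: %.1f" % bmi(63.4, 1.57))
```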

Keywords: reliability, validity, body mass index, bio impedance analysis, skinfold test

Procedia PDF Downloads 313
482 Predicting the Next Offensive Play Types That Will Be Implemented to Maximize the Defense's Chances of Success in the National Football League

Authors: Chris Schoborg, Morgan C. Wang

Abstract:

In the National Football League (NFL), players and coaches invest substantial time and effort in meticulously analyzing the game footage of their opponents, with the primary aim of anticipating the actions of the opposing team. Defensive players and coaches are especially focused on deciphering their adversaries' intentions in order to counter their strategies effectively. Insight into the specific play type and its intended direction on the field would confer a significant competitive advantage. This study uses pre-snap information as the basis for predicting both the play type (e.g., deep pass, short pass, or run) and its spatial trajectory (right, left, or center). The dataset spans the regular-season NFL data for all 32 teams from 2013 to 2022 and is acquired using the nflreadr package, which conveniently extracts play-by-play data from NFL games and imports it into the R environment as structured datasets. We employ a recently developed machine learning algorithm, XGBoost. The final predictive model achieves a lift of 2.61, meaning it is 2.61 times more effective than random guessing, a significant improvement. Such a model has the potential to markedly enhance defensive coaches' ability to formulate game plans and adequately prepare their players, thus mitigating the opposing offense's yardage and point gains.
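
One common way to compute the lift reported above is model accuracy divided by the expected accuracy of guessing classes at random in proportion to their frequencies; a sketch with invented play labels (not the nflreadr data) follows.

```python
import numpy as np

def lift(y_true, y_pred):
    """Lift = model accuracy / accuracy of guessing class proportions
    at random (the sum of squared class frequencies)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = np.mean(y_true == y_pred)
    _, counts = np.unique(y_true, return_counts=True)
    freqs = counts / counts.sum()
    baseline = np.sum(freqs**2)      # expected accuracy of random guessing
    return acc / baseline

y_true = ["run", "short", "run", "deep", "short", "run"]
y_pred = ["run", "short", "run", "short", "short", "run"]
print("lift = %.2f" % lift(y_true, y_pred))
```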

Keywords: lift, NFL, sports analytics, XGBoost

Procedia PDF Downloads 40
481 The Role of Authority's Testimony in Preschoolers' Ownership Judgment: A Study with Conflicting Cues Method

Authors: Zhanxing Li, Liqi Zhu

Abstract:

Authorities often intervene in children's property conflicts, which may affect young children's understanding of ownership. First possession is a typical rule in ownership judgment. We recruited Chinese preschoolers and investigated their ownership reasoning regarding first possession, using a conflicting-cues method with three conditions: a third party's (mother's or peer friend's) testimony was always opposite to the cue of first possession (authority/non-authority testimony conditions), or only the cue of first possession was present (no-testimony condition). In Study A, we examined forty-two 3- and 5-year-olds' attribution and justification of ownership. The results showed that while 5-year-olds gave more support for the first possessor as the owner across the three conditions, 3-year-olds' choice of the first possessor was no different from the non-first possessor in the authority testimony condition. Moreover, 3-year-olds tended to justify their answers by reference to what the mother said in the authority testimony condition, whereas 5-year-olds consistently referred to first possession in all three conditions. In Study B, we added two ownership questions to quantify children's ownership reasoning across four age groups (n = 32 for the 3-year-olds, n = 33 for the 4-year-olds, n = 27 for the 5-year-olds, and n = 30 for the adults) to explore the developmental trajectory further. While 5-year-olds' performances were similar to the adults', in that they judged the first possessor as the owner in all three conditions, 3- and 4-year-olds performed at chance level in the authority testimony condition. The results imply that Chinese preschoolers' ownership reasoning is susceptible to authority testimony. Family authority may play an important role in diluting children's adherence to ownership principles, which will be helpful for children learning to share with others.

Keywords: authority, ownership judgment, preschoolers, testimony

Procedia PDF Downloads 172
480 Multi-Criteria Test Case Selection Using Ant Colony Optimization

Authors: Niranjana Devi N.

Abstract:

Test case selection chooses the subset of fit test cases and removes the unfit, ambiguous, redundant, and unnecessary ones, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding the best subset of test cases, from a pool of test cases to be audited, that meets all the objectives of testing concurrently. However, most research has evaluated the fitness of test cases on only a single parameter, fault-detecting capability, and optimized the test cases with a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters for selection is obtained using an Interval Type-2 Fuzzy Rough Set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using the coverage parameters, precision, recall, F-measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that this approach avoids considerable computational effort.

Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection

Procedia PDF Downloads 643
479 Effect of Heat Stress on the Physiology of the Cork Oak

Authors: J. Zekri, N. Souilah, W. Abdelaziz, D. Alatou

Abstract:

Our study focuses on cork oak trees, which have shown sensitivity to climate change, including late spring frosts. The combination of these factors has resulted in alarming damage, weakening the trees of forest ecosystems and potentially affecting their ability to withstand other abiotic and biotic stresses. We therefore tested the species' tolerance to thermal variations and cold conditions by estimating stress markers (proteins, RNA, soluble sugars) that are quantified to evaluate the cold tolerance of seedlings. Cork oak (Quercus suber L.) seedlings were grown under controlled conditions at 25 °C ± 2 °C with 16-h long days, then transferred to low temperatures between 5 °C and -6 °C for a period of 3 hours. Biochemical analyses were performed on the various organs of the cork oak seedlings. Cool temperatures induced a significant accumulation of proline in the different organs of the seedlings, with the highest concentrations observed in the roots (four times those of the control). Soluble sugars accumulated significantly in the stems and roots at 0 °C. Protein concentrations were very high in the leaves of both growth flushes and in the stems at -4 °C to -2 °C. The thermal tolerance limit of cork oak appears to be -2 °C. The concentrations of these metabolites in the various organs demonstrate the ability of cork oak to harden during the winter.

Keywords: climate change, thermal change, semi-arid, biochemical markers, heat stress

Procedia PDF Downloads 223
478 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and information is collected only on the auxiliary variable. In the second phase, a sample is selected, either from the first-phase sample or from the entire population, using a suitable sampling design, and information on the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easier and cheaper to collect than the study variable, and if the strength of the relationship between the two variables is high. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article, we consider how data collected in the first phase can be used at the stages of parameter estimation, stratification, sample selection, and their combinations in the second phase, in a unified setup applicable to any sampling design and to wider classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative; a negative variance estimate cannot be used for estimating confidence intervals, testing hypotheses, or measuring sampling error. The non-negativity properties of the variance estimators are therefore also studied in detail.

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 567
477 A Process of Forming a Single Competitive Factor in the Digital Camera Industry

Authors: Kiyohiro Yamazaki

Abstract:

This paper considers the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier and to increase product introduction ratios, companies concentrate their development efforts on improving and strengthening certain product attributes, and in this process the product platform is formed continuously. This product platform formation raises the product development efficiency of individual companies, but it involves a trade-off: it causes the unification of competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, it covers all product specifications released in Japan from 1995 to 2003, analyzes the composition of image sensors and optical lenses, identifies product platforms shared by multiple products, and discusses their application. As a result, this research found that product platform formation arose in the development of standard products for the major market segments. Every major company built product platforms of image sensors and optical lenses, and consequently the competitive factors were unified across the entire industry through product platform formation. In other words, product platform formation brought product development efficiency to individual firms, but it also caused the industry's competitive factors to become unified.

Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors

Procedia PDF Downloads 135
476 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals

Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam

Abstract:

The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit to the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT), in which tested units are subjected to both normal and accelerated conditions, is the more suitable test to perform. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test are assumed to follow an exponential life distribution, and the removals from the test are assumed to have binomial distributions. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.

Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study

Procedia PDF Downloads 300
475 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban development and the growing need for infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of Bangkok's Metro as a case study. To this end, a numerical probability model was developed based on the finite difference method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis: a 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, within the allowable range, and helps improve economic conditions while maintaining mechanical efficiency. Given the lack of efficient design in most deep excavations, an attempt was also made, by considering geometrical and geotechnical variability, to develop an optimum practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without developing the full probability analysis.
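
A heavily simplified sketch of the Monte Carlo step: sample uncertain inputs, push them through a response model, and estimate the failure probability as the fraction of samples exceeding the allowable displacement. The lognormal stiffness, toy displacement model, and capacity values below are stand-ins, not the study's finite difference analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical variability in soil stiffness (MPa) and allowable wall
# displacement (mm); the response model below is a toy stand-in.
E = rng.lognormal(mean=np.log(40.0), sigma=0.25, size=n)
capacity = rng.normal(loc=55.0, scale=5.0, size=n)

displacement = 2000.0 / E                    # toy wall-response model, mm
p_failure = np.mean(displacement > capacity)
print("estimated failure probability: %.4f" % p_failure)
```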

Keywords: numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method (FDM)

Procedia PDF Downloads 106
474 Estimating Soil Erosion Using Universal Soil Loss Equation and GIS in Algash Basin

Authors: Issamaldin Mohammed, Ahmed Abdalla, Hatim Elobied

Abstract:

Soil erosion is globally known for adverse social, environmental, and economic effects that directly or indirectly influence human life. The area under study suffers from problems such as degraded water quality and the rising beds of the river and agricultural canals due to the high sediment load brought by the Algash River from upstream (the Eritrean highlands). The current study utilized remote sensing and a Geographical Information System (GIS) to estimate the annual soil loss using the Universal Soil Loss Equation (USLE). The USLE is widely used around the world and relies on the rainfall erosivity factor (R), soil erodibility factor (K), topographic factor (LS), cover management factor (C), and support practice factor (P). The results showed high soil loss in the study area, illustrated as a map of the spatial distribution of soil loss classified into seven zones, ranging from very slight (less than 2 ton/ha/year) to very severe (100-500 ton/ha/year); the total soil loss from the whole study area was found to be 32,916,840.87 ton/ha/year. Such results will help land management experts prioritize the most severely affected zones so that they can be tackled appropriately.
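
The USLE itself is a single product of the five factors named above; a minimal sketch with illustrative factor values for one GIS raster cell (not Algash basin data) follows.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P
    (average annual soil loss, e.g., ton/ha/year)."""
    return R * K * LS * C * P

# Illustrative factor values for a single raster cell
A = usle_soil_loss(R=550.0,   # rainfall erosivity
                   K=0.30,    # soil erodibility
                   LS=1.8,    # slope length-steepness
                   C=0.45,    # cover management
                   P=1.0)     # support practice
print("annual soil loss: %.1f ton/ha/year" % A)
```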

Keywords: Geographical Information System, remote sensing, sedimentation, soil loss

Procedia PDF Downloads 271
473 India and Space Insurance Policy: An Analytical Insight

Authors: Shreyas Jayasimha, Suneel Anand Sundharesan, Rohan Tigadi

Abstract:

In the recent past, the United States of America and Russia were the only two dominant players in the field of space exploration and held a virtual monopoly on space technology. This has changed over the past few years: many other nation states, such as India, China, and the UK, have made significant progress in this field. Among these nations, the growth and development of the Indian space program have been nothing short of a miracle. India has successfully launched a series of satellites, including its much-acclaimed Mangalyaan mission, which placed a satellite in Mars' orbit. That India attained this feat on its first attempt demonstrates the enormous growth potential and promise that the Indian space program holds for the coming years. However, unlike other space-faring nations, India does not have a comprehensive and consolidated space insurance policy. In this regard, it is pertinent to note that the costs and risks involved in administering a space program are enormous; in the absence of a comprehensive space insurance policy, any losses from an unsuccessful mission will have to be borne by the state exchequer. Thus, to ensure that the Indian space program continues on its upward trajectory, the Indian establishment should seriously consider formulating a comprehensive insurance policy. This paper intends to analyze the international best practices followed by other space-faring nations in relation to space insurance policy. Thereafter, the authors examine the current regime in India relating to space insurance policy. Finally, the authors conclude by providing a series of recommendations regarding the essential elements that should be part of any Indian space insurance policy regime.

Keywords: India, space insurance policy, space law, Indian space research organization

Procedia PDF Downloads 204
472 Assessing the Impacts of Long-Range Forest Fire Emission Transport on Air Quality in Toronto, Ontario, Using MODIS Fire Data and HYSPLIT Trajectories

Authors: Bartosz Osiecki, Jane Liu

Abstract:

Pollutants emitted from forest fires, such as PM₂.₅ and carbon monoxide (CO), have been found to impact the air quality of distant regions through long-range transport. PM₂.₅ is of particular concern because of its transport capacity and its implications for human respiratory and cardiovascular health; significant increases in PM₂.₅ concentrations have been observed in urban areas downwind of fire sources. This study expands on that literature by evaluating the impacts of long-range forest fire emission transport on air quality in Toronto, Ontario, as a means of assessing the vulnerability of this major urban center to distant fire events. To draw correlations between the fire events and the air pollution episode in Toronto, MODIS fire count data and HYSPLIT trajectories are used to establish the date, location, and severity of the fires and to track the trajectory of the emissions, respectively. Forward and back-trajectories are run, terminating at the West Toronto air monitoring station. PM₂.₅ and CO concentrations in Toronto during September 2017 are found to be significantly elevated, which is likely attributable to the fire activity. Other sites in Ontario, including Toronto (East, North, Downtown), Mississauga, Brampton, and Hamilton (Downtown), exhibit similar peaks in PM₂.₅ concentrations. This work sheds light on the non-local, natural factors influencing air quality in urban areas, which is especially important in the context of climate change, expected to exacerbate intense forest fire events in the future.

Keywords: air quality, forest fires, PM₂.₅, Toronto

Procedia PDF Downloads 112