Search results for: error masking probability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3066

2616 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor

Authors: Ejaz Ahmed, Huang Yong

Abstract:

The operation of aero engines is of critical importance in the vicinity of lean blowout (LBO) limits. Lefebvre’s empirical-correlation model of LBO has been extended by the authors to a flame volume (Vf) concept. The flame volume takes into account the effects of geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer, and combustion processes inside the gas turbine combustion chamber. For these reasons, flame-volume-based LBO predictions are more accurate. Although LBO prediction accuracy has improved, the approach poses a challenge associated with Vf estimation in real gas turbine combustors. This work extends the flame volume prediction approach, previously based on fuel iterative approximation with cold flow simulations, to reactive flow simulations. Flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required in the preliminary design stage, reactive flow simulations were carried out with a combination of the probability density function (PDF) and discrete phase model (DPM) in FLUENT 15.0. A criterion for flame identification was defined. Two important parameters, i.e., critical injection diameter (Dp,crit) and critical temperature (Tcrit), were identified, and their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit ±15% error in Vf estimation relative to experimental data.

Keywords: CFD, combustion, gas turbine combustor, lean blowout

Procedia PDF Downloads 253
2615 Usability Testing on Information Design through Single-Lens Wearable Device

Authors: Jae-Hyun Choi, Sung-Soo Bae, Sangyoung Yoon, Hong-Ku Yun, Jiyoung Kwahk

Abstract:

This study was conducted to investigate the effect of ocular dominance on recognition performance using a single-lens smart display designed for cycling. A total of 36 bicycle riders who have been cycling consistently were recruited and participated in the experiment. The participants were asked to perform tasks riding a bicycle on a stationary stand for safety reasons. Independent variables of interest included ocular dominance, bike usage, age group, and information layout. Recognition time (i.e., the time required to identify specific information, measured with an eye-tracker), error rate (i.e., a false answer or failure to identify the information within 5 seconds), and user preference scores were measured, and statistical tests were conducted to identify significant results. Recognition time and error ratio showed significant differences by the ocular dominance factor, while the preference score did not. Recognition time was faster when the single-lens see-through display was worn on the dominant eye (average 1.12 sec) than on the non-dominant eye (average 1.38 sec). The error ratio of the information recognition task was significantly lower when the see-through display was worn on the dominant eye (average 4.86%) than on the non-dominant eye (average 14.04%). The interaction effect of ocular dominance and age group was significant with respect to recognition time and error ratio. The recognition time of users in their 40s was significantly longer than that of the other age groups when the display was placed on the non-dominant eye, while no difference was observed on the dominant eye. Error ratio showed the same pattern. Although no difference was observed for the main effects of ocular dominance and bike usage, the interaction effect between the two variables was significant with respect to preference score. The preference score of daily bike users was higher when the display was placed on the dominant eye, whereas participants who use bikes for leisure purposes showed the opposite preference pattern. It was found more effective and efficient to wear a see-through display on the dominant eye than on the non-dominant eye, although user preference was not affected by ocular dominance. It is recommended to wear a see-through display on the dominant eye, since it is safer by helping the user recognize the presented information faster and more accurately, even if the user may not notice the difference.

Keywords: eye tracking, information recognition, ocular dominance, smart headwear, wearable device

Procedia PDF Downloads 261
2614 Dynamics of Adiabatic Rapid Passage in an Open Rabi Dimer Model

Authors: Justin Zhengjie Tan, Yang Zhao

Abstract:

Adiabatic rapid passage, a popular method of achieving population inversion, is studied in a Rabi dimer model in the presence of noise, which acts as a dissipative environment. The integration of the multi-Davydov D2 Ansatz into the time-dependent variational framework enables us to model the intricate quantum system accurately. By driving the system with a field strength resonant with the energy spacing, the probability of adiabatic rapid passage, modelled after the Landau-Zener problem, can be derived along with several other observables, such as the photon population. The effects of a dissipative environment can be reproduced by coupling the system to a common phonon mode. By manipulating the strength and frequency of the driving field, along with the coupling strength of the phonon mode to the qubits, we are able to control the qubit and photon dynamics and subsequently increase the probability of adiabatic rapid passage.
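As a rough orientation (not the paper's variational multi-Davydov treatment), the textbook Landau-Zener formula gives the probability of a diabatic transition for a linearly swept two-level system. A minimal sketch follows, with ħ = 1 and illustrative values for the coupling `delta` and sweep rate `v`; note that conventions for the factor in the exponent vary across texts:

```python
import numpy as np

def landau_zener_diabatic(delta, v, hbar=1.0):
    """Probability of staying on the diabatic state for a linear sweep.

    delta : off-diagonal coupling of the two-level Hamiltonian
    v     : sweep rate |d(E1 - E2)/dt| of the diabatic energy gap
    """
    return np.exp(-2.0 * np.pi * delta**2 / (hbar * v))

# Slower sweeps suppress the diabatic transition, so adiabatic passage succeeds.
for v in (10.0, 1.0, 0.1):
    p_d = landau_zener_diabatic(delta=0.5, v=v)
    print(f"sweep rate {v:5.1f}: P(adiabatic passage) ~ {1.0 - p_d:.4f}")
```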

Keywords: quantum electrodynamics, adiabatic rapid passage, Landau-Zener transitions, dissipative environment

Procedia PDF Downloads 69
2613 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques of reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, assuming full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and mean squared error values compared to existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
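For readers unfamiliar with the estimator family involved, a minimal Nadaraya-Watson kernel regression sketch (the classical weights, not the authors' improved version) is shown below; the Gaussian kernel and bandwidth h are illustrative choices:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h=0.5):
    """Classical Nadaraya-Watson kernel regression with a Gaussian kernel."""
    # Pairwise scaled distances between evaluation and training points.
    u = (x_eval[:, None] - x_train[None, :]) / h
    k = np.exp(-0.5 * u**2)               # Gaussian kernel weights
    return (k @ y_train) / k.sum(axis=1)  # locally weighted average

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.3, x.size)   # noisy signal to smooth
grid = np.linspace(0, 10, 50)
print(nadaraya_watson(x, y, grid)[:5])
```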

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 123
2612 Enhanced Bit Error Rate in Visible Light Communication: A New LED Hexagonal Array Distribution

Authors: Karim Matter, Heba Fayed, Ahmed Abd-Elaziz, Moustafa Hussein

Abstract:

Due to the exponential growth of mobile devices and wireless services, demand for radio-frequency spectrum has increased enormously. The presence of several frequencies causes interference between cells, which must be minimized to obtain a lower Bit Error Rate (BER). For this reason, it is of great interest to use visible light communication (VLC). This paper suggests a VLC system that decreases the BER by applying a new LED distribution with a hexagonal shape, using a Frequency Reuse (FR) concept to mitigate the interference between the reused frequencies inside the hexagonal shape. The BER is measured in two scenarios, Line of Sight (LoS) and Non-Line of Sight (Non-LoS), for each technique used. The obtained values of BER in the proposed model for Soft Frequency Reuse (SFR) in the LoS case, at 4, 8, and 10 dB signal-to-noise ratio (SNR), are 3.6×10⁻⁶, 6.03×10⁻¹³, and 2.66×10⁻¹⁸, respectively.
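As background for the reported BER-versus-SNR figures (and not the paper's hexagonal-array model), a common baseline for intensity-modulated VLC links is the Q-function BER of on-off keying; a sketch follows, noting that the exact SNR convention varies across texts:

```python
import numpy as np
from scipy.special import erfc

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def ook_ber(snr_db):
    """Baseline BER of on-off keying vs. electrical SNR (one common convention)."""
    snr = 10.0 ** (np.asarray(snr_db) / 10.0)
    return q_func(np.sqrt(snr))

for snr_db in (4, 8, 10):
    print(f"{snr_db:2d} dB -> BER ~ {ook_ber(snr_db):.2e}")
```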

Keywords: visible light communication (VLC), field of view (FoV), hexagonal array, frequency reuse

Procedia PDF Downloads 147
2611 Modeling of Glycine Transporters in Mammalian Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on its appropriate reuptake from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the enzymes providing glycine transport back into neuronal and glial cells along with Na⁺ and Cl⁻ co-transport. The distribution and stoichiometry of GlyT1 and GlyT2 differ in their details, and GlyT2 is of greater interest for this research, as it reuptakes glycine into neurons, whereas GlyT1 is located in glial cells. In the process of GlyT2 activity, the translocation of the amino acid is accompanied by the sequential binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity based on known experimental data for quantitative estimation of membrane glycine transport. The functioning of a single protein was described using a probability approach in which each enzyme state was considered separately. The scheme of transporter functioning, realized as a sequence of elementary steps, made it possible to account for each event of substrate association and dissociation. Computer experiments using up-to-date kinetic parameters yielded the number of translocated glycine molecules and Na⁺ and Cl⁻ ions per time period. The flexibility of the developed software makes it possible to evaluate the glycine reuptake pattern in time under different internal characteristics of enzyme conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which is not determined experimentally. A significant influence of the equilibrium constant in the range from 0.2 to 10 on the glycine transfer process is shown. Environmental conditions, such as ion and glycine concentrations, are decisive if the values of the constant are outside the specified range.
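A probability-approach simulator of this kind can be reduced, in its simplest form, to a continuous-time Markov chain over enzyme states. The sketch below uses a Gillespie-style algorithm with a purely hypothetical cyclic state scheme and made-up rate constants, not the authors' kinetic parameters:

```python
import numpy as np

# Hypothetical cycle: empty -> Na+ bound -> Gly bound -> translocated -> empty.
# Rates (1/s) are illustrative placeholders, not published kinetic parameters.
RATES = {("empty", "Na_bound"): 500.0,
         ("Na_bound", "empty"): 100.0,       # dissociation back
         ("Na_bound", "Gly_bound"): 300.0,
         ("Gly_bound", "Na_bound"): 50.0,
         ("Gly_bound", "translocated"): 80.0,
         ("translocated", "empty"): 200.0}   # release completes one cycle

def simulate(t_end=10.0, seed=0):
    rng = np.random.default_rng(seed)
    t, state, transported = 0.0, "empty", 0
    while t < t_end:
        moves = [(dst, r) for (src, dst), r in RATES.items() if src == state]
        total = sum(r for _, r in moves)
        t += rng.exponential(1.0 / total)     # waiting time to next event
        u = rng.uniform(0, total)             # choose an event by rate weight
        acc = 0.0
        for dst, r in moves:
            acc += r
            if u <= acc:
                if state == "translocated" and dst == "empty":
                    transported += 1          # one glycine carried across
                state = dst
                break
    return transported

print("glycine molecules translocated in 10 s:", simulate())
```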

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 102
2610 The Impact of Strategy Formulation and Implementation on Financial Outcomes in Malaysian Private Hospitals

Authors: Naser Zouri

Abstract:

Purpose: Measures of strategy formulation and implementation reflect market-based strategic management categories, such as courtesy, competence, and compliance, needed to reach high loyalty within the financial ecosystem and to address market-place errors relative to a fair-trade organization. Finding: The findings show the ability of executive-level management to motivate and to make better decisions in solving problems in a business organization, and suggest an ideal level of each intervention policy for a hypothetical household. Methodology/design: A questionnaire was used for data collection, in both a pilot test and the main study. Questionnaires were distributed among finance employees, and respondents were nominated through non-probability sampling (convenience sampling). Administering the questionnaires was feasible in terms of cost, but collecting data from the hospital proved difficult. The survey items covered implementation strategy, environment, supply chain, and employees (the impact of strategy implementation on financial outcomes), as well as strategy formulation, comprehensiveness of strategic design, and organizational performance (the impact of strategy formulation on financial outcomes). Practical implication: The dynamic capability approach to strategy formulation and implementation focuses on the firm-specific processes through which firms integrate, build, or reconfigure resources, which is valuable for making a theoretical contribution. Originality/value of research: Going beyond the current discussion, we show that case studies have the potential to extend and refine theory. We shed new light on how dynamic-capabilities research can benefit from case studies by discovering the qualifications that shape the development of capabilities and by determining the boundary conditions of the dynamic capabilities approach. Limitation of the study: The present study relies on a survey methodology for data collection, and obtaining responses from financial employees was difficult due to workplace constraints.

Keywords: financial ecosystem, loyalty, Malaysian market error, dynamic capability approach, rate-market, optimization intelligence strategy, courtesy, competence, compliance

Procedia PDF Downloads 288
2609 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B. Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users, who then identified “friend controls,” and the other using a random sample of non-drug users (controls), who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for random-sample data vs. 0.91 for snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
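The bootstrap comparison described above can be reproduced in outline as follows; this is a generic sketch on synthetic data, approximating the matched-pair conditional logistic regression with ordinary logistic regression for brevity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 300
X = rng.normal(size=(n, 3))                    # three synthetic risk factors
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]          # true effects; X[:, 2] is noise
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

# 100 bootstrap refits: the spread of each coefficient estimates its SE.
coefs = []
for _ in range(100):
    idx = rng.integers(0, n, n)                # resample cases with replacement
    model = LogisticRegression().fit(X[idx], y[idx])
    coefs.append(model.coef_[0])
coefs = np.array(coefs)
print("bootstrap SE of each beta:", coefs.std(axis=0).round(3))
```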

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 482
2608 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance. Statistical tools have thus shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skew data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Therefore, based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
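The failure-probability calculation can also be approximated numerically. The sketch below uses the infinite-slope safety factor FS = tan(phi)/tan(beta) as a stand-in for the full Mohr-Coulomb analysis, with illustrative parameter values and a normal model for the friction angle (the paper finds a Dagum distribution fits better):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = np.radians(25.0)                     # slope angle (illustrative)

# Friction angle treated as a random variable (illustrative normal model).
phi = np.radians(rng.normal(loc=30.0, scale=3.0, size=1_000_000))

fs = np.tan(phi) / np.tan(beta)             # infinite-slope safety factor
p_failure = np.mean(fs < 1.0)               # Monte Carlo estimate of P(FS < 1)
print(f"estimated probability of failure: {p_failure:.4f}")
```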

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 323
2607 CE Method for Development of Japan's Stochastic Earthquake Catalogue

Authors: Babak Kamrani, Nozar Kishi

Abstract:

A stochastic catalog represents the events module of earthquake loss estimation models. It includes a series of events with different magnitudes and corresponding frequencies/probabilities. To develop the stochastic catalog, random or uniform sampling methods are used to sample events from the seismicity model. To cover the entire Magnitude-Frequency Distribution (MFD), a huge number of events must be generated with the above-mentioned methods. The Characteristic Event (CE) method chooses the events based on the interests of the insurance industry. We divide the MFD of each source into bins, chosen according to the probabilities of interest to the insurance industry. First, we collected the information for the available seismic sources. Sources are divided into fault sources, subduction sources, and events without a specific fault source. We developed the MFD for each individual and areal source based on the seismicity of the sources. Afterward, we calculated the CE magnitudes based on the desired probability. To develop the stochastic catalog, we introduced uncertainty into the locations of the events as well.
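For context, the MFD of a source is commonly modeled with a (truncated) Gutenberg-Richter law, log10 N(>=M) = a - bM. The sketch below bins such an MFD and picks one representative magnitude per bin with its bin probability, loosely mirroring the CE idea; the a and b values are illustrative:

```python
import numpy as np

a, b = 4.0, 1.0                  # illustrative Gutenberg-Richter parameters
m_min, m_max = 5.0, 8.0
edges = np.arange(m_min, m_max + 0.5, 0.5)   # magnitude bins of width 0.5

def n_exceed(m):
    """Annual rate of events with magnitude >= m (truncated G-R)."""
    return 10.0 ** (a - b * m) - 10.0 ** (a - b * m_max)

rates = n_exceed(edges[:-1]) - n_exceed(edges[1:])   # rate within each bin
probs = rates / rates.sum()
for lo, hi, r, p in zip(edges[:-1], edges[1:], rates, probs):
    m_char = 0.5 * (lo + hi)     # one characteristic event per bin
    print(f"M{m_char:.2f}: annual rate {r:.4f}, share {p:.3f}")
```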

Keywords: stochastic catalogue, earthquake loss, uncertainty, characteristic event

Procedia PDF Downloads 283
2606 Mathematics Anxiety among Male and Female Students

Authors: Wern Lin Yeo, Choo Kim Tan, Sook Ling Lew

Abstract:

Mathematics anxiety refers to the anxious feeling that arises when one has difficulty solving a mathematical problem. It is the most common type of anxiety occurring among students. However, anxiety levels differ between males and females. A few past studies were conducted to determine the relationship between anxiety and gender, but their results were inconclusive. Hence, the purpose of this study is to determine the relationship between anxiety level and gender among undergraduates at a private university in Malaysia. A convenience sampling method was used, in which the students were selected based on the grouping assigned by the faculty. A total of 214 undergraduates registered for probability courses participated in this study. The Mathematics Anxiety Rating Scale (MARS) was the instrument used to determine students’ anxiety level towards probability. The reliability and validity of the instrument were established before the main study was conducted. In the main study, students were briefed about the study; participation was voluntary. Students were given a consent form to determine whether they agreed to participate. Two weeks were given for students to complete the online questionnaire. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS) to determine the level of anxiety. There were three anxiety levels: low, average, and high. Students’ anxiety levels were determined by comparing their scores with the mean and standard deviation: scores more than one standard deviation below the mean indicated low anxiety, scores within one standard deviation of the mean indicated average anxiety, and scores more than one standard deviation above the mean indicated high anxiety. Results showed that both genders had average anxiety levels overall, with males showing higher frequencies across all three anxiety levels (low, average, and high) than females. Accordingly, the mean value for males (M = 3.62) was higher than for females (M = 3.42). For the difference in anxiety level between genders to be significant, the p-value should be less than .05; the p-value obtained in this study was .117, which is greater than .05. Thus, there was no significant difference in anxiety level between genders. In other words, anxiety level was not related to gender.

Keywords: anxiety level, gender, mathematics anxiety, probability and statistics

Procedia PDF Downloads 277
2605 Modern State of the Universal Modeling for Centrifugal Compressors

Authors: Y. Galerkin, K. Soldatova, A. Drozdov

Abstract:

The 6th version of the Universal Modeling method for centrifugal compressor stage calculation is described. The new mathematical model was identified, yielding a uniform set of empirical coefficients. The efficiency definition error is 0.86% at the design point, and 1.22% at five flow rate points (excluding the point of maximum flow rate). Several variants of stages with 3D impellers, designed with the 6th-version program and quasi-three-dimensional calculation programs, were compared in terms of their gas-dynamic performance computed by CFD (NUMECA FINE/Turbo). The performance comparison demonstrated the validity of the general design principles and led to some design recommendations.

Keywords: compressor design, loss model, performance prediction, test data, model stages, flow rate coefficient, work coefficient

Procedia PDF Downloads 403
2604 Using Historical Data for Stock Prediction

Authors: Sofia Stoica

Abstract:

In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-Nearest Neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that using historical data is enough to obtain high accuracies, and that a simple algorithm like linear regression achieves performance similar to more sophisticated models while taking less time and fewer resources to implement.
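As an outline of the simplest baseline mentioned, linear regression on lagged historical prices can be sketched as below; the price series here is a synthetic placeholder, not the paper's dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
price = 100 + np.cumsum(rng.normal(0.1, 1.0, 1300))   # synthetic daily closes

def lagged_matrix(series, n_lags=5):
    """Predict the next-day price from the previous n_lags closes."""
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = lagged_matrix(price)
split = int(0.8 * len(y))                              # chronological split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs(pred - y[split:]) / y[split:]) * 100
print(f"test error: {mape:.2f}%")
```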

Keywords: finance, machine learning, opening price, stock market

Procedia PDF Downloads 164
2603 A Recognition Method of Ancient Yi Script Based on Deep Learning

Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma

Abstract:

Yi is an ethnic group mainly living in mainland China, with its own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into an electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the highest-probability features were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwriting recognition of ancient Yi.
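A baseline for the four-layer CNN backbone described (without the paper's Alpha-Beta divergence penalty, which is specific to the authors' method) might look like the following Keras sketch; the input size and class count are placeholders:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 500        # placeholder: number of ancient Yi character classes

# Four convolutional layers, two fully connected layers, softmax output.
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),          # grayscale character images
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),     # feature compression
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```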

Keywords: recognition, CNN, Yi character, divergence

Procedia PDF Downloads 152
2602 Performance Assessment of GSO Satellites before and after Enhancing the Pointing Effect

Authors: Amr Emam, Joseph Victor, Mohamed Abd Elghany

Abstract:

The paper presents the effect of orbit inclination on the pointing error of the satellite antenna, and consequently on its footprint on earth, for a typical Ku-band payload system. The performance assessment is examined both theoretically and by means of practical measurements, taking into account all additional sources of pointing error, such as East-West station keeping, orbit eccentricity, and actual attitude control performance. An implementation and computation of the sinusoidal biases in satellite roll and pitch, used to compensate the pointing error of the satellite antenna coverage, is studied and evaluated before and after the pointing corrections are performed. A method for evaluating the performance of the implemented biases has been introduced, through measuring the satellite received level from a tracking 11 m antenna and a fixed 4.8 m transmitting antenna, before and after the implementation of the pointing corrections.
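To first order, a small orbital inclination i makes the subsatellite point librate daily, with latitude ≈ i·sin(wt) and longitude ≈ (i²/4)·sin(2wt); this is why sinusoidal roll and pitch biases can compensate the pointing error. A sketch with an illustrative inclination follows (sign conventions depend on the platform):

```python
import numpy as np

i_deg = 0.5                       # illustrative orbital inclination (degrees)
i = np.radians(i_deg)
w = 2 * np.pi / 86164.0           # sidereal-day angular rate (rad/s)
t = np.linspace(0, 86164.0, 9)    # sample one sidereal day

lat = i * np.sin(w * t)                    # daily latitude excursion (rad)
lon = (i**2 / 4.0) * np.sin(2 * w * t)     # figure-eight longitude libration

# First-order compensating biases.
roll_bias = -np.degrees(lat)
pitch_bias = -np.degrees(lon)
for tk, r, p in zip(t / 3600.0, roll_bias, pitch_bias):
    print(f"t = {tk:5.1f} h: roll bias {r:+.3f} deg, pitch bias {p:+.5f} deg")
```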

Keywords: satellite, inclined orbit, pointing errors, coverage optimization

Procedia PDF Downloads 377
2601 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model

Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon

Abstract:

The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), root mean square error (RMSE), and estimated bias (EB). In this study, we applied these methods to imputing missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
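In outline, EM imputation alternates between filling in missing values with their expected values under the current model fit and refitting the model. A minimal regression-flavored sketch on synthetic data (not the authors' LFRM formulation) is:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5, n)    # underlying linear relationship
miss = rng.uniform(size=n) < 0.2             # 20% of y missing at random
y_obs = np.where(miss, np.nan, y)

# EM-style loop: E-step imputes E[y|x] from the current fit; M-step refits.
y_fill = np.where(miss, np.nanmean(y_obs), y_obs)   # crude initialization
for _ in range(50):
    slope, intercept = np.polyfit(x, y_fill, 1)      # M-step: refit the line
    y_fill[miss] = intercept + slope * x[miss]       # E-step: re-impute
print(f"fit after EM-style imputation: y = {intercept:.3f} + {slope:.3f} x")
mae = np.mean(np.abs(y_fill[miss] - y[miss]))        # MAE of imputed values
print(f"MAE of imputed values: {mae:.3f}")
```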

Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators

Procedia PDF Downloads 380
2600 An Adaptive Cooperative Scheme for Reliability of Transmission Using STBC and CDD in Wireless Communications

Authors: Hyun-Jun Shin, Jae-Jeong Kim, Hyoung-Kyu Song

Abstract:

In broadcasting and cellular systems, a cooperative scheme is proposed to improve bit-error-rate performance. At present, the coverage of broadcasting systems coexists with the coverage of cellular systems, so each user in cellular coverage is frequently also within broadcasting coverage. The proposed cooperative scheme is derived from these shared areas. The users receive signals from both the broadcasting base station and the cellular base station. The proposed scheme selects the cellular base station with the worse channel to achieve better bit-error-rate performance in cooperation. The performance of the proposed scheme is evaluated in a fading channel.
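For background, the STBC building block named in the title is typically Alamouti's two-antenna code. A self-contained BPSK simulation over a flat Rayleigh channel follows, as a generic illustration rather than the paper's broadcast/cellular cooperation setup:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs, snr_db = 200_000, 10.0
noise_var = 10 ** (-snr_db / 10.0)

s1 = 2.0 * rng.integers(0, 2, n_pairs) - 1.0   # BPSK symbols +/-1
s2 = 2.0 * rng.integers(0, 2, n_pairs) - 1.0
h1 = (rng.normal(size=n_pairs) + 1j * rng.normal(size=n_pairs)) / np.sqrt(2)
h2 = (rng.normal(size=n_pairs) + 1j * rng.normal(size=n_pairs)) / np.sqrt(2)
noise = lambda: np.sqrt(noise_var / 2) * (rng.normal(size=n_pairs)
                                          + 1j * rng.normal(size=n_pairs))

# Alamouti: slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*).
r1 = h1 * s1 + h2 * s2 + noise()
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise()

# Linear combining restores orthogonal, diversity-2 symbol estimates.
e1 = np.conj(h1) * r1 + h2 * np.conj(r2)
e2 = np.conj(h2) * r1 - h1 * np.conj(r2)
ber = np.mean(np.concatenate([np.sign(e1.real) != s1,
                              np.sign(e2.real) != s2]))
print(f"Alamouti 2x1 BPSK BER at {snr_db} dB: {ber:.2e}")
```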

Keywords: cooperative communication, diversity, STBC, CDD, channel condition, broadcasting system, cellular system

Procedia PDF Downloads 492
2599 High Accuracy Analytic Approximation for Special Functions Applied to Bessel Functions J₀(x) and Its Zeros

Authors: Fernando Maass, Pablo Martin, Jorge Olivares

Abstract:

The Bessel function J₀(x) is very important in electrodynamics and physics, as are its zeros. In this work, a method to obtain high-accuracy approximations is presented through an application to that function. In most applications of this function, the values of the zeros are very important. In this work, analytic approximations for this function have been obtained that are valid for all positive values of the variable x and have high accuracy both for the function and for its zeros. The approximation is determined by the simultaneous use of the power series and the asymptotic expansion. The structure of the approximation is a combination of two rational functions with elementary functions such as trigonometric functions and fractional powers. Here, as in the Padé method, rational functions are used, but now they are combined with elementary functions such as fractional powers, hyperbolic or trigonometric functions, and others. The reason is that the power series of the exact function is now used together with the asymptotic expansion, which usually includes fractional powers, trigonometric functions, and other types of elementary functions. The approximation must be a bridge between both expansions, and this cannot be accomplished with rational functions alone. In the simplest approximation, using 4 parameters, the maximum absolute error is less than 0.006 at x ≈ 4.9. In this case, the maximum relative error for the zeros is less than 0.003, occurring at the second zero, and that value decreases rapidly for the other zeros. The same kind of behaviour holds for the relative error of the maxima and minima of the function. Approximations with higher accuracy and more parameters will also be shown. All the approximations are valid for any positive value of x, and they can be calculated easily.
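The two expansions being bridged can be checked numerically. The sketch below compares scipy's J0 with its leading asymptotic form sqrt(2/(pi x))·cos(x − pi/4), which is the large-x anchor such quasirational approximations must reproduce:

```python
import numpy as np
from scipy.special import j0

x = np.array([1.0, 2.4048, 4.9, 10.0, 50.0])   # includes the first zero ~2.4048

exact = j0(x)
asym = np.sqrt(2.0 / (np.pi * x)) * np.cos(x - np.pi / 4.0)  # leading term
for xi, e, a in zip(x, exact, asym):
    print(f"x = {xi:7.4f}: J0 = {e:+.6f}, asymptotic = {a:+.6f}, "
          f"abs. error = {abs(e - a):.2e}")
```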

Keywords: analytic approximations, asymptotic approximations, Bessel functions, quasirational approximations

Procedia PDF Downloads 236
2598 Optimized Dynamic Bayesian Networks and Neural Verifier Test Applied to On-Line Isolated Character Recognition

Authors: Redouane Tlemsani, Belkacem Kouninef, Abdelkader Benyettou

Abstract:

In this paper, our system is a Markovian system that can be viewed as a Dynamic Bayesian Network. One of the major interests of these systems resides in the complete training of the models (topology and parameters) starting from training data. Bayesian networks represent models of uncertain knowledge about complex phenomena. They unite probability theory and graph theory to give effective tools for representing a joint probability distribution over a set of random variables. The representation of knowledge is based on describing, by graphs, the causal relations existing between the variables defining the field of study. The theory of Dynamic Bayesian Networks is a generalization of Bayesian networks to dynamic processes. Our objective amounts to finding the best structure representing the relationships (dependencies) between the variables of a dynamic Bayesian network. In pattern recognition applications, the structure is usually fixed, which obliges us to admit some strong assumptions (for example, independence between some variables).
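Since a hidden Markov model is the simplest dynamic Bayesian network, inference over time can be illustrated with the forward algorithm. A minimal sketch with made-up transition and emission tables (not the paper's character-recognition models) follows:

```python
import numpy as np

# Hypothetical 2-state HMM: transitions A, emissions B, initial distribution pi.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],          # P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])

def forward_loglik(obs):
    """Log-likelihood of an observation sequence (scaled forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    alpha /= scale                     # rescale to avoid numerical underflow
    loglik = np.log(scale)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        scale = alpha.sum()
        alpha /= scale
        loglik += np.log(scale)
    return loglik

obs = [0, 0, 1, 0, 1, 1]
print("log P(obs):", forward_loglik(obs))
```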

Keywords: Arabic online character recognition, dynamic Bayesian network, pattern recognition, networks

Procedia PDF Downloads 600
2597 Strategy in Practice: Strategy Development, Strategic Error and Project Delivery

Authors: Nipun Agarwal, David Paul, Fareed Un Din

Abstract:

Strategy development and implementation are the key to an organization’s success in today’s competitive marketplace. Many organizations develop excellent strategies but are unable to implement them in order to succeed. The gap between strategic goals and their implementation is called strategic error. Strategic error occurs when an organization does not have structures in place to implement its strategy. Strategy implementation happens through projects, and having a project management method that provides certainty and agility will help an organization become more competitive in implementing strategy. Numerous project management methods exist in theory and practice. In the past, however, projects mainly used the Waterfall method, which provides certainty in terms of budget, delivery date, and resourcing. It is now common practice to utilise Agile-based methods, which do not provide specific deadlines and budgets but do provide agility in product design and project delivery, which is useful to companies. In some respects, the Waterfall and Agile methods are opposites of each other. Executive management prefer agility in delivering projects, as the competitive landscape changes frequently. However, they also appreciate certainty in projects, being able to quantify budgets, deadlines, and resources, which is harder for an Agile-based method to provide. This paper attempts to develop a hybrid project management method that merges the Waterfall and Agile methods to provide the positives of both approaches.

Keywords: strategy, project management, strategy implementation, agile

Procedia PDF Downloads 99
2596 An Analysis of a Queueing System with Heterogeneous Servers Subject to Catastrophes

Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan

Abstract:

This study analyzed a queueing system with blocking and no waiting line. Customers arrive according to a Poisson process, and the service times follow an exponential distribution. There are two non-identical servers in the system. The queue discipline is FCFS, and customers select the servers on a fastest-server-first (FSF) basis. The service times are exponentially distributed with parameters μ1 and μ2 at servers I and II, respectively. In addition, catastrophes occur in a Poisson manner with rate γ in the system. When server I is busy or blocked, a customer who arrives in the system leaves without being served; such customers are called lost customers. The probability of losing a customer was computed for the system. The explicit time-dependent probabilities of system size are obtained, and a numerical example is presented in order to show the managerial insights of the model. Finally, the probability that an arriving customer finds the system busy and the average number of busy servers in steady state are obtained numerically.
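The loss probability can be sanity-checked with a small discrete-event simulation. The sketch below implements FSF routing with illustrative rates (lambda, mu1, mu2, gamma), assumes the standard both-servers-busy loss rule, and clears both servers when a catastrophe occurs:

```python
import numpy as np

def simulate(lam=1.0, mu1=1.5, mu2=0.8, gamma=0.1, horizon=50_000, seed=0):
    """Two heterogeneous servers, no waiting line, FSF routing, catastrophes."""
    rng = np.random.default_rng(seed)
    t, busy = 0.0, [False, False]
    arrivals = lost = 0
    while t < horizon:
        rates = [lam, mu1 if busy[0] else 0.0, mu2 if busy[1] else 0.0, gamma]
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=np.array(rates) / total)
        if event == 0:                       # arrival: fastest free server first
            arrivals += 1
            if not busy[0]:
                busy[0] = True
            elif not busy[1]:
                busy[1] = True
            else:
                lost += 1                    # both servers busy: customer lost
        elif event in (1, 2):                # service completion
            busy[event - 1] = False
        else:                                # catastrophe flushes the system
            busy = [False, False]
    return lost / arrivals

print(f"estimated loss probability: {simulate():.4f}")
```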

Keywords: queueing system, blocking, Poisson process, heterogeneous servers, queue discipline FCFS, busy period

Procedia PDF Downloads 489
2595 Experimenting with Error Performance of Systems Employing Pulse Shaping Filters on a Software-Defined-Radio Platform

Authors: Chia-Yu Yao

Abstract:

This paper presents experimental results on testing the symbol-error-rate (SER) performance of quadrature amplitude modulation (QAM) systems employing symmetric pulse-shaping square-root (SR) filters designed by minimizing the roughness function and by minimizing the peak-to-average power ratio (PAR). The device used in the experiments is the 'bladeRF' software-defined-radio platform. PAR is a well-known measure, whereas the roughness function is a concept for measuring jitter-induced interference. The experimental results show that the system employing minimum-roughness pulse-shaping SR filters outperforms the system employing minimum-PAR pulse-shaping SR filters in terms of SER performance.
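For reference, the conventional square-root raised-cosine pulse, the family to which both designs belong, can be generated directly from its closed form. A sketch with illustrative roll-off and span follows; the singular points at t = 0 and |t| = Ts/(4·beta) use their limit values:

```python
import numpy as np

def srrc(beta=0.35, sps=8, span=10):
    """Square-root raised-cosine taps: roll-off beta, sps samples/symbol."""
    Ts = 1.0
    t = np.arange(-span / 2, span / 2 + 1.0 / sps, 1.0 / sps) * Ts
    h = np.empty_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = (1 + beta * (4 / np.pi - 1)) / Ts
        elif np.isclose(abs(ti), Ts / (4 * beta)):
            h[i] = (beta / (Ts * np.sqrt(2))) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            x = ti / Ts
            h[i] = (np.sin(np.pi * x * (1 - beta))
                    + 4 * beta * x * np.cos(np.pi * x * (1 + beta))) / (
                   np.pi * x * (1 - (4 * beta * x) ** 2) * Ts)
    return h / np.sqrt(np.sum(h ** 2))       # unit-energy normalization

taps = srrc()
print(len(taps), "taps; center tap =", taps[len(taps) // 2].round(4))
```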

Keywords: pulse-shaping filters, FIR filters, jittering, QAM

Procedia PDF Downloads 329
2594 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception

Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova

Abstract:

The perception of visual signals in the brain was among the first issues discussed in terms of multistability, which has been introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrast were shown repeatedly to different people, and the probability of choosing one of the cube's projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered in the left middle corner was used as a control parameter. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent of the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with the electroencephalograph recorder Encephalan-EEGR-19/26 (Medicom MTD). To treat the experimental results, we carried out a theoretical analysis using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of selecting one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, using the experimental measurements and the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of each subject. The range of cognitive noise parameter values for the studied subjects turned out to be [0.08; 0.55]. It should be noted that the experimental results have good reproducibility: the same person, studied again on another day, produces very similar data with very close levels of cognitive noise. We found excellent agreement between the analytically deduced probability and the experimental results. This good qualitative agreement between theory and experiment indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, such as the brain noise level.
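The double-well picture can be made concrete with an overdamped Langevin simulation, dx = -U'(x) dt + sqrt(2D) dW with U(x) = x⁴/4 - x²/2 + a·x, where the asymmetry a plays the role of wireframe contrast and D the cognitive noise. A minimal Euler-Maruyama sketch with illustrative values:

```python
import numpy as np

def choice_probability(a=0.1, D=0.3, t_end=2000.0, dt=0.01, seed=0):
    """Fraction of time spent in the right-hand well of a tilted double well."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x, right = 0.0, 0
    for _ in range(n):
        drift = -(x**3 - x + a)              # -U'(x), U = x^4/4 - x^2/2 + a*x
        x += drift * dt + np.sqrt(2 * D * dt) * rng.normal()
        right += x > 0                       # x > 0 ~ one cube interpretation
    return right / n

for a in (-0.2, 0.0, 0.2):
    print(f"asymmetry {a:+.1f}: P(right interpretation) ~ "
          f"{choice_probability(a):.3f}")
```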

Keywords: bistability, brain, noise, perception, stochastic processes

Procedia PDF Downloads 432
2593 Employers’ Preferences When Employing the Solo Self-Employed: A Vignette Study in the Netherlands

Authors: Lian Kösters, Wendy Smits, Raymond Montizaan

Abstract:

The number of solo self-employed in the Netherlands has been increasing for years, and the relative increase is among the largest in the EU. To explain this increase, most studies have focused on the supply side: workers who offer themselves as solo self-employed. The number of studies that focus on the demand side, the employers who hire the solo self-employed, is still scarce. Studies into employer behaviour conducted until now show that employers mainly choose self-employed workers when they have a temporary need for specialist knowledge, but also during projects or production peaks. These studies do not provide insight into employers’ considerations for different contract types. In this study, interviews with employers were conducted, and the available literature was consulted, to provide an overview of the factors employers use to compare different contract types. That input was used to set up a vignette study, which was carried out at the end of 2021 among almost 1000 business owners, HR managers, and business leaders of Dutch companies. Each respondent was given two sets of five fictitious candidates for two possible positions in their organization and was asked to rank these candidates. The positions varied with regard to the type of tasks (core tasks or support tasks) and the time it takes to train new people for the position. The respondents were asked additional questions about the positions, such as the required level of education, the duration, and the degree of predictability of the tasks. The fictitious candidates varied, among other things, in the type of contract on which they would come to work for the organization. The results were analyzed using a rank-ordered logit analysis. This vignette setup makes it possible to see which factors are most important for employers when choosing to hire a solo self-employed person over workers on other contracts. The results show no indication that employers would want to hire solo self-employed workers en masse; they prefer regular employee contracts. The probability of being chosen with a solo self-employed contract over someone who comes to work as a temporary employee is 32 percent. This probability is even lower than for on-call and temporary agency workers. For a permanent contract, this probability is 46 percent. The results provide indications that employers consider knowledge and skills more important than the contract type, and that these can compensate: a solo self-employed candidate with 10 years of work experience has a 63 percent probability of being found attractive by an employer compared to a temporary employee without work experience. This suggests that employers are willing to give someone a contract that is less attractive for the employer if the worker so wishes. The results also show that the probability that a solo self-employed person is preferred over a candidate with a temporary employee contract is somewhat higher in business economics, administrative, and technical professions. No significant results were found for factors where it was expected that solo self-employed workers would be preferred more often, such as for unpredictable or temporary work.

Keywords: employer behaviour, rank-ordered logit analysis, solo self-employment, temporary contract, vignette study

Procedia PDF Downloads 59
2592 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination by deploying security at the physical layer. The cooperative jamming scheme enables more secure transmission of big data by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data-transmitting region, segmenting the selected region, determining a probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using a binary evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are countered by the cooperative jamming scheme and the data is transmitted in two-hop transmission.

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 471
2591 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement

Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad

Abstract:

An approach to super-resolving low-resolution (LR) images is presented in this paper, which is very useful in multimedia communication, medical image enhancement, and satellite image enhancement, where a clear view of the information in the image is needed. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. This method uses the B-Spline technique and crispening. This work is evaluated qualitatively and quantitatively using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms in the presence of different noise types and noise levels are tested.
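A baseline version of this pipeline (cubic B-spline interpolation, without the paper's crispening step) and the two quality metrics can be sketched as follows; scipy's zoom performs spline interpolation of the given order:

```python
import numpy as np
from scipy import ndimage

y, x = np.mgrid[0:128, 0:128]
hr = 127.5 * (1 + np.sin(x / 8.0) * np.cos(y / 11.0))  # smooth synthetic image

lr = hr[::2, ::2]                              # decimate by 2 in each axis
up = ndimage.zoom(lr, 2, order=3)              # cubic B-spline upscaling

mse = np.mean((hr - up) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)         # peak value 255 for 8-bit range
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```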

Keywords: advanced B-spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter

Procedia PDF Downloads 387
2590 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria

Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter

Abstract:

Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, and hospitalization. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, which are computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecasting methods considered were linear regression, integrated moving average, ARIMA, and SARIMA modeling for the parametric approach, while a Multilayer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network were used for the non-parametric approach. The performance of each method is evaluated using the Mean Absolute Error (MAE), R-squared (R2), and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the Bootstrap Aggregating technique was used to make robust forecasts when there are uncertainties in the data.
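As a sketch of the parametric baseline (with synthetic monthly counts in place of the clinic records, and an illustrative (1,1,1) order):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
t = np.arange(120)                              # 10 years of monthly counts
y = 200 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)

train, test = y[:108], y[108:]                  # hold out the final year
fit = ARIMA(train, order=(1, 1, 1)).fit()
pred = fit.forecast(steps=len(test))

mae = np.mean(np.abs(pred - test))
rmse = np.sqrt(np.mean((pred - test) ** 2))
print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```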

Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis

Procedia PDF Downloads 62
2589 Study of Seismic Damage in Reinforced Concrete Frames of Variable Height Using the Logistic Distribution Function

Authors: P. Zarfam, M. Mansouri Baghbaderani

Abstract:

In seismic design, the proper reaction to the earthquake and the correct and accurate prediction of its subsequent effects on the structure are critical. Choosing a proper probability distribution, one that gives a more realistic probability of the structure's damage rate, is essential in damage studies. With the development of performance-based design, the modal pushover analysis method, an inexpensive, effective, and quick way to estimate structures' seismic response, is broadly used in engineering practice. In this research, three concrete frames of 3, 6, and 13 stories are analyzed by nonlinear modal pushover under 30 different earthquake records in OpenSees software; then, the damage indexes of roof displacement and inter-story relative displacement ratio are calculated against two parameters: peak ground acceleration and spectral acceleration. These indexes are used to establish damage relations with the log-normal distribution and the logistic distribution. Finally, the values of these relations are compared, and the effect of height on the damage relations is studied as well.
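Damage relations of this kind are usually fragility curves, P(damage | intensity) fitted with a lognormal or logistic CDF. A sketch fitting both to synthetic exceedance data (illustrative parameters, not the frames analyzed here):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def lognormal_cdf(im, theta, beta):
    """Classic fragility form: median theta, log-standard-deviation beta."""
    return norm.cdf(np.log(im / theta) / beta)

def logistic_cdf(im, mu, s):
    return 1.0 / (1.0 + np.exp(-(np.log(im) - mu) / s))

rng = np.random.default_rng(13)
pga = np.linspace(0.05, 1.5, 30)                # intensity measure (g)
true = lognormal_cdf(pga, theta=0.5, beta=0.45)
observed = np.clip(true + rng.normal(0, 0.03, pga.size), 0, 1)

p_ln, _ = curve_fit(lognormal_cdf, pga, observed, p0=[0.5, 0.4])
p_lg, _ = curve_fit(logistic_cdf, pga, observed, p0=[-0.7, 0.3])
print("lognormal fit (theta, beta):", np.round(p_ln, 3))
print("logistic fit (mu, s):", np.round(p_lg, 3))
```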

Keywords: modal pushover analysis, concrete structure, seismic damage, log-normal distribution, logistic distribution

Procedia PDF Downloads 235
2588 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser

Authors: Guanqiao Wang, Hongyang Yu

Abstract:

There is a lot of repetitive work in the traditional construction industry. Replacing manual work with robots in these repetitive tasks can significantly improve production efficiency. Therefore, robots appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the requirements for positioning accuracy are very high. Traditional indoor robots mainly use radio-frequency or vision methods for positioning. Compared with ordinary robots, an indoor plastering robot needs to be positioned closer to the wall for wall plastering, so the requirements for construction positioning accuracy are higher. The traditional navigation and positioning methods have large errors: without an exact position, the wall cannot be plastered, or the plastering error is large. A new positioning method is proposed that is assisted by line lasers and uses image-processing-based positioning to refine the traditional positioning result. In operation, filtering, edge detection, the Hough transform, and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the reference value, and the robot is moved or rotated to complete the positioning work. The experimental results show that the actual positioning error is reduced to less than 0.5 mm by this accurate positioning method.
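The image-processing chain described (filtering, edge detection, Hough transform) maps directly onto standard OpenCV calls. A sketch of locating a near-vertical laser line and computing its pixel offset from a reference position follows; the synthetic test frame and the reference column are placeholders:

```python
import cv2
import numpy as np

# Synthetic stand-in frame: a bright vertical line on a dark wall.
frame = np.zeros((480, 640), np.uint8)
cv2.line(frame, (330, 0), (330, 479), 255, 2)
REF_X = 320.0                        # reference line position (placeholder, px)

blurred = cv2.GaussianBlur(frame, (5, 5), 0)       # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)                # edge detection
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

if lines is not None:
    # Keep near-vertical segments: candidates for the projected laser line.
    vertical = [l[0] for l in lines if abs(l[0][0] - l[0][2]) < 5]
    if vertical:
        x_mean = np.mean([(x1 + x2) / 2.0 for x1, y1, x2, y2 in vertical])
        offset_px = x_mean - REF_X
        print(f"laser line at x = {x_mean:.1f} px, offset {offset_px:+.1f} px")
        # offset_px would then be converted to mm and fed to motion control.
```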

Keywords: indoor plastering robot, navigation, precise positioning, line laser, image processing

Procedia PDF Downloads 133
2587 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information

Authors: A. Preetha Priyadharshini, S. B. M. Priya

Abstract:

In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep each user's rate outage probability below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods are used to solve the transmit optimization problem under imperfect CSI. Here, decomposition-based large deviation inequality and Bernstein-type inequality convex restriction methods are used to solve the optimization problem under imperfect CSI. These methods are used to achieve improved output quality and lower complexity. They provide a safe, tractable approximation of the original rate outage constraints. Based on these method implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
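Whatever convex restriction is used, the resulting beamformer can be validated by Monte Carlo estimation of the rate outage probability. A generic single-user sketch with Gaussian CSI error follows; the dimensions, error variance, and MRT beamformer are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(17)
nt, sigma_e2, noise = 4, 0.01, 1.0            # antennas, CSI error var., noise
h_hat = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
w = h_hat / np.linalg.norm(h_hat)             # MRT beamformer from the estimate
r_target = 1.0                                # target rate (bits/s/Hz)

# Draw CSI errors, compute the achieved rate, count threshold violations.
trials = 100_000
e = np.sqrt(sigma_e2 / 2) * (rng.normal(size=(trials, nt))
                             + 1j * rng.normal(size=(trials, nt)))
h = h_hat + e                                 # true channel = estimate + error
snr = np.abs(h @ np.conj(w)) ** 2 / noise
rate = np.log2(1 + snr)
print(f"estimated outage probability: {np.mean(rate < r_target):.4f}")
```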

Keywords: imperfect channel state information, outage probability, multiuser multi-input single-output, channel state information

Procedia PDF Downloads 804