Search results for: tensorflow probability
404 Model Parameters Estimating on Lyman–Kutcher–Burman Normal Tissue Complication Probability for Xerostomia on Head and Neck Cancer
Authors: Tsair-Fwu Lee , Hui-Min Ting , Pei-Ju Chao, Jing-Chuan Jiang, Min-Yuan Chao, Wen-Cheng Chen, Long-Chang Chen, Jia-Ming Wu
Abstract:
The purpose of this study is to derive parameter estimates for the Lyman–Kutcher–Burman (LKB) normal tissue complication probability (NTCP) model using analysis of scintigraphy assessments and quality of life (QoL) measurement questionnaires for the parotid gland (xerostomia). In total, 31 patients with head-and-neck (HN) cancer were enrolled. Salivary excretion factor (SEF) and EORTC QLQ-H&N35 questionnaire datasets were used for the NTCP modeling to describe the incidence of grade 4 xerostomia. Assuming that n = 1, the fitted NTCP parameters are TD50 = 43.6 Gy, m = 0.18 for the SEF analysis, and TD50 = 44.1 Gy, m = 0.11 for the QoL measurements. The SEF and QoL datasets validate the Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC) guidelines well, resulting in negative predictive values (NPVs) of 100% for both datasets, and suggest that the QUANTEC 25/20 Gy gland-sparing guidelines are suitable for clinical use in the HN cohort to effectively avoid xerostomia.
Keywords: HN, NTCP, SEF, QoL, QUANTEC.
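As an illustration of how the fitted LKB parameters translate into a complication probability, the following is a minimal sketch (not the authors' code) that evaluates the LKB NTCP model for n = 1, where the generalized equivalent uniform dose reduces to the mean parotid dose; the 26 Gy input dose is a hypothetical value.

```python
from scipy.stats import norm

def lkb_ntcp(mean_dose_gy, td50=43.6, m=0.18):
    """LKB NTCP for n = 1: NTCP = Phi((D - TD50) / (m * TD50))."""
    t = (mean_dose_gy - td50) / (m * td50)
    return norm.cdf(t)

# Hypothetical mean parotid dose of 26 Gy with the SEF-fitted parameters.
print(f"NTCP at 26 Gy: {lkb_ntcp(26.0):.3f}")
```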
403 Wireless Transmission of Big Data Using Novel Secure Algorithm
Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha
Abstract:
This paper presents a novel algorithm for secure, reliable and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay and destination nodes. Big data has to be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selection of the data-transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node and eavesdropper node) in every segment, and evaluating the probability using binary-based evaluation. If the transmission is secure, it proceeds with the two-hop transmission of big data; otherwise, the attackers are countered by the cooperative jamming scheme and the data are then transmitted in two hops.
Keywords: Big data, cooperative jamming, energy balance, physical layer, two-hop transmission, wireless security.
402 The Contribution of Growth Rate to the Pathogenicity of Candida spp.
Authors: Shu-Ying Marissa Pang, Stephen Tristram, Simon Brown
Abstract:
Fungal infections are becoming more common and the range of susceptible individuals has expanded. While Candida albicans remains the most common infective species, other Candida spp. are becoming increasingly significant. In a range of large-scale studies of candidaemia between 1999 and 2006, about 52% of 9717 cases involved C. albicans, about 30% involved either C. glabrata or C. parapsilosis, and less than 15% involved C. tropicalis, C. krusei or C. guilliermondii. However, the probability of mortality within 30 days of infection with a particular species was at least 40% for C. tropicalis, C. albicans, C. glabrata and C. krusei, and only 22% for C. parapsilosis. Clinical isolates of Candida spp. grew at rates ranging from 1.65 h⁻¹ to 4.9 h⁻¹. Three species (C. krusei, C. albicans and C. glabrata) had relatively high growth rates (μm > 4 h⁻¹), C. tropicalis and C. dubliniensis grew moderately quickly (≈ 3 h⁻¹), and C. parapsilosis and C. guilliermondii grew slowly (< 2 h⁻¹). Based on these data, the log of the odds of mortality within 30 days of diagnosis was linearly related to μm. From this, the underlying probability of mortality is 0.13 (95% CI: 0.10-0.17) and it increases by about 0.09 ± 0.02 for each unit increase in μm. Given that the overall crude mortality is about 0.36, the growth of Candida spp. approximately doubles the rate, consistent with the results of larger case-matched studies of candidaemia.
Keywords: Candida spp., candidiasis, growth, pathogenicity.
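A minimal sketch (not from the paper) of the reported log-odds relationship: the intercept is back-calculated from the stated baseline mortality of 0.13, while the slope beta1 = 0.8 is a hypothetical value chosen so that the marginal increase near the baseline is roughly the reported 0.09 per unit of μm.

```python
import math

def mortality_probability(mu_m, beta0=math.log(0.13 / 0.87), beta1=0.8):
    """Logistic model: log-odds of 30-day mortality linear in mu_m."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * mu_m)))

def marginal_effect(mu_m, beta1=0.8):
    """dp/dmu at a given growth rate: beta1 * p * (1 - p)."""
    p = mortality_probability(mu_m)
    return beta1 * p * (1.0 - p)

print(mortality_probability(0.0))  # baseline ~0.13
print(marginal_effect(0.0))        # ~0.09 per unit increase near baseline
```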
401 A Recognition Method of Ancient Yi Script Based on Deep Learning
Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma
Abstract:
Yi is an ethnic group mainly living in mainland China, with its own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into an electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the highest-probability features were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwriting recognition of ancient Yi.
Keywords: Recognition, CNN, convolutional neural network, Yi character, divergence.
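As a rough illustration of the kind of architecture described, here is a minimal PyTorch sketch (not the authors' code) of a four-layer CNN with two fully connected layers and a softmax output; the 64x64 input size, channel widths and class count are hypothetical, and the Alpha-Beta divergence penalty term is omitted.

```python
import torch
import torch.nn as nn

class YiCNN(nn.Module):
    """Four conv layers + two fully connected layers + softmax, as sketched."""
    def __init__(self, num_classes=1000):  # hypothetical number of characters
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),  # assumes 64x64 input
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        # Softmax yields the per-character probability distribution.
        return torch.softmax(self.classifier(self.features(x)), dim=1)

probs = YiCNN()(torch.randn(1, 1, 64, 64))  # one hypothetical 64x64 glyph
```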
400 Cyber Security Enhancement via Software-Defined Pseudo-Random Private IP Address Hopping
Authors: Andre Slonopas, Warren Thompson, Zona Kostic
Abstract:
Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of the network communications between external-facing edge devices. This work proposes the use of two edge devices, external- and internal-facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires collecting at least 1e6 samples, which for large hopping surfaces would take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops, and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition (SCADA) systems.
Keywords: Moving Target Defense, cybersecurity, network security, hopping randomization, software defined network, network security theory.
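A minimal sketch of how two endpoints could derive the same pseudo-random private address sequence; the shared seed, fixed hop epochs, and the 10.0.0.0/22 range (roughly the 1e3-address surface the paper requires) are assumptions for illustration, not details from the paper.

```python
import ipaddress
import random

SURFACE = list(ipaddress.ip_network("10.0.0.0/22").hosts())  # ~1e3 addresses

def hop_address(shared_seed: int, epoch: int) -> ipaddress.IPv4Address:
    """Both edge devices derive the same address for a given hop epoch."""
    rng = random.Random(f"{shared_seed}:{epoch}")  # deterministic per epoch
    return rng.choice(SURFACE)

# Hypothetical usage: the address for hop epoch 42 under a shared seed.
print(hop_address(0xC0FFEE, 42))
```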
399 A Kernel Based Rejection Method for Supervised Classification
Authors: Abdenour Bounsiar, Edith Grall, Pierre Beauseroy
Abstract:
In this paper we are interested in classification problems with a performance constraint on error probability. In such problems, if the constraint cannot be satisfied, a rejection option is introduced. For binary-labelled classification, a number of SVM-based methods with a rejection option have been proposed over the past few years. All of these methods use two thresholds on the SVM output. However, in previous work, we have shown on synthetic data that using thresholds on the output of the optimal SVM may lead to poor results for classification tasks with a performance constraint. In this paper a new method for supervised classification with a rejection option is proposed. It consists of two different classifiers jointly optimized to minimize the rejection probability subject to a given constraint on the error rate. The method uses a new kernel-based linear learning machine that we have recently presented. This learning machine is characterized by its simplicity and high training speed, which makes the simultaneous optimization of the two classifiers computationally reasonable. The proposed classification method with rejection option is compared to an SVM-based rejection method proposed in the recent literature. Experiments show the superiority of the proposed method.
Keywords: Rejection, Chow's rule, error-reject tradeoff, Support Vector Machine.
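To make the two-threshold idea concrete, here is a minimal sketch of the baseline approach the paper argues against (thresholds on an SVM output), not the authors' jointly optimized method; the data and threshold values are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def classify_with_reject(scores, t_low=-0.5, t_high=0.5):
    """Chow-style rule on SVM scores: reject inside [t_low, t_high]."""
    return np.where(scores >= t_high, 1,
                    np.where(scores <= t_low, 0, -1))  # -1 marks rejections

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
svm = SVC(kernel="linear").fit(X, y)
decisions = classify_with_reject(svm.decision_function(X))
print("reject rate:", np.mean(decisions == -1))
```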
398 Some Investigations on Higher Mathematics Scores for Chinese University Student
Authors: Xun Ge, Jingju Qian
Abstract:
To investigate relations between higher mathematics scores in the Chinese graduate student entrance examination and calculus (resp. linear algebra, probability statistics) scores in the subject-completion examinations of a Chinese university, we selected 20 students as a sample, took the higher mathematics score as a decision attribute, and took the calculus score, linear algebra score and probability statistics score as condition attributes. In this paper, we use rough-set theory (a logic-mathematical method proposed by Z. Pawlak which, in recent years, has been widely implemented in many fields of natural and social science) to investigate the importance of the condition attributes with respect to the decision attribute and the strength of the condition attributes supporting the decision attribute. The results of this investigation will be helpful for university students aiming to raise their higher mathematics scores in the Chinese graduate student entrance examination.
Keywords: Rough set, higher mathematics scores, decision attribute, condition attribute.
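A minimal sketch (with a made-up decision table, not the paper's 20-student data) of the standard rough-set measure behind attribute importance: the dependency degree γ(C, D) is the fraction of objects in the positive region, and an attribute's significance is the drop in γ when that attribute is removed.

```python
from itertools import groupby

def partitions(table, attrs):
    """Equivalence classes of row indices under indiscernibility on attrs."""
    key = lambda i: tuple(table[i][a] for a in attrs)
    rows = sorted(range(len(table)), key=key)
    return [set(g) for _, g in groupby(rows, key=key)]

def gamma(table, cond, dec):
    """Dependency degree: |positive region| / |U|."""
    dec_classes = partitions(table, dec)
    pos = sum(len(c) for c in partitions(table, cond)
              if any(c <= d for d in dec_classes))
    return pos / len(table)

# Hypothetical discretized scores: condition attrs vs. decision attr 'hm'.
table = [{"calc": "high", "la": "high", "ps": "mid", "hm": "high"},
         {"calc": "high", "la": "mid",  "ps": "mid", "hm": "high"},
         {"calc": "low",  "la": "mid",  "ps": "low", "hm": "low"},
         {"calc": "low",  "la": "low",  "ps": "low", "hm": "low"}]
full = gamma(table, ["calc", "la", "ps"], ["hm"])
for a in ["calc", "la", "ps"]:
    rest = [x for x in ["calc", "la", "ps"] if x != a]
    print(a, "significance:", full - gamma(table, rest, ["hm"]))
```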
397 Reliability of Chute-Feeders in Automatic Machines of High Production Capacity
Authors: R. Usubamatov, A. Usubamatova, S. Hussain
Abstract:
Modern highly automated production systems face problems of reliability. Machine reliability determines the productivity rate and the efficiency of use of expensive industrial facilities. Predicting reliability has become an important research area and involves complex mathematical methods and calculations. The reliability of highly productive technological automatic machines, which consist of complex mechanical, electrical and electronic components, is important: the failure of these units results in major economic losses for production systems. The reliability of transport and feeding systems for automatic technological machines is also important, because failure of transport leads to stoppages of the technological machines. This paper presents reliability engineering of the feeding system and its components for transporting complex-shaped parts to automatic machines. It also discusses the calculation of the reliability parameters of the feeding unit by applying probability theory. The equations produced for calculating the limits of the geometrical sizes of feeders and the probability of the transported parts sticking in the chute represent the reliability of feeders as a function of their geometrical parameters.
Keywords: Chute-feeder, parts, reliability.
396 FEM Simulation of HE Blast-Fragmentation Warhead and the Calculation of Lethal Range
Authors: G. Tanapornraweekit, W. Kulsirikasem
Abstract:
This paper presents the simulation of a fragmentation warhead using a hydrocode, Autodyn. The goal of this research is to determine the lethal range of such a warhead. This study investigates the lethal range of warheads with and without steel balls as preformed fragments. The results from the FE simulation, i.e. the initial velocities and ejected spray angles of the fragments, are further processed using an analytical approach to determine the fragment hit density and probability of kill of a modelled warhead. Simulating a large number of preformed fragments inside a warhead requires expensive computational resources. Therefore, this study models the problem by an alternative approach, considering a mass of preformed fragments equivalent to the mass of the warhead casing. This approach yields approximately 7% and 20% differences in fragment velocities from the analytical results for one and two layers of preformed fragments, respectively. The lethal ranges of the simulated warheads are 42.6 m and 56.5 m for warheads with one and two layers of preformed fragments, respectively, compared to 13.85 m for a warhead without preformed fragments. These lethal ranges are based on the requirement of fragment hit density. The lethal ranges based on the probability of kill are 27.5 m, 61 m and 70 m for warheads with no preformed fragments, one layer, and two layers of preformed fragments, respectively.
Keywords: Lethal range, natural fragment, preformed fragment, warhead.
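A minimal sketch (a textbook-style approximation, not the paper's analytical model) of the hit-density criterion: assuming N fragments spread uniformly over a sphere, the density at range R is N/(4πR²), so the lethal range is the distance at which density falls to a required threshold; the fragment count and threshold below are hypothetical.

```python
import math

def hit_density(n_fragments, range_m):
    """Fragments per square metre at range R for a uniform spherical spray."""
    return n_fragments / (4.0 * math.pi * range_m ** 2)

def lethal_range(n_fragments, required_density):
    """Range at which the hit density drops to the required value."""
    return math.sqrt(n_fragments / (4.0 * math.pi * required_density))

# Hypothetical: 20000 fragments with a required density of 1 hit per m^2.
print(f"lethal range: {lethal_range(20000, 1.0):.1f} m")
```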
395 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories
Authors: Haj Najafi Leila, Tehranizadeh Mohsen
Abstract:
Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized to be an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two distinct limiting states of independent or perfectly dependent component damage states; however, to the best of our knowledge, there is no available procedure for taking account of loss dependencies at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs under earthquake ground motions. It deals with closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system considering the cost equations of all stories by means of the introduced "substituted matrixes of mass and stiffness". Costs are treated as probabilistic variables with definite statistical factors, namely median and standard deviation, and a presumed probability distribution. To supplement the proposed procedure, and to display the straightforwardness of its application, a benchmark study has been conducted. Acceptable compatibility has been proven between the damage costs estimated by the newly proposed modal approach and by the frequently used stochastic approach for the entire building; however, at the story level, the insufficiency of employing a modification factor to incorporate occurrence-probability dependencies between stories has been revealed, owing to the discrepant amounts of dependency between the damage costs of different stories. A greater dependency contribution to the occurrence probability of loss can also be concluded from the greater compatibility of the loss results in higher stories than in lower ones, whereas reducing the number of cost modes included still provides an acceptable level of accuracy and avoids the time-consuming calculations of the high-mode situation.
Keywords: Dependency, story-cost, cost modes, engineering demand parameter.
394 Probabilistic Modeling of Network-induced Delays in Networked Control Systems
Authors: Manoj Kumar, A.K. Verma, A. Srividya
Abstract:
Time-varying network-induced delays in networked control systems (NCS) are known for degrading a control system's quality of performance (QoP) and causing stability problems. In the literature, a control method employing modeling of communication delays as a probability distribution proves to be a better method. This paper focuses on modeling network-induced delays as probability distributions. CAN and MIL-STD-1553B are extensively used to carry periodic control and monitoring data in networked control systems. In the literature, methods to estimate only the worst-case delays for these networks are available. In this paper, probabilistic network delay models for CAN and MIL-STD-1553B networks are given. A systematic method to estimate values of the model parameters from network parameters is given, along with a method to predict the network delay in the next cycle based on the present network delay. The effects of active network redundancy, and of redundancy at the node level, on network delay and system response time are also analyzed.
Keywords: NCS (networked control system), delay analysis, response-time distribution, worst-case delay, CAN, MIL-STD-1553B, redundancy.
393 A Markov Chain Model for Load-Balancing Based and Service Based RAT Selection Algorithms in Heterogeneous Networks
Authors: Abdallah Al Sabbagh
Abstract:
The Next Generation Wireless Network (NGWN) is expected to be a heterogeneous network which integrates all the different Radio Access Technologies (RATs) through a common platform. A major challenge is how to allocate users to the RAT most suitable for them. An optimized solution can maximize the efficient use of radio resources, achieve better performance for service providers, and provide Quality of Service (QoS) at low cost to users. Currently, Radio Resource Management (RRM) is implemented efficiently only for the RAT for which it was developed; it is not suitable for a heterogeneous network. Common RRM (CRRM) was proposed to manage radio resource utilization in heterogeneous networks. This paper presents a user-level Markov model for a network of three co-located RATs. Load-balancing based and service based CRRM algorithms are studied using the presented Markov model, and their performance is compared in terms of traffic distribution, new-call blocking probability, vertical handover (VHO) call dropping probability, and throughput.
Keywords: Heterogeneous wireless network, Markov chain model, load-balancing based and service based algorithm, CRRM algorithms, Beyond 3G network.
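For intuition on the blocking metric, here is a minimal sketch of a single-RAT Erlang-B baseline (a classical Markov-chain result, not the paper's three-RAT CRRM model) computing the new-call blocking probability for a RAT with c channels and an offered load of a erlangs; the numbers are hypothetical.

```python
def erlang_b(offered_load, channels):
    """Blocking probability via the numerically stable Erlang-B recursion."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Hypothetical RAT with 30 channels carrying 25 erlangs of offered traffic.
print(f"new-call blocking: {erlang_b(25.0, 30):.4f}")
```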
392 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
Authors: Chiung-Hui Chen
Abstract:
The Internet of Things (IoT) was designed for widespread convenience. With smart tags and the sensing network, a large quantity of dynamic information is immediately presented in the IoT. Through internal communication and interaction, meaningful objects provide real-time services for users. Therefore, services with appropriate decision-making have become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors, and adopted the probability module of the hierarchical Hidden Markov Model for inference. The statistical analysis was conducted to achieve the following objectives: first, define user behaviors and predict user behavior routes with the environment model in order to analyze user purposes; second, construct the hierarchical Hidden Markov Model according to the logic framework and establish the sequential intensity among behaviors, to get acquainted with the use and activity fabric of the intelligent environment; third, establish the intensity of the relation between the probability of objects being used and the objects, an indicator that can describe the possible limitations of the mechanism. As the process is recorded in the information system created in this study, these data can be reused to adjust the procedure of intelligent design services.
Keywords: Behavior, big data, hierarchical Hidden Markov Model, intelligent object.
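A minimal sketch of the probability machinery such inference rests on: a flat HMM forward pass (rather than the full hierarchical model) scoring an observed sequence of sensed events; the two states, three event types, and all matrix values are made up for illustration.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(observation sequence | HMM)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Hypothetical 2-state model (e.g. 'resting' vs 'active') over 3 sensed events.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])            # state transitions
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])  # event emissions
print(forward(pi, A, B, obs=[0, 1, 2, 2]))
```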
391 Disaggregation the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region
Authors: Mohammad Bakhshi, Firas Al Janabi
Abstract:
High-resolution rain data are very important as input for hydrological models. Among the models for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate three different rainfall resolutions (4-hourly, hourly and 10-minute) from daily data for a record period of around 20 years. The process was done with the DiMoN tool, which is based on the random cascade model and the method of fragments. Differences between the observed and simulated rain datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated data are accumulated over a shorter time period, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the P-value should be employed to show that their differences are reasonable. The results are encouraging, considering the overestimation of the generated high-resolution rainfall data.
Keywords: DiMoN tool, disaggregation, exceedance probability, Kolmogorov-Smirnov Test, rainfall.
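A minimal sketch of the K-S comparison described, using the standard SciPy two-sample test rather than the DiMoN workflow; the gamma-distributed rainfall samples are hypothetical stand-ins for the observed and disaggregated series.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
observed = rng.gamma(shape=0.8, scale=3.0, size=500)   # hypothetical 4-hourly rain
simulated = rng.gamma(shape=0.7, scale=3.4, size=500)  # hypothetical disaggregated

stat, p_value = ks_2samp(observed, simulated)
print(f"K-S statistic: {stat:.3f}, p-value: {p_value:.3f}")
# A large p-value means we cannot reject that both come from one distribution.
```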
390 Effect of Atmospheric Turbulence on Hybrid FSO/RF Link Availability under Qatar Harsh Climate
Authors: Abir Touati, Syed Jawad Hussain, Farid Touati, Ammar Bouallegue
Abstract:
Although there has been growing interest in hybrid free-space optical/radio frequency (FSO/RF) communication systems, the current literature is limited to results obtained in moderate or cold environments. In this paper, using a soft-switching approach, we investigate the effect of weather inhomogeneities on the strength of turbulence, and hence the channel refractive index, under Qatar's harsh environment, and their influence on hybrid FSO/RF availability. In this approach, either FSO or RF, both simultaneously, or neither of them can be active. Based on the soft-switching approach and a finite-state Markov chain (FSMC) process, we model the channel fading for the two links and derive a mathematical expression for the outage probability of the hybrid system. We then evaluate the behavior of the hybrid FSO/RF system under hazy and harsh weather. Results show that FSO/RF soft switching renders the system outage probability lower than that of each link individually. A soft-switching algorithm is being implemented on FPGAs using Raptor codes, interfaced to the two terminals of a 1 Gbps/100 Mbps FSO/RF hybrid system, the first implemented in the region. Experimental results are compared to the above simulation results.
Keywords: Atmospheric turbulence, haze, soft switching, Raptor codes, refractive index.
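For intuition on why soft switching lowers outage, here is a minimal sketch assuming independent link outages (which the paper's FSMC fading model refines): the hybrid is out only when both links are out simultaneously; the per-link probabilities are hypothetical.

```python
def hybrid_outage(p_fso, p_rf):
    """Outage of a soft-switched hybrid under independent link outages."""
    return p_fso * p_rf

# Hypothetical per-link outage probabilities in haze.
print(hybrid_outage(p_fso=0.05, p_rf=0.01))  # 5e-04, below either link alone
```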
389 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio
Authors: Urvee B. Trivedi, U. D. Dalal
Abstract:
As wireless communication services grow quickly, the strain on spectrum utilization has been rising gradually. An emerging technology, cognitive radio, has come out to solve today's spectrum scarcity problem. To support the spectrum-reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. A primary user follows two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: one considering edge detection, and one using the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in the real environment as it can tolerate some amount of sensing error.
Keywords: Cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal-to-noise ratio (SNR).
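A minimal sketch of the energy detector underlying the PD/PF metrics listed in the keywords: the test statistic is the average received energy, compared against a threshold; the synthetic signal, noise level, and threshold are hypothetical.

```python
import numpy as np

def energy_detect(samples, threshold):
    """Declare the primary user present if mean energy exceeds the threshold."""
    return np.mean(np.abs(samples) ** 2) > threshold

rng = np.random.default_rng(7)
n = 1024
tone = np.sqrt(0.5) * np.sin(2 * np.pi * 0.1 * np.arange(n))  # hypothetical PU

trials = 2000
pf = np.mean([energy_detect(rng.normal(0, 1, n), 1.1) for _ in range(trials)])
pd = np.mean([energy_detect(tone + rng.normal(0, 1, n), 1.1)
              for _ in range(trials)])
print(f"PF ~= {pf:.3f}, PD ~= {pd:.3f}")
```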
388 Mining Network Data for Intrusion Detection through Naïve Bayesian with Clustering
Authors: Dewan Md. Farid, Nouria Harbi, Suman Ahmmed, Md. Zahidur Rahman, Chowdhury Mofizur Rahman
Abstract:
Network security attacks are violations of the information security policy that have received much attention from the computational intelligence society in recent decades. Data mining has become a very useful technique for detecting network intrusions by extracting useful knowledge from large numbers of network data or logs. The naïve Bayesian classifier is one of the most popular data mining algorithms for classification, providing an optimal way to predict the class of an unknown example. It has been found that one set of probabilities derived from the data is not good enough to achieve a good classification rate. In this paper, we propose a new learning algorithm for mining network logs to detect network intrusions through a naïve Bayesian classifier: it first clusters the network logs into several groups based on the similarity of the logs, and then calculates the prior and conditional probabilities for each group of logs. To classify a new log, the algorithm checks which cluster the log belongs to and then uses that cluster's probability set to classify the new log. We tested the performance of our proposed algorithm on the KDD99 benchmark network intrusion detection dataset, and the experimental results proved that it improves detection rates as well as reducing false positives for different types of network intrusions.
Keywords: Clustering, detection rate, false positive, naïve Bayesian classifier, network intrusion detection.
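A minimal sketch following the idea described (one naïve Bayes probability set per cluster, with a new log routed to its cluster's model), not the authors' exact algorithm; the synthetic features, labels, and cluster count are hypothetical, and scikit-learn components stand in for the custom implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))             # hypothetical log features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical intrusion label

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: GaussianNB().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

def classify(log):
    """Route a new log to its cluster and use that cluster's probability set."""
    c = km.predict(log.reshape(1, -1))[0]
    return models[c].predict(log.reshape(1, -1))[0]

print(classify(rng.normal(size=5)))
```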
387 Democratic Political Culture of the 5th and 6th Graders under the Authority of Dusit District Office, Bangkok
Authors: Vilasinee Jintalikhitdee, Phusit Phukamchanoad, Sakapas Saengchai
Abstract:
This research aims to study the level of democratic political culture, and the factors that affect it, among 5th and 6th graders under the authority of Dusit District Office, Bangkok. Stratified sampling was used for probability sampling and purposive sampling for non-probability sampling; data were collected through the distribution of questionnaires to 300 respondents, covering all of the schools under the authority of Dusit District Office. The data were analyzed using descriptive statistics, including the arithmetic mean and standard deviation, and inferential statistics, namely the Independent Samples T-test and One-Way ANOVA (F-test). Data were also collected by interviewing the target groups and then analyzed by descriptive analysis. The results show that 5th and 6th graders under the authority of Dusit District Office, Bangkok are exposed to democratic political culture at a high level overall. Considering each item, the one with the highest mean is the statement "the constitutional democratic governmental system is suitable for Thailand"; the one with the lowest mean is the statement "corruption (cheating and fraud) is normal in Thai society". The factors that affect democratic political culture are grade level, mothers' occupations, and attention to news and political movements.
Keywords: Democratic, Political Culture.
386 Further Investigations on Higher Mathematics Scores for Chinese University Students
Authors: Xun Ge
Abstract:
Recently, X. Ge and J. Qian investigated some relations between higher mathematics scores and calculus scores (resp. linear algebra scores, probability statistics scores) for Chinese university students. Based on rough-set theory, they established an information system S = (U, C ∪ D, V, f). In this information system, the higher mathematics score was taken as a decision attribute, and the calculus score, linear algebra score and probability statistics score were taken as condition attributes. They investigated the importance of each condition attribute with respect to the decision attribute and the strength of each condition attribute supporting the decision attribute. In this paper, we give further investigations of this issue. Based on the above information system S = (U, C ∪ D, V, f), we analyze the decision rules between condition and decision granules. For each x ∈ U, we obtain the support (resp. strength, certainty factor, coverage factor) of the decision rule C →ₓ D, where C →ₓ D is the decision rule induced by x in S = (U, C ∪ D, V, f). The results of this paper give a new analysis of higher mathematics scores for Chinese university students, which can further lead Chinese university students to raise their higher mathematics scores in the Chinese graduate student entrance examination.
Keywords: Rough set, support, strength, certainty factor, coverage factor.
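A minimal sketch (made-up decision table; Pawlak's standard definitions) of the four rule measures named: for a rule C →ₓ D, support counts the objects matching both granules, strength normalizes by |U|, certainty conditions on the condition granule, and coverage conditions on the decision granule.

```python
def rule_measures(table, cond, dec, x):
    """Pawlak-style measures of the decision rule induced by object x."""
    c_x = [r for r in table if all(r[a] == table[x][a] for a in cond)]
    d_x = [r for r in table if all(r[a] == table[x][a] for a in dec)]
    both = [r for r in c_x if all(r[a] == table[x][a] for a in dec)]
    support = len(both)
    return {"support": support,
            "strength": support / len(table),
            "certainty": support / len(c_x),
            "coverage": support / len(d_x)}

# Hypothetical discretized scores.
table = [{"calc": "A", "la": "B", "ps": "A", "hm": "A"},
         {"calc": "A", "la": "B", "ps": "A", "hm": "B"},
         {"calc": "B", "la": "B", "ps": "B", "hm": "B"}]
print(rule_measures(table, ["calc", "la", "ps"], ["hm"], x=0))
```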
385 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System That Includes Servers with Various Capacities
Authors: Yoshiaki Shikata, Nobutane Hanayama
Abstract:
We present a prioritized, limited multi-server processor-sharing (PS) system in which each server has a different capacity and N (≥ 2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests being processed is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems, which take into account the capacity of each server, are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (shortening) of the remaining sojourn time of each request receiving service can be calculated using the number of requests of each class and the priority ratio. Utilising a simulation program which executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is clarified.
Keywords: Processor sharing, multi-server, various capacity, N priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation.
384 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of values of the safety factor and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (point estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (the FOSM method, Monte Carlo and Rosenblueth) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and carry out the consequent risk analysis, which is used to calculate the risk and analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of risk makes it possible to prioritize the implementation of mitigation measures. It is therefore recommended to make a good assessment of the geological-geotechnical model, incorporating the uncertainty in viability, design, construction, operation and closure by means of risk management.
Keywords: Probabilistic methods, risk assessment, risk management, slope stability.
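A minimal sketch of the Monte Carlo step (a hypothetical simplified safety-factor model and parameter distributions, not the paper's slope): sample the random variables, compute the safety factor FS for each draw, and estimate the probability of failure as the fraction of draws with FS < 1.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical random variables: cohesion (kPa) and friction angle (deg).
cohesion = rng.normal(20.0, 4.0, n)
phi = np.radians(rng.normal(30.0, 3.0, n))

# Hypothetical simplified safety-factor model with a constant driving stress.
driving = 40.0  # kPa, assumed
fs = (cohesion + 40.0 * np.tan(phi)) / driving

p_failure = np.mean(fs < 1.0)
print(f"probability of failure: {p_failure:.4f}")
```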
383 Performance Evaluation of a Limited Round-Robin System
Authors: Yoshiaki Shikata
Abstract:
The performance of a limited round-robin (RR) rule is studied in order to clarify the characteristics of a realistic sharing model of a processor. Under the limited RR rule, the processor allocates to each request a fixed amount of time, called a quantum, in a fixed order, and the number of requests being allocated these quanta is kept below a fixed value. Arriving requests that cannot be allocated quanta because of this restriction are queued or rejected. Practical performance measures, such as the relationships between the mean sojourn time, the mean number of requests, or the loss probability and the quantum size, are evaluated via simulation. In the evaluation, the requested service time of an arriving request is converted into a number of quanta. One of these quanta is included in each RR cycle, i.e. a series of quanta allocated to the requests in a fixed order. The service time of the arriving request can be evaluated using the number of RR cycles required to complete the service, the number of requests receiving service, and the quantum size. The increase or decrease in the number of quanta necessary before service completion is then re-evaluated at the arrival or departure of other requests. Tracking these events and calculations enables us to analyze the performance of the limited RR rule. In particular, we obtain the most suitable quantum size, which minimizes the mean sojourn time, for the case in which the switching time for each quantum is considered.
Keywords: Limited RR rule, quantum, processor sharing, sojourn time, performance measures, simulation, loss probability.
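A minimal sketch of the conversion step described: a request's service time becomes a quantum count, and its completion time under RR grows with both the cycle length and the per-quantum switching overhead. It simplifies by assuming a constant number of active requests (the paper re-evaluates this at every arrival and departure); all numeric values are hypothetical.

```python
import math

def quanta_needed(service_time, quantum):
    """Convert a requested service time into a number of quanta."""
    return math.ceil(service_time / quantum)

def completion_time(service_time, quantum, n_active, switch_time):
    """Finish time if n_active requests share the processor round-robin."""
    cycles = quanta_needed(service_time, quantum)
    return cycles * n_active * (quantum + switch_time)

# Hypothetical: 10 ms of work, 1 ms quanta, 4 active requests, 0.1 ms switch.
print(completion_time(10.0, 1.0, 4, 0.1))  # 44.0 ms
```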
382 Spectral Amplitude Coding Optical CDMA: Performance Analysis of PIIN Reduction Using VC Code Family
Authors: Hassan Yousif Ahmed, Ibrahima Faye, N.M.Saad, S.A. Aljined
Abstract:
Multi-user interference (MUI) is the main cause of system deterioration in Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) systems. MUI increases with the number of simultaneous users, resulting in a higher bit-error probability, and limits the maximum number of simultaneous users. On the other hand, phase-induced intensity noise (PIIN), which originates from the spontaneous emission of the broadband source under MUI and severely limits system performance, should be addressed as well. Since MUI is caused by the interference of simultaneous users, reducing the MUI value to be as small as possible is desirable. In this paper, an extensive study of the system performance, specified by MUI and PIIN reduction, is presented. Vectors Combinatorial (VC) code families are adopted as signature sequences for the performance analysis, and a comparison with reported codes is performed. The results show that, when the received power increases, the PIIN noise for all the codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio over the bit error probability. A comparison study between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
Keywords: FBG, MUI, PIIN, SAC-OCDMA, VCC.
381 Error Rate Probability for Coded MQAM with MRC Diversity in the Presence of Cochannel Interferers over Nakagami-Fading Channels
Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal
Abstract:
Exact expressions for the bit-error probability (BEP) of coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM), using an array of antennas with maximal ratio combining (MRC) in a flat-fading, interference-limited system in a Nakagami-m fading environment, are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM by plotting error probabilities versus average signal-to-interference ratio (SIR) for various values of the order of diversity N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also presented in tabular form in order to examine the performance of the digital communication system in the presence of interferers as the order of diversity increases. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems with space diversity in wireless fading channels.
Keywords: Cochannel interference, maximal ratio combining, Nakagami-m fading, wireless digital communications.
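For orientation, a minimal sketch of how BEP is typically evaluated against per-bit SNR, using the standard Gray-coded square-MQAM approximation in AWGN; this is a textbook baseline, not the paper's exact Nakagami-with-interferers expressions.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_bep_awgn(m, ebn0_db):
    """Approximate BEP of square M-QAM with Gray coding in AWGN."""
    k = math.log2(m)
    ebn0 = 10 ** (ebn0_db / 10.0)
    arg = math.sqrt(3.0 * k * ebn0 / (m - 1.0))
    return (4.0 / k) * (1.0 - 1.0 / math.sqrt(m)) * q_function(arg)

print(f"16-QAM at 12 dB: {mqam_bep_awgn(16, 12.0):.2e}")
```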
380 A Mobile Multihop Relay Dynamic TDD Scheme for Cellular Networks
Authors: Jong-Moon Chung, Hyung-Weon Cho, Ki-Yong Jin, Min-Hee Cho
Abstract:
In this paper, we present an analytical framework for the evaluation of the uplink performance of multihop cellular networks based on dynamic time division duplex (TDD). New wireless broadband protocols, such as WiMAX, WiBro, and 3G-LTE, apply TDD, and mobile communication protocols under standardization (e.g., IEEE 802.16j) are investigating mobile multihop relay (MMR) as a future technology. In this paper a novel MMR TDD scheme is presented, where the dynamic range of the frame is shared between traffic resources of an asymmetric nature and multihop relaying. The mobile communication channel interference model comprises inner and co-channel interference (CCI). The performance analysis focuses on the uplink because, due to CCI, the effects of dynamic resource allocation show significant performance degradation only in the uplink compared to time division multiple access (TDMA) schemes [1-3], whereas the downlink turns out to be the same or better. The analysis is based on the signal-to-interference power ratio (SIR) outage probability of dynamic TDD (D-TDD) and TDMA systems, which are the most widespread mobile communication multi-user control techniques. This paper presents the uplink SIR outage probability with multihop results and shows that a dynamic TDD scheme applying MMR can provide a performance improvement compared to single-hop applications if executed properly.
Keywords: Co-channel interference, dynamic TDD, mobile multihop relay, cellular network, time division multiple access.
379 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS
Authors: E. Ramaraj, A. Padmapriya
Abstract:
In large Internet backbones, service providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing the traffic distribution across the network and avoiding congestion hot spots. Raj P H and S V K Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified based on the CPDs. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning "progress") Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the new approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network approach. We first illustrate our approach with a simple network, then present a formal analysis of the PNP approach. For any given network of the Bayesian approach, the PNP approach identifies exactly the same result with minimum effort. We further extend the result to a more general one: to any network topology, even when the network is loopy. A theoretical insight of our result is that the optimal routing is always shortest-path routing with respect to some consideration of hot spots in the network.
Keywords: Conditional Probability Distribution, congestion hot spots, operational networks, traffic engineering.
378 Workstation Design Based On Ergonomics in Animal Feed Packing Process
Authors: Pirutchada Musigapong, Wantanee Phanprasit
Abstract:
The intention of this study is to design a probability-optimized sewing-sack workstation based on ergonomics, to improve productivity and decrease musculoskeletal disorders. The physical dimensions of two workers were used to design the new workstation. The physical dimensions are (1) sitting height, (2) mid-shoulder height sitting, (3) shoulder breadth, (4) knee height, (5) popliteal height, (6) hip breadth and (7) buttock-knee length. The 5th percentile of buttock-knee length sitting (51 cm), the 50th percentile of mid-shoulder height sitting (62 cm), and the 95th percentiles of popliteal height (43 cm) and hip breadth (45 cm) were applied to design the workstation for the sewing-sack operator, and the others were used to adjust the components of this workstation. The RULA risk assessment scores before and after using the probability-optimized workstation were 7 and 7, while the REBA scores were 11 and 5, respectively. A body discomfort/abnormality index was used to assess the muscle fatigue of operators before the workstation adjustment; it found fatigue in the neck muscles, the arm muscle area, the muscles of the back and the lower back muscles. Therefore, extension and flexion exercises were applied to relieve musculoskeletal stress. The workers exercised for 15 minutes before the beginning and at the end of work for 5 days. After that, the flexion and extension capabilities of the workers' muscles increased in 3 muscle groups (arm, leg, and back muscles).
Keywords: Animal feed, anthropometry, ergonomics, sewing sack, workstation design.
377 A Study on Mechanical Properties of Fiberboard Made of Durian Rind through Latex with Phenolic Resin as Binding Agent
Authors: W. Wiyaratn, A. Watanapa
Abstract:
This study was aimed at assessing the feasibility of producing fiberboard made of durian rind through latex with phenolic resin as the binding agent. The durian rind underwent a boiling process with NaOH [7], [8], and the fiber from the durian rind was then formed into fiberboard through heat pressing. This means that durian rind could be used as a replacement for plywood in the plywood industry, by using durian fiber as a composite material with an adhesive substance. At first, the durian rind was split, exposed to light, boiled and steamed in order to obtain durian fiber. Then, fiberboard was tested at densities of 600 kg/m³ and 800 kg/m³ in order to find a suitable ratio of durian fiber and latex. Afterwards, the mechanical properties were tested according to the standards of ASTM and JIS A 5905-1994. Once the suitable ratio was known, the test results were compared with medium-density fiberboard (MDF) and other related research studies. According to the results, for fiberboard made of durian rind through latex with phenolic resin at a density of 800 kg/m³ and a ratio of 1:1, the moisture was measured to be 5.05%, with a specific gravity (ASTM D 2395-07a) of 0.81 and a density (JIS A 5905-1994) of 0.88 g/cm³, while the tensile strength, hardness (ASTM D2240), and flexibility or elongation at break yielded values similar to those of medium-density fiberboard (MDF).
Keywords: Durian rind, latex, phenolic resin, medium density fiberboard.
376 Advanced Numerical and Analytical Methods for Assessing Concrete Sewers and Their Remaining Service Life
Authors: Amir Alani, Mojtaba Mahmoodian, Anna Romanova, Asaad Faramarzi
Abstract:
Pipelines are extensively used engineering structures that convey fluid from one place to another. Most of the time, pipelines are placed underground and are loaded by soil weight and traffic loads. Corrosion of the pipe material is the most common form of pipeline deterioration and should be considered in both the strength and serviceability analysis of pipes. This research focuses on concrete pipes in sewage systems (concrete sewers). It first investigates how to include the effect of corrosion, as a time-dependent deterioration process, in the structural and failure analysis of this type of pipe. Then, three probabilistic time-dependent reliability analysis methods, namely first passage probability theory, the gamma distributed degradation model and the Monte Carlo simulation technique, are discussed and developed. Sensitivity analysis indexes, which can be used to identify the parameters that most affect pipe failure, are also discussed. The reliability analysis methods developed in this paper serve as rational tools for decision makers with regard to the strengthening and rehabilitation of existing pipelines. The results can be used to obtain a cost-effective strategy for the management of the sewer system.
Keywords: Reliability analysis, service life prediction, Monte Carlo simulation method, first passage probability theory, gamma distributed degradation model.
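A minimal sketch of the gamma distributed degradation idea (hypothetical corrosion-depth parameters, not the paper's calibrated model): cumulative corrosion at time t follows a gamma distribution whose shape parameter grows with t, and the failure probability is the chance that corrosion exceeds the allowable depth.

```python
from scipy.stats import gamma

def failure_probability(t_years, rate=1.2, scale=0.5, limit_mm=25.0):
    """P(corrosion depth at time t exceeds the allowable limit).

    Depth ~ Gamma(shape=rate * t, scale): a standard gamma-process marginal.
    All parameter values here are hypothetical.
    """
    return gamma.sf(limit_mm, a=rate * t_years, scale=scale)

for t in (10, 30, 50):
    print(f"t = {t:2d} yr: Pf = {failure_probability(t):.4f}")
```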
375 Managing Iterations in Product Design and Development
Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De
Abstract:
The inherent iterative nature of product design and development poses a significant challenge to reducing the product design and development (PD) time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable times to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater amounts of ambiguity in design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand the iteration probability is an open research area where a significant contribution can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.
Keywords: Decision Points, Iteration, Product Design, Rework.
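A minimal sketch (hypothetical rework probabilities; a standard geometric-expectation argument, not the proposal's metrics) of why decision points drive iteration count: if a decision point sends work back with probability p, the expected number of passes through it is 1/(1 − p).

```python
def expected_passes(p_rework):
    """Expected passes through a decision point with rework probability p."""
    return 1.0 / (1.0 - p_rework)

# Hypothetical serial flow with three decision points.
flow = {"concept review": 0.3, "design review": 0.2, "test gate": 0.4}
for gate, p in flow.items():
    print(f"{gate}: {expected_passes(p):.2f} expected passes")
```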