Search results for: random common fixed point theorem

12593 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3-D) workspace. However, this approach is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first, the point clouds of all the bodies of the system and their interactions are used. The second uses the clearance function of simulation software, in which the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered in this study, and its 4-D configuration space is obtained in both ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and this disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, the 4-D path planning problem was decomposed into 2-D + 2-D and a sample path was planned using the A* algorithm. The accuracy of the configuration space was then confirmed using the obtained paths on the simulation model of the double-turret system.
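
The first approach described here — building the configuration space from point clouds of the bodies and a minimum-distance check — can be sketched as follows. This is a minimal illustration over a 2-D slice (two rotation angles) with hypothetical point-cloud geometry and clearance, not the authors' double-turret model.

```python
import numpy as np

def rotate2d(points, angle):
    """Rotate an (N, 2) point cloud about the origin by 'angle' radians."""
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T

def min_distance(cloud_a, cloud_b):
    """Smallest pairwise distance between two point clouds (brute force)."""
    diff = cloud_a[:, None, :] - cloud_b[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min()

# Hypothetical geometry: two barrel-like bodies sampled as point clouds.
body_1 = np.stack([np.linspace(0.0, 1.0, 50), np.zeros(50)], axis=1)
body_2 = np.stack([np.linspace(0.0, 1.0, 50), np.zeros(50)], axis=1) + [1.5, 0.0]

clearance = 0.10                            # assumed minimum allowed clearance [m]
angles = np.linspace(-np.pi, np.pi, 91)     # discretised joint angles

# 2-D slice of the configuration space: True = collision (occupied), False = free.
c_space = np.zeros((angles.size, angles.size), dtype=bool)
for i, a1 in enumerate(angles):
    moved_1 = rotate2d(body_1, a1)
    for j, a2 in enumerate(angles):
        # rotate body 2 about its own pivot point
        moved_2 = rotate2d(body_2 - [1.5, 0.0], a2) + [1.5, 0.0]
        c_space[i, j] = min_distance(moved_1, moved_2) < clearance

print("occupied fraction of this C-space slice:", c_space.mean())
```

A grid planner such as A* can then search this occupancy grid, in the same spirit as the 2-D + 2-D decomposition used in the abstract.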

Keywords: A* algorithm, autonomous turrets, high-dimensional C-space, manifold C-space, point clouds

Procedia PDF Downloads 121
12592 Design and Implementation of Pseudorandom Number Generator Using Android Sensors

Authors: Mochamad Beta Auditama, Yusuf Kurniawan

Abstract:

A smartphone or tablet requires strong randomness to establish secure encrypted communication, encrypt files, etc. Therefore, random number generation is one of the main keys to providing secrecy. Android devices are equipped with hardware-based sensors, such as the accelerometer and gyroscope. Each of these sensors provides a stochastic process that has the potential to be used as an extra randomness source, in addition to the /dev/random and /dev/urandom pseudorandom number generators. Android sensors can provide randomness automatically. To obtain randomness from Android sensors, each sensor is used to construct an entropy source. After all entropy sources are constructed, their outputs are combined to provide more entropy. Then, a deterministic process is used to produce a sequence of random bits from the combined output. All of these processes are done in accordance with NIST SP 800-22 and the NIST SP 800-90 series. The tests are performed 1) in Android user space and 2) with the Android device placed motionless on a desk.
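
A highly simplified sketch of the pipeline described here — several sensor-derived entropy sources combined and then expanded by a deterministic process — is shown below. The sensor readings are hypothetical placeholders (on a device they would come from the Android sensor API), and the HMAC-based expansion is only a stand-in for a NIST SP 800-90A DRBG, not a compliant implementation.

```python
import hashlib
import hmac
import struct

def entropy_source(samples):
    """Condense raw sensor samples (floats) into a 32-byte entropy block."""
    raw = b"".join(struct.pack("<d", s) for s in samples)
    return hashlib.sha256(raw).digest()

def combine(blocks):
    """Combine the outputs of several entropy sources into one seed."""
    return hashlib.sha256(b"".join(blocks)).digest()

def expand(seed, n_bytes):
    """Deterministically expand a seed into pseudorandom bytes
    (HMAC-SHA256 in counter mode; a simplified stand-in for a DRBG)."""
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hmac.new(seed, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n_bytes]

# Hypothetical readings; on Android these would be accelerometer/gyroscope samples.
accelerometer = [0.012, -0.803, 9.771, 0.018, -0.797]
gyroscope = [0.0004, 0.0011, -0.0007, 0.0002, 0.0009]

seed = combine([entropy_source(accelerometer), entropy_source(gyroscope)])
random_bits = expand(seed, 64)
print(random_bits.hex())
```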

Keywords: Android hardware-based sensor, deterministic process, entropy source, random number generation/generators

Procedia PDF Downloads 341
12591 Estimation of Population Mean Using Characteristics of Poisson Distribution: An Application to Earthquake Data

Authors: Prayas Sharma

Abstract:

This paper proposes a generalized class of estimators, an exponential class of estimators based on the adaptation of Sharma and Singh (2015) and Solanki and Singh (2013), and a simple difference estimator for estimating the unknown population mean of a Poisson-distributed population under simple random sampling without replacement. The expressions for the mean square errors of the proposed classes of estimators are derived to the first order of approximation. It is shown that the adapted version of Solanki and Singh (2013), the exponential class of estimators, is always more efficient than the usual estimator and the ratio, product, exponential ratio, and exponential product type estimators, and is equally efficient to the simple difference estimator. Moreover, the adapted version of Sharma and Singh's (2015) estimator is always more efficient than all the estimators available in the literature. In addition, the theoretical findings are supported by an empirical study showing the superiority of the constructed estimators over others, with an application to earthquake data from Turkey.
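
As a rough illustration of the gains a difference-type estimator can deliver over the usual sample mean under simple random sampling without replacement, the Monte Carlo sketch below uses a Poisson-distributed study variable and a correlated auxiliary attribute. The estimator here is a generic difference estimator; it is not the exact estimator classes adapted from Sharma and Singh (2015) or Solanki and Singh (2013), and the population parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

N, n, reps = 5000, 200, 2000
y_pop = rng.poisson(4.0, size=N).astype(float)                   # Poisson study variable
x_pop = (y_pop + rng.poisson(2.0, size=N) > 5).astype(float)     # correlated auxiliary attribute

Y_bar, X_bar = y_pop.mean(), x_pop.mean()
beta = np.cov(y_pop, x_pop)[0, 1] / x_pop.var()                  # regression coefficient (assumed known)

mse_mean, mse_diff = 0.0, 0.0
for _ in range(reps):
    idx = rng.choice(N, size=n, replace=False)                    # SRSWOR
    y_bar, x_bar = y_pop[idx].mean(), x_pop[idx].mean()
    y_diff = y_bar + beta * (X_bar - x_bar)                       # simple difference estimator
    mse_mean += (y_bar - Y_bar) ** 2
    mse_diff += (y_diff - Y_bar) ** 2

print("empirical MSE, usual mean       :", mse_mean / reps)
print("empirical MSE, difference estim.:", mse_diff / reps)
```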

Keywords: auxiliary attribute, point bi-serial, mean square error, simple random sampling, Poisson distribution

Procedia PDF Downloads 121
12590 Biomechanical Analysis on Skin and Jejunum of Chemically Prepared Cat Cadavers Used in Surgery Training

Authors: Raphael C. Zero, Thiago A. S. S. Rocha, Marita V. Cardozo, Caio C. C. Santos, Alisson D. S. Fechis, Antonio C. Shimano, Fabrício S. Oliveira

Abstract:

Biomechanical analysis is an important factor in tissue studies. The objective of this study was to determine the feasibility of a new anatomical technique and quantify the changes in skin and jejunum resistance of cat cadavers throughout the process. Eight adult cat cadavers were used. For every kilogram of weight, 120 ml of fixative solution (95% 96 GL ethyl alcohol and 5% pure glycerin) was applied via the external common carotid artery. Next, the carcasses were placed in a container with 96 GL ethyl alcohol for 60 days. After fixing, all carcasses were preserved in a 30% sodium chloride solution for 60 days. Before fixation, control samples were collected from fresh cadavers, and after fixation, three skin and jejunum fragments from each cadaver were tested monthly for strength and displacement until complete rupture in a universal testing machine. All results were analyzed by F-test (P < 0.05). In the jejunum, the force required to rupture the fresh samples and the samples fixed in alcohol for 60 days was 31.27±19.14 N and 29.25±11.69 N, respectively. For the samples preserved in the sodium chloride solution for 30 and 60 days, the strength was 26.17±16.18 N and 30.57±13.77 N, respectively. In relation to the displacement required for rupture, the values for fresh specimens and those fixed in alcohol for 60 days were 2.79±0.73 mm and 2.80±1.13 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 2.53±1.03 mm and 2.83±1.27 mm, respectively. There was no statistical difference between the samples (P=0.68 with respect to strength, and P=0.75 with respect to displacement). In the skin, the force needed to rupture the fresh samples and the samples fixed for 60 days in alcohol was 223.86±131.5 N and 211.86±137.53 N, respectively. For the samples preserved in sodium chloride solution for 30 and 60 days, the force was 227.73±129.06 N and 224.78±143.83 N, respectively. In relation to the displacement required for rupture, the values for fresh specimens and those fixed in alcohol for 60 days were 3.67±1.03 mm and 4.11±0.87 mm, respectively. For the samples preserved for 30 and 60 days in sodium chloride solution, the displacement was 4.21±0.93 mm and 3.93±0.71 mm, respectively. There was no statistical difference between the samples (P=0.65 with respect to strength, and P=0.98 with respect to displacement). The resistance of the skin and intestines of the cat carcasses changed little when subjected to alcohol fixation and preservation in sodium chloride solution, each for 60 days, which is promising for use in surgery training. All experimental procedures were approved by the Municipal Legal Department (protocol 02.2014.000027-1). The project was funded by FAPESP (protocol 2015-08259-9).

Keywords: anatomy, conservation, fixation, small animal

Procedia PDF Downloads 267
12589 Quantification of Pollution Loads for the Rehabilitation of Pusu River

Authors: Abdullah Al-Mamun, Md. Nuruzzaman, Md. Noor Salleh, Muhammad Abu Eusuf, Ahmad Jalal Khan Chowdhury, Mohd. Zaki M. Amin, Norlida Mohd. Dom

Abstract:

Identification of pollution sources and determination of pollution loads from all areas are very important for the sustainable rehabilitation of any contaminated river. Pusu is a small river which flows through the main campus of the International Islamic University Malaysia (IIUM) at Gombak. The poor aesthetics of the river, which flows through the entrance of the campus, give a negative impression to local and international visitors. As such, this study is being conducted to find ways to rehabilitate the river in a sustainable manner. The point and non-point pollution sources of the river basin are identified. The upper part of the 12.6 km² river basin is covered with secondary forest; however, the lower-middle reaches of the basin are being cleared for residential development and are the source of a high sediment load. Flow and concentrations of the common pollutants important for a healthy river, such as Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), Suspended Solids (SS), Turbidity, pH, Ammoniacal Nitrogen (AN), Total Nitrogen (TN) and Total Phosphorus (TP), are determined. The annual pollution loading to the river was calculated based on primary and secondary data. Concentrations of SS were high during rainy days due to the contribution from non-point sources. There are 7 ponds along the river system within the campus, which are severely affected by the high sediment load from the land-clearing activities. On the other hand, concentrations of the other pollutants were high during non-rainy days. The main point pollution sources are the hostels, cafeterias and sewage treatment plants located on the campus. Therefore, both types of pollution sources need to be controlled in order to rehabilitate the river in a sustainable manner.

Keywords: river pollution, rehabilitation, point pollution source, non-point pollution sources, pollution loading

Procedia PDF Downloads 329
12588 Study on Robot Trajectory Planning by Robot End-Effector Using Dual Curvature Theory of the Ruled Surface

Authors: Y. S. Oh, P. Abhishesh, B. S. Ryuh

Abstract:

This paper presents a method of trajectory planning for the robot end-effector that accounts for a more accurate and smooth differential geometry of the ruled surface generated by the tool line fixed to the end-effector, based on the curvature theory of the ruled surface and the dual curvature theory, and focuses on the underlying relation that unites them in order to enhance the efficiency of trajectory planning. Robot motion can be represented by the motion properties of the ruled surface generated by the trajectory of the Tool Center Point (TCP). The linear and angular properties of the six degree-of-freedom motion of the end-effector are computed using explicit formulas and functions from curvature theory and dual curvature theory. This paper explains the complete dualization of the ruled surface and shows that the linear and angular motion obtained using the dual curvature theory is more accurate and less complex.
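
For orientation, a ruled surface swept by the tool line and two of the classical quantities used in curvature theory of ruled surfaces can be written as follows; these are standard textbook definitions added for context, not expressions reproduced from the paper itself.

```latex
% Ruled surface swept by the tool line fixed to the end-effector:
%   directrix p(t) = TCP trajectory, generator r(t) = unit tool-axis direction.
X(t, v) = p(t) + v\, r(t), \qquad \lVert r(t) \rVert = 1 .

% Two classical invariants of the ruled surface used in curvature theory:
\text{striction curve: } \; \sigma(t) = p(t) - \frac{p'(t)\cdot r'(t)}{\lVert r'(t)\rVert^{2}}\, r(t),
\qquad
\text{distribution parameter: } \; \lambda(t) = \frac{\det\bigl(p'(t),\, r(t),\, r'(t)\bigr)}{\lVert r'(t)\rVert^{2}} .
```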

Keywords: dual curvature theory, robot end effector, ruled surface, TCP (Tool Center Point)

Procedia PDF Downloads 336
12587 Examining the Critical Factors for Success and Failure of Common Ticketing Systems

Authors: Tam Viet Hoang

Abstract:

With a plethora of new mobility services and payment systems found in our cities and across modern public transportation systems, several cities globally have turned to common ticketing systems to help navigate this complexity. Helping to create time- and space-differentiated fare structures and tariff schemes, common ticketing systems can optimize transport utilization rates, achieve cost efficiencies, and provide key incentives to specific target groups. However, not all cities and transportation systems have enjoyed a smooth journey towards the adoption, roll-out, and servicing of common ticketing systems, with both the experiences of success and failure being attributed to a wide variety of critical factors. Using case study research as a methodology and cities as the main unit of analysis, this research seeks to address the fundamental question of "what are the critical factors for the success and failure of common ticketing systems?" Using rail/train systems as the entry point, the study will start by providing a background to the evolution of transport ticketing and justify the improvements in operational efficiency that can be achieved through common ticketing systems. Examining the socio-economic benefits of common ticketing, the research will also help to articulate the value derived for the different key identified stakeholder groups. By reviewing case studies of the implementation of common ticketing systems in different cities, the research will explore lessons learned, with the aim of eliciting the factors that ensure seamlessly connected, integrated e-ticketing platforms. In an increasingly digital age, where cities are now coming online, this paper seeks to unpack these critical factors, undertaking case study research drawing from the literature and lived experiences. To offer a better understanding of the enabling environment and the ideal mixture of ingredients that facilitate the successful roll-out of a common ticketing system, interviews will be conducted with transport operators from several selected cities to better appreciate the challenges and the strategies employed to overcome them. Meanwhile, as we begin to see the introduction of new mobile applications and user interfaces to facilitate ticketing and payment as part of the transport journey, we take stock of the numerous policy challenges ahead and their implications for city-wide and system-wide urban planning. It is hoped that this study will help to identify the critical factors for the success and failure of common ticketing systems for cities set to embark on their implementation, while serving to fine-tune processes in those cities where common ticketing systems are already in place. Outcomes from the study will help to facilitate an improved understanding of common pitfalls and essential milestones towards the roll-out of a common ticketing system for railway systems, especially for emerging countries where mass rapid transit systems are being considered or are under construction.

Keywords: common ticketing, public transport, urban strategies, Bangkok, Fukuoka, Sydney

Procedia PDF Downloads 54
12586 A Simple Technique for Centralisation of Distal Femoral Nail to Avoid Anterior Femoral Impingement and Perforation

Authors: P. Panwalkar, K. Veravalli, M. Tofighi, A. Mofidi

Abstract:

Introduction: Anterior femoral perforation or a distal anterior nail position is a known complication of femoral nailing, specifically in pertrochanteric fractures fixed with a cephalomedullary nail. This has been attributed to a wrong entry point for the femoral nail, a nail with a large radius of curvature, or a malreduced fracture. Left uncorrected, anterior perforation of the femur or abutment of the nail on the anterior femur will result in pain and risks a stress riser at the distal femur and periprosthetic fracture. Multiple techniques have been described to avert or correct this problem, ranging from using a different nail, changing the entry point, or using a poller screw to deflect the nail position, to using a shorter nail, using a curved guidewire, or changing to a nail with a larger radius of curvature. Methods: We present a technique which we have used to centralise the femoral nail either when the nail has been placed anteriorly or when the guide wire has been inserted too anteriorly prior to the insertion of the nail. The technique requires the femoral reduction spool from the nailing set, and it was used by eight trainees of different levels of experience under supervision. Results: The technique was easily reproducible, without a learning curve and without the need to open the fracture site or change the entry point, with three different femoral nailing sets in twenty-five cases. The process took less than 10 minutes, even when revising a malpositioned femoral nail. Conclusion: Our technique of using the femoral reduction spool is an easily reproducible and repeatable technique for avoiding non-centralised femoral nail insertion and distal anterior perforation by the femoral nail.

Keywords: femoral fracture, nailing, malposition, surgery

Procedia PDF Downloads 96
12585 The Effects of Wood Ash on Ignition Point of Wood

Authors: K. A. Ibe, J. I. Mbonu, G. K. Umukoro

Abstract:

The effects of wood ash on the ignition point of five common tropical woods in Nigeria were investigated. The ash and moisture contents of wood sawdust from Mahogany (Khaya ivorensis), Opepe (Sarcocephalus latifolius), Abura (Hallea ledermannii verdc), Rubber (Hevea brasiliensis) and Poroporo (Sorghum bicolour) were determined using a furnace (Vecstar furnaces, model ECF2, serial no. f3077) and an oven (Genlab laboratory oven, model MINO/040), respectively. The metal contents of the five wood sawdust ash samples were determined using a Perkin Elmer Optima 3000 DV atomic absorption spectrometer, while the ignition points were determined using the Vecstar model ECF2 furnace. Poroporo had the highest ash content, 2.263 g, while rubber had the least, 0.710 g. The results for the moisture content ranged from 2.971 g to 0.903 g. Magnesium had the highest concentration of all the metals in all the wood ash samples, with mahogany ash having the highest concentration, 9.196 ppm, and rubber ash the least, 2.196 ppm. The ignition point results showed that the wood ashes from mahogany and opepe increased the ignition points of the test wood samples when coated on them, while the ashes from poroporo, rubber and abura decreased the ignition points of the test wood samples when coated on them. However, opepe sawdust ash decreased the ignition point in one of the test wood samples, suggesting that the metal content of that test wood sample was higher than that of the opepe sawdust ash. Therefore, mahogany and opepe sawdust ashes could be used in the surface treatment of wood to enhance its fire resistance or retardancy. The caution to be exercised in this application is that the metal content of the test wood samples should be evaluated as well.

Keywords: ash, fire, ignition point, retardant, wood saw dust

Procedia PDF Downloads 359
12584 Pruning Residue Effects on Symbiotic N₂ Fixation and δ¹³C Isotopic Composition of Sesbania sesban and Cajanus cajan

Authors: I. T. Makhubedu, B. A. Letty, P. F. Scogings, P. L. Mafongoya

Abstract:

Despite their potential importance in recycling dinitrogen (N2) fixed in alley cropping systems, the effects of tree pruning residues on symbiotic N2 fixation are poorly studied. A 2 x 2 x 2 factorial experiment was conducted to evaluate the effects of pruning residue management and pruning date on symbiotic performance and

Keywords: alley cropping, management, N₂ fixed, natural abundance, recycling

Procedia PDF Downloads 179
12583 Optimizing Human Diet Problem Using Linear Programming Approach: A Case Study

Authors: P. Priyanka, S. Shruthi, N. Guruprasad

Abstract:

Health is a common theme in most cultures. In fact, all communities have their own concepts of health as part of their culture, yet health continues to be a neglected entity. Planning of the human diet should be done very carefully, selecting the food items or groups of food items as well as the composition involved. Low price and good taste of foods are regarded as two major factors for optimal human nutrition. Linear programming techniques have been extensively used for human diet formulation for quite a number of years. Through the process, we mainly apply the Simplex Method, a very useful tool based on elementary row operations from linear algebra, and also incorporate the other necessary rules of the Simplex Method to help solve the problem. This study is an attempt to develop a programming model for optimal planning and the best use of nutrient ingredients.
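
A minimal example of the linear-programming diet formulation described above is sketched below, using hypothetical food items, costs, and nutrient requirements. SciPy's linprog (which uses modern LP solvers) stands in here for a hand-coded simplex tableau.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: columns = food items, rows = nutrients (per unit of food).
foods = ["rice", "beans", "milk", "spinach"]
cost = np.array([0.30, 0.50, 0.60, 0.40])               # cost per unit of each food
nutrients = np.array([
    [130.0, 120.0, 60.0, 25.0],                          # calories
    [2.5, 8.0, 3.2, 2.9],                                # protein (g)
    [0.4, 2.5, 0.1, 2.7],                                # iron (mg)
])
minimum = np.array([2000.0, 50.0, 18.0])                  # daily minimum requirements

# linprog minimises c @ x subject to A_ub @ x <= b_ub, so the "at least the
# minimum" constraints are written with flipped signs.
res = linprog(c=cost, A_ub=-nutrients, b_ub=-minimum, bounds=[(0, None)] * len(foods))

for name, qty in zip(foods, res.x):
    print(f"{name:8s}: {qty:6.2f} units")
print("total cost:", round(res.fun, 2))
```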

Keywords: diet formulation, linear programming, nutrient ingredients, optimization, simplex method

Procedia PDF Downloads 529
12582 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor

Authors: Panupong Makvichian

Abstract:

The Global Navigation Satellite System (GNSS) is nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is now also being employed as an accurate atmospheric sensor. Meteorology is thus a practical application of GNSS that goes unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow water content information to be extracted at any location within the network area. All of the above is possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and the acquisition of further knowledge about the atmospheric water content.
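
The conversion from a PPP-estimated zenith tropospheric delay to precipitable water vapor can be illustrated as below. The formulas (Saastamoinen hydrostatic delay and the wet-delay-to-PWV factor) and the constants are typical textbook values inserted for illustration, not taken from this paper, and the station values are assumed.

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay [m] from surface pressure."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.28e-6 * height_m
    return 0.0022768 * pressure_hpa / f

def pwv_from_ztd(ztd_m, pressure_hpa, mean_temp_k, lat_deg, height_m):
    """Convert a zenith total delay [m] to precipitable water vapor [mm]."""
    zwd = ztd_m - zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m)  # wet delay
    # Dimensionless conversion factor Pi, using typical refractivity constants (SI, per Pa).
    k2_prime, k3 = 0.221, 3.739e3          # K/Pa, K^2/Pa
    rho_w, R_v = 1000.0, 461.5             # kg/m^3, J/(kg K)
    pi_factor = 1.0e6 / (rho_w * R_v * (k3 / mean_temp_k + k2_prime))
    return pi_factor * zwd * 1000.0        # metres of delay -> mm of water

# Example with assumed station values (ZTD from PPP, surface meteorology from sensors).
print(round(pwv_from_ztd(ztd_m=2.45, pressure_hpa=1005.0, mean_temp_k=275.0,
                         lat_deg=45.0, height_m=200.0), 1), "mm PWV")
```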

Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor

Procedia PDF Downloads 170
12581 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data

Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang

Abstract:

Background and Significance of the Study: Congenital Heart Defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no existing DNA methylation biomarkers are used for early detection of CHDs. The existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functionalities of selected genes with high sensitivity and specificity were then assessed in molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%. GO analyses for the top 600 genes showed that these putative differentially methylated genes were primarily associated with regulation of lipid metabolic process, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
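
The classification step described here (pre-selected methylation sites fed to a random forest, evaluated with ROC analysis) can be sketched with scikit-learn as follows; the data are synthetic stand-ins for the EPIC BeadChip methylation matrix, and the cohort sizes merely mirror those quoted in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-in: 48 infants (24 CHD, 24 controls) x 600 pre-selected CpG sites.
n_cases, n_controls, n_sites = 24, 24, 600
X = rng.normal(0.5, 0.1, size=(n_cases + n_controls, n_sites))   # beta-like methylation values
y = np.array([1] * n_cases + [0] * n_controls)
X[y == 1, :3] += 0.15   # pretend 3 sites are truly differentially methylated

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

print("test AUC:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 2))
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top-ranked candidate sites:", top)
```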

Keywords: biomarker, congenital heart defects, DNA methylation, random forest

Procedia PDF Downloads 132
12580 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. The table of critical values was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong points of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most of the previous works. At the moment, the expansion to the 2-dimensional case is done, allowing up to 5 parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
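
A rough numerical sketch of the kind of quantity the approach works with — the absolute difference in probabilities between two candidate normal distributions, expressed through their cumulative distribution functions — is given below. The critical values in the paper come from dedicated simulations, so the threshold used here is purely a placeholder, and the statistic is a simplified illustration rather than the paper's exact measure.

```python
import numpy as np
from scipy.stats import norm

def cdf_difference_statistic(mu1, sigma1, mu2, sigma2, grid_points=2001):
    """Maximum absolute difference between two normal CDFs over a wide grid."""
    lo = min(mu1 - 6 * sigma1, mu2 - 6 * sigma2)
    hi = max(mu1 + 6 * sigma1, mu2 + 6 * sigma2)
    x = np.linspace(lo, hi, grid_points)
    return np.max(np.abs(norm.cdf(x, mu1, sigma1) - norm.cdf(x, mu2, sigma2)))

# Compare a hypothesised N(0, 1) with the parameters estimated from a sample.
rng = np.random.default_rng(0)
sample = rng.normal(0.1, 1.1, size=500)
stat = cdf_difference_statistic(0.0, 1.0, sample.mean(), sample.std(ddof=1))

critical_value = 0.08   # placeholder only; the paper tabulates these from simulation
print("statistic:", round(stat, 4), "| reject joint H0:", stat > critical_value)
```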

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 146
12579 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads

Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan

Abstract:

Traffic management is one of the biggest issues on urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it decides the arrival rate of vehicles at intersections, which are the major points of congestion. The study aimed to leverage Machine Learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Tree (ET), and Extreme Gradient Boost (XGB), are employed for this purpose. The models' performances were validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for effective implementation of ITS.
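
A compact sketch of the model comparison described here is shown below, with synthetic data standing in for the field observations (categorized traffic volume and road width as predictors, stream speed as the target). Scikit-learn's HistGradientBoostingRegressor is used in place of XGBoost so the example needs no extra dependency; the data-generating relationship is invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import (RandomForestRegressor, ExtraTreesRegressor,
                              HistGradientBoostingRegressor)
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in: volume class (0-3) and road width (m) -> stream speed (km/h).
n = 2000
volume_class = rng.integers(0, 4, size=n)
road_width = rng.uniform(7.0, 21.0, size=n)
speed = 55 - 6 * volume_class + 0.8 * road_width + rng.normal(0, 3, size=n)

X = np.column_stack([volume_class, road_width])
X_tr, X_te, y_tr, y_te = train_test_split(X, speed, test_size=0.25, random_state=0)

models = {
    "DT": DecisionTreeRegressor(max_depth=6, random_state=0),
    "RF": RandomForestRegressor(n_estimators=300, random_state=0),
    "ET": ExtraTreesRegressor(n_estimators=300, random_state=0),
    "GB": HistGradientBoostingRegressor(random_state=0),   # stand-in for XGB
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R2 on test data:", round(r2_score(y_te, model.predict(X_te)), 3))
```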

Keywords: stream speed, urban roads, machine learning, traffic flow

Procedia PDF Downloads 30
12578 2D Point Clouds Features from Radar for Helicopter Classification

Authors: Danilo Habermann, Aleksander Medella, Carla Cremon, Yusef Caceres

Abstract:

This paper analyzes the ability of 2D point-cloud features to classify different models of helicopters using radar. The method does not need to estimate the blade length, the number of blades of the helicopters, or the period of their micro-Doppler signatures. It is also not necessary to generate spectrograms (or any other image based on the time and frequency domains). This work transforms a radar return signal into a 2D point cloud and extracts features from it. Three classifiers are used to distinguish 9 different helicopter models in order to analyze the performance of the features used in this work. The high accuracy obtained with each of the classifiers demonstrates that 2D point-cloud features are very useful for classifying helicopters from the radar signal.

Keywords: helicopter classification, point clouds features, radar, supervised classifiers

Procedia PDF Downloads 185
12577 A Multivariate 4/2 Stochastic Covariance Model: Properties and Applications to Portfolio Decisions

Authors: Yuyang Cheng, Marcos Escobar-Anel

Abstract:

This paper introduces a multivariate 4/2 stochastic covariance process generalizing the one-dimensional counterparts presented in Grasselli (2017). Our construction permits stochastic correlation not only among stocks but also among volatilities, also known as co-volatility movements, both driven by more convenient 4/2 stochastic structures. The parametrization is flexible enough to separate these types of correlation, permitting their individual study. Conditions for proper changes of measure and closed-form characteristic functions under risk-neutral and historical measures are provided, allowing for applications of the model to risk management and derivative pricing. We apply the model to an expected utility theory problem in incomplete markets. Our analysis leads to closed-form solutions for the optimal allocation and value function. Conditions are provided for well-defined solutions together with a verification theorem. Our numerical analysis highlights and separates the impact of key statistics on equity portfolio decisions, in particular, volatility, correlation, and co-volatility movements, with the latter being the least important in an incomplete market.
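
For orientation, the one-dimensional 4/2 stochastic volatility building block of Grasselli (2017), which this paper generalizes to a multivariate covariance process, can be written as follows; this is the standard formulation, added here for context rather than copied from the paper.

```latex
% One-dimensional 4/2 model: the instantaneous volatility mixes sqrt(V) and 1/sqrt(V)
% terms driven by a single CIR factor V_t.
\frac{dS_t}{S_t} = r\,dt
  + \Bigl(a\sqrt{V_t} + \tfrac{b}{\sqrt{V_t}}\Bigr)
    \bigl(\rho\,dW_t + \sqrt{1-\rho^{2}}\,dW_t^{\perp}\bigr),
\qquad
dV_t = \kappa(\theta - V_t)\,dt + \sigma\sqrt{V_t}\,dW_t .

% Special cases: a = 1, b = 0 recovers the Heston (1/2) model;
%                a = 0, b = 1 recovers the 3/2 model.
```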

Keywords: stochastic covariance process, 4/2 stochastic volatility model, stochastic co-volatility movements, characteristic function, expected utility theory, verification theorem

Procedia PDF Downloads 124
12576 Inference for Compound Truncated Poisson Lognormal Model with Application to Maximum Precipitation Data

Authors: M. Z. Raqab, Debasis Kundu, M. A. Meraou

Abstract:

In this paper, we have analyzed maximum precipitation data during a particular period of time obtained from different stations in the Global Historical Climatology Network of the USA. One important point to mention is that some stations are shut down on certain days for one reason or another. Hence, the maximum values are recorded by excluding those readings. It is assumed that the number of stations that operate follows a zero-truncated Poisson random variable, and the daily precipitation follows a lognormal random variable. We call this model a compound truncated Poisson lognormal model. The proposed model has three unknown parameters, and it can take a variety of shapes. The maximum likelihood estimators can be obtained quite conveniently using the Expectation-Maximization (EM) algorithm. Approximate maximum likelihood estimators are also derived. The associated confidence intervals can also be obtained from the observed Fisher information matrix. Simulations have been performed to check the performance of the EM algorithm, and it is observed that the EM algorithm works quite well in this case. When we analyze the precipitation data set using the proposed model, it is observed that the proposed model provides a better fit than some of the existing models.
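
The data-generating mechanism assumed by the model — a zero-truncated Poisson number of operating stations and a lognormal daily precipitation at each, with only the maximum recorded — can be simulated as below. Parameter values are arbitrary illustrations, and the sketch stops short of the EM estimation step.

```python
import numpy as np

rng = np.random.default_rng(42)

def zero_truncated_poisson(lam, size, rng):
    """Draw Poisson variates conditioned on being at least 1 (rejection sampling)."""
    out = np.empty(size, dtype=int)
    filled = 0
    while filled < size:
        draw = rng.poisson(lam, size=size)
        draw = draw[draw > 0]
        take = min(draw.size, size - filled)
        out[filled:filled + take] = draw[:take]
        filled += take
    return out

def simulate_daily_maxima(n_days, lam, mu, sigma, rng):
    """Maximum of N lognormal precipitation readings per day, N ~ zero-truncated Poisson."""
    n_stations = zero_truncated_poisson(lam, n_days, rng)
    return np.array([rng.lognormal(mu, sigma, size=k).max() for k in n_stations])

maxima = simulate_daily_maxima(n_days=5000, lam=3.0, mu=1.2, sigma=0.6, rng=rng)
print("sample mean / sd of daily maxima:", round(maxima.mean(), 2), round(maxima.std(), 2))
```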

Keywords: compound Poisson lognormal distribution, EM algorithm, maximum likelihood estimation, approximate maximum likelihood estimation, Fisher information, skew distribution

Procedia PDF Downloads 82
12575 Investigation of Mode II Fracture Toughness in Orthotropic Materials

Authors: Mahdi Fakoor, Nabi Mehri Khansari, Ahmadreza Farokhi

Abstract:

Evaluation of mode II fracture toughness (KIIC) in composite materials is a very hard problem to solve, since it can be affected by many dissipation mechanisms. Furthermore, non-linearity in the material behavior adds extra difficulty in obtaining accurate results. The different values of KIIC reported in various references support this assertion. In this research, some solutions are proposed based on the corrections that should be made to the common test fixtures. Because the common test fixtures are not able to activate toughening mechanisms correctly in pure mode II, we have employed some structural modifications on the common fixtures. In particular, the Iosipescu test is used as a starting point. The tests are applied to graphite/epoxy, PMMA, and Western White Pine wood. Mixed mode I/II fracture limit curves are also used to show that the scatter in the test results is closely related to the creation of the Fracture Process Zone (FPZ). In the present paper, the shear load is applied at the predicted shear zone, with some significant structural amendments that can activate mode II toughening mechanisms. Indeed, the employed empirical method leads to a significant improvement in repeatability and reproducibility as well. Moreover, a 3D Finite Element (FE) analysis is performed to verify the obtained results. Eventually, it is shown that remarkable precision can be obtained with the modified common test fixture in comparison with the previous one.

Keywords: FPZ, shear test fixture, mode II fracture toughness, composite material, FEM

Procedia PDF Downloads 337
12574 A Cost-Effective Evaluation of Single Server Multiple Variants and the Working Vacation Queueing Approach with a Waiting Server

Authors: R. Remya

Abstract:

We consider an M/M/1 multiple-variant vacation queueing system with working vacations and a waiting server. Three models are compared. In the first model, a working vacation is taken after the server has exhaustively served all the customers in the system and has waited a random amount of time; after completing a working vacation, the server again waits a random period of time before going on vacation, and continues in this way for a finite number of vacations, so that at the end of the J-th vacation the server either waits for customers or begins serving immediately. In the second model, a working vacation is taken after the server has exhaustively served all the customers in the system and has waited a random amount of time. In the third model, likewise, a working vacation is taken after the server has exhaustively served all the customers in the system and has waited a random amount of time. Service times and vacation lengths are assumed to be exponentially distributed. We provide a steady-state solution and a cost comparison for the stated models.

Keywords: vacation, working vacation, waiting server, steady state analysis, cost analysis

Procedia PDF Downloads 19
12573 Random Matrix Theory Analysis of Cross-Correlation in the Nigerian Stock Exchange

Authors: Chimezie P. Nnanwa, Thomas C. Urama, Patrick O. Ezepue

Abstract:

In this paper, we use Random Matrix Theory (RMT) to analyze the eigen-structure of the empirical correlations of 82 stocks which are consistently traded on the Nigerian Stock Exchange (NSE) over a 4-year study period, 3 August 2009 to 26 August 2013. We apply the Marchenko-Pastur distribution of eigenvalues of a purely random matrix to investigate the presence of investment-pertinent information contained in the empirical correlation matrix of the selected stocks. We use the hypothesised standard normal distribution of eigenvector components from RMT to assess deviations of the empirical eigenvectors from this distribution for different eigenvalues. We also use the Inverse Participation Ratio to measure the deviation of the eigenvectors of the empirical correlation matrix from RMT results. These preliminary results on the dynamics of asset price correlations in the NSE are important for improving the risk-return trade-offs associated with Markowitz's portfolio optimization in the stock exchange, which is pursued in future work.
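
The core computations in this kind of analysis — eigen-decomposition of the empirical correlation matrix, comparison with the Marchenko-Pastur bounds for a purely random matrix, and the inverse participation ratio of each eigenvector — can be sketched as follows, here on simulated returns rather than the NSE data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stocks, n_days = 82, 1000                        # N assets, T observations
returns = rng.normal(size=(n_days, n_stocks))      # stand-in for daily NSE returns

corr = np.corrcoef(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)

# Marchenko-Pastur bounds for a purely random correlation matrix (Q = T/N).
q = n_days / n_stocks
lam_minus, lam_plus = (1 - np.sqrt(1 / q)) ** 2, (1 + np.sqrt(1 / q)) ** 2
informative = eigvals[eigvals > lam_plus]
print(f"MP band: [{lam_minus:.3f}, {lam_plus:.3f}], eigenvalues above band: {informative.size}")

# Inverse participation ratio: roughly 1/N for delocalised (RMT-like) eigenvectors.
ipr = (eigvecs ** 4).sum(axis=0)
print("mean IPR:", round(ipr.mean(), 4), "(1/N =", round(1 / n_stocks, 4), ")")
```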

Keywords: correlation matrix, eigenvalue and eigenvector, inverse participation ratio, portfolio optimization, random matrix theory

Procedia PDF Downloads 311
12572 Effects of GRF on CMJ in Different Wooden Surface Systems

Authors: Yi-cheng Chen, Ming-jum Guo, Yang-ru Chen

Abstract:

Background and Objective: For safety and fairness during basketball competition, FIBA specifies required levels of physical performance for wooden surface systems (WSS). The systems installed in indoor stadiums vary, so the aim of this study was to determine how different WSSs affect the ground reaction force (GRF) when a player jumps. Materials and Methods: 12 participants performed counter-movement jumps (CMJ) on 7 different surfaces, including 6 WSSs built with 3 types of rubber shock-absorber pad (SAP) fixed in cross or parallel arrangements, and 1 rigid ground. GRFs at takeoff and landing were recorded with an AMTI force platform while all participants performed vertical CMJs in a counter-balanced design. All data were analyzed using one-way ANOVA to evaluate whether the test variable differed significantly between surfaces. The significance level was set at α=0.05. Results: There was no significant difference in GRF between surfaces at takeoff. For landing GRF, WSSs with cross-fixed SAP were harder than those with parallel-fixed SAP. Although the differences between cross- and parallel-fixed surfaces at landing were not significant, the test variable differed significantly between the parallel-fixed WSSs and the rigid ground. Landing on the WSS with the hardest SAP also differed significantly from the other WSSs. Conclusion: Although official basketball competitions are played on WSSs certified by FIBA, GRF still varies at takeoff and landing, so players must warm up before a game starts. Playing basketball on an uncertified WSS is especially unsafe.

Keywords: wooden surface system, counter-movement jump, ground reaction force, shock absorber pad

Procedia PDF Downloads 412
12571 Efficacy of Computer Mediated Power Point Presentations on Students' Learning Outcomes in Basic Science in Oyo State, Nigeria

Authors: Sunmaila Oyetunji Raimi, Olufemi Akinloye Bolaji, Abiodun Ezekiel Adesina

Abstract:

The lingering poor performance of students in basic science spells doom for the vibrant scientific and technological development that pivots the economic, social and physical upliftment of any nation. This calls for identifying appropriate strategies for imparting basic science knowledge and attitudes to the teeming youths in secondary schools. This study, therefore, determined the impact of computer-mediated PowerPoint presentations on students' achievement in basic science in Oyo State, Nigeria. A pre-test, post-test, control group quasi-experimental design was adopted for the study. Two hundred and five junior secondary two students, selected using a stratified random sampling technique, participated in the study. Three research questions and three hypotheses guided the study. Two evaluative instruments – the Students' Basic Science Attitudes Scale (SBSAS, r = 0.91) and the Students' Knowledge of Basic Science Test (SKBST, r = 0.82) – were used for data collection. Descriptive statistics of mean and standard deviation and inferential statistics of ANCOVA and the Scheffé post-hoc test were used to analyse the data. The results indicated a significant main effect of treatment on students' cognitive (F(1,200) = 171.680; p < 0.05) and attitudinal (F(1,200) = 34.466; p < 0.05) achievement in basic science, with the experimental group having a higher mean gain than the control group. Gender had a significant main effect (F(1,200) = 23.382; p < 0.05) on students' cognitive outcomes but not on attitudinal achievement in basic science. The study therefore recommended, among others, that computer-mediated PowerPoint presentations be incorporated into the curriculum methodology of basic science in secondary schools.

Keywords: basic science, computer mediated power point presentations, gender, students’ achievement

Procedia PDF Downloads 402
12570 Integration of UPQC Based on Fuzzy Controller for Power Quality Enhancement in Distributed Network

Authors: M. Habab, C. Benachaiba, B. Mazari, H. Madi, C. Benoudjafer

Abstract:

The use of Distributed Generation (DG) has been increasing in recent years to fill the gap between energy supply and demand. This paper presents a grid-connected wind energy system with a UPQC based on a fuzzy controller to compensate for voltage and current disturbances. The proposed system can improve power quality at the point of installation on power distribution systems. Simulation results show the capability of the DG-UPQC intelligent system to compensate voltage sags and current harmonics at the Point of Common Coupling (PCC).

Keywords: shunt active filter, series active filter, UPQC, power quality, sags voltage, distributed generation, wind turbine

Procedia PDF Downloads 385
12569 Polysaccharides as Pour Point Depressants

Authors: Ali M. EL-Soll

Abstract:

The physical properties of Sarir waxy crude oil were investigated. The pour point was determined using the ASTM D-97 procedure, and the paraffin content and carbon number distribution of the paraffin were determined using gas-liquid chromatography (GLC). Polymeric additives were prepared, and their structures were confirmed using an IR spectrophotometer. The molecular weight and molecular weight distribution of these additives were determined by gel permeation chromatography (GPC). The performance of the synthesized additives as pour-point depressants was then evaluated for the mentioned crude oil.

Keywords: sarir, waxy, crude, pour point, depressants

Procedia PDF Downloads 426
12568 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring

Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover

Abstract:

Measurement of the radioactive isotopes of atmospheric xenon is used to detect, locate and identify any confined nuclear test as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During its atmospheric transport, the radioactive xenon undergoes a significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70x70x40 mm³ crystals). The expected Minimum Detectable Concentration (MDC) for each radioxenon is of the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. We would like to present this system from its simulation to the laboratory tests.
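
A simplified sketch of the beta/gamma coincidence logic that the synchronized acquisition modules make possible is given below: events from the silicon pixels and from the NaI(Tl) detectors are matched when their timestamps fall within a short window. The event streams and the window width are invented for illustration and do not reflect the prototype's actual acquisition format.

```python
import numpy as np

def coincidences(beta_times, gamma_times, window_ns=500.0):
    """Return (beta_index, gamma_index) pairs whose timestamps differ by < window."""
    pairs = []
    j = 0
    for i, tb in enumerate(beta_times):
        # advance the gamma pointer while it lags too far behind this beta event
        while j < len(gamma_times) and gamma_times[j] < tb - window_ns:
            j += 1
        k = j
        while k < len(gamma_times) and gamma_times[k] <= tb + window_ns:
            pairs.append((i, k))
            k += 1
    return pairs

# Hypothetical, time-ordered event streams (nanoseconds), as delivered by PTP-synced modules.
rng = np.random.default_rng(3)
beta = np.sort(rng.uniform(0, 1e9, size=200))
gamma = np.sort(np.concatenate([beta[:50] + rng.normal(0, 100, 50),   # true coincidences
                                rng.uniform(0, 1e9, size=400)]))      # uncorrelated background
print("coincident pairs found:", len(coincidences(beta, gamma)))
```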

Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels

Procedia PDF Downloads 106
12567 Design for Flight Endurance and Mapping Area Enhancement of a Fixed Wing Unmanned Air Vehicle

Authors: P. Krachangthong, N. Limsumalee, L. Sawatdipon, A. Sasipongpreecha, S. Pisailert, J. Thongta, N. Hongkarnjanakul, C. Thipyopas

Abstract:

The design and development of a new UAV are detailed in this paper. The mission requirement is set up to enhance the flight endurance of a fixed-wing UAV, with the goal of achieving a flight endurance of more than 60 minutes. The UAV must be hand-launchable and able to carry a Sony A6000 camera. Sizing design and aerodynamic analysis are conducted. The XFLR5 program and wind tunnel tests are used to determine and compare the aerodynamic characteristics; lift, drag and pitching moment characteristics are evaluated. The Kreno-V UAV is then designed and shown to be more efficient than the Heron UAV, which is currently used for the mapping missions of the Geo-Informatics and Space Technology Development Agency (Public Organization), Thailand. The endurance is improved by 19%. Finally, the Kreno-V UAV, with a wingspan of 2 meters, an aspect ratio of 7, and a V-tail, is constructed and successfully tested.

Keywords: UAV design, fixed-wing UAV, wind tunnel test, long endurance

Procedia PDF Downloads 356
12566 Study on the Self-Location Estimate by the Evolutional Triangle Similarity Matching Using Artificial Bee Colony Algorithm

Authors: Yuji Kageyama, Shin Nagata, Tatsuya Takino, Izuru Nomura, Hiroyuki Kamata

Abstract:

In a previous study, a technique to estimate self-location using a lunar image was proposed. In this paper, we consider improving the conventional method with an FPGA implementation in mind. Specifically, we introduce the Artificial Bee Colony algorithm to reduce the search time. In addition, we use fixed-point arithmetic to enable high-speed operation on the FPGA.

Keywords: SLIM, Artificial Bee Colony Algorithm, location estimate, evolutional triangle similarity

Procedia PDF Downloads 487
12565 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs

Authors: H. M. Soroush

Abstract:

The problem of scheduling products and services for on-time delivery is of paramount importance in today's competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers should frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve the system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on the jobs' late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and we introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
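
One simple rule in the spirit of scenario (ii) — exponential processing times with known means, deterministic due dates, fixed weights — is sketched below: jobs are ordered by weighted shortest expected processing time, and the expected weighted number of tardy jobs is estimated by Monte Carlo. This is an illustrative heuristic and data set, not the specific heuristics or optimality results derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical job data: exponential processing times (given by their means),
# deterministic due dates, and tardiness weights (penalties).
mean_proc = np.array([3.0, 5.0, 2.0, 8.0, 4.0])
due_date = np.array([6.0, 9.0, 5.0, 20.0, 12.0])
weight = np.array([2.0, 1.0, 3.0, 1.0, 2.0])

def expected_weighted_tardy(order, n_sims=20000):
    """Monte Carlo estimate of the expected weighted number of tardy jobs."""
    total = 0.0
    for _ in range(n_sims):
        t = 0.0
        for j in order:
            t += rng.exponential(mean_proc[j])      # realised processing time
            if t > due_date[j]:
                total += weight[j]
    return total / n_sims

wsept = np.argsort(mean_proc / weight)              # weighted-shortest-expected-processing-time
edd = np.argsort(due_date)                          # earliest-due-date, for comparison
print("WSEPT order:", wsept, "objective:", round(expected_weighted_tardy(wsept), 3))
print("EDD   order:", edd, "objective:", round(expected_weighted_tardy(edd), 3))
```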

Keywords: number of late jobs, scheduling, single server, stochastic

Procedia PDF Downloads 460
12564 Closed Form Solution for 4-D Potential Integrals for Arbitrary Coplanar Polygonal Surfaces

Authors: Damir Latypov

Abstract:

A closed-form solution for the 4-D double surface integrals arising in the boundary integral equations of potential theory is obtained for arbitrary coplanar polygonal surfaces. The solution method is based on the construction of exact differential forms followed by the application of Stokes' theorem to each surface integral. As a result, the 4-D double surface integral is reduced to a 2-D double line integral. By an appropriate change of variables, the integrand is transformed into a separable function of the integration variables. The closed-form solutions to the corresponding 1-D integrals are readily available in integration tables. Previously, closed-form solutions were known only for the cases of coincident triangular surfaces and coplanar rectangles, and those were obtained by surface-specific ad hoc methods, while the present method is general. The method also works for non-polygonal surfaces; as an example, we compute in closed form the 4-D integral for the case of coincident surfaces in the shape of a circular disk. For an arbitrarily shaped surface, the proposed method provides an efficient quadrature rule. Extensions of the method to non-coplanar surfaces and to integral kernels other than 1/R are also discussed.

Keywords: boundary integral equations, differential forms, integration, stokes' theorem

Procedia PDF Downloads 280