Search results for: stochastic pi calculus
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 534

204 Cyclostationary Gaussian Linearization for Analyzing Nonlinear System Response Under Sinusoidal Signal and White Noise Excitation

Authors: R. J. Chang

Abstract:

A cyclostationary Gaussian linearization method is formulated for investigating the time-averaged response of a nonlinear system under sinusoidal signal and white noise excitation. The quantitative measures of cyclostationary mean, variance, spectrum of mean amplitude, and mean power spectral density of noise are analyzed. The qualitative response behaviors of stochastic jump and bifurcation are investigated. The validity of the present approach in predicting the quantitative and qualitative statistical responses is supported by Monte Carlo simulations. The present analysis imposes no restrictive analytical conditions and can be carried out directly by solving nonlinear algebraic equations. The analytical solution gives reliable quantitative and qualitative predictions of the mean and noise response for the Duffing system subjected to both sinusoidal signal and white noise excitation.

Keywords: cyclostationary, duffing system, Gaussian linearization, sinusoidal, white noise

Procedia PDF Downloads 465
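The Monte Carlo check described above can be sketched as a direct Euler-Maruyama simulation of a Duffing oscillator under combined sinusoidal and white-noise forcing. All parameter values below (damping, cubic stiffness, forcing amplitude, noise intensity) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate_duffing(n_paths=200, t_end=50.0, dt=1e-3, seed=0):
    """Euler-Maruyama ensemble simulation of a stochastically forced Duffing
    oscillator: x'' + 2*zeta*x' + x + eps*x**3 = A*cos(w*t) + white noise."""
    rng = np.random.default_rng(seed)
    zeta, eps, A, w, D = 0.1, 0.3, 0.5, 1.0, 0.05  # assumed damping, cubic term, forcing, noise intensity
    n_steps = int(t_end / dt)
    x = np.zeros(n_paths)
    v = np.zeros(n_paths)
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)       # Brownian increments
        a = -2 * zeta * v - x - eps * x**3 + A * np.cos(w * t)
        x = x + v * dt
        v = v + a * dt + np.sqrt(2 * D) * dW
    return x

final_states = simulate_duffing()
```

The ensemble of end states can then be binned to inspect the mean amplitude and the stochastic jump behavior the abstract refers to.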
203 Comparison of Safety and Efficacy between Thulium Fibre Laser and Holmium YAG Laser for Retrograde Intrarenal Surgery

Authors: Sujeet Poudyal

Abstract:

Introduction: After the holmium:yttrium-aluminum-garnet (Ho:YAG) laser revolutionized the management of urolithiasis, the introduction of the thulium fibre laser (TFL) has challenged the Ho:YAG laser owing to its multiple commendable properties. Nevertheless, there are only a few studies comparing TFL and holmium laser in retrograde intrarenal surgery (RIRS). Therefore, this study was carried out to compare the efficacy and safety of TFL and holmium laser in RIRS. Methods: This prospective comparative study, which included all patients undergoing laser lithotripsy (RIRS) for proximal ureteric calculus and nephrolithiasis from March 2022 to March 2023, consisted of 63 patients in the Ho:YAG laser group and 65 patients in the TFL group. Stone-free rate, operative time, laser utilization time, energy used, and complications were analysed between the two groups. Results: Mean stone size was comparable in the TFL (14.23±4.1 mm) and Ho:YAG (13.88±3.28 mm) groups, p=0.48. Similarly, mean stone density in TFL (1269±262 HU) was comparable to Ho:YAG (1189±212 HU), p=0.48. There was a significant difference in lasing time between TFL (12.69±7.41 mins) and Ho:YAG (20.44±14 mins), p=0.012. The TFL group had an operative time of 43.47±16.8 mins, which was shorter than that of the Ho:YAG group (58±26.3 mins), p=0.005. Both TFL and Ho:YAG groups had comparable total energy used (11.4±6.2 vs 12±8, respectively, p=0.758). The stone-free rate was 87% for TFL, whereas it was 79.5% for Ho:YAG, p=0.25. Two cases of sepsis and one ureteric stricture were encountered in the TFL group, whereas three cases of sepsis and one ureteric stricture occurred in the Ho:YAG group, p=0.62. Conclusion: The thulium fibre laser has efficacy similar to that of the Ho:YAG laser in terms of safety and stone-free rate. However, owing to its better stone ablation rate, TFL may become a game changer in the management of urolithiasis in the coming days.

Keywords: retrograde intrarenal surgery, thulium fibre laser, holmium:yttrium-aluminum-garnet (ho:yag) laser, nephrolithiasis

Procedia PDF Downloads 49
202 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition

Authors: Ali Nadi, Ali Edrissi

Abstract:

Relief demand and the availability of transportation links are essential information for every natural disaster relief operation, yet this information is not at hand once a disaster strikes. In related works, relief demand and network condition have been evaluated with prediction methods. Nevertheless, predictions tend to over- or underestimate due to uncertainties and may lead to a failed operation. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of the emergency response, the proposed model uses reinforcement learning to optimize the total relief assessment time. The proposed model is tested on a real-sized network problem. The simulation results indicate that the proposed model performs well in collecting real-time information.

Keywords: disaster management, real-time demand, reinforcement learning, relief demand

Procedia PDF Downloads 279
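The abstract does not spell out its learning scheme, so the following is only a generic tabular Q-learning sketch on a hypothetical line network, where each traversed link costs one time unit and the learner discovers the minimum-time route to the last node:

```python
import numpy as np

def q_learning_shortest_time(n_states=5, episodes=500, alpha=0.5, gamma=0.95,
                             eps=0.1, seed=0):
    """Tabular Q-learning on a toy line network: the agent moves left/right,
    each move costs one time unit, and the episode ends at the last node."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
            r = -1.0  # one time unit per move
            not_terminal = float(s2 != n_states - 1)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() * not_terminal - Q[s, a])
            s = s2
    return Q

Q = q_learning_shortest_time()
policy = Q.argmax(axis=1)  # greedy policy: always move toward the goal
```

The learned greedy policy moves right in every non-terminal state, minimizing the total traversal (assessment) time on this toy network.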
201 The Performance of Saudi Banking Industry 2000-2011: Have the Banks Distinguished Themselves from One Another?

Authors: Bukhari M. S. Sillah, Imran Khokhar, Muhammad Nauman Khan

Abstract:

This paper studies the technical efficiency of the Saudi banking sector using a stochastic frontier model. A sample of 12 banks over the period 2000-2011 is selected to investigate their technical efficiencies in mobilizing deposits, producing investment, and generating income. The banks are categorized as Saudi-owned banks, Saudi-foreign-owned banks, and Islamic banks. The findings show some consistent patterns across these bank types, and there exist significant disparities among the banks in terms of technical efficiency. Banque Saudi Fransi, a Saudi-foreign-owned bank, stands out as a benchmark bank for the industry. The Saudi-owned banks showed fluctuating performance during the period, and the Islamic banks are not significantly different from the Saudi-owned banks.

Keywords: technical efficiency, production frontier model, Islamic banking

Procedia PDF Downloads 456
200 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation

Authors: Sameer Jung Karki, Gokhan Saygili

Abstract:

The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters in which the nonuniformity and inhomogeneity of soil and site properties are not accounted for, so probability calculus and statistical analysis cannot be directly applied to foundation engineering. Instead, the factor of safety, typically as high as 3.0, is assumed to incorporate the uncertainty of the input parameters. This factor of safety is estimated based on subjective judgement rather than objective facts; it is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil is carried out using the Monte Carlo simulation method, and the simulated model is compared with the traditional discrete model. The bearing capacity of the soil was found to be higher for the simulated model than for the discrete model, which was verified by a sensitivity analysis: as the number of simulations increased, there was a significant percentage increase in bearing capacity relative to the discrete value. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field than the higher probability (0.15866) obtained using the simulation-derived factor of safety of 1.5. This means the traditional factor of safety yields a bearing capacity that is less likely to be available in the field. This shows the subjective nature of the factor of safety, and hence the probabilistic method is suggested to address the variability of the input parameters in bearing capacity equations.

Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation

Procedia PDF Downloads 159
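A minimal version of such a Monte Carlo analysis, assuming a Terzaghi-type capacity for a footing on clay (φ = 0) with normally distributed undrained cohesion, could look like this (all soil and geometry values are hypothetical):

```python
import numpy as np

def bearing_capacity_mc(n=100_000, seed=0):
    """Monte Carlo on the bearing capacity of a footing on clay (phi = 0):
    q_u = c * N_c + gamma * D_f, with N_c = 5.14 and surcharge term N_q = 1."""
    rng = np.random.default_rng(seed)
    c = rng.normal(50.0, 10.0, n)        # undrained cohesion, kPa (assumed mean/sd)
    gamma, Df, Nc = 18.0, 1.5, 5.14      # unit weight kN/m3, embedment depth m
    qu = c * Nc + gamma * Df             # ultimate bearing capacity, kPa
    return qu

qu = bearing_capacity_mc()
q_allow = qu.mean() / 3.0                # allowable capacity with FS = 3
p_below = (qu < q_allow).mean()          # chance field capacity falls below q_allow
```

The empirical distribution of q_u can then be compared against the allowable capacity implied by any chosen factor of safety, mirroring the probability comparison in the abstract.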
199 Quantifying Freeway Capacity Reductions by Rainfall Intensities Based on Stochastic Nature of Flow Breakdown

Authors: Hoyoung Lee, Dong-Kyu Kim, Seung-Young Kho, R. Eddie Wilson

Abstract:

This study quantifies the decrease in freeway capacity during rainfall. Traffic and rainfall data were gathered from highway agencies and the Wunderground weather service. Three inter-urban freeway sections and their nearest weather stations were selected as experimental sites. Capacity analysis found reductions in the maximum and mean pre-breakdown flow rates due to rainfall. The Kruskal-Wallis test also provided some evidence to suggest that the variance in the pre-breakdown flow rate is statistically insignificant. A potential application of this study lies in the operation of real-time traffic management schemes such as Variable Speed Limits (VSL), Hard Shoulder Running (HSR), and Ramp Metering Systems (RMS), where speed or flow limits could be set based on a number of factors, including rainfall events and their intensities.

Keywords: capacity randomness, flow breakdown, freeway capacity, rainfall

Procedia PDF Downloads 361
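The Kruskal-Wallis comparison can be illustrated in miniature on synthetic pre-breakdown flow rates grouped by rainfall intensity; the H statistic below uses the standard rank-sum formula without tie correction (adequate for continuous data), and all numbers are fabricated:

```python
import numpy as np

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (standard rank formula, no tie correction)."""
    data = np.concatenate(groups)
    ranks = data.argsort().argsort() + 1.0   # ranks 1..N; ties negligible here
    N = len(data)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * r.mean() ** 2
        start += len(g)
    return 12.0 / (N * (N + 1)) * h - 3.0 * (N + 1)

rng = np.random.default_rng(0)
dry = rng.normal(2000, 150, 40)      # synthetic pre-breakdown flows, veh/h/lane
light = rng.normal(1900, 150, 40)
heavy = rng.normal(1750, 150, 40)
H = kruskal_wallis_h([dry, light, heavy])
# H is referred to a chi-square with k - 1 = 2 dof (5% critical value ~5.99)
```

With clearly separated group means, H far exceeds the 5% critical value, i.e. the rainfall bins differ.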
198 Air Cargo Overbooking Model under Stochastic Weight and Volume Cancellation

Authors: Naragain Phumchusri, Krisada Roekdethawesab, Manoj Lohatepanont

Abstract:

Overbooking is the practice of selling more goods or services than the available capacity because sellers anticipate that some buyers will not show up or may cancel their bookings. At present, many airlines deploy an overbooking strategy in order to deal with the uncertainty of their customers. In particular, some airlines sell more cargo capacity than they have available to freight forwarders in the belief that some of them will cancel later. In this paper, we propose methods to find the optimal overbooking levels of volume and weight for air cargo in order to minimize the total cost, comprising the cost of spoilage and the cost of offloading. Cancellations of volume and weight are jointly random variables with a known joint distribution. Heuristic approaches applying the idea of weight and volume independence are considered to find an appropriate answer to the full problem. Computational experiments are used to explore the performance of the approaches presented in this paper, as compared to a naïve method under different scenarios.

Keywords: air cargo overbooking, offloading capacity, optimal overbooking level, revenue management, spoilage capacity

Procedia PDF Downloads 299
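Under the independence heuristic mentioned in the abstract, each capacity dimension reduces to a one-dimensional overbooking problem. A Monte Carlo sketch for the weight dimension, with an assumed normal cancellation fraction and illustrative cost rates, is:

```python
import numpy as np

def expected_cost(booking_level, capacity=100.0, c_spoil=1.0, c_offload=3.0,
                  n=20000, seed=0):
    """Expected spoilage + offloading cost for one capacity dimension (weight),
    with a random fraction of the booked weight cancelling before departure."""
    rng = np.random.default_rng(seed)        # fixed seed: common random numbers
    cancel = np.clip(rng.normal(0.15, 0.05, n), 0.0, 1.0)  # assumed cancellation fraction
    shows = booking_level * (1.0 - cancel)   # weight that actually shows up
    spoil = np.maximum(capacity - shows, 0.0)    # capacity flown empty
    offload = np.maximum(shows - capacity, 0.0)  # weight that must be offloaded
    return float((c_spoil * spoil + c_offload * offload).mean())

levels = np.arange(100, 131)
costs = [expected_cost(b) for b in levels]
best = int(levels[int(np.argmin(costs))])    # cost-minimizing overbooking level
```

The grid search recovers the newsvendor-style trade-off: book beyond physical capacity until the marginal offloading risk outweighs the spoilage saving.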
197 Micromechanical Analysis of Interface Properties Effects on Transverse Tensile Response of Fiber-Reinforced Composites

Authors: M. Naderi, N. Iyyer, K. Goel, N. Phan

Abstract:

A micromechanical analysis of the influence of fiber-matrix interface fracture properties on the transverse tensile response of a fiber-reinforced composite is presented. The augmented finite element method (AFEM) is used to provide high-fidelity modeling of damage initiation and propagation within the micromechanical analysis. The effects of fiber volume fraction and fiber shape are also studied in representative volume elements (RVEs) to capture the stochastic behavior of the composite under loading. In addition, the influence of defects and voids on the composite response is investigated in the micromechanical analysis. The results reveal that the response of an RVE with constant interface properties overestimates the composite transverse strength. It is also seen that the damage initiation and propagation locations are controlled by the distributions of fracture properties, fiber shapes, and defects.

Keywords: cohesive model, fracture, computational mechanics, micromechanics

Procedia PDF Downloads 270
196 Maximizing Coverage with Mobile Crime Cameras in a Stochastic Spatiotemporal Bipartite Network

Authors: (Ted) Edward Holmberg, Mahdi Abdelguerfi, Elias Ioup

Abstract:

This research details a coverage measure for evaluating the effectiveness of observer node placements in a spatial bipartite network. The coverage measure can be used to optimize the configuration of stationary or mobile spatially oriented observer nodes, or a hybrid of the two, over time in order to fully utilize their capabilities. To demonstrate the practical application of this approach, we construct a SpatioTemporal Bipartite Network (STBN) using real-time crime center (RTCC) camera nodes and NOPD calls for service (CFS) event nodes from New Orleans, LA (NOLA). We use the coverage measure to identify optimal placements for mobile RTCC camera vans to improve coverage of vulnerable areas based on temporal patterns.

Keywords: coverage measure, mobile node dynamics, Monte Carlo simulation, observer nodes, observable nodes, spatiotemporal bipartite knowledge graph, temporal spatial analysis

Procedia PDF Downloads 72
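A stripped-down instance of such a coverage measure, with Euclidean distance standing in for street-network distance and synthetic event locations, might be:

```python
import numpy as np

def coverage(cameras, events, radius):
    """Fraction of event nodes within `radius` of at least one observer node:
    a simple bipartite coverage measure (Euclidean stand-in for network distance)."""
    # pairwise distances: (n_events, n_cameras)
    d = np.linalg.norm(events[:, None, :] - cameras[None, :, :], axis=2)
    return float((d.min(axis=1) <= radius).mean())

rng = np.random.default_rng(0)
events = rng.uniform(0, 10, (500, 2))        # synthetic CFS event locations
cameras = np.array([[2.5, 2.5], [7.5, 7.5]]) # two candidate observer placements
cov = coverage(cameras, events, radius=3.0)
```

Candidate van placements can then be ranked by this score per time window, which is the optimization loop the abstract describes.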
195 Divergence of Innovation Capabilities within the EU

Authors: Vishal Jaunky, Jonas Grafström

Abstract:

The development of the European Union's (EU) single economic market and rapid technological change have resulted in major structural changes in the economies of EU member states. The general liberalization process that the countries have undergone together has convinced member-state governments of the need to upgrade their economic and training systems in order to face economic globalization. Several signs of economic convergence have been found, but less is known about knowledge production. This paper addresses the convergence pattern of technological innovation in 13 EU states over the period 1990-2011 by means of parametric and non-parametric techniques. The parametric approaches revolve around the neoclassical convergence theories. This paper reveals divergence of both the β and σ types. Further, we find evidence of stochastic divergence, and a non-parametric approach such as distribution dynamics shows a tendency towards divergence. This result is supported by the occurrence of γ-divergence. The policies of the EU to reduce the technological gap among its member states seem to be missing their target, something that can have negative long-run consequences for the market.

Keywords: convergence, patents, panel data, European Union

Procedia PDF Downloads 258
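σ-convergence (or divergence) is usually read off the cross-sectional dispersion of log outcomes over time. A deterministic synthetic panel with heterogeneous growth rates, purely for illustration, shows the diverging pattern:

```python
import numpy as np

def sigma_dispersion(panel):
    """Cross-sectional standard deviation of log outcomes for each year;
    a rising series indicates sigma-divergence."""
    return np.log(panel).std(axis=0)

years, countries = 22, 13                    # 1990-2011, 13 EU states
base = np.full(countries, 100.0)             # equal synthetic patent counts in 1990
growth = np.linspace(1.02, 1.08, countries)  # heterogeneous growth -> divergence
panel = base[:, None] * growth[:, None] ** np.arange(years)  # (countries, years)
sigma = sigma_dispersion(panel)
```

Because the growth differentials persist, the dispersion grows monotonically: the σ-divergence pattern the paper reports for patenting.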
194 An Alternative Richards’ Growth Model Based on Hyperbolic Sine Function

Authors: Samuel Oluwafemi Oyamakin, Angela Unna Chukwu

Abstract:

The Richards growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution to this was called the hyperbolic Richards growth model, transforming the solution from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model, an approach that mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R²), mean absolute error (MAE), and mean square error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to check the behavior of the error term for possible violations. The mean function of top height/DBH over age predicted the observed values of top height/DBH more closely under the hyperbolic Richards nonlinear growth model than under the classical Richards growth model.

Keywords: height, diameter at breast height, DBH, hyperbolic sine function, Pinus caribaea, Richards' growth model

Procedia PDF Downloads 368
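The goodness-of-fit machinery (R², MAE, MSE) can be sketched for the classical Richards curve on synthetic height data; the authors' hyperbolic-sine modification is not reproduced here, only the fitting and evaluation steps, with all data and parameter values fabricated:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, b, k, m):
    """Classical Richards growth curve."""
    return A / (1.0 + b * np.exp(-k * t)) ** (1.0 / m)

rng = np.random.default_rng(0)
age = np.linspace(1, 40, 60)                          # stand age, years
height = richards(age, 30.0, 8.0, 0.25, 1.0) + rng.normal(0, 0.3, age.size)

popt, _ = curve_fit(richards, age, height, p0=[25.0, 5.0, 0.2, 1.0], maxfev=10000)
pred = richards(age, *popt)
mse = float(np.mean((height - pred) ** 2))
mae = float(np.mean(np.abs(height - pred)))
r2 = 1.0 - np.sum((height - pred) ** 2) / np.sum((height - height.mean()) ** 2)
```

The same three statistics computed for a competing model (here, the hyperbolic variant) give the comparison reported in the abstract.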
193 Nonequilibrium Effects in Photoinduced Ultrafast Charge Transfer Reactions

Authors: Valentina A. Mikhailova, Serguei V. Feskov, Anatoly I. Ivanov

Abstract:

In the last decade, nonequilibrium charge transfer has attracted considerable interest from the scientific community. Examples of such processes are the charge recombination in excited donor-acceptor complexes and the intramolecular electron transfer from the second excited electronic state. In these reactions the charge transfer proceeds predominantly in the nonequilibrium mode. In the excited donor-acceptor complexes the nuclear nonequilibrium is created by the pump pulse. The intramolecular electron transfer from the second excited electronic state is an example where the nuclear nonequilibrium is created by the forward electron transfer. The kinetics of these nonequilibrium reactions demonstrate a number of peculiar properties. The most important among them are: (i) the absence of the Marcus normal region in the free energy gap law for the charge recombination in excited donor-acceptor complexes, (ii) the extremely low quantum yield of the thermalized charge-separated state in the ultrafast charge transfer from the second excited state, (iii) the nonexponential charge recombination dynamics in excited donor-acceptor complexes, and (iv) the dependence of the charge transfer rate constant on the excitation pulse frequency. This report shows that most of these kinetic features can be well reproduced in the framework of a stochastic point-transition multichannel model. The model involves an explicit description of the nonequilibrium excited state formation by the pump pulse and accounts for the reorganization of intramolecular high-frequency vibrational modes, for their relaxation, and for the solvent relaxation. The model is able to quantitatively reproduce the complex nonequilibrium charge transfer kinetics observed in modern experiments.
The interpretation of the nonequilibrium effects from a unified point of view, in terms of the multichannel point-transition stochastic model, allows one to see the similarities and differences of the electron transfer mechanism in various molecular donor-acceptor systems and to formulate general regularities inherent in these phenomena. The nonequilibrium effects in photoinduced ultrafast charge transfer that have been studied over the last 10 years are analyzed. The methods of suppressing ultrafast charge recombination, as well as the similarities and dissimilarities of the electron transfer mechanism in different molecular donor-acceptor systems, are discussed. The extremely low quantum yield of the thermalized charge-separated state observed in the ultrafast charge transfer from the second excited state in the complex consisting of 1,2,4-trimethoxybenzene and tetracyanoethylene in acetonitrile solution directly demonstrates that its effectiveness can be close to unity. This experimental finding supports the idea that nonequilibrium charge recombination in excited donor-acceptor complexes can also be very effective, so that the fraction of thermalized complexes is negligible. The regularities inherent to equilibrium and nonequilibrium reactions are discussed, and their fundamental differences are analyzed, namely the opposite dependencies of the charge transfer rates on the dynamical properties of the solvent: an increase in solvent viscosity decreases the thermal rate but increases the nonequilibrium rate. The dependencies of the rates on the solvent reorganization energy and the free energy gap can also differ considerably. This work was supported by the Russian Science Foundation (Grant No. 16-13-10122).

Keywords: charge recombination, higher excited states, free energy gap law, nonequilibrium

Procedia PDF Downloads 292
192 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, design of experiments, and optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using the perturbation law based on Bezier curves, with each cut, having multiple control points, created along virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values automatically chosen for the control points defined during parameterization. The optimization was carried out with two algorithms, a stochastic algorithm and a gradient-based algorithm. For the stochastic algorithm, a genetic algorithm based on an artificial neural network was used as the optimization method in order to reach the global optimum, with the evaluation of successive design iterations performed by the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective function was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified using a multi-objective function. Beyond these two classes of optimization method, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized using the already-optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency, while total pressure loss and entropy were reduced. The combination of hub and shroud did not match the results achieved for the individual cases of the hub and the shroud, which may be caused by the fact that there were too many control variables. The fourth case of optimization showed the best result because the optimized hub was used as the initial geometry to optimize the shroud: the efficiency was increased more than in the individual cases of optimization, with a mass flow rate equal to the baseline design of the turbine. Finally, the results of the artificial neural network and the conjugate gradient method were compared.

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 206
191 Probabilistic and Stochastic Analysis of a Retaining Wall for C-Φ Soil Backfill

Authors: André Luís Brasil Cavalcante, Juan Felix Rodriguez Rebolledo, Lucas Parreira de Faria Borges

Abstract:

A methodology for the probabilistic analysis of active earth pressure on a retaining wall with c-Φ soil backfill is described in this paper. The Rosenblueth point estimate method is used to measure the failure probability of a gravity retaining wall. The basic principle of this methodology is to use two point estimates, i.e., the standard deviation and the mean value, to examine a variable in the safety analysis. The simplicity of this framework assures its wide application. The calculation requires 2ⁿ repetitions during the analysis, since the system is governed by n variables. In this study, a probabilistic model based on the Rosenblueth approach for computing the overturning probability of failure of a retaining wall is presented. The obtained results show the advantages of this kind of model in comparison with the deterministic solution. In a relatively easy way, the uncertainty in the wall and fill parameters is taken into account, and some practical results can be obtained for the retaining structure design.

Keywords: retaining wall, active earth pressure, backfill, probabilistic analysis

Procedia PDF Downloads 386
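With n = 2 random variables (say, backfill friction angle and unit weight), the 2ⁿ = 4 Rosenblueth evaluations can be sketched as follows; the wall geometry, the Rankine active-pressure model, and the parameter statistics are all illustrative assumptions:

```python
import numpy as np
from itertools import product

def fs_overturning(phi_deg, gamma):
    """Factor of safety against overturning of a gravity wall (illustrative
    geometry; Rankine active pressure, cohesionless backfill)."""
    H, B = 5.0, 2.0                  # wall height and base width, m
    gamma_wall = 24.0                # concrete unit weight, kN/m3
    Ka = np.tan(np.radians(45.0 - phi_deg / 2.0)) ** 2   # active coefficient
    M_over = Ka * gamma * H ** 3 / 6.0            # overturning moment per metre
    M_resist = gamma_wall * H * B * (B / 2.0)     # wall weight x lever arm
    return M_resist / M_over

mu = {"phi": 30.0, "gamma": 18.0}    # assumed means
sd = {"phi": 3.0, "gamma": 1.0}      # assumed standard deviations
# Rosenblueth point estimates: evaluate at mu +/- sigma of each variable (2^n points)
points = [fs_overturning(mu["phi"] + i * sd["phi"], mu["gamma"] + j * sd["gamma"])
          for i, j in product((1, -1), repeat=2)]
fs_mean = float(np.mean(points))
fs_sd = float(np.sqrt(np.mean(np.square(points)) - fs_mean ** 2))
```

For uncorrelated, symmetrically distributed variables each of the four points carries weight 1/4, so the first two moments of the factor of safety come directly from the four evaluations, and a failure probability can then be read off an assumed distribution of FS.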
190 Towards an Enhanced Compartmental Model for Profiling Malware Dynamics

Authors: Jessemyn Modiini, Timothy Lynar, Elena Sitnikova

Abstract:

We present a novel enhanced compartmental model for malware spread analysis in cyber security. This paper applies cyber security data features to epidemiological compartmental models to model the infectious potential of malware; compartmental models are among the most efficient tools for calculating the infectious potential of a disease. We discuss and profile epidemiologically relevant data features from a Domain Name System (DNS) dataset and then apply these features to epidemiological compartmental models of network traffic. This paper demonstrates how epidemiological principles can be applied to the novel analysis of key cybersecurity behaviours and trends and provides insight into threat modelling beyond that of kill-chain analysis. In applying deterministic compartmental models to a cyber security use case, the authors analyse their deficiencies and provide an enhanced stochastic model for cyber epidemiology. This enhanced compartmental model (the SUEICRN model) is contrasted with the traditional SEIR model to demonstrate its efficacy.

Keywords: cybersecurity, epidemiology, cyber epidemiology, malware

Procedia PDF Downloads 86
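For reference, the traditional SEIR baseline against which the enhanced model is contrasted can be integrated in a few lines (parameters are illustrative; the SUEICRN extension is not reproduced here):

```python
import numpy as np

def seir(beta=0.4, sigma=0.2, gamma=0.1, n_days=160, dt=0.1, N=10_000, I0=10):
    """Deterministic SEIR model integrated with forward Euler.
    beta: infection rate, sigma: incubation rate, gamma: recovery rate."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    traj = []
    for _ in range(int(n_days / dt)):
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
        traj.append((S, E, I, R))
    return np.array(traj)

traj = seir()
peak_infected = traj[:, 2].max()   # height of the outbreak
final_recovered = traj[-1, 3]      # final epidemic size
```

Replacing host counts with machine counts and the rates with the DNS-derived features described above is the substitution step the paper builds on.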
189 Understanding the Impact of Climate-Induced Rural-Urban Migration on the Technical Efficiency of Maize Production in Malawi

Authors: Innocent Pangapanga-Phiri, Eric Dada Mungatana

Abstract:

This study estimates the effect of climate-induced rural-urban migrants (RUM) on maize productivity. It uses panel data gathered by the National Statistics Office and the World Bank to understand the effect of RUM on the technical efficiency of maize production in rural Malawi. The study runs the two-stage Tobit regression to isolate the real effect of rural-urban migration on the technical efficiency of maize production. The results show that RUM significantly reduces the technical efficiency of maize production. However, the interaction of RUM and climate-smart agriculture has a positive and significant influence on the technical efficiency of maize production, suggesting the need for re-investing migrants’ remittances in agricultural activities.

Keywords: climate-smart agriculture, farm productivity, rural-urban migration, panel stochastic frontier models, two-stage Tobit regression

Procedia PDF Downloads 90
188 Comparative Study of the Technical Efficiency of Cotton Farms in the Towns of Banikoara and Savalou

Authors: Boukari Abdou Wakilou

Abstract:

Benin is one of West Africa's major cotton-producing countries. Cotton is the country's main source of foreign currency and employment, but it is also one of the sources of soil degradation, so the search for good agricultural practices is a constant preoccupation. The aim of this study is to measure the technical efficiency of cotton growers by comparing those who constantly grow cotton on the same land with those who practice crop rotation. The one-step estimation approach of the stochastic production frontier, including determinants of technical inefficiency, was applied to a stratified random sample of 261 cotton producers. Overall, the growers had a high average technical efficiency level of 90%. However, there was no significant difference in the level of technical efficiency between the two groups of growers studied. All the factors linked to compliance with the technical production itinerary had a positive influence on the growers' level of efficiency. It is, therefore, important to continue raising awareness of the need to respect the technical production itinerary and of integrated soil fertility management techniques.

Keywords: technical efficiency, soil fertility, cotton, crop rotation, Benin

Procedia PDF Downloads 35
187 Steady-State Behavior of a Multi-Phase M/M/1 Queue in Random Evolution Subject to Catastrophe Failure

Authors: Reni M. Sagayaraj, Anand Gnana S. Selvam, Reynald R. Susainathan

Abstract:

In this paper, we consider stochastic queueing models for the steady-state behavior of a multi-phase M/M/1 queue in random evolution subject to catastrophic failure. The arrival flow of customers is described by a marked Markovian arrival process. The service times of the different customer types have phase-type distributions with different parameters; to facilitate the investigation of the system, we use a generalized phase-type service time distribution. The model contains a repair state: when a catastrophe occurs, the system is transferred to the failure state. The paper focuses on the steady-state equations, and the steady-state behavior of the underlying queueing model, along with the average queue size, is analyzed.

Keywords: M/G/1 queuing system, multi-phase, random evolution, steady-state equation, catastrophe failure

Procedia PDF Downloads 293
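As a baseline for the multi-phase, catastrophe-prone model, the plain single-phase M/M/1 queue has a closed-form steady state, which is easy to check numerically:

```python
import numpy as np

def mm1_steady_state(lam, mu, n_max=200):
    """Steady-state queue-length distribution of a plain M/M/1 queue:
    pi_n = (1 - rho) * rho**n with rho = lam/mu < 1 (truncated at n_max)."""
    rho = lam / mu
    assert rho < 1, "queue is unstable for rho >= 1"
    n = np.arange(n_max + 1)
    pi = (1 - rho) * rho ** n
    L = rho / (1 - rho)          # mean number in system
    W = 1 / (mu - lam)           # mean time in system (Little's law: L = lam * W)
    return pi, L, W

pi, L, W = mm1_steady_state(lam=0.8, mu=1.0)
```

The multi-phase service and catastrophe/repair states of the paper enlarge the state space, but the same balance-equation logic yields the stationary distribution and average queue size.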
186 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources

Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy

Abstract:

This paper presents a load control strategy based on a modification of the Big Bang Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator rely on renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and the sequential Monte Carlo method in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power, and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussion shows the effectiveness of the proposed algorithm.

Keywords: big bang big crunch, distributed generation, load control, optimization, planning

Procedia PDF Downloads 322
185 Finite Element Modelling for the Development of a Planar Ultrasonic Dental Scaler for Prophylactic and Periodontal Care

Authors: Martin Hofmann, Diego Stutzer, Thomas Niederhauser, Juergen Burger

Abstract:

Dental biofilm is the main etiologic factor for caries and for periodontal and peri-implant infections. In addition to the risk of tooth loss, periodontitis is also associated with an increased risk of systemic diseases such as atherosclerotic cardiovascular disease and diabetes. For this reason, dental hygienists use ultrasonic scalers for prophylactic and periodontal care of the teeth. However, current instruments are limited in their dimensions and operating frequencies. The innovative design of a planar ultrasonic transducer introduces a new type of dental scaler: the flat titanium-based design allows the mass to be significantly reduced compared to a conventional screw-mounted Langevin transducer, resulting in a more efficient and controllable scaler. For the development of the novel device, multi-physics finite element analysis was used to simulate and optimise various design concepts, supported by prototyping and electromechanical characterisation. The feasibility and potential of a planar ultrasonic transducer have already been confirmed by our current prototypes, which achieve higher performance compared to commercial devices. Operating at the desired resonance frequency of 28 kHz with a driving voltage of 40 Vrms results in an in-plane tip oscillation with a displacement amplitude of up to 75 μm, with less than 8% out-of-plane movement and an energy transformation factor of 1.07 μm/mA. In a further step, we will adapt the design to two additional resonance frequencies (20 and 40 kHz) to determine the most suitable mode of operation. In addition to the already integrated characterisation methods, we will evaluate the clinical efficiency of the different devices in an in vitro setup with an artificial biofilm pocket model.

Keywords: ultrasonic instrumentation, ultrasonic scaling, piezoelectric transducer, finite element simulation, dental biofilm, dental calculus

Procedia PDF Downloads 86
184 Recursive Doubly Complementary Filter Design Using Particle Swarm Optimization

Authors: Ju-Hong Lee, Ding-Chen Chung

Abstract:

This paper deals with the optimal design of recursive doubly complementary (DC) digital filters using a metaheuristic optimization technique. Based on the theory of DC digital filters using two recursive digital all-pass filters (DAFs), the design problem is formulated so that the objective function is a weighted sum of the phase response errors of the designed DAFs. To ensure the stability of the recursive DC filters during the design process, we impose necessary constraints on the phases of the recursive DAFs. Through frequency sampling and a weighted least squares approach, the objective function is minimized by a population-based stochastic optimization approach. The resulting DC digital filters possess satisfactory frequency responses. Simulation results are presented for illustration and comparison.
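
The doubly complementary structure described above lends itself to a direct numerical check: two stable all-pass branches A0(z²) and z⁻¹A1(z²) yield an analysis pair that is simultaneously power complementary and all-pass complementary. A minimal sketch (the coefficients a0 = 0.2 and a1 = 0.5 are arbitrary illustrative values, not taken from the paper):

```python
import cmath
import math

def allpass(a, z):
    """First-order all-pass section A(z) = (a + z^-1) / (1 + a z^-1)."""
    return (a + 1 / z) / (1 + a / z)

def analysis_pair(w, a0=0.2, a1=0.5):
    """Two-channel pair built from all-pass branches A0(z^2) and z^-1 A1(z^2)."""
    z = cmath.exp(1j * w)
    b0 = allpass(a0, z * z)
    b1 = allpass(a1, z * z) / z
    return (b0 + b1) / 2, (b0 - b1) / 2

# Doubly complementary = power complementary (|H0|^2 + |H1|^2 = 1) and
# all-pass complementary (|H0 + H1| = 1); both hold exactly because each
# all-pass branch has unit magnitude at every frequency.
for k in range(1, 16):
    h0, h1 = analysis_pair(k * math.pi / 16)
    assert abs(abs(h0) ** 2 + abs(h1) ** 2 - 1.0) < 1e-12
    assert abs(abs(h0 + h1) - 1.0) < 1e-12
print("doubly complementary: OK")
```

Because the complementarity identities hold for any stable all-pass coefficients, the PSO-based design only has to shape the phases of the two branches, which is why the objective function in the abstract involves phase errors alone.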

Keywords: doubly complementary, digital all-pass filter, weighted least squares algorithm, particle swarm optimization

Procedia PDF Downloads 655
183 Optimal Maintenance Policy for a Three-Unit System

Authors: A. Abbou, V. Makis, N. Salari

Abstract:

We study the condition-based maintenance (CBM) problem of a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1 deterioration follows a Markov process with two operational states and one failure state; the operational states are partially observable through periodic condition monitoring. (ii) Module 2 deterioration follows a Gamma process with a known failure threshold; the deterioration level of this module is fully observable through periodic inspections. (iii) Only operating-age information is available for Module 3, whose lifetime has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during the intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing a value iteration algorithm for solving the MDP. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.
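
For the long-run average-cost criterion, value iteration takes its relative form: iterate the Bellman operator and renormalize by the value of a reference state, so that the subtracted constant converges to the optimal average cost g. A toy three-state repair model illustrates the mechanics (all costs and transition probabilities below are invented and far simpler than the three-module system of the paper):

```python
states = (0, 1, 2)                       # 0 = good, 1 = worn, 2 = failed
P = {("continue", 0): {0: 0.8, 1: 0.2},  # transition kernels per (action, state)
     ("continue", 1): {1: 0.7, 2: 0.3},
     ("continue", 2): {2: 1.0},
     ("repair", 0): {0: 1.0},
     ("repair", 1): {0: 1.0},
     ("repair", 2): {0: 1.0}}
C = {("continue", 0): 0.0, ("continue", 1): 2.0, ("continue", 2): 10.0,
     ("repair", 0): 5.0, ("repair", 1): 5.0, ("repair", 2): 8.0}

def q_value(a, s, h):
    return C[(a, s)] + sum(p * h[t] for t, p in P[(a, s)].items())

# Relative value iteration: subtract the value of reference state 0 each sweep;
# the subtracted constant g converges to the minimal long-run average cost.
h = {s: 0.0 for s in states}
for _ in range(500):
    q = {s: min(q_value(a, s, h) for a in ("continue", "repair")) for s in states}
    g = q[0]
    h = {s: q[s] - g for s in states}

policy = {s: min(("continue", "repair"), key=lambda a: q_value(a, s, h))
          for s in states}
print(round(g, 4), policy)
```

On this instance the iteration settles at g = 5/6 ≈ 0.833, with the policy "continue" in the good state and "repair" in the worn and failed states; the paper's MDP has a much richer (partially observed, multi-module) state, but the value iteration loop has the same shape.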

Keywords: reliability, maintenance optimization, Markov decision process, heuristics

Procedia PDF Downloads 193
182 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition

Authors: H. Mousavi, M. Sharifi, H. Pourvaziri

Abstract:

Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem which focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we consider the redundancy allocation problem for a series system with interval-valued component reliabilities. Therefore, during the search process, the component reliabilities are treated as stochastic variables with lower and upper bounds. To optimize the problem, we propose a cloud-theory-based simulated annealing algorithm (CBSAA). Monte Carlo simulation (MCS) is embedded in the CBSAA to handle the randomness of the components' reliabilities. This novel approach has been investigated on numerical examples, and the experimental results have shown that the CBSAA combined with MCS is an efficient tool for solving the RAP of systems with interval-valued component reliabilities.
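
The CBSAA + MCS combination can be sketched in reduced form: simulated annealing over the redundancy levels of a three-subsystem series system, with Monte Carlo sampling of interval-valued component reliabilities inside the objective. Plain geometric cooling stands in for the cloud-model temperature schedule, and all intervals, costs, and the budget are invented illustrative numbers:

```python
import math
import random

random.seed(1)

# Three-subsystem series system; each component's reliability is known only
# as an interval (hypothetical numbers, not from the paper).
intervals = [(0.70, 0.80), (0.80, 0.90), (0.85, 0.95)]
cost = [3.0, 4.0, 5.0]               # cost per redundant component
budget = 40.0

def feasible(n):
    return all(ni >= 1 for ni in n) and sum(c * ni for c, ni in zip(cost, n)) <= budget

def mc_reliability(n, samples=2000):
    """Monte Carlo estimate of system reliability with interval-valued components."""
    total = 0.0
    for _ in range(samples):
        prod = 1.0
        for (lo, hi), ni in zip(intervals, n):
            r = random.uniform(lo, hi)       # sample a reliability from its interval
            prod *= 1.0 - (1.0 - r) ** ni    # parallel redundancy in each subsystem
        total += prod
    return total / samples

# Plain simulated annealing over the redundancy vector (the paper's cloud-theory
# variant replaces this fixed geometric cooling with a cloud-model schedule).
n = [1, 1, 1]
best, best_r = n[:], mc_reliability(n)
T = 1.0
while T > 1e-3:
    cand = n[:]
    i = random.randrange(len(cand))
    cand[i] += random.choice([-1, 1])
    if feasible(cand):
        r_new, r_old = mc_reliability(cand), mc_reliability(n)
        if r_new > r_old or random.random() < math.exp((r_new - r_old) / T):
            n = cand
            if r_new > best_r:
                best, best_r = cand[:], r_new
    T *= 0.95
print(best, round(best_r, 3))
```

Each candidate is re-evaluated by fresh Monte Carlo draws, so acceptance decisions are noisy; common random numbers or larger sample sizes near convergence are standard refinements omitted here for brevity.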

Keywords: redundancy allocation problem, simulated annealing, cloud theory, Monte Carlo simulation

Procedia PDF Downloads 392
181 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model

Authors: Soudabeh Shemehsavar

Abstract:

In this paper, we consider a life test in which the failure times of the test units are not related deterministically to an observable, stochastic, time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step stress life test. The problem of accelerating such an experiment is the main aim of this paper. We present a step stress accelerated model based on a bivariate Wiener process, with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process whose values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution.
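
The data-generating mechanism of the model can be sketched by simulation: a latent degradation component drifts toward a failure threshold while a correlated marker component is observed only at the failure time. All drifts, diffusions, the correlation, and the threshold below are illustrative assumptions; the first-passage time of the drifted Wiener component is inverse Gaussian (cf. the keywords), which gives a handy sanity check:

```python
import math
import random

random.seed(2)

mu_x, sigma_x = 1.0, 0.5    # drift/diffusion of the latent degradation (assumed)
mu_y, sigma_y = 0.8, 0.4    # marker-process parameters (assumed)
rho = 0.6                   # correlation between the two Wiener components
threshold = 5.0             # failure when degradation first reaches this level
dt = 0.001

def one_unit():
    """Simulate one test unit; return (failure time, marker value at failure)."""
    x = y = t = 0.0
    while x < threshold:
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        zc = rho * z1 + math.sqrt(1 - rho * rho) * z2   # correlated increment
        x += mu_x * dt + sigma_x * math.sqrt(dt) * z1
        y += mu_y * dt + sigma_y * math.sqrt(dt) * zc
        t += dt
    return t, y

data = [one_unit() for _ in range(200)]
mean_t = sum(t for t, _ in data) / len(data)
# First-passage time of the drifted Wiener component is inverse Gaussian with
# mean threshold / mu_x = 5.0 here, so mean_t should land near 5.
print(round(mean_t, 2))
```

In the step stress setting, the drift parameters would change at the stress-change time; the optimization in the paper chooses that time to minimize the approximate variance of a lifetime-percentile estimator.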

Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process

Procedia PDF Downloads 299
180 Quadrature Mirror Filter Bank Design Using Population Based Stochastic Optimization

Authors: Ju-Hong Lee, Ding-Chen Chung

Abstract:

The paper deals with the optimal design of two-channel linear-phase (LP) quadrature mirror filter (QMF) banks using a metaheuristic optimization technique. Based on the theory of two-channel QMF banks using two recursive digital all-pass filters (DAFs), the design problem is formulated so that the objective function is a weighted sum of the group delay error of the designed QMF bank and the magnitude response error of the designed low-pass analysis filter. Through frequency sampling and a weighted least squares approach, the objective function is minimized by a particle swarm optimization algorithm. The resulting two-channel QMF banks possess an approximately LP response without magnitude distortion. Simulation results are presented for illustration and comparison.
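
The design loop the abstract outlines (evaluate a weighted least-squares error on a frequency grid, minimize by particle swarm) can be illustrated on a deliberately reduced problem: a single first-order all-pass section fitted to a target phase generated from a known coefficient, so the optimum is exactly recoverable. The grid, target coefficient, and swarm parameters are illustrative assumptions, not values from the paper:

```python
import cmath
import math
import random

random.seed(0)

def allpass_phase(a, w):
    """Phase response of a first-order all-pass H(z) = (a + z^-1)/(1 + a z^-1)."""
    z1 = cmath.exp(-1j * w)
    return cmath.phase((a + z1) / (1 + a * z1))

# Frequency grid and a target phase from a hypothetical design coefficient.
grid = [k * math.pi / 32 for k in range(1, 32)]
a_true = 0.4
target = [allpass_phase(a_true, w) for w in grid]
weights = [1.0] * len(grid)

def objective(a):
    if abs(a) >= 1.0:                # stability: keep the pole inside the unit circle
        return float("inf")
    return sum(wt * (allpass_phase(a, w) - t) ** 2
               for wt, w, t in zip(weights, grid, target))

# Minimal particle swarm over the single all-pass coefficient.
n_particles, iters = 20, 100
pos = [random.uniform(-0.99, 0.99) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]
gbest = min(pos, key=objective)
for _ in range(iters):
    for i in range(n_particles):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                  + 1.5 * r2 * (gbest - pos[i]))
        pos[i] = max(-0.999, min(0.999, pos[i] + vel[i]))
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=objective)
print(round(gbest, 3))
```

The full design replaces the scalar coefficient with the coefficient vectors of both all-pass branches and adds the group-delay and magnitude-error terms to the objective; the swarm update is unchanged.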

Keywords: quadrature mirror filter bank, digital all-pass filter, weighted least squares algorithm, particle swarm optimization

Procedia PDF Downloads 490
179 Optimal Perturbation in an Impulsively Blocked Channel Flow

Authors: Avinash Nayak, Debopam Das

Abstract:

The current work implements a variational principle to find the optimal initial perturbation that provides maximum growth in an impulsively blocked channel flow. The conventional method for studying temporal stability has been modal analysis. In most transient flows, modal analysis is still applied under the quasi-steady assumption, i.e. that the base flow changes much more slowly than the perturbation grows. In other studies, transient analysis of time-dependent flows is performed by formulating the growth of the perturbation as an initial value problem; the perturbation growth is, however, sensitive to the initial condition. This study seeks the initial perturbation that provides the maximum growth at a later time. The base flow of the blocked channel is derived, and the formulation is based on a two-dimensional perturbation with a stream function representing the perturbation quantity; hence the governing equation is the Orr-Sommerfeld equation. The cost functional is defined as the ratio of the disturbance energy at a terminal time T to the initial energy, i.e. G(T) = ||q(T)||²/||q(0)||², where q is the perturbation and ||.|| is the chosen norm. This cost functional is maximized over the initial perturbation distribution, subject to the constraint that the perturbation satisfies the governing Orr-Sommerfeld equation. The corresponding adjoint equation is derived and solved together with the governing equation in an iterative manner to provide the initial spatial shape of the perturbation that maximizes G(T). The growth is plotted against time, showing the development of the perturbation, which achieves an asymptotic shape. The effects of various parameters, e.g. the Reynolds number, are studied in the process.
Thus, the study emphasizes the use of the optimal perturbation and its growth to understand the stability characteristics of time-dependent flows. The quasi-steady assumption can then be verified against these results for transient flows such as the impulsively blocked channel flow.
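
In discrete form, the forward-adjoint iteration described above is a power iteration on M*M, where M is the propagator of the linearized dynamics: the optimal initial perturbation is the leading right singular vector of M, and G(T) is the square of its largest singular value. A reduced sketch with a small non-normal stand-in matrix in place of the discretized Orr-Sommerfeld operator (matrix entries, horizon, and step count are illustrative assumptions):

```python
import math
import random

random.seed(3)

n, T_final, steps = 6, 1.0, 1000
dt = T_final / steps

# Stable but non-normal stand-in operator (all eigenvalues equal -2, with
# shear-like coupling on the superdiagonal); the paper's operator would be
# the discretized Orr-Sommerfeld operator of the blocked channel.
A = [[(-2.0 if i == j else 0.0) + (3.0 if j == i + 1 else 0.0)
      for j in range(n)] for i in range(n)]
At = [[A[j][i] for j in range(n)] for i in range(n)]   # adjoint (transpose)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def march(M, v):
    """Forward-Euler march of v' = M v from t = 0 to t = T_final."""
    for _ in range(steps):
        dv = matvec(M, v)
        v = [v[i] + dt * dv[i] for i in range(n)]
    return v

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Forward/adjoint power iteration: q0 <- normalize(M* M q0) converges to the
# initial perturbation maximizing G(T) = ||q(T)||^2 / ||q(0)||^2.
q0 = [random.gauss(0, 1) for _ in range(n)]
for _ in range(40):
    qT = march(A, q0)      # governing equation, forward in time
    q0 = march(At, qT)     # adjoint equation
    s = norm(q0)
    q0 = [x / s for x in q0]
G = norm(march(A, q0)) ** 2
print(round(G, 3))
```

Although every eigenvalue of the stand-in operator is stable, the computed G(T) exceeds one: transient growth driven by non-normality, which is precisely why optimizing over initial perturbations matters for such flows.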

Keywords: blocked channel flow, calculus of variation, hydrodynamic stability, optimal perturbation

Procedia PDF Downloads 401
178 Design and Implementation of Pseudorandom Number Generator Using Android Sensors

Authors: Mochamad Beta Auditama, Yusuf Kurniawan

Abstract:

A smartphone or tablet requires strong randomness to establish secure encrypted communication, encrypt files, etc. Therefore, random number generation is one of the main keys to providing secrecy. Android devices are equipped with hardware-based sensors, such as the accelerometer and gyroscope. Each of these sensors provides a stochastic process which has the potential to be used as an extra randomness source, in addition to the /dev/random and /dev/urandom pseudorandom number generators. Android sensors can provide randomness automatically. To obtain randomness from Android sensors, each sensor is used to construct an entropy source. After all entropy sources are constructed, their outputs are combined to provide more entropy. Then, a deterministic process is used to produce a sequence of random bits from the combined output. All of these processes are done in accordance with NIST SP 800-22 and the NIST SP 800-90 series. The operating conditions are: 1) all operations are performed in Android user space, and 2) the Android device is placed motionless on a desk.
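
The combine-then-expand pipeline can be sketched as follows. SHA-256 conditioning and the counter-based expansion below are a drastically simplified stand-in for the constructions in the NIST SP 800-90 series (no reseeding, health tests, or prediction resistance), and os.urandom stands in for raw sensor samples:

```python
import hashlib
import os

def combine_entropy(sources):
    """Condition the concatenated raw samples of all entropy sources with SHA-256."""
    h = hashlib.sha256()
    for raw in sources:
        h.update(raw)
    return h.digest()

def generate(seed, n_bytes):
    """Reduced hash-based deterministic expansion (a stand-in for a NIST
    SP 800-90A Hash_DRBG; omits reseeding and prediction resistance)."""
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

# os.urandom stands in here for raw accelerometer/gyroscope samples; on a
# device these would be the noisy low-order bits of the sensor readings.
sensor_samples = [os.urandom(32), os.urandom(32), os.urandom(32)]
seed = combine_entropy(sensor_samples)
bits = generate(seed, 64)
print(len(bits), bits[:4].hex())
```

The expansion stage is fully deterministic given the seed, so all unpredictability rests on the conditioned sensor entropy; this separation is what allows the statistical quality of the output to be assessed with the NIST SP 800-22 test suite.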

Keywords: Android hardware-based sensor, deterministic process, entropy source, random number generation/generators

Procedia PDF Downloads 343
177 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units

Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz

Abstract:

Sepsis is a syndrome of physiological and biochemical abnormalities induced by severe infection, and it carries high mortality and morbidity; the severity of the patient's condition must therefore be assessed quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from the patient into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques on data from a population that shares a common characteristic could lead to customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (stochastic gradient boosting) variable-importance methodologies were used to select the set of variables that make up the score. Each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities; the variable is assigned a one if the patient falls in the higher-mortality group and a zero otherwise. These binary variables are used in a logistic regression (LR) model, and its coefficients are rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by the corresponding binary variables and summed.
The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated on the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS) and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality. The number of events (deaths) also increases steadily from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
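
The dichotomize-fit-round recipe can be reproduced on synthetic data. The three binary variables, their prevalences, and the "true" effect sizes below are invented for illustration, and plain gradient ascent stands in for a standard logistic regression solver:

```python
import math
import random

random.seed(4)

# Synthetic stand-in for three dichotomized admission variables (1 = patient
# falls in the higher-mortality group for that variable); effects are invented.
def sample():
    x = [int(random.random() < 0.4), int(random.random() < 0.3),
         int(random.random() < 0.5)]
    logit = -2.0 + 1.1 * x[0] + 0.6 * x[1] + 1.9 * x[2]
    y = int(random.random() < 1.0 / (1.0 + math.exp(-logit)))
    return x, y

train = [sample() for _ in range(1500)]

# Fit the logistic regression by plain gradient ascent on the log-likelihood.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 1.0
for _ in range(150):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in train:
        p = 1.0 / (1.0 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
        for i in range(3):
            gw[i] += (y - p) * x[i]
        gb += y - p
    w = [wi + lr * gi / len(train) for wi, gi in zip(w, gw)]
    b += lr * gb / len(train)

# Round each coefficient to the nearest integer: these are the point values.
points = [round(wi) for wi in w]

def score(x):
    return sum(pt * xi for pt, xi in zip(points, x))

print(points, score([1, 1, 1]))
```

A second, univariate logistic regression of the outcome on the summed score then yields the probability estimate; the intercept of the original model is absorbed into that calibration step.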

Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting

Procedia PDF Downloads 186
176 The Role and Importance of Genome Sequencing in Prediction of Cancer Risk

Authors: M. Sadeghi, H. Pezeshk, R. Tusserkani, A. Sharifi Zarchi, A. Malekpour, M. Foroughmand, S. Goliaei, M. Totonchi, N. Ansari–Pour

Abstract:

The role and relative importance of intrinsic and extrinsic factors in the development of complex diseases such as cancer remain a controversial issue. Determining the amount of variation explained by these factors requires experimental data and statistical models. These models are nevertheless based on the occurrence and accumulation of random mutational events during stem cell division, rendering cancer development a stochastic outcome. We demonstrate that not only is individual genome sequencing uninformative in determining cancer risk, but that assigning a unique genome sequence to any given individual (healthy or affected) is also not meaningful. Current whole-genome sequencing approaches are therefore unlikely to realize the promise of personalized medicine. In conclusion, since the genome sequence differs from cell to cell and changes over time, determining the risk of complex diseases from a genome sequence is somewhat unrealistic, and the resulting data are likely to be inherently uninformative.

Keywords: cancer risk, extrinsic factors, genome sequencing, intrinsic factors

Procedia PDF Downloads 244
175 Designing Inventory System with Constrained by Reducing Ordering Cost, Lead Time and Lost Sale Rate and Considering Random Disturbance in Ordering Quantity

Authors: Arezoo Heidary, Abolfazl Mirzazadeh, Aref Gholami-Qadikolaei

Abstract:

In business environments, the lot received is often not equal to the quantity ordered; in this work, a random disturbance in the received quantity is considered. Maximum allowable limits on storage space and inventory investment are assumed. The impact of lead time and ordering cost reductions when they act dependently is also investigated. Further, considering a mixture of backorders and lost sales for the allowable-shortage system, the effect of investment on reducing the lost sale rate is analyzed. For the proposed control system, a Lagrangian method is applied to solve the problem, and an algorithmic procedure is utilized to obtain the optimal solution with the global minimum expected cost. Finally, proofs of the concavity and convexity of the model in the decision variables are given.
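
The Lagrangian treatment of a resource constraint can be sketched on a stripped-down deterministic relative: a two-item EOQ model with a shared storage-space cap, where the multiplier on the space constraint is found by bisection. All cost, demand, and space figures are invented, and the paper's stochastic demand, dual constraints, and shortage terms are omitted:

```python
import math

# Two-item EOQ with a shared storage-space cap (illustrative numbers).
K = [100.0, 80.0]     # ordering costs
D = [1000.0, 500.0]   # annual demands
h = [2.0, 4.0]        # holding costs per unit per year
f = [1.0, 1.5]        # storage space per unit
F = 300.0             # total available space

def Q(lam):
    """Lot sizes minimizing cost + lam * space, from d/dQ = 0:
    Q_i = sqrt(2 K_i D_i / (h_i + 2 lam f_i))."""
    return [math.sqrt(2 * Ki * Di / (hi + 2 * lam * fi))
            for Ki, Di, hi, fi in zip(K, D, h, f)]

def space(lam):
    return sum(fi * qi for fi, qi in zip(f, Q(lam)))

lam = 0.0
if space(0.0) > F:                       # constraint binds: bisect on the multiplier
    lo, up = 0.0, 1e3
    for _ in range(100):
        mid = (lo + up) / 2
        lo, up = (mid, up) if space(mid) > F else (lo, mid)
    lam = (lo + up) / 2
q = Q(lam)
print([round(x, 1) for x in q], round(lam, 4))
```

Since space(lam) is continuous and strictly decreasing in the multiplier, the bisection drives the space usage onto the cap whenever the unconstrained lot sizes violate it; the paper's algorithmic procedure plays the analogous role for its richer stochastic model.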

Keywords: stochastic inventory system, lead time, ordering cost, lost sale rate, inventory constraints, random disturbance

Procedia PDF Downloads 390