Search results for: rationality parameter
997 Software Reliability Prediction Model Analysis
Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria
Abstract:
Software reliability prediction gives a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article, we focus on a software reliability model, assuming that there is a time redundancy whose value (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that not only irreversible failures may occur in the system, but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.
Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability
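The exponential failure model of the abstract can be sketched in code. The following Monte Carlo sketch is illustrative only, not the authors' mathematical model: block length, failure rate, and the retransmission rule are hypothetical choices, and the distribution function of the total transmission time is estimated empirically.

```python
import random

def transmission_time(n_blocks, t_block, failure_rate, rng):
    """Total time to transmit n_blocks basic blocks when the time between
    adjacent failures is exponential(failure_rate) and a block hit by a
    failure is retransmitted (the time-redundancy mechanism)."""
    total = 0.0
    for _ in range(n_blocks):
        while True:
            total += t_block
            if rng.expovariate(failure_rate) > t_block:
                break  # no failure during this block: transmission succeeded
    return total

def empirical_df(samples, t):
    """Empirical distribution function F(t) = P(T <= t)."""
    return sum(s <= t for s in samples) / len(samples)

rng = random.Random(42)
samples = [transmission_time(n_blocks=10, t_block=1.0, failure_rate=0.05, rng=rng)
           for _ in range(2000)]
# With a low failure rate most runs need few retransmissions, so F(t)
# climbs towards 1 shortly after the failure-free time of 10 blocks.
print(round(empirical_df(samples, 12.0), 2))
```

The analytical DF derived in the paper would replace the empirical estimate above; the simulation only illustrates the random-retransmission mechanism.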
Procedia PDF Downloads 465
996 Design of Liquid Crystal Based Tunable Reflectarray Antenna Using Slot Embedded Patch Element Configurations
Authors: M. Y. Ismail, M. Inam
Abstract:
This paper presents the design and analysis of a Liquid Crystal (LC) based tunable reflectarray antenna with different design configurations within the X-band frequency range. The effect of the LC volume used for the unit cell element on frequency tunability and reflection loss performance has been investigated. Moreover, different slot embedded patch element configurations have been proposed for LC based tunable reflectarray antenna design with enhanced performance. The detailed fabrication and measurement procedure for the different LC based unit cells is presented. The measured waveguide scattering parameter results demonstrate that by using circular slot embedded patch elements, the frequency tunability and dynamic phase range can be increased from 180 MHz to 200 MHz and from 120° to 124°, respectively. Furthermore, the circular slot embedded patch element can be designed at a 10 GHz resonant frequency with a patch volume of 2.71 mm³, compared to the 3.47 mm³ required for a rectangular patch without a slot.
Keywords: liquid crystal, tunable reflectarray, frequency tunability, dynamic phase range
Procedia PDF Downloads 521
995 Understanding Evolutionary Algorithms through Interactive Graphical Applications
Authors: Javier Barrachina, Piedad Garrido, Manuel Fogue, Julio A. Sanguesa, Francisco J. Martinez
Abstract:
It is very common to observe, especially in Computer Science studies, that students have difficulties correctly understanding how some mechanisms based on Artificial Intelligence work. In addition, the scope and limitations of most of these mechanisms are usually presented by professors only in a theoretical way, which does not help students to understand them adequately. In this work, we focus on the problems found when teaching Evolutionary Algorithms (EAs), which imitate the principles of natural evolution, as a method to solve parameter optimization problems. Although this kind of algorithm can be very powerful for solving relatively complex problems, students often have difficulties understanding how they work and how to apply them to solve problems in real cases. In this paper, we present two interactive graphical applications which have been specially designed with the aim of making Evolutionary Algorithms easy for students to understand. Specifically, we present: (i) TSPS, an application able to solve the "Traveling Salesman Problem", and (ii) FotEvol, an application able to reconstruct a given image by using Evolution Strategies. The main objective is that students learn how these techniques can be implemented, and the great possibilities they offer.
Keywords: education, evolutionary algorithms, evolution strategies, interactive learning applications
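The evolution-strategy idea behind applications such as FotEvol can be illustrated with a minimal (1+1) evolution strategy. This sketch is not taken from the applications described above; the objective function, step-size constants, and iteration budget are illustrative assumptions:

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=500, seed=0):
    """Minimal (1+1) evolution strategy: mutate the parent with Gaussian
    noise, keep the child if it is no worse (selection), and adapt the
    step size with a 1/5th-success-style rule."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:           # selection: the child replaces the parent
            x, fx = child, fc
            sigma *= 1.22      # success: widen the search
        else:
            sigma *= 0.95      # failure: narrow the search
    return x, fx

# Sphere function: a standard parameter-optimization test with minimum 0.
best, best_val = one_plus_one_es(lambda v: sum(t * t for t in v), [3.0, -2.0])
print(round(best_val, 6))
```

Replacing the sphere function with a pixel-wise image distance gives the flavour of an image-reconstruction application, though a practical tool would use a population-based ES.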
Procedia PDF Downloads 338
994 Short Arc Technique for Baselines Determinations
Authors: Gamal F.Attia
Abstract:
The baselines are the distances and lengths of the chords between the projections of the positions of the laser stations on the reference ellipsoid. For satellite geodesy, it is very important to determine the optimal length of the orbital arc along which laser measurements are to be carried out. It is clear that for dynamical methods, long arcs (one month or more) are to be used, for which more errors arise from the modeling of different physical forces, such as the Earth's gravitational field, air drag, solar radiation pressure, and others, that may influence the accuracy of the estimation of the satellite positions; at the same time, the measurement errors can be almost completely excluded and high stability in the determination of the relative coordinate system can be achieved. It is possible to diminish the influence of modeling errors by using short arcs of the satellite orbit (several revolutions or days), but the station coordinates estimated from different arcs can differ from each other by a quantity larger than statistical zero. Under the semidynamical 'short arc' method, one or several passes of the satellite within simultaneous visibility from both ends of the chord are used, and the estimated parameter in this case is the length of the chord. The comparison of the same baselines calculated with the long and short arc methods shows good agreement and even speaks in favor of the latter. In this paper, the short arc technique is explained, and three baselines are determined using the 'short arc' method.
Keywords: baselines, short arc, dynamical, gravitational field
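The chord itself is straightforward to compute once station positions on the reference ellipsoid are known. The following sketch is illustrative only (the paper estimates the chord from laser ranging, not from known coordinates): it converts hypothetical geodetic station coordinates to ECEF and takes the straight-line distance.

```python
import math

# WGS-84 reference ellipsoid constants
A = 6378137.0            # semi-major axis [m]
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Project a station (geodetic latitude/longitude, height) to ECEF [m]."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime-vertical radius
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1 - E2) + h) * math.sin(lat))

def chord_length(p1, p2):
    """Straight-line chord between two station positions: the baseline."""
    return math.dist(geodetic_to_ecef(*p1), geodetic_to_ecef(*p2))

# Hypothetical station coordinates (degrees), for illustration only.
print(round(chord_length((30.0, 31.0), (47.0, 8.0)) / 1000.0, 1))  # km
```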
Procedia PDF Downloads 465
993 Assessment of Exposure Dose Rate from Scattered X-Radiation during Diagnostic Examination in Nigerian University Teaching Hospital
Authors: Martins Gbenga., Orosun M. M., Olowookere C. J., Bamidele Lateef
Abstract:
Radiation exposures from diagnostic medical examinations are almost always justified by the benefits of accurate diagnosis of possible disease conditions. The aim of this study is to assess the influence of selected exposure parameters on scattered dose rates. The research was carried out by installing Gamma Scout software on a laptop computer to record the radiation counts, pulse rate, and dose rate for 136 patients. Seventy-three male patients participated (53.7%), while 63 females participated (46.3%). The mean and standard deviation of each parameter were recorded: tube potential was 69.50±11.75, ranging between 52.00 and 100.00; tube current was 23.20±17.55, ranging between 4.00 and 100.00; and focus-skin distance was 73.195±33.99, ranging between 52.00 and 100.00. The dose rate (DRate, in µSv/hr) shows significant correlations (0.582 and 0.587) with tube potential and body thickness (cm), and tube potential shows significant correlations (0.582 and 0.842) with DRate (µSv/hr) and body thickness (cm). The study was compared with other studies. The exposure parameters selected during each examination contributed to scattered radiation. A quality assurance program (QAP) is advised for the center.
Keywords: x-radiation, exposure rate, dose rate, tube potentials, scattered radiation, diagnostic examination
Procedia PDF Downloads 149
992 An Investigation on Overstrength Factor (Ω) of Reinforced Concrete Buildings in Turkish Earthquake Draft Code (TEC-2016)
Authors: M. Hakan Arslan, I. Hakkı Erkan
Abstract:
The overstrength factor is an important parameter of the load reduction factor. In this research, the overstrength factor (Ω) of reinforced concrete (RC) buildings and the parameters of Ω in the TEC-2016 draft version have been explored. For this aim, 48 RC buildings were modeled according to the current seismic code TEC-2007 and Turkish Building Code-500-2000 criteria. After the modeling step, nonlinear static pushover analyses were applied to these buildings using TEC-2007 Section 7. After the nonlinear pushover analyses, capacity curves (lateral load versus lateral top displacement) were plotted for the 48 RC buildings. Using these capacity curves, overstrength factors (Ω) were derived for each building. The obtained overstrength factor (Ω) values were compared with the TEC-2016 values for the related building types, and the results were interpreted. According to the values obtained from the study, the overstrength factor (Ω) given in the TEC-2016 draft code is found to be quite suitable.
Keywords: reinforced concrete buildings, overstrength factor, earthquake, static pushover analysis
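The derivation of Ω from a capacity curve reduces to a ratio of base shears. As a hedged illustration (the curve ordinates and design shear below are hypothetical, not values from the 48 analyzed buildings):

```python
def overstrength_factor(base_shears, design_shear):
    """Overstrength factor from a pushover capacity curve: the ratio of the
    peak base shear actually developed to the design base shear."""
    return max(base_shears) / design_shear

# Hypothetical capacity-curve ordinates (base shear, kN) from a pushover
# analysis; these numbers are illustrative, not from the studied buildings.
shear = [0.0, 800.0, 1400.0, 1750.0, 1900.0, 1850.0]
print(round(overstrength_factor(shear, design_shear=650.0), 2))  # → 2.92
```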
Procedia PDF Downloads 357
991 Modelling of Relocation and Battery Autonomy Problem on Electric Cars Sharing Dynamic by Using Discrete Event Simulation and Petri Net
Authors: Taha Benarbia, Kay W. Axhausen, Anugrah Ilahi
Abstract:
Electric car sharing systems, as an ecological mode of transportation, are increasing around the world. The complexity of managing electric car sharing systems, especially one-way trips and battery autonomy, has a direct influence on the supply and demand of the system. One must be able to precisely model the demand and supply of these systems to better operate electric car sharing and estimate its effect on mobility management and the accessibility that it provides in urban areas. In this context, our work focuses on developing a performance optimization model of the system based on discrete event simulation and stochastic Petri nets. The objective is to search for optimal decisions and management parameters of the system in order to fulfil demand at best while minimizing undesirable situations. In this paper, we present a new model of electric car sharing with relocation based on a monitoring system. The proposed approach also helps to specify the influence of the battery charging level on the behaviour of the system, as an important decision parameter of this complex and dynamical system.
Keywords: electric car-sharing systems, smart mobility, Petri nets modelling, discrete event simulation
Procedia PDF Downloads 183
990 Modelling and Optimisation of Floating Drum Biogas Reactor
Authors: L. Rakesh, T. Y. Heblekar
Abstract:
This study entails the development and optimization of a mathematical model for a floating drum biogas reactor from first principles, using thermal and empirical considerations. The model was derived on the basis of mass conservation, lumped-mass heat transfer formulations, and empirical biogas formation laws. The treatment leads to a system of coupled nonlinear ordinary differential equations whose solution maps four time-independent controllable parameters to five output variables which adequately serve to describe the reactor performance. These equations were solved numerically using the fourth-order Runge-Kutta method for a range of input parameter values. Using the data so obtained, an Artificial Neural Network with a single hidden layer was trained using the Levenberg-Marquardt Damped Least Squares (DLS) algorithm. This network was then fine-tuned for optimal mapping by varying the hidden layer size. This fast forward model was then employed as a health score generator in a Bacterial Foraging Optimization code. The optimal operating state of the simplified biogas reactor was thus obtained.
Keywords: biogas, floating drum reactor, neural network model, optimization
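The fourth-order Runge-Kutta scheme mentioned above can be sketched generically. The reactor equations themselves are not given in the abstract, so the coupled system below is a stand-in two-state model (substrate consumed, gas produced) with invented rate constants:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(f, y0, t0, t1, steps):
    t, y, h = t0, list(y0), (t1 - t0) / steps
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Stand-in coupled system (illustrative only): substrate s is consumed,
# gas g is produced in proportion; the rate constants are invented.
def reactor(t, y):
    s, g = y
    return [-0.4 * s, 0.3 * s]

s_end, g_end = integrate(reactor, [1.0, 0.0], 0.0, 10.0, 200)
print(round(s_end, 4), round(g_end, 4))
```

Sweeping the input parameters of such an integrator over a grid is what generates the training data for the surrogate neural network described above.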
Procedia PDF Downloads 143
989 Dynamic Effects of Charitable Giving in a Ramsey Model
Authors: Riham Barbar
Abstract:
This paper studies the dynamic effects of charitable giving in a Ramsey model à la Becker and Foias (1994), such that heterogeneity is reduced to two types of agents: rich and poor. It is assumed that the rich show great concern for the poor and enjoy giving. The introduction of charitable giving in this paper is inspired by the notion of Zakat (borrowed from Islamic economics) and is defined according to the warm-glow model of Andreoni (1990). In this framework, we prove the existence of a steady state where only the patient agent holds capital. Furthermore, we show that local indeterminacy appears. While moderate values of the charitable-giving elasticity make the appearance of endogenous fluctuations due to self-fulfilling expectations more likely, high values of this elasticity stabilize endogenous fluctuations by narrowing down the range of parameter values compatible with local indeterminacy, and may rule out expectations-driven fluctuations if the elasticity exceeds a certain threshold. Finally, cycles of period two emerge; however, charitable giving makes it less likely for these cycles to emerge.
Keywords: charitable giving, warm-glow, bifurcations, heterogeneous agents, indeterminacy, self-fulfilling expectations, endogenous fluctuations
Procedia PDF Downloads 316
988 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution
Authors: Md. Rashidul Hasan, Atikur Rahman Baizid
Abstract:
The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions, and then to compare them among themselves as well as with the classical estimator, the maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to gather some prior information (distribution) about the problem in order to solve it accurately. Here the gamma prior is used as the prior distribution of the parameter of the exponential distribution when finding the Bayes estimator. In our study, we also used different symmetric and asymmetric loss functions, such as the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean square error (MSE) of the estimators is obtained and presented graphically.
Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, squared error (SE) loss function, non-linear exponential (NLINEX) loss function
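Under the squared error loss function the Bayes estimator is the posterior mean, and the gamma prior is conjugate to the exponential likelihood: with prior Gamma(a, b) and data x₁,…,xₙ, the posterior is Gamma(a + n, b + Σxᵢ). A small sketch comparing its MSE with the MLE (the sample size, prior hyperparameters, and replication count are illustrative choices, not the paper's settings):

```python
import random

def mle_rate(x):
    """Classical maximum likelihood estimator of the exponential rate: n / sum(x)."""
    return len(x) / sum(x)

def bayes_rate_se(x, a, b):
    """Bayes estimator under squared error loss with a Gamma(a, b) prior:
    the posterior is Gamma(a + n, b + sum(x)), and its mean is the estimator."""
    return (a + len(x)) / (b + sum(x))

def mse(estimator, true_rate, n, reps, seed=1):
    """Monte Carlo mean square error of an estimator of the rate."""
    rng = random.Random(seed)
    errs = [(estimator([rng.expovariate(true_rate) for _ in range(n)]) - true_rate) ** 2
            for _ in range(reps)]
    return sum(errs) / reps

true_rate = 2.0
m_mle = mse(mle_rate, true_rate, n=10, reps=5000)
m_bayes = mse(lambda x: bayes_rate_se(x, a=4.0, b=2.0), true_rate, n=10, reps=5000)
# A prior whose mean a/b matches the true rate shrinks the small-sample
# estimate and typically lowers the MSE relative to the MLE.
print(m_bayes < m_mle)
```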
Procedia PDF Downloads 385
987 Sediment Patterns from Fluid-Bed Interactions: A Direct Numerical Simulations Study on Fluvial Turbulent Flows
Authors: Nadim Zgheib, Sivaramakrishnan Balachandar
Abstract:
We present results on the initial formation of ripples from an initially flattened erodible bed. We use direct numerical simulations (DNS) of turbulent open channel flow over a fixed sinusoidal bed, coupled with hydrodynamic stability analysis. We use the direct-forcing immersed boundary method to account for the presence of the sediment bed. The resolved flow provides the bed shear stress and consequently the sediment transport rate, which is needed in the stability analysis of the Exner equation. The approach differs from traditional linear stability analysis in that the phase lag between the bed topology and the sediment flux is obtained from the DNS. We ran 11 simulations at a fixed shear Reynolds number of 180 but with different sediment bed wavelengths. The analysis allows us to sweep a large range of physical and modelling parameters to predict their effects on linear growth. The Froude number appears to be the critical controlling parameter in the early linear development of ripples, in contrast with the dominant role of the particle Reynolds number during the equilibrium stage.
Keywords: direct numerical simulation, immersed boundary method, sediment-bed interactions, turbulent multiphase flow, linear stability analysis
Procedia PDF Downloads 188
986 Modeling Residual Modulus of Elasticity of Self-Compacted Concrete Using Artificial Neural Networks
Authors: Ahmed M. Ashteyat
Abstract:
Artificial Neural Network (ANN) models have been widely used in material modeling, inter-correlations, and behavior and trend predictions when the nonlinear relationship between system parameters cannot be quantified explicitly and mathematically. In this paper, an ANN was used to predict the residual modulus of elasticity (RME) of self-compacted concrete (SCC) damaged by heat. The ANN model was built, trained, tested, and validated using a total of 112 experimental data sets gathered from the available literature. The data used in model development included temperature, relative humidity conditions, mix proportions, filler types, and fiber type. The results of ANN training, testing, and validation indicated that the RME of SCC exposed to different temperature and relative humidity levels could be predicted accurately with ANN techniques. The reliability between the predicted outputs and the actual experimental data was 99%. This shows that the ANN has strong potential as a feasible tool for predicting the residual elastic modulus of SCC damaged by heat within the range of the input parameters. The ANN model could be used to estimate the RME of SCC as a rapid, inexpensive substitute for the much more complicated and time-consuming direct measurement of the RME of SCC.
Keywords: residual modulus of elasticity, artificial neural networks, self-compacted concrete, material modeling
Procedia PDF Downloads 536
985 Convergence Analysis of a Gibbs Sampling Based Mix Design Optimization Approach for High Compressive Strength Pervious Concrete
Authors: Jiaqi Huang, Lu Jin
Abstract:
Pervious concrete features a high water permeability rate. However, due to the lack of fine aggregates, its compressive strength is usually lower than that of other conventional concrete products. Optimization of the pervious concrete mix design has long been recognized as an effective mechanism to achieve high compressive strength while maintaining the desired permeability rate. In this paper, a Gibbs Sampling based algorithm is proposed to approximate the optimal mix design that achieves a high compressive strength of pervious concrete. We prove that the proposed algorithm efficiently converges to the set of globally optimal solutions. The convergence rate and accuracy depend on a control parameter employed in the proposed algorithm. The simulation results show that, by using the proposed approach, the system converges to the optimal solution quickly, and the derived optimal mix design achieves the maximum compressive strength while maintaining the desired permeability rate.
Keywords: convergence, Gibbs Sampling, high compressive strength, optimal mix design, pervious concrete
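The paper's algorithm is not reproduced in the abstract; as a loose illustration of Gibbs-style sampling with a temperature-like control parameter, the sketch below resamples one mix-design variable at a time from a Boltzmann conditional over a synthetic strength surface. Every grid, constant, and the objective itself are invented for illustration only:

```python
import math
import random

def gibbs_optimize(score, grids, beta=8.0, sweeps=200, seed=3):
    """Gibbs-sampling search: resample one variable at a time from the
    conditional p(v) ∝ exp(beta * score(...)). The control parameter beta
    trades off convergence speed against exploration; the best design seen
    during the sweeps is returned."""
    rng = random.Random(seed)
    x = [rng.choice(g) for g in grids]
    best, best_s = list(x), score(x)
    for _ in range(sweeps):
        for i, grid in enumerate(grids):
            scores = []
            for v in grid:
                x[i] = v
                scores.append(score(x))
            m = max(scores)  # subtract the max for numerical stability
            weights = [math.exp(beta * (s - m)) for s in scores]
            x[i] = rng.choices(grid, weights=weights)[0]
            s = score(x)
            if s > best_s:
                best, best_s = list(x), s
    return best, best_s

# Synthetic "compressive strength" surface with its optimum at (0.30, 0.45).
def strength(x):
    w_c, agg = x   # hypothetical water/cement ratio and aggregate fraction
    return 50 - 100 * (w_c - 0.30) ** 2 - 80 * (agg - 0.45) ** 2

grids = [[round(0.2 + 0.02 * k, 2) for k in range(11)],   # w/c: 0.20 .. 0.40
         [round(0.3 + 0.03 * k, 2) for k in range(11)]]   # agg: 0.30 .. 0.60
best, best_score = gibbs_optimize(strength, grids)
```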
Procedia PDF Downloads 182
984 Study on the Application of Lime to Improve the Rheological Properties of Polymer Modified Bitumen
Authors: A. Chegenizadeh, M. Keramatikerman, H. Nikraz
Abstract:
Bitumen is one of the most widely used materials in pavement engineering. It is a binding material with unique viscoelastic properties, especially when it is mixed with a polymer. In this study, to characterize the viscoelastic behaviour of polymer modified bitumen (PMB), a series of dynamic shear rheometer (DSR) tests was conducted. Four percentages of lime (1%, 2%, 4% and 5%) were mixed with PMB and tested at four different temperatures: 64ºC, 70ºC, 76ºC and 82ºC. The results indicated that the complex shear modulus (G*) increased with increasing frequency, due to increased resistance against deformation, while the phase angle (δ) showed a decreasing trend with increasing frequency. The addition of lime increased the complex modulus and decreased the phase angle. Increasing the temperature decreased the complex modulus and increased the phase angle until 70ºC. The decreasing trend of the rutting factor with increasing temperature revealed that the rutting factor improved with the addition of lime to the PMB.
Keywords: rheological properties, DSR test, polymer modified bitumen (PMB), complex modulus, lime
Procedia PDF Downloads 189
983 Vortex Separator for More Accurate Air Dry-Bulb Temperature Measurement
Authors: Ahmed N. Shmroukh, I. M. S. Taha, A. M. Abdel-Ghany, M. Attalla
Abstract:
The application of fog systems for cooling and humidification is still limited, although these systems require less initial cost than other cooling systems such as pad-and-fan systems. The undesirable relative humidity and air temperature inside the cooled or humidified space, which result from the poor control of fog systems, are the main reasons for their limited use. Any accurate control system essentially needs the air dry-bulb temperature as an input parameter; therefore, the air dry-bulb temperature in the space needs to be measured accurately. The scope of the present work is the separation of fog droplets from the air in a fogged space so that the air dry-bulb temperature can be measured accurately. The separation is done in a small device inside which the sensor of the temperature measuring instrument is positioned; a vortex separator was designed and used for this purpose. Another reference device was used for measuring the air temperature without separation. A comparative study was performed to identify the device that leads to the most accurate measurement of the air dry-bulb temperature. The results showed that the proposed devices shifted the measured air dry-bulb temperature in the correct direction relative to that of the free junction, with the vortex device performing best: it increased the temperature measured by the free junction by around 2 to 6°C for different fog on-off durations.
Keywords: fog systems, measuring air dry bulb temperature, temperature measurement, vortex separator
Procedia PDF Downloads 296
982 Actual Fracture Length Determination Using a Technique for Shale Fracturing Data Analysis in Real Time
Authors: M. Wigwe, M. Y Soloman, E. Pirayesh, R. Eghorieta, N. Stegent
Abstract:
The moving reference point (MRP) technique has been used in the analyses of the first three stages of two fracturing jobs. The results obtained verify the proposition that a hydraulic fracture in shale grows in spurts rather than in the continuous pattern originally interpreted by the Nolte-Smith technique. Rather than a continuous Mode I fracture that is followed by Mode II, III or IV fractures, these fracture modes can alternate throughout the pumping period. It is also shown that the Nolte-Smith time parameter plot can be very helpful in identifying the presence of natural fractures that have been intersected by the hydraulic fracture. In addition, with the aid of a fracture length-time plot generated from any fracture simulation that matches the data, the distance from the wellbore to the natural fractures, which also translates to the actual fracture length for the stage, can be determined. An algorithm for this technique is developed. This procedure was used for the first 9 minutes of the simulated frac job data. It was observed that after 7 minutes, the actual fracture length is about 150 ft, instead of the 250 ft predicted by the simulator output, and this difference grows larger as the analysis proceeds.
Keywords: shale, fracturing, reservoir, simulation, frac length, moving reference point
Procedia PDF Downloads 757
981 System Identification in Presence of Outliers
Authors: Chao Yu, Qing-Guo Wang, Dan Zhang
Abstract:
The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices, and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while keeping the solution matrix structure, and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can achieve exact detection of outliers in the case of no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of outliers; the filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. The use of the recovered "clean" data from the proposed method can give much better parameter estimation than that based on the raw data.
Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising
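The under-sampling-with-averaging idea can be illustrated on synthetic data: averaging non-overlapping blocks of k samples shrinks zero-mean Gaussian noise by a factor of √k but dilutes an isolated outlier only by k, so a sufficiently gross outlier stays salient. This is a toy sketch, not the paper's SDP-based detector; the signal, noise levels, and threshold are invented:

```python
import random
import statistics

def undersample_average(y, k):
    """Replace each non-overlapping block of k samples by its mean: zero-mean
    Gaussian noise shrinks by sqrt(k), while an isolated outlier is only
    diluted by k, so a gross outlier stays salient in the averaged series."""
    return [sum(y[i:i + k]) / k for i in range(0, len(y) - k + 1, k)]

rng = random.Random(7)
y = [rng.gauss(0.0, 1.0) for _ in range(500)]
y[123] += 60.0                      # inject one gross outlier
z = undersample_average(y, k=5)
sigma = statistics.pstdev(z)
flags = [i for i, v in enumerate(z) if abs(v) > 4.0 * sigma]
print(flags)                        # block 123 // 5 = 24 should stand out
```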
Procedia PDF Downloads 308
980 Topology Optimization of the Interior Structures of Beams under Various Load and Support Conditions with Solid Isotropic Material with Penalization Method
Authors: Omer Oral, Y. Emre Yilmaz
Abstract:
Topology optimization is an approach that optimizes the material distribution within a given design space, for given loads and boundary conditions, so as to meet specified performance goals. It uses various restrictions, such as boundary conditions, sets of loads, and constraints, to maximize the performance of the system. It differs from size and shape optimization methods but retains some features of both. In this study, the interior structures of parts were optimized using the SIMP (Solid Isotropic Material with Penalization) method. The volume of the part was a preassigned parameter, and minimum deflection was the objective function. The basic idea behind the theory was considered, and different methods were discussed. The Rhinoceros 3D design tool was used with the Grasshopper and TopOpt plugins to create and optimize parts. A Grasshopper algorithm was designed and tested for different beams, sets of arbitrarily located forces, and support types such as pinned, fixed, etc. Finally, 2.5D shapes were obtained and verified by observing the changes in the density function.
Keywords: Grasshopper, lattice structure, microstructures, Rhinoceros, solid isotropic material with penalization method, TopOpt, topology optimization
Procedia PDF Downloads 139
979 Kinetic and Thermodynamic Modified Pectin with Chitosan by Forming Polyelectrolyte Complex Adsorbent to Remediate of Pb(II)
Authors: Budi Hastuti, Mudasir, Dwi Siswanta, Triyono
Abstract:
Biosorbents such as pectin and chitosan are usually produced with low physical stability, so the materials need to be modified. In this research, the physical characteristics of the adsorbent were improved by grafting chitosan with acetate to form carboxymethyl chitosan (CC). Further, CC and pectin (Pec) were crosslinked using the cross-linking agent BADGE (bisphenol A diglycidyl ether) to obtain the CC-Pec-BADGE (CPB) adsorbent. The cross-linking process aims to form a stable structure that is resistant to acidic media. Furthermore, in order to increase the adsorption capacity for the removal of Pb(II), NaCl was added to the adsorbent to form a macroporous adsorbent named CC-Pec-BADGE-Na (CPB-Na). The physical and chemical characteristics of the porogenic adsorbent structure were characterized by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR). The adsorption parameters of CPB-Na for the removal of Pb(II) ions were determined, and the kinetics and thermodynamics of the batch sorption of Pb(II) on the CPB-Na adsorbent, with chitosan and pectin as comparisons, were also studied. The results showed that the CPB-Na biosorbent was stable in acidic media; it had a rough and porous surface with increased surface area and gave a higher sorption capacity for the removal of Pb(II) ions. The CPB-Na 1/1 and 1/3 adsorbents adsorbed Pb(II) with adsorption capacities of 45.48 mg/g and 45.97 mg/g, respectively, whereas pectin and chitosan reached 39.20 mg/g and 24.67 mg/g, respectively.
Keywords: porogen, pectin, carboxymethyl chitosan (CC), CC-Pec-BADGE-Na
Procedia PDF Downloads 159
978 Bayesian Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data
Authors: Al Omari Moahmmed Ahmed
Abstract:
This paper describes the Bayesian estimation, using Markov Chain Monte Carlo and Lindley's approximation, and the maximum likelihood estimation of the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although the problem can be solved by numerical methods. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be derived analytically. Hence, the Markov Chain Monte Carlo method and Lindley's approximation are used: the full conditional distributions for the parameters of the Weibull distribution are obtained via Gibbs sampling and the Metropolis-Hastings (MH) algorithm, after which the survival and hazard functions are estimated. The methods are compared to their maximum likelihood counterparts, and the comparisons are made with respect to the mean square error (MSE) and absolute bias to determine the better method for the scale and shape parameters and the survival and hazard functions.
Keywords: Weibull distribution, Bayesian method, Markov Chain Monte Carlo, survival and hazard functions
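A random-walk Metropolis-Hastings step for the Weibull shape parameter under Type-I censoring can be sketched as follows. This is an illustrative simplification (flat prior on the shape, scale held fixed, all tuning constants invented), not the paper's full Gibbs/Lindley machinery:

```python
import math
import random

def weibull_loglik(shape, scale, times, censored):
    """Type-I censored Weibull log-likelihood: density term for observed
    failures, survival term for units still running at the censoring time."""
    ll = 0.0
    for t, c in zip(times, censored):
        z = (t / scale) ** shape
        if c:                     # censored: contributes log S(t) = -z
            ll -= z
        else:                     # observed failure: contributes log f(t)
            ll += math.log(shape / scale) + (shape - 1) * math.log(t / scale) - z
    return ll

def mh_shape(times, censored, scale, iters=4000, burn=1000, step=0.1, seed=5):
    """Random-walk Metropolis-Hastings on the shape parameter with a flat
    prior on shape > 0; the scale is held fixed to keep the sketch short."""
    rng = random.Random(seed)
    shape = 1.0
    ll = weibull_loglik(shape, scale, times, censored)
    samples = []
    for it in range(iters):
        prop = shape + rng.gauss(0.0, step)
        if prop > 0:
            ll_prop = weibull_loglik(prop, scale, times, censored)
            if rng.random() < math.exp(min(0.0, ll_prop - ll)):
                shape, ll = prop, ll_prop
        if it >= burn:
            samples.append(shape)
    return sum(samples) / len(samples)   # posterior mean of the shape

# Simulate Weibull(scale=1, shape=1.5) lifetimes, Type-I censored at T = 2.0.
rng = random.Random(11)
raw = [rng.weibullvariate(1.0, 1.5) for _ in range(200)]
times = [min(t, 2.0) for t in raw]
censored = [t > 2.0 for t in raw]
post_mean = mh_shape(times, censored, scale=1.0)
print(round(post_mean, 2))
```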
Procedia PDF Downloads 479
977 Red Blood Cells Deformability: A Chaotic Process
Authors: Ana M. Korol, Bibiana Riquelme, Osvaldo A. Rosso
Abstract:
Since erythrocyte deformability analysis is mostly qualitative, the development of quantitative nonlinear methods is crucial for restricting subjectivity in the study of cell behaviour. An electro-optical mechanical system called an erythrodeformeter has been developed and constructed in our laboratory in order to evaluate the viscoelasticity of erythrocytes. A numerical method formulated on the basis of fractal approximation for ordinary (OBM) and fractional Brownian motion (FBM), as well as wavelet transform analysis, is proposed to distinguish chaos from noise, based on the assumption that diffractometric data involve both deterministic and stochastic components and so can be modelled as a system of bounded correlated random walks. Here we report studies on 25 donors: 4 alpha-thalassaemic patients, 11 beta-thalassaemic patients, and 10 healthy controls, all non-alcoholic and non-smoking individuals. The correlation coefficient, a nonlinear parameter, showed evidence of changes in erythrocyte deformability, while the wavelet entropy quantified the differences detected in the light diffraction patterns. Such quantifiers show a good deal of promise towards a better understanding of the rheological aspects of erythrocytes and could also help in clinical diagnosis.
Keywords: red blood cells, deformability, nonlinear dynamics, chaos theory, wavelet transform
Procedia PDF Downloads 60
976 A Particle Filter-Based Data Assimilation Method for Discrete Event Simulation
Authors: Zhi Zhu, Boquan Zhang, Tian Jing, Jingjing Li, Tao Wang
Abstract:
Data assimilation is a hybrid model- and data-driven method that dynamically fuses new observation data with a numerical model to iteratively approach the real system state. It is widely used in state prediction and parameter inference for continuous systems. Because of the non-linearity and non-Gaussianity of discrete event systems, the traditional Kalman filter, which is based on linear and Gaussian assumptions, cannot perform data assimilation for such systems, so the particle filter has gradually become the technical approach for discrete event simulation data assimilation. Hence, we propose a particle filter-based discrete event simulation data assimilation method and take an unmanned aerial vehicle (UAV) maintenance service system as a proof of concept for simulation experiments. The experimental results showed that the filtered state data are closer to the real state of the system, which verifies the effectiveness of the proposed method. This research can provide a reference framework for the data assimilation process of other complex nonlinear systems, such as discrete-time and agent-based simulations.
Keywords: discrete event simulation, data assimilation, particle filter, model and data-driven
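A bootstrap particle filter of the kind referred to above can be sketched for a toy random-walk state observed in Gaussian noise. The UAV maintenance system itself is not modelled here; the noise levels, particle count, and state model are all illustrative assumptions:

```python
import math
import random

def particle_filter(obs, n_particles=1000, proc_std=0.1, obs_std=1.0, seed=2):
    """Bootstrap particle filter for a random-walk state observed in Gaussian
    noise: propagate each particle, weight it by the likelihood of the new
    observation, resample, and report the posterior mean as the estimate."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    estimates = []
    for y in obs:
        # propagate through the (here: random-walk) state transition model
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # importance weights from the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        # multinomial resampling keeps the particle set from degenerating
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

# Synthetic truth and noisy observations (all noise levels are invented).
rng = random.Random(9)
x, xs, ys = 3.0, [], []
for _ in range(100):
    x += rng.gauss(0.0, 0.1)
    xs.append(x)
    ys.append(x + rng.gauss(0.0, 1.0))
est = particle_filter(ys)
```

The filtered estimates should track the hidden trajectory far more closely than the raw observations, which is the sense in which the abstract's "filtered state data is closer to the real state".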
Procedia PDF Downloads 20
975 Weighted Risk Scores Method Proposal for Occupational Safety Risk Assessment
Authors: Ulas Cinar, Omer Faruk Ugurlu, Selcuk Cebi
Abstract:
Occupational safety risk management is the most important element of a safe working environment, and effective risk management is only possible with accurate analysis and evaluation. Scoring-based risk assessment methods offer considerable ease of application, as they convert linguistic expressions into numerical results, and can be easily adapted to any field. Despite these advantages, important problems with scoring-based methods are frequently discussed; effective measurability is one of the most critical. Existing methods ask experts to choose a single score for each parameter, so experts select the score of the most likely outcome for a risk, and all other possible consequences are neglected. Assessments from the existing methods therefore express the most probable level of risk, not the real risk of the enterprise. This study aims to develop a method that presents a more comprehensive evaluation than existing approaches by weighing the probability and severity scores of all sub-parameters and potential outcomes, and a new scoring-based method is proposed. Keywords: occupational health and safety, risk assessment, scoring based risk assessment method, underground mining, weighted risk scores
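The core idea, weighting all potential consequences rather than scoring only the most likely one, can be sketched as follows; the probabilities and severities are hypothetical, and the weighting scheme of the proposed method itself is not reproduced here.

```python
def weighted_risk_score(outcomes):
    """Probability-weighted risk score over ALL potential outcomes,
    given as (probability, severity) pairs, instead of only the most
    likely one as in conventional scoring-based methods."""
    total_p = sum(p for p, _ in outcomes)
    return sum(p * s for p, s in outcomes) / total_p

# Hypothetical hazard: three potential consequences of the same event.
hazard = [(0.70, 2), (0.25, 6), (0.05, 10)]  # (probability, severity 1-10)
classic = max(hazard)[1]                # conventional: most likely outcome only
weighted = weighted_risk_score(hazard)  # 0.70*2 + 0.25*6 + 0.05*10 = 3.4
```

The conventional score (2) ignores the rarer but severe outcomes, while the weighted score (3.4) reflects the full consequence spectrum.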
Procedia PDF Downloads 136
974 The Dynamics of Unsteady Squeezing Flow between Parallel Plates (Two-Dimensional)
Authors: Jiya Mohammed, Ibrahim Ismail Giwa
Abstract:
Unsteady squeezing flow of a viscous fluid between parallel plates is considered. The two plates approach each other symmetrically, causing the squeezing flow. A two-dimensional rectangular Cartesian coordinate system is used. The Navier-Stokes equations were reduced, via a similarity transformation, to a single fourth-order non-linear ordinary differential equation, and the energy equation was transformed to a second-order coupled differential equation. We obtained solutions to the resulting ordinary differential equations via the Homotopy Perturbation Method (HPM). HPM deforms a differential problem into a set of problems that are easier to solve and produces an analytic approximate expression in the form of an infinite power series, using only six and five terms for the velocity and temperature, respectively. The results reveal that the proposed method is very effective and simple. Comparisons between the present and existing solutions are provided, showing that the proposed method is in good agreement with the Variation of Parameters Method (VPM). The effects of the relevant dimensionless parameters on the velocity profiles and temperature field are demonstrated with the aid of comprehensive graphs and tables. Keywords: coupled differential equation, Homotopy Perturbation Method, plates, squeezing flow
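The truncation idea behind HPM can be illustrated on the linear test problem y' + y = 0, y(0) = 1, where each homotopy order contributes one power-series term; the squeezing-flow equations are far more involved, so this is only a toy sketch.

```python
import math

def hpm_terms(t, n_terms):
    """Successive HPM correction terms for the test problem y' + y = 0,
    y(0) = 1: order k contributes (-t)**k / k!."""
    return [(-t) ** k / math.factorial(k) for k in range(n_terms)]

def hpm_solution(t, n_terms=6):
    """Truncated HPM series (six terms by default, as for the velocity
    series in the paper); the exact solution is exp(-t)."""
    return sum(hpm_terms(t, n_terms))
```

At t = 1 the six-term truncation already agrees with exp(-1) to about three decimal places, which is why so few terms suffice.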
Procedia PDF Downloads 475
973 Detection of Total Aflatoxin in Flour of Wheat and Maize Samples in Albania Using ELISA
Authors: Aferdita Dinaku, Jonida Canaj
Abstract:
Aflatoxins are potentially toxic metabolites produced by certain kinds of fungi (molds) found naturally all over the world; they can contaminate food crops and pose a serious health threat to humans through mutagenic and carcinogenic effects. Several types of aflatoxin (14 or more) occur in nature. In Albanian nutrition, cereals (especially wheat and corn) are common ingredients in some traditional meals. This study aimed to investigate the presence of aflatoxins in the flour of wheat and maize consumed in Albania's markets. The samples were collected randomly from different markets in Albania, and total aflatoxin was detected by enzyme-linked immunosorbent assay (ELISA), measured at 450 nm. The measured concentrations ranged between 0.05 and 1.09 ppb; the screened mycotoxin levels in all samples were below the maximum permissible limit of European Commission Regulation No 1881/2006 (4 μg/kg). The linearity of the calibration curves was good for total aflatoxins (B1, B2, G1, G2, M1) (R²=0.99) over the concentration range 0.005-4.05 ppb. Each sample was analyzed in two replicate measurements, and the standard deviation was calculated for each. The results showed that the flour samples are safe, but performing such tests remains necessary. Keywords: aflatoxins, ELISA technique, food contamination, flour
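The calibration-linearity check can be reproduced with a plain least-squares fit; the standard concentrations and absorbance readings below are hypothetical (in a competitive ELISA the signal falls as concentration rises), not the study's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b, returning (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical standards over the paper's range: concentration (ppb)
# vs. absorbance at 450 nm (signal decreases with concentration).
conc = [0.005, 0.05, 0.5, 1.0, 2.0, 4.05]
absorb = [1.10, 1.05, 0.92, 0.78, 0.49, 0.02]
a, b, r2 = linear_fit(conc, absorb)
```

An R² close to 0.99 over the 0.005-4.05 ppb range is the linearity criterion the abstract reports.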
Procedia PDF Downloads 158
972 A Dynamic Model for Assessing the Advanced Glycation End Product Formation in Diabetes
Authors: Victor Arokia Doss, Kuberapandian Dharaniyambigai, K. Julia Rose Mary
Abstract:
Advanced Glycation End (AGE) products are the end products of the reaction between the excess reducing sugar present in diabetes and the free amino groups of proteins, lipids, and nucleic acids. Non-enzymic glycation of molecules such as hemoglobin, collagen, and other structurally and functionally important proteins thus adds to pathogenic complications such as diabetic retinopathy, neuropathy, nephropathy, vascular changes, atherosclerosis, Alzheimer's disease, rheumatoid arthritis, and chronic heart failure. The most common non-cross-linking AGE, carboxymethyl lysine (CML), is formed by the oxidative breakdown of fructosyllysine, itself a product of glucose and lysine. CML is formed in a wide variety of tissues and is an index of the extent of glycoxidative damage. We have therefore constructed a mathematical and computational model that predicts the effect of temperature differences in vivo on the formation of CML, now considered an important marker of the intracellular milieu. This hybrid model, whose parameter fitting and sensitivity have been tested against available experimental data, paves the way for designing novel laboratory experiments that would throw more light on the pathological formation of AGE adducts and on the pathophysiology of diabetic complications. Keywords: advanced glycation end-products, CML, mathematical model, computational model
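A minimal sketch of the temperature dependence such a model captures, assuming a single first-order fructosyllysine-to-CML step with an Arrhenius rate; the rate constants are illustrative, not the fitted values of the paper.

```python
import math

def cml_formation(temp_kelvin, hours, fl0=1.0, dt=0.01,
                  pre_exp=1e8, ea=70e3):
    """Forward-Euler integration of a hypothetical first-order step
    fructosyllysine -> CML with an Arrhenius rate k(T) = A*exp(-Ea/(R*T)).
    A (per hour) and Ea (J/mol) are illustrative placeholders."""
    r_gas = 8.314  # J/(mol*K)
    k = pre_exp * math.exp(-ea / (r_gas * temp_kelvin))
    fl, cml, t = fl0, 0.0, 0.0
    while t < hours:
        dcml = k * fl * dt  # amount converted in this time step
        fl -= dcml
        cml += dcml
        t += dt
    return cml
```

Even a 3 K rise (normothermia vs. fever) increases the predicted CML yield, the kind of in vivo temperature effect the model is built to assess.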
Procedia PDF Downloads 130
971 Development of Basic Patternmaking Using Parametric Modelling and AutoLISP
Authors: Haziyah Hussin, Syazwan Abdul Samad, Rosnani Jusoh
Abstract:
This study is aimed at the automation of basic patternmaking for traditional clothes for the purpose of mass production, using AutoCAD with the AutoLISP feature under the software Hazi Attire. A standard dress form (industrial form) in small (S), medium (M), and large (L) sizes is measured using a full-body scanning machine. The patterns for the clothes are then designed parametrically based on the measured dress form. The Hazi Attire program is used within the framework of AutoCAD to generate the basic pattern of the front bodice, back bodice, front skirt, back skirt, and sleeve block (sloper). The generation of the pattern is based on parameters input by the user; in this study, the parameters were determined from the measured size of the dress form. The finalized pattern parameters show that the pattern fits perfectly on the dress form. Since the pattern is generated almost instantly, this proves that with AutoLISP programming, the manufacturing lead time for the mass production of traditional clothes can be decreased. Keywords: apparel, AutoLISP, Malay traditional clothes, pattern generation
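A minimal flavour of parameter-driven pattern generation, written in Python rather than AutoLISP; the drafting rules and the size-M measurements below are hypothetical, not those encoded in Hazi Attire.

```python
def bodice_block(bust, waist, back_length, ease=5.0):
    """Corner points (cm) of a simplified front-bodice sloper computed from
    body measurements, mimicking parameter-driven generation. The drafting
    rules here are illustrative, not the actual program's."""
    width = bust / 4.0 + ease / 4.0   # quarter pattern plus its share of ease
    dart = (bust - waist) / 8.0       # waist suppression per quarter pattern
    return {
        "neck": (0.0, back_length),
        "side_top": (width, back_length),
        "side_waist": (width, 0.0),
        "dart_waist": (width / 2.0 - dart / 2.0, 0.0),
    }

# Hypothetical size-M dress-form measurements (cm).
pattern = bodice_block(bust=88.0, waist=68.0, back_length=40.0)
```

Changing the input measurements regenerates the block instantly, which is the lead-time advantage the abstract claims for the AutoLISP implementation.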
Procedia PDF Downloads 257
970 A 3D Model of the Sustainable Management of the Natural Environment in National Parks
Authors: Paolo Russu
Abstract:
This paper investigates the economic and ecological dynamics that emerge in Protected Areas (PAs) as a result of interactions between visitors to the area and the animals that live there. We suppose that a PA contains two species whose interactions are determined by the Lotka-Volterra system of equations. Visitors' decisions to visit a PA are influenced by the entrance fee required to enter the park as well as the chance of witnessing the species that live there. Visitors have contradictory effects on the species and thus on the sustainability of the protected areas: on the one hand, an increase in the number of tourists damages the natural habitat and thus the species living there; on the other hand, it increases the total entrance-fee revenue that the managing body of the PA can use for defensive expenditures that protect the species from extinction. For a given set of parameter values, the existence of a saddle-node bifurcation, a Hopf bifurcation, homoclinic orbits, and a Bogdanov–Takens bifurcation of codimension two has been investigated. The system displays period-doubling and chaotic solutions, as demonstrated by numerical examples. Pontryagin's Maximum Principle was utilized to develop an optimal admission-charge policy that maximizes both social gain and ecosystem conservation. Keywords: environmental preferences, singularity points, dynamical system, chaos
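The species interaction can be sketched with a bare two-species Lotka-Volterra integration; the entrance-fee and defensive-expenditure terms of the paper's model are omitted, and the parameter values are illustrative.

```python
def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=20000):
    """Forward-Euler integration of the classic predator-prey system
    dx/dt = alpha*x - beta*x*y,  dy/dt = delta*x*y - gamma*y,
    returning the (prey, predator) trajectory."""
    x, y, traj = x0, y0, []
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        traj.append((x, y))
    return traj

# Illustrative parameters; equilibrium at (gamma/delta, alpha/beta) = (4, 2.75).
traj = lotka_volterra(x0=10.0, y0=5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4)
```

The paper's full model adds visitor-dependent terms to this skeleton, which is what generates the bifurcations and chaotic regimes reported.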
Procedia PDF Downloads 97
969 Gas Condensing Unit with Inner Heat Exchanger
Authors: Dagnija Blumberga, Toms Prodanuks, Ivars Veidenbergs, Andra Blumberga
Abstract:
Gas condensing units with inner-tube heat exchangers represent third-generation technology and differ from second-generation heat and mass transfer units, which rely on a layer of passive filling material. The former improve heat and mass transfer by increasing the cooled contact surface between the gas and the condensate drops and film formed in the inner-tube heat exchanger. This paper presents a selection of significant factors that influence heat and mass transfer. Experimental planning is based on the research and analysis of three main independent variables: water velocity, gas velocity, and spraying density. In the empirical mathematical models, the heat transfer coefficient is the dependent parameter, which depends on two independent variables: water and gas velocity. The empirical model is validated with experimental data from two independent gas condensing units in Lithuania and Russia. The experimental data are processed using the heat transfer criterion, the Kirpichov number. The results allow drawing a graphical nomogram for the calculation of heat and mass transfer conditions in this innovative and energy-efficient gas cooling unit. Keywords: gas condensing unit, filling, inner heat exchanger, package, spraying, tubes
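A correlation of the kind described, with the heat transfer coefficient depending on water and gas velocity, is commonly fitted as a power law; the sketch below recovers known exponents from synthetic data (not the Lithuanian or Russian measurements).

```python
import math

def fit_power_law(vw, vg, k):
    """Least-squares fit of an empirical correlation k = C * vw**a * vg**b,
    log-linearized to ln k = ln C + a*ln vw + b*ln vg and solved via the
    normal equations (A^T A) x = A^T y with Gaussian elimination."""
    rows = [[1.0, math.log(w), math.log(g)] for w, g in zip(vw, vg)]
    rhs = [math.log(ki) for ki in k]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, rhs)) for i in range(3)]
    for col in range(3):                      # forward elimination
        for row in range(col + 1, 3):
            f = ata[row][col] / ata[col][col]
            ata[row] = [ata[row][j] - f * ata[col][j] for j in range(3)]
            aty[row] -= f * aty[col]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        x[i] = (aty[i] - sum(ata[i][j] * x[j] for j in range(i + 1, 3))) / ata[i][i]
    return math.exp(x[0]), x[1], x[2]  # C, a, b

# Synthetic data generated from known exponents (illustrative only).
vw = [0.5, 1.0, 1.5, 2.0, 1.0, 2.0]   # water velocity, m/s
vg = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]   # gas velocity, m/s
k = [2.0 * w ** 0.8 * g ** 0.4 for w, g in zip(vw, vg)]
c_fit, a_fit, b_fit = fit_power_law(vw, vg, k)
```

With measured data, the fitted exponents feed directly into the kind of nomogram the abstract describes.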
Procedia PDF Downloads 288
968 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency, so it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value; a higher score indicates a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression on independent risk factors to predict a patient's probability of mortality. An important, overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is dimensionality, as variable selection becomes difficult. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among the normally distributed and skewed variables, as some of the vital sign distributions are skewed.
The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated to the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) the agitation-sedation profiles of 37 ICU patients collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and provide visualizations of the copulas and of high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs. Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence
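A first step in such an approach, estimating a copula dependence parameter from paired vital signs, can be sketched via Kendall's tau and the elliptical-copula relation rho = sin(pi*tau/2); the blood-pressure readings below are hypothetical.

```python
import math

def kendalls_tau(x, y):
    """Kendall's rank correlation: concordant minus discordant pairs,
    divided by the total number of pairs (ties counted as neither)."""
    n, concordant, discordant = len(x), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def gaussian_copula_rho(x, y):
    """Dependence parameter of a Gaussian copula estimated from Kendall's
    tau via the elliptical-copula relation rho = sin(pi * tau / 2)."""
    return math.sin(math.pi * kendalls_tau(x, y) / 2.0)

# Hypothetical paired vital signs: systolic and diastolic pressure (mmHg).
sbp = [118, 126, 140, 155, 132, 160, 110, 145]
dbp = [76, 82, 90, 98, 95, 102, 72, 85]
rho = gaussian_copula_rho(sbp, dbp)
```

Because it is rank-based, this estimator tolerates the skewed marginal distributions mentioned above; the estimated rho is then available for reweighting the vital-sign points.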
Procedia PDF Downloads 153