Search results for: inductive parameter

1026 Software Reliability Prediction Model Analysis

Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria

Abstract:

Software reliability prediction provides a great opportunity to measure the software failure rate at any point throughout system testing. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article, we focus on a software reliability model, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may suffer not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence that consists of a random number of basic blocks. The system software is considered unreliable, and the time between adjacent failures has an exponential distribution.
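
As a minimal numerical sketch of the exponential failure assumption above (the failure rate is an illustrative value, not one taken from the paper):

```python
import math
import random

failure_rate = 0.002  # assumed failures per second (illustrative)

def reliability(t):
    """R(t) = P(no failure before t) for exponentially distributed failure times."""
    return math.exp(-failure_rate * t)

# Mean time to failure for the exponential model is 1/lambda.
mttf = 1.0 / failure_rate

# Simulate times between adjacent failures, as assumed in the abstract.
times = [random.expovariate(failure_rate) for _ in range(10_000)]
print(f"MTTF (analytic): {mttf:.1f}, (simulated): {sum(times) / len(times):.1f}")
print(f"R(100 s) = {reliability(100):.4f}")
```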

Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability

Procedia PDF Downloads 465
1025 Design of Liquid Crystal Based Tunable Reflectarray Antenna Using Slot Embedded Patch Element Configurations

Authors: M. Y. Ismail, M. Inam

Abstract:

This paper presents the design and analysis of Liquid Crystal (LC) based tunable reflectarray antennas with different design configurations within the X-band frequency range. The effect of the LC volume used for the unit cell element on frequency tunability and reflection loss performance has been investigated. Moreover, different slot embedded patch element configurations have been proposed for LC based tunable reflectarray antenna design with enhanced performance. The detailed fabrication and measurement procedure for the different LC based unit cells is presented. The measured waveguide scattering parameter results demonstrated that, by using circular slot embedded patch elements, the frequency tunability and dynamic phase range can be increased from 180 MHz to 200 MHz and from 120° to 124°, respectively. Furthermore, the circular slot embedded patch element can be designed at a 10 GHz resonant frequency with a patch volume of 2.71 mm³, compared to the 3.47 mm³ required for a rectangular patch without a slot.

Keywords: liquid crystal, tunable reflectarray, frequency tunability, dynamic phase range

Procedia PDF Downloads 521
1024 Understanding Evolutionary Algorithms through Interactive Graphical Applications

Authors: Javier Barrachina, Piedad Garrido, Manuel Fogue, Julio A. Sanguesa, Francisco J. Martinez

Abstract:

It is very common to observe, especially in Computer Science studies, that students have difficulty correctly understanding how some mechanisms based on Artificial Intelligence work. In addition, the scope and limitations of most of these mechanisms are usually presented by professors only in a theoretical way, which does not help students to understand them adequately. In this work, we focus on the problems found when teaching Evolutionary Algorithms (EAs), which imitate the principles of natural evolution, as a method to solve parameter optimization problems. Although this kind of algorithm can be very powerful for solving relatively complex problems, students often have difficulty understanding how they work and how to apply them to solve problems in real cases. In this paper, we present two interactive graphical applications which have been specially designed with the aim of making Evolutionary Algorithms easy for students to understand. Specifically, we present: (i) TSPS, an application able to solve the "Traveling Salesman Problem", and (ii) FotEvol, an application able to reconstruct a given image by using Evolution Strategies. The main objective is that students learn how these techniques can be implemented, and the great possibilities they offer.
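
To give a flavour of what an application like TSPS animates, here is a minimal evolutionary algorithm for the Traveling Salesman Problem (the city set and EA settings are illustrative; TSPS's actual operators are not described in the abstract):

```python
import math
import random

# Hypothetical city coordinates; only a sketch of the EA loop, not TSPS itself.
cities = [(random.random(), random.random()) for _ in range(15)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    a, b = sorted(random.sample(range(len(tour)), 2))
    child = tour[:]
    child[a:b] = reversed(child[a:b])  # 2-opt style segment reversal
    return child

population = [random.sample(range(len(cities)), len(cities)) for _ in range(50)]
for generation in range(500):
    population.sort(key=tour_length)
    parents = population[:10]                      # selection: keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best tour length:", tour_length(min(population, key=tour_length)))
```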

Keywords: education, evolutionary algorithms, evolution strategies, interactive learning applications

Procedia PDF Downloads 338
1023 Short Arc Technique for Baselines Determinations

Authors: Gamal F. Attia

Abstract:

Baselines are the distances and lengths of the chords between the projections of the positions of the laser stations on the reference ellipsoid. In satellite geodesy, it is very important to determine the optimal length of the orbital arc along which laser measurements are to be carried out. It is clear that dynamical methods require long arcs (one month or more), for which errors in modeling different physical forces, such as the earth's gravitational field, air drag, solar radiation pressure, and others, may influence the accuracy of the estimated satellite positions; at the same time, measurement errors can be almost completely excluded, and high stability in the determination of the relative coordinate system can be achieved. It is possible to diminish the influence of modeling errors by using short arcs of the satellite orbit (several revolutions or days), but the station coordinates estimated from different arcs can differ from each other by a quantity larger than statistical zero. Under the semi-dynamical 'short arc' method, one or several passes of the satellite within the zone of simultaneous visibility from both ends of the chord are used, and the estimated parameter in this case is the length of the chord. The comparison of the same baselines calculated with the long and short arc methods shows good agreement and even speaks in favor of the latter. In this paper, the short arc technique is explained, and three baselines are determined using it.
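
Once the station positions are estimated, the baseline itself is the straight-line chord between their projections on the reference ellipsoid. A sketch of that geometric step, assuming WGS84 ellipsoid constants and hypothetical station coordinates:

```python
import math

# WGS84 reference ellipsoid constants (assumed; the paper's ellipsoid is not named)
A = 6378137.0          # semi-major axis, m
E2 = 6.69437999014e-3  # first eccentricity squared

def ecef(lat_deg, lon_deg):
    """Project a station position (at h = 0) onto Cartesian ECEF coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    return (n * math.cos(lat) * math.cos(lon),
            n * math.cos(lat) * math.sin(lon),
            n * (1.0 - E2) * math.sin(lat))

def chord(p, q):
    """Baseline: straight-line chord between two ellipsoid projections."""
    return math.dist(ecef(*p), ecef(*q))

# Two hypothetical laser station positions (lat, lon in degrees)
print(f"baseline chord: {chord((30.0, 31.2), (46.9, 30.7)) / 1000:.3f} km")
```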

Keywords: baselines, short arc, dynamical, gravitational field

Procedia PDF Downloads 465
1022 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group

Authors: Antonio Martín-Pardo

Abstract:

Regarding the context in which this paper is inserted, it is noteworthy that the current criminal policy model in which we find ourselves immersed, referred to by part of the doctrine as the citizen security model, is characterized by a marked tendency to discredit expert knowledge. This type of technical knowledge has been displaced by common sense and by the daily experience of the people at the time of legislative drafting, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-political scene, we still find valuable efforts on the side of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as 'GEPCA'). GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines along which this technical proposal is developed, with the purpose of its dissemination and discussion in the Congress. The basic methodology of the proposal is inductive-expository. First, we offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization touches on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Second, we get down to business properly by detailing the minutiae of the three main cannabis access channels that are proposed, namely: the regulated market, associations of cannabis users, and personal self-cultivation. For each of these options, especially the first two, special attention is paid both to the production and processing of the substance and to the necessary administrative control of the activity. Finally, in a third block, some notes are given on a series of subjects that surround the different access options mentioned above and that give fullness and coherence to the proposal outlined. Among those related issues are the consumption and possession of the substance; advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; and the need to articulate evaluation instruments for the entire process. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.

Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation

Procedia PDF Downloads 120
1021 Assessment of Exposure Dose Rate from Scattered X-Radiation during Diagnostic Examination in Nigerian University Teaching Hospital

Authors: Martins Gbenga, Orosun M. M., Olowookere C. J., Bamidele Lateef

Abstract:

Radiation exposures from diagnostic medical examinations are almost always justified by the benefits of accurate diagnosis of possible disease conditions. The aim is to assess the influence of selected exposure parameters on scattered dose rates. The research was carried out using Gamma Scout software installed on a laptop computer to record the radiation counts, pulse rate, and dose rate for 136 patients. Seventy-three male patients participated (53.7%), while 63 females participated (46.3%). The mean and standard deviation of each parameter were recorded: tube potential was 69.50±11.75, ranging between 52.00 and 100.00; tube current was 23.20±17.55, ranging between 4.00 and 100.00; and focus-skin distance was 73.195±33.99, ranging between 52.00 and 100.00. The dose rate (DRate, in µSv/hr) was significantly related to tube potential and body thickness (cm), at 0.582 and 0.587, respectively, while tube potential was significantly related to DRate (µSv/hr) and body thickness (cm), at 0.582 and 0.842, respectively. The study was compared with other studies. The exposure parameters selected during each examination contributed to the scattered radiation. A quality assurance program (QAP) is advised for the center.

Keywords: x-radiation, exposure rate, dose rate, tube potentials, scattered radiation, diagnostic examination

Procedia PDF Downloads 149
1020 An Investigation on Overstrength Factor (Ω) of Reinforced Concrete Buildings in Turkish Earthquake Draft Code (TEC-2016)

Authors: M. Hakan Arslan, I. Hakkı Erkan

Abstract:

The overstrength factor is an important parameter of the load reduction factor. In this research, the overstrength factor (Ω) of reinforced concrete (RC) buildings and the parameters affecting Ω in the TEC-2016 draft version have been explored. To this end, 48 RC buildings were modeled according to the current seismic code TEC-2007 and the Turkish Building Code-500-2000 criteria. After the modeling step, nonlinear static pushover analyses were applied to these buildings using TEC-2007 Section 7. After the nonlinear pushover analyses, capacity curves (lateral load versus lateral top displacement) were plotted for the 48 RC buildings. Using these capacity curves, overstrength factors (Ω) were derived for each building. The obtained overstrength factor (Ω) values were compared with the TEC-2016 values for the related building types, and the results were interpreted. According to the values obtained in the study, the overstrength factor (Ω) given in the TEC-2016 draft code is found to be quite suitable.
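
As a rough sketch of the final step, Ω can be read off a capacity curve as the ratio of the peak base shear to the code-level design base shear (the curve points and design shear below are illustrative, not the study's data):

```python
# Deriving the overstrength factor from a pushover capacity curve (sketch).
top_displacement = [0.00, 0.02, 0.05, 0.10, 0.20, 0.30]          # m
base_shear = [0.0, 800.0, 1500.0, 1900.0, 2100.0, 2150.0]        # kN

V_design = 900.0               # kN, design base shear from the code (assumed)
V_ultimate = max(base_shear)   # peak lateral capacity from the pushover curve

omega = V_ultimate / V_design  # overstrength factor
print(f"overstrength factor = {omega:.2f}")
```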

Keywords: reinforced concrete buildings, overstrength factor, earthquake, static pushover analysis

Procedia PDF Downloads 357
1019 Modelling of Relocation and Battery Autonomy Problem on Electric Cars Sharing Dynamic by Using Discrete Event Simulation and Petri Net

Authors: Taha Benarbia, Kay W. Axhausen, Anugrah Ilahi

Abstract:

Electric car sharing systems, as an ecological mode of transportation, are increasing around the world. The complexity of managing electric car sharing systems, especially one-way trips and battery autonomy, has a direct influence on the supply and demand of the system. One must be able to precisely model the demand and supply of these systems to better operate electric car sharing and estimate its effect on mobility management and the accessibility that it provides in urban areas. In this context, our work focuses on developing a performance optimization model of the system based on discrete event simulation and stochastic Petri nets. The objective is to search for optimal decisions and management parameters of the system in order to best fulfil demand while minimizing undesirable situations. In this paper, we present a new model of electric car sharing with relocation based on a monitoring system. The proposed approach also helps to specify the influence of the battery charging level on the behaviour of the system as an important decision parameter of this complex and dynamic system.
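
A minimal discrete event sketch of the one-way-trip and battery-autonomy coupling described above (fleet size, rates, and the battery model are invented placeholders, not the paper's calibrated parameters):

```python
import heapq
import itertools
import random

random.seed(1)
counter = itertools.count()  # tiebreaker so heap entries stay comparable
events = []
for i in range(40):          # ride requests alternating between two stations
    heapq.heappush(events, (random.expovariate(1 / 10), next(counter), "request", i % 2))

battery = {car: 100.0 for car in range(4)}   # state of charge, %
station = {0: [0, 1], 1: [2, 3]}             # cars parked at each station
lost = 0

while events:
    t, _, kind, data = heapq.heappop(events)
    if kind == "request":
        charged = [c for c in station[data] if battery[c] >= 30.0]
        if not charged:
            lost += 1                                    # undesirable situation
            continue
        car = charged[0]
        station[data].remove(car)
        battery[car] -= random.uniform(10, 25)           # trip consumption
        heapq.heappush(events, (t + random.uniform(15, 40), next(counter),
                                "return", (1 - data, car)))
    else:                                                # one-way return
        dest, car = data
        station[dest].append(car)
        battery[car] = min(100.0, battery[car] + 20.0)   # recharge while parked

print("requests lost to empty station or low battery:", lost)
```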

Keywords: electric car-sharing systems, smart mobility, Petri nets modelling, discrete event simulation

Procedia PDF Downloads 183
1018 Modelling and Optimisation of Floating Drum Biogas Reactor

Authors: L. Rakesh, T. Y. Heblekar

Abstract:

This study entails the development and optimization of a mathematical model for a floating drum biogas reactor from first principles using thermal and empirical considerations. The model was derived on the basis of mass conservation, lumped-mass heat transfer formulations, and empirical biogas formation laws. The treatment leads to a system of coupled nonlinear ordinary differential equations whose solution maps four time-independent controllable parameters to five output variables which adequately serve to describe the reactor performance. These equations were solved numerically using the fourth-order Runge-Kutta method for a range of input parameter values. Using the data so obtained, an Artificial Neural Network with a single hidden layer was trained using the Levenberg-Marquardt Damped Least Squares (DLS) algorithm. This network was then fine-tuned for optimal mapping by varying the hidden layer size. This fast forward model was then employed as a health score generator in a Bacterial Foraging Optimization code. The optimal operating state of the simplified biogas reactor was thus obtained.
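
A generic fourth-order Runge-Kutta step of the kind used to integrate the coupled system (the right-hand side below is a toy stand-in; the reactor equations themselves are not reproduced in the abstract):

```python
def rk4_step(f, t, y, h):
    """One classical RK4 step for a coupled system dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Toy two-state system (e.g., slurry temperature, gas volume) -- illustrative only.
f = lambda t, y: [-0.1 * (y[0] - 35.0), 0.05 * y[0] - 0.02 * y[1]]
y, t, h = [20.0, 0.0], 0.0, 0.1
for _ in range(1000):
    y = rk4_step(f, t, y, h)
    t += h
print(y)
```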

Keywords: biogas, floating drum reactor, neural network model, optimization

Procedia PDF Downloads 143
1017 Dynamic Effects of Charitable Giving in a Ramsey Model

Authors: Riham Barbar

Abstract:

This paper studies the dynamic effects of charitable giving in a Ramsey model à la Becker and Foias (1994), such that heterogeneity is reduced to two types of agents: rich and poor. It is assumed that the rich show great concern for the poor and enjoy giving. The introduction of charitable giving in this paper is inspired by the notion of Zakat (borrowed from Islamic economics) and is defined according to the warm-glow model of Andreoni (1990). In this framework, we prove the existence of a steady state where only the patient agent holds capital. Furthermore, we show that local indeterminacy appears. While moderate values of the charitable-giving elasticity make the appearance of endogenous fluctuations due to self-fulfilling expectations more likely, high values of this elasticity stabilize endogenous fluctuations by narrowing down the range of parameter values compatible with local indeterminacy, and may rule out expectations-driven fluctuations if the elasticity exceeds a certain threshold. Finally, cycles of period two emerge; however, charitable giving makes it less likely for these cycles to emerge.

Keywords: charitable giving, warm-glow, bifurcations, heterogeneous agents, indeterminacy, self-fulfilling expectations, endogenous fluctuations

Procedia PDF Downloads 316
1016 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution

Authors: Md. Rashidul Hasan, Atikur Rahman Baizid

Abstract:

The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and then to compare them with each other as well as with the classical estimator, the maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to incorporate some prior information (distribution) about the problem in order to solve it accurately. Here the gamma prior is used as the prior distribution of the exponential parameter for finding the Bayes estimator. In our study, we also used different symmetric and asymmetric loss functions, such as the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean squared errors (MSE) of the estimators are obtained and presented graphically.
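
The gamma prior is conjugate to the exponential likelihood, so the posterior is available in closed form; a sketch comparing the Bayes estimator under squared error loss with the MLE (prior hyperparameters and sample size are illustrative):

```python
import random

# Exponential likelihood with a Gamma(alpha, beta) prior on the rate gives a
# Gamma(alpha + n, beta + sum(x)) posterior. Under squared error loss the Bayes
# estimator is the posterior mean.
alpha, beta, true_rate, n = 2.0, 1.0, 1.5, 30
random.seed(0)

mse_mle = mse_bayes = 0.0
for _ in range(5000):
    x = [random.expovariate(true_rate) for _ in range(n)]
    mle = n / sum(x)
    bayes = (alpha + n) / (beta + sum(x))   # posterior mean (SE loss)
    mse_mle += (mle - true_rate) ** 2
    mse_bayes += (bayes - true_rate) ** 2

print(f"MSE(MLE)   = {mse_mle / 5000:.5f}")
print(f"MSE(Bayes) = {mse_bayes / 5000:.5f}")
```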

Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, Squared Error (SE) loss function, non-linear exponential (NLINEX) loss function

Procedia PDF Downloads 385
1015 Sediment Patterns from Fluid-Bed Interactions: A Direct Numerical Simulations Study on Fluvial Turbulent Flows

Authors: Nadim Zgheib, Sivaramakrishnan Balachandar

Abstract:

We present results on the initial formation of ripples from an initially flattened erodible bed. We use direct numerical simulations (DNS) of turbulent open channel flow over a fixed sinusoidal bed, coupled with hydrodynamic stability analysis. We use the direct forcing immersed boundary method to account for the presence of the sediment bed. The resolved flow provides the bed shear stress and consequently the sediment transport rate, which is needed in the stability analysis of the Exner equation. The approach differs from traditional linear stability analysis in that the phase lag between the bed topology and the sediment flux is obtained from the DNS. We ran 11 simulations at a fixed shear Reynolds number of 180 but for different sediment bed wavelengths. The analysis allows us to sweep a large range of physical and modelling parameters to predict their effects on linear growth. The Froude number appears to be the critical controlling parameter in the early linear development of ripples, in contrast with the dominant role of the particle Reynolds number during the equilibrium stage.

Keywords: direct numerical simulation, immersed boundary method, sediment-bed interactions, turbulent multiphase flow, linear stability analysis

Procedia PDF Downloads 188
1014 Transducers for Measuring Displacements of Rotating Blades in Turbomachines

Authors: Pavel Prochazka

Abstract:

The study deals with transducers for measuring the vibration displacements of rotating blade tips in turbomachines. In order to prevent major accidents with extensive economic consequences, there is an urgent need for every low-pressure steam turbine stage to be equipped with a modern non-contact measuring system providing information on blade loading, damage, and residual lifetime under operation. The requirement of measuring the vibration and static characteristics of steam turbine blades therefore calls for the development and operational verification of new types of sensors as well as new measuring principles and methods. The task is really demanding: to measure displacements of blade tips with a resolution of the order of 10 μm at speeds up to 750 m/s, humidity of 100%, and temperatures up to 200 °C. While capacitive and optical transducers are primarily used in gas turbines, these transducers cannot be used in steam turbines: water vapor, droplets of condensing water, and dirt disable the function of such sensors. Therefore, the most feasible approach was to focus on research into electromagnetic sensors featuring promising characteristics for the given blade materials in a steam environment. The following types of sensors have been developed and studied both experimentally and theoretically at the Institute of Thermodynamics, Academy of Sciences of the Czech Republic: eddy-current, Hall effect, inductive, and magnetoresistive. Eddy-current transducers demand a small distance of 1 to 2 mm and change their properties in the harsh environment of steam turbines. Hall effect sensors have relatively low sensitivity and high values of offset, drift, and especially noise. Induction sensors do not require any supply current and have a simple construction; however, the magnitude of their output voltage depends on the velocity of the measured body and, concurrently, on the varying magnetic induction, so they cannot be used statically. Magnetoresistive sensors are formed by magnetoresistors arranged in a Wheatstone bridge; supplying the sensor from a current source provides better linearity. The MR sensors can be used permanently at temperatures up to 200 °C at lower supply currents of about 1 mA. Their frequency range of 0 to 300 kHz is an order of magnitude higher than that of the Hall effect and induction sensors. The frequency band starts at zero frequency, which is very important because the sensors can be calibrated statically. The MR sensors feature high sensitivity and low noise, and the symmetry of the bridge arrangement leads to a high common mode rejection ratio and the suppression of disturbances, which is important especially in industrial applications. Magnetoresistive transducers thus provide a range of excellent properties indicating their priority for displacement measurements of rotating blades in turbomachines.
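
As a sketch of why the current-source-fed bridge is attractive, the differential output of a full magnetoresistive Wheatstone bridge is simply I·ΔR, linear in the resistance change (component values below are illustrative):

```python
def bridge_output(i_supply, r0, dr):
    """Full bridge: two arms increase by dr, two decrease by dr.
    Each half-bridge carries i_supply/2, so Vout = i_supply * dr."""
    r_plus, r_minus = r0 + dr, r0 - dr
    return i_supply / 2 * (r_plus - r_minus)

r0 = 1000.0   # nominal magnetoresistor value, ohm (assumed)
i = 1e-3      # 1 mA supply current, as mentioned in the abstract
for dr in (0.0, 1.0, 5.0):   # resistance change induced by the passing blade tip
    print(f"dR = {dr:4.1f} ohm -> Vout = {bridge_output(i, r0, dr) * 1e3:.2f} mV")
```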

Keywords: turbines, blade vibration, blade tip timing, non-contact sensors, magnetoresistive sensors

Procedia PDF Downloads 129
1013 Modeling Residual Modulus of Elasticity of Self-Compacted Concrete Using Artificial Neural Networks

Authors: Ahmed M. Ashteyat

Abstract:

Artificial Neural Network (ANN) models have been widely used in material modeling, inter-correlations, and behavior and trend predictions when the nonlinear relationship between system parameters cannot be quantified explicitly and mathematically. In this paper, an ANN was used to predict the residual modulus of elasticity (RME) of self-compacted concrete (SCC) damaged by heat. The ANN model was built, trained, tested, and validated using a total of 112 experimental data sets gathered from the available literature. The data used in model development included temperature, relative humidity conditions, mix proportions, filler types, and fiber type. The results of ANN training, testing, and validation indicated that the RME of SCC exposed to different temperature and relative humidity levels can be predicted accurately with ANN techniques. The reliability between the predicted outputs and the actual experimental data was 99%. This shows that ANN has strong potential as a feasible tool for predicting the residual elastic modulus of SCC damaged by heat within the range of the input parameters. The ANN model could be used to estimate the RME of SCC as a rapid, inexpensive substitute for the much more complicated and time-consuming direct measurement of the RME of SCC.
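
A sketch of such a modeling pipeline with synthetic stand-in data (the study's 112 experimental records are not reproduced here, and the network size is assumed):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Invented inputs (temperature, relative humidity, w/c ratio) and a fake RME
# relation, used only to show the train/test workflow.
rng = np.random.default_rng(0)
X = rng.uniform([20, 30, 0.3], [800, 100, 0.6], size=(112, 3))
y = 35.0 - 0.03 * X[:, 0] + rng.normal(0, 1, 112)   # fake RME, GPa

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```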

Keywords: residual modulus of elasticity, artificial neural networks, self compacted-concrete, material modeling

Procedia PDF Downloads 536
1012 Convergence Analysis of a Gibbs Sampling Based Mix Design Optimization Approach for High Compressive Strength Pervious Concrete

Authors: Jiaqi Huang, Lu Jin

Abstract:

Pervious concrete features a high water permeability rate. However, due to the lack of fine aggregates, its compressive strength is usually lower than that of other conventional concrete products. Optimization of the pervious concrete mix design has long been recognized as an effective mechanism to achieve high compressive strength while maintaining the desired permeability rate. In this paper, a Gibbs Sampling based algorithm is proposed to approximate the optimal mix design for achieving a high compressive strength of pervious concrete. We prove that the proposed algorithm efficiently converges to the set of global optimal solutions. The convergence rate and accuracy depend on a control parameter employed in the proposed algorithm. The simulation results show that, by using the proposed approach, the system converges to the optimal solution quickly, and the derived optimal mix design achieves the maximum compressive strength while maintaining the desired permeability rate.
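
A Gibbs-style coordinate-resampling sketch over a discretized mix design space (the strength surrogate and the temperature-like control parameter T are invented stand-ins for the paper's model):

```python
import math
import random

random.seed(0)
grid = [round(0.20 + 0.02 * k, 2) for k in range(11)]   # candidate ratios

def strength(cement, water, paste):   # made-up strength surrogate, MPa
    return (40 - 80 * (water - 0.30) ** 2 - 60 * (cement - 0.36) ** 2
            - 30 * (paste - 0.28) ** 2)

state, T = {"cement": 0.20, "water": 0.20, "paste": 0.20}, 0.5
for sweep in range(200):
    for var in state:                 # resample one coordinate at a time
        weights = []
        for v in grid:
            trial = dict(state, **{var: v})
            weights.append(math.exp(strength(**trial) / T))
        state[var] = random.choices(grid, weights=weights)[0]

print(state, "->", round(strength(**state), 2), "MPa (surrogate)")
```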

Keywords: convergence, Gibbs Sampling, high compressive strength, optimal mix design, pervious concrete

Procedia PDF Downloads 182
1011 Study on the Application of Lime to Improve the Rheological Properties of Polymer Modified Bitumen

Authors: A. Chegenizadeh, M. Keramatikerman, H. Nikraz

Abstract:

Bitumen is one of the most widely used materials in pavement engineering. It is a binding material with unique viscoelastic properties, especially when mixed with polymer. In this study, to characterize the viscoelastic behaviour of polymer modified bitumen (PMB), a series of dynamic shear rheological (DSR) tests was conducted. Four percentages of lime (1%, 2%, 4% and 5%) were mixed with PMB and tested at four different temperatures: 64ºC, 70ºC, 76ºC and 82ºC. The results indicated that the complex shear modulus (G*) increased with increasing frequency due to increased resistance to deformation. The phase angle (δ) showed a decreasing trend with increasing frequency. The addition of lime increased the complex modulus value and decreased the phase angle. Increasing the temperature decreased the complex modulus and increased the phase angle up to 70ºC. The decreasing trend of the rutting factor with increasing temperature revealed that the rutting factor improved with the addition of lime to the PMB.
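
The DSR quantities combine into the Superpave rutting factor G*/sin δ, where higher values indicate better rutting resistance; a small sketch (the readings are illustrative, not the measured data):

```python
import math

readings = [  # (temperature degC, complex modulus G* in kPa, phase angle in deg)
    (64, 2.40, 82.0),
    (70, 1.10, 84.5),
    (76, 0.55, 86.0),
    (82, 0.28, 87.0),
]
for temp, g_star, delta in readings:
    rutting = g_star / math.sin(math.radians(delta))
    print(f"{temp} degC: G*/sin(delta) = {rutting:.2f} kPa")
```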

Keywords: rheological properties, DSR test, polymer mixed with bitumen (PMB), complex modulus, lime

Procedia PDF Downloads 189
1010 Vortex Separator for More Accurate Air Dry-Bulb Temperature Measurement

Authors: Ahmed N. Shmroukh, I. M. S. Taha, A. M. Abdel-Ghany, M. Attalla

Abstract:

The application of fog systems for cooling and humidification is still limited, although these systems require less initial cost than other cooling systems such as pad-and-fan systems. The undesirable relative humidity and air temperature inside the cooled or humidified space are the main reasons for this limited use, and they result from the poor control of fog systems. Any accurate control system essentially needs the air dry-bulb temperature as an input parameter; therefore, the air dry-bulb temperature in the space needs to be measured accurately. The scope of the present work is the separation of fog droplets from the air in a fogged space so as to measure the air dry-bulb temperature accurately. The separation is done in a small device inside which the sensor of the temperature measuring instrument is positioned. A vortex separator was designed and used, and another reference device was used for measuring the air temperature without separation. A comparative study was performed to determine the device which leads to the most accurate measurement of the air dry-bulb temperature. The results showed that the proposed devices corrected the measured air dry-bulb temperature relative to that of the free junction, with the vortex device performing best: it increased the temperature measured by the free junction by around 2 to 6 °C for different fog on-off durations.

Keywords: fog systems, measuring air dry bulb temperature, temperature measurement, vortex separator

Procedia PDF Downloads 296
1009 Actual Fracture Length Determination Using a Technique for Shale Fracturing Data Analysis in Real Time

Authors: M. Wigwe, M. Y. Soloman, E. Pirayesh, R. Eghorieta, N. Stegent

Abstract:

The moving reference point (MRP) technique has been used in the analysis of the first three stages of two fracturing jobs. The results obtained verify the proposition that a hydraulic fracture in shale grows in spurts rather than in the continuous pattern originally interpreted by the Nolte-Smith technique. Rather than a continuous Mode I fracture that is followed by Mode II, III or IV fractures, these fracture modes can alternate throughout the pumping period. It is also shown that the Nolte-Smith time parameter plot can be very helpful in identifying the presence of natural fractures that have been intersected by the hydraulic fracture. In addition, with the aid of a fracture length-time plot generated from any fracture simulation that matches the data, the distance from the wellbore to the natural fractures, which also translates to the actual fracture length for the stage, can be determined. An algorithm for this technique is developed. This procedure was applied to the first 9 minutes of the simulated frac job data. It was observed that after 7 minutes, the actual fracture length is about 150 ft, instead of the 250 ft predicted by the simulator output, and this difference grows larger as the analysis proceeds.

Keywords: shale, fracturing, reservoir, simulation, frac-length, moving-reference-point

Procedia PDF Downloads 757
1008 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while keeping the solution matrix structure, and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can make exact detection of outliers in the case of no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of outliers, and the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered "clean" data from the proposed method can give much better parameter estimation compared with that based on the raw data.
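
A sketch of the underlying low-rank-plus-sparse split, solved here by simple alternating singular-value and soft thresholding rather than the paper's structured SDP algorithm (data and thresholds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
L_true = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))   # rank-2 "dynamics"
S_true = np.zeros((60, 40))
S_true[rng.random((60, 40)) < 0.02] = 10.0                     # sparse outliers
D = L_true + S_true                                            # observed data

lam = 1.0 / np.sqrt(60)
L, S = np.zeros_like(D), np.zeros_like(D)
for _ in range(100):
    U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
    L = U @ np.diag(np.maximum(s - 1.0, 0)) @ Vt               # singular value thresholding
    S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam, 0)    # soft threshold -> sparsity

print("detected outliers:", int((np.abs(S) > 1).sum()),
      "true:", int(S_true.sum() / 10))
```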

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 308
1007 Unscrupulous Intermediaries in International Labour Migration of Nepal

Authors: Anurag Devkota

Abstract:

Foreign employment serves as the strongest pillar in generating employment options for a large number of the young Nepali population. Nepali workers are forced to leave the comfort of their homes and are exposed to precarious conditions on a journey to earn enough money to better their lives. The exponential rise in foreign labour migration has produced a snowball effect on the economy of the nation. The dramatic variation in the economic development of the state establishes that migration is increasingly significant for livelihood, economic development, political stability, academic discourse, and policy planning in Nepal. Foreign employment practice in Nepal largely incorporates the role of individual agents in the entire process of migration. With the fraudulent acts and false promises of these agents, the problems associated with every Nepali migrant worker start at home. The workers encounter tremendous pre-departure malpractice and exploitation at home by different individual agents during different stages of processing. Although these epidemic and repetitive ill activities of intermediaries are dominant and deeply rooted, the agents have been allowed to walk free in the absence of proper laws to curb their wrongdoing and misconduct. It has been found that the existing regulatory mechanisms have not been utilised to their full efficacy and often fall short of addressing the actual concerns of the workers because of the complex legal and judicial procedures. Structural changes in the judicial setting will help bring perpetrators under the law and move victims towards access to justice. Thus, a qualitative improvement of the overall situation of Nepali migrant workers calls for a proper 'regulatory' arrangement vis-à-vis these brokers. Hence, the author aims to carry out a doctrinal study using reports and scholarly articles as the major sources of data collection. Various reports published by different non-governmental and governmental organizations working in the field of labour migration will be examined, and the research will focus on inductive and deductive data analysis. The real challenge of establishing a pro-migrant-worker regime in recent times is to bring the agents under the jurisdiction of the court in Nepal. The Gulf Visit Study Report, 2017, prepared and launched by the International Relations and Labour Committee of the Legislature-Parliament of Nepal, finds that solving the problems at home solves 80 percent of the problems concerning migrant workers in Nepal. Against this backdrop, this research study is intended to determine ways and measures to curb the role of agents in the foreign employment and labour migration process of Nepal. It will further dig deeper into the regulatory mechanisms of Nepal and map out the essential determinants behind the impunity of agents.

Keywords: foreign employment, labour migration, human rights, migrant workers

Procedia PDF Downloads 116
1006 Topology Optimization of the Interior Structures of Beams under Various Load and Support Conditions with Solid Isotropic Material with Penalization Method

Authors: Omer Oral, Y. Emre Yilmaz

Abstract:

Topology optimization is an approach that optimizes the material distribution within a given design space for certain loads and boundary conditions by setting performance goals. It uses various restrictions, such as boundary conditions, sets of loads, and constraints, to maximize the performance of the system. It differs from size and shape optimization methods but retains some features of both. In this study, the interior structures of parts were optimized using the SIMP (Solid Isotropic Material with Penalization) method. The volume of the part was a preassigned parameter, and minimum deflection was the objective function. The basic idea behind the theory was considered, and different methods were discussed. The Rhinoceros 3D design tool was used with the Grasshopper and TopOpt plugins to create and optimize parts. A Grasshopper algorithm was designed and tested for different beams, sets of arbitrarily located forces, and support types such as pinned, fixed, etc. Finally, 2.5D shapes were obtained and verified by observing the changes in the density function.
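
The two core SIMP ingredients are the penalized stiffness interpolation and the optimality criteria density update; a minimal sketch with stand-in sensitivities in place of a finite element solve:

```python
import numpy as np

E0, Emin, penal, volfrac = 1.0, 1e-9, 3.0, 0.4
rho = np.full(100, volfrac)                   # element densities

def young(rho):
    """SIMP: intermediate densities are penalized toward 0 or 1."""
    return Emin + rho ** penal * (E0 - Emin)

dc = -np.linspace(2.0, 0.1, 100)              # stand-in compliance sensitivities
for _ in range(30):                           # optimality criteria update
    l1, l2 = 0.0, 1e9
    while l2 - l1 > 1e-6:                     # bisection on the Lagrange multiplier
        lmid = 0.5 * (l1 + l2)
        rho_new = np.clip(rho * np.sqrt(-dc / lmid), 0.0, 1.0)
        l1, l2 = (lmid, l2) if rho_new.mean() > volfrac else (l1, lmid)
    rho = rho_new

print("volume fraction:", rho.mean().round(3))
```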

Keywords: Grasshopper, lattice structure, microstructures, Rhinoceros, solid isotropic material with penalization method, TopOpt, topology optimization

Procedia PDF Downloads 139
1005 Kinetic and Thermodynamic Modified Pectin with Chitosan by Forming Polyelectrolyte Complex Adsorbent to Remediate of Pb(II)

Authors: Budi Hastuti, Mudasir, Dwi Siswanta, Triyono

Abstract:

Biosorbents such as pectin and chitosan are usually produced with low physical stability; thus, the materials need to be modified. In this research, the physical stability of the adsorbent was increased by grafting chitosan with acetate to obtain carboxymethyl chitosan (CC). Further, CC and pectin (Pec) were crosslinked using the cross-linking agent BADGE (bisphenol A diglycidyl ether) to obtain the CC-Pec-BADGE (CPB) adsorbent. The cross-linking process aims to form a stable structure that is resistant in acidic media. Furthermore, in order to increase the adsorption capacity for the removal of Pb(II), NaCl was added to the adsorbent to form a macroporous adsorbent named CC-Pec-BADGE-Na (CPB-Na). The physical and chemical characteristics of the porogenic adsorbent structure were characterized by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR). The adsorption parameters of CPB-Na for the adsorption of Pb(II) ions were determined. The kinetics and thermodynamics of the batch sorption of Pb(II) on the CPB-Na adsorbent, with chitosan and pectin as a comparison, were also studied. The results showed that the CPB-Na biosorbent was stable in acidic media; it had a rough and porous surface and gave a higher sorption capacity for the removal of Pb(II) ions. The CPB-Na 1/1 and 1/3 adsorbents adsorbed Pb(II) with adsorption capacities of 45.48 mg/g and 45.97 mg/g, respectively, whereas pectin and chitosan gave 39.20 mg/g and 24.67 mg/g, respectively.

Keywords: porogen, pectin, carboxymethyl chitosan (CC), CC-Pec-BADGE-Na

Procedia PDF Downloads 159
1004 Bayesian Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

This paper describes Bayesian estimation using Markov Chain Monte Carlo and Lindley's approximation, together with maximum likelihood estimation, for the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although it can be solved by numerical methods. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be obtained analytically. Hence the Markov Chain Monte Carlo method and Lindley's approximation are used: the full conditional distributions of the parameters of the Weibull distribution are obtained via Gibbs sampling and the Metropolis-Hastings (MH) algorithm, followed by estimation of the survival and hazard functions. The methods are compared with their maximum likelihood counterparts, and the comparisons are made with respect to the mean squared error (MSE) and absolute bias to determine the better method for the scale and shape parameters and the survival and hazard functions.
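
A random-walk Metropolis-Hastings sketch for the Weibull parameters under Type-I censoring, with vague priors and simulated data (the proposal scales and the censoring time are illustrative):

```python
import math
import random

random.seed(1)
k_true, lam_true, tc = 1.8, 10.0, 12.0          # shape, scale, censoring time
raw = [random.weibullvariate(lam_true, k_true) for _ in range(100)]
obs = [(min(t, tc), t < tc) for t in raw]       # (observed time, failed?)

def log_lik(k, lam):
    """Weibull log-likelihood with Type-I right censoring at tc."""
    if k <= 0 or lam <= 0:
        return -math.inf
    ll = 0.0
    for t, failed in obs:
        z = (t / lam) ** k
        ll += (math.log(k / lam) + (k - 1) * math.log(t / lam) - z) if failed else -z
    return ll

k, lam, samples = 1.0, 5.0, []
for i in range(20000):
    k_p, lam_p = k + random.gauss(0, 0.1), lam + random.gauss(0, 0.3)
    if math.log(random.random()) < log_lik(k_p, lam_p) - log_lik(k, lam):
        k, lam = k_p, lam_p                     # accept the proposal
    if i > 5000:                                # discard burn-in
        samples.append((k, lam))

print("posterior means:", [round(sum(s) / len(samples), 2) for s in zip(*samples)])
surv8 = sum(math.exp(-(8.0 / lam) ** k) for k, lam in samples) / len(samples)
print(f"posterior mean survival S(8) = {surv8:.3f}")
```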

Keywords: Weibull distribution, Bayesian method, Markov chain Monte Carlo, survival and hazard functions

Procedia PDF Downloads 479
1003 Red Blood Cells Deformability: A Chaotic Process

Authors: Ana M. Korol, Bibiana Riquelme, Osvaldo A. Rosso

Abstract:

Since erythrocyte deformability analysis is mostly qualitative, the development of quantitative nonlinear methods is crucial for restricting subjectivity in the study of cell behaviour. An electro-optic mechanical system called an erythrodeformeter has been developed and constructed in our laboratory in order to evaluate erythrocyte viscoelasticity. A numerical method formulated on the basis of fractal approximation for ordinary (OBM) and fractional Brownian motion (FBM), as well as wavelet transform analysis, is proposed to distinguish chaos from noise, based on the assumption that diffractometric data involve both deterministic and stochastic components and can therefore be modelled as a system of bounded correlated random walks. Here we report studies on 25 donors: 4 alpha-thalassaemic patients, 11 beta-thalassaemic patients, and 10 healthy controls (non-alcoholic, non-smoking individuals). The correlation coefficient, a nonlinear parameter, showed evidence of changes in erythrocyte deformability; the wavelet entropy could quantify those differences, which are detected by the light diffraction patterns. Such quantifiers show a good deal of promise and open the possibility of a better understanding of the rheological aspects of erythrocytes, and could also help in clinical diagnosis.
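
A sketch of the wavelet entropy quantifier, computed as the Shannon entropy of the relative energies across decomposition levels (the signal is a synthetic stand-in for a diffractometric series; the PyWavelets package is assumed available):

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 2048)
signal = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.normal(size=t.size)

coeffs = pywt.wavedec(signal, "db4", level=6)
energies = np.array([np.sum(c ** 2) for c in coeffs])
p = energies / energies.sum()                  # relative wavelet energies
wavelet_entropy = -np.sum(p * np.log(p))       # Shannon entropy over levels
print(f"normalized wavelet entropy: {wavelet_entropy / np.log(len(p)):.3f}")
```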

Keywords: red blood cells, deformability, nonlinear dynamics, chaos theory, wavelet transform

Procedia PDF Downloads 60
1002 A Qualitative Study to Analyze Clinical Coders’ Decision Making Process of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding impairs accurate estimation of the actual burden of ADEs, coded data remain far more feasible for estimating ADE admissions than the other methods. It is therefore necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify the reasons for the under-coding of ADE admissions rests on understanding the decision-making process of coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders at different levels of the coding job, such as trainee, intermediate, and advanced level coders, were purposefully selected for the interviews. Thirteen clinical coders were recruited from two Auckland region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews with the clinical coders revealed that the coders have targets to meet and sometimes hesitate to adhere to the coding standards. Coders deviate from the standard coding processes to make a decision. Coders avoid contacting the doctors to clarify small doubts, such as ADEs and the names of medications, because of the delay in getting a reply; they prefer to do some research themselves or to take help from their seniors and colleagues in making a decision, because this avoids a long wait for a reply from the doctors. Coders think of an ADE as a small thing. Lack of time for searching for information to confirm an ADE admission and inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, may contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, making pharmacists' services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers to make informed decisions about these improvements. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of clinical coders. This country-specific research conducted in New Zealand may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus change and improvement initiatives.

Keywords: adverse drug events, clinical coders, decision making, hospital admissions

Procedia PDF Downloads 120
1001 A Particle Filter-Based Data Assimilation Method for Discrete Event Simulation

Authors: Zhi Zhu, Boquan Zhang, Tian Jing, Jingjing Li, Tao Wang

Abstract:

Data assimilation is a hybrid model- and data-driven method that dynamically fuses new observation data with a numerical model to iteratively approach the real system state. It is widely used in state prediction and parameter inference for continuous systems. Because of the non-linearity and non-Gaussianity of discrete event systems, the traditional Kalman filter, which is based on linear and Gaussian assumptions, cannot perform data assimilation for such systems, so the particle filter has gradually become the technical approach for discrete event simulation data assimilation. Hence, we propose a particle filter-based discrete event simulation data assimilation method and take an unmanned aerial vehicle (UAV) maintenance service system as a proof of concept for simulation experiments. The experimental results showed that the filtered state data are closer to the real state of the system, which verifies the effectiveness of the proposed method. This research can provide a reference framework for the data assimilation process of other complex nonlinear systems, such as discrete-time and agent-based simulations.
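
A bootstrap particle filter sketch on a toy maintenance-queue state (the transition and observation models are invented placeholders, not the paper's UAV simulation model):

```python
import math
import random

random.seed(0)
N = 1000
particles = [random.randint(0, 5) for _ in range(N)]   # e.g., UAVs awaiting service

def step(x):
    """Stochastic state transition: arrivals and completed repairs."""
    return max(0, x + random.choice([-1, 0, 1]))

def likelihood(obs, x):
    """Gaussian-like weight for a noisy count observation."""
    return math.exp(-0.5 * (obs - x) ** 2)

for obs in [3, 4, 4, 5, 6, 5]:                         # incoming observations
    particles = [step(x) for x in particles]           # predict
    weights = [likelihood(obs, x) for x in particles]  # update
    particles = random.choices(particles, weights=weights, k=N)  # resample
    print(f"obs={obs}  filtered estimate={sum(particles) / N:.2f}")
```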

Keywords: discrete event simulation, data assimilation, particle filter, model and data-driven

Procedia PDF Downloads 20
1000 Linguistic Cyberbullying, a Legislative Approach

Authors: Simona Maria Ignat

Abstract:

Online bullying has been an increasingly studied topic in recent years, with different psychological, linguistic, and computational approaches applied. To the best of our knowledge, an internationally agreed definition and set of characteristics of the phenomenon, usable as a common framework, are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and of their algorithms. This research paper is focused on the identification of words or groups of words, categorized as 'utterances', with bullying effect, from the Twitter platform, extracted according to a set of legislative criteria. This set is the result of analysis followed by synthesis of law documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective were the following. Discourse analysis was applied to identify keywords with bullying effect in texts from the Google search engine, Images link. Transcription and anonymization were applied to texts grouped in CL1 (Corpus Linguistics 1). The keyword search method and the legislative criteria were used to identify bullying utterances on Twitter, and texts with at least 30 representations on Twitter were grouped; they form the second linguistic corpus, Bullying Utterances from Twitter (CL2). The entries were identified using the legislative criteria on the BoW method principle; the BoW (bag of words) is a method of extracting words or groups of words with the same meaning in any context. The methods applied for the second objective were the conversion of parts of speech to alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of the parts of speech was chosen on the criterion of relevance within the bullying message. An inductive reasoning approach was applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation of somebody, expressed at the level of texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online; this research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics like threat, harassment, assault, and suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic outcomes of the dataset are a valuable source for detection, and the analysis of content revealed algorithms and lexicons which could be applied to other harmful content. A third outcome of content concerns stylistics, which is a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.
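
A sketch of the BoW keyword-matching step (the tweets and lexicon entries below are invented placeholders, not items from the CL1 or CL2 corpora):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical lexicon of bullying utterances and sample tweets.
lexicon = ["loser", "nobody likes you", "shut up"]
tweets = [
    "you are such a loser, nobody likes you",
    "great game last night, see you all soon",
]

# ngram_range must cover the longest multi-word utterance in the vocabulary.
vectorizer = CountVectorizer(vocabulary=lexicon, ngram_range=(1, 3))
counts = vectorizer.fit_transform(tweets)
for tweet, row in zip(tweets, counts.toarray()):
    hits = {u: int(n) for u, n in zip(lexicon, row) if n}
    print(f"{tweet!r} -> matched utterances: {hits}")
```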

Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter

Procedia PDF Downloads 86
999 Weighted Risk Scores Method Proposal for Occupational Safety Risk Assessment

Authors: Ulas Cinar, Omer Faruk Ugurlu, Selcuk Cebi

Abstract:

Occupational safety risk management is the most important element of a safe working environment. Effective risk management is only possible with accurate analyses and evaluations. Scoring-based risk assessment methods offer considerable ease of application, as they convert linguistic expressions into numerical results, and they can be easily adapted to any field. Despite all these advantages, important problems with scoring-based methods are frequently discussed, and effective measurability is one of the most critical. Existing methods allow experts to choose a score equivalent for each parameter; experts therefore prefer the score of the most likely outcome for a risk, and all other possible consequences are neglected. As a result, the assessments of existing methods express the most probable level of risk, not the real risk of the enterprise. This study aims to develop a method that presents a more comprehensive evaluation than existing methods by evaluating the probability and severity scores, all sub-parameters, and all potential results, and a new scoring-based method is proposed to the literature.
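
A sketch contrasting the most-likely-outcome score with a probability-weighted score over all potential consequences (the hazard data are invented for illustration):

```python
# Each hazard lists (probability, severity score) for its potential outcomes.
hazards = {
    "roof fall": [(0.70, 2), (0.25, 6), (0.05, 10)],
    "gas leak":  [(0.90, 1), (0.08, 7), (0.02, 10)],
}

for name, outcomes in hazards.items():
    classic = max(outcomes)[1]                  # score of the most likely outcome only
    weighted = sum(p * s for p, s in outcomes)  # expectation over all outcomes
    print(f"{name}: most-likely-only score = {classic}, weighted score = {weighted:.2f}")
```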

Keywords: occupational health and safety, risk assessment, scoring based risk assessment method, underground mining, weighted risk scores

Procedia PDF Downloads 136
998 The Dynamics of Unsteady Squeezing Flow between Parallel Plates (Two-Dimensional)

Authors: Jiya Mohammed, Ibrahim Ismail Giwa

Abstract:

Unsteady squeezing flow of a viscous fluid between parallel plates is considered. The two plates approach each other symmetrically, causing the squeezing flow. A two-dimensional rectangular Cartesian coordinate system is used. The Navier-Stokes equations were reduced, using a similarity transformation, to a single fourth-order non-linear ordinary differential equation, and the energy equation was transformed to a second-order coupled differential equation. We obtained solutions to the resulting ordinary differential equations via the Homotopy Perturbation Method (HPM). HPM deforms a differential problem into a set of problems that are easier to solve, and it produces an analytic approximate expression in the form of an infinite power series, using only six and five terms for the velocity and temperature, respectively. The results reveal that the proposed method is very effective and simple. Comparisons between the present and existing solutions are provided, and it is shown that the proposed method is in good agreement with the Variation of Parameters Method (VPM). The effects of the relevant dimensionless parameters on the velocity profiles and temperature field are demonstrated with the aid of comprehensive graphs and tables.

Keywords: coupled differential equation, Homotopy Perturbation Method, plates, squeezing flow

Procedia PDF Downloads 475
997 Detection of Total Aflatoxin in Flour of Wheat and Maize Samples in Albania Using ELISA

Authors: Aferdita Dinaku, Jonida Canaj

Abstract:

Aflatoxins are potentially toxic metabolites produced by certain kinds of fungi (molds) found naturally all over the world; they can contaminate food crops and pose a serious health threat to humans through mutagenic and carcinogenic effects. Several types of aflatoxin (14 or more) occur in nature. In Albanian nutrition, cereals (especially wheat and corn) are common ingredients in some traditional meals. This study aimed to investigate the presence of aflatoxins in the flour of wheat and maize consumed in Albania's markets. The samples were collected randomly in different markets in Albania and analyzed by the ELISA method, with absorbance measured at 450 nm. The concentration of total aflatoxins was determined by enzyme-linked immunosorbent assay (ELISA) and ranged between 0.05 and 1.09 ppb. The screened mycotoxin levels in the samples were thus lower than the maximum permissible limits of European Commission Regulation No 1881/2006 (4 μg/kg). The linearity of the calibration curves was good for total aflatoxins (B1, B2, G1, G2, M1) (R² = 0.99) in the concentration range 0.005-4.05 ppb. The samples were analyzed in two replicate measurements, and for each sample the standard deviation was calculated. The results showed that the flour samples are safe, but performing such tests regularly remains necessary.
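
A least-squares calibration sketch of the kind behind the reported linearity (the standard readings are invented values within the reported 0.005-4.05 ppb range; a log-linear competitive-ELISA response is assumed):

```python
import numpy as np

# Standards: concentration (ppb) vs absorbance at 450 nm; in a competitive
# ELISA the signal falls as concentration rises.
conc = np.array([0.005, 0.05, 0.45, 1.35, 4.05])
absorbance = np.array([1.95, 1.80, 1.30, 0.85, 0.35])

slope, intercept = np.polyfit(np.log(conc), absorbance, 1)
pred = slope * np.log(conc) + intercept
r2 = 1 - np.sum((absorbance - pred) ** 2) / np.sum((absorbance - absorbance.mean()) ** 2)
print(f"calibration R^2 = {r2:.3f}")

# Interpolate an unknown sample reading back to a concentration.
sample_abs = 1.1
print(f"sample concentration ~= {np.exp((sample_abs - intercept) / slope):.2f} ppb")
```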

Keywords: aflatoxins, ELISA technique, food contamination, flour

Procedia PDF Downloads 158