Search results for: linear cascade
2621 Bartlett Factor Scores in Multiple Linear Regression Equation as a Tool for Estimating Economic Traits in Broilers
Authors: Oluwatosin M. A. Jesuyon
Abstract:
In order to propose a simpler tool that eliminates the age-long problems associated with the traditional index method for selection of multiple traits in broilers, the Bartlett factor regression equation is proposed as an alternative selection tool. 100 day-old chicks each of Arbor Acres (AA) and Annak (AN) broiler strains were obtained from two rival hatcheries in Ibadan, Nigeria. These were raised in a deep-litter system in a 56-day feeding trial at the University of Ibadan Teaching and Research Farm, located in South-west Tropical Nigeria. Body weight and body dimensions were measured and recorded during the trial period. Eight (8) zoometric measurements, namely live weight (g), abdominal circumference, abdominal length, breast width, leg length, height, wing length and thigh circumference (all in cm), were recorded from 20 randomly selected birds within each strain, at a fixed time on the first day of each week, with a 5-kg capacity Camry scale. These records were analyzed and compared using a completely randomized design (CRD) in SPSS analytical software, with the means procedure and Factor Scores (FS) in a stepwise Multiple Linear Regression (MLR) procedure for the initial live-weight equations. Bartlett Factor Score (BFS) analysis extracted 2 factors for each strain, termed Body-Length and Thigh-Meatiness Factors for AA, and Breast-Size and Height Factors for AN. These derived orthogonal factors assisted in deducing and comparing trait combinations that best describe body conformation and meatiness in the experimental broilers. The BFS procedure yielded different body conformational traits for the two strains, thus indicating the different economic traits and advantages of the strains. These factors could be useful as selection criteria for improving desired economic traits. The final Bartlett factor regression equations for prediction of body weight were highly significant with P < 0.0001, R² of 0.92 and above, VIF of 1.00, and DW of 1.90 and 1.47 for Arbor Acres and Annak respectively. These factor score regression (FSR) equations could be used as a simple and potent tool for selection during poultry flock improvement; they could also be used to estimate the selection index of flocks in order to discriminate between strains and to evaluate consumer preference traits in broilers.
Keywords: alternative selection tool, Bartlett factor regression model, consumer preference trait, linear and body measurements, live body weight
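Illustrative sketch (not from the paper): the factor-scores-then-regression pipeline described in this abstract can be prototyped in Python as below. The zoometric data, factor count and live-weight relationship are synthetic placeholders, and scikit-learn's FactorAnalysis returns generic latent-variable scores rather than the specific Bartlett-weighted scores used by the author.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# Placeholder zoometric data: rows = birds, columns = 7 body dimensions (cm).
rng = np.random.default_rng(0)
X = rng.normal(loc=[30, 18, 10, 12, 25, 15, 14], scale=1.5, size=(20, 7))
live_weight = 50 * X[:, 0] + 30 * X[:, 3] + rng.normal(0, 40, 20)  # synthetic live weight (g)

# Extract two orthogonal factors (analogous to the two factors found per strain).
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)          # factor scores, one row per bird

# Regress live weight on the factor scores (the "factor regression equation").
mlr = LinearRegression().fit(scores, live_weight)
print("R^2 =", mlr.score(scores, live_weight))
print("intercept, coefficients:", mlr.intercept_, mlr.coef_)
```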
Procedia PDF Downloads 203
2620 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication in the presence of practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases and the proposed SGPD code that provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
Procedia PDF Downloads 122
2619 Study and Solving High Complex Non-Linear Differential Equations Applied in the Engineering Field by Analytical New Approach AGM
Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili
Abstract:
In this paper, three complicated nonlinear differential equations (PDEs, ODEs) from the field of engineering and non-vibration problems have been analyzed and solved completely by a new method that we have named Akbari-Ganji's Method (AGM). As previously published papers show, investigating this kind of equation is a very hard task, and the obtained solutions are often not accurate and reliable. This issue emerges after comparing the achieved solutions with those of a Numerical Method. Based on the comparisons made between the solutions gained by AGM and the Numerical Method (4th-order Runge-Kutta), it is possible to indicate that AGM can be successfully applied to various differential equations, particularly difficult ones. Furthermore, the excellence of this method in comparison with other approaches can be summarized as follows: the results indicate that this approach is very effective and easy, and therefore it can be applied to other kinds of nonlinear equations, not only in vibrations but also in different fields of science such as fluid mechanics, solid mechanics, and chemical engineering, and a solution with high precision is acquired. With regard to the aforementioned explanations, the process of solving nonlinear equations will be very easy and convenient in comparison with other methods. Another important point explored in this paper is that, for trigonometric and exponential terms in the differential equation, AGM has no need for a Taylor series expansion to enhance the precision of the result.
Keywords: new method (AGM), complex non-linear partial differential equations, damping ratio, energy lost per cycle
Procedia PDF Downloads 469
2618 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator
Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi
Abstract:
Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when they are operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when incoming photons or electrons are incident on the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to investigate the effect of different parameters on the production of neutrons around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H Neutron Detector) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the positions of the primary and secondary walls. Other measurements were performed at the door and at the treatment console to address the potential radiation safety concerns of the therapists who must walk in and out of the room for the treatments. Exposures were delivered from Elekta Synergy® linear accelerators at two different energies (10 MV and 18 MV) for a given 200 MU and a dose rate of 600 MU per minute. Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, namely the jaw settings, since the jaws are made of high-atomic-number material and therefore provide significant photon interactions that produce neutrons, while doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses compared to the dose at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv·h⁻¹ to 13.2 mSv·h⁻¹ (at the isocenter), 6.21 mSv·h⁻¹ to 29.2 mSv·h⁻¹ (primary wall) and 8.73 mSv·h⁻¹ to 37.2 mSv·h⁻¹ (secondary wall) for 10 and 18 MV respectively. The ambient dose equivalent for neutrons at the door is 5 μSv·h⁻¹ to 2 μSv·h⁻¹, while at the treatment console it is 2 μSv·h⁻¹ to 0 μSv·h⁻¹ for 10 and 18 MV respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient shielding for neutrons at the door as well as at the treatment console for 10 and 18 MV photons.
Keywords: equivalent doses, neutron contamination, neutron detector, photon energy
Procedia PDF Downloads 449
2617 Generalized Linear Modeling of HCV Infection Among Medical Waste Handlers in Sidama Region, Ethiopia
Authors: Birhanu Betela Warssamo
Abstract:
Background: There is limited evidence on the prevalence and risk factors for hepatitis C virus (HCV) infection among waste handlers in the Sidama region, Ethiopia; however, this knowledge is necessary for the effective prevention of HCV infection in the region. Methods: A cross-sectional study was conducted among randomly selected waste collectors from October 2021 to 30 July 2022 in different public hospitals in the Sidama region of Ethiopia. Serum samples were collected from participants and screened for anti-HCV using a rapid immunochromatography assay. Socio-demographic and risk-factor information of waste handlers was gathered with pretested and well-structured questionnaires. A generalized linear model (GLM) was fitted using R software, and P < 0.05 was declared statistically significant. Results: Of a total of 282 participating waste handlers, 16 (5.7%) (95% CI, 4.2–8.7) were infected with the hepatitis C virus. The educational status of waste handlers was the significant demographic variable associated with the hepatitis C virus (AOR = 0.055; 95% CI = 0.012–0.248; P < 0.001). More married waste handlers (12, 75%) were HCV positive than unmarried ones (4, 25%), and married waste handlers were 2.051 times (OR = 2.051, 95% CI = 0.644–6.527, P = 0.295) more prone to HCV infection than unmarried ones, although this difference was statistically insignificant. The GLM showed that exposure to blood (OR = 8.26; 95% CI = 1.878–10.925; P = 0.037), multiple sexual partners (AOR = 3.63; 95% CI = 2.751–5.808; P = 0.001), sharp injury (AOR = 2.77; 95% CI = 2.327–3.173; P = 0.036), not using PPE (AOR = 0.77; 95% CI = 0.032–0.937; P = 0.001), contact with jaundiced patients (AOR = 3.65; 95% CI = 1.093–4.368; P = 0.0048) and unprotected sex (AOR = 11.91; 95% CI = 5.847–16.854; P = 0.001) remained statistically significantly associated with HCV positivity. Conclusions: The study revealed a high prevalence of hepatitis C virus infection among waste handlers in the Sidama region, Ethiopia. This demonstrates an urgent need to increase preventative efforts and strategic policy orientations to control the spread of the hepatitis C virus.
Keywords: Hepatitis C virus, risk factors, waste handlers, prevalence, Sidama, Ethiopia
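For readers unfamiliar with the modelling step, the sketch below fits a binomial GLM (logistic link) to a binary HCV-status outcome and converts the coefficients to adjusted odds ratios with 95% confidence intervals. The dataset and column names are invented placeholders, and the sketch uses Python's statsmodels rather than the R workflow used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder data mimicking the study's structure (n = 282 waste handlers).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "hcv_positive": rng.binomial(1, 0.06, 282),
    "blood_exposure": rng.binomial(1, 0.4, 282),
    "sharp_injury": rng.binomial(1, 0.3, 282),
    "uses_ppe": rng.binomial(1, 0.7, 282),
})

# Binomial GLM with a logit link, analogous to the paper's GLM fitted in R.
model = smf.glm("hcv_positive ~ blood_exposure + sharp_injury + uses_ppe",
                data=df, family=sm.families.Binomial()).fit()

# Adjusted odds ratios and their 95% confidence intervals.
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("AOR"), conf_int], axis=1))
```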
Procedia PDF Downloads 14
2616 Laboratory Findings as Predictors of St2 and NT-Probnp Elevations in Heart Failure Clinic, National Cardiovascular Centre Harapan Kita, Indonesia
Authors: B. B. Siswanto, A. Halimi, K. M. H. J. Tandayu, C. Abdillah, F. Nanda, E. Chandra
Abstract:
Nowadays, modern cardiac biomarkers, such as ST2 and NT-proBNP, play important roles in predicting morbidity and mortality in heart failure patients. Abnormalities of serum electrolytes, sepsis or infection, and deteriorating renal function will worsen the condition of patients with heart failure. It is intriguing to know whether cardiac biomarker elevations are affected by laboratory findings in heart failure patients. We recruited 65 patients from the heart failure clinic in NCVC Harapan Kita in 2014-2015. All of them consented to laboratory examination, including cardiac biomarkers. The findings were recorded in our Research and Development Centre and analyzed using linear regression to determine whether there is a relationship between laboratory findings (sodium, potassium, creatinine, and leukocytes) and ST2 or NT-proBNP. Of the 65 patients, 26.9% are female and 73.1% are male; 69.4% of patients are classified as NYHA I-II and 31.6% as NYHA III-IV. The mean age is 55.7±11.4 years; mean sodium level is 136.1±6.5 mmol/l; mean potassium level is 4.7±1.9 mmol/l; mean leukocyte count is 9184.7±3622.4 /µl; mean creatinine level is 1.2±0.5 mg/dl. In the linear regression analysis, the relationships between NT-proBNP and sodium level (p<0.001), as well as leukocyte count (p=0.002), are significant, while those between NT-proBNP and potassium level (p=0.05), as well as creatinine level (p=0.534), are not significant. The relationships between ST2 and sodium level (p=0.501), potassium level (p=0.76), leukocyte count (p=0.897), and creatinine level (p=0.817) are not significant. To conclude, laboratory findings are more sensitive in predicting NT-proBNP elevation than ST2 elevation. Larger studies are needed to prove that the correlation of NT-proBNP with laboratory findings is superior to that of ST2.
Keywords: heart failure, laboratory, NT-proBNP, ST2
Procedia PDF Downloads 340
2615 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case
Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang
Abstract:
In this real-world case, timetabling the LRT network as a whole is rather challenging for the operator: they are supposed to create a timetable to avoid various route conflicts manually while satisfying a given interval and the number of rolling stocks, but the outcome is not satisfying. Therefore, the operator adopts a computerised timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with various constraints in the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval and rolling stocks’ number, which requires extra manual intervention. Aiming at current problems, a one-off model for timetabling is presented in this paper to simplify the procedure of timetabling. Before the timetabling procedure starts, this paper presents how the dual-line system with a ring and several branches is turned into a simpler structure. Then, a non-linear programming model is presented in two stages. In the first stage, the model sets a series of constraints aiming to calculate a proper timing for coordinating two lines by adjusting the turnback time at termini. Then, based on the result of the first stage, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the possible minimum interval, observing that the more imbalance the ratio is, the less possible to provide frequent service under such strict constraints.Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination
Procedia PDF Downloads 87
2614 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement
Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini
Abstract:
Investigating possible capacities of visual functions where adapted mechanisms can enhance the capability of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of unlimited applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players in a pilot study were processed. Two groups of amateur and trained subjects are asked to imagine themselves in the state of receiving a ball while they are shown a simulated volleyball field. The proposed method is based on a set of time-frequency features using algorithms such as Gabor filter, continuous wavelet transform, and a multi-stage wavelet decomposition that are extracted from VEP signals that can be indicative of being amateur or trained. The linear discriminant classifier achieves the accuracy, sensitivity, and specificity of 100% when the average of the repetitions of the signal corresponding to the task is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving the volleyball players’ performance. The proposed measure has potential applications in brain-computer interface technology where a real-time biomarker is needed.Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis
Procedia PDF Downloads 138
2613 Comparing Double-Stranded RNA Uptake Mechanisms in Dipteran and Lepidopteran Cell Lines
Authors: Nazanin Amanat, Alison Tayler, Steve Whyard
Abstract:
While chemical insecticides effectively control many insect pests, they also harm many non-target species. Double-stranded RNA (dsRNA) pesticides, in contrast, can be designed to target unique gene sequences and thus act in a species-specific manner. DsRNA insecticides do not, however, work equally well for all insects, and for some species that are considered refractory to dsRNA, a primary factor affecting efficacy is the relative ease by which dsRNA can enter a target cell’s cytoplasm. In this study, we are examining how different structured dsRNAs (linear, hairpin, and paperclip) can enter mosquito and lepidopteran cells, as they represent dsRNA-sensitive and refractory species, respectively. To determine how the dsRNAs enter the cells, we are using chemical inhibitors and RNA interference (RNAi)-mediated knockdown of key proteins associated with different endocytosis processes. Understanding how different dsRNAs enter cells will ultimately help in the design of molecules that overcome refractoriness to RNAi or develop resistance to dsRNA-based insecticides. To date, we have conducted chemical inhibitor experiments on both cell lines and have evidence that linear dsRNAs enter the cells using clathrin-mediated endocytosis, while the paperclip dsRNAs (pcRNAs) can enter both species’ cells in a clathrin-independent manner to induce RNAi. An alternative uptake mechanism for the pcRNAs has been tentatively identified, and the outcomes of our RNAi-mediated knockdown experiments, which should provide corroborative evidence of our initial findings, will be discussed.Keywords: dsRNA, RNAi, uptake, insecticides, dipteran, lepidopteran
Procedia PDF Downloads 73
2612 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms
Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic
Abstract:
Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general are often lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. The conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecast shale gas production by detailed modeling of gas desorption, diffusion and non-linear flow mechanisms in combination with a statistical representation of these processes. The representation of the model involves a cube as a porous medium where free gas is present and a sphere (SiC: Sphere in Cube model) inside it where gas is adsorbed onto the kerogen or organic matter. Further, the sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. With pressure decline, the gas desorbs first from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, gets adsorbed onto the available surface area and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube. The diameter allows gas storage, diffusion and desorption to be modeled; the cube length takes into account the pathway for flow in nanopores and micro-fractures. Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes as well as clarifies the geological conditions under which successful shale gas production could be expected. A numerical model has been derived and then implemented in FORTRAN to develop a simulator for the production of shale gas by considering the spheres as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of model input properties on gas production.
Keywords: adsorption, diffusion, non-linear flow, shale gas production
Procedia PDF Downloads 165
2611 Non-Linear Assessment of Chromatographic Lipophilicity of Selected Steroid Derivatives
Authors: Milica Karadžić, Lidija Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Anamarija Mandić, Aleksandar Oklješa, Andrea Nikolić, Marija Sakač, Katarina Penov Gaši
Abstract:
Using a chemometric approach, the relationships between the chromatographic lipophilicity and in silico molecular descriptors for twenty-nine selected steroid derivatives were studied. The chromatographic lipophilicity was predicted using the artificial neural network (ANN) method. The most important in silico molecular descriptors were selected by applying stepwise selection (SS) paired with the partial least squares (PLS) method. Molecular descriptors with satisfactory variable importance in projection (VIP) values were selected for ANN modeling. The usefulness of the generated models was confirmed by detailed statistical validation. High agreement between experimental and predicted values indicated that the obtained models have good quality and high predictive ability. Global sensitivity analysis (GSA) confirmed the importance of each molecular descriptor used as an input variable. High-quality networks indicate a strong non-linear relationship between chromatographic lipophilicity and the in silico molecular descriptors used. By applying the selected molecular descriptors and the generated ANNs, a good prediction of the chromatographic lipophilicity of the studied steroid derivatives can be obtained. This article is based upon work from COST Actions (CM1306 and CA15222), supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, chemometrics, global sensitivity analysis, liquid chromatography, steroids
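A minimal sketch of the descriptor-selection-plus-ANN workflow described above, written with scikit-learn rather than the authors' original software; the descriptor matrix, the lipophilicity response, the train/test split and the network size are all assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Placeholder data: 29 steroid derivatives x 5 VIP-selected in silico descriptors.
rng = np.random.default_rng(42)
X = rng.normal(size=(29, 5))
y = X @ np.array([0.8, -0.5, 0.3, 0.1, 0.6]) + rng.normal(0, 0.1, 29)  # chromatographic lipophilicity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Small feed-forward ANN; scaling the descriptors is essential for stable training.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)
print("R^2 on held-out derivatives:", ann.score(X_te, y_te))
```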
Procedia PDF Downloads 345
2610 Society and Cinema in Iran
Authors: Seyedeh Rozhano Azimi Hashemi
Abstract:
There is no doubt that 'Art' is a social phenomenon, and cinema is the most social kind of art. Hence, it is clear that we can analyze the relation between cinema and society from different aspects. In this paper, the sociology of cinema, a subdivision of the sociology of art, will be investigated. This term will be discussed through two main approaches. One of these approaches focuses on the effects of cinema on society and is known as "effects theory"; the second one, which deals with the reflection of social issues in cinema, is called "reflection theory". The "reflection theory" approach, unlike "effects theory", considers movies as documents in which social life is reflected, and by analyzing them, the changes and tendencies of a society are understood. Criticizing these approaches to cinema and society does not mean that they are not valid. On the contrary, it proves that, for a better understanding of the relation between cinema and society, more complicated models are required, which should consider two aspects. First, they should be bilinear and should provide a dynamic and active relation between cinema and society, since in the current conception social life and cinema have bilinear effects on each other, and that is how they fit into a dialectic and dynamic process. Second, they should pay attention to the role of mediating elements such as small social institutions, marketing, advertisements, cultural patterns, art genres and popular cinema in society. In the current study, the image of the middle class in the cinema of Iran and the changing role of women in cinema and society, two prominent issues that cinema and society have faced from the 1979 revolution until the 1980s, are analyzed. Films as artworks are, on the one hand, reflections of social changes and, on the other hand, through their effects on society, try to speed up the trends of these changes. Cinema, by illustrating changes in ideologies and approaches in exaggerated ways and through its normalizing functions, prepares audiences and public opinion for the acceptance of these changes. Consequently, the audience is affected by this process, which is a bilinear and interactive one.
Keywords: Iranian Cinema, Cinema and Society, Middle Class, Woman's Role
Procedia PDF Downloads 340
2609 Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule
Authors: Ming-Jong Yao, Chin-Sum Shui, Chih-Han Wang
Abstract:
This paper is developed based on a real-world decision scenario that an industrial gas company that applies the Vendor Managed Inventory model and supplies liquid oxygen with a self-operated heterogeneous vehicle fleet to hospitals in nearby cities. We name it as a Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule and formulate it as a non-linear mixed-integer linear programming problem which simultaneously determines the length of the planning cycle (PC), the length of the replenishment cycle and the dates of replenishment for each customer and the vehicle routes of each day within PC, such that the average daily operation cost within PC, including inventory holding cost, setup cost, transportation cost, and overtime labor cost, is minimized. A solution method based on genetic algorithm, embedded with an encoding and decoding mechanism and local search operators, is then proposed, and the hash function is adopted to avoid repetitive fitness evaluation for identical solutions. Numerical experiments demonstrate that the proposed solution method can effectively solve the problem under different lengths of PC and number of customers. The method is also shown to be effective in determining whether the company should expand the storage capacity of a customer whose demand increases. Sensitivity analysis of the vehicle fleet composition shows that deploying a mixed fleet can reduce the daily operating cost.Keywords: cyclic inventory routing problem, joint replenishment, heterogeneous vehicle, genetic algorithm
Procedia PDF Downloads 87
2608 Investigation of Shear Thickening Fluid Isolator with Vibration Isolation Performance
Authors: M. C. Yu, Z. L. Niu, L. G. Zhang, W. W. Cui, Y. L. Zhang
Abstract:
According to the theory of vibration isolation for linear systems, linear damping can reduce the transmissibility at the resonant frequency but inescapably increases the transmissibility in the isolation frequency region. To resolve this problem, nonlinear vibration isolation technology has recently received increasing attention. Shear thickening fluid (STF) is a special colloidal material. When STF is subjected to a high shear rate, its rheological property changes from a flowable behavior into a rigid behavior, i.e., it presents a shear thickening effect. An STF isolator is a vibration isolator using STF as the working material. Because of the shear thickening effect, the STF isolator is a variable-damped isolator. It exhibits small damping under high vibration frequency and strong damping at the resonance frequency due to the increasing shear rate. Its special inherent character is therefore very favorable for vibration isolation, especially for restraining resonance. In this paper, firstly, STF was prepared by dispersing nanoparticles of silica into polyethylene glycol 200 fluid, followed by rheological property tests. After that, an STF isolator was designed. The vibration isolation system supported by the STF isolator was modeled, and numerical simulation was conducted to study the vibration isolation properties of STF. Finally, the factors affecting vibration isolation performance were also researched quantitatively. The research suggests that, owing to its variable damping, the STF vibration isolator can effectively restrain resonance without bringing unfavorable effects at high frequency, which meets the need for ideal damping properties and resolves the problem of traditional isolators.
Keywords: shear thickening fluid, variable-damped isolator, vibration isolation, restrain resonance
Procedia PDF Downloads 179
2607 Floor Response Spectra of RC Frames: Influence of the Infills on the Seismic Demand on Non-Structural Components
Authors: Gianni Blasi, Daniele Perrone, Maria Antonietta Aiello
Abstract:
The seismic vulnerability of non-structural components is nowadays recognized to be a key issue in performance-based earthquake engineering. Recent loss estimation studies, as well as the damage observed during past earthquakes, evidenced how non-structural damage represents the highest rate of economic loss in a building and can be in many cases crucial in a life-safety view during the post-earthquake emergency. The procedures developed to evaluate the seismic demand on non-structural components have been constantly improved and recent studies demonstrated how the existing formulations provided by main Standards generally ignore features which have a sensible influence on the definition of the seismic acceleration/displacements subjecting non-structural components. Since the influence of the infills on the dynamic behaviour of RC structures has already been evidenced by many authors, it is worth to be noted that the evaluation of the seismic demand on non-structural components should consider the presence of the infills as well as their mechanical properties. This study focuses on the evaluation of time-history floor acceleration in RC buildings; which is a useful mean to perform seismic vulnerability analyses of non-structural components through the well-known cascade method. Dynamic analyses are performed on an 8-storey RC frame, taking into account the presence of the infills; the influence of the elastic modulus of the panel on the results is investigated as well as the presence of openings. Floor accelerations obtained from the analyses are used to evaluate the floor response spectra, in order to define the demand on non-structural components depending on the properties of the infills. Finally, the results are compared with formulations provided by main International Standards, in order to assess the accuracy and eventually define the improvements required according to the results of the present research work.Keywords: floor spectra, infilled RC frames, non-structural components, seismic demand
Procedia PDF Downloads 326
2606 Non Linear Stability of Non Newtonian Thin Liquid Film Flowing down an Incline
Authors: Lamia Bourdache, Amar Djema
Abstract:
The effect of the non-Newtonian property (power-law index n) on traveling waves of a thin layer of power-law fluid flowing over an inclined plane is investigated. For this, a simplified second-order two-equation model (SM) is used; the complete model is a second-order four-equation model (CM). It is derived by combining the weighted residual integral method and the lubrication theory. This simplification is justified by the fact that, at the onset of the instability, only a very small number of waves is observed. Using a suitable set of test functions, second-order terms are eliminated from the calculus so that the model remains accurate to the second-order approximation. Linear, spatial, and temporal stabilities are studied. For travelling waves, a particular type of wave form that is steady in a moving frame, i.e., that travels at a constant celerity without changing its shape, is studied. This type of solution, characterized by its celerity, exists under suitable conditions, when the widening due to dispersion is balanced exactly by the narrowing effect due to the nonlinearity. Changing the celerity parameter within some range allows exploring the entire spectrum of asymptotic behavior of these traveling waves. The (SM) model is converted into a three-dimensional dynamical system. The result is that the model exhibits bifurcation scenarios such as heteroclinic, homoclinic, Hopf, and period-doubling bifurcations for different values of the power-law index n. The influence of the non-Newtonian parameter on the nonlinear development of these travelling waves is discussed. It is found, in the end, that the qualitative character of the bifurcation scenarios is insensitive to the variation of the power-law index.
Keywords: inclined plane, nonlinear stability, non-Newtonian, thin film
Procedia PDF Downloads 283
2605 A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume
Authors: Takashi Kaburagi, Masashi Takenaka, Yosuke Kurihara, Takashi Matsumoto
Abstract:
Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because the subject may intentionally change his/her answers if multiple tests are carried out. In this article, we present a modified index called the "multi-channel Laterality Index at Rest (mc-LIR)", obtained by recording the brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index aims to measure multiple positions near the Fpz position defined by the international 10-20 system. Using 24 subjects, the dependence of the mc-LIR on the number of measuring points used to calculate it, and its correlation coefficients with the STAI scores, are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR. The cross-validation error is also reported. The experimental results show that using multiple positions near the Fpz improves the correlation coefficients and the estimation compared with using only two positions.
Keywords: frontal lobe, functional near-infrared spectroscopy, state-trait anxiety inventory score, stress
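The regression-plus-cross-validation step described above could look like the following sketch; the mc-LIR values and STAI scores are synthetic stand-ins, since the original fNIRS data are not available here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data for 24 subjects: one mc-LIR value and one STAI score each.
rng = np.random.default_rng(7)
mc_lir = rng.normal(0.0, 0.2, size=(24, 1))
stai = 45 + 30 * mc_lir.ravel() + rng.normal(0, 4, 24)

model = LinearRegression().fit(mc_lir, stai)
print("slope, intercept:", model.coef_[0], model.intercept_)
print("Pearson r:", np.corrcoef(mc_lir.ravel(), stai)[0, 1])

# Leave-one-out cross-validation error (RMSE), analogous to the reported
# cross-validation of the STAI estimate.
neg_mse = cross_val_score(LinearRegression(), mc_lir, stai,
                          cv=LeaveOneOut(), scoring="neg_mean_squared_error")
print("LOOCV RMSE:", np.sqrt(-neg_mse.mean()))
```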
Procedia PDF Downloads 250
2604 Bayesian Locally Approach for Spatial Modeling of Visceral Leishmaniasis Infection in Northern and Central Tunisia
Authors: Kais Ben-Ahmed, Mhamed Ali-El-Aroui
Abstract:
This paper develops a Local Generalized Linear Spatial Model (LGLSM) to describe the spatial variation of Visceral Leishmaniasis (VL) infection risk in northern and central Tunisia. The response from each region is the number of affected children less than five years of age recorded from 1996 through 2006 by Tunisian pediatric departments and treated as Poisson county-level data. The model includes climatic factors, namely averages of annual rainfall, extreme values of low temperatures in winter and high temperatures in summer to characterize the climate of each region according to its continentality index, the pluviometric quotient of Emberger (Q2) to characterize bioclimatic regions, and a component for residual extra-Poisson variation. The statistical results show a progressive increase in the number of affected children in regions with a high continentality index and low mean yearly rainfall. On the other hand, an increase in the pluviometric quotient of Emberger contributed to a significant increase in the VL incidence rate. When compared with the original GLSM, the Bayesian local modeling is an improvement and gives a better approximation of the Tunisian VL risk estimation. For the Bayesian inference, we use vague priors for all model parameters and the Markov Chain Monte Carlo method.
Keywords: generalized linear spatial model, local model, extra-Poisson variation, continentality index, visceral leishmaniasis, Tunisia
Procedia PDF Downloads 397
2603 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation
Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski
Abstract:
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio with the maximum increase of the mean return in proportion to the risk measure increase when compared to the risk-free investments. In the classical model, following Markowitz, the risk is measured by the variance thus representing the Sharpe ratio optimization and leading to the quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into the LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to the LP models with huge number of variables and constraints in the case of real-life financial decisions based on several thousands scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can be then dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantages of the LP duality. In the introduced LP model the number of structural constraints is proportional to the number of instruments thus not affecting seriously the simplex method efficiency by the number of scenarios and therefore guaranteeing easy solvability. Moreover, we show that under natural restriction on the target value the MAD risk-reward ratio optimization is consistent with the second order stochastic dominance rules.Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming
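To make the LP structure concrete, here is a minimal sketch of the classical MAD portfolio model (minimize mean absolute deviation subject to a target mean return) written for scipy's linprog. It illustrates how the number of deviation variables and constraints grows with the number of scenarios, but it does not reproduce the paper's inverse ratio minimization or its dual reformulation; the return data and target are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
T, n = 200, 5                         # scenarios, instruments (toy sizes)
R = rng.normal(0.001, 0.02, (T, n))   # simulated scenario returns
mu = R.mean(axis=0)
target = float(np.quantile(mu, 0.8))  # required mean return, feasible by construction

# Decision vector x = [w_1..w_n, d_1..d_T]; objective: minimize (1/T) * sum(d_t).
c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])

D = R - mu                            # returns centered on the asset means
# MAD linearization: d_t >= (D w)_t and d_t >= -(D w)_t, written as A_ub x <= 0.
A_ub = np.block([[D, -np.eye(T)],
                 [-D, -np.eye(T)]])
b_ub = np.zeros(2 * T)
# Mean-return requirement: mu.w >= target, i.e. -mu.w <= -target.
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
b_ub = np.append(b_ub, -target)
# Budget constraint: weights sum to one.
A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T), method="highs")
print("minimal MAD:", res.fun)
print("portfolio weights:", np.round(res.x[:n], 3))
```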
Procedia PDF Downloads 407
2602 Amperometric Biosensor for Glucose Determination Based on a Recombinant Mn Peroxidase from Corn Cross-linked to a Gold Electrode
Authors: Anahita Izadyar, My Ni Van, Kayleigh Amber Rodriguez, Ilwoo Seok, Elizabeth E. Hood
Abstract:
Using a recombinant enzyme derived from corn and a simple modification, we fabricated a facile, fast, and cost-effective biosensor to measure glucose. The Nafion/Plant-Produced Mn Peroxidase (PPMP)–glucose oxidase (GOx)–bovine serum albumin (BSA)/Au electrode showed an excellent amperometric response for detecting glucose. This biosensor is capable of responding to a wide range of glucose concentrations, 20.0 µM−15.0 mM, and has a lower detection limit (LOD) of 2.90 µM. The reproducibility of the response using six electrodes is also very good and indicates the capability of this biosensor to detect glucose over the wide concentration range of 3.10±0.19 µM to 13.2±1.8 mM. The selectivity of this electrode was investigated in an optimized experimental solution containing 10% diet green tea with citrus, which contains ascorbic acid (AA) and citric acid (CA), over a wide glucose concentration range of 0.02 to 14.0 mM with an LOD of 3.10 µM. Reproducibility was also investigated using 4 electrodes in this sample and shows notable results over the wide concentration range of 3.35±0.45 µM to 13.0±0.81 mM. We also used other voltammetry methods to evaluate this biosensor. We applied linear sweep voltammetry (LSV), and this technique shows a wide range of 0.10−15.0 mM for detecting glucose with a lower detection limit of 19.5 µM. The strengths of this enzyme biosensor are its simplicity, wide linear ranges, sensitivity, selectivity, and low limits of detection. We expect that the modified biosensor has potential for monitoring various biofluids.
Keywords: plant-produced manganese peroxidase, enzyme-based biosensors, glucose, modified gold electrode, glucose oxidase
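The calibration arithmetic behind figures such as the 2.90 µM detection limit can be sketched as below: fit a linear calibration of amperometric current against glucose concentration and apply the common LOD = 3.3·σ/slope rule. The current values are invented, and the 3.3·σ criterion is a generic analytical convention, not necessarily the one used by the authors.

```python
import numpy as np

# Hypothetical calibration data: glucose concentration (mM) vs. current (µA).
conc = np.array([0.02, 0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 15.0])   # mM
current = 1.8 * conc + 0.05 + np.random.default_rng(5).normal(0, 0.03, conc.size)

# Linear least-squares calibration: current = slope * conc + intercept.
slope, intercept = np.polyfit(conc, current, 1)
residual_sd = np.std(current - (slope * conc + intercept), ddof=2)

# Common limit-of-detection estimate: LOD = 3.3 * sigma / slope.
lod_mM = 3.3 * residual_sd / slope
print(f"slope = {slope:.3f} µA/mM, LOD ≈ {lod_mM * 1000:.1f} µM")
```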
Procedia PDF Downloads 140
2601 The Impact of Hospital Strikes on Patient Care: Evidence from 135 Strikes in the Portuguese National Health System
Authors: Eduardo Costa
Abstract:
Hospital strikes in the Portuguese National Health Service (NHS) are becoming increasingly frequent, raising concerns in what respects patient safety. In fact, data shows that mortality rates for patients admitted during strikes are up to 30% higher than for patients admitted in other days. This paper analyses the effects of hospital strikes on patients’ outcomes. Specifically, it analyzes the impact of different strikes (physicians, nurses and other health professionals), on in-hospital mortality rates, readmission rates and length of stay. The paper uses patient-level data containing all NHS hospital admissions in mainland Portugal from 2012 to 2017, together with a comprehensive strike dataset comprising over 250 strike days (19 physicians-strike days, 150 nurses-strike days and 50 other health professionals-strike days) from 135 different strikes. The paper uses a linear probability model and controls for hospital and regional characteristics, time trends, and changes in patients’ composition and diagnoses. Preliminary results suggest a 6-7% increase in in-hospital mortality rates for patients exposed to physicians’ strikes. The effect is smaller for patients exposed to nurses’ strikes (2-5%). Patients exposed to nurses strikes during their stay have, on average, higher 30-days urgent readmission rates (4%). Length of stay also seems to increase for patients exposed to any strike. Results – conditional on further testing, namely on non-linear models - suggest that hospital operations and service levels are partially disrupted during strikes.Keywords: health sector strikes, in-hospital mortality rate, length of stay, readmission rate
Procedia PDF Downloads 135
2600 Rayleigh-Bénard-Taylor Convection of Newtonian Nanoliquid
Authors: P. G. Siddheshwar, T. N. Sakshath
Abstract:
In the paper we make linear and non-linear stability analyses of Rayleigh-Bénard convection of a Newtonian nanoliquid in a rotating medium (called as Rayleigh-Bénard-Taylor convection). Rigid-rigid isothermal boundaries are considered for investigation. Khanafer-Vafai-Lightstone single phase model is used for studying instabilities in nanoliquids. Various thermophysical properties of nanoliquid are obtained using phenomenological laws and mixture theory. The eigen boundary value problem is solved for the Rayleigh number using an analytical method by considering trigonometric eigen functions. We observe that the critical nanoliquid Rayleigh number is less than that of the base liquid. Thus the onset of convection is advanced due to the addition of nanoparticles. So, increase in volume fraction leads to advanced onset and thereby increase in heat transport. The amplitudes of convective modes required for estimating the heat transport are determined analytically. The tri-modal standard Lorenz model is derived for the steady state assuming small scale convective motions. The effect of rotation on the onset of convection and on heat transport is investigated and depicted graphically. It is observed that the onset of convection is delayed due to rotation and hence leads to decrease in heat transport. Hence, rotation has a stabilizing effect on the system. This is due to the fact that the energy of the system is used to create the component V. We observe that the amount of heat transport is less in the case of rigid-rigid isothermal boundaries compared to free-free isothermal boundaries.Keywords: nanoliquid, rigid-rigid, rotation, single phase
Procedia PDF Downloads 234
2599 Assessment of Landfill Pollution Load on Hydroecosystem by Use of Heavy Metal Bioaccumulation Data in Fish
Authors: Gintarė Sauliutė, Gintaras Svecevičius
Abstract:
Landfill leachates contain a number of persistent pollutants, including heavy metals. These have the ability to spread in ecosystems and accumulate in fish, most of which are classified as top consumers of trophic chains. Fish are freely swimming organisms, but, perhaps due to their species-specific ecological and behavioral properties, they often prefer the most suitable biotopes and therefore do not avoid harmful substances or environments. That is why it is necessary to evaluate persistent pollutant dispersion in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g. river-pond-river), the distance from the pollution source could be a good indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, which is located 5 km east of the Šiauliai City. Fish tissue (gills, liver, and muscle) metal concentration measurements were performed on two types of ecologically different fishes according to their feeding characteristics: benthophagous (Gibel carp, roach) and predatory (Northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. However, only one log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with the metal concentration in all predatory fish tissues studied (gills, liver, and muscle).
Keywords: bioaccumulation in fish, heavy metals, hydroecosystem, landfill leachate, mathematical model
Procedia PDF Downloads 286
2598 Quantification of Effect of Linear Anionic Polyacrylamide on Seepage in Irrigation Channels
Authors: Hamil Uribe, Cristian Arancibia
Abstract:
In Chile, water for irrigation and hydropower generation is delivered essentially through unlined earthen channels, which have high seepage losses. Traditional seepage-abatement technologies are very expensive. The goals of this work were to quantify water loss in unlined channels and to select reaches in which to evaluate the use of linear anionic polyacrylamide (LA-PAM) for reducing seepage losses. The study was carried out in the Maule Region, in the central area of Chile. Water users indicated reaches with potential seepage losses, 45 km of channels in total, whose flow varied between 1.07 and 23.6 m³ s⁻¹. Based on seepage measurements, 4 channel reaches, 4.5 km in total, were selected for LA-PAM application. One to 4 LA-PAM applications were performed at rates of 11 kg ha⁻¹, considering the wetted perimeter area as the basis of calculation. In the large channels, a motorboat moving against the current was used to carry out the LA-PAM application. For the applications, a seeder machine was used to distribute the granulated polymer evenly on the water surface. Water flow was measured (StreamPro ADCP) upstream and downstream in the selected reaches to estimate seepage losses before and after LA-PAM application. Weekly measurements were made to quantify the treatment effect and its duration. In each case, water turbidity and temperature were measured. The channels showed variable losses of up to 13.5%. Channels showing water gains were not treated with PAM. In all cases, the LA-PAM effect was positive, reducing average losses from 8% to 3.1%. Water loss was confirmed, and it was possible to reduce seepage through LA-PAM applications, provided that losses were known and correctly determined when applying the polymer. This could allow increasing irrigation security in critical periods, especially under drought conditions.
Keywords: canal seepage, irrigation, polyacrylamide, water management
Procedia PDF Downloads 174
2597 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4x4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4x4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares + gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have been subsequently encoded and used to generate chirps, at a target rate of about two chirps per 4x4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
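As an illustration of approximating each 4x4 block with fewer coefficients, the sketch below keeps only a rank-1 SVD term per block (three small factors instead of 16 pixels) and reports the resulting PSNR. The test image is random, and the chirp generation and time-frequency multiplexing stages of the paper are not reproduced.

```python
import numpy as np

def rank1_block_approx(img, block=4):
    """Approximate every block x block tile by its leading SVD component."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block].astype(float)
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            out[i:i + block, j:j + block] = s[0] * np.outer(U[:, 0], Vt[0, :])
    return out

def psnr(original, approx, peak=255.0):
    mse = np.mean((original.astype(float) - approx) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Placeholder 8-bit image whose dimensions are multiples of 4.
img = np.random.default_rng(9).integers(0, 256, size=(64, 64)).astype(np.uint8)
approx = rank1_block_approx(img)
print("PSNR of rank-1 block approximation:", round(psnr(img, approx), 2), "dB")
```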
Procedia PDF Downloads 412
2596 A Bayesian Parameter Identification Method for Thermorheological Complex Materials
Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider
Abstract:
Polymers have increasingly gained interest as construction materials in civil engineering applications over the last years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for in the context of the theory of linear viscoelasticity. Within the context of this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a simple TTSP; thus a methodology for identifying thermorheologically complex constitutive behavior is needed. 'Dynamical-Mechanical-Thermal-Analysis' (DMTA) tests in tensile and shear mode as well as 'Differential Scanning Calorimetry' (DSC) tests are carried out on the interlayer material 'Ethylene-vinyl acetate' (EVA). A novel Bayesian framework for the master curving process as well as for the detection and parameter identification of the TTSPs along with their associated Prony series is derived and applied to the EVA material data. To our best knowledge, this is the first time an uncertainty quantification of the Prony series in a Bayesian context is shown. Within this paper, we could successfully apply the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs. Uncertainties occurring in this process can be well quantified. We found that EVA needs two TTSPs with two associated Generalized Maxwell Models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
Keywords: Bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complex
Procedia PDF Downloads 263
2595 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as flood, earthquakes, tornadoes, or tsunami, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far come short to successfully estimate optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses CPLEX optimization machinery, efficiently computing near-optimal solutions for practical size problems, while giving a robust upper bound obtained from Lagrangean integrality constraint relaxation. Should eventually a target be positively detected during plan execution, a new problem instance would simply be reformulated from the current state, and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization
Procedia PDF Downloads 371
2594 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector
Authors: Mayesha Tahsin, A.S. Mollah
Abstract:
Recently, a 1.5x1.5 inch NaI(Tl) detector based gamma-ray spectroscopy system has been installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm width. An important consideration in any gamma-ray spectroscopy is the minimization of natural background radiation not originating from the radioactive sample that is being measured. Natural background gamma-ray radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with this system is that it is not suitable for measurements of radioactivity with a large sample container such as a Petri dish or a Marinelli beaker geometry. When any laboratory installs a new detector and/or a new shield, it "must" first carry out quality and performance tests for the detector and shield. This paper describes a new portable lead shielding system that can reduce the background radiation. The intensity of gamma radiation after passing through the shielding will be calculated using the shielding equation I = I₀e^(-µx), where I₀ is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected in order to accommodate the large sample container. The detector will be surrounded by a 4π-geometry low-activity lead shield. An additional 1.5 mm thick shield of tin and a 1 mm thick shield of copper covering the inner part of the lead shielding will be added in order to remove the characteristic X-rays from the lead shield.
Keywords: shield, NaI(Tl) detector, gamma radiation, intensity, linear attenuation coefficient
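The attenuation formula quoted above, I = I₀e^(-µx), can be evaluated directly as in the short sketch below. The linear attenuation coefficients used are rough, assumed values for illustration only; they depend strongly on photon energy and must be taken from reference data for a real shield design.

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """I / I0 = exp(-mu * x) for a monoenergetic, narrow-beam approximation."""
    return math.exp(-mu_per_cm * thickness_cm)

# Assumed linear attenuation coefficients (cm^-1) for ~662 keV photons.
shield_layers = [
    ("lead, 2.2 cm",   1.2,  2.2),   # assumed mu for Pb
    ("tin, 0.15 cm",   0.6,  0.15),  # assumed mu for Sn
    ("copper, 0.1 cm", 0.65, 0.1),   # assumed mu for Cu
]

total = 1.0
for name, mu, x in shield_layers:
    frac = transmitted_fraction(mu, x)
    total *= frac
    print(f"{name}: transmits {frac:.3f} of incident intensity")
print(f"combined transmission through all layers: {total:.4f}")
```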
Procedia PDF Downloads 158
2593 Task Scheduling and Resource Allocation in Cloud-based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
The scheduling of tasks and the optimal allocation of resources in the cloud must account for the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used applications in this field and are characterized by high processing power and storage capacity requirements. In order to increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment based on a modified AHP algorithm is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served rule for clients. Resource prioritization uses the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods are adopted, as they are the best choice for the mentioned algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, return time and execution time of input tasks in the proposed method compared to similar (basic) methods.
Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow
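A minimal sketch of the normalization-and-ranking idea: Linear Max-Min and Linear Max normalization of the resource criteria (memory, CPU speed, bandwidth), followed by a weighted score per virtual machine. The weights and VM figures are placeholders, and the full AHP pairwise-comparison step and the task-queueing policy are not reproduced here.

```python
import numpy as np

def linear_max_min(col):
    """Linear Max-Min normalization: maps a benefit criterion to [0, 1]."""
    return (col - col.min()) / (col.max() - col.min())

def linear_max(col):
    """Linear Max normalization: divide by the column maximum."""
    return col / col.max()

# Hypothetical VM resource matrix: rows = VMs, columns = (RAM GB, CPU GHz, bandwidth Mbps).
vms = np.array([[4.0, 2.4, 100.0],
                [8.0, 2.0, 250.0],
                [16.0, 3.2, 500.0],
                [2.0, 3.6, 150.0]])

weights = np.array([0.4, 0.35, 0.25])          # assumed criteria priorities

norm = np.column_stack([linear_max_min(vms[:, 0]),   # RAM, Max-Min normalized
                        linear_max_min(vms[:, 1]),   # CPU speed, Max-Min normalized
                        linear_max(vms[:, 2])])      # bandwidth, Max normalized
scores = norm @ weights
ranking = np.argsort(scores)[::-1]
print("VM scores:", np.round(scores, 3))
print("VM priority order (best first):", ranking)
```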
Procedia PDF Downloads 145
2592 Induced Pulsation Attack Against Kalman Filter Driven Brushless DC Motor Control System
Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap
Abstract:
We use modeling and simulation tools to introduce a novel bias injection attack, named the 'Induced Pulsation Attack', which targets cyber-physical systems with a closed-loop controlled Brushless DC (BLDC) motor and a Kalman filter in the feedback loop. This attack engages a linear function with a constant gradient to distort the coefficient of the injected bias, which falsifies the Kalman filter estimates of the rotor's angular speed. As a result, this manipulation inside the control system causes periodic pulsations, in the form of an asymmetric sine wave of high magnitude, in both the current and the voltage of the circuit windings. It is shown that by varying the gradient of the linear function, one can control both the frequency and the structure of the induced pulsations. It is also demonstrated that terminating the attack at any point leads to an additional compensating effort from the controller to restore the speed to its equilibrium value. This compensation effort produces an exponentially decaying wave, which we call the 'attack withdrawal syndrome' wave. The conditions for maximizing or minimizing the impact of the attack withdrawal syndrome are determined. Linking the termination of the attack to the end of a full period of the induced pulsation wave has been shown to nullify the attack withdrawal syndrome wave, thereby improving the attack's covertness.
Keywords: cyber-attack, induced pulsation, bias injection, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, saw-function, cyber-physical system
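As a highly simplified, one-dimensional illustration of the bias-injection idea (not the paper's BLDC/Kalman model), the sketch below estimates a rotor speed with a scalar Kalman filter and injects a measurement bias whose coefficient is distorted by a linear function with a constant gradient, so the falsified estimate drifts away from the true speed. Only the estimator-falsification step is shown; the closed-loop motor response that produces the pulsations is omitted, and all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
true_speed = 100.0          # rad/s, assumed constant set-point
q, r = 1e-4, 0.5            # process / measurement noise variances
x_hat, p = 0.0, 1.0         # Kalman state estimate and covariance

gradient = 0.002            # constant gradient of the bias coefficient
attack_start = 200

history = []
for k in range(600):
    z = true_speed + rng.normal(0.0, np.sqrt(r))         # clean speed measurement
    if k >= attack_start:                                 # bias injection step:
        z += gradient * (k - attack_start) * true_speed   # linearly growing distortion

    # Scalar Kalman filter (random-walk model for the speed).
    p = p + q                                             # predict
    k_gain = p / (p + r)                                  # update
    x_hat = x_hat + k_gain * (z - x_hat)
    p = (1.0 - k_gain) * p
    history.append(x_hat)

print("estimate before attack :", round(history[attack_start - 1], 2))
print("falsified estimate end :", round(history[-1], 2))
```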
Procedia PDF Downloads 71