Search results for: locally linear embedding
3033 Development of a Nanocompound Based Fibre to Combat Insects
Authors: Merle Bischoff, Thomas Gries, Gunnar Seide
Abstract:
Pesticides, which kill crop pests but can also interfere with the human body, are still the dominant means of crop protection. Silica particles (SiO2) at the nanometre and micrometre scale offer a physical way to combat insects without harming humans and other mammals, allowing farmers to forgo pesticides that can damage the environment. Because silica is supplied to farmers as a powder or in suspension, its use in large-scale agriculture is not effective: wind and rain erode the particles. When silica is embedded in a textile's surface (a nanocompound), the particles are locally bound, resist erosion, and still act against insects. Choosing polypropylene as the matrix polymer makes it possible to produce an inexpensive agritextile with an 'anti-bug' effect. In the symposium, the results of compounding and filament spinning of silica nanocomposites on a polypropylene basis are compared to those of nanocomposites based on polybutylene succinate, a biodegradable polymer. The investigation focuses on the difference between the degradable and the stable nanocomposite, with emphasis on the filament characteristics as well as the degradation of the nanocompound, to underline their potential use and application as an agricultural textile.
Keywords: agriculture, environment, insects, protection, silica, textile, nanocomposite
Procedia PDF Downloads 249
3032 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck an entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have finished their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many fields in science and engineering, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest must be kept private from the workers, and derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected privately from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which yields a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
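The linear-precoding idea the abstract builds on can be illustrated with the classical polynomial code for distributed matrix multiplication: each worker evaluates two coded polynomials at one point, and the master recovers all block products from any mn worker answers by interpolation. The sketch below is a toy with scalar stand-ins for matrix blocks and made-up numbers; it is not the paper's private PSGPD construction.

```python
from fractions import Fraction

def worker_answer(blocks_X, blocks_Y, point):
    # Polynomial code: p_X(a) = sum_i X_i a^i, p_Y(a) = sum_j Y_j a^(j*m);
    # each worker returns the product of the two evaluations at its point.
    m = len(blocks_X)
    pX = sum(x * point**i for i, x in enumerate(blocks_X))
    pY = sum(y * point**(j * m) for j, y in enumerate(blocks_Y))
    return pX * pY

def interpolate(points, values):
    # Lagrange interpolation: recover the coefficients of the
    # degree-(k-1) polynomial from k evaluations (exact, via Fraction).
    k = len(points)
    coeffs = [Fraction(0)] * k
    for i, (xi, yi) in enumerate(zip(points, values)):
        basis = [Fraction(1)]       # running product of (x - xj), j != i
        denom = Fraction(1)
        for j, xj in enumerate(points):
            if j == i:
                continue
            denom *= (xi - xj)
            new = [Fraction(0)] * (len(basis) + 1)
            for d, c in enumerate(basis):
                new[d] += c * (-xj)  # multiply basis polynomial by (x - xj)
                new[d + 1] += c
            basis = new
        for d, c in enumerate(basis):
            coeffs[d] += Fraction(yi) * c / denom
    return coeffs

# Toy demo: scalar "blocks" stand in for matrix sub-blocks.
m, n = 2, 2
X = [3, 5]                 # X split into m row blocks
Y = [2, 7]                 # Y split into n column blocks
threshold = m * n          # recovery threshold of the polynomial code
points = [1, 2, 3, 4]      # distinct evaluation points, one per worker
answers = [worker_answer(X, Y, a) for a in points]
coeffs = interpolate(points, answers)
# Coefficient at index i + j*m equals the block product X_i * Y_j.
products = {(i, j): coeffs[i + j * m] for i in range(m) for j in range(n)}
```

Any `threshold` of the workers suffice here, which is exactly how stragglers are tolerated: the master simply interpolates from the first mn answers to arrive.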
Procedia PDF Downloads 122
3031 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah
Abstract:
Hyperspectral imagery (HSI) typically provides a wealth of information captured over a wide range of the electromagnetic spectrum for each pixel in the image. A pixel in HSI is thus a high-dimensional vector of intensities with a large spectral range and a high spectral resolution, which makes semantic interpretation a challenging task of HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. HSI classification still faces several issues, among them the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of true sample labeling. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. To resolve this problem, we introduce a dimensionality reduction step to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships, with different weights for the spatial neighbors of the centroid pixel. This semi-supervised band selection was developed to select useful bands for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.
Keywords: dimensionality reduction, hyperspectral image, semantic interpretation, spatial hypergraph
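Band selection in general ranks bands by usefulness for the labeled pixels while penalizing redundancy between bands. The sketch below is a deliberately simplified relevance-minus-redundancy (mRMR-style) greedy pass on made-up pixels, not the authors' spatial hypergraph embedding model, which additionally weights higher-order spatial neighborhoods.

```python
import math

def pearson(a, b):
    # Plain Pearson correlation between two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def select_bands(X, y, k):
    # Greedy band selection: pick the band maximizing
    # |corr(band, label)| minus mean |corr| with already-picked bands.
    n_bands = len(X[0])
    bands = [[p[b] for p in X] for b in range(n_bands)]
    relevance = [abs(pearson(bands[b], y)) for b in range(n_bands)]
    selected = []
    while len(selected) < k:
        best, best_score = None, -2.0
        for b in range(n_bands):
            if b in selected:
                continue
            redundancy = (sum(abs(pearson(bands[b], bands[s]))
                              for s in selected) / len(selected)
                          if selected else 0.0)
            score = relevance[b] - redundancy
            if score > best_score:
                best, best_score = b, score
        selected.append(best)
    return selected

# Toy labeled pixels: band 0 tracks the class label, band 1 is a noisy
# copy of band 0, band 2 is unrelated; band 0 should be picked first.
X = [[0.1, 0.2, 0.9], [0.2, 0.1, 0.4], [0.8, 0.9, 0.5],
     [0.9, 0.8, 0.1], [0.15, 0.25, 0.6], [0.85, 0.95, 0.3]]
y = [0, 0, 1, 1, 0, 1]
picked = select_bands(X, y, 2)
```

The hypergraph model in the paper replaces the correlation scores above with weights derived from each pixel's spatial neighborhood, but the greedy ranking skeleton is the same.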
Procedia PDF Downloads 306
3030 Study and Solving High Complex Non-Linear Differential Equations Applied in the Engineering Field by Analytical New Approach AGM
Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili
Abstract:
In this paper, three complicated nonlinear differential equations (PDEs and ODEs) from engineering and non-vibration fields are analyzed and solved completely by a new method that we have named Akbari-Ganji's Method (AGM). As previously published papers show, investigating this kind of equation is a very hard task, and the obtained solutions are often neither accurate nor reliable; this issue emerges after comparing the achieved solutions with those of a numerical method. Based on the comparisons made between the solutions obtained by AGM and by a numerical method (fourth-order Runge-Kutta), AGM can be successfully applied to various differential equations, particularly difficult ones. A summary of the advantages of this method over other approaches is as follows: the results indicate that the approach is effective and simple, so it can be applied to other kinds of nonlinear equations, not only in vibrations but also in fields such as fluid mechanics, solid mechanics, and chemical engineering, yielding solutions of high precision. The process of solving nonlinear equations is therefore much easier and more convenient than with other methods. An important point explored in this paper is that, for trigonometric and exponential terms in the differential equation, AGM requires no Taylor series expansion to enhance the precision of the result.
Keywords: new method (AGM), complex non-linear partial differential equations, damping ratio, energy lost per cycle
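The abstract benchmarks AGM against fourth-order Runge-Kutta. A minimal RK4 baseline of the kind such comparisons rely on might look like the following; the damped Duffing-type oscillator is a hypothetical example equation, not one of the paper's three problems.

```python
def rk4(f, y0, t0, t1, n):
    # Classical 4th-order Runge-Kutta for the system y' = f(t, y).
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Damped Duffing-type oscillator x'' + 2*zeta*x' + x + eps*x**3 = 0,
# rewritten as the first-order system [x, v].
zeta, eps = 0.1, 0.5
def duffing(t, s):
    x, v = s
    return [v, -2 * zeta * v - x - eps * x ** 3]

x_end, v_end = rk4(duffing, [1.0, 0.0], 0.0, 20.0, 2000)
```

With positive damping the amplitude decays toward zero, which is the qualitative behavior an analytical approximation such as AGM must reproduce.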
Procedia PDF Downloads 469
3029 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator
Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi
Abstract:
Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when they are operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced from high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when the incoming photon or electron is incident on the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to investigate the effect of different parameters on the neutron production around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Further measurements were performed at the door and at the treatment console to address the radiation safety of the therapists, who must walk in and out of the room between treatments. Exposures were made with Elekta Synergy® linear accelerators at two energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute.
Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, i.e., the jaw settings: as the jaws are made of high-atomic-number material, they provide significant photon interactions that produce neutrons. Doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses compared to those at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv.h−1 to 13.2 mSv.h−1 (at the isocenter), 6.21 mSv.h−1 to 29.2 mSv.h−1 (primary wall), and 8.73 mSv.h−1 to 37.2 mSv.h−1 (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons at the door is 5 μSv.h−1 to 2 μSv.h−1, while at the treatment console it is 2 μSv.h−1 to 0 μSv.h−1 for 10 and 18 MV, respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient neutron shielding at the door as well as at the treatment console for 10 and 18 MV photons.
Keywords: equivalent doses, neutron contamination, neutron detector, photon energy
Procedia PDF Downloads 449
3028 Generalized Linear Modeling of HCV Infection Among Medical Waste Handlers in Sidama Region, Ethiopia
Authors: Birhanu Betela Warssamo
Abstract:
Background: There is limited evidence on the prevalence and risk factors for hepatitis C virus (HCV) infection among waste handlers in the Sidama region, Ethiopia; this knowledge is, however, necessary for the effective prevention of HCV infection in the region. Methods: A cross-sectional study was conducted among randomly selected waste collectors from October 2021 to 30 July 2022 in different public hospitals in the Sidama region of Ethiopia. Serum samples were collected from participants and screened for anti-HCV using a rapid immunochromatography assay. Socio-demographic and risk factor information of the waste handlers was gathered by pretested, well-structured questionnaires. The generalized linear model (GLM) was fitted using R software, and P < 0.05 was declared statistically significant. Results: Of 282 participating waste handlers, 16 (5.7%; 95% CI, 4.2–8.7) were infected with the hepatitis C virus. Educational status was the significant demographic variable associated with HCV (AOR = 0.055; 95% CI = 0.012–0.248; P < 0.001). More married waste handlers, 12 (75%), were HCV positive than unmarried, 4 (25%), and married waste handlers were 2.051 times more prone to HCV infection than unmarried (OR = 2.051, 95% CI = 0.644–6.527, P = 0.295), although this was statistically insignificant. The GLM showed that exposure to blood (OR = 8.26; 95% CI = 1.878–10.925; P = 0.037), multiple sexual partners (AOR = 3.63; 95% CI = 2.751–5.808; P = 0.001), sharp injury (AOR = 2.77; 95% CI = 2.327–3.173; P = 0.036), not using PPE (AOR = 0.77; 95% CI = 0.032–0.937; P = 0.001), contact with a jaundiced patient (AOR = 3.65; 95% CI = 1.093–4.368; P = 0.0048), and unprotected sex (AOR = 11.91; 95% CI = 5.847–16.854; P = 0.001) remained statistically significantly associated with HCV positivity.
Conclusions: The study revealed a high prevalence of hepatitis C virus infection among waste handlers in the Sidama region, Ethiopia. This demonstrates an urgent need to increase preventative efforts and strategic policy orientations to control the spread of the hepatitis C virus.
Keywords: hepatitis C virus, risk factors, waste handlers, prevalence, Sidama, Ethiopia
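The GLM used here is a binomial regression with logit link, where an odds ratio is the exponential of a fitted coefficient. A minimal Newton-Raphson fit on hypothetical exposure counts (not the study's data) shows how such an OR arises:

```python
import math

def fit_logistic(xs, ys, iters=25):
    # Newton-Raphson for logit(p) = b0 + b1*x, the binomial GLM with
    # canonical link (the model fitted by R's glm(..., family=binomial)).
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)
            g0 += y - p            # gradient of the log-likelihood
            g1 += (y - p) * x
            h00 += w               # observed information matrix (2x2)
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step, 2x2 solve by hand
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

# Hypothetical counts: 40/100 positive among exposed (x=1),
# 10/100 positive among unexposed (x=0).
xs = [1] * 100 + [0] * 100
ys = [1] * 40 + [0] * 60 + [1] * 10 + [0] * 90
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)  # matches the empirical OR = (40/60)/(10/90) = 6
```

With one binary predictor, the MLE reproduces the empirical odds ratio exactly, which is a handy sanity check for any GLM implementation.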
Procedia PDF Downloads 14
3027 Laboratory Findings as Predictors of ST2 and NT-proBNP Elevations in Heart Failure Clinic, National Cardiovascular Centre Harapan Kita, Indonesia
Authors: B. B. Siswanto, A. Halimi, K. M. H. J. Tandayu, C. Abdillah, F. Nanda, E. Chandra
Abstract:
Nowadays, modern cardiac biomarkers, such as ST2 and NT-proBNP, play important roles in predicting morbidity and mortality in heart failure patients. Abnormalities of serum electrolytes, sepsis or infection, and deteriorating renal function worsen the condition of patients with heart failure. It is therefore intriguing to know whether cardiac biomarker elevations are affected by laboratory findings in heart failure patients. We recruited 65 patients from the heart failure clinic in NCVC Harapan Kita in 2014-2015. All of them consented to laboratory examination, including cardiac biomarkers. The findings were recorded in our Research and Development Centre and analyzed using linear regression to determine whether there is a relationship between laboratory findings (sodium, potassium, creatinine, and leukocytes) and ST2 or NT-proBNP. Of the 65 patients, 26.9% are female and 73.1% male; 69.4% of patients were classified as NYHA I-II and 31.6% as NYHA III-IV. The mean age is 55.7±11.4 years; mean sodium level 136.1±6.5 mmol/l; mean potassium level 4.7±1.9 mmol/l; mean leukocyte count 9184.7±3622.4 /ul; mean creatinine level 1.2±0.5 mg/dl. In the linear regression, the relationships between NT-proBNP and sodium level (p<0.001) and leukocyte count (p=0.002) are significant, while those between NT-proBNP and potassium level (p=0.05) and creatinine level (p=0.534) are not. The relationships between ST2 and sodium level (p=0.501), potassium level (p=0.76), leukocyte count (p=0.897), and creatinine level (p=0.817) are not significant. To conclude, laboratory findings are more sensitive in predicting NT-proBNP elevation than ST2 elevation. Larger studies are needed to confirm that NT-proBNP's correlation with laboratory findings is superior to that of ST2.
Keywords: heart failure, laboratory, NT-proBNP, ST2
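The analysis relies on simple linear regression of a biomarker on a laboratory value. A closed-form OLS fit on made-up sodium versus log NT-proBNP numbers (purely illustrative, not the study's data) looks like this:

```python
def linear_fit(x, y):
    # Ordinary least squares for y = a + b*x, closed form.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Made-up sodium (mmol/l) vs. log NT-proBNP values, trending downward
# (lower sodium, higher NT-proBNP), as the reported p<0.001 suggests.
sodium = [128, 132, 135, 138, 141, 144]
log_bnp = [9.1, 8.4, 8.0, 7.2, 6.9, 6.1]
a, b = linear_fit(sodium, log_bnp)
```

A negative fitted slope b corresponds to the inverse sodium-NT-proBNP relationship; the regression line always passes through the point of means, a useful check.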
Procedia PDF Downloads 340
3026 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case
Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang
Abstract:
In this real-world case, timetabling the LRT network as a whole is challenging for the operator: a timetable must be created manually that avoids various route conflicts while satisfying a given interval and number of rolling stocks, and the outcome has not been satisfactory. The operator therefore adopted a computerised timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with the various constraints of the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval, and number of rolling stocks, which requires extra manual intervention. Aiming at these problems, a one-off timetabling model is presented in this paper to simplify the procedure. Before timetabling starts, the paper shows how the dual-line system, with a ring and several branches, is turned into a simpler structure. A non-linear programming model is then presented in two stages. In the first stage, the model sets a series of constraints to calculate a proper timing for coordinating the two lines by adjusting the turnback time at the termini. In the second stage, based on the result of the first, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the minimum possible interval, observing that the more imbalanced the ratio, the harder it is to provide frequent service under such strict constraints.
Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination
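The flavor of the conflict-avoidance constraints can be shown with a toy: two lines share a segment, and an offset between their departure patterns (a stand-in for adjusting turnback times at the termini) must keep their occupations of the segment disjoint. All numbers below are invented for illustration, not taken from the case study.

```python
def occupations(start, interval, occupy, horizon):
    # Half-open time windows [t, t+occupy) during which successive
    # trains of one line hold the shared segment.
    t, spans = start, []
    while t < horizon:
        spans.append((t, t + occupy))
        t += interval
    return spans

def conflict_free(offset, interval=10, occupy=3, horizon=120):
    # True if no train of line B (shifted by `offset` minutes) ever
    # occupies the segment at the same time as a train of line A.
    a = occupations(0, interval, occupy, horizon)
    b = occupations(offset, interval, occupy, horizon)
    return all(s1 >= e0 or s0 >= e1
               for s0, e0 in a for s1, e1 in b)

# Brute-force search over candidate offsets within one interval.
feasible = [off for off in range(10) if conflict_free(off)]
```

With a 10-minute interval and 3-minute occupation, only offsets that separate the two occupation patterns by at least the occupation time survive; the full model replaces this enumeration with inequality constraints inside the non-linear program.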
Procedia PDF Downloads 88
3025 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement
Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini
Abstract:
Investigating possible capacities of visual functions whose adapted mechanisms can enhance the capability of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of unlimited applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players in a pilot study were processed. Two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while being shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and a multi-stage wavelet decomposition, that can be indicative of a subject being amateur or trained. The linear discriminant classifier achieves accuracy, sensitivity, and specificity of 100% when the average of the signal repetitions corresponding to the task is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving volleyball players' performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed.
Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis
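The final classification step, a two-class Fisher linear discriminant, can be sketched on already-extracted features; the Gabor/wavelet feature extraction itself is omitted, and the two-dimensional "band power" features below are invented toy data, not the study's VEP measurements.

```python
def fisher_lda(class0, class1):
    # Two-class Fisher LDA in 2-D: w = Sw^-1 (m1 - m0), solved by hand.
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    m0, m1 = mean(class0), mean(class1)
    s = [[0.0, 0.0], [0.0, 0.0]]          # pooled within-class scatter
    for pts, m in ((class0, m0), (class1, m1)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            s[0][0] += d[0] * d[0]; s[0][1] += d[0] * d[1]
            s[1][0] += d[1] * d[0]; s[1][1] += d[1] * d[1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [(s[1][1] * diff[0] - s[0][1] * diff[1]) / det,
         (-s[1][0] * diff[0] + s[0][0] * diff[1]) / det]
    # Decision threshold halfway between the projected class means.
    c = (sum(wi * mi for wi, mi in zip(w, m0)) +
         sum(wi * mi for wi, mi in zip(w, m1))) / 2
    return lambda p: 1 if sum(wi * pi for wi, pi in zip(w, p)) > c else 0

# Toy time-frequency features for amateur (0) vs. trained (1) subjects.
amateur = [(1.0, 2.1), (1.2, 1.9), (0.9, 2.3), (1.1, 2.0)]
trained = [(3.0, 4.1), (3.2, 3.8), (2.9, 4.3), (3.1, 4.0)]
classify = fisher_lda(amateur, trained)
```

Because the scatter matrix is positive definite, the trained class always projects above the threshold along w, so well-separated toy clusters are classified perfectly, mirroring the 100% figures reported for averaged repetitions.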
Procedia PDF Downloads 138
3024 Comparing Double-Stranded RNA Uptake Mechanisms in Dipteran and Lepidopteran Cell Lines
Authors: Nazanin Amanat, Alison Tayler, Steve Whyard
Abstract:
While chemical insecticides effectively control many insect pests, they also harm many non-target species. Double-stranded RNA (dsRNA) pesticides, in contrast, can be designed to target unique gene sequences and thus act in a species-specific manner. DsRNA insecticides do not, however, work equally well for all insects, and for some species that are considered refractory to dsRNA, a primary factor affecting efficacy is the relative ease by which dsRNA can enter a target cell’s cytoplasm. In this study, we are examining how different structured dsRNAs (linear, hairpin, and paperclip) can enter mosquito and lepidopteran cells, as they represent dsRNA-sensitive and refractory species, respectively. To determine how the dsRNAs enter the cells, we are using chemical inhibitors and RNA interference (RNAi)-mediated knockdown of key proteins associated with different endocytosis processes. Understanding how different dsRNAs enter cells will ultimately help in the design of molecules that overcome refractoriness to RNAi or develop resistance to dsRNA-based insecticides. To date, we have conducted chemical inhibitor experiments on both cell lines and have evidence that linear dsRNAs enter the cells using clathrin-mediated endocytosis, while the paperclip dsRNAs (pcRNAs) can enter both species’ cells in a clathrin-independent manner to induce RNAi. An alternative uptake mechanism for the pcRNAs has been tentatively identified, and the outcomes of our RNAi-mediated knockdown experiments, which should provide corroborative evidence of our initial findings, will be discussed.
Keywords: dsRNA, RNAi, uptake, insecticides, dipteran, lepidopteran
Procedia PDF Downloads 73
3023 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms
Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic
Abstract:
Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of the gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production through detailed modeling of gas desorption, diffusion, and non-linear flow mechanisms, combined with a statistical representation of these processes. The model represents the porous medium as a cube, in which free gas is present, containing a sphere (SiC: Sphere in Cube model) in which gas is adsorbed onto the kerogen or organic matter. The sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. As pressure declines, gas desorbs first from the outermost layer of the sphere, decreasing its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, adsorbs onto the available surface area, and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube: the diameter models gas storage, diffusion, and desorption, while the cube length accounts for the flow pathway in nanopores and micro-fractures.
Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes and clarifies the geological conditions under which successful shale gas production can be expected. A numerical model has been derived and compiled in FORTRAN to develop a simulator for shale gas production, with the spheres acting as a source term in each grid block. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of model input properties on gas production.
Keywords: adsorption, diffusion, non-linear flow, shale gas production
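The onion-layer desorption picture can be caricatured in a few lines: only the outermost layer that still holds gas releases it, and once that layer empties, the next layer inward is exposed. This is a toy mass-balance sketch with invented rates and layer contents, far simpler than the paper's FORTRAN simulator, which couples desorption with diffusion and non-linear flow.

```python
def drain_sphere(layers, rate=0.2, steps=100):
    # Toy onion-layer desorption: the outermost layer that still holds
    # gas releases a fixed fraction per time step; once it is
    # (numerically) empty, the next layer inward is exposed.
    layers = list(layers)        # gas content per layer, outermost last
    released = 0.0
    for _ in range(steps):
        for i in range(len(layers) - 1, -1, -1):
            if layers[i] > 1e-9:
                dq = rate * layers[i]
                layers[i] -= dq
                released += dq
                break
    return layers, released

initial = [5.0, 3.0, 2.0]        # inner, middle, outer layer contents
remaining, released = drain_sphere(initial)
# Mass balance: released gas plus what remains equals the initial load.
total = released + sum(remaining)
```

Even this caricature shows the characteristic behavior: the inner layers stay untouched until the outer ones are exhausted, so early production is controlled by the exposed surface rather than the total adsorbed volume.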
Procedia PDF Downloads 165
3022 Non-Linear Assessment of Chromatographic Lipophilicity of Selected Steroid Derivatives
Authors: Milica Karadžić, Lidija Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Anamarija Mandić, Aleksandar Oklješa, Andrea Nikolić, Marija Sakač, Katarina Penov Gaši
Abstract:
Using a chemometric approach, the relationships between chromatographic lipophilicity and in silico molecular descriptors were studied for twenty-nine selected steroid derivatives. The chromatographic lipophilicity was predicted using the artificial neural network (ANN) method. The most important in silico molecular descriptors were selected by applying stepwise selection (SS) paired with the partial least squares (PLS) method, and descriptors with satisfactory variable importance in projection (VIP) values were retained for ANN modeling. The usefulness of the generated models was confirmed by detailed statistical validation. High agreement between experimental and predicted values indicated that the obtained models have good quality and high predictive ability. Global sensitivity analysis (GSA) confirmed the importance of each molecular descriptor used as an input variable. The high-quality networks indicate a strong non-linear relationship between chromatographic lipophilicity and the in silico molecular descriptors used. Applying the selected molecular descriptors and the generated ANNs, a good prediction of the chromatographic lipophilicity of the studied steroid derivatives can be obtained. This article is based upon work from COST Actions (CM1306 and CA15222), supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, chemometrics, global sensitivity analysis, liquid chromatography, steroids
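The non-linear fitting an ANN performs can be illustrated with a minimal one-hidden-layer network trained by gradient descent on a made-up one-descriptor curve (y = x² stands in for the descriptor-to-lipophilicity mapping); the actual study used several selected descriptors and formally validated networks.

```python
import math, random

def train_ann(data, hidden=6, lr=0.1, epochs=3000, seed=0):
    # Minimal one-hidden-layer network (tanh units, linear output)
    # trained by per-sample gradient descent on squared error.
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0
    for _ in range(epochs):
        for x, t in data:
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            y = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = y - t
            for j in range(hidden):
                dh = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * dh * x
                b1[j] -= lr * dh
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# Made-up one-descriptor "lipophilicity" curve y = x^2 on [-1, 1].
data = [(i / 10.0, (i / 10.0) ** 2) for i in range(-10, 11)]
model = train_ann(data)
mse = sum((model(x) - t) ** 2 for x, t in data) / len(data)
```

A linear model cannot represent this curve at all, which is exactly the situation where the abstract's "strong non-linear relationship" makes ANNs preferable to PLS alone.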
Procedia PDF Downloads 345
3021 Society and Cinema in Iran
Authors: Seyedeh Rozhano Azimi Hashemi
Abstract:
There is no doubt that art is a social phenomenon, and cinema is the most social kind of art; hence, the relation between cinema and society can be analyzed from different aspects. In this paper, sociological cinema, a subdivision of the sociology of art, is investigated through two main approaches. The first focuses on the effects of cinema on society and is known as 'effects theory'; the second, which deals with the reflection of social issues in cinema, is called 'reflection theory'. The reflection-theory approach, unlike effects theory, considers movies as documents in which social life is reflected, and by analyzing them the changes and tendencies of a society can be understood. Criticizing these approaches to cinema and society does not mean they are not real; on the contrary, it shows that a better understanding of the relation between cinema and society requires more complicated models that take two aspects into account. First, such models should be bilinear, providing a dynamic and active relation between cinema and society: social life and cinema affect each other, fitting into a dialectic and dynamic process. Second, they should pay attention to the role of mediating elements such as small social institutions, marketing, advertisements, cultural patterns, art genres, and popular cinema in society. In the current study, the image of the middle class in Iranian cinema and the changing role of women in cinema and society, two salient issues that cinema and society have faced from the 1979 revolution through the 1980s, are analyzed. Films as artworks are, on one hand, reflections of social changes and, on the other, through their effects on society, attempts to speed up the trends of these changes.
By illustrating changes in ideologies and approaches, sometimes in exaggerated ways, and through its normalizing functions, cinema prepares audiences and public opinion for the acceptance of these changes. The audience, in turn, is affected by this process, which is bilinear and interactive.
Keywords: Iranian Cinema, Cinema and Society, Middle Class, Woman's Role
Procedia PDF Downloads 340
3020 Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule
Authors: Ming-Jong Yao, Chin-Sum Shui, Chih-Han Wang
Abstract:
This paper is motivated by a real-world decision scenario in which an industrial gas company, applying the Vendor Managed Inventory model, supplies liquid oxygen with a self-operated heterogeneous vehicle fleet to hospitals in nearby cities. We name the problem the Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule and formulate it as a non-linear mixed-integer programming problem that simultaneously determines the length of the planning cycle (PC), the length of the replenishment cycle, the dates of replenishment for each customer, and the vehicle routes of each day within the PC, such that the average daily operating cost within the PC, including inventory holding cost, setup cost, transportation cost, and overtime labor cost, is minimized. A solution method based on a genetic algorithm, embedded with an encoding and decoding mechanism and local search operators, is then proposed; a hash function is adopted to avoid repeated fitness evaluation of identical solutions. Numerical experiments demonstrate that the proposed solution method can effectively solve the problem for different lengths of the PC and numbers of customers. The method is also shown to be effective in determining whether the company should expand the storage capacity of a customer whose demand increases. Sensitivity analysis of the vehicle fleet composition shows that deploying a mixed fleet can reduce the daily operating cost.
Keywords: cyclic inventory routing problem, joint replenishment, heterogeneous vehicle, genetic algorithm
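One detail from the abstract, hashing solutions so identical chromosomes are never re-evaluated, is easy to sketch. The genetic algorithm and toy fitness function below are invented for illustration; the paper's fitness evaluates routing and inventory costs and is far more expensive, which is what makes the cache worthwhile.

```python
import random

def evaluate(sol):
    # Toy stand-in for the expensive routing/inventory cost evaluation;
    # the optimum assigns gene i the value i % 5, for a cost of 0.
    return sum((g - i % 5) ** 2 for i, g in enumerate(sol))

def ga(pop_size=20, genes=8, gens=40, seed=1):
    rng = random.Random(seed)
    cache, hits = {}, [0]
    def fitness(sol):
        key = tuple(sol)          # hashable identity of the solution
        if key in cache:
            hits[0] += 1          # repeated evaluation avoided
        else:
            cache[key] = evaluate(sol)
        return cache[key]
    pop = [[rng.randrange(5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]       # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:            # mutation
                child[rng.randrange(genes)] = rng.randrange(5)
            children.append(child)
        pop = survivors + children
    pop.sort(key=fitness)
    return pop[0], fitness(pop[0]), hits[0]

best, cost, cache_hits = ga()
```

Survivors carried between generations are re-ranked every time the population is sorted, so without the cache the same chromosomes would be costed repeatedly.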
Procedia PDF Downloads 87
3019 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, known as the urban heat island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed land surface temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature, but their spatial coverage is normally limited by the low density and uneven distribution of the stations. Although interpolation techniques such as inverse distance weighting (IDW), ordinary kriging (OK), or multiple linear regression (MLR) are used to estimate air temperature from observed points, such approaches may not reflect the real climatic conditions at an interpolated point, and quantifying local UHI over extensive areas from weather station observations alone is not practicable. Alternatively, thermal remote sensing has been widely investigated based on LST; data from Landsat, ASTER, or MODIS have been used extensively. LST has an indirect but significant influence on air temperatures, yet high-resolution near-surface air temperature (NSAT) remains difficult to retrieve. Here we experiment with geographically weighted regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps.
First, a GWR model was set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, were employed. Two time periods are considered according to the satellite revisit schedule, i.e., 10:30 am and 9:30 pm. Afterwards, the results were downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), were validated by cross-validation using indicators such as R², Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, was employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
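The first GWR step described above amounts to a locally weighted least-squares fit at each prediction point. The sketch below illustrates the idea on synthetic station data; the Gaussian kernel, the 5 km bandwidth, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gwr_predict(x_st, y_st, lst_st, temp_st, x0, y0, lst0, bandwidth=5000.0):
    """Estimate near-surface air temperature at (x0, y0) by a locally
    weighted regression of station air temperature on satellite LST.
    Weights follow a Gaussian kernel of the distance to each station."""
    d = np.hypot(x_st - x0, y_st - y0)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian spatial kernel
    X = np.column_stack([np.ones_like(lst_st), lst_st])
    W = np.diag(w)
    # Weighted least squares: beta = (X' W X)^-1 X' W y
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ temp_st)
    return beta[0] + beta[1] * lst0

# Synthetic stations: air temperature ~ 2 + 0.8 * LST plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 20000, 50)
y = rng.uniform(0, 20000, 50)
lst = rng.uniform(15, 35, 50)
temp = 2.0 + 0.8 * lst + rng.normal(0, 0.3, 50)
print(round(gwr_predict(x, y, lst, temp, 10000, 10000, 25.0), 1))
```

The second, downscaling step has the same structure, with albedo and DEM replacing LST as predictors.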
Procedia PDF Downloads 195
3018 Investigation of Shear Thickening Fluid Isolator with Vibration Isolation Performance
Authors: M. C. Yu, Z. L. Niu, L. G. Zhang, W. W. Cui, Y. L. Zhang
Abstract:
According to the theory of vibration isolation for linear systems, linear damping can reduce the transmissibility at the resonant frequency but inescapably increases the transmissibility in the isolation frequency region. To resolve this problem, nonlinear vibration isolation technology has recently received increasing attention. Shear thickening fluid (STF) is a special colloidal material. When STF is subjected to a high shear rate, its rheological behavior changes from flowable to rigid, i.e., it presents a shear thickening effect. An STF isolator is a vibration isolator using STF as the working material. Because of the shear thickening effect, the STF isolator is a variable-damped isolator. It exhibits small damping at high vibration frequencies and strong damping at the resonance frequency due to the increasing shear rate. Its inherent character is therefore very favorable for vibration isolation, especially for restraining resonance. In this paper, STF was first prepared by dispersing silica nanoparticles into polyethylene glycol 200 fluid, followed by rheological property tests. After that, an STF isolator was designed. The vibration isolation system supported by the STF isolator was modeled, and numerical simulation was conducted to study the vibration isolation properties of STF. Finally, the factors affecting vibration isolation performance were also investigated quantitatively. The research suggests that, owing to its variable damping, the STF vibration isolator can effectively restrain resonance without adverse effects at high frequencies, which meets the need for ideal damping properties and resolves the problem of traditional isolators.
Keywords: shear thickening fluid, variable-damped isolator, vibration isolation, restrain resonance
Procedia PDF Downloads 179
3017 Experimental Investigation on Utility and Suitability of Lateritic Soil as a Pavement Material
Authors: J. Hemanth, B. G. Shivaprakash, S. V. Dinesh
Abstract:
The locally available lateritic soil in the Dakshina Kannada and Udupi districts is traditionally used as building blocks for construction, but it does not meet the conventional requirements (LL ≤ 25% and PI ≤ 6%) or the desired four-day soaked CBR value for use as a sub-base course material in pavements. In order to improve its properties to satisfy the Atterberg limits, the soil was blended with sand, cement, and quarry dust at various percentages. To meet the CBR strength requirements, individual and combined gradation of various sized aggregates along with lateritic soil and other filler materials was carried out for coarse-graded granular sub-base materials (Grading II and Grading III). The effect of additives blended with lateritic soil and aggregates is studied in terms of Atterberg limits, compaction, California Bearing Ratio (CBR), and permeability. It has been observed that the addition of sand, cement, and quarry dust is effective in improving the Atterberg limits, CBR values, and permeability values. The obtained CBR and permeability values of the Grading III and Grading II materials were found to be sufficient for use as sub-base courses for low-volume roads and high-volume roads, respectively.
Keywords: lateritic soil, sand, quarry dust, gradation, sub-base course, permeability
Procedia PDF Downloads 318
3016 Effect of Lime and Leaf Ash on Engineering Properties of Red Mud
Authors: Pawandeep Kaur, Prashant Garg
Abstract:
Red mud is a byproduct of aluminum extraction from bauxite. It is dumped in ponds which not only occupy thousands of acres of land but, because of its very high pH, also pollute the groundwater and the soil. Leaves are yet another large waste stream, especially during autumn, when they contribute to the blockage of drains and can easily catch fire, among other risks, and hence also need to be utilized effectively. The use of leaf ash and red mud in highway construction as a filling material may be an efficient way to dispose of both. In this study, leaf ash and lime were used as admixtures to improve the geotechnical engineering properties of red mud. The red mud was taken from National Aluminum Company Limited, Odisha, and the leaf ash was locally collected. The aim of the present study is to investigate the effect of lime and leaf ash on the compaction and strength characteristics of red mud. California Bearing Ratio (CBR) and Unconfined Compressive Strength tests were performed on red mud with varying percentages of lime and leaf ash. Leaf ash was added in proportions of 2%, 4%, 6%, 8%, and 10%, whereas lime was added in proportions of 5% to 15%. The optimized value of lime was decided with respect to the maximum CBR of red mud mixed with different proportions of lime. An increase of 300% in the California Bearing Ratio of red mud and an increase of 125% in the Unconfined Compressive Strength values were observed. It may, therefore, be concluded that red mud may be effectively utilized in the highway industry as a filler material.
Keywords: stabilization, lime, red mud, leaf ash
Procedia PDF Downloads 242
3015 Non Linear Stability of Non Newtonian Thin Liquid Film Flowing down an Incline
Authors: Lamia Bourdache, Amar Djema
Abstract:
The effect of the non-Newtonian property (power-law index n) on traveling waves of a thin layer of power-law fluid flowing over an inclined plane is investigated. For this, a simplified second-order two-equation model (SM) is used; the complete model (CM) is a second-order four-equation model. The model is derived by combining the weighted residual integral method and lubrication theory, motivated by the fact that at the onset of the instability only a very small number of waves is observed. Using a suitable set of test functions, second-order terms are eliminated from the calculation so that the model remains accurate to the second-order approximation. Linear, spatial, and temporal stabilities are studied. For traveling waves, a particular type of wave form is studied: one that is steady in a moving frame, i.e., that travels at a constant celerity without changing its shape. Solutions of this type, characterized by their celerity, exist under suitable conditions, when the widening due to dispersion is balanced exactly by the narrowing effect due to nonlinearity. Varying the celerity parameter over some range allows exploring the entire spectrum of asymptotic behavior of these traveling waves. The (SM) model is converted into a three-dimensional dynamical system. The result is that the model exhibits bifurcation scenarios such as heteroclinic, homoclinic, Hopf, and period-doubling bifurcations for different values of the power-law index n. The influence of the non-Newtonian parameter on the nonlinear development of these traveling waves is discussed. It is found that the qualitative character of the bifurcation scenarios is insensitive to variation of the power-law index.
Keywords: inclined plane, nonlinear stability, non-Newtonian, thin film
Procedia PDF Downloads 283
3014 Corpus-Based Description of Core English Nouns of Pakistani English, an EFL Learner Perspective at Secondary Level
Authors: Abrar Hussain Qureshi
Abstract:
Vocabulary has been highlighted as a key indicator in any foreign language learning program, especially English as a foreign language (EFL). It is often considered a potential tool in a foreign language curriculum, and its deficiency impedes successful communication in the target language. Knowledge of the lexicon is very significant in attaining communicative competence and performance. Nouns constitute a considerable bulk of English vocabulary; indeed, they are the backbone of the English language and the main semantic carriers in spoken and written discourse. As nouns dominate the bulk of the English lexicon, their role becomes all the more important. The undertaken research is a systematic effort in this regard to work out a list of highly frequent Pakistani English nouns for EFL learners at the secondary level. It will encourage autonomy among EFL learners as well as save their time. The corpus used for the research has been developed locally from leading English newspapers of Pakistan. WordSmith Tools has been used to process the research data and to retrieve the word list of frequent Pakistani English nouns. The retrieved list of core Pakistani English nouns is expected to be useful for English language learners at the secondary level, as it covers a wide range of speech events.
Keywords: corpus, EFL, frequency list, nouns
Procedia PDF Downloads 103
3013 A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume
Authors: Takashi Kaburagi, Masashi Takenaka, Yosuke Kurihara, Takashi Matsumoto
Abstract:
Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because a subject may intentionally change his or her answers if multiple tests are carried out. In this article, we present a modified index called the "multi-channel Laterality Index at Rest (mc-LIR)" obtained by recording brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index measures multiple positions near Fpz as defined by the international 10-20 positioning system. Using 24 subjects, the dependence on the number of measuring points used to calculate the mc-LIR and its correlation coefficients with the STAI scores are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR, and the cross-validation error is reported. The experimental results show that using multiple positions near Fpz improves the correlation coefficients and the estimation compared with using only two positions.
Keywords: frontal lobe, functional near-infrared spectroscopy, state-trait anxiety inventory score, stress
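The estimation step described above, a simple linear regression with cross-validated error, can be sketched as follows. The data here are synthetic (24 hypothetical subjects), and the slope, intercept, and noise level are invented for illustration, not the study's values.

```python
import numpy as np

def fit_linear(x, y):
    """Ordinary least squares fit y = a + b*x (here, STAI score vs. mc-LIR)."""
    b, a = np.polyfit(x, y, 1)   # polyfit returns [slope, intercept]
    return a, b

def loo_cv_error(x, y):
    """Leave-one-out cross-validation RMSE for the simple linear model."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        a, b = fit_linear(x[mask], y[mask])
        errs.append(y[i] - (a + b * x[i]))
    return float(np.sqrt(np.mean(np.square(errs))))

# Illustrative data: 24 subjects, STAI roughly linear in the mc-LIR index
rng = np.random.default_rng(1)
mc_lir = rng.uniform(-1, 1, 24)
stai = 40 + 12 * mc_lir + rng.normal(0, 2, 24)
a, b = fit_linear(mc_lir, stai)
print(round(b, 1), round(loo_cv_error(mc_lir, stai), 2))
```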
Procedia PDF Downloads 250
3012 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation
Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski
Abstract:
In portfolio selection problems, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return, in proportion to the risk measure increase, when compared to risk-free investments. In the classical Markowitz model, the risk is measured by the variance, thus representing the Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments; the simplex method efficiency is thus not seriously affected by the number of scenarios, which guarantees easy solvability. Moreover, we show that, under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with the second-order stochastic dominance rules.
Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming
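For a fixed portfolio, the reward-risk ratio with the MAD measure is straightforward to evaluate over a scenario set, which is the quantity the LP models above optimize. A minimal sketch on invented numbers (the LP reformulation and its dual are not shown here):

```python
import numpy as np

def mad_ratio(weights, scenarios, risk_free=0.0):
    """Reward-risk ratio with the MAD risk measure over equally likely
    scenarios. scenarios is a (T, n) matrix of instrument returns and
    weights is the portfolio vector x."""
    port = scenarios @ weights                 # portfolio return per scenario
    mean_ret = port.mean()
    mad = np.abs(port - mean_ret).mean()       # mean absolute deviation
    return (mean_ret - risk_free) / mad

# Two instruments, four scenarios (illustrative numbers only)
R = np.array([[0.02,  0.05],
              [0.01, -0.03],
              [0.03,  0.06],
              [0.00,  0.04]])
w_a = np.array([0.5, 0.5])
w_b = np.array([0.8, 0.2])
print(round(mad_ratio(w_a, R), 3))             # -> 1.286
```

Maximizing this ratio over all feasible weight vectors is what the paper reformulates as an LP whose structural constraint count depends only on the number of instruments.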
Procedia PDF Downloads 407
3011 Amperometric Biosensor for Glucose Determination Based on a Recombinant Mn Peroxidase from Corn Cross-linked to a Gold Electrode
Authors: Anahita Izadyar, My Ni Van, Kayleigh Amber Rodriguez, Ilwoo Seok, Elizabeth E. Hood
Abstract:
Using a recombinant enzyme derived from corn and a simple modification, we fabricated a facile, fast, and cost-effective biosensor to measure glucose. The Nafion/Plant-Produced Mn Peroxidase (PPMP)-glucose oxidase (GOx)-bovine serum albumin (BSA)/Au electrode showed an excellent amperometric response for detecting glucose. This biosensor responds to a wide range of glucose concentrations, 20.0 µM to 15.0 mM, and has a lower detection limit (LOD) of 2.90 µM. The reproducibility of the response using six electrodes is also very good and indicates the capability of this biosensor to detect a wide range of glucose concentrations, 3.10 ± 0.19 µM to 13.2 ± 1.8 mM. The selectivity of this electrode was investigated in an optimized experimental solution containing 10% diet green tea with citrus, which contains ascorbic acid (AA) and citric acid (CA), over a wide glucose concentration range of 0.02 to 14.0 mM, with an LOD of 3.10 µM. Reproducibility was also investigated using 4 electrodes in this sample and shows notable results over the wide concentration range of 3.35 ± 0.45 µM to 13.0 ± 0.81 mM. We also used other voltammetry methods to evaluate this biosensor: linear sweep voltammetry (LSV) detects glucose over a wide range of 0.10 to 15.0 mM with a lower detection limit of 19.5 µM. The strengths of this enzyme biosensor are its simplicity, wide linear ranges, sensitivity, selectivity, and low limits of detection. We expect that the modified biosensor has potential for monitoring various biofluids.
Keywords: plant-produced manganese peroxidase, enzyme-based biosensors, glucose, modified gold electrode, glucose oxidase
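As a hedged illustration of how linear-range and LOD figures of this kind are commonly derived (not the authors' actual data or procedure), the sketch below fits a linear calibration curve of current against glucose concentration and estimates an LOD from the widely used 3.3·sigma/slope rule; all numbers are invented.

```python
import numpy as np

# Hypothetical amperometric calibration: current (µA) vs. glucose (mM)
conc = np.array([0.02, 0.1, 0.5, 1.0, 5.0, 10.0, 15.0])
current = np.array([0.011, 0.052, 0.26, 0.51, 2.48, 5.02, 7.49])

# Linear calibration fit: current = slope * conc + intercept
slope, intercept = np.polyfit(conc, current, 1)
residuals = current - (slope * conc + intercept)
sigma = residuals.std(ddof=2)       # standard error of the regression
lod = 3.3 * sigma / slope           # common 3.3*sigma/slope LOD estimate
print(round(slope, 2), lod > 0)
```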
Procedia PDF Downloads 140
3010 The Impact of Hospital Strikes on Patient Care: Evidence from 135 Strikes in the Portuguese National Health System
Authors: Eduardo Costa
Abstract:
Hospital strikes in the Portuguese National Health Service (NHS) are becoming increasingly frequent, raising concerns with respect to patient safety. In fact, data show that mortality rates for patients admitted during strikes are up to 30% higher than for patients admitted on other days. This paper analyzes the effects of hospital strikes on patient outcomes. Specifically, it analyzes the impact of different strikes (physicians, nurses, and other health professionals) on in-hospital mortality rates, readmission rates, and length of stay. The paper uses patient-level data containing all NHS hospital admissions in mainland Portugal from 2012 to 2017, together with a comprehensive strike dataset comprising over 250 strike days (19 physician-strike days, 150 nurse-strike days, and 50 other health professional-strike days) from 135 different strikes. The paper uses a linear probability model and controls for hospital and regional characteristics, time trends, and changes in patient composition and diagnoses. Preliminary results suggest a 6-7% increase in in-hospital mortality rates for patients exposed to physicians' strikes. The effect is smaller for patients exposed to nurses' strikes (2-5%). Patients exposed to nurses' strikes during their stay have, on average, higher 30-day urgent readmission rates (4%). Length of stay also seems to increase for patients exposed to any strike. Results, conditional on further testing, namely with non-linear models, suggest that hospital operations and service levels are partially disrupted during strikes.
Keywords: health sector strikes, in-hospital mortality rate, length of stay, readmission rate
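A linear probability model of the kind described above regresses a binary outcome (in-hospital death) on a strike-exposure indicator by ordinary least squares, so the exposure coefficient reads directly as a change in probability. The sketch below uses synthetic admissions with an assumed effect of 0.3 percentage points on a 5% baseline, roughly the 6% relative increase in the preliminary results; the data and effect size are illustrative, not the paper's.

```python
import numpy as np

def lpm(X, y):
    """Linear probability model: OLS of a binary outcome on covariates.
    Returns [intercept, coefficients...]; the strike coefficient is the
    estimated change in death probability."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic admissions: 5% baseline mortality, +0.3 pp when exposed to a
# strike (about a 6% relative increase, as an invented illustration)
rng = np.random.default_rng(2)
n = 200_000
strike = rng.integers(0, 2, n)
p = 0.05 + 0.003 * strike
died = (rng.random(n) < p).astype(float)
beta = lpm(strike.reshape(-1, 1), died)
print(round(float(beta[1]), 3))
```

In the paper, hospital and regional characteristics, time trends, and case-mix controls would enter as additional columns of X.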
Procedia PDF Downloads 135
3009 Rayleigh-Bénard-Taylor Convection of Newtonian Nanoliquid
Authors: P. G. Siddheshwar, T. N. Sakshath
Abstract:
In this paper, we carry out linear and non-linear stability analyses of Rayleigh-Bénard convection of a Newtonian nanoliquid in a rotating medium (known as Rayleigh-Bénard-Taylor convection). Rigid-rigid isothermal boundaries are considered for the investigation. The Khanafer-Vafai-Lightstone single-phase model is used for studying instabilities in nanoliquids. The various thermophysical properties of the nanoliquid are obtained using phenomenological laws and mixture theory. The eigen boundary value problem is solved for the Rayleigh number by an analytical method using trigonometric eigenfunctions. We observe that the critical nanoliquid Rayleigh number is less than that of the base liquid. Thus the onset of convection is advanced by the addition of nanoparticles, and an increase in volume fraction leads to an advanced onset and thereby an increase in heat transport. The amplitudes of the convective modes required for estimating the heat transport are determined analytically. The tri-modal standard Lorenz model is derived for the steady state, assuming small-scale convective motions. The effect of rotation on the onset of convection and on heat transport is investigated and depicted graphically. It is observed that the onset of convection is delayed by rotation, leading to a decrease in heat transport; hence, rotation has a stabilizing effect on the system. This is due to the fact that part of the energy of the system is used to create the velocity component V. We observe that the amount of heat transport is less in the case of rigid-rigid isothermal boundaries compared to free-free isothermal boundaries.
Keywords: nanoliquid, rigid-rigid, rotation, single phase
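The paper's tri-modal Lorenz model (whose coefficients depend on the nanoliquid properties and rotation rate, and are not reproduced here) has the same truncated three-mode structure as the classical Lorenz system. As a hedged illustration only, the sketch below integrates the classical system with a fourth-order Runge-Kutta step; the parameter values are the standard textbook ones, not the paper's.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Classical three-mode Lorenz system: the minimal truncated model of
    Rayleigh-Benard convection amplitudes (standard parameters)."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     r * x - y - x * z,
                     x * y - b * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
for _ in range(2000):                 # integrate to t = 20 with dt = 0.01
    state = rk4_step(lorenz_rhs, state, 0.01)
print(np.all(np.isfinite(state)))
```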
Procedia PDF Downloads 234
3008 Assessment of Landfill Pollution Load on Hydroecosystem by Use of Heavy Metal Bioaccumulation Data in Fish
Authors: Gintarė Sauliutė, Gintaras Svecevičius
Abstract:
Landfill leachates contain a number of persistent pollutants, including heavy metals. These have the ability to spread through ecosystems and accumulate in fish, most of which are classified as top consumers of trophic chains. Fish are freely swimming organisms, but, due to their species-specific ecological and behavioral properties, they often prefer the most suitable biotopes and therefore do not necessarily avoid harmful substances or environments. That is why it is necessary to evaluate persistent pollutant dispersion in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g., river-pond-river), the distance from the pollution source can be a good indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, which is located 5 km east of the city of Šiauliai. Fish tissue (gills, liver, and muscle) metal concentration measurements were performed on two types of ecologically different fishes according to their feeding characteristics: benthophagous (gibel carp, roach) and predatory (northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. However, only the log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with metal concentration in all the predatory fish tissues studied (gills, liver, and muscle).
Keywords: bioaccumulation in fish, heavy metals, hydroecosystem, landfill leachate, mathematical model
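The log-regression pattern reported above, metal concentration positively correlated with distance from the source, can be sketched as a fit of log-transformed concentration against distance. The data, growth rate, and noise level below are invented for illustration and are not the study's measurements.

```python
import numpy as np

def fit_log_regression(distance, conc):
    """Regress log-transformed tissue metal concentration on distance from
    the pollution source; a positive slope means concentration rises with
    distance, the pattern found for predatory fish tissues."""
    slope, intercept = np.polyfit(distance, np.log(conc), 1)
    return intercept, slope

# Hypothetical samples: concentration ~ 0.8 * exp(0.3 * km) with noise
rng = np.random.default_rng(3)
dist_km = np.linspace(0.5, 5.0, 12)
conc = 0.8 * np.exp(0.3 * dist_km) * np.exp(rng.normal(0, 0.05, 12))
b0, b1 = fit_log_regression(dist_km, conc)
print(round(b1, 2))
```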
Procedia PDF Downloads 286
3007 Thermomechanical Simulation of Equipment Subjected to an Oxygen Pressure and Heated Locally by the Ignition of Small Particles
Authors: Khaled Ayfi
Abstract:
In industrial oxygen systems at high temperature and high pressure, contamination by solid particles is one of the principal causes of ignition hazards. Indeed, the gas can sweep along particles generated by corrosion inside the pipes or during maintenance operations (welding residues, careless disassembly, etc.) and produce accumulations at places where the gas velocity decreases. Moreover, in such an oxygen-rich (oxidizing) environment, particles are highly reactive and can ignite system walls more readily and at higher temperatures. Oxidation-based thermal effects are responsible for the loss of mechanical properties, leading to the destruction of the pressure equipment wall. To deal with this problem, a numerical analysis is performed on a sample representative of a wall subjected to pressure and temperature. Validation and analysis are done by comparing the numerical simulation results to experimental measurements. More precisely, in this work, we propose a numerical model that describes the thermomechanical behavior of thin metal disks under pressure and subjected to laser heating. The model takes into account geometric and material nonlinearity and has been validated by comparison of simulation results with experimental measurements.
Keywords: ignition, oxygen, numerical simulation, thermomechanical behavior
Procedia PDF Downloads 105
3006 Quantification of Effect of Linear Anionic Polyacrylamide on Seepage in Irrigation Channels
Authors: Hamil Uribe, Cristian Arancibia
Abstract:
In Chile, water for irrigation and hydropower generation is delivered essentially through unlined earthen channels, which have high seepage losses. Traditional seepage-abatement technologies are very expensive. The goals of this work were to quantify water loss in unlined channels and to select reaches in which to evaluate the use of linear anionic polyacrylamide (LA-PAM) for reducing seepage losses. The study was carried out in the Maule Region, in the central area of Chile. Water users indicated reaches with potential seepage losses, 45 km of channels in total, whose flow varied between 1.07 and 23.6 m³ s⁻¹. According to seepage measurements, 4 reaches of channels, 4.5 km in total, were selected for LA-PAM application. One to 4 LA-PAM applications were performed at rates of 11 kg ha⁻¹, taking the wetted perimeter area as the basis of calculation. Large channels allowed a motorboat to move against the current to carry out the LA-PAM application. For the applications, a seeder machine was used to distribute the granulated polymer evenly on the water surface. Water flow was measured (StreamPro ADCP) upstream and downstream in the selected reaches to estimate seepage losses before and after LA-PAM application. Weekly measurements were made to quantify the treatment effect and its duration. In each case, water turbidity and temperature were measured. The channels showed variable losses of up to 13.5%. Channels showing water gains were not treated with PAM. In all cases, the LA-PAM effect was positive, achieving average loss reductions from 8% to 3.1%. Water loss was confirmed, and it was possible to reduce seepage through LA-PAM applications provided that losses were known and correctly determined when applying the polymer. This could allow increasing irrigation security in critical periods, especially under drought conditions.
Keywords: canal seepage, irrigation, polyacrylamide, water management
Procedia PDF Downloads 174
3005 Temporal Trends in the Urban Metabolism of Riyadh, Saudi Arabia
Authors: Naif Albelwi, Alan Kwan, Yacine Rezgui
Abstract:
Cities with rapid growth face tremendous challenges, not only to provide services to meet this growth but also to ensure that it occurs in a sustainable way. The consumption of material, energy, and water resources is inextricably linked to population growth, with a unique impact in urban areas, especially in light of significant investments in infrastructure to support urban development. Urban Metabolism (UM) is becoming popular as it provides a framework for accounting for the mass and energy flows through a city. The objective of this study is to determine the energy and material flows of Riyadh, Saudi Arabia, using locally generated data from 1996 and 2012, and to analyze the temporal trends of these flows. Preliminary results show that while the population of Riyadh grew by 90% since 1996, the input and output flows have increased at a higher rate. Results also show an increase in mobile energy consumption from 61k TJ in 1996 to 157k TJ in 2012, which points to Riyadh's inefficient urban form. The study findings highlight the importance of developing effective policies to improve the use of resources.
Keywords: energy and water consumption, sustainability, urban development, urban metabolism
Procedia PDF Downloads 272
3004 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can be easily copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to ownership problems. Digital image watermarking techniques hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. We discuss in this paper two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The Discrete Wavelet Transform (DWT) is used with the two approaches separately in the embedding process to transform the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficient for each watermark bit in the original image and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048, using a population size of 100, 150 iterations, and 3×3 blocks. According to the results, we can note that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area in the watermarked image. Better PSO results were obtained when using a swarm size of 100.
Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform
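The PSNR and MSE figures on which the two approaches are compared follow the standard definitions. A minimal sketch on a synthetic cover image (the DWT embedding and the PSO/GA search themselves are not shown):

```python
import numpy as np

def mse_psnr(original, watermarked, peak=255.0):
    """Mean squared error and peak signal-to-noise ratio between the cover
    and watermarked images, the quality measures reported in the paper."""
    err = (original.astype(float) - watermarked.astype(float)) ** 2
    mse = err.mean()
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr

# Synthetic 64x64 cover image with a tiny embedding-like perturbation
rng = np.random.default_rng(4)
cover = rng.integers(0, 256, (64, 64))
marked = cover.copy()
marked[::8, ::8] += 1          # alter 64 of 4096 pixels by one level
mse, psnr = mse_psnr(cover, marked)
print(round(mse, 4), round(psnr, 1))   # prints: 0.0156 66.2
```

Higher PSNR (lower MSE) means the watermark is less visible, which is why the PSO result of PSNR 53 versus the GA result of 50.5 indicates better imperceptibility.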
Procedia PDF Downloads 226