Search results for: spectral sensitivity
629 Development and Validation of a Green Analytical Method for the Analysis of Daptomycin Injectable by Fourier-Transform Infrared Spectroscopy (FTIR)
Authors: Eliane G. Tótoli, Hérida Regina N. Salgado
Abstract:
Daptomycin is an important antimicrobial agent in clinical practice today, since it is highly active against Gram-positive bacteria that pose particular challenges for medicine, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE). Environmental preservation has received special attention in recent years. Given the evident need to protect the natural environment and the introduction of strict quality requirements for the analytical procedures used in pharmaceutical analysis, industry must seek environmentally friendly alternatives to the analytical methods and other processes in its routine. In view of these factors, green analytical chemistry is prevalent and encouraged nowadays, and infrared spectroscopy stands out in this context: it uses no organic solvents and, although formally accepted for the identification of individual compounds, it also allows the quantitation of substances. Considering that few green analytical methods are described in the literature for the analysis of daptomycin, the aim of this work was the development and validation of a green analytical method for the quantification of this drug in lyophilized powder for injectable solution by Fourier-transform infrared spectroscopy (FT-IR). Method: Translucent potassium bromide pellets containing predetermined amounts of the drug were prepared and subjected to spectrophotometric analysis in the mid-infrared region. After obtaining the infrared spectrum, and with the assistance of the IR Solution software, quantitative analysis was carried out in the spectral region between 1575 and 1700 cm⁻¹, corresponding to a carbonyl band of the daptomycin molecule, whose height was analyzed in terms of absorbance.
The method was validated according to ICH guidelines regarding linearity, precision (repeatability and intermediate precision), accuracy and robustness. Results and discussion: The method proved to be linear (r = 0.9999), precise (RSD < 2.0%), accurate and robust over a concentration range of 0.2 to 0.6 mg/pellet. In addition, the technique uses no organic solvents, a great advantage over the most common analytical methods. This helps minimize the generation of organic solvent waste by industry and thereby reduces the environmental impact of its activities. Conclusion: The validated method proved adequate to quantify daptomycin in lyophilized powder for injectable solution and can be used for its routine analysis in quality control. In addition, the proposed method is environmentally friendly, in line with the global trend.
Keywords: daptomycin, Fourier-transform infrared spectroscopy, green analytical chemistry, quality control, spectrometry in IR region
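The abstract reports a linear calibration (r = 0.9999) of band-height absorbance against drug mass over 0.2-0.6 mg/pellet. A minimal sketch of such a least-squares calibration and the back-calculation of an unknown pellet is given below; the absorbance values are illustrative placeholders, not the authors' data — only the mass range comes from the abstract.

```python
# Least-squares calibration sketch for an FT-IR quantitation method.
# The (mass, absorbance) pairs below are ILLUSTRATIVE, not the authors'
# data; only the 0.2-0.6 mg/pellet range comes from the abstract.

def linear_fit(xs, ys):
    """Ordinary least squares: returns (slope, intercept, r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Hypothetical calibration pellets across the validated range (mg/pellet).
mass = [0.2, 0.3, 0.4, 0.5, 0.6]
absorbance = [0.101, 0.149, 0.202, 0.251, 0.299]  # carbonyl band heights

slope, intercept, r = linear_fit(mass, absorbance)

def quantify(a_sample):
    """Back-calculate drug mass (mg/pellet) from a measured band height."""
    return (a_sample - intercept) / slope

print(round(r, 4))                # correlation coefficient of the fit
print(round(quantify(0.225), 3))  # mg of daptomycin in an unknown pellet
```
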
Procedia PDF Downloads 381
628 Electrophoretic Deposition of Ultrasonically Synthesized Nanostructured Conducting Poly(o-phenylenediamine)-Co-Poly(1-naphthylamine) Film for Detection of Glucose
Authors: Vaibhav Budhiraja, Chandra Mouli Pandey
Abstract:
The ultrasonic synthesis of nanostructured conducting copolymers is an effective technique for producing polymers with desired chemical properties. The tailored nanostructure shows tremendous improvement in sensitivity and stability for detecting a variety of analytes. The present work reports ultrasonically synthesized nanostructured conducting poly(o-phenylenediamine)-co-poly(1-naphthylamine) (POPD-co-PNA). The synthesized material has been characterized using Fourier-transform infrared spectroscopy (FTIR), ultraviolet-visible spectroscopy, transmission electron microscopy, X-ray diffraction and cyclic voltammetry. FTIR spectroscopy confirmed random copolymerization, while UV-visible studies reveal the variation in polaronic states upon copolymerization. High crystallinity was achieved via ultrasonic synthesis, as confirmed by X-ray diffraction, and the controlled morphology of the nanostructures was confirmed by transmission electron microscopy analysis. Cyclic voltammetry shows that POPD-co-PNA has rather high electrochemical activity, a behavior explained by the variable orientations adopted by the conducting polymer chains. The synthesized material was electrophoretically deposited onto an indium tin oxide coated glass substrate used as the cathode, with a parallel platinum plate as the counter electrode. The fabricated bioelectrode was further used for the detection of glucose by crosslinking glucose oxidase in the POPD-co-PNA film. The bioelectrode shows a surface-controlled electrode reaction with an electron transfer coefficient (α) of 0.72, a charge transfer rate constant (ks) of 21.77 s⁻¹ and a diffusion coefficient of 7.354 × 10⁻¹⁵ cm²s⁻¹.
Keywords: conducting, electrophoretic, glucose, poly(o-phenylenediamine), poly(1-naphthylamine), ultrasonic
Procedia PDF Downloads 142
627 Numerical Aeroacoustics Investigation of Eroded and Coated Leading Edge of NACA 64-618 Airfoil
Authors: Zeinab Gharibi, B. Stoevesandt, J. Peinke
Abstract:
Long-term surface erosion of wind turbine blades, especially at the leading edge, impairs aerodynamic performance and therefore lowers the efficiency of the blades, mostly in the high-speed rotor tip regions. Blade protection provides significant improvements in annual energy production, reduces costly downtime, and protects the integrity of the blades. However, this protection still influences the aerodynamic behavior and the broadband noise caused by the interaction between the impinging turbulence and the blade's leading edge. This paper presents an extensive numerical aeroacoustics approach by investigating the sound power spectra of the eroded and coated NACA 64-618 wind turbine airfoil and evaluates the aeroacoustic improvements after the protection procedure. Using computational fluid dynamics (CFD), different quasi-2D numerical grids were implemented, and special attention was paid to the refinement of the boundary layers. The noise sources were captured and decoupled from acoustic propagation via the derived formulation of Curle's analogy implemented in OpenFOAM. The noise spectra were then compared for clean, coated and eroded profiles over a range of chord-based Reynolds numbers (1.6e6 ≤ Re ≤ 11.5e6). The angle of attack was zero in all cases. Verifications were conducted for the clean profile using available experimental data. Sensitivity studies for the far field were done at different observational positions. Furthermore, beamforming studies were performed by simulating an Archimedean spiral microphone array for far-field noise directivity patterns. Comparing the noise spectra of the coated and eroded geometries, results show that coating clearly improves the aerodynamic and acoustic performance of the eroded airfoil.
Keywords: computational fluid dynamics, computational aeroacoustics, leading edge, OpenFOAM
Procedia PDF Downloads 223
626 Storage System Validation Study for Raw Cocoa Beans Using Minitab® 17 and R (R-3.3.1)
Authors: Anthony Oppong Kyekyeku, Sussana Antwi-Boasiako, Emmanuel De-Graft Johnson Owusu Ansah
Abstract:
In this observational study, the performance of a known conventional storage system was tested and evaluated for fitness for its intended purpose. The system's scope extends to the storage of dry cocoa beans; its sensitivity, reproducibility and uncertainties are not known in detail. This study discusses the system's performance in the context of the existing literature on factors that influence the quality of cocoa beans during storage. Controlled conditions were defined precisely for the system to give a reliable baseline within specific established procedures. Minitab® 17 and the R statistical software (R-3.3.1) were used for the statistical analyses. The approach to the storage system testing was to observe and compare, through laboratory test methods, the quality of the cocoa bean samples before and after storage. The samples were kept in Kilner jars, and the temperature of the storage environment was controlled and monitored over a period of 408 days. Standard test methods used in the international cocoa trade, such as the cut test analysis, moisture determination with an Aqua boy KAM III model and bean count determination, were used for quality assessment. The data analysis treated the entire population as a sample in order to establish a reliable baseline for the data collected. The study concluded a statistically significant mean value at the 95% confidence interval (CI) for the performance data analysed before and after storage for all variables observed. Correlational graphs showed a strong positive correlation for all variables investigated, with the exception of All Other Defects (AOD). The weak relationship between the before and after data for AOD had an explained variability of 51.8%, with the unexplained variability attributable to the uncontrolled condition of hidden infestation before storage.
The current study concluded with a high-performance criterion for the storage system.
Keywords: benchmarking performance data, cocoa beans, hidden infestation, storage system validation
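The before/after correlation check described above reduces to computing a Pearson coefficient per variable and reading its square as explained variability (the abstract reports 51.8% for AOD). A minimal sketch, with illustrative counts rather than the study's data:

```python
# Sketch of the before/after-storage correlation analysis. The counts
# below are ILLUSTRATIVE; only the idea (r^2 read as the percentage of
# explained variability) mirrors the study.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

before = [4, 7, 2, 9, 5, 6]  # hypothetical AOD counts before storage
after  = [5, 6, 4, 8, 3, 7]  # hypothetical AOD counts after 408 days

r = pearson_r(before, after)
explained = r ** 2 * 100     # percentage of variability explained
print(round(explained, 1))
```
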
Procedia PDF Downloads 174
625 Investigation on Correlation of Earthquake Intensity Parameters with Seismic Response of Reinforced Concrete Structures
Authors: Semra Sirin Kiris
Abstract:
Nonlinear dynamic analysis is permitted for structures without any restrictions. The important issue is the selection of the design earthquake for the analyses, since quite different responses may be obtained using ground motion records from the same general area, even ones resulting from the same earthquake. In seismic design codes, the method requires scaling earthquake records to a specified hazard level based on the site response spectrum. Many studies have indicated that this selection constraint can cause a large scatter in response, and that other characteristics of the ground motion, obtained in different manners, may demonstrate better correlation with peak seismic response. For this reason, the influence of eleven different ground motion parameters on the peak displacement of reinforced concrete systems is examined in this paper. From 7020 nonlinear time history analyses of single-degree-of-freedom systems, the most effective earthquake parameters are given for ranges of the initial periods and strength ratios of the structures. In this study, a hysteresis model for reinforced concrete called Q-hyst is used, which does not take strength and stiffness degradation into account. The post-yielding to elastic stiffness ratio is taken as 0.15. The initial period T ranges from 0.1 s to 0.9 s with a 0.1 s interval, and three different strength ratios are used. The 260 earthquake records selected all have magnitudes greater than M = 6.
The earthquake parameters, related to the energy content, duration or peak values of the ground motion records, are PGA (Peak Ground Acceleration), PGV (Peak Ground Velocity), PGD (Peak Ground Displacement), MIV (Maximum Incremental Velocity), EPA (Effective Peak Acceleration), EPV (Effective Peak Velocity), teff (Effective Duration), A95 (Arias Intensity-based Parameter), SPGA (Significant Peak Ground Acceleration), ID (Damage Factor) and Sa (Spectral Acceleration). Observing the correlation coefficients between the ground motion parameters and the peak displacement of the structures, different earthquake parameters play a role in peak displacement demand depending on the ranges formed by the period and the strength ratio of the reinforced concrete system. The influence of Sa tends to decrease for high values of the strength ratio and T = 0.3 s-0.6 s. ID and PGD are not suitable as measures of earthquake effect, since no high correlation with displacement demand is observed. The influence of A95 is high for T = 0.1 s but low for higher values of T and the strength ratio. PGA, EPA and SPGA show the highest correlation for T = 0.1 s, but their effectiveness decreases with high T. Considering the whole range of structural parameters, MIV is the most effective parameter.
Keywords: earthquake parameters, earthquake resistant design, nonlinear analysis, reinforced concrete
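Operationally, the ranking above amounts to computing a correlation coefficient between each intensity parameter and the peak displacement across records, then taking the parameter with the strongest correlation. A toy illustration follows; all numbers are illustrative — the study's conclusion that MIV is most effective rests on its 7020 nonlinear analyses, not on this data.

```python
# Toy illustration of ranking intensity parameters by their correlation
# with peak displacement demand. All values are ILLUSTRATIVE.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

peak_disp = [1.0, 2.1, 2.9, 4.2, 5.0]   # hypothetical peak displacements (cm)

params = {                               # hypothetical per-record measures
    "PGA": [0.2, 0.5, 0.3, 0.6, 0.4],    # g
    "PGV": [10, 18, 25, 30, 44],         # cm/s
    "MIV": [20, 41, 59, 83, 101],        # cm/s
}

corr = {name: pearson_r(vals, peak_disp) for name, vals in params.items()}
best = max(corr, key=lambda name: abs(corr[name]))
print(best)   # parameter with the strongest |correlation|
```
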
Procedia PDF Downloads 151
624 High Throughput LC-MS/MS Studies on Sperm Proteome of Malnad Gidda (Bos Indicus) Cattle
Authors: Kerekoppa Puttaiah Bhatta Ramesha, Uday Kannegundla, Praseeda Mol, Lathika Gopalakrishnan, Jagish Kour Reen, Gourav Dey, Manish Kumar, Sakthivel Jeyakumar, Arumugam Kumaresan, Kiran Kumar M., Thottethodi Subrahmanya Keshava Prasad
Abstract:
Spermatozoa are highly specialized, transcriptionally and translationally inactive haploid male gametes. An understanding of the sperm proteome is indispensable for exploring the mechanisms of sperm motility and fertility. Though there are a large number of human sperm proteomic studies, in-depth proteomic information on Bos indicus spermatozoa is not yet well established. Therefore, we profiled the sperm proteome of the indigenous cattle breed Malnad Gidda (Bos indicus) using high-resolution mass spectrometry. In the current study, two semen ejaculates were collected from each of 3 breeding bulls using the artificial vagina method. Spermatozoa were isolated by 45% Percoll purification. Protein was extracted using a lysis buffer containing 2% sodium dodecyl sulphate (SDS), and the protein concentration was estimated. Fifty micrograms of protein from each individual were pooled for downstream processing. The pooled sample was fractionated using SDS-polyacrylamide gel electrophoresis, followed by in-gel digestion. The peptides were subjected to C18 StageTip clean-up and analyzed on an Orbitrap Fusion Tribrid mass spectrometer interfaced with a Proxeon Easy-nano LC II system (Thermo Scientific, Bremen, Germany). We identified a total of 6773 peptides with 28426 peptide spectral matches, belonging to 1081 proteins. Gene ontology analysis was carried out to determine the biological processes, molecular functions and cellular components associated with the sperm proteins. The biological processes chiefly represented in our data are oxidation-reduction (5%), spermatogenesis (2.5%) and spermatid development (1.4%). The highlighted molecular functions are ATP and GTP binding (14%), and the most prominent cellular components observed in our data were the nuclear membrane (1.5%), acrosomal vesicle (1.4%), and motile cilium (1.3%). Seventeen percent of the sperm proteins identified in this study were involved in metabolic pathways.
To the best of our knowledge, these data represent the first total sperm proteome from the indigenous cattle breed Malnad Gidda. We believe that our preliminary findings could provide a strong base for the future understanding of bovine sperm proteomics.
Keywords: Bos indicus, Malnad Gidda, mass spectrometry, spermatozoa
Procedia PDF Downloads 196
623 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technology advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation; new approaches are therefore required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone accomplish precoding, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires a large number of radio frequency (RF) chains, together with the corresponding signal mixers and analog-to-digital converters. As RF chain cost and power consumption are high, another alternative is needed. Although the hybrid precoding architecture, based on a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected structure, a hybrid precoding sub-array architecture, reduces hardware complexity by using fewer phase shifters, at the cost of some beamforming gain. In this paper, we treat hybrid precoder design in mmWave MIMO systems as a matrix factorization problem and adopt the alternating minimization principle to solve it. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method.
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms, and that the proposed approach significantly reduces computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used to compare, in simulation, the partially-connected and fully-connected hybrid precoding structures.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
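The hard-thresholding step at the core of the method can be sketched on a toy sparse-recovery problem: iterate a gradient step on ||y - Ax||² and keep only the k largest-magnitude entries. This is a generic IHT sketch, not the authors' precoder factorization; A, y, the sparsity level and the step size are our illustrative choices (A is chosen orthogonal so a unit step is safe).

```python
# Generic iterative hard thresholding (IHT) sketch on a toy k-sparse
# recovery problem. This is NOT the paper's precoder formulation; the
# matrix A (orthogonal, so step = 1 converges), y, and k are ours.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose_matvec(A, r):
    n = len(A[0])
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    keep = sorted(range(len(x)), key=lambda j: abs(x[j]), reverse=True)[:k]
    return [x[j] if j in keep else 0.0 for j in range(len(x))]

def iht(A, y, k, step=1.0, iters=50):
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - vi for yi, vi in zip(y, matvec(A, x))]   # residual
        g = transpose_matvec(A, r)                          # gradient dir
        x = hard_threshold([xi + step * gi for xi, gi in zip(x, g)], k)
    return x

# Toy system: orthogonal (Hadamard/2) measurement matrix, 2-sparse truth.
A = [[0.5, 0.5, 0.5, 0.5],
     [0.5, -0.5, 0.5, -0.5],
     [0.5, 0.5, -0.5, -0.5],
     [0.5, -0.5, -0.5, 0.5]]
x_true = [3.0, 0.0, 0.0, -1.0]
y = matvec(A, x_true)
x_hat = iht(A, y, k=2)
print(x_hat)   # recovers the 2-sparse vector
```
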
Procedia PDF Downloads 322
622 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward backpropagation networks. Results revealed that the GDM algorithm, with its adaptive learning capability, generally used a shorter time in both training and validation phases than the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, for the training and validation phases respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANNs for real-time forecasting should employ training algorithms that avoid the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting.
On the whole, it is recommended that evaluation should also consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
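The skill scores quoted above are standard and easy to compute: the coefficient of efficiency (CE, also known as Nash-Sutcliffe efficiency) plus MAE and MAPE. A sketch with illustrative flow series (not the study's streamflow data):

```python
# Sketch of the forecast skill scores used above: CE (Nash-Sutcliffe),
# MAE and MAPE. The flow series below are ILLUSTRATIVE.

def ce(obs, sim):
    """Coefficient of efficiency: 1 - SSE / (variance sum of obs)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / sst

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def mape(obs, sim):
    """Mean absolute percentage error (%)."""
    return 100 * sum(abs((o - s) / o) for o, s in zip(obs, sim)) / len(obs)

obs = [12.0, 30.0, 55.0, 41.0, 22.0, 18.0]  # hypothetical observed flows
sim = [14.0, 28.0, 50.0, 44.0, 21.0, 20.0]  # hypothetical 1-day forecasts

print(round(ce(obs, sim), 3), round(mae(obs, sim), 2), round(mape(obs, sim), 1))
```

A CE of 1 means a perfect forecast; a CE of 0 means the model is no better than predicting the observed mean, which is why the reported 94-98% values indicate strong skill.
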
Procedia PDF Downloads 152
621 Evaluating Habitat Manipulation as a Strategy for Rodent Control in Agricultural Ecosystems of Pothwar Region, Pakistan
Authors: Nadeem Munawar, Tariq Mahmood
Abstract:
Habitat manipulation is an important technique for controlling rodent damage in agricultural ecosystems. It involves intentional manipulation of the vegetation cover in habitats adjacent to the active burrows of rodents, in order to reduce shelter and food availability and to increase predation pressure. The current study was conducted in the Pothwar Plateau during the respective non-crop periods of wheat and groundnut (post-harvest and unploughed/non-crop fallow lands), with the aim of assessing the impact of a reduction in the vegetation height of adjacent habitats (field borders) on rodent richness and abundance. The study area was divided into treated and non-treated sites. At the treated sites, habitat manipulation was carried out by removing crop caches and non-crop vegetation over 10 cm in height to a distance of approximately 20 m from the fields. Trapping sessions at treated and non-treated sites adjacent to wheat-groundnut fields differed significantly (F(2, 6) = 13.2, P = 0.001), with the maximum number of rodents captured at non-treated sites. There was a significant difference in overall rodent abundance (P < 0.05) between crop stages and between treatments in both crops. A significant manipulation effect was observed on crop damage and yield, resulting in reduced damage within the associated croplands (P < 0.05). The outcomes of this study indicate a significant reduction of the rodent population at treated sites due to changes in vegetation height and cover, which affect important components, i.e., food, shelter and movements, and increase risk sensitivity in rodent feeding behavior; the rodents were therefore unable to reach levels at which they cause significant crop damage. The method is recommended as cost-effective and easy to apply.
Keywords: agricultural ecosystems, crop damage, habitat manipulation, rodents, trapping
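An F statistic of the reported shape, F(2, 6), comes from a one-way ANOVA with three groups of three observations. The sketch below shows that computation; the groups and capture counts are hypothetical, not the study's trapping data.

```python
# One-way ANOVA sketch producing an F(df1, df2) statistic like the
# F(2, 6) reported above. Group capture counts are ILLUSTRATIVE.

def anova_f(groups):
    """One-way ANOVA: F = between-group MS / within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df1, df2 = k - 1, n - k
    return (ss_between / df1) / (ss_within / df2), df1, df2

# Hypothetical rodent captures per trapping session, three site groups.
treated     = [3.0, 5.0, 4.0]
non_treated = [9.0, 12.0, 10.0]
fallow      = [6.0, 7.0, 8.0]

f, df1, df2 = anova_f([treated, non_treated, fallow])
print(df1, df2, round(f, 1))   # degrees of freedom and F statistic
```
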
Procedia PDF Downloads 165
620 Sensing of Cancer DNA Using Resonance Frequency
Authors: Sungsoo Na, Chanho Park
Abstract:
Lung cancer is one of the most common severe diseases leading to human death. Lung cancer can be divided into small-cell lung cancer (SCLC) and non-SCLC (NSCLC), and about 80% of lung cancers are NSCLC. Several studies have investigated the correlation between the epidermal growth factor receptor (EGFR) and NSCLCs, and EGFR inhibitor drugs such as gefitinib and erlotinib have therefore been used as lung cancer treatments. However, the treatments showed low response rates (10-20%) in clinical trials due to EGFR mutations that cause drug resistance. Patients resistant to EGFR inhibitor drugs are usually positive for a KRAS mutation. Assessment of EGFR and KRAS mutations is therefore essential for targeted therapy of NSCLC patients. To overcome the limitation of conventional therapies, both EGFR and KRAS mutations have to be monitored; in this work, only the detection of EGFR is presented. A variety of techniques has been presented for the detection of EGFR mutations. The standard method for detecting EGFR mutations in ctDNA relies on real-time polymerase chain reaction (PCR), which provides highly sensitive detection; however, the amplification steps increase both cost and complexity. Other technologies, such as BEAMing, next-generation sequencing (NGS), electrochemical sensors and silicon nanowire field-effect transistors, have also been presented, but they suffer from low sensitivity, high cost and complex data analysis. In this report, we propose a label-free and highly sensitive detection method for lung cancer using a quartz crystal microbalance based platform. The proposed platform is able to sense lung cancer mutant DNA with a limit of detection of 1 nM.
Keywords: cancer DNA, resonance frequency, quartz crystal microbalance, lung cancer
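The physics behind quartz crystal microbalance sensing is the Sauerbrey relation: mass bound to the crystal surface lowers its resonance frequency in proportion to the added mass. A sketch follows; the crystal parameters (5 MHz AT-cut quartz, 1 cm² active area) and the bound mass are our illustrative assumptions — the abstract does not state the crystal used.

```python
# Sauerbrey relation underlying QCM sensing: bound mass shifts the
# resonance frequency downward. Crystal parameters and the bound-DNA
# mass below are ILLUSTRATIVE assumptions, not from the abstract.

RHO_Q = 2.648      # density of quartz, g/cm^3
MU_Q = 2.947e11    # shear modulus of AT-cut quartz, g/(cm*s^2)

def sauerbrey_shift(f0_hz, delta_mass_g, area_cm2):
    """Frequency shift (Hz) for a thin rigid mass layer on a QCM."""
    return -2.0 * f0_hz ** 2 * delta_mass_g / (area_cm2 * (RHO_Q * MU_Q) ** 0.5)

f0 = 5e6      # assumed 5 MHz fundamental frequency
dm = 1e-6     # assumed 1 microgram of hybridized DNA
df = sauerbrey_shift(f0, dm, area_cm2=1.0)
print(round(df, 1))   # negative: frequency drops as mass binds
```
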
Procedia PDF Downloads 233
619 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects
Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes
Abstract:
Insects are extremely successful organisms, and a sophisticated olfactory system is in part responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many behaviors in insects. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many mosquito species, such as Anopheles gambiae, to locate vertebrate hosts. For instance, in A. gambiae the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of the volatiles of human sweat. On the other hand, in some insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, disruption of the pheromone receptor BmOR1, mediated by transcription activator-like effector nucleases (TALENs), completely removed the sensitivity to bombykol, affecting the pheromone-source searching behavior of male moths. The detection and identification of olfactory receptors in insect genomes is thus fundamental to improving our understanding of ecological interactions and to providing alternatives for integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow to enhance the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying hidden Markov models (HMMs) and different computational tools, potential candidate pheromone receptors were obtained in Tuta absoluta, as well as potential carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study showed the validity of a bioinformatic workflow with the potential to improve the identification of certain olfactory receptors in different orders of insects.
Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction
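A typical late step in an HMM-based receptor-detection workflow is filtering profile-search hits by E-value to shortlist candidate genes. The sketch below uses a toy, simplified three-column hit table (gene, profile, E-value) — it is not HMMER's exact tabular output layout, and the gene and profile names are hypothetical.

```python
# Filtering HMM profile-search hits by E-value to shortlist candidate
# olfactory receptor genes. The table is a TOY simplified format (gene,
# profile, E-value), not HMMER's exact --tblout layout; names are
# hypothetical.

EVALUE_CUTOFF = 1e-5

raw_hits = """\
gene0001\tOR_profile\t2.3e-40
gene0452\tOR_profile\t8.1e-07
gene0777\tOR_profile\t0.13
gene1203\tGR_CO2_profile\t4.4e-22
"""

def filter_hits(table, cutoff):
    """Keep (gene, profile) pairs whose E-value is at or below cutoff."""
    keep = []
    for line in table.strip().splitlines():
        gene, profile, evalue = line.split("\t")
        if float(evalue) <= cutoff:
            keep.append((gene, profile))
    return keep

candidates = filter_hits(raw_hits, EVALUE_CUTOFF)
print(candidates)   # weak hit gene0777 is filtered out
```
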
Procedia PDF Downloads 149
618 Luminescent Functionalized Graphene Oxide Based Sensitive Detection of Deadly Explosive TNP
Authors: Diptiman Dinda, Shyamal Kumar Saha
Abstract:
In the 21st century, the sensitive and selective detection of trace amounts of explosives has become a serious challenge. Nitro compounds and their derivatives are used worldwide to prepare different explosives. Recently, TNP (2,4,6-trinitrophenol) has become the constituent most commonly used to prepare powerful explosives all over the world; it is even more powerful than TNT or RDX. As explosives are electron deficient in nature, it is very difficult to detect one separately from a mixture. Again, due to its tremendous water solubility, the detection of TNP in the presence of other explosives in water is very challenging. Simple instrumentation, cost-effectiveness, speed and high sensitivity make fluorescence-based optical sensing a grand success compared to other techniques. Graphene oxide (GO), with a large number of epoxy groups, carries localized nonradiative electron-hole centres on its surface and therefore shows only very weak fluorescence. In this work, GO is functionalized with 2,6-diaminopyridine to remove those epoxy groups through an SN2 reaction. This turns GO into a bright blue luminescent fluorophore (DAP/rGO) with an intense PL spectrum at ∼384 nm when excited at a 309 nm wavelength. We have also characterized the material by FTIR, XPS, UV, XRD and Raman measurements. Using this fluorophore, a large fluorescence quenching (96%) is observed after the addition of only 200 µL of 1 mM TNP in water; other nitro explosives give very moderate PL quenching compared to TNP. Such high selectivity is related to the operation of a FRET mechanism from the fluorophore to TNP during the PL quenching experiment. TCSPC measurements also reveal that the lifetime of DAP/rGO drastically decreases from 3.7 to 1.9 ns after the addition of TNP. Our material is sensitive to TNP down to the 125 ppb level.
Finally, we believe that this graphene-based luminescent material will lead to a new class of sensing materials for detecting trace amounts of explosives in aqueous solution.
Keywords: graphene, functionalization, fluorescence quenching, FRET, nitroexplosive detection
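The quenching arithmetic behind the figures above is straightforward: quenching efficiency from steady-state intensities, the Stern-Volmer constant from F0/F, and the fractional lifetime drop from TCSPC. The intensities and the final TNP concentration below are illustrative assumptions; the 96% efficiency and the 3.7 to 1.9 ns lifetimes come from the abstract.

```python
# Quenching arithmetic sketch. F0/F intensities and the quencher
# concentration are ILLUSTRATIVE; the lifetimes (3.7 -> 1.9 ns) are
# the values reported in the abstract.

def quenching_efficiency(f0, f):
    """Fraction of fluorescence lost: (F0 - F) / F0."""
    return (f0 - f) / f0

def stern_volmer_ksv(f0, f, quencher_molar):
    """Stern-Volmer constant Ksv from F0/F = 1 + Ksv*[Q]."""
    return (f0 / f - 1) / quencher_molar

f0, f = 1000.0, 40.0           # hypothetical intensities (96% quenching)
tau0, tau = 3.7e-9, 1.9e-9     # reported lifetimes (s)
q_molar = 2.0e-5               # ASSUMED final TNP concentration (M)

eff = quenching_efficiency(f0, f)
ksv = stern_volmer_ksv(f0, f, q_molar)
lifetime_drop = 1 - tau / tau0   # fraction of lifetime lost with TNP

print(round(eff * 100, 1), round(lifetime_drop * 100, 1), f"{ksv:.2e}")
```

The lifetime shortening alongside the intensity loss is what points to a dynamic (FRET-type) quenching pathway rather than purely static complex formation.
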
Procedia PDF Downloads 440
617 Nanomaterial Based Electrochemical Sensors for Endocrine Disrupting Compounds
Authors: Gaurav Bhanjana, Ganga Ram Chaudhary, Sandeep Kumar, Neeraj Dilbaghi
Abstract:
The main sources of endocrine disrupting compounds in the ecosystem are hormones, pesticides, phthalates, flame retardants, dioxins, personal-care products, coplanar polychlorinated biphenyls (PCBs), bisphenol A, and parabens. These endocrine disrupting compounds are responsible for learning disabilities, brain development problems, deformations of the body, cancer, reproductive abnormalities in females and decreased sperm count in human males. Although the discharge of these chemical compounds into the environment cannot be stopped, their amount can be retarded through proper evaluation and detection techniques. The available techniques for the determination of these endocrine disrupting compounds mainly include high performance liquid chromatography (HPLC), mass spectrometry (MS) and gas chromatography-mass spectrometry (GC-MS). These techniques are accurate and reliable but have certain limitations, such as the need for skilled personnel, long analysis times, interference and the requirement of pretreatment steps. Moreover, these techniques are laboratory bound, and samples are required in large amounts for analysis. In view of these facts, new methods for the detection of endocrine disrupting compounds should be devised that promise high specificity, ultra-sensitivity, cost-effectiveness, efficiency and easy-to-operate procedures. Nowadays, electrochemical sensors/biosensors modified with nanomaterials are gaining high attention among researchers. The bioelement present in such a system makes the developed sensor selective towards the analyte of interest, while nanomaterials provide a large surface area, high electron communication features, enhanced catalytic activity and possibilities for chemical modification. In many cases, nanomaterials also serve as an electron mediator or electrocatalyst for some analytes.
Keywords: electrochemical, endocrine disruptors, microscopy, nanoparticles, sensors
Procedia PDF Downloads 273
616 Meet Automotive Software Safety and Security Standards Expectations More Quickly
Authors: Jean-François Pouilly
Abstract:
This study addresses the growing complexity of embedded systems and the critical need for secure, reliable software. Traditional cybersecurity testing methods, often conducted late in the development cycle, struggle to keep pace. This talk explores how formal methods, integrated with advanced analysis tools, empower C/C++ developers to: 1) Proactively address vulnerabilities and bugs, using formal methods and abstract interpretation techniques to identify potential weaknesses early in the development process and reduce the reliance on penetration and fuzz testing in later stages. 2) Streamline development by focusing on the bugs that matter: with close to no false positives and flaws caught earlier, the need for rework and retesting is minimized, leading to faster development cycles, improved efficiency and cost savings. 3) Enhance software dependability: combining static analysis by abstract interpretation, with full context sensitivity and hardware memory awareness, allows a more comprehensive understanding of potential vulnerabilities, leading to more dependable and secure software. This approach aligns with industry best practices (ISO 26262 or ISO 21434) and empowers C/C++ developers to deliver robust, secure embedded systems that meet the demands of today's and tomorrow's applications. We will illustrate this approach with the TrustInSoft analyzer to show how it accelerates verification for complex cases, reduces user fatigue, and improves developer efficiency, cost-effectiveness, and software cybersecurity. In summary, integrating formal methods and sound analyzers enhances software reliability and cybersecurity, streamlining development in an increasingly complex environment.
Keywords: safety, cybersecurity, ISO 26262, ISO 21434, formal methods
Procedia PDF Downloads 19
615 Storms Dynamics in the Black Sea in the Context of the Climate Changes
Authors: Eugen Rusu
Abstract:
The objective of the proposed work is to perform an analysis of the wave conditions in the Black Sea basin, focused especially on the spatial and temporal occurrences and the dynamics of the most extreme storms in the context of climate change. A numerical modelling system, based on the spectral phase-averaged wave model SWAN, has been implemented and validated against both in situ measurements and remotely sensed data all along the sea. Moreover, a successive correction method for the assimilation of satellite data, based on optimal interpolation of the satellite data, has been coupled to the wave modelling system. Previous studies show that the data assimilation process considerably improves the reliability of the results provided by the modelling system. This especially concerns the cases most sensitive from the point of view of wave prediction accuracy, such as extreme storm situations. Following this numerical approach, it has to be highlighted that the results provided by the wave modelling system described above are in general in line with those provided by similar wave prediction systems implemented in enclosed or semi-enclosed sea basins. Simulations of this wave modelling system with data assimilation have been performed for the 30-year period 1987-2016. Considering this database, the next step was to analyze the intensity and the dynamics of the strongest storms encountered in this period. According to the data resulting from the model simulations, the western side of the sea is considerably more energetic than the rest of the basin. In this western region, regular strong storms usually produce significant wave heights greater than 8 m, which may lead to maximum wave heights even greater than 15 m. Such regular strong storms may occur several times a year, usually in wintertime or late autumn, and it can be noticed that their frequency has become higher in the last decade.
As regards the most extreme storms, significant wave heights greater than 10 m and maximum wave heights close to 20 m (and even greater) may occur. Such extreme storms, which in the past were noticed only once in four or five years, have more recently been faced almost every year in the Black Sea, and this seems to be a consequence of climate change. The analysis performed also included the dynamics of the monthly and annual significant wave height maxima as well as the identification of the most probable spatial and temporal occurrences of the extreme storm events. Finally, it can be concluded that the present work provides valuable information on the characteristics of the storm conditions and their dynamics in the Black Sea. This environment is currently subjected to heavy navigation traffic and intense offshore and nearshore activities, and the strong storms that systematically occur may produce accidents with very serious consequences. Keywords: Black Sea, extreme storms, SWAN simulations, waves
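The optimal-interpolation update at the heart of such assimilation schemes can be sketched for a single grid point: the analysis is the model background nudged toward the observation, weighted by the two error variances. This is a generic textbook form, not the authors' implementation; all numeric values below are invented.

```python
# Scalar optimal-interpolation (OI) analysis update for one grid point:
# analysis = background + K * (observation - background), with the
# optimal gain K = var_b / (var_b + var_o) minimizing the analysis
# error variance.

def oi_update(background, observation, var_b, var_o):
    gain = var_b / (var_b + var_o)
    return background + gain * (observation - background)

hs_model = 7.0   # modelled significant wave height (m), assumed value
hs_sat = 8.0     # satellite altimeter observation (m), assumed value

# Assume the model error variance is twice the observation error
# variance, so the analysis is pulled 2/3 of the way toward the obs:
hs_analysis = oi_update(hs_model, hs_sat, var_b=0.5, var_o=0.25)
print(round(hs_analysis, 3))  # 7.667
```

A successive correction scheme applies updates of this kind iteratively over neighbouring observations, which is why assimilation helps most exactly where the background model is least reliable, such as during extreme storms.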
Procedia PDF Downloads 248
614 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models
Authors: Ainouna Bouziane
Abstract:
The ability of Electron Tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in recent decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and its avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (compressed sensing - total variation minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not yet been properly addressed, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, image noise level or object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis. Keywords: electron tomography, supported catalysts, nanometrology, error assessment
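When a phantom is perfectly known, segmentation accuracy can be scored with simple voxel-level figures of merit. The following is a hedged sketch (not the authors' metrology pipeline) comparing a segmented volume against a known phantom using relative volume error and the Jaccard (intersection-over-union) index; the voxel sets below are invented.

```python
# Voxel grids represented as plain sets of (x, y, z) coordinates.
# Two simple accuracy metrics against a perfectly known phantom:

def relative_volume_error(segmented, phantom):
    """Absolute volume mismatch, relative to the true phantom volume."""
    return abs(len(segmented) - len(phantom)) / len(phantom)

def jaccard(segmented, phantom):
    """Intersection-over-union of the two voxel sets (1.0 = perfect)."""
    inter = len(segmented & phantom)
    union = len(segmented | phantom)
    return inter / union

# Invented example: a 4x4x4 phantom, and a segmentation that missed
# one z-slice (e.g. due to the missing-wedge of a limited tilt range):
phantom = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
segmented = {(x, y, z) for x in range(4) for y in range(4) for z in range(3)}
print(relative_volume_error(segmented, phantom))  # 0.25
print(jaccard(segmented, phantom))                # 0.75
```

Scanning such metrics while varying tilt range, noise level or object orientation in the phantom is the kind of quantitative error assessment the abstract describes.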
Procedia PDF Downloads 88
613 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs
Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye
Abstract:
This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of it, with the view of developing a deep neural network model to automatically classify abnormalities in chest radiographs. To achieve the overall purpose of this research, a large set of chest x-ray images was sourced and collected from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network; the preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether an abnormality was present or not in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and Area Under Curve (AUC) score as the metrics. A prototype of the classification model was implemented using the Keras open-source deep learning framework in the Python programming language. Based on the AUC-ROC curve, the model was able to classify atelectasis, support devices, pleural effusion, pneumonia, a normal CXR (no finding), pneumothorax, and consolidation. However, lung opacity and cardiomegaly had probabilities of less than 0.5 and were thus classified as absent. Precision, recall, and F1 score values were all 0.78; this implies that the numbers of false positives and false negatives are equal, revealing some measure of label imbalance in the dataset.
The study concluded that the developed model is sufficient to classify abnormalities in chest radiographs as present or absent. Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label
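The evaluation metrics quoted above all derive from the per-label confusion matrix. The following is a small illustrative sketch of those formulas; the counts are invented (chosen so that false positives equal false negatives, reproducing the precision = recall = F1 situation the abstract reports), not taken from the CheXpert experiments.

```python
# Classification metrics from confusion-matrix counts, per label:
# tp/fp/tn/fn = true/false positives/negatives.

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, f1

# With fp == fn, precision equals recall, and hence F1 equals both:
sens, spec, prec, f1 = metrics(tp=78, fp=22, tn=78, fn=22)
print(round(sens, 2), round(spec, 2), round(prec, 2), round(f1, 2))
# 0.78 0.78 0.78 0.78
```

This also shows why identical precision and recall values alone say nothing about which labels dominate; a per-label breakdown is needed to quantify the label imbalance mentioned in the abstract.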
Procedia PDF Downloads 129
612 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia
Authors: Carol Anne Hargreaves
Abstract:
A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, can be used to identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to identify the most important factors and their contributions to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks likely to go up in price - portfolio 1. Next, principal component analysis was used to select stocks rated high on component one and component two - portfolio 2. Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up - portfolio 3. The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy; all accuracy measures were above 70%, and all portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three portfolios and traded in the market for one month, after which the return for each portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio and 8.88% for the k-means cluster portfolio, while the stock market return was 0.38%.
This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market. Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system
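The final comparison step, computing each portfolio's one-month return against the index, can be sketched as follows. The prices below are hypothetical (the paper reports only the resulting percentages), and the equal-weighted simple-return formula is an assumption about how such a three-stock portfolio would typically be scored.

```python
# Equal-weighted simple return of a portfolio over one holding period,
# expressed in percent, for comparison against a market index return.

def portfolio_return(buy_prices, sell_prices):
    rets = [(sell - buy) / buy for buy, sell in zip(buy_prices, sell_prices)]
    return 100 * sum(rets) / len(rets)

# Hypothetical buy/sell prices for a three-stock portfolio:
buy = [10.0, 20.0, 50.0]
sell = [11.0, 21.0, 56.0]
r = portfolio_return(buy, sell)
print(round(r, 2), "%")  # 9.0 %
# Compare against the index return over the same month, e.g. 0.38%.
```

The same function applied to each of the three portfolios gives directly comparable figures, which is how the 23.87%, 11.65% and 8.88% results line up against the 0.38% market return.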
Procedia PDF Downloads 157
611 Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand
Authors: Saeed Poormoaied
Abstract:
Perishability and the development of an intelligent control policy for perishable items are major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy which takes both the stock level and the aging process of items into account. The vendor works under the warehouse framework, where its lot size is determined with respect to the batch size of the buyer. The model holds for a positive, fixed lead time for the buyer and zero lead time for the vendor. Demand follows a Poisson process, and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system by using the renewal reward theorem. Items have a fixed lifetime after which they become unusable and are disposed of from the buyer's system. The aging of items starts when they are unpacked and ready for consumption at the buyer; while items are held by the vendor, there is no aging process, so no perishing occurs at the vendor's site. The model is developed under the centralized framework, which takes the expected profit of both vendor and buyer into consideration. The goal is to determine the optimal policy parameters under a service level constraint at the buyer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain. The efficiency of the proposed age-based policy is also evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved. Keywords: two-echelon supply chain, perishable items, age-based policy, renewal reward theorem
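The renewal reward theorem invoked above states that the long-run average profit equals the expected profit per regeneration cycle divided by the expected cycle length. A minimal numeric sketch follows; the per-cycle cost components and their values are invented for illustration and do not reproduce the paper's model.

```python
# Renewal reward theorem: long-run average profit rate
#   = E[profit per cycle] / E[cycle length].

def long_run_average_profit(expected_cycle_profit, expected_cycle_length):
    return expected_cycle_profit / expected_cycle_length

# Assumed per-cycle economics for the buyer (illustrative numbers):
revenue = 500.0        # expected sales revenue per cycle
holding_cost = 60.0    # expected holding cost per cycle
perishing_cost = 40.0  # expected disposal cost of expired items
ordering_cost = 100.0  # fixed cost per replenishment cycle
cycle_length = 12.0    # expected cycle length (days)

rate = long_run_average_profit(
    revenue - holding_cost - perishing_cost - ordering_cost, cycle_length)
print(rate)  # 25.0 (profit units per day)
```

A sensitivity analysis of the kind described then amounts to recomputing this ratio while perturbing one input at a time, e.g. the perishing cost.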
Procedia PDF Downloads 144
610 Nanowire Sensor Based on Novel Impedance Spectroscopy Approach
Authors: Valeriy M. Kondratev, Ekaterina A. Vyacheslavova, Talgat Shugabaev, Alexander S. Gudovskikh, Alexey D. Bolshakov
Abstract:
Modern sensorics imposes strict requirements on biosensor characteristics, especially technological feasibility and selectivity. There is growing interest in the analysis of human health biological markers that indirectly indicate pathological processes in the body. Such markers include acids and alkalis produced by the human body, in particular ammonia and hydrochloric acid, which are found in human sweat, blood, and urine, as well as in gastric juice. Biosensors based on modern nanomaterials, especially low-dimensional ones, can be used for the detection of these markers. Most classical adsorption sensors based on metal and silicon oxides are considered non-selective, because they change their electrical resistance (or impedance) identically under the adsorption of different target analytes. This work demonstrates a feasible frequency-resistive method of electrical impedance spectroscopy data analysis. The approach makes it possible to obtain selectivity in resistive-type adsorption sensors. The potential of the method is demonstrated by analyzing the impedance spectra of silicon nanowires in the presence of NH3 and HCl vapors, with concentrations of about 125 mmol/L (2 ppm), and of water vapor. We demonstrate the possibility of unambiguously distinguishing the sensory signals from NH3 and HCl adsorption. Moreover, the method is applicable to the analysis of the composition of a mixture of ammonia and hydrochloric acid vapors without cross-sensitivity to water. The presented silicon sensor can be used to detect diseases of the gastrointestinal tract through the qualitative and quantitative detection of ammonia and hydrochloric acid in biological samples. The data analysis method can be directly translated to other nanomaterials to assess their applicability in the field of biosensing. Keywords: electrical impedance spectroscopy, spectroscopy data analysis, selective adsorption sensor, nanotechnology
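As a generic illustration of the frequency-resolved signatures such an analysis exploits (not the authors' method), one can evaluate the complex impedance of a simple parallel RC equivalent circuit across frequency: the magnitude rolls off from R at low frequency, and different surface adsorbates shift this curve differently. The R and C values below are invented.

```python
import cmath

# Complex impedance of a resistor R in parallel with a capacitor C:
#   Z(w) = R / (1 + j*w*R*C), with w = 2*pi*f.

def parallel_rc_impedance(r, c, freq_hz):
    w = 2 * cmath.pi * freq_hz
    return r / (1 + 1j * w * r * c)

R, C = 1e5, 1e-9  # 100 kOhm, 1 nF (assumed equivalent-circuit values)
for f in (10.0, 1e3, 1e5):
    z = parallel_rc_impedance(R, C, f)
    print(f"{f:>8.0f} Hz  |Z| = {abs(z):.0f} Ohm")
```

Comparing |Z| (and phase) at several frequencies, rather than a single DC resistance, is what allows two analytes with similar resistive responses to be told apart.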
Procedia PDF Downloads 114
609 Infrared Photodetectors Based on Nanowire Arrays: Towards Far Infrared Region
Authors: Mohammad Karimi, Magnus Heurlin, Lars Samuelson, Magnus Borgstrom, Hakan Pettersson
Abstract:
Nanowire semiconductors are promising candidates for optoelectronic applications such as solar cells, photodetectors and lasers due to their quasi-1D geometry and large surface-to-volume ratio. The functional wavelength range of NW-based detectors is typically limited to the visible/near-infrared region. In this work, we present the electrical and optical properties of IR photodetectors based on large, square-millimeter ensembles (>1 million) of vertically processed semiconductor heterostructure nanowires (NWs) grown on InP substrates, which operate at longer wavelengths. InP NWs comprising single or multiple (20) InAs/InAsP quantum discs (QDiscs) axially embedded in an n-i-n geometry have been grown on InP substrates using metal organic vapor phase epitaxy (MOVPE). The NWs are contacted in the vertical direction by atomic layer deposition (ALD) of 50 nm SiO2 as an insulating layer, followed by sputtering of indium tin oxide (ITO) and evaporation of Ti and Au as the top contact layer. In order to extend the sensitivity range to the mid-wavelength and long-wavelength regions, intersubband transitions within the conduction band of the InAsP QDiscs are proposed. We present first experimental indications of intersubband photocurrent in the NW geometry and discuss important design parameters for the realization of intersubband detectors. Key advantages of the proposed design include a large degree of freedom in the choice of material compositions, possible enhanced optical resonance effects due to the periodically ordered NW arrays, and compatibility with silicon substrates. We believe that the proposed detector design offers a route towards monolithic integration of compact and sensitive III-V NW long-wavelength detectors with Si technology. Keywords: intersubband photodetector, infrared, nanowire, quantum disc
Procedia PDF Downloads 386
608 Insulin Resistance in Patients with Chronic Hepatitis C Virus Infection: Upper Egypt Experience
Authors: Ali Kassem
Abstract:
Background: In the last few years, factors such as insulin resistance (IR) and hepatic steatosis have been linked to the progression of hepatic fibrosis. Patients with chronic liver disease, and cirrhosis in particular, are known to be prone to IR. However, chronic HCV (hepatitis C) infection may induce IR regardless of the presence of liver cirrhosis. Our aims are to study IR, assessed by HOMA-IR (Homeostatic Model Assessment of Insulin Resistance), as a possible risk factor for disease progression in cirrhotic patients and to evaluate the role of IR in hepatic fibrosis progression. The correlations of HOMA-IR values with laboratory, virological and histopathological parameters of chronic HCV are also examined. Methods: The study included 50 subjects: 30 adult chronic hepatitis C patients, diagnosed by PCR (polymerase chain reaction) within the previous 6 months, and 20 healthy controls. The functional and morphological status of the liver was evaluated by ultrasonography and laboratory investigations, including liver function tests, and by liver biopsy. Fasting blood glucose and fasting insulin levels were measured, and body mass index and insulin resistance were calculated. Patients with HOMA-IR > 2.5 were labeled insulin resistant. Results: Chronic hepatitis C patients with IR showed significantly higher mean values of BMI (body mass index) and fasting insulin than those without IR (P < 0.001). Patients with IR were more likely to have steatosis (p = 0.006) and higher necroinflammatory activity (p = 0.05). No significant differences were found between the two groups regarding hepatic fibrosis. Conclusion: HOMA-IR measurement could represent a novel marker to identify the cirrhotic patients at greater risk of liver disease progression. As IR is a potentially modifiable risk factor, these findings may have important prognostic and therapeutic implications.
Assessment of IR by HOMA-IR and improving insulin sensitivity are recommended in patients with HCV and related chronic liver disease. Keywords: hepatic fibrosis, hepatitis C virus infection, hepatic steatosis, insulin resistance
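The HOMA-IR index used above has a standard closed form: fasting glucose (mmol/L) times fasting insulin (µU/mL), divided by 22.5, with the study's cutoff of 2.5 for labelling insulin resistance. A minimal sketch follows; the patient values are illustrative only.

```python
# HOMA-IR = fasting glucose (mmol/L) * fasting insulin (microU/mL) / 22.5

def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uu_ml):
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

def is_insulin_resistant(score, cutoff=2.5):
    """Cutoff of 2.5, as used in the study above."""
    return score > cutoff

# Illustrative patient values (not from the study cohort):
score = homa_ir(fasting_glucose_mmol_l=5.5, fasting_insulin_uu_ml=12.0)
print(round(score, 2), is_insulin_resistant(score))  # 2.93 True
```

Note the unit convention: if glucose is reported in mg/dL instead, the equivalent denominator is 405.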
Procedia PDF Downloads 154
607 Uterine Cervical Cancer; Early Treatment Assessment with T2- And Diffusion-Weighted MRI
Authors: Susanne Fridsten, Kristina Hellman, Anders Sundin, Lennart Blomqvist
Abstract:
Background: Patients diagnosed with locally advanced cervical carcinoma are treated with definitive concomitant chemoradiotherapy. Treatment failure occurs in 30-50% of patients, with very poor prognoses. The treatment is standardized, with risk of both over- and undertreatment. Consequently, there is a great need for biomarkers able to predict therapy outcome and allow for individualized treatment. Aim: To explore the role of T2- and diffusion-weighted magnetic resonance imaging (MRI) for early prediction of therapy outcome and to identify the optimal time point for assessment. Methods: A pilot study including 15 patients with cervical carcinoma stage IIB-IIIB (FIGO 2009) undergoing definitive chemoradiotherapy. All patients underwent MRI four times: at baseline and at 3 weeks, 5 weeks, and 12 weeks after treatment started. Tumor size, size change (∆size), visibility on diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) and change of ADC (∆ADC) at the different time points were recorded. Results: 7/15 patients relapsed during the study period, referred to as "poor prognosis" (PP); the remaining eight patients are referred to as "good prognosis" (GP). The tumor size was larger at all time points for PP than for GP. The ∆size between any of the four time points was the same for PP and GP patients. The sensitivity and specificity for predicting the prognostic group from a remaining tumor on DWI were highest at 5 weeks, at 83% (5/6) and 63% (5/8), respectively. The combination of tumor size at baseline and remaining tumor on DWI at 5 weeks reached an area under the curve (AUC) of 0.83 in ROC analysis. After 12 weeks, no remaining tumor was seen on DWI among patients with GP, as opposed to 2/7 PP patients. Adding ADC to the tumor size measurements did not improve the predictive value at any time point.
Conclusion: A large tumor at baseline MRI combined with a remaining tumor on DWI at 5 weeks predicted a poor prognosis. Keywords: chemoradiotherapy, diffusion-weighted imaging, magnetic resonance imaging, uterine cervical carcinoma
Procedia PDF Downloads 143
606 Permeable Bio-Reactive Barriers to Tackle Petroleum Hydrocarbon Contamination in the Sub-Antarctic
Authors: Benjamin L. Freidman, Sally L. Gras, Ian Snape, Geoff W. Stevens, Kathryn A. Mumford
Abstract:
Increasing transportation and storage of petroleum hydrocarbons in Antarctic and sub-Antarctic regions have resulted in frequent accidental spills. Migrating petroleum hydrocarbon spills can have a significant impact on terrestrial and marine ecosystems in cold regions, where harsh environmental conditions result in heightened sensitivity to pollution. This migration of contaminants has led to the development of Permeable Reactive Barriers (PRBs) for application in cold regions. PRBs are one of the most practical technologies for on-site or in-situ groundwater remediation in cold regions due to their minimal energy, monitoring and maintenance requirements. The Main Power House site has been used as a fuel storage and power generation area for the Macquarie Island research station since at least 1960. Soil analysis at the site has revealed Total Petroleum Hydrocarbon (TPH) (C9-C28) concentrations as high as 19,000 mg/kg soil, and groundwater TPH concentrations can exceed 350 mg/L. Ongoing migration of petroleum hydrocarbons into the neighbouring marine ecosystem resulted in the installation of a 'funnel and gate' PRB in November 2014. The 'funnel and gate' design successfully intercepted contaminated groundwater, and analyses of TPH retention and biodegradation on the PRB media are currently underway. Installation of the PRB facilitates research aimed at better understanding the contribution of particle-attached biofilms to the remediation of groundwater systems. Bench-scale PRB analysis at The University of Melbourne is currently examining the role biofilms play in petroleum hydrocarbon degradation, and how controlled-release nutrient media can heighten the metabolic activity of biofilms in cold regions with low temperatures and low-nutrient groundwater. Keywords: groundwater, petroleum, Macquarie Island, funnel and gate
Procedia PDF Downloads 358
605 Evaluation of the Heating Capability and in vitro Hemolysis of Nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) Ferrites Prepared by Sol-gel Method
Authors: Laura Elena De León Prado, Dora Alicia Cortés Hernández, Javier Sánchez
Abstract:
Among the different cancer treatments currently in use, hyperthermia has promising potential due to the multiple benefits obtained by this technique. In general terms, hyperthermia is a method that takes advantage of the sensitivity of cancer cells to heat in order to damage or destroy them. Among the different ways of supplying heat to cancer cells to achieve their damage or destruction, the use of magnetic nanoparticles has attracted attention due to the capability of these particles to generate heat under the influence of an external magnetic field. In addition, these nanoparticles have a high surface area and sizes similar to or even smaller than biological entities, which allows them to approach and interact with a specific region of interest. The magnetic nanoparticles most used for hyperthermia treatment are those based on iron oxides, mainly magnetite and maghemite, due to their biocompatibility, good magnetic properties and chemical stability. However, in order to fulfill the requirements of magnetic hyperthermia treatment more efficiently, ferrites that incorporate different metallic ions, such as Mg, Mn, Co, Ca, Ni, Cu, Li, Gd, etc., into their structure have been investigated. This paper reports the synthesis of nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) ferrites by the sol-gel method and their evaluation in terms of heating capability and in vitro hemolysis, to determine the potential use of these nanoparticles as thermoseeds for the treatment of cancer by magnetic hyperthermia. It was possible to obtain ferrites with nanometric sizes, a single crystalline phase with an inverse spinel structure, and behavior close to that of superparamagnetic materials. Additionally, at concentrations of 10 mg of magnetic material per mL of water, it was possible to reach a temperature of approximately 45°C, which is within the range of temperatures used for hyperthermia treatment.
The results of the in vitro hemolysis assay showed that, at the concentrations tested, these nanoparticles are non-hemolytic, as their percentage of hemolysis is close to zero. Therefore, these materials can be used as thermoseeds for the treatment of cancer by magnetic hyperthermia. Keywords: ferrites, heating capability, hemolysis, nanoparticles, sol-gel
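The hemolysis percentage reported above is conventionally computed from sample absorbance relative to negative and positive controls. The abstract does not give its exact protocol, so the following is a generic sketch of the standard absorbance-based calculation with invented values.

```python
# Percent hemolysis from supernatant absorbance, relative to a negative
# control (e.g. PBS, no lysis) and a positive control (e.g. water,
# complete lysis):
#   % hemolysis = 100 * (A_sample - A_neg) / (A_pos - A_neg)

def percent_hemolysis(abs_sample, abs_negative, abs_positive):
    return 100.0 * (abs_sample - abs_negative) / (abs_positive - abs_negative)

a_neg, a_pos = 0.05, 1.05   # control absorbances (invented values)
for a_sample in (0.06, 0.10, 0.55):
    print(round(percent_hemolysis(a_sample, a_neg, a_pos), 1))
# A commonly used acceptance criterion treats < 5% as non-hemolytic,
# consistent with the "close to zero" result reported above.
```

This makes explicit what "percentage of hemolysis close to zero" means operationally: the sample absorbance barely exceeds that of the negative control.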
Procedia PDF Downloads 342
604 Combining in vitro Protein Expression with AlphaLISA Technology to Study Protein-Protein Interaction
Authors: Shayli Varasteh Moradi, Wayne A. Johnston, Dejan Gagoski, Kirill Alexandrov
Abstract:
The demand for rapid and more efficient techniques to identify protein-protein interactions, particularly in the areas of therapeutics and diagnostics development, is growing. The method described here is a rapid in vitro protein-protein interaction analysis approach based on AlphaLISA technology combined with the Leishmania tarentolae cell-free protein expression (LTE) system. Cell-free protein synthesis allows the rapid production of recombinant proteins in a multiplexed format. Among available in vitro expression systems, LTE offers several advantages over other eukaryotic cell-free systems: it is based on a fast-growing, fermentable organism that is inexpensive to cultivate and to prepare lysate from. The high integrity of proteins produced in this system and the ability to co-express multiple proteins make it a desirable method for screening protein interactions. Following the translation of protein pairs in the LTE system, the physical interaction between the proteins of interest is analysed by the AlphaLISA assay. The assay is performed using the unpurified in vitro translation reaction and can therefore be readily multiplexed. This approach can be used in various research applications such as epitope mapping, antigen-antibody analysis and protein interaction network mapping. The intra-viral protein interaction network of Zika virus was studied using the developed technique. The viral proteins were co-expressed pair-wise in LTE, and all possible interactions among them were tested using AlphaLISA. The assay resulted in the identification of 54 intra-viral protein-protein interactions, of which 19 binary interactions were found to be novel. The presented technique provides a powerful tool for the rapid analysis of protein-protein interactions with high sensitivity and throughput. Keywords: AlphaLISA technology, cell-free protein expression, epitope mapping, Leishmania tarentolae, protein-protein interaction
Procedia PDF Downloads 237
603 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes
Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil
Abstract:
The Raman Tweezers technique has become prevalent in single-cell studies. It combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman Tweezers has thus enabled researchers to analyze single cells and explore different applications, including studying blood cells, monitoring blood-related disorders, silver nanoparticle-induced stress, etc. With the increase in the various applications of nanoparticles, there is growing interest in their toxic effects, and the interaction of these nanoparticles with cells may vary with their size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman Tweezers technique; our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100x oil immersion objective with high numerical aperture (NA 1.3) was used to focus the laser beam into a sample cell. The back-scattered light was collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers in vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial. Since the concentration of the suspension was 0.02 mg/ml, 0.03 mg of nanoparticles was present in the 0.1 ml obtained. 25 µl of RBCs was diluted in 2 ml of PBS solution, then treated with 50 µl (0.015 mg) of nanoparticles and incubated in a CO2 incubator.
Raman spectroscopic measurements were performed after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s accumulation time and 2 accumulations. Major changes were observed in the peaks at 565 cm-1, 1211 cm-1, 1224 cm-1, 1371 cm-1 and 1638 cm-1. A decrease in intensity at 565 cm-1, an increase at 1211 cm-1 with a reduction at 1224 cm-1, an increase at 1371 cm-1, and the disappearance of the peak at 1638 cm-1 indicate deoxygenation of hemoglobin. The larger nanoparticles produced the greatest spectral changes, while smaller changes were observed in the spectra of erythrocytes treated with 10 nm nanoparticles. Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles
Procedia PDF Downloads 293
602 An Empirical Study on the Impact of Peace in Tourists' Country of Origin on Their Travel Behavior
Authors: Claudia Seabra, Elisabeth Kastenholz, José Luís Abrantes, Manuel Reis
Abstract:
In a world of increasing mobility and global risks, terrorism has, in a perverse way, capitalized on contemporary society's growing interest in travelling to explore a world whose national boundaries and distances have shrunk. Terrorists have identified the modern tourist flows originating from the economically more developed countries as appealing new targets, so as to: i) call attention to the causes they defend and ii) destroy a country's foundations of tourism, with the final aim of disrupting the economic and, consequently, the social fabric of the affected countries. The present study analyses sensitivity towards risk and travel behaviors in international travel among a sample of 600 international tourists from 49 countries travelling by air. Specifically, the sample was segmented according to the Global Peace Index, which profiles countries according to their levels of peace. The indicators used are organized around three broad themes: i) ongoing domestic and international conflict; ii) societal safety and security; and iii) militarisation. Tourists were thus segmented, according to their country of origin, into different levels of peacefulness. Several facets of travel behavior were evaluated, namely motivations, attitude towards trip planning, quality perception and perceived value of the trip. Factors related to risk perception were also evaluated, specifically terrorism risk perception during the trip and the sensation of being unsafe, as well as the importance attributed to safety in travel. The results contribute to our understanding of the role of previous exposure to a lack of peace and safety at home in international tourists' behaviors, and they are further discussed in terms of tourism management and marketing implications, which should particularly interest tourism services and destinations affected by terrorism, war, political turmoil, crime and other safety risks. Keywords: terrorism, tourism, safety, risk perception
Procedia PDF Downloads 441
601 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall
Authors: Sylvain Perrin, Thomas Saillour
Abstract:
The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawalls. A physical model was built at ARTELIA's hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scouring over time at the front of the wall, the intensity and distribution of quasi-static and impulsive wave forces on the wall, and water and sand overtopping discharges over the wall. The beach consisted of fine sand and was approximately 50 m wide above mean sea level (MSL). Seabed slopes ranged from 0.5% offshore to 1.5% closer to the beach. A smooth concrete structure with an elevated curved crown wall will replace the existing concrete seawall. Prior to the start of breaking (at the -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event of 1997 and the 50-year return period storm respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments, and five piezo/piezo-resistive pressure sensors were placed on the wall. The light-weight sediment physical model and the pressure and force measurements were performed at a scale of 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1,400 kg/m³ and a median diameter (d50) of 0.3 mm. Quantitative assessments of the seabed evolution were made using a measuring rod and a laser scan survey. Testing demonstrated the occurrence of numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive pressure spikes of up to 127 kilopascals were measured.
Maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is attributable to increased wave reflection (reflection coefficients of 25.7-30.4% vs. 23.4-28.6%). This paper presents a methodology for the setup and operation of a physical model in order to assess the hydrodynamic and morphodynamic processes at a beach seawall during storm events. It discusses the pros and cons of this methodology versus others, notably regarding structural peculiarities and model effects.
Keywords: beach, impacts, scour, seawall, waves
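The 1:18 scale mentioned above implies Froude similitude for converting measured model quantities to prototype values. The sketch below illustrates the standard scaling laws (forces scale with the cube of the length scale, pressures linearly, times with the square root); it is a generic hydraulic-modelling illustration, not code from the study, and the example model force is a made-up input.

```python
import math

LAMBDA = 18.0  # geometric length scale of the model (1:18, as in the study)

def to_prototype_force(f_model_newtons: float, lam: float = LAMBDA) -> float:
    # Froude similitude: forces scale with lam**3 (same fluid density assumed)
    return f_model_newtons * lam ** 3

def to_prototype_pressure(p_model_pa: float, lam: float = LAMBDA) -> float:
    # Pressures scale linearly with lam
    return p_model_pa * lam

def to_prototype_time(t_model_s: float, lam: float = LAMBDA) -> float:
    # Times (e.g. wave periods) scale with sqrt(lam)
    return t_model_s * math.sqrt(lam)

# Hypothetical example: a 45 N force measured on the model maps to
# 45 * 18**3 = 262,440 N, i.e. roughly 262 kN at prototype scale.
print(to_prototype_force(45.0))
```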
Procedia PDF Downloads 153
600 Biospiral-Detect to Distinguish PrP Multimers from Monomers
Authors: Gulyas Erzsebet
Abstract:
The multimerisation of proteins is a common feature of many cellular processes; however, it can also impair protein function and/or be associated with the occurrence of disease. Thus, the development of a research tool monitoring the appearance/presence of multimeric protein forms is of great importance for a variety of research fields. Such a tool is potentially applicable to the ante-mortem diagnosis of certain conformational diseases, such as transmissible spongiform encephalopathies (TSE) and Alzheimer's disease. These conditions are accompanied by the appearance of aggregated protein multimers, present at low concentrations in various tissues. This detection is particularly relevant for TSE, where the handling of tissues derived from affected individuals and of meat products from infected animals has become an enormous health concern. Here we demonstrate the potential of such a multimer detection approach in TSE by developing a facile assay. The Biospiral-Detect system resembles a traditional sandwich ELISA, except that the capturing antibody, which is attached to a solid surface, and the detecting antibody are directed against the same or overlapping epitopes. As a consequence, the capturing antibody shields the epitope on a captured monomer from reacting with the detecting antibody, so monomers are not detected. Thus, the multimer detection system (MDS) detects only protein multimers, with high specificity. We also developed an alternative system in which RNA aptamers were employed instead of monoclonal antibodies; to minimize degradation, the 3' and 5' ends of the aptamers contained deoxyribonucleotides and phosphorothioate linkages. When the monoclonal antibody-based system was compared with the aptamer-based one, the former proved superior, so all subsequent experiments were conducted with the Biospiral-Detect modified sandwich ELISA kit.
Our approach showed an order of magnitude higher sensitivity towards multimers than monomers, suggesting that it may become a valuable diagnostic tool for conformational diseases accompanied by multimerization.
Keywords: diagnosis, ELISA, prion, TSE
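The epitope-shielding principle behind the assay can be captured in a toy model (an illustration of the logic described in the abstract, not the authors' code): when the capture and detection antibodies target the same epitope, the single epitope of a captured monomer is already occupied by the capture antibody, so only the remaining copies on a multimer can bind the detection antibody and generate signal.

```python
def detectable_epitopes(n_subunits: int) -> int:
    """Toy model of epitope shielding in the modified sandwich ELISA.

    Each subunit carries one copy of the shared epitope; the capture
    antibody occupies one copy, leaving n_subunits - 1 free for the
    detection antibody. A monomer therefore yields zero signal.
    """
    if n_subunits < 1:
        return 0
    return n_subunits - 1  # one epitope is blocked by the capture antibody

print(detectable_epitopes(1))  # monomer  -> 0 (not detected)
print(detectable_epitopes(4))  # tetramer -> 3 free epitopes -> detected
```

This is why the system is selective for multimers: signal rises with the number of subunits, while monomers remain invisible by construction.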
Procedia PDF Downloads 251