Search results for: JM-MB-TBD filter
144 Development of a New Method for the Evaluation of Heat Tolerant Wheat Genotypes for Genetic Studies and Wheat Breeding
Authors: Hameed Alsamadany, Nader Aryamanesh, Guijun Yan
Abstract:
Heat is one of the major abiotic stresses limiting wheat production worldwide. To identify heat tolerant genotypes, a newly designed system, involving a large plastic box holding many layers of filter paper positioned vertically with wheat seeds sown in between for ease of screening large numbers of wheat genotypes, was developed and used to study heat tolerance. A collection of 499 wheat genotypes was screened under heat stress (35°C) and non-stress (25°C) conditions using the new method. Compared with those under non-stress conditions, a substantial and highly significant reduction in seedling length (SL) under heat stress was observed, with an average reduction of 11.7 cm (P<0.01). A damage index (DI) for each genotype, based on SL under the two temperatures, was calculated and used to rank the genotypes. Three hexaploid genotypes of Triticum aestivum [Perenjori (DI = -0.09), Pakistan W 20B (-0.18) and SST16 (-0.28)], all growing better at 35°C than at 25°C, were identified as extremely heat tolerant (EHT). Two hexaploid genotypes of T. aestivum [Synthetic wheat (0.93) and Stiletto (0.92)] and two tetraploid genotypes of T. turgidum ssp. dicoccoides [G3211 (0.98) and G3100 (0.93)] were identified as extremely heat susceptible (EHS). Another 14 genotypes were classified as heat tolerant (HT) and 478 as heat susceptible (HS). Extremely heat tolerant and heat susceptible genotypes were used to develop recombinant inbred line populations for genetic studies. Four major QTLs, HTI4D, HTI3B.1, HTI3B.2 and HTI3A, located on wheat chromosomes 4D, 3B (x2) and 3A, explaining up to 34.67%, 28.93%, 13.46% and 11.34% of phenotypic variation, respectively, were detected. The four QTLs together accounted for 88.40% of the total phenotypic variation.
Random wheat genotypes possessing the four heat tolerant alleles performed significantly better under the heat condition than those lacking them, indicating the importance of the four QTLs in conferring heat tolerance in wheat. Molecular markers are being developed for marker-assisted breeding of heat tolerant wheat.
Keywords: bread wheat, heat tolerance, screening, RILs, QTL mapping, association analysis
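The damage index used above to rank genotypes is not given explicitly in the abstract; a minimal sketch, assuming the common relative-reduction form DI = (SL_control - SL_stress) / SL_control, reproduces the reported behaviour where DI < 0 marks genotypes that grew better under heat. The genotype names and seedling lengths below are hypothetical.

```python
def damage_index(sl_control, sl_stress):
    """Damage index from seedling length (SL) under non-stress (25 C) and
    heat stress (35 C). ASSUMPTION: the abstract does not state the formula;
    this uses DI = (SL_control - SL_stress) / SL_control, so DI < 0 means the
    genotype grew better under heat (extremely heat tolerant)."""
    return (sl_control - sl_stress) / sl_control

def rank_genotypes(records):
    """Rank genotypes from most tolerant (lowest DI) to most susceptible."""
    scored = [(name, damage_index(c, s)) for name, c, s in records]
    return sorted(scored, key=lambda pair: pair[1])

# Hypothetical seedling lengths (cm): (genotype, SL at 25 C, SL at 35 C)
data = [("A", 20.0, 10.0), ("B", 18.0, 19.5), ("C", 22.0, 2.0)]
ranking = rank_genotypes(data)
print(ranking[0][0])  # genotype "B" grew better at 35 C, hence lowest DI
```

With this form, Perenjori's reported DI of -0.09 would correspond to a seedling roughly 9% longer under heat than under control conditions.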
Procedia PDF Downloads 551
143 Morphology Analysis of Apple-Carrot Juice Treated by Manothermosonication (MTS) and High Temperature Short Time (HTST) Processes
Authors: Ozan Kahraman, Hao Feng
Abstract:
Manothermosonication (MTS), the simultaneous application of heat and ultrasound under moderate pressure (100-700 kPa), is one of the technologies that destroy microorganisms and inactivate enzymes. Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through an ultra-thin specimen, interacting with the specimen as it passes through it. The environmental scanning electron microscope (ESEM) is a scanning electron microscope (SEM) that allows collecting electron micrographs of specimens that are "wet" and uncoated. These microscopy techniques allow us to observe the effects of processing on the samples. This study was conducted to investigate the effects of MTS and HTST treatments on the morphology of apple-carrot juices using TEM and ESEM. Apple-carrot juices treated with HTST (72 °C, 15 s), MTS at 50 °C (60 s, 200 kPa), and MTS at 60 °C (30 s, 200 kPa) were observed with both ESEM and TEM. For TEM analysis, a drop of the solution dispersed in fixative solution was put onto a Parafilm® sheet. The copper-coated side of the TEM sample holder grid was gently laid on top of the droplet and incubated for 15 min. A drop of a 7% uranyl acetate solution was added and held for 2 min. The grid was then removed from the droplet, allowed to dry at room temperature, and introduced into the TEM. For ESEM analysis, critical point drying of the filters was performed using a critical point dryer (CPD) (Samdri PVT-3D, Tousimis Research Corp., Rockville, MD, USA). After the CPD, each filter was mounted onto a stub and coated with gold/palladium with a sputter coater (Desk II TSC, Denton Vacuum, Moorestown, NJ, USA). E. coli O157:H7 cells on the filters were observed with an ESEM (Philips XL30 ESEM-FEG, FEI Co., Eindhoven, The Netherlands).
ESEM and TEM images showed extensive damage to the samples treated with MTS at 50 and 60 °C, such as ruptured cells and breakage of cell membranes. The damage increased with increasing exposure time.
Keywords: MTS, HTST, ESEM, TEM, E. coli O157:H7
Procedia PDF Downloads 285
142 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail
Abstract:
In recent decades, medical imaging was dominated by the use of costly film media for review and archival of medical investigations; however, owing to developments in network technologies and the wide acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web was produced. Web technologies have been used successfully in telemedicine applications; here, web technologies are combined with DICOM to design a web-based, open source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, displays and manipulates images efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which a 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are thresholded and then transmitted with entropy encoding to decrease transmission time, storage cost, and capacity. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN
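The quality metrics named above (MSE, PSNR, CR) are standard and can be sketched as follows; this is an illustration of the metrics only, not the authors' code, and the pixel values and byte counts are hypothetical.

```python
import math

def mse(original, reconstructed):
    """Mean square error between two equal-length pixel sequences."""
    n = len(original)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means better fidelity."""
    error = mse(original, reconstructed)
    if error == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / error)

def compression_ratio(original_bytes, compressed_bytes):
    """CR expressed as percentage space saving, as in the abstract's 83.86%."""
    return 100.0 * (1.0 - compressed_bytes / original_bytes)

# Hypothetical 8-bit pixel values before and after lossy wavelet compression
orig = [100, 120, 130, 140]
recon = [101, 119, 131, 139]
print(round(mse(orig, recon), 2))              # 1.0
print(round(compression_ratio(1000, 161), 2))  # 83.9
```

In practice these metrics would be computed over the full decompressed image against the DICOM original.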
Procedia PDF Downloads 160
141 Simulation of Concrete Wall Subjected to Airblast by Developing an Elastoplastic Spring Model in Modelica Modelling Language
Authors: Leo Laine, Morgan Johansson
Abstract:
To meet civilization's future needs for safe living and a low environmental footprint, the engineers designing the complex systems of tomorrow will need efficient ways to model and optimize these systems for their intended purpose. For example, a civil defence shelter and its subsystem components need to withstand, e.g., airblast and ground shock from a design-level explosion detonating at a certain distance from the structure. In addition, the complex civil defence shelter needs functioning air filter systems to protect from toxic gases and provide clean air; clean water, heat, and electricity also need to be available through shock- and vibration-safe fixtures and connections. Similar complex building systems can be found in any concentrated living or office area. In this paper, the authors use the multidomain modelling language Modelica to model a concrete wall as a single degree of freedom (SDOF) system with elastoplastic properties and an implemented option of plastic hardening. The elastoplastic model was developed and implemented in the open source tool OpenModelica. The simulation model was tested on a case with a transient equivalent reflected pressure time history representing an airblast from 100 kg TNT detonating 15 meters from the wall. The concrete wall is approximately regarded as a concrete strip of 1.0 m width. This load represents a realistic threat to any building in a city-like area. The OpenModelica results were compared with an Excel implementation of an SDOF model with an elastic-plastic spring using a simple fixed-timestep central difference solver. The structural displacement results agreed very well with each other in terms of plastic displacement magnitude, elastic oscillation displacement, and response times.
Keywords: airblast from explosives, elastoplastic spring model, Modelica modelling language, SDOF, structural response of concrete structure
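The reference solution described above, a fixed-timestep central difference SDOF solver with an elastic-plastic spring, can be sketched as follows. This is a sketch only: it uses an elastic-perfectly-plastic spring (the paper's plastic-hardening option is omitted), no damping, and hypothetical mass, stiffness, yield force and load, not the 100 kg TNT reflected-pressure case.

```python
def sdof_elastoplastic(mass, k, f_yield, load, dt, n_steps):
    """Central-difference time stepping of m*u'' + f_s(u) = p(t) with an
    elastic-perfectly-plastic spring (no hardening, no damping).
    All parameters are hypothetical, not the paper's airblast case."""
    u_prev, u = 0.0, 0.0
    u_plastic = 0.0          # accumulated plastic displacement
    history = []
    for i in range(n_steps):
        # spring force, limited by the yield force (return mapping)
        f_trial = k * (u - u_plastic)
        if f_trial > f_yield:
            u_plastic = u - f_yield / k   # update plastic offset
            f_s = f_yield
        elif f_trial < -f_yield:
            u_plastic = u + f_yield / k
            f_s = -f_yield
        else:
            f_s = f_trial
        p = load(i * dt)
        u_next = 2.0 * u - u_prev + dt * dt * (p - f_s) / mass
        u_prev, u = u, u_next
        history.append(u)
    return history, u_plastic

# Hypothetical short rectangular pressure pulse, strong enough to yield
pulse = lambda t: 5000.0 if t < 0.01 else 0.0
disp, u_p = sdof_elastoplastic(mass=100.0, k=1.0e6, f_yield=2000.0,
                               load=pulse, dt=1.0e-4, n_steps=2000)
print(u_p > 0.0)  # a permanent plastic offset remains after the pulse
```

The central difference scheme is conditionally stable, so dt must satisfy dt < 2/ω with ω = sqrt(k/m); here ω·dt = 0.01.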
Procedia PDF Downloads 130
140 The Application of Dynamic Network Process to Environment Planning Support Systems
Authors: Wann-Ming Wey
Abstract:
In recent years, in addition to facing external threats such as energy shortages and climate change, traffic congestion and environmental pollution have become pressing problems for many cities. Considering that private automobile-oriented urban development has produced many negative environmental and social impacts, transit-oriented development (TOD) has been considered a sustainable urban model. TOD encourages public transport combined with friendly walking and cycling environment designs; moreover, non-motorized modes help improve human health, save energy, and reduce carbon emissions. Because environmental changes often affect planners' decision-making, this research applies the dynamic network process (DNP), which incorporates time dependence, to promoting friendly walking and cycling environmental designs as an advanced planning support system for environment improvements. This research aims to discuss what kinds of design strategies can improve a friendly walking and cycling environment under TOD. First, we collate and analyze environment design factors by reviewing the relevant literature and divide fifteen design factors into three aspects: "safety", "convenience", and "amenity". Furthermore, we use a fuzzy Delphi technique (FDT) expert questionnaire to filter out the more important design criteria for the study case. Finally, we use a DNP expert questionnaire to obtain the weight changes at different time points for each design criterion. Based on the changing trends of each criterion weight, we are able to develop appropriate design strategies as a reference for planners to allocate resources in a dynamic environment.
To illustrate the proposed approach, Taipei City has been used as an empirical study, and the results are analyzed in depth to explain the application of the approach.
Keywords: environment planning support systems, walking and cycling, transit-oriented development (TOD), dynamic network process (DNP)
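The final step above, using criterion weights that change over time points to steer design strategies, can be sketched as a time-indexed weighted scoring. This is a heavy simplification: the DNP supermatrix computation and the FDT screening are not reproduced, and all criteria names, weights and performance scores below are hypothetical.

```python
def score_strategies(weights_by_time, performance):
    """Score each strategy at each time point as a weighted sum over criteria.
    weights_by_time: {time: {criterion: weight}}, weights summing to 1;
    performance: {strategy: {criterion: score}}. All values hypothetical."""
    scores = {}
    for t, weights in weights_by_time.items():
        scores[t] = {
            s: sum(weights[c] * perf[c] for c in weights)
            for s, perf in performance.items()
        }
    return scores

# Hypothetical aspect weights at two planning horizons: safety dominates
# early, amenity gains weight later (the "changing trend" of DNP weights)
weights = {
    "short_term": {"safety": 0.5, "convenience": 0.3, "amenity": 0.2},
    "long_term":  {"safety": 0.3, "convenience": 0.3, "amenity": 0.4},
}
performance = {
    "widen_sidewalks": {"safety": 0.9, "convenience": 0.6, "amenity": 0.5},
    "add_bike_lanes":  {"safety": 0.6, "convenience": 0.8, "amenity": 0.7},
}
scores = score_strategies(weights, performance)
best_short = max(scores["short_term"], key=scores["short_term"].get)
print(best_short)  # the preferred strategy changes between time points
```

The point of the sketch is that the same performance data yields different preferred strategies as the criterion weights shift over time, which is what makes the dynamic (time-dependent) weighting useful for resource allocation.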
Procedia PDF Downloads 344
139 Role of Matric Suction in Mechanics behind Swelling Characteristics of Expansive Soils
Authors: Saloni Pandya, Nikhil Sharma, Ajanta Sachan
Abstract:
Expansive soils in the unsaturated state are part of the vadose zone and are encountered in several arid and semi-arid parts of the world. High temperature, low precipitation, and alternate cycles of wetting and drying are responsible for the chemical weathering of rocks, which results in the formation of expansive soils. Shrinkage-swelling (expansive) soils cover a substantial area in India, and the damage they cause to various geotechnical structures is alarming. Matric suction develops in unsaturated soil due to capillarity and surface tension phenomena. Matric suction influences the geometric arrangement of the soil skeleton, which induces the volume change behaviour of expansive soil. In the present study, an attempt has been made to evaluate the role of matric suction in the mechanism behind the swelling characteristics of expansive soil. Four different soils were collected from different parts of India for the current research. Soil samples S1, S2, S3 and S4 were collected from Nagpur, Bharuch, the Bharuch-Dahej highway, and Ahmedabad, respectively. The DFSI (Differential Free Swell Index) values of these soil samples were determined to be 134%, 104%, 70% and 30%, respectively. X-ray diffraction analysis exhibited that the percentage of montmorillonite mineral present in the soils reduced with decreasing DFSI. A series of constant volume swell pressure tests and in-contact filter paper tests were performed to evaluate the swelling pressure and matric suction of all four soils at 30% saturation and 1.46 g/cc dry density. Results indicated that soils possessing higher DFSI exhibited higher matric suction than lower-DFSI expansive soils. A significant influence of matric suction on the swelling pressure of expansive soils was observed with varying DFSI values.
Higher matric suction of the soil might govern water uptake into the interlayer spaces of the montmorillonite mineral present in expansive soil, leading to crystalline swelling.
Keywords: differential free swell index, expansive soils, matric suction, swelling pressure
Procedia PDF Downloads 166
138 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering
Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott
Abstract:
Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously without relying on particular genes or thresholds chosen by eye. It detects communities in a shared nearest neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the post-chemotherapy ovarian data with interesting marker genes that are potentially relevant for medical research.
Keywords: cancer research, graph theory, machine learning, single cell analysis
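The removal step described above, dropping vertices whose shared nearest neighbor (SNN) similarity to their own cluster is weak, can be sketched as follows. The modularity-optimizing community detection itself is not reproduced; the k-NN sets, cluster assignment and threshold below are hypothetical.

```python
def snn_similarity(neighbors, a, b):
    """Shared nearest neighbor similarity: size of the overlap of the
    k-nearest-neighbor sets of cells a and b."""
    return len(neighbors[a] & neighbors[b])

def weak_members(neighbors, clusters, min_avg_shared):
    """Flag cells whose average SNN similarity to their own cluster is weak.
    Sketch of the filtering idea only; the paper detects clusters by
    modularity optimization, which is not reproduced here."""
    flagged = []
    for cluster in clusters:
        for cell in cluster:
            others = [c for c in cluster if c != cell]
            if not others:
                continue
            avg = sum(snn_similarity(neighbors, cell, o)
                      for o in others) / len(others)
            if avg < min_avg_shared:
                flagged.append(cell)
    return flagged

# Hypothetical k-NN sets: cells 0-2 share most neighbors, cell 3 (a likely
# noise cell placed in the same cluster) barely overlaps with them
neighbors = {
    0: {1, 2, 4, 5},
    1: {0, 2, 4, 5},
    2: {0, 1, 4, 5},
    3: {6, 7, 8, 0},
}
clusters = [[0, 1, 2, 3]]
print(weak_members(neighbors, clusters, min_avg_shared=1.5))  # [3]
```

In the real pipeline this test would run per community of the SNN graph, and flagged cells would be removed before the clusters are re-evaluated.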
Procedia PDF Downloads 112
137 Algorithm for Improved Tree Counting and Detection through Adaptive Machine Learning Approach with the Integration of Watershed Transformation and Local Maxima Analysis
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
The Philippines has long been considered a valuable producer of high value crops globally. The country's employment and economy have been dependent on agriculture, increasing the demand for efficient agricultural mechanisms. Remote sensing and geographic information technology have proven to effectively provide applications for precision agriculture through image-processing techniques, considering the development of aerial scanning technology in the country. Accurate information concerning the spatial correlation within the field is especially important for precision farming of high value crops. The availability of height information and high spatial resolution images obtained from aerial scanning, together with the development of new image analysis methods, offers a relevant contribution to precision agriculture techniques and applications. In this study, an algorithm was developed and implemented to detect and count high value crops simultaneously through adaptive scaling of a support vector machine (SVM) algorithm subjected to an object-oriented approach combining watershed transformation and a local maxima filter to enhance tree counting and detection. The methodology is compared to cutting-edge template matching procedures to demonstrate its effectiveness on a demanding tree counting, recognition, and delineation problem. Since common data and image processing techniques are utilized, it can be easily implemented in production processes to cover large agricultural areas. The algorithm was tested on high value crops like palm, mango and coconut located in Misamis Oriental, Philippines, showing good performance, significantly above 90%, in particular for young adult and adult trees. The results can support inventories or database updating, allowing for the reduction of field work and manual interpretation tasks.
Keywords: high value crop, LiDAR, OBIA, precision agriculture
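The local maxima step named above, finding candidate tree tops as peaks in a height raster before watershed delineation, can be sketched with a strict 3x3 neighborhood test. The watershed transformation and SVM classification are not reproduced, and the canopy height values below are hypothetical.

```python
def local_maxima(chm, min_height):
    """Return (row, col) of cells that are strict 3x3 local maxima above
    min_height in a canopy height model (list of lists of heights, metres).
    Sketch of the local-maxima filter only; the watershed delineation and
    SVM classification used in the study are not reproduced."""
    peaks = []
    rows, cols = len(chm), len(chm[0])
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h < min_height:
                continue                     # too low to be a tree top
            is_peak = True
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and chm[rr][cc] >= h:
                        is_peak = False      # a neighbor is as tall or taller
            if is_peak:
                peaks.append((r, c))
    return peaks

# Hypothetical 5x5 canopy height model (m) with two crowns
chm = [
    [0, 0, 0, 0, 0],
    [0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0],
]
print(local_maxima(chm, min_height=2.0))  # [(1, 1), (3, 3)]
```

In a production pipeline the detected peaks would seed the watershed segmentation so each crown is delineated around its own maximum.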
Procedia PDF Downloads 402
136 Liposome Sterile Filtration Fouling: The Impact of Transmembrane Pressure on Performance
Authors: Hercules Argyropoulos, Thomas F. Johnson, Nigel B. Jackson, Kalliopi Zourna, Daniel G. Bracewell
Abstract:
Lipid encapsulation has become essential in drug delivery, notably for mRNA vaccines during the COVID-19 pandemic. However, the sterile filtration of such carriers poses challenges due to the risk of deformation, filter fouling, and product loss from adsorption onto the membrane. Choosing the right filtration membrane is crucial to maintain sterility and integrity while minimizing product loss. The objective of this study is to develop a rigorous analytical framework utilizing confocal microscopy and filtration blocking models to elucidate the fouling mechanisms of liposomes, as a model system for this class of delivery vehicle, during sterile filtration, particularly in response to variations in transmembrane pressure (TMP) during the filtration process. Experiments were conducted using fluorescent Lipoid S100 PC liposomes formulated by microfluidization and characterized by multi-angle dynamic light scattering. Dual-layer PES/PES and PES/PVDF membranes with 0.2 μm pores were used for filtration under constant pressure, cycling from 30 psi to 5 psi and back to 30 psi, with 5, 6, and 5-minute intervals. Cross-sectional membrane samples were prepared by microtome slicing and analyzed with confocal microscopy. Liposome characterization revealed a particle size range of 100-140 nm and an average concentration of 2.93×10¹¹ particles/mL. Goodness-of-fit analysis of flux decline data at varying TMPs identified the intermediate blocking model as most accurate at 30 psi and the cake filtration model at 5 psi. Membrane resistance analysis showed atypical behavior compared to therapeutic proteins, with resistance remaining below 1.38×10¹¹ m⁻¹ at 30 psi, increasing over fourfold at 5 psi, and then decreasing to 1-1.3-fold of the initial value when pressure was returned to 30 psi. This suggests that increased flow/shear deforms liposomes, enabling them to navigate membrane pores more effectively.
Confocal microscopy indicated that liposome fouling mainly occurred in the upper parts of the dual-layer membrane.
Keywords: sterile filtration, membrane resistance, microfluidization, confocal microscopy, liposomes, filtration blocking models
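The goodness-of-fit comparison described above can be sketched with the classical constant-pressure blocking laws (Hermia-type forms); the parameter values below are hypothetical, and the exact model forms and fitting procedure used by the authors are not given in the abstract.

```python
def intermediate_blocking_flux(j0, ki, t):
    """Constant-pressure intermediate blocking law: J(t) = J0 / (1 + Ki*J0*t)."""
    return j0 / (1.0 + ki * j0 * t)

def cake_filtration_flux(j0, kc, t):
    """One common constant-pressure cake filtration form:
    J(t) = J0 / sqrt(1 + 2*Kc*J0**2*t)."""
    return j0 / (1.0 + 2.0 * kc * j0 ** 2 * t) ** 0.5

def sum_squared_error(model, times, observed):
    """Goodness of fit of a model flux curve against observed flux data."""
    return sum((model(t) - j) ** 2 for t, j in zip(times, observed))

# Hypothetical flux data generated to follow the intermediate blocking law,
# mimicking the 30 psi case where that model fitted best
times = [0.0, 10.0, 20.0, 30.0]
observed = [intermediate_blocking_flux(1.0e-4, 50.0, t) for t in times]

sse_int = sum_squared_error(
    lambda t: intermediate_blocking_flux(1.0e-4, 50.0, t), times, observed)
sse_cake = sum_squared_error(
    lambda t: cake_filtration_flux(1.0e-4, 50.0, t), times, observed)
print(sse_int < sse_cake)  # the generating model fits best, as expected
```

In practice the rate constants Ki and Kc would be fitted to the measured flux decline at each TMP before the residuals are compared.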
Procedia PDF Downloads 19
135 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market
Authors: Cristian Păuna
Abstract:
In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making profits by speculation in financial markets. A significant number of traders and private or institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the base for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, limit conditions to build a mathematical filter for investment opportunities, and the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment, superior to the results obtained with other similar models, as this paper reveals.
The general idea sustained by this paper is that the presented Price Prediction Line model is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex
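The general idea of a trend line plus a limit condition filtering entries can be sketched as follows. The paper's actual Price Prediction Line algorithm is not disclosed in the abstract, so an ordinary least-squares trend is used here as a stand-in, with hypothetical prices and thresholds.

```python
def fit_trend_line(prices):
    """Least-squares line through closing prices; returns (slope, intercept).
    A stand-in for the paper's undisclosed Price Prediction Line algorithm."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def signal(prices, max_deviation):
    """Entry signal: last price below an upward trend line by more than
    max_deviation (a limit condition filtering weak opportunities)."""
    slope, intercept = fit_trend_line(prices)
    predicted = slope * (len(prices) - 1) + intercept
    deviation = predicted - prices[-1]
    if slope > 0 and deviation > max_deviation:
        return "buy"
    return "hold"

# Hypothetical index closes: an uptrend with a dip on the last bar
closes = [100.0, 101.0, 102.0, 103.0, 101.0]
print(signal(closes, max_deviation=1.0))  # "buy": price dipped below trend
```

The limit condition (max_deviation) is what turns the trend line into a filter: small pullbacks are ignored, and only deviations large enough to suggest an opportunity generate an entry signal.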
Procedia PDF Downloads 130
134 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas
Authors: Yusuf Ulaş Kabukçu, Sinan Çelik, Onur Salan, Maide Altuntaş, Mert Can Dalkiran, Göksenin Bozdağ, Metehan Bulut, Fatih Yaman
Abstract:
This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar at a 2.6 GHz operating frequency. The main difference of our prototype lies in the use of microstrip antennas, which makes it possible to transport the system on a small robotic vehicle. We designed our radar system with two channels: Tx and Rx. The system mainly consists of a voltage controlled oscillator (VCO) source, low noise amplifiers, microstrip antennas, a splitter, a mixer, a low pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation mode switch is 'off', only a CW signal is transmitted, for speed measurement. When the switch is 'on', the CW signal is frequency-modulated and range detection is possible. In speed detection mode, a high frequency (2.6 GHz) signal is generated by the VCO and then amplified to reach a reasonable transmit power level. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used so that the frequencies of the transmitted and received signals can be compared. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted and received (RF) signals and provides the IF output, in other words the Doppler frequency information. Then, the IF output is filtered and amplified so that the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed to the audio input of a computer. The Doppler frequency is then shown as a speed reading in a figure via a Matlab script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp.
This FM chirp helps to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in border security, since it is capable of both speed and range detection.
Keywords: doppler radar, FMCW, range detection, speed detection
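The conversions behind the two operation modes are standard radar relations and can be sketched as follows: the CW Doppler shift gives radial speed via v = fd·c/(2·f0), and the chirp bandwidth sets the FMCW range resolution via dR = c/(2·B). The Doppler shift and chirp bandwidth used in the example are hypothetical, not measured values from the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def speed_from_doppler(doppler_hz, carrier_hz=2.6e9):
    """Radial speed (m/s) of a target from the measured Doppler shift:
    v = fd * c / (2 * f0), the standard CW Doppler relation, here at the
    prototype's 2.6 GHz operating frequency."""
    return doppler_hz * C / (2.0 * carrier_hz)

def range_resolution(bandwidth_hz):
    """FMCW range resolution dR = c / (2 * B) for chirp bandwidth B."""
    return C / (2.0 * bandwidth_hz)

# A hypothetical 24 Hz Doppler shift at 2.6 GHz: roughly walking speed
print(round(speed_from_doppler(24.0), 2))  # 1.38 (m/s)
# A hypothetical 150 MHz chirp would resolve targets about 1 m apart
print(round(range_resolution(150e6), 2))   # 1.0 (m)
```

This also shows why CW alone cannot give range: the Doppler shift carries only velocity information, while range requires the frequency sweep of the FM chirp.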
Procedia PDF Downloads 398
133 Modelling and Simulation of Hysteresis Current Controlled Single-Phase Grid-Connected Inverter
Authors: Evren Isen
Abstract:
In grid-connected renewable energy systems, the input power is controlled by an AC/DC converter and/or a DC/DC converter, depending on the output voltage of the input source. The power is injected into the DC link, and the DC-link voltage is regulated by the inverter controlling the grid current. Inverter performance is important in grid-connected renewable energy systems in order to meet utility standards. In this paper, the modelling and simulation of a hysteresis current controlled single-phase grid-connected inverter, as used in renewable energy systems such as wind and solar systems, are presented. A 2 kW single-phase grid-connected inverter is simulated in Simulink and modeled in a Matlab m-file. Grid current synchronization is obtained by a phase locked loop (PLL) technique in the dq synchronous rotating frame. Although a dq-PLL can be easily implemented in three-phase systems, it is difficult to generate the β component of the grid voltage in a single-phase system, because only the single-phase grid voltage exists. An inverse-Park PLL with a low-pass filter is used to generate the β component for grid angle determination. As the grid current is controlled by a constant-bandwidth hysteresis current control (HCC) technique, the average switching frequency and the variation of the switching frequency over a fundamental period are considered. A total harmonic distortion of 3.56% for the grid current is achieved with a 0.5 A bandwidth. Curves of the average switching frequency and total harmonic distortion for different hysteresis bandwidths are obtained from the m-file model. The average switching frequency is 25.6 kHz, while the switching frequency varies between 14 kHz and 38 kHz over a fundamental period. The average and maximum frequency difference should be considered when selecting the solid-state switching device and designing the driver circuit. The steady-state and dynamic response performances of the inverter depending on the input power are presented with waveforms.
The control algorithm regulates the DC-link voltage by adjusting the output power.
Keywords: grid-connected inverter, hysteresis current control, inverter modelling, single-phase inverter
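The constant-bandwidth HCC decision described above can be sketched as follows: the inverter leg toggles whenever the current error leaves the hysteresis band. This is an open-loop illustration with hypothetical samples; in the real inverter the measured current responds to each switching event, which is what makes the switching frequency vary over the fundamental period.

```python
def hcc_switch_states(reference, measured, band, initial_state=True):
    """Hysteresis current control: toggle the inverter leg when the current
    error leaves the band of half-width `band` around the reference.
    Returns the switch state per sample (True = apply positive voltage).
    Sketch only: real HCC closes the loop, since the measured current
    responds to the switching; here a fixed measured waveform is used."""
    state = initial_state
    states = []
    for ref, meas in zip(reference, measured):
        error = ref - meas
        if error > band:
            state = True       # current too low: switch high to raise it
        elif error < -band:
            state = False      # current too high: switch low to lower it
        # inside the band: keep the previous state (no switching event)
        states.append(state)
    return states

# Hypothetical current samples (A) with the abstract's 0.5 A bandwidth
ref = [10.0, 10.0, 10.0, 10.0]
meas = [9.2, 9.8, 10.6, 10.1]
print(hcc_switch_states(ref, meas, band=0.5))  # [True, True, False, False]
```

Because the band is fixed while the sinusoidal reference slope changes over the cycle, the time between toggles varies, which is exactly the 14-38 kHz switching-frequency spread reported in the abstract.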
Procedia PDF Downloads 478
132 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
The X-ray attenuation coefficient µ(E) of any substance, at energy E, is a sum of contributions from Compton scattering, µCom(E), and the photoelectric effect, µPh(E). In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to ρe·fKN(E), while µPh(E) is proportional to (ρe·Zeff^x)/E^y, where fKN(E) is the Klein-Nishina formula and x and y are the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y, and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used for cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thicknesses lAl, with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si(Li) solid state detector and a GdOS scintillator detector.
In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It was also found that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm for finding the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
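The inversion step described above, recovering X = ρe and Y = ρe·Zeff^x from two averaged attenuation measurements that are each linear in X and Y, reduces to solving a 2x2 linear system. A minimal sketch: the coefficient values below are hypothetical placeholders for the tabulated spectrum- and detector-dependent coefficients, not the paper's numbers.

```python
def dect_invert(mu1, mu2, coeffs):
    """Solve  mu(V1) = a1*X + b1*Y,  mu(V2) = a2*X + b2*Y  for X, Y by
    Cramer's rule. coeffs = (a1, b1, a2, b2) are the spectrum- and
    detector-dependent inversion coefficients (hypothetical values here)."""
    a1, b1, a2, b2 = coeffs
    det = a1 * b2 - b1 * a2
    x = (mu1 * b2 - b1 * mu2) / det
    y = (a1 * mu2 - mu1 * a2) / det
    return x, y

# Hypothetical coefficients standing in for V1 = 100 kVp and V2 = 140 kVp
coeffs = (0.18, 0.030, 0.16, 0.012)

# Forward model: build mu values from an assumed X (electron density) and
# Y (rho_e * Zeff^x), then invert to recover them
X_true, Y_true = 3.34, 25.0
mu1 = coeffs[0] * X_true + coeffs[1] * Y_true
mu2 = coeffs[2] * X_true + coeffs[3] * Y_true
X, Y = dect_invert(mu1, mu2, coeffs)
print(round(X, 2), round(Y, 2))  # recovers the assumed X and Y
```

The sensitivity to the source spectrum discussed in the abstract enters entirely through the four coefficients: if the true spectrum has more filtration than assumed, the coefficients are wrong and the recovered Y (hence Zeff^x) is biased, as seen for propionic acid and KOH.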
Procedia PDF Downloads 400
131 One-Step Synthesis and Characterization of Biodegradable ‘Click-Able’ Polyester Polymer for Biomedical Applications
Authors: Wadha Alqahtani
Abstract:
In recent times, polymers have seen a great surge of interest in the field of medicine, particularly chemotherapeutics. One recent innovation is the conversion of polymeric materials into "polymeric nanoparticles". These nanoparticles can be designed and modified to encapsulate and transport drugs selectively to cancer cells, minimizing collateral damage to surrounding healthy tissues and improving patient quality of life. In this study, we synthesized pseudo-branched polyester polymers from bio-based small molecules, including sorbitol, glutaric acid and a propargylic acid derivative, and further modified the polymer to make it "click-able" with an azide-modified target ligand. The melt polymerization technique was used for this polymerization reaction, using the lipase enzyme catalyst NOVO 435. The reaction was conducted at 90-95 °C for 72 hours. Polymer samples were collected in 24-hour increments for characterization and to monitor reaction progress. The resulting polymer was purified by dissolving in methanol and filtering with filter paper, then characterized via NMR, GPC, FTIR, DSC, TGA and MALDI-TOF. Following characterization, these polymers were converted to a polymeric nanoparticle drug delivery system using the solvent diffusion method, wherein the DiI optical dye and the chemotherapeutic drug Taxol can be encapsulated simultaneously. The efficacy of the nanoparticles' apoptotic effects was analyzed in vitro by incubation with prostate cancer (LNCaP) and healthy (CHO) cells. MTT assays and fluorescence microscopy were used to assess cellular uptake and the viability of the cells after 24 hours at 37 °C and a 5% CO2 atmosphere. Results of the assays and fluorescence imaging confirmed that the nanoparticles were successful in both selectively targeting and inducing apoptosis in 80% of the LNCaP cells within 24 hours without affecting the viability of the CHO cells.
These results show the potential of biodegradable polymers as vehicles for receptor-specific drug delivery and as a potential alternative to traditional systemic chemotherapy. Detailed experimental results will be discussed in the e-poster.
Keywords: chemotherapeutic drug, click chemistry, nanoparticle, prostate cancer
Procedia PDF Downloads 115
130 Estimation of Dynamic Characteristics of a Middle Rise Steel Reinforced Concrete Building Using Long-Term Earthquake Observation Records
Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu
Abstract:
In the earthquake-resistant design of buildings, evaluating vibration characteristics is important. In recent years, with the increasing number of super high-rise buildings, evaluating the response of not only the first mode but also higher modes has become important; knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, the characteristics of the first and second modes of an SRC building were studied by applying a frequency filter to an ARX model of its earthquake observation records. First, we studied the change in eigenfrequency and damping ratio during the 3.11 earthquake. The eigenfrequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this time, the decreasing rates of the 1st and 2nd eigenfrequencies are both about 0.7. Although the damping ratio has a larger error than the eigenfrequency, both the 1st and 2nd damping ratios are 3 to 5%. There is also a strong correlation between the 1st and 2nd eigenfrequencies, with regression line y=3.17x; for the damping ratio, the regression line is y=0.90x, so the 1st and 2nd damping ratios are approximately equal. Next, we studied the eigenfrequency and damping ratio for earthquakes from 1998 to 2014, including the 3.11 earthquake, with all considered earthquakes connected in order of occurrence. The eigenfrequency slowly declined from immediately after completion of the building and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigenfrequencies until about 7 years later are about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%.
For the long-term data, there is again a strong correlation between the 1st and 2nd eigenfrequencies, with regression line y=3.17x; for the damping ratio, the regression line is y=1.01x, so the 1st and 2nd damping ratios are approximately equal. Based on the above results, the changes in eigenfrequency and damping ratio are summarized as follows. In the long term, both the 1st and 2nd eigenfrequencies gradually declined from immediately after completion and tended to stabilize after a few years; they declined further after the 3.11 earthquake. In addition, there is a strong correlation between the 1st and 2nd modes, with the same timing of decline and the same decreasing rate. In the long term, both the 1st and 2nd damping ratios are about 1 to 6%; after the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%, and the two remain approximately equal.
Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records
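The identification step above reduces to mapping the poles of a fitted discrete-time ARX model to eigenfrequencies and damping ratios. A minimal sketch of that pole conversion (the pole value, sampling interval, and modal values below are illustrative, not taken from the building data):

```python
import cmath
import math

def modal_params(z, dt):
    """Convert a discrete-time model pole z (sample interval dt, in s)
    to an eigenfrequency (Hz) and a damping ratio."""
    s = cmath.log(z) / dt      # map to the continuous-time pole s
    omega = abs(s)             # undamped natural frequency (rad/s)
    zeta = -s.real / omega     # damping ratio
    return omega / (2 * math.pi), zeta

# Example: a pole synthesized from f = 1.0 Hz, zeta = 0.04, dt = 0.01 s
f0, z0, dt = 1.0, 0.04, 0.01
w = 2 * math.pi * f0
pole = cmath.exp((-z0 * w + 1j * w * math.sqrt(1 - z0**2)) * dt)
f_est, zeta_est = modal_params(pole, dt)   # recovers 1.0 Hz, 0.04
```

The same conversion applies to each complex-conjugate pole pair of the ARX model, one pair per identified mode.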
Procedia PDF Downloads 217
129 Automatic Furrow Detection for Precision Agriculture
Authors: Manpreet Kaur, Cheol-Hong Min
Abstract:
Robots equipped with machine vision sensors are an increasingly attractive solution to various problems on agricultural farms. An important task for such machine vision systems is crop row and weed detection. This paper proposes an automatic furrow detection system based on real-time processing for identifying crop rows in maize fields in the presence of weeds. The vision system is designed to be installed on farming vehicles and is therefore subjected to vibration and other undesired movements. The images are captured in perspective and are affected by these undesired effects. The goal is to identify crop rows for vehicle navigation, including weed removal, where weeds are identified as plants outside the crop rows. Image quality is affected by varying lighting conditions and by gaps along the crop rows due to failed germination and planting errors. The proposed image processing method consists of four steps. First, the image is segmented using an HSV (Hue, Saturation, Value) decision tree: the algorithm uses the HSV color space to discriminate crops, weeds, and soil, and the region of interest is defined by filtering each of the HSV channels between maximum and minimum threshold values. Second, noise is removed with a hybrid median filter. Third, mathematical morphology is applied: erosion removes smaller objects, and dilation then gradually enlarges the boundaries of foreground regions, enhancing image contrast. Finally, to accurately detect the position of the crop rows, the region of interest is defined by a binary mask, and edge detection and the Hough transform are applied to detect lines represented in polar coordinates, with furrow directions appearing as accumulations on the angle axis of the Hough space.
The experimental results show that the method is effective.
Keywords: furrow detection, morphological, HSV, Hough transform
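The final voting step can be illustrated with a bare-bones Hough accumulator. This is a generic sketch of the (theta, rho) voting scheme, not the authors' implementation, and the toy "crop row" below is synthetic:

```python
import math
from collections import defaultdict

def hough_lines(points, theta_steps=180, rho_res=1.0):
    """Vote in (theta, rho) space: each foreground pixel votes for all
    lines x*cos(theta) + y*sin(theta) = rho that pass through it."""
    acc = defaultdict(int)
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_res))] += 1
    return acc

# Toy "crop row": foreground pixels along the vertical line x = 5
points = [(5, y) for y in range(20)]
acc = hough_lines(points)
# the cell (theta index 0, rho = 5) collects a vote from every pixel,
# so the vertical row shows up as a peak on the angle axis
peak_votes = acc[(0, 5)]
```

A crop row then appears as a maximal accumulator cell, and its theta index gives the furrow direction relative to the image axes.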
Procedia PDF Downloads 231
128 Carbonaceous Monolithic Multi-Channel Denuders as a Gas-Particle Partitioning Tool for the Occupational Sampling of Aerosols from Semi-Volatile Organic Compounds
Authors: Vesta Kohlmeier, George C. Dragan, Juergen Orasche, Juergen Schnelle-Kreis, Dietmar Breuer, Ralf Zimmermann
Abstract:
Aerosols from hazardous semi-volatile organic compounds (SVOC) may occur in workplace air and can be present simultaneously in the particle and gas phases. For health risk assessment, it is necessary to collect particles and gases separately. This can be achieved by using a denuder for gas phase collection, combined with a filter and an adsorber for particle collection. The study focused on the suitability of carbonaceous monolithic multi-channel denuders, so-called NovaCarb™ denuders (MastCarbon International Ltd., Guilford, UK), to achieve gas-particle separation. Particle transmission efficiency experiments were performed with polystyrene latex (PSL) particles (size range 0.51-3 µm), while the time-dependent gas phase collection efficiency was analysed for polar and nonpolar SVOC (mass concentrations 7-10 mg/m³) over 2 h at 5 or 10 L/min. The experimental gas phase collection efficiency was also compared with theoretical predictions. For n-hexadecane (C16), the gas phase collection efficiency was at most 91% for one denuder and 98% for two denuders, while for diethylene glycol (DEG), a maximal gas phase collection efficiency of 93% for one denuder and 97% for two denuders was observed. Higher gas phase collection efficiencies were achieved at 5 L/min than at 10 L/min. The deviations between the theoretical and experimental gas phase collection efficiencies were up to 5% for C16 and 23% for DEG. Since the theoretical efficiency depends on the geometric shape and length of the denuder, the flow rate, and the diffusion coefficients of the tested substances, the theoretical values define an upper limit that could be reached. Regarding particle transmission through the denuders, a single denuder showed transmission efficiencies around 98% for 1-3 µm particle diameters, while three denuders in series gave transmission efficiencies of 93-97% for the same particle sizes.
In summary, NovaCarb™ denuders are well suited to sampling aerosols of polar and nonpolar substances with particle diameters ≤ 3 µm at flow rates of 5 L/min or lower. These properties and their compact size make them suitable for use in personal aerosol samplers. This work is supported by the German Social Accident Insurance (DGUV), research contract FP371.
Keywords: gas phase collection efficiency, particle transmission, personal aerosol sampler, SVOC
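The theoretical gas-phase collection efficiency referred to above is commonly estimated, for laminar flow in a cylindrical channel, from the Gormley-Kennedy penetration formula. A sketch under assumed values (the abstract does not give the denuder geometry, so the diffusion coefficient, channel length, and channel count below are illustrative only):

```python
import math

def gormley_kennedy_efficiency(D, L, Q):
    """Diffusional collection efficiency (1 - penetration) for laminar
    flow in one cylindrical channel. D: diffusion coefficient (cm^2/s),
    L: channel length (cm), Q: flow through the channel (cm^3/s)."""
    mu = math.pi * D * L / Q   # dimensionless deposition parameter
    if mu < 0.02:              # short-channel branch of the solution
        return 2.5638 * mu**(2/3) - 1.2 * mu - 0.1767 * mu**(4/3)
    penetration = (0.8191 * math.exp(-3.6568 * mu)
                   + 0.0975 * math.exp(-22.305 * mu)
                   + 0.0325 * math.exp(-56.961 * mu))
    return 1 - penetration

# Assumed values: D = 0.06 cm^2/s (a C16-like vapor), L = 10 cm,
# total flow of 5 L/min split evenly over 100 monolith channels
q_channel = 5000 / 60 / 100    # cm^3/s per channel
eff = gormley_kennedy_efficiency(0.06, 10.0, q_channel)
```

The formula reproduces the qualitative trends reported above: efficiency rises with channel length and diffusion coefficient, and falls as the flow rate increases from 5 to 10 L/min.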
Procedia PDF Downloads 176
127 Performance of Different Spray Nozzles in the Application of Defoliant on Cotton Plants (Gossypium hirsutum L.)
Authors: Mohamud Ali Ibrahim, Ali Bayat, Ali Bolat
Abstract:
Defoliant spraying is an important step in mechanized cotton harvesting because adequate and uniform spraying can improve defoliation quality and reduce cotton trash content. In defoliant application, the application volume and spraying technology are extremely important. In this study, the effectiveness of applying defoliant to cotton ready for harvest was determined for two application volumes and three nozzle types on a standard field crop sprayer. Experiments were carried out in two phases: field trials and laboratory analysis. Application rates were 250 L/ha and 400 L/ha, and the spray nozzles were (1) a standard flat fan nozzle (TP8006), (2) an air induction nozzle (AI 11002-VS), and (3) a dual pattern nozzle (AI307003VP). A tracer (BSF) and defoliant were applied to mature cotton with approximately 60% open bolls, and sampling for BSF deposition and spray coverage was done at two plant heights (upper and lower layers). Before and after spraying, the rates of open bolls and of leaves on the cotton plants were calculated; filter papers were used to detect BSF deposition, and water sensitive papers (WSP) were used to measure the coverage rate of the spraying methods. A spectrofluorophotometer was used to detect the amount of tracer deposited on targets, and an image-processing program was used to measure the coverage rate on the WSP. The analysis showed that the air induction nozzle (AI 11002-VS) achieved better results than the dual pattern and standard flat fan nozzles in terms of deposition, coverage, leaf defoliation, and boll opening rates. The AI nozzle operating at the 250 L/ha application rate provided the highest deposition and coverage rate; in addition, BSF, used as an indicator of the defoliant, reached the undersides of leaves with this nozzle only.
After defoliation, the boll opening rate was 85% on the 7th and 12th days after spraying, and the leaf drop rate was 76% at an application rate of 250 L/ha with the air induction (AI 11002) nozzle.
Keywords: cotton defoliant, air induction nozzle, dual pattern nozzle, standard flat fan nozzle, coverage rate, spray deposition, boll opening rate, leaves falling rate
Procedia PDF Downloads 197
126 Effect of Electromagnetic Fields at 27 GHz on Sperm Quality of Mytilus galloprovincialis
Authors: Carmen Sica, Elena M. Scalisi, Sara Ignoto, Ludovica Palmeri, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo
Abstract:
The use of wireless internet technologies such as Wi-Fi and 5G routers/modems has risen markedly in recent years. These devices emit a considerable amount of electromagnetic radiation (EMR), which could interact with the male reproductive system through either thermal or non-thermal mechanisms. The aim of this study was to investigate the direct in vitro influence of 5G radiation on sperm quality in Mytilus galloprovincialis, considered an excellent model for reproduction studies. The experiments at 27 GHz were conducted using a non-commercial high-gain pyramidal horn antenna. To evaluate the specific absorption rate (SAR), a numerical simulation was performed. The resulting incident power density was significantly lower than the power density limit of 10 mW/cm² set by the international guidelines as a limit for non-thermal effects above 6 GHz. Temperature measurements of the aqueous sample showed an increase of only 0.2 °C compared to the control samples; this very small temperature increase could not have interfered with the experiments. For the experiments, sperm samples taken from sexually mature males of Mytilus galloprovincialis were placed in artificial seawater (salinity 30 ± 1‰, pH 8.3) filtered with a 0.2 µm filter. After evaluating the number and quality of spermatozoa, sperm cells were exposed to electromagnetic fields at 27 GHz. The effect of exposure on sperm motility and quality was evaluated after 10, 20, 30, and 40 minutes with a light microscope, and the eosin test was used to verify the vitality of the gametes. All experiments were performed in triplicate, and statistical analysis was carried out using one-way analysis of variance (ANOVA) with Tukey's test for multiple comparisons of means to determine differences in sperm motility. A significant decrease (30%) in sperm motility was observed after 10 minutes of exposure, and after 30 minutes, all sperm were immobile and not vital.
Given the limited literature data on this topic, these results could be useful for further studies on the wide diffusion of these new technologies.
Keywords: mussel, spermatozoa, sperm motility, millimeter waves
Procedia PDF Downloads 167
125 Backwash Optimization for Drinking Water Treatment Biological Filters
Authors: Sarra K. Ikhlef, Onita Basu
Abstract:
Natural organic matter (NOM) removal efficiency in drinking water treatment biological filters can be highly influenced by backwashing conditions. Backwashing removes accumulated biomass and particles in order to regenerate the biological filters' removal capacity and prevent excessive headloss buildup. A lab-scale system consisting of 3 biological filters was used in this study to examine the implications of different backwash strategies on biological filtration performance. The backwash procedures were evaluated based on their impacts on dissolved organic carbon (DOC) removal, biological filter biomass, backwash water volume usage, and particle removal. Results showed that under nutrient-limited conditions, the simultaneous use of air and water under collapse-pulsing conditions led to a DOC removal of 22%, significantly higher (p<0.05) than the 12% removal observed under water-only backwash conditions. Under nutrient-supplemented conditions, a bed expansion of 20% led to DOC removals similar to those of a 30% reference bed expansion using the same water volume; on the other hand, a higher bed expansion (40%) led to significantly lower DOC removals (23%). Also, a backwash strategy that reduced backwash water volume usage by about 20% resulted in DOC removals similar to those of the reference backwash. The backwash procedures investigated in this study showed no consistent impact on biological filter biomass concentrations as measured by the phospholipid and adenosine tri-phosphate (ATP) methods, and neither analysis showed a direct correlation with DOC removal. Dissolved oxygen (DO) uptake, on the other hand, correlated directly with DOC removal. The addition of the extended terminal subfluidization wash (ETSW) demonstrated no apparent impact on DOC removal and successfully eliminated the filter ripening sequence (FRS).
As a result, the additional water usage from implementing ETSW was compensated by water savings after restart. Results from this study provide insight for researchers and water treatment utilities on how to better optimize the backwashing procedure and, in turn, the overall biological filtration process.
Keywords: biological filtration, backwashing, collapse pulsing, ETSW
Procedia PDF Downloads 273
124 Spatial Structure of First-Order Voronoi for the Future of Roundabout Cairo Since 1867
Authors: Ali Essam El Shazly
Abstract:
The Haussmannization plan of Cairo in 1867 formed a regular network of roundabout spaces, though deteriorated at present. The method of identifying the spatial structure of roundabout Cairo for conservation matches the Voronoi diagram with space syntax through their shared geometrical property of spatial convexity. In this initiative, the primary convex hull of first-order Voronoi adopts the integral and control measurements of space syntax on Cairo's roundabout generators. The functional essence of the royal palaces optimizes the roundabout structure in terms of spatial measurements and the symbolic Voronoi projection of 'Tahrir Roundabout' over the Giza Nile and Pyramids. Some roundabouts of major public and commercial landmarks surround the pole of 'Ezbekia Garden' with higher control than integral measurements, which filters the new spatial structure from the adjacent traditional town. Nevertheless, the least integral and control measures correspond to the Voronoi contents of pollutant workshops and the plateau of the old Cairo Citadel, with the visual compensation of new royal landmarks on top. Meanwhile, the extended suburbs of infinite Voronoi polygons arrange high-control generators of chateau housing in the 'garden city' environs. The point pattern of roundabouts determines the geometrical characteristics of the Voronoi polygons. The measured lengths of Voronoi edges alternate between the zoned short range at the new poles of Cairo and the distributed structure of longer range. The shortest range of generator-vertex geometry concentrates at 'Ezbekia Garden', where the crossways of vast Cairo intersect, which maximizes the variety of choice at different spatial resolutions. However, the symbolic 'Hippodrome', the largest public landmark, forms exclusive geometrical measurements while structuring a most integrative roundabout to parallel the royal syntax.
An overview of the symbolic convex hull of Voronoi with space syntax interconnects Parisian Cairo with the spatial chronology of scattered monuments to conceive one universal Cairo structure. Accordingly, the proposed 'Voronoi-syntax' methodology informs the future conservation of roundabout Cairo at the inferred city-level concept.
Keywords: roundabout Cairo, first-order Voronoi, space syntax, spatial structure
Procedia PDF Downloads 501
123 Improve Divers Tracking and Classification in Sonar Images Using Robust Diver Wake Detection Algorithm
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Harbor protection systems are very important, and the need for automatic protection systems has increased over the last years. Diver detection active sonar is of great significance: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar images are processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented with a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric. A linear Kalman filter is then applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned based on our active sonar specifications. After the tracks are initialized and updated, they are subjected to a filtering stage that eliminates noisy and unstable tracks, as well as objects with speeds outside the diver speed range, such as buoys and fast boats. The resulting tracks are then subjected to a classification stage that decides whether the tracked object is an open-circuit or closed-circuit diver. At this stage, a small area around the object is extracted and a novel wake detection method is applied: the morphological features of the object and its wake are extracted, and a support vector machine is used to find the best classifier. The training and test sonar images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine and stable tracks of the divers.
The total diver-type classification accuracy achieved is 97%.
Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification
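The tracking step can be sketched with a minimal one-dimensional constant-velocity Kalman filter. This is a generic textbook filter, not the tuned tracker described above; the process and measurement noise parameters and the diver trajectory are illustrative:

```python
class Kalman1D:
    """Minimal constant-velocity Kalman filter for one coordinate of a
    track: state [position, velocity], position-only measurements."""
    def __init__(self, dt, q=0.1, r=1.0):
        self.dt, self.q, self.r = dt, q, r
        self.x = [0.0, 0.0]                  # state estimate
        self.P = [[10.0, 0.0], [0.0, 10.0]]  # state covariance

    def step(self, z):
        dt, q, r = self.dt, self.q, self.r
        # Predict: x = F x, P = F P F' + Q, with F = [[1, dt], [0, 1]]
        xp = [self.x[0] + dt * self.x[1], self.x[1]]
        P = self.P
        Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
               P[0][1] + dt * P[1][1]],
              [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        S = Pp[0][0] + r
        K = [Pp[0][0] / S, Pp[1][0] / S]      # Kalman gain
        innov = z - xp[0]
        self.x = [xp[0] + K[0] * innov, xp[1] + K[1] * innov]
        self.P = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
                  [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
        return self.x[0]

# A target moving at 0.5 m/s, observed once per second
kf = Kalman1D(dt=1.0)
est = [kf.step(0.5 * t) for t in range(20)]
```

In a real tracker, q and r would be tuned to the sonar's update rate and measurement noise, which is the fine-tuning step the abstract refers to; the estimated velocity also provides the speed gate used to reject buoys and fast boats.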
Procedia PDF Downloads 238
122 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge
Authors: K. Eryilmaz, G. Mercanoglu
Abstract:
Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. The two most commonly used are C18 solid phase extraction (SPE) and elution from a weak cation exchanger cartridge. The C18 SPE method yields a high-purity final product, while elution from the weak cation exchanger cartridge is pH-dependent and ineffective at removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds for cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of a pH problem or colloidal impurities. Material and Methods: To form colloidal impurities, 3 mL of water for injection (WFI) was added to 30 mCi of ¹⁷⁷LuCl₃ solution and allowed to stand for 1 day. ¹⁷⁷Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge, and the cartridge was washed with 10 mL of saline to remove impurities into the waste vial. The product trapped on the cartridge was eluted with 2 mL of 50% ethanol, collected into the final product vial through a 0.22 µm filter, and diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC (column: ACE C18-100A, 3 µm, 150 × 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: The UV and radioactivity detector results of the HPLC analysis showed that colloidal impurities were completely removed from the ¹⁷⁷Lu-DOTATATE/colloidal impurity mixture by the purification method.
Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide syntheses in which weak cation exchange purification is the last step. The purity of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are its two major advantages.
Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis
Procedia PDF Downloads 160
121 Structure-Guided Optimization of Sulphonamide as Gamma–Secretase Inhibitors for the Treatment of Alzheimer’s Disease
Authors: Vaishali Patil, Neeraj Masand
Abstract:
Alzheimer’s disease (AD) is proving to be a lethal disease in older people. According to the amyloid hypothesis, aggregation of the amyloid β–protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of amyloid precursor protein (APP) by β–secretase (BACE) and γ–secretase (GS). Thus, in the treatment of AD, γ-secretase modulators (GSMs) are potential disease-modifying agents, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ–secretase activity. This likely avoids the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter and QSAR and molecular docking analyses, was used to identify novel γ–secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs were synthesized, and the in silico results were validated by in vivo analysis. In the first step, behavioral assessment was carried out using the scopolamine-induced amnesia model. The same series was then evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate changes in biochemical markers of Alzheimer’s disease such as lipid peroxidation (LPO), glutathione (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile.
It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity through inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.
Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR
Procedia PDF Downloads 254
120 Human Lens Metabolome: A Combined LC-MS and NMR Study
Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich
Abstract:
Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. Lens tissue has a very specific structure: it has no vascular system, and the lens proteins, crystallins, do not turn over throughout the lifespan. Protection of the lens proteins is provided by metabolites that diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, studying changes in the metabolite composition of cataractous lenses relative to normal lenses may elucidate possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, the levels in the lens cortex and nucleus are similar, with a few exceptions including antioxidants and UV filters: the concentrations of glutathione, ascorbate, and NAD decrease in the lens nucleus relative to the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes increase. This confirms that the lens core is metabolically inert and that metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, spontaneous UV filter decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly.
The content of the most important metabolites (antioxidants, UV filters, and osmolytes) in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer and that age-related cataractogenesis might originate from dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used to analyse changes in the lens chemical composition occurring with age and with cataract development.
Keywords: cataract, lens, NMR, LC-MS, metabolome
Procedia PDF Downloads 322
119 Effect of Locally Produced Sweetened Pediatric Antibiotics on Streptococcus mutans Isolated from the Oral Cavity of Pediatric Patients in Syria - in Vitro Study
Authors: Omar Nasani, Chaza Kouchaji, Muznah Alkhani, Maisaa Abd-alkareem
Abstract:
Objective: To evaluate the influence of sweetening agents used in pediatric medications on the growth of Streptococcus mutans colonies and their effect on cariogenic activity in the oral cavity; no previous studies have been registered in Syrian children. Methods: Specimens were isolated from the oral cavity of pediatric patients, and an in vitro study was then performed on locally manufactured liquid pediatric antibiotics containing natural or synthetic sweeteners. The selected antibiotics were Ampicillin (sucrose), Amoxicillin (sucrose), Amoxicillin + Flucloxacillin (sorbitol), and Amoxicillin + Clavulanic acid (sorbitol or sucrose). These antibiotics have a known inhibitory effect on Gram-positive aerobic/anaerobic bacteria, especially the Streptococcus mutans strains in children’s oral biofilm. Five colonies were studied with each antibiotic. Saturated antibiotics were spread on 6 mm diameter filter discs. The incubated culture media were compared with each other and with control antibiotic discs, and results were evaluated by measuring the diameters of the inhibition zones. The control antibiotic discs were sourced from Abtek Biologicals Ltd. Results: The diameters of the inhibition zones around discs of antibiotics sweetened with sorbitol were larger than those of discs sweetened with sucrose. The effect was most pronounced when comparing Amoxicillin + Clavulanic acid (sucrose, 25 mm versus sorbitol, 27 mm). The highest inhibitory effect was observed with Amoxicillin + Flucloxacillin sweetened with sorbitol (38 mm), whereas the lowest was observed with Amoxicillin and Ampicillin sweetened with sucrose (22 mm and 21 mm). Conclusion: The results of this study indicate that although all selected antibiotics produced an inhibitory effect on S. mutans, sucrose weakened the inhibitory action of the antibiotic to varying degrees, while formulations containing sorbitol matched the effects of the control antibiotics.
This study calls attention to the effects of sweeteners included in pediatric drugs on oral hygiene and tooth decay.
Keywords: pediatric, dentistry, antibiotics, streptococcus mutans, biofilm, sucrose, sugar free
Procedia PDF Downloads 72
118 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, their uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for the data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size.
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species 'quality profile', resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
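The core comparison in the abstract above — fitting a presence model with and without a post-entry validation filter and comparing AUC — can be illustrated with a minimal sketch. This is not the study's actual pipeline: the data are synthetic, the `validated` flag is a hypothetical record attribute, and logistic regression stands in for the Maxent algorithm used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic opportunistic records: one environmental covariate, a
# presence/absence label, and a hypothetical post-entry validation flag.
n = 2000
env = rng.normal(size=n)
presence = (env + rng.normal(scale=1.0, size=n) > 0).astype(int)
validated = rng.random(n) < 0.6  # assumed filter attribute, not from the study

def fit_auc(x, y):
    """Fit a simple presence model and return its training AUC."""
    model = LogisticRegression().fit(x.reshape(-1, 1), y)
    scores = model.predict_proba(x.reshape(-1, 1))[:, 1]
    return roc_auc_score(y, scores)

auc_all = fit_auc(env, presence)                             # unfiltered data
auc_filtered = fit_auc(env[validated], presence[validated])  # validated only

print(f"AUC all records: {auc_all:.3f}, validated only: {auc_filtered:.3f}")
```

Note that filtering changes both data quality and sample size at once, which is exactly the confound the study controls for when comparing AUC, sensitivity, and specificity.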
Procedia PDF Downloads 202

117 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅
Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio
Abstract:
Plutonium is present in different concentrations in the environment and in biological samples as a result of nuclear weapons testing, nuclear waste recycling, and accidental discharges from nuclear plants. This radioisotope is considered the most radiotoxic substance, particularly when it enters the human body through inhalation of insoluble powders or aerosols. This is the main reason for determining the concentration of this radioisotope in the atmosphere. Besides that, the ²⁴⁰Pu/²³⁹Pu isotopic ratio provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant recent developments in sample preparation and accurate measurement methods to detect the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique that allows measurements at detection levels around femtograms (10⁻¹⁵ g). The AMS determinations include the chemical isolation of Pu. The Pu separation involved an acidic digestion and a radiochemical purification using an anion exchange resin. Finally, the source is prepared by pressing Pu into the corresponding cathodes. To the authors' knowledge, these aerosols showed variations of the ²³⁵U/²³⁸U ratio from its natural value, suggesting that an anthropogenic source could be altering it. The determination of the concentrations of the isotopes of Pu can be a useful tool in order to clarify this presence in the atmosphere. The first results showed a mean ²³⁹Pu activity concentration of 280 nBq m⁻³, and the ²⁴⁰Pu/²³⁹Pu ratio was 0.025, corresponding to a weapon production source; these results corroborate that an anthropogenic influence is increasing the concentration of radioactive material in PM₂.₅. 
To the authors' knowledge, activity concentrations of ²³⁹⁺²⁴⁰Pu in Total Suspended Particles (TSP) of around a few tens of nBq m⁻³, with ²⁴⁰Pu/²³⁹Pu ratios of 0.17, have been reported. The preliminary results in the MZVM show higher activity concentrations of isotopes of Pu (40 and 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than previously reported. These results are in the order of the activity concentrations of Pu in high-purity weapons-grade material.
Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio
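The claim that femtogram-level sensitivity is needed can be checked by converting the reported ²³⁹Pu activity concentration (280 nBq m⁻³) into a mass concentration via A = λN. A short sketch, using only the well-established half-life of ²³⁹Pu (≈24,110 years); the arithmetic is ours, not from the abstract:

```python
import math

# Convert the reported ²³⁹Pu activity concentration into a mass concentration,
# illustrating why AMS femtogram-level sensitivity is required.
HALF_LIFE_PU239_Y = 24110           # years, well-established value
SECONDS_PER_YEAR = 365.25 * 24 * 3600
AVOGADRO = 6.022e23                 # atoms per mole
MOLAR_MASS_PU239 = 239.05           # g/mol

decay_const = math.log(2) / (HALF_LIFE_PU239_Y * SECONDS_PER_YEAR)  # λ in s⁻¹
activity = 280e-9                   # reported mean activity, Bq per m³ of air
atoms_per_m3 = activity / decay_const          # N = A / λ
grams_per_m3 = atoms_per_m3 * MOLAR_MASS_PU239 / AVOGADRO

print(f"{grams_per_m3 * 1e15:.2f} fg of ²³⁹Pu per m³")  # ≈ 0.12 fg m⁻³
```

A mean of roughly 0.12 fg of ²³⁹Pu per cubic meter of sampled air is indeed at the femtogram detection level the abstract attributes to AMS.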
Procedia PDF Downloads 167

116 Partial Least Square Regression for High-Dimensional and Highly Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative shrinkage factors, high dimensional data, high correlated data
Procedia PDF Downloads 49

115 Perception of Greek Vowels by Arabic-Greek Bilinguals: An Experimental Study
Authors: Georgios P. Georgiou
Abstract:
Infants are able to discriminate a number of sound contrasts in most languages. However, this ability is not retained in adults, who might face difficulties in accurately discriminating second language sound contrasts as they filter second language speech through the phonological categories of their native language. For example, Spanish speakers often struggle to perceive the difference between the English /ε/ and /æ/ because neither vowel exists in their native language; so they assimilate these vowels to the closest phonological category of their first language. The present study aims to uncover the perceptual patterns of adult Arabic speakers with regard to the vowels of their second language (Greek). To date, no study has investigated the perception of Greek vowels by Arabic speakers; thus, the present study contributes to the enrichment of the literature with cross-linguistic research in new languages. For the purpose of the present study, 15 native speakers of Egyptian Arabic who permanently live in Cyprus and have adequate knowledge of Greek as a second language completed vowel assimilation and vowel contrast discrimination (AXB) tests in their second language. The perceptual stimuli included nonsense words that contained vowels in both stressed and unstressed positions. The second language listeners' patterns were analyzed through the Perceptual Assimilation Model, which makes testable hypotheses about the assimilation of second language sounds to the speakers' native phonological categories and the discrimination accuracy over second language sound contrasts. The results indicated that second language listeners assimilated pairs of Greek vowels into a single phonological category of their native language, resulting in a Category Goodness difference assimilation type for the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ vowel contrasts. 
On the contrary, the members of the Greek unstressed /i/-/e/ vowel contrast were assimilated to two different categories, resulting in a Two Category assimilation type. Furthermore, the listeners could discriminate the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ contrasts only to a moderate degree, while the Greek unstressed /i/-/e/ contrast could be discriminated to an excellent degree. Two main implications emerge from the results. First, there is a strong influence of the listeners' native language on the perception of the second language vowels. In Egyptian Arabic, contiguous vowel categories such as [i]-[e] and [u]-[o] are not phonemically distinct but are subject to allophonic variation; by contrast, the vowel contrasts /i/-/e/ and /o/-/u/ are phonemic in Greek. Second, the role of stress is significant for second language perception, since stressed vs. unstressed vowel contrasts were perceived in a different manner by the second language listeners.
Keywords: Arabic, bilingual, Greek, vowel perception
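For readers unfamiliar with the AXB paradigm used above: each trial presents three stimuli, where X matches either A or B, and discrimination accuracy is the proportion of trials on which the listener correctly identifies the match. A minimal scoring sketch with invented trial data (not the study's responses):

```python
# Scoring an AXB discrimination test: each trial presents stimuli A, X, B,
# and the listener judges whether X matches A or B. Trials are illustrative.
trials = [
    {"x_matches": "A", "response": "A"},   # correct
    {"x_matches": "B", "response": "B"},   # correct
    {"x_matches": "A", "response": "B"},   # incorrect
    {"x_matches": "B", "response": "B"},   # correct
]

correct = sum(t["response"] == t["x_matches"] for t in trials)
accuracy = correct / len(trials)
print(f"AXB discrimination accuracy: {accuracy:.0%}")  # 75% for this sample
```

Accuracies of this kind, aggregated per contrast, are what the abstract grades as "moderate" or "excellent" discrimination.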
Procedia PDF Downloads 138