Search results for: motor parameter estimation
556 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method
Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota
Abstract:
The Schlieren method, which has been conventionally used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of refraction of light. The Background Oriented Schlieren (BOS) method proposed by Meier is one of the measurement methods that solves the problems mentioned above. Like the Schlieren method, the BOS method exploits the refraction of light, but it uses a digital camera to capture images of a background placed behind the observation area. The images are later analyzed by computer to quantitatively detect the shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters that are necessary in the conventional Schlieren method, which simplifies the experimental setup. However, the BOS method causes defocusing of the observation results, because focusing the camera on the background image leaves the observed object out of focus. The defocusing of the object becomes greater as the distance between the background and the object increases; on the other hand, a higher sensitivity is obtained. It is therefore necessary to adjust the distance between the background and the object appropriately for the experiment, considering the trade-off between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, visualization experiments of an underexpanded jet were performed using a BOS measurement system that we constructed with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. The images were later analyzed on a personal computer to quantitatively detect the shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shift was then reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred, and the noise decreases, as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that the sensitivity constant should be greater than 20 and the circle of confusion diameter should be less than 2.7 mm, at least in this experimental setup.
Keywords: BOS method, underexpanded jet, Abel transformation, density field visualization
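As a rough illustration of the reconstruction step described above (an inverse Abel transform of the line-of-sight-integrated data followed by the Gladstone-Dale relation), the following Python sketch shows one minimal way it could be coded. The discretization, the synthetic deflection profile, the Gladstone-Dale constant and the ambient density are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def inverse_abel(F, dy):
    """Crude numerical inverse Abel transform of a radially symmetric projection F(y).

    f(r) = -(1/pi) * integral_r^inf  F'(y) / sqrt(y^2 - r^2) dy
    F is sampled on y = 0, dy, 2*dy, ... (half profile, axis of symmetry at index 0).
    """
    n = len(F)
    y = np.arange(n) * dy
    dFdy = np.gradient(F, dy)
    f = np.zeros(n)
    for i, r in enumerate(y):
        js = np.arange(i + 1, n)          # start one sample past r to skip the singularity
        if len(js) == 0:
            continue
        integrand = dFdy[js] / np.sqrt(y[js] ** 2 - r ** 2)
        f[i] = -np.trapz(integrand, y[js]) / np.pi
    return f

# Synthetic example: projected refractive-index difference profile (assumed, not measured data)
dy = 1e-4                                  # sample spacing in the jet plane, m (assumed)
y = np.arange(200) * dy
F = 1e-7 * np.exp(-(y / 5e-3) ** 2)        # integral of (n - n0) along the line of sight

delta_n = inverse_abel(F, dy)              # local refractive-index difference n(r) - n0
K = 2.26e-4                                # Gladstone-Dale constant for air, m^3/kg (typical value)
rho0 = 1.2                                 # ambient density, kg/m^3 (assumed)
rho = rho0 + delta_n / K                   # Gladstone-Dale relation: n - 1 = K * rho
print(rho[:5])
```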
Procedia PDF Downloads 79
555 Zinc Sorption by Six Agricultural Soils Amended with Municipal Biosolids
Authors: Antoine Karam, Lotfi Khiari, Bruno Breton, Alfred Jaouich
Abstract:
Anthropogenic sources of zinc (Zn), including industrial emissions and effluents, Zn-rich fertilizer materials and pesticides containing Zn, can contribute to increasing the concentration of soluble Zn to levels toxic to plants in acid sandy soils. The application of municipal sewage sludge or biosolids (MBS), which contain metal-immobilizing agents, to coarse-textured soils could improve the metal sorption capacity of these low-CEC soils. The purpose of this experiment was to evaluate the sorption of Zn in surface samples (0-15 cm) of six Quebec (Canada) soils amended with MBS (pH 6.9) from Val d'Or (Quebec, Canada). Soil samples amended with increasing amounts (0 to 20%) of MBS were equilibrated with various amounts of Zn as ZnCl2 in 0.01 M CaCl2 for 48 hours at room temperature. Sorbed Zn was calculated from the difference between the initial and final Zn concentration in solution. Zn sorption data conformed to the linear form of the Freundlich equation. The amount of sorbed Zn increased considerably with increasing MBS rate. Analysis of variance revealed a highly significant effect (p ≤ 0.001) of soil texture and MBS rate on the amount of sorbed Zn. The average Zn-sorption capacity of MBS-amended coarse-textured soils was lower than that of MBS-amended fine-textured soils. The two sandy soils (86-99% sand) amended with MBS retained 2 to 5 times more Zn than those without MBS (control). Significant Pearson correlation coefficients were obtained between the Zn sorption isotherm parameter, i.e., the Freundlich sorption coefficient (KF), and commonly measured physical and chemical properties. Among all the soil properties measured, soil pH gave the best significant correlation coefficients (p ≤ 0.001) for soils receiving 0, 5 and 10% MBS. Furthermore, KF values were positively correlated with soil clay content, exchangeable basic cations (Ca, Mg or K), CEC and the clay content to CEC ratio. From these results, it can be concluded that (i) municipal biosolids provide sorption sites that have a strong affinity for Zn, (ii) both soil texture, especially clay content, and soil pH are the main factors controlling anthropogenic Zn sorption in the municipal biosolids-amended soils, and (iii) the effect of municipal biosolids on Zn sorption will be more pronounced for a sandy soil than for a clay soil.
Keywords: metal, recycling, sewage sludge, trace element
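Since the sorption data are described by the linear form of the Freundlich equation, a small Python sketch of how the Freundlich coefficient KF and the exponent 1/n can be extracted from a sorption isotherm may help; the concentrations and sorbed amounts below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical sorption data for one soil-biosolids mixture:
# equilibrium solution concentration C (mg/L) and sorbed amount S (mg/kg).
C = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0])
S = np.array([45.0, 78.0, 160.0, 265.0, 430.0, 700.0])

# Linear form of the Freundlich equation: log S = log KF + (1/n) * log C
slope, intercept = np.polyfit(np.log10(C), np.log10(S), 1)
KF = 10 ** intercept       # Freundlich sorption coefficient
n_inv = slope              # 1/n, a measure of sorption intensity

print(f"KF = {KF:.1f}, 1/n = {n_inv:.2f}")
```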
Procedia PDF Downloads 284
554 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is assessed from the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). The lack of real-time measurement of the sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables that are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using seven input variables, namely initial crystal size (Lo), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial supersaturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these seven input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model, as the initial crystal size (Lo) did not appear to play a significant role. The goodness of the resulting regression model was evaluated: the coefficient of determination, R², was 0.994, and the maximum absolute relative error (MARE) was 4.6%. The high R² (~1.0) and the reasonably low MARE indicate that the model is able to predict sugar crystal size accurately as a function of the six easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the crystallization process in a fed-batch vacuum evaporative crystallizer.
Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
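To make the soft-sensor idea concrete, the sketch below fits a linear regression to a synthetic 128-point dataset and reports R² and MARE, the two goodness-of-fit measures quoted above. The input ranges, coefficients and noise level are invented for illustration and do not come from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical dataset: 6 easy-to-measure inputs (T, P, Ff, Fs, S0, t)
# and the resulting mean crystal size (MA); all values are illustrative only.
X = rng.uniform([60, 0.15, 1.0, 0.5, 1.05, 0.5],
                [75, 0.30, 3.0, 1.5, 1.25, 3.0], size=(128, 6))
true_coef = np.array([0.004, -0.5, 0.02, 0.03, 0.8, 0.12])
y = 0.2 + X @ true_coef + rng.normal(0, 0.01, 128)       # mean aperture, mm (synthetic)

model = LinearRegression().fit(X, y)
y_hat = model.predict(X)

r2 = model.score(X, y)                                   # coefficient of determination
mare = np.max(np.abs((y - y_hat) / y)) * 100             # maximum absolute relative error, %
print(f"R^2 = {r2:.3f}, MARE = {mare:.1f}%")
```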
Procedia PDF Downloads 209
553 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure
Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin
Abstract:
Potassium (K) is a well-known macronutrient and an essential element for plant growth. Single leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step in analyzing the partitioning of metals affected by environmental conditions, but it is not a tool for estimating K bioavailability. The traditional single leaching method, in contrast, has long been used to classify K speciation; it depends on K availability to plants and is used to set potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility. The change in the microstructure of clay minerals under various environments (i.e., swelling or shrinking) is characterized using Transmission X-Ray Microscopy (TXM). The objectives of this study were to 1) compare the distribution of K speciation between the single leaching and sequential extraction processes and 2) determine the clay particle flocculation structure before and after suspension with K+ using TXM. Four tropical soil samples were selected: farming without K fertilizer (10 years), long-term applied K fertilizer (10 years; 168-240 kg K2O ha-1 year-1), red soil (450-500 kg K2O ha-1 year-1) and forest soil. The results showed that the amounts of K obtained by the single leaching method decreased in the order mineral K, HNO3 K, non-exchangeable K, NH4OAc K, exchangeable K and water-soluble K. The sequential extraction process indicated that most K speciations in the soil were associated with the residual, organic matter, Fe or Mn oxide and exchangeable fractions, while the K fraction associated with carbonate was not detected in the tropical soil samples. The soils under long-term K fertilization and the red soil had higher exchangeable K than the soil farmed long-term without K fertilizer and the forest soil. The results indicated that applying K fertilizer and organic fertilizer is one way to increase the available K (water-soluble K and exchangeable K). Two-dimensional TXM images of clay particles suspended with K+ show that the clay minerals aggregate into closed-void cellular networks. The porous cellular structure of the soil aggregates had larger empty voids in 1 M KCl solution than in 0.025 M KCl and in deionized water, respectively. TXM nanotomography is a new technique that can be useful in this field as a tool for a better understanding of clay mineral microstructure.
Keywords: potassium, sequential extraction process, clay mineral, TXM
Procedia PDF Downloads 291
552 Effect of Strength Class of Concrete and Curing Conditions on Capillary Absorption of Self-Compacting and Conventional Concrete
Authors: Emine Ebru Demirci, Remzi Şahin
Abstract:
The purpose of this study is to compare self-compacting concrete (SCC) and conventional concrete (CC), which are used in beams with dense reinforcement, in terms of their capillary absorption. In the comparison of SCC and CC, the effects of two further factors were also investigated: concrete strength class and curing condition. In the study, both SCC and CC were produced in three concrete classes (C25, C50 and C70), and the other parameter, curing condition, had two levels: moisture and air curing. Beam dimensions were 200 x 250 x 3000 mm. Reinforcement of the beams was calculated and placed as 2ø12 at the top and 3ø12 at the bottom. Stirrups of 8 mm diameter were used as lateral reinforcement, with stirrup spacings of 10 cm in the confinement zone and 15 cm in the central zone. This arrangement was intended to reproduce densely packed rebars in the lateral cross-sections of the beams and the handling of SCC under realistic conditions. The concrete cover of the rebars was equal in all directions at 25 mm. The capillary absorption measurements were performed on core samples taken from the beams. Core samples of ø8 x 16 cm were taken from the beginning (0-100 cm), middle (100-200 cm) and end (200-300 cm) regions of the beams, with respect to the casting direction of the SCC, from the lateral surface of the beams. The capillary absorption experiments were performed according to Turkish Standard TS EN 13057. It was observed that, for both curing environments and all strength classes of concrete, SCCs had lower capillary absorption values than CCs. The capillary absorption values of the C25 class of SCC are 11% and 16% lower than those of the C25 class of CC for air and moisture curing, respectively. For the C50 class, these decreases were 6% and 18%, while for the C70 class they were 16% and 9%, respectively. It was also found that, for both SCC and CC, the capillary absorption values of samples kept in moisture curing are significantly lower than those of samples stored in air curing. For CC, the C25, C50 and C70 class moisture-cured samples had 26%, 12% and 31% lower capillary absorption values, respectively, compared to the air-cured ones; for SCC, these values were 30%, 23% and 24%, respectively. Apart from that, the capillary absorption values of both SCC and CC decrease with increasing strength class for both curing environments. For air-cured CC, the C50 and C70 concretes had 39% and 63% lower capillary absorption values compared to the C25 concrete; for the same type of concrete cured in the moisture environment, these values were 27% and 66%. For SCC, the capillary absorption values of the air-cured C50 and C70 concretes were 35% and 65% lower than that of C25, while for the moisture-cured samples these values were 29% and 63%, respectively. When the standard deviations of the capillary absorption values are compared for core samples obtained from the beginning, middle and end of the CC and SCC beams, the variation is much smaller for SCC than for CC in all three strength classes. This demonstrates that SCC has a more uniform character than CC.
Keywords: self compacting concrete, reinforced concrete beam, capillary absorption, strength class, curing condition
Procedia PDF Downloads 371
551 The Evolution of Deformation in the Southern-Central Tunisian Atlas: Parameters and Modelling
Authors: Mohamed Sadok Bensalem, Soulef Amamria, Khaled Lazzez, Mohamed Ghanmi
Abstract:
The southern-central Tunisian Atlas presents a typical example of an external zone. It occupies a particular position in the North African chains: firstly, it is the eastern limit of the atlassic structures; secondly, it forms the boundary between the belt structures to the north and the stable Saharan platform to the south. The study of the evolution of deformation is based on several methods, such as classical or numerical methods. The principal parameters controlling the genesis of folds in the southern-central Tunisian Atlas are the reactivation of pre-existing faults during the later compressive phase, the evolution of the decollement level, and the relation between thin- and thick-skinned deformation. One of the main characteristics of the southern-central Tunisian Atlas is the variation of the belt structure directions, determined by: the NE-SW direction, named the atlassic direction in Tunisia; the NW-SE direction carried along the Gafsa fault (the eastern limit of the southern atlassic accident); and the E-W direction defined in the southern Tunisian Atlas. This variation of direction is the result of an important variation of deformation during the different tectonic phases. A classical modelling of the Jebel ElKebar anticline, based on the throws of the pre-existing faults and their reactivation during the compressive phases, shows the importance of extensional deformation, particularly during the Aptian-Albian period, compared with that of the later compression (Alpine phases). A numerical modelling based on the software Rampe E.M. 1.5.0, applied to the anticline of Jebel Orbata, confirms the interpretation of a "fault-related fold" with a decollement level within the Triassic successions. The other important parameter in the evolution of deformation is the vertical migration of the decollement level; indeed, the higher the decollement level lies in the recent series, the more the deformation is accentuated. The evolution of deformation is marked by the development of a duplex structure in Jebel At Taghli (eastern limit of Jebel Orbata). Consequently, the evolution of deformation is proportional to the depth of the decollement level: the most important deformation occurs in the higher successions and is thus associated with thin-skinned deformation, the decollement level permitting the passive transfer of deformation into the cover.
Keywords: evolution of deformation, pre-existing faults, decollement level, thin-skinned
Procedia PDF Downloads 132
550 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon
Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi
Abstract:
The radial form of the nuclear matter distribution, the charge and the shape of nuclei are essential properties of nuclei and hence of great interest for several areas of research in nuclear physics. More than three decades have witnessed a range of experimental means employing leptonic probes (such as muons, electrons, etc.) for exploring nuclear charge distributions, whereas hadronic probes (for example alpha particles, protons, etc.) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb-modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model based calculation applying the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density calculated in the relativistic mean-field (RMF) approach reproduce the p-9Li and p-11Li experimental data quite nicely compared to Gaussian-Gaussian or Gaussian-oscillator densities at all energies under consideration. In the approach described here, no free/adjustable parameter has been employed to reproduce the elastic scattering data, in contrast to the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data. The calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported in earlier works, though so far no experimental studies have been performed to measure them.
Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation
Procedia PDF Downloads 163
549 Estimation of Hydrogen Production from PWR Spent Fuel Due to Alpha Radiolysis
Authors: Sivakumar Kottapalli, Abdesselam Abdelouas, Christoph Hartnack
Abstract:
Spent nuclear fuel generates a mixed field of ionizing radiation in the surrounding water. This radiation field is generally dominated by gamma rays and a limited flux of fast neutrons. The fuel cladding effectively attenuates beta and alpha particle radiation. However, a small fraction of the spent nuclear fuel exhibits some degree of fuel cladding penetration due to pitting corrosion and mechanical failure. Breaches in the fuel cladding allow the exposure of small volumes of water in the cask to alpha and beta ionizing radiation. The safety of the transport of radioactive material is assured by the package complying with the IAEA Requirements for the Safe Transport of Radioactive Material SSR-6. It is of high interest to avoid the generation of hydrogen inside the cavity, which may lead to an explosive mixture. The risk of hydrogen production, along with other radiolysis gases, should therefore be analyzed for a typical spent fuel. This work aims to perform a realistic study of the production of hydrogen by radiolysis assuming the most penalizing initial conditions. It consists of calculating the radionuclide inventory of a pellet, taking into account burnup and decay. Westinghouse 17X17 PWR fuel has been chosen, and data have been analyzed for different sets of enrichment, burnup, irradiation cycles and storage conditions. The inventory is calculated as the entry point for the simulation of hydrogen production using radiolysis kinetic models in MAKSIMA-CHEMIST. Dose rates decrease strongly within ~45 μm from the fuel surface towards the solution (water) in the case of alpha radiation, while the decrease is slower for beta radiation and even slower for gamma radiation. Calculations are carried out to obtain spectra as a function of time, and the radiation dose rate profiles are taken as the input data for the iterative calculations. The hydrogen yield has been found to be around 0.02 mol/L. Calculations have also been performed for a realistic scenario considering a capsule containing the spent fuel rod, and the corresponding hydrogen yield is discussed. Experiments are in progress to validate the hydrogen production rate using a cyclotron at > 5 MeV (at ARRONAX, Nantes).
Keywords: radiolysis, spent fuel, hydrogen, cyclotron
Procedia PDF Downloads 521
548 Evaluation of Different Liquid Scintillation Counting Methods for 222Rn Determination in Waters
Authors: Jovana Nikolov, Natasa Todorovic, Ivana Stojkovic
Abstract:
Monitoring of 222Rn in drinking or surface waters, as well as in groundwater, has been performed in connection with geological, hydrogeological and hydrological surveys and health hazard studies. Liquid scintillation counting (LSC) is often the preferred analytical method for 222Rn measurements in waters because it allows automatic analysis of multiple samples. The LSC method involves mixing water samples with an organic scintillation cocktail, which triggers radon diffusion from the aqueous into the organic phase, for which it has a much greater affinity, thereby eliminating the possibility of radon emanation. Two direct LSC methods that assume different sample compositions have been presented, optimized and evaluated in this study. The one-phase method involves direct mixing of a 10 ml sample with 10 ml of an emulsifying cocktail (Ultima Gold AB scintillation cocktail is used). The two-phase method involves the use of water-immiscible cocktails (in this study High Efficiency Mineral Oil Scintillator, Opti-Fluor O and Ultima Gold F are used). Calibration samples were prepared with an aqueous 226Ra standard in 20 ml glass vials and counted on the ultra-low background spectrometer Quantulus 1220TM equipped with a PSA (Pulse Shape Analysis) circuit, which discriminates alpha/beta spectra. Since the calibration procedure is carried out with a 226Ra standard, which has both alpha- and beta-emitting progeny, the PSA discriminator is clearly of vital importance for reliable and precise spectrum separation. Consequently, the calibration procedure investigated the influence of the PSA discriminator level on the 222Rn detection efficiency, using the 226Ra calibration standard over a wide range of activity concentrations. The presented methods were evaluated on the basis of the obtained detection efficiencies and the achieved minimal detectable activity (MDA). The accuracy and precision of the presented methods, as well as the performance of the different scintillation cocktails, were compared using measurements of 226Ra-spiked water samples of known activity and of environmental samples.
Keywords: 222Rn in water, Quantulus1220TM, scintillation cocktail, PSA parameter
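As a pointer to how the MDA figure of merit mentioned above is typically obtained, the following Python sketch evaluates Currie's expression from a background count rate, counting time, detection efficiency and sample volume; the numerical values are illustrative assumptions, not results from this study.

```python
import numpy as np

def mda_bq_per_liter(background_cpm, count_time_min, efficiency, volume_l):
    """Minimal detectable activity from Currie's expression.

    LD = 2.71 + 4.65 * sqrt(B), where B is the number of background counts
    collected during the counting time; converted to Bq/L using the detection
    efficiency, the counting time in seconds and the sample volume.
    """
    B = background_cpm * count_time_min
    ld_counts = 2.71 + 4.65 * np.sqrt(B)
    return ld_counts / (efficiency * count_time_min * 60.0 * volume_l)

# Illustrative values for a 10 mL water aliquot counted for 300 min
print(mda_bq_per_liter(background_cpm=1.5, count_time_min=300,
                       efficiency=0.95, volume_l=0.010))
```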
Procedia PDF Downloads 201
547 The Relationship between Physical Fitness and Academic Performance among University Students
Authors: Bahar Ayberk
Abstract:
The study was conducted to determine the relationship between physical fitness and academic performance among university students. The well-known saying 'a sound mind in a sound body', referring to the potential contribution of physical fitness to the intellectual development of individuals, appears to be supported. There is a growing body of literature on the impact of physical fitness on academic achievement, especially in elementary and middle-school aged children. Even though there are numerous positive effects related to being physically active and physically fit, their effect on the academic achievement of university students is much less clear. The subjects of this study were 25 students (20 female and 5 male) enrolled in the Physiotherapy and Rehabilitation Department of the Faculty of Health Sciences, Yeditepe University. All participants filled in a questionnaire about their socio-demographic status, general health status, and physical activity status. Health-related physical fitness testing included several core components: 1) body composition evaluation (body mass index, waist-to-hip ratio), 2) cardiovascular endurance evaluation (Queen's College step test), 3) muscle strength and endurance evaluation (sit-up test, push-up test), and 4) flexibility evaluation (sit and reach test). Academic performance evaluation was based on the students' Cumulative Grade Point Average (CGPA). The proportion of subjects participating in regular physical activity was found to be 40% (n = 10). CGPA scores were higher among students engaging in regular physical activity than among those who were not (2.71 ± 0.46 vs. 3.02 ± 0.28; p = 0.076). The results of the study also revealed a positive correlation between sit-up and push-up performance and academic performance (CGPA) (r = 0.43, p ≤ 0.05) and a negative correlation between the cardiovascular endurance parameter (Queen's College step test) and CGPA (r = -0.47, p ≤ 0.05). In conclusion, the findings confirmed that physical fitness level was generally associated with academic performance in the study group. Cardiovascular endurance and muscle strength and endurance were associated with the students' CGPA, whereas body composition and flexibility were unrelated to CGPA.
Keywords: academic performance, health-related physical fitness, physical activity, physical fitness testing
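The correlation figures quoted above come from Pearson correlation analysis; a minimal Python sketch of that computation, using made-up fitness and CGPA values rather than the study's data, is shown below.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations for a small cohort:
# push-up repetitions and cumulative GPA (values are illustrative only).
pushups = np.array([12, 20, 8, 25, 15, 30, 10, 22, 18, 27])
cgpa = np.array([2.6, 3.0, 2.4, 3.2, 2.8, 3.4, 2.5, 3.1, 2.9, 3.3])

r, p = stats.pearsonr(pushups, cgpa)   # Pearson correlation coefficient and p-value
print(f"r = {r:.2f}, p = {p:.3f}")
```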
Procedia PDF Downloads 164
546 Dy3+ Ions Doped Single and Mixed Alkali Fluoro Tungstunate Tellurite Glasses for Laser and White LED Applications
Authors: Allam Srinivasa Rao, Ch. Annapurna Devi, G. Vijaya Prakash
Abstract:
A new series of white-light-emitting 1 mol% Dy3+-doped single-alkali and mixed-alkali fluoro tungstunate tellurite glasses has been prepared using the melt quenching technique, and their spectroscopic behaviour was investigated through XRD, optical absorption, photoluminescence and lifetime measurements. The bonding parameter studies reveal the ionic nature of the Dy-O bond in the present glasses. From the absorption spectra, the Judd-Ofelt (J-O) intensity parameters have been determined, which are used to explore the nature of bonding and the symmetry orientation of the Dy-ligand field environment. The evaluated J-O parameters follow the same trend (Ω4 > Ω2 > Ω6) for all the glasses. The photoluminescence spectra of all the glasses exhibit two intense peaks in the blue and yellow regions, corresponding to the transitions 4F9/2→6H15/2 (483 nm) and 4F9/2→6H13/2 (575 nm), respectively. From the photoluminescence spectra, it is observed that the luminescence intensity is maximum for the Dy3+-doped potassium combination of fluoro tungstunate tellurite glass (TeWK: 1Dy). The J-O intensity parameters have been used to determine the various radiative properties for the different emission transitions from the 4F9/2 fluorescent level. The highest emission cross-section and branching ratio values observed for the 4F9/2→6H15/2 and 4F9/2→6H13/2 transitions suggest possible laser action in the visible region from these glasses. Using the experimental lifetimes (τ_exp) measured from the decay spectral features and the radiative lifetimes (τ_R), the quantum efficiencies (η) for all the glasses have been evaluated. Among all the glasses, the potassium-combined fluoro tungstunate tellurite (TeWK: 1Dy) glass has the highest quantum efficiency (94.6%). The CIE colour chromaticity coordinates (x, y), (u, v), correlated colour temperature (CCT) and Y/B ratio were also estimated from the photoluminescence spectra for the different glass compositions. The (x, y) and (u, v) chromaticity coordinates fall within the white light region, and the white light can be tuned by varying the composition of the glass. From all these studies, we suggest that the 1 mol% Dy3+-doped TeWK glass is most suitable for lasing and white-LED applications.
Keywords: dysprosium, Judd-Ofelt parameters, photo luminescence, tellurite glasses
Procedia PDF Downloads 224
545 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, environment, etc. It can show cyclical changes of earth surface objects and can delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assesses mangrove forests using RS techniques; its aim was the quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries. The analysis was based on Landsat satellite images from 1975 to 2013, matched to ground control points. This part of the mangroves is the last distribution in the northern hemisphere, so the work can provide a good basis for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting changes on the Earth's surface. This research used MSS, TM, ETM+ and OLI sensor images from 1975, 1990, 2000 and 2003-2013. Changes were studied, after essential corrections such as error fixing, band combination and georeferencing to the 2012 image as the base image, using maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to verify the changes obtained from the satellite images. The results showed that in 2012 the mangrove area in Bidkhoon was 1,119,072 m² by GPS, 1,231,200 m² by maximum likelihood supervised classification and 1,317,600 m² by IPVI. The corresponding areas in Basatin were 466,644 m², 88,200 m² and 63,000 m², respectively. The final results show that the forests have declined naturally, and in Basatin the decline is due to human activities. The loss was offset by planting over many years, although the trend has been declining again in recent years. These results show that satellite images have a high ability to estimate such environmental processes. This research also showed a high correlation between image-derived indexes such as IPVI and NDVI and the ground control points.
Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
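For readers unfamiliar with the vegetation indexes named above, the short Python sketch below computes NDVI and IPVI from red and near-infrared reflectance bands; the band values and the vegetation threshold are illustrative, not taken from the Landsat scenes used in the study.

```python
import numpy as np

# Hypothetical red and near-infrared reflectance bands (e.g., from a Landsat scene),
# given here as tiny arrays; in practice these would be read from the image files.
red = np.array([[0.08, 0.12], [0.10, 0.30]])
nir = np.array([[0.45, 0.40], [0.42, 0.32]])

ndvi = (nir - red) / (nir + red)   # Normalized Difference Vegetation Index, range [-1, 1]
ipvi = nir / (nir + red)           # Infrared Percentage Vegetation Index, range [0, 1]

# IPVI equals (NDVI + 1) / 2, so both indexes rank pixels identically.
vegetation_mask = ipvi > 0.6       # illustrative threshold for dense vegetation
print(ndvi, ipvi, vegetation_mask, sep="\n")
```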
Procedia PDF Downloads 294
544 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on hue, saturation and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras: an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon gave the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. With these values, the precision and recall averaged over all the images were 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: agricultural engineering, image processing, computer vision, flower detection
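A rough sketch of the kind of HSV segmentation pipeline described above is given below in Python with OpenCV; the file name, the saturation/value floors and the minimum blob area are assumptions, and the hue bounds are the paper's 0.12-0.18 range converted to OpenCV's 0-179 hue scale.

```python
import cv2
import numpy as np

# Illustrative segmentation of yellow flowers in one image (path is a placeholder).
img = cv2.imread("greenhouse_row.jpg")           # BGR image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hue 0.12-0.18 on a 0-1 scale is roughly 22-32 on OpenCV's 0-179 hue scale.
lower = np.array([22, 80, 80], dtype=np.uint8)   # assumed saturation/value floors
upper = np.array([32, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

# Morphological opening removes small speckles before counting candidate flowers.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
flowers = [i for i in range(1, num_labels) if stats[i, cv2.CC_STAT_AREA] > 50]
print(f"candidate flowers: {len(flowers)}")
```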
Procedia PDF Downloads 330
543 Structural and Microstructural Analysis of White Etching Layer Formation by Electrical Arcing Induced on the Surface of Rail Track
Authors: Ali Ahmed Ali Al-Juboori, H. Zhu, D. Wexler, H. Li, C. Lu, J. McLeod, S. Pannila, J. Barnes
Abstract:
A number of studies have focused on the formation mechanics of the white etching layer (WEL) and its origin in railway operation. Until recently, the following hypotheses have been considered for the precise mechanics of WEL formation: (i) WELs are the result of a thermal process caused by wheel slip; (ii) WELs are mechanically induced by severe plastic deformation; (iii) WELs are caused by a combined thermo-mechanical process. The mechanisms discussed above lead to the occurrence of white etching layers in the wheel-rail contact area. This is because the contact patch, the active point of the wheel on the rail, is exposed to the highest shear stresses, which result in localised severe plastic deformation, and to the highest heating rate, caused by wheel slip during excessive traction or braking effort. However, if WELs are found away from the running band area, this would suggest that there is another cause of WEL formation. In railway systems, particularly electrified railways, arcing has been occurring more often and more regularly on the rails. In an electrified railway, the current is delivered to the train traction motor via contact wires and then returned to the station via the contact between the wheel and the rail. If the contact between the wheel and the rail is temporarily lost, due to dynamic vibration, entrapped dirt or water, lubricant effects or oxidation, a high current can jump across the gap and result in arcing. Other sources of arcing include the wheel passing over an insulated joint and lightning striking a train during bad weather. During arcing, extensive heat is generated and spread over a large area of the top surface of the rail. Thus, arcing is considered another heat source in the rail head (rather than wheel slip) that results in microstructural changes and white etching layer formation. A head-hardened (HH) rail steel, cut from a curved rail track, was used for the investigation. Samples were sectioned from a depth of 10 mm below the rail surface, where the material is considered to be still within the hardened layer but away from any microstructural changes in the top surface layer caused by train passage. These samples were subjected to electrical discharges using a Gas Tungsten Arc Welding (GTAW) machine. The arc current was controlled and moved along the sample surface in the direction of travel. Five different conditions were applied to the surface of the samples. Samples containing pre-existing WELs, taken from an ex-service rail surface, were also considered in this study for comparison. Both the simulated and the ex-service WELs were characterised by advanced methods including SEM, TEM, TKD, EDS and XRD. Samples for TEM and TKD were prepared by Focused Ion Beam (FIB) milling. The results showed that the WELs simulated by electrical arcing and the ex-service WELs comprise similar microstructures. A brown etching layer was found together with the WELs, likely induced by a concurrent tempering process. This study provides a clear understanding of a new formation mechanism of WELs, which contributes to track maintenance procedures.
Keywords: white etching layer, arcing, brown etching layer, material characterisation
Procedia PDF Downloads 122
542 Phenotypic and Molecular Heterogeneity Linked to the Magnesium Transporter CNNM2
Authors: Reham Khalaf-Nazzal, Imad Dweikat, Paula Gimenez, Iker Oyenarte, Alfonso Martinez-Cruz, Domonik Muller
Abstract:
The metal cation transport mediator (CNNM) gene family comprises four isoforms that are expressed in various human tissues. Structurally, CNNMs are complex proteins that contain an extracellular N-terminal domain preceding a DUF21 transmembrane domain, a 'Bateman module' and a C-terminal cNMP-binding domain. Mutations in CNNM2 cause familial dominant hypomagnesaemia. Growing evidence highlights the role of CNNM2 in neurodevelopment; mutations in CNNM2 have been implicated in epilepsy, intellectual disability, schizophrenia and other conditions. In the present study, we aim to elucidate the function of CNNM2 in the developing brain, and we present the genetic origin of symptoms in two family cohorts. In the first family, three siblings of a consanguineous Palestinian family, in which the parents are first cousins and consanguinity ran over several generations, presented with varying degrees of intellectual disability, cone-rod dystrophy and autism spectrum disorder. Exome sequencing and segregation analysis revealed a homozygous pathogenic mutation in the CNNM2 gene; the parents were heterozygous for this mutation. Magnesium blood levels, measured several times, were normal in the three children and their parents, and they had no symptoms of hypomagnesemia. The CNNM2 mutation in this family is located in the CBS1 domain of the CNNM2 protein. The crystal structure of the mutated CNNM2 protein was not significantly different from that of the wild-type protein, and the binding of AMP or MgATP was not dramatically affected. This suggests that the CBS1 domain could be involved in purely neurodevelopmental functions independent of its magnesium-handling role, and that this mutation could have affected the binding of a protein partner or other functions of the protein. In the second family, another autosomal dominant CNNM2 mutation was found to run through a large family, with multiple affected individuals over three generations. All affected family members had hypomagnesemia and hypermagnesuria, and oral magnesium supplementation did not increase serum magnesium levels significantly. Some affected members of this family have defects in fine motor skills, such as dyslexia and dyslalia. The detected mutation is located in the N-terminal part, which contains a signal peptide thought to be involved in the sorting and routing of the protein. In this project, we describe heterogeneous clinical phenotypes related to CNNM2 mutations and protein functions. In the first family, and to the authors' knowledge for the first time, we report the involvement of CNNM2 in retinal photoreceptor development and function, as well as the presence of a neurophenotype independent of magnesium status related to a CNNM2 protein mutation. Taking into account the different modes of inheritance and the different positions of the mutations within CNNM2 and its structural and functional domains, it is likely that CNNM2 is involved in a wide spectrum of neuropsychiatric comorbidities with considerably varying phenotypes.
Keywords: magnesium transport, autosomal recessive, autism, neurodevelopment, CBS domain
Procedia PDF Downloads 153
541 Test Procedures for Assessing the Peel Strength and Cleavage Resistance of Adhesively Bonded Joints with Elastic Adhesives under Detrimental Service Conditions
Authors: Johannes Barlang
Abstract:
Adhesive bonding plays a pivotal role in various industrial applications, ranging from automotive manufacturing to aerospace engineering. The peel strength of adhesives, a critical parameter reflecting the ability of an adhesive to withstand external forces, is crucial for ensuring the integrity and durability of bonded joints. This study provides a synopsis of the methodologies, influencing factors, and significance of peel testing in the evaluation of adhesive performance. Peel testing involves the measurement of the force required to separate two bonded substrates under controlled conditions. This study systematically reviews the different testing techniques commonly applied in peel testing, including the widely used 180-degree peel test and the T-peel test. Emphasis is placed on the importance of selecting an appropriate testing method based on the specific characteristics of the adhesive and the application requirements. The factors influencing peel strength are multifaceted, encompassing adhesive properties, substrate characteristics, environmental conditions, and test parameters. Through an in-depth analysis, this study explores how factors such as adhesive formulation, surface preparation, temperature, and peel rate can significantly affect the peel strength of adhesively bonded joints. Understanding these factors is essential for optimizing adhesive selection and application processes in real-world scenarios. Furthermore, the study highlights the role of peel testing in quality control and assurance, aiding manufacturers in maintaining consistent adhesive performance and ensuring the reliability of bonded structures. The correlation between peel strength and long-term durability is discussed, shedding light on the predictive capabilities of peel testing in assessing the service life of adhesive bonds. In conclusion, this study underscores the significance of peel testing as a fundamental tool for characterizing adhesive performance. By delving into testing methodologies, influencing factors, and practical implications, this study contributes to the broader understanding of adhesive behavior and fosters advancements in adhesive technology across diverse industrial sectors.
Keywords: adhesively bonded joints, cleavage resistance, elastic adhesives, peel strength
Procedia PDF Downloads 96
540 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life
Authors: Desplanches Maxime
Abstract:
Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves introducing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-distributed stochastic neighbor embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. The database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of the SEI (solid electrolyte interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model, which serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining-life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression
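To illustrate the kind of pipeline described above (dimensionality reduction of model-generated aging indicators followed by a regression on remaining life), here is a small Python sketch using scikit-learn; the synthetic data, feature count and hyperparameters are placeholders and do not reproduce the authors' database or tuning.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical database generated from an aged electrochemical (Newman-type) model:
# each row holds non-destructive indicators (e.g., capacity, resistance, voltage features),
# and the target is the remaining useful life in cycles. Values are synthetic placeholders.
X = rng.normal(size=(2000, 12))
rul = 800 - 60 * X[:, 0] + 25 * X[:, 3] + rng.normal(0, 20, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, rul, test_size=0.2, random_state=0)

# Dimensionality reduction followed by a random forest regressor, as in the text.
model = make_pipeline(PCA(n_components=5),
                      RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rel_err = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100
print(f"mean relative error: {rel_err:.1f}%")
```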
Procedia PDF Downloads 70
539 Productivity and Household Welfare Impact of Technology Adoption: A Microeconometric Analysis
Authors: Tigist Mekonnen Melesse
Abstract:
Since rural households are basically entitled to food through their own production, improving productivity may enhance the welfare of the rural population through higher food availability at the household level and lower prices of agricultural products. Increasing agricultural productivity through the use of improved technology is one of the desired outcomes of sensible food security and agricultural policy. The objective of this study was to evaluate the potential impact of improved agricultural technology adoption on smallholders' crop productivity and welfare. The study was conducted in Ethiopia, covering 1500 rural households drawn from four regions and 15 rural villages, based on data collected by the Ethiopian Rural Household Survey. An endogenous treatment effect model is employed in order to account for the selection bias in the adoption decision that is expected from the self-selection of households into technology adoption. The treatment indicator, technology adoption, is a binary variable indicating whether the household used improved seeds and chemical fertilizer or not. The outcome variables were cereal crop productivity, measured as the real value of production, and household welfare, measured as real per capita consumption expenditure. The results of the analysis indicate a positive and significant effect of improved technology use on rural households' crop productivity and welfare in Ethiopia. Adoption of improved seeds and chemical fertilizer alone increases crop productivity by 7.38 and 6.32 percent per year, respectively. Adoption of these technologies is also found to improve household welfare by 1.17 and 0.25 percent per month, respectively. The combined effect of both technologies, when adopted jointly, is an increase in crop productivity of 5.82 percent and an improvement in welfare of 0.42 percent. Besides, the educational level of the household head, farm size, labor use, participation in extension programs, expenditure on inputs and the number of oxen positively affect crop productivity and household welfare, while a large household size negatively affects household welfare. In our estimation, the average treatment effect of technology adoption (the average treatment effect on the treated, ATET) is the same as the average treatment effect (ATE). This implies that the average predicted outcome for the treatment group is similar to the average predicted outcome for the whole population.
Keywords: endogenous treatment effect, technologies, productivity, welfare, Ethiopia
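The endogenous treatment effect model mentioned above corrects the outcome regression for self-selection into adoption. As a hedged illustration of the underlying idea only, the Python sketch below implements a simple two-step control-function (Heckman-type) estimator on synthetic data; the variables, instrument and coefficients are invented, and this simplified estimator is not the exact maximum-likelihood procedure used in the study.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1500

# Synthetic household data: x = observed covariates, z = instrument (e.g., extension access),
# d = technology adoption (0/1), y = log crop output. All values are illustrative only.
x = rng.normal(size=(n, 2))
z = rng.binomial(1, 0.5, n)
u = rng.normal(size=n)                            # unobserved factor driving selection
d = (0.8 * z + 0.5 * x[:, 0] + u > 0).astype(float)
y = 1.0 + 0.07 * d + 0.3 * x[:, 0] + 0.2 * x[:, 1] + 0.5 * u + rng.normal(0, 0.3, n)

# Step 1: probit for the adoption decision, then the inverse Mills ratio correction term.
W = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(d, W).fit(disp=0)
xb = W @ probit.params
imr = np.where(d == 1, norm.pdf(xb) / norm.cdf(xb),
               -norm.pdf(xb) / (1.0 - norm.cdf(xb)))

# Step 2: outcome regression including the treatment dummy and the correction term.
X_out = sm.add_constant(np.column_stack([d, x, imr]))
ols = sm.OLS(y, X_out).fit()
print(ols.params[1])   # adoption effect on log output, corrected for self-selection
```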
Procedia PDF Downloads 656
538 Developmental Difficulties Prevalence and Management Capacities among Children Including Genetic Disease in a North Coastal District of Andhra Pradesh, India: A Cross-sectional Study
Authors: Koteswara Rao Pagolu, Raghava Rao Tamanam
Abstract:
The present study aimed to find out the prevalence of developmental difficulties (DDs) in Visakhapatnam, one of the north coastal districts of Andhra Pradesh, India, over a span of five years. A cross-sectional investigation was carried out at the District Early Intervention Center (DEIC), Visakhapatnam, from 2016 to 2020. To identify the pattern and trend of different DDs, including seasonal variations, a retrospective analysis of the health center's inpatient database for the past 5 years was done. Male and female children aged 2 months to 18 years were included in the study with the prior permission of the concerned medical officer. The screening tool developed by the Ministry of Health and Family Welfare, India, was used for the study. Among the 26,423 cases admitted during the study period, 962 children had birth defects, 2,229 had deficiencies, 7,516 had diseases, and 15,716 had disabilities. Among the birth defects, congenital deafness occurred most often (22.66%), and neural tube defects were observed in a small number of cases (0.83%) during the period. Among the deficiencies, severe acute malnutrition occurred most often (66.80%), and a small number of children were affected by goiter (1.70%). Among the diseases, dental caries (67.97%) was most common, and these cases peaked during the years 2016 and 2019. Among the disabilities, children with vision impairment (20.55%) most often approached the center. Over the past 5 years, the admission rates of Down's syndrome and congenital deafness cases showed a rising trend up to 2019 and then declined. Hearing impairment, motor delay, and learning disorder showed a steep rise followed by a gradual decline, whereas severe anemia, vitamin D deficiency, otitis media, reactive airway disease, and attention deficit hyperactivity disorder showed a declining trend. However, the admission rates of congenital heart diseases, dental caries, and vision impairment showed a zigzag pattern over the past 5 years. The center had inadequate diagnostic facilities for genetic disease management; for advanced confirmation, cases were referred to a district government hospital or to private diagnostic laboratories in the city for genetic tests. Information regarding the overall burden and pattern of admissions in the health center was obtained by reviewing the DEIC records. Through this study, it is observed that the incidence of birth defects, as well as the genetic disease burden, is high in the Visakhapatnam district. Hence, there is a need to strengthen the management services for these diseases in this region.
Keywords: child health screening, developmental delays, district early intervention center, genetic disease management, infrastructural facility, Visakhapatnam district
Procedia PDF Downloads 215
537 Fatigue Analysis of Spread Mooring Line
Authors: Chanhoe Kang, Changhyun Lee, Seock-Hee Jun, Yeong-Tae Oh
Abstract:
An offshore floating structure maintains a fixed position under various environmental conditions by means of its mooring system. Environmental conditions, vessel motions and mooring loads are applied to the mooring lines as dynamic tension. Because the global responses of a mooring system in deep water consist of wave frequency and low frequency responses, they should be calculated from time-domain analysis due to their non-linear dynamic characteristics. To take into account all mooring loads, environmental conditions, added mass and damping terms at each time step, a lot of computation time and capacity is required. Thus, under the premise that reliable fatigue damage can be derived through a reasonable analysis method, it is necessary to reduce the number of analysis cases through sensitivity studies and appropriate assumptions. In this paper, fatigue effects are studied for a spread mooring system connected to an oil FPSO positioned in deep water offshore West Africa. The target FPSO, with a two-Mbbl storage capacity, has 16 spread mooring lines (4 bundles x 4 lines). Various sensitivity studies are performed for environmental loads, type of response, vessel offsets, mooring position, loading conditions and riser behavior. Each parameter applied in the sensitivity studies is investigated in terms of its effect on fatigue damage through fatigue analysis. Based on the sensitivity studies, the following results are presented. Wave loads are more dominant in terms of fatigue than the other environmental conditions. The wave frequency response causes higher fatigue damage than the low frequency response. A larger vessel offset increases the mean tension and thus increases the fatigue damage. The external line of each bundle shows the highest fatigue damage, governed by the vessel pitch motion due to swell wave conditions. Among the three loading conditions, the ballast condition has the highest fatigue damage due to higher tension. The riser damping arising from riser behavior tends to reduce the fatigue damage. The various analysis results obtained from these sensitivity studies can be used as a reference for a simplified fatigue analysis of spread mooring lines.
Keywords: mooring system, fatigue analysis, time domain, non-linear dynamic characteristics
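For context, line fatigue damage in studies of this kind is usually accumulated with Miner's rule from a cycle count of tension ranges and a T-N curve for the mooring component. The Python sketch below shows only that bookkeeping step; the tension histogram, curve constants and breaking load are placeholders, not values from this paper.

```python
import numpy as np

# Illustrative tension-range histogram for one mooring line from a time-domain run:
# tension ranges (kN) and the number of cycles per year in each bin (e.g., from rainflow counting).
tension_range_kn = np.array([200.0, 400.0, 600.0, 800.0])
cycles = np.array([2.0e5, 5.0e4, 8.0e3, 5.0e2])

# T-N style curve for the mooring component: N = K * (R / MBL)^(-m),
# where R is the tension range and MBL the minimum breaking load.
# K, m and MBL below are placeholders, not design values.
K, m, mbl_kn = 1000.0, 3.0, 10000.0

N_allow = K * (tension_range_kn / mbl_kn) ** (-m)   # cycles to failure per bin
damage = np.sum(cycles / N_allow)                   # Miner's rule annual damage
print(f"annual fatigue damage: {damage:.4f}, implied life: {1.0 / damage:.1f} years")
```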
Procedia PDF Downloads 334
536 Evolution of Deformation in the Southern Central Tunisian Atlas: Parameters and Modelling
Authors: Mohamed Sadok Bensalem, Soulef Amamria, Khaled Lazzez, Mohamed Ghanmi
Abstract:
The southern-central Tunisian Atlas presents a typical example of an external zone. It occupies a particular position in the North African chains: firstly, it is the eastern limit of the atlassic structures; secondly, it forms the boundary between the belt structures to the north and the stable Saharan platform to the south. The study of the evolution of deformation is based on several methods, such as classical or numerical methods. The principal parameters controlling the genesis of folds in the southern-central Tunisian Atlas are the reactivation of pre-existing faults during the later compressive phase, the evolution of the decollement level, and the relation between thin- and thick-skinned deformation. One of the main characteristics of the southern-central Tunisian Atlas is the variation of the belt structure directions, determined by: the NE-SW direction, named the atlassic direction in Tunisia; the NW-SE direction carried along the Gafsa fault (the eastern limit of the southern atlassic accident); and the E-W direction defined in the southern Tunisian Atlas. This variation of direction is the result of an important variation of deformation during the different tectonic phases. A classical modelling of the Jebel ElKebar anticline, based on the throws of the pre-existing faults and their reactivation during the compressive phases, shows the importance of extensional deformation, particularly during the Aptian-Albian period, compared with that of the later compression (Alpine phases). A numerical modelling based on the software Rampe E.M. 1.5.0, applied to the anticline of Jebel Orbata, confirms the interpretation of a "fault-related fold" with a decollement level within the Triassic successions. The other important parameter in the evolution of deformation is the vertical migration of the decollement level; indeed, the higher the decollement level lies in the recent series, the more the deformation is accentuated. The evolution of deformation is marked by the development of a duplex structure in Jebel At Taghli (eastern limit of Jebel Orbata). Consequently, the evolution of deformation is proportional to the depth of the decollement level: the most important deformation occurs in the higher successions and is thus associated with thin-skinned deformation, the decollement level permitting the passive transfer of deformation into the cover.
Keywords: evolution of deformation, pre-existing faults, decollement level, thin-skinned
Procedia PDF Downloads 126535 Study on Varying Solar Blocking Depths in the Exploration of Energy-Saving Renovation of the Energy-Saving Design of the External Shell of Existing Buildings: Using Townhouse Residences in Kaohsiung City as an Example
Authors: Kuang Sheng Liu, Yu Lin Shih*, Chun Ta Tzeng, Cheng Chen Chen
Abstract:
Buildings in the 21st century face issues such as extreme climate and low-carbon, energy-saving requirements. Many countries consider that a building, over its medium- and long-term life cycle, is an energy-consuming entity. Regarding the use of architectural resources, initiatives such as the United Nations-implemented "Global Green Policy" and "Sustainable Building and Construction Initiative" are all working towards "zero-energy building" and "zero-carbon building" policies. Because of this, countries support industry development using policies such as "mandatory design criteria", "green procurement policy" and "incentive grants and rebates programmes". The results of this study can provide a reference for sustainable building renovation design criteria. Focusing on townhouses in Kaohsiung City, this study uses different solar blocking depths to evaluate the design and energy-saving renovation of the outer shell of existing buildings, based on data collection and the selection of representative cases. Using building data from a building information model (BIM), simulation and efficiency evaluation are carried out and verified with simulation estimates, leading into the eco-efficiency model (EEM) for the life cycle cost efficiency (LCCE) evaluation. The buildings selected in this research are oriented north-south and are modelled with different solar blocking depths, and their indoor air-conditioning consumption rates are compared. The simulated EUI value at the current balcony depth of 1 metre acts as the 100% reference value. The solar-blocking balcony depth is then increased to 1.5, 2, 2.5 and 3 metres, giving a total of 5 depths for comparison of the air-conditioning improvement. The air-conditioning efficiency analysis shows that a depth of 1.5 m saves 3.08%, 2 m saves 6.74%, 2.5 m saves 9.80% and 3 m saves 12.72% of the air-conditioning EUI value. This shows that solar-blocking balconies have the potential to increase indoor air-conditioning efficiency.Keywords: building information model, eco-efficiency model, energy-saving in the external shell, solar blocking depth.
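As a quick illustration of how the reported savings translate into absolute consumption, the short Python sketch below applies the stated percentages to a baseline air-conditioning EUI. The baseline value of 40 kWh/m2/yr is a hypothetical placeholder, not a figure from the study.

```python
# Reported air-conditioning EUI savings per balcony depth (1 m depth = 100% reference)
savings_pct = {1.0: 0.0, 1.5: 3.08, 2.0: 6.74, 2.5: 9.80, 3.0: 12.72}

baseline_eui = 40.0  # hypothetical air-conditioning EUI at 1 m depth, kWh/m2/yr

for depth, s in savings_pct.items():
    eui = baseline_eui * (1.0 - s / 100.0)
    print(f"depth {depth:.1f} m: saving {s:5.2f}% -> EUI {eui:.2f} kWh/m2/yr")
```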
Procedia PDF Downloads 403534 Caged Compounds as Light-Dependent Initiators for Enzyme Catalysis Reactions
Authors: Emma Castiglioni, Nigel Scrutton, Derren Heyes, Alistair Fielding
Abstract:
By using light as a trigger, it is possible to study many biological processes, such as the activity of genes, proteins, and other molecules, with precise spatiotemporal control. Caged compounds, where biologically active molecules are generated from an inert precursor upon laser photolysis, offer the potential to initiate such biological reactions with high temporal resolution. Because light acts as the trigger for cleaving the protecting group, the ‘caging’ technique provides a number of advantages: it can be applied intracellularly, rapidly and in a quantitatively controlled manner. We are developing caging strategies to study the catalytic cycle of a number of enzyme systems, such as nitric oxide synthase and ethanolamine ammonia lyase. These include the use of caged substrates, caged electrons and the possibility of caging the enzyme itself. In addition, we are developing a novel freeze-quench instrument to study these reactions, which combines rapid mixing and flashing capabilities. Reaction intermediates will be trapped at low temperatures and analysed by electron paramagnetic resonance (EPR) spectroscopy to identify the involvement of any radical species during catalysis. EPR techniques typically require relatively long measurement times and, very often, low temperatures to fully characterise these short-lived species. Therefore, common rapid mixing techniques, such as stopped-flow or quench-flow, are not directly suitable. However, rapid freeze-quench (RFQ) followed by EPR analysis provides the ideal approach to kinetically trap and spectroscopically characterise these transient radical species. In a typical RFQ experiment, two reagent solutions are delivered to the mixer via two syringes driven by a pneumatic actuator or stepper motor. The newly mixed solution is then sprayed into a cryogenic liquid or onto a cold surface, and the frozen sample is collected and packed into an EPR tube for analysis. The earliest RFQ instruments consisted of a hydraulic ram as the drive unit, with direct spraying of the sample into a cryogenic liquid (nitrogen, isopentane or petroleum). Improvements to the RFQ technique have arisen from the design of new mixers that reduce both the volume and the mixing time. In addition, the cryogenic isopentane bath has been coupled to a filtering system or replaced by spraying the solution onto a surface frozen via thermal conduction with a cryogenic liquid. In our work, we are developing a novel RFQ instrument which combines freeze-quench technology with flashing capabilities to enable the study of both thermally activated and light-activated biological reactions. This instrument also uses a new rotating-plate design based on magnetic couplings, removing the need for mechanical motorised rotation, which can otherwise be problematic at cryogenic temperatures.Keywords: caged compounds, freeze-quench apparatus, photolysis, radicals
Procedia PDF Downloads 209533 The Volume–Volatility Relationship Conditional to Market Efficiency
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. An extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship behind this association remains an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain the relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH) and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results obtained over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t0 (measured by the local Hurst–Hölder exponent in a neighborhood of t0) equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamic exponent as a measure of market efficiency. This allows us to measure, at any time t, the market's departure from the martingale property, i.e. from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets: using data for the S&P 500 index from 1978 to 2017, we find on the one hand that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time; conversely, it disappears as soon as efficiency is taken into account. In particular, the association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent
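A rolling estimate of the pointwise regularity can be sketched in a few lines. The Python example below estimates a local Hurst-Hölder exponent from the scaling of mean absolute increments over a moving window; the window length, the two-lag estimator and the synthetic data are illustrative assumptions, not the estimator actually used in the paper.

```python
import numpy as np

def local_hurst(x, window=250):
    """Rolling estimate of the pointwise Hurst-Hölder exponent.

    For a (multi)fractional Brownian motion, E|X(t+k) - X(t)| scales as k^H,
    so H can be read off from the ratio of mean absolute increments at two lags.
    This is a minimal sketch under those assumptions.
    """
    x = np.asarray(x, dtype=float)
    h = np.full(x.size, np.nan)
    for t in range(window, x.size):
        seg = x[t - window:t]
        m1 = np.mean(np.abs(np.diff(seg)))        # lag-1 increments
        m2 = np.mean(np.abs(seg[2:] - seg[:-2]))  # lag-2 increments
        if m1 > 0 and m2 > 0:
            h[t] = np.log(m2 / m1) / np.log(2.0)
    return np.clip(h, 0.0, 1.0)

# Illustrative use on synthetic log-prices: H(t) near 0.5 suggests martingale-like
# (efficient) behaviour; persistent departures from 0.5 flag inefficiency.
rng = np.random.default_rng(0)
log_price = np.cumsum(0.01 * rng.standard_normal(2000))
print(np.nanmean(local_hurst(log_price)))
```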
Procedia PDF Downloads 79532 Reading Comprehension in Profound Deaf Readers
Authors: S. Raghibdoust, E. Kamari
Abstract:
Research shows that reduced functional hearing has a detrimental influence on the ability of an individual to establish proper phonological representations of words, since phonological representations are claimed to mediate the conceptual processing of written words. Word processing efficiency is therefore expected to decrease with a decrease in functional hearing; in other words, hearing individuals are predicted to be more capable of word processing than individuals with hearing loss, because their functional hearing works normally. Studies also demonstrate that the quality of functional hearing affects reading comprehension via its effect on word processing skills. In other words, better hearing facilitates the development of phonological knowledge and can promote enhanced strategies for the recognition of written words, which in turn positively affect the higher-order processes underlying reading comprehension. The aims of this study were to investigate and compare the effect of deafness on the participants' abilities to process written words at the lexical and sentence levels, using two online tests and one offline reading comprehension test. The performance of a group of 8 deaf male students (ages 8-12) was compared with that of a control group of normally hearing male students. All participants had normal IQ and visual status and came from an average socioeconomic background. None were diagnosed with a particular learning or motor disability. The language spoken in the homes of all participants was Persian. Two tests of word processing were developed and presented to the participants using the OpenSesame software, in order to measure the speed and accuracy of their performance at the perceptual and conceptual levels. In the third, offline test of reading comprehension, which comprised semantically plausible and semantically implausible subject relative clauses, the participants had to select the correct answer out of two choices. The data, analysed statistically with SPSS software, indicated that hearing and deaf participants had similar word processing performance, both in terms of speed and accuracy of their responses. The results also showed no significant difference between the performance of the deaf and hearing participants in comprehending semantically plausible sentences (p > 0.05). However, a significant difference between the two groups was observed in their comprehension of semantically implausible sentences (p < 0.05). In sum, the findings revealed that the seriously impoverished sentence reading ability characterizing the profoundly deaf subjects of the present research reflected their reliance on reading strategies based on insufficient or deviant structural knowledge, in particular when processing semantically implausible sentences, rather than a failure to efficiently process written words at the lexical level. This conclusion does not mean that deaf individuals never experience deficits at the word processing level that impede their understanding of written texts. However, as stated in previous research, it seems reasonable to assume that the more familiar deaf individuals become with written words, the better they can recognize them, despite having a profound phonological weakness.Keywords: deafness, reading comprehension, reading strategy, word processing, subject and object relative sentences
Procedia PDF Downloads 339531 A Program of Data Analysis on the Possible State of the Antibiotic Resistance in Bangladesh Environment in 2019
Authors: S. D. Kadir
Abstract:
Background: Antibiotics have always been at the centre of the revolution in modern microbiology. Pathogenic micro-organisms, resistant organisms, and the inappropriate use or overuse of various types of antibiotic agents have fuelled multidrug-resistant pathogens. The present review mainly focuses on the therapeutic state of antibiotic resistance and the possible roots of its development in Bangladesh in 2019. Methodology: The systematic review progressed through a series of analyses of manuscripts published on Google Scholar, PubMed and ResearchGate, together with relevant information collected from established healthcare and diagnostic centres and their subdivisions all over Bangladesh. Our assessment of the possible extent of antibiotic resistance was based on selected medical reports and on random assays of individual antibiotics in 2019. Results: 5 research articles and 50 medical report summaries were examined, and around 5 patients were interviewed during the estimation process. We prioritized research articles in which the analysis was performed with the appropriate use of the Kirby-Bauer method. The Kirby-Bauer technique is preferred because it provides greater efficiency, lower performance expenditure, and greater convenience and simplicity of application. In most of the reports reviewed, Clinical and Laboratory Standards Institute guidelines were strictly followed. Most of the reports indicate significant resistance to beta-lactam drugs, specifically the derivatives of penicillins and cephalosporins (rare use of first-generation cephalosporins, overuse of second- and third-generation cephalosporins, and misuse of fourth-generation cephalosporins), which are responsible for almost 67 percent of the bacterial resistance. Moreover, approximately 20 percent of the resistance was due to drug efflux from the bacterial cell in the case of tetracyclines, sulphonamides and their derivatives. Conclusion: Approximately 90 percent of the antibiotic resistance is due to the use of relative and true broad-spectrum antibiotics. This environment has been created by circumstances in which the excessive use of broad-spectrum antibiotics disrupts the native bacteria and drives a series of antimicrobial resistance events that disturb the surrounding environment, leading to a state of super-infection.Keywords: antibiotics, antibiotic resistance, Kirby Bauer method, microbiology
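For readers unfamiliar with how Kirby-Bauer results feed such resistance percentages, the Python sketch below classifies disc-diffusion zone diameters against breakpoints and aggregates the resistant fraction per drug. The breakpoint values and the example records are hypothetical illustrations, not CLSI values or data from this review.

```python
# Hypothetical breakpoints in mm: (resistant if zone <= first value, susceptible if >= second)
breakpoints = {"ampicillin": (13, 17), "ceftriaxone": (19, 23)}

def interpret(drug, zone_mm):
    """Classify a Kirby-Bauer zone diameter as R / I / S."""
    r_max, s_min = breakpoints[drug]
    if zone_mm <= r_max:
        return "R"
    if zone_mm >= s_min:
        return "S"
    return "I"

def resistance_rate(records):
    """Fraction of isolates classified resistant, per drug."""
    rates = {}
    for drug in breakpoints:
        zones = [z for d, z in records if d == drug]
        if zones:
            rates[drug] = sum(interpret(drug, z) == "R" for z in zones) / len(zones)
    return rates

# Example isolates as (drug, zone diameter in mm)
records = [("ampicillin", 10), ("ampicillin", 20), ("ceftriaxone", 15), ("ceftriaxone", 25)]
print(resistance_rate(records))
```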
Procedia PDF Downloads 121530 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors
Authors: Jakob Krause
Abstract:
The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by balancing two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data-generating process changes over time. Hence, in this paper, we give a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only dependence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and the modeling of limited understanding and learning behavior in economics.Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling
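The convergence-versus-representativeness tradeoff can be illustrated with a stylized mean-squared-error decomposition. In the Python sketch below, the sampling error shrinks like sigma^2/n while a linear drift in the data-generating law adds a bias term that grows with the window length; the drift model and the parameter values are assumptions made for illustration, not the formal representativeness measure of the paper.

```python
import numpy as np

def optimal_window(sigma=0.02, drift=1e-4, n_max=2000):
    """Pick the sample size n minimizing
         MSE(n) = sigma^2 / n          (estimator convergence error)
                + (drift * n / 2)^2    (non-representativeness error),
    a stylized version of the tradeoff between using more data and using
    only recent, representative data."""
    n = np.arange(1, n_max + 1)
    mse = sigma**2 / n + (drift * n / 2.0) ** 2
    best = np.argmin(mse)
    return int(n[best]), float(mse[best])

n_star, mse_star = optimal_window()
print(n_star, mse_star)   # roughly 43 observations for these illustrative parameters
```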
Procedia PDF Downloads 149529 Agro-Morphological Traits Based Genetic Diversity Analysis of ‘Ethiopian Dinich’ Plectranthus edulis (Vatke) Agnew Populations Collected from Diverse Agro-Ecologies in Ethiopia
Authors: Fekadu Gadissa, Kassahun Tesfaye, Kifle Dagne, Mulatu Geleta
Abstract:
‘Ethiopian dinich’, also called ‘Ethiopian potato’, is one of the economically important ‘orphan’ edible tuber crops indigenous to Ethiopia. We evaluated the morphological and agronomic trait performance of 174 samples from Ethiopia at multiple locations using 12 qualitative and 16 quantitative traits, recorded at the appropriate growth stages. We observed several morphotypes and phenotypic variation in the qualitative traits, along with a wide range of mean performance values for all quantitative traits. Analysis of variance for each quantitative trait showed highly significant (p<0.001) variation among the collections, with non-significant variation for the environment-trait interaction for all traits but flower length. Comparatively high phenotypic and genotypic coefficients of variation were observed for plant height, days to flower initiation, days to 50% flowering and tuber number per hill. Moreover, the variability and coefficients of variation due to the genotype-environment interaction were nearly zero for all traits except flower length. High genotypic coefficients of variation, coupled with high estimates of broad-sense heritability and high genetic advance as a percentage of the collection mean, were obtained for tuber weight per hill, number of primary branches per plant, tuber number per hill and number of plants per hill. Tuber yield per hectare showed strong positive phenotypic and genotypic correlations with those traits. Principal component analysis showed that the first six principal axes accounted for 76% of the total variation, with high factor loadings again from tuber number per hill, number of primary branches per plant and tuber weight. The collections were grouped into four clusters, with only a weak pattern based on region (zone) of origin. In general, there is high genetic variability available for ‘Ethiopian dinich’ improvement and conservation. DNA-based markers are recommended for further genetic diversity estimation for use in breeding and conservation.Keywords: agro-morphological traits, Ethiopian dinich, genetic diversity, variance components
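The heritability and genetic-advance figures referred to above are typically derived from the ANOVA variance components. A minimal Python sketch of that arithmetic is given below; the variance components, trial dimensions and grand mean are made-up numbers for illustration, not estimates from the study.

```python
import math

def broad_sense_heritability(var_g, var_gxe, var_e, n_env=3, n_rep=2):
    """Broad-sense heritability on an entry-mean basis from ANOVA variance
    components; the trial dimensions (environments, replications) are
    illustrative assumptions, not the actual layout of the trials."""
    var_p = var_g + var_gxe / n_env + var_e / (n_env * n_rep)
    return var_g / var_p

def gcv_and_gam(var_g, h2, grand_mean, k=2.06):
    """Genotypic coefficient of variation (%) and genetic advance as a
    percentage of the mean, assuming 5% selection intensity (k = 2.06)."""
    gcv = 100.0 * math.sqrt(var_g) / grand_mean
    ga = k * math.sqrt(var_g * h2)      # GA = k * sigma_p * H^2 with sigma_p^2 = var_g / H^2
    gam = 100.0 * ga / grand_mean
    return gcv, gam

# Hypothetical variance components for tuber number per hill
h2 = broad_sense_heritability(var_g=18.0, var_gxe=2.0, var_e=6.0)
print(h2, gcv_and_gam(var_g=18.0, h2=h2, grand_mean=25.0))
```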
Procedia PDF Downloads 190528 Latitudinal Impact on Spatial and Temporal Variability of 7Be Activity Concentrations in Surface Air along Europe
Authors: M. A. Hernández-Ceballos, M. Marín-Ferrer, G. Cinelli, L. De Felice, T. Tollefsen, E. Nweke, P. V. Tognoli, S. Vanzo, M. De Cort
Abstract:
This study analyses the latitudinal impact on the spatial and temporal distribution of the cosmogenic isotope 7Be in surface air across Europe. The long-term databases of 6 sampling sites (Ivalo, Helsinki, Berlin, Freiburg, Sevilla and La Laguna), which regularly provide data to the Radioactivity Environmental Monitoring (REM) network managed by the Joint Research Centre (JRC) in Ispra, were used. The stations were selected according to several factors: 1) heterogeneity in terms of latitude and altitude, and 2) long database coverage. The combination of these two criteria ensures a high degree of representativeness of the results. Regarding the latter, the temporal coverage varies between stations; the present study uses stations with more or less continuous records from 1984 to 2011. The mean 7Be activity concentrations ranged from 2.0 ± 0.9 mBq/m3 (Ivalo, north) to 4.8 ± 1.5 mBq/m3 (La Laguna, south). An increasing gradient with latitude, of 0.06 mBq/m3, was observed from north to south. However, there was no correlation with altitude, since all stations are sited within the atmospheric boundary layer. The analyses of the data indicated a dynamic range of 7Be activity with the solar cycle and its phase (maximum or minimum), with a different impact on the stations according to their location. The results indicated a significant seasonal behavior, with the maximum concentrations occurring in summer and the minimum in winter, although with differences in the values reached and in the month in which they are registered. Owing to the large heterogeneity in the temporal pattern with which the individual radionuclide analyses were performed at each station, a 7Be monthly index was calculated to normalize the measurements and allow direct comparison of the monthly evolution among stations. Different intensities and evolutions of the mean monthly index were observed. Knowledge of the spatial and temporal distribution of this natural radionuclide in the atmosphere is a key parameter for modeling studies of atmospheric processes, which are important phenomena to be taken into account in the case of a nuclear accident.Keywords: Beryllium-7, latitudinal impact in Europe, seasonal and monthly variability, solar cycle
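The monthly index mentioned above can be computed in a few lines once a station's record is available. The Python sketch below normalizes each calendar month's mean activity by the station's long-term mean; the exact definition used in the study is not spelled out in the abstract, so this normalization, and the toy data, are assumptions.

```python
import numpy as np

def monthly_index(months, activity):
    """Monthly 7Be index for one station: mean activity of each calendar month
    divided by the station's long-term mean, so that stations with different
    sampling histories can be compared on a common scale."""
    months = np.asarray(months)                # calendar month (1-12) of each sample
    activity = np.asarray(activity, float)     # 7Be activity concentration, mBq/m3
    overall = activity.mean()
    return {m: activity[months == m].mean() / overall
            for m in range(1, 13) if np.any(months == m)}

# Tiny illustrative record (values are made up, not REM data)
months   = [1, 2, 6, 7, 7, 12]
activity = [2.1, 2.4, 4.6, 5.0, 4.8, 2.0]
print(monthly_index(months, activity))
```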
Procedia PDF Downloads 339527 Artificial Neural Network Approach for Vessel Detection Using Visible Infrared Imaging Radiometer Suite Day/Night Band
Authors: Takashi Yamaguchi, Ichio Asanuma, Jong G. Park, Kenneth J. Mackin, John Mittleman
Abstract:
In this paper, vessel detection using an artificial neural network is proposed, in order to automatically construct a vessel detection model from the visible-infrared day/night band (DNB) imagery in the products of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite. The goal of our research is the establishment of a vessel detection method using DNB satellite imagery in order to monitor changes in vessel activity over a wide region. Temporal vessel monitoring is very important for detecting events and understanding the circumstances within the maritime environment. For vessel locating and detection, the Automatic Identification System (AIS) and remote sensing using synthetic aperture radar (SAR) imagery have been researched. However, each data source lacks some information, owing to unreliable operation or limitations on continuous observation. Therefore, the fusion of effective data and methods is important for future monitoring of the maritime environment. DNB is one of the effective data sources for detecting small vessels, such as fishing ships, that are difficult to observe with AIS. DNB is the satellite sensor data of VIIRS on Suomi-NPP. In contrast to SAR images, DNB images have moderate resolution and are influenced by cloud, but they can observe the same regions every day. The DNB sensor can observe the light produced by various artifacts, such as vehicles and buildings, at night and can detect small vessels from their fishing lights on open water. However, modeling vessel detection using DNB is very difficult, since complex atmospheric and lunar conditions must be considered owing to the strong influence on DNB of lunar reflection from clouds. Therefore, an artificial neural network was applied to learn the vessel detection model. As an additional feature for vessel detection, the brightness temperature at 3.7 μm (BT3.7) was used, because BT3.7 can serve as a parameter of atmospheric conditions.Keywords: artificial neural network, day/night band, remote sensing, Suomi National Polar-orbiting Partnership, vessel detection, Visible Infrared Imaging Radiometer Suite
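A minimal sketch of how such a classifier could be assembled is shown below: each pixel is described by its DNB radiance and BT3.7 value and fed to a small multilayer perceptron. The network architecture, the scikit-learn implementation and, above all, the synthetic training data are assumptions for illustration only; the paper's actual model, features and labelled VIIRS data will differ.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Features per pixel: DNB radiance and BT3.7; labels: 1 = vessel light, 0 = background.
# The data below is synthetic; real training pairs would come from VIIRS granules
# matched against known vessel positions.
rng = np.random.default_rng(1)
n = 500
dnb_bg    = rng.lognormal(mean=-1.0, sigma=0.3, size=n)   # dim background pixels
dnb_ship  = rng.lognormal(mean=1.0,  sigma=0.5, size=n)   # bright point sources
bt37_bg   = rng.normal(285.0, 3.0, size=n)                 # brightness temperature, K
bt37_ship = rng.normal(283.0, 4.0, size=n)

X = np.column_stack([np.concatenate([dnb_bg, dnb_ship]),
                     np.concatenate([bt37_bg, bt37_ship])])
y = np.concatenate([np.zeros(n), np.ones(n)])

scaler = StandardScaler().fit(X)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)

# Classify two new pixels given as (DNB radiance, BT3.7)
new_pixels = np.array([[0.3, 286.0], [3.5, 282.0]])
print(model.predict(scaler.transform(new_pixels)))
```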
Procedia PDF Downloads 236