Search results for: sensory processing sensitivity
774 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is the most recent promising technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment to ensure a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple and small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2×10⁻⁶ mm³ was developed. A prototype of the dose verification system was introduced based on PTIR1470/F (provided by Phosphor Technology®) material used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). Overall measurements were performed in IBA™ water tank phantoms following international Technical Reports Series recommendations (TRS 381) for radiotherapy and TG43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides an entirely linear response with dose in the 4 cGy to 800 cGy range, independently of the field size selected, from 5 × 5 cm² down to 0.5 × 0.5 cm². Excellent repeatability (0.2% variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD has superlinear behavior with dose rate (R² = 1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for different small-field irradiations. Field profiles as small as 0.5 × 0.5 cm² have been characterized, and the field cross profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02% while having a very low convolution effect, thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, considering energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can be promoted for validation in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).
Keywords: IR-scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
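As an illustration of how the reported linearity and repeatability figures can be derived from raw count-rate readings, the following sketch uses hypothetical photon-counter values (not the study's data) to compute a linear-fit R² and the percent variation from the average:

```python
import numpy as np

# Hypothetical count-rate readings (photons/s) at delivered doses (cGy);
# stand-ins for the detector output, not the study's actual data.
dose = np.array([4, 50, 100, 200, 400, 800], dtype=float)
counts = np.array([1.02e3, 1.27e4, 2.55e4, 5.08e4, 1.02e5, 2.04e5])

# Linear fit and R^2 to verify dose linearity
slope, intercept = np.polyfit(dose, counts, 1)
pred = slope * dose + intercept
r2 = 1 - np.sum((counts - pred) ** 2) / np.sum((counts - counts.mean()) ** 2)
print(f"R^2 = {r2:.4f}")

# Repeatability: percent variation of repeated readings from their average
repeats = np.array([2.041e5, 2.045e5, 2.039e5, 2.043e5])
variation = 100 * (repeats.max() - repeats.min()) / repeats.mean()
print(f"repeatability variation = {variation:.2f}%")
```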
Procedia PDF Downloads 182
773 Processing and Evaluation of Jute Fiber Reinforced Hybrid Composites
Authors: Mohammad W. Dewan, Jahangir Alam, Khurshida Sharmin
Abstract:
Synthetic fibers (carbon, glass, aramid, etc.) are generally utilized to make composite materials for better mechanical and thermal properties. However, they are expensive and non-biodegradable. In the perspective of Bangladesh, jute fibers are available, inexpensive, and possess good mechanical properties. The favorable properties of natural fibers (low cost, low density, eco-friendliness) have made them a promising reinforcement in hybrid composites without sacrificing mechanical properties. In this study, jute and E-glass fiber reinforced hybrid composite materials were fabricated by hand lay-up followed by a compression molding technique. A room-temperature-cured two-part epoxy resin was used as the matrix. Approximately 6-7 mm thick composite panels were fabricated using 17 layers of woven glass and jute fibers with different fiber layering sequences: only jute, only glass, glass and jute alternating (g/j/g/j---), and 4 glass - 9 jute - 4 glass (4g-9j-4g). The fabricated composite panels were analyzed through fiber volume calculation, tensile testing, bending testing, and water absorption testing. The hybridization of jute and glass fiber results in better tensile, bending, and water absorption properties than only jute fiber-reinforced composites, but inferior properties compared to only glass fiber reinforced composites. Among the different fiber layering sequences, the 4g-9j-4g layering sequence resulted in better tensile, bending, and water absorption properties. The effects of chemical treatment of the woven jute fiber and of chopped glass microfiber infusion were also investigated in this study. The chemically treated jute fiber and 2 wt.% chopped glass microfiber infused hybrid composite shows about 12% improvement in flexural strength compared to the untreated hybrid composite panel without microfiber infusion. However, fiber chemical treatment and micro-filler do not have a significant effect on tensile strength.
Keywords: compression molding, chemical treatment, hybrid composites, mechanical properties
Procedia PDF Downloads 158
772 The Impact of Sign Language on Generating and Maintaining a Mental Image
Authors: Yi-Shiuan Chiu
Abstract:
Deaf signers have been found to have better mental image performance than hearing nonsigners. The goal of this study was to investigate the ability to generate mental images, to maintain them, and to manipulate them in deaf signers of Taiwanese Sign Language (TSL). In the visual image task, participants first memorized digits formed in the cells of a 4 × 5 grid. After a cue, a Chinese digit character shown at the top of a blank grid, participants had to form an image of the corresponding digit. When shown a probe, a grid containing a red circle, participants had to decide as quickly as possible whether the probe would have been covered by the mental image of the digit. The ISI (interstimulus interval) between cue and probe was manipulated. In Experiment 1, 24 deaf signers and 24 hearing nonsigners were asked to perform image generation tasks (ISI: 200, 400 ms) and image maintenance tasks (ISI: 800, 2000 ms). The results showed that deaf signers had an enhanced ability to generate and maintain a mental image. To explore the process of mental imagery, in Experiment 2, 30 deaf signers and 30 hearing nonsigners were asked to perform visual search while maintaining a mental image. Between a digit image cue and a red circle probe, participants completed a visual search task judging whether the apex of a target triangle pointed to the right or left. When there was only one triangle in the search task, the results showed that both deaf signers and hearing non-signers had similar visual search performance, in which search targets at mental image locations were facilitated. However, deaf signers maintained better and faster mental image performance than nonsigners. In Experiment 3, we increased the number of triangles to 4 to raise the difficulty of the visual search task. The results showed that deaf participants performed more accurately in the visual search and image maintenance tasks. The results suggested that people may use eye movements as a mnemonic strategy to maintain a mental image, and that deaf signers have an enhanced ability to resist the interference of eye movements when there are fewer distractors. In sum, these findings suggest that deaf signers have enhanced mental image processing.
Keywords: deaf signers, image maintenance, mental image, visual search
Procedia PDF Downloads 154
771 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
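The thinning step described above can be illustrated with scikit-image's skeletonization; the toy water mask below is a stand-in for the real multi-temporal Sentinel 1 water product, and the cleanup parameter is an assumption:

```python
import numpy as np
from skimage.morphology import skeletonize, remove_small_objects

# water_mask: boolean raster of persistent water bodies, e.g. produced by
# combining unsupervised classifications of multi-temporal Sentinel 1 scenes.
# Here a toy mask stands in for the real product.
water_mask = np.zeros((200, 200), dtype=bool)
water_mask[95:105, :] = True                   # a simple horizontal "river"

water_mask = remove_small_objects(water_mask, min_size=50)  # remove speckle
centerlines = skeletonize(water_mask)          # thin polygons to 1-px centre lines

# The centre-line raster can then be vectorized (e.g. with GDAL/OGR) and built
# into a topologically structured geometric network for flow routing.
print(centerlines.sum(), "centre-line pixels")
```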
Procedia PDF Downloads 139
770 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection
Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten
Abstract:
Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A new method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process through immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips were employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection
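A minimal sketch of the described pipeline, assuming a grayscale float input and substituting Otsu's method for the paper's three-PDF histogram-intersection thresholding:

```python
import numpy as np
from skimage import filters, morphology

def segment_solder(img):
    """Sketch of the pipeline on a grayscale PCB image (float, range 0-1)."""
    # 1. Estimate slowly varying background with a large morphological opening
    background = morphology.opening(img, morphology.disk(25))
    corrected = img - background                 # flat-field correction
    # 2. Sharpen to suppress soft edges before thresholding
    corrected = filters.unsharp_mask(corrected, radius=2, amount=1.0)
    # 3. Threshold (Otsu stands in for the PDF-intersection thresholds)
    mask = corrected > filters.threshold_otsu(corrected)
    # 4. Remove residual edge-gradient specks with another opening
    return morphology.opening(mask, morphology.disk(2))

def dice(pred, truth):
    """Dice coefficient used for quantitative evaluation."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```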
Procedia PDF Downloads 336
769 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
Keywords: in-memory database, disk-based system, hybrid database, concurrency control
Procedia PDF Downloads 417
768 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images
Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu
Abstract:
The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective approach. However, producing optimal water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performances of three different level set models for extracting lake contours: the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model. Experimental testing indicated that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and that it produces desirable contour lines when there are “holes” in the water regions, such as islands in a lake. Therefore, the RSF model was applied to extract lake contours from Landsat satellite images. Four temporal Landsat satellite images, from the years 2000, 2005, 2010, and 2014, were used in our study. All of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. Firstly, the near infrared (NIR) band was selected for water extraction. Image registration was conducted on the NIR bands of the different temporal images for information updating, and linear stretching was applied in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours were extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as initial values, lake contours were updated for the current temporal image by means of the RSF model. Meanwhile, the changed and unchanged lakes were also detected. The results show that great changes have taken place in two lakes, i.e., Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.
Keywords: level set model, multi-temporal image, lake contour extraction, contour update
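Scikit-image does not implement the RSF model, but the closely related morphological Chan-Vese level set can illustrate the initialize-then-update strategy described above; the random array below is a placeholder for a stretched NIR band:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# nir: linearly stretched near-infrared band (2-D float array); here random
# data stands in for an actual Landsat scene.
nir = np.random.rand(256, 256)

# User-defined rectangle as the initial level set for the first epoch (2000)
init = np.zeros(nir.shape, dtype=np.int8)
init[100:160, 80:200] = 1

contour_2000 = morphological_chan_vese(nir, num_iter=100, init_level_set=init)

# For the next epoch, the previous result seeds the evolution, mirroring the
# paper's contour-update strategy across the temporal sequence.
contour_2005 = morphological_chan_vese(nir, num_iter=50,
                                       init_level_set=contour_2000)
```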
Procedia PDF Downloads 366
767 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging results. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on a one-dimensional assumption and designed for a desktop personal computer. Hence, they are usually incapable of imaging the three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. Presently, high-performance, integrative software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157
766 Revalidation and Harmonization of Existing IFCC Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precision Optimization and External Quality Assurance Programs
Authors: Junaid Mahmood Alam
Abstract:
Revalidating and harmonizing clinical chemistry analytical principles and optimizing methods through quality control programs and assessments is the preeminent means of attaining optimal outcomes within clinical laboratory services. The present study reports the revalidation of our existing IFCC-standardized analytical methods, particularly hepatic and thyroid function tests, by optimization of precision analyses and processing through external and internal quality assessments and regression determination. Parametric components of hepatic (bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I) and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate analytical techniques on automated chemistry and immunological analyzers, namely Hitachi 912, Cobas 6000 e601, Cobas c501, and Cobas e411, with UV kinetic and colorimetric dry chemistry principles and electro-chemiluminescence immunoassay (ECLi) techniques. The validation and revalidation process was completed by evaluating and assessing the precision-analyzed PreciControl data of the various instruments, plotted against each other with regression analysis (R²). Results showed that revalidation and optimization of the respective parameters, which were accredited through CAP, CLSI and NEQAPP assessments, depicted 99.0% to 99.8% optimization, in addition to the methodology and instruments used for analyses. The regression R² for BilT was 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R² values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed marked harmonization of the analytical methods and instrumentation, thus revalidating optimized precision standardization as per IFCC-recommended guidelines. It is concluded that the practice of revalidating and harmonizing existing or any new services should be followed by all clinical laboratories, especially those associated with tertiary care hospitals. This will ensure the delivery of standardized, proficiency-tested, optimized services for prompt and better patient care, guaranteeing maximum patient confidence.
Keywords: revalidation, standardized, IFCC, CAP, harmonized
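The regression step can be reproduced with a few lines of NumPy; the paired quality-control values below are illustrative stand-ins, not the study's measurements:

```python
import numpy as np

# Paired ALT results (U/L) from two analyzers for the same QC samples;
# illustrative values only.
hitachi = np.array([32.0, 55.1, 78.4, 120.3, 160.8, 210.5])
cobas = np.array([31.5, 55.9, 77.6, 121.1, 159.4, 211.9])

slope, intercept = np.polyfit(hitachi, cobas, 1)
pred = slope * hitachi + intercept
ss_res = np.sum((cobas - pred) ** 2)
ss_tot = np.sum((cobas - cobas.mean()) ** 2)
r2 = 1 - ss_res / ss_tot   # coefficient of determination between analyzers
print(f"y = {slope:.3f}x + {intercept:.2f},  R^2 = {r2:.3f}")
```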
Procedia PDF Downloads 269
765 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki
Abstract:
Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also become of great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from transmission measurements on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach to the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol
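A sketch of the underlying two-component apportionment arithmetic, assuming power-law absorption spectra and illustrative AAE values (the study derives its AAEs from the AOC and particle-number ratios instead):

```python
import numpy as np

def apportion(babs_uv, babs_ir, lam_uv=266.0, lam_ir=1064.0,
              aae_ff=1.0, aae_wb=2.0):
    """Split measured absorption into traffic (ff) and wood-burning (wb)
    components from two wavelengths, assuming b(lam) ~ lam**(-AAE).
    The AAE values here are illustrative assumptions."""
    r_ff = (lam_uv / lam_ir) ** (-aae_ff)   # ff scaling from IR to UV
    r_wb = (lam_uv / lam_ir) ** (-aae_wb)   # wb scaling from IR to UV
    # Solve: babs_ir = ff_ir + wb_ir ;  babs_uv = r_ff*ff_ir + r_wb*wb_ir
    A = np.array([[1.0, 1.0], [r_ff, r_wb]])
    ff_ir, wb_ir = np.linalg.solve(A, np.array([babs_ir, babs_uv]))
    return ff_ir, wb_ir

ff, wb = apportion(babs_uv=120.0, babs_ir=8.0)   # Mm^-1, made-up readings
print(f"traffic: {ff:.2f} Mm^-1, wood burning: {wb:.2f} Mm^-1")
```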
Procedia PDF Downloads 228
764 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives using theoretical descriptors computed from the chemical structure. The fundamental basis of the reported investigation is to relate molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin layer chromatography (TLC) on silica gel impregnated with paraffin oil, using ethanol-water mobile phases (φ = 0.5-0.8; v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (³χ) in a second-degree polynomial equation. The numerical values of the correlation coefficient (r = 0.915), Fisher's value (F = 28.34) and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. In order to test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The parameters of the internal cross-validation analysis (r²CV = 0.79, r²adj = 0.81, PRESS = 1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. A multivariate classification technique, hierarchical cluster analysis (HCA), was applied in order to group the molecules according to their molecular connectivity indices. HCA is a descriptive statistical method and is most frequently used for classification, an important area of data processing. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters, in which the compounds were grouped according to the number of atoms in the molecule. This is in agreement with the fact that these descriptors were calculated on the basis of the number of atoms in the molecules of the investigated s-triazine derivatives.
Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors
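A minimal sketch of the second-degree polynomial QSRR fit with leave-one-out cross-validation and PRESS; the (³χ, RM0) pairs are synthetic stand-ins for the published values:

```python
import numpy as np

# Hypothetical (3chi, RM0) pairs for 14 s-triazine derivatives; stand-ins only.
chi3 = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0,
                 3.2, 3.4, 3.6, 3.8, 4.0, 4.2])
rm0 = 0.4 * chi3**2 - 0.9 * chi3 + 1.1 + np.random.normal(0, 0.15, 14)

# Second-degree polynomial QSRR model
coeffs = np.polyfit(chi3, rm0, 2)
pred = np.polyval(coeffs, chi3)
r = np.corrcoef(rm0, pred)[0, 1]

# Leave-one-out cross-validation: refit without each compound in turn
press = 0.0
for i in range(len(chi3)):
    mask = np.arange(len(chi3)) != i
    c = np.polyfit(chi3[mask], rm0[mask], 2)
    press += (rm0[i] - np.polyval(c, chi3[i])) ** 2
r2_cv = 1 - press / np.sum((rm0 - rm0.mean()) ** 2)
print(f"r = {r:.3f}, PRESS = {press:.2f}, r2_CV = {r2_cv:.3f}")
```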
Procedia PDF Downloads 393
763 An Evaluation of the Influence of Corn Cob Ash on the Strength Parameters of Lateritic Soils
Authors: O. A. Apampa, Y. A. Jimoh
Abstract:
The paper reports an investigation of corn cob ash (CCA) as a chemical stabilizing agent for laterite soils. Corn cob feedstock was obtained from Maya, a rural community in the derived savannah agro-ecological zone of South-Western Nigeria, and burnt to ashes of pozzolanic quality. Reddish brown silty clayey sand characterized as AASHTO A-2-6(3) lateritic material was obtained from a borrow pit in Abeokuta and subjected to strength characterization tests according to BS 1377: 2000. The soil was subsequently mixed with CCA in varying percentages of 0-7.5% at 1.5% intervals. The influence of CCA stabilization was determined for the Atterberg limits, compaction characteristics, CBR and unconfined compressive strength. The tests were repeated on a laterite-cement mixture in order to establish a basis for comparison. The results show a similarity in the compaction characteristics of soil-cement and soil-CCA. With increasing addition of binder from 1.5% to 7.5%, the maximum dry density progressively declined while the OMC steadily increased. For the CBR, the maximum positive impact was observed at 1.5% CCA addition, at a value of 85% compared to the control value of 65% for cement stabilization, but the CBR declined steadily thereafter with increasing addition of CCA, while that of soil-cement continued to increase with increasing addition of cement beyond 1.5%, though at a relatively slow rate. Similar behavior was observed in the UCS values for the soil-CCA mix, increasing from a control value of 0.4 MN/m² to 1.0 MN/m² at 1.5% CCA and declining thereafter, while that of soil-cement continued to increase with increasing cement addition, but at a slower rate. This paper demonstrates that CCA is effective for the chemical stabilization of a typical Nigerian AASHTO A-2-6 lateritic soil at a maximum stabilizer content of 1.5%, and therefore recommends its use as a way of finding further applications for agricultural waste products and achieving environmental sustainability in line with the ideals of the Millennium Development Goals, given the economic and technical feasibility of processing the cobs from corn.
Keywords: corn cob ash, pozzolan, cement, laterite, stabilizing agent, cation exchange capacity
Procedia PDF Downloads 297
762 Enhancing Health Information Management with Smart Rings
Authors: Bhavishya Ramchandani
Abstract:
A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and has features that make the device simple to use. These gadgets, which resemble conventional rings and are usually made to fit on the finger, are outfitted with features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common. Individuals in India also experience metabolic issues. In the medical field, smart rings will help patients with problems relating to stomach illnesses and the inability to consume meals tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly. Based on these data, the ring generates insights and a workable plan that suit the body. As part of our core approach, we also conducted focus groups and individual interviews, discussing the difficulties participants have in maintaining the right diet and whether the smart ring would be beneficial to them. Everyone was very enthusiastic about and supportive of the concept of using smart rings in healthcare, believing that these rings may assist them in maintaining their health and keeping a well-balanced diet plan; this response came from the primary data. Working on the Emerging Technology Canvas Analysis of smart rings in healthcare has also led to a significant improvement in our understanding of the technology's application in the medical field. It is believed that there will be a growing demand for smart health care as people become more conscious of their health. The majority of individuals will utilize this ring within three to four years, when demand for it will have increased, and their daily lives will be significantly impacted by it.
Keywords: smart ring, healthcare, electronic wearable, emerging technology
Procedia PDF Downloads 64
761 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during the execution of a project comprising multiple sister ships, the repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning in a shipyard and incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. Extant literature provides support for the existence of such learning in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of the activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed. Thus, the material requirement schedule of every next ship differs from that of its previous ship. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with the project scheduling of long-duration projects for the manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probabilistic distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated, which is solved using an evolutionary algorithm. Its output provides ordering dates and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights into when and to what degree it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when and to what degree to practice distinctive procurement for individual ships.
Keywords: learning curve, materials management, shipbuilding, sister ships
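The activity-level learning described here is conventionally modeled with Wright's learning curve; the sketch below uses an assumed 90% learning rate, not the paper's estimated parameters:

```python
import math

def activity_hours(t_first, unit, learning_rate=0.9):
    """Wright's learning curve: time for the n-th repetition of an activity.
    T_n = T_1 * n**b with b = log2(learning_rate).
    The 90% rate is an illustrative assumption."""
    b = math.log2(learning_rate)
    return t_first * unit ** b

# Outfitting duration (hours) of the same block across five sister ships;
# each subsequent ship's schedule is compressed by the learning effect.
for ship in range(1, 6):
    print(f"ship {ship}: {activity_hours(1000.0, ship):.0f} h")
```

The compressed durations shift every subsequent ship's material requirement dates earlier, which is exactly the effect the SMIP model trades off against order batching.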
Procedia PDF Downloads 502
760 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction
Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter
Abstract:
Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows the use of real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (i.e., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG and have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of EEG artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time EEG artifact correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB with the Lab Streaming Layer interface, allowing buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) in high-density EEG while preserving brain neuronal activity information. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real time.
Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA
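A rough sketch of the variance-based EMG stage, assuming an (n_channels, n_samples) buffer; the robust MAD threshold and the mean-of-good-channels replacement are simplifications of the adaptive thresholding and channel interpolation described above:

```python
import numpy as np

def correct_emg(eeg, win=1000, k=4.0):
    """Flag windows whose channel variance exceeds a robust adaptive
    threshold, then replace the channel by the mean of the other channels
    (a crude stand-in for spatial neighbour interpolation)."""
    out = eeg.copy()
    for start in range(0, eeg.shape[1] - win + 1, win):
        seg = out[:, start:start + win]          # view into out
        var = seg.var(axis=1)
        med = np.median(var)
        mad = np.median(np.abs(var - med)) + 1e-12
        bad = var > med + k * mad                # adaptive variance threshold
        if bad.any() and not bad.all():
            seg[bad] = seg[~bad].mean(axis=0)    # interpolate bad channels
    return out

eeg = np.random.randn(64, 80_000)                # 80 s at 1 kHz, 64 channels
eeg[10, 20_000:21_000] += 50 * np.random.randn(1000)  # injected EMG burst
clean = correct_emg(eeg)
```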
Procedia PDF Downloads 179
759 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and >90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation was, however, somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
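The headline agreement statistics can be computed from paired ROI values as follows; the HU numbers are illustrative, not the study's measurements:

```python
import numpy as np

# Paired ROI attenuations (HU), TNC vs. VNC, for one tissue type;
# illustrative values only.
tnc = np.array([55.2, 48.9, 60.1, 52.7, 57.3, 50.4])
vnc = np.array([50.1, 47.2, 53.8, 49.9, 51.0, 46.8])

diff = tnc - vnc
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)
within_15 = 100 * np.mean(np.abs(diff) < 15)   # share of comparisons < 15 HU

print(f"mean difference = {mean_diff:.1f} +/- {sd_diff:.1f} HU")
print(f"{within_15:.0f}% of comparisons differ by < 15 HU")
```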
Procedia PDF Downloads 141
758 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods
Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana
Abstract:
Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth and changes in urban density. Traffic congestion compromises the efficiency of transport infrastructure and causes multiple traffic concerns, including but not limited to increased travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as for exasperated commuters. Effective, flexible, efficient and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While emerging data collection technologies continue to proliferate, their accuracy can be guaranteed through comparison of the observed data with manual handheld counters. This paper presents the design of a tablet-based manual traffic counting application and a framework for the development of a traffic database management system for Pakistan. The database management system comprises three components: a traffic counting Android application, the establishment of an online database, and its visualization using Google Maps. An Oracle relational database was chosen to develop the data structure, whereas structured query language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for the visualization of traffic conditions. The traffic counting device and the example of a database application for a real-world problem provided a creative outlet to visualize the uses and advantages of a database management system in real time. Traffic counts collected by means of a handheld tablet/mobile application can also be used for transportation planning and forecasting.
Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, android, data visualization, traffic management
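A minimal stand-in for the count table at the core of such a system, shown here with SQLite for portability (the actual system uses Oracle); all table and column names are hypothetical:

```python
import sqlite3

# One table of classified counts keyed by station, direction and 15-min bin.
conn = sqlite3.connect("traffic_counts.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS traffic_count (
        station_id   TEXT    NOT NULL,
        direction    TEXT    NOT NULL,
        interval_end TEXT    NOT NULL,   -- ISO timestamp of 15-min bin
        vehicle_type TEXT    NOT NULL,   -- car, bus, truck, motorcycle...
        count        INTEGER NOT NULL,
        PRIMARY KEY (station_id, direction, interval_end, vehicle_type)
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO traffic_count VALUES (?, ?, ?, ?, ?)",
    ("ST-01", "NB", "2017-05-04T08:15:00", "car", 182),
)
conn.commit()

# Hourly volumes per station, ready to be joined to GIS features for the map
for row in conn.execute("""
        SELECT station_id, substr(interval_end, 1, 13) AS hour, SUM(count)
        FROM traffic_count GROUP BY station_id, hour"""):
    print(row)
```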
Procedia PDF Downloads 193
757 A Perspective on Allelopathic Potential of Corylus avellana L.
Authors: Tugba G. Isin Ozkan, Yoshiharu Fujii
Abstract:
Weeds are one of the most important constraints that decrease crop yields. Increasing amounts and numbers of chemical herbicides are being utilized every day to control weeds. The environmental effects caused by chemical herbicides, and the limitations on their implementation, have led to nonchemical alternatives in the management of weeds. The application of allelopathy as a nonherbicidal innovation to control weed populations is increasingly needed in integrated weed management, not only because of public concern about herbicide use, but also because of increased agricultural costs and herbicide-resistant weeds. Allelopathy is defined as a common biological phenomenon, a direct or indirect interaction in which one plant or organism produces biochemicals that influence the physiological processes of another neighboring plant or organism. The biochemicals involved in allelopathy are called allelochemicals; they influence, beneficially or detrimentally, the growth, survival, development, and reproduction of other plants or organisms. All plant parts can contain allelochemicals, which are secondary plant metabolites. Allelochemicals are released into the environment and influence the germination and seedling growth of neighboring weeds; this is how allelopathy is applied for weed control. Crop cultivars differ significantly in their ability to inhibit the growth of certain weeds. Therefore, Corylus avellana L., a crop of high commercial value, and its byproducts were chosen for this research into their allelopathic potential. The edible nut of Corylus avellana L., commonly known as the hazelnut, is a commercially valuable crop with several byproducts: skin, hard shell, green leafy cover, and tree leaf. Research on the allelopathic potential of a plant using the sandwich bioassay method and investigation of growth inhibitory activity is the first step towards developing new and environmentally friendly alternatives for weed control. Thus, the objective of this research is to determine the allelopathic potential of C. avellana L. and its byproducts by using the sandwich method and to determine the effective concentrations (EC) of their extracts that induce half-maximal inhibition of radicle elongation in the test plant, EC50. The sandwich method is a reliable and fast bioassay, very useful for allelopathic screening under laboratory conditions. In the experiments, lettuce (Lactuca sativa L.) seeds will be the test plant, because of their high sensitivity to inhibition by allelochemicals and their reliable germination. In the sandwich method, the radicle lengths of lettuce seeds treated with dry material and of control lettuce seeds will be measured, and the inhibition of radicle elongation determined. Lettuce seeds will also be treated with methanol extracts of the dry hazelnut parts to calculate the EC₅₀ values, which are required to induce half-maximal inhibition of growth, in mg dry weight equivalent mL⁻¹. The inhibitory activity of the extracts against lettuce seedling elongation will be evaluated, as in the sandwich method, by comparing the radicle lengths of treated seeds with those of control seeds, and EC₅₀ values will be determined. The research samples are dry parts of the Turkish hazelnut, C. avellana L. The results would suggest that the allelopathic potential of C. avellana L. and its byproducts in plant-plant interaction might be utilized in further research, and could be beneficial for finding bioactive chemicals from natural products and developing natural herbicides.
Keywords: allelopathy, Corylus avellana L., EC50, Lactuca sativa L., sandwich method, Turkish hazelnut
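A sketch of how EC₅₀ could be estimated from the planned dose-response data, assuming a two-parameter log-logistic model and illustrative values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Radicle elongation (% of control) vs. extract concentration
# (mg dry weight eq. mL^-1); illustrative data, not measured values.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
growth = np.array([98.0, 90.0, 72.0, 45.0, 20.0, 8.0])

def logistic(c, ec50, hill):
    """Two-parameter log-logistic dose-response (top = 100, bottom = 0)."""
    return 100.0 / (1.0 + (c / ec50) ** hill)

(ec50, hill), _ = curve_fit(logistic, conc, growth, p0=(2.0, 1.0))
print(f"EC50 = {ec50:.2f} mg d.w. eq. mL^-1, Hill slope = {hill:.2f}")
```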
Procedia PDF Downloads 175
756 Geophysical Mapping of Anomalies Associated with Sediments of Gwandu Formation Around Argungu and Its Environs NW, Nigeria
Authors: Adamu Abubakar, Abdulganiyu Yunusa, Likkason Othniel Kamfani, Abdulrahman Idris Augie
Abstract:
This research study was carried out in connection with potential exploratory activities in the Gwandu Formation in the inland basin of northwest Nigeria. The present research aims to identify and characterize subsurface anomalies within the Gwandu Formation using electrical resistivity tomography (ERT) and magnetic surveys, providing valuable insights for mineral exploration. The study utilizes various data enhancement techniques, such as derivatives, upward continuation, and spectral analysis, alongside 2D modeling of electrical imaging profiles, to analyze subsurface structures and anomalies. Data were collected through ERT and magnetic surveys, with subsequent processing including derivatives, spectral analysis, and 2D modeling. The results indicate significant subsurface structures such as faults, folds, and sedimentary layers. The study area's geoelectric and magnetic sections illustrate the depth and distribution of the sedimentary formations, enhancing understanding of the geological framework, and show that the Eocene sediment formations of the Gwandu are overprinted by the study area's Tertiary strata. The NE-SW and E-W cross-profiles for the pseudo-geoelectric sections beneath the study area were generated using two-dimensional (2D) electrical resistivity imaging. 2D magnetic modeling, upward continuation, and derivative analysis were used to delineate the signatures of subsurface magnetic anomalies. The results also revealed that the sediment thickness ranges from ∼4.06 km to ∼23.31 km. The Moho interface, the boundary between the lower and upper mantle crusts, and the magnetic crust are all located at depths of around ∼10.23 km. The vertical distance between the local models of the foundation rocks to the north and south of the Sokoto Group was approximately ∼6 to ∼8 km and ∼4.5 km, respectively.
Keywords: high-resolution aeromagnetic data, electrical resistivity imaging, subsurface anomalies, 2D forward modeling
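The upward-continuation enhancement named above is a standard wavenumber-domain filter; a minimal FFT-based sketch on a toy grid, with all grid values and spacings as assumptions:

```python
import numpy as np

def upward_continue(grid, dx, height):
    """Upward-continue a gridded magnetic anomaly by `height` (same units
    as dx) using the standard wavenumber-domain operator exp(-|k| h)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    spec = np.fft.fft2(grid) * np.exp(-k * height)
    return np.real(np.fft.ifft2(spec))

# Toy anomaly grid (nT) on a 100 m spacing, continued upward by 1 km to
# attenuate shallow signals and emphasize deep basement sources.
grid = np.random.randn(128, 128)
deep = upward_continue(grid, dx=100.0, height=1000.0)
```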
Procedia PDF Downloads 13
755 Effect of Steam Explosion of Crop Residues on Chemical Compositions and Efficient Energy Values
Authors: Xin Wu, Yongfeng Zhao, Qingxiang Meng
Abstract:
In China, quite a low proportion of crop residues is used as feedstuff because of their poor palatability and low digestibility. Steam explosion is a physical and chemical feed processing technology with great potential to improve the sapidity and digestibility of crop residues. To investigate the effect of steam explosion on chemical compositions and effective energy values, crop residues (rice straw, wheat straw and maize stover) were processed by steam explosion (steam temperature 120-230°C, steam pressure 2-26 kg/cm², 40 min). Steam-exploded crop residues were regarded as treatment groups and untreated ones as control groups; nutritive compositions were analyzed and effective energy values were calculated by the prediction models of INRA (1988, 2010) for both groups. The results indicated that the interaction between treatment and variety has a significant effect on the chemical compositions of crop residues. Steam explosion of crop residues decreased neutral detergent fiber (NDF) significantly (P < 0.01); compared with the untreated material, the NDF content of rice straw, wheat straw, and maize stover was lowered by 21.46%, 32.11%, and 28.34%, respectively. The acid detergent lignin (ADL) content of crop residues increased significantly after steam explosion (P < 0.05). The contents of crude protein (CP), ether extract (EE) and ash increased significantly after steam explosion (P < 0.05). Moreover, the predicted effective energy values of each steam-exploded residue were higher than those of the untreated ones. The digestible energy (DE), metabolizable energy (ME), net energy for maintenance (NEm) and net energy for gain (NEg) of steam-exploded rice straw were 3.06, 2.48, 1.48 and 0.29 MJ/kg, respectively, increases of 46.21%, 46.25%, 49.56% and 110.92% compared with the untreated straw (P < 0.05). Correspondingly, the energy values of steam-exploded wheat straw were 2.18, 1.76, 1.03 and 0.15 MJ/kg, which were 261.78%, 261.29%, 274.59% and 1014.69% greater than those of untreated wheat straw (P < 0.05). The predicted energy values of steam-exploded maize stover were 5.28, 4.30, 2.67 and 0.82 MJ/kg, increases of 109.58%, 107.71%, 122.57% and 332.64% compared with the raw material (P < 0.05). In conclusion, steam explosion treatment significantly decreased the NDF content and increased the ADL, CP, EE and ash contents and the effective energy values of crop residues. The effect of steam explosion was much more obvious for wheat straw than for the other two kinds of residues under the same conditions.
Keywords: chemical compositions, crop residues, efficient energy values, steam explosion
Procedia PDF Downloads 250
754 Effects of Sintering Temperature on Microstructure and Mechanical Properties of Nanostructured Ni-17Cr Alloy
Authors: B. J. Babalola, M. B. Shongwe
Abstract:
The spark plasma sintering technique is a novel processing method that produces limited grain growth and a high density in a variety of materials: alloys, superalloys, and carbides, just to mention a few. However, the initial particle size and the spark plasma sintering parameters are factors which influence the grain growth and mechanical properties of sintered materials. Ni-Cr alloys are regarded as the most promising alloys for aerospace turbine blades, owing to the fact that they meet the basic requirements of desirable mechanical strength at high temperatures and good resistance to oxidation. The conventional method of producing this alloy often results in excessive grain growth and porosity levels that are detrimental to its mechanical properties. The effect of sintering temperature on the microstructure and mechanical properties of the nanostructured Ni-17Cr alloy was evaluated. Nickel and chromium powders were milled independently by high-energy ball milling for 30 hours at a milling speed of 400 rev/min and a ball-to-powder ratio (BPR) of 10:1. The milled powders were mixed in a composition of 83 wt.% nickel and 17 wt.% chromium. This was sintered at temperatures of 800°C, 900°C, 1000°C, 1100°C and 1200°C. The structural characteristics, such as porosity, grain size, fracture surface and hardness, were analyzed by scanning electron microscopy, X-ray diffraction, Archimedes densitometry, and micro-hardness testing. The corresponding results indicated an increase in the densification and hardness of the alloy as the temperature increased. The residual porosity of the alloy decreased with increasing sintering temperature, while, in contrast, the grain size was enhanced. The study of the mechanical properties, including hardness and densification, shows that optimum properties were obtained at a sintering temperature of 1100°C. The advantages of the high sinterability of the Ni-17Cr alloy using milled powders, together with the microstructural details, are discussed.
Keywords: densification, grain growth, milling, nanostructured materials, sintering temperature
Procedia PDF Downloads 402
753 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products
Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto
Abstract:
An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique in order to accomplish non-invasive measurements inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage of products over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing a properly calibrated measurement on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed to enable calibration of the instrument in the field, including on containers which are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation test measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The other experiment shows the dissolution transient of a non-saturated liquid medium in the presence of a carbon-dioxide-rich headspace atmosphere.
Keywords: TDLAS, carbon dioxide, cups, headspace, measurement
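A toy simulation of the WMS principle, second-harmonic (2f) lock-in detection of a wavelength-modulated laser scanned across an absorption line; all parameters are illustrative, not the instrument's:

```python
import numpy as np

# The laser is wavelength-modulated at f across a Lorentzian CO2 line; the
# transmitted intensity is demodulated at 2f with a digital lock-in.
fs, f_mod = 200_000, 10_000            # sample rate, modulation freq (Hz)
t = np.arange(0, 0.01, 1 / fs)         # 10 ms record
nu0, dnu, depth = 0.0, 0.8, 1.5        # line centre, HWHM, mod. depth (a.u.)

nu = nu0 + depth * np.cos(2 * np.pi * f_mod * t)   # instantaneous detuning
absorb = 0.05 / (1 + (nu / dnu) ** 2)              # Lorentzian absorbance
intensity = np.exp(-absorb)                        # Beer-Lambert, I0 = 1

# Lock-in at 2f: mix with the second harmonic and low-pass (here: a mean)
ref_2f = np.cos(2 * np.pi * 2 * f_mod * t)
s2f = 2 * np.mean(intensity * ref_2f)
print(f"2f signal = {s2f:.5f}  (scales with CO2 absorbance)")
```

The 2f scheme rejects the Cerenkov-free, slowly varying baseline of the direct transmission and is what makes the weak headspace absorption measurable through the container wall.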
Procedia PDF Downloads 324
752 Parametric Evaluation for the Optimization of Gastric Emptying Protocols Used in Health Care Institutions
Authors: Yakubu Adamu
Abstract:
The aim of this research was to assess the factors contributing to the need for optimisation of the gastric emptying protocols used in nuclear medicine and molecular imaging procedures. The objective is to suggest whether optimisation is possible and to provide supporting evidence for the current imaging protocols of the gastric emptying examination used in nuclear medicine. The research involved selected patient studies with 30 dynamic series, processed using ImageJ to obtain the calculated half-time and the retention fractions for the 60 x 1-minute, 5-minute, and 10-minute protocols, as well as other sampling intervals. The study IDs for gastric emptying clearance half-time were classified into normal, abnormal fast, and abnormal slow categories. In the normal category, which represents 50% of the total gastric emptying image IDs processed, the clearance half-time was within the range of 49.5 to 86.6 minutes of the mean counts. Under the abnormal fast category, representing 30% of the total gastric emptying image IDs processed, the clearance half-time fell between 21 and 43.3 minutes of the mean counts, and the abnormal slow category, representing 20%, had clearance half-times within the range of 138.6 to 138.6 minutes of the mean counts. The results indicated that the retention fraction values calculated from the 1-, 5-, and 10-minute sampling curves, like the values measured from the sampling curves of the study IDs, showed a normal retention fraction of <60% that decreased exponentially with increasing time, as evidenced by low retention fraction ratios of <10% after 4 hours. The study IDs thus did not change categories, suggesting that these calculated values could feasibly be used instead of having to acquire actual images. Findings from the study suggest that the current gastric emptying protocol can be optimized by acquiring fewer images. The study recommended that gastric emptying studies be performed with imaging at a minimum of 0, 1, 2, and 4 hours after meal ingestion.
Keywords: gastric emptying, retention fraction, clearance half-time, optimisation, protocol
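For illustration, the clearance half-time and retention fractions discussed above can be derived from time-activity counts under a mono-exponential emptying model; the following Python sketch uses invented counts (the real values come from ROI measurements in ImageJ on the dynamic series) and is only an assumed reading of the method.

import numpy as np

# Hypothetical gastric counts at imaging times in minutes (placeholders).
t = np.array([0.0, 30.0, 60.0, 120.0, 240.0])
counts = np.array([100000.0, 74000.0, 55000.0, 30000.0, 9000.0])

retention = counts / counts[0]    # retention fraction at each time point

# Mono-exponential model R(t) = exp(-ln(2) * t / T_half): fit ln R versus t.
slope = np.polyfit(t, np.log(retention), 1)[0]
t_half = -np.log(2) / slope
print(f"clearance half-time ~ {t_half:.1f} min")   # 'normal' range ~49.5-86.6 min

# Modelled retention at 4 h; values below 10% match the behaviour reported above.
print(f"retention at 240 min ~ {np.exp(slope * 240):.1%}")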
Procedia PDF Downloads 3
751 Plant Regeneration via Somatic Embryogenesis and Agrobacterium-Mediated Transformation in Alfalfa (Medicago sativa L.)
Authors: Sarwan Dhir, Suma Basak, Dipika Parajulee
Abstract:
Alfalfa is renowned for its nutritional and biopharmaceutical value as a perennial forage legume. However, a rapid plant regeneration protocol based on somatic embryogenesis and an efficient transformation frequency are crucial prerequisites for gene editing in alfalfa. This study was undertaken to establish and improve the protocol for somatic embryogenesis and subsequent plant regeneration. Experiments were conducted to determine the natural sensitivity of the tissue to various antibiotics, such as cefotaxime, carbenicillin, gentamycin, hygromycin, and kanamycin. Using 3-week-old leaf tissue, somatic embryogenesis was initiated on Gamborg’s B5 basal (B5H) medium supplemented with 3% maltose, 0.9 µM kinetin, and 4.5 µM 2,4-D. Embryogenic callus (EC) obtained from the B5H medium exhibited a high rate of somatic embryo formation (97.9%) after 3 weeks when the cultures were kept in the dark. Somatic embryos at different developmental and cotyledonary stages were then transferred to Murashige and Skoog (MS) basal medium under light, resulting in a 94% plantlet regeneration rate. Our results indicate that leaf segments can tolerate up to 450 mg/L of cefotaxime and 400 mg/L of carbenicillin in the culture medium, whereas the survival thresholds were 12.5 mg/L for hygromycin, 250 mg/L for kanamycin, 50 mg/L for gentamycin, and 300 mg/L for Timentin. An experiment to improve the protocol for achieving efficient transient gene expression in alfalfa through genetic transformation with the Agrobacterium tumefaciens pCAMBIA1304 vector was also conducted. The vector contains two reporter genes, β-glucuronidase (GUS) and green fluorescent protein (GFP), along with a selectable hygromycin B phosphotransferase gene (HPT), all driven by the CaMV 35S promoter. Various transformation parameters were optimized for transient gene expression using 3-week-old in vitro-grown plantlets: explant type, leaf age, preculture days, segment size, wounding type, bacterial concentration, infection period, co-cultivation period, and different concentrations of acetosyringone, silver nitrate, and calcium chloride. Transient gene expression was confirmed via histochemical GUS staining and GFP visualization under fluorescence microscopy. The data were analyzed based on semi-quantitative observation of the percentage and number of blue GUS spots on different days after agro-infection. The highest GUS positivity (76.2%) was observed in 3-week-old leaf segments wounded using a No. 11 scalpel blade, after 3 days of post-incubation at a bacterial concentration of 0.6, with 2 days of preculture, 30 min of bacterial-leaf segment co-cultivation, and the addition of 150 µM acetosyringone, 4 mM calcium chloride, and 75 µM silver nitrate. Our results suggest that various factors influence T-DNA delivery in the Agrobacterium-mediated transformation of alfalfa. Stable gene expression in the putative transgenic tissue was confirmed by PCR amplification of both marker genes, indicating that gene expression in the explants was not solely due to Agrobacterium but also arose from transformed cells. The improved protocol could be used for generating transgenic alfalfa plants using genome editing techniques such as CRISPR/Cas9.
Keywords: Medicago sativa L. (alfalfa), Agrobacterium tumefaciens, β-glucuronidase, green fluorescent protein, transient gene expression
Procedia PDF Downloads 11
750 Buddhism and Education for Children: Cultivating Wisdom and Compassion
Authors: Harry Einhorn
Abstract:
This paper aims to explore the integration of Buddhism into educational settings with the goal of fostering the holistic development of children. By incorporating Buddhist principles and practices, educators can create a nurturing environment that cultivates wisdom, compassion, and ethical values in children. The teachings of Buddhism provide valuable insights into mindfulness, compassion, and critical thinking, which can be adapted and applied to educational curricula to enhance children's intellectual, emotional, and moral growth. One of the fundamental aspects of Buddhist philosophy that is particularly relevant to education is the concept of mindfulness. By introducing mindfulness practices, such as meditation and breathing exercises, children can learn to cultivate present-moment awareness, develop emotional resilience, and enhance their ability to concentrate and focus. These skills are essential for effective learning and can contribute to reducing stress and promoting overall well-being in children. Mindfulness practices can also teach children how to manage their emotions and thoughts, promoting self-regulation and creating a positive classroom environment. In addition to mindfulness, Buddhism emphasizes the cultivation of compassion and empathy toward all living beings. Integrating teachings on kindness, empathy, and ethical behavior into the educational framework can help children develop a deep sense of interconnectedness and social responsibility. By engaging children in activities that promote empathy and encourage acts of kindness, such as community service projects and cooperative learning, educators can foster the development of compassionate individuals who are actively engaged in creating a more harmonious and compassionate society. Moreover, Buddhist teachings encourage critical thinking and inquiry, which are crucial skills for intellectual development. By introducing children to fundamental Buddhist concepts such as impermanence, interdependence, and the nature of suffering, educators can engage them in philosophical reflections and broaden their perspectives on life. These teachings promote open-mindedness, curiosity, and a deeper understanding of the interconnectedness of all things. Through the exploration of these concepts, children can develop critical thinking skills and gain insights into the complexities of the world, enabling them to navigate challenges with wisdom and discernment. While integrating Buddhism into education requires sensitivity, cultural awareness, and respect for diverse beliefs and backgrounds, it holds great potential for nurturing the holistic development of children. By incorporating mindfulness practices, fostering compassion and empathy, and promoting critical thinking, Buddhism can contribute to the creation of a more compassionate, inclusive, and harmonious educational environment. This integration can shape well-rounded individuals who are equipped with the necessary skills and qualities to navigate the complexities of the modern world with wisdom, compassion, and resilience. In conclusion, the integration of Buddhism into education offers a valuable framework for cultivating wisdom, compassion, and ethical values in children. By incorporating mindfulness, compassion, and critical thinking into educational practices, educators can create a supportive environment that promotes children's holistic development. 
By nurturing these qualities, Buddhism can help shape individuals who are not only academically proficient but also morally and ethically responsible, contributing to a more compassionate and harmonious society.
Keywords: Buddhism, education, children, mindfulness
Procedia PDF Downloads 63
749 Primary School Students’ Modeling Processes: Crime Problem
Authors: Neslihan Sahin Celik, Ali Eraslan
Abstract:
As a result of the PISA (Programme for International Student Assessment) survey, which tests how well students can apply the knowledge and skills they have learned at school to real-life challenges, the new and redesigned mathematics education programs in many countries emphasize the necessity for students to face complex and multifaceted problem situations and to gain experience with them, allowing students to develop new skills and the mathematical thinking that prepares them for life after school. At this point, mathematical models and modeling approaches can be utilized in the analysis of complex problems representing real-life situations in which students can actively participate. In particular, model-eliciting activities, which present situations that allow students to create solutions to problems and which involve mathematical modeling, should be used from the primary school years onward, allowing children to face such complex, real-life situations from early childhood. A qualitative study was conducted in a university foundation primary school in the center of a large provincial city in the 2013-2014 academic year. The participants were fourth-grade primary school students. After a four-week preliminary study with a fourth-grade classroom, three students were selected for the focus group using the criterion sampling technique. The focus group of three students was videotaped as they worked on the Crime Problem. The group's conversation was transcribed, examined together with the students' written work, and then analyzed through the lens of Blum and Ferri's modeling cycle. The results showed that primary fourth-grade students can successfully work on a model-eliciting problem, although they encounter some difficulties in the modeling process. In particular, they developed new ideas based on different assumptions, identified patterns among variables, and established a variety of models. On the other hand, they had trouble staying focused on the problem and occasionally had breaks in the process.
Keywords: primary school, modeling, mathematical modeling, crime problem
Procedia PDF Downloads 405
748 Evaluation of Microwave-Assisted Pretreatment for Spent Coffee Grounds
Authors: Shady S. Hassan, Brijesh K. Tiwari, Gwilym A. Williams, Amit K. Jaiswal
Abstract:
Waste materials from a wide range of agro-industrial processes may be used as substrates for microbial growth and, subsequently, the production of a range of high-value products and bioenergy. In addition, utilization of these agro-residues in bioprocesses has the dual advantage of providing alternative substrates and solving their disposal problems. Spent coffee grounds (SCG) are a by-product (45%) of coffee processing. SCG is a lignocellulosic material composed mainly of cellulose, hemicelluloses, and lignin. Thus, a pretreatment process is required to facilitate efficient enzymatic hydrolysis of such carbohydrates. In this context, microwave pretreatment of lignocellulosic biomass without the addition of harsh chemicals represents a green technology. Moreover, microwave treatment has a high heating efficiency and is easy to implement. Thus, microwave pretreatment of SCG without the addition of harsh chemicals was investigated as a green technology to enhance enzymatic hydrolysis. In the present work, microwave pretreatment experiments were conducted on SCG at varying power levels (100, 250, 440, 600, and 1000 W) for 60 s. As the microwave power increases to a certain level (which varies with the biomass), the reducing-sugar yield increases; beyond this level, the reducing-sugar yield from the biomass begins to decrease with further increases in microwave power. Microwave pretreatment of SCG for 60 s followed by enzymatic hydrolysis resulted in total reducing sugars of 91.6 ± 7.0 mg/g of biomass (at a microwave power of 100 W). Fourier-transform infrared spectroscopy (FTIR) was employed to investigate changes in the functional groups of the biomass after pretreatment, while high-performance liquid chromatography (HPLC) was employed for the determination of glucose. Microwave pretreatment of lignocellulose was found to be an effective and energy-efficient technology for improving saccharification and glucose yield. Energy performance will be evaluated for the microwave pretreatment, and the enzyme hydrolysate will be used as a media component substitute for the production of ethanol and other high-value products.
Keywords: lignocellulose, microwave, pretreatment, spent coffee grounds
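Because the abstract weighs sugar yield against microwave power, the trade-off can be summarized with simple energy bookkeeping; in the Python sketch below, only the 100 W / 60 s / 91.6 mg/g point is taken from the text, while the sample mass and the other yields are assumed placeholders.

def specific_energy_kj_per_g(power_w, time_s, mass_g):
    """Microwave energy delivered per gram of biomass, ignoring losses."""
    return power_w * time_s / 1000.0 / mass_g

mass_g, time_s = 5.0, 60.0            # assumed sample mass, fixed 60 s exposure

# (power in W, total reducing sugars in mg/g); only the 100 W value is
# reported above, the rest are hypothetical to illustrate the optimum.
for power_w, sugar_mg_per_g in [(100, 91.6), (440, 78.0), (1000, 60.0)]:
    e = specific_energy_kj_per_g(power_w, time_s, mass_g)
    print(f"{power_w:5.0f} W: {e:5.1f} kJ/g -> {sugar_mg_per_g:5.1f} mg/g "
          f"({sugar_mg_per_g / e:6.1f} mg sugar per kJ)")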
Procedia PDF Downloads 419
747 Virulence Factors and Drug Resistance of Enterococci Species Isolated from the Intensive Care Units of Assiut University Hospitals, Egypt
Authors: Nahla Elsherbiny, Ahmed Ahmed, Hamada Mohammed, Mohamed Ali
Abstract:
Background: Enterococci may be considered opportunistic agents, particularly in immunocompromised patients, and are among the top three pathogens causing healthcare-associated infections (HAIs). Resistance to several commonly used antimicrobial agents is a remarkable characteristic of most species, which may also carry various genes contributing to virulence. Objectives: to determine the prevalence of enterococcal species causing healthcare-associated infections in different intensive care units (ICUs), as well as intestinal carriage and environmental contamination; to study the antimicrobial susceptibility pattern of the isolates, with special reference to vancomycin resistance; and to detect gelatinase, cytolysin, and biofilm formation among the isolates both phenotypically and genotypically. Patients and Methods: This study was carried out in the infection control laboratory at Assiut University Hospitals over a period of one year. Clinical samples were collected from 285 patients with various HAIs acquired after admission to different ICUs. Rectal swabs were taken from 14 cases for detection of enterococcal carriage. In addition, 1377 environmental samples were collected from the surroundings of the patients. Identification was done by conventional bacteriological methods and confirmed by analytical profile index (API). Antimicrobial sensitivity testing was performed by the Kirby-Bauer disc diffusion method, and detection of vancomycin resistance was done by the agar screen method. Cytolysin and gelatinase production were detected phenotypically, and biofilm formation was assessed by the tube method, the Congo red method, and the microtiter plate method. Polymerase chain reaction (PCR) was performed for the detection of selected virulence genes (gelE, cylA, vanA, vanB, and esp). Results: Enterococci caused 10.5% of the HAIs. Respiratory tract infection was the predominant type (86.7%). The commonest species were E. gallinarum (36.7%), E. casseliflavus (30%), E. faecalis (30%), and E. durans (3.4%). Vancomycin resistance was detected in a total of 40% (12/30) of those isolates. The risk factors associated with acquiring vancomycin-resistant enterococci (VRE) were immune suppression (P = 0.031) and artificial feeding (P = 0.008). For the rectal swabs, enterococcal species were detected in 71.4% of samples, with a predominance of E. casseliflavus (50%); most of these isolates were vancomycin resistant (70%). Of the 1377 environmental samples, 577 (42%) were contaminated with different microorganisms. Enterococci were detected in 1.7% (10/577) of the contaminated samples, 50% of which were vancomycin resistant. All isolates were resistant to penicillin, ampicillin, oxacillin, ciprofloxacin, amikacin, erythromycin, clindamycin, and trimethoprim-sulfamethoxazole; for the remaining antibiotics, variable percentages of resistance were recorded. Cytolysin and gelatinase were detected phenotypically in 16% and 48% of the isolates, respectively. The microtiter plate method showed the highest rate of biofilm detection among all isolated species (100%). The studied virulence genes gelE, esp, vanA, and vanB were detected in 62%, 12%, 2%, and 12% of isolates, respectively, while the cylA gene was not detected in any isolate. Conclusions: A significant percentage of enterococci was isolated from patients and environments in the ICUs, and many virulence factors were detected phenotypically and genotypically among the isolates.
The high percentage of resistance, coupled with the risk of cross-transmission to other patients, makes enterococcal infections a significant infection control issue in hospitals.
Keywords: antimicrobial resistance, enterococci, ICUs, virulence factors
Procedia PDF Downloads 285
746 Waste Utilization by Combustion in the Composition of Gel Fuels
Authors: Dmitrii Glushkov, Aleksandr G. Nigay, Olga S. Yashutina
Abstract:
In recent years, owing to the intensive development of the Arctic and Antarctic areas, an urgent task is to develop a technology for the effective utilization of solid and liquid combustible wastes in environments with low temperatures. Firstly, such a technology will help to prevent the dumping of waste into the ocean and reduce the risk of environmental damage to the Far North areas. Secondly, it will make it possible to prepare fuel compositions from the waste at the places where it is produced; such fuels can be used as energy resources, which will reduce utilization costs by avoiding transport of the waste to the mainland. In the present study, we suggest a solution to the problem of waste utilization through the preparation of gel fuels based on solid and liquid combustible components with the addition of a thickener. Such fuels are characterized by ease of preparation, storage, transportation, and use (as energy resources). The main regularities and characteristics of the physical and chemical processes were established while varying the parameters of the gel fuels and heating sources over wide ranges. The obtained results support the prospects of practical application of gel fuels for combustible waste utilization; the corresponding technology would offer positive environmental, operational, and economic effects. The composition of the gel fuels can vary over a wide range: preparing fuels from one type of combustible liquid, or a mixture of several liquids, with the addition of finely dispersed components makes it possible to obtain compositions with predictable rheological, energy, or environmental characteristics. Besides, gel fuels have a lower level of fire hazard compared to common solid and liquid fuels, which makes them convenient for storage and transportation. Under such conditions, it is not necessary to transport combustible wastes from the territory of the Arctic and the Antarctic to the mainland for processing, which is currently quite an expensive procedure. The research was funded by the Russian Science Foundation (project No. 18-13-00031).
Keywords: combustible liquid waste, gel fuel, ignition and combustion, utilization
Procedia PDF Downloads 119
745 Processing and Characterization of Aluminum Matrix Composite Reinforced with Amorphous Zr₃₇.₅Cu₁₈.₆₇Al₄₃.₉₈ Phase
Authors: P. Abachi, S. Karami, K. Purazrang
Abstract:
Amorphous reinforcements (metallic glasses) can be considered promising options for reinforcing lightweight aluminum and its alloys. By using the proper type of reinforcement, one can overcome drawbacks such as interfacial de-cohesion and the undesirable reactions that can arise at the interface between ceramic particles and the metallic matrix. In this work, the Zr-based amorphous phase was produced via mechanical milling of elemental powders. Based on the semi-empirical Miedema model and diagrams of the formation enthalpies and/or Gibbs free energies of the Zr-Cu amorphous phase in comparison with the crystalline phase, the glass-formability range was predicted. The composite was produced from the powder mixture of aluminum and metallic glass by spark plasma sintering (SPS) at a temperature slightly above the glass transition temperature (Tg) of the metallic glass particles. The selected temperature and the rapid sintering route were suitable for consolidating the aluminum matrix without crystallization of the amorphous phase. To characterize amorphous phase formation, X-ray diffraction (XRD) phase analyses were performed on the powder mixture after specified milling intervals. The microstructure of the composite was studied by optical and scanning electron microscopy (SEM). Uniaxial compression tests were carried out on composite specimens 4 mm long with a cross-section of 2 × 2 mm². The micrographs indicated an appropriate reinforcement distribution in the metallic matrix. The comparison of the compressive stress-strain curves of the consolidated composite and the non-reinforced Al matrix alloy showed that the enhancements in yield strength and mechanical strength are combined with an appreciable plastic strain at fracture. It can be concluded that metallic glasses (amorphous phases) are an alternative reinforcement material for lightweight metal matrix composites, capable of producing high strength and adequate ductility, albeit at the expense of a minor density increase.
Keywords: aluminum matrix composite, amorphous phase, mechanical alloying, spark plasma sintering
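The Miedema-based prediction mentioned above is usually formulated as an enthalpy comparison; the LaTeX note below sketches the generic criterion (after Weeber and Bakker) as an assumed reading, since the abstract does not reproduce its exact equations or parameter values.

\documentclass{article}
\begin{document}
% Assumed, generic Miedema-based amorphization criterion (not quoted from
% the abstract): the amorphous phase is favored over the solid solution
% wherever its formation enthalpy is lower.
\[
\Delta H^{\mathrm{amor}} = \Delta H^{\mathrm{chem}} + \alpha \sum_i x_i\,T_{m,i},
\qquad
\Delta H^{\mathrm{ss}} = \Delta H^{\mathrm{chem}} + \Delta H^{\mathrm{elastic}}
 + \Delta H^{\mathrm{structural}},
\]
where $x_i$ are atomic fractions, $T_{m,i}$ melting temperatures, and
$\alpha \approx 3.5\ \mathrm{J\,mol^{-1}\,K^{-1}}$; glass formation is
predicted for compositions with
$\Delta H^{\mathrm{amor}} < \Delta H^{\mathrm{ss}}$.
\end{document}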
Procedia PDF Downloads 364