Search results for: fault detection and classification
1200 Detection of Arcobacter and Helicobacter pylori Contamination in Organic Vegetables by Cultural and Polymerase Chain Reaction (PCR) Methods
Authors: Miguel García-Ferrús, Ana González, María A. Ferrús
Abstract:
The most demanded organic foods worldwide are those that are consumed fresh, such as fruits and vegetables. However, there is a knowledge gap about some aspects of organic food microbiological quality and safety. Organic fruits and vegetables are more exposed to pathogenic microorganisms due to surface contact with natural fertilizers such as animal manure, wastes, and vermicompost used during farming. It has been suggested that some emergent pathogens, such as Helicobacter pylori or Arcobacter spp., could reach humans through the consumption of raw or minimally processed vegetables. Therefore, the objective of this work was to study the contamination of organic fresh green leafy vegetables by Arcobacter spp. and Helicobacter pylori. For this purpose, a total of 24 vegetable samples, 13 lettuce and 11 spinach, were acquired from 10 different ecological supermarkets and greengrocers and analyzed by culture and PCR. Arcobacter spp. was detected in 5 samples (20%) by PCR, 4 spinach and 1 lettuce. One spinach sample was also positive by culture. For H. pylori, the H. pylori VacA gene-specific band was detected in 12 vegetable samples (50%), 10 lettuces and 2 spinach. Isolation in the selective medium did not yield any positive result, possibly because of low contamination levels together with the presence of the organism in its viable but non-culturable form. Results showed significant levels of H. pylori and Arcobacter contamination in organic vegetables that are generally consumed raw, which seems to confirm that these foods can act as transmission vehicles to humans.
Keywords: Arcobacter sp., Helicobacter pylori, Organic Vegetables, Polymerase Chain Reaction (PCR)
Procedia PDF Downloads 164

1199 Analysis of Buddhist Rock Carvings in Diamer Basha Dam Reservoir Area, Gilgit-Baltistan, Pakistan
Authors: Abdul Ghani Khan
Abstract:
This paper focuses on the Buddhist rock carvings in the Diamer-Basha reservoir area, Gilgit-Baltistan, which is perhaps the largest rock art province in the world. The study region has thousands of rock carvings, particularly stupa carvings, engraved by artists, devotees, pilgrims, and merchants who left their marks on the landscape or sought to propagate Buddhism. The Pak-German Archaeological Mission prepared, documented, and published extensive catalogues of these carvings. However, to date, very little systematic or statistically driven analysis has been undertaken for an in-depth understanding of the Buddhist rock carving tradition of the study region. This paper examines stupa carvings and their constituent parts from five selected sites, namely Oshibat, Shing Nala, Gichi Nala, Dadam Das, and Chilas Bridge. The statistical analyses and classification of the stupa carvings and their chronological contexts were carried out with the help of software tools such as STATA, FileMaker Pro, and MapSource. The study found that the tradition of stupa carving on the rock surfaces at the five selected sites continued for around 900 years, from the 1st century BCE to the 8th century CE. There is variation within the chronological settings of each of the selected sites, possibly shaped by their use within particular landscapes, both political (for example, changes in political administration or warfare) and geographical (for example, shifting of routes). The long persistence of the stupa carving tradition at these specific locations also indicates their central position on trade and communication routes, and these were possibly also linked with religious ideologies of their particular times.
The analyses of the different architectural elements of stupa carvings in the study area show that this tradition had structural similarities and differences in temporal and spatial contexts.
Keywords: rock carvings, stupa, stupa carvings, Buddhism, Pak-German archaeological mission
Procedia PDF Downloads 224

1198 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia
Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman
Abstract:
Progress in pavement design has produced a method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG). Saudi Arabia's road and highway network is evolving as traffic volumes increase. Therefore, the MEPDG is currently being implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires the calibration of distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for calibration of the MEPDG in Central Saudi Arabia. Thus, the first step is collecting flexible pavement design data for the local conditions of the Riyadh region. Since the collected data must be converted into model inputs, the main goal of this paper is the analysis of the collected data. The data analysis covers: truck classification, traffic growth factor, annual average daily truck traffic (AADTT), monthly adjustment factors (MAFi), vehicle class distribution (VCD), truck hourly distribution factors, axle load distribution factors (ALDF), number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Input parameters are described in detail, providing an approach for successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on these findings.
Keywords: mechanistic-empirical pavement design guide (MEPDG), traffic characteristics, materials properties, climate, Riyadh
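Two of the traffic inputs listed above, AADTT and the monthly adjustment factors, follow directly from monthly truck counts. A minimal sketch (the counts, single vehicle class, and single direction are invented for illustration, not the Riyadh data):

```python
# Sketch: deriving two MEPDG traffic inputs from hypothetical monthly
# average daily truck counts (one class, one direction). Numbers are
# illustrative, not the paper's data.

monthly_adt_trucks = [900, 950, 1000, 1100, 1200, 1250,
                      1300, 1250, 1150, 1050, 980, 920]

# Annual Average Daily Truck Traffic: mean of the monthly averages
aadtt = sum(monthly_adt_trucks) / len(monthly_adt_trucks)

# Monthly Adjustment Factor: monthly average normalized so the factors sum to 12
maf = [12 * m / sum(monthly_adt_trucks) for m in monthly_adt_trucks]

print(round(aadtt, 1))
print(round(sum(maf), 6))  # MAFs must sum to 12 by definition
```

A real workflow would repeat this per truck class and direction before entering the values into the MEPDG software.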
Procedia PDF Downloads 226

1197 Explanatory Variables for Crash Injury Risk Analysis
Authors: Guilhermina Torrao
Abstract:
An extensive number of studies have been conducted to determine the factors which influence crash injury risk (CIR); however, uncertainties inherent to selected variables have been neglected. A review of existing literature is required to not only obtain an overview of the variables and measures but also ascertain the implications when comparing studies without a systematic view of variable taxonomy. Therefore, the aim of this literature review is to examine and report on peer-reviewed studies in the field of crash analysis and to understand the implications of broad variations in variable selection in CIR analysis. The objective of this study is to demonstrate the variance in variable selection and classification when modeling injury risk involving occupants of light vehicles by presenting an analytical review of the literature. Based on data collected from 64 journal publications reported over the past 21 years, the analytical review discusses the variables selected by each study across an organized list of predictors for CIR analysis and provides a better understanding of the contribution of accident and vehicle factors to injuries acquired by occupants of light vehicles. A cross-comparison analysis demonstrates that almost half the studies (48%) did not consider vehicle design specifications (e.g., vehicle weight), whereas, for those that did, the vehicle age/model year was the most selected explanatory variable used by 41% of the literature studies. For those studies that included speed risk factor in their analyses, the majority (64%) used the legal speed limit data as a ‘proxy’ of vehicle speed at the moment of a crash, imposing limitations for CIR analysis and modeling. Despite the proven efficiency of airbags in minimizing injury impact following a crash, only 22% of studies included airbag deployment data. 
A major contribution of this study is to highlight the uncertainty linked to explanatory variable selection and to identify opportunities for improvement in future studies in the field of road injuries.
Keywords: crash, explanatory, injury, risk, variables, vehicle
Procedia PDF Downloads 135

1196 A Structuring and Classification Method for Assigning Application Areas to Suitable Digital Factory Models
Authors: R. Hellmuth
Abstract:
The method of factory planning has changed considerably, especially with regard to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (volatility, uncertainty, complexity, and ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and is becoming an indispensable tool. Furthermore, digital building models are increasingly being used in factories to support facility management and manufacturing processes. The main research question of this paper is therefore: What kind of digital factory model is suitable for the different areas of application during the operation of a factory? First, different types of digital factory models are investigated, and their properties and usability for various use cases are analysed. The investigation covers point cloud models, building information models, and photogrammetry models, as well as versions of these enriched with sensor data. It is investigated which digital models allow a simple integration of sensor data and where the differences lie. Subsequently, possible application areas of digital factory models are determined by means of a survey, and the respective digital factory models are assigned to the application areas. Finally, an application case from maintenance is selected and implemented with the help of the appropriate digital factory model. It is shown how a completely digitalized maintenance process can be supported by a digital factory model through the provision of information. Among other purposes, the digital factory model is used for indoor navigation, information provision, and display of sensor data.
In summary, the paper shows a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is shown and implemented. Thus, the systematic selection of digital factory models with the corresponding application cases is evaluated.
Keywords: building information modeling, digital factory model, factory planning, maintenance
Procedia PDF Downloads 110

1195 Green Synthesis of Magnetic, Silica Nanocomposite and Its Adsorptive Performance against Organochlorine Pesticides
Authors: Waleed A. El-Said, Dina M. Fouad, Mohamed H. Aly, Mohamed A. El-Gahami
Abstract:
Green synthesis of nanomaterials has received increasing attention as an eco-friendly technology in materials science. Here, we used two types of green tea leaf extracts (total extract and tannin extract) as reducing agents in a rapid, simple, one-step synthesis of a mesoporous silica nanoparticle (MSNP)/iron oxide (Fe3O4) nanocomposite based on deposition of Fe3O4 onto MSNPs. The MSNPs/Fe3O4 nanocomposite was characterized by X-ray diffraction, Fourier transform infrared spectroscopy, scanning electron microscopy, energy dispersive X-ray analysis, vibrating sample magnetometry, N2 adsorption, and high-resolution transmission electron microscopy. The average mesoporous silica particle diameter was found to be around 30 nm, with a high surface area (818 m2/g). The MSNPs/Fe3O4 nanocomposite was used to remove the pesticide lindane (an environmental hazard) from aqueous solutions. Fourier transform infrared, UV-Vis, high-performance liquid chromatography, and gas chromatography techniques confirmed the high ability of the nanocomposite to sense and capture lindane molecules with high sorption capacity (more than 89% removal), which could underpin a new eco-friendly strategy for detecting and removing pesticides and a promising material for water treatment applications.
Keywords: green synthesis, mesoporous silica, magnetic iron oxide NPs, adsorption, lindane
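The reported sorption figure ("more than 89%") is a removal efficiency computed from initial and equilibrium concentrations; the same measurements also give the sorption capacity per gram of adsorbent. A minimal sketch with invented concentrations and dose (not the study's measurements):

```python
# Sketch of the adsorption performance metrics: percent removal and
# sorption capacity q_e. All input values are illustrative assumptions.

c0 = 10.0        # initial lindane concentration (mg/L)
ce = 1.05        # equilibrium concentration after treatment (mg/L)
volume = 0.05    # solution volume (L)
mass = 0.01      # adsorbent mass (g)

removal_pct = 100 * (c0 - ce) / c0   # percent of lindane removed
qe = (c0 - ce) * volume / mass       # mg adsorbed per g of adsorbent

print(round(removal_pct, 1), round(qe, 2))
```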
Procedia PDF Downloads 436

1194 Vibro-Tactile Equalizer for Musical Energy-Valence Categorization
Authors: Dhanya Nair, Nicholas Mirchandani
Abstract:
Musical haptic systems can enhance a listener's musical experience while providing an alternative platform for the hearing impaired to experience music. Current music tactile technologies focus on representing tactile metronomes to synchronize performers or encoding musical notes into distinguishable (albeit distracting) tactile patterns. There is growing interest in the development of musical haptic systems to augment the auditory experience, although the haptic-music relationship is still not well understood. This paper presents a tactile music interface that provides vibrations to multiple fingertips in synchrony with auditory music. Like an audio equalizer, different frequency bands are filtered out, and the power in each frequency band is computed and converted to a corresponding vibrational strength. These vibrations are felt on different fingertips, each corresponding to a different frequency band. Songs from different parts of the energy-valence spectrum were used to test the effectiveness of the system and to understand the relationship between music and tactile sensations. Three participants were trained on one song categorized as sad (low energy and valence scores) and one song categorized as happy (high energy and valence scores). They were trained both with and without auditory feedback (listening to the song while experiencing the tactile music on their fingertips, and then experiencing the vibrations alone without the music). The participants were then tested on three songs from both categories, without any auditory feedback, and were asked to classify the tactile vibrations they felt into either category. The participants were blinded to the songs being tested and were not given any feedback on the accuracy of their classification. These participants were able to classify the music with 100% accuracy.
Although the songs tested were at two opposite ends of the spectrum (sad/happy), the preliminary results show the potential of utilizing a vibrotactile equalizer, like the one presented, for augmenting the musical experience while furthering the current understanding of the music-tactile relationship.
Keywords: haptic-music relationship, tactile equalizer, tactile music, vibrations and mood
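The equalizer principle described above, splitting the audio into frequency bands and mapping each band's power to a vibration strength, can be sketched as follows. The band edges, sample rate, and synthetic two-tone signal are illustrative assumptions, not the authors' design:

```python
import numpy as np

# Sketch: compute per-band power of a signal and normalize it to a
# vibration strength in [0, 1], one band per fingertip.

fs = 8000                        # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# synthetic "music": a low tone plus a quieter high tone
signal = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 1760 * t)

bands = [(20, 500), (500, 2000), (2000, 4000)]  # assumed band edges (Hz)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

powers = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
strengths = [p / max(powers) for p in powers]  # normalize to the loudest band

print([round(s, 3) for s in strengths])
```

In a live system the same computation would run on short sliding windows so the vibration strengths track the music over time.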
Procedia PDF Downloads 181

1193 Investigating the Potential of Spectral Bands in the Detection of Heavy Metals in Soil
Authors: Golayeh Yousefi, Mehdi Homaee, Ali Akbar Norouzi
Abstract:
Ongoing monitoring of soil contamination by heavy metals is critical for ecosystem stability, environmental protection, and food security. Conventional methods of determining these soil contaminants are time-consuming and costly. Spectroscopy in the visible and near-infrared (VNIR) to shortwave infrared (SWIR) region is a rapid, non-destructive, non-invasive, and cost-effective method for assessing soil heavy metal concentrations by studying the spectral properties of soil constituents. The aim of this study is to derive spectral bands and important ranges that are sensitive to heavy metals and can be used to estimate the concentration of these soil contaminants. In other words, changes in the spectral properties of spectrally active soil constituents can lead to accurate identification and estimation of the concentration of these compounds in soil. For this purpose, 325 soil samples were collected, and their spectral reflectance curves were evaluated over the range of 350-2500 nm. After spectral preprocessing, a partial least-squares regression (PLSR) model was fitted to the spectral data to predict the concentrations of Cu and Ni. Based on the results, the Cu-sensitive spectral ranges were 480, 580-610, 1370, 1425, 1850, 1920, 2145, and 2200 nm, and the Ni-sensitive ranges were 543, 655, 761, 1003, 1271, 1415, 1903, and 2199 nm. Finally, the results indicate that the spectral data contain substantial information that can be applied to identify soil properties, such as heavy metal concentrations, in greater detail.
Keywords: heavy metals, spectroscopy, spectral bands, PLS regression
Procedia PDF Downloads 84

1192 Fluorescent Analysis of Gold Nanoclusters-Wool Keratin Addition to Copper Ions
Authors: Yao Xing, Hong Ling Liu, Wei Dong Yu
Abstract:
As the global population increases, a safe water supply becomes ever more important, yet a water-monitoring method that is rapid, low-cost, green, and robust remains lacking. In this paper, copper ions are added to a gold nanocluster-wool keratin system and measured by a fluorescent method in order to probe copper ions in aqueous solution. The fluorescence results show that the gold nanocluster-wool keratin system is highly stable across pH values, while it is sensitive to temperature and time. Based on Gaussian fitting, the slope of the gold nanocluster-wool keratin pH response is higher under acidic conditions than under alkaline conditions. Upon the addition of copper ions, the system shows a fluorescence turn-off response, shifting from red to blue under UV light, with a dramatic decrease in the fluorescence intensity located at 690 nm. Moreover, the limit of detection for copper ions in this system is around 1 µmol/L, which meets detection standards. The fitted slope of the Stern-Volmer plot at low copper ion concentrations is larger than at high concentrations, which indicates that gold nanoclusters aggregate progressively from small to large numbers as the copper ion concentration increases. This approach is expected to provide a novel method and materials for copper ion testing with low cost, high efficiency, and easy operability.
Keywords: gold nanoclusters, copper ions, wool keratin, fluorescence
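The Stern-Volmer analysis mentioned above fits F0/F = 1 + Ksv[Q] against quencher concentration, with the slope giving the quenching constant Ksv. A minimal sketch with simulated intensities (the value Ksv = 0.5 L/µmol is invented for illustration, not the measured constant):

```python
import numpy as np

# Sketch of a Stern-Volmer fit over a low-concentration range.
# The intensities are generated, not measured data.

q = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # [Cu2+] in umol/L
f0 = 1000.0                                # intensity without quencher
f = f0 / (1 + 0.5 * q)                     # simulated quenched intensities

y = f0 / f                                 # Stern-Volmer ordinate
ksv, intercept = np.polyfit(q, y, 1)       # linear fit: slope = Ksv

print(round(ksv, 3), round(intercept, 3))  # intercept should be ~1
```

A change of slope between low- and high-concentration fits, as the abstract reports, is what signals the onset of cluster aggregation.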
Procedia PDF Downloads 252

1191 Microstructural and Optical Characterization of Heterostructures of ZnS/CdS and CdS/ZnS Synthesized by Chemical Bath Deposition Method
Authors: Temesgen Geremew
Abstract:
ZnS/glass and CdS/glass single layers and ZnS/CdS and CdS/ZnS heterojunction thin films were deposited by the chemical bath deposition method using zinc acetate and cadmium acetate as the metal ion sources and thioacetamide as the nonmetallic ion source in acidic medium. Na2EDTA was used as a complexing agent to control the free cation concentration. The single layer and heterojunction thin films were characterized with X-ray diffraction (XRD), a scanning electron microscope (SEM), energy dispersive X-ray analysis (EDX), and a UV-VIS spectrometer. The XRD patterns of the CdS/glass thin film deposited on the soda lime glass substrate crystallized in the cubic structure with a single peak along the (111) plane. The ZnS/CdS heterojunction and ZnS/glass single layer thin films crystallized in the hexagonal ZnS structure. The CdS/ZnS heterojunction thin film is nearly amorphous. The optical analysis confirmed single band gap values of 2.75 eV and 2.5 eV for the ZnS/CdS and CdS/ZnS heterojunction thin films, respectively. The CdS/glass and CdS/ZnS thin films have larger imaginary dielectric components than real ones. The optical conductivity of the single layer and heterojunction films is on the order of 10^15 s^-1. The optical study also confirmed refractive index values between 2 and 2.7 for the ZnS/glass, ZnS/CdS, and CdS/ZnS thin films for incident photon energies between 1.2 eV and 3.8 eV. The surface morphology studies revealed compacted spherical grains covering the substrate surfaces, with few cracks on the ZnS/glass, ZnS/CdS, and CdS/glass films and voids on the CdS/ZnS film. The EDX results confirmed a nearly 1:1 metallic to nonmetallic ion ratio in the single-layered thin films and the dominance of Zn ions over Cd ions in both the ZnS/CdS and CdS/ZnS heterojunction thin films.
Keywords: ZnS/CdS, CdS/ZnS, heterojunction thin films, chemical bath deposition
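Band gap values such as the reported 2.75 eV are commonly extracted from transmission spectra with a Tauc plot, extrapolating the linear region of (αhν)² versus hν to zero for a direct transition. A sketch with synthetic absorption data (the method is standard; the data below are generated with an assumed 2.75 eV gap, not the paper's spectra):

```python
import numpy as np

# Sketch of a direct-band-gap Tauc analysis: fit the linear region of
# (alpha*h*nu)^2 vs photon energy and take the x-intercept as Eg.

hv = np.linspace(2.8, 3.4, 20)        # photon energies above the gap (eV)
eg_true = 2.75                        # assumed gap used to generate data
alpha_hv_sq = 5.0 * (hv - eg_true)    # (alpha*h*nu)^2, linear above Eg

slope, intercept = np.polyfit(hv, alpha_hv_sq, 1)
eg_est = -intercept / slope           # x-intercept = band gap estimate

print(round(eg_est, 2))
```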
Procedia PDF Downloads 65

1190 Development of a New Characterization Method to Analyse Cypermethrin Penetration in Wood Material by Immunolabelling
Authors: Sandra Tapin-Lingua, Katia Ruel, Jean-Paul Joseleau, Daouia Messaoudi, Olivier Fahy, Michel Petit-Conil
Abstract:
The preservative efficacy of organic biocides is strongly related to their capacity for penetration and retention within wood tissues. Specific detection of the pyrethroid insecticide is currently obtained after extraction followed by chemical analysis using chromatography techniques. However, visualizing the insecticide molecule within the wood structure requires specific probes together with microscopy techniques. Therefore, the aim of the present work was to apply a new methodology based on antibody-antigen recognition and electron microscopy to visualize pyrethroids directly in the wood material. A polyclonal antibody directed against cypermethrin was developed and applied to Pinus sylvestris wood samples coated with technical cypermethrin. The antibody was tested on impregnated wood, and the specific recognition of the insecticide was visualized by transmission electron microscopy (TEM). The immunogold-TEM assay evidenced the capacity of the synthetic biocide to penetrate the wood. The depth of penetration was measured on sections taken at increasing distances from the coated surface of the wood. These results correlated with chemical analyses carried out by GC-ECD after extraction. In addition, the immuno-TEM investigation showed, for the first time at ultrastructural resolution, that cypermethrin was able to diffuse within the secondary wood cell walls.
Keywords: cypermethrin, insecticide, wood penetration, wood retention, immuno-transmission electron microscopy, polyclonal antibody
Procedia PDF Downloads 413

1189 X-Ray Dosimetry by a Low-Cost Current Mode Ion Chamber
Authors: Ava Zarif Sanayei, Mustafa Farjad-Fard, Mohammad-Reza Mohammadian-Behbahani, Leyli Ebrahimi, Sedigheh Sina
Abstract:
The fabrication and testing of a low-cost air-filled ion chamber for X-ray dosimetry are described. The chamber consists of a metal cylinder, a central wire, a BC517 Darlington transistor, a 9 V DC battery, and a voltmeter, providing a cost-effective means to measure dose. The output current of the dosimeter is amplified by the transistor and then fed to the large internal resistance of the voltmeter, producing a readable voltage signal. The dose-response linearity of the ion chamber was evaluated for different exposure scenarios of the X-ray tube: kVp values of 70, 90, and 120, and mAs values up to 20. In all experiments, a solid-state dosimeter (Solidose 400, Elimpex Medizintechnik) was used as the reference device for chamber calibration. Each exposure was repeated three times, the voltmeter and Solidose readings were recorded, and mean and standard deviation values were calculated. The calibration curve, derived by plotting voltmeter readings against Solidose readings, showed linear fits for all tube kVps, with linear relationships of 99%, 98%, and 100% for kVp values of 70, 90, and 120, respectively. The study shows the feasibility of achieving acceptable dose measurements with a simplified setup. Further enhancements to the proposed setup include solutions for limiting the leakage current, optimizing chamber dimensions, utilizing electronic microcontrollers for dedicated data readout, and minimizing the impact of stray electromagnetic fields on the system.
Keywords: dosimetry, ion chamber, radiation detection, X-ray
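The calibration step, fitting voltmeter readings against the reference dosimeter and then inverting the line to read dose from voltage, can be sketched as a linear least-squares fit. The readings below are invented illustration values (arbitrary units), not the paper's measurements:

```python
import numpy as np

# Sketch: calibrate the chamber against a reference dosimeter with a
# linear fit, then convert a new voltmeter reading back to dose.

reference_dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # mGy, reference device
voltmeter = np.array([0.11, 0.21, 0.40, 0.82, 1.61])   # V, ion chamber output

slope, offset = np.polyfit(reference_dose, voltmeter, 1)

def to_dose(volts):
    """Invert the calibration line: dose estimate for a voltmeter reading."""
    return (volts - offset) / slope

print(round(slope, 3), round(to_dose(1.0), 2))
```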
Procedia PDF Downloads 78

1188 Value Index, a Novel Decision Making Approach for Waste Load Allocation
Authors: E. Feizi Ashtiani, S. Jamshidi, M.H Niksokhan, A. Feizi Ashtiani
Abstract:
Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually intend to simultaneously minimize two criteria: total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, must be minimized as well, more pairwise optimizations over different scenarios are required. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This implies that defining different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested on the Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB software. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. First, the trade-off curves of TC-EV and TC-inequity are plotted separately, as in the conventional approach. Second, the value-equity curve is derived. The comparative results show that the solutions are in a similar range of inequity with lower total costs. This is due to the freedom in environmental violation attained by the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This reduces the process of achieving the best solutions and may yield a better classification for scenario definition.
It is also concluded that decision makers would do better to focus on the value index and weight its contents to find the most sustainable alternatives based on their requirements.
Keywords: waste load allocation (WLA), value index, multi-objective particle swarm optimization (MOPSO), Haraz River, equity
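The Streeter-Phelps DO simulation used in the study (implemented there in MATLAB) can be sketched as follows; the rate constants, saturation DO, and initial conditions are generic textbook-style values, not the Haraz River calibration:

```python
import math

# Sketch of the classic Streeter-Phelps oxygen-sag model:
#   D(t) = kd*L0/(ka-kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)
# with DO(t) = DO_sat - D(t). All parameter values are illustrative.

kd, ka = 0.35, 0.7       # deoxygenation / reaeration rates (1/day)
L0, D0 = 15.0, 1.0       # initial BOD and DO deficit (mg/L)
do_sat = 9.0             # saturation DO (mg/L)

def deficit(t):
    """DO deficit (mg/L) at travel time t (days)."""
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
           + D0 * math.exp(-ka * t)

def dissolved_oxygen(t):
    return do_sat - deficit(t)

# critical time of maximum deficit (minimum DO):
tc = math.log((ka / kd) * (1 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)
print(round(tc, 2), round(dissolved_oxygen(tc), 2))
```

In the WLA setting, the abatement levels chosen by the optimizer change L0 at each discharge point, and this DO profile is what the environmental-violation criterion is evaluated against.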
Procedia PDF Downloads 422

1187 Improvement of the Reliability and the Availability of a Production System
Authors: Lakhoua Najeh
Abstract:
Aims of the work: The aim of this paper is to improve the reliability and the availability of a cigarette packer production line based on two methods: the SADT method (Structured Analysis Design Technique) and the FMECA approach (Failure Mode, Effects, and Criticality Analysis). The first method enables us to describe the functionality of the packer production line, and the second enables us to establish an FMECA analysis. Methods: The methodology adopted in order to improve the reliability and availability of the packer production line is based on the combined use of the SADT and FMECA methods. It consists of a diagnosis of the existing state of all equipment on a production line of a factory in order to determine the most critical machine. In fact, we use, on the one hand, a functional analysis of the production line based on the SADT method and, on the other hand, a diagnosis and classification of the mechanical and electrical failures of the production line by criticality analysis based on the FMECA approach. Results: Based on the methodology adopted in this paper, the results are the creation and launch of a preventive maintenance plan. It contains the different elements of the packer production line, the list of preventive maintenance activities, and their periods of realization. Conclusion: The diagnosis of the existing state helped us find that the cigarette machine used in the packer production line is the most critical machine in the factory.
This enables us, on the one hand, to describe the functionality of the production line by the SADT method and, on the other hand, to perform an FMECA study of the machine in order to improve its availability and performance.
Keywords: production system, diagnosis, SADT method, FMECA method
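The FMECA criticality classification comes down to ranking failure modes by a criticality score, commonly the Risk Priority Number (RPN = severity × occurrence × detection). A sketch with invented failure modes and scores (not the factory's actual data):

```python
# Sketch: rank hypothetical failure modes of a packing machine by RPN.
# Each factor is scored 1-10; all entries are illustrative examples.

failure_modes = [
    # (name, severity, occurrence, detection)
    ("packaging jam",        6, 7, 3),
    ("sealing heater drift", 8, 4, 6),
    ("sensor misalignment",  4, 5, 2),
]

ranked = sorted(
    ((s * o * d, name) for name, s, o, d in failure_modes),
    reverse=True,  # highest RPN first = most critical
)

for rpn, name in ranked:
    print(name, rpn)
```

The highest-RPN modes are the ones a preventive maintenance plan like the one described would target first.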
Procedia PDF Downloads 143

1186 Defining a Reference Architecture for Predictive Maintenance Systems: A Case Study Using the Microsoft Azure IoT-Cloud Components
Authors: Walter Bernhofer, Peter Haber, Tobias Mayer, Manfred Mayr, Markus Ziegler
Abstract:
Current preventive maintenance measures are cost-intensive and not efficient. With the sensor data available from state-of-the-art internet of things devices, new possibilities for automated data processing emerge. Current advances in data science and machine learning enable new, so-called predictive maintenance technologies, which empower data scientists to forecast possible system failures. The goal of this approach is to cut expenses in preventive maintenance by automating the detection of possible failures and to improve the efficiency and quality of maintenance measures. Additionally, a centralization of sensor data monitoring can be achieved with this approach. This paper describes the approach of three students to define a reference architecture for a predictive maintenance solution in the internet of things domain, with a connected smartphone app for service technicians. The reference architecture is validated by a case study implemented with current Microsoft Azure cloud technologies. The results of the case study show that the reference architecture is valid and can be used to achieve a system for predictive maintenance execution with the cloud components of Microsoft Azure. The concepts used are technology-platform agnostic and can be reused on many different cloud platforms. The reference architecture is valid and can be used in many use cases, such as gas station maintenance, elevator maintenance, and many more.
Keywords: case study, internet of things, predictive maintenance, reference architecture
Procedia PDF Downloads 252

1185 Ultra-Rapid and Efficient Immunomagnetic Separation of Listeria Monocytogenes from Complex Samples in High-Gradient Magnetic Field Using Disposable Magnetic Microfluidic Device
Authors: L. Malic, X. Zhang, D. Brassard, L. Clime, J. Daoud, C. Luebbert, V. Barrere, A. Boutin, S. Bidawid, N. Corneau, J. Farber, T. Veres
Abstract:
The incidence of infections caused by foodborne pathogens such as Listeria monocytogenes (L. monocytogenes) poses a great potential threat to public health and safety. These issues are further exacerbated by legal repercussions due to “zero tolerance” food safety standards adopted in developed countries. Unfortunately, a large number of related disease outbreaks are caused by pathogens present in extremely low counts currently undetectable by available techniques. The development of highly sensitive and rapid detection of foodborne pathogens is therefore crucial, and requires robust and efficient pre-analytical sample preparation. Immunomagnetic separation is a popular approach to sample preparation. Microfluidic chips combined with external magnets have emerged as viable high throughput methods. However, external magnets alone are not suitable for the capture of nanoparticles, as very strong magnetic fields are required. Devices that incorporate externally applied magnetic field and microstructures of a soft magnetic material have thus been used for local field amplification. Unfortunately, very complex and costly fabrication processes used for integration of soft magnetic materials in the reported proof-of-concept devices would prohibit their use as disposable tools for food and water safety or diagnostic applications. We present a sample preparation magnetic microfluidic device implemented in low-cost thermoplastic polymers using fabrication techniques suitable for mass-production. The developed magnetic capture chip (M-chip) was employed for rapid capture and release of L. monocytogenes conjugated to immunomagnetic nanoparticles (IMNs) in buffer and beef filtrate. The M-chip relies on a dense array of Nickel-coated high-aspect ratio pillars for capture with controlled magnetic field distribution and a microfluidic channel network for sample delivery, waste, wash and recovery. 
The developed nickel-coating and passivation process allows the generation of switchable local perturbations within the uniform magnetic field produced by a pair of permanent magnets placed at opposite edges of the chip. This leads to a strong and reversible trapping force, wherein high local magnetic field gradients allow efficient capture of IMNs conjugated to L. monocytogenes flowing through the microfluidic chamber. The experimental optimization of the M-chip was performed using commercially available magnetic microparticles and fabricated silica-coated iron-oxide nanoparticles. The fabricated nanoparticles were optimized to achieve the desired magnetic moment, and their surface functionalization was tailored to allow efficient capture-antibody immobilization. The integration, validation and further optimization of the capture and release protocol are demonstrated using both dead and live L. monocytogenes through fluorescence microscopy and the plate-culture method. The capture efficiency of the chip was found to vary as a function of the Listeria-to-nanoparticle concentration ratio. A maximum capture efficiency of 30% was obtained, and the 24-hour plate-culture method allowed the detection of an initial sample concentration of only 16 cfu/ml. The device was also very efficient in concentrating the sample from a 10 ml initial volume: a 280% concentration efficiency was achieved in only 17 minutes, demonstrating the suitability of the system for food safety applications. In addition, the flexible design and low-cost fabrication process will allow rapid sample preparation for applications beyond food and water safety, including point-of-care diagnostics.
Keywords: array of pillars, bacteria isolation, immunomagnetic sample preparation, polymer microfluidic device
Procedia PDF Downloads 280
1184 Ice Load Measurements on Known Structures Using Image Processing Methods
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on the analysis of captured images. In this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars was designed as the known structure of the experimental setup. Ice accumulated unsymmetrically on the structure in a cold room represents the actual icing case in the experiments. Camera intrinsic and extrinsic parameters are used to define structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image with the structure coordinates, and ice thicknesses of structure elements are obtained by averaging the ice diameters from different camera views. Comparison between ice load measurements using this method and the actual ice loads shows positive correlation within an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
Keywords: camera calibration, ice detection, ice load measurements, image processing
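The per-element thickness computation described above can be sketched in a few lines. The function names, the millimetre-per-pixel scale, and the two-view averaging below are illustrative assumptions, not the authors' code, which additionally maps structure coordinates through camera calibration.

```python
# Hedged sketch: estimating radial ice thickness on a cylindrical bar from a
# thresholded binary image (1 = foreground, 0 = background). Hypothetical
# names and scale; the real method uses calibrated structure coordinates.

def apparent_diameter_px(binary_rows):
    """Mean width (in pixels) of the foreground object across image rows."""
    widths = [sum(row) for row in binary_rows if any(row)]
    return sum(widths) / len(widths)

def ice_thickness_mm(iced_views, bare_diameter_mm, mm_per_px):
    """Average radial ice thickness over several camera views of one bar."""
    thicknesses = []
    for view in iced_views:
        iced_mm = apparent_diameter_px(view) * mm_per_px
        # Ice grows on both sides of the bar, hence the division by two.
        thicknesses.append((iced_mm - bare_diameter_mm) / 2.0)
    return sum(thicknesses) / len(thicknesses)
```

Averaging the same bar over several views, as in the abstract, simply means passing more than one binary view to `ice_thickness_mm`.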
Procedia PDF Downloads 368
1183 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka honey by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints, owing to large non-linear relationships between predictor and predictand in a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tested. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
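The three figures of merit quoted for each model can be reproduced from predictions with a few lines. This is a hedged sketch using the standard definitions (RPD taken as the reference standard deviation divided by the RMSE of prediction), not the authors' pipeline.

```python
# Standard-formula sketch of the reported model metrics: R-squared, RMSE, and
# RPD. Assumed definitions; the paper does not spell out its exact estimators.
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: SD of reference values over RMSE."""
    mean = sum(y_true) / len(y_true)
    sd = math.sqrt(sum((t - mean) ** 2 for t in y_true) / (len(y_true) - 1))
    return sd / rmse(y_true, y_pred)
```

An RPD above roughly 2, as reported for the 1D-CNN, is conventionally read as a model usable for quantitative screening.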
Procedia PDF Downloads 139
1182 Paraoxonase 1 (PON 1) Arylesterase and Lactonase Activities, Polymorphism and Conjugated Dienes in Gastroenteritis in Paediatric Population
Authors: M. R. Mogarekar, Shraddha V. More, Pankaj Kumar
Abstract:
Gastroenteritis, the third leading killer of children in India today, is responsible for 13% of all deaths in children under 5 years of age and kills an estimated 300,000 children in India each year. We therefore investigated parameters that can help in early disease detection and prompt treatment. Serum paraoxonase is a calcium-dependent esterase that is widely distributed among tissues such as liver, kidney and intestine and is encoded in the chromosomal region 7q21.3-22.1. Studies show the presence of excessive reactive oxygen metabolites and an antioxidant imbalance in the gastrointestinal tract, leading to oxidative stress in gastroenteritis. To our knowledge, this is the first study of its kind. The objective of the present study was to investigate the role of paraoxonase 1 (PON 1) status, i.e., arylesterase and lactonase activities and the Q192R polymorphism, and of conjugated dienes, in gastroenteritis in a paediatric population. The study and control groups each consisted of 40 paediatric patients, with and without gastroenteritis respectively. Paraoxonase arylesterase and lactonase activities were assessed and phenotyping was determined. Conjugated dienes were also assessed. PON 1 arylesterase activities in cases (61.494±13.220) versus controls (70.942±15.385) and lactonase activities in cases (15.702±1.036) versus controls (17.434±1.176) were significantly decreased (p<0.05). There was no significant difference in phenotypic distribution between cases and controls. Conjugated dienes were significantly increased in patients (0.086±0.024) compared with the control group (0.064±0.019) (p<0.05). Paraoxonase 1 activities (arylesterase and lactonase) and conjugated dienes may be useful in risk assessment and management of gastroenteritis in the paediatric population.
Keywords: paraoxonase 1 polymorphism, arylesterase, lactonase, conjugated dienes, p-nitrophenylacetate, DHC
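As a quick plausibility check on the reported significance, the group summary statistics for arylesterase can be fed into a standard Welch two-sample t statistic. This recomputation is illustrative only and is not taken from the paper, which does not state which test it used.

```python
# Hedged sketch: Welch t statistic recomputed from the summary statistics
# reported in the abstract (arylesterase, n = 40 per group). Standard formula,
# not the authors' analysis.
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """t statistic for two independent samples with unequal variances."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

t_aryl = welch_t(61.494, 13.220, 40, 70.942, 15.385, 40)  # about -2.95
```

An absolute t near 2.9 is well beyond the two-sided 5% critical value (about 2.0 for these degrees of freedom), consistent with the p < 0.05 reported for the decreased arylesterase activity.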
Procedia PDF Downloads 307
1181 Delineation of Different Geological Interfaces Beneath the Bengal Basin: Spectrum Analysis and 2D Density Modeling of Gravity Data
Authors: Md. Afroz Ansari
Abstract:
The Bengal basin is a spectacular example of a peripheral foreland basin formed by the convergence of the Indian plate beneath the Eurasian and Burmese plates. The basin is bounded on three sides, north, west and east, by different fault-controlled tectonic features, whereas it opens to the south, where the rivers drain into the Bay of Bengal. The Bengal basin in the eastern part of the Indian subcontinent constitutes the largest fluvio-deltaic to shallow-marine sedimentary basin in the world today. This continental basin, coupled with the offshore Bengal Fan under the Bay of Bengal, forms the biggest sediment dispersal system. The continental basin continuously receives sediments from the two major rivers, the Ganga and the Brahmaputra (known as the Jamuna in Bengal), from the Meghna (formed at the confluence of the Ganga and Brahmaputra), and from a large number of small, rain-fed tributaries originating from the eastern Indian Shield. The drained sediments are ultimately delivered into the Bengal fan. The significance of the present study lies in delineating the variations in sediment thickness, the different crustal structures, and the mantle lithosphere throughout the onshore-offshore Bengal basin. In the present study, the different crustal/geological units and the shallower mantle lithosphere were delineated by analysing the Bouguer Gravity Anomaly (BGA) data along two long traverses: a south-north traverse running from the Bengal fan across the offshore-onshore transition of the Bengal basin and intersecting the Main Frontal Thrust of the India-Himalaya collision zone in the Sikkim-Bhutan Himalaya, and a west-east traverse running from the Peninsular Indian Shield across the Bengal basin to the Chittagong-Tripura Fold Belt. The BGA map was derived from the analysis of TOPEX data after applying the Bouguer correction and all terrain corrections.
The anomaly map was compared with the available ground gravity data in the western Bengal basin and the Indian subcontinent for consistency of the data used. Initially, the anisotropy associated with the thicknesses of the different crustal units, the crustal interfaces and the Moho boundary was estimated through spectral analysis of the gravity data with varying window sizes over the study area. The 2D density sections along the traverses were finalized after a number of iterations with acceptable root mean square (RMS) errors. The estimated thicknesses of the different crustal units and the dips of the Moho boundary along both profiles are consistent with earlier results. Further, the results were supported by examining the earthquake database and focal mechanism solutions for a better understanding of the geodynamics. The earthquake data were taken from the catalogue of the US Geological Survey, and the focal mechanism solutions were compiled from the Harvard Centroid Moment Tensor Catalogue. Concentrations of seismic events at different depth levels are not uncommon; the occurrence of earthquakes may be due to stress accumulation as a result of resistance from three sides.
Keywords: anisotropy, interfaces, seismicity, spectrum analysis
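The depth-estimation idea behind the spectral analysis step can be sketched compactly. In Spector-Grant type analysis, the log of the radially averaged power spectrum of gravity data decays linearly with wavenumber, and the slope gives the mean depth to a density interface. Everything below (function names, the synthetic data, the rad/km convention) is a hypothetical illustration, not the authors' processing chain.

```python
# Sketch: mean interface depth from the slope of ln(power) vs angular
# wavenumber k (rad/km), using h = -slope / 2 for P(k) ~ exp(-2*h*k).
import math

def interface_depth(wavenumbers, powers):
    """Least-squares slope of ln(power) vs wavenumber, converted to depth (km)."""
    logs = [math.log(p) for p in powers]
    n = len(wavenumbers)
    km = sum(wavenumbers) / n
    lm = sum(logs) / n
    slope = (sum((k - km) * (l - lm) for k, l in zip(wavenumbers, logs))
             / sum((k - km) ** 2 for k in wavenumbers))
    return -slope / 2.0

# Synthetic check: a source ensemble at 5 km depth gives P(k) ~ exp(-2*5*k).
ks = [0.1 * i for i in range(1, 11)]
ps = [math.exp(-2.0 * 5.0 * k) for k in ks]
depth = interface_depth(ks, ps)  # ~5.0 km
```

In practice, different straight-line segments of the spectrum (fit over different wavenumber windows, as in the varying window sizes mentioned above) yield depths to the different interfaces.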
Procedia PDF Downloads 274
1180 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform
Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid
Abstract:
Interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver, spleen and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method for abdominal organ segmentation of medical image data using mathematical morphology is presented. Our proposed method is based on hierarchical segmentation and the watershed algorithm, and a powerful technique has been designed to suppress over-segmentation based on the mosaic image and on the computation of the watershed transform. Our algorithm proceeds in two parts. In the first, we improve the quality of the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen and kidneys. To validate the proposed segmentation technique, we tested it on several images. Our segmentation approach is evaluated by comparing our results with a manual segmentation performed by an expert. The experimental results are described in the last part of this work.
Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, watershed algorithm
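The core of the watershed transform is flooding a gradient image from markers, with lower-gradient pixels claimed first. A minimal, pure-Python sketch of this flooding idea is below; the real pipeline applies it to the gradient-mosaic image after anisotropic diffusion and morphological filtering, and this toy version is not the authors' implementation.

```python
# Minimal marker-based watershed-by-flooding using a priority queue.
# Pixels are absorbed into basins in order of increasing gradient value.
import heapq

def watershed(image, markers):
    """image: 2D list of gradient values; markers: {(row, col): nonzero label}."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]  # 0 = unlabeled
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (image[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]  # the basin flooding here first wins
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels
```

On a 1D toy gradient with a ridge in the middle, two markers flood outward and meet at the high-gradient crest, which is exactly how organ boundaries are recovered from a gradient image.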
Procedia PDF Downloads 441
1179 Research on Detection of Web Page Visual Salience Region Based on Eye Tracker and Spectral Residual Model
Authors: Xiaoying Guo, Xiangyun Wang, Chunhua Jia
Abstract:
Web pages have become one of the most important ways of knowing the world, and humans gather a great deal of information from them every day. Understanding where humans look when surfing web pages is therefore rather important. In normal scenes, bottom-up features and top-down tasks significantly affect human eye movements. In this paper, we investigated whether a conventional visual salience algorithm can properly predict humans' visually attractive regions when they view web pages. First, we obtained eye movement data while participants viewed web pages using an eye tracker. By analysing the eye movement data, we studied the influence of visual saliency and thinking style on eye-movement patterns. The analysis showed that thinking style affects eye-movement patterns much more than visual saliency. Second, we compared the web page visual salience regions extracted by the Itti model and the Spectral Residual (SR) model. The results showed that the SR model performs better than the Itti model when compared with the heat maps from eye movements. Considering the influence of mind habits on humans' visual regions of interest, we introduced fixation position, one of the most important cues in mind habit, to improve the SR model. The result showed that the improved SR model can better predict the human visual region of interest in web pages.
Keywords: web page salience region, eye-tracker, spectral residual, visual salience
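The SR model itself is compact enough to sketch: the "residual" is the log-amplitude spectrum minus its local average, and the saliency map is the squared magnitude of the inverse transform with the original phase. The box-filter size and the normalization below are illustrative choices, and the published algorithm also applies Gaussian smoothing to the saliency map, omitted here.

```python
# Sketch of Spectral Residual saliency (Hou & Zhang style) with numpy FFTs.
# Circular boundary handling in the box filter is a simplification.
import numpy as np

def box_mean(a, size=3):
    """Local mean of a 2D array via shifted copies (circular boundaries)."""
    k = size // 2
    shifts = [np.roll(np.roll(a, dr, axis=0), dc, axis=1)
              for dr in range(-k, k + 1) for dc in range(-k, k + 1)]
    return sum(shifts) / len(shifts)

def spectral_residual_saliency(img):
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - box_mean(log_amp)          # the "spectral residual"
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()
```

A single bright pixel on a uniform background is the classic pop-out case: the model assigns it the maximum saliency.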
Procedia PDF Downloads 276
1178 Development and Validation of HPLC Method on Determination of Acesulfame-K in Jelly Drink Product
Authors: Candra Irawan, David Yudianto, Ahsanu Nadiyya, Dewi Anna Br Sitepu, Hanafi, Erna Styani
Abstract:
Jelly drinks are produced from a combination of natural and synthetic materials, including acesulfame potassium (acesulfame-K) as a synthetic sweetener. Acesulfame-K content in jelly drinks can be determined by High-Performance Liquid Chromatography (HPLC), but the method needed validation because of two changes: the Carrez addition in the reagent-addition step was skipped, and the ratio of the mixed mobile phase (potassium dihydrogen phosphate and acetonitrile) was changed from 75:25 to 90:10, making the method more efficient and cheaper. This study was conducted to evaluate the performance of the method for determining acesulfame-K content in jelly drinks by HPLC. The method refers to DIN EN ISO 12856 (1999), Foodstuffs: Determination of acesulfame-K, aspartame and saccharin. The correlation coefficient (r) in the linearity test was 0.9987 over the concentration range 5-100 mg/L. The detection limit was 0.9153 ppm, while the quantitation limit was 1.1932 ppm. The recovery in the accuracy test for samples spiked at 100 mg/L was 102-105%. Relative Standard Deviation (RSD) values for the precision and homogenization tests were 2.815% and 4.978%, respectively. Meanwhile, the comparative and stability tests gave tstat (0.136) < ttable (2.101) and |µ1-µ2| (1.502) ≤ 0.3×CV Horwitz, and the obstinacy (ruggedness) test likewise gave tstat < ttable. It can be concluded that the HPLC method for the determination of acesulfame-K in jelly drink products is valid and can be used for analysis with good performance.
Keywords: acesulfame-K, jelly drink, HPLC, validation
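The linearity and limit figures come from a calibration curve, and the arithmetic can be sketched as below. The 3.3σ/S and 10σ/S formulas are the standard ICH ones and an assumption here, since the abstract does not state which estimator was used.

```python
# Sketch of calibration-curve statistics for method validation: least-squares
# fit, correlation coefficient r, and ICH-style LOD/LOQ from a noise estimate
# sigma and the calibration slope S. Illustrative, not the authors' worksheet.
import math

def linreg(x, y):
    """Least-squares slope, intercept and correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / math.sqrt(sxx * syy)

def lod_loq(sigma, slope):
    """Detection and quantitation limits: 3.3*sigma/S and 10*sigma/S."""
    return 3.3 * sigma / slope, 10 * sigma / slope
```

With these formulas LOQ/LOD is fixed at about 3; the reported pair (0.92 and 1.19 ppm) has a smaller ratio, suggesting the authors used a different (e.g. signal-to-noise based) estimator.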
Procedia PDF Downloads 129
1177 Assessment of Waste Management Practices in Bahrain
Authors: T. Radu, R. Sreenivas, H. Albuflasa, A. Mustafa Khan, W. Aloqab
Abstract:
The Kingdom of Bahrain, a small island country in the Gulf region, is experiencing rapid economic growth, resulting in a sharp increase in population and unprecedented amounts of waste. However, waste management in the country is still very basic, with landfilling the most common option. Recycling is still a scarce practice, with small recycling businesses and initiatives emerging only in recent years. This scenario is typical of other countries in the region, which produce similar amounts of waste per capita. In this paper, we review current waste management practices in Bahrain by collecting data published by the Government and various authors, and by visiting the country's only landfill site, Askar. In addition, we performed a survey of residents to learn more about awareness of and attitudes towards sustainable waste management strategies. A review of the available data on waste management indicates that the Askar landfill site is nearing its capacity. The site uses open tipping as the method of disposal. The highest percentage of disposed waste comes from the building sector (38.4%), followed by domestic (27.5%) and commercial waste (17.9%). Disposal monitoring and recording are often based on estimates of weight, without proper characterization/classification of the received waste. In addition, there is a need to assess the environmental impact of the site, with systematic monitoring of pollutants in the area and their potential spreading to the surrounding land, groundwater and air. The results of the survey indicate low awareness of what happens with the collected waste in the country. However, the respondents showed support for future waste reduction and recycling initiatives. This implies that educating local communities would greatly benefit such governmental initiatives, securing greater participation.
Raising awareness of the issues surrounding recycling and waste management, together with a systematic effort to divert waste from landfills, are the first steps towards securing sustainable waste management in the Kingdom of Bahrain.
Keywords: landfill, municipal solid waste, survey, waste management
Procedia PDF Downloads 158
1176 Artificial Intelligence-Based Thermal Management of Battery System for Electric Vehicles
Authors: Raghunandan Gurumurthy, Aricson Pereira, Sandeep Patil
Abstract:
The escalating adoption of electric vehicles (EVs) across the globe has underscored the critical importance of advancing battery system technologies. This has catalyzed a shift towards the design and development of battery systems that not only exhibit higher energy efficiency but also boast enhanced thermal performance and sophisticated multi-material enclosures. A significant leap in this domain has been the incorporation of simulation-based design optimization for battery packs and Battery Management Systems (BMS), a move further enriched by integrating artificial intelligence/machine learning (AI/ML) approaches. These strategies are pivotal in refining the design, manufacturing, and operational processes for electric vehicles and energy storage systems. By leveraging AI/ML, stakeholders can now predict battery performance metrics, such as State of Health, State of Charge, and State of Power, with unprecedented accuracy. Furthermore, as Li-ion batteries (LIBs) become more prevalent in urban settings, the imperative for bolstering thermal and fire resilience has intensified. This has propelled Battery Thermal Management Systems (BTMs) to the forefront of energy storage research, highlighting the role of machine learning and AI not just as tools for enhanced safety management through accurate temperature forecasts and diagnostics but also as indispensable allies in the early detection and warning of potential battery fires.
Keywords: electric vehicles, battery thermal management, industrial engineering, machine learning, artificial intelligence, manufacturing
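Not from the paper: as a concrete anchor for the State-of-Charge prediction task mentioned above, here is the baseline coulomb-counting estimator that data-driven BMS models are commonly benchmarked against. Names and the sign convention (positive current = discharge) are assumptions.

```python
# Baseline SoC tracking by coulomb counting: integrate measured current over
# time relative to the rated capacity. ML-based estimators aim to beat this
# when the initial SoC or the capacity is uncertain.
def soc_coulomb_counting(soc0, currents_a, dt_s, capacity_ah):
    """Return the SoC trajectory; positive current means discharge."""
    soc = soc0
    history = []
    for i in currents_a:
        soc -= i * dt_s / (capacity_ah * 3600.0)  # Ah drawn / Ah available
        history.append(soc)
    return history
```

Discharging a 1 Ah cell at 10 A for 360 s drains it completely, which is the kind of sanity check such an estimator should pass before being used as a training reference.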
Procedia PDF Downloads 97
1175 Examining Relationship between Resource-Curse and Under-Five Mortality in Resource-Rich Countries
Authors: Aytakin Huseynli
Abstract:
The paper reports the findings of a study that examined under-five mortality rates among resource-rich countries. Typically, when countries obtain wealth, citizens gain increased wellbeing, and societies with new wealth create equal opportunities for everyone, including vulnerable groups. Scholars claim, however, that this is not the case for developing resource-rich countries: natural resources become a curse for them rather than a blessing. Spillovers from the natural resource curse negatively affect the social wellbeing of vulnerable people, who become excluded from mainstream society while their situation grows more precarious. To test this hypothesis, the study compared under-five mortality rates among resource-rich countries using an independent-samples one-way ANOVA. The data on under-five mortality rates came from the World Bank. The natural resources considered in this study are oil, gas and minerals, and the list of 67 resource-rich countries was taken from the Natural Resource Governance Institute. The sample was categorized into four groups, low, lower-middle, upper-middle and high-income countries, based on the income classification of the World Bank. Results revealed a significant difference in under-five mortality rates across the low, lower-middle, upper-middle and high-income groups (F(3, 29.01) = 33.70, p < .001). To locate the differences among income groups, the Games-Howell post hoc test was performed, and it was found that under-five mortality is an issue for low, middle and upper-middle income countries but not for high-income countries. The results of this study agree with previous research on the resource curse and the negative effects of resource-based development.
The policy implications of the study, for social workers, policy makers, academics and social development specialists, are to raise and discuss the issues of marginalization and exclusion of vulnerable groups in developing resource-rich countries and to suggest interventions for avoiding them.
Keywords: children, natural resource, extractive industries, resource-based development, vulnerable groups
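The shape of the reported statistic (a fractional second degree of freedom, paired with the Games-Howell post hoc test) suggests Welch's unequal-variance ANOVA. A sketch from group summary statistics, using the standard formulas rather than the authors' software output, looks like this:

```python
# Welch's one-way ANOVA from per-group means, SDs and sizes. Standard
# textbook formulas; an illustration, not the study's actual computation.
def welch_anova(means, sds, ns):
    """Return (F, df1, df2) for k groups with unequal variances."""
    k = len(means)
    w = [n / s ** 2 for n, s in zip(ns, sds)]      # precision weights
    total_w = sum(w)
    grand = sum(wi * m for wi, m in zip(w, means)) / total_w
    a = sum(wi * (m - grand) ** 2 for wi, m in zip(w, means)) / (k - 1)
    c = sum((1 - wi / total_w) ** 2 / (n - 1) for wi, n in zip(w, ns))
    f = a / (1 + 2 * (k - 2) * c / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * c)                   # fractional, as in F(3, 29.01)
    return f, k - 1, df2
```

With four income groups this yields df1 = 3 and a non-integer df2, matching the form of the reported F(3, 29.01).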
Procedia PDF Downloads 254
1174 A Functional Correlate of the Two Polarities of Depressive Experience Model
Authors: Jaime R. Silva, Gabriel E. Reyes, Marianne Krause
Abstract:
Background: The two-polarity model of the depressive personality argues that experience is organized around two axes: interpersonal relatedness and self-definition. Differential emphasis on one of these poles defines three types of depressive experience: the Anaclitic, Introjective or Mixed pattern. The Anaclitic pattern has been conceptually related to exaggerated biological stress sensitivity, while the Introjective pattern has been linked with anhedonic symptomatology. The general aim of the study was to find empirical support for this relationship. Methods: 101 non-clinical individuals participated in two experimental sessions. In the first session, biological stress reactivity (salivary cortisol concentration) and subjective perceived stress (self-reported) during the Trier Social Stress Test (TSST) were investigated. In the second session, a visual discrimination task with a specific reward system was performed to study reinforcement sensitivity (anhedonia). Results: Participants with Introjective depressive symptoms showed higher interpersonal sensitivity and a diminished sensitivity to reinforcement. In addition, this group showed poor psychological detection of its exacerbated reactivity to stress, the opposite of the pattern evidenced in the Anaclitic group. Conclusions: In perspective, these results empirically support the two-polarity model of the depressive personality. Clinical implications are discussed.
Keywords: depression, interpersonal stress, personality, trier social stress test
Procedia PDF Downloads 251
1173 Virtual Prototyping of LED Chip Scale Packaging Using Computational Fluid Dynamic and Finite Element Method
Authors: R. C. Law, Shirley Kang, T. Y. Hin, M. Z. Abdullah
Abstract:
LED technology has evolved aggressively in recent years, from the incandescent bulb of older days to packages as small as the chip scale package, and it will continue to stay bright in the future. As such, there is tremendous pressure to stay competitive in the market by optimizing products to the next level of performance and reliability with the shortest time to market. This changes the conventional way of product design and development to virtual prototyping by means of Computer-Aided Engineering (CAE), comprising the deployment of the Finite Element Method (FEM) and Computational Fluid Dynamics (CFD). FEM accelerates the investigation for early detection of failures such as cracking, improves the thermal performance of the system and enhances solder joint reliability. CFD helps to simulate the flow pattern of the molding material as a function of temperature and molding parameter settings, to evaluate failures such as voids and displacement. This paper briefly discusses the procedures and applications of FEM in thermal stress and solder joint reliability analysis, and of CFD in compression molding, for LED CSP. Integration of virtual prototyping into product development has greatly reduced the time to market, and many successful achievements, with a minimized number of evaluation iterations required in the scope of material, process settings, and package architecture variants, have been materialized with this approach.
Keywords: LED, chip scale packaging (CSP), computational fluid dynamic (CFD), virtual prototyping
Procedia PDF Downloads 287
1172 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a big challenge because of the random nature of the signals, and the feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities, and similar ones for the same activity, is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set; further, more features result in higher computational complexity, while fewer features compromise performance. In this paper, a novel idea for selecting an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network, which encodes the input features into a small set of codes. To avoid the vanishing-gradient problem and the need for normalization of the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined in the responses of the hidden layer. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both.
The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than both in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
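The core idea, training an autoencoder by a derivative-free search that minimizes reconstruction MSE, can be shown on a toy scale. The sketch below uses a one-hidden-layer autoencoder and an accept-if-better random search as a stand-in for the paper's meta-heuristic; the actual HO-DAE has 4 hidden layers and a different search algorithm, so everything here is an assumption-laden illustration.

```python
# Toy gradient-free autoencoder training: perturb the weight vector and keep
# perturbations that reduce the reconstruction MSE (encoder input vs decoder
# output), mirroring the MSE objective described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def recon_mse(w, x, n_code):
    """Reconstruction MSE of a tanh-encoder / linear-decoder autoencoder."""
    n_in = x.shape[1]
    enc = w[:n_in * n_code].reshape(n_in, n_code)
    dec = w[n_in * n_code:].reshape(n_code, n_in)
    return float(np.mean((np.tanh(x @ enc) @ dec - x) ** 2))

def random_search(x, n_code, w0, iters=300, step=0.1):
    """Accept-if-better random search over the flattened weight vector."""
    best, best_err = w0.copy(), recon_mse(w0, x, n_code)
    for _ in range(iters):
        cand = best + step * rng.normal(size=best.size)
        err = recon_mse(cand, x, n_code)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

x = rng.normal(size=(20, 4))        # 20 samples, 4 input "features"
w0 = rng.normal(size=2 * 4 * 2)     # weights for a 4 -> 2 -> 4 network
err0 = recon_mse(w0, x, 2)
w, err = random_search(x, n_code=2, w0=w0)
```

Because only improving perturbations are accepted, the final error can never exceed the starting error, which is the defining property of this family of heuristics.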
Procedia PDF Downloads 114
1171 Predictive Spectral Lithological Mapping, Geomorphology and Geospatial Correlation of Structural Lineaments in Bornu Basin, Northeast Nigeria
Authors: Aminu Abdullahi Isyaku
Abstract:
The semi-arid Bornu basin in northeast Nigeria is characterised by flat topography, thick cover sediments and a lack of continuous bedrock outcrops discernible for field geology. This paper presents a methodology for characterising neotectonic surface structures and surface lithology in the north-eastern Bornu basin as an alternative to field geological mapping, using free multispectral Landsat 7 ETM+, SRTM DEM and ASAR Earth Observation datasets. The spectral lithological mapping developed here utilises spectral discrimination of the surface features identified on Landsat 7 ETM+ images to infer lithology in four steps: computation of band-combination images; band-ratio images; supervised image classification; and inference of the lithological compositions. Two complementary approaches to lineament mapping, manual digitization and automatic lineament extraction, are carried out in this study to validate the structural lineaments extracted from the Landsat 7 ETM+ image mosaic covering the study area. A comparison between the mapped surface lineaments and lineament zones shows good geospatial correlation and identifies the predominant NE-SW and NW-SE structural trends in the basin. Topographic profiles across different parts of the Bama Beach Ridge palaeoshorelines in the basin show different elevations across the feature. Most of the drainage systems in the northeastern Bornu basin are determined to be structurally controlled, with drainage lines terminating against the palaeo-lake border and emptying into Lake Chad, mainly arising from the extensive topographic high-stand Bama Beach Ridge palaeoshoreline.
Keywords: Bornu Basin, lineaments, spectral lithology, tectonics
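Of the four steps, the band-ratio computation is the most mechanical and can be sketched directly. The specific band pairing shown (ETM+ 5/7, commonly used to highlight clay/hydroxyl-bearing rocks) is an illustrative assumption, not taken from the paper.

```python
# Illustrative band-ratio image from two co-registered Landsat 7 ETM+ bands.
# Ratioing suppresses illumination differences and enhances spectral contrast
# between lithological units; eps avoids division by zero in shadowed pixels.
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Pixel-wise ratio of two co-registered bands as float arrays."""
    a = band_a.astype(float)
    b = band_b.astype(float)
    return a / (b + eps)

b5 = np.array([[100, 50], [80, 40]])   # hypothetical ETM+ band 5 digital numbers
b7 = np.array([[50, 50], [40, 10]])    # hypothetical ETM+ band 7 digital numbers
ratio = band_ratio(b5, b7)             # high values suggest hydroxyl absorption
```

Several such ratio images, stacked as RGB composites, feed the supervised classification step described above.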
Procedia PDF Downloads 139