Search results for: volatile detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3906

2016 Simulation of Performance of LaBr₃ (Ce) Using GEANT4

Authors: Zarana Dave

Abstract:

Cerium-doped lanthanum bromide, LaBr₃(Ce), is a scintillator with attractive spectroscopic properties that make it well suited to security, medical, geophysics, and high-energy physics applications. Here, the performance parameters of a cylindrical LaBr₃(Ce) scintillator were investigated. The first aspect is the determination of the efficiency for γ-ray detection, computed with the GEANT4 simulation toolkit over the 10 keV to 10 MeV energy range. The second is a detailed study of the background radiation of LaBr₃(Ce), which has a relatively high intrinsic radiation background due to the naturally occurring ¹³⁸La and ²²⁷Ac radioisotopes.
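As a hedged illustration of the efficiency tally described above: the study itself uses the GEANT4 C++ toolkit, but the post-processing step reduces to counting the fraction of emitted primaries that deposit their full energy in the crystal. The event energies, window, and function name below are assumptions for illustration, not the study's simulation output.

```python
# Sketch only: tallying a full-energy-peak efficiency from per-event
# deposited energies, as a Geant4 scorer would record them.

def full_energy_peak_efficiency(deposited_mev, e_gamma_mev, n_primaries,
                                window=0.01):
    """Fraction of emitted gammas whose deposited energy falls within a
    fractional window of the full gamma energy."""
    n_peak = sum(abs(e - e_gamma_mev) <= window * e_gamma_mev
                 for e in deposited_mev)
    return n_peak / n_primaries

# Illustrative deposits from 10 simulated 0.662 MeV gammas (6 events
# deposited nothing and are omitted from the list):
eff = full_energy_peak_efficiency([0.662, 0.660, 0.300, 0.0], 0.662, 10)
```

Repeating this tally at each source energy between 10 keV and 10 MeV yields the efficiency curve the abstract describes.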

Keywords: LaBr₃(Ce), GEANT4, efficiency, background radiation

Procedia PDF Downloads 222
2015 Two-Dimensional Van der Waals Heterostructure for Highly Energy-Efficient Field-Free Deterministic Spin-Orbit Torque Switching at Room Temperature

Authors: Pradeep Raj Sharma, Bogeun Jang, Jongill Hong

Abstract:

Spin-orbit torque (SOT) is an efficient approach for manipulating the magnetization of ferromagnetic materials (FMs), providing improved device performance, better compatibility, and ultra-fast switching with lower power consumption than spin-transfer torque (STT). Among the various materials and structural designs, two-dimensional (2D) van der Waals (vdW) layered materials and their heterostructures have been demonstrated as highly scalable and promising device architectures for SOT. In particular, a bilayer heterostructure consisting of a fully 2D vdW FM and a non-magnetic material (NM) offers a potential platform for controlling magnetization via SOT because it is easy to scale and requires less energy to switch. Here, we report field-free deterministic switching driven by SOT at room temperature, integrating the perpendicularly magnetized 2D vdW material Fe₃GaTe₂ (FGaT) with NM WTe₂. Pulse-current-induced magnetization switching with an ultra-low current density of about 6.5×10⁵ A/cm², yielding a SOT efficiency close to double digits at 300 K, is reported. These values are two orders of magnitude better than those observed in conventional heavy-metal (HM) based SOT devices and exceed those reported with other 2D vdW layered materials. WTe₂, a topological semimetal possessing strong spin-orbit coupling (SOC) and a high spin Hall angle, can induce significant spin accumulation with negligible spin loss across the transparent 2D bilayer heterointerface. This promising device architecture enables highly compatible, energy-efficient, non-volatile memory and lays the foundation for designing efficient, flexible, and miniaturized spintronic devices.

Keywords: spintronics, spin-orbit torque, spin Hall effect, spin Hall angle, topological semimetal, perpendicular magnetic anisotropy

Procedia PDF Downloads 6
2014 Double Functionalization of Magnetic Colloids with Electroactive Molecules and Antibody for Platelet Detection and Separation

Authors: Feixiong Chen, Naoufel Haddour, Marie Frenea-Robin, Yves MéRieux, Yann Chevolot, Virginie Monnier

Abstract:

Neonatal thrombopenia occurs when the mother generates antibodies against her baby’s platelet antigens. It is particularly critical for newborns because it can cause coagulation trouble leading to intracranial hemorrhage. In this case, diagnosis must be made quickly so that platelet transfusion can take place immediately after birth. Before transfusion, platelet antigens must be tested carefully to avoid rejection. The majority of thrombopenia cases (95%) are caused by antibodies directed against Human Platelet Antigen 1a (HPA-1a) or 5b (HPA-5b). The common method for platelet antigen detection is the polymerase chain reaction, which identifies the gene sequence. However, it is expensive, time-consuming, and requires a significant blood volume, which is not suitable for newborns. We propose to develop a point-of-care device based on magnetic colloids doubly functionalized with 1) antibodies specific to platelet antigens and 2) highly sensitive electroactive molecules detectable by an electrochemical microsensor. These magnetic colloids will be used first to isolate platelets from other blood components, then to specifically capture platelets bearing HPA-1a and HPA-5b antigens, and finally to attract them close to the sensor working electrode for an improved electrochemical signal. The expected advantages are an assay time under 20 min starting from a blood volume smaller than 100 µL. Our functionalization procedure, based on amine dendrimers and NHS-ester modification of the initial carboxyl colloids, will be presented. Functionalization efficiency was evaluated by colorimetric titration of surface chemical groups, zeta potential measurements, infrared spectroscopy, fluorescence scanning, and cyclic voltammetry. Our results showed that electroactive molecules and antibodies can be immobilized successfully onto magnetic colloids. Application of a magnetic field to the working electrode increased the detected electrochemical signal, and the magnetic colloids were able to capture specific purified antigens extracted from platelets.

Keywords: magnetic nanoparticles, electroactive molecules, antibody, platelet

Procedia PDF Downloads 270
2013 Study of Synergetic Effect by Combining Dielectric Barrier Discharge (DBD) Plasma and Photocatalysis for Abatement of Pollutants in Air Mixture System: Influence of Some Operating Conditions and Identification of Byproducts

Authors: Wala Abou Saoud, Aymen Amine Assadi, Monia Guiza, Abdelkrim Bouzaza, Wael Aboussaoud, Abdelmottaleb Ouederni, Dominique Wolbert

Abstract:

Volatile organic compounds (VOCs) constitute one of the most important families of chemicals involved in atmospheric pollution, causing damage to the environment and human health, and consequently need to be eliminated. Among the promising technologies, coupling dielectric barrier discharge (DBD) plasma with photocatalysis reveals very interesting prospects in terms of process synergy for compound mineralization with low energy consumption. In this study, the removal of organic compounds such as butyraldehyde (BUTY) and dimethyl disulfide (DMDS) (exhaust gases from animal quartering centers) in an air mixture using DBD plasma coupled with photocatalysis was tested, in order to determine whether or not a synergy effect was present. The removal efficiency of these pollutants, the CO₂ and CO selectivity, and byproduct formation, such as ozone, were investigated to evaluate the performance of the combined process. For this purpose, a series of experiments was carried out in a continuous reactor. Several operating parameters were also investigated, such as the specific energy of discharge, the inlet pollutant concentration, and the flow rate. It appears from this study that the performance of the process is enhanced and a synergetic effect is observed: we note a 10% enhancement in removal efficiency. It is interesting to note that the combined system leads to better CO₂ selectivity than plasma alone. Consequently, intermediate byproducts were reduced owing to the various reactive species generated (O•, N, OH•, O₂•⁻, O₃, NO₂, NOx, etc.). Additionally, the behavior of the combined DBD plasma and photocatalysis system shows that ozone can also be readily decomposed in the presence of the photocatalyst.

Keywords: combined process, DBD plasma, photocatalysis, pilot scale, synergetic effect, VOCs

Procedia PDF Downloads 330
2012 Red-Tide Detection and Prediction Using MODIS Data in the Arabian Gulf of Qatar

Authors: Yasir E. Mohieldeen

Abstract:

Qatar is one of the most water-scarce countries in the world. In 2014, the average per capita rainfall was less than 29 m³/year, while the global average is 6,000 m³/year. However, per capita water consumption in Qatar is among the highest in the world: more than 500 liters per person per day, whereas the global average is 160 liters per person per day. Since the early 2000s, Qatar has relied heavily on desalinated water from the Arabian Gulf as its main source of fresh water. In 2009, about 99.9% of the total potable water produced was desalinated. Reliance on desalinated water makes Qatar very vulnerable to water-related natural disasters, such as the red-tide phenomenon. Qatar’s strategic water reserve lasts for only 7 days. In the case of a red-tide outbreak, the country would not be able to desalinate water for days, let alone the months that this disaster could bring about (as it clogs the desalination equipment). The 2008-09 red-tide outbreak, for instance, lasted for more than eight months and forced the closure of desalination plants in the region for weeks. This study aims at identifying favorable conditions for red-tide outbreaks, using satellite data along with in-situ measurements. This identification would allow the prediction of these outbreaks and their hotspots. Prediction and monitoring of outbreaks are crucial to water security in the country, as different measures could be put in place in advance to prevent an outbreak and to mitigate its impact if it happened. Red-tide outbreaks are detected using different algorithms for chlorophyll concentration in the Gulf waters. Vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI), were used along with the Surface Algal Bloom Index (SABI) to detect known outbreaks. MODIS (Moderate Resolution Imaging Spectroradiometer) bands are used to calculate these indices. A red-tide outbreak atlas of the Arabian Gulf is being produced. Prediction of red-tide outbreaks ahead of their occurrence would give critical information on possible water shortages in the country. Detecting known outbreaks in the past few decades and the related parameters (e.g., water salinity, sea surface temperature, nutrients, sandstorms, etc.) enables the identification of the favorable conditions for red-tide outbreaks that are key to their prediction.
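For orientation, the indices named in this abstract have standard band-ratio forms. The sketch below assumes the usual formulas (EVI with the common coefficients G=2.5, C1=6, C2=7.5, L=1, and SABI as (NIR − Red)/(Blue + Green)); the reflectance values used are illustrative, not MODIS data from the study.

```python
# Sketch only: standard band-ratio index formulas computed from
# surface reflectances in the NIR, red, blue, and green bands.

def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Enhanced Vegetation Index with the commonly used coefficients
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def sabi(nir, red, blue, green):
    # Surface Algal Bloom Index
    return (nir - red) / (blue + green)

# Illustrative reflectances for one pixel:
print(ndvi(0.5, 0.3), evi(0.5, 0.3, 0.05), sabi(0.5, 0.3, 0.1, 0.1))
```

Thresholding such index maps pixel by pixel is the usual way known bloom events are flagged before building an outbreak atlas.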

Keywords: Arabian Gulf, MODIS, red-tide detection, strategic water reserve, water desalination

Procedia PDF Downloads 107
2011 Securing Mobile Ad-Hoc Network Utilizing OPNET Simulator

Authors: Tariq A. El Shheibia, Halima Mohamed Belhamad

Abstract:

This paper considers securing data based on a multi-path protocol (SDMP) in a mobile ad hoc network, using the OPNET simulator (Modeler 14.5) with the AODV routing protocol as the network's underlying multi-path algorithm for message security in MANETs. The main idea of this work is to present a way to detect an attacker inside a MANET. Detection of the attacker is performed by adding some effective parameters to the network.

Keywords: MANET, AODV, malicious node, OPNET

Procedia PDF Downloads 295
2010 Conformance to Spatial Planning between the Kampala Physical Development Plan of 2012 and the Existing Land Use in 2021

Authors: Brendah Nagula, Omolo Fredrick Okalebo, Ronald Ssengendo, Ivan Bamweyana

Abstract:

The Kampala Physical Development Plan (KPDP) was developed in 2012 and projected both long-term and short-term developments within the City. The purpose of the plan was not only to shape the city into a spatially planned area but also to control the urban sprawl that had expanded, with pronounced instances of informal settlements. This plan was approved by the National Physical Planning Board and signed by the Minister in 2013. Although the KPDP has been implemented using different approaches, such as detailed planning, development control, subdivision planning, construction inspections, greening, and beautification, there is still limited knowledge of the level of conformance to this plan. Therefore, it is yet to be determined whether it has been effective in shaping the City into an ideal spatially planned area. To attain a clear picture of the level of conformance to the KPDP 2012, an evaluation between the planned and the existing 2021 land use in Kampala City was performed. Methods such as supervised classification and post-classification change detection were adopted for this evaluation. The findings revealed that Central Division registered the lowest level of conformance to the planning standards specified in the KPDP 2012, followed by Nakawa, Rubaga, Kawempe, and Makindye. Furthermore, mixed-use development was identified as the land use with the highest level of non-conformity (25.11%), and institutional land use registered the highest level of conformance (84.45%). The results show that the aspect of location was not carefully considered while allocating uses in the KPDP, whereby areas located near the Central Business District have higher land rents and hence require uses that ensure profit maximization. Also, the prominence of development towards mixed use denotes an increased demand for land for compact development that was not catered for in the plan. Therefore, in order to transform Kampala City into a spatially planned area, there is a need to carefully develop detailed plans, especially for all the Central Division planning precincts, indicating considerations for land use densification.
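The post-classification comparison step can be sketched as follows. This is a minimal illustration under assumed inputs: the class labels and pixel lists are invented, not the study's rasters. Per-class conformance is simply the fraction of pixels whose observed 2021 class matches the 2012 planned class.

```python
# Sketch only: per-class conformance from paired planned/observed
# class labels (one entry per pixel, flattened).

def conformance_by_class(planned, observed):
    counts = {}
    for p, o in zip(planned, observed):
        total, match = counts.get(p, (0, 0))
        counts[p] = (total + 1, match + (p == o))
    # Fraction of planned pixels of each class realized on the ground
    return {cls: match / total for cls, (total, match) in counts.items()}

# Illustrative 4-pixel example:
print(conformance_by_class(["inst", "inst", "mixed", "mixed"],
                           ["inst", "res", "mixed", "res"]))
```

The same tally, run over classified 2012-plan and 2021-image rasters, yields figures comparable to the 84.45% (institutional) and 25.11% non-conformity (mixed-use) results reported above.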

Keywords: spatial plan, post classification change detection, Kampala city, land use

Procedia PDF Downloads 92
2009 Renewable Energy Trends Analysis: A Patents Study

Authors: Sepulveda Juan

Abstract:

This article explains the elements and considerations taken into account when applying patent evaluation and scientometric analysis to the identification of technology trends, and the tools that led to the implementation of a software application for patent review. Univariate analysis helped recognize the technological leaders in the field of energy and steered the way for a multivariate analysis of this sample, which allowed for a graphical description of mature technologies as well as the detection of emerging technologies. The article ends with a validation of the methodology as applied to the case of fuel cells.

Keywords: patents, scientometric, renewable energy, technology maps

Procedia PDF Downloads 309
2008 A Simple Olfactometer for Odour and Lateralization Thresholds of Chemical Vapours

Authors: Lena Ernstgård, Aishwarya M. Dwivedi, Johan Lundström, Gunnar Johanson

Abstract:

A simple, inexpensive olfactometer was constructed to enable valid measurement of detection thresholds for low concentrations of chemical vapours. The delivery system consists of seven syringe pumps, each connected to a Tedlar bag containing a predefined concentration of the test chemical in air. The seven pumps are connected to an 8-way mixing valve, which in turn connects to a birhinal nose piece. Chemical vapour of known concentration is generated by injecting an appropriate amount of the test chemical into a Tedlar bag holding a known volume of clean air. Complete vaporization is assured by gently heating the bag from the outside. The six test concentrations are obtained by adding different volumes from the starting bag to six new Tedlar bags with known volumes of clean air; one bag contains clean air only. Thus, six different test concentrations and clean air can easily be tested in series by shifting the valve to new positions. Initial in-line measurement with a photoionization detector showed that the delivery system quickly responded to a shift in valve position: 90% of the desired concentration was reached within 15 seconds. The concentrations in the bags are verified daily by gas chromatography, and the stability of the system in terms of chemical concentration is monitored in real time with a photoionization detector. To determine lateralization thresholds, an additional pump supplying clean air is added to the delivery system so that the nostrils can be separately and interchangeably exposed to clean air and the test chemical. Odour and lateralization thresholds were determined for three aldehydes, acrolein, crotonaldehyde, and hexanal, in 20 healthy naïve individuals. Aldehydes generally have a strong odour, and the selected aldehydes are also considered irritating to mucous membranes. The median odour thresholds of the three aldehydes were 0.017, 0.0008, and 0.097 ppm, respectively. No lateralization threshold could be identified for acrolein, whereas the medians for crotonaldehyde and hexanal were 0.003 and 0.39 ppm, respectively. In conclusion, we constructed a simple, inexpensive olfactometer that provides stable and easily measurable concentrations of test-chemical vapour. Our test with aldehydes demonstrates that the system produces valid odour and lateralization thresholds among volunteers.
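The bag-dilution arithmetic behind the six test concentrations can be sketched as follows; the volumes and concentration used here are illustrative values, not the study's preparation protocol.

```python
# Sketch only: concentration after transferring a volume of stock
# vapour into a bag holding clean air (consistent units throughout).

def diluted_concentration(c_stock, v_stock, v_clean_air):
    """Mass balance for one dilution step: the transferred chemical is
    spread over the combined gas volume."""
    return c_stock * v_stock / (v_stock + v_clean_air)

# Illustrative: 1 L of 100 ppm stock into 99 L of clean air -> 1 ppm
print(diluted_concentration(100.0, 1.0, 99.0))
```

Repeating this with different stock volumes gives the series of test concentrations, which gas chromatography then verifies daily.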

Keywords: irritation, odour delivery, olfactometer, smell

Procedia PDF Downloads 216
2007 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting

Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu

Abstract:

Because of their size, shape, and seating arrangements, large lecture theatres cannot be covered by a single camera, although an ordinary classroom can; a multi-camera setup is required. Therefore, the design and implementation of a multi-camera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance is below standard, especially for large lecture theatres, because of the student population, the time required, the effort involved, and its susceptibility to manipulation. An automated large-classroom attendance system is, therefore, imperative. The common approach in such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face-database updates due to changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face-localization-based approach to detect student faces in classroom images captured using a multi-camera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal of minimizing the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was applied on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an 8% improvement in TPR (a result of the low FRR) and a 7% reduction in the FRR. The proposed approach was also faster, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost. Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
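A hedged sketch of the λ trade-off idea described in this abstract: given detector scores for face and non-face windows (the scores and λ below are invented for illustration, not the authors' training data), a decision threshold can be chosen to minimize FRR + λ·FAR, so that λ < 1 biases the detector toward a low FRR relative to FAR.

```python
# Sketch only: threshold selection under an asymmetric FRR/FAR cost.

def frr_far(threshold, face_scores, nonface_scores):
    # FRR: fraction of true faces rejected (score below threshold)
    frr = sum(s < threshold for s in face_scores) / len(face_scores)
    # FAR: fraction of non-faces accepted (score at/above threshold)
    far = sum(s >= threshold for s in nonface_scores) / len(nonface_scores)
    return frr, far

def best_threshold(face_scores, nonface_scores, lam):
    # Minimize FRR + lam * FAR over the observed scores; lam < 1
    # penalizes rejections more heavily than false accepts.
    candidates = sorted(set(face_scores) | set(nonface_scores))
    def cost(t):
        frr, far = frr_far(t, face_scores, nonface_scores)
        return frr + lam * far
    return min(candidates, key=cost)

# Illustrative scores:
print(best_threshold([0.9, 0.8, 0.7, 0.4], [0.3, 0.2, 0.6], lam=0.5))
```

In the actual system the trade-off adjusts cascade-stage thresholds automatically during training rather than a single post-hoc cutoff, but the cost structure is the same.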

Keywords: automatic attendance, face detection, haar-like cascade, manual attendance

Procedia PDF Downloads 72
2006 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason is insufficient resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in AI technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. This paper also showcases how AI makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNNs) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes of the visual cortex. A neural net is modeled after the human brain: it consists of millions of densely connected processing nodes, and it learns to recognize vehicles through training, a form of machine learning called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. We discuss how the use of AI for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera-model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on data sets acquired under a variety of daily real-world road conditions and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by AI can benefit communities and how to translate the complex and often overwhelming benefits into language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that AI brings to traffic signal control and data collection are unsurpassed.
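As a minimal illustration of the operation at the heart of the CNNs described above, the sketch below implements a "valid" 2-D convolution in pure Python: a kernel slides over an image and the element-wise products are summed at each position. The tiny arrays are invented, and no trained weights are involved; real detectors use deep learning frameworks.

```python
# Sketch only: 2-D "valid" convolution. As in most deep learning
# frameworks, the kernel is applied without flipping (cross-correlation).

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# Illustrative 2x3 image with a 2x2 diagonal kernel:
print(conv2d_valid([[1, 2, 3], [4, 5, 6]], [[1, 0], [0, 1]]))
```

A CNN stacks many such filtered maps with learned kernels, nonlinearities, and pooling; the vehicle-recognition capability comes from training those kernels, not from the convolution itself.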

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 169
2005 Alternative Water Resources and Brominated Byproducts

Authors: Nora Kuiper, Candace Rowell, Hugues Preud'Homme, Basem Shomar

Abstract:

As global dependence on seawater desalination as a primary drinking-water resource increases, a unique class of secondary pollutants is emerging. The presence of bromide salts in seawater may result in increased levels of bromine and brominated byproducts in drinking water. The State of Qatar offers a unique setting to study these pollutants and their impacts on consumers, as the country is 100% dependent on seawater desalination to supply municipal tap water and locally produced bottled water. Tap water (n=115) and bottled water (n=62) samples were collected throughout the State of Qatar and analyzed for a suite of inorganic and organic compounds, including 54 volatile organic compounds (VOCs), with an emphasis on brominated byproducts. All VOC identification and quantification were completed using a Bruker Scion GC-MS/MS with static headspace technology. A risk-survey tool was used to collect information regarding local consumption habits, health outcomes, and perception of water sources for adults and children. This study is the first of its kind in the country. Dibromomethane, bromoform, and bromobenzene were detected in 61%, 88%, and 2%, respectively, of the drinking water samples analyzed. The levels of dibromomethane ranged from approximately 100-500 ng/L, and the concentrations of bromoform ranged from approximately 5-50 µg/L. Additionally, bromobenzene concentrations were 60 ng/L. The presence of brominated compounds in drinking water is a public health concern specific to populations using seawater as a feed-water source and may pose unique risks that have not been previously studied. Risk assessments are ongoing to quantify the risks associated with prolonged consumption of disinfection byproducts, specifically the risks of brominated trihalomethanes, as the levels of bromoform found in Qatar’s drinking water reach more than 60% of the US EPA’s Maximum Contaminant Level for total THMs.

Keywords: brominated byproducts, desalination, trihalomethanes, risk assessment

Procedia PDF Downloads 429
2004 KAP Study on Breast Cancer Among Women in Nirmala Educational Institutions-A Prospective Observational Study

Authors: Shaik Asha Begum, S. Joshna Rani, Shaik Abdul Rahaman

Abstract:

INTRODUCTION: Breast cancer is a disease that develops in breast cells. A "KAP" study estimates the Knowledge, Attitudes, and Practices of a community. More than 1.5 million women (25% of all women with cancer) are diagnosed with breast cancer every year throughout the world. Understanding the levels of knowledge, attitude, and practice will enable a more effective process of awareness creation, as it will allow the program to be tailored more appropriately to the needs of the community. OBJECTIVES: The objectives of this study are to assess knowledge of the signs, symptoms, and risk factors of breast cancer, to raise awareness of the practice of early detection techniques, and to provide knowledge of breast cancer overall, including preventive techniques. METHODOLOGY: This is a descriptive cross-sectional study of KAP conducted in the Nirmala Educational Institutions from January to April 2021. A total of 300 participants were included: women students and lecturers in pharmacy, as well as graduates from outside pharmacy. The questionnaires were taken from the Breast Cancer Awareness Measure (BCAM) toolkit (Version 2). RESULTS: According to the findings of the study, the majority of the participants were not well informed about breast cancer. A lump in the breast was the most commonly mentioned sign of breast cancer, followed by pain in the breast or nipple. Knowledge of breast cancer risk factors was also very low. The correct answers for breast cancer risk factors were radiation exposure (58.2%), a positive family history (47.6%), obesity (46.9%), a lack of physical activity (43.6%), and smoking (43.2%). Breast cancer screening, on the other hand, was uncommon (only 30% and 11.3% practiced clinical breast examination and mammography, respectively). CONCLUSION: In this study, pharmacy graduates had more knowledge of the signs, symptoms, and risk factors of breast cancer than non-pharmacy graduates, but both pharmacy and non-pharmacy graduates had poor knowledge of preventive techniques and early detection tools. After the awareness program, pharmacy and non-pharmacy graduates gained supporting knowledge of preventive techniques and also practiced early breast cancer detection techniques.

Keywords: breast cancer, mammography, KAP study, early detection

Procedia PDF Downloads 138
2003 Effective Training System for Riding Posture Using Depth and Inertial Sensors

Authors: Sangseung Kang, Kyekyung Kim, Suyoung Chi

Abstract:

A good posture is the most important factor in riding. In this paper, we present an effective posture-correction system for a riding-simulator environment that provides position-error detection and customized training functions. The proposed system detects and analyzes the rider's posture using depth data and inertial sensing data. Our experiments show that these functions help users improve their seat while riding.

Keywords: posture correction, posture training, riding posture, riding simulator

Procedia PDF Downloads 476
2002 A Comprehensive Framework for Fraud Prevention and Customer Feedback Classification in E-Commerce

Authors: Samhita Mummadi, Sree Divya Nagalli, Harshini Vemuri, Saketh Charan Nakka, Sumesh K. J.

Abstract:

One of the most significant challenges people face in today’s digital era is the alarming increase in fraudulent activities on online platforms. The appeal of online shopping (avoiding long queues in shopping malls, a wide variety of products, and home delivery of goods) has driven the rapid growth of vast online shopping platforms. This has also had a major impact on the growth of fraudulent activity. For instance, consider a store that orders thousands of products all at once, where the massive number of items purchased and their transactions turn out to be fraudulent, leading to a huge loss for the seller. Scenarios like these underscore the urgent need for machine learning approaches to combat fraud in online shopping. By leveraging robust algorithms, namely KNN, Decision Trees, and Random Forest, which are highly effective at generating accurate results, this research endeavors to discern patterns indicative of fraudulent behavior within transactional data. The primary motive and main focus is a comprehensive solution that empowers e-commerce administrators in timely fraud detection and prevention. In addition, sentiment analysis is harnessed in the model so that the e-commerce administrator can respond to customers' concerns, feedback, and comments, allowing the administrator to improve the user experience. The ultimate objective of this study is to harden online shopping platforms against fraud and ensure a safer shopping experience. This paper reports a model accuracy of 84%. The findings and observations noted during this work lay the groundwork for future advancements in the development of more resilient and adaptive fraud detection systems, which will become crucial as technologies continue to evolve.
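A minimal sketch of the KNN step named above, in pure Python. The transaction features (amount, item count), labels, and k are illustrative assumptions; the study's actual pipeline, features, and dataset are not reproduced here.

```python
# Sketch only: k-nearest-neighbours vote over labeled transactions
# (label 1 = fraudulent, 0 = legitimate).
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    # Label x by majority vote among the k closest training transactions
    # under Euclidean distance in feature space.
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Illustrative (amount, item_count) transactions:
X = [(10, 1), (15, 2), (5000, 300), (4000, 250), (12, 1), (4500, 280)]
y = [0, 0, 1, 1, 0, 1]
print(knn_predict(X, y, (4800, 290)))  # bulk order resembling the frauds
```

In practice features would be scaled before distance computation, and the Decision Tree and Random Forest models mentioned above would be trained on the same labeled data for comparison.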

Keywords: behavior analysis, feature selection, fraudulent pattern recognition, imbalanced classification, transactional anomalies

Procedia PDF Downloads 27
2001 Enhancing Sewage Sludge Management through Integrated Hydrothermal Liquefaction and Anaerobic Digestion: A Comparative Study

Authors: Harveen Kaur Tatla, Parisa Niknejad, Rajender Gupta, Bipro Ranjan Dhar, Mohd. Adana Khan

Abstract:

Sewage sludge management presents a pressing challenge in the realm of wastewater treatment, calling for sustainable and efficient solutions. This study explores the integration of Hydrothermal Liquefaction (HTL) and Anaerobic Digestion (AD) as a promising approach to address the complexities associated with sewage sludge treatment. The integration of these two processes offers a complementary and synergistic framework, allowing for the mitigation of inherent limitations, thereby enhancing overall efficiency, product quality, and the comprehensive utilization of sewage sludge. In this research, we investigate the optimal sequencing of HTL and AD within the treatment framework, aiming to discern which sequence, whether HTL followed by AD or AD followed by HTL, yields superior results. We explore a range of HTL working temperatures, including 250°C, 300°C, and 350°C, coupled with residence times of 30 and 60 minutes. To evaluate the effectiveness of each sequence, a battery of tests is conducted on the resultant products, encompassing Total Ammonia Nitrogen (TAN), Chemical Oxygen Demand (COD), and Volatile Fatty Acids (VFA). Additionally, elemental analysis is employed to determine which sequence maximizes energy recovery. Our findings illuminate the intricate dynamics of HTL and AD integration for sewage sludge management, shedding light on the temperature-residence time interplay and its impact on treatment efficiency. This study not only contributes to the optimization of sewage sludge treatment but also underscores the potential of integrated processes in sustainable waste management strategies. The insights gleaned from this research hold promise for advancing the field of wastewater treatment and resource recovery, addressing critical environmental and energy challenges.

Keywords: Anaerobic Digestion (AD), aqueous phase, energy recovery, Hydrothermal Liquefaction (HTL), sewage sludge management, sustainability

Procedia PDF Downloads 81
2000 Mother as Troubles Teller: A Discourse Analytic Case Study of Mother-Adolescent Daughter Interaction

Authors: Domenica L. DelPrete

Abstract:

Viewed as a type of rapport-talk, troubles telling is a common conversational practice among female friends who wish to establish connection, show empathy, or share a disconcerting experience. This study shows how troubles talk between a mother and her adolescent daughter has a different interactional outcome. Specifically, it reveals how discursive interaction with an adolescent daughter becomes increasingly volatile when the mother steps out of the role of nurturer and into the role of troubles teller. Naturally occurring interactions between a mother and her 15-year-old daughter were videotaped in their family home over a two-week period. The data were primarily analyzed from an interactional sociolinguistic perspective, using conversation analytic techniques for transcriptions and discursive analysis. The following questions guided this research: (1) How is troubles telling discursively accomplished in the everyday talk of a mother and her adolescent daughter? and (2) What topic prompts the mother to engage in troubles talk? The data show that the mother engages her daughter in troubles talk on issues related to body image and physical appearance and does so by (1) repeated questioning, (2) not accepting the daughter’s response as adequate, and (3) proffering self-deprecation. Findings reveal that engaging an adolescent daughter in a conversational practice reserved for female friendship groups creates a negative connection and relational disharmony. Since 'telling one’s troubles' assumes an egalitarian relationship between individuals, the mother’s troubles telling creates a peer-like interaction that the adolescent daughter repeatedly resists. This study also proposes a discursive consciousness raising, which aims to enhance communication between mothers and daughters by revealing the signals that show an adolescent daughter’s unwillingness to participate in troubles talk. Being attuned to these cues may prompt mothers to hesitate before pursuing a topic that will not garner the positive interactional outcome they seek.

Keywords: discursive interaction, maternal roles, mother-daughter interaction, troubles telling

Procedia PDF Downloads 131
1999 A Comparative Study on Biochar from Slow Pyrolysis of Corn Cob and Cassava Wastes

Authors: Adilah Shariff, Nurhidayah Mohamed Noor, Alexander Lau, Muhammad Azwan Mohd Ali

Abstract:

Biomass such as corn and cassava wastes, if left to decay, will release significant quantities of greenhouse gases (GHG), including carbon dioxide and methane. These biomass wastes can be converted into biochar via a thermochemical process such as slow pyrolysis. This approach can reduce the biomass wastes as well as preserve their carbon content. Biochar has the potential to be used as a carbon sequester and soil amendment. The aim of this study is to investigate the characteristics of corn cob, cassava stem, and cassava rhizome in order to identify their potential as pyrolysis feedstocks for biochar production. This was achieved by using proximate and elemental analyses as well as calorific value and lignocellulosic determination. The second objective is to investigate the effect of pyrolysis temperature on the biochar produced. A fixed-bed slow pyrolysis reactor was used to pyrolyze the corn cob, cassava stem, and cassava rhizome. The pyrolysis temperature was varied between 400 °C and 600 °C, while the heating rate and the holding time were fixed at 5 °C/min and 1 hour, respectively. Corn cob, cassava stem, and cassava rhizome were found to be suitable feedstocks for the pyrolysis process because they contained a high percentage of volatile matter, more than 80 mf wt.%. All three feedstocks contained low nitrogen and sulphur content, less than 1 mf wt.%. Therefore, during the pyrolysis process, the feedstocks give off very low rates of pollutant gases such as nitrogen oxides and sulphur oxides. Independent of the type of biomass, the biochar yield is inversely proportional to the pyrolysis temperature. The highest biochar yield at each studied temperature is from slow pyrolysis of cassava rhizome, as this feedstock contained the highest percentage of ash compared to the other two feedstocks. The percentage of fixed carbon in all the biochars increased as the pyrolysis temperature increased. Raising the pyrolysis temperature from 400 °C to 600 °C increased the fixed carbon of corn cob biochar, cassava stem biochar, and cassava rhizome biochar by 26.35%, 10.98%, and 6.20%, respectively. Irrespective of the pyrolysis temperature, all the biochars produced were found to contain more than 60 mf wt.% fixed carbon, much higher than that of their feedstocks.

Keywords: biochar, biomass, cassava wastes, corn cob, pyrolysis

Procedia PDF Downloads 299
1998 Pervasive Computing: Model to Increase Arable Crop Yield through Intrusion Detection System (IDS)

Authors: Idowu Olugbenga Adewumi, Foluke Iyabo Oluwatoyinbo

Abstract:

Presently, there is much discussion of food security and of increasing the yield of arable crops throughout the world. This article briefly presents research efforts to create digital interfaces to nature, in particular in the area of crop production in agriculture, with an interest in pervasive computing. The approach goes beyond the use of sensor networks for environmental monitoring by emphasizing the development of a system architecture that detects intruders (the intrusion process), which reduce the farmer's yield at the end of the planting/harvesting period. The objective of the work is to propose a model for setting up a handheld or portable device for increasing the quality and quantity of arable crops. The process incorporates the use of an infrared motion image sensor with a security alarm system, which can send a noise signal to intruders on the farm. This model of a portable image-sensing device for monitoring or scaring off humans, rodents, birds, and even pests will reduce post-harvest loss and thereby increase the yield on the farm. Nano intelligence technology is proposed to combat and minimize the intrusion process that usually leads to low quality and quantity of farm produce. An intranet system will be in place with a wireless network (WLAN), router, server, and client computer system or handheld device, e.g., PDAs or mobile phones. This approach enables the development of hybrid systems that will be effective as a security measure on the farm, since precision agriculture has developed with the computerization of agricultural production systems and the networking of computerized control systems. In the intelligent plant production systems of controlled greenhouses, information on plant responses, measured by sensors, is used to optimize the system. Further work must be carried out on modeling using a pervasive computing environment to solve problems of agriculture, as the use of electronics in agriculture will attract more youth involvement in the industry.

Keywords: pervasive computing, intrusion detection, precision agriculture, security, arable crop

Procedia PDF Downloads 403
1997 Deleterious SNP Detection Using Machine Learning

Authors: Hamza Zidoum

Abstract:

This paper investigates the impact of human genetic variation on the function of human proteins using machine-learning algorithms. Single-nucleotide polymorphisms represent the most common form of human genome variation. We focus on single amino-acid polymorphisms located in the coding region, as they can affect protein function, leading to pathologic phenotypic change. We use several supervised machine-learning methods to identify structural properties correlated with an increased risk of a missense mutation being damaging. SVM combined with Principal Component Analysis gives the best performance.
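The abstract names SVM with PCA as the best-performing combination. A minimal scikit-learn sketch of that pipeline on synthetic stand-in features (the real structural features, e.g. solvent accessibility or conservation scores, are not given in the abstract):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-variant structural features; labels mark
# damaging (1) vs. tolerated (0) missense mutations.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Standardize, reduce with PCA, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The PCA step decorrelates and denoises the feature space before the SVM, which is one plausible reading of why the combination outperforms the other methods tested.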

Keywords: single-nucleotide polymorphism, machine learning, feature selection, SVM

Procedia PDF Downloads 378
1996 Determination of Four Anions in the Ground Layer of Tomb Murals by Ion Chromatography

Authors: Liping Qiu, Xiaofeng Zhang

Abstract:

The ion chromatography method for the rapid determination of four anions (F⁻, Cl⁻, SO₄²⁻, NO₃⁻) in the ground layer of tomb murals was optimized. An L₉(3⁴) orthogonal test was used to determine the optimal parameters for sample pretreatment: accurately weigh 2.000 g of sample, add 10 mL of ultrapure water, and extract for 40 min at a shaking temperature of 40 °C and a shaking speed of 180 r·min⁻¹. The eluent was 25 mmol/L KOH solution, the analytical column was an IonPac® AS11-SH (250 mm × 4.0 mm), and the purified filtrate was measured with a conductivity detector. Under this method, the detection limit of each ion is 0.066~0.078 mg/kg, the relative standard deviation is 0.86%~2.44% (n=7), and the recovery rate is 94.6%~101.9%.
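The figures of merit quoted above (detection limit, relative standard deviation, spike recovery) follow standard analytical definitions. A short sketch with hypothetical readings, not the paper's data:

```python
import statistics

# Hypothetical IC readings (mg/kg); placeholders, not the study's values.
blank_runs = [0.010, 0.012, 0.009, 0.011, 0.010, 0.013, 0.011]  # 7 blanks
measured_unspiked = 0.95   # native Cl- in the sample
measured_spiked = 1.92     # after spiking 1.00 mg/kg of Cl-
added = 1.00

# Spike recovery: fraction of the added analyte the method finds.
recovery_pct = (measured_spiked - measured_unspiked) / added * 100

# Detection limit estimated as 3x the standard deviation of the blanks.
lod = 3 * statistics.stdev(blank_runs)

# Repeatability as relative standard deviation of replicates (n = 7).
replicates = [0.95, 0.97, 0.94, 0.96, 0.95, 0.98, 0.96]
rsd_pct = statistics.stdev(replicates) / statistics.mean(replicates) * 100
```

With these placeholder numbers the recovery lands near 97% and the RSD near 1.4%, i.e. inside the ranges the abstract reports for the optimized method.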

Keywords: ion chromatography, tomb, anion (F⁻, Cl⁻, SO₄²⁻, NO₃⁻), environmental protection

Procedia PDF Downloads 102
1995 Genetic Diversity of Norovirus Strains in Outpatient Children from Rural Communities of Vhembe District, South Africa, 2014-2015

Authors: Jean Pierre Kabue, Emma Meader, Afsatou Ndama Traore, Paul R. Hunter, Natasha Potgieter

Abstract:

Norovirus is now considered the most common cause of outbreaks of nonbacterial gastroenteritis. Limited data are available for Norovirus strains in Africa, especially in rural and peri-urban areas. Despite the excessive burden of diarrheal disease in developing countries, Norovirus infections have to date been mostly reported in developed countries. There is a need to intensively investigate the role of viral agents associated with diarrhea in different settings on the African continent. To determine the prevalence and genetic diversity of Norovirus strains circulating in rural communities in the Limpopo Province, South Africa, and to investigate the genetic relationship between Norovirus strains, a cross-sectional study was performed on human stools collected from rural communities. Between July 2014 and April 2015, outpatient children under 5 years of age from rural communities of Vhembe District, South Africa, were enrolled in the study. A total of 303 stool specimens were collected from those with diarrhea (n=253) and without diarrhea (n=50). NoVs were identified using real-time one-step RT-PCR. Partial sequence analyses were performed to genotype the strains. Phylogenetic analyses were performed to compare identified NoV genotypes to worldwide circulating strains. The Norovirus detection rate was 41.1% (104/253) in children with diarrhea. There was no significant difference (OR=1.24; 95% CI 0.66-2.33) in Norovirus detection between symptomatic and asymptomatic children. Comparison of the median CT values for NoV in children with and without diarrhea revealed a statistically significant difference in estimated GII viral load between the two groups, with a much higher viral burden in children with diarrhea. To our knowledge, this is the first study reporting on the differences in estimated GII and GI viral load between NoV-positive cases and controls.
GII.Pe (n=9) was the predominant genotype, followed by the suspected recombinant GII.Pe/GII.4 Sydney 2012 (n=8) and GII.4 Sydney 2012 variants (n=7). Two unassigned GII.4 variants and an unusual RdRp genotype, GII.P15, were found. Of note, the rare GII.P15 identified in this study has a common ancestor with a GII.P15 strain from Japan previously reported as a GII/untypeable recombinant strain implicated in a gastroenteritis outbreak. To our knowledge, this is the first report of this unusual genotype on the African continent. Though not confirmed as predictive of diarrheal disease in this study, the high detection rate of NoV is an indication of the exposure of children from rural communities to enteric pathogens due to poor sanitation and hygiene practices. The results reveal that the difference between asymptomatic and symptomatic children with NoV may possibly be related to the NoV genogroups involved. The findings emphasize NoV genetic diversity and the predominance of GII.Pe/GII.4 Sydney 2012, indicative of increased NoV activity. An uncommon GII.P15 and two unassigned GII.4 variants were also identified in rural settings of the Vhembe District, South Africa. NoV surveillance is required to help inform investigations into NoV evolution and to support vaccine development programmes in Africa.
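The reported odds ratio and confidence interval can be reproduced from a 2×2 table. The sketch below uses the abstract's 104/253 symptomatic positives together with an assumed asymptomatic split (18 of 50) chosen to be consistent with the reported OR = 1.24 (95% CI 0.66-2.33); the asymptomatic counts are not stated in the abstract:

```python
from math import exp, log, sqrt

# 2x2 table: NoV-positive / NoV-negative in symptomatic vs. asymptomatic
# children. 104/253 is from the abstract; 18/50 is an assumption.
a, b = 104, 253 - 104   # symptomatic: positive, negative
c, d = 18, 50 - 18      # asymptomatic: positive, negative (assumed)

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval computed on the log-odds scale.
se = sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = exp(log(odds_ratio) - 1.96 * se)
ci_high = exp(log(odds_ratio) + 1.96 * se)
```

A CI spanning 1 (0.66-2.33 here) is what makes the case/control difference non-significant, as the abstract states.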

Keywords: asymptomatic, common, outpatients, norovirus genetic diversity, sporadic gastroenteritis, South African rural communities, symptomatic

Procedia PDF Downloads 195
1994 qPCR Method for Detection of Halal Food Adulteration

Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik

Abstract:

Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been increasingly appearing in the EU's market network, and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. The replacement of expensive or unavailable types of meat with low-priced meat has occurred on a global scale for a long time. Likewise, halal products may be contaminated (falsified) by pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifier, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of meat products - color, aroma, flavor, consistency, and texture - or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly due to their ready availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis.
qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses the problem that the molecular biological approach of most work on the identification and quantification of animal species is based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because when DNA is fragmented, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce the variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsifying meat products by the addition of components derived from pigs is all the greater in that it is not just an economic matter but above all a religious and social one. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).
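Absolute quantification against a calibration curve, as described above, is conventionally done by regressing Cq on log₁₀ copy number. A sketch with hypothetical standard-curve values (not the study's assay):

```python
import numpy as np

# Hypothetical standard curve: Cq values for 10-fold dilutions of a
# pork-specific target (copies per reaction); ~-3.32 Cq per decade.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
cq = np.array([33.36, 30.04, 26.71, 23.39, 20.07])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)

# Amplification efficiency: E = 10^(-1/slope) - 1 (1.0 means 100%,
# i.e. perfect doubling each cycle).
efficiency = 10 ** (-1 / slope) - 1

# Absolute quantification of an unknown sample from its Cq.
cq_unknown = 28.0
copies_unknown = 10 ** ((cq_unknown - intercept) / slope)
```

A slope near -3.32 (efficiency near 100%) is the usual acceptance criterion before a curve is trusted for quantification; fragmented DNA and long amplicons, as the abstract notes, degrade exactly this efficiency.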

Keywords: food fraud, halal food, pork, qPCR

Procedia PDF Downloads 247
1993 Effect of an Oral Dose of M. elsdenii NCIMB 41125 on Lower Digestive Tract, Bacteria Count and Rumen Fermentation in Holstein Calves

Authors: M. C. Muya, L. J. Erasmus

Abstract:

Twenty-four newborn male Holstein calves were divided into two treatment groups and used to evaluate the effects of M. elsdenii NCIMB 41125. The first group was dosed with 50 mL containing 10⁸ CFU/mL of M. elsdenii NCIMB 41125 (Me), and the control calves were not dosed. Within each of the two treatment groups, calves were divided into three subgroups by day of dosing (not dosed: 7 d, 14 d, and 21 d vs. dosed: Me7 d, Me14 d, and Me21 d); each subgroup contained 4 calves, of which two were euthanized at 24 h and two at 72 h. Calves remained on trial until euthanasia at either 24 or 72 h after dosing. After receiving colostrum for 3 consecutive days after birth, calves were fed whole milk and had free access to a commercial calf starter pellet and fresh water. Fecal grab samples were taken from each calf in duplicate at +24 h or +72 h relative to dosing. Immediately after euthanasia, the digestive tract was harvested, and duplicate rumen and colon digesta samples were collected for VFA determination and DNA extraction for bacterial counting using a 16S rRNA PCR probe technique. Independent two-sample t-tests were performed to compare mean volatile fatty acids. Mixed-effects linear regressions were performed to establish relationships between (1) M. elsdenii and Me and (2) VFAs and Me, using SAS (2009). M. elsdenii NCIMB 41125 was detected in the faeces, colon, and rumen of dosed calves at both +24 h and +72 h, ranging from 1.6 × 10⁶ to 4.9 × 10⁹ CFU/mL, indicating its potential to colonize the digestive tract of calves. There was a strong positive relationship (R² = 0.96; P < 0.0001) between M. elsdenii NCIMB 41125 and the M. elsdenii population (CFU/mL) in the rumen, suggesting that the increase in M. elsdenii was due to the increased M. elsdenii NCIMB 41125. An increase in butyrate was observed from +24 h to +72 h when calves were dosed on both d 7 and d 14.
Results showed that Me presented a positive relationship with butyrate (P < 0.001, R² = 0.43) and a concomitant negative relationship with acetate (P = 0.017, r = -0.33). These results suggest that dosing pre-weaned dairy calves with M. elsdenii NCIMB 41125 has the potential to alter ruminal VFA production by increasing the proportion of butyrate at the expense of propionate.

Keywords: calves, megasphaera elsdenii, rumen fermentation, bacteria

Procedia PDF Downloads 394
1992 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many traditional facial expression and emotion recognition methods based on LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information; in this way, we also leverage long-range dependencies, which are one of the main drawbacks of CNNs. We develop this work further by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax reaches the gold labels too soon, which drives the model toward over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic, rather than static, input tensor shape in the SoftMax layer with a specified soft margin; in effect, the margin controls how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging falsely predicted ones, i.e., assigning more weight to classes that are easily confused with one another (namely, "hard labels to learn").
This constrains the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits weight decay to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on the CK+ dataset, providing top performance relative to networks that require much larger training datasets.
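The abstract does not give the exact formulation of the Dynamic Soft-Margin SoftMax. As one plausible instantiation, a minimal numpy sketch of a margin-penalized softmax cross-entropy, where raising the margin forces larger class separation before the loss drops:

```python
import numpy as np

def softmax_ce(logits, target, margin=0.0):
    """Cross-entropy with an additive margin on the target logit: the
    target class must beat the others by at least `margin` before the
    loss gets small, pushing dissimilar embeddings further apart."""
    z = np.asarray(logits, dtype=float).copy()
    z[target] -= margin          # handicap the correct class's logit
    z -= z.max()                 # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target])

# In the proposed method the margin would be scheduled ("dynamic") over
# training; here it is a fixed hyperparameter for illustration.
loss_plain = softmax_ce([2.0, 1.0, 0.1], target=0)
loss_margin = softmax_ce([2.0, 1.0, 0.1], target=0, margin=0.35)
```

With the margin applied, the same logits yield a strictly higher loss, which is the mechanism that keeps gradients flowing for "hard labels to learn" instead of saturating early.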

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 74
1991 Evaluation of Antimicrobial Susceptibility Profile of Urinary Tract Infections in Massoud Medical Laboratory: 2018-2021

Authors: Ali Ghorbanipour

Abstract:

The aim of this study is to investigate the drug resistance pattern and the value of the MIC (minimum inhibitory concentration) method in reducing the impact of infectious diseases and slowing the development of resistance. Method: The study was conducted on clinical specimens collected between 2018 and 2021. Identification of isolates and antibiotic susceptibility testing were performed using conventional biochemical tests. Antibiotic resistance was determined using Kirby-Bauer disk diffusion and MIC by the E-test method, compared with the microdilution plate (ELISA-reader) method. Results were interpreted according to CLSI. Results: Out of 249600 different clinical specimens, 18720 different pathogenic bacteria were detected, an overall detection ratio of 7.7%. Among the pathogenic bacteria were Gram-negative bacteria (70%, n=13000) and Gram-positive bacteria (30%, n=5720). Medically relevant Gram-negative bacteria included a multitude of species, such as E. coli, Klebsiella spp., Pseudomonas aeruginosa, Acinetobacter spp., and Enterobacter spp.; the Gram-positive bacteria Staphylococcus spp., Enterococcus spp., and Streptococcus spp. were also isolated. Conclusion: Our results highlight that the resistance ratio among Gram-negative and Gram-positive bacteria across different infections is high. This suggests constant screening and follow-up programs for the detection of antibiotic resistance, and underlines the value of MIC-based drug susceptibility reporting, which provides a new way to use otherwise-resisted antibiotics in combination with other antibiotics, or to dose accurately those antibiotics that inhibit or kill bacteria. Evaluating the role of wrong medication in the expansion of resistance and the side effects of antibiotic overuse are further goals. Ali Ghorbanipour is presently working as a supervisor in the microbiology department of Massoud Medical Laboratory, Iran. Earlier, he worked as head of the department of pulmonary infection at Firoozgar Hospital, Iran. He received a master's degree in 2012 from Fergusson College. His research's prime objective is biologic wound dressings; to his credit, he has published 10 articles in various international congresses by presenting posters.

Keywords: antimicrobial profile, MIC & MBC Method, microplate antimicrobial assay, E-test

Procedia PDF Downloads 133
1990 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to its perennial global concern. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times leading to their adoption in decision-making in diverse applications including disaster management. 
This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 72
1989 Effect of Environmental Parameters on the Water Solubility of the Polycyclic Aromatic Hydrocarbons and Derivatives using Taguchi Experimental Design Methodology

Authors: Pranudda Pimsee, Caroline Sablayrolles, Pascale De Caro, Julien Guyomarch, Nicolas Lesage, Mireille Montréjaud-Vignoles

Abstract:

The MIGR’HYCAR research project was initiated to provide decisional tools for risks connected to oil spill drifts in continental waters. These tools aim to serve in the decision-making process once oil spill pollution occurs and/or as reference tools to study scenarios of potential impacts of pollution on a given site. This paper focuses on the study of the distribution of polycyclic aromatic hydrocarbons (PAHs) and derivatives from oil spills in water as a function of environmental parameters. Eight petroleum oils covering a representative range of commercially available products were tested. 41 polycyclic aromatic hydrocarbons (PAHs) and derivatives, among them 16 EPA priority pollutants, were studied by dynamic tests at laboratory scale. The chemical profile of the water-soluble fraction was different from the parent oil profile due to the varying water solubility of oil components. Semi-volatile compounds (naphthalenes) constitute the major part of the water-soluble fraction. A large variation in the composition of the water-soluble fraction was highlighted depending on oil type. Moreover, four environmental parameters (temperature, suspended solid quantity, salinity, and oil:water surface ratio) were investigated with the Taguchi experimental design methodology. The results showed that the oils divide into three groups: the solubility of domestic fuel and Jet A1 presented a high sensitivity to the parameters studied, meaning they must be taken into account. For gasoline (SP95-E10) and diesel fuel, a medium sensitivity to the parameters was observed. The four other oils showed low sensitivity to the parameters studied. Finally, three parameters were found to be significant towards the water-soluble fraction.
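A Taguchi screening of four three-level factors, as described above, is typically run on an L9(3⁴) orthogonal array followed by main-effect analysis. A sketch with placeholder responses (the factor order - temperature, suspended solids, salinity, oil:water ratio - and the response values are illustrative, not the study's data):

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (0-2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical water-soluble-fraction responses for the 9 runs (mg/L).
y = np.array([3.1, 3.4, 3.9, 2.8, 3.6, 3.3, 2.5, 3.0, 3.7])

def main_effects(design, response):
    """Mean response at each level of each factor."""
    return np.array([[response[design[:, f] == lvl].mean()
                      for lvl in range(3)] for f in range(design.shape[1])])

effects = main_effects(L9, y)
# Factors whose level means span the widest range matter most; this is
# how the study could flag three of the four parameters as significant.
ranges = effects.max(axis=1) - effects.min(axis=1)
```

The orthogonality of the array (each level of each factor appears exactly three times, balanced against the others) is what lets 9 runs stand in for the full 3⁴ = 81 combinations.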

Keywords: monitoring, PAHs, water soluble fraction, SBSE, Taguchi experimental design

Procedia PDF Downloads 325
1988 Yields and Composition of the Gas, Liquid and Solid Fractions Obtained by Conventional Pyrolysis of Different Lignocellulosic Biomass Residues

Authors: María del Carmen Recio-Ruiz, Ramiro Ruiz-Rosas, Juana María Rosas, José Rodríguez-Mirasol, Tomás Cordero

Abstract:

Nowadays, fossil resources are the main precursors for fuel production. Due to their contribution to the greenhouse effect and their future depletion, there is a constant search for environmentally friendly feedstock alternatives. Biomass residues constitute an interesting replacement for fossil resources because of their zero net CO₂ emissions. One of the main routes to convert biomass into energy and chemicals is pyrolysis. In this work, the conventional pyrolysis of different, highly available biomass residues, such as almond shells, hemp hurds, olive stones, and Kraft lignin, was studied. In a typical experiment, the biomass was crushed and loaded into a fixed-bed reactor under a continuous nitrogen flow. The influence of temperature (400-800 ºC) and heating rate (10 and 20 ºC/min) on the pyrolysis yield and the composition of the different fractions was studied. In every case, the mass yields revealed that the solid fraction decreased with temperature, while the liquid and gas fractions increased due to depolymerization and cracking reactions at high temperatures. The composition of every pyrolysis fraction was studied in detail. The results showed that the gas fraction consisted mainly of CO and CO₂ when working at low temperatures, and mostly of CH₄ and H₂ at high temperatures. The solid fraction developed an incipient microporosity, with a narrow-micropore volume of 0.21 cm³/g. Regarding the liquid fraction, pyrolysis of almond shells, hemp hurds, and olive stones led mainly to a high content of aliphatic acids and furans, due to the high volatile matter content of these biomasses (>74 wt.%), and of phenols to a lesser degree, which were formed by the degradation of lignin at higher temperatures. However, when Kraft lignin was used as the bio-oil precursor, phenols were very prominent, and aliphatic compounds were also detected to a lesser extent.

Keywords: bio-oil, biomass, conventional pyrolysis, lignocellulosic

Procedia PDF Downloads 134
1987 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves machine learning and deep learning techniques. Six machine learning regression models, namely support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to improve generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding glucose concentration references are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices such as blood.

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 94