Search results for: spatial audio processing

808 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods. This ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
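Since the calibration described above relies on the Kling-Gupta Efficiency, a minimal sketch of the KGE computation is given below for reference; the discharge series are illustrative placeholders, not the study's catchment data.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency between simulated and observed discharge series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

# Illustrative daily discharge series (m^3/s); replace with real catchment data.
obs = np.array([5.2, 4.8, 4.1, 3.9, 6.5, 7.0, 5.5])
sim = np.array([5.0, 4.9, 4.3, 3.6, 6.1, 7.4, 5.2])
print(f"KGE = {kge(sim, obs):.3f}")        # 1.0 indicates a perfect fit
```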

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 41
807 BiFormerDTA: Structural Embedding of Protein in Drug Target Affinity Prediction Using BiFormer

Authors: Leila Baghaarabani, Parvin Razzaghi, Mennatolla Magdy Mostafa, Ahmad Albaqsami, Al Warith Al Rushaidi, Masoud Al Rawahi

Abstract:

Predicting the interaction between drugs and their molecular targets is pivotal for advancing drug development processes. Due to time and cost limitations, computational methods have emerged as an effective approach to drug-target interaction (DTI) prediction. Most of the existing computational approaches utilize the drug molecule and protein sequence as input. This study not only utilizes these inputs but also introduces a protein representation developed using a masked protein language model. In this representation, for every individual amino acid residue within the protein sequence, there exists a corresponding probability distribution that indicates the likelihood of each amino acid being present at that particular position. Then, the similarity between each pair of amino acids is computed to create a similarity matrix. To encode the knowledge of the similarity matrix, Bi-Level Routing Attention (BiFormer) is utilized, which combines aspects of transformer-based models with protein sequence analysis and represents a significant advancement in the field of drug-protein interaction prediction. BiFormer has the ability to pinpoint the most effective regions of the protein sequence that are responsible for facilitating interactions between the protein and drugs, thereby enhancing the understanding of these critical interactions. Thus, it appears promising in its ability to capture the local structural relationships of the protein, enhancing the understanding of how they contribute to drug-protein interactions and thereby facilitating more accurate predictions. To evaluate the proposed method, it was tested on two widely recognized datasets: Davis and KIBA. A comprehensive series of experiments was conducted to illustrate its effectiveness in comparison to cutting-edge techniques.
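As a rough illustration of the representation described above, the sketch below builds a residue-residue similarity matrix from per-position amino-acid probability distributions; the random placeholder distributions stand in for masked-language-model outputs, and the cosine-similarity choice and array shapes are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for masked-protein-language-model output:
# one probability distribution over 20 amino acids per residue position.
seq_len, n_aa = 8, 20
logits = rng.normal(size=(seq_len, n_aa))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax

# Pairwise similarity between residue positions (cosine similarity of distributions).
unit = probs / np.linalg.norm(probs, axis=1, keepdims=True)
similarity = unit @ unit.T                 # (seq_len, seq_len) similarity matrix

print(similarity.shape)                    # (8, 8)
print(np.round(similarity[:3, :3], 3))
```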

Keywords: BiFormer, transformer, protein language processing, self-attention mechanism, binding affinity, drug target interaction, similarity matrix, protein masked representation, protein language model

Procedia PDF Downloads 8
806 Lexical-Semantic Deficits in Sinhala Speaking Persons with Post Stroke Aphasia: Evidence from Single Word Auditory Comprehension Task

Authors: D. W. M. S. Samarathunga, Isuru Dharmarathne

Abstract:

In aphasia, various levels of symbolic language processing (semantics) are affected. It has been shown that Persons with Aphasia (PWA) often experience more problems comprehending some categories of words than others. The study aimed to determine the lexical-semantic deficits seen in Auditory Comprehension (AC) and to describe lexical-semantic deficits across six selected word categories. Thirteen (n = 13) persons diagnosed with post-stroke aphasia (PSA) were recruited to perform an AC task. Foods, objects, clothes, vehicles, body parts and animals were selected as the six categories. As the test stimuli, black and white line drawings were adapted from a picture set developed for semantic studies by Snodgrass and Vanderwart. A pilot study was conducted with five (n = 5) healthy non-brain-damaged Sinhala speaking adults to determine the familiarity and applicability of the test material. In the main study, participants were scored based on the accuracy and number of errors shown. The results indicate trends of lexical-semantic deficits similar to those identified in the literature, confirming ‘animals’ to be the easiest category to comprehend. A Mann-Whitney U test was performed to determine the association between the selected variables and the participants’ performance on the AC task. No statistically significant association was found between the errors and the type of aphasia, reflecting similar patterns described in the aphasia literature for other languages. The current study indicates the presence of selectivity of lexical-semantic deficits in AC, and a hierarchy was developed based on the complexity of the categories to comprehend by Sinhala speaking PWA, which might be clinically beneficial when improving language skills of Sinhala speaking persons with post-stroke aphasia. However, further studies on aphasia should be conducted with larger samples for a longer period to study deficits in Sinhala and other Sri Lankan languages (Tamil and Malay).

Keywords: aphasia, auditory comprehension, selective lexical-semantic deficits, semantic categories

Procedia PDF Downloads 253
805 Alternative Seed System for Enhanced Availability of Quality Seeds and Seed/Varietal Replacement Rate - An Experience

Authors: Basave Gowda, Lokesh K., Prasanth S. M., Bellad S. B., Radha J., Lokesh G. Y., Patil S. B., Vijayakumar D. K., Ganigar B. S., Rakesh C. Mathad

Abstract:

Quality seed plays an important role in enhancing crop productivity. It has been reported and confirmed through large-scale verification research trials that the use of quality seeds alone can enhance crop yield by 15 to 20 per cent. At present, quality seed production and distribution through the organised sector, comprising both public and private seed sectors, covers only 20-25% of the requirement, and the remaining quantity is met through the unorganised sector, which includes farmer-to-farmer saved seeds. With the objective of developing an alternative seed system, the University of Agricultural Sciences, Raichur, in Karnataka state, has implemented a Seed Village Programme in more than 100 villages covering around 5000 farmers every year since 2009-10. In the selected seed villages, groups of 50-150 farmers were supplied foundation seeds of new varieties for an area of 0.4 ha each at a 50% subsidy. Two to three training programmes on quality seed production were conducted in the targeted villages, and the seed produced by the target group was processed locally in the university seed processing units and distributed in the local villages by the seed growers themselves. Through this new, innovative and modified seed system, the university was able to replace old varieties of pigeon pea and green gram by producing 1482, 2978, 2729, 2560, and 4581 tonnes of seeds of new varieties on a large scale under farmer-scientist participatory seed village programmes during 2009-10, 2010-11, 2011-12, 2012-13 and 2013-14, respectively. Building on this new alternative model of the seed system, there should be large-scale promotion of regional seed systems involving farmers, NGOs and voluntary organisations for the quick and effective replacement of old, low-yielding, disease-susceptible varieties with new high-yielding, disease-resistant ones for enhanced food production and food security.

Keywords: seed system, seed village, seed replacement, varietal replacement

Procedia PDF Downloads 431
804 Comparison Study of 70% Ethanol Effect on Direct and Retrieval Culture of Contaminated Umbilical Cord Tissue for Expansion of Mesenchymal Stem Cells

Authors: Ganeshkumar, Ashika, Valavan, Ramesh, Thangam, Chirayu

Abstract:

MSCs are found in much higher concentration in the Wharton’s jelly compared to umbilical cord blood, which is a rich source of hematopoietic stem cells. Umbilical cord tissue is collected at the time of birth, processed, and stored in liquid nitrogen for future therapeutic purposes. The source of contamination might be the vaginal tract of the mother, the hospital environment, or personal handling during cord tissue sample collection. If a sample is contaminated, a decontamination procedure is carried out with 70% ethanol (1 minute) in order to avoid sample rejection. Ethanol is effective against a wide range of bacteria, protozoa and fungi and has low toxicity to humans. Among the 1954 samples taken for the study, 24 samples were found to be contaminated with microorganisms. The organisms isolated from the positive samples were E. coli, Stenotrophomonas maltophilia, Pseudomonas aeruginosa, Enterococcus faecalis, Acinetobacter baumannii, Staphylococcus epidermidis, Enterobacter cloacae, and Proteus mirabilis. Among these organisms, 70% ethanol successfully eliminated E. coli, Enterococcus faecalis, Acinetobacter baumannii, Staphylococcus epidermidis, and Proteus mirabilis. 70% ethanol was unsuccessful in eliminating Stenotrophomonas maltophilia, Pseudomonas aeruginosa, and Enterobacter cloacae. Stenotrophomonas maltophilia and Pseudomonas aeruginosa have the ability to form biofilms, which make them resistant to alcohol. Biofilms act as a protective layer for bacteria, shielding them from host defences and antibiotic washes. Finally, it was found that the 70% ethanol wash saved 58.3% of the contaminated cord tissue samples from rejection and was ineffective for the remaining 41.7%. The contamination rate can be reduced by maintaining proper aseptic techniques during sample collection and processing.

Keywords: umbilical cord tissue, decontamination, 70% ethanol effectiveness, contamination

Procedia PDF Downloads 349
803 Carbon-Encapsulated Iron Nanoparticles for Hydrogen Sulfide Removal

Authors: Meriem Abid, Erika Oliveria-Jardim, Andres Fullana, Joaquin Silvestre-Albero

Abstract:

The rapid industrial development associated with the increase of volatile organic compounds (VOCs) has seriously impacted the environment. Among these pollutants, hydrogen sulfide (H₂S) is known as a highly toxic, malodorous, flammable, and corrosive gas, which is emitted from diverse chemical processes, including industrial waste-gas streams, natural gas processing, and biogas purification. The high toxicity, corrosivity, and very low odor threshold of H₂S call for the urgent development of efficient desulfurization processes from the viewpoint of environmental protection and resource regeneration. In order to reduce H₂S emissions, effective removal technologies have been developed. General methods of H₂S removal include absorption in aqueous amine solutions, adsorption processes, biological methods, and fixed-bed solid catalytic oxidation processes. Ecologically and economically, low-temperature direct catalytic oxidation of H₂S to elemental sulfur is the preferred approach for treating H₂S-containing gas streams. A large number of catalysts made from carbon, metal oxides, clay, and other materials have been studied extensively for this application. In this sense, activated carbon (AC) is an attractive catalyst for H₂S removal because it features a high specific surface area, diverse functional groups, low cost, durability, and high efficiency. It is worth noting that AC is often modified using metal oxides to promote the efficiency of H₂S removal and to enhance the catalytic performance. Based on these premises, the main goal of the present study is the evaluation of the H₂S adsorption performance of carbon-encapsulated iron nanoparticles obtained from olive mill wastewater and thermally treated at 600, 800 and 1000 °C under anaerobic conditions. The results indicate that carbon-encapsulated iron nanoparticles exhibit promising performance for H₂S removal, with capacities of up to 360 mg/g.

Keywords: H₂S removal, catalytic oxidation, carbon encapsulated iron, olive mill wastewater

Procedia PDF Downloads 87
802 Linear Evolution of Compressible Görtler Vortices Subject to Free-Stream Vortical Disturbances

Authors: Samuele Viaro, Pierre Ricco

Abstract:

Görtler instabilities arise in boundary layers from an imbalance between pressure and centrifugal forces caused by concave surfaces. Their spatial streamwise evolution influences transition to turbulence. It is therefore important to understand even the early stages where perturbations, still small, grow linearly and could be controlled more easily. This work presents a rigorous theoretical framework for compressible flows using the linearized unsteady boundary region equations, where only the streamwise pressure gradient and streamwise diffusion terms are neglected from the full governing equations of fluid motion. Boundary and initial conditions are imposed through an asymptotic analysis in order to account for the interaction of the boundary layer with free-stream turbulence. The resulting parabolic system is discretized with a second-order finite difference scheme. Realistic flow parameters are chosen from wind tunnel studies performed at supersonic and subsonic conditions. The Mach number ranges from 0.5 to 8, with two different radii of curvature, 5 m and 10 m, frequencies up to 2000 Hz, and vortex spanwise wavelengths from 5 mm to 20 mm. The evolution of the perturbation flow is shown through velocity, temperature, and pressure profiles relatively close to the leading edge, where non-linear effects can still be neglected, as well as through the growth rate. Results show that a global stabilizing effect exists with the increase of Mach number, frequency, spanwise wavenumber and radius of curvature. In particular, at high Mach numbers curvature effects are less pronounced and thermal streaks become stronger than velocity streaks. This increase of temperature perturbations saturates at approximately Mach 4 and is limited to the early stage of growth, near the leading edge. In general, Görtler vortices evolve closer to the surface with respect to a flat plate scenario, but their location shifts toward the edge of the boundary layer as the Mach number increases. In fact, a jet-like behavior appears for steady vortices having small spanwise wavelengths (less than 10 mm) at Mach 8, creating a region of unperturbed flow close to the wall. A similar response is also found at the highest frequency considered for a Mach 3 flow. Larger vortices are found to have a higher growth rate but are less influenced by the Mach number. An eigenvalue approach is also employed to study the amplification of the perturbations sufficiently downstream from the leading edge. These eigenvalue results are compared with the ones obtained through the initial value approach with inhomogeneous free-stream boundary conditions. All of the parameters studied here have a significant influence on the evolution of the instabilities for the Görtler problem, which is indeed highly dependent on initial conditions.
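To illustrate the kind of streamwise-marching, second-order finite-difference discretization mentioned above, the sketch below integrates a simple model parabolic equation (du/dx = nu d²u/dy²) downstream with central differences in y and implicit Euler in x; the model equation, grid, and numbers are purely illustrative and are not the boundary region equations solved in the paper.

```python
import numpy as np

# Model parabolic problem: du/dx = nu * d2u/dy2, marched downstream in x.
ny, dy = 101, 0.01            # wall-normal grid (illustrative)
dx, nu = 0.001, 1.0e-3        # streamwise step and diffusivity (illustrative)
y = np.linspace(0.0, (ny - 1) * dy, ny)
u = np.exp(-((y - 0.5) / 0.1) ** 2)    # illustrative inflow perturbation profile

# Implicit Euler in x with second-order central differences in y:
# (I - dx*nu*D2) u_new = u_old, with u = 0 at both y-boundaries.
r = nu * dx / dy**2
A = np.eye(ny)
for j in range(1, ny - 1):
    A[j, j - 1] = -r
    A[j, j]     = 1.0 + 2.0 * r
    A[j, j + 1] = -r

for station in range(200):    # march 200 streamwise stations
    u = np.linalg.solve(A, u)
    u[0] = u[-1] = 0.0        # homogeneous wall/free-stream conditions

print(f"peak amplitude after marching: {u.max():.4f}")
```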

Keywords: compressible boundary layers, Görtler instabilities, receptivity, turbulence transition

Procedia PDF Downloads 253
801 Wet Processing of Algae for Protein and Carbohydrate Recovery as Co-Product of Algal Oil

Authors: Sahil Kumar, Rajaram Ghadge, Ramesh Bhujade

Abstract:

Historically, lipid extraction from dried algal biomass remained a focus area of algal research. It has been realized over the past few years that the lipid-centric approach and conversion technologies that require dry algal biomass have several challenges. Algal culture in cultivation systems contains more than 99% water, with algal concentrations of just a few hundred milligrams per liter (< 0.05 wt%), which makes harvesting and drying energy-intensive. Drying the algal biomass followed by extraction also entails the loss of water and nutrients. In view of these challenges, focus has shifted toward developing processes that will enable oil production from wet algal biomass without drying. Hydrothermal liquefaction (HTL), an emerging technology, is a thermo-chemical conversion process that converts wet biomass to oil and gas using water as a solvent at high temperature and high pressure. HTL processes wet algal slurry containing more than 80% water and significantly reduces the adverse cost impact of drying the algal biomass. HTL is inherently feedstock-agnostic, i.e., it can also convert carbohydrates and proteins to fuels, and it recovers water and nutrients. It is most effective with low-lipid (10-30%) algal biomass, and the bio-crude yield is two to four times higher than the lipid content in the feedstock. In the early 2010s, research remained focused on increasing the oil yield by optimizing the process conditions of HTL. However, various techno-economic studies showed that simply converting algal biomass to only oil does not make economic sense, particularly in view of low crude oil prices. Making the best use of every component of algae is key to the economic viability of the algae-to-oil process. On investigating HTL reactions at the molecular level, it has been observed that sequential HTL has the potential to recover value-added products along with biocrude and improve the overall economics of the process. This potential of sequential HTL makes it a most promising technology for converting wet waste to wealth. In this presentation, we will share our experience on the techno-economic and engineering aspects of sequential HTL for conversion of algal biomass to algal bio-oil and co-products.

Keywords: algae, biomass, lipid, protein

Procedia PDF Downloads 214
800 Silicon-Photonic-Sensor System for Botulinum Toxin Detection in Water

Authors: Binh T. T. Nguyen, Zhenyu Li, Eric Yap, Yi Zhang, Ai-Qun Liu

Abstract:

Silicon-photonic-sensor system is an emerging class of analytical technologies that use evanescent field wave to sensitively measure the slight difference in the surrounding environment. The wavelength shift induced by local refractive index change is used as an indicator in the system. These devices can be served as sensors for a wide variety of chemical or biomolecular detection in clinical and environmental fields. In our study, a system including a silicon-based micro-ring resonator, microfluidic channel, and optical processing is designed, fabricated for biomolecule detection. The system is demonstrated to detect Clostridium botulinum type A neurotoxin (BoNT) in different water sources. BoNT is one of the most toxic substances known and relatively easily obtained from a cultured bacteria source. The toxin is extremely lethal with LD50 of about 0.1µg/70kg intravenously, 1µg/ 70 kg by inhalation, and 70µg/kg orally. These factors make botulinum neurotoxins primary candidates as bioterrorism or biothreat agents. It is required to have a sensing system which can detect BoNT in a short time, high sensitive and automatic. For BoNT detection, silicon-based micro-ring resonator is modified with a linker for the immobilization of the anti-botulinum capture antibody. The enzymatic reaction is employed to increase the signal hence gains sensitivity. As a result, a detection limit to 30 pg/mL is achieved by our silicon-photonic sensor within a short period of 80 min. The sensor also shows high specificity versus the other type of botulinum. In the future, by designing the multifunctional waveguide array with fully automatic control system, it is simple to simultaneously detect multi-biomaterials at a low concentration within a short period. The system has a great potential to apply for online, real-time and high sensitivity for the label-free bimolecular rapid detection.
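As a back-of-the-envelope illustration of the sensing principle described above (a resonance wavelength shift driven by a local refractive-index change), the sketch below evaluates the first-order relation Δλ ≈ λ·Δn_eff/n_g for a micro-ring; all numerical values are assumed for illustration and are not taken from the fabricated device.

```python
# First-order ring-resonator sensing estimate: d_lambda ≈ lambda * d_neff / n_g
wavelength = 1550e-9      # resonance wavelength (m), assumed near the telecom C-band
n_g = 4.2                 # group index of the silicon waveguide (assumed)
d_neff = 1e-4             # effective-index change from analyte binding (assumed)

d_lambda = wavelength * d_neff / n_g
print(f"Resonance shift: {d_lambda * 1e12:.1f} pm")   # ≈ 36.9 pm for these values
```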

Keywords: biotoxin, photonic, ring resonator, sensor

Procedia PDF Downloads 117
799 Potential Use of Cnidoscolus Chayamansa Leaf from Mexico as High-Quality Protein Source

Authors: Diana Karina Baigts Allende, Mariana Gonzalez Diaz, Luis Antonio Chel Guerrero, Mukthar Sandoval Peraza

Abstract:

Poverty and food insecurity are still persistent problems in developing countries, where the population's diet is based on cereals, which are low in protein content. Nevertheless, in recent years the use of native plants has been studied as an alternative source of protein in order to improve nutritional intake. The Chaya crop, also called the spinach tree, is a pre-Hispanic plant native to Central America and southern Mexico (Mayan culture), which has been especially valued due to its high nutritional content, particularly protein, and some medicinal properties. The aim of this work was to study the effect of the protein isolation process on the structure and quality of protein from Chaya leaf harvested in Yucatan, Mexico, in order: i) to valorize the Chaya crop and ii) to produce low-cost and high-quality protein. Chaya leaf was extruded, clarified and the protein recovered using: a) acid precipitation, by decreasing the pH value until reaching the isoelectric point (3.5), and b) thermal coagulation, by heating the protein solution at 80 °C for 30 min. Solubilized protein was re-dissolved in water and spray dried. The presence of Fraction I protein, known as RuBisCO (ribulose-1,5-bisphosphate carboxylase/oxygenase), was confirmed by gel electrophoresis (SDS-PAGE), where molecular weight bands of 55 kDa and 12 kDa were observed. The infrared spectrum showed changes in protein structure due to the isolation method. The use of high temperatures (thermal coagulation) markedly decreased protein solubility in comparison to isoelectrically precipitated protein, and the nutritional properties according to the amino acid profile were also disturbed, with the overall essential amino acid content decreasing from 435.9 to 367.8 mg/g. The Chaya protein isolate obtained by acid precipitation showed higher protein quality according to the essential amino acid score compared to FAO recommendations, and could represent an important sustainable source of protein for human consumption.

Keywords: chaya leaf, nutritional properties, protein isolate, protein structure

Procedia PDF Downloads 341
798 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes

Authors: Chih-Jer Lin, Jian-Hong Hou

Abstract:

Industry 4.0 is a vision of integrated industry implemented through intelligent computing, software, and Internet technologies. The main goal of Industry 4.0 is to deal with the difficulties arising from competitive pressures in the marketplace. For today's manufacturing factories, the type of production has changed from mass production (high-quantity production with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning systems into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, motion planning of a 7-degree-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the virtual robot experimentation platform (V-REP) simulator. For sheet metal bending operations, the robot only needs six DOFs for the pick-and-place or tracking tasks. Therefore, this 7-DOF robot has more DOFs than required to execute a specified task; it can be called a redundant robot. Its kinematic redundancy can be used to deal with task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods usually lead to a kind of chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome this problem, we propose formulating the motion planning problem as an optimization problem, and a genetic algorithm (GA) based method is proposed to solve it for the redundant robot. Simulation results validate the feasibility of the proposed method for motion planning of the redundant robot in automated sheet-metal bending operations.
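The pseudo-inverse redundancy resolution mentioned above can be sketched in a few lines; the planar 3-link arm below is a toy stand-in for the 7-DOF manipulator, used only to show how joint velocities follow from numpy.linalg.pinv(J) and why rank loss near singularities produces the erratic configurations the paper describes.

```python
import numpy as np

LINK_LENGTHS = np.array([1.0, 0.8, 0.5])   # toy planar 3-link arm (assumed lengths)

def forward_kinematics(q):
    """End-effector position; absolute link angles are cumsum(q)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINK_LENGTHS * np.cos(angles)),
                     np.sum(LINK_LENGTHS * np.sin(angles))])

def jacobian(q):
    """2x3 position Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, q.size))
    for i in range(q.size):
        J[0, i] = -np.sum(LINK_LENGTHS[i:] * np.sin(angles[i:]))
        J[1, i] =  np.sum(LINK_LENGTHS[i:] * np.cos(angles[i:]))
    return J

# Differential inverse kinematics via the Jacobian pseudo-inverse:
# qdot = pinv(J) @ xdot gives the minimum-norm joint velocities.
q = np.array([0.3, 0.4, -0.2])
xdot = np.array([0.05, -0.02])             # desired end-effector velocity
qdot = np.linalg.pinv(jacobian(q)) @ xdot
print("joint velocities:", np.round(qdot, 4))
print("end-effector position:", np.round(forward_kinematics(q), 4))
```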

Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance

Procedia PDF Downloads 146
797 Stability of Total Phenolic Concentration and Antioxidant Capacity of Extracts from Pomegranate Co-Products Subjected to In vitro Digestion

Authors: Olaniyi Fawole, Umezuruike Opara

Abstract:

Co-products obtained from pomegranate juice processing contain high levels of polyphenols with potentially high added value. From a value-addition viewpoint, the aim of this study was to evaluate the stability of polyphenol concentrations in pomegranate fruit co-products in different solvent extracts and to assess the effect on the total antioxidant capacity using the FRAP, DPPH˙ and ABTS˙+ assays during simulated in vitro digestion. Pomegranate juice, marc and peel were extracted in water, 50% ethanol (50%EtOH) and absolute ethanol (100%EtOH) and analysed for total phenolic concentration (TPC), total flavonoid concentration (TFC) and total antioxidant capacity in DPPH˙, ABTS˙+ and FRAP assays before and after in vitro digestion. TPC and TFC were in the order peel > marc > juice throughout the in vitro digestion, irrespective of the extraction solvent used. However, 50% ethanol extracted 1.1- to 12-fold more polyphenols than the water and absolute ethanol solvents, depending on the co-product. TPC and TFC increased significantly in gastric digests. In contrast, after the duodenal phase, polyphenol concentrations decreased significantly (p < 0.05) compared to those obtained in gastric digests. Undigested samples and gastric digests showed strong, positive relationships between polyphenols and the antioxidant activities measured in the DPPH, ABTS and FRAP assays, with correlation coefficients (r²) ranging between 0.930 and 0.990, whereas the correlation between polyphenols (TPC and TFC) and radical cation scavenging activity (ABTS) was moderately positive in duodenal digests. Findings from this study also showed that the concentration of pomegranate polyphenols, and their antioxidant capacity, during in vitro gastro-intestinal digestion may not reflect the pre-digestion phenolic concentration. Thus, this study highlights the need to provide biologically relevant information on antioxidants by providing data reflecting their stability and activity after in vitro digestion.

Keywords: by-product, DPPH, polyphenols, value addition

Procedia PDF Downloads 330
796 i-Plastic: Surface and Water Column Microplastics From the Coastal North Eastern Atlantic (Portugal)

Authors: Beatriz Rebocho, Elisabete Valente, Carla Palma, Andreia Guilherme, Filipa Bessa, Paula Sobral

Abstract:

The global accumulation of plastic in the oceans is a growing problem. Plastic is transported from its source to the oceans via rivers, which are considered the main route for plastic particles from land-based sources to the ocean. These plastics undergo physical and chemical degradation, resulting in microplastics. The i-Plastic project aims to understand and predict the dispersion, accumulation and impacts of microplastics (5 mm to 1 µm) and nanoplastics (below 1 µm) in marine environments, from the tropical and temperate land-ocean interface to the open ocean, under distinct flow and climate regimes. Seasonal monitoring of the fluxes of microplastics was carried out in three coastal areas in Brazil, Portugal and Spain. The present work shows the first results of in-situ seasonal monitoring and mapping of microplastics in ocean waters between Ovar and Vieira de Leiria (Portugal), in which 43 surface water samples and 43 water column samples were collected in contrasting seasons (spring and autumn). The spring and autumn surface water samples were collected with a 300 µm and a 150 µm pore neuston net, respectively. In both campaigns, water column samples were collected using a conical mesh with a 150 µm pore. The experimental procedure comprises the following steps: i) sieving with a metal sieve; ii) digestion with potassium hydroxide to remove the organic matter originating from the sample matrix. After a filtration step, the content retained on a membrane is observed under a stereomicroscope, and physical and chemical characterization (type, color, size, and polymer composition) of the microparticles is performed. Results showed that 84% and 88% of the surface water and water column samples, respectively, were contaminated with microplastics. Surface water samples collected during the spring campaign averaged 0.35 MP m⁻³, while surface water samples collected during autumn recorded 0.39 MP m⁻³. Water column samples from the spring campaign had an average of 1.46 MP m⁻³, while those from the autumn recorded 2.54 MP m⁻³. In spring, all microplastics found were fibers, predominantly black and blue. In autumn, the dominant particles found in the surface waters were fibers, while in the water column, fragments were dominant. In spring, the average size of surface water particles was 888 μm, while that of water column particles was 1063 μm. In autumn, the average sizes of surface and water column microplastics were 1333 μm and 1393 μm, respectively. The main polymers identified by Attenuated Total Reflectance (ATR) and micro-ATR Fourier Transform Infrared (FTIR) spectroscopy from all samples were low-density polyethylene (LDPE), polypropylene (PP), polyethylene terephthalate (PET), and polyvinyl chloride (PVC). The significant difference in microplastic concentration in the water column between the two campaigns could be due to remixing of the water masses caused by a storm that occurred that week. This work presents preliminary results, since the i-Plastic project is still in progress. These results will contribute to the understanding of the spatial and temporal dispersion and accumulation of microplastics in this marine environment.

Keywords: microplastics, Portugal, Atlantic Ocean, water column, surface water

Procedia PDF Downloads 80
795 Foamability and Foam Stability of Gelatine-Sodium Dodecyl Sulfate Solutions

Authors: Virginia Martin Torrejon, Song Hang

Abstract:

Gelatine foams are widely explored materials due to their biodegradability, biocompatibility, and availability. They exhibit outstanding properties and are currently the subject of increasing scientific research due to their potential use in different applications, such as biocompatible cellular materials for biomedical products or biofoams as an alternative to fossil-fuel-derived packaging. Gelatine is a highly surface-active polymer, and its concentrated solutions usually do not require surfactants to achieve low surface tension. Still, anionic surfactants like sodium dodecyl sulfate (SDS) strongly interact with gelatine, impacting its viscosity and rheological properties and, in turn, the foaming behaviour. Foaming behaviour is a key parameter for cellular solids produced by mechanical foaming, as it has a significant effect on the processing and properties of cellular materials. Foamability mainly impacts the density and the mechanical properties of the foams, while foam stability is crucial to achieving foams with low shrinkage and desirable pore morphology. This work aimed to investigate the influence of SDS on the foaming behaviour of concentrated gelatine solutions by using a dynamic foam analyser. Measurements of the maximum foam height created, foam formation behaviour, drainage behaviour, and foam structure with regard to bubble size and distribution were carried out on 10 wt% gelatine solutions prepared at different SDS/gelatine concentration ratios. Comparative rheological and viscometry measurements correlated well with the data from the dynamic foam analyser. SDS incorporation at optimum dosages, together with gelatine gelation, led to highly stable foams at high expansion ratios. The increase in viscosity of the hydrogel solution as the SDS content increased was a key parameter for foam stabilization. In addition, the effect of SDS content on gelling time and gel strength also considerably influenced the foams' stability and pore structure.

Keywords: dynamic foam analyser, gelatine foams stability and foamability, gelatine-surfactant foams, gelatine-SDS rheology, gelatine-SDS viscosity

Procedia PDF Downloads 154
794 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data

Authors: Andrea Ghermandi

Abstract:

Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determining their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use as obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. Such boundaries are used as input for the Application Program Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users to each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power insofar as standard verification statistics – such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE) – are concerned. The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistently with a priori expectations. Publicly available information regarding the home location of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. Such information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and provision of cultural ecosystem services through public use in a multi-functional approach and compatibly with the need to protect public health.
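To make the calibration step concrete, the sketch below fits both an ordinary-least-squares and a standardized-major-axis line to synthetic photo-user-days versus observed visitors and computes RMSEP and MAEP; the data are random placeholders, and the SMA slope formula (sign(r)·s_y/s_x) is the standard textbook form rather than the exact software used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: photo-user-days (x) vs. observed annual visitors (y).
x = rng.uniform(10, 500, size=62)
y = 40 * x + rng.normal(0, 800, size=62)

r = np.corrcoef(x, y)[0, 1]

# Ordinary least squares: slope = r * s_y / s_x
b_ols = r * y.std(ddof=1) / x.std(ddof=1)
a_ols = y.mean() - b_ols * x.mean()

# Standardized major axis: slope = sign(r) * s_y / s_x
b_sma = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
a_sma = y.mean() - b_sma * x.mean()

def rmsep(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))   # root mean square error of prediction

def maep(pred, obs):
    return np.mean(np.abs(pred - obs))           # mean absolute error of prediction

for name, (a, b) in {"OLS": (a_ols, b_ols), "SMA": (a_sma, b_sma)}.items():
    pred = a + b * x
    print(f"{name}: slope={b:.2f}  RMSEP={rmsep(pred, y):.1f}  MAEP={maep(pred, y):.1f}")
```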

Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds

Procedia PDF Downloads 180
793 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation

Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst

Abstract:

There is an increasing need to accelerate the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials are fed in particulate form into the system and dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up on the particle surface. A subsequent step takes the charged particles to a delimited zone of the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology, and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities, thus following different paths as they move through the region where the electric field is present. The output is two material fractions, which differ in their protein content: one is a fiber-rich, low-protein fraction, while the other is a high-protein, low-fiber fraction. Prior to testing, materials underwent a milling process, and some samples were stored under controlled humidity conditions; in this way, the influence of both particle size and moisture content was established. We used two oilseed meals: lupine and rapeseed. In addition to the lab-scale separator used to perform the experiments, the triboelectrostatic separation process could be successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/s to kg/h. The triboelectrostatic separation technology opens up huge potential for the exploitation of so far underutilized alternative protein sources. Agricultural side-streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.

Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation

Procedia PDF Downloads 190
792 Metal Extraction into Ionic Liquids and Hydrophobic Deep Eutectic Mixtures

Authors: E. E. Tereshatov, M. Yu. Boltoeva, V. Mazan, M. F. Volia, C. M. Folden III

Abstract:

Room temperature ionic liquids (RTILs) are a class of liquid organic salts with melting points below 20 °C that are considered to be environmentally friendly ‘designer’ solvents. Pure hydrophobic ILs are known to extract metallic species from aqueous solutions. The closest analogues of ionic liquids are deep eutectic solvents (DESs), which are eutectic mixtures of at least two compounds with a melting point lower than that of each individual component. DESs are acknowledged to be attractive for organic synthesis and metal processing. Thus, these non-volatile and less toxic compounds are of interest for critical metal extraction. The US Department of Energy and the European Commission consider indium a key metal. Its chemical homologue, thallium, is also an important material for some applications and for environmental safety. The aim of this work is to systematically investigate In and Tl extraction from aqueous solutions into pure fluorinated ILs and hydrophobic DESs. The dependence of the Tl extraction efficiency on the structure and composition of the ionic liquid ions, the metal oxidation state, and the initial metal and aqueous acid concentrations has been studied. The extraction efficiency of the TlXz^(3−z) anionic species (where X = Cl⁻ and/or Br⁻) is greater for ionic liquids with more hydrophobic cations. Unexpectedly high distribution ratios (> 10³) of Tl(III) were determined even when applying a pure ionic liquid as the receiving phase. An improved mathematical model based on ion exchange and ion pair formation mechanisms has been developed to describe the co-extraction of two different anionic species, and the relative contributions of each mechanism have been determined. The first evidence of indium extraction into new quaternary ammonium- and menthol-based hydrophobic DESs from hydrochloric and oxalic acid solutions, with distribution ratios up to 10³, will be provided. The data obtained allow us to interpret the mechanism of thallium and indium extraction into IL and DES media. The understanding of Tl and In chemical behavior in these new media is imperative for the further improvement of the separation and purification of these elements.

Keywords: deep eutectic solvents, indium, ionic liquids, thallium

Procedia PDF Downloads 241
791 Neural Correlates of Attention Bias to Threat during the Emotional Stroop Task in Schizophrenia

Authors: Camellia Al-Ibrahim, Jenny Yiend, Sukhwinder S. Shergill

Abstract:

Background: Attention bias to threat plays a role in the development, maintenance, and exacerbation of delusional beliefs in schizophrenia, in which patients emphasize the threatening characteristics of stimuli and prioritise them for processing. Cognitive control deficits arise when task-irrelevant emotional information elicits attentional bias and obstructs optimal performance. This study investigates the neural correlates of the interference effect of linguistic threat and whether these effects are independent of delusional severity. Methods: Using event-related functional magnetic resonance imaging (fMRI), the neural correlates of the interference effect of linguistic threat during the emotional Stroop task were investigated and compared between patients with schizophrenia with high (N=17) and low (N=16) paranoid symptoms and healthy controls (N=20). Participants were instructed to identify the font colour of each word presented on the screen as quickly and accurately as possible. Stimulus types varied between threat-relevant, positive and neutral words. Results: Group differences in whole-brain effects indicate decreased amygdala activity in patients with high paranoid symptoms compared with low paranoid patients and healthy controls. Region of interest (ROI) analysis validated our results within the amygdala, and investigation of changes within the striatum showed a pattern of reduced activation within the clinical group compared to healthy controls. Delusional severity was associated with significantly decreased neural activity in the striatum within the clinical group. Conclusion: Our findings suggest that the emotional interference mediated by the amygdala and striatum may reduce responsiveness to threat-related stimuli in schizophrenia, and that attenuation of the fMRI blood-oxygen-level-dependent (BOLD) signal within these areas might be influenced by the severity of delusional symptoms.

Keywords: attention bias, fMRI, Schizophrenia, Stroop

Procedia PDF Downloads 200
790 Improving the Supply Chain of Vietnamese Coffee in Buon Me Thuot City, Daklak Province, Vietnam to Achieve Sustainability

Authors: Giang Ngo Tinh Nguyen

Abstract:

Agriculture plays an important role in the economy of Vietnam, and coffee is one of the most crucial agricultural commodities for export, but current farming methods and processing infrastructure have not kept up with the development of the sector. There are many catastrophic impacts on the environment, such as deforestation and soil degradation, which lead to a decrease in the quality of coffee beans. Therefore, improving the supply chain to develop sustainable coffee cultivation is one of the most important strategies to boost the coffee industry and create a competitive advantage for Vietnamese coffee in the worldwide market. If all stakeholders in the supply chain network unite, the sustainable production of coffee will be scaled up and the future of the coffee industry will be firmly secured. Buon Ma Thuot city, Dak Lak province, is the principal growing region for Vietnamese coffee, accounting for a third of the total coffee area in Vietnam. It plays a strategically crucial role in the development of sustainable Vietnamese coffee. Thus, this research aims to improve the supply chain of sustainable Vietnamese coffee production in Buon Ma Thuot city, Dak Lak province, Vietnam, for the purpose of increasing yields and export availability as well as helping coffee farmers to be more flexible in an ever-changing market situation. It will help to affirm the Vietnamese coffee brand when entering the international market, improve the livelihoods of farmers and conserve the environment of this area. In addition, after analyzing the data, a logistic regression model is established to explain the relationship between the dependent variable and the independent variables, to help sustainable coffee organizations forecast the probability that a farmer will hold a sustainability certificate given their current situation and to help them choose promising candidates for sustainable programs. The study investigates the opinions of local farmers through quantitative surveys. Qualitative interviews with local collectors and staff of the Trung Nguyen manufacturing company are also used to obtain an overview of the situation.
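As an illustration of the logistic-regression step described above, the sketch below fits scikit-learn's LogisticRegression on synthetic farmer records and returns the predicted probability of holding a sustainability certificate; the feature names (farm_size_ha, training_attended, years_growing) and all data are hypothetical, not the survey variables used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical farmer records: [farm_size_ha, training_attended (0/1), years_growing]
n = 200
X = np.column_stack([
    rng.uniform(0.5, 5.0, n),      # farm size in hectares
    rng.integers(0, 2, n),         # attended a sustainability training
    rng.uniform(1, 30, n),         # years growing coffee
])
# Synthetic label: certified farmers tend to be trained and more experienced.
logit = -3.0 + 0.4 * X[:, 0] + 2.0 * X[:, 1] + 0.08 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)

new_farmer = np.array([[2.0, 1, 12]])   # 2 ha, trained, 12 years of experience
p = model.predict_proba(new_farmer)[0, 1]
print(f"Predicted probability of holding a certificate: {p:.2f}")
```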

Keywords: supply chain management, sustainable agricultural development, sustainable coffee, Vietnamese coffee

Procedia PDF Downloads 448
789 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
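A minimal Keras sketch of the hybrid idea described above, concatenating frozen ImageNet-pretrained VGG16 and ResNet50 feature extractors and passing class weights at training time, is shown below; the input size, head layers, nine-class setup, and class-weight values are assumptions for illustration, not the authors' exact architecture or training configuration.

```python
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50

NUM_CLASSES, IMG_SHAPE = 9, (224, 224, 3)

inputs = layers.Input(shape=IMG_SHAPE)   # images assumed already resized/normalized

# Two frozen ImageNet-pretrained backbones used as feature extractors.
vgg = VGG16(include_top=False, weights="imagenet", pooling="avg")
res = ResNet50(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = False
res.trainable = False

# Concatenate the two feature vectors and add a small classification head.
features = layers.Concatenate()([vgg(inputs), res(inputs)])
x = layers.Dense(256, activation="relu")(features)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Class weights to counter the class imbalance (illustrative values only).
class_weight = {i: w for i, w in enumerate(np.linspace(1.0, 3.0, NUM_CLASSES))}

# model.fit(train_ds, validation_data=val_ds, epochs=10, class_weight=class_weight)
model.summary()
```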

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 87
788 Nanomaterials Based Biosensing Chip for Non-Invasive Detection of Oral Cancer

Authors: Suveen Kumar

Abstract:

Oral cancer (OC) is the sixth most common cause of cancer death in the world and includes tumours of the lips, floor of the mouth, tongue, palate, cheeks, sinuses, throat, etc. Conventionally, the techniques used for OC detection are toluidine blue staining, biopsy, liquid-based cytology, visual attachments, etc.; however, these are limited by their highly invasive nature, low sensitivity, time consumption, sophisticated instrument handling, sample processing and high cost. Therefore, we developed biosensing chips for non-invasive detection of OC via the CYFRA-21-1 biomarker. CYFRA-21-1 (molecular weight: 40 kDa) is secreted in the saliva of OC patients, a non-invasively collected biological fluid, with a cut-off value of 3.8 ng mL⁻¹, above which subjects are considered to be suffering from oral cancer. In the first work, 3-aminopropyl triethoxysilane (APTES) functionalized zirconia (ZrO₂) nanoparticles (APTES/nZrO₂) were used to successfully detect CYFRA-21-1 in a linear detection range (LDR) of 2-16 ng mL⁻¹ with a sensitivity of 2.2 µA mL ng⁻¹. Subsequently, APTES/nZrO₂-RGO was employed to prevent agglomeration of ZrO₂ by providing a high-surface-area reduced graphene oxide (RGO) support, and a much wider LDR (2-22 ng mL⁻¹) was obtained with a remarkable limit of detection (LOD) of 0.12 ng mL⁻¹. Further, an APTES/nY₂O₃/ITO platform was used for oral cancer biosensor development. The developed biosensor (BSA/anti-CYFRA-21-1/APTES/nY₂O₃/ITO) has a wider LDR (0.01-50 ng mL⁻¹) with a remarkable LOD of 0.01 ng mL⁻¹. To improve the sensitivity of the biosensing platform, a biosensor based on a nanocomposite of yttria-stabilized nanostructured zirconia and reduced graphene oxide (nYZR) has been developed. This biosensing chip is able to detect CYFRA-21-1 biomolecules in the range of 0.01-50 ng mL⁻¹, with an LOD of 7.2 pg mL⁻¹ and a sensitivity of 200 µA mL ng⁻¹. Further, the applicability of the fabricated biosensing chips was also checked through real-sample (saliva) analysis of OC patients, and the obtained results showed good correlation with the standard enzyme-linked immunosorbent assay (ELISA) protein detection technique.

Keywords: non-invasive, oral cancer, nanomaterials, biosensor, biochip

Procedia PDF Downloads 127
787 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning Newsvendor Model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when product demand is stochastic and the product is available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence over the costs of processing units into the model. We describe this by using the well-known Wright’s Learning Curve. Most of the assumptions of the classical Newsvendor Model are still maintained in our work, such as constant per-unit costs of leftovers and shortages, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function will likely not be maintained. This has called for a new way of determining the optimal order policy. In response to such challenges, we found a number of characteristics related to the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform Distribution; if the demand follows a Beta Distribution with certain specific properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of periodic review systems for similar problems.
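For reference, the sketch below computes the classical newsvendor quantity that the worker-learning model departs from, i.e., the critical-fractile solution Q* = F⁻¹(c_u/(c_u + c_o)) under normally distributed demand; the demand and cost numbers are illustrative, and the paper's learning-curve extension is not implemented here.

```python
from scipy.stats import norm

# Classical newsvendor with lost sales: underage and overage costs per unit.
price, cost, salvage = 12.0, 6.0, 2.0
c_u = price - cost        # cost of stocking one unit too few (lost margin)
c_o = cost - salvage      # cost of stocking one unit too many
critical_fractile = c_u / (c_u + c_o)

# Normally distributed single-season demand (illustrative parameters).
mu, sigma = 1000.0, 200.0
q_star = norm.ppf(critical_fractile, loc=mu, scale=sigma)

print(f"Critical fractile = {critical_fractile:.2f}")
print(f"Optimal order quantity ≈ {q_star:.0f} units")
```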

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 416
786 Progress in Replacing Antibiotics in Farm Animal Production

Authors: Debabrata Biswas

Abstract:

The current trend in the development of antibiotic resistance by multiple bacterial pathogens has resulted in a troubling loss of effective antibiotic options for humans. The emergence of multi-drug-resistant pathogens has necessitated higher dosages and combinations of multiple antibiotics, further exacerbating the problem of antibiotic resistance. Zoonotic bacterial pathogens, such as Salmonella, Campylobacter, Shiga toxin-producing Escherichia coli (such as enterohaemorrhagic E. coli or EHEC), and Listeria, are the most common and predominant foodborne enteric infectious agents. It has been observed that these pathogens gained or developed the ability to survive in the presence of antibiotics either in the farm animal gut or in the farm environment, and researchers believe that therapeutic and sub-therapeutic antibiotic use in farm animal production might play an important role in this. The mechanism of action of antimicrobial components used in farm animal production, and their genomic interplay in the gut and farm environment, have not been fully characterized. Even the risk of promoting the exchange of mobile genetic elements between microbes, specifically pathogens, needs to be evaluated in depth to ensure sustainable farm animal production and the safety of our food, and to mitigate or limit enteric infection with multi-antibiotic-resistant bacterial pathogens. Due to consumer demand and considering the current emerging situation, many countries are in the process of withdrawing antibiotic use from farm animal production. Before withdrawing sub-therapeutic antibiotic use or restricting the use of therapeutic antibiotics in farm animal production, it is essential to find alternative natural antimicrobials for promoting the growth of farm animals and/or treating animal diseases. Further, it is also necessary to consider whether such compounds have the potential to trigger the acquisition or loss of genetic material in zoonotic and other bacterial pathogens. The development of alternative therapeutic and sub-therapeutic antimicrobials for farm animal production, food processing and preservation, their effective implementation in sustainable strategies for farm animal production, and the possible risk of horizontal gene transfer in major enteric pathogens will be the focus of this study.

Keywords: food safety, natural antimicrobial, sustainable farming, antibiotic resistance

Procedia PDF Downloads 270
785 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear with prolonged usage, causing fluctuating vibrations. Improving the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to such vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions, and the extracted features were given as input to the algorithms for fault classification. Support Vector Machines (SVM), Deep Feed Forward Neural Networks (DFFNN), and Deep Belief Networks (DBN) were used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy of each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
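The abstract does not list the exact statistical features or model settings, so the sketch below only illustrates the feature-extraction and classification stage with an assumed feature set (mean, standard deviation, RMS, kurtosis, skewness, crest factor), synthetic stand-in vibration segments, and a scikit-learn SVM in place of the study's full DBN/DFFNN fusion.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def statistical_features(segment):
    # Simple time-domain statistics commonly extracted from vibration signals.
    rms = np.sqrt(np.mean(segment ** 2))
    return np.array([
        segment.mean(),
        segment.std(),
        rms,
        kurtosis(segment),
        skew(segment),
        np.max(np.abs(segment)) / rms,   # crest factor
    ])

# Synthetic stand-in for DAQ segments: class 0 = healthy, class 1 = faulty
# (extra impulsive content). Real data would come from the gearbox rig.
rng = np.random.default_rng(1)
features, labels = [], []
for label in (0, 1):
    for _ in range(200):
        s = rng.normal(0.0, 1.0, 2048)
        if label == 1:
            impulses = rng.choice(2048, size=20, replace=False)
            s[impulses] += rng.normal(0.0, 6.0, 20)   # fault-like impacts
        features.append(statistical_features(s))
        labels.append(label)

X, y = np.vstack(features), np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```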

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 114
784 Morphology Analysis of Apple-Carrot Juice Treated by Manothermosonication (MTS) and High Temperature Short Time (HTST) Processes

Authors: Ozan Kahraman, Hao Feng

Abstract:

Manothermosonication (MTS), which consists of the simultaneous application of heat and ultrasound under moderate pressure (100-700 kPa), is one of the technologies that destroy microorganisms and inactivate enzymes. Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through an ultra-thin specimen, interacting with the specimen as it passes through it. The environmental scanning electron microscope (ESEM) is a scanning electron microscope (SEM) that allows electron micrographs to be collected from specimens that are wet and uncoated. These microscopy techniques allow the effects of processing on the samples to be observed. This study was conducted to investigate the effects of MTS and HTST treatments on the morphology of apple-carrot juices using TEM and ESEM microscopy. Apple-carrot juices treated with HTST (72 °C, 15 s), MTS at 50 °C (60 s, 200 kPa), and MTS at 60 °C (30 s, 200 kPa) were observed with both ESEM and TEM. For TEM analysis, a drop of the solution dispersed in fixative solution was placed onto a Parafilm® sheet. The copper-coated side of the TEM sample-holder grid was gently laid on top of the droplet and incubated for 15 min. A drop of a 7% uranyl acetate solution was added and held for 2 min. The grid was then removed from the droplet, allowed to dry at room temperature, and examined in the TEM. For ESEM analysis, critical point drying of the filters was performed using a critical point dryer (CPD) (Samdri PVT-3D, Tousimis Research Corp., Rockville, MD, USA). After the CPD, each filter was mounted onto a stub and coated with gold/palladium using a sputter coater (Desk II TSC, Denton Vacuum, Moorestown, NJ, USA). E. coli O157:H7 cells on the filters were observed with an ESEM (Philips XL30 ESEM-FEG, FEI Co., Eindhoven, The Netherlands). ESEM and TEM images showed extensive damage for the samples treated with MTS at 50 and 60 °C, such as ruptured cells and breakage of cell membranes, and the damage increased with increasing exposure time.

Keywords: MTS, HTST, ESEM, TEM, E. coli O157:H7

Procedia PDF Downloads 285
783 Rheological and Crystallization Properties of Dark Chocolate Formulated with Essential Oil of Orange and Carotene Extracted from Pineapple Peels

Authors: Mayra Pilamunga, Edwin Vera

Abstract:

The consumption of dark chocolate is beneficial due to its high content of flavonoids, catechins, and procyanidins. To improve its properties, fortification of chocolate with polyphenols, anthocyanins, soy milk powder, and other compounds has been evaluated in several studies. However, to the best of our knowledge, the addition of carotenes to chocolate has not been tested. Carotenoids, especially β-carotene and lutein, are widely distributed in fruits and vegetables, so they can be extracted from agro-industrial waste such as fruit-processing residues. On the other hand, limonene induces crystalline changes in cocoa butter and improves its consistency and viscosity. This study aimed to evaluate the production of dark chocolate with the addition of carotenes extracted from an agro-industrial waste and to improve its rheological and crystallization properties with orange essential oil. The dried and fermented cocoa beans were purchased in Puerto Quito, Ecuador, and had a fat content of 51%. Six types of chocolate were formulated, and two formulations were chosen, one at 65% cocoa and the other at 70% cocoa, both with a solids-to-fat ratio of 1.4:1. With the selected formulations, the influence of adding 0.75% and 1.5% orange essential oil was evaluated, and viscosity, crystallization, and sensory analyses were performed. It was found that the essential oil does not generate significant changes in the properties of the chocolate but has an important effect on aroma and coloration, which changed from auburn to brown. The best scores in the sensory analysis were obtained for the samples formulated with 0.75% essential oil. Prior to the formulation with carotenes, the extraction of these compounds from pineapple peels was performed. The process was carried out with and without a prior enzymatic treatment, at three solid-to-solvent ratios. The best treatment used enzymes at a solid-to-solvent ratio of 1:12.5; the extract obtained under these conditions contained 4.503 ± 0.214 μg Eq. β-carotene/mL. This extract was encapsulated with gum arabic and maltodextrin, and the solution was dried using a freeze dryer. The encapsulated carotenes were added to the chocolate at 1.7%; however, 60.8% of them were lost in the final product.

Keywords: cocoa, fat crystallization, limonene, carotenoids, pineapple peels

Procedia PDF Downloads 160
782 Carbon Footprint Assessment and Application in Urban Planning and Geography

Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim

Abstract:

Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services but cannot exist without food, energy, and water supplies. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of urban and agricultural areas increases the energy demand for transporting food and goods between regions. As energy resources are drawn from all over the world, the environmental impact that crosses city boundaries is also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, a gap between energy supply and demand remains under current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on technological development for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, carbon footprint has been assessed at different geographical scales, such as nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible using national or city-level statistics for energy consumption categories. By means of carbon footprint calculation, it is possible to compare the ecological capacity and deficit among nations and cities. Carbon footprint also offers great insight into the geographical distribution of carbon intensity at the regional level in agriculture. The study presents the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adapted to measure household and individual carbon footprints. For example, the first case study collected carbon footprint data from a survey measuring home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprint of residents at an intra-urban scale in the capital city of Seoul, Korea. In this study, the individual carbon footprint of residents was calculated by converting the home and travel fossil fuel use of respondents into metric tons of carbon dioxide (tCO₂), multiplying by conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. Carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, carbon footprint may be measured and applied at various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, the consumption of food, goods, and services can be included in carbon footprint calculations in urban planning and geography.
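The conversion step described above is simple arithmetic; a minimal sketch is shown below, converting annual electricity, natural gas, and gasoline use into tCO₂. The emission factors are illustrative placeholders, not the Korean factors used in the study.

```python
# Illustrative emission factors (kg CO2 per unit); the study's actual
# conversion factors are not given in the abstract, so these are assumed.
EMISSION_FACTORS = {
    "electricity_kwh": 0.45,   # kg CO2 per kWh
    "natural_gas_m3": 2.0,     # kg CO2 per cubic metre
    "gasoline_litre": 2.3,     # kg CO2 per litre
}

def household_footprint_tco2(electricity_kwh, natural_gas_m3, gasoline_litre):
    # Convert annual home energy and travel fuel use into metric tons of CO2.
    kg = (electricity_kwh * EMISSION_FACTORS["electricity_kwh"]
          + natural_gas_m3 * EMISSION_FACTORS["natural_gas_m3"]
          + gasoline_litre * EMISSION_FACTORS["gasoline_litre"])
    return kg / 1000.0   # kg -> metric tons

# Example household: 3,600 kWh electricity, 800 m3 gas, 1,200 L gasoline per year.
print(round(household_footprint_tco2(3600, 800, 1200), 2), "tCO2/year")
```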

Keywords: carbon footprint, case study, geography, urban planning

Procedia PDF Downloads 288
781 Characterization of the Ignitability and Flame Regression Behaviour of Flame Retarded Natural Fibre Composite Panel

Authors: Timine Suoware, Sylvester Edelugo, Charles Amgbari

Abstract:

Natural fibre composites (NFC) are becoming very attractive, especially for automotive interior and non-structural building applications, because they are biodegradable, low cost, lightweight, and environmentally friendly. However, NFC are known to release highly combustible products when exposed to heat, and this behaviour has raised concerns among end users. To improve their fire response, flame retardants (FR) such as aluminium tri-hydroxide (ATH) and ammonium polyphosphate (APP) are incorporated during processing to delay the start and spread of fire. In this paper, APP was modified with gum arabic powder (GAP) and synergized with carbon black (CB) to form new FR species. The FR formulations were added to oil palm fibre polyester composite (OPFC) panels at 0, 12, 15, and 18% loading ratios as follows: OPFC12%APP-GAP, OPFC15%APP-GAP/CB, OPFC18%ATH/APP-GAP, and OPFC18%ATH/APP-GAP/CB. The panels were produced by hand lay-up compression moulding and cured at room temperature. Specimens were cut from the panels and tested for ignition time (Tig), peak heat release rate (HRRp), average heat release rate (HRRavg), peak mass loss rate (MLRp), residual mass (Rm), and average smoke production rate (SPRavg) using a cone calorimeter, as well as for the flame energy (ɸ) available to drive the flame using a radiant panel flame spread apparatus. The ignitability data obtained at a 50 kW/m² heat flux (HF) show that the hybrid FR modified with APP, OPFC18%ATH/APP-GAP, exhibited superior flame retardancy, with Tig = 20 s, HRRp = 86.6 kW/m², HRRavg = 55.8 kW/m², MLRp = 0.131 g/s, Rm = 54.6%, and SPRavg = 0.05 m²/s, representing improvements of 17.6%, 67.4%, 62.8%, 50.9%, 565%, and 62.5%, respectively, over the composite without FR (OPFC0%). In terms of flame spread, the lowest flame energy (ɸ) of 0.49 kW²/s³, recorded for OPFC18%ATH/APP-GAP, caused early flame regression; this was 39.6 kW²/s³ lower than that of the composite without FR (OPFC0%). It can be concluded that hybrid FR modified with APP could be useful in the automotive and building industries to delay the start and spread of fire.

Keywords: flame retardant, flame regression, oil palm fibre, composite panel

Procedia PDF Downloads 128
780 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. Quantitative determination of chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining the chloride content. As the chloride content is expressed relative to the mass of binder, the analysis should involve determining both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative that provides chloride contents at depth intervals of 1 mm or less. It produces two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, for example in a crack. The results are correlated directly to the mass of binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented, along with an example of the visualization of lithium transport in concrete. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. Owing to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer, and a portable scanner allows two-dimensional quantitative element mapping. Results show quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Work is currently underway to develop a technical code of practice for the application of the method to the determination of chloride concentration in concrete.
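As a rough illustration of how a 2-D LIBS element map can be reduced to the fine depth profile mentioned above, the sketch below averages a chloride map over fixed depth intervals. The array layout, pixel size, and demo data are assumptions for illustration, not details from the paper.

```python
import numpy as np

def chloride_depth_profile(cl_map, pixel_size_mm, interval_mm=1.0):
    # cl_map: 2-D array of chloride contents (rows = depth, columns = lateral
    # position across the split core face). Returns interval mid-depths and
    # the mean chloride content per depth interval.
    rows_per_interval = max(1, int(round(interval_mm / pixel_size_mm)))
    n_intervals = cl_map.shape[0] // rows_per_interval
    trimmed = cl_map[:n_intervals * rows_per_interval]
    profile = trimmed.reshape(n_intervals, rows_per_interval, -1).mean(axis=(1, 2))
    depths = (np.arange(n_intervals) + 0.5) * interval_mm
    return depths, profile

# Synthetic example: 60 mm deep map at 0.2 mm pixels, chloride decaying with depth.
depth_px = np.arange(300)[:, None] * 0.2
demo_map = 0.8 * np.exp(-depth_px / 15.0) \
           + 0.02 * np.random.default_rng(0).random((300, 100))
depths, profile = chloride_depth_profile(demo_map, pixel_size_mm=0.2)
print(depths[:5], profile[:5])
```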

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 105
779 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration (RTM) has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic workflow involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to handling present-day full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the bent-ray method; its seismic counterpart is Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration, on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure, and is capable of fully reconstructing the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry; they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. Such flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, using PSPI wavefield extrapolation and a piecewise-constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each partial image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images.
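The abstract names PSPI extrapolation and an imaging condition but gives no implementation details; below is a minimal 2D, single-frequency sketch of both ingredients under assumed array layouts and parameters. The paper's full 3D implementation with a piecewise-constant breast velocity model would differ.

```python
import numpy as np

def phase_shift(w_kx, kx, omega, v_ref, dz):
    # Downward-continue one temporal-frequency slice (already FFT'd over x)
    # by dz for a single reference velocity; evanescent energy is discarded.
    kz2 = (omega / v_ref) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    shifted = w_kx * np.exp(1j * kz * dz)
    shifted[kz2 < 0] = 0.0
    return shifted

def pspi_step(wave_x, v_x, dx, omega, dz, n_ref=3):
    # Phase-shift-plus-interpolation: extrapolate with a few reference
    # velocities, then interpolate laterally according to the local velocity.
    nx = wave_x.size
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    w_kx = np.fft.fft(wave_x)
    v_refs = np.linspace(v_x.min(), v_x.max(), n_ref)
    refs = np.array([np.fft.ifft(phase_shift(w_kx, kx, omega, v, dz))
                     for v in v_refs])
    out = np.empty(nx, dtype=complex)
    for i, v in enumerate(v_x):
        j = int(np.clip(np.searchsorted(v_refs, v) - 1, 0, n_ref - 2))
        t = (v - v_refs[j]) / (v_refs[j + 1] - v_refs[j] + 1e-12)
        out[i] = (1 - t) * refs[j, i] + t * refs[j + 1, i]
    return out

def zero_lag_image(src_wavefield, rec_wavefield):
    # Cross-correlation imaging condition: sum over time samples of the
    # forward-propagated source field times the back-propagated data.
    return np.sum(src_wavefield * rec_wavefield, axis=0)

# Tiny usage example with made-up numbers (SI units).
nx, dx, dz = 128, 1e-3, 1e-3                      # 1 mm grid
v_x = np.full(nx, 1500.0); v_x[40:80] = 1560.0    # lateral velocity variation
omega = 2 * np.pi * 1.0e6                         # 1 MHz component
wave = np.exp(-((np.arange(nx) - 64) * dx / 5e-3) ** 2).astype(complex)
extrapolated = pspi_step(wave, v_x, dx, omega, dz)
print(np.abs(extrapolated).max())
```

In a full implementation the extrapolation would be repeated depth step by depth step for both the source and receiver wavefields, with the zero-lag correlation accumulated per shot and the partial images stacked, as described in the abstract.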

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 74