Search results for: temporal gravity variations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2921

641 Portuguese Guitar Strings Characterization and Comparison

Authors: P. Serrão, E. Costa, A. Ribeiro, V. Infante

Abstract:

The characteristic sonority of the Portuguese guitar is in great part what makes Fado so distinguishable from other traditional song styles. The Portuguese guitar is a pear-shaped plucked chordophone with six courses of double strings. This study compares the two types of plain strings available for the Portuguese guitar and used by musicians. One is stainless steel spring wire; the other is high carbon spring steel (music wire). Some musicians mention noticeable differences in sound quality between these two string materials, such as a little more brightness and sustain in the steel strings. Experimental tests were performed to characterize string tension at pitch, mechanical strength, and tuning stability using a universal testing machine, and dimensional control and chemical composition analysis were carried out using a scanning electron microscope. The string dynamical behaviour characterization experiments, including frequency response, inharmonicity, transient response, and damping phenomena, were made in a monochord test set-up designed and built in-house. The damping factor was determined for the fundamental frequency. As musicians are able to detect very small damping differences, an accurate characterization of the damping phenomena for all harmonics was necessary. With that purpose, another, improved monochord was set up and a new system identification methodology applied. Due to the complexity of this task, several adjustments were necessary until good experimental data were obtained. In a few cases, dynamical tests were repeated to detect any evolution in damping parameters after the break-in period, when, according to players' experience, a new string sounds gradually less dull until reaching the typically brilliant timbre. Finally, each set of strings was played on one guitar by a distinguished player and recorded. The recordings, which include individual notes, scales, chords, and a study piece, will be analysed to potentially characterize timbre variations.
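The abstract reports determining the damping factor for the fundamental frequency but does not spell out the estimation method. One standard approach for free-decay monochord measurements is the logarithmic decrement; this is a minimal Python sketch of that generic technique, not necessarily the study's procedure:

```python
import math

def damping_ratio(peaks):
    """Viscous damping ratio from successive free-decay peak amplitudes
    via the logarithmic decrement: delta = ln(a_n / a_{n+1})."""
    if len(peaks) < 2:
        raise ValueError("need at least two successive peaks")
    decs = [math.log(a0 / a1) for a0, a1 in zip(peaks, peaks[1:])]
    delta = sum(decs) / len(decs)          # average log decrement
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Synthetic decay: amplitude falls by a factor e**0.1 each cycle,
# so delta = 0.1 and the damping ratio is about 0.0159.
peaks = [math.exp(-0.1 * n) for n in range(5)]
zeta = damping_ratio(peaks)
```

In practice each harmonic's decay envelope would be extracted (e.g. by band-pass filtering) before applying the same estimate per partial.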

Keywords: damping factor, music wire, portuguese guitar, string dynamics

Procedia PDF Downloads 550
640 Education-based, Graphical User Interface Design for Analyzing Phase Winding Inter-Turn Faults in Permanent Magnet Synchronous Motors

Authors: Emir Alaca, Hasbi Apaydin, Rohullah Rahmatullah, Necibe Fusun Oyman Serteller

Abstract:

In recent years, Permanent Magnet Synchronous Motors (PMSMs) have found extensive applications in various industrial sectors, including electric vehicles, wind turbines, and robotics, due to their high performance and low losses. Accurate mathematical modeling of PMSMs is crucial for advanced studies in electric machines. To enhance the effectiveness of graduate-level education, incorporating virtual or real experiments becomes essential to reinforce acquired knowledge. Virtual laboratories have gained popularity as cost-effective alternatives to physical testing, mitigating the risks associated with electrical machine experiments. This study presents a MATLAB-based Graphical User Interface (GUI) for PMSMs. The GUI offers a visual interface that allows users to observe variations in motor outputs corresponding to different input parameters. It enables users to explore healthy motor conditions and the effects of inter-turn short-circuit faults in one phase winding. Additionally, the interface includes menus through which users can access equivalent circuits related to the motor and gain hands-on experience with the mathematical equations used in synchronous motor calculations. The primary objective of this paper is to enhance the learning experience of graduate and doctoral students by providing a GUI-based approach in laboratory studies. This interactive platform empowers students to examine and analyze motor outputs by manipulating input parameters, facilitating a deeper understanding of PMSM operation and control.
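The mathematical model behind such a GUI is the standard PMSM d-q model. As an illustration of the equations students interact with, here is a minimal forward-Euler simulation of that textbook model, sketched in Python rather than MATLAB; all parameter values are illustrative, not taken from the paper:

```python
def pmsm_step(state, v_dq, p, dt):
    """One forward-Euler step of the standard PMSM d-q model:
       did/dt = (vd - R*id + we*Lq*iq) / Ld
       diq/dt = (vq - R*iq - we*(Ld*id + lam)) / Lq"""
    i_d, i_q = state
    v_d, v_q = v_dq
    did = (v_d - p["R"] * i_d + p["we"] * p["Lq"] * i_q) / p["Ld"]
    diq = (v_q - p["R"] * i_q - p["we"] * (p["Ld"] * i_d + p["lam"])) / p["Lq"]
    return (i_d + dt * did, i_q + dt * diq)

def torque(i_d, i_q, p):
    # Electromagnetic torque, including the reluctance term.
    return 1.5 * p["poles"] * (p["lam"] * i_q + (p["Ld"] - p["Lq"]) * i_d * i_q)

# Illustrative parameters (NOT from the paper): resistance, d/q
# inductances, flux linkage, pole pairs, electrical speed.
par = {"R": 0.5, "Ld": 8e-3, "Lq": 8e-3, "lam": 0.1, "poles": 4, "we": 100.0}
state = (0.0, 0.0)
for _ in range(1000):                     # simulate 0.1 s at dt = 0.1 ms
    state = pmsm_step(state, (0.0, 20.0), par, 1e-4)
i_d, i_q = state                          # settles near the steady state
```

An inter-turn fault is commonly modelled by modifying the faulty phase's resistance and inductance, which is exactly the kind of parameter manipulation the GUI exposes.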

Keywords: magnet synchronous motor, mathematical modelling, education tools, winding inter-turn fault

Procedia PDF Downloads 49
639 Characteristics of Aerosol Properties Over Different Desert-Influenced AERONET Sites

Authors: Abou Bakr Merdji, Alaa Mhawish, Xiaofeng Xu, Chunsong Lu

Abstract:

The characteristics of the optical and microphysical properties of aerosols near deserts are analyzed using 11 AErosol RObotic NETwork (AERONET) sites located in 6 major desert areas (the Sahara, Arabia, Thar, Karakum, Taklamakan, and Gobi) between 1998 and 2021. The regional means of Aerosol Optical Depth (AOD), with coarse AOD (CAOD) in parentheses, are 0.44 (0.187), 0.38 (0.26), 0.35 (0.24), 0.23 (0.11), 0.20 (0.14), and 0.10 (0.05) in the Thar, Arabian, Sahara, Karakum, Taklamakan, and Gobi Deserts, respectively, while the opposite ordering holds for the Ångström Exponent (AE) and Fine Mode Fraction (FMF). Higher extinctions are associated with larger particles (dust) over all the main desert regions. This is shown by the almost inversely proportional variations of AOD and CAOD compared with AE and FMF. Coarse particles contribute the most to the total AOD over the Sahara Desert compared with the other deserts all year round. Related to the seasonality of dust events, the maximum AOD (CAOD) generally appears in summer and spring, while the minimum is in winter. The mean values of absorbing AOD (AAOD), absorbing AE (AAE), and Single Scattering Albedo (SSA) for all sites ranged from 0.017 to 0.037, from 1.16 to 2.81, and from 0.844 to 0.944, respectively. Generally, the highest absorbing aerosol load is observed over the Thar, followed by the Karakum, the Sahara, the Gobi, and then the Taklamakan Deserts, while the largest absorbing particles are observed in the Sahara, followed by Arabia, Thar, Karakum, and Gobi, with the smallest over the Taklamakan Desert. Similar absorption qualities are observed over the Sahara, Arabia, Thar, and Karakum Deserts, with SSA values varying between 0.90 and 0.91, whereas the most and least absorbing particles are observed at the Taklamakan and the Gobi Deserts, respectively.
The seasonal AAODs are distinctly different over the deserts, with parts of the Sahara and Arabia and the Dalanzadgad site experiencing the maximum in summer, the Southern Sahara, Western Arabia, Jaipur, and Dushanbe in winter, and Eastern Arabia and Muztagh Ata in autumn. The AAOD and SSA spectra are consistent with the dust-dominated conditions that resulted from aerosol typing (dust and polluted dust) at most deserts, with a possible presence of absorbing particles other than dust at the Arabian, Taklamakan, and Gobi Desert sites.
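The reported quantities are tied together by a simple identity: FMF is the fine-mode share of the total AOD, so CAOD follows directly from AOD and FMF. A small sketch using the Thar Desert means from the abstract:

```python
def coarse_aod(total_aod, fmf):
    """Coarse-mode AOD from total AOD and the fine-mode fraction:
    FMF = fine AOD / total AOD, so coarse AOD = total * (1 - FMF)."""
    if not 0.0 <= fmf <= 1.0:
        raise ValueError("FMF must lie in [0, 1]")
    return total_aod * (1.0 - fmf)

# Thar Desert means from the abstract (AOD = 0.44, CAOD = 0.187) imply
# a mean fine-mode fraction of about 1 - 0.187/0.44 = 0.575.
fmf_thar = 1.0 - 0.187 / 0.44
```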

Keywords: sahara, AERONET, desert, dust belt, aerosols, optical properties

Procedia PDF Downloads 83
638 Microbial Phylogenetic Divergence between Surface-Water and Sedimentary Ecosystems Drove the Resistome Profiles

Authors: Okugbe Ebiotubo Ohore, Jingli Zhang, Binessi Edouard Ifon, Mathieu Nsenga Kumwimba, Xiaoying Mu, Dai Kuang, Zhen Wang, Ji-Dong Gu, Guojing Yang

Abstract:

Antibiotic pollution and the evolution of antibiotic resistance genes (ARGs) are increasingly viewed as major threats to both ecosystem security and human health. This study investigated the fate of antibiotics in aqueous and sedimentary substrates and the impact of ecosystem shifts between water and sedimentary phases on resistome profiles. The findings indicated notable variations in the concentration and distribution patterns of antibiotics across the environmental phases. Based on the partition coefficient (Kd), the total antibiotic concentration was significantly greater in the surface water (1405.45 ng/L; 49.5%) compared with the suspended particulate matter (Kd = 0.64; 892.59 ng/g; 31.4%) and sediment (Kd = 0.4; 542.64 ng/g; 19.1%). However, the relative abundance of ARGs in surface water and sediment was disproportionate to the antibiotic concentrations, and sediments were the predominant ARG reservoirs. Phylogenetic divergence of the microbial communities between the surface-water and sedimentary ecosystems potentially played an important role in driving the ARG profiles of the two distinctive ecosystems. ARGs of clinical importance, including blaGES, MCR-7.1, ermB, tet(34), tet36, tetG-01, and sul2, were significantly enriched in the surface water, while blaCTX-M-01, blaTEM, blaOXA10-01, blaVIM, tet(W/N/W), tetM02, and ermX were amplified in the sediments. cfxA was an endemic ARG in surface-water ecosystems, while the endemic ARGs of the sedimentary ecosystems included aacC4, aadA9-02, blaCTX-M-04, blaIMP-01, blaIMP-02, bla-L1, penA, erm(36), ermC, ermT-01, msrA-01, pikR2, vgb-01, mexA, oprD, ttgB, and aac. These findings offer valuable information for the identification of ARG-specific high-risk reservoirs.
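The Kd values in the abstract are consistent with a simple solid-to-water concentration ratio (with the mixed ng/g over ng/L unit convention the reported numbers imply). A sketch of that check:

```python
def partition_coefficient(c_solid_ng_per_g, c_water_ng_per_l):
    """Solid-water partition coefficient Kd: antibiotic concentration
    in the solid phase (ng/g) over that in surface water (ng/L)."""
    return c_solid_ng_per_g / c_water_ng_per_l

# Totals reported in the abstract:
kd_spm = partition_coefficient(892.59, 1405.45)       # suspended particulate matter
kd_sediment = partition_coefficient(542.64, 1405.45)  # bottom sediment
```

Both ratios reproduce the quoted Kd values of 0.64 and 0.4 to the stated precision.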

Keywords: antibiotic resistance genes, microbial diversity, suspended particulate matter, sediment, surface water

Procedia PDF Downloads 27
637 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States

Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu

Abstract:

Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge of and rich experience in microbial morphology. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relationship between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² = 0.772, p-value < 0.001).
In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.
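The reported agreement between the two methods is a simple linear fit: regressing one abundance measure on the other and reporting the adjusted R². A self-contained sketch of that statistic; the paired values below are hypothetical, not the study's data:

```python
from statistics import mean

def adjusted_r_squared(x, y):
    """Fit y = a + b*x by ordinary least squares and return the
    adjusted R-squared for one predictor and n observations."""
    n = len(x)
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - 2)

# Hypothetical paired log10 abundances (qPCR vs. microscopy); a
# near-linear relationship yields a high adjusted R².
qpcr = [5.1, 5.8, 6.2, 6.9, 7.4, 8.0]
micro = [5.0, 5.9, 6.0, 7.1, 7.3, 8.2]
r2_adj = adjusted_r_squared(qpcr, micro)
```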

Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation

Procedia PDF Downloads 99
636 Utility of Thromboelastography Derived Maximum Amplitude and R-Time (MA-R) Ratio as a Predictor of Mortality in Trauma Patients

Authors: Arulselvi Subramanian, Albert Venencia, Sanjeev Bhoi

Abstract:

Coagulopathy of trauma is an early endogenous coagulation abnormality that occurs shortly after injury and results in high mortality. In emergency trauma situations, viscoelastic tests may be better at identifying the various phenotypes of coagulopathy and demonstrating the contribution of platelet function to coagulation. We aimed to assess thrombin generation and clot strength by estimating a ratio of maximum amplitude to R time (MA-R ratio) for identifying trauma coagulopathy and predicting subsequent mortality. Methods: We conducted a prospective cohort analysis of acutely injured adult trauma patients (18-50 years), admitted within 24 hours of injury, over one year at a Level I trauma center, with follow-up on the 3rd and 5th days after injury. Patients with a history of coagulation abnormalities, liver disease, renal impairment, or intake of drugs affecting coagulation were excluded. Thromboelastography was performed, and a ratio was calculated by dividing the MA by the R time (MA-R). Patients were further stratified into subgroups based on the calculated MA-R quartiles. The first sampling was done within 24 hours of injury, with follow-up on the 3rd and 5th days. Mortality was the primary outcome. Results: 100 acutely injured patients [mean age, 36.6±14.3 years; 94% male; injury severity score 12.2 (9-32)] were included in the study. The median (min-max) on-admission MA-R ratio was 15.01 (0.4-88.4), which declined to 11.7 (2.2-61.8) on day 3 and rose slightly to 13.1 (0.06-68) on day 5. There were no significant differences between subgroups with regard to age or gender. In the subgroup with the lowest MA-R ratios, MA-R1 (<8.90; n = 27), the injury severity score was significantly elevated. MA-R2 (8.91-15.0; n = 23), MA-R3 (15.01-19.30; n = 24), and MA-R4 (>19.3; n = 26) did not differ in their admission laboratory investigations; however, a slight decline in hemoglobin, red blood cell count, and platelet count was observed compared with the other subgroups.
A significantly prolonged R time and a shortened alpha angle and MA were also seen in MA-R1. An elevated incidence of mortality also correlated significantly with low on-admission MA-R ratios (p = 0.003). Temporal changes in the MA-R ratio did not correlate with mortality. Conclusion: The MA-R ratio provides a snapshot of early clot function, focusing specifically on the thrombin burst and clot strength. In our observation, patients with the lowest MA-R ratio (MA-R1) had significantly increased mortality compared with all other groups (45.5% in MA-R1 compared with <25% in MA-R2 to MA-R3, and 9.1% in MA-R4; p < 0.003). The MA-R ratio may prove highly useful for identifying at-risk patients early, when other physiologic indicators are absent.
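The ratio itself and the quartile stratification described above are straightforward to compute. A sketch with an invented cohort for illustration; the study's actual cut points were 8.90, 15.0, and 19.3:

```python
from statistics import quantiles

def ma_r_ratio(ma_mm, r_time_min):
    """Thromboelastography MA-R ratio: maximum amplitude (clot
    strength, mm) divided by R time (clot initiation, min)."""
    if r_time_min <= 0:
        raise ValueError("R time must be positive")
    return ma_mm / r_time_min

def quartile_group(value, cohort_ratios):
    """Assign an MA-R ratio to subgroup MA-R1..MA-R4 using the
    cohort's quartile cut points."""
    q1, q2, q3 = quantiles(cohort_ratios, n=4)
    if value <= q1:
        return "MA-R1"
    if value <= q2:
        return "MA-R2"
    if value <= q3:
        return "MA-R3"
    return "MA-R4"

# Illustrative (invented) cohort of (MA, R time) measurements.
pairs = [(55, 7.0), (60, 5.0), (62, 4.0), (58, 3.5),
         (65, 3.0), (50, 10.0), (61, 4.5), (59, 6.0)]
cohort = [ma_r_ratio(ma, r) for ma, r in pairs]
```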

Keywords: coagulopathy, trauma, thromboelastography, mortality

Procedia PDF Downloads 173
635 FloodNet: Classification of Post-Flood Scenes with a High-Resolution Aerial Imagery Dataset

Authors: Molakala Mourya Vardhan Reddy, Kandimala Revanth, Koduru Sumanth, Beena B. M.

Abstract:

Emergency response and recovery operations are severely hampered by natural catastrophes, especially floods. Understanding post-flood scenarios is essential to disaster management because it facilitates quick evaluation and decision-making. To this end, we introduce FloodNet, a brand-new high-resolution aerial imagery collection created especially for comprehending post-flood scenes. A varied collection of high-quality aerial photos taken during and after flood occurrences makes up FloodNet, which offers comprehensive representations of flooded landscapes, damaged infrastructure, and changed topographies. The dataset provides a thorough resource for training and assessing computer vision models designed to handle the complexity of post-flood scenarios, covering a variety of environmental conditions and geographic regions. Pixel-level semantic segmentation masks are used to label the images in FloodNet, allowing for a more detailed examination of flood-related characteristics, including debris, water bodies, and damaged structures. Furthermore, temporal and positional metadata improve the dataset's usefulness for longitudinal research and spatiotemporal analysis. For activities like flood extent mapping, damage assessment, and infrastructure recovery projection, we provide baseline standards and evaluation metrics to promote research and development in the field of post-flood scene comprehension. By integrating FloodNet into machine learning pipelines, it will be easier to create reliable algorithms that help policymakers, urban planners, and first responders make decisions both before and after floods. The FloodNet dataset aims to support advances in computer vision, remote sensing, and disaster response technologies by providing a useful resource for researchers. FloodNet helps to create creative solutions for boosting communities' resilience in the face of natural catastrophes by tackling the particular problems presented by post-flood situations.
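Pixel-level semantic segmentation benchmarks of this kind are typically scored with per-class intersection over union (IoU). A minimal sketch of that standard metric on a toy label list; the class ids and their meanings are invented for illustration:

```python
def intersection_over_union(pred, truth, cls):
    """Per-class IoU for flat lists of pixel labels: the standard
    score for pixel-level semantic segmentation masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 0.0

# Toy 1x8 "image"; hypothetical classes:
# 0 = background, 1 = flooded area, 2 = debris.
truth = [0, 1, 1, 1, 2, 2, 0, 0]
pred  = [0, 1, 1, 0, 2, 2, 2, 0]
iou_flood = intersection_over_union(pred, truth, 1)
```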

Keywords: image classification, segmentation, computer vision, natural disaster, unmanned aerial vehicle (UAV), machine learning

Procedia PDF Downloads 76
634 Optimizing Electric Vehicle Charging Networks with Dynamic Pricing and Demand Elasticity

Authors: Chiao-Yi Chen, Dung-Ying Lin

Abstract:

With the growing awareness of environmental protection and the implementation of government carbon reduction policies, the number of electric vehicles (EVs) has rapidly increased, leading to a surge in charging demand and imposing significant challenges on the existing power grid’s capacity. Traditional urban power grid planning has not adequately accounted for the additional load generated by EV charging, which often strains the infrastructure. This study aims to optimize grid operation and load management by dynamically adjusting EV charging prices based on real-time electricity supply and demand, leveraging consumer demand elasticity to enhance system efficiency. This study uniquely addresses the intricate interplay between urban traffic patterns and power grid dynamics in the context of electric vehicle (EV) adoption. By integrating Hsinchu City's road network with the IEEE 33-bus system, the research creates a comprehensive model that captures both the spatial and temporal aspects of EV charging demand. This approach allows for a nuanced analysis of how traffic flow directly influences the load distribution across the power grid. The strategic placement of charging stations at key nodes within the IEEE 33-bus system, informed by actual road traffic data, enables a realistic simulation of the dynamic relationship between vehicle movement and energy consumption. This integration of transportation and energy systems provides a holistic view of the challenges and opportunities in urban EV infrastructure planning, highlighting the critical need for solutions that can adapt to the ever-changing interplay between traffic patterns and grid capacity. The proposed dynamic pricing strategy effectively reduces peak charging loads, enhances the operational efficiency of charging stations, and maximizes operator profits, all while ensuring grid stability. 
These findings provide practical insights and a valuable framework for optimizing EV charging infrastructure and policies in future smart cities, contributing to more resilient and sustainable urban energy systems.
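The pricing mechanism rests on demand elasticity: consumers reduce charging when the price rises. A minimal constant-elasticity sketch of that relationship; the load, price, and elasticity values are illustrative, not from the study:

```python
def demand_after_price_change(base_demand_kw, base_price, new_price, elasticity):
    """Constant-elasticity demand response Q = Q0 * (P/P0)**e.
    Elasticity e is negative for ordinary goods such as EV charging,
    so a price increase lowers the charging load."""
    return base_demand_kw * (new_price / base_price) ** elasticity

# Illustrative peak-hour surcharge (values NOT from the study):
# a 20% price rise with elasticity -0.5 trims peak load by about 9%.
peak_load = demand_after_price_change(1000.0, 0.10, 0.12, -0.5)
```

A dynamic-pricing loop would recompute the price each interval from the grid's current headroom and feed the resulting demand back into the load-flow model.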

Keywords: dynamic pricing, demand elasticity, EV charging, grid load balancing, optimization

Procedia PDF Downloads 17
633 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17%, a median area under the receiver operating characteristic (ROC) curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
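The feature-selection step, recursive feature elimination, repeatedly drops the weakest feature until a target number remains. The study scored candidates with a cross-validated SVM; in this illustrative sketch a univariate correlation score stands in for that, and all data are invented:

```python
from statistics import mean, pstdev

def abs_correlation(xs, ys):
    """|Pearson correlation| between a feature column and the target."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = pstdev(xs) * pstdev(ys)
    return abs(cov / denom) if denom else 0.0

def recursive_feature_elimination(X, y, keep):
    """Drop the weakest feature one at a time until `keep` remain.
    X maps feature name -> list of values. The study scored candidate
    subsets with a cross-validated SVM; a univariate correlation
    score stands in for that here."""
    feats = dict(X)
    while len(feats) > keep:
        worst = min(feats, key=lambda name: abs_correlation(feats[name], y))
        del feats[worst]
    return sorted(feats)

# Invented toy data: f1 tracks y, f2 is anti-correlated, f3 is constant.
y = [0, 1, 0, 1, 1, 0]
X = {"f1": [0.1, 0.9, 0.2, 0.8, 0.7, 0.3],
     "f2": [0.9, 0.1, 0.8, 0.2, 0.3, 0.7],
     "f3": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]}
selected = recursive_feature_elimination(X, y, keep=2)
```

The surviving feature set is then what the six downstream classifiers are trained on.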

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 53
632 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas

Abstract:

The present study compared spectral-power indexes and cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. 52 healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error-recognition task under 3 conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires. EEG activity was recorded simultaneously during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity. Additionally, sLORETA was applied in order to localize the sources of brain activity. When exploring EEG activity recorded after task onset during the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas, mainly in the alpha/beta frequency ranges. When examining the EEG activity recorded after task onset during the arithmetic and algebraic conditions, additional activation in the delta/theta band in the right parietal cortex was observed. The ERSP plots revealed alpha/beta desynchronizations within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. Amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic, and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of alpha/beta desynchronization. The level of mathematical anxiety was negatively correlated with the amplitude of theta synchronization and of alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibitory-related activity.
We gratefully acknowledge the support from the №11.G34.31.0043 grant from the Government of the Russian Federation.

Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization

Procedia PDF Downloads 524
631 Improved Clothing Durability as a Lifespan Extension Strategy: A Framework for Measuring Clothing Durability

Authors: Kate E Morris, Mark Sumner, Mark Taylor, Amanda Joynes, Yue Guo

Abstract:

Garment durability, which encompasses physical and emotional factors, has been identified as a critical ingredient in producing clothing with increased lifespans, battling overconsumption, and subsequently tackling the catastrophic effects of climate change. The Eco-design for Sustainable Products Regulation (ESPR) and Extended Producer Responsibility (EPR) schemes have been proposed and will be implemented across Europe and the UK, which might require brands to declare a garment's durability credentials in order to sell in those markets. There is currently no consistent method of measuring the overall durability of a garment. Measuring the physical durability of garments is difficult, and current assessment methods lack objectivity and reliability or do not reflect the complex nature of durability for different garment categories. This study presents a novel and reproducible methodology for testing and ranking the absolute durability of five commercially available garment types: formal trousers, casual trousers, denim jeans, casual leggings, and underwear. A total of 112 garments from 21 UK brands were assessed. Due to variations in end use, different factors were considered across the garment categories when evaluating durability. A physical testing protocol, tailored to each category, was created to dictate the test results needed to measure the absolute durability of the garments. Multiple durability factors were used to modulate the ranking, as opposed to previous studies, which reported only single factors to evaluate durability. The garments in this study were donated by the signatories of the Waste and Resources Action Programme's (WRAP) Textiles 2030 initiative as part of their strategy to reduce the environmental impact of UK fashion. This methodology presents a consistent system for brands and policymakers to follow to measure and rank the physical durability of various garment types.
Furthermore, with such a methodology, the durability of garments can be measured and new standards for improving durability can be created to enhance utilisation and improve the sustainability of the clothing on the market.

Keywords: circularity, durability, garment testing, ranking

Procedia PDF Downloads 34
630 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances

Authors: Chris Thurlbourne

Abstract:

Due to the continual demand for material to build and the limited environmental credentials of 'normal' building materials, there is a need to look at new and reconditioned material types, both biogenic and non-biogenic, and a field of research that accompanies this. This research development focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In the building industry, and in all the industries involved in constructing our built environment, building materials can be broadly categorized into two types, those with biogenic and those with non-biogenic material properties. Both play significant roles in shaping our built environment. Regardless of their properties, all material types originate from our earth, and many are modified through processing to provide resistance to 'forces of nature', be it rain, wind, sun, gravity, or whatever the local environmental conditions throw at us. Materials are modified to offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value, across all types of building material. We assume control of all building materials through rigorous quality-control specifications and regulations to ensure that materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with undetermined live forces, to which they naturally act and react through weathering, patination, and discoloring, and through natural chemical reactions such as rusting. The purpose of the paper is to present recent research that explores the after-life of specific new and reconditioned biogenic and non-biogenic material types, and how understanding materials' natural processes of transformation when exposed to the external climate can inform initial design decisions.
With qualities received in a transient and contingent manner, ecological relationships between material, colonizing organisms, and resulting performances invite opportunities for new design explorations that benefit both human society and our natural environment. The research pursues design for the benefit of both, engaging in biogenic and non-biogenic material engineering while embracing the continual demand for colonization (human and environmental) and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing severe deterioration, instead embracing weathering, patination, and discoloring while establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated via explorations of specific material performances, from laboratory tests to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and on facilitating longer-term patinas of material surfaces that extend the aesthetic beyond common judgments. Experiments are therefore focused on how inherent material qualities drive a design brief toward specific investigations that explore aesthetics induced through production, patinas, and colonization obtained over time while exposed to, and interacting with, external climate conditions.

Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina

Procedia PDF Downloads 86
629 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

The environmental pollution caused by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, including some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. In order to determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing the identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas. Regulation is defined from the known relevance and trends of the pollution type. This work discusses the assessment of contamination trends in a 700 km² oceanic area as affected by contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of collected samples. The methodology developed consists of objectively identifying meaningful variations in microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major polymer type. The comparison of contamination levels was performed at a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
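The decision rule described, propagating all uncertainty sources by Monte Carlo and checking whether the simulated between-year difference excludes zero at the 99% confidence level, can be sketched as follows; the contamination means and the single combined relative uncertainty are invented for illustration:

```python
import random

def contamination_differs(mean_a, mean_b, rel_u, trials=20000, conf=0.99, seed=7):
    """Monte Carlo check of whether two mean contamination levels
    differ meaningfully once the combined (sampling + analysis)
    relative uncertainty, modelled here as Gaussian, is propagated:
    simulate the difference, take the empirical confidence interval,
    and call the change significant only if it excludes zero."""
    rng = random.Random(seed)
    diffs = sorted(rng.gauss(mean_a, mean_a * rel_u) - rng.gauss(mean_b, mean_b * rel_u)
                   for _ in range(trials))
    alpha = 1.0 - conf
    lo = diffs[int(trials * alpha / 2)]
    hi = diffs[int(trials * (1 - alpha / 2)) - 1]
    return not (lo <= 0.0 <= hi)

# Illustrative numbers (NOT the study's data): with a 30% combined
# relative uncertainty, a 120 vs. 110 items/kg difference between two
# years is not significant at the 99% confidence level.
changed = contamination_differs(120.0, 110.0, 0.30)
```

In the study, separate sampling-heterogeneity and analytical-uncertainty components would each be simulated rather than lumped into one relative term.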

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty

Procedia PDF Downloads 89
628 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns

Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis

Abstract:

The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument funded by NASA’s Science Mission Directorate that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and a spaceborne instrument with lower size, mass, and power requirements and with improved performance. The Atmospheric Carbon and Transport – America (ACT-America) is an Earth Venture Suborbital-2 (EVS-2) mission sponsored by the Earth Science Division of NASA’s Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties on spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regional areas of the Continental United States. Regional CO₂ distributions of the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.

Keywords: CO₂ measurement, IMCW, CW lidar, laser spectroscopy

Procedia PDF Downloads 160
627 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
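The core ensembling step the abstract describes, averaging the outputs of the three top-performing models, can be sketched as follows; the probability arrays are hypothetical stand-ins for the outputs of the logistic regression, random forest, and neural network models:

```python
import numpy as np

def ensemble_average(prob_predictions):
    """Average class-probability predictions from the top-performing models."""
    return np.mean(prob_predictions, axis=0)

# Hypothetical 'unhealthy air' probabilities from three models on three days
p_logreg = np.array([0.72, 0.15, 0.60])
p_forest = np.array([0.66, 0.22, 0.55])
p_nn     = np.array([0.75, 0.10, 0.58])

p_avg = ensemble_average([p_logreg, p_forest, p_nn])
labels = (p_avg >= 0.5).astype(int)   # 1 = forecast 'unhealthy'
print(p_avg, labels)
```

Averaging dampens the idiosyncratic errors of any single model, which is the stated rationale for combining the three top performers rather than trusting one.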

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 124
626 Characterising Indigenous Chicken (Gallus gallus domesticus) Ecotypes of Tigray, Ethiopia: A Combined Approach Using Ecological Niche Modelling and Phenotypic Distribution Modelling

Authors: Gebreslassie Gebru, Gurja Belay, Minister Birhanie, Mulalem Zenebe, Tadelle Dessie, Adriana Vallejo-Trujillo, Olivier Hanotte

Abstract:

Livestock must adapt to changing environmental conditions, which can result in either phenotypic plasticity or irreversible phenotypic change. In this study, we combine Ecological Niche Modelling (ENM) and Phenotypic Distribution Modelling (PDM) to provide a comprehensive framework for understanding the ecological and phenotypic characteristics of indigenous chicken (Gallus gallus domesticus) ecotypes. This approach helped us to classify these ecotypes, differentiate their phenotypic traits, and identify associations between environmental variables and adaptive traits. We measured 297 adult indigenous chickens from various agro-ecologies, including 208 females and 89 males. A subset of the 22 measured traits was selected using stepwise selection, resulting in seven traits for each sex. Using ENM, we identified four agro-ecologies potentially harbouring distinct phenotypes of indigenous Tigray chickens. However, PDM classified these chickens into three phenotypical ecotypes. Chickens grouped in ecotype-1 and ecotype-3 exhibited superior adaptive traits compared to those in ecotype-2, with significant variance observed. This high variance suggests a broader range of trait expression within these ecotypes, indicating greater adaptation capacity and potentially more diverse genetic characteristics. Several environmental variables, such as soil clay content, forest cover, and mean temperature of the wettest quarter, were strongly associated with most phenotypic traits. This suggests that these environmental factors play a role in shaping the observed phenotypic variations. By integrating ENM and PDM, this study enhances our understanding of indigenous chickens' ecological and phenotypic diversity. It also provides valuable insights into their conservation and management in response to environmental changes.

Keywords: adaptive traits, agro-ecology, appendage, climate, environment, imagej, morphology, phenotypic variation

Procedia PDF Downloads 31
625 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2

Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle

Abstract:

With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that uses the WRF (Weather Research and Forecasting) model with a Wind Farm Parametrization to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to atmospheric interactions, this study examines the wake effects among the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This modeling approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing a range of information regarding the environmental and socio-economic impacts of offshore wind energy projects.

Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis

Procedia PDF Downloads 71
624 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach

Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong

Abstract:

Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures that can cause severe consequences, including enormous economic losses as well as personnel casualties. It is therefore important to ensure the integrity and efficiency of corroding pipelines, considering the wall thickness, which plays an important role in the failure probability of a corroding pipeline. In practice, the wall thickness is controlled during the pipe purchase stage. For example, the API SPEC 5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance determines the wall thickness distribution characteristics, such as the mean value, standard deviation and distribution type. Taking the uncertainties of the input variables in the burst limit-state function into account, the reliability approach, rather than the deterministic approach, is used to evaluate the failure probability. Moreover, the cost of pipe purchase is influenced by the allowable wall thickness tolerance: stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Changing the wall thickness tolerance therefore varies both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimize the wall thickness tolerance considering both the safety and economy of corroding pipelines. The corrosion burst limit-state function in Annex O of CSA Z662 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function are changed. Using the reliability approach, the corresponding variations in the burst failure probability are shown.
On the other hand, changing the wall thickness tolerance leads to a change in the pipe purchase cost. Using the variations in failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
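The reliability calculation can be sketched as a Monte Carlo estimate of the probability that burst capacity falls below operating pressure, with the wall thickness tolerance entering as the scatter of the thickness distribution. A simplified Barlow-type capacity formula stands in here for the CSA Z662 Annex O limit state, and every number is illustrative rather than a design value:

```python
import numpy as np

rng = np.random.default_rng(1)

def burst_failure_probability(t_nominal, t_tol_sd, n=200_000, rng=rng):
    """Monte Carlo estimate of burst failure probability for a corroded segment.

    Limit state g = p_burst - p_op < 0, with a simplified Barlow-type
    capacity model (not the actual Annex O equation); all parameters are
    hypothetical illustrations.
    """
    D = 0.914            # pipe outside diameter, m
    sigma_flow = 480e6   # flow stress, Pa
    p_op = 9.0e6         # operating pressure, Pa
    t = rng.normal(t_nominal, t_tol_sd, n)       # wall thickness, m
    d = rng.normal(0.25, 0.05, n) * t_nominal    # corrosion defect depth, m
    p_burst = 2.0 * (t - d) * sigma_flow / D     # remaining-ligament capacity
    return np.mean(p_burst < p_op)               # fraction of failed samples

# A tighter purchase tolerance (smaller thickness scatter) lowers the
# estimated failure probability, at a higher purchase cost
pf_loose = burst_failure_probability(t_nominal=0.0119, t_tol_sd=0.0008)
pf_tight = burst_failure_probability(t_nominal=0.0119, t_tol_sd=0.0003)
print(pf_loose, pf_tight)
```

Sweeping `t_tol_sd` and attaching a cost to each tolerance level gives exactly the safety-versus-economy trade-off curve the paper optimizes over.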

Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach

Procedia PDF Downloads 394
623 Glaucoma with Normal IOP: Is It True Normal Tension Glaucoma or Something Else?

Authors: Sushma Tejwani, Shoruba Dinakaran, Kushal Kacha, K. Bhujang Shetty

Abstract:

Introduction and aim: It is not unusual to find patients with glaucomatous damage and normal intraocular pressure (IOP). To label a patient as having normal tension glaucoma (NTG), the majority of clinicians depend on office IOP recordings; hence, the concern is whether we are missing late-night or early-morning spikes in this group of patients. Also, ischemia to the optic nerve is one of the presumed causes of damage in these patients; however, demonstrating the same has been a challenge. The aim of this study was to evaluate IOP variations and patterns in a series of patients with open angles and glaucomatous discs or fields but normal office IOP, and in addition to identify ischemic factors in true NTG patients. Materials & Methods: This was an observational cross-sectional study from a tertiary care centre. The patients who underwent full-day DVT from Jan 2012 to April 2014 were studied. All patients underwent IOP measurement by Goldmann applanation tonometry every 3 hours for 24 hours along with a recording of the blood pressure (BP). Further, patients with normal IOP throughout the 24-hour period were evaluated by a cardiologist with echocardiography and carotid Doppler. Results: There were 47 patients, and most patients studied were in the age group of 50-70 years. A biphasic IOP peak was noted for almost all the patients. Of the 47 patients, 2 were excluded from analysis as they were on treatment. 20 patients (42%) were found on DVT to have an IOP spike and were then diagnosed as open-angle glaucoma, and another 25 (55%) were diagnosed to have normal tension glaucoma and were subsequently advised a carotid Doppler and a cardiologist's consult. Another interesting finding was that 9 patients had a nocturnal dip in their BP and 3 were found to have carotid artery stenosis.
Conclusion: Continuous 24-hour monitoring of the IOP and BP is a very useful, albeit mildly cumbersome, tool which provides a wealth of information in cases of glaucoma presenting with normal office pressures. It is of great value in differentiating between normal tension glaucoma patients and open-angle glaucoma patients. It also helps in timely diagnosis and possible intervention through referral to a cardiologist in cases of carotid artery stenosis.
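The triage logic the study applies to each 24-hour profile can be sketched as a simple rule over the eight 3-hourly readings. The 21 mmHg cutoff below is the conventional upper limit of normal IOP and is an assumption of this sketch, not a threshold stated by the authors:

```python
def classify_dvt(iop_readings, spike_threshold=21.0):
    """Label a 24-hour diurnal IOP profile (readings every 3 h, in mmHg).

    Any reading above spike_threshold flags open-angle glaucoma with a
    spike missed at the office; otherwise the profile is consistent with
    NTG and warrants the cardiology/carotid workup. The 21 mmHg cutoff is
    an illustrative assumption.
    """
    peak = max(iop_readings)
    if peak > spike_threshold:
        return ("OAG (spike detected)", peak)
    return ("NTG workup", peak)

# Hypothetical profile with a late-night spike
label, peak = classify_dvt([14, 15, 17, 22, 18, 16, 15, 14])
print(label, peak)
```

On this illustrative profile the 22 mmHg reading, invisible to daytime office measurement, reclassifies the patient from presumed NTG to open-angle glaucoma.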

Keywords: carotid artery disease in NTG, diurnal variation of IOP, ischemia in glaucoma, normal tension glaucoma

Procedia PDF Downloads 282
622 The Potential of Potato and Maize Based Snacks as Fire Accelerants

Authors: E. Duffin, L. Brownlow

Abstract:

Arson is a crime which can present exceptional problems to forensic specialists. Its destructive nature makes evidence much harder to find, especially when used to cover up another crime. There is a consistent potential threat of arsonists seeking new and easier ways to set fires. Existing research in this field primarily focuses on the use of accelerants such as petrol, with less attention to other more accessible and harder-to-detect materials. This includes the growing speculation that potato and maize-based snacks could be used as fire accelerants. It was hypothesized that all ‘crisp-type’ snacks in foil packaging had the potential to act as accelerants and would burn readily in the various experiments. To test this hypothesis, a series of small lab-based experiments was undertaken, igniting samples of the snacks. Factors such as ingredients, shape, packaging and calorific value were all taken into consideration. The time (in seconds) spent on fire by the individual snacks was recorded. It was found that all of the snacks tested burnt for statistically similar amounts of time, with a p-value of 0.0157. This was followed by a large mock real-life scenario using burning packets of crisps and car seats to investigate the possibility of these snacks being viable tools for the arsonist. Here, three full packets of crisps were selected based on variations in burning during the lab experiments. Each was lit with a lighter to initiate burning, then placed onto a car seat to be timed and observed with video cameras. In all three cases, the fire was significant and sustained by the 200-second mark. On the basis of this data, it was concluded that potato and maize-based snacks are viable fire accelerants. They remain an effective method of starting fires whilst being cheap, accessible, non-suspicious and hard to detect.
The results supported the hypothesis that all tested ‘crisp-type’ snacks in foil packaging had the potential to act as accelerants and would burn readily. This study serves to raise awareness and provide a basis for research into, and prevention of, arson involving maize and potato-based snacks as fire accelerants.
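The group comparison behind the reported p-value can be sketched as a one-way ANOVA over burn times, computed from first principles. The burn-time data below are hypothetical, not the study's measurements:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA, computed from first principles:
    between-group mean square over within-group mean square."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k = len(groups)                                # number of groups
    n = len(all_data)                              # total observations
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical burn times (seconds) for three snack types
potato = np.array([41.0, 38.5, 44.2, 40.1])
maize  = np.array([39.8, 42.3, 37.9, 41.5])
mixed  = np.array([43.1, 40.7, 38.8, 42.0])

F = one_way_anova_F([potato, maize, mixed])
print(round(F, 3))
```

A small F (well below 1 here) corresponds to group means that differ by less than the within-group scatter, i.e. statistically similar burn times across snack types.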

Keywords: arson, crisps, fires, food

Procedia PDF Downloads 120
621 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines

Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz

Abstract:

Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but without quantifying the changes in the reliability of the tower. There are several studies on this topic, but most of them are based on the S-N curve approach using Miner’s rule damage summation method, the de facto standard in the wind industry. However, the qualitative nature of Miner’s method makes it desirable to use fracture mechanics to measure the effects of fatigue on the capacity curve of the structure, which is important in order to evaluate the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain the stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow method is then used to obtain cycle and stress range information from each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is performed to assess the capacity curve of the structure after a number of years. The capacity curves are then used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future.
Results show that fatigue at the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life, when the crack growth curve starts behaving exponentially.
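The crack-growth step can be sketched by integrating the Paris-Erdogan law da/dN = C(ΔK)^m in blocks of cycles. The geometry factor in ΔK is omitted and every parameter value below is an illustrative assumption, not a design value from the study:

```python
import numpy as np

def crack_growth_years(a0, C, m, delta_sigma, cycles_per_year, years):
    """Integrate the Paris-Erdogan law da/dN = C * (dK)^m in cycle blocks.

    dK is approximated as delta_sigma * sqrt(pi * a) (geometry factor
    omitted); units are MPa and metres, and all parameters are
    illustrative placeholders.
    """
    a = a0
    history = [a0]
    for _ in range(years):
        for _ in range(cycles_per_year // 10_000):
            dK = delta_sigma * np.sqrt(np.pi * a)   # stress intensity range
            a += 10_000 * C * dK ** m               # advance 10k cycles at once
        history.append(a)                           # crack size at year end
    return np.array(history)

# Hypothetical tower base: 1 mm initial crack, ~5e6 stress cycles per year
hist = crack_growth_years(a0=1e-3, C=3e-13, m=3.0, delta_sigma=60.0,
                          cycles_per_year=5_000_000, years=25)
print(hist[-1])
```

Because ΔK grows with crack size, the yearly increments accelerate, which is the behaviour the abstract highlights after the 20-year design life.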

Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines

Procedia PDF Downloads 514
620 Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices

Authors: P. Caimmi, E. Bele, A. Abolfathi

Abstract:

Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, the deviation from architectural homogeneity, due to, e.g., manufacturing imperfections, has a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: the characteristics of the architecture (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions) and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs) in the form of accelerance are measured, and the modal shapes are captured through a high-speed camera. The finite element method is used to provide insights on the extension of these results to semi-infinite lattices. An updating procedure is conducted to increase the reliability of numerical simulation results compared to experimental measurements. This is achieved by updating the boundary conditions and material stiffness. Variations in FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed. The effects of geometric imperfections on the dynamic response of periodic structures are investigated. The findings open up the opportunity to tailor these lattice materials to achieve optimal amplitude attenuation at specific frequency ranges.
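An accelerance FRF of the kind measured here relates acceleration response to input force in the frequency domain. A minimal single-record H1-style estimate (no averaging or windowing, unlike a real modal test) can be sketched and checked against a synthetic single-degree-of-freedom system whose resonance frequency is known; all signal parameters are illustrative:

```python
import numpy as np

def accelerance_frf(force, accel, fs):
    """Single-record accelerance FRF estimate H1(f) = S_fa / S_ff."""
    F = np.fft.rfft(force)
    A = np.fft.rfft(accel)
    H1 = (np.conj(F) * A) / (np.conj(F) * F)
    freqs = np.fft.rfftfreq(len(force), 1 / fs)
    return freqs, H1

# Synthetic SDOF check: the accelerance peak should appear near fn = 50 Hz
fs, n = 2048, 4096
t = np.arange(n) / fs
fn, zeta = 50.0, 0.02                 # natural frequency (Hz), damping ratio
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta ** 2)

force = np.zeros(n)
force[0] = 1.0                        # unit impulse at t = 0
disp = (1 / wd) * np.exp(-zeta * wn * t) * np.sin(wd * t)   # impulse response
accel = np.gradient(np.gradient(disp, 1 / fs), 1 / fs)      # numerical d²/dt²

freqs, H = accelerance_frf(force, accel, fs)
peak_f = freqs[np.argmax(np.abs(H))]
print(peak_f)
```

The peak of |H1| lands at the (lightly damped) resonance, which is how measured accelerance FRFs reveal the bandgap and modal structure of the lattice specimens.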

Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis

Procedia PDF Downloads 121
619 Power Performance Improvement of 500W Vertical Axis Wind Turbine with Salient Design Parameters

Authors: Young-Tae Lee, Hee-Chang Lim

Abstract:

This paper presents the performance characteristics of a Darrieus-type vertical axis wind turbine (VAWT) with NACA airfoil blades. The performance of a Darrieus-type VAWT can be characterized by torque and power. There are various parameters affecting the performance, such as chord length, helical angle, pitch angle and rotor diameter. To estimate the optimum shape of a Darrieus-type wind turbine in accordance with various design parameters, we examined the aerodynamic characteristics and separated flow occurring in the vicinity of the blade, the interaction between flow and blade, and the torque and power characteristics derived from them. For flow analysis, flow variations were investigated based on the unsteady RANS (Reynolds-averaged Navier-Stokes) equations. A sliding mesh algorithm was employed in order to consider the rotational effect of the blade. To obtain more realistic results, experiments and numerical analyses were conducted simultaneously on the three-dimensional shape. In addition, several parameters (chord length, rotor diameter, pitch angle, and helical angle) were considered to find the optimum shape design and the characteristics of interaction with the ambient flow. Since the NACA airfoil used in this study showed significant changes in the magnitude of lift and drag depending on the angle of attack, a rotor with low drag, long chord length and short diameter shows a high power coefficient in the low tip speed ratio (TSR) range. On the contrary, in the high TSR range, drag becomes high. Hence, the short-chord and long-diameter rotor produces a high power coefficient. When the pitch angle at which the airfoil is directed inward equals -2° and the helical angle equals 0°, the Darrieus-type VAWT generates maximum power.
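The two figures of merit used throughout, power coefficient and tip speed ratio, follow from standard definitions; for a Darrieus rotor the swept area is A = 2RH (diameter times blade height). The operating-point numbers below are hypothetical, chosen only to illustrate a 500 W-class machine:

```python
def power_coefficient(P, rho, R, H, V):
    """Cp = P / (0.5 * rho * A * V^3), with swept area A = 2*R*H for a
    straight-bladed Darrieus rotor."""
    A = 2.0 * R * H
    return P / (0.5 * rho * A * V ** 3)

def tip_speed_ratio(omega, R, V):
    """TSR = omega * R / V (rotor tip speed over free-stream wind speed)."""
    return omega * R / V

# Hypothetical 500 W-class operating point: 1 m radius, 2 m blade height,
# 8 m/s wind, 16 rad/s rotor speed
Cp = power_coefficient(P=500.0, rho=1.225, R=1.0, H=2.0, V=8.0)
tsr = tip_speed_ratio(omega=16.0, R=1.0, V=8.0)
print(round(Cp, 3), tsr)
```

Plotting Cp against TSR for each chord/diameter combination is exactly the comparison the abstract draws between low- and high-TSR regimes.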

Keywords: darrieus wind turbine, VAWT, NACA airfoil, performance

Procedia PDF Downloads 370
618 Engineering of Reagentless Fluorescence Biosensors Based on Single-Chain Antibody Fragments

Authors: Christian Fercher, Jiaul Islam, Simon R. Corrie

Abstract:

Fluorescence-based immunodiagnostics are an emerging field in biosensor development and exhibit several advantages over traditional detection methods. While various affinity biosensors have been developed to generate a fluorescence signal upon sensing varying concentrations of analytes, reagentless, reversible, and continuous monitoring of complex biological samples remains challenging. Here, we aimed to genetically engineer biosensors based on single-chain antibody fragments (scFv) that are site-specifically labeled with environmentally sensitive fluorescent unnatural amino acids (UAA). A rational design approach resulted in quantifiable analyte-dependent changes in peak fluorescence emission wavelength and enabled antigen detection in vitro. Incorporation of a polarity indicator within the topological neighborhood of the antigen-binding interface generated a titratable wavelength blueshift with nanomolar detection limits. In order to ensure continuous analyte monitoring, scFv candidates with fast binding and dissociation kinetics were selected from a genetic library employing a high-throughput phage display and affinity screening approach. Initial rankings were further refined towards rapid dissociation kinetics using bio-layer interferometry (BLI) and surface plasmon resonance (SPR). The most promising candidates were expressed, purified to homogeneity, and tested for their potential to detect biomarkers in a continuous microfluidic-based assay. Variations of dissociation kinetics within an order of magnitude were achieved without compromising the specificity of the antibody fragments. This approach is generally applicable to numerous antibody/antigen combinations and currently awaits integration in a wide range of assay platforms for one-step protein quantification.

Keywords: antibody engineering, biosensor, phage display, unnatural amino acids

Procedia PDF Downloads 144
617 Neural Networks Underlying the Generation of Neural Sequences in the HVC

Authors: Zeina Bou Diab, Arij Daou

Abstract:

The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each has its own cellular, electrophysiological and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time locked to vocalizations, while HVCINT neurons fire tonically at average high frequency throughout song with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutaminergic and gabaergic pharmacology) via different architecture patterning scenarios with the aim to replicate the in vivo firing patterning behaviors. We are able, through these networks, to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might be contributing to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. 
Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX neurons and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX-to-HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
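As a concrete illustration of the conductance-based approach described above, the sketch below integrates a classic single-compartment Hodgkin-Huxley neuron with Na⁺, K⁺ and leak currents. The channel set and parameter values are the textbook squid-axon ones, not the HVC-specific currents identified by the authors, so this is a minimal stand-in rather than their model.

```python
import numpy as np

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the classic Hodgkin-Huxley equations.

    i_ext: injected current (uA/cm^2); t_max, dt in ms.
    Returns the membrane-voltage trace (mV).
    """
    # Maximal conductances (mS/cm^2) and reversal potentials (mV)
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    c_m = 1.0  # membrane capacitance (uF/cm^2)

    v, m, h, n = -65.0, 0.05, 0.6, 0.32  # near-rest initial conditions
    trace = []
    for _ in range(int(t_max / dt)):
        # Voltage-dependent rate constants (1/ms)
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)

        # Ionic currents at the present voltage
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)

        # Euler updates for gating variables and voltage
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        trace.append(v)
    return np.array(trace)

trace = simulate_hh()
# Count spikes as upward crossings of 0 mV
spikes = int(np.sum((trace[1:] > 0.0) & (trace[:-1] <= 0.0)))
```

Under sustained drive the model fires repetitively; moving toward the networks described above would mean swapping in the HVC-specific currents (e.g. a low-threshold Ca²⁺ or I_h current to produce post-inhibitory rebound in HVCX) and coupling the cells through excitatory and inhibitory synaptic conductances.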

Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird

Procedia PDF Downloads 69
616 Sliding Mode Power System Stabilizer for Synchronous Generator Stability Improvement

Authors: J. Ritonja, R. Brezovnik, M. Petrun, B. Polajžer

Abstract:

Many modern synchronous generators in power systems are very weakly damped. The reasons are cost optimization of machine construction and the introduction of additional control equipment into power systems. Oscillations of synchronous generators and the related stability problems of power systems are harmful and can lead to operational failures and damage. The only practical solution for increasing the damping of these unwanted oscillations is the implementation of power system stabilizers. A power system stabilizer generates an additional control signal that modifies the synchronous generator's field excitation voltage. Modern power system stabilizers are integrated into the static excitation systems of synchronous generators. Commercially available power system stabilizers are based on linear control theory. Due to the nonlinear dynamics of the synchronous generator, current stabilizers do not assure optimal damping of the generator's oscillations over the entire operating range. For that reason, the use of robust power system stabilizers suitable for the entire operating range is justified. Numerous robust techniques are applicable to power system stabilizers. In this paper, the use of sliding mode control for synchronous generator stability improvement is studied. On the basis of sliding mode theory, a robust power system stabilizer was developed. The main advantages of the sliding mode controller are the simple realization of the control algorithm, robustness to parameter variations and rejection of disturbances. The advantage of the proposed sliding mode controller over a conventional linear controller was tested for damping of the synchronous generator's oscillations over the entire operating range. The obtained results show improved damping over the entire operating range of the synchronous generator and an increase in power system stability.
The proposed study contributes to the development of advanced stabilizers, which will replace conventional linear stabilizers and improve the damping of synchronous generators.
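The sliding mode principle behind such a stabilizer can be sketched on a toy problem. The plant below is a generic disturbed second-order oscillator standing in for rotor-swing dynamics, and the surface gain, switching gain and disturbance are all illustrative choices, not the authors' design: the control cancels the known dynamics and adds a discontinuous term that keeps the state on the sliding surface despite the disturbance.

```python
import math

def sliding_mode_sim(c=2.0, k=2.0, dt=1e-3, t_max=10.0):
    """Sliding mode regulation of a disturbed second-order oscillator.

    Plant: x1' = x2, x2' = -0.1*x2 - x1 + u + d(t), with bounded
    disturbance d(t) = 0.2*sin(t). Sliding surface s = x2 + c*x1;
    the switching gain must dominate the disturbance (k > |d|),
    after which the state slides to the origin as x1' = -c*x1.
    """
    x1, x2, t = 1.0, 0.0, 0.0  # initial deviation from operating point
    while t < t_max:
        s = x2 + c * x1
        # Equivalent control (cancels known dynamics) + switching term
        u = x1 + 0.1 * x2 - c * x2 - k * math.copysign(1.0, s)
        d = 0.2 * math.sin(t)  # disturbance, unknown to the controller
        x1 += dt * x2
        x2 += dt * (-0.1 * x2 - x1 + u + d)
        t += dt
    return x1, x2

x1_final, x2_final = sliding_mode_sim()
```

The hard `sign` term produces the chattering mentioned in the sliding mode literature; a practical excitation-system implementation would typically replace it with a boundary-layer saturation function.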

Keywords: control theory, power system stabilizer, robust control, sliding mode control, stability, synchronous generator

Procedia PDF Downloads 220
615 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for preventing risks related to climate conditions in the agricultural sector is presented. It determines the optimum yearly premium a producer should pay in order to reach his required turnover. The model is based on both climatic stability and the 'soft' responses of commonly grown species to average climate variations at the same place, within a safety ball that can be determined from past meteorological data. This allows a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall and temperature. By a simple best-parameter fit to the expert table drawn up with professionals, an optimal representation of yearly production is determined from records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The model also requires accurate pricing of the commodity at year N+1. Therefore, a pricing model is developed using three state variables, namely the spot price, the difference between the medium-term and the long-term forward price, and the long-term structure of the model. The use of historical data enables calibration of the state-variable parameters and allows the pricing of the commodity. Application to beet sugar underlines the pricer's precision: the agreement between the computed result and real-world data is 99.5%. The optimal premium is then deduced, giving the producer a useful bound for negotiating an offer from insurance companies to effectively protect his harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.
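The regression step of the production model can be sketched as follows. The weather records and "true" response coefficients below are synthetic placeholders invented for illustration (the paper's expert tables and calibration data are not reproduced here); the point is only the mechanics of fitting yearly production to the three meteorological drivers and predicting year N+1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical records: yearly averages of daily sunlight (h),
# rainfall (mm/day) and temperature (deg C) over 15 past seasons.
n_years = 15
sunlight = rng.uniform(4.0, 8.0, n_years)
rainfall = rng.uniform(1.0, 4.0, n_years)
temperature = rng.uniform(10.0, 18.0, n_years)

# Assumed "true" linear response of yield (t/ha) to the weather drivers,
# plus a small residual standing in for everything the regression ignores.
true_coef = np.array([20.0, 3.0, 5.0, 1.5])  # intercept, sun, rain, temp
X = np.column_stack([np.ones(n_years), sunlight, rainfall, temperature])
yields = X @ true_coef + rng.normal(0.0, 0.5, n_years)

# Least-squares fit of the linear production model
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)

# Predicted production for a forecast year N+1
forecast = np.array([1.0, 6.5, 2.0, 14.0])
predicted_yield = forecast @ coef
```

With the fitted production model and a commodity price from the three-factor pricer, the minimum yearly turnover and hence the optimal premium follow directly.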

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 376
614 Effects of Four Dietary Oils on Cholesterol and Fatty Acid Composition of Egg Yolk in Layers

Authors: A. F. Agboola, B. R. O. Omidiwura, A. Oyeyemi, E. A. Iyayi, A. S. Adelani

Abstract:

Dietary cholesterol has elicited great public interest because of its relation to coronary heart disease. Consumers have therefore been paying more attention to health, reducing their consumption of cholesterol-rich foods. The egg is considered one of the major sources of dietary cholesterol in humans. An alternative way to reduce the potential cholesterolemic effect of eggs, however, is to modify the fatty acid composition of the yolk. The effects of palm oil (PO), soybean oil (SO), sesame seed oil (SSO) and fish oil (FO) supplementation in layer diets on egg yolk fatty acids, cholesterol, egg production and egg quality parameters were evaluated in a 42-day feeding trial. One hundred and five Isa Brown laying hens, 34 weeks of age, were randomly allotted to seven groups of five replicates with three birds per replicate in a completely randomized design. Seven corn-soybean basal diets (BD) were formulated: BD + no oil (T1), BD + 1.5% PO (T2), BD + 1.5% SO (T3), BD + 1.5% SSO (T4), BD + 1.5% FO (T5), BD + 0.75% SO + 0.75% FO (T6) and BD + 0.75% SSO + 0.75% FO (T7). Five eggs were randomly sampled from each replicate at day 42 to assay the cholesterol and fatty acid profile of the egg yolk and to assess egg quality. Results showed no significant (P>0.05) differences in production performance, egg cholesterol or egg quality parameters except for yolk height, albumen height, yolk index, egg shape index, Haugh unit and yolk colour. No significant differences (P>0.05) were observed in the total cholesterol, high-density lipoprotein or low-density lipoprotein levels of egg yolk across treatments. However, diet affected (P<0.05) the triacylglycerol (TAG) and very-low-density lipoprotein (VLDL) contents of the egg yolk. The highest TAG (603.78 mg/dl) and VLDL (120.76 mg/dl) values were recorded in eggs of hens on T4 (1.5% sesame seed oil) and were similar to those on T3 (1.5% soybean oil), T5 (1.5% fish oil) and T6 (0.75% soybean oil + 0.75% fish oil).
Results also revealed significant (P<0.05) variation in the eggs' total polyunsaturated fatty acid (PUFA) content. In conclusion, dietary oils could be included in layer diets to produce designer eggs low in cholesterol and high in PUFA, especially omega-3 fatty acids.

Keywords: dietary oils, egg cholesterol, egg fatty acid profile, egg quality parameters

Procedia PDF Downloads 307
613 Evaluation and Risk Assessment of Heavy Metals Pollution Using Edible Crabs, Based on Food Intended for Human Consumption

Authors: Nayab Kanwal, Noor Us Saher

Abstract:

The management and utilization of food resources is becoming a major issue due to rapid urbanization, wastage and non-sustainable use of food, especially in developing countries. The use of seafood as an alternative source is therefore strongly promoted worldwide. Marine pollution strongly affects marine organisms, which ultimately decreases their export quality. Monitoring contamination in marine organisms is a good indicator of environmental quality as well as seafood quality, and monitoring the accumulation of chemical elements within the various tissues of organisms has become a useful tool for surveying current or chronic levels of heavy metal exposure within an environment. From this perspective, this study compared previous and current levels (years 2012 and 2014) of heavy metals (Cd, Pb, Cr, Cu and Zn) in crabs marketed in Karachi and estimated the toxicological risk associated with their intake. The accumulation of metals in marine organisms, both essential (Cu and Zn) and toxic (Pb, Cd and Cr), whether of natural or anthropogenic origin, is a genuine food safety issue. Significant (p<0.05) variations in metal concentrations were found in all crab species between the two years, with most metals showing higher accumulation in 2012. For the toxicological risk assessment, the estimated weekly intake (EWI), target hazard quotient (THQ) and cancer risk (CR) were computed; the EWI and non-cancer risk values (THQ < 1) showed no serious threat associated with the consumption of shellfish species from the Karachi coast. The cancer risk assessment indicated the highest risk from Cd and Pb pollution if these species are consumed in excess. We summarize key environmental health research on health effects associated with exposure to contaminated seafood.
It can be concluded that, on the Pakistan coast, these edible species may be sensitive and vulnerable to the adverse effects of environmental contaminants; more attention should be paid to Pb and Cd bioaccumulation and to the toxicological risks to seafood and its consumers.
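The screening quantities named above follow the standard USEPA-style intake formulas, which can be sketched as below. The concentration, consumption rate, body weight, reference dose and slope factor are illustrative placeholders, not the study's measured values.

```python
def daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """Estimated daily intake (EDI) of a metal, mg per kg body weight per day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

def weekly_intake(edi):
    """Estimated weekly intake (EWI), mg per kg body weight per week."""
    return edi * 7.0

def target_hazard_quotient(edi, rfd):
    """THQ = EDI / oral reference dose; THQ < 1 indicates no appreciable
    non-carcinogenic risk over a lifetime of exposure."""
    return edi / rfd

def cancer_risk(edi, slope_factor):
    """Incremental lifetime cancer risk, CR = EDI x cancer slope factor."""
    return edi * slope_factor

# Illustrative screening for Cd in crab muscle (all input values hypothetical):
edi = daily_intake(conc_mg_per_kg=0.4, intake_kg_per_day=0.03, body_weight_kg=60.0)
ewi = weekly_intake(edi)
thq = target_hazard_quotient(edi, rfd=1e-3)   # assumed oral RfD for Cd, mg/kg/day
cr = cancer_risk(edi, slope_factor=0.38)      # assumed slope factor, (mg/kg/day)^-1
```

In practice, the THQs of the individual metals are also summed into a hazard index, and a CR in the 10⁻⁶ to 10⁻⁴ range is the usual acceptability band for carcinogenic screening.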

Keywords: cancer risk, edible crabs, heavy metals pollution, risk assessment

Procedia PDF Downloads 378
612 Advanced Electron Microscopy Study of Fission Products in a TRISO Coated Particle Neutron Irradiated to 3.6 × 10²¹ n/cm² Fast Fluence at 1040 °C

Authors: Haiming Wen, Isabella J. Van Rooyen

Abstract:

Tristructural isotropic (TRISO)-coated fuel particles are designed as nuclear fuel for high-temperature gas reactors. The TRISO coating consists of layers of carbon buffer, inner pyrolytic carbon (IPyC), SiC, and outer pyrolytic carbon. The TRISO coating, especially the SiC layer, acts as a containment system for fission products produced in the kernel. However, release of certain metallic fission products across intact TRISO coatings has been observed for decades. Despite numerous studies, the mechanisms by which fission products migrate across the coating layers remain poorly understood. In this study, scanning transmission electron microscopy (STEM), energy dispersive X-ray spectroscopy (EDS), high-resolution transmission electron microscopy (HRTEM) and electron energy loss spectroscopy (EELS) were used to examine the distribution, composition and structure of fission products in a TRISO coated particle neutron irradiated to 3.6 × 10²¹ n/cm² fast fluence at 1040 °C. Precession electron diffraction was used to investigate the characters of grain boundaries where specific fission product precipitates are located. The retention fraction of ¹¹⁰ᵐAg in the investigated TRISO particle was estimated to be 0.19. A high density of nanoscale fission product precipitates was observed in the SiC layer close to the SiC-IPyC interface, most of which are rich in Pd, while Ag was not identified. Some Pd-rich precipitates contain U. Precipitates tend to have complex structure and composition. Although a precipitate appears to have uniform contrast in STEM, EDS indicated that there may be composition variations throughout the precipitate, and HRTEM suggested that the precipitate may have several parts differing in crystal structure or orientation. Attempts were made to measure the charge states of precipitates using EELS and to study their possible effect on precipitate transport.

Keywords: TRISO particle, fission product, nuclear fuel, electron microscopy, neutron irradiation

Procedia PDF Downloads 263