Search results for: computational experiment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4582

292 Automated, Objective Assessment of Pilot Performance in Simulated Environment

Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt

Abstract:

Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through powerful computational tools. Because technology has outpaced methodology, the vast majority of training-related work is still done by human instructors, which makes assessment inefficient and vulnerable to instructors’ subjectivity. This research presents an Objective Assessment Tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres integrated into the virtual environment. These were implemented based on the Aeronautical Design Standard Performance Specification Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core of gOAT is an enhanced algorithm that provides the instructor with a new set of information: objective flight parameters fused with a report on the pilot’s psychophysical state. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, data registered by the simulator software (position, orientation, velocity, etc.), and measurements of the pilot’s psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented online at the instructor’s station in a dedicated graphical interface. The tool is based on open-source solutions and is easy to modify: additional manoeuvres can be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithm and measurements used allow not only basic stress-level measurement but also a significant reduction of the instructor’s workload. The tool can be used for training as well as for periodic checks of aircrew.
Its flexibility and ease of modification allow wide-ranging further development and customization. Depending on the simulation purpose, gOAT can be adjusted to support simulators of aircraft, helicopters, or unmanned aerial vehicles (UAVs).
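The tolerance-band comparison at the heart of such an assessment algorithm can be sketched as follows. This is a hypothetical illustration: the actual gOAT algorithm and the ADS-33 tolerance values are not given in the abstract, so the function, bands, and sample data below are invented.

```python
# Hypothetical sketch of an ADS-33-style manoeuvre check: rate one recorded
# flight parameter against "desired" and "adequate" tolerance bands.
# The real gOAT algorithm and MTE limits are not published in this abstract.

def mte_rating(samples, desired, adequate):
    """samples  -- recorded values of one parameter (e.g. altitude error, ft)
    desired  -- (lo, hi) band for 'desired' performance
    adequate -- (lo, hi) band for 'adequate' performance
    Returns fractions of time inside each band and a verdict."""
    in_desired = sum(desired[0] <= s <= desired[1] for s in samples)
    in_adequate = sum(adequate[0] <= s <= adequate[1] for s in samples)
    n = len(samples)
    frac_d, frac_a = in_desired / n, in_adequate / n
    if frac_d == 1.0:
        verdict = "desired"
    elif frac_a == 1.0:
        verdict = "adequate"
    else:
        verdict = "inadequate"
    return frac_d, frac_a, verdict

# Example: altitude error during a hover MTE, with invented bands of
# +/-3 ft (desired) and +/-6 ft (adequate).
errors = [0.5, -1.2, 2.8, 3.5, -0.7, 1.1]
print(mte_rating(errors, (-3, 3), (-6, 6)))
```

A real implementation would evaluate many parameters per MTE and fuse the result with the physiological measurements before display.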

Keywords: automated assessment, flight simulator, human factors, pilot training

Procedia PDF Downloads 132
291 Exploring Fluoroquinolone-Resistance Dynamics Using a Distinct in Vitro Fermentation Chicken Caeca Model

Authors: Bello Gonzalez T. D. J., Setten Van M., Essen Van A., Brouwer M., Veldman K. T.

Abstract:

Resistance to fluoroquinolones (FQ) has increased over the years, posing a significant challenge for the treatment of human infections, particularly gastrointestinal infections caused by zoonotic bacteria transmitted through the food chain and the environment. In broiler chickens, a relatively high proportion of FQ resistance has been observed in indicator Escherichia coli, Salmonella, and Campylobacter isolates. We hypothesize that flumequine (Flu), used as a secondary choice for the treatment of poultry infections, could be associated with this high proportion of FQ resistance. To evaluate this hypothesis, we used an in vitro fermentation chicken caeca model. Two continuous single-stage fermenters were used to simulate in real time the physiological conditions of the chicken caecal content (temperature, pH, caecal content mixing, and anoxic environment). A pool of chicken caecal content containing FQ-resistant E. coli obtained from chickens at slaughter age was used as inoculum, along with a spiked FQ-susceptible Campylobacter jejuni strain isolated from broilers. Flu was added to one of the fermenters (Flu-fermenter) every 24 hours for two days to evaluate the selection and maintenance of FQ resistance over time, while the other served as a control (C-fermenter). The experiment lasted 5 days. Samples were collected at three time points: before, during, and after Flu administration. Serial dilutions were plated on Butzler culture media with and without Flu (8 mg/L) and enrofloxacin (4 mg/L) and on MacConkey culture media with and without Flu (4 mg/L) and enrofloxacin (1 mg/L) to determine the proportion of resistant strains over time. Positive cultures were identified by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-TOF MS). A subset of the obtained isolates was used for whole genome sequencing analysis. Over time, E. coli exhibited positive growth in both fermenters, while C. jejuni growth was detected up to day 3. The proportion of Flu-resistant E. coli strains recovered remained consistent over time under antibiotic selective pressure, while in the C-fermenter a decrease was observed at day 5; a similar pattern was observed for the enrofloxacin-resistant E. coli strains. This suggests that Flu might play a role in the selection and persistence of enrofloxacin resistance, compared to the C-fermenter, where enrofloxacin-resistant E. coli strains appeared at a later time. Furthermore, positive growth was detected from both fermenters only on Butzler plates without antibiotics. A subset of C. jejuni strains from the Flu-fermenter proved susceptible to ciprofloxacin (MIC < 0.12 μg/mL). A selection of E. coli strains from both fermenters revealed the presence of plasmid-mediated quinolone resistance (PMQR) (qnr-B19) in only one strain from the C-fermenter, belonging to sequence type (ST) 48, and in all strains from the Flu-fermenter, which belonged to ST189. Our results show a selective impact of Flu on PMQR-positive E. coli strains, while no effect was observed on C. jejuni. Maintenance of Flu resistance correlated with antibiotic selective pressure. Further studies of antibiotic resistance gene transfer among commensal and zoonotic bacteria in chicken caecal content may help to elucidate the mechanisms of resistance spread.
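The proportion of resistant strains derived from such paired plate counts can be computed as below. This is a sketch with invented colony counts and dilution factors; the study's raw data are not reported in the abstract.

```python
# Illustrative calculation of the resistant fraction from paired plate
# counts: CFU/mL on an antibiotic-supplemented plate divided by CFU/mL on
# the corresponding plain plate. All numbers are made up for illustration.

def resistant_fraction(count_ab, dil_ab, count_plain, dil_plain, vol_ml=0.1):
    """counts  -- colonies observed on each plate
    dils    -- dilution factors of the plated suspensions
    vol_ml  -- plated volume; cancels out but kept for clarity."""
    cfu_ab = count_ab * dil_ab / vol_ml
    cfu_plain = count_plain * dil_plain / vol_ml
    return cfu_ab / cfu_plain

# E.g. 42 colonies on MacConkey + enrofloxacin at a 10^4 dilution versus
# 36 colonies on plain MacConkey at a 10^5 dilution:
print(resistant_fraction(42, 1e4, 36, 1e5))
```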

Keywords: fluoroquinolone-resistance, escherichia coli, campylobacter jejuni, in vitro model

Procedia PDF Downloads 40
290 Optimization Approach to Integrated Production-Inventory-Routing Problem for Oxygen Supply Chains

Authors: Yena Lee, Vassilis M. Charitopoulos, Karthik Thyagarajan, Ian Morris, Jose M. Pinto, Lazaros G. Papageorgiou

Abstract:

With globalisation, better coordination of production and distribution decisions has become increasingly important for industrial gas companies in order to remain competitive in the marketplace. In this work, we investigate a problem that integrates production, inventory, and routing decisions in a liquid oxygen supply chain. The oxygen supply chain consists of production facilities, external third-party suppliers, and multiple customers, including hospitals and industrial customers. The product produced by the plants or sourced from the competitors, i.e., third-party suppliers, is distributed by a fleet of heterogeneous vehicles to satisfy customer demands. The objective is to minimise the total operating cost, comprising production, third-party, and transportation costs. The key production decisions include production and inventory levels and the amount sourced from third-party suppliers, while the distribution decisions involve customer allocation, delivery timing, delivery amount, and vehicle routing. The optimisation of the coordinated production, inventory, and routing decisions is a challenging problem, especially for large instances. Thus, we present a two-stage procedure to solve the integrated problem efficiently. First, the problem is formulated as a mixed-integer linear programming (MILP) model with a simplified routing component. The solution of this first-stage MILP model yields the optimal customer allocation, production and inventory levels, and delivery timing and amount. Then, we fix these decisions and solve a detailed routing problem. In the second stage, we propose a column generation scheme to address the computational complexity of the resulting detailed routing problem. A case study considering a real-life oxygen supply chain in the UK is presented to illustrate the capability of the proposed models and solution method.
Furthermore, the solutions from the proposed approach are compared with the corresponding solutions provided by existing metaheuristic techniques (e.g., guided local search and tabu search algorithms) to evaluate its efficiency.
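As an illustration of the first-stage allocation decision, the toy sketch below brute-forces a two-customer instance: each customer is assigned to a plant or the third-party supplier so as to minimise sourcing plus transport cost under plant capacity. All numbers are invented, and the real model is a full MILP with inventory and delivery-timing variables solved by an optimisation solver, not enumeration.

```python
# Toy stand-in for the first-stage allocation model: enumerate all
# customer-to-source assignments and keep the cheapest feasible one.
from itertools import product

def allocate(demand, sources, transport):
    """demand:    {customer: tonnes}
    sources:   {source: (unit_cost, capacity)}
    transport: {(source, customer): unit transport cost}
    Returns (best total cost, {customer: source})."""
    customers = list(demand)
    best = (float("inf"), None)
    for choice in product(sources, repeat=len(customers)):
        used = {s: 0.0 for s in sources}
        cost = 0.0
        for cust, src in zip(customers, choice):
            used[src] += demand[cust]
            unit_cost, _ = sources[src]
            cost += demand[cust] * (unit_cost + transport[(src, cust)])
        if all(used[s] <= sources[s][1] for s in sources) and cost < best[0]:
            best = (cost, dict(zip(customers, choice)))
    return best

demand = {"hospital": 30, "factory": 50}
sources = {"plant": (10, 60), "third_party": (14, 100)}
transport = {("plant", "hospital"): 2, ("plant", "factory"): 3,
             ("third_party", "hospital"): 1, ("third_party", "factory"): 2}
print(allocate(demand, sources, transport))
```

With the numbers above, serving both customers from the plant violates its 60-tonne capacity, so the cheapest feasible assignment sends the hospital to the third party.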

Keywords: production planning, inventory routing, column generation, mixed-integer linear programming

Procedia PDF Downloads 98
289 The Product Innovation Using Nutraceutical Delivery System on Improving Growth Performance of Broiler

Authors: Kitti Supchukun, Kris Angkanaporn, Teerapong Yata

Abstract:

The product innovation using a nutraceutical delivery system to improve broiler growth performance is a product planning and development response to the antibiotic bans adopted in local and global livestock production systems. Restricting the use of antibiotics can reduce the quality of chicken meat and increase pathogenic bacterial contamination. Although other alternatives have been used to replace antibiotics, their efficacy has been inconsistent, reflected in low chicken growth performance and contaminated products. The product innovation aims to deliver the selected active ingredients into the body effectively. The product was tested at pharmaceutical lab scale and at farm scale for market feasibility in order to create product innovation using the nutraceutical delivery system model. The model establishes product standardization and a traceable quality control process for farmers. The study uses mixed methods: first a qualitative method to identify the farmers' (consumers') demands and the product standard, then quantitative research to develop and conclude the findings regarding acceptance of the technology and product performance. The survey was sent to different organizations by random sampling among the entrepreneur population, including integrated broiler farms, broiler farms, and other related organizations. The mixed-method results, both qualitative and quantitative, verify the demands of users and lead users, since they provide information about the industry standard and technology preferences, support developing the right product for the market, and suggest solutions to industry problems. The product innovation selected nutraceutical ingredients that address the following problems in livestock: bacterial infection, inflammation, gut health, and oxidative stress.
The combination of the selected nutraceuticals and nanostructured lipid carrier (NLC) technology aims to improve the chemical and pharmaceutical properties by converting the active ingredients into nanoparticles, which are released at the targeted location at an accurate concentration. The active ingredients in nanoparticle form are more stable, elicit antibacterial activity against pathogenic Salmonella spp. and E. coli, balance gut health, and have antioxidant and anti-inflammatory activity. The experimental results show that the nutraceuticals have antioxidant and antibacterial activity, increase average daily gain (ADG), and reduce feed conversion ratio (FCR). The results also show a significant increase in the European Performance Index, which can increase farmers' profit when exporting. The product innovation will be evaluated with technology acceptance methods among farmers and industry. Broiler production and commercialization analyses are useful for reducing the importation of animal supplements. Most importantly, the product innovation is protected by intellectual property.
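The performance metrics cited above can be computed directly. The sketch below uses invented numbers and assumes the common European Production Efficiency Factor (EPEF) formula for the European Performance Index, a variant the abstract does not spell out.

```python
# Broiler growth-performance metrics (all input values are invented).

def adg(start_kg, end_kg, days):
    """Average daily gain, kg/day."""
    return (end_kg - start_kg) / days

def fcr(feed_kg, gain_kg):
    """Feed conversion ratio: kg of feed consumed per kg of weight gained."""
    return feed_kg / gain_kg

def epef(livability_pct, weight_kg, age_days, fcr_value):
    """European Production Efficiency Factor (assumed formula):
    livability(%) * live weight(kg) / (age(days) * FCR) * 100."""
    return livability_pct * weight_kg / (age_days * fcr_value) * 100

g = adg(0.042, 2.5, 38)       # day-old chick to 2.5 kg in 38 days
r = fcr(4.0, 2.5 - 0.042)     # 4 kg of feed for the total gain
print(round(g, 3), round(r, 2), round(epef(96.0, 2.5, 38, r), 1))
```

A higher EPEF reflects the combined effect of lower FCR and higher ADG reported for the treated birds.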

Keywords: nutraceutical, nanostructured lipid carrier, anti-microbial drug resistance, broiler, Salmonella

Procedia PDF Downloads 152
288 Neural Networks Underlying the Generation of Neural Sequences in the HVC

Authors: Zeina Bou Diab, Arij Daou

Abstract:

The neural mechanisms of sequential behaviors are intensively studied, with songbirds serving as a model for learned vocal production. We study the premotor nucleus HVC, which sits at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each with its own cellular, electrophysiological, and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song; a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time-locked to vocalizations; while HVCINT interneurons fire tonically at high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might contribute to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing.
Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, and others. Replication is only a preliminary step that must be followed by model prediction and testing.
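The conductance-based formalism referred to above can be illustrated with a minimal generic Hodgkin-Huxley neuron. Note this uses the classic squid-axon parameters and forward-Euler integration purely for illustration; the paper's HVC models rely on cell-class-specific ion channels not reproduced here.

```python
# Minimal single-compartment Hodgkin-Huxley neuron (classic parameters:
# C=1 uF/cm^2, gNa=120, gK=36, gL=0.3 mS/cm^2), integrated with forward
# Euler, counting spikes under a constant injected current.
import math

def simulate(i_inj=10.0, t_ms=50.0, dt=0.01):
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        # voltage-dependent gating rates
        a_m = 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
        b_m = 4.0 * math.exp(-(v + 65) / 18)
        a_h = 0.07 * math.exp(-(v + 65) / 20)
        b_h = 1.0 / (1 + math.exp(-(v + 35) / 10))
        a_n = 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
        b_n = 0.125 * math.exp(-(v + 65) / 80)
        # ionic currents and membrane update
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / c_m
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        # count upward threshold crossings as spikes
        if v > 0 and not above:
            spikes, above = spikes + 1, True
        elif v < -30:
            above = False
    return spikes

print(simulate())  # repetitive firing under constant suprathreshold drive
```

The HVC models extend this template with additional currents (e.g. those supporting post-inhibitory rebound in HVCX) and couple many such units synaptically.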

Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird

Procedia PDF Downloads 50
287 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case

Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza

Abstract:

Reverse engineering processes are widely used in industry with the main goal of determining the materials and manufacturing processes used to produce a component. Many characterization techniques and computational tools are available to obtain this information. A case study of reverse engineering applied to a secondary sandwich-hybrid type structure used in a helicopter is presented. The methodology consists of five main steps, which can be applied to any similar component: collecting information about the service conditions of the part, disassembly and dimensional characterization, functional characterization, material properties characterization, and manufacturing process characterization, which together provide the traceability of the materials and processes of aeronautical products that ensures their airworthiness. A detailed explanation of each step is covered. The criticality and functionality of each part, state-of-the-art information, and information obtained from interviews with the technical groups of the helicopter’s operators were analyzed; 3D optical scanning, standard and advanced materials characterization techniques, and finite element simulation yielded all the characteristics of the materials used in the manufacture of the component. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon and glass fibers, aluminum honeycomb core, epoxy resin, and epoxy adhesive. The stacking sequence and fiber volume fraction are critical for the mechanical behavior; an acid digestion method was used for this purpose. This also helped in determining the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and submitted to mechanical and environmental tests.
These results were compared with those obtained during reverse engineering, allowing the conclusion that the materials and manufacturing process were correctly determined. Tooling for the manufacture was designed and made according to the geometry and process requirements. The part was manufactured, and the required mechanical and environmental tests were performed. Finally, geometric characterization and non-destructive techniques verified the quality of the part.
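The fiber volume fraction obtained from acid digestion follows directly from the measured masses and densities (an ASTM D3171-style calculation). The masses and densities below are invented examples, not values from the study.

```python
# Fiber volume fraction from an acid-digestion test: the matrix is digested
# away, the remaining fibers are weighed, and volumes are recovered from the
# constituent densities. All numbers here are illustrative.

def fiber_volume_fraction(m_composite, m_fiber, rho_composite, rho_fiber):
    """V_f = (m_f / rho_f) / (m_c / rho_c), fiber volume over composite volume."""
    return (m_fiber / rho_fiber) / (m_composite / rho_composite)

# 2.00 g carbon/epoxy sample (density 1.55 g/cm^3) leaving 1.20 g of fiber
# (density 1.78 g/cm^3) after the resin is digested:
vf = fiber_volume_fraction(2.00, 1.20, 1.55, 1.78)
print(round(vf, 3))  # -> 0.522
```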

Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype

Procedia PDF Downloads 393
286 Performance of a Sailing Vessel with a Solid Wing Sail Compared to a Traditional Sail

Authors: William Waddington, M. Jahir Rizvi

Abstract:

A sail used to propel a vessel functions in a similar way to an aircraft wing. Traditionally, cloth and ropes were used to produce sails. However, there is one major problem with traditional sail design: increased turbulence and flow separation compared to an aircraft wing of the same camber. This has led to the development of the solid wing sail, focusing mainly on the sail shape. Traditional cloth sails are manufactured as a single element, whereas a solid wing sail is made of two segments. To the authors’ best knowledge, the phenomena behind the performance of this type of sail at various angles of wind direction with respect to the sailing vessel’s heading (known as the angle of attack) remain an area of mystery. Hence, in this study, the thrusts produced by wing sails constructed with various angles (22°, 24°, 26° and 28°) between the two segments have been compared to that of a traditional cloth sail made of carbon-fiber material; carbon fiber was used to achieve the exact shape of a commercially available mainsail. NACA 0024 and NACA 0016 foils were used to generate the two-segment wing sail shape, which incorporates a flap between the first and second segments. Both the two-dimensional and the three-dimensional sail models, designed in the commercial CAD software SolidWorks, were analyzed with Computational Fluid Dynamics (CFD) techniques using Ansys CFX, considering an apparent wind speed of 20.55 knots at an apparent wind angle of 31°. The results indicate that the thrust from the traditional sail increases from 8.18 N to 8.26 N when the angle of attack is increased from 5° to 7°; the thrust decreases if the angle of attack is increased further. A solid wing sail with a 20° angle between its two segments produces thrusts from 7.61 N to 7.74 N as the angle of attack increases from 7° to 8°.
The thrust remains steady up to a 9° angle of attack and drops dramatically beyond 9°. The highest thrusts obtained for the solid wing sails with 22°, 24°, 26° and 28° between the two segments are 8.75 N, 9.10 N, 9.29 N and 9.19 N, respectively, all at the optimum angle of attack of 7°. Therefore, all the thrust values predicted for solid wing sails with inter-segment angles above 20° are higher than the thrust predicted for the traditional sail. However, the best performance from a solid wing sail is expected when the angle between the two segments is above 20° but at or below 26°. In addition, 1/29th-scale models were tested in a wind tunnel to observe the flow behavior around the sails. The experimental results support the numerical observations, as the observed flow behaviors agree.
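The reported thrusts correspond to the component of the aerodynamic force resolved along the vessel's course. Given lift and drag, this driving force can be recovered as below; the lift and drag values in the example are invented, since the abstract reports only the resulting thrusts.

```python
# Driving force ("thrust") from sail lift and drag at apparent wind angle
# beta:  F_x = L*sin(beta) - D*cos(beta).  Lift and drag values are invented.
import math

def driving_force(lift_n, drag_n, awa_deg):
    """Force component along the course, in newtons."""
    beta = math.radians(awa_deg)
    return lift_n * math.sin(beta) - drag_n * math.cos(beta)

# E.g. L = 18.0 N, D = 1.5 N at the study's 31 degree apparent wind angle:
print(round(driving_force(18.0, 1.5, 31.0), 2))  # -> 7.98
```

The decomposition makes clear why reducing separation matters: drag enters with cos(beta), so at the small apparent wind angle used here a small drag reduction buys a comparatively large gain in thrust.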

Keywords: CFD, drag, sailing vessel, thrust, traditional sail, wing sail

Procedia PDF Downloads 259
285 Evaluating the Potential of a Fast Growing Indian Marine Cyanobacterium by Reconstructing and Analysis of a Genome Scale Metabolic Model

Authors: Ruchi Pathania, Ahmad Ahmad, Shireesh Srivastava

Abstract:

Cyanobacteria are promising microbes that can capture and convert atmospheric CO₂ and light into valuable industrial bio-products like biofuels, biodegradable plastics, etc. Among their most attractive traits are fast autotrophic growth, year-round cultivation on non-arable land, high photosynthetic activity, much greater biomass and productivity, and amenability to genetic manipulation. Cyanobacteria store carbon in the form of glycogen, which can be hydrolyzed to release glucose and fermented to form bioethanol or other valuable products. Marine cyanobacterial species are especially attractive for countries with a scarcity of freshwater. We recently identified a native marine cyanobacterium, Synechococcus sp. BDU 130192, which has a good growth rate and high polyglucan accumulation compared to Synechococcus PCC 7002. In this study, we first sequenced the whole genome, and the sequences were annotated using the RAST server. A genome-scale metabolic model (GSMM) was reconstructed through the COBRA toolbox. A GSMM is a computational representation of the metabolic reactions and metabolites of the target strain, analyzed through Flux Balance Analysis (FBA), which uses external nutrient uptake rates to estimate steady-state intracellular and extracellular reaction fluxes, typically while maximizing cell growth. The model, which we have named iSyn942, includes 942 reactions and 913 metabolites, comprising 831 metabolic, 78 transport, and 33 exchange reactions. The phylogenetic tree obtained by BLAST search revealed that the strain is a close relative of Synechococcus PCC 7002. FBA was applied to the model iSyn942 to predict the theoretical yields (mol product produced/mol CO₂ consumed) of native and non-native products like acetone, butanol, etc. under phototrophic conditions by applying metabolic engineering strategies.
The reported strain can be a viable strain for biotechnological applications, and the model will be helpful to researchers interested in understanding the metabolism as well as to design metabolic engineering strategies for enhanced production of various bioproducts.
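The FBA computation underlying these yield predictions is, in its standard formulation, the linear program below, where the objective vector typically selects the biomass (growth) reaction:

```latex
\max_{v \in \mathbb{R}^{942}} \; c^{\mathsf{T}} v
\qquad \text{s.t.} \qquad
S\,v = 0, \qquad v_{\min} \le v \le v_{\max}
```

Here $S$ is the $913 \times 942$ stoichiometric matrix of iSyn942 (rows: metabolites, columns: reactions), $v$ is the vector of reaction fluxes, $S v = 0$ enforces the steady-state mass balance, and the measured nutrient uptake rates enter through the bounds $v_{\min}$ and $v_{\max}$. Theoretical product yields are obtained by re-optimizing with the objective set to the product's exchange reaction.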

Keywords: cyanobacteria, flux balance analysis, genome scale metabolic model, metabolic engineering

Procedia PDF Downloads 140
284 Bacterial Community Diversity in Soil under Two Tillage Systems

Authors: Dalia Ambrazaitienė, Monika Vilkienė, Danute Karcauskienė, Gintaras Siaudinis

Abstract:

The soil is a complex ecosystem that is part of our biosphere, and its ability to provide ecosystem services depends on microbial diversity. Tillage is one of the major factors that affect soil properties. No-till systems and shallow ploughless tillage are the opposite of traditional deep ploughing: no-tillage systems, for instance, increase soil organic matter by reducing mineralization rates and concentrating litter in the topsoil layer, whereas deep ploughing increases the biological activity of the arable soil layer and reduces the incidence of weeds. The role of soil organisms is central to soil processes. Although the number of microbial species in soil is still being debated, metagenomic estimates of microbial diversity predict about 2,000-18,000 bacterial genomes in 1 g of soil. Despite the key role of bacteria in soil processes, there is still a lack of information about the bacterial diversity of soils as affected by tillage practices. This study focused on metagenomic analysis of bacterial diversity in long-term experimental plots of Dystric Epihypogleyic Albeluvisols in the western part of Lithuania. The experiment was set up in 2013 and had a split-plot design in which the whole-plot treatments were laid out in a randomized design with three replicates. The whole-plot treatments consisted of two tillage methods: deep ploughing (22-25 cm) (DP) and ploughless tillage (7-10 cm) (PT). Three subsamples (0-20 cm) were collected on October 22, 2015 for each of the three replicates. Subsamples from the DP and PT systems were pooled to make two composite samples, one representing deep ploughing (DP) and the other ploughless tillage (PT). Genomic DNA was extracted from approximately 200 mg of field-moist soil using the D6005 Fungal/Bacterial Miniprep set (Zymo Research®) following the manufacturer’s instructions.
To determine bacterial diversity and community composition, we employed a culture-independent approach of high-throughput sequencing of the 16S rRNA gene. Metagenomic sequencing was performed on the Illumina MiSeq platform at BaseClear. The microbial component of soil plays a crucial role in nutrient cycling in the biosphere, and our study was a preliminary attempt at observing bacterial diversity in soil under two common but contrasting tillage practices. The number of sequenced reads obtained for PT (161,917) was higher than for DP (131,194). The 10 most abundant taxa were the same in both samples (Arthrobacter, Candidatus Saccharibacteria, Actinobacteria, Acidobacterium, Mycobacterium, Bacillus, Alphaproteobacteria, Longilinea, Gemmatimonas, Solirubrobacter); only their shares of the community differed. In DP, Arthrobacter and Acidobacterium accounted for 8.4% and 2.5% of the community, respectively, versus only 5.8% and 2.1% in PT. Nocardioides and Terrabacter were observed only in PT. This work was supported by the project VP1-3.1-ŠMM-01-V-03-001 NKPDOKT and the National Science Program: The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems [grant number SIT-9/2015], funded by the Research Council of Lithuania.
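The relative abundances quoted above follow directly from classified read counts. In the sketch below, the per-taxon counts are invented, but chosen to be consistent with the reported DP total of 131,194 reads and the reported 8.4% / 2.5% shares.

```python
# Relative abundance (percent of community) from classified read counts.
# Per-taxon counts are hypothetical; only the total and two percentages
# appear in the abstract.

def relative_abundance(read_counts):
    """Map each taxon to its percentage of the total classified reads."""
    total = sum(read_counts.values())
    return {taxon: 100.0 * n / total for taxon, n in read_counts.items()}

dp_counts = {"Arthrobacter": 11020, "Acidobacterium": 3280, "other": 116894}
abund = relative_abundance(dp_counts)
print(round(abund["Arthrobacter"], 1), round(abund["Acidobacterium"], 1))
```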

Keywords: deep ploughing, metagenomics, ploughless tillage, soil community analysis

Procedia PDF Downloads 228
283 Influence of Protein Malnutrition and Different Stressful Conditions on Aluminum-Induced Neurotoxicity in Rats: Focus on the Possible Protection Using Epigallocatechin-3-Gallate

Authors: Azza A. Ali, Asmaa Abdelaty, Mona G. Khalil, Mona M. Kamal, Karema Abu-Elfotuh

Abstract:

Background: Aluminium (Al) is a neurotoxic environmental pollutant that can cause diseases such as dementia, Alzheimer's disease, and parkinsonism. It is widely used in antacid drugs as well as in food additives and toothpaste. Stress has been linked to cognitive impairment; social isolation (SI) may exacerbate memory deficits, while protein malnutrition (PM) increases oxidative damage in the cortex, hippocampus, and cerebellum. The risk of cognitive decline may be lowered by maintaining social connections. Epigallocatechin-3-gallate (EGCG) is the most abundant catechin in green tea and has antioxidant, anti-inflammatory, and anti-atherogenic effects as well as health-promoting effects in the CNS. Objective: To study the influence of different stressful conditions, such as social isolation and electric shock (EC), and of an inadequate nutritional condition (PM) on neurotoxicity induced by Al in rats, as well as to investigate the possible protective effect of EGCG under these stressful and PM conditions. Methods: Rats were divided into two major groups: a protected group treated daily throughout the three weeks of the experiment with EGCG (10 mg/kg, IP), and a non-treated group. The protected and non-protected groups each included five subgroups: one normal control receiving saline, and four Al toxicity groups injected daily for three weeks with AlCl3 (70 mg/kg, IP). Of the latter, one served as the Al toxicity model; two were subjected to stress, either by isolation as a mild stressful condition (SI-associated Al toxicity model) or by electric shock as a high stressful condition (EC-associated Al toxicity model); and the last was maintained on a 10% casein diet (PM-associated Al toxicity model). Isolated rats were housed individually in cages covered with black plastic. Biochemical changes in the brain, including acetylcholinesterase (AChE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β), and oxidative parameters (MDA, SOD, TAC), were estimated for all groups.
Histopathological changes in different brain regions were also evaluated. Results: Rats exposed to Al for three weeks showed brain neurotoxicity and neuronal degeneration. Both mild (SI) and high (EC) stressful conditions, as well as inadequate nutrition (PM), enhanced Al-induced neurotoxicity and brain neuronal degeneration; the enhancement induced by stress, especially under the more severe condition (EC), was more pronounced than that of the inadequate nutritional condition (PM), as indicated by the significant increase in Aβ, AChE, MDA, TNF-α, and IL-1β together with the significant decrease in SOD, TAC, and BDNF. On the other hand, EGCG showed more pronounced protection against the hazards of Al under both stressful conditions (SI and EC) than under PM. The protective effects of EGCG were indicated by the significant decrease in Aβ, AChE, MDA, TNF-α, and IL-1β together with the increase in SOD, TAC, and BDNF, and were confirmed by brain histopathological examinations. Conclusion: Neurotoxicity and brain neuronal degeneration induced by Al were more severe with stress than with PM. EGCG can protect against Al-induced brain neuronal degeneration in all conditions. Consequently, administration of EGCG together with socialization and adequate protein nutrition is advised, especially during excessive Al exposure, to mitigate the severity of its neuronal toxicity.

Keywords: environmental pollution, aluminum, social isolation, protein malnutrition, neuronal degeneration, epigallocatechin-3-gallate, rats

Procedia PDF Downloads 371
282 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure

Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik

Abstract:

Lattice Boltzmann Method has been advantageous in simulating complex boundary conditions and solving for fluid flow parameters by streaming and collision processes. This paper includes the study of three different test cases in a confined domain using the method of the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate Lid Driven Cavity flow for different Reynolds Number (100, 400 and 1000) with a domain aspect ratio of 1, i.e., square cavity. A moment-based boundary condition is used for more accurate results. 2. A Thermal Lattice BGK (Bhatnagar-Gross-Krook) Model is developed for the Rayleigh Benard convection for both test cases - Horizontal and Vertical Temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both the test cases (10^3 ≤ Ra ≤ 10^6) keeping the Prandtl number at 0.71. A stability criteria with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy based Lattice Boltzmann Model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with D2Q9 (density) model and D2Q5 (temperature) model are used for two different test cases-the conduction dominated melting and the convection dominated melting. The solidification process is also simulated using the enthalpy based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2 with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with the existing works of literature. 
The simulated results show excellent agreement with existing benchmark solutions within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems.
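As a rough illustration of the SRT approach described above, the following minimal D2Q9 lattice Boltzmann sketch simulates lid-driven cavity flow. It is a sketch only: it uses simple on-site bounce-back walls rather than the moment-based boundary condition of the study, and the grid size, lid velocity, and step count are illustrative choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def feq(rho, u):
    """Second-order BGK equilibrium distribution."""
    cu = np.einsum('qd,ijd->ijq', c, u)
    usq = (u ** 2).sum(-1)[..., None]
    return rho[..., None] * w * (1.0 + 3.0 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lid_driven_cavity(n=32, re=100, steps=2000, u_lid=0.1):
    nu = u_lid * (n - 1) / re          # viscosity from the target Reynolds number
    tau = 3.0 * nu + 0.5               # SRT relaxation time
    rho = np.ones((n, n))
    u = np.zeros((n, n, 2))
    f = feq(rho, u)
    wall = np.zeros((n, n), bool)      # static walls: left, right, bottom
    wall[0, :] = wall[-1, :] = wall[:, 0] = True
    lid = np.zeros((n, n), bool)       # moving top wall (corners stay static)
    lid[1:-1, -1] = True
    for _ in range(steps):
        f -= (f - feq(rho, u)) / tau                    # BGK collision
        fs = f.copy()
        for q in range(9):                              # streaming
            f[..., q] = np.roll(fs[..., q], shift=tuple(c[q]), axis=(0, 1))
        for q in range(9):                              # on-site bounce-back
            f[wall, q] = fs[wall, opp[q]]
            # moving lid: bounce-back plus momentum injected by the wall
            f[lid, q] = fs[lid, opp[q]] + 6.0 * w[q] * c[q, 0] * u_lid
        rho = f.sum(-1)                                 # macroscopic fields
        u = np.einsum('ijq,qd->ijd', f, c) / rho[..., None]
        u[wall] = 0.0
        u[lid] = (u_lid, 0.0)
    return u
```

A few hundred steps already develop the characteristic circulating vortex; the thermal BGK and enthalpy-based variants in the abstract extend this same collide-stream skeleton with a second distribution function for temperature.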

Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT

Procedia PDF Downloads 113
281 Changes in Physicochemical Characteristics of a Serpentine Soil and in Root Architecture of a Hyperaccumulating Plant Cropped with a Legume

Authors: Ramez F. Saad, Ahmad Kobaissi, Bernard Amiaud, Julien Ruelle, Emile Benizri

Abstract:

Agromining is a new technology that establishes agricultural systems on ultramafic soils in order to produce valuable metal compounds such as nickel (Ni), with the final aim of restoring the soil's agricultural functions. However, ultramafic soils are characterized by low fertility, which can limit the yields of hyperaccumulators and metal phytoextraction. The objectives of the present work were to test whether the association of a hyperaccumulating plant (Alyssum murale) and a Fabaceae (Vicia sativa var. Prontivesa) could induce changes in the physicochemical characteristics of a serpentine soil and in the root architecture of the hyperaccumulating plant, and thereby lead to efficient agromining practices through soil quality improvement. Based on standard agricultural systems consisting of the association of legumes with another crop such as wheat or rape, a three-month rhizobox experiment was carried out to study the effect of co-cropping (Co) or rotation (Ro) of a hyperaccumulating plant (Alyssum murale) with a legume (Vicia sativa), with the legume biomass incorporated into the soil, in comparison with mineral fertilization (FMo), on the structure and physicochemical properties of an ultramafic soil and on root architecture. All parameters measured on Alyssum murale (biomass, C and N contents, and taken-up Ni) showed the highest values in the co-cropping system, followed by mineral fertilization and rotation (Co > FMo > Ro), except for root nickel yield, for which rotation outperformed mineral fertilization (Ro > FMo). The rhizosphere soil of Alyssum murale in co-cropping had a larger soil particle size and better aggregate stability than the other treatments. Using geostatistics, co-cropped Alyssum murale showed a greater spatial distribution of root surface area. Moreover, co-cropping and rotation induced lower soil DTPA-extractable nickel concentrations than the other treatments, but higher pH values.
Alyssum murale co-cropped with a legume showed higher biomass production, improved soil physical characteristics and enhanced nickel phytoextraction. This study showed that the introduction of a legume into Ni agromining systems could improve the dry biomass yield of the hyperaccumulating plant and, consequently, the Ni yield. Our strategy can decrease the need to apply fertilizers and thus minimize the risk of nitrogen leaching and groundwater pollution. Co-cropping of Alyssum murale with the legume showed a clear tendency to increase nickel phytoextraction and plant biomass in comparison to the rotation treatment and the fertilized monoculture. In addition, co-cropping improved soil physical characteristics and soil structure through larger and more stable aggregates. It is therefore reasonable to conclude that the use of legumes in Ni-agromining systems could be a good strategy to reduce chemical inputs and restore soil agricultural functions. Improving the agromining system by replacing inorganic fertilizers could simultaneously be a safe way of rehabilitating degraded soils and a method to restore soil quality and functions, leading to the recovery of ecosystem services.

Keywords: plant association, legumes, hyperaccumulating plants, ultramafic soil physicochemical properties

Procedia PDF Downloads 150
280 Productivity of Grain Sorghum-Cowpea Intercropping System: Climate-Smart Approach

Authors: Mogale T. E., Ayisi K. K., Munjonji L., Kifle Y. G.

Abstract:

Grain sorghum and cowpea are important staple crops in many areas of South Africa, particularly Limpopo Province. The two crops are produced under a wide range of unsustainable conventional methods, which reduces productivity in the long run. Climate-smart traditional methods such as intercropping can be adopted to ensure sustainable production of these two important crops in the province. A no-tillage field experiment was laid out in a randomised complete block design (RCBD) with four replications over two seasons in two distinct agro-ecological zones of the province, Syferkuil and Ofcolaco, to assess the productivity of sorghum-cowpea intercrops under two cowpea densities. An LCi Ultra compact photosynthesis system was used to collect photosynthetic rate data biweekly between 11h00 and 13h00 until physiological maturity. Biomass and grain yield of the component crops in binary and sole cultures were determined at harvest maturity from middle rows covering an area of 2.7 m². The biomass was oven-dried in the laboratory at 65 °C to constant weight. To obtain grain yield, harvested sorghum heads and cowpea pods were threshed, cleaned, and weighed. The harvest index (HI) and land equivalent ratio (LER) of the two crops were calculated to assess intercrop productivity relative to sole cultures. Data were analysed using the Statistical Analysis System (SAS) version 9.4, followed by mean separation using the least significant difference method. The photosynthetic rate of the sorghum-cowpea intercrop was influenced by cowpea density and sorghum cultivar. The photosynthetic rate under low density was higher than under high density, but this depended on the growing conditions. Dry biomass accumulation, grain yield, and harvest index differed among the sorghum cultivars and cowpea in both binary and sole cultures at the two test locations during the 2018/19 and 2020/21 growing seasons.
Cowpea grain and dry biomass yields were more than 60% higher under high density than under low density in both binary and sole cultures. The results revealed that the grain yield of the sorghum cultivars was influenced by the density of the companion cowpea crop as well as by the production season. For instance, at Syferkuil, Enforcer and Ns5511 accumulated higher yields under low density, whereas, at Ofcolaco, higher yields were recorded under high density. Generally, under low cowpea density, cultivar Enforcer produced a relatively higher grain yield, whereas under high density, Titan was superior. The partial and total LERs varied with growing season and the treatments studied. The total LERs exceeded 1.0 at the two locations across seasons, ranging from 1.3 to 1.8. From these results, it can be concluded that resources were used more efficiently in the sorghum-cowpea intercrop at both Syferkuil and Ofcolaco. Furthermore, the intercropping system improved the photosynthetic rate, grain yield, and dry matter accumulation of sorghum and cowpea, depending on growing conditions and cowpea density. Hence, the sorghum-cowpea intercropping system can be adopted as a climate-smart practice for sustainable production in Limpopo Province.
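The land equivalent ratio reported above can be computed directly from intercrop and sole-culture yields: each crop's partial LER is its intercrop yield divided by its sole-crop yield, and the total LER is their sum. The yield values in the sketch below are illustrative, not from the study.

```python
def land_equivalent_ratio(yield_intercrop, yield_sole):
    """Partial LER per crop plus total LER; a total above 1.0 means the
    intercrop uses land more efficiently than the separate sole crops."""
    partial = {crop: yield_intercrop[crop] / yield_sole[crop]
               for crop in yield_intercrop}
    return {"partial": partial, "total": sum(partial.values())}

# Illustrative yields in t/ha (not values from the study)
ler = land_equivalent_ratio({"sorghum": 2.1, "cowpea": 0.9},
                            {"sorghum": 2.8, "cowpea": 1.5})
```

Here the total LER is 0.75 + 0.60 = 1.35, i.e., sole cropping would need 35% more land to match the intercrop's output.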

Keywords: cowpea, climate-smart, grain sorghum, intercropping

Procedia PDF Downloads 193
279 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study assesses the classification performance of two satellite datasets and evaluates the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in northern Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features such as slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology not only enables the rapid creation of precise land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy.
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
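The classification workflow above can be approximated offline. The sketch below trains a scikit-learn Random Forest on synthetic "pixels" (spectral bands plus an index and terrain features) for the five classes, then reports overall accuracy and Cohen's kappa as the study does. All feature values are simulated placeholders, not Sentinel-2 or Landsat-8 data, and the GEE-specific API is deliberately not imitated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["forest", "savanna", "cropland", "settlement", "water"]

# Synthetic pixels: 6 spectral bands, 1 vegetation index, 2 terrain features
X_parts, y_parts = [], []
for k, name in enumerate(classes):
    n = 200
    bands = rng.normal(loc=k, scale=0.4, size=(n, 6))
    index = rng.normal(loc=0.2 * k - 0.2, scale=0.05, size=(n, 1))
    terrain = rng.normal(loc=5.0 * k, scale=2.0, size=(n, 2))  # slope, elevation
    X_parts.append(np.hstack([bands, index, terrain]))
    y_parts += [name] * n

X, y = np.vstack(X_parts), np.array(y_parts)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
oa = accuracy_score(y_test, pred)        # overall accuracy
kappa = cohen_kappa_score(y_test, pred)  # Cohen's kappa
```

The same train/validate split and accuracy metrics carry over directly when the feature table is exported from GEE instead of simulated.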

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 41
278 Use of Shipping Containers as Office Buildings in Brazil: Thermal and Energy Performance for Different Constructive Options and Climate Zones

Authors: Lucas Caldas, Pablo Paulse, Karla Hora

Abstract:

Shipping containers are present in different Brazilian cities, used first for transportation purposes but becoming waste materials and an environmental burden at the end of their life cycle. In the last decade, buildings made partly or totally from shipping containers have started to appear in Brazil, most of them for commercial and office uses. Although reusing a container for buildings seems a sustainable solution, it is very important to measure the thermal and energy aspects of such use. In this context, this study aims to evaluate the thermal and energy performance of an office building made entirely from a 12-meter-long High Cube 40’ shipping container in different Brazilian bioclimatic zones. Four constructive solutions commonly used in Brazil were chosen: (1) container without any covering; (2) with internally insulated drywall; (3) with external fiber cement boards; (4) with both drywall and fiber cement boards. DesignBuilder with EnergyPlus was used for the computational simulation over 8,760 hours (one full year). EnergyPlus Weather File (EPW) data for six Brazilian capital cities were considered: Curitiba, Sao Paulo, Brasilia, Campo Grande, Teresina and Rio de Janeiro. A split air conditioning unit was adopted for the conditioned area, with the cooling setpoint fixed at 25 °C and a coefficient of performance (CoP) of 3.3. Three solar absorptances of the exterior layer were examined: 0.3, 0.6 and 0.9. The building in Teresina presented the highest energy consumption, while the one in Curitiba presented the lowest, with a wide range of differences between the results. The constructive option with both external fiber cement boards and drywall presented the best results, although the differences were not significant compared to the solution using drywall alone.
The choice of absorptance had a great impact on energy consumption, especially for containers without any covering and for use in the hottest cities: Teresina, Rio de Janeiro, and Campo Grande. The main contribution of this study is a discussion of constructive aspects as design guidelines for more energy-efficient container buildings that account for local climate differences; it also helps disseminate this cleaner constructive practice in the Brazilian building sector.
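The annual electricity figures produced by such simulations follow from the cooling load and the CoP of the split unit. A minimal sketch of that arithmetic, assuming a hypothetical U-value, envelope area, and mean indoor-outdoor temperature difference (none of these values come from the study):

```python
def annual_conduction_load_kwh(u_value_w_m2k, area_m2, mean_delta_t_k, hours=8760):
    """Sensible conduction gain through the envelope over a year, in kWh."""
    return u_value_w_m2k * area_m2 * mean_delta_t_k * hours / 1000.0

def cooling_electricity_kwh(cooling_load_kwh, cop=3.3):
    """Electricity a split unit draws to remove a given cooling load."""
    return cooling_load_kwh / cop

# Hypothetical numbers for an uninsulated steel envelope of a 40' HC container
load = annual_conduction_load_kwh(u_value_w_m2k=5.9, area_m2=67.0, mean_delta_t_k=5.0)
electricity = cooling_electricity_kwh(load)   # at CoP = 3.3, as in the study
```

This back-of-envelope form shows why both the envelope build-up (which changes the U-value) and the absorptance (which changes the effective temperature load) drive the consumption differences EnergyPlus resolves hour by hour.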

Keywords: bioclimatic zones, Brazil, shipping containers, thermal and energy performance

Procedia PDF Downloads 154
277 On-Farm Mechanized Conservation Agriculture: Preliminary Agro-Economic Performance Difference between Disc Harrowing, Ripping and No-Till

Authors: Godfrey Omulo, Regina Birner, Karlheinz Koller, Thomas Daum

Abstract:

Conservation agriculture (CA), a climate-resilient and sustainable practice, has been carried out for over three decades in Zambia. However, its continued promotion and adoption have been predominantly on a small-scale basis. Despite the plethora of scholarship pointing to the positive benefits of CA with regard to enhanced yield, profitability, carbon sequestration and minimal environmental degradation, these have not stimulated the commensurate agricultural extensification desired for Zambia. The objective of this study was to investigate potential differences between mechanized conventional and conservation tillage practices in terms of operation time, fuel consumption, labor costs, soil moisture retention, soil temperature and crop yield. An on-farm mechanized conservation agriculture (MCA) experiment arranged in a randomized complete block design with four replications was used. The research was conducted on 15 ha of rainfed sandy loam: soybeans on 7 ha with plot dimensions of 24 m by 210 m and maize on 8 ha with plot dimensions of 24 m by 250 m. The three tillage treatments were residue burning followed by disc harrowing, ripping tillage, and no-till. The crops were rotated in two subsequent seasons. All operations were done using a 60 hp 2-wheel tractor, a disc harrow, a two-tine ripper and a two-row planter. Soil measurements and the agro-economic factors were recorded for two farming seasons. The seasonal results showed that the yields of maize and soybeans under no-till and ripping tillage were not significantly different from those under conventional burning and discing. However, there was a significant difference in soil moisture content between no-till (25.31 SFU ± 2.77) and disced (11.91 SFU ± 0.59) plots at depths of 10-60 cm. Soil temperature in no-till plots (24.59 °C ± 0.91) was significantly lower than in disced plots (26.20 °C ± 1.75) at depths of 15 cm and 45 cm.
For maize, there was a significant difference in operation time between disc-harrowed (3.68 hr/ha ± 1.27) and no-till (1.85 hr/ha ± 0.04) plots, and a significant difference in labor cost between disc-harrowed ($45.45/ha ± 19.56) and no-till ($21.76/ha) plots. There was no significant difference in fuel consumption between ripping, disc harrowing and direct seeding. For soybeans, there was a significant difference in operation time between no-till (1.96 hr/ha ± 0.31) and both ripping (3.34 hr/ha ± 0.53) and disc harrowing (3.30 hr/ha ± 0.16). Further, fuel consumption and labor on no-till plots were significantly different from both ripped and disc-harrowed plots. The high seed emergence percentage on the maize disc-harrowed plot (93.75% ± 5.87) was not significantly different from the ripped and no-till plots. Similarly, the high seed emergence percentage on the soybean ripped plot (93.75% ± 13.03) did not differ significantly from the other treatments. The results show that it is economically sound and time-saving to practice MCA and obtain viable yields compared to conventional farming. This research fills a gap on the potential of MCA in the context of Zambia and its profitability, incentivizing policymakers to invest in appropriate and sustainable machinery and implements for extensive agricultural production.
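Treatment comparisons of the kind above (mean ± SD per treatment) can be checked with Welch's t statistic from summary statistics alone. The sketch below assumes n = 4 per treatment, matching the four replications of the RCBD; the study's own mean-separation method (LSD following ANOVA) is different, so this is only an illustration of the arithmetic.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Maize operation time: disc-harrowed vs. no-till (means and SDs from the text;
# n = 4 replications is an assumption based on the RCBD design)
t_stat, dof = welch_t(3.68, 1.27, 4, 1.85, 0.04, 4)
```

The very small no-till SD collapses the effective degrees of freedom toward n1 - 1, which is exactly the situation Welch's correction is designed for.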

Keywords: climate-smart agriculture, labor cost, mechanized conservation agriculture, soil moisture, Zambia

Procedia PDF Downloads 133
276 Extension of the Simplified Theory of Plastic Zones for Analyzing Elastic Shakedown in a Multi-Dimensional Load Domain

Authors: Bastian Vollrath, Hartwig Hubel

Abstract:

In the case of over-elastic and cyclic loading, strain may accumulate due to a ratcheting mechanism until a state of shakedown is possibly achieved. Load-history-dependent numerical investigations by step-by-step analysis are rather costly in terms of engineering time and numerical effort. In the case of multi-parameter loading, where various independent loadings affect the final state of shakedown, the computational effort becomes an additional challenge. Therefore, direct methods such as the Simplified Theory of Plastic Zones (STPZ) have been developed to solve the problem with a few linear elastic analyses. Post-shakedown quantities such as strain ranges and cyclically accumulated strains are calculated approximately by disregarding the load history. The STPZ is based on estimates of a transformed internal variable, which can be used to perform modified elastic analyses in which the elastic material parameters are modified and initial strains are applied as modified loading, resulting in residual stresses and strains. The STPZ has already proven to work well for cyclic loading between two states of loading; usually, a few linear elastic analyses are sufficient to obtain a good approximation of the post-shakedown quantities. In a multi-dimensional load domain, the approximation of the transformed internal variable turns from a planar problem into a hyperspace problem, where time-consuming approximation methods need to be applied. Therefore, a solution restricted to structures with four stress components was developed to estimate the transformed internal variable by means of three-dimensional vector algebra. This paper presents the extension to cyclic multi-parameter loading so that an unlimited number of load cases can be taken into account. The theoretical basis and basic presumptions of the Simplified Theory of Plastic Zones are outlined for the case of elastic shakedown.
The extension of the method to many load cases is explained, and a workflow of the procedure is illustrated. An example, adopting the FE implementation of the method in ANSYS and considering multilinear hardening, is given, which highlights the advantages of the method compared to an incremental, step-by-step analysis.

Keywords: cyclic loading, direct method, elastic shakedown, multi-parameter loading, STPZ

Procedia PDF Downloads 145
275 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches

Authors: Mariam Matiashvili

Abstract:

Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires the use of special syntactic-pragmatic structures - arguments that add credibility to the statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative. Knowing which elements make up an argumentative text in a particular language helps the users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. The research deals with the linguistic analysis of the argumentative structures of Georgian political speeches, particularly the linguistic structure, characteristics, and functions of the parts of an argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help to identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates. Consequently, the texts are dialogical in nature, representing a discussion between two or more people (most often between a journalist and a politician).
The research uses the following approaches to identify and analyze argumentative structures: (1) Lexical Classification and Analysis: identifying lexical items that are relevant in the creation of argumentative texts and building a lexicon of argumentation (groups of words gathered from a semantic point of view); (2) Grammatical Analysis and Classification: grammatical analysis of the words and phrases identified on the basis of the arguing lexicon; (3) Argumentation Schemes: describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argument scheme is “Argument from Analogy”, the identified lexical items semantically express analogy too, and they are most often adverbs in Georgian. As a result, we created a lexicon of the words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
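A lexicon-based cue detector like the first step above can be sketched as follows. The cue words here are hypothetical English stand-ins grouped by discourse role; the actual lexicon described in the study is Georgian and far larger.

```python
# Hypothetical miniature arguing lexicon (English stand-ins for illustration;
# the study's real lexicon is Georgian), grouped by argumentative role.
ARGUING_LEXICON = {
    "claim":   {"must", "should", "clearly", "undoubtedly"},
    "support": {"because", "since", "therefore", "consequently"},
    "attack":  {"however", "but", "although", "nevertheless"},
}

def label_sentence(sentence):
    """Return the argumentative roles whose cue words appear in the sentence."""
    tokens = {tok.strip(".,!?;:").lower() for tok in sentence.split()}
    return sorted(role for role, cues in ARGUING_LEXICON.items() if tokens & cues)
```

In the full pipeline, matches from such a lexicon would feed the grammatical analysis and argumentation-scheme identification stages rather than serve as a classifier on their own.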

Keywords: Georgian, argumentation schemes, argumentation structures, argumentation lexicon

Procedia PDF Downloads 55
274 The Effect of Combined Fluid Shear Stress and Cyclic Stretch on Endothelial Cells

Authors: Daphne Meza, Louie Abejar, David A. Rubenstein, Wei Yin

Abstract:

Endothelial cell (EC) morphology and function are highly impacted by the mechanical stresses these cells experience in vivo, and any change in the mechanical environment can trigger pathological EC responses. A detailed understanding of EC morphological response and function under individual and simultaneous mechanical stimuli is needed for advancement in mechanobiology and preventive medicine. To investigate this, a programmable device capable of simultaneously applying physiological fluid shear stress (FSS) and cyclic strain (CS) has been developed, characterized and validated. Its validation was performed both experimentally, through tracer tracking, and theoretically, through a computational fluid dynamics model. The effectiveness of the device was evaluated through EC morphology changes under mechanical loading conditions, assessed in terms of cell and nucleus elongation, cell alignment and junctional actin production. The results demonstrated that the combined FSS-CS stimulation induced visible changes in EC morphology. Upon simultaneous fluid shear stress and biaxial tensile strain stimulation, cells were elongated and generally aligned with the flow direction, with stress fibers highlighted along the cell junctions. The concurrent stimulation by shear stress and biaxial cyclic stretch led to a significant increase in cell elongation compared to untreated cells. This, however, was significantly lower than that induced by shear stress alone, indicating that the biaxial tensile strain may counteract the elongating effect of shear stress to maintain the shape of ECs. A similar trend was seen in alignment, where the alignment induced by the concurrent application of shear stress and cyclic stretch fell between that induced by shear stress and tensile stretch alone, indicating the opposing roles shear stress and tensile strain may play in cell alignment.
Junctional actin accumulation was increased upon shear stress alone or simultaneously with tensile stretch. Tensile stretch alone did not change junctional actin accumulation, indicating the dominant role of shear stress in damaging EC junctions. These results demonstrate that the shearing-stretching device is capable of applying well characterized dynamic shear stress and tensile strain to cultured ECs. Using this device, EC response to altered mechanical environment in vivo can be characterized in vitro.

Keywords: cyclic stretch, endothelial cells, fluid shear stress, vascular biology

Procedia PDF Downloads 361
273 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways to define haplotypes are needed. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) haplotype length (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to comparable sizes; in each method, haplotypes defined to contain an average of 5, 10, 20 or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the test phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performances. This suggests that defining haplotypes based on LD can reduce computational costs and allow efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
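The GBLUP idea underlying the study can be sketched with NumPy on simulated genotypes: build a genomic relationship matrix from a marker (or haplotype-allele) matrix and shrink phenotypes toward it. The marker coding, variance ratio, and sample sizes below are illustrative assumptions, not values from the Hanwoo dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_markers = 200, 500

# Marker matrix coded -1/0/1; the columns could equally be haplotype alleles
Z = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float) - 1.0
true_effects = rng.normal(0.0, 0.05, n_markers)
tbv = Z @ true_effects                        # true breeding values
y = tbv + rng.normal(0.0, 0.5, n_animals)     # phenotype, e.g. carcass weight

G = Z @ Z.T / n_markers                       # genomic relationship matrix
lam = 1.0                                     # illustrative sigma_e^2 / sigma_g^2
yc = y - y.mean()
gebv = G @ np.linalg.solve(G + lam * np.eye(n_animals), yc)

# "Reliability" is approximated here by correlation with the true values
reliability = np.corrcoef(gebv, tbv)[0, 1]
```

Swapping the SNP columns of Z for haplotype-allele indicator columns changes only the construction of Z; G and the BLUP solve are unchanged, which is why fewer alleles (as with LD-based clustering) directly reduce the cost of building the predictor matrix.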

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 127
272 Screening for Non-hallucinogenic Neuroplastogens as Drug Candidates for the Treatment of Anxiety, Depression, and Posttraumatic Stress Disorder

Authors: Jillian M. Hagel, Joseph E. Tucker, Peter J. Facchini

Abstract:

With the aim of establishing a holistic approach for the treatment of central nervous system (CNS) disorders, we are pursuing a drug development program rapidly progressing through discovery and characterization phases. The drug candidates identified in this program are referred to as neuroplastogens owing to their ability to mediate neuroplasticity, which can be beneficial to patients suffering from anxiety, depression, or posttraumatic stress disorder. These and other related neuropsychiatric conditions are associated with the onset of neuronal atrophy, which is defined as a reduction in the number and/or productivity of neurons. The stimulation of neuroplasticity results in an increase in the connectivity between neurons and promotes the restoration of healthy brain function. We have synthesized a substantial catalogue of proprietary indolethylamine derivatives based on the general structures of serotonin (5-hydroxytryptamine) and psychedelic molecules such as N,N-dimethyltryptamine (DMT) and psilocin (4-hydroxy-DMT) that function as neuroplastogens. A primary objective in our screening protocol is the identification of derivatives associated with a significant reduction in hallucination, which will allow administration of the drug at a dose that induces neuroplasticity and triggers other efficacious outcomes in the treatment of targeted CNS disorders but which does not cause a psychedelic response in the patient. Both neuroplasticity and hallucination are associated with engagement of the 5HT2A receptor, requiring drug candidates differentially coupled to these two outcomes at a molecular level. We use novel and proprietary artificial intelligence algorithms to predict the mode of binding to the 5HT2A receptor, which has been shown to correlate with the hallucinogenic response. 
Hallucination is tested using the mouse head-twitch response model, whereas mouse marble-burying and sucrose preference assays are used to evaluate anxiolytic and antidepressant potential. Neuroplasticity is assessed using dendritic outgrowth assays and cell-based ELISA analysis. Pharmacokinetics and additional receptor-binding analyses also contribute to the selection of lead candidates. A summary of the program is presented.

Keywords: neuroplastogen, non-hallucinogenic, drug development, anxiety, depression, PTSD, indolethylamine derivatives, psychedelic-inspired, 5-HT2A receptor, computational chemistry, head-twitch response behavioural model, neurite outgrowth assay

Procedia PDF Downloads 109
271 Movable Airfoil Arm (MAA) and Ducting Effect to Increase the Efficiency of a Helical Turbine

Authors: Abdi Ismail, Zain Amarta, Riza Rifaldy Argaputra

Abstract:

The helical turbine has the highest efficiency in comparison with other hydrokinetic turbines. However, its efficiency can be further improved so that as much as possible of the kinetic energy of a water current is converted into mechanical energy. This paper explains the effects of adding a Movable Airfoil Arm (MAA) and ducting to a helical turbine. The first study analyzed the efficiency of a Plate Arm Helical Turbine (PAHT) versus a Movable Airfoil Arm Helical Turbine (MAAHT) at various water current velocities. The first step was manufacturing a PAHT and a MAAHT with the following fixed specifications: 80 cm in diameter, a height of 88 cm, 3 blades, NACA 0018 blade profile, a 10 cm blade chord and a 60° inclination angle. The MAAHT uses a NACA 0012 airfoil arm that can move downward by 20°, whereas the PAHT uses a 5 mm plate arm. At current velocities of 0.8, 0.85 and 0.9 m/s, the PAHT generates mechanical power of 92, 117 and 91 watts respectively (consecutive efficiencies of 16%, 17% and 11%). At the same current velocities, the MAAHT generates 74, 60 and 43 watts respectively (consecutive efficiencies of 13%, 9% and 5%). Therefore, the PAHT performs better than the MAAHT. CFD (Computational Fluid Dynamics) analysis shows that the drag force generated by the MAA is greater than that generated by the plate arm. Moreover, on the MAA the drag force is more dominant than the lift force, so the MAA can be called a drag device, whereas on the helical blade the lift force is more dominant than the drag force, so it can be called a lift device. Thus, a lift device cannot be combined with a drag device, because the drag device hinders the rotation of the lift device.
The second study compared the efficiency of a Ducted Helical Turbine (DHT) versus a plain Helical Turbine (HT) through experiments. The first step was manufacturing the DHT and HT. The helical turbine specifications (fixed variables) were: 40 cm in diameter, a height of 88 cm, 3 blades, NACA 0018 blade profile, 10 cm blade chord and a 60° inclination angle. At current speeds of 0.7, 0.8, 0.9 and 1.1 m/s, the HT generates mechanical power of 72, 85, 93 and 98 watts respectively (consecutive efficiencies of 38%, 30%, 23% and 13%). At the same current speeds, the DHT generates mechanical power of 82, 98, 110 and 134 watts respectively (consecutive efficiencies of 43%, 34%, 27% and 18%). The use of ducting increases the water current speed around the turbine.
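The efficiencies quoted above are the ratio of measured mechanical power to the kinetic power of the flow through a reference area. The sketch below assumes the rectangular swept area (diameter × height) as that reference; the paper's own reference area may differ, so the numbers used are generic illustrations, not the reported measurements.

```python
RHO_WATER = 1000.0  # kg/m^3

def hydrokinetic_efficiency(p_mech_w, diameter_m, height_m, v_ms):
    """Mechanical power divided by the kinetic power of the current through
    an assumed rectangular swept area (diameter x height)."""
    area = diameter_m * height_m
    p_kinetic = 0.5 * RHO_WATER * area * v_ms ** 3
    return p_mech_w / p_kinetic

# Generic illustration: 100 W extracted by a 0.8 m x 0.88 m turbine in 1.0 m/s
eff = hydrokinetic_efficiency(100.0, 0.8, 0.88, 1.0)
```

The cubic dependence on current speed is why the measured efficiencies fall sharply at higher velocities even as absolute power output rises.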

Keywords: hydrokinetic turbine, helical turbine, movable airfoil arm, ducting

Procedia PDF Downloads 356
270 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil, and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450ºC in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock found that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification on its own. Since the gasification process occurs at 900ºC and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive-flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis), and calorific value measurements (bomb calorimeter) for the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
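The zero-dimensional equilibrium approach can be illustrated on a deliberately tiny case. The sketch below minimizes the total Gibbs energy of a hypothetical two-species ideal mixture (A ⇌ B) by a one-dimensional search; the standard Gibbs energies are invented placeholders, and the full model in the paper handles many species with elemental-balance constraints enforced via Lagrange multipliers:

```python
# Toy Gibbs energy minimization for chemical equilibrium, reduced to a
# single isomerization A <-> B so one-dimensional search suffices.
# g0 values are illustrative, not gasification data.

import math

R = 8.314       # J/(mol K)
T = 1173.0      # K, roughly a 900 C gasification temperature

def gibbs(x_b: float, g0_a: float, g0_b: float) -> float:
    """Total molar Gibbs energy of an ideal A/B mixture, mole fraction x_b of B."""
    x_a = 1.0 - x_b
    return (x_a * (g0_a + R * T * math.log(x_a))
            + x_b * (g0_b + R * T * math.log(x_b)))

def minimize_golden(f, lo, hi, tol=1e-10):
    """Golden-section search for the minimum of a unimodal f on (lo, hi)."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

g0_a, g0_b = 0.0, -5000.0   # J/mol, hypothetical standard Gibbs energies
x_eq = minimize_golden(lambda x: gibbs(x, g0_a, g0_b), 1e-9, 1 - 1e-9)
print(f"equilibrium x_B = {x_eq:.4f}")
```

At the minimum, x_B/x_A = exp(-(g0_B - g0_A)/RT), which provides an analytical check on the numerical result; the real multi-species problem replaces the line search with constrained minimization.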

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 504
269 Improving a Stagnant River Reach Water Quality by Combining Jet Water Flow and Ultrasonic Irradiation

Authors: A. K. Tekile, I. L. Kim, J. Y. Lee

Abstract:

Human activities put freshwater quality at risk, mainly due to the expansion of agriculture and industry, damming, diversion, and the discharge of inadequately treated wastewater. Rapid human population growth and climate change have escalated the problem. External controls on point and non-point pollution sources are the long-term solution for managing water quality; for a holistic approach, these mechanisms should be coupled with in-water control strategies. The available in-lake or in-river methods are either costly or have adverse effects on the ecological system, so the search for an effective alternative with a reasonable balance continues. This study aimed at physical and chemical water quality improvement in a stagnant reach of the Yeo-cheon River (Korea), which has recently shown signs of water quality problems such as scum formation and fish deaths. The river water quality was monitored for three months, operating only the water flow generator for the first two weeks and then coupling an ultrasonic irradiation device to the flow unit for the remainder of the experiment. In addition to assessing the water quality improvement, the correlation among the parameters was analyzed to isolate the contribution of ultrasonication. Overall, the combined strategy produced a localized improvement in water quality in terms of dissolved oxygen, chlorophyll-a, and dissolved reactive phosphate. At locations beyond the influence of the system, chlorophyll-a increased sharply, but within 25 m of the operation the low initial value was maintained. The inverse correlation coefficient between dissolved oxygen and chlorophyll-a decreased from 0.51 to 0.37 when the ultrasonic irradiation unit was used with the flow, indicating that ultrasonic treatment reduced the chlorophyll-a concentration and inhibited photosynthesis.
The relationship between dissolved oxygen and reactive phosphate also indicated that ultrasonication influenced the reactive phosphate concentration more than the flow did. Although the flow increased turbidity by suspending sediments, the ultrasonic waves canceled out this effect through the agglomeration of suspended particles and their subsequent settling. There was also variation in the water-column interactions, as the decrease of pH and dissolved oxygen from surface to bottom played a role in phosphorus release into the water column. Nitrogen and dissolved organic carbon concentrations showed mixed trends, probably due to the complex chemical reactions that follow the operation. In addition, intensive rainfall and strong wind near the end of the field trial had an apparent impact on the results. The combined effect of water flow and ultrasonic irradiation was a cumulative improvement in water quality, maintaining the dissolved oxygen and chlorophyll-a levels the river requires for healthy ecological interaction. However, overall improvement is not guaranteed, as establishing the effectiveness of ultrasonic technology requires long-term monitoring of water quality before, during, and after treatment. Although the short duration of this study limited the characterization of nutrient patterns, the field-scale use of ultrasound to improve water quality is promising.
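The inverse-correlation figures discussed above are Pearson coefficients between paired time series. A minimal sketch of that computation, using invented sample values rather than the Yeo-cheon monitoring data:

```python
# Pearson's r between two monitoring time series, standard library only.
# The DO and chlorophyll-a values below are hypothetical illustrations.

import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

do_mg_l   = [8.2, 7.9, 7.5, 7.1, 6.8]      # dissolved oxygen, hypothetical
chl_ug_l  = [12.0, 14.5, 16.0, 18.2, 21.0] # chlorophyll-a, hypothetical

r = pearson_r(do_mg_l, chl_ug_l)
print(f"r = {r:.2f}")  # negative r reflects the inverse DO/chl-a relationship
```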

Keywords: stagnant, ultrasonic irradiation, water flow, water quality

Procedia PDF Downloads 177
268 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymer composites is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. The interactions between CNT and polymer are believed to result mainly from van der Waals forces, and interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is expected to capture the interface behavior well. Detailed cohesive zone modeling offers a way to account for the CNT-matrix interactions, but it complicates mesh generation and leads to high computational costs. Homogenized models that smear the fibers into the surrounding matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, resting on a perfect-interface assumption, the traditional homogenized model obtained from mixing rules severely overestimates the stiffness of the composite, even compared with CZM results using an artificially strong interface. A mechanical model that accounts for interface debonding while achieving accuracy comparable to the CZM is therefore essential. The present study first investigates the CNT-matrix interactions using cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential, and polynomial, are considered. These studies indicate that the chosen shape of the CZM constitutive law does not significantly influence the simulation of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations, and the analytical solutions corresponding to these phases are derived.
A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material, and its accuracy and limitations are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions, while the analytical solutions and the proposed homogenized model provide alternative methods for efficiently investigating the mechanical behavior of CNT/polymer composites.
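The bilinear traction-separation law assumed for the three-phase debonding analysis has a simple closed form: linear elastic loading up to the interface strength, then linear softening to zero traction. A sketch with illustrative placeholder parameters, not calibrated CNT/polymer interface values:

```python
# Bilinear cohesive zone law: elastic loading to (d0, t_max), then
# linear softening to zero traction at the failure separation df.
# Parameter values are hypothetical placeholders.

def bilinear_traction(delta: float, t_max: float, d0: float, df: float) -> float:
    """Traction as a function of separation delta for the bilinear CZM."""
    if delta <= 0.0:
        return 0.0
    if delta <= d0:                       # phase 1: elastic branch
        return t_max * delta / d0
    if delta < df:                        # phase 2: softening (damage) branch
        return t_max * (df - delta) / (df - d0)
    return 0.0                            # phase 3: fully debonded

# Hypothetical parameters: 50 MPa strength, 1 nm / 10 nm separations
t_max, d0, df = 50.0, 1.0e-9, 10.0e-9

print(bilinear_traction(0.5e-9, t_max, d0, df))   # elastic
print(bilinear_traction(5.5e-9, t_max, d0, df))   # softening
print(bilinear_traction(12e-9, t_max, d0, df))    # debonded
```

The three branches correspond directly to the three debonding phases the abstract describes, which is what makes the piecewise-linear law amenable to analytical solution.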

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 114
267 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), and state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include ensuring that cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often models a Li-ion cell with an RC circuit model because of, among other benefits, its robustness and low computational cost. However, because an equivalent circuit model such as the RC model is not physics-based, it can never serve as a prognostic model that predicts battery state-of-health and averts a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computational cost. To avoid the high computational cost of a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single-particle approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles, where there is insufficient time for a significant current distribution to develop within an electrode. Under a continuous or high-pulse electrical load, however, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here.
The use of multiple particles, combined with either linear or nonlinear charge-transfer reaction kinetics, makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
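For contrast with the physics-based models discussed above, the first-order RC equivalent-circuit baseline that BMS software commonly uses can be sketched in a few lines: V = OCV(SOC) − I·R0 − V_RC, with the RC branch updated by an exact exponential discretization. All parameter values and the linear OCV curve below are illustrative assumptions:

```python
# First-order RC equivalent-circuit cell model: coulomb counting for SOC,
# one RC branch for diffusion-like dynamics. Parameters are hypothetical.

import math

def simulate_rc_cell(currents, dt, cap_ah, r0, r1, c1, soc0=1.0):
    """Terminal voltage for a current profile (A, discharge-positive)."""
    soc, v_rc, voltages = soc0, 0.0, []
    tau = r1 * c1
    for i in currents:
        soc -= i * dt / (cap_ah * 3600.0)          # coulomb counting
        ocv = 3.0 + 1.2 * soc                      # hypothetical linear OCV
        # exact zero-order-hold discretization of the RC branch
        v_rc = v_rc * math.exp(-dt / tau) + i * r1 * (1 - math.exp(-dt / tau))
        voltages.append(ocv - i * r0 - v_rc)
    return voltages, soc

# 10 s of 5 A discharge on a hypothetical 2.5 Ah cell
v, soc = simulate_rc_cell([5.0] * 10, dt=1.0, cap_ah=2.5,
                          r0=0.01, r1=0.02, c1=2000.0)
print(f"final V = {v[-1]:.3f}, SOC = {soc:.4f}")
```

The model is cheap and robust, but, as the abstract notes, none of its states correspond to physical quantities such as lithium concentration, so it cannot warn of Li⁺ plating before it occurs.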

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 93
266 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is difficult and requires considerable capital and labor. Productivity should therefore be enhanced and kept competitive by developing and applying effective production plans. Among the major tire manufacturing processes, namely mixing, component preparation, building, and curing, the mixing process is an essential step because the main component of the tire, called the compound, is formed there. The compound, a rubber blend with various characteristics, plays its own role in the finished tire. Scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because the various kinds of compounds have their own orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required may differ between operations due to the alteration of additives. In other words, each operation in the mixing process requires a different setup time depending on the previous one; this feature, called sequence-dependent setup time (SDST), is an important issue in traditional scheduling problems such as the FJSSP. Despite its importance, however, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and propose an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs in the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle.
At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous work. As a performance measure, we define an error rate that evaluates the difference between the two solutions. The results show that the proposed PSO algorithm outperforms the MILP model in both effectiveness and efficiency. As future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
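The position and velocity update rule the abstract builds on can be sketched generically. The example below runs PSO on a simple continuous benchmark (the sphere function) rather than the full FJSSP particle encoding, which would also need a schedule decoder; the swarm size, coefficients, and objective are illustrative choices:

```python
# Canonical PSO update: inertia plus cognitive (pbest) and social (gbest)
# attraction. Demonstrated on the sphere function, not the FJSSP encoding.

import random

def pso_minimize(f, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=42):
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:                 # update personal best
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:                # update global best
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

best, best_f = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
print(f"best objective = {best_f:.6f}")
```

For the scheduling problem itself, each particle would encode an operation sequence and machine assignments, and f would decode the particle into a schedule and return its makespan including the sequence-dependent setup times.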

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 245
265 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis

Authors: Elcin Timur Cakmak, Ayse Oguzlar

Abstract:

This study presents the results of analyzing tweets sent by Twitter users about the Russia-Ukraine war using bigram and trigram methods. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to the war, protested, and expressed deep concern, feeling that the safety of their families and their futures was at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, most popularly through social media; many prefer to convey their feelings on Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted from many countries of the world. These tweets were extracted from data sources through the Twitter API and analyzed with the Python programming language. The aim of the study is to find the word sequences in these tweets with the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English. The dataset consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. Tweets containing the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data, and the remaining tweets entered the analysis stage after cleaning in the preprocessing stage. In the data analysis part, sentiments were extracted to characterize the messages people send about the war on Twitter. Negative messages make up the majority of all tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups were found.
Regarding the results, the most frequently used bigrams are “he, is”, “I, do”, and “I, am”, and the most frequently used trigrams are “I, do, not”, “I, am, not”, and “I, can, not”. In the machine learning phase, classification accuracy was measured with the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the NB algorithm achieved the highest accuracy and F-measure, while the CART algorithm achieved the highest precision and recall. For trigrams, the CART algorithm achieved the highest accuracy, precision, and F-measure, and NB achieved the highest recall.
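The bigram/trigram extraction step can be sketched with the standard library alone. The two sample "tweets" below are invented stand-ins for the collected dataset, chosen to echo the kinds of top n-grams reported above:

```python
# Contiguous n-gram extraction and counting, standard library only.
# The sample tweets are invented placeholders for the real dataset.

from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token sequences from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tweets = [
    "i do not want this war",
    "i am not sure he is safe",
]

bigrams, trigrams = Counter(), Counter()
for tweet in tweets:
    tokens = tweet.lower().split()
    bigrams.update(ngrams(tokens, 2))
    trigrams.update(ngrams(tokens, 3))

print(bigrams.most_common(3))
print(trigrams.most_common(3))
```

A real pipeline would precede this with the preprocessing the abstract mentions (removing URLs, mentions, hashtags, and stopword-sensitive normalization) before tokenizing.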

Keywords: classification algorithms, machine learning, sentiment analysis, Twitter

Procedia PDF Downloads 57
264 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to help students prepare for the math courses that are essential for Engineering Degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict a student's activity and academic progress based on their performance in the three essential mathematical courses. Additionally, a model is needed to forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling with the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created in the R programming language for data that follow a binomial distribution. Model 1 retains variables whose p-value is less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. In the Principal Component Analysis, the main component represented on the y-axis is approval of the Introductory Mathematical Course, and the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students remain engaged in school activities for three years after approval of the Introductory Mathematical Course, because they have successfully completed the Math1 and Math2 courses; passing the Math3 course has no effect on student activity. Concerning academic progress, the best fit is Model 1.
It has an AUC of 0.56 and an accuracy of 91%. The model indicates that students who pass the three first-year courses progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect student activity or academic progress. The best model for explaining the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. It shows that passing the Introductory Mathematical Course helps students pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: students who pass the Math1 and Math2 courses stay active for three years after taking the Introductory Mathematical Course and continue to follow the recommended engineering curriculum; additionally, the Introductory Mathematical Course helps students pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
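The AUC values reported for the models can be computed directly from predicted probabilities via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with invented score/label pairs, not the engineering-student dataset:

```python
# AUC by the Mann-Whitney (pairwise comparison) formulation, ties = 1/2.
# Labels: 1 = course approved, 0 = not approved; scores are predicted
# approval probabilities. All values here are hypothetical.

def auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(f"AUC = {auc(labels, scores):.3f}")
```

Unlike accuracy, AUC is threshold-free, which is why the abstract can report a model with 91% accuracy but only 0.56 AUC: on an imbalanced outcome, a model can be accurate while barely ranking positives above negatives.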

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 64
263 Variables, Annotation, and Metadata Schemas for Early Modern Greek

Authors: Eleni Karantzola, Athanasios Karasimos, Vasiliki Makri, Ioanna Skouvara

Abstract:

Historical linguistics unveils the historical depth of languages and traces variation and change by analyzing linguistic variables over time. This field of linguistics usually deals with a closed data set that can be expanded only by the (re)discovery of previously unknown manuscripts or editions. In some cases, it is possible to use (almost) the entire closed corpus of a language for research, as with the Thesaurus Linguae Graecae digital library for Ancient Greek, which contains most of the extant Ancient Greek literature. However, for ‘dynamic’ periods in which the production and circulation of texts in printed as well as manuscript form have not been fully mapped, representative samples and corpora of texts are needed. Such material and tools are utterly lacking for Early Modern Greek (16th-18th c.). This study presents the principles behind the creation of EMoGReC, a pilot representative corpus of Early Modern Greek (16th-18th c.). Its design follows the fundamental principles of historical corpora. The selection of texts aims at a representative and balanced corpus that gives insight into diachronic, diatopic, and diaphasic variation. The pilot sample includes data derived from fully machine-readable vernacular texts that belong to 4-5 different textual genres and come from different geographical areas. We develop a hierarchical linguistic annotation scheme, customized to fit the characteristics of our text corpus. Regarding variables and their variants, we take as a point of departure the bundle of twenty-four features (or categories of features) identified for prose demotic texts of the 16th c., introducing tags that bear the variants [+old/archaic] or [+novel/vernacular]. Further phenomena of ongoing change (cf. The Cambridge Grammar of Medieval and Early Modern Greek) are also selected for tagging.
The annotated texts are enriched with metalinguistic and sociolinguistic metadata to provide a testbed for the development of the first comprehensive set of tools for the Greek language of that period. Built on a relational management system that interconnects data, annotations, and metadata, the EMoGReC database aspires to join a state-of-the-art technological ecosystem for researching observed language variation and change using advanced computational approaches.
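The variant-tagging idea can be sketched as a simple token-level annotation pass: each surface form that realizes a tracked variable receives the variable name and an [+old/archaic] or [+novel/vernacular] value. The variable names and forms below are invented placeholders, since the actual EMoGReC tagset is not reproduced in the abstract:

```python
# Toy token-level variant annotation: lexicon maps a surface form to the
# linguistic variable it realizes and its variant tag. All entries are
# hypothetical placeholders, not the EMoGReC tagset.

ARCHAIC, VERNACULAR = "+old/archaic", "+novel/vernacular"

# Hypothetical lexicon: surface form -> (variable, variant)
lexicon = {
    "form_a": ("variable_1", ARCHAIC),
    "form_b": ("variable_1", VERNACULAR),
}

def annotate(tokens, lexicon):
    """Attach (variable, variant) tags where a token realizes a tracked variable."""
    return [(t, *lexicon[t]) if t in lexicon else (t, None, None)
            for t in tokens]

for token, variable, variant in annotate(["form_a", "other", "form_b"], lexicon):
    print(token, variable, variant)
```

A production scheme would be hierarchical, as the abstract describes, attaching the tags within a structured markup layer alongside the sociolinguistic metadata rather than as flat tuples.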

Keywords: early modern Greek, variation and change, representative corpus, diachronic variables

Procedia PDF Downloads 47