Search results for: computational experiment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4695


525 Interactive Glare Visualization Model for an Architectural Space

Authors: Florina Dutt, Subhajit Das, Matthew Swartz

Abstract:

Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many sub-components such as glare, color, tone, luminance, control, energy efficiency, and flexibility. While the other components have been researched and discussed many times, this paper discusses research done to understand the glare component from an artificial lighting source in an indoor space. Consequently, the paper presents a parametric model that conveys real-time glare levels in an interior space to the designer/architect. Our end users are architects, and for them it is of utmost importance to know what effect the proposed lighting arrangement and furniture layout will have on indoor comfort quality, especially for those furniture elements (or surfaces) that strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when changes to the proposed design are still affordable and different solution routes can be considered for the client. Unfortunately, most existing lighting analysis tools perform rigorous computation and analysis on the back end, making it challenging for the designer to assess glare from interior light quickly; moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximating interior glare data. In addition, we visualize these data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process computationally fluid and fast, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user.
We test our proposed parametric model on a case study, a Computer Lab space in our college facility.
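The color-coded feedback step described in this abstract can be sketched as follows. This is an illustrative Python sketch, not the authors' model: the glare metric (a UGR-style rating), the band thresholds, and the color names are all our assumptions, chosen only to show how sampled glare values might be bucketed for fast designer feedback.

```python
# Hypothetical sketch: bucket approximate glare values into display colors.
# Thresholds loosely follow common UGR comfort bands and are illustrative.

def glare_to_color(ugr: float) -> str:
    """Map a UGR-style glare value to a display color name."""
    bands = [
        (13.0, "blue"),    # imperceptible
        (19.0, "green"),   # perceptible but acceptable
        (25.0, "yellow"),  # uncomfortable
        (28.0, "orange"),  # approaching intolerable
    ]
    for upper, color in bands:
        if ugr < upper:
            return color
    return "red"           # intolerable

# Color-code a small grid of sampled glare values for an interior view.
grid = [[12.0, 18.5], [24.0, 30.0]]
colored = [[glare_to_color(v) for v in row] for row in grid]
```

Keeping the mapping a pure function of a single scalar is what makes this kind of visualization cheap enough to update interactively as the designer moves lights or furniture.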

Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis

Procedia PDF Downloads 349
524 Isolation and Characterization of the First Known Inhibitor Cystine Knot Peptide in Sea Anemone: Inhibitory Activity on Acid-Sensing Ion Channels

Authors: Armando A. Rodríguez, Emilio Salceda, Anoland Garateix, André J. Zaharenko, Steve Peigneur, Omar López, Tirso Pons, Michael Richardson, Maylín Díaz, Yasnay Hernández, Ludger Ständker, Jan Tytgat, Enrique Soto

Abstract:

Acid-sensing ion channels (ASICs) are cation (Na+) channels activated by a drop in pH. These proteins belong to the ENaC/degenerin superfamily of sodium channels. ASICs are involved in sensory perception, synaptic plasticity, learning, memory formation, cell migration and proliferation, nociception, and neurodegenerative disorders, among other processes; therefore, molecules that specifically target these channels are of growing pharmacological and biomedical interest. Sea anemones produce a large variety of ion channel peptide toxins; however, those acting on ligand-gated ion channels, such as Glu-gated and ACh-gated ion channels and ASICs, remain barely explored. The peptide PhcrTx1 is the first compound characterized from the sea anemone Phymanthus crucifer, and it constitutes a novel ASIC inhibitor. This peptide was purified by chromatographic techniques and pharmacologically characterized on acid-sensing ion channels of mammalian neurons using patch-clamp techniques. PhcrTx1 inhibited ASIC currents with an IC50 of 100 nM. Edman degradation yielded a sequence of 32 amino acid residues, with a molecular mass of 3477 Da by MALDI-TOF. No similarity to known sea anemone peptides was found in protein databases. Computational analysis of the Cys pattern and secondary structure arrangement suggested that this is structurally an ICK (inhibitor cystine knot)-type peptide, a scaffold that had not been found in sea anemones but occurs in other venomous organisms. These results show that PhcrTx1 represents the first member of a new structural group of sea anemone toxins acting on ASICs. This peptide also constitutes a novel template for the development of drugs against pathologies related to ASIC function.

Keywords: animal toxin, inhibitor cystine knot, ion channel, sea anemone

Procedia PDF Downloads 307
523 Phage Therapy as a Potential Solution in the Fight against Antimicrobial Resistance

Authors: Sanjay Shukla

Abstract:

Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative; thus, alternative treatments are necessary. Phage therapy is considered one of the most effective approaches to treat multi-drug-resistant bacterial pathogens. Infections caused by Staphylococcus aureus are very efficiently controlled with phage cocktails containing different individual phage lysates that together infect the majority of known pathogenic S. aureus strains. The aim of the current study was to investigate the efficiency of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic wound infections. A total of 150 sewage samples were collected from various livestock farms and subjected to bacteriophage isolation by the double agar layer method. Of the 150 samples, 27 showed plaque formation with lytic activity against S. aureus. Under TEM, the recovered bacteriophage isolates showed a hexagonal structure with tail fibers. The bacteriophage (ØVS) had icosahedral symmetry, a head 52.20 nm in diameter, and a long tail of 109 nm; head and tail were held together by a connector, so it can be classified as a member of the family Myoviridae in the order Caudovirales. The recovered bacteriophage showed antibacterial activity against S. aureus in vitro. A cocktail of phage lysates (ØVS1, ØVS5, ØVS9, and ØVS27) was tested for in vivo antibacterial activity as well as its safety profile. The mouse experiment indicated that the bacteriophage lysate was very safe, with no abscess formation, indicating its safety in a living system. Mice were also prophylactically protected against S. aureus when the bacteriophage cocktail was administered just before S. aureus challenge, indicating that it is a good prophylactic agent. Mice inoculated with S. aureus recovered completely upon bacteriophage administration, a 100% recovery rate that compares very favorably with conventional therapy. In the present study, ten chronic wound cases were treated with phage lysate, with regular follow-up for ten days (at 0, 5, and 10 d). Six of the ten cases showed complete wound recovery within 10 d, an efficacy of 60%, which is very good compared with conventional antibiotic therapy for chronic septic wound infections. Thus, the application of lytic phage in a single dose proved to be an innovative and effective therapy for the treatment of chronic septic wounds.

Keywords: phage therapy, phage lysate, antimicrobial resistance, S. aureus

Procedia PDF Downloads 116
522 A Mixed-Method Exploration of the Interrelationship between Corporate Governance and Firm Performance

Authors: Chen Xiatong

Abstract:

This study explores the interrelationship between corporate governance factors and firm performance in Mainland China using a mixed-method approach, aiming to clarify the current effectiveness of corporate governance, uncover the complex interrelationships between governance factors and firm performance, and enhance understanding of corporate governance strategies in Mainland China. The research combines quantitative methods, namely statistical analysis of governance factors and firm performance data, with qualitative approaches including policy research, case studies, and interviews with staff members. Quantitative data will be gathered through surveys and sampling methods, focusing on governance factors and firm performance indicators, and analyzed using statistical, mathematical, and computational techniques. Qualitative data will be collected through policy research, case studies, and staff interviews, and analyzed through thematic analysis and interpretation of policy documents, case study findings, and interview responses. The study addresses the effectiveness of corporate governance in Mainland China, the interrelationship between governance factors and firm performance, and staff members' perceptions of corporate governance strategies, and it offers suggestions for companies seeking to improve their governance practices. The research contributes to the literature on corporate governance by providing insights into the effectiveness of governance practices in Mainland China, and to the fields of business management and human resources management.

Keywords: corporate governance, business management, human resources management, board of directors

Procedia PDF Downloads 53
521 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those using technologies such as Low Power Wide Area Networks (LPWAN). Promising, reliable, secure and with low energy consumption, LPWAN can connect thousands of IoT devices, and in particular, LoRa is considered one of the most successful solutions to facilitate forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is a challenge for these technologies, requiring work to identify and validate the use of technology in a real environment. To investigate the feasibility of deploying LPWAN in remote water quality monitoring of rivers in the Amazon Region, a LoRa-based test bed consisting of a Lora transmitter and a LoRa receiver was set up, both parts were implemented with Arduino and the LoRa chip SX1276. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the uni. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern since, in the real application, the device must run without maintenance for long periods of time. 
With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), different antenna heights, and distances were tuned to better the connectivity quality, measured with RSSI and loss rate. A handheld spectrum analyzer RF Explorer was used to get the RSSI values. Distances exceeding 200 m have soon proven difficult to establish communication due to the dense foliage and high humidity. The optimal combinations of SF-CR values were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm, these being the best settings for this study so far. The rains and climate changes imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
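The battery concern behind the SF/CR tuning can be made concrete with the standard Semtech SX1276 time-on-air formula, since airtime dominates transmit energy. The sketch below applies that formula to the study's SF8/CR4-5 and SF9/CR4-5 settings; the 125 kHz bandwidth, 8-symbol preamble, explicit header, and 20-byte payload are our assumptions, not values reported in the abstract.

```python
import math

# LoRa packet time-on-air per the SX1276 datasheet formula, to reason
# about SF/CR trade-offs such as the SF8 and SF9 (CR 4/5) settings above.

def lora_airtime_ms(sf, cr_denom, payload_bytes, bw_hz=125_000,
                    preamble=8, explicit_header=True, crc=True):
    t_sym = (2 ** sf) / bw_hz                       # symbol duration (s)
    de = 1 if t_sym > 0.016 else 0                  # low-data-rate optimization
    ih = 0 if explicit_header else 1
    cr = cr_denom - 4                               # coding rate 4/5 -> 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym * 1000.0

# Airtime (and hence transmit energy) grows quickly with spreading factor
# for a hypothetical 20-byte sensor reading.
t_sf8 = lora_airtime_ms(8, 5, 20)
t_sf9 = lora_airtime_ms(9, 5, 20)
```

Each SF step roughly doubles the symbol duration, so a node sending the same reading at SF9 instead of SF8 spends nearly twice as long on air, which matters for an unattended battery-powered transmitter.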

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

Procedia PDF Downloads 84
520 Arguments against Innateness of Theory of Mind

Authors: Arkadiusz Gut, Robert Mirski

Abstract:

The nativist-constructivist debate constitutes a considerable part of current research on mindreading. Peter Carruthers and his colleagues are known for their nativist position in the debate and take issue with constructivist views proposed by other researchers, with Henry Wellman, Alison Gopnik, and Ian Apperly at the forefront. More specifically, Carruthers together with Evan Westra propose a nativistic explanation of Theory of Mind Scale study results that Wellman et al. see as supporting constructivism. While allowing for development of the innate mindreading system, Westra and Carruthers base their argumentation essentially on a competence-performance gap, claiming that cross-cultural differences in Theory of Mind Scale progression as well as discrepancies between infants’ and toddlers’ results on verbal and non-verbal false-belief tasks are fully explainable in terms of acquisition of other, pragmatic, cognitive developments, which are said to allow for an expression of the innately present Theory of Mind understanding. The goal of the present paper is to bring together arguments against the view offered by Westra and Carruthers. It will be shown that even though Carruthers et al.’s interpretation has not been directly controlled for in Wellman et al.’s experiments, there are serious reasons to dismiss such nativistic views which Carruthers et al. advance. The present paper discusses the following issues that undermine Carruthers et al.’s nativistic conception: (1) The concept of innateness is argued to be developmentally inaccurate; it has been dropped in many biological sciences altogether and many developmental psychologists advocate for doing the same in cognitive psychology. 
The reality of development is a complex interaction of changing elements that is belied by the simplistic notion of 'the innate.' (2) The purported innate mindreading conceptual system posited by Carruthers ascribes adult-like understanding to infants, ignoring the difference between first- and second-order understanding, between what can be called 'presentation' and 'representation.' (3) Advances in neurobiology speak strongly against any inborn conceptual knowledge; the neocortex, where conceptual knowledge finds its correlates, is largely equipotential at birth. (4) Carruthers et al.'s interpretations are excessively charitable; they extend results of studies done with 15-month-olds to conclusions about innateness, whereas in reality there has been plenty of time by that age for the skill to be constructed. (5) The looking-time experimental paradigm used in the non-verbal false-belief tasks that provide the main support for Carruthers' argumentation has been criticized on methodological grounds. In the light of the presented arguments, nativism in theory of mind research is concluded to be an untenable position.

Keywords: development, false belief, mindreading, nativism, theory of mind

Procedia PDF Downloads 208
519 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

Oscillation is one of the key topics in physics and is usually taught through the concept of harmonic oscillation. Dealing with a frictional oscillator can be an interesting activity in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the applied model seems to be interesting and useful for undergraduate students. The study presents a well-known physical system, mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders mounted at the same height so that they are horizontally aligned; the cylinders rotate at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation, considering that the magnitudes of the normal forces are functions of the rod's position and that the frictional forces, with a constant coefficient of friction, are proportional to them. Therefore, the whole description of the model relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the phenomenon is not so straightforward after all: experiments show that simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect.
The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to a more complex model that can describe the motion of the rod more accurately. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This velocity dependence has been investigated in many research articles as well; however, this specific problem demonstrates the key concept of the varying friction coefficient and its importance in an interesting and illustrative way. The position function of the rod is described by a more complicated, non-trivial, yet more precise equation than the usual harmonic description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical function with the measured position function in time. The project provides useful material and knowledge for undergraduate students and a new perspective in university physics education.
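The contrast between the constant and velocity-dependent friction coefficient can be demonstrated numerically. The sketch below is our illustration, not the authors' code: a rod on two counter-rotating cylinders a distance 2a apart, integrated with semi-implicit Euler. All numerical values (a, surface speed u, the particular velocity-weakening law for mu) are assumptions chosen for the demonstration.

```python
# Rod of unit mass on two counter-rotating cylinders (separation 2a,
# surface speed u). With constant mu the motion is harmonic; with a
# velocity-weakening mu(v_rel) the amplitude grows toward a larger value.

def simulate(mu, a=0.1, g=9.81, u=1.0, x0=0.01, dt=1e-4, steps=50_000):
    x, v = x0, 0.0
    peak = 0.0
    for _ in range(steps):
        n_left = g * (a - x) / (2 * a)      # normal forces per unit mass,
        n_right = g * (a + x) / (2 * a)     # functions of rod position x
        s1, s2 = v - u, v + u               # rod velocity rel. to surfaces
        f = (mu(abs(s1)) * n_left * (-1 if s1 > 0 else 1)
             + mu(abs(s2)) * n_right * (-1 if s2 > 0 else 1))
        v += f * dt                          # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

const_peak = simulate(lambda s: 0.3)                      # harmonic case
weakening_peak = simulate(lambda s: 0.3 / (1 + 2.0 * s))  # mu falls with speed
```

With constant mu the net friction reduces to a restoring force -mu*g*x/a and the amplitude stays at x0; with the velocity-weakening law, friction does net positive work each cycle and the oscillation grows, which is the self-excitation behind the observed nonzero stable amplitude.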

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 191
518 Analysis of Correlation Between Manufacturing Parameters and Mechanical Strength Followed by Uncertainty Propagation of Geometric Defects in Lattice Structures

Authors: Chetra Mang, Ahmadali Tahmasebimoradi, Xavier Lorang

Abstract:

Lattice structures are widely used in various applications, especially aeronautic, aerospace, and medical applications, because of their high-performance properties. Thanks to advances in additive manufacturing technology, lattice structures can be manufactured by different methods, such as laser beam melting. However, the presence of geometric defects in lattice structures is inevitable due to the manufacturing process, and these defects may have a high impact on the mechanical strength of the structures. This work analyzes the correlation between the manufacturing parameters and the mechanical strengths of lattice structures. To do so, two types of lattice structures, body-centered cubic with z-struts (BCCZ) structures made of Inconel 718 and body-centered cubic (BCC) structures made of Scalmalloy, are manufactured on a laser beam melting machine using a Taguchi design of experiment. Each structure is placed on the substrate with a specific position and orientation relative to the roller direction of the deposited metal powder; the position and orientation are taken as the manufacturing parameters. The geometric defects of each beam in the lattice are characterized and used to build the geometric model for simulation. The mechanical strengths are then defined by the homogeneous response, as Young's modulus and yield strength, and their distribution is observed as a function of the manufacturing parameters. The mechanical response of the BCCZ structure is stretch-dominated, i.e., the mechanical strengths depend directly on the strengths of the vertical beams. As the geometric defects of the vertical beams change only slightly with their position/orientation on the manufacturing substrate, the mechanical strengths are less dispersed, and the manufacturing parameters have little influence on the mechanical strengths of the BCCZ structure.
The mechanical response of the BCC structure is bending-dominated. The geometric defects of the inclined beams are highly dispersed within a structure and also vary with their position/orientation on the manufacturing substrate; for different positions/orientations on the substrate, the mechanical responses are highly dispersed as well. This shows that the mechanical strengths are directly impacted by the manufacturing parameters. In addition, this work studies the propagation of the uncertainty of the geometric defects to the mechanical strength of the BCC lattice structure made of Scalmalloy. To do so, we observe the distribution of the mechanical strengths of the lattice as a function of the distribution of the geometric defects. A probability density law is determined based on a statistical hypothesis corresponding to the geometric defects of the inclined beams. Samples of inclined beams are then randomly drawn from the density law to build lattice structure samples, which are used in simulation to characterize the mechanical strengths. The results reveal that the distribution of mechanical strengths of structures with the same manufacturing parameters is less dispersed than that of structures with different manufacturing parameters. Nevertheless, the dispersion of mechanical strengths among structures with the same manufacturing parameters is not negligible.
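The sampling loop described above (draw beam defects from a fitted density law, push them through a strength model, compare dispersions) can be sketched in miniature. Everything here is an illustrative assumption standing in for the paper's pipeline: a normal law for strut diameter, a cheap surrogate E ~ (d/d0)^2 in place of the finite-element simulation, and made-up nominal values.

```python
import random
import statistics

# Monte Carlo sketch of the uncertainty-propagation step: strut diameters
# drawn from an assumed normal defect law, pushed through a surrogate
# stiffness model (NOT the authors' simulation).

random.seed(42)
D_NOMINAL = 1.0   # nominal strut diameter (mm), assumed
E_NOMINAL = 70.0  # nominal Young's modulus (GPa), assumed

def sample_modulus(mean_d, sd_d, n_struts=16):
    """One lattice sample: average surrogate stiffness over its struts."""
    ds = [random.gauss(mean_d, sd_d) for _ in range(n_struts)]
    return E_NOMINAL * statistics.fmean((d / D_NOMINAL) ** 2 for d in ds)

# Same manufacturing parameters -> one defect law; different parameters
# -> a mixture of laws, which widens the strength distribution.
same_params = [sample_modulus(0.95, 0.02) for _ in range(500)]
mixed_params = ([sample_modulus(0.95, 0.02) for _ in range(250)]
                + [sample_modulus(0.85, 0.05) for _ in range(250)])

spread_same = statistics.stdev(same_params)
spread_mixed = statistics.stdev(mixed_params)
```

The comparison of `spread_same` against `spread_mixed` mirrors the paper's finding: fixing the manufacturing parameters narrows, but does not eliminate, the dispersion of mechanical strengths.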

Keywords: geometric defects, lattice structure, mechanical strength, uncertainty propagation

Procedia PDF Downloads 122
517 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example

Authors: masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen

Abstract:

Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used to simulate a membrane bioreactor (MBR), as the MBR integrates biological wastewater treatment with physical separation by membrane filtration. In this study, an MBR with a working volume of 12.5 L was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h and a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% COD removal. Moreover, the MBR generated a treated effluent that complies with the Tunisian limits for discharge into the public sewer, set in March 2018. For the nitrogenous compounds, the average concentrations of nitrate and nitrite in the permeate reached 0.26 ± 0.3 mg/L and 2.2 ± 2.53 mg/L, respectively. The simulation of the MBR process was performed using SIMBA software v5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model calibration used experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+, and N-NOx were compared with those from the experiment. A good prediction was observed for the COD, N-NH4+, and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³, and 3.2 g N/m³ compared to the experimental values of 436.4 g COD/m³, 114.7 g N/m³, and 3 g N/m³, respectively. For validation of the model under dynamic simulation, the results obtained during the second treatment phase of 30 days were used.
The model was shown to simulate the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, the N-NH4+ concentration was underestimated during the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed to simulate biological processes in activated sludge systems; in addition, the autotrophic bacteria may require more treatment time to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved to simulate nutrient removal over a longer treatment period.
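The steady-state calibration check reported above can be quantified as relative errors between the ASM3h predictions and the measured effluent values. The sketch below uses only the numbers given in the abstract; the relative-error criterion itself is our illustration, not a metric the authors state.

```python
# Relative error between ASM3h steady-state predictions and measured
# effluent values, using the (simulated, measured) pairs from the abstract.

pairs = {
    "COD (g COD/m3)": (467.0, 436.4),
    "N-NH4+ (g N/m3)": (110.2, 114.7),
    "N-NOx (g N/m3)": (3.2, 3.0),
}

def relative_error(simulated, measured):
    return abs(simulated - measured) / measured

errors = {name: relative_error(s, m) for name, (s, m) in pairs.items()}
```

All three variables come out within about 7% of the measurements, which is what the abstract summarizes as "a good prediction".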

Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughter wastewater (PSWW), reuse

Procedia PDF Downloads 58
516 De novo Transcriptome Assembly of Lumpfish (Cyclopterus lumpus L.) Brain Towards Understanding their Social and Cognitive Behavioural Traits

Authors: Likith Reddy Pinninti, Fredrik Ribsskog Staven, Leslie Robert Noble, Jorge Manuel de Oliveira Fernandes, Deepti Manjari Patel, Torstein Kristensen

Abstract:

Understanding fish behavior is essential to improving animal welfare in aquaculture research, as behavioral traits can strongly influence fish health and habituation. To identify the genes and biological pathways underlying lumpfish behavior, we performed an experiment to examine the interspecies relationship (mutualism) between lumpfish and salmon. We also tested the correlation between gene expression data and observational/physiological data to identify the genes that trigger stress and swimming behavior in lumpfish. After de novo assembly of the brain transcriptome, all samples were individually mapped to the available lumpfish (Cyclopterus lumpus L.) primary genome assembly (fCycLum1.pri, GCF_009769545.1). Of the ~16,749 genes expressed in the brain samples, 267 were statistically significant (P < 0.05), found only in the odor vs. control (1), model vs. control (41), and salmon vs. control (225) comparisons. However, only eight genes had |logFC| ≥ 0.5 and were considered differentially expressed genes (DEGs); thus, we were unable to identify behavior-related differential genes from the RNA-Seq data analysis alone. We therefore ran a correlation analysis between the gene expression data and the observational/physiological data (serotonin (5-HT), dopamine (DA), 3,4-dihydroxyphenylacetic acid (DOPAC), 5-hydroxyindoleacetic acid (5-HIAA), and noradrenaline (NORAD)). We found 2,495 significant genes (P < 0.05), of which 1,587 were positively correlated with the noradrenaline (NORAD) group, suggesting that noradrenaline triggers the change in pigmentation and skin color in lumpfish. Genes related to behavioral traits, such as rhythmic, locomotory, feeding, visual, pigmentation, and stress-related genes, genes for response to other organisms and taxis, and genes related to dopamine and other neurotransmitter synthesis, were obtained from the correlation analysis.
In the KEGG pathway enrichment analysis, we found important pathways, such as the calcium signaling pathway and adrenergic signaling in cardiomyocytes, both involved in cell signaling, behavior, emotion, and stress. Calcium is an essential signaling molecule in brain cells and could affect the behavior of fish. Our results suggest that changes in calcium homeostasis and adrenergic receptor binding activity lead to changes in fish behavior during stress.
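The two-stage DEG filter described in this abstract (significance first, then a fold-change cutoff of |logFC| ≥ 0.5) can be sketched as follows. The gene names and values are made up for illustration; only the thresholds come from the abstract.

```python
# Two-stage differential-expression filter: keep genes with P < 0.05,
# then require |logFC| >= 0.5 to call a gene a DEG. Data are illustrative.

genes = [
    ("gene_a", 0.01, 0.80),   # (name, p_value, logFC)
    ("gene_b", 0.03, -0.60),
    ("gene_c", 0.04, 0.20),   # significant but small fold change
    ("gene_d", 0.20, 1.50),   # large fold change but not significant
]

significant = [g for g in genes if g[1] < 0.05]
degs = [g for g in significant if abs(g[2]) >= 0.5]
```

This ordering explains the abstract's numbers: many genes can pass the significance filter while only a handful survive the fold-change cutoff, which is why a complementary correlation analysis was needed.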

Keywords: behavior, De novo, lumpfish, salmon

Procedia PDF Downloads 172
515 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in studying the pressure-volume relationship of the heart, a key diagnostic of cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented that consistently considers various feedback mechanisms in the heart without resorting to the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofibers of the LA and integrating them with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and the CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used.
The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 113
514 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. Direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high-Reynolds-number flows, the nonlinear terms in the hyperbolic partial differential equations (PDEs) dominate, constantly generating small-scale features, which makes direct numerical simulation of shocks even harder. The same difficulty arises in two-phase flows with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDEs. This technique is named the 'observable method', and it results in a set of hyperbolic equations called the observable equations, namely the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the performance of the observable method in regularizing shocks and interfaces at the same time is examined on shock-interface interaction problems; bubble-shock interactions and the Richtmyer-Meshkov instability are chosen for study. The observable Euler equations are solved numerically with a pseudo-spectral discretization in space and a third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications, with particular attention to interface acceleration and deformation and shock reflection.
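The core idea, filtering the convective velocity so that scales below the observable scale do not steepen, can be shown on a toy 1D field. This sketch is our illustration only: it uses a simple periodic box filter on a near-discontinuous velocity, not the paper's Helmholtz-type filter or its pseudo-spectral solver.

```python
# Toy sketch of the "observable" idea: in u_t + u_bar * u_x = 0, the
# convective velocity is replaced by a local average u_bar at the
# observable scale, so the advecting field carries no sub-grid gradients.

def box_filter(u, half_width=2):
    """Average each point over a (2*half_width+1)-point periodic window."""
    n = len(u)
    w = 2 * half_width + 1
    return [sum(u[(i + k) % n] for k in range(-half_width, half_width + 1)) / w
            for i in range(n)]

def max_gradient(u, dx):
    n = len(u)
    return max(abs(u[(i + 1) % n] - u[i]) / dx for i in range(n))

# A near-discontinuous velocity field; filtering caps the steepest slope
# while conserving the mean of the field.
n, dx = 64, 1.0 / 64
u = [1.0 if i < n // 2 else -1.0 for i in range(n)]
u_bar = box_filter(u)
```

The filtered field advects the solution with bounded gradients, which is the mechanism by which the observable equations stay regular at shocks without adding viscous dissipation.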

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 348
513 Characterization of Berberine Hydrochloride Nanoparticles

Authors: Bao-Fang Wen, Meng-Na Dai, Gao-Pei Zhu, Chen-Xi Zhang, Jing Sun, Xun-Bao Yin, Yu-Han Zhao, Hong-Wei Sun, Wei-Fen Zhang

Abstract:

Drug-loaded nanoparticles containing berberine hydrochloride (BH/FA-CTS-NPs) were prepared, and their physicochemical characteristics and inhibitory effect on HeLa cells were investigated. Folic acid-conjugated chitosan (FA-CTS) was prepared by an amino reaction between folic acid active ester and chitosan molecules; BH/FA-CTS-NPs were then prepared using an ionic cross-linking technique with BH as the model drug. The morphology and particle size were determined by Transmission Electron Microscopy (TEM). The average diameters and polydispersity index (PDI) were evaluated by Dynamic Light Scattering (DLS). The interactions between the various components of the nanocomplex were characterized by Fourier Transform Infrared Spectroscopy (FT-IR). The entrapment efficiency (EE), drug loading (DL), and in vitro release were studied by UV spectrophotometry. The anti-migratory and anti-invasive effects of BH/FA-CTS-NPs were investigated using MTT assays, wound healing assays, Annexin-V-FITC single staining assays, and flow cytometry. A subcutaneous HeLa xenograft tumor model in nude mice was established and treated with different drugs to observe the in vivo effect of BH/FA-CTS-NPs on HeLa-bearing tumors. The BH/FA-CTS-NPs prepared in this experiment had a regular shape and uniform particle size, with no aggregation. The DLS results showed that the mean particle size, PDI, and zeta potential of BH/FA-CTS-NPs were (249.2 ± 3.6) nm, 0.129 ± 0.09, and 33.6 ± 2.09, respectively, and the average diameter and PDI were stable over 90 days. The FT-IR results demonstrated, through the characteristic peaks of FA-CTS and BH/FA-CTS-NPs, that FA-CTS cross-linked successfully and that BH was encapsulated in the NPs. The EE and DL were (79.3 ± 3.12)% and (7.24 ± 1.41)%, respectively. The in vitro release study indicated that the cumulative release of BH/FA-CTS-NPs was (89.48 ± 2.81)% in phosphate-buffered saline (PBS, pH 7.4) within 48 h. The MTT and wound healing assays indicated that BH/FA-CTS-NPs not only inhibited the proliferation of HeLa cells in a concentration- and time-dependent manner but also induced apoptosis. The subcutaneous xenograft tumor formation rate of the human cervical cancer cell line HeLa in nude mice was 98% two weeks after inoculation. Compared with the BH group and the BH/CTS-NPs group, the xenograft tumor growth in the BH/FA-CTS-NPs group was markedly slower, indicating that BH/FA-CTS-NPs could significantly inhibit the growth of HeLa xenograft tumors. BH/FA-CTS-NPs with a sustained release effect were thus prepared successfully by the ionic cross-linking method. Given these properties, blocking proliferation and impairing migration of the HeLa cell line, BH/FA-CTS-NPs could be an important compound for consideration in the treatment of cervical cancer.
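For reference, the reported EE and DL percentages follow the conventional definitions: encapsulated drug relative to total drug added, and relative to total nanoparticle mass, respectively. The sketch below only illustrates the arithmetic; the masses are hypothetical, not the study's raw data:

```python
# Hypothetical illustration of entrapment efficiency (EE) and drug
# loading (DL) as conventionally defined; the masses below are made up
# for the example and are not the study's data.
def entrapment_efficiency(drug_encapsulated_mg, drug_total_mg):
    # EE% = encapsulated drug / total drug added * 100
    return 100.0 * drug_encapsulated_mg / drug_total_mg

def drug_loading(drug_encapsulated_mg, nanoparticle_mass_mg):
    # DL% = encapsulated drug / total nanoparticle mass * 100
    return 100.0 * drug_encapsulated_mg / nanoparticle_mass_mg

ee = entrapment_efficiency(7.9, 10.0)     # e.g. 7.9 mg of 10 mg BH entrapped
dl = drug_loading(7.9, 109.0)             # in 109 mg of recovered nanoparticles
print(f"EE = {ee:.1f}%, DL = {dl:.2f}%")  # values near the reported 79.3% and 7.24%
```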

Keywords: folic acid, chitosan, berberine hydrochloride, nanoparticles, cervical cancer

Procedia PDF Downloads 121
512 Rumen Metabolites and Microbial Load in Fattening Yankasa Rams Fed Urea and Lime Treated Groundnut (Arachis hypogaea) Shell in a Complete Diet

Authors: Bello Muhammad Dogon Kade

Abstract:

The study was conducted to determine the effect of treated groundnut (Arachis hypogaea) shell in a complete diet on rumen metabolites and microbial load in fattening Yankasa rams. The study was conducted at the Teaching and Research Farm (Small Ruminants Unit) of the Animal Science Department, Faculty of Agriculture, Ahmadu Bello University, Zaria. Each kilogram of groundnut shell was treated with 5% urea or 5% lime for treatments 2 (UTGNS) and 3 (LTGNS), respectively. For treatment 4 (ULTGNS), 1 kg of groundnut shell was treated with 2.5% urea and 2.5% lime, while the shell in treatment 1 was not treated (UNTGNS). Sixteen Yankasa rams were randomly assigned to the four treatment diets, with four animals per treatment, in a completely randomized design (CRD). The diet was formulated to have a 14% crude protein (CP) content. Rumen fluid was collected from each ram at the end of the experiment at 0 and 4 hours post-feeding. The samples were put in 30 ml bottles and acidified with 5 drops of sulphuric acid (0.1 N H₂SO₄) to trap ammonia. The results showed that the mean values of NH₃-N differed significantly (P<0.05) among the treatment groups, with rams on the ULTGNS diet having the highest value (31.96 mg/L). TVFs were significantly (P<0.05) higher in rams fed the UNTGNS diet, as was total nitrogen. The effect of sampling period revealed that NH₃-N, TVFs, and TP were significantly (P<0.05) higher in rumen fluid collected 4 hours post-feeding across the treatment groups, while rumen fluid pH was significantly (P<0.05) higher at 0 hours post-feeding in all treatment diets. In the treatment-by-sampling-period interaction, animals on the ULTGNS diet had the highest mean values of NH₃-N at both 0 and 4 hours post-feeding, significantly (P<0.05) higher than rams on the other treatment diets. Rams on the UTGNS diet had the highest bacterial load of 4.96 × 10⁵/ml, which was significantly (P<0.05) higher than the microbial loads of animals fed the UNTGNS, LTGNS, and ULTGNS diets. Protozoa counts were also significantly (P<0.05) higher in rams fed the UTGNS diet, followed by the ULTGNS diet. There was no significant difference (P>0.05) in the bacterial counts of the animals at 0 and 4 hours post-feeding, but rumen fungi and protozoa loads at 0 hours were significantly (P<0.05) higher than at 4 hours post-feeding. The use of untreated groundnut shells in the diet of fattening Yankasa rams is therefore recommended.

Keywords: blood metabolites, microbial load, volatile fatty acid, ammonia, total protein

Procedia PDF Downloads 65
511 Transforming Data Science Curriculum Through Design Thinking

Authors: Samar Swaid

Abstract:

Today, corporations are moving toward the adoption of Design Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design Thinking, IDEO, defines Design Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design Thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees Design Thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of Design Thinking has proven to be a road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply Design Thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on Data Science programs states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection, and model interpretation. Thus, the Data Science program should include Design Thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design Thinking and teaching modules for data science programs.

Keywords: data science, design thinking, AI, curriculum, transformation

Procedia PDF Downloads 79
510 Investigation of Aerodynamic and Design Features of Twisting Tall Buildings

Authors: Sinan Bilgen, Bekir Ozer Ay, Nilay Sezer Uzol

Abstract:

After decades of conventional shapes, irregular forms with complex geometries are becoming more popular for the form generation of tall buildings all over the world. This trend has recently brought out diverse building forms, such as twisting tall buildings. This study investigates both the aerodynamic and design features of twisting tall buildings through comparative analyses. Since twisting a tall building gives rise to additional complexities related to the form and structural system, lateral load effects become of greater importance for these buildings. The aim of this study is to analyze the inherent characteristics of these iconic forms by comparing the wind loads on twisting tall buildings with those on their prismatic twins. Through a case study, aerodynamic analyses of an existing twisting tall building and its prismatic counterpart were performed, and the results were compared. The prismatic twin of the original building was generated by removing the progressive rotation of its floors while keeping the same plan area and story height. The performance-based measures under investigation were evaluated in conjunction with the architectural design. Aerodynamic effects were analyzed by both wind tunnel tests and computational methods. High-frequency base balance tests and pressure measurements on 3D models were performed to evaluate wind load effects at both global and local scales. Comparisons of flat and real surface models were conducted to further evaluate the effects of the twisting form without the contribution of the façade texture. The comparisons highlighted that the twisting form under investigation shows better aerodynamic behavior in the along-wind and, particularly, the across-wind direction. Compared to the prismatic counterpart, the twisting model is superior in reducing the vortex-shedding dynamic response by disorganizing the wind vortices. Consequently, despite the difficulties arising from the inherent complexity of twisted forms, they can still be feasible and viable with their attractive images in the realm of tall buildings.
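The across-wind response discussed above is driven by vortex shedding, whose frequency for a prismatic section is conventionally estimated from the Strouhal relation f = St·U/B. The numbers below are illustrative assumptions, not values from this study; they merely show why a prismatic tower's shedding frequency can approach a sway mode, a coincidence that a twisting form disrupts by preventing coherent shedding along the height:

```python
# Back-of-envelope vortex-shedding estimate for a prismatic tower.
# The Strouhal number, wind speed, breadth, and modal frequency are
# illustrative assumptions, not data from the study.
def shedding_frequency(strouhal, wind_speed_ms, breadth_m):
    """f = St * U / B: frequency at which vortices are shed from the section."""
    return strouhal * wind_speed_ms / breadth_m

St, U, B = 0.12, 30.0, 40.0      # typical square section, 30 m/s wind, 40 m breadth
f_shed = shedding_frequency(St, U, B)
f_building = 0.1                  # assumed first sway-mode frequency of a tall tower, Hz
print(f"shedding at {f_shed:.3f} Hz vs assumed building mode {f_building:.1f} Hz")
```

When the two frequencies coincide, resonant across-wind excitation grows; twisting varies the effective breadth and orientation with height, so no single shedding frequency dominates.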

Keywords: aerodynamic tests, motivation for twisting, tall buildings, twisted forms, wind excitation

Procedia PDF Downloads 232
509 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

The European Union (EU) Artificial Intelligence (AI) Act requires in Article 9 that risk management of AI systems include both technical and human oversight, while the NIST AI Risk Management Framework (Appendix C) and the ENISA AI framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, a lack of explainability (e.g., model complexity can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of AI systems. Based on the NIST four fundamental criteria for explainability, explainability threats can be classified into four sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors. When present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Social AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g., ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems), and EU legislation (e.g., the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, and the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems, and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial intelligence, mitigation, social experiment

Procedia PDF Downloads 63
508 Studies on Optimizing the Level of Liquid Biofertilizers in Peanut and Maize and Their Economic Analysis

Authors: Chandragouda R. Patil, K. S. Jagadeesh, S. D. Kalolgi

Abstract:

Biofertilizers containing live microbial cells can mobilize one or more nutrients to plants when applied to either seed or rhizosphere. They form an integral part of nutrient management strategies for the sustainable production of agricultural crops. Annually, about 22 tons of lignite-based biofertilizers are produced and supplied to farmers at the Institute of Organic Farming, University of Agricultural Sciences, Dharwad, Karnataka state, India. Although carrier-based biofertilizers are common, they suffer from a short shelf life, poor quality, high contamination, unpredictable field performance, and the high cost of solid carriers. Hence, liquid formulations are being developed to increase their efficacy and broaden field applicability. An attempt was made to develop liquid formulations of the nitrogen-fixing strains Rhizobium NC-92 (for groundnut) and Azospirillum ACD15, and of Pseudomonas striata, an efficient phosphate-solubilizing bacterium (PSB). Different concentrations of amendments, such as additives (glycerol and polyethylene glycol), adjuvants (carboxymethyl cellulose), gum arabic (GA), a surfactant (polysorbate), and, specifically for Azospirillum, trehalose, were found to be essential. Combinations of formulations of Rhizobium and PSB for groundnut and of Azospirillum and PSB for maize were evaluated under field conditions to determine the optimum level of inoculum required. Each biofertilizer strain was inoculated at the rate of 2, 4, or 8 ml per kg of seeds, and the efficacy of each formulation, both individually and in combination, was evaluated against the lignite-based formulation at the rate of 20 g each per kg of seeds; an un-inoculated set was included to assess the inoculation effect. The field experiment had 17 treatments in three replicates, and the best level of inoculum was decided based on net returns and the benefit-cost ratio. In peanut, the combination of 4 ml of Rhizobium and 2 ml of PSB resulted in the highest net returns and a benefit-cost (B:C) ratio of 1:2.98, followed by the treatment with a combination of 2 ml per kg each of Rhizobium and PSB, with a B:C ratio of 1:2.84. The benefit in terms of net returns was about 16 percent for inoculation with the lignite-based formulations, while it was up to 48 percent for the best combination of liquid biofertilizers. In maize, the combination of 4 ml of Azospirillum and 2 ml of PSB resulted in the highest net returns, about 53 percent higher than the un-inoculated control and 20 percent higher than the treatment with the lignite-based formulation. In both crops, inoculation with lignite-based formulations significantly increased net returns over the un-inoculated control, while levels higher or lower than 4 ml of Rhizobium or Azospirillum and higher or lower than 2 ml of PSB were not economical and hence not optimal for these two crops.
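The economics behind statements such as "a B:C ratio of 1:2.98" reduce to simple arithmetic on gross returns and total cost. The figures below are hypothetical placeholders chosen so the ratio matches the reported value; they are not the trial's data:

```python
# Benefit-cost arithmetic behind a "B:C ratio of 1:2.98" statement.
# The monetary figures are hypothetical placeholders, not trial data.
def benefit_cost(gross_returns, total_cost):
    net_returns = gross_returns - total_cost        # profit after costs
    ratio = gross_returns / total_cost              # "1 : ratio" benefit per unit cost
    return net_returns, ratio

net, bc = benefit_cost(gross_returns=59600.0, total_cost=20000.0)
print(f"net returns = {net:.0f}, B:C = 1:{bc:.2f}")
```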

Keywords: Rhizobium, Azospirillum, phosphate solubilizing bacteria, liquid formulation, benefit-cost ratio

Procedia PDF Downloads 492
507 A Review of Critical Framework Assessment Matrices for Data Analysis on Overheating in Buildings Impact

Authors: Martin Adlington, Boris Ceranic, Sally Shazhad

Abstract:

In an effort to reduce carbon emissions, changes in UK regulations, such as Part L (Conservation of fuel and power), dictate improved thermal insulation and enhanced air tightness. These changes were a direct response to the UK Government being fully committed to achieving its carbon targets under the Climate Change Act 2008, whose goal is to reduce emissions by at least 80% by 2050. Factors such as climate change are likely to exacerbate the problem of overheating, as this phenomenon is expected to increase the frequency of extreme heat events, exemplified by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only concern relevant to overheating; as research indicates, location, design, occupation, construction type, and layout can also play a part. Because of this growing problem, research points to possible health effects on the occupants of buildings. Increases in temperature can have a direct impact on the human body's ability to maintain thermoregulation, and heat-related illnesses such as heat stroke, heat exhaustion, heat syncope, and even death can follow. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing assessment approaches applied to measure the concept. Firstly, an overview of the topic is presented, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix summarizing the source material and identifying the differing methods of analysis of overheating. Cross-case evaluation identified systematic relationships between different variables within the matrix. Key areas of focus include building types and countries, occupant behavior, health effects, simulation tools, and computational methods.

Keywords: overheating, climate change, thermal comfort, health

Procedia PDF Downloads 350
506 Effects of Learner-Content Interaction Activities on the Context of Verbal Learning Outcomes in Interactive Courses

Authors: Alper Tolga Kumtepe, Erdem Erdogdu, M. Recep Okur, Eda Kaypak, Ozlem Kaya, Serap Ugur, Deniz Dincer, Hakan Yildirim

Abstract:

Interaction is one of the most important components of open and distance learning. According to Moore, who proposed one of the keystone frameworks on interaction types, there are three basic types of interaction: learner-teacher, learner-content, and learner-learner. Of these, learner-content interaction can without doubt be identified as the most fundamental one, on which all education is based. The efficacy, efficiency, and attractiveness of open and distance learning systems can be achieved through the practice of effective learner-content interaction. With the development of new technologies, interactive e-learning materials have come to be commonly used as a resource in open and distance learning, alongside printed books. The intellectual engagement of learners with the content, that is, the course materials, may also affect their satisfaction with open and distance learning practices in general. Learner satisfaction holds an important place in open and distance learning since it eventually contributes to the achievement of learning outcomes. Using learner-content interaction activities in course materials, Anadolu University, through its Open Education system, tries to involve learners in deep and meaningful learning practices. Especially during the e-learning material design and production processes, identifying appropriate learner-content interaction activities within the context of learning outcomes is of great importance. Considering the lack of studies adopting this approach, as well as its being a study on the use of e-learning materials in the Open Education system, this research holds considerable value in the open and distance learning literature. In this respect, the present study aimed to investigate a) which learner-content interaction activities included in interactive courses are the most effective in learners' achievement of verbal information learning outcomes and b) to what extent distance learners are satisfied with these learner-content interaction activities. A quasi-experimental research design was adopted. The 120 participants were Anadolu University Open Education Faculty students living in Eskişehir. The students were divided into 6 groups randomly. While 5 of these groups received different learner-content interaction activities as part of the experiment, the remaining group served as the control group. The data were collected mainly through two instruments: a pre-test and a post-test. In addition to these tests, learners' perceived learning was assessed with an item at the end of the program. The data collected from the pre-test and post-test were analyzed by ANOVA, and in light of the findings of this approximately 24-month study, suggestions for the further design of e-learning materials within the context of learner-content interaction activities will be provided at the conference. The current study is planned as an antecedent for subsequent studies that will examine the effects of activities on other learning domains.
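The core analysis, comparing mean outcomes across the six groups by one-way ANOVA, can be sketched as follows. The group scores are synthetic and the implementation is a plain textbook F statistic, not the authors' analysis pipeline:

```python
import numpy as np

# One-way ANOVA sketch for six groups (five treatments + control), as in
# the study's design. The scores are synthetic, for illustration only.
rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, scale=5.0, size=20) for m in (60, 62, 65, 61, 66, 58)]

def one_way_anova_F(groups):
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(all_x) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

F = one_way_anova_F(groups)
df_w = sum(len(g) for g in groups) - len(groups)
print(f"F({len(groups) - 1}, {df_w}) = {F:.2f}")
```

The F statistic is then compared against the F distribution with (5, 114) degrees of freedom to decide whether activity type affects achievement.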

Keywords: interaction, distance education, interactivity, online courses

Procedia PDF Downloads 194
505 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase, which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive for sensors like LIDAR, or of limited operational range for sensors like ultrasonic sensors. Additionally, absolute positioning systems like GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement while employing both the kinematics of the UAV and the dynamics of the projected point's variation as the process model to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
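A stripped-down version of the second approach can be sketched for the purely vertical case: the EKF state is height and descent rate, and the measurement is the pixel position of a single ground feature under a pinhole projection. The focal length, feature offset, noise levels, and trajectory below are all assumed values for illustration, not the paper's setup:

```python
import numpy as np

# Simplified 1D EKF sketch of the "projected feature position" approach:
# estimate height h and descent rate v from the pixel position of one
# ground feature. Focal length and feature offset are made-up parameters.
f_px, X, dt = 800.0, 1.0, 0.05     # focal length [px], feature lateral offset [m], step [s]

def predict(x, P, Q):
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-descent-rate kinematics
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    h = x[0]
    z_pred = f_px * X / h                          # pinhole projection of the feature
    H = np.array([[-f_px * X / h**2, 0.0]])        # measurement Jacobian
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K * (z - z_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
h_true, v_true = 10.0, -1.0                        # descending from 10 m at 1 m/s
x, P = np.array([8.0, 0.0]), np.eye(2) * 4.0       # deliberately wrong initial guess
Q, R = np.eye(2) * 1e-4, np.array([[4.0]])         # pixel noise variance 4 px^2
for _ in range(100):                               # 5 s of simulated landing
    h_true += v_true * dt
    z = f_px * X / h_true + rng.normal(0.0, 2.0)   # noisy pixel measurement
    x, P = predict(x, P, Q)
    x, P = update(x, P, z, R)
print(f"estimated h = {x[0]:.2f} m (true {h_true:.2f}), v = {x[1]:.2f} m/s")
```

Even with 2 px measurement noise, the filter recovers both states because the pixel position depends directly on height, and its change over time constrains the descent rate.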

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 109
504 Evaluation of Arsenic Removal in Synthetic Solutions and Natural Waters by Rhizofiltration

Authors: P. Barreto, A. Guevara, V. Ibujes

Abstract:

In this study, the removal of arsenic from synthetic solutions and from natural water from the Papallacta Lagoon was evaluated using the rhizofiltration method with terrestrial and aquatic plant species. Ecuador is a country of high volcanic activity, and most of its water sources come from volcanic glaciers; it is therefore necessary to find new, affordable, and effective methods for treating water. The water from the Papallacta Lagoon shows arsenic levels from 327 µg/L to 803 µg/L. The evaluation of arsenic removal began with the selection of 16 different species of terrestrial and aquatic plants. These plants were immersed in solutions with an arsenic concentration of 4500 µg/L for 48 hours. Subsequently, 3 terrestrial species and 2 aquatic species were selected based on the highest amount of absorbed arsenic, analyzed by inductively coupled plasma optical emission spectrometry (ICP-OES), and on their capacity for adaptation to the arsenic solution. The chosen terrestrial species were cultivated from seed with hydroponic methods, using coconut fiber and polyurethane foam as substrates. Afterwards, the species that best adapted to the hydroponic environment were selected. Additionally, the development of the selected aquatic species was monitored using a basic nutrient solution to provide the nutrients the plants required. Following this procedure, 30 plants of the 3 selected species were exposed to synthetic solutions with arsenic concentrations of 154, 375, and 874 µg/L for 15 days. Finally, the plant that showed the highest level of arsenic absorption was placed in 3 L of natural water with an arsenic level of 803 µg/L and kept there until the water reached the target arsenic level of 10 µg/L. This experiment was carried out over a total of 30 days, during which the arsenic absorption capacity of the plant was measured. The five species initially selected were: sunflower (Helianthus annuus), clover (Trifolium), blue grass (Poa pratensis), water hyacinth (Eichhornia crassipes), and miniature aquatic fern (Azolla). The best arsenic removal was shown by the water hyacinth, with 53.7% absorption, followed by the blue grass with 31.3% absorption. On the other hand, the blue grass responded best to hydroponic cultivation, with a germination percentage of 97% and full growth in two months; thus, it was the only terrestrial species selected. In summary, the final selected species were blue grass, water hyacinth, and miniature aquatic fern. These three species were evaluated by immersing them in synthetic solutions with three different arsenic concentrations (154, 375, and 874 µg/L). Of the three plants, the water hyacinth showed the highest percentages of arsenic removal, with 98%, 58%, and 64% for the respective arsenic solutions. Finally, 12 water hyacinth plants were used to bring the arsenic level of the natural water down to 10 µg/L; this significant reduction in arsenic concentration was achieved in 5 days. In conclusion, water hyacinth was found to be the best plant for reducing arsenic levels in natural water.
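The removal percentages quoted above follow the usual definition, (C₀ − C)/C₀ × 100. A one-line check with the lagoon figures from the abstract:

```python
# Removal percentage as conventionally defined: (C0 - C) / C0 * 100.
# Concentrations in ug/L; the 803 -> 10 figures come from the abstract.
def removal_percent(c_initial, c_final):
    return 100.0 * (c_initial - c_final) / c_initial

print(removal_percent(803.0, 10.0))   # lagoon water brought to the 10 ug/L target
```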

Keywords: arsenic, natural water, plant species, rhizofiltration, synthetic solutions

Procedia PDF Downloads 120
503 A Dissipative Particle Dynamics Study of a Capsule in Microfluidic Intracellular Delivery System

Authors: Nishanthi N. S., Srikanth Vedantam

Abstract:

Intracellular delivery of materials has always proved to be a challenge in research and therapeutic applications. Usually, vector-based methods, such as liposomes and polymeric materials, and physical methods, such as electroporation and sonoporation, have been used for introducing nucleic acids or proteins. Reliance on exogenous materials, toxicity, and off-target effects were the shortcomings of these methods. Microinjection was an alternative process that addressed these drawbacks; however, its low throughput has hindered its wide adoption. Mechanical deformation of cells by squeezing them through a constriction channel can cause the temporary development of pores that facilitate the non-targeted diffusion of materials. Advantages of this method include high efficiency of intracellular delivery, a wide choice of materials, improved viability, and high throughput. This cell squeezing process can be studied more deeply by employing simple models and efficient computational procedures. In our current work, we present a finite-sized dissipative particle dynamics (FDPD) model to simulate the dynamics of a cell flowing through a constricted channel. The cell is modeled as a capsule with FDPD particles connected through a spring network to represent the membrane. The total energy of the capsule is associated with linear and radial springs, in addition to a fixed-area constraint. By performing detailed simulations, we studied the strain on the membrane of the capsule for channels with varying constriction heights. The strain on the capsule membrane was found to be similar even though the constriction heights vary. When the strain on the membrane was correlated to the development of pores, we found higher porosity in the capsule flowing in the wider channel. This is due to the localization of strain to a smaller region in the narrower constriction channel. However, the residence time of the capsule increased as the channel constriction narrowed, indicating that strain sustained over a longer time will reduce cell viability.
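The membrane energy described above (linear springs along the membrane, radial springs, and a fixed-area constraint) can be sketched on a toy 2D ring of particles. All constants and the squeeze deformation below are illustrative choices, not the paper's FDPD parameters:

```python
import numpy as np

# Toy version of the capsule membrane energy described above: a 2D ring
# of particles joined by linear springs, radial springs to the centroid,
# and a penalty enforcing the fixed reference area. All constants are
# illustrative, not the paper's FDPD parameters.
def capsule_energy(pts, k_lin, k_rad, k_area, l0, r0, A0):
    nxt = np.roll(pts, -1, axis=0)                       # next particle along the ring
    lengths = np.linalg.norm(nxt - pts, axis=1)
    e_lin = 0.5 * k_lin * np.sum((lengths - l0) ** 2)    # linear (edge) springs
    centroid = pts.mean(axis=0)
    radii = np.linalg.norm(pts - centroid, axis=1)
    e_rad = 0.5 * k_rad * np.sum((radii - r0) ** 2)      # radial springs
    # shoelace formula for the enclosed area
    area = 0.5 * abs(np.sum(pts[:, 0] * nxt[:, 1] - nxt[:, 0] * pts[:, 1]))
    e_area = 0.5 * k_area * (area - A0) ** 2             # fixed-area penalty
    return e_lin + e_rad + e_area

n, r0 = 64, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
circle = np.stack([r0 * np.cos(theta), r0 * np.sin(theta)], axis=1)
l0 = 2.0 * r0 * np.sin(np.pi / n)       # rest length of each edge on the ring
A0 = np.pi * r0 ** 2                    # reference area (approximate for the polygon)
e_rest = capsule_energy(circle, 100.0, 50.0, 200.0, l0, r0, A0)
squeezed = circle * np.array([1.5, 1.0 / 1.5])   # area-preserving squeeze into a channel
e_squeezed = capsule_energy(squeezed, 100.0, 50.0, 200.0, l0, r0, A0)
print(e_rest < e_squeezed)
```

Even though the squeeze preserves area, the edge and radial spring terms rise sharply, which is the energetic signature of the membrane strain that correlates with pore formation.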

Keywords: capsule, cell squeezing, dissipative particle dynamics, intracellular delivery, microfluidics, numerical simulations

Procedia PDF Downloads 139
502 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers’ Moral Motivation

Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang

Abstract:

The belief that there is a strong relationship between reading narrative and morality has generally become the basic assumption of scholars, philosophers, critics, and cultural critics. The virtuality constructed by literary novels inspires readers to regard the narrative as a thinking experiment, creating the distance between readers and events so that they can freely and morally experience the positions of different roles. Therefore, the virtual narrative combined with literary characteristics is always considered as a "moral laboratory." Well-established findings revealed that people show less lying and deceptive behaviors in the morning than in the afternoon, called the morning morality effect. As a limited self-regulation resource, morality will be constantly depleted with the change of time rhythm under the influence of the morning morality effect. It can also be compensated and restored in various ways, such as eating, sleeping, etc. As a common form of entertainment in modern society, literary novel reading gives people more virtual experience and emotional catharsis, just as a relaxing afternoon tea that helps people break away from fast-paced work, restore physical strength, and relieve stress in a short period of leisure. In this paper, inspired by the compensation control theory, we wonder whether reading literary novels in the digital environment could replenish a kind of spiritual energy for self-regulation to compensate for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text content generated by readers in digital reading to represent the readers' reading attention. We then recognized the semantics and calculated the readers' moral motivation expressed in the annotations and investigated the fine-grained dynamics of the moral motivation changing in each time slot within 24 hours of a day. 
Comprehensively comparing different divisions of the day into time intervals, extensive experiments showed that the moral motivation reflected in the annotations is significantly higher in the afternoon than in the morning. The results robustly verify the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identified that this moral compensation lasts until 14:00 in the afternoon and 21:00 in the evening. Interestingly, the unit used to divide the day affects the identification of moral rhythms: dividing the day into four-hour slots yields more insight into moral rhythms than three-hour or six-hour slots.
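As a rough illustration of the interval analysis described above, the sketch below groups annotation timestamps into fixed-width slots of the day and averages a per-annotation moral-motivation score within each slot. The function names, the score scale, and the default slot width are our own assumptions for illustration, not the authors' implementation.

```python
from datetime import datetime
from statistics import mean

def time_slot(ts: datetime, slot_hours: int = 4) -> int:
    """Map a timestamp to a slot index within the day (4-hour slots -> 0..5)."""
    return ts.hour // slot_hours

def mean_motivation_by_slot(annotations, slot_hours: int = 4):
    """annotations: iterable of (timestamp, motivation_score) pairs.
    Returns {slot_index: mean motivation score in that slot}."""
    buckets = {}
    for ts, score in annotations:
        buckets.setdefault(time_slot(ts, slot_hours), []).append(score)
    return {slot: mean(scores) for slot, scores in sorted(buckets.items())}

# Hypothetical data: two morning annotations, two afternoon annotations
data = [
    (datetime(2021, 5, 1, 9, 15), 0.40),   # falls in the 08:00-12:00 slot
    (datetime(2021, 5, 1, 10, 30), 0.44),
    (datetime(2021, 5, 1, 13, 5), 0.62),   # falls in the 12:00-16:00 slot
    (datetime(2021, 5, 1, 15, 45), 0.58),
]
by_slot = mean_motivation_by_slot(data)
```

Re-running the same aggregation with `slot_hours=3` or `slot_hours=6`, as in the paper's comparison, changes only the binning, which makes the effect of interval choice on the detected rhythm easy to probe.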

Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation

Procedia PDF Downloads 149
501 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance

Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang

Abstract:

The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research on normal and disuse (Biglycan knockout) mice bone in vitro, using low-field and high-field NMR simultaneously. It is known that the total amplitude of the T₂ relaxation envelope, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, represents the liquid phase inside the pores. The NMR CPMG magnetization amplitude can therefore be converted to a water volume after calibration against the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity are determined using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions are determined by a computational inversion of the relaxation data. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal arises from the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). The components of total mobile and bound water within bone can therefore be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component calcium hydroxyapatite (Ca₁₀(PO₄)₆(OH)₂), can be determined using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in the liquid-solid mix, we can further investigate the ¹H and ³¹P environments of bone materials to identify the locations of bound water, such as the OH⁻ group, within the minerals and the bone architecture.
We hypothesize that low-field NMR combined with high-field magic angle spinning NMR can provide a more complete interpretation of the water distribution, particularly of bound water, and that these data are important for assessing bone quality and predicting the mechanical behavior of bone.
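The amplitude-to-volume calibration described for the CPMG measurement can be sketched numerically. The multi-exponential decay model and the linear calibration below are standard for CPMG analysis; the function names and numbers are illustrative assumptions, not the authors' processing code.

```python
import math

def cpmg_signal(t_ms: float, components) -> float:
    """Multi-exponential CPMG echo decay: M(t) = sum_i A_i * exp(-t / T2_i).
    components: list of (amplitude, T2_ms) pairs, one per water population."""
    return sum(a * math.exp(-t_ms / t2) for a, t2 in components)

def water_volume_ul(total_amplitude: float,
                    ref_amplitude: float,
                    ref_volume_ul: float) -> float:
    """Convert a CPMG amplitude to a water volume using a reference sample
    of known volume measured under identical acquisition conditions."""
    return total_amplitude * ref_volume_ul / ref_amplitude

# Hypothetical two-population sample: bound water (short T2) + mobile water (long T2)
pools = [(2.0, 0.4), (3.0, 60.0)]
m0 = cpmg_signal(0.0, pools)  # extrapolated total amplitude at t = 0
volume = water_volume_ul(m0, ref_amplitude=10.0, ref_volume_ul=100.0)
```

In practice the pore size distribution comes from inverting the full decay curve into a T₂ spectrum (a regularized inverse Laplace transform), of which this forward model is the first ingredient.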

Keywords: bone, mice bone, NMR, water in bone

Procedia PDF Downloads 175
500 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult when only limited information about the nature of the attack is available. It is even more difficult when the attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect which attack events most probably cause other events to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and obtaining them would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the causal and effect events, in both computed and observed data, is within 5 minutes. This result can be used as a preventive measure against future attacks.
Although the forecast horizon may be short, ranging from 0.24 seconds to 5 minutes, it is long enough to design a prevention protocol that blocks those attacks.
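A minimal sketch of the kind of pairwise screening the framework performs: for each ordered pair of event types, count how often an event of the first type is followed by an event of the second type within a fixed window (the paper reports that most observed causal gaps fall within 5 minutes). The real framework applies conditional independence tests over richer features; this counting heuristic, and every name in it, is our simplification.

```python
from collections import Counter

def candidate_causal_pairs(events, window_s=300, min_count=2):
    """events: list of (time_s, event_type), assumed sorted by time.
    Returns {(cause_type, effect_type): count} for ordered type pairs where
    an event of cause_type is followed by one of effect_type within window_s."""
    counts = Counter()
    for i, (t1, a) in enumerate(events):
        for t2, b in events[i + 1:]:
            if t2 - t1 > window_s:
                break  # input is time-sorted, so no later event can qualify
            if a != b:
                counts[(a, b)] += 1
    return {pair: c for pair, c in counts.items() if c >= min_count}

# Toy trace: port scans repeatedly followed by brute-force logins within 5 minutes
trace = [(0, "scan"), (120, "bruteforce"),
         (900, "scan"), (1000, "bruteforce"),
         (5000, "exfil")]
pairs = candidate_causal_pairs(trace)
```

Pairs that survive this screen would then be subjected to the conditional independence tests the paper describes, which decide whether the temporal association reflects a genuine causal relation rather than a shared confounder.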

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 156
499 Influence of Non-Formal Physical Education Curriculum, Based on Olympic Pedagogy, for 11-13 Years Old Children Physical Development

Authors: Asta Sarkauskiene

Abstract:

The pedagogy of Olympic education is based upon the central idea of P. de Coubertin that physical education can and must support the education of the complete person, the ideal to which archaic Greece aspired when it viewed the human being as a single whole composed of three interconnected functions: physical, psychical, and spiritual. The following research question was formulated in the present study: What curriculum of non-formal physical education in school can positively influence the physical development of 11-13-year-old children? The aim of this study was to formulate and implement a curriculum of non-formal physical education based on Olympic pedagogy and to assess its effectiveness for the physical development of 11-13-year-old children. The research was conducted in two stages. In the first stage, 51 fifth-grade children (Mage = 11.3 years) participated in a quasi-experiment for two years. The children were organized into two groups, E and C. Both groups shared the same duration (1 hour) and frequency (twice a week) but differed in their education curriculum. The experimental group (E) worked under the program developed by us. The priorities of the E group were: training of physical powers in unity with psychical and spiritual powers; integral growth of physical development, physical activity, physical health, and physical fitness; integration of children with a lower level of health and physical fitness; and content corresponding to children's needs, abilities, and physical and functional powers. The control group (C) worked according to NFPE programs prepared by teachers and approved by the school principal and the school methodical group. The priorities of the C group were: teaching and development of motion actions; training of physical qualities; and training of the most physically capable children. In the second stage (after four years), 72 sixth graders (Mage = 13.00) from the same comprehensive schools participated in the research. The children were organized into first and second groups.
The curriculum of the first group was modified, while the second group followed the same curriculum as group C. In both groups, anthropometric (height, weight, BMI) and physiometric (VC, right and left handgrip strength) measurements were conducted. A dependent t-test indicated that over two years the height, weight, and right and left handgrip strength indices of E and C group girls and boys increased significantly, p < 0.05. The BMI indices of E group girls and boys did not change significantly, p > 0.05, i.e., the height-to-weight ratio of children who participated in NFPE in school became more proportional. The VC indices of C group girls did not differ significantly, p > 0.05. An independent t-test indicated that the differences in anthropometric and physiometric measurements between the groups were not significant in either research stage, p > 0.05. The formulated and implemented curriculum of non-formal education in school, based on Olympic pedagogy, had its greatest positive influence on decreasing the BMI and increasing the VC of 11-13-year-old children.
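The dependent (paired) t statistic used above can be illustrated with a short stdlib helper; `scipy.stats.ttest_rel` would return the same statistic along with a p-value, but this version keeps the arithmetic visible. The measurement values are invented, not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Dependent-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d is the per-subject difference after - before."""
    d = [a - b for b, a in zip(before, after)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))

# Hypothetical handgrip strength (kg) for four children, measured two years apart
before = [10.0, 20.0, 30.0, 40.0]
after = [12.0, 21.0, 33.0, 44.0]
t_stat = paired_t(before, after)
```

The resulting |t| is compared against the critical value for df = n - 1 (for df = 3 and p = 0.05, roughly 3.18), so the increase in this toy sample would count as significant, mirroring the p < 0.05 growth reported in the study.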

Keywords: non-formal physical education, Olympic pedagogy, physical development, health sciences

Procedia PDF Downloads 562
498 Effect of Chronic Exposure to Diazinon on Glucose Homeostasis and Oxidative Stress in Pancreas of Rats and the Potential Role of Mesna in Ameliorating This Effect

Authors: Azza El-Medany, Jamila El-Medany

Abstract:

Residential and agricultural pesticide use is widespread in the world. Their extensive and indiscriminate use, together with their ability to interact with biological systems other than their primary targets, constitutes a health hazard to both humans and animals. The toxic effects of pesticides include alterations in metabolism, yet little is known about the capacity of organophosphates to cause pancreatic toxicity. The primary goal of this work is to study the effects of chronic exposure to Diazinon, an organophosphate used in agriculture, on pancreatic tissues and to evaluate the ameliorating effect of Mesna, as an antioxidant, on this toxicity. Forty adult male rats weighing 300-350 g were used. The rats were classified into three groups. The control group (10 rats) received corn oil at a dose of 10 mg/kg/day by gavage once a day for 2 months. The Diazinon group (15 rats) received Diazinon at a dose of 10 mg/kg/day, dissolved in corn oil, by gavage once a day for 2 months. The treated group (15 rats) received Mesna (180 mg/kg) once a week by gavage, 15 minutes before administration of Diazinon, for 2 months. At the end of the experiment, the animals were anesthetized, blood samples were taken by cardiac puncture for glucose and insulin assays, and the pancreas was removed and divided into three portions: the first for histopathological study; the second for ultrastructural study; the third for biochemical study using ELISA kits, including determination of malondialdehyde (MDA), tumor necrosis factor α (TNF-α), myeloperoxidase (MPO) activity, and interleukin 1β (IL-1β). A significant increase in the levels of MDA, TNF-α, MPO activity, IL-1β, and serum glucose was observed in the Diazinon group, while a significant reduction was noticed in GSH and serum insulin levels. After treatment with Mesna, a significant reduction was observed in the previously mentioned parameters, together with a significant rise in GSH and insulin levels.
Histopathological and ultrastructural studies showed destruction of pancreatic tissues, with β cells the most affected cells among the injured islets as compared with the control group. The current study tries to shed light on the effects of chronic exposure to pesticides on vital organs such as the pancreas, and on the role that pesticide-induced oxidative stress may play in their toxicity. The study also demonstrates the role of antioxidant drugs in ameliorating or preventing this toxicity, a promising approach that may be considered a complementary treatment for pesticide toxicity.

Keywords: Diazinon, reduced glutathione, myeloperoxidase activity, tumor necrosis factor α, Mesna

Procedia PDF Downloads 239
497 Autophagy in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Larvae Exposed to Various Cadmium Concentrations - 6-Generational Exposure

Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska

Abstract:

Autophagy is a form of cell remodeling in which organelles are internalized into vacuoles called autophagosomes. Autophagosomes are the targets of lysosomes, which digest their cytoplasmic contents; eventually, this can lead to the death of the entire cell. However, in response to several stress factors, e.g., starvation or heavy metals (e.g., cadmium), autophagy can also act as a pro-survival factor, protecting the cell against death. The main aim of our studies was to check whether the autophagy that can appear in the midgut epithelium after Cd treatment becomes fixed over the following generations of insects. As a model animal we chose the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a well-known polyphagous pest of many vegetable crops. We analyzed specimens at the final (5th) larval stage because of its hyperphagy, which results in the assimilation of a large amount of cadmium. The culture consisted of two strains: a control strain (K), fed a standard diet, and a cadmium strain (Cd), fed the standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food) for 146 generations. In addition, control insects were transferred to Cd-supplemented diets (5, 10, 20, and 44 mg Cd per kg of dry weight of food), yielding the Cd1, Cd2, Cd3, and KCd experimental groups. Autophagy was examined using a transmission electron microscope. During this process, degenerated organelles are surrounded by a membranous phagophore and enclosed in an autophagosome; after the autophagosome fuses with a lysosome, an autolysosome is formed and digestion of the organelles begins. During the first year of the experiment, we analyzed specimens of 6 generations in all the lines.
The intensity of autophagy depends significantly on the generation, the tissue, and the cadmium concentration in the insect rearing medium. In generations I through VI, the intensity of autophagy in the midguts from the cadmium-exposed strains decreased gradually in the following order of strains: Cd1, Cd2, Cd3, and KCd. The highest proportion of cells with autophagy was observed in Cd1 and Cd2; nevertheless, in all exposed groups it remained higher than the percentage of cells with autophagy in the same tissues of insects from the control and the multigenerational cadmium strain. This may indicate that, during 6-generational exposure to various Cd concentrations, a stable tolerance to cadmium was not established. The study was financed by the National Science Centre Poland, grant no. 2016/21/B/NZ8/00831.

Keywords: autophagy, cell death, digestive system, ultrastructure

Procedia PDF Downloads 232
496 Design and Development of an Autonomous Beach Cleaning Vehicle

Authors: Mahdi Allaoua Seklab, Süleyman BaşTürk

Abstract:

In the quest to enhance coastal environmental health, this study introduces a fully autonomous beach cleaning machine, a breakthrough in leveraging green energy and advanced artificial intelligence for ecological preservation. Designed to operate independently, the machine is propelled by a solar-powered system, underscoring a commitment to sustainability and to the use of renewable energy in autonomous robotics. The vehicle's autonomous navigation is achieved through a sophisticated integration of LIDAR and a camera system, using an SSD MobileNet V2 object detection model for accurate, real-time trash identification. The SSD framework, renowned for its efficiency in detecting objects across varied scenarios, is coupled with the lightweight and highly accurate MobileNet V2 architecture, making it particularly well suited to the computational constraints of on-board processing in mobile robotics. Training of the SSD MobileNet V2 model was conducted on Google Colab, harnessing cloud-based GPU resources for a rapid and cost-effective learning process. The model was refined on an extensive dataset of annotated beach debris, with parameters optimized using the Adam optimizer and a cross-entropy loss function to achieve high-precision trash detection. This capability allows the machine to intelligently categorize and target waste, leading to more effective cleaning operations. This paper details the design and functionality of the beach cleaning machine, emphasizing its autonomous operational capabilities and the novel application of AI in environmental robotics. The results showcase the potential of such technology to fill existing gaps in beach maintenance, offering a scalable and eco-friendly solution to the growing problem of coastal pollution. The deployment of this machine represents a significant advancement in the field, setting a new standard for the integration of autonomous systems in the service of environmental stewardship.
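The cross-entropy objective mentioned for the detector's classification head reduces to a softmax over class scores followed by the negative log-likelihood of the true class. The sketch below shows that arithmetic in isolation; the class names and logit values are invented, and the actual SSD MobileNet V2 training of course evaluates this per anchor box inside a deep-learning framework.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw class scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, true_idx):
    """Negative log-probability the model assigns to the true class."""
    return -math.log(softmax(logits)[true_idx])

# Hypothetical 3-class head: background, plastic bottle, can
logits = [0.5, 2.0, 0.5]  # detector is fairly confident it sees a bottle
loss_confident = cross_entropy(logits, true_idx=1)

# A maximally uncertain prediction is penalized more heavily
loss_uniform = cross_entropy([0.0, 0.0, 0.0], true_idx=1)
```

During training, the Adam optimizer lowers this loss (summed with SSD's box-regression loss) by pushing the true class's logit up relative to the others, which is what drives the high-precision detection the paper reports.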

Keywords: autonomous beach cleaning machine, renewable energy systems, coastal management, environmental robotics

Procedia PDF Downloads 23