Search results for: cute experiments
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3168

348 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the rock strength. Common laboratory tests such as rod mill, ball mill, and uniaxial compressive strength tests share common shortcomings: time, cost of sample preparation, bias in plug selection, repeatability, and the amount of sample needed to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of the rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with a PDC (Polycrystalline Diamond Compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head, which relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity), and introduces the intrinsic specific energy, i.e. the energy required to drill a unit volume of rock with an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris), which is found to correlate well with the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data.
The third section presents the results and shows that the intrinsic specific energy correlates well with the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data, taking into account the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study based on drilling data recorded while drilling an exploration well in Australia.
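The specific energy in this abstract comes from the paper's phenomenological bit-rock model; as a rough illustration of the general idea only, the classic drilling specific-energy bookkeeping (thrust contribution plus rotary contribution per volume drilled) can be sketched as below. The numbers are illustrative assumptions, not values from the paper.

```python
import math

def specific_energy(wob_n, torque_nm, omega_rad_s, rop_m_s, area_m2):
    """Energy required to drill a unit volume of rock (Pa = J/m^3).

    Combines the thrust contribution (WOB / area) with the rotary
    contribution (torque * angular velocity per volumetric rate).
    """
    thrust_term = wob_n / area_m2
    rotary_term = (torque_nm * omega_rad_s) / (area_m2 * rop_m_s)
    return thrust_term + rotary_term

# Illustrative values for a small-diameter core head (assumed, not from
# the paper): 10 kN thrust, 50 N*m torque, 600 rpm, 3.6 m/h penetration.
e = specific_energy(
    wob_n=10_000.0,
    torque_nm=50.0,
    omega_rad_s=600 * 2 * math.pi / 60,   # 600 rpm in rad/s
    rop_m_s=0.001,                        # 3.6 m/h
    area_m2=0.005,
)
print(f"specific energy ~ {e / 1e6:.0f} MPa")
```

With these assumed inputs the rotary term dominates, as is typical in rotary drilling.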

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 110
347 Greenhouse Gases’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems

Authors: Alexander J. Severinsky

Abstract:

Radiative forcing by greenhouse gases (GHG) increases the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. The air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The reason for this inquiry is the author's skepticism that current changes can be explained by a ~1 °C global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore is that of an atmospheric temperature rise. The study utilizes an analysis of air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results coming from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative forcing of GHGs with a high level of scientific understanding is near 4.7 W/m² on average over the Earth's entire surface in 2018, as compared to pre-industrial time in the mid-1700s. The additional radiative forcing of fast feedbacks from various forms of water adds approximately ~15 W/m². In 2018, these radiative forcings heated the atmosphere by approximately 5.1 °C, which will create a thermal equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increase in the concentration of the GHGs, primarily of carbon dioxide and methane. These findings on the radiative forcing of GHGs in 2018 were applied to estimates of effects on major Earth ecosystems.
This additional forcing of nearly 20 W/m² causes an increase in the rate of ice melting by over 90 cm/year, an increase in green leaf temperature of nearly 5 °C, and an increase in the work energy of air of approximately 40 J/mol. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increase in forest fires, as well as the increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than the 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere.
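The abstract's central arithmetic is a linear forcing-to-temperature conversion. A minimal sketch of the standard zero-dimensional relation ΔT = ΔF/λ is shown below; the feedback parameter used here (3.2 W/m²/K, a commonly quoted Planck-feedback-only value) is an outside assumption, not the author's figure, so the resulting number differs from the abstract's estimates.

```python
def warming_from_forcing(delta_f_w_m2, feedback_w_m2_k=3.2):
    """Equilibrium warming for a radiative forcing via the linear
    response dT = dF / lambda. The 3.2 W/m^2/K feedback parameter is
    an assumed Planck-only value, not taken from the paper."""
    return delta_f_w_m2 / feedback_w_m2_k

# Direct GHG forcing quoted in the abstract for 2018 vs pre-industrial:
dT = warming_from_forcing(4.7)
print(f"{dT:.2f} K")  # ~1.47 K under these assumptions
```

The gap between this textbook estimate and the abstract's ~5 °C figure comes entirely from the feedback assumptions, which is exactly the quantity the study debates.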

Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air

Procedia PDF Downloads 79
346 Antagonistic Potential of Epiphytic Bacteria Isolated in Kazakhstan against Erwinia amylovora, the Causal Agent of Fire Blight

Authors: Assel E. Molzhigitova, Amankeldi K. Sadanov, Elvira T. Ismailova, Kulyash A. Iskandarova, Olga N. Shemshura, Ainur I. Seitbattalova

Abstract:

Fire blight is a quarantine bacterial disease that is very harmful to commercial apple and pear production. To date, several different methods have been proposed for disease control, including the use of copper-based preparations and antibiotics, which are not always reliable or effective. The use of bacteria as biocontrol agents is one of the most promising and eco-friendly alternative methods. Bacteria with protective activity against the causal agent of fire blight are often present among the epiphytic microorganisms of the phyllosphere of host plants. Therefore, the main objective of our study was the screening of local epiphytic bacteria as possible antagonists against Erwinia amylovora, the causal agent of fire blight. Samples of infected organs of apple and pear trees (shoots, leaves, fruits) were collected from industrial horticulture areas in various agro-ecological zones of Kazakhstan. Epiphytic microorganisms were isolated by standard and modified methods on specific nutrient media. The primary screening of the selected microorganisms under laboratory conditions for the ability to suppress the growth of Erwinia amylovora was performed by the agar diffusion test. Among 142 bacteria isolated from the fire blight host plants, 5 isolates, belonging to the genera Bacillus, Lactobacillus, Pseudomonas, Paenibacillus, and Pantoea, showed the highest antagonistic activity against the pathogen. The diameters of the inhibition zones depended on the species and ranged from 10 mm to 48 mm. The maximum inhibition zone diameter (48 mm) was exhibited by B. amyloliquefaciens. A weaker inhibitory effect was shown by Pantoea agglomerans PA1 (19 mm). The study of the inhibitory effect of Lactobacillus species against E. amylovora showed that, among the 7 isolates tested, only one (Lactobacillus plantarum 17M) produced an inhibition zone (30 mm).
In summary, this study was devoted to detecting beneficial epiphytic bacteria from the organs of pear and apple trees for fire blight control in Kazakhstan. Results obtained from the in vitro experiments showed that the most efficient bacterial isolates are Lactobacillus plantarum 17M, Bacillus amyloliquefaciens MB40, and Pantoea agglomerans PA1. These antagonists are suitable for development as biocontrol agents for fire blight control. Their efficacy will be evaluated further in biological tests under in vitro and field conditions in our subsequent studies.

Keywords: antagonists, epiphytic bacteria, Erwinia amylovora, fire blight

Procedia PDF Downloads 138
345 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to material engineering and biology, in which it is possible to image previously unseen phenomena in situ. For this, a nanofluidic device is used to insert the nanoflow with the sample inside the microscope, keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR) and limits the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The microfabrication of the device started with a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography step, these layers were etched (by reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the backside SiO2 was etched (BOE) and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Finally, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples.
This approach considerably reduces the common window bulging problem, improving the SNR, contrast, and spatial resolution, while substantially increasing the mechanical stability of the windows and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
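The FEM study examines bulging as a function of membrane dimensions, geometry, material, and thickness. For orientation only, the small-deflection scaling for a clamped square plate, w_max = α·p·a⁴/D with D the flexural rigidity, already captures why thinner and wider windows bulge more. All coefficients and material values below are textbook-level assumptions, not the paper's FEM results, and the formula only holds while the deflection stays below the thickness.

```python
def max_deflection_clamped_square(p_pa, side_m, thickness_m, e_pa, nu):
    """Small-deflection center deflection of a clamped square plate under
    uniform pressure: w_max = alpha * p * a^4 / D, with alpha ~ 0.00126
    (classic plate-theory coefficient for a clamped square plate)."""
    d = e_pa * thickness_m**3 / (12 * (1 - nu**2))  # flexural rigidity D
    return 0.00126 * p_pa * side_m**4 / d

# Si3N4-like values (assumed): E = 250 GPa, nu = 0.23, 500 nm membrane,
# 50 um window, 1 atm pressure difference against the TEM vacuum.
w = max_deflection_clamped_square(101_325.0, 50e-6, 500e-9, 250e9, 0.23)
print(f"center bulge ~ {w * 1e6:.1f} um")
```

The strong a⁴/t³ scaling is the motivation for subdividing a large window into an array of small reinforced micro-windows, as the device described here does.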

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 231
344 Differentially Expressed Genes in Atopic Dermatitis: Bioinformatics Analysis of Pooled Microarray Gene Expression Datasets in Gene Expression Omnibus

Authors: Danna Jia, Bin Li

Abstract:

Background: Atopic dermatitis (AD) is a chronic and refractory inflammatory skin disease characterized by relapsing eczematous and pruritic skin lesions. The global prevalence of AD ranges from 1% to 20%, and its incidence rates are increasing. It affects individuals from infancy to adulthood, significantly impacting their daily lives and social activities. Despite its major health burden, the precise mechanisms underlying AD remain unknown. Understanding the genetic differences associated with AD is crucial for advancing diagnosis and targeted treatment development. This study aims to identify candidate genes of AD by using bioinformatics analysis. Methods: We conducted a comprehensive analysis of four pooled transcriptomic datasets (GSE16161, GSE32924, GSE130588, and GSE120721) obtained from the Gene Expression Omnibus (GEO) database. Differential gene expression analysis was performed using the R statistical language. The differentially expressed genes (DEGs) between AD patients and normal individuals were functionally analyzed using Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment. Furthermore, a protein-protein interaction (PPI) network was constructed to identify candidate genes. Results: Across the patient-level gene expression datasets, we identified 114 shared DEGs, consisting of 53 upregulated genes and 61 downregulated genes. Functional analysis using GO and KEGG revealed that the DEGs were mainly associated with the negative regulation of transcription from the RNA polymerase II promoter, membrane-related functions, protein binding, and the human papillomavirus infection pathway. Through the PPI network analysis, we identified a set of core genes, including CD44, STAT1, HMMR, AURKA, MKI67, and SMARCA4. Conclusion: This study elucidates key genes associated with AD, providing potential targets for diagnosis and treatment. The identified genes have the potential to contribute to the understanding and management of AD.
The bioinformatics analysis conducted in this study offers new insights and directions for further research on AD. Future studies can focus on validating the functional roles of these genes and exploring their therapeutic potential in AD. While these findings will require further verification through experiments with in vivo and in vitro models, they provide initial insights into the dysfunctional inflammatory and immune responses associated with AD. Such information offers the potential to develop novel therapeutic targets for use in preventing and treating AD.
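The 114 shared DEGs arise from intersecting the per-dataset DEG lists. The bookkeeping step can be sketched as a set intersection over the four GEO accessions; the gene lists below are hypothetical placeholders (the real lists would come from a differential-expression tool such as limma in R).

```python
# Hypothetical per-dataset DEG calls keyed by GEO accession; contents are
# placeholders, not the study's actual results.
degs = {
    "GSE16161":  {"CD44", "STAT1", "AURKA", "FLG", "KRT6A"},
    "GSE32924":  {"CD44", "STAT1", "AURKA", "FLG", "CCL27"},
    "GSE130588": {"CD44", "STAT1", "AURKA", "LOR", "FLG"},
    "GSE120721": {"CD44", "STAT1", "AURKA", "FLG"},
}

# Genes called differentially expressed in every dataset.
shared = set.intersection(*degs.values())
print(sorted(shared))  # ['AURKA', 'CD44', 'FLG', 'STAT1']
```

Requiring a gene to appear in all four datasets is the strictest pooling rule; looser rules (e.g. three of four) trade robustness for sensitivity.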

Keywords: atopic dermatitis, bioinformatics, biomarkers, genes

Procedia PDF Downloads 54
343 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by using larger roughness elements, especially when they are applied at a height close to the measuring plane. The roughness elements also cause strong fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factor rapidly approaches a constant value. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
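The displacement thickness, momentum thickness, and form factor reported along the model follow from integrals of the measured velocity profile: δ* = ∫(1 − u/U)dy, θ = ∫(u/U)(1 − u/U)dy, H = δ*/θ. A minimal numerical sketch, using an assumed 1/7-power-law turbulent profile in place of the PIV data:

```python
import numpy as np

def trapz(f, x):
    """Trapezoidal integration (kept local for NumPy-version independence)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def integral_thicknesses(y, u, u_inf):
    """Displacement thickness, momentum thickness, and form factor from a
    wall-normal velocity profile u(y) with free-stream velocity u_inf."""
    ratio = u / u_inf
    delta_star = trapz(1.0 - ratio, y)          # displacement thickness
    theta = trapz(ratio * (1.0 - ratio), y)     # momentum thickness
    return delta_star, theta, delta_star / theta  # form factor H

# Assumed 1/7-power-law profile with a boundary layer thickness of 10 mm.
delta = 0.01
y = np.linspace(0.0, delta, 10_001)
u = (y / delta) ** (1 / 7)
d_star, theta, h = integral_thicknesses(y, u, u_inf=1.0)
print(f"delta* = {d_star*1e3:.2f} mm, theta = {theta*1e3:.2f} mm, H = {h:.2f}")
```

For the 1/7-power law the analytic values are δ* = δ/8, θ = 7δ/72, and H = 9/7 ≈ 1.29, which the numerical integration reproduces; measured profiles behind roughness elements would of course deviate from this idealization.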

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 279
342 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform

Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy

Abstract:

A bio-sensing method, based on the plasmonic properties of gold nano-islands, has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-Visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to their concentration. Exosomes transport cargoes of molecules and genetic material to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not practical to implement in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. Instead of antibodies, the new sensing protocol makes use of a specially synthesized polypeptide (Vn96) to capture and quantify exosomes from different media by binding the heat shock proteins of exosomes. The protocol was established and optimized on a glass substrate, in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex.
The microfluidic device designed for the sensing of exosomes consists of a glass substrate sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands, and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes, so that exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes of different origins.
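Since the LSPR shift is reported to be proportional to exosome concentration, quantification reduces to a linear calibration and its inversion. A hedged sketch with made-up shift data (the calibration points are assumptions, not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration: LSPR band shift (nm) at known exosome
# concentrations (particles/mL); a real curve would come from the device.
conc = np.array([0.0, 1e9, 2e9, 4e9, 8e9])
shift_nm = np.array([0.0, 1.1, 2.0, 4.2, 7.9])

# Least-squares line through the calibration points.
slope, intercept = np.polyfit(conc, shift_nm, 1)

def quantify(shift):
    """Invert the linear calibration to estimate concentration."""
    return (shift - intercept) / slope

estimate = quantify(3.0)
print(f"~{estimate:.2e} particles/mL for a 3.0 nm shift")
```

In practice the linear range is bounded by sensor saturation, so an unknown sample's shift should fall within the calibrated interval.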

Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing

Procedia PDF Downloads 149
341 Isolation and Selection of Strains Perspective for Sewage Sludge Processing

Authors: A. Zh. Aupova, A. Ulankyzy, A. Sarsenova, A. Kussayin, Sh. Turarbek, N. Moldagulova, A. Kurmanbayev

Abstract:

One of the methods for the bioconversion of organic waste into environmentally friendly fertilizer is composting. Microorganisms that produce hydrolytic enzymes play a significant role in accelerating the composting of organic waste. We studied the enzymatic potential (amylase, protease, cellulase, lipase, and urease activity) of bacteria isolated from the sewage sludge of the cities of Nur-Sultan, Rudny, and Fort-Shevchenko, the dacha soil of Nur-Sultan city, and freshly cut grass from the dacha, with the aim of processing organic waste and identifying active strains. Microorganisms were isolated by the enrichment culture method in liquid nutrient media, followed by inoculation on different solid media to isolate individual colonies. As a result, sixty-one microorganisms were isolated, three of which were thermophiles (DS1, DS2, and DS3). The highest numbers of isolates, twenty-one and eighteen, were obtained from the sewage sludge of Nur-Sultan and Rudny, respectively. Ten isolates came from the wastewater of the sewage treatment plant in Fort-Shevchenko, and nine and five isolates were obtained from the dacha soil of Nur-Sultan city and from freshly cut grass, respectively. The lipolytic, proteolytic, amylolytic, cellulolytic, ureolytic, and oil-oxidizing activities of the isolates were studied. According to the results of the experiments, starch hydrolysis (amylolytic activity) was found in two isolates, CB2/2 and CB2/1. Three isolates, CB2, CB2/1, and CB1/1, were selected for the highest ability to break down casein. Among the 61 isolated bacterial cultures, three could break down fats: CB3, CBG1/1, and IL3. Seven strains had cellulolytic activity: DS1, DS2, IL3, IL5, P2, P5, and P3. Six isolates rapidly decomposed urea. Isolate P1 could break down both casein and cellulose. Isolate DS3 was a thermophile and had cellulolytic activity.
Thus, based on the conducted studies, 15 isolates were selected as potential candidates for sewage sludge composting: CB2, CB3, CB1/1, CB2/2, CBG1/1, CB2/1, DS1, DS2, DS3, IL3, IL5, P1, P2, P5, and P3. The selected strains were identified by MALDI-TOF mass spectrometry. Isolate CB3 was assigned to Rhodococcus rhodochrous; two isolates, CB2 and CB1/1, to Bacillus cereus; CB2/2 to Chryseobacterium arachidis; CBG1/1 to Pseudoxanthomonas sp.; CB2/1 to Bacillus megaterium; DS1 to Pediococcus acidilactici; DS2 to Paenibacillus residui; DS3 to Brevibacillus invocatus; three strains, IL3, P5, and P3, to Enterobacter cloacae; two strains, IL5 and P2, to Ochrobactrum intermedium; and P1 to Bacillus licheniformis. In total, 61 isolates were obtained from the wastewater of the cities of Nur-Sultan, Rudny, and Fort-Shevchenko, the dacha soil of Nur-Sultan city, and freshly cut grass from the dacha. Based on the highest enzymatic activity, 15 active isolates were selected and identified. These strains may become candidates for a biopreparation for sewage sludge processing.

Keywords: sewage sludge, composting, bacteria, enzymatic activity

Procedia PDF Downloads 77
340 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci, one per site: regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. The construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are effective O(n log n) algorithms for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon; the polygon is obviously included in its locus. Using these properties in the VD construction algorithm is a way to reduce computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which makes it possible to exploit these properties effectively. The solution also follows a reduction scheme. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is oriented so that the interior of the polygon lies to the left of it. The proposed algorithm then constructs the VD for the set of oriented sites using the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental design decisions: 1. The algorithm constructs only those VD edges that lie outside the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The event list in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than in logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. Its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
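The sweepline algorithm itself is involved, but the loci the abstract describes are easy to visualize with a brute-force nearest-segment-site classification — a toy stand-in for intuition, not the authors' O(n log n) method:

```python
import math

def dist_point_segment(px, py, ax, ay, bx, by):
    """Euclidean distance from point (px, py) to segment (a, b)."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    denom = abx * abx + aby * aby
    # Clamp the projection parameter to stay on the segment.
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def nearest_site(p, sites):
    """Index of the segment site whose Voronoi locus contains point p."""
    return min(range(len(sites)),
               key=lambda i: dist_point_segment(*p, *sites[i]))

# Two segment sites: one along the x-axis, one vertical at x = 10.
sites = [(0.0, 0.0, 5.0, 0.0), (10.0, 0.0, 10.0, 5.0)]
print(nearest_site((1.0, 2.0), sites))  # 0: closer to the horizontal segment
print(nearest_site((9.0, 4.0), sites))  # 1: closer to the vertical segment
```

Evaluating this classifier over a grid traces out exactly the loci whose boundaries the sweepline algorithm computes directly, which is what makes the O(n log n) construction worthwhile for thousands of polygons.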

Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites

Procedia PDF Downloads 151
339 Identification of Viruses Infecting Garlic Plants in Colombia

Authors: Diana M. Torres, Anngie K. Hernandez, Andrea Villareal, Magda R. Gomez, Sadao Kobayashi

Abstract:

Colombian garlic crops exhibited mild mosaic, yellow stripes, and deformation. This group of symptoms suggested a viral infection. Several viruses belonging to the genera Potyvirus, Carlavirus, and Allexivirus are known to infect garlic and lower its yield worldwide, but in Colombia there are no studies of viral infections in this crop; to the best of our knowledge, only leek yellow stripe virus (LYSV) has been reported. In Colombia, there are no management strategies for viral diseases in garlic because of the lack of information about viral infections of this crop, which is reflected in (i) the high prevalence of viral-related symptoms in garlic fields and (ii) a high dispersal rate. For these reasons, the purpose of the present study was to evaluate the viral status of garlic in Colombia, which can represent a major threat to garlic yield and quality in this country. Fifty-five symptomatic leaf samples were collected for virus detection by RT-PCR and mechanical inoculation. Total RNA isolated from infected samples was subjected to RT-PCR with primers 1-OYDV-G/2-OYDV-G for Onion yellow dwarf virus (OYDV) (expected size 774 bp), 1LYSV/2LYSV for LYSV (expected size 1000 bp), SLV 7044/SLV 8004 for Shallot latent virus (SLV) (expected size 960 bp), GCL-N30/GCL-C40 for Garlic common latent virus (GCLV) (expected size 481 bp), and EF1F/EF1R as internal control (expected size 358 bp). GCLV, SLV, and LYSV were detected in the infected samples; at least one of the viruses was detected in 95.6% of the analyzed samples. GCLV and SLV were detected in single infections with low prevalence (9.3% and 7.4%, respectively). Garlic generally becomes coinfected with several types of viruses. Four viral complexes were identified: three double infections (64% of the analyzed samples) and one triple infection (15%). The most frequent viral complex was SLV + GCLV, infecting 48.1% of the samples.
The other double complexes identified had a prevalence of 7% (GCLV + LYSV and SLV + LYSV), and 5.6% of the samples were free from these viruses. Mechanical transmission experiments were set up using leaf tissue from samples collected in infected fields; different test plants were assessed to determine the host range, but it was restricted to Chenopodium quinoa, confirming the presence of the detected viruses, which have a limited host range and were detected in C. quinoa by RT-PCR. The results of the molecular and biological tests confirm the presence of SLV, LYSV, and GCLV; this is the first report of SLV and LYSV in garlic plants in Colombia, which can represent a serious threat to this crop in this country.
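The single/double/triple infection statistics above reduce to counting the set of viruses detected per sample. A sketch with hypothetical detection calls (the samples and fractions below are illustrative, not the study's 55-sample data):

```python
from collections import Counter

# Hypothetical RT-PCR calls per sample: which viruses amplified.
detections = {
    "s01": {"SLV", "GCLV"},
    "s02": {"SLV", "GCLV"},
    "s03": {"SLV", "GCLV", "LYSV"},
    "s04": {"GCLV"},
    "s05": set(),            # virus-free sample
    "s06": {"SLV", "LYSV"},
}

# Tally infection classes by the number of viruses detected per sample.
classes = Counter(len(v) for v in detections.values())
labels = {0: "virus-free", 1: "single", 2: "double", 3: "triple"}
for n, count in sorted(classes.items()):
    print(f"{labels[n]}: {count}/{len(detections)} samples")
```

Keeping the calls as sets also makes it trivial to ask which specific complex (e.g. SLV + GCLV) is most frequent, as the study reports.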

Keywords: SLV, GCLV, LYSV, leek yellow stripe virus, Allium sativum

Procedia PDF Downloads 118
338 In Vitro Evaluation of a Chitosan-Based Adhesive to Treat Bone Fractures

Authors: Francisco J. Cedano, Laura M. Pinzón, Camila I. Castro, Felipe Salcedo, Juan P. Casas, Juan C. Briceño

Abstract:

Complex fractures located in articular surfaces are challenging to treat, and their reduction with conventional treatments could compromise the functionality of the affected limb. An adhesive material to treat those fractures is desirable for orthopedic surgeons. This adhesive must be biocompatible and have high adhesion to the bone surface in an aqueous environment. The proposed adhesive is based on chitosan, given its adhesive and biocompatibility properties. Chitosan is mixed with calcium carbonate and hydroxyapatite, which contribute structural support and gel-like behavior, and glutaraldehyde is used as a cross-linking agent to maintain the adhesive's mechanical performance in an aqueous environment. This work aims to evaluate the rheological, adhesion strength, and biocompatibility properties of the proposed adhesive using in vitro tests. The gelification process of the adhesive was monitored by oscillatory rheometry in an ARG-2 TA Instruments rheometer, using a 22 mm parallel plate geometry and a 1 mm gap. Time sweep experiments were conducted at 1 Hz frequency, 1% strain, and 37 °C from 0 to 2400 s. Adhesion strength was measured using a butt joint test with bovine cancellous bone fragments as substrates. The test was conducted 5 min, 20 min, and 24 hours after curing the adhesive under water at 37 °C. Biocompatibility was evaluated by a cytotoxicity test in a fibroblast cell culture using the MTT assay and SEM. The rheological results showed that the average gelification time of the adhesive is 820 ± 107 s and that it reaches storage modulus magnitudes of up to 10⁶ Pa; the adhesive shows solid-like behavior. The butt joint test showed a tensile bond strength of 28.6 ± 9.2 kPa for the adhesive cured for 24 hours, and there was no significant difference in adhesion strength between 20 minutes and 24 hours. The MTT assay showed 70 ± 23% active cells at the sixth day of culture; this percentage is estimated with respect to a positive control (cells with culture medium and bovine serum only).
High-vacuum SEM observation made it possible to localize and study the morphology of the fibroblasts present in the adhesive. All fibroblasts captured by SEM presented the typical flattened structure, with filopodia attached to the adhesive surface. This project reports an adhesive based on chitosan that is biocompatible, as shown by the high percentage of active cells in the MTT test and corroborated by SEM. It also has adhesive properties under conditions that model the clinical application, and the adhesion strength does not decrease between 5 minutes and 24 hours.
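A gelification time from a time sweep is commonly taken as the point where the storage modulus G′ overtakes the loss modulus G″ (one standard operational definition of the gel point, not necessarily the criterion used in this study). A minimal sketch of crossover detection on synthetic moduli curves:

```python
def gel_time(times, g_prime, g_double_prime):
    """First time at which the storage modulus G' exceeds the loss
    modulus G'' -- a common operational definition of the gel point."""
    for t, gp, gpp in zip(times, g_prime, g_double_prime):
        if gp > gpp:
            return t
    return None  # no crossover within the sweep

# Synthetic 1 Hz time-sweep data (Pa): G' rising as the network forms,
# G'' roughly flat; values are illustrative only, not the measured data.
times = [0, 200, 400, 600, 800, 1000, 1200]
g_p   = [10, 50, 200, 900, 5_000, 40_000, 300_000]
g_pp  = [300, 320, 350, 400, 450, 500, 520]

print(f"gel point ~ {gel_time(times, g_p, g_pp)} s")
```

With real data one would interpolate between the two samples bracketing the crossover rather than report the first discrete time step.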

Keywords: bioadhesive, bone adhesive, calcium carbonate, chitosan, hydroxyapatite, glutaraldehyde

Procedia PDF Downloads 295
337 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. (i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. (ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound to and combined with the search, or as an alternative and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters.
Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering algorithm bound to and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex), demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
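The three-step procedure can be sketched as follows. This is a minimal illustration only: a plain absolute-correlation score stands in for the paper's class-correlation and separability measures, and a greedy correlation-threshold grouping stands in for its automatic clustering algorithm; all function names, thresholds and the synthetic data are hypothetical.

```python
import numpy as np

def relevance(X, y):
    # stand-in class-correlation measure: absolute Pearson correlation
    # of each feature column with the (binary) class labels
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
    return np.abs(num / den)

def cluster_features(X, thresh=0.8):
    # greedy grouping: features whose absolute pairwise correlation
    # exceeds thresh end up in the same cluster
    C = np.abs(np.corrcoef(X, rowvar=False))
    labels = -np.ones(X.shape[1], dtype=int)
    k = 0
    for i in range(X.shape[1]):
        if labels[i] < 0:
            labels[(labels < 0) & (C[i] > thresh)] = k
            k += 1
    return labels

def select(X, y, rel_thresh=0.1, n_select=2):
    rel = relevance(X, y)                        # step 1: relevance filter
    keep = np.where(rel > rel_thresh)[0]
    labels = cluster_features(X[:, keep])        # step 2: feature clustering
    pos = {f: i for i, f in enumerate(keep)}
    chosen, remaining = [], list(keep)
    while remaining and len(chosen) < n_select:  # step 3: sequential search
        best = max(remaining, key=lambda f: rel[f])
        chosen.append(best)
        # remove every feature belonging to the selected feature's cluster
        remaining = [f for f in remaining
                     if labels[pos[f]] != labels[pos[best]]]
    return chosen

# synthetic check: f1 duplicates f0, f3 is irrelevant noise
rng = np.random.default_rng(0)
n = 2000
f0 = rng.standard_normal(n)
f1 = f0 + 0.01 * rng.standard_normal(n)   # redundant copy of f0
f2 = rng.standard_normal(n)               # second informative feature
f3 = rng.standard_normal(n)               # irrelevant noise
y = (f0 + f2 > 0).astype(float)
X = np.column_stack([f0, f1, f2, f3])
chosen = select(X, y)
```

On this toy data the procedure keeps one representative of the redundant pair (f0, f1), keeps the independent informative feature f2, and never selects two features from the same cluster.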

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic cooccurrence matrix

Procedia PDF Downloads 103
336 Noninvasive Technique for Measurement of Heartbeat in Zebrafish Embryos Exposed to Electromagnetic Fields at 27 GHz

Authors: Sara Ignoto, Elena M. Scalisi, Carmen Sica, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo

Abstract:

The new fifth generation technology (5G), which should favor high data-rate connections (1 Gbps) and latency times lower than the current ones (<1 ms), has the characteristic of working on different frequency bands of the radio wave spectrum (700 MHz, 3.6-3.8 GHz and 26.5-27.5 GHz), thus also exploiting higher frequencies than previous mobile radio generations (1G-4G). The higher-frequency waves, however, have a lower capacity to propagate in free space, and therefore, in order to guarantee capillary coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern in recent years about possible harmful effects on human health, and several studies using different animal models have been published. This study aimed to observe the possible short-term effects induced by 5G millimeter waves on the heartbeat of early life stages of Danio rerio using DanioScope software (Noldus). DanioScope is a complete toolbox for measurements on zebrafish embryos and larvae. The effect of substances on the developing zebrafish embryo can be measured through a range of parameters: earliest activity of the embryo's tail, activity of the developing heart, speed of blood flowing through the vein, and length and diameters of body parts. Activity measurements, cardiovascular data, blood flow data and morphometric parameters can be combined in one single tool. The acquired data are processed by the software and provided both numerically and graphically. The experiments were performed at 27 GHz with a non-commercial high-gain pyramidal horn antenna. According to OECD guidelines, exposure to 5G millimeter waves was tested by a fish embryo toxicity test within 96 hours post fertilization. Observations were recorded every 24 h until the end of the short-term test (96 h).
The results showed an increased heartbeat rate in exposed embryos at 48 hpf compared to the control group, but this increase was no longer observed at 72-96 hpf. Nowadays, literature data on this topic are scant, so these results could be useful for approaching new studies and also for evaluating the potential cardiotoxic effects of mobile radiofrequency radiation.

Keywords: Danio rerio, DanioScope, cardiotoxicity, millimeter waves

Procedia PDF Downloads 129
335 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast

Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi

Abstract:

The presentation will report the activities managed during the Summer of 2017 by a team composed of staff from a University Department, a National Research Council Institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting, in order to characterize light pollution issues on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. The campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with an environmental engineering and free/open source GIS core team; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution in coastal environments in 2015. While the core of the activities concerns in situ data, the campaign will also account for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness; (2) to collect night sky brightness data and test a protocol for applications to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems.
The collaboration of an interdisciplinary team in the study of artificial lighting issues is not a common case in Italy, and the possibility of undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the Southern part of the region. By combining environmental monitoring and communication actions, the campaign will contribute to the promotion of good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increased citizen awareness through stargazing, night photography and active participation in field campaign measurements.

Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education

Procedia PDF Downloads 152
334 Numerical Investigation of the Effects of Surfactant Concentrations on the Dynamics of Liquid-Liquid Interfaces

Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji

Abstract:

Theoretically, there exist two mathematical interfaces (fluid-solid and fluid-fluid) when a liquid film is present on solid surfaces. These interfaces overlap if the mineral surface is oil-wet or mixed wet, and therefore, the effects of disjoining pressure are significant on both boundaries. Hence, dewetting is a necessary process that could detach oil from the mineral surface. However, if the thickness of the thin water film directly in contact with the surface is large enough, disjoining pressure can be thought to be zero at the liquid-liquid interface. Recent studies show that the integration of fluid-fluid interactions with fluid-rock interactions is an important step towards a holistic approach to understanding smart water effects. Experiments have shown that the brine solution can alter the micro forces at oil-water interfaces, and these ion-specific interactions lead to oil emulsion formation. The natural emulsifiers present in crude oil behave as polyelectrolytes when the oil interfaces with low salinity water. Wettability alteration caused by low salinity waterflooding during Enhanced Oil Recovery (EOR) process results from the activities of divalent ions. However, polyelectrolytes are said to lose their viscoelastic property with increasing cation concentrations. In this work, the influence of cation concentrations on the dynamics of viscoelastic liquid-liquid interfaces is numerically investigated. The resultant ion concentrations at the crude oil/brine interfaces were estimated using a surface complexation model. Subsequently, the ion concentration parameter is integrated into a mathematical model to describe its effects on the dynamics of a viscoelastic interfacial thin film. The film growth, stability, and rupture were measured after different time steps for three types of fluids (Newtonian, purely elastic and viscoelastic fluids). 
The interfacial films respond to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets with time. Increased surfactant accumulation at the interface results in a higher film growth rate, which leads to instability and the subsequent formation of more satellite droplets. Purely elastic and viscoelastic properties limit the film growth rate, and consequently improve film stability, compared to the Newtonian fluid. Therefore, low salinity and a reduced concentration of the potential determining ions in the injection water will lead to improved interfacial viscoelasticity.

Keywords: liquid-liquid interfaces, surfactant concentrations, potential determining ions, residual oil mobilization

Procedia PDF Downloads 119
333 Phytoremediation: Pb, Cr and Cd Accumulation in Fruits and Leaves of Vitis vinifera L. from Air Pollution and Interaction between Their Uptake Based on the Distance from the Main Road

Authors: Fatemeh Mohsennezhad

Abstract:

Air pollution is one of the major problems for the environment. Providing healthy food and protecting water sources from pollution have long been concerns of human societies and decision-making centers, so that protecting food from pollution, and detecting and measuring sources of pollution, have become important. The nutritive and political significance of the grape in this area, the extensive use of the leaf and fruit of this plant, the development of urban areas around grape gardens and the construction of the Tabriz – Miandoab road, which is the most important link between East and West Azarbaijan, led us to examine the impact of this road construction and of urban environment pollutants such as lead, chromium and cadmium on the quality of this valuable crop. First, samples were taken at different distances from the road, from adjacent places to medium distances, each place being located exactly by Google Earth and GPS. Digestion was done by burning the dry material with hydrochloric acid, and the ashes were analyzed by atomic absorption spectroscopy to determine Pb, Cr and Cd accumulations. In these experiments, the effects of the following two factors were examined as variables: garden distance from the main road, with levels 1: 50 meters, 2: 120-200 meters, 3: above 800 meters; and plant organ, with levels 1: fruit, 2: leaves. Finally, the results were processed with SPSS software. The highest lead content, 3.54 ppm, was found in sample No. 54, in fruits at 800 meters distance from the road, and the lowest, 1.00 ppm, in sample No. 50, in fruits at 1000 meters from the road. In leaves, the highest lead content was 19.16 ppm in sample No. 15, at 50 meters distance from the road, and the lowest was 1.41 ppm in sample No. 31, also at 50 meters from the road. Pb uptake is significantly different between the 50-meter and 200-meter distances, meaning that Pb uptake is highest near the main road. This result does not hold for the other elements, however: distance has no significant effect on Cr uptake.
The analysis of variance for distance and plant organ for Cd showed that Cd uptake is significantly different between fruit and leaf, but that neither distance nor the interaction between distance and plant organ is significant. There is no significant interaction between the uptakes of these elements, either in fruits or in leaves. When leaves and fruits are considered together, however, a highly significant correlation between the heavy metal accumulations appears, meaning that each of these elements promotes the uptake of the others irrespective of the organ. In the tested area, it became clear that, from the perspective of heavy metal accumulation, there is no significant difference over the existing distances between the road and the gardens. There is a significant difference among the accumulations of the heavy metals themselves; in other words, the increase ratio of one metal relative to another differed, as shown in the corresponding graphs. The interaction between the elements and the distance between garden and road was not significant.
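The two-factor design described above (distance × plant organ, with replicate samples) corresponds to a standard balanced two-way ANOVA of the kind SPSS performs. A minimal sketch of that computation, using illustrative made-up concentration values (the actual measured data are not reproduced here):

```python
import numpy as np
from scipy.stats import f as f_dist

def two_way_anova(data):
    """Balanced two-way ANOVA. data has shape (a, b, r):
    a levels of factor A, b levels of factor B, r replicates per cell."""
    a, b, r = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))          # factor A level means
    mean_b = data.mean(axis=(0, 2))          # factor B level means
    cell = data.mean(axis=2)                 # cell means
    ss_a = b * r * ((mean_a - grand) ** 2).sum()
    ss_b = a * r * ((mean_b - grand) ** 2).sum()
    ss_ab = r * ((cell - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
    ss_e = ((data - cell[:, :, None]) ** 2).sum()
    df = [a - 1, b - 1, (a - 1) * (b - 1), a * b * (r - 1)]
    ms = [ss_a / df[0], ss_b / df[1], ss_ab / df[2], ss_e / df[3]]
    F = [ms[0] / ms[3], ms[1] / ms[3], ms[2] / ms[3]]
    p = [f_dist.sf(Fi, dfi, df[3]) for Fi, dfi in zip(F, df[:3])]
    return F, p

# Hypothetical Cd-like data: 3 distance levels x 2 organs x 4 replicates,
# constructed with a strong organ effect and no distance effect.
eps = np.array([0.2, -0.2, 0.1, -0.1])       # deterministic "noise"
data = np.zeros((3, 2, 4))
for i in range(3):
    for j in range(2):
        data[i, j] = 10.0 + 5.0 * j + eps
F, p = two_way_anova(data)
```

On this constructed data the organ factor is highly significant while distance and the interaction are not, mirroring the pattern of significance reported for Cd in the abstract.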

Keywords: Vitis vinifera L., phytoremediation, heavy metals accumulation, lead, chromium, cadmium

Procedia PDF Downloads 329
332 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures are tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results when compared against those of some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
Better accuracy results are obtained when LSTM layers are used in our schema. In terms of a balance between the results, better accuracy is obtained when dense layers are used, because in that case the model correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems have been identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, taking into account that the linguistic features employed are language-independent.
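The pairwise classification setup can be illustrated with a toy sketch: each training instance is a feature vector comparing the two MT outputs against the reference, and a classifier predicts which output is better. Below, a plain logistic-regression stand-in is trained on synthetic similarity features; the paper's actual embeddings, string features and NN architectures are not reproduced, and all variable names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features: e.g. similarity(NMT, ref) - similarity(SMT, ref),
# plus an uninformative second feature. Label 1 = NMT output preferred.
n = 400
margin = np.where(rng.random(n) < 0.5, 1.0, -1.0)   # +1 -> NMT better
x0 = margin + 0.2 * rng.standard_normal(n)          # informative feature
x1 = rng.standard_normal(n)                         # noise feature
X = np.column_stack([x0, x1, np.ones(n)])           # bias column
y = (margin > 0).astype(float)

# Minimal pairwise classifier: logistic regression by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (prob - y) / n

acc = ((X @ w > 0) == (y == 1)).mean()
```

With clearly separable toy features the linear stand-in reaches near-perfect training accuracy; the multi-layer architectures of the paper address the realistic case where the comparison features are not linearly separable.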

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 105
331 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e. its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially taking into consideration that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time.
In a next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 106
330 Arguments against Innateness of Theory of Mind

Authors: Arkadiusz Gut, Robert Mirski

Abstract:

The nativist-constructivist debate constitutes a considerable part of current research on mindreading. Peter Carruthers and his colleagues are known for their nativist position in the debate and take issue with constructivist views proposed by other researchers, with Henry Wellman, Alison Gopnik, and Ian Apperly at the forefront. More specifically, Carruthers together with Evan Westra propose a nativistic explanation of Theory of Mind Scale study results that Wellman et al. see as supporting constructivism. While allowing for development of the innate mindreading system, Westra and Carruthers base their argumentation essentially on a competence-performance gap, claiming that cross-cultural differences in Theory of Mind Scale progression as well as discrepancies between infants’ and toddlers’ results on verbal and non-verbal false-belief tasks are fully explainable in terms of acquisition of other, pragmatic, cognitive developments, which are said to allow for an expression of the innately present Theory of Mind understanding. The goal of the present paper is to bring together arguments against the view offered by Westra and Carruthers. It will be shown that even though Carruthers et al.’s interpretation has not been directly controlled for in Wellman et al.’s experiments, there are serious reasons to dismiss such nativistic views which Carruthers et al. advance. The present paper discusses the following issues that undermine Carruthers et al.’s nativistic conception: (1) The concept of innateness is argued to be developmentally inaccurate; it has been dropped in many biological sciences altogether and many developmental psychologists advocate for doing the same in cognitive psychology. 
The reality of development is a complex interaction of changing elements that is belied by the simplistic notion of 'the innate.' (2) The purported innate mindreading conceptual system posited by Carruthers ascribes adult-like understanding to infants, ignoring the difference between first- and second-order understanding, between what can be called 'presentation' and 'representation.' (3) Advances in neurobiology speak strongly against any inborn conceptual knowledge; the neocortex, where conceptual knowledge finds its correlates, is said to be largely equipotential at birth. (4) Carruthers et al.'s interpretations are excessively charitable; they extend results of studies done with 15-month-olds to conclusions about innateness, whereas in reality at that age there has been plenty of time for construction of the skill. (5) The looking-time experimental paradigm used in the non-verbal false-belief tasks that provide the main support for Carruthers' argumentation has been criticized on methodological grounds. In the light of the presented arguments, nativism in theory of mind research is concluded to be an untenable position.

Keywords: development, false belief, mindreading, nativism, theory of mind

Procedia PDF Downloads 189
329 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas that is usually taught based on the concept of harmonic oscillation. Dealing with a frictional oscillator can be an interesting activity in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the applied model in this study is interesting and useful for undergraduate students. The study presents a well-known physical system, which is mostly discussed theoretically in high school and at the university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders mounted at the same height, so that they are horizontally aligned, rotating at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation: the magnitudes of the normal forces in the system are functions of the position of the rod, and the frictional forces, with a constant coefficient of friction, are proportional to them. Therefore, the whole description of the model relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the phenomenon of the described frictional oscillator is not so straightforward after all; experiments show that the simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod adjusts itself to a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect.
The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to the idea of a more complex model that can describe the motion of the rod more accurately. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This velocity dependence has been investigated in many research articles as well; however, this specific problem can demonstrate the key concept of the varying friction coefficient and its importance in an interesting and illustrative way. The position function of the rod is described by a more complicated and non-trivial, yet more precise, equation than the usual harmonic description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical position function with the measured position function in time. The project provides useful materials and knowledge for undergraduate students and a new perspective in university physics education.
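The effect described above can be reproduced numerically. In the sketch below, the standard equation of motion for a rod on counter-rotating cylinders is integrated with an assumed linear decrease of the friction coefficient with relative sliding speed; the parameter values and the form of μ(v) are illustrative, not those of the study. With a constant μ the same model gives pure harmonic oscillation, while the velocity-dependent μ acts as negative damping and makes a small initial displacement grow into a non-harmonic oscillation (the saturation to a stable amplitude occurs outside the small-velocity regime shown here).

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81             # gravitational acceleration, m/s^2
a = 0.1              # half-distance between the cylinder axes, m
u = 1.0              # surface speed of the cylinders (omega * R), m/s
mu0, k = 0.4, 0.05   # assumed friction law mu(s) = mu0 - k*s (illustrative)

def mu(s):
    # friction coefficient decreasing with relative sliding speed s >= 0
    return max(mu0 - k * s, 0.05)

def rhs(t, y):
    x, v = y
    # normal forces per unit rod mass, functions of the rod position
    n1 = g * (a - x) / (2 * a)   # left cylinder
    n2 = g * (a + x) / (2 * a)   # right cylinder
    # kinetic friction opposes the rod's velocity relative to each surface
    f = (mu(abs(u - v)) * np.sign(u - v) * n1
         + mu(abs(u + v)) * np.sign(-u - v) * n2)
    return [v, f]

t_eval = np.linspace(0.0, 10.0, 4000)
sol = solve_ivp(rhs, (0.0, 10.0), [0.002, 0.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)
x = sol.y[0]
```

Linearizing the force about x = 0 gives x'' − (kg)x' + (μ(u)g/a)x = 0, so the decreasing μ contributes the anti-damping term kg·x' that destroys the simple harmonic solution.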

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 173
328 Assessing the Blood-Brain Barrier (BBB) Permeability in PEA-15 Mutant Cat Brain using Magnetization Transfer (MT) Effect at 7T

Authors: Sultan Z. Mahmud, Emily C. Graff, Adil Bashir

Abstract:

Phosphoprotein enriched in astrocytes, 15 kDa (PEA-15) is a multifunctional adapter protein associated with the regulation of apoptotic cell death. It has recently been discovered that PEA-15 is crucial for normal neurodevelopment in domestic cats, a gyrencephalic animal model, although the exact function of PEA-15 in neurodevelopment is unknown. This study investigates how PEA-15 affects blood-brain barrier (BBB) permeability in the cat brain, which can cause abnormalities in tissue metabolite and energy supplies. Severe polymicrogyria and microcephaly have been observed in cats with a loss-of-function PEA-15 mutation, affecting the normal neurodevelopment of the cat. This suggests that the vital role of PEA-15 in neurodevelopment is associated with gyrification. Neurodevelopment is a highly energy-demanding process, and the mammalian brain depends on glucose as its main energy source. PEA-15 plays a very important role in glucose uptake and utilization by interacting with phospholipase D1 (PLD1). Mitochondria also play a critical role in bioenergetics and are essential to supply the energy needed for neurodevelopment. Cerebral blood flow regulates the metabolite supply, and recent findings have shown that blood plasma contains mitochondria as well. Thus, the BBB can play a very important role in regulating metabolite and energy supply in the brain. In this study, the blood-brain barrier permeability in the cat brain was measured using the MRI magnetization transfer (MT) effect on the perfusion signal. Perfusion is the tissue-mass-normalized supply of blood to the capillary bed; it accommodates the supply of oxygen and other metabolites to the tissue. A fraction of the arterial blood water can diffuse into the tissue, depending on the BBB permeability. This fraction is known as the water extraction fraction (EF).
MT is a process of saturating the macromolecules, which affects the blood water that has diffused into the tissue while having minimal effect on intravascular blood water that has not exchanged with the tissue. Measurement of the perfusion signal with and without MT enables estimation of the microvascular blood flow, EF and permeability surface area product (PS) in the brain. All experiments were performed on a Siemens 7T Magnetom with a 32-channel head coil. Three control cats and three PEA-15 mutant cats were used for the study. For the control cats, the average EF in white and gray matter was 0.9±0.1 and 0.86±0.15, respectively; perfusion in white and gray matter was 85±15 mL/100g/min and 97±20 mL/100g/min, respectively; and PS in white and gray matter was 201±25 mL/100g/min and 225±35 mL/100g/min, respectively. For the PEA-15 mutant cats, the average EF in white and gray matter was 0.81±0.15 and 0.77±0.2, respectively; perfusion in white and gray matter was 140±25 mL/100g/min and 165±18 mL/100g/min, respectively; and PS in white and gray matter was 240±30 mL/100g/min and 259±21 mL/100g/min, respectively. These results show that the BBB is compromised in the PEA-15 mutant cat brain: EF is decreased, while perfusion and PS are increased in the mutant cats compared to the control cats. These findings might further explain the function of PEA-15 in neurodevelopment.
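The relation between extraction fraction, flow and permeability surface area product is commonly described by the Renkin-Crone capillary model, EF = 1 − exp(−PS/F). Assuming that model (the abstract does not state which relation was used), the reported PS values can be roughly reproduced from the reported EF and perfusion:

```python
import math

def ps_from_ef(flow, ef):
    # Renkin-Crone model: EF = 1 - exp(-PS / F)  =>  PS = -F * ln(1 - EF)
    # flow and PS in the same units (here mL/100g/min)
    return -flow * math.log(1.0 - ef)

# control white matter: F = 85 mL/100g/min, EF = 0.9
ps_wm = ps_from_ef(85.0, 0.9)        # ~196, near the reported 201 +/- 25
# mutant white matter: F = 140 mL/100g/min, EF = 0.81
ps_wm_mut = ps_from_ef(140.0, 0.81)  # ~233, near the reported 240 +/- 30
```

The computed values fall within the reported error bars, illustrating how a lower EF can still coincide with a higher PS when perfusion rises sufficiently, as observed in the mutant cats.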

Keywords: BBB, cat brain, magnetization transfer, PEA-15

Procedia PDF Downloads 108
327 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk

Authors: Michael Mihalicz, Aziz Guergachi

Abstract:

Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of Microeconomics and Mathematical Psychology. Early theories in Decision Science rested on the logic of rationality, but as the field and related fields matured, descriptive theories emerged that are capable of explaining systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will sample visceral states and risk preferences at random times over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods.
Significance: The expected results include a series of measurable and systematic effects on the subjective interpretations of gamble attributes and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness about the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
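The subjective interpretation of gamble attributes under Cumulative Prospect Theory can be illustrated with a minimal sketch. The functional forms and parameter values below are the standard Tversky-Kahneman specification, chosen here purely for illustration; they are not the elicitation procedure or parameters used in this study:

```python
def weight(p, gamma=0.61):
    """Probability weighting function: overweights small probabilities
    and underweights moderate-to-large ones."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def value(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses
    (loss aversion coefficient lam)."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def cpt_value_simple(p, x):
    """Subjective value of a simple gamble 'win x with probability p'."""
    return weight(p) * value(x)
```

With these illustrative parameters, small probabilities are overweighted and losses loom larger than equivalent gains; these are exactly the kinds of parameters a visceral-state moderator would be expected to shift.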

Keywords: decision making, emotions, prospect theory, visceral factors

Procedia PDF Downloads 128
326 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example

Authors: Masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen

Abstract:

Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used to simulate a membrane bioreactor (MBR), as it integrates biological wastewater treatment with physical separation by membrane filtration. In this study, the MBR, with a useful volume of 12.5 L, was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h and a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% COD removal. Moreover, the MBR generated a treated effluent that complies with the Tunisian limits for discharge into the public sewer set in March 2018. For the nitrogenous compounds, average concentrations of nitrate and nitrite in the permeate reached 0.26±0.3 mg/L and 2.2±2.53 mg/L, respectively. The simulation of the MBR process was performed using SIMBA software v5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model was calibrated using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+, and N-NOx were compared with those measured experimentally. A good prediction was observed for the COD, N-NH4+, and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³, and 3.2 g N/m³ compared to the experimental values of 436.4 g COD/m³, 114.7 g N/m³, and 3 g N/m³, respectively. For validation of the model under dynamic simulation, the results obtained during the second treatment phase of 30 days were used.
It was demonstrated that the model reproduced the conditions accurately, yielding a similar pattern in the variation of the COD concentration. On the other hand, the N-NH4+ concentration was underestimated in the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that ASM models were mainly designed to simulate biological processes in activated sludge systems. In addition, more treatment time may be required for the autotrophic bacteria to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved to simulate nutrient removal over a longer treatment period.
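As a quick plausibility check, the steady-state calibration figures quoted above can be compared directly. The helper below is not part of the SIMBA/ASM3h workflow; it simply computes the relative deviation between simulated and measured concentrations reported in the abstract:

```python
def percent_deviation(simulated, measured):
    """Absolute relative deviation of a simulated value from the measurement, in %."""
    return 100.0 * abs(simulated - measured) / measured

# Steady-state calibration results reported above: (simulated, measured)
pairs = {
    "COD (g COD/m3)":  (467.0, 436.4),
    "N-NH4+ (g N/m3)": (110.2, 114.7),
    "N-NOx (g N/m3)":  (3.2, 3.0),
}
for name, (sim, meas) in pairs.items():
    print(f"{name}: {percent_deviation(sim, meas):.1f}% deviation")
```

All three state variables agree to within about 7%, consistent with the "good prediction" claimed for the calibration phase.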

Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughter wastewater (PSWW), reuse

Procedia PDF Downloads 27
325 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology, designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, using a coarse classification approach that distinguishes between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering stage, we propose a 2-level clustering approach that builds on the state-of-the-art density-based spatial clustering of applications with noise (DBSCAN) algorithm. The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The first level identifies groups of objects with similar velocities and movement patterns; the second level refines the analysis by considering the spatial distribution of these objects, taking the clusters obtained from the first level as its input. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our own collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board.
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
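The 2-level clustering described above can be sketched as two chained DBSCAN passes, first over radial velocity and then over position within each velocity cluster. This is an illustrative reconstruction using scikit-learn; the eps and min_samples values are arbitrary placeholders, not the hyperparameters selected on the View of Delft dataset:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def two_level_cluster(points, eps_vel=0.5, eps_pos=1.5, min_samples=3):
    """points: (N, 3) array of [x, y, radial_velocity] radar detections.
    Level 1 clusters on velocity; level 2 refines each velocity cluster
    by spatial position. Returns one label per point (-1 = noise)."""
    level1 = DBSCAN(eps=eps_vel, min_samples=min_samples).fit_predict(points[:, 2:3])
    labels = np.full(len(points), -1)
    next_label = 0
    for v_id in set(level1) - {-1}:
        idx = np.where(level1 == v_id)[0]
        # Second pass: spatial DBSCAN restricted to this velocity group
        level2 = DBSCAN(eps=eps_pos, min_samples=min_samples).fit_predict(points[idx, :2])
        for s_id in set(level2) - {-1}:
            labels[idx[level2 == s_id]] = next_label
            next_label += 1
    return labels
```

Two targets moving at the same speed in different places end up in separate clusters, which a single position-only DBSCAN pass with a generous eps could wrongly merge.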

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 50
324 Flow-Induced Vibration Marine Current Energy Harvesting Using a Symmetrical Balanced Pair of Pivoted Cylinders

Authors: Brad Stappenbelt

Abstract:

The phenomenon of vortex-induced vibration (VIV) for elastically restrained cylindrical structures in cross-flows is relatively well investigated. The use of this mechanism to harvest energy from marine current and tidal flows is, however, arguably still in its infancy. With relatively few moving components, a flow-induced vibration-based energy conversion device promises low complexity compared to the commonly employed turbine design. Despite the interest in this concept, a practical device has yet to emerge. For optimal system performance, it is desirable to design for a very low mass or mass moment of inertia ratio. The device operating range, in particular, is maximized below the vortex-induced vibration critical point, where an infinite resonant response region is realized. An unfortunate consequence of this requirement is large buoyancy forces that need to be mitigated by gravity-based, suction-caisson, or anchor mooring systems. The focus of this paper is the testing of a novel VIV marine current energy harvesting configuration that utilizes a symmetrical and balanced pair of horizontal pivoted cylinders. The results of several years of experimental investigation in the University of Wollongong fluid mechanics laboratory towing tank are analyzed and presented. A reduced velocity test range of 0 to 60 was covered across a large array of device configurations. In particular, power take-off damping ratios spanning from 0.044 to critical damping were examined in order to determine the optimal conditions and hence the maximum device energy conversion efficiency. The experiments revealed acceptable energy conversion efficiencies of around 16% and desirable low flow-speed operating ranges compared to traditional turbine technology.
The potentially out-of-phase spanwise VIV cells on each arm of the device synchronized naturally, as no decrease in amplitude response was observed and the energy conversion efficiencies were comparable to the single-cylinder arrangement. In addition to the spatial design benefits of the horizontal device orientation, the main advantage demonstrated by the symmetrical horizontal configuration is that it allows resonant response conditions over a large velocity range without excessive buoyancy. The novel configuration shows clear promise in overcoming many of the practical implementation issues of flow-induced vibration marine current energy harvesting.
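For readers unfamiliar with the non-dimensional quantities quoted above, a minimal sketch of the standard VIV definitions follows. These are textbook conventions, not code from the study, and the paper's exact efficiency reference area may differ:

```python
def reduced_velocity(flow_speed, natural_freq, diameter):
    """V_r = U / (f_n * D): non-dimensional flow speed used to span VIV test ranges."""
    return flow_speed / (natural_freq * diameter)

def conversion_efficiency(power_out, rho, flow_speed, diameter, length):
    """Harvested power over the kinetic power flux through the cylinder's
    projected frontal area: P / (0.5 * rho * U^3 * D * L)."""
    return power_out / (0.5 * rho * flow_speed**3 * diameter * length)
```

Covering reduced velocities from 0 to 60 thus means towing each configuration from rest up to sixty times the product of its natural frequency and cylinder diameter.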

Keywords: flow-induced vibration, vortex-induced vibration, energy harvesting, tidal energy

Procedia PDF Downloads 127
323 Effect of Synbiotics on Rats' Intestinal Microbiota

Authors: Da Yoon Yu, Jeong A. Kim, In Sung Kim, Yeon Hee Hong, Jae Young Kim, Sang Suk Lee, Sung Chan Kim, So Hui Choe, In Soon Choi, Kwang Keun Cho

Abstract:

The present study was conducted to identify the effects of synbiotics composed of lactic acid (LA) bacteria (LAB) and sea tangle on rats' intestinal microorganisms, as well as their anti-obesity effects. The experiment was conducted for six weeks using 8-week-old male rats as experimental animals, with a design of six treatment groups of four repetitions, using three rats per repetition. The treatment groups were a normal-fat diet control (NFC), a high-fat (HF) diet control (HFC), a prebiotic 0% treatment (HF+LA+sea tangle 0%, ST0), a prebiotic 5% treatment (HF+LA+sea tangle 5%, ST5), a prebiotic 10% treatment (HF+LA+sea tangle 10%, ST10), and a prebiotic 15% treatment (HF+LA+sea tangle 15%, ST15), so that various levels of prebiotics could be tested. According to the results, the NFC group showed the highest daily weight gain (22.34 g) and the ST0 group the lowest (19.41 g). However, weight gain over the entire experimental period was highest in the HFC group (475.73 g) and lowest in the ST0 group (454.23 g). Feed efficiency was highest in the HFC group (0.20). Treatment with synbiotics composed of LAB and sea tangle suppressed the weight increases due to the HF diet and reduced feed efficiency. Intestinal microorganisms were identified through pyrosequencing; the phyla Firmicutes (approximately 60%) and Bacteroidetes (approximately 30%) together accounted for approximately 90% or more of the intestinal microorganisms in all treatment groups, indicating that these bacteria dominate in the intestines. Firmicutes, which is associated with weight gain, accounted for 64.96% of microorganisms in the NFC group, 75.32% in the HFC group, 59.51% in the ST0 group, 61.29% in the ST5 group, 49.91% in the ST10 group, and 39.65% in the ST15 group.
Therefore, Firmicutes showed the highest share in the HFC group, which showed high weight gains, and the lowest share in the group treated with the highest level of mixed synbiotics composed of LAB and sea tangle. Bacteroidetes, which is associated with inhibition of weight gain, accounted for 32.12% of microorganisms in the NFC group, 21.57% in the HFC group, 37.66% in the ST0 group, 34.92% in the ST5 group, 44.46% in the ST10 group, and 53.22% in the ST15 group. The share of Bacteroidetes was therefore lowest in the HFC group, with no added synbiotics, and increased with the level of synbiotic treatment. Changes in blood components were not significantly different among the groups, and SCFA yields were higher in the groups treated with synbiotics than in those without. The present study showed that the supply of synbiotics composed of LAB and sea tangle increased feed intake but led to weight losses, and that their intake had anti-obesity effects through decreases in Firmicutes, microorganisms associated with weight gain, and increases in Bacteroidetes, microorganisms associated with weight loss. Therefore, synbiotics composed of LAB and sea tangle are considered to have the potential to prevent metabolic disorders in rats.
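The phylum shares reported above can be condensed into the Firmicutes/Bacteroidetes ratio, a commonly used (though debated) obesity-associated marker. The snippet below simply re-tabulates the abstract's figures; the ratio itself is not computed in the abstract:

```python
# Phylum shares (%) per treatment group, as reported in the abstract
firmicutes = {"NFC": 64.96, "HFC": 75.32, "ST0": 59.51,
              "ST5": 61.29, "ST10": 49.91, "ST15": 39.65}
bacteroidetes = {"NFC": 32.12, "HFC": 21.57, "ST0": 37.66,
                 "ST5": 34.92, "ST10": 44.46, "ST15": 53.22}

def fb_ratio(group):
    """Firmicutes/Bacteroidetes ratio for one treatment group."""
    return firmicutes[group] / bacteroidetes[group]
```

The ratio falls from roughly 3.5 in the high-fat control to below 1 in the 15% sea tangle group, which is the trend the anti-obesity conclusion rests on.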

Keywords: bacteroidetes, firmicutes, intestinal microbiota, lactic acid, sea tangle, synbiotics

Procedia PDF Downloads 375
322 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Körber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to bring several challenges for designers coping with manufacturing defects on site. Wrinkling, e.g., is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges arise from the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the processing conditions that yield optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and large computational times are notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations for fabric material handling and forming processes.
This method can therefore support the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components, enabling designers to virtually develop, test, and optimize their processes using either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
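The key MPM ingredient that relaxes the FE time-step restriction is the transfer of particle state to a background grid at every step. Below is a deliberately minimal 1-D particle-to-grid (P2G) sketch with linear shape functions, for illustration only; a fabric model would add stress updates, bending stiffness, and contact:

```python
import numpy as np

def p2g(particle_x, particle_mass, particle_vel, n_nodes, dx):
    """One particle-to-grid (P2G) transfer with linear shape functions:
    grid mass and momentum are accumulated from nearby particles, then
    divided to recover grid velocity. The grid is discarded and rebuilt
    each step, so large deformations never tangle a mesh."""
    grid_mass = np.zeros(n_nodes)
    grid_mom = np.zeros(n_nodes)
    for x, m, v in zip(particle_x, particle_mass, particle_vel):
        i = int(x // dx)            # left node of the particle's cell
        w_right = x / dx - i        # linear weight toward node i+1
        for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
            grid_mass[node] += w * m
            grid_mom[node] += w * m * v
    grid_vel = np.divide(grid_mom, grid_mass,
                         out=np.zeros(n_nodes), where=grid_mass > 0)
    return grid_mass, grid_vel
```

Mass and momentum are conserved by construction, which is part of why the grid-based update tolerates larger time steps than explicit FE on a distorting mesh.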

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 154
321 Experimental Uniaxial Tensile Characterization of One-Dimensional Nickel Nanowires

Authors: Ram Mohan, Mahendran Samykano, Shyam Aravamudhan

Abstract:

Metallic nanowires with sub-micron and hundreds-of-nanometer diameters have a diversity of applications in nano/micro-electromechanical systems (NEMS/MEMS). Characterizing the mechanical properties of such sub-micron and nanoscale metallic nanowires is tedious and requires sophisticated, careful experimentation within high-powered microscopy systems (scanning electron microscope (SEM), atomic force microscope (AFM)). Also needed are nanoscale devices for placing the nanowires and loading them under the intended conditions; obtaining load-deflection data during deformation within the high-powered microscopy environment poses significant challenges. Even picking up the grown nanowires and placing them correctly within a nanoscale loading device is not an easy task. Experimental mechanical characterization of such nanowires is therefore still very limited. Various techniques at different levels of fidelity, resolution, and induced error have been attempted by materials science and nanomaterials researchers. Determining the load and deflection within the nanoscale devices also poses a significant problem, so the state of the art is still in its infancy. All these factors result in, and are seen in, the wide differences among the characterization curves and properties reported in the current literature. In this paper, we discuss and present our experimental method, results, and discussion of uniaxial tensile loading and the development of the subsequent stress-strain characteristic curves for nickel nanowires. Nickel nanowires in the diameter range of 220-270 nm were grown in our laboratory via electrodeposition, a solution-based template method followed in the present work for growing 1-D nickel nanowires.
Process variables such as the presence and intensity of a magnetic field and varying electrical current density during the electrodeposition process were found to influence the morphological and physical characteristics of the grown nanowires, including crystal orientation and size. To further understand how the electrodeposition process variables and the resulting structural features of our grown nickel nanowires correlate with their mechanical properties, careful experiments within a scanning electron microscope (SEM) were conducted. Details of the uniaxial tensile characterization, the testing methodology, the nanoscale testing device, the load-deflection characteristics, microscopy images of failure progression, and the subsequent stress-strain curves are discussed and presented.
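Converting recorded load-deflection data into a stress-strain curve reduces to the usual engineering definitions. The helper below is an illustrative sketch in convenient nanoscale units, not the authors' analysis code:

```python
import math

def engineering_stress(load_nN, diameter_nm):
    """Engineering stress from an axial load (nN) on a circular
    cross-section of the given diameter (nm). 1 nN/nm^2 == 1 GPa,
    so the result is directly in GPa."""
    area_nm2 = math.pi * (diameter_nm / 2.0) ** 2
    return load_nN / area_nm2

def engineering_strain(elongation, gauge_length):
    """Engineering strain: elongation over initial gauge length (same units)."""
    return elongation / gauge_length
```

Working in nN and nm is convenient at this scale: for a 250 nm diameter wire, a load of a few tens of thousands of nN already corresponds to stresses on the order of a gigapascal.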

Keywords: uniaxial tensile characterization, nanowires, electrodeposition, stress-strain, nickel

Procedia PDF Downloads 382
320 Relationship between Structure of Some Nitroaromatic Pollutants and Their Degradation Kinetic Parameters in UV-VIS/TIO2 System

Authors: I. Nitoi, P. Oancea, M. Raileanu, M. Crisan, L. Constantin, I. Cristea

Abstract:

Hazardous organic compounds such as nitroaromatics are frequently found in effluents discharged by the chemical and petroleum industries. Due to their bio-refractory character and high chemical stability, they cannot be efficiently removed by classical biological or physical-chemical treatment processes. In the past decades, semiconductor photocatalysis has frequently been applied for the advanced degradation of toxic pollutants. Among the various semiconductors, titania is a widely studied photocatalyst due to its chemical inertness, low cost, photostability, and nontoxicity. Many attempts have been made to improve the optical absorption and photocatalytic activity of TiO2; one feasible approach consists of doping the oxide semiconductor with a metal. The degradation of dinitrobenzene (DNB) and dinitrotoluene (DNT) from aqueous solution under UVA-VIS irradiation using heavy-metal-doped titania (0.5% Fe, 1% Co, 1% Ni) was investigated. The photodegradation experiments were carried out in a Heraeus laboratory-scale UV-VIS reactor equipped with a medium-pressure mercury lamp emitting in the range 320-500 nm. Solutions with (0.34-3.14) x 10-4 M pollutant content were photo-oxidized under the following working conditions: pH = 5-9; photocatalyst dose = 200 mg/L; irradiation time = 30-240 minutes. Prior to irradiation, the photocatalyst powder was added to the samples, and the solutions were bubbled with air (50 L/hour) in the dark for 30 min. The influence of dopant type, pH, pollutant structure, and initial pollutant concentration on the degradation efficiency was evaluated in order to establish the optimal working conditions that assure advanced degradation of the substrate. The kinetics of nitroaromatics degradation and organic nitrogen mineralization were assessed, and pseudo-first-order rate constants were calculated. The Fe-doped photocatalyst with the lowest metal content (0.5 wt.%) showed considerably better behaviour with respect to pollutant degradation than the Co- and Ni-doped (1 wt.%) titania catalysts.
For the same working conditions, the degradation efficiency was higher for DNT than for DNB, in accordance with their calculated adsorption constants (Kad), taking into account that the degradation process occurs on the catalyst surface following a Langmuir-Hinshelwood model. The presence of the methyl group in the structure of DNT allows its degradation by both oxidative and reductive pathways, while DNB is converted only by the reductive route, which also explains the higher DNT degradation efficiency. For the highest pollutant concentration tested (3 x 10-4 M), the optimum working conditions (0.5 wt.% Fe-doped TiO2 loading of 200 mg/L, pH = 7, and 240 min irradiation time) assure advanced nitroaromatics degradation (ηDNB = 89%, ηDNT = 94%) and organic nitrogen mineralization (ηDNB = 44%, ηDNT = 47%).
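Pseudo-first-order rate constants of the kind calculated above come from fitting ln(C0/C) against irradiation time, since C(t) = C0·exp(-kt) implies ln(C0/C) = kt. Below is a minimal least-squares sketch using synthetic data rather than the paper's measurements:

```python
import math

def pseudo_first_order_k(times_min, concentrations):
    """Least-squares slope of ln(C0/C) versus t: the apparent
    pseudo-first-order rate constant (1/min)."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    t_mean = sum(times_min) / len(times_min)
    y_mean = sum(y) / len(y)
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times_min, y))
    den = sum((t - t_mean) ** 2 for t in times_min)
    return num / den
```

Under a Langmuir-Hinshelwood surface reaction, the rate k_r·Kad·C/(1 + Kad·C) reduces to this pseudo-first-order form at low concentration, which is why the apparent constants track the adsorption constants Kad.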

Keywords: hazardous organic compounds, irradiation, nitroaromatics, photocatalysis

Procedia PDF Downloads 285
319 Challenge in Teaching Physics during the Pandemic: Another Way of Teaching and Learning

Authors: Edson Pierre, Gustavo de Jesus Lopez Nunez

Abstract:

The objective of this work is to analyze how physics can be taught remotely through the use of platforms and software to attract the attention of 2nd-year high school students at Colégio Cívico Militar Professor Carmelita Souza Dias, and to point out how remote teaching can serve as a teaching-learning strategy during a period of social distancing. Teaching physics has long been a challenge for teachers and students, and the great difficulty of teaching and learning the subject is common knowledge. The challenge increased in 2020 and 2021 with the impact of the new coronavirus (SARS-CoV-2) pandemic and its variants, which affected the entire world. With these changes, a new teaching modality emerged: remote teaching. It brought new challenges, one of which was promoting research experiences at a distance, especially in physics teaching, since learning difficulties exist and it is often impossible for students to relate the theory observed in class to the reality that surrounds them. Physics teaching in schools faces difficulties that make it increasingly less attractive for young people to choose this profession, even though the study of physics is very important: it puts students in front of concrete, real situations that physical principles can explain, helping them understand nature and nurturing a taste for science.
The use of new platforms and software can help. PhET Interactive Simulations from the University of Colorado Boulder, for example, is a virtual laboratory with numerous simulations of scientific experiments that improve the understanding of the content taught in a practical way, facilitating students' learning and absorption of content. As a simple, practical, and free simulation tool, it attracts more attention from students, leading them to acquire greater knowledge of the subject studied. A quiz can likewise bring a healthy competitiveness to students, generating knowledge and interest in the topics covered. The present study takes the Theory of Social Representations as its theoretical reference, examining the content and process of constructing the representations of teachers, the subjects of our investigation, on the evaluation of teaching and learning processes, through a qualitative methodology. The results of this work show that remote teaching was indeed a very important strategy for the process of teaching and learning physics in the 2nd year of high school, as it provided greater interaction between teacher and student. The teacher also plays a fundamental role, since technology is increasingly present in the educational environment and the teacher is the main protagonist of this process.

Keywords: physics teaching, technologies, remote learning, pandemic

Procedia PDF Downloads 31