213 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, an accurate and economical method for monitoring nitrites in the environment is desirable. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and used to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data are input to the trained model, which predicts the nitrite concentration of the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, with each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of transmitted light. This simple optical design allows the absorbance of the sample to be measured at the three wavelengths. To train the regression model, absorbances of nitrite ions and their combinations with various interfering ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression algorithm models, which are trained and evaluated for high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
Keywords: optical sensor, regression model, nitrites, water quality
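The calibration described here, mapping absorbances at 295, 310, and 357 nm to a nitrite concentration, can be sketched as a small multivariate regression. The sketch below is illustrative only: the data, sensitivity coefficients, and choice of ridge regression are assumptions for demonstration, not the authors' actual training set or algorithm.

```python
# Illustrative sketch: a multivariate regression mapping absorbances at
# 295/310/357 nm to nitrite concentration. Training data are synthetic; in the
# abstract they come from spectrophotometer measurements of nitrite plus
# interfering ions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
conc = rng.uniform(0.1, 100.0, n)              # nitrite concentration, ppm
interferent = rng.uniform(0.0, 5.0, n)         # hypothetical interfering ion, ppm

# Beer-Lambert-like response: each wavelength has a different sensitivity to
# nitrite and to the interferent (these coefficients are made up).
A = np.column_stack([
    0.012 * conc + 0.004 * interferent,        # absorbance at 295 nm
    0.018 * conc + 0.001 * interferent,        # absorbance at 310 nm
    0.007 * conc + 0.006 * interferent,        # absorbance at 357 nm
]) + rng.normal(0, 1e-3, (n, 3))               # detector noise

X_tr, X_te, y_tr, y_te = train_test_split(A, conc, test_size=0.25, random_state=0)
model = Ridge(alpha=1e-3).fit(X_tr, y_tr)      # regression over the 3 wavelengths

pred = model.predict(X_te)
rel_err = np.mean(np.abs(pred - y_te) / y_te) * 100
print(f"average relative error: {rel_err:.2f}%")
```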
Procedia PDF Downloads 72
212 High-Resolution Facial Electromyography in Freely Behaving Humans
Authors: Lilah Inzelberg, David Rand, Stanislav Steinberg, Moshe David Pur, Yael Hanein
Abstract:
Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context and vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic. Smiles are associated with happiness and greeting on one hand and anger or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns in a non-interfering setting offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to laboratory settings, and limited in time and resolution. Here we present a wearable and non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array, specially designed for surface facial EMG. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females) participated in the study. Surface EMG arrays were adhered to the participants' left and right cheeks. Participants were instructed to imitate three facial expressions: closing the eyes, wrinkling the nose, and smiling voluntarily, and to watch a funny video while their EMG signals were recorded. We focused on muscles associated with 'enjoyment', 'social' and 'masked' smiles; three categories with distinct social meanings. We developed a customized independent component analysis algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and the Levator labii superioris muscles was demonstrated from voluntary expressions. Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk-free mapping performance, along with excellent user convenience, opens new opportunities for affective processing and objective evaluation of facial expressivity, objective psychological and neurological assessment, as well as gaming, virtual reality, bio-feedback and brain-machine interface applications.
Keywords: affective expressions, affective processing, facial EMG, high-resolution electromyography, independent component analysis, wireless electrodes
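The unmixing step at the core of this approach can be illustrated with a generic independent component analysis on synthetic multichannel data. The authors use a customized ICA; the sketch below only shows the standard FastICA step, and the channel count, burst timings, and mixing matrix are made up.

```python
# Minimal sketch of the generic ICA step: unmix a multichannel surface-EMG-like
# recording into independent sources with scikit-learn's FastICA. Data are
# synthetic bursts, not real facial EMG.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
fs = 2000                                       # sampling rate, Hz (assumed)
t = np.arange(0, 5, 1 / fs)                     # 5 s record

# Three hypothetical muscle sources: bursts of noise (EMG-like envelopes).
def burst(onset, dur):
    env = ((t > onset) & (t < onset + dur)).astype(float)
    return env * rng.normal(0, 1, t.size)

sources = np.vstack([burst(0.5, 1.0), burst(2.0, 0.8), burst(3.5, 1.2)])

# 16-channel electrode array: each channel is a different mixture of sources.
mixing = rng.uniform(0.1, 1.0, (16, 3))
recordings = mixing @ sources + 0.05 * rng.normal(0, 1, (16, t.size))

ica = FastICA(n_components=3, random_state=0)
estimated = ica.fit_transform(recordings.T).T    # (3, n_samples) unmixed sources
print(estimated.shape, ica.mixing_.shape)        # mixing_ maps sources -> channels
```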
Procedia PDF Downloads 247
211 Isolation and Characterization of Chromium Tolerant Staphylococcus aureus from Industrial Wastewater and Their Potential Use to Bioremediate Environmental Chromium
Authors: Muhammad Tariq, Muhammad Waseem, Muhammad Hidayat Rasool
Abstract:
Objectives: Chromium, with its great economic importance in industrial use, is a major metal pollutant of the environment. Chromium is used in different industries for various applications such as textiles, dyeing and pigmentation, wood preservation, pulp and paper manufacturing, chrome plating, steel and tanning. The release of untreated chromium in industrial effluents poses a serious threat to the environment and human health; therefore, the current study was designed to isolate chromium-tolerant Staphylococcus aureus for removal of chromium prior to its final discharge into the environment, owing to its cost-effectiveness and advantages over physical and chemical methods. Methods: Wastewater samples were collected from the discharge points of different industries. Heavy metal analysis by atomic absorption spectrophotometer and microbiological analyses such as total viable count, total coliform, fecal coliform and Escherichia coli were conducted. Staphylococcus aureus was identified through Gram staining, the bioMérieux VITEK 2 microbial identification system and 16S rRNA gene amplification by polymerase chain reaction. Optimum growth conditions with respect to temperature, pH and salt concentration, the effect of chromium on the growth of the bacterium, resistance to other heavy metal ions, minimum inhibitory concentration and chromium uptake ability of Staphylococcus aureus strain K1 were determined by spectrophotometer. The antibiotic sensitivity pattern was also determined by the disc diffusion method. Furthermore, chromium uptake was confirmed by Fourier transform infrared spectroscopy (FTIR) and a scanning electron microscope equipped with an Oxford Energy Dispersive X-ray (EDX) microanalysis system. Results: The results showed that the optimum temperature was 35°C, the optimum pH was 8.0 and the optimum salt concentration was 0.5% for the growth of Staphylococcus aureus K1. The maximum chromium uptake ability of the bacterium was 20 mM, higher than for the other heavy metal ions. The antibiotic sensitivity pattern revealed that the Staphylococcus aureus strain was vancomycin and methicillin sensitive. Non-hemolytic activity on blood agar and a negative coagulase reaction showed that it was non-pathogenic. Furthermore, the growth of the bacterium decreased in the presence of chromium, and maximum chromium uptake was observed at the optimum growth conditions. Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM) and Energy Dispersive X-ray (EDX) analysis confirmed chromium uptake by Staphylococcus aureus K1. Conclusion: The study revealed that Staphylococcus aureus K1 has the potential to bioremediate chromium toxicity from wastewater. This biological treatment is becoming increasingly important because of its advantages over physical and chemical methods in protecting the environment and human health.
Keywords: wastewater, staphylococcus, chromium, bioremediation
Procedia PDF Downloads 170
210 Colocalization Analysis to Understand Yttrium Uptake in Saxifraga paniculata Using Complementary Imaging Technics
Authors: Till Fehlauer, Blanche Collin, Bernard Angeletti, Andrea Somogyi, Claire Lallemand, Perrine Chaurand, Cédric Dentant, Clement Levard, Jerome Rose
Abstract:
Over the last decades, yttrium (Y) has gained importance in high-tech applications. It is an essential part of alloys and compounds used for lasers, displays, or cell phones, for example. Due to its chemical similarities with the lanthanides, Y is often considered a rare earth element (REE). Despite their increased usage, the environmental behavior of REEs remains poorly understood. Especially regarding their interactions with plants, many uncertainties exist. On the one hand, Y is known to have a negative effect on root development and germination, but on the other hand, it appears to promote plant growth at low concentrations. In order to understand these phenomena, a precise knowledge is necessary about how Y is absorbed by the plant and how it is handled once inside the organism. Contradictory studies exist, stating that due to a similar ionic radius, Y and the other REEs might be absorbed through Ca²⁺-channels, while others suspect that Y has a shared pathway with Al³⁺. In this study, laser ablation coupled ICP-MS, and synchrotron-based micro-X-ray fluorescence (µXRF, beamline Nanoscopium, SOLEIL, France) have been used in order to localize Y within the plant tissue and identify associated elements. The plant used in this study is Saxifraga paniculata, a rugged alpine plant that has shown an affinity for Y in previous studies (in prep.). Furthermore, Saxifraga paniculata performs guttation, which means that it possesses phloem sap secreting openings on the leaf surface that serve to regulate root pressure. These so-called hydathodes could provide special insights in elemental transport in plants. The plants have been grown on Y doped soil (500mg/kg DW) for four months. The results showed that Y was mainly concentrated in the roots of Saxifraga paniculata (260 ± 85mg/kg), and only a small amount was translocated to the leaves (10 ± 7.8mg/kg). µXRF analysis indicated that within the root transects, the majority of Y remained in the epidermis and hardly penetrated the stele. Laser ablation coupled ICP-MS confirmed this finding and showed a positive correlation in the roots between Y, Fe, Al, and to a lesser extent Ca. In the stem transect, Y was mainly detected in a hotspot of approximately 40µm in diameter situated in the endodermis area. Within the stem and especially in the hotspot, Y was highly colocalized with Al and Fe. Similar-sized Y hotspots have been detected in/on the leaves. All of them were strongly colocalized with Al and Fe, except for those situated within the hydathodes, which showed no colocalization with any of the measured elements. Accordingly, a relation between Y and Ca during root uptake remains possible, whereas a correlation to Fe and Al appears to be dominant in the aerial parts, suggesting common storage compartments, the formation of complexes, or a shared pathway during translocation.Keywords: laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), Phytoaccumulation, Rare earth elements, Saxifraga paniculata, Synchrotron-based micro-X-ray fluorescence, Yttrium
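Colocalization between elemental maps (e.g., Y versus Fe, Al, or Ca from a µXRF scan) is often quantified with a per-pixel correlation coefficient. The sketch below illustrates that idea on synthetic maps; the map shapes, hotspot, and choice of Pearson correlation are assumptions, not the study's actual processing chain.

```python
# Illustrative sketch of quantifying colocalization between two elemental maps
# with a per-pixel Pearson correlation. Maps are synthetic; real maps would be
# exported from the beamline / LA-ICP-MS software.
import numpy as np

rng = np.random.default_rng(2)
shape = (128, 128)

# A hypothetical hotspot shared by Y and Fe, absent from Ca.
yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
hotspot = np.exp(-(((yy - 60) ** 2 + (xx - 70) ** 2) / (2 * 8 ** 2)))

map_Y  = hotspot + 0.05 * rng.random(shape)
map_Fe = 0.8 * hotspot + 0.05 * rng.random(shape)
map_Ca = 0.05 * rng.random(shape)                      # uncorrelated background

def pearson(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

print("Y vs Fe:", round(pearson(map_Y, map_Fe), 3))    # strong colocalization
print("Y vs Ca:", round(pearson(map_Y, map_Ca), 3))    # weak/no colocalization
```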
Procedia PDF Downloads 151
209 Influence of Ride Control Systems on the Motions Response and Passenger Comfort of High-Speed Catamarans in Irregular Waves
Authors: Ehsan Javanmardemamgheisi, Javad Mehr, Jason Ali-Lavroff, Damien Holloway, Michael Davis
Abstract:
During the last decades, a growing interest in faster and more efficient waterborne transportation has led to the development of high-speed vessels for both commercial and military applications. To satisfy this global demand, a wide variety of arrangements of high-speed crafts have been proposed by designers. Among them, high-speed catamarans have proven themselves to be a suitable Roll-on/Roll-off configuration for carrying passengers and cargo due to widely spaced demi hulls, a wide deck zone, and a high ratio of deadweight to displacement. To improve passenger comfort and crew workability and enhance the operability and performance of high-speed catamarans, mitigating the severity of motions and structural loads using Ride Control Systems (RCS) is essential.In this paper, a set of towing tank tests was conducted on a 2.5 m scaled model of a 112 m Incat Tasmania high-speed catamaran in irregular head seas to investigate the effect of different ride control algorithms including linear and nonlinear versions of the heave control, pitch control, and local control on motion responses and passenger comfort of the full-scale ship. The RCS included a centre bow-fitted T-Foil and two transom-mounted stern tabs. All the experiments were conducted at the Australian Maritime College (AMC) towing tank at a model speed of 2.89 m/s (37 knots full scale), a modal period of 1.5 sec (10 sec full scale) and two significant wave heights of 60 mm and 90 mm, representing full-scale wave heights of 2.7 m and 4 m, respectively. Spectral analyses were performed using Welch’s power spectral density method on the vertical motion time records of the catamaran model to calculate heave and pitch Response Amplitude Operators (RAOs). Then, noting that passenger discomfort arises from vertical accelerations and that the vertical accelerations vary at different longitudinal locations within the passenger cabin due to the variations in amplitude and relative phase of the pitch and heave motions, the vertical accelerations were calculated at three longitudinal locations (LCG, T-Foil, and stern tabs). Finally, frequency-weighted Root Mean Square (RMS) vertical accelerations were calculated to estimate Motion Sickness Dose Value (MSDV) of the ship based on ISO 2631-recommendations. It was demonstrated that in small seas, implementing a nonlinear pitch control algorithm reduces the peak pitch motions by 41%, the vertical accelerations at the forward location by 46%, and motion sickness at the forward position by around 20% which provides great potential for further improvement in passenger comfort, crew workability, and operability of high-speed catamarans.Keywords: high-speed catamarans, ride control system, response amplitude operators, vertical accelerations, motion sickness, irregular waves, towing tank tests.
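The post-processing chain described above (Welch spectra of the motion records, RAOs, and an ISO 2631-style motion sickness dose value, MSDV = sqrt(∫ a_w² dt)) can be sketched as follows. The signals, sampling rate, and the stand-in for the ISO frequency weighting are illustrative assumptions, not the experimental data or the exact analysis used in the study.

```python
# Sketch of the post-processing chain: Welch spectra of measured wave and heave
# signals, an RAO estimate from the spectral ratio, and MSDV = sqrt(integral of
# a_w^2 dt) per ISO 2631-1. Signals are synthetic, and the ISO frequency
# weighting is only represented by the placeholder array `acc_weighted`.
import numpy as np
from scipy.signal import welch

fs = 200.0                                       # sampling rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)                    # 10-minute record
wave = 0.03 * np.sin(2 * np.pi * 0.67 * t) + 0.005 * np.random.randn(t.size)
heave = 0.02 * np.sin(2 * np.pi * 0.67 * t + 0.4) + 0.003 * np.random.randn(t.size)

f_w, S_wave = welch(wave, fs=fs, nperseg=4096)   # wave elevation spectrum
f_h, S_heave = welch(heave, fs=fs, nperseg=4096) # heave motion spectrum

# RAO as the square root of the response-to-excitation spectral ratio.
mask = S_wave > 1e-9
rao_heave = np.sqrt(S_heave[mask] / S_wave[mask])

# MSDV from a (here: unweighted stand-in) vertical acceleration record.
acc_weighted = np.gradient(np.gradient(heave, 1 / fs), 1 / fs)
msdv = np.sqrt(np.sum(acc_weighted ** 2) / fs)
print(f"peak heave RAO ~ {rao_heave.max():.2f}, MSDV ~ {msdv:.2f} m/s^1.5")
```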
Procedia PDF Downloads 84
208 Single Cell and Spatial Transcriptomics: A Beginners Viewpoint from the Conceptual Pipeline
Authors: Leo Nnamdi Ozurumba-Dwight
Abstract:
Messenger ribonucleic acid (mRNA) molecules encode proteins. These protein-encoding mRNA molecules (which collectively constitute the transcriptome), when analyzed by RNA sequencing (RNAseq), unveil the nature of gene expression in the RNA. The obtained gene expression provides clues to cellular traits and their dynamics in presentations. These can be studied in relation to function and responses. RNAseq is a practical concept in genomics, as it enables detection and quantitative analysis of mRNA molecules. Single cell and spatial transcriptomics both present varying avenues for exposition of the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, auto-immune diseases and hematopoietic-based diseases, among others, from investigated biological tissue samples. Single cell transcriptomics helps conduct a direct assessment of each building unit of tissues (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNAseq), which enables high-throughput gene expression studies. However, this technique generates gene expression data for several cells that lack the cells' positional coordinates within the tissue. As science develops, the use of complementary pre-established tissue reference maps built with molecular and bioinformatics techniques has innovatively sprung forth and is now used to resolve this setback, producing both levels of data in one shot of scRNAseq analysis. This is an emerging conceptual approach in methodology for integrative and progressively dependable transcriptomics analysis. This can support in-situ fashioned analysis for better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and biomarkers for therapeutic targets in drug development, and exposit the nature of cell-to-cell interactions. These are also vital genomic signatures and characterizations for clinical applications. Over the past decades, RNAseq has generated a wide array of information that is igniting bespoke breakthroughs and innovations in biomedicine. On the other side, spatial transcriptomics is tissue-level based and is utilized to study biological specimens having heterogeneous features. It exposits the gross identity of investigated mammalian tissues, which can then be used to study cell differentiation, track cell line trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to build the genomic signatures that will be assessed from the single cells in the tissue sample. Given these two presented approaches to RNA transcriptomics study in varying quantities of cell lines, with avenues for appropriate resolutions, both approaches have made the study of gene expression from mRNA molecules interesting, progressive and developmental, helping to tackle health challenges head-on.
Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression.
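A minimal sketch of the generic expression-matrix workflow mentioned above (normalization, log transform, dimensionality reduction, clustering of a cells × genes count matrix) is given below. The counts are synthetic and the steps are heavily simplified; production analyses typically rely on dedicated toolkits such as Scanpy or Seurat and include many additional quality-control steps.

```python
# Minimal sketch of a generic scRNAseq workflow on a cells x genes count matrix:
# library-size normalization, log transform, PCA, clustering. Counts are
# synthetic and the pipeline is deliberately simplified.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
counts = rng.poisson(lam=rng.uniform(0.1, 3.0, (500, 2000)))  # 500 cells, 2000 genes

libsize = counts.sum(axis=1, keepdims=True)
norm = counts / np.maximum(libsize, 1) * 1e4          # counts per 10k
logexpr = np.log1p(norm)                              # log(1 + x) transform

pcs = PCA(n_components=20, random_state=0).fit_transform(logexpr)
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pcs)
print("cells per cluster:", np.bincount(clusters))
```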
Procedia PDF Downloads 124
207 Corrosion Protective Coatings in Machines Design
Authors: Cristina Diaz, Lucia Perez, Simone Visigalli, Giuseppe Di Florio, Gonzalo Fuentes, Roberto Canziani, Paolo Gronchi
Abstract:
During the last 50 years, the selection of materials is one of the main decisions in machine design for different industrial applications. It is due to numerous physical, chemical, mechanical and technological factors to consider in it. Corrosion effects are related with all of these factors and impact in the life cycle, machine incidences and the costs for the life of the machine. Corrosion affects the deterioration or destruction of metals due to the reaction with the environment, generally wet. In food industry, dewatering industry, concrete industry, paper industry, etc. corrosion is an unsolved problem and it might introduce some alterations of some characteristics in the final product. Nowadays, depending on the selected metal, its surface and its environment of work, corrosion prevention might be a change of metal, use a coating, cathodic protection, use of corrosion inhibitors, etc. In the vast majority of the situations, use of a corrosion resistant material or in its defect, a corrosion protection coating is the solution. Stainless steels are widely used in machine design, because of their strength, easily cleaned capacity, corrosion resistance and appearance. Typical used are AISI 304 and AISI 316. However, their benefits don’t fit every application, and some coatings are required against corrosion such as some paintings, galvanizing, chrome plating, SiO₂, TiO₂ or ZrO₂ coatings, etc. In this work, some coatings based in a bilayer made of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium or Titanium-Zirconium, have been developed used magnetron sputtering configuration by PVD (Physical Vapor Deposition) technology, for trying to reduce corrosion effects on AISI 304, AISI 316 and comparing it with Titanium alloy substrates. Ti alloy display exceptional corrosion resistance to chlorides, sour and oxidising acidic media and seawater. In this study, Ti alloy (99%) has been included for comparison with coated AISI 304 and AISI 316 stainless steel. Corrosion tests were conducted by a Gamry Instrument under ASTM G5-94 standard, using different electrolytes such as tomato salsa, wine, olive oil, wet compost, a mix of sand and concrete with water and NaCl for testing corrosion in different industrial environments. In general, in all tested environments, the results showed an improvement of corrosion resistance of all coated AISI 304 and AISI 316 stainless steel substrates when they were compared to uncoated stainless steel substrates. After that, comparing these results with corrosion studies on uncoated Ti alloy substrate, it was observed that in some cases, coated stainless steel substrates, reached similar current density that uncoated Ti alloy. Moreover, Titanium-Zirconium and Titanium-Tantalum coatings showed for all substrates in study including coated Ti alloy substrates, a reduction in current density more than two order in magnitude. As conclusion, Ti-Ta, Ti-Zr, Ti-Nb and Ti-Hf coatings have been developed for improving corrosion resistance of AISI 304 and AISI 316 materials. After corrosion tests in several industry environments, substrates have shown improvements on corrosion resistance. Similar processes have been carried out in Ti alloy (99%) substrates. Coated AISI 304 and AISI 316 stainless steel, might reach similar corrosion protection on the surface than uncoated Ti alloy (99%). Moreover, coated Ti Alloy (99%) might increase its corrosion resistance using these coatings.Keywords: coatings, corrosion, PVD, stainless steel
Procedia PDF Downloads 158
206 Modern Technology-Based Methods in Neurorehabilitation for Social Competence Deficit in Children with Acquired Brain Injury
Authors: M. Saard, A. Kolk, K. Sepp, L. Pertens, L. Reinart, C. Kööp
Abstract:
Introduction: Social competence is often impaired in children with acquired brain injury (ABI), but evidence-based rehabilitation for social skills has remained undeveloped. Modern technology-based methods create effective and safe learning environments for pediatric social skills remediation. The aim of the study was to implement our structured model of neuro rehab for socio-cognitive deficit using multitouch-multiuser tabletop (MMT) computer-based platforms and virtual reality (VR) technology. Methods: 40 children aged 8-13 years (yrs) have participated in the pilot study: 30 with ABI -epilepsy, traumatic brain injury and/or tic disorder- and 10 healthy age-matched controls. From the patients, 12 have completed the training (M = 11.10 yrs, SD = 1.543) and 20 are still in training or in the waiting-list group (M = 10.69 yrs, SD = 1.704). All children performed the first individual and paired assessments. For patients, second evaluations were performed after the intervention period. Two interactive applications were implemented into rehabilitation design: Snowflake software on MMT tabletop and NoProblem on DiamondTouch Table (DTT), which allowed paired training (2 children at once). Also, in individual training sessions, HTC Vive VR device was used with VR metaphors of difficult social situations to treat social anxiety and train social skills. Results: At baseline (B) evaluations, patients had higher deficits in executive functions on the BRIEF parents’ questionnaire (M = 117, SD = 23.594) compared to healthy controls (M = 22, SD = 18.385). The most impaired components of social competence were emotion recognition, Theory of Mind skills (ToM), cooperation, verbal/non-verbal communication, and pragmatics (Friendship Observation Scale scores only 25-50% out of 100% for patients). In Sentence Completion Task and Spence Anxiety Scale, the patients reported a lack of friends, behavioral problems, bullying in school, and social anxiety. Outcome evaluations: Snowflake on MMT improved executive and cooperation skills and DTT developed communication skills, metacognitive skills, and coping. VR, video modelling and role-plays improved social attention, emotional attitude, gestural behaviors, and decreased social anxiety. NEPSY-II showed improvement in Affect Recognition [B = 7, SD = 5.01 vs outcome (O) = 10, SD = 5.85], Verbal ToM (B = 8, SD = 3.06 vs O = 10, SD = 4.08), Contextual ToM (B = 8, SD = 3.15 vs O = 11, SD = 2.87). ToM Stories test showed an improved understanding of Intentional Lying (B = 7, SD = 2.20 vs O = 10, SD = 0.50), and Sarcasm (B=6, SD = 2.20 vs O = 7, SD = 2.50). Conclusion: Neurorehabilitation based on the Structured Model of Neurorehab for Socio-Cognitive Deficit in children with ABI were effective in social skills remediation. The model helps to understand theoretical connections between components of social competence and modern interactive computerized platforms. We encourage therapists to implement these next-generation devices into the rehabilitation process as MMT and VR interfaces are motivating for children, thus ensuring good compliance. Improving children’s social skills is important for their and their families’ quality of life and social capital.Keywords: acquired brain injury, children, social skills deficit, technology-based neurorehabilitation
Procedia PDF Downloads 121
205 Concentration of Droplets in a Transient Gas Flow
Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal
Abstract:
The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code, ANSYS Fluent, is presented here. The study is motivated by the investigation of the mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis; the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements as it does not require counting of large numbers of tracked droplets as in the case of the conventional Lagrangian approach. In the Eulerian approach the average droplet velocity is expressed as a function of the carrier phase velocity as an expansion over the droplet response time and transport equation can be solved in the Eulerian form. The advantage of the method is that droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions from the two approaches were compared in the analysis of the problem of a dilute gas-droplet flow around an infinitely long, circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes — 0.1, 0.2; Reynolds — 10, 100) the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder that can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted a high droplet concentration in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method was up to two orders of magnitude greater than that predicted by the Eulerian method; a significant variation for an approach widely used in engineering applications. Based on the results of these comparisons, the Osiptsov method has resulted in a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of the results of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and experimental data.Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations
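The fully Lagrangian (Osiptsov) method outlined above can be sketched for a single droplet in a prescribed two-dimensional velocity field: the droplet position, velocity, the Jacobian of the Lagrangian mapping, and its rate are integrated along the path, and the concentration follows from the determinant of the Jacobian. The flow field (a Taylor-Green cell), Stokes drag law, response time, and initial conditions below are illustrative assumptions, not the configuration of the cylinder-flow study.

```python
# Sketch of the fully Lagrangian (Osiptsov) method for one droplet in a
# prescribed 2-D Taylor-Green field: integrate position x, velocity v, the
# Jacobian J = dx/dx0 and its rate W = dv/dx0 along the path, then recover the
# concentration as n = n0 / |det J|.
import numpy as np
from scipy.integrate import solve_ivp

tau = 0.1  # droplet response time (illustrative)

def fluid_u(x):
    return np.array([np.sin(x[0]) * np.cos(x[1]), -np.cos(x[0]) * np.sin(x[1])])

def grad_u(x):
    return np.array([[ np.cos(x[0]) * np.cos(x[1]), -np.sin(x[0]) * np.sin(x[1])],
                     [ np.sin(x[0]) * np.sin(x[1]), -np.cos(x[0]) * np.cos(x[1])]])

def rhs(t, y):
    x, v = y[0:2], y[2:4]
    J, W = y[4:8].reshape(2, 2), y[8:12].reshape(2, 2)
    dx = v
    dv = (fluid_u(x) - v) / tau                 # Stokes drag
    dJ = W                                      # dJ/dt = dv/dx0
    dW = (grad_u(x) @ J - W) / tau              # from differentiating the drag law
    return np.concatenate([dx, dv, dJ.ravel(), dW.ravel()])

x0 = np.array([0.3, 1.1])
v0 = fluid_u(x0)                                # droplet starts with fluid velocity
J0, W0 = np.eye(2), grad_u(x0)                  # consistent initial derivatives
y0 = np.concatenate([x0, v0, J0.ravel(), W0.ravel()])

sol = solve_ivp(rhs, (0.0, 5.0), y0, max_step=1e-2)
J_end = sol.y[4:8, -1].reshape(2, 2)
n_rel = 1.0 / abs(np.linalg.det(J_end))         # concentration relative to n0
print(f"relative droplet concentration along the path: {n_rel:.3f}")
```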
Procedia PDF Downloads 258
204 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs
Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
Unmanned aerial vehicle (UAV) technology has cost efficiency and data retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR will later be combined into one integrated system in which each covers the others' deficiencies. This integrated system aims to increase the accuracy of calculating the volume of the land stockpile of PT. Garam (Salt Company). UAV applications are used to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. LiDAR can classify based on the number of image acquisitions processed in the software, utilizing photogrammetry and Structure from Motion point cloud principles. LiDAR can perform data acquisition that enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. LiDAR has a drawback in that its coordinate data are referenced to a local frame. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the stockpile of salt on open land and in warehouses, which PT. Garam carries out twice every year; the previous process used terrestrial methods and manual calculations with sacks. Research with LiDAR needs to be combined with UAV to overcome data acquisition limitations, because the scanner only passes along the right and left sides of the object, mainly when applied to a salt stockpile. The UAV is flown to assist data acquisition with wide coverage, with the help of the integrated 200-gram LiDAR system, so that the flying angle taken can be optimal during the flight process. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain reasonably accurate data at a more economical price. As a survey tool, LiDAR is available at a low price, around 999 USD, and this device can produce detailed data. Therefore, to minimize the operational costs of using LiDAR, surveyors can use low-cost LiDAR, GNSS, and UAV at a price of around 638 USD. The data generated by this sensor take the form of a visualization of an object's shape in three dimensions. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free user software. The low-cost GPS generates position data in the form of latitude and longitude coordinates. These data provide X, Y, and Z values to help georeference the detected object. This research will also produce LiDAR data that can detect objects, including the height of the entire environment at that location. The results obtained are calibrated with pitch, roll, and yaw to get the vertical height of the existing contours. This study conducted an experimental process on the roof of a building with a radius of approximately 30 meters.
Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour
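The stockpile volume computation that motivates the sensor integration can be sketched as a simple grid-based estimate from a georeferenced point cloud: bin the points into horizontal cells, average the height per cell, and sum the height above a base plane times the cell area. The point cloud, cell size, and base elevation below are synthetic placeholders, not PT. Garam survey data.

```python
# Sketch of a stockpile volume estimate from a georeferenced point cloud:
# bin points into a horizontal grid, take the mean height per cell and sum
# (height above base plane) x (cell area). All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_pts = 50_000
x = rng.uniform(0, 30, n_pts)                      # metres (easting)
y = rng.uniform(0, 30, n_pts)                      # metres (northing)
base_z = 0.0
# hypothetical conical salt pile centred in the plot
z = np.maximum(0.0, 6.0 - 0.5 * np.hypot(x - 15, y - 15)) + rng.normal(0, 0.02, n_pts)

cell = 0.5                                         # grid cell size, metres
ix = np.floor(x / cell).astype(int)
iy = np.floor(y / cell).astype(int)
nx, ny = ix.max() + 1, iy.max() + 1

sums = np.zeros((nx, ny))
counts = np.zeros((nx, ny))
np.add.at(sums, (ix, iy), z)
np.add.at(counts, (ix, iy), 1)
mean_z = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

volume = np.sum(np.maximum(mean_z - base_z, 0.0)) * cell * cell
print(f"estimated stockpile volume: {volume:.1f} m^3")
```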
Procedia PDF Downloads 97
203 All-In-One Universal Cartridge Based Truly Modular Electrolyte Analyzer
Authors: S. Dalvi, N. Sane, V. Patil, D. Bansode, A. Tharakan, V. Mathur
Abstract:
Measurement of routine clinical electrolyte tests is common in labs worldwide for screening of illness or diseases. All the analyzers for the measurement of electrolyte parameters have sensors, reagents, sampler, pump tubing, valve, other tubing’s separate that are either expensive, require heavy maintenance and have a short shelf-life. Moreover, the costs required to maintain such Lab instrumentation is high and this limits the use of the device to only highly specialized personnel and sophisticated labs. In order to provide Healthcare Diagnostics to ALL at affordable costs, there is a need for an All-in-one Universal Modular Cartridge that contains sensors, reagents, sampler, valve, pump tubing, and other tubing’s in one single integrated module-in-module cartridge that is affordable, reliable, easy-to-use, requires very low sample volume and is truly modular and maintenance-free. DiaSys India has developed a World’s first, Patent Pending, Versatile All-in-one Universal Module-in-Module Cartridge based Electrolyte Analyzer (QDx InstaLyte) that can perform sodium, potassium, chloride, calcium, pH, lithium tests. QDx InstaLyte incorporates High Performance, Inexpensive All-in-one Universal Cartridge for rapid quantitative measurement of electrolytes in body fluids. Our proposed methodology utilizes Advanced & Improved long life ISE sensors to provide a sensitive and accurate result in 120 sec with just 100 µl of sample volume. The All-in-One Universal Cartridge has a very low reagent consumption capable of maximum of 1000 tests with a Use-life of 3-4 months and a long Shelf life of 12-18 months at 4-25°C making it very cost-effective. Methods: QDx InstaLyte analyzers with All-in-one Universal Modular Cartridges were independently evaluated with three R&D lots for Method Performance (Linearity, Precision, Method Comparison, Cartridge Stability) to measure Sodium, Potassium, Chloride. Method Comparison was done against Medica EasyLyte Plus Na/K/Cl Electrolyte Analyzer, a mid-size lab based clinical chemistry analyzer with N = 100 samples run over 10 days. Within-run precision study was done using modified CLSI guidelines with N = 20 samples and day-to-day precision study was done for 7 consecutive days using Trulab N & P Quality Control Samples. Accelerated stability testing was done at 45oC for 4 weeks with Production Lots. Results: Data analysis indicates that the CV for within-run precision for Na is ≤ 1%, for K is ≤2%, and for Cl is ≤2% and with R2 ≥ 0.95 for Method Comparison. Further, the All-in-One Universal Cartridge is stable up to 12-18 months at 4-25oC storage temperature based on preliminary extrapolated data. Conclusion: The Developed Technology Platform of All-in-One Universal Module-in-Module Cartridge based QDx InstaLyte is Reliable and meets all the performance specifications of the lab and is Truly Modular and Maintenance-Free. Hence, it can be easily adapted for low cost, sensitive and rapid measurement of electrolyte tests in low resource settings such as in urban, semi-urban and rural areas in the developing countries and can be used as a Point-of-care testing system for worldwide applications.Keywords: all-in-one modular catridge, electrolytes, maintenance free, QDx instalyte
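The measurement principle behind ISE-based electrolyte analyzers is a Nernst-type calibration, E = E0 + S·log10(C), inverted to recover the sample concentration. The short worked sketch below uses made-up calibrant concentrations and potentials; it is not a specification of the QDx InstaLyte instrument.

```python
# Worked sketch: convert an ion-selective-electrode (ISE) reading to a
# concentration via a two-point calibration of E = E0 + S * log10(C).
# Calibrant concentrations, potentials and the sample reading are invented.
import math

# two-point calibration (e.g., Na+ standards)
c1, e1 = 120.0, 52.0     # mmol/L, mV
c2, e2 = 160.0, 59.2     # mmol/L, mV

S = (e2 - e1) / (math.log10(c2) - math.log10(c1))   # slope, mV/decade
E0 = e1 - S * math.log10(c1)

def concentration(e_mv: float) -> float:
    """Invert E = E0 + S*log10(C) for the sample concentration."""
    return 10 ** ((e_mv - E0) / S)

sample_mv = 55.5
print(f"slope = {S:.1f} mV/decade, sample ~ {concentration(sample_mv):.1f} mmol/L")
```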
Procedia PDF Downloads 35
202 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to concrete union relationships between a parent node, and a secondary node, also called "child node". This logic relation (L) can be formally defined as a triple ontological relation (LO) equivalent to LO in ⟨LN, LE, LC⟩, and where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes to form a rooted tree of ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) and to the end of the Spanish Civil War (1939). The model looks forward to increasing documentary relevance by applying an inverse frequency of co-ocurrent hypernymy phenomena for a concrete dataset of textual corpora, with RMySQL package. Mondoc profiles archival utilities implementing SQL programming code, and allows data export to XML schemas, for achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences co-associations between (t) and their corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) state how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and seems to achieve a proper and more selective search of e-documents (classification ontology). It can also foster on-line catalogs production for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models, by a prior adaptation of the based-ontology to structured meta-languages, such as OWL, RDF (descriptive ontology). Mondoc serves to the classification of concepts and applies a semantic indexing approach of facets. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a triple tuple ⟨LN, LE, LT, LCF L, BKF⟩ where LN is a set of entities that connect with other nodes to concrete a rooted tree in ⟨LN, LE⟩. LT specifies a set of terms, and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc only resolves partial problems of linguistic ambiguity (in case of synonymy and antonymy), but neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target at oriented meta-languages with structured documents in XML.Keywords: hypernymy, information retrieval, lightweight ontology, resonance
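The inverse-frequency weighting described above is close in spirit to a TF-IDF score over a document set. The sketch below illustrates that scoring on a toy corpus; note that Mondoc itself works against a MySQL-backed corpus through the RMySQL package in R, whereas this illustration is plain Python with invented example sentences.

```python
# Sketch of inverse-document-frequency weighting of term co-occurrence across
# a small document set, in the spirit of the "inverse documentary relevance"
# described above. The corpus is a toy example.
import math
from collections import Counter

docs = [
    "emigrants sailed from galicia to havana before the civil war",
    "the war forced many emigrants toward atlantic ports",
    "declaration of independence and atlantic trade routes",
]
tokenized = [d.split() for d in docs]

def idf(term: str) -> float:
    df = sum(term in doc for doc in tokenized)          # document frequency
    return math.log((1 + len(docs)) / (1 + df)) + 1     # smoothed IDF

def tf_idf(term: str, doc_tokens: list) -> float:
    tf = Counter(doc_tokens)[term] / max(len(doc_tokens), 1)
    return tf * idf(term)

for term in ("emigrants", "war", "atlantic"):
    scores = [round(tf_idf(term, d), 3) for d in tokenized]
    print(term, scores)   # higher score = more documentary relevance in that text
```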
Procedia PDF Downloads 126
201 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
Green planet is one of the Earth’s names which is known as a terrestrial planet and also can be named the fifth largest planet of the solar system as another scientific interpretation. Plants do not have a constant and steady distribution all around the world, and even plant species’ variations are not the same in one specific region. Presence of plants is not only limited to one field like botany; they exist in different fields such as literature and mythology and they hold useful and inestimable historical records. No one can imagine the world without oxygen which is produced mostly by plants. Their influences become more manifest since no other live species can exist on earth without plants as they form the basic food staples too. Regulation of water cycle and oxygen production are the other roles of plants. The roles affect environment and climate. Plants are the main components of agricultural activities. Many countries benefit from these activities. Therefore, plants have impacts on political and economic situations and future of countries. Due to importance of plants and their roles, study of plants is essential in various fields. Consideration of their different applications leads to focus on details of them too. Automatic recognition of plants is a novel field to contribute other researches and future of studies. Moreover, plants can survive their life in different places and regions by means of adaptations. Therefore, adaptations are their special factors to help them in hard life situations. Weather condition is one of the parameters which affect plants life and their existence in one area. Recognition of plants in different weather conditions is a new window of research in the field. Only natural images are usable to consider weather conditions as new factors. Thus, it will be a generalized and useful system. In order to have a general system, distance from the camera to plants is considered as another factor. The other considered factor is change of light intensity in environment as it changes during the day. Adding these factors leads to a huge challenge to invent an accurate and secure system. Development of an efficient plant recognition system is essential and effective. One important component of plant is leaf which can be used to implement automatic systems for plant recognition without any human interface and interaction. Due to the nature of used images, characteristic investigation of plants is done. Leaves of plants are the first characteristics to select as trusty parts. Four different plant species are specified for the goal to classify them with an accurate system. The current paper is devoted to principal directions of the proposed methods and implemented system, image dataset, and results. The procedure of algorithm and classification is explained in details. First steps, feature detection and description of visual information, are outperformed by using Scale invariant feature transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to comparison, robustness and efficiency of results in different conditions are investigated and explained.Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
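The three detector/descriptor combinations named above (SIFT, HARRIS-SIFT, FAST-SIFT) can be sketched with OpenCV as below. The image path is a placeholder, the detector parameters are assumptions, and opencv-python 4.4 or newer is assumed so that SIFT is available in the main package.

```python
# Sketch of the three detector/descriptor combinations: plain SIFT, FAST
# keypoints + SIFT descriptors, and Harris corners + SIFT descriptors.
# "leaf.jpg" is a placeholder path.
import cv2

img = cv2.imread("leaf.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "put a leaf image next to this script"

sift = cv2.SIFT_create()

# 1) SIFT detector + SIFT descriptor
kp_sift, desc_sift = sift.detectAndCompute(img, None)

# 2) FAST detector + SIFT descriptor
fast = cv2.FastFeatureDetector_create(threshold=25)
kp_fast = fast.detect(img, None)
kp_fast, desc_fast = sift.compute(img, kp_fast)

# 3) Harris corners + SIFT descriptor
corners = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True, k=0.04)
kp_harris = [cv2.KeyPoint(float(x), float(y), 7) for x, y in corners.reshape(-1, 2)]
kp_harris, desc_harris = sift.compute(img, kp_harris)

for name, kp in [("SIFT", kp_sift), ("FAST-SIFT", kp_fast), ("HARRIS-SIFT", kp_harris)]:
    print(name, "keypoints:", len(kp))
```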
Procedia PDF Downloads 278
200 Developing Telehealth-Focused Advanced Practice Nurse Educational Partnerships
Authors: Shelley Y. Hawkins
Abstract:
Introduction/Background: As technology has grown exponentially in healthcare, nurse educators must prepare Advanced Practice Registered Nurse (APRN) graduates with the knowledge and skills in information systems/technology to support and improve patient care and health care systems. APRN’s are expected to lead in caring for populations who lack accessibility and availability through the use of technology, specifically telehealth. The capacity to effectively and efficiently use technology in patient care delivery is clearly delineated in the American Association of Colleges of Nursing (AACN) Doctor of Nursing Practice (DNP) and Master of Science in Nursing (MSN) Essentials. However, APRN’s have minimal, or no, exposure to formalized telehealth education and lack necessary technical skills needed to incorporate telehealth into their patient care. APRN’s must successfully master the technology using telehealth/telemedicine, electronic health records, health information technology, and clinical decision support systems to advance health. Furthermore, APRN’s must be prepared to lead the coordination and collaboration with other healthcare providers in their use and application. Aim/Goal/Purpose: The purpose of this presentation is to establish and operationalize telehealth-focused educational partnerships between one University School of Nursing and two health care systems in order to enhance the preparation of APRN NP students for practice, teaching, and/or scholarly endeavors. Methods: The proposed project was initially presented by the project director to selected multidisciplinary stakeholders including leadership, home telehealth personnel, primary care providers, and decision support systems within two major health care systems to garner their support for acceptance and implementation. Concurrently, backing was obtained from key university-affiliated colleagues including the Director of Simulation and Innovative Learning Lab and Coordinator of the Health Care Informatics Program. Technology experts skilled in design and production in web applications and electronic modules were secured from two local based technology companies. Results: Two telehealth-focused APRN Program academic/practice partnerships have been established. Students have opportunities to engage in clinically based telehealth experiences focused on: (1) providing patient care while incorporating various technology with a specific emphasis on telehealth; (2) conducting research and/or evidence-based practice projects in order to further develop the scientific foundation regarding incorporation of telehealth with patient care; and (3) participating in the production of patient-level educational materials related to specific topical areas. Conclusions: Evidence-based APRN student telehealth clinical experiences will assist in preparing graduates who can effectively incorporate telehealth into their clinical practice. Greater access for diverse populations will be available as a result of the telehealth service model as well as better care and better outcomes at lower costs. Furthermore, APRN’s will provide the necessary leadership and coordination through interprofessional practice by transforming health care through new innovative care models using information systems and technology.Keywords: academic/practice partnerships, advanced practice nursing, nursing education, telehealth
Procedia PDF Downloads 242
199 Effects of Hydrogen Bonding and Vinylcarbazole Derivatives on 3-Cyanovinylcarbazole Mediated Photo-Cross-Linking Induced Cytosine Deamination
Authors: Siddhant Sethi, Yasuharu Takashima, Shigetaka Nakamura, Kenzo Fujimoto
Abstract:
Site-directed mutagenesis is a renowned technique to introduce specific mutations in the genome. To achieve site-directed mutagenesis, many chemical and enzymatic approaches have been reported in the past like disulphite induced genome editing, CRISPR-Cas9, TALEN etc. The chemical methods are invasive whereas the enzymatic approaches are time-consuming and expensive. Most of these techniques are unusable in the cellular application due to their toxicity and other limitations. Photo-chemical cytosine deamination, introduced in 2010, is one of the major technique for enzyme-free single-point mutation of cytosine to uracil in DNA and RNA, wherein, 3-cyanovinylcarbazole nucleoside (CNVK) containing oligodeoxyribonucleotide (ODN) having CNVK at -1 position to that of target cytosine is reversibly crosslinked to target DNA strand using 366 nm and then incubated at 90ºC to accommodate deamination. This technique is superior to enzymatic methods of site-directed mutagenesis but has a disadvantage that it requires the use of high temperature for the deamination step which restricts its applicability in the in vivo applications. This study has been focused on improving the technique by reducing the temperature required for deamination. Firstly, the photo-cross-linker, CNVK has been modified by replacing cyano group attached to vinyl group with methyl ester (OMeVK), amide (NH2VK), and carboxylic acid (OHVK) to observe the acceleration in the deamination of target cytosine cross-linked to vinylcarbazole derivative. Among the derivatives, OHVK has shown 2 times acceleration in deamination reaction as compared to CNVK, while the other two derivatives have shown deceleration towards deamination reaction. The trend of rate of deamination reaction follows the same order as that of hydrophilicity of the vinylcarbazole derivatives. OHVK being most hydrophilic has shown highest acceleration while OMeVK is least hydrophilic has proven to be least active for deamination. Secondly, in the related study, the counter-base of the target cytosine, guanine has been replaced by inosine, 2-aminopurine, nebularine, and 5-nitroindole having distinct hydrogen bonding patterns with target cytosine. Among the ODNs with these counter bases, ODN with inosine has shown 12 fold acceleration towards deamination of cytosine cross-linked to CNVK at physiological conditions as compared to guanosine. Whereas, when 2-aminopurine, nebularine, and 5-nitroindole were used, no deamination reaction took place. It can be concluded that inosine has potential to be used as the counter base of target cytosine for the CNVK mediated photo-cross-linking induced deamination of cytosine. The increase in rate of deamination reaction has been attributed to pattern and number of hydrogen bonding between the cytosine and counter base. One of the important factor is presence of hydrogen bond between exo-cyclic amino group of cytosine and the counter base. These results will be useful for development of more efficient technique for site-directed mutagenesis for C → U transformations in the DNA/RNA which might be used in the living system for treatment of various genetic disorders and genome engineering for making designer and non-native proteins.Keywords: C to U transformation, DNA editing, genome engineering, ultra-fast photo-cross-linking
Procedia PDF Downloads 236
198 Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Its Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks
Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry
Abstract:
Prolonging of shelf-life is essential in order to address issues such as; supplier demands across continents, economical profit, customer satisfaction, and reduction of food wastage. Smart packaging solutions presented in the form of naturally occurred antimicrobially-active packaging may be a solution to these and other issues. Gelatin film forming solution with adding of natural sourced antimicrobials is a promising tool for the active smart packaging. The objective of this study was to coat conventional plastic hydrophobic packaging material with hydrophilic antimicrobial active beef gelatin coating and conduct shelf life trials on beef sub-primal cuts. Minimal inhibition concentration (MIC) of Caprylic acid sodium salt (SO) and commercially available Auranta FV (AFV) (bitter oranges extract with mixture of nutritive organic acids) were found of 1 and 1.5 % respectively against bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, Staphylococcus aureus and aerobic and anaerobic beef microflora. Therefore SO or AFV were incorporated in beef gelatin film forming solution in concentration of two times of MIC which was coated on a conventional plastic LDPE/PA film on the inner cold plasma treated polyethylene surface. Beef samples were vacuum packed in this material and stored under chilling conditions, sampled at weekly intervals during 42 days shelf life study. No significant differences (p < 0.05) in the cook loss was observed among the different treatments compared to control samples until the day 29. Only for AFV coated beef sample it was 3% higher (37.3%) than the control (34.4 %) on the day 36. It was found antimicrobial films did not protect beef against discoloration. SO containing packages significantly (p < 0.05) reduced Total viable bacterial counts (TVC) compared to the control and AFV samples until the day 35. No significant reduction in TVC was observed between SO and AFV films on the day 42 but a significant difference was observed compared to control samples with a 1.40 log of bacteria reduction on the day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from the day 14 until the day 42. Control samples reached the set value of 7 log CFU/g on day 27 of testing, AFV films did not reach this set limit until day 35 and SO films until day 42 of testing. The antimicrobial AFV and SO coated films significantly prolonged the shelf-life of beef steaks by 33 or 55% (on 7 and 14 days respectively) compared to control film samples. It is concluded antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobial active packaging coated with SO or AFV increased significantly (p < 0.05) the shelf life of the beef sub-primal. Overall, AFV or SO containing gelatin coatings have the potential of being used as effective antimicrobials for active packaging applications for muscle-based food products.Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science
Procedia PDF Downloads 304
197 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media
Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca
Abstract:
Image sharing in social networks has increased exponentially in the past years. Officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need for developing new tools to understand the content expressed in shared images, which will greatly benefit social media communication and will enable broad and promising applications in education, advertisement, entertainment, and also psychology. Following these trends, our work aims to take advantage of the existing relationship between text and personality, already demonstrated by multiple researchers, so that we can prove that there exists a relationship between images and personality as well. To achieve this goal, we consider that images posted on social networks are typically conditioned on specific words, or hashtags, therefore any relationship between text and personality can also be observed with those posted images. Our proposal makes use of the most recent image understanding models based on neural networks to process the vast amount of data generated by social users to determine those images most correlated with personality traits. The final aim is to train a weakly-supervised image-based model for personality assessment that can be used even when textual data is not available, which is an increasing trend. The procedure is described next: we explore the images directly publicly shared by users based on those accompanying texts or hashtags most strongly related to personality traits as described by the OCEAN model. These images will be used for personality prediction since they have the potential to convey more complex ideas, concepts, and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than through words alone. In other words, from the images posted with specific tags, we train a deep learning model based on neural networks, that learns to extract a personality representation from a picture and use it to automatically find the personality that best explains such a picture. Subsequently, a deep neural network model is learned from thousands of images associated with hashtags correlated to OCEAN traits. We then analyze the network activations to identify those pictures that maximally activate the neurons: the most characteristic visual features per personality trait will thus emerge since the filters of the convolutional layers of the neural model are learned to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we can see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of a lot of people. In high Agreeableness images, we mostly see flower pictures. Lastly, in the Neuroticism trait, we observe that the high score is maximally activated by animal pets like cats or dogs. In summary, despite the huge intra-class and inter-class variabilities of the images associated to each OCEAN traits, we found that there are consistencies between visual patterns of those images whose hashtags are most correlated to each trait.Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks
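The "maximally activating images" analysis described above can be sketched by running a convolutional network over a set of images and ranking them by the activation of one chosen unit captured with a forward hook. The sketch below uses an off-the-shelf pretrained ResNet-18, an arbitrary channel, and a placeholder folder path purely for illustration; the study trains its own network on hashtag-labelled images rather than using a stock model.

```python
# Sketch: rank images by the mean activation of one channel of a pretrained
# CNN, captured with a forward hook. Model, layer, channel and folder path are
# placeholders for illustration only.
import glob
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
acts = {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(feat=o))

prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

channel = 17                      # arbitrary unit to inspect
scores = []
for path in glob.glob("photos/*.jpg"):
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(x)
    scores.append((acts["feat"][0, channel].mean().item(), path))

for score, path in sorted(scores, reverse=True)[:5]:
    print(f"{score:.3f}  {path}")   # images that most activate this unit
```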
Procedia PDF Downloads 198196 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, often involving thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low and high fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map, and furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows for the material or molecule to be represented as a graph, which is known to improve accuracy, for example SchNet and MEGNET. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low and high fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
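A minimal sketch of the Δ-ML idea described above (not the authors' GCN model): a surrogate is trained on the difference between low- and high-fidelity outputs for a small labelled subset, and the high-fidelity value is then predicted as the low-fidelity result plus the learned correction. The descriptors and energies below are synthetic placeholders.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))            # molecular descriptors (placeholder)
E_low = X @ rng.normal(size=30)           # stand-in for Hartree-Fock / DFT energies
E_high = E_low + 0.1 * np.sin(X[:, 0])    # stand-in for coupled-cluster energies

# Only a small subset has expensive high-fidelity labels.
train = rng.choice(len(X), size=50, replace=False)
delta_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
delta_model.fit(X[train], E_high[train] - E_low[train])

E_pred = E_low + delta_model.predict(X)   # high-fidelity estimate for all molecules
print("MAE:", np.mean(np.abs(E_pred - E_high)))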
Procedia PDF Downloads 42195 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy
Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather
Abstract:
Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast. A major challenge in applying optical microscopy to in vivo tissue imaging is the effect of light attenuation, which limits light penetration depth and achievable imaging resolution. Recently Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light is used to illuminate the sample from the side to excite fluorophores within the sample of interest. Images are formed based on detection of fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition speeds, low photon dose to samples, and the optical sectioning it provides makes SPIM an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on the use of fluorescence reporters, be they endogenous or exogenous. This approach has the disadvantage that in the case of exogenous probes the specimens are altered from their native state, rendering them unsuitable for in vivo studies, and in general fluorescence emission is weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental set up used in this work incorporates both label-free and fluorescent illumination arms in addition to a high specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and scattered light from intrinsic sources of optical contrast in the sample being studied. This work first involved calibration of the imaging system and validation of the label-free method with well characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel with varying cell concentrations were then imaged. A time course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection and 3D maps were reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and native tissue. This technique has potential for use in a near-patient environment, providing results quickly and in an easy-to-use manner, and offering more information with improved spatial resolution and depth penetration than current approaches.Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging
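The volume-building step described above (scan, acquire, stack, re-slice) can be sketched as follows; the acquisition function, scan step, and frame sizes are placeholders, not the actual instrument parameters.

import numpy as np

n_planes, height, width = 200, 512, 512
step_um = 1.0                                   # assumed scan step between planes

def acquire_frame(i):
    # placeholder for camera readout of scattered or fluorescent light at plane i
    return np.random.poisson(100, size=(height, width)).astype(np.float32)

volume = np.stack([acquire_frame(i) for i in range(n_planes)], axis=0)  # (z, y, x)
xz_section = volume[:, height // 2, :]          # orthogonal re-slice through the volume
print(volume.shape, "plane spacing =", step_um, "um")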
Procedia PDF Downloads 250194 Mechanical Properties of Poly(Propylene)-Based Graphene Nanocomposites
Authors: Luiza Melo De Lima, Tito Trindade, Jose M. Oliveira
Abstract:
The development of thermoplastic-based graphene nanocomposites has been of great interest not only to the scientific community but also to different industrial sectors. Due to the possible improvement of performance and weight reduction, thermoplastic nanocomposites hold great promise as a new class of materials. These nanocomposites are of relevance for the automotive industry, namely because the CO2 emission limits imposed by European Commission (EC) regulations can be fulfilled by reducing vehicle weight without compromising the car’s performance. Thermoplastic polymers have some advantages over thermosetting polymers such as higher productivity, lower density, and recyclability. In the automotive industry, for example, poly(propylene) (PP) is a common thermoplastic polymer, which represents more than half of the polymeric raw material used in automotive parts. Graphene-based materials (GBM) are potential nanofillers that can improve the properties of polymer matrices at very low loading. In comparison to other composites, such as fiber-based composites, the weight reduction can positively affect their processing and future applications. However, the properties and performance of GBM/polymer nanocomposites depend on the type of GBM and polymer matrix, the degree of dispersion, and especially the type of interactions between the fillers and the polymer matrix. In order to take advantage of the superior mechanical strength of GBM, strong interfacial strength between GBM and the polymer matrix is required for efficient stress transfer from GBM to the polymer. Thus, chemical compatibilizers and physicochemical modifications have been reported as important tools during the processing of these nanocomposites. In this study, PP-based nanocomposites were obtained by a simple melt blending technique, using a Brabender type mixer machine. Graphene nanoplatelets (GnPs) were applied as structural reinforcement. Two compatibilizers were used to improve the interaction between the PP matrix and GnPs: PP graft maleic anhydride (PPgMA) and PPgMA modified with tertiary amine alcohol (PPgDM). The samples for tensile and Charpy impact tests were obtained by injection molding. The results suggested that the presence of GnPs can increase the mechanical strength of the polymer. However, it was verified that the presence of GnPs can promote a decrease in impact resistance, making the nanocomposites more brittle than neat PP. The incorporation of the compatibilizers increases the impact resistance, suggesting that the compatibilizers can enhance the adhesion between PP and GnPs. Compared to neat PP, the increase in Young’s modulus of the non-compatibilized nanocomposite demonstrated that GnP incorporation can improve the stiffness of the polymer. This trend can be related to the several physical crosslinking points between the PP matrix and the GnPs. Furthermore, the decrease in strain at yield of PP/GnPs, together with the enhancement of Young’s modulus, confirms that GnP incorporation led to an increase in stiffness but to a decrease in toughness. Moreover, the results demonstrated that the incorporation of compatibilizers did not affect the Young’s modulus and strain at yield results compared to the non-compatibilized nanocomposite.
The incorporation of these compatibilizers improved the nanocomposites’ mechanical properties compared both to the non-compatibilized nanocomposite and to a PP sample used as reference.Keywords: graphene nanoplatelets, mechanical properties, melt blending processing, poly(propylene)-based nanocomposites
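For readers unfamiliar with the quantities compared above, this small sketch shows how Young's modulus is typically extracted as the slope of the initial linear region of a tensile stress-strain curve; the numbers are illustrative, not the measured data.

import numpy as np

strain = np.linspace(0.0, 0.05, 200)                       # dimensionless strain
stress = 1.6e3 * strain + np.random.normal(0, 1.0, 200)    # MPa, assumed ~1.6 GPa modulus

linear = strain < 0.01                                      # assumed elastic region
E, intercept = np.polyfit(strain[linear], stress[linear], 1)
print(f"Young's modulus ~ {E / 1e3:.2f} GPa")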
Procedia PDF Downloads 187193 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. Among them, GAs attract our attention because they do not require a considerable amount of data in advance in dealing with complex problems and can make a global solution search possible as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of Glass Fiber-reinforced Polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates for fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only first mode shapes in a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
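A toy sketch of the inverse-identification loop described above: a simplified genetic algorithm (selection and mutation only, no crossover) searches the parameters of a bivariate Gaussian stiffness-degradation field so that predicted modal quantities match the "measured" ones. The surrogate objective below stands in for the ABAQUS modal analysis and is purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 21)
X, Y = np.meshgrid(x, x)

def degradation(p):                      # p = (x0, y0, width, depth)
    x0, y0, w, d = p
    return 1.0 - d * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * w ** 2))

def pseudo_modal_data(p):                # toy surrogate for the FE modal analysis
    k = degradation(p)
    return np.array([k.mean(), k[5, 5], k[10, 15], k[15, 8]])

true_p = np.array([0.6, 0.3, 0.15, 0.4])
measured = pseudo_modal_data(true_p)

def fitness(p):
    return -np.sum((pseudo_modal_data(p) - measured) ** 2)

low, high = [0, 0, 0.05, 0.0], [1, 1, 0.5, 0.8]
pop = rng.uniform(low, high, size=(60, 4))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]                               # selection
    children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 0.02, (60, 4))  # mutation
    pop = np.clip(children, low, high)

best = pop[np.argmax([fitness(p) for p in pop])]
print("identified degradation parameters:", best)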
Procedia PDF Downloads 277192 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment
Authors: Pedro Llanos, Diego García
Abstract:
This paper aims to expand our understanding of the effects of hypoxia training on the body in order to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots, allowing them to recognize their early hypoxia signs and symptoms; here it was undertaken by Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science Technology Engineering Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude up to 22,000 ft (FL220) or 6,705 meters, where heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with the use of a chest strap sensor pre and post HH exposure. A pulse oximeter registered levels of saturation of oxygen (SpO2) and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms as described by the SACs during the HH training session were also registered. These data allowed the generation of a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure, and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed that HR and BR presented no significant differences between pre and post HH exposure in most of the SACs, while Tc measures showed slight but consistent decrements. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds) and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery of HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, with SP and task focusing evidenced as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed similar results to previous studies in homogeneous populations of pilots.Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin
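The polynomial-fit model described above can be sketched as follows on synthetic SpO2 traces (sixth order during exposure, fourth order during recovery); the time bases and curve shapes are assumptions, not the SAC recordings.

import numpy as np

t_exp = np.linspace(0, 20, 120)                        # minutes at altitude (assumed)
spo2_exp = 97 - 0.9 * t_exp + 0.02 * t_exp**2 + np.random.normal(0, 0.4, t_exp.size)
t_rec = np.linspace(0, 2, 40)                          # minutes after donning the O2 mask
spo2_rec = 84 + 12 * (1 - np.exp(-3 * t_rec)) + np.random.normal(0, 0.4, t_rec.size)

p_exposure = np.polynomial.Polynomial.fit(t_exp, spo2_exp, deg=6)
p_recovery = np.polynomial.Polynomial.fit(t_rec, spo2_rec, deg=4)

print("predicted SpO2 after 10 min of exposure:", round(p_exposure(10.0), 1))
print("predicted SpO2 1 min into recovery:", round(p_recovery(1.0), 1))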
Procedia PDF Downloads 117191 Effect of Spermidine on Physicochemical Properties of Protein Based Films
Authors: Mohammed Sabbah, Prospero Di Pierro, Raffaele Porta
Abstract:
Protein-based edible films and coatings have attracted an increasing interest in recent years since they might be used to protect pharmaceuticals or improve the shelf life of different food products. Among them, several plant proteins represent an abundant, inexpensive and renewable raw source. These natural biopolymers are used as film forming agents, being able to form intermolecular linkages by various interactions. However, without the addition of a plasticizing agent, many biomaterials are brittle and, consequently, very difficult to be manipulated. Plasticizers are generally small and non-volatile organic additives used to increase film extensibility and reduce its crystallinity, brittleness and water vapor permeability. Plasticizers normally act by decreasing the intermolecular forces along the polymer chains, thus reducing the relative number of polymer-polymer contacts, producing a decrease in cohesion and tensile strength and thereby increasing film flexibility allowing its deformation without rupture. The most commonly studied plasticizers are polyols, like glycerol (GLY) and some mono or oligosaccharides. In particular, GLY not only increases film extensibility but also migrates inside the film network often causing the loss of desirable mechanical properties of the material. Therefore, replacing GLY with a different plasticizer might help to improve film characteristics allowing potential industrial applications. To improve film properties, it seemed of interest to test as plasticizers some cationic small molecules like polyamines (PAs). Putrescine, spermidine (SPD), and spermine are PAs widely distributed in nature and of particular interest for their biological activities that may have some beneficial health effects. Since PAs contains amino instead of hydroxyl functional groups, they are able to trigger ionic interactions with negatively charged proteins. Bitter vetch (Vicia ervilia; BV) is an ancient grain legume crop, originated in the Mediterranean region, which can be found today in many countries around the world. This annual Vicia genus shows several favorable features, being their seeds a cheap and abundant protein source. The main objectives of this study were to investigate the effect of different concentrations of SPD on the mechanical and permeability properties of films prepared with native or heat denatured BV proteins in the presence of different concentrations of SPD and/or GLY. Therefore, a BV seed protein concentrate (BVPC), containing about 77% proteins, was used to prepare film forming solutions (FFSs), whereas GLY and SPD were added as film plasticizers, either singly or in combination, at various concentrations. Since a primary plasticizer is generally defined as a molecule that when added to a material makes it softer, more flexible and easier to be processed, our findings lead to consider SPD as a possible primary plasticizer of protein-based films. In fact, the addition of millimolar concentrations of SPD to BVPC FFS allowed obtaining handleable biomaterials with improved properties. Moreover, SPD can be also considered as a secondary plasticizer, namely an 'extender', because of its ability even to enhance the plasticizing performance of GLY. In conclusion, our studies indicate that innovative edible protein-based films and coatings can be obtained by using PAs as new plasticizers.Keywords: edible films, glycerol, plasticizers, polyamines, spermidine
Procedia PDF Downloads 197190 Pt Decorated Functionalized Acetylene Black as Efficient Cathode Material for Li Air Battery and Fuel Cell Applications
Authors: Rajashekar Badam, Vedarajan Raman, Noriyoshi Matsumi
Abstract:
The efficiency of energy conversion and storage systems like fuel cells and Li-air batteries principally depends on the oxygen reduction reaction (ORR), which occurs at the cathode. As the kinetics of the ORR is very slow, it becomes the rate-determining step. Exploring carbon substrates that enhance the dispersion and activity of the metal catalyst, together with commercially viable and simple preparation methods, is a crucial area of research in the field of energy materials. Hence, a large number of carbon-based ORR materials have been reported to date. However, there are hardly any studies on how the interactions between Pt and carbon and between carbon and electrolyte affect activity. In this work, we have prepared a functionalized carbon-based Pt catalyst (Pt-FAB) with enhanced interfacial properties that lead to efficient ORR catalysis. The present work deals with a single-pot method to exfoliate and functionalize acetylene black with enhanced interaction with Pt as well as the electrolyte. Acetylene black was functionalized and exfoliated using a facile single-pot acid treatment method. The resulting FAB was further decorated with Pt nanoparticles (Pt-np). TEM images showed Pt-FAB uniformly decorated with Pt-np of ~3 nm. Further, XPS studies of the Pt 4f peak revealed that the Pt⁰ peak was shifted by 0.4 eV in Pt-FAB compared to the binding energy of typical Pt⁰ found in Pt/C. The shift can be ascribed to the modulation of the electronic state and the strong electronic interaction of Pt with carbon. The modulated electronic structure of Pt and the strong electronic interaction of Pt with FAB enhance the catalytic activity and durability, respectively. To understand the electrode-electrolyte interface, electrochemical impedance spectroscopy was carried out. These measurements revealed that the charge transfer resistance from electrode to electrolyte for Pt-FAB is 10 times smaller than that of conventional Pt/C. The interaction with the electrolyte helps reduce the interface boundaries, which in turn affects the overall catalytic performance of the electrode. Cyclic voltammetric measurements in 0.1M HClO₄ aq. at a potential scan rate of 50 mVs-1 were employed to evaluate the electrochemical surface area (ECSA) of Pt. The ECSA of Pt-FAB was found to be as high as 67.2 m²g⁻¹. The three-electrode system showed very high ORR catalytic activity. The mass activity at 0.9 V vs. RHE was 460 A/g, which is much higher than the DOE target values for the year 2020. Further, it showed enhanced performance, giving a peak power density of 723 mW/cm² and a current density of 1006 mA/cm² at 0.6 V in a fuel cell single-cell configuration and a rechargeable capacity of 1030 mAhg⁻¹ in a Li-air battery application. The higher catalytic activity can be ascribed to the improved interaction of FAB with Pt and the electrolyte. The aforementioned results evince that Pt-FAB will be a promising cathode material for efficient ORR with significant cyclability for its application in fuel cells and Li-air batteries. In conclusion, a disordered material was prepared from AB and was systematically characterized. The extremely high ORR activity and ease of preparation make it a strong candidate for replacing commercially available ORR materials.Keywords: functionalized acetylene black, oxygen reduction reaction, fuel cells, functionalized battery
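Two routine calculations consistent with the figures quoted above can be sketched as follows: ECSA from the hydrogen-adsorption charge of the cyclic voltammogram, and the Pt specific activity implied by the reported mass activity and ECSA. The adsorption charge and Pt loading below are assumed values chosen only for illustration.

Q_H = 0.42e-3            # C, hydrogen adsorption charge from the CV (assumed)
q_ref = 210e-6           # C per cm^2 of Pt, standard monolayer H adsorption charge
m_Pt = 3.0e-6            # g of Pt on the electrode (assumed loading)

ecsa_cm2_per_g = Q_H / (q_ref * m_Pt)
ecsa_m2_per_g = ecsa_cm2_per_g / 1e4
print(f"ECSA ~ {ecsa_m2_per_g:.1f} m^2/g")       # ~67 m^2/g for these assumed inputs

mass_activity = 460.0    # A/g at 0.9 V vs. RHE (reported)
ecsa_reported = 67.2     # m^2/g (reported)
specific_activity = mass_activity / (ecsa_reported * 1e4) * 1e3   # mA per cm^2 of Pt
print(f"specific activity ~ {specific_activity:.2f} mA/cm^2")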
Procedia PDF Downloads 109189 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods on satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goal, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor are their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower when compared to those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data while the human readers estimated welfare levels from the higher 0.6m spatial resolution data from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence through a collaborative rather than a comparative framework.Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
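The rank correlation used above to compare human readers and the model against the DHS wealth quintiles is Spearman's rho; a minimal sketch with toy numbers (not the Tanzania survey data) is shown below.

from scipy.stats import spearmanr

wealth_quintile = [1, 2, 2, 3, 4, 5, 5, 3, 1, 4]        # ground-truth cluster ratings
human_rating    = [2, 1, 3, 3, 3, 4, 5, 2, 1, 3]        # hypothetical reader scores
model_score     = [0.1, 0.3, 0.25, 0.5, 0.7, 0.9, 0.85, 0.45, 0.15, 0.6]

rho_human, _ = spearmanr(wealth_quintile, human_rating)
rho_model, _ = spearmanr(wealth_quintile, model_score)
print(f"human rho = {rho_human:.2f}, model rho = {rho_model:.2f}")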
Procedia PDF Downloads 107188 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry
Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard
Abstract:
Wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue becomes more and more important with transistor size shrinkage and mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While the low-frequency acoustic reflectometry principle is a well-known method for Non-Destructive Testing applications, we have recently shown that it is also well suited for nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a comparison of experimental and modeling results. The acoustic method proposed is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The studied DTI structures, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer frontside. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo obtained through the reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in the DTI). The impalement of the liquid occurs at a specific surface tension, but it is still partial for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagation model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor
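A heavily simplified one-dimensional FDTD sketch of the modeling idea described above: a longitudinal pulse launched in silicon reflects at the interface with the medium filling the trench (water here), which is what changes the echo monitored at the transducer side. The geometry, grid, and material values are textbook-level approximations, far simpler than the full model in the paper.

import numpy as np

nx, nt = 2000, 4000
dx = 5e-9                                    # 5 nm grid step (illustrative)
c_si, c_water = 8433.0, 1480.0               # longitudinal sound speeds, m/s (approx.)
rho_si, rho_water = 2329.0, 1000.0           # densities, kg/m^3

c = np.full(nx, c_si)
rho = np.full(nx, rho_si)
c[1500:], rho[1500:] = c_water, rho_water    # silicon/liquid interface (trench bottom)

dt = 0.9 * dx / c.max()                      # CFL-stable time step
p = np.zeros(nx)                             # pressure grid
v = np.zeros(nx - 1)                         # staggered particle-velocity grid

trace = []
for n in range(nt):
    v -= dt / (0.5 * (rho[:-1] + rho[1:]) * dx) * (p[1:] - p[:-1])
    p[1:-1] -= dt * rho[1:-1] * c[1:-1] ** 2 / dx * (v[1:] - v[:-1])
    p[100] += np.exp(-((n * dt - 2e-10) / 5e-11) ** 2)   # short Gaussian pulse source
    trace.append(p[100])                                  # signal monitored at the source cell

print("peak echo amplitude after the direct pulse:", max(trace[1000:]))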
Procedia PDF Downloads 327187 Spin Rate Decaying Law of Projectile with Hemispherical Head in Exterior Trajectory
Authors: Quan Wen, Tianxiao Chang, Shaolu Shi, Yushi Wang, Guangyu Wang
Abstract:
As a kind of working environment of the fuze, the spin rate decay law of a projectile in exterior trajectory is of great value in the design of the rotation-count fixed-distance fuze. In addition, it is significant in the field of devices for simulation tests of the fuze exterior ballistic environment, for the flight stability and dispersion accuracy of gun projectiles, and for the opening and scattering design of submunitions and illuminating cartridges. Besides, the self-destroying mechanism of the fuze in small-caliber projectiles often works by utilizing the attenuation of centrifugal force. In the theory of projectile aerodynamics and fuze design, there are many formulas describing the change law of projectile angular velocity in exterior ballistics, such as the Roggla formula, exponential function formula, and power function formula. However, these formulas are mostly semi-empirical due to the poor test conditions and insufficient test data at that time. These formulas can hardly meet the design requirements of modern fuzes because they are not accurate enough and have a narrow range of applications. In order to provide more accurate ballistic environment parameters for the design of a hemispherical head projectile fuze, the projectile’s spin rate decay law in exterior trajectory under the effect of air resistance was studied. In the analysis, the projectile shape was simplified as a hemispherical head, a cylindrical part, a rotating band part, and an anti-truncated conical tail. The main assumptions are as follows: a) The shape and mass are symmetrical about the longitudinal axis, b) There is a smooth transition between the ball head and the cylindrical part, c) The air flow on the outer surface is set as a flat plate flow with the same area as the expanded outer surface of the projectile, and the boundary layer is turbulent, d) The polar damping moment attributed to the wrench hole and rifling mark on the projectile is not considered, e) The groove of the rifling on the rotating band is uniform, smooth and regular. The contributions of the four parts to the aerodynamic moment opposing projectile rotation were obtained from aerodynamic theory. The surface friction stress of the projectile, the polar damping moment formed by the head of the projectile, and the surface friction moments formed by the cylindrical part, the rotating band, and the anti-truncated conical tail were obtained by mathematical derivation. After that, the mathematical model of spin rate attenuation was established. Over the whole trajectory at the maximum range angle (38°), the error between the polar damping torque coefficient obtained by simulation and the coefficient calculated by the mathematical model established in this paper is not more than 7%. Therefore, the credibility of the mathematical model was verified. The mathematical model can be described as a first-order nonlinear differential equation, which has no analytical solution. The solution can only be obtained numerically by coupling the model with the projectile mass motion equations of exterior ballistics.Keywords: ammunition engineering, fuze technology, spin rate, numerical simulation
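The numerical solution mentioned above can be sketched as follows under an assumed decay law dω/dt = -k(v)·ω and an assumed velocity history; the coefficients and functional forms are placeholders, not the paper's derived expressions.

import numpy as np
from scipy.integrate import solve_ivp

def velocity(t):
    return 900.0 * np.exp(-t / 40.0)          # m/s, assumed decaying projectile velocity

def domega_dt(t, omega):
    k = 1.5e-5 * velocity(t)                  # polar damping coefficient, assumed form
    return -k * omega

omega0 = 2 * np.pi * 300.0                    # rad/s, assumed initial spin (300 rev/s)
sol = solve_ivp(domega_dt, (0.0, 60.0), [omega0], t_eval=np.linspace(0, 60, 7))
for t, w in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f} s  spin = {w / (2 * np.pi):6.1f} rev/s")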
Procedia PDF Downloads 148186 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training
Authors: Carin Chuang, Kuan-Chou Chen
Abstract:
An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiency and decrease costs while helping to improve productivity. Many organizations, including large, medium and small-sized companies, have adopted ERP systems over the past decades. Although an ERP system can bring competitive advantages to organizations, the lack of a proper training approach for ERP implementation is still a major concern. Organizations understand the importance of ERP training to adequately prepare managers and users. The low return on investment for ERP training, however, makes it difficult for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system. This calls for profound change and innovation in ERP training, both in the industrial workplace and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users’ knowledge of business processes and hands-on skills in mastering the ERP system. It can also serve as educational material for IS students in universities. The purpose of the study is to examine the use of ERP simulation games via the ERPsim system to train IS students in learning ERP implementation. ERPsim is the business simulation game developed by the ERPsim Lab at HEC Montréal, and the game runs on a real-life SAP (Systems Applications and Products) ERP system. The training uses the ERPsim system as the tool for the Internet-based simulation games and is designed as online student competitions during the class. The competitions involve student teams, with the facilitation of the instructor, and put the students’ business skills to the test via intensive simulation games on a real-world SAP ERP system. The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors, and customers through sending and receiving orders, delivering products and completing the entire cash-to-cash cycle. To learn a range of business skills, each student needs to adopt an individual business role and make business decisions around the products and business processes. Based on the training experience gained from rounds of business simulations, the findings show that learners can make mistakes at reduced risk, which helps them build self-confidence in problem-solving. In addition, learners’ reflections on their mistakes help them identify the root causes of the problems and further improve the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluation, learner motivation, attendance and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines to create an effective learning environment and can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation
Procedia PDF Downloads 141185 Collagen/Hydroxyapatite Compositions Doped with Transitional Metals for Bone Tissue Engineering Applications
Authors: D. Ficai, A. Ficai, D. Gudovan, I. A. Gudovan, I. Ardelean, R. Trusca, E. Andronescu, V. Mitran, A. Cimpean
Abstract:
In recent years, scientists have worked hard to mimic bone structures in order to develop implants and biostructures which present higher biocompatibility and reduced rejection rates. One way to achieve this goal is to use materials similar to those of bone, namely collagen/hydroxyapatite composite materials. However, it is very important to tailor not only the composition but also the microstructure of the bone substitute, which would ensure both optimal osteointegration and the mechanical properties required by the application. In this study, new collagen/hydroxyapatite composite materials doped with Cu, Li, Mn, and Zn were successfully prepared. The synthesis method is described below: weigh the Ca(OH)₂ (7.3067 g) along with ZnCl₂ (0.134 g), CuSO₄ (0.159 g), Li₂CO₃ (0.133 g), and MnCl₂·4H₂O (0.1971 g), and suspend in 100 ml distilled water under magnetic stirring. To the suspension thus obtained, a solution of NaH₂PO₄·H₂O (8.247 g dissolved in 50 ml distilled water) is added dropwise at 1 ml/min, followed by adjusting the pH to 9.5 with HCl, and finally the product is filtered and washed until neutral pH. The as-obtained slurry was dried in the oven at 80°C and then calcined at 600°C in order to ensure proper purification of the final product from organic phases, also inducing proper sterilization of the mixture before insertion into the collagen matrix. The collagen/hydroxyapatite composite materials are tailored from a morphological point of view to balance their biocompatibility and bio-integration against mechanical properties, whereas the addition of the dopants is aimed at improving the biological activity of the samples. The addition of transitional metals can improve the biocompatibility and especially the osteoblast adhesion (Mn²⁺) or induce slightly better differentiation of the osteoblasts, Zn²⁺ being a cofactor for many enzymes including those responsible for cell differentiation. If the amount is too high, the final material can become toxic and lose all of its biocompatibility. In order to achieve good biocompatibility and not reach the cytotoxic level, the amount of transitional metals added has to be maintained at low levels (0.5% molar). The amount of transitional metals entering the unit cell of HA will be verified using an inductively coupled plasma mass spectrometric system. This highly sensitive technique is necessary because, at such low levels of transitional metals, the difference between biocompatible and cytotoxic is a very thin line, thus requiring proper and thorough investigation using a precise technique. In order to determine the structure and morphology of the obtained composite materials, IR spectroscopy, X-Ray diffraction (XRD), scanning electron microscopy (SEM), and Energy Dispersive X-Ray Spectrometry (EDS) were used. Acknowledgment: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project “Biomimetic porous structures obtained by 3D printing developed for bone tissue engineering (BIOGRAFTPRINT), No. 127PED/2017 is also highly acknowledged.Keywords: collagen, composite materials, hydroxyapatite, bone tissue engineering
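A quick stoichiometry check on the reagent amounts quoted above: they give a Ca/P molar ratio close to the stoichiometric value of 1.67 for hydroxyapatite, Ca10(PO4)6(OH)2. Standard molar masses are used and the small dopant contributions are ignored in this sketch.

M_CaOH2 = 74.09          # g/mol, Ca(OH)2
M_NaH2PO4_H2O = 137.99   # g/mol, NaH2PO4*H2O

n_Ca = 7.3067 / M_CaOH2            # mol of Ca from Ca(OH)2
n_P = 8.247 / M_NaH2PO4_H2O        # mol of P from NaH2PO4*H2O

print(f"Ca/P = {n_Ca / n_P:.2f}  (stoichiometric hydroxyapatite: 10/6 = 1.67)")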
Procedia PDF Downloads 207184 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport
Authors: Aamir Shahzad, Mao-Gang He
Abstract:
Dusty plasmas currently attract widespread interest among researchers. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties and the nonlinear behavior of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding of the thermophysical properties of complex liquids under various conditions is of practical interest in the field of science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers; for several reasons, very few results are offered for this significant property. Lack of information on the thermal conductivity of dense and complex liquids at the parameters related to industrial developments is a major barrier to quantitative knowledge of the heat flux flow from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in the field of thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for an optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices), and, in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One of the promising computational techniques, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is overviewed with special emphasis on its application to transport problems of complex liquids. This proposed work is particularly motivated by modifying, for the first time, the heat conduction problem so that it leads to a polynomial velocity and temperature profile algorithm for the investigation of transport properties and their nonlinear behaviors in the NICDPLs. The aim of the proposed work is to implement a NEMD simulation algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble ≡ NVT). The output steps will be developed between 3.0×10⁵/ωₚ and 1.5×10⁵/ωₚ simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity is dependent on the plasma parameters and that the position of the minimum, λmin, shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier known simulation data, generally differing from the earlier plasma λ0 by 2%-20%, depending on Γ and κ. It has been shown that the obtained results at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results with fast convergence and small size effects over a wide range of plasma states.Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow
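For reference, the Yukawa pair potential and force that define the system simulated above can be written down in reduced units, where distance is scaled by the Wigner-Seitz radius a and energy by Q²/(4πε₀a), so the potential depends only on the screening parameter κ. This is a minimal sketch, not the HNEMD code itself.

import numpy as np

def yukawa_potential(r, kappa):
    """u(r)/E0 = exp(-kappa*r)/r, with r in units of the Wigner-Seitz radius."""
    return np.exp(-kappa * r) / r

def yukawa_force(r, kappa):
    """f(r) = -du/dr, the magnitude of the repulsive pair force."""
    return np.exp(-kappa * r) * (1.0 + kappa * r) / r**2

for kappa in (1.0, 2.0, 3.0):
    print(f"kappa = {kappa}: u(r=1) = {yukawa_potential(1.0, kappa):.3f}, "
          f"f(r=1) = {yukawa_force(1.0, kappa):.3f}")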
Procedia PDF Downloads 274