Search results for: renewable energy technology innovation
448 Combustion Characteristics and Pollutant Emissions in Gasoline/Ethanol Mixed Fuels
Authors: Shin Woo Kim, Eui Ju Lee
Abstract:
The recent development of biofuel production technology facilitates the use of bioethanol and biodiesel in automobiles. Bioethanol, in particular, can be used as a fuel for gasoline vehicles because the addition of ethanol is known to increase the octane number and reduce soot emissions. However, the wide application of biofuel is still limited by the lack of detailed combustion properties, such as auto-ignition temperature, and pollutant emissions, such as NOx and soot, which are of concern mainly for vehicle fire safety and environmental safety. In this study, the combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally. For auto-ignition temperature and NOx emission, numerical simulations were performed on a well-stirred reactor (WSR) to represent a homogeneous gasoline engine and to clarify the effect of ethanol addition to the gasoline fuel. The response surface method (RSM) was also introduced as a design of experiments (DOE) technique, which enables the various combustion properties to be predicted and optimized systematically with respect to three independent variables: ethanol mole fraction, equivalence ratio and residence time. The results for the stoichiometric gasoline surrogate show that the auto-ignition temperature increases but the NOx yield decreases with increasing ethanol mole fraction. This implies that bioethanol-added gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbons increase dramatically with increasing ethanol content as a result of incomplete combustion, which calls for adjusting the combustion itself rather than relying on an after-treatment system. RSM analysis with the three independent variables predicts the auto-ignition temperature accurately. For NOx emission, however, there was a large discrepancy between the calculated values and those predicted by conventional RSM, because NOx emission varies very steeply and the fitted second-order polynomial cannot follow such rates. To relax the steepness of the dependent variable, the NOx emission was transformed to common logarithms and the RSM analysis was repeated. NOx emission predicted through the logarithmic transformation is in fairly good agreement with the experimental results. For a more tangible understanding of gasoline/ethanol fuel pollutant emissions, experimental measurements of combustion products were performed in gasoline/ethanol pool fires, which are widely used as fire sources in laboratory-scale experiments. Three measurement methods were introduced to clarify the pollutant emissions: various gas concentrations including NOx, gravimetric soot filter sampling for elemental analysis and pyrolysis, and thermophoretic soot sampling with transmission electron microscopy (TEM). The soot yield measured by gravimetric sampling decreased dramatically as ethanol was added, but NOx emission was almost comparable regardless of ethanol mole fraction. The morphology of the soot particles was investigated to assess the degree of soot maturity. Incipient soot, such as liquid-like PAHs, was clearly observed in the soot from gasoline with higher ethanol content, whereas the soot appeared to mature under undiluted gasoline fuel.
Keywords: gasoline/ethanol fuel, NOx, pool fire, soot, well-stirred reactor (WSR)
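A minimal sketch of the logarithmic RSM step described above, assuming synthetic stand-in data (the study's WSR outputs are not available): a second-order polynomial response surface is fitted to log10(NOx) in the three factors and back-transformed for prediction.

```python
# Sketch of the RSM idea: fit a second-order polynomial response surface to
# log10(NOx) in three factors (ethanol mole fraction, equivalence ratio,
# residence time). Data here are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.7, 0.01], [0.85, 1.3, 0.1], size=(60, 3))  # [x_EtOH, phi, tau]
nox = 1e3 * np.exp(-5 * (X[:, 1] - 1.05) ** 2) * (1 - 0.5 * X[:, 0]) + 1.0  # toy ppm values

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, np.log10(nox))          # log transform relaxes the steep NOx response
pred = 10 ** model.predict(X)        # back-transform to ppm
print("max relative error:", np.max(np.abs(pred - nox) / nox))
```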
447 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implication on Forensic Laboratory Backlog
Authors: Ana Flavia Belchior De Andrade
Abstract:
Forensic laboratories all over the world face a great challenge in overcoming waiting times and backlogs in many different areas. Many factors contribute to this situation, such as increasing drug complexity, growth in the number of exams requested, and funding cuts that limit laboratories' hiring capacity. Altogether, these facts pose an essential challenge for forensic chemistry laboratories: keeping both quality and response time within acceptable limits. In this paper we analyze how the backlog affects test results and, in the end, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between 2013 and 2017 were tabulated and the results analyzed and discussed. In the last five years, the number of petitioned exams increased from 822 in February 2013 to 1358 in March 2018, representing an increase of 32% in 5 years, a rise of more than 6% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate. Output has stagnated: under the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most frequently required exam, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most of the exams were performed no later than 60 days after receipt (77%), although some samples waited up to 30 months before being examined (0.65%). When a marijuana exam is delayed, we notice an increase in inconclusive results from thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and when storage exceeds 30 months, the inconclusive rate increases to 13%. This is probably because Cannabis plants and preparations undergo oxidation during storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures that require at least two more working hours from our analysts (e.g., GC/MS analysis), and the report is delayed by at least one day. These additional procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to be considered regarding backlogged cases; there are also social issues, as legal procedures can be delayed and the prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society: at some point, if the exam takes too long to be performed, an inconclusive result can turn into a negative one, and a criminal can be acquitted on the basis of flawed expert evidence.
Keywords: backlog, forensic laboratory, quality management, accreditation
446 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle
Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito
Abstract:
Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial step towards decarbonization of the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them extremely efficient and low in direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it can also manage the battery optimally to extend its life. Among the parameters monitored by the BMS, the main ones are the State of Charge (SoC), State of Health (SoH), and State of Power (SoP). These parameters can be estimated in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable. Moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect battery performance and life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations. In this article, a hybrid approach based on a neural network and a statistical method is proposed for real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%), (ii) partial charge (SoC 50% - SoC 80%), (iii) deep discharge (SoC 80% - SoC 30%), and (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method estimates the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with the reference method found in the literature.
Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks
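The hybrid estimation scheme described above can be illustrated with a small sketch. Everything below is assumed for illustration (synthetic signals, a toy SoH proxy based on ohmic resistance growth); it only mirrors the structure of the approach: a neural network predicts ohmic resistance and incremental capacity, and simple statistics and error metrics are computed on top.

```python
# Illustrative sketch: an NN maps measured signals to [ohmic resistance,
# incremental capacity]; a simple statistic then turns resistance into an
# assumed SoH proxy. Not the authors' model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                                    # [voltage, current, temperature], scaled
R0 = 0.05 + 0.01 * X[:, 0] ** 2 + 0.002 * rng.normal(size=500)  # ohmic resistance (ohm)
dQ = 2.0 - 0.1 * X[:, 2] + 0.05 * rng.normal(size=500)          # incremental capacity (Ah/V)
y = np.column_stack([R0, dQ])

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1).fit(X, y)
pred = nn.predict(X)

soh = 100 * 0.05 / pred[:, 0]                                    # assumed SoH proxy: R0_fresh / R0_now
rmse = mean_squared_error(y, pred, squared=False)
mape = mean_absolute_percentage_error(y, pred)
print(f"RMSE={rmse:.4f}, MAPE={mape:.2%}, mean SoH={soh.mean():.1f}%")
```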
445 Using Scilab® as a New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)
Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni
Abstract:
Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological development, newcomers to every educational area must overcome difficulties of understanding at the start of their academic journey. Aiming to overcome these difficulties, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation together with a detailed explanation of the fundamental numerical solution by the finite difference method, using Scilab®, free software that is easily accessible to any research center or university anywhere, in developed and developing countries alike. Computational Fluid Dynamics (CFD) is known to be a necessary tool for engineers and professionals who study fluid mechanics; however, teaching this area of knowledge in undergraduate programs faces difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is often treated only in postgraduate courses. This work aims to bring low-cost CFD to the teaching of transport phenomena at the undergraduate level by analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory, which students need to master: the partial differential equation governing the heat transfer problem; the discretization process based on the principles of Taylor series expansion, which generates a system of equations; the convergence check of that system using the Sassenfeld criterion; and, finally, its solution by the Gauss-Seidel method. We demonstrate both simple problems solved manually and more complex problems requiring computer implementation, for which we use a small algorithm of fewer than 200 lines in Scilab® to study heat transfer in a rectangular heated plate with a different temperature on each of its four sides, producing a two-dimensional transport simulation with colored graphics. With the spread of computer technology, numerous programs have emerged that demand strong programming skills from researchers. Considering that this ability to program CFD is the main obstacle to be overcome, both by students and by researchers, we present in this article a suggestion to use programs with a less complex interface, thus reducing the difficulty of producing graphical modeling and simulation for CFD and extending programming experience to undergraduates.
Keywords: numerical methods, finite difference method, heat transfer, Scilab
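For readers without Scilab at hand, the plate exercise described above can be sketched in a few lines of Python; the boundary temperatures here are illustrative, not those of the article. Central finite differences reduce the steady heat equation to the five-point stencil, which Gauss-Seidel iteration solves by updating the grid in place.

```python
# Steady 2-D heat conduction (Laplace equation) on a rectangular plate,
# discretized by central finite differences and solved by Gauss-Seidel.
import numpy as np

nx, ny, tol = 40, 40, 1e-5
T = np.zeros((ny, nx))
T[0, :], T[-1, :], T[:, 0], T[:, -1] = 100.0, 0.0, 75.0, 50.0  # four edge temperatures

diff = 1.0
while diff > tol:
    diff = 0.0
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            new = 0.25 * (T[i + 1, j] + T[i - 1, j] + T[i, j + 1] + T[i, j - 1])
            diff = max(diff, abs(new - T[i, j]))
            T[i, j] = new                      # in-place update = Gauss-Seidel

print("interior temperature range:", T[1:-1, 1:-1].min(), T[1:-1, 1:-1].max())
```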
444 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant share of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach to sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
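A toy sketch of the MATE idea described above, with invented attribute values, weights and budget: the full tradespace (sensor grade x compute x fusion scheme) is enumerated, infeasible designs are pruned, and each remaining design is scored with a weighted multi-attribute utility.

```python
# Enumerate a small sensor-system tradespace and rank designs by a weighted
# multi-attribute utility. All numbers are invented for illustration.
from itertools import product

sensors = {"low": (0.6, 100), "mid": (0.8, 250), "high": (0.95, 600)}   # (accuracy utility, cost)
cpus    = {"embedded": (0.5, 80), "edge-gpu": (0.9, 400)}               # (throughput utility, cost)
fusion  = {"voting": 0.7, "kalman": 0.85, "bayesian": 0.9}              # fusion-scheme utility

weights = {"accuracy": 0.4, "throughput": 0.3, "fusion": 0.3}
budget = 900

designs = []
for (s, (acc, sc)), (c, (thr, cc)), (f, fu) in product(sensors.items(), cpus.items(), fusion.items()):
    cost = sc + cc
    if cost > budget:
        continue  # infeasible design, pruned from the tradespace
    utility = weights["accuracy"] * acc + weights["throughput"] * thr + weights["fusion"] * fu
    designs.append((utility, cost, s, c, f))

for u, cost, s, c, f in sorted(designs, reverse=True)[:5]:
    print(f"U={u:.2f} cost={cost:4d} sensor={s} cpu={c} fusion={f}")
```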
443 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review
Authors: Hanan Algarni
Abstract:
Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality technology is a newly emerging approach that offers patients feedback as well as open and random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training on gait speed and balance primarily, and on functional independence and community participation secondarily, in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, Web of Science, and unpublished literature. Inclusion criteria: participants (adults >18 years, stroke, ambulatory, without severe visual or cognitive impairments); intervention (VR treadmill training alone or with physiotherapy); comparator (any other intervention); outcomes (gait speed, balance, function, community participation). Characteristics of the included studies were extracted for analysis. Risk of bias was assessed using Cochrane's RoB tool. A narrative synthesis of the findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour per session. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. The RoB assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up. Outcome measures, training intensity and durations also varied across the studies; the grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation. However, it is important to note that grading was based on a small number of studies for each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function and participation directly after training. However, the effects were not sustained at follow-up (2 weeks-1 month) in two studies, and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce stronger, more robust evidence.
Keywords: virtual reality, treadmill, stroke, gait rehabilitation
442 Enhancing the Structural and Electrochemical Performance of Li-Rich Layered Metal Oxide Cathodes for Li-Ion Batteries by Coating with the Active Material
Authors: Cyril O. Ehi-Eromosele, Ajayi Kayode
Abstract:
The Li-rich layered metal oxides (LLO) are among the most promising candidate electrodes for high-energy Li-ion batteries (LIB). In the literature, these electrode systems have been designed either as a hetero-structure of the primary components (composite) or as a core-shell structure, with improved electrochemistry reported for both configurations compared with their primary components. Given the ongoing efforts to improve the electrochemical performance of the LIB, it is important to comparatively investigate the structural and electrochemical characteristics of the core-shell-like and 'composite' forms of these materials with the same compositions and synthesis conditions, as this could influence future engineering of these materials. This study therefore concerns the structural and electrochemical properties of 'composite' and core-shell-like LLO cathode materials with the same nominal composition of 0.5Li₂MnO₃-0.5LiNi₀.₅Mn₀.₃Co₀.₂O₂ (LiNi₀.₅Mn₀.₃Co₀.₂O₂ as the core and Li₂MnO₃ as the shell). The results show that the core-shell sample (-CS) gave better electrochemical performance than the 'composite' sample (-C). Both samples gave the same initial charge capacity of ~300 mAh/g when cycled at 10 mA/g and comparable charge capacities (246 mAh/g for the -CS sample and 240 mAh/g for the -C sample) when cycled at 200 mA/g. However, the -CS sample gave a higher initial discharge capacity at both current densities: 232 mAh/g and 164 mAh/g for the -CS sample versus 208 mAh/g and 143 mAh/g for the -C sample at 10 mA/g and 200 mA/g, respectively. Electrochemical impedance spectroscopy (EIS) results show that the -CS sample generally exhibited a smaller resistance than the -C sample, both uncycled and after the 50th cycle. Detailed structural analysis is ongoing, but preliminary results show that the -CS sample had a bigger unit cell volume and a higher degree of cation mixing. The thermal stability of the -CS sample was higher than that of the -C sample. XPS investigation also showed that the pristine -C sample had a more reactive surface (showing formation of carbonate species to a greater degree), which could explain the greater resistance seen in the EIS results. To reinforce the results obtained for the 0.5Li₂MnO₃-0.5LiNi₀.₅Mn₀.₃Co₀.₂O₂ composition, the same investigations were extended to another 'composite' and core-shell-like pair of LLO cathode materials, also sharing a single nominal composition, 0.5Li₂MnO₃-0.5LiNi₀.₃Mn₀.₃Co₀.₃O₂. In this case, the aim was to determine the electrochemical performance of the material using a low-Ni-content core (LiNi₀.₃Mn₀.₃Co₀.₃O₂) to clarify the contribution of the core-shell configuration to the electrochemical performance of these materials; Ni-rich layered oxides show a catalytically active surface that leads to electrolyte oxidation, resulting in poor thermal stability and cycle life. Here, too, the core-shell sample gave better electrochemical performance than the 'composite' sample with the 0.5Li₂MnO₃-0.5LiNi₀.₃Mn₀.₃Co₀.₃O₂ composition. Furthermore, superior electrochemical performance was also recorded for the core-shell-like spinel-modified LLO (0.5Li₂MnO₃-0.45LiNi₀.₅Mn₀.₃Co₀.₂O₂-0.05LiNi₀.₅Mn₁.₅O₄) compared with the composite system. These results show that the core-shell configuration can generally be used to improve the structural and electrochemical properties of LLO and spinel-modified LLO materials.
Keywords: lithium-ion battery, lithium-rich oxide cathode, core-shell structure, composite structure
441 Analysis of Taxonomic Compositions, Metabolic Pathways and Antibiotic Resistance Genes in Fish Gut Microbiome by Shotgun Metagenomics
Authors: Anuj Tyagi, Balwinder Singh, Naveen Kumar B. T., Niraj K. Singh
Abstract:
Characterization of the diverse microbial communities in a specific environment plays a crucial role in better understanding their functional relationship with the ecosystem. It is now well established that the gut microbiome of fish is not a simple replication of the microbiota of the surrounding local habitat, and extensive species, dietary, physiological and metabolic variations among fishes may have a significant impact on its composition. Moreover, the overuse of antibiotics in human, veterinary and aquaculture medicine has led to the rapid emergence and propagation of antibiotic resistance genes (ARGs) in the aquatic environment. Microbial communities harboring specific ARGs not only gain a preferential edge during selective antibiotic exposure but also pose a significant risk of ARG transfer to other, non-resistant bacteria within confined environments. This phenomenon may lead to the emergence of habitat-specific microbial resistomes and, subsequently, of virulent antibiotic-resistant pathogens, with severe consequences for fish and consumer health. In this study, the gut microbiota of a freshwater carp (Labeo rohita) was investigated by shotgun metagenomics to understand its taxonomic composition and functional capabilities. Metagenomic DNA, extracted from the fish gut, was sequenced on an Illumina NextSeq to generate paired-end (PE) 2 x 150 bp reads. After QC of the raw sequencing data with Trimmomatic, taxonomic analysis with the Kraken2 taxonomic sequence classification system revealed the presence of 36 phyla, 326 families and 985 genera in the fish gut microbiome. At the phylum level, Proteobacteria accounted for more than three-fourths of the total bacterial population, followed by Actinobacteria (14%) and Cyanobacteria (3%). Commonly used probiotic bacteria (Bacillus, Lactobacillus, Streptococcus, and Lactococcus) were found to be far less prevalent in the fish gut. After assembly of the sequencing data with the MEGAHIT v1.1.2 assembler and annotation with the PROKKA automated analysis pipeline, pathway analysis revealed the presence of 1,608 MetaCyc pathways in the fish gut microbiome. Biosynthesis pathways were the most dominant (51%), followed by degradation (39%), energy metabolism (4%) and fermentation (2%). Almost one-third (33%) of the biosynthesis pathways were involved in the synthesis of secondary metabolites. Metabolic pathways for the biosynthesis of 35 antibiotic types were also present, accounting for 5% of the overall metabolic pathways in the fish gut microbiome. Fifty-one different types of antibiotic resistance genes (ARGs), belonging to 15 antimicrobial resistance (AMR) gene families and conferring resistance against 24 antibiotic types, were detected in the fish gut. More than 90% of the ARGs in the fish gut microbiome were against beta-lactams (penicillins, cephalosporins, penems, and monobactams). Resistance against tetracycline, macrolides, fluoroquinolones, and phenicols ranged from 0.7% to 1.3%. Some of the ARGs for multi-drug resistance were also located on sequences of plasmid origin. The presence of pathogenic bacteria and of ARGs on plasmid sequences suggests a potential risk of horizontal gene transfer in the confined gut environment.
Keywords: antibiotic resistance, fish gut, metabolic pathways, microbial diversity
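The read-processing chain named above (Trimmomatic, Kraken2, MEGAHIT, Prokka) can be sketched as a thin Python wrapper. File names and database paths are placeholders and the flag sets are abbreviated; consult each tool's documentation before use.

```python
# Thin wrapper around the shotgun-metagenomics pipeline stages named in the
# abstract. Inputs, database paths and flags are illustrative placeholders.
import subprocess

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raise if any stage fails

r1, r2 = "gut_R1.fastq.gz", "gut_R2.fastq.gz"

# 1. QC / adapter and quality trimming of the paired-end reads
run(["trimmomatic", "PE", r1, r2,
     "r1_p.fq.gz", "r1_u.fq.gz", "r2_p.fq.gz", "r2_u.fq.gz",
     "SLIDINGWINDOW:4:20", "MINLEN:50"])

# 2. Taxonomic classification of the trimmed reads
run(["kraken2", "--db", "kraken2_db", "--paired",
     "--report", "kraken2.report", "--output", "kraken2.out",
     "r1_p.fq.gz", "r2_p.fq.gz"])

# 3. Assembly of the trimmed reads into contigs
run(["megahit", "-1", "r1_p.fq.gz", "-2", "r2_p.fq.gz", "-o", "megahit_out"])

# 4. Annotation of the assembled contigs
run(["prokka", "--outdir", "prokka_out", "--metagenome",
     "megahit_out/final.contigs.fa"])
```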
440 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. Determining the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining the chloride content. As the chloride content is expressed relative to the mass of binder, the analysis should involve determining both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative, providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are related directly to the mass of binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for investigating the diffusion and migration of chlorides, sulfates, and alkalis are presented, along with an example of the visualization of Li transport in concrete. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. Thanks to the better spatial resolution, more accurate input parameters for model calculations can be determined. By detecting elements such as carbon, chlorine, sodium, and potassium simultaneously, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer, and a portable scanner allows two-dimensional quantitative element mapping. Results show quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site; the results were compared with and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Work is currently underway to develop a technical code of practice for applying the method to the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
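A minimal sketch of the quantification step implied above, under the assumption of a simple univariate calibration: a chlorine line intensity (normalized to a reference) is mapped to chloride content in wt.% of binder and applied pixel-wise to produce a 2-D chloride map. All constants and intensities are invented.

```python
# Univariate LIBS calibration (normalized Cl-line intensity -> chloride
# content) applied pixel-wise to a simulated 2-D scan. Invented numbers.
import numpy as np

# calibration from reference samples: Cl wt.% of binder vs. normalized intensity
ref_intensity = np.array([0.02, 0.08, 0.15, 0.30, 0.55])
ref_chloride  = np.array([0.05, 0.20, 0.40, 0.80, 1.50])   # wt.% of binder
slope, intercept = np.polyfit(ref_intensity, ref_chloride, 1)

# fake 2-D scan: normalized Cl-line intensity on a 1 mm grid
scan = np.clip(np.random.default_rng(2).normal(0.1, 0.05, size=(50, 80)), 0, None)
chloride_map = slope * scan + intercept                      # wt.% per pixel

critical = 0.4   # assumed critical chloride content, wt.% of binder
print(f"{(chloride_map > critical).mean():.1%} of pixels above critical content")
```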
439 Effect of Time on Stream on the Performance of Plasma-Assisted Fe-Doped Cryptomelanes in Trichloroethylene (TCE) Oxidation
Authors: Sharmin Sultana, Nicolas Nuns, Pardis Simon, Jean-Marc Giraudon, Jean-Francois Lamonior, Nathalie D. Geyter, Rino Morent
Abstract:
Environmental issues, especially air pollution, have become a major concern of environmental legislation as a consequence of growing awareness in our global world. In this regard, the control of volatile organic compound (VOC) emissions has become an important issue due to their potential toxicity, carcinogenicity, and mutagenicity. Research into innovative technologies for VOC abatement is being stimulated to meet the new stringent standards on VOC emissions. One emerging strategy is the coupling of two existing complementary technologies, namely non-thermal plasma (NTP) and heterogeneous catalysis, to obtain a more efficient process for VOC removal from air. The objective of this work is to investigate the abatement of trichloroethylene (TCE, a highly toxic chlorinated VOC) from moist air (RH = 15%) as a function of time by the combined use of a multi-pin-to-plate negative DC corona/glow discharge with an Fe-doped cryptomelane catalyst downstream, i.e., a post-plasma catalysis (PPC) process. For the catalyst-alone case, experiments reveal that Fe-doped cryptomelane (regardless of the mode of Fe incorporation: co-precipitation (Fe-K-OMS-2) or impregnation (Fe/K-OMS-2)) initially exhibits excellent activity for decomposing TCE compared to cryptomelane (K-OMS-2) itself. The maximum TCE abatement values obtained after 6 min are as follows: Fe-KOMS-2 (73.3%) > Fe/KOMS-2 (48.5%) > KOMS-2 (22.6%). However, with prolonged operation time, whatever the catalyst concerned, the abatement of TCE decreases. After 111 min of exposure, the catalysts rank as follows: Fe/KOMS-2 (11%) < K-OMS-2 (12.3%) < Fe-KOMS-2 (14.5%). Clearly, this indicates catalyst deactivation, either by chlorination or by blocking of the active sites. Remarkably, in the PPC configuration (energy density = 60 J/L, catalyst temperature = 150°C), experiments reveal enhanced TCE removal regardless of the type of catalyst. After 6 min on stream, the TCE removal efficiencies are: K-OMS-2 (60%) < Fe/K-OMS-2 (79%) < Fe-K-OMS-2 (99.3%). The enhanced performance of the Fe-K-OMS-2 catalyst is attributed to its high surface oxygen mobility and structural defects, leading to a high O₃ decomposition efficiency that yields active species able to oxidize the plasma-processed hazardous by-products and any remaining VOC into CO₂. Moreover, both undoped and doped catalysts remain strongly capable of abating TCE with time on stream. The TCE removal efficiencies of the PPC processes with the Fe/KOMS-2 and KOMS-2 catalysts are not affected by time on stream, indicating excellent catalyst stability. When Fe-K-OMS-2 is used as the catalyst, TCE abatement decreases slightly with time on stream; however, it is noteworthy that a constant abatement of 83% is still observed for at least 30 minutes. These results prove that combining NTP with catalysts not only increases the catalytic activity but also prevents, to some extent, the poisoning of catalytic sites, resulting in enhanced catalyst stability. To better understand the different surface processes occurring during the total oxidation of TCE in the PPC experiments, a detailed X-ray photoelectron spectroscopy (XPS) and time-of-flight secondary ion mass spectrometry (ToF-SIMS) study of the fresh and used catalysts is in progress.
Keywords: Fe doped cryptomelane, non-thermal plasma, plasma-catalysis, stability, trichloroethylene
438 Optimization of Territorial Spatial Functional Partitioning in Coal Resource-Based Cities Based on Ecosystem Service Clusters: The Case of Gujiao City in Shanxi Province
Authors: Gu Sihao
Abstract:
The coordinated development of "ecology-production-life" in cities has received close national attention in China, and the transformation and sustainable development of resource-based cities have become a hot research topic. As an important subset of China's resource-based cities, coal resource-based cities are numerous and widely distributed. However, owing to the adjustment of the national energy structure and the gradual exhaustion of urban coal resources, the development vitality of coal resource-based cities is gradually declining. In many studies, the deterioration of the ecological environment in coal resource-based cities, driven by an "emphasis on economy and neglect of ecology", has emerged as the main problem restricting their urban transformation and sustainable development. Since the 18th National Congress of the Communist Party of China (CPC), the Central Government has been deepening territorial space planning and development. On the premise of optimizing the territorial space development pattern, it has completed the demarcation of ecological protection red lines and carried out ecological zoning and ecosystem evaluation, which have become an important basis and scientific guarantee for ecological modernization and the construction of an ecological civilization. Understanding a region's multiple ecosystem services, and the relationships among them, is a precondition for ecosystem management. Ecosystem service clusters can identify the interactions among multiple ecosystem services, and zoning ecological functions on the basis of cluster extent and characteristics makes it possible to balance and coordinate the trade-offs and synergies among different ecosystem services within each functional zone, enabling better social-ecological system management. Based on this understanding, this study optimizes the spatial functional zoning of Gujiao, a coal resource-based city, in order to provide a new theoretical basis for its sustainable development. Building on a detailed analysis of the characteristics and utilization of Gujiao's territorial space, the study uses self-organizing feature map (SOFM) neural networks to identify local ecosystem service clusters, delineates ecological function zones according to the extent and function of the clusters, balances and coordinates the strengths of the different ecosystem services within each zone, establishes the relationship between clusters and land use, and adjusts the functions of territorial space within each zone. Then, taking the characteristics of coal resource-based cities and of national spatial function zoning as the driving factors of land change, a cellular automata simulation program is used to simulate the city's future development trends under different restoration strategies. This provides theories and technical methods for the "three-line" demarcation in Gujiao's territorial space planning, optimizes territorial space functions, and puts forward targeted strategies for promoting regional ecosystem services, providing theoretical support for the improvement of human well-being and the sustainable development of resource-based cities.
Keywords: coal resource-based city, territorial spatial planning, ecosystem service cluster, GMOP model, GeoSOS-FLUS model, functional zoning optimization and upgrading
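A sketch of the SOFM clustering step described above, using the MiniSom library on random placeholder data: each spatial unit is described by a vector of ecosystem service indicators, and the winning map node serves as its cluster label.

```python
# SOFM clustering of spatial units by their ecosystem service profiles.
# Indicator data are random placeholders; uses the MiniSom library.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
# rows = spatial units, columns = ecosystem services (e.g., water yield,
# carbon storage, soil retention, habitat quality, food production)
services = rng.random((1000, 5))

som = MiniSom(2, 2, input_len=5, sigma=0.8, learning_rate=0.5, random_seed=3)
som.train_random(services, 5000)

clusters = np.array([som.winner(v) for v in services])        # (i, j) node per unit
labels = clusters[:, 0] * 2 + clusters[:, 1]                  # flatten to 0..3
for k in range(4):
    print(f"cluster {k}: {np.sum(labels == k)} units, "
          f"mean service profile = {services[labels == k].mean(axis=0).round(2)}")
```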
437 Development of a Novel Ankle-Foot Orthotic Using a User-Centered Approach for Improved Satisfaction
Authors: Ahlad Neti, Elisa Arch, Martha Hall
Abstract:
Studies have shown that individuals who use ankle-foot orthoses (AFOs) report a high level of dissatisfaction with their current AFOs. Studies point to a focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints before designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with seven individuals (average age 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the grounded theory method in NVivo 12. Qualitative analysis of the results identified sources of user dissatisfaction, such as heaviness, bulk, and uncomfortable material, as well as overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in constructing metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards; given the large body of research concerning these standards, they were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints was compiled, accounting for both the user perspective on AFO design and the AFO's medical purpose. These metrics and constraints establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be grouped into several overarching areas for AFO improvement. Categories of user-perspective metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis, from which two, the piano wire model and the segmented model, were selected to move forward into prototyping. Evaluation of non-functional prototypes of the two models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, with participants being recruited, for design verification and validation.
Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices
436 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry
Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker
Abstract:
Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows easy application of process control: the temperature may change unpredictably due to friction in the process, so the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing, as it is an open-ended process with feedstocks entering at one end and product leaving at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals and small organic molecules, have been produced mechanochemically using TSE. These advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow that impacts product quality. To address these drawbacks, this study uses UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about product quality, combined with real-time process information in an advanced process control system (PharmaMV, Perceptive Engineering) allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE, and nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relate the percent conversion to product to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV real-time system to carry out automated optimization experiments maximizing the percentage conversion over a set of process parameters, in a design of experiments (DoE) style methodology. With the optimum set of process parameters established, a series of PRBS (pseudo-random binary sequence) process response tests around the optimum was conducted. The resulting dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality while ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. In summary, a combination of online spectral monitoring and advanced process control was used to develop a robust system for the optimization and control of two TSE-based mechanosynthetic processes.
Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control
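A minimal sketch of the chemometric step described above: a PLS model relating UV-Vis spectra to percent conversion, with reference values standing in for the NMR measurements. The spectra are simulated here; in the actual system the calibration is built within PharmaMV.

```python
# PLS calibration: UV-Vis spectra -> percent conversion, cross-validated.
# Spectra and reference values are simulated placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
wavelengths = np.linspace(250, 600, 200)
conversion = rng.uniform(20, 95, size=60)                     # % conversion (reference, e.g., NMR)

# toy spectra: a product band at 420 nm that grows with conversion, plus noise
band = np.exp(-((wavelengths - 420) / 30) ** 2)
spectra = conversion[:, None] * band[None, :] / 100 + rng.normal(0, 0.01, (60, 200))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, spectra, conversion, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - conversion) ** 2))
print(f"RMSECV = {rmsecv:.2f} % conversion")
```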
435 Impact of Wastewater Irrigation on Soil Quality and Productivity of Tuberose (Polianthes tuberosa L. cv. Prajwal)
Authors: D. S. Gurjar, R. Kaur, K. P. Singh, R. Singh
Abstract:
Large volumes of wastewater are generated in urban areas of India. Owing to its ready availability, low energy requirement and nutrient richness, farmers in urban and peri-urban areas deliberately use wastewater to grow high-value vegetable crops. However, wastewater contains pathogens and toxic pollutants, which can enter the food chain when wastewater is used to irrigate vegetable crops. Hence, wastewater can instead be used to grow commercial flower crops, avoiding food-chain contamination. Tuberose (Polianthes tuberosa L.), cultivated over a 30,000 ha area, is one of the most important commercially grown flower crops in India. Its popularity is due mainly to the sweet fragrance and long keeping quality of its flower spikes. The flower spikes command a high market price and usually bloom during the summer and rainy seasons, when the supply of other flowers in the market is meager. Tuberose has a high irrigation water requirement, and the fresh water supply in tuberose-growing areas of India is inadequate. Wastewater may therefore fulfill the water and nutrient requirements and enhance the productivity of tuberose. Keeping this in view, the present study was carried out at the WTC farm of the ICAR-Indian Agricultural Research Institute, New Delhi, in 2014-15, with Prajwal as the test variety. The seven treatments were: T-1: wastewater irrigation at 0.6 ID/CPE; T-2: wastewater irrigation at 0.8 ID/CPE; T-3: wastewater irrigation at 1.0 ID/CPE; T-4: wastewater irrigation at 1.2 ID/CPE; T-5: wastewater irrigation at 1.4 ID/CPE; T-6: conjunctive use of groundwater and wastewater irrigation at 1.0 ID/CPE in cyclic mode; and T-7: control (groundwater irrigation at 1.0 ID/CPE), in a randomized block design with three replications. Wastewater and groundwater samples were collected monthly (April 2014 to March 2015) and analyzed for parameters of irrigation quality (pH, EC, SAR, RSC), pollution hazard (BOD, toxic heavy metals and faecal coliforms) and nutrient potential (N, P, K, Cu, Fe, Mn, Zn) as per standard methods. After harvest of the tuberose crop, soil samples were also collected and analyzed for soil quality parameters as per standard methods. Vegetative growth and flower parameters were recorded at the flowering stage. Results indicated that the wastewater samples had higher nutrient potential and pollution hazard than the groundwater used in the experiment. Soil quality parameters such as pH, EC, available phosphorus and potassium, and heavy metals (Cu, Fe, Mn, Zn, Cd, Pb, Ni, Cr, Co, As) were not significantly changed, whereas organic carbon and available nitrogen were significantly higher in the treatments receiving wastewater irrigation at 1.2 and 1.4 ID/CPE compared with groundwater irrigation. Significantly greater plant height (68.47 cm), leaves per plant (78.35), spike length (99.93 cm), rachis length (37.40 cm), number of florets per spike (56.53), cut spike yield (0.93 lakh/ha) and loose flower yield (8.5 t/ha) were observed in the treatment with wastewater irrigation at 1.2 ID/CPE. The study concluded that wastewater of this quality improves the productivity of tuberose without an adverse impact on soil quality/health; however, its long-term impacts need further evaluation.
Keywords: conjunctive use, irrigation, tuberose, wastewater
434 Applying Simulation-Based Digital Teaching Plans and Designs in Operating Medical Equipment
Authors: Kuo-Kai Lin, Po-Lun Chang
Abstract:
Background: The Emergency Care Research Institute's list of the top 10 medical technology hazards for 2017 was headed by the following hazard: 'infusion errors can be deadly if simple safety steps are overlooked.' In addition, hospitals use various assessment items to evaluate the safety of their medical equipment, confirming the importance of medical equipment safety. In recent years, the topic of patient safety has garnered increasing attention. Accordingly, various agencies have established patient safety committees to coordinate, collect, and analyze information on abnormal events associated with medical practice, and activities to promote and improve employee training have been introduced to reduce the recurrence of medical malpractice. Objective: To enable nursing personnel to acquire the skills needed to operate common medical equipment, and to update and review such skills whenever necessary, in order to raise the quality of medical care and reduce patient injuries caused by equipment operation errors. Method: A quasi-experimental design was adopted, and nurses from a regional teaching hospital were selected as the study sample. Online videos demonstrating the operation of common medical equipment were made, and quick response codes were created so that nursing personnel could access the videos whenever necessary. Senior nursing supervisors and equipment experts were invited to formulate a 'Scale-Based Questionnaire for Assessing Nursing Personnel's Operational Knowledge of Common Medical Equipment' to evaluate the nursing personnel's literacy in operating the equipment. From March to October 2017, employee training on medical equipment operation and a practice course (simulation course) were implemented, after which their effectiveness was assessed. Results: Before and after the training and practice course, the 66 participating nurses scored 58 and 87, respectively, on 'operational knowledge of common medical equipment' (a statistically significant difference; t = -9.407, p < .001); 53.5 and 86.3 on 'operational knowledge of 12-lead electrocardiography' (z = -2.087, p < .01); 40 and 79.5 on 'operational knowledge of cardiac defibrillators' (z = -3.849, p < .001); 90 and 98 on 'operational knowledge of Abbott pumps' (z = -1.841, p = 0.066); and 8.7 and 13.7 on 'perceived competence' (a statistically significant difference; t = -2.77, p < .05). Medical equipment operation errors were observed in the participating hospital in both 2016 and 2017; however, from the implementation of the intervention through October 2017, no further operation errors were observed, which can be regarded as a secondary outcome of this study. Conclusion: In this study, innovative teaching strategies effectively enhanced the professional literacy and skills of nursing personnel in operating medical equipment. The training and practice course also raised the nursing personnel's related literacy and perceived competence in operating medical equipment, enabling them to operate the equipment accurately and avoid operational errors that might jeopardize patient safety.
Keywords: medical equipment, digital teaching plan, simulation-based teaching plan, operational knowledge, patient safety
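A sketch of the pre/post comparisons reported above, assuming simulated scores (the study data are not available): a paired t-test for the overall knowledge score and a Wilcoxon signed-rank test of the kind reported for the item-level scores.

```python
# Paired pre/post comparison on simulated scores for n = 66 nurses.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(5)
pre = np.clip(rng.normal(58, 12, size=66), 0, 100)    # pre-training scores
post = np.clip(pre + rng.normal(29, 10, size=66), 0, 100)

t, p_t = ttest_rel(pre, post)
w, p_w = wilcoxon(pre, post)
print(f"paired t-test: t = {t:.3f}, p = {p_t:.3g}")
print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p_w:.3g}")
```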
433 System-Driven Design Process for Integrated Multifunctional Movable Concepts
Authors: Oliver Bertram, Leonel Akoto Chama
Abstract:
In today's civil transport aircraft, the design of flight control systems is based on experience gained from previous aircraft configurations, with a clear distinction between primary and secondary flight control functions for controlling the aircraft's attitude and trajectory. Significant system improvements are now seen particularly in multifunctional movable concepts, in which the flight control functions are no longer considered separate but integral. This allows new functions to be implemented to improve overall aircraft performance. However, the classical design process for flight controls is sequential and insufficiently interdisciplinary; in particular, the systems discipline is involved only rudimentarily in the early phase. In many cases, the task of systems design is limited to meeting the requirements of the upstream disciplines, which may lead to integration problems later. For this reason, an incremental development approach is required to reduce the risk of a complete redesign. Although the potential of, and the path to, multifunctional movable concepts has been shown, the complete re-engineering of aircraft concepts with less classical movable concepts carries considerable design risk owing to the lack of design methods. This is an obstacle to major leaps in technology. The gap in the state of the art grows even wider if, in the future, unconventional aircraft configurations are to be considered, for which no reference data or architectures are available. This means that the experience-based approach used for conventional configurations is of limited use and not applicable to the next generation of aircraft. In particular, methods and tools are needed for rapid trade-offs between new multifunctional flight control system architectures. To close this gap in the state of the art, an integrated, system-driven design process for the multifunctional flight control systems of non-classical aircraft configurations is presented. The overall goal of the design process is to quickly find optimal solutions for single or combined target criteria within the very large solution space of the flight control system. In contrast to the state of the art, all disciplines are involved in a holistic, integrated design rather than a sequential process. To emphasize the systems discipline, this paper focuses on the methodology for designing movable actuation systems within this integrated design process for multifunctional movables. The methodology includes different approaches to creating system architectures, component design methods, and the process outputs necessary to evaluate the systems. An application example based on a reference configuration is used to demonstrate the process and validate the results. For this, new unconventional hydraulic and electrical flight control system architectures are calculated, which result from the higher requirements of the multifunctional movable concept. In addition to typical key performance indicators such as mass and required power, the results concerning the feasibility and wing-integration aspects of the system components are examined and discussed. This is intended to show how systems design can influence and drive the wing and overall aircraft design.
Keywords: actuation systems, flight control surfaces, multi-functional movables, wing design process
432 Alternative Energy and Carbon Source for Biosurfactant Production
Authors: Akram Abi, Mohammad Hossein Sarrafzadeh
Abstract:
Because of their several advantages over chemical surfactants, biosurfactants have attracted growing interest over the past decades: advantages such as lower toxicity, higher biodegradability, higher selectivity and applicability at extreme temperatures and pH, which enable them to be used in a variety of applications such as enhanced oil recovery and environmental and pharmaceutical applications. Bacillus subtilis produces a cyclic lipopeptide called surfactin, which is one of the most powerful biosurfactants, with the ability to decrease the surface tension of water from 72 mN/m to 27 mN/m. In addition to its biosurfactant character, surfactin exhibits interesting biological activities, such as inhibition of fibrin clot formation, lysis of erythrocytes and several bacterial spheroplasts, and antiviral, anti-tumoral and antibacterial properties. Surfactin is an antibiotic substance and has recently been shown to possess anti-HIV activity. However, the application of biosurfactants is limited by their high production cost. The cost can be reduced by optimizing biosurfactant production using cheap feedstock: utilization of inexpensive substrates and unconventional carbon sources, such as urban or agro-industrial wastes, is a promising strategy for decreasing the production cost of biosurfactants. With suitable engineering optimization and microbiological modifications, these wastes can be used as substrates for large-scale production of biosurfactants. As an effort toward this purpose, in this work we used olive oil as a second carbon source and yeast extract as a second nitrogen source to investigate their effect on both biomass and biosurfactant production in Bacillus subtilis cultures. Since the turbidity of the culture was affected by the presence of oil, optical density was compromised and could no longer be used as an index of growth and biomass concentration. Therefore, cell dry weight measurements, with the necessary steps taken to remove oil drops and prevent interference with the biomass weight, were carried out to monitor biomass concentration during the growth of the bacterium. The surface tension and critical micelle dilutions (CMD-1, CMD-2) were taken as indirect measures of biosurfactant production. Distinctive and promising results were obtained in the cultures containing olive oil compared to cultures without it: a more than two-fold increase in biomass production (from 2 g/l to 5 g/l) and a considerable reduction in surface tension, down to 40 mN/m, at surprisingly early hours of culture (only 5 h after inoculation). This early onset of biosurfactant production is especially interesting when compared with conventional cultures, in which this reduction in surface tension is not obtained until 30 h of culture time. Reducing the production time is a very prominent result for large-scale process development. Furthermore, these results can be used to develop strategies for utilizing agro-industrial wastes (such as olive oil mill residue, molasses, etc.) as cheap and easily accessible feedstocks to decrease the high cost of biosurfactant production.
Keywords: agro-industrial waste, bacillus subtilis, biosurfactant, fermentation, second carbon and nitrogen source, surfactin
Procedia PDF Downloads 301
431 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most natural means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands to computers than to type them and, in the same way, easier to listen to audio played from a device than to extract output from it. With robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this factor, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that contains information about the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-Trained Transformer-3 (GPT-3) natural language model. Based on the summary, the essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model, and based on user preference, one can define a new speech command format by adding examples of the respective format to the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded using this pipeline so that one can give speech commands and receive spoken output from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
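To make the orchestration concrete, here is a minimal runnable sketch of the pipeline's control flow. The helper functions are hypothetical stand-ins for the models named above (QuartzNet ASR, GPT-3 summarization, YOLO detection, text-to-speech); in the real project each would wrap the corresponding OpenVINO or API inference call.

```python
# Hypothetical stand-ins for the real model calls; each stub returns
# a fixed value so the control flow can run end to end.
def transcribe(audio):                      # QuartzNet ASR would go here
    return "find cars between second 10 and second 40"

def summarize(command_text):                # GPT-3 prompt with format examples
    return {"labels": ["car"], "start": 10, "end": 40}

def extract_frames(video_path, start, end):
    # A real implementation would decode the video; we fabricate frames here.
    return [(frame_no, None) for frame_no in range(start, end)]

def detect_objects(frame):                  # YOLO inference on one frame
    return ["car", "person"]

def synthesize_speech(text):                # text-to-speech playback
    print("[TTS]", text)

def run_pipeline(audio_command, video_path):
    command = transcribe(audio_command)
    spec = summarize(command)               # target labels + time interval
    hits = [
        frame_no
        for frame_no, frame in extract_frames(video_path, spec["start"], spec["end"])
        if any(label in detect_objects(frame) for label in spec["labels"])
    ]
    synthesize_speech(f"Target objects found in frames: {hits}")

run_pipeline(b"<audio bytes>", "input.mp4")
```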
Procedia PDF Downloads 78
430 The Effect of Calcium Phosphate Composite Scaffolds on the Osteogenic Differentiation of Rabbit Dental Pulp Stem Cells
Authors: Ling-Ling E, Lin Feng, Hong-Chen Liu, Dong-Sheng Wang, Zhanping Shi, Juncheng Wang, Wei Luo, Yan Lv
Abstract:
The objective of this study was to compare the effects of two calcium phosphate composite scaffolds on the attachment, proliferation and osteogenic differentiation of rabbit dental pulp stem cells (DPSCs). One, nano-hydroxyapatite/collagen/poly(L-lactide) (nHAC/PLA), which imitates the composition and microstructural characteristics of natural bone, was made by Beijing Allgens Medical Science & Technology Co., Ltd. (China). The other, beta-tricalcium phosphate (β-TCP), with a fully interconnected globular pore structure, was provided by Shanghai Bio-lu Biomaterials Co., Ltd. (China). We compared the water absorption rate and the protein adsorption rate of the two scaffolds, as well as the characteristics of DPSCs cultured on the culture plate and on both scaffolds under osteogenic differentiation medium (ODM) treatment. The constructs were then implanted subcutaneously into the backs of severe combined immunodeficient (SCID) mice for 8 and 12 weeks to compare their bone formation capacity. The results showed that the ODM-treated DPSCs expressed osteocalcin (OCN), bone sialoprotein (BSP), type I collagen (COL I) and osteopontin (OPN) by immunofluorescence staining. Positive alkaline phosphatase (ALP) staining, calcium deposition and calcium nodules were also observed in the ODM-treated DPSCs. The nHAC/PLA had a significantly higher water absorption rate and protein adsorption rate than β-TCP. The initial attachment of DPSCs seeded onto nHAC/PLA was significantly higher than that onto β-TCP, and the proliferation rate of the cells was significantly higher than that on β-TCP on days 1, 3 and 7 of cell culture. DPSCs+β-TCP had significantly higher ALP activity, calcium/phosphorus content and mineral formation than DPSCs+nHAC/PLA. When implanted into the backs of SCID mice, nHAC/PLA alone produced no new bone formation; newly formed mature bone and osteoid were observed only in β-TCP alone, DPSCs+nHAC/PLA and DPSCs+β-TCP, and these three groups displayed increased bone formation over the 12-week period. The percentage of total bone formation area did not differ between DPSCs+β-TCP and DPSCs+nHAC/PLA at each time point, but the percentage of mature bone formation area of DPSCs+β-TCP was significantly higher than that of DPSCs+nHAC/PLA. Our results demonstrated that DPSCs on nHAC/PLA proliferated better, that DPSCs on β-TCP mineralized more in vitro, and that much more newly formed mature bone was present in vivo in the DPSCs+β-TCP group. These findings provide further evidence that scaffold architecture influences the attachment, proliferation and differentiation of cells differently. This study may provide insight into clinical periodontal bone tissue repair with the DPSCs+β-TCP construct.
Keywords: dental pulp stem cells, nano-hydroxyapatite/collagen/poly(L-lactide), beta-tricalcium phosphate, periodontal tissue engineering, bone regeneration
Procedia PDF Downloads 332
429 Privacy Paradox and the Internet of Medical Things
Authors: Isabell Koinig, Sandra Diehl
Abstract:
The health-care context has not been left unaffected by technological developments. In recent years, the Internet of Medical Things (IoMT) has not only led to a collaboration between disease management and advanced care coordination but also to more personalized health care and patient empowerment. With more than 40% of all health technology being IoMT-related by 2020, questions regarding privacy have become more prevalent, even more so during COVID-19, when apps allowing for intensive tracking of people's whereabouts and personal contacts caused privacy advocates to protest and revolt. There is a widespread tendency that even though users may express concerns and fears about their privacy, they behave in a manner that appears to contradict their statements by disclosing personal data. In the literature, this phenomenon is discussed as the privacy paradox. While there are some studies investigating the privacy paradox in general, there is only scarce research related to the privacy paradox in the health sector and, to the authors' knowledge, no empirical study investigating young people's attitudes toward data security when using wearables and health apps. The empirical study presented in this paper tries to reduce this research gap by focusing on the area of digital and mobile health. It sets out to investigate the degree of importance individuals attribute to protecting their privacy and their individual privacy protection strategies. Moreover, the question of to what degree individuals between the ages of 20 and 30 are willing to grant commercial parties access to their private data in order to use digital health services and apps is put to the test. To answer this research question, results from 6 focus groups with 40 participants are presented. The focus was placed on this age segment because it has grown up in a digitally immersed environment; moreover, it is particularly the young generation that is not only interested in health and fitness but also already uses health-supporting apps or gadgets. Approximately one-third of the study participants were students. Subjects were recruited in August and September 2019 by two trained researchers via email and were offered an incentive for their participation. Overall, results indicate that the young generation is well informed about the growing data collection and is quite critical of it; moreover, they possess knowledge of the potential side effects associated with this data collection. Most respondents indicated that they handle their data cautiously and consider privacy highly relevant, utilizing a number of protective strategies to ensure the confidentiality of their information. Their willingness to share information in exchange for services was only moderately pronounced, particularly in the health context, since health data were seen as valuable and sensitive. The majority of respondents indicated that they would rather miss out on digital and mobile health offerings in order to maintain their privacy. While this behavior might be an unintended consequence, it is an important piece of information for app developers and medical providers, who have to find a user base for their products against the background of rising user privacy concerns.
Keywords: digital health, privacy, privacy paradox, IoMT
Procedia PDF Downloads 136
428 Floating Building Potential for Adaptation to Rising Sea Levels: Development of a Performance Based Building Design Framework
Authors: Livia Calcagni
Abstract:
Most of the largest cities in the world are located in areas that are vulnerable to coastal erosion and flooding, both linked to climate change and rising sea levels (RSL). Nevertheless, more and more people are moving to these vulnerable areas as cities keep growing. Architects, engineers and policy makers are called to rethink the way we live and to provide timely and adequate responses, not only by investigating measures to improve the urban fabric, but also by developing strategies capable of planning for change and exploring unusual and resilient frontiers of living, such as floating architecture. Since the beginning of the 21st century, we have seen a dynamic growth of water-based architecture. At the same time, the shortage of land available for urban development has also led to reclaiming the seabed or building floating structures. In light of these considerations, the time is ripe to consider floating architecture not only as a full-fledged building typology but especially as a full-fledged adaptation solution for RSL. Currently, there is no global international legal framework for urban development on water, and there is no structured performance based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends to identify the technological, morphological, functional, economic and managerial requirements that must be considered in the development of the PBBD framework, conceived as a meta-design tool. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are definitely more similar to those of the urban environment than to those of the offshore industry. Therefore, the identification and categorization of parameters take urban-architectural guidelines and regulations as the starting point, drawing the missing aspects, such as hydrodynamics, from offshore and shipping regulatory frameworks. This study is carried out through an evidence-based assessment of performance guidelines and regulatory systems that are effective in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, this paper highlights how inhabiting water is not only a viable response to the problem of RSL, and thus a resilient frontier for urban development, but also a response to energy insecurity, clean water and food shortages, environmental concerns and urbanization, in line with Blue Economy principles and the Agenda 2030. Moreover, the discipline of architecture is presented as a fertile field for investigating solutions to cope with climate change and its effects on life safety and quality. Future research involves the development of a decision support system as an information tool to guide the user through the decision-making process, emphasizing the logical interaction between the different potential choices, based on the PBBD.
Keywords: adaptation measures, floating architecture, performance based building design, resilient architecture, rising sea levels
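As a hint of how the envisaged decision support system could operationalize the PBBD framework, the sketch below scores candidate floating-building configurations against weighted performance criteria. The criteria, weights, candidate configurations and scores are purely illustrative assumptions for the sketch, not values proposed by the study.

```python
# Illustrative weighted-criteria scoring; all names and numbers are assumptions.
CRITERIA_WEIGHTS = {
    "hydrodynamic_stability": 0.25,
    "structural_safety": 0.25,
    "cost": 0.20,
    "energy_autonomy": 0.15,
    "urban_integration": 0.15,
}

CANDIDATES = {  # performance scores normalised to 0-1, higher is better
    "concrete_pontoon": {
        "hydrodynamic_stability": 0.9, "structural_safety": 0.8,
        "cost": 0.5, "energy_autonomy": 0.4, "urban_integration": 0.7,
    },
    "modular_steel_barge": {
        "hydrodynamic_stability": 0.7, "structural_safety": 0.7,
        "cost": 0.7, "energy_autonomy": 0.6, "urban_integration": 0.6,
    },
}

def weighted_score(scores):
    """Aggregate one candidate's criterion scores with the global weights."""
    return sum(CRITERIA_WEIGHTS[criterion] * value
               for criterion, value in scores.items())

# Rank candidates so the user sees the trade-offs, not just a single winner.
for name, scores in sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```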
Procedia PDF Downloads 85
427 New Media and the Personal Vote in General Elections: A Comparison of Constituency Level Candidates in the United Kingdom and Japan
Authors: Sean Vincent
Abstract:
Within the academic community, there is a consensus that political parties in established liberal democracies are facing a myriad of organisational challenges as a result of falling membership, weakening links to grass-roots support and rising voter apathy. During the same period of party decline and growing public disengagement, political parties have become increasingly professionalised. The professionalisation of political parties owes much to changes in technology, with television becoming the dominant medium for political communication. In recent years, however, it has become clear that a new medium of communication is being utilised by political parties and candidates: New Media. New Media, a term hard to define but related to internet-based communication, offers a potential revolution in political communication. It can be utilised by anyone with access to the internet, and its most widely used communication platforms, such as Facebook and Twitter, are free to use. The advent of Web 2.0 has dramatically changed what can be done with the Internet. Websites now allow candidates at the constituency level to fundraise, organise and set out personalised policies. Social media allows them to communicate with supporters and potential voters practically cost-free. As such, candidate dependency on the national party for resources and image now lies open to debate. Arguing that greater candidate independence may be a natural next step in light of the contemporary challenges faced by parties, this paper examines how New Media is being used by candidates at the constituency level to increase their personal vote. The paper presents findings from research carried out during two elections: the Japanese Lower House election of 2014 and the UK general election of 2015. During these elections, a sample totalling 150 candidates from the three biggest parties in each country was selected, and their new media output, specifically candidate websites, Twitter and Facebook, was subjected to content analysis. The analysis examines how candidates are using New Media to become more independent from the national party, both functionally, through fundraising and volunteer mobilisation, and politically, through the promotion of personal and local policies. In order to validate the results of the content analysis, this paper also presents evidence from interviews carried out with 17 candidates who stood in the 2014 Japanese Lower House election or the 2015 UK general election. With a combination of statistical analysis and interviews, several conclusions can be drawn about the use of New Media at the constituency level. The findings show not just a clear difference in the way candidates from each country are using New Media but also differences within countries based upon the particular circumstances of each constituency. While it has not yet replaced traditional methods of fundraising and activist mobilisation, New Media is becoming increasingly important in campaign organisation, and the general consensus amongst candidates is that its importance will continue to grow as politics in both countries becomes more diffuse.
Keywords: political campaigns, elections, new media, political communication
Procedia PDF Downloads 225
426 Improved Soil and Snow Treatment with the Rapid Update Cycle Land-Surface Model for Regional and Global Weather Predictions
Authors: Tatiana G. Smirnova, Stan G. Benjamin
Abstract:
The Rapid Update Cycle (RUC) land surface model (LSM) has been the land-surface component in several generations of operational weather prediction models at the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA). It was designed for short-range weather predictions with an emphasis on severe weather and was originally kept intentionally simple to avoid uncertainties from poorly known parameters. Nevertheless, the RUC LSM, when coupled with the hourly-assimilating atmospheric model, can produce a realistic evolution of time-varying soil moisture and temperature, as well as the evolution of snow cover on the ground surface. This result is possible only if the soil/vegetation/snow component of the coupled weather prediction model has sufficient skill to avoid long-term drift. The RUC LSM was first implemented in the operational NCEP Rapid Update Cycle (RUC) weather model in 1998 and later in the Weather Research and Forecasting (WRF)-based Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR). Being available to the international WRF community, it has been implemented in operational weather models in Austria, New Zealand, and Switzerland. Based on feedback from US weather service offices and the international WRF community, and also on our own validation, the RUC LSM has matured over the years. A sea-ice module was also added to the RUC LSM for surface predictions over Arctic sea ice. Other modifications include refinements to the snow model and a more accurate specification of albedo, roughness length, and other surface properties. At present, the RUC LSM is being tested in the regional application of the Unified Forecast System (UFS). The next-generation UFS-based regional Rapid Refresh FV3 Standalone (RRFS) model will replace the operational RAP and HRRR at NCEP. Over time, the RUC LSM has participated in several international model intercomparison projects to verify its skill using observed atmospheric forcing. ESM-SnowMIP, the most recent of these experiments, focused on the verification of snow models for open and forested regions. The simulations were performed for ten sites located in different climatic zones of the world, forced with observed atmospheric conditions. While most of the 26 participating models have more sophisticated snow parameterizations than RUC, the RUC LSM achieved a high ranking in simulations of both snow water equivalent and surface temperature. However, the ESM-SnowMIP experiment also revealed some issues in the RUC snow model, which are addressed in this paper. One of them is the treatment of grid cells partially covered with snow. The RUC snow module computes the energy and moisture budgets of snow-covered and snow-free areas separately, aggregating the solutions at the end of each time step. Such treatment elevates the importance of how the model computes the snow cover fraction. Improvements to the original simplistic threshold-based approach have been implemented and tested both offline and in the coupled weather model. A detailed description of the changes to the snow cover fraction and other modifications to the RUC soil and snow parameterizations is given in this paper.
Keywords: land-surface models, weather prediction, hydrology, boundary-layer processes
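The fractional aggregation idea can be illustrated with a short sketch. The saturating curve used for the snow cover fraction and its SWE scale are assumptions for illustration only; they stand in for the improved threshold treatment described above, not for the operational RUC code.

```python
import math

def snow_cover_fraction(swe_mm, swe_scale_mm=20.0):
    """Smooth snow cover fraction from snow water equivalent (SWE).
    A saturating curve replacing a hard 0/1 threshold; the functional
    form and the 20 mm scale are illustrative assumptions."""
    return 1.0 - math.exp(-swe_mm / swe_scale_mm)

def grid_cell_flux(swe_mm, flux_snow, flux_snow_free):
    """Aggregate a surface flux over the snow-covered and snow-free parts
    of a grid cell, mirroring the separate budgets the RUC snow module
    solves before combining them at the end of a time step."""
    f = snow_cover_fraction(swe_mm)
    return f * flux_snow + (1.0 - f) * flux_snow_free

# Example: sensible heat flux (W/m^2) for a thinly snow-covered cell.
print(grid_cell_flux(swe_mm=5.0, flux_snow=-15.0, flux_snow_free=40.0))
```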
Procedia PDF Downloads 86
425 Time Travel Testing: A Mechanism for Improving Renewal Experience
Authors: Aritra Majumdar
Abstract:
While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and of showcasing how successful an organization is in holding on to its customers. It is well established that the lion's share of profit comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper focuses on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. It also calls out some best practices and common accelerator implementation ideas that are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section discusses the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section discusses the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper elaborates on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's first-hand experience with time travel testing. While actual customer names and program-related details are not disclosed, the paper highlights the key learnings that will help other teams implement time travel testing successfully.
Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas
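At unit-test scale, the planning idea of traveling the system forward into a renewal window can be sketched as below. The policy dates and the renewal_offer_due() rule are invented for the example, and freezegun only shifts the clock of a single Python process; enterprise time travel typically shifts database and environment clocks instead, but the verification pattern is the same.

```python
from datetime import date, timedelta
from freezegun import freeze_time  # pip install freezegun

# Hypothetical renewal rule: an offer is due within the 30-day window
# before the policy end date.
POLICY_END = date(2025, 6, 30)

def renewal_offer_due(today: date) -> bool:
    return timedelta(0) <= POLICY_END - today <= timedelta(days=30)

# Travel forward to just inside the renewal window: the offer must fire.
with freeze_time("2025-06-10"):
    assert renewal_offer_due(date.today())

# Travel to just outside the window: the offer must not fire.
with freeze_time("2025-05-01"):
    assert not renewal_offer_due(date.today())

print("renewal window checks passed")
```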
Procedia PDF Downloads 158
424 Nurse Participation for the Economical Effectiveness in Medical Organizations
Authors: Alua Masalimova, Dameli Sulubecova, Talgat Isaev, Raushan Magzumova
Abstract:
Heads of medical organizations in Kazakhstan have traditionally used nurses only for performing medical procedures, but new economic conditions require the introduction of nursing innovations. There is an increasing need for managers of hospital departments and outpatient clinics to ensure comfortable conditions for doctors, nurses and aides, as well as to monitor marketing indicators (staff needs and job satisfaction, patient satisfaction with the department). The view of nursing as a physician's assistant passively carrying out prescriptions is becoming a thing of the past. We propose a model for developing the head nurse as a manager, using the Blood Service as an example. At the Scientific-Production Center of Blood Transfusion, we studied head nurses using a standard interview method, assessing their involvement in coordinating the flow of information and promoting the competitiveness of their department. Results: the average age of the respondents was 43.1 ± 9.8 years; 100% were female; average time as a manager in the organization was 9.3 ± 10.3 years. Only 14.2% gave positive responses regarding knowledge of the nearest facilities providing similar medical services, and 100% did not know the cost of similar medical services in competing organizations. 85.7% answered that they had not studied employee satisfaction in their division; staff satisfaction with donor work had been studied in 50.0% of cases; and 28.5% of respondents were involved in attracting paid services to the division. Participation in the management decisions of the medical organization was as follows: strategic planning, 14.2%; preparing the annual analytical report, 14.2%; recruitment, 30.0%; equipment procurement, 14.2%. 85.0% of head nurses participated in the social and technical design of their division's workplaces, while only 10.0% of respondents used team-building methods to foster staff cohesion in the division. We further studied the behavioral competencies of head nurses: 20.0% of respondents demonstrated customer focus and 40.0% the ability to work in a team. The personal qualities of head nurses were also apparent: sociability, 80.0%; the ability to manage information, 40.0%; the ability to make their own decisions, 14.2%; creativity, 28.5%; the desire to improve their professionalism, 50.0%. Thus, modern market conditions dictate that organizations operating under economic management include marketing and management knowledge and skills in the competencies of the head nurse position, including the skills to analyze the information collected and to use it in management proposals to the senior medical leadership of the organization. When recruiting head nurses, medical organizations should take into account personal qualities such as flexibility, fluency of thinking, communication skills and the ability to work in a team, as well as leadership qualities, ambition, and high emotional and social intelligence, which will make the medical unit competitive within the country and abroad.
Keywords: blood service, head nurse, manager, skills
Procedia PDF Downloads 241
423 A Study of the Effect of the Flipped Classroom on Mixed Abilities Classes in Compulsory Secondary Education in Italy
Authors: Giacoma Pace
Abstract:
The research seeks to evaluate whether students with impairments can achieve enhanced academic progress by actively engaging in collaborative problem-solving activities with teachers and peers, in order to overcome obstacles rooted in socio-economic disparities. Furthermore, the research underscores the significance of fostering students' self-awareness regarding their learning process and encourages teachers to adopt a more interactive teaching approach. The research also posits that reducing conventional face-to-face lessons can motivate students to explore alternative learning methods, such as collaborative teamwork and peer education within the classroom. To address socio-cultural barriers, it is imperative to assess students' internet access and possession of technological devices, as these factors can contribute to a digital divide. The research features a case study of a Flipped Classroom Learning Unit administered to six third-year high school classes at a Scientific Lyceum, a Technical School, and a Vocational School in the city of Turin, Italy. Data cover the teachers and students involved in the case study, including some impaired students in each class: entry level, students' performance and attitude before using the Flipped Classroom, level of motivation, family involvement, teachers' attitude towards the Flipped Classroom, goals achieved, the pros and cons of such activities, and technology availability. The selected schools were contacted, and meetings were held with the English teachers to gather information about their attitude towards and knowledge of the Flipped Classroom approach. Questionnaires were administered to teachers and IT staff. The information gathered was used to outline the profile of the subjects involved in the study and was then compared with the second step of the study, conducted with the classes of the selected schools. The learning unit is the same for all classes; its structure and content were decided together with the English teachers of the classes involved. The pacing and content are matched in every lesson, and all the classes participate in the same labs, use the same materials and homework, and receive the same summative and formative assessment. Each step follows a precise scheme in order to be as reliable as possible. The outcome of the case study will be analysed statistically. The case study is accompanied by a review of the literature concerning EFL approaches and the Flipped Classroom. The document analysis method was employed, i.e., a qualitative research method in which printed and/or electronic documents containing information about the research subject are reviewed and evaluated with a systematic procedure. Articles in the Web of Science Core Collection, Education Resources Information Center (ERIC), Scopus and Science Direct databases were searched in order to determine the documents to be examined (years considered: 2000-2022).
Keywords: flipped classroom, impaired, inclusivity, peer instruction
Procedia PDF Downloads 52
422 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition
Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman
Abstract:
Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations with the non-linearity found in natural environments; accurate estimation therefore becomes difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modeling the complex, non-linear real world. ANNs have more general and flexible functional forms than traditional statistical methods and can deal with such non-linearity effectively. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and, therefore, of expected crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery to predict wheat chlorophyll in real time. Cloud-free LANDSAT 8 scenes were acquired (February-March 2016-17) at the same time as a ground-truthing campaign in which chlorophyll was estimated using a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v. 2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used, and the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For the MLP, 61.7% of the data were used for training, 28.3% for validation and the remaining 10% for evaluating and validating the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that retrieving crop chlorophyll content from high spatial resolution satellite imagery with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat
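For readers who want the shape of the modeling step, here is a minimal sketch using scikit-learn's multilayer perceptron in place of the MATLAB/SPSS tools used in the study. The synthetic index-to-SPAD relation and the exact split fractions are stand-in assumptions; only the overall train/validate workflow mirrors the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: five LANDSAT-derived indices
# (NDVI, GNDVI, CARI, MCARI, TCARI) per plot and a SPAD chlorophyll value.
n_plots = 300
X = rng.uniform(0.0, 1.0, size=(n_plots, 5))
spad = 20 + 35 * X[:, 0] + 10 * X[:, 3] + rng.normal(0, 2, n_plots)  # fabricated relation

# Hold out ~38% of plots, loosely mirroring the paper's 61.7/28.3/10 split.
X_train, X_test, y_train, y_test = train_test_split(
    X, spad, test_size=0.38, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
mlp.fit(X_train, y_train)
print("holdout R^2:", round(r2_score(y_test, mlp.predict(X_test)), 3))
```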
Procedia PDF Downloads 146
421 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems
Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille
Abstract:
Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects; these studies focus on timetables, rolling stock and crew duties, but do not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then allows calibrating the input variables which do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. The approach is tested in the case of a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable
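The factor prioritization step can be sketched with the SALib library, substituting a toy delay function for the dynamic multiphysics railway simulation. The input variables, their bounds and the delay model are illustrative assumptions; only the Sobol workflow itself follows the approach described above.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for the railway simulation: a congestion proxy driven by
# departure spacing (s), speed reduction (r) and number of trains (n).
problem = {
    "num_vars": 3,
    "names": ["departure_spacing_s", "speed_reduction", "n_trains"],
    "bounds": [[60, 600], [0.0, 0.5], [5, 20]],
}

def simulate(x):
    s, r, n = x
    return n * (1.0 + r) * 3600.0 / s  # fabricated delay proxy (arbitrary units)

X = saltelli.sample(problem, 1024)          # Saltelli sampling scheme
Y = np.apply_along_axis(simulate, 1, X)
Si = sobol.analyze(problem, Y)

# First-order indices drive factor prioritization; near-zero total-order
# indices identify variables that can be fixed.
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")
```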
Procedia PDF Downloads 398
420 Supermarket Shoppers Perceptions to Genetically Modified Foods in Trinidad and Tobago: Focus on Health Risks and Benefits
Authors: Safia Hasan Varachhia, Neela Badrie, Marsha Singh
Abstract:
Genetic modification of food is an innovative technology that offers a host of benefits and advantages to consumers. Consumer attitudes towards GM foods and GM technologies can be identified as a major determinant in conditioning market forces and in encouraging policy makers and regulators to recognize the significance of consumer influence on the market. This study aimed to investigate and evaluate the extent of consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks and benefits in Trinidad and Tobago, West Indies. The specific objectives of this study were to determine consumer awareness of GM foods, ascertain consumers' perspectives on the health and safety risks and ethical issues associated with GM foods, and determine whether labeling of GM foods and ingredients would influence consumers' willingness to purchase GM foods. A survey comprising a questionnaire of 40 questions, both open-ended and closed-ended, was administered to 240 shoppers in small, medium and large-scale supermarkets throughout Trinidad between April and May 2015, using convenience sampling. This survey investigated consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks/benefits. The data were analyzed using SPSS 19.0 and Minitab 16.0. One-way ANOVA investigated the effects of supermarket category and knowledge scores on shoppers' awareness, knowledge, perception and acceptance of GM foods. Linear regression tested whether demographic variables (category of supermarket, age of consumer, level of education) were useful predictors of consumers' knowledge of GM foods. More than half of the respondents (64.3%) were aware of GM foods and GM technologies, 28.3% of consumers indicated awareness of the presence of GM foods in local supermarkets, and 47.1% claimed to be knowledgeable about GM foods. Furthermore, significant associations (P < 0.05) were observed between demographic variables (age, income, and education) and consumer knowledge of GM foods. Significant differences (P < 0.05) were also observed between demographic variables (education, gender, and income) and consumer knowledge of GM foods. In addition, age, education, gender and income (P < 0.05) were useful predictors of consumer knowledge of GM foods. There was a contradiction: whilst 35% of consumers considered GM foods safe for consumption, 70% of consumers were wary of the unknown health risks of GM foods. About two-thirds of respondents (67.5%) considered the creation of GM foods morally wrong and unethical. Regarding GM food labeling preferences, 88% of consumers preferred mandatory labeling of GM foods, and 67% specified that any food product containing a trace of GM food ingredients should require mandatory GM labeling. Also, despite the declaration of GM food ingredients on food labels and the reassurance of their safety for consumption by food safety and regulatory institutions, the majority of consumers (76.1%) still preferred conventionally produced foods over GM foods. The study revealed the need to inform shoppers of the presence of GM foods and technologies, to present the scientific evidence on the benefits and risks, and to establish a labeling policy so that informed choices can be made.
Keywords: genetically modified foods, income, labeling, consumer awareness, ingredients, morality and ethics, policy
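The two statistical steps named above can be sketched in a few lines; fabricated scores stand in for the survey data, and scipy replaces SPSS/Minitab, so the numbers below are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Fabricated GM-food knowledge scores (0-10) for shoppers sampled in
# small, medium and large supermarkets.
small = rng.normal(4.5, 1.5, 60)
medium = rng.normal(5.0, 1.5, 80)
large = rng.normal(5.8, 1.5, 100)

# One-way ANOVA: does mean knowledge differ across supermarket categories?
f_stat, p_value = stats.f_oneway(small, medium, large)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Simple linear regression: age as a predictor of the knowledge score.
age = rng.uniform(18, 70, 100)
knowledge = 3.0 + 0.03 * age + rng.normal(0, 1, 100)
slope, intercept, r_value, p_reg, std_err = stats.linregress(age, knowledge)
print(f"Regression: slope = {slope:.3f}, p = {p_reg:.4f}")
```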
Procedia PDF Downloads 327
419 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis
Authors: Avi Shrivastava
Abstract:
In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One was of people clinging onto an airplane's wings in their desperate attempt to flee war-torn Afghanistan; they ultimately fell to their deaths. The other was of U.S. government authorities separating children from their parents or guardians to deter migrants/refugees from coming to the U.S. These events show the desperation refugees feel when they are trying to leave their homes in disaster zones. Data, however, paints a grave picture of the current refugee situation and indicates that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the shimmery fabric of modern society. They are often used interchangeably, but they differ considerably: information analysis reveals rationale and logic, while data analysis reveals patterns. Moreover, the patterns revealed by data can enable us to create the tools needed to combat the huge problems on our hands. Data analysis paints a clear picture so that the decision-making process becomes simpler. Geopolitical and economic data can be used to predict future refugee hotspots, and accurately predicting the next refugee hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from realistically stating things as they are; this hesitancy can cost lives. When decisions are based solely on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence and tells whether there is a solution or not, responding to a non-binary crisis with a binary answer. Because of all that, it becomes easier to tackle the problem. Data science and A.I. can predict future refugee crises. With the recent explosion of data due to the rise of social media platforms, data and the insights drawn from it have helped solve many social and political problems. Data science can also help solve many issues refugees face while staying in refugee camps or adopted countries. This paper looks into the various ways data science can help solve refugee problems. A.I.-based chatbots can help refugees seek legal help to find asylum in the country they want to settle in, and can connect them to marketplaces where people willing to help can offer assistance. Data science and technology can also help solve refugees' many other problems, including food, shelter, employment, security, and assimilation. The refugee problem is one of the most challenging of our time for social and political reasons. Data science and machine learning can help prevent refugee crises and solve or alleviate some of the problems refugees face on their journey to a better life. With the explosion of data in the last decade, data science has made it possible to address many geopolitical and social issues.
Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine
Procedia PDF Downloads 71