Search results for: generalized linear mixed model (GLMM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21298

358 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs quantum dots (QDs) in a GaAs matrix constitute a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, which separates the scintillator film so that it can be transferred to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 microns) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, photoluminescence (PL) was measured over 77-450 K and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from the QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after the light had traveled over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4×10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
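
The temperature dependence of the PL intensity reported here is commonly described by a thermally activated quenching (Arrhenius-type) model. The sketch below fits such a model over the stated 77-450 K range; it is an illustration only, with entirely hypothetical data points rather than the authors' measurements, and the authors' actual kinetic model may include additional channels.

```python
# Illustrative sketch: fitting normalized PL intensity vs temperature to
# I(T) = I0 / (1 + A*exp(-Ea/(kB*T))). Data values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant, eV/K

def pl_quenching(T, I0, A, Ea):
    """Thermally activated quenching of PL with activation energy Ea (eV)."""
    return I0 / (1.0 + A * np.exp(-Ea / (K_B * T)))

T = np.array([77.0, 150.0, 220.0, 295.0, 360.0, 450.0])  # temperature, K
I = np.array([1.00, 0.94, 0.77, 0.60, 0.49, 0.40])       # normalized PL

popt, _ = curve_fit(pl_quenching, T, I, p0=(1.0, 10.0, 0.05))
print(f"Fitted activation energy: {popt[2] * 1e3:.0f} meV")  # ~60 meV here
```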

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 254
357 Correlation between Defect Suppression and Biosensing Capability of Hydrothermally Grown ZnO Nanorods

Authors: Mayoorika Shukla, Pramila Jakhar, Tejendra Dixit, I. A. Palani, Vipul Singh

Abstract:

Biosensors are analytical devices with a wide range of applications in biological, chemical, environmental and clinical analysis. A biosensor comprises a bio-recognition layer, with immobilized biomolecules (enzymes, antibodies, DNA, etc.) for detection of the analyte, and a transducer, which converts the biological signal into an electrical signal. The performance of a biosensor depends primarily on the bio-recognition layer, which therefore has to be chosen wisely. In this regard, nanostructures of metal oxides such as ZnO, SnO2, V2O5, and TiO2 have been explored extensively as bio-recognition layers. Recently, ZnO has attracted the attention of researchers due to its unique properties such as high isoelectric point, biocompatibility, stability, high electron mobility and high exciton binding energy. Although there have been many reports on the use of ZnO as a bio-recognition layer, to the authors' knowledge none has examined the correlation between optical properties such as defect suppression and the biosensing capability of the sensor. Here, ZnO nanorods (ZNR) have been synthesized by a low-cost, simple and low-temperature hydrothermal growth process over a platinum (Pt) coated glass substrate. The ZNR were synthesized in two steps: first, a seed layer was coated over the substrate (Pt-coated glass), which was then immersed in a nutrient solution of zinc nitrate and hexamethylenetetramine (HMTA) with in-situ addition of KMnO4. The addition of KMnO4 was observed to have a profound effect on the growth-rate anisotropy of the ZnO nanostructures. Without KMnO4, clustered and powdery growth of ZnO was observed, whereas with its addition during growth, uniform and crystalline ZNR grew over the substrate. Moreover, the addition suppressed defects, as observed in normalized photoluminescence (PL) spectra, since KMnO4 is a strong oxidizing agent that provides an oxygen-rich growth environment. Further, to explore the correlation between defect suppression and biosensing capability of the ZNR, glucose oxidase (GOx) was immobilized over them using a physical adsorption technique, followed by drop-casting of Nafion. Since the main objective of the work was to analyze the effect of defect suppression on biosensing capability, GOx was chosen as a model enzyme and electrochemical amperometric glucose detection was performed. The incorporation of KMnO4 during growth varied the optical and charge-transfer properties of the ZNR, which in turn had a deep impact on the biosensor figures of merit. The sensitivity of the biosensor was found to increase 12-18 fold due to the variations introduced by the addition of KMnO4 during growth. Amperometric detection of glucose was performed in a continuously stirred buffer solution. Interestingly, defect suppression was observed to contribute to the improvement of biosensor performance. The detailed growth mechanism of the ZNR, along with the overall influence of defect suppression on the sensing capabilities of the resulting enzymatic electrochemical biosensor (Glass/Pt/ZNR/GOx/Nafion) and its different figures of merit, will be discussed during the conference.
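
For context, the sensitivity figure of merit quoted above is conventionally the slope of the amperometric calibration curve (steady-state current vs. glucose concentration), normalized by the electrode area. A minimal sketch with entirely hypothetical values:

```python
# Sketch: biosensor sensitivity as the calibration-curve slope per unit
# electrode area. Concentrations, currents, and area are hypothetical.
import numpy as np

conc_mM = np.array([0.5, 1.0, 2.0, 4.0, 6.0])     # glucose concentration, mM
current_uA = np.array([0.8, 1.5, 3.1, 6.0, 9.2])  # steady-state current, uA
electrode_area_cm2 = 0.25                         # assumed electrode area

slope, _ = np.polyfit(conc_mM, current_uA, 1)     # uA per mM
sensitivity = slope / electrode_area_cm2          # uA mM^-1 cm^-2
print(f"Sensitivity: {sensitivity:.1f} uA mM^-1 cm^-2")
```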

Keywords: biosensors, defects, KMnO4, ZnO nanorods

Procedia PDF Downloads 281
356 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules

Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury

Abstract:

Tuning the photophysical properties of guest dyes through host-guest interactions with macrocyclic hosts has been an attractive research area for the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by certain dyes, and it is quite amenable to tuning by different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. In the excited state, however, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye binds to SCXn hosts; in spite of its lower binding affinity, the inhibition is more pronounced with the SCX6 host than with the SCX4 host. For the CD-CZ systems, prototautomerization is hindered by β-CD but remains unaffected by γ-CD. The reduction of prototautomerization by the SCXn and β-CD hosts is unusual, because the T* form is less dipolar than N*; binding of CZ within the relatively hydrophobic host cavities should therefore have enhanced the process. Likewise, given the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ would have been expected to be similar. These atypical effects are suggested to arise from partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding between the CZ dye and the functional groups present at the portals of the SCXn and β-CD hosts. Formation of these intermolecular H-bonds weakens the pre-existing intramolecular H-bonding network within the CZ molecule and consequently reduces the prototautomerization of the dye. Our results suggest that rather than the dye-host binding affinity, it is the orientation of CZ in the SCXn-CZ complexes and the binding stoichiometry in the CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of CZ. For the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization of the complexes correlate nicely with the experimental results: β-CD-CZ complexes form with 1:1 stoichiometry, whereas γ-CD-CZ complexes form with 1:1, 1:2 and 2:2 stoichiometries, in good accordance with the observed effects of the β-CD and γ-CD hosts on the ESIPT process of the CZ dye.
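
Binding affinities of the kind discussed above are typically extracted by fitting fluorescence titration data to a binding isotherm. Below is a minimal 1:1-stoichiometry sketch with hypothetical titration values; the 1:2 and 2:2 CD complexes reported here would require correspondingly extended models.

```python
# Sketch: 1:1 host-guest binding constant from a fluorescence titration,
# assuming the host is in large excess ([H] ~ H0). Values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def isotherm_1to1(H0, F_free, F_bound, K):
    frac_bound = K * H0 / (1.0 + K * H0)        # fraction of dye complexed
    return F_free + (F_bound - F_free) * frac_bound

H0 = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 5.0]) * 1e-3  # host concentration, M
F = np.array([1.00, 0.90, 0.78, 0.65, 0.55, 0.47])    # normalized emission

popt, _ = curve_fit(isotherm_1to1, H0, F, p0=(1.0, 0.4, 1e3))
print(f"Estimated binding constant K = {popt[2]:.0f} M^-1")
```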

Keywords: intramolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies

Procedia PDF Downloads 119
355 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology in order to tackle the voltage control problem in a distributed manner with minimal latency. This paper proposes a multi-agent-based distributed voltage control. The method uses a flat agent architecture; the agents involved in the controlling procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to nominal. The total loss cost is the sum of the network losses cost, DG curtailment costs, and a voltage damage cost (based on a penalty-function implementation). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and losses cost against a varying voltage limit band. The method provides the optimal limits, closer to the nominal value, with minimum total loss cost. To achieve the voltage control objective, the whole network is divided into multiple control regions, each downstream of its controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. First, a token is generated by the OLTCA at each time step and transferred from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, the OLTC is considered as a last resort. For a realistic study, simulations were performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations were executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis was performed with respect to DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is not very significant. Another observation is that the stricter limits calculated by cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25%, 45% and 65% penetration, the calculated limits are ±5%, ±6.25% and ±8.75%, respectively. The results show that the voltage control algorithm proposed in case 1 deals with the voltage control problem instantly but with higher losses, whereas case 2 gradually reduces network losses over time through the proposed iterative loss-cost optimization by the OLTCA.
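
As an illustration of the loss-cost optimization described above, the sketch below evaluates a total cost combining network-loss cost, DG-curtailment cost, and a penalty-function voltage damage cost over candidate voltage-limit bands. The prices, bus voltages, and quadratic penalty form are hypothetical placeholders, not the authors' exact formulation.

```python
# Sketch: total loss-cost evaluation with a voltage-damage penalty function,
# swept over candidate voltage-limit bands. All quantities are hypothetical.
import numpy as np

def voltage_damage_cost(v_pu, band, penalty_coeff=1000.0):
    """Quadratic penalty for bus voltages outside the candidate band (p.u.)."""
    violation = np.maximum(np.abs(v_pu - 1.0) - band, 0.0)
    return penalty_coeff * np.sum(violation ** 2)

def total_loss_cost(v_pu, losses_mwh, curtailed_mwh, band,
                    loss_price=50.0, curtail_price=80.0):
    return (loss_price * losses_mwh
            + curtail_price * curtailed_mwh
            + voltage_damage_cost(v_pu, band))

v_pu = np.array([1.04, 0.97, 1.08, 0.94])   # snapshot of bus voltages, p.u.
for band in (0.05, 0.0625, 0.0875, 0.10):   # candidate limit bands
    cost = total_loss_cost(v_pu, losses_mwh=1.2, curtailed_mwh=0.3, band=band)
    print(f"band ±{band * 100:.2f}%  total cost = {cost:.1f}")
```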

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 310
354 Foslip Loaded and CEA-Affimer Functionalised Silica Nanoparticles for Fluorescent Imaging of Colorectal Cancer Cells

Authors: Yazan S. Khaled, Shazana Shamsuddin, Jim Tiernan, Mike McPherson, Thomas Hughes, Paul Millner, David G. Jayne

Abstract:

Introduction: There is a need for real-time imaging of colorectal cancer (CRC) to allow surgery tailored to the disease stage. Fluorescence-guided laparoscopic imaging of primary colorectal cancer and the draining lymphatics would potentially bring stratified surgery into clinical practice and realign future CRC management to the needs of patients. Fluorescent nanoparticles can offer many advantages in terms of intra-operative imaging and therapy (theranostics) in comparison with traditional soluble reagents. Nanoparticles can be functionalised with diverse reagents and then targeted to the correct tissue using an antibody or Affimer (artificial binding protein). We aimed to develop and test fluorescent silica nanoparticles targeted against CRC using an anti-carcinoembryonic antigen (CEA) Affimer (Aff). Methods: Anti-CEA and control Myoglobin Affimer binders were subcloned into the expression vector pET11 followed by transformation into BL21 Star™ (DE3) E. coli. The expression of Affimer binders was induced using 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). Cells were harvested, lysed and purified using nickel-chelating affinity chromatography. The photosensitiser Foslip (a soluble analogue of 5,10,15,20-tetra(m-hydroxyphenyl)chlorin) was incorporated into the core of silica nanoparticles using a water-in-oil microemulsion technique. Anti-CEA or control Affs were conjugated to the silica nanoparticle surface using the sulfosuccinimidyl-4-(N-maleimidomethyl)cyclohexane-1-carboxylate (sulfo-SMCC) chemical linker. Binding of CEA-Aff or control nanoparticles to colorectal cancer cells (LoVo, LS174T and HCT116) was quantified in vitro using confocal microscopy. Results: The molecular weights of the obtained Affimer bands were ~12.5 kDa, while the diameter of the functionalised silica nanoparticles was ~80 nm. CEA-Affimer-targeted nanoparticles demonstrated 9.4, 5.8 and 2.5 fold greater fluorescence than control in LoVo, LS174T and HCT116 cells, respectively (p < 0.002) for the single-slice analysis. A similar pattern of successful CEA-targeted fluorescence was observed in the maximum image projection analysis, with CEA-targeted nanoparticles demonstrating 4.1, 2.9 and 2.4 fold greater fluorescence than control particles in LoVo, LS174T, and HCT116 cells, respectively (p < 0.0002). There was no significant difference in fluorescence between CEA-Affimer and CEA-Antibody targeted nanoparticles. Conclusion: We are the first to demonstrate that Foslip-doped silica nanoparticles conjugated to anti-CEA Affimers via SMCC allow tumour cell-specific fluorescent targeting in vitro, and they have shown sufficient promise to justify testing in an animal model of colorectal cancer. The CEA-Affimer appears to be a suitable targeting molecule to replace the CEA-Antibody. Targeted silica nanoparticles loaded with the Foslip photosensitiser are now being optimised to drive photodynamic killing via reactive oxygen generation.

Keywords: colorectal cancer, silica nanoparticles, Affimers, antibodies, imaging

Procedia PDF Downloads 238
353 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance of Staphylococcus aureus Isolated from Raw Red Meat at Butchery and Abattoir Houses in Mekelle, Northern Ethiopia

Authors: Haftay Abraha Tadesse

Abstract:

Background: Staphylococcus is a genus of worldwide-distributed bacteria associated with infections of different sites in humans and animals. Staphylococci are among the most important causes of infections associated with the consumption of contaminated food. Objective: The objective of this study was to determine the prevalence, antimicrobial susceptibility patterns and public health significance of Staphylococcus aureus isolates in raw meat from butchery and abattoir houses of Mekelle, Northern Ethiopia. Methodology: A cross-sectional study was conducted from April to October 2019. Socio-demographic and public-health-related data were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility was determined by the disc diffusion method. The data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A p-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. The positivity rates of Staphylococcus aureus were 37.6% (n=47) and 32.8% (n=41) for butchery and abattoir houses, respectively. Among the examined risk factors, glove use (AOR=0.222; 95% CI: 0.104-0.473), strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86) and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were assessed, and glove use was found to be statistically significantly associated with reduced Staphylococcus aureus contamination. All thirty-seven Staphylococcus aureus isolates from butchery houses were sensitive (100%) to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%), and nalidixic acid (50%). On the other hand, all Staphylococcus aureus isolates from abattoir houses (100%, n=10) were sensitive to chloramphenicol, gentamicin and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin and cefotaxime. The overall multi-drug resistance patterns for Staphylococcus aureus were 90% and 100% for butchery and abattoir houses, respectively. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done to develop hand-washing behavior and to ensure the availability of safe water in the butchery houses in order to reduce the burden of bacterial contamination. The present findings highlight the need to implement protective measures against these levels of food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always a result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problem of resistance developing towards commonly used antibiotics.
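
Adjusted odds ratios of the kind reported above come from exponentiating logistic-regression coefficients. The sketch below illustrates the computation on entirely synthetic data with hypothetical predictor names; it is not the study's dataset or exact model.

```python
# Sketch: adjusted odds ratios (AOR) with 95% CIs from a logistic regression.
# Data and predictors are synthetic stand-ins for the study's risk factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "glove_use": rng.integers(0, 2, n),
    "poor_handwashing": rng.integers(0, 2, n),
})
# Synthetic outcome loosely reflecting the reported directions of effect
logit = -0.5 - 1.2 * df["glove_use"] + 0.3 * df["poor_handwashing"]
df["contaminated"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["glove_use", "poor_handwashing"]])
fit = sm.Logit(df["contaminated"], X).fit(disp=0)
aor = np.exp(fit.params)          # exponentiated coefficients = odds ratios
ci = np.exp(fit.conf_int())       # 95% confidence intervals on the OR scale
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```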

Keywords: abattoir house, AMR, butchery house, S. aureus

Procedia PDF Downloads 95
352 Seasonal Variability of Picoeukaryotes Community Structure Under Coastal Environmental Disturbances

Authors: Benjamin Glasner, Carlos Henriquez, Fernando Alfaro, Nicole Trefault, Santiago Andrade, Rodrigo De La Iglesia

Abstract:

A central question in ecology concerns the relative importance of local-scale variables for community composition when compared with regional-scale variables. In coastal environments, strong seasonal abiotic influences dominate these systems, weakening the impact of other parameters such as micronutrients. Since the industrial revolution, micronutrients such as trace metals have increased in the ocean as pollutants, with strong effects upon biotic entities and biological processes in coastal regions. Coastal picoplankton communities have been characterized as a cyanobacteria-dominated fraction, but in recent years the eukaryotic component of this size fraction has gained relevance due to its strong influence on the carbon cycle, although its diversity patterns and responses to disturbances are poorly understood. South Pacific upwelling coastal environments represent an excellent model to study seasonal changes because of the strong seasonal variation in the availability of macro- and micronutrients. In addition, some well-constrained coastal bays of this region have been subjected to strong disturbances due to trace metal inputs. In this study, we aim to compare the influence of seasonality and trace metal concentrations on the community structure of planktonic picoeukaryotes. To describe seasonal patterns in the study area, a 6-year satellite time series and in-situ measurements with traditional oceanographic instruments such as CTDO equipment were used. In addition, trace metal concentrations were analyzed through ICP-MS for the same region. For biological data collection, field campaigns were performed in 2011-2012, and the picoplankton community was described by flow cytometry and taxonomic characterization with next-generation sequencing of ribosomal genes. The relation between the abiotic and biotic components was finally determined by multivariate statistical analysis. Our data show strong seasonal fluctuations in abiotic parameters such as photosynthetically active radiation and sea surface temperature, with a clear differentiation of seasons. Trace metal analysis, however, identifies strong differentiation within the study area, dividing it into two zones based on trace metal concentrations. The biological data indicate no major changes in diversity but a significant fluctuation in evenness and community structure. These changes are related mainly to regional parameters such as temperature, but analysis of the metal influence on picoplankton community structure identifies a differential response of some plankton taxa to metal pollution. We propose that some picoeukaryotic plankton groups respond differentially to metal inputs by changing their nutritional status and/or requirements under disturbance, as a derived outcome of toxic effects and tolerance.
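
The statement that diversity stayed stable while evenness fluctuated rests on standard community metrics. A minimal sketch of Shannon diversity and Pielou evenness, using hypothetical OTU counts rather than the study's data:

```python
# Sketch: Shannon diversity (H) and Pielou evenness (J) for two samples.
# OTU counts are hypothetical illustrations.
import numpy as np

def shannon(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def pielou_evenness(counts):
    s = np.count_nonzero(counts)     # observed richness
    return shannon(counts) / np.log(s)

summer = np.array([120, 80, 60, 30, 10])   # OTU counts, summer sample
winter = np.array([250, 20, 15, 10, 5])    # same OTUs, winter sample
print(f"H: summer={shannon(summer):.2f}, winter={shannon(winter):.2f}")
print(f"J: summer={pielou_evenness(summer):.2f}, "
      f"winter={pielou_evenness(winter):.2f}")
```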

Keywords: picoeukaryotes, plankton communities, trace metals, seasonal patterns

Procedia PDF Downloads 172
351 Nutritional Genomics Profile Based Personalized Sport Nutrition

Authors: Eszter Repasi, Akos Koller

Abstract:

Our genetic information determines our appearance, physiology, sports performance and all our other features. Efforts to maximize the performance of athletes have adopted a science-based approach to nutritional support. Nowadays, genetic studies have blended with the nutritional sciences, and a dynamically evolving new research field has appeared. Nutritional genomics needs to be used by nutrition experts. This recent field of nutritional science can provide a way to reach the best sports performance using correlations between the athlete's genome, nutrients and molecules, including the human microbiome (the links between food, microbiome and epigenetics), nutrigenomics and nutrigenetics. Nutritional genomics has tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technology to get information about athletes, such as a nutritional genomics profile (including determination of the oral and gut microbiome and DNA-coded reactions to food components), which can modify the preparation period and sports performance. The influence of nutrients on gene expression is called nutrigenomics. The heterogeneous response of gene variants to nutrients and dietary components is called nutrigenetics. The human microbiome plays a critical role in the state of health and well-being, and there are further links between food or nutrition and the composition of the human microbiome, which can lead to diseases and epigenetic changes as well. A nutritional genomics-based profile of athletes can be the best technique for a dietitian to make a unique sports nutrition diet plan. Using functional foods and the right food components can affect health status and thus sports performance. Scientists need to determine the best response, given that nutrients affect health by altering gene expression, promoting metabolites, and producing changes in physiology. Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation, or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) would affect the efficacy of that nutrient. Nutritional deficiencies and failures that are controlled, changes in health state that are prevented, or a newly discovered food intolerance, observed by a proper medical team, can support better sports performance. It is important that the dietetics profession be informed about gene-diet interactions that may lead to optimal health and reduced risk of injury or disease. A dedicated medical application for documenting and monitoring health-state data and risk factors can support and warn the medical team for early action and help provide proper health services in time. This model can provide personalized nutrition advice from status assessment, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to change the composition of the microbiome and the environmental and genetic risk factors in athletes.

Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan

Procedia PDF Downloads 169
350 Management of Mycotoxin Production and Fungicide Resistance by Targeting Stress Response System in Fungal Pathogens

Authors: Jong H. Kim, Kathleen L. Chan, Luisa W. Cheng

Abstract:

Control of fungal pathogens, such as foodborne mycotoxin producers, is problematic as effective antimycotic agents are often very limited. Mycotoxin contamination significantly interferes with the safe production of foods or crops worldwide. Moreover, the expansion of fungal resistance to commercial drugs or fungicides is a global human health concern. Therefore, there is a persistent need to enhance the efficacy of commercial antimycotic agents or to develop new intervention strategies. Disruption of the cellular antioxidant system should be an effective method for pathogen control, and such disruption can be achieved with safe, redox-active compounds. Natural phenolic derivatives are potent redox cyclers that inhibit fungal growth through destabilization of the cellular antioxidant system. The goal of this study is to identify novel, redox-active compounds that disrupt the fungal antioxidant system. The identified compounds could also function as sensitizing agents to conventional antimycotics (i.e., chemosensitization) to improve antifungal efficacy. Various benzo derivatives were tested against fungal pathogens. Gene deletion mutants of the yeast Saccharomyces cerevisiae were used as model systems for identifying the molecular targets of the benzo analogs. The efficacy of identified compounds as potent antifungal agents or as chemosensitizing agents to commercial drugs or fungicides was examined with methods outlined by the Clinical and Laboratory Standards Institute or the European Committee on Antimicrobial Susceptibility Testing. Selected benzo derivatives possessed potent antifungal or antimycotoxigenic activity. Molecular analyses using the S. cerevisiae mutants indicated that the antifungal activity of the benzo derivatives operates through disruption of the cellular antioxidant or cell wall integrity system. Certain screened benzo analogs overcame the tolerance of Aspergillus signaling mutants, namely mitogen-activated protein kinase mutants, to the fungicide fludioxonil. Synergistic antifungal chemosensitization greatly lowered the minimum inhibitory or fungicidal concentrations of the test compounds, including inhibitors of mitochondrial respiration. Of note, salicylaldehyde is a potent antimycotic volatile that has some practical application as a fumigant. Altogether, benzo derivatives targeting the cellular antioxidant system of fungi (along with the cell wall integrity system) effectively suppress fungal growth. The candidate compounds possess antifungal, antimycotoxigenic or chemosensitizing capacity to augment the efficacy of commercial antifungals. Therefore, chemogenetic approaches can lead to the development of novel antifungal intervention strategies, which enhance the efficacy of established microbe intervention practices and overcome drug/fungicide resistance. Chemosensitization further reduces costs and alleviates the negative side effects associated with current antifungal treatments.
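
Synergy in chemosensitization assays of this kind is conventionally quantified with the fractional inhibitory concentration index (FICI) from a checkerboard assay. A minimal sketch with hypothetical MIC values (the abstract does not report these numbers):

```python
# Sketch: fractional inhibitory concentration index (FICI), the standard
# synergy metric for drug-plus-chemosensitizer combinations.
# MIC values (e.g., ug/mL) are hypothetical.
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

fici = fic_index(mic_a_alone=64.0, mic_a_combo=8.0,    # fungicide
                 mic_b_alone=256.0, mic_b_combo=32.0)  # benzo chemosensitizer
verdict = ("synergy" if fici <= 0.5
           else "indifference" if fici <= 4.0 else "antagonism")
print(f"FICI = {fici:.2f} -> {verdict}")
```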

Keywords: antifungals, antioxidant system, benzo derivatives, chemosensitization

Procedia PDF Downloads 259
349 A Systematic Review of Forest School for Early Childhood Education in China: Lessons Learned from European Studies from a Perspective of Ecological System

Authors: Xiaoying Zhang

Abstract:

Forest school – an outdoor educational experience undertaken in an outdoor environment with trees – has recently become an emerging field of early childhood education. In China, the benefits of natural outdoor education for children and young people's wellness have attracted attention. Although different types of outdoor-based activities have been introduced in some Chinese preschools, few studies and practices have been conducted under the notion of forest school. To comprehend the impact of forest school on children and young people, this study aims to systematically review articles on the topic of forest school in preschool education from an ecological perspective, i.e. from the individual level (e.g., behavior and mental health) to the microsystem level (e.g., the relationship between teachers and children) to the ecosystem level. Following the PRISMA flow framework and using the keywords "Forest School" and "Early Childhood Education" to search the Web of Science database, a total of 33 articles were identified. Thirteen studies whose participants were not preschool children, five studies not on the forest school theme, and two literature review articles were excluded from further analysis. Finally, 13 articles were eligible for thematic analysis. Interpreted through Bronfenbrenner's ecological systems theory, several findings emerge. On the individual level, current forest school studies are concerned with children's behavioral experience in forest school, how this experience may relate to their achievement or develop their wellbeing, and how this type of learning experience may enhance children's self-awareness of risk and safety issues. On the microsystem/mesosystem level, this review indicates that pedagogical development for forest school, risk perception by teachers and parents, social development between peers, and adults' roles in forest school participation were the most frequently explored and discussed topics. On the macrosystem level, the conceptualization of forest school is the key theme: different forms of presentation in various countries with diverse cultures could provide various models of forest school education. However, no study investigated forest school on the ecosystem level. The potential physical health and mental wellness benefits of forest school prompt us to reflect on the system of preschool education for Chinese children from an ecological perspective. For instance, most Chinese kindergartens ignore the significance of natural outdoor activities for children. Preschool education in China is strongly oriented towards the primary school system, meaning preschool children are expected to be trained like primary school students in different subjects, such as math. Hardly any kindergartens provide opportunities for children and young people to take risks in a natural environment as forest school does. However, merely copying the forest school model into the Chinese preschool education system would be less effective. This review of concerns at different levels informs the localization of the forest school idea to adapt it to the Chinese political, educational and cultural background. More detailed results and deeper discussions will be presented in the full paper.

Keywords: early childhood education, ecological system, education development prospects in China, forest school

Procedia PDF Downloads 151
348 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This approach provides good accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed remains unmapped, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans (GEBCO). The offered products are compilations of different data sets, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of this work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America – a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms, model densification, and the creation of grid models. The input data are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis, with visualization of the results carried out using Geographic Information System tools. The result is an extension of the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modeling, and on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of a point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources with different accuracies can affect the quality of sea surface and seafloor topography models.
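
One core step in such a workflow is gridding scattered sounding depths onto a regular raster and comparing interpolation algorithms. The sketch below uses synthetic points purely for illustration; real inputs would come from MB-System survey files or GEBCO NetCDF grids.

```python
# Sketch: gridding scattered depth soundings onto a regular raster and
# comparing interpolation methods. All points below are synthetic.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
lon = rng.uniform(-70.0, -60.0, 500)   # survey-track longitudes
lat = rng.uniform(35.0, 42.0, 500)     # survey-track latitudes
# Synthetic seafloor: abyssal plain with one seamount near (-65, 38)
depth = -4000.0 + 1500.0 * np.exp(-((lon + 65.0) ** 2 + (lat - 38.0) ** 2))

grid_lon, grid_lat = np.meshgrid(np.linspace(-70, -60, 200),
                                 np.linspace(35, 42, 140))
for method in ("nearest", "linear", "cubic"):
    grid = griddata((lon, lat), depth, (grid_lon, grid_lat), method=method)
    print(f"{method}: min={np.nanmin(grid):.0f} m, max={np.nanmax(grid):.0f} m")
```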

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 78
347 An Unusual Manifestation of Spirituality: Kamppi Chapel of Helsinki

Authors: Emine Umran Topcu

Abstract:

In both urban design and architecture, the primary goal is considered to be understanding the ways in which people feel and think about space and place. Humans, in general, see place as security and space as freedom; we feel attached to place and long for space. Contemporary urban design manifests itself by addressing basic physical and psychological human needs, but not much attention is paid to transcendence. There seems to be a gap in the hierarchy of human needs: usually the social aspects of public space are addressed through urban design, while the more personal and intimately scaled needs of the individual are neglected. How does built form contribute to an individual's growth, contemplation, and exploration – in other words, to a greater meaning in the immediate environment? Architects love to talk about meaning, poetics, attachment and other ethereal aspects of space that are not visible attributes of places. This paper aims at describing spirituality through built form via a personal experience of the Kamppi Chapel of Helsinki. Experience covers the various modes through which a person unfolds or constructs reality; perception, sensation, emotion, and thought count among these modes. To experience is to get to know: what can be known is a construct of experience. Feelings and thoughts about space and place are very complex in human beings; they grow out of life experiences. The author had the chance to visit the Kamppi Chapel in April 2017, out of which this experience grew. The Kamppi Chapel is located on the south side of the busy Narinkka Square in central Helsinki. It offers a place to quiet down and compose oneself in a most lively urban space. With its curved wooden facade, the small building looks more like a museum than a chapel – it can be called a museum for contemplation. With its gently shaped interior, it embraces visitors and shields them from the hustle and bustle of the city outside. Places of worship in all faiths signify sacred power. The author, having origins in a part of the world where domes and minarets dominate the cityscape, was impressed by the size and architectural visibility of the Chapel. Anyone born and trained in such a tradition shares the inherent values and psychological mechanisms of spirituality, sacredness and the modest realities of their environment. Spirituality in all cultural traditions has not been analyzed and reinterpreted in new conceptual frameworks. Fundamentalists may reject this positivist attitude, but the Kamppi Chapel as it stands does not present itself as a model to be followed. It simply faces the task of representing a religious facility in an urban setting largely shaped by modern urban planning, which seems to the author to be searching for a new definition of individual status. The tension between the established and the new is the demand for modern efficiency versus dogmatic rigidity. The architecture here has played a very promising and rewarding role for spirituality. The designers have acted as translators of the human desire for a better life and an aesthetic environment, to the optimal satisfaction of local citizens and visitors alike.

Keywords: architecture, Kamppi Chapel, spirituality, urban

Procedia PDF Downloads 182
346 Eosinophils and Platelets: Players of the Game in Morbidly Obese Boys with Metabolic Syndrome

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Childhood obesity, which may lead to an increased risk of heart disease in children as well as adults, is one of the most important health problems throughout the world. The prevalences of morbid obesity and metabolic syndrome (MetS) are increasing in the childhood age group. MetS is a cluster of metabolic and vascular abnormalities including hypercoagulability and an increased risk of cardiovascular diseases (CVDs). There are also relations between some components of MetS and leukocytes. The aim of this study is to investigate complete blood cell count parameters that differ between morbidly obese boys and girls with a MetS diagnosis. A total of 117 morbidly obese children with MetS who presented to the Department of Pediatrics of the Faculty of Medicine Hospital at Namik Kemal University were included in the study. The study population was classified by gender (60 girls and 57 boys). Heights and weights were measured and body mass index (BMI) values were calculated. WHO BMI-for-age and sex percentiles were used; values above the 99th percentile were defined as morbid obesity. Anthropometric measurements were performed, and waist-to-hip and head-to-neck ratios as well as the homeostatic model assessment of insulin resistance (HOMA-IR) were calculated. The components of MetS (central obesity, glucose intolerance, high blood pressure, high triacylglycerol levels, low levels of high-density lipoprotein cholesterol) were determined, and hematological variables were measured. Statistical analyses were performed using SPSS; the threshold for statistical significance was p ≤ 0.05. There was no statistically significant difference between the ages (11.2±2.6 years vs 11.2±3.0 years) or BMIs (28.6±5.2 kg/m2 vs 29.3±5.2 kg/m2) of boys and girls, respectively (p ≥ 0.05). Significantly increased waist-to-hip ratios were obtained for boys (0.94±0.08 vs 0.91±0.06; p=0.023). Significantly elevated values of hemoglobin (13.55±0.98 vs 13.06±0.82; p=0.004), mean corpuscular hemoglobin concentration (33.79±0.91 vs 33.21±1.14; p=0.003), eosinophils (0.300±0.253 vs 0.196±0.197; p=0.014), and platelets (347.1±81.7 vs 319.0±65.9; p=0.042) were detected for boys. There was no statistically significant difference between the groups in terms of neutrophil/lymphocyte ratios or HOMA-IR values (p ≥ 0.05). Statistically significant gender-based differences were found for hemoglobin as well as mean corpuscular hemoglobin concentration; hence, separate reference intervals for the two genders should be considered for these parameters. Eosinophils may contribute to the development of thrombus in acute coronary syndrome, and they are known to make an important contribution to mechanisms related to thrombosis pathogenesis in acute myocardial infarction. Increased platelet activity is observed in patients with MetS, and these individuals are more susceptible to CVDs. In our study, the elevated platelet counts, described as dominant contributors to hypercoagulability, and the elevated eosinophil counts, suggested to be related to the development of CVDs, observed in boys may be early indicators of future cardiometabolic complications in this gender.
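
The gender comparisons above (e.g., hemoglobin 13.55±0.98 vs. 13.06±0.82, p=0.004) are of the kind produced by an independent-samples t-test. The sketch below simulates samples that merely match the quoted summary statistics; it does not use the raw study data.

```python
# Sketch: independent-samples t-test on simulated hemoglobin values whose
# means/SDs match the reported summary statistics (not the study's raw data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hb_boys = rng.normal(13.55, 0.98, 57)    # n=57 boys
hb_girls = rng.normal(13.06, 0.82, 60)   # n=60 girls

t, p = stats.ttest_ind(hb_boys, hb_girls)
print(f"t = {t:.2f}, p = {p:.4f}")
```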

Keywords: children, complete blood count, gender, metabolic syndrome

Procedia PDF Downloads 214
345 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. In making decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points of discussion. However, the challenge lies in the reliability of the models employed to obtain the risk data, as such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure due to its large economic impact in case of failure, industrial safety has become a critical issue for contemporary society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and costs due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
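
To make the GMDH step concrete: it builds pairwise quadratic polynomial units and keeps those that generalize best on a validation split, layer by layer. The sketch below runs one such selection round on synthetic stand-ins for near-miss risk features and expert risk scores; it is an assumption-laden illustration, not the authors' implementation.

```python
# Sketch: one GMDH selection round - quadratic polynomial units over all
# feature pairs, ranked by validation error. Data are synthetic.
import itertools
import numpy as np

def quad_features(a, b):
    """Quadratic polynomial basis of a GMDH unit: 1, a, b, ab, a^2, b^2."""
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

rng = np.random.default_rng(0)
X = rng.random((200, 4))    # e.g., near-miss indicators per pipeline segment
y = 2 * X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.05 * rng.standard_normal(200)
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

candidates = []
for i, j in itertools.combinations(range(X.shape[1]), 2):
    coef, *_ = np.linalg.lstsq(quad_features(X_tr[:, i], X_tr[:, j]),
                               y_tr, rcond=None)
    pred_va = quad_features(X_va[:, i], X_va[:, j]) @ coef
    candidates.append((np.mean((pred_va - y_va) ** 2), (i, j), coef))

candidates.sort(key=lambda c: c[0])   # keep the best-generalizing units
best_err, best_pair, _ = candidates[0]
print(f"best unit: inputs {best_pair}, validation MSE {best_err:.4f}")
```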

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 290
344 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve high quality in assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances so as to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes of the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not currently used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data, yet they offer the potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. To this end, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. A database suitable for developing machine learning models is then created. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The best-performing models are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the part as a whole, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
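
In the spirit of the approach described above, a learned model could map per-point features from historical measurements to a local tolerance range. A minimal sketch with synthetic features and a hypothetical target rule (real inputs would be point positions, geometry descriptors, and observed deviation spreads):

```python
# Sketch: predicting a local tolerance range from per-point features with
# a regression model. Features, target rule, and values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
X = rng.random((n, 3))   # e.g., x/y position on the part, local curvature
# Hypothetical rule: sensitive (high-curvature) regions need tighter limits
tol_mm = 0.8 - 0.5 * X[:, 2] + 0.05 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, tol_mm, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out measurement points: {model.score(X_te, y_te):.2f}")
```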

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 114
343 Safety Considerations of Furanics for Sustainable Applications in Advanced Biorefineries

Authors: Anitha Muralidhara, Victor Engelen, Christophe Len, Pascal Pandard, Guy Marlair

Abstract:

Production of bio-based chemicals and materials from lignocellulosic biomass is gaining tremendous importance in advanced biorefineries aiming towards the progressive replacement of petroleum-based chemicals in transportation fuels and commodity polymers. One such attempt has resulted in the production of key furan derivatives (FD) such as furfural, HMF, and MMF via acid-catalyzed dehydration (ACD) of C6 and C5 sugars; these are further converted into key chemicals or intermediates (such as furandicarboxylic acid, furfuryl alcohol, etc.). In subsequent processes, many high-potential FD are produced that can be converted into high-added-value polymers or high-energy-density biofuels. During ACD, an unavoidable polyfuranic byproduct called humins is generated. The FD family is very large, with varying chemical structures and diverse physicochemical properties; accordingly, the associated risk profiles may vary widely. Hazardous material (haz-mat) classification systems such as GHS (CLP in the EU) and the UN TDG Model Regulations for the transport of dangerous goods are among the preliminary requirements for the appropriate classification, labelling, packaging, safe storage, and transportation of all chemicals. Considering the growing application routes of FD, it is important to note the limited access to safety-related information in these internationally recognized haz-mat classification systems (safety data sheets are available only for well-known compounds such as HMF and furfural). Moreover, these classifications do not necessarily provide information about the extent of risk involved when a chemical is used in a specific application. Factors such as thermal stability, speed of combustion, and chemical incompatibilities, which are clearly outside the scope of any haz-mat classification system, can equally influence the safety profile of a compound. Irrespective of their bio-based origin, FD have so far received inconsistent remarks concerning their toxicity profiles. Given such inconsistencies, there is a fear that the large FD family may follow the same extreme-judgment scenarios as ionic liquids, with some compounds ranked as extremely thermally stable, non-flammable, and so on. Unless clarified, such messages could lead to misleading judgements when ranking a chemical based on its hazard rating. Safety is a key aspect of any sustainable biorefinery operation or facility, yet it is often understated or neglected. To fill these data gaps and to address ambiguities and discrepancies, the current study focuses on giving preliminary insights into the safety assessment of FD and their potential targeted by-products. Drawing on the available information in the literature and on experimental results, the physicochemical, environmental and (scenario-based) fire safety profiles of key FD, as well as of side streams such as humins and levulinic acid, are considered. With this, the study focuses on defining patterns and trends that give coherent safety-related information for existing and newly synthesized FD on the market, for better functionality and sustainable applications.

Keywords: furanics, humins, safety, thermal and fire hazard, toxicity

Procedia PDF Downloads 164
342 An Integrated Approach to Cultural Heritage Management in the Indian Context

Authors: T. Lakshmi Priya

Abstract:

With the widening definition of heritage, the challenges of heritage management have become more complex. Today heritage includes not only significant monuments but also historic areas/sites, historic cities, cultural landscapes, and living heritage sites. There is a need for a comprehensive understanding of the values associated with these heritage resources, which will enable their protection and management. These diverse cultural resources are managed by multiple agencies, each with its own way of operating in the heritage sites. An integrated approach to the management of these cultural resources ensures their sustainability for future generations. This paper outlines the importance of an integrated approach to the management and protection of complex heritage sites in India by examining four case studies. The methodology for this study is based on secondary research and primary surveys conducted during the preparation of the conservation management plans for the various sites. The primary surveys included basic documentation, inventorying, and community surveys. Red Fort, located in the city of Delhi and built in 1639 by the Mughal Emperor Shahjahan, is one of the most significant forts in India. It is a national icon and stands testimony to various historical events: it was on the ramparts of Red Fort that the national flag was unfurled on 15th August 1947, when India became independent, a tradition that continues even today. Management of this complex fort necessitated an integrated approach, wherein the needs of official and non-official stakeholders were addressed. The understanding of the inherent values and significance of this site was arrived at through a systematic methodology of inventorying and mapping of information. Hampi, located in the southern part of India, is a living heritage site inscribed on the World Heritage List in 1986. The site comprises settlements, built heritage structures, traditional water systems, forests, agricultural fields and the remains of the metropolis of the 16th-century Vijayanagar empire. As Hampi is a living heritage site with traditional systems of management and practice, the aim has been to include these practices in the current management so that there is continuity in belief, thought and practice. The existing national, regional and local planning instruments have been examined and local concerns have been addressed. A comprehensive understanding of the site, achieved through an integrated model, is being translated into an action plan which safeguards the inherent values of the site. This paper also examines the case of the 20th-century heritage building of the National Archives of India, Delhi, and the protection of the 12th-century Tomb of Sultan Ghari in south Delhi. A comprehensive understanding of the latter site led to the delineation of the Archaeological Park of Sultan Ghari in the current Master Plan for Delhi, for the protection of the tomb and the settlement around it. Through this study it is concluded that the approach of integrated conservation has enabled decision-making that sustains the values of these complex heritage sites in the Indian context.

Keywords: conservation, integrated, management, approach

Procedia PDF Downloads 87
341 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users for economic, ecological, and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality, and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed, and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions – objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions: one uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data, while the other uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness functions best predict the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the corresponding set of machining process parameters. The evaluation approach consists in calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The fitness function based on the Material Removal Rate (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
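
As an illustration of the evaluation approach, the following minimal Python sketch computes an MRR-based fitness value (using the standard turning relation MRR = depth of cut x feed x cutting speed) for a few parameter sets and correlates its normalized prediction with normalized spindle power; every parameter value and power reading below is a hypothetical placeholder, not the project's data.

    import numpy as np
    from scipy.stats import pearsonr

    def mrr_fitness(depth_of_cut_mm, feed_mm_rev, spindle_rpm, diameter_mm=10.0):
        # Material removal rate for turning: MRR = ap * f * vc, with
        # cutting speed vc = pi * D * n (here in mm/min).
        vc = np.pi * diameter_mm * spindle_rpm        # mm/min
        return depth_of_cut_mm * feed_mm_rev * vc     # mm^3/min

    # Hypothetical parameter sets (ap, f, n) and measured spindle power (W)
    params = np.array([[0.5, 0.05, 3000], [1.0, 0.08, 4000], [1.5, 0.10, 5000]])
    measured_power = np.array([120.0, 310.0, 640.0])

    predicted = np.array([mrr_fitness(*p) for p in params])
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    r, _ = pearsonr(norm(predicted), norm(measured_power))
    print(f"correlation of normalized MRR prediction with power: {r:.2f}")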

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization

Procedia PDF Downloads 145
340 Enhancement to Green Building Rating Systems for Industrial Facilities by Including the Assessment of Impact on the Landscape

Authors: Lia Marchi, Ernesto Antonini

Abstract:

The impact of industrial sites on people’s living environment involves both detrimental effects on the ecosystem and perceptual-aesthetic interferences with the scenery. These, in turn, affect the economic and social value of the landscape, as well as the wellbeing of workers and local communities. Given the diffusion of the phenomenon and the relevance of its effects, the need emerges for a joint approach to assess, and thus mitigate, the impact of factories on the landscape – the landscape being understood as the result of the action and interaction of natural and human factors. However, the impact assessment tools suitable for the purpose are quite heterogeneous and mostly monodisciplinary. On the one hand, green building rating systems (GBRSs) are increasingly used to evaluate the performance of manufacturing sites, mainly through quantitative indicators focused on environmental issues. On the other hand, methods to detect the visual and social impact of factories on the landscape are gradually emerging in the literature, but they generally adopt only qualitative gauges. The research addresses the integration of environmental impact assessment with the perceptual-aesthetic interferences of factories on the landscape. The GBRS model is assumed as a reference since it is adequate to simultaneously investigate different topics which affect sustainability, returning a global score. A critical analysis of GBRSs relevant to industrial facilities led to the selection of the U.S. GBC LEED protocol as the most suitable for the scope. A revision of LEED v4 Building Design+Construction has then been provided by including specific indicators to measure the interferences of manufacturing sites with the perceptual-aesthetic and social aspects of the territory. To this end, a new impact category was defined, namely ‘PA - Perceptual-aesthetic aspects’, comprising eight new credits which are specifically designed to assess how much the buildings are in harmony with their surroundings: these investigate, for example, the morphological and chromatic harmonization of the facility with the scenery, or the site's receptiveness and attractiveness. The credits weighting table was consequently revised, according to the LEED points allocation system. Like all LEED credits, each new PA credit is thoroughly described in a sheet setting out its aim, requirements, and the available options to gauge the interference and obtain a score. Lastly, each credit is related to mitigation tactics, which are drawn from a catalogue of exemplary case studies also developed by the research. The result is a modified LEED scheme which includes compatibility with the landscape within the sustainability assessment of industrial sites. The whole system consists of 10 evaluation categories, which contain 62 credits in total. Finally, a test of the tool on an Italian factory was performed, allowing the comparison of three mitigation scenarios with increasing compatibility levels. The study proposes a holistic and viable approach to the environmental impact assessment of factories through a tool which integrates the multiple aspects involved within a worldwide recognized rating protocol.
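
A minimal sketch of how the revised scorecard could be represented and totalled; the credit names and point values below are illustrative placeholders, not the actual revised weighting table.

    # LEED-style scorecard extended with the proposed 'PA' category.
    # Each credit maps to (maximum points, points earned on the test site).
    scorecard = {
        ("EA", "Optimize Energy Performance"):  (18, 12),
        ("MR", "Building Life-Cycle Impact"):   (5, 3),
        ("PA", "Morphological harmonization"):  (2, 2),
        ("PA", "Site receptiveness"):           (1, 0),
    }
    earned = sum(e for _, e in scorecard.values())
    maximum = sum(m for m, _ in scorecard.values())
    pa_earned = sum(e for (cat, _), (_, e) in scorecard.items() if cat == "PA")
    print(f"score: {earned}/{maximum} points, of which PA contributes {pa_earned}")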

Keywords: environmental impact, GBRS, landscape, LEED, sustainable factory

Procedia PDF Downloads 111
339 Implementation of a Culturally Responsive Home Visiting Framework in Head Start Teacher Professional Development

Authors: Meilan Jin, Mary Jane Moran

Abstract:

This study aims to introduce the framework of culturally responsive home visiting (CRHV) to Head Start teacher professional development sessions in the southeastern US and to investigate its influence on teachers' evolving beliefs about their roles and relationships with families during home visits. The framework orients teachers to an effective way of taking on the role of learner, listening for spoken and unspoken needs and looking for family strengths. In addition, it challenges the deficit model grounded in 'cultural deprivation'; it stresses the value of family cultures and advocates equal, collaborative parent-teacher relationships. The home visit reflection papers and focus group transcriptions of eight teachers were collected over a five-year longitudinal collaboration beginning in 2010. Reflection papers were written by the teachers before and after the introduction of the CRHV framework and include details of visit purposes, actions, and plans for later home visits. In particular, the CRHV framework guided the teachers to listen and look for information about family living environments; parent-child interactions; child-rearing practices; and parental beliefs, values, and needs. Two focus groups were organized in 2014 by asking the teachers to read their written reflection papers and then discuss their shared beliefs and experiences of home visits in recent years. The average length of the discussions was one hour, and the discussions were audio-recorded and transcribed verbatim. The data were analyzed using constant comparative analysis, and the analysis was verified through (a) the use of multiple data sources, (b) the involvement of multiple researchers, (c) coding checks, and (d) the provision of thick descriptions of the findings. The findings corroborate that the teachers came to reposition themselves as 'knowledge seekers', reorienting their focus toward 'setting stones' to learn, grow, and change rather than prescripting their home visits. The teachers also continually engaged in careful listening, observing, questioning, and dialoguing, and these actions reflect their care toward parents. The value of teamwork with parents is advocated, and the teachers recognized that when parents feel empowered, they are active and committed to doing more for their children, which can further promote proactive long-term parent-teacher collaborations. The findings also validate that the framework helps educators provide home visiting experiences that are culturally responsive and build collaborative relationships with caregivers. The long-term impact of the framework further implies that teachers continue to evolve, in both beliefs and actions, to better work with children and families who are culturally, ethnically, and linguistically different from them. This framework can be applied by educators and professionals who are looking for avenues to bridge the relationship between home and school, and between parents and teachers.

Keywords: culturally responsive home visit, early childhood education, parent–teacher collaboration, teacher professional development

Procedia PDF Downloads 96
338 The Effect of a Multidisciplinary Spine Clinic on Treatment Rates and Lead Times to Care

Authors: Ishan Naidu, Jessica Ryvlin, Devin Videlefsky

Abstract:

Introduction: Back pain is a leading cause of years lived with disability and economic burden, exceeding $20 billion in healthcare costs, not including indirect costs such as absence from work and caregiving. The multifactorial nature of back pain leads to treatment modalities administered by a variety of specialists, which are often disjointed. Multiple studies have found that patients receiving delayed physical therapy for lower back pain had higher medical-related costs from increased health service utilization, as well as reduced improvement in pain severity, compared to early management. Uncoordinated health care delivery can exacerbate the physical and economic toll of this chronic condition; thus, improvements in interdisciplinary, shared decision-making may improve outcomes. Objective: To assess whether a multidisciplinary spine clinic (MSC), consisting of orthopedic surgery, neurosurgery, pain medicine, and physiatry, alters interventional and non-interventional planning and treatment compared to a traditional unidisciplinary spine clinic (USC) including only orthopedic surgery. Methods: We conducted a retrospective cohort study of patients initially presenting for spine care to orthopedic surgeons between July 1, 2018 and June 30, 2019. Time to treatment recommendation, time to treatment, and rates of treatment recommendations were assessed, including physical therapy, injections, and surgery. Treatment rates were compared between MSC and USC using Pearson’s chi-square test and logistic regression. Time to treatment recommendation and time to treatment were compared using the log-rank test and Cox proportional hazards regression. All analyses were repeated for the propensity score (PS) matched subsample. Results: This study included 1,764 patients, with 692 at MSC and 1,072 at USC. Patients at MSC were more likely to be recommended injection compared to USC (8.5% vs. 5.4%, p=0.01). When adjusted for confounders, the likelihood of an injection recommendation remained greater at MSC than USC (odds ratio [OR]=2.22, 95% CI: 1.39-3.53, p=0.001). MSC was also associated with a shorter time to receiving an injection recommendation versus USC (median: 21 vs. 32 days, log-rank p<0.001; hazard ratio [HR]=1.90, 95% CI: 1.25-2.90, p=0.003). MSC was associated with a higher likelihood of injection treatment (OR=2.27, 95% CI: 1.39-3.73, p=0.001) and a shorter lead time (HR=1.98, 95% CI: 1.27-3.09, p=0.003). PS-matched analyses yielded similar conclusions. Conclusions: Care delivered at a multidisciplinary spine clinic was associated with a higher likelihood of injection recommendation and a shorter lead time to injection administration compared to a traditional unidisciplinary spine surgery clinic. Multidisciplinary clinics may facilitate coordinated care among different specialties, resulting in increased utilization of less invasive treatment modalities while also improving care efficiency. The multidisciplinary clinic model is an important advancement in care delivery and communication, which can be used as a powerful method of improving patient outcomes as treatment guidelines evolve.
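
The sketch below illustrates the statistical pipeline described above (chi-square for recommendation rates, log-rank and Cox proportional hazards for time-to-event) using SciPy and the lifelines package on a small, entirely fictitious patient table; it is a methodological sketch, not the study's analysis code.

    import pandas as pd
    from scipy.stats import chi2_contingency
    from lifelines import CoxPHFitter
    from lifelines.statistics import logrank_test

    # Fictitious per-patient data: clinic (1 = MSC, 0 = USC), injection
    # recommendation, days to recommendation (censored if none occurred).
    df = pd.DataFrame({
        "msc":       [1, 1, 0, 0, 1, 0, 1, 0],
        "injection": [1, 0, 0, 1, 1, 0, 1, 0],
        "days":      [21, 90, 90, 32, 18, 60, 25, 75],
        "event":     [1, 0, 0, 1, 1, 0, 1, 0],
    })

    # Rate comparison: Pearson chi-square on the 2x2 table.
    chi2, p, _, _ = chi2_contingency(pd.crosstab(df["msc"], df["injection"]))

    # Time-to-recommendation: log-rank test and Cox proportional hazards.
    msc, usc = df[df["msc"] == 1], df[df["msc"] == 0]
    lr = logrank_test(msc["days"], usc["days"],
                      event_observed_A=msc["event"], event_observed_B=usc["event"])
    cph = CoxPHFitter().fit(df[["msc", "days", "event"]],
                            duration_col="days", event_col="event")
    print(chi2, p, lr.p_value)
    cph.print_summary()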

Keywords: coordinated care, epidural steroid injection, multi-disciplinary, non-invasive

Procedia PDF Downloads 140
337 Immersive and Non-Immersive Virtual Reality Applied to the Cervical Spine Assessment

Authors: Pawel Kiper, Alfonc Baba, Mahmoud Alhelou, Giorgia Pregnolato, Michela Agostini, Andrea Turolla

Abstract:

Impairment of cervical spine mobility is often related to pain triggered by musculoskeletal disorders or direct traumatic injuries of the spine. To date, these disorders are assessed with goniometers and inclinometers, the most popular devices used in clinical settings. Nevertheless, these technologies usually allow measurement of no more than two-dimensional range of motion (ROM) values in static conditions. Conversely, motion tracking systems able to measure 3 to 6 degrees of freedom dynamically, while performing standard ROM assessment, are of limited use due to the technical complexity of preparing the setup and their high costs; thus, motion tracking systems are primarily used in research. These systems are an integral part of virtual reality (VR) technologies, which can be used for measuring spine mobility. To our knowledge, the accuracy of VR measurement has not yet been studied within virtual environments. Thus, the aim of this study was to test the reliability of a protocol for the assessment of sensorimotor function of the cervical spine in a population of healthy subjects, and to compare whether using immersive or non-immersive VR for visualization affects performance. Both VR assessments consisted of the same five exercises, and a random sequence determined which environment (i.e., immersive or non-immersive) was used for the first assessment. Subjects were asked to perform head rotation (right and left), flexion, extension, and lateral flexion (right and left side bending). Each movement was executed five times. Moreover, the participants were invited to perform head reaching movements, i.e., head movements toward 8 targets placed along a circular perimeter at 45° intervals, visualized one by one in random order. Finally, head repositioning was assessed through head movement toward the same 8 targets as for reaching, followed by repositioning to the start point. Thus, each participant performed 46 tasks during the assessment. The main measures were: ROM of rotation, flexion, extension, and lateral flexion, and complete kinematics of the cervical spine, i.e., number of completed targets, time of execution (seconds), spatial length (cm), angular distance (°), and jerk. Thirty-five healthy participants (14 males and 21 females, mean age 28.4±6.47) were recruited for the cervical spine assessment with immersive and non-immersive VR environments. Comparison analysis demonstrated that head right rotation (p=0.027), extension (p=0.047), flexion (p<0.001), time (p=0.001), spatial length (p=0.004), jerk target (p=0.032), trajectory repositioning (p=0.003), and jerk target repositioning (p=0.007) were significantly better in immersive than in non-immersive VR. A regression model showed that assessment in immersive VR was influenced by height, trajectory repositioning (p<0.05), and handedness (p<0.05), whereas in non-immersive VR performance was influenced by height, jerk target (p=0.002), head extension, jerk target repositioning (p=0.002), as well as age, head flexion/extension, trajectory repositioning, and weight (p=0.040). The results of this study showed higher accuracy of cervical spine assessment when executed in immersive VR. The assessment of ROM and kinematics of the cervical spine can be affected by independent and dependent variables in both immersive and non-immersive VR settings.
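
A minimal sketch of how ROM, spatial length, and jerk can be derived from motion-tracking samples, jerk being the third time derivative of position; the simulated head-tracking trace below is an illustrative assumption, not the study's recordings.

    import numpy as np

    def rom_and_kinematics(angles_deg, positions_cm, dt):
        """ROM as max - min of the rotation trace; spatial length as the
        summed path of the head marker; jerk as the RMS magnitude of the
        third time derivative of position."""
        rom = angles_deg.max() - angles_deg.min()
        vel = np.gradient(positions_cm, dt, axis=0)
        acc = np.gradient(vel, dt, axis=0)
        jerk = np.gradient(acc, dt, axis=0)
        jerk_rms = np.sqrt((np.linalg.norm(jerk, axis=1) ** 2).mean())
        length = np.sum(np.linalg.norm(np.diff(positions_cm, axis=0), axis=1))
        return rom, length, jerk_rms

    # 2 s of simulated 100 Hz tracking data (illustrative only)
    t = np.linspace(0, 2, 200)
    angles = 35 * np.sin(np.pi * t / 2)                # degrees
    pos = np.column_stack([np.sin(t), np.cos(t), t])   # cm
    print(rom_and_kinematics(angles, pos, dt=t[1] - t[0]))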

Keywords: virtual reality, cervical spine, motion analysis, range of motion, measurement validity

Procedia PDF Downloads 164
336 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports

Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer

Abstract:

Preventing fatigue from flow-induced vibration in piping in the oil & gas sector demands not only the constant development of engineering design methodologies based on available software packages, but also special piping support technologies for designing safe and reliable piping systems. The vast majority of piping vibration problems in the oil & gas industry are provoked by the process flow characteristics, which are intrinsically related to the fluid properties, the type of service, and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped into two major areas: those which affect the excitation mechanisms, typically associated with process variables, and those which affect the response of the pipework itself and its associated steel support structure. Where possible, the first option is to try to solve the flow-induced problem from the excitation mechanism perspective. However, in producing facilities, changing process parameters might not always be convenient, as it could lead to a reduction of production rates or require a system shutdown in order to perform the required piping modification. That impediment might lead to the second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequency inherent to the process always favours the elimination, or considerable reduction, of the level of vibration experienced by the piping system. Tightening up the clearances at the supports (ideally to zero gap) and adding new static supports to the system are typical ways of increasing the natural frequency of the piping system. However, stiffening the piping system alone may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible to implement at all, as the existing piping layout can limit the addition of supports due to thermal expansion/contraction requirements. In these cases, viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability to dissipate the vibration. Therefore, when correctly selected and installed, viscous damper supports can have a significant effect on the response of the piping system over a wide range of frequencies. Viscous dampers cannot, however, be used to support sustained static loads. This paper shows, through a real case example, a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, it was possible to resolve the piping vibration problem by adequately redesigning the existing static piping supports and adding new viscous damper supports. This was conducted on-stream on the crude oil pipeline in question, without the need to reduce the production of the plant. The methodology presented in this paper can be applied to solve similar cases in a straightforward manner.
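
The benefit of added damping can be illustrated with the standard steady-state response of a single-degree-of-freedom system, |H| = 1 / sqrt((1 - r^2)^2 + (2*zeta*r)^2), where r is the ratio of excitation to natural frequency and zeta the damping ratio. The sketch below is a textbook illustration under assumed damping values, not the paper's dynamic analysis model; it shows how the resonant peak collapses as damping increases while the off-resonance response is barely affected.

    import numpy as np

    def amplitude_ratio(r, zeta):
        # Dynamic amplification of a damped SDOF system.
        return 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

    r = np.linspace(0.1, 3.0, 300)
    for zeta in (0.02, 0.1, 0.3):   # bare pipework vs. increasing viscous damping
        peak = amplitude_ratio(r, zeta).max()
        print(f"zeta = {zeta:.2f}: peak amplification ~ {peak:.1f}")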

Keywords: dynamic analysis, flow induced vibration, piping supports, turbulent flow, slug flow, viscous damper

Procedia PDF Downloads 142
335 Performance Evaluation of Various Displaced Left Turn Intersection Designs

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety – two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection’s (CI) efficiency considerably, and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on Crossover Displaced Left-turn (XDL) intersections, also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether a partial XDL intersection is adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate the overall intersection performance of different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four different scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. Also, the impact of increasing volume on intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in five different traffic scenarios. The study intersection, located in Orlando, Florida, experiences recurring congestion in the PM peak hour and operates near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that the partial EN XDL alternative proved effective and compared favorably to the full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW, and EN XDL alternatives outperformed the NS XDL and CI alternatives with respect to throughput, delay, and queue lengths. Throughput improvements were remarkable at the higher volume levels, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths in the XDL scenarios showed percent reductions ranging from 25-40%. The analysis revealed how a partial XDL design can improve the overall intersection performance at various demand levels, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes, or only one of the critical movements while other critical movements operate near or above capacity, does not provide significant benefits when compared to the conventional intersection.

Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model

Procedia PDF Downloads 310
334 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton

Authors: Bing Chen, Xiang Ni, Eric Li

Abstract:

With population ageing, the number of patients suffering from chronic diseases is increasing, among which stroke has a high incidence in the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complication rates, and normal walking is difficult for such patients. Robotic knee exoskeletons have been developed for individuals with knee impairments. However, the currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable to wear, causes wearing fatigue, shortens the wearing time, and reduces the efficiency of the exoskeleton. Some lightweight materials, such as carbon fiber and titanium alloy, have been used for the development of robotic knee exoskeletons, but this increases their cost. This paper presents the design of a new ultra-light and ultra-stiff truss-type lattice structure. The lattice structures are arranged in a fan shape, which fits well with circular arc surfaces such as circular holes, and they can be utilized in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce its weight. The metamaterial is formed by the continuous arrangement and combination of small truss-structure unit cells, varying the diameter of the pillar section, the geometrical size, and the relative density of each unit cell. It can be fabricated quickly through additive manufacturing techniques such as metal 3D printing. The unit cell of the truss structure is small, and the machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced through gradient arrangement and non-uniform distribution. While satisfying the mechanical requirements of the robotic knee exoskeleton, the weight of the exoskeleton is reduced; hence, the patient’s wearing fatigue is reduced and the wearing time of the exoskeleton is increased. Thus, the efficiency, wearing comfort, and safety of the exoskeleton can be improved. In this paper, a brief description of the hardware design of the prototype robotic knee exoskeleton is first presented. Next, the design of the ultra-light and ultra-stiff truss-type lattice structures is proposed, and the mechanical analysis of the single-cell unit is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient arrangements of the cells. Finally, static analysis is performed for the cell-filled rod and the unmodified rod, respectively, and the simulation results demonstrate the effectiveness and feasibility of the designed ultra-light and ultra-stiff truss-type lattice structures. In future studies, experiments will be conducted to further evaluate the performance of the designed lattice structures.
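
The weight-stiffness trade-off that motivates the design can be illustrated with the classical Gibson-Ashby scaling law E*/Es = C (rho*/rhos)^n, where n is close to 1 for stretch-dominated truss lattices and close to 2 for bending-dominated foams. The sketch below applies this textbook relation with an assumed solid modulus; it is not the paper's theoretical or simulation model.

    def lattice_modulus(rel_density, solid_modulus_gpa, c=1.0, n=1.0):
        """Gibson-Ashby scaling E*/Es = C * (rho*/rhos)^n."""
        return c * solid_modulus_gpa * rel_density ** n

    for rho in (0.05, 0.10, 0.20):
        e_truss = lattice_modulus(rho, 110.0, n=1.0)   # Es ~ 110 GPa, e.g. a Ti alloy
        e_foam = lattice_modulus(rho, 110.0, n=2.0)
        print(f"rel. density {rho:.2f}: truss ~ {e_truss:.1f} GPa, foam ~ {e_foam:.2f} GPa")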

Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton

Procedia PDF Downloads 104
333 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. Residential or domestic combustion of coal is one of the dominant LAC sources; according to some assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its climatic significance, comprehensive investigation of the optical properties of residential coal aerosol is quite limited in the literature. There are many reasons for this, ranging from the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analyses and the interpretation of the measured optical data, to the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, a recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and makes the investigation of the inherent optical properties possible as well. Most methodologies for the spectral characterization of LAC are based either on transmission measurements of filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling. In the former, accuracy, and in the latter, sensitivity limits the applicability of the approach. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
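
The wavelength dependencies are conventionally quantified by fitting a power law b(lambda) ~ lambda^(-alpha), where alpha is the AAE for the absorption coefficient and the SAE for the scattering coefficient. The sketch below shows the standard log-log fit; the wavelengths and coefficients are illustrative values, not measured 4λ-PAS data.

    import numpy as np

    def angstrom_exponent(wavelengths_nm, coefficients):
        # alpha is the negative slope of the log-log regression.
        slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(coefficients), 1)
        return -slope

    wl = np.array([355.0, 405.0, 532.0, 1064.0])   # nm, illustrative set
    babs = np.array([42.0, 35.5, 25.1, 11.3])      # Mm^-1, illustrative values
    print(f"AAE = {angstrom_exponent(wl, babs):.2f}")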

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 358
332 Offshore Facilities Load Out: Case Study of Jacket Superstructure Loadout by Strand Jacking Skidding Method

Authors: A. Rahim Baharudin, Nor Arinee binti Mat Saaud, Muhammad Afiq Azman, Farah Adiba A. Sani

Abstract:

Objectives: This paper shares a case study on the engineering analysis, data analysis, and real-time data comparison used to qualify the strand wires' minimum breaking load (MBL) and safe working load (SWL) for the loadout operation of a new project, while at the same time eliminating the risk arising from discrepancies and misalignment between COMPANY Technical Standards and Industry Standards and Practices. This paper demonstrates 'lean construction' for the COMPANY’s project by sustaining fit-for-purpose technical requirements for the loadout strand wire factor of safety (F.S). The case study utilizes historical engineering data from several loadout operations by skidding methods from different projects. It also demonstrates and qualifies the skidding wires' minimum breaking load and safe working load for future loadout operations of substructures and other facilities. Methods: Engineering analysis and comparison of data were performed with reference to international standards and internal COMPANY standard requirements. Data were taken from nine (9) previous projects, for both topsides and jacket facilities, executed at several local fabrication yards where loadout was conducted by three (3) different service providers, with emphasis on four (4) basic elements: i) Industry standards for loadout engineering and operation: the internal COMPANY standard referred to the superseded documents DNV-OS-H201 and DNV/GL 0013/ND. DNV/GL 0013/ND and DNVGL-ST-N001 do not mention any requirement of a strand wire F.S of 4.0 for skidding/pulling operations. ii) Reference to past loadout engineering and execution packages: reference was made to projects delivered by three (3) major offshore facilities operators. The strand wire F.S observed ranges from 2.0 MBL (min) to 2.5 MBL (max); no loadout operation using a requirement of 4.0 MBL was sighted in the references. iii) Strand jack equipment manufacturer datasheets: referring to strand jack equipment manufacturer datasheets from different loadout service providers, the designed F.S of the equipment likewise ranges between 2.0 and 2.5. Eight (8) strand jack datasheet models were referred to, ranging from 15 Mt to 850 Mt capacity; no designed F.S of 4.0 was observed. iv) Site monitoring of actual loadout data and parameters: the maximum load on a strand wire was captured during the 2nd breakout, i.e., during the static condition, at 12.9 MT per strand wire (67.9% utilization). The maximum load on a strand wire in dynamic conditions, during Step 8 and Step 12, was 9.4 MT per strand wire (49.5% utilization). Conclusion: This analysis and study demonstrated that the strand wires supplied by the service provider were technically sufficient in terms of strength; the engineering analysis showed that the minimum breaking load and safe working load utilized and calculated for the projects were satisfied, and the operations were conducted safely. It is recommended from this study that the COMPANY’s technical requirements be revised for utilization in future projects.
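
The underlying check is simply SWL = MBL / F.S and utilization = load / SWL. The sketch below uses an assumed MBL chosen so that an F.S of 2.0 reproduces the ~68% static utilization quoted above, and shows how a 4.0 requirement would instead flag the same load as exceeding the SWL.

    def strand_wire_check(load_mt, mbl_mt, fs):
        """Safe working load and percentage utilization for one strand wire."""
        swl = mbl_mt / fs
        return swl, 100.0 * load_mt / swl

    # Illustrative: 12.9 MT static load on a wire with an assumed 38 MT MBL
    for fs in (2.0, 4.0):
        swl, util = strand_wire_check(load_mt=12.9, mbl_mt=38.0, fs=fs)
        print(f"F.S {fs:.1f}: SWL = {swl:.1f} MT, utilization = {util:.1f}%")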

Keywords: construction, load out, minimum breaking load, safe working load, strand jacking, skidding

Procedia PDF Downloads 111
331 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in past years as an important means for data authentication and ownership protection. Image and video watermarking is well established in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and videos, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proved useful for hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations through modification of the variances of the vertices’ norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection was evaluated against the probability of false-positive detection; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
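
For illustration, the toy sketch below embeds bits by binning vertex norms (distances to the object center) and scaling each bin slightly up or down; the proposed method instead modifies the variances of the norms under optimization and roughness constraints, so this is a simplified stand-in, not the actual algorithm.

    import numpy as np

    def embed_bits(vertices, bits, strength=1e-3):
        """Toy spatial-domain embedding: one bit per bin of vertex norms."""
        center = vertices.mean(axis=0)
        d = np.linalg.norm(vertices - center, axis=1)
        edges = np.linspace(d.min(), d.max() + 1e-9, len(bits) + 1)
        out = vertices.copy()
        for k, bit in enumerate(bits):
            mask = (d >= edges[k]) & (d < edges[k + 1])
            # scale norms in this bin slightly up (bit 1) or down (bit 0)
            factor = 1 + strength if bit else 1 - strength
            out[mask] = center + factor * (out[mask] - center)
        return out

    rng = np.random.default_rng(0)
    mesh_vertices = rng.normal(size=(1000, 3))
    watermarked = embed_bits(mesh_vertices, bits=[1, 0, 1, 1])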

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 159
330 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data

Authors: Andrea Ghermandi

Abstract:

Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use obtained from geotagged photos on social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. These boundaries are used as input for the Application Programming Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users to each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized major axis (SMA) regression is found to outperform ordinary least squares regression and count data models in terms of predictive power, as measured by standard verification statistics such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE). The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information on the home locations of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. This information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and the provision of cultural ecosystem services through public use in a multi-functional approach, compatibly with the need to protect public health.
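
A minimal sketch of SMA calibration and the verification statistics named above: the SMA slope is sign(r) * s_y / s_x, and RMSEP, MAEP, and CE follow their usual definitions (RE, which requires a separate calibration-period mean, is omitted). The photo-user-day and visitor counts below are illustrative, not the study's data.

    import numpy as np

    def sma_fit(x, y):
        """Standardized major axis regression: slope = sign(r) * sd(y)/sd(x)."""
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
        return slope, y.mean() - slope * x.mean()

    def verification_stats(y, y_hat):
        rmsep = np.sqrt(np.mean((y - y_hat) ** 2))
        maep = np.mean(np.abs(y - y_hat))
        ce = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
        return rmsep, maep, ce

    # Illustrative: log photo-user-days vs. log observed annual visitors
    pud = np.log10([3, 12, 40, 95, 300, 1200])
    visits = np.log10([150, 900, 2500, 7000, 30000, 90000])
    slope, intercept = sma_fit(pud, visits)
    print(slope, intercept, verification_stats(visits, intercept + slope * pud))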

Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds

Procedia PDF Downloads 179
329 Process of Production of an Artisanal Brewery in a City in the North of the State of Mato Grosso, Brazil

Authors: Ana Paula S. Horodenski, Priscila Pelegrini, Salli Baggenstoss

Abstract:

The brewing industry with artisanal concepts seeks to serve a specific market with diversified production, and it has been gaining ground nationally, including in the Amazon region. This growth is due to more demanding consumers with diversified tastes who want to try new types of beer, enjoying products with new aromas and flavors, as a differentiator from what is so widely offered by the big industrial brands. Thus, through qualitative research methods, this study investigates how the production of a craft brewery is managed in a city in the northern part of the State of Mato Grosso (Brazil), providing knowledge of the production processes and strategies in this industry. With the efficient use of resources, it is possible to obtain the necessary quality and achieve better performance and differentiation for the company, in addition to analyzing the best management model. The research is descriptive with a qualitative approach, conducted as a case study. For data collection, a semi-structured interview was developed, covering the areas of microbrewery characterization, the artisanal beer production process, and the company's supply chain management. Production processes were also observed during technical visits. The study found that the artisanal brewery develops preventive maintenance strategies for inputs, machines, and equipment, so that the quality of the product and of the production process is achieved. It was observed that the distance from supply centers means that process and supply chain management must be carried out with longer planning horizons so that delivery of the final product remains satisfactory. The production process of the brewery is composed of machines and equipment that allow control and quality of the product; the manager states that, for the productive capacity of the plant and its consumer market, the available equipment meets demand. This study also highlights one of the challenges for the development of small breweries competing with market giants, namely the legislation, which classifies microbreweries as producers of alcoholic beverages. This causes the micro and small business segment to be taxed like the large producers, which have advantages in purchasing large batches of raw materials and receive tax incentives because they are large employers and taxpayers. It was possible to observe that the supply chain management system relies on spreadsheets and notes kept manually, which could be simplified with a computer program to streamline procedures and reduce the risks and failures of the manual process. The control of waste and effluents generated by the plant is outsourced and meets requirements. Finally, the results showed that the brewery uses preventive maintenance as a production strategy, which provides better conditions for the production and quality of artisanal beer. Quality is directly related to the satisfaction of the final consumer, being prized and pursued throughout the production process, through the selection of better inputs, the effectiveness of the production processes, and the relationship with commercial partners.

Keywords: artisanal brewery, production management, production processes, supply chain

Procedia PDF Downloads 119