Search results for: parallel combined spread spectrum
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6024


564 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

Mathematical models are formulated to describe and predict the behavior of a system, and parameter identification is used to adapt the coefficients of the underlying physical laws. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Such models may therefore not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification may be available, so the system must be adapted manually. This paper therefore describes an approach that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system independently of the scientific background, and it is able to identify correlations in the data automatically. The method can be classified as a multivariate regression analysis. In contrast to many other regression methods, this variant is also able to identify correlations between products of variables, not only between single variables, which enables a far more precise representation of causal correlations. The basis and justification of the method come from an analytical background: the series expansion. A further advantage of this technique is the possibility of adapting the generated models in real time during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g., time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent, and the automatic identification of correlations discovered previously unknown relationships. In summary, the approach efficiently computes highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.
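As an illustration of the kind of regression the abstract describes (correlations between products of variables, not only single variables), the following minimal Python sketch fits a second-order expansion to synthetic sensor data. The data, variable names, and library choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for measured sensor data: the "true law" contains the
# product term x2*x3, which a regression on single variables alone would miss.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=500)

# Second-order series expansion of the inputs, then ordinary least squares.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)

names = model.named_steps["polynomialfeatures"].get_feature_names_out(["x1", "x2", "x3"])
coefs = model.named_steps["linearregression"].coef_
for name, c in zip(names, coefs):
    if abs(c) > 0.05:                   # keep only the significant terms
        print(f"{name}: {c:+.2f}")      # recovers x1 and the x2 x3 product term
```

Because only the coefficients of the expansion are unknown, such a model can be refit quickly from streaming measurements, which is how real-time adaptation during operation becomes feasible.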

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 381
563 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model

Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han

Abstract:

Due to climate change, coastal urban areas repeatedly suffer loss of property and life from flooding. There are three main causes of inland submergence. First, when rain of high intensity occurs, the inland water volume cannot be drained into rivers because of the increase in impervious surface from land development and defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising sea water. Previous studies, however, ignored the complex mechanism of flooding and showed discrepancies and inadequacies due to the linear summation of the individual analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate were added to the shallow water equations to include the inland flooding analysis module. The applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider the coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation binding module for comprehensive flooding analysis. Based on the combined modules, the coastal inundation-inland and river flow interface was simulated by inputting flow rate and depth data in an artificial flume. Accordingly, it was possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where combined flooding occurs and to assist in analyzing damage to coastal cities. Acknowledgements: This research was supported by a grant 'Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change' [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea.
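For context, the depth-averaged continuity equation that such a sink/source extension builds on can be written as follows (a standard form assumed here for illustration; the abstract does not give the exact expression of the exponentially growing term):

```latex
\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y}
  = S(x, y, t),
```

where h is the water depth, (u, v) are the depth-averaged velocities, and S is the added sink/source term representing inland water entering or leaving the domain.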

Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model

Procedia PDF Downloads 336
562 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
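As a toy illustration of the workload placement and cost modeling idea discussed above (the provider names, prices, and workloads below are made up for this sketch, not data from the paper), the following Python snippet picks, for each workload, the provider that minimizes compute cost plus cross-cloud egress cost:

```python
# Invented placeholder inputs: each workload has a compute requirement, the data
# volume it must read, and the cloud where that data currently lives.
workloads = [
    {"name": "etl",      "cpu_hours": 120, "gb_moved": 800, "data_home": "aws"},
    {"name": "training", "cpu_hours": 900, "gb_moved": 200, "data_home": "gcp"},
]
providers = {
    "aws":   {"cpu_per_hour": 0.045, "egress_per_gb": 0.090},
    "gcp":   {"cpu_per_hour": 0.040, "egress_per_gb": 0.120},
    "azure": {"cpu_per_hour": 0.048, "egress_per_gb": 0.087},
}

def cost(workload, target):
    compute = workload["cpu_hours"] * providers[target]["cpu_per_hour"]
    if target == workload["data_home"]:
        return compute                   # data is local: no egress charge
    egress = workload["gb_moved"] * providers[workload["data_home"]]["egress_per_gb"]
    return compute + egress

for w in workloads:
    best = min(providers, key=lambda t: cost(w, t))
    print(f'{w["name"]} -> {best} (${cost(w, best):.2f})')
```

Real placement decisions would add latency, compliance, and capacity constraints, but the trade-off between cheaper compute and the cost of moving data across clouds is the core of the problem the abstract describes.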

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 44
561 The Role of the Child's Previous Inventory in Verb Overgeneralization in Spanish Child Language: A Case Study

Authors: Mary Rosa Espinosa-Ochoa

Abstract:

The study of overgeneralization in inflectional morphology provides evidence for understanding how a child's mind works when applying linguistic patterns in a novel way. High-frequency inflectional forms in the input cause inappropriate use in contexts related to lower-frequency forms. Children learn verbs as lexical items, and new forms develop only gradually, around their second year: most of the utterances that children produce are closely related to what they have previously produced. Spanish has a complex verbal system that inflects for person, mood, and tense. Approximately 200 verbs are irregular, and bare roots always require an inflected form, which represents a challenge for memory. The aim of this research is to investigate i) what kinds of overgeneralization errors children make in verb production, ii) to what extent these errors are related to verb forms previously produced, and iii) whether the overgeneralized verb components are also frequent in the child's linguistic inventory. It consists of a high-density longitudinal study of a middle-class girl (1;11,24-2;02,24) from Mexico City, whose utterances were recorded almost daily for three months to compile a unique corpus in the Spanish language. Of the 358 types of inflected verbs produced by the child, 9.11% are overgeneralizations. Not only are inflected forms (verbal and pronominal clitics) overgeneralized, but also verbal roots. Each of the forms can be traced to previous utterances, and they show that the child is detecting morphological patterns. Neither verbal roots nor inflected forms are associated with high-frequency patterns in her own speech. For example, the child alternates the bare roots of an irregular verb, cáye-te* and cáiga-te* ("fall down"), to express the imperative of the verb cá-e-te (fall down.IMPERATIVE-PRONOMINAL.CLITIC), although cay-ó (PAST.PERF.3SG) is the most frequent form in her previous complete inventory, and the combined frequency of caer (INF), cae (PRES.INDICATIVE.3SG), and caes (PRES.INDICATIVE.2SG) is the same as that of caiga (PRES.SUBJ.1SG and 3SG). These results provide evidence that a) two forms of the same verb compete in the child's memory, and b) although the child uses her own inventory to create new forms, these forms are not necessarily frequent in her memory storage, which means that her mind is more sensitive to external stimuli. Language acquisition is a developing process, given the sensitivity of the human mind to linguistic interaction with the outside world.

Keywords: inflection, morphology, child language acquisition, Spanish

Procedia PDF Downloads 80
560 Improving Recovery Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

Gaza Strip, part of Palestine (365 km² and 1.8 million inhabitants), is considered a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, and only 5-10% of it is suitable for human use, which barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority strategy is to find a non-conventional water resource from treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (a pressure line and infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning. Nearly 23 Mm³ of partially treated wastewater have been infiltrated into the aquifer. Phases B and C witnessed many delays, and this forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, in order to measure the efficiency of the SAT system and the spread of the contamination plume in relation to the efficiency of the proposed RRS and the proposed locations of its 27 recovery wells. The results of the monitored wells were assessed against PWA baseline data and fed into a groundwater model to simulate the plume and propose the most suitable solution to the delays. The redesign mainly manipulated the pumping rates of the wells, the proposed locations, and the functioning schedules (including well groupings). The proposed simulations were examined using Visual MODFLOW V4.2. The results of the monitored wells were assessed based on the location of the monitoring wells relative to the proposed recovery well locations (200 m, 500 m and 750 m away from the IBs). Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) together with a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, which indicated an expansion of the plume to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on that, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates and the locations of the recovery wells. Simulations of plume expansion and path-lines were extracted from the model to monitor how to prevent expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining the RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells at a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.
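The scenario analysis described above was performed with Visual MODFLOW V4.2. Purely as an illustration of how such a recovery-well scenario can be scripted, the sketch below sets up a single-layer MODFLOW model with one pumping (recovery) well using the open-source FloPy package; the grid dimensions, aquifer properties, and pumping rate are placeholder values, not those of the North Gaza model.

```python
import flopy

# Placeholder single-layer grid standing in for the aquifer around the IBs.
mf = flopy.modflow.Modflow(modelname="rrs_scenario", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=50, ncol=50,
                               delr=50.0, delc=50.0, top=0.0, botm=-40.0,
                               nper=1, perlen=365.0)
bas = flopy.modflow.ModflowBas(mf, ibound=1, strt=0.0)
lpf = flopy.modflow.ModflowLpf(mf, hk=20.0, sy=0.2, ss=1e-5)

# One recovery well pumping 500 m3/day at (row 25, col 30); a full scenario
# would list all proposed wells with their individual rates and schedules.
wel = flopy.modflow.ModflowWel(mf, stress_period_data={0: [[0, 25, 30, -500.0]]})
pcg = flopy.modflow.ModflowPcg(mf)
oc = flopy.modflow.ModflowOc(mf)

mf.write_input()
# mf.run_model()  # requires the MODFLOW-2005 executable on the system path
```

Comparing scenarios then reduces to editing the well list, rates, and stress-period timing and re-running the model, which mirrors the scenario comparison described in the abstract.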

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 297
559 Synchrotron Based Techniques for the Characterization of Chemical Vapour Deposition Overgrowth Diamond Layers on High Pressure, High Temperature Substrates

Authors: T. N. Tran Thi, J. Morse, C. Detlefs, P. K. Cook, C. Yıldırım, A. C. Jakobsen, T. Zhou, J. Hartwig, V. Zurbig, D. Caliste, B. Fernandez, D. Eon, O. Loto, M. L. Hicks, A. Pakpour-Tabrizi, J. Baruchel

Abstract:

The ability to grow boron-doped diamond epilayers of high crystalline quality is a prerequisite for the fabrication of diamond power electronic devices, in particular high voltage diodes and metal-oxide-semiconductor (MOS) transistors. Boron-doped and intrinsic diamond layers are homoepitaxially overgrown by microwave-assisted chemical vapour deposition (MWCVD) on single crystal high pressure, high temperature (HPHT) grown bulk diamond substrates. Various epilayer thicknesses were grown, with dopant concentrations ranging from 10²¹ atom/cm³ at nanometer thickness in the case of 'delta doping', up to 10¹⁶ atom/cm³ and 50 µm thickness for high electric field drift regions. The crystalline quality of these overgrown layers as regards defects, strain and distortion is critical for device performance through its relation to the final electrical properties (Hall mobility, breakdown voltage, etc.). In addition to the optimization of the epilayer growth conditions in the MWCVD reactor, other important questions related to the crystalline quality of the overgrown layer(s) are: 1) what is the dependence on the bulk quality and surface preparation methods of the HPHT diamond substrate? 2) how do defects already present in the substrate crystal propagate into the overgrown layer? 3) what types of new defects are created during overgrowth, what are their growth mechanisms, and how can these defects be avoided? 4) how can we relate, in a quantitative manner, parameters describing the measured crystalline quality of the boron-doped layer to the electronic properties of the final processed devices? We describe synchrotron-based techniques developed to address these questions. These techniques allow the visualization of local defects and crystal distortion, which complements the data obtained by other well-established analysis methods such as AFM, SIMS and Hall conductivity measurements. We have used Grazing Incidence X-ray Diffraction (GIXRD) at the ID01 beamline of the ESRF to study lattice parameters and damage (strain, tilt and mosaic spread) both in diamond substrate near-surface layers and in thick (10-50 µm) overgrown boron-doped diamond epilayers. Micro- and nano-section topography have been carried out at both the BM05 and ID06 ESRF beamlines using rocking curve imaging techniques to study defects which have propagated from the substrate into the overgrown layer(s) and their influence on final electronic device performance. These studies were performed using various commercially sourced HPHT grown diamond substrates, with the MWCVD overgrowth carried out at the Fraunhofer IAF, Germany. The synchrotron results are in good agreement with low-temperature (5 K) cathodoluminescence spectroscopy carried out on the grown samples using an Inspect F50 FESEM fitted with an IHR spectrometer.

Keywords: synchrotron X-ray diffraction, crystalline quality, defects, diamond overgrowth, rocking curve imaging

Procedia PDF Downloads 239
558 Efficacy and Safety of Updated Target Therapies for Treatment of Platinum-Resistant Recurrent Ovarian Cancer

Authors: John Hang Leung, Shyh-Yau Wang, Hei-Tung Yip, Fion, Ho Tsung-chin, Agnes LF Chan

Abstract:

Objectives: Platinum-resistant ovarian cancer has a short overall survival of 9-12 months and limited treatment options. The combination of immunotherapy and targeted therapy appears to be a promising treatment option for patients with ovarian cancer, particularly for patients with platinum-resistant recurrent ovarian cancer (PRrOC). However, there are no direct head-to-head clinical trials comparing their efficacy and toxicity. We therefore used a network meta-analysis to directly and indirectly compare seven newer immunotherapies or targeted therapies combined with chemotherapy in platinum-resistant relapsed ovarian cancer, including antibody-drug conjugates, PD-1 (programmed death-1) and PD-L1 (programmed death-ligand 1) inhibitors, PARP (poly ADP-ribose polymerase) inhibitors, TKIs (tyrosine kinase inhibitors), and antiangiogenic agents. Methods: We searched the PubMed (Public/Publisher MEDLINE), EMBASE (Excerpta Medica Database), and Cochrane Library electronic databases for phase II and III trials involving PRrOC patients treated with immunotherapy or targeted therapy plus chemotherapy. The quality of the included trials was assessed using the GRADE method. The primary outcome compared was progression-free survival; the secondary outcomes were overall survival and safety. Results: Seven randomized controlled trials involving a total of 2058 PRrOC patients were included in this analysis. Bevacizumab plus chemotherapy showed statistically significant differences in PFS (progression-free survival) but not OS (overall survival) compared with all targeted and immunotherapy regimens of interest; however, according to the heatmap analysis, bevacizumab plus chemotherapy carried a statistically significant risk of grade 3 or higher SAEs (severe adverse effects), particularly hematological severe adverse events (neutropenia, anemia, leukopenia, and thrombocytopenia). Conclusions: Bevacizumab plus chemotherapy resulted in better PFS than all regimens of interest for the treatment of PRrOC; however, it is associated with a greater risk of hematological SAEs.

Keywords: platinum-resistant recurrent ovarian cancer, network meta-analysis, immune checkpoint inhibitors, target therapy, antiangiogenic agents

Procedia PDF Downloads 52
557 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely reformulated in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, because of the large number of unknowns to be optimized with respect to a training set that generally needs to be large enough for the model to generalize effectively. It is also necessary to limit the size of the convolution kernels because of the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is incurred in the computation of the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
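To make the idea concrete, here is a minimal PyTorch sketch of our reading of the approach (not the authors' code): a layer with fixed random convolution filters of several sizes, where only one scalar weight per filter is trainable.

```python
import torch
import torch.nn as nn

class RandomKernelConv(nn.Module):
    """Convolution with fixed random filters of several sizes; only one scalar
    weight per filter is learned (illustrative sketch of the abstract's idea)."""
    def __init__(self, in_ch, filters_per_size=16, sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList()
        self.scales = nn.ParameterList()
        for k in sizes:
            conv = nn.Conv2d(in_ch, filters_per_size, k, padding=k // 2, bias=False)
            nn.init.normal_(conv.weight, std=1.0 / (k * k * in_ch) ** 0.5)
            conv.weight.requires_grad_(False)          # the filters stay random
            self.convs.append(conv)
            # one learnable scalar per random filter (its effective std. dev.)
            self.scales.append(nn.Parameter(torch.ones(filters_per_size, 1, 1)))

    def forward(self, x):
        outs = [s * c(x) for c, s in zip(self.convs, self.scales)]
        return torch.relu(torch.cat(outs, dim=1))

# quick shape check on a dummy batch
layer = RandomKernelConv(in_ch=3)
y = layer(torch.randn(2, 3, 32, 32))
print(y.shape)   # torch.Size([2, 48, 32, 32])
```

Because the filter weights are frozen, back-propagation only updates the scalar scales, which is why the trainable parameter count stays small even when many large kernels are used.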

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 263
556 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and for estimating the relative abundance of taxa. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the power of this marker depends on its use at full length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. Using metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analysis, opening new biological perspectives in human and environmental health. In a study focused on coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted 2 species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, and 3 uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process. RiboTaxa is user-friendly and rapid, allows the microbiota structure of any environment to be described, and its results can be easily interpreted. This software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 94
555 Characterization of β-Lactamases Resistance amongst Acinetobacter Baumannii Isolated from Clinical Samples, Egypt

Authors: Amal Saafan, Kareem Al Sofy, Sameh AbdelGhani, Magdy Amin

Abstract:

Background: Acinetobacter spp. resistance towards β-lactam antibiotics is mediated mainly by the production of different classes of β-lactamases; detection of some genes responsible for β-lactamase production is the objective of this study. Methods: One hundred fifty bacterial isolates were recovered from blood, sputum, and urine specimens from different hospitals in Egypt. Sixty-nine isolates were identified as Acinetobacter baumannii using traditional biochemical tests, CHROMagar, MicroScan and PCR amplification of the blaOXA-51-like gene. Acinetobacter baumannii isolates were grouped into a carbapenem-resistant group (GP1), a cefotaxime-, ceftazidime- and cefoxitin-resistant group (GP2), and a carbapenem- and cephalosporin-non-resistant group (GP3). Carbapenemase activity was screened for GP1 using the modified Hodge test (MHT). Metallo-β-lactamase screening was performed for MHT-positive isolates using the double disk synergy test (DDST) and the combined disk test (CDT). AmpC activity was screened for GP2 using the AmpC disk test with Tris-EDTA, DDST, and CDT. Finally, PCR amplification of the blaOXA-51-like, blaOXA-23-like, blaIMP-like, blaVIM-like, and blaADC-like genes was performed for isolates that showed at least two positive results out of three in both the AmpC and carbapenemase phenotypic screening tests (obvious activity), in addition to GP3 (for comparison). Detection of blaOXA-51-like and blaADC-like genes preceded by ISAba1 was also performed. Results: The antibiogram of the 69 pure Acinetobacter baumannii isolates resulted in 57, 64, and 2 isolates enrolled into GP1, GP2, and GP3, respectively. Carbapenemase activity was shown by 49 (85.9%) isolates using MHT. Metallo-β-lactamase screening revealed 32 (65.3%) and 35 (71.4%) positive isolates using DDST and CDT, respectively. AmpC activity was shown by 43 (67.2%) and 50 (78.1%) isolates using the AmpC disk test with Tris-EDTA, and both DDST and CDT, respectively. Twenty-seven isolates showed obvious activity; all of them (100%) harbored the blaOXA-51-like and blaADC-like genes, while the blaOXA-23-like, blaIMP-like and blaVIM-like genes were harbored by 23 (85.2%), 9 (33.3%) and no isolates, respectively. Only 12 (44.4%) isolates harbored blaOXA-51-like and blaADC-like genes preceded by ISAba1. GP3 isolates were positive only for the blaOXA-51-like and blaADC-like genes. Conclusion: It was not possible to correlate resistance with the presence of the blaOXA-51-like and blaADC-like genes, nor with the presence of ISAba1 immediately upstream as a transcriptional promoter. The blaOXA-23-like gene played an important role in carbapenem resistance when compared with the blaIMP-like and blaVIM-like genes.

Keywords: acinetobacter, beta-lactams, resistance, antimicrobial agents

Procedia PDF Downloads 325
554 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array Project (CTA) aims to build two observatories of Cherenkov telescopes, located at Cerro del Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply the standard Directives on Electromagnetic Compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than the speed of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, called Large Scale Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface, which focuses the radiation onto a camera composed of an array of high-speed photosensors that are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and lowest weight, cost and power consumption. Each pixel incorporates a photosensor able to discriminate single photons and the corresponding readout electronics. The first LST has already been commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry a Conformité Européenne (CE) marking. This demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests, which were designed for equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the largest disturbances, was also performed. Obtaining the CE mark requires knowing which harmonized standards apply and how the specific requirements are elaborated. For this type of large installation, the tests to be carried out need to be adapted and developed. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 88
553 Biochemical Effects of Low Dose Dimethyl Sulfoxide on HepG2 Liver Cancer Cell Line

Authors: Esra Sengul, R. G. Aktas, M. E. Sitar, H. Isan

Abstract:

Hepatocellular carcinoma (HCC) is a hepatocellular tumor commonly arising in the chronically damaged liver. HepG2 is the cell type most commonly used in HCC studies. The main proteins remaining in blood serum after removal of plasma fibrinogen are albumin and globulin. The fact that albumin indicates hepatocellular damage and reflects the synthesis capacity of the liver was the main reason for using it. Alpha-fetoprotein (AFP) is an albumin-like embryonic globulin found in the embryonic cortex, cord blood, and fetal liver. It has been used as a marker in the follow-up of tumor growth in various malignant tumors and of the efficacy of surgical and medical treatments, so it is a suitable protein to examine alongside albumin. Having observed the morphological changes induced by dimethyl sulfoxide (DMSO) on HepG2 cells, we decided to investigate its biochemical effects: we examined the effects of low doses of DMSO, which is widely used in cell culture, on albumin, AFP and total protein. Materials and Methods: Cell culture: the medium was prepared using Dulbecco's Modified Eagle Medium (DMEM), Fetal Bovine Serum (FBS), Phosphate Buffered Saline (PBS) and trypsin maintained at -20 °C. Fixation of cells: HepG2 cells, which had developed appropriately by the end of the first week, were fixed with acetone. We stored our cells in PBS at +4 °C until the fixation was completed. Area calculation: the areas of the cells were calculated in ImageJ (IJ). Microscopic examination: the examination was performed with a Zeiss inverted microscope; photographs were taken at 40x, 100x, 200x and 400x. Biochemical tests: total protein and albumin in the serum samples were analyzed by spectrophotometric methods on an autoanalyzer, and alpha-fetoprotein was analyzed by the ECLIA method. Results: When the liver cancer cells were cultured in medium with 1% DMSO for 4 weeks, a significant difference was observed compared with the control group. As a result, we have seen that DMSO can act as an important agent in the treatment of liver cancer. Cell areas were reduced in the DMSO group compared to the control group, and the confluency ratio increased. The ability to form spheroids was also significantly higher in the DMSO group. Alpha-fetoprotein was lower than the values of a typical liver cancer patient, and the total protein amount increased to the reference range of a healthy individual. Because the albumin values were below the measurable range of the assay, numerical results could not be obtained in the biochemical examinations. We interpret all these results as indicating that DMSO may act as a supportive agent. Since no single parameter was sufficient on its own, we used three parameters, and the results were positive when compared in parallel with the values of a normal healthy individual. We hope to extend the study further by adding new parameters and genetic analyses, by increasing the number of samples, and by using DMSO as an adjunct agent in the treatment of liver cancer.

Keywords: hepatocellular carcinoma, HepG2, dimethyl sulfoxide, cell culture, ELISA

Procedia PDF Downloads 119
552 Integrating Reactive Chlorine Species Generation with H2 Evolution in a Multifunctional Photoelectrochemical System for Low Operational Carbon Emissions Saline Sewage Treatment

Authors: Zexiao Zheng, Irene M. C. Lo

Abstract:

Organic pollutants, ammonia, and bacteria are major contaminants in sewage, which may adversely impact ecosystems without proper treatment. Conventional wastewater treatment plants (WWTPs) are operated to remove these contaminants from sewage but suffer from high carbon emissions and are powerless to remove emerging organic pollutants (EOPs). Herein, we have developed a low operational carbon emissions multifunctional photoelectrochemical (PEC) system for saline sewage treatment to simultaneously remove organic compounds, ammonia, and bacteria, coupled with H2 evolution. A reduced BiVO4 (r-BiVO4) with improved PEC properties due to the construction of oxygen vacancies and V4+ species was developed for the multifunctional PEC system. The PEC/r-BiVO4 process could treat saline sewage to meet local WWTPs’ discharge standard in 40 minutes at 2.0 V vs. Ag/AgCl and completely degrade carbamazepine (one of the EOPs), coupled with significant evolution of H2. A remarkable reduction in operational carbon emissions was achieved by the PEC/r-BiVO4 process compared with large-scale WWTPs, attributed to the restrained direct carbon emissions from the generation of greenhouse gases. Mechanistic investigation revealed that the PEC system could activate chloride ions in sewage to generate reactive chlorine species and facilitate •OH production, promoting contaminants removal. The PEC system exhibited operational feasibility at different pH and total suspended solids concentrations and has outstanding reusability and stability, confirming its promising practical potential. The study combined the simultaneous removal of three major contaminants from saline sewage and H2 evolution in a single PEC process, demonstrating a viable approach to supplementing and extending the existing wastewater treatment technologies. The study generated profound insights into the in-situ activation of existing chloride ions in sewage for contaminants removal and offered fundamental theories for applying the PEC system in sewage remediation with low operational carbon emissions. The developed PEC system can fit well with the future needs of wastewater treatment because of the following features: (i) low operational carbon emissions, benefiting the carbon neutrality process; (ii) higher quality of the effluent due to the elimination of EOPs; (iii) chemical-free in the operation of sewage treatment; (iv) easy reuse and recycling without secondary pollution.

Keywords: contaminants removal, H2 evolution, multifunctional PEC system, operational carbon emissions, saline sewage treatment, r-BiVO4 photoanodes

Procedia PDF Downloads 130
551 Accessing Motional Quotient for All Round Development

Authors: Zongping Wang, Chengjun Cui, Jiacun Wang

Abstract:

The concept of intelligence has been widely used to assess an individual's cognitive abilities to learn, form concepts, understand, apply logic, and reason. According to the theory of multiple intelligences, there are eight distinct types of intelligence. One of them is bodily-kinaesthetic intelligence, which relates to an individual's capacity to control his or her body and work with objects. Motor intelligence, on the other hand, reflects the capacity to understand, perceive and solve functional problems through motor behavior. Both bodily-kinaesthetic intelligence and motor intelligence refer directly or indirectly to bodily capacity. Inspired by these two intelligence concepts, this paper introduces motional intelligence (MI). MI is two-fold. (1) Body strength, which is the capacity of various organ functions manifested by muscle activity under the control of the central nervous system during physical exercise. It can be measured by the magnitude of muscle contraction force, the frequency of repeating a movement, the time to finish a movement or body position, the duration for which muscles can be kept in a working state, etc. Body strength reflects the objective side of MI. (2) The level of psychiatric willingness toward physical events. This is subjective and is determined by an individual's self-consciousness toward physical events and resistance to fatigue. As such, we call it subjective MI. Subjective MI can be improved through education and appropriate social events, and its improvement can lead to improvement of objective MI. A quantitative score of an individual's MI is the motional quotient (MQ). MQ is affected by several factors, including genetics, physical training, diet and lifestyle, family and social environment, and personal awareness of the importance of physical exercise. Genes determine one's body strength potential. Physical training, in general, makes people stronger, faster and swifter. Diet and lifestyle have a direct impact on health. Family and social environment largely affect one's passion for physical activities, as does personal awareness of the importance of physical exercise. The key to the success of the MQ study is developing an acceptable and efficient system that can be used to assess MQ objectively and quantitatively. Different assessment systems should be applied to different groups of people according to their ages and genders. Field tests, laboratory tests and questionnaires are among the essential components of MQ assessment. A scientific interpretation of the MQ score is also part of an MQ assessment system, as it helps an individual to improve his or her MQ. IQ (intelligence quotient) and EQ (emotional quotient) and their tests have been studied intensively. We argue that the study of IQ and EQ alone is not sufficient for an individual's all-round development. The significance of the MQ study is that it complements the study of IQ and EQ: MQ reflects an individual's mental as well as bodily level of intelligence in physical activities. It is well known that the seal of the American Springfield College includes the Luther Gulick triangle with the words "spirit," "mind," and "body" written within it. MQ, together with IQ and EQ, echoes this education philosophy. Since its inception in 2012, MQ research has spread rapidly in China. By now, six prestigious universities in China have established research centers on MQ and its assessment.

Keywords: motional intelligence, motional quotient, multiple intelligence, motor intelligence, all round development

Procedia PDF Downloads 133
550 Dependence of Androgen Status in Men with Primary Hypothyroidism on Duration and Condition of Compensation

Authors: Krytskyy T.

Abstract:

Introduction: The role of androgen deficiency in men as a factor in the pathogenesis of many somatic diseases is unmistakable. The interaction of thyroid and sex hormones in men with hypothyroidism is still the subject of discussion. The purpose of the study is to assess the androgen status of men with primary hypothyroidism, depending on its duration and the state of compensation. Materials and methods: 45 men with primary hypothyroidism aged 35 to 60 years, as well as 25 healthy men who formed a control group, were under supervision. The men were selected for examination during outpatient and in-patient treatment at the endocrinology department of the University Hospital in Ternopil. The functional state of the pituitary-gonadal system was evaluated in order to characterize the androgen status of the patients. The blood concentrations of follicle-stimulating hormone, luteinizing hormone, prolactin and thyroid-stimulating hormone were determined by an enzyme-linked method, as were the levels of total testosterone and sex hormone-binding globulin. Results: Reduced total testosterone (TT) content was found in 42.2% of patients with hypothyroidism. Of these, 17.8% of patients had blood TT levels lower than 8.0 nmol/L, and in 11 (24.4%) men the level was in the range of 8.0 to 12.0 nmol/L. Based on the determination of free testosterone (FT), the frequency of laboratory hypogonadism in men with hypothyroidism was higher than that based on TT. The degree of compensation of hypothyroidism did not significantly affect the average levels of gonadotropic and sex hormones. Conclusions: Reduced total testosterone content was found in 42.2% of patients with primary hypothyroidism. Of these, 17.8% of patients had blood TT levels lower than 8.0 nmol/L, which is a sign of absolute testosterone deficiency, and in 24.4% of men the level ranged from 8.0 to 12.0 nmol/L, indicating partial androgen deficiency. Sex hormone-binding globulin levels were found to be lower in 46.7% of patients with hypothyroidism compared with the control group. The average levels of E2 in the examined patients did not differ significantly from the mean of the control group. FSH, LH and prolactin levels in men with hypothyroidism were within the normal age limits and did not differ significantly from those of the control group. The degree of compensation of hypothyroidism did not significantly affect the average levels of gonadotropic and sex hormones. The mean LH content in the blood was significantly increased in men with a duration of hypothyroidism of up to 5 years, and did not differ from that of the control group or of men with a duration of hypothyroidism over 5 years. In men with hypothyroidism, a significant reduction of the T/LH ratio was found. The data obtained may indicate a combined lesion of the central and peripheral parts of the pituitary-gonadal system in men with hypothyroidism.

Keywords: androgenic status, hypothyroidism, testosterone, sex hormone-binding globulin

Procedia PDF Downloads 168
549 Model-Based Global Maximum Power Point Tracking at Photovoltaic String under Partial Shading Conditions Using Multi-Input Interleaved Boost DC-DC Converter

Authors: Seyed Hossein Hosseini, Seyed Majid Hashemzadeh

Abstract:

Solar energy is one of the most remarkable renewable energy sources, with particular characteristics such as being unlimited, free of environmental pollution, and freely accessible. Generally, solar energy can be used in thermal and photovoltaic (PV) forms. The installation cost of a PV system is very high. Additionally, because of its dependence on environmental conditions such as solar radiation and ambient temperature, the electrical power generation of this system is unpredictable, and without power electronic devices there is no guarantee of maximum power delivery at its output. Maximum power point tracking (MPPT) should be used to extract the maximum power of a PV string. MPPT is an essential part of the PV system; without it, the maximum available power of the PV string cannot be reached and high losses occur in the PV system. One of the notable challenges in the MPPT problem is partial shading conditions (PSC). Under PSC, the output photocurrent of the PV module under the shadow is less than the PV string current. The difference between these currents passes through the module's internal parallel resistance and creates a large negative voltage across the shaded modules. This significant negative voltage damages the PV module under the shadow; this condition is called the hot-spot phenomenon. An anti-parallel diode, known as the bypass diode, is inserted across the PV module to prevent this phenomenon. Due to the behavior of the bypass diodes under PSC, the P-V curve of the PV string has several peaks, and the peak that delivers the maximum available power is the global peak. Model-based global MPPT (GMPPT) methods can estimate the optimal point faster than other GMPPT approaches. Centralized, modular, and interleaved DC-DC converter topologies are the main structures that can be used for GMPPT on a PV string. The centralized structure suffers from problems such as current mismatch losses in the PV string, loss of the power of shaded modules because they are bypassed by the bypass diodes under PSC, and the need for a series connection of many PV modules to reach the desired voltage level. In the modular structure, each PV module is connected to a DC-DC converter; as the power demanded from the PV string increases, the number of DC-DC converters used in the PV system also increases, so the cost of the modular structure is very high. Model-based GMPPT can instead be implemented through the multi-input interleaved boost DC-DC converter to increase the power extraction from the PV string and to reduce hot-spot and current mismatch errors in the PV string under different environmental conditions and variable load circumstances. The interleaved boost DC-DC converter has many advantages over the other structures mentioned, such as high reliability and efficiency, better regulation of the DC voltage at the DC link, mitigation of notable problems such as module current mismatch and the hot-spot phenomenon, and reduced voltage stress on the power switches.
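As a simple illustration of why global MPPT is needed under PSC (the bypass diodes produce a multi-peak P-V curve), the Python sketch below scans a crude PV string model for its global maximum power point; the module parameters and shading pattern are illustrative assumptions, and the converter itself is not modeled.

```python
import numpy as np

# Rough single-diode-style module approximation (illustrative constants):
# photocurrent scales with irradiance g (in suns).
I0 = 1e-9                          # diode saturation current (A)
VT_CELLS = 0.026 * 1.3 * 60        # thermal voltage x ideality x cells in series

def module_voltage(i, g):
    iph = 8.0 * g                                           # photocurrent at irradiance g
    v = VT_CELLS * np.log(np.maximum(iph - i, 1e-12) / I0 + 1.0)
    # when the string current exceeds the module's photocurrent, the bypass
    # diode conducts and the module contributes only a small negative drop
    return np.where(i < iph, v, -0.5)

def string_power(i, irradiances):
    v = sum(module_voltage(i, g) for g in irradiances)
    return v * i

irr = [1.0, 1.0, 0.4, 0.7]                                  # partial shading pattern
i_grid = np.linspace(0.0, 8.0, 2000)                        # scan of string current
p = string_power(i_grid, irr)
k = int(np.argmax(p))
print(f"Global MPP: I = {i_grid[k]:.2f} A, P = {p[k]:.1f} W")
```

A model-based GMPPT controller replaces this brute-force scan with an estimate of where the global peak lies, which is what makes it faster than search-based approaches.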

Keywords: solar energy, photovoltaic systems, interleaved boost converter, maximum power point tracking, model-based method, partial shading conditions

Procedia PDF Downloads 109
548 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Biofouling is a phenomenon that occurs at the water-membrane interface; it is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, onto the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane's separation properties. Understanding the mechanism of the initiation phase of biofouling is key to eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials is affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows the affinity of two materials for each other to be predicted. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore of the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the rankings were used to calculate the HSP of each polymeric film. The results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out by training a neural network model. The trained neural network model has three inputs: the contact angle value, and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent used and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
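A minimal sketch of the kind of neural network model described above (three inputs: contact angle, solvent surface tension, solvent viscosity; output: HSP distance), using scikit-learn on synthetic placeholder data, since the measured values and network architecture are not given in the abstract:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder data: columns are contact angle (deg), solvent surface
# tension (mN/m) and viscosity (mPa*s); target is the HSP distance Ra (MPa^0.5).
# The study itself trained on measurements for 14 solvents on 5 polymer films.
rng = np.random.default_rng(0)
X = rng.uniform([10.0, 20.0, 0.3], [90.0, 75.0, 3.0], size=(70, 3))
ra = 0.20 * X[:, 0] - 0.10 * X[:, 1] + 4.0 * X[:, 2]   # made-up relation, for illustration only

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, ra)

# Predict the HSP distance for a new (contact angle, surface tension, viscosity) triple
print(model.predict([[65.0, 50.0, 1.0]]))
```

The predicted distances for a set of solvents with known HSP values can then be inverted to estimate the total and individual HSP components of the film, as the abstract describes.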

Keywords: surface characterization, hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 64
547 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding as Industry 4.0 is originally a German strategy with supporting strong policy instruments being utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing Institution of the research papers with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. The data analytics expertise is not useful unless the manufacturing process information is utilized. A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 of the papers contributed a framework, another 12 of the papers were based on a case study, and 11 of the papers focused on theory. However, there were only three papers that contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 88
546 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump

Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani

Abstract:

The side channel pump has distinctive hydraulic performance characteristics compared with other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization is soaring in petrochemical, food processing, automotive and aerospace fuel pumping applications, where high heads are required at low flow rates. The side channel pump is characterized by unstable flow: after the fluid flows into the impeller passage, it moves into the side channel, returns to the impeller, and then moves on to the next circulation, so the flow leaves the side channel pump following a helical path. The pressure fluctuation exhibited in the flow, however, contributes greatly to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unsteady flow dynamics was carefully investigated under different working conditions: 0.8QBEP, 1.0QBEP and 1.2QBEP. The results showed that the pressure fluctuation distribution around the pressure side of the blade is greater than on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchanged flow between the impeller and the side channel. Time- and frequency-domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations. It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low pressure at the inflow from the high pressure at the outflow. The pressure fluctuation amplitudes in the frequency-domain spectrum at the different monitoring points showed a gently decreasing trend that was common to all operating conditions. The frequency-domain analysis also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz and continued at integer multiples of the rotating shaft frequency. Also, the mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows: operating conditions 0.8QBEP and 1.0QBEP depicted fewer and similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
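The frequency-domain step described above amounts to a Fourier transform of the monitor-point pressure history; a minimal Python sketch on a synthetic signal (the sampling rate and signal below are assumptions, not the exported CFD data) is:

```python
import numpy as np

fs = 24000.0                                   # assumed sampling rate of the monitor point (Hz)
t = np.arange(0.0, 0.5, 1.0 / fs)
# synthetic signal standing in for the exported monitor-point pressure history,
# with components at the excitation frequencies reported in the abstract
p = (1.0 * np.sin(2 * np.pi * 600 * t)
     + 0.5 * np.sin(2 * np.pi * 1200 * t)
     + 0.2 * np.sin(2 * np.pi * 1800 * t)
     + 0.05 * np.random.default_rng(1).normal(size=t.size))

p_fluct = p - p.mean()                         # pressure fluctuation about the mean
spec = np.abs(np.fft.rfft(p_fluct)) * 2.0 / p_fluct.size
freqs = np.fft.rfftfreq(p_fluct.size, d=1.0 / fs)

for f in freqs[np.argsort(spec)[-3:]][::-1]:   # three largest spectral peaks
    print(f"dominant component near {f:.0f} Hz")
```

In practice the same transform is applied to the pressure history at each monitoring point, so that the amplitude of each harmonic can be compared across the impeller and side channel locations.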

Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump

Procedia PDF Downloads 110
545 Impacts of Public Insurance on Health Access and Outcomes: Evidence from India

Authors: Titir Bhattacharya, Tanika Chakraborty, Prabal K. De

Abstract:

Maternal and child health continue to be a significant policy focus in developing countries, including India. An emerging model in health care is the creation of public and private partnerships. Since the construction of physical infrastructure is costly, governments at various levels have tried to implement social health insurance schemes in which a trust calculates insurance premiums and medical payments. Typically, qualifying families get full subsidization of the premium and gain access to private hospitals, in addition to low-cost public hospitals, for their tertiary care needs. We analyze one such pioneering social insurance scheme in the Indian state of Andhra Pradesh (AP). The Rajiv Aarogyasri program (RA) was introduced by the Government of AP on a pilot basis in 2007 and implemented in 2008. In this paper, we first examine the extent to which access to reproductive health care changed. For example, the RA scheme reimburses hospital deliveries, leading us to expect an increase in institutional deliveries, particularly in private hospitals. Second, we expect an increase in institutional deliveries to also improve child health outcomes. Hence, we estimate whether the program reduced infant and child mortality. We use District Level Health Survey data to create annual birth cohorts from 2000-2015. Since AP was the only state in which such a state insurance program was implemented, the neighboring states constitute a plausible control group. Combined with the policy timing and the year of birth, we employ a difference-in-differences strategy to identify the effects of RA on the residents of AP. We perform several checks against threats to identification, including testing for pre-treatment trends between the treatment and control states. We find that the policy significantly lowered infant and child mortality in AP. We also find that deliveries in private hospitals increased and deliveries in government hospitals decreased, showing a substitution effect of the relative price change. Finally, as expected, out-of-pocket costs declined for the treatment group. However, we do not find any significant effects for usual preventive care such as vaccination, showing that the benefits of insurance schemes targeted at the tertiary level may not trickle down to the primary care level.
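A minimal sketch of the difference-in-differences specification described above is given below. It is not the authors' code; the data file, column names, and controls are hypothetical placeholders, assuming a birth-cohort panel with treatment, post-period, and outcome indicators.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns in a birth-cohort data frame built from DLHS-style data:
#   infant_death  - 1 if the child died in infancy, 0 otherwise
#   treated_state - 1 for Andhra Pradesh, 0 for neighbouring control states
#   post          - 1 for birth cohorts after the 2008 rollout of Rajiv Aarogyasri
#   birth_year, state - used for fixed effects and clustered standard errors
df = pd.read_csv("birth_cohorts.csv")  # hypothetical file name

model = smf.ols(
    "infant_death ~ treated_state * post + C(birth_year) + C(state)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# The coefficient on the interaction term is the DiD estimate of the
# program's effect on infant mortality.
print(model.params["treated_state:post"])
```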

Keywords: public health insurance, maternal and child health, public-private choice

Procedia PDF Downloads 56
544 A webGIS Methodology to Support Sediments Management in Wallonia

Authors: Nathalie Stephenne, Mathieu Veschkens, Stéphane Palm, Christophe Charlemagne, Jacques Defoux

Abstract:

According to Europe’s first River Basin Management Plans (RBMPs), 56% of European rivers failed to achieve the good status targets of the Water Framework Directive (WFD). In Central European countries such as Belgium, even more than 80% of rivers failed to achieve the WFD quality targets. Although the RBMPs should reduce the stressors and improve water body status, their potential to address multiple stress situations is limited due to insufficient knowledge on combined effects, multi-stress, prioritization of measures, impact on ecology, and implementation effects. This paper describes a webGIS prototype developed for the Walloon administration to improve the communication and the management of sediment dredging actions carried out in rivers and lakes in the frame of the RBMPs. A large number of stakeholders are involved in the management of rivers and lakes in Wallonia. They are in charge of technical aspects (clients and dredging operators, organizations involved in the treatment of waste…), management (managers involved in WFD implementation at communal, provincial or regional level) or policy making (people responsible for policy compliance or legislation revision). These different kinds of stakeholders need different information and data to cover their duties but have to interact closely at different levels. Moreover, information has to be shared between them to improve the management quality of dredging operations within the ecological system. Under Walloon legislation, leveling dredged sediments on banks requires an official authorization from the administration. This request refers to spatial information such as the official land use map, the cadastral map, and the distance to potential pollution sources. The production of a collective geodatabase can facilitate the management of these authorizations from both sides. The proposed internet system integrates documents, data input, integration of data from disparate sources, map representation, database queries, analysis of monitoring data, presentation of results, and cartographic visualization. A prototype web application using the geoviewer API chosen by the Geomatics department of the SPW has been developed and discussed with some potential users to facilitate the communication, the management, and the quality of the data. The structure of the paper states the why, what, who, and how of this communication tool.
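One of the spatial checks behind such an authorization request (the distance from a proposed leveling site to potential pollution sources) could be sketched as follows. This is not the project's code; the shapefile names and the 100 m threshold are hypothetical placeholders.

```python
import geopandas as gpd

# Minimal sketch: is a proposed sediment-leveling parcel far enough from any
# potential pollution source? Assumes a projected CRS in metres.
parcels = gpd.read_file("cadastral_parcels.shp")
sources = gpd.read_file("pollution_sources.shp").to_crs(parcels.crs)

# Distance from each parcel to the nearest potential pollution source.
parcels["dist_to_source_m"] = parcels.geometry.apply(
    lambda geom: sources.distance(geom).min()
)
# Flag parcels that satisfy a hypothetical 100 m buffer requirement.
parcels["authorisation_flag"] = parcels["dist_to_source_m"] >= 100.0

print(parcels[["dist_to_source_m", "authorisation_flag"]].head())
```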

Keywords: sediments, web application, GIS, rivers management

Procedia PDF Downloads 388
543 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor vital signs of patients, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and vital signs of patients to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital signs monitoring. Sepsis is an organic dysfunction caused by a dysregulated response of the patient to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Previous works usually combined medical, statistical, mathematical, and computational models to develop detection methods for early prediction, aiming at higher accuracies with the smallest number of variables. Among other techniques, studies using survival analysis, expert systems, machine learning, and deep learning have reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated as the median of all patients’ variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior. We then construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The prediction accuracy 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are being collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
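A minimal sketch of the modeling idea is shown below, assuming hypothetical shapes, hyperparameters, and placeholder data rather than the authors' MIMIC pipeline: the hourly vital-sign positions are augmented with first and second discrete derivatives and fed to an LSTM classifier.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical dimensions and placeholder data standing in for hourly vitals.
n_patients, n_hours, n_vitals = 256, 24, 6
vitals = np.random.rand(n_patients, n_hours, n_vitals)
labels = np.random.randint(0, 2, size=(n_patients, 1))  # 1 = sepsis within horizon

# First and second discrete derivatives along the time axis (velocity, acceleration).
velocity = np.diff(vitals, n=1, axis=1, prepend=vitals[:, :1, :])
acceleration = np.diff(velocity, n=1, axis=1, prepend=velocity[:, :1, :])
features = np.concatenate([vitals, velocity, acceleration], axis=-1)

# Simple LSTM classifier over the augmented sequences.
model = models.Sequential([
    layers.Input(shape=(n_hours, 3 * n_vitals)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, batch_size=32, verbose=0)
```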

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 101
542 Degradation and Detoxification of Tetracycline by Sono-Fenton and Ozonation

Authors: Chikang Wang, Jhongjheng Jian, Poming Huang

Abstract:

Among a wide variety of pharmaceutical compounds, tetracycline antibiotics are one of the largest groups extensively used in human and veterinary medicine to treat and prevent bacterial infections. Because tetracycline is water soluble, biologically active, stable, and bio-refractory, its release to the environment threatens aquatic life and increases the risk posed by antibiotic-resistant pathogens. In practice, due to its antibacterial nature, tetracycline cannot be effectively destroyed by traditional biological methods. Hence, in this study, two advanced oxidation processes, ozonation and sono-Fenton, were conducted individually to degrade tetracycline and to investigate their feasibility for tetracycline degradation. The effects of operational variables on tetracycline degradation, the release of nitrogen, and the change of toxicity were also investigated. The initial tetracycline concentration was 50 mg/L. To evaluate the efficiency of tetracycline degradation by ozonation, ozone gas was produced by an ozone generator (Model LAB2B, Ozonia) and introduced into the reactor at different flows (25 - 500 mL/min), pH levels (pH 3 - pH 11), and reaction temperatures (15 - 55°C). In the sono-Fenton system, an ultrasonic transducer (Microson VCX 750, USA) operated at 20 kHz was combined with H₂O₂ (2 mM) and Fe²⁺ (0.2 mM), and experiments were carried out at different pH levels (pH 3 - pH 11), aeration gases and flows (air and oxygen; 0.2 - 1.0 L/min), tetracycline concentrations (10 - 200 mg/L), reaction temperatures (15 - 55°C), and ultrasonic powers (25 - 200 Watts). Ultrasound alone was ineffective for tetracycline degradation, with degradation efficiencies lower than 10% after 60 min of reaction. The contribution of Fe²⁺ and H₂O₂ to the degradation of tetracycline was significant: the maximum tetracycline degradation efficiency in the sono-Fenton process was as high as 91.3%, followed by 45.8% mineralization. The effect of initial pH on tetracycline degradation was insignificant from pH 3 to pH 6, but the efficiency significantly decreased at pH greater than 7. Increasing the ultrasonic power slightly increased the degradation efficiency of tetracycline, which indicates that hydroxyl radicals dominated the oxidation of tetracycline. The effects of aeration with air or oxygen at different flows and of reaction temperature were insignificant. Ozonation showed better efficiencies in tetracycline degradation, where the optimum reaction condition was found at pH 3, 100 mL O₃/min, and 25°C, with 94% degradation and 60% mineralization. The toxicity of tetracycline was significantly decreased due to its mineralization. In addition, less than 10% of the nitrogen content was released to the solution phase as NH₃-N, and most of the degraded tetracycline could not be fully mineralized to CO₂. The results of this study indicate that both the sono-Fenton process and ozonation can effectively degrade tetracycline and reduce its toxicity under favorable conditions. The costs of the two systems need to be further investigated to assess their feasibility for tetracycline degradation.

Keywords: degradation, detoxification, mineralization, ozonation, sono-Fenton process, tetracycline

Procedia PDF Downloads 246
541 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they utilise the existing MCMC results and thus avoid expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of predictive densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by modified truncated weights when calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student’s t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
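The importance-sampling construction described above can be sketched as follows, assuming a placeholder matrix of pointwise log predictive densities rather than the study's actual MCMC output; this is a generic IS-LOO computation, not the study's code.

```python
import numpy as np
from scipy.special import logsumexp

# `log_lik` is an (S draws x N observations) matrix of pointwise log predictive
# densities log p(y_i | theta_s), filled here with placeholder values.
S, N = 4000, 100
log_lik = np.random.randn(S, N) - 1.0   # hypothetical MCMC output

# Raw importance weights: reciprocals of the pointwise predictive densities.
log_weights = -log_lik

# IS-LOO pointwise predictive density: weighted average of the predictive
# densities over posterior draws, computed on the log scale for stability.
loo_lppd_i = (logsumexp(log_weights + log_lik, axis=0)
              - logsumexp(log_weights, axis=0))
elpd_is_loo = loo_lppd_i.sum()

# TIS-LOO would cap the largest raw weights at a truncation value before the
# same weighted average; PSIS-LOO instead smooths the weight tail with a
# generalized Pareto fit.
print(elpd_is_loo)
```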

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 371
540 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important

Authors: Eleni Karasavvidou

Abstract:

Social and anthropological research, parallel to Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field of interaction and a record of 'social trends', since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This 'mirage', however, has to do not only with the representations themselves but also with the ways they are received and with the filmic or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se', that legitimizes, reinforces, rewards, and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. This creates the need for the issue to be raised (also) in academic research questioning the gender criteria in film reviews, as part of the effort towards an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). In addition to the measurement of the overall representation time by gender, other qualitative characteristics were analyzed, such as speaking time, key sayings or actions, and the overall quality of a character's action in relation to the development of the scenario and to social representations in general, as well as quantitative ones (the insufficient number of female lead roles, fewer key supporting roles, relatively few female directors and people in the production chain, and how these might affect screen representations). The quantitative analysis in this study was used to complement the qualitative content analysis. The focus then shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or allegedly overturned within the framework of an apolitical 'identity politics' that mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One of the prime examples of this failure is the Bechdel Test, which tracks whether female characters speak in a film regardless of whether women's stories are represented in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same holds for the criteria of criticism and the related interventions.

Keywords: representations, context analysis, reviews, sexist stereotypes

Procedia PDF Downloads 59
539 The Assessment of Infiltrated Wastewater on the Efficiency of Recovery Reuse and Irrigation Scheme: North Gaza Emergency Sewage Treatment Project as a Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

Part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use, and this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority strategy is to develop a non-conventional water resource from treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning. Nearly 23 Mm3 of partially treated wastewater have been infiltrated into the aquifer. Phases B and C have witnessed many delays, and this forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, in order to measure the efficiency of the SAT system and the spread of the contamination plume in relation to the efficiency of the proposed RRS and the proposed locations of the 27 recovery wells that form part of it. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model to simulate the plume and propose the most suitable solution to the delays. The redesign mainly manipulated the pumping rates of the wells, the proposed locations, and the operating schedules (including well groupings). The proposed simulations were examined using Visual MODFLOW V4.2. The results were assessed based on the location of the monitoring wells relative to the proposed recovery well locations (200 m, 500 m, and 750 m away from the IBs). Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) together with a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating an expansion of the plume to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on this, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates, and the locations of the recovery wells. Plume expansion and path-lines were extracted from the model to assess how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells in a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.
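As a back-of-the-envelope illustration of why well locations and pumping rates govern plume capture, the standard analytical capture-zone result for a single well in uniform regional flow can be used; the sketch below is not the study's MODFLOW model, and all parameter values are hypothetical placeholders.

```python
import math

# Capture-zone check for a single recovery well in uniform regional flow
# (standard analytical result). All values are hypothetical, not site data.
Q = 500.0          # pumping rate [m^3/day]
K = 20.0           # hydraulic conductivity [m/day]
i = 0.003          # regional hydraulic gradient [-]
b = 30.0           # saturated aquifer thickness [m]

q = K * i                                         # regional Darcy flux [m/day]
max_capture_width = Q / (b * q)                   # asymptotic capture width [m]
stagnation_distance = Q / (2 * math.pi * b * q)   # downgradient stagnation point [m]

print(f"capture width  ~ {max_capture_width:.0f} m")
print(f"stagnation pt. ~ {stagnation_distance:.0f} m downgradient")
# Recovery wells spaced more closely than the capture width (at a given rate)
# are needed for the recovery line to intercept the whole plume.
```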

Keywords: soil aquifer treatment, recovery reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 185
538 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria

Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo

Abstract:

This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m were obtained from the Nigeria Meteorological Department, Oshodi. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that the monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. The yearly average ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. The power density for each site was determined to range between 29.66 W/m2 for Bida and 864.96 W/m2 for Jos. The two parameters of the Weibull distribution were found to range between 2.3 in Lokoja and 6.5 in Jos for k, while c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes point to the fact that wind speeds at Jos, Minna, Ilorin, Makurdi, and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity conversion at and above the height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located at each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school, and a community health center. An assessment of the design that would optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01 was performed, considering two stand-alone applications: wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison since the experimental locations have no connection to a distribution network. The HOMER® optimization software was utilized to determine the optimal combination of system components that yields the lowest life cycle cost. Following the analysis for rural community utilization, a Distributed Generation (DG) analysis was carried out for each site, considering the possibility of generating wind power in the MW range in order to take advantage of Nigeria’s tariff regime for embedded generation. The DG design incorporated each community of 200 homes, catered for free of charge and offset against the excess electrical energy generated above the minimum requirement for sale to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy, in terms of life cycle cost and the levelised cost of producing energy, at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will enhance fast upgrade of the rural communities.
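A minimal sketch of the Weibull fitting and power-density calculation described above is given below, using a synthetic wind-speed sample rather than the station data; it is not the study's code.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

# Synthetic hourly wind-speed sample at 10 m (placeholder, not station data).
rng = np.random.default_rng(0)
wind_speed = weibull_min.rvs(2.0, scale=6.0, size=8760, random_state=rng)  # m/s

# Fit the two-parameter Weibull (location fixed at zero) to recover shape k and scale c.
k, _, c = weibull_min.fit(wind_speed, floc=0)

# Mean wind power density P/A = 0.5 * rho * c^3 * Gamma(1 + 3/k).
rho = 1.225                                           # air density [kg/m^3]
power_density = 0.5 * rho * c**3 * gamma(1 + 3 / k)   # [W/m^2]

print(f"k = {k:.2f}, c = {c:.2f} m/s, P/A = {power_density:.1f} W/m2")
```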

Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria

Procedia PDF Downloads 487
537 Relationship between Functional Properties and Supramolecular Structure of the Poly(Trimethylene 2,5-Furanoate) Based Multiblock Copolymers with Aliphatic Polyethers or Aliphatic Polyesters

Authors: S. Paszkiewicz, A. Zubkiewicz, A. Szymczyk, D. Pawlikowska, I. Irska, E. Piesowicz, A. Linares, T. A. Ezquerra

Abstract:

Over the last century, the world has become increasingly dependent on oil as its main source of chemicals and energy. Driven largely by the strong economic growth of India and China, demand for oil is expected to increase significantly in the coming years. This growth in demand, combined with diminishing reserves, will require the development of new, sustainable sources of fuels and bulk chemicals. Biomass is an attractive alternative feedstock, as it is a widely available carbon source apart from oil and coal. Nowadays, academic and industrial research in the field of polymer materials is strongly oriented towards bio-based alternatives to petroleum-derived plastics with enhanced properties for advanced applications. In this context, 2,5-furandicarboxylic acid (FDCA), a biomass-based chemical product derived from lignocellulose, is one of the bio-based building blocks with the highest potential for polymers and the first candidate to replace petro-derived terephthalic acid. FDCA has been identified as one of the top 12 chemicals of the future and may be used as a platform chemical for the synthesis of biomass-based polyesters. The aim of this study is to synthesize and characterize multiblock copolymers containing rigid segments of poly(trimethylene 2,5-furanoate) (PTF) and soft segments of poly(tetramethylene oxide) (PTMO), with excellent elastic properties, or of the aliphatic polyester polycaprolactone (PCL). Two series of PTF-based copolymers, i.e., PTF-block-PTMO-T and PTF-block-PCL-T, with different contents of flexible segments were synthesized by means of a two-step melt polycondensation process and characterized by various methods. The rigid PTF segments, as well as the flexible PTMO or PCL ones, were randomly distributed along the chain. On the basis of 1H NMR, SAXS, WAXS, DSC, and DMTA results, one can conclude that the two phases were thermodynamically immiscible and that the phase transition temperatures varied with the composition of the copolymer. The copolymers containing 25, 35, and 45 wt.% of flexible segments (PTMO) exhibited elastomeric characteristics. Moreover, the temperatures corresponding to 5%, 25%, 50%, and 90% mass loss, as well as the values of tensile modulus, decrease with increasing content of aliphatic polyether or aliphatic polyester in the composition.

Keywords: furan based polymers, multiblock copolymers, supramolecular structure, functional properties

Procedia PDF Downloads 107
536 iPSCs More Effectively Differentiate into Neurons on PLA Scaffolds with High Adhesive Properties for Primary Neuronal Cells

Authors: Azieva A. M., Yastremsky E. V., Kirillova D. A., Patsaev T. D., Sharikov R. V., Kamyshinsky R. A., Lukanina K. I., Sharikova N. A., Grigoriev T. E., Vasiliev A. L.

Abstract:

The adhesive properties of scaffolds, which depend predominantly on the chemical and structural features of their surface, play the most important role in tissue engineering. The basic requirements for such scaffolds are biocompatibility, biodegradability, and high cell adhesion, which promotes cell proliferation and differentiation. In many cases, synthetic polymer scaffolds have proven advantageous because they are easy to shape, tough, and have high tensile properties. The regeneration of nerve tissue still remains a big challenge for medicine, and neural stem cells provide promising therapeutic potential for cell replacement therapy. However, experiments with stem cells have their limitations, such as a low level of cell viability and poor control of cell differentiation, whereas the study of already differentiated neuronal cell cultures obtained from newborn mouse brain is limited mainly to cell adhesion. The growth and implantation of neuronal cultures require proper scaffolds, and polymer scaffold implants with neuronal cells may demand a specific morphology. To date, numerous synthetic polymers have been proposed for these purposes, including polystyrene, polylactic acid (PLA), polyglycolic acid, and polylactide-glycolic acid. Tissue regeneration experiments demonstrated good biocompatibility of PLA scaffolds despite the hydrophobic nature of the compound. The problem of poor wettability of the PLA scaffold surface can be overcome in several ways: the surface can be pre-treated with poly-D-lysine or polyethyleneimine peptides; the roughness and hydrophilicity of the PLA surface can be increased by plasma treatment; or PLA can be combined with natural fibers, such as collagen or chitosan. This work presents a study of the adhesion of both induced pluripotent stem cells (iPSCs) and mouse primary neuronal cell culture on polylactide scaffolds of various types: oriented and non-oriented fibrous nonwoven materials and sponges, with and without plasma treatment, and composites with collagen and chitosan. To evaluate the effect of different types of PLA scaffolds on the neuronal differentiation of iPSCs, we assess the expression of NeuN in differentiated cells through immunostaining. iPSCs differentiate into neurons more effectively on PLA scaffolds with high adhesive properties for primary neuronal cells.

Keywords: PLA scaffold, neurons, neuronal differentiation, stem cells, polylactide

Procedia PDF Downloads 57
535 The Impact of the Covid-19 Pandemic on Marine-Wildlife Tourism in Massachusetts, United States

Authors: K. C. Bloom, Cynde McInnis

Abstract:

The Covid-19 pandemic has caused immense changes in the way that we live, work, and travel. The impact of these changes is readily apparent in tourism to Massachusetts and the New England region. Whereas Massachusetts and New England are, in general, a hotspot for travelers from around the world, this form of travel was largely shut down due to the pandemic. One area where the impact has been felt is marine-based wildlife tourism. Massachusetts is home not only to whales but also to seals and great white sharks. Prior to the pandemic, whale watching had long been a popular activity, while seal and shark tourism had been a developing one. Given that seeing a great white shark was rare in New England for many years, shark tourism had not played a role in the economies of the region until recently. While whales have steadily been found within the marine environments of Massachusetts and whale watching has been a popular attraction since the mid-1970s, the lack of great white sharks in New England was, in part, a response to a change in their environment: a favorite food source, the gray seal, was culled by regional fishermen who believed that seals were taking their catch. This retaliatory behavior ended when the Marine Mammal Protection Act of 1972 (MMPA) was passed. The MMPA prohibited the killing of seals, and since then the seal population has increased to traditional numbers (Tech Times, 2014). Given the increase in the seal population in New England, and especially on Cape Cod, Massachusetts, there has been a similar increase in the number of great white sharks. In fact, between 2004 and 2014, the number of sightings increased from an average of two per year to more than 20 (NY Post, 7/21/14). This has increased even more over the last six years. As a result, residents and businesses in Massachusetts have begun to embrace the great whites as a potential tourism draw. Local business owners are considering opening cage diving and shark viewing businesses, and there has also been an increase in shark-related merchandise throughout the Cape Cod region. Combined with a large whale watching industry, marine-based wildlife tourism is big business for Massachusetts. With the Covid-19 pandemic shuttering international travel, this study examines the impacts of the pandemic on this industry. Through interviews with marine-based wildlife tourism businesses as well as survey data collected from visitors, this study looks at the holistic impacts of the Covid-19 pandemic on an important part of the state's marine tourism industry.

Keywords: marine tourism, ecotourism, Covid, wildlife

Procedia PDF Downloads 140