Search results for: operator pool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 725


95 The Impact of COVID-19 on Italian Tourism: the Current Scenario, Opportunity and Future Tourism Organizational Strategies

Authors: Marco Camilli

Abstract:

This article examines the impact of the COVID-19 pandemic outbreak on the tourism sector in Italy, analyzing the current scenario, government decisions, and the reactions of private companies for the 2020 summer season. The data analyzed show how massive the impact of the pandemic outbreak on tourism revenue is, and how weak the measures proposed are. Introduction: The current COVID-19 scenario is a shocking one for the tourism and transportation sectors, which could be the most affected by the coronavirus in Italy. According to forecasts, depending on the duration of the epidemic outbreak and the lockdown strategy applied by the government, businesses in the supply chain could lose between 24 and 66 billion euros in turnover in the period 2020-21, with huge, diversified impacts at the national and regional levels. Many tourism companies are on the verge of survival and, without massive government measures, they risk closure. Data analysis: The tourism and transport sector could be among the sectors most damaged by COVID-19 in Italy. Considering the two-year period 2020-21, companies operating in the travel and tourism sector (tour operators, travel agencies, hotels, guides, bus companies, etc.) could suffer losses in revenues of 24 to 64 billion euros, especially in sectors such as travel agencies, hotels and rentals. In April 2020, the Statista Research Department estimated that the coronavirus (COVID-19) pandemic would have a significant impact on the revenues of the tourism industry in Italy. Revenues were expected to decrease by over 40 billion euros in the first semester of 2020, compared to the same period of the previous year. According to the study, hotel and non-hotel accommodations will experience the highest loss. Revenues of this sector are expected to decrease by 13 billion euros compared to the first semester of 2019, when accommodations registered revenues of about 17 billion euros. According to Statista.com, in 2020 Italy is expected to register a decrease of roughly 28.5 million tourist arrivals due to the impact of coronavirus (COVID-19) on the country's tourism sector. According to the estimate, the region of Veneto will record the highest drop, with a decrease of roughly 4.61 million arrivals. Similarly, Lombardy is expected to register a decrease of about 3.87 million arrivals in 2020.

Keywords: travel and tourism, sustainability, COVID-19, businesses, transportation

Procedia PDF Downloads 172
94 Anti-DNA Antibodies from Patients with Schizophrenia Hydrolyze DNA

Authors: Evgeny A. Ermakov, Lyudmila P. Smirnova, Valentina N. Buneva

Abstract:

Schizophrenia is associated with dysregulation of neurotransmitter processes in the central nervous system and with disturbances in the humoral immune system resulting in the formation of antibodies (Abs) to various components of the nervous tissue. Abs to different neuronal receptors and to DNA have been detected in the blood of patients with schizophrenia. Abs hydrolyzing DNA have been detected in the pool of polyclonal autoantibodies in autoimmune and infectious diseases; such catalytic Abs were named abzymes. It is believed that DNA-hydrolyzing abzymes are cytotoxic, cause nuclear DNA fragmentation, and induce cell death by apoptosis. Abzymes with DNase activity are interesting because of their mechanism of formation and the possibility of using them as diagnostic markers. Therefore, in our work we set the following goals: to determine the level of anti-DNA Abs in the serum of patients with schizophrenia and to study the DNA-hydrolyzing activity of IgG of patients with schizophrenia. Materials and methods: Our study included 41 patients with a verified diagnosis of paranoid or simple schizophrenia and 24 healthy donors. Electrophoretically and immunologically homogeneous IgGs were obtained by sequential affinity chromatography of the serum proteins on protein G-Sepharose and gel filtration. The levels of anti-DNA Abs were determined using ELISA. DNA-hydrolyzing activity was detected as the level of transition of supercoiled pBluescript DNA into circular and linear forms; the hydrolysis products were analyzed by agarose electrophoresis followed by ethidium bromide staining. To attribute the registered catalytic activity directly to the antibodies, we applied a number of strict criteria: electrophoretic homogeneity of the antibodies, gel filtration (acid shock analysis) and in situ activity. Statistical analysis was performed in ‘Statistica 9.0’ using the non-parametric Mann-Whitney test. Results: The sera of approximately 30% of schizophrenia patients displayed a higher level of Abs interacting with single-stranded (ssDNA) and double-stranded DNA (dsDNA) compared with healthy donors. The average level of Abs interacting with ssDNA was only 1.1-fold lower than that of Abs interacting with dsDNA. IgG of patients with schizophrenia were shown to possess DNA-hydrolyzing activity. Using affinity chromatography, electrophoretic analysis of the homogeneity of the isolated IgG, gel filtration under acid shock conditions and in situ DNase activity analysis, we proved that the observed activity is an intrinsic property of the studied antibodies. We have shown that the relative DNase activity of IgG in patients with schizophrenia averaged 55.4±32.5%, while IgG of healthy donors showed much lower activity (average of 9.1±6.5%). It should be noted that the DNase activity of IgG in patients with schizophrenia with negative symptoms was significantly higher (73.3±23.8%) than in patients with positive symptoms (43.3±33.1%). Conclusion: Anti-DNA Abs of patients with schizophrenia not only bind DNA but also quite efficiently hydrolyze the substrate. The data show a correlation between the level of DNase activity and the leading symptoms of patients with schizophrenia.

Keywords: anti-DNA antibodies, abzymes, DNA hydrolysis, schizophrenia

Procedia PDF Downloads 303
93 The Four Elements of Zoroastrianism and Sustainable Ecosystems with an Ecological Approach

Authors: Esmat Momeni, Shabnam Basari, Mohammad Beheshtinia

Abstract:

The purpose of this study is to provide a symbolic explanation of the four elements in Zoroastrianism and sustainable ecosystems with an ecological approach. The research is fundamental in nature and uses deductive content analysis. Data collection was carried out through library and documentary methods and through reading books and related articles. The population and sample of the present study are the city of Yazd and the country of Iran; after discovering the symbolic concepts derived from the theoretical foundations of Zoroastrianism in the four elements of water, air, soil and fire, and their conformity with Iranian architecture under the ecological approach in the city of Yazd, the sustainable ecosystem is explained by the system of nature. The validity and reliability of the results are based on the trustworthiness of the research literature. Research findings show that Yazd was one of the bases of Zoroastrianism in Iran. Many believe that the first person to discuss the elements of nature and respect for them among Zoroastrians was the prophet of this religion, and that the environment is kept clean and pure by paying attention to and respecting these four elements. The water element is a symbol of existence in Zoroastrianism, so the people of Yazd used the aqueduct and designed a pool in front of the building. The soil element is a symbol of the raw material of human creation in the Zoroastrian religion; as the most readily available material in the desert areas of Yazd, it is used as bricks and adobe, creating one of the most magnificent roof coverings, the dome. The wind element represents the invisible force of the soul in creation in Zoroastrianism; the most important application of wind is in the windcatcher, a highly efficient cooling system. The element of fire, which is always a symbol of purity in Zoroastrianism, is housed in a special place in Yazd's Ataskadeh (fire temple), where the most important religious prayers are held in front of the fire. Consequently, indigenous knowledge and attention to indigenous architecture are part of the national capital of each nation, encompassing its beliefs, values, methods and knowledge. According to studies on the four elements of Zoroastrianism, the link between these four elements is that fire, hot and dry at the beginning, initiates movement in the stillness of the earth; from the heat of the fire and the decrease of its vigor, cold (wind) emerges, and from cold come humidity and wetness. By examining books and resources on Yazd's architectural design with an ecological approach inspired by the values of the four elements of Zoroastrianism, it can be concluded that, in order to have environmentally friendly architecture, it is essential to use sustainable architectural principles and to link religious and sacral culture with ecology through architecture.

Keywords: ecology, architecture, quadruple elements of air, soil, water, fire, Zoroastrian religion, sustainable ecosystem, Iran, Yazd city

Procedia PDF Downloads 91
92 Correlations and Impacts of Optimal Rearing Parameters on Nutritional Value of Mealworm (Tenebrio molitor)

Authors: Fabienne Vozy, Anick Lepage

Abstract:

Insects display high nutritional value, low greenhouse gas emissions, low land use requirements and high feed conversion efficiency. They can contribute to the food chain and be one of many solutions to protein shortages. Currently, in North America, nutritional entomology is under-developed, and its benefits need to be better understood in order to convince large-scale producers and consumers (for both human and agricultural needs). As such, large-scale production of mealworms offers a promising alternative for replacing traditional sources of protein and fatty acids. To proceed in an orderly manner, more data on the nutritional values of insects must be collected, namely to: a) evaluate the diets of insects to improve their dietary value; b) test breeding conditions to optimize yields; c) evaluate the use of by-products and organic residues as food sources. Among the featured technical parameters, relative humidity (RH) percentage and temperature, optimal substrates and hydration sources are critical elements, thus establishing potential benchmarks for optimizing the conversion rates of protein and fatty acids. This research aims to establish the combination of the most influential rearing parameters with local food residues and to correlate the findings with the nutritional value of the harvested larvae. For each replicate, 125 adults of the same monthly age cohort are randomly selected from the mealworm breeding pool and placed to oviposit in growth chambers preset at 26°C and 65% RH. Adults are removed after 7 days. Larvae are harvested upon the appearance of the first signs of nymphosis, and batches are analyzed for their nutritional values using wet chemistry analysis. The first sample analyses include the total weight of both fresh and dried larvae, residual humidity, crude proteins (CP%) and crude fats (CF%). Further analyses are scheduled to include soluble proteins and fatty acids. Although consistent with previously published data, the preliminary results show no significant differences between treatments for any type of analysis. The nutritional properties of each substrate combination have not yet allowed the most effective residue recipe to be identified. Technical issues such as the particle size of the various substrate combinations and its compatibility with the larvae screen are to be investigated, since they induced a variable percentage of lost larvae upon harvesting. Addressing those methodological issues is key to developing a standardized, efficient procedure. The aim is to provide producers with easily reproducible conditions, without incurring excessive additional expenditure on their part in terms of equipment and workforce.

Keywords: entomophagy, nutritional value, rearing parameters optimization, Tenebrio molitor

Procedia PDF Downloads 87
91 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of humankind's development, we are no longer able to detect the toxins produced in food products using traditional techniques: the isolation time is not cost-effective for food products, and in most cases the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the mixtures used. Hence, with the advent of nanotechnology, the design of selective and smart sensors is one of the great industrial revolutions in the quality control of food products, able to identify the amount and toxicity of bacteria within a few minutes and with very high precision. Methods and Materials: In this technique, a sensor based on the attachment of a bacterial antibody to a nanoparticle was used. In this part of the research, silica nanoparticles of medium size (10 nm), in the form of a solid powder (Notrino brand), were utilized as the absorption basis for the recognition of the bacterial toxin. The suspension produced from the agent-linked nanosilica, conjugated to the bacterial antibody, was then placed near samples of distilled water contaminated with Staphylococcus aureus toxin at a dilution of 10⁻³, so that, should any toxin exist in the sample, a connection between the toxin antigen and the antibody would be formed. Finally, the light absorption related to the binding of the antigen to the particle-attached antibody was measured using spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored using a serial dilution (10⁻⁶) of an overnight cell culture of Staphylococcus spp. (OD600: 0.02 ≈ 10⁷ cells). It showed that the sensitivity of PCR is 10 bacteria per ml within a few hours. Results: The results indicate that the sensor detects down to a 10⁻⁴ dilution. Additionally, the sensitivity of the sensors was examined after 60 days; the sensor gave confirmatory results up to day 56, after which its performance started to decrease. Conclusions: The advantages of the practical nanobiosensor over conventional methods such as culture and biotechnological methods (e.g., polymerase chain reaction) are its accuracy, sensitivity and specificity. In addition, it reduces the detection time from hours to 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 365
90 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DES) and ionic liquids (IL), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application’s conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecyl phosphonium bis(trifluoromethylsulfonyl) imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1% respectively, compared to toluene’s 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical properties, thermal properties, critical properties, and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%) with a remarkable equilibrium time of only 2 minutes, and, most notably, demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes and their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. This exceptional efficacy of the newly developed neoteric solvents signifies a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
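
As an illustration of the error metric referred to above, the short Python sketch below computes the absolute average relative deviation (AARD) between measured and model-predicted extraction efficiencies. It is a minimal sketch; the two data arrays are hypothetical and are not taken from the study.

# Minimal sketch of the absolute average relative deviation (AARD) used to
# compare the MNLR and ANN extraction-efficiency models; both arrays are
# hypothetical illustrations, not experimental values from the paper.
import numpy as np

def aard_percent(measured, predicted):
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((predicted - measured) / measured))

measured  = [94.1, 92.3, 88.7, 95.0, 90.2]   # experimental extraction efficiency, %
predicted = [93.0, 93.5, 87.1, 95.8, 89.0]   # model output, %
print(f"AARD = {aard_percent(measured, predicted):.2f}%")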

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 39
89 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review

Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed

Abstract:

Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all supported laparoscopic use rather than its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on the PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series were included. Primary outcomes included estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The “meta” and “metafor” packages were used in R, version 4.0.2, to pool risk ratios (RR) or standardized mean differences (SMD), in addition to their 95% confidence intervals, in the random-effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² test, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 was from a Japanese center. Mean age across study cohorts ranged from 2.4 to 5.3 years old, with female patients comprising between 30.8% and 50% of the study populations. No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared to be protective in the analysis for EBL, although it did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, the results showed it was non-superior to OT for any of the explored perioperative outcomes. Given the limited comparative data on the subject, it is evident that randomized trials are necessary to strengthen the conclusions reached.
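
For readers unfamiliar with the pooling step described above, the following minimal Python sketch illustrates random-effects pooling of risk ratios with DerSimonian-Laird between-study variance, the kind of calculation the authors performed in R with the “meta” and “metafor” packages. The event counts are hypothetical, and the simpler inverse-variance weighting stands in here for the Mantel-Haenszel scheme used in the study.

# Minimal sketch: inverse-variance random-effects pooling of risk ratios
# (DerSimonian-Laird). All counts below are hypothetical placeholders.
import numpy as np
from scipy import stats

# events / totals per study (laparoscopy vs. open thoracotomy)
e_ls = np.array([1, 2, 0, 1, 1]); n_ls = np.array([10, 15, 12, 16, 15])
e_ot = np.array([4, 5, 3, 4, 5]);  n_ot = np.array([30, 28, 25, 30, 28])

corr = ((e_ls == 0) | (e_ot == 0)) * 0.5      # 0.5 continuity correction for zero cells
a, b = e_ls + corr, n_ls - e_ls + corr
c, d = e_ot + corr, n_ot - e_ot + corr

log_rr = np.log((a / (a + b)) / (c / (c + d)))
var = 1/a - 1/(a + b) + 1/c - 1/(c + d)

# DerSimonian-Laird between-study variance tau^2
w = 1 / var
mu_fixed = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - mu_fixed) ** 2)
k = len(log_rr)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# random-effects pooled estimate, 95% CI, heterogeneity and p-value
w_re = 1 / (var + tau2)
mu = np.sum(w_re * log_rr) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
ci = np.exp(mu + np.array([-1.96, 1.96]) * se)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100
p = 2 * (1 - stats.norm.cdf(abs(mu / se)))
print(f"pooled RR = {np.exp(mu):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I^2 = {I2:.0f}%, p = {p:.3f}")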

Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy

Procedia PDF Downloads 107
88 Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities

Authors: Mary Hanhoun, Jilla Bamarni, Anne-Sophie Bougard

Abstract:

INSPIR’ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services to industrial companies and territorial planners/managers based on industrial ecology principles. The project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployed on other territories. The Salaise-Sablons area is located at the boundary of 5 departments, on a major European economic axis with multimodal traffic (river, rail and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies, with a total of 900 jobs, and represents a significant potential basin of development. The project involves five multi-disciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory and TREDI). The INSPIR’ECO project is based on the principle that local stakeholders need services to pool and share their activities, equipment, purchases and materials. These services aim to: 1. initiate and promote exchanges between existing companies and 2. identify synergies between pre-existing industries and future companies that could be implemented in INSPIRA. These eco-industrial synergies can be related to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boiler, steam production, wastewater treatment unit, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services rely on an IT tool used by the interested local stakeholders and intended to support their decisions. Thus, this IT tool includes an economic and environmental assessment of each implantation or pooling/sharing scenario for existing or future industries; it is meant for industrial and territorial managers/planners and is designed to be used for each new industrial project. The specification of the IT tool is made through an agile process throughout the INSPIR’ECO project, fed with users' expectations gathered in workshop sessions where mock-up interfaces are displayed, and with data availability based on local and industrial data inventories. These inputs make it possible to specify the tool not only with technical and methodological constraints (notably those of the economic and environmental assessments) but also with data availability and users' expectations. A review of innovative resource management initiatives in port areas was carried out at the beginning of the project to feed the service design step.

Keywords: development opportunities, INSPIR’ECO, INSPIRA, industrial ecology, planification, synergy identification

Procedia PDF Downloads 138
87 A Study on the Current State and Policy Implications of Engineer Operated National Research Facility and Equipment in Korea

Authors: Chang-Yong Kim, Dong-Woo Kim, Whon-Hyun Lee, Yong-Joo Kim, Tae-Won Chung, Kyung-Mi Lee, Han-Sol Kim, Eun-Joo Lee, Euh Duck Jeong

Abstract:

In the past, together with the annual increase in investment in national R&D projects, the government's budget investment in national research facilities and equipment (FE) has been steadily maintained. In major developed countries, R&D and its supporting work are distinguished and professionalized in their own right, to the extent of having a training system for facility and equipment operation and maintenance personnel. In Korea, however, research personnel conduct both research and equipment operation, leading to quantitative shortages of operational manpower and qualitative problems due to insecure employment, such as maintenance issues or the loss of effectiveness of necessary equipment. Therefore, the purpose of this study was to identify the current status of engineer-operated national research FE in Korea based on the results of a 2017 survey of domestic facilities and to suggest policy implications. A total of 395 research institutes that carried out national R&D projects and had registered two or more items of FE since 2005 were surveyed online for two months. The survey showed that the 395 non-profit research facilities were operating 45,155 items of FE with 2,211 operating engineers, meaning that each engineer had to manage about 21 items of FE. Among these engineers, 43.9% were employed in temporary positions, including indefinite-term contracts. Furthermore, the salary and treatment of the engineering personnel were relatively low compared to researchers. In short, engineers who focus exclusively on managing and maintaining FE play a very important role in increasing research immersion and obtaining highly reliable research results. Moreover, institutional efforts and government support for securing operators are severely lacking, as domestic national R&D policies are mostly focused on researchers. The 2017 survey on FE also showed that 48.1% of all research facilities did not even employ engineers. To address the shortage of engineering personnel, the government started a pilot project in 2012, followed by the 'research equipment engineer training project' from 2013. Considering the above, a national long-term manpower training plan that addresses the quantitative and qualitative shortage of operators needs to be established through a study of the current situation. In conclusion, the findings indicate that such a plan should not only connect training to employment but also include measures for the creation of additional jobs by re-defining and re-establishing operator roles and improving working conditions.

Keywords: engineer, Korea, maintenance, operation, research facilities and equipment

Procedia PDF Downloads 167
86 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway

Authors: Marius Nygaard, Ona Flindall

Abstract:

The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated-timber (CLT) was utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and the role of the management company that both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors and the interdisciplinary teams of consultants (architects, structural engineers, acoustical experts etc.), this paper will trace the introduction, expansion and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled it to function both as a liaison between contractor and client, and between contractor and producer, a position that allowed it to improve the flow of information. This ensured that CLT was handled on equal terms to other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies should be introduced in big and well-established industries.

Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market

Procedia PDF Downloads 199
85 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first utilizes a fuzzy sensitivity criterion, exploiting from experimental data the relationship between the physical parameters and all the sensory quality features for each evaluator. Next, an OWA aggregation procedure is applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides their ranking lists of physical features according to their professional knowledge. Then, by applying OWA and a fuzzy aggregation model, the data sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key issue of the proposed percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It permits the automatic generation of a threshold, which effectively reduces the human subjectivity and arbitrariness involved in manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example of a well-known finished textile product, stonewashed denim, usually considered the most important quality criterion in jeans’ evaluation, we separate the relevant physical features from the irrelevant ones for each sensory descriptor. The originality and performance of the proposed relevant feature selection method are shown by the variability in the number of physical features in the set of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method can be adapted to the specific nature of the complex relations between sensory descriptors and physical features, in order to propose lists of relevant features of different sizes for different descriptors. In order to obtain more reliable results for the selection of relevant physical features, the percolation technique has been applied to combine the fuzzy global relevancy and OWA global relevancy criteria, so as to clearly distinguish the scores of the relevant physical features from those of the irrelevant ones.
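
To make the aggregation step concrete, the following minimal Python sketch shows how an ordered weighted averaging (OWA) operator combines the relevancy scores that several evaluators assign to one physical feature. The scores and weight vectors are hypothetical; the paper's fuzzy sensitivity criterion and percolation step are not reproduced here.

# Minimal sketch of an OWA aggregation: weights are applied to the scores
# sorted in descending order, not to specific evaluators.
import numpy as np

def owa(scores, weights):
    s = np.sort(np.asarray(scores, dtype=float))[::-1]   # descending order
    w = np.asarray(weights, dtype=float)
    assert len(w) == len(s) and np.isclose(w.sum(), 1.0)
    return float(np.dot(w, s))

# relevancy of one fabric parameter as scored by 4 evaluators (hypothetical)
evaluator_scores = [0.9, 0.4, 0.7, 0.6]

print(owa(evaluator_scores, [0.4, 0.3, 0.2, 0.1]))       # emphasises the highest scores
print(owa(evaluator_scores, [0.25, 0.25, 0.25, 0.25]))   # uniform weights: arithmetic mean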

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 576
84 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization

Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade

Abstract:

The search for increased efficiency in generation systems has been of great importance in recent years to reduce the impact of greenhouse gas emissions and global warming. For clean energy sources, such as generation systems that use concentrated solar power technology, this efficiency improvement translates into a lower investment per kW, improving the project’s viability. For the specific case of parabolic trough solar concentrators, their performance is strongly linked to the geometric precision of their assembly and the individual efficiencies of their main components, such as the parabolic mirrors and receiver tubes. Thus, for accurate results, the efficiency analysis should be conducted empirically, under mounting and operating conditions like those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator that is being developed by the company itself in partnership with Eudora Energia, seeking to optimize it to obtain the same or better efficiency than the concentrators of this type already known commercially. This test plant is a closed pipe system in which a pump circulates a heat transfer fluid, also called HTF, through the concentrator that is being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters necessary to determine the power absorbed by the system and then calculate its efficiency based on the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that allow the acquired energy to be dissipated into the ambient air. The goal is to keep the concentrator inlet temperature constant throughout the desired test period. The developed plant performs the tests autonomously: the operator must enter the HTF flow rate, the desired concentrator inlet temperature and the test time into the control system. This paper presents the methodology employed for the design and operation of a parabolic trough test plant, as well as the instrumentation needed for its development, serving as a guideline for standardizing such facilities.
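
A minimal Python sketch of the efficiency calculation the plant performs from its instrumentation is given below. All numerical values (HTF heat capacity, aperture area, DNI) are hypothetical and only illustrate the ratio of heat absorbed by the HTF to the direct solar power on the collector aperture.

# Minimal sketch: instantaneous thermal efficiency of a parabolic trough
# collector from flow-meter and inlet/outlet temperature readings.

def collector_efficiency(m_dot, cp, t_in, t_out, dni, aperture_area):
    """Heat absorbed by the HTF divided by the direct solar power on the aperture."""
    q_useful = m_dot * cp * (t_out - t_in)   # W
    q_solar = dni * aperture_area            # W
    return q_useful / q_solar

eta = collector_efficiency(
    m_dot=1.2,           # kg/s, from the flow meter (hypothetical)
    cp=2300.0,           # J/(kg K), thermal oil at operating temperature (hypothetical)
    t_in=150.0,          # deg C, inlet transmitter
    t_out=165.0,         # deg C, outlet transmitter
    dni=850.0,           # W/m^2, direct normal irradiance
    aperture_area=60.0,  # m^2 of mirror aperture (hypothetical)
)
print(f"instantaneous efficiency: {eta:.1%}")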

Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy

Procedia PDF Downloads 94
83 Teamwork on Innovation in Young Enterprises: A Qualitative Analysis

Authors: Polina Trusova

Abstract:

The majority of young enterprises are founded and run by teams and develop new, innovative products or services. While problems within the team are considered to be an important reason for the failure of young enterprises, effective teamwork on innovation may be a key success factor. It may require a special teamwork design or members’ creativity not needed during routine work. However, little is known about how young enterprises develop innovative solutions in teams, what makes their teamwork special, and what influences its effectiveness. Extending this knowledge is essential for understanding the success and failure factors of young enterprises. Previous research has focused on working on innovation or on professional teams in general. The rare studies combining these issues usually concentrate on homogeneous groups like IT expert teams in innovation projects of big, well-established firms. The transferability of those studies’ findings to the entrepreneurial context is doubtful for several reasons, as teamwork should differ significantly between big, well-established firms and young enterprises. First, teamwork is conducted by team members, e.g., employees. The personality of employees in young enterprises, in contrast to that of employees in established firms, has been shown to be more similar to the personality of entrepreneurs. As entrepreneurs were found to be more open to experience and to show less risk aversion, this may have a positive impact on their teamwork. Persons open to novelty are more likely to develop or accept a creative solution, which is especially important for teamwork on innovation. Secondly, young enterprises are often characterized by a flat hierarchy, so in general teamwork should be more participative there. It encourages each member (and not only the founder) to produce and discuss innovative ideas, increasing their variety and enabling the team to select the best idea from a larger idea pool. Thirdly, teams in young enterprises are often multidisciplinary. This has some advantages but also increases the risk of internal conflicts, making teamwork less effective. Despite the key role of teamwork on innovation and the barriers to transferring existing evidence to the context of young enterprises, only a few researchers have addressed this issue. In order to close the existing research gap, and to explore and understand how innovations are developed in teams of young enterprises and which factors influencing teamwork may be especially relevant for such teams, a qualitative study has been developed. The study, consisting of 20 semi-structured interviews with (co-)founders of young innovative enterprises in the UK and USA, started in September 2017. The interview guide comprises, but is not limited to, teamwork dimensions discussed in the literature, such as members’ skill or authority differentiation. Data will be evaluated following the rules of qualitative content analysis. First results indicate some factors which may be especially relevant for teamwork in young innovative enterprises. They will enrich the scientific discussion and provide the evidence needed to test a possible causality between the identified factors and teamwork effectiveness in future research on young innovative enterprises. Results and their discussion can be presented at the conference.

Keywords: innovation, qualitative study, teamwork, young enterprises

Procedia PDF Downloads 171
82 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent that is already impregnated onto the tape, activated through the application of heat. The stitching method is used as a baseline to compare the new splicing methods to the traditional technique currently in use. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique. It analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 122
81 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves the use of electrical segmentation, which consists of creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in variations of the graph structure as well as changes in line flows. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that uses a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-zone electrical perturbation and low variance of electrical perturbation. Our experiments show when robust electrical segmentation is beneficial and in which contexts.
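
The flatten-then-cluster idea can be illustrated with a minimal Python sketch using networkx: the layers of a small multiplex graph over a shared set of buses are aggregated into one weighted graph, on which a community detection algorithm is run. The six-bus example and the use of greedy modularity maximization are illustrative assumptions, not the authors' penalized formulation.

# Minimal sketch: flatten a multiplex grid graph (shared vertex set, one layer
# per operating situation) and detect coherent zones on the flattened graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# each layer = the line topology of the same 6-bus grid in one situation (hypothetical)
layers = [
    [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3)],   # situation A
    [(0, 1), (1, 2), (3, 4), (4, 5), (3, 5)],           # situation B
    [(0, 2), (1, 2), (3, 4), (4, 5), (3, 5), (2, 3)],   # situation C
]

flat = nx.Graph()
flat.add_nodes_from(range(6))
for layer in layers:
    for u, v in layer:
        # edge weight counts in how many situations the line couples the two buses
        w = flat[u][v]["weight"] + 1 if flat.has_edge(u, v) else 1
        flat.add_edge(u, v, weight=w)

zones = greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])   # e.g. [[0, 1, 2], [3, 4, 5]]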

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 51
80 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions

Authors: Gaurangi Saxena, Ravindra Saxena

Abstract:

Lately, cloud computing has been used to enhance the ability to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm, cloud computing, has emerged as a powerful tool for the optimum utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this new technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that by using the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments, but also support “interactive user-facing applications” such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm. It draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site. Cloud computing gives these companies the option of storing data on someone else’s hardware, removing the need for physical space on the front end. Prominent service providers such as Amazon, Google, SUN, IBM, Oracle and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services could include email, office applications, finance, video, audio and data processing. By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities to improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track any data, such as orders, discounts, references, competitors and many more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently and at a lower cost, can gain competitive advantage.

Keywords: cloud computing, competitive advantage, customer relationship management, grid computing

Procedia PDF Downloads 281
79 Training During Emergency Response to Build Resiliency in Water, Sanitation, and Hygiene

Authors: Lee Boudreau, Ash Kumar Khaitu, Laura A. S. MacDonald

Abstract:

In April 2015, a magnitude 7.8 earthquake struck Nepal, killing, injuring, and displacing thousands of people. The earthquake also damaged water and sanitation service networks, leading to a high risk of diarrheal disease and the associated negative health impacts. In response to the disaster, the Environment and Public Health Organization (ENPHO), a Kathmandu-based non-governmental organization, worked with the Centre for Affordable Water and Sanitation Technology (CAWST), a Canadian education, training and consulting organization, to develop two training programs to educate volunteers on water, sanitation, and hygiene (WASH) needs. The first training program was intended for acute response, with the second focusing on longer-term recovery. A key focus was to equip the volunteers with the knowledge and skills to formulate useful WASH advice in the unanticipated circumstances they would encounter when working in affected areas. Within the first two weeks of the disaster, a two-day acute response training was developed, which focused on enabling volunteers to educate those affected by the disaster about local WASH issues, their link to health, and their increased importance immediately following emergency situations. Between March and October 2015, a total of 19 training events took place, with over 470 volunteers trained. The trained volunteers distributed hygiene kits and liquid chlorine for household water treatment. They also facilitated health messaging and WASH awareness activities in affected communities. A three-day recovery phase training was also developed and has been delivered to volunteers in Nepal since October 2015. This training focused on WASH issues during the recovery and reconstruction phases. The interventions and recommendations in the recovery phase training focus on long-term WASH solutions, and so form a link between emergency relief strategies and long-term development goals. ENPHO has trained 226 volunteers during the recovery phase, with training ongoing as of April 2016. In the aftermath of the earthquake, ENPHO found that its existing pool of volunteers was more than willing to help those in their communities who were most in need. By training these and new volunteers, ENPHO was able to reach many more communities in the immediate aftermath of the disaster; together they reached 11 of the 14 earthquake-affected districts. The development of the training materials by ENPHO and CAWST was a highly collaborative and iterative process, which enabled the materials to be produced within a short response time. By training volunteers on basic WASH topics during both the immediate response and the recovery phase, ENPHO and CAWST have been able to link immediate emergency relief to long-term developmental goals. While the recovery phase training continues in Nepal, CAWST is planning to decontextualize the training used in both phases so that it can be applied to other emergency situations in the future. The training materials will become part of the open content materials available on CAWST’s WASH Resources website.

Keywords: water and sanitation, emergency response, education and training, building resilience

Procedia PDF Downloads 285
78 ENDO-β-1,4-Xylanase from Thermophilic Geobacillus stearothermophilus: Immobilization Using Matrix Entrapment Technique to Increase the Stability and Recycling Efficiency

Authors: Afsheen Aman, Zainab Bibi, Shah Ali Ul Qader

Abstract:

Introduction: Xylan is a heteropolysaccharide composed of xylose monomers linked together through 1,4 linkages within a complex xylan network. Owing to the wide applications of xylan hydrolytic products (xylose, xylobiose and xylooligosaccharides), researchers are focusing on the development of various strategies for efficient xylan degradation. One of the most important strategies is the use of heat-tolerant biocatalysts, which act as strong and specific cleaving agents. Therefore, the exploration of the microbial pool of extremely diversified ecosystems is considerably important, and microbial populations from extreme habitats are keenly explored for the isolation of thermophilic entities. These thermozymes usually demonstrate fast hydrolytic rates, can produce high yields of product, and are less prone to microbial contamination. Another possibility for degrading xylan continuously is the use of immobilization techniques. The current work is an effort to merge the positive aspects of both a thermozyme and an immobilization technique. Methodology: Geobacillus stearothermophilus was isolated from a soil sample collected near a blast furnace site. This thermophile is capable of producing a thermostable endo-β-1,4-xylanase which cleaves xylan effectively. In the current study, this thermozyme was immobilized within a synthetic and a non-synthetic matrix for continuous production of metabolites using the entrapment technique. The kinetic parameters of the free and immobilized enzyme were studied. For this purpose, calcium alginate and polyacrylamide beads were prepared. Results: For the synthesis of the immobilized beads, sodium alginate (40.0 g L⁻¹) and calcium chloride (0.4 M) were used in combination. The temperature (50°C) and pH (7.0) optima of the immobilized enzyme remained the same for xylan hydrolysis; however, the enzyme-substrate catalytic reaction time rose from 5.0 to 30.0 minutes compared to the free counterpart. The diffusion limitation of high molecular weight xylan (corncob) caused a decline in the Vmax of the immobilized enzyme from 4773 to 203.7 U min⁻¹, whereas the Km value increased from 0.5074 to 0.5722 mg ml⁻¹ with reference to the free enzyme. Immobilized endo-β-1,4-xylanase showed greater stability at high temperatures than the free enzyme: it retained 18% and 9% residual activity at 70°C and 80°C, respectively, whereas the free enzyme completely lost its activity at both temperatures. The immobilized thermozyme displayed sufficient recycling efficiency and can be reused for up to five reaction cycles, indicating that this enzyme can be a plausible candidate for the paper processing industry. Conclusion: This thermozyme showed good immobilization yield and operational stability for hydrolyzing high molecular weight xylan. However, its immobilization properties can be improved further by immobilizing it on different supports for industrial purposes.
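
Kinetic constants such as the Vmax and Km reported above are typically obtained by fitting the Michaelis-Menten equation to rate-versus-substrate data; the minimal Python sketch below shows such a fit with scipy. The data points are hypothetical and serve only to illustrate the procedure, not the study's measurements.

# Minimal sketch: estimating Vmax and Km by non-linear fitting of the
# Michaelis-Menten equation to hypothetical rate data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])   # xylan concentration, mg/ml
rate = np.array([700, 1500, 2300, 3100, 3800, 4200])    # reaction rate, U/min (illustrative)

(vmax, km), _ = curve_fit(michaelis_menten, substrate, rate, p0=[4000, 0.5])
print(f"Vmax = {vmax:.0f} U/min, Km = {km:.2f} mg/ml")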

Keywords: immobilization, reusability, thermozymes, xylanase

Procedia PDF Downloads 357
77 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place in a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt. Farm and operator characteristics, as well as the organizational, informational and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify 'bottlenecks' in the adoption of agricultural innovation. The decision-making process involves a multistage procedure, in which the individual passes from first hearing about the technology to final adoption. Awareness is the initial stage and represents the moment in which an individual learns about the existence of the technology. The 'static' concept of adoption has been overcome: awareness is a precondition to adoption. Accounting for this condition avoids some erroneous evaluations that arise from carrying out the analysis on a population that is only partly aware of the technologies. In support of this, the present study puts forward an empirical analysis among Italian farmers, considering awareness as a prerequisite for adoption. The purpose of the present work is to analyze both the factors that affect the probability of adoption and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire submitted in November 2017. A preliminary descriptive analysis has shown that high levels of adoption are found among younger, better-educated farmers with high information intensity, large farm size and high labor intensity, and whose perception of the complexity of the adoption process is lower. The use of a logit model permits an appreciation of the weight played by labor intensity and by the complexity perceived by the potential adopter in the PA adoption process. All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be more specific for each phase of this adoption process. Specifically, they should increase awareness of PA tools and foster the dissemination of information to reduce the perceived complexity of the adoption process. These implications are particularly important in Europe, where the innovation-oriented reform of the Common Agricultural Policy has been announced. In this context, they suggest that measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and innovation approaches.
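
A minimal Python sketch of the type of logit model described above is shown below, estimating the probability of PA adoption from two standardized predictors (labor intensity and perceived complexity). The synthetic data and coefficients are assumptions used only to illustrate the estimation; they are not the study's survey data.

# Minimal sketch: logit model of adoption probability on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
labour_intensity = rng.normal(0, 1, n)        # standardised labour intensity
perceived_complexity = rng.normal(0, 1, n)    # standardised perceived complexity

# illustrative relationship: adoption rises with labour intensity,
# falls with perceived complexity
logit_p = 0.8 * labour_intensity - 1.1 * perceived_complexity - 0.2
adopt = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([labour_intensity, perceived_complexity]))
model = sm.Logit(adopt, X).fit(disp=False)
print(model.params)                     # estimated coefficients
print(model.get_margeff().summary())    # marginal effects on adoption probability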

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 112
76 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fire, jet fire and even explosion when it meets the various ignition sources present in operations. The heart of process safety management is therefore avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of such a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Prior research shows that detecting loss of containment accurately in a surface pipeline is difficult, and the trade-off between cost-effectiveness and accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate; having multiple adjacent PVT sensors along the pipeline is expensive and hence generally not viable from an economic standpoint. A conceptual approach that combines mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and these data are used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and discusses the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
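
A minimal sketch of the conceptual approach, assuming simulated pressure, flow and temperature features are already available: a supervised classifier is trained on synthetic data standing in for CFD output. Feature names, the label rule and the model settings are illustrative, not the authors' pipeline.

```python
# Conceptual sketch: a supervised leak classifier trained on features a
# transient/CFD simulation of the pipeline might produce. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000
pressure_drop = rng.normal(0.0, 1.0, n)     # standardized pressure change
flow_imbalance = rng.normal(0.0, 1.0, n)    # inlet/outlet flow mismatch
temp_shift = rng.normal(0.0, 1.0, n)        # temperature deviation
# Synthetic "leak" label: leaks tend to show a pressure drop together with a
# flow imbalance between inlet and outlet.
leak = ((pressure_drop < -0.5) & (flow_imbalance > 0.5)).astype(int)

X = np.column_stack([pressure_drop, flow_imbalance, temp_shift])
X_train, X_test, y_train, y_test = train_test_split(X, leak, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```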

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 151
75 The Joy of Painless Maternity: The Reproductive Policy of the Bolsheviks in the 1930s

Authors: Almira Sharafeeva

Abstract:

In the Soviet Union of the 1930s, motherhood was seen as a natural need of women. The masculine Bolshevik state did not see the emancipated woman as free from her maternal burden. In order to support the idea of "joyful motherhood," a medical discourse on the anesthesia of childbirth emerged. In March 1935, at the IX Congress of Obstetricians and Gynecologists, the People's Commissar of Public Health of the RSFSR, G.N. Kaminsky, raised the issue of anesthesia of childbirth. From that year, medical, literary and artistic publications began, with enviable frequency, to publish articles and studies devoted to the issue, and the goal of anesthetizing all childbirths in the USSR was proclaimed. These publications were often filled with anti-German and anti-capitalist propaganda, through which the advantages of socialism over capitalism and Nazism were demonstrated. At congresses, in journals, and at institute meetings, doctors' discussions of obstetric anesthesia were accompanied by discussions of shortening the duration of childbirth, the prevention of disease, the admission of nurses to the procedure, and the proper behavior of women during childbirth. With the help of articles from medical periodicals of the 1930s, brochures, and documents from the funds of the Institute of Obstetrics and Gynecology of the Academy of Medical Sciences of the USSR (TsGANTD SPb) and the Department of Obstetrics and Gynecology of the NKZ USSR (GARF), this paper shows how the advantages of the Soviet system and the socialist way of life were constructed through the problem of childbirth pain relief, how childbirth pain relief in the USSR was related to the foreign policy situation, and how projects of labor pain relief were related to the anti-abortion policy of the state. The study also attempts to answer the question of why anesthesia of childbirth in the USSR did not become widespread and how, through this medical procedure, the Soviet authorities tried to take control of a female function (childbirth) that was not available to men. Considering this subject from the perspective of gender studies and the social history of medicine, it is productive to use the term "biopolitics." Michel Foucault and Antonio Negri wrote that biopolitics takes under its wing the control and management of hygiene, nutrition, fertility, sexuality and contraception; its central issue is the reproduction of the population. It includes strategies for intervening in collective existence in the name of life and health, and ways of subjectivation by which individuals are forced to work on themselves. The Soviet state, through intervention in the reproductive lives of its citizens, sought to realize its goals of population growth, which was necessary to demonstrate the benefits of living in the Soviet Union and to train a pool of builders of socialism. The woman's body was seen as the object over which this socialist experiment in reproductive policy was conducted.

Keywords: labor anesthesia, biopolitics of Stalinism, childbirth pain relief, reproductive policy

Procedia PDF Downloads 47
74 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model in aneurysmal subarachnoid hemorrhage (aSAH), a condition that can lead to significant morbidity and mortality and has traditionally been fraught with poor methods of outcome prediction. This study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median (IQR) of 7 (8.5) days after admission. The best-performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
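
The sketch below illustrates the general workflow of extracting graph metrics from a weighted connectome and feeding them to a Random Forest; the adjacency matrices, edge-density threshold and outcome labels are random placeholders, not study data.

```python
# Sketch of the connectome-to-classifier workflow on placeholder data.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def random_connectome(rng, n_rois=176, density=0.05):
    """Random symmetric weighted adjacency as a stand-in for a CSA connectome."""
    w = np.triu(np.abs(rng.normal(size=(n_rois, n_rois))), k=1)
    threshold = np.quantile(w[w > 0], 1 - density)
    w[w < threshold] = 0.0          # keep only the densest edges
    return w + w.T

def graph_features(adjacency):
    """Summarise node strength (NS) and betweenness centrality (BC) over the network."""
    G = nx.from_numpy_array(adjacency)
    strength = np.array([d for _, d in G.degree(weight="weight")])
    # note: betweenness treats weights as distances; real connectome pipelines
    # usually convert connectivity weights to distances first
    betweenness = np.array(list(nx.betweenness_centrality(G, weight="weight").values()))
    return np.array([strength.mean(), strength.std(), betweenness.mean(), betweenness.std()])

rng = np.random.default_rng(1)
n_patients = 56                      # cohort size reported in the abstract
X = np.vstack([graph_features(random_connectome(rng)) for _ in range(n_patients)])
y = rng.integers(0, 2, n_patients)   # placeholder for the dichotomized mRS outcome

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
print(clf.feature_importances_)      # analogous to ranking NS/BC graph features
```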

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 163
73 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for stable and efficient operation. Abnormal drainage behavior may lead to high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in blast furnace operation; it should be kept at an optimal level to obtain the desired product quality and stable performance. Direct measurement of these quantities is not possible owing to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal and slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amounts of metal and slag entering the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amounts of both phases accumulated, their levels in the hearth, the pressure of the gases in the furnace and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all of this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts the timing of critical events during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL India's BF-II, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
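
A toy version of the accumulation-drainage mass balance is sketched below; the production rate, drainage rate, hearth geometry and void fraction are invented round numbers used only to show how the liquid level can be tracked over a cast.

```python
# Simplified mass-balance sketch of hot metal accumulation and drainage in the
# hearth during tapping. All numbers are illustrative, not plant data.
production_rate = 6.0        # t/min of hot metal trickling into the hearth
drainage_rate = 7.5          # t/min through the tap hole while tapping
hearth_area = 80.0           # m^2, effective hearth cross-section
void_fraction = 0.3          # voidage of the coke bed holding the liquid
density = 7.0                # t/m^3, liquid iron

dt = 1.0                     # time step, minutes
inventory = 120.0            # t of metal present when the simulation starts
levels = []
for minute in range(120):
    tapping = minute >= 10                       # tap hole opened after 10 min
    outflow = drainage_rate if tapping and inventory > 0 else 0.0
    inventory = max(inventory + (production_rate - outflow) * dt, 0.0)
    # liquid level above the hearth bottom, corrected for coke-bed voidage
    level = inventory / (density * hearth_area * void_fraction)
    levels.append(level)

print(f"peak level {max(levels):.2f} m, end-of-cast level {levels[-1]:.2f} m")
```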

Keywords: blast furnace, hearth, deadman, hotmetal

Procedia PDF Downloads 159
72 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate its diagnostic utility. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, the influence of medications could be excluded. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: Because TGC includes phase, which contains information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared with the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
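
One common way to estimate theta-phase gamma-amplitude coupling, sketched below on a synthetic signal, is to band-pass filter, take the Hilbert transform, and compute a mean-vector-length coupling measure; this illustrates the quantity in general and is not necessarily the exact computation used in the study.

```python
# Sketch of phase-amplitude coupling on a synthetic, theta-modulated signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                   # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
theta_phase = 2 * np.pi * 6 * t              # 6 Hz theta
gamma = (1 + 0.8 * np.cos(theta_phase)) * np.sin(2 * np.pi * 40 * t)  # theta-modulated 40 Hz
eeg = np.cos(theta_phase) + 0.5 * gamma + 0.2 * np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))    # theta phase
amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))      # gamma amplitude envelope

# Mean vector length as one common theta-gamma coupling estimate
tgc = np.abs(np.mean(amp * np.exp(1j * phase)))
print(f"theta-gamma coupling strength: {tgc:.3f}")
```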

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 117
71 The Language of Landscape Architecture

Authors: Hosna Pourhashemi

Abstract:

Chahar Bagh, a symbol of the world displayed around the pool of life at its centre, attempts to emulate Eden. It represents a concept of duality based on the division of the universe into two perceptional insights, a celestial and an earthly one. The Chahar Bagh garden pattern refers to the Garden of Eden, which was watered and framed by four main rivers. This microcosm is combined with a mystical love of flowers, sweet-scented trees, a variety of colors, and the sense of eternal life. This symbol of the integration of the spontaneous expressivity of the natural elements with the reasoning awareness of man strives for the ideal of divine perfection. Through collecting and analyzing the data, the prevalence and continuous influence of the Chahar Bagh concept on selected historical gardens was elaborated and evaluated. After the conquest of Persia by the Arabs in the 7th century, Chahar Bagh was adopted and spread throughout the Islamic expansion, from the Middle East westward across northern Africa to Morocco and the Iberian Peninsula, and eastward through Iran to Central Asia and India. Its continuity up to the mid-16th-century Renaissance period is also shown. By adapting the semiotic theories of Peirce and Saussure to the Persian garden, Chahar Bagh was defined as the basic pattern language of garden culture. The hypothesis of the continuous influence of the Chahar Bagh pattern language on today's landscape architecture was examined in selected works of Dieter Kienast, as an important and relevant protagonist of his time (the end of the twentieth century) and up to our own. The Chahar Bagh pattern language offers a collective, culturally sensitive healing wisdom transmitted down through the millennia. Through my reflections on Dieter Kienast's works, I transformed my personal experience into a transpersonal understanding informed by Sufi philosophy and Jungian psychology, which leads me to define new design theories and methods and to form a spiritual model, which I call the "Quaternary Perception Model", for landscape architecture. Based on a process of cognition through self-journeying in this holistic model, human consciousness can develop access to "higher" levels of being and embrace unity. Self-purification and mindfulness through transpersonal confrontation in the "Quaternary Perception Model" generate a form of heart-based treatment. I adapted the seven spiritual levels of Sufi self-development to the perception of landscape, representing the stages of the self, ranging from absolutely self-centered to purely spiritual humanity. I redefine and reread the elements and features of the Chahar Bagh pattern language for today's landscape architecture. The "lost paradise" lies in our heart and can be perceived by all humans in landscapes and cities designed in the spirit of the "Quaternary Model".

Keywords: Persian garden, pattern language of Chahar Bagh, holistic perception, Dieter Kienast, quaternary model

Procedia PDF Downloads 57
70 Profiling of Bacterial Communities Present in Feces, Milk, and Blood of Lactating Cows Using 16S rRNA Metagenomic Sequencing

Authors: Khethiwe Mtshali, Zamantungwa T. H. Khumalo, Stanford Kwenda, Ismail Arshad, Oriel M. M. Thekisoe

Abstract:

Ecologically, the gut, the mammary glands and the bloodstream consist of distinct microbial communities of commensals, mutualists and pathogens, forming a complex ecosystem of niches. The by-products derived from these body sites, i.e. faeces, milk and blood, respectively, have many uses in rural communities, where they aid in day-to-day household activities and occasional rituals. Thus, although livestock rearing plays a vital role in sustaining the livelihoods of rural communities, it may serve as a potent reservoir of pathogenic organisms with devastating health and economic implications. This study aimed to simultaneously explore the microbial profiles of corresponding faecal, milk and blood samples from lactating cows using 16S rRNA metagenomic sequencing. Bacterial communities were inferred through the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline coupled with the SILVA database v138, and all downstream analyses were performed in R v3.6.1. Alpha-diversity metrics showed significant differences between faeces and blood and between faeces and milk, but did not vary significantly between blood and milk (Kruskal-Wallis, P < 0.05). Beta-diversity metrics on Principal Coordinate Analysis (PCoA) and Non-Metric Multidimensional Scaling (NMDS) clustered samples by type, suggesting that the microbial communities of the studied niches are significantly different (PERMANOVA, P < 0.05). A number of taxa were significantly differentially abundant (DA) between groups based on the Wald test implemented in the DESeq2 package (Padj < 0.01). The majority of the DA taxa were enriched in faeces rather than in milk and blood, except for the genus Anaplasma, which was significantly enriched in blood and was, in turn, the most abundant taxon overall. A total of 30 phyla, 74 classes, 156 orders, 243 families and 408 genera were obtained from the overall analysis. The most abundant phyla across the three body sites were Firmicutes, Bacteroidota, and Proteobacteria. A total of 58 genus-level taxa were detected simultaneously across the sample groups, while bacterial signatures of at least 8 of these occurred concurrently in corresponding faeces, milk and blood samples from the same group of animals constituting a pool. The important taxa identified in this study could be categorized into four potentially pathogenic clusters: i) arthropod-borne; ii) food-borne and zoonotic; iii) mastitogenic; and iv) metritic and abortigenic. This study provides insight into the microbial composition of bovine faeces, milk and blood and the extent to which they overlap. It further highlights the potential risk of disease occurrence and transmission between the animals and the inhabitants of the sampled rural community, given the unsanitary practices associated with their use of cattle by-products.
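
As a small illustration of the alpha-diversity comparison (not the study's DADA2/R pipeline), the Python sketch below computes Shannon diversity per sample from a synthetic genus-level count table and compares the three sample types with a Kruskal-Wallis test; sample sizes and richness values are invented.

```python
# Illustrative alpha-diversity comparison across faeces, milk and blood.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(7)

def shannon(counts):
    """Shannon diversity H = -sum(p * ln p) over non-zero proportions."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

groups = {}
for name, richness in [("faeces", 300), ("milk", 120), ("blood", 60)]:
    # each group: 20 samples, counts drawn from a skewed synthetic distribution
    table = rng.poisson(lam=rng.gamma(0.5, 20, size=(20, richness)))
    groups[name] = np.array([shannon(row) for row in table])

stat, p = kruskal(*groups.values())
print({k: round(v.mean(), 2) for k, v in groups.items()}, f"Kruskal-Wallis p={p:.3g}")
```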

Keywords: microbial profiling, 16S rRNA, NGS, feces, milk, blood, lactating cows, small-scale farmers

Procedia PDF Downloads 85
69 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery

Authors: Marlin Mubarak

Abstract:

Background: Endometriosis is a complex, disabling disease mainly affecting young females of reproductive age. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This would improve the pre-operative diagnostic accuracy of stage 3 and 4 endometriosis and, as a result, allow relevant women to be referred to a specialist centre for complex laparoscopic surgery. The model is based on the British Society for Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. As part of the routine preoperative assessment, patients also had a routine ultrasound scan, and when recto-vaginal or deep infiltrating endometriosis (DIE) was suspected, an MRI was performed. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. Ages ranged between 17 and 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of the endometriosis diagnosis predicted from symptoms based on the BSGE pain questionnaire, clinical examination and imaging. Findings: The prevalence of diagnosed endometriosis was calculated to be 76.8% and the prevalence of advanced stage was 55.4%. Deep infiltrating endometriosis in various locations was diagnosed in 32/56 women (57.1%), and some had DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%. The positive likelihood ratio was 20.97 and the negative likelihood ratio 0.17, indicating that the model has good predictive value and could be useful in predicting advanced-stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, but proposed future research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce the risks associated with surgery and improve outcomes. It could be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess whether the initial results are reproducible.
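
A worked check of the quoted diagnostic metrics is given below; the 2x2 counts are hypothetical but chosen to be consistent with the reported sensitivity, specificity and the roughly 31 advanced-stage versus 25 earlier-stage cases.

```python
# Worked check of sensitivity, specificity and likelihood ratios.
tp, fn = 26, 5      # advanced-stage cases flagged / missed by the score (assumed counts)
tn, fp = 24, 1      # earlier-stage cases correctly / incorrectly flagged (assumed counts)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}")
print(f"LR+ {lr_positive:.2f}, LR- {lr_negative:.2f}")   # ~20.97 and ~0.17, as reported
```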

Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound

Procedia PDF Downloads 321
68 Environmental Impact of a New-Build Educational Building in England: Life-Cycle Assessment as a Method to Calculate Whole Life Carbon Emissions

Authors: Monkiz Khasreen

Abstract:

In the context of the global trend towards reducing the carbon footprint of new buildings, the design team is required to make early decisions that have a major influence on embodied and operational carbon. Sustainability strategies should be clear during the early stages of the building design process, as changes made later can be extremely costly. Life-Cycle Assessment (LCA) could be used as the vehicle to carry other tools and processes towards achieving the requested improvement. Although LCA is the 'gold standard' for evaluating buildings from cradle to grave, the lack of detail available at the concept design stage makes LCA very difficult, if not impossible, to use as an estimation tool at early stages. Issues related to the transparency and accessibility of information in the building industry also affect the credibility of LCA studies. A verified database derived from LCA case studies needs to be accessible to researchers, design professionals and decision makers in order to offer guidance on specific areas of significant impact. This database could be built up from data from multiple sources within a pool of research held in this context. One of the most important factors affecting the reliability of such data is the temporal factor, as building materials, components and systems are changing rapidly with the advancement of technology, making production more efficient and less environmentally harmful. Recent LCA studies on different building functions, types and structures are therefore always needed to update databases derived from research and to form case bases for comparison studies. There is also a need to make these studies transparent and accessible to designers, and the work in this paper sets out to address this need. This paper presents a life-cycle case study of a new-build educational building in England. The building utilised current construction methods and technologies and is rated BREEAM Excellent. Carbon emissions of the different life-cycle stages and of different building materials and components were modelled. Scenario and sensitivity analyses were used to estimate the future of new educational buildings in England, and the study attempts to provide an indicator for the early design stages of similar buildings. The carbon dioxide emissions of this case-study building, when normalised by floor area, lie towards the lower end of the range of worldwide data reported in the literature. Sensitivity analysis shows that life-cycle assessment results are highly sensitive to assumptions made at the design stage about the future, such as changes in the electricity generation mix over time, refurbishment processes and recycling. The analyses also show that large savings in carbon dioxide emissions can result from very small changes at the design stage.
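
The sketch below shows the shape of such a whole-life carbon calculation: embodied stage totals plus operational emissions, normalised by floor area, with a simple grid-decarbonisation scenario. All figures are invented for illustration and do not reproduce the case-study results.

```python
# Hedged sketch of a whole-life carbon calculation normalised by floor area.
floor_area_m2 = 5000.0             # assumed gross internal area

embodied = {                       # kgCO2e per stage (illustrative values)
    "A1-A3 product stage": 1.8e6,
    "A4-A5 construction": 0.3e6,
    "B4 replacement/refurbishment": 0.6e6,
    "C1-C4 end of life": 0.2e6,
}
operational_annual = 90_000.0      # kgCO2e per year from electricity and heat (assumed)
study_period_years = 60

for grid_factor in (1.0, 0.5):     # 1.0 = static grid, 0.5 = halved carbon intensity
    operational = operational_annual * study_period_years * grid_factor
    whole_life = sum(embodied.values()) + operational
    print(f"grid factor {grid_factor}: "
          f"{whole_life / floor_area_m2:.0f} kgCO2e/m2 over {study_period_years} years")
```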

Keywords: architecture, building, carbon dioxide, construction, educational buildings, England, environmental impact, life-cycle assessment

Procedia PDF Downloads 96
67 Variability and Stability of Bread and Durum Wheat for Phytic Acid Content

Authors: Gordana Branković, Vesna Dragičević, Dejan Dodig, Desimir Knežević, Srbislav Denčić, Gordana Šurlan-Momirović

Abstract:

Phytic acid is a major pool in the flux of phosphorus through agroecosystems and represents a sum equivalent to more than 50% of all phosphorus fertilizer used annually. Diets rich in phytic acid can substantially decrease the absorption of micronutrients such as calcium, zinc, iron, manganese and copper, because phytate salts are excreted by humans and by non-ruminant animals such as poultry, swine and fish, which have very low phytase activity and consequently a limited ability to digest and utilize phytic acid; the phytic-acid-derived phosphorus in animal waste in turn contributes to water pollution. The tested accessions consisted of 15 genotypes of bread wheat (Triticum aestivum L. ssp. vulgare) and 15 genotypes of durum wheat (Triticum durum Desf.). The trials were sown at three test sites in Serbia: Rimski Šančevi (RS) (45º19´51´´N; 19º50´59´´E), Zemun Polje (ZP) (44º52´N; 20º19´E) and Padinska Skela (PS) (44º57´N; 20º26´E) during the two growing seasons 2010-2011 and 2011-2012. The experimental design was a randomized complete block design with four replications. The elementary plot consisted of 3 internal rows with an area of 0.6 m2 (3 × 0.2 m × 1 m). Grains were ground with a Laboratory Mill 120 Perten ('Perten', Sweden) (particle size < 500 μm) and the flour was used for the analysis. Phytic acid content in grain was determined spectrophotometrically with a Shimadzu UV-1601 spectrophotometer (Shimadzu Corporation, Japan). The objectives of this study were to determine: i) the variability and stability of phytic acid content among the selected genotypes of bread and durum wheat; ii) the predominant source of variation with respect to genotype (G), environment (E) and genotype × environment interaction (GEI) in the multi-environment trial; and iii) the influence of climatic variables on the GEI for phytic acid content. The analysis of variance showed that the variation in phytic acid content was predominantly influenced by the environment in durum wheat, while the GEI prevailed for the variation in phytic acid content in bread wheat. Phytic acid content expressed on a dry mass basis was in the range 14.21-17.86 mg g-1, with an average of 16.05 mg g-1, for bread wheat and 14.63-16.78 mg g-1, with an average of 15.91 mg g-1, for durum wheat. The average-environment coordination view of the GGE (genotype plus genotype-by-environment interaction) biplot was used to select the most desirable genotypes for breeding for low phytic acid content, in the sense of good stability and a lower level of phytic acid. The most desirable genotypes of bread and durum wheat were Apache and 37EDUYT /07 No. 7849. Models of climatic factors interpreted the GEI for phytic acid content to a high degree (> 91%) and included relative humidity in June, sunshine hours in April, mean temperature in April and winter moisture reserves for the genotypes of bread wheat, as well as precipitation in June and April, maximum temperature in April and mean temperature in June for the genotypes of durum wheat.
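
For readers unfamiliar with the G × E partition, the sketch below runs a two-way ANOVA on a synthetic balanced trial with 15 genotypes, six site-year environments and four replications; the effect sizes and response values are invented and do not reproduce the reported measurements.

```python
# Illustrative two-way ANOVA partitioning G, E and G x E sources of variation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
genotypes = [f"G{i}" for i in range(1, 16)]                      # 15 genotypes
environments = ["RS-2011", "RS-2012", "ZP-2011", "ZP-2012", "PS-2011", "PS-2012"]

geno_effects = {g: rng.normal(0, 0.4) for g in genotypes}        # synthetic main effects
env_effects = {e: rng.normal(0, 0.6) for e in environments}

rows = []
for g in genotypes:
    for e in environments:
        ge = rng.normal(0, 0.3)                                  # synthetic interaction
        for rep in range(4):                                     # four replications
            rows.append({"genotype": g, "environment": e,
                         "phytic_acid": 16.0 + geno_effects[g] + env_effects[e]
                                        + ge + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

model = smf.ols("phytic_acid ~ C(genotype) * C(environment)", data=df).fit()
print(anova_lm(model, typ=2))    # partitions G, E and G x E sums of squares
```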

Keywords: genotype × environment interaction, phytic acid, stability, variability

Procedia PDF Downloads 366
66 Installation of an Inflatable Bladder and Sill Walls for Riverbank Erosion Protection and Improved Water Intake Zone, Smoky Hill River – Salina, Kansas

Authors: Jeffrey A. Humenik

Abstract:

Environmental, Limited Liability Corporation (EMR) provided civil construction services to the U.S. Army Corps of Engineers, Kansas City District, for the placement of a protective riprap blanket on the west bank of the Smoky Hill River, the construction of two shore abutments, and the construction of a 140-foot-long sill wall spanning the Smoky Hill River in Salina, Kansas. The purpose of the project was to protect the riverbank from erosion and to hold back water to a specified elevation, creating a pool that ensures adequate water intake for the municipal water supply. Geotextile matting and riprap were installed for streambank erosion protection. An inflatable bladder (AquaDam®) was designed to the river's dimensions and installed to divert the river and allow for dewatering during construction of the sill walls and cofferdam. An AquaDam® consists of water-filled polyethylene tubes that form a barrier to divert flow or prevent flooding. A challenge of the project was that 100% of the sill wall was constructed within an active river channel. The threat of flooding of the work area, damage to the AquaDam® by debris, and the potential difficulty of water removal presented a unique set of challenges to the construction team. Upon completion of the west sill wall, floating debris punctured the AquaDam®. Manufacturing and delivering a new AquaDam® would have delayed project completion by at least six weeks, so to keep the project ahead of schedule the decision was made to construct an earthen cofferdam reinforced with riprap for the construction of the east abutment and east sill wall section. During construction of the west sill wall section, a deep scour hole was encountered in the wall alignment that prevented EMR from using the natural rock formation as a concrete form for the lower section of the sill wall. A formwork system was constructed that allowed the west sill wall section to be placed in two horizontal lifts of concrete poured on separate occasions. The first lift was poured to fill in the scour hole and act as a footing for the second lift; concrete wall forms were then set on the first lift and anchored to the surrounding riverbed so that the second lift could be poured in a manner similar to a basement wall. EMR's timely decisions to keep the project moving toward completion in the face of changing conditions enabled project completion two months ahead of schedule. Inflatable bladders are an effective and cost-efficient technology for diverting river flow during construction; however, a secondary plan should be part of the project design in case debris transported by the river punctures or damages the bladders.

Keywords: abutment, AquaDam®, riverbed, scour

Procedia PDF Downloads 122