Search results for: engine performance analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 36053

27743 Determination of Full Energy Peak Efficiency and Resolution of NaI (Tl) Detector Using Gamma-ray Spectroscopy

Authors: Jibon Sharma, Alakjyoti Patowary, Moirangthem Nara Singh

Abstract:

In experimental research, it is essential to ensure the quality control of the system used for the experiment. The NaI (Tl) scintillation detector is the most commonly used detector in radiation and medical physics for measuring the gamma-ray activity of various samples. In addition, the scintillation detector has many applications in the elemental analysis of various compounds and alloys using activation analysis. In each application involving quantitative analysis, it is essential to know the detection efficiency and resolution for different gamma energies. In this work, the energy dependence of the efficiency and resolution of a NaI (Tl) detector is investigated using gamma-ray spectroscopy. Photon energies of 356.01 keV, 511 keV, 661.60 keV, 1170 keV, 1274.53 keV and 1330 keV are obtained from the four radioactive sources (133Ba, 22Na, 137Cs and 60Co) used in these studies. The full energy peak efficiencies at these gamma energies are found to be 58.46%, 10.15%, 14.39%, 1.4%, 3.27% and 1.31%, respectively. The corresponding values of percent resolution are 11.27%, 7.27%, 6.38%, 5.17%, 4.86% and 4.74%. It was found that the efficiency of the detector decreases exponentially with energy and that the resolution of the detector is directly proportional to the energy of the gamma ray.
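
For readers unfamiliar with how these two figures of merit are obtained, a minimal sketch of the standard definitions (full energy peak efficiency from net peak counts, source activity, emission probability and live time; percent resolution from FWHM over peak energy) is given below. All numeric inputs are hypothetical placeholders, not the authors' measured data.

```python
# Minimal sketch of the NaI(Tl) figures of merit discussed above.
# All numeric inputs are hypothetical placeholders, not the authors' data.

def full_energy_peak_efficiency(net_counts, activity_bq, emission_prob, live_time_s):
    """Fraction of emitted gamma rays registered in the full-energy peak."""
    emitted = activity_bq * emission_prob * live_time_s
    return net_counts / emitted

def percent_resolution(fwhm_kev, peak_energy_kev):
    """Energy resolution expressed as FWHM relative to the peak energy."""
    return 100.0 * fwhm_kev / peak_energy_kev

if __name__ == "__main__":
    # Hypothetical 137Cs measurement: 661.6 keV line, 85% emission probability.
    eff = full_energy_peak_efficiency(net_counts=1.2e5, activity_bq=3.7e3,
                                      emission_prob=0.85, live_time_s=600)
    res = percent_resolution(fwhm_kev=42.2, peak_energy_kev=661.6)
    print(f"efficiency = {eff:.2%}, resolution = {res:.2f}%")
```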

Keywords: NaI (Tl) gamma-ray spectrometer, resolution, full energy peak efficiency, radioactive sources

Procedia PDF Downloads 95
27742 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g. PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR and other prominent industry standards have shown a peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. The model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
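
As context for the TransE/TransR baselines cited above (not the paper's own topological framework), a minimal numpy sketch of the TransE scoring idea is shown below: a triple (head, relation, tail) is plausible when the head embedding translated by the relation embedding lands near the tail. The entity and relation names and the random embeddings are illustrative placeholders.

```python
import numpy as np

# Minimal TransE-style scoring sketch for the baselines mentioned above
# (TransE/TransR); embeddings are random placeholders, not trained vectors.
rng = np.random.default_rng(0)
dim = 50
entities = {name: rng.normal(size=dim) for name in ["supplier", "part", "warehouse"]}
relations = {name: rng.normal(size=dim) for name in ["ships_to", "contains"]}

def transe_score(head, relation, tail):
    """Higher (less negative) score = more plausible triple under TransE."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Rank candidate tails for ("supplier", "ships_to", ?) -- i.e. link prediction.
candidates = sorted(entities, key=lambda t: transe_score("supplier", "ships_to", t),
                    reverse=True)
print(candidates)
```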

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 63
27741 Nonlinear Multivariable Analysis of CO2 Emissions in China

Authors: Hsiao-Tien Pao, Yi-Ying Li, Hsin-Chia Fu

Abstract:

This paper addresses the impacts of energy consumption, economic growth, financial development, and population size on environmental degradation using grey relational analysis (GRA) for China, where foreign direct investment (FDI) inflow is the proxy variable for financial development. The more recent historical data for the period 2004–2011 are used, because the use of very old data may not be suitable for rapidly developing countries. The results of the GRA indicate that the linkage effects of energy consumption–emissions and GDP–emissions are ranked first and second, respectively. This reveals that energy consumption and economic growth are strongly correlated with emissions. Higher economic growth requires more energy consumption and increases environmental pollution. Likewise, more efficient energy use needs a higher level of economic development. Therefore, policies to improve energy efficiency and create a low-carbon economy can reduce emissions without hurting economic growth. The FDI–emissions linkage is ranked third. This indicates that China does not apply weak environmental regulations to attract inward FDI. Furthermore, China's government should strengthen environmental policy when attracting inward FDI. The population–emissions linkage effect is ranked fourth, implying that population size does not directly affect CO2 emissions, even though China has the world's largest population and Chinese people use energy-related products very economically. Overall, energy conservation, efficiency improvement, demand management, and financial development, which aim at curtailing the waste of energy and reducing both energy consumption and emissions without loss of the country's competitiveness, can be adopted by developing economies. GRA is one of the best ways to build a dynamic analysis model from limited data.
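
As a rough illustration of the grey relational analysis step described above (normalising each factor series and computing Deng-style grey relational coefficients against the reference emissions series, then ranking factors by their mean grade), a small numpy sketch is given below. The numbers are made-up placeholders, not the 2004–2011 data used in the paper.

```python
import numpy as np

# Deng-style grey relational analysis (GRA) sketch: rank how closely each factor
# series tracks the reference series (here CO2 emissions). Placeholder data only.
def gra_grades(reference, series_dict, rho=0.5):
    norm = lambda x: (x - x.min()) / (x.max() - x.min())   # min-max normalisation
    ref = norm(reference)
    deltas = {k: np.abs(ref - norm(v)) for k, v in series_dict.items()}
    all_d = np.concatenate(list(deltas.values()))
    d_min, d_max = all_d.min(), all_d.max()                 # global extrema
    return {k: float(np.mean((d_min + rho * d_max) / (d + rho * d_max)))
            for k, d in deltas.items()}

emissions = np.array([5.2, 5.9, 6.6, 7.2, 7.7, 8.0, 8.6, 9.0])
factors = {
    "energy_consumption": np.array([2.1, 2.4, 2.7, 3.0, 3.2, 3.4, 3.6, 3.8]),
    "gdp":                np.array([1.9, 2.2, 2.6, 3.1, 3.5, 4.0, 4.6, 5.2]),
    "fdi":                np.array([0.6, 0.6, 0.7, 0.8, 0.9, 0.9, 1.1, 1.2]),
    "population":         np.array([1.30, 1.31, 1.31, 1.32, 1.33, 1.33, 1.34, 1.34]),
}
grades = gra_grades(emissions, factors)
print(sorted(grades, key=grades.get, reverse=True))   # larger grade = stronger linkage
```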

Keywords: China, CO₂ emissions, foreign direct investment, grey relational analysis

Procedia PDF Downloads 397
27740 The Story of a Spoiled Identity: Blogging on Disability and Femininity

Authors: Anna Ślebioda

Abstract:

The paper discusses intersections between disability and femininity. Their imbrication may impede the negotiation of identity. The analysis of a blog written by a woman with disability aims to prove this hypothesis. It involves 724 entries written over a span of six years. The conceptual framework for these considerations is constituted by the concepts of stigma and spoiled identity and the overlapping elements of femininity and disability. The empirical part comprises content analysis, which makes it possible to locate the narrative on femininity and disability within the dimensions of the imbricated categories described in the theoretical part. The results demonstrate aspects to consider in further research on identity in women with disabilities.

Keywords: disability, femininity, spoiled identity, stigma

Procedia PDF Downloads 657
27739 A Statistical Energy Analysis Model of an Automobile for the Prediction of the Internal Sound Pressure Level

Authors: El Korchi Ayoub, Cherif Raef

Abstract:

Interior noise in vehicles is an essential factor affecting occupant comfort. Over recent decades, much work has been done to develop simulation tools for vehicle NVH. In the medium-to-high frequency range, the statistical energy analysis (SEA) method shows significant effectiveness in predicting the noise and vibration responses of mechanical systems. In this paper, the evaluation of the sound pressure level (SPL) inside an automobile cabin has been performed numerically using the SEA method. A test was performed on a car cabin using a monopole source as the sound source. The decay rate method was employed to obtain the damping loss factor (DLF) of each subsystem of the developed SEA model. These parameters were then used to predict the sound pressure level in the interior cabin. The results show satisfactory agreement with the directly measured SPL. The developed SEA vehicle model can be used in early design phases and allows the engineer to identify the sources contributing to the total noise and the transmission paths.
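
The decay rate method mentioned above is commonly implemented by relating the measured reverberant decay of each subsystem to its damping loss factor, eta = DR / (27.3 f), equivalently eta = 2.2 / (f T60). A minimal sketch with hypothetical band frequencies and decay rates (not the authors' measurements) is shown below.

```python
# Decay-rate method sketch: damping loss factor (DLF) of an SEA subsystem from
# the measured reverberant decay rate DR (dB/s) in each frequency band.
# eta = DR / (27.3 * f)  <=>  eta = 2.2 / (f * T60). Numbers are hypothetical.

def dlf_from_decay_rate(decay_rate_db_per_s, band_centre_hz):
    return decay_rate_db_per_s / (27.3 * band_centre_hz)

bands_hz = [500, 1000, 2000, 4000]
decay_rates = [55.0, 80.0, 120.0, 190.0]   # hypothetical measured values
for f, dr in zip(bands_hz, decay_rates):
    print(f"{f} Hz: DLF = {dlf_from_decay_rate(dr, f):.4f}")
```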

Keywords: SEA, SPL, DLF, NVH

Procedia PDF Downloads 85
27738 The Effects of Weather Events and Land Use Change on Urban Ecosystems: From Risk to Resilience

Authors: Szu-Hua Wang

Abstract:

Urban ecosystems, as complex coupled human-environment systems, contain abundant natural resources for breeding natural assets and, at the same time, attract urban assets and consume natural resources, triggered by urban development. Land use change factually illustrates the interaction between human activities and the environment. However, the IPCC (2014) announced that land use change and urbanization due to human activities are the major cause of climate change, leading to serious impacts on urban ecosystem resilience and risk. For this reason, risk assessment and resilience analysis are the keys to responding to climate change in urban ecosystems. Urban spatial planning can guide urban development through land use planning, transportation planning, and environmental planning, and it affects land use allocation and human activities by building major constructions and protecting important national land resources. Urban spatial planning can thus aggravate climate change but can, on the other hand, also mitigate and adapt to it. Research on the effects of spatial planning on land use change and climate change is currently one of the most intensely studied issues. Therefore, this research focuses on developing frameworks for risk assessment and resilience analysis from the ecosystem perspective, based on typhoon precipitation in the Taipei area. An integrated method of risk assessment and resilience analysis will also be addressed for application to spatial planning practice and sustainable development.

Keywords: ecosystem, land use change, risk analysis, resilience

Procedia PDF Downloads 409
27737 Evaluation of the Diagnostic Potential of IL-2 after Specific Antigen Stimulation with PE35 (Rv3872) and PPE68 (Rv3873) for the Discrimination of Active and Latent Tuberculosis

Authors: Shima Mahmoudi, Babak Pourakbari, Setareh Mamishi, Mostafa Teymuri, Majid Marjani

Abstract:

Although cytokine analysis has greatly contributed to the understanding of tuberculosis (TB) pathogenesis, data on cytokine profiles that might distinguish progression from latency of TB infection are scarce. Since PE/PPE proteins are known to induce strong humoral and cellular immune responses, the aim of this study was to evaluate the diagnostic potential of interleukin-2 (IL-2) as a biomarker after specific antigen stimulation with PE35 and PPE68 for the discrimination of active and latent tuberculosis infection (LTBI). The production of IL-2 was measured in antigen-stimulated whole-blood supernatants following stimulation with recombinant PE35 and PPE68. All the patients with active TB and LTBI had a positive QuantiFERON-TB Gold in Tube test. The levels of IL-2 following stimulation with recombinant PE35 and PPE68 were significantly higher in the LTBI group than in patients with active TB infection or the control group. The discrimination performance (assessed by the area under the ROC curve) for IL-2 following stimulation with recombinant PE35 and PPE68 between LTBI and patients with active TB was 0.837 (95%CI: 0.72-0.97) and 0.75 (95%CI: 0.63-0.89), respectively. Applying the 12.4 pg/mL cut-off for IL-2 induced by PE35 in the present study population resulted in a sensitivity of 78%, specificity of 78%, PPV of 78% and NPV of 100%. In addition, a sensitivity of 81%, specificity of 70%, PPV of 67% and NPV of 87% were obtained with the 4.4 pg/mL cut-off for IL-2 induced by PPE68. In conclusion, peptides of the antigens PE35 and PPE68, absent from commonly used BCG strains, stimulated strong IL-2-positive T cell responses in patients with LTBI. This study confirms IL-2 induced by PE35 and PPE68 as a sensitive and specific biomarker and highlights IL-2 as a promising new adjunct marker for discriminating LTBI from active TB infection.

Keywords: IL-2, PE35, PPE68, tuberculosis

Procedia PDF Downloads 405
27736 Identification of Hepatocellular Carcinoma Using Supervised Learning Algorithms

Authors: Sagri Sharma

Abstract:

Analysis of diseases integrating multiple factors increases the complexity of the problem, and therefore the development of frameworks for the analysis of diseases is currently a topic of intense research. Due to the inter-dependence of the various parameters, the use of traditional methodologies has not been very effective, and consequently newer methodologies are being sought to deal with the problem. Supervised learning algorithms are commonly used for performing prediction on previously unseen data. These algorithms are used in applications ranging from image analysis to protein structure and function prediction; they are trained on a known dataset to produce a predictor model that generates reasonable predictions for the response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses spanning entire genomes. A well-known machine learning algorithm, the Support Vector Machine, is therefore applied to analyze the expression levels of thousands of genes simultaneously in a timely, automated and cost-effective way. The objectives of the presented work are the development of a methodology to identify genes relevant to Hepatocellular Carcinoma (HCC) from gene expression datasets utilizing supervised learning algorithms and statistical evaluations, along with the development of a predictive framework that can perform classification tasks on new, unseen data.
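
A minimal scikit-learn sketch of the kind of workflow described above (feature selection over thousands of genes followed by an SVM classifier, evaluated by cross-validation) is given below. The data are synthetic placeholders, not HCC expression profiles, and the pipeline settings are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Sketch of an SVM workflow for gene-expression classification, in the spirit of
# the abstract above; the data here are synthetic placeholders, not HCC profiles.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 5000))   # 120 samples x 5000 gene expression levels
y = rng.integers(0, 2, size=120)   # tumour vs. normal labels (random here)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=50)),   # keep the 50 most informative genes
    ("svm", SVC(kernel="linear", C=1.0)),
])
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```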

Keywords: artificial intelligence, biomarker, gene expression datasets, hepatocellular carcinoma, machine learning, supervised learning algorithms, support vector machine

Procedia PDF Downloads 422
27735 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling

Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines

Abstract:

The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds, a simple design, and no need for orientation to the wind direction; however, the power developed is lower than that of other types of wind turbines such as the Darrieus. To improve this performance, several studies have been carried out, such as optimizing the blade shape, using passive controls, and minimizing sources of power loss such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing a User Defined Function into the CFD model to account for the resisting torque. The User Defined Function is developed to simulate the action of the wind on the rotor; it receives the moment coefficient as an input to compute the rotational velocity that should be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied to the turbine and the resisting torque, which is considered a linear function. Linking the implemented User Defined Function with the CFD solver allows simulating the real operation of the Savonius turbine exposed to wind. It is noticed that the wind turbine takes a while to reach the stationary regime, where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
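
The coupling logic such a UDF implements can be illustrated conceptually: the aerodynamic torque, derived from the moment coefficient returned by the solver, accelerates the rotor while a resisting torque that is linear in the rotational speed opposes it. The Python sketch below reproduces that update with invented parameter values; it is not the authors' Fluent UDF.

```python
# Conceptual sketch of the rotor-speed update performed by the UDF described
# above: J*dw/dt = T_aero - T_resist, with T_resist = c*w (linear resisting
# torque). All parameter values are invented; this is not the authors' UDF.
rho, V = 1.225, 7.0          # air density (kg/m^3), wind speed (m/s)
D, H = 0.9, 1.0              # rotor diameter and height (m)
A, R = D * H, D / 2.0        # swept area and rotor radius
J, c = 0.8, 0.25             # rotor inertia (kg m^2), resisting-torque coefficient
dt, omega = 1e-3, 0.0

def aero_torque(cm):
    """Aerodynamic torque from the moment coefficient returned by the solver."""
    return cm * 0.5 * rho * A * R * V**2

for step in range(20000):
    cm = 0.25                          # placeholder; the CFD solver supplies this
    omega += dt * (aero_torque(cm) - c * omega) / J
tsr = omega * R / V                    # tip speed ratio at (quasi-)steady state
print(f"omega = {omega:.2f} rad/s, TSR = {tsr:.2f}")
```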

Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances

Procedia PDF Downloads 150
27734 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies

Authors: Jaballah Jamil

Abstract:

KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firm, which suggests that KLD ratings may not include private information. Moreover, the incapacity of KLD to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates the returns of a number of equally-weighted portfolios, excess stock returns and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes significantly and negatively influence the stock returns and book-to-market ratio of rated firms. This finding suggests that a rise in corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated the KLD effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.

Keywords: KLD social rating agency, investors' perception, investment decision, financial performance

Procedia PDF Downloads 431
27733 Challenges in the Material and Action-Resistance Factor Design for Embedded Retaining Wall Limit State Analysis

Authors: Kreso Ivandic, Filip Dodigovic, Damir Stuhec

Abstract:

The paper deals with the proposed 'material' and 'action-resistance factor' design methods for designing embedded retaining walls. A parametric analysis was performed to evaluate the differences between the output values of the two methods and to compare them with the classic approach. There is a challenge in the criteria for choosing between the proposed design methods in Eurocode 7 with respect to current technical regulations and regular engineering practice. The basic criterion for applying a particular design method is to ensure at least an equal degree of reliability relative to current practice. The procedure of combining the relevant partial coefficients according to the design methods was carried out. The use of the mentioned partial coefficients should result in the same level of safety, regardless of load combinations, material characteristics and problem geometry. The proposed approach of partial coefficients related to the material and/or action-resistance should aim at building a bridge between the calculations used so far and a pure probability analysis. The measure used to compare the results was an equivalent safety factor determined for each analysis. The results show a visibly wide span of equivalent values of the classic safety factors.

Keywords: action-resistance factor design, classic approach, embedded retaining wall, Eurocode 7, limit states, material factor design

Procedia PDF Downloads 227
27732 Organic Co-Polymer Monolithic Columns for Liquid Chromatography Mixed Mode Protein Separations

Authors: Ahmed Alkarimi, Kevin Welham

Abstract:

Organic mixed-mode monolithic columns were fabricated from glycidyl methacrylate-co-ethylene dimethacrylate-co-stearyl methacrylate, using glycidyl methacrylate and stearyl methacrylate as co-monomers representing 30% and 70%, respectively, of the liquid volume, with ethylene dimethacrylate as crosslinker and 2,2-dimethoxy-2-phenylacetophenone as the free radical initiator. The monomers were mixed with a binary porogenic solvent comprising propan-1-ol and methanol (0.825 mL each). The monolith was formed by photo-polymerization (365 nm) inside a borosilicate glass tube (1.5 mm ID and 3 mm OD x 50 mm length). The monolith was observed to have formed correctly by optical examination and generated reasonable backpressure, approximately 650 psi at a flow rate of 0.2 mL min⁻¹ of 50:50 acetonitrile:water. The morphological properties of the monolithic columns were investigated using scanning electron microscopy images and Brunauer-Emmett-Teller analysis; the results showed that the monolith was formed properly, with a 19.98 ± 0.01 mm² surface area, 0.0205 ± 0.01 cm³ g⁻¹ pore volume and 6.93 ± 0.01 nm average pore size. The polymer monolith formed was further investigated using proton nuclear magnetic resonance and Fourier transform infrared spectroscopy. The monolithic columns were investigated using high-performance liquid chromatography to test their ability to separate different samples with a range of properties. The columns displayed both hydrophobic/hydrophilic and hydrophobic/ion exchange interactions with the compounds tested, indicating true mixed-mode separations. The mixed-mode monolithic columns exhibited significant separation of proteins.

Keywords: LC separation, proteins separation, monolithic column, mixed mode

Procedia PDF Downloads 155
27731 A Knowledge-Based Development of Risk Management Approaches for Construction Projects

Authors: Masoud Ghahvechi Pour

Abstract:

Risk management is a systematic and regular process of identifying, analyzing and responding to risks throughout the project's life cycle in order to achieve the optimal level of elimination, reduction or control of risk. The purpose of project risk management is to increase the probability and effect of positive events and reduce the probability and effect of unpleasant events on the project. Risk management is one of the most fundamental parts of project management, since unmanaged or untransmitted risks can be one of the primary factors of failure in a project. Effective risk management does not amount to risk regression, which is apparently the cheapest option. The main problem with this option is economic sensitivity, because what is potentially profitable is by definition risky, and what poses no risk is not economically interesting and does not bring tangible benefits. Therefore, in relation to the implemented project, effective risk management means finding a 'middle ground': on the one hand, protection against risk from the negative direction by means of accurate identification and classification of risk, which leads to a comprehensive analysis; on the other hand, management using all mathematical and analytical tools, based on checking the maximum benefits of these decisions. A detailed analysis, taking into account all aspects of the company, including stakeholder analysis, allows us to add what will become tangible benefits for the project in the future to effective risk management. Identifying project risk is based on considering which types of risk may affect the project, and also refers to specific parameters and to estimating the probability of their occurrence in the project. These conditions can be divided into three groups, certainty, uncertainty, and risk, which in turn support three types of investment attitude, risk preference, risk neutrality, and risk aversion, and their measurement. The result of risk identification and project analysis is a list of events that indicates the cause and probability of each event, together with a final assessment of its impact on the environment.

Keywords: risk, management, knowledge, risk management

Procedia PDF Downloads 55
27730 Quantum Confinement in LEEH Capped CdS Nanocrystalline

Authors: Mihir Hota, Namita Jena, S. N. Sahu

Abstract:

LEEH (L-cysteine ethyl ester hydrochloride) capped CdS semiconductor nanocrystals are grown at 80 °C using a simple chemical route. Photoluminescence (PL), optical absorption (UV) and transmission electron microscopy (TEM) have been carried out to evaluate the structural and optical properties of the nanocrystals. Optical absorption studies have been carried out to optimize the sample. XRD and TEM analysis shows that the nanocrystals belong to the FCC structure and have an average size of 3 nm, while a bandgap of 2.84 eV is estimated from the photoluminescence analysis. The nanocrystals emit bluish light when excited with a 355 nm laser.

Keywords: cadmium sulphide, nanostructures, luminescence, optical properties

Procedia PDF Downloads 393
27729 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study

Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim

Abstract:

Renewable energy-based micro-grids are presently attracting significant consideration. The smart grid system is presently considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating technical and economic performance along with the environmental impacts. The micro-grid load consists of two small factories supplied with electricity, and both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated load pattern peaks at 76 kW. The system is modeled and simulated with the MATLAB/Simulink tool to identify the technical issues based on the renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity. The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, as the cost of generated electricity (COE) is 0.266 $/kWh and 0.316 $/kWh, respectively. When the cost of carbon dioxide emissions is considered, the off-grid system becomes competitive with the on-grid system, as the COE is found to be (0.256 $/kWh, 0.266 $/kWh) for the on-grid and off-grid systems, respectively.

Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts

Procedia PDF Downloads 262
27728 Comparative Analysis of the Expansion Rate and Soil Erodibility Factor (K) of Some Gullies in Nnewi and Nnobi, Anambra State Southeastern Nigeria

Authors: Nzereogu Stella Kosi, Igwe Ogbonnaya, Emeh Chukwuebuka Odinaka

Abstract:

A comparative analysis of the expansion rate and soil erodibility of some gullies in Nnewi and Nnobi, both of the Nanka Formation, was carried out. The study involved an integration of field observations, geotechnical analysis, slope stability analysis, multivariate statistical analysis, gully expansion rate analysis, and determination of the soil erodibility factor (K) from the Revised Universal Soil Loss Equation (RUSLE). Fifteen representative gullies were studied extensively, and the results reveal that the geotechnical properties of the soil, topography, vegetation cover, rainfall intensity, and the anthropogenic activities in the study area were the major factors propagating and influencing the erodibility of the soils. The specific gravity of the soils ranged from 2.45-2.66 and 2.54-2.78 for Nnewi and Nnobi, respectively. Grain size distribution analysis revealed that the soils are composed of gravel (5.77-17.67%), sand (79.90-91.01%), and fines (2.36-4.05%) for Nnewi and gravel (7.01-13.65%), sand (82.47-88.67%), and fines (3.78-5.02%) for Nnobi. The soils are moderately permeable, with values ranging from 2.92 x 10⁻⁵ - 6.80 x 10⁻⁴ m/sec and 2.35 x 10⁻⁶ - 3.84 x 10⁻⁴ m/sec for Nnewi and Nnobi, respectively. All have low cohesion values, ranging from 1-5 kPa and 2-5 kPa, and internal friction angles ranging from 29-38° and 30-34° for Nnewi and Nnobi, respectively, which suggests that the soils have low shear strength and are susceptible to shear failure. Furthermore, the compaction test revealed that the soils were loose and easily erodible, with values of maximum dry density (MDD) and optimum moisture content (OMC) ranging from 1.82-2.11 g/cm³ and 8.20-17.81% for Nnewi and 1.98-2.13 g/cm³ and 6.00-17.80%, respectively. The plasticity index (PI) of the fines showed that they are nonplastic to low-plasticity soils and highly liquefiable, with values ranging from 0-10% and 0-9% for Nnewi and Nnobi, respectively. Multivariate statistical analyses were used to establish relationships among the determined parameters. Slope stability analysis gave factor of safety (FoS) values in the range of 0.50-0.76 and 0.82-0.95 for the saturated condition and 0.73-0.98 and 0.87-1.04 for the unsaturated condition for Nnewi and Nnobi, respectively, indicating that the slopes are generally unstable to critically stable. The erosion expansion rate analysis for a fifteen-year period (2005-2020) revealed average longitudinal expansion rates of 36.05 m/yr, 10.76 m/yr, and 183 m/yr for the Nnewi, Nnobi, and Nanka type gullies, respectively. The soil erodibility factors (K) are 8.57 x 10⁻² and 1.62 x 10⁻⁴ for Nnewi and Nnobi, respectively, indicating that the soils in Nnewi have higher erodibility potential than those of Nnobi. From the study, both the Nnewi and Nnobi areas are highly prone to erosion. However, based on the relatively lower fine content of the soil, relatively lower topography, steeper slope angle, and sparsely vegetated terrain in Nnewi, soil erodibility and gully intensity are more profound in Nnewi than in Nnobi.

Keywords: soil erodibility, gully expansion, Nnewi-Nnobi, slope stability, factor of safety

Procedia PDF Downloads 119
27727 Investigation of Genetic Diversity of Tilia tomentosa Moench. (Silver Lime) in Duzce-Turkey

Authors: Ibrahim Ilker Ozyigit, Ertugrul Filiz, Seda Birbilener, Semsettin Kulac, Zeki Severoglu

Abstract:

In this study, we performed a genetic diversity analysis of Tilia tomentosa genotypes using randomly amplified polymorphic DNA (RAPD) primers. A total of 28 genotypes were used, including 25 members from the urban ecosystem and 3 genotypes from the forest ecosystem as an outgroup. The 8 RAPD primers produced a total of 53 bands, of which 48 (90.6%) were polymorphic. The percentage of polymorphic loci (P), observed number of alleles (Na), effective number of alleles (Ne), Nei's (1973) gene diversity (h), and Shannon's information index (I) were found to be 94.29%, 1.94, 1.60, 0.34, and 0.50, respectively. The unweighted pair-group method with arithmetic average (UPGMA) cluster analysis revealed two major groups. The genotypes of the urban and forest ecosystems showed a high genetic similarity, ranging between 28% and 92%, and these genotypes did not separate from each other in the UPGMA tree. Also, urban and forest genotypes clustered together in the principal component analysis (PCA).
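
A small numpy sketch of how the reported per-locus statistics (P, Ne, h, I) are computed from RAPD band data is given below. The band-presence matrix is a made-up placeholder, and allele frequencies use the square-root estimate for dominant markers (null-allele frequency q = sqrt(frequency of band absence)), which assumes Hardy-Weinberg proportions; this is an illustration, not the authors' dataset or software.

```python
import numpy as np

# Sketch of RAPD diversity statistics (P, Ne, h, I) from a band-presence matrix.
# Placeholder data; square-root allele-frequency estimate for dominant markers.
bands = np.array([            # rows = genotypes, columns = RAPD loci (1 = band present)
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
])
q = np.sqrt(1.0 - bands.mean(axis=0))      # null-allele frequency per locus
p = 1.0 - q
freqs = np.stack([p, q])                    # biallelic frequencies per locus

ne = 1.0 / np.sum(freqs**2, axis=0)                       # effective number of alleles
h = 1.0 - np.sum(freqs**2, axis=0)                        # Nei's gene diversity
i = -np.sum(freqs * np.log(freqs), axis=0)                # Shannon's information index
polymorphic = np.mean((bands.min(axis=0) == 0) & (bands.max(axis=0) == 1)) * 100

print(f"P = {polymorphic:.1f}%, Ne = {ne.mean():.2f}, h = {h.mean():.2f}, I = {i.mean():.2f}")
```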

Keywords: Tilia tomentosa, genetic diversity, urban ecosystem, RAPD, UPGMA

Procedia PDF Downloads 506
27726 Climate Change and Tourism: A Scientometric Analysis Using Citespace

Authors: Yan Fang, Jie Yin, Bihu Wu

Abstract:

The interaction between climate change and tourism is one of the most promising research areas of recent decades. In this paper, a scientometric analysis of 976 academic publications between 1990 and 2015 related to climate change and tourism is presented in order to characterize the intellectual landscape by identifying and visualizing the evolution of the collaboration network, the co-citation network, and emerging trends of citation burst and keyword co-occurrence. The results show that the number of publications in this field has increased rapidly and it has become an interdisciplinary and multidisciplinary topic. The research areas are dominated by Australia, USA, Canada, New Zealand, and European countries, which have the most productive authors and institutions. The hot topics of climate change and tourism research in recent years are further identified, including the consequences of climate change for tourism, necessary adaptations, the vulnerability of the tourism industry, tourist behaviour and demand in response to climate change, and emission reductions in the tourism sector. The work includes an in-depth analysis of a major forum of climate change and tourism to help readers to better understand global trends in this field in the past 25 years.

Keywords: climate change, tourism, scientometrics, CiteSpace

Procedia PDF Downloads 405
27725 Towards Safety-Oriented System Design: Preventing Operator Errors by Scenario-Based Models

Authors: Avi Harel

Abstract:

Most accidents are commonly attributed in hindsight to human errors, yet most methodologies for safety focus on technical issues. According to the Black Swan theory, this paradox is due to insufficient data about the ways systems fail. The article presents a study of the sources of errors and proposes a methodology for utility-oriented design, comprising methods for coping with each of the sources identified. Accident analysis indicates that errors typically result from difficulties of operating in exceptional conditions. Therefore, following STAMP, the focus should be on preventing exceptions. Exception analysis indicates that they typically involve an improper account of the operational scenario, due to deficiencies in system integration. The methodology proposes a model, which is a formal definition of the system operation, as well as principles and guidelines for safety-oriented system integration. The article calls for the development and integration of tools for recording and analyzing the system activity during operation, which are required to implement and validate the model.

Keywords: accidents, complexity, errors, exceptions, interaction, modeling, resilience, risks

Procedia PDF Downloads 192
27724 The Representations of Protesters in the UK National Daily Press: Pro- and Anti-Brexit Demonstrations 2016-2019

Authors: Charlotte-Rose Kennedy

Abstract:

In a political climate divided by Brexit, it is crucial to be critical of the press, as it is the apparatus which political authorities use to impose their laws and shape public opinion. Although large protests have the power to shake and disrupt policy-making by making it difficult for governments to ignore their goals, the British press historically constructs protesters as delegitimate, deviant, and criminal, which could limit protests' credibility and democratic power. This paper explores how the remain-supporting daily UK press (The Mirror, Financial Times, The Independent, The Guardian) and the leave-supporting daily UK press (The Daily Mail, The Daily Star, The Sun, The Express, The Telegraph) discursively constructed every pro- and anti-Brexit demonstration from 2016 to 2019. 702 instances of the terms 'protester', 'protesters', 'protestor' and 'protestors' were analyzed through both transitivity analysis and critical discourse analysis. This mixed-methods approach allowed for the analysis of how the UK press perpetuated and upheld social ideologies about protests through their specific grammatical and language choices. The results of this analysis found that both the remain- and leave-supporting press utilized the same discourses to report on protests they oppose and protests they support. For example, the remain-backing The Mirror used water metaphors regularly associated with influxes of refugees and asylum seekers to support the protesters on the remain protest 'Final Say' and oppose the protesters on the leave protest 'March to Leave'. Discourses of war, violence, and victimhood are also taken on by both sides of the press Brexit debate and are again used to support and oppose the same arguments. Finally, the paper concludes that these analogous discourses do nothing to help the already marginalized social positions of protesters in the UK and could potentially lead to reduced public support for demonstrations. This could, in turn, facilitate the government in introducing increasingly restrictive legislation in relation to freedom of assembly rights, which could be detrimental to British democracy.

Keywords: Brexit, critical discourse analysis, protests, transitivity analysis, UK press

Procedia PDF Downloads 173
27723 Evaluation of Ensemble Classifiers for Intrusion Detection

Authors: M. Govindarajan

Abstract:

One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods. The standard homogeneous ensemble methods include error-correcting output codes and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models on standard intrusion detection datasets.
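
A minimal scikit-learn sketch of the two ensemble styles compared above is given below, run on a synthetic stand-in for an intrusion-detection dataset. The paper's RBF base learner is approximated here by an RBF-kernel SVM, and a voting combiner stands in for the heterogeneous combination; this is an illustration of the ensemble idea, not the authors' exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Homogeneous (bagging) vs. heterogeneous (voting) ensembles on synthetic data
# standing in for an intrusion-detection set; illustrative settings only.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           random_state=0)

# Homogeneous ensemble: bagging many copies of one base learner.
bagged = BaggingClassifier(SVC(kernel="rbf", probability=True),
                           n_estimators=10, random_state=0)

# Heterogeneous ensemble: combine different base learners by (soft) voting.
voted = VotingClassifier([("rbf", SVC(kernel="rbf", probability=True)),
                          ("linear", SVC(kernel="linear", probability=True))],
                         voting="soft")

for name, model in [("bagged SVM", bagged), ("hybrid voting", voted)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```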

Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy

Procedia PDF Downloads 242
27722 Syntactic Ambiguity and Syntactic Analysis: Transformational Grammar Approach

Authors: Olufemi Olupe

Abstract:

Within linguistics, various approaches have been adopted to the study of language. One such approach is syntax. Syntax is the aspect of the grammar of a language which deals with how words are put together to form phrases and sentences and how such structures are interpreted in language. Ambiguity, which is also germane to this discourse, concerns the uncertainty of meaning that results from the possibility of a phrase or sentence being understood and interpreted in more than one way. In the light of the above, this paper attempts a syntactic study of syntactic ambiguities in the English language, using the Transformational Generative Grammar (TGG) approach. In doing this, phrases and sentences were raised, with each description followed by relevant analysis. The findings of the work reveal that ambiguity cannot always be disambiguated by means of syntactic analysis alone, without recourse to semantic interpretation. A further finding shows that some syntactically ambiguous structures cannot be analysed into two surface structures despite the fact that there is more than one deep structure. The paper concludes that as long as ambiguity remains in language, it will continue to pose a problem of understanding to second language learners. Users of English as a second language must, however, make a conscious effort to avoid its usage to achieve effective communication.

Keywords: language, syntax, semantics, morphology, ambiguity

Procedia PDF Downloads 384
27721 The Environmental and Economic Analysis of Extended Input-Output Table for Thailand’s Biomass Pellet Industry

Authors: Prangvalai Buasan, Boonrod Sajjakulnukit, Thongchart Bowonthumrongchai

Abstract:

The demand for biomass pellets in the industrial sector has increased significantly since 2020. The revised version of Thailand's power development plan, as well as the Alternative Energy Development Plan, aims to promote biomass fuel consumption of around 485 MW by 2030. The replacement of solid fossil fuel with biomass pellets will affect the medium-term and long-term national benefits of all industries throughout the supply chain. Therefore, an evaluation of the environmental and economic impacts throughout the biomass pellet supply chain needs to be performed to provide better insight into the goods and financial flows of this activity. This study extended the national input-output table for the biomass pellet industry and applied the input-output analysis (IOA) method, a form of macroeconomic analysis, to interpret the transactions between industries in monetary units when the revised national power development plan is adopted and enforced. Greenhouse gas emissions from the energy and raw materials consumed throughout the supply chain are also evaluated. The total intermediate transactions of all economic sectors, which include the biomass pellet sector (CASE 2), increased by 0.02% when compared with the conservative case (CASE 1). The control total, which is the sum of total intermediate transactions and value added, increased by 0.07% in CASE 2 compared with CASE 1. The pellet production process emitted 432.26 MtCO2e per year. The major share of the GHG emissions comes from the plantation stage of the raw biomass.
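
The core calculation behind an input-output analysis of this kind is the Leontief quantity model, x = (I - A)^-1 f, optionally extended with an emission-intensity vector to trace GHG through the supply chain. The sketch below uses a toy 3-sector table with invented coefficients, not Thailand's extended IO table.

```python
import numpy as np

# Leontief input-output sketch of the kind of calculation behind the analysis
# above: total output x = (I - A)^-1 * f, with an emission-intensity vector to
# trace GHG through the supply chain. Toy 3-sector table, invented numbers.
A = np.array([[0.10, 0.05, 0.02],      # technical coefficients (inputs per unit output)
              [0.20, 0.15, 0.10],      # sectors: biomass, pellets, power (assumed labels)
              [0.05, 0.10, 0.05]])
f = np.array([50.0, 120.0, 300.0])     # final demand by sector (monetary units)

L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse
x = L @ f                              # total output required to meet final demand

e = np.array([0.8, 0.3, 0.5])          # GHG intensity per unit output (toy values)
print("total output:", np.round(x, 1))
print("embodied GHG:", round(float(e @ x), 1))
```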

Keywords: input-output analysis, environmental extended input-output analysis, macroeconomic planning, biomass pellets, renewable energy

Procedia PDF Downloads 92
27720 Round Addition DFA on Lightweight Block Ciphers with On-The-Fly Key Schedule

Authors: Hideki Yoshikawa, Masahiro Kaminaga, Arimitsu Shikoda, Toshinori Suzuki

Abstract:

Round addition differential fault analysis (DFA) using operation bypassing for lightweight block ciphers with an on-the-fly key schedule is presented. For 64-bit KLEIN and 64-bit LED, it is shown that a single pair of correct and faulty ciphertexts is sufficient to derive the secret master key. For PRESENT, one correct ciphertext and two faulty ciphertexts are required to reconstruct the secret key.

Keywords: differential fault analysis (DFA), round addition, block cipher, on-the-fly key schedule

Procedia PDF Downloads 698
27719 Investigation of Pollution and the Physical and Chemical Condition of Polour River, East of Tehran, Iran

Authors: Azita Behbahaninia

Abstract:

This research was carried out to determine the water quality and physico-chemical properties of the Polour River, one of the main branches of the Haraz River. The Polour River was studied over a period of one year, with samples taken from different stations along the main branch of the river. In the water samples, pH, DO, SO4, Cl, PO4, NO3, EC, BOD, COD, temperature, colour and the number of coliforms per litre were determined. ArcGIS was used for the zoning of phosphate concentration in the Polour River basin. The results indicated that the river is polluted at the Polour village station because of the discharge of domestic wastewater, at the Ziar village station because of agricultural wastewater, and at the aquaculture station because of fish pond wastewater. Statistical analysis shows that the regression relationship between the independent traits and coliform is significant at the 1% level. The coefficient of determination indicated that the independent traits explain 80% of the coliform variation, with the remaining 20% attributed to unknown parameters. The causality analysis showed that temperature (0.6) has the largest positive and direct effect on coliform, while sulfate has a direct and negative effect on coliform. The results of the causality analysis match the results of the regression analysis, and the other direct and indirect effects were negligible. The Kruskal-Wallis test showed that there are differences between sampling stations for the studied characteristics: the differences between stations are significant at the 1% level for temperature, DO, COD, EC, sulfate and coliform, and at the 5% level for phosphate.

Keywords: coliform, GIS, pollution, phosphate, river

Procedia PDF Downloads 458
27718 AniMoveMineR: Animal Behavior Exploratory Analysis Using Association Rules Mining

Authors: Suelane Garcia Fontes, Silvio Luiz Stanzani, Pedro L. Pizzigatti Corrêa, Ronaldo G. Morato

Abstract:

Environmental changes and major natural disasters have become most prevalent in the world due to the damage that humanity has caused to nature, and this damage directly affects the lives of animals. Thus, the study of animal behavior and animals' interactions with the environment can provide knowledge that guides researchers and public agencies in preservation and conservation actions. Exploratory analysis of animal movement can determine patterns of animal behavior, and with technological advances the ability to track animals has grown and, consequently, behavioral studies have expanded. There is a lot of research on animal movement and behavior, but we note that a proposal that combines resources, allows for exploratory analysis of animal movement, and provides statistical measures on individual animal behavior and its interaction with the environment is missing. The contribution of this paper is to present the framework AniMoveMineR, a unified solution that aggregates trajectory analysis and data mining techniques to explore animal movement data and provide a first step in answering questions about individual animal behavior and animals' interactions with other animals over time and space. We evaluated the framework using monitored jaguar data from Miranda, in the Brazilian Pantanal, in order to verify whether the use of AniMoveMineR allows the identification of the interaction level between these jaguars. The results were positive and provided indications about the individual behavior of the jaguars and about which jaguars have the highest or lowest correlation.
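
A tiny support/confidence sketch of the association-rule idea that such a framework builds on is given below: each "transaction" is the set of events observed in the same time window, and rules are kept when support and confidence exceed chosen thresholds. The jaguar IDs and habitat tags are hypothetical, not the monitored dataset, and this is not the AniMoveMineR code.

```python
from itertools import combinations

# Support/confidence sketch of association-rule mining over hypothetical
# movement "transactions" (events co-occurring in a time window).
transactions = [
    {"jaguarA", "jaguarB", "riverbank"},
    {"jaguarA", "riverbank"},
    {"jaguarA", "jaguarB", "forest"},
    {"jaguarB", "riverbank"},
    {"jaguarA", "jaguarB", "riverbank"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

items = set().union(*transactions)
for a, b in combinations(sorted(items), 2):
    rule_conf = confidence({a}, {b})
    if support({a, b}) >= 0.4 and rule_conf >= 0.6:
        print(f"{a} -> {b}: support={support({a, b]):.2f}".replace("]", "}")
              if False else f"{a} -> {b}: support={support({a, b}):.2f}, confidence={rule_conf:.2f}")
```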

Keywords: data mining, data science, trajectory, animal behavior

Procedia PDF Downloads 136
27717 Identification of Factors and Impacts on the Success of Implementing Extended Enterprise Resource Planning: Case Study of Manufacturing Industries in East Java, Indonesia

Authors: Zeplin Jiwa Husada Tarigan, Sautma Ronni Basana, Widjojo Suprapto

Abstract:

An ERP system integrates all data from the various departments within a company into one database. One department inputs the data, and many other departments can access and use the data through the connected information system. As many manufacturing companies in Indonesia implement ERP technology, many adjustments have to be made to align it with the business processes in the companies, especially the management policy and the competitive advantages. Companies that are successful in the initial implementation still have to maintain the system so that the initial success can develop along with the changing business processes of the company and keep matching business development and change. The continued success of the extended ERP implementation aims to achieve efficient and effective performance for the company. This research distributed 100 questionnaires to manufacturing companies in East Java, Indonesia, which have implemented ERP and have had it live for over five years. Of the 90 returned questionnaires, ten were disqualified because they came from companies that had implemented ERP for less than five years. Thus, only 80 questionnaires were used as data, giving a response rate of 80%. Based on the data results and analysis with PLS (Partial Least Squares), it is found that organizational commitment has an impact on user effectiveness and provides adequate IT infrastructure, and that user effectiveness has an impact on adequate IT infrastructure. The information quality of the company increases the implementation of the extended ERP in manufacturing companies in East Java, Indonesia.

Keywords: organization commitment, adequate IT infrastructure, information quality, extended ERP implementation

Procedia PDF Downloads 156
27716 Creep Analysis and Rupture Evaluation of High Temperature Materials

Authors: Yuexi Xiong, Jingwu He

Abstract:

The structural components in an energy facility, such as steam turbine machines, are operated under high stress and elevated temperature for an extended period, and thus creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. There are numerous creep models used for creep analysis, each with advantages and disadvantages in terms of accuracy and efficiency. Isochronous creep analysis is one of the simplified approaches, in which a full time-dependent creep analysis is avoided and an elastic-plastic analysis is instead conducted at each time point. This approach has been established based on the rupture-dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from creep fundamentals: the Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability criteria. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions of simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria. Comparisons are made between the existing criteria and the new one using several example materials. Both strain increase and stress relaxation form a full picture of the creep behaviour of a material under high temperature over an extended period, and it is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations. While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under a load-controlled condition, the strain rate vs. rupture time equation reflects how the rupture time behaves under a strain-controlled condition. Among the four existing failure criteria for rupture life prediction, the Stress Damage and Strain Damage criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability criteria provide predictions in between, which are believed to be more accurate because the strain rate and strain capability reflect the creep rupture behaviour more directly than stress. A modified Strain Capability criterion is proposed, making use of the two sets of creep equations, and is therefore considered to be more accurate than the original Strain Capability criterion.
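
A worked sketch of the Larson-Miller relation referenced above is given below: LMP = T(C + log10 t_r), with T in kelvin and t_r in hours, inverted to estimate rupture time at a different temperature for the same stress (same LMP). The constant C and the test point are illustrative assumptions, not material data.

```python
import math

# Larson-Miller parameter sketch: LMP = T * (C + log10(t_r)).
# C and the test point below are illustrative, not material data.
C = 20.0

def lmp(temp_k, rupture_hours):
    return temp_k * (C + math.log10(rupture_hours))

def rupture_hours(temp_k, lmp_value):
    return 10 ** (lmp_value / temp_k - C)

# Suppose a given stress level produces rupture after 1,000 h at 873 K in a test.
P = lmp(873.0, 1_000.0)
# Same stress (same LMP) at a lower service temperature of 823 K:
print(f"LMP = {P:.0f}, predicted rupture life at 823 K = {rupture_hours(823.0, P):.2e} h")
```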

Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines

Procedia PDF Downloads 284
27715 Performance of the New Laboratory-Based Algorithm for HIV Diagnosis in Southwestern China

Authors: Yanhua Zhao, Chenli Rao, Dongdong Li, Chuanmin Tao

Abstract:

The Chinese Centers for Disease Control and Prevention (CCDC) issued a new laboratory-based algorithm for HIV diagnosis in April 2016, which initially screens with a combination HIV-1/HIV-2 antigen/antibody fourth-generation immunoassay (IA) followed, when reactive, by an HIV-1/HIV-2 undifferentiated antibody IA in duplicate. Reactive specimens with concordant results undergo supplemental tests with western blots or HIV-1 nucleic acid tests (NATs), and non-reactive specimens with discordant results receive HIV-1 NATs, p24 antigen tests or follow-up tests after 2-4 weeks. However, little data evaluating the application of the new algorithm have been reported to date. This study aimed to evaluate the performance of the new laboratory-based HIV diagnostic algorithm in an inpatient population of Southwest China over the initial 6 months by comparing it with the old algorithm. Plasma specimens collected from inpatients from May 1, 2016, to October 31, 2016, were submitted to the laboratory and screened for HIV infection by both the new HIV testing algorithm and the old version. The sensitivity and specificity of the algorithms and the difference in the number of categorized plasma specimens were calculated. Under the new algorithm for HIV diagnosis, 170 of the total 52 749 plasma specimens were confirmed as HIV-infected (0.32%). The sensitivity and specificity of the new algorithm were 100% (170/170) and 100% (52 579/52 579), respectively, while 167 HIV-1 positive specimens were identified by the old algorithm, with a sensitivity of 98.24% (167/170) and a specificity of 100% (52 579/52 579). Three acute HIV-1 infections (AHIs) and two early HIV-1 infections (EHIs) were identified by the new algorithm; the former were missed by the old procedure. Compared with the old version, the new algorithm produced fewer WB-indeterminate results (2 vs. 16, p = 0.001), which led to fewer follow-up tests. Therefore, the new HIV testing algorithm is more sensitive for detecting acute HIV-1 infections while maintaining the ability to verify established HIV-1 infections, and it can dramatically decrease the number of WB-indeterminate specimens.
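
The sensitivity and specificity figures quoted above follow directly from the reported counts (170 confirmed infections among 52 749 specimens; the old algorithm identified 167 of the 170). The short sketch below simply reproduces that arithmetic.

```python
# Reproducing the sensitivity/specificity arithmetic quoted above from the
# reported counts; no assumptions beyond the figures given in the abstract.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

total, positives = 52_749, 170
negatives = total - positives                       # 52,579

print(f"new algorithm: sens = {sensitivity(170, 0):.2%}, spec = {specificity(negatives, 0):.2%}")
print(f"old algorithm: sens = {sensitivity(167, 3):.2%}, spec = {specificity(negatives, 0):.2%}")
```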

Keywords: algorithm, diagnosis, HIV, laboratory

Procedia PDF Downloads 391
27714 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components by forming a 'Faraday cage'. The reliability requirements for a conductive adhesive vary widely depending on the application and the expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the conductive adhesive. The degradation of the adhesive depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature/high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures to field performance, systematic development of an acceleration factor between the test conditions and the field conditions is crucial. Common acceleration factor models such as the Arrhenius model are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in an electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, it is seen that the degradation is nonlinear in time and exhibits a square-root-of-time dependence. It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. This method can be extended to applications where the nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the use of the conventional linear degradation approach can overestimate or underestimate the field performance. This work provides guidelines for the suitability of the linear degradation approximation for such varied applications.
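
A minimal numerical sketch of the point made above is given below: if the monitored parameter (e.g. a capacitance shift) degrades as the square root of time, extrapolating a linear rate fitted over the accelerated-test window mispredicts the time to reach a failure threshold. All numbers are invented placeholders, not the authors' test data or model.

```python
import numpy as np

# If a parameter degrades as sqrt(t), a linear rate fitted over the test window
# mispredicts the time to reach a failure threshold. Invented numbers only.
k = 0.5                       # degradation coefficient, %-shift per sqrt(hour)
threshold = 20.0              # % shift treated as failure

t_test = np.linspace(1, 500, 100)          # accelerated-test window (hours)
shift = k * np.sqrt(t_test)                # true square-root-of-time behaviour

slope, intercept = np.polyfit(t_test, shift, 1)     # apparent linear degradation
t_fail_linear = (threshold - intercept) / slope     # linear extrapolation
t_fail_sqrt = (threshold / k) ** 2                  # true sqrt-t time to threshold

print(f"linear extrapolation: {t_fail_linear:.0f} h, sqrt-t model: {t_fail_sqrt:.0f} h")
```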

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model

Procedia PDF Downloads 127