Search results for: gene frequency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5367


3477 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds

Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott

Abstract:

The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, its parameterization has not been established, owing to variation in the field data; instead, a number of drag coefficient model formulae have been proposed, almost none of which address the extreme wind speed range. For such models, it is unclear how the drag coefficient changes in the extreme range as the wind speed increases. In this study, we investigated the effect of drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale by comparing two different models: one that does not address the extreme wind speed range and one that does. We found that the difference between the models in the annual global air-sea momentum flux was small, because the occurrence frequency of strong wind (20 m/s or more) was approximately 1%. However, the models diverged at mid-latitudes, where the annual mean air-sea momentum flux is large and strong winds occur frequently. In addition, the estimated data showed that the difference between the models in the drag coefficient was large in the extreme wind speed range, reaching 23% at wind speeds of 35 m/s or more. These results clearly show that the difference between the two drag coefficient models has a significant impact on the estimation of regional air-sea momentum flux at extreme wind speeds, such as those in a tropical cyclone environment. Furthermore, we estimated the air-sea momentum flux using several kinds of drag coefficient models. We will also provide data from an observation tower and results from CFD (Computational Fluid Dynamics) concerning the influence of wind flow at and around the site.
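The bulk formula underlying the comparison above is τ = ρ_air · C_d · U₁₀². As an illustration, the following sketch contrasts a constant drag coefficient with a hypothetical wind-speed-dependent one in the style of Large and Pond (1981); neither is one of the paper's actual models.

```python
# Sketch: air-sea momentum flux tau = rho_air * C_d * U10^2 under two
# illustrative drag-coefficient models (not the specific models of the paper).

RHO_AIR = 1.225  # kg/m^3, air density near sea level

def cd_constant(u10):
    """Model A: constant drag coefficient, no extreme-wind behaviour."""
    return 1.2e-3

def cd_linear(u10):
    """Model B: Large & Pond (1981)-style linear growth with wind speed,
    Cd = (0.49 + 0.065*U10) * 1e-3 for U10 >= 11 m/s; the extension into
    the extreme range is hypothetical."""
    if u10 < 11.0:
        return 1.2e-3
    return (0.49 + 0.065 * u10) * 1e-3

def momentum_flux(u10, cd_model):
    """Wind stress (N/m^2) from the bulk formula tau = rho * Cd * U^2."""
    return RHO_AIR * cd_model(u10) * u10 ** 2

for u in (10.0, 20.0, 35.0):  # moderate, strong, extreme wind speeds
    tau_a = momentum_flux(u, cd_constant)
    tau_b = momentum_flux(u, cd_linear)
    print(f"U10={u:5.1f} m/s  tau_A={tau_a:6.2f}  tau_B={tau_b:6.2f}  "
          f"diff={(tau_b - tau_a) / tau_a:+.0%}")
```

Because the flux scales with U², even modest disagreement in C_d at high wind speeds produces a large difference in stress, which is the regional effect the abstract describes.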

Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)

Procedia PDF Downloads 367
3476 StockTwits Sentiment Analysis on Stock Price Prediction

Authors: Min Chen, Rubi Gupta

Abstract:

Understanding and predicting stock market movements is a challenging problem. It is believed that stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share discussions and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bag-of-words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify each tweet's sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. Experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over nine months demonstrate the effectiveness of our approach in improving prediction accuracy.
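The pipeline described above (TF-IDF featurization, sentiment classification, Pearson correlation with daily price moves) can be sketched as follows; the tweets, labels, and daily series are invented placeholders, not the StockTwits data.

```python
# Minimal sketch of the described pipeline: TF-IDF features, a bullish/bearish
# classifier, and Pearson correlation of daily sentiment with daily returns.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled tweets (invented; 1 = bullish, 0 = bearish).
tweets = [
    "great earnings buying more", "strong quarter very bullish",
    "love this stock to the moon", "solid growth upgrading now",
    "terrible guidance selling now", "bearish weak demand ahead",
    "awful quarter dumping shares", "downgrade expecting big losses",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Featurize with TF-IDF over unigrams and bigrams, then fit a classifier.
vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
clf = LogisticRegression().fit(vec.fit_transform(tweets), labels)
pred = clf.predict(vec.transform(["very bullish on earnings"]))[0]

# Correlate invented daily aggregated sentiment with daily price moves.
daily_sentiment = np.array([0.8, 0.6, -0.4, 0.2, -0.7])
daily_return = np.array([0.010, 0.005, -0.008, 0.002, -0.012])
r = np.corrcoef(daily_sentiment, daily_return)[0, 1]
print(f"predicted sentiment: {pred}, Pearson r: {r:.2f}")
```

In a real run, the daily sentiment series would be the classifier's outputs aggregated per trading day before correlating with returns.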

Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing

Procedia PDF Downloads 149
3475 Prediction and Identification of a Permissive Epitope Insertion Site for ST Toxoid in cfaB from Enterotoxigenic Escherichia coli

Authors: N. Zeinalzadeh, Mahdi Sadeghi

Abstract:

Enterotoxigenic Escherichia coli (ETEC) is the most common cause of non-inflammatory diarrhea in developing countries, accounting for approximately 20% of all diarrheal episodes in children in these areas. ST is one of its most important virulence factors, and CFA/I is one of the frequent colonization factors involved in ETEC infection. ST and CfaB (the CFA/I subunit) are among the vaccine candidates against ETEC. However, ST, because of its small size, is poorly immunogenic in its natural form. To increase its immunogenic potential, we explored candidate positions for ST insertion within the CfaB sequence. After bioinformatics analysis, one candidate position was selected, and the chimeric gene (cfaB*st) was synthesized and expressed in E. coli BL21 (DE3). The chimeric recombinant protein was purified on Ni-NTA columns and characterized by western blot analysis. Residues 74-75 of the CfaB sequence could be a good candidate position for inserting ST and other epitopes.

Keywords: bioinformatics, CFA/I, enterotoxigenic E. coli, ST toxoid

Procedia PDF Downloads 445
3474 Identifying Promoters and Their Types Based on a Two-Layer Approach

Authors: Bin Liu

Abstract:

A prokaryotic promoter, consisting of two short DNA sequences located at the -35 and -10 positions, is responsible for controlling the initiation of gene expression. Different types of promoters have different functions, yet their consensus sequences are similar; moreover, consensus sequences may differ within the same promoter type, which makes promoter identification difficult. Unfortunately, existing computational methods treat promoter identification as a binary classification task and can only determine whether a query sequence belongs to one specific promoter type. Computational methods that effectively identify both promoters and their types are therefore desirable. Here, a two-layer predictor is proposed to address this problem: the first layer predicts whether a given sequence is a promoter, and the second layer predicts the type of any sequence judged to be a promoter. We also analyze the importance of features and sequence conservation in two respects: promoter identification and promoter type identification. To the best of our knowledge, this is the first computational predictor that detects both promoters and their types.
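A minimal sketch of such a two-layer scheme, using random forests as the keywords suggest, might look like this; the k-mer features, sequences, and type labels are invented for illustration and are not the paper's actual features or data.

```python
# Sketch of a two-layer predictor: layer 1 decides promoter vs non-promoter;
# layer 2 assigns a type only to sequences accepted by layer 1.
from itertools import product
from sklearn.ensemble import RandomForestClassifier

def kmer_features(seq, k=2):
    """Simple k-mer count vector as sequence features (illustrative only;
    str.count is non-overlapping, which is fine for this toy example)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    return [seq.count(km) for km in kmers]

# Invented toy training data.
promoters = ["TTGACATATAAT" * 2, "TTGACGTATACT" * 2, "TTGACTTATAAT" * 2]
non_promoters = ["GGCCGGCCGGCC" * 2, "ACGTACGTACGT" * 2, "CCCCGGGGAAAA" * 2]
types = [0, 0, 1]  # hypothetical subtypes for the promoter set

X1 = [kmer_features(s) for s in promoters + non_promoters]
y1 = [1] * len(promoters) + [0] * len(non_promoters)
layer1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X1, y1)
layer2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(
    [kmer_features(s) for s in promoters], types)

def predict(seq):
    """Return None for non-promoters, else the predicted promoter type."""
    if layer1.predict([kmer_features(seq)])[0] == 0:
        return None
    return layer2.predict([kmer_features(seq)])[0]
```

The second classifier is only ever consulted for sequences the first layer accepts, mirroring the hierarchical design described in the abstract.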

Keywords: promoter, promoter type, random forest, sequence information

Procedia PDF Downloads 182
3473 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis

Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro

Abstract:

For risk management of company-owned assets, risk assessment of real estate portfolios, and risk identification across an entire region, it is necessary to consider simultaneous damage to multiple buildings. This research focuses on the Sagami Trough earthquake tsunami, which could significantly affect the Japanese capital region, and proposes a method for simultaneous damage assessment using copulas that account for the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at the two sites were simulated using a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depth (five cases) for each of ten sources along the Sagami Trough. For each source, the frequency distribution of tsunami inundation depth was evaluated using the response surface method. Monte Carlo simulation was then conducted, and frequency distributions of tsunami inundation depth were evaluated at the target sites across all sources; these serve as the marginal distributions. Kendall's tau for the tsunami inundation simulations at the two sites was 0.83. Based on this value, the Gaussian copula, t-copula, Clayton copula, and Gumbel copula (n = 10,000) were generated, and the joint distributions of the damage rate were evaluated using the marginal distributions and the copulas. When the correlation of inundation depth between the two sites was included, the expected value hardly changed compared with the uncorrelated case, but with the Gumbel copula the ninety-ninth-percentile damage rate was approximately 2% and the maximum value approximately 6%.
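The copula sampling step can be sketched for the Gaussian copula as follows (the paper also uses t, Clayton, and Gumbel copulas); the lognormal marginals and the fragility rule are invented stand-ins for the simulated distributions.

```python
# Sketch of copula-based joint sampling: couple two assumed inundation-depth
# marginals with a Gaussian copula calibrated to the reported Kendall's tau.
import numpy as np
from scipy import stats

tau = 0.83                      # Kendall's tau reported for the two sites
rho = np.sin(np.pi * tau / 2)   # Gaussian-copula correlation from tau

rng = np.random.default_rng(0)
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)           # uniform marginals coupled by the copula

# Map uniforms through assumed site-specific marginals (lognormal depths, m).
depth_a = stats.lognorm.ppf(u[:, 0], s=0.5, scale=2.0)
depth_b = stats.lognorm.ppf(u[:, 1], s=0.6, scale=1.5)

# Toy fragility: joint damage rate grows with depth, saturating at 1.
damage = np.clip((depth_a + depth_b) / 10.0, 0.0, 1.0)
print("99th-percentile joint damage rate:", np.quantile(damage, 0.99))
```

Swapping the Gaussian copula for a Gumbel copula changes the upper-tail dependence, which is exactly where the abstract reports the models diverging.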

Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis

Procedia PDF Downloads 140
3472 Optimal Approach for Siewert Type II Adenocarcinoma of the Esophagogastric Junction: A Systematic Review and Meta-Analysis

Authors: Maatouk Mohamed, Nouira Mariem

Abstract:

Background and aims: Healthcare-associated infections (HAI) represent a major public health problem worldwide and are among the most serious adverse events in health care. The objectives of our study were to estimate the prevalence of HAI at the Charles Nicolle Hospital (CNH), to identify the main associated factors, and to estimate the frequency of antibiotic use. Methods: This was a cross-sectional study at the CNH with a single visit per department (October-December 2018). All patients present on the wards for more than 48 hours were included; patients from outpatient consultations and the emergency and dialysis departments were excluded. The infection-site definitions proposed by the Centers for Disease Control and Prevention (CDC) were used, and only clinically and/or microbiologically confirmed active HAIs were counted. Results: A total of 318 patients were included, with a mean age of 52 years and a sex ratio (female/male) of 1.05. Forty-one patients had one or more active HAIs, corresponding to a prevalence of 13.1% (95% CI: 9.3%-16.9%). The most frequent infection sites were urinary tract infections and pneumonia. Multivariate analysis among adult patients (>=18 years) (n=261) revealed that infection on admission (p=0.01), alcoholism (p=0.01), high blood pressure (p=0.008), having at least one invasive device inserted (p=0.004), and a history of recent surgery (p=0.03) significantly increased the risk of HAIs. More than one in three patients (35.4%) were under antibiotics on the day of the survey, of whom more than half (57.4%) were receiving two or more types of antibiotics. Conclusion: The prevalence of HAIs and of antibiotic prescriptions at the CNH was considerably high. An infection prevention and control committee, as well as an antibiotic stewardship program with continuous monitoring through repeated prevalence surveys, must be implemented to effectively limit the frequency of these infections.

Keywords: tumors, oesophagectomy, esophagogastric junction, systematic review

Procedia PDF Downloads 79
3471 Lexical Collocations in Medical Articles of Non-Native vs Native English-Speaking Researchers

Authors: Waleed Mandour

Abstract:

This study presents a multidimensional scrutiny of Benson et al.'s seven-category taxonomy of lexical collocations as used by Egyptian medical authors and their native-English-speaking peers. It investigates 212 medical papers, all published during a span of six years (2013 to 2018). The comparison is made against medical research articles by native speakers of English (25,238 articles in total, with over 103 million words) derived from the Directory of Open Access Journals (a 2.7-billion-word corpus). The corpus compiled from non-native speakers was annotated and marked up manually by the researcher according to the standards of Weisser. For the statistical comparisons, conventional frequency-based analysis was deployed alongside the relevant association measures (AMs), with LogDice used as per the recommendation of Kilgarriff et al. for comparing large corpora. Despite the terminological convergence of the subject corpora, the comparison confirms the previous literature: the non-native speakers' compositions reveal limited ranges of lexical collocations in terms of their distribution, together with a ubiquitous tendency to overuse the high-frequency multi-word units of native speakers across all lexical categories investigated. Furthermore, Egyptian authors, in contrast to their English-speaking peers, tend to embrace more collocations denoting quantitative rather than qualitative analyses in their papers. This empirical work contributes to English for Academic Purposes (EAP) and English as a Lingua Franca in Academic settings (ELFA), and it carries pedagogical implications that would promote a better quality of medical research papers published in Egyptian universities.
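The LogDice association measure mentioned above has a simple closed form (Rychlý's formulation): logDice = 14 + log₂(2·f_xy / (f_x + f_y)), where f_xy is the collocation frequency and f_x, f_y the individual word frequencies. A sketch with invented counts:

```python
# LogDice association measure (Rychlý). The maximum score is 14; each halving
# of the Dice coefficient lowers the score by 1, and scores are comparable
# across corpora of different sizes. Counts below are invented.
import math

def log_dice(f_xy, f_x, f_y):
    """LogDice score for a word pair with joint frequency f_xy."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

# Hypothetical counts for a medical collocation such as "randomized trial".
print(round(log_dice(f_xy=120, f_x=300, f_y=500), 2))
```

This corpus-size independence is why LogDice is recommended when comparing association strength between corpora as differently sized as the two described above.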

Keywords: corpus linguistics, EAP, ELFA, lexical collocations, medical discourse

Procedia PDF Downloads 126
3470 Major Histocompatibility Complex (MHC) Polymorphism and Disease Resistance

Authors: Oya Bulut, Oguzhan Avci, Zafer Bulut, Atilla Simsek

Abstract:

Livestock breeders have focused on improving production traits, with little or no attention to improving disease resistance traits. To determine the association between the genetic structure of individual gene loci and the likelihood of disease occurrence and development, the MHC (major histocompatibility complex) is frequently used. Because of its importance in the immune system, the MHC locus is considered a candidate gene region for resistance or susceptibility to different diseases. MHC molecules play a critical role in both innate and adaptive immunity and have been considered candidate molecular markers for associations between polymorphisms and resistance or susceptibility to disease. The purpose of this study is to review MHC genes, which have become an important area of study in animal husbandry in recent years, and to outline the relationship between MHC genes and resistance or susceptibility to disease.

Keywords: MHC, polymorphism, disease, resistance

Procedia PDF Downloads 628
3469 The Effect of General Corrosion on the Guided Wave Inspection of the Pipeline

Authors: Shiuh-Kuang Yang, Sheam-Chyun Lin, Jyin-Wen Cheng, Deng-Guei Hsu

Abstract:

The torsional guided-wave mode, T(0,1), has been applied to detect features and defects in pipelines, especially coated, elevated, and buried pipes. Unfortunately, the signals of minor corrosion can be covered by noise, because coating materials and the burial medium strongly attenuate the guided wave. Furthermore, the guided wave is attenuated even more seriously, and the signals become harder to identify, when the transducer ring is set on an area of general corrosion. The objective of this study is therefore to examine the effects of general corrosion on guided wave tests through experiments and signal processing techniques, based on the finite element method, the two-dimensional Fourier transform, and the continuous wavelet transform. Results show that the excitation energy is reduced when the transducer ring is set on a pipe surface with general corrosion. The non-uniform contact surface also produces unwanted asymmetric modes of the propagating guided wave; some of these mix with the T(0,1) mode and increase the difficulty of measurement, especially when a defect or localized corrosion lies within the general corrosion area. It is also shown that guided-wave attenuation increases with corrosion depth and with inspection frequency, whereas the coherent signals caused by the general corrosion decay with increasing frequency. The results of this research should help inspectors understand the impact of placing the transducer ring on an area of general corrosion and how to distinguish localized corrosion lying within such an area.
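The two-dimensional Fourier transform step used to separate propagating modes can be sketched on a synthetic two-mode wave field; the phase speeds, frequency, and sampling below are invented, not the experimental settings.

```python
# Sketch of frequency-wavenumber (2-D Fourier) analysis: two propagating
# modes with different phase speeds appear as distinct peaks in the
# 2-D spectrum of the space-time field. Signal is synthetic.
import numpy as np

nx, nt = 256, 512
dx, dt = 0.01, 1.25e-6              # spatial step (m) and time step (s)
x = np.arange(nx) * dx
t = np.arange(nt) * dt

c1, c2 = 3200.0, 2000.0             # assumed phase speeds of two modes (m/s)
f0 = 50e3                           # excitation frequency (Hz)
k1, k2 = 2 * np.pi * f0 / c1, 2 * np.pi * f0 / c2

# Field sampled over (t, x): a fast mode plus a slower, weaker mode.
u = (np.sin(2 * np.pi * f0 * t[:, None] - k1 * x[None, :])
     + 0.5 * np.sin(2 * np.pi * f0 * t[:, None] - k2 * x[None, :]))

spectrum = np.abs(np.fft.fft2(u))   # frequency-wavenumber magnitude
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print("dominant (freq-bin, wavenumber-bin):", peak)
```

Each mode concentrates at its own (f, k) pair, so the slow spurious modes excited by a non-uniform contact surface can be told apart from T(0,1) even when they overlap in the raw time trace.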

Keywords: guided wave, finite element method, two-dimensional Fourier transform, wavelet transform, general corrosion, localized corrosion

Procedia PDF Downloads 399
3468 Synthetic Method of Contextual Knowledge Extraction

Authors: Olga Kononova, Sergey Lyapin

Abstract:

The global information society requires transparency and reliability of data, as well as the ability to manage information resources independently, in particular to search, analyze, and evaluate information, thereby obtaining new expertise. Moreover, satisfying society's information needs increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance, and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing it. Explication of the various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation, the authors use the services of the e-library 'Humanitariana', such as contextual search, different query types (paragraph-oriented and frequency-ranked queries), and automatic extraction of knowledge from scientific texts. The multifunctional e-library 'Humanitariana' is implemented in an Internet architecture in a WWS configuration (Web browser / Web server / SQL server). An advantage of 'Humanitariana' is the possibility of combining the resources of several organizations: scholars and research groups may work in local network mode or in distributed IT environments, with the ability to call on the server resources of any participating organization. The paper discusses some specific cases of contextual knowledge explication using the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'. An analysis of trends in the subject-themed texts allowed the authors to propose a content analysis methodology that combines full-text search with the automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is a table containing a frequency-ranked list of words (nouns), together with columns giving the absolute frequency (count) and the relative frequency of occurrence of each word (in %). The analysis of the 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia; the media credited the main role in these processes to the government, which provided public services through specialized portals. Factor analysis revealed two pairs of factors statistically describing the terms used: human interaction (the user) and the state (the government, as organizer of the processes); interaction management (the public officer, as performer of the processes) and technology (infrastructure). Isolation of these factors should lead to changes in the model of electronic interaction between government and society. The study also identified the dominant social problems and the prevalence of different categories of subjects of computer gaming in scientific papers from 2005 to 2015. Several types of contextual knowledge are thus evident: micro context; macro context; dynamic context; thematic collections of queries (interactive contextual knowledge expanding the composition of e-library information resources); and multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm). Further studies can be pursued both by expanding the resource base on which they are held and by developing the appropriate tools.
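The 'terminogramma' table described above can be sketched as follows; a real implementation would keep only nouns via POS tagging, whereas this illustration counts all tokens of a toy text.

```python
# Sketch of a 'terminogramma': a frequency-ranked word list with absolute
# and relative frequencies, built from a toy text (invented content).
from collections import Counter

text = ("government provides services society government portal "
        "services user government infrastructure user services")
tokens = text.split()
counts = Counter(tokens)
total = sum(counts.values())

print(f"{'word':<16}{'abs':>5}{'rel %':>8}")
for word, freq in counts.most_common():
    print(f"{word:<16}{freq:>5}{100 * freq / total:>8.1f}")
```

The resulting ranked table is what the expert then inspects, and what factor analysis can be run over once built for each subcorpus.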

Keywords: contextual knowledge, contextual search, e-library services, frequency-ranked query, paragraph-oriented query, technologies of the contextual knowledge extraction

Procedia PDF Downloads 355
3467 Quartz Crystal Microbalance Based Hydrophobic Nanosensor for Lysozyme Detection

Authors: F. Yılmaz, Y. Saylan, A. Derazshamshir, S. Atay, A. Denizli

Abstract:

Quartz crystal microbalance (QCM), a high-resolution mass-sensing technique, measures changes in mass on an oscillating quartz crystal surface by measuring changes in the crystal's oscillation frequency in real time. Protein adsorption via hydrophobic interaction between a protein and a solid support, called hydrophobic interaction chromatography (HIC), is favorable in many cases, and some nanoparticles can be effectively applied for HIC. HIC takes advantage of protein hydrophobicity, promoting separation on the basis of hydrophobic interactions between immobilized hydrophobic ligands and nonpolar regions on the protein surface. Lysozyme is found in a variety of vertebrate cells and secretions, such as spleen, milk, tears, and egg white. Its common applications are as a cell-disrupting agent for extracting bacterial intracellular products, as an antibacterial agent in ophthalmologic preparations, as a food additive in milk products, and as a drug for the treatment of ulcers and infections; it has also been used in cancer chemotherapy. The aim of this study is to synthesize hydrophobic nanoparticles for lysozyme detection. For this purpose, methacryloyl-L-phenylalanine was chosen as the hydrophobic matrix. The hydrophobic nanoparticles were synthesized by the micro-emulsion polymerization method, and the hydrophobic QCM nanosensor was then characterized by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy, atomic force microscopy (AFM), and zeta size analysis. The nanosensor was tested for real-time detection of lysozyme from aqueous solution, and kinetic and affinity studies were performed using lysozyme solutions of different concentrations. The responses related to mass (Δm) and frequency (Δf) shifts were used to evaluate the adsorption properties.
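The link between the frequency shift (Δf) and the adsorbed mass (Δm) in QCM sensing is conventionally given by the Sauerbrey equation; the sketch below uses the standard textbook constant for a 5 MHz AT-cut crystal, which may differ from the crystal used in this study.

```python
# Sauerbrey relation behind QCM mass sensing: delta_m = -C * delta_f, with
# C ~ 17.7 ng/(cm^2 * Hz) for a 5 MHz AT-cut crystal (textbook value; the
# constant for the crystal used in the study may differ).

C_SAUERBREY = 17.7  # ng per cm^2 per Hz, 5 MHz AT-cut quartz

def adsorbed_mass(delta_f_hz):
    """Areal mass change (ng/cm^2) for a measured frequency shift (Hz);
    adsorption lowers the resonance frequency, so delta_f is negative."""
    return -C_SAUERBREY * delta_f_hz

# A hypothetical -40 Hz shift on lysozyme binding would correspond to:
print(adsorbed_mass(-40.0), "ng/cm^2")
```

This linear mass-frequency mapping is what makes the raw Δf traces directly usable for the kinetic and affinity analysis mentioned in the abstract.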

Keywords: nanosensor, HIC, lysozyme, QCM

Procedia PDF Downloads 344
3466 Compact Dual-band 4-MIMO Antenna Elements for 5G Mobile Applications

Authors: Fayad Ghawbar

Abstract:

The Multiple Input Multiple Output (MIMO) system is essential in 5G wireless communication to enhance channel capacity and provide a high data rate, which creates a need for dual polarization, vertical and horizontal. Furthermore, size reduction is critical in a MIMO system so that more antenna elements can be deployed, requiring a compact, low-profile design. This paper presents a compact dual-band four-element MIMO antenna system with pattern and polarization diversity. The proposed single-antenna structure is designed using two antenna layers, with a C shape in the front layer and a partial slot with a U-shaped cut in the ground to enhance isolation. The single antenna is printed on an FR4 dielectric substrate with an overall size of 18 mm × 18 mm × 1.6 mm. The four MIMO antenna elements were printed orthogonally on an FR4 substrate with dimensions of 36 × 36 × 1.6 mm3 and zero edge-to-edge separation. The proposed compact MIMO elements resonate at 3.4-3.6 GHz and 4.8-5 GHz. The measured and simulated S-parameters agree well, especially in the lower band, with a slight frequency shift of the measured results in the upper band due to fabrication imperfection. The design shows isolation better than 15 dB, and up to 22 dB, across the four MIMO elements. The MIMO diversity performance was evaluated in terms of efficiency, ECC, DG, TARC, and CCL. The total and radiation efficiencies were above 50% for all elements in both frequency bands. The ECC values were lower than 0.10, and the DG results were about 9.95 dB for all antenna elements. TARC results exhibited values lower than 0 dB, and lower than -25 dB in all MIMO elements at the dual bands. Moreover, the channel capacity losses of the MIMO system, expressed as CCL, were lower than 0.4 bits/s/Hz.
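The ECC and DG figures quoted above are conventionally computed from S-parameters (Blanch et al.'s formulation for a two-port element pair); the sketch below uses invented S-parameter values, not the measured ones.

```python
# Sketch of ECC and DG from S-parameters for a two-port antenna pair:
# ECC = |S11* S12 + S21* S22|^2 / ((1-|S11|^2-|S21|^2)(1-|S12|^2-|S22|^2)),
# DG = 10 * sqrt(1 - ECC^2). S-parameter values below are invented.
import math

def ecc_from_s(s11, s12, s21, s22):
    """Envelope correlation coefficient from complex S-parameters."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s12) ** 2 - abs(s22) ** 2))
    return num / den

def diversity_gain(ecc):
    """Diversity gain in dB for a given ECC."""
    return 10 * math.sqrt(1 - ecc ** 2)

# Hypothetical well-matched, well-isolated element pair.
ecc = ecc_from_s(s11=0.1 + 0.05j, s12=0.02 - 0.01j,
                 s21=0.02 - 0.01j, s22=0.12 + 0.04j)
print(f"ECC = {ecc:.4f}, DG = {diversity_gain(ecc):.2f} dB")
```

With ECC well below 0.10, DG sits just under 10 dB, consistent with the ~9.95 dB figure reported in the abstract.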

Keywords: compact antennas, MIMO antenna system, 5G communication, dual band, ECC, DG, TARC

Procedia PDF Downloads 140
3465 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface

Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves

Abstract:

In this work, the wideband performance under mesh discretization impoverishment is analyzed for the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey, and Wenhua Yu for the Finite-Difference Time-Domain (FDTD) method. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement they deliver has not been quantified broadly for wide frequency bands and poorly discretized meshes. Both approaches improve simulation accuracy without requiring dense meshes, making it possible to exploit poorly discretized meshes that reduce simulation time and computational expense while retaining a desired accuracy. However, their application has limitations regarding mesh impoverishment and the desired frequency range. The goal of this work is therefore to explore both the wideband and the mesh impoverishment performance of the approaches, giving a wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach modifies the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By accounting for the intersections, D-FDTD-Diel improves accuracy at the cost of computational preprocessing, a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach modifies the magnetic field update, taking into account the PEC curved-surface intersections within the mesh cells and, for a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes at different levels of discretization, using Polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing how the approaches' wideband performance drops along with the mesh impoverishment. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, demonstrating that poorly discretized FDTD meshes can be exploited more efficiently while retaining the desired accuracy. The results provide a broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, limitations which are not clearly defined or detailed in the literature and are, therefore, a novelty of this study. These approaches are also expected to be applied in the modeling of curved RF components for wideband and high-speed communication devices in future work.
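The conformal schemes discussed above modify the standard Yee updates; for context, a minimal non-conformal 1-D vacuum FDTD loop (with illustrative grid parameters, not those of the paper) looks like this:

```python
# Minimal 1-D vacuum FDTD loop with normalized (impedance-scaled) fields,
# showing the staggered Yee updates that the conformal schemes modify.
import numpy as np

c0 = 299_792_458.0          # speed of light (m/s)
nz, nt = 400, 600           # grid cells, time steps
dz = 1e-3                   # cell size (m)
dt = 0.5 * dz / c0          # time step satisfying the Courant condition

ez = np.zeros(nz)           # electric field at integer positions
hy = np.zeros(nz - 1)       # magnetic field at half-integer positions
coef = c0 * dt / dz         # normalized update coefficient (= 0.5 here)

for n in range(nt):
    hy += coef * (ez[1:] - ez[:-1])          # H update (staggered in space)
    ez[1:-1] += coef * (hy[1:] - hy[:-1])    # E update (interior points)
    ez[200] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

# Energy should have propagated outward from the source cell.
print("peak |Ez| =", np.max(np.abs(ez)))
```

The conformal approaches replace these uniform update coefficients, in the cells cut by the curved surface, with coefficients weighted by the material (or air) fraction inside each cell, which is the preprocessing cost described above.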

Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment

Procedia PDF Downloads 128
3464 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing

Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn

Abstract:

Introduction: Neuromarketing employs numerous methodologies when investigating product and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of the brain's electrical activity, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal: an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Notable findings are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
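The frontal alpha asymmetry index discussed above is typically computed as the difference of log alpha-band power between right and left frontal electrodes; the sketch below applies Welch's method to synthetic signals (channel names F3/F4 and all parameters are illustrative).

```python
# Sketch of frontal alpha asymmetry (FAA):
# FAA = ln(alpha power, right frontal) - ln(alpha power, left frontal),
# computed on synthetic signals standing in for F3 (left) and F4 (right).
import numpy as np
from scipy.signal import welch

fs = 250.0                                # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic data: stronger 10 Hz alpha on the left channel (F3).
f3 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
f4 = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

def alpha_power(x):
    """Mean PSD in the 8-13 Hz alpha band via Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=512)
    band = (f >= 8) & (f <= 13)
    return pxx[band].mean()

faa = np.log(alpha_power(f4)) - np.log(alpha_power(f3))
print("FAA =", faa)
```

Because alpha power is inversely related to cortical activity, a positive FAA (more right-side alpha) indexes relatively greater left frontal activity, the pattern the review associates with positive responses to marketing stimuli.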

Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency

Procedia PDF Downloads 109
3463 The Usage of Negative Emotive Words in Twitter

Authors: Martina Katalin Szabó, István Üveges

Abstract:

In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian twitter-database via NLP methods. The data is analysed from a gender point of view, as well as changes in language usage over time. The term negative emotive word refers to those words that, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases, they may function as intensifiers (e.g. rohadt jó ’damn good’) or a sentiment expression with positive polarity despite their negative prior polarity (e.g. brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’. Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the recent analysis, 67783 tweets were collected: 37818 tweets (19580 tweets written by females and 18238 tweets written by males) in 2016 and 48344 (18379 tweets written by females and 29965 tweets written by males) in 2021. The goal of the research was to make up two datasets comparable from the viewpoint of semantic changes, as well as from gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, tweets were processed by ‘magyarlanc’, a toolkit is written in JAVA for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the analysis of parts-of-speech and sentiment values of the co-occurring words). Finally, the results of all four subcorpora were compared. Here some of the main outcomes of our analyses are provided: There are almost four times fewer cases in the male corpus compared to the female corpus when the negative emotive intensifier modified a negative polarity word in the tweet (e.g., damn bad). 
At the same time, male authors used these intensifiers more frequently to modify a positive polarity or a neutral word (e.g., damn good and damn big). Results also pointed out that, in contrast to female authors, male authors used these words much more frequently as positive polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and that the frequency proportion of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the words examined were identified: some of the words collocate with more positive words in the second subcorpus than in the first, which points to a semantic change in these words over time.
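The collocation step described above can be sketched as a simple window check of whether each negative emotive intensifier modifies a positive, negative, or neutral word. The mini-lexicons below are illustrative stand-ins with diacritics stripped, not the study's 214-word lexicon or the ‘magyarlanc’ pipeline:

```python
# Sketch of the collocation analysis: for each intensifier, classify the
# polarity of the word it modifies (assumed here to be the next token).
# INTENSIFIERS and POLARITY are tiny hypothetical subsets for illustration.
from collections import Counter

INTENSIFIERS = {"rohadt", "brutalis", "durva"}       # hypothetical subset
POLARITY = {"jo": "positive", "rossz": "negative"}   # hypothetical subset

def collocation_profile(tweets):
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        for i, tok in enumerate(tokens[:-1]):
            if tok in INTENSIFIERS:
                # polarity of the modified word; unknown words count as neutral
                counts[POLARITY.get(tokens[i + 1], "neutral")] += 1
    return counts

profile = collocation_profile(
    ["rohadt jo ez", "rohadt rossz nap", "brutalis jo"]
)  # {"positive": 2, "negative": 1}
```

Comparing such profiles across the four gender/year subcorpora yields the frequency contrasts reported above.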

Keywords: gender differences, negative emotive words, semantic changes over time, twitter

Procedia PDF Downloads 202
3462 Mesoporous Tussah Silk Fibroin Microspheres for Drug Delivery

Authors: Weitao Zhou, Qing Wang, Jianxin He, Shizhong Cui

Abstract:

Mesoporous Tussah silk fibroin (TSF) spheres were fabricated via the self-assembly of TSF molecules in aqueous solutions. The results showed that the TSF particles were approximately three-dimensional spheres with diameters ranging from 500 nm to 6 μm, without adherence. More importantly, the surface morphology is a mesoporous structure with nano-pores of 20 nm to 200 nm in size. Fourier transform infrared (FT-IR) and X-ray diffraction (XRD) studies demonstrated that the mesoporous TSF spheres mainly contained beta-sheet conformation (44.1%) as well as a slight amount of random coil (13.2%). A drug release test was performed with 5-fluorouracil (5-Fu) as a model drug, and the result indicated that the mesoporous TSF microspheres had a good capacity for sustained drug release. It is expected that these stable, high-crystallinity mesoporous TSF spheres, produced without organic solvents and having significantly improved drug release properties, are a very promising material for controlled gene medicine delivery.

Keywords: Tussah silk fibroin, porous materials, microsphere, drug release

Procedia PDF Downloads 455
3461 The Effect of Dopamine D2 Receptor TAQ A1 Allele on Sprinter and Endurance Athlete

Authors: Öznur Özge Özcan, Canan Sercan, Hamza Kulaksız, Mesut Karahan, Korkut Ulucan

Abstract:

Genetic structure is very important for understanding the brain dopamine system, which is related to athletic performance. More studies on athletic performance in terms of addiction-related genetic markers are expected in the future. In the present study, we intended to investigate the Dopamine Receptor D2 gene (DRD2) rs1800497 polymorphism, which is related to the brain dopaminergic system. 10 sprinter and 10 endurance athletes were enrolled in the study. The Real-Time Polymerase Chain Reaction method was used for genotyping. According to the results, the counts of the A1A1, A1A2 and A2A2 genotypes in the athletes were 0 (0%), 3 (15%) and 17 (85%), respectively. The A1A1 genotype was not found, and A2 was the dominating allele in our cohort. These findings suggest that the effects of the dopaminergic mechanism on sports genetics may be explained by a polygenic and multifactorial view.
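The allele frequencies implied by the genotype counts above (0 A1A1, 3 A1A2, 17 A2A2 in 20 athletes) follow from simple allele counting; this is a sketch of that standard calculation, not the authors' analysis pipeline:

```python
# Allele frequencies from genotype counts: each A1A1 carries two A1
# alleles, each A1A2 carries one, out of 2N alleles total.
def allele_freqs(n_11, n_12, n_22):
    total_alleles = 2 * (n_11 + n_12 + n_22)
    f_a1 = (2 * n_11 + n_12) / total_alleles
    return f_a1, 1.0 - f_a1

f_a1, f_a2 = allele_freqs(0, 3, 17)  # A1 = 0.075, A2 = 0.925
```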

Keywords: addiction, athletic performance, genotype, sport genetics

Procedia PDF Downloads 210
3460 The Pigeon Circovirus Evolution and Epidemiology under Conditions of One Loft Race Rearing System: The Preliminary Results

Authors: Tomasz Stenzel, Daria Dziewulska, Ewa Łukaszuk, Joy Custer, Simona Kraberger, Arvind Varsani

Abstract:

Viral diseases, especially those leading to impairment of the immune system, are among the most important problems in avian pathology. However, not much data is available on this subject for avian species other than commercial poultry. Recently, increasing attention has been paid to racing pigeons, which have been refined for many years in terms of their ability to return to their place of origin. Currently, these birds are used for races at distances from 100 to 1000 km, and winning pigeons are highly valuable. The rearing system of racing pigeons contradicts the principles of biosecurity, as birds originating from various breeding facilities are commonly transported and reared together in “One Loft Race” (OLR) facilities. This favors the spread of multiple infections and provides conditions for the development of novel variants of various pathogens through recombination. One of the most significant viruses occurring in this avian species is the pigeon circovirus (PiCV), which is detected in ca. 70% of pigeons. Circoviruses are characterized by vast genetic diversity, which is due, among other things, to recombination: the exchange of fragments of genetic material among various strains of the virus during the infection of one organism. The rate and intensity of the development of PiCV recombinants have not been determined so far. For this reason, an experiment was performed to investigate the frequency of development of novel PiCV recombinants in racing pigeons kept in OLR-type conditions. 15 racing pigeons originating from 5 different breeding facilities, subclinically infected with various PiCV strains, were housed in one room for eight weeks to mimic the conditions of OLR rearing. 
Blood and swab samples were collected from the birds every seven days to recover complete PiCV genomes, which were amplified through Rolling Circle Amplification (RCA), cloned, sequenced, and subjected to bioinformatic analyses aimed at determining the genetic diversity and the dynamics of recombination among the viruses. In addition, the virus shedding rate/level of viremia, the expression of IFN-γ and interferon-related genes, and anti-PiCV antibodies were determined to enable a complete analysis of the course of infection in the flock. Initial results showed that 336 full PiCV genomes were obtained, exhibiting nucleotide similarity ranging from 86.6 to 100%, and that 8 of those were recombinants originating from viruses of different lofts of origin. The first recombinant appeared after seven days of the experiment, but most of the recombinants appeared after 14 and 21 days of joint housing. The level of viremia and virus shedding was highest in the 2nd week of the experiment and gradually decreased towards the end, which partially corresponded with Mx1 gene expression and antibody dynamics. The results showed that the OLR pigeon-rearing system could play a significant role in spreading infectious agents such as circoviruses and in contributing to PiCV evolution through recombination. Therefore, it is worth considering whether a popular gambling game such as pigeon racing is sensible from both animal welfare and epidemiological points of view.
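The genome comparison above rests on pairwise nucleotide identity between recovered PiCV sequences (the reported 86.6-100% range). A naive sketch for pre-aligned sequences, not the actual alignment pipeline used in the study:

```python
# Percent identity between two aligned nucleotide sequences: count
# matching positions and normalize by the longer sequence length.
def pairwise_identity(seq_a, seq_b):
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / max(len(seq_a), len(seq_b))

ident = pairwise_identity("ATGCATGC", "ATGCATGA")  # 87.5
```

Real genome comparisons would first align the sequences and handle indels; this only illustrates the identity metric itself.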

Keywords: pigeon circovirus, recombination, evolution, one loft race

Procedia PDF Downloads 68
3459 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data of 2013 to 2016 used in this study were collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the result shows that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. This indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. 
This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature indicates, by applying GWR, that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial-related variables might bias the results of such models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, because the effect of flood prevention might vary dramatically by location.
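The core GWR idea described above, fitting a separate weighted regression at each location with observations down-weighted by distance, can be sketched for a single regressor with a Gaussian kernel. Bandwidth and data below are illustrative, not the study's calibrated model:

```python
import math

# Gaussian kernel weight: observations farther from the calibration
# point contribute less to the local fit.
def gaussian_weight(d, bandwidth):
    return math.exp(-0.5 * (d / bandwidth) ** 2)

# Local weighted-least-squares slope of y on x at a calibration point,
# using the closed form for one regressor.
def local_slope(xy, x, y, point, bandwidth):
    w = [gaussian_weight(math.dist(p, point), bandwidth) for p in xy]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return num / den

# perfectly linear toy data: the local slope recovers 2 at any point
slope = local_slope([(0, 0), (1, 0), (2, 0)], [0, 1, 2], [0, 2, 4], (0, 0), 1.0)
```

Repeating this at every calibration point produces the spatially varying coefficient surface that GWR studies report; the spatial fixed effects would enter as additional controls in the local design matrix.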

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 287
3458 A Thermographic and Energy Based Approach to Define High Cycle Fatigue Strength of Flax Fiber Reinforced Thermoset Composites

Authors: Md. Zahirul Islam, Chad A. Ulven

Abstract:

Fiber-reinforced polymer matrix composites have a wide range of applications in the automotive, aerospace, and sports utility sectors, among others, due to their high specific strength and stiffness as well as reduced weight. In addition to those favorable properties, composites composed of natural fibers and bio-based resins (i.e., biocomposites) are eco-friendly and biodegradable. However, the applications of biocomposites are limited due to the lack of knowledge about their long-term reliability under fluctuating loads. In order to explore the long-term reliability of flax fiber reinforced composites under fluctuating loads through the high cycle fatigue strength (HCFS), fatigue tests were conducted on unidirectional flax fiber reinforced thermoset composites at different percentage loads of the ultimate tensile strength (UTS) with a loading frequency of 5 Hz. The change in sample temperature during cyclic loading was captured using an IR camera. Initially, the temperature increased rapidly, but after a certain time, it stabilized. A mathematical model was developed to predict the fatigue life from the stabilized temperature data. Stabilized temperature and dissipated energy per cycle were compared with applied stress. Both showed bilinear behavior, and the intersection of those curves was used to determine the HCFS. The HCFS for unidirectional flax fiber reinforced composites is around 45% of the UTS for a loading frequency of 5 Hz. Unlike fatigue life, stabilized temperature and dissipated energy-based models are convenient for defining the HCFS, as they show little variation from sample to sample.
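The bilinear HCFS estimate described above reduces to fitting one line to the low-stress regime and one to the high-stress regime of stabilized temperature (or dissipated energy) versus applied stress, and taking their intersection. The line coefficients below are illustrative, not the paper's measured fits:

```python
# Intersection of two fitted lines y = a1*x + b1 and y = a2*x + b2:
# the x at which the shallow low-stress trend meets the steep
# high-stress trend is taken as the HCFS (in % of UTS).
def bilinear_intersection(a1, b1, a2, b2):
    return (b2 - b1) / (a1 - a2)

# e.g. a shallow low-stress trend and a steep high-stress trend
hcfs = bilinear_intersection(0.1, 1.0, 0.9, -35.0)  # 45.0 (% UTS)
```

In practice, the two regimes would first be identified from the data and each fitted by least squares before computing the intersection.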

Keywords: energy method, fatigue, flax fiber reinforced composite, HCFS, thermographic approach

Procedia PDF Downloads 104
3457 Finite Element Model to Investigate the Dynamic Behavior of Ring-Stiffened Conical Shell Fully and Partially Filled with Fluid

Authors: Mohammadamin Esmaeilzadehazimi, Morteza Shayan Arani, Mohammad Toorani, Aouni Lakis

Abstract:

This study uses a hybrid finite element method to predict the dynamic behavior of both fully and partially-filled truncated conical shells stiffened with ring stiffeners. The method combines classical shell theory and the finite element method, and employs displacement functions derived from exact solutions of Sanders' shell equilibrium equations for conical shells. The shell-fluid interface is analyzed by utilizing the velocity potential, Bernoulli's equation, and impermeability conditions to determine an explicit expression for fluid pressure. The equations of motion presented in this study apply to both conical and cylindrical shells. This study presents the first comparison of the method applied to ring-stiffened shells with other numerical and experimental findings. Vibration frequencies for conical shells with various boundary conditions and geometries in a vacuum and filled with water are compared with experimental and numerical investigations, achieving good agreement. The study thoroughly investigates the influence of geometric parameters, stiffener quantity, semi-vertex cone angle, level of water filled in the cone, and applied boundary conditions on the natural frequency of fluid-loaded ring-stiffened conical shells, and draws some useful conclusions. The primary advantage of the current method is its use of a minimal number of finite elements while achieving highly accurate results.

Keywords: finite element method, fluid–structure interaction, conical shell, natural frequency, ring-stiffener

Procedia PDF Downloads 74
3456 Efficacy of Learning: Digital Sources versus Print

Authors: Rahimah Akbar, Abdullah Al-Hashemi, Hanan Taqi, Taiba Sadeq

Abstract:

As technology continues to develop, teaching curriculums in both schools and universities have begun adopting a more computer/digital-based approach to the transmission of knowledge and information, as opposed to the more old-fashioned use of textbooks. This gives rise to the question: Are there any differences between learning from a digital source and learning from a printed source, such as a textbook? More specifically, which medium of information results in better long-term retention? A review of the confounding factors implicated in understanding the relationship between learning from the two different mediums was conducted. Alongside this, a 4-week cohort study involving 76 first-year English Language female students was performed, whereby the participants were divided into 2 groups. Group A studied material from a paper source (referred to as the Print Medium), and Group B studied material from a digital source (Digital Medium). The dependent variables were memory recall, graded on a 4-point scale, and the total frequency of item repetition. The study was facilitated by the spaced-repetition software SuperMemo. Results showed that, contrary to prevailing evidence, the Digital Medium group showed no statistically significant differences in terms of the shift from Remember (Episodic) to Know (Semantic) when all confounding factors were accounted for. The shift from Random Guess and Familiar to Remember occurred faster in the Digital Medium than it did in the Print Medium.

Keywords: digital medium, print medium, long-term memory recall, episodic memory, semantic memory, super memo, forgetting index, frequency of repetitions, total time spent

Procedia PDF Downloads 288
3455 Intended Use of Genetically Modified Organisms, Advantages and Disadvantages

Authors: Pakize Ozlem Kurt Polat

Abstract:

GMO (genetically modified organism) is the result of a laboratory process in which genes from the DNA of one species are extracted and artificially inserted into the genes of an unrelated plant or animal. This technology includes nucleic acid hybridization, recombinant DNA and RNA techniques, PCR, cell culture and gene cloning. Studies can be divided into three groups according to the properties transferred to the transgenic plant: up to 59% concern the transfer of herbicide resistance, 28% resistance to insects and viruses, and 13% quality characteristics. Not every transgenic crop is in commercial production; the main commercial plants are soybean, maize, canola, and cotton. The day-by-day increasing interest in GMOs can be listed as follows: use in the health area (organ transplantation, gene therapy, vaccines and drugs); use in the industrial area (vitamins, monoclonal antibodies, vaccines, anti-cancer compounds, antioxidants, plastics, fibers, polyethers, human blood proteins and carotenoids, as well as emulsifiers, sweeteners, enzymes and food preservatives used as flavor enhancers or color changers); and use in agriculture (herbicide resistance; resistance to insects; resistance to viral, bacterial and fungal diseases; extended shelf life; improved quality; resistance to extreme conditions such as drought, salinity and frost; and improved nutritional value and quality). We explain all these methods step by step in this research. GMOs have advantages and disadvantages, all of which we explain clearly in the full text; on this topic, researchers worldwide are divided into two camps. Some researchers think that GMOs have many disadvantages and should not be used, while others hold the opposite view. When considering each country's laws on GMOs, the biosafety law of each country and union should be known. 
For biosafety reasons, and to minimize the problems caused by transgenic plants, 130 countries, including Turkey, signed the United Nations Biosafety Protocol on 24 May 2000. This protocol was prepared as the Cartagena Biosafety Protocol, which entered into force on September 11, 2003. By addressing the risks that GMOs in general use pose to human health and biodiversity, the protocol covers the prevention, transit and sustainable transboundary movement of all GMOs that may have such effects. Under this protocol, the US GMO regulations, the European Union GMO regulations and the Turkey GMO regulations should be known; these three sets of regulations have different applications and rules. The world population is increasing day by day while agricultural land is getting smaller; for this reason, to feed humans and animals, we should improve agricultural product yield and quality. Scientists are trying to solve this problem, and one solution is molecular biotechnology, which includes GMO methods as well. Before deciding to support or oppose GMOs, one should know the GMO protocols and their effects.

Keywords: biotechnology, GMO (genetically modified organism), molecular marker

Procedia PDF Downloads 232
3454 Predictive Factors of Healthcare-Associated Infections and Antibiotic Use Patterns: A Cross-Sectional Survey at the Charles Nicolle Hospital of Tunis

Authors: Nouira Mariem, Ennigrou Samir

Abstract:

Background and aims: Healthcare-associated infections (HAI) represent a major public health problem worldwide. They represent one of the most serious adverse events in health care. The objectives of our study were to estimate the prevalence of HAI at the Charles Nicolle Hospital (CNH), to identify the main associated factors, and to estimate the frequency of antibiotic use. Methods: This was a cross-sectional study at the CNH with a single visit per department (October-December 2018). All patients present in the wards for more than 48 hours were included. Patients from the outpatient consultation, emergency, and dialysis departments were excluded. The site definitions of infections proposed by the Centers for Disease Control and Prevention (CDC) were used. Only clinically and/or microbiologically confirmed active HAIs were included. Results: A total of 318 patients were included, with a mean age of 52 years and a sex ratio (female/male) of 1.05. A total of 41 patients had one or more active HAIs, corresponding to a prevalence of 13.1% (95% CI: 9.3%-16.9%). The most frequent site infections were urinary tract infections and pneumonia. Multivariate analysis among adult patients (>=18 years) (n=261) revealed that infection on admission (p=0.01), alcoholism (p=0.01), high blood pressure (p=0.008), having at least one invasive device inserted (p=0.004), and a history of recent surgery (p=0.03) significantly increased the risk of HAIs. More than one in three patients (35.4%) were on antibiotics on the day of the survey, of whom more than half (57.4%) were on two or more types of antibiotics. Conclusion: The prevalence of HAIs and antibiotic prescriptions at the CNH was considerably high. An infection prevention and control committee, as well as an antibiotic stewardship program with continuous monitoring using repeated prevalence surveys, must be implemented to effectively limit the frequency of these infections.
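The confidence interval quoted above follows the standard normal-approximation formula for a proportion. A sketch of that calculation from the reported counts (41 infected of 318 surveyed); the paper's 13.1% point estimate differs slightly from 41/318 = 12.9%, which may reflect the exact denominator used:

```python
import math

# Normal-approximation (Wald) 95% CI for a prevalence estimate:
# p +/- z * sqrt(p * (1 - p) / n), with z = 1.96 for 95% coverage.
def prevalence_ci(cases, n, z=1.96):
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

p, lo, hi = prevalence_ci(41, 318)  # roughly 12.9% (9.2%-16.6%)
```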

Keywords: prevalence, healthcare associated infection, antibiotic, Tunisia

Procedia PDF Downloads 78
3453 Free Vibration Analysis of Timoshenko Beams at Higher Modes with Central Concentrated Mass Using Coupled Displacement Field Method

Authors: K. Meera Saheb, K. Krishna Bhaskar

Abstract:

Complex structures used in many fields of engineering are made up of simple structural elements like beams, plates, etc. These structural elements sometimes carry concentrated masses at discrete points, and when subjected to a severe dynamic environment, they tend to vibrate with large amplitudes. The frequency-amplitude relationship is essential in determining the response of these structural elements subjected to dynamic loads. For Timoshenko beams, the effects of shear deformation and rotary inertia must be considered to evaluate the fundamental linear and nonlinear frequencies. A commonly used method for solving vibration problems is the energy method, or a finite element analogue of the same. In the present Coupled Displacement Field method, the number of undetermined coefficients is reduced to half compared with the well-known Rayleigh-Ritz method, which significantly simplifies the procedure for solving the vibration problem. This is accomplished by using a coupling equation derived from the static equilibrium of the shear flexible structural element. The prime objective of the present paper is to study, in detail, the effect of a central concentrated mass on the large amplitude free vibrations of uniform shear flexible beams. Accurate closed form expressions for the linear frequency parameter of uniform shear flexible beams with a central concentrated mass were developed, and the results are presented in digital form.

Keywords: coupled displacement field, coupling equation, large amplitude vibrations, moderately thick plates

Procedia PDF Downloads 222
3452 Harnessing Earth's Electric Field and Transmission of Electricity

Authors: Vaishakh Medikeri

Abstract:

Energy is the most basic characteristic of every particle in this Universe. Since the birth of life on this planet, living beings have undertaken a quest to analyze, understand, and harness the precious resources of nature. In this quest, one of the greatest undertakings is the process of harnessing naturally available energy. Scientists around the globe have discovered many ways to harness freely available energy, but even today we speak of a “power crisis”. Nikola Tesla once said, “Nature has stored up in this universe infinite energy”. Energy is everywhere around us in unlimited quantities, all of it waiting to be harnessed. In this paper, a new technique is proposed to harness the earth's electric field, which is present everywhere around the world in infinite quantities, and to transmit the stored electric energy using strong magnetic and electric fields. Near the surface of the earth there is an electric field of about 120 V/m. This electric field is used to charge a capacitor with high capacitance. Later, the stored energy is passed through a device which converts the stored DC into AC. The AC so produced is then passed through a step-down transformer to increase the current, and then through an RLC circuit. The current can then be transmitted wirelessly using the principle of resonant inductive coupling. The proposed apparatus can be placed in most of the required places, and any circuit tuned to the frequency of the transmitted current can receive the energy. This new source of renewable energy is of great importance if implemented, since the apparatus is not costly and can be situated in most of the required places. Moreover, the receiver which receives the transmitted energy is just an RLC circuit tuned to the resonant frequency of the transmitted energy. 
By using the proposed apparatus the energy losses can be reduced to a very large extent.
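Tuning the receiver described above comes down to matching the RLC circuit's resonant frequency, f = 1 / (2π√(LC)), to the transmitted frequency. A minimal sketch with illustrative component values, not those of the proposed apparatus:

```python
import math

# Resonant frequency of a series/parallel LC (RLC) circuit in hertz:
# f = 1 / (2 * pi * sqrt(L * C)), with L in henries and C in farads.
def resonant_frequency(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f = resonant_frequency(10e-3, 100e-9)  # 10 mH, 100 nF -> about 5.03 kHz
```

A receiver would choose L and C so that f equals the transmitter's operating frequency; the resistance R sets the bandwidth of the resonance, not its center frequency.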

Keywords: capacitor, inductive resonant coupling, RLC circuit, transmission of electricity

Procedia PDF Downloads 369
3451 Measurement of Magnetic Properties of Grainoriented Electrical Steels at Low and High Fields Using a Novel Single

Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa

Abstract:

Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at the high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and in low frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability and specific power loss of conventional grain oriented (CGO) and high permeability grain oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips, comprising 20 CGO and 20 HGO, each 305 mm x 30 mm x 0.27 mm, from one supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. 
The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for the calculation of magnetic field strength and flux density respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so as to obtain repeatable and comparable measurements. The low noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz and a 92 kHz bandwidth, was chosen to minimize the influence of thermal noise on the measurements. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is because of the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is therefore better than CGO in both low and high magnetic field applications.
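The two calculations mentioned above can be sketched directly: H follows from the magnetising current (shunt voltage divided by shunt resistance) and the primary winding, while B follows from integrating the induced secondary voltage. The winding counts, path length and cross-section below are illustrative and must match the actual fixture:

```python
# H = N1 * I / l_m  (A/m), with I recovered from the shunt voltage.
def h_field(v_shunt, r_shunt, n1, path_length):
    return n1 * (v_shunt / r_shunt) / path_length

# B(t) = (1 / (N2 * A)) * integral of the secondary voltage, here by
# trapezoidal integration of uniformly sampled voltage readings.
def b_field(v_secondary, dt, n2, area):
    b, flux = [], 0.0
    for i in range(1, len(v_secondary)):
        flux += 0.5 * (v_secondary[i] + v_secondary[i - 1]) * dt
        b.append(flux / (n2 * area))
    return b

h = h_field(0.47, 4.7, 100, 0.27)              # about 37.0 A/m
b = b_field([1.0, 1.0, 1.0], 0.001, 500, 8.1e-6)
```

In the actual tester, this integration runs over the acquired waveform each cycle, and the feedback loop adjusts the drive voltage until the resulting B waveform is sinusoidal at the target peak value.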

Keywords: flux density, electrical steel, LabVIEW, magnetization

Procedia PDF Downloads 288
3450 Perception of Public Transport Quality of Service among Regular Private Vehicle Users in Five European Cities

Authors: Juan de Ona, Esperanza Estevez, Rocío de Ona

Abstract:

Urban traffic levels can be reduced by drawing travelers away from private vehicles towards public transport. This modal change can be achieved either by introducing restrictions on private vehicles or by introducing measures which increase people's satisfaction with public transport. For public transport users, quality of service affects customer satisfaction, which, in turn, influences behavioral intentions towards the service. This paper intends to identify the main attributes which influence the perception private vehicle users have of the public transport services provided in five European cities: Berlin, Lisbon, London, Madrid and Rome. Ordinal logit models have been applied to an online panel survey with a sample size of 2,500 regular private vehicle users (approximately 500 inhabitants per city). To achieve a comprehensive analysis and to deal with heterogeneity in perceptions, 15 models have been developed: one for the entire sample and 14 for user segments. The results show differences between the cities and among the segments. Madrid was taken as the reference city, and the results indicate that its inhabitants are satisfied with public transport and that the most important public transport service attributes for private vehicle users are frequency, speed and intermodality. Frequency is an important attribute for all the segments, while speed and intermodality are important for most of them. The analysis by segments also identified attributes which, although not important in most cases, are relevant for specific segments. This study also points out important differences between the five cities. Findings from this study can be used to develop policies and recommendations for persuading private vehicle users to switch to public transport.
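The ordinal logit models applied above map a linear predictor and a set of estimated cutpoints to probabilities over ordered satisfaction categories. A minimal sketch of that probability mapping with illustrative cutpoints, not the study's estimated coefficients:

```python
import math

# Ordered logit: P(Y <= k) = logistic(cutpoint_k - x*beta); category
# probabilities are differences of consecutive cumulative probabilities.
def ordered_logit_probs(xb, cutpoints):
    cdf = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [cdf(c - xb) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, len(cum))]

# linear predictor 0.5 with three cutpoints -> four ordered categories
probs = ordered_logit_probs(0.5, [-1.0, 0.0, 1.0])
```

Estimation would fit the coefficients and cutpoints by maximum likelihood; this shows only how a fitted model turns a respondent's attribute profile into category probabilities.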

Keywords: service quality, satisfaction, public transportation, private vehicle users, car users, segmentation, ordered logit

Procedia PDF Downloads 114
3449 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features like the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons respectively, during which high temporal resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides associated with the semi-diurnal frequency are more dominant in both observations and model simulations for November-December and March-April. However, in August, the estimate is maximum at the near-inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitude are well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance for all seasons. 
The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared with the other two seasons. The model simulations suggest that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The model-simulated energy dissipation rate indicates that dissipation is maximum at the generation sites, and hence local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.
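The EOF decomposition described above can be sketched as follows. This is a minimal illustration on synthetic velocity data, not the study's observations: the array shapes, the fabricated vertical-mode structures, and the semi-diurnal period of ≈0.517 days are all assumptions. It uses the standard recipe of removing the time mean and applying SVD to the anomaly matrix, with the squared singular values giving the variance fraction per mode.

```python
import numpy as np

# Synthetic stand-in for a baroclinic velocity record:
# rows = time samples, columns = depth levels.
rng = np.random.default_rng(0)
nt, nz = 500, 20
t = np.linspace(0.0, 50.0, nt)[:, None]          # time in days
z = np.linspace(0.0, np.pi, nz)[None, :]         # scaled depth coordinate

# A dominant Mode-1-like vertical structure oscillating at a semi-diurnal
# period (~0.517 days), a weaker Mode-2-like signal, and noise.
u = (np.sin(z) * np.cos(2 * np.pi * t / 0.517)
     + 0.3 * np.sin(2 * z) * np.cos(2 * np.pi * t / 0.517 + 1.0)
     + 0.1 * rng.standard_normal((nt, nz)))

# EOF analysis: remove the time mean, then SVD of the anomaly matrix.
anom = u - u.mean(axis=0)
_, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / np.sum(s**2)   # variance fraction explained by each mode
eofs = vt                        # rows are the vertical-mode structures

print("variance explained by first three modes:", np.round(var_frac[:3], 3))
```

With data constructed this way, the leading mode carries most of the variance and the first few modes together capture nearly all of it, which is the kind of result the abstract reports for the Mode-1 semi-diurnal tide.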

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

3448 The Relationship of Socioeconomic Status and Levels of Delinquency among Senior High School Students with Secured Attachment to Their Mothers

Authors: Aldrin Avergas, Quennie Mariel Peñaranda, Niña Karen San Miguel, Alexis Katrina Agustin, Peralta Xusha Mae, Maria Luisa Sison

Abstract:

The researchers explored the relationship between socioeconomic status and delinquent tendencies among Grade 11 students. The objective of the research is to determine whether delinquent behavior is related to the current socioeconomic status of adolescent students who have a warm relationship with their mothers. The researchers utilized three questionnaires to measure the three variables of the study: (1) 1SEC 2012: The New Philippines Socioeconomic Classification System, used to establish the respondents' current socioeconomic status; (2) the Self-Reported Delinquency – Problem Behavior Frequency Scale, used to determine how frequently each respondent engages in delinquent behavior; and (3) the Inventory of Parent and Peer Attachment Revised (IPPA-R), used to determine the respondents' attachment style. The researchers utilized a quantitative research design, specifically correlational research. The study concluded that there is no significant relationship between socioeconomic status and delinquency, despite the fact that the participants had secured attachment to their mothers. This research therefore implies that delinquency is not just a problem for students belonging to the lower socioeconomic strata, and that even a warm and close relationship with their mothers is not sufficient for these students to be completely free from engaging in delinquent acts. Other factors (such as peer pressure, emotional quotient, or self-esteem) may be contributing to delinquent behavior.
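The correlational design the abstract names can be illustrated with a minimal sketch. The data below are invented for illustration only: SES is coded as an ordinal class (1 = lowest, 5 = highest) and delinquency as a frequency score, and the two are drawn independently so that, like the study's finding, no significant relationship is expected. The abstract does not state which coefficient the authors computed; a Pearson correlation is shown here, though a rank-based statistic such as Spearman's rho would also suit the ordinal SES coding.

```python
import numpy as np

# Hypothetical data, not the study's: SES class (ordinal 1-5) and a
# delinquency frequency score for each of n simulated students, drawn
# independently of each other.
rng = np.random.default_rng(1)
n = 120
ses = rng.integers(1, 6, size=n).astype(float)
delinquency = rng.poisson(3.0, size=n).astype(float)

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(ses, delinquency)[0, 1]

# Approximate t statistic for the two-sided test of H0: rho = 0.
t_stat = r * np.sqrt((n - 2) / (1 - r**2))
print(f"r = {r:.3f}, t = {t_stat:.3f} (df = {n - 2})")
```

Because the simulated variables are independent, the correlation comes out small and the t statistic falls well inside the acceptance region, mirroring a "no significant relationship" conclusion.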

Keywords: adolescents, delinquency, high school students, secured attachment style, socioeconomic status
