Search results for: false positives
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 399

129 Development of an Implicit Physical Influence Upwind Scheme for Cell-Centered Finite Volume Method

Authors: Shidvash Vakilipour, Masoud Mohammadi, Rouzbeh Riazi, Scott Ormiston, Kimia Amiri, Sahar Barati

Abstract:

An essential component of a finite volume method (FVM) is the advection scheme, which estimates values on the cell faces from the values computed at the nodes or cell centers. The most widely used advection schemes are upwind schemes, which have been developed for FVMs on various kinds of structured and unstructured grids. In this research, the physical influence scheme (PIS) is developed for a cell-centered FVM that uses an implicit coupled solver. Results are compared with the exponential differencing scheme (EDS) and the skew upwind differencing scheme (SUDS). The accuracy of these schemes is evaluated for lid-driven cavity flow at Re = 1000, 3200, and 5000 and backward-facing step flow at Re = 800. Simulations show considerable differences between the EDS results and benchmark solutions, especially for the lid-driven cavity flow at high Reynolds numbers; these differences are caused by false diffusion. Comparing the SUDS and PIS schemes shows relatively close results for the backward-facing step flow but different results for the lid-driven cavity flow. The poor performance of SUDS in the lid-driven cavity flow can be attributed to its lack of sensitivity to the pressure difference between the cell face and the upwind points, which is critical for predicting such vortex-dominated flows.
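False diffusion of the kind blamed here for the EDS discrepancies is easy to exhibit with the simplest member of the upwind family. The sketch below is a generic first-order upwind update for 1D linear advection, not the paper's PIS; the function name and parameters are illustrative only:

```python
import numpy as np

def upwind_advect(u, c, dx, dt, nsteps):
    """First-order upwind update for du/dt + c*du/dx = 0 with c > 0
    and periodic boundaries. Stable and monotone for c*dt/dx <= 1,
    but numerically diffusive: sharp profiles are smeared."""
    lam = c * dt / dx  # Courant number
    for _ in range(nsteps):
        u = u - lam * (u - np.roll(u, 1))  # face value taken from the upwind cell
    return u
```

Advecting a square pulse with this scheme conserves the total but visibly smears the edges; higher-order variants such as SUDS and PIS are designed to reduce exactly this smearing.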

Keywords: cell-centered finite volume method, coupled solver, exponential differencing scheme (EDS), physical influence scheme (PIS), pressure weighted interpolation method (PWIM), skew upwind differencing scheme (SUDS)

Procedia PDF Downloads 239
128 Enzymatic Repair Prior To DNA Barcoding, Aspirations, and Restraints

Authors: Maxime Merheb, Rachel Matar

Abstract:

Methods for retrieving ancient DNA sequences from fossils, which in turn permit whole-genome sequencing, have improved extraordinarily in recent years thanks to advances in sequencing technology and methodology. Even so, the search for ancient DNA is still obstructed by the damage that accumulates in DNA after the death of an organism. This damage falls into three main categories: (i) physical abnormalities, such as strand breaks, which leave only short DNA fragments; (ii) modified bases (mainly cytosine deamination), which cause sequence errors through the incorporation of a false nucleotide during DNA amplification; and (iii) DNA modifications known as blocking lesions, which halt PCR extension and thereby also impair amplification and sequencing. The problems arising from breakage and coding errors have decreased significantly in recent years: high-throughput sequencing platforms enable fast sequencing of short DNA fragments, and most coding errors were found to be the consequence of cytosine deamination, which can be removed from the DNA by enzymatic treatment. The methodology for repairing DNA sequences is still in development; it essentially amounts to reintroducing cytosine in place of uracil, and the technique is thus restricted to amplifiable DNA molecules. Because complete repair methodologies that eliminate every type of damage (particularly lesions that block PCR) are still pending, DNA detection immediately after extraction is sorely needed. Before investing resources in extensive, costly, and uncertain repair techniques, it is vital to distinguish between two hypotheses: (i) there is no DNA to amplify in the first place, so the sample is completely unrepairable; or (ii) the DNA is refractory to PCR and is worth repairing and amplifying. Hence, it is extremely important to develop a non-enzymatic technique to detect even the most degraded DNA.

Keywords: ancient DNA, DNA barcoding, enzymatic repair, PCR

Procedia PDF Downloads 366
127 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-Fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context from general-purpose, comprehensive, and authoritative data; 2) developing a search function that automatically selects relevant, new, and credible references; and 3) attending to the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references with a knowledge base, combined with the multi-head attention technique, improves fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches that use manually selected references. Future directions of this study include 1) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and 2) exploring semantic relations in claims and references to further enhance fact-checking.
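The multi-head attention mentioned in the results is built from scaled dot-product attention. A minimal single-head NumPy sketch (generic, assuming nothing about SAC's actual architecture or dimensions):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V for one attention head.
    Each output row is a convex combination of the rows of V,
    weighted by query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

A multi-head layer runs several such heads over learned linear projections of the claim and reference representations and concatenates the results.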

Keywords: fact checking, claim verification, deep learning, natural language processing

Procedia PDF Downloads 20
126 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and more recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible while keeping the false alarm rate tolerable. However, sensors may differ in accuracy and sensitivity range, and they decay over time. As a result, the big time-series data collected by the sensors contain uncertainties and are sometimes conflicting. In this study, we present a framework that exploits the ability of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) to represent and manage uncertainty and conflict, in order to achieve fast change detection and deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
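Two ingredients of the pipeline, the KL-divergence distance and the CUSUM declaration step, can be sketched generically (this is not the paper's full evidence-combination scheme; function names and the threshold are illustrative):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions, with a small epsilon to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def cusum(scores, threshold):
    """One-sided CUSUM: accumulate positive drift in the per-sample
    scores and declare a change once the statistic exceeds the
    threshold. Returns the detection index, or -1 if none."""
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + x)
        if s > threshold:
            return i
    return -1
```

In a quickest-change-detection setting, the per-sample score is typically a log-likelihood (or, as here, pignistic-probability) ratio that is negative before the change and positive after it, so the statistic stays near zero and then climbs quickly.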

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 295
125 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. Hence, there is a need for an efficient prediction model that makes a correct diagnosis of an epileptic seizure and accurately predicts its type. In this study, we consider how Effective Connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, enabling patients (and caregivers) to take appropriate precautions. We use this approach because effective connectivity begins to change near seizure onset, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six EEG channels from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained for the proposed scheme in predicting seizures is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method is the more suitable of the two for predicting epileptic seizures, and in general the best results are observed in the gamma and beta sub-bands. This work is of significant help for clinical applications, especially for the development of online portable devices.
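Granger causality, one of the two effective-connectivity estimators used, amounts to asking whether the past of one channel improves prediction of another beyond that channel's own past. A crude two-channel least-squares sketch (illustrative only; real EEG pipelines use multivariate models, model-order selection, and significance testing):

```python
import numpy as np

def granger_stat(x, y, lag=2):
    """Log ratio of residual sums of squares for predicting x from
    its own past versus its own past plus the past of y. Values
    well above zero suggest that y Granger-causes x."""
    n = len(x)

    def lagged(v):
        # columns v[t-1], ..., v[t-lag] for t = lag .. n-1
        return np.column_stack([v[lag - k : n - k] for k in range(1, lag + 1)])

    target = np.asarray(x, dtype=float)[lag:]
    X_own = lagged(np.asarray(x, dtype=float))
    X_full = np.hstack([X_own, lagged(np.asarray(y, dtype=float))])

    def rss(A):
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ coef
        return float(r @ r)

    return float(np.log(rss(X_own) / rss(X_full)))
```

The DTF extends this idea to the frequency domain over all channels at once, which is why the abstract can compare results per sub-band.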

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 420
124 Didactics of Literature within the Brechtian Theatre in Edward Albee's Who's Afraid of Virginia Woolf? and Ernest Lehman's Screenplay Adaptation from an Audiovisual Perspective

Authors: Angel Mauricio Castillo

Abstract:

By the mid-nineteenth century, theatrical performances and music dramas, as they were then known, provided the audience with a complete immersion into the feelings of the characters through poetry, music, and other artistic representations, creating a false sense of reality. However, a novel, non-cathartic form of representation appeared on stage some eighty years later; it is significant because it represents the antithesis of the common creations of the period and originates in the separation of the elements as a dominant. Central to the methodology is the sense of defamiliarization, a near translation of the German word Verfremdung, referred to throughout this work as the V-effect (also known as the 'alienation effect'); it embodies the performing techniques that enable the audience to watch a play while fully aware of its nature. A play may repeatedly remind the audience that it is only a play; therefore, all elements are introduced to provoke dissimilar reactions and opinions. A clear indication of the major findings of the study is the strong correlation between Hegel, Marx, and Brecht, as it discloses how the didactics of literature have influenced not only Brecht's productions but every educational context in which these ideas are intertwined. The result is a new dialectical process, that is to say, a new thesis that creates independent thinking skills on the part of the audience. This model therefore opposes the Hegelian formula of thesis-antithesis-synthesis in that the synthesis in the Brechtian theatre inevitably becomes a different thesis within an enlightening type of discourse. The confronting ideas of illusion versus reality create a new dialectical thesis instead of resulting in a synthesis.

Keywords: Brechtian theatre, didactics, literature, education

Procedia PDF Downloads 139
123 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and dietary factors, more so than other cancer types. The aim of this study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms: Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset in a Python Jupyter Notebook (version 3). The classification results were evaluated using the metrics: minimum_false_positives, brier_score, accuracy, precision, recall, F1_score, and the Receiver Operating Characteristics (ROC) curve. In terms of accuracy (%) and brier_score respectively, the data analysis yielded: Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27. The Naive Bayes algorithm outperforms the others, with a very low false positive rate, a low brier_score, and good accuracy. The Naive Bayes classification results predict EGC very satisfactorily using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public; with this knowledge-mining work, mortality from gastric cancer can be reduced or avoided.
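Of the metrics listed, the Brier score is the least standard to read alongside accuracy. As commonly defined (an assumption about the exact variant used here), it is the mean squared gap between the predicted probability and the 0/1 outcome:

```python
def brier_score(probs, labels):
    """Mean squared difference between the predicted probability of
    the positive class and the 0/1 outcome; 0 is perfect, lower is
    better."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def accuracy(preds, labels):
    """Fraction of hard predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

On this reading, Naive Bayes's 0.11 says its probability estimates sit close to the true outcomes, consistent with its 88% accuracy.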

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 106
122 Evaluating the Implementation of a Quality Management System in the COVID-19 Diagnostic Laboratory of a Tertiary Care Hospital in Delhi

Authors: Sukriti Sabharwal, Sonali Bhattar, Shikhar Saxena

Abstract:

Introduction: The COVID-19 molecular diagnostic laboratory is the cornerstone of COVID-19 diagnosis, as the patient's treatment and management protocol depend on the molecular results. For this purpose, it is extremely important that the laboratory producing these results adheres to quality management processes to increase the accuracy and validity of the reports generated. We started our own molecular diagnostic setup at the onset of the pandemic and therefore conducted this study to generate quality management data to help us improve on our weak points. Materials and Methods: A total of 14561 samples were evaluated by the retrospective observational method. The quality variables analysed were classified into pre-analytical, analytical, and post-analytical variables, and the results are presented as percentages. Results: Among the pre-analytical variables, sample leaking was the most common cause of sample rejection (134/14561, 0.92%), followed by non-generation of an SRF ID (76/14561, 0.52%) and non-compliance with triple packaging (44/14561, 0.3%). The other pre-analytical aspects assessed were incomplete patient identification (17/14561, 0.11%), insufficient sample quantity (12/14561, 0.08%), missing forms/samples (7/14561, 0.04%), samples in the wrong vials or empty VTM tubes (5/14561, 0.03%), and LIMS entry not done (2/14561, 0.01%). We were unable to obtain internal quality control in 0.37% of samples (55/14561). We also experienced two incidents of cross-contamination among the samples, resulting in false-positive results. Among the post-analytical factors, a total of 0.07% of samples (11/14561) could not be dispatched within the stipulated time frame. Conclusion: Adherence to quality control processes is paramount for the smooth running of any diagnostic laboratory, especially those involved in critical reporting. Not only do the indicators help keep the laboratory parameters in check, but they also allow comparison with other laboratories.
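Each indicator above is a plain proportion of the 14561 samples processed. A one-line sketch (function name illustrative) reproduces the leading figures:

```python
def indicator_pct(count, total):
    """Quality indicator expressed as a percentage of all samples,
    rounded to two decimal places."""
    return round(100.0 * count / total, 2)
```

For example, the leading rejection cause gives indicator_pct(134, 14561) = 0.92, matching the reported 0.92% for sample leaking.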

Keywords: laboratory quality management, COVID-19, molecular diagnostics, healthcare

Procedia PDF Downloads 113
121 YOLO-IR: Infrared Small Object Detection in High Noise Images

Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long

Abstract:

Infrared object detection aims at separating small and dim targets from cluttered backgrounds, and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications, such as improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to the noise of the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve the robustness to high infrared noise. To address the problem that high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model’s ability to extract target features and its robustness to noise. Since the noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve the detection accuracy. In addition, because the generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in the F1 score over the existing state-of-the-art model.
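The noise-sensitivity of IoU-based losses for small targets follows directly from the IoU definition: for a tiny box, a shift of a pixel or two removes most of the intersection. A generic axis-aligned IoU sketch (not the Wasserstein-distance loss the paper adopts):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

For a 2x2 box, a one-pixel horizontal shift of the prediction already drops the IoU from 1.0 to 1/3, which illustrates why a distribution-distance loss can give smoother gradients for small, noisy targets.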

Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion

Procedia PDF Downloads 8
120 Questioning the Relationship Between Young People and Fake News Through Their Use of Social Media

Authors: Marion Billard

Abstract:

This paper focuses on the real relationship between young people and fake news. Fake news is one of today's main issues in the world of information and communication, and social media and its democratization have helped spread false information. According to traditional beliefs, young people are more inclined to believe what they read through social media; the individuals concerned, however, think that they are better able to distinguish between real and fake news, a confidence they attribute to their use of the internet and social media from an early age. During the 2016 American and 2017 French presidential campaigns, the term fake news was on everyone's lips and became a real issue in the field of information. While young people informed themselves through newspapers or television until the beginning of the '90s, Generation Z (people born between 1997 and 2010) has always been immersed in this world of fast communication: they have known how to use social media from a young age, and the internet holds no secrets for them. Today, despite sporadic use of traditional media, young people tend to turn to their smartphones and social networks such as Instagram or Twitter to stay abreast of the latest news. The growth of social media information has led to an 'ambient journalism' that gives access to an endless quantity of information. Waking up in the morning, young people see short posts supplying the essentials of the news, mostly without much detail. As a result, impressionable people are unable to distinguish between real media and 'junk news' or fake news. This massive use of social media is probably explained by the inability of young people to find connections between the communication of traditional media and what they are living. The question arises whether this over-confidence of young people in their ability to distinguish between accurate and fake news makes it more difficult for them to examine information critically. Their relationship with media and fake news is more complex than popular opinion suggests: today's young people are neither masters of the quest for information nor inherently the most impressionable public on social media.

Keywords: fake news, youngsters, social media, information, generation

Procedia PDF Downloads 121
119 Virtue, Truth, Freedom, and the History of Philosophy

Authors: Ashley DelCorno

Abstract:

G. E. M. Anscombe's 1958 essay 'Modern Moral Philosophy' and the tradition of virtue ethics that followed gave rise to the restoration (or, more plainly, the resurrection) of Aristotle as something of an authority figure. Alasdair MacIntyre and Martha Nussbaum, for example, are proponents not just of Aristotle's relevance but also of his apparent implicit authority. That said, it is not clear that the schema imagined by virtue ethicists accurately describes moral life, or that it does not inadvertently work to impoverish genuine decision-making. If the label 'virtue' is categorically denied to some groups (while arbitrarily afforded to others), it can only turn on itself, rendering its own premise ridiculous. Likewise, as an inescapable feature of virtue ethics, Aristotelian binaries like 'virtue/vice' and 'voluntary/involuntary' offer up false dichotomies that may seriously compromise an agent's ability to conceptualize choices that are truly free and rooted in meaningful criteria. Here, this topic is analyzed through a feminist lens predicated on the known paradoxes of patriarchy. The work of feminist theorists Jacqui Alexander, Katharine Angel, Simone de Beauvoir, bell hooks, Audre Lorde, Imani Perry, and Amia Srinivasan serves as important guideposts, and the argument is built on a key tenet of black feminist thought regarding scarcity and possibility. Above all, it is clear that although the philosophical tradition of virtue ethics presents itself as recovering the place of agency in ethics, its premises carry serious limitations on the achievement of this goal. These include, most notably, virtue ethics' binding analysis of history, its axiomatic attachment to obligatory clauses, its problematic reading-in of Aristotle, and its arbitrary commitment to predetermined and competitively patriarchal ideas of what counts as a virtue.

Keywords: feminist history, the limits of utopic imagination, curatorial creation, truth, virtue, freedom

Procedia PDF Downloads 45
118 Comparative Study of Flood Plain Protection Zone Determination Methodologies in Colombia, Spain and Canada

Authors: P. Chang, C. Lopez, C. Burbano

Abstract:

Flood protection zones are riparian buffers established to manage and mitigate the impact of flooding and, in turn, protect local populations. The purpose of this study was to evaluate Colombia's Guía Técnica de Criterios para el Acotamiento de las Rondas Hídricas against international regulations in Canada and Spain, in order to determine its limitations and contribute to its improvement. The need to establish a specific corridor that allows for the dynamic development of a river is clear; however, the Colombian technical guide has identifiable limitations. The study shows that the international regulations use concepts similar to those used in Colombia, but they additionally integrate aspects such as regionalization, which allows for a better characterization of the channel way, and they incorporate the frequency of flooding and its probability of occurrence into the concept of risk when determining the protection zone. The case study analyzed in Dosquebradas, Risaralda, compared the application of the different standards through hydraulic modeling. It highlights that the current Colombian standard does not offer sufficient detail in its implementation phase, which leads to a false sense of security stemming from inaccuracy and lack of data. Furthermore, the study demonstrates that the Colombian norm is ill-adapted to the conditions of Dosquebradas, typical of the Andes region, in both social and hydraulic respects, and neither reduces the risk nor improves the protection of the population. We consider it pertinent to include risk estimation as an integral part of the methodology when establishing flood protection zones, considering the particularity of water systems, which are characterized by heterogeneous natural dynamic behavior.

Keywords: environmental corridor, flood zone determination, hydraulic domain, legislation flood protection zone

Procedia PDF Downloads 80
117 A Psychoanalytic Lens: Unmasked Layers of the Self among Post-Graduate Psychology Students in Surviving the COVID-19 Lockdown

Authors: Sharon Sibanda, Benny Motileng

Abstract:

The World Health Organisation (WHO) declared COVID-19, the disease caused by SARS-CoV-2, a pandemic on the 11th of March 2020, with South Africa recording its first case on the 5th of March 2020. The rapidly spreading virus led the South African government to implement one of the strictest nationwide lockdowns globally, closing all institutions of higher learning effective the 18th of March 2020. This qualitative study therefore primarily aimed to explore whether post-graduate psychology students were in a state of a depleted or cohesive self after the psychological isolation of the COVID-19 risk-adjusted level 5 lockdown. Semi-structured interviews, conducted from a qualitative interpretive approach with N=6 psychology post-graduate students, facilitated a rich understanding of their intra-psychic experiences of the self. Thematic analysis of the interview data illuminated how students were forced into the self by the emotional isolation of hard lockdown, with core psychic conflict, often defended against through external self-object experiences, coming to the fore. The findings also suggest that lockdown stripped this sample of psychology post-graduate students of their defensive escape from the inner self through external self-object distractions. The external self was stripped to the core of the internal self by the isolation of hard lockdown, thereby uncovering the psychic function of the roles and defenses amalgamated throughout modern cultural consciousness that dictate self-functioning. The study suggests modelling reflexivity skills in the integration of internal and external self-experience dynamics as part of a training model for the continued personal and professional development of psychology students.

Keywords: COVID-19, fragmentation, self-object experience, true/false self

Procedia PDF Downloads 7
116 Analysis of Real Time Seismic Signal Dataset Using Machine Learning

Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.

Abstract:

Due to the closeness between seismic signals and non-seismic signals, it is difficult to detect earthquakes using conventional methods. In order to distinguish between seismic and non-seismic events on the basis of their amplitude, our study processes the data that come from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, recursive short-term average/long-term average (STA/LTA), and Carl STA/LTA for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on extracting significant features for machine learning-based seismic event detection, which serves as motivation for compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a demonstration using a hybrid dataset (captured by different sensors) shows how this model may also be employed in a real-time setting while lowering false alarm rates. The study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). The experimental dataset comprises wideband seismic signals from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively.
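The STA/LTA trigger named above compares a short-window average of signal amplitude against a long-window one; the ratio spikes when a transient arrives. A naive (non-recursive) sketch with arbitrary window lengths, not the exact recursive or Carl variants used in the study:

```python
import numpy as np

def sta_lta(signal, nsta, nlta):
    """Classic windowed STA/LTA: ratio of the short-term to the
    long-term average of |signal|; values well above 1 flag a
    candidate event."""
    x = np.abs(np.asarray(signal, dtype=float))
    ratio = np.zeros(len(x))
    for i in range(nlta, len(x)):
        sta = x[i - nsta:i].mean()  # short window ending at sample i
        lta = x[i - nlta:i].mean()  # long window ending at sample i
        ratio[i] = sta / lta
    return ratio
```

An event is declared where the ratio crosses a tuned trigger threshold; the recursive form replaces the explicit windows with exponential averages so it can run in real time at constant cost per sample.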

Keywords: Carl STA/LTA, feature extraction, real time, dataset, machine learning, seismic detection

Procedia PDF Downloads 51
115 The Improved Therapeutic Effect of Trans-Cinnamaldehyde on Adipose-Derived Stem Cells without Chemical Induction

Authors: Karthyayani Rajamani, Yi-Chun Lin, Tung-Chou Wen, Jeanne Hsieh, Yi-Maun Subeq, Jen-Wei Liu, Po-Cheng Lin, Horng-Jyh Harn, Shinn-Zong Lin, Tzyy-Wen Chiou

Abstract:

Assuring cell quality is an essential parameter for the success of stem cell therapy, and utilizing various components to improve this potential has been a primary goal of stem cell research. The aim of this study was to demonstrate the capacity of trans-cinnamaldehyde (TC) not only to reverse stress-induced senescence but also to improve the therapeutic abilities of stem cells. Because of their availability and promising application potential in regenerative medicine, adipose-derived stem cells (ADSCs) were chosen for the study. We found that H2O2 treatment resulted in the expression of senescence characteristics in the ADSCs, including a decreased proliferation rate, increased senescence-associated β-galactosidase (SA-β-gal) activity, decreased SIRT1 (silent mating type information regulation 2 homolog) expression, and decreased telomerase activity. However, TC treatment was sufficient to rescue or reduce the effects of H2O2 induction, ultimately leading, at the cellular level, to an increased proliferation rate, a decreased percentage of SA-β-gal-positive cells, upregulated SIRT1 expression, and increased telomerase activity in the senescent ADSCs. Moreover, when ADSCs were treated with TC without induction of senescence, all of the aforementioned improvements were also observed. A chemically induced liver fibrosis animal model was then used to evaluate the functionality of the rescued cells in vivo. Liver dysfunction was established by injecting 200 mg/kg thioacetamide (TAA) intraperitoneally into Wistar rats every third day for 60 days. The experimental rats were separated into groups: a normal group (rats without TAA induction), a sham group (without ADSC transplantation), a positive control group (transplanted with normal ADSCs), an H2O2 group (transplanted with H2O2-induced senescent ADSCs), an H2O2+TC group (transplanted with ADSCs pretreated with H2O2 and then further treated with TC), and a TC group (ADSCs treated with TC without H2O2 treatment). In the transplantation groups, 1 × 10⁶ human ADSCs were introduced into each rat via direct liver injection. Based on the biochemical analysis and immunohistochemical staining results, the therapeutic effects on liver fibrosis of the induced senescent ADSCs (H2O2 group) were not as significant as those exerted by the normal ADSCs (positive control group). However, the H2O2+TC group showed significant reversal of liver damage compared with the H2O2 group one week post-transplantation, and ADSCs treated with TC alone, without H2O2, performed better than all other groups. These data confirm that TC treatment has the potential to improve the therapeutic effect of ADSCs. It is therefore suggested that TC has potential applications in maintaining stem cell quality and could aid in the treatment of senescence-related disorders.

Keywords: senescence, SIRT1, adipose derived stem cells, liver fibrosis

Procedia PDF Downloads 214
114 Tc-99m MIBI Scintigraphy to Differentiate Malignant from Benign Lesions, Detected on Planar Bone Scan

Authors: Aniqa Jabeen

Abstract:

The aim of this study was to evaluate the effectiveness of Tc-99m MIBI (Technetium-99m methoxy-isobutyl-isonitrile) scintigraphy in differentiating malignant from benign lesions detected on planar bone scans. Materials and Methods: 59 patients with bone lesions were enrolled in the study. The scintigraphic findings were compared with the clinical, radiological, and histological findings. Each patient initially underwent a three-phase bone scan with Tc-99m MDP (Methylene Diphosphonate), and if evidence of a lesion was found, the patient then underwent dynamic and static MIBI scintigraphy after three to four days. The MDP and MIBI scans were evaluated visually and quantitatively. For quantitative analysis, count ratios of lesions to the contralateral normal side (L/C) were obtained from regions of interest drawn on the scans. The Student's t-test was applied to assess the significance of differences between benign and malignant lesions; a p-value < 0.05 was considered significant. Results: The MDP scans showed increased tracer uptake, but there was no significant difference between benign and malignant uptake of the radiotracer. However, a significant difference in uptake (p = 0.015) was seen between malignant (L/C = 3.51 ± 1.02) and benign lesions (L/C = 2.50 ± 0.42) on MIBI scans. Three of thirty benign lesions did not show significant MIBI uptake. Seven malignant lesions appeared as false negatives. The specificity of the scan was 86.66%, and its negative predictive value (NPV) was 81.25%, whereas the sensitivity was 79.31%. When axial metastases were excluded from the lesions, the sensitivity of the MIBI scan increased to 91.66% and the NPV to 92.85%. Conclusion: MIBI scintigraphy proves its usefulness by distinguishing malignant from benign lesions. MIBI also correctly identifies metastatic lesions. The negative predictive value of the scan points towards its ability to accurately diagnose normal (benign) cases. 
However, biopsy remains the gold standard and a definitive diagnostic modality in musculoskeletal tumors. MIBI scan provides useful information in preoperative assessment and in distinguishing between malignant and benign lesions.
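The reported percentages follow directly from a 2×2 confusion matrix. A minimal sketch of the arithmetic, using hypothetical lesion counts (the abstract does not report the full matrix; the counts below are assumptions chosen only because they happen to reproduce the reported sensitivity, specificity, and NPV):

```python
# Sensitivity, specificity, and NPV from a 2x2 confusion matrix.
# The counts below are illustrative assumptions, not the study's raw data.
def diagnostic_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, npv

sens, spec, npv = diagnostic_metrics(tp=23, fp=4, tn=26, fn=6)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} NPV={npv:.2%}")
# sensitivity=79.31% specificity=86.67% NPV=81.25%
```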

Keywords: benign, malignancies, MDP bone scan, MIBI scintigraphy

Procedia PDF Downloads 362
113 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models

Authors: Danielle Shackley, Yetunde Folajimi

Abstract:

As more people turn to the internet seeking health-related information, there is a greater risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods, such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis; those same models were then reproduced with the feature added. We evaluated the models using precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and accuracy constant at 75.2%. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin: there was no evidence of improvement in accuracy, and the largest gain was a 1.9% improvement in the precision score with the Complement model. Future expansion of this work could include replicating the experiment process and substituting a deep learning neural network model for Naive Bayes.
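The Bernoulli variant described above scores word presence and absence rather than word counts. A minimal self-contained sketch with Laplace smoothing; the toy headlines stand in for the actual labeled dataset, which is not reproduced here:

```python
import math

# Minimal Bernoulli Naive Bayes over word-presence features.
# Toy headlines are illustrative stand-ins for the real dataset.
train = [
    ("miracle cure eliminates virus overnight", "fake"),
    ("garlic cures covid doctors stunned", "fake"),
    ("vaccine trial shows reduced transmission", "real"),
    ("health officials report new study results", "real"),
]

vocab = sorted({w for text, _ in train for w in text.split()})
labels = {y for _, y in train}
prior = {y: sum(1 for _, l in train if l == y) / len(train) for y in labels}

# P(word present | label), Laplace-smoothed
presence = {y: {} for y in labels}
for y in labels:
    docs = [set(t.split()) for t, l in train if l == y]
    for w in vocab:
        presence[y][w] = (sum(w in d for d in docs) + 1) / (len(docs) + 2)

def predict(text):
    words = set(text.split())
    scores = {}
    for y in labels:
        s = math.log(prior[y])
        for w in vocab:  # Bernoulli NB also scores absent words
            p = presence[y][w]
            s += math.log(p) if w in words else math.log(1 - p)
        scores[y] = s
    return max(scores, key=scores.get)

print(predict("miracle garlic cure"))  # classified as "fake" on this toy data
```

A sentiment feature could be appended as one extra binary "word" per headline (e.g. a `SENT_NEG` token), which is how a polarity label folds into the same presence-based model.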

Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model

Procedia PDF Downloads 57
112 Human Leukocyte Antigen Class 1 Phenotype Distribution and Analysis in Persons from Central Uganda with Active Tuberculosis and Latent Mycobacterium tuberculosis Infection

Authors: Helen K. Buteme, Rebecca Axelsson-Robertson, Moses L. Joloba, Henry W. Boom, Gunilla Kallenius, Markus Maeurer

Abstract:

Background: The Ugandan population is heavily affected by infectious diseases, and Human leukocyte antigen (HLA) diversity plays a crucial role in the host-pathogen interaction, affecting the rates of disease acquisition and outcome. Identifying HLA class I alleles and determining which alleles are associated with tuberculosis (TB) outcomes would help in screening individuals in TB-endemic areas for susceptibility to TB and in predicting resistance or progression to TB, which would inevitably lead to better clinical management of TB. Aims: To determine the HLA class I phenotype distribution in a Ugandan TB cohort and to establish the relationship between these phenotypes and active and latent TB. Methods: Blood samples were drawn from 32 HIV-negative individuals with active TB and 45 HIV-negative individuals with latent MTB infection. DNA was extracted from the blood samples, and the DNA samples were HLA typed by the polymerase chain reaction-sequence specific primer method. The allelic frequencies were determined by direct count. Results: HLA-A*02, A*01, A*74, A*30, B*15, B*58, C*07, C*03 and C*04 were the dominant phenotypes in this Ugandan cohort. There were differences in the distribution of HLA types between the individuals with active TB and the individuals with LTBI, with only the HLA-A*03 allele showing a statistically significant difference (p = 0.0136). However, after FDR computation, the corresponding q-value (0.2176) was above the accepted proportion of expected false discoveries. Key findings: We identified a number of HLA class I alleles in a population from Central Uganda, which will enable us to carry out a functional characterization of CD8+ T-cell mediated immune responses to MTB. Our results also suggest that there may be a positive association between the HLA-A*03 allele and TB, implying that individuals with the HLA-A*03 allele are at a higher risk of developing active TB.
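The FDR computation mentioned above can be sketched with the Benjamini-Hochberg procedure. As an illustration only (the abstract does not state the number of comparisons): if sixteen alleles were tested and p = 0.0136 was the smallest p-value, its q-value would be 0.0136 × 16 / 1 = 0.2176, consistent with the reported figure.

```python
# Benjamini-Hochberg q-values: q_i = min over ranks j >= i of p_(j) * m / j.
def bh_qvalues(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0  # running minimum, enforced from the largest rank down
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        q[i] = prev
    return q

# Hypothetical scenario: 16 tests, one small p-value of 0.0136.
qs = bh_qvalues([0.0136] + [0.5] * 15)
print(min(qs))  # 0.2176
```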

Keywords: HLA, phenotype, tuberculosis, Uganda

Procedia PDF Downloads 368
111 From News Breakers to News Followers: The Influence of Facebook on the Coverage of the January 2010 Crisis in Jos

Authors: T. Obateru, Samuel Olaniran

Abstract:

In an era when the new media afford easy access to the packaging and dissemination of information, social media have become a popular avenue for sharing information, for good or ill. It is evident that the traditional role of journalists as ‘news breakers’ is fast being eroded. People now share information on happenings via social media such as Facebook and Twitter, so that journalists themselves now get leads on events from such sources. Beyond the access to information provided by the new media is the erosion of the gatekeeping role of journalists, who by their training and calling are supposed to handle information with responsibility. Thus, sensitive information that journalists would normally filter is randomly shared by social media activists. This was the experience of journalists in Jos, Plateau State, in January 2010, when another of the recurring ethnoreligious crises that have engulfed the state resulted in widespread killing, vandalism, looting, and displacement. Considered one of the high points of the crises in the state, the violence was covered by journalists who also relied on some of these sources to get their bearings. This paper examined the role of Facebook in the work of journalists who covered the 2010 crisis. Taking the gatekeeping perspective, it interrogated the extent to which Facebook impacted their professional duty, positively or negatively, vis-à-vis the peace journalism model. It employed a survey, with a questionnaire as the instrument, to elicit information from 50 journalists who covered the crisis. The paper revealed that the dissemination of hate information via mobile phones and social media, especially Facebook, aggravated the crisis situation. Journalists became news followers rather than news breakers because many of them were put on their toes by information (much of it inaccurate or false) circulated on Facebook.
It recommended that journalists must remain true to their calling by upholding their ‘gatekeeping’ role of disseminating only accurate and responsible information if they are to remain the main source of credible information on which their audiences rely.

Keywords: crisis, ethnoreligious, Facebook, journalists

Procedia PDF Downloads 257
110 Combination between Intrusion Systems and Honeypots

Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal

Abstract:

Today, security is a major concern. Intrusion detection systems, intrusion prevention systems, and honeypots can be used to moderate attacks. Many researchers have, from time to time, proposed IDSs (Intrusion Detection Systems). Some of these combine the features of two or more IDSs and are called hybrid intrusion detection systems; most combine the features of signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may pass through the IDS undetected, as signatures include factors based on the duration of events and the attacker's actions do not match them. Sometimes, for an unknown attack, no signature has yet been published, or an attacker may strike while the signature database is being updated. Thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs suffer from many false-positive readings. So there is a need to hybridize those IDSs so that they can overcome each other's shortcomings. In this paper, we propose a new approach to intrusion detection that is more efficient than the traditional IDS. The IDS is based on honeypot technology and anomaly-based detection methodology. We designed an architecture for the IDS in a packet tracer and then implemented it in real time. We discuss the experimental results obtained: both the honeypot and the anomaly-based IDS have shortcomings, but when these two technologies are hybridized, the newly proposed hybrid intrusion detection system (HIDS) is capable of overcoming these shortcomings with much-enhanced performance. In this paper, we present a modified hybrid intrusion detection system (HIDS) that combines the positive features of two different detection methodologies: honeypot methodology and anomaly-based intrusion detection methodology.
In the experiment, we ran both intrusion detection systems individually first and then together, and recorded the data from time to time. From the data, we can conclude that the resulting hybrid IDS is much better at detecting intrusions than the existing IDSs.
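The anomaly-based half of such a hybrid can be sketched as a simple statistical profile: learn the mean and spread of a traffic metric during normal operation, then flag observations that deviate beyond a threshold. The metric (per-second connection counts), window, and 3-sigma threshold below are illustrative assumptions, not this paper's configuration:

```python
import statistics

# Toy anomaly detector: flag per-second connection counts that deviate
# more than `k` standard deviations from the learned baseline.
def train_baseline(normal_counts):
    mu = statistics.mean(normal_counts)
    sigma = statistics.pstdev(normal_counts)
    return mu, sigma

def is_anomalous(count, mu, sigma, k=3.0):
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > k

mu, sigma = train_baseline([12, 15, 11, 14, 13, 12, 16, 14])  # normal traffic
print(is_anomalous(90, mu, sigma))  # burst typical of a scan -> True
print(is_anomalous(13, mu, sigma))  # ordinary traffic -> False
```

In a hybrid design, any connection to the honeypot (which serves no legitimate purpose) would additionally be flagged regardless of volume, complementing this statistical check.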

Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor

Procedia PDF Downloads 327
109 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised training models within specific domains, and their effectiveness diminishes when applied to identifying fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news to its specific area in advance, a classifier specialized for the corresponding field may judge fake news more accurately. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional domain-embedding vector is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests are conducted on existing, widely used datasets, and the experimental results demonstrate that this method is able to improve detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
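The key idea, soft membership in several domains rather than one hard label, can be sketched with seed-word overlap scores; one article then scores in multiple domains at once. The seed lists and normalization below are illustrative assumptions, not the paper's actual embedding method:

```python
# Soft domain embedding: fraction of each domain's seed words present in the
# text, so a single article can belong to several domains simultaneously.
# Seed-word lists are illustrative assumptions.
DOMAIN_SEEDS = {
    "politics": {"election", "senate", "vote", "policy"},
    "health":   {"vaccine", "virus", "doctor", "hospital"},
    "economy":  {"market", "inflation", "stocks", "bank"},
}

def domain_embedding(text):
    words = set(text.lower().split())
    return {d: len(words & seeds) / len(seeds)
            for d, seeds in DOMAIN_SEEDS.items()}

emb = domain_embedding("Senate debates vaccine policy before the vote")
print(emb)  # nonzero in both politics and health
```

A hard domain label would force this article into exactly one bucket, losing the health signal that the soft vector retains; the paper's learned embeddings serve the same purpose without hand-picked seeds.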

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 43
108 Syndecan -1 as Regulator of Ischemic-Reperfusion Damage Limitation in Experiment

Authors: M. E. Kolpakova, A. A. Jakovleva, L. S. Poliakova, H. El Amghari, S. Soliman, D. R. Faizullina, V. V. Sharoyko

Abstract:

Brain neuroplasticity is associated with blood-brain barrier vascular endothelial proteoglycans and post-stroke microglial activation. The study of the mechanisms by which remote ischemic postconditioning (RC) limits reperfusion injury is of interest due to its effects on functional recovery after cerebral ischemia. The goal of the study is to assess the role of syndecan-1 (SDC-1) in the restriction of ischemic-reperfusion injury in a rat middle cerebral artery occlusion model using an RC protocol. Randomized controlled experiments were conducted. Ischemia was induced by middle cerebral artery occlusion (MCAo) according to Belayev L. (1996) in male Wistar rats (n = 87) weighing 250 ± 50 g under general anesthesia (Zoletil 100 and Xylazine 2%). The difference in syndecan-1 (SDC-1) plasma concentration between sham-operated animals and animals with brain ischemia was 30% (30 min MCAo: 41.4* ± 1.3 ng/ml). The SDC-1 concentration in plasma samples from animals with ischemia plus the RC protocol was 112% (30 min MCAo + RC: 67.8** ± 5.8 ng/ml). Calculation of infarction volume in the ischemia group revealed brain injury of 31.97 ± 2.5%; the infarction volume was 13.6 ± 1.3% in the 30 min MCAo + RC group. Tissue swelling in the 30 min MCAo + RC group was 16 ± 2.1%, versus 47 ± 3.3% in the 30 min MCAo group. Correlation analysis showed a high direct correlation between infarct area and muscle strength in the right forelimb (r = 0.72) in the 30 min MCAo + RC group, and a very high inverse correlation between infarct area and capillary blood flow in the same group (p < 0.01; r = -0.98). We believe the SDC-1 molecule in blood plasma may act as a messenger in the mechanisms restricting ischemic-reperfusion injury. This underlies the infarct-limiting effect of remote ischemic postconditioning and early functional recovery.

Keywords: ischemia, MCAo, remote ischemic postconditioning, syndecan-1

Procedia PDF Downloads 18
107 Scientific Investigation for an Ancient Egyptian Polychrome Wooden Stele

Authors: Ahmed Abdrabou, Medhat Abdalla

Abstract:

The studied stele dates back to the Third Intermediate Period (1075–664 BC) of ancient Egypt. It is made of wood and covered with painted gesso layers. This study aims to use a combination of multispectral imaging {visible (VIS), infrared (IR), visible-induced infrared luminescence (VIL), ultraviolet-induced luminescence (UVL), and ultraviolet reflected (UVR)} along with portable X-ray fluorescence in order to map and identify the pigments, as well as to provide a deeper understanding of the painting techniques. Moreover, the authors were significantly interested in the identification of the wood species. Multispectral images were acquired in three spectral bands: ultraviolet (360-400 nm), visible (400-780 nm), and infrared (780-1100 nm). False-color images were made by digitally combining the VIS image with the IR or UV images in Adobe Photoshop. Optical microscopy (OM), portable X-ray fluorescence spectroscopy (p-XRF), and Fourier transform infrared spectroscopy (FTIR) were used in this study. Mapping and imaging techniques provided useful information about the spatial distribution of pigments. In particular, visible-induced infrared luminescence (VIL) allowed the spatial distribution of Egyptian blue pigment to be mapped: every region containing Egyptian blue, even down to single crystals in some instances, is clearly visible as a bright white area. However, complete characterization of the pigments requires the use of p-XRF spectroscopy. Based on the elemental analysis found by p-XRF, we conclude that the artists used mixtures of the basic mineral pigments to achieve a wider palette of hues. Regarding the identification of the wood species, microscopic examination indicated that the wood used was sycamore fig (Ficus sycomorus L.), which is recorded as being native to Egypt and was used to make wooden artifacts since at least the Fifth Dynasty.

Keywords: polychrome wooden stele, multispectral imaging, IR luminescence, Wood identification, Sycamore Fig, p-XRF

Procedia PDF Downloads 227
106 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, aid thermoregulation, or impart resonance to the voice, among other roles. Thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person. Many diseases may affect the development process of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Currently, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) based on features such as pixel value, spatial distribution, and shape. 
The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam to obtain the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist, using Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. The linear regression showed a strong association and low dispersion between variables, the Bland-Altman analyses showed no significant differences between the methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
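The seed-plus-region-growing step described above can be sketched on a toy 2D slice: starting from a detected seed pixel, neighboring pixels are absorbed while their intensity stays within a tolerance of the seed. The grid values, tolerance, and 4-connectivity below are illustrative assumptions, not the tool's actual parameters:

```python
from collections import deque

# Toy region growing on a 2D intensity grid (4-connectivity).
def region_grow(grid, seed, tol=10):
    rows, cols = len(grid), len(grid[0])
    target = grid[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(grid[nr][nc] - target) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Dark air-filled cavity (values ~0) inside brighter bone (values ~100).
slice_ = [
    [100, 100, 100, 100],
    [100,   2,   5, 100],
    [100,   4,   3, 100],
    [100, 100, 100, 100],
]
print(len(region_grow(slice_, seed=(1, 1))))  # 4 pixels in the cavity
```

Repeating this per slice and summing the region areas times the voxel volume yields a volume estimate, with morphological cleanup applied before summation as the abstract describes.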

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 472
105 Culture and Mental Health in Nigeria: A Qualitative Study of Berom, Hausa, Yoruba and Igbo Cultural Beliefs

Authors: Dung Jidong, Rachel Tribe, Poul Rohlerder, Aneta Tunariu

Abstract:

Cultural understandings of mental health problems are frequently overshadowed by Western conceptualizations, and research on culture and mental health in the Nigerian context seems to be lacking. This study examined the linguistic understandings and cultural beliefs that have implications for mental health among the Berom, Hausa, Yoruba, and Igbo people of Nigeria. A purposive sample of 53 participants underwent semi-structured interviews that lasted approximately 55 minutes each. Of the N = 53 participants, n = 26 were psychology-aligned practitioners and n = 27 were ‘laypersons’. Participants were recruited from four states in Nigeria: Plateau, Kaduna, Ekiti, and Enugu. All participants self-identified as members of their ethnic groups who speak and understand their native languages and cultural beliefs, and who are domiciled within their ethnic communities. Thematic analysis, using social constructionism from a critical-realist position, was employed to explore the participants’ beliefs about mental health and the clash between Western-trained practitioners’ views and the cultural beliefs of the ‘laypersons’. Data analysis identified three main themes that recurred across the four ethnic samples: (i) beliefs about mental health problems as a spiritual curse, (ii) traditional and religious healing being used more often than Western mental health care, and (iii) low levels of mental health awareness. In addition, Nigerian traditional and religious healing were revealed to be helpful, as the practice gives prominence to native languages and religious and cultural values. However, participants described the role of ‘false’ traditional or religious healers in communities as potentially harmful. Finally, due to the current lack of knowledge about mental health problems, awareness creation and re-orientation may be beneficial for both rural and urban Nigerian communities.

Keywords: beliefs cultures, health mental, languages religions, values

Procedia PDF Downloads 247
104 Short-Term Effects of an Open Monitoring Meditation on Cognitive Control and Information Processing

Authors: Sarah Ullrich, Juliane Rolle, Christian Beste, Nicole Wolff

Abstract:

Inhibition and cognitive flexibility are essential parts of executive functions in our daily lives, as they enable the avoidance of unwanted responses or the selective switching between mental processes to generate appropriate behavior. There is growing interest in improving inhibition and response selection through brief mindfulness-based meditations. Arguably, open-monitoring meditation (OMM) improves inhibitory and flexibility performance by optimizing cognitive control and information processing. Yet the underlying neurophysiological processes have been poorly studied. Using the Simon Go/Nogo paradigm, the present work examined the effect of a single 15-minute smartphone-app-based OMM on inhibitory performance and response selection in meditation novices. We used both behavioral and neurophysiological measures (event-related potentials, ERPs) to investigate which subprocesses of response selection and inhibition are altered after OMM. The study was conducted in a randomized crossover design with N = 32 healthy adults, investigating Go and Nogo trials in the paradigm. The results show that as little as 15 minutes of OMM can improve response selection and inhibition at behavioral and neurophysiological levels. More specifically, OMM reduces the rate of false alarms, especially during Nogo trials, regardless of congruency. It appears that OMM optimizes conflict processing and response inhibition compared to no meditation, which is also reflected in the ERP N2 and P3 time windows. The results may be explained by the metacontrol model, which posits a specific processing mode with increased flexibility and inclusive decision-making under OMM. Importantly, however, the effects of OMM were only evident when there was prior experience with the task. It is likely that OMM frees up cognitive resources, as the amplitudes of these ERPs decreased. Meditation novices seem to make finer adjustments during conflict processing after familiarization with the task.

Keywords: EEG, inhibition, meditation, Simon Nogo

Procedia PDF Downloads 166
103 Logical Thinking: A Surprising and Promising Insight for Creative and Critical Thinkers

Authors: Luc de Brabandere

Abstract:

Researchers in various disciplines have long tried to understand how a human being thinks. Most of them seem to agree that the brain works in two very different modes. For us, the first phase of thought imagines, diverges, and unlocks the field of possibilities. The second phase judges, converges, and chooses. But if we were to stop there, that would give the impression that thought is essentially an individual effort that seldom depends on context. This is, however, not the case. Whether we are a champion in creativity, so primarily in induction, or a master in logic, where we are confronted with reality, the ideas we lay out are destined to be presented to third parties. They must therefore be exposed, defended, communicated, negotiated, or even sold. Regardless of the quality of the concepts we craft (creative thinking) and the inferences we build (logical thinking), we will one day or another be confronted by people whose beliefs, opinions, and ideas differ from ours (critical thinking). Logic and critique: The shared characteristics of logical and critical thought include a three-level structure of reasoning invented by the Greeks. For the first time in history, Aristotle tried to model thought as deployable in three stages: the concept, the statement, and the reasoning. The three levels can be assessed according to different criteria: a concept is more or less useful, a statement is true or false, and reasoning is right or wrong. This three-level structure allows us to differentiate logic and critique, where the intention and the words used are not the same. Logic deals only with the structure of reasoning and exhausts the problem; it regards premises as acquired and excludes debate. Logic operates in certainty and pursues the truth, whereas critique searches for the plausible. Logic and creativity: Many known models present the brain as a two-stroke engine (divergence vs. convergence, fast vs. slow, left brain vs. right brain, Yin vs. Yang, etc.). But that is not the whole story. “Why didn’t we think of that before?” How often have we heard that sentence? A creative idea is the outcome of logic, but one you can only understand afterward! Through the use of exercises, we will witness how logic and creativity work together. A third theme is hidden behind the two main themes of the conference: logical thought, on which the author can shed some light.

Keywords: creativity, logic, critique, digital

Procedia PDF Downloads 58
102 Effects of Acupuncture Treatment in Gait Parameters in Parkinson's Disease

Authors: Catarina Isabel Ramos Pereira, Jorge Machado, Begona Alonso Criado, Maria João Santos

Abstract:

Introduction: Gait disorders are among the symptoms that have severe implications for quality of life in Parkinson's disease (PD). Currently, there is no therapy to reverse or treat this condition. None of the drugs used in conventional medical treatment is entirely efficient, and all have a high incidence of side effects. Acupuncture treatment is believed to improve motor ability, but there is still little scientific evidence in individuals with PD. Aim: The aim of the study is to investigate the acute effect of acupuncture on gait parameters in Parkinson's disease. Methods: This is a randomized and controlled crossover study. Each patient was part of both the experimental group (real acupuncture) and the control group (false acupuncture/sham), with the sequence randomized. Gait parameters were measured at two different moments, before and after treatment, using four force platforms together with 3D marker positions captured by 11 cameras. Images were quantitatively analyzed using Qualisys Track Manager software, which allowed us to extract data related to the quality of gait and balance. Seven patients with a diagnosis of Parkinson's disease were included in the study. Results: Statistically significant differences between the initial and final moments were found for the experimental group in gait speed (p = 0.016), gait cadence (p = 0.006), support base width (p = 0.0001), medio-lateral oscillation (p = 0.017), left-right step length (p = 0.0002), stride length right-right (p = 0.0000) and left-left (p = 0.0018), and time of the left support phase (p = 0.029), right support phase (p = 0.025), and double support phase (p = 0.015). Differences in right-left stride length were found for both groups. Conclusion: Our results show that acupuncture could enhance gait in Parkinson's disease patients. Deeper research involving a larger number of volunteers should be carried out to validate these encouraging findings.

Keywords: acupuncture, traditional Chinese medicine, Parkinson's disease, gait

Procedia PDF Downloads 133
101 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have become a major security threat, so it is important to impede them. Impeding an intrusion relies entirely on detecting it, which is the primary concern of any security tool such as an Intrusion Detection System (IDS). It is therefore imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. Existing intrusion detection techniques have the limitation of using the raw data set for classification: the classifier may be confused by redundant features, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform the raw features into a principal feature space and to select features based on their sensitivity, with eigenvalues used to determine that sensitivity. To refine the selected features further, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification, a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) are used, owing to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) Cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and is able to minimize the number of features while maximizing detection rates.
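The transform-then-classify pipeline described above can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data standing in for the KDD'99 records (41 raw features, binary normal/attack label); the LDA and LBP transforms and the PSO-based feature selection are omitted, and all names here are assumptions of the sketch rather than the authors' implementation.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for KDD'99: 41 raw features, normal (0) vs attack (1).
X, y = make_classification(n_samples=2000, n_features=41,
                           n_informative=12, n_redundant=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Transform raw features into principal-feature space, keeping the
# components (largest eigenvalues) that explain 95% of the variance,
# then classify with an SVM.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=0.95),
                      SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
print(f"features kept by PCA: {model.named_steps['pca'].n_components_}")
```

Detection rate and false-positive rate, the two metrics the abstract emphasizes, would be read off the confusion matrix of the held-out predictions rather than from raw accuracy alone.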

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)

Procedia PDF Downloads 323
100 Implications of Stakeholder Theory as a Critical Theory

Authors: Louis Hickman

Abstract:

Stakeholder theory is a powerful conception of the firm based on the notion that a primary focus on shareholders is inadequate and, in fact, detrimental to the long-term health of the firm. As such, it represents a departure from prevalent business school teachings, with their focus on accounting and cost controls. Herein, it is argued that stakeholder theory is better conceptualized as a critical theory, that is, one which represents a fundamental change in business behavior and can transform the behavior of businesses if accepted. By arguing that financial interests underdetermine the success of the firm, stakeholder theory further democratizes business by endorsing an increased awareness of the importance of non-shareholder stakeholders. Stakeholder theory requires new, non-financial measures of success that provide a new consciousness for management and businesses when conceiving their actions and their place in society. Thereby, stakeholder theory can show individuals through self-reflection that the capitalist impulse to generate wealth cannot act as the primary driver of business behavior; rather, we would choose to support interests outside ourselves if we made the decision in free discussion. This is due to the false consciousness, embedded in our capitalism, that the firm's finances are the foremost concern of modern organizations, at the expense of other goals. A focus on non-shareholder stakeholders in addition to shareholders generates greater benefits for society by improving the condition of customers, employees, suppliers, the community, and shareholders alike. These positive effects produce further gains in well-being for stakeholders and translate into better health for the future firm.
Additionally, shareholders are the only stakeholder group that does not by itself provide long-term firm value, since there are not always communities with qualified employees, suppliers capable of providing the required product quality, or persons with purchasing power for all conceivable products. Therefore, the firm's long-term health is best served by improving as much as possible of the society it inhabits, rather than the shareholder alone.

Keywords: capitalism, critical theory, self-reflection, stakeholder theory

Procedia PDF Downloads 296