Search results for: failure detection and prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7690

940 Computational Approach to Identify Novel Chemotherapeutic Agents against Multiple Sclerosis

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Multiple sclerosis (MS) is a chronic demyelinating autoimmune disorder of the central nervous system (CNS). Current therapies either do not halt the progression of the disease or have side effects that limit long-term use of the available Disease Modifying Therapies (DMTs). Given this treatment-failure scenario, we focus on screening novel analogues of the available DMTs that specifically bind and inhibit the sphingosine-1-phosphate receptor 1 (S1PR1), thereby hindering lymphocyte propagation toward the CNS. Such drug-like analog molecules should decrease the frequency of relapses (recurrence of the symptoms associated with MS) with higher efficacy and lower toxicity to the human system. In this study, an integrated approach involving a ligand-based virtual screening protocol (Ultrafast Shape Recognition with CREDO Atom Types, USRCAT) was employed to identify non-toxic, drug-like analogs of the approved DMTs. The potential of the analog molecules to cross the blood-brain barrier (BBB) was estimated. In addition, molecular docking and simulation were performed using AutoDock Vina 1.1.2 and GOLD 3.01 with the X-ray crystal structure of the Mtb LprG protein to calculate the affinity and specificity of the analogs for the LprG protein. The docking results were further confirmed with DSX (DrugScore eXtended), a robust program for evaluating the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein; a ligand with higher predicted affinity yields a more negative score. Non-specific ligands were then screened out using the structural filter proposed by Baell and Holloway. Based on the USRCAT results, Lipinski's values, and the toxicity and BBB analyses, the drug-like analogs of fingolimod and BG-12, RTL and CHEMBL1771640 respectively, were found to be non-toxic and BBB-permeable.
Docking and DSX analysis showed that RTL and CHEMBL1771640 can bind the binding pocket of the human S1PR1 receptor protein with greater affinity than their parent compound (fingolimod). We also found that all drug-like analogs of the standard MS drugs passed the Baell and Holloway filter.
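Lipinski's rule of five, used above alongside the USRCAT, toxicity, and BBB screens, can be sketched as a simple descriptor filter. This is a minimal illustration, not the study's pipeline; the helper name and the descriptor values are assumptions for demonstration only.

```python
def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: a compound is considered orally
    drug-like if it violates at most one of the four criteria."""
    violations = sum([
        mw > 500,          # molecular weight <= 500 Da
        logp > 5,          # octanol-water partition coefficient <= 5
        h_donors > 5,      # <= 5 hydrogen-bond donors
        h_acceptors > 10,  # <= 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# Illustrative (hypothetical) descriptor values for two candidates:
print(passes_lipinski(307.5, 4.2, 2, 2))   # small, moderately lipophilic -> passes
print(passes_lipinski(600.0, 6.5, 2, 2))   # two violations -> fails
```

In practice such descriptors would come from a cheminformatics toolkit rather than being typed in by hand.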

Keywords: antagonist, binding affinity, chemotherapeutics, drug-like, multiple sclerosis, S1PR1 receptor protein

Procedia PDF Downloads 254
939 Competitive DNA Calibrators as Quality Reference Standards (QRS™) for Germline and Somatic Copy Number Variations/Variant Allelic Frequencies Analyses

Authors: Eirini Konstanta, Cedric Gouedard, Aggeliki Delimitsou, Stefania Patera, Samuel Murray

Abstract:

Introduction: Quality reference DNA standards (QRS) for molecular testing by next-generation sequencing (NGS) are essential for accurate quantitation of copy number variations (CNV) in germline analyses and of variant allelic frequencies (VAF) in somatic analyses. Objectives: Many molecular analytics for oncology patients rely on quantitative metrics. Test validation and standardisation likewise rely on the availability of surrogate control materials for establishing test limit of detection (LOD), sensitivity, and specificity. We have developed a dual calibration platform in which QRS pairs are included in the analysed DNA samples, allowing accurate quantitation of CNV and VAF metrics within and between patient samples. Methods: QRS™ blocks of up to 500 nt were designed for common NGS panel targets, incorporating ≥ 2 identification tags (IDTDNA.com). These were analysed upon spiking into gDNA, somatic DNA, and ctDNA using a proprietary CalSuite™ platform adaptable to common LIMS. Results: We demonstrate reproducible QRS™ calibration at spike-in levels of 5–25% to within ± 2.5% in gDNA and ctDNA. We further demonstrate CNV and VAF measurement within and between samples (gDNA and ctDNA) with the same reproducibility (± 2.5%) in clinical samples of lung cancer and HBOC (EGFR and BRCA1, respectively). CNV analysis achieved similar accuracy with a single pair of QRS calibrators as with multiple single-target sequencing controls. Conclusion: Dual paired QRS™ calibrators allow accurate and reproducible quantitative analysis of CNV, VAF, and intrinsic sample allele measurement, both within and between samples. This not only simplifies NGS analytics but also allows clinically relevant biomarker VAFs to be monitored across patient ctDNA samples with improved accuracy.
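As a rough sketch of the arithmetic involved, a VAF is the fraction of reads carrying the variant, and a spiked-in calibrator of known fraction can correct proportional assay bias. This is a minimal illustration of the idea, not the CalSuite™ implementation; the correction model and all numbers are assumptions.

```python
def vaf(alt_reads, total_reads):
    """Variant allelic frequency: fraction of reads carrying the variant."""
    return alt_reads / total_reads

def calibrated_vaf(sample_vaf, qrs_expected, qrs_observed):
    """Correct a sample VAF with a spiked-in calibrator of known
    (expected) fraction; the observed/expected ratio estimates the
    assay's proportional bias (hypothetical correction model)."""
    return sample_vaf * (qrs_expected / qrs_observed)

obs = vaf(48, 400)                            # 12% observed in the sample
corrected = calibrated_vaf(obs, 0.10, 0.12)   # calibrator spiked at 10%, read back at 12%
```

Real pipelines must additionally handle strand bias, sequencing error, and depth-dependent uncertainty.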

Keywords: calibrator, CNV, gene copy number, VAF

Procedia PDF Downloads 151
938 New Test Algorithm to Detect Acute and Chronic HIV Infection Using a 4th Generation Combo Test

Authors: Barun K. De

Abstract:

Acquired immunodeficiency syndrome (AIDS) is caused by two types of human immunodeficiency viruses, collectively designated HIV. HIV infection is spreading globally, particularly in developing countries. Before an individual is diagnosed with HIV, the disease goes through different phases: an acute early phase, followed by an established or chronic phase, and subsequently a latency period after which the individual becomes immunodeficient. It is in the acute phase that an individual is most infectious, owing to a high viral load. Presently, HIV diagnosis often involves tests that cannot detect acute-phase infection, during which both viral RNA and the p24 antigen are expressed. Instead, these less sensitive tests detect antibodies to viral antigens, which typically appear only after seroconversion later in the disease process. Such antibodies are detected both in asymptomatic HIV-infected individuals and in AIDS patients. Studies indicate that early diagnosis and treatment of HIV infection can reduce medical costs, improve survival, and reduce transmission to uninfected partners. Newer 4th-generation combination antigen/antibody tests are highly sensitive and specific for the detection of acute and established HIV infection (HIV-1 and HIV-2), enabling immediate linkage to care. The CDC (Centers for Disease Control and Prevention, USA) recently recommended an algorithm involving three different tests to screen for and diagnose acute and established HIV-1 and HIV-2 infections in a general population. Initially, a 4th-generation combo test detects the viral p24 antigen and specific antibodies against HIV-1 and HIV-2 envelope proteins. If this test is positive, it is followed by a differentiation assay, which detects antibodies against specific HIV-1 and HIV-2 envelope proteins, confirming an established HIV-1 or HIV-2 infection.
If the differentiation assay is negative, however, a further test measuring viral load is performed, confirming an acute HIV-1 infection. Screening of a Phoenix-area population detected 0.3% new HIV infections, of which 32.4% were acute cases. Studies in the U.S. indicate that this algorithm effectively reduces HIV transmission through immediate treatment and education following diagnosis.
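The three-step decision logic described above can be sketched as a small function. This is a simplified illustration of the algorithm's flow, not clinical guidance; the function name and the return strings are hypothetical.

```python
def hiv_algorithm(combo_positive, differentiation_result, viral_load_detected):
    """Sketch of the three-step testing flow described above.
    combo_positive: 4th-generation antigen/antibody screen result.
    differentiation_result: 'HIV-1', 'HIV-2', or None (negative).
    viral_load_detected: result of the viral load (RNA) test."""
    if not combo_positive:
        return "HIV negative"
    if differentiation_result in ("HIV-1", "HIV-2"):
        return f"established {differentiation_result} infection"
    # Screen positive but antibody differentiation negative: the RNA
    # test distinguishes acute infection from a false-positive screen.
    if viral_load_detected:
        return "acute HIV-1 infection"
    return "likely false-positive screen"

print(hiv_algorithm(True, None, True))   # acute HIV-1 infection
```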

Keywords: new algorithm, HIV, diagnosis, infection

Procedia PDF Downloads 409
937 The Reasons for Failure in Writing Essays: Teaching Writing as a Project-Based Enterprise

Authors: Ewa Toloczko

Abstract:

Studies show that developing writing skills throughout years of formal foreign-language instruction does not necessarily result in rewarding accomplishments among learners, nor in an affirmative attitude towards written assignments. What causes this apparently widespread bias against writing might be the diminished relevance students attach to it compared with the other productive skill, speaking; insufficient resources available for them to succeed; or the way writing is approached by instructors, that is, inapt teaching techniques that discourage rather than kindle learners' engagement. The assumption underlying this presentation is that psychological and psycholinguistic factors constitute a key dimension of every writing process and hence should be seriously considered in both material design and lesson planning. The author demonstrates research in which writing tasks were conceived of as attitudinal rather than technical operations and consequently turned into meaningful, socially oriented events that students could relate to and take an active hand in. The instrument employed to achieve this purpose, and to make writing more interactive, was the format of a project: a carefully devised series of tasks that involved students as human beings, not only as language learners. The projects rested on the premise that the presence of peers and the teacher in class could be used in a supportive rather than evaluative mode. In fact, the research showed that collaborative work and constant meaning negotiation reinforced not only the bonds between learners but also the language form and structure of the output. Accordingly, the role of the teacher shifted from assessor to 'problem barometer', always ready to acknowledge the slightest improvements in students' language performance.
In this way, written verbal communication, which usually aims merely to demonstrate accuracy and coherent content for assessment, became part of an enterprise meant to emphasise its social aspect: the writer in a real-life setting. The sample projects show the spectrum of possibilities teachers have when exploring the domain of writing within the school curriculum. The ideas are easy to modify and adjust to all proficiency levels and ages, although they were initially designed for teenage and young adult learners of English as a foreign language in both European and Asian contexts.

Keywords: projects, psycholinguistic/psychological dimension of writing, writing as a social enterprise, writing skills, written assignments

Procedia PDF Downloads 233
936 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry

Authors: Vadanasundari Vedarethinam, Kun Qian

Abstract:

Metabolic analysis is more distal than proteomics and genomics in clinical terms and requires rationally distinct techniques, designed materials, and devices for clinical diagnosis. Conventional approaches such as spectroscopy, biochemical analyzers, and electrochemistry have been used for metabolic diagnosis. Currently, there are four major challenges: (I) long sample-pretreatment processes; (II) difficulty in the direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) construction of diagnostic tools on materials- and device-based platforms for real-case biomedical applications. The development of chips incorporating nanomaterials is a promising way to address these critical issues. Mass spectrometry (MS) offers high sensitivity, accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with chip devices affords desirable speed, with mass measurement in seconds, and high sensitivity at low cost, suited to large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting, serving as a hot carrier in LDI MS, through a series of chips bearing gold nanoshells on the surface, produced by controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscale experiments, affording direct, high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF), or exosomes. Further, we demonstrated on-chip, direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification. To the best of our knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use.

Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics

Procedia PDF Downloads 161
935 Experimental Study on Bending and Torsional Strength of Bulk Molding Compound Seat Back Frame Part

Authors: Hee Yong Kang, Hyeon Ho Shin, Jung Cheol Yoo, Il Taek Lee, Sung Mo Yang

Abstract:

Lightweight technology using composites is being developed for vehicle seat structures, and its design must meet safety requirements. According to the Federal Motor Vehicle Safety Standard (FMVSS) 207 seating-systems test procedure, a rearward moment load is applied to the seat back frame structure for the safety evaluation of the vehicle seat. Following the manufacturing process, the composite seat back frame is divided into three parts: an upper frame and left- and right-side frames. When a rear moment load is applied to the seat back frame, the side frames receive bending and torsional loads simultaneously and therefore carry the greatest loads, so strength testing at the component level is required. In this study, a component test method based on the FMVSS 207 seating-systems test procedure is proposed for analysing the bending and torsional strength of an automotive Bulk Molding Compound (BMC) seat back side frame, and the strength contribution of carbon-band reinforcement is evaluated. The back-side frame parts used in the tests were manufactured from BMC composed of a vinyl ester matrix and short carbon fibers; carbon-band-reinforced and non-reinforced parts were then formed through a high-temperature compression molding process. The test fixture was constructed with reference to FMVSS 207, and bending and torsional loads were applied under displacement control to perform strength tests for four load conditions. The results of each test are shown as load-displacement curves of the specimens, and the failure strength of the parts as affected by the carbon-band reinforcement is analysed.
Additionally, the fracture characteristics of the parts in the four strength tests were evaluated, and the weak points of the seat back-side frame structure were identified for each test condition. Through these bending and torsional strength test methods, we confirmed the strength and fracture characteristics of the BMC seat back side frame with and without carbon-band reinforcement, and we propose a component-level strength test method for vehicle seat back frames that can meet FMVSS 207.
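Reading a failure strength off a displacement-controlled test reduces to finding the peak load on the measured load-displacement curve. A minimal sketch, with entirely hypothetical curve data:

```python
def failure_strength(load_displacement):
    """Failure strength taken as the peak load on the measured
    load-displacement curve, as in the component tests above.
    load_displacement: list of (displacement_mm, load_N) pairs."""
    return max(load for _, load in load_displacement)

# Hypothetical curve: load rises, peaks, then drops at fracture.
curve = [(0.0, 0.0), (2.0, 450.0), (4.0, 820.0), (5.5, 910.0), (6.0, 640.0)]
print(failure_strength(curve))  # prints 910.0 (peak load before fracture)
```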

Keywords: seat back frame, bending and torsional strength, BMC (Bulk Molding Compound), FMVSS 207 seating systems

Procedia PDF Downloads 208
934 Authentication and Traceability of Meat Products from South Indian Market by Species-Specific Polymerase Chain Reaction

Authors: J. U. Santhosh Kumar, V. Krishna, Sebin Sebastian, G. S. Seethapathy, G. Ravikanth, R. Uma Shaanker

Abstract:

Food is one of the basic needs of human beings, required for normal bodily function and healthy growth. Food adulteration has recently increased, driven by the profit to be made from padding quantity. Animal-source foods can provide a variety of micronutrients that are difficult to obtain in adequate quantities from plant-source foods alone. In the meat industry in particular, animal products are susceptible targets for fraudulent labeling because of the economic profit from selling cheaper meat as meat from more profitable and desirable species. This work presents an overview of the main PCR-based techniques applied to date to verify the authenticity of beef and beef products. We analysed 25 market beef samples from South India, using PCR methods based on the sequence of the cytochrome b gene for source-species identification. All samples sold as beef were confirmed as Bos taurus. Interestingly, however, male meat commands a higher price than female meat, which makes market samples susceptible to sex mislabeling. We therefore also used a cattle sex-determination marker, TSPY (testis-specific protein, Y-encoded), a Y-specific gene with homologs in several mammalian species, including humans, horses, and cattle; being Y-encoded, it amplifies only in males. We used the multiple PCR products to form species-specific 'fingerprints' on gel electrophoresis, which may be useful for meat authentication. Amplicons were obtained only with the cattle-specific PCR. We found that 13 of the market meat samples were female beef. These results suggest that the species-specific PCR methods established in this study would be useful for the simple and easy detection of adulteration in meat products.
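Interpreting the two markers, the cattle-specific cytochrome b amplicon (species) and TSPY (sex), reduces to presence/absence logic. A sketch with hypothetical function and label names:

```python
def interpret_pcr(cytb_cattle_positive, tspy_positive):
    """Interpret the two markers from the abstract:
    cattle-specific cytochrome b amplicon -> species is Bos taurus;
    TSPY amplicon (Y-encoded) -> male animal. Simplified sketch."""
    if not cytb_cattle_positive:
        return "not cattle (possible adulteration)"
    return "male cattle" if tspy_positive else "female cattle"

print(interpret_pcr(True, False))  # female cattle
```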

Keywords: authentication, meat products, species-specific, TSPY

Procedia PDF Downloads 373
933 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of fingers on a hand is widely used in motion gaming and distance-control applications and is discussed in several published papers as a building block for hand recognition systems. An automated finger enumeration technique should not only be accurate but must also respond quickly to moving-picture input: high video frame rates in motion games or distance control constrain the program's overall speed, so image-processing software such as Matlab needs to produce results at high computation speed. Since automated finger enumeration with minimum error and processing time is desired, this paper presents and analyses a comparative study between two finger enumeration techniques. In the pre-processing stage, various image-processing functions were applied to a real-time video input to obtain a final, cleaned, auto-cropped image of the hand for use by both techniques. The first technique uses the well-known morphological tool of skeletonization and counts the skeleton's endpoints as fingers. The second technique uses a radial distance method, which reduces the hand to a one-dimensional representation, to enumerate the fingers. The steps of both algorithms are explained, and a comparative study then analyses the accuracy and speed of the two techniques. Through experimental testing under different background conditions, the radial distance method was observed to be more accurate and more responsive to real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations against a plain-colored background. Finally, the limitations of the enumeration techniques are presented.
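The radial distance method can be sketched as peak counting on a one-dimensional profile of contour-to-centroid distances, where each run of samples above a threshold is taken as one finger. The paper's implementation is in Matlab; this is an equivalent Python sketch, and the profile values and threshold below are illustrative assumptions:

```python
def count_fingers(radial_profile, threshold):
    """Count fingers from a 1-D radial-distance profile (contour
    distance from the hand centroid, sampled around the boundary).
    Each contiguous run of samples above the threshold counts as
    one finger. Simplified sketch of the radial distance method."""
    fingers, in_peak = 0, False
    for r in radial_profile:
        if r > threshold and not in_peak:
            fingers += 1          # rising edge: a new peak starts
            in_peak = True
        elif r <= threshold:
            in_peak = False       # falling edge: the peak has ended
    return fingers

# Hypothetical profile with four distinct peaks above the threshold:
profile = [10, 12, 30, 32, 11, 28, 29, 9, 31, 10, 27, 30, 12]
print(count_fingers(profile, 20))  # prints 4
```

A real implementation would also normalize the profile for hand scale and smooth it to suppress contour noise.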

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 381
932 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading

Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger

Abstract:

Future power plants face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of this paper is therefore to investigate experimentally, at component scale and under near-to-service operating conditions, the creep and thermo-mechanical behavior of improved materials and their welded joints that are promising for application in highly efficient and flexible future power plants. These materials promise increased flexibility and reduced manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall-thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints of conventional materials (the ferritic and martensitic steels T24 and T92) to nickel-based alloys (A617B and HR6W), using membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, it focuses on the creep behavior under multiaxial stress of similar and dissimilar welded joints of high-temperature-resistant nickel-based alloys (A740H, A617B, and HR6W), using a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used to estimate the axial component load needed to induce a meaningful damage evolution without causing total component failure. Metallographic investigations after testing will support understanding of the damage mechanisms and of the influence of the thermo-mechanical load and multiaxiality on microstructural change and on creep and TMF strength.

Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel-alloys, high temperature resistant steels

Procedia PDF Downloads 116
931 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays

Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir

Abstract:

Owing to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced MNA performance, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, this work investigates the optimized design of dissolvable MNAs as well as their manufacture. For this purpose, a mechanical model of a single MN with an obelisk geometry is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip, caused by the reaction force when penetrating the skin. A multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is then performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is painless and effortless penetration of the skin while minimizing mechanical failure caused by the maximum stress occurring in the structure. Based on the resulting optimal design parameters, master (male) molds are fabricated from PMMA using mechanical micromachining, a method selected mainly for its geometric capability, production speed, production cost, and the variety of workable materials. The master molds are then cleaned ultrasonically to remove chip residues, and can be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also produce MNAs that include bioactive cargo.
To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be fabricated precisely using the presented methodology and that the fabricated MNAs pierce the skin effectively without failure.
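At the core of the NSGA-II optimization above is non-dominated sorting: keeping only designs for which no other design is at least as good on both objectives. A minimal sketch for two minimization objectives; the objective pairing (e.g. insertion force vs. maximum stress) and the values are hypothetical:

```python
def pareto_front(points):
    """Return the non-dominated set for two minimization objectives,
    e.g. (insertion force, maximum von Mises stress) of candidate
    microneedle geometries. Simplified filtering step behind NSGA-II;
    assumes no duplicate points."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(designs))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

The full algorithm additionally ranks successive fronts and uses crowding distance to preserve diversity along each front.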

Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis

Procedia PDF Downloads 111
930 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and it therefore plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while comfort is addressed later in the project; design choices must therefore sometimes be reconsidered because of the natural incompatibility between these two specifications. Robustness is also an important concern, as it relates to manufacturing costs as well as to performance after the ageing of components such as shock absorbers. This paper proposes an approach to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict a system's response uncertainty intervals from its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is carried out to build the response surfaces that statistically represent the black-box system. Second, over several iterations, an optimum set is proposed and validated, forming a Pareto front; at the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy determines the parameter-tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model is tested as an example, applying road excitations from actual road measurements for both endurance and comfort calculations.
One indicator, based on Basquin's law, is defined to compare the global chassis durability of different parameter settings; another, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness is finally obtained, and reference tests confirm the good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational cost for a complex system.
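The first step of the approach, building a response surface from an initial design of experiments, can be illustrated with a one-dimensional Chebyshev fit. The black-box function below is a stand-in assumption; in the paper the responses would come from the quarter-car durability and comfort simulations:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def black_box(x):
    """Hypothetical 1-D response (e.g. a comfort indicator versus a
    normalized suspension parameter); a stand-in for the real model."""
    return np.exp(-x) * np.sin(3 * x)

# Design of experiments: sample the bounded parameter space [-1, 1]
# and fit a Chebyshev-series surrogate (response surface) to it.
x_train = np.linspace(-1.0, 1.0, 15)
coeffs = C.chebfit(x_train, black_box(x_train), deg=8)

# The surrogate now replaces costly evaluations; its accuracy can be
# checked against the black box on a dense grid.
x_test = np.linspace(-1.0, 1.0, 101)
max_err = np.max(np.abs(C.chebval(x_test, coeffs) - black_box(x_test)))
```

Bounding the surrogate's extrema over the parameter intervals is what yields the response uncertainty intervals used as robustness objectives.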

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 151
929 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recurrent neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles seek patterns and relationships that represent the actual spectral energy in a low-dimensional space; increasing the separation between classes in feature space improves the achievable classification accuracy. Feature extraction by a neural network is nonlinear, involving a variety of transformations and mathematical optimization, whereas principal component analysis relies on linear transformations to extract features. In this paper, the isotope spectrum information is preprocessed by extracting its frequency components over time (a spectrogram) and using them as the training dataset. The Fourier-transform extraction of frequency components is optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4, and readout electronic noise was simulated by tuning the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, further improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction step using deep machine learning achieved high accuracy. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes.
Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detectable isotopes in the training database. Ensemble learning contributed significantly to the final prediction.
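The ensemble step, combining the votes of many models, can be sketched as per-event majority voting. The model outputs below are hypothetical:

```python
def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.
    predictions: list of lists, one inner list of labels per model,
    aligned by event. Ties resolve arbitrarily in this sketch."""
    n_events = len(predictions[0])
    combined = []
    for i in range(n_events):
        votes = [model[i] for model in predictions]
        combined.append(max(set(votes), key=votes.count))
    return combined

# Three hypothetical classifiers labelling four detector events:
models = [["gamma", "neutron", "gamma", "gamma"],
          ["gamma", "neutron", "neutron", "gamma"],
          ["neutron", "neutron", "gamma", "gamma"]]
print(majority_vote(models))  # ['gamma', 'neutron', 'gamma', 'gamma']
```

Weighted or probability-averaged voting is a common refinement when the models output class scores rather than hard labels.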

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 150
928 Role of Human Epididymis Protein 4 as a Biomarker in the Diagnosis of Ovarian Cancer

Authors: Amar Ranjan, Julieana Durai, Pranay Tanwar

Abstract:

Background & Introduction: Ovarian cancer is one of the most common malignant tumors in women; 70% of cases are diagnosed at an advanced stage, and the associated five-year survival rate is less than 30%. Early diagnosis of ovarian cancer is therefore a key factor in improving patient survival. Presently, CA125 (carbohydrate antigen 125) is used for the diagnosis and therapeutic monitoring of ovarian cancer, but its sensitivity and specificity are not ideal. The introduction of HE4, human epididymis protein 4, has attracted much attention: HE4 has a sensitivity of 72.9% and a specificity of 95% for differentiating between benign and malignant adnexal masses, which is better than CA125 detection. Methods: Serum HE4 and CA125 were estimated using the chemiluminescence method. Our cases comprised 40 epithelial ovarian cancers, 9 benign ovarian tumors, 29 benign gynaecological diseases, and 13 healthy individuals. The healthy group included women undergoing family-planning or menopause-related medical consultations who were negative for an ovarian mass. Optimal cut-off values for HE4 and CA125, determined by statistical analysis, were 55.89 pmol/L and 40.25 U/L, respectively. Results: The level of HE4 was raised in all 40 ovarian cancer patients, whereas CA125 levels were normal in 6 of the 40 histopathologically confirmed ovarian cancer cases. There was a significant decrease in the level of HE4 in comparison with CA125 in benign ovarian tumor cases. Both HE4 and CA125 were raised in the non-ovarian cancer group, which included cancers of the endometrium and cervix. In the healthy group, HE4 was normal in all subjects except one case of a rudimentary horn, where the raised HE4 level was attributed to incomplete development of the uterus, whereas CA125 was raised in 3 cases.
Conclusions: The findings show that the serum level of HE4 is an important indicator in the diagnosis of ovarian cancer and that it distinguishes between benign and malignant pelvic masses. A combined HE4 and CA125 panel should be extremely valuable in improving the diagnostic efficiency for ovarian cancer. These findings need to be validated in a larger cohort of patients.
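Sensitivity and specificity at a fixed cutoff, the metrics quoted above for HE4, reduce to counting the four confusion-matrix cells. The HE4 values below are hypothetical illustrations, not the study's data; the cutoff is the abstract's 55.89 pmol/L:

```python
def sens_spec(values, labels, cutoff):
    """Sensitivity and specificity of a biomarker at a given cutoff.
    Values above the cutoff are called positive; labels are True for
    confirmed cancer, False for benign/healthy."""
    tp = sum(v > cutoff and l for v, l in zip(values, labels))
    fn = sum(v <= cutoff and l for v, l in zip(values, labels))
    tn = sum(v <= cutoff and not l for v, l in zip(values, labels))
    fp = sum(v > cutoff and not l for v, l in zip(values, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical serum HE4 values (pmol/L) and disease labels:
he4 = [120.0, 80.0, 40.0, 30.0, 60.0, 25.0]
cancer = [True, True, True, False, False, False]
sensitivity, specificity = sens_spec(he4, cancer, 55.89)
```

Sweeping the cutoff and plotting the two quantities against each other yields the ROC curve from which such optimal cut-offs are typically chosen.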

Keywords: human epididymis protein 4, ovarian cancer, diagnosis, benign lesions

Procedia PDF Downloads 129
927 Expression of DNMT Enzymes-Regulated miRNAs Involving in Epigenetic Event of Tumor and Margin Tissues in Patients with Breast Cancer

Authors: Fatemeh Zeinali Sehrig

Abstract:

Background: miRNAs play an important role in the post-transcriptional regulation of genes, including genes involved in DNA methylation (DNMTs), and are also important regulators of oncogenic pathways. The study of microRNAs and DNMTs in breast cancer allows the development of targeted treatments and early detection of this cancer. Methods and Materials: Clinical patients and samples: Institutional guidelines, including ethical approval and informed consent, were followed, as approved by the Ethics Committee (ethics code: IR.IAU.TABRIZ.REC.1401.063) of Tabriz Azad University, Tabriz, Iran. In this study, tissues of 100 patients with breast cancer and tissues of 100 healthy women were collected from Noor Nejat Hospital in Tabriz. The basic characteristics of the patients with breast cancer included: 1) tumor grade (Grade 3 = 5%, Grade 2 = 87.5%, Grade 1 = 7.5%), 2) lymph node involvement (yes = 87.5%, no = 12.5%), 3) family cancer history (yes = 47.5%, no = 41.3%, unknown = 11.2%), and 4) abortion history (yes = 36.2%). In silico methods (data gathering, processing, and network building): Gene Expression Omnibus (GEO), a high-throughput genomic database, was queried for miRNA expression profiles in breast cancer. For the experimental protocol, tissue processing, total RNA isolation, complementary DNA (cDNA) synthesis, and quantitative real-time PCR (qRT-PCR) analysis were performed. Results: In the present study, we found significant (p-value < 0.05) changes in the expression levels of miRNAs and DNMTs in patients with breast cancer. In bioinformatics studies, the GEO microarray dataset, in agreement with the qPCR results, showed decreased expression of miRNAs and increased expression of DNMTs in breast cancer. Conclusion: According to the results of the present study, which showed decreased expression of miRNAs and increased expression of DNMTs in breast cancer, these genes can serve as important diagnostic and therapeutic biomarkers in breast cancer.
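The abstract does not state the quantification method behind the qRT-PCR expression changes; a common choice is the 2^(-ΔΔCt) (Livak) method, sketched here with hypothetical Ct values and an assumed reference gene.

```python
# Sketch of relative expression via the 2^(-ΔΔCt) (Livak) method, a standard
# quantification for qRT-PCR data. The abstract does not state the exact
# method used, so this is an assumed illustration with hypothetical Ct values.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a target gene in sample vs. control tissue."""
    delta_sample = ct_target_sample - ct_ref_sample      # normalize to reference gene
    delta_control = ct_target_control - ct_ref_control
    ddct = delta_sample - delta_control
    return 2 ** (-ddct)

# Hypothetical Ct values: a miRNA in tumor vs. margin tissue, reference gene assumed
fc = fold_change(ct_target_sample=28.0, ct_ref_sample=18.0,
                 ct_target_control=25.0, ct_ref_control=18.0)
# ddct = 10 - 7 = 3, so fold change = 2^-3 = 0.125 (decreased expression in tumor)
```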

Keywords: gene expression omnibus, microarray dataset, breast cancer, miRNA, DNMT (DNA methyltransferases)

Procedia PDF Downloads 32
926 Evidence of Microplastic Pollution in the Río Bravo/Rio Grande (Mexico/US Border)

Authors: Stephanie Hernández-Carreón, Judith Virginia Ríos-Arana

Abstract:

Microplastics (MPs) are plastic particles smaller than 5 mm that have been detected in soil, air, organisms, and, above all, water around the world. Most studies have focused on MP detection in marine waters and fewer on fresh water; this is the case in Mexico, where studies of MPs in fresh water are limited. One of the most important rivers in the country is the Rio Grande/Río Bravo, a natural border between Mexico and the United States. Its waters serve different purposes, such as fishing, habitat for endemic species, electricity generation, agriculture, and drinking water sources, among others. Despite its importance, the river's waters have not been analyzed to determine the presence of MPs; therefore, the purpose of this research is to determine whether the Rio Bravo/Rio Grande is polluted with microplastics. To do so, three sites (Borderland, Casa de Adobe, and Guadalupe) along the El Paso-Juárez metroplex were sampled: 30 L of water were filtered through a plankton net (64 µm) at each site, and composite sediment samples were collected. Water and sediment samples were 1) digested with a hydrogen peroxide solution (30%), 2) resuspended in a calcium chloride solution (1.5 g/cm3) to separate the MPs, and 3) filtered through a 0.45 µm nitrocellulose membrane. Processed water samples were dyed with Nile Red (1 mg/ml in ethanol) and analyzed by fluorescence microscopy. As of January 2023, two water samples had been analyzed, Casa de Adobe and Borderland, with concentrations of 5.67 and 5.93 particles/L, respectively. Three types of particles were observed: fibers, fragments, and films, with fibers the most abundant. These data, together with the data obtained from the remaining samples, will be analyzed by ANOVA (α=0.05).
The concentrations and types of particles found in the Río Bravo correspond with other studies of rivers associated with urban environments and agricultural activities in China, where a range of 3.67 to 10.7 particles/L was reported in the Wei River. Although we are in the early stages of the study, and three new sites will be sampled and analyzed in 2023 to provide more data on this issue in the river, this work presents the first evidence of microplastic pollution in the Rio Grande.
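The planned ANOVA (α=0.05) on site concentrations can be illustrated with a minimal F-statistic computation; the replicate values below are hypothetical stand-ins for the particles/L measurements.

```python
# Minimal one-way ANOVA sketch for comparing microplastic concentrations
# across sampling sites; the particle counts below are hypothetical,
# not the study's measurements.

def one_way_anova_f(groups):
    """Return the F statistic for k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group means vs. grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares: observations vs. their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical particles/L replicates for the three sites
borderland = [5.9, 6.1, 5.8]
casa_de_adobe = [5.6, 5.7, 5.7]
guadalupe = [4.9, 5.1, 5.0]
f_stat = one_way_anova_f([borderland, casa_de_adobe, guadalupe])
```

The F statistic would then be compared against the F distribution with (k-1, n-k) degrees of freedom at α=0.05.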

Keywords: microplastics, fresh water, Rio Bravo, fluorescence microscopy

Procedia PDF Downloads 148
925 Dosimetric Application of α-Al2O3:C for Food Irradiation Using TA-OSL

Authors: A. Soni, D. R. Mishra, D. K. Koul

Abstract:

α-Al2O3:C has been reported to have deep traps at 600 °C and 900 °C. These traps have been reported to be accessed at relatively lower temperatures (122 and 322 °C, respectively) using thermally assisted OSL (TA-OSL). In this work, the dose response of α-Al2O3:C was studied in the range of 10 Gy to 10 kGy for its application in food irradiation in the low (up to 1 kGy) and medium (1 to 10 kGy) dose ranges. The TOL (thermo-optically stimulated luminescence) measurements were carried out on a Risø TL/OSL, TL-DA-15 system with a blue light-emitting diode (λ = 470 ± 30 nm) stimulation source, with the power level set at 90% of the maximum stimulation intensity for the blue LEDs (40 mW/cm2). The observations were carried out on commercial α-Al2O3:C phosphor. The TOL experiments were carried out with 300 active channels and 1 inactive channel. Using these settings, the sample is subjected to linear thermal heating and constant optical stimulation. The detection filter used in all observations was a Hoya U-340 (peak ~ 340 nm, FWHM ~ 80 nm). Irradiation of the samples was carried out using a 90Sr/90Y β-source housed in the system. A heating rate of 2 °C/s was preferred in the TL measurements so as to reduce the temperature lag between the heater plate and the samples. To study the dose response of the deep traps of α-Al2O3:C, samples were irradiated with various doses ranging from 10 Gy to 10 kGy, with three samples irradiated for each dose. In order to record the TA-OSL, TL was initially recorded up to a temperature of 400 °C to deplete the signal due to the 185 °C main dosimetric TL peak in α-Al2O3:C, which is also associated with the basic OSL traps. After the TL readout, the sample was subjected to the TOL measurement. As a result, two well-defined TA-OSL peaks, at 121 °C and at 232 °C, appear in both the time and temperature domains; these are distinct from the main dosimetric TL peak, which occurs at ~ 185 °C.
The linearity of the integrated TOL signal was measured as a function of absorbed dose and found to be linear up to 10 kGy. Thus, it can be used over the low and intermediate dose ranges relevant to food irradiation. The deep-level defects of the α-Al2O3:C phosphor can be accessed using the TOL section of the Risø reader system.
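Checking dose-response linearity amounts to fitting the integrated signal against absorbed dose; the sketch below does this by ordinary least squares with hypothetical (dose, signal) pairs standing in for the integrated TA-OSL counts.

```python
# Sketch of checking dose-response linearity by least squares; the
# (dose, signal) pairs are hypothetical, standing in for integrated
# TA-OSL counts measured from 10 Gy to 10 kGy.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

doses = [10, 100, 1000, 5000, 10000]            # Gy
signals = [2.1e3, 2.0e4, 2.0e5, 1.0e6, 2.0e6]   # hypothetical integrated counts
slope, intercept = linear_fit(doses, signals)
```

A near-constant signal-to-dose slope across the whole range (here roughly 200 counts/Gy for the made-up data) is what "linear up to 10 kGy" asserts.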

Keywords: α-Al2O3:C, deep traps, food irradiation, TA-OSL

Procedia PDF Downloads 298
924 Using of the Fractal Dimensions for the Analysis of Hyperkinetic Movements in the Parkinson's Disease

Authors: Sadegh Marzban, Mohamad Sobhan Sheikh Andalibi, Farnaz Ghassemi, Farzad Towhidkhah

Abstract:

Parkinson's disease (PD), which is characterized by tremor at rest, rigidity, akinesia or bradykinesia, and postural instability, affects the quality of life of affected individuals. The concept of a fractal is most often associated with irregular geometric objects that display self-similarity. The fractal dimension (FD) can be used to quantify the complexity and self-similarity of an object such as tremor. In this work, we aim to propose a new method for evaluating hyperkinetic movements such as tremor, using the FD and other correlated parameters, in patients suffering from PD. In this study, we used the tremor data from PhysioNet. The database consists of fourteen participants diagnosed with PD, including six patients with high-amplitude tremor and eight patients with low-amplitude tremor. We tried to extract features from the data that can distinguish between patients before and after medication. We selected fractal dimensions, including the correlation dimension, box dimension, and information dimension. The Lilliefors test was used to test normality. Paired t-tests or Wilcoxon signed-rank tests were then used to find differences between patients before and after medication, depending on whether normality was detected. In addition, two-way ANOVA was used to investigate a possible association between the therapeutic effects and the features extracted from the tremor. Just one of the extracted features showed significant differences between patients before and after medication. According to the results, the correlation dimension was significantly different before and after the patients' medication (p=0.009). Two-way ANOVA also demonstrated significant differences only in the medication effect (p=0.033); no significant differences were found between subjects (p=0.34) or in the interaction (p=0.97). The most striking result to emerge from the data is that the correlation dimension could quantify medication treatment based on tremor.
This study has provided a technique for evaluating a non-linear measure for quantifying medication, namely the correlation dimension. Furthermore, this study supports the idea that fractal dimension analysis yields additional information compared with conventional spectral measures in the detection of patients with a poor prognosis.
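The correlation dimension used above is commonly estimated with the Grassberger-Procaccia correlation sum; a minimal sketch on a synthetic point set (not the PhysioNet tremor data):

```python
# Sketch of the Grassberger-Procaccia correlation sum used to estimate the
# correlation dimension of a signal's phase-space embedding; the example
# point set here is synthetic, not the study's tremor recordings.

import math

def correlation_sum(points, r):
    """C(r): fraction of point pairs closer than r (Euclidean distance)."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(points, r1, r2):
    """Slope of log C(r) vs. log r between two radii approximates the dimension."""
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# Synthetic example: points spread along a line have dimension ~1
line = [(0.01 * i, 0.01 * i) for i in range(200)]
dim = correlation_dimension(line, r1=0.1, r2=0.5)
```

In practice the dimension is read off as the slope of log C(r) over a whole scaling region, not just two radii; this two-point version only illustrates the idea.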

Keywords: correlation dimension, non-linear measure, Parkinson’s disease, tremor

Procedia PDF Downloads 242
923 The Impact of Floods and Typhoons on Housing Welfare: Case Study of Thua Thien Hue Province, Vietnam

Authors: Seyeon Lee, Suyeon Lee, Julia Rogers

Abstract:

This research investigates and records post-flood and post-typhoon conditions of low-income housing in Thua Thien Hue Province, Vietnam, an area prone to extreme flooding in Central Vietnam. The cost of rebuilding houses after floods and typhoons has always been a burden for low-income households. These costs often lead to the elimination of essential construction practices for disaster resistance. Despite relief efforts from international non-profit organizations and the Vietnamese government, flood and typhoon damage to residential construction has recurred in the same neighborhoods annually. Notwithstanding its importance, this topic has not been systematically investigated. The study is limited to assistance provided to low-income households, documenting the existing conditions of low-income homes impacted by floods and typhoons in Thua Thien Hue Province. The research identifies the leading causes of building failure from these natural disasters. Relief efforts and the progress made since the last typhoon are documented. The quality of construction and repairs is assessed based on the Home Builder's Guide to Coastal Construction by the Federal Emergency Management Agency. Focus group discussions and individual interviews with local residents from four different communities were conducted to gain insights into the repair efforts of the non-profit organizations and the Vietnamese government, and into residents' needs after floods and typhoons. The findings from the field study indicate that many of the local people are now aware of the importance of improving housing conditions as a key coping strategy to withstand flood and typhoon events, as it makes housing and the community more resilient to future events.
While there has been remarkable improvement in housing and infrastructure with support from the local government as well as the non-profit organizations, many households in the study areas were found to still live in weak and fragile housing without access to aid to repair and strengthen their houses. Given that the major immediate recovery action taken by local people tends to focus on repairing damaged houses, and that low-income households consequently spend a considerable amount of their income on housing repair, providing proper and applicable construction practices will not only improve housing conditions but also contribute to reducing poverty in Vietnam.

Keywords: disaster coping mechanism, housing welfare, low-income housing, recovery reduction

Procedia PDF Downloads 269
922 Detecting Elderly Abuse in US Nursing Homes Using Machine Learning and Text Analytics

Authors: Minh Huynh, Aaron Heuser, Luke Patterson, Chris Zhang, Mason Miller, Daniel Wang, Sandeep Shetty, Mike Trinh, Abigail Miller, Adaeze Enekwechi, Tenille Daniels, Lu Huynh

Abstract:

Machine learning and text analytics have been used to analyze child abuse, cyberbullying, domestic abuse and domestic violence, and hate speech. However, to the authors' knowledge, no research to date has used these methods to study elder abuse in nursing homes or skilled nursing facilities from field inspection reports. We used machine learning and text analytics methods to analyze 356,000 inspection reports extracted from CMS Form-2567 field inspections of US nursing homes and skilled nursing facilities between 2016 and 2021. Our algorithm detected occurrences of various types of abuse, including physical abuse, psychological abuse, verbal abuse, sexual abuse, and passive and active neglect. For example, to detect physical abuse, our algorithms search for combinations of phrases and words suggesting willful infliction of damage (hitting, pinching or burning, tethering, tying) or consciously ignoring an emergency. To detect occurrences of elder neglect, our algorithm looks for combinations of phrases and words suggesting both passive neglect (neglecting vital needs, allowing malnutrition and dehydration, allowing decubiti, deprivation of information, limitation of freedom, negligence toward safety precautions) and active neglect (intimidation and name-calling, tying the victim up to prevent falls without consent, consciously ignoring an emergency, not calling a physician in spite of indication, stopping important treatments, failure to provide essential care, deprivation of nourishment, leaving a person alone for an inappropriate amount of time, excessive demands in a situation of care). We further compare the prevalence of abuse before and after COVID-19-related restrictions on nursing home visits. We also identified facilities with the highest number of abuse cases, and with no other facilities with abuse within a 25-mile radius, as the most likely candidates for additional inspections.
We also built an interactive display to visualize the location of these facilities.
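The phrase-and-word matching described above can be sketched with simple regular expressions; the categories, terms, and sample sentence below are illustrative, not the authors' actual lexicon or report data.

```python
# Minimal sketch of keyword/phrase flagging of inspection-report text, in the
# spirit of the abstract's description. The categories, patterns, and sample
# report are hypothetical illustrations, not the authors' actual lexicon.

import re

ABUSE_PATTERNS = {
    "physical_abuse": r"\b(hit|hitting|pinch\w*|burn\w*|tether\w*|tying|tied)\b",
    "passive_neglect": r"\b(malnutrition|dehydration|decubiti|unattended)\b",
    "verbal_abuse": r"\b(name-calling|intimidat\w*|yell\w*)\b",
}

def flag_report(text):
    """Return the set of abuse categories whose patterns match the text."""
    lowered = text.lower()
    return {category for category, pattern in ABUSE_PATTERNS.items()
            if re.search(pattern, lowered)}

report = ("Resident was observed with signs of dehydration and staff "
          "admitted to tying the resident to the bed rail.")
flags = flag_report(report)
```

A production system would go well beyond bare keyword matching (negation handling, context windows, learned classifiers), but the flagging primitive looks like this.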

Keywords: machine learning, text analytics, elder abuse, elder neglect, nursing home abuse

Procedia PDF Downloads 143
921 Geological Mapping of Gabel Humr Akarim Area, Southern Eastern Desert, Egypt: Constrain from Remote Sensing Data, Petrographic Description and Field Investigation

Authors: Doaa Hamdi, Ahmed Hashem

Abstract:

The present study aims at integrating ASTER data and Landsat 8 data to discriminate and map alteration and/or mineralization zones, in addition to delineating the different lithological units of the Humr Akarim Granites area. The study area is located at 24°9' to 24°13' N and 34°1' to 34°2'45" E, covering a total exposed surface area of about 17 km². The area is characterized by rugged topography with low to moderate relief. Geologic fieldwork and petrographic investigations revealed that the basement complex of the study area is composed of metasediments, mafic dikes, older granitoids, and alkali-feldspar granites. Petrographic investigations revealed that the secondary minerals in the study area are mainly represented by chlorite, epidote, clay minerals, and iron oxides. These minerals have specific spectral signatures in the visible near-infrared and shortwave infrared region (0.4 to 2.5 µm). Therefore, the ASTER imagery processing concentrated on the VNIR-SWIR spectrometric data in order to achieve the purposes of this study (geologic mapping of hydrothermal alteration zones and delineation of possible radioactive potentialities). Mapping of the hydrothermal alteration zones, in addition to discrimination of the lithological units in the study area, was achieved through several image-processing techniques, including color band composites (CBC) and data-transformation techniques such as band ratios (BR), band ratio codes (BRCs), principal component analysis (PCA), the Crosta technique, and minimum noise fraction (MNF). The field verification and petrographic investigation confirm the results of the ASTER imagery and Landsat 8 data, and a geological map (scale 1:50,000) is proposed.
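Band ratios of the kind listed above are simple element-wise divisions of co-registered bands; a toy sketch with a tiny hypothetical "image" and a threshold mask (band numbers and values are illustrative, not actual ASTER reflectances):

```python
# Sketch of a band-ratio computation for alteration mapping. The band labels
# and the 2x2 pixel values are hypothetical illustrations, not actual ASTER
# reflectance data from the study area.

def band_ratio(numerator, denominator):
    """Element-wise ratio of two bands (lists of rows), guarding zero pixels."""
    return [[n / d if d != 0 else 0.0
             for n, d in zip(num_row, den_row)]
            for num_row, den_row in zip(numerator, denominator)]

# Hypothetical SWIR bands: a high ratio can highlight clay/chlorite alteration
band_a = [[0.30, 0.24], [0.10, 0.32]]
band_b = [[0.10, 0.16], [0.10, 0.08]]
ratio = band_ratio(band_a, band_b)
high = [[v > 2.0 for v in row] for row in ratio]  # simple threshold mask
```

Real workflows apply the same operation per pixel over full scenes (typically with NumPy or a GIS package) and choose band pairs whose absorption features match the target minerals.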

Keywords: remote sensing, petrography, mineralization, alteration detection

Procedia PDF Downloads 162
920 Assessment of Impact of Urbanization in Drainage Urban Systems, Cali-Colombia

Authors: A. Caicedo Padilla, J. Zambrano Nájera

Abstract:

Cali, the capital of Valle del Cauca and the second-largest city of Colombia, is located in the Cauca River Valley between the Western and Central Cordillera, in the southwest of the country. The topography of the city is mainly flat, but there are mountains in the west. The city's urbanization increased during the 20th century, especially after 1958, when rapid growth started due to the migration of people from other parts of the region. Much of that population has settled in eastern Cali, an area originally intended for cane cultivation and a flood zone of the Cauca River and its tributaries. Due to this unplanned migration, settlement was inadequate and produced changes in the natural dynamics of the basins, which has resulted in increased runoff volumes, peak flows, and flow velocities, in turn increasing flood risk. The capacity of the sewerage networks was insufficient for this higher runoff volume because, in the first place, they were not adequately designed and built, causing their failure. This in turn produces increasingly recurrent floods, with considerable effects on the economy and the normal activities of Cali. Thus, it becomes very important to know the hydrological behavior of urban watersheds. This research aims to determine the impact of urbanization on the hydrology of watersheds with very low slopes. The project aims to identify changes in natural drainage patterns caused by the changes made to the landscape. From the identification of such modifications, the most critical areas, in terms of recurring flood events in the city of Cali, will be defined. Critical areas are defined as areas where the sewerage system does not work properly, as surface runoff increases considerably with storm events and floods are recurrent. The assessment will be based on the analysis of Geographic Information System (GIS) theme layers from CVC, the environmental institution of regional control in Valle del Cauca, hydrological data, and the disaster database developed by the OSSO Corporation.
Rainfall data from a gauge network and historical streamflow data will be used to analyze the historical behavior and change of precipitation and hydrological response, according to the homogeneous zones characterized in 1999 by EMCALI S.A., the public utility enterprise of Cali.

Keywords: drainage systems, land cover changes, urban hydrology, urban planning

Procedia PDF Downloads 263
919 Translation and Adaptation of the Assessment Instrument “Kiddycat” for European Portuguese

Authors: Elsa Marta Soares, Ana Rita Valente, Cristiana Rodrigues, Filipa Gonçalves

Abstract:

Background: The assessment of the feelings and attitudes of preschool children in relation to stuttering is crucial. Negative experiences can lead to anxiety, worry, or frustration. To avoid the worsening of attitudes and feelings related to stuttering, early detection is important in order to intervene as soon as possible through an individualized intervention plan. It is therefore important to have Portuguese instruments that allow this assessment. Aims: The aim of the present study is to carry out the translation and adaptation of the Communication Attitude Test for Children in Preschool Age and Kindergarten (KiddyCat) for European Portuguese (EP). Methodology: For the translation and adaptation process, a methodological study was carried out with the following steps: translation, back-translation, assessment by a committee of experts, and pre-test. This abstract describes the results of the first two phases of this process. The translation was accomplished by two bilingual individuals without experience in health care or any knowledge of the instrument; one was an English teacher and the other a translator. The back-translation was conducted by two senior class teachers living in the United Kingdom, likewise without any knowledge of health care or the instrument. Results and Discussion: In the translation, there were differences in the semantic equivalences of various expressions and concepts. A discussion between the two translators, mediated by the researchers, made it possible to achieve a consensus version of the translated instrument. Taking into account the original version of KiddyCAT, the results demonstrated that the back-translated versions were similar to the original version of this assessment instrument. Although the back-translators used different words, they were synonymous, maintaining the semantic and idiomatic equivalences of the instrument's items.
Conclusion: This project contributes an important resource that can be used in the assessment of the feelings and attitudes of preschool children who stutter. This was the first phase of the research; the expert panel and pre-test phases are being developed. It is therefore expected that this instrument will contribute to a holistic therapeutic intervention, taking into account the individual characteristics of each child.

Keywords: assessment, feelings and attitudes, preschool children, stuttering

Procedia PDF Downloads 147
918 Tardiness and Self-Regulation: Degree and Reason for Tardiness in Undergraduate Students in Japan

Authors: Keiko Sakai

Abstract:

In Japan, all stages of public education aim to foster a zest for life. 'Zest' implies solving problems by oneself, using acquired knowledge and skills; it is related to the self-regulation of metacognition. To enhance this, establishing good learning habits is important, and tardiness in undergraduate students should be examined from the perspective of self-regulation. Accordingly, we focused on self-monitoring and self-planning strategies among self-regulated learning factors to examine the causes of tardiness. This study examines the impact of self-monitoring and self-planning learning skills on the degree of and reasons for tardiness in undergraduate students. A questionnaire survey was conducted, targeting undergraduate students at University X in the autumn semester of 2018. There were 247 participants (average age 19.7, SD 1.9; 144 males, 101 females, 2 no answer). The survey contained the following items and measures: school year, number of classes in the semester, degree of tardiness in the semester (subjective degree and objective times), active participation in and action toward schoolwork, self-planning and self-monitoring learning skills, and reason for tardiness (open-ended question). First, the relation between strategies and tardiness was examined by multiple regression. A statistically significant relationship between the self-monitoring learning strategy and the degree of subjective and objective tardiness was revealed after statistically controlling for school year and number of classes. There was no significant relationship between the self-planning learning strategy and the degree of tardiness. These results suggest that self-monitoring skills reduce tardiness. Secondly, the relation between the self-monitoring learning strategy and the reason for tardiness was analysed, after classifying each reason for tardiness into one of seven categories: 'overslept', 'illness', 'poor time management', 'traffic delays', 'carelessness', 'low motivation', and 'stuff to do'.
Chi-square tests and Fisher's exact tests showed a statistically significant relationship between the self-monitoring learning strategy and the frequency of 'traffic delays'. This result implies that self-monitoring skills prevent tardiness caused by traffic delays. Furthermore, there was a weak relationship between the self-monitoring learning strategy score and the reason-for-tardiness categories: when self-monitoring skill is higher, a decrease in 'overslept' and 'illness' and an increase in 'poor time management', 'carelessness', and 'low motivation' are indicated. This suggests that the self-monitoring learning strategy is related to an internal causal attribution of failure and to self-management of how to prevent tardiness. These findings indicate the effectiveness of a self-monitoring learning strategy for reducing tardiness in undergraduate students.
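The chi-square testing described above can be sketched on a small contingency table; the counts below are hypothetical, not the survey's data.

```python
# Sketch of a Pearson chi-square statistic for a contingency table, e.g.
# high vs. low self-monitoring score by whether 'traffic delays' was cited.
# The observed counts are hypothetical, not the survey's actual data.

def chi_square(table):
    """Pearson chi-square statistic for an r x c table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# rows: high / low self-monitoring; columns: cited traffic delays yes / no
observed = [[20, 80], [5, 95]]
chi2 = chi_square(observed)
```

The statistic is then compared with the chi-square distribution with (r-1)(c-1) degrees of freedom; with small expected counts, Fisher's exact test (as the authors also used) is the safer choice.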

Keywords: higher-education, self-monitoring, self-regulation, tardiness

Procedia PDF Downloads 135
917 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept

Authors: Ahmed El Naggar, Homyan Saleh

Abstract:

Dams are crucial for flood control, water supply, and the creation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety problems. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a stepped spillway was undertaken at laboratory scale (fixed camera installation) and at prototype scale (drone footage). The drone videos were generated using data from citizen science. The analyses permitted the remote measurement of the free-surface aeration inception point, air-water surface velocities and their fluctuations, and the residual energy at the chute's downstream end. The prototype observations offered full-scale proof of concept, while the laboratory results were confirmed against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways, highlights how citizen science data may enable academics to better understand real-world air-water flow dynamics, and offers a framework for a small collection of long-missing prototype data.
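A crude cousin of the optical-flow analysis described above is block matching between successive frames; this 1-D sketch with a synthetic intensity profile only illustrates the displacement-estimation idea, not the authors' actual method or data.

```python
# Highly simplified sketch of estimating surface displacement between two
# frames by matching intensity patterns, the idea behind image-based
# velocimetry. The 1-D "frames" are synthetic; real analyses use 2-D optical
# flow on video frames.

def best_shift(frame_a, frame_b, max_shift):
    """Shift of frame_b relative to frame_a minimizing mean squared difference."""
    n = len(frame_a)
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(frame_a[i], frame_b[i + s]) for i in range(n)
                 if 0 <= i + s < n]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

# Synthetic intensity profile and the same profile advected by 3 pixels
frame1 = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0]
frame2 = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
shift = best_shift(frame1, frame2, max_shift=5)
# pixel shift / frame interval, scaled by ground distance per pixel,
# gives a surface velocity estimate
```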

Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy

Procedia PDF Downloads 90
916 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ in our body, since it is responsible for the majority of functions, such as vision, memory, etc. However, different diseases, such as Alzheimer's disease and tumors, can affect the brain and lead to a partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and therefore provide the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors. The most powerful and most used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and determine the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, a large number of Computer Aided Diagnosis (CAD) tools, incorporating developed image processing algorithms, have been proposed and are exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction. Our proposed CAD includes three main parts. Firstly, we load the brain MRI. Secondly, a robust technique for brain tumor extraction is proposed, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it was applied to the MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback, since it requires substantial storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm.
A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
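The DWT feature-extraction step can be illustrated with a single-level 1-D Haar transform; the authors' pipeline uses 2-D DWT on MRI images plus PCA and an SVM (in MATLAB), so this Python fragment is only a toy sketch of the wavelet step on a hypothetical image row.

```python
# Toy sketch of the wavelet step in DWT-based feature extraction: a
# single-level 1-D Haar DWT yielding approximation and detail coefficients,
# from which compact statistics (e.g. detail energy) can feed a classifier.
# The input row is hypothetical, not MRI data.

import math

def haar_dwt_1d(signal):
    """One decomposition level: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

row = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0]  # hypothetical image row
approx, detail = haar_dwt_1d(row)
energy = sum(c * c for c in detail)  # a simple texture-like feature
```

In the full pipeline, such coefficient features are stacked over all rows/columns and decomposition levels, then reduced with PCA before SVM classification.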

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 246
915 Authorship Attribution Using Sociolinguistic Profiling When Considering Civil and Criminal Cases

Authors: Diana A. Sokolova

Abstract:

This article is devoted to one of the possibilities for identifying the author of an oral or written text: sociolinguistic profiling. Sociolinguistic profiling is utilized as a forensic linguistics technique to identify individuals through language patterns, particularly in criminal cases, and examines how social factors influence language use. This study aims to showcase the significance of linguistic profiling for attributing authorship of texts and emphasizes the necessity for its continuous enhancement, considering its strengths and weaknesses. The study employs semantic-syntactic, lexical-semantic, linguopragmatic, logical, presupposition, authorization, and content analysis methods to investigate linguistic profiling. Data collection involves gathering oral and written texts from criminal and civil court cases to analyze language patterns for authorship attribution. The collected data are analyzed using these linguistic analysis methods to identify individual characteristics and patterns that can aid in authorship attribution. The study addresses the effectiveness of sociolinguistic profiling in identifying the authors of texts and explores the impact of social factors on language use in legal contexts. In spite of its advantages, challenges in linguistic profiling have spurred debates and controversies in academic circles, legal environments, and the public sphere. This research therefore underscores the practical application of linguistic profiling in legal settings and the need for further development of the method, contributing to the field of forensic linguistics.

Keywords: authorship attribution, identity detection, dialect features, forensic linguistics, social influence, sociolinguistics, unique speech characteristics

Procedia PDF Downloads 33
914 Research Trends in Fine Arts Education Dissertations in Turkey

Authors: Suzan Duygu Bedir Erişti

Abstract:

The present study offers a general evaluation of the dissertations conducted in the last decade in the field of art education in the Departments of Fine Arts Education at the Institutes of Education Sciences in Turkey. The study covered most Turkish universities that include an Institute of Education Sciences. As a result, a total of one hundred dissertations conducted in the Departments of Fine Arts Education at several universities (Anadolu, Gazi, Ankara, Marmara, Dokuz Eylul, Ondokuz Mayıs, Selcuk, and Necmettin Erbakan) were identified via the universities' open-access systems as well as the Thesis Search System of the Higher Education Council. Most of the dissertations were retrieved via the latter system, and when that failed, via the former. Consequently, most of the dissertations that had no access restrictions and had appropriate content were obtained. The dissertations were examined through document analysis in terms of their research topics, research paradigms, contents, purposes, methodologies, data collection tools, and analysis techniques. The dissertations conducted at the Institutes of Education Sciences were found to have improved in quality, especially in recent years. It was also found that a great majority of the dissertations were carried out at Gazi University and Marmara University and that a similar number of dissertations were conducted at the other universities. Taken as a whole, the dissertations were found to vary widely in their subject areas. Most of the dissertations adopted the quantitative paradigm, although, especially in recent years, more importance has been given to methods based on the qualitative paradigm.
In addition, most of the dissertations conducted with the quantitative paradigm were structured on the general survey model and the experimental research model. The choice of statistical techniques varied by university: some universities applied advanced statistical techniques, while others used statistical techniques only moderately. Most of the studies produced results generalizable to the postgraduate and elementary school levels of education. The studies were generally situated in face-to-face teaching processes, while some were designed in environments whose results are not generalizable to the face-to-face education system. In the present study, it was seen that the dissertations conducted in the Departments of Fine Arts Education at the Institutes of Education Sciences in Turkey did not involve application-based approaches, such as art-based or visual research, in either their research topics or methodologies.

Keywords: fine arts education, dissertations, evaluation of dissertations, research trends in fine arts education

Procedia PDF Downloads 196
913 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and the prediction is usually much more accurate for a single component than for an assembly. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs can run nonlinear analyses in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) among thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and yields the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Second, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first is a two-DOF spring-mass-damper system, and the second is a full aircraft FE model. Despite their different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure that allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
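The ‘updating’ idea above can be sketched, under simplifying assumptions, with a describing-function (harmonic-balance) approximation: the localized nonlinearity is replaced by a response-dependent stiffness modification, and the linear FRF is recomputed in a fixed-point loop. The Python sketch below illustrates this for a two-DOF system with a cubic spring on one DOF; all numerical values are illustrative assumptions, not the paper's data, and the paper's own MATLAB implementation of the nonlinear SDMM may use a different update scheme.

```python
import numpy as np

# Two-DOF spring-mass-damper system (illustrative values, not the paper's data)
M = np.diag([1.0, 1.0])                          # mass matrix
K = np.array([[2000.0, -1000.0],
              [-1000.0, 1000.0]])                # linear stiffness matrix
C = 0.002 * K                                    # light proportional damping
k_nl = 1.0e5                                     # cubic spring coefficient on DOF 1
F = np.array([1.0, 0.0])                         # harmonic force on DOF 1

def nlfrf(omega, tol=1e-8, max_iter=200):
    """Nonlinear response at circular frequency omega via a fixed-point
    update of the linear FRF: the cubic spring is replaced by the
    describing-function stiffness k_eq = (3/4) * k_nl * |X1|**2."""
    X = np.zeros(2, dtype=complex)
    for _ in range(max_iter):
        dK = np.zeros((2, 2))
        dK[0, 0] = 0.75 * k_nl * abs(X[0]) ** 2  # localized modification only
        H = np.linalg.inv(K + dK + 1j * omega * C - omega**2 * M)
        X_new = H @ F
        if np.linalg.norm(X_new - X) < tol * (1.0 + np.linalg.norm(X_new)):
            return X_new
        X = 0.5 * X + 0.5 * X_new                # relaxation aids convergence
    return X

# Sweep the excitation frequency and collect the DOF-1 amplitude (the NLFRF)
freqs = np.linspace(1.0, 20.0, 200)              # Hz
amp = [abs(nlfrf(2.0 * np.pi * f)[0]) for f in freqs]
```

Because only the localized modification dK changes between iterations, the underlying linear model is assembled once, mirroring the computational advantage claimed above.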

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 265
912 Finite Element Modelling of Mechanical Connector in Steel Helical Piles

Authors: Ramon Omar Rosales-Espinoza

Abstract:

Pile-to-pile mechanical connections are used if the depth of the soil layers with sufficient bearing strength exceeds the original (“leading”) pile length, with the additional pile segment being termed the “extension” pile. Mechanical connectors permit a safe transmission of forces from leading to extension pile while meeting strength and serviceability requirements. Common types of connectors consist of an assembly of sleeve-type external couplers, bolts, pins, and other mechanical interlock devices that ensure the transmission of compressive, tensile, torsional, and bending stresses between leading and extension pile segments. While welded connections allow for a relatively simple structural design, mechanical connections are advantageous over welded connections because they lead to shorter installation times and significant cost reductions, since specialized workmanship and inspection activities are not required. However, common practices followed to design mechanical connectors neglect important aspects of the assembly response, such as stress concentration around pin/bolt holes, torsional stresses from the installation process, and interaction between the forces at the installation (torsion), service (compression/tension-bending), and removal stages (torsion). This translates into potentially unsatisfactory designs in terms of the ultimate and service limit states, exhibiting either reduced strength or excessive deformations. In this study, the experimental response of a type of mechanical connector under compressive forces is presented in terms of strength, deformation, and failure modes. The tests revealed that the type of connector used can safely transmit forces from pile to pile. Using the results from the compressive tests, an analysis model was developed using the finite element (FE) method to study the interaction of forces during the installation and service stages of a typical mechanical connector.
The response of the analysis model is used to identify potential areas for design optimization, including size, the gap between leading and extension piles, the number of pins/bolts, hole sizes, and material properties. The results show that the design of mechanical connectors should take into account the interaction of forces present at every stage of their life cycle, and that the torsional stresses occurring during installation are critical for the safety of the assembly.

Keywords: piles, FEA, steel, mechanical connector

Procedia PDF Downloads 262
911 Heat Transfer Dependent Vortex Shedding of Thermo-Viscous Shear-Thinning Fluids

Authors: Markus Rütten, Olaf Wünsch

Abstract:

Non-Newtonian fluid properties can change the flow behaviour significantly, and its prediction becomes more difficult when thermal effects come into play. Hence, the focal point of this work is the wake flow behind a heated circular cylinder in the laminar vortex shedding regime for thermo-viscous shear-thinning fluids. For isothermal flows of Newtonian fluids, the vortex shedding regime is characterised by a distinct Reynolds number and an associated Strouhal number. For thermo-viscous shear-thinning fluids, the flow regime can change significantly depending on the temperature of the viscous wall of the cylinder. The Reynolds number alters locally and, consequently, the Strouhal number globally. In the present CFD study, the temperature dependence of the Reynolds and Strouhal numbers is investigated for the flow of a Carreau fluid around a heated cylinder. The temperature dependence of the fluid viscosity has been modelled by applying the standard Williams-Landel-Ferry (WLF) equation. In the present simulation campaign, thermal boundary conditions have been varied over a wide range in order to derive a relation between the dimensionless heat transfer, Reynolds, and Strouhal numbers. Together with the shear thinning caused by the high shear rates close to the cylinder wall, this leads to a significant decrease of viscosity of three orders of magnitude in the near field of the cylinder and a reduction of two orders of magnitude in the wake field. Moreover, the shear-thinning effect is able to change the flow topology: a complex Kármán vortex street occurs, also revealing distinct characteristic frequencies associated with the dominant and sub-dominant vortices. Heating the cylinder wall leads to a delayed flow separation and a narrower wake flow, leaving less space for the sequence of counter-rotating vortices.
This spatial limitation not only reduces the amplitude of the oscillating wake flow, it also shifts the dominant frequency to higher values and damps the higher harmonics. Eventually, the locally heated wake flow smears out. Finally, the CFD simulation results of the systematically varied thermal flow parameter study have been used to derive a relation for the main characteristic order parameters.
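The viscosity model described above can be sketched by combining the WLF shift factor with a Carreau law through time-temperature superposition. The Python sketch below uses generic textbook parameters (the commonly cited “universal” WLF constants C1 = 17.44 and C2 = 51.6 K, and made-up Carreau parameters), not the values of the paper's CFD campaign, and the shifting convention is one plausible choice rather than the paper's exact formulation.

```python
# Illustrative Carreau and WLF parameters (assumptions, not the paper's values)
eta0, eta_inf = 10.0, 0.001         # zero- and infinite-shear viscosities [Pa s]
lam, n = 1.0, 0.4                   # relaxation time [s] and power-law index
C1, C2, T_ref = 17.44, 51.6, 300.0  # "universal" WLF constants, reference T [K]

def wlf_shift(T):
    """Williams-Landel-Ferry shift factor a_T relative to T_ref:
    log10(a_T) = -C1 (T - T_ref) / (C2 + T - T_ref)."""
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))

def carreau_viscosity(gamma_dot, T):
    """Temperature-shifted Carreau viscosity via time-temperature
    superposition: eta(T, g) = a_T * eta(T_ref, a_T * g)."""
    a_T = wlf_shift(T)
    g = a_T * gamma_dot   # shifted shear rate
    return a_T * (eta_inf + (eta0 - eta_inf)
                  * (1.0 + (lam * g) ** 2) ** ((n - 1.0) / 2.0))

# Near a heated wall, the combined WLF and shear-thinning effects cut the
# viscosity by several orders of magnitude, as described in the abstract
eta_cold = carreau_viscosity(10.0, 300.0)   # ambient wall
eta_hot = carreau_viscosity(10.0, 340.0)    # heated wall
```

Evaluating the model at a fixed shear rate for the ambient and heated wall temperatures reproduces the qualitative behaviour discussed above: the heated case is orders of magnitude less viscous.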

Keywords: heat transfer, thermo-viscous fluids, shear thinning, vortex shedding

Procedia PDF Downloads 297