Search results for: non-coplanar imaging technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7560

6060 Complicated Sinusitis with Sphenopalatine Artery Thrombosis in a Covid-19 Patient

Authors: Sara Mahmood, Omar Ahmed, Youssef Aladham, Moustafa Abdelnaby

Abstract:

The varied complications of COVID-19 present an ongoing challenge to healthcare professionals. A rare presentation of complicated sinusitis with pre-septal cellulitis and hard palatal necrosis in a COVID-19 patient is reported. A 52-year-old male was admitted to the hospital with typical COVID-19 manifestations and two successive positive COVID-19 swabs. During his admission, he developed symptoms of right orbital complications of sinusitis along with both clinical and radiological evidence of ipsilateral hard palatal necrosis. Imaging confirmed a diagnosis of right pan-sinusitis complicated by right pre-septal infection and a hard palatal bony defect on the same side. Intra-operatively, the sphenopalatine artery was found to be thrombosed. This case focuses on the possible association between these manifestations and the known thromboembolic complications of COVID-19. Ongoing management of such rare, complicated cases should be carried out by a multidisciplinary team.

Keywords: COVID-19, sinusitis, sphenopalatine artery, thrombosis

Procedia PDF Downloads 166
6059 Quantitative Characterization of Single Orifice Hydraulic Flat Spray Nozzle

Authors: Y. C. Khoo, W. T. Lai

Abstract:

The single orifice hydraulic flat spray nozzle was evaluated with two global imaging techniques to characterize various aspects of the resulting spray. The two techniques were high resolution flow visualization and Particle Image Velocimetry (PIV). A CCD camera with 29 million pixels was used to capture shadowgraph images in order to visualize ligament formation and collapse as well as droplet interaction. Quantitative analysis was performed to obtain sizing information for the droplets and ligaments. The same camera was then used with a PIV system to evaluate the overall velocity field of the spray, from nozzle exit to droplet discharge. PIV images were further post-processed to determine the inclusion angle of the spray. Together, these investigations provided a detailed quantitative understanding of the spray structure and behavior.
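The abstract does not describe the correlation algorithm used by the commercial PIV system; purely as an illustration of the principle, the sketch below (hypothetical synthetic data, square interrogation windows, no sub-pixel peak fit) recovers the mean particle displacement between two exposures from the peak of their cross-correlation map.

```python
import numpy as np
from scipy.signal import correlate

def piv_displacement(win_a, win_b):
    """Estimate the mean particle displacement between two interrogation
    windows from the peak location of their cross-correlation map."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = correlate(b, a, mode="full", method="fft")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.array(peak) - (np.array(win_a.shape) - 1)  # offset from zero lag
    return dx, dy  # pixels; scale by magnification / frame interval for velocity

# Synthetic check: a random particle pattern shifted by 3 pixels in x
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=3, axis=1)
print(piv_displacement(frame_a, frame_b))  # -> (3, 0)
```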

Keywords: spray, flow visualization, PIV, shadowgraph, quantitative sizing, velocity field

Procedia PDF Downloads 373
6058 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. The ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was compiled containing records from 75 patients presenting with chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical for the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by grid search with 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain optimal classification performance. Applying the developed classification technique to real ECG recordings shows that it provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy state, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination thresholds and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performances of the SVM- and GMM-based classifiers showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
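The feature set and patient-specific tuning protocol are not given in the abstract; the following minimal scikit-learn sketch (hypothetical synthetic feature matrices and parameter grids) only illustrates the two ingredients named above: an RBF-kernel SVM tuned by grid search with 10-fold cross-validation, and a GMM whose log-likelihood is thresholded in a Neyman-Pearson fashion to flag outlying segments.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.mixture import GaussianMixture

# Hypothetical ST/T feature matrices: rows are ECG segments, columns are features
rng = np.random.default_rng(1)
X_pre = rng.normal(0.0, 1.0, size=(200, 6))    # pre-inflation (non-ischemic) segments
X_occ = rng.normal(1.5, 1.0, size=(200, 6))    # balloon-occlusion (ischemic) segments

# Supervised route: SVM with grid-searched RBF kernel, 10-fold cross-validation
X = np.vstack([X_pre, X_occ])
y = np.r_[np.zeros(len(X_pre)), np.ones(len(X_occ))]
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=10)
grid.fit(X, y)
print("best SVM hyperparameters:", grid.best_params_)

# Single-state route: GMM fitted to reference segments; new segments whose
# log-likelihood falls below a threshold chosen for a target false-alarm rate
# are flagged as outliers (Neyman-Pearson style decision).
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_occ)
threshold = np.quantile(gmm.score_samples(X_occ), 0.05)
outliers = gmm.score_samples(X_pre) < threshold
print("fraction of segments flagged as outliers:", outliers.mean())
```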

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 156
6057 Image Segmentation Techniques: Review

Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo

Abstract:

Image segmentation is the process of dividing an image into several sections, such as the background and the foreground of an object. It is a critical technique in both image processing and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research has addressed color images. Segmentation algorithms and techniques vary based on the input data and the application, and nearly all of them are unsuited to noisy environments. Much of the work addressing noise uses the Markov random field (MRF), which is computationally demanding but is said to be robust to noise. In recent years, image segmentation has been applied to tasks such as simplifying the processing of an image, interpreting the contents of an image, and easing the analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms developed in past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, and thresholding. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article further addresses the applications and potential future developments of image segmentation. This review concludes that no single technique is perfectly suitable for segmenting all types of images, but the use of hybrid techniques yields more accurate and efficient results.
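As a concrete illustration of two of the classical families reviewed here, thresholding and clustering, the short sketch below applies Otsu's threshold and k-means intensity clustering to a sample gray-scale image; it is a generic example (scikit-image sample data, intensity-only features), not a method proposed by the article.

```python
import numpy as np
from skimage import data, filters
from sklearn.cluster import KMeans

image = data.camera()                       # sample gray-scale image

# Thresholding: Otsu's method picks the gray level that best separates the
# foreground from the background in the intensity histogram.
t = filters.threshold_otsu(image)
binary_mask = image > t

# Clustering: k-means groups pixels into k regions by intensity alone.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    image.reshape(-1, 1))
segmented = labels.reshape(image.shape)

print("Otsu threshold:", t)
print("k-means region sizes:", np.bincount(labels))
```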

Keywords: clustering-based, convolution-network, edge-based, region-growing

Procedia PDF Downloads 85
6056 Microstructure of Ti – AlN Composite Produced by Selective Laser Melting

Authors: Jaroslaw Mizera, Pawel Wisniewski, Ryszard Sitek

Abstract:

Selective Laser Melting (SLM) is an advanced additive manufacturing technique used for producing parts made of a wide range of materials such as austenitic steel, titanium, and nickel. In our experiment, we produced a Ti-AlN composite from a mixture of titanium and aluminum nitride (70 at.% and 30 at.%, respectively) using the SLM technique. In order to determine the size of the powder particles, laser diffraction tests were performed on a HORIBA LA-950 device. The microstructure of the composite was examined by scanning electron microscopy (SEM), and the chemical composition in micro-areas of the obtained samples was determined by EDS. The phase composition was analyzed by X-ray diffraction (XRD). Vickers microhardness tests were performed using a Zwick/Roell microhardness machine under a load of 0.2 kg (HV0.2). Hardness measurements were made along the building plane (xy) and along the plane of the lateral side of the cuboid (xz). The powder used for manufacturing the samples was homogeneous, with a spherical particle shape and a mean particle size of 41 μm. The specimens consisted chiefly of Ti, TiN, and AlN. The dendritic microstructure was porous and fine-grained. Some of the aluminum nitride remained unmelted, but no porosity was observed at the interface. The formed material was characterized by a high hardness exceeding 700 HV0.2 over the entire cross-section.

Keywords: selective laser melting, composite, SEM, microhardness

Procedia PDF Downloads 131
6055 An Entropy Stable Three Dimensional Ideal MHD Solver with Guaranteed Positive Pressure

Authors: Andrew R. Winters, Gregor J. Gassner

Abstract:

A high-order numerical magnetohydrodynamics (MHD) solver built upon a non-linear entropy stable numerical flux function that supports eight traveling wave solutions is described. The method is designed to treat the divergence-free constraint on the magnetic field in a similar fashion to a hyperbolic divergence cleaning technique. The solver is especially well-suited for flows involving strong discontinuities due to its strong stability, without the need to enforce artificial low density or energy limits. Furthermore, a new formulation of the numerical algorithm that guarantees positivity of the pressure during the simulation is described and presented. By construction, the solver conserves mass, momentum, and energy and is entropy stable. High spatial order is obtained through the use of a third order limiting technique. High temporal order is achieved by utilizing the family of strong stability preserving (SSP) Runge-Kutta methods. The main attributes of the solver are presented, as well as details of its implementation in the multi-physics, multi-scale simulation code FLASH. The accuracy, robustness, and computational efficiency are demonstrated with a variety of numerical tests. Comparisons are also made between the new solver and existing methods already present in the FLASH framework.
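The entropy stable flux and pressure-positivity fix are specific to the paper and are not reproduced here; the sketch below only illustrates the SSP Runge-Kutta time integration named in the abstract, using the classic third-order Shu-Osher scheme applied to a hypothetical one-dimensional upwind advection operator.

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """One step of the third-order strong stability preserving Runge-Kutta
    scheme of Shu and Osher, written as convex combinations of forward-Euler
    stages so that the stability of the Euler step carries over."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Toy semi-discretisation: periodic linear advection u_t + a u_x = 0, upwinded
a, dx, dt = 1.0, 0.02, 0.01                       # CFL = a*dt/dx = 0.5
x = np.arange(0.0, 1.0, dx)
u = np.exp(-100.0 * (x - 0.5) ** 2)
L = lambda v: -a * (v - np.roll(v, 1)) / dx
for _ in range(50):
    u = ssp_rk3_step(u, dt, L)
```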

Keywords: entropy stability, finite volume scheme, magnetohydrodynamics, pressure positivity

Procedia PDF Downloads 338
6054 Software Tool Design for Heavy Oil Upgrading by Hydrogen Donor Addition in a Hydrodynamic Cavitation Process

Authors: Munoz A. Tatiana, Solano R. Brandon, Montes C. Juan, Cierco G. Javier

Abstract:

Hydrodynamic cavitation is a process that exploits the energy released by the fluid during phase changes. This energy produces local temperatures greater than 5000 °C, at which thermal cracking of the fluid molecules takes place. Applied to heavy oil, the process affects variables such as viscosity, density, and composition, which constitutes an important improvement in the quality of the crude oil. In this study, a software tool integrating mathematical models of mixing, cavitation, kinetics, and the reactor was designed to model the changes in density, viscosity, and composition of a heavy crude oil as it passes through a hydrodynamic cavitation reactor. In order to evaluate the viability of this technique in industry, a heavy oil of 18° API gravity was simulated using naphtha as a hydrogen donor at concentrations of 1, 2, and 5 vol%; the simulation results showed API gravity increases of 0.77, 1.21, and 1.93°, respectively, and viscosity reductions of 9.9, 12.9, and 15.8%. The results provide a favorable outlook for this technological development, a clear view of the innovative knowledge generated by this technique, and the technical-economic opportunity it offers the sector dealing with heavy crude oil, which accounts for the largest share of world oil production.

Keywords: hydrodynamic cavitation, thermal cracking, hydrogen donor, heavy oil upgrading, simulator

Procedia PDF Downloads 147
6053 Radio-Guided Surgery with β− Radiation: Test on Ex-Vivo Specimens

Authors: E. Solfaroli Camillocci, C. Mancini-Terracciano, V. Bocci, A. Carollo, M. Colandrea, F. Collamati, M. Cremonesi, M. E. Ferrari, P. Ferroli, F. Ghielmetti, C. M. Grana, M. Marafini, S. Morganti, M. Patane, G. Pedroli, B. Pollo, L. Recchia, A. Russomando, M. Schiariti, M. Toppi, G. Traini, R. Faccini

Abstract:

A Radio-Guided Surgery technique exploiting β− emitting radio-tracers has been suggested to overcome the impact of the large penetration of γ radiation. The detection of electrons in a low radiation background provides a clearer delineation of the margins of lesioned tissues. As a start, the clinical cases were selected from among the tumors known to express receptors for a β− emitting radio-tracer: 90Y-labelled DOTATOC. The results of tests on ex-vivo specimens of meningioma brain tumors and abdominal neuroendocrine tumors are presented. Voluntary patients were enrolled according to the standard uptake value (SUV > 2 g/ml) and the expected tumor-to-non-tumor ratios (TNR∼10) estimated from PET images after administration of 68Ga-DOTATOC. All these tests validated the technique, yielding a significant signal on the bulk tumor and a negligible background from the nearby healthy tissue. Even when injecting as little as 1.4 MBq/kg of radiotracer, tumor remnants of 0.1 ml would be detectable. Negligible medical staff exposure was confirmed, and among the biological wastes only urine had significant activity.

Keywords: ex-vivo test, meningioma, neuroendocrine tumor, radio-guided surgery

Procedia PDF Downloads 289
6052 Cantilever Shoring Piles with Prestressing Strands: An Experimental Approach

Authors: Hani Mekdash, Lina Jaber, Yehia Temsah

Abstract:

Underground space is becoming a necessity nowadays, especially in highly congested urban areas. Retaining underground excavations using shoring systems is essential in order to protect adjoining structures from potential damage or collapse. Reinforced concrete piles (RCP) supported by multiple rows of tie-back anchors are a commonly used type of shoring system in deep excavations. However, executing anchors can sometimes be challenging because they might illegally encroach on neighboring properties or be obstructed by infrastructure and other underground facilities. A technique is proposed in this paper that involves adding eccentric high-strength steel strands to the RCP section through ducts, without providing the pile with lateral supports. The strands are then vertically stressed externally on the pile cap using a hydraulic jack, creating a compressive strengthening force in the concrete section. An experimental study of the behavior of a shoring wall formed by pre-stressed piles during the execution of an open excavation in an urban area (Beirut) is presented, followed by numerical analysis using finite element software. Based on the experimental results, this technique proves to be cost-effective and provides flexible and sustainable construction of shoring works.

Keywords: deep excavation, prestressing, pre-stressed piles, shoring system

Procedia PDF Downloads 115
6051 A Comprehensive Review of Foam Assisted Water Alternating Gas (FAWAG) Technique: Foam Applications and Mechanisms

Authors: A. Shabib-Asl, M. Abdalla Ayoub Mohammed, A. F. Alta’ee, I. Bin Mohd Saaid, P. Paulo Jose Valentim

Abstract:

In the last few decades, much focus has been placed on enhancing oil recovery from existing fields, which is accomplished by the study and application of various methods. Recently, studies of fluid mobility control and sweep efficiency in the gas injection process, as well as the water alternating gas (WAG) method, have demonstrated positive results for oil recovery and have thus gained wide interest in the petroleum industry. The application of WAG injection results in increased oil recovery; its mechanism consists in the reduction of the gas oil ratio (GOR). However, some problems are associated with it, including poor volumetric sweep efficiency due to the low density and high mobility of the gas compared with oil. This has led to the introduction of the foam assisted water alternating gas (FAWAG) technique which, in contrast with WAG injection, improves the sweep efficiency and reduces the gas oil ratio, thereby maximizing the production rate from the producer wells. This paper presents a comprehensive review of the FAWAG process from the perspective of the Snorre field experience. In addition, some comparative results between FAWAG and other EOR methods are presented, including their setbacks. The main aim is to provide a solid background for future laboratory research and successful extension to field applications.

Keywords: GOR, mobility ratio, sweep efficiency, WAG

Procedia PDF Downloads 442
6050 Extremely Large Sinus Pericranii with Involvement of the Torcular and Associated with Crouzon’s Syndrome

Authors: Felipe H. Sanders, Bryan A. Edwards, Matthew Fusco, Rod J. Oskouian, R. Shane Tubbs

Abstract:

Introduction: Sinus pericranii is a rare vascular malformation that connects the intracranial dural sinuses to the extracranial venous drainage system and is caused by either trauma or congenital defects; although the majority of these vascular structures are due to trauma, some are congenital. Case report: Herein, we report a 5-month-old patient with a very large, fluctuating subcutaneous mass over the occiput and a diagnosis of Crouzon’s syndrome. The child presented with a large midline mass that, on imaging, connected to the underlying torcular and was diagnosed as a sinus pericranii. At long-term follow-up, and without operative intervention, the sinus pericranii had resolved. This uncommon relationship is reviewed. Conclusion: Premature closure of the posterior fossa sutures as part of Crouzon syndrome can present with a large sinus pericranii. Such subcutaneous swellings might resolve spontaneously.

Keywords: congenital, craniosynostosis, pediatric, vascular malformation

Procedia PDF Downloads 198
6049 An Improved Image Steganography Technique Based on Least Significant Bit Insertion

Authors: Olaiya Folorunsho, Comfort Y. Daramola, Joel N. Ugwu, Lawrence B. Adewole, Olufisayo S. Ekundayo

Abstract:

In today's world, there has been a tremendous rise in the usage of the internet, since almost all communication and information sharing is done over the web. At the same time, unauthorized access to confidential data continues to grow. This poses a challenge to information security experts, whose major goal is to curtail the menace. One approach to securing the delivery of data to the rightful destination without any modification is steganography, the art of hiding information inside other, seemingly innocuous information. This research paper aimed at designing a secure image steganographic algorithm that uses the least significant bit (LSB) method to embed data into a bitmap (BMP) image in order to enhance security and reliability. In the LSB approach, the basic idea is to replace the LSBs of the pixels of the cover image with the bits of the message to be hidden, without significantly destroying the properties of the cover image. The system was implemented in the C# programming language on the Microsoft .NET framework. The performance of the proposed system was evaluated by conducting benchmark tests analyzing parameters such as the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR). The results showed that image steganography performed well in securing data hiding and information transmission over networks.
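The paper's implementation is in C# and is not reproduced here; the following minimal NumPy sketch of the same LSB idea (a hypothetical random array standing in for a BMP cover image, with a 32-bit length header added for recovery) shows how message bits replace pixel LSBs and are read back.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bits of a grayscale cover image;
    a 32-bit big-endian length header is embedded first so it can be recovered."""
    payload = len(message).to_bytes(4, "big") + message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    stego = cover.flatten()
    if bits.size > stego.size:
        raise ValueError("message does not fit in the cover image")
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray) -> bytes:
    bits = stego.flatten() & 1
    length = int.from_bytes(np.packbits(bits[:32]).tobytes(), "big")
    return np.packbits(bits[32:32 + 8 * length]).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in cover image
stego = embed_lsb(cover, b"hidden message")
print(extract_lsb(stego))                                     # b'hidden message'
```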

Keywords: steganography, image steganography, least significant bits, bit map image

Procedia PDF Downloads 261
6048 Interpersonal Emotion Regulation in Adolescence: An Enhanced Critical Incident Study

Authors: Setareh Shayanfar

Abstract:

Given the increasing importance of peer relationships during adolescence, the present study aimed to examine peer interactions that facilitate or hinder adolescents’ regulation of negative emotions. Using the Enhanced Critical Incident Technique, 1-hour semi-structured interviews were conducted with 16 junior high school adolescents. Participants were asked to recall situations when they experienced strong negative emotions during the past school year, indicate the peer interactions that helped or hindered their emotion regulation, and identify prospective interactions with the potential to help regulate their emotions. Data analysis extracted 182 critical incidents, including 109 helping incidents, 45 hindering incidents, and 28 wish list items, which generated 10 categories nested within four overarching themes: Positive Personal Support included (a) supportive presence, (b) expressing concern, (c) empathizing, and (d) encouraging and cheering up; while Strategy Transmission included (e) sharing perspective, and (f) giving advice; Activated Support included (g) taking action, and (h) distracting; while Negative Personal Interactions included (i) withdrawing and (j) punishing. Implications for mental health and service providers, as well as recommendations for future research, are presented.

Keywords: adolescence, emotion regulation, enhanced critical incident technique, peers

Procedia PDF Downloads 137
6047 Identification of Effective Factors on Marketing Performance Management in Iran’s Airports and Air Navigation Companies

Authors: Morteza Hamidpour, Kambeez Shahroudi

Abstract:

The aim of this research was to identify the factors affecting the measurement and management of marketing performance in Iran's airports and air navigation companies (Economics in Air and Airport Transport). This study was exploratory and used a qualitative content analysis technique. The study population consisted of university professors in the field of air transportation and senior airport managers, with 15 individuals selected using the snowball sampling technique. Based on the results, 15 main indicators were identified for measuring the marketing performance of Iran's airports and air navigation companies. These indicators include airport staff, general and operational expenses, annual passenger reception capacity, number of counter receptions and passenger dispatches, airport runway length, airline companies' loyalty to using airport space and facilities, regional market share of transit and departure flights, claims, and net profit (aviation and non-aviation). By keeping the input indicators constant, the output indicators can be improved, enhancing performance efficiency and consequently improving the economic situation in air transportation.

Keywords: air transport economics, marketing performance management, marketing performance input factors, marketing performance intermediary factors, marketing performance output factors, content analysis

Procedia PDF Downloads 61
6046 Modeling of a UAV Longitudinal Dynamics through System Identification Technique

Authors: Asadullah I. Qazi, Mansoor Ahsan, Zahir Ashraf, Uzair Ahmad

Abstract:

System identification of an Unmanned Aerial Vehicle (UAV), performed to acquire its mathematical model, is a significant step in the process of aircraft flight automation. A reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, preflight testing of prototype aircraft, and investigation of fatigue life and stress distribution. This research is aimed at system identification of a fixed wing UAV by means of a specifically designed flight experiment. The purposely designed flight maneuvers were performed on the UAV, and the aircraft states were recorded during these flights. The acquired data were preprocessed for noise filtering and bias removal, followed by parameter estimation of the longitudinal dynamics transfer functions using the MATLAB System Identification Toolbox. Black-box transfer function models, describing the response to elevator and throttle inputs, were estimated using the least squares error technique. The identification results show a high confidence level and good fit between the estimated model and the actual aircraft response.
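The paper itself uses the MATLAB System Identification Toolbox; purely as an illustration of the least-squares black-box step it describes, the sketch below fits a discrete ARX model to hypothetical input-output data with ordinary least squares (all signals and model orders here are invented).

```python
import numpy as np

def estimate_arx(u, y, na=2, nb=2):
    """Least-squares fit of the ARX model
       y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb],
    a simple stand-in for black-box transfer-function estimation."""
    n = max(na, nb)
    phi = np.array([np.r_[-y[k - np.arange(1, na + 1)],
                          u[k - np.arange(1, nb + 1)]] for k in range(n, len(y))])
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta[:na], theta[na:]        # denominator and numerator coefficients

# Hypothetical elevator-to-pitch-response data generated by a known 2nd-order system
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] + 0.1 * u[k - 2]
a_hat, b_hat = estimate_arx(u, y)
print(a_hat, b_hat)      # expected: a ≈ [-1.5, 0.7], b ≈ [0.5, 0.1]
```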

Keywords: fixed wing UAV, system identification, black box modeling, longitudinal dynamics, least square error

Procedia PDF Downloads 320
6045 Modified Newton's Iterative Method for Solving System of Nonlinear Equations in Two Variables

Authors: Sara Mahesar, Saleem M. Chandio, Hira Soomro

Abstract:

A nonlinear system of equations in two variables is a system that contains variables of degree greater than or equal to two or that comprises transcendental functions. The mathematical modeling of numerous physical problems leads to systems of nonlinear equations. Solving such systems is a central challenge in both applied and pure mathematics. Numerical techniques are mainly used to find solutions where analytical methods fail, which leads to approximate solutions; no general analytical technique exists for finding the exact roots of a system of nonlinear equations. Various methods have been proposed to solve such systems with an improved rate of convergence and accuracy. In this paper, a new scheme is developed for solving systems of nonlinear equations in two variables. The iterative scheme proposed here is a modified form of the conventional Newton's method (CN), whose order of convergence is two, whereas the order of convergence of the devised technique is three. Furthermore, a detailed error and convergence analysis of the proposed method is presented. Additionally, results on various numerical test problems are compared with those of the conventional Newton's method (CN), which confirms the theoretical properties of the proposed method.
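The modified third-order scheme itself is not given in the abstract; for orientation, the sketch below implements only the conventional Newton baseline (CN) for a two-variable system, with a hypothetical test system mixing a quadratic and a transcendental equation.

```python
import numpy as np

def newton_2d(F, J, x0, tol=1e-12, max_iter=50):
    """Conventional Newton iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k) for a
    system of two nonlinear equations; converges quadratically near a root."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Hypothetical test system: x^2 + y^2 = 4 and exp(x) + y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [np.exp(v[0]), 1.0]])
print(newton_2d(F, J, x0=[-2.0, 1.0]))    # converges to a root near (-1.82, 0.84)
```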

Keywords: conventional Newton’s method, modified Newton’s method, order of convergence, system of nonlinear equations

Procedia PDF Downloads 251
6044 High Gain Mobile Base Station Antenna Using Curved Woodpile EBG Technique

Authors: P. Kamphikul, P. Krachodnok, R. Wongsan

Abstract:

This paper presents the gain improvement of a sector antenna for a mobile phone base station using a new technique that enhances the gain of a microstrip antenna (MSA) array without enlarging the structure. A curved woodpile Electromagnetic Band Gap (EBG) structure is utilized to improve the gain instead. The advantages of the proposed antenna are a reduced MSA array length combined with higher gain and easy fabrication and installation. Moreover, it provides a fan-shaped radiation pattern, wide in the horizontal direction and relatively narrow in the vertical direction, which is appropriate for a mobile phone base station. The paper also presents the design procedure of a 1x8 MSA array associated with a U-shaped reflector for decreasing its back and side lobes. The fabricated curved woodpile EBG exhibits bandgap characteristics at 2.1 GHz and is utilized to realize a resonant cavity for the MSA array. This idea has been verified by both the Computer Simulation Technology (CST) software and experimental results. As a result, the fabricated antenna achieves a high gain of 20.3 dB, and the half-power beamwidths in the E- and H-planes are 36.8 and 8.7 degrees, respectively. Good qualitative agreement between the measured and simulated results of the proposed antenna was obtained.
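For context on why a 1x8 array narrows the beam in one plane, the sketch below computes the far-field array factor of a hypothetical uniform eight-element linear array with half-wavelength spacing and equal in-phase excitation; it ignores the actual element pattern, reflector, and EBG superstrate used in the paper.

```python
import numpy as np

N, d_over_lambda = 8, 0.5                                  # hypothetical uniform array
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)           # angle from broadside
psi = 2.0 * np.pi * d_over_lambda * np.sin(theta)          # inter-element phase shift
af = np.abs(np.exp(1j * np.outer(np.arange(N), psi)).sum(axis=0)) / N
af_db = 20.0 * np.log10(np.maximum(af, 1e-6))
hpbw = np.degrees(np.ptp(theta[af_db >= -3.0]))            # -3 dB width of the main lobe
print(f"array-factor half-power beamwidth ≈ {hpbw:.1f} degrees")
```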

Keywords: gain improvement, microstrip antenna array, electromagnetic band gap, base station

Procedia PDF Downloads 305
6043 Joubert Syndrome: A Rare Genetic Disorder Reported in Kurdish Family

Authors: Aran Abd Al Rahman

Abstract:

Joubert syndrome is regarded as a congenital cerebellar ataxia, most often inherited in an autosomal recessive manner, with rare X-linked recessive forms. The disease is diagnosed by brain imaging, which shows the so-called molar tooth sign. Neurological signs are present from the neonatal period and include hypotonia progressing to ataxia, global developmental delay, ocular motor apraxia, and breathing dysregulation. These signs are variably associated with multiorgan involvement, mainly of the retina, kidneys, skeleton, and liver. Thirty causative genes have been identified so far, all of which encode proteins of the primary cilium or its apparatus. The purpose of our project was to detect the mutant gene (INPP5E) that causes Joubert syndrome. Several methods were used for diagnosis, such as MRI and CT scanning, while molecular diagnosis was performed in this project by ARMS PCR for detection of the mutant gene. In the single family reported in this research, consisting of two children and their parents, the two children were affected and carrier status was identified within the family.

Keywords: Joubert syndrome, genetic disease, Kurdistan region, Sulaimani

Procedia PDF Downloads 133
6042 Neuro-Connectivity Analysis Using ABIDE Data in Autism Study

Authors: Dulal Bhaumik, Fei Jie, Runa Bhaumik, Bikas Sinha

Abstract:

The human brain is an amazingly complex network. Aberrant activity in this network can lead to various neurological disorders such as multiple sclerosis, Parkinson’s disease, Alzheimer’s disease, and autism. fMRI has emerged as an important tool to delineate the neural networks affected by such diseases, particularly autism. In this paper, we propose mixed-effects models together with an appropriate procedure for controlling false discoveries to detect disrupted connectivities in whole brain studies. Results are illustrated with a large data set known as the Autism Brain Imaging Data Exchange (ABIDE), which includes 361 subjects from 8 medical centers. We believe that our findings have adequately addressed the small-sample inference problem and are thus more reliable for identifying therapeutic targets for intervention. In addition, our results can be used for the early detection of subjects who are at high risk of developing neurological disorders.
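The exact model specification is not given in the abstract; as a generic sketch of the two ingredients it names, the code below fits a linear mixed-effects model per connectivity edge (hypothetical data, with imaging site as the random grouping factor) and then applies Benjamini-Hochberg false discovery rate control across edges using statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Hypothetical toy data: one connectivity value per subject for each of 50
# region pairs ("edges"); diagnostic group is the fixed effect, imaging site
# is the random grouping factor.
rng = np.random.default_rng(3)
n_sub, n_edges = 120, 50
df = pd.DataFrame({"group": np.repeat([0, 1], n_sub // 2),
                   "site": rng.integers(0, 8, n_sub)})

pvals = []
for e in range(n_edges):
    # the first 10 edges carry a real group effect, the rest are pure noise
    df["conn"] = rng.standard_normal(n_sub) + 0.6 * df["group"] * (e < 10)
    fit = smf.mixedlm("conn ~ group", df, groups=df["site"]).fit()
    pvals.append(fit.pvalues["group"])

# Benjamini-Hochberg procedure controls the false discovery rate across edges
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("edges flagged as disrupted:", np.where(reject)[0])
```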

Keywords: ABIDE, autism spectrum disorder, fMRI, mixed-effects model

Procedia PDF Downloads 280
6041 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis

Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos

Abstract:

Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI in a 3.0 T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo planar imaging sequences for the analysis of volumetric, tractography, and functional resting-state data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity; fractional anisotropy [FA]; axial and radial diffusivity [AD; RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC. The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in cortico-cerebellar WM tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity, and mostly decreased functional connectivity in RRMS patients, emphasize the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: Our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. A future opportunity for leveraging multimodal neuroimaging data remains the integration of such data into recently applied machine learning approaches to more accurately classify and predict patients’ disease course.
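The diffusion metrics named above have standard definitions in terms of the eigenvalues of the fitted diffusion tensor; the small sketch below (example eigenvalues only, not data from the study, which used the Brainance suite) shows how FA, AD, and RD are computed.

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, where MD is the mean
    diffusivity of the three diffusion-tensor eigenvalues."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()
    return np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))

def axial_and_radial_diffusivity(evals):
    l1, l2, l3 = sorted(evals, reverse=True)
    return l1, (l2 + l3) / 2.0   # AD = principal eigenvalue, RD = mean of the other two

# Example white-matter-like eigenvalues in units of 10^-3 mm^2/s
evals = [1.7, 0.3, 0.3]
print("FA ≈", round(fractional_anisotropy(evals), 2))          # ≈ 0.8
print("AD, RD =", axial_and_radial_diffusivity(evals))         # 1.7, 0.3
```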

Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis

Procedia PDF Downloads 134
6040 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle size or the adjustment direction for the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) such as Fick’s law of diffusion and the MacInnes and Ohm’s equations, among other phenomena. Thus, to efficiently use the model in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select an adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple Euler method for long-term tests is presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods are compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.
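The full DFN equations are not reproduced in the abstract; purely to illustrate the Crank-Nicolson idea discussed above, the sketch below applies it to a simplified one-dimensional electrolyte-diffusion sub-problem with zero-flux boundaries and hypothetical parameters, at a time step far beyond the explicit Euler stability limit.

```python
import numpy as np

def crank_nicolson_diffusion(c0, D, dx, dt, steps):
    """Crank-Nicolson update for c_t = D * c_xx with zero-flux boundaries:
    (I - 0.5*dt*D/dx^2 * A) c^{n+1} = (I + 0.5*dt*D/dx^2 * A) c^n,
    where A is the second-difference operator. Second order in time and
    unconditionally stable."""
    n = len(c0)
    r = D * dt / dx ** 2
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A[0, 1] = A[-1, -2] = 2.0                    # mirror nodes -> zero-flux boundaries
    lhs = np.eye(n) - 0.5 * r * A
    rhs = np.eye(n) + 0.5 * r * A
    c = c0.copy()
    for _ in range(steps):
        c = np.linalg.solve(lhs, rhs @ c)
    return c

# Hypothetical electrolyte-like numbers: r = D*dt/dx^2 = 62.5 here, far above the
# explicit Euler limit of 0.5, yet Crank-Nicolson remains stable.
x = np.linspace(0.0, 1e-4, 51)                         # 100 um domain
c0 = 1000.0 + 200.0 * np.cos(np.pi * x / x[-1])        # mol/m^3 initial profile
c = crank_nicolson_diffusion(c0, D=2.5e-10, dx=x[1] - x[0], dt=1.0, steps=60)
```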

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 8
6039 The Brain’s Attenuation Coefficient as a Potential Estimator of Temperature Elevation during Intracranial High Intensity Focused Ultrasound Procedures

Authors: Daniel Dahis, Haim Azhari

Abstract:

Noninvasive image-guided intracranial treatments using high intensity focused ultrasound (HIFU) are on course for translation into clinical applications. They include, among others, tumor ablation, hyperthermia, and blood-brain-barrier (BBB) penetration. Since many of these procedures are associated with local temperature elevation, thermal monitoring is essential. MRI constitutes an imaging method with high spatial resolution and thermal mapping capacity. It is currently the leading modality for temperature guidance, commonly under the name MRgHIFU (magnetic-resonance guided HIFU). Nevertheless, MRI is a very expensive, non-portable modality, which jeopardizes its accessibility. Ultrasonic thermal monitoring, on the other hand, could provide a modular, cost-effective alternative with higher temporal resolution and accessibility. In order to assess the feasibility of ultrasonic brain thermal monitoring, this study investigated the use of temporal changes in the brain tissue attenuation coefficient (AC) as potential estimators of thermal changes. Newton's law of cooling describes a temporal exponential decay of the temperature of a heated object immersed in relatively cold surroundings. Similarly, in the case of cerebral HIFU treatments, the temperature in the region of interest, i.e., the focal zone, is suggested to follow the same law. Thus, it was hypothesized that the AC of the irradiated tissue may follow a temporal exponential behavior during the cool-down regime. Three ex-vivo bovine brain tissue specimens were inserted into plastic containers along with four thermocouple probes in each sample. The containers were placed inside a specially built ultrasonic tomograph and scanned at room temperature. The corresponding pixel-averaged AC was acquired for each specimen and used as a reference. Subsequently, the containers were placed in a beaker containing hot water and gradually heated to about 45 °C. They were then repeatedly rescanned during cool-down using an ultrasonic through-transmission raster trajectory until reaching about 30 °C. From the obtained images, the normalized AC and its temporal derivative were registered as functions of temperature and time. The results demonstrated a high correlation (R² > 0.92) of both the brain AC and its temporal derivative with temperature. This indicates the validity of the hypothesis and the possibility of estimating brain tissue temperature from the temporal changes in the AC. It is important to note that each brain yielded different AC values and slopes, which implies that a calibration step is required for each specimen. Thus, for practical acoustic monitoring of the brain, two steps are suggested. The first step consists of simply measuring the AC at normal body temperature. The second step entails measuring the AC after a small temperature elevation. In the face of the urgent need for a more accessible thermal monitoring technique for brain treatments, the proposed methodology enables cost-effective, high temporal resolution acoustic temperature estimation during HIFU treatments.
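As a worked illustration of the cooling model invoked above, the sketch below fits Newton's law of cooling, T(t) = T_amb + (T0 - T_amb)·exp(-t/τ), to a hypothetical cool-down record and derives a simple linear AC-temperature calibration; the numbers are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Newton's law of cooling: T(t) = T_amb + (T0 - T_amb) * exp(-t / tau)
cooling = lambda t, T_amb, T0, tau: T_amb + (T0 - T_amb) * np.exp(-t / tau)

# Hypothetical cool-down record: time (min), temperature (deg C), normalized AC
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0, 40.0, 60.0])
T = np.array([45.0, 41.7, 39.1, 37.1, 35.5, 33.3, 32.0, 30.7])
ac = np.array([1.000, 0.978, 0.961, 0.947, 0.937, 0.922, 0.913, 0.905])

(T_amb, T0, tau), _ = curve_fit(cooling, t, T, p0=[30.0, 45.0, 20.0])
slope, intercept = np.polyfit(T, ac, 1)       # per-specimen AC-temperature calibration
print(f"tau ≈ {tau:.1f} min;  AC ≈ {slope:.4f} * T + {intercept:.3f}")
```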

Keywords: attenuation coefficient, brain, HIFU, image-guidance, temperature

Procedia PDF Downloads 156
6038 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing, and such electrodes can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique, and the effect of laser scribing on the reduction of GO was investigated under two conditions: atmospheric and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters to be assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability, achieving low-cost productivity.

Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy

Procedia PDF Downloads 110
6037 Exposing Latent Fingermarks on Problematic Metal Surfaces Using Time of Flight Secondary Ion Mass Spectroscopy

Authors: Tshaiya Devi Thandauthapani, Adam J. Reeve, Adam S. Long, Ian J. Turner, James S. Sharp

Abstract:

Fingermarks are a crucial form of evidence for identifying a person at a crime scene. However, visualising latent (hidden) fingermarks can be difficult, and the correct choice of techniques is essential to develop and preserve any fingermarks that might be present. Knives, firearms and other metal weapons have proven to be challenging substrates (stainless steel in particular) from which to reliably obtain fingermarks. In this study, time of flight secondary ion mass spectroscopy (ToF-SIMS) was used to image fingermarks on metal surfaces. This technique was compared to a conventional superglue-based fuming technique accompanied by a series of contrast enhancing dyes (basic yellow 40 (BY40), crystal violet (CV) and Sudan black (SB)) on three different metal surfaces. The conventional techniques showed little to no evidence of fingermarks being present on the metal surfaces after a few days. However, ToF-SIMS images revealed fingermarks on the same and similar substrates with an exceptional level of detail, demonstrating clear ridge definition as well as detail about sweat pore position and shape, which persisted for over 26 days after deposition when the samples were stored under ambient conditions.

Keywords: conventional techniques, latent fingermarks, metal substrates, time of flight secondary ion mass spectroscopy

Procedia PDF Downloads 157
6036 Postcolonialism and Feminist Dialogics: Re-Imaging Cultural Exclusion in the Nigerian Feminist Fiction

Authors: Muhammad Dahiru

Abstract:

A contestable polemic in postcolonialism is the Western Universalist conception of the people of a vast continent such as Africa as homogenous. Quite often, the postcolonial African woman is seen as an entity in western cultural and literary feminist theorisations. The debate between the so-called western feminist scholarship and the postcolonial/third world feminists that began in the late 1980s focuses on this universalisation of women’s concerns as monolithic. This article argues that the universalising assumption that all women share similar concerns in not only Africa as a continent but even in Nigeria as a country is misleading because of cultural differences. The article is a dialogic reading of Nigerian literature arguing that there is no culturally normative perspective on Nigerian feminist fiction because of the multifaceted and multicultural concerns of women writers from the different cultural regions in the country. The article concludes that this can better be read and appreciated through the lens of M. M. Bakhtin’s theory of dialogism.

Keywords: cultural exclusion, dialogics, Nigerian feminist fiction, postcolonialism

Procedia PDF Downloads 197
6035 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on the control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained at the beginning of the 2000s, while the related theoretical description was only given in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulations which allow these parameters to be determined. It is shown that these parameters can be deduced from data on the spatial distributions of the diffusant concentration and the average size of the crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at a minimum of two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of the diffusant concentration and the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all the above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable match between the simulation and experimental data were found. This ensured adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely the concentration and average size of the crystalline grains.

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 266
6034 Application of Artificial Ground-Freezing to Construct a Passenger Interchange Tunnel for the Subway Line 14 in Paris, France

Authors: G. Lancellotta, G. Di Salvo, A. Rigazio, A. Davout, V. Pastore, G. Tonoli, A. Martin, P. Jullien, R. Jagow-Klaff, R. Wernecke

Abstract:

The artificial ground freezing (AGF) technique is a well-proven soil improvement approach used worldwide to construct shafts, tunnels and many other civil structures in difficult subsoil or ambient conditions. As part of the extension of Line 14 of the Paris subway, a passenger interchange tunnel between the new station at Porte de Clichy and the new Tribunal de Grande Instance has been successfully constructed using this technique. The paper presents the successful application of AGF by liquid nitrogen and brine, implemented to provide structural stability and a groundwater cut-off around the passenger interchange tunnel. The working conditions were considered rather challenging due to the proximity of the hundred-year-old existing service tunnel of Line 13 and the subsoil conditions on site. Laboratory tests were carried out to determine the relevant soil parameters for the hydro-thermal-mechanical aspects and to implement numerical analyses. Monitoring data were used to check and control the development and efficiency of the freezing process, as well as to back-analyze the parameters assumed for the design, during both the freezing and thawing phases.

Keywords: artificial ground freezing, brine method, case history, liquid nitrogen

Procedia PDF Downloads 218
6033 An Overview of Posterior Fossa Associated Pathologies and Segmentation

Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets

Abstract:

Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, or the Cavalieri estimator were utilized. These studies produced superior data compared with older methods that relied on formulas for rough volumetric estimation. The most commonly used segmentation technique was semi-automated segmentation (12 studies). Manual segmentation was the second most common technique (7 studies). Automated segmentation techniques (4 studies) and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region, were the next most commonly used techniques. The least commonly utilized segmentation technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, it is apparent that no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation via open source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method. However, atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa-related pathologies. Medical professionals will save time and effort analyzing large sets of data due to these advances.
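For readers unfamiliar with the Cavalieri estimator mentioned above, the sketch below shows the point-counting arithmetic: on every sampled slice each grid point that lands on the structure stands for a fixed area, and the summed slice areas are multiplied by the slice spacing. The counts and spacing are hypothetical, not values from the reviewed studies.

```python
import numpy as np

def cavalieri_volume(point_counts, slice_spacing_mm, area_per_point_mm2):
    """Cavalieri point-counting estimate: V = spacing * sum(points_i * a/p),
    where a/p is the area represented by one grid point."""
    areas = np.asarray(point_counts, dtype=float) * area_per_point_mm2
    return slice_spacing_mm * areas.sum()          # volume in mm^3

# Hypothetical measurement: 12 sampled MRI slices, 5 mm apart, 2 x 2 mm point grid
counts = [220, 460, 680, 840, 960, 1020, 1000, 920, 780, 580, 360, 140]
volume = cavalieri_volume(counts, slice_spacing_mm=5.0, area_per_point_mm2=4.0)
print(f"estimated volume ≈ {volume / 1000.0:.0f} cm^3")
```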

Keywords: Chiari, posterior fossa, segmentation, volumetric

Procedia PDF Downloads 103
6032 Total-Reflection X-Ray Spectroscopy as a Tool for Element Screening in Food Samples

Authors: Hagen Stosnach

Abstract:

The analytical demands on modern instruments for element analysis in food samples include the analysis of major, trace and ultra-trace essential elements as well as potentially toxic trace elements. In this study, total reflection X-ray fluorescence analysis (TXRF) is presented as an analytical technique which meets the requirements defined by the Association of Official Agricultural Chemists (AOAC) regarding the limit of quantification, repeatability, reproducibility and recovery for most of the target elements. The advantages of TXRF are the small sample mass required, the broad linear range from µg/kg up to wt.-% values, no consumption of gases or cooling water, and the flexible and easy sample preparation. Liquid samples like alcoholic or non-alcoholic beverages can be analyzed without any preparation. For solid food samples, the most common sample pre-treatment methods are mineralization and direct deposition of the sample onto the reflector with little or no treatment, mainly as solid suspensions or after extraction. The main disadvantages are possible peak overlaps, which may lower the accuracy of quantitative analysis and limit element identification. The analytical technique is presented through several application examples covering a broad range of liquid and solid food types.

Keywords: essential elements, toxic metals, XRF, spectroscopy

Procedia PDF Downloads 128
6031 Fostering Fresh Graduate Students’ Confidence in Speaking English: An Action Research to Students of Muria Kudus University, Central Java, Indonesia

Authors: Farid Noor Romadlon

Abstract:

Welcoming the ASEAN Economic Community and globalization, people need to have good communication skills. Being able to speak English is an important qualification for this skill and for being a global citizen. This study focused on fostering fresh graduate students’ confidence in speaking English so that they perform well when speaking. The subjects were thirty (30) first-semester students from the English Education Department who joined the Intensive Course class. They had poor motivation to speak English, since English is a foreign language to which they are not exposed in their environment. The study used the Three Communicative Activities technique in twelve successive meetings in total. It was conducted in two cycles (six meetings each), since some activities needed to be improved after the first cycle. An oral test was administered to obtain the quantitative results, and observation was conducted to strengthen the findings. The results indicated that the Three Communicative Activities improved students’ confidence in speaking English, and they made significant progress in their classroom performance. The technique, which allowed students more space to explore and express their ideas to their friends, increased their confidence in their performance. The group and cooperative activities stimulated students to think critically in the discussions and promoted their confidence to talk more.

Keywords: students’ confidence, three communicative activities, speaking, Muria Kudus University

Procedia PDF Downloads 206