Search results for: computational imaging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3152

1442 Radical Web Text Classification Using a Composite-Based Approach

Authors: Kolade Olawande Owoeye, George R. S. Weir

Abstract:

The widespread use of the internet for terrorist and extremist activity has become a major threat to governments and national security. The potential dangers of such content have necessitated intelligence gathering via the web and real-time monitoring of potential websites for extremist activity. However, manual classification of such content is impractical and time-consuming. In response to this challenge, we developed an automated classification system based on a composite technique: a computational framework that combines the semantic and syntactic features of the textual content of a web page. We applied the framework to a dataset of extremist webpages that had previously been classified manually. We then built a classification model on the data using the J48 decision tree algorithm to measure how well each page can be assigned to its appropriate class. Compared with other state-of-the-art approaches, our method achieved a 96% success rate in classifying webpages when matched against the manual classification.
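The classification step above uses Weka's J48 (C4.5) decision tree. As a much-reduced, illustrative stand-in, the sketch below trains a one-split decision stump on two hypothetical page features; the feature names and values are invented for illustration, not taken from the paper's dataset.

```python
def train_stump(samples):
    """Pick the feature/threshold split with the fewest training errors."""
    best = None
    for feat in (0, 1):
        for x, _ in samples:
            thr = x[feat]
            errs = sum((xi[feat] >= thr) != label for xi, label in samples)
            if best is None or errs < best[0]:
                best = (errs, feat, thr)
    return best[1], best[2]

def predict(model, x):
    feat, thr = model
    return x[feat] >= thr          # True = "extremist"

# Toy labeled pages: (semantic score, syntactic score) -> extremist?
pages = [((0.9, 0.2), True), ((0.8, 0.5), True),
         ((0.1, 0.4), False), ((0.2, 0.1), False)]
model = train_stump(pages)
```

A real J48 model grows a full tree by information gain; the stump only shows the shape of the train/predict loop.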

Keywords: extremist, web pages, classification, semantics, posit

Procedia PDF Downloads 130
1441 The Use of Degradation Measures to Design Reliability Test Plans

Authors: Stephen V. Crowder, Jonathan W. Lane

Abstract:

With short product development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few, if any, observed failures. Thus, it may be difficult to assess reliability using traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of the number of units placed on test and the duration of the test, necessary to demonstrate a reliability goal. In this work we present a case study involving an electronic component subject to degradation. The data, consisting of 42 degradation paths measured in cycles to failure, are first used to estimate a reliability function. Bootstrapping techniques are then used to perform power studies and develop a minimal reliability test plan for future production of this component.
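The bootstrap step described above can be sketched as follows. The cycles-to-failure values here are invented for illustration (the paper uses 42 real degradation paths), and a percentile bootstrap of the empirical reliability stands in for whatever variant the authors used.

```python
import random

random.seed(0)
# Hypothetical cycles-to-failure data (illustrative only).
cycles = [820, 950, 1010, 1120, 870, 990, 1050, 1200, 930, 1080]

def reliability_at(t, sample):
    """Empirical reliability: fraction of units surviving beyond t cycles."""
    return sum(c > t for c in sample) / len(sample)

def bootstrap_ci(t, data, n_boot=2000, alpha=0.10):
    """Percentile bootstrap confidence interval for reliability at time t."""
    stats = sorted(
        reliability_at(t, random.choices(data, k=len(data)))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(1000, cycles)
```

The same resampling loop extends naturally to power studies: repeat it for candidate sample sizes and test durations and check how often the reliability goal is demonstrated.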

Keywords: degradation measure, time to failure distribution, bootstrap, computational science

Procedia PDF Downloads 509
1440 Performance Evaluation and Kinetics of Artocarpus heterophyllus Seed for the Purification of Paint Industrial Wastewater by Coagulation-Flocculation Process

Authors: Ifeoma Maryjane Iloamaeke, Kelvin Obazie, Mmesoma Offornze, Chiamaka Marysilvia Ifeaghalu, Cecilia Aduaka, Ugomma Chibuzo Onyeije, Claudine Ifunanaya Ogu, Ngozi Anastesia Okonkwo

Abstract:

This work investigated the effects of pH, settling time, and coagulant dosage on the removal of color, turbidity, and heavy metals from paint industrial wastewater using the seed of Artocarpus heterophyllus (AH) in the coagulation-flocculation process. The paint effluent was characterized physicochemically, while the AH coagulant was characterized instrumentally by Scanning Electron Microscopy (SEM), Fourier Transform Infrared (FTIR) spectroscopy, and X-ray diffraction (XRD). A jar test experiment was used for the coagulation-flocculation process. The results showed that the paint effluent was polluted with color, turbidity (36000 NTU), mercury (1.392 mg/L), lead (0.252 mg/L), arsenic (1.236 mg/L), TSS (63.40 mg/L), and COD (121.70 mg/L). The maximum color removal efficiency was 94.33% at a dosage of 0.2 g/L and pH 2 at a constant time of 50 mins, and 74.67% at constant pH 2, a coagulant dosage of 0.2 g/L, and 50 mins. The highest turbidity removal efficiency was 99.94% at 0.2 g/L and 50 mins at constant pH 2, and 96.66% at pH 2 and 0.2 g/L at a constant time of 50 mins. A mercury removal efficiency of 99.29% was achieved at the optimal condition of 0.8 g/L coagulant dosage, pH 8, and a constant time of 50 mins, and 99.57% at a coagulant dosage of 0.8 g/L, a time of 50 mins, and constant pH 8. The highest lead removal efficiency was 99.76% at a coagulant dosage of 10 g/L and a time of 40 mins at constant pH 10, and 96.53% at pH 10, a coagulant dosage of 10 g/L, and a constant time of 40 mins. For arsenic, the removal efficiency was 75.24% at 0.8 g/L coagulant dosage, a time of 40 mins, and a constant pH of 8. XRD imaging showed that the Artocarpus heterophyllus coagulant was crystalline before treatment and amorphous after treatment. The SEM and FTIR results of the AH coagulant and sludge indicated changes in surface morphology and functional groups before and after treatment. The reaction kinetics were best modeled by second-order kinetics.

Keywords: Artocarpus heterophyllus, coagulation-flocculation, coagulant dosages, settling time, paint effluent

Procedia PDF Downloads 79
1439 Evaluation of Progressive Collapse of Transmission Tower

Authors: Jeong-Hwan Choi, Hyo-Sang Park, Tae-Hyung Lee

Abstract:

The transmission tower is one of the crucial lifeline structures in a modern society, and it needs to be protected against extreme loading conditions. However, the transmission tower is a very complex structure, and it is therefore very difficult to simulate the actual damage and collapse behavior of the tower structure. In this study, the actual collapse behavior of a transmission tower under lateral loading conditions such as wind load is evaluated through computational simulation. For that, a progressive collapse procedure is applied: after running the simulation, if a member of the tower structure fails, the failed member is removed and the simulation is run again. A 154 kV transmission tower is selected for this study. The simulation is performed with a nonlinear static analysis procedure, namely pushover analysis, using OpenSees, an earthquake simulation platform. Three-dimensional finite element models of the tower are developed.
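The remove-and-rerun procedure described above can be sketched generically. Here `analyse` is a placeholder for the OpenSees pushover analysis, and the member names, demands, and capacities in the demo are invented for illustration.

```python
def progressive_collapse(members, capacity, analyse):
    """Repeat the analysis, removing the most overloaded member each pass."""
    failed = []
    while True:
        demands = analyse(members)                # e.g. a pushover analysis
        over = [m for m in members if demands[m] > capacity[m]]
        if not over:
            return members, failed                # structure is stable
        worst = max(over, key=lambda m: demands[m] / capacity[m])
        members.remove(worst)                     # remove the failed member
        failed.append(worst)                      # ...then re-run the analysis

# Toy demo: member "a" is overloaded; its load redistributes to "b" and "c".
capacity = {"a": 5.0, "b": 6.0, "c": 6.0}
def analyse(ms):
    full = {"a": 10.0, "b": 3.0, "c": 3.0} if "a" in ms else {"b": 4.0, "c": 4.0}
    return {m: full[m] for m in ms}

survivors, failed = progressive_collapse(["a", "b", "c"], capacity, analyse)
```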

Keywords: transmission tower, OpenSEES, pushover, progressive collapse

Procedia PDF Downloads 341
1438 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 24
1437 Numerical Solution Speedup of the Laplace Equation Using FPGA Hardware

Authors: Abbas Ebrahimi, Mohammad Zandsalimy

Abstract:

The main purpose of this study is to investigate the feasibility of using FPGA (Field Programmable Gate Array) chips as alternatives to conventional CPUs for accelerating the numerical solution of the Laplace equation. An FPGA is an integrated circuit that contains an array of logic blocks, and its architecture can be reprogrammed and reconfigured after manufacturing. Complex circuits for various applications can be designed and implemented using FPGA hardware. The reconfigurable hardware used in this paper is an SoC (System on a Chip) FPGA that integrates both microprocessor and FPGA architectures into a single device. In the present study, the Laplace equation is implemented and solved numerically on both the reconfigurable hardware and a CPU. The precision of the results and the speedup of the calculations are compared. The computational process on the FPGA is up to 20 times faster than on a conventional CPU, with the same data precision. An analytical solution is used to validate the results.
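The five-point finite-difference stencil at the core of such a Laplace solver, the kernel one would map onto FPGA logic, can be sketched in plain Python as Jacobi relaxation on a small grid. The grid size and boundary values below are illustrative, not the paper's test case.

```python
def jacobi_step(u):
    """One Jacobi sweep: each interior point becomes the mean of its 4 neighbours."""
    n, m = len(u), len(u[0])
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return new

def solve(u, tol=1e-6, max_iter=10000):
    """Iterate until the largest per-point change falls below tol."""
    for _ in range(max_iter):
        new = jacobi_step(u)
        diff = max(abs(new[i][j] - u[i][j])
                   for i in range(len(u)) for j in range(len(u[0])))
        u = new
        if diff < tol:
            break
    return u

# 5x5 grid, top edge held at 1.0, the other edges at 0.0
grid = [[0.0] * 5 for _ in range(5)]
grid[0] = [1.0] * 5
result = solve(grid)
```

The inner sweep is embarrassingly parallel across grid points, which is precisely why it maps well to FPGA pipelines.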

Keywords: accelerating numerical solutions, CFD, FPGA, hardware definition language, numerical solutions, reconfigurable hardware

Procedia PDF Downloads 366
1436 Numerical Investigation of Aerodynamic Analysis on Passenger Vehicle

Authors: Cafer Görkem Pınar, İlker Coşar, Serkan Uzun, Atahan Çelebi, Mehmet Ali Ersoy, Ali Pınarbaşı

Abstract:

In this study, the aerodynamics of a 1:1 scale model of the Renault Clio MK4 SW were numerically investigated in the commercial computational fluid dynamics (CFD) package ANSYS CFX 2021 R1 under steady, subsonic, 3-D conditions. The vehicle model used for the analysis was made independent of the number of mesh elements, and the k-epsilon turbulence model was applied during the analysis. Results were interpreted as streamlines, pressure gradient, and turbulent kinetic energy contours around the vehicle at speeds of 50 km/h and 100 km/h. In addition, the validity of the analysis was assessed by comparing the drag coefficient of the vehicle with values in the literature. Finally, the pressure gradient contours around the taillight of the Renault Clio MK4 SW were examined, and the behavior of the total force at speeds of 50 km/h and 100 km/h was interpreted.

Keywords: CFD, k-epsilon, aerodynamics, drag coefficient, taillight

Procedia PDF Downloads 128
1435 A High-Level Co-Evolutionary Hybrid Algorithm for the Multi-Objective Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid distributed algorithm is suggested for the multi-objective job shop scheduling problem. Many new approaches are used in the design of the distributed algorithm. The co-evolutionary structure of the algorithm, and the competition between different communicating hybrid algorithms executed simultaneously, lead to an efficient search. Using several machines to distribute the algorithms, at both the iteration and solution levels, increases computational speed. The proposed algorithm is able to find the Pareto solutions of large problems in a shorter time than other algorithms in the literature. The Apache Spark and Hadoop platforms were used to distribute the algorithm. The suggested algorithm and implementations were compared with the results of successful algorithms in the literature. The results demonstrate the efficiency and high speed of the algorithm.

Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, multi-objective optimization

Procedia PDF Downloads 351
1434 Generation of Quasi-Measurement Data for On-Line Process Data Analysis

Authors: Hyun-Woo Cho

Abstract:

To ensure the safety of a manufacturing process, one should quickly identify the assignable cause of a fault on an on-line basis. To this end, many statistical techniques, including linear and nonlinear methods, have been frequently utilized. However, such methods suffer from a major problem of small sample size, which is mostly attributable to the characteristics of the empirical models used as reference models. This work presents a new method to overcome the insufficiency of measurement data in monitoring and diagnosis tasks. Quasi-measurement data are generated from existing data based on two indices, similarity and importance. The performance of the method is demonstrated using a real data set. The results show that the presented method handles the insufficiency problem successfully. In addition, it is quite efficient in terms of computational speed and memory usage, so on-line implementation of the method for monitoring and diagnosis purposes is straightforward.
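The paper's similarity and importance indices are not specified here, so the sketch below illustrates the general idea with a SMOTE-like scheme: "similarity" is nearest-neighbour distance, and quasi-measurements are synthesized by interpolating between a sample and its nearest neighbour. All data values are invented.

```python
import random

random.seed(1)

def dist(a, b):
    """Euclidean distance between two measurement vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def quasi_measurements(data, n_new):
    """Generate n_new synthetic samples by interpolating toward nearest neighbours."""
    out = []
    for _ in range(n_new):
        x = random.choice(data)
        nn = min((d for d in data if d is not x), key=lambda d: dist(x, d))
        t = random.random()                       # interpolation weight in [0, 1)
        out.append(tuple(a + t * (b - a) for a, b in zip(x, nn)))
    return out

# Toy 2-D process measurements (illustrative only)
base = [(1.0, 2.0), (1.2, 2.1), (5.0, 6.0), (5.1, 6.2)]
extra = quasi_measurements(base, 10)
```

Because each synthetic point is a convex combination of two real samples, the generated data stay inside the envelope of the existing measurements.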

Keywords: data analysis, diagnosis, monitoring, process data, quality control

Procedia PDF Downloads 463
1433 Osteoarticular Ultrasound for Diagnostic Purposes in the Practice of the Rheumatologist

Authors: A. Ibovi Mouondayi, S. Zaher, K. Nassar, S. Janani

Abstract:

Introduction: Osteoarticular ultrasound has become an essential tool for the investigation and monitoring of osteoarticular pathologies for rheumatologists. It is performed in the clinic and is cheaper to access than other imaging techniques. Important anatomical sites of inflammation in inflammatory diseases, such as the synovium, tendon sheath, and enthesis, are easily identifiable on ultrasound. Objective: The objective of this study was to evaluate the importance of ultrasound for rheumatologists in establishing diagnoses of inflammatory rheumatism in cases of uncertain clinical presentation. Material and Methods: This is a retrospective study conducted in our department over a period of 30 months, from January 2020 to June 2022. We included all patients with inflammatory arthralgia without clinical arthritis. Patients' data were collected through a patient operating system. Results: A total of 35 patients were identified, comprising 4 men and 31 women, with an M/F sex ratio of 0.12. The average age of the patients was 48.8 years, with extremes ranging from 17 to 83 years. All patients had had inflammatory polyarthralgia for an average of 9.3 years. Only two patients had suspicious synovitis on clinical examination. 91.43% of patients had a positive inflammatory workup, with an average CRP of 22.2 mg/L. Rheumatoid factor (RF) was present in 45.7% of patients and anti-CCP in 48.57%, with respective averages of 294.43 and 314.63 international units/mL. Radiographic lesions were found in 54% of patients. Osteoarticular ultrasound was performed in all of these patients. Subclinical synovitis was found in 60% of patients, including 23% Doppler positive. Tenosynovitis was found in 11% of patients. Enthesitis was objectified in 3% of patients. Rheumatoid arthritis (RA) was retained in 40% of patients, psoriatic arthritis in 6%, and hydroxyapatite arthritis and osteoarthritis in 3% each.
Conclusion: Osteoarticular ultrasound has become an essential tool in the practice of rheumatology in recent years. It is used for diagnostic purposes in chronic inflammatory rheumatism as well as in degenerative rheumatism and crystal-induced arthropathies, and it is also essential in the follow-up of patients in rheumatology.

Keywords: ultrasound, skeletal, rheumatoid arthritis, arthralgia

Procedia PDF Downloads 101
1432 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images

Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy

Abstract:

Contactless, painless, and radiation-free thermal imaging is one of the preferred screening modalities for the detection of breast cancer. However, a poor signal-to-noise ratio and the inexorable need to preserve the edges separating cancer cells from normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from a study appraising two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered preprocessed image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of a thermographic image. The pre-processed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then segmented. The segmentation of the breast starts with an initial level-set function. In this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast, and their position may vary with breast size. The proposed method was applied to images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented from the background by adjusting the markers. The results showed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
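The contrast between uniform Gaussian smoothing and anisotropic filtering can be illustrated with a minimal Perona-Malik diffusion step: smoothing is suppressed across strong gradients (edges), unlike a Gaussian blur. The parameter values and the toy step-edge image below are illustrative, not the authors' settings.

```python
import math

def anisotropic_step(u, kappa=0.5, lam=0.2):
    """One explicit Perona-Malik diffusion step on a 2-D intensity grid."""
    n, m = len(u), len(u[0])
    g = lambda d: math.exp(-(d / kappa) ** 2)    # edge-stopping function
    new = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            flux = 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                d = u[i + di][j + dj] - u[i][j]
                flux += g(abs(d)) * d            # conduction weighted by edge strength
            new[i][j] = u[i][j] + lam * flux
    return new

# Toy step edge: left half 0.0, right half 1.0 (6x6 grid)
img = [[0.0, 0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(6)]
for _ in range(5):
    img = anisotropic_step(img)
```

Because g() is nearly zero across the unit step, the edge survives the diffusion almost intact, whereas a Gaussian filter of comparable strength would blur it.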

Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms

Procedia PDF Downloads 364
1431 Demographic Characteristics and Factors Affecting Mortality in Pediatric Trauma Patients Who Are Admitted to Emergency Service

Authors: Latif Duran, Erdem Aydin, Ahmet Baydin, Ali Kemal Erenler, Iskender Aksoy

Abstract:

Aim: In this retrospective study, we aim to contribute to the literature by presenting the demographic characteristics of pediatric patients presenting with trauma and the factors that may cause mortality, together with proposals for measures to reduce mortality. Material and Method: This study was performed by retrospectively investigating data obtained from the patient files and the hospital automation registration system for pediatric trauma patients who applied to the Adult Emergency Department of the Ondokuz Mayıs University Medical Faculty between January 1, 2016, and December 31, 2016. Results: 289 of the 415 patients included in our study were male. The median age was 11.3 years. The most common trauma mechanism was falling from a height. A statistically significant association was found between trauma mechanism and gender. An increase in the number of trauma cases was found especially in the summer months. The study showed that thoracic and abdominal trauma was associated with increased mortality. Computed tomography was the most common diagnostic imaging modality. The presence of subarachnoid hemorrhage increased the risk of mortality by 62.3-fold. Eight of the patients (1.9%) died. Scoring systems were statistically significant predictors of mortality. Conclusion: Children are vulnerable to trauma because of their unique anatomical and physiological differences compared with adult patients. Mortality rates and the post-traumatic healing process can be improved by rapid patient triage and transfer to the most appropriate trauma center in the prehospital period, management of critical patients with scoring systems, and management with standard treatment protocols.

Keywords: emergency service, pediatric patients, scoring systems, trauma, age groups

Procedia PDF Downloads 186
1430 Using Scilab® as a New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)

Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni

Abstract:

Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological development, professionals in all educational areas need to overcome the difficulties facing those who are starting their academic journey. With this aim, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation together with a detailed explanation of a fundamental numerical solution by the finite difference method, using SCILAB, a free software package that is easily accessible to any research center or university, anywhere, in both developed and developing countries. Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs faces difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is often treated only in postgraduate courses. This work aims to bring low-cost CFD into the teaching of Transport Phenomena at the undergraduate level by analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory of the partial differential equation governing the heat transfer problem, which students must master: the discretization process, based on Taylor series expansion, generates a system of equations whose convergence can be checked with the Sassenfeld criterion and which is finally solved by the Gauss-Seidel method.
In this work we demonstrate both simple problems solved manually and more complex problems that required computer implementation, for which we use a small algorithm of fewer than 200 lines in Scilab®: a heat transfer study of a rectangular plate heated on its four sides, with a different temperature on each side, producing a two-dimensional transport solution with colored graphical simulation. With the spread of computer technology, numerous programs have emerged that demand considerable programming skill from the researcher. Considering that this need to program is the main obstacle to be overcome in CFD, both by students and by researchers, we present in this article a suggestion for the use of programs with a less complex interface, reducing the difficulty of producing graphical modeling and simulation for CFD and extending programming experience to undergraduates.
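The workflow described above, a plate with four differently heated edges discretized by finite differences and solved by Gauss-Seidel, can be sketched in Python in place of Scilab. The grid size, tolerance, and edge temperatures are illustrative, not the article's case.

```python
def gauss_seidel_plate(n, top, bottom, left, right, tol=1e-6):
    """Solve the 2-D Laplace equation on an n x n grid with fixed edge temperatures."""
    T = [[0.0] * n for _ in range(n)]
    T[0] = [top] * n
    T[-1] = [bottom] * n
    for row in T:
        row[0], row[-1] = left, right
    while True:
        delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = 0.25 * (T[i-1][j] + T[i+1][j] + T[i][j-1] + T[i][j+1])
                delta = max(delta, abs(new - T[i][j]))
                T[i][j] = new                  # in-place update = Gauss-Seidel
        if delta < tol:
            return T

T = gauss_seidel_plate(9, top=100.0, bottom=0.0, left=50.0, right=25.0)
```

By superposition, the steady-state center temperature tends to the mean of the four edge temperatures, which makes a convenient sanity check for the iteration.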

Keywords: numerical methods, finite difference method, heat transfer, Scilab

Procedia PDF Downloads 363
1429 Predicting Loss of Containment in Surface Pipeline Using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard with which process safety management is concerned in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires, and even explosions when ignited by the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, accurately detecting loss of containment in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, and hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy.
While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
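The idea of training a leak classifier on simulated data can be sketched with a deliberately simplified stand-in: a toy feature generator in place of the CFD model, and a nearest-centroid rule in place of the authors' supervised model. All feature definitions and numbers below are invented for illustration.

```python
import random

random.seed(42)

def simulate(leak):
    """Toy generator: leaks raise the pressure drop and inlet/outlet flow imbalance."""
    dp = random.gauss(8.0 if leak else 2.0, 1.0)     # pressure drop (bar)
    imb = random.gauss(0.6 if leak else 0.05, 0.1)   # flow imbalance (m3/s)
    return (dp, imb)

def centroid(points):
    """Component-wise mean of a list of feature tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

# "Simulation data" used to train the detector
train_leak = [simulate(True) for _ in range(200)]
train_ok = [simulate(False) for _ in range(200)]
c_leak, c_ok = centroid(train_leak), centroid(train_ok)

def predict(x):
    """Classify a new state by the nearer class centroid (True = leak)."""
    d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return d(c_leak) < d(c_ok)

# Hold-out accuracy on freshly simulated states
acc = sum(predict(simulate(l)) == l for l in [True, False] * 100) / 200
```

The same train-on-simulation, test-on-held-out-simulation loop carries over directly when the toy generator is replaced by transient CFD runs.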

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 172
1428 Study of the Sloshing Phenomenon in a Tank Partially Filled with Liquid Using Computational Fluid Dynamics (CFD) Simulation

Authors: Amit Kumar, Jaikumar V., Pradeep A. G., Shivakumar Bhavi

Abstract:

Reducing sloshing is one of the major challenges in industries where the transport of liquid is involved. The present study investigates the sloshing effect for a liquid level of 50% of the tank capacity. CFD simulation of two different baffle configurations was carried out using a time-based multiphase Volume of Fluid (VOF) scheme. Baffles were introduced to examine the sloshing effect inside the tank. Results were compared against the baseline case to assess the effectiveness of the baffles; the maximum liquid height over the period of the simulation was considered the parameter for measuring the sloshing effect inside the tank. It was found that the addition of baffles reduced the sloshing effect inside the tank as compared to the baseline model.

Keywords: CFD, sloshing, VOF, multiphase

Procedia PDF Downloads 179
1427 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data is useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 540
1426 Track and Evaluate Cortical Responses Evoked by Electrical Stimulation

Authors: Kyosuke Kamada, Christoph Kapeller, Michael Jordan, Mostafa Mohammadpour, Christy Li, Christoph Guger

Abstract:

Cortico-cortical evoked potentials (CCEP) refer to responses generated by cortical electrical stimulation at distant brain sites. These responses provide insights into the functional networks associated with language or motor functions, and in the context of epilepsy, they can reveal pathological networks. Locating the origin and spread of seizures within the cortex is crucial for pre-surgical planning. This process can be enhanced by employing cortical stimulation at the seizure onset zone (SOZ), leading to the generation of CCEPs in remote brain regions that may be targeted for disconnection. In the case of a 24-year-old male patient suffering from intractable epilepsy, corpus callosotomy was performed as part of the treatment. DTI-MRI imaging, conducted using a 3T MRI scanner for fiber tracking, along with CCEP, is used as part of an assessment for surgical planning. Stimulation of the SOZ, with alternating monophasic pulses of 300µs duration and 15mA current intensity, resulted in CCEPs on the contralateral frontal cortex, reaching a peak amplitude of 206µV with a latency of 31ms, specifically in the left pars triangularis. The related fiber tracts were identified with a two-tensor unscented Kalman filter (UKF) technique, showing transversal fibers through the corpus callosum. The CCEPs were monitored through the progress of the surgery. Notably, the SOZ-associated CCEPs exhibited a reduction following the resection of the anterior portion of the corpus callosum, reaching the identified connecting fibers. This intervention demonstrated a potential strategy for mitigating the impact of intractable epilepsy through targeted disconnection of identified cortical regions.

Keywords: CCEP, SOZ, Corpus callosotomy, DTI

Procedia PDF Downloads 42
1425 Experimental and CFD Study of a Designed Small Wind Turbine

Authors: Tarek A. Mekail, Walid M. A. Elmagid

Abstract:

Much research has concentrated on improving the aerodynamic performance of wind turbine blades through testing and theoretical studies. A small wind turbine blade was designed, fabricated, and tested. The power performance of small horizontal-axis wind turbines is simulated in detail using Computational Fluid Dynamics (CFD). Three-dimensional CFD models are presented, using the ANSYS-CFX v13 software, for predicting the performance of a small horizontal-axis wind turbine. The simulation results are compared with experimental data measured on a small wind turbine model, which was designed according to a vehicle-based test system. The analysis of the wake effect and the aerodynamics of the blade can be carried out when the rotational effect is simulated. Finally, a comparison between experimental, numerical, and analytical performance has been made, and the agreement is fairly good.

Keywords: small wind turbine, CFD of wind turbine, CFD, performance of wind turbine, test of small wind turbine, wind turbine aerodynamic, 3D model

Procedia PDF Downloads 527
1424 Rare Differential Diagnostic Dilemma

Authors: Angelis P. Barlampas

Abstract:

Theoretical background: Disorders of fixation and rotation of the large intestine result in parts of it lying in ectopic anatomical positions. When symptomatic, the clinical picture is complicated by the possible symptomatology of the neighboring anatomical structures, and a differential diagnostic problem arises. Target: The purpose of this work is to demonstrate the difficulty of revealing the real cause of abdominal pain in cases of anatomical variants, and the decisive contribution of imaging, especially computed tomography. Methods: A patient came to the emergency room because of acute pain in the right hypochondrium. Clinical examination revealed tenderness in the gallbladder area and a positive Murphy's sign. An ultrasound exam depicted a normal gallbladder, and the patient was referred for a CT scan. Results: A flexible, unfixed ascending colon and cecum, located in the anatomical region of the right mesentery. Opacities of the surrounding peritoneal fat and a small linear collection of fluid were seen. The appendix was of normal anteroposterior diameter, with air in its lumen and without clear signs of inflammation. There was an impression of possible inflammatory swelling at the base of the appendix (DD: partial volume phenomenon, etc.). Linear opacities of the peritoneal fat were seen in the region of the second duodenal loop, along with multiple diverticula throughout the colon. Differential Diagnosis: The differential diagnosis includes the following: inflammation of the base of the appendix, diverticulitis of the cecum-ascending colon, a rare case of second duodenal loop ulcer, tuberculosis, terminal ileitis, pancreatitis, torsion of an unfixed cecum-ascending colon, and embolism or thrombosis of a vascular intestinal branch. Final Diagnosis: An unfixed cecum-ascending colon exhibiting diverticulitis.

Keywords: unfixed cecum-ascending colon, abdominal pain, malrotation, abdominal CT, congenital anomalies

Procedia PDF Downloads 41
1423 Further Analysis of Global Robust Stability of Neural Networks with Multiple Time Delays

Authors: Sabri Arik

Abstract:

In this paper, we study the global asymptotic robust stability of delayed neural networks with norm-bounded uncertainties. By employing Lyapunov stability theory and the homeomorphic mapping theorem, we derive some new types of sufficient conditions ensuring the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. An important aspect of our results is their low computational complexity, as they can be verified by checking some properties of symmetric matrices associated with the uncertainty sets of the network parameters. The obtained results are shown to be generalizations of some previously published corresponding results. Some comparative numerical examples are also constructed to compare our results with closely related results in the existing literature.
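The low computational complexity claimed above typically means the conditions reduce to verifying a property of a symmetric matrix, such as positive definiteness, by a short eigenvalue computation. A minimal sketch of such a check (the matrices here are illustrative placeholders, not the conditions derived in the paper):

```python
import numpy as np

def is_positive_definite(S):
    """Check positive definiteness of a symmetric matrix via its eigenvalues."""
    S = np.asarray(S, dtype=float)
    assert np.allclose(S, S.T), "matrix must be symmetric"
    # All eigenvalues strictly positive <=> S is positive definite.
    return bool(np.all(np.linalg.eigvalsh(S) > 0.0))
```

A stability condition of this form is verified in a few lines of linear algebra, with no search or optimization, which is what makes such criteria cheap to apply.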

Keywords: neural networks, delayed systems, Lyapunov functionals, stability analysis

Procedia PDF Downloads 510
1422 Radiation Skin Decontamination Formulation

Authors: Navneet Sharma, Himanshu Ojha, Dharam Pal Pathak, Rakesh Kumar Sharma

Abstract:

Radionuclide decontamination is an important task because every extra second of deposition leads to deleterious health effects. We developed and characterised a nanoemulsion of p-tertbutylcalix[4]arene using the phase inversion temperature (PIT) method and evaluated its decontamination efficacy (DE). The solubility of the drug was determined in various oils and surfactants. A nanoemulsion with an HLB value of 11, surfactants (10%, 7:3 w/w), oil (20%, w/w), and double-distilled water (70%) was selected. The formulation was characterised by multi-photon spectroscopy, and parameters like viscosity, droplet size distribution, zeta potential and stability were optimised. In vitro and ex vivo decontamination efficacy was evaluated against Technetium-99m, Iodine-131, and Thallium-201 as radio-contaminants applied over the skin of Sprague-Dawley rats and a human tissue equivalent model. Contaminants were removed using formulation-soaked cotton swabs at different time intervals, and whole-body imaging and static counts were recorded with a SPECT gamma camera before and after each decontamination attempt. Data were analysed using one-way analysis of variance (ANOVA), and the differences were found to be significant (p < 0.05). The DE of the nanoemulsion loaded with p-tertbutylcalix[4]arene was compared with placebo and recorded as 88 ± 5%, 90 ± 3% and 89 ± 3% for 99mTc, 131I and 201Tl, respectively. An ex vivo complexation study of the p-tertbutylcalix[4]arene nanoemulsion with surrogate nuclides of radioactive thallium and iodine was performed on rat skin mounted on a Franz diffusion cell using high-resolution sector field inductively coupled plasma mass spectrometry (HR-SF-ICPMS). More than 90% complexation of the formulation with these nuclides was observed. The results demonstrate that the prepared nanoemulsion formulation is efficacious for the decontamination of radionuclides and suitable for use on a large contaminated population.

Keywords: p-tertbutylcalix[4]arene, skin decontamination, radiological emergencies, nanoemulsion, iodine-131, thallium-201

Procedia PDF Downloads 383
1421 Comparison of Computed Tomography Dose Index, Dose Length Product and Effective Dose Among Male and Female Patients From Contrast Enhanced Computed Tomography Pancreatitis Protocol

Authors: Babina Aryal

Abstract:

Background: The diagnosis of pancreatitis is generally based on clinical and laboratory findings; however, Computed Tomography (CT) is the imaging technique of choice. In particular, Contrast Enhanced Computed Tomography (CECT), performed with the administration of an appropriate contrast medium, shows the characteristic morphological findings that allow the diagnosis of pancreatitis to be established and the extent of disease severity to be determined. The purpose of this study was to compare the Computed Tomography Dose Index (CTDI), Dose Length Product (DLP) and Effective Dose (ED) between male and female patients undergoing the CECT pancreatitis protocol. Methods: This retrospective study collected data from patients with a clinical, laboratory or ultrasonography diagnosis of pancreatitis who had undergone the CECT abdomen pancreatitis protocol. Data collection included each patient's age and gender, clinical history, individual CTDI, DLP and effective dose. Results: We retrospectively collected dose data from 150 patients, of whom 127 were male and 23 were female. The values read from the CT console display were measured, calculated and compared to determine whether the CTDI, DLP and ED values were similar. CTDI was higher for females than for males; the CTDI values for females and males were 32.2087 and 37.1609, respectively. DLP values and effective dose did not show significant differences between the genders. Conclusion: This study concluded that there were no significant differences in the DLP and ED values between the genders; however, we noticed that female patients had a higher CTDI than males.
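The abstract does not state which statistical test underlies the male-female comparison; one minimal way to sketch such a comparison is Welch's t statistic, which does not assume equal group sizes or variances (matching the unbalanced 127-vs-23 split). The statistic alone is shown; the input samples in the usage test are synthetic placeholders, not the study's data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal sizes
    and variances. A value near zero suggests similar group means."""
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se2 ** 0.5
```

The resulting statistic would then be compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain a p-value for the gender difference.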

Keywords: computed tomography, contrast enhanced computed tomography, computed tomography dose index, dose length product, effective dose

Procedia PDF Downloads 96
1420 The Effect of Reaction Time on the Morphology and Phase of Quaternary Ferrite Nanoparticles (FeCoCrO₄) Synthesised from a Single Source Precursor

Authors: Khadijat Olabisi Abdulwahab, Mohammad Azad Malik, Paul O'Brien, Grigore Timco, Floriana Tuna

Abstract:

The synthesis of spinel ferrite nanoparticles with a narrow size distribution is crucial for their numerous applications, including information storage, hyperthermia treatment, drug delivery, contrast agents in magnetic resonance imaging, catalysis, sensors, and environmental remediation. Ferrites have the general formula MFe₂O₄ (M = Fe, Co, Mn, Ni, Zn, etc.) and possess remarkable electrical and magnetic properties which depend on the cations, the method of preparation, the size and the site occupancies. To the best of our knowledge, there are no reports on the use of a single source precursor to synthesise quaternary ferrite nanoparticles. Herein, we demonstrate the use of the trimetallic iron pivalate cluster [CrCoFeO(O₂CᵗBu)₆(HO₂CᵗBu)₃] as a single source precursor to synthesise monodisperse cobalt chromium ferrite (FeCoCrO₄) nanoparticles by the hot injection thermolysis method. The precursor was thermolysed in oleylamine and oleic acid, with diphenyl ether as the solvent, at 260 °C. The effect of reaction time on the stoichiometry, phases and morphology of the nanoparticles was studied. The p-XRD pattern of the nanoparticles obtained after one hour corresponded to a pure cubic phase of iron cobalt chromium ferrite (FeCoCrO₄). TEM showed that more monodisperse spherical ferrite nanoparticles were obtained after one hour. Magnetic measurements revealed that the ferrite particles are superparamagnetic at room temperature. The nanoparticles were characterised by Powder X-ray Diffraction (p-XRD), Transmission Electron Microscopy (TEM), Energy Dispersive Spectroscopy (EDS) and a Superconducting Quantum Interference Device (SQUID).

Keywords: cobalt chromium ferrite, colloidal, hot injection thermolysis, monodisperse, reaction time, single source precursor, quaternary ferrite nanoparticles

Procedia PDF Downloads 291
1419 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data

Authors: Fan Gao, Lior Pachter

Abstract:

The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We used scATAK to explore the chromatin regulatory landscape of a healthy adult human brain, unveiling cell-type-specific features, and show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.

Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome

Procedia PDF Downloads 144
1418 Pneumoperitoneum Creation Assisted with Optical Coherence Tomography and Automatic Identification

Authors: Eric Yi-Hsiu Huang, Meng-Chun Kao, Wen-Chuan Kuo

Abstract:

For every laparoscopic surgery, safe pneumoperitoneum creation (gaining access to the peritoneal cavity) is the first and essential step. However, closed pneumoperitoneum is usually obtained by blind insertion of a Veress needle into the peritoneal cavity, which may carry potential risks such as bowel and vascular injury. Until now, there has been no definite measure to visually confirm the position of the needle tip inside the peritoneal cavity. Therefore, this study established an image-guided Veress needle method by combining a fiber probe with optical coherence tomography (OCT). An algorithm was also proposed for determining the exact location of the needle tip through the acquisition of OCT images. Our method not only generates a series of “live” two-dimensional (2D) images during the needle puncture toward the peritoneal cavity but also eliminates operator variation in image judgment, thus improving peritoneal access safety. This study was approved by the Ethics Committee of Taipei Veterans General Hospital (Taipei VGH IACUC 2020-144). A total of 2400 in vivo OCT images, independent of each other, were acquired from experiments of forty peritoneal punctures on two piglets. Characteristic OCT image patterns could be observed during the puncturing process. The ROC curve demonstrates the discrimination capability of the classifier built on these quantitative image features: the accuracy of the classifier for determining inside vs. outside of the peritoneal cavity was 98% (AUC = 0.98). In summary, the present study demonstrates the ability of the combination of our proposed automatic identification method and OCT imaging to automatically and objectively identify the location of the needle tip. OCT images translate the blind closed technique of peritoneal access into a visualized procedure, thus improving peritoneal access safety.
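The reported AUC of 0.98 has a standard rank interpretation: the probability that a randomly chosen inside-the-cavity score exceeds a randomly chosen outside score. A minimal sketch of that computation, with made-up classifier scores rather than the study's image features:

```python
def roc_auc(pos, neg):
    """Area under the ROC curve via its rank interpretation: the fraction of
    (positive, negative) score pairs where the positive score is higher,
    with ties counting as one half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the two classes (here, needle tip inside vs. outside the peritoneal cavity) are perfectly separated by the score; 0.5 means the score carries no discriminating information.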

Keywords: pneumoperitoneum, optical coherence tomography, automatic identification, Veress needle

Procedia PDF Downloads 111
1417 Analysis of Moving Loads on Bridges Using Surrogate Models

Authors: Susmita Panda, Arnab Banerjee, Ajinkya Baxy, Bappaditya Manna

Abstract:

The design of short- to medium-span high-speed bridges in critical locations is an essential aspect of vehicle-bridge interaction. Due to the dynamic interaction between the moving load and the bridge, mathematical models or finite element computations become time-consuming. Thus, to reduce the computational effort, a universal approximator using an artificial neural network (ANN) has been used to evaluate the dynamic response of the bridge. The data set generation and training of the surrogate models were conducted on the results obtained from mathematical modeling. Further, the robustness of the surrogate model was investigated, showing an error of less than 10% relative to conventional methods. Additionally, the dependency of the dynamic response of the bridge on various load and bridge parameters has been highlighted through a parametric study.
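The surrogate idea can be sketched in miniature: train a small network on samples of an expensive model, then query the cheap network instead. Nothing below is taken from the paper; the "bridge response" is a synthetic stand-in for the vehicle-bridge computation, and the one-hidden-layer network is illustrative, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the expensive vehicle-bridge model:
# a response as a function of two inputs (e.g. speed, load). Illustrative only.
def bridge_response(x):
    return np.sin(3.0 * x[:, 0]) * x[:, 1]

X = rng.uniform(0.0, 1.0, size=(200, 2))
y = bridge_response(X)[:, None]

# One-hidden-layer surrogate trained by plain gradient descent on MSE.
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
err_before = float(np.mean((pred - y) ** 2))

lr = 0.1
for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)        # dMSE/dpred
    gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

_, pred = forward(X)
err_after = float(np.mean((pred - y) ** 2))
```

Once trained, evaluating the surrogate is a couple of matrix products, which is the source of the computational savings over re-running the full dynamic model for each parameter combination.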

Keywords: artificial neural network, mode superposition method, moving load analysis, surrogate models

Procedia PDF Downloads 85
1416 Additive Manufacturing of Overhangs: From Temporary Supports to Self-Support

Authors: Paulo Mendonca, Nzar Faiq Naqeshbandi

Abstract:

The objective of this study is to propose an interactive design environment that outlines the underlying computational framework for reaching self-supporting overhangs. The research demonstrates the digital printability of overhangs, taking into consideration factors related to the geometry design, the material used, the applied support, and the printing set-up (slicing and extruder inclination). Parametric design tools can contribute to the design phase, form-finding, and stability optimization of self-supporting structures during printing, in order to hold the components in place until they are sufficiently advanced to support themselves. The challenge is to ensure the stability of the printed parts at the critical inclinations throughout the whole fabrication process. Facilitating the identification of the relevant parameters will make it possible to predict and optimize the process. Finally, in light of these findings, guidelines are given for simulations and physical tests to be conducted to estimate the structural and functional performance.
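A minimal example of the kind of printability check such a framework performs is the classic overhang-angle test on facet normals. The 45° threshold is the common FDM rule of thumb, assumed here for illustration rather than taken from the paper:

```python
import math

def needs_support(normal, max_overhang_deg=45.0):
    """True if a downward-facing facet exceeds the printer's overhang limit.

    `normal` is the facet's outward normal (any length); the build direction
    is +z. The 45-degree default is the usual FDM rule of thumb, not a value
    from the paper.
    """
    nx, ny, nz = normal
    if nz >= 0.0:                         # upward- or side-facing: no support
        return False
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    # Angle between the facet normal and the straight-down direction:
    # 0 deg = flat ceiling, 90 deg = vertical wall.
    angle_from_down = math.degrees(math.acos(-nz / norm))
    return angle_from_down < 90.0 - max_overhang_deg
```

Iterating this test over the facets of a sliced mesh gives the set of regions that either need temporary supports or must be re-parameterized until they become self-supporting.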

Keywords: additive manufacturing, overhangs, self-supporting overhangs, printability, parametric tools

Procedia PDF Downloads 105
1415 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5-2.5 min SPECT image minus 5-10 min SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 min SPECT image minus liver-only image). Time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, uptake in the inferior myocardium overlapped by the liver could not be assessed reliably. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
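The subtraction arithmetic described above can be sketched on synthetic data. The 1-D count profiles below stand in for SPECT images (indices 0-2 a "liver" region, 3-5 an "inferior myocardium" region); the values are invented for illustration and are not from the study:

```python
import numpy as np

# Synthetic count profiles standing in for SPECT images. Illustrative only.
early = np.array([12., 11., 10., 3., 3., 2.])   # 0.5-2.5 min: liver dominates
late  = np.array([5.,  4.,  4.,  8., 9., 8.])   # 5-10 min: liver + myocardium

# Liver-only estimate: early frame minus late frame, clipped at zero.
liver_only = np.clip(early - late, 0.0, None)

# Liver-suppressed image: late frame minus the liver-only estimate.
corrected = np.clip(late - liver_only, 0.0, None)
```

On these toy profiles the liver region is zeroed out while the myocardial counts pass through unchanged, which is the intended effect of the time subtraction on the inferior wall.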

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector

Procedia PDF Downloads 315
1414 Radar Signal Detection Using Neural Networks in Log-Normal Clutter for Multiple Targets Situations

Authors: Boudemagh Naime

Abstract:

Automatic radar detection requires methods of adapting to variations in the background clutter in order to control the false alarm rate. The problem becomes more complicated in a non-Gaussian environment. In fact, the conventional approach in real-time applications requires complex statistical modeling and many computational operations. To overcome these constraints, we propose another approach based on an artificial neural network (ANN-CMLD-CFAR) using a back-propagation (BP) training algorithm. The considered environment follows a log-normal distribution in the presence of multiple Rayleigh targets. To evaluate the performance of the considered detector, several factors, such as the scale parameter and the number of interfering targets, have been investigated. The simulation results show that the ANN-CMLD-CFAR processor outperforms the conventional statistical one.
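For context, the conventional detectors that such ANN approaches are compared against belong to the CFAR family. A minimal textbook cell-averaging CFAR, shown here purely as the conventional baseline and not as the paper's ANN-CMLD-CFAR, operating on a plain Python list of power samples:

```python
def ca_cfar(signal, guard=1, train=4, scale=3.0):
    """Textbook cell-averaging CFAR: each cell under test (CUT) is compared
    against `scale` times the mean of `train` training cells on each side,
    skipping `guard` guard cells. Returns indices declared as detections.
    The parameter values are illustrative, not from the paper."""
    half = guard + train
    hits = []
    for i in range(half, len(signal) - half):
        left = signal[i - half: i - guard]            # training cells, left
        right = signal[i + guard + 1: i + half + 1]   # training cells, right
        noise = sum(left + right) / (2 * train)       # local noise estimate
        if signal[i] > scale * noise:
            hits.append(i)
    return hits
```

Because the threshold tracks the local noise estimate, the false alarm rate stays roughly constant as the clutter level varies; the difficulty the abstract addresses is that in log-normal clutter with interfering targets this simple average is a poor noise estimate, motivating censored (CMLD) and learned alternatives.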

Keywords: radar detection, ANN-CMLD-CFAR, log-normal clutter, statistical modeling

Procedia PDF Downloads 348
1413 The Implementation of the Secant Method for Finding the Root of an Interpolation Function

Authors: Nur Rokhman

Abstract:

A mathematical function gives the relationship between the variables composing it. Interpolation can be viewed as a process of finding a mathematical function which passes through some specified points. There are many interpolation methods, namely the Lagrange method, the Newton method, the spline method, etc. Under some specific conditions, such as a large number of interpolation points, the interpolation function cannot be written explicitly; such a function consists of computational steps. Solving an equation involving the interpolation function is therefore a problem of solving a nonlinear equation. Newton's method will not work on the interpolation function, because the derivative of the interpolation function cannot be written explicitly. This paper shows the use of the secant method to determine the numerical solution of equations involving the interpolation function. The experiment shows that the secant method works better than Newton's method in finding the root of the Lagrange interpolation function.
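The procedure can be sketched directly: build the Lagrange interpolant as a black-box callable, then apply the secant update, which needs only function values and no derivative. The sample points below are illustrative (they lie on x² - 2, so the interpolant's root is √2):

```python
def lagrange(xs, ys):
    """Return the Lagrange interpolation polynomial through (xs[i], ys[i])
    as a plain callable -- it never needs to be written out explicitly."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)   # Lagrange basis factor
            total += term
        return total
    return p

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: derivative-free root finding, so it applies to
    functions like the interpolant above whose derivative is unavailable."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-300:          # flat secant line: stop
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1
```

Replacing the derivative in Newton's update with the finite difference (f1 - f0)/(x1 - x0) is exactly what makes the method applicable here, at the cost of slightly slower (superlinear rather than quadratic) convergence.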

Keywords: secant method, interpolation, nonlinear function, numerical solution

Procedia PDF Downloads 364