Search results for: discrete tomography
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1055

785 Anterior Segment Optical Coherence Tomography Study of Cornea and Tear Film Parameters in Juvenile Systemic Lupus Erythematosus Patients

Authors: Mohamed Salah El-Din Mahmoud, Ahmed Hamed, Asmaa Anwar Mohamed

Abstract:

Purpose: To study the tear film parameters, total corneal thickness (CT), corneal epithelial thickness, and corneal power in juvenile systemic lupus erythematosus (JSLE) patients compared to age-matched controls using anterior segment optical coherence tomography (AS-OCT). Methods: This was a cross-sectional study. Study participants were divided into two groups: Group A, 75 eyes of JSLE patients; Group B, 75 eyes of healthy controls. Tear meniscus height (TMH), tear meniscus depth (TMD), and tear meniscus area (TMA) were the lower tear meniscus parameters measured. The corneal power, CT, and epithelial thickness were all determined automatically. Results: The age range was 10 to 15 years in the JSLE group and 11 to 16 years in the control group. TMH, TMA, and TMD were 527.7±46.8, 0.059±0.015 and 343.3±59.9, respectively, in the JSLE group and 525.4±44.6, 0.058±0.011 and 340.6±58.0, respectively, in the control group, without significant difference (p-value<0.001). The corneal power was 43.3±0.55 in the JSLE group and 43.2±0.54 in the control group, without significant difference (p-value = 0.407). CT was 551.1±13.5 in the JSLE group and 551.2±15.3 in the control group, without significant difference (p-value = 0.982). Epithelial thickness was 52.66±1.35 in the JSLE group and 52.60±1.36 in the control group, without significant difference (p-value = 0.765). Conclusion: We demonstrated no significant difference in tear meniscus dimensions, CT, epithelial thickness, or corneal power in JSLE patients compared to age-matched controls using AS-OCT.

Keywords: tear film, ASOCT, JSLE, pachymetry, corneal thickness

Procedia PDF Downloads 133
784 Computational Fluid Dynamics (CFD) Simulation of Transient Flow in a Rectangular Bubble Column Using a Coupled Discrete Phase Model (DPM) and Volume of Fluid (VOF) Model

Authors: Sonia Besbes, Mahmoud El Hajem, Habib Ben Aissia, Jean Yves Champagne, Jacques Jay

Abstract:

In this work, we present a computational study for the characterization of the flow in a rectangular bubble column. To simulate the dynamic characteristics of the flow, three-dimensional transient numerical simulations based on a coupled discrete phase model (DPM) and volume of fluid (VOF) model are performed. Modeling of bubble column reactors is often carried out under the assumption of a flat liquid surface with a degassing boundary condition. However, the dynamic behavior of the top surface surmounting the liquid phase will to some extent influence the meandering oscillations of the bubble plume. It is therefore important to capture the surface behavior, and the assumption of a flat surface may not be applicable. The modeling approach thus needs to account for a dynamic liquid surface induced by the rising bubble plume. The volume of fluid (VOF) model was applied to the liquid and the top gas, which both interact with bubbles implemented with a discrete phase model. This model treats the bubbles as Lagrangian particles and the liquid and the top gas as Eulerian phases with a sharp interface. Two-way coupling between the Eulerian phases and the Lagrangian bubbles is accounted for in a single set of continuous-phase momentum equations for the mixture of the two Eulerian phases. The effect of gas flow rate on the dynamic and time-averaged flow properties was studied. The time-averaged liquid velocity fields predicted from the simulations and from our previous PIV measurements show that the liquid is entrained upward in the wake of the bubbles and flows downward near the walls. The simulated and measured vertical velocity profiles exhibit reasonable agreement in terms of the minimum velocity values near the walls and the maximum values at the column center.
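To make the Lagrangian (DPM) side of such a coupling concrete, the sketch below integrates the motion of a single gas bubble rising through quiescent water under buoyancy and Schiller-Naumann drag, with added mass lumped into the inertia. It is a minimal illustration of particle tracking only, not the authors' two-way-coupled DPM-VOF solver, and the bubble size, densities and time step are assumed values.

```python
# Minimal one-way-coupled Lagrangian bubble tracking sketch (illustrative only).
# A single air bubble rises through quiescent water under buoyancy and
# Schiller-Naumann drag; added mass is lumped into the inertia term.
import numpy as np

rho_l, rho_b = 998.0, 1.2          # liquid / gas density [kg/m^3]
mu_l = 1.0e-3                      # liquid dynamic viscosity [Pa.s]
d = 2.0e-3                         # bubble diameter [m]
g = np.array([0.0, -9.81, 0.0])    # gravity [m/s^2]

V = np.pi * d**3 / 6.0             # bubble volume
m_eff = (rho_b + 0.5 * rho_l) * V  # bubble mass + added-mass contribution

def drag_coefficient(re):
    """Schiller-Naumann correlation, capped for the Newton regime."""
    return 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44

x = np.zeros(3)                    # position [m]
v = np.zeros(3)                    # bubble velocity [m/s]
u = np.zeros(3)                    # local liquid velocity (quiescent here)
dt = 1.0e-4

for step in range(20000):          # 2 s of physical time
    u_rel = u - v
    re = max(rho_l * np.linalg.norm(u_rel) * d / mu_l, 1.0e-6)
    f_drag = 0.5 * rho_l * drag_coefficient(re) * (np.pi * d**2 / 4.0) \
             * np.linalg.norm(u_rel) * u_rel
    f_buoy = (rho_b - rho_l) * V * g    # buoyancy and weight combined
    v += dt * (f_drag + f_buoy) / m_eff
    x += dt * v

print(f"terminal rise velocity ~ {v[1]:.3f} m/s after {(step + 1) * dt:.1f} s")
```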

Keywords: bubble column, computational fluid dynamics (CFD), coupled DPM and VOF model, hydrodynamics

Procedia PDF Downloads 346
783 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a cause of irreversible blindness; early diagnosis and appropriate interventions allow patients to preserve their vision for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure around the eyes damages the optic nerve and causes deterioration of vision. The disease progresses through different levels of severity, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. With its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties and mistakes still occur in the diagnosis of glaucoma, especially in the early stages, and it is difficult for doctors to obtain objective and consistent results during diagnosis. It therefore seems very important to develop an objective decision support system to diagnose and grade glaucoma for patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for doctors. Pattern-recognition-based computer software would help doctors make an objective evaluation of their patients. It is intended that, after the development and evaluation of the software, the system will serve doctors in different hospitals.

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 255
782 A Simulation Study on the Applicability of Overbooking Strategies in Inland Container Transport

Authors: S. Fazi, B. Behdani

Abstract:

The inland transportation of maritime containers entails the use of different modalities whose capacity is typically booked in advance. Containers may miss their scheduled departure time at a terminal for several reasons, such as delays, changes of transport mode, or pending multiple bookings. In those cases, it may be difficult for transport service providers to find last-minute containers to fill the vacant capacity. As in other industries, overbooking could potentially limit these drawbacks, at the cost of a lower service level in case of an actual excess of capacity on overbooked rides. However, the presence of multiple modalities may provide the required flexibility in rescheduling and limit the dissatisfaction of shippers whose containers are overbooked. This flexibility is known by the term 'synchromodality'. In this paper, we evaluate the application of overbooking via discrete event simulation. Results show that in certain conditions overbooking can significantly increase the profit and utilization of high-capacity means of transport, such as barges and trains. On the other hand, in case of high penalty costs and limited no-shows, overbooking may lead to an excessive use of expensive trucks.
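The trade-off between filling vacant capacity and paying bumping penalties can be illustrated with a much simpler Monte-Carlo sketch than the paper's simulation model. Capacity, fares, penalty cost and no-show probability below are hypothetical placeholders, not the study's parameters.

```python
# Minimal Monte-Carlo sketch of barge overbooking with no-shows (illustrative).
# Parameters (capacity, fare, penalty, no-show rate) are hypothetical.
import random

CAPACITY = 100          # TEU slots on the barge
FARE = 120.0            # revenue per container carried
PENALTY = 300.0         # cost of re-routing one bumped container by truck
NO_SHOW_P = 0.10        # probability a booked container misses departure

def simulate(booking_limit, runs=20000, seed=1):
    rng = random.Random(seed)
    profit = used = 0.0
    for _ in range(runs):
        shows = sum(rng.random() > NO_SHOW_P for _ in range(booking_limit))
        carried = min(shows, CAPACITY)
        bumped = max(shows - CAPACITY, 0)
        profit += carried * FARE - bumped * PENALTY
        used += carried / CAPACITY
    return profit / runs, used / runs

for limit in (100, 105, 110, 115):
    p, u = simulate(limit)
    print(f"booking limit {limit}: mean profit {p:8.1f}, utilisation {u:.2%}")
```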

Keywords: discrete event simulation, flexibility, inland shipping, multimodality, overbooking

Procedia PDF Downloads 97
781 A Simple and Efficient Method for Accurate Measurement and Control of Power Frequency Deviation

Authors: S. J. Arif

Abstract:

A simple method is presented for the accurate measurement and control of power frequency deviation. The sinusoidal signal whose frequency deviation is to be measured is transformed to a low voltage level and passed through a zero crossing detector to convert it into a pulse train. Another stable square wave signal of 10 kHz is obtained using a crystal oscillator and decade dividing assemblies (DDA). These signals are combined digitally and then passed through decade counters to give a unique combination of pulses or levels, which are further encoded to make them equally suitable for both control applications and display units. The developed circuit using discrete components has a resolution of 0.5 Hz and completes a measurement within 20 ms. The realized circuit is simulated and synthesized using Verilog HDL and subsequently implemented on an FPGA. The results of the measurement on the FPGA are observed on a very high resolution logic analyzer. These results accurately match the simulation results as well as the results of the same circuit implemented with discrete components. The proposed system is suitable for accurate measurement and control of power frequency deviation.
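A software analogue of the underlying idea, counting 10 kHz reference ticks between zero crossings of the mains signal and converting the cycle length into a deviation from 50 Hz, is sketched below. It is only an illustration of the principle; the paper's circuit achieves its 0.5 Hz resolution and 20 ms measurement time with a different, hardware-encoded pulse-combination scheme, and the observation window used here is an assumption.

```python
# Software sketch of the zero-crossing counting idea behind the measurement
# (not the authors' hardware design): a 10 kHz reference clock counts the
# samples between rising zero crossings of the mains signal, from which the
# frequency deviation from the nominal 50 Hz is derived.
import numpy as np

F_REF = 10_000.0                 # reference clock / sampling rate [Hz]
F_NOMINAL = 50.0                 # nominal power frequency [Hz]

def frequency_deviation(signal):
    """Estimate frequency deviation from rising zero crossings."""
    rising = np.where((signal[:-1] < 0.0) & (signal[1:] >= 0.0))[0]
    if len(rising) < 2:
        return None
    samples_per_cycle = np.diff(rising).mean()   # reference ticks per cycle
    f_measured = F_REF / samples_per_cycle
    return f_measured - F_NOMINAL

t = np.arange(0.0, 0.2, 1.0 / F_REF)             # 200 ms of signal
test = np.sin(2.0 * np.pi * 49.5 * t)            # mains running 0.5 Hz low
print(f"estimated deviation: {frequency_deviation(test):+.2f} Hz")
```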

Keywords: digital encoder for frequency measurement, frequency deviation measurement, measurement and control systems, power systems

Procedia PDF Downloads 342
780 Modeling and Implementation of a Hierarchical Safety Controller for Human Machine Collaboration

Authors: Damtew Samson Zerihun

Abstract:

This paper primarily describes the concept of a hierarchical safety control (HSC) in discrete manufacturing to uphold productivity under human intervention and machine failures using a systematic approach, by increasing system availability and using additional knowledge about the machines so as to improve human machine collaboration (HMC). It also highlights the implemented PLC safety algorithm, applying this generic concept to a concrete production line using a lab demonstrator called FATIE (Factory Automation Test and Integration Environment). Furthermore, the paper describes a model and provides a systematic representation of human-machine collaboration in discrete manufacturing; to this end, the hierarchical safety control concept is proposed. This offers a generic description of human-machine collaboration based on finite state machines (FSM) that can be applied to various discrete manufacturing lines instead of using ad-hoc solutions for each line. With its reusability, flexibility, and extendibility, the hierarchical safety control scheme allows productivity to be upheld while maintaining safety with reduced engineering effort compared to existing solutions. The approach begins with a partitioning of different zones around the Integrated Manufacturing System (IMS), defined by the operator tasks and the risk assessment, which are used to describe the location of the human operator, to identify the related potential hazards, and to trigger the corresponding safety functions to mitigate them. This includes selective reduced speed zones and stop zones; in addition, within the hierarchical safety control scheme, advanced safety functions such as safe standstill and safe reduced speed are used to achieve the main goals of improving safe human machine collaboration and increasing productivity. In a sample scenario, it is shown that an increase of productivity in the order of 2.5% is already possible with a hierarchical safety control and that, consequently, under the given assumptions, a total of 213 € could be saved for each intervention compared to a protective stop reaction. Thereby the loss is reduced by 22.8% if an occasional hazard can be mitigated in a hierarchical way. Furthermore, production downtime due to temporary unavailability of safety devices can be avoided with a safety failover that can save millions per year. Moreover, the paper highlights the development, implementation and application of the concept on the lab demonstrator (FATIE), where it is realized on the new safety PLCs, drive units, HMI as well as safety devices in addition to the main components of the IMS.
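The zone-to-reaction mapping at the heart of such a scheme can be sketched in a few lines. The zones, safety functions and failover rule below are hypothetical simplifications for illustration, not the FATIE PLC implementation described in the paper.

```python
# Minimal sketch of the zone-based part of a hierarchical safety controller:
# the operator's zone selects the least restrictive safety function that still
# mitigates the associated hazard; if the safety devices are unavailable, the
# controller fails over to a protective stop.
from enum import Enum

class Zone(Enum):
    OUTSIDE = 0          # no interaction with the machine
    MONITORED = 1        # operator near the cell
    COLLABORATIVE = 2    # operator inside a reduced-speed area
    HAZARD_POINT = 3     # operator at the hazardous point

class SafetyFunction(Enum):
    NORMAL_OPERATION = 0
    SAFE_REDUCED_SPEED = 1
    SAFE_STANDSTILL = 2
    PROTECTIVE_STOP = 3

# Hierarchical mapping: more critical zones trigger more restrictive reactions.
REACTION = {
    Zone.OUTSIDE: SafetyFunction.NORMAL_OPERATION,
    Zone.MONITORED: SafetyFunction.NORMAL_OPERATION,
    Zone.COLLABORATIVE: SafetyFunction.SAFE_REDUCED_SPEED,
    Zone.HAZARD_POINT: SafetyFunction.SAFE_STANDSTILL,
}

def safety_reaction(zone: Zone, safety_devices_ok: bool) -> SafetyFunction:
    """Fail over to a protective stop if the safety devices are unavailable."""
    if not safety_devices_ok:
        return SafetyFunction.PROTECTIVE_STOP
    return REACTION[zone]

if __name__ == "__main__":
    for zone in Zone:
        print(zone.name, "->", safety_reaction(zone, safety_devices_ok=True).name)
```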

Keywords: discrete automation, hierarchical safety controller, human machine collaboration, programmable logical controller

Procedia PDF Downloads 342
779 Iterative Method for Lung Tumor Localization in 4D CT

Authors: Sarah K. Hagi, Majdi Alnowaimi

Abstract:

In the last decade, there have been immense advancements in medical imaging modalities. These advancements make it possible to scan the whole volume of the lung in high-resolution images within a short time. With this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. These advancements therefore offer large opportunities to further advance all available types of lung cancer treatment and will increase the survival rate. However, lung cancer is still one of the major causes of death, affecting around 19% of all cancer patients. Several factors may affect the survival rate. One of the serious effects is the breathing process, which can affect the accuracy of diagnosis and of the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor position across all respiratory phases during respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) using an active contours method. Then, the 3D tumor position is localized across all subsequent phases using a 12-degree-of-freedom affine transformation. Two data sets were used in this study: a computer-simulated 4D CT based on the extended cardiac-torso (XCAT) phantom and clinical 4D CT data sets. The results and error calculation are presented as root mean square error (RMSE). The average error over the data sets is 0.94 mm ± 0.36. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is presented. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
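The 12-degree-of-freedom affine transform mentioned above is simply a 3×3 matrix (9 parameters) plus a translation vector (3 parameters) applied to 3D coordinates. The sketch below applies such a transform to a couple of tumor points; the matrix, translation and coordinates are invented for illustration, whereas in the paper the transform is estimated between 4D CT phases.

```python
# Sketch of a 12-DOF affine mapping x' = A @ x + t carrying tumor points from
# phase 0 to a later breathing phase. All numeric values are hypothetical.
import numpy as np

def apply_affine(points, A, t):
    """Map Nx3 points with x' = A @ x + t (A: 3x3 matrix, t: 3-vector)."""
    return points @ A.T + t

# Hypothetical transform: slight anisotropic scaling/shear plus a 6 mm
# cranio-caudal shift, a typical order of magnitude for respiratory motion.
A = np.array([[1.00, 0.02, 0.00],
              [0.00, 0.98, 0.01],
              [0.00, 0.00, 1.03]])
t = np.array([0.5, -1.0, 6.0])                         # mm

tumor_points_phase0 = np.array([[102.0, 88.0, 41.0],
                                [103.0, 89.0, 41.5]])  # mm coordinates
tumor_points_phase3 = apply_affine(tumor_points_phase0, A, t)

centroid_shift = tumor_points_phase3.mean(0) - tumor_points_phase0.mean(0)
displacement = np.linalg.norm(tumor_points_phase3 - tumor_points_phase0, axis=1)
print("centroid displacement [mm]:", np.round(centroid_shift, 2))
print("per-point displacement [mm]:", np.round(displacement, 2))
```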

Keywords: automated algorithm, computed tomography, lung tumor, tumor localization

Procedia PDF Downloads 571
778 Control of an SIR Model for Basic Reproduction Number Regulation

Authors: Enrique Barbieri

Abstract:

The basic disease-spread model described by three states denoting the susceptible (S), infectious (I), and removed (recovered and deceased) (R) sub-groups of the total population N, or SIR model, has been considered. Heuristic mitigating action profiles of the pharmaceutical and non-pharmaceutical types may be developed in a control design setting for the purpose of reducing the transmission rate or improving the recovery rate parameters in the model. Even though the transmission and recovery rates are not control inputs in the traditional sense, a linear observer and feedback controller can be tuned to generate an asymptotic estimate of the transmission rate for a linearized, discrete-time version of the SIR model. Then, a set of mitigating actions is suggested to steer the basic reproduction number toward unity, in which case the disease does not spread, and the infected population state does not suffer from multiple waves. The special case of piecewise constant transmission rate is described and applied to a seventh-order SEIQRDP model, which segments the population into four additional states. The offline simulations in discrete time may be used to produce heuristic policies implemented by public health and government organizations.
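A minimal discrete-time SIR sketch helps show the quantity being regulated: with transmission rate beta and recovery rate gamma, R0 = beta/gamma, and a mitigation factor that scales beta steers the effective reproduction number toward 1. This is only an illustration of the plant model; it is not the paper's observer or feedback design, and all parameter values are made up.

```python
# Minimal discrete-time SIR sketch (not the paper's observer/controller):
# R0 = beta/gamma, and a mitigation factor m scaling beta steers the
# effective reproduction number toward 1. Parameter values are illustrative.
import numpy as np

def simulate_sir(beta, gamma, mitigation=1.0, N=1_000_000, I0=100, days=300):
    S, I, R = N - I0, float(I0), 0.0
    history = []
    for _ in range(days):                     # daily (discrete-time) updates
        b = beta * mitigation                 # mitigated transmission rate
        new_inf = b * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append(I)
    return np.array(history)

beta, gamma = 0.30, 0.10                      # R0 = 3.0 without mitigation
for m in (1.0, 0.5, 1.0 / 3.0):               # the last case drives R0 to 1
    peak = simulate_sir(beta, gamma, mitigation=m).max()
    print(f"mitigation {m:.2f}: R0_eff = {beta * m / gamma:.2f}, "
          f"peak infected = {peak:,.0f}")
```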

Keywords: control of SIR, observer, SEIQRDP, disease spread

Procedia PDF Downloads 65
777 Evaluation of 18F Fluorodeoxyglucose Positron Emission Tomography, MRI, and Ultrasound in the Assessment of Axillary Lymph Node Metastases in Patients with Early Stage Breast Cancer

Authors: Wooseok Byon, Eunyoung Kim, Junseong Kwon, Byung Joo Song, Chan Heun Park

Abstract:

Purpose: 18F fluorodeoxyglucose positron emission tomography (FDG-PET) is a noninvasive imaging modality that can identify nodal metastases in women with primary breast cancer. The aim of this study was to compare the accuracy of FDG-PET with MRI and sonography in determining axillary lymph node status in patients with breast cancer undergoing sentinel lymph node biopsy or axillary lymph node dissection. Patients and Methods: Between January and December 2012, ninety-nine patients with breast cancer and clinically negative axillary nodes were evaluated. All patients underwent FDG-PET, MRI and ultrasound, followed by sentinel lymph node biopsy (SLNB) or axillary lymph node dissection (ALND). Results: Using axillary lymph node assessment as the gold standard, the sensitivity and specificity of FDG-PET were 51.4% (95% CI, 41.3% to 65.6%) and 92.2% (95% CI, 82.7% to 97.4%), respectively. The sensitivity and specificity were 57.1% (95% CI, 39.4% to 73.7%) and 67.2% (95% CI, 54.3% to 78.4%) for MRI, and 42.86% (95% CI, 26.3% to 60.7%) and 92.2% (95% CI, 82.7% to 97.4%) for ultrasound. Stratification according to hormone receptor status showed an increase in specificity when negative (FDG-PET: 42.3% to 77.8%; MRI: 50% to 77.8%; ultrasound: 34.6% to 66.7%). Also, positive HER2 status was associated with an increase in specificity (FDG-PET: 42.9% to 85.7%; MRI: 50% to 85.7%; ultrasound: 35.7% to 71.4%). Conclusions: The sensitivity and specificity of FDG-PET compared with MRI and ultrasound were high. However, FDG-PET is not sufficiently accurate to appropriately identify lymph node metastases. This study suggests that FDG-PET scanning cannot replace histologic staging in early-stage breast cancer, but might have a role in evaluating axillary lymph node status in hormone receptor negative or HER2-overexpressing subtypes.

Keywords: axillary lymph node metastasis, FDG-PET, MRI, ultrasound

Procedia PDF Downloads 339
776 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer

Authors: Mahya Naghipoor

Abstract:

Purpose: Non-small cell lung cancer (NSCLC) is the most common type of lung cancer. Epidermal growth factor receptor (EGFR) mutation is the main mutation that drives NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analyses of qualitative CT features are based on visual evaluation by radiologists. However, the naked eye may not be able to assess all image features. On the other hand, radiomics provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: For this purpose, the keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUC values of the ROC analyses for semantic and radiomics features were compared. Results: The results showed that the reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD) and wavelet), the AUC values were 50%-86%. Conclusions: In conclusion, the accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.

Keywords: lung cancer, radiomics, computer tomography, mutation

Procedia PDF Downloads 123
775 An Evaluation of Discontinuities in Rock Mass Using Coupled Hydromechanical Finite Element and Discrete Element Analyses

Authors: Mohammad Moridzadeh, Aaron Gallant

Abstract:

The paper will present the design and construction of the underground excavations of a pump station forebay and its related components including connector tunnels, access shaft, riser shaft and well shafts. The underground openings include an 8 m-diameter riser shaft, an 8-m-diameter access shaft, 34 2.4-m-diameter well shafts, a 107-m-long forebay with a cross section having a height of 11 m and width of 10 m, and a 6 m by 6 m stub connector tunnel between the access shaft and a future forebay extension. The riser shaft extends down from the existing forebay connector tunnel at elevation 247 m to the crown of the forebay at elevation 770.0 feet. The access shaft will extend from the platform at the surface down to El. 223.5 m. The pump station will have the capacity to deliver 600 million gallons per day. The project is located on an uplifted horst consisting of a mass of Precambrian metamorphic rock trending in a north-south direction. The eastern slope of the area is very steep and pronounced and is likely the result of high-angle normal faulting. Toward the west, the area is bordered by a high angle normal fault and recent alluvial, lacustrine, and colluvial deposits. An evaluation of rock mass properties, fault and discontinuities, foliation and joints, and in situ stresses was performed. The response of the rock mass was evaluated in 3DEC using Discrete Element Method (DEM) by explicitly accounting for both major and minor discontinuities within the rock mass (i.e. joints, shear zones, faults). Moreover, the stability of the entire subsurface structure including the forebay, access and riser shafts, future forebay, well shafts, and connecting tunnels and their interactions with each other were evaluated using a 3D coupled hydromechanical Finite Element Analysis (FEA).

Keywords: coupled hydromechanical analysis, discontinuities, discrete element, finite element, pump station

Procedia PDF Downloads 236
774 Integration of Climatic Factors in the Meta-Population Modelling of the Dynamics of Malaria Transmission: The Case of Douala and Yaoundé, Two Cities of Cameroon

Authors: Justin-Herve Noubissi, Jean Claude Kamgang, Eric Ramat, Januarius Asongu, Christophe Cambier

Abstract:

The goal of our study is to analyse the impact of climatic factors on malaria transmission, taking into account migration between Douala and Yaoundé, two cities of Cameroon. We show how variations of climatic factors such as temperature and relative humidity affect the spread of malaria. We propose a meta-population model of the dynamics of malaria transmission that evolves in space and time and that takes into account temperature, relative humidity, and the migration between Douala and Yaoundé. We also integrate the variation of environmental factors as events, also called mathematical impulses, that can disrupt the model evolution at any time. Our modelling was done using the Discrete Event System Specification (DEVS) formalism. Our implementation was done on the Virtual Laboratory Environment (VLE), which uses the DEVS formalism and abstract simulators for coupling models.
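The meta-population coupling structure, two patches whose transmission rates are modulated by a climate factor and whose populations mix through daily migration, can be sketched independently of the DEVS/VLE implementation. The sketch below uses a deliberately simplified SIR-type human compartment model and entirely hypothetical populations, rates and migration fractions; it only illustrates how patch coupling enters the dynamics.

```python
# Sketch of the meta-population coupling only (not the DEVS/VLE model):
# two patches with SIR-type human dynamics, a transmission rate modulated by
# a patch-specific climate factor, and daily migration between the patches.
import numpy as np

patches = ["Douala", "Yaounde"]
N = np.array([3.0e6, 2.8e6])              # population per patch (hypothetical)
I = np.array([500.0, 200.0])              # initial infectious
S = N - I
R = np.zeros(2)

gamma = 1.0 / 14.0                        # recovery rate [1/day]
beta0 = 0.25                              # baseline transmission rate
climate = np.array([1.2, 0.9])            # temperature/humidity multiplier
M = np.array([[0.998, 0.002],             # row-stochastic daily migration
              [0.003, 0.997]])            # M[i, j]: fraction moving i -> j

for _ in range(365):
    beta = beta0 * climate
    new_inf = beta * S * I / N
    S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I
    # migration mixes every compartment between the two cities
    S, I, R = M.T @ S, M.T @ I, M.T @ R
    N = S + I + R

for name, inf in zip(patches, I):
    print(f"{name}: {inf:,.0f} infectious after one year")
```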

Keywords: compartmental models, DEVS, discrete events, meta-population model, VLE

Procedia PDF Downloads 528
773 Iterative Solver for Solving Large-Scale Frictional Contact Problems

Authors: Thierno Diop, Michel Fortin, Jean Deteix

Abstract:

Since the precise formulation of the elastic part is irrelevant for the description of the algorithm, we shall consider a generic case. In practice, however, we will have to deal with a nonlinear material (for instance, a Mooney-Rivlin model). We are interested in solving a finite element approximation of the problem, leading to large-scale nonlinear discrete problems and, after linearization, to large linear systems and ultimately to calculations requiring iterative methods. This also implies that the penalty method, and therefore the augmented Lagrangian method, are to be banned because of their negative effect on the condition number of the underlying discrete systems and thus on the convergence of iterative methods. This is a departure from the mainstream of contact methods, in which the augmented Lagrangian is the principal tool. We shall first present the problem and its discretization; this will lead us to describe a general solution algorithm relying on a preconditioner for saddle-point problems, which we shall describe in some detail as it is not entirely standard. We will propose an iterative approach for solving three-dimensional frictional contact problems between elastic bodies, including contact with a rigid body, contact between two or more bodies, and also self-contact.

Keywords: frictional contact, three-dimensional, large-scale, iterative method

Procedia PDF Downloads 167
772 Influence of the Coarse-Graining Method on a DEM-CFD Simulation of a Pilot-Scale Gas Fluidized Bed

Authors: Theo Ndereyimana, Yann Dufresne, Micael Boulet, Stephane Moreau

Abstract:

The DEM (Discrete Element Method) is widely used in industry to simulate large-scale flows of particles; in a fluidized bed, for instance, it allows the trajectory of every particle to be predicted. One of the main limitations of the DEM is the computational time. The CGM (Coarse-Graining Method) has been developed to tackle this issue. The goal is to increase the size of the particles and, by this means, decrease their number. The method leads to a reduction of the collision frequency due to the reduced number of particles. Multiple characteristics of the particle movement and of the fluid flow are affected when the DEM is coupled with CFD (Computational Fluid Dynamics). The main characteristic that is impacted is the energy dissipation of the system; to recover this dissipation, an ADM (Additional Dissipative Mechanism) can be added to the model. The objective of the current work is to observe the influence of the choice of the ADM and of the coarse-graining factor on the numerical results. These results will be compared with experimental results from a fluidized bed and with a numerical model of the same fluidized bed that does not use the CGM. The reference numerical model is a 3D cylindrical fluidized bed with 9.6 million Geldart B-type particles in a bubbling regime.
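The bookkeeping behind a coarse-graining factor f is simple: each parcel stands in for f^3 real particles, so the parcel diameter grows by f and the parcel count shrinks by f^3 while the total solid mass is conserved. The sketch below applies these commonly used scaling rules, which are not necessarily the exact scheme studied in the paper; the particle diameter and density are hypothetical.

```python
# Bookkeeping sketch for a coarse-graining factor f: each parcel represents
# f^3 real particles, so the parcel diameter grows by f and the parcel count
# shrinks by f^3 while the total solid mass stays unchanged.
import math

def coarse_grain(n_particles, diameter, density, factor):
    parcel_diameter = factor * diameter
    n_parcels = n_particles / factor**3
    particles_per_parcel = factor**3
    mass_total = n_particles * density * math.pi / 6.0 * diameter**3
    return {
        "parcel_diameter_m": parcel_diameter,
        "n_parcels": n_parcels,
        "particles_per_parcel": particles_per_parcel,
        "total_solid_mass_kg": mass_total,   # unchanged by coarse-graining
    }

# 9.6 million Geldart B particles; diameter and density are hypothetical
for f in (1, 2, 3):
    info = coarse_grain(9.6e6, 5.0e-4, 2500.0, f)
    print(f"f={f}: {info['n_parcels']:.2e} parcels of "
          f"{info['parcel_diameter_m'] * 1e3:.1f} mm")
```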

Keywords: additive dissipative mechanism, coarse-graining, discrete element method, fluidized bed

Procedia PDF Downloads 32
771 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using The Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method is a promising approach to modeling microscopic behaviors of granular materials. The quality of the simulations however depends on the model parameters utilized. The present study focuses on calibration and validation of the discrete element parameters for Cuxhaven sand based on the experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters like rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus were investigated on the void ratio of the sample generated. During the shear stage, the effect of parameters like inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio are examined. The calibration of the parameters is carried out such that the simulations reproduce the macro mechanical characteristics like dilation angle, peak stress, and stiffness. The above-mentioned calibrated parameters are then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters are applied to forecast the micromechanical behavior including the evolution of contact force chains, buckling of columns of particles, observation of non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution, and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, which shows looser structures in the top and bottom layers. Buckling of columns is not observed due to the small rolling resistance coefficient adopted for simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus the micromechanical behaviors are well described using the calibrated and validated material parameters.

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 90
770 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The modelled patient flows closely match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients requiring a lower performance target, requires the same capacity while improving performance for the selected group with the higher target. Besides, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among the other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first come first served policy.
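Why shortest-processing-time sequencing cuts the average wait can be shown with a single-server sketch that is far simpler than the paper's QNA/hospital network. Arrival and service rates below are hypothetical, and the comparison is between first-come-first-served and non-preemptive SPT only.

```python
# Single-server sketch comparing FCFS with non-preemptive shortest processing
# time (SPT) sequencing; parameters are hypothetical (M/M/1-like traffic).
import heapq, random

def average_wait(policy, lam=0.9, mu=1.0, n=200_000, seed=7):
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n):                        # Poisson arrivals, exp. service
        t += rng.expovariate(lam)
        arrivals.append((t, rng.expovariate(mu)))

    clock, total_wait, queue, i = 0.0, 0.0, [], 0
    while i < len(arrivals) or queue:
        if not queue:                         # server idle: jump to next arrival
            clock = max(clock, arrivals[i][0])
        while i < len(arrivals) and arrivals[i][0] <= clock:
            arr, svc = arrivals[i]
            key = svc if policy == "SPT" else arr
            heapq.heappush(queue, (key, arr, svc))
            i += 1
        if queue:
            _, arr, svc = heapq.heappop(queue)
            total_wait += clock - arr
            clock += svc
    return total_wait / n

for policy in ("FCFS", "SPT"):
    print(f"{policy}: mean wait = {average_wait(policy):.2f}")
```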

Keywords: queueing network, discrete-event simulation, health applications, SPT

Procedia PDF Downloads 153
769 Equivalent Circuit Representation of Lossless and Lossy Power Transmission Systems Including Discrete Sampler

Authors: Yuichi Kida, Takuro Kida

Abstract:

In the new smart society supported by the recent development of 5G and 6G communication systems, the importance of wireless power transmission is increasing. These systems contain discrete sampling systems in the middle of the transmission path, and the equivalent circuit representation of lossless or lossy power transmission through such systems is an important issue in circuit theory. In this paper, for a given weight function, we show that a lossless power transmission system with the given weight is expressed by an equivalent circuit representation consisting of Kida's optimal signal prediction system followed by a reactance multi-port circuit behind it. Further, it is shown that, when the system is lossy, the system has an equivalent circuit in the form of a multi-port positive-real circuit connected behind Kida's optimal signal prediction system. Also, for the convenience of the reader, this paper presents the equivalent circuit expressions of the reactance multi-port circuit and the positive-real multi-port circuit due to Cauer and Ohno, whose information is currently being lost even on the Internet.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, power transmission

Procedia PDF Downloads 90
768 Modeling User Departure Time Choice for Trips in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Modeling users' decisions on departure time choice is the main motivation for this research. In particular, it examines the impact of socio-demographic features, household and job characteristics, and trip qualities on individuals' departure time choice. Departure time alternatives are represented as adjacent discrete time periods, and the choice between these alternatives is modelled using a discrete choice model. Since a great deal of early-morning trips, and of the traffic congestion at that time of day, consists of work trips, the focus of this study is on the work trip over the entire day. Using a stated-preference questionnaire, this study therefore models users' departure time choice as affected by the congestion pricing plan in downtown Tehran. Experimental results demonstrate a clear socio-demographic impact on the departure time of work trips. These findings have substantial implications for transportation planning analysis. In particular, the analysis shows that ignoring the effects of these variables could result in erroneous information; consequently, decisions in the fields of transportation planning and air quality would fail and cause a loss of financial resources.

Keywords: modeling, departure time, travel timing, time of the day, congestion pricing, transportation planning

Procedia PDF Downloads 406
767 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study

Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb

Abstract:

The purpose of this study is to reduce the radiation dose for chest CT examinations by adding tube current modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the dose-length product (DLP) by a conversion factor. The results were compared to those obtained with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with signal-to-noise ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. The results showed that, by including CARE Dose 4D, ED was lowered by 48.35% and 51.51% as estimated from DLP and with CT-Expo, respectively. In addition, ED ranged between 6.6 mSv and 7.01 mSv with the standard protocol, and between 3.2 mSv and 3.62 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and 48.8% lower with TCM. The calculated SNR values were significantly different (p = 0.03 < 0.05), the highest being measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique to reduce SSDE and ED while preserving image quality at a high diagnostic reference level for thoracic CT examinations.
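The two dose quantities used above are simple products, ED = k × DLP and SSDE = f_size × CTDIvol, as the sketch below illustrates. The DLP, CTDIvol and size factor values are hypothetical rather than taken from the study; k = 0.014 mSv/(mGy·cm) is a commonly tabulated adult chest conversion coefficient, and in practice f_size is read from the AAPM Report 204 table for the phantom's effective diameter.

```python
# Arithmetic sketch of the two dose quantities used in the study. DLP, CTDIvol
# and f_size below are hypothetical example values.
def effective_dose(dlp_mGy_cm, k=0.014):
    """ED [mSv] = k * DLP, with k a body-region conversion coefficient."""
    return k * dlp_mGy_cm

def ssde(ctdivol_mGy, f_size):
    """SSDE [mGy] = f_size * CTDIvol, with f_size from the AAPM 204 table."""
    return f_size * ctdivol_mGy

dlp_std, dlp_tcm = 500.0, 250.0        # hypothetical chest DLP values [mGy*cm]
ctdivol, f_size = 12.0, 1.35           # hypothetical CTDIvol and size factor

print(f"ED  (standard): {effective_dose(dlp_std):.2f} mSv")
print(f"ED  (with TCM): {effective_dose(dlp_tcm):.2f} mSv")
print(f"SSDE          : {ssde(ctdivol, f_size):.1f} mGy")
```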

Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose

Procedia PDF Downloads 188
766 Optical Assessment of Marginal Sealing Performance around Restorations Using Swept-Source Optical Coherence Tomography

Authors: Rima Zakzouk, Yasushi Shimada, Yasunori Sumi, Junji Tagami

Abstract:

Background and purpose: Resin composite has become the main material for the restoration of caries in recent years due to its aesthetic characteristics, especially with the development of adhesive techniques. The quality of adhesion to tooth structures depends on an exchange process between inorganic tooth material and synthetic resin and on a micromechanical retention promoted by resin infiltration into partially demineralized dentin. Optical coherence tomography (OCT) is a noninvasive diagnostic method that produces high-resolution cross-sectional images of biological tissue at the micron scale. The aim of this study was to evaluate gap formation at the adhesive/tooth interface of a two-step self-etch adhesive applied with or without phosphoric acid pre-etching in different regions of teeth using SS-OCT. Materials and methods: Round tapered cavities (2×2 mm) were prepared in the cervical part of bovine incisors and divided into two groups (n=10): in the SE group, a self-etch adhesive (Clearfil SE Bond) was applied; in the PA group, the cavities were treated with phosphoric acid etching before applying the self-etch adhesive. Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin and observed under OCT. Following 5000 thermal cycles, the same section was obtained again for each cavity using OCT at a 1310-nm wavelength. Scanning was repeated after two months to monitor gap progression. The gap length was then measured using image analysis software, and statistical analysis between the two groups was performed using SPSS software. After that, the cavities were sectioned and observed under a confocal laser scanning microscope (CLSM) to confirm the OCT results. Results: Gaps formed at the bottom of the cavity were longer than those formed at the margin and at the dento-enamel junction (DEJ) in both groups. On the other hand, the pre-etching treatment damaged the DEJ regions, creating longer gaps. After two months, the gap length had increased significantly in the bottom regions of both groups. In conclusion, phosphoric acid etching did not reduce the gap length in most regions of the cavity. Significance: The bottom region of the cavity was more prone to gap formation than the margin and DEJ regions, and the DEJ was damaged by the phosphoric acid treatment.

Keywords: optical coherence tomography, self-etch adhesives, bottom, dento enamel junction

Procedia PDF Downloads 189
765 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix

Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung

Abstract:

Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage. Early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2341 slices from 895 computed tomography (CT) scans from the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images. From these images, 819 were chosen to train a pix2pix network. We observed that for most of the images, the pix2pix network was able to generate images in which the nodule increased in size and intensity across epochs. To evaluate the images, 400 generated images were chosen at random and shown to a medical student alongside their corresponding original images. Of these 400 generated images, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe that this generated dataset could be used as training data for neural networks to detect lung nodules at an early stage or to improve the accuracy of such networks. This is particularly significant as datasets containing the growth of early-stage nodules are scarce. This project shows that the combination of deep image prior and generative models could potentially open the door to creating larger datasets than currently possible and has the potential to increase the accuracy of medical classification tasks.

Keywords: medical technology, artificial intelligence, radiology, lung cancer

Procedia PDF Downloads 35
764 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt the realized range-based threshold estimation for high-frequency financial data rather than the realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimation. The asymptotic theory is mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite sample performance of the proposed methodology. Specifically, it is demonstrated how the proposed approach can be practically used on financial data.

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 121
763 Estimation of Effective Radiation Dose Following Computed Tomography Urography at Aminu Kano Teaching Hospital, Kano, Nigeria

Authors: Idris Garba, Aisha Rabiu Abdullahi, Mansur Yahuza, Akintade Dare

Abstract:

Background: CT urography (CTU) is an efficient radiological examination for the evaluation of urinary system disorders. However, patients are exposed to a significant radiation dose, which is associated with increased cancer risk. Objectives: To determine the computed tomography dose index following CTU, and to evaluate organ equivalent doses. Materials and Methods: A prospective cohort study was carried out at a tertiary institution located in Kano, northwestern Nigeria. Ethical clearance was sought and obtained from the research ethics board of the institution. Demographic, scan parameter and CT radiation dose data were obtained from patients who had a CTU procedure. Effective dose, organ equivalent doses, and cancer risks were estimated using SPSS statistical software version 16 and CT dose calculator software. Results: A total of 56 patients were included in the study, consisting of 29 males and 27 females. The most common indication for the CTU examination was found to be renal cysts, seen commonly among young adults (15-44 years). The CT radiation dose values for CTU were a DLP of 2320 mGy cm, a CTDIw of 9.67 mGy and an effective dose of 35.04 mSv. The probability of cancer was estimated to be 600 per million CTU examinations. Conclusion: In this study, the radiation dose for CTU is considered significantly high, with an increase in cancer risk probability. Wide variations between patient doses suggest that optimization has not yet been achieved. Patient radiation dose estimates should be taken into consideration when imaging protocols are established for CT urography.

Keywords: CT urography, cancer risks, effective dose, radiation exposure

Procedia PDF Downloads 300
762 Advanced Simulation and Enhancement for Distributed and Energy Efficient Scheduling for IEEE802.11s Wireless Enhanced Distributed Channel Access Networks

Authors: Fisayo G. Ojo, Shamala K. Subramaniam, Zuriati Ahmad Zukarnain

Abstract:

As technology advances and wireless applications become dependable resources, while their physical layers are embedded into ever smaller devices, the problems of energy efficiency and consumption grow. This paper reviews work done in recent years on wireless applications and distributed computing. We find that applications are becoming dependable and share resources with other applications in distributed computing, and that applications embedded in distributed systems suffer from poor power stability and efficiency. The review also shows that discrete event simulation has been left behind, untouched, and has not been adopted in distributed systems as a simulation technique for scheduling each event that takes place in the development of distributed computing applications. We discuss techniques and results proposed by several researchers to show that the results are still unsatisfactory, and that more work remains to be done on energy efficiency in wireless applications and on congestion in distributed computing.

Keywords: discrete event simulation (DES), distributed computing, energy efficiency (EE), internet of things (IoT), quality of service (QoS), user equipment (UE), wireless mesh network (WMN), wireless sensor network (WSN), worldwide interoperability for microwave access (WiMAX)

Procedia PDF Downloads 154
761 A New Approach of Preprocessing with SVM Optimization Based on PSO for Bearing Fault Diagnosis

Authors: Tawfik Thelaidjia, Salah Chenikher

Abstract:

Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification based on the extracted features. In this paper, feature extraction from faulty bearing vibration signals is performed by combining the signal's kurtosis with features obtained by preprocessing the vibration signal samples using the Db2 discrete wavelet transform at the fifth level of decomposition. In this way, a 7-dimensional vibration signal feature vector is obtained. After feature extraction from the vibration signal, a support vector machine (SVM) was applied to automate the fault diagnosis procedure. To improve the classification accuracy for bearing fault prediction, particle swarm optimization (PSO) is employed to simultaneously optimize the SVM kernel function parameter and the penalty parameter. The results have shown the feasibility and effectiveness of the proposed approach.
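A minimal sketch of this pipeline is given below: a level-5 'db2' wavelet decomposition yields six coefficient sub-bands, whose energies plus the raw signal's kurtosis form a 7-dimensional feature vector fed to an RBF SVM. The exact feature definition in the paper may differ, synthetic impulsive signals stand in for real bearing data, and the PSO search over (C, gamma) is replaced here by fixed values for brevity.

```python
# Sketch of wavelet + kurtosis features feeding an SVM (parameters assumed).
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(signal):
    coeffs = pywt.wavedec(signal, "db2", level=5)       # [cA5, cD5, ..., cD1]
    energies = [np.sum(c**2) / len(c) for c in coeffs]  # 6 sub-band energies
    return np.array(energies + [kurtosis(signal)])      # 7-dimensional vector

rng = np.random.default_rng(0)

def synthetic(fault, n=2048, fs=12_000):
    t = np.arange(n) / fs
    sig = np.sin(2 * np.pi * 60 * t) + 0.2 * rng.standard_normal(n)
    if fault:                                   # crude impulsive fault signature
        sig[::200] += 3.0
    return sig

X = np.array([features(synthetic(fault=i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # PSO would tune C and gamma
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```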

Keywords: condition monitoring, discrete wavelet transform, fault diagnosis, kurtosis, machine learning, particle swarm optimization, roller bearing, rotating machines, support vector machine, vibration measurement

Procedia PDF Downloads 403
760 Design of a Low Cost Programmable LED Lighting System

Authors: S. Abeysekera, M. Bazghaleh, M. P. L. Ooi, Y. C. Kuang, V. Kalavally

Abstract:

Smart LED-based lighting systems have significant advantages over traditional lighting systems due to their capability of producing tunable light spectra on demand. The main challenge in the design of smart lighting systems is to produce sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area. This paper outlines the design principles of a programmable LED lighting system that achieves these two aims. A seven-channel design using low-cost discrete LEDs is presented. Optimization algorithms are used to calculate the number of required LEDs, the LED arrangement, and the optimum LED separation distance. The results show the illumination uniformity for each channel. The results also show that the maximum color error is below 0.0808 on the CIE 1976 chromaticity scale. In conclusion, this paper considered the simulation and design of a seven-channel programmable lighting system using low-cost discrete LEDs to produce sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area.
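The uniformity check that such a design loop relies on can be sketched by treating each LED as a Lambertian point source, for which the horizontal illuminance at a target point is E = I0·cos²θ/d² = I0·h²/d⁴. The array geometry, intensity and uniformity metric below are hypothetical, not the authors' optimised layout.

```python
# Illuminance-uniformity sketch for a square array of Lambertian point-source
# LEDs over a target plane. All geometry and intensity values are assumptions.
import numpy as np

H = 1.0                 # mounting height above target plane [m]
I0 = 50.0               # on-axis luminous intensity per LED [cd]
SEP = 0.20              # LED separation in the square array [m]

# 4 x 4 LED array centred over the origin
xs = (np.arange(4) - 1.5) * SEP
led_xy = np.array([(x, y) for x in xs for y in xs])

# evaluation grid on the target plane (0.6 m x 0.6 m)
g = np.linspace(-0.3, 0.3, 61)
gx, gy = np.meshgrid(g, g)

E = np.zeros_like(gx)
for lx, ly in led_xy:
    d2 = (gx - lx) ** 2 + (gy - ly) ** 2 + H**2   # squared source-point distance
    E += I0 * H**2 / d2**2                        # Lambertian point source [lx]

print(f"mean illuminance   : {E.mean():7.1f} lx")
print(f"min/mean uniformity: {E.min() / E.mean():.3f}")
```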

Keywords: light spectrum control, LEDs, smart lighting, programmable LED lighting system

Procedia PDF Downloads 156
759 Public Preferences and Willingness to Pay for Social Health Insurance in Iran: A Discrete Choice Experiment

Authors: Mohammad Ranjbar, Mohammad Bazyar, Blake Angell, Thomas Lung, Yibeltal Assefa

Abstract:

Background: Current health insurance programs in Iran suffer from low enrolment and are not sufficient to move the country toward universal health coverage (UHC). We hypothesize that improving the enrolment rate and moving towards a more sustainable UHC can be achieved by improving the benefits package and providing new incentives. The objective of this study is to assess public preferences and willingness to pay (WTP) for social health insurance (SHI) in Iran. Methods: A discrete choice experiment (DCE) was conducted in 2021 using a self-administered questionnaire on 500 participants to estimate WTP and determine individual preferences for SHI in Yazd, Iran. Respondents were presented with eight choice sets and asked to select their preferred alternative in each. In each choice set, scenarios were described by eight attributes with varying levels. A conditional logit regression model was used to analyze the participants' preferences. Willingness to pay for each attribute was also calculated. Results: Most of the included attributes were significant predictors of the choice of a health insurance package. The maximum coverage of hospitalization costs in the private sector, ancillary services such as glasses, canes, etc., as well as coverage of hospitalization costs in the public sector and drug costs, were the most important determinants of this choice. Coverage of preventive dental care did not significantly influence respondents' choices. The WTP estimates showed that individuals are willing to pay more for higher financial protection, particularly against private sector costs; the WTP to increase the coverage of hospitalization costs in the private sector from 50% to 90% is estimated at 362,068 Iranian Rials per month. Conclusion: This study identifies the key factors that the population values with regard to health insurance and the tradeoffs they are willing to make between them. Hospitalization, drugs, and ancillary services were the most important determinants of choice. The data suggest that additional resources coming into the Iranian health system might best be prioritized to cover hospitalization and drug costs and those associated with ancillary services.
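In a conditional logit DCE with a linear utility V = ... + b_attr·x_attr + b_price·price, the marginal WTP for an attribute is -b_attr/b_price. The sketch below only illustrates that ratio; the coefficients are invented for illustration and are not the study's estimates.

```python
# How WTP figures are derived from conditional logit estimates:
# WTP_attribute = -beta_attribute / beta_price. Coefficients are hypothetical.
def willingness_to_pay(beta_attribute, beta_price):
    return -beta_attribute / beta_price

beta_price = -2.0e-6                     # price coefficient (Rials per month)
betas = {
    "private hospitalization coverage 50% -> 90%": 0.72,
    "ancillary services (glasses, canes, ...)": 0.45,
    "drug cost coverage increase": 0.31,
}

for attribute, b in betas.items():
    wtp = willingness_to_pay(b, beta_price)
    print(f"{attribute}: WTP ~ {wtp:,.0f} Rials/month")
```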

Keywords: social health insurance, preferences, discrete choice experiment, willingness to pay

Procedia PDF Downloads 50
758 Computed Tomography Guided Bone Biopsies: Experience at an Australian Metropolitan Hospital

Authors: K. Hinde, R. Bookun, P. Tran

Abstract:

Percutaneous CT guided biopsies provide a fast, minimally invasive, cost-effective and safe method for obtaining tissue for histopathology and culture. Standards for diagnostic yield vary depending on whether the tissue is being obtained for histopathology or culture. We present a retrospective audit from Western Health in Melbourne, Australia, over a 12-month period, which aimed to determine the diagnostic yield, technical success and complication rate for CT guided bone biopsies and to identify factors affecting these results. The digital imaging storage program (Synapse Picture Archiving and Communication System – Fujifilm Australia) was searched with keyword searches from October 2015 to October 2016. Nineteen CT guided bone biopsies were performed during this time. The most common referring unit was oncology; work-up imaging included CT, MRI, bone scan and PET scan. The complication rate was 0%, the overall diagnostic yield was 74%, and the technical success rate was 95%. When biopsies were performed for histologic analysis, the diagnostic yield was 85%; when performed for bacterial culture, the diagnostic yield was 60%. No significant relationship was identified between lesion size, distance of the lesion from the skin, lesion appearance on CT, the number of samples taken or needle gauge and diagnostic yield or technical success. CT guided bone biopsy at Western Health meets the standards reported at other major clinical centres for technical success and safety. It is a useful investigation for identifying the primary malignancy in distal bone metastases.

Keywords: bone biopsy, computed tomography, core biopsy, histopathology

Procedia PDF Downloads 172
757 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection

Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan

Abstract:

Computed tomography (CT) of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. We therefore aimed to protect the eyes with barium sulfate shield(s) during CT scans and to investigate the resultant image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned, together with a water-based marker, by a multislice CT scanner (SOMATOM Definition Flash) under a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured with Gafchromic films, whose dose-response curve was previously fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes showing the regions of interest on the zygomatic, orbital and nasal bones of the head phantom, as well as the water-based marker, were used to calculate the signal-to-noise and contrast-to-noise ratios. The application of the barium sulfate and bismuth-antimony shields decreased the lens dose on average by 24% and 47%, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of the DSCT images were decreased separately by the barium sulfate and bismuth-antimony shields, resulting in an overall reduction of the CNR. In contrast, the integration of topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and the eyeballs while preferentially decreasing the signal-to-noise ratios when the barium sulfate shield was used. The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on a multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues that are close to the shield. The application of both shields for eye protection is therefore not recommended when intraorbital lesions are sought.

Keywords: computed tomography, barium sulfate shield, dose saving, image quality

Procedia PDF Downloads 241
756 A Two-Stage Airport Ground Movement Speed Profile Design Methodology Using Particle Swarm Optimization

Authors: Zhang Tianci, Ding Meng, Zuo Hongfu, Zeng Lina, Sun Zejun

Abstract:

Automation of airport operations can greatly improve ground movement efficiency. In this paper, we study the speed profile design problem for advanced airport ground movement control and guidance. The problem is constrained by the surface four-dimensional trajectory generated in taxi planning. A decomposed, two-stage approach is presented to solve this problem efficiently. In the first stage, speeds are allocated at control points in a way that ensures smooth speed profiles can be found later. In the second stage, detailed speed profiles for each taxi interval are generated according to the allocated control-point speeds, with the objective of minimizing the overall fuel consumption. We present a swarm intelligence based algorithm for the first-stage problem and a discrete-variable-driven enumeration method for the second-stage problem, since it has only a small set of discrete variables. Experimental results demonstrate that the presented methodology performs well on real-world speed profile design problems.
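A compact particle swarm optimisation sketch in the spirit of the first stage is given below: each particle is a vector of control-point speeds, and a toy objective penalising speed changes stands in for the paper's actual fuel model. Bounds, PSO constants and the objective itself are hypothetical.

```python
# Minimal PSO over control-point speeds with a toy fuel-proxy objective.
import numpy as np

rng = np.random.default_rng(42)
N_POINTS, N_PARTICLES, ITERS = 6, 30, 200
V_MIN, V_MAX = 2.0, 15.0                  # taxi speed bounds [m/s]

def fuel_proxy(speeds):
    accel_penalty = np.sum(np.diff(speeds) ** 2, axis=-1)   # penalise changes
    idle_penalty = 0.01 * np.sum((V_MAX - speeds) ** 2, axis=-1)
    return accel_penalty + idle_penalty

x = rng.uniform(V_MIN, V_MAX, (N_PARTICLES, N_POINTS))      # particle positions
v = np.zeros_like(x)                                        # particle velocities
pbest, pbest_f = x.copy(), fuel_proxy(x)
gbest = pbest[np.argmin(pbest_f)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration constants
for _ in range(ITERS):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, V_MIN, V_MAX)
    f = fuel_proxy(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best control-point speeds:", np.round(gbest, 2))
print("objective value:", float(fuel_proxy(gbest)))
```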

Keywords: airport ground movement, fuel consumption, particle swarm optimization, smoothness, speed profile design

Procedia PDF Downloads 542