Search results for: inverse problem in tomography
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7754

7544 A Fuzzy Programming Approach for Solving Intuitionistic Fuzzy Linear Fractional Programming Problem

Authors: Sujeet Kumar Singh, Shiv Prasad Yadav

Abstract:

This paper develops an approach for solving the intuitionistic fuzzy linear fractional programming (IFLFP) problem in which the costs of the objective function, the resources, and the technological coefficients are triangular intuitionistic fuzzy numbers. The IFLFP problem is first transformed into an equivalent crisp multi-objective linear fractional programming (MOLFP) problem. By using a fuzzy mathematical programming approach, the transformed MOLFP problem is then reduced to a single-objective linear programming (LP) problem. The proposed procedure is illustrated through a numerical example.
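
A minimal sketch of the kind of max-min reduction the abstract describes, assuming a Zimmermann-type formulation with linear membership functions (the paper's exact construction for triangular intuitionistic fuzzy numbers may differ):

```latex
\begin{aligned}
\max\; & \lambda \\
\text{s.t.}\; & \mu_k\!\bigl(Z_k(x)\bigr) \ge \lambda, \qquad k = 1,\dots,K,\\
& \mu_k\!\bigl(Z_k(x)\bigr) = \frac{Z_k(x) - L_k}{U_k - L_k},\\
& x \in X,\quad 0 \le \lambda \le 1,
\end{aligned}
```

where the Z_k are the crisp objectives of the equivalent MOLFP problem, L_k and U_k are their worst and best attainable values, and the auxiliary variable λ turns the multi-objective problem into a single-objective program (linear once the fractional objectives have been linearized).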

Keywords: triangular intuitionistic fuzzy number, linear programming problem, multi objective linear programming problem, fuzzy mathematical programming, membership function

Procedia PDF Downloads 535
7543 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison

Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: The increased number of 'Computed Tomography (CT)' examinations performed raises public concern about the associated stochastic risk to patients. In its Publication 102, the 'International Commission on Radiological Protection (ICRP)' emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation settings (kVp, field collimation, mAs, pitch, etc.), and (2) detailed geometric and compositional information for dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The masses and elemental compositions of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU), and their body dimensions correspond to reference data developed in the United States. Simulated data were verified against clinical measurements. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the "Head" protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing computed tomography exams.
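
As a small illustration of the comparison step only (the organ names, dose values, and the ">= 80% of maximum" selection rule below are hypothetical placeholders, not the study's data), the per-organ percentage difference between two packages can be computed as:

```python
# Hypothetical organ doses (mGy) from two software packages for one CT protocol.
doses_a = {"lungs": 12.1, "breast": 10.4, "thyroid": 2.3, "liver": 9.8}
doses_b = {"lungs": 14.0, "breast": 11.2, "thyroid": 3.1, "liver": 10.5}

# Keep only organs receiving a "significant" dose: >= 80% of the maximum dose.
threshold = 0.8 * max(doses_a.values())
significant = [o for o, d in doses_a.items() if d >= threshold]

for organ in significant:
    diff = 100.0 * abs(doses_a[organ] - doses_b[organ]) / doses_a[organ]
    print(f"{organ}: {diff:.1f}% difference")
```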

Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison

Procedia PDF Downloads 133
7542 Experiment on Artificial Recharge of Groundwater Implemented Project: Effect on the Infiltration Velocity by Vegetation Mulch

Authors: Cheh-Shyh Ting, Jiin-Liang Lin

Abstract:

This study was conducted at the Wanglung Farm in Pingtung County to test the influence of groundwater seepage on an implemented artificial groundwater recharge project. The study was divided into three phases. The first phase, conducted on natural groundwater recharged under the local climate and growing conditions, observed the naturally occurring vegetation species. The original plants were flooded, and after 60 days it was observed that only goosegrass (Eleusine indica) and black heart (Polygonum lapathifolium Linn.) remained. Direct infiltration tests were carried out, and the effect of vegetation on the infiltration velocity of the recharge pool was calculated. The second phase was an indoor test. Bahia grass and wild amaranth were selected for their root systems. After growth, the distribution of the different grass roots was observed in order to compare permeability coefficients calculated from the amount of penetration and to explore the relationship between root density and groundwater recharge efficiency. The third phase was root tomography analysis, a further observation of the development of plant roots using computed tomography technology. Computed tomography (CT) is a diagnostic imaging examination normally used in the medical field. In the first phase of the feasibility study, most non-aquatic plants wilted and died within seven days; the remaining plants were then used for experimental infiltration analysis. Results showed that in the eight-hour infiltration test, Eleusine indica stems averaged 0.466 m/day and wild amaranth averaged 0.014 m/day. The second phase of the experiment was conducted a week after the plants had died and rotted, and the infiltration experiment was performed under these conditions. The results at the end of the eight-hour infiltration test showed that Eleusine indica stems averaged 0.033 m/day and wild amaranth averaged 0.098 m/day. Non-aquatic plants died within two weeks, and their rotted remains clogged the pores between bottom soil particles, obstructing infiltration in the recharge pool. Experimental results showed that after eight hours of testing, the average infiltration velocity for Eleusine indica stems was 0.0229 m/day and for wild amaranth 0.0117 m/day. Because the rotted roots of the plants blocked the soil pores in the recharge pool, the artificial infiltration pond was obstructed, with an immediate impact on recharge efficiency. In order to observe the development of plant roots, the third phase used computed tomography imaging. Iodine developer was injected into the black heart plant, allowing cross-sectional images to be shown on CT and used to observe root development.

Keywords: artificial recharge of groundwater, computed tomography, infiltration velocity, vegetation root system

Procedia PDF Downloads 280
7541 Balancing a Rotary Inverted Pendulum System Using Robust Generalized Dynamic Inverse: Design and Experiment

Authors: Ibrahim M. Mehedi, Uzair Ansari, Ubaid M. Al-Saggaf, Abdulrahman H. Bajodah

Abstract:

This paper presents a methodology for balancing a rotary inverted pendulum system using Robust Generalized Dynamic Inversion (RGDI) under the influence of parametric variations and external disturbances. In GDI control, dynamic constraints are formulated as asymptotically stable differential equations that encapsulate the control objectives. The constraint differential equations are based on the deviation of the angular position and its rates from their reference values. The constraint dynamics are inverted using the Moore-Penrose Generalized Inverse (MPGI) to realize the control expression. The GDI singularity problem is addressed by augmenting a dynamic scale factor in the interpretation of the MPGI, which guarantees asymptotically stable position tracking. An additional term based on Sliding Mode Control is appended to the GDI control to make it robust against parametric variations, disturbances, and tracking performance deterioration due to generalized inversion scaling. The stability of the closed-loop system is ensured by a positive definite Lyapunov energy function that guarantees semi-globally practically stable position tracking. Numerical simulations are conducted on the dynamic model of the rotary inverted pendulum system to analyze the efficiency of the proposed RGDI control law. A comparative study is also presented, in which the performance of RGDI control is compared with a Linear Quadratic Regulator (LQR) and verified through experiments. Numerical simulations and real-time experiments demonstrate the better tracking performance and robustness of RGDI control in the presence of parametric uncertainties and disturbances.
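
A schematic of the control construction described above, written in a generic GDI form (a sketch for intuition only; the paper's exact constraint dynamics, scaling law, and sliding-mode term are not reproduced here). With deviation function φ = θ - θ_ref, asymptotically stable constraint dynamics are prescribed and then inverted for the control:

```latex
\ddot{\phi} + c_1\,\dot{\phi} + c_2\,\phi = 0
\;\;\Longrightarrow\;\;
\mathcal{A}(x)\,u = \mathcal{B}(x),
\qquad
u = \mathcal{A}^{+}_{s}\,\mathcal{B}(x) + u_{\mathrm{smc}},
\qquad
\mathcal{A}^{+}_{s} = \mathcal{A}^{\mathsf T}\bigl(\mathcal{A}\mathcal{A}^{\mathsf T} + \nu(t)\,I\bigr)^{-1},
```

where A⁺ₛ is the Moore-Penrose generalized inverse augmented with a dynamic scale factor ν(t) that handles the singularity problem, and u_smc is the appended sliding-mode term that provides robustness to parametric variations, disturbances, and the scaling-induced tracking deterioration.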

Keywords: generalized dynamic inversion, lyapunov stability, rotary inverted pendulum system, sliding mode control

Procedia PDF Downloads 149
7540 Alternating Electric Fields-Induced Senescence in Glioblastoma

Authors: Eun Ho Kim

Abstract:

Alternating Electric Fields (AEF) therapy has emerged as a mode of treating GBM in newly diagnosed patients, improving median overall survival (OS) by 4.9 months with only a few minor side effects in a phase III clinical trial. The study at hand aimed to determine whether AEF treatment sensitizes GBM cancer cells by increasing AEF-induced senescence. The methods spanned several components: SA-β-gal staining, flow cytometry, Western blotting, morphology, positron emission tomography/computed tomography (PET/CT), immunohistochemical staining, and microarray analysis. The number of cells displaying senescence-specific morphology and positive SA-β-Gal activity increased gradually over 5 days. These results suggest that p16, p21, and p27 are essential regulators of AEF-induced senescence via NF-κB activation. The results showed that AEF treatment enhances senescence in GBM cells via an apoptosis-independent mechanism. This research concludes that this mode of treatment is a reliable protocol that can be employed to overcome the limitations of conventional GBM treatment.

Keywords: alternating electric fields, senescence, glioblastoma, cell death

Procedia PDF Downloads 55
7539 The Algorithm to Solve the Extended General Malfatti’s Problem in a Convex Circular Triangle

Authors: Ching-Shoei Chiang

Abstract:

Malfatti’s problem asks for three circles fitted into a right triangle such that the circles are tangent to each other and each circle is also tangent to a pair of the triangle’s sides. This problem has been extended to any triangle (the general Malfatti’s problem). It has been further extended to 1+2+…+n circles inside the triangle with special tangency properties among the circles and the triangle sides; we call this the extended general Malfatti’s problem. In the extended general Malfatti’s problem, denoted Tri(Tn), where Tn is the triangle number, there are closed-form solutions for the Tri(T₁) (inscribed circle) problem and the Tri(T₂) (three Malfatti circles) problem. These problems become more complex when n is greater than 2, and algorithms have been proposed to solve the Tri(Tn) problem, n > 2, numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties when the boundary of the triangle is not a straight line but a convex circular arc; we try to find Tn circles inside this convex circular triangle with the same tangency properties among the circles and the boundary arcs. We call these the Carc(Tn) problems. The CPU time taken for the Carc(T₁₆) problem, which finds 136 circles inside a convex circular triangle with the specified tangency properties, is less than one second.
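
The tangency conditions for the simplest non-trivial case, Tri(T₂) in a straight-sided triangle, can be written and solved numerically as below (a hedged sketch of the general idea only; the paper's algorithm for circular-arc boundaries and larger Tn is more involved):

```python
# Solve the classic three-circle Malfatti configuration numerically:
# circle i touches the two sides meeting at vertex i, and the circles
# are pairwise tangent.  9 unknowns (3 centres, 3 radii), 9 equations.
import numpy as np
from scipy.optimize import fsolve

V = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])   # triangle vertices
centroid = V.mean(axis=0)

def dist_to_side(p, a, b):
    """Distance from point p to line ab, signed positive towards the interior."""
    t = (b - a) / np.linalg.norm(b - a)
    n = np.array([-t[1], t[0]])
    if np.dot(n, centroid - a) < 0:                   # orient the normal inward
        n = -n
    return float(np.dot(n, p - a))

def equations(z):
    c, r = z[:6].reshape(3, 2), z[6:]
    eqs = []
    for i in range(3):
        a, b, d = V[i], V[(i + 1) % 3], V[(i + 2) % 3]
        eqs.append(dist_to_side(c[i], a, b) - r[i])   # tangent to side V[i]-V[i+1]
        eqs.append(dist_to_side(c[i], a, d) - r[i])   # tangent to side V[i]-V[i+2]
    for i, j in [(0, 1), (1, 2), (0, 2)]:             # pairwise external tangency
        eqs.append(np.linalg.norm(c[i] - c[j]) - (r[i] + r[j]))
    return eqs

# Rough initial guess: centres between each vertex and the centroid, small radii.
z0 = np.concatenate([(0.6 * V + 0.4 * centroid).ravel(), [0.5, 0.5, 0.5]])
sol = fsolve(equations, z0)
print("radii:", sol[6:])   # convergence may need a better guess for skewed triangles
```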

Keywords: circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem

Procedia PDF Downloads 73
7538 Investigation of Ductile Failure Mechanisms in SA508 Grade 3 Steel via X-Ray Computed Tomography and Fractography Analysis

Authors: Suleyman Karabal, Timothy L. Burnett, Egemen Avcu, Andrew H. Sherry, Philip J. Withers

Abstract:

SA508 Grade 3 steel is widely used in the construction of nuclear pressure vessels, where its fracture toughness plays a critical role in ensuring operational safety and reliability. Understanding the ductile failure mechanisms in this steel grade is therefore crucial for designing robust pressure vessels that can withstand severe nuclear environments. In the present study, round bar specimens of SA508 Grade 3 steel with four distinct notch geometries were subjected to tensile loading while continuous 2D images were captured at 5-second intervals to monitor changes in specimen geometry and construct true stress-strain curves. 3D reconstructions of high-resolution X-ray computed tomography (CT) images (spatial resolution of 0.82 μm) allowed a comprehensive assessment of the influence of second-phase particles (i.e., manganese sulfide inclusions and cementite particles) on ductile failure initiation as a function of applied plastic strain. Additionally, plasticity modeling was carried out based on the 2D and 3D images, and the results were compared to the experimental data. A specific ‘two-parameter criterion’ was established and calibrated based on the correlation between stress triaxiality and equivalent plastic strain at failure initiation. The proposed criterion showed substantial agreement with the experimental results, enhancing our knowledge of ductile fracture behavior in this steel grade. The X-ray CT and fractography analyses provided new insights into the diverse roles played by different populations of second-phase particles in fracture initiation under varying stress triaxiality conditions.
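
A two-parameter criterion relating stress triaxiality to the equivalent plastic strain at failure initiation can be calibrated, for example, with an exponential form as sketched below (the functional form and the data points are illustrative assumptions, not the paper's calibration):

```python
# Fit a Johnson-Cook-like failure locus  eps_f = D1 * exp(-D2 * eta)
# to (triaxiality, failure strain) pairs extracted from notched-bar tests.
import numpy as np
from scipy.optimize import curve_fit

eta = np.array([0.40, 0.60, 0.85, 1.10])        # hypothetical stress triaxialities
eps_f = np.array([1.10, 0.80, 0.55, 0.40])      # hypothetical failure strains

def locus(eta, D1, D2):
    return D1 * np.exp(-D2 * eta)

(D1, D2), _ = curve_fit(locus, eta, eps_f, p0=(1.5, 1.0))
print(f"eps_f ~ {D1:.2f} * exp(-{D2:.2f} * eta)")
```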

Keywords: ductile fracture, two-parameter criterion, x-ray computed tomography, stress triaxiality

Procedia PDF Downloads 64
7537 Multiscale Simulation of Absolute Permeability in Carbonate Samples Using 3D X-Ray Micro Computed Tomography Images Textures

Authors: M. S. Jouini, A. Al-Sumaiti, M. Tembely, K. Rahimov

Abstract:

Characterizing the rock properties of carbonate reservoirs is highly challenging because of rock heterogeneities revealed at several length scales. In the last two decades, the Digital Rock Physics (DRP) approach has been implemented successfully in sandstone reservoirs to understand rock property behaviour at the pore scale. This approach uses 3D X-ray microtomography images to characterize the pore network and to simulate rock properties from these images. Although DRP can predict realistic rock properties in sandstone reservoirs, it still suffers from the lack of a clear workflow for carbonate rocks. The main challenge is the integration of properties simulated at different scales in order to obtain the effective rock property of core plugs. In this paper, we propose several approaches to characterize absolute permeability in carbonate core plug samples using a multi-scale numerical simulation workflow. We describe a procedure to simulate the porosity and absolute permeability of a carbonate rock sample using textures of micro-computed tomography images. First, we discretize the X-ray micro-CT image into a regular grid. Then, we use a textural parametric model to classify each cell of the grid using supervised classification. The main parameters are first- and second-order statistics such as mean, variance, range, and autocorrelations computed from sub-bands obtained after wavelet decomposition. Furthermore, we assign a permeability value to each cell using two strategies based on numerical simulation values obtained locally on subsets. Finally, we simulate the effective permeability numerically using a Darcy’s-law simulator. Results obtained for the studied carbonate sample show good agreement with the experimental measurement.
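
A condensed sketch of the per-cell texture-feature step and a coarse upscaling step (the toy classifier, the permeability table, and the use of simple series/parallel averaging in place of a full Darcy solver are assumptions made for illustration):

```python
# Per-cell texture features from wavelet sub-bands, then a coarse
# effective-permeability estimate from the classified grid.
import numpy as np
import pywt

def cell_features(cell):
    """First- and second-order statistics of a cell and its wavelet sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(cell, "db2")
    feats = [cell.mean(), cell.var()]
    for band in (cA, cH, cV, cD):
        feats += [band.mean(), band.var()]
    return np.array(feats)

def classify(feats, var_threshold=500.0):
    """Toy two-class rule standing in for the supervised classifier."""
    return 1 if feats[1] > var_threshold else 0       # 1 = 'vuggy', 0 = 'tight'

perm_by_class = {0: 0.5, 1: 50.0}                     # mD, assigned from local simulations

def effective_permeability(image, cell=64):
    ny, nx = image.shape[0] // cell, image.shape[1] // cell
    k = np.empty((ny, nx))
    for i in range(ny):
        for j in range(nx):
            sub = image[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            k[i, j] = perm_by_class[classify(cell_features(sub))]
    # harmonic mean along the flow direction, arithmetic mean across it
    return np.mean(ny / np.sum(1.0 / k, axis=0))

rng = np.random.default_rng(0)
print(effective_permeability(rng.integers(0, 256, (256, 256)).astype(float)))
```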

Keywords: multiscale modeling, permeability, texture, micro-tomography images

Procedia PDF Downloads 163
7536 Ubiquitous Scaffold Learning Environment Using Problem-based Learning Activities to Enhance Problem-solving Skills and Context Awareness

Authors: Noppadon Phumeechanya, Panita Wannapiroon

Abstract:

The purpose of this research is to design a ubiquitous scaffold learning environment using problem-based learning activities that enhances problem-solving skills and context awareness, and to evaluate its suitability. The research procedure is divided into two phases: the first is the design of the ubiquitous scaffold learning environment using problem-based learning activities, and the second is its evaluation. The sample group in this study consists of five experts selected using the purposive sampling method. Data are analysed using the arithmetic mean and standard deviation. The research findings are as follows: the ubiquitous scaffold learning environment using problem-based learning activities consists of three major steps. The first is preparation before learning, which prepares learners to acknowledge details and learn through the u-LMS. The second is the learning process, where learning activities take place in the ubiquitous learning environment and learners learn online with scaffold systems for each step of problem solving. The third is measurement and evaluation. The experts agree that the ubiquitous scaffold learning environment using problem-based learning activities is highly appropriate.

Keywords: ubiquitous learning environment scaffolding, learning activities, problem-based learning, problem-solving skills, context awareness

Procedia PDF Downloads 476
7535 Influence of Packing Density of Layers Placed in Specific Order in Composite Nonwoven Structure for Improved Filtration Performance

Authors: Saiyed M Ishtiaque, Priyal Dixit

Abstract:

Objectives: An approach is suggested for designing filter media that maximize filtration efficiency at the minimum possible pressure drop in a composite nonwoven by incorporating, in a specific order, layers of different packing densities induced by fibres of different deniers and by the punching parameters, using the concept of a sequential punching technique in the layered composite nonwoven structure. The X-ray computed tomography technique is used to measure the packing density along the thickness of the layered nonwoven structure, which is composed of layers of differently oriented fibres, influenced by fibre denier and punching parameters, placed in various combinations so as to minimize the pressure drop at the maximum possible filtration efficiency. Methodology Used: The work involves the preparation of needle-punched layered structures from batts of 100 g/m² basis weight each, with fibre denier, punch density, and needle penetration depth as variables, to produce a composite nonwoven of 300 g/m² basis weight. For the first set of layered nonwoven fabrics, batts made of fibres of different deniers, each of 100 g/m² basis weight, were placed in various combinations. For the second set of experiments, composite nonwoven fabrics were prepared from 3 denier circular cross-section polyester fibre of 64 mm length on a needle-punching machine using the sequential punching technique: three semi-punched fabrics of 100 g/m² each, having either different punch densities or different needle penetration depths, were prepared in the first phase and later punched together to obtain an overall basis weight of 300 g/m². The total punch density of the composite nonwoven fabric was kept at 200 punches/cm² with a needle penetration depth of 10 mm. The layered structures so formed were subcategorised into two groups: homogeneous layered structures, in which all three batts were made from the same fibre denier, punch density, and needle penetration depth and were placed in different positions in the respective fabric, and heterogeneous layered structures, in which the batts were made from fibres of different deniers, punch densities, and needle penetration depths and were placed in different positions. Contributions: The results show that the reduction in pressure drop is not governed by the overall packing density of the layered nonwoven fabric; rather, the sequencing of layers of specific packing density in the layered structure decides the pressure drop. Accordingly, creating an inverse gradient of packing density in the layered structure provided the maximum filtration efficiency with the least pressure drop. This study paves the way for customising composite nonwoven fabrics for the desired filtration properties by incorporating differently oriented fibres, induced by the considered variables, in the constituent layers.

Keywords: filtration efficiency, layered nonwoven structure, packing density, pressure drop

Procedia PDF Downloads 43
7534 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis

Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov

Abstract:

The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected at the sensors was carried out, taking into account signal decay and the signal delay time between source and detector. Recordings of noises produced by construction tools were used as the time dependence of the noise. Synthetic sensor data were constructed based on these data, applying a model of acoustic wave propagation from a point source in three-dimensional space. All sensors and sources are assumed to lie in the same plane. A source localization method is tested that is based on the signal time delay between two adjacent detectors, from which the direction to the source is plotted; the noise source's position is determined from the intersection of the two direction lines. The cases of one dominant source and of two sources in the presence of several other sources of lower intensity are considered. The number of detectors varies from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two-second duration is considered; the source is located for successive parts of the signal of duration above 0.04 s, and the final result is obtained by computing the average value.
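
A hedged sketch of the delay-and-bearing step (far-field assumption, with the mirror-bearing ambiguity resolved by the site geometry; not the authors' exact processing chain):

```python
# Estimate the inter-sensor delay by cross-correlation, convert it to a
# bearing for each adjacent sensor pair, and intersect two bearings.
import numpy as np

C = 343.0            # speed of sound in air, m/s

def delay(sig_ref, sig, fs):
    """Arrival time of `sig` minus arrival time of `sig_ref`, in seconds."""
    corr = np.correlate(sig, sig_ref, mode="full")
    return (np.argmax(corr) - (len(sig_ref) - 1)) / fs

def bearing_direction(p_a, p_b, dt):
    """Unit vector from the pair towards the source (plane-wave approximation).

    dt is the delay at p_b relative to p_a; cos(theta) = -C*dt/d with theta
    measured from the a->b axis.  The mirror solution uses -theta.
    """
    d = np.linalg.norm(p_b - p_a)
    axis = (p_b - p_a) / d
    theta = np.arccos(np.clip(-C * dt / d, -1.0, 1.0))
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ axis

def intersect(p1, u1, p2, u2):
    """Intersection of the lines p1 + t*u1 and p2 + s*u2 (2D)."""
    t, _ = np.linalg.solve(np.column_stack([u1, -u2]), p2 - p1)
    return p1 + t * u1
```

Two bearings obtained from two sensor pairs then give the source position at the intersection of the direction lines, matching the crossline construction described in the abstract.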

Keywords: acoustic model, direction of arrival, inverse source problem, sound localization, urban noises

Procedia PDF Downloads 30
7533 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland

Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli

Abstract:

This work assesses the performance of an analytical model framework for generating daily flow duration curves (FDCs) based on the climatic characteristics of the catchments and on their streamflow recession coefficients. In the analytical framework, precipitation is considered a stochastic process, modeled as a marked Poisson process, and recession is considered deterministic, with parameters that can be computed from different models. The framework was tested for three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated, and glacier. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. Those developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and the nonlinear model. For the pluvial catchment, the inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacier catchments the seasonal results are better than the annual ones, suggesting that the model can describe streamflows in those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
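
A minimal sketch of the generative idea behind the framework (marked Poisson forcing drained by a linear reservoir; the paper's calibrated, seasonal, and nonlinear variants are not reproduced):

```python
# Simulate daily discharge as exponentially marked Poisson recharge pulses
# drained by a linear reservoir (dQ/dt = -k*Q), then build the flow duration curve.
import numpy as np

rng = np.random.default_rng(1)
n_days, lam, mean_jump, k = 3650, 0.25, 4.0, 0.05   # lam: events/day, k: recession 1/day

q = np.empty(n_days)
q_prev = 1.0
for t in range(n_days):
    jump = rng.exponential(mean_jump) if rng.random() < lam else 0.0
    q_prev = q_prev * np.exp(-k) + jump              # linear recession + recharge pulse
    q[t] = q_prev

q_sorted = np.sort(q)[::-1]                          # descending flows
exceedance = (np.arange(1, n_days + 1) - 0.5) / n_days
# (exceedance, q_sorted) is the simulated daily flow duration curve
print(q_sorted[[int(0.05 * n_days), int(0.5 * n_days), int(0.95 * n_days)]])
```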

Keywords: analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges

Procedia PDF Downloads 137
7532 Young Children’s Use of Representations in Problem Solving

Authors: Kamariah Abu Bakar, Jennifer Way

Abstract:

This study investigated how young children (six years old) constructed and used representations in the mathematics classroom, particularly in problem solving. The purpose of this study is to explore the ways children used representations in solving addition problems and to determine whether their representations can play a supportive role in understanding the problem situation and solving it correctly. Data collection included observations, children’s artifacts, photographs, and conversations with children during task completion. The results revealed that children were able to construct and use various representations in solving problems. However, they had certain preferences in generating representations to support their problem solving.

Keywords: young children, representations, addition, problem solving

Procedia PDF Downloads 431
7531 Two-Stage Approach for Solving the Multi-Objective Optimization Problem on Combinatorial Configurations

Authors: Liudmyla Koliechkina, Olena Dvirna

Abstract:

The multi-objective optimization problem on combinatorial configurations is stated, and an approach to its solution is proposed. The problem is of interest as a combinatorial optimization problem with many criteria, which serves as a model for many applied tasks. The approach consists of two stages: the first reduces the multi-objective problem to a single criterion based on existing multi-objective optimization methods, and the second solves the resulting single-criterion combinatorial optimization problem by the horizontal combinatorial method. This approach provides the optimal solution to the multi-objective optimization problem on combinatorial configurations, taking additional restrictions into account, in a finite number of steps.
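
As an illustration of the first (reduction) stage only, a weighted-sum scalarization over a small permutation set is sketched below; the paper's reduction method and the horizontal combinatorial method used in the second stage are not reproduced here.

```python
# Weighted-sum reduction of a bi-objective problem over permutations of {0..4}:
# for strictly positive weights, the scalarized optimum is Pareto-optimal.
from itertools import permutations

cost = [[3, 1, 4, 1, 5], [2, 7, 1, 8, 2]]            # two criteria, position-dependent

def objectives(p):
    return tuple(sum(cost[k][j] * (i + 1) for i, j in enumerate(p)) for k in range(2))

def scalarized(p, w=(0.6, 0.4)):
    f = objectives(p)
    return w[0] * f[0] + w[1] * f[1]

best = min(permutations(range(5)), key=scalarized)
print(best, objectives(best))
```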

Keywords: discrete set, linear combinatorial optimization, multi-objective optimization, Pareto solutions, partial permutation set, structural graph

Procedia PDF Downloads 140
7530 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, in the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or to distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed: for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are affected. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which give further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
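
The principle can be seen on a toy scalar plant (an illustrative reduction, not the rover model): forward-simulate the faulty system, then invert the nominal model step by step to recover the input that would reproduce the measured output; an actuator step fault shows up as a step in the input residual.

```python
# Inverse-simulation fault detection on a toy nonlinear scalar plant.
import numpy as np
from scipy.optimize import brentq

a, b = 0.9, 0.5
f = lambda x, u: a * x + b * u + 0.05 * np.tanh(x)    # nominal discrete-time model

N = 100
u_cmd = np.ones(N)                                     # commanded input
fault = np.where(np.arange(N) >= 60, -0.3, 0.0)        # actuator step fault at k = 60

# Forward simulation of the *faulty* system (stands in for measurements).
x = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = f(x[k], u_cmd[k] + fault[k])

# Inverse simulation: recover the input the nominal model needs to follow x.
u_hat = np.empty(N)
for k in range(N):
    u_hat[k] = brentq(lambda u: f(x[k], u) - x[k + 1], -10.0, 10.0)

residual = u_cmd - u_hat                               # input residual
print(residual[55:65].round(3))                        # step change appears at k = 60
```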

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 279
7529 On Optimum Stratification

Authors: M. G. M. Khan, V. D. Prasad, D. K. Rao

Abstract:

In this manuscript, we discuss the problem of determining the optimum stratification of a study (or main) variable based on an auxiliary variable that follows a uniform distribution. If the stratification of the survey variable is made using the auxiliary variable, it may lead to substantial gains in the precision of the estimates. This problem is formulated as a Nonlinear Programming Problem (NLPP), which turns out to be a multistage decision problem and is solved using a dynamic programming technique.
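
A sketch of the dynamic-programming view in generic form (the paper's exact objective under the uniform auxiliary distribution is not reproduced): treating the stratum widths as stage decisions over the auxiliary range [a, b],

```latex
\min_{\ell_1,\dots,\ell_L} \sum_{h=1}^{L} \phi_h(\ell_h)
\quad \text{s.t.} \quad \sum_{h=1}^{L} \ell_h = b - a,\;\; \ell_h \ge 0,
\qquad
f_k(d) = \min_{0 \le \ell \le d} \bigl[\phi_k(\ell) + f_{k-1}(d-\ell)\bigr],\;\; f_0(0)=0,
```

where ℓ_h = y_h - y_{h-1} is the width of stratum h, φ_h(ℓ_h) is the stage contribution to the precision criterion (e.g., W_h S_h under Neyman allocation), and the optimum boundaries are read off from the recursion at f_L(b-a).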

Keywords: auxiliary variable, dynamic programming technique, nonlinear programming problem, optimum stratification, uniform distribution

Procedia PDF Downloads 304
7528 Modeling and Controlling Nonlinear Dynamical Effects in Non-Contact Superconducting and Diamagnetic Suspensions

Authors: Sergey Kuznetsov, Yuri Urman

Abstract:

We present an approach for investigating non-linear dynamical effects occurring in non-contact superconducting and diamagnetic suspensions when the levitated body has finite size. The approach is based on calculating the interaction energy between a spherical finite-size superconducting or diamagnetic body and an external magnetic field. The effects of small deviations from spherical shape may also be taken into account by introducing small corrections to the energy. This model allows the investigation of dynamical effects important for practical applications, such as nonlinear resonances, change of vibration plane, and coupling of rotational and translational motions. We also show how the geometry of the suspension affects various dynamical effects and how an inverse problem may be formulated to enforce or diminish them.

Keywords: levitation, non-linear dynamics, superconducting, diamagnetic stability

Procedia PDF Downloads 376
7527 Multifunctional Bismuth-Based Nanoparticles as Theranostic Agent for Imaging and Radiation Therapy

Authors: Azimeh Rajaee, Lingyun Zhao, Shi Wang, Yaqiang Liu

Abstract:

In recent years, many studies have focused on bismuth-based nanoparticles as radiosensitizers and contrast agents in radiation therapy and imaging, owing to bismuth's high atomic number (Z = 83), high photoelectric absorption, low cost, and low toxicity. This study introduces a new multifunctional bismuth-based nanoparticle as a theranostic agent for radiotherapy, computed tomography (CT), and magnetic resonance imaging (MRI). We synthesized bismuth ferrite (BFO, BiFeO3) nanoparticles by the sol-gel method, and the surface of the nanoparticles was modified with polyethylene glycol (PEG). After the biocompatibility of the nanoparticles was confirmed, their ability as contrast agents in CT and MRI was investigated. The relaxation rate (R2) in MRI and the Hounsfield unit (HU) value in CT imaging increased with nanoparticle concentration. Moreover, the effect of the nanoparticles on dose enhancement at low energy was investigated by clonogenic assay. According to the clonogenic assay, sensitizer enhancement ratios (SERs) of 1.35 and 1.76 were obtained for nanoparticle concentrations of 0.05 mg/ml and 0.1 mg/ml, respectively. In conclusion, our experimental results demonstrate that these multifunctional nanoparticles can be employed for multimodal imaging and therapy to enhance theranostic efficacy.

Keywords: molecular imaging, nanomedicine, radiotherapy, theranostics

Procedia PDF Downloads 284
7526 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin, by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The cardiothoracic ratio (CTR) was measured from coronal CT images, and the actual cardiac weight was determined by weighing the heart during the autopsy. The inclusion criterion was sudden death suspected to be caused by cardiac pathology; exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and decomposed bodies. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of the CTR in detecting an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter relative to the maximum transverse diameter of the chest wall. The clinically used CTR threshold was modified from 0.50 to 0.57 for the postmortem setting; a CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR above 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study is the small sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.5%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are needed to validate these findings.
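
The reported statistics are consistent with the confusion matrix sketched below (the individual cell counts are inferred from the reported sensitivity, specificity, accuracy, and case totals; they are not stated in the abstract):

```python
# Autopsy heart weight (> 450 g) as reference, PMCT CTR above the 0.57 threshold as the test.
TP, FN = 5, 4        # autopsy-positive cases: 9 in total
FP, TN = 7, 38       # autopsy-negative cases: 45 in total (54 cases overall)

sensitivity = TP / (TP + FN)                 # 0.5556
specificity = TN / (TN + FP)                 # 0.8444
accuracy = (TP + TN) / (TP + TN + FP + FN)   # 0.7963

print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, "
      f"accuracy {accuracy:.2%}")
```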

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 55
7525 Application of Electrical Resistivity Tomography to Image the Subsurface Structure of a Sinkhole, a Case Study in Southwestern Missouri

Authors: Shishay T. Kidanu

Abstract:

The study area is located in Southwestern Missouri and is mainly underlain by Mississippian-age limestone, which is highly susceptible to karst processes. The area is known for the presence of various karst features such as caves, springs, and, most importantly, sinkholes. Sinkholes are among the most common karst features and the primary hazard in karst areas. Investigating the subsurface structure and development mechanism of existing sinkholes makes it possible to understand their long-term impact and chance of reactivation and also helps to provide effective mitigation measures. In this study, ERT (Electrical Resistivity Tomography), MASW (Multichannel Analysis of Surface Waves), and borehole control data have been used to image the subsurface structure and investigate the development mechanism of a sinkhole in Southwestern Missouri. The study shows that the main process responsible for the development of the sinkhole is the downward piping of fine-grained soils. Furthermore, the study reveals that the sinkhole developed along a north-south oriented vertical joint set characterized by a vertical zone of water seepage and associated piping of fine-grained soil into preexisting fractures.

Keywords: ERT, Karst, MASW, sinkhole

Procedia PDF Downloads 193
7524 Observational Study Reveals Inverse Relationship: Rising PM₂.₅ Concentrations Linked to Decreasing Muon Flux

Authors: Yashas Mattur, Jensen Coonradt

Abstract:

Muon flux, the rate at which muons produced in the atmosphere reach the Earth's surface, is affected by various factors such as air pressure, temperature, and humidity. However, the influence of PM₂.₅ (particulate matter with diameters of 2.5 μm or smaller) concentrations on muon detection rates remains unexplored. During the summer of 2023, smoke from Canadian wildfires, containing significant amounts of particulate matter, blew over regions of the northern US, introducing large fluctuations in PM₂.₅ concentrations and motivating our experiment to investigate the correlation between PM₂.₅ concentrations and muon rates. To investigate this correlation, muon count rates were measured and analyzed alongside PM₂.₅ concentration data over periods of both light and heavy smoke. Other confounding variables, including temperature, humidity, and atmospheric pressure, were also considered. The results reveal a statistically significant inverse correlation between muon flux and PM₂.₅ concentration, indicating that particulate matter affects the rate of muons reaching the Earth's surface.

Keywords: Muon Flux, atmospheric effects on muons, PM₂.₅, airborne particulate matter

Procedia PDF Downloads 44
7523 Low Dose In-Line Electron Holography for 3D Atomic Resolution Tomography of Soft Materials

Authors: F. R. Chen, C. Kisielowski, D. Van Dyck

Abstract:

In principle, the latest generation of aberration-corrected transmission electron microscopes (TEMs) can achieve sub-Å resolution, but a bottleneck hinders the final step towards revealing 3D structure. In order to achieve a resolution around 1 Å with single-atom sensitivity, the electron dose rate needs to be sufficiently large (10⁴-10⁵ e Å⁻² s⁻¹). With such a large dose rate, the electron beam can induce surface alterations or even bulk modifications, in particular for electron-beam-sensitive (soft) materials such as nm-sized particles, organic materials, proteins, or macromolecules. We demonstrate a methodology of low-dose electron holography for observing the 3D structure of soft materials, such as single oleic acid molecules, at atomic resolution. The main improvement of this new type of electron holography is based on two concepts. First, the total electron dose is distributed over many images obtained at different defocus values, from which the electron hologram is then reconstructed. Second, in contrast to current tomographic methods that require projections from several directions, the 3D structural information of the nano-object is extracted from this one hologram obtained from a single viewing direction.

Keywords: low dose electron microscopy, in-line electron holography, atomic resolution tomography, soft materials

Procedia PDF Downloads 166
7522 Genetic Algorithm for Solving the Flexible Job-Shop Scheduling Problem

Authors: Guilherme Baldo Carlos

Abstract:

The flexible job-shop scheduling problem (FJSP) is an NP-hard combinatorial optimization problem that can model several applications in a wide array of industries. The problem will grow in importance due to the shift in production mode that modern society is going through: demand is increasing for personalized and customized products. This work applies a meta-heuristic called a genetic algorithm (GA) to solve the problem. A GA is a meta-heuristic inspired by Charles Darwin's natural selection; it produces a population of individuals (solutions) and selects, mutates, and mates the individuals through generations in order to find a good solution to the problem. The results indicate that the GA is suitable for solving the FJSP.
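
A compact GA loop is sketched below on a deliberately simplified stand-in (a permutation chromosome for a two-machine flow shop rather than the full FJSP encoding with machine assignment), to illustrate the selection-crossover-mutation cycle the abstract describes:

```python
# Toy permutation GA: minimize makespan of a two-machine flow shop.
import random

def makespan(perm, p1, p2):
    """Completion time of the last job on a 2-machine flow shop."""
    t1 = t2 = 0.0
    for j in perm:
        t1 += p1[j]
        t2 = max(t2, t1) + p2[j]
    return t2

def order_crossover(a, b):
    """OX: copy a slice from parent a, fill the rest in parent b's order."""
    n = len(a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = a[i:j + 1]
    fill = [g for g in b if g not in child]
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill.pop(0)
    return child

def mutate(perm, rate=0.2):
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def ga(p1, p2, pop_size=40, generations=200):
    n = len(p1)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: makespan(s, p1, p2))
        elite = pop[:pop_size // 4]                   # keep the best quarter
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)            # mate among the elite
            children.append(mutate(order_crossover(a, b)))
        pop = elite + children
    best = min(pop, key=lambda s: makespan(s, p1, p2))
    return best, makespan(best, p1, p2)

print(ga([4, 2, 6, 3, 5, 1], [3, 5, 2, 6, 4, 2]))
```

For the real FJSP, the chromosome would additionally encode, for each operation, which of its eligible machines it is assigned to.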

Keywords: genetic algorithm, evolutionary algorithm, scheduling, flexible job-shop scheduling

Procedia PDF Downloads 121
7521 Number Sense Proficiency and Problem Solving Performance of Grade Seven Students

Authors: Laissa Mae Francisco, John Rolex Ingreso, Anna Krizel Menguito, Criselda Robrigado, Rej Maegan Tuazon

Abstract:

This study aims to determine and describe the relationship between the number sense proficiency and the problem-solving performance of grade seven students from Victorino Mapa High School, Manila. A paper-and-pencil exam consisting of a 50-item number sense test and a 5-item problem-solving test, adapted from McIntosh, Reys, and Bana, was used as the research instrument to measure number sense proficiency and problem-solving performance. The data obtained were analyzed using the Pearson product-moment correlation coefficient to determine the relationship between the two variables. It was found that students who were low in number sense proficiency tended to be the students with poor problem-solving performance, and students with medium number sense proficiency were most likely to have an average problem-solving performance. Likewise, students with high number sense proficiency were those who did excellently in problem-solving performance.
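
For reference, the Pearson product-moment correlation coefficient used in the analysis is

```latex
r \;=\; \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
            {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}},
```

where x_i and y_i are a student's number sense and problem-solving scores and x̄, ȳ are the corresponding sample means.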

Keywords: number sense, performance, problem solving, proficiency

Procedia PDF Downloads 398
7520 Incorporating Polya’s Problem Solving Process: A Polytechnic Mathematics Module Case Study

Authors: Pei Chin Lim

Abstract:

The School of Mathematics and Science of Singapore Polytechnic offers a Basic Mathematics module to students who did not pass GCE O-Level Additional Mathematics. These students are weaker in mathematics; in particular, they struggle with word problems and tend to leave them blank in tests and examinations. In order to improve students’ problem-solving skills, the school redesigned the Basic Mathematics module to incorporate Polya’s problem-solving methodology. During tutorial lessons, students work through learning activities designed to raise their metacognitive awareness by following Polya’s problem-solving process. To assess the effectiveness of the redesign, students’ working on a challenging word problem in the mid-semester test was analyzed. Sixty-five percent of students attempted to understand the problem by making sketches. Twenty-eight percent of students went on to devise a plan and implement it. Only five percent of the students still left the question blank. These preliminary results suggest that with regular exposure to an explicit and systematic problem-solving approach, weak students’ problem-solving skills can potentially be improved.

Keywords: mathematics education, metacognition, problem solving, weak students

Procedia PDF Downloads 136
7519 Examination of the Influence of the Near-Surface Geology on the Initial Infrastructural Development Using High-Resolution Seismic Method

Authors: Collins Chiemeke, Stephen Ibe, Godwin Onyedim

Abstract:

This research on the high-resolution seismic tomography method was carried out with the aim of investigating how near-surface geology influences the initial distribution of infrastructural development in an area such as Otuoke and its environs. To achieve this objective, the seismic tomography method was employed. The results revealed that the overburden (highly weathered layer) thickness ranges from 27 m to 50 m within the survey area, with an average value of 37 m. The 3D surface analysis of the overburden thickness distribution showed that the overburden is thicker in regions with less infrastructural development and thinnest in built-up areas. The velocity distribution from the surface to a depth of 5 m ranges from about 660 m/s to 1160 m/s, with an average value of 946 m/s. The 3D surface analysis of the velocity distribution also revealed that areas with large infrastructural development are characterized by high velocity values compared with the undeveloped regions, which have low average velocity values. Hence, one can conclude that the initial settlement of Otuoke and its environs and the subsequent infrastructural development were influenced by the underlying near-surface geology (rigid earth), among other factors.

Keywords: geology, seismic, infrastructural, near-surface

Procedia PDF Downloads 267
7518 An Optimization Model for Maximum Clique Problem Based on Semidefinite Programming

Authors: Derkaoui Orkia, Lehireche Ahmed

Abstract:

This article explores the potential of a powerful optimization technique, namely semidefinite programming, for solving NP-hard problems. This approach provides tight relaxations of combinatorial and quadratic problems. In this work, we solve the maximum clique problem using such a relaxation. The clique problem is the computational problem of finding cliques in a graph and is widely acknowledged for its many applications to real-world problems. The numerical results show that it is possible to find a maximum clique in polynomial time, using an algorithm based on semidefinite programming. We implement a primal-dual interior-point algorithm to solve this problem based on semidefinite programming. The semidefinite relaxation of this problem can be solved in polynomial time.
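
One standard SDP relaxation of the maximum clique problem (the Lovász theta function of the complement graph) is shown below for concreteness; the authors' exact formulation may differ.

```latex
\vartheta(\bar G) \;=\; \max\; \langle J, X\rangle
\quad \text{s.t.} \quad \operatorname{tr}(X) = 1,\;\;
X_{ij} = 0 \;\; \forall\, \{i,j\} \notin E,\; i \neq j,\;\;
X \succeq 0,
```

where J is the all-ones matrix. It satisfies ω(G) ≤ ϑ(Ḡ), so the SDP value is an upper bound on the clique number that a primal-dual interior-point method can compute in polynomial time.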

Keywords: semidefinite programming, maximum clique problem, primal-dual interior point method, relaxation

Procedia PDF Downloads 194
7517 A New Graph Theoretic Problem with Ample Practical Applications

Authors: Mehmet Hakan Karaata

Abstract:

In this paper, we first coin a new graph-theoretic problem with numerous applications. Second, we provide two algorithms for the problem. The first solution uses a brute-force technique, whereas the second is based on an initial identification of the cycles in the given graph. We then provide a correctness proof of the algorithm. Applications of the problem include graph analysis, graph drawing, and network structuring.
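
Since the abstract does not define the cycle-identification step, the sketch below shows only a generic DFS back-edge search that recovers one cycle in an undirected graph, as a placeholder for that step.

```python
# Find one cycle in an undirected graph (adjacency-list form) via DFS back edges.
def find_cycle(adj):
    parent = {}
    for start in adj:
        if start in parent:
            continue
        parent[start] = None
        stack = [(start, iter(adj[start]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w == parent[v]:
                    continue
                if w in parent:                   # back edge: reconstruct the cycle
                    cycle, u = [w, v], parent[v]
                    while u is not None and u != w:
                        cycle.append(u)
                        u = parent[u]
                    return cycle
                parent[w] = v
                stack.append((w, iter(adj[w])))
                advanced = True
                break
            if not advanced:
                stack.pop()
    return None

print(find_cycle({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))
```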

Keywords: algorithm, cycle, graph algorithm, graph theory, network structuring

Procedia PDF Downloads 359
7516 Kinematic Hardening Parameters Identification with Respect to Objective Function

Authors: Marina Franulovic, Robert Basan, Bozidar Krizan

Abstract:

Constitutive modelling of material behaviour is becoming increasingly important in the prediction of possible failures in highly loaded engineering components and, consequently, in the optimization of their design. In order to account for the large number of phenomena that occur in the material during operation, such as the kinematic hardening effect in the low-cycle fatigue behaviour of steels, complex nonlinear material models are used ever more frequently, despite the complexity of determining their parameters. As a method for determining these parameters, the genetic algorithm is a good choice because of its capability to provide a very good approximation of the solution in systems with a large number of unknown variables. For the application of the genetic algorithm to parameter identification, an inverse analysis must first be defined; it is used as a tool to fine-tune the calculated stress-strain values against the experimental ones. In order to choose a proper objective function for the inverse analysis from among existing and newly developed functions, the influence of the objective function on material behaviour modelling is investigated.
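
A typical least-squares objective for such an inverse analysis is sketched below (one candidate form only; the existing and newly developed functions compared in the paper are not reproduced, and the stand-in constitutive model and parameter values are illustrative assumptions):

```python
# Relative least-squares misfit between simulated and experimental stress values
# at common strain points -- the quantity a genetic algorithm would minimize.
import numpy as np

def objective(params, strains, stress_exp, simulate):
    """params: kinematic-hardening parameters; simulate: material model runner."""
    stress_sim = simulate(params, strains)
    return np.sum(((stress_sim - stress_exp) / stress_exp) ** 2)

# Illustrative stand-in for the constitutive model: Armstrong-Frederick-type
# backstress saturation added to a fixed yield stress of 200 MPa.
def simulate(params, strains):
    c, gamma = params
    return 200.0 + (c / gamma) * (1.0 - np.exp(-gamma * strains))

strains = np.linspace(0.0, 0.02, 11)
rng = np.random.default_rng(2)
stress_exp = simulate((60000.0, 400.0), strains) * (1 + 0.01 * rng.standard_normal(11))
print(objective((50000.0, 350.0), strains, stress_exp, simulate))
```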

Keywords: genetic algorithm, kinematic hardening, material model, objective function

Procedia PDF Downloads 301
7515 High-Resolution Computed Tomography Imaging Features during Pandemic 'COVID-19'

Authors: Sahar Heidary, Ramin Ghasemi Shayan

Abstract:

With the development of the novel coronavirus (2019-nCoV) pneumonia, chest high-resolution computed tomography (HRCT) has been one of the main investigative tools. To achieve timely and accurate diagnosis, defining the radiological features of the infection is of great value. The purpose of this review was to consider the imaging manifestations of early-stage coronavirus disease 2019 (COVID-19) and to provide an imaging basis for early detection of suspected cases and stratified intervention. The positive predictive value of HRCT was 85% and the sensitivity 73% for all patients; the overall accuracy was 68%. There was no significant difference in these values between symptomatic and asymptomatic persons, and the results were also independent of the time of imaging from the onset of symptoms or exposure. We therefore suggest that HRCT is an excellent adjunct for the early identification of COVID-19 pneumonia in both symptomatic and asymptomatic individuals, in addition to its role as a predictive indicator for COVID-19 pneumonia. Patients underwent non-contrast chest HRCT examinations, and images were reconstructed with a thin 1.25 mm lung window. Images were evaluated for the presence of lung lesions, and a CT severity score was assigned to each patient based on the number of lung lobes involved.

Keywords: COVID-19, radiology, respiratory diseases, HRCT

Procedia PDF Downloads 120