Search results for: beta/gamma coincidence technique
5547 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts that sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated into the tape, activated through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters, and were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique. It analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
Procedia PDF Downloads 155
5546 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar
Authors: Shaolin Allen Liao, Hual-Te Chien
Abstract:
Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) developed recently by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, in contrast to the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and capture of its Doppler signature. The working principle of the THz-MIDR is similar to the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator, and the other split beam is reflected by a reflection mirror; finally, both the modulated reference beam and the reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for 3 Frequency Banded (FB) signals: 1) the DC Low-Frequency Banded (LFB) signal; 2) the Fundamental Frequency Banded (FFB) signal; and 3) the Harmonic Frequency Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient algorithms such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulation has also been performed in Matlab with simulated THz-MIDR interferometric signals at various Signal-to-Noise Ratios (SNR) to verify the algorithms.
Keywords: algorithm, modulation, THz phase, THz interferometry Doppler radar
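As a rough illustration of the phase-extraction step, the sketch below fits a THz phase from a simulated modulated-interferometer intensity trace using a Levenberg-Marquardt least-squares fit. The sinusoidal phase-modulation signal model, parameter values, and noise level are illustrative assumptions, not the paper's derived FB-signal formulas:

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed signal model: with a sinusoidally vibrating reference mirror, the
# detected intensity is I(t) = A * (1 + cos(phi + m*sin(w*t))), where phi is
# the THz phase to be extracted and m, w are the modulation depth/frequency.
def intensity(t, amp, phi, m, w):
    return amp * (1.0 + np.cos(phi + m * np.sin(w * t)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
m_mod, w_mod = 2.0, 2.0 * np.pi * 10.0      # known modulation parameters
noisy = intensity(t, 1.0, 0.7, m_mod, w_mod) + 0.02 * rng.standard_normal(t.size)

# Levenberg-Marquardt fit of the unknown amplitude and phase.
def residuals(p):
    amp, phi = p
    return intensity(t, amp, phi, m_mod, w_mod) - noisy

fit = least_squares(residuals, x0=[0.8, 0.3], method="lm")
amp_hat, phi_hat = fit.x
```

Because the model is smooth in the phase, Levenberg-Marquardt converges quickly from a reasonable starting guess, which is the property the abstract's algorithms rely on.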
Procedia PDF Downloads 345
5545 Operative Tips of Strattice Based Breast Reconstruction
Authors: Cho Ee Ng, Hazem Khout, Tarannum Fasih
Abstract:
Acellular dermal matrices are increasingly used to reinforce the lower pole of the breast during implant-based breast reconstruction. There is no standard technique described in the literature for the use of this product. In this article, we share our operative method of fixation.
Keywords: strattice, acellular dermal matrix, breast reconstruction, implant
Procedia PDF Downloads 396
5544 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
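A minimal GLUE-style sketch of the unmixing step is given below. The three-source tracer concentrations, the acceptance threshold, and the goodness-of-fit score are invented for illustration and are far simpler than the study's geochemical fingerprinting setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean tracer concentrations for the three sub-basin sources
# (rows) across four geochemical tracers (columns); illustrative numbers only.
sources = np.array([
    [12.0, 3.0, 45.0, 0.8],   # western
    [9.0,  5.0, 30.0, 1.2],   # central
    [15.0, 2.0, 60.0, 0.5],   # eastern
])
true_p = np.array([0.20, 0.12, 0.68])
target = true_p @ sources                      # synthetic "measured" sediment

# GLUE: draw many random source-proportion sets, keep the "behavioral" ones
# whose predicted mixture fits the target, then summarize their spread.
n = 200_000
p = rng.dirichlet(np.ones(3), size=n)          # each row sums to 1
pred = p @ sources
fit = 1.0 - np.mean(np.abs(pred - target) / target, axis=1)
behavioral = p[fit > 0.98]                     # simple goodness-of-fit cut-off
mean_contrib = behavioral.mean(axis=0)         # GLUE point estimate per source
```

The range of the retained `behavioral` proportions gives the uncertainty band, which is the quantity the GLUE framework reports alongside the mean contribution.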
Procedia PDF Downloads 74
5543 A Comparative Study of Euglena gracilis Cultivations for Improving Laminaribiose Phosphorylase Production
Authors: Akram Abi, Clarissa Müller, Hans-Joachim Jördening
Abstract:
Laminaribiose is a beta-1,3-glycoside which is used in the medical field for the treatment of dermatitis and can also serve as a building block for new pharmaceutics. The conventional process of laminaribiose production is the uneconomical hydrolysis of laminarin extracted from natural polysaccharides of plant origin. A more economical approach, however, is the enzymatic synthesis of laminaribiose via a reverse phosphorylase reaction catalyzed by laminaribiose phosphorylase (LP) from Euglena gracilis. Different cultivation methods of Euglena gracilis and their effect on LP production have been investigated. Buffered and unbuffered heterotrophic and mixotrophic cultivations of Euglena gracilis have been carried out. Changes in biomass and LP production, glucose level and pH, and cell count and shape have been monitored over time. The results obtained from experiments, each in three repetitions, show that heterotrophic cultivation of Euglena gracilis not only produces more biomass than mixotrophic cultivation but also achieves a higher specific protein concentration. Furthermore, the LP activity test showed that the protein extracted from heterotrophically cultured cells has a higher LP activity. It was also observed that the cells develop distinctly different shapes in the two cultures, with different length-to-width ratios. Taking heterotrophic culture as the more efficient cultivation method for LP production, another comparative experiment between buffered and unbuffered heterotrophic cultures was carried out, which showed that the unbuffered culture has advantages with respect to both LP production and the resulting activity. A heterotrophic cultivation of Euglena gracilis in a 5 L bioreactor with controlled operating conditions showed a distinct improvement in all aspects of the culture compared to the shaking-flask cultivations.
Biomass production was improved from 5 to more than 8 g/l (dry weight), which resulted in a specific protein concentration of 45 g/l in the heterotrophic cultivation in the bioreactor. In further attempts to improve LP production, different purification methods were tested, and each method was checked through an activity assay. A laminaribiose yield of 35% was achieved, which was by far the highest amount amongst the methods tested.
Keywords: Euglena gracilis, heterotrophic culture, laminaribiose production, mixotrophic culture
Procedia PDF Downloads 365
5542 The Optimal Order Policy for the Newsvendor Model under Worker Learning
Authors: Sunantha Teyarachakul
Abstract:
We consider the worker-learning Newsvendor Model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the costs of processing units into the model. We describe this by using the well-known Wright's Learning Curve. Most of the assumptions of the classical Newsvendor Model are maintained in our work, such as the constant per-unit cost of leftover and shortage, the zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when adding the cost savings from worker learning to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to these challenges, we found a number of characteristics of the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy.
Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform Distribution; if the demand follows the Beta Distribution with some specific properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific level of lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of periodic review systems for similar problems.
Keywords: inventory management, Newsvendor model, order policy, worker learning
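To make the setup concrete, here is a small numerical sketch of the learning-augmented expected-cost search for Uniform demand. The cost parameters, learning exponent, and demand range are made-up values, and the exhaustive search stands in for the paper's analytical optimality conditions:

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's data):
c1, b = 10.0, 0.3          # Wright's curve: cost of the i-th unit = c1 * i**(-b)
h, p = 2.0, 15.0           # per-unit leftover (overage) and shortage costs
lo, hi = 50, 150           # demand ~ Uniform{lo..hi}, unmet demand is lost

def production_cost(q):
    """Cumulative cost of producing q units under worker learning."""
    units = np.arange(1, q + 1)
    return float(np.sum(c1 * units ** (-b)))

def expected_cost(q):
    """Production cost plus expected overage and underage penalties."""
    d = np.arange(lo, hi + 1)              # discretized demand support
    over = h * np.maximum(q - d, 0)
    under = p * np.maximum(d - q, 0)
    return production_cost(q) + np.mean(over + under)

# Exhaustive search over candidate lot sizes: once learning savings enter the
# objective, convexity may be lost, so we do not rely on a critical fractile.
costs = {q: expected_cost(q) for q in range(lo, hi + 1)}
q_star = min(costs, key=costs.get)
```

The search illustrates the paper's central point: the learning term shifts the optimum away from the classical over/under-stocking balance.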
Procedia PDF Downloads 416
5541 Numerical Modelling of Skin Tumor Diagnostics through Dynamic Thermography
Authors: Luiz Carlos Wrobel, Matjaz Hribersek, Jure Marn, Jurij Iljaz
Abstract:
Dynamic thermography has been clinically proven to be a valuable diagnostic technique for skin tumor detection as well as for other medical applications such as breast cancer diagnostics, diagnostics of vascular diseases, fever screening, and dermatological applications. Thermography for medical screening can be done in two different ways: observing the temperature response under steady-state conditions (passive or static thermography), or inducing thermal stresses by cooling or heating the observed tissue and measuring the thermal response during the recovery phase (active or dynamic thermography). The numerical modelling of heat transfer phenomena in biological tissue during dynamic thermography can aid the technique by improving process parameters or by estimating unknown tissue parameters based on measured data. This paper presents a nonlinear numerical model of multilayer skin tissue containing a skin tumor, together with the thermoregulation response of the tissue during the cooling-rewarming processes of dynamic thermography. The model is based on the Pennes bioheat equation and solved numerically by using a subdomain boundary element method which treats the problem as axisymmetric. The paper includes computational tests and numerical results for Clark II and Clark IV tumors, comparing the models using constant and temperature-dependent thermophysical properties, which showed noticeable differences and highlighted the importance of using a local thermoregulation model.
Keywords: boundary element method, dynamic thermography, static thermography, skin tumor diagnostic
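For reference, the Pennes bioheat equation on which the model is based takes the standard form below (the abstract does not spell out the equation, so symbols follow the usual convention: $\rho$, $c$, $k$ are tissue density, specific heat and conductivity; $\rho_b$, $c_b$, $\omega_b$ the blood density, specific heat and perfusion rate; $T_a$ the arterial temperature; and $q_m$ the metabolic heat source):

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right)
  + q_m
```

Temperature-dependent thermophysical properties enter through $k(T)$ and $\omega_b(T)$, which is what makes the compared model nonlinear.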
Procedia PDF Downloads 107
5540 An Assessment of Financial Viability and Sustainability of Hydroponics Using Reclaimed Water Using LCA and LCC
Authors: Muhammad Abdullah, Muhammad Atiq Ur Rehman Tariq, Faraz Ul Haq
Abstract:
In developed countries, sustainability measures are widely accepted and acknowledged as crucial for addressing environmental concerns. Hydroponics, a soilless cultivation technique, has emerged as a potentially sustainable solution as it can reduce water consumption, land use, and environmental impacts. However, hydroponics may not be economically viable, especially when using reclaimed water, which may entail additional costs and risks. This study aims to address the critical question of whether hydroponics using reclaimed water can achieve a balance between sustainability and financial viability. Life Cycle Assessment (LCA) and Life Cycle Cost (LCC) will be integrated to assess the potential of hydroponics whether it is environmentally sustainable and economically viable. Life cycle assessment, or LCA, is a methodology for assessing environmental impacts associated with all the stages of the life cycle of a commercial product, process, or service. While Life Cycle Cost (LCC) is an approach that assesses the total cost of an asset over its life cycle, including initial capital costs and maintenance costs. The expected benefits of this study include supporting evidence-based decision-making for policymakers, farmers, and stakeholders involved in agriculture. By quantifying environmental impacts and economic costs, this research will facilitate informed choices regarding the adoption of hydroponics with reclaimed water. It is believed that the outcomes of this research work will help to achieve a sustainable approach to agricultural production, aligning with sustainability goals while considering economic factors by adopting hydroponic technique.
Keywords: hydroponic, life cycle assessment, life cycle cost, sustainability
Procedia PDF Downloads 71
5539 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong
Abstract:
This paper presents the evaluation of various soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups ("Good Earth" and "Soft Clay") based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive. Thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system, to further develop this innovative non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). It was found from a previous study that the ANN model coupled with the apparent electrical resistivity (ρ) can classify soils into "Good Earth" and "Soft Clay" in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added to the computer vision scheme: ρ measured using a set of four probes arranged in Wenner's array, the soil strength measured using a modified mini cone penetrometer, and w measured using a set of time-domain reflectometry (TDR) probes.
Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – "Good Earth", "Soft Clay", and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into "Good Earth" or "Soft Clay" and are feasible as complementing methods to the computer vision system.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
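The GLCM texture step can be sketched in a few lines. The 4-level toy image and the horizontal pixel-pair offset are illustrative, and the contrast/energy descriptors are only two of the textural parameters such a scheme might feed to the ANN:

```python
import numpy as np

# Toy 4-level grayscale "soil image" patch; a real pipeline would quantize
# the camera image to a fixed number of grey levels first.
img = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
])
levels = 4

# Grey Level Co-occurrence Matrix for horizontally adjacent pixel pairs
# (offset (0, 1)), normalized to joint probabilities.
glcm = np.zeros((levels, levels))
for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    glcm[i, j] += 1
glcm /= glcm.sum()

# Two classic Haralick-style texture descriptors that could feed the ANN.
ii, jj = np.indices(glcm.shape)
contrast = float(np.sum((ii - jj) ** 2 * glcm))
energy = float(np.sum(glcm ** 2))
```

A feature vector of several such descriptors, computed over multiple offsets and angles, is what a GLCM-plus-ANN classifier typically consumes.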
Procedia PDF Downloads 239
5538 Application of the Finite Window Method to a Time-Dependent Convection-Diffusion Equation
Authors: Raoul Ouambo Tobou, Alexis Kuitche, Marcel Edoun
Abstract:
The FWM (Finite Window Method) is a new numerical meshfree technique for solving problems defined either in terms of PDEs (Partial Differential Equations) or by a set of conservation/equilibrium laws. The principle behind the FWM is that in such problems each element of the domain interacts with its neighbors and always tries to adapt so as to remain in equilibrium with them. This leads to a very simple and robust problem-solving scheme, well suited for transfer problems. In this work, we have applied the FWM to an unsteady scalar convection-diffusion equation. Despite its simplicity, it is well known that convection-diffusion problems can be challenging to solve numerically, especially when convection is highly dominant. This has led researchers to adopt the scalar convection-diffusion equation as a benchmark used to analyze and derive the conditions or artifacts needed to numerically solve problems where convection and diffusion occur simultaneously. We have shown here that the standard FWM can be used to solve convection-diffusion equations in a robust manner, as no adjustments (upwinding or artificial diffusion) were required to obtain good results, even for high Peclet numbers and coarse space and time steps. A comparison was performed between the FWM scheme and both a first-order implicit Finite Volume scheme (upwind scheme) and a third-order implicit Finite Volume scheme (QUICK scheme). The result of the comparison was that, for equal space and time grid spacing, the FWM yields much better precision than the Finite Volume schemes used, all having similar computational cost and conditioning number.
Keywords: finite window method, convection-diffusion, numerical technique, convergence
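For context, the first-order upwind baseline mentioned in the comparison can be sketched as follows. This is an explicit variant on a periodic domain with made-up parameter values; the abstract's schemes are implicit, and the FWM itself is not reproduced here:

```python
import numpy as np

# Explicit first-order upwind convection + central diffusion for
# du/dt + a du/dx = D d2u/dx2 on a periodic 1D grid.
a, D = 1.0, 0.01
nx = 200
dx = 1.0 / nx
dt = 0.4 * min(dx / a, dx**2 / (2 * D))     # respect CFL and diffusion limits

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)          # Gaussian pulse initial condition
total0 = u.sum()

for _ in range(500):
    conv = -a * (u - np.roll(u, 1)) / dx                        # upwind (a > 0)
    diff = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central
    u = u + dt * (conv + diff)

# On the periodic domain mass is conserved; the pulse advects and spreads,
# with extra smearing coming from the scheme's numerical diffusion.
```

The numerical diffusion visible in this baseline is exactly the artifact the FWM claims to avoid without upwinding.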
Procedia PDF Downloads 332
5537 A Study of Secondary Particle Production from Carbon Ion Beam for Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Achieving precise radiotherapy through carbon therapy necessitates accurate monitoring of the radiation dose distribution within the patient's body. This process is pivotal for targeted tumor treatment, minimizing harm to healthy tissues, and enhancing overall treatment effectiveness while reducing the risk of side effects. In our investigation, we adopted a methodological approach to monitoring secondary proton doses in carbon therapy using Monte Carlo (MC) simulations. Initially, Geant4 simulations were employed to extract the initial positions of secondary particles generated during interactions between carbon ions and water, including protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we explored the relationship between the carbon ion beam and these secondary particles. Interaction vertex imaging (IVI) proves valuable for monitoring dose distribution during carbon therapy, providing information about secondary particle locations and abundances, particularly for protons. The IVI method relies on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, known as the vertex. In the context of carbon ion therapy, our simulation results indicated a strong correlation between some secondary particles and the range of the carbon ions. However, challenges arose due to the unique elongated geometry of the target, hindering the straightforward transmission of forward-generated protons. Consequently, the limited protons that did emerge predominantly originated from points close to the target entrance. Fragment (proton) trajectories were approximated as straight lines, and a beam back-projection algorithm, utilizing interaction positions recorded in Si detectors, was developed to reconstruct the vertices.
The analysis revealed a correlation between the reconstructed and actual positions.
Keywords: radiotherapy, carbon therapy, secondary proton dose monitoring, interaction vertex imaging
Procedia PDF Downloads 78
5536 Utilizing the Laser Cutting Method in Men's Custom-Made Casualwear
Authors: M A. Habit, S. A. Syed-Sahil, A. Bahari
Abstract:
Laser cutting is a manufacturing process that uses a laser to cut materials. It ensures extreme accuracy with a clean-cut effect; CO2 lasers dominate this application due to their good-quality beam combined with high output power. The process operates at a small scale and is limited in the size of material it can cut, so it is more appropriate for custom-made products. The same laser cutting machine is also capable of cutting fine materials such as fine silk, cotton, leather and polyester. A lack of exploration and knowledge, besides being unaware of this technology, has caused many designers not to use the laser cutting method in their collections. The objectives of this study are: 1) to identify the potential of the laser cutting technique in custom-made garments for men's casual wear; 2) to experiment with the laser cutting technique in custom-made garments; 3) to offer guidelines and a formula for men's custom-made casualwear designs with aesthetic value. In order to achieve these objectives, this research was conducted using mixed methods: interviews with two (2) local experts in the apparel manufacturing industries, interviews via telephone with five (5) local respondents who are emerging fashion designers, and questionnaires distributed to one hundred (100) respondents around Klang Valley, in order to gain information about their understanding and awareness of laser cutting technology. The experiment was conducted using natural and man-made fibers. In conclusion, all of the objectives were achieved in producing custom-made men's casualwear, and the production of these attires will help to educate and enhance innovation in fine technology. Therefore, there will be a good linkage and collaboration between design experts and manufacturing companies.
Keywords: custom-made, fashion, laser cut, men's wear
Procedia PDF Downloads 441
5535 Analyzing the Effect of Street Pattern Characteristics on Young People's Choice to Walk or Not: A Study Based on Accelerometer and Global Positioning Systems Data
Authors: Ebru Cubukcu, Gozde Eksioglu Cetintahra, Burcin Hepguzel Hatip, Mert Cubukcu
Abstract:
Obesity and overweight cause serious health problems. Public and private organizations aim to encourage walking in various ways in order to cope with the problem of obesity and overweight. This study aims to understand how the spatial characteristics of the urban street pattern, its connectivity and complexity, influence young people's choice to walk or not. 185 public university students in Izmir, the third largest city in Turkey, participated in the study. Each participant wore an accelerometer and a Global Positioning System (GPS) device for a week. The accelerometer device records data on the intensity of the participant's activity at a specified time interval, and the GPS device records the activities' locations. Combining the two datasets, activity maps are derived. These maps are then used to differentiate the participants' walking trips from their motor vehicle trips. Given that, the frequencies of walking and motor vehicle trips are calculated at the street segment level, and the street segments are then categorized into two groups: 'preferred by pedestrians' and 'preferred by motor vehicles'. Graph Theory-based accessibility indices are calculated to quantify the spatial characteristics of the streets in the sample. Six different indices are used: (I) edge density, (II) edge sinuosity, (III) eta index, (IV) node density, (V) order of a node, and (VI) beta index. T-tests show that the index values for the 'preferred by pedestrians' and 'preferred by motor vehicles' segments are significantly different. The findings indicate that the spatial characteristics of the street network have a measurable effect on young people's choice to walk or not. Policy implications are discussed. This study is funded by the Scientific and Technological Research Council of Turkey, Project No: 116K358.
Keywords: graph theory, walkability, accessibility, street network
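A toy computation of several of these Graph Theory indices is sketched below. The four-node network, coordinates, and study area are invented, and only a subset of the six indices (beta, eta, node/edge density, node order) is shown:

```python
import math

# Toy street network: nodes with planar coordinates and undirected edges.
nodes = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
area = 1.0  # study-area size, same squared units as the coordinates

def length(e):
    (x1, y1), (x2, y2) = nodes[e[0]], nodes[e[1]]
    return math.hypot(x2 - x1, y2 - y1)

total_len = sum(length(e) for e in edges)
beta = len(edges) / len(nodes)          # beta index: edges per node (connectivity)
eta = total_len / len(edges)            # eta index: average edge length
node_density = len(nodes) / area        # nodes per unit area
edge_density = total_len / area         # street length per unit area
order = {n: sum(n in e for e in edges) for n in nodes}  # order (degree) of a node
```

Computed per street segment or per buffer around a segment, such indices give the per-group values that the study compares with t-tests.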
Procedia PDF Downloads 225
5534 Virtual Process Hazard Analysis (PHA) of a Nuclear Power Plant (NPP) Using Failure Mode and Effects Analysis (FMEA) Technique
Authors: Lormaine Anne A. Branzuela, Elysa V. Largo, Monet Concepcion M. Detras, Neil C. Concibido
Abstract:
The electricity demand is still increasing, and currently, the Philippine government is investigating the feasibility of operating the Bataan Nuclear Power Plant (BNPP) to address the country's energy problem. However, the lack of process safety studies on the BNPP focused on the effects of hazardous substances on the integrity of its structure, equipment, and other components has made the plant's operationalization questionable to the public. The three major nuclear power plant incidents – TMI-2, Chernobyl, and Fukushima – have made many people hesitant to include nuclear energy in the energy matrix. This study focused on the safety evaluation of the possible operation of a nuclear power plant installed with a Pressurized Water Reactor (PWR), which is similar to the BNPP. Failure Mode and Effects Analysis (FMEA) is one of the Process Hazard Analysis (PHA) techniques used for the identification of equipment failure modes and the minimization of their consequences. Using the FMEA technique, this study was able to recognize 116 different failure modes in total. Upon computation and ranking of the risk priority number (RPN) and criticality rating (CR), it was shown that failure of the reactor coolant pump due to earthquakes is the most critical failure mode. This hazard scenario could lead to a nuclear meltdown and radioactive release, as identified by the FMEA team. Safeguards and recommended risk reduction strategies to lower the RPN and CR were identified such that the effects are minimized, the likelihood of occurrence is reduced, and failure detection is improved.
Keywords: PHA, FMEA, nuclear power plant, Bataan Nuclear Power Plant
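The RPN ranking at the heart of FMEA can be illustrated with a short sketch. The failure modes and the 1-10 severity/occurrence/detection ratings below are invented examples, not the 116 modes or the ratings from this study:

```python
# FMEA risk ranking sketch: RPN = Severity x Occurrence x Detection,
# each rated on a 1-10 scale (higher detection rating = harder to detect).
failure_modes = [
    ("Reactor coolant pump fails (seismic event)", 10, 4, 7),
    ("Steam generator tube rupture",                9, 3, 5),
    ("Loss of offsite power",                       7, 5, 3),
]

# Rank failure modes by descending RPN to prioritize risk reduction effort.
ranked = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
top_mode, top_rpn = ranked[0]
```

A criticality rating is often computed analogously from severity and occurrence alone; safeguards then target the highest-ranked modes, re-scoring them afterwards to verify the RPN has dropped.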
Procedia PDF Downloads 131
5533 A Comparative Study of the Mechanical Properties of Polytetrafluoroethylene Materials Synthesized by Non-Conventional and Conventional Techniques
Authors: H. Lahlali, F. El Haouzi, A. M. Al-Baradi, I. El Aboudi, M. El Azhari, A. Mdarhri
Abstract:
Polytetrafluoroethylene (PTFE) is a high-performance thermoplastic polymer with exceptional physical and chemical properties, such as a high melting temperature, high thermal stability, and very good chemical resistance. Nevertheless, manufacturing PTFE is problematic due to its high melt viscosity (10¹² Pa·s). In practice, it is by now well established that this property presents a serious problem when classical methods are used to synthesize dense PTFE materials, in particular hot pressing and high-temperature extrusion. In this framework, we use here a new process, namely spark plasma sintering (SPS), to elaborate PTFE samples from micrometric particle powder. It consists of applying electric current and pressure simultaneously and directly to the sample powder. By controlling the processing parameters of this technique, a series of PTFE samples is easily obtained in remarkably short times, as reported in an earlier work. Our central goal in the present study is to understand how the non-conventional SPS route affects the mechanical properties at room temperature. To this end, a second series of PTFE samples, synthesized commercially by extrusion, is investigated. The first data on the tensile mechanical properties are found to be superior for the first (SPS) set of samples. However, this trend is not observed in the results obtained from compression testing. The observed macro-behaviors are correlated with physical properties of the two series of samples, such as their crystallinity and density. Upon close examination of these properties, we believe the SPS technique can be seen as a promising way to elaborate polymers of high molecular mass without compromising their mechanical properties.
Keywords: PTFE, extrusion, spark plasma sintering, physical properties, mechanical behavior
Procedia PDF Downloads 308
5532 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor
Authors: Pooja Seth, Shruti Aggarwal
Abstract:
Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications. In these applications, monitoring of the absorbed dose is essential to prevent patients from undue exposure and to evaluate the risks that may arise due to exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration and monitoring of dose due to their favourable dosimetric characteristics, such as tissue equivalence, high sensitivity, energy independence and dose linearity. As the TL efficiency of a phosphor strongly depends on the preparation route, it is interesting to investigate the TL properties of LiF-based phosphors in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na) and silicon (Si) in nanocrystalline form has been prepared using the chemical co-precipitation method. Cube-shaped LiF nanostructures are formed. TL dosimetry properties have been investigated by exposing the phosphor to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, as compared to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure with maximum TL intensity at an annealing temperature of 573 K and a linear dose response from 0.1 to 1000 Gy is observed, which is advantageous for radiotherapy applications. Good reusability, low fading (5% over a month) and negligible residual signal (0.0019%) are observed. According to photoluminescence measurements, a wide emission band at 360–550 nm is observed in undoped LiF, whereas an intense peak at 488 nm is observed in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. The nanocrystalline LiF: Mg, Cu, Na, Si phosphor prepared by the co-precipitation method showed a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability and low fading.
The LiF: Mg, Cu, Na, Si phosphor in nanocrystalline form has tremendous potential in diagnostic radiology, radiotherapy and high-energy radiation applications.
Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method
Procedia PDF Downloads 404
5531 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis
Authors: Akinola Ikudayisi, Josiah Adeyemo
Abstract:
The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, several variables must be considered while estimating and modeling ETₒ. This study therefore presents a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from both the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) in South Africa for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ is the output. The PCA technique was adopted to extract the most important information from the dataset and to analyze the relationship between the five variables and ETₒ, in order to determine the most significant variables affecting ETₒ estimation at VIS. From the model performances, two principal components with a variance of 82.7% were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature and wind speed are the most important variables in ETₒ estimation and modeling at VIS. In other words, ETₒ increases with temperature and wind speed. Other variables, such as rainfall and relative humidity, are less important and cannot be used to provide enough information about ETₒ estimation at VIS. The outcome of this study has helped to reduce the input variable dimensionality from five to the three most significant variables in ETₒ modelling at VIS, South Africa.
Keywords: irrigation, principal component analysis, reference evapotranspiration, Vaalharts
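As an illustration (not the authors' code or data), the variance-retention step of PCA described in this abstract can be sketched as follows; the synthetic weather variables and the 80% retention threshold below are assumptions for demonstration only:

```python
import numpy as np

# Sketch of PCA on five correlated weather variables, retaining the
# components that explain most of the variance. Data are synthetic.
rng = np.random.default_rng(0)
n = 240                                      # e.g. monthly records, 1994-2014
tmin = rng.normal(12, 4, n)
tmax = tmin + rng.normal(12, 2, n)           # strongly correlated with tmin
wind = rng.normal(3, 1, n)
rain = rng.exponential(30, n)
rh = rng.normal(60, 10, n)
X = np.column_stack([tmin, tmax, rain, rh, wind])

# Standardize, then eigen-decompose the covariance of the standardized data
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort by descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()          # fraction of variance per PC
# Retain the leading components covering (here) at least 80% of the variance
k = int(np.searchsorted(np.cumsum(explained), 0.8) + 1)
scores = Z @ eigvecs[:, :k]                  # projected data
print(k, explained[:k].sum())
```

The eigenvector loadings of the retained components indicate which input variables dominate, which is the basis for the dimensionality reduction reported in the abstract.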
Procedia PDF Downloads 258
5530 The Influence of Guided and Independent Training toward Teachers’ Competence to Plan Early Childhood Education Learning Program
Authors: Sofia Hartati
Abstract:
This research is aimed at describing training in early childhood education programs empirically, describing teachers’ ability to plan lessons empirically, and acquiring empirical data as well as analyzing the influence of guided and independent training on teachers’ competence in planning early childhood learning programs. The method used is experimental. Data were collected from a population of 76 early childhood educators in the Tunjung Teja sub-district area through a random sampling technique and grouped into two classes of 38 people each: an experimental class and a control class. The technique used for data collection is a test. The results show that guided training has a significant influence on teachers’ ability to plan early childhood learning programs, and has been proven to improve their comprehension of learning-program planning. The ability to plan a learning program, as part of teachers’ pedagogic competence, comprises: 1) determining the characteristics and competence of students prior to learning; 2) formulating the objective of the learning; 3) selecting materials and their sequences; 4) selecting teaching methods; 5) determining the means or learning media; and 6) selecting an evaluation strategy. The results also describe a difference in competence level: teachers who joined the guided training scored relatively higher than teachers who joined the independent training. Guided training is thus an effective way to improve the knowledge and competence of early childhood educators.
Keywords: competence, planning, teachers, training
Procedia PDF Downloads 264
5529 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. One approach to complex systems, where the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool to detect deterministic chaos, other approaches have emerged; the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using both real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of applying QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; perhaps this model will reveal further insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
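For readers unfamiliar with the Kalman filter mentioned in this abstract, a minimal sketch of the classical filtering recursion applied to a scalar AR(1) process is given below. This illustrates only the standard estimation step, not the authors' QTS model; all parameter values are invented for the example:

```python
import numpy as np

# Scalar Kalman filter for an AR(1) state x_t = phi*x_{t-1} + w_t,
# observed as y_t = x_t + v_t. Purely illustrative parameters.
rng = np.random.default_rng(1)
phi, q, r = 0.8, 0.1, 0.5          # AR coefficient, process/observation noise
T = 200
x = np.zeros(T)                    # true latent state
y = np.zeros(T)                    # noisy observations
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

xf = np.zeros(T)                   # filtered state estimate
P = 1.0                            # filtered state variance
for t in range(1, T):
    # Predict step
    xp = phi * xf[t - 1]
    Pp = phi ** 2 * P + q
    # Update step with observation y[t]
    K = Pp / (Pp + r)              # Kalman gain
    xf[t] = xp + K * (y[t] - xp)
    P = (1 - K) * Pp

mse_filter = np.mean((xf - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
print(mse_filter, mse_raw)         # filtering should reduce the error
```

The same predict/update structure generalizes to the matrix (state-space) form that a QTS-style model would require.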
Procedia PDF Downloads 469
5528 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load
Authors: Sanjin Kršćanski, Josip Brnić
Abstract:
The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and, as such, can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of the finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper compares the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good correlation with the results obtained from the mentioned analytical expressions.
Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending
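The post-processing step at the heart of VCCT can be sketched as follows. This is a generic textbook form of the mode-I calculation for 2D 4-node elements, not the authors' algorithm, and the numeric inputs are made-up example values rather than FE results:

```python
import math

# VCCT mode-I energy release rate from the crack-tip nodal force and the
# opening displacement one element behind the tip (2D, 4-node elements).
Fy = 120.0        # N, vertical nodal force at the crack tip (example value)
dv = 2.0e-5       # m, crack opening displacement behind the tip (example)
da = 1.0e-3       # m, element length along the crack (virtual extension)
t = 5.0e-3        # m, specimen thickness

G_I = Fy * dv / (2.0 * da * t)    # J/m^2, mode-I energy release rate
E = 200e9                          # Pa, Young's modulus (steel-like example)
E_eff = E                          # plane stress: E' = E (plane strain: E/(1-nu^2))
K_I = math.sqrt(G_I * E_eff)       # Pa*sqrt(m), mode-I stress intensity factor
print(G_I, K_I / 1e6)              # K_I in MPa*sqrt(m)
```

With these example inputs, G_I is 240 J/m² and K_I about 6.9 MPa·√m; in a real study the force and displacement would come from the FE solution at each crack length.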
Procedia PDF Downloads 305
5527 African Culture and Youth Morality: A Critique of the On-Going Transitional Rites in Thulamela Municipality, South Africa
Authors: Bassey Rofem Inyang, Matshidze Pfarelo, Mabale Dolphin
Abstract:
Using a qualitative descriptive design, this study established the consequences of the on-going transitional rites on youth morality in the Thulamela Local Municipality, South Africa. The participants were sampled using a non-random sampling procedure, specifically a purposive sampling technique and a snowball sampling technique. A semi-structured interview guide was used to collect data from the Indigenous Knowledge (IK) custodians, the parents of the youths, and the youths until the point of saturation. The analysis was performed using a thematic content method. From the emerging themes and sub-themes, broad categories were generated to differentiate and explain the thoughts expressed by the various respondents and the observations made in the field. The findings suggest that the on-going transitional rites are characterised by weekend social activities involving substance use and abuse among the youths at recreational spots. These transitional rites are structured under the guise of “freaks”, an evolving culture among the youths; the freaks culture is a counterculture to the usual initiation schools for transitional rites of passage, which are believed to instill morality in youths. The findings comprehensively show that the on-going transitional rites encourage inappropriate youth morality. The study concluded that the on-going transitional rites have evolved into a current socialization standard for quick maturity status; as a result, it will be challenging to achieve a complete turnaround of this evolving culture. The study, however, recommends building on the existing transitional rites of passage to cultivate appropriate youth morality in Thulamela communities.
Keywords: morality, transitional rites, youths, behaviour
Procedia PDF Downloads 93
5526 Evaluation of Important Transcription Factors and Kinases in Regulating the Signaling Pathways of Cancer Stem Cells with Low and High Proliferation Rate Derived from Colorectal Cancer
Authors: Mohammad Hossein Habibi, Atena Sadat Hosseini
Abstract:
Colorectal cancer is the third leading cause of cancer-related death in the world. Colorectal cancer screening, early detection, and treatment programs could benefit from the most up-to-date information on the disease's burden, given the present worldwide trend of increasing colorectal cancer incidence. Tumor recurrence and resistance are exacerbated by the presence of chemotherapy-resistant cancer stem cells that can generate rapidly proliferating tumor cells; in addition, tumor cells can evolve chemoresistance through adaptation mechanisms. In this work, we used in silico analysis to select suitable GEO datasets and compared slow-growing cancer stem cells with fast-growing cancer stem cells derived from colorectal cancer. We then evaluated the signaling pathways, transcription factors, and kinases associated with these two types of cancer stem cells. A total of 980 upregulated genes and 870 downregulated genes were clustered. The MAPK signaling pathway, the AGE-RAGE signaling pathway in diabetic complications, Fc gamma R-mediated phagocytosis, and steroid biosynthesis were enriched among the upregulated genes, while caffeine metabolism, amino sugar and nucleotide sugar metabolism, the TNF signaling pathway, and the cytosolic DNA-sensing pathway were enriched among the downregulated genes. In the next step, we evaluated the key transcription factors and kinases in the two types of cancer stem cells: NR2F2, ZEB2, HEY1, and HDGF were identified as transcription factors, and PRDM5, SMAD, CBP, and KDM2B as critical kinases, for the upregulated genes, while the transcription factors IRF1, SPDEF, NCOA1, and STAT1 and the kinases CTNNB1 and CDH7 regulated the downregulated genes. Using bioinformatics analysis, the present study thus conducted an in-depth examination of colorectal cancer stem cells at low and high growth rates, so that further steps can be taken to detect and even target these cells.
Naturally, additional tests are needed in this direction.
Keywords: colorectal cancer, bioinformatics analysis, transcription factor, kinases, cancer stem cells
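The up/downregulated gene classification implied in this abstract is typically done by thresholding log2 fold changes and p-values. A hedged sketch of that step is shown below; the gene names, fold changes, and cutoffs are invented for illustration and are not the study's data:

```python
import numpy as np

# Classifying genes as up- or downregulated from (invented) differential
# expression statistics: |log2FC| >= 1 and p < 0.05 as example cutoffs.
genes = np.array(["NR2F2", "ZEB2", "IRF1", "STAT1", "GAPDH"])
log2fc = np.array([2.1, 1.6, -1.9, -2.4, 0.1])   # fast- vs slow-growing CSCs
pvals = np.array([0.001, 0.004, 0.002, 0.0005, 0.9])

sig = pvals < 0.05
up = genes[sig & (log2fc >= 1.0)]     # candidate upregulated genes
down = genes[sig & (log2fc <= -1.0)]  # candidate downregulated genes
print(list(up), list(down))
```

The resulting gene lists would then be fed to pathway-enrichment and transcription-factor/kinase enrichment tools, as described in the abstract.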
Procedia PDF Downloads 126
5525 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique
Authors: Igor V. Savchenko, Dmitrii A. Samoshkin
Abstract:
The active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low content of impurities were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated, or have been insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. Analysis of the literature showed that the data on the thermal conductivity and thermal diffusivity of light REM in the liquid state are few in number and of limited informativeness (often only one point corresponds to the liquid-state region), that they are contradictory (the nature of the change of thermal conductivity with temperature is not reproduced), and that the results of different measurements diverge significantly beyond the limits of the total errors. Our experimental results therefore fill this gap and clarify the existing information on the heat transfer coefficients of neodymium and samarium in a wide temperature range from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated experimental setup LFA-427. A neodymium sample of brand NM-1 (99.21 wt% purity) and a samarium sample of brand SmM-1 (99.94 wt% purity) were cut from metal ingots and then annealed in a vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Specially designed tantalum measuring cells were used for the experiments. Sealing of the cell with a sample inside was carried out by argon-arc welding in the protective atmosphere of a glovebox.
The glovebox was filled with argon of 99.998 vol.% purity; the argon was additionally cleaned by continuously running it through sponge titanium heated to 900–1000 K. The overall systematic error in determining the thermal conductivity of the investigated metallic melts was 2–5%. Approximation dependences and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes at phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of the thermophysical properties of substances, the physics of metals and liquids, and phase transformations.
Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity
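For context, the standard data-reduction relations underlying laser flash measurements (Parker's formula for thermal diffusivity, and conversion to thermal conductivity) can be sketched as follows. The sample thickness, half-rise time, density, and specific heat below are illustrative values, not the neodymium or samarium data reported in this abstract:

```python
# Laser flash data reduction: Parker's formula alpha = 0.1388*L^2/t_half,
# then k = alpha*rho*cp. All input values are illustrative examples.
L = 2.0e-3        # m, sample thickness
t_half = 0.050    # s, time for the rear face to reach half its max temperature rise

alpha = 0.1388 * L ** 2 / t_half  # m^2/s, thermal diffusivity
rho = 7000.0      # kg/m^3, density (illustrative)
cp = 190.0        # J/(kg K), specific heat capacity (illustrative)
k = alpha * rho * cp              # W/(m K), thermal conductivity
print(alpha, k)
```

In practice, setups such as the LFA-427 apply corrections for heat losses and finite pulse duration on top of this ideal one-dimensional relation.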
Procedia PDF Downloads 198
5524 Mastopexy with the "Dermoglandular Autoaugmentation" Method: Increased Stability of the Result. A Personalized Technique
Authors: Maksim Barsakov
Abstract:
Introduction. In modern plastic surgery, there are a large number of breast lift techniques. Due to spreading information about the "side effects" of silicone implants, interest in implant-free mastopexy is increasing year after year. However, despite the variety of techniques, patients are sometimes not fully satisfied with the results of mastopexy because of insufficient filling of the upper pole, extended anchor-shaped postoperative scars, or, in some cases, an aesthetically unattractive breast shape. The stability of the result after mastopexy depends on many factors, including postoperative rehabilitation, stability of weight and hormonal background, and the stretchability of the tissues. The high recurrence rate of ptosis and the short-term aesthetic effect of mastopexy indicate the urgency of improving surgical techniques and increasing the stabilization of breast tissue. Purpose of the study. To develop and introduce into practice a technique of mastopexy based on the use of a modified Ribeiro flap, as well as elements of tissue movement and fixation designed to increase the stability of postoperative mastopexy, and to define the indications for this surgical technique. Materials and Methods. We operated on 103 patients aged 18 to 53 years from 2019 to 2023 using the reported method. These were patients undergoing primary mastopexy, secondary mastopexy, and implant removal with one-stage mastopexy. The patients were followed up for 12 months to assess the stability of the result. Results and discussion. Observing the patients, we noted greater stability of the breast shape and upper pole filling compared to the conventional classical methods. We did not have to resort to anchor scars: in 90 percent of cases an inverted T-shaped scar was used, and in 10 percent a J-scar.
The quantitative distribution of complications identified among the operated patients is as follows: worsened healing of the junction of the vertical and horizontal sutures at 1–1.5 months after surgery – 15 patients (with ointment treatment, healing was observed in 7–30 days); permanent loss of NAC sensitivity – 0 patients; vascular disorders in the NAC area/areola necrosis – 0 patients; marginal necrosis of the areola – 2 patients (independent healing within 3–4 weeks without aesthetic defects); aesthetically unacceptable mature scars – 3 patients; partial unilateral liponecrosis of the autoflap – 1 patient; recurrence of ptosis – 1 patient (after a weight loss of 12 kg). In the late postoperative period, 2 patients became pregnant and gave birth, and no lactation problems were observed. Conclusion. Methods of breast lift continue to improve, which is especially relevant given the increased attention to this operation. The proposed method of mastopexy with a glandular autoflap allows, in most cases, a stable result and a fuller breast shape, avoids extended anchor scars, and preserves the possibility of lactation. The author has obtained a patent for this method of mastopexy.
Keywords: mastopexy, mammoplasty, autoflap, personal technique
Procedia PDF Downloads 37
5523 Implementation of a Monostatic Microwave Imaging System Using a UWB Vivaldi Antenna
Authors: Babatunde Olatujoye, Binbin Yang
Abstract:
Microwave imaging is a portable, noninvasive, and non-ionizing imaging technique that employs low-power microwave signals to reveal objects in the microwave frequency range. This technique has immense potential for adoption in commercial and scientific applications such as security scanning, material characterization, and nondestructive testing. This work presents a monostatic microwave imaging setup using an ultra-wideband (UWB), low-cost, miniaturized Vivaldi antenna with a bandwidth of 1–6 GHz. The backscattered signals (S-parameters) of the Vivaldi antenna used for scanning targets were measured in the lab using a VNA. An automated two-dimensional (2-D) scanner was employed for the 2-D movement of the transceiver to collect the measured scattering data from different positions. The targets consist of four metallic objects, each with a distinct shape. A similar setup was also simulated in Ansys HFSS. A high-resolution Back Propagation Algorithm (BPA) was applied to both the simulated and experimental backscattered signals. The BPA utilizes the phase and amplitude information recorded over a two-dimensional aperture of 50 cm × 50 cm with a discrete step size of 2 cm to reconstruct a focused image of the targets. The adoption of the BPA was demonstrated by coherently resolving and reconstructing reflection signals from conventional time-of-flight profiles. For both the simulated and experimental data, the BPA accurately reconstructed a high-resolution 2D image of the targets in terms of shape and location. An improvement of the BPA, in terms of target resolution, was achieved by applying a filtering method in the frequency domain.
Keywords: back propagation, microwave imaging, monostatic, vivaldi antenna, ultra wideband
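The back-projection idea described in this abstract can be illustrated with a minimal delay-and-sum sketch on synthetic data. This is not the authors' implementation: it uses a 1-D scan line, a single point target, and synthetic time-domain pulses in place of measured S-parameters, purely to show how round-trip delays are mapped back onto an image grid:

```python
import numpy as np

# Monostatic delay-and-sum back-projection on synthetic traces.
c = 3e8                                      # m/s, speed of light
fs = 20e9                                    # Hz, sample rate of the traces
xs = np.linspace(-0.25, 0.25, 26)            # antenna scan positions (m)
target = np.array([0.05, 0.30])              # point target at (x, z)

# Synthesize one backscattered trace per antenna position: a short
# Gaussian pulse centered at the round-trip delay to the target.
t = np.arange(0, 8e-9, 1 / fs)
traces = np.zeros((xs.size, t.size))
for i, xa in enumerate(xs):
    d = np.hypot(target[0] - xa, target[1])  # one-way distance
    tau = 2 * d / c                          # round-trip delay
    traces[i] = np.exp(-(((t - tau) * fs) / 2) ** 2)

# Back-project: for each image pixel, sum every trace sampled at the
# round-trip delay from that antenna position to the pixel.
gx = np.linspace(-0.2, 0.2, 81)
gz = np.linspace(0.1, 0.5, 81)
img = np.zeros((gz.size, gx.size))
for i, xa in enumerate(xs):
    d = np.hypot(gx[None, :] - xa, gz[:, None])
    idx = np.clip((2 * d / c * fs).astype(int), 0, t.size - 1)
    img += traces[i][idx]

zi, xi = np.unravel_index(np.argmax(img), img.shape)
print(gx[xi], gz[zi])                        # peak should land near the target
```

A real BPA over a 50 cm × 50 cm aperture works the same way in two scan dimensions, coherently summing the measured complex S-parameter data instead of synthetic envelopes.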
Procedia PDF Downloads 19
5522 Surface Modified Quantum Dots for Nanophotonics, Stereolithography and Hybrid Systems for Biomedical Studies
Authors: Redouane Krini, Lutz Nuhn, Hicham El Mard, Cheol Woo Ha, Yoondeok Han, Kwang-Sup Lee, Dong-Yol Yang, Jinsoo Joo, Rudolf Zentel
Abstract:
To use quantum dots (QDs) in the two-photon initiated polymerization (TPIP) technique for 3D patterning, the QDs were surface-modified with photosensitive end groups that are able to undergo photopolymerization. We were able to fabricate fluorescent 3D lattice structures from photopatternable QDs by TPIP for photonic devices such as photonic crystals and metamaterials. QDs of different diameters have different emission colors, and through mixing of RGB QDs, white fluorescence from the polymeric structures was created. Metamaterials are capable of unique interaction with the electrical and magnetic components of electromagnetic radiation, and for manipulating light it is crucial to have a negative refractive index. In combination with QDs via the TPIP technique, polymeric structures can be designed with properties that cannot be found in nature. This gives these artificial materials huge importance for real-life applications in photonics and optoelectronics. Understanding the interactions between nanoparticles and biological systems is of huge interest in the biomedical research field. We developed a synthetic strategy for polymer-functionalized nanoparticles for biomedical studies, obtaining hybrid systems of QDs and copolymers with a strong binding network in an inner shell, which can be further modified through their poly(ethylene glycol)-functionalized outer shell. These hybrid systems can be used as models for the investigation of cell penetration and drug delivery, using a combination of cryo-TEM measurements and fluorescence studies.
Keywords: biomedical study models, lithography, photo induced polymerization, quantum dots
Procedia PDF Downloads 526
5521 The Effect of Micro/Nano Structure of Poly(ε-caprolactone) (PCL) Film Using a Two-Step Process (Casting/Plasma) on Cellular Responses
Authors: JaeYoon Lee, Gi-Hoon Yang, JongHan Ha, MyungGu Yeo, SeungHyun Ahn, Hyeongjin Lee, HoJun Jeon, YongBok Kim, Minseong Kim, GeunHyung Kim
Abstract:
One of the important factors in tissue engineering is the design of optimal biomedical scaffolds, which is governed by topographical surface characteristics such as size, shape, and direction. Of these properties, we focused on the effects of a nano- to micro-sized hierarchical surface. To fabricate the hierarchical surface structure on a poly(ε-caprolactone) (PCL) film, we employed a micro-casting technique, pressing the mold, and a nano-etching technique using a modified plasma process. The micro-sized topography of the PCL film was controlled by the sizes of the microstructures of a lotus leaf, while the nano-sized topography and hydrophilicity of the film were controlled by the modified plasma process. After the plasma treatment, the PCL film changed from hydrophobic to significantly hydrophilic, and the nano-sized structure was well developed. The surface properties of the modified PCL film were investigated in terms of initial cell morphology, attachment, and proliferation using osteoblast-like cells (MG63). In particular, initial cell attachment, proliferation and osteogenic differentiation on the hierarchical structure were enhanced dramatically compared to those on the smooth surface. We believe these results arise from a synergistic effect between the hierarchical structure and the reactive functional groups introduced by the plasma process. Based on the results presented here, we propose a new biomimetic surface model that may be useful for effectively regenerating hard tissues.
Keywords: hierarchical surface, lotus leaf, nano-etching, plasma treatment
Procedia PDF Downloads 376
5520 Genomic Identification of Anisakis Simplex Larvae by PCR-RAPD
Authors: Fumiko Kojima, Shuji Fujimoto
Abstract:
Anisakiasis is a disease caused by infection with anisakid larvae, mostly Anisakis simplex. The larvae commonly infect marine fish, and the disease is frequently reported in areas of the world where fish is consumed raw, lightly pickled or salted. In Japan, people have the habit of eating raw fish such as ‘sushi’ or ‘sashimi’, so they have more chance of infection with the larvae of anisakid nematodes. There are three sibling species among A. simplex larvae, namely A. simplex sensu stricto (Asss), A. pegreffii (Ap) and A. simplex C. It has been revealed that Ap is dominant among larvae from fish (Scomber japonicus) on the Japan Sea side, while Asss is conversely dominant on the Pacific Ocean side, although anisakiasis occurs in Japan in both areas. The aim of this study was to investigate genetic variation between the siblings (Asss and Ap) and within the same sibling species by the random amplified polymorphic DNA (RAPD) technique. To investigate the genetic differences among individual A. simplex larvae, we used the RAPD technique to differentiate individuals of A. simplex obtained from Scomber japonicus fish caught in the Japan Sea (Goto Islands in Nagasaki Prefecture) and on the coast of the Pacific Ocean (Kanagawa Prefecture). The RAPD patterns of the control DNA (genus Raphidascaris) were markedly different from those of A. simplex, and there were differences in amplification patterns between Asss and Ap. The RAPD patterns for larvae obtained from fish of the same sea were somewhat different, and variations were detected even among larvae from the same fish. These results suggest considerably high genetic variability between Asss and Ap and the possible existence of genetic variation within the sibling species.
Keywords: Anisakiasis in Japan, Anisakis simplex, genomic identification, PCR-RAPD
Procedia PDF Downloads 181
5519 Axillary Evaluation with Targeted Axillary Dissection Using Ultrasound-Visible Clips after Neoadjuvant Chemotherapy for Patients with Node-Positive Breast Cancer
Authors: Naomi Sakamoto, Eisuke Fukuma, Mika Nashimoto, Yoshitomo Koshida
Abstract:
Background: Selective localization of the metastatic lymph node with a clip and removal of clipped nodes together with the sentinel lymph node (SLN), known as targeted axillary dissection (TAD), reduces the false-negative rate (FNR) of SLN biopsy (SLNB) after neoadjuvant chemotherapy (NAC). For patients who achieve nodal pathologic complete response (pCR), accurate staging of the axilla by TAD allows axillary lymph node dissection (ALND) to be omitted, decreasing postoperative arm morbidity without a negative effect on overall survival. This study aimed to investigate the ultrasound (US) identification rate and successful removal rate of two kinds of ultrasound-visible clips placed in metastatic lymph nodes during the TAD procedure. Methods: This prospective study enrolled patients with clinically T1-3, N1, 2, M0 breast cancer undergoing NAC followed by surgery. A US-visible clip was placed in the suspicious lymph node under US guidance before neoadjuvant chemotherapy. Before surgery, a US examination was performed to evaluate the detection rate of the clipped node. During surgery, the clipped node was removed using one of several localization techniques, including hook-wire localization, dye injection, or a fluorescence technique, followed by a dual-technique SLNB and resection of palpable nodes if present. For the fluorescence technique, after injection of 0.1-0.2 mL of indocyanine green dye (ICG) into the clipped node, ICG fluorescence imaging was performed using the Photodynamic Eye infrared camera (Hamamatsu Photonics K.K., Shizuoka, Japan). For the dye injection method, 0.1-0.2 mL of pyoktanin blue dye was injected into the clipped node. Results: A total of 29 patients were enrolled. Hydromark™ breast biopsy site markers (Hydromark, T3 shape; Devicor Medical Japan, Tokyo, Japan) were used in 15 patients, whereas an UltraCor™ Twirl™ breast marker (Twirl; C.R. Bard, Inc., NJ, USA) was placed in 14 patients.
US identified the clipped node marked with the UltraCor™ Twirl™ in 100% (14/14) of cases and with the Hydromark in 93.3% (14/15, p = ns). Successful removal of the clipped node marked with the UltraCor™ Twirl™ was achieved in 100% (14/14), whereas the node marked with the Hydromark was removed in 80% (12/15) (p = ns). Conclusions: The ultrasound identification rate differed between the two types of ultrasound-visible clips, which also affected the successful removal rate of the clipped nodes. Labelling the positive node with a highly US-visible clip allowed successful TAD.
Keywords: breast cancer, neoadjuvant chemotherapy, targeted axillary dissection, breast tissue marker, clip
Procedia PDF Downloads 66
5518 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Locations of Fractures
Authors: Irfan Anjum Manarvi, Fawzi Aljassir
Abstract:
Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fracture or to repair its damage. The literature shows that mechanical researchers conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results insufficient due to various limitations of tools and test equipment and difficulties in the availability of human bones, and they proposed further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors, and then carry out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of designs and materials, but over time it has found applications in many other industries due to its accuracy and flexibility in the selection of materials and the types of loading that can be theoretically applied to an object under study. In the past few decades, the field of biomechanical engineering has also started to see its applicability. However, the work done on the tarsals, metatarsals and phalanges using this technique is very limited. Therefore, the present research focuses on using this technique for the analysis of these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed; in the present research, a 3D laser scanner was used for accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot to create these computer geometric models.
These were then imported into finite element analysis software, and a refinement process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. This was followed by analysis of each bone individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive and tensile loads or their combination. Results were collected for deformations along various axes, and stress and strain distributions were observed to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them could fail earlier, and is presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as industrial engineers, in the development of various foot protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to carry out analysis of a complete human foot through finite element analysis under various loading conditions, such as walking, marching, running, and landing after a jump.
Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis
Procedia PDF Downloads 329