Search results for: line ampacity prediction
4060 Outcome of Using Penpat Pinyowattanasilp Equation for Prediction of 24-Hour Uptake, First and Second Therapeutic Doses Calculation in Graves’ Disease Patient
Authors: Piyarat Parklug, Busaba Supawattanaobodee, Penpat Pinyowattanasilp
Abstract:
The radioactive iodine thyroid uptake (RAIU) test has been widely used to differentiate the causes of thyrotoxicosis and to guide treatment. The 24-hour RAIU is routinely used to calculate the dose of radioactive iodine (RAI) therapy; however, a two-day protocol is required. This study aims to evaluate a modification of the Penpat Pinyowattanasilp equation in which outlier data (3-hour RAIU below 20% or above 80%) are excluded, to improve the prediction of the 24-hour uptake. The equation is: predicted 24-hour RAIU (P24RAIU) = 32.5 + 0.702 × (3-hour RAIU). The predicted uptake was then used to calculate separate first and second therapeutic doses in Graves’ disease patients. Methods: This was a retrospective study at the Faculty of Medicine Vajira Hospital in Bangkok, Thailand. Inclusion criteria were Graves’ disease patients who visited the RAI clinic between January 2014 and March 2019. We divided subjects into two groups according to first and second therapeutic doses. Results: Our study had a total of 151 patients: 115 patients with a first RAI dose and 36 patients with a second RAI dose. The P24RAIU was highly correlated with the actual 24-hour RAIU for the first and second therapeutic doses (r = 0.913, 95% CI = 0.876 to 0.939 and r = 0.806, 95% CI = 0.649 to 0.897). Bland-Altman plots show that the mean differences between predicted and actual 24-hour RAIU for the first and second doses were 2.14% (95% CI 0.83 to 3.46) and 1.37% (95% CI -1.41 to 4.14). The mean first actual and predicted therapeutic doses were 8.33 ± 4.93 and 7.38 ± 3.43 millicuries (mCi), respectively. The mean second actual and predicted therapeutic doses were 6.51 ± 3.96 and 6.01 ± 3.11 mCi, respectively. The predicted therapeutic doses were highly correlated with the actual doses for the first and second therapeutic doses (r = 0.907, 95% CI = 0.868 to 0.935 and r = 0.953, 95% CI = 0.909 to 0.976).
Bland-Altman plots show that the mean differences between predicted and actual therapeutic doses for the first and second doses were less than 1 mCi (-0.94 and -0.5 mCi). This modified equation is simple to use in clinical practice, especially for patients with a 3-hour RAIU in the range of 20% to 80%, in a Thai population. Before this equation is used in other populations, its correlation should be tested.
Keywords: equation, Graves’ disease, prediction, 24-hour uptake
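The reported prediction equation lends itself to a one-line implementation. The sketch below applies the coefficients given in the abstract; the function name and the explicit 20-80% validity check are our additions, not part of the source:

```python
def predict_p24_raiu(raiu_3h_percent):
    """Predict the 24-hour RAIU (%) from the 3-hour RAIU (%) using the
    equation reported in the abstract:
        P24RAIU = 32.5 + 0.702 * (3-hour RAIU).
    The abstract restricts the modified equation to 3-hour uptakes
    between 20% and 80%; rejecting values outside that range here is a
    design choice of this sketch."""
    if not 20.0 <= raiu_3h_percent <= 80.0:
        raise ValueError("3-hour RAIU outside the 20-80% validity range")
    return 32.5 + 0.702 * raiu_3h_percent

# A 3-hour uptake of 50% predicts a 24-hour uptake of 67.6%.
print(predict_p24_raiu(50.0))
```
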
Procedia PDF Downloads 137
4059 The Prediction Mechanism of M. cajuputi Extract from Lampung-Indonesia, as an Anti-Inflammatory Agent for COVID-19 by NFκβ Pathway
Authors: Agustyas Tjiptaningrum, Intanri Kurniati, Fadilah Fadilah, Linda Erlina, Tiwuk Susantiningsih
Abstract:
Coronavirus disease 2019 (COVID-19) remains a major health problem. It can progress to a severe condition caused by a cytokine storm, in which several proinflammatory cytokines are released massively. This destroys epithelial cells and can subsequently cause death. Anti-inflammatory agents can be used to reduce the number of severe COVID-19 cases. Melaleuca cajuputi is a plant that has antiviral, antibiotic, antioxidant, and anti-inflammatory activities. This study was carried out to analyze the predicted mechanism of M. cajuputi extract from Lampung, Indonesia, as an anti-inflammatory agent for COVID-19. A database of host target proteins involved in the inflammation process of COVID-19 was constructed using data retrieved from GeneCards with the keywords “SARS-CoV-2,” “inflammation,” “cytokine storm,” and “acute respiratory distress syndrome.” A protein-protein interaction network was then generated using Cytoscape version 3.9.1 to identify the significant target proteins. Gene Ontology (GO) and KEGG pathway analyses were then conducted to identify the genes and components that play a role in COVID-19. The result was 30 nodes representing significant proteins, namely NF-κB, IL-6, IL-6R, IL-2RA, IL-2, IFN2, C3, TRAF6, IFNAR1, and DDX58. From the KEGG pathway analysis, we found that NF-κB has a role in the production of the proinflammatory cytokines that drive the COVID-19 cytokine storm. It is an important transcription factor in macrophages; therefore, it induces the expression of inflammatory genes that encode proinflammatory cytokines such as IL-6, TNF-α, and IL-1β. In conclusion, blocking NF-κB is the predicted mechanism of M. cajuputi extract as an anti-inflammatory agent for COVID-19.
Keywords: anti-inflammation, COVID-19, cytokine storm, NF-κβ, M. cajuputi
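The hub-selection step of such a network analysis (picking the most connected proteins from the protein-protein interaction network) can be sketched without any Cytoscape dependency. The edge list below is an illustrative toy, not the study's actual 30-node network:

```python
def hub_proteins(edges, top_n=3):
    """Rank proteins by degree (number of interaction partners) in a
    protein-protein interaction network given as (a, b) edge pairs.
    Ties are broken alphabetically for reproducibility."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sorted(degree, key=lambda p: (-degree[p], p))[:top_n]

# Toy network: NFKB1 interacts with most cytokines in this example.
toy_edges = [("NFKB1", "IL6"), ("NFKB1", "TNF"), ("NFKB1", "IL1B"),
             ("IL6", "IL6R"), ("TNF", "TRAF6"), ("NFKB1", "TRAF6")]
print(hub_proteins(toy_edges, top_n=2))
```

Real PPI analyses use weighted edges and centrality measures beyond degree, but the principle of ranking nodes by connectivity is the same.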
Procedia PDF Downloads 85
4058 The Effects of Continuous and Interval Aerobic Exercises with Moderate Intensity on Serum Levels of Glial Cell Line-Derived Neurotrophic Factor and Aerobic Capacity in Obese Children
Authors: Ali Golestani, Vahid Naseri, Hossein Taheri
Abstract:
Recently, several studies have examined the effect of exercise on neurotrophic factors that influence the growth, protection, plasticity, and function of central and peripheral nerve cells. The aim of this study was to investigate the effects of continuous and interval aerobic exercises of moderate intensity on serum levels of glial cell line-derived neurotrophic factor (GDNF) and aerobic capacity in obese children. Twenty-one obese students with an average age of 13.6 ± 0.5 years, height of 171 ± 5 cm, and BMI of 32 ± 1.2 kg/m² were randomly divided into control, continuous aerobic, and interval aerobic groups. The training protocol consisted of continuous or interval aerobic exercises at moderate intensity (50-65% MHR), three times per week for 10 weeks. Forty-eight hours before and after the protocol, blood samples were taken from the participants and their serum GDNF levels were measured by ELISA. Aerobic power was estimated using the shuttle-run test. T-test results indicated a small increase in serum GDNF levels that was not statistically significant (p = 0.11). In addition, the results of ANOVA did not show any significant difference between continuous and interval aerobic training in serum GDNF levels, but aerobic capacity significantly increased (p = 0.012). Although continuous and interval aerobic exercise improve aerobic power in obese children, they had no significant effect on serum levels of GDNF.
Keywords: aerobic power, continuous aerobic training, glial cell line-derived neurotrophic factor (GDNF), interval aerobic training, obese children
Procedia PDF Downloads 176
4057 On the Exergy Analysis of the Aluminum Smelter
Authors: Ayoola T. Brimmo, Mohamed I. Hassan
Abstract:
The push to mitigate the aluminum smelting industry’s enormous energy consumption and high emission releases has become even more persistent in light of recent climate change developments. Common approaches have focused on improving energy efficiency in the pot line and cast house sections of the smelter. However, conventional energy efficiency analyses are based on the first law of thermodynamics, which does not shed proper light on where the smelter degrades energy; they give only a general idea of the furnace’s performance, with no reference to the locations where improvement is possible based on the second law of thermodynamics. In this study, we apply exergy analyses to the pot line and cast house sections of the smelter to identify the locality and causes of energy degradation. The exergy analyses, which are based on real-life smelter conditions, highlight the possible locations for technology improvement in a typical smelter. With this established, methods of minimizing the smelter’s exergy losses are assessed.
Keywords: exergy analysis, electrolytic cell, furnace, heat transfer
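The distinction between first-law and second-law accounting can be made concrete with the exergy of a heat stream, Ex = Q(1 - T0/T). The sketch below uses illustrative temperatures of ours, not values from the study:

```python
def heat_exergy(q_kw, t_source_k, t0_k=298.15):
    """Exergy (useful work potential, kW) of heat q_kw delivered at
    temperature t_source_k (K), for dead-state temperature t0_k (K).
    First-law accounting counts all of q_kw as energy; the Carnot
    factor (1 - T0/T) shows how much of it is actually work potential."""
    return q_kw * (1.0 - t0_k / t_source_k)

# The same 100 kW of heat carries far more exergy at a pot-line-like
# temperature (~1233 K) than at a cast-house cooling temperature (400 K).
print(round(heat_exergy(100.0, 1233.15), 1))
print(round(heat_exergy(100.0, 400.0), 1))
```

This is why a first-law balance alone cannot locate degradation: two streams with equal energy content can have very different exergy.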
Procedia PDF Downloads 286
4056 Estimation of Dynamic Characteristics of a Middle-Rise Steel Reinforced Concrete Building Using Long-Term
Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu
Abstract:
In the earthquake-resistant design of buildings, evaluation of vibration characteristics is important. In recent years, with the increase in super high-rise buildings, evaluating the response of not only the first mode but also higher modes has become important. Knowledge of the vibration characteristics of buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, the characteristics of the first and second modes were studied using earthquake observation records of an SRC building, applying a frequency filter to an ARX model. First, we studied the change of the eigen frequency and the damping ratio during the 3.11 earthquake. The eigen frequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this time, the decreasing rates of the 1st and 2nd eigen frequencies are both about 0.7. Although the damping ratio has a larger error than the eigen frequency, both the 1st and 2nd damping ratios are 3 to 5%. Also, there is a strong correlation between the 1st and 2nd eigen frequencies, and the regression line is y = 3.17x. For the damping ratio, the regression line is y = 0.90x; therefore, the 1st and 2nd damping ratios are approximately the same. Next, we studied the eigen frequency and damping ratio from 1998 through 2014, covering the period after the 3.11 earthquake, with all considered earthquakes connected in order of occurrence. The eigen frequency slowly declined from immediately after completion and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigen frequencies until about 7 years later are about 0.8. The 1st and 2nd damping ratios are both about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd increases by less than 1%.
For the eigen frequency, there is a strong correlation between the 1st and 2nd modes, and the regression line is y = 3.17x. For the damping ratio, the regression line is y = 1.01x; therefore, the 1st and 2nd damping ratios are approximately the same. Based on the above results, the changes in eigen frequency and damping ratio are summarized as follows. In the long-term study of the eigen frequency, both the 1st and 2nd gradually declined from immediately after completion and tended to stabilize after a few years; they declined further after the 3.11 earthquake. In addition, there is a strong correlation between the 1st and 2nd, and the declining time and the decreasing rate are of the same degree. In the long-term study of the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%; the two remain approximately the same degree.
Keywords: eigen frequency, damping ratio, ARX model, earthquake observation records
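Once an ARX model has been identified, the eigen frequency and damping ratio follow from its discrete-time poles via the continuous-time equivalent s = ln(z)/Δt. The sketch below round-trips an illustrative pole (the mode values and sampling period are ours, not the building's):

```python
import cmath
import math

def modal_params(pole_z, dt):
    """Eigen frequency (Hz) and damping ratio from a discrete-time
    pole z of an identified ARX/AR model sampled every dt seconds,
    using the continuous-time equivalent s = ln(z) / dt."""
    s = cmath.log(pole_z) / dt
    wn = abs(s)            # natural circular frequency (rad/s)
    zeta = -s.real / wn    # damping ratio
    return wn / (2.0 * math.pi), zeta

# Illustrative pole for a ~1 Hz mode with 3% damping, sampled at 100 Hz:
dt = 0.01
wn_true, zeta_true = 2.0 * math.pi * 1.0, 0.03
s_true = complex(-zeta_true * wn_true,
                 wn_true * math.sqrt(1.0 - zeta_true ** 2))
z = cmath.exp(s_true * dt)
print(modal_params(z, dt))
```

The conversion recovers the underlying mode exactly as long as the mode is sampled well above its frequency (so ln(z) stays on the principal branch).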
Procedia PDF Downloads 215
4055 Three-Dimensional Numerical Simulation of Drops Suspended in Poiseuille Flow: Effect of Reynolds Number
Authors: A. Nourbakhsh
Abstract:
A finite difference/front tracking method is used to study the motion of three-dimensional deformable drops suspended in plane Poiseuille flow at non-zero Reynolds numbers. A parallel version of the code was used to study the behavior of the suspension at a reasonable grid resolution. The viscosity and density of the drops are assumed to be equal to those of the suspending medium. The effect of the Reynolds number is studied in detail. It is found that drops with small deformation behave like rigid particles and migrate to an equilibrium position about halfway between the wall and the centerline (the Segre-Silberberg effect). However, highly deformable drops tend to migrate to the middle of the channel, and the maximum concentration occurs at the centerline. The effective viscosity of the suspension and the fluctuation energy of the flow across the channel increase with the Reynolds number of the flow.
Keywords: suspensions, Poiseuille flow, effective viscosity, Reynolds number
Procedia PDF Downloads 353
4054 Enhanced Cytotoxic Effect of Expanded NK Cells with IL12 and IL15 from Leukoreduction Filter on K562 Cell Line Exhibits Comparable Cytotoxicity to Whole Blood
Authors: Abdulbaset Mazarzaei
Abstract:
Natural killer (NK) cells are innate immune effectors that play a pivotal role in combating tumors and infected cells. In recent years, the therapeutic potential of NK cells has gained significant attention due to their remarkable cytotoxic ability. This study investigates the cytotoxic effect of expanded NK cells enriched with interleukin 12 (IL12) and interleukin 15 (IL15), derived from the leukoreduction filter, on the K562 cell line. First, NK cells were isolated from whole blood samples obtained from healthy volunteers. These cells were subsequently expanded ex vivo using a combination of feeder cells, IL12, and IL15. The expanded NK cells were then harvested and assessed for cytotoxicity against K562, a well-established human chronic myelogenous leukemia cell line, using a flow cytometry assay. The results demonstrate that the expanded NK cells exhibited significantly enhanced cytotoxicity against K562 cells compared to non-expanded NK cells. Interestingly, the expanded NK cells derived from the IL12- and IL15-enriched leukoreduction filters showed a robust cytotoxic effect similar to that of whole blood-derived NK cells. These findings suggest that IL12 and IL15 in the leukoreduction filter are crucial in promoting NK cell cytotoxicity. Furthermore, the expanded NK cells displayed cytotoxicity profiles similar to those of whole blood-derived NK cells, indicating a comparable capability to target and eliminate tumor cells. This observation is of significant relevance, as expanded NK cells from the leukoreduction filter could potentially serve as a readily accessible and efficient source for adoptive immunotherapy. In conclusion, this study highlights the significant cytotoxic effect of expanded NK cells enriched with IL12 and IL15 obtained from the leukoreduction filter on the K562 cell line.
Moreover, it emphasizes that these expanded NK cells exhibit cytotoxicity comparable to whole blood-derived NK cells. These findings reinforce the potential clinical utility of expanded NK cells from the leukoreduction filter as an effective strategy in adoptive immunotherapy for the treatment of cancer. Further studies are warranted to explore the broader implications of this approach in clinical settings.
Keywords: natural killer (NK) cells, cytotoxicity, leukoreduction filter, IL-12 and IL-15 cytokines
Procedia PDF Downloads 62
4053 Use of Real Time Ultrasound for the Prediction of Carcass Composition in Serrana Goats
Authors: Antonio Monteiro, Jorge Azevedo, Severiano Silva, Alfredo Teixeira
Abstract:
The objective of this study was to compare carcass and in vivo real-time ultrasound (RTU) measurements and their capacity to predict the composition of Serrana goats up to 40% of maturity. Twenty-one females (11.1 ± 3.97 kg) and twenty-one males (15.6 ± 5.38 kg) were used for in vivo measurements with a 5 MHz probe (ALOKA 500V scanner) at the 9th-10th and 10th-11th thoracic vertebrae (uT910 and uT1011, respectively), at the 1st-2nd, 3rd-4th, and 4th-5th lumbar vertebrae (uL12, uL34, and uL45, respectively), and also at the 3rd-4th sternebrae (EEST). Images were recorded of the RTU measurements of the Longissimus thoracis et lumborum (LTL) muscle depth (EM), width (LM), perimeter (PM), and area (AM), of the subcutaneous fat thickness (SFD) above the LTL, and of the depth of the tissues of the sternum (EEST) between the 3rd-4th sternebrae. All RTU images were analyzed using the ImageJ software. After slaughter, the carcasses were stored at 4 ºC for 24 h. After this period, the carcasses were divided, and the left half was entirely dissected into muscle, dissected fat (subcutaneous plus intermuscular fat), and bone. Prior to dissection, measurements equivalent to those obtained in vivo with RTU were recorded. Using Statistica 5, correlation and regression analyses were performed. The prediction of carcass composition was achieved by a stepwise regression procedure, with live weight and RTU measurements with and without transformation of variables to the same dimension. The RTU and carcass measurements, except for the SFD measurements, showed high correlation (r > 0.60, P < 0.001). The RTU measurements and the live weight showed the ability to predict carcass composition in terms of muscle (R² = 0.99, P < 0.001), subcutaneous fat (R² = 0.41, P < 0.001), intermuscular fat (R² = 0.84, P < 0.001), dissected fat (R² = 0.71, P < 0.001), and bone (R² = 0.94, P < 0.001).
The transformation of variables allowed a slight increase in precision, but at the cost of an increased number of variables, with the exception of the subcutaneous fat prediction. In vivo RTU measurements can be applied to predict kid goat carcass composition from five RTU measurements and the live weight.
Keywords: carcass, goats, real time, ultrasound
Procedia PDF Downloads 259
4052 Oil Reservoir Asphaltene Precipitation Estimation during CO2 Injection
Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari
Abstract:
In this paper, an Artificial Neural Network (ANN) was developed to predict Asphaltene Precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. In this study, experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network with the Trainlm training algorithm was found to be the best network to estimate the AP. To check the validity of the proposed model, the model was used to predict the AP for the remaining thirty percent of the data, which was not seen during training. The Mean Square Error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with the predictions of a modified Hirschberg model. The ANN was found to provide more accurate estimates than the modified Hirschberg model. Finally, the proposed model was employed to examine the effect of different operating parameters during gas injection on the AP. It was found that the AP is most sensitive to the reservoir temperature. Furthermore, increasing the carbon dioxide concentration in the liquid phase increases the AP.
Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs
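The 70/30 hold-out split and MSE validation step described above is straightforward to sketch. The data and the placeholder predictor below are synthetic stand-ins of ours, not the six oil fields' measurements or the trained network:

```python
import random

def split_and_mse(pairs, train_frac=0.7, seed=0):
    """Shuffle (input, target) pairs, hold out 1 - train_frac of them
    for validation, and report the mean square error of a model on the
    held-out set. The identity function stands in for a trained ANN."""
    rng = random.Random(seed)
    data = pairs[:]
    rng.shuffle(data)
    n_train = int(train_frac * len(data))
    held_out = data[n_train:]
    predict = lambda x: x          # placeholder for the trained network
    return sum((predict(x) - y) ** 2 for x, y in held_out) / len(held_out)

pairs = [(x, x) for x in range(100)]   # a perfect 'model' gives MSE 0
print(split_and_mse(pairs))
```

The reported MSE of 0.0018 corresponds to exactly this kind of held-out-set evaluation, with the ANN in place of the placeholder.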
Procedia PDF Downloads 363
4051 Predicting Hydrate Deposition in Loading and Offloading Flowlines of Marine CNG Systems
Authors: Esam I. Jassim
Abstract:
The main aim of this paper is to demonstrate the model's capability to predict the nucleation process, the growth rate, and the deposition potential of second-phase particles in gas flowlines. The primary objective of the research is to predict the risk hazards involved in the marine transportation of compressed natural gas. However, the proposed model can be equally used for other applications, including the production and transportation of natural gas in any high-pressure flowline. The proposed model employs three main components to approach the problem: a computational fluid dynamics (CFD) technique is used to configure the flow field; a nucleation model is developed and incorporated in the simulation to predict the incipient hydrate particle size and growth rate; and the deposition of the gas/particle flow is modeled using the concept of the particle deposition velocity. These components are integrated into a comprehensive model to locate hydrate deposition in natural gas flowlines. The present research aims to foresee the deposition location of solid particles as could occur in a real compressed natural gas loading and offloading application. A pipeline 120 m in length and of various sizes carrying natural gas is considered in the study. The location of particle deposition formed as a result of a restriction is determined based on the procedure mentioned earlier, and the effect of water content and downstream pressure is studied. The critical flow speed that prevents such particles from accumulating over a given pipe length is also addressed.
Keywords: hydrate deposition, compressed natural gas, marine transportation, oceanography
Procedia PDF Downloads 485
4050 Numerical Prediction of Effects of Location of Across-the-Width Laminations on Tensile Properties of Rectangular Wires
Authors: Kazeem K. Adewole
Abstract:
This paper presents a finite element (FE) analysis numerical investigation of the effects of the location of across-the-width laminations on the tensile properties of rectangular wires for civil engineering applications. FE analysis revealed that the presence of a mid-thickness across-the-width lamination changes the cup-and-cone fracture shape exhibited by the lamination-free wire to a V-shaped fracture with an opening at the bottom (pointed end) of the V at the location of the lamination. FE analysis also revealed that the presence of a mid-width across-the-thickness lamination changes the cup-and-cone fracture shape of the lamination-free wire, which has no opening, to a cup-and-cone fracture with an opening at the location of the lamination. The FE fracture behaviour prediction approach presented in this work serves as a tool for the failure analysis of wires with laminations at different orientations, which cannot be conducted experimentally.
Keywords: across-the-width lamination, tensile properties, lamination location, wire
Procedia PDF Downloads 473
4049 Additive Weibull Model Using Warranty Claim and Finite Element Analysis Fatigue Analysis
Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate
Abstract:
This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models to forecast the failure rate of parts. But there is one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product, most of the time covering only the infant mortality and useful life zones of a bathtub curve. Predicting with warranty data alone therefore does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime. For better predictability of the failure rate, one needs to explore the failure rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples until failure, but FEA fatigue analysis can provide the failure rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model, which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part: one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The parameters of the two separate Weibull models are estimated and combined to form the proposed Additive Weibull Model for prediction.
Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull
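In an additive Weibull model, the total hazard is the sum of the two Weibull hazards, so the reliability is the product of the two Weibull survival functions. A minimal sketch, with shape/scale values that are illustrative rather than estimates from the paper:

```python
import math

def additive_weibull_reliability(t, beta1, eta1, beta2, eta2):
    """Reliability R(t) when the total hazard rate is the sum of two
    Weibull hazards: one fitted to warranty claims (early life) and
    one to FEA fatigue life (wear-out). Then
        R(t) = exp(-(t/eta1)**beta1 - (t/eta2)**beta2)."""
    return math.exp(-((t / eta1) ** beta1) - ((t / eta2) ** beta2))

# An early-life Weibull (shape < 1) added to a wear-out Weibull
# (shape > 1) reproduces a bathtub-shaped total hazard.
r = additive_weibull_reliability(t=1000.0, beta1=0.8, eta1=50000.0,
                                 beta2=3.5, eta2=8000.0)
print(r)
```
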
Procedia PDF Downloads 72
4048 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving otherwise necessary quality control can be exploited via the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. Training prediction models, however, requires the highest possible generalisability, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science; as in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
Procedia PDF Downloads 143
4047 COVID-19 Analysis with Deep Learning Model Using Chest X-Ray Images
Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam
Abstract:
The COVID-19 disease is a highly contagious viral infection with major worldwide health implications, and the global economy suffers as a result. The spread of this pandemic disease can be slowed if positive patients are found early, and COVID-19 prediction is beneficial for identifying patients at risk. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in mitigating the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognize such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective in COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images.
Keywords: deep CNN, COVID-19 analysis, feature extraction, feature map, accuracy
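The reported accuracies reduce to the proportion of correct predictions over a labeled set. A sketch of that metric for the two-class (normal vs. COVID-impacted) setting, with toy labels of ours rather than the study's 6000-image dataset:

```python
def accuracy(y_true, y_pred):
    """Fraction of images whose predicted class (0 = normal,
    1 = COVID-impacted) matches the ground-truth label."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]
print(accuracy(y_true, y_pred))   # 6 of 8 predictions are correct
```

For imbalanced medical datasets, accuracy alone can be misleading, which is why sensitivity and specificity are usually reported alongside it.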
Procedia PDF Downloads 77
4046 Jitter Based Reconstruction of Transmission Line Pulse Using On-Chip Sensor
Authors: Bhuvnesh Narayanan, Bernhard Weiss, Tvrtko Mandic, Adrijan Baric
Abstract:
This paper discusses a method to reconstruct internal high-frequency signals in an IC through subsampling techniques using an on-chip sensor. Though existing methods can internally probe and reconstruct high-frequency signals through subsampling, these methods have been applicable mainly to synchronized systems. This paper demonstrates a method for making such non-intrusive on-chip reconstructions possible also in non-synchronized systems. The transmission line pulse (TLP) is used for the experimental validation of the concept. The on-chip sensor measures the voltage at an internal node. The jitter in the input pulse causes a varying pulse delay with respect to the on-chip sampling command. By measuring this pulse delay and correlating it with the measured on-chip voltage, time-domain waveforms can be reconstructed, and the influence of the pulse on the internal nodes can be better understood.
Keywords: on-chip sensor, jitter, transmission line pulse, subsampling
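The reconstruction principle can be sketched as follows: each repetition samples the pulse at a slightly different, jitter-induced offset, so sorting the measured (delay, voltage) pairs by delay recovers the time-domain waveform on a fine, effective time axis. The pulse shape and jitter statistics below are synthetic, not the paper's measurements:

```python
import random

def pulse(t):
    """Synthetic rectangular TLP-like pulse: 1 V for 10 ns <= t < 60 ns."""
    return 1.0 if 10e-9 <= t < 60e-9 else 0.0

def reconstruct(n_repeats=2000, seed=1):
    """Equivalent-time reconstruction sketch: the on-chip sampling
    command fires at a fixed instant, but jitter shifts each repeated
    pulse by a random, *measured* delay. Sorting the (delay, voltage)
    pairs by the measured delay rebuilds the waveform."""
    rng = random.Random(seed)
    t_sample = 35e-9                        # fixed sampling instant
    samples = []
    for _ in range(n_repeats):
        delay = rng.uniform(-30e-9, 30e-9)  # measured jitter-induced delay
        voltage = pulse(t_sample - delay)   # what the on-chip sensor reads
        samples.append((delay, voltage))
    samples.sort()                          # order by measured delay
    return samples

wave = reconstruct()
```

The key point, matching the abstract, is that no synchronization is needed: the delay is measured per repetition rather than stepped deterministically as in conventional equivalent-time sampling.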
Procedia PDF Downloads 143
4045 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process
Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek
Abstract:
As big data analysis becomes important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and the analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Many studies have been conducted using die data from the semiconductor test process. However, such analysis has limitations, as the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. In the third phase, patterns that affect the final test result are found. Finally, the proposed three steps were applied to actual industrial data, and the experimental results showed the potential for field application.
Keywords: die-map clustering, feature extraction, pattern recognition, semiconductor manufacturing process
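The first phase (building a die map from fail-bit data per sub-area) amounts to turning each die into a fixed-length feature vector that the clustering phase can consume. A sketch with a toy 2x2 sub-area grid; the grid size, die size, and fail coordinates are illustrative choices of ours:

```python
def die_feature_vector(fail_bits, grid=(2, 2), die_size=(100, 100)):
    """Phase 1 sketch: map each fail-bit coordinate (x, y) on the die
    into a sub-area of a rows x cols grid and count failures per
    sub-area, yielding a feature vector (row-major order) that
    downstream clustering (phase 2) can consume."""
    rows, cols = grid
    width, height = die_size
    counts = [0] * (rows * cols)
    for x, y in fail_bits:
        r = min(y * rows // height, rows - 1)
        c = min(x * cols // width, cols - 1)
        counts[r * cols + c] += 1
    return counts

# A die whose failures cluster in the top-left sub-area:
fails = [(5, 8), (12, 20), (30, 40), (90, 95)]
print(die_feature_vector(fails))
```

Dies with similar fail-bit spatial patterns then end up with similar vectors, which is what makes the subsequent clustering meaningful.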
Procedia PDF Downloads 401
4044 Variability Management of Contextual Feature Model in Multi-Software Product Line
Authors: Muhammad Fezan Afzal, Asad Abbas, Imran Khan, Salma Imtiaz
Abstract:
The Software Product Line (SPL) paradigm is used for the development of a family of software products that share common and variable features. A feature model is an SPL artifact that consists of common and variable features with predefined relationships and constraints. Multiple SPLs consist of a number of similar common and variable features, such as mobile phones and tablets. Reusing common and variable features from the different domains of an SPL is a complex task due to the external relationships and constraints between features in the feature model. To increase the reusability of feature model resources from domain engineering, it is necessary to manage the commonality of features at the level of SPL application development. In this research, we propose an approach that combines multiple SPLs into a single domain and converts them into a common feature model. Extracting the common features from different feature models is more effective and reduces the cost and time to market of application development. For extracting features from multiple SPLs, the proposed framework consists of three steps: 1) find the variation points, 2) find the constraints, and 3) combine the feature models into a single feature model on the basis of the variation points and constraints. With this approach, the reusability of features from multiple feature models can be increased. The impact of this research is to reduce development cost and time to market and to increase the number of products of the SPL.
Keywords: software product line, feature model, variability management, multi-SPLs
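The three steps can be sketched with plain set operations. The representation of a feature model as (common, variable, constraints) triples, the "requires" constraint pairs, and the feature names are simplifications of ours, not the paper's notation:

```python
def combine_feature_models(models):
    """Merge several SPL feature models into one.
    Each model is a triple (common_features, variable_features,
    constraints), where constraints are 'requires' pairs.
    Step 1: features not common to every SPL become variation points.
    Step 2: the constraints of all models are pooled.
    Step 3: both results form the single combined feature model."""
    all_common = set.intersection(*(set(m[0]) for m in models))
    all_features = set().union(*(set(m[0]) | set(m[1]) for m in models))
    variation_points = all_features - all_common              # step 1
    constraints = set().union(*(set(m[2]) for m in models))   # step 2
    return all_common, variation_points, constraints          # step 3

phone = ({"call", "screen"}, {"camera"}, {("camera", "screen")})
tablet = ({"screen", "wifi"}, {"camera", "pen"}, {("pen", "screen")})
common, variable, constraints = combine_feature_models([phone, tablet])
print(sorted(common), sorted(variable))
```

A full implementation would also check the pooled constraints for contradictions, which this sketch omits.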
Procedia PDF Downloads 65
4043 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics
Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy
Abstract:
Every industry constantly looks to enhance its day-to-day production and productivity, which is possible only by maintaining the men and machinery at an adequate level. Prediction of performance characteristics plays an important role in the performance evaluation of equipment. Analytical and statistical approaches take more time to solve complex problems such as performance estimation than software-based approaches do. With this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability and preventive maintenance (PM). A feed-forward back-propagation ANN trained with the Levenberg-Marquardt (LM) algorithm has been used. The performance characteristics were computed using Isograph Reliability Workbench 13.0 software, and these computed values were validated against the predicted output responses of the ANN models. Further, recommendations for improving equipment performance are given to the industry based on the analysis performed.
Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance
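A minimal sketch of the modelling idea, on synthetic data rather than the study's LHD records: a feed-forward network mapping machine parameters to an availability-like response. Note that scikit-learn has no Levenberg-Marquardt trainer, so the 'lbfgs' solver is used here as a stand-in; the input ranges are assumptions.

```python
# Hedged sketch (synthetic data, lbfgs instead of LM): predict availability
# = uptime / (uptime + downtime) from [operating hours, repair hours].
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform([100, 1], [1000, 50], size=(200, 2))  # 200 observations
y = X[:, 0] / (X[:, 0] + X[:, 1])                     # availability, the usual definition

model = make_pipeline(
    StandardScaler(),                                  # scale inputs for stable training
    MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                 max_iter=5000, random_state=1),
).fit(X, y)

pred = model.predict([[500.0, 10.0]])[0]
print(f"predicted availability: {pred:.3f}")
```

The same template applies to reliability or PM responses by swapping the target variable.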
Procedia PDF Downloads 147
4042 Clinical Prediction Rules for Using Open Kinetic Chain Exercise in Treatment of Knee Osteoarthritis
Authors: Mohamed Aly, Aliaa Rehan Youssef, Emad Sawerees, Mounir Guirgis
Abstract:
Relevance: Osteoarthritis (OA) is the most common degenerative disease, seen in all populations. It causes disability and a substantial socioeconomic burden. Evidence supports exercise as the most effective conservative treatment for patients with OA. Therapists' experience and clinical judgment play a major role in exercise prescription, and scientific evidence in this regard is lacking. The development of clinical prediction rules to identify the patients most likely to benefit from exercise may help solve this dilemma. Purpose: This study investigated whether body mass index (BMI) and functional ability at baseline can predict patients' response to a selected exercise program. Approach: Fifty-six patients, aged 35 to 65 years, completed an exercise program consisting of open kinetic chain strengthening and passive stretching exercises. The program was given for 3 sessions per week, 45 minutes per session, for 6 weeks. Evaluation: At baseline and post treatment, pain severity was assessed using the numerical pain rating scale, whereas functional ability was assessed by the step test (ST), timed up and go test (TUG) and 50-foot timed walk test (50 FTW). After completing the program, a global rate of change (GROC) score greater than 4 was used to categorize patients as successful or non-successful. Thirty-eight patients (68%) had a successful response to the intervention. Logistic regression showed that BMI and the 50 FTW test were the only significant predictors. Based on the results, patients with a BMI less than 34.71 kg/m² and a 50 FTW time less than 25.64 sec are 68% to 89% more likely to benefit from the exercise program. Conclusions: Clinicians should consider the described strengthening and flexibility exercise program for patients with a BMI less than 34.71 kg/m² and a 50 FTW time faster than 25.64 seconds. The validity of these predictors should be investigated for other exercises.
Keywords: clinical prediction rule, knee osteoarthritis, physical therapy exercises, validity
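The reported rule can be applied as a simple two-predictor screen. The thresholds below come from the abstract; the function and example patients are illustrative, not a validated clinical tool:

```python
# Hedged sketch: apply the clinical prediction rule as a screening check.
def likely_responder(bmi_kg_m2: float, ftw50_sec: float) -> bool:
    """True when both predictors fall below the reported cut-offs."""
    return bmi_kg_m2 < 34.71 and ftw50_sec < 25.64

print(likely_responder(30.0, 22.0))  # both below the cut-offs -> True
print(likely_responder(36.0, 22.0))  # BMI above the cut-off   -> False
```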
Procedia PDF Downloads 420
4041 The Application of Artificial Neural Networks for the Performance Prediction of Evacuated Tube Solar Air Collector with Phase Change Material
Authors: Sukhbir Singh
Abstract:
This paper describes the modeling of a novel solar air collector (NSAC) system using an artificial neural network (ANN) model. The objective of the study is to demonstrate the application of the ANN model to predict the performance of the NSAC with acetamide as a phase change material (PCM) for storage. The input data set consisted of time, solar intensity and ambient temperature, while the outlet air temperature of the NSAC was taken as the output. Experiments were conducted between 9.00 and 24.00 h in June and July 2014 under the prevailing atmospheric conditions of Kurukshetra (a city in India). The experimental results were then used to train a back-propagation neural network (BPNN) to predict the outlet air temperature of the NSAC. The results show that the BPNN is an effective tool for the prediction of responses; the BPNN predictions are in 99% agreement with the experimental results.
Keywords: evacuated tube solar air collector, artificial neural network, phase change material, solar air collector
Procedia PDF Downloads 119
4040 Coherent All-Fiber and Polarization Maintaining Source for CO2 Range-Resolved Differential Absorption Lidar
Authors: Erwan Negre, Ewan J. O'Connor, Juha Toivonen
Abstract:
The need for CO2 monitoring technologies grows together with worldwide concern over environmental challenges. To that purpose, we developed a compact coherent all-fiber range-resolved Differential Absorption Lidar (RR-DIAL). It is built around a tunable 2×1 fiber optic switch, set to a frequency of 1 Hz, between two Distributed FeedBack (DFB) lasers emitting in continuous-wave mode at 1571.41 nm (an absorption line of CO2) and 1571.25 nm (a CO2 absorption-free line), with a linewidth of 1 MHz and a tuning range of 3 nm over the operating wavelength. Three-stage amplification through Erbium and Erbium-Ytterbium doped fibers, coupled to a Radio Frequency (RF) driven Acousto-Optic Modulator (AOM), generates 100 ns pulses at a repetition rate of 10 to 30 kHz with a peak power up to 2.5 kW and a spatial resolution of 15 m, allowing fast and highly resolved CO2 profiles. The same afocal collection system is used for the output of the laser source and for the backscattered light, which is directed to a circulator before being mixed with the local oscillator for heterodyne detection. Packaged in an easily transportable box that also includes a server and a Field Programmable Gate Array (FPGA) card for on-line data processing and storage, our setup allows quick and effective deployment for versatile in-situ analysis, whether vertical atmospheric monitoring, large-field mapping or continuous oversight of sequestration sites. Setup operation and results from initial field measurements will be discussed.
Keywords: CO2 profiles, coherent DIAL, in-situ atmospheric sensing, near infrared fiber source
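As a back-of-the-envelope check (not part of the instrument code), the 15 m spatial resolution quoted above follows directly from the 100 ns pulse duration:

```python
# Lidar range resolution: half the pulse length in the medium,
# delta_R = c * tau / 2. With tau = 100 ns this gives about 15 m.
C = 299_792_458.0          # speed of light, m/s

def range_resolution(pulse_s: float) -> float:
    return C * pulse_s / 2.0

dr = range_resolution(100e-9)
print(f"range resolution: {dr:.1f} m")  # 15.0 m
```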
Procedia PDF Downloads 127
4039 The LMPA/Epoxy Mixture Encapsulation of OLED on Polyimide Substrate
Authors: Chuyi Ye, Minsang Kim, Cheol-Hee Moon
Abstract:
The organic light emitting diode (OLED) is a potential organic optical functional material, considered the next-generation display technology, with advantages such as an all-solid state, ultra-thin thickness, active emission and flexibility. The development of polymer-inorganic substrates has made flexible OLED displays possible. However, organic light-emitting materials are very sensitive to oxygen and water vapor, and encapsulation requires a water vapor transmission rate (WVTR) and oxygen transmission rate (OTR) as low as 10⁻⁶ g/(m²·d) and 10⁻⁵ cm³/(m²·d), respectively. At present, these rigorous WVTR and OTR requirements restrict the application of OLED displays. Traditional epoxy/getter or glass frit approaches, which have been widely applied to glass-substrate-based devices, are not suitable for transparent flexible organic devices, and mechanically flexible thin-film approaches are required. To ensure the OLED's lifetime, the encapsulation material of the OLED package is very important. In this paper, a low melting point alloy (LMPA)-epoxy mixture is introduced into the encapsulation process. A phase separation occurs when the mixture is heated to the melting point of the LMPA, forming a double-line structure between the two substrates: the alloy barrier has extremely low WVTR and OTR, and the epoxy fills any potential tiny cracks. In our experiment, PI film is chosen as the flexible transparent substrate, and Mo and Cu are deposited on the PI film successively. The two metal layers are then photolithographed into the sealing pattern line. The Mo acts as a transition layer between the PI film and the Cu, while the Cu has good wettability with the LMPA (Sn-58Bi). Finally, the pattern is printed with the LMPA layer and a voltage is applied; the resulting Joule heat melts the LMPA, forms the double-line structure, and seals the OLED package in the same step.
In this research, the double-line encapsulating structure of LMPA and epoxy on PI film is manufactured for flexible OLED encapsulation, and it is investigated whether this encapsulation satisfies the WVTR and OTR requirements for flexible OLEDs.
Keywords: encapsulation, flexible, low melting point alloy, OLED
Procedia PDF Downloads 596
4038 The Dynamics of a Droplet Spreading on a Steel Surface
Authors: Evgeniya Orlova, Dmitriy Feoktistov, Geniy Kuznetsov
Abstract:
Spreading of a droplet over a solid substrate is a key phenomenon in the following engineering applications: thin film coating, oil extraction, inkjet printing, and spray cooling of heated surfaces. Droplet cooling systems are known to be more effective than film or rivulet cooling systems. This is caused by the greater evaporation surface area of droplets compared with a film of the same mass and wetted surface, and the greater surface area of droplets is connected with the curvature of the interface. The location of the droplets on the cooling surface influences the heat transfer conditions: a close distance between droplets provides intensive heat removal but risks their coalescence into a liquid film, while a long distance leads to overheating of local areas of the cooling surface and the occurrence of thermal stresses. The location of droplets can be controlled by changing the roughness, structure and chemical composition of the surface, and thus control of spreading can be implemented. The most important characteristic of droplet spreading on solid surfaces is the dynamic contact angle, which is a function of the contact line speed or capillary number. However, there is currently no universal equation describing the relationship between these parameters. This paper presents the results of experimental studies of water droplet spreading on metal substrates with different surface roughness. The effect of the droplet growth rate and the surface roughness on spreading characteristics was studied at low capillary numbers. The shadow method was implemented, using high-speed video cameras recording up to 10,000 frames per second. Droplet profiles were analyzed by Axisymmetric Drop Shape Analysis techniques.
According to the change of the dynamic contact angle and the contact line speed, three sequential spreading stages were observed: a rapid increase in the dynamic contact angle; a monotonous decrease in the contact angle and the contact line speed; and the formation of the equilibrium contact angle at a constant contact line. At a low droplet growth rate, the dynamic contact angle of a droplet spreading on the surfaces with the maximum roughness is found to increase throughout the spreading time. This is due to the fact that the friction force on such surfaces is significantly greater than the inertia force, and the contact line is pinned on the microasperities of the relief. At a high droplet growth rate the contact angle decreases during the second stage even on the surfaces with the maximum roughness, as in this case the liquid does not fill the microcavities and the droplet moves over an "air cushion", i.e. the interface is a liquid/gas/solid system. At such growth rates, pulsation of the liquid flow was also detected, and the droplet oscillates during spreading. The obtained results thus allow us to conclude that spreading can be controlled by varying the surface roughness and the droplet growth rate. The findings may also be used for analyzing heat transfer in rivulet and droplet cooling systems of high-energy equipment.
Keywords: contact line speed, droplet growth rate, dynamic contact angle, shadow system, spreading
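The "low capillary number" regime mentioned above can be illustrated with a quick calculation — the fluid properties and contact-line speed below are typical textbook values for water, not measurements from the study:

```python
# Illustrative calculation: the capillary number Ca = mu * U / sigma,
# for water at room temperature and a slow (1 mm/s) contact-line speed.
def capillary_number(viscosity_pa_s: float, speed_m_s: float,
                     surface_tension_n_m: float) -> float:
    return viscosity_pa_s * speed_m_s / surface_tension_n_m

ca = capillary_number(1.0e-3, 1.0e-3, 0.072)   # mu, U, sigma for water
print(f"Ca = {ca:.2e}")                         # 1.39e-05, a low-Ca regime
```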
Procedia PDF Downloads 328
4037 The Theory behind Logistic Regression
Authors: Jan Henrik Wosnitza
Abstract:
Logistic regression has developed into a standard approach for estimating conditional probabilities in a wide range of applications, including credit risk prediction. The article at hand contributes to the current literature on logistic regression in four ways: First, it is demonstrated that binary logistic regression automatically meets its model assumptions under very general conditions. This result explains, at least in part, logistic regression's popularity. Second, the requirement of homoscedasticity in the context of binary logistic regression is theoretically substantiated: the variances among the groups of defaulted and non-defaulted obligors have to be the same across the levels of the aggregated default indicators in order to achieve linear logits. Third, this article sheds some light on the question of why nonlinear logits might be superior to linear logits in the case of a small amount of data. Fourth, an innovative methodology for estimating correlations between obligor-specific log-odds is proposed. In order to crystallize the key ideas, this paper focuses on the example of credit risk prediction; however, the results presented here can easily be transferred to any other field of application.
Keywords: correlation, credit risk estimation, default correlation, homoscedasticity, logistic regression, nonlinear logistic regression
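A minimal sketch of the model the article analyses — a binary logistic regression whose log-odds (logit) are linear in the predictor. The coefficients here are illustrative, not estimated from any credit data:

```python
# P(default | x) = 1 / (1 + exp(-(beta0 + beta1 * x))), with the logit
# beta0 + beta1 * x linear in x — the property the homoscedasticity
# discussion above hinges on. Coefficients are made up for illustration.
import math

def default_probability(x: float, beta0: float = -2.0, beta1: float = 1.5) -> float:
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

p = default_probability(0.0)
print(f"{p:.3f}")                      # logit = -2 -> probability 0.119
logit = math.log(p / (1 - p))          # inverting recovers the linear logit
print(f"{logit:.1f}")                  # -2.0
```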
Procedia PDF Downloads 425
4036 Runoff Simulation by Using WetSpa Model in Garmabrood Watershed of Mazandaran Province, Iran
Authors: Mohammad Reza Dahmardeh Ghaleno, Mohammad Nohtani, Saeedeh Khaledi
Abstract:
Hydrological models are applied to the simulation and prediction of floods in watersheds. WetSpa is a distributed, continuous and physically based model with a daily or hourly time step that describes precipitation, runoff and evapotranspiration processes for both simple and complex contexts. The model uses a modified rational method for runoff calculation, and runoff is routed along the flow path using a diffusion-wave equation that depends on the slope, velocity and flow route characteristics. The Garmabrood watershed is located in Mazandaran province in Iran, between coordinates 53° 10´ 55" to 53° 38´ 20" E and 36° 06´ 45" to 36° 25´ 30" N. The area of the catchment is about 1133 km², elevations in the catchment range from 213 m at the outlet to 3136 m, and the average slope is 25.77%. Results of the simulations show a good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated daily hydrographs and maximum flow rate with accuracies of up to 61% and 83.17%, respectively.
Keywords: watershed simulation, WetSpa, runoff, flood prediction
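The calibration metric quoted above can be computed directly. The sketch below implements the standard Nash-Sutcliffe efficiency on small made-up discharge series, not Garmabrood data:

```python
# Nash-Sutcliffe model efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean_obs)^2).
# NSE = 1 is a perfect match; NSE <= 0 means the model is no better
# than predicting the observed mean.
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [10.0, 30.0, 55.0, 40.0, 20.0]   # illustrative discharge values
sim = [12.0, 28.0, 50.0, 42.0, 18.0]
print(round(nash_sutcliffe(obs, sim), 3))  # 0.966
```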
Procedia PDF Downloads 335
4035 Virtual Metrology for Copper Clad Laminate Manufacturing
Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho
Abstract:
In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict the properties of a wafer from machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting process quality, which allow improving the process in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of the printed circuit boards (PCBs) used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, puts resin on glass cloth and heats it in a drying oven, producing prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy cost in terms of time and money, which makes the process a good candidate for VM. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, using process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with high enough accuracy. They also provided information on "important" predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. The engineers were excited by the new insights the models revealed and set out to analyze them further for process control implications.
T/W turned out not to be predictable with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W; thus an effort has to be made to find other factors, not currently monitored, in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models allow one to reduce the cost associated with actual metrology, as well as revealing insights into the factors affecting the important quality factors and into the limits of our current understanding of the treating process.
Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology
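The VM workflow can be sketched on synthetic data — the column names, the random-forest choice, and the "only temperature and resin ratio matter" assumption are all illustrative, not drawn from the CCL dataset or the paper's actual model selection:

```python
# Hedged sketch of the VM idea: fit a regression from process variables to
# a quality factor (an M/V-like response), then read off which predictors
# the model considers important.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
# Assumed columns: oven temperature, line speed, resin ratio, ambient humidity.
X = rng.normal(size=(300, 4))
# Assume only temperature (col 0) and resin ratio (col 2) drive the response.
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(n_estimators=100, random_state=2).fit(X, y)
ranked = np.argsort(model.feature_importances_)[::-1]
print("most important columns:", ranked[:2])   # should recover columns 0 and 2
```

A response like T/W, driven by unmonitored factors, would instead yield low accuracy and near-uniform importances — exactly the diagnostic signal described above.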
Procedia PDF Downloads 349
4034 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs; however, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real time from surface drilling data, with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical surface drilling data, geological formation tops, and petrophysical data, from wells within the same field. The input data are flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical operations (stacking and flattening) to filter noise from the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the live surface data, in Wellsite Information Transfer Standard Markup Language (WITSML) format, are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated with the field's pre-determined stuck-pipe signature in real time. The correlation uses a machine-learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant ones. The correlation output is interpreted as a probability curve for stuck-pipe incidents in real time.
Once this probability passes a fixed, user-defined threshold, the second component, cause analysis, alerts the user to the expected incident based on the pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of stuck pipe problems requires a method to capture geological, geophysical and drilling data and to recognize the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
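The correlation step can be illustrated in a simplified form. Full CFS also penalizes inter-feature redundancy; the sketch below keeps only the relevance half (absolute Pearson correlation with the label), and the channel names and synthetic "sticking" rule are invented, not field data:

```python
# Simplified sketch of the correlation-based relevance ranking behind CFS:
# rank surface-data channels by |Pearson correlation| with a stuck-pipe label.
import numpy as np

rng = np.random.default_rng(3)
n = 500
torque   = rng.normal(size=n)
hookload = rng.normal(size=n)     # deliberately unrelated to the label
rop      = rng.normal(size=n)     # rate of penetration
# Assumed toy rule: sticking risk rises with high torque and falling ROP.
label = (torque - rop + rng.normal(scale=0.5, size=n) > 1.0).astype(float)

channels = {"torque": torque, "hookload": hookload, "rop": rop}
scores = {name: abs(np.corrcoef(sig, label)[0, 1])
          for name, sig in channels.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print("channels by relevance:", ranked)   # hookload should rank last
```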
Procedia PDF Downloads 225
4033 Cooling Profile Analysis of Hot Strip Coil Using Finite Volume Method
Authors: Subhamita Chakraborty, Shubhabrata Datta, Sujay Kumar Mukherjea, Partha Protim Chattopadhyay
Abstract:
The manufacturing of multiphase high-strength steel in a hot strip mill has drawn significant attention due to the possibility of forming low-temperature transformation products of austenite under continuous cooling conditions. In such an endeavor, reliable prediction of the temperature profile of the hot strip coil is essential in order to assess the evolution of the microstructure at different locations of the coil, on the basis of the corresponding Continuous Cooling Transformation (CCT) diagram. The temperature distribution profile of the hot strip coil has been determined using the finite volume method (FVM) vis-à-vis the finite difference method (FDM). It has been demonstrated that FVM offers greater computational reliability in the estimation of contact pressure distribution, and hence of temperature distribution, for curved and irregular profiles, owing to the flexibility in the selection of grid geometry and discrete point positions. Moreover, the finite volume concept allows enforcing the conservation of mass, momentum and energy, leading to enhanced prediction accuracy.
Keywords: simulation, modeling, thermal analysis, coil cooling, contact pressure, finite volume method
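A minimal 1-D sketch of the finite-volume idea — constant properties, explicit time stepping, and a slab rather than coil geometry, so the material values and dimensions here are assumptions, not the paper's model:

```python
# Explicit finite-volume update of a cooling temperature profile: each
# interior control volume exchanges conductive flux with its neighbours,
# so energy is conserved cell by cell.
import numpy as np

alpha = 1.0e-5          # thermal diffusivity, m^2/s (order of steel)
dx, dt = 0.01, 1.0      # 1 cm cells, 1 s time step
r = alpha * dt / dx**2  # explicit scheme needs r < 0.5 for stability
assert r < 0.5

T = np.full(50, 600.0)  # strip initially at 600 C
T[0] = T[-1] = 25.0     # surfaces held at ambient

for _ in range(1000):   # march 1000 s
    flux_in  = T[:-2] - T[1:-1]          # exchange with the left neighbour
    flux_out = T[1:-1] - T[2:]           # exchange with the right neighbour
    T[1:-1] += r * (flux_in - flux_out)  # net energy balance per cell

print(f"centre temperature after 1000 s: {T[25]:.0f} C")
```

For curved coil geometries the same per-cell balance is written over irregular control volumes, which is exactly the flexibility the abstract credits to FVM.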
Procedia PDF Downloads 471
4032 COX-2 Inhibitor NS398 Counteracts Chemoresistance to Temozolomide in T98G Glioblastoma Cell Line
Authors: Francesca Lombardi, Francesca Rosaria Augello, Benedetta Cinque, Maria Grazia Cifone, Paola Palumbo
Abstract:
Glioblastoma multiforme (GBM) is a high-grade primary brain tumor refractory to current forms of treatment. The survival benefit for patients with GBM remains unsatisfactory due to intrinsic or acquired resistance to temozolomide (TMZ), the alkylating agent used as the first-line chemotherapeutic drug in GBM patients. Its cytotoxic effect is exerted through the induction of O6-methylguanine (O6MeG) lesions within DNA. Cyclooxygenase-2 (COX-2), an inflammation-associated enzyme, has been implicated in the tumorigenesis and progression of GBM, and its inhibition shows anticancer activity. In the present study, we verified whether the combination of a COX-2 selective inhibitor, NS398, with TMZ could counteract TMZ resistance. In particular, the effect of NS398 combined with TMZ was investigated in the TMZ-resistant GBM cell line T98G. Cells were pretreated with NS398 (100 µM, 24 hours) and then exposed to TMZ alone (200 µM), NS398 alone, or both for 72 hours, after which the cell growth rate, cell cycle phases, and apoptosis level were evaluated. Co-administration of NS398 and TMZ caused a significant decrease in cell growth and a progressive increase in dead cells as detected by trypan blue staining. Moreover, a significant percentage of apoptotic cells and alterations of cell cycle phases were observed in T98G cells treated with the TMZ-NS398 combination compared to untreated or TMZ-treated cells. TMZ-resistant tumors such as GBM express elevated levels of the DNA repair enzyme O6-methylguanine-DNA methyltransferase (MGMT). The combination drastically reduced MGMT expression in the TMZ-resistant cell line T98G, which is known to express high basal levels of MGMT. Moreover, while TMZ alone did not influence COX-2 protein expression, the combination successfully reduced it.
In conclusion, these results demonstrate that NS398 enhanced the efficacy of TMZ through cell number reduction, apoptosis induction, and decreased MGMT levels, suggesting the ability of the drug combination to reduce chemoresistance. This drug combination deserves attention and could be considered a promising therapeutic strategy for GBM patients.
Keywords: COX-2, COX-2 inhibitor, glioblastoma, NS398, T98G, temozolomide
Procedia PDF Downloads 150
4031 The Effect of Brand Mascots on Consumers' Purchasing Behaviors
Authors: Isari Pairoa, Proud Arunrangsiwed
Abstract:
Brand mascots are cartoon characters designed mainly for advertising or other related marketing purposes. Many brand mascots are extremely popular, since they are presented in commercial advertisements and Line stickers. Brand Line stickers can lead users to identify with the brand and its mascots, which may influence users to become loyal customers and to share an identity with the brand. The objective of the current study is to examine the effect of brand mascots on consumers' decisions and intention to purchase products. The study involved 400 participants, selected by cluster sampling from 50 districts in the Bangkok metropolitan area. The descriptive analysis shows that using a brand mascot fosters consumers' positive attitude toward the products and also heightens the likelihood of purchase. The current study suggests a new type of marketing strategy, namely brand fandom, and contributes knowledge to the areas of integrated marketing communication and identification theory.
Keywords: brand mascot, consumers' behavior, marketing communication, purchasing
Procedia PDF Downloads 258