Search results for: high precision geometric positioning
19075 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation
Authors: Carl van Walraven, Meltem Tuna
Abstract:
Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population (‘concurrent external validation’). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (accuracy of random guessing) to 1 (accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA Score correctly ranked all simulated prediction models by the extent of bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.
Keywords: prediction model accuracy, scaled Brier score, fixed-effects methods, concurrent external validation
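As a reading aid, the Scaled Brier Score on which the NeRMA Score builds can be sketched in a few lines, assuming the standard definition SBS = 1 - BS/BS_ref, where BS_ref is the Brier score of a non-informative predictor that always forecasts the observed event prevalence (the function name and scaling convention below are illustrative, not taken from the paper's SAS macro or R function):

```python
def scaled_brier_score(y_true, y_prob):
    """Scaled Brier Score: 1 - BS / BS_ref.

    BS is the mean squared error of the predicted probabilities;
    BS_ref is the Brier score of a constant prediction equal to the
    observed event prevalence (uninformed guessing).
    """
    n = len(y_true)
    bs = sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / n
    prevalence = sum(y_true) / n
    bs_ref = sum((prevalence - y) ** 2 for y in y_true) / n
    return 1.0 - bs / bs_ref
```

Under this convention a perfect model scores 1, prevalence-only guessing scores 0, and a model worse than guessing scores below 0, matching the -∞-to-1 range described for the NeRMA Score above.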
Procedia PDF Downloads 242
19074 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques
Authors: S. Visetpotjanakit, C. Khrautongkieo
Abstract:
Radiation monitoring in the environment and foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP) as the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and the environment from any radiological incidents. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods, i.e. direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria of accuracy, precision and trueness and obtained ‘Accepted’ status. This confirms the capability of the OAP environmental radiation laboratory to monitor radiation in the environment.
Keywords: International Atomic Energy Agency, proficiency test, radiation monitoring, seawater
Procedia PDF Downloads 177
19073 Effect of Injection Strategy on the Performance and Emission of E85 in a Heavy-Duty Engine under Partially Premixed Combustion
Authors: Amir Aziz, Martin Tuner, Sebastian Verhelst, Oivind Andersson
Abstract:
Partially Premixed Combustion (PPC) is a combustion concept which aims to simultaneously achieve high efficiency and low engine-out emissions. Extending the ignition delay to promote premixing has been recognized as one of the key factors in achieving PPC. Fuels with a high octane number have proven to be good candidates for extending the ignition delay. In this work, E85 (85% ethanol) has been used as a PPC fuel. The aim of this work was to investigate a suitable injection strategy for PPC fueled with E85 in a single-cylinder heavy-duty engine. Single and double injection strategies were applied with different injection timings, and the ratio between the injection pulses was varied. The performance and emissions were investigated at low load. The results show that the double injection strategy should be preferred for PPC fueled with E85 due to low emissions and high efficiency, while keeping the pressure rise rate at very low levels.
Keywords: E85, partially premixed combustion, injection strategy, performance and emission
Procedia PDF Downloads 180
19072 Cooling-Rate Induced Fiber Birefringence Variation in Regenerated High Birefringent Fiber
Authors: Man-Hong Lai, Dinusha S. Gunawardena, Kok-Sing Lim, Harith Ahmad
Abstract:
In this paper, we report birefringence manipulation in a regenerated high-birefringent fiber Bragg grating (RPMG) using a CO₂ laser annealing method. The results indicate that the birefringence of the RPMG remains unchanged after CO₂ laser annealing followed by a slow cooling process, but is reduced after a fast cooling process (~5.6×10⁻⁵). After a series of annealing procedures with different cooling rates, the obtained results show that the slower the cooling rate, the higher the birefringence of the RPMG. Changes in the volume, thermal expansion coefficient (TEC) and glass transition temperature (Tg) of the stress-applying part of the RPMG during the cooling process are responsible for the birefringence change. These findings are therefore important for RPMG sensors in high-temperature and dynamically varying temperature environments, since the measuring accuracy, range and sensitivity of an RPMG sensor are greatly affected by its birefringence value. This work also opens up a new application of CO₂ lasers for fiber annealing and birefringence modification.
Keywords: birefringence, CO₂ laser annealing, regenerated gratings, thermal stress
Procedia PDF Downloads 463
19071 Aspects of Tone in the Educated Nigeria Accent of English
Authors: Nkereke Essien
Abstract:
The study seeks to analyze tone in the Educated Nigerian Accent of English (ENAE) using three tones: Low (L), High (H) and Low-High (LH). The aim is to find out whether there are any differences or similarities between the performance of the experimental group and the control. To achieve this, twenty educated Nigerian speakers of English were selected by a Stratified Random Sampling (SRS) technique from two federal universities in Nigeria. They were given a passage to read, and their intonation patterns were compared with those of a native speaker (control). The data were analyzed using Pierrehumbert’s (1980) intonation system. Three different approaches were employed in the analysis of the Intonation Phrase (IP) as used by Pierrehumbert: perceptual, statistical and acoustic. We first analyzed our data from the passage and utterances using the Wilcoxon Matched Pairs Signed Ranks Test to establish the differences between the performance of the experimental group and the control. Then, one-way analysis of variance (ANOVA) and Tukey-Kramer post hoc tests were used to test for any significant difference in the performances of the twenty subjects. The acoustic data were presented to corroborate both the perceptual and statistical findings. Finally, the tonal patterns of the selected subjects in the three categories - A, B, C - were compared with those of the control. Our findings revealed that the tonal pattern of the Educated Nigerian Accent of English (ENAE) is significantly different from the tonal pattern of the Standard British Accent of English (SBAE) as represented by the control. A high preference for unidirectional tones, especially high tones, was observed in the performance of the experimental group. Also, high tones do not necessarily correspond to stressed syllables, nor low tones to unstressed syllables.
Keywords: accent, intonation phrase (IP), tonal patterns, tone
Procedia PDF Downloads 237
19070 A Query Optimization Strategy for Autonomous Distributed Database Systems
Authors: Dina K. Badawy, Dina M. Ibrahim, Alsayed A. Sallam
Abstract:
A distributed database is a collection of logically related databases that cooperate in a transparent manner. Query processing, which uses a communication network to transmit data between sites, is one of the major challenges in the database world. The development of sophisticated query optimization technology is the reason for the commercial success of database systems, and its complexity and cost increase with the number of relations in the query. Mariposa, query trading, and query trading with processing-task trading are strategies developed for autonomous distributed database systems, but they incur high optimization cost because all nodes are involved in generating an optimal plan. In this paper, we propose a modification of the autonomous strategy K-QTPT that gives the seller nodes with the lowest cost gradually higher priorities, in order to reduce the optimization time. We implemented the proposed strategy and present the results and the analysis based on them.
Keywords: autonomous strategies, distributed database systems, high priority, query optimization
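The priority rule described above can be illustrated with a minimal sketch: rank seller nodes by their bid cost and give the cheapest ones the highest priorities. The node representation and priority scale here are hypothetical; the actual K-QTPT protocol involves query trading rounds and is considerably more involved.

```python
def assign_priorities(seller_costs):
    """Give seller nodes with lower bid costs gradually higher priorities.

    seller_costs: dict mapping node id -> estimated execution cost.
    Returns dict mapping node id -> priority (larger = consulted earlier),
    so the cheapest seller is consulted first, shrinking the search
    for an optimal plan instead of involving all nodes equally.
    """
    ranked = sorted(seller_costs, key=seller_costs.get, reverse=True)
    return {node: rank + 1 for rank, node in enumerate(ranked)}
```

For example, with costs {"A": 30, "B": 10, "C": 20}, node B receives the highest priority and A the lowest.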
Procedia PDF Downloads 528
19069 Distribution of Dynamical and Energy Parameters in Axisymmetric Air Plasma Jet
Authors: Vitas Valinčius, Rolandas Uscila, Viktorija Grigaitienė, Žydrūnas Kavaliauskas, Romualdas Kėželis
Abstract:
Determination of the integral dynamical and energy characteristics of high-temperature gas flows is a very important task of gas dynamics for hazardous-substance destruction systems. These characteristics are also always necessary for the investigation of high-temperature turbulent flow dynamics and heat and mass transfer. It is well known that the distribution of dynamical and thermal characteristics of high-temperature flows and jets is strongly related to the heat flux variation over the imposed heating area. As is visible from numerous experiments and theoretical considerations, the fundamental properties of an isothermal jet are well investigated. However, establishing such regularities under high-temperature conditions involves certain specific behavior compared with moderate-temperature jets and flows. Their structures have not been thoroughly studied yet, especially in the case of a plasma ambient. It is well known that the distributions of local jet parameters in high-temperature and isothermal jets and flows may differ significantly. A high-temperature axisymmetric air jet generated by an atmospheric-pressure DC arc plasma torch was investigated employing an enthalpy probe 3.8×10⁻³ m in diameter. Distributions of velocities and temperatures were established in different cross-sections of the plasma jet outflowing from a 42×10⁻³ m diameter pipe at a mean velocity of 700 m·s⁻¹ and an averaged temperature of 4000 K. It has been found that gas heating only fractionally influences the shape and values of the dimensionless velocity and temperature profiles in the main zone of the plasma jet, but has a significant influence in its initial zone. The width of the initial zone of the plasma jet has been found to be smaller than in the case of isothermal flow. The relation between the dynamical thickness and the turbulent Prandtl number has been established along the jet axis. The experimental results were generalized in dimensionless form.
The presence of convective heating shows that heat transfer in a moving high-temperature jet also occurs through heat carried by the moving particles of the jet. In this case, the intensity of convective heat transfer is proportional to the instantaneous value of the flow velocity at a given point in space. Consequently, the configuration of the temperature field in moving jets and flows essentially depends on the configuration of the velocity field.
Keywords: plasma jet, plasma torch, heat transfer, enthalpy probe, turbulent Prandtl number
Procedia PDF Downloads 186
19068 Analysis of Rock Cutting Progress with a New Axe-Shaped PDC Cutter to Improve PDC Bit Performance in Elastoplastic Formation
Authors: Fangyuan Shao, Wei Liu, Deli Gao
Abstract:
Polycrystalline diamond compact (PDC) bits have occupied a large share of the market for unconventional oil and gas drilling. The performance of PDC bits benefits from the efficient rock breaking of PDC cutters. In response to increasingly complex formations, many shaped cutters have been invented, but the rock-breaking mechanism of many of them has not been clarified. In this paper, two kinds of PDC cutters were studied in laboratory experiments: a new axe-shaped (NAS) cutter and a cylindrical cutter (benchmark). The NAS cutter is obtained by optimizing the two sides of an axe-shaped cutter with curved surfaces. All the cutters were mounted on a vertical turret lathe (VTL) in the laboratory for cutting tests. According to the cutting distance, the VTL tests can be divided into two modes: single-turn rotary cutting and continuous cutting. The depth of cut (DOC) was set at 1.0 mm and 2.0 mm in the former mode. The latter mode includes a dry VTL test for thermal stability and a wet VTL test for wear resistance. A load cell and a 3D optical profiler were used to obtain the cutting forces and the wear area, respectively. Based on the findings of the single-turn rotary cutting VTL tests, the performance of the NAS cutter was better than that of the benchmark cutter in elastoplastic material cutting. The cutting forces (normal force, tangential force, and radial force) and the mechanical specific energy (MSE) of the NAS cutter were lower than those of the benchmark cutter under the same conditions, meaning that the NAS cutter was more efficient at breaking elastoplastic material. Moreover, the wear resistance of the NAS cutter was higher than that of the benchmark cutter. The results of the dry VTL test showed that the thermal stability of the NAS cutter was also higher than that of the benchmark cutter. Cutting efficiency can thus be improved by optimizing the geometric structure of the PDC cutter.
The change in thermal stability may be caused by the decrease of the contact area between cutter and rock at a given DOC. The conclusions of this paper can be used as an important reference by PDC cutter designers.
Keywords: axe-shaped cutter, PDC cutter, rotary cutting test, vertical turret lathe
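The mechanical specific energy compared above can be sketched with the standard work-per-volume definition, which for a constant-depth cut reduces to tangential force divided by the cut cross-sectional area. This is a textbook simplification, not the authors' exact data reduction, and the force and geometry values in the example are illustrative.

```python
def mechanical_specific_energy(tangential_force_n, doc_mm, cut_width_mm):
    """MSE in MPa: tangential force over the cross-sectional area of the cut.

    Cutting work per unit volume removed: (F_t * L) / (A * L) = F_t / A,
    where A is the depth of cut (DOC) times the engaged cutter width.
    N / mm^2 is numerically equal to MPa.
    """
    area_mm2 = doc_mm * cut_width_mm
    return tangential_force_n / area_mm2
```

A lower MSE at the same DOC indicates more efficient rock breaking, which is how the NAS cutter's advantage is expressed above.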
Procedia PDF Downloads 209
19067 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials
Authors: Rajesh Kumar G
Abstract:
A challenging aspect of any clinical trial is to carefully plan the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, not much historical data is available for pediatrics, and such data is required to validate the assumptions made when planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study, together with available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps to achieve the study objective more smoothly even in the presence of many constraints. This paper explains how a hybrid study design can be planned, together with the integrated SEV technique, to plan the pediatric study. In brief, the SEV technique consists of Simulation, Estimation (using borrowed adult data and applying Bayesian methods), and Validation: the planned study data are simulated and the desired estimates are obtained in order to validate the assumptions. This method of validation can be used to improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, and allows informed decisions to be made well ahead of study initiation. Based on the collected data, this technique allows insight to be gained into best practices when using data from a historical study and simulated data alike.
Keywords: adaptive design, simulation, borrowing data, Bayesian model
Procedia PDF Downloads 81
19066 Drop-Out Rate in Leocadio Alejo Entienza High School for SY 2013-2014: Its Causes and Interventions
Authors: Raquel Balon Quintana
Abstract:
This study aims to help Students At Risk of Dropping Out (SARDOs) finish their studies in their grade/year level for this school year by finding out students’ behavior in and out of school, community involvement in the learning process, and the causes or reasons behind the drop-out rate that affect the performance level of the school. The study also looked for intervention measures to reduce the school’s drop-out rate. The Normative Survey Method of research was used to achieve this purpose: conducting interviews with students and their parents, subject teachers, classmates and friends; undertaking observation and monitoring to find out the whereabouts of SARDOs during and outside class hours; using questionnaires; and conducting home visits to link community involvement to student drop-out. Results of the study revealed that out of 32 SARDOs, 50% were over age for high school (16 to 21 years old) while the other 50% were regular high school students. These 16 students came from the 41 students who dropped out of their classes in the previous school year. All SARDOs are single, and seventy-eight percent of them are male. The top five factors affecting their school performance were peer pressure, self-drive, malnutrition, family problems/support, and truancy. The five least influential factors were problems within their community, school administration, harassment, teacher-related factors, and distance from the school.
Keywords: students at risk of dropping out, drop-out rate, Leocadio Alejo Entienza High School, Philippines
Procedia PDF Downloads 563
19065 Technological Development of a Biostimulant Bioproduct for Fruit Seedlings: An Engineering Overview
Authors: Andres Diaz Garcia
Abstract:
The successful technological development of any bioproduct, including those of the biostimulant type, requires the adequate completion of a series of stages involving different disciplines related to microbiological, engineering, pharmaceutical-chemistry, legal and market components, among others. Engineering as a discipline makes a key contribution to different aspects of fermentation processes, such as the design and optimization of culture media, the standardization of operating conditions within the bioreactor, and the scaling of the production process of the active ingredient that will be used in downstream unit operations. However, all the aspects mentioned must take into account many biological factors of the microorganism, such as the growth rate, the level of assimilation of various organic and inorganic sources, and the mechanisms of action associated with its biological activity. This paper focuses on the practical experience within the Colombian Corporation for Agricultural Research (Agrosavia) that led to the development of a biostimulant bioproduct based on the native rhizobacterium Bacillus amyloliquefaciens, oriented mainly to plant growth promotion in cape gooseberry nurseries and fruit crops in Colombia, and on the challenges that were overcome through expertise in the area of engineering. Through the application of engineering strategies and tools, a culture medium was optimized to obtain concentrations higher than 1×10⁹ CFU (colony-forming units)/ml in liquid fermentation, the biomass production process was standardized, and a scale-up strategy was generated based on geometric criteria (bioreactor H/D ratios) and on operational criteria based on a minimum dissolved oxygen concentration, taking into account the differences in process-control capacity between the laboratory and pilot scales.
Currently, the bioproduct obtained through this technological process is in the registration stage in Colombia for export cape gooseberry fruits.
Keywords: biochemical engineering, liquid fermentation, plant growth promotion, scale-up process
Procedia PDF Downloads 115
19064 A Sensor Placement Methodology for Chemical Plants
Authors: Omid Ataei Nia, Karim Salahshoor
Abstract:
In this paper, a new precise and reliable sensor network design methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability and reliability together with importance-of-variables (IVs) as a novel measure in the Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take the importance of variables into account in the sensor network design procedure. In this paper, a specific weight is assigned to each sensor measuring a process variable in the sensor network to indicate the importance of that variable over the others, catering to the ultimate sensor network application requirements. A set of distinct scenarios has been conducted to evaluate the performance of the proposed methodology in a simulated Continuous Stirred Tank Reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to a significant improvement in accuracy with respect to alternative sensor network design approaches and securing the allocation of sensors to the most important process variables, which is a novel achievement.
Keywords: constriction coefficient PSO, importance of variables, MRMSE, reliability, sensor network design, square root unscented Kalman filter
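Constriction-coefficient PSO uses Clerc's update v ← χ[v + c1·r1·(pbest − x) + c2·r2·(gbest − x)] with χ = 2/|2 − φ − √(φ² − 4φ)| and φ = c1 + c2 > 4 (χ ≈ 0.7298 for c1 = c2 = 2.05). A minimal single-objective sketch follows; the paper's actual instrumentation cost function and sensor encoding are not reproduced here, so the objective below is a stand-in.

```python
import math
import random

def cpso(objective, dim, bounds, n_particles=20, iters=200, seed=0):
    """Minimize `objective` over a box with constriction-coefficient PSO."""
    rng = random.Random(seed)
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal best positions
    pval = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * r1 * (pbest[i][d] - xs[i][d])
                                  + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = objective(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

For instance, `cpso(lambda x: sum(t * t for t in x), dim=3, bounds=(-5.0, 5.0))` drives the sphere function toward its minimum at the origin.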
Procedia PDF Downloads 164
19063 Importance of Developing a Decision Support System for Diagnosis of Glaucoma
Authors: Murat Durucu
Abstract:
Glaucoma is a condition that leads to irreversible blindness; early diagnosis and appropriate interventions can enable patients to retain their sight for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure around the eyes damages the optic nerve and causes deterioration of vision. The disease progresses through different levels, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. Owing to its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and speed of OCT and HRT imaging, there are still difficulties, and mistakes occur in the diagnosis of glaucoma, especially in the early stages. It is difficult for doctors to obtain objective results in the diagnosis and staging process. It therefore seems very important to develop an objective decision support system that diagnoses and grades glaucoma for patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for doctors’ use. Pattern-recognition-based computer software would help doctors make an objective evaluation of their patients. It is intended that, after the development and evaluation processes of the software, the system will serve doctors in different hospitals.
Keywords: decision support system, glaucoma, image processing, pattern recognition
Procedia PDF Downloads 305
19062 Pretherapy Initial Dosimetry Results in Prostate Cancer Radionuclide Therapy with Lu-177-PSMA-DOTA-617
Authors: M. Abuqebitah, H. Tanyildizi, N. Yeyin, I. Cavdar, M. Demir, L. Kabasakal
Abstract:
Aim: Targeted radionuclide therapy (TRT) is an increasingly used treatment modality for a wide range of cancers. Dosimetry is now highly required both to plan treatment and to ascertain the absorbed dose delivered to critical organs during treatment. Methods and Materials: The study comprised 7 patients suffering from prostate cancer with progressive disease who were candidates for Lu-177-DOTA-617 therapy following PSMA-PET/CT imaging. An activity of 5.2±0.3 mCi was intravenously injected. To evaluate the bone marrow absorbed dose, 2 cc blood samples were withdrawn at short, variable times (3, 15, 30, 60, 180 minutes) after injection. Furthermore, whole-body scans were performed using a scintillation gamma camera 4, 24, 48, and 120 hours after injection, and in order to quantify the activity taken up in the body, kidneys, liver, right parotid, and left parotid, the geometric mean of anterior and posterior counts was determined through ROI analysis. Background subtraction and attenuation correction were then applied using the patients’ PSMA-PET/CT images, taking into consideration organ thickness, body thickness, and Hounsfield units from the CT scan. The OLINDA/EXM dosimetry program was used for curve fitting, residence time calculation, and absorbed dose calculations. Findings: The absorbed doses of bone marrow, left kidney, right kidney, liver, left parotid, right parotid, and total body were 1.28±0.52, 32.36±16.36, 32.7±13.68, 10.35±3.45, 38.67±21.29, 37.55±19.77, and 2.25±0.95 mGy/mCi, respectively. Conclusion: Our first results clarify that Lu-177-DOTA-617 is a safe and reliable therapy, as no complications were seen. On the other hand, the observable variation in the absorbed dose of the critical organs among the patients necessitates a patient-specific dosimetry approach to protect body organs, particularly the highly exposed kidneys and parotid glands.
Keywords: Lu-177-PSMA, prostate cancer, radionuclide therapy
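The residence-time step performed here by OLINDA/EXM can be sketched for the simplest case of a mono-exponential time-activity curve: a log-linear least-squares fit gives A(t) = A0·e^(−λt), and the residence time is the integrated fraction of injected activity, ∫A(t)dt / A_inj = A0/(λ·A_inj). This is an illustrative simplification; OLINDA/EXM supports more general fit models, and the sample values below are made up.

```python
import math

def mono_exp_residence_time(times_h, activities, injected_activity):
    """Fit A(t) = A0 * exp(-lam * t) by log-linear least squares and
    return the residence time A0 / (lam * injected_activity) in hours."""
    n = len(times_h)
    ys = [math.log(a) for a in activities]       # linearize: ln A = ln A0 - lam*t
    t_mean = sum(times_h) / n
    y_mean = sum(ys) / n
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, ys))
    sxx = sum((t - t_mean) ** 2 for t in times_h)
    lam = -sxy / sxx                              # effective clearance constant, 1/h
    a0 = math.exp(y_mean + lam * t_mean)          # back-extrapolated A(0)
    return a0 / (lam * injected_activity)
```

With scan times like the 4, 24, 48 and 120 h used above, the fitted curve integrates the organ's fractional uptake over time.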
Procedia PDF Downloads 483
19061 Determining the Performance of Data Mining Algorithms in Identifying the Influential Factors and Predicting Ischemic Stroke: A Comparative Study in the Southeast of Iran
Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard
Abstract:
Ischemic stroke is one of the common causes of disability and mortality: it is the fourth leading cause of death in the world, and the third according to some other sources. Only one third of patients with ischemic stroke fully recover; one third are left with permanent disability, and one third die. Thus, predictive models for stroke have a vital role in reducing the complications and costs related to this disease. The aim of this study was therefore to identify the effective factors and predict ischemic stroke with the help of data mining (DM) methods. The present study was descriptive-analytic. The population comprised 213 cases from among patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. The study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS 19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most influential factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability for determining the factors affecting ischemic stroke. Thus, creating predictive models through this algorithm will play a significant role in decreasing the mortality and disability caused by ischemic stroke.
Keywords: data mining, ischemic stroke, decision tree, Bayesian network
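CHAID selects splits by chi-squared tests of independence between each predictor and the outcome. The factor-ranking idea can be sketched for binary risk factors with the standard 2×2 chi-squared statistic; this is a simplification of CHAID's multiway, Bonferroni-adjusted procedure, and the factor names and data below are invented for illustration.

```python
def chi2_2x2(factor, outcome):
    """Chi-squared statistic for a 2x2 table of a binary risk factor
    against a binary outcome: n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    a = sum(1 for f, o in zip(factor, outcome) if f and o)
    b = sum(1 for f, o in zip(factor, outcome) if f and not o)
    c = sum(1 for f, o in zip(factor, outcome) if not f and o)
    d = sum(1 for f, o in zip(factor, outcome) if not f and not o)
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def rank_factors(factors, outcome):
    """Order candidate risk factors by strength of chi-squared association."""
    return sorted(factors,
                  key=lambda name: chi2_2x2(factors[name], outcome),
                  reverse=True)
```

Ranking factors this way mirrors how CHAID chooses its first split: the factor most strongly associated with the outcome.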
Procedia PDF Downloads 178
19060 Wind Turbines Optimization: Shield Structure for a High Wind Speed Conditions
Authors: Daniyar Seitenov, Nazim Mir-Nasiri
Abstract:
Optimization of a horizontal-axis semi-exposed wind turbine has been performed using a shield that automatically protects the generator shaft at extreme wind speeds from overspeeding and mechanical damage while the turbine continues generating electricity during high-wind conditions. A generator semi-exposed to the wind has been designed, and its structure is described in this paper. A simplified point-force dynamic load model on the blades has been derived for normal and extreme wind conditions, with and without shield involvement. Numerical simulation has been conducted at different wind speeds to study the efficiency of the shield. The obtained results show that the maximum power generated by the wind turbine with the shield does not exceed the rated value of the generator: the shield serves as an automatic brake for extreme wind speeds of 15 m/s and above. Meanwhile, the wind turbine without the shield produced power much larger than the rated value. The optimized horizontal-axis semi-exposed wind turbine with shield protection is suitable for low- and medium-power generation when installed on the roofs of high-rise buildings for harvesting wind energy. The wind shield works automatically with no power consumption. The structure of the generator with the protection and the mathematical simulation of the kinematics and dynamics of power generation are described in detail in this paper.
Keywords: renewable energy, wind turbine, wind turbine optimization, high wind speed
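The shield's effect on generated power can be illustrated with the standard wind-power relation P = ½ρAv³Cp, capped at the generator's rated value once the shield engages at the 15 m/s cutoff mentioned above. The swept area, power coefficient and rated power below are illustrative placeholders, not the paper's turbine specification.

```python
def turbine_power_w(v_ms, swept_area_m2, cp=0.4, rho=1.225,
                    rated_w=2000.0, shield=True, cutoff_ms=15.0):
    """Electrical power (W) of a simplified turbine at wind speed v.

    Without the shield, the aerodynamic power 0.5*rho*A*Cp*v^3 keeps
    growing with v; with the shield, speeds at or above the cutoff
    are automatically braked so output never exceeds the rated value.
    """
    p = 0.5 * rho * swept_area_m2 * cp * v_ms ** 3
    if shield and v_ms >= cutoff_ms:
        return min(p, rated_w)
    return p
```

At 20 m/s with a 2 m² swept area, the unshielded power is well above the assumed 2 kW rating, while the shielded output stays at the rated value, matching the simulation result described above.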
Procedia PDF Downloads 180
19059 Influence of Grain Shape, Size and Grain Boundary Diffusion on High Temperature Oxidation of Metal
Authors: Sneha Samal, Iva Petrikova, Bohdana Marvalova
Abstract:
The influence of grain size, shape and grain boundary diffusion on the high-temperature oxidation of pure metals is investigated as a function of microstructure evolution in this article. The oxidized scale depends on the geometrical parameters of the metal-scale system, on grain shape and size, on diffusion through boundary layers, and on the influence of contamination. The creation of the inner layer and the morphological structure develop from the internal stress generated during the growth of the scale. The oxidation rate depends on the mobile transport of metal cations and anions in the inward and outward directions across the diffusion layer. The oxidation rate decreases with decreasing grain size of the pure metal, whereas zinc deviates from this principle. A strong correlation between surface roughness evolution, grain size, crystalline properties and the oxidation mechanism of the oxidized metal was established.
Keywords: high temperature oxidation, pure metals, grain size, shape and grain boundary
Procedia PDF Downloads 499
19058 Development of Tensile Stress-Strain Relationship for High-Strength Steel Fiber Reinforced Concrete
Authors: H. A. Alguhi, W. A. Elsaigh
Abstract:
This paper provides a tensile stress-strain (σ-ε) relationship for High-Strength Steel Fiber Reinforced Concrete (HSFRC). The load-deflection (P-δ) behavior of HSFRC beams tested under four-point flexural load was used with inverse analysis to calculate the tensile σ-ε relationship for the tested concrete grades (70 and 90 MPa) containing 60 kg/m³ (0.76%) of hooked-end steel fibers. A first estimate of the tensile σ-ε relationship is obtained using RILEM TC 162-TDF and other methods available in the literature that are frequently used for determining the tensile σ-ε relationship of Normal-Strength Concrete (NSC). The Non-Linear Finite Element Analysis (NLFEA) package ABAQUS® is used to model the beams’ P-δ behavior. The results have shown that an element-size-dependent tensile σ-ε relationship for HSFRC can be successfully generated and adopted for further analyses involving HSFRC structures.
Keywords: tensile stress-strain, flexural response, high strength concrete, steel fibers, non-linear finite element analysis
Procedia PDF Downloads 362
19057 Optimization of Fused Deposition Modeling 3D Printing Process via Preprocess Calibration Routine Using Low-Cost Thermal Sensing
Authors: Raz Flieshman, Adam Michael Altenbuchner, Jörg Krüger
Abstract:
This paper presents an approach to optimizing the Fused Deposition Modeling (FDM) 3D printing process through a preprocess calibration routine of printing parameters. The core of this method involves the use of a low-cost thermal sensor capable of measuring temperatures within the range of -20 to 500 degrees Celsius for detailed process observation. The calibration process is conducted by printing a predetermined path while varying the process parameters through machine instructions (g-code). This enables the extraction of critical thermal, dimensional, and surface properties along the printed path. The calibration routine utilizes computer vision models to extract features and metrics from the thermal images, including temperature distribution, layer adhesion quality, surface roughness, and dimensional accuracy and consistency. These extracted properties are then analyzed to optimize the process parameters to achieve the desired qualities of the printed material. A significant benefit of this calibration method is its potential to create printing parameter profiles for new polymer and composite materials, thereby enhancing the versatility and application range of FDM 3D printing. The proposed method demonstrates significant potential in enhancing the precision and reliability of FDM 3D printing, making it a valuable contribution to the field of additive manufacturing.
Keywords: FDM 3D printing, preprocess calibration, thermal sensor, process optimization, additive manufacturing, computer vision, material profiles
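The parameter-varying calibration path could be generated along these lines: a straight toolpath whose hotend temperature and feed rate change per segment, so each segment can later be matched to its thermal-camera frames. The commands and values below are generic g-code assumptions for illustration, not the routine used in the paper.

```python
def calibration_gcode(segments, temps, speeds, seg_len=10.0, extrude_per_mm=0.05):
    """Emit a straight calibration path whose temperature and feed rate vary
    per segment.  All parameter values are illustrative assumptions."""
    lines, x, e = ["G90", "G21"], 0.0, 0.0   # absolute positioning, millimetres
    for i in range(segments):
        t = temps[i % len(temps)]            # cycle through the test values
        f = speeds[i % len(speeds)]
        x += seg_len
        e += seg_len * extrude_per_mm
        lines.append(f"M104 S{t}")                  # set hotend temperature
        lines.append(f"G1 X{x:.1f} E{e:.2f} F{f}")  # print one segment
    return lines

gcode = calibration_gcode(4, temps=[200, 210], speeds=[1200, 1800])
```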
Procedia PDF Downloads 51
19056 An Investigation of Food Quality and Risks in Thailand: A Case of Inbound Senior Tourists
Authors: Kevin Wongleedee
Abstract:
Food quality and risks are major concerns for inbound senior tourists when visiting tourist destinations in Thailand. The purpose of this study was to investigate food quality and risks as perceived by inbound senior tourists. This paper drew upon data from an inbound senior tourist survey conducted in Thailand during summer 2013. Summer in Thailand is a high season for inbound tourists; it is also a high-risk period in which a variety of food safety issues and incidents have often occurred. The survey was structured primarily to obtain inbound senior tourists' concerns toward the variety of food quality issues and risks they encountered during their trip in Thailand. Responses from a total of 400 inbound senior tourists were used to compute means and standard deviations. The findings revealed that inbound tourists rated overall food quality at a high level and that the three most important perceived food risks were 1) unclean physical cooking facilities, 2) toxic chemical handling, and 3) unclean water.
Keywords: food quality, inbound senior tourists, risks, Thailand
Procedia PDF Downloads 403
19055 Select-Low and Select-High Methods for the Wheeled Robot Dynamic States Control
Authors: Bogusław Schreyer
Abstract:
The paper examines two methods of wheeled robot braking torque control, applied when the adhesion coefficient under the left-side wheels differs from the adhesion coefficient under the right-side wheels. In the select-low (SL) method, the braking torque on both wheels is controlled by the signals originating from the wheels on the side of lower adhesion. In the select-high (SH) method, the torque is controlled by the signals originating from the wheels on the side of higher adhesion. The SL method ensures stable and safe robot behavior during the braking process; however, its efficiency is relatively low. The SH method is more efficient in terms of time and braking distance but in some situations may cause wheel blocking. It is therefore important to monitor the velocity of all wheels and then decide on the braking torque distribution accordingly. In the case of the SH method, the braking torque slope may require a significant decrease in order to avoid wheel blocking.
Keywords: select-high, select-low, torque distribution, wheeled robots
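The two selection rules can be sketched as a single torque-distribution function; the slip threshold, relief factor, and adhesion values below are illustrative assumptions, not parameters from the paper.

```python
def braking_torque(cmd_torque, slip_left, slip_right, mu_left, mu_right,
                   mode="SL", slip_limit=0.2, relief=0.5):
    """Sketch of select-low / select-high braking torque distribution.
    SL scales the commanded torque by the lower-adhesion side, SH by the
    higher one; if any wheel's slip exceeds slip_limit, the torque is
    relieved to avoid wheel blocking.  Thresholds are illustrative."""
    mu_ref = min(mu_left, mu_right) if mode == "SL" else max(mu_left, mu_right)
    torque = cmd_torque * mu_ref
    if max(slip_left, slip_right) > slip_limit:
        torque *= relief  # decrease the braking torque slope
    return torque

# Split-mu surface: left side icy (mu=0.3), right side dry (mu=0.8)
t_sl = braking_torque(100.0, 0.05, 0.10, mu_left=0.3, mu_right=0.8, mode="SL")
t_sh = braking_torque(100.0, 0.05, 0.10, mu_left=0.3, mu_right=0.8, mode="SH")
```

SH commands more torque (shorter braking distance) but must back off once slip grows, which mirrors the trade-off described in the abstract.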
Procedia PDF Downloads 124
19054 Analysis of the Impact of Refractivity on Ultra High Frequency Signal Strength over Gusau, North West, Nigeria
Authors: B. G. Ayantunji, B. Musa, H. Mai-Unguwa, L. A. Sunmonu, A. S. Adewumi, L. Sa'ad, A. Kado
Abstract:
For achieving a reliable and efficient communication system, both terrestrial and satellite, surface refractivity is critical in the planning and design of radio links. This study analyzed the impact of atmospheric parameters on Ultra High Frequency (UHF) signal strength over Gusau, North West, Nigeria. The analysis exploited meteorological data measured simultaneously with UHF signal strength for the month of June 2017, using a Davis Vantage Pro2 automatic weather station and UHF signal strength measuring devices, respectively. The instruments were situated at the premises of Federal University, Gusau (6° 78' N, 12° 13' E). The refractivity values were computed using the ITU-R model. The results show that refractivity attained its highest value of 366.28 at 2200 hr and its minimum value of 350.66 at 2100 hr local time. The correlation of signal strength with refractivity is 0.350, with humidity 0.532, and with temperature a negative correlation of -0.515.
Keywords: refractivity, UHF (ultra high frequency) signal strength, free space, automatic weather station
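The surface refractivity computation follows ITU-R P.453, N = (77.6/T)(P + 4810·e/T), with T in kelvin and pressures in hPa. The sketch below pairs it with a Buck-type saturation vapour pressure fit to obtain e from relative humidity; that fit is an assumption, since the abstract states only that the ITU-R model was used.

```python
import math

def refractivity(pressure_hpa, temp_c, rel_humidity_pct):
    """Radio refractivity N per ITU-R P.453: N = 77.6/T * (P + 4810*e/T).
    The saturation vapour pressure uses a Buck-type fit (an assumption)."""
    t_k = temp_c + 273.15
    e_s = 6.1121 * math.exp(17.502 * temp_c / (temp_c + 240.97))  # hPa
    e = e_s * rel_humidity_pct / 100.0                            # hPa
    return 77.6 / t_k * (pressure_hpa + 4810.0 * e / t_k)

# Typical wet-season surface values (illustrative, not the Gusau data)
n = refractivity(1013.0, 30.0, 80.0)
```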
Procedia PDF Downloads 204
19053 Plasma Ion Implantation Study: A Comparison between Tungsten and Tantalum as Plasma Facing Components
Authors: Tahreem Yousaf, Michael P. Bradley, Jerzy A. Szpunar
Abstract:
Currently, nuclear fusion is considered one of the most favorable options for future energy generation, due both to its abundant fuel and lack of emissions. For fusion power reactors, a major problem will be a suitable material choice for the Plasma Facing Components (PFCs) which will constitute the reactor first wall. Tungsten (W) has advantages as a PFC material because of its high melting point, low vapour pressure, high thermal conductivity and low retention of hydrogen isotopes. However, several adverse effects such as embrittlement, melting and morphological evolution have been observed in W when it is bombarded by low-energy and high-fluence helium (He) and deuterium (D) ions, under simulated conditions adjacent to a fusion plasma. Recently, tantalum (Ta) has also been investigated as a PFC material and shows better resistance to nanostructure fuzz formation than W under simulated fusion plasma conditions, but the retention of D ions was found to be higher in Ta than in W. Preparatory to plasma-based ion implantation studies, the effect of D and He ion impact on W and Ta is predicted by using the Stopping and Range of Ions in Matter (SRIM) code. SRIM provided some theoretical results regarding projected range, ion concentration (at. %) and displacement damage (dpa) in W and Ta. The projected ranges for W under irradiation by He and D ions with an energy of 3 keV and 1× fluence are determined to be 75 Å and 135 Å, and for Ta 85 Å and 155 Å, respectively. For both W and Ta samples, the maximum implanted peak for He is predicted at ~5.3 at. % at 12 nm, and for D ions the concentration peak is located near 3.1 at. % at 25 nm. For the same parameters, the displacement damage for He ions is observed in W at ~0.65 dpa and in Ta at ~0.35 dpa at 5 nm. For D ions, the displacement damage for W is ~0.20 dpa at 8 nm and for Ta ~0.175 dpa at 7 nm. The mean implantation depth is the same for W and Ta, i.e., ~40 nm for He ions and ~70 nm for D ions. From these results, we conclude that the retention of D is higher than that of He ions, but the damage is lower for Ta as compared to W. Further investigation is still in progress regarding W and Ta.
Keywords: helium and deuterium ion impact, plasma facing components, SRIM simulation, tungsten, tantalum
Procedia PDF Downloads 135
19052 Parameters of Validation Method of Determining Polycyclic Aromatic Hydrocarbons in Drinking Water by High Performance Liquid Chromatography
Authors: Jonida Canaj
Abstract:
A simple method for the extraction and determination of fifteen priority polycyclic aromatic hydrocarbons (PAHs) in drinking water using high performance liquid chromatography (HPLC) has been validated with limits of detection (LOD) and limits of quantification (LOQ), method recovery and reproducibility, and other factors. HPLC parameters, such as mobile phase composition and flow rate, were standardized for determination of PAHs using a fluorescence detector (FLD). Extraction of PAHs was carried out by liquid-liquid extraction using dichloromethane. Linearity of the calibration curves was good for all PAHs (R² = 0.9954-1.0000) in the concentration range 0.1-100 ppb. Analysis of standard spiked water samples resulted in recoveries between 78.5-150% (0.1 ppb) and 93.04-137.47% (10 ppb). The estimated LOD and LOQ ranged between 0.0018-0.98 ppb. The method described has been used for determination of the fifteen PAHs in drinking water samples.
Keywords: high performance liquid chromatography, HPLC, method validation, polycyclic aromatic hydrocarbons, PAHs, water
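LOD and LOQ of this kind are commonly estimated from the calibration slope and the spread of blank (or low-level) responses, LOD = 3.3σ/S and LOQ = 10σ/S, and recovery from spiked samples. The convention and all numbers below are illustrative assumptions; the paper does not state which variant it applied.

```python
import statistics

def lod_loq(blank_responses, slope):
    """ICH-style estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S, where sigma
    is the standard deviation of blank responses and S the calibration slope
    (a common convention; assumed here, not stated in the paper)."""
    sigma = statistics.stdev(blank_responses)
    return 3.3 * sigma / slope, 10.0 * sigma / slope

def recovery_pct(measured, spiked):
    """Spike recovery as a percentage of the known added concentration."""
    return 100.0 * measured / spiked

# Hypothetical blank signals and slope, not the study's data
lod, loq = lod_loq([0.9, 1.1, 1.0, 0.95, 1.05], slope=50.0)
rec = recovery_pct(measured=9.3, spiked=10.0)
```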
Procedia PDF Downloads 106
19051 Combination of Plantar Pressure and Star Excursion Balance Test for Evaluation of Dynamic Posture Control on High-Heeled Shoes
Authors: Yan Zhang, Jan Awrejcewicz, Lin Fu
Abstract:
High-heeled shoes force the foot into a plantar-flexed position, resulting in a rise of the foot arch and disturbance of the articular congruence between the talus and the tibiofibular mortice, all of which may increase the challenge of balance maintenance. The plantar pressure distribution of the stance limb during the star excursion balance test (SEBT) contributes to understanding potential sources of reaching excursions in SEBT. The purpose of this study is to evaluate dynamic posture control while wearing high-heeled shoes, using SEBT in combination with plantar pressure measurement. Twenty healthy young females were recruited. Shoes of three heel heights were used: flat (0.8 cm), low (4.0 cm), and high (6.6 cm). The testing grid of the SEBT consists of three lines extending out at 120° from each other, defined as the anterior, posteromedial, and posterolateral directions. Participants were instructed to stand on their dominant limb with the heel in the middle of the testing grid and hands on hips, and to reach the non-stance limb as far as possible in each direction. The distal portion of the reaching limb lightly touched the ground without shifting weight, and the reaching limb was then returned to the starting position. The excursion distances were normalized to leg length. An insole plantar measurement system was used to record peak pressure, contact area, and pressure-time integral of the stance limb. Results showed that the normalized excursion distance decreased significantly as heel height increased. The changes in plantar pressure in SEBT as heel height increased were most pronounced in the medial forefoot (MF), medial midfoot (MM), and rearfoot areas. At MF, the peak pressure and pressure-time integral of low and high shoes increased significantly compared with those of flat shoes, while the contact area decreased significantly as heel height increased. At MM, peak pressure, contact area, and pressure-time integral of high and low shoes were significantly lower than those of flat shoes. To reduce posture instability, the stance limb plantar loading shifted to the medial forefoot. This study identified dynamic posture control deficits while wearing high-heeled shoes and the critical role of the medial forefoot in dynamic balance maintenance.
Keywords: dynamic posture control, high-heeled shoes, plantar pressure, star excursion balance test
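The leg-length normalization of excursion distances can be sketched as follows; the reach values and leg length are hypothetical, not measurements from the study.

```python
def normalized_excursions(reach_cm, leg_length_cm):
    """Normalise SEBT reach distances to leg length (as percentages) so that
    excursion scores are comparable across participants of different stature."""
    return {d: 100.0 * r / leg_length_cm for d, r in reach_cm.items()}

# Hypothetical reach values for one trial (cm); not data from the study.
scores = normalized_excursions(
    {"anterior": 62.0, "posteromedial": 88.0, "posterolateral": 84.0},
    leg_length_cm=80.0)
```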
Procedia PDF Downloads 137
19050 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
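A minimal sketch of the text-plus-image pipeline: TF-IDF features extracted from report text are concatenated with image-derived features and fed to a Random Forest. The toy reports, the single "opacity score" feature, and the labels are stand-ins for illustration, not the Indiana University dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy radiology reports and a hypothetical image-derived feature
reports = ["no acute cardiopulmonary abnormality",
           "right lower lobe opacity concerning for pneumonia",
           "clear lungs, normal heart size",
           "diffuse bilateral infiltrates, urgent follow-up advised"]
image_feats = np.array([[0.1], [0.8], [0.2], [0.9]])  # e.g. opacity score
labels = [0, 1, 0, 1]                                  # 0=normal, 1=abnormal

# TF-IDF text features merged column-wise with the image-derived data
text_feats = TfidfVectorizer().fit_transform(reports).toarray()
X = np.hstack([text_feats, image_feats])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
preds = clf.predict(X)
```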
Procedia PDF Downloads 52
19049 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included several features: the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (such as retail, groceries, or online services), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods carry varying levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies.
The autoencoder model leverages its reconstruction-error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
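The isolation-forest side of the pipeline can be sketched as follows: fit on mostly legitimate transactions and flag points that are easy to isolate. The feature layout (amount, hour of day, merchant-category code) and all values are assumptions for illustration, not the study's anonymised schema.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "legitimate" transactions: (amount, hour of day, category code)
normal = np.column_stack([rng.normal(50, 10, 300),     # amount
                          rng.normal(14, 3, 300),      # hour of day
                          rng.integers(0, 5, 300)])    # merchant category
# Two hand-crafted outliers: very large, late-night transfers
fraud = np.array([[5000.0, 3.0, 4], [7500.0, 2.0, 1]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(fraud)   # -1 = anomaly, 1 = normal
```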
Procedia PDF Downloads 63
19048 A Low Power and High-Speed Conditional-Precharge Sense Amplifier Based Flip-Flop Using Single Ended Latch
Authors: Guo-Ming Sung, Ramavath Naga Raju Naik
Abstract:
This paper presents a low-power, high-speed sense-amplifier-based flip-flop (SAFF). The flip-flop's power consumption and delay are greatly reduced by employing a new conditionally precharged sense-amplifier stage and a single-ended latch stage. Glitch-free and contention-free latch operation is achieved by using a conditional cut-off strategy. The design uses fewer transistors, has a lower clock load, and has a simple structure, all of which contribute to a near-zero setup time. When compared to previous flip-flop structures proposed for similar input/output conditions, this design's performance and overall power-delay product (PDP) have improved. In post-layout simulation, the circuit consumes 2.91 µW of power and has a delay of 65.82 ps. The Cadence Virtuoso design tool with 90 nm CMOS technology was used for all designs.
Keywords: high-speed, low-power, flip-flop, sense-amplifier
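As a quick check of the figures above, the power-delay product follows directly from the reported power and delay:

```python
def power_delay_product_fj(power_uw, delay_ps):
    """Power-delay product in femtojoules, from power in microwatts and
    delay in picoseconds: PDP = P * t_d."""
    return power_uw * 1e-6 * delay_ps * 1e-12 * 1e15

# Using the post-layout figures reported above (2.91 uW, 65.82 ps)
pdp = power_delay_product_fj(2.91, 65.82)   # ~0.19 fJ
```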
Procedia PDF Downloads 167
19047 Modeling Slow Crack Growth under Thermal and Chemical Effects for Fitness Predictions of High-Density Polyethylene Material
Authors: Luis Marquez, Ge Zhu, Vikas Srivastava
Abstract:
High-density polyethylene (HDPE) is one of the most commonly used thermoplastic polymer materials for water and gas pipelines. Slow crack growth is a well-known failure phenomenon in high-density polyethylene and causes brittle failure well below the yield point with no obvious warning sign. The failure of transportation pipelines can cause catastrophic environmental and economic consequences. Non-destructive testing to predict slow crack growth failure behavior is the primary preventative measure employed by the pipeline industry but is often costly and time-consuming. Phenomenological slow crack growth models are useful for predicting slow crack growth behavior in polymer materials due to their ability to evaluate slow crack growth under different temperature and loading conditions. We developed a quantitative method to assess slow crack growth behavior in high-density polyethylene pipeline material under different thermal conditions based on existing physics-based phenomenological models. We are also working on developing an experimental protocol and quantitative model that can address slow crack growth behavior under different chemical exposure conditions to improve the safety, reliability, and resilience of HDPE-based pipeline infrastructure.
Keywords: mechanics of materials, physics-based modeling, civil engineering, fracture mechanics
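A Paris-type power law with an Arrhenius temperature shift is one common phenomenological form for slow crack growth, da/dt = A·K^m·exp(-(Q/R)·(1/T - 1/T0)); the sketch below uses placeholder coefficients, not fitted HDPE values or the model developed in this work.

```python
import math

def crack_growth_rate(k_stress_intensity, temp_k, a_coeff=1e-12, m_exp=3.0,
                      q_activation=90e3, r_gas=8.314, t_ref=293.15):
    """Phenomenological slow-crack-growth sketch: a Paris-type power law
    da/dt = A * K^m, scaled by an Arrhenius shift for temperature.
    All coefficients are illustrative placeholders."""
    arrhenius = math.exp(-q_activation / r_gas * (1.0 / temp_k - 1.0 / t_ref))
    return a_coeff * k_stress_intensity ** m_exp * arrhenius

# Same stress intensity, two temperatures: growth accelerates when hot
rate_20c = crack_growth_rate(0.5, 293.15)
rate_60c = crack_growth_rate(0.5, 333.15)
```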
Procedia PDF Downloads 208
19046 Effect of High-Intensity Core Muscle Exercises Training on Sport Performance in Dancers
Authors: Che Hsiu Chen, Su Yun Chen, Hon Wen Cheng
Abstract:
Traditional core training consists of core stability, core endurance, and balance exercises performed on a stable surface with isometric muscle actions, low loads, and multiple repetitions, which may not improve swimming and running economy. However, the effects of high-intensity core muscle exercise training on jump height, sprint, and aerobic fitness remain unclear. The purpose of this study was to examine whether high-intensity core muscle exercise training could improve sport performance in dancers. Thirty healthy university dance students (28 women and 2 men; age 20.0 years, height 159.4 cm, body mass 52.7 kg) voluntarily participated in this study, and each participant performed five suspension exercises (e.g., hip abduction in plank alternative, hamstring curl, 45-degree row, lunge, and oblique crunch). Each exercise was performed for 30 seconds, with 30 seconds of rest between exercises, twice per week for eight weeks, and each exercise duration was increased by 10 seconds every week. We measured agility, explosive force, and anaerobic and cardiovascular fitness in the dancers before and after the eight weeks of training. The results showed that the 8-week high-intensity core muscle training significantly increased T-test agility (7.78%), explosive force of acceleration (3.35%), vertical jump height (8.10%), jump power (6.95%), lower extremity anaerobic ability (7.10%), and oxygen uptake efficiency slope (4.15%). Therefore, it can be concluded that eight weeks of high-intensity core muscle exercise training can improve not only agility, sprint ability, and vertical jump ability but also anaerobic and cardiovascular fitness measures.
Keywords: balance, jump height, sprint, maximal oxygen uptake
Procedia PDF Downloads 409