Search results for: synchronous reluctance machine (SynRM)

502 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay

Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer

Abstract:

Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked EPBM tunnels of 3 m diameter with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar (FRP) jacking pipe. Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width, and check the structural adequacy of the installed pipe. In the numerical models, a Mohr-Coulomb constitutive model accounting for unloading was adopted for the fat clays, while the generalized Hoek-Brown criterion was employed for the bedrock layer. The models considered explicit excavation sequences and different levels of ground convergence prior to support installation. These carefully defined excavation sequences made the analysis of this very soft clay tractable; without them, achieving convergence in the numerical analysis would not have been possible. The predicted results indicate that the ground displacements around the tunnels and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.
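
For reference, the generalized Hoek-Brown criterion cited above for the bedrock layer is conventionally written as follows (standard textbook form, not taken from the paper; m_b, s, and a are rock-mass parameters typically derived from the Geological Strength Index):

```latex
\sigma'_1 = \sigma'_3 + \sigma_{ci}\left( m_b \,\frac{\sigma'_3}{\sigma_{ci}} + s \right)^{a}
```

Here sigma'_1 and sigma'_3 are the major and minor effective principal stresses and sigma_ci is the uniaxial compressive strength of the intact rock.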

Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM

Procedia PDF Downloads 63
501 Relevance of Brain Stem Evoked Potential in Diagnosis of Central Demyelination in Guillain Barre’ Syndrome

Authors: Geetanjali Sharma

Abstract:

Guillain Barre’ syndrome (GBS) is an autoimmune-mediated demyelinating polyradiculoneuropathy. Clinical features include progressive symmetrical ascending muscle weakness of more than two limbs and areflexia, with or without sensory, autonomic, and brainstem abnormalities. The purpose of this study was to determine subclinical neurological changes of the CNS in GBS and to establish the presence of central demyelination in GBS. The study was prospective and conducted in the Department of Physiology, Pt. B. D. Sharma Post-graduate Institute of Medical Sciences, University of Health Sciences, Rohtak, Haryana, India, to find out early central demyelination in clinically diagnosed patients of GBS. These patients were referred from the Department of Medicine of our Institute to our department for electro-diagnostic evaluation. The study group comprised 40 subjects (20 clinically diagnosed GBS patients and 20 healthy individuals as controls) aged between 6-65 years. Brainstem auditory evoked potentials (BAEP) were recorded in both groups using an RMS EMG EP Mark II machine. BAEP parameters included the latencies of waves I to IV and the inter-peak latencies I-III, III-IV, and I-V. A statistically significant increase in absolute peak and inter-peak latencies in the GBS group as compared with the control group was noted. The evoked potential results reflect impairment of auditory pathways, probably due to focal demyelination in the Schwann cell derived myelin sheaths that cover the extramedullary portion of the auditory nerves. Early detection of these sub-clinical abnormalities is important, as timely intervention reduces morbidity.

Keywords: brainstem, demyelination, evoked potential, Guillain Barre’

Procedia PDF Downloads 281
500 Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles in the Kidney Disease

Authors: Leonardo C. Pacheco-Londoño, Nataly J Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta-Hoyos, Elkin Navarro, Gustavo Aroca-Martinez, Karin Rondón-Payares, Alberto C. Espinosa-Garavito, Samuel P. Hernández-Rivera

Abstract:

At the Life Science Research Center at Simon Bolivar University, a primary focus is the diagnosis of various diseases, and the use of gold nanoparticles (Au-NPs) in diverse biomedical applications is continually expanding. In the present study, Au-NPs were employed as substrates for Surface-Enhanced Raman Spectroscopy (SERS) aimed at diagnosing kidney diseases arising from Lupus Nephritis (LN), preeclampsia (PC), and Hypertension (H). Discrimination models for distinguishing patients with and without kidney diseases were developed from the SERS signals of urine samples by partial least squares-discriminant analysis (PLS-DA). A comparative study of the Raman signals across the three conditions was conducted, leading to the identification of potential metabolite signals. Model performance was assessed through cross-validation and external validation, and parameters such as sensitivity and specificity were determined; the models showed average values of 0.9 for both parameters. Additionally, a secondary analysis was performed using machine learning (ML) models, wherein different ML algorithms were evaluated for their efficiency. It is also worth highlighting that this collaborative effort involved two university research centers and two healthcare institutions, ensuring ethical treatment and informed consent for the patient samples.
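
A minimal sketch of the PLS-DA workflow described above (illustrative only, not the authors' code; the component count, the 0.5 decision threshold, and the synthetic spectra are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

# Placeholder data: rows = SERS spectra of urine samples, labels 1 = kidney disease, 0 = control
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 900))          # 120 spectra x 900 Raman shift bins (synthetic)
y = rng.integers(0, 2, size=120)

y_pred = np.zeros_like(y)
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    pls = PLSRegression(n_components=5)  # PLS-DA = PLS regression on the class label
    pls.fit(X[train], y[train])
    y_pred[test] = (pls.predict(X[test]).ravel() >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```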

Keywords: SERS, Raman, PLS-DA, kidney diseases

Procedia PDF Downloads 26
499 Improving Efficiency and Effectiveness of FMEA Studies

Authors: Joshua Loiselle

Abstract:

This paper discusses the challenges engineering teams face in conducting Failure Modes and Effects Analysis (FMEA) studies. This paper focuses on the specific topic of improving the efficiency and effectiveness of FMEA studies. Modern economic needs and increased business competition require engineers to constantly develop newer and better solutions within shorter timeframes and tighter margins. In addition, documentation requirements for meeting standards/regulatory compliance and customer needs are becoming increasingly complex and verbose. Managing open actions and continuous improvement activities across all projects, product variations, and processes in addition to daily engineering tasks is cumbersome, time consuming, and is susceptible to errors, omissions, and non-conformances. FMEA studies are proven methods for improving products and processes while subsequently reducing engineering workload and improving machine and resource availability through a pre-emptive, systematic approach of identifying, analyzing, and improving high-risk components. If implemented correctly, FMEA studies significantly reduce costs and improve productivity. However, the value of an effective FMEA is often shrouded by a lack of clarity and structure, misconceptions, and previous experiences and, as such, FMEA studies are frequently grouped with the other required information and documented retrospectively in preparation of customer requirements or audits. Performing studies in this way only adds cost to a project and perpetuates the misnomer that FMEA studies are not value-added activities. This paper discusses the benefits of effective FMEA studies, the challenges related to conducting FMEA studies, best practices for efficiently overcoming challenges via structure and automation, and the benefits of implementing those practices.

Keywords: FMEA, quality, APQP, PPAP

Procedia PDF Downloads 283
498 An Explanatory Study Approach Using Artificial Intelligence to Forecast Solar Energy Outcome

Authors: Agada N. Ihuoma, Nagata Yasunori

Abstract:

Artificial intelligence (AI) techniques play a crucial role in predicting the expected energy outcome and in the performance analysis, modeling, and control of renewable energy. Renewable energy is becoming more popular for economic and environmental reasons. In the face of global energy consumption and the increased depletion of most fossil fuels, the world is faced with the challenge of meeting ever-increasing energy demands. Therefore, incorporating artificial intelligence to predict solar radiation outcomes from intermittent sunlight is crucial to enable a balance between supply and demand of energy on loads, predict the performance and outcome of solar energy, enhance production planning and energy management, and ensure proper sizing of parameters when generating clean energy. However, one of the major problems of forecasting lies in the algorithms used to control, model, and predict the performance of the energy systems, which are complicated and involve large computing power, differential equations, and time series. Also, unreliable (poor quality) solar radiation data for a geographical location, as well as insufficiently long series, can be a bottleneck to actualization. To overcome these problems, this study employs Anaconda Navigator (Jupyter Notebook) for machine learning, which can combine large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features to predict the performance and outcome of solar energy. This, in turn, enables a balance of supply and demand on loads and enhances production planning and energy management.
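
A minimal sketch of the workflow the keywords point to (linear regression with backward elimination); this is illustrative only, with synthetic weather features standing in for real solar radiation data:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for weather features and measured solar output
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                       # e.g. temperature, humidity, cloud cover, ...
y = 3.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=200)

features = list(range(X.shape[1]))
while True:
    model = sm.OLS(y, sm.add_constant(X[:, features])).fit()
    pvals = model.pvalues[1:]                       # skip the intercept
    worst = pvals.argmax()
    if pvals[worst] <= 0.05:                        # stop when all predictors are significant
        break
    features.pop(worst)                             # backward elimination step

print("retained feature indices:", features)
print(model.summary())
```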

Keywords: artificial Intelligence, backward elimination, linear regression, solar energy

Procedia PDF Downloads 142
497 Determination of Selected Engineering Properties of Giant Palm Seeds (Borassus Aethiopum) in Relation to Its Oil Potential

Authors: Rasheed Amao Busari, Ahmed Ibrahim

Abstract:

The engineering properties of giant palm seeds are crucial for the rational design of processing and handling systems. This research was conducted to investigate some engineering properties of giant palm seeds in relation to their oil potential. Ripe giant palm fruits were sourced from parts of Zaria in Kaduna State and Ado Ekiti in Ekiti State, Nigeria. The mesocarps of the collected fruits were removed to obtain the nuts, and the nuts were dried under ambient conditions for several days. The actual moisture content of the nuts at the time of the experiment, determined using a KT100S moisture meter, ranged from 17.9% to 19.15%. The physical properties determined were the axial dimensions, geometric mean diameter, arithmetic mean diameter, sphericity, true and bulk densities, porosity, angle of repose, and coefficient of friction. The nuts were measured using a vernier caliper for physical assessment of their sizes. The axial dimensions of 100 nuts were taken, and the results show that the size ranges from 7.30 to 9.32 cm for the major diameter, 7.2 to 8.9 cm for the intermediate diameter, and 4.2 to 6.33 cm for the minor diameter. The mechanical properties determined were compressive force, compressive stress, and deformation, both at peak and at break, using an Instron hydraulic universal testing machine. The work also revealed that the giant palm seed can be classified as an oil-bearing seed, giving an oil yield of 18% with the solvent extraction method. The results obtained from this study will help in solving problems of equipment design, handling, and further processing of the seeds.
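
The size-related properties listed above are conventionally computed from the three axial dimensions; a small sketch using the standard definitions (not code from the paper, and the example dimensions are only mid-range illustrations):

```python
def seed_size_properties(major_cm, intermediate_cm, minor_cm):
    """Standard size descriptors from the three axial dimensions of a seed."""
    geometric_mean = (major_cm * intermediate_cm * minor_cm) ** (1.0 / 3.0)
    arithmetic_mean = (major_cm + intermediate_cm + minor_cm) / 3.0
    sphericity = geometric_mean / major_cm       # dimensionless, 1.0 = perfect sphere
    return geometric_mean, arithmetic_mean, sphericity

# Example with mid-range dimensions taken from the ranges reported in the abstract (cm)
dg, da, phi = seed_size_properties(8.3, 8.0, 5.3)
print(f"Dg = {dg:.2f} cm, Da = {da:.2f} cm, sphericity = {phi:.2f}")
```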

Keywords: giant palm seeds, engineering properties, oil potential, moisture content, and giant palm fruit

Procedia PDF Downloads 55
496 Effect of Shot Peening on the Mechanical Properties for Welded Joints of Aluminium Alloy 6061-T6

Authors: Muna Khethier Abbass, Khairia Salman Hussan, Huda Mohummed AbdudAlaziz

Abstract:

This work aims to study the effect of shot peening on the mechanical properties of welded joints produced by two different welding processes, tungsten inert gas (TIG) welding and friction stir welding (FSW), on aluminum alloy 6061-T6. The arc welding process (TIG) was carried out on sheets with dimensions of 100 x 50 x 6 mm to obtain welded joints, using an ER4043 (AlSi5) electrode as filler metal and argon as shielding gas. The friction stir welding process was carried out on a CNC milling machine with a tool rotational speed of 1000 rpm and a welding speed of 20 mm/min to obtain the same butt welded joints. The welded pieces were tested by X-ray radiography to detect internal defects, and faulty welded pieces were excluded. Tensile test specimens were prepared from the welded joints and the base alloy with dimensions according to ASTM17500 and then subjected to shot peening using steel balls of 0.9 mm diameter for 15 min. All specimens were subjected to Vickers hardness testing and microstructure examination to study the effect of the welding process (TIG and FSW) on the microstructure of the weld zones. Results showed a general decay of the mechanical properties of the TIG and FSW welded joints compared with the base alloy, while the FSW welded joint gave better mechanical properties than the TIG welded joint. This is due to the microstructural changes during the welding process. It was found that surface hardening by shot peening improved the mechanical properties of both welded joints, owing to the compressive residual stress generated in the weld zones, which was measured using X-ray diffraction (XRD) inspection.

Keywords: friction stir welding, TIG welding, mechanical properties, shot peening

Procedia PDF Downloads 318
495 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” the employees and establishing robust Data Management training programs play an essential role and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud Computing, data optimization

Procedia PDF Downloads 51
494 Effect of High Intensity Ultrasonic Treatment on the Microstructure, Corrosion and Mechanical Behavior of AC4C Aluminium Alloy

Authors: A.Farrag Farrag, A. M. El-Aziz Abdel Aziz, W. Khlifa Khlifa

Abstract:

Ultrasonic treatment is a promising process in the engineering field due to its high efficiency and low cost. It enhances mechanical properties, corrosion resistance, and the homogeneity of the microstructure. In this study, the effect of ultrasonic treatment and several casting conditions on the microstructure, hardness, and corrosion behavior of AC4C aluminum alloy was examined. Various ultrasonic treatments of the AC4C alloy were carried out to prepare billets for the thixocasting process. Treatment started at temperatures of about 630 °C, with cooling carried out under the ultrasonic field. Treatment time was about 90 s. A 600 W ultrasonic system operating at 19.5 kHz with an intensity of 170 W/cm² was used. Billets were reheated to the semisolid state, held (soaked) for 5 minutes at 582 °C using a high-frequency induction system, and then thixocast using a die casting machine. Microstructures of the thixocast parts were studied using optical and SEM microscopes. For comparison, two samples were conventionally cast and poured at 634 °C and 750 °C. The microstructure showed globular, non-dendritic grains for AC4C with the application of UST at 630-582 °C; less dendritic grains when the sample was conventionally cast without UST and poured at 624 °C; and a fully dendritic microstructure when the sample was cast and poured at 750 °C without UST. The ultrasonic treatment during solidification proved to have a positive influence on the microstructure, as it produced the finest and most globular grains, and it is therefore expected to increase the mechanical properties of the alloy. Higher values of corrosion resistance and hardness were recorded for the ultrasound-treated sample in comparison to the cast one.

Keywords: ultrasonic treatment, aluminum alloys, corrosion behaviour, mechanical behaviour, microstructure

Procedia PDF Downloads 328
493 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli

Authors: Joanna Maj

Abstract:

Body-machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, benefiting patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibrotactile discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs that use vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probabilities that probe stimuli felt more intense than the standard stimulus were computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially vs. simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
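
A minimal sketch of the threshold-estimation step described above, fitting a cumulative Gaussian to the "probe felt more intense" probabilities and reading off one standard deviation (the response proportions below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Probe frequencies (Hz) and the proportion of trials judged "more intense" than the 186 Hz standard
probe_hz = np.linspace(100, 235, 11)
p_more = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.50, 0.68, 0.80, 0.90, 0.95, 0.98])  # invented

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, probe_hz, p_more, p0=[186.0, 20.0])
print(f"point of subjective equality = {mu:.1f} Hz, discrimination threshold = {sigma:.1f} Hz")
```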

Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering

Procedia PDF Downloads 43
492 Modeling of a Pilot Installation for the Recovery of Residual Sludge from Olive Oil Extraction

Authors: Riad Benelmir, Muhammad Shoaib Ahmed Khan

Abstract:

The socio-economic importance of olive oil production is significant in the Mediterranean region, both in terms of wealth and tradition. However, the extraction of olive oil generates huge quantities of waste that can have a great impact on land and water environments because of its high phytotoxicity. Olive mill wastewater (OMWW) in particular is one of the major environmental pollutants of the olive oil industry. This work aims to design smart and sustainable integrated thermochemical catalytic processing of residues from olive mills through hydrothermal carbonization (HTC) of olive mill wastewater (OMWW) and fast pyrolysis of olive mill wastewater sludge (OMWS). The byproducts resulting from the OMWW-HTC treatment are a carbon-enriched solid phase, called biochar, and a liquid phase (residual water with less dissolved organic and phenolic compounds). The HTC biochar can be tested as a fuel in combustion systems and will also be utilized in high-value applications, such as a soil bio-fertilizer and as a catalyst or catalyst support. The HTC residual water is characterized, treated, and used in soil irrigation, since the organic and toxic compounds will be reduced below the permitted limits. The project concept also includes the conversion of OMWS to a green diesel through a catalytic pyrolysis process. The green diesel is then used as a biofuel in an internal combustion engine (IC engine) for automotive application, contributing to clean transportation. In this work, a theoretical study is carried out on the use of heat from the non-condensable pyrolysis gases in a sorption refrigeration machine for pyrolysis gas cooling and condensation of the bio-oil vapors.

Keywords: biomass, olive oil extraction, adsorption cooling, pyrolysis

Procedia PDF Downloads 66
491 Diabetes Mellitus and Blood Glucose Variability Increases the 30-day Readmission Rate after Kidney Transplantation

Authors: Harini Chakkera

Abstract:

Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts. This has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for September 2015 to December 2018. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, and 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
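
A minimal sketch of the modelling step described in the Methods (an XGBoost classifier evaluated by AUC over repeated splits); the feature set and data here are synthetic placeholders, not the study data:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins: length of stay, min/max glucose, recipient BMI, donor BMI, pretransplant diabetes
rng = np.random.default_rng(42)
X = rng.normal(size=(1036, 6))
y = rng.binomial(1, 0.22, size=1036)          # ~22% readmission rate, as in the cohort

aucs = []
for seed in range(5):                          # five resampled partitions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=seed)
    model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss")
    model.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

print("mean AUC:", np.mean(aucs))
print("feature importances:", model.feature_importances_)
```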

Keywords: kidney, transplant, diabetes, insulin

Procedia PDF Downloads 56
490 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images minimizes the complexity of work and time. Data and images are captured at regular intervals by satellite remote sensing systems; the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding the most suitable classification algorithm among those available that can classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many researchers are turning to deep learning algorithms. Because the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) airborne dataset for classifying images. Thus, in this project of classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is analyzed.
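
A minimal CNN sketch of the kind compared in the project, written for the SAT-4 image shape (28 x 28 pixels, 4 spectral bands, 4 land-cover classes); the architecture and hyperparameters are illustrative assumptions, not the authors' model:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# SAT-4 patches are 28x28 with 4 bands (R, G, B, NIR) and 4 land-cover classes
model = models.Sequential([
    layers.Input(shape=(28, 28, 4)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))  # once SAT-4 arrays are loaded
```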

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 134
489 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs

Authors: H. M. Soroush

Abstract:

The problem of scheduling products and services for on-time deliveries is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedules should frequently decide whether to schedule a job based on its processing time, due-date, and the penalty for tardy delivery to improve the system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where processing times or due-dates of jobs are random variables and fixed weights (penalties) are imposed on the jobs’ late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios of the problem wherein: (i) both processing times and due-dates are stochastic; (ii) processing times are stochastic and due-dates are deterministic; and (iii) processing times are deterministic and due-dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near optimal sequences. The results also demonstrate that the stochasticity of processing times or due-dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
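
To make the objective concrete, a small sketch for data in the style of scenario (ii), evaluated for a fixed sequence: if processing times are independent normal random variables and due dates are fixed, the expected weighted number of tardy jobs for a given job order can be computed directly (an illustrative calculation with hypothetical job data, not the paper's heuristic):

```python
import numpy as np
from scipy.stats import norm

# Per-job data (hypothetical): mean/variance of processing time, due date, tardiness weight
mu  = np.array([4.0, 2.0, 6.0, 3.0])
var = np.array([1.0, 0.5, 2.0, 0.8])
due = np.array([5.0, 8.0, 12.0, 16.0])
w   = np.array([3.0, 1.0, 2.0, 4.0])

def expected_weighted_tardy(order):
    """E[weighted number of tardy jobs] for a fixed sequence with normal processing times."""
    total = 0.0
    for k, j in enumerate(order):
        m = mu[order[:k + 1]].sum()             # mean completion time of job j
        s = np.sqrt(var[order[:k + 1]].sum())   # std dev of completion time of job j
        total += w[j] * (1.0 - norm.cdf(due[j], loc=m, scale=s))  # w_j * P(C_j > d_j)
    return total

print(expected_weighted_tardy([0, 1, 2, 3]))
print(expected_weighted_tardy([1, 0, 3, 2]))
```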

Keywords: number of late jobs, scheduling, single server, stochastic

Procedia PDF Downloads 479
488 A High Content Screening Platform for the Accurate Prediction of Nephrotoxicity

Authors: Sijing Xiong, Ran Su, Lit-Hsin Loo, Daniele Zink

Abstract:

The kidney is a major target for toxic effects of drugs, industrial and environmental chemicals and other compounds. Typically, nephrotoxicity is detected late during drug development, and regulatory animal models could not solve this problem. Validated or accepted in silico or in vitro methods for the prediction of nephrotoxicity are not available. We have established the first and currently only pre-validated in vitro models for the accurate prediction of nephrotoxicity in humans and the first predictive platforms based on renal cells derived from human pluripotent stem cells. In order to further improve the efficiency of our predictive models, we recently developed a high content screening (HCS) platform. This platform employed automated imaging in combination with automated quantitative phenotypic profiling and machine learning methods. 129 image-based phenotypic features were analyzed with respect to their predictive performance in combination with 44 compounds with different chemical structures that included drugs, environmental and industrial chemicals and herbal and fungal compounds. The nephrotoxicity of these compounds in humans is well characterized. A combination of chromatin and cytoskeletal features resulted in high predictivity with respect to nephrotoxicity in humans. Test balanced accuracies of 82% or 89% were obtained with human primary or immortalized renal proximal tubular cells, respectively. Furthermore, our results revealed that a DNA damage response is commonly induced by different PTC-toxicants with diverse chemical structures and injury mechanisms. Together, the results show that the automated HCS platform allows efficient and accurate nephrotoxicity prediction for compounds with diverse chemical structures.

Keywords: high content screening, in vitro models, nephrotoxicity, toxicity prediction

Procedia PDF Downloads 294
487 Characteristics of the Particle Size Distribution and Exposure Concentrations of Nanoparticles Generated from the Laser Metal Deposition Process

Authors: Yu-Hsuan Liu, Ying-Fang Wang

Abstract:

The objectives of the present study were to characterize the nanoparticles generated from the laser metal deposition (LMD) process and to estimate the particle concentrations deposited in the head (H), tracheobronchial (TB), and alveolar (A) regions, respectively. The studied LMD chamber (3.6 m x 3.8 m x 2.9 m) is equipped with a robotic laser metal deposition machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used to conduct static sampling inside the chamber for nanoparticle number concentration and particle size distribution measurements. The SMPS recorded particle number concentrations every 3 minutes and covered diameters from 11 to 372 nm when the aerosol and sheath flow rates were set at 0.6 and 6 L/min, respectively. The resultant size distributions were used to predict the deposition of nanoparticles in the H, TB, and A regions of the respiratory tract using the UK National Radiological Protection Board's (NRPB's) LUDEP software. Results show that the nanoparticle number concentrations in the indoor background and the LMD chamber were 4.8×10³ and 4.3×10⁵ #/cm³, respectively. The nanoparticles emitted from the LMD process followed a uni-modal distribution with a number median diameter (NMD) of 142 nm and a geometric standard deviation (GSD) of 1.86. The fraction of nanoparticles deposited in the alveolar region (A: 69.8%) was higher than in the other two regions, the head region (H: 10.9%) and the tracheobronchial region (TB: 19.3%). This study conducted static sampling to measure the nanoparticles from the LMD process, and the results show that the fraction of particles deposited in the A region was higher than in the other two regions. Therefore, the characteristics of nanoparticles emitted from the LMD process could provide valuable science-based evidence for exposure assessments in the future.
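
A small sketch of how the reported uni-modal size distribution can be reconstructed from the NMD and GSD as a lognormal number distribution (illustrative only; the deposition fractions themselves came from the LUDEP software and are not reproduced here):

```python
import numpy as np
from scipy.stats import lognorm

nmd_nm, gsd = 142.0, 1.86                      # number median diameter and geometric standard deviation
dist = lognorm(s=np.log(gsd), scale=nmd_nm)    # lognormal number-size distribution

# Fraction of particles that falls inside the SMPS measurement window (11-372 nm)
in_window = dist.cdf(372.0) - dist.cdf(11.0)
print(f"fraction within 11-372 nm: {in_window:.3f}")

# Number-weighted size distribution evaluated on a diameter grid (e.g. for plotting)
d = np.logspace(np.log10(11), np.log10(372), 50)
dN = dist.pdf(d)
```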

Keywords: exposure assessment, laser metal deposition process, nanoparticle, respiratory region

Procedia PDF Downloads 266
486 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to inadequate diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNN) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analyzing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture, based on residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment, which results in improved post-treatment outcomes.

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 154
485 Performance Analysis of Pumps-as-Turbine Under Cavitating Conditions

Authors: Calvin Stephen, Biswajit Basu, Aonghus McNabola

Abstract:

Market liberalization in the power sector has led to the emergence of micro-hydropower schemes that depend on the use of pumps-as-turbines in applications that were not considered suitable hydropower sites in earlier years. These applications include energy recovery in water supply networks, sewage systems, irrigation systems, alcohol breweries, underground mining, and desalination plants. As a result, there has been an accelerated adoption of pumps-as-turbine technology due to the economic advantages it presents in comparison to conventional turbines in the micro-hydropower space. The performance of these machines under cavitating conditions, however, is not well understood, as there is a deficiency of knowledge in the literature focused on their turbine mode of operation. In hydraulic machines, cavitation is a common occurrence that needs to be understood to safeguard them and prolong their operating life. The overall purpose of this study is to investigate the effects of cavitation on the performance of a pumps-as-turbine system over its entire operating range. At various operating speeds, the cavitating region is identified experimentally while monitoring the effects this has on the power produced by the machine. Initial results indicate the occurrence of cavitation at higher flow rates for lower operating speeds and at lower flow rates for higher operating speeds. This implies that, for cavitation-free operation, low-speed pumps-as-turbines should be used under low flow rate conditions, whereas high-speed machines should be adopted for sites with higher flow rates. Such a complete understanding of pumps-as-turbine suction performance can help avoid cavitation-induced failures and hence improve the reliability of the micro-hydropower plant.
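
For context, suction performance of hydraulic turbomachinery is commonly characterized by the Thoma cavitation factor; the relation below is the standard textbook form, not a result from this paper (p_atm atmospheric pressure, p_v vapour pressure, H_s suction head of the machine above the downstream water level, H net head across the machine):

```latex
\sigma = \frac{\mathrm{NPSH_a}}{H},
\qquad
\mathrm{NPSH_a} \approx \frac{p_{atm} - p_v}{\rho g} - H_s
```

Cavitation-free operation requires the available value of sigma to remain above the critical value of the machine at the operating flow rate and speed.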

Keywords: cavitation, micro-hydropower, pumps-as-turbine, system design

Procedia PDF Downloads 86
484 Applications of Evolutionary Optimization Methods in Reinforcement Learning

Authors: Rahul Paul, Kedar Nath Das

Abstract:

The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent in order to maximize the cumulative reward it receives throughout a given period. Nevertheless, the process of optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant advancements in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in facilitating RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
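
As a concrete illustration of how an evolutionary method can optimize a policy without gradients, here is a minimal evolution-strategies sketch on a toy one-step problem (purely illustrative; the toy reward function and all hyperparameters are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, POP, SIGMA, LR = 4, 50, 0.1, 0.05

def episode_return(theta):
    """Toy stand-in for an RL rollout: reward is highest when the policy parameters match a target."""
    target = np.array([0.5, -1.0, 2.0, 0.0])
    return -np.sum((theta - target) ** 2)

theta = np.zeros(STATE_DIM)                     # policy parameters being evolved
for generation in range(200):
    noise = rng.normal(size=(POP, STATE_DIM))   # population of parameter perturbations
    returns = np.array([episode_return(theta + SIGMA * n) for n in noise])
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    theta += LR / (POP * SIGMA) * noise.T @ advantages   # evolution-strategies gradient estimate

print("evolved parameters:", np.round(theta, 2))
```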

Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods

Procedia PDF Downloads 58
483 Energy Production with Closed Methods

Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani

Abstract:

In Kosovo, the problem with the electricity supply is significant, and supply does not meet consumer demand. Most of the energy is produced by older thermal power plants, which are regarded as major environmental polluters. Our experiment is based on producing electricity using a closed method that avoids environmental pollution, using as fuel wastes that are otherwise considered environmental pollutants. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line providing both electricity and central heating was designed. The results are the generation of electricity as well as the release of heat for heating, at minimal expense and with 0% of gases released into the atmosphere. Coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows the gas released during top-to-bottom combustion of the raw material in the boiler to pass through pipes and filters, followed by filtration of the gas through waste from wood processing (sawdust). The final product of this process is a gas that passes through the carburetor, which enables the gas combustion process, drives the internal combustion engine and the generator, and produces electricity without releasing gases into the atmosphere. The obtained results show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. The final results show that coal fuel yielded the most electricity and the highest heat release, followed by plastic waste, which also gave good results. The results obtained during these experiments prove that the current problems of lack of electricity and heating can be addressed at a lower cost while maintaining a clean environment and proper waste management.

Keywords: energy, heating, atmosphere, waste, gasification

Procedia PDF Downloads 210
482 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographic on Public Health

Authors: Irfan Ahmad Afip

Abstract:

The main research objective is to accurately identify the possible severity of a leptospirosis outbreak in a given area based on input features to a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is subject to these features, then the epidemic severity of an area will differ depending on its environmental setting, because the features will influence both the possibility and the severity of an outbreak. Specifically, the research objectives were three-fold, namely: (a) to identify the relevant multivariate features and visualize the patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model and machine learning algorithms. Several secondary data features were collected for locations in the state of Negeri Sembilan, Malaysia, based on their possible relevance to determining outbreak severity in the area. The relevant features then become the input to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities, and a multivariate regression model has proven more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between social demographic and hydrometeorological features and the Leptospirosis bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The established relationship can help the health department or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring.

Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression

Procedia PDF Downloads 90
481 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks

Authors: Adrian Ionita, Ana-Maria Ghimes

Abstract:

The lack of features and design, as well as the lack of promotion of integrated booking applications, are some of the reasons why most online travel platforms only offer automation of old booking processes, being limited to the integration of a smaller number of services without addressing the user experience. This paper presents a practical study on how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their 'friends' in the 'social' network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broader range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific approach remains the technical implementation of the neural network solution. The motivation for the technologies used is also related to the initiative of some online booking providers that have made public the fact that they use neural-network-related designs. These companies use similar Big Data technologies to provide recommendations for hotels, restaurants, and cinemas with a neural network based recommendation engine that builds a user 'DNA profile'. This implementation of the 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.

Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling

Procedia PDF Downloads 143
480 Automation of Pneumatic Seed Planter for System of Rice Intensification

Authors: Tukur Daiyabu Abdulkadir, Wan Ishak Wan Ismail, Muhammad Saufi Mohd Kassim

Abstract:

Seed singulation and accuracy in seed spacing are the major challenges associated with the adoption of mechanical seeders for the system of rice intensification. In this research, the metering system of a pneumatic planter was modified and automated for increased precision to meet the demands of the system of rice intensification (SRI). The chain and sprocket mechanism of a conventional vacuum planter was replaced with an electromechanical system made up of a set of servo motors, a limit switch, a microcontroller, and a wheel divided into 10 equal angles. The circumference of the planter wheel was determined, from which the seed spacing was computed and mapped to the angles of the metering wheel. A program was then written and uploaded to an Arduino microcontroller, and the system automatically turns the seed plates for seeding upon covering the required distance. The servo motor was calibrated with the aid of LabVIEW. The machine was then calibrated using a grease belt, varying the servo speed through voltage variation between 37 rpm and 47 rpm until an optimum value of 40 rpm was obtained at a forward speed of 5 kilometers per hour. A pressure of 1.5 kPa was found to be optimum, under which no skip or double was recorded. Precision in spacing (coefficient of variation), miss index, multiple index, doubles, and skips were investigated. No skip or double was recorded at either the laboratory or field level. The operational parameters under consideration were evaluated both in the laboratory and in the field. Even though there was little variation between the laboratory and field values of precision in spacing, multiple index, and miss index, the difference is not significant, as both laboratory and field values fall within the acceptable range.
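
A small worked sketch of the spacing-to-rotation mapping described above (illustrative arithmetic under assumed values; the 10 cells per revolution and the 5 km/h forward speed follow the abstract, while the wheel diameter and target spacing are hypothetical):

```python
import math

forward_speed_kmh = 5.0                 # planter forward speed (from the abstract)
target_spacing_m = 0.25                 # assumed SRI plant spacing
cells_per_rev = 10                      # metering wheel divided into 10 equal angles
wheel_diameter_m = 0.40                 # assumed ground-wheel diameter

forward_speed_m_min = forward_speed_kmh * 1000 / 60          # ~83.3 m/min
seeds_per_min = forward_speed_m_min / target_spacing_m        # seed drops required per minute
metering_rpm = seeds_per_min / cells_per_rev                  # required metering wheel speed
ground_wheel_rpm = forward_speed_m_min / (math.pi * wheel_diameter_m)

print(f"seed drops per minute: {seeds_per_min:.0f}")
print(f"metering wheel speed:  {metering_rpm:.1f} rpm")
print(f"ground wheel speed:    {ground_wheel_rpm:.1f} rpm")
```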

Keywords: automation, calibration, pneumatic seed planter, system of rice intensification

Procedia PDF Downloads 619
479 A Study Problem and Needs Compare the Held of the Garment Industries in Nonthaburi and Bangkok Area

Authors: Thepnarintra Praphanphat

Abstract:

The purposes of this study were to investigate the garment industry's conditions, problems, and need for assistance. The population of the study was 504 managers or managing directors of finished-apparel garment establishments holding permission from the Department of Industrial Works, Ministry of Industry, as of January 1, 2012. The sample size, determined using the Taro Yamane formula at a 95% confidence level with a ±5% deviation, was 224 managers. Questionnaires were used to collect the data. Percentage, frequency, arithmetic mean, standard deviation, t-test, ANOVA, and LSD were used to analyze the data. It was found that most establishments were of a large size, had operated in the form of a limited company for more than 15 years, and mostly produced garments for working women. All investment was made by Thai people. The products were made to order and distributed both domestically and internationally. The total sales of the years 2010, 2011, and 2012 were almost the same. With respect to the problems of operating the business, the study indicated that, as a whole, by aspect, and by item, they were at a high level. The comparison of the level of problems in operating a garment business as classified by general condition showed that the problems occurring in businesses of different sizes were, as a whole, not different. When aspects were taken into consideration, it was found that the level of problems in relation to production was different; medium establishments had more problems in production than those of small and large sizes. According to the by-item analysis, five problems were found to be different, namely problems concerning employees, machine maintenance, number of designers, and price competition. Such problems in the medium establishments were at a higher level than those in the small and large establishments. Regarding business age, the examination yielded no differences as a whole, by aspect, or by item. The statistical significance level of this study was set at .05.
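
The sample size above follows the standard Taro Yamane formula; the substitution below is added for clarity and uses the figures given in the abstract:

```latex
n = \frac{N}{1 + N e^{2}} = \frac{504}{1 + 504 \times (0.05)^{2}} \approx 223
```

which the study rounds up to the reported 224 respondents.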

Keywords: garment industry, garment, fashion, competitive enhancement project

Procedia PDF Downloads 170
478 The Associations between Ankle and Brachial Systolic Blood Pressures with Obesity Parameters

Authors: Matei Tudor Berceanu, Hema Viswambharan, Kirti Kain, Chew Weng Cheng

Abstract:

Background - Obesity parameters, particularly visceral obesity as measured by the waist-to-height ratio (WHtR), correlate with insulin resistance. The metabolic microvascular changes associated with insulin resistance cause increased peripheral arteriolar resistance, primarily in the lower limb vessels. We hypothesize that ankle systolic blood pressures (SBPs) are more significantly associated with visceral obesity than brachial SBPs. Methods - 1098 adults, enriched for south Asians and Europeans with diabetes (T2DM), were recruited from a primary care practice in West Yorkshire. Their medical histories, including T2DM and cardiovascular disease (CVD) status, were gathered from an electronic database. The brachial, dorsalis pedis, and posterior tibial SBPs were measured using a Doppler machine. Their body mass index (BMI) and WHtR were calculated after measuring their weight, height, and waist circumference. Linear regressions were performed between the 6 SBPs and both obesity parameters, after adjusting for covariates. Results - Generally, the left posterior tibial SBP (P=4.559*10⁻¹⁵) and right posterior tibial SBP (P=1.114*10⁻¹³) are the pressures most significantly associated with the BMI, overall as well as in south Asians (P < 0.001) and Europeans (P < 0.001) specifically. In south Asians, although the left (P=0.032) and right brachial SBP (P=0.045) were associated with the WHtR, the left posterior tibial SBP (P=0.023) showed the strongest association. Conclusion - Regardless of ethnicity, ankle SBPs are more significantly associated with generalized obesity than brachial SBPs, suggesting their potential for screening for early detection of T2DM and CVD. A combination of ankle SBPs with WHtR is proposed for south Asians.

Keywords: ankle blood pressures, body mass index, insulin resistance, waist-to-height-ratio

Procedia PDF Downloads 120
477 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in their characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varied lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models such as the CNN (Convolutional Neural Network) and the Visual Attention neural network (VAN) were used to experiment with the dataset. A Visual Attention neural network (VAN) architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that solely used the original grayscale images. The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada words when it was tested on these languages separately. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
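
A minimal sketch of the edge-augmented input described above: a Canny edge map is stacked with the grayscale image as an extra channel before being fed to a small CNN (an illustrative stand-in, not the authors' VAN architecture; the image size, class count, and hyperparameters are assumptions):

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def to_two_channel(gray_img):
    """Stack a grayscale character image with its Canny edge map as a second channel."""
    edges = cv2.Canny(gray_img, threshold1=50, threshold2=150)
    return np.stack([gray_img, edges], axis=-1).astype("float32") / 255.0

num_classes = 49        # hypothetical; set to the number of character classes in the dataset
model = models.Sequential([
    layers.Input(shape=(64, 64, 2)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```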

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 34
476 Studies on the Proximate Composition and Functional Properties of Extracted Cocoyam Starch Flour

Authors: Adebola Ajayi, Francis B. Aiyeleye, Olakunke M. Makanjuola, Olalekan J. Adebowale

Abstract:

Cocoyam, a generic term for both Xanthosoma and Colocasia, is a traditional staple root crop in many developing countries in Africa, Asia, and the Pacific. It is mostly cultivated as a food crop and is very rich in vitamin B6, magnesium, and dietary fiber. Cocoyam starch is easily digested and often used for baby food. Drying is a method of food preservation that removes enough moisture from the food so that bacteria, yeast, and molds cannot grow; it is one of the oldest methods of preserving food. The effect of drying methods on the proximate composition and functional properties of extracted cocoyam starch flour was studied. Freshly harvested cocoyam cultivars at the mature stage were washed with potable water, peeled, washed, and grated. The starch in the grated cocoyam was extracted and dried using sun drying, an oven dryer, and a cabinet dryer. The extracted starch was milled into flour using an Apex mill, packed and sealed in low-density polyethylene (LDPE) film of 75 micron thickness with a QN5-3200HI nylon sealing machine, and kept for three months at ambient temperature before analysis. The results showed that the moisture content, ash, crude fiber, fat, protein, and carbohydrate ranged from 6.28% to 12.8%, 2.32% to 3.2%, 0.89% to 2.24%, 1.89% to 2.91%, 7.30% to 10.2%, and 69% to 83%, respectively. The functional properties of the cocoyam starch flour ranged from 2.65 ml/g to 4.84 ml/g for water absorption capacity, 1.95 ml/g to 3.12 ml/g for oil absorption capacity, 0.66 ml/g to 7.82 ml/g for bulk density, and 3.82% to 5.30 ml/g for swelling capacity. No significant difference (P≥0.5) was obtained across the various drying methods used. The drying methods extend the shelf-life of the extracted cocoyam starch flour.

Keywords: cocoyam, extraction, oven dryer, cabinet dryer

Procedia PDF Downloads 268
475 Prevalence of Diabetes Mellitus Among Human Immune Deficiency Virus-Positive Patients Under Anti-retroviral Attending in Rwanda, a Case Study of University Teaching Hospital of Butare

Authors: Venuste Kayinamura, V. Iyamuremye, A. Ngirabakunzi

Abstract:

Antiretroviral therapy (ART) for HIV patients can cause deficiencies in glucose metabolism by promoting insulin resistance, glucose intolerance, and diabetes. Diabetes mellitus keeps increasing among HIV-infected patients worldwide, but there are limited data on blood glucose levels and their relationship with antiretroviral drugs (ARVs) and HIV infection, particularly in Rwanda. A convenience sampling strategy was used in this study, involving 323 HIV patients (n=323). Patients who are HIV positive and under ARVs were included. The patients' blood glucose was analyzed using an automated machine or glucometer (COBAS C 311). Data were analyzed using Microsoft Excel and SPSS V. 20.0 and presented in percentages. The highest diabetes mellitus prevalence was 93.33% in people aged >40 years, while the lowest diabetes mellitus prevalence was 6.67% in people aged between 21 and 40 years. The P-value was 0.021; thus, there is a significant association between age and diabetes occurrence. The highest diabetes mellitus prevalence was 28.2% in patients under ART treatment for more than 10 years, while it was 16.7% in those treated for <5 years and 20% in patients on ART treatment for between 5 and 10 years. The P-value here is 0.03; thus, the incidence of diabetes is associated with long-term ART use in HIV-infected patients. This study assessed the prevalence of diabetes among HIV-infected patients under ARVs attending the University Teaching Hospital of Butare (CHUB), and it shows that the prevalence of diabetes is high in HIV-infected patients under ART. This study found no significant relationship between gender and diabetes mellitus occurrence. Therefore, regular assessment of diabetes mellitus, especially among HIV-infected patients under ARVs, is highly recommended to control other health issues caused by diabetes mellitus.

Keywords: anti-retroviral, diabetes mellitus, antiretroviral therapy, human immune deficiency virus

Procedia PDF Downloads 93
474 Experimental Investigations on the Mechanical properties of Spiny (Kawayan Tinik) Bamboo Layers

Authors: Ma. Doreen E. Candelaria, Ma. Louise Margaret A. Ramos, Dr. Jaime Y. Hernandez, Jr

Abstract:

Bamboo has been introduced as a possible alternative to some construction materials nowadays. Its potential use in the field of engineering, however, is still not widely practiced due to insufficient engineering knowledge on the material’s properties and characteristics. Although there are researches and studies proving its advantages, it is still not enough to say that bamboo can sustain and provide the strength and capacity required of common structures. In line with this, a more detailed analysis was made to observe the layered structure of the bamboo, particularly the species of Kawayan Tinik. It is the main intent of this research to provide the necessary experiments to determine the tensile strength of dried bamboo samples. The test includes tensile strength parallel to fibers with samples taken at internodes only. Throughout the experiment, methods suggested by the International Organization for Standardization (ISO) were followed. The specimens were tested using 3366 INSTRON Universal Testing Machine, with a rate of loading set to 0.6 mm/min. It was then observed from the results of these experiments that dried bamboo samples recorded high layered tensile strengths, as high as 600 MPa. Likewise, along the culm’s length and across its cross section, higher tensile strength were observed at the top part and at its outer layers. Overall, the top part recorded the highest tensile strength per layer, with its outer layers having tensile strength as high as 600 MPa. The recorded tensile strength of its middle and inner layers, on the other hand, were approximately 450 MPa and 180 MPa, respectively. From this variation in tensile strength across the cross section, it may be concluded that an increase in tensile strength may be observed towards the outer periphery of the bamboo. With these preliminary investigations on the layered tensile strength of bamboo, it is highly recommended to conduct experimental investigations on the layered compressive strength properties as well. It is also suggested to conduct investigations evaluating perpendicular layered tensile strength of the material.

Keywords: bamboo strength, layered strength tests, strength test, tensile test

Procedia PDF Downloads 387
473 Comparison of Mechanical Properties of Three Different Orthodontic Latex Elastic Bands Leached with NaOH Solution

Authors: Thipsupar Pureprasert, Niwat Anuwongnukroh, Surachai Dechkunakorn, Surapich Loykulanant, Chaveewan Kongkaew, Wassana Wichai

Abstract:

Background: Orthodontic elastic bands made from natural rubber continue to be commonly used due to their favorable characteristics. However, there are concerns about cytotoxicity due to harmful components released during conventional vulcanization (the sulfur-based method). With the co-operation of the National Metal and Materials Technology Center (MTEC) and the Faculty of Dentistry, Mahidol University, a method was introduced to reduce toxic components by leaching the orthodontic elastic bands with NaOH solution. Objectives: To evaluate the mechanical properties of Thai and commercial orthodontic elastic brands (Ormco and W&H) leached with NaOH solution. Materials and methods: Three elastic brands (N = 30, size ¼ inch, 4.5 oz.) were tested for mechanical properties in terms of initial extension force, residual force, force loss, breaking strength, and maximum displacement using a universal testing machine. Results: Force loss significantly decreased in Thai-LEACH and W&H-LEACH, whereas the values increased in Ormco-LEACH (P < 0.05). The data exhibited a significant decrease in breaking strength for Thai-LEACH and Ormco-LEACH, whereas all three brands revealed a significant decrease in maximum displacement with the leaching process (P < 0.05). Conclusion: Leaching with NaOH solution is a new method that can remove toxic components from orthodontic latex elastic bands. However, this process can affect their mechanical properties. The leached Thai elastic bands had properties comparable to those of Ormco and have the potential to be developed into a promising product.

Keywords: leaching, orthodontic elastics, natural rubber latex, orthodontic

Procedia PDF Downloads 252