Search results for: shape error
3191 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign designed at the North Campus, using the standard deviation method (SDM), while the estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidents from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and the grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow
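The fractional bias, geometric mean bias, and normalized mean square error named above follow the standard air-quality model-evaluation definitions. The short sketch below (a generic illustration, not the authors' code) computes them from paired observed and predicted concentrations; the sample values are made up.

```python
import numpy as np

def dispersion_metrics(c_obs, c_pred, eps=1e-12):
    """FB, MG and NMSE for paired observed/predicted concentrations
    (standard definitions; eps guards the log and the division)."""
    c_obs = np.asarray(c_obs, dtype=float)
    c_pred = np.asarray(c_pred, dtype=float)
    fb = 2.0 * (c_obs.mean() - c_pred.mean()) / (c_obs.mean() + c_pred.mean())
    mg = np.exp(np.mean(np.log(c_obs + eps)) - np.mean(np.log(c_pred + eps)))
    nmse = np.mean((c_obs - c_pred) ** 2) / (c_obs.mean() * c_pred.mean())
    return {"FB": fb, "MG": mg, "NMSE": nmse}

# A perfect model gives FB = 0, MG = 1 and NMSE = 0.
print(dispersion_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```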
Procedia PDF Downloads 135
3190 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable hazard rate shapes is introduced and studied. It extends the log-logistic distribution by adding an extra parameter to the classical distribution, giving greater flexibility in analysing and modeling various data types. The proposed distribution includes a large number of well-known lifetime distributions as special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped) or reversed bathtub-shaped hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull, the three-parameter lognormal, the three-parameter gamma, the three-parameter Weibull, and the three-parameter (shifted) log-logistic distributions. The proposed distribution provided a better fit than all of the competing distributions based on goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, a Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
Procedia PDF Downloads 202
3189 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of correlation (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days, and their values remain stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake
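The skill scores listed above (RMSE, MAE, NSE, and the squared correlation R²) can be computed per forecast horizon with a few lines of code. The sketch below is a generic illustration under the usual definitions, not the authors' implementation, and the sample lake levels are made up.

```python
import numpy as np

def forecast_scores(observed, predicted):
    """RMSE, MAE, Nash-Sutcliffe efficiency and squared correlation (R^2),
    computed separately for each forecast horizon (t+1 ... t+10 days)."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((o - p) ** 2))
    mae = np.mean(np.abs(o - p))
    nse = 1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2)
    r2 = np.corrcoef(o, p)[0, 1] ** 2
    return {"RMSE": rmse, "MAE": mae, "NSE": nse, "R2": r2}

# The chosen horizon is the one at which these scores start to degrade.
print(forecast_scores([1.20, 1.25, 1.31, 1.28], [1.22, 1.27, 1.29, 1.30]))
```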
Procedia PDF Downloads 64
3188 The Effect of Global Value Chain Participation on Environment
Authors: Piyaphan Changwatchai
Abstract:
Global value chain is important for current world economy through foreign direct investment. Multinational enterprises' efficient location seeking for each stage of production lead to global production network and more global value chain participation of several countries. Global value chain participation has several effects on participating countries in several aspects including the environment. The effect of global value chain participation on the environment is ambiguous. As a result, this research aims to study the effect of global value chain participation on countries' CO₂ emission and methane emission by using quantitative analysis with secondary panel data of sixty countries. The analysis is divided into two types of global value chain participation, which are forward global value chain participation and backward global value chain participation. The results show that, for forward global value chain participation, GDP per capita affects two types of pollutants in downward bell curve shape. Forward global value chain participation negatively affects CO₂ emission and methane emission. As for backward global value chain participation, GDP per capita affects two types of pollutants in downward bell curve shape. Backward global value chain participation negatively affects methane emission only. However, when considering Asian countries, forward global value chain participation positively affects CO₂ emission. The recommendations of this research are that countries participating in global value chain should promote production with effective environmental management in each stage of value chain. The examples of policies are providing incentives to private sectors, including domestic producers and MNEs, for green production technology and efficient environment management and engaging in international agreements in terms of green production. Furthermore, government should regulate each stage of production in value chain toward green production, especially for Asia countries.Keywords: CO₂ emission, environment, global value chain participation, methane emission
Procedia PDF Downloads 191
3187 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping
Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that acquires information about the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in real situations the constant-velocity assumption is often violated, which may lead to a large deviation between the predicted initial pose and the true value and, in turn, to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decouple the pose transformation matrix and compute the rotation matrix and the translation vector separately, with the rotation matrix represented by a rotation vector. We assume that, over a short period of time, the changes in angular velocity and in the translation increment remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated two sequences from the TUM dataset. The results show that the proposed method yields a more accurate initial pose estimate and improves the accuracy of the ORB-SLAM3 system by 6.61% and 6.46% on the two test sequences, respectively.
Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM
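A minimal sketch of the idea described above, assuming camera-to-world 4x4 pose matrices: the last two inter-frame motions are decoupled into rotation vectors and translations, their increments are extrapolated at a constant rate, and the result gives the initial pose of the current frame. This is an illustrative reading of the paper's model, not the ORB-SLAM3 code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_motion(T_a, T_b):
    """Relative transform from frame a to frame b (camera-to-world 4x4 poses)."""
    return np.linalg.inv(T_a) @ T_b

def predict_next_pose(T_km3, T_km2, T_km1):
    """Constant-acceleration extrapolation of the next pose: decouple each
    inter-frame motion into a rotation vector and a translation, assume their
    per-frame increments change at a constant rate, and recompose."""
    d1 = relative_motion(T_km3, T_km2)          # motion k-3 -> k-2
    d2 = relative_motion(T_km2, T_km1)          # motion k-2 -> k-1
    r1 = R.from_matrix(d1[:3, :3]).as_rotvec()
    r2 = R.from_matrix(d2[:3, :3]).as_rotvec()
    t1, t2 = d1[:3, 3], d2[:3, 3]
    r_next = r2 + (r2 - r1)                     # extrapolated rotation increment
    t_next = t2 + (t2 - t1)                     # extrapolated translation increment
    d_next = np.eye(4)
    d_next[:3, :3] = R.from_rotvec(r_next).as_matrix()
    d_next[:3, 3] = t_next
    return T_km1 @ d_next                       # predicted initial pose of frame k

# Usage: T_pred = predict_next_pose(T[k-3], T[k-2], T[k-1]) before feature matching.
```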
Procedia PDF Downloads 94
3186 The Impact of Introspective Models on Software Engineering
Authors: Rajneekant Bachan, Dhanush Vijay
Abstract:
The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.Keywords: software engineering, architectures, introspective models, operating systems
Procedia PDF Downloads 538
3185 The Per Capita Income, Energy Production and Environmental Degradation: A Comprehensive Assessment of the Existence of the Environmental Kuznets Curve Hypothesis in Bangladesh
Authors: Ashique Mahmud, MD. Ataul Gani Osmani, Shoria Sharmin
Abstract:
In the first quarter of the twenty-first century, the most substantial global concern is environmental contamination, and it has gained the prioritization of both the national and international community. Keeping in mind this crucial fact, this study conducted different statistical and econometrical methods to identify whether the gross national income of the country has a significant impact on electricity production from nonrenewable sources and different air pollutants like carbon dioxide, nitrous oxide, and methane emissions. Besides, the primary objective of this research was to analyze whether the environmental Kuznets curve hypothesis holds for the examined variables. After analyzing different statistical properties of the variables, this study came to the conclusion that the environmental Kuznets curve hypothesis holds for gross national income and carbon dioxide emission in Bangladesh in the short run as well as the long run. This study comes to this conclusion based on the findings of ordinary least square estimations, ARDL bound tests, short-run causality analysis, the Error Correction Model, and other pre-diagnostic and post-diagnostic tests that have been employed in the structural model. Moreover, this study wants to demonstrate that the outline of gross national income and carbon dioxide emissions is in its initial stage of development and will increase up to the optimal peak. The compositional effect will then force the emission to decrease, and the environmental quality will be restored in the long run.Keywords: environmental Kuznets curve hypothesis, carbon dioxide emission in Bangladesh, gross national income in Bangladesh, autoregressive distributed lag model, granger causality, error correction model
Procedia PDF Downloads 150
3184 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves, such as Lamb waves, as an information carrier because of their ability to propagate over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the frequency bandwidth reliable for communication is first extracted experimentally from dispersion curves. An experimental platform for wireless communication using Lamb waves is then described and built. Next, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying, On-Off Keying, and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit, and the bit rate are optimized. Experimental results are compared on the basis of the average Bit Error Rate. The results show a high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate
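The sketch below illustrates coherent (correlator-based) BPSK demodulation and the resulting Bit Error Rate on a toy additive-noise channel; the carrier frequency, sampling rate, cycles per bit, and SNR are placeholder values, not the ultrasonic parameters used in the experiments. It also shows why BPSK is less threshold-sensitive: the decision threshold is simply zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpsk_coherent_ber(n_bits=5000, fc=60e3, fs=1e6, cycles_per_bit=20, snr_db=-20):
    """Toy passband BPSK link with coherent (correlator) demodulation."""
    spb = int(fs / fc * cycles_per_bit)                  # samples per bit
    carrier = np.cos(2 * np.pi * fc * np.arange(spb) / fs)
    bits = rng.integers(0, 2, n_bits)
    tx = ((2 * bits - 1)[:, None] * carrier[None, :]).ravel()
    noise_power = np.var(tx) / (10 ** (snr_db / 10))
    rx = tx + rng.normal(0.0, np.sqrt(noise_power), tx.size)
    # Coherent demodulation: correlate each bit interval with the known carrier.
    corr = (rx.reshape(n_bits, spb) * carrier).sum(axis=1)
    # For BPSK the decision threshold is zero (no amplitude threshold to tune,
    # unlike ASK/OOK); the correlator's processing gain over spb samples
    # recovers the bits even at low per-sample SNR.
    bits_hat = (corr > 0).astype(int)
    return np.mean(bits_hat != bits)                     # average bit error rate

print("BER:", bpsk_coherent_ber())
```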
Procedia PDF Downloads 260
3183 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning
Authors: Hong Zhang
Abstract:
The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help our Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of it by learning PE skills specific for foreign language learning. We have developed PE training materials based on the data collected in a previous study. Training material included the special error types of the output of MT and the error types that our Chinese students studying Spanish could not detect in the experiment last year. This year we performed a pilot study in order to evaluate the PE training materials effectiveness and to what extent PE training helps Chinese students who study the Spanish language. We used screen recording to record these moments and made note of every action done by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish. They were divided into two groups: Group A performed PE training and Group B did not. We prepared a Chinese text for both groups, and participants translated it by themselves (human translation), and then used Google Translate to translate the text and asked them to post-edit the raw MT output. Comparing the results of PE test, Group A could identify and correct the errors faster than Group B students, Group A did especially better in omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From the results of this study, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we could focus on the students’ struggles during their Spanish studies and complete the PE training materials to teach Chinese students learning Spanish with machine translation.Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning
Procedia PDF Downloads 144
3182 Preparation and In vitro Characterization of Nanoparticle Hydrogel for Wound Healing
Authors: Rajni Kant Panik
Abstract:
The aim of the present study was to develop and evaluate a mupirocin-loaded nanoparticle system incorporated into a hydrogel for the treatment of infected wounds. Nanoparticles incorporated in a hydrogel provide a barrier that effectively prevents contamination of the wound and further progression of infection to deeper tissues, while the hydrogel creates a moist healing environment over the wound with good fluid absorbance. Nanoparticles were prepared by the double emulsion solvent evaporation method using different ratios of PLGA polymer, and the hydrogels were developed using sodium alginate and gelatin. The prepared nanoparticles were then incorporated into the hydrogels. The formulations were characterized by FT-IR and DSC for drug-polymer compatibility, and the surface morphology was studied by TEM. The nanoparticle hydrogels were evaluated for size, shape, encapsulation efficiency, and in vitro behavior. FT-IR and DSC confirmed the absence of any drug-polymer interaction. The average nanoparticle size was in the range of 208.21-412.33 nm, and the shape was found to be spherical. The maximum encapsulation efficiency was 69.03%. The in vitro release profile of the nanoparticle-incorporated hydrogel formulation showed sustained release of the drug. Antimicrobial activity testing confirmed that the encapsulated drug retained its effectiveness, and the stability study confirmed that the prepared formulations were stable. The present study supports the finding that mupirocin-loaded nanoparticles incorporated into a hydrogel have the potential to be an effective and safe novel system for the sustained release of mupirocin, which may be a better option for wound management. These findings also support hydrogel-based delivery of antibiotics as a novel topical dosage form for the management of wounds.
Keywords: hydrogel, nanoparticle, PLGA, wound healing
Procedia PDF Downloads 311
3181 An Experimental (Wind Tunnel) and Numerical (CFD) Study on the Flow over Hills
Authors: Tanit Daniel Jodar Vecina, Adriane Prisco Petry
Abstract:
The shape of the wind velocity profile changes according to local features of terrain shape and roughness, which are the parameters responsible for defining the Atmospheric Boundary Layer (ABL) profile. Air flow characteristics over and around landforms, such as hills, are of considerable importance for applications related to wind farm and turbine engineering. The air flow accelerates on top of hills, which can be a decisive factor in wind turbine placement. The present work focuses on the study of ABL behavior as a function of the slope and surface roughness of hill-shaped landforms, using Computational Fluid Dynamics (CFD) to build wind velocity and turbulence intensity profiles. The Reynolds-Averaged Navier-Stokes (RANS) equations are closed using the SST k-ω turbulence model; numerical results are compared to experimental data measured in a wind tunnel over scale models of the hills under consideration. Eight hill models with slopes varying from 25° to 68° were tested for two terrain categories in 2D and 3D, and two analytical codes were used to represent the inlet velocity profiles. Numerical results for the velocity profiles show differences below 4% when compared to the respective experimental data. Turbulence intensity profiles show maximum differences of around 7% when compared to experimental data; this can be explained by the fact that it was not possible to impose inlet turbulence intensity profiles in the simulations. Instead, constant values based on the averages of the turbulence intensity measured at the wind tunnel inlet were used.
Keywords: Atmospheric Boundary Layer, Computational Fluid Dynamics (CFD), Numerical Modeling, Wind Tunnel
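The two analytical inlet profiles referred to above are commonly the logarithmic law and the power law; the sketch below shows both with placeholder roughness parameters (the abstract does not state which codes or coefficients were used).

```python
import numpy as np

def log_law_profile(z, u_star=0.4, z0=0.05, kappa=0.41):
    """Logarithmic ABL profile U(z) = (u*/kappa) * ln((z + z0)/z0);
    u_star and z0 are fitted to the inlet measurements in practice."""
    return (u_star / kappa) * np.log((np.asarray(z, float) + z0) / z0)

def power_law_profile(z, u_ref=10.0, z_ref=10.0, alpha=0.16):
    """Power-law profile U(z) = U_ref * (z/z_ref)**alpha; the exponent alpha
    depends on the terrain roughness category."""
    return u_ref * (np.asarray(z, float) / z_ref) ** alpha

z = np.linspace(0.5, 50.0, 6)        # heights in metres
print(log_law_profile(z))
print(power_law_profile(z))
```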
Procedia PDF Downloads 380
3180 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng
Abstract:
During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, it may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots so as to maximize the number of saved people and/or minimize the search cost under restrictions on the amount of saved people within the allowable response time. We consider a special situation when the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships have no operator on board as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during the search-and rescue missions after an accident at sea. We presume that in the unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error where a target object is not discovered (‘overlooked') by the AMR’s sensors in spite that the AMR is in a close neighborhood of the latter and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operational research problem in comparison with the traditional Kadane-De Groot-Stone search models is that in our model the probability of the successful search outcome depends not only on cost/time/probability parameters assigned to each individual location but, as well, on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, in each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches for a location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and corresponding algorithm.Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea
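As a flavor of the greedy, effectiveness-driven strategy described above, the sketch below implements a textbook Bayesian search loop in which the cell with the highest detection-probability-per-cost is visited and beliefs are updated after each unsuccessful look. It ignores false alarms and the history-dependent effectiveness that the paper's model accounts for, and all numbers are illustrative.

```python
import numpy as np

def greedy_search(prior, detect_prob, cost, n_steps=10):
    """Visit the cell with the best detection-probability-per-cost, then apply
    the Bayesian 'unsuccessful look' update and repeat."""
    p = np.array(prior, dtype=float)
    detect_prob = np.asarray(detect_prob, float)
    cost = np.asarray(cost, float)
    route = []
    for _ in range(n_steps):
        effectiveness = p * detect_prob / cost          # expected detections per unit cost
        j = int(np.argmax(effectiveness))
        route.append(j)
        miss = 1.0 - p[j] * detect_prob[j]              # probability the look finds nothing
        p[j] *= (1.0 - detect_prob[j]) / miss           # posterior for the searched cell
        p[np.arange(p.size) != j] /= miss               # renormalise the other cells
    return route

prior = [0.5, 0.3, 0.2]               # belief that the target is in each cell
detect = [0.8, 0.9, 0.6]              # 1 - false-negative rate per look
cost = [1.0, 2.0, 1.5]                # search time per look
print(greedy_search(prior, detect, cost))
```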
Procedia PDF Downloads 172
3179 Design of the Compliant Mechanism of a Biomechanical Assistive Device for the Knee
Authors: Kevin Giraldo, Juan A. Gallego, Uriel Zapata, Fanny L. Casado
Abstract:
Compliant mechanisms are designed to deform in a controlled manner in response to external forces, using the flexibility of their components to store elastic potential energy during deformation and gradually releasing it when returning to the original form. This article explores the design of a knee orthosis intended to assist users during the stand-up motion. The orthosis makes use of a compliant mechanism to counterbalance the user's weight, thereby minimizing the strain on the leg muscles during stand-up motion. The primary function of the compliant mechanism is to store and exchange potential energy, so that, when coupled with the user's gravitational potential energy, the total potential energy variation is minimized. The design process for the semi-rigid knee orthosis involved material selection and the development of a numerical model of the compliant mechanism treated as a spring. Its geometric properties are obtained through the numerical modeling of the spring once the desired stiffness and safety factor values have been attained. Subsequently, a 3D finite element analysis was conducted. The study shows a strong correlation between the maximum stress in the mathematical model (250.22 MPa) and in the simulation (239.8 MPa), with a 4.16% error. The corresponding safety factors are 1.02 for the mathematical approach and 1.1 for the simulation, a 7.84% difference. The spring stiffness, calculated as 90.82 Nm/rad analytically and 85.71 Nm/rad in the simulation, exhibits a 5.62% difference. These results suggest significant potential for the proposed device in assisting patients with knee orthopedic restrictions, contributing to ongoing efforts to advance the understanding and treatment of knee osteoarthritis.
Keywords: biomechanics, compliant mechanisms, gonarthrosis, orthoses
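The quoted percentage differences are consistent with taking the analytical value as the reference; a quick check:

```python
def pct_diff(analytical, fem):
    """Relative difference, with the analytical value as the reference."""
    return abs(analytical - fem) / analytical * 100.0

print(round(pct_diff(250.22, 239.80), 2))  # max. stress       -> 4.16 %
print(round(pct_diff(1.02, 1.10), 2))      # safety factor     -> 7.84 %
print(round(pct_diff(90.82, 85.71), 2))    # spring stiffness  -> 5.63 % (quoted as 5.62 %)
```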
Procedia PDF Downloads 36
3178 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite: Yttrium Phosphate Bioceramic Composite
Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul
Abstract:
The use of Hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured by pressing and sintering route to a particular shape. However machining processes are still a basic requirement to give a near net shape to those implants for ensuring dimensional and geometrical accuracy. In this context, optimising the machining parameters is an important factor to understand the machinability of the materials and to reduce the production cost. In the present study a method has been optimized to produce true particulate drilled composite of Hydroxyapatite Yttrium Phosphate. The phosphates are used in varying ratio for a comparative study on the effect of flexural strength, hardness, machining (drilling) parameters and bioactivity.. The maximum flexural strength and hardness of the composite that could be attained are 46.07 MPa and 1.02 GPa respectively. Drilling is done with a conventional radial drilling machine aided with dynamometer with high speed steel (HSS) and solid carbide (SC) drills. The effect of variation in drilling parameters (cutting speed and feed), cutting tool, batch composition on torque, thrust force and tool wear are studied. It is observed that the thrust force and torque varies greatly with the increase in the speed, feed and yttrium phosphate content in the composite. Significant differences in the thrust and torque are noticed due to the change of the drills as well. Bioactivity study is done in simulated body fluid (SBF) upto 28 days. The growth of the bone like apatite has become denser with the increase in the number of days for all the composition of the composites and it is comparable to that of the pure hydroxyapatite.Keywords: Bioactivity, Drilling, Hydroxyapatite, Yttrium Phosphate
Procedia PDF Downloads 300
3177 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate to identify both Protein-coding and Non-coding regions. Alignment-free techniques can overcome the limitation of identifying both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and to 0.718, respectively, for the first, second and third iterations. The iterations terminated at the 390th epoch, having an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy (0.969), sensitivity (0.966), and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, thereby making the developed model better in the identification of Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
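A minimal sketch of the logistic-regression core described above, assuming a six-column feature matrix X and binary labels y; the learning rate is a placeholder and this is not the authors' PNRI code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=390):
    """Gradient-descent logistic regression: X is an (n_samples, 6) matrix of
    sequence-derived features, y holds 0/1 labels (non-coding/coding)."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])     # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xb @ w)
        grad = Xb.T @ (p - y) / y.size                # gradient of the negative log-likelihood
        w -= lr * grad
    return w

def classify(X, w, threshold=0.5):
    """Dynamic thresholding: the cut-off can be moved along the ROC curve
    instead of being fixed at 0.5."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ w) >= threshold).astype(int)
```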
Procedia PDF Downloads 68
3176 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer
Authors: Aprajeeta Jha, Punyadarshini P. Tripathy
Abstract:
Drying technologies for various food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. However, the performance and reliability of solar dryers depend heavily on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome the disadvantages of conventional solar dryers. For the development of such robust hybrid dryers, optimization of the process parameters becomes extremely critical to ensure the quality and shelf-life of the paddy grains. Investigation of the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in the hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a three-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model, and optimization of the process parameters (power level, air velocity, and moisture content) was carried out using response surface methodology in the Design-Expert software. A 3D finite element model predicting moisture migration in a single kernel at every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with the experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb) moisture content, with an optimum temperature, milling yield, and drying time of 42˚C, 62%, and 86 min, respectively, and a desirability of 0.905. These optimized conditions can be used to dry paddy in the PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of the product. PV-integrated hybrid solar dryers can be employed as a potential and cutting-edge drying technology for sustainable energy and food security.
Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer
Procedia PDF Downloads 150
3175 Development of 4-Allylpyrocatechol Loaded Self-Nanoemulsifying Drug Delivery System for Enhancing Water Solubility and Antibacterial Activity against Oral Pathogenic Bacteria
Authors: Pimpak Phumat, Sakornrat Khongkhunthian, Thomas Rades, Anette Müllertz, Siriporn Okonogi
Abstract:
Self-nanoemulsifying drug delivery systems (SNEDDS) containing 4-allylpyrocatechol (AP) extracted from Piper betle were developed to enhance water solubility of AP by using modeling and design (MODDE) program. The amount of AP in each SNEDDS formulation was determined by using high-performance liquid chromatography. The formulation consisted of 20% Miglyol®812N, 40 % Kolliphor®RH40, 30 % Maisine®35-1 and 10 % ethanol was found to be the best SNEDDS that provided the highest loading capacity of AP. (141.48±15.64 mg/g SNEDDS). The system also showed miscibility with water. The particle shape and size of the AP-SNEDDS after dispersing in water was investigated by using a transmission electron microscope and photon correlation spectrophotometer, respectively. The results showed that they were a spherical shape, having a particle size of 34.27 ± 1.14 nm with a narrow size distribution of 0.17 ± 0.04. The particles showed negative zeta potential with a value of -21.66 ± 2.09 mV. Antibacterial activity of AP-SNEDDS containing 1.5 mg/mL of AP was investigated against Streptococcus intermedius. The effect of this system on S. intermedius cells was observed by a scanning electron microscope (SEM). The results from SEM revealed that the bacterial cells were obviously destroyed. Killing kinetic study of AP-SNEDDS was carried out. It was found that the killing rate of AP-SNEDDS against S. intermedius was dose-dependent and the bacterial reduction was 79.86 ± 0.45 % within 30 min. In comparison with chlorhexidine (CHX), AP-SNEDDS showed similar antibacterial effects against S. intermedius. It is concluded that SNEDDS is a potential system for enhancing water solubility of AP. The antibacterial study reveals that AP-SNEDDS can be a promising system to treat bacterial infection caused by S. intermedius.Keywords: SNEDDS, 4-allylpyrocathecol, solubility, antibacterial activity, Streptococcus intermedius
Procedia PDF Downloads 119
3174 Development of Site-Specific Colonic Drug Delivery System (Nanoparticles) of Chitosan Coated with pH Sensitive Polymer for the Management of Colonic Inflammation
Authors: Pooja Mongia Raj, Rakesh Raj, Alpana Ram
Abstract:
Background: The use of multiparticulate drug delivery systems in preference to single-unit dosage forms for colon targeting dates back to 1985, when Hardy and co-workers showed that multiparticulate systems enabled the drug to reach the colon quickly and were retained in the ascending colon for a relatively long period of time. Methods: A site-specific colonic drug delivery system (nanoparticles) of 5-ASA was prepared and coated with a pH-sensitive polymer. Chitosan nanoparticles (CTNP) bearing 5-aminosalicylic acid (5-ASA) were prepared by the ionotropic gelation method. The nanoparticulate dosage form, consisting of a hydrophobic core enteric-coated with the pH-dependent polymer Eudragit S-100 by the solvent evaporation method, was designed for the effective delivery of the drug to the colon for the treatment of ulcerative colitis. Results: The mean diameters of the CTNP and ECTNP formulations were 159 and 661 nm, respectively. Optimum polydispersity index values of 0.249 [count rate (kcps) 251.2] and 0.170 [count rate (kcps) 173.9] were obtained for the two formulations, respectively. Conclusion: The CTNP and Eudragit-coated chitosan nanoparticles (ECTNP) were characterized for shape and surface morphology by scanning electron microscopy (SEM) and appeared to be spherical. The in vitro drug release, investigated using a USP dissolution test apparatus in different simulated GIT fluids, showed promising release. In vivo experiments are ongoing.
Keywords: colon targeting, nanoparticles, polymer, 5-amino salicylic acid, Eudragit
Procedia PDF Downloads 495
3173 Enhancing Traditional Saudi Designs Pattern Cutting to Integrate Them Into Current Clothing Offers
Authors: Faizah Almalki, Simeon Gill, Steve G. Hayes, Lisa Taylor
Abstract:
A core element of cultural identity is traditional costume, which provides insight into the heritage acquired over time. This heritage is apparent in the use of colour and in the styles and functions of the clothing, and it also reflects the skills of those who created the items and the time taken to produce them. Modern flat pattern drafting methods for making garment patterns are simple in comparison to the relatively laborious traditional approaches, which required personal interaction with the wearer throughout the production process. The current study reflects on the main elements of the pattern cutting system and how it has evolved in Saudi Arabia to affect the design of the Sawan garment. Analysis of the traditional methods for constructing Sawan garments was undertaken through observation of the practice and the garments and by consulting documented guidance. This provided a foundation from which to explore how modern technology can be applied to improve the process. In this research, modern methods are proposed for producing traditional Saudi garments more efficiently while retaining elements of the conventional style and design. The current study has documented the vital aspects of the Sawan garment style. The results showed that the traditional method of taking body measurements and drafting patterns was elementary and produced simple geometric shapes, and that the Sawan garment is composed of four pieces. Consequently, this research allows classical pattern shapes to be embedded in garments now worn in Saudi Arabia and supports the continuation of cultural heritage.
Keywords: traditional Sawan garment technique, modern pattern cutting technique, the shape of the garment and software, Lectra Modaris
Procedia PDF Downloads 132
3172 The Prediction of Reflection Noise and Its Reduction by Shaped Noise Barriers
Authors: I. L. Kim, J. Y. Lee, A. K. Tekile
Abstract:
As a consequence of Korea's very high urbanization rate, the number of traffic noise damage cases in areas congested with population and facilities is steadily increasing. Current environmental noise level data from the country's major cities show that noise levels exceed the standards set for both daytime and nighttime. This research presents a comparative analysis in search of the optimal soundproof panel shape and design factors that minimize reflected noise. In addition to the normal flat-type panel shape, the reflection noise reduction of swelling-type, combined swelling-and-curved-type, and screen-type panels was evaluated. The noise source model Nord 2000, which often provides more abundant information than models with a similar purpose, was used to determine the overall noise level. Based on the vehicle categorization used in Korea, the noise levels for varying frequencies and different heights of the sound source (the directivity heights of the Harmonize model) were calculated for the simulation. Each simulation was carried out using the ray-tracing method, and the noise level was also calculated with the noise prediction program SoundPlan 7.2 for comparison. The noise level prediction was made at receiving points of 15 m (R1), 30 m (R2), and at the middle of the road, 2 m (R3). When the noise barriers were modeled by shape and the prediction program was run with the noise source placed on the 2nd of the 6 lanes considered, on the noise barrier side, the reflection noise slightly decreased or increased for all noise barriers. At R1, especially for the screen-type noise barriers, no reduction effect was predicted under any condition. However, the swelling-type showed a decrease of 0.7-1.2 dB at R1, the best reduction effect among the tested noise barriers. Compared to the other forms of noise barriers, the swelling-type was therefore considered the most suitable for reducing reflection noise; however, since a slight increase was predicted at R2, further research based on a more sophisticated categorization of the related design factors is necessary. Moreover, because swellings are difficult to produce and the modules are smaller than other panels, swelling-type noise barriers are challenging to install. If these problems are solved, their range of application will not be more limited than that of other types of noise barriers. Hence, when a swelling-type noise barrier is installed in a downtown region where the amount of traffic is increasing every day, it will both preserve visibility through the transparent walls and diminish noise pollution due to reflection. Moreover, when decorated with shapes and designs, such noise barriers will be more visually attractive than flat-type ones and will thus alleviate psychological hardships related to noise, in addition to the physical soundproofing function of the panels.
Keywords: reflection noise, shaped noise barriers, sound proof panel, traffic noise
Procedia PDF Downloads 509
3171 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
In this paper, we present the mathematical basis of a new line of algorithms that treats a historical reason for continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. For a matrix operator filter bank in which the matrix operator analysis filter bank H and the matrix operator sampling filter bank S are given, we first introduce the detailed algorithm for deriving the optimum matrix operator synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the mathematical concept of a category is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why narrow perception that tends to create isolation shows an apparent advantage in the short term and why, often, such narrow thinking becomes intimate with discriminatory action in a human group. Through these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization
Procedia PDF Downloads 143
3170 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid
Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang
Abstract:
Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines, improving power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors that monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of such sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem whose objectives are the minimum number of sensors and the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. Data for 345 kV transmission lines and hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulation results indicate that the number of sensors can be reduced using the proposed optimal placement method while achieving an acceptable error in conductor temperature. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid
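The sketch below is a plain global-best PSO minimizer with a hypothetical cost that trades off sensor count against a stand-in reconstruction error; the actual POD-based temperature error model of the paper is abstracted away, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimisation over [0, 1]^dim."""
    x = rng.random((n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

def toy_cost(p, threshold=0.5, alpha=0.05):
    """Hypothetical objective: stand-in temperature error + alpha * sensor count.
    A line span gets a sensor when its entry exceeds the threshold."""
    n_sensors = (p > threshold).sum()
    error = 1.0 / (1.0 + n_sensors)          # placeholder for the POD reconstruction error
    return error + alpha * n_sensors

best, best_f = pso(toy_cost, dim=20)
print("sensor spans:", np.flatnonzero(best > 0.5), "cost:", round(best_f, 3))
```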
Procedia PDF Downloads 432
3169 Microencapsulation of Phenobarbital by Ethyl Cellulose Matrix
Authors: S. Bouameur, S. Chirani
Abstract:
The aim of this study was to evaluate the potential use of ethyl cellulose in the preparation of microspheres as a drug delivery system for the sustained release of phenobarbital. The microspheres were prepared by the solvent evaporation technique using ethyl cellulose as the polymer matrix at a 1:2 ratio, dichloromethane as the solvent, and 1% polyvinyl alcohol as the processing medium to solidify the microspheres. Size, shape, drug loading capacity, and entrapment efficiency were studied.
Keywords: phenobarbital, microspheres, ethylcellulose, polyvinyl alcohol
Procedia PDF Downloads 361
3168 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. Acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition transformation, while polyurethane (TPU) polymer is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
Procedia PDF Downloads 208
3167 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits class imbalance problem when one class has very few examples compared to the other class, and this is also referred to as between class imbalance. The traditional classifiers fail to classify the minority class examples correctly due to its bias towards the majority class. Apart from between-class imbalance, imbalance within classes where classes are composed of a different number of sub-clusters with these sub-clusters containing different number of examples also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic based, cost-based methods and ensemble of classifier. Data preprocessing techniques have shown great potential as they attempt to improve data distribution rather than the classifier. Data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples lead to loss of information and also when minority class has an absolute rarity, removing the majority class examples is generally not recommended. Existing methods available for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between class imbalance and within class imbalance simultaneously for binary classification problem. Removing between class imbalance and within class imbalance simultaneously eliminates the biases of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of sub-clusters. The method also takes into consideration the scatter of the data in the feature space and also adaptively copes up with unseen test data using Lowner-John ellipsoid for increasing the accuracy of the classifier. In this study, neural network is being used as this is one such classifier where the total error is minimized and removing the between-class imbalance and within class imbalance simultaneously help the classifier in giving equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative to handle various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
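As one way to picture the within-class part of the method, the sketch below fits a Gaussian mixture to the minority class (model-based clustering) and draws more synthetic points from the smaller sub-clusters; the Lowner-John-ellipsoid adjustment and the exact oversampling quotas of the paper are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def cluster_aware_oversample(X_min, n_needed, n_components=3):
    """Fit a Gaussian mixture to the minority class and draw more synthetic
    points from the smaller sub-clusters, so no sub-concept dominates."""
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(X_min)
    sizes = np.bincount(gmm.predict(X_min), minlength=n_components)
    inv = 1.0 / np.maximum(sizes, 1)
    quota = np.round(n_needed * inv / inv.sum()).astype(int)
    synthetic = [rng.multivariate_normal(gmm.means_[k], gmm.covariances_[k], size=q)
                 for k, q in enumerate(quota) if q > 0]
    return np.vstack(synthetic)

# Example: a minority class made of three sub-clusters of very different sizes.
X_min = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),
                   rng.normal([3, 3], 0.3, (15, 2)),
                   rng.normal([0, 3], 0.3, (5, 2))])
print(cluster_aware_oversample(X_min, 40).shape)   # roughly (40, 2) synthetic points
```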
Procedia PDF Downloads 418
3166 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, yielding 290 textural feature descriptors. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and the minimization of classification error probability combined with the average correlation coefficients between the chosen features (POE + ACC). Each of them returned the ten features occupying the first places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features, arranged in a different order. In both rankings, the 50th percentile (Perc.50%) was found in first place, and the next selected features come from the co-occurrence matrix. The sets of selected features were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT). To assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%; the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value of 90% was obtained, together with the highest values of TNR = 88.3% and NPV = 88.3%, while the other two parameters remained close to the highest values, TPR = 91.7% and PPV = 91.6%. The results of the experiment show that the use of classification trees is an effective method for classifying texture features, allowing the condition of spongy tissue to be identified for healthy cases and for cases with osteoporosis.
Keywords: classification, feature selection, texture analysis, tree algorithms
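For reference, the Fisher coefficient used for feature ranking and the five evaluation parameters quoted above can be computed as in the sketch below (generic definitions, not the authors' code).

```python
import numpy as np

def fisher_score(feature, labels):
    """Fisher coefficient for one texture descriptor: between-class separation
    over within-class spread (labels: 1 = osteoporotic, 0 = healthy)."""
    a, b = feature[labels == 1], feature[labels == 0]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var())

def confusion_scores(y_true, y_pred):
    """ACC, TPR (sensitivity), TNR (specificity), PPV and NPV from a binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"ACC": (tp + tn) / y_true.size,
            "TPR": tp / (tp + fn), "TNR": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}
```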
Procedia PDF Downloads 1783165 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser
Authors: Edoardo Schinco
Abstract:
Studying the Marxist intellectual tradition, it is possible to verify that there were numerous cases of philosophical regression, in which the important achievements of detailed studies were replaced by naïve ideas and earlier misunderstandings: one of the most important examples of this tendency concerns the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practices against every form of ideology are rational dialogue and reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx’s original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology, in the negative sense, is surely an error, a misleading form of knowledge, which aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not accidental: every ideology is mediately rooted in a particular material context, from which it takes its reason for being. Gramsci avoids, however, any mechanistic interpretation of Marx and, for this reason, underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts upon the base itself and modifies it. Therefore, there is a considerable revaluation of ideology’s role in the maintenance of the status quo, and the consequent thematization both of ideology as an objective force, active in history, and of ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributes to this crucial question; as a follower of Gramsci’s thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISA). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.Keywords: Althusser, enlightenment, Gramsci, ideology
Procedia PDF Downloads 1993164 Transport Medium That Prevents the Conversion of Helicobacter Pylori to the Coccoid Form
Authors: Eldar Mammadov, Konul Mammadova, Aytaj Ilyaszada
Abstract:
Background: According to many studies, it is known that H. pylori transforms into the coccoid form, which cannot be cultured and has poor metabolic activity. In this study, we succeeded in preserving the spiral shape of H. pylori for a long time by preparing a biphasic transport medium with a solid bottom layer (5 ml Mueller-Hinton agar with 7% HRBC (horse red blood cells)) and a liquid top layer (BH (brain heart) broth + HS (horse serum) + 7% HRBC + antibiotics (Vancomycin 5 mg, Trimethoprim lactate 25 mg, Polymyxin B 1250 I.U.)) in cell culture flasks with filter caps. For comparison, we also used a BH broth medium with 7% HRBC, as used for the transport of H. pylori. Methods: Seven biopsy specimens positive on the rapid urease test were inoculated into both the biphasic medium and the BH broth medium with 7% HRBC, placed in CO2 GasPak packages, and sent to the laboratory. Both media were then kept in an incubator at 37 °C for 1 day. After diagnosis by microscopy, PCR, and urease testing, they were transferred to Columbia agar with 7% HRBC. After incubation at 37 °C for 5-7 days, cultures were examined for colony characteristics and bacterial morphology. An E-test antimicrobial susceptibility test was performed. Results: There were 3 growths from the biphasic transport medium passed to Columbia agar with 7% HRBC and only 1 growth from the BH broth medium with 7% HRBC. It was also observed that, after the first 3 days in the BH broth medium with 7% HRBC, H. pylori converted to the coccoid form and its biochemical activity weakened, whereas in the biphasic transport medium its spiral shape remained unchanged for 2-3 weeks. Conclusions: By using the biphasic transport medium we have prepared, we can culture the bacterium while preventing H. pylori from converting to the coccoid form. In our opinion, this may enable wide use of the culture method for the diagnosis of H. pylori, the study of antibiotic susceptibility, and molecular genetic analysis.Keywords: clinical trial, H. pylori, coccoid form, transport medium
Procedia PDF Downloads 733163 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions
Authors: John Q. Todd
Abstract:
Given that modern equipment can provide comprehensive health, status, and error-condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads might look like and how they can be filtered, visualized, aggregated into metrics, used for machine learning, and turned into alerts for further action.Keywords: condition based maintenance, equipment data, metrics, alerts
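As a hedged illustration of that flow only (field names, values, and the threshold are hypothetical and not tied to any specific equipment or vendor API), a decoded telemetry payload might be filtered, reduced to a rolling metric, and checked against an alert condition like this:

```python
import pandas as pd

payloads = [  # e.g., decoded JSON payloads from built-in sensors (hypothetical fields)
    {"timestamp": "2024-01-01T00:00:00", "temperature_c": 71.0, "error_code": 0},
    {"timestamp": "2024-01-01T00:05:00", "temperature_c": 84.5, "error_code": 0},
    {"timestamp": "2024-01-01T00:10:00", "temperature_c": 91.2, "error_code": 17},
]
df = pd.DataFrame(payloads)
df["timestamp"] = pd.to_datetime(df["timestamp"])

# Filter out records flagged with an error code, then compute a rolling health metric.
healthy = df[df["error_code"] == 0]
rolling_temp = healthy["temperature_c"].rolling(window=2, min_periods=1).mean()

THRESHOLD_C = 80.0  # hypothetical condition-based maintenance trigger
if rolling_temp.iloc[-1] > THRESHOLD_C:
    print("ALERT: schedule inspection, rolling temperature =", rolling_temp.iloc[-1])
```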
Procedia PDF Downloads 1883162 A Computer-Aided System for Tooth Shade Matching
Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan
Abstract:
Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was carried out by the dentist's visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysing systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement errors with these devices. In addition, results acquired by devices with different measurement principles may be inconsistent. It is therefore necessary to search for new methods for the dental shade matching process. One computer-aided device, the digital camera, has developed rapidly. Currently, advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images of teeth show both the morphology and the color texture of the teeth. In recent decades, a method was proposed to compare the color of shade tabs captured by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used in combination. Recently used feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometric changes, and variations in image quality. Therefore, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After combining these descriptors, the resulting state-of-the-art descriptor, named Color-SIFT, is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or the matching shade. In this study, SVMs are used as classifiers for color determination and shade matching. The experimental results of this method are then compared with those of other recent studies. It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination systems.Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction
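A simplified, assumption-laden sketch of such a pipeline is given below; it is not the authors' Color-SIFT implementation. Plain SIFT descriptors stand in for the shape part, per-channel color histograms approximate the local color part, a k-means codebook provides the quantization, and an SVM performs the shade classification; image file names and shade labels are placeholders.

```python
import cv2              # requires opencv-python >= 4.4 for SIFT_create()
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def color_sift_bow(image_bgr, codebook, sift):
    """Bag-of-words over SIFT descriptors plus per-channel color histograms."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = sift.detectAndCompute(gray, None)          # shape descriptors
    if desc is None:
        return np.zeros(codebook.n_clusters + 3 * 16)
    words = codebook.predict(desc.astype(np.float32))
    bow = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    # Local color information: a 16-bin histogram per BGR channel.
    color_hist = np.concatenate(
        [cv2.calcHist([image_bgr], [c], None, [16], [0, 256]).ravel() for c in range(3)]
    )
    return np.concatenate([bow / (bow.sum() + 1e-9),
                           color_hist / (color_hist.sum() + 1e-9)])

# Training on hypothetical tooth images labelled with their shade-guide tabs.
sift = cv2.SIFT_create()
train_images = [cv2.imread(p) for p in ["tooth_a1.png", "tooth_a2.png", "tooth_b1.png"]]
labels = ["A1", "A2", "B1"]
all_desc = np.vstack([sift.detectAndCompute(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY), None)[1]
                      for im in train_images])
# A real codebook needs many more training images and descriptors than this toy set.
codebook = KMeans(n_clusters=20, random_state=0).fit(all_desc.astype(np.float32))
X = np.array([color_sift_bow(im, codebook, sift) for im in train_images])
clf = SVC(kernel="rbf").fit(X, labels)
```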
Procedia PDF Downloads 444