Search results for: accuracy ratio
7636 Multi-Class Text Classification Using Ensembles of Classifiers
Authors: Syed Basit Ali Shah Bukhari, Yan Qiang, Saad Abdul Rauf, Syed Saqlaina Bukhari
Abstract:
Text classification is the methodology of assigning a given text to the appropriate category from a given set of categories. Using a proper set of pre-processing, feature selection, and classification techniques is vital to achieving this purpose. In this paper, we used different ensemble techniques along with variations in the feature selection parameters to observe the change in the overall accuracy of the result, as well as in individual class-based measures such as the precision of each category of text. After subjecting our data to pre-processing and feature selection, individual classifiers were tested first, and the classifiers were then combined into ensembles to increase their accuracy. Later, we also studied the impact of reducing the number of classification categories on the overall accuracy. Text classification is widely used in sentiment analysis on social media sites such as Twitter to gauge people's opinions about a cause, and it is also used to analyze customers' reviews of products or services. Opinion mining is a vital task in data mining, and text categorization is the backbone of opinion mining.
Keywords: Natural Language Processing, Ensemble Classifier, Bagging Classifier, AdaBoost
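A minimal sketch of the kind of pipeline described above, using scikit-learn bagging and AdaBoost over TF-IDF features, is shown below; the dataset, the number of selected features, and the base estimator are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of an ensemble text-classification pipeline (scikit-learn).
# The dataset, number of selected features, and base estimator are assumptions.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

for name, ensemble in [
    ("Bagging", BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)),
    ("AdaBoost", AdaBoostClassifier(n_estimators=50)),
]:
    model = make_pipeline(
        TfidfVectorizer(stop_words="english"),   # pre-processing
        SelectKBest(chi2, k=2000),               # feature-selection parameter to vary
        ensemble,
    )
    model.fit(X_train, y_train)
    # classification_report gives overall accuracy plus per-class precision
    print(name, "\n", classification_report(y_test, model.predict(X_test)))
```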
7635 Identification and Classification of Medicinal Plants of Indian Himalayan Region Using Hyperspectral Remote Sensing and Machine Learning Techniques
Authors: Kishor Chandra Kandpal, Amit Kumar
Abstract:
The Indian Himalaya region harbours approximately 1748 plants of medicinal importance, and as per the International Union for Conservation of Nature (IUCN), 112 of these plant species are threatened and endangered. To ease the pressure on these plants, the Government of India is encouraging their in-situ cultivation. Saussurea costus, Valeriana jatamansi, and Picrorhiza kurroa have also been prioritized for large-scale cultivation owing to their market demand, conservation value, and medicinal properties. These species are found at elevations from 1000 m to 4000 m in the Indian Himalaya. Identification of these plants in the field requires taxonomic skills, which is one of the major bottlenecks in their conservation and management. In recent years, hyperspectral remote sensing techniques have been used successfully to discriminate plant species with the help of their unique spectral signatures. Against this background, a spectral library of the above three medicinal plants was prepared by collecting spectral data with a handheld spectroradiometer (325 to 1075 nm) from farmers' fields in the Himachal Pradesh and Uttarakhand states of the Indian Himalaya. A random forest (RF) model was applied to the spectral data for classification of the medicinal plants. The standard 80:20 split was followed for training and validation of the RF model, which resulted in a training accuracy of 84.39% (kappa coefficient = 0.72) and a testing accuracy of 85.29% (kappa coefficient = 0.77). The RF classifier identified the green (555 to 598 nm), red (605 nm), and near-infrared (725 to 840 nm) wavelength regions as suitable for discriminating these species. The findings of this study provide a technique for rapid, on-site identification of these medicinal plants in the field. They will also be a key input for the classification of hyperspectral remote sensing images for mapping these species in farmers' fields on a regional scale. This is a pioneering study of medicinal plants in the Indian Himalaya region in which the applicability of hyperspectral remote sensing has been explored.
Keywords: Himalaya, hyperspectral remote sensing, machine learning, medicinal plants, random forests
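The classification step described above can be sketched as follows; the spectra and labels are synthetic placeholders, while the 80:20 split and the kappa statistic follow the abstract.

```python
# Sketch of the random-forest classification step with an 80:20 split and
# kappa statistics. X holds per-band reflectance values and y the species
# label; both are synthetic placeholders here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.random((300, 751))          # 300 spectra x bands covering 325-1075 nm (assumed layout)
y = rng.integers(0, 3, 300)         # 3 species: S. costus, V. jatamansi, P. kurroa

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("kappa   :", cohen_kappa_score(y_te, rf.predict(X_te)))
# feature_importances_ can then highlight the discriminative wavelength regions
top_bands = np.argsort(rf.feature_importances_)[::-1][:10]
```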
7634 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has been used for a long time as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, carbon and nitrogen oxides need to be reduced through lean burning, modified combustors, and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analysing the flammability limit and the reduction in NOx and CO emissions and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and varying the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape, velocity, and the concentrations of radicals and emissions were observed. It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and geometry of the tube led to an increase in the temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (0.5-0.9 equivalence ratio). The addition of hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow and varying equivalence ratio with hydrogen addition. The production of NO is reduced because the combustion occurs in a leaner state, which helps in solving environmental problems.
Keywords: combustor, equivalence ratio, hydrogenation, premixed flames
7633 Selecting the Best RBF Neural Network Using PSO Algorithm for ECG Signal Prediction
Authors: Najmeh Mohsenifar, Narjes Mohsenifar, Abbas Kargar
Abstract:
In this paper, a stable method for predicting ECG signals with RBF neural networks, tuned by the PSO algorithm, is presented. Unlike the quasi-periodic ECG signal of a healthy person, the electrocardiographic data of a patient contain distortions; therefore, there is no precise mathematical model for prediction. Here, we exploit neural networks, which are capable of complicated nonlinear mapping. Although the architecture and spread of RBF networks are usually selected through trial and error, the PSO algorithm has been used to choose the best neural network. In this way, 2 seconds of a recorded ECG signal are employed to predict a duration of 20 seconds in advance. Our simulations show that the PSO algorithm can find the RBF neural network with minimum MSE, and the accuracy of the predicted ECG signal is 97%.
Keywords: electrocardiogram, RBF artificial neural network, PSO algorithm, prediction, accuracy
7632 Ultimate Stress of the Steel Tube in Circular Concrete-Filled Steel Tube Stub Columns Subjected to Axial Compression
Authors: Siqi Lin, Yangang Zhao
Abstract:
Concrete-filled steel tube columns achieve excellent performance in terms of strength, stiffness, and ductility due to the confinement provided by the steel tube. A good understanding of the stress in the steel tube is important to clarify this confinement effect. In this paper, the ultimate stress of the steel tube in circular concrete-filled steel tube columns subjected to axial compression was studied. Experimental tests were conducted to investigate the effects of parameters including concrete strength, steel strength, and the D/t ratio on the ultimate stress of the steel tube. The stress of the steel tube was determined by employing the Prandtl-Reuss flow rule associated with isotropic strain hardening. Results indicate that the stress of the steel tube was influenced by these parameters. Specimens with a higher strength ratio fy/fc and a smaller D/t ratio generally lead to a higher utilization efficiency of the steel tube.
Keywords: concrete-filled steel tube, axial compression, ultimate stress, utilization efficiency
7631 An Optimal and Efficient Family of Fourth-Order Methods for Nonlinear Equations
Authors: Parshanth Maroju, Ramandeep Behl, Sandile S. Motsa
Abstract:
In this study, we propose a simple and interesting family of fourth-order multi-point methods without memory for obtaining simple roots. This family requires only three functional evaluations per iteration (viz. two of the function, f(xn) and f(yn), and one of its first-order derivative, f'(xn)). Moreover, the accuracy and validity of the new schemes are tested on a number of numerical examples, and their accuracy is compared with that of existing optimal fourth-order methods available in the literature. It is found that they are very useful in high-precision computations. Further, the dynamic study of these methods also supports the theoretical aspects.
Keywords: basins of attraction, nonlinear equations, simple roots, Newton's method
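As a concrete illustration, Ostrowski's method is one classical optimal fourth-order scheme that uses exactly the three evaluations mentioned (f(xn), f(yn), f'(xn)); it is shown below as a representative member of this class, not as the specific family proposed in the paper.

```python
# Ostrowski's method: a classical optimal fourth-order scheme using exactly
# f(x_n), f(y_n) and f'(x_n) per iteration. Shown as a representative example,
# not as the specific family proposed in the paper.
def ostrowski(f, df, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            break
        y = x - fx / dfx                                 # Newton predictor
        fy = f(y)
        x_new = y - fy * fx / ((fx - 2 * fy) * dfx)      # fourth-order corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: simple root of f(x) = x^3 - 2x - 5 near x0 = 2
root = ostrowski(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root)   # ~2.0945514815423265
```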
7630 Skew Planar Wheel Antenna for First Person View of Unmanned Aerial Vehicle
Authors: Raymond Yudhi Purba, Levy Olivia Nur, Radial Anwar
Abstract:
This research presents the design and measurement of a skew planar wheel antenna used for the first-person-view perspective of unmanned aerial vehicles. The antenna has been designed using CST Studio Suite 2019 to have a voltage standing wave ratio (VSWR) ≤ 2, return loss ≤ -10 dB, and bandwidth ≥ 100 MHz to cover the outdoor access point band from 5.725 to 5.825 GHz, with an omnidirectional radiation pattern and elliptical polarization. The dimensions of the skew planar wheel antenna have been modified using a parameter sweep technique to provide good performance. The simulation results give a VSWR of 1.231, return loss of -19.693 dB, bandwidth of 828.8 MHz, gain of 3.292 dB, and axial ratio of 9.229 dB. Meanwhile, the measurement results give a VSWR of 1.237, return loss of -19.476 dB, bandwidth of 790.5 MHz, gain of 3.2034 dB, and axial ratio of 4.12 dB.
Keywords: skew planar wheel, cloverleaf, first-person view, unmanned aerial vehicle, parameter sweep
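Return loss and VSWR are two views of the same reflection coefficient, so the quoted figures can be cross-checked directly; the small sketch below reproduces the reported VSWR values from the reported return losses.

```python
# Return loss and VSWR both follow from the magnitude of the reflection
# coefficient. Using the return losses quoted above reproduces the quoted VSWRs.
def vswr_from_return_loss(rl_db):
    gamma = 10 ** (-abs(rl_db) / 20)      # |reflection coefficient|
    return (1 + gamma) / (1 - gamma)

print(vswr_from_return_loss(-19.693))     # ~1.231 (simulation)
print(vswr_from_return_loss(-19.476))     # ~1.237 (measurement)
```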
7629 Integrated Planning, Designing, Development and Management of Eco-Friendly Human Settlements for Sustainable Development of Environment, Economic, Peace and Society of All Economies
Authors: Indra Bahadur Chand
Abstract:
This paper focuses on the need for the development and application of global protocols and policies in the planning, designing, development, and management of systems of eco-towns and eco-villages, so that sustainable development is assured from the perspectives of the environment, the economy, peace, and harmonized social dynamics. This perspective is essential for the development of civilized and eco-friendly human settlements in the towns and rural areas of a nation, which will be a milestone toward a happy and sustainable lifestyle for rural and urban communities. The urban population of most towns in developing economies has been increasing tremendously, while rural people have been migrating on a large scale for the past three decades. Consequently, the urban lifestyle in most towns is under stress in terms of environmental pollution, water crisis, congested traffic, energy crisis, food crisis, and unemployment. Eco-towns and eco-villages should be developed in which the lifestyle of all residents is sustainable and happy. The built environment of a settlement should reduce and minimize the problems of non-ecological CO2 emissions, unbalanced utilization of natural resources, environmental degradation, natural calamities, ecological imbalance, energy crisis, water scarcity, waste management, food crisis, unemployment, and deterioration of cultural and social heritage. Indicators such as the ratio of public to private land ownership, the ratio of vegetated land to settlement area, the ratio of people travelling by vehicle to those on foot, the proportion of people employed outside the town or village, the rate of recycling of waste materials, the water consumption level, the ratio of people to vehicles, the ratio of road network length to town/village area, the share of renewable energy in total energy consumption, the share of religious/recreational area in the total built-up area, the annual number of suicides per head of population, the annual number of traffic injuries and deaths per head of population, and the proportion of food consumption produced as agro-foods within the town will be used to assist in the design and monitoring of each eco-town and eco-village. An eco-town or eco-village should be planned and developed to offer sustainable infrastructure and utilities that keep the CO2 level of individual homes and settlements in check and address home energy use, transport, food and consumer goods, water supply, waste management, conservation of historical heritage, healthy neighbourhoods, conservation of the natural landscape, conservation of biodiversity, and the development of green infrastructure. Eco-towns and eco-villages should be developed on the basis of master planning and architecture that shape and define the settlement and its form. Master planning and engineering should focus on delivering the sustainability criteria of eco-towns and eco-villages. This will involve working with the specific landscape and natural resources of each locality.
Keywords: eco-town, ecological habitation, master plan, sustainable development
7628 Improved Small-Signal Characteristics of Infrared 850 nm Top-Emitting Vertical-Cavity Lasers
Authors: Ahmad Al-Omari, Osama Khreis, Ahmad M. K. Dagamseh, Abdullah Ababneh, Kevin Lear
Abstract:
High-speed infrared vertical-cavity surface-emitting laser diodes (VCSELs) with Cu-plated heat sinks were fabricated and tested. VCSELs with a 10 µm aperture diameter and 4 µm of electroplated copper demonstrated a -3 dB modulation bandwidth (f-3dB) of 14 GHz and a resonance frequency (fR) of 9.5 GHz at a bias current density (Jbias) of only 4.3 kA/cm², which corresponds to an improved f-3dB²/Jbias ratio of 44 GHz²/kA/cm². At higher and lower bias current densities, the f-3dB²/Jbias ratio decreased to about 30 GHz²/kA/cm² and 18 GHz²/kA/cm², respectively. Examination of the analogue modulation response demonstrated that the presented VCSELs displayed a steady f-3dB/fR ratio of 1.41±10% over the whole range of bias currents (1.3Ith to 6.2Ith). The devices also demonstrated a maximum modulation bandwidth (f-3dB max) of more than 16 GHz at a bias current 25% lower than the industrial bias current standard for reliability.
Keywords: current density, high-speed VCSELs, modulation bandwidth, small-signal characteristics, thermal impedance, vertical-cavity surface-emitting lasers
7627 Approximation of PE-MOCVD to ALD for TiN Concerning Resistivity and Chemical Composition
Authors: D. Geringswald, B. Hintze
Abstract:
The miniaturization of circuits is advancing. During chip manufacturing, structures are filled, for example, by metal organic chemical vapor deposition (MOCVD). Since this process reaches its limits in the case of very high aspect ratios, the use of alternatives such as atomic layer deposition (ALD) is possible, requiring the extension of existing coating systems. However, it is an open question to what extent MOCVD can achieve results similar to those of an ALD process. In this context, this work addresses the characterization of metal organic vapor deposition of titanium nitride. Based on the current state of the art, the film properties coating thickness, sheet resistance, resistivity, stress, and chemical composition are considered. The setting parameters used are temperature, plasma gas ratio, plasma power, plasma treatment time, deposition time, deposition pressure, number of cycles, and TDMAT flow. The derived process instructions for unstructured wafers and for the inside of a structure with a high aspect ratio include lowering the process temperature and increasing the number of cycles, the deposition and plasma treatment times, and the plasma gas ratio of hydrogen to nitrogen (H2:N2). In contrast to the current process configuration, the deposited titanium nitride (TiN) layer is more uniform inside the entire test structure. Consequently, this paper provides approaches to employing MOCVD for structures with increasing aspect ratios.
Keywords: ALD, high aspect ratio, PE-MOCVD, TiN
7626 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail. Thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, and both are evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
Keywords: building stock energy modelling, energy-savings, archetype
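The evaluation idea can be sketched as follows: group the buildings by type and age band, let the segment average stand in for the archetype demand, and measure how far each individual building deviates from it. The data frame below is a synthetic placeholder, not data from the study.

```python
# Sketch of the evaluation idea: form archetypes by building type and age band,
# then compare the archetype demand with the demands of the individual buildings
# it represents. Columns and values are assumed placeholders.
import pandas as pd

buildings = pd.DataFrame({
    "type": ["house", "house", "flat", "flat"],
    "age_band": ["pre-1960", "pre-1960", "1961-1980", "1961-1980"],
    "heating_demand": [155.0, 180.0, 95.0, 120.0],   # kWh/m2/yr from individual models
})

# archetype demand = average demand of the segment it was derived from
buildings["archetype_demand"] = buildings.groupby(["type", "age_band"])["heating_demand"].transform("mean")

# accuracy measure: deviation of each building from its archetype value
buildings["error_%"] = 100 * (buildings["archetype_demand"] - buildings["heating_demand"]) / buildings["heating_demand"]
print(buildings)
```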
7625 Effect of Different Commercial Diets and Temperature on the Growth Performance, Feed Intake and Feed Conversion Ratio of Sobaity Seabream Sparidentex hasta
Authors: Seemab Zehra, A. H. W. Mohammed, E. Pantanella, J. L. Q. Laranja, P. H. De Mello, R. Saleh, A. A. Siddik, A. Al Shaikhi, A. M. Al-Suwailem
Abstract:
Two separate feeding trials were conducted to determine the effects of different commercial diets and water temperatures on the growth performance, feed intake, feed conversion ratio (FCR), and condition factor of the sobaity seabream Sparidentex hasta. In experiment I, the growth performance, feed intake, protein efficiency ratio (PER), feed conversion ratio (FCR), and survival (%) of sobaity seabream (330.5±2.6 g; 26.9±1.0 cm) were evaluated with four different commercial diets (1, 2, 3, and 4) for 80 days. The daily weight gain was around 3.2 g day-1 with an SGR of 0.7% day-1. Both the FCR and PER were significantly better for diet 2, which contained 46.36% crude protein and 12.54% crude fat. In experiment II, fish of 99±2.6 g and 17.1±1.0 cm were cultured in 1 m3 tanks supplied with seawater from the Red Sea, with three different rearing temperatures set as treatments (24, 28, and 32°C). Fish were fed to satiation for 96 days with a commercial diet selected on the basis of the results of experiment I (46.4% protein; 20.1 MJ kg-1 energy). Total weight gain was significantly higher for the fish reared in the 32°C group (158.57 g), followed by the 28°C group (138.25 g), while the lowest weight gain was observed in the 24°C group (116.98 g). The FCR was significantly lower in the 32°C group (1.62) compared to the 28°C (1.8) and 24°C (1.85) groups. Based on the results obtained from these preliminary studies (experiments I and II), sobaity seabream can attain better growth performance, FCR, and PER at 32°C in the Red Sea when fed commercial diet 2.
Keywords: Sparidentex hasta, nutrition, FCR, Red Sea, growth performance
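The reported growth metrics follow their standard definitions, illustrated below; the feed-intake figure is an assumed value (chosen so the FCR matches the reported 1.62 for the 32°C group), not data from the trial.

```python
# Standard definitions behind the reported growth metrics. The feed-intake and
# dietary-protein values are illustrative assumptions, not data from the trial.
import math

initial_weight = 99.0            # g (experiment II)
final_weight = 99.0 + 158.57     # g, 32 degC group
days = 96
feed_intake = 257.0              # g dry feed per fish (assumed, gives FCR ~1.62)
dietary_protein = 0.464          # fraction (46.4 % crude protein)

weight_gain = final_weight - initial_weight
fcr = feed_intake / weight_gain                                          # feed conversion ratio
sgr = 100 * (math.log(final_weight) - math.log(initial_weight)) / days  # specific growth rate, %/day
per = weight_gain / (feed_intake * dietary_protein)                     # protein efficiency ratio

print(f"FCR={fcr:.2f}  SGR={sgr:.2f} %/day  PER={per:.2f}")
```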
7624 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model is compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice; once for model estimation and once for testing, a bias correction which penalises the model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the other CV methods, as it fits as many models as the number of observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results avoiding expensive computational issues. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are directly used. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although, information criteria and LOO-CV are unable to reflect the goodness-of-fit in absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and student’s t distributions to improve the PCR stutter prediction with forensic data. These models are comprised of four with profile-wide variances, four with locus specific variances, and three which are two-component mixture models. The mean stutter ratio in each model is modeled as a locus specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though, IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models conditional on equal posterior variances in lppds were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among LOO-CV approximation methods and WAIC with their limitations are discussed. 
Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
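A sketch of the IS-LOO and TIS-LOO weighting described above is given below; the pointwise log-densities are synthetic, and the truncation rule assumed here is the usual cap at the square root of the number of draws times the mean weight.

```python
# Sketch of IS-LOO and TIS-LOO from posterior draws, following the weighting
# described above. `log_lik` is an (S draws x N observations) array of pointwise
# log predictive densities; the values here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
log_lik = rng.normal(-1.0, 0.3, size=(4000, 50))

def is_loo(log_lik, truncate=False):
    S = log_lik.shape[0]
    log_w = -log_lik                                 # raw weights = 1 / p(y_i | theta_s)
    if truncate:                                     # TIS: cap weights at sqrt(S) * mean weight (assumed rule)
        log_cap = np.log(np.sqrt(S)) + (np.logaddexp.reduce(log_w, axis=0) - np.log(S))
        log_w = np.minimum(log_w, log_cap)
    log_w -= np.logaddexp.reduce(log_w, axis=0)      # self-normalise per observation
    # weighted average of predictive densities for each left-out observation
    loo_i = np.logaddexp.reduce(log_w + log_lik, axis=0)
    return loo_i.sum()

print("IS-LOO :", is_loo(log_lik))
print("TIS-LOO:", is_loo(log_lik, truncate=True))
# PSIS-LOO additionally smooths the largest weights with a generalised Pareto fit
# (available, for example, as az.loo(...) in the ArviZ library).
```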
7623 Performance Improvement in a Micro Compressor for Micro Gas Turbine Using Computational Fluid Dynamics
Authors: Kamran Siddique, Hiroyuki Asada, Yoshifumi Ogami
Abstract:
The micro gas turbine (MGT) nowadays has a wide variety of applications, from drones to hybrid electric vehicles. As microfabrication technology improves, the size of the MGT is getting smaller. The overall performance of an MGT depends on its individual components, and each component's performance is dependent on and interrelated with that of the other components. Therefore, careful consideration needs to be given to each and every individual component of the MGT. In this study, the focus is on improving the performance of the compressor in order to improve the overall performance of the MGT. Computational fluid dynamics (CFD) analysis of the design of a micro compressor is performed using the software FLUENT. Operating parameters like mass flow rate and RPM, and design parameters like inner blade angle (IBA), outer blade angle (OBA), blade thickness, and number of blades are varied to study their effect on the performance of the compressor. The pressure ratio is used as a measure of compressor performance: the higher the pressure ratio, the better the design. In this study, the target mass flow rate is 0.2 g/s and the RPM is to be less than or equal to 900,000. So far, a pressure ratio above 3 has been achieved at a 0.2 g/s mass flow rate with 5 rotor blades, 0.36 mm blade thickness, 94.25 degrees OBA, and 10.46 degrees IBA. The design in this study differs from the regular centrifugal compressor used in conventional gas turbines in that the compressor is designed with ease of manufacturability in mind. Thus, this study proposes a compressor design that has a good pressure ratio and, at the same time, is easy to manufacture using current microfabrication technologies.
Keywords: computational fluid dynamics, FLUENT, microfabrication, RPM
7622 A Study on the Quantitative Evaluation Method of Asphalt Pavement Condition through the Visual Investigation
Authors: Sungho Kim, Jaechoul Shin, Yujin Baek
Abstract:
In recent years, due to environmental impacts, the passage of time, and other factors, various types of pavement deterioration, such as cracking, potholes, rutting, and roughness degradation, have been increasing rapidly. In Korea, the Ministry of Land, Infrastructure and Transport regularly monitors the pavement condition of expressways and national highways using pavement condition survey equipment and structural survey equipment. Local governments that maintain local roads, farm roads, etc. find it difficult to survey pavement condition with such equipment because of economic conditions, skills shortages, and local constraints such as narrow roads. This study presents a quantitative method for evaluating pavement condition through visual inspection to overcome these problems on roads managed by local governments. Rutting and roughness are difficult to evaluate with the naked eye, but the condition of cracks can be evaluated visually. Linear cracks (m), area cracks (m²), and potholes (number, m²) were surveyed with the naked eye every 100 meters. In this paper, the crack ratio was calculated from these survey results, and the pavement condition was evaluated from the calculated crack ratio. Pavement condition survey equipment was also used in the same sections in order to evaluate the reliability of the pavement condition evaluation based on the calculated crack ratio. The pavement condition was evaluated through the SPI (Seoul Pavement Index) and the calculated crack ratio using the results of the field survey. A comparison between the SPI considering only the crack ratio and the SPI also considering rutting and roughness, using the equipment survey data, showed a margin of error below 5% when the SPI is less than 5. An SPI of 5 is considered the threshold for deciding whether to maintain the pavement. This showed that the pavement condition can be evaluated using only the crack ratio. According to the analysis of the crack ratios from the visual inspection and the equipment survey, the average error is 1.86% (minimum 0.03%, maximum 9.58%). Economically, the visual inspection costs only 10% of the equipment survey and will also help the economy by creating new jobs. This paper suggests that local governments maintain pavement condition through visual investigations; however, more research is needed to improve reliability. Acknowledgment: The authors would like to thank the MOLIT (Ministry of Land, Infrastructure, and Transport). This work was carried out through a project funded by the MOLIT. The project name is 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
Keywords: asphalt pavement maintenance, crack ratio, evaluation of asphalt pavement condition, SPI (Seoul Pavement Index), visual investigation
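The crack ratio from a visual survey is essentially the cracked area divided by the surveyed pavement area. A sketch for one 100 m section is given below; the lane width and the influence width used to convert linear cracks to an area are assumptions, since the abstract does not state the exact conversion or the SPI formula.

```python
# Sketch of a crack-ratio calculation for one 100 m survey section. The 0.3 m
# influence width assigned to linear cracks and the lane width are assumptions;
# the abstract does not state the exact conversion or the SPI formula.
lane_width_m = 3.5
section_length_m = 100.0
surveyed_area = lane_width_m * section_length_m            # m2

linear_cracks_m = 42.0         # field tally, m
area_cracks_m2 = 11.5          # field tally, m2
potholes_m2 = 0.8              # field tally, m2
crack_influence_width_m = 0.3  # assumed width assigned to linear cracks

cracked_area = linear_cracks_m * crack_influence_width_m + area_cracks_m2 + potholes_m2
crack_ratio = 100 * cracked_area / surveyed_area
print(f"crack ratio = {crack_ratio:.1f} %")
```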
7621 Analysis of a Lignocellulose Degrading Microbial Consortium to Enhance the Anaerobic Digestion of Rice Straws
Authors: Supanun Kangrang, Kraipat Cheenkachorn, Kittiphong Rattanaporn, Malinee Sriariyanun
Abstract:
Rice straw is a lignocellulosic biomass which can be utilized as a substrate for biogas production. However, due to the properties and composition of rice straw, it is difficult to degrade with hydrolysis enzymes. One pretreatment method that modifies such properties of lignocellulosic biomass is the application of lignocellulose-degrading microbial consortia. The aim of this study is to investigate the effect of microbial consortia on enhancing biogas production. To select the most efficient consortium, cellulase enzymes were extracted and their activities were analyzed. The results suggested that the microbial consortium cultured from cattle manure is the best candidate compared to those from decomposed wood and horse manure. A microbial consortium isolated from cattle manure was then mixed with anaerobic sludge and used as the inoculum for biogas production. The optimal conditions for biogas production were investigated using response surface methodology (RSM). The tested parameters were the ratio of the amount of isolated microbial consortium to the amount of anaerobic sludge (MI:AS), the substrate-to-inoculum ratio (S:I), and temperature. The regression coefficient of the fitted model, R² = 0.7661, is high enough to support the significance of the model. The highest cumulative biogas yield was 104.6 ml/g-rice straw at the optimum MI:AS ratio, S:I ratio, and temperature of 2.5:1, 15:1, and 44°C, respectively.
Keywords: lignocellulosic biomass, microbial consortium, cellulase, biogas, response surface methodology (RSM)
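The RSM step can be sketched as fitting a second-order polynomial to the design points and locating its maximum over the three factors; the design points and yields below are synthetic placeholders, not the experimental data.

```python
# Sketch of the response-surface step: fit a second-order model in the three
# factors (MI:AS ratio, S:I ratio, temperature) and locate its maximum. Design
# points and yields are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform([1, 5, 35], [4, 25, 55], size=(20, 3))   # MI:AS, S:I, temperature
y = 100 - (X[:, 0]-2.5)**2 - 0.1*(X[:, 1]-15)**2 - 0.05*(X[:, 2]-44)**2 + rng.normal(0, 1, 20)

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)
print("R2 =", model.score(poly.transform(X), y))

# maximise the fitted surface (minimise its negative) within the factor ranges
res = minimize(lambda x: -model.predict(poly.transform(x.reshape(1, -1)))[0],
               x0=[2.5, 15, 45],
               bounds=[(1, 4), (5, 25), (35, 55)])
print("optimum factors:", res.x)
```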
7620 Finite Element Analysis of Raft Foundation on Various Soil Types under Earthquake Loading
Authors: Qassun S. Mohammed Shafiqu, Murtadha A. Abdulrasool
Abstract:
The design of shallow foundations to withstand different dynamic loads has received considerable attention in recent years. Dynamic loads may be due to earthquakes, pile driving, blasting, water waves, and machine vibrations, but predicting the behavior of shallow foundations during earthquakes remains a difficult task for geotechnical engineers. A database of dynamic and static parameters for different soils in seismically active zones in Iraq was prepared, collected from geophysical and geotechnical investigation works. Then, an analysis of a typical 3-D soil-raft foundation system under earthquake loading was carried out using the database, and a parametric study was performed taking into consideration the influence of parameters such as raft stiffness and damping ratio on the dynamic behavior of the raft foundation, as well as the influence of the earthquake acceleration-time records. The results of the parametric study show that the settlement caused by the earthquake can be decreased by about 72% by increasing the raft thickness from 0.5 m to 1.5 m. However, it was noticed that a reduction in the maximum bending moment of about 82% is predicted by decreasing the raft thickness from 1.5 m to 0.5 m in all site models. It was also observed that the maximum lateral displacement, maximum vertical settlement, and maximum bending moment for a damping ratio of 0% are about 14%, 20%, and 18% higher, respectively, than those for a damping ratio of 7.5% in all site models.
Keywords: shallow foundation, seismic behavior, raft thickness, damping ratio
7619 Assessment of Highly Sensitive Dielectric Modulated GaN-FinFET for Label-Free Biosensing Applications
Authors: Ajay Kumar, Neha Gupta
Abstract:
This work presents a sensitivity assessment of a gallium nitride (GaN) based FinFET with dielectric modulation in the nanocavity gap for label-free biosensing applications. Significant deflection is observed in the electrical characteristics, such as the drain current (ID), transconductance (gm), surface potential, energy band profile, electric field, sub-threshold slope (SS), and threshold voltage (Vth), in the presence of biomolecules owing to the GaN material. Further, the device sensitivity is evaluated to identify the effectiveness of the proposed biosensor and its capability to detect biomolecules with high precision and accuracy. Higher sensitivity is observed for gelatin (k = 12) in terms of on-current (SION), threshold voltage (SVth), and switching ratio (SSR), at 104.88%, 82.12%, and 119.73%, respectively. This work is performed using the powerful 3D Sentaurus TCAD tool with a well-calibrated structure. All the results pave the way for the GaN FinFET as a viable candidate for label-free dielectric-modulated biosensor applications.
Keywords: biosensor, biomolecules, FinFET, sensitivity
7618 Exploring Coordination between Monetary and Macroprudential Policies Using a Monetary Policy Procyclicality Ratio
Authors: Lukasz Kurowski, Paweł Smaga
Abstract:
We explore the procyclicality of monetary policy decisions towards the financial cycle in the 1995−2015 period on a sample of six central banks. Using interest rate paths and the credit-to-GDP gap to construct a monetary policy procyclicality ratio, we provide evidence that monetary policy procyclicality was high in the BoE and CNB and low in the Riksbank and ECB. The results support the need for coordination between macroprudential and monetary policies, for example, by including financial stability considerations in the inflation targeting strategy.
Keywords: central bank, financial stability, macroprudential policy, monetary policy
7617 Impact of Different Fuel Inlet Diameters onto the NOx Emissions in a Hydrogen Combustor
Authors: Annapurna Basavaraju, Arianna Mastrodonato, Franz Heitmeir
Abstract:
The Advisory Council for Aeronautics Research in Europe (ACARE) calls for an overall reduction of NOx emissions by 80% in its Vision 2020, which encourages researchers to work on novel technologies. One such technology is the use of alternative fuels, and among these fuels hydrogen is of interest because its only significant pollutant is NOx. NOx formation in hydrogen combustion depends on various parameters such as air pressure, inlet air temperature, and air-to-fuel jet momentum ratio. Accordingly, this research investigates the impact of the air-to-fuel jet momentum ratio on NOx formation in a hydrogen combustion chamber for aircraft engines. The air-to-fuel jet momentum ratio is defined as the ratio of the momentum of the air jet to the momentum of the fuel jet. The experiments were performed in an existing combustion chamber that had previously been tested with methane. Premixing of the reactants has not been considered due to the high reactivity of hydrogen and the high risk of flashback. In order to create a less rich reaction zone at the burner and to decrease the emissions, a forced internal recirculation flow has been achieved by integrating a plate similar to a honeycomb structure, suited to the geometry of the liner. The liner has been provided with an external cooling system to avoid the increase of local temperatures and, in turn, of the NOx formation rate. The injected air has been preheated, aiming at so-called flameless combustion. The air-to-fuel jet momentum ratio has been examined by changing the area of the fuel inlets while keeping the number of fuel inlets constant, in order to alter the fuel jet momentum while maintaining the homogeneity of the flow. Within this analysis, promising results for flameless combustion have been achieved. For a constant number of fuel inlets, it was seen that reducing the fuel inlet diameter decreased the air-to-fuel jet momentum ratio, in turn lowering the NOx emissions.
Keywords: combustion chamber, hydrogen, jet momentum, NOx emission
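The air-to-fuel jet momentum ratio defined above can be written as J = (m_dot_air * v_air) / (m_dot_fuel * v_fuel); with the mass flows fixed, shrinking the fuel inlet area raises the fuel jet velocity and therefore lowers J, consistent with the trend reported. All numbers in the sketch below are assumed.

```python
# Air-to-fuel jet momentum ratio J = (m_dot_air * v_air) / (m_dot_fuel * v_fuel).
# With mass flows fixed, v = m_dot / (rho * A), so shrinking the fuel inlet area
# raises the fuel jet momentum and lowers J. All numbers are assumed.
import math

def jet_momentum_ratio(mdot_air, rho_air, area_air, mdot_fuel, rho_fuel, area_fuel):
    v_air = mdot_air / (rho_air * area_air)
    v_fuel = mdot_fuel / (rho_fuel * area_fuel)
    return (mdot_air * v_air) / (mdot_fuel * v_fuel)

for d_fuel_mm in (2.0, 1.5, 1.0):                         # shrinking fuel inlet diameter
    a_fuel = math.pi * (d_fuel_mm * 1e-3) ** 2 / 4
    J = jet_momentum_ratio(mdot_air=10e-3, rho_air=0.5, area_air=5e-4,
                           mdot_fuel=0.3e-3, rho_fuel=0.04, area_fuel=a_fuel)
    print(f"d_fuel = {d_fuel_mm} mm  ->  J = {J:.1f}")
```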
7616 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste
Authors: Timilehin Martins Oyinloye, Won Byong Yoon
Abstract:
Simulation technology is being adopted in many industries, with research focusing on the development of new ways in which technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is developing fast in the food industry. However, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of the material becomes the foundation of extrusion-based 3DP, with residual stress being a major challenge in the printing of complex geometry. In many situations, a trial-and-error method is used to determine the optimum printing conditions, which wastes time and resources. In this report, three moisture levels of surimi paste were investigated to find the optimum 3DP material and printing conditions by probing the paste's rheology, its flow characteristics in the nozzle, and the post-deposition process using a finite element method (FEM) model. Rheological tests revealed that surimi paste with 82% moisture is suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15×10⁷ Pa to 7.80×10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using an additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample were dependent on the nozzle diameter. A small nozzle diameter (0.6 mm) resulted in a greater total deformation (0.023), particularly at the top of the model, which eventually caused the sample to collapse. As the nozzle diameter increased, the accuracy of the model improved up to the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter is a key parameter affecting the geometric accuracy of 3DP of surimi paste.
Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste
7615 Prediction of California Bearing Ratio from Physical Properties of Fine-Grained Soils
Authors: Bao Thach Nguyen, Abbas Mohajerani
Abstract:
The California bearing ratio (CBR) is acknowledged as an important parameter for characterizing the bearing capacity of earth structures, such as earth dams, road embankments, airport runways, bridge abutments, and pavements. Technically, the CBR test can be carried out in the laboratory or in the field. The CBR test is time-consuming and is infrequently performed due to the equipment needed and the fact that the field moisture content keeps changing over time. Over the years, many correlations have been developed by various researchers for predicting CBR from other tests and properties, including the dynamic cone penetrometer, undrained shear strength, and the Clegg impact hammer. This paper reports and discusses some of the results from a study on the prediction of CBR. In the current study, the CBR test was performed in the laboratory on fine-grained subgrade soils collected from various locations in Victoria. Based on the test results, a satisfactory empirical correlation was found between the CBR and the physical properties of the experimental soils.
Keywords: California bearing ratio, fine-grained soils, soil physical properties, pavement, soil test
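A sketch of the kind of empirical correlation described, regressing laboratory CBR on index properties of fine-grained soils, is shown below; the soil data and the chosen predictors are placeholders, not the properties or correlation reported in the study.

```python
# Sketch of an empirical correlation of laboratory CBR with index properties of
# fine-grained soils (e.g. plasticity index, optimum moisture content, maximum
# dry density). Data and predictors are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: plasticity index (%), optimum moisture content (%), max dry density (t/m3)
X = np.array([[12, 14.0, 1.85], [18, 16.5, 1.78], [25, 19.0, 1.70],
              [30, 21.0, 1.64], [15, 15.0, 1.82], [22, 18.0, 1.73]])
cbr = np.array([8.5, 6.0, 4.0, 2.8, 7.5, 4.8])            # soaked CBR (%)

reg = LinearRegression().fit(X, cbr)
print("coefficients:", reg.coef_, "intercept:", reg.intercept_)
print("R2 =", reg.score(X, cbr))
print("predicted CBR for PI=20, OMC=17, MDD=1.75:", reg.predict([[20, 17.0, 1.75]])[0])
```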
7614 Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach
Authors: Joseph C. Chen
Abstract:
Machine vision systems provide automatic inspection that can reduce manufacturing costs considerably. However, only a few principles have been established to optimize a machine vision system and help it function more accurately in industrial practice. Most existing design techniques for improving the accuracy of machine vision systems are complicated and impractical. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system when it is used as a direct measurement technique. This research follows a case study showing how the Six Sigma DMAIC methodology has been put into use.
Keywords: DMAIC, machine vision system, process capability, Taguchi parameter design
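In the Measure and Control phases of DMAIC, the capability of the vision-based measurement is typically tracked with Cp/Cpk against the part tolerance; the sketch below uses assumed tolerance limits and sample data, not the case study's values.

```python
# Process-capability check of the vision-based measurement against a tolerance.
# Tolerance limits and sample data are assumptions for illustration.
import numpy as np

measurements = np.array([10.02, 9.98, 10.01, 10.03, 9.99, 10.00, 10.02, 9.97])  # mm
lsl, usl = 9.90, 10.10                      # assumed tolerance limits

mu, sigma = measurements.mean(), measurements.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")    # >= 1.33 is a common acceptance target
```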
7613 Deposition of Diamond Like Carbon Thin Film by Pulse Laser Deposition for Surgical Instruments
Authors: M. Khalid Alamgir, Javed Ahsan Bhatti, M. Zafarullah Khan
Abstract:
A thin film of amorphous carbon (DLC) was deposited on 316 steel using an Nd:YAG laser with an energy of 300 mJ. Pure graphite was used as the target. The vacuum in the deposition chamber was maintained in the range of 10-6 mbar by a turbomolecular pump. The ratio of sp3 to sp2 content indicates the amorphous nature of the film. This was confirmed by the Raman spectra, which show two peaks, the D-band around 1300 cm-1 and the G-band around 1700 cm-1. If the sp3 bonding ratio is high, the films are diamond-like, whereas with high sp2 content the films are graphite-like. The ratio of sp3 to sp2 content in the film depends on the deposition method, the hydrogen content, and the system parameters. The structural study of the film was carried out by XRD. The hardness of the film, as measured by a Vickers hardness tester, was found to be 28 GPa. The EDX results show a high carbon content on the surface, and optical microscopy shows the smoothness of the film on the substrate. The film possesses good adhesion and can be used to coat surgical instruments.
Keywords: DLC, thin film, Raman spectroscopy, XRD, EDX
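The sp3/sp2 character is commonly judged from the relative intensity of the D and G bands after fitting the Raman spectrum; in the sketch below the spectrum is synthetic and the bands are modelled as Gaussians purely for illustration.

```python
# The D/G intensity ratio used to judge sp3/sp2 character is obtained by fitting
# the two Raman bands. Here the spectrum is synthetic and the bands are modelled
# as Gaussians purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

shift = np.linspace(1000, 1900, 400)                  # Raman shift, cm-1

def two_bands(x, a_d, c_d, w_d, a_g, c_g, w_g):
    return (a_d * np.exp(-((x - c_d) / w_d) ** 2) +
            a_g * np.exp(-((x - c_g) / w_g) ** 2))

true = (1.0, 1350, 80, 1.4, 1560, 60)
spectrum = two_bands(shift, *true) + np.random.default_rng(0).normal(0, 0.02, shift.size)

popt, _ = curve_fit(two_bands, shift, spectrum, p0=(1, 1300, 100, 1, 1600, 80))
id_ig = popt[0] / popt[3]
print(f"I(D)/I(G) = {id_ig:.2f}")   # lower ratios generally indicate more diamond-like films
```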
7612 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes
Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun
Abstract:
The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving measurement accuracy and saving calibration time. To reduce the time consumed by calibration, proper calibration gauges should be chosen and a reasonable calibration method developed. In addition, the exact optimal solution should be obtained by accurately removing gross errors from the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove gross errors from the calibration data. Finally, we find the optimal solution for identifying the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
Keywords: AACMM, kinematic model, parameter identification, measurement accuracy, calibration
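The identification step can be sketched as a Levenberg-Marquardt least-squares problem whose residuals are the differences between the measured cone-hole distances and the nominal 1.2 m gauge length; the forward kinematics below is only a placeholder for the arm's actual kinematic model, and the call is left commented out.

```python
# Sketch of the parameter-identification step: Levenberg-Marquardt least squares
# over the kinematic parameters, driven by the difference between the measured
# distance of each cone-hole pair and the nominal bar-gauge length.
# forward_kinematics() is a placeholder for the arm's actual D-H model.
import numpy as np
from scipy.optimize import least_squares

NOMINAL_LENGTH = 1.2   # m, bar-gauge length between cone-hole centres

def forward_kinematics(joint_angles, params):
    """Placeholder: return the probe-tip position for one recorded arm pose."""
    raise NotImplementedError

def residuals(params, poses_a, poses_b):
    # one residual per bar-gauge placement: measured length minus nominal length
    res = []
    for qa, qb in zip(poses_a, poses_b):
        pa = forward_kinematics(qa, params)
        pb = forward_kinematics(qb, params)
        res.append(np.linalg.norm(pa - pb) - NOMINAL_LENGTH)
    return np.asarray(res)

# poses_a / poses_b: recorded joint angles when probing the two cone holes.
# result = least_squares(residuals, x0=initial_params, args=(poses_a, poses_b),
#                        method="lm")   # Levenberg-Marquardt
```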
7611 Development and Validation of a Liquid Chromatographic Method for the Quantification of Related Substance in Gentamicin Drug Substances
Authors: Sofiqul Islam, V. Murugan, Prema Kumari, Hari
Abstract:
Gentamicin is a broad-spectrum, water-soluble aminoglycoside antibiotic produced by fermentation of the microorganism Micromonospora purpurea. It is widely used for the treatment of infections caused by both gram-positive and gram-negative bacteria. Gentamicin consists of a mixture of aminoglycoside components, namely C1, C1a, C2a, and C2. The molecular structures of gentamicin and its related substances lack a chromophore group, which makes the detection of these components quite critical and challenging. In this study, a simple reversed-phase high-performance liquid chromatographic (RP-HPLC) method using an ultraviolet (UV) detector was developed and validated for the quantification of the related substances present in gentamicin drug substances. The method used a Thermo Scientific Hypersil Gold analytical column (150 x 4.6 mm, 5 µm particle size) with isocratic elution using methanol: water: glacial acetic acid: sodium hexane sulfonate in the ratio 70:25:5:3 % v/v/v/w as the mobile phase, at a flow rate of 0.5 mL/min, a column temperature of 30 °C, and a detection wavelength of 330 nm. The four components of gentamicin, namely C1, C1a, C2a, and C2, were well separated along with the related substances present in gentamicin. The limit of quantification (LOQ) was found to be 0.0075 mg/mL. The accuracy of the method was satisfactory, with recoveries of 95-105% for the related substances. The correlation coefficient (≥ 0.995) shows a linear response against concentration over the range above the limit of quantification (LOQ). Precision studies showed relative standard deviation (RSD) values of less than 5% for the related substances. The method was validated in accordance with the International Conference on Harmonisation (ICH) guidelines for parameters such as system suitability, specificity, precision, linearity, accuracy, limit of quantification, and robustness. The proposed method is simple and suitable for the quantification of related substances in routine analysis of gentamicin formulations.
Keywords: reversed phase-high performance liquid chromatographic (RP-HPLC), high performance liquid chromatography, gentamicin, isocratic, ultraviolet
7610 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issues faced by people nowadays. Skin cancer (SC) is one of them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes more time and can yield inaccurate results. At an early stage, skin cancer detection is a challenging task, and the disease easily spreads to the whole body, leading to an increase in the mortality rate. Skin cancer is curable when it is detected at an early stage. In order to classify skin cancer correctly and accurately, the critical tasks are skin cancer identification and classification, which are based on cancer features such as shape, size, color, symmetry, etc. Many skin diseases share similar characteristics; hence, selecting important features from a skin cancer image dataset is a challenging issue. Skin cancer diagnostic accuracy can therefore be improved by an automated skin cancer detection and classification framework, which also mitigates the scarcity of human experts. Recently, deep learning techniques like the convolutional neural network (CNN), deep belief network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, the classification accuracy increases along with a reduction of computational complexity and time consumption.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
7609 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal
Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan
Abstract:
This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions. The extracted features were then given as input to the algorithms for fault classification. Supervised machine learning algorithms such as support vector machines (SVM) and unsupervised algorithms such as the deep feed forward neural network (DFFNN) and deep belief network (DBN) were used for fault classification. A fusion of the DBN and DFFNN classifiers was designed to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithms, vibration signal
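The feature-extraction step that precedes the classifiers can be sketched as reducing each vibration window to a small statistical feature vector; the signal below is synthetic and the chosen features are typical examples, not necessarily the exact set used in the study.

```python
# Sketch of the statistical feature-extraction step: each vibration window is
# reduced to a small feature vector that then feeds the SVM / DFFNN / DBN
# classifiers. The signal and the chosen features are illustrative assumptions.
import numpy as np
from scipy.stats import kurtosis, skew

fs = 12_000                                        # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

def window_features(x):
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": np.mean(x),
        "rms": rms,
        "std": np.std(x),
        "kurtosis": kurtosis(x),
        "skewness": skew(x),
        "crest_factor": np.max(np.abs(x)) / rms,
    }

# split into windows and build the feature matrix for the classifiers
windows = signal.reshape(-1, 2000)
features = np.array([list(window_features(w).values()) for w in windows])
print(features.shape)    # (n_windows, n_features)
```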
7608 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure
Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu
Abstract:
In considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the choice of initial damping model influences the response, and in the nonlinear region, the combination of initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and hysteretic model settings on the dynamic characteristics of a structure. For the initial damping model, initial-stiffness-proportional, tangent-stiffness-proportional, and Rayleigh-type damping were used. For the hysteretic model, the TAKEDA model and the normal trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped-mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model, and the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. The building model was assumed to be an RC building with a 30×30 m floor plan on each storey, a storey height of 3 m, and a maximum height of 18 m. The unit weight of each floor was 1.0 t/m2. The building natural period was set to 0.36 s, and the initial stiffness of each storey was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated the differences in dynamic characteristics due to the initial damping model setting. With increasing maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased and the 1st damping ratio increased. The difference due to the initial damping model setting was small for the natural frequencies, but a significant difference was observed for the damping ratio (initial-stiffness-proportional ≒ Rayleigh-type > tangent-stiffness-proportional). The acceleration and displacement of the earthquake response were largest for the tangent-stiffness-proportional model. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated the differences in dynamic characteristics due to the hysteretic model setting. With increasing maximum acceleration of the input earthquake motions, the natural frequency decreased for the TAKEDA model, but for the normal trilinear model it did not change. The damping ratio of the TAKEDA model was higher than that of the normal trilinear model, although the damping ratio increased for both models. In conclusion, for the initial damping model setting, the tangent-stiffness-proportional model was evaluated as the most appropriate, and for the hysteretic model setting, the TAKEDA model was rated higher than the normal trilinear model in the nonlinear region. Our results provide a useful indicator for dynamic design.
Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency
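For the Rayleigh-type initial damping mentioned above, C = a0*M + a1*K, and the two coefficients follow from two target modes via xi_i = a0/(2*w_i) + a1*w_i/2; in the sketch below the second target frequency and the target damping ratio are assumptions, while the first frequency comes from the stated 0.36 s building period.

```python
# Rayleigh damping C = a0*M + a1*K; coefficients from two target modes using
# xi_i = a0/(2*w_i) + a1*w_i/2. The target damping ratio and the second target
# frequency below are assumptions; the first frequency follows from the 0.36 s period.
import numpy as np

def rayleigh_coefficients(f1, f2, xi1, xi2):
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    A = np.array([[1 / (2 * w1), w1 / 2],
                  [1 / (2 * w2), w2 / 2]])
    a0, a1 = np.linalg.solve(A, [xi1, xi2])
    return a0, a1

a0, a1 = rayleigh_coefficients(f1=1 / 0.36, f2=8.0, xi1=0.03, xi2=0.03)
print(f"a0 = {a0:.4f} 1/s,  a1 = {a1:.5f} s")
```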
7607 Optimal Portfolio Selection under Treynor Ratio Using Genetic Algorithms
Authors: Imad Zeyad Ramadan
Abstract:
In this paper, a genetic algorithm was developed to construct the optimal portfolio based on the Treynor method. The GA maximizes the Treynor ratio under a budget constraint to select the best allocation of the budget among the companies in the portfolio. The results show that the GA was able to construct a conservative portfolio which includes companies from the three sectors. This indicates that the GA reduced the risk to the investor, as it chose some companies with positive risks (which move with the market) and some with negative risks (which move against the market).
Keywords: optimization, genetic algorithm, portfolio selection, Treynor method
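The optimisation can be sketched as maximising the Treynor ratio, (Rp - Rf) / beta_p, over weights that sum to one (the budget constraint); the expected returns and betas below are synthetic, and the GA is a bare-bones version rather than the paper's exact operators.

```python
# Sketch: maximise the Treynor ratio (R_p - R_f) / beta_p over portfolio weights
# summing to 1. Expected returns, betas, and the GA operators are assumptions.
import numpy as np

rng = np.random.default_rng(0)
exp_return = rng.uniform(0.02, 0.15, 12)      # expected returns of 12 companies
beta = rng.uniform(-0.5, 1.8, 12)             # betas (some against the market)
rf = 0.03                                     # risk-free rate

def treynor(w):
    w = np.abs(w) / np.abs(w).sum()           # enforce the budget constraint
    b = w @ beta
    if b <= 1e-3:                             # guard against near-zero portfolio beta
        return -np.inf
    return (w @ exp_return - rf) / b

pop = rng.random((200, 12))                   # initial population of weight vectors
for _ in range(300):                          # generations
    fitness = np.array([treynor(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-100:]]                 # selection
    children = (parents[rng.integers(0, 100, 100)] +
                parents[rng.integers(0, 100, 100)]) / 2       # crossover (blend)
    children += rng.normal(0, 0.05, children.shape)           # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([treynor(ind) for ind in pop])]
print("best weights:", np.abs(best) / np.abs(best).sum())
```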