Search results for: vertical flow performance
1994 Dairy Wastewater Treatment by Electrochemical and Catalytic Method
Authors: Basanti Ekka, Talis Juhna
Abstract:
Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because the effluents contain large classes of contaminants, the specific targeted removal methods available in the literature are not viable solutions on the industrial scale. Therefore, in this ongoing research, a series of coagulation, electrochemical, and catalytic methods will be employed. The bulk coagulation and electrochemical methods can wash off most of the contaminants, but some of the harmful chemicals may slip through; therefore, specifically designed and synthesized catalysts will be employed for the removal of targeted chemicals. In the context of Latvian dairy industries, work is presently in progress on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS)/Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and Mass Spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for post-treatment of dairy wastewater, as graphene oxide has been widely applied in fields such as environmental remediation and energy production due to the presence of various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the contaminants remaining after the electrochemical process.
Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide
Procedia PDF Downloads 144
1993 Integration of Thermal Energy Storage and Electric Heating with Combined Heat and Power Plants
Authors: Erich Ryan, Benjamin McDaniel, Dragoljub Kosanovic
Abstract:
Combined heat and power (CHP) plants are an efficient technology for meeting the heating and electric needs of large campus energy systems, but they have come under greater scrutiny as the world pushes for emissions reductions and lower consumption of fossil fuels. The electrification of heating and cooling systems offers a great deal of potential for carbon savings, but these systems can be costly endeavors due to increased electric consumption and peak demand. Thermal energy storage (TES) has been shown to be an effective means of improving the viability of electrified systems by shifting heating and cooling load to off-peak hours and reducing peak demand charges. In this study, we analyze the integration of an electrified heating and cooling system with thermal energy storage into a campus CHP plant, to investigate the potential of reconciling existing infrastructure and technologies with the climate goals of the 21st century. A TRNSYS model was built to simulate a ground source heat pump (GSHP) system with TES using measured campus heating and cooling loads. The GSHP with TES system is modeled to follow the parameters of industry standards and sized to provide an optimal balance of capital and operating costs. Using known CHP production information, costs and emissions were investigated for a unique large energy user rate structure that operates a CHP plant. The results highlight the cost and emissions benefits of a targeted integration of heat pump technology within the framework of existing CHP systems, along with the performance impacts and value of TES capability within the combined system.
Keywords: thermal energy storage, combined heat and power, heat pumps, electrification
Procedia PDF Downloads 89
1992 National Plans for Recovery and Resilience between National Recovery and EU Cohesion Objectives: Insights from European Countries
Authors: Arbolino Roberta, Boffardi Raffaele
Abstract:
Achieving the highest effectiveness for the National Plans for Recovery and Resilience (NPRR) while strengthening the objectives of cohesion and the reduction of intra-EU imbalances is only possible by means of strategic, coordinated, and coherent policy planning. Therefore, the present research aims at assessing and quantifying the potential impact of NPRRs across the twenty-seven European Member States in terms of economic convergence, considering disaggregated data on the industrial, construction, and service sectors. The first step of the research involves a performance analysis of the main macroeconomic indicators describing the trends of the twenty-seven EU economies before the pandemic outbreak. Subsequently, in order to define the potential effect of the resources allocated, we perform an impact analysis of previous similar EU investment policies, estimating national-level sectoral elasticities associated with the expenditure of the 2007-2013 and 2014-2020 Cohesion programme funds. These coefficients are then exploited to construct adjustment scenarios. Finally, convergence analysis is performed on the data used for constructing the scenarios in order to understand whether the expenditure of funds might be useful to foster economic convergence besides driving recovery. The results of our analysis show that the allocation of resources largely mirrors the aims of the policy framework underlying the NPRR, thus reporting the largest investments both in the sectors most affected by the economic shock (services) and in those considered fundamental for the digital and green transition. Notwithstanding an overall positive effect, large differences exist among European countries, while no convergence process seems to be activated or fostered by these interventions.
Keywords: NPRR, policy evaluation, cohesion policy, scenario analysis
Procedia PDF Downloads 83
1991 FE Modelling of Structural Effects of Alkali-Silica Reaction in Reinforced Concrete Beams
Authors: Mehdi Habibagahi, Shami Nejadi, Ata Aminfar
Abstract:
A significant degradation factor that impacts the durability of concrete structures is the alkali-silica reaction (ASR). Engineers are frequently charged with the challenge of conducting a thorough safety assessment of concrete structures that have been impacted by ASR. The alkali-silica reaction has a major influence on the structural capacity of structures. In most cases, the reduction in compressive strength, tensile strength, and modulus of elasticity is expressed as a function of free expansion and crack widths. Predicting the effect of ASR on flexural strength is also relevant. In this paper, a nonlinear three-dimensional (3D) finite-element model is proposed to describe the flexural strength degradation induced by ASR. Initial strains, initial stresses, initial cracks, and deterioration of material characteristics were all considered as ASR factors in this model. The effects of ASR on structural performance were evaluated by focusing on initial flexural stiffness, the force-deformation curve, and load-carrying capacity. Degradation of concrete mechanical properties was correlated with ASR growth using material test data obtained at Tech Lab, UTS, and implemented into the FEM for various expansions. The finite element study provided a better understanding of the ASR-affected RC beam's failure mechanism and capacity reduction as a function of ASR expansion. Furthermore, the decrease of the residual mechanical properties due to ASR is reviewed and used as input data for the FEM model. Finally, analysis techniques and a comparison between the analysis and the experimental results are discussed. Verification is also provided through analyses of reinforced concrete beams with behavior governed by either flexural or shear mechanisms.
Keywords: alkali-silica reaction, analysis, assessment, finite element, nonlinear analysis, reinforced concrete
Procedia PDF Downloads 160
1990 INNPT Nano Particles Material Technology as Enhancement Technology for Biological WWTP Performance and Capacity
Authors: Medhat Gad
Abstract:
Wastewater treatment has become a major issue in this decade due to the shortage of water resources, population growth, and modern living requirements. Reuse of treated wastewater in the industrial and agricultural sectors is in high demand, both to compensate for the shortage of clean water supply and to protect the ecosystem from the dangerous pollutants in insufficiently treated wastewater. In recent decades, most wastewater treatment plants were built using primary or secondary biological treatment technology, which usually does not provide sufficient treatment and removal of phosphorus and nitrogen. Plants built ten to fifteen years ago are now also suffering from overflow, which decreases their treatment efficiency. Discharging treated wastewater that contains phosphorus and nitrogen into water reservoirs and irrigation canals destroys the ecosystem and aquatic life. Chemical materials can be used to enhance treatment efficiency for domestic wastewater, but they produce a huge amount of sludge, which is costly to handle. To enhance wastewater treatment, we used INNPT nano material, which consists of calcium, aluminum, and iron oxides and compounds plus silica, sodium, and magnesium. INNPT nano material was applied at a dose of 100 mg/l to upgrade an SBR treatment plant in Cairo, Egypt (which has three treatment tanks, each with a capacity of 2500 cubic meters per day) to the tertiary treatment level by removing phosphorus and nitrogen and increasing dissolved oxygen in the final effluent. The results showed that the treatment retention time decreased from 9 hours in the SBR system to one hour using INNPT nano material, with improvement in effluent quality, while increasing plant capacity to 20,000 cubic meters per day. Nitrogen removal efficiency reached 77%, phosphorus removal efficiency reached 90%, and COD removal efficiency was 93%, all of which comply with tertiary treatment limits according to Egyptian law.
Keywords: INNPT technology, nanomaterial, tertiary wastewater treatment, capacity extending
Procedia PDF Downloads 167
1989 Combination between Intrusion Systems and Honeypots
Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal
Abstract:
Today, security is a major concern. Intrusion Detection Systems, Intrusion Prevention Systems, and Honeypots can be used to moderate attacks. Many researchers have proposed various IDSs (Intrusion Detection Systems) over time. Some of these combine the features of two or more IDSs and are called Hybrid Intrusion Detection Systems. Most researchers combine the features of signature-based detection and anomaly-based detection. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may pass through the IDS undetected, as signatures include factors based on the duration of events, and the attacker's actions do not match them. Sometimes there is no updated signature for an unknown attack, or an attacker strikes while the signature database is being updated. Thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs suffer from many false-positive readings. So there is a need to hybridize those IDSs so that they can overcome each other's shortcomings. In this paper, we propose a new approach to intrusion detection that is more efficient than a traditional IDS. The IDS is based on honeypot technology and anomaly-based detection. We designed the architecture for the IDS in Packet Tracer and then implemented it in real time. We discuss the experimental results: both the honeypot and the anomaly-based IDS have some shortcomings, but if we hybridize these two technologies, the newly proposed Hybrid Intrusion Detection System (HIDS) is capable of overcoming these shortcomings with much enhanced performance. In this paper, we present a modified Hybrid Intrusion Detection System (HIDS) that combines the positive features of two different detection methodologies: honeypot methodology and anomaly-based intrusion detection. In the experiment, we first ran both intrusion detection systems individually and then together, recording the data over time. From the data, we conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.
Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor
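As a rough, illustrative sketch of the hybrid idea described above (not the authors' KFSensor/Packet Tracer setup), the following Python fragment combines a toy signature check with a simple statistical anomaly test over connection rates; the signatures, baseline figures, and threshold are all invented for the example.

```python
# Minimal sketch of a hybrid signature + anomaly check.
# Signatures, thresholds, and traffic records are invented for
# illustration; a real HIDS would inspect live packets.
from statistics import mean, stdev

SIGNATURES = {"' OR 1=1 --", "/etc/passwd", "<script>"}  # toy signature set

def signature_alert(payload: str) -> bool:
    """Flag payloads that contain a known attack signature."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_alert(rates: list[float], current: float, z_max: float = 3.0) -> bool:
    """Flag a connection rate that deviates strongly from the baseline."""
    mu, sigma = mean(rates), stdev(rates)
    return sigma > 0 and abs(current - mu) / sigma > z_max

def hybrid_alert(payload: str, rates: list[float], current: float) -> bool:
    # Either detector can raise the alarm: signatures catch known
    # attacks, the anomaly test catches novel or slow ones.
    return signature_alert(payload) or anomaly_alert(rates, current)

baseline = [12.0, 10.5, 11.2, 12.8, 11.9, 10.7]         # normal requests/sec
print(hybrid_alert("GET /etc/passwd", baseline, 11.0))  # True (signature)
print(hybrid_alert("GET /index.html", baseline, 95.0))  # True (anomaly)
```

Either detector alone inherits the weaknesses the abstract lists; OR-ing their verdicts is the simplest possible form of the proposed hybridization.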
Procedia PDF Downloads 384
1988 Novel Wound Healing Biodegradable Patch of Bioactive
Authors: Abhay Asthana, Shally Toshkhani, Gyati Shilakari
Abstract:
The present research aimed to develop a biodegradable dermal patch formulation for wound healing in a novel, sustained, and systematic manner. The goal is to reduce the frequency of dressings with improved drug delivery and thereby enhance therapeutic performance. In the present study, an optimized formulation was designed using component polymers and excipients (e.g., hydroxypropyl methyl cellulose, ethyl cellulose, and gelatin) to impart significant folding endurance, elasticity, and strength. Gelatin was mixed with ethylene glycol. Chitosan dissolved in a suitable medium was added to the gelatin mixture with stirring. With continued stirring, curcumin was added to the mixture in an optimized ratio to obtain a homogeneous dispersion. The polymers were dispersed in the final formulation with stirring. The mixture was sonicated and cast to obtain the film form. All steps were carried out under strict aseptic conditions. The final formulation was a thin, uniformly smooth-textured film with a dark brown-yellow color. In the optimized formulation, the film's folding endurance was around 20 to 21 folds without a crack at room temperature (23 °C). The drug content was in the range of 96 to 102%, and it passed the content uniformity test. The final moisture content of the optimized formulation film was NMT 9.0%. The films passed stability studies conducted at refrigerated conditions (4 ± 0.2 °C) and at room temperature (23 ± 2 °C) for 30 days. Further, the drug content and texture remained undisturbed in stability studies conducted at 23 ± 2 °C for 45 and 90 days. The percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate with a correlation factor R² > 0.9. The developed film-based formulation shows promising results in terms of stability and release profiles.
Keywords: biodegradable, patch, bioactive, polymer
Procedia PDF Downloads 518
1987 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large data sample sizes and dimensions undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful information from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly masks its quality analysis and leaves us with quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which allows us to select a subset of features and reduce the feature space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, together with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. A sample dataset has been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy on different datasets to about 95%. The experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection
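The wrapper idea sketched in this abstract can be illustrated with a toy genetic loop around a k-NN evaluator; the population size, mutation rate, and demo dataset below are placeholders, not the authors' configuration.

```python
# Sketch of wrapper-based feature selection: a toy genetic
# algorithm whose fitness is the cross-validated accuracy of a
# k-NN classifier on the selected feature subset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Random initial population of feature subsets (bit masks).
pop = rng.random((20, n_features)) < 0.5
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_features)          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.02       # mutation
        children.append(child ^ flip)
    pop = np.vstack([parents] + children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```

In the paper's heuristic, features would additionally be ranked by how often they occur across the chosen chromosomes; here only the single best mask is reported.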
Procedia PDF Downloads 318
1986 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach
Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca
Abstract:
The production of major crops in Northern Ethiopia, especially the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since the cactus pear is a drought-resistant plant, it is considered a lifesaver fruit and a strategy for poverty reduction in drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector is experiencing many constraints, with limited attention given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and to recommend possible strategies to reduce them, thereby improving production and smallholders' income. Both probability and non-probability sampling techniques were employed to collect the data. Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were purposively selected from the district for their potential in cactus pear production. Simple random sampling was employed to survey 30 households from each of the two peasant associations, and a semi-structured questionnaire was used as a tool for data collection. Moreover, in this research, 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were held with 14 key farmers using a semi-structured checklist; and key informant interviews were conducted with governmental and non-governmental organizations to gather more information about cactus pear production, post-harvest losses, the strategies used to reduce those losses, and suggestions to improve post-harvest management. SPSS version 20 was used to enter and analyze the quantitative data, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs. The analysis also used a chain map, correlations, a stakeholder matrix, and gross margins, with mean comparisons between variables such as ANOVA and t-tests. The analysis shows that the present cactus pear value chain involves main actors and supporters. However, there is inadequate information flow and informal market linkage among actors in the cactus pear value chain. The farmers' gross margin is higher when they sell to the processor than when they sell to collectors. The most significant post-harvest loss in the cactus pear value chain occurs at the producer level, followed by wholesalers and retailers. The maximum and minimum volumes of post-harvest losses at the producer level are 4212 and 240 kg per season, respectively. The post-harvest losses were caused by farmers' limited skills in farm management and harvesting, low market prices, limited market information, the absence of a producer organization, poor post-harvest handling, the absence of cold storage, the absence of collection centers, poor infrastructure, inadequate credit access, a traditional transportation system, the absence of quality control, illegal traders, inadequate research and extension services, and the use of inappropriate packaging material. Therefore, some of the recommendations are to provide adequate practical training, form producer organizations, and construct collection centers.
Keywords: cactus pear, post-harvest losses, profit margin, value-chain
Procedia PDF Downloads 134
1985 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of specific dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces image detail by pooling. This operation overlooks details that forensic experts pay close attention to. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the test images. The results support that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, whereas forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
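Score-level fusion with likelihood ratios can be sketched as follows, assuming Gaussian models for the genuine and impostor score distributions; all distribution parameters are invented for illustration and are not the paper's fitted values.

```python
# Sketch of score-level fusion with likelihood ratios, assuming
# Gaussian genuine/impostor score models for both the automated
# system and the examiner. Parameters are illustrative only.
from scipy.stats import norm

def llr(score, genuine_mu, genuine_sd, impostor_mu, impostor_sd):
    """Log-likelihood ratio of a single comparison score."""
    return (norm.logpdf(score, genuine_mu, genuine_sd)
            - norm.logpdf(score, impostor_mu, impostor_sd))

def fused_llr(machine_score, examiner_score):
    # Assuming the two sources are independent, the fused evidence
    # is the sum of the per-source log-likelihood ratios.
    machine = llr(machine_score, 0.80, 0.10, 0.30, 0.15)
    examiner = llr(examiner_score, 0.70, 0.12, 0.40, 0.18)
    return machine + examiner

# Positive fused LLR supports "same source", negative "different source".
print(fused_llr(machine_score=0.75, examiner_score=0.65))
```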
Procedia PDF Downloads 130
1984 Antihyperlipidemia Combination of Simvastatin and Herbal Drink (Conventional Drug Interaction Potential Study and Herbal As Prevention Adverse Effect on Combination Therapy Hyperlipidemia)
Authors: Gesti Prastiti, Maylina Adani, Yuyun darma A. N., M. Khilmi F., Yunita Wahyu Pratiwi
Abstract:
Combination therapy may allow interaction between two or more drugs, which can produce adverse effects in patients. Simvastatin is an antihyperlipidemic drug; it can interact with drugs that act on cytochrome P450 CYP3A4, which can interfere with the performance of simvastatin. Flavonoids found in plants can inhibit cytochrome P450 CYP3A4 if taken with simvastatin, which can increase simvastatin levels in the body and raise the potential for side effects of simvastatin such as myopathy and rhabdomyolysis. Green tea leaves and mint are herbal medicines that have an antihyperlipidemic effect. This study aims to determine the potential interaction of simvastatin with herbal drinks (green tea leaves and mint). The research method was an experimental post-test-only control design. The test subjects were divided into 5 groups: a normal group, a negative control group, a simvastatin group, a green tea combination group, and a mint leaf combination group. The study was conducted over 32 days, and total cholesterol levels were analyzed by the enzymatic colorimetric test method. The average total cholesterol values obtained in each group were: normal group (65.92 mg/dL), negative control group (69.86 mg/dL), simvastatin group (58.96 mg/dL), green tea combination group (58.96 mg/dL), and mint leaf combination group (63.68 mg/dL). The conclusion is that combination therapy of simvastatin with herbal drinks has the potential for pharmacodynamic interactions with synergistic, antagonistic, and strong additive effects, so the combination therapies are no more effective than single administration of simvastatin.
Keywords: hyperlipidemia, simvastatin, herbal drinks, green tea leaves, mint leaves, drug interactions
Procedia PDF Downloads 397
1983 Highly Efficient Ca-Doped CuS Counter Electrodes for Quantum Dot Sensitized Solar Cells
Authors: Mohammed Panthakkal Abdul Muthalif, Shanmugasundaram Kanagaraj, Jumi Park, Hangyu Park, Youngson Choe
Abstract:
The present study reports the incorporation of calcium ions into CuS counter electrodes (CEs) in order to modify the photovoltaic performance of quantum dot-sensitized solar cells (QDSSCs). Metal ion-doped CuS thin films were prepared by the chemical bath deposition (CBD) method on FTO substrates and used directly as counter electrodes for QDSSCs based on TiO₂/CdS/CdSe/ZnS photoanodes. For the Ca-doped CuS thin films, copper nitrate and thioacetamide were used as the cationic and anionic precursors, and calcium nitrate tetrahydrate was used as the doping material. The surface morphology of the Ca-doped CuS CEs indicates that the fragments are uniformly distributed and the structure is densely packed with high crystallinity. The changes observed in the diffraction patterns suggest that the Ca dopant can introduce increased disorder into the CuS material structure. EDX analysis was employed for elemental identification, and the results confirmed the presence of Cu, S, and Ca on the FTO glass substrate. The photovoltaic current density-voltage characteristics of the Ca-doped CuS CEs show specific improvements in open-circuit voltage (Voc) and short-circuit current density (Jsc). Electrochemical impedance spectroscopy results show that Ca-doped CuS CEs have greater electrocatalytic activity and charge transport capacity than bare CuS. All the experimental results indicate that QDSSCs based on a 20% Ca-doped CuS CE exhibit a high power conversion efficiency (η) of 4.92%, a short-circuit current density of 15.47 mA cm⁻², an open-circuit photovoltage of 0.611 V, and a fill factor (FF) of 0.521 under illumination of one sun.
Keywords: Ca-doped CuS counter electrodes, surface morphology, chemical bath deposition method, electrocatalytic activity
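For reference, the quoted efficiency is consistent with the standard power-conversion relation, assuming the usual one-sun input power of 100 mW cm⁻² (the abstract does not state P_in explicitly):

```latex
\eta \;=\; \frac{J_{sc}\, V_{oc}\, FF}{P_{in}}
\;=\; \frac{15.47\ \mathrm{mA\,cm^{-2}} \times 0.611\ \mathrm{V} \times 0.521}{100\ \mathrm{mW\,cm^{-2}}}
\;\approx\; 4.92\,\%
```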
Procedia PDF Downloads 166
1982 Diagnostics and Explanation of the Current Status of the 40-Year Railway Viaduct
Authors: Jakub Zembrzuski, Bartosz Sobczyk, Mikołaj MIśkiewicz
Abstract:
Besides designing new structures, engineers all over the world must face another problem: the maintenance, repair, and assessment of the technical condition of existing bridges. To solve more complex issues, it is necessary to be familiar with the theory of the finite element method and to have access to software that provides tools enabling the creation of sometimes significantly advanced numerical models. The paper includes a brief assessment of the technical condition, a description of the in situ non-destructive testing carried out, and the FEM models created for global and local analysis. In situ testing was performed using strain gauges and displacement sensors. Numerical models were created using various software and numerical modeling techniques. Particularly noteworthy is the method of modeling the riveted joints of the viaduct's crossbeam. It is a simplified method that uses only basic numerical tools such as beam and shell finite elements, constraints, and simplified boundary conditions (fixed support and symmetry). The results of the numerical analyses are presented and discussed. It is clearly explained why the structure did not fail despite the complete failure of the deck plate weld. A further research problem that was solved was determining the cause of the rapid increase in values on the stress diagram in the cross-section of the transverse member. The problems were solved using only the aforementioned simplified method of modeling riveted joints, which demonstrates that such problems can be solved without access to sophisticated software for advanced nonlinear analysis. Moreover, the obtained results are of great importance for assessing the operation of bridge structures with an orthotropic deck plate.
Keywords: bridge, diagnostics, FEM simulations, failure, NDT, in situ testing
Procedia PDF Downloads 75
1981 Sensing of Cancer DNA Using Resonance Frequency
Authors: Sungsoo Na, Chanho Park
Abstract:
Lung cancer is one of the most common severe diseases leading to human death. Lung cancer can be divided into two types, small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC), and about 80% of lung cancers are NSCLC. Several studies have investigated the correlation between the epidermal growth factor receptor (EGFR) and NSCLC. Therefore, EGFR inhibitor drugs such as gefitinib and erlotinib have been used as lung cancer treatments. However, these treatments showed a low response rate (10 to 20%) in clinical trials due to EGFR mutations that cause drug resistance. Patients resistant to EGFR inhibitor drugs are usually positive for KRAS mutation. Therefore, assessment of EGFR and KRAS mutations is essential for targeted therapies of NSCLC patients. In order to overcome the limitations of conventional therapies, both EGFR and KRAS mutations have to be monitored. In this work, only the detection of EGFR is presented. A variety of techniques has been presented for the detection of EGFR mutations. The standard method for detecting EGFR mutations in ctDNA relies on the real-time polymerase chain reaction (PCR). The real-time PCR method provides highly sensitive detection performance; however, the amplification steps increase both cost and complexity. Other technologies such as BEAMing, next-generation sequencing (NGS), electrochemical sensors, and silicon nanowire field-effect transistors have been presented, but these have limitations of low sensitivity, high cost, and complex data analysis. In this report, we propose a label-free and highly sensitive detection method for lung cancer using a quartz crystal microbalance based platform. The proposed platform is able to sense lung cancer mutant DNA with a limit of detection of 1 nM.
Keywords: cancer DNA, resonance frequency, quartz crystal microbalance, lung cancer
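QCM sensing of this kind rests on the Sauerbrey relation, which, for a thin, rigid, uniformly distributed film, links the measured resonance-frequency shift to the adsorbed mass (here, hybridized target DNA):

```latex
\Delta f \;=\; -\,\frac{2 f_0^{\,2}}{A\sqrt{\rho_q\,\mu_q}}\;\Delta m
```

where f₀ is the fundamental resonance frequency, A the electrode area, and ρ_q and μ_q the density and shear modulus of quartz. The abstract does not state the crystal parameters, so the relation is given here in its general form.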
Procedia PDF Downloads 233
1980 Study on Shifting Properties of CVT Rubber V-belt
Authors: Natsuki Tsuda, Kiyotaka Obunai, Kazuya Okubo, Hideyuki Tashiro, Yoshinori Yamaji, Hideyuki Kato
Abstract:
The objective of this study is to investigate the effect of belt stiffness on the performance of the CVT unit, such as the required pulley thrust force and the ratio coverage. The CVT unit consists of V-grooved pulleys and a rubber CVT belt. The width of the driving pulley groove was controlled by a stepper motor, while that of the driven pulley was controlled by hydraulic pressure. The mechanical power generated by the motor was transmitted from the driving axis to the driven axis through the CVT unit. The rotational speed and transmitted torque of both axes were measured by tachometers and torque meters attached to these axes, respectively. The transmitted mechanical power was absorbed by a magnetic powder brake. The thrust force acting on both pulleys and the force between the two shafts were measured by load cells. The back-face profile of the rubber CVT belt along the width direction was measured by a 2-dimensional laser displacement meter. This study found that when the stiffness of the rubber CVT belt in the belt width direction was reduced, the thrust force required for shifting was reduced, and the ratio coverage of the CVT unit was also reduced. Due to the decrease of stiffness in the belt width direction, excessive concave deformation of the belt in the pulley groove was confirmed. Because of this excessive concave deformation, the apparent wrapping radius of the belt is reduced. The proposed model could effectively estimate the difference in ratio coverage due to concave deformation, and it could also be utilized for designing a rubber CVT belt with optimal bending stiffness in the width direction.
Keywords: CVT, continuously variable transmission, rubber, belt stiffness, transmission
Procedia PDF Downloads 144
1979 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying batches with a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability. This grouping was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, and positively correlated with moisture loss, the number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, with the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable or not in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
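The classification step can be sketched in a few lines: label batches by whether hatchability exceeds 90% and fit a random forest on the seven extrinsic parameters. The data below are synthetic placeholders whose signs merely follow the correlations reported above, not the study's measurements.

```python
# Sketch of the binary classification step with a random forest.
# Feature names and data are assumptions for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "egg_weight": rng.normal(55, 4, n),
    "moisture_loss": rng.normal(12, 2, n),
    "breeder_age": rng.integers(30, 60, n),
    "fertilised_eggs": rng.integers(80, 120, n),
    "shell_width": rng.normal(42, 2, n),
    "shell_length": rng.normal(55, 3, n),
    "shell_thickness": rng.normal(0.35, 0.03, n),
})
# Synthetic hatchability consistent with the reported signs:
# negative for egg weight and breeder age, positive for the rest.
hatch = (92 - 0.3 * (df["egg_weight"] - 55) - 0.2 * (df["breeder_age"] - 45)
         + 0.4 * (df["moisture_loss"] - 12)
         + 20 * (df["shell_thickness"] - 0.35) + rng.normal(0, 2, n))
y = (hatch > 90).astype(int)   # 1 = commercially acceptable batch

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, df, y, cv=5).mean())
```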
Procedia PDF Downloads 88
1978 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed method, called the local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving extra discriminative feature values. Typically, the LDP extracts edge details in all eight directions. Integrating edge responses with the local binary pattern achieves a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
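For readers unfamiliar with the base operator, the classical 8-neighbour local binary pattern that LDEDBP extends can be sketched as follows; this is a minimal NumPy version, not the authors' full LDEDBP implementation.

```python
# Minimal 8-neighbour local binary pattern: each interior pixel is
# encoded by thresholding its eight neighbours against the centre.
import numpy as np

def lbp_codes(img: np.ndarray) -> np.ndarray:
    """LBP code for every interior pixel of a grayscale image."""
    c = img[1:-1, 1:-1]            # centre pixels
    # Eight neighbours, clockwise from the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= (neighbour >= c).astype(np.uint8) << bit
    return codes

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
hist = np.bincount(lbp_codes(img).ravel(), minlength=256)  # texture feature
print(hist[:8])
```

The histogram of codes serves as the texture feature vector; LDEDBP enriches it with derivative and directional (LDP-style) edge responses before classification.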
Procedia PDF Downloads 164
1977 Passive Vibration Isolation Analysis and Optimization for Mechanical Systems
Authors: Ozan Yavuz Baytemir, Ender Cigeroglu, Gokhan Osman Ozgen
Abstract:
Vibration is an important issue in the design of various components of aerospace, marine, and vehicular applications. In order not to lose a component's function and operational performance, vibration isolation design, involving the selection of optimum isolator properties and isolator positioning, is a critical study. Given the growing need for vibration isolation system design, this paper presents two types of software capable of implementing modal analysis, response analysis for both random and harmonic excitations, static deflection analysis, and Monte Carlo simulations, in addition to parameter and location optimization for different types of isolation problem scenarios. A review of the literature reveals no study that develops a software-based tool capable of implementing all of those analysis, simulation, and optimization studies on one platform simultaneously. In this paper, the theoretical system model is generated for a 6-DOF rigid body. The vibration isolation system of any mechanical structure can be optimized using a hybrid method involving both global search and gradient-based methods. After defining the optimization design variables, different types of optimization scenarios are listed in detail. Recognizing the need for a user-friendly vibration isolation problem solver, two types of graphical user interfaces (GUIs) were prepared and verified using a commercial finite element analysis program, Ansys Workbench 14.0. Using the analysis and optimization capabilities of those GUIs, a real application used in an air platform is also presented as a case study at the end of the paper.
Keywords: hybrid optimization, Monte Carlo simulation, multi-degree-of-freedom system, parameter optimization, location optimization, passive vibration isolation analysis
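For a single passive isolator, this kind of analysis rests on the textbook transmissibility curve; for a base-excited mass with damping ratio ζ and frequency ratio r = ω/ω_n:

```latex
T(r) \;=\; \sqrt{\frac{1 + (2\zeta r)^2}{\left(1 - r^2\right)^2 + (2\zeta r)^2}},
\qquad r = \frac{\omega}{\omega_n}
```

Isolation (T < 1) is obtained only for r > √2, which is what drives the isolator parameter and placement optimization described above; the 6-DOF case couples six such modes, hence the need for numerical optimization.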
Procedia PDF Downloads 565
1976 Emotional Intelligence Training: Helping Non-Native Pre-Service EFL Teachers to Overcome Speaking Anxiety: The Case of Pre-Service Teachers of English, Algeria
Authors: Khiari Nor El Houda, Hiouani Amira Sarra
Abstract:
Many EFL students with high capacities remain hidden because they suffer from speaking anxiety (SA). Most of them find public speaking very demanding: they feel unable to communicate, they fear making mistakes, and they fear negative evaluation or being called on. With the growing number of learners who suffer from foreign language speaking anxiety (FLSA), it is becoming increasingly difficult to ignore its harmful outcomes on their performance and success, especially during their first contact with pupils, as they will be teaching in the near future. Different researchers have suggested different ways to minimize the negative effects of FLSA. The present study sheds light on emotional intelligence skills training as an effective strategy not only to influence public speaking success but also to help pre-service EFL teachers lessen their speaking anxiety and eventually prepare them for their professional career. A quasi-experiment was used to examine the research hypothesis. We worked with two groups of third-year EFL students at Oum El Bouaghi University. The Foreign Language Classroom Anxiety Scale (FLCAS) and the Emotional Quotient Inventory (EQ-i) were used to collect data about the participants' FLSA and EI levels. The analysis of the data yielded that the assumed negative correlation between EI and FLSA was statistically validated by the Pearson correlation test, concluding that the more emotionally intelligent an individual is, the less anxious he or she will be. In addition, the lack of improvement in the results of the control group and the noteworthy improvement in the experimental group's results led us to conclude that EI skills training was an effective strategy for minimizing the FLSA level, and therefore we confirmed our research hypothesis.
Keywords: emotional intelligence, emotional intelligence skills training, EQ-i, FLCAS, foreign language speaking anxiety, pre-service EFL teachers
Procedia PDF Downloads 141
1975 Living Wall Systems: An Approach for Reducing Energy Consumption in Curtain Wall Façades
Authors: Salma Maher, Ahmed Elseragy, Sally Eldeeb
Abstract:
Nowadays, urbanism and climate change lead to rapid growth in energy consumption and increased use of air-conditioning for cooling. In hot climate areas, there is a need for a new sustainable alternative that is better suited to the existing situation. The building envelope controls the heat transfer between the outside and the inside of the building. The building façade is the most critical part, and the type of façade material plays a vital role in the energy demand for heating and cooling due to its exposure to direct solar radiation throughout the day. Since the beginning of the twentieth century, the use of curtain walls in office building façades has increased rapidly, leading to higher cooling loads in energy consumption. Integrating living wall systems in urban areas as a sustainable renovation and energy-saving method for the built environment will reduce the energy demand of buildings and will also provide environmental benefits. It will also balance the urban ecology and enhance urban life quality. The results show that living wall systems reduce the internal temperature by up to 4.0 °C. This research carries out an analytical study highlighting the different types of living wall systems and verifying their thermal performance, energy savings, and life potential for the building. The assessment criteria include the reasons for using living wall systems on the building façade as well as their effect upon the surrounding environment. Finally, the paper concludes on the effect of using living wall systems on buildings and suggests such a system as a long-lasting, energy-efficient solution to be applied to curtain wall façades in hot climate areas.
Keywords: living wall systems, energy consumption, curtain walls, energy-saving, sustainability, urban life quality
Procedia PDF Downloads 141
1974 Lesson Learnt from Solar Photovoltaic Power Generation in Thailand with Global Self-Consumption Experience
Authors: Tongpong Sriboon, Prapita Thanarak, Chaitawatch Khunrangabsang
Abstract:
Nowadays, the use of power generated from photovoltaic systems has been promoted significantly in Thailand. The target of increasing solar power generation to 6000 megawatts (MW) by 2036 was set by the Alternative Energy Development Plan (AEDP 2015) and the Power Development Plan (PDP 2015). A 200 MW solar rooftop programme was promoted and supported under the Feed-in Tariff (FiT) scheme in two phases: phase I in 2012 and phase II in 2015. However, the number of people interested in supporting the projects decreased for many reasons, ranging from the first step of the process to the last, which is selling electricity back to the electricity authority. This paper reviews this situation, especially the total electricity generated from solar rooftop systems during the day that has been sold back to the grid utility under different capacity FiT rates. With many stakeholders involved, regulations and criteria were established to maintain the standard of the system. Besides, many problems occurred during the processes, including reliability and quality issues, shortly followed by other issues concerning politics, society, the economy, etc. In order to develop solar PV power systems in Thailand effectively, the problems and solutions were compared with those from six countries: Japan, Australia, America, China, Germany, and Malaysia. This paper particularly focuses on the policies and measures implemented to encourage the rising interest in solar PV systems. This review provides insight into the nature of the changes that have taken place in each of the countries mentioned above, as well as the underlying reasons behind them. A brief analysis is carried out to identify key challenges and opportunities for solar PV application. This could help create a development path suited to the situation and enhance the overall performance of solar PV power generation in Thailand.
Keywords: solar PV rooftop, PV policy, self-consumption, solar PV power generation
Procedia PDF Downloads 313
1973 Ensuring Quality in DevOps Culture
Authors: Sagar Jitendra Mahendrakar
Abstract:
Integrating quality assurance (QA) practices into DevOps culture has become increasingly important in modern software development environments. Collaboration, automation, and continuous feedback characterize the seamless integration of development and operations teams in DevOps to achieve rapid and reliable software delivery. In this context, quality assurance plays a key role in ensuring that software products meet the highest standards of quality, performance, and reliability throughout the development life cycle. This brief explores key principles, challenges, and best practices related to quality assurance in a DevOps culture. It emphasizes the importance of embedding quality throughout the development process, with quality control integrated into every step of the DevOps pipeline. Automation is the cornerstone of DevOps quality assurance, enabling continuous testing, integration, and deployment, and providing rapid feedback for early problem identification and resolution. In addition, the abstract addresses the cultural and organizational challenges of implementing quality assurance in DevOps, emphasizing the need to foster collaboration, break down silos, and promote a culture of continuous improvement, and it discusses the importance of toolchain integration and skills development to support effective QA practices within DevOps environments. Overall, this work sits at the intersection of QA and DevOps culture, providing insights into how organizations can use DevOps principles to improve software quality, accelerate delivery, and meet the changing demands of today's dynamic software landscape.
Keywords: quality engineer, devops, automation, tool
Procedia PDF Downloads 58
1972 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data
Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar
Abstract:
It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, and therefore the development of accurate, robust, and reliable forecasting methods is very important. A large number of forecasting methods have been proposed and studied in the literature. There are still two dominant major forecasting methods, Box-Jenkins ARIMA and exponential smoothing (ES), and new methods are still derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness, and accuracy as an automatic forecasting procedure, especially in the famous M-competitions. Despite their success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method, called the ATA method, is proposed in this study to cope with these shortcomings. This new method is obtained from traditional ES models by modifying the smoothing parameters; hence, both methods have similar structural forms, and ATA can easily be adapted to all of the individual ES models. However, ATA has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. Therefore, the ATA method is expanded to higher-order ES methods for additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performances are compared to their counterpart ES models on the M3 competition data set, since it is still the most recent and comprehensive time-series data collection available. The models are shown to outperform their counterparts on almost all settings, and when a model selection is carried out among these trended models, ATA outperforms all of the competitors in the M3 competition for both short-term and long-term forecasting horizons when the models' forecasting accuracies are compared based on popular error metrics.
Keywords: accuracy, exponential smoothing, forecasting, initial value
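A sketch of the additive-trend recursion, as commonly presented for ATA(p, q), is given below: the smoothing weights p/t and q/t replace the fixed α and β of classical exponential smoothing and decay as more data arrive. This is an illustrative reading of the method, not the authors' exact specification.

```python
# Sketch of an additive-trend ATA-style recursion: weights p/t and
# q/t replace the constant alpha/beta of Holt's linear method.
# Illustrative reading of the method, not the paper's exact form.
def ata_additive(y, p, q):
    """One-step-ahead forecast with an ATA-style additive trend."""
    assert 1 <= p <= len(y) and 0 <= q <= p
    level, trend = y[0], 0.0
    for t in range(2, len(y) + 1):           # t is the 1-based time index
        prev = level
        if t <= p:
            level = y[t - 1]                 # initialisation: follow data
        else:
            level = (p / t) * y[t - 1] + ((t - p) / t) * (prev + trend)
        if t <= q:
            trend = level - prev
        else:
            trend = (q / t) * (level - prev) + ((t - q) / t) * trend
    return level + trend                     # one-step-ahead forecast

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print(ata_additive(series, p=6, q=2))
```

Because the weights depend on t rather than on a fitted constant, early observations are discounted quickly and the optimization over (p, q) is over a small integer grid, which is part of the method's appeal as an automatic procedure.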
Procedia PDF Downloads 177
1971 Numerical Simulation of Air Pollutant Using Coupled AERMOD-WRF Modeling System over Visakhapatnam: A Case Study
Authors: Amit Kumar
Abstract:
Accurate identification of regions with deteriorated air quality is very helpful in devising better environmental practices and mitigation efforts. In the present study, an attempt has been made to identify the dispersion patterns of air pollutants, especially NOX, due to vehicular and industrial sources over a rapidly developing urban city, Visakhapatnam (17°42’ N, 83°20’ E), India, during April 2009. Using the emission factors of different vehicles as well as industry, a high-resolution 1 km x 1 km gridded emission inventory has been developed for Visakhapatnam city. The dispersion model AERMOD, with explicit representation of planetary boundary layer (PBL) dynamics and offline coupling through a developed coupler mechanism with the high-resolution mesoscale model WRF-ARW, is used for simulating the dispersion patterns of NOX. The meteorological and PBL parameters obtained by employing two PBL schemes of the WRF-ARW model, the non-local Yonsei University (YSU) scheme and the local Mellor-Yamada-Janjic (MYJ) scheme, which reasonably represent the boundary layer parameters, are considered for integrating AERMOD. Significantly different dispersion patterns of NOX have been noticed between summer and winter months. The simulated NOX concentrations are validated against the six available monitoring stations of the Central Pollution Control Board, India. Statistical analysis of the model-evaluated concentrations against the observations reveals that WRF-ARW with the YSU scheme coupled with AERMOD shows better performance. The locations with deteriorated air quality are identified over Visakhapatnam based on the validated model simulations of NOX concentrations. The present study advocates the utility of the developed gridded emission inventory of NOX with the coupled WRF-AERMOD modeling system for air quality assessment over the study region.
Keywords: WRF-ARW, AERMOD, planetary boundary layer, air quality
Procedia PDF Downloads 282
1970 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods have been developed based on ensemble approaches to achieve higher accuracy and stability. However, most ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, FRRM does not require the true number of clusters to be determined in advance, through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.
Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
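The random-subspace, multiple-k idea can be sketched as follows: cluster random feature subsets under several values of k, and credit each participating feature with a cluster-separation score. The ANOVA-F scoring used here is our stand-in for the paper's importance measure, and all sizes are illustrative.

```python
# Sketch of random-subspace, multiple-k feature ranking: cluster
# random feature subsets under varying k, then average a per-feature
# cluster-separation score (ANOVA F here, as an illustrative choice).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
X, _ = load_wine(return_X_y=True)
n, d = X.shape
scores, counts = np.zeros(d), np.zeros(d)

for _ in range(50):                              # ensemble members
    subset = rng.choice(d, size=d // 2, replace=False)
    k = int(rng.choice([2, 3, 4, 5]))            # multiple-k: no fixed k
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=0).fit_predict(X[:, subset])
    f_vals, _ = f_classif(X[:, subset], labels)  # separation per feature
    scores[subset] += np.nan_to_num(f_vals)
    counts[subset] += 1

ranking = np.argsort(-(scores / np.maximum(counts, 1)))
print("features ranked by ensemble importance:", ranking)
```

Because every member uses a different subspace and a different k, no single (possibly wrong) clustering solution dominates the final ranking, which is the stability argument behind the ensemble design.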
Procedia PDF Downloads 339
1969 Investigating the Relative Priority of the Factors Affecting Customer Satisfaction in Gaining the Competitive Advantage in Pars-Khazar Company
Authors: Samaneh Pouyanfar, Michael Oliff
Abstract:
The home appliance industry may be one of the industries with the highest competition, and what can guarantee the survival of this industry is discovering superior services. The trend toward providing quality products and services plays an important role in this industry, because discovering such services is vital for manufacturing organizations' survival and profitability. Given the importance of the topic, this paper attempts to investigate the relative priority of the factors influencing customer satisfaction in gaining competitive advantage in the Pars-Khazar Company. In total, 96 executives of the Pars-Khazar Company were investigated in a census. For this purpose, after reviewing the research literature and conducting in-depth interviews with pundits and experts active in the industry, the research questionnaire was constructed based on the variables affecting customer satisfaction and the components determining business competitive advantage. Content validity was determined by expert judgment. The reliability of each construct was measured based on Cronbach's alpha coefficient. Since the value of Cronbach's alpha was higher than 0.7 for each construct, the internal consistency of the statements was high and the reliability of the questionnaire was acceptable. Data analysis was done with the Kolmogorov-Smirnov test and the Friedman test using SPSS software. The results showed that among the factors affecting customer satisfaction, the history of the trade name (brand), familiarity with the product brand, brand reputation, and safety have the highest priority, respectively, and the variable of firm growth has the highest priority among the components determining competitive advantage performance.
Keywords: customer satisfaction, competitive advantage, brand history, safety, growth
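The reliability check mentioned above uses the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total score); a minimal sketch with synthetic questionnaire responses follows (the response matrix is a placeholder, not the study's data).

```python
# Cronbach's alpha for a block of questionnaire items, using the
# standard formula. Responses below are synthetic placeholders.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(96, 1))                # 96 respondents
responses = np.clip(latent + rng.normal(0, 0.5, (96, 5)), 1, 5)
print(round(cronbach_alpha(responses), 3))             # > 0.7 is acceptable
```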
Procedia PDF Downloads 230
1968 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised training models within specific domains, and their effectiveness diminishes when applied to identifying fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news to its specific area in advance, judgments in the corresponding field may be more accurate. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests were conducted on existing widely used datasets, and the experimental results demonstrate that this method is able to improve detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, reducing the time consumed by tagging without sacrificing detection accuracy.
Keywords: fake news, deep learning, natural language processing, multiple domains
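The shape of the pipeline can be sketched as below, with LDA topic proportions standing in for the unsupervisedly discovered domain embeddings and TF-IDF features standing in for the comprehensive text features; every component and the toy data are placeholders, not the authors' architecture.

```python
# Sketch of the pipeline shape: unsupervised "domain embeddings"
# (LDA topic proportions as a stand-in) concatenated with text
# features and fed to a classifier. All components are placeholders.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["vaccine rumor spreads online", "election results disputed",
         "stock market crash imminent", "miracle cure claims debunked"]
labels = np.array([1, 0, 1, 0])                  # 1 = fake (toy labels)

counts = CountVectorizer().fit_transform(texts)
domain_emb = LatentDirichletAllocation(
    n_components=3, random_state=0).fit_transform(counts)
# Soft topic proportions preserve multi-domain membership, unlike a
# single hard domain label per article.
tfidf = TfidfVectorizer().fit_transform(texts).toarray()

features = np.hstack([tfidf, domain_emb])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```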
Procedia PDF Downloads 1011967 Using Problem-Based Learning on Teaching Early Intervention for College Students
Authors: Chen-Ya Juan
Abstract:
In recent years, the increasing number of children with special needs has drawn considerable attention from scholars and experts in education and has forced preschool teachers to confront harsh challenges in the classroom. To protect every child's right to equal education, enhance the quality of children's learning, and meet the needs of children with special needs, work as a special education paraprofessional has become one of the future employment trends for students in departments of early childhood care and education. Problem-based learning (PBL) is a problem-oriented form of instruction that differs from traditional teaching. The instructor first designed an open-ended problem; building on basic knowledge of early intervention, students then had to find clues to solve the problem they had defined themselves. The instruction totaled 20 hours, two hours per week. The primary purpose of this paper is to investigate the relationships among students' academic scores, self-awareness, learning motivation, learning attitudes, and early intervention knowledge. A total of 105 college students participated in this study, of whom 97 returned valid questionnaires, for an effective response rate of 90%. The participants comprised 95 females and two males, with an average age of 19 years. The questionnaire included 125 questions divided into four major dimensions: (1) self-awareness, (2) learning motivation, (3) learning attitudes, and (4) early intervention knowledge. The results indicated that (1) the self-awareness score was 58%, the learning motivation score was 64.9%, and the learning attitudes score was 55.3%; (2) after the instruction, early intervention knowledge increased from 38.4% to 64.2%; and (3) students' academic performance was positively related to self-awareness (p < 0.05; R = 0.506), learning motivation (p < 0.05; R = 0.487), and learning attitudes (p < 0.05; R = 0.527). The results imply that although students gained early intervention knowledge through PBL instruction, they scored at a medium level on self-awareness and learning attitudes and at a medium-high level on learning motivation. Keywords: college students, children with special needs, problem-based learning, learning motivation
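For readers unfamiliar with the statistics reported above, the short sketch below shows how a Pearson correlation of the kind cited (an R value with a p-value) is computed; the data are invented, and only the form of the analysis follows the abstract.

```python
# Hypothetical illustration of the reported correlation analysis: Pearson r
# between academic scores and one questionnaire dimension (data invented).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
academic = rng.normal(70, 10, size=97)                       # 97 valid respondents
self_awareness = 0.5 * academic + rng.normal(0, 8, size=97)  # constructed to correlate

r, p = pearsonr(academic, self_awareness)
print(f"R = {r:.3f}, p = {p:.4f}")  # p < 0.05 -> significant positive correlation
```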
Procedia PDF Downloads 1581966 The Effect of Speech-Shaped Noise and Speaker’s Voice Quality on First-Grade Children’s Speech Perception and Listening Comprehension
Authors: I. Schiller, D. Morsomme, A. Remacle
Abstract:
Children’s ability to process spoken language develops until the late teenage years. At school, where efficient spoken-language processing is key to academic achievement, listening conditions are often unfavorable: high background noise and an impaired teacher's voice are typical sources of interference. These factors can be assumed to affect primary school children in particular, because their language and literacy skills are still developing. While it is generally accepted that background noise and an impaired voice impede spoken-language processing, there is a growing need to analyze these effects within specific linguistic areas. Against this background, the aim of the study was to investigate the effect of speech-shaped noise and an imitated dysphonic voice on first-grade primary school children's speech perception and sentence comprehension. Via headphones, 5- to 6-year-old children recruited in the French-speaking community of Belgium performed a minimal-pair discrimination task and a sentence-picture matching task. Stimuli were presented randomly according to four experimental conditions: (1) normal voice / no noise, (2) normal voice / noise, (3) impaired voice / no noise, and (4) impaired voice / noise. The primary outcome measure was task score, and the central question was how performance varied with listening condition. Preliminary results will be presented for speech perception and sentence comprehension and carefully interpreted in the light of past findings. This study advances our understanding of children's language processing under adverse conditions, and its results shall serve as a starting point for probing new measures to optimize children's learning environment. Keywords: impaired voice, sentence comprehension, speech perception, speech-shaped noise, spoken language processing
Procedia PDF Downloads 1931965 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structures, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. Pavement distress auscultation has traditionally been based on manual surveys, which are extremely time-consuming and labor-intensive and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, pavement distress is classified as transverse or longitudinal cracking, alligator cracking, potholes, or intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to achieve the highest accuracy, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The model is optimized by checking all feasible combinations and selecting the best-performing one. After optimization, the model's performance metrics are calculated: training and validation accuracy, precision, recall, and F1 score. Keywords: pavement distress, hyperparameters, automatic classification, deep learning
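The exhaustive search over hyperparameter combinations described above can be sketched as follows. The architecture, image size, candidate grids, and placeholder data are all assumptions for illustration; the abstract does not specify the authors' exact configuration.

```python
# Hedged sketch (not the authors' exact setup) of exhaustive grid search over
# a few DCNN hyperparameters for 5-class pavement-distress classification.
import itertools
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # transverse crack, longitudinal crack, alligator, pothole, intact

# Placeholder arrays standing in for the public asphalt pavement image dataset
x = np.random.rand(40, 64, 64, 3).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=40)

def build_model(n_conv_blocks, n_filters, kernel_size, learning_rate):
    """Small CNN whose structure is controlled by the searched hyperparameters."""
    model = models.Sequential([tf.keras.Input(shape=(64, 64, 3))])
    for i in range(n_conv_blocks):
        # Filter count doubles at each block -- a common (assumed) pattern
        model.add(layers.Conv2D(n_filters * 2**i, kernel_size,
                                activation="relu", padding="same"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical candidate grids: every feasible combination is trained and the
# configuration with the best validation accuracy is retained.
grid = {"n_conv_blocks": [2, 3],
        "n_filters": [16, 32],
        "kernel_size": [3, 5],
        "learning_rate": [1e-3, 1e-4]}

best_params, best_acc = None, 0.0
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid, combo))
    model = build_model(**params)
    history = model.fit(x[:32], y[:32], validation_data=(x[32:], y[32:]),
                        epochs=2, verbose=0)
    val_acc = max(history.history["val_accuracy"])
    if val_acc > best_acc:
        best_params, best_acc = params, val_acc

print("best configuration:", best_params, "val accuracy:", round(best_acc, 3))
```

With real data the inner loop would train each candidate to convergence and the retained model would then be evaluated on a held-out test set for precision, recall, and F1 score, mirroring the metrics the abstract lists.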
Procedia PDF Downloads 94