Search results for: Signal processing
Simulated Annealing Algorithm for Data Aggregation Trees in Wireless Sensor Networks and Comparison with Genetic Algorithm
Authors: Ladan Darougaran, Hossein Shahinzadeh, Hajar Ghotb, Leila Ramezanpour
Abstract:
In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main design constraint is the limited energy of the sensors. Protocols that minimize sensor power consumption are therefore of particular interest in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, a technique that combines related data and prevents the transmission of redundant packets, can be effective in reducing the number of transmitted packets. Since processing information consumes less power than transmitting it, data aggregation is of great importance, and for this reason the technique is used in many protocols [5]. One data aggregation technique is the data aggregation tree; however, finding an optimal data aggregation tree for collecting data in a network with a single sink is an NP-hard problem. In the data aggregation technique, related information packets are combined at intermediate nodes into a single packet, so the number of packets transmitted in the network is reduced; consequently, less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used to tackle this NP-hard problem, and one such optimization method is the simulated annealing algorithm. In this article, we propose a new method for building the data collection tree in wireless sensor networks using the simulated annealing algorithm, and we evaluate its efficiency against a genetic algorithm.
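As an illustration of the approach the abstract describes, the following is a minimal sketch of simulated annealing applied to a data aggregation tree, assuming a toy network of randomly placed nodes, a single sink at node 0, and a transmission-energy cost proportional to the squared link distance; the move set, cooling schedule and cost model are illustrative choices, not the authors' implementation.

```python
import math
import random

random.seed(1)

def cost(parent, pos):
    # Total transmission energy: sum of squared link distances to each parent.
    return sum((pos[i][0] - pos[p][0])**2 + (pos[i][1] - pos[p][1])**2
               for i, p in parent.items() if p is not None)

def creates_cycle(parent, node, new_parent):
    # Walk up the ancestors of new_parent; reaching node would close a cycle.
    p = new_parent
    while p is not None:
        if p == node:
            return True
        p = parent[p]
    return False

n, sink = 20, 0
pos = {i: (random.random() * 100, random.random() * 100) for i in range(n)}
parent = {i: (sink if i != sink else None) for i in range(n)}  # start: star topology

T, alpha = 1000.0, 0.995          # initial temperature and cooling rate
cur = cost(parent, pos)
while T > 0.01:
    node = random.randrange(1, n)                      # never re-root the sink
    cand = random.choice([j for j in range(n) if j != node])
    if not creates_cycle(parent, node, cand):
        old = parent[node]
        parent[node] = cand
        new = cost(parent, pos)
        # Accept improvements always; accept worse trees with Boltzmann probability.
        if new < cur or random.random() < math.exp((cur - new) / T):
            cur = new
        else:
            parent[node] = old                         # reject: roll back the move
    T *= alpha

print("approximate minimum-energy aggregation tree cost:", round(cur, 1))
```

A genetic algorithm would explore the same search space with a population of candidate trees instead of a single cooling trajectory, which is the comparison the paper draws.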
Keywords: Data aggregation, wireless sensor networks, energy efficiency, simulated annealing algorithm, genetic algorithm.
CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification
Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian
Abstract:
Image retrieval is one of the most widely used techniques in today's digital world. Content Based Image Retrieval (CBIR) is an image processing technique that identifies relevant images and retrieves them based on patterns extracted from digital images. In this paper, two research works based on CBIR are presented. The first provides an automated and interactive approach to the analysis of CBIR techniques. CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. In the feature extraction step, image transforms such as the Contourlet, Ridgelet and Shearlet are used to retrieve texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and multi-class Support Vector Machine. The testing phase then predicts the class of a new input image using the trained classifier, labelling it as one of four classes: 1- normal brain, 2- benign tumour, 3- malignant tumour and 4- severe tumour. The second research work develops a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. These two approaches are a contribution to the medical field, offering better retrieval performance and tumour stage identification.
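The pipeline the abstract outlines (texture features, then train/test a classifier) can be sketched as below; since Contourlet/Ridgelet/Shearlet transforms are not available in common Python libraries, synthetic feature vectors stand in for the extracted texture features, and K-Nearest Neighbour stands in for the classifier bank. Class means, dimensions and sample counts are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Four classes: 0 normal, 1 benign, 2 malignant, 3 severe.
# Stand-in 32-dimensional "texture" feature vectors, 50 images per class.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 32)) for c in range(4)])
y = np.repeat([0, 1, 2, 3], 50)

# Training and testing phases, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("classification accuracy on held-out images:", clf.score(X_te, y_te))
```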
Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.
Insights into Smoothies with High Levels of Fibre and Polyphenols: Factors Influencing Chemical, Rheological and Sensory Properties
Authors: Dongxiao Sun-Waterhouse, Shiji Nair, Reginald Wibisono, Sandhya S. Wadhwa, Carl Massarotto, Duncan I. Hedderley, Jing Zhou, Sara R. Jaeger, Virginia Corrigan
Abstract:
Attempts to add fibre and polyphenols (PPs) into popular beverages present challenges related to the properties of finished products such as smoothies. Consumer acceptability, viscosity and phenolic composition of smoothies containing high levels of fruit fibre (2.5-7.5 g per 300 mL serve) and PPs (250-750 mg per 300 mL serve) were examined. The changes in total extractable PP, vitamin C content, and colour of selected smoothies over a storage stability trial (4°C, 14 days) were compared. A set of acidic aqueous model beverages was prepared to further examine the effect of two different heat treatments on the stability and extractability of PPs. Results show that overall consumer acceptability of high fibre and PP smoothies was low, with average hedonic scores ranging from 3.9 to 6.4 (on a 1-9 scale). Flavour, texture and overall acceptability decreased as fibre and polyphenol contents increased, with fibre content exerting a stronger effect. Higher fibre content resulted in greater viscosity, with an elevated PP content increasing viscosity only slightly. The presence of fibre also aided the stability and extractability of PPs after heating. A reduction of extractable PPs, vitamin C content and colour intensity of smoothies was observed after a 14-day storage period at 4°C. Two heat treatments (75°C for 45 min or 85°C for 1 min) that are normally used for beverage production did not cause significant reduction of total extracted PPs. It is clear that high levels of added fibre and PPs greatly influence the consumer appeal of smoothies, suggesting the need to develop novel formulation and processing methods if a satisfactory functional beverage is to be developed incorporating these ingredients.
Keywords: Apple fibre, apple and blackcurrant polyphenols, consumer acceptability, functional foods, stability.
Analysis of Surface Hardness, Surface Roughness, and Near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process
Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.
Abstract:
In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values, together with the subsequent verification experiments under the optimal operating conditions, confirmed the validity of the model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness is improved from 225 to 306 HV, an increase in the near surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found from the microstructure analysis to be more than 300 µm, which correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and an X-ray diffractometer were used to characterize the modified surface layer.
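The regression model described above can be illustrated with a second-order (response-surface) fit over two of the factors; the factor ranges and the synthetic response below are assumed for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)
force = rng.uniform(250, 750, 30)   # rolling force (N), assumed range
diam = rng.uniform(6, 14, 30)       # ball diameter (mm), assumed range
# Hypothetical hardness response: rises with force, peaks at mid diameter.
hv = 220 + 0.12 * force + 8 * diam - 0.4 * diam**2 + rng.normal(0, 3, 30)

# Second-order RSM model: b0 + b1*F + b2*D + b3*F^2 + b4*D^2 + b5*F*D.
A = np.column_stack([np.ones_like(force), force, diam,
                     force**2, diam**2, force * diam])
coef, *_ = np.linalg.lstsq(A, hv, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((hv - pred)**2) / np.sum((hv - hv.mean())**2)
print("fitted coefficients:", np.round(coef, 4))
print("R^2 of the response-surface model:", round(r2, 3))
```

A central composite design would replace the random factor settings with factorial, axial and center points, but the fitting step is the same.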
Keywords: Surface hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness.
A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks
Authors: Ashkan Ebadi, Adam Krzyzak
Abstract:
Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users find the products or services they require (e.g. books, music) by analyzing and aggregating other users’ activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate the user’s decision-making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types which implicitly or explicitly help the system improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users’ reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely user clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
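Of the sub-systems listed, the matrix factorization module is the easiest to sketch in code; the following is a minimal SGD factorization of a sparse user-hotel rating matrix, with invented data, latent dimension and hyperparameters standing in for the paper's layered system.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(30, 12)).astype(float)  # users x hotels ratings
mask = rng.random(R.shape) < 0.3                     # only ~30% of ratings observed

k, lr, reg = 4, 0.01, 0.05                # latent factors, step size, L2 penalty
P = rng.normal(0, 0.1, (R.shape[0], k))   # user latent factors
Q = rng.normal(0, 0.1, (R.shape[1], k))   # hotel latent factors

for _ in range(200):                      # SGD over the observed entries
    for u, i in zip(*np.where(mask)):
        e = R[u, i] - P[u] @ Q[i]         # prediction error on one rating
        P[u] += lr * (e * Q[i] - reg * P[u])
        Q[i] += lr * (e * P[u] - reg * Q[i])

user = 0
print("top-3 hotel recommendations for user 0:", np.argsort(-(P[user] @ Q.T))[:3])
```

In a multi-aspect system, one such factorization per rating aspect (cleanliness, location, and so on) would feed the hybrid layer.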
Keywords: Tourism, hotel recommender system, hybrid, implicit features.
Integration of Big Data to Predict Transportation for Smart Cities
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
Intelligent transportation systems are essential for building smarter cities. Machine-learning-based transportation prediction is a highly promising approach because it makes invisible aspects visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions, so bus delays occur frequently. To overcome this problem, a prediction model is presented that finds patterns of bus delay using machine learning on the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built on real-time bus data. The data are gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations together with traffic environment factors (road speeds, station conditions, weather, and real-time bus operation information). The prototype model was designed with a machine learning tool (RapidMiner Studio), and tests for bus delay prediction were conducted. This research presents experiments to increase the prediction accuracy for bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on the analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.
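A stand-in for the RapidMiner workflow can be written in a few lines: a regressor trained on traffic, weather and bus-status features to predict delay. The feature set, the synthetic delay model and the choice of a random forest are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500
road_speed = rng.uniform(10, 60, n)   # km/h (traffic factor)
rainfall = rng.exponential(1.0, n)    # mm/h (weather factor)
boarding = rng.integers(0, 40, n)     # passengers at the stop (bus status)
# Hypothetical delay in seconds: slow roads, rain and crowded stops add delay.
delay = 600 / road_speed + 15 * rainfall + 2 * boarding + rng.normal(0, 5, n)

X = np.column_stack([road_speed, rainfall, boarding])
X_tr, X_te, y_tr, y_te = train_test_split(X, delay, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out trips:", round(model.score(X_te, y_te), 3))
```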
Keywords: Big data, bus headway prediction, machine learning, public transportation.
Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength
Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos
Abstract:
Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers, which make it possible to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skew data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Therefore, based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison reveals relevant differences when analyzed in light of risk management.
Keywords: Statistical slope stability analysis, skew distributions, probability of failure, functions of random variables.
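The failure-probability idea can be made concrete with a Monte Carlo sketch: sample the friction angle from a skewed and from a normal distribution and count how often the safety factor drops below 1. A cohesionless infinite-slope factor of safety, FS = tan(phi)/tan(beta), and a gamma distribution as a stand-in for the fitted Dagum are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = np.radians(25)                # slope inclination, assumed 25 degrees
N = 100_000

# Friction angle in degrees: a right-skewed sample vs a normal with same moments.
phi_skew = rng.gamma(shape=30.0, scale=1.0, size=N)          # mean 30, skewed
phi_norm = rng.normal(30.0, np.sqrt(30.0), N)                # same mean and std

for name, phi in [("skewed", phi_skew), ("normal", phi_norm)]:
    fs = np.tan(np.radians(phi)) / np.tan(beta)   # FS = tan(phi) / tan(beta)
    print(f"{name}: P(failure) = P(FS < 1) = {(fs < 1).mean():.4f}")
```

The analytical derivation in the paper replaces this sampling with a closed-form PDF of the safety factor, but the comparison between skew and normal assumptions is the same.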
Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach to compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy together with its parameter count and mean average error rate, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proved to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
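The loss/optimizer sweep the abstract describes can be reproduced in miniature; the sketch below trains a toy CNN on random stand-in fingerprint patches under three optimizers with a cross-entropy loss. The architecture, data and hyperparameters are placeholders, not the paper's AlexNet/VGGNet/ResNet setups.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 1, 32, 32)       # stand-in fingerprint patches
y = torch.randint(0, 2, (64,))       # live (0) vs spoof (1) labels

def make_cnn():
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(8 * 16 * 16, 2))

for name, opt_cls in [("Adam", torch.optim.Adam), ("SGD", torch.optim.SGD),
                      ("RMSprop", torch.optim.RMSprop)]:
    model, loss_fn = make_cnn(), nn.CrossEntropyLoss()
    opt = opt_cls(model.parameters(), lr=1e-3)
    for _ in range(20):              # a short training run per optimizer
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    print(f"{name}: final training loss {loss.item():.3f}")
```

Swapping `loss_fn` (e.g. for a hinge-style loss) extends the same harness to the loss-function comparison.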
Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.
Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods
Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin
Abstract:
Nowadays, the data center industry faces strong challenges in increasing processing speed and capacity while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of these facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this industry. Installing a matrix of temperature sensors distributed through the structure of each server would provide the information needed to obtain an instantaneous temperature profile inside them. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high, and such instrumentation is expensive. Therefore, less intrusive techniques are employed, in which each point characterizing the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first and second order derivatives of the resulting Burgers' equation are the key to obtaining results with greater or lesser accuracy, regardless of the characteristic truncation error.
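For reference, a one-dimensional viscous Burgers' equation, u_t + u u_x = ν u_xx, discretized exactly as the abstract describes (backward/upwind difference for the convective first derivative, central difference for the second), looks like this; the grid, time step, viscosity and initial profile are illustrative values, not the data-center model itself.

```python
import numpy as np

nx, nt = 101, 500
dx = 2 * np.pi / (nx - 1)
dt, nu = 0.001, 0.07
x = np.linspace(0, 2 * np.pi, nx)
u = np.sin(x) + 1.5                  # smooth initial profile (assumed)

for _ in range(nt):
    un = u.copy()
    # Backward (upwind) difference for u*u_x, central difference for u_xx.
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    # Periodic boundary conditions.
    u[0] = (un[0]
            - un[0] * dt / dx * (un[0] - un[-2])
            + nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-2]))
    u[-1] = u[0]

print(f"u after {nt} steps: min {u.min():.3f}, max {u.max():.3f}")
```

Choosing a forward instead of a backward difference for the convective term changes the truncation error and the stability constraint, which is precisely the trade-off the paper examines.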
Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.
In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR
Authors: Pascal Mwenge, Tumisang Seodigeng
Abstract:
World population growth combined with the third industrial revolution has led to a high demand for fuels. On the other hand, the decline of global fossil fuel deposits and the air pollution caused by these fuels have compounded the challenges the world faces due to its need for energy. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have proven reliable but are demanding and costly and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module from the iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering and variance scaling, a square-root spectrum transformation, and solvent subtraction. These pre-processing methods improved the performance indexes from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999 for RMSEC, RMSECV, RMSEP and R2Cum, respectively. The R2 values of 1 (training), 0.9918 (test) and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and lab scale.
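The multivariate calibration step can be sketched with a PLS regression on synthetic spectra; the band shape, wavenumber range and noise level below are invented, and scikit-learn's PLSRegression stands in for the iC Quant module.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
conc = np.linspace(0, 30, 15)                  # 15 standards of known concentration
wavenumbers = np.linspace(1700, 1800, 120)
band = np.exp(-((wavenumbers - 1745) / 8)**2)  # stand-in ester C=O absorption band
X = conc[:, None] * band[None, :] + rng.normal(0, 0.02, (15, 120))
X = X - X.mean(axis=0)                         # mean centering, as in the paper

pls = PLSRegression(n_components=3).fit(X, conc)
pred_cv = cross_val_predict(pls, X, conc, cv=5).ravel()
print("RMSECV:", round(float(np.sqrt(np.mean((pred_cv - conc)**2))), 4))
```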
Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.
Saving Energy through Scalable Architecture
Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala
Abstract:
In this paper, we focus on the importance of scalable architecture for data centers and buildings in general to help an enterprise achieve environmental sustainability. Scalable architecture helps in many ways: it adapts to business and user requirements, and it promotes high-availability and disaster-recovery solutions that are cost effective and low maintenance. Scalable architecture also plays a vital role in the three core areas of sustainability: economic, environmental, and social, also known as the three pillars of a sustainability model. A scalable architecture has many advantages; for example, it helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, a scalable architecture helps industries introduce cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps reduce carbon emissions through advanced monitoring and metering capabilities. Scalable architectures help reduce waste by optimizing designs to utilize materials efficiently, minimize resource use, and decrease carbon footprints by using low-impact, environmentally friendly materials. In this paper, we also emphasize the importance of a cultural shift towards the reuse and recycling of natural resources for a balanced ecosystem and a circular economy. Also, since all of us are involved in the use of computers, much of the scalable architecture we have studied is related to data centers.
Keywords: Scalable architectures, sustainability, application design, disruptive technology, machine learning, natural language processing, AI, social media platforms, cloud computing, advanced networking, storage devices, advanced monitoring, metering infrastructure, climate change.
In vitro Effects of Salvia officinalis on Bovine Spermatozoa
Authors: Eva Tvrdá, Boris Botman, Marek Halenár, Tomáš Slanina, Norbert Lukáč
Abstract:
In vitro storage and processing of animal semen represents a risk factor to spermatozoa vitality, potentially leading to reduced fertility. A variety of substances isolated from natural sources may exhibit protective or antioxidant properties on the spermatozoon, thus extending the lifespan of stored ejaculates. This study compared the effects of different concentrations of Salvia officinalis extract on the motility, mitochondrial activity, viability and reactive oxygen species (ROS) production of bovine spermatozoa over different time periods (0, 2, 6 and 24 h) of in vitro culture. Spermatozoa motility was assessed using the Computer-assisted sperm analysis (CASA) system. Cell viability was examined using the metabolic activity (MTT) assay, the eosin-nigrosin staining technique was used to evaluate sperm viability, and ROS generation was quantified using luminometry. The CASA analysis revealed that motility in the experimental groups supplemented with 0.5-2 µg/mL Salvia extract was significantly lower in comparison with the control (P<0.05; Time 24 h). At the same time, long-term exposure of spermatozoa to concentrations ranging between 0.05 µg/mL and 2 µg/mL had a negative impact on mitochondrial metabolism (P<0.05; Time 24 h). The viability staining revealed that 0.001-1 µg/mL Salvia extract had no effects on bovine male gametes; however, 2 µg/mL Salvia had a persistent negative effect on spermatozoa (P<0.05). Furthermore, 0.05-2 µg/mL Salvia exhibited an immediate ROS-promoting effect on the sperm culture (P>0.05; Time 0 h and 2 h), which remained significant throughout the entire in vitro culture (P<0.05; Time 24 h). Our results point to the necessity of examining the specific effects that the biomolecules present in Salvia officinalis may have, individually or collectively, on in vitro sperm vitality and oxidative profile.
Keywords: Bulls, CASA, MTT test, reactive oxygen species, sage, Salvia officinalis, spermatozoa.
Identification of Flexographic-printed Newspapers with NIR Spectral Imaging
Authors: Raimund Leitner, Susanne Rosskopf
Abstract:
Near-infrared (NIR) spectroscopy is a widely used method for material identification in laboratory and industrial applications. While standard spectrometers only allow measurements at one sampling point at a time, NIR Spectral Imaging techniques can measure, in real-time, both the size and shape of an object as well as identify the material the object is made of. The online classification and sorting of recovered paper with NIR Spectral Imaging (SI) is used with success in the paper recycling industry throughout Europe. Recently, the globalisation of recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear in central Europe as well. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. This de-inking process removes the ink from recovered paper and is the fundamental processing step in producing high-quality paper from recovered paper. Thus, flexo-printed newspapers are a growing problem for the recycling industry, as they reduce the quality of the produced paper if their amount exceeds a certain limit within the recovered paper material. This paper presents the results of a research project for the development of an automated entry inspection system for recovered paper that was jointly conducted by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper has been developed. The prototype can identify and sort out flexo-printed newspapers in real-time and achieves a detection accuracy for flexo-printed newspaper of over 95%. NIR SI, the technology the prototype is based on, will allow the development of inspection systems for incoming goods in paper production facilities as well as industrial sorting systems for recovered paper in the recycling industry in the near future.
Keywords: Spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.
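Per-pixel spectral classification of the kind the prototype performs can be illustrated as follows; the two synthetic absorption-band shapes stand in for offset- and flexo-printed paper spectra, and a linear SVM stands in for whatever classifier the real system uses.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
bands = np.linspace(1000, 2500, 60)            # wavelengths in nm, typical NIR range

def spectrum(center):
    # Synthetic absorption band centered at `center` nm.
    return np.exp(-((bands - center) / 120)**2)

# Two classes of pixel spectra (stand-in shapes), 200 noisy pixels each.
X = np.vstack([spectrum(1450) + rng.normal(0, 0.02, (200, 60)),
               spectrum(1930) + rng.normal(0, 0.02, (200, 60))])
y = np.repeat([0, 1], 200)                     # 0: offset-printed, 1: flexo-printed

clf = SVC(kernel="linear").fit(X[::2], y[::2])  # train on every other pixel
print("pixel classification accuracy:", clf.score(X[1::2], y[1::2]))
```

In the real-time system, every pixel of the imaging line delivers one such spectrum, so object size and shape fall out of the spatial pattern of class labels.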
Vitamin Content of Swordfish (Xiphias gladius) Affected by Salting and Frying
Authors: L. Piñeiro, N. Cobas, L. Gómez-Limia, S. Martínez, I. Franco
Abstract:
The swordfish (Xiphias gladius) is a large oceanic fish of high commercial value, widely distributed in the waters of the world's oceans. Swordfish are considered an important source of high-quality proteins, vitamins and essential fatty acids, although only half of the population follows the nutritionists' recommendation to consume fish at least twice a week. Swordfish is consumed worldwide because of its low fat content and high protein content. It is generally sold fresh or frozen, and as pieces or slices. The aim of this study was to evaluate the effect of salting and frying on the composition of the water-soluble vitamins (B2, B3, B9 and B12) and fat-soluble vitamins (A, D, and E) of swordfish. Three loins of swordfish from the Pacific Ocean were analyzed. All the fish weighed between 50 and 70 kg and were transported to the laboratory frozen (-18 ºC). Before processing, they were defrosted at 4 ºC. Each loin was sliced and salted in brine. After cleaning, the slices were divided into portions (10×2 cm) and fried in olive oil. The identification and quantification of vitamins were carried out by high-performance liquid chromatography (HPLC), using methanol and 0.010% trifluoroacetic acid as mobile phases at a flow rate of 0.7 mL min-1. A UV-Vis detector was used for the detection of the water-soluble vitamins and the fat-soluble vitamins A and D, and a fluorescence detector for the detection of vitamin E. During salting, the water- and fat-soluble vitamin contents remained constant, except for an evident decrease in vitamin B2; the diffusion of salt into the interior of the pieces and the loss of constitution water that occur during this stage would be related to this significant decrease. In general, after frying the water- and fat-soluble vitamins showed some thermolability, with retention percentages of 50–100%. Vitamin B3 exhibited the highest retention, with values close to 100%, whereas vitamin B9 presented the highest losses, with a retention percentage of less than 20%.
Keywords: Frying, HPLC, salting, swordfish, vitamins.
Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia
Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman
Abstract:
Through progress in pavement design, a method was developed that is titled the Mechanistic-Empirical Pavement Design Guide (MEPDG). The road and highway network in Saudi Arabia is currently evolving as a result of increasing traffic volume, and the MEPDG is now implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementing the MEPDG for local pavement design requires calibrating the distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for the calibration of the MEPDG in central Saudi Arabia. The first goal is therefore data collection for the design of flexible pavement under the local conditions of the Riyadh region. Since the collected data must be transformed into model inputs, the main goal of this paper is the analysis of the collected data. The data analysis covers each of the following: truck classification, traffic growth factor, Annual Average Daily Truck Traffic (AADTT), Monthly Adjustment Factors (MAFi), Vehicle Class Distribution (VCD), truck hourly distribution factors, Axle Load Distribution Factors (ALDF), the number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, providing an approach for the successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on these findings.
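Two of the listed inputs, AADTT and the monthly adjustment factors, are simple to compute once monthly truck counts are available; the counts below are hypothetical, and the MEPDG convention that the twelve MAFs sum to 12 is applied.

```python
import numpy as np

# Hypothetical monthly average daily truck counts for one count station.
monthly_adt = np.array([950, 980, 1010, 1100, 1150, 1200,
                        1250, 1230, 1120, 1050, 990, 960], dtype=float)

aadtt = monthly_adt.mean()                     # Annual Average Daily Truck Traffic
maf = monthly_adt / aadtt                      # MAF_i = MADTT_i / AADTT; sums to 12
print("AADTT:", round(aadtt))
print("MAF:", np.round(maf, 3), "| sum:", round(maf.sum(), 3))

# Vehicle Class Distribution from class counts (FHWA classes 4-13, hypothetical).
class_counts = np.array([120, 40, 260, 30, 90, 380, 150, 25, 10, 5], dtype=float)
vcd = 100 * class_counts / class_counts.sum()
print("VCD (%):", np.round(vcd, 1))
```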
Keywords: Mechanistic-empirical pavement design guide, traffic characteristics, materials properties, climate, Riyadh.
A Methodology for Creating Energy Sustainability in an Enterprise
Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala
Abstract:
As we enter the new era of Artificial Intelligence (AI) and cloud computing, we rely on the machine learning and natural language processing capabilities of AI and on energy-efficient hardware and software devices in almost every industry sector. In these sectors, much emphasis is placed on developing new and innovative methods for producing and conserving energy and on slowing the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, informally referred to as the 3 P's (People, Planet and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is a growing focus on and demand for renewable energy. With this growing demand comes a growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In this paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (the 3 P’s). Through this paper, we offer an overall perspective on enterprise strategies, with a primary focus on bringing about the cultural shifts needed to adopt energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing and application modernization), with the vision of a sustainable environment.
Keywords: AI, cloud computing, machine learning, social media platform.
Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kr. Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with specific focus on Infrared (IR) and Visible Image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on Multi-Scale Transform (MST) and a region-based selection rule with consistency verification have been proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observed several challenges with popular image fusion methods: their high computational cost and complex processing steps provide accurate fused results, but also make them hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capacity. The methods presented in this paper therefore offer good results with minimal time complexity.
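As a simplified stand-in for the MST fusion described above, the following two-scale sketch separates each registered source image into a base (low-frequency) and a detail (high-frequency) layer, averages the bases, and applies a max-absolute selection rule to the details; a full multi-scale pyramid and the consistency-verification step are omitted, and the random arrays stand in for registered visible/IR frames.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
vis = rng.random((64, 64))           # stand-in for the registered visible image
ir = rng.random((64, 64))            # stand-in for the registered IR image

def two_scale(img, sigma=2.0):
    base = gaussian_filter(img, sigma)   # coarse (low-frequency) layer
    return base, img - base              # plus the detail (high-frequency) layer

b_v, d_v = two_scale(vis)
b_i, d_i = two_scale(ir)

fused_base = 0.5 * (b_v + b_i)                                  # average the bases
fused_detail = np.where(np.abs(d_v) >= np.abs(d_i), d_v, d_i)   # max-abs rule
fused = fused_base + fused_detail
print("fused image range:", round(fused.min(), 3), "to", round(fused.max(), 3))
```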
Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.
Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites
Authors: S. D. El Wakil, M. Pladsen
Abstract:
Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-to-use components or products made of FRP composites are very seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling FRP composites. Each test specimen was a woven graphite-fiber-reinforced/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. This measure was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where the columns indicate varying feed rate and the rows indicate varying cutting speed. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that would minimize the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
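The damage metric and the ANOVA step can be illustrated directly; the nominal diameter and the measured values below are hypothetical, grouped by feed rate as in the paper's matrix.

```python
import numpy as np
from scipy.stats import f_oneway

nominal = 6.0                                   # nominal hole diameter, mm (assumed)
# Measured hole diameters grouped by feed rate (hypothetical values).
low_feed = np.array([6.02, 6.03, 6.01, 6.04])
mid_feed = np.array([6.06, 6.08, 6.05, 6.07])
high_feed = np.array([6.12, 6.10, 6.13, 6.11])

def damage_pct(d):
    # Damage = |measured - nominal| as a percentage of the nominal diameter.
    return np.abs(d - nominal) / nominal * 100

groups = [damage_pct(g) for g in (low_feed, mid_feed, high_feed)]
F, p = f_oneway(*groups)
print(f"one-way ANOVA over feed rate: F = {F:.1f}, p = {p:.5f}")
```

A small p-value here would indicate that feed rate significantly affects the damage, matching the paper's finding that low feed rates perform best.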
Keywords: Drilling of Composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites.
Experimenting the Influence of Input Modality on Involvement Load Hypothesis
Authors: Mohammad Hassanzadeh
Abstract:
As far as incidental vocabulary learning is concerned, the basic contention of the Involvement Load Hypothesis (ILH) is that retention of unfamiliar words is, generally, conditional upon the degree of involvement in processing them. This study examined input modality and incidental vocabulary uptake in a task-induced setting, whereby three variously loaded task types (marginal glosses, fill-in task, and sentence writing) were alternately assigned to one group of students at Allameh Tabataba’i University (n=21) during six classroom sessions. While one round of exposure comprised the audiovisual medium (TV talk shows), the second round consisted of textual materials with approximately similar subject matter (reading texts). In both conditions, however, the tasks were equivalent to one another. Taken together, the study pursued dual objectives: first, establishing a litmus test for the ILH and its proposed values of ‘need’, ‘search’ and ‘evaluation’; second, shedding light on the question of the superiority of exposure to audiovisual input versus written input as far as the incorporation of tasks is concerned. At the end of each treatment session, a vocabulary active-recall test was administered to measure the students' incidental gains. A one-way analysis of variance revealed that the audiovisual intervention yielded higher gains than the written version even when differing tasks were included. Meanwhile, task three (sentence writing) proved the most efficient in tapping learners' active recall of the target vocabulary items. In addition to shedding light on the superiority of audiovisual input over written input when circumstances are held relatively constant, this study, for the most part, supported the underlying tenets of the ILH.
Keywords: Evaluation, incidental vocabulary learning, input mode, involvement load hypothesis, need, search.
Water Management Scheme: Panacea to Development Using Nigeria’s University of Ibadan Water Supply Scheme as a Case Study
Authors: Sunday Olufemi Adesogan
Abstract:
The supply of potable water is a very important index of national development. Water tariffs depend on the treatment cost, which carries the highest percentage of the total operating cost of any water supply scheme. In order to keep water tariffs as low as possible, treatment costs have to be minimized. The University of Ibadan, Nigeria, water supply scheme consists of a treatment plant with three distribution stations (Amina way, Kurumi and Lander) and two raw water sources (Awba dam and Eleyele dam). An operational study of the scheme was carried out to ascertain the efficiency of the supply of potable water on the campus and to justify the need for water supply schemes in tertiary institutions. The study involved regular collection, processing and analysis of periodic operational data. Data collected include supply readings (daily water production) and consumer meter readings for a period of 22 months (October 2013 - July 2015), as well as the operating hours of both plants and personnel. Applying the required mathematical equations, the total loss was determined for the distribution system and translated into monetary terms. The adequacy of the operational functions was also determined. The study revealed that a water supply scheme is justified in tertiary institutions. It was also found that approximately 10.7 million Nigerian naira (N) was lost to leakages during the 22-month study period; that the system’s storage capacity is no longer adequate, especially for peak water production; that the capacity of the system as a whole is insufficient for the present university population; and that the existing water supply system is not being operated in an optimal manner, especially due to personnel, power and system ageing constraints.
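The loss computation the study performs amounts to a water balance between production and metered consumption, valued at the tariff; all figures below are hypothetical placeholders, not the scheme's actual readings.

```python
# Hypothetical monthly figures (m^3); the tariff per m^3 is also assumed.
production_m3 = [41000, 39500, 42800]    # plant supply readings
consumption_m3 = [35200, 34100, 36600]   # summed consumer meter readings
tariff_per_m3 = 120.0                    # naira per m^3 (assumed)

loss_m3 = [p - c for p, c in zip(production_m3, consumption_m3)]
loss_pct = [100 * l / p for l, p in zip(loss_m3, production_m3)]
monetary_loss = tariff_per_m3 * sum(loss_m3)

print("monthly losses (m^3):", loss_m3)
print("loss as % of production:", [round(x, 1) for x in loss_pct])
print(f"monetary value of losses: N{monetary_loss:,.0f}")
```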
Keywords: Operational, efficiency, production, supply, water treatment plant, water loss.
Construction Innovation: Support for 3D Printing House
Authors: Andrea Palazzo, Daniel Macek, Veronika Malinova
Abstract:
Contour crafting is the new technological challenge for architects and construction companies. The many advantages it promises make it one of the most interesting solutions for construction in terms of the automation of building processes. The technology for 3D printing houses offers many possible applications, from low-cost construction to visionary projects: it is being considered by NASA as a good solution for building settlements on other planets. Another very important point is that clients, like architects, will no longer face many limits in design concerning ideas and creativity. Real estate prices are constantly increasing, and the shortage of construction materials, together with the speculation that built up around it in 2021, is pushing prices to a level at which developers will find it difficult in the future to find customers for these ultra-expensive homes. Hence, this paper starts with an introduction to 3D printing, which now has the potential to gain an important position in the market, becoming a valid alternative to the classic construction process. This technology is not only beneficial from an economic point of view; it is also a great opportunity to reduce the environmental impact of construction by cutting CO2 emissions. Further on, the article also considers whether, after COP 26 (the 2021 United Nations Climate Change Conference), world governments might push towards building technologies that reduce the waste materials needing disposal while also reducing emissions, with the contribution of governmental funds. This paper gives insight into the multiple benefits of 3D printing and emphasizes the importance of finding new solutions for materials that can be used by the printer. Based on the type of material, it will then be possible to understand its compatibility with current regulations and how inclined the authorities will be to support this technology. This will help enable the rise and development of this technology in Europe and the rest of the world on actual housing projects, and not only on prototypes.
Keywords: Additive manufacturing, building development building regulation, contour crafting, printing material.
Disparity of Learning Styles and Cognitive Abilities in Vocational Education
Authors: Mimi Mohaffyza Mohamad, Yee Mei Heong, Nurfirdawati Muhammad Hanafi, Tee Tze Kiong
Abstract:
This study investigates the disparity between learning styles and cognitive abilities, specifically in Vocational Education. The Felder and Silverman Learning Styles Model (FSLSM) was applied to measure the students’ learning styles, while the content of the Building Construction subject (knowledge, skills and problem solving) was taken into account in constructing the elements of cognitive ability. Building Construction is one of the vocational courses offered in the Vocational Education structure. Felder and Silverman propose four dimensions of learning styles intended to capture student learning preferences: processing (active or reflective), perception (sensing or intuitive), input of information (visual or verbal) and understanding (sequential or global). The Felder-Solomon Learning Styles Index was developed based on the FSLSM, and its questions were used to identify each student's learning preferences. The index consists of 44 items characterizing the learning style dimensions in the FSLSM. An achievement test was developed to determine the students’ cognitive abilities. The quantitative data were analyzed with descriptive and inferential statistics, involving Multivariate Analysis of Variance (MANOVA). The study found that students tend to be visual learners, that there are significant differences between learner types, and that, for cognitive abilities, the findings in knowledge, skills and problem solving differ for each type of learner. The study illustrates the gap between learner types and cognitive abilities and explains how the connection is made. The findings may help teachers facilitate students more effectively and boost students' cognitive abilities.
Keywords: Learning Styles, Cognitive Abilities, Dimension of Learning Styles, Learning Preferences.
A Cumulative Learning Approach to Data Mining Employing Censored Production Rules (CPRs)
Authors: Rekha Kandwal, Kamal K.Bharadwaj
Abstract:
Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining activity is the generation of a large number of potential rules as a result of the mining process; in fact, the result size is sometimes comparable to the original data. Traditional data mining pruning activities, such as support, do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, thereby making knowledge voluminous with each change. The most predominant representation of the discovered knowledge is the standard Production Rules (PRs) of the form If P Then D. Michalski & Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibits variable precision and supports an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations where the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. When using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are tight, or when there is simply no information available as to whether they hold or not. Thus the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from already discovered flat PRs. The discovery of CPRs from flat rules would result in a considerable reduction of the discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
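A toy version of discovering the Unless part makes the idea concrete: given records where the flat rule If P Then D mostly holds, pick the attribute that best covers the violations as the censor C, with the rule's belief taken as the fraction of P-cases where D holds. The records and the single-attribute censor search are illustrative simplifications of the DST-based scheme.

```python
from collections import Counter

# Toy records: does P hold, does D hold, and which candidate censor attributes apply.
records = [
    {"P": True, "D": True, "attrs": set()},
    {"P": True, "D": True, "attrs": {"weekend"}},
    {"P": True, "D": False, "attrs": {"holiday"}},
    {"P": True, "D": True, "attrs": set()},
    {"P": True, "D": False, "attrs": {"holiday", "weekend"}},
    {"P": False, "D": False, "attrs": set()},
]

p_cases = [r for r in records if r["P"]]
violations = [r for r in p_cases if not r["D"]]

# Belief in the flat rule If P Then D: fraction of P-cases where D holds.
belief = 1 - len(violations) / len(p_cases)

# Censor = the attribute covering the most violations -> If P Then D Unless C.
censor = Counter(a for r in violations for a in r["attrs"]).most_common(1)[0][0]
print(f"If P Then D Unless {censor}   (belief of the uncensored rule: {belief:.2f})")
```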
Keywords: Censored production rules, cumulative learning, data mining, machine learning.
Preferred Character Size for Oblique Angles
Authors: Photjanat Phimnom, Haruetai Lohasiriwat
Abstract:
In today’s world, LED displays are used to present visual information under various circumstances, and such information is an important intermediary in human information processing. Researchers have investigated diverse factors that influence the effectiveness of this process. Letter size is undoubtedly one major factor, tested and recommended by many standards and guidelines. However, viewing information on a display from a directly perpendicular position is a typical assumption, whereas many actual situations require viewing from an angle. The current research aims to study the effect of oblique viewing angle and viewing distance on the ability to recognize alphabets, numbers, and English words. A total of ten participants volunteered for our 3 × 4 × 4 within-subject study. The independent variables were three distance levels (2, 6, and 12 m), four oblique angles (0, 45, 60, and 75 degrees), and four target types (alphabet, number, short word, and long word). Following the method of constant stimuli, our study suggests that larger oblique angles, ranging from 0 to 75 degrees from the line of sight, result in a significantly higher legibility threshold, i.e. a larger required font size (p-value < 0.05). The viewing distance factor also shows a significant effect on the threshold (p-value < 0.05); however, the effect of distance is expected to be confounded by the quality of the screen used in our experiment. Lastly, our results show that single alphabets as well as single numbers are recognized at a significantly lower threshold (smaller font size) than both short and long words (p-value < 0.05). Therefore, when designing information to be presented on an LED display, all possible oblique viewing angles should be taken into account in order to specify the preferred letter size. Additionally, the recommended letter sizes for 100% legibility under our tested conditions are provided in the paper.
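The method of constant stimuli yields, for each condition, a proportion-correct value per letter size, from which a threshold is read off a fitted psychometric function; the sizes and proportions below are hypothetical, and a logistic function is one common choice of fit.

```python
import numpy as np
from scipy.optimize import curve_fit

sizes = np.array([10, 14, 18, 22, 26, 30], dtype=float)     # letter heights, mm (assumed)
p_correct = np.array([0.12, 0.35, 0.58, 0.80, 0.93, 0.99])  # hypothetical responses

def psychometric(x, mu, s):
    # Logistic psychometric function: mu is the 50% point, s the slope scale.
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

(mu, s), _ = curve_fit(psychometric, sizes, p_correct, p0=[20.0, 3.0])
# The 75% legibility point solves 1/(1+exp(-(x-mu)/s)) = 0.75, i.e. x = mu + s*ln 3.
print(f"legibility threshold (75% correct): {mu + s * np.log(3):.1f} mm")
```

Repeating the fit per oblique angle and distance gives the threshold surface from which the size recommendations are drawn.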
Keywords: Letter Size, Oblique Angle, Viewing Distance, Legibility Threshold.
Named Entity Recognition using Support Vector Machine: A Language Independent Approach
Authors: Asif Ekbal, Sivaji Bandyopadhyay
Abstract:
Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of a NER system for Bengali and Hindi using Support Vector Machines (SVMs). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words, along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus. The lexical patterns have been used as features of the SVM in order to improve system performance. The NER system has been tested with gold-standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. The results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis (ANOVA) was also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.
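The core of such a system, word-window features fed to an SVM, can be sketched as follows; the tiny training sample, the feature template and the LinearSVC stand-in are illustrative only (the paper's system uses far richer features and large annotated corpora).

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Toy tagged tokens: token -> NE tag (PER / LOC / ORG / O).
train = [("Rabindranath", "PER"), ("Tagore", "PER"), ("visited", "O"),
         ("Kolkata", "LOC"), ("University", "ORG"), ("yesterday", "O"),
         ("Sachin", "PER"), ("played", "O"), ("in", "O"), ("Mumbai", "LOC")]

def features(tokens, i):
    # Contextual features: the word, its neighbours, casing and a suffix.
    w = tokens[i]
    return {"w": w.lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
            "is_title": w.istitle(),
            "suffix2": w[-2:].lower()}

tokens = [w for w, _ in train]
X = [features(tokens, i) for i in range(len(tokens))]
y = [tag for _, tag in train]

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X), y)

test = ["Nehru", "lived", "in", "Delhi"]
X_test = vec.transform([features(test, i) for i in range(len(test))])
print(list(zip(test, clf.predict(X_test))))
```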
Keywords: Named Entity (NE), Named Entity Recognition (NER), Support Vector Machine (SVM), Bengali, Hindi.
Biological Methods to Control Parasitic Weed Phelipanche ramosa L. Pomel in the Field Tomato Crop
Authors: F. Lops, G. Disciglio, A. Carlucci, G. Gatta, L. Frabboni, A. Tarantino, E. Tarantino
Abstract:
Phelipanche ramosa L. Pomel is a root holoparasitic weed of many crops, particularly the tomato (Lycopersicum esculentum L.). In Italy, the Phelipanche problem is increasing, both in density and in acreage. The biological control of this parasitic weed involves the use of living organisms, such as the numerous fungi and bacteria that can infect the parasitic weed while potentially improving crop growth. This paper deals with biocontrol using microorganisms, including arbuscular mycorrhizal (AM) fungi and fungal pathogens such as Fusarium oxysporum. Colonization of crop roots by AM fungi can protect crops against parasitic weeds by reducing their seed germination and attachment, while F. oxysporum, isolated from diseased broomrape tubercles, has proved to be highly virulent on P. ramosa. The experimental trial was carried out in the open field in Foggia province (Apulia region, southern Italy) during the spring-summer season of 2016, in order to evaluate the effect of four biological treatments: AM fungi and Fusarium oxysporum applied to the soil alone or combined, and the Rizosum Max® product, compared with an untreated control, in reducing P. ramosa infestation in a processing tomato crop. The principal result of this study under field conditions, in contrast to those reported previously under laboratory and greenhouse conditions, is that neither AM fungi nor F. oxysporum reduced the number of emerged shoots of P. ramosa. This probably arises from the low efficacy of the pathogen agents in controlling this parasite in the field. On the contrary, the Rizosum Max® product, containing AM fungi and some rhizosphere bacteria combined with several mineral and organic substances, appears to be the most effective for reducing P. ramosa infestation.
Keywords: Arbuscular mycorrhizal fungi, biocontrol methods, Phelipanche ramosa, Fusarium oxysporum.
Downloads 1066
61 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L.) Pomel in Tomato Crop
Authors: G. Disciglio, F. Lops, A. Carlucci, G. Gatta, A. Tarantino, E. Tarantino
Abstract:
Phelipanche ramosa is the most damaging obligate flowering parasitic weed of a wide range of cultivated plant species. The semi-arid regions of the world are considered the main centers of this parasitic plant, which causes heavy infestations. This is due to its production of high numbers of seeds (up to 200,000) that remain viable for extended periods (up to 20 years). In this study, 13 treatments for the control of Phelipanche were carried out, including agronomic, chemical, and biological treatments and the use of resistant plants. In 2014, a trial was performed at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy), on processing tomato (cv. ‘Docet’) grown in pots filled with soil taken from a field heavily infested by P. ramosa. The tomato seedlings were transplanted on May 8, 2014, into a sandy-clay soil (USDA classification). A randomized block design with three replicates (pots) was adopted. During the growing cycle of the tomato, at 70, 75, 81, and 88 days after transplantation, the number of P. ramosa shoots that had emerged in each pot was determined. The tomato fruits were harvested on August 8, 2014, and their quantitative and qualitative parameters were determined. All of the data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and means were compared with Tukey's test. The data show that no treatment provided complete control of P. ramosa. However, the virulence of the attacks was mitigated by some of the treatments tried: Radicon biostimulant, compost activated with Fusarium, nitrogen mineral fertilizer, sulfur, enzone, and the resistant tomato genotype. It is assumed that these effects can be improved by combining some of these treatments with each other, especially to achieve a gradual and continuing reduction of the parasite's seed bank in the soil.
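For readers who want to reproduce the statistical workflow (ANOVA followed by Tukey's test), a minimal Python/statsmodels sketch is given below; it stands in for the JMP analysis used by the authors, and the shoot counts are invented placeholders rather than the trial's data.

# Hedged sketch of the ANOVA + Tukey workflow described in the abstract,
# using Python/statsmodels instead of JMP. Data are invented placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Emerged P. ramosa shoots per pot: three replicate pots per treatment.
df = pd.DataFrame({
    "treatment": ["control"] * 3 + ["biostimulant"] * 3 + ["sulfur"] * 3,
    "shoots":    [14, 12, 15,      8, 9, 7,               10, 11, 9],
})

# One-way ANOVA (blocks omitted for brevity; a real analysis of a
# randomized block design would add "+ C(block)" to the formula).
model = ols("shoots ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD for pairwise comparisons of treatment means.
print(pairwise_tukeyhsd(df["shoots"], df["treatment"]))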
Keywords: Control methods, Phelipanche ramosa, tomato crop.
Downloads 3051
60 Producing and Mechanical Testing of Urea-Formaldehyde Resin Foams Reinforced by Waste Phosphogypsum
Authors: Krasimira Georgieva, Yordan Denev
Abstract:
Many thermosetting resins are used only in the filled state, reinforced with various mineral fillers. Co-filling polymers with a mineral filler and a gas makes it possible to produce polymer composite materials with low density. This processing leads to the formation of new materials: gas-filled plastics (polymer foams). The properties of these materials are determined mainly by the shape and size of the internal structural elements (pores); interactions at the phase boundaries also influence the material properties. In the present work, gas-filled urea-formaldehyde resins were reinforced with waste phosphogypsum. Waste phosphogypsum (CaSO4·2H2O) is a solid by-product of the wet phosphoric acid production process. Polymer-filler interactions were strengthened using two modifying agents: polyvinyl acetate for the polymer matrix and sodium metasilicate for the filler. Technological methods for gas-filling and recipes for urea-formaldehyde-based materials with apparent densities of 20-120 kg/m³ were developed. The heat conductivity of the samples is between 0.024 and 0.029 W/(m·K). Tensile analyses carried out at 10% and 50% deformation show values of 0.01-0.14 MPa and 0.01-0.09 MPa, respectively. The apparent density of the obtained materials is between 20 and 92 kg/m³. The changes in the tensile properties and density of these materials as a function of sodium metasilicate content were also studied. The mechanism of the adsorption modification of phosphogypsum was studied using FT-IR spectroscopy. The structure of the gas-filled urea-formaldehyde resins was described by scanning electron microscopy at three magnifications (x50, x150, and x500). The aim of the present work is to study the possibility of using phosphogypsum as a mineral filler for urea-formaldehyde resins and to develop a technology for the production of gas-filled reinforced polymer composite materials. The structure and properties of the obtained composite materials make them suitable for thermal and sound insulation applications.
Keywords: Gas-filled thermosets, mechanical properties, phosphogypsum, urea-formaldehyde resins.
Downloads 713
59 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis
Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan
Abstract:
Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated on social media sites every day, creating numerous opportunities to bring more insights to decision-makers. Integrating big data technology into the tourism industry allows companies to learn where their customers have been and what they like; this information can then be used by businesses such as visitor centres and hotels, while tourists can get a clear idea of places before visiting. On the technical side, natural language is processed by analysing the sentiment features of online tourist reviews, and we supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. For experimental validation, we constructed a web review database using a crawler and web scraping techniques. The sentences were first classified with VADER and a RoBERTa model to obtain the polarity of the reviews. We then studied feature extraction methods such as Count Vectorization and Term Frequency-Inverse Document Frequency (TF-IDF) Vectorization, and implemented a Convolutional Neural Network (CNN) classifier for sentiment analysis, deciding whether a tourist's attitude towards a destination is positive, negative, or neutral based on the review text posted online. After pre-processing and cleaning the dataset, the CNN classifier achieved an accuracy of 96.12% on positive and negative sentiment analysis.
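A minimal sketch of the pipeline stages the abstract names (VADER polarity labelling followed by TF-IDF vectorization) is given below. A logistic regression stands in for the paper's CNN classifier so the example stays short and self-contained, and the reviews are invented placeholders, not the crawled database.

# Hedged sketch: VADER polarity labelling + TF-IDF vectorization, with a
# logistic regression standing in for the paper's CNN. Reviews are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

reviews = [
    "The visitor centre was wonderful and the staff were friendly.",
    "Terrible hotel, dirty rooms and rude reception.",
    "Amazing views, would absolutely visit this place again.",
    "Overpriced tour, long queues and a disappointing guide.",
]

# Step 1: derive polarity labels from VADER's compound score.
analyzer = SentimentIntensityAnalyzer()
labels = ["positive" if analyzer.polarity_scores(r)["compound"] >= 0 else "negative"
          for r in reviews]

# Step 2: TF-IDF vectorization of the raw review text.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(reviews)

# Step 3: train a classifier on the TF-IDF features (the paper uses a CNN here).
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(vectorizer.transform(["Lovely beaches and great food"])))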
Keywords: Count Vectorization, Convolutional Neural Network (CNN), crawler, big data technology, Long Short-Term Memory (LSTM), web scraping, sentiment analysis.
Downloads 175
58 Application of Various Methods for Evaluation of Heavy Metal Pollution in Soils around Agarak Copper-Molybdenum Mine Complex, Armenia
Authors: K. A. Ghazaryan, H. S. Movsesyan, N. P. Ghazaryan
Abstract:
The present study aimed at assessing the heavy metal pollution of the soils around the Agarak copper-molybdenum mine complex and the related environmental risks. This mine complex is located in the south-east of Armenia, and the study was conducted in 2013. The soils of the five riskiest sites of this region were studied: the surroundings of the open mine, the sites adjacent to the processing plant of the Agarak copper-molybdenum mine complex, the surroundings of the Darazam active tailing dump, the recultivated tailing dump of "ravine-2", and the recultivated tailing dump of "ravine-3". Mountain cambisol was the main soil type at the study sites. The level of soil contamination by heavy metals was assessed using the contamination factor (Cf), the degree of contamination (Cd), the geoaccumulation index (I-geo), and the enrichment factor (EF). The distribution pattern of trace metals in the soil profile according to the Cf, Cd, I-geo, and EF values shows that the soil is heavily polluted. At almost all studied sites, Cu, Mo, Pb, and Cd were the main polluting heavy metals, a result conditioned by the activity of the Agarak copper-molybdenum mine complex. The pollution problem is pressing because parts of this highly polluted region are inhabited and agriculture is highly developed there; heavy metals can therefore be transferred into human bodies through food chains and directly affect public health. Since the induced pollution can pose serious threats to public health, further investigations of soil and vegetation pollution are recommended. Finally, calculating Cf as a function of distance from the pollution source and wind direction can provide more reasonable results.
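The four indices have standard definitions in the soil-pollution literature (Hakanson's Cf and Cd, Muller's I-geo, and the reference-element-normalized EF); the sketch below computes them on invented concentrations, with Fe assumed as the reference element for EF. The study's measured values and background levels are not reproduced here.

# Hedged sketch of the four pollution indices named in the abstract, using
# their standard literature definitions. All concentrations are invented.
import math

def contamination_factor(c_sample, c_background):
    """Cf = measured concentration / geochemical background concentration."""
    return c_sample / c_background

def degree_of_contamination(cf_values):
    """Cd = sum of the contamination factors of all considered metals."""
    return sum(cf_values)

def geoaccumulation_index(c_sample, c_background):
    """I-geo = log2(Cn / (1.5 * Bn)); 1.5 corrects for background variation."""
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """EF = (Cn/Cref)_sample / (Bn/Bref)_background, normalized to a
    reference element (Fe assumed here)."""
    return (c_sample / ref_sample) / (c_background / ref_background)

# Illustrative soil sample and background values (mg/kg); Fe is the reference.
sample = {"Cu": 450.0, "Mo": 35.0, "Pb": 120.0, "Cd": 1.8, "Fe": 38000.0}
background = {"Cu": 45.0, "Mo": 2.5, "Pb": 20.0, "Cd": 0.3, "Fe": 47200.0}

cfs = {m: contamination_factor(sample[m], background[m]) for m in ("Cu", "Mo", "Pb", "Cd")}
print("Cf:", cfs)
print("Cd (degree):", degree_of_contamination(cfs.values()))
print("I-geo(Cu):", geoaccumulation_index(sample["Cu"], background["Cu"]))
print("EF(Cu):", enrichment_factor(sample["Cu"], sample["Fe"],
                                   background["Cu"], background["Fe"]))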
Keywords: Agarak copper-molybdenum mine complex, heavy metals, soil contamination, enrichment factor, Armenia.
Downloads 1249