Search results for: Image Resolution.
Potential of Detailed Environmental Data Produced by Information and Communication Technology Tools for Better Consideration of Microclimatology Issues in Urban Planning to Promote Active Mobility
Authors: Živa Ravnikar, Alfonso Bahillo Martinez, Barbara Goličnik Marušić
Abstract:
Climate change mitigation commitments have been formally adopted and announced by countries around the globe, and cities are targeting carbon neutrality through various more or less successful, systematic, and fragmentary actions. The article is based on the fact that environmental conditions affect human comfort and the usage of space. Urban planning, with its sustainable solutions, can not only support climate mitigation in terms of reducing global warming at the planetary scale but also enable natural processes that, in the immediate vicinity, produce environmental conditions that encourage people to walk or cycle. The article draws attention to the importance of integrating climate considerations into urban planning, where detailed environmental data play a key role, enabling urban planners to improve or monitor environmental conditions on cycle paths. On the practical side, this paper tests a particular ICT tool, a prototype for gathering environmental data. Data gathering was performed along the cycling lanes of Ljubljana (Slovenia), where the main objective was to assess the applicable value of the tool's data within the planning of comfortable cycling lanes. The results suggest that such transportable devices for in-situ measurements can help a researcher interpret detailed environmental information, characterized by fine granularity and precise spatial and temporal resolution. The data can be interpreted within human comfort zones and represented graphically as a map, linking the environmental conditions to their spatial context. The paper also provides preliminary results on the potential of such tools for identifying correlations between environmental conditions and different spatial settings, which can help urban planners prioritize interventions. The paper contributes to multidisciplinary approaches as it demonstrates the usefulness of such fine-grained data for better consideration of microclimatology in urban planning, which is a prerequisite for creating climate-comfortable cycling lanes that promote active mobility.
Keywords: Information and communication technology tools, urban planning, human comfort, microclimate, cycling lanes.
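A minimal sketch of the kind of comfort-zone mapping the abstract describes: georeferenced sensor readings are classified into coarse comfort bands and plotted as a map. The thresholds, coordinates, and readings below are hypothetical placeholders, not the prototype's calibration.

```python
# Illustrative sketch only: mapping georeferenced microclimate readings into
# assumed human comfort bands. Thresholds are hypothetical, not the paper's.
import matplotlib.pyplot as plt

def comfort_band(temp_c, humidity_pct):
    """Classify one reading into a coarse comfort band (assumed thresholds)."""
    if 18 <= temp_c <= 26 and 30 <= humidity_pct <= 60:
        return "comfortable"
    if temp_c > 26 or humidity_pct > 60:
        return "too warm/humid"
    return "too cool/dry"

# (lon, lat, temperature [C], relative humidity [%]) samples along a cycling lane
readings = [
    (14.5058, 46.0569, 24.1, 48),
    (14.5071, 46.0574, 27.9, 63),
    (14.5083, 46.0581, 16.5, 41),
]

colors = {"comfortable": "green", "too warm/humid": "red", "too cool/dry": "blue"}
for lon, lat, t, h in readings:
    plt.scatter(lon, lat, c=colors[comfort_band(t, h)])
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Comfort bands along a cycling lane (illustrative)")
plt.show()
```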
Implementing a Visual Servoing System for Robot Controlling
Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari
Abstract:
Nowadays, with emerging applications such as robot control based on image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling the robot. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing aimed at controlling robots. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification, we tested different classifiers, including k-nearest neighbor, learning vector quantization, and back-propagation neural networks. We tested the proposed algorithm with the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm with noisy images, where it showed a correct recognition rate of 97.92 percent.
Keywords: Back propagation neural network, Feature vector, Hand gesture recognition, k-Nearest Neighbor, Learning vector quantization neural network, Robot control, Spatio-temporal volume, Visual servoing
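A minimal sketch of the classification stage named in the abstract: a k-nearest-neighbor classifier over gesture feature vectors. The feature vectors here are random placeholders standing in for the features extracted from spatio-temporal volumes; the dimensions and class count are assumptions.

```python
# Sketch: k-NN classification of gesture feature vectors (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 32))    # one 32-dim feature vector per gesture volume
y = rng.integers(0, 6, size=240)  # six hypothetical gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print(f"recognition rate: {knn.score(X_te, y_te):.2%}")
```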
Web-Content Analysis of the Major Spanish Tourist Destinations Evaluation by Russian Tourists
Authors: Natalia Polkanova, Sergey Kazakov
Abstract:
In the second decade of the XXI century, the role of tourism destination attractiveness is becoming increasingly important for destination management. Competition in the tourism market has moved from ordinary service quality to the provision of unforgettable emotional experiences for tourists. The main purpose of the present study is to identify the perception of tourism destinations based on a number of factors related to their tourist attractiveness. The content analysis method was used to analyze the online tourist feedback data abundantly available in social media and on travel-related sites. The collected data made it possible to procure the information necessary to understand the perceived attractiveness of the destinations and the key destination appeal factors that are important for Russian leisure travelers. The results of the present study demonstrate the key attractiveness factors, or destination 'properties', that emerged as the most important for Russian leisure tourists. The study targeted five main Spanish tourism destinations that were initially determined through in-depth interviews with a number of Russian nationals who had visited Spain at least once. The research results can be useful for the Spanish Tourism Organization representation office in Russia, as well as for other national tourism organizations, in promoting their respective destinations to Russian travelers by focusing on the main attractiveness factors identified in this study.
Keywords: Tourism destination, destination attractiveness, destination competitiveness, content analysis, unstructured image.
Quality Properties of Fermented Mugworts and Rapid Pattern Analysis of Their Volatile Flavor Components by Electric Nose Based On SAW (Surface Acoustic Wave) Sensor in GC System
Authors: Hyo-Nam Song
Abstract:
The changes in quality properties and nutritional components of two fermented mugworts (Artemisia capillaris Thunberg, Artemisiae asiaticae Nakai) were characterized, followed by rapid pattern analysis of their volatile flavor compounds by an Electric Nose based on a SAW (Surface Acoustic Wave) sensor in a GC system. There were remarkable decreases in pH and small changes in total soluble solids after fermentation. The L (lightness) and b (yellowness) values in Hunter's color system decreased, whilst the a (redness) value increased with fermentation. HPLC analysis demonstrated that total amino acids increased in quantity and that essential amino acids were more abundant in A. asiaticae Nakai than in A. capillaris Thunberg. While the total polyphenol contents were not affected by fermentation, the total sugar contents decreased dramatically. Scopoletin was highly abundant in A. capillaris Thunberg; however, it was not detected in A. asiaticae Nakai. Analysis of volatile flavor compounds by the Electric Nose showed that the intensities of several peaks increased greatly and that seven additional flavor peaks were newly produced after fermentation. The flavor differences of the two mugworts were clearly distinguished in the image patterns of VaporPrintTM, indicating that fermentation gives the two mugworts subtle flavor differences.
Keywords: Mugwort, Fermentation, Electric Nose, SAW sensor, Flavor.
Flow Visualization and Characterization of an Artery Model with Stenosis
Authors: Anis S. Shuib, Peter R. Hoskins, William J. Easson
Abstract:
Cardiovascular diseases, principally atherosclerosis, are responsible for 30% of world deaths. Atherosclerosis is due to the formation of plaque. The fatty plaque may be at risk of rupture, leading typically to stroke and heart attack. The plaque is usually associated with a high degree of lumen reduction, called a stenosis. It is increasingly recognized that the initiation and progression of the disease and the occurrence of clinical events are a complex interplay between the local biomechanical environment and the local vascular biology. The aim of this study is to investigate the flow behavior through a stenosed artery. A physical experiment was performed using an artery model and a blood analogue fluid. The axisymmetric model constructed consists of contraction and expansion regions that follow a mathematical form of the cosine function. A 30% diameter reduction was used in this study. The flow field was measured using particle image velocimetry (PIV). Spherical particles of 20 μm diameter were seeded in a water-glycerol-NaCl mixture. The steady-flow Reynolds number was 250. The area of interest is the region after the stenosis where flow separation occurs. The velocity field was measured and the velocity gradient was investigated. There was a high particle concentration in the recirculation zone. The high velocity gradient formed immediately after the stenosis throat created a lift force that enhanced particle migration to the flow separation area.
Keywords: Stenosis artery, Biofluid mechanics, PIV
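A back-of-the-envelope check of the stated flow condition, Re = 250, with a 30% diameter reduction. The fluid properties and tube diameter below are assumed values for illustration; the abstract does not state them.

```python
# Sketch: Reynolds-number bookkeeping for the stenosis model (assumed values).
rho = 1100.0   # density of water-glycerol-NaCl mixture [kg/m^3] (assumed)
mu = 3.5e-3    # dynamic viscosity [Pa.s] (assumed)
D = 8e-3       # unobstructed tube diameter [m] (assumed)

Re_target = 250
v = Re_target * mu / (rho * D)   # mean inlet velocity giving Re = 250
print(f"mean inlet velocity for Re=250: {v*1000:.1f} mm/s")

# With a 30% diameter reduction, continuity (Q = v*A) raises throat velocity:
v_throat = v / (1 - 0.30) ** 2
print(f"throat velocity: {v_throat*1000:.1f} mm/s")
```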
An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of natural origin, while secondary porosity arises from chemical reactivity through diagenetic processes. As an integral part of hydrocarbon exploration, it is necessary to understand the carbonate pore system. However, current porosity classification schemes are of limited use in predicting the petrophysical properties of different reservoirs with various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity-permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin), mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point-counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. Quantitative observation of the thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type present, whereas microporosity accounts for 40 to 50% of the total porosity. It has been shown that these Miocene carbonates contain a significant amount of microporosity, which considerably complicates the estimation and production of hydrocarbons; neglecting its impact can increase the uncertainty in estimating hydrocarbon reserves. Due to the diversity of geological parameters, the application of existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, classification can be improved by including pore types and pore structures, which can be divided into macro- and microporosity. Such studies of microporosity identification/classification now represent a major concern in limestone reservoirs around the world.
Keywords: Carbonate reservoirs, microporosity, overview of porosity classification, reservoir characterization.
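A minimal sketch of the point-counting bookkeeping the abstract describes: macroporosity is estimated from thin-section point counts and subtracted from the measured total porosity to give microporosity. All numbers below are illustrative, not the study's data.

```python
# Sketch: macroporosity by point counting, microporosity by subtraction.
def microporosity(total_porosity_pct, pore_hits, total_points):
    """Macroporosity [%] from point counts; microporosity by subtraction."""
    macro = 100.0 * pore_hits / total_points
    return macro, total_porosity_pct - macro

total_phi = 22.0  # total (e.g., core-plug) porosity [%] (assumed)
macro, micro = microporosity(total_phi, pore_hits=54, total_points=500)
print(f"macroporosity: {macro:.1f}%  microporosity: {micro:.1f}%")
print(f"microporosity share of total: {100 * micro / total_phi:.0f}%")
```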
Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis
Authors: Deng Zengming, Wang Mingjiang
Abstract:
As 3D video has been explored as a hot research topic over the last few decades, free-viewpoint TV (FTV) is undoubtedly a promising field for its better visual experience and incomparable interactivity. View synthesis is obviously a crucial technology for FTV; it enables images to be rendered at unlimited numbers of virtual viewpoints using information from a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of image sequences into account. The temporal correlations are exploited to produce fine synthesis results even near foreground boundaries. As for the blending priority, the scheme selects one of the two reference views as the main reference view based on the distance between the reference views and the virtual view; the other view is chosen as the auxiliary viewpoint, which merely assists in filling hole pixels with the help of background information. A significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the experimental results show that subjective gains can be observed, while objective PSNR gains average from 0.5 to 1.3 dB and SSIM gains average from 0.01 to 0.05.
Keywords: View synthesis, Gaussian mixture model, hybrid framework, fusion method.
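A minimal sketch of the blending-priority rule described in the abstract: the reference view closer to the virtual viewpoint becomes the main view, and the other only fills holes left after warping. Camera positions are reduced to 1-D scalars and the warped images are tiny placeholder arrays; both are assumptions for illustration.

```python
# Sketch: main/auxiliary reference selection by distance, hole-filling blend.
import numpy as np

def blend(virtual_pos, ref_a, ref_b):
    """Pick main/auxiliary reference by distance to the virtual viewpoint."""
    (pos_a, img_a), (pos_b, img_b) = ref_a, ref_b
    if abs(virtual_pos - pos_a) <= abs(virtual_pos - pos_b):
        main, aux = img_a, img_b
    else:
        main, aux = img_b, img_a
    holes = np.isnan(main)       # unfilled pixels after warping
    out = main.copy()
    out[holes] = aux[holes]      # auxiliary view only fills holes
    return out

img_a = np.array([[1.0, np.nan], [3.0, 4.0]])  # warped view A with a hole
img_b = np.array([[1.1, 2.2], [2.9, 4.1]])     # warped view B
print(blend(virtual_pos=0.3, ref_a=(0.0, img_a), ref_b=(1.0, img_b)))
```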
Flow Regime Characterization in a Diseased Artery Model
Authors: Anis S. Shuib, Peter R. Hoskins, William J. Easson
Abstract:
Cardiovascular disease, mostly in the form of atherosclerosis, is responsible for 30% of all world deaths, amounting to 17 million people per year. Atherosclerosis is due to the formation of plaque. The fatty plaque may be at risk of rupture, leading typically to stroke and heart attack. The plaque is usually associated with a high degree of lumen reduction, called a stenosis. The initiation and progression of the disease are strongly linked to the hemodynamic environment near the vessel wall. The aim of this study is to validate the flow of a blood mimic through an arterial stenosis model against a computational fluid dynamics (CFD) package. In the experiment, the axisymmetric model constructed consists of contraction and expansion regions that follow a mathematical form of the cosine function. A 30% diameter reduction was used in this study. Particle image velocimetry (PIV) was used to characterize the flow. The fluid consists of rigid spherical particles suspended in a water-glycerol-NaCl mixture. Particles with 20 μm diameter were selected to follow the flow of the fluid. Flows at Re = 155, 270 and 390 were investigated. The experimental results are compared with FLUENT simulations using a viscous laminar flow model. The results suggest that the laminar flow model was sufficient to predict the flow velocity at the inlet, but the velocity at the stenosis throat at Re = 390 was overestimated. Hence, a transition to the turbulent regime might have developed in the throat region as the flow rate increased.
Keywords: Atherosclerosis, Particle-laden flow, Particle image velocimetry, Stenosis artery
Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes
Authors: Alan Luo, Hunter N. B. Moseley
Abstract:
Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, whether derived from X-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics like R-factor, R-free, and resolution for X-ray crystallography, or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structurally based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality were detected across X-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general potential factors that could promote these outliers. In particular, these potential factors were the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic B-factor. To evaluate these potential factors, Fisher's exact tests were performed, using regional quality criteria of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier compared to a potential factor metric above versus below a certain outlier cutoff. The results revealed a consistent general effect from region-specific normalized B-factors, but no specific effect from metal ion contact distances and only a very weak effect from crystal contact distances compared to the B-factor results. These findings indicate that no single specific potential factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Also, the downstream re-use of protein structures for studying ligand-bound conformations should screen the regional quality of the binding sites. Doing so prevents misinterpretation due to the presence of structural uncertainty or flaws in regions of interest.
Keywords: Biomacromolecular structure, coenzyme, electron density discrepancy analysis, X-ray crystallography.
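A minimal sketch of the statistical test named in the abstract: Fisher's exact test on a 2x2 table of regional-quality outlier status versus a potential factor above/below a cutoff (e.g., normalized B-factor). The counts below are hypothetical, not the study's data.

```python
# Sketch: Fisher's exact test on outlier status vs. factor cutoff (fake counts).
from scipy.stats import fisher_exact

#                factor above cutoff   factor below cutoff
table = [[34, 16],     # outlier regions (e.g., top 5%)
         [412, 538]]   # non-outlier regions

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```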
Using Field Indices of Rill and Gully in order to Erosion Estimating and Sediment Analysis (Case Study: Menderjan Watershed in Isfahan Province, Iran)
Authors: Masoud Nasri, Sadat Feiznia, Mohammad Jafari, Hasan Ahmadi
Abstract:
Today, incorrect land use and land use change, excessive grazing, unsuitable management of agricultural farms, plowing on steep slopes, road construction, building construction, mine excavation, etc., have increased soil erosion and sediment yield. For erosion and sediment estimation, one can use statistical and empirical methods, which require a land unit map and maps of the effective factors. However, these empirical methods are usually time consuming and do not give accurate estimations of erosion. In this study, we applied GIS techniques to estimate erosion and sediment in the Menderjan watershed, upstream of the Zayandehrud River in central Iran. Erosion faces in each land unit were defined on the basis of land use, geology and the land unit map using GIS. The UTM coordinates of the erosion indicators showing larger erosion amounts, such as rills and gullies, were inserted into the GIS using GPS data. The frequency of erosion indicators in each land unit and land use, and the sediment yields of these indices, were calculated. Trend analysis of sediment yield changes at the watershed outlet (Menderjan hydrometric gauge station) was also used to calculate the related parameters and estimation errors. The results of this study, in line with the implemented watershed management projects, can be used for more rapid and more accurate estimation of erosion than traditional methods. These results can also be used for regional erosion assessment and for remote sensing image processing.
Keywords: Erosion and sedimentation, Gully, Rill, GIS, GPS, Menderjan Watershed
Unsupervised Feature Learning by Pre-Route Simulation of Auto-Encoder Behavior Model
Authors: Youngjae Jin, Daeshik Kim
Abstract:
This paper describes cycle-accurate simulation results for the weight values learned by an auto-encoder behavior model in terms of pre-route simulation. Given these results, we visualized the first-layer representations with natural images. Many common deep learning threads have focused on learning high-level abstractions of unlabeled raw data by unsupervised feature learning. However, in the process of handling such a huge amount of data, the computational complexity and run time of these learning methods have limited advanced research. These limitations stem from the fact that the algorithms were computed using only single-core CPUs. For this reason, parallel hardware such as FPGAs was seen as a possible solution to overcome these limitations. We adopted and simulated a ready-made auto-encoder to design a behavior model in Verilog HDL before designing the hardware. With pre-route simulation of the auto-encoder behavior model, we obtained cycle-accurate results for the parameters of each hidden layer using ModelSim. Cycle-accurate results are a very important factor in designing parallel digital hardware. Finally, this paper demonstrates the appropriate operation of behavior-model-based pre-route simulation. Moreover, we visualized the learned latent representations of the first hidden layer with the Kyoto natural image dataset.
Keywords: Auto-encoder, Behavior model simulation, Digital hardware design, Pre-route simulation, Unsupervised feature learning.
Urban Renewal from the Perspective of Industrial Heritage Protection: Taking the Qiaokou District of Wuhan as an Example
Abstract:
Most of the earliest national industries in Wuhan are located along the Hanjiang River, and Qiaokou is considered a gathering place of the Dahankou old industrial base. Zongguan Waterworks, the Pacific Soap Factory, Fuxin Flour Factory, Nanyang Tobacco Factory and other hundred-year-old factories are located along the Hanjiang River in Qiaokou District, especially in the Gutian Industrial Zone, which was listed as one of the 156 national restoration projects at the founding of the People's Republic of China. After decades of development, Qiaokou became a gathering place for the chemical industry and secondary industry, causing damage to the city and serious pollution and becoming a marginalized area forgotten by the central city. In recent years, with the accelerating pace of urban renewal, Qiaokou has been constantly reforming and innovating, and has begun drastic changes in the transformation of old districts and the development of new ones. These factories have been listed as key reconstruction projects, and a large amount of industrial heritage with historical value and rich urban memory has been relocated, demolished or reformed, with only a few factory buildings preserved. Through the methods of industrial archaeology, image analysis, typology and field investigation, this paper analyzes and summarizes the spatial characteristics of the industrial heritage in Qiaokou District, explores urban renewal from the perspective of industrial heritage protection, and provides design strategies for the regeneration of urban industrial sites and industrial heritage.
Keywords: Industrial heritage, urban renewal, protection, urban memory.
Satellite Data Classification Accuracy Assessment Based from Reference Dataset
Authors: Mohd Hasmadi Ismail, Kamaruzaman Jusoff
Abstract:
In order to develop forest management strategies for tropical forests in Malaysia, surveying forest resources and monitoring forest areas affected by logging activities is essential. Tremendous effort has gone into the classification of land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technologies such as GIS. In fact, classification is a compulsory step in any remote sensing research. Therefore, the main objective of this paper is to assess the classification accuracy of a classified forest map derived from Landsat TM data using different numbers of reference data points (200 and 388). This comparison was made through observation (200 reference points), and through combined interpretation and observation (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land and agricultural crop/mixed horticulture, can be identified by differences in spectral wavelength. Results showed that the overall accuracy from 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. However, when the 200 reference points were increased to 388 in the confusion matrix, the accuracy improved from 83.5% to 89.17%, with the kappa statistic increasing from 0.7502459 to 0.8026135, respectively. The accuracy of this classification suggests that the strategy for the selection of training areas, the interpretation approaches and the number of reference data points used are important for achieving better classification results.
Keywords: Image Classification, Reference Data, Accuracy Assessment, Kappa Statistic, Forest Land Cover
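A minimal sketch of the accuracy assessment the abstract reports: overall accuracy and the kappa statistic computed from a confusion matrix. The matrix below is a made-up example using the paper's five land cover classes, not the study's actual counts.

```python
# Sketch: overall accuracy and Cohen's kappa from a confusion matrix.
import numpy as np

# rows = reference class, columns = mapped class (illustrative counts)
cm = np.array([[52, 3, 0, 1, 2],    # primary forest
               [4, 45, 0, 2, 3],    # logged-over forest
               [0, 0, 18, 1, 0],    # water bodies
               [1, 2, 0, 24, 2],    # bare land
               [2, 3, 0, 2, 33]])   # agricultural crop / mixed horticulture

n = cm.sum()
p_o = np.trace(cm) / n                                # overall accuracy
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
kappa = (p_o - p_e) / (1 - p_e)
print(f"overall accuracy = {p_o:.1%}, kappa = {kappa:.4f}")
```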
A Vehicular Visual Tracking System Incorporating Global Positioning System
Authors: Hsien-Chou Liao, Yu-Shiang Wang
Abstract:
Surveillance systems are widely used in traffic monitoring. The deployment of cameras is moving toward a ubiquitous camera (UbiCam) environment. In our previous study, a novel service, called GPS-VT, was first proposed by incorporating global positioning system (GPS) and visual tracking techniques for the UbiCam environment. The first prototype was called GODTA (GPS-based Moving Object Detection and Tracking Approach). A moving person carrying a GPS-enabled mobile device can be tracked when he enters the field-of-view (FOV) of a camera, according to his real-time GPS coordinates. In this paper, the GPS-VT service is applied to the tracking of vehicles. The moving speed of a vehicle is much faster than that of a person, which means that the time spent passing through the FOV is much shorter. Besides, the GPS coordinate is updated only once per second, asynchronously with the frame rate of the real-time image, and this asynchrony is worsened by network transmission delay. These factors are the main challenges in fulfilling the GPS-VT service for a vehicle. In order to overcome their influence, a back-propagation neural network (BPNN) is used to predict the possible lane before the vehicle enters the FOV of a camera. Then, a template matching technique is used for the visual tracking of the target vehicle. The experimental results show that the target vehicle can be located and tracked successfully. The successful location rate of the implemented prototype is higher than that of the previous GODTA.
Keywords: visual surveillance, visual tracking, global positioning system, intelligent transportation system
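A minimal sketch of the template-matching stage named in the abstract, using normalized cross-correlation in OpenCV. The frame and template here are synthetic placeholders, not frames from the actual surveillance feed.

```python
# Sketch: locate a target by normalized cross-correlation template matching.
import cv2
import numpy as np

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in frame
template = frame[200:240, 300:360].copy()  # pretend this is the target vehicle

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print(f"best match at {max_loc} (score {max_val:.2f})")
```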
Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation
Authors: Lae-Jeong Park
Abstract:
The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false-positive reduction in head-shoulder detection. Conventional detectors that rely on local features such as HOG for real-time operation suffer from false positives. The color cue in an input image provides salient information on a global characteristic, which is necessary to alleviate the false positives of local-feature-based detectors. An effective approach using figure-ground color segmentation has previously been presented in an effort to reduce false positives in object detection. In this paper, an extended version of that approach is presented which adopts separate multi-part foregrounds instead of a single prior foreground and performs figure-ground color segmentation with each of the foregrounds. The multi-part foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds that are optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of resulting segmentations. Experimental results show that the presented method can discriminate more false positives than the single prior shape-based classifier, as well as detectors based on local features. The improvement is possible because the presented approach can reject false positives that have the same colors in the head and shoulder foregrounds.
Keywords: Pedestrian detection, color segmentation, false positives, feature extraction.
Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools
Abstract:
Internet use, intelligent communication tools, and social media have all become an integral part of our daily lives as a result of rapid developments in information technology. However, this widespread use increases the number of crimes committed in the digital environment. Therefore, digital forensics, which deals with the various crimes committed in the digital environment, has become an important research topic. It is within the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting them to the investigator (e.g., text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. In addition, because the process depends on the examiner's experience, the overall result may vary between cases, and relevant evidence may be overlooked. In this study, a hash-based matching and digital evidence evaluation method is proposed that aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
Keywords: Block matching, digital evidence, hash list.
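A minimal sketch of the hash-based block matching idea the abstract describes: split an evidence image file into fixed-size blocks, hash each block, and flag blocks whose digests appear in a list of known crime-related hashes. The file path, block size, and hash list are hypothetical.

```python
# Sketch: per-block hashing of an evidence image against a known-bad hash list.
import hashlib

BLOCK_SIZE = 4096  # assumed block size

def block_hashes(path):
    """Yield (byte offset, SHA-256 hex digest) for each block of the file."""
    with open(path, "rb") as f:
        offset = 0
        while block := f.read(BLOCK_SIZE):
            yield offset, hashlib.sha256(block).hexdigest()
            offset += len(block)

# Hypothetical list of hashes of known crime-related blocks
known_bad = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

for offset, digest in block_hashes("evidence.dd"):  # hypothetical image file
    if digest in known_bad:
        print(f"match at byte offset {offset}")
```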
Detailed Sensitive Detection of Impurities in Waste Engine Oils Using Laser Induced Breakdown Spectroscopy, Rotating Disk Electrode Optical Emission Spectroscopy and Surface Plasmon Resonance
Authors: Cherry Dhiman, Ayushi Paliwal, Mohd. Shahid Khan, M. N. Reddy, Vinay Gupta, Monika Tomar
Abstract:
Laser-based high-resolution spectroscopic experimental techniques, namely Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES) and Surface Plasmon Resonance (SPR), have been used for the compositional and degradation analysis of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous for human health and can cause respiratory and genetic disorders. The combustible soot from thermal power plants, industry, aircraft, ships and vehicles can lead to environmental and climate destabilization; it contributes to global pollution of land, water and air, and to global warming. The detection of such toxicants in the form of elemental analysis is a very serious issue for the management of waste materials containing various organic and inorganic hydrocarbons and radioactive waste elements. In view of these important points, the current study on used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring the water content and performing a kinematic viscosity test, providing a crude analysis of the degradation of the used engine oil samples. A microscopic quantitative and qualitative analysis was provided by the RDE-OES technique, which confirmed the presence of elemental impurities in the form of Pb, Al, Cu, Si, Fe, Cr, Na and Ba lines in the used engine oil samples at a few ppm. The presence of these elemental impurities was confirmed by LIBS spectral analysis at various atomic transition lines. The recorded Pb transition lines confirm that the maximum degradation was found in used engine oil samples no. 3 and 4. Apart from the basic tests, calculations of the dielectric constants and refractive indices of the engine oils were performed via SPR analysis.
Keywords: Laser induced breakdown spectroscopy, rotating disk electrode optical emission spectroscopy, surface plasmon resonance, ICCD spectrometer, Nd:YAG laser, engine oil.
RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX through Fusion of Vision and 3+1D Millimeter Wave Radar
Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma
Abstract:
Unmanned Surface Vehicles (USVs) hold significant value for their capacity to undertake hazardous and labor-intensive operations over aquatic environments. Object detection tasks are significant in these applications. Nonetheless, the efficacy of USVs in object detection is impeded by several intrinsic challenges, including the intricate dispersal of obstacles, reflections emanating from coastal structures, and the presence of fog over water surfaces, among others. To address these problems, this paper provides a fusion method for USVs to effectively detect objects in the inland surface environment, utilizing vision sensors and 3+1D millimeter-wave radar. The MMW radar is a complementary tool to vision sensors, offering reliable environmental data. The approach involves the conversion of the radar's 3D point cloud into a 2D radar pseudo-image, thereby standardizing the format for radar and vision data by leveraging a point transformer. Furthermore, this paper proposes a multi-source object detection network, named RV-YOLOX, which leverages radar-vision integration specifically tailored for inland waterway environments. The performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects under poor lighting conditions.
Keywords: Inland waterways, object detection, YOLO, sensor fusion, self-attention, deep learning.
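A minimal sketch of the radar preprocessing step the abstract describes: rasterizing a 3D radar point cloud into a 2D bird's-eye-view pseudo-image. Here each cell simply holds a point count; the grid extents and cell size are assumed values, and intensity or velocity channels could be added the same way.

```python
# Sketch: 3D radar point cloud -> 2D bird's-eye-view pseudo-image (count grid).
import numpy as np

def radar_pseudo_image(points, x_range=(0, 50), y_range=(-25, 25), cell=0.5):
    """points: (N, 3) array of (x, y, z) in meters. Returns a 2D count grid."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    img = np.zeros((h, w), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / cell).astype(int)
    yi = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (0 <= xi) & (xi < w) & (0 <= yi) & (yi < h)
    np.add.at(img, (yi[valid], xi[valid]), 1.0)  # accumulate points per cell
    return img

cloud = np.random.uniform([0, -25, -1], [50, 25, 3], size=(500, 3))
print(radar_pseudo_image(cloud).shape)  # (100, 100) pseudo-image
```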
Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level
Authors: M. A. Spielmann, L. Schebek
Abstract:
In order to reach the long-term national climate goals of the German government for the building sector, substantial energetic measures have to be executed. Historically, those measures were primarily energy-efficiency measures at the buildings' shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. Therefore, the present approach uses the spatially larger dimension of a quarter. The main focus of the present paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP power plants and electrical heat pumps) at the quarter level for unrefurbished German residential buildings. Three distinct terms have to be described methodologically: i) the quarter approach, ii) the economic assessment, iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. The present study uses generic quarters that are differentiated according to significant parameters concerning their heat demand, the core differentiation being the construction period of the buildings. The economic assessment, as the second crucial element, is structured as follows: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled with the acquisition of debt; annuity loans are assumed. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment comprises a life cycle assessment for each technology combination and quarter, with GWP 100 as the measured impact category. The technology combinations for heat production can therefore be compared against each other concerning their long-term climatic impacts. The core results of the approach can be differentiated into an economic and an ecological dimension. With annual resolution, the investment and running costs of different energetic technology combinations are quantified. For each quarter, an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
Keywords: Building sector, heat, LCA, quarter level, systemic approach.
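A minimal sketch of the annuity-loan modeling the abstract mentions: the constant annual payment that repays a technology's investment cost over its lifetime. The interest rate, lifetime, and cost below are placeholder values, not the study's parameters.

```python
# Sketch: constant annual annuity payment for a debt-financed investment.
def annuity_payment(principal, rate, years):
    """Annual payment: A = P * r / (1 - (1 + r)**-n)."""
    return principal * rate / (1 - (1 + rate) ** -years)

invest = 250_000.0  # investment cost of a CHP plant for a quarter [EUR] (assumed)
print(f"annual payment: {annuity_payment(invest, rate=0.04, years=20):,.0f} EUR")
```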
Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique
Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong
Abstract:
Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucomatous and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucomatous eyes were included. Participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. From our findings, the mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
Keywords: Glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection.
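A minimal sketch of the CRR measurement on a synthetic 1-D intensity profile taken across a vessel: the vessel appears dark with a bright central light reflex, and both widths are estimated from gradient edges. The profile values and edge threshold are illustrative, not the paper's calibrated edge detector.

```python
# Sketch: CRR = central reflex width / vessel diameter from a 1-D profile.
import numpy as np

# synthetic cross-vessel profile: background, dark wall, bright reflex, dark wall
profile = np.array([200]*10 + [80]*6 + [170]*4 + [80]*6 + [200]*10, float)

grad = np.diff(profile)
edges = np.where(np.abs(grad) > 40)[0]   # strong intensity transitions
vessel_width = edges[-1] - edges[0]      # outer edges of the vessel
reflex_width = edges[2] - edges[1]       # inner edges of the bright reflex
print(f"CRR = {reflex_width / vessel_width:.2f}")
```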
The Effects of TiO2 Nanoparticles on Tumor Cell Colonies: Fractal Dimension and Morphological Properties
Authors: T. Sungkaworn, W. Triampo, P. Nalakarn, D. Triampo, I. M. Tang, Y. Lenbury, P. Picha
Abstract:
Semiconductor nanomaterials like TiO2 nanoparticles (TiO2-NPs), less than approximately 100 nm in diameter, have become a new generation of advanced materials due to their novel and interesting optical, dielectric, and photo-catalytic properties. Despite the increasing use of NPs in commerce, to date few studies have investigated the toxicological and environmental effects of NPs. Motivated by the importance of TiO2-NPs, which may contribute to the cancer research field especially from the treatment perspective, together with the fractal analysis technique, we have investigated the effect of TiO2-NPs on colony morphology under dark conditions, using fractal dimension as a key morphological characterization parameter. The aim of this work is to investigate the cytotoxic effects of TiO2-NPs in the dark on the growth of human cervical carcinoma (HeLa) cell colonies from the morphological aspect. The in vitro studies were carried out together with image processing techniques and fractal analysis. It was found that these colonies were abnormal in shape and size. Moreover, the control colonies appeared to be larger than those of the treated group. The mean Df +/- SEM of the colonies in untreated cultures was 1.085±0.019 (N = 25), while that of cultures treated with TiO2-NPs was 1.287±0.045. The circularity of the control group (0.401±0.071) was higher than that of the treated group (0.103±0.042). The same tendency was found in the diameter parameters, which were 1161.30±219.56 μm and 852.28±206.50 μm for the control and treated groups, respectively. Possible explanations of the results are discussed, though more work needs to be done on the mechanistic aspects. Finally, our results indicate that fractal dimension can serve as a useful feature, by itself or in conjunction with other shape features, in the classification of cancer colonies.
Keywords: Tumor growth, Cell colonies, TiO2, Nanoparticles, Fractal, Morphology, Aggregation.
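A minimal sketch of one common way to estimate a fractal dimension Df: box counting on a binary colony mask, with Df taken as the slope of log(count) versus log(1/box size). The mask below is random noise purely for illustration; in the study it would be the segmented colony image.

```python
# Sketch: box-counting fractal dimension of a binary mask (placeholder data).
import numpy as np

def box_count_dimension(mask):
    """Estimate Df as the slope of log(count) vs log(1/box size)."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())  # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.random.rand(256, 256) > 0.995  # stand-in for a segmented colony
print(f"estimated Df: {box_count_dimension(mask):.3f}")
```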
A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes
Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono
Abstract:
Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is widely used for LV segmentation, but it suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes that occur in disease. In this work, a two-step framework is improved to achieve fast and efficient LV segmentation. First, a robust and efficient detector based on Hough forests localizes cardiac feature points. These feature points are used to predict the initial fit of the LV shape model. Second, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. With the robust initialization, ASM is able to achieve more accurate segmentation. The performance of the proposed method is evaluated on a dataset of 810 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared with several combinations of ASM and existing initialization methods. Our experimental results demonstrate that the accuracy of the proposed method in feature point detection for initialization was 40% higher than that of existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops and thus speeds up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes caused by cardiac diseases, such as left atrial enlargement.
Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle.
Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach
Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi
Abstract:
Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to conventional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds processed with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPSs). The results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, a differential correction and validation method was applied to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) against the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow easy predictions within time-cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique for the eco-geomorphic context, such insights into ecosystem dynamics at this type of coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for sustainable conservation management. This framework for evaluating the historical and current anthropogenic and environmental stressors on coastal surface elevation dynamism could be profitably applied worldwide.
Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes.
Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose using a Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as the dependence structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GAN and of neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn from historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: Conditional Generative Adversarial Net, market and credit risk management, neural network, time series.
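A minimal sketch of the Historical Simulation baseline the abstract compares CGAN against: VaR and ES computed as empirical quantiles of a return series. The returns below are simulated heavy-tailed placeholders, not market data.

```python
# Sketch: Historical Simulation VaR and Expected Shortfall (placeholder data).
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=1000) * 0.01  # heavy-tailed daily returns

alpha = 0.99
var = -np.quantile(returns, 1 - alpha)        # 99% Value-at-Risk (loss quantile)
es = -returns[returns <= -var].mean()         # Expected Shortfall beyond VaR
print(f"VaR(99%) = {var:.4f}, ES(99%) = {es:.4f}")
```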
Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature. It can be found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance against MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
Comparative Study Using Weka for Red Blood Cells Classification
Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source collection of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best classification rates achieved are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.
Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and approximately 55% had squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed for the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used for automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k = 3) and 69% with SVM (one-vs-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-vs-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for the automatic classification of tumor stage and subtype.
Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.
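A minimal sketch of one of the texture feature families named in the abstract: GLCM statistics computed with scikit-image. The input is a random quantized placeholder, not an FDG-PET slice, and the distances/angles are assumptions.

```python
# Sketch: GLCM texture features (contrast, homogeneity, energy, correlation).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.randint(0, 64, (64, 64), dtype=np.uint8)  # quantized ROI stand-in

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```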
Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is the cellular concentration of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches for automatically quantifying cellular heterogeneity in biofilms from a large library of epifluorescence microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods by allowing the subtraction of spurious signals and non-biological fluorescent substrata. The result is a robust and user-friendly approach that enables users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescence images for quantitative analysis of biofilm heterogeneity.
Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.
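A minimal sketch of a thresholding step consistent with the approach the abstract describes: Otsu's method separates signal from background in a (placeholder) fluorescence image, followed by small-object cleanup to suppress spurious specks. The image and the min_size parameter are assumptions, not the authors' pipeline settings.

```python
# Sketch: Otsu thresholding plus small-object removal on a stand-in image.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

img = np.random.gamma(2.0, 20.0, (256, 256))    # stand-in fluorescence image

mask = img > threshold_otsu(img)                 # global Otsu threshold
mask = remove_small_objects(mask, min_size=50)   # drop spurious specks
print(f"biofilm pixel fraction: {mask.mean():.2%}")
```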
Gate Tunnel Current Calculation for NMOSFET Based on Deep Sub-Micron Effects
Authors: Ashwani K. Rana, Narottam Chand, Vinod Kapoor
Abstract:
Aggressive scaling of MOS devices requires the use of ultra-thin gate oxides to maintain reasonable short-channel behavior and to take advantage of higher density, higher speed, lower cost, etc. Such thin oxides give rise to high electric fields, resulting in considerable gate tunneling current through the gate oxide in the nano regime. Consequently, accurate analysis of the gate tunneling current is very important, especially in the context of low-power applications. In this paper, a simple and efficient analytical model has been developed for the channel and source/drain overlap region gate tunneling current through the ultra-thin gate oxide of an n-channel MOSFET, including the inevitable deep sub-micron effects (DSME). The results obtained have been verified against simulated and reported experimental results for the purpose of validation. It is shown that the calculated tunnel current fits the measured one well over the entire oxide thickness range. The proposed model is simple enough to be used in circuit simulators. It is observed that neglecting the deep sub-micron effects may lead to large errors in the calculated gate tunneling current. It is found that temperature has an almost negligible effect on the gate tunneling current, and that the gate tunneling current decreases with increasing gate oxide thickness. The impact of the source/drain overlap length on the gate tunneling current is also assessed.
Keywords: Gate tunneling current, analytical model, gate dielectrics, non-uniform poly gate doping, MOSFET, fringing field effect and image charges.
Time Series Forecasting Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the University of California, Irvine (UCI) repository, which includes a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
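A minimal sketch of the look-back windowing the abstract describes: an hourly multivariate series is turned into (window, horizon) training pairs. The array shapes and the choice of feature 0 as the target are assumptions; the Beijing dataset would supply the actual features.

```python
# Sketch: build (look-back window, forecast horizon) pairs from a series.
import numpy as np

def make_windows(series, look_back, horizon):
    """series: (T, F) array. Returns X: (N, look_back, F), y: (N, horizon)."""
    X, y = [], []
    for t in range(len(series) - look_back - horizon + 1):
        X.append(series[t:t + look_back])
        y.append(series[t + look_back:t + look_back + horizon, 0])  # target = feature 0
    return np.array(X), np.array(y)

data = np.random.rand(1000, 8)  # 1000 hours, 8 features (placeholder)
X, y = make_windows(data, look_back=24, horizon=3)  # 1-day window, 3-hour horizon
print(X.shape, y.shape)  # (974, 24, 8) (974, 3)
```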