Search results for: combining classifiers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 497

77 Ammonia Adsorption Properties of Composite Ammonia Carriers Obtained by Supporting Metal Chloride on Porous Materials

Authors: Cheng Shen, LaiHong Shen

Abstract:

Ammonia is an important carrier of hydrogen energy, with a high hydrogen content density and no carbon dioxide emissions. Safe and efficient ammonia capture for ammonia synthesis from biomass is an important way to alleviate the energy crisis. Metal chlorides chemically adsorb ammonia and, after combining with it, can be desorbed at high temperature to release high-concentration ammonia, giving them good development prospects in ammonia capture and separation technology. In this paper, the ammonia adsorption properties of CuCl2 were measured, composite adsorbents were prepared by supporting CuCl2 on silicon and on multi-walled carbon nanotubes, respectively, and the ammonia adsorption properties of the composite adsorbents were studied. The study found that the ammonia adsorption capacity of all three adsorbents decreased with increasing temperature, so metal chlorides are more suitable for the low-temperature adsorption of ammonia. Silicon and multi-walled carbon nanotubes enhance the ammonia adsorption of CuCl2. The reason is that the porous material itself physically adsorbs ammonia, and silicon acts as a skeletal support among the CuCl2 particles, improving the pore structure of the adsorbent and thereby alleviating sintering.

Keywords: Ammonia, adsorption properties, metal chloride, MWCNTs, silicon.

76 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: J. Madureira, R. Lagido, I. Sousa

Abstract:

Surfing is an increasingly popular sport whose performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave and its duration. The first approach computes the velocity from the Global Positioning System (GPS) signal and finds the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the smartphone's Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video recorded from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times and durations. This paper shows that it is feasible to use smartphones to quantify performance metrics during surfing; in particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
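
A minimal sketch of the first, GPS-only approach; the thresholds and the flat-earth distance approximation below are placeholders of ours, not the values tuned in the paper:

```python
import numpy as np

def detect_rides(lat, lon, t, start_v=2.5, end_v=1.0):
    """Flag wave rides where GPS speed rises above start_v (m/s) and end
    them when it falls below end_v; returns (start_time, duration) pairs."""
    R = 6371000.0                                   # Earth radius, metres
    phi, lam = np.radians(lat), np.radians(lon)
    # Equirectangular approximation: fine over the few metres between fixes.
    d = R * np.hypot(np.diff(phi), np.cos(phi[:-1]) * np.diff(lam))
    v = d / np.diff(t)                              # speed between fixes
    rides, riding, t0 = [], False, 0.0
    for i, vi in enumerate(v):
        if not riding and vi > start_v:             # ride starts
            riding, t0 = True, t[i]
        elif riding and vi < end_v:                 # ride ends
            riding = False
            rides.append((t0, t[i] - t0))
    return rides
```

The IMU-augmented variant would additionally require sustained accelerometer activity before confirming the starts and ends found by these thresholds.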

Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.

75 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least the source-to-detector distance, in order to obtain reliable efficiency measurements. In this study a commercially available 25 mm x 25 mm BrilLanCeTM 380 LaBr3:Ce(5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections become necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
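
For a point source on the detector axis, the drop in geometric efficiency with distance follows directly from the subtended solid angle; a standard textbook expression (not quoted from the paper) is:

```latex
\varepsilon_{\mathrm{geo}} = \frac{\Omega}{4\pi}
  = \frac{1}{2}\left(1 - \frac{d}{\sqrt{d^{2}+r^{2}}}\right)
```

where d is the source-to-detector distance and r the crystal radius (12.5 mm for the 25 mm crystal used here). With these numbers, moving from d = 5 cm to d = 10 cm cuts the geometric efficiency roughly fourfold, consistent with the observed decrease.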

Keywords: BrilLanCeTM 380 LaBr3:Ce(5%), Coincidence summing, GATE simulation, Geometric efficiency.

74 Locating Cultural Centers in Shiraz (Iran) Applying Geographic Information System (GIS)

Authors: R. Mokhtari Malekabadi, S. Ghaed Rahmati, S. Aram

Abstract:

Optimal cultural site selection is one way to promote citizenship culture while also ensuring the health and leisure of city residents. This study examines the social and cultural needs of the community and optimal cultural site allocation and, after identifying the problems and shortcomings, provides a suitable model for finding the best locations for these centers, where they have the greatest impact on the promotion of citizenship culture. Non-scientific siting methods can cause irreversible harm to the urban environment and citizens, whereas modern, efficient methods can reduce these impacts; one such method is the use of geographical information systems (GIS). In this study, the Analytical Hierarchy Process (AHP) was used to locate the optimal cultural sites. AHP rests on three principles: decomposition, comparative analysis, and combining preferences. The objectives of this research include providing Shiraz residents with suitable places for leisure and cultural activities, and proposing the construction of cultural sites in different areas of the city. The results of this study show the correct positioning of cultural sites based on the social needs of citizens. Thus, considering population parameters and access radii, a combined GIS and AHP model for locating cultural centers can meet the social needs of citizens.
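
A small sketch of the AHP priority computation; the pairwise judgments below are invented for illustration, not taken from the study:

```python
import numpy as np

# Toy pairwise-comparison matrix for three siting criteria
# (population density, access radius, land availability).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Priority weights = principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w /= w.sum()

# Consistency ratio (RI = 0.58 for a 3x3 matrix) guards against
# contradictory judgments; CR < 0.1 is conventionally acceptable.
ci = (eigvals.real[k] - 3) / (3 - 1)
print(np.round(w, 3), "CR =", round(ci / 0.58, 3))
```

The resulting weights would then multiply the corresponding GIS criterion layers to produce the final suitability map.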

Keywords: Analytical Hierarchy Process (AHP), geographical information systems (GIS), Cultural site, locating, Shiraz.

73 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks

Authors: Yong Zhao, Jian He, Cheng Zhang

Abstract:

Cardiovascular disease resulting from hypertension poses a significant threat to human health, and early detection of hypertension can potentially save numerous lives. Traditional methods for detecting hypertension require specialized equipment and are often incapable of capturing continuous blood pressure fluctuations. To address this issue, this study starts by analyzing the principle of heart rate variability (HRV) and applies sliding-window and power spectral density (PSD) techniques to analyze both the temporal and frequency domain features of HRV. Subsequently, a hypertension prediction network that relies on HRV is proposed, combining ResNet, attention mechanisms, and a multi-layer perceptron. The network leverages a modified ResNet18 to extract frequency domain features, while employing an attention mechanism to integrate temporal domain features, thus enabling auxiliary hypertension prediction through the multi-layer perceptron. The proposed network is trained and tested on the publicly available SHAREE dataset from PhysioNet. The results demonstrate that the network achieves a high prediction accuracy of 92.06% for hypertension, surpassing traditional models such as K-Nearest Neighbor (KNN), Bayes, logistic regression, and a traditional Convolutional Neural Network (CNN).
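
A sketch of the frequency-domain feature extraction; the 4 Hz resampling rate and the LF/HF band edges are standard HRV choices, not necessarily the paper's settings:

```python
import numpy as np
from scipy.signal import welch

def hrv_band_powers(rr_ms, fs=4.0):
    """Compute LF and HF powers of an RR-interval series (milliseconds)."""
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)       # uniform time grid
    tachogram = np.interp(grid, t, rr_ms)         # evenly resampled RR series
    f, psd = welch(tachogram - tachogram.mean(), fs=fs, nperseg=256)
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return lf, hf, lf / hf                        # frequency-domain features
```

Sliding the window along the recording yields the temporal sequence of features that the attention branch of the network can then integrate.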

Keywords: Feature extraction, heart rate variability, hypertension, residual networks.

72 Spatiotemporal Analysis of Visual Evoked Responses Using Dense EEG

Authors: Rima Hleiss, Elie Bitar, Mahmoud Hassan, Mohamad Khalil

Abstract:

A comprehensive study of object recognition in the human brain requires combining both spatial and temporal analysis of brain activity. Here, we are mainly interested in three issues: the time course of the perception of visual objects, the ability to discriminate between two particular categories (objects vs. animals), and the possibility of identifying a particular spatial representation of visual objects. Our experiment consisted of acquiring dense electroencephalographic (EEG) signals during a picture-naming task comprising a set of object and animal images. These EEG responses were recorded from nine participants. In order to determine the time course of perception of the presented visual stimuli, we analyzed the Event Related Potentials (ERPs) derived from the recorded EEG signals. The analysis of these signals showed that the brain perceives animals and objects at different time instants. Concerning the discrimination of the two categories, a support vector machine (SVM) was applied to the instantaneous EEG (excellent temporal resolution: on the order of milliseconds) to categorize the visual stimuli into two classes. The spatial differences between the evoked responses of the two categories were also investigated. The results showed a variation of the neural activity with the properties of the visual input, as well as the existence of a spatial pattern of electrodes over particular regions of the scalp corresponding to their responses to the visual inputs.
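
A minimal sketch of the ERP computation underlying the temporal analysis (array shapes and window lengths are illustrative):

```python
import numpy as np

def average_erp(eeg, onsets, fs, tmin=-0.1, tmax=0.5):
    """Average stimulus-locked epochs: eeg is (n_channels, n_samples),
    onsets are stimulus times in seconds, fs the sampling rate in Hz."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = [eeg[:, s - pre:s + post]
              for s in (int(t * fs) for t in onsets)
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.mean(epochs, axis=0)      # ERP: (n_channels, pre + post)

# Computing one ERP per category (objects vs. animals) and comparing their
# peak latencies exposes the different perception instants; the same epochs,
# taken sample by sample, feed the SVM for category discrimination.
```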

Keywords: Brain activity, dense EEG, evoked responses, spatiotemporal analysis, SVM, perception.

71 On Combining Support Vector Machines and Fuzzy K-Means in Vision-based Precision Agriculture

Authors: A. Tellaeche, X. P. Burgos-Artizzu, G. Pajares, A. Ribeiro

Abstract:

One important objective in Precision Agriculture is to minimize the volume of herbicides applied to fields through the use of site-specific weed management systems. In order to reach this goal, two major factors need to be considered: 1) the similar spectral signature, shape and texture of weeds and crops; 2) the irregular distribution of weeds within the crop field. This paper outlines an automatic computer vision system for the detection and differential spraying of Avena sterilis, a noxious weed growing in cereal crops. The proposed system involves two processes: image segmentation and decision making. Image segmentation combines basic suitable image processing techniques in order to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and the weeds. From these attributes, a hybrid decision-making approach determines whether or not a cell must be sprayed. The hybrid approach uses the Support Vector Machines and Fuzzy k-Means methods, combined through fuzzy aggregation theory; this constitutes the main contribution of this paper. The method's performance is compared against other available strategies.
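
A sketch of the classifier fusion idea on toy data; the exact fuzzy aggregation operator used in the paper is not reproduced here, so a plain arithmetic mean stands in, and the membership formula is a simplified fuzzy k-means weighting:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((200, 2))                     # two area-based cell attributes
y = (X.sum(axis=1) > 1.0).astype(int)        # toy spray/no-spray labels

p_svm = SVC(probability=True).fit(X, y).predict_proba(X)[:, 1]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
d2 = ((X[:, None, :] - km.cluster_centers_[None]) ** 2).sum(axis=2) + 1e-9
u = (1 / d2) / (1 / d2).sum(axis=1, keepdims=True)   # fuzzy memberships
weed = km.cluster_centers_.sum(axis=1).argmax()      # cluster taken as 'weed'

spray_score = 0.5 * (p_svm + u[:, weed])     # fuzzy aggregation (mean here)
spray = spray_score > 0.5                    # final per-cell decision
```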

Keywords: Fuzzy k-Means, Precision agriculture, Support Vector Machines, Weed detection.

70 Expectation-Confirmation Model of Information System Continuance: A Meta-Analysis

Authors: Hui-Min Lai, Chin-Pin Chen, Yung-Fu Chang

Abstract:

The expectation-confirmation model (ECM) is one of the most widely used models for evaluating information system continuance, and it has been extended to other study contexts or expanded with other theoretical perspectives. However, combining the ECM with other theories or applying it to different contexts may produce disparate results, generating inaccurate conclusions. Habit is considered to be an important factor that influences the user's continuance behavior. This paper therefore critically examines seven pairs of relationships from the original ECM together with the habit variable. A meta-analysis was used to trace the development of ECM research over the last 10 years, covering journal and conference papers published in 2005-2014. Forty-six journal articles and 19 conference papers were selected for analysis. The results confirm our prediction that high effect sizes were obtained for the seven pairs of relationships (ranging from r = 0.386 to r = 0.588). Furthermore, meta-analytic structural equation modeling was performed to test all relationships simultaneously. The results show that habit had a significant positive effect on continuance intention at p ≤ 0.05 and that the six other pairs of relationships were significant at p < 0.10. Based on the findings, we refined our original research model and proposed an alternative model for understanding and predicting information system continuance. Some theoretical implications are also discussed.
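
For readers unfamiliar with pooling correlations, a small illustration of the Fisher-z aggregation that underlies such effect sizes (the sample sizes below are invented; only the two boundary correlations come from the reported range):

```python
import numpy as np

r = np.array([0.386, 0.47, 0.588])    # per-study correlations
n = np.array([180, 240, 95])          # per-study sample sizes (invented)

z = np.arctanh(r)                     # Fisher z-transform
w = n - 3                             # inverse-variance weights
z_bar = (w * z).sum() / w.sum()       # fixed-effect pooled z
se = 1 / np.sqrt(w.sum())
r_bar = np.tanh(z_bar)                # back-transformed pooled correlation
ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
print(round(r_bar, 3), np.round(ci, 3))
```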

Keywords: Expectation-confirmation theory, expectation-confirmation model, meta-analysis, meta-analytic structural equation modeling.

69 Design Criteria for Achieving Acceptable Indoor Radon Concentration

Authors: T. Valdbjørn Rasmussen

Abstract:

Design criteria for achieving an acceptable indoor radon concentration are presented in this paper. The paper suggests three design criteria, which have to be considered at the early stage of the building design phase to meet the latest recommendations from the World Health Organization in most countries. The three design criteria are: first, establishing a radon barrier facing the ground; second, lowering the air pressure in the lower zone of the slab on ground facing downwards; third, diluting the indoor air with outdoor air. The first two criteria prevent radon from infiltrating from the ground, and the third dilutes the indoor air. By combining these three criteria, the indoor radon concentration can be lowered to an acceptable level. In addition, a cheap and reliable method for measuring the radon concentration in indoor air is described. The provision on radon in the Danish Building Regulations complies with the latest recommendations from the World Health Organization. Radon can cause lung cancer, and it is not known whether there is a threshold below which it is harmless to human beings. Therefore, it is important to reduce the radon concentration as much as possible in buildings. Airtightness is an important factor when dealing with buildings: it is important to avoid air leakages in the building envelope both toward the atmosphere, e.g. in compliance with energy requirements, and toward the ground, to meet the requirements to ensure and control the indoor environment. Infiltration of air from the ground underneath a building is the main source of radon in indoor air.

Keywords: Radon, natural radiation, barrier, pressure lowering, ventilation.

68 A Combined Conventional and Differential Evolution Method for Model Order Reduction

Authors: J. S. Yadav, N. P. Patidar, J. Singhai, S. Panda, C. Ardil

Abstract:

In this paper a mixed method, combining an evolutionary and a conventional technique, is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the advantages of the Mihailov stability criterion and the Continued Fraction Expansions (CFE) technique are combined: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique, the recently proposed Differential Evolution (DE) optimization method. DE is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model pertaining to a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM in which both numerator and denominator polynomials are obtained by the conventional method, to show its superiority.
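
A sketch of the DE step on a made-up example: the original model below is a toy fourth-order system, and instead of fixing the numerator from the Cauer CFE (as the paper does) we simply constrain the reduced model to unit DC gain:

```python
import numpy as np
from scipy.signal import step
from scipy.optimize import differential_evolution

num_o, den_o = [24.0], [1.0, 10.0, 35.0, 50.0, 24.0]   # toy original model
t = np.linspace(0.0, 10.0, 500)
_, y_o = step((num_o, den_o), T=t)

def ise(x):
    """Integral squared error between unit-step responses."""
    a1, a0 = x
    _, y_r = step(([a0], [1.0, a1, a0]), T=t)          # 2nd-order candidate
    return np.trapz((y_o - y_r) ** 2, t)

res = differential_evolution(ise, bounds=[(0.1, 20.0), (0.1, 20.0)], seed=1)
print(res.x, res.fun)    # optimised reduced denominator [a1, a0] and its ISE
```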

Keywords: Reduced Order Modeling, Stability, Mihailov Stability Criterion, Continued Fraction Expansions, Differential Evolution, Integral Squared Error.

67 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. The automation of profit generation in the stock market is possible using DRL, by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. This work presents a DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, achieving a Sharpe ratio of 2.68 on the test dataset. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of DRL in financial markets over other types of machine learning and shows its credibility and advantages for strategic decision-making.
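
A minimal sketch of how such a POMDP trading environment could look; this is our own simplified environment, not the authors' code, with reward equal to the position's log-return net of a transaction cost:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class TradingEnv(gym.Env):
    """Toy trading POMDP: the agent sets a position in [-1, 1] each step."""
    def __init__(self, prices, cost=0.001):
        self.prices, self.cost = np.asarray(prices, float), cost
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,),
                                            dtype=np.float32)

    def _obs(self):
        ret = np.log(self.prices[self.t] / self.prices[self.t - 1])
        return np.array([ret, self.pos], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.pos = 1, 0.0
        return self._obs(), {}

    def step(self, action):
        new_pos = float(np.clip(action[0], -1.0, 1.0))
        fee = self.cost * abs(new_pos - self.pos)   # transaction-cost constraint
        self.t += 1
        ret = np.log(self.prices[self.t] / self.prices[self.t - 1])
        self.pos = new_pos
        reward = new_pos * ret - fee
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done, False, {}
```

An off-the-shelf TD3 implementation, e.g. Stable-Baselines3's `TD3("MlpPolicy", env)`, can then be trained on such an environment.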

Keywords: Autonomous agent, deep reinforcement learning, MDP, sentiment analysis, stock market, technical indicators, twin delayed deep deterministic policy gradient.

66 Object Recognition on Horse Riding Simulator System

Authors: Kyekyung Kim, Sangseung Kang, Suyoung Chi, Jaehong Kim

Abstract:

In recent years, IT convergence technology has been developed to obtain creative solutions by combining robotics with sports science technology. Object detection and recognition have mainly been applied in the sports science field by recognizing faces and tracking the human body. However, object detection and recognition using a vision sensor is a challenging task in the real world because of illumination. In this paper, object detection and recognition using a vision sensor, applied to a sports simulator, are introduced. Face recognition is used to identify the user and to automatically update a person's athletic record. The human body is tracked to offer the most accurate way of riding the horse simulator. Combined image processing is applied to reduce the adverse effect of illumination, which causes low detection and recognition performance in real-world applications. Faces are recognized using a standard face graph, and the human body is tracked using a pose model composed of feature nodes generated from diverse face and pose images. Face recognition using Gabor wavelets and pose recognition using a pose graph are robust for real applications. Simulations were performed using the ETRI database, which was constructed on a horse riding simulator.
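
A short sketch of Gabor-wavelet feature extraction with OpenCV; the kernel parameters are typical values, not the settings used for the ETRI database:

```python
import cv2
import numpy as np

def gabor_features(gray, n_orientations=4):
    """Mean/std of Gabor responses at several orientations, usable as a
    node descriptor in a face or pose graph."""
    feats = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        feats.extend([float(response.mean()), float(response.std())])
    return np.array(feats)
```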

Keywords: Horse riding simulator, Object detection, Object recognition, User identification, Pose recognition.

65 Market Feasibility for New Brand Coffee House: The Case Study of Thailand

Authors: Pongsiri K.

Abstract:

This research aimed to study the market feasibility of a new brand coffee house in Thailand. The study used a mixed-methods design combining quantitative and qualitative research. For primary data, 350 questionnaires were distributed, of which 320 high-quality completed questionnaires were returned. The research samples were customers of high-end department stores in Thailand. The secondary data were carefully selected from highly reliable public- and private-sector sources. The results were used to classify the customers into two main groups: those younger than 25 and those older than 25 years. The younger group gave priority to the coffee house and its services dimension above all others, followed by the branding dimension and the product dimension, respectively. The older group, in contrast, rated branding as most important, followed by the coffee house and its services, and then the product. Coffee consumption is not just a trend; it has become part of people's lifestyle, and new cultures are also created by astute businesses. Coffee has long been produced and consumed in Thailand, yet surprisingly, the high-end coffee house brands in the Thai market are mostly imported. The business prospects for a Thai-brand coffee house in the Thai market are discussed in the paper.

Keywords: Coffee house, café, coffee consumption, new entry branding, market feasibility.

64 When Psychology Meets Ecology: Cognitive Flexibility for Quarry Rehabilitation

Authors: J. Fenianos, C. Khater, D. Brouillet

Abstract:

Ecological projects often face reluctance from the local communities hosting them, especially when the project involves departures from preset ideas or classical practices. This paper aims at assessing the contribution of environmental psychology, through cognitive flexibility exercises, to improving the acceptability of more ecological rehabilitation scenarios among local communities. The study is based on a quarry site located in Bekaa, Lebanon. Four groups with different levels of involvement were considered: Group 1, Training (T), 50 hours of on-site training over 8 months; Group 2, Awareness (A), a 2-hour awareness-raising session; Group 3, Flexibility (F), 2 hours of flexibility exercises; and Group 4, the Control (C). The results show that individuals in Group 3 (F), who followed the flexibility sessions, accept the ecological rehabilitation option over the more classical one; this is also the case for the people in Group 1 (T), who followed the more time-demanding on-site training. Another experiment was conducted on a second quarry site, combining flexibility with awareness-raising. This research confirms that it is possible to reduce resistance to change with a time-limited intervention using cognitive flexibility. This methodological approach could be transferable to other environmental problems involving local communities and changes in preset perceptions.

Keywords: Acceptability, ecological restoration, environmental psychology, Lebanon, local communities, resistance to change.

63 Semi-Analytic Method in Fast Evaluation of Thermal Management Solution in Energy Storage System

Authors: Ya Lv

Abstract:

This article presents the application of the semi-analytic method (SAM) to the thermal management solution (TMS) of an energy storage system (ESS). The TMS studied in this work is fluid cooling, in which both effective heat conduction and heat convection are indispensable because heat is transferred from solid to fluid. Correspondingly, an efficient TMS requires a design investigation of the following parameters: fluid inlet temperature, ESS initial temperature, fluid flow rate, working C-rate, continuous working time, and material properties. Their variation induces a change of thermal performance in the battery module, which is usually evaluated by numerical simulation. In contrast to the heavy computational resources and long run times of simulation, the SAM developed in this article predicts the thermal influence within a few seconds. In the SAM, a fast prediction model is constructed by combining numerical simulation with theoretical/empirical equations. The SAM can explore the thermal effect of boundary parameters in both steady-state and transient heat transfer scenarios within a short time. Therefore, the SAM developed in this work can shorten the design cycle of a TMS and inspire more possibilities in TMS design.
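
One classical theoretical building block that such a fast model can lean on (a standard lumped-capacitance result, not an equation quoted from the paper) is the transient response of a solid cooled by a fluid at temperature T_f:

```latex
T(t) = T_f + \left(T_0 - T_f\right)\,
       \exp\!\left(-\frac{hA_s}{\rho V c_p}\,t\right)
```

where h is the convective coefficient, A_s the wetted area and ρVc_p the thermal mass; fitting such expressions to a handful of simulations is what lets boundary-parameter sweeps run in seconds rather than hours.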

Keywords: Semi-analytic method, fast prediction model, thermal influence of boundary parameters, energy storage system.

62 High School STEM Curriculum and an Example of Laboratory Work That Shows How Microcomputers Can Help in the Understanding of Physical Concepts

Authors: Jelena Slugan, Ivica Ružić

Abstract:

We are witnessing the rapid development of technologies that change the world around us, yet curriculums and teaching processes are often slow to adapt; it takes time, money and expertise to implement technology in the classroom. Therefore, the University of Split, Croatia, partnered with the local school Marko Marulić High School and created the project "Modern competence in modern high schools", as part of which five different curriculums for STEM areas were developed. One of the curriculums combines information technology with physics. The main idea was to teach students how to use different circuits and microcomputers to explore nature and physical phenomena. As a result, using electrical circuits, students are able to recreate in the classroom the phenomena that they observe every day in their environment. So far, high school students have had very little opportunity to perform experiments independently, and in particular, physics experiments have rarely involved ICT. This project is therefore of great importance, because students will finally get the chance to develop in step with modern technologies. This paper presents some new methods of teaching physics that will help students develop experimental skills through the study of the deterministic nature of physical laws. Students will learn how to formulate hypotheses, model physical problems using electronic circuits and evaluate their results, acquiring useful problem-solving skills along the way.

Keywords: ICT in physics, curriculum, laboratory activities, STEM.

61 An Approach to Capture, Evaluate and Handle Complexity of Engineering Change Occurrences in New Product Development

Authors: Mohammad Rostami Mehr, Seyed Arya Mir Rashed, Arndt Lueder, Magdalena Mißler-Behr

Abstract:

This paper presents the conception that complex problems do not necessarily need similarly complex solutions in order to cope with complexity; a simple solution based on established methods can provide a sufficient way of dealing with it. To verify this conception, the paper focuses on the field of change management as a part of the new product development process in the automotive sector. In the field of complexity management, dealing with increasing complexity is essential, yet only rigid, inflexible processes that are not designed to handle complexity are available. The basic methodology of this paper can be divided into four main sections: 1) analyzing the complexity of change management, 2) reviewing the literature to identify potential solutions and methods, 3) capturing and implementing the expertise of experts from the change management field of an automobile manufacturing company, and 4) systematically comparing the methods identified in the literature and connecting them with the defined requirements of the complexity of change management in order to develop a solution. As a practical outcome, this paper provides a method to capture the complexity of engineering changes (EC) and include it within the EC evaluation process, following case-related process guidance to cope with the complexity. Furthermore, this approach supports the conception that dealing with complexity is possible while utilizing rather simple and established methods, by combining them into a powerful tool.

Keywords: complexity management, new product development, engineering change management, flexibility

60 Ice Load Measurements on Known Structures Using Image Processing Methods

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions, so image processing methods are used to measure ice loads automatically. Most image processing methods are based on the analysis of captured images; in this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars was designed as the known structure of the experimental setup, and asymmetric ice accumulated on the structure in a cold room represents the actual experimental case. Camera intrinsic and extrinsic parameters are used to express the structure coordinates in the image coordinate system according to the camera location and angle. Thresholding is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image with the structure coordinates, and ice thicknesses of the structural elements are obtained by averaging the ice diameters from different camera views. Comparison between ice load measurements using this method and the actual ice loads shows positive correlation within an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
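
A sketch of the thresholding and width-measurement step for a single bar; the file name, mm-per-pixel scale and bar diameter are placeholders, not values from the experiment:

```python
import cv2
import numpy as np

gray = cv2.imread("iced_bar.png", cv2.IMREAD_GRAYSCALE)   # assumed image
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

mm_per_px = 0.8                        # from camera calibration (assumed)
widths_px = (binary > 0).sum(axis=1)   # iced-bar width on each image row
diam_mm = widths_px[widths_px > 0].mean() * mm_per_px

bare_diam_mm = 25.0                    # known bar diameter (assumed)
ice_thickness = (diam_mm - bare_diam_mm) / 2.0
print(round(ice_thickness, 2), "mm")   # then averaged over camera views
```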

Keywords: Camera calibration, Ice detection, ice load measurements, image processing.

59 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction

Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota

Abstract:

Understanding the causes of road accidents and predicting their occurrence is key to preventing the deaths and serious injuries that result from them. Traditional statistical methods such as Poisson and logistic regressions have been used to find associations between traffic environmental factors and accident occurrence; more recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with highly unbalanced attribute pattern distributions in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data the minority group is often the group of interest; in road traffic accident data, for example, the accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; on an evenly distributed dataset, the proposed method performs almost like a standard neural network.
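
One plausible reading of the k-means rebalancing step, sketched below with arbitrary cluster counts (the paper's exact combination scheme is not reproduced): compress the majority class into cluster-wise samples so the accident events stop looking like noise to the network.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_undersample(X_maj, n_clusters=50, per_cluster=4, seed=0):
    """Pick a few representatives from each k-means cluster of the
    majority class; train the ANN on these plus all minority rows."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_maj)
    rng = np.random.default_rng(seed)
    keep = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        keep.extend(rng.choice(idx, size=min(per_cluster, len(idx)),
                               replace=False))
    return X_maj[np.array(keep)]
```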

Keywords: Accident risk estimation, artificial neural network, deep learning, k-means, road safety.

58 First and Second Law Analysis of the Reheat Organic Rankine Cycle

Authors: E. Moradimaram, H. Sayehvand

Abstract:

In recent years, the increasing use of fossil fuels has led to various environmental problems, including urban pollution, ozone layer depletion and acid rain. Moreover, with the increasing number of industrial centers and higher consumption of these fuels, the depletion of fossil energy reserves has become more evident. Considering the environmental pollution caused by fossil fuels and their limited availability, renewable sources can be considered the main substitute for non-renewable resources. One technology for exploiting such sources is the Organic Rankine Cycle (ORC); these cycles offer high safety and low maintenance requirements. Combining the ORC with other components, such as an ejector or a reheater, can increase overall cycle efficiency. In this study, a reheater is used to improve the thermal efficiency (ηth), exergy efficiency (ηex) and net output power (Wnet); thus the ORC with reheater (RORC) is proposed. A computational program has been developed in Engineering Equation Solver (EES) to calculate the required thermodynamic parameters. In this program, first- and second-law analyses of the RORC are conducted and compared with the ORC with ejector (EORC). R245fa is selected as the working fluid, and water is chosen as the low-temperature heat source, with a temperature of 95 °C and a mass flow rate of 1 kg/s. The pressures of the second evaporator and the reheater are optimized for maximum exergy efficiency. The environment is at 298.15 K and 101.325 kPa. The results indicate that the thermodynamic parameters of the RORC are improved compared to the EORC.
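
The two figures of merit follow the usual definitions, stated here in their standard textbook form (not quoted from the paper), with the heat-source exergy evaluated against the given dead state T_0 = 298.15 K:

```latex
\eta_{th} = \frac{\dot{W}_{net}}{\dot{Q}_{in}}, \qquad
\eta_{ex} = \frac{\dot{W}_{net}}{\dot{E}_{in}}, \qquad
\dot{E}_{in} = \dot{m}\left[(h_{in}-h_0) - T_0\,(s_{in}-s_0)\right]
```

where h and s are the enthalpy and entropy of the source water at the inlet and at the dead state, and ṁ = 1 kg/s here.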

Keywords: Organic Rankine cycle, organic Rankine cycle with reheater, organic Rankine cycle with ejector, exergy efficiency.

57 Development of a Double Coating Technique for Recycled Concrete Aggregates Used in Hot-mix Asphalt

Authors: Abbaas I. Kareem, H. Nikraz

Abstract:

The use of recycled concrete aggregates (RCAs) in hot-mix asphalt (HMA) production could ease natural aggregate shortages and help maintain sustainability in modern societies. However, the attached cement mortar and other impurities make RCAs behave differently from high-quality aggregates, so different upgrading treatments have been suggested to enhance their properties before use in HMA production. Disappointingly, some of these treatments degrade other RCA properties. In order to avoid such degradation, a coating technique based on combining two main treatments, named the double coating technique (DCT), is developed here. Dosages of 0%, 20%, 40% and 60% of uncoated RCA, RCA coated with Cement Slag Paste (CSP), and Double Coated Recycled Concrete Aggregates (DCRCAs) in place of granite aggregates were evaluated. The results indicate that the DCT improves strength and reduces water absorption of the DCRCAs compared with uncoated RCAs and RCA coated with CSP. In addition, the DCRCA asphalt mixtures exhibit stability values higher than those obtained for mixes made with granite aggregates, uncoated RCAs and RCAs coated with CSP. The DCRCA asphalt mixtures also require less bitumen to achieve the optimum bitumen content (OBC) than those manufactured with uncoated RCA and RCA coated with CSP. Although the results obtained are encouraging, more testing is required to examine the effect of the DCT on performance properties of DCRCA asphalt mixtures, such as rutting and fatigue.

Keywords: Recycled concrete aggregates, hot-mix asphalt, double coating technique, aggregate crushing value, Marshall parameters.

56 Design of QFT-Based Self-Tuning Deadbeat Controller

Authors: H. Mansor, S. B. Mohd Noor

Abstract:

This paper presents a design method for a self-tuning Quantitative Feedback Theory (QFT) controller using an improved deadbeat control algorithm. QFT is a technique for achieving robust control with pre-defined specifications, whereas deadbeat is an algorithm that brings the output to steady state in a minimum number of steps. Nevertheless, there are usually large peaks in the deadbeat response; by integrating QFT specifications into the deadbeat algorithm, these peaks can be kept within tolerance. On the other hand, merging QFT with an adaptive element produces a robust controller with wider coverage of uncertainty. By combining the QFT-based deadbeat algorithm with an adaptive element, a superior controller, called the self-tuning QFT-based deadbeat controller, can be achieved, with an output response that is fast, robust and adaptive. Using a grain dryer plant model as a pilot case study, the performance of the proposed method has been evaluated and analyzed; grain drying is a very complex process with highly nonlinear behaviour and long delays, and it is affected by environmental changes and disturbances. Performance comparisons have been carried out between the proposed self-tuning QFT-based deadbeat, standard QFT and standard deadbeat controllers. The test results prove the efficiency of the self-tuning QFT-based deadbeat controller: its parameters are updated online, and it achieves a lower percentage of overshoot and a shorter settling time, especially when there are variations in the plant.
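
A tiny simulation of classical deadbeat control on a made-up first-order plant, showing the one-step settling and the large initial control effort that the QFT specifications are there to bound:

```python
import numpy as np

a, b = -0.5, 0.4            # made-up plant y[k+1] = -a*y[k] + b*u[k]
r, N = 1.0, 8               # step reference, samples to simulate
y, u, e = np.zeros(N), np.zeros(N), np.zeros(N)

for k in range(N - 1):
    e[k] = r - y[k]
    # Deadbeat controller C(z) = (1 + a z^-1) / (b (1 - z^-1)):
    u[k] = (u[k-1] if k else 0.0) + (e[k] + (a * e[k-1] if k else 0.0)) / b
    y[k+1] = -a * y[k] + b * u[k]

print(np.round(y, 3))       # output hits the reference after one sample
print(np.round(u, 3))       # note the initial control peak (2.5 vs 1.25)
```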

Keywords: Deadbeat control, quantitative feedback theory (QFT), robust control, self-tuning control.

55 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, ImedRiadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas is a subject of intensive research, and timely, accurate data on the spatio-temporal changes of urban areas are required. The data extracted from multi-temporal satellite images are usually non-stationary: the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas that combines a non-stationary decomposition method with stochastic modeling. As input, our methodology takes a sequence of satellite images I1, I2, ..., In acquired at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images (e.g. radiometric, atmospheric and geometric corrections) is applied. The systematic study of global urban expansion in our methodology can then be approached in two ways: the first considers the urban area as a single object, as opposed to non-urban areas (e.g. vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain more detailed knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos, Madrid, in Spain, derived from Landsat over the period from January 2004 to July 2013, with two frames collected per year at a spatial resolution of 25 meters. The results obtained show the effectiveness of our method.
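
For orientation, the baseline against which such pipelines are usually judged is plain image differencing; a minimal sketch (not the paper's non-stationary decomposition or stochastic model) on two co-registered, corrected bands:

```python
import numpy as np

def change_mask(img_t1, img_t2, k=2.0):
    """Flag pixels whose difference deviates more than k standard
    deviations from the mean difference between the two dates."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) > k * sigma   # True where change is flagged
```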

Keywords: Multi-temporal satellite image, urban growth, Non-stationarity, stochastic modeling.

54 Conventional and PSO Based Approaches for Model Reduction of SISO Discrete Systems

Authors: S. K. Tomar, R. Prasad, S. Panda, C. Ardil

Abstract:

The reduction of Single Input Single Output (SISO) discrete systems into lower-order models, using a conventional and an evolutionary technique, is presented in this paper. In the conventional technique, the advantages of the Modified Cauer Form (MCF) and of differentiation are combined. In this method the original discrete system is first converted into an equivalent continuous system by applying the bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, and the reduced denominator of the desired order is obtained by combining the differentiated polynomials; the numerator is obtained by matching the quotients of the MCF. The reduced continuous system is then converted back into a discrete system using the inverse bilinear transformation. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model pertaining to a unit step input. Both methods are illustrated through a numerical example.
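
A quick illustration of the bilinear mapping with SciPy; the toy system and sampling period are ours, and the paper applies the same substitution in the opposite, discrete-to-continuous direction:

```python
import numpy as np
from scipy.signal import cont2discrete

num, den = [1.0], [1.0, 3.0, 2.0]       # toy G(s) = 1 / (s^2 + 3s + 2)
T = 0.1                                 # sampling period (assumed)

# Tustin/bilinear substitution s = (2/T)(z - 1)/(z + 1); its inverse,
# z = (1 + sT/2)/(1 - sT/2), converts a discrete model back to continuous.
numd, dend, dt = cont2discrete((num, den), T, method='bilinear')
print(np.squeeze(numd), dend, dt)
```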

Keywords: Discrete system, Single Input Single Output (SISO), bilinear transformation, reduced order model, Modified Cauer Form, polynomial differentiation, Particle Swarm Optimization, Integral Squared Error.

53 Hybrid Advanced Oxidative Pretreatment of Complex Industrial Effluent for Biodegradability Enhancement

Authors: K. Paradkar, S. N. Mudliar, A. Sharma, A. B. Pandit, R. A. Pandey

Abstract:

The study explores a hybrid combination of Hydrodynamic Cavitation (HC) and subcritical Wet Air Oxidation (WAO) as a pretreatment of complex industrial effluent, to selectively enhance biodegradability (without major COD destruction) and so facilitate enhanced downstream processing via anaerobic or aerobic biological treatment. Advanced oxidation techniques can be less efficient as standalone options, and a hybrid approach combining HC and WAO can lead to a synergistic effect, since both options are based on a common free-radical mechanism. HC can be used for initial turbulence and the generation of hotspots that begin the free-radical attack, and the agitated mixture can then be subjected to a less intense WAO, since the initial heat (to raise the activation energy) is supplied by HC alone. A lab-scale venturi-based hydrodynamic cavitation and wet air oxidation reactor, with biomethanated distillery wastewater (BMDWW) as a model effluent, was examined to establish the proof of concept. The results indicated that, for a desirable biodegradability index (BOD:COD, BI) enhancement of up to 0.4, the standalone cavitation pretreatment condition was 5 bar and 88 min reaction time, with a COD reduction of 36% and a BI enhancement up to 0.27 (initial BI 0.17). The optimum standalone WAO condition was 150 °C, 6 bar and 30 minutes, with 31% COD reduction and a BI of 0.33. The hybrid pretreatment (combined cavitation + WAO) worked out to 23.18 min of HC (at 5 bar) followed by 30 min of WAO at 150 °C and 6 bar, at which around 50% of the COD was retained, yielding a BI of 0.55. FTIR and NMR analyses of the pretreated effluent indicated dissociation and/or reorientation of the complex organic compounds in the untreated effluent into simpler organic compounds after pretreatment.

Keywords: BI, hybrid, hydrodynamic cavitation, wet air oxidation.

52 Retrieval of User Specific Images Using Semantic Signatures

Authors: K. Venkateswari, U. K. Balaji Saravanan, K. Thangaraj, K. V. Deepana

Abstract:

Image search engines rely on surrounding textual keywords for the retrieval of images, and it is tedious for search engines like Google and Bing to interpret the user's search intention and provide the desired results. Recent research also indicates that Google image search does not work well on all images. This motivates efficient image retrieval techniques that interpret the user's search intention and show the desired results. In order to accomplish this task, an efficient image re-ranking framework is required, and such a framework is evaluated in this paper. The implemented framework provides the best image retrieval from the image dataset by re-ranking the retrieved images based on the user's desired images. It operates in two stages, one offline and one online. In the offline stage, the framework learns different semantic spaces (reference classes) for diverse user query keywords; semantic signatures are generated by combining the textual and visual features of the images. In the online stage, images are re-ranked by comparing the semantic signatures obtained from the reference classes with the user-specified image query keywords. This re-ranking methodology increases image retrieval efficiency, and the results are effective for the user.
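
A sketch of the online stage; the signatures are assumed to be precomputed vectors, and cosine similarity is our illustrative choice of comparison:

```python
import numpy as np

def rerank(signatures, query_sig):
    """Order retrieved images by cosine similarity between their semantic
    signatures (rows of `signatures`) and the query's signature."""
    s = signatures / np.linalg.norm(signatures, axis=1, keepdims=True)
    q = query_sig / np.linalg.norm(query_sig)
    scores = s @ q                         # cosine similarity per image
    return np.argsort(-scores)             # best-matching images first
```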

Keywords: CBIR, Image Re-ranking, Image Retrieval, Semantic Signature, Semantic Space.

51 Optimization and GIS-Based Intelligent Decision Support System for Urban Transportation Systems Analysis

Authors: Mohamad K. Hasan, Hameed Al-Qaheri

Abstract:

Optimization plays an important role in most real-world applications, supporting decision makers in taking the right decisions regarding the strategic directions and operations of the systems they manage. Solutions to traffic management and traffic congestion problems are among the major concerns of decision-making authorities in cities around the world. This review paper gives a full description of the traffic problem as part of the transportation planning process and presents a framework for urban transportation system analysis whose core is a transportation network equilibrium model that is based on optimization techniques and that can also be used to evaluate an alternative solution, or a combination of alternative solutions, to traffic congestion. Different transportation network equilibrium models are reviewed, from the sequential approach to the multiclass model that simultaneously combines trip generation, trip distribution, modal split, trip assignment and departure time. A GIS-based intelligent decision support system framework for urban transportation system analysis is suggested for implementation, in which the selection of optimized alternative solutions, single or packaged, is based on an intelligent agent rather than a human being; this would reduce time and cost and eliminate the difficulty, for a human being, of finding the best solution to the traffic congestion problem.

Keywords: Multiclass simultaneous transportation equilibrium models, transportation planning, urban transportation systems analysis, intelligent decision support system.

50 Packet Forwarding with Multiprotocol Label Switching

Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar

Abstract:

MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at high speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header and independently determines the next hop for the packet using the destination IP address and the interior gateway protocol; this process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, based on their associated FECs, by swapping the labels at routers in the core of the network, called label switch routers. Simply swapping the label, instead of referencing the IP header of the packet in the routing table at each hop, provides a more efficient manner of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results showing the performance comparison of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
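
A toy illustration of the label-swap lookup at a label switch router; the labels, operations and next hops below are invented, keyed by the incoming label map (ILM) onto next hop label forwarding entries (NHLFE):

```python
# Toy label switch router: one exact-match lookup per packet, no IP parsing.
ilm = {
    17: {"op": "swap", "out_label": 42, "next_hop": "10.0.0.2"},
    42: {"op": "swap", "out_label": 99, "next_hop": "10.0.0.3"},
    99: {"op": "pop",  "out_label": None, "next_hop": "10.0.0.4"},  # egress
}

def forward(label: int, payload: bytes):
    entry = ilm[label]                   # the NHLFE selected by the ILM
    if entry["op"] == "swap":
        return entry["out_label"], entry["next_hop"], payload
    return None, entry["next_hop"], payload   # label popped at egress

print(forward(17, b"ip-packet"))         # -> (42, '10.0.0.2', b'ip-packet')
```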

Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry.

49 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discrimination networks D_A, D_B connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which is learning to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented: both registration networks are employed twice, resulting in images Â, B̂ which were registered to B̃, Ã, which in turn were registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI scans was used to evaluate the quality of our approach and to compare it against learning-based and non-learning-based registration algorithms. Our approach yields Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
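
A sketch of the cycle-consistency term in PyTorch; `g_ab`/`g_ba` (deformation-field predictors) and `warp` (the spatial transformation layer) are assumed interfaces, and this is one plausible reading of the cycle rather than the authors' code:

```python
import torch.nn.functional as F

def cycle_loss(a, b, g_ab, g_ba, warp):
    """g_xy(x, y) predicts a field registering y onto x;
    warp(img, field) resamples img with that field."""
    b_t = warp(b, g_ab(a, b))                # ~B: B registered to A
    a_t = warp(a, g_ba(b, a))                # ~A: A registered to B
    b_hat = warp(b_t, g_ba(a_t, b_t))        # ^B: ~B registered back
    a_hat = warp(a_t, g_ab(b_t, a_t))        # ^A: ~A registered back
    # Same-modality pairs are now directly comparable.
    return F.l1_loss(a_hat, a) + F.l1_loss(b_hat, b)
```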

Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.

48 A Hybrid Ontology Based Approach for Ranking Documents

Authors: Sarah Motiee, Azadeh Nematzadeh, Mehrnoush Shamsfard

Abstract:

The growing volume of information on the internet creates an increasing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach combines conceptual, statistical and linguistic methods, a combination that preserves ranking precision without losing speed. The approach exploits natural language processing techniques to extract phrases from the documents and the query and to stem words. An ontology-based conceptual method is then used to annotate documents and expand the query; to expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm so that expansion is based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank has been developed to implement and test the proposed model. The test results are included at the end of the paper.
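
A minimal sketch of weighted spread activation for query expansion over a toy concept graph; the graph, edge weights, decay and threshold are made up for illustration, not ORank's actual data:

```python
graph = {
    "car": [("vehicle", 0.9), ("engine", 0.7)],
    "vehicle": [("transport", 0.8)],
    "engine": [("fuel", 0.6)],
    "transport": [], "fuel": [],
}

def spread_activation(seeds, decay=0.5, threshold=0.1):
    """Propagate activation from query concepts along weighted relations,
    damping each pulse by the edge weight and a global decay factor."""
    activation = dict(seeds)                  # e.g. {"car": 1.0}
    frontier = dict(seeds)
    while frontier:
        next_frontier = {}
        for node, level in frontier.items():
            for neighbor, weight in graph.get(node, []):
                pulse = level * weight * decay
                if pulse >= threshold:        # stop weak, distant pulses
                    activation[neighbor] = activation.get(neighbor, 0.0) + pulse
                    next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + pulse
        frontier = next_frontier
    return activation

print(spread_activation({"car": 1.0}))   # expanded query concepts + scores
```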

Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.
