Search results for: Data Centric Approach
8945 Microscopic Simulation of Toll Plaza Safety and Operations
Authors: Bekir O. Bartin, Kaan Ozbay, Sandeep Mudigonda, Hong Yang
Abstract:
The use of microscopic traffic simulation in evaluating the operational and safety conditions at toll plazas is demonstrated. Two toll plazas in New Jersey were selected as case studies and were developed and validated in the Paramics traffic simulation software. In order to simulate drivers’ lane selection behavior in Paramics, a utility-based lane selection approach is implemented through the Paramics Application Programming Interface (API). For each vehicle approaching the toll plaza, a utility value is assigned to each toll lane by taking into account the factors that are likely to impact drivers’ lane selection behavior, such as approach lane, exit lane and queue lengths. The results demonstrate that similar operational conditions, such as lane-by-lane toll plaza traffic volume, can be attained using this approach. In addition, assessment of safety at toll plazas is conducted via a surrogate safety measure. In particular, the crash index (CI), an improved surrogate measure based on time-to-collision (TTC) that reflects the severity of a crash, is used in the simulation analyses. The results indicate that the spatial and temporal frequency of observed crashes can be reproduced using the proposed methodology. Further analyses can be conducted to evaluate and compare various operational decisions and safety measures using microscopic simulation models.
Keywords: Microscopic simulation, toll plaza, surrogate safety, application programming interface.
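To illustrate the kind of utility-based lane selection this abstract describes, here is a minimal sketch that scores each toll lane from weighted factors (queue length and lane-change offsets) and picks the highest-utility lane. The weights and factor names are illustrative assumptions, not the authors' calibrated Paramics model.

```python
# Illustrative sketch of utility-based toll lane selection (not the
# authors' calibrated Paramics API model; weights are assumptions).
from dataclasses import dataclass

@dataclass
class Lane:
    lane_id: int
    queue_length: int      # vehicles currently queued
    approach_offset: int   # lane changes needed from the approach lane
    exit_offset: int       # lane changes needed to reach the exit lane

def lane_utility(lane: Lane, w_queue=1.0, w_approach=0.5, w_exit=0.5):
    # Longer queues and larger lane-change distances reduce utility.
    return -(w_queue * lane.queue_length
             + w_approach * lane.approach_offset
             + w_exit * lane.exit_offset)

def choose_lane(lanes):
    return max(lanes, key=lane_utility)

lanes = [Lane(1, 5, 0, 2), Lane(2, 2, 1, 1), Lane(3, 0, 3, 0)]
print(choose_lane(lanes).lane_id)  # lane with the best trade-off
```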
8944 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure
Authors: T. Nozu, K. Hibi, T. Nishiie
Abstract:
This paper discusses the applicability of the numerical model for a damage prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model was based on an unstructured finite volume method (FVM) code, “NuFD/FrontFlowRed”. For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, where a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous types of hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume with a 30% hydrogen-air mixture. A reinforced concrete wall was set 4 m away from the front surface of the source. The source was ignited at the bottom center by a spark. The other is a vented enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening on one side. A vent area of 5.4 m2 was used. The test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol.% were used in the tests. The results from the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two previous explosion tests.
Keywords: Deflagration, Large Eddy Simulation, Turbulent combustion, Vented enclosure.
8943 A Review on Soft Computing Technique in Intrusion Detection System
Authors: Noor Suhana Sulaiman, Rohani Abu Bakar, Norrozila Sulaiman
Abstract:
Intrusion Detection Systems are significant in network security. An IDS detects and identifies intrusion behavior or intrusion attempts in a computer system by monitoring and analyzing network packets in real time. In recent years, intelligent algorithms applied in intrusion detection systems (IDS) have been of increasing interest with the rapid growth of network security concerns. IDS data comprise a huge amount of data containing irrelevant and redundant features, causing slow training and testing, higher resource consumption, and poor detection rates. Since the amount of audit data that an IDS needs to examine is very large even for a small network, classification by hand is impossible. Hence, the primary objective of this review is to survey the techniques applied prior to the classification process that suit IDS data.
Keywords: Intrusion Detection System, security, soft computing, classification.
8942 Economic Dispatch Fuzzy Linear Regression and Optimization
Authors: A. K. Al-Othman
Abstract:
This study presents a new approach based on Tanaka's fuzzy linear regression (FLP) algorithm to solve the well-known power system economic load dispatch (ELD) problem. Tanaka's fuzzy linear regression (FLP) formulation is employed to compute the optimal solution of the optimization problem after linearization. The unknowns are expressed as fuzzy numbers with a triangular membership function, with middle and spread values reflected in the unknowns. The proposed fuzzy model is formulated as a linear optimization problem, where the objective is to minimize the sum of the spreads of the unknowns, subject to double inequality constraints. A linear programming technique is employed to obtain the middle and the symmetric spread for every unknown (power generation level). Simulation results of the proposed approach are compared with those reported in the literature.
Keywords: Economic Dispatch, Fuzzy Linear Regression (FLP), Optimization.
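As a rough illustration of a Tanaka-style formulation, the sketch below fits a fuzzy linear model by minimizing the total spread subject to the double inequality constraints that every observation lies inside the fuzzy output. The single-regressor data and h-level are illustrative assumptions, not the paper's ELD setup.

```python
# Minimal sketch of Tanaka's fuzzy linear regression as a linear
# program (illustrative data; the h-level and setup are assumptions).
import numpy as np
from scipy.optimize import linprog

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
X = np.column_stack([np.ones_like(x), x])   # intercept + slope
n, p = X.shape
h = 0.0                                     # h-level (assumed)

# Decision variables: p middles a_j, then p non-negative spreads c_j.
cost = np.concatenate([np.zeros(p), np.abs(X).sum(axis=0)])

# Double inequality: each y_i must lie inside the fuzzy output:
#   X a - (1-h)|X| c <= y   and   -X a - (1-h)|X| c <= -y
A_ub = np.vstack([np.hstack([X, -(1 - h) * np.abs(X)]),
                  np.hstack([-X, -(1 - h) * np.abs(X)])])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * p   # spreads >= 0

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
middles, spreads = res.x[:p], res.x[p:]
print("middles:", middles, "spreads:", spreads)
```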
8941 Modeling and Simulation of Acoustic Link Using Mackenzie Propagation Speed Equation
Authors: Christhu Raj M. R., Rajeev Sukumaran
Abstract:
Underwater acoustic networks have attracted great attention in the last few years because of their numerous applications. A high data rate can be achieved by efficiently modeling the physical layer in the network protocol stack. In the acoustic medium, the propagation speed of acoustic waves depends on many parameters such as temperature, salinity, density, and depth. Acoustic propagation speed cannot be modeled using standard empirical formulas such as the Urick and Thorp descriptions. In this paper, we have modeled the acoustic channel using real-time temperature, salinity, and sound speed data from the Bay of Bengal (Indian coastal region). We have modeled the acoustic channel by using the Mackenzie speed equation and real-time data obtained from the National Institute of Oceanography and Technology. It is found that the acoustic propagation speed varies between 1503 m/s and 1544 m/s as temperature and depth differ. The simulation results show that temperature, salinity, and depth play a major role in acoustic propagation, and the data rate increases with appropriate data sets substituted in the simulated model.
Keywords: Underwater Acoustics, Mackenzie Speed Equation, Temperature, Salinity.
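For reference, the Mackenzie (1981) nine-term equation computes sound speed from temperature, salinity and depth; a direct sketch is below. The sample inputs are illustrative, not the Bay of Bengal measurements used in the paper.

```python
# Mackenzie (1981) nine-term equation for sound speed in seawater.
# T: temperature in deg C, S: salinity in parts per thousand,
# D: depth in metres. Sample values below are illustrative only.
def mackenzie_speed(T, S, D):
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

# Warm shallow water vs cooler deeper water: the speed shifts by tens
# of m/s, consistent with the 1503-1544 m/s range in the abstract.
print(mackenzie_speed(28.0, 33.0, 10.0))   # ~1539 m/s
print(mackenzie_speed(12.0, 34.0, 500.0))  # ~1504 m/s
```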
8940 Reducing Defects through Organizational Learning within a Housing Association Environment
Authors: T. Hopkin, S. Lu, P. Rogers, M. Sexton
Abstract:
Housing Associations (HAs) contribute circa 20% of the UK’s housing supply. HAs are, however, under increasing pressure as a result of funding cuts and rent reductions. Due to this increased pressure, a number of processes are currently being reviewed by HAs, especially how they manage and learn from defects. Learning from defects is considered a useful approach to achieving defect reduction within the UK housebuilding industry. This paper contributes to our understanding of how HAs learn from defects by undertaking an initial round table discussion with key HA stakeholders, as part of an ongoing collaborative research project with the National House Building Council (NHBC) to better understand how house builders and HAs learn from defects to reduce their prevalence. The initial discussion shows that defect information runs through a number of groups, both internal and external to a HA, during both the defects management process and the organizational learning (OL) process. Furthermore, HAs are reliant on capturing and recording defect data as the foundation for the OL process. During the OL process, defect data analysis is the primary enabler of recognizing a need for a change to organizational routines. When a need for change has been recognized, new options are typically pursued to design out defects via updates to a HA’s Employer’s Requirements. Proposed solutions are selected by a review board and committed to organizational routine. After implementing a change, both structured and unstructured feedback is sought to establish the change’s success. The findings from the HA discussion demonstrate that OL can achieve defect reduction within the house building sector in the UK. The paper concludes by outlining a potential ‘learning from defects model’ for the housebuilding industry as well as describing future work.
Keywords: Defects, new homes, housing associations, organizational learning.
8939 Isobaric Vapor-Liquid Equilibrium Data for Binary Mixture of 2-Methyltetrahydrofuran and Cumene
Authors: V. K. Rattan, Baljinder K. Gill, Seema Kapoor
Abstract:
Isobaric vapor-liquid equilibrium measurements are reported for the binary mixture of 2-Methyltetrahydrofuran and Cumene at 97.3 kPa. The data were obtained using a vapor recirculating type (modified Othmer's) equilibrium still. The mixture shows slight negative deviation from ideality. The system does not form an azeotrope. The experimental data obtained in this study are thermodynamically consistent according to the Herington test. The activity coefficients have been satisfactorily correlated by means of the Margules and NRTL equations. The excess Gibbs free energy has been calculated from the experimental data. The values of the activity coefficients have also been obtained by the UNIFAC group contribution method.
Keywords: Binary mixture, 2-Methyltetrahydrofuran, Cumene, Vapor-liquid equilibrium, UNIFAC, Excess Gibbs free energy.
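As a pointer to how such isobaric T-x-y data are typically reduced, the sketch below computes activity coefficients via modified Raoult's law and the excess Gibbs energy from them. The Antoine constants and the sample data point are placeholders, not the measured 2-MTHF/cumene values.

```python
# Sketch of VLE data reduction via modified Raoult's law
# (Antoine constants and the sample data point are placeholders,
# not the measured 2-MTHF/cumene values).
import math

def p_sat(T_c, A, B, C):
    # Antoine equation: pressure in kPa, temperature in deg C
    return 10 ** (A - B / (T_c + C))

def activity_coeffs(x1, y1, T_c, P_kpa, antoine1, antoine2):
    g1 = y1 * P_kpa / (x1 * p_sat(T_c, *antoine1))
    g2 = (1 - y1) * P_kpa / ((1 - x1) * p_sat(T_c, *antoine2))
    return g1, g2

def excess_gibbs_rt(x1, g1, g2):
    # G^E / RT = x1 ln(gamma1) + x2 ln(gamma2)
    return x1 * math.log(g1) + (1 - x1) * math.log(g2)

g1, g2 = activity_coeffs(0.40, 0.55, 120.0, 97.3,
                         (6.1, 1200.0, 220.0), (6.0, 1460.0, 208.0))
print(g1, g2, excess_gibbs_rt(0.40, g1, g2))
```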
8938 Simulated Annealing and Genetic Algorithm in Telecommunications Network Planning
Authors: Aleksandar Tsenov
Abstract:
The main goal of this work is to propose a way of using two nontraditional algorithms in combination for solving topological problems on telecommunications concentrator networks. The algorithms suggested are the Simulated Annealing algorithm and the Genetic Algorithm. The Simulated Annealing algorithm unifies the well-known local search algorithms. In addition, Simulated Annealing allows the acceptance of moves in the search space which lead to solutions with higher cost, in order to attempt to escape any local minima encountered. The Genetic Algorithm is a heuristic approach which is used in a wide range of optimization work. In recent years this approach has also been widely implemented in telecommunications network planning. In order to solve more or less complex planning problems, it is important to find the most appropriate parameters for initializing the function of the algorithm.
Keywords: Concentrator network, genetic algorithm, simulated annealing, UCPL.
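The acceptance of cost-increasing moves mentioned above is the Metropolis rule at the heart of Simulated Annealing; a minimal generic sketch is below. The toy objective and geometric cooling schedule are illustrative assumptions, not the paper's network cost model.

```python
# Minimal sketch of the Metropolis acceptance rule that lets
# Simulated Annealing accept cost-increasing moves (generic form;
# the cost function and cooling schedule are illustrative).
import math, random

def accept(delta_cost, temperature):
    if delta_cost <= 0:
        return True                      # always keep improvements
    # worse moves survive with probability exp(-delta/T)
    return random.random() < math.exp(-delta_cost / temperature)

def anneal(initial, neighbour, cost, t0=100.0, alpha=0.95, steps=1000):
    x, t = initial, t0
    for _ in range(steps):
        cand = neighbour(x)
        if accept(cost(cand) - cost(x), t):
            x = cand
        t *= alpha                       # geometric cooling schedule
    return x

# Toy usage: minimise (v - 3)^2 over integers by +/-1 moves.
best = anneal(0, lambda v: v + random.choice([-1, 1]),
              lambda v: (v - 3) ** 2)
print(best)
```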
8937 Social Software Approach to E-Learning 3.0
Authors: Anna Nedyalkova, Krassimir Nedyalkov, Teodora Bakardjieva
Abstract:
In the present paper, we'll explore how social media tools provide an opportunity for new developments of e-Learning in the context of managing personal knowledge. There will be a discussion of how social media tools provide a possibility for helping knowledge workers and students to gather, organize and manage their personal information as a part of the e-learning process. At the centre of this social software driven approach to e-learning environments are the challenges of personalization and collaboration. We'll share concepts of how organizations are using social media for e-Learning and believe that integration of these tools into traditional e-Learning is probably not a choice, but an inevitability. A survey of students' use of web technologies and social networking tools is presented. A newly developed framework for semantic blogging, capable of organizing results relevant to user requirements, is implemented at Varna Free University (VFU) to provide more effective navigation and search.
Keywords: Semantic blogging, social media tools, e-Learning, web 2.0, web 3.0.
8936 A Centroid Ranking Approach Based Fuzzy MCDM Model
Abstract:
This paper suggests ranking alternatives under fuzzy MCDM (multiple criteria decision making) via a centroid-based ranking approach, where criteria are classified into benefit qualitative, benefit quantitative and cost quantitative ones. The ratings of alternatives versus the qualitative criteria and the importance weights of all criteria are assessed in linguistic values represented by fuzzy numbers. The membership function for the final fuzzy evaluation value of each alternative can be developed through α-cuts and interval arithmetic of fuzzy numbers. The distance between the origin and the relative centroid is applied to defuzzify the final fuzzy evaluation values in order to rank the alternatives. Finally, a numerical example demonstrates the computation procedure of the proposed model.
Keywords: Fuzzy MCDM, Criteria, Fuzzy number, Ranking, Relative centroid.
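For intuition, the sketch below ranks triangular fuzzy evaluation values by the distance from the origin to each number's centroid, one common defuzzification choice. The (a, b, c) triples are made-up evaluation values, and the paper's exact centroid definition may differ from this simple triangle centroid.

```python
# Sketch: rank triangular fuzzy numbers (a, b, c) by the distance
# from the origin to their centroid (sample values are made up;
# the paper's exact centroid definition may differ).
import math

def centroid(tfn):
    a, b, c = tfn
    # centroid of a triangle with vertices (a,0), (b,1), (c,0)
    return ((a + b + c) / 3.0, 1.0 / 3.0)

def ranking_score(tfn):
    x, y = centroid(tfn)
    return math.hypot(x, y)   # distance from the origin

alternatives = {"A1": (0.2, 0.5, 0.7),
                "A2": (0.4, 0.6, 0.8),
                "A3": (0.1, 0.3, 0.9)}
for name, tfn in sorted(alternatives.items(),
                        key=lambda kv: ranking_score(kv[1]),
                        reverse=True):
    print(name, round(ranking_score(tfn), 4))
```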
8935 A New Approach of Wireless Network Traffic on VPN
Authors: Amir Rashid, M. Saleem Khan, Freeha Zafar
Abstract:
This work presents a new approach to securing a wireless network. The configuration is focused on securing and protecting wireless network traffic for a small network such as a home or dorm room. The security mechanism provides both authentication, allowing only known authorized users access to the wireless network, and encryption, preventing anyone from reading the wireless traffic. The solution utilizes the open-source FreeS/WAN software, which implements Internet Protocol Security (IPsec). In addition to the wireless components, a wireless NIC in the PC and a wireless access point, the setup needs a machine running Linux to act as a security gateway. While the current configuration assumes that the wireless PC clients are running Linux, Windows XP/Vista/7 based machines equipped with VPN software can also interface with this configuration.
Keywords: Wireless network security, security network, authentication, encryption, internet protocol security.
8934 Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction
Authors: Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai
Abstract:
A distant-talking voice-based HCI system suffers from performance degradation due to the mismatch between the acoustic speech (runtime) and the acoustic model (training). The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, affecting speech dynamics inside the room before reaching the microphones. Moreover, as the speech signal is reflected, its acoustical characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing instantaneous voice power variation and compensating model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian Mixture Models (GMMs), pre-trained to capture both speech power and room properties, are used to predict the optimal distance of the speech source. Consequently, pre-computed statistical priors corresponding to the optimal distance are selected to correct the statistics of the generic model, which was frozen during training. Thus, model combinatorics are post-conditioned to match the power of the instantaneous speech acoustics at runtime. This results in an improved likelihood of predicting the correct speech command at farther distances. We experiment using real data recorded inside two rooms. Experimental evaluation shows that voice recognition performance using our method is more robust to the change in distance compared to the conventional approach. In our experiment, under the most acoustically challenging environment (i.e., Room 2: 2.5 meters), our method achieved a 24.2% improvement in recognition performance against the best-performing conventional method.
Keywords: Human Machine Interaction, Human Computer Interaction, Voice Recognition, Acoustic Model Compensation, Acoustic Speech Enhancement.
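A rough sketch of the distance-selection step described above: score runtime features against GMMs pre-trained per distance and keep the best-scoring one. The synthetic features and two-distance setup are illustrative, and scikit-learn stands in for the authors' toolchain.

```python
# Sketch: pick the best-matching distance-specific GMM by likelihood
# (synthetic features; scikit-learn stands in for the authors' tools).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {1.0: rng.normal(0.0, 1.0, size=(500, 4)),   # "1 m" features
         2.5: rng.normal(-2.0, 1.5, size=(500, 4))}  # "2.5 m" features

models = {d: GaussianMixture(n_components=4, random_state=0).fit(X)
          for d, X in train.items()}

def predict_distance(features):
    # average log-likelihood under each distance-sensitive GMM
    return max(models, key=lambda d: models[d].score(features))

runtime = rng.normal(-2.0, 1.5, size=(50, 4))
print(predict_distance(runtime))  # expected: 2.5
```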
8933 Cardiovascular Modeling Software Tools in Medicine
Authors: J. Fernandez, R. Fernandez de Canete, J. Perea-Paizal, J. C. Ramos-Diaz
Abstract:
The high prevalence of cardiovascular diseases has provoked a rising interest in the development of mathematical models to evaluate cardiovascular function under both physiological and pathological conditions. In this paper, a physical model of the cardiovascular system with intrinsic regulation is presented and implemented using the object-oriented Modelica simulation software tools. For this task, a multi-compartmental system previously validated with physiological data has been built, based on the interconnection of cardiovascular elements such as resistances, capacitances and pumps, among others, following an electrohydraulic analogy. The results obtained under both physiological and pathological scenarios provide an easy interpretative key to analyze the hemodynamic behavior of the patient. The described approach represents a valuable tool in the teaching of physiology for graduate medical and nursing students, among others.
Keywords: Cardiovascular system, Modelica simulation software, physical modeling, teaching tool.
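To illustrate the electrohydraulic analogy the abstract invokes (pressure as voltage, flow as current, vessels as RC elements), here is a minimal two-element Windkessel sketch, written in Python rather than the Modelica tools used in the paper; the parameter values and pulsatile inflow shape are illustrative assumptions.

```python
# Two-element Windkessel sketch of the electrohydraulic analogy:
# arterial compliance C (capacitor), peripheral resistance R
# (resistor), pressure P (voltage), flow Q (current).
# Parameters and inflow shape are illustrative assumptions.
import math

R, C = 1.0, 1.5           # mmHg*s/mL and mL/mmHg (assumed)
dt, T_heart = 0.001, 0.8  # time step and cardiac period in seconds

def inflow(t):
    # half-sine ejection during systole, zero in diastole
    phase = t % T_heart
    return 300.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

P, trace = 80.0, []
for step in range(int(5 * T_heart / dt)):   # simulate five beats
    t = step * dt
    dPdt = (inflow(t) - P / R) / C          # C dP/dt = Q_in - P/R
    P += dPdt * dt
    trace.append(P)

print(f"min {min(trace):.1f} mmHg, max {max(trace):.1f} mmHg")
```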
8932 DCBOR: A Density Clustering Based on Outlier Removal
Authors: A. M. Fahim, G. Saake, A. M. Salem, F. A. Torkey, M. A. Ramadan
Abstract:
Data clustering is an important data exploration technique with many applications in data mining. We present an enhanced version of the well-known single link clustering algorithm, which we refer to as DCBOR. The proposed algorithm alleviates the chain effect by removing the outliers from the given dataset, so it provides outlier detection and data clustering simultaneously. The algorithm does not need to update the distance matrix, since it depends on merging the k-nearest objects in one step, and a cluster continues to grow as long as possible under a specified condition. The algorithm consists of two phases: in the first phase, it removes the outliers from the input dataset; in the second phase, it performs the clustering process. The algorithm discovers clusters of different shapes, sizes and densities, and requires only one input parameter, which represents a threshold for outlier points; its value ranges from 0 to 1, and the algorithm supports the user in determining an appropriate value for it. We have tested this algorithm on different datasets containing outliers and clusters connected by chains of density points, and the algorithm discovers the correct clusters. The results of our experiments demonstrate the effectiveness and the efficiency of DCBOR.
Keywords: Data Clustering, Clustering Algorithms, Handling Noise, Arbitrary Shape of Clusters.
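The two-phase idea can be sketched as: flag points whose k-NN density score falls below the threshold as outliers, then single-link cluster the remainder. This is a schematic reading of DCBOR, not the published pseudocode; the density score and threshold semantics are assumptions.

```python
# Schematic two-phase sketch in the spirit of DCBOR (not the
# published pseudocode): (1) drop low-density points as outliers,
# (2) single-link cluster the remainder.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.cluster.hierarchy import linkage, fcluster

def dcbor_like(X, k=4, threshold=0.5, cut=1.0):
    d = cdist(X, X)
    # density score: inverse of mean distance to the k nearest points
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    score = knn.min() / knn                  # normalised to (0, 1]
    keep = score >= threshold                # phase 1: outlier removal
    Z = linkage(X[keep], method="single")    # phase 2: single link
    labels = fcluster(Z, t=cut, criterion="distance")
    return keep, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, .2, (30, 2)),
               rng.normal(3, .2, (30, 2)),
               [[1.5, 1.5]]])                # a bridging outlier
keep, labels = dcbor_like(X)
print(keep.sum(), "points kept,", len(set(labels)), "clusters")
```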
8931 Analyzing the Factors Affecting the Passenger Car Breakdowns using Com-Poisson GLM
Authors: N. Mamode Khan, V. Jowaheer
Abstract:
The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of explanatory factors has been a challenging problem. In this paper, we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over a year. We note that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using a Com-Poisson regression model. We use a quasi-likelihood estimation approach to estimate the parameters of the model. The under-dispersion parameter is estimated to be 2.14, justifying the appropriateness of the Com-Poisson distribution in modelling the under-dispersed count responses recorded in this study.
Keywords: Breakdowns, under-dispersion, Com-Poisson, generalized linear model, quasi-likelihood estimation.
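For reference, the Com-Poisson (Conway-Maxwell-Poisson) distribution generalizes the Poisson with a dispersion parameter ν, where ν > 1 gives under-dispersion. The sketch below evaluates its pmf with a truncated normalizing constant; the λ value is illustrative, while ν echoes the 2.14 estimate quoted above.

```python
# Sketch of the Conway-Maxwell-Poisson pmf; nu > 1 yields the
# under-dispersion discussed above (the lambda value is illustrative).
import math

def com_poisson_pmf(y, lam, nu, terms=100):
    # log-space evaluation of Z = sum_j lam^j / (j!)^nu for stability
    logs = [j * math.log(lam) - nu * math.lgamma(j + 1)
            for j in range(terms)]
    m = max(logs)
    log_z = m + math.log(sum(math.exp(l - m) for l in logs))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)

lam, nu = 3.0, 2.14   # nu matches the under-dispersion in the paper
probs = [com_poisson_pmf(y, lam, nu) for y in range(10)]
mean = sum(y * p for y, p in enumerate(probs))
var = sum((y - mean) ** 2 * p for y, p in enumerate(probs))
print(f"mean={mean:.3f} variance={var:.3f}")  # variance < mean
```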
8930 Swarm Intelligence based Optimal Linear Phase FIR High Pass Filter Design using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach
Authors: Sangeeta Mandal, Rajib Kar, Durbadal Mandal, Sakti Prasad Ghoshal
Abstract:
In this paper, an optimal design of a linear phase digital high pass finite impulse response (FIR) filter using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach (PSO-CFIWA) is presented. In the design process, the filter length, pass band and stop band frequencies, and feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem, and conventional gradient-based optimization techniques are not efficient for digital filter design. Given the filter specifications to be realized, the PSO-CFIWA algorithm generates a set of optimal filter coefficients and tries to meet the ideal frequency response characteristic. In this paper, for the given problem, designs of optimal FIR high pass filters of different orders have been performed. The simulation results have been compared to those obtained by well-accepted algorithms such as the Parks and McClellan algorithm (PM) and the genetic algorithm (GA). The results justify that the proposed optimal filter design approach using PSO-CFIWA outperforms PM and GA, not only in the accuracy of the designed filter but also in convergence speed and solution quality.
Keywords: FIR Filter, PSO-CFIWA, PSO, Parks and McClellan Algorithm, Evolutionary Optimization Technique, Magnitude Response, Convergence, High Pass Filter.
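The sketch below shows one common way to combine the constriction factor with an inertia weight in the PSO velocity update; the paper's exact variant and the toy sphere objective are assumptions, with the sphere function standing in for the filter's ripple-error cost.

```python
# Sketch of a PSO velocity update combining constriction factor and
# inertia weight (one common CFIWA form; the paper's exact variant
# and the toy sphere objective are assumptions).
import math, random

c1 = c2 = 2.05
phi = c1 + c2                                    # must exceed 4
chi = 2.0 / abs(2.0 - phi - math.sqrt(phi**2 - 4.0 * phi))  # ~0.729

def pso(cost, dim=5, particles=20, iters=200, w=0.9):
    X = [[random.uniform(-1, 1) for _ in range(dim)]
         for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    P = [x[:] for x in X]                        # personal bests
    g = min(P, key=cost)[:]                      # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = chi * (w * V[i][d]
                                 + c1 * r1 * (P[i][d] - X[i][d])
                                 + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(P[i]):
                P[i] = X[i][:]
        g = min(P, key=cost)[:]
    return g

sphere = lambda x: sum(v * v for v in x)   # stand-in for ripple error
print(sphere(pso(sphere)))                 # should approach zero
```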
8929 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. This is because this kind of bias, when present in text or other forms of information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text will boost research in the above-mentioned areas significantly. It can also come in handy for platforms like Wikipedia, where the use of neutral language is of importance. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as an upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporated with an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), compiled from Wikipedia edits that removed various biased instances from sentences, as a benchmark dataset, with which we also compare our model to existing approaches. Experimental analysis indicates an improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
Keywords: Subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing.
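A hedged sketch of the architecture the abstract describes is below: frozen BERT embeddings feeding a BiLSTM whose states are pooled by additive attention. The 'bert-base-uncased' checkpoint, hidden size, pooling details and two-class head are assumptions, not the paper's configuration.

```python
# Sketch of a BERT -> BiLSTM -> attention classifier (checkpoint,
# hidden size and pooling details are assumptions, not the paper's).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class BertBiLSTMAttention(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # additive attention scorer
        self.out = nn.Linear(2 * hidden, 2)     # biased vs neutral

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                   # BERT as frozen embedder
            emb = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)                   # contextual BiLSTM states
        scores = self.attn(torch.tanh(h)).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        alpha = torch.softmax(scores, dim=-1)   # attention weights
        pooled = (alpha.unsqueeze(-1) * h).sum(dim=1)
        return self.out(pooled)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["The film was an obvious masterpiece."],
            return_tensors="pt", padding=True)
model = BertBiLSTMAttention()
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # (1, 2)
```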
8928 Alternative to M-Estimates in Multisensor Data Fusion
Authors: Nga-Viet Nguyen, Georgy Shevlyakov, Vladimir Shin
Abstract:
To solve the problem of multisensor data fusion under non-Gaussian channel noise, the advanced M-estimates are known to be a robust solution, while trading off some accuracy. In order to improve the estimation accuracy while still maintaining equivalent robustness, a two-stage robust fusion algorithm is proposed, using a preliminary rejection of outliers followed by an optimal linear fusion. The numerical experiments show that the proposed algorithm is equivalent to the M-estimates in the case of uncorrelated local estimates and significantly outperforms the M-estimates when the local estimates are correlated.
Keywords: Data fusion, estimation, robustness, M-estimates.
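The two-stage idea can be sketched as: discard local estimates far from the sample median (outlier rejection), then combine the survivors by inverse-variance weighting, the optimal linear fusion for uncorrelated sensors. The 3-MAD rejection rule and the sensor variances below are illustrative assumptions.

```python
# Sketch of a two-stage fusion: median-based outlier rejection, then
# inverse-variance (optimal linear) fusion of the surviving sensors.
# The 3*MAD rejection rule and sensor variances are assumptions.
import numpy as np

def two_stage_fusion(estimates, variances, k=3.0):
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    med = np.median(estimates)
    mad = np.median(np.abs(estimates - med)) + 1e-12
    keep = np.abs(estimates - med) <= k * 1.4826 * mad  # stage 1
    w = 1.0 / variances[keep]                           # stage 2
    return np.sum(w * estimates[keep]) / np.sum(w)

sensors = [10.1, 9.9, 10.2, 25.0]      # last sensor hit by heavy noise
variances = [0.5, 0.4, 0.6, 0.5]
print(two_stage_fusion(sensors, variances))  # ~10.05, outlier rejected
```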
8927 Self Watermarking based on Visual Cryptography
Authors: Mahmoud A. Hassan, Mohammed A. Khalili
Abstract:
We propose a simple watermarking method based on visual cryptography. The method is based on the selection of specific pixels from the original image, instead of the random selection of pixels as in Hwang's paper [1]. Verification information is generated, which will be used to verify the ownership of the image without the need to embed the watermark pattern into the original digital data. Experimental results show the proposed method can recover the watermark pattern from the marked data even if some changes are made to the original digital data.
Keywords: Watermarking, visual cryptography, visual threshold.
8926 Multiclass Support Vector Machines for Environmental Sounds Classification Using log-Gabor Filters
Authors: S. Souli, Z. Lachiri
Abstract:
In this paper we propose a robust environmental sound classification approach based on spectrogram features derived from log-Gabor filters. This approach includes two methods. In the first method, the spectrograms are passed through an appropriate log-Gabor filter bank, and the outputs are averaged and undergo an optimal feature selection procedure based on a mutual information criterion. The second method uses the same steps, but applied only to three patches extracted from each spectrogram.
To investigate the accuracy of the proposed methods, we conduct experiments using a large database containing 10 environmental sound classes. The classification results based on multiclass Support Vector Machines show that the second method is the most efficient, with an average classification accuracy of 89.62%.
Keywords: Environmental sounds, Log-Gabor filters, Spectrogram, SVM Multiclass, Visual features.
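For reference, the standard radial log-Gabor transfer function is G(f) = exp(-(ln(f/f0))^2 / (2 (ln(sigma/f0))^2)); a small frequency-domain filter-bank sketch is below. The centre frequencies, bandwidth ratio and random spectrum column are assumptions, not the paper's filter-bank design.

```python
# Sketch of a radial log-Gabor filter bank in the frequency domain:
# G(f) = exp(-(ln(f/f0))^2 / (2 ln(sigma_ratio)^2)).
# Centre frequencies and the 0.65 bandwidth ratio are assumptions.
import numpy as np

def log_gabor_bank(n_bins, centre_freqs, sigma_ratio=0.65):
    f = np.linspace(1e-6, 0.5, n_bins)       # avoid log(0) at DC
    bank = [np.exp(-np.log(f / f0) ** 2
                   / (2.0 * np.log(sigma_ratio) ** 2))
            for f0 in centre_freqs]
    return np.array(bank)

bank = log_gabor_bank(256, [0.05, 0.1, 0.2, 0.4])
spectrum_col = np.random.rand(256)            # one spectrogram column
features = bank @ spectrum_col                # one response per filter
print(features.shape)                         # (4,)
```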
8925 A Fuzzy Tumor Volume Estimation Approach Based On Fuzzy Segmentation of MR Images
Authors: Sara A. Yones, Ahmed S. Moussa
Abstract:
Quantitative measurements of tumors in general, and tumor volume in particular, become more realistic with the use of Magnetic Resonance imaging, especially when the tumor's morphological changes become irregular and difficult to assess by clinical examination. However, tumor volume estimation strongly depends on the image segmentation, which is fuzzy by nature. In this paper a fuzzy approach is presented for tumor volume segmentation based on the fuzzy connectedness algorithm. The fuzzy affinity matrix resulting from the segmentation is then used to estimate a fuzzy volume based on a certainty parameter, an Alpha Cut, defined by the user. The proposed method was shown to strongly affect treatment decisions. A statistical analysis was performed in this study to validate the results against a manual method for volume estimation, and the importance of using the Alpha Cut is further explained.
Keywords: Alpha Cut, Fuzzy Connectedness, Magnetic Resonance Imaging, Tumor volume estimation.
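The alpha-cut volume idea reduces to: count the voxels whose fuzzy membership meets the user's certainty threshold and multiply by the voxel volume. A minimal sketch, with a synthetic membership map and made-up voxel dimensions:

```python
# Minimal sketch of alpha-cut volume estimation: keep voxels whose
# fuzzy membership >= alpha, multiply the count by the voxel volume.
# The membership map and voxel size are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)
membership = rng.random((64, 64, 24))        # fuzzy segmentation output
voxel_mm3 = 1.0 * 1.0 * 3.0                  # in-plane 1 mm, slice 3 mm

def alpha_cut_volume(membership, alpha, voxel_mm3):
    return np.count_nonzero(membership >= alpha) * voxel_mm3

for alpha in (0.5, 0.7, 0.9):
    vol = alpha_cut_volume(membership, alpha, voxel_mm3)
    print(f"alpha={alpha}: {vol / 1000.0:.1f} cm^3")
# Higher certainty (larger alpha) shrinks the estimated volume.
```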
8924 Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques
Authors: Hossein Nezamabadi-pour, Saeid Saryazdi
Abstract:
In this paper, we present a new and effective image indexing technique that extracts features directly from the DCT domain. Our proposed approach is object-based image indexing. For each block of size 8×8 in the DCT domain, a feature vector is extracted. Then, the feature vectors of all blocks of the image are clustered into groups using a k-means algorithm, where each cluster represents a distinct object of the image. We then select the clusters that have the largest membership after clustering; the centroids of the selected clusters are taken as image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.
Keywords: Object-based image retrieval, DCT domain, Image indexing, Image classification.
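A sketch of the pipeline as described: 8×8 block DCT features, k-means over the blocks, and the centroids of the largest clusters as the image index. The parameter choices (k = 8, top 4 clusters) and the random stand-in image are illustrative assumptions.

```python
# Sketch of the block-DCT indexing pipeline: 8x8 block DCT features,
# k-means over blocks, largest-cluster centroids as the image index.
# Parameters (k=8, top 4 clusters) are illustrative assumptions.
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def block_dct_features(img, block=8):
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dctn(img[r:r + block, c:c + block], norm="ortho")
            feats.append(coeffs.flatten())
    return np.array(feats)

def index_vectors(img, k=8, top=4):
    feats = block_dct_features(img)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    sizes = np.bincount(km.labels_, minlength=k)
    largest = np.argsort(sizes)[::-1][:top]   # clusters with most blocks
    return km.cluster_centers_[largest]

img = np.random.rand(64, 64)                  # grayscale stand-in image
print(index_vectors(img).shape)               # (4, 64) index vectors
```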
8923 Estimation of Missing or Incomplete Data in Road Performance Measurement Systems
Authors: Kristjan Kuhi, Kati K. Kaare, Ott Koppel
Abstract:
Modern management in most fields is performance-based; both planning and implementation of maintenance and operational activities are driven by appropriately defined performance indicators. Continuous real-time data collection for management is becoming feasible due to technological advancements, but outdated and insufficient input data may result in incorrect decisions. When using deterministic models, the uncertainty of the object state is not visible; thus, applying deterministic models is more likely to give a false diagnosis. Constructing structured probabilistic models of the performance indicators, taking into consideration the surrounding indicator environment, makes it possible to estimate the trustworthiness of the indicator values. It also assists in filling gaps in the data to improve the quality of the performance analysis and management decisions. In this paper, the authors discuss the application of probabilistic graphical models in road performance measurement and propose a high-level conceptual model that enables analyzing and predicting future pavement deterioration more precisely based on road utilization.
Keywords: Probabilistic graphical models, performance indicators, road performance management, data collection.
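As a toy illustration of how a probabilistic model can quantify indicator trustworthiness and fill data gaps, the sketch below infers a hidden pavement state from a noisy sensor reading by enumeration over a two-node model. All probability numbers are invented for illustration.

```python
# Toy two-node probabilistic model: hidden pavement state -> noisy
# sensor reading. Infers P(state | reading) by enumeration.
# All probability numbers are invented for illustration.
prior = {"good": 0.7, "poor": 0.3}                # P(state)
likelihood = {                                     # P(reading | state)
    "good": {"smooth": 0.8, "rough": 0.2},
    "poor": {"smooth": 0.3, "rough": 0.7},
}

def posterior(reading):
    joint = {s: prior[s] * likelihood[s][reading] for s in prior}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

print(posterior("rough"))   # {'good': ~0.40, 'poor': ~0.60}
# With no reading (a data gap), the prior itself is the best estimate.
```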
8922 The First Integral Approach in Stability Problem of Large Scale Nonlinear Dynamical Systems
Authors: M. Kidouche, H. Habbi, M. Zelmat, S. Grouni
Abstract:
In analyzing large scale nonlinear dynamical systems, it is often desirable to treat the overall system as a collection of interconnected subsystems. Solution properties of the large scale system are then deduced from the solution properties of the individual subsystems and the nature of the interconnections. In this paper a new approach is proposed for the stability analysis of large scale systems, based upon the concept of vector Lyapunov functions and decomposition methods. The present results make use of graph-theoretic decomposition techniques in which the overall system is partitioned into a hierarchy of strongly connected components. We then show that, under very reasonable assumptions, the overall system is stable once the strongly connected subsystems are stable. Finally, an example is given to illustrate the constructive methodology proposed.
Keywords: Comparison principle, First integral, Large scale system, Lyapunov stability.
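The decomposition step itself is standard graph machinery: partition the interconnection digraph into strongly connected components and order them as a hierarchy via the condensation DAG. A minimal sketch with illustrative edges:

```python
# Sketch of the graph-theoretic decomposition step: partition the
# interconnection graph into strongly connected components and order
# them hierarchically (the example edges are illustrative).
import networkx as nx

# Directed edges: subsystem i influences subsystem j.
g = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 3), (4, 5)])

sccs = list(nx.strongly_connected_components(g))
print(sccs)                       # e.g. [{1, 2}, {3, 4}, {5}]

# The condensation is a DAG over the components: the hierarchy in
# which each component's stability can be checked in turn.
dag = nx.condensation(g)
print(list(nx.topological_sort(dag)))
```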
8921 Roof Material Detection Based on Object-Based Approach Using WorldView-2 Satellite Imagery
Authors: Ebrahim Taherzadeh, Helmi Z. M. Shafri, Kaveh Shahi
Abstract:
One of the most important tasks in urban remote sensing is the detection of impervious surfaces (IS), such as roofs and roads. However, detection of IS in heterogeneous areas remains one of the most challenging tasks. In this study, detection of concrete roofs using an object-based approach is proposed. A new rule-based classification was developed to detect concrete roof tiles. This rule-based classification was applied to a WorldView-2 image, and the results showed that the proposed rules have good potential to predict concrete roof material from WorldView-2 images, with 85% accuracy.
Keywords: Urban remote sensing, impervious surface, object-based, roof material, concrete tile, WorldView-2.
8920 Optimizing Approach for Sifting Process to Solve a Common Type of Empirical Mode Decomposition Mode Mixing
Authors: Saad Al-Baddai, Karema Al-Subari, Elmar Lang, Bernd Ludwig
Abstract:
Empirical mode decomposition (EMD), a new data-driven method of time-series decomposition, has the advantage of not assuming that a time series is linear or stationary, an assumption implicitly made in Fourier decomposition. However, EMD suffers from the mode mixing problem in some cases. The aim of this paper is to present a solution for a common type of signal causing the EMD mode mixing problem, namely a signal that contains an intermittency. In an artificial example, the solution shows superior performance in terms of coping with the EMD mode mixing problem compared with the conventional EMD and Ensemble Empirical Mode Decomposition (EEMD). Furthermore, the over-sifting problem is completely avoided, and the computational load is reduced roughly six times compared with EEMD with an ensemble number of 50.
Keywords: Empirical mode decomposition, mode mixing, sifting process, over-sifting.
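For context, one sifting iteration of EMD subtracts the mean of the upper and lower extrema envelopes from the signal; mode mixing arises when an intermittent component contaminates the extrema. A minimal sketch of a single sifting step with cubic-spline envelopes (boundary handling and stopping criteria are omitted, and the burst signal is illustrative):

```python
# Minimal sketch of one EMD sifting iteration: build cubic-spline
# envelopes through the extrema and subtract their mean. Boundary
# effects and stopping criteria are ignored for brevity.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None                       # too few extrema to envelope
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2.0      # candidate IMF component

t = np.linspace(0, 1, 2000)
base = np.sin(2 * np.pi * 5 * t)
burst = 0.3 * np.sin(2 * np.pi * 60 * t) * (np.abs(t - 0.5) < 0.05)
h = sift_once(t, base + burst)            # the intermittent burst
print(h is not None and h.shape)          # distorts the extrema
```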
8919 Integration of Seismic and Seismological Data Interpretation for Subsurface Structure Identification
Authors: Iftikhar Ahmed Satti, Wan Ismail Wan Yusoff
Abstract:
The structural interpretation of a part of the eastern Potwar region (Missa Keswal) has been carried out with the available seismological, seismic and well data. The seismological data contain both source parameters and fault plane solution (FPS) parameters, and the seismic data contain ten seismic lines that were re-interpreted using well data. The structural interpretation depicts two broad fault sets, namely thrust and back-thrust faults. These faults together give rise to pop-up structures in the study area and are also responsible for many structural traps and for the seismicity. The seismic interpretation includes time and depth contour maps of the Chorgali Formation, while the seismological interpretation includes focal mechanism solutions (FMS); depth, frequency and magnitude bar graphs; and a renewal of the seismotectonic map. The focal mechanism solutions (FMS) that surround the study area are correlated with different geological and structural maps of the area to determine the nature of the subsurface faults. Results of the structural interpretation from both seismic and seismological data show good correlation. It is hoped that the present work will help in better understanding the variations in the subsurface structure and can be a useful tool for earthquake prediction, oil field planning and reservoir monitoring.
Keywords: Focal mechanism solution (FMS), Fault plane solution (FPS), Reservoir monitoring, Earthquake prediction.
8918 Faults Forecasting System
Authors: Hanaa E.Sayed, Hossam A. Gabbar, Shigeji Miyazaki
Abstract:
This paper presents a Faults Forecasting System (FFS) that utilizes statistical forecasting techniques in analyzing process variable data in order to forecast fault occurrences. FFS proposes a new idea for detecting faults: current fault detection techniques are based on analyzing the current status of the system variables in order to check whether that status is faulty or not, whereas FFS uses forecasting techniques to predict the future timing of faults before they happen. The proposed model applies a subset modeling strategy and a Bayesian approach in order to decrease the dimensionality of the process variables and improve the fault forecasting accuracy. A practical experiment was designed and implemented at Okayama University, Japan, and the comparison shows that our proposed model achieves high forecasting accuracy ahead of the fault time.
Keywords: Bayesian Techniques, Faults Detection, Forecasting techniques, Multivariate Analysis.
8917 Persuasive Communication on Social Egg Freezing in California from a Framing Theory Perspective
Authors: Leila Mohammadi
Abstract:
This paper presents the impact of persuasive communication implemented by fertility clinic websites, and how this information influences women in their decision-making about undertaking this procedure. The factors influencing women's decisions to undertake social egg freezing (SEF) are analyzed from a framing theory perspective, with a specific focus on the impact of persuasive information on women's decision making. This study follows a quantitative approach. A two-phase survey was conducted to examine the rate of interest in undertaking SEF. In the first phase, a questionnaire was available to women for a month (May 2015), asking whether or not they had enough information about this process, with a total of 230 answers. The second phase took place in the last two weeks of July 2015. All the respondents were invited to a seminar called ‘All about egg freezing’, and afterwards they were requested to answer the second questionnaire. After the seminar, in which they were given an extensive amount of information about egg freezing, a total of 115 women replied to the questionnaire. The data collected during this process were analyzed using descriptive statistics. Most of the respondents changed their opinion in the second questionnaire, administered after they had received the information. Although in the first questionnaire their self-evaluation of their knowledge about this process and the technologies involved was very high, they realized that they still needed to access more information from different sources in order to be able to make a decision. The study reached the conclusion that persuasive and framed information from clinics affects the decisions of these women. Whatever the reasons and motivations women have for egg freezing, providing them with the necessary information and unprejudiced data about this process (such as its positive and negative aspects, requirements, suppositions, possibilities and consequences) would help them make a more precise and reasonable decision about what they are buying.
Keywords: Decision making, fertility clinics, framing theory, persuasive information, social egg freezing.
8916 An Optimization Model for Natural Gas Supply Chain through a Cost Approach under Uncertainty
Abstract:
Natural gas, as one of the most important sources of energy for many industrial and domestic users all over the world, has a complex, huge supply chain which is in need of heavy investments in all the phases of exploration, extraction, production, transportation, storage and distribution. The main purpose of a supply chain is to meet customers' needs efficiently and at minimum cost. In this study, with the aim of minimizing economic costs, the different levels of the natural gas supply chain have been modeled in the form of a multi-echelon, multi-period fuzzy linear program. In this model, different constraints are defined, including constraints on demand satisfaction, capacity, input/output balance, and the presence or absence of a path. The obtained results suggest the efficiency of the recommended model in optimal allocation and in the reduction of supply chain costs.
Keywords: Cost Approach, Fuzzy Theory, Linear Programming, Natural Gas Supply Chain.
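To make the formulation concrete, the sketch below solves a toy single-echelon, two-period version: triangular fuzzy demands are defuzzified to crisp expected values before a cost-minimizing LP with capacity and balance constraints. The numbers, defuzzification choice and network structure are all illustrative assumptions, far simpler than the paper's multi-echelon model.

```python
# Toy sketch of the fuzzy LP idea: defuzzify triangular fuzzy demands,
# then minimize production + storage cost over two periods subject to
# capacity and demand-satisfaction constraints. All numbers assumed.
import numpy as np
from scipy.optimize import linprog

def defuzzify(tri):
    a, b, c = tri                 # triangular fuzzy number (a, b, c)
    return (a + 2 * b + c) / 4.0  # a simple expected-value formula

demand = [defuzzify((80, 100, 130)), defuzzify((90, 110, 140))]
prod_cost, hold_cost, capacity = 5.0, 1.0, 120.0

# Variables: p1, p2 (production), s1 (stock carried to period 2).
c = [prod_cost, prod_cost, hold_cost]
# Balance: p1 - s1 = d1  ;  p2 + s1 = d2 (equality constraints).
A_eq = [[1, 0, -1], [0, 1, 1]]
b_eq = demand
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, capacity), (0, capacity), (0, None)])
print(res.x, res.fun)             # production plan and minimum cost
```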