Search results for: factorization of computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 559

229 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia indexing and retrieval systems are generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based similarity algorithms are quite inefficient and computation-intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a projection of feature graphs into a 2D space (Graph Code) as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelization. As a consequence, the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.
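
As an illustration of the idea, here is a minimal sketch of a matrix-shaped Graph Code and a traversal-free similarity measure; the encoding and feature vocabulary below are hypothetical, not the authors' exact scheme:

```python
import numpy as np

# Hypothetical Graph Code: a feature graph over a fixed feature vocabulary
# is projected onto a 2D matrix. Diagonal cells flag which features are
# present; off-diagonal cells hold relationship-type ids between features.
VOCAB = {"person": 0, "tree": 1, "sky": 2, "ball": 3}

def graph_code(features, edges, n=len(VOCAB)):
    """features: iterable of vocab ids; edges: {(i, j): relationship_id}."""
    gc = np.zeros((n, n), dtype=np.int32)
    for i in features:
        gc[i, i] = 1
    for (i, j), rel in edges.items():
        gc[i, j] = rel
    return gc

def similarity(a, b):
    # Element-wise matrix comparison: no graph traversal, trivially
    # vectorized and parallelizable, unlike subgraph matching.
    return float(np.mean(a == b))

g1 = graph_code([0, 1], {(0, 1): 2})   # person left-of tree (rel id 2)
g2 = graph_code([0, 1], {(0, 1): 3})   # person above tree (rel id 3)
print(similarity(g1, g2))              # 0.9375: 15 of 16 cells agree
```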

Keywords: indexing, retrieval, multimedia, graph algorithm, graph code

Procedia PDF Downloads 133
228 Improved Multi–Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation

Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta

Abstract:

Recently, nature-inspired algorithms have found widespread use in tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One of the major applications of OGRs is as an unequally spaced channel-allocation algorithm in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results show that the proposed optimization algorithm has superior performance compared to the existing conventional computing and nature-inspired optimization algorithms for finding OGRs, in terms of ruler length, total optical channel bandwidth and computation time.
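
For reference, a Golomb ruler is a set of integer marks in which every pairwise difference is distinct; its length is the largest mark. A minimal checker, independent of the firefly optimizer itself:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """True if all pairwise differences between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# A known optimal 4-mark ruler of length 6: differences 1..6, all distinct.
print(is_golomb_ruler([0, 1, 4, 6]))   # True
print(is_golomb_ruler([0, 1, 2, 4]))   # False (differences 1 and 2 repeat)
```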

Keywords: channel allocation, conventional computing, four-wave mixing, nature-inspired algorithm, optimal Golomb ruler, Lévy flight distribution, optimization, improved multi-objective firefly algorithms, Pareto optimal

Procedia PDF Downloads 292
227 Data Security and Privacy Challenges in Cloud Computing

Authors: Amir Rashid

Abstract:

Cloud computing frameworks empower organizations to cut expenses by outsourcing computation resources on demand. As of now, customers of cloud service providers have no means of verifying the privacy and ownership of their information and data. To address this issue, we propose a trusted cloud computing platform (TCCP). TCCP enables Infrastructure as a Service (IaaS) providers, for example Amazon EC2, to provide a closed-box execution environment that guarantees confidential execution of guest virtual machines. It also permits clients to attest the IaaS provider and determine whether the service is secure before they launch their virtual machines. This paper proposes a Trusted Cloud Computing Platform (TCCP) for guaranteeing the privacy and integrity of computed data that are outsourced to IaaS service providers. The TCCP provides the abstraction of a closed-box execution environment for a client's VM, guaranteeing that no privileged administrator at the cloud provider can inspect or tamper with its data. Furthermore, before launching the VM, the TCCP allows a client to reliably and remotely verify that the backend provider is running a trusted TCCP. This capability extends attestation to the whole service and hence allows a client to verify that its data operations run in a secure mode.

Keywords: cloud security, IaaS, cloud data privacy and integrity, hybrid cloud

Procedia PDF Downloads 269
226 Bidirectional Long Short-Term Memory-Based Signal Detection for Orthogonal Frequency Division Multiplexing With All Index Modulation

Authors: Mahmut Yildirim

Abstract:

This paper proposes bidirectional long short-term memory (Bi-LSTM) network-aided deep learning (DL)-based signal detection for orthogonal frequency division multiplexing with all index modulation (OFDM-AIM), namely Bi-DeepAIM. OFDM-AIM was developed to increase the spectral efficiency of OFDM with index modulation (OFDM-IM), a promising multi-carrier technique for communication systems beyond 5G. In this paper, due to its strong classification ability, the Bi-LSTM is considered an alternative to the maximum likelihood (ML) algorithm, which is used for signal detection in the classical OFDM-AIM scheme. The performance of Bi-DeepAIM is compared with the LSTM network-aided DL-based OFDM-AIM (DeepAIM) and the classic OFDM-AIM that uses ML-based signal detection, in terms of bit error rate (BER) performance and computation time. Simulation results show that Bi-DeepAIM obtains better BER performance than DeepAIM and lower computation time in signal detection than ML-based OFDM-AIM.
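
A minimal PyTorch sketch of a Bi-LSTM detector of the kind described, mapping a sequence of received (I, Q) samples to one of the candidate subblock classes; the layer sizes and class count are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    """Toy Bi-LSTM detector: maps a sequence of received (I, Q) samples
    to one of n_classes candidate subblock realizations."""
    def __init__(self, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):          # x: (batch, seq_len, 2) real/imag parts
        out, _ = self.lstm(x)      # (batch, seq_len, 2*hidden)
        return self.fc(out[:, -1]) # classify from the last time step

logits = BiLSTMDetector(n_classes=16)(torch.randn(8, 4, 2))
print(logits.shape)  # torch.Size([8, 16])
```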

Keywords: bidirectional long short-term memory, deep learning, maximum likelihood, OFDM with all index modulation, signal detection

Procedia PDF Downloads 33
225 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas

Abstract:

This paper deals with advanced state estimation algorithms for the estimation of biomass concentration and specific growth rate in a typical fed-batch biotechnological process. The biotechnological process was represented by a nonlinear mass-balance-based process model. An Extended Kalman Filter (EKF) and a Particle Filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in the EKF and PF estimation algorithms. This model does not require any special growth kinetic equations and can be applied for state estimation in various bioprocesses. The focus of this investigation was the comparison of the estimation quality of the EKF and PF estimators under different measurement noises. The simulation results show that the particle filter algorithm requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. The tuning procedure for the particle filter is also simpler than for the EKF. Consequently, the particle filter should be preferred in real applications, especially for monitoring of industrial bioprocesses where simplified implementation procedures are always desirable.
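
A minimal bootstrap particle filter sketch for a scalar state, with a hypothetical exponential-growth model standing in for the paper's mass-balance equations and OUR/BC measurement models:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(ys, n_particles=1000, q=0.05, r=0.1):
    """Bootstrap PF for x_k = x_{k-1} + mu*x_{k-1}*dt + process noise,
    observed as y_k = x_k + measurement noise (a stand-in for OUR data)."""
    mu, dt = 0.1, 1.0
    x = rng.normal(1.0, 0.1, n_particles)          # initial particle cloud
    estimates = []
    for y in ys:
        x = x + mu * x * dt + rng.normal(0, q, n_particles)   # predict
        w = np.exp(-0.5 * ((y - x) / r) ** 2)                 # weight
        w /= w.sum()
        x = rng.choice(x, n_particles, p=w)                   # resample
        estimates.append(x.mean())                            # posterior mean
    return np.array(estimates)

true = np.exp(0.1 * np.arange(1, 11))              # "biomass" trajectory
print(particle_filter(true + rng.normal(0, 0.1, 10)))
```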

Keywords: biomass concentration, extended Kalman filter, particle filter, state estimation, specific growth rate

Procedia PDF Downloads 401
224 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High-density electrical prospecting has been widely used in groundwater investigation, civil engineering and environmental surveys. For efficient inversion, the forward modeling routine, the sensitivity calculation, and the inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing and inversion of electrical resistivity data, based on a compilation of academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
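
The linearized least-squares inversion mentioned above reduces to a damped Gauss-Newton update; a schematic step is sketched below, with a placeholder linear forward model and Jacobian standing in for a real resistivity forward solver:

```python
import numpy as np

def gauss_newton_step(m, d_obs, forward, jacobian, damping=0.01):
    """One damped (Marquardt-Levenberg) least-squares update:
    solve (J^T J + lambda*I) dm = J^T (d_obs - F(m))."""
    J = jacobian(m)                       # sensitivity matrix
    r = d_obs - forward(m)                # data residual
    A = J.T @ J + damping * np.eye(len(m))
    return m + np.linalg.solve(A, J.T @ r)

# Placeholder linear "forward model" G m = d, for illustration only.
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
m_true = np.array([100.0, 30.0])          # resistivities in ohm-m
d = G @ m_true
m = np.array([50.0, 50.0])                # starting model
for _ in range(20):
    m = gauss_newton_step(m, d, lambda m: G @ m, lambda m: G)
print(m)  # converges toward [100, 30]
```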

Keywords: inversion, limitations, optimization, resistivity

Procedia PDF Downloads 339
223 Abdominal Organ Segmentation in CT Images Based On Watershed Transform and Mosaic Image

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Accurate liver, spleen and kidney segmentation in abdominal CT images is one of the most important steps for computer-aided diagnosis of abdominal organ pathologies. In this paper, we propose a new semi-automatic algorithm for liver, spleen and kidney extraction in abdominal CT images. The proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation, based on the mosaic image and on the computation of the watershed transform. The algorithm proceeds in two parts. In the first, we seek to improve the quality of the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentations performed by an expert. The experimental results are described in the last part of this work.
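
A compact sketch of the smoothing/gradient/marker-watershed chain using scikit-image on synthetic data; a Gaussian filter stands in for the anisotropic diffusion step, and all thresholds are illustrative:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel
from skimage.morphology import opening, disk
from skimage.segmentation import watershed

# Synthetic stand-in for an abdominal CT slice: two bright "organs".
img = np.zeros((128, 128))
img[20:60, 20:60] = 1.0
img[70:110, 70:110] = 0.8
img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)

smoothed = gaussian(img, sigma=2)        # stands in for anisotropic diffusion
smoothed = opening(smoothed, disk(3))    # morphological cleanup
gradient = sobel(smoothed)               # gradient image for the watershed

# Markers obtained by thresholding suppress over-segmentation: the
# watershed floods from the two organ seeds instead of every minimum.
markers, _ = ndi.label(smoothed > 0.5)
labels = watershed(gradient, markers)
print(np.unique(labels))                 # two regions found
```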

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, multi-abdominal organ segmentation, watershed algorithm

Procedia PDF Downloads 468
222 A Predictive Analytics Approach to Project Management: Reducing Project Failures in Web and Software Development Projects

Authors: Tazeen Fatima

Abstract:

The use of project management in web and software development projects is very significant. It has been observed that even with the application of effective project management, projects often fail to complete their lifecycle. To minimize these failures, previous studies have introduced key performance indicators (KPIs), yet there are always gaps and problems in the KPIs identified. Despite incessant efforts at technical and managerial levels, projects still fail, and there is no substantial approach to identify and avoid these failures at the very beginning of the project lifecycle. In this study, we address these research problems by analyzing predictive analytics, a specialized technology that is now easy to apply given modern computational resources. Project organizations can use data gathering, compute power, and modern tools to render efficient predictions. The research aims to identify such a predictive analytics approach. The core objective of the study is to reduce failures and introduce effective implementation of project management principles. Existing predictive analytics methodologies, tools and solution providers were also analyzed. Relevant data were gathered from projects and analyzed via predictive techniques to make predictions well in advance, so as to render effective project management in the web and software development industry.

Keywords: project management, predictive analytics, predictive analytics methodology, project failures

Procedia PDF Downloads 311
221 Geothermal Energy Evaluation of Lower Benue Trough Using Spectral Analysis of Aeromagnetic Data

Authors: Stella C. Okenu, Stephen O. Adikwu, Martins E. Okoro

Abstract:

The geothermal energy resource potential of the Lower Benue Trough (LBT) in Nigeria was evaluated in this study using spectral analysis of high-resolution aeromagnetic (HRAM) data. The reduced-to-the-equator aeromagnetic data were divided into sixteen (16) overlapping blocks, and each block was analyzed to obtain the radially averaged power spectrum, which enabled the computation of the top and centroid depths to magnetic sources. These values were then used to assess the Curie point depth (CPD), geothermal gradients, and heat flow variations in the study area. Results showed that the CPD varies from 7.03 to 18.23 km, with an average of 12.26 km; geothermal gradient values vary between 31.82 and 82.50 °C/km, with an average of 51.21 °C/km; and heat flow varies from 79.54 to 206.26 mW/m², with an average of 128.02 mW/m². Shallow CPD zones that run from the eastern through the western and southwestern parts of the study area correspond to zones of high geothermal gradients and high subsurface heat flow. These areas signify zones associated with anomalous subsurface thermal conditions and are therefore recommended for detailed geothermal energy exploration studies.
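
The standard spectral workflow behind these numbers estimates a top depth Zt and a centroid depth Z0 from slopes of the radially averaged power spectrum and takes Zb = 2·Z0 − Zt as the Curie point depth; the gradient then follows from an assumed Curie temperature (commonly 580 °C) and heat flow from Fourier's law. A schematic version, where the wavenumber convention and thermal conductivity are assumptions:

```python
import numpy as np

def spectral_depths(k, power, band_top, band_cen):
    """Depth estimates from a radially averaged power spectrum P(k).
    Top depth Zt from the slope of ln(sqrt(P)) vs k; centroid depth Z0
    from the slope of ln(sqrt(P)/k) vs k. Assumes k in rad/km; a 2*pi
    factor is needed if k is in cycles/km."""
    zt = -np.polyfit(k[band_top], np.log(np.sqrt(power[band_top])), 1)[0]
    z0 = -np.polyfit(k[band_cen],
                     np.log(np.sqrt(power[band_cen]) / k[band_cen]), 1)[0]
    return zt, z0

def geothermal(zt, z0, curie_temp=580.0, conductivity=2.5):
    """CPD Zb = 2*Z0 - Zt; gradient from an assumed 580 degC Curie
    temperature; heat flow q = k * gradient (Fourier's law)."""
    zb = 2 * z0 - zt                 # km
    grad = curie_temp / zb           # degC/km
    q = conductivity * grad          # numerically mW/m^2 for k in W/m/K
    return zb, grad, q

# Depths chosen so Zb matches the study's average of 12.26 km.
zb, grad, q = geothermal(zt=2.0, z0=7.13)
print(zb, round(grad, 2), round(q, 2))   # 12.26  47.31  118.27
```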

Keywords: geothermal energy, Curie point depth, geothermal gradient, heat flow, aeromagnetic data, LBT

Procedia PDF Downloads 46
220 Controller Design for Highly Maneuverable Aircraft Technology Using Structured Singular Value and Direct Search Method

Authors: Marek Dlapa

Abstract:

The algebraic approach is applied to the control of HiMAT (Highly Maneuverable Aircraft Technology). The objective is to find a robust controller which guarantees robust stability and decoupled control of the longitudinal model of a scaled, remotely controlled vehicle version of the advanced HiMAT fighter. Control design is performed by decoupling the nominal MIMO (multi-input multi-output) system into two identical SISO (single-input single-output) plants, which are approximated by a 4th-order transfer function. The algebraic approach is then used for pole-placement design, and the nominal closed-loop poles are tuned so that the peak of the µ-function is minimal. As an optimization tool, the evolutionary algorithm Differential Migration is used in order to overcome the multimodality of the cost function, yielding a simple controller with decoupling for the nominal plant. The controller is compared with the D-K iteration through simulations of standard longitudinal manoeuvres, documenting the decoupled control obtained from the algebraic approach for the nominal plant as well as for the worst-case perturbation.

Keywords: algebraic approach, evolutionary computation, genetic algorithms, HiMAT, robust control, structured singular value

Procedia PDF Downloads 120
219 A Mathematical Based Prediction of the Forming Limit of Thin-Walled Sheet Metals

Authors: Masoud Ghermezi

Abstract:

Studying sheet metals is one of the most important research areas in the field of metal forming due to their extensive applications in the aerospace industries. A useful method for determining the forming limit of these materials, and consequently preventing the rupture of sheet metals during the forming process, is the forming limit curve (FLC). In addition to specifying the forming limit, this curve also delineates a boundary for the allowed values of strain in sheet metal forming; these characteristics of the FLC, along with its computational accuracy and wide range of applications, have made this curve the basis of the research in the present paper. This study presents a new model that not only agrees with the results obtained from the M-K theory, but also eliminates its shortcomings. In this model, as in the M-K theory, a thin sheet with an inhomogeneity in the form of a gradient thickness reduction with a sinusoidal function is chosen and subjected to two-dimensional stress. Through analytical evaluation, a governing differential equation is ultimately obtained. The numerical solution of this equation for the range of positive strains (stretched region) yields results that agree with those obtained from the M-K theory, while its solution for the range of negative strains (tension region) completes the FLC curve. The findings obtained by applying this equation to two alloys with hardening exponents of 0.4 and 0.24 indicate the validity of the presented equation.

Keywords: sheet metal, metal forming, forming limit curve (FLC), M-K theory

Procedia PDF Downloads 341
218 Classification of Sequential Sports Using Automata Theory

Authors: Aniket Alam, Sravya Gurram

Abstract:

This paper proposes a categorization of sports that is based on the system of rules a sport must adhere to. We focus on these systems of rules to examine how a winner is produced in different sports. The rules of a sport dictate the game play and the direction it takes. We propose to break down the game play into events. At this junction, we observe two kinds of events that constitute the game play of a sport: ones that follow sequential logic and ones that do not. Our focus pertains to sports that are comprised of sequential events. To examine these events further, and to understand how a winner emerges, we use finite-state automata from the theory of computation (automata theory). We show how sequential sports can be represented as finite state machines, which we depict as state diagrams. We examine these state diagrams to observe how a team or player reaches the final states of the sport, with a special focus on the final state that determines the winner. This exercise has been carried out for the following sports: hurdles, track, shot put, long jump, bowling, badminton, Pacman and weightlifting (snatch). Based on our observations of how this final state of winning is achieved, we propose a categorization of sports.
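
A minimal finite-state machine in the spirit described, modeling a hypothetical best-of-three match: states count the sets won by each side, and the final states determine the winner.

```python
# Minimal FSM for a best-of-three match: states are (sets won by A,
# sets won by B); the final (accepting) states determine the winner.
TRANSITIONS = {
    (0, 0): {"A": (1, 0), "B": (0, 1)},
    (1, 0): {"A": (2, 0), "B": (1, 1)},
    (0, 1): {"A": (1, 1), "B": (0, 2)},
    (1, 1): {"A": (2, 1), "B": (1, 2)},
}
FINAL = {(2, 0): "A wins", (2, 1): "A wins",
         (0, 2): "B wins", (1, 2): "B wins"}

def run(events):
    state = (0, 0)
    for e in events:               # each event: which side takes the set
        state = TRANSITIONS[state][e]
        if state in FINAL:
            return FINAL[state]
    return "match unfinished"

print(run(["A", "B", "A"]))        # A wins
```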

Keywords: sport classification, sport modelling, ontology, automata theory

Procedia PDF Downloads 98
217 Bayesian Variable Selection in Quantile Regression with Application to the Health and Retirement Study

Authors: Priya Kedia, Kiranmoy Das

Abstract:

There is a rich literature on variable selection in the regression setting. However, most of these methods assume normality of the response variable for implementing the methodology and establishing the statistical properties of the estimates. In many real applications, the distribution of the response variable may be non-Gaussian, and one might be interested in finding the best subset of covariates at some predetermined quantile level. We develop a dynamic Bayesian approach for variable selection in the quantile regression framework. We use a zero-inflated mixture prior for the regression coefficients, and consider the asymmetric Laplace distribution for the response variable for modeling different quantiles of its distribution. An efficient Gibbs sampler is developed for our computation. Our proposed approach is assessed through extensive simulation studies, and a real application of the proposed approach is also illustrated. We consider data from the Health and Retirement Study conducted by the University of Michigan, and select the important predictors when the outcome of interest is out-of-pocket medical cost, an important measure of financial risk. Our analysis finds important predictors at different quantiles of the outcome, and thus enhances our understanding of the effects of different predictors on out-of-pocket medical cost.
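
Under the asymmetric Laplace likelihood, Bayesian quantile regression targets the same solution as minimizing the pinball (check) loss ρ_τ(u) = u(τ − 1{u < 0}) at quantile τ. A small frequentist illustration of quantile regression with statsmodels (not the authors' Gibbs sampler), on synthetic heteroscedastic cost-like data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
# Cost-like outcome whose spread grows with x: different quantiles
# therefore have genuinely different slopes.
y = 2.0 + 0.5 * x + rng.exponential(0.5 + 0.3 * x)

X = sm.add_constant(x)
for q in (0.25, 0.5, 0.9):
    res = sm.QuantReg(y, X).fit(q=q)
    print(q, np.round(res.params, 2))   # slope increases with the quantile
```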

Keywords: variable selection, quantile regression, Gibbs sampler, asymmetric Laplace distribution

Procedia PDF Downloads 130
216 Computation of Flood and Drought Years over the North-West Himalayan Region Using Indian Meteorological Department Rainfall Data

Authors: Sudip Kumar Kundu, Charu Singh

Abstract:

The climatic condition over the Indian region is highly dependent on the monsoon. India receives the maximum amount of rainfall during the southwest monsoon, and the Indian economy is highly dependent on agriculture. The occurrence of flood and drought years influences the entire cultivation system as well as the economy of the country, as Indian agricultural systems are still highly dependent on monsoon rainfall. The present study investigates the flood and drought years for the north-west Himalayan region from 1951 to 2014 by using area-averaged Indian Meteorological Department (IMD) rainfall data. For this investigation, the normalized index (NI) has been utilized to determine whether a particular year is a drought or a flood year. The data have been extracted for the north-west Himalayan (NWH) region states, namely Uttarakhand (UK), Himachal Pradesh (HP) and Jammu and Kashmir (J&K), to find the rainy-season average rainfall for each year, the climatological mean and the standard deviation. The results, presented graphically, show that some years are drought years, some are flood years, and the rest are neutral. The flood and drought years can also be related to the large-scale phenomena El-Niño and La-Niña.
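
The normalized index for each year is NI = (R − mean)/std over the rainy-season totals; a minimal classifier with a hypothetical ±1 threshold (the paper's exact cutoff may differ) and invented rainfall totals:

```python
import numpy as np

def classify_years(rainfall, threshold=1.0):
    """Normalized index NI_i = (R_i - mean) / std; flag flood and
    drought years where NI crosses the threshold."""
    r = np.asarray(rainfall, dtype=float)
    ni = (r - r.mean()) / r.std()
    return ["flood" if v > threshold else
            "drought" if v < -threshold else
            "neutral" for v in ni]

rain = [820, 910, 600, 1150, 870, 640, 1000, 880]   # hypothetical mm totals
print(classify_years(rain))   # one flood year, two drought years here
```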

Keywords: IMD, rainfall, normalized index, flood, drought, NWH

Procedia PDF Downloads 268
215 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion and environmental changes, resulting in low-quality videos and hence affecting the efficiency of feature extraction and description. Moreover, traditional text detection methods cannot be directly applied to lecture videos. Therefore, robust feature extraction methods dedicated to this specific video genre are required for robust and accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. For robust and effective feature extraction, moment functions are used. Two distinct types of moments are used: orthogonal and non-orthogonal. For the orthogonal type, both Zernike and pseudo-Zernike moments are used, whereas for the non-orthogonal type, Hu moments are used. Expressivity and description efficiency are given and discussed. The proposed approach shows that, in general, orthogonal moments achieve higher accuracy than the non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments, with better computation time.
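
Hu's seven moment invariants are available directly in OpenCV; a minimal descriptor for a binary text-candidate region is sketched below (Zernike and pseudo-Zernike moments would require an additional library, and the toy region is invented):

```python
import cv2
import numpy as np

def hu_descriptor(region):
    """Seven Hu moment invariants of a binary text-candidate region,
    log-scaled for numerical stability."""
    m = cv2.moments(region.astype(np.uint8), True)  # True: binary image
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Toy "text" region: a filled rectangle on an empty frame patch.
patch = np.zeros((64, 64), dtype=np.uint8)
cv2.rectangle(patch, (10, 24), (54, 40), 255, -1)   # -1: filled
print(hu_descriptor(patch > 0))
```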

Keywords: text detection, text localization, lecture videos, pseudo Zernike moments

Procedia PDF Downloads 128
214 Analyzing the Effect of Design of Pipe in Shell and Tube Type Heat Exchanger by Measuring Its Heat Transfer Rate by Computational Fluid Dynamics and Thermal Approach

Authors: Dhawal Ladani

Abstract:

Shell and tube type heat exchangers are predominantly used for heat exchange between two fluids and in other applications. This paper proposes an optimal design of the pipe used in the heat exchanger that minimizes the vibration occurring in the pipe. The paper also compares different pipe designs in order to maximize the heat transfer rate by converting laminar flow into turbulent flow; with the updated design, the flow-induced vibration in the pipe is also decreased. Computational fluid dynamics and thermal heat transfer analyses are carried out to justify the results. Currently, straight pipes are used in shell and tube type heat exchangers, whereas in this paper the pipe includes curvature along its length. Hence, the heat transfer area is increased, resulting in an increased heat transfer rate. The curvature-type design is also useful for creating turbulence and minimizing vibration. The results compare the effects of laminar and turbulent flow in the heat exchange mechanism, and the inverse effect of the boundary layer in the heat exchanger is also examined.

Keywords: heat exchanger, heat transfer rate, laminar and turbulent effect, shell and tube

Procedia PDF Downloads 286
213 Transportation Mode Classification Using GPS Coordinates and Recurrent Neural Networks

Authors: Taylor Kolody, Farkhund Iqbal, Rabia Batool, Benjamin Fung, Mohammed Hussaeni, Saiqa Aleem

Abstract:

The rising threat of climate change has led to an increase in public awareness and care about our collective and individual environmental impact. A key component of this impact is our use of cars and other polluting forms of transportation, but it is often difficult for an individual to know how severe this impact is. While there are applications that offer this feedback, they require manual entry of the transportation mode used for a given trip, which can be burdensome. In order to alleviate this shortcoming, data from the 2016 TRIPlab datasets have been used to train a variety of machine learning models to automatically recognize the mode of transportation. An accuracy of 89.6% is achieved using a single deep neural network model with a Gated Recurrent Unit (GRU) architecture applied directly to trip data points over 4 primary classes, namely walking, public transit, car, and bike. These results are comparable in accuracy to results achieved by others using ensemble methods and require far less computation when classifying new trips. The lack of trip context data, e.g., bus routes, bike paths, etc., and the need for only a single set of weights make this an appropriate methodology for applications hoping to reach a broad demographic and provide responsive feedback.

Keywords: classification, gated recurrent unit, recurrent neural network, transportation

Procedia PDF Downloads 108
212 Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus

Authors: Rita Magdalena, N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Sofia Saidah, Bima Sakti

Abstract:

Vessel segmentation of the retinal fundus is important in the biomedical sciences for diagnosing ailments related to the eye, as it can help medical experts assess the state of a retinal fundus image. In this study, we designed MATLAB software that enables the segmentation of the retinal blood vessels in retinal fundus images. There are two main steps in the segmentation process. The first step is image preprocessing, which aims to improve the quality of the image so that it can be optimally segmented. The second step is image segmentation, which extracts the retinal blood vessels from the eye fundus image. The image segmentation methods analyzed in this study are morphology operations, the discrete wavelet transform, and a combination of both. The dataset used in this project consists of 40 retinal images and 40 manually segmented images. After several testing scenarios, the average accuracy for the morphology operation method is 88.46%, while for the discrete wavelet transform it is 89.28%. By combining the two methods, the average accuracy increases to 89.53%. The result of this study is an image processing system that can segment the blood vessels in the retinal fundus with high accuracy and low computation time.
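
A compact Python sketch of the two branches on a hypothetical fundus image: a morphological black-hat to enhance dark vessels, and a 2-D discrete wavelet transform whose detail bands respond to thin structures. Kernel sizes, thresholds and the input file name are illustrative, not the study's MATLAB settings:

```python
import cv2
import numpy as np
import pywt

def segment_vessels(green_channel):
    """Toy vessel segmentation combining morphology and DWT detail bands."""
    g = cv2.equalizeHist(green_channel)

    # Branch 1: black-hat highlights dark vessels on a brighter background.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    morph = cv2.morphologyEx(g, cv2.MORPH_BLACKHAT, kernel)

    # Branch 2: 2-D DWT detail coefficients respond to thin structures.
    _, (cH, cV, cD) = pywt.dwt2(g.astype(float), "haar")
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    detail = cv2.resize(detail, g.shape[::-1])   # back to full resolution

    mask = (morph > 20) | (detail > detail.mean() + 2 * detail.std())
    return mask.astype(np.uint8) * 255

img = cv2.imread("fundus.png")                 # hypothetical input file
if img is not None:
    vessels = segment_vessels(img[:, :, 1])    # green channel: best contrast
    cv2.imwrite("vessels.png", vessels)
```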

Keywords: discrete wavelet transform, fundus retina, morphology operation, segmentation, vessel

Procedia PDF Downloads 172
211 A Distributed Cryptographically Generated Address Computing Algorithm for Secure Neighbor Discovery Protocol in IPv6

Authors: M. Moslehpour, S. Khorsandi

Abstract:

Due to the shortage of IPv4 addresses, the transition to IPv6 has gained significant momentum in recent years. Like the Address Resolution Protocol (ARP) in IPv4, the Neighbor Discovery Protocol (NDP) provides functions such as address resolution in IPv6. Despite its functionality, NDP is vulnerable to several attacks. To mitigate these attacks, Internet Protocol Security (IPsec) was introduced, but it was not efficient due to its limitations. Therefore, the SEND protocol was proposed to automatically protect the auto-configuration process, securing the neighbor discovery and address resolution processes. To defend against threats to NDP's integrity and identity, Cryptographically Generated Addresses (CGA) and asymmetric cryptography are used by SEND. Besides the advantages of SEND, its disadvantages, such as the computational cost and the sequential nature of the CGA generation algorithm, are considerable. In this paper, we parallelize this process across network resources in order to improve it. In addition, we compare the CGA generation time between self-computing and distributed-computing processes, focusing on the impact of malicious nodes on the CGA generation time in the network. According to the results, even when malicious nodes participate in the generation process, the CGA generation time is less than when it is computed by a single node. With a trust management system, detecting and isolating malicious nodes becomes easier.
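
The costly part of CGA generation is the Hash2 search of RFC 3972: find a modifier such that SHA-1 over (modifier || nine zero octets || public key) has 16·sec leading zero bits. Because candidate modifiers are independent, the search divides naturally among nodes. A simplified single-process sketch, with DER key encoding and other RFC details omitted:

```python
import hashlib
import os

def find_modifier(pubkey: bytes, sec: int) -> bytes:
    """Brute-force the CGA Hash2 condition: 16*sec leading zero bits of
    SHA-1(modifier || 9 zero octets || pubkey). Independent candidates
    make this search trivially divisible among cooperating nodes."""
    zero_bits = 16 * sec
    modifier = int.from_bytes(os.urandom(16), "big")   # random start point
    while True:
        m = modifier.to_bytes(16, "big")
        digest = hashlib.sha1(m + b"\x00" * 9 + pubkey).digest()
        if int.from_bytes(digest, "big") >> (160 - zero_bits) == 0:
            return m
        modifier = (modifier + 1) % (1 << 128)

mod = find_modifier(b"example-public-key", sec=1)      # ~2^16 hashes on average
print(mod.hex())
```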

Keywords: NDP, IPsec, SEND, CGA, modifier, malicious node, self-computing, distributed-computing

Procedia PDF Downloads 261
210 Two-Stage Launch Vehicle Trajectory Modeling for Low Earth Orbit Applications

Authors: Assem M. F. Sallam, Ah. El-S. Makled

Abstract:

This paper presents a study of the trajectory of a two-stage launch vehicle. The study includes the dynamic responses of the motion parameters as well as the variation of the angles affecting the orientation of the launch vehicle (LV). LV dynamic characteristics, including the state vector variation with the corresponding altitude and velocity at the separation of the different LV stages, as well as the angle of attack and flight path angles, are also discussed. A flight trajectory study of the drop zone of the first stage and the jettisoning of the fairing is included in the mathematical modeling to study their effect. To increase the accuracy of the LV model, an atmospheric model is used that takes into consideration the geographical location and the values of solar flux related to the date and time of launch; an accurate atmospheric model improves the calculation of the Mach number, which affects the drag force on the LV. The mathematical model is implemented in MATLAB-based software (Simulink). Available experimental data are compared with the results obtained from the theoretical computation model. The comparison shows good agreement, which proves the validity of the developed simulation model; the maximum error noticed was generally less than 10%, a result that motivates future work to further decrease this level of error.

Keywords: launch vehicle modeling, launch vehicle trajectory, mathematical modeling, MATLAB-Simulink

Procedia PDF Downloads 257
209 Seismic Hazard Analysis for a Multi Layer Fault System: Antalya (SW Turkey) Example

Authors: Nihat Dipova, Bulent Cangir

Abstract:

This article presents the results of a probabilistic seismic hazard analysis (PSHA) for Antalya (SW Turkey). Southwest Turkey is characterized by large earthquakes resulting from the continental collision between the African, Arabian and Eurasian plates and from crustal faults. Earthquakes around the study area are grouped into two classes: crustal earthquakes (D = 0-50 km) and subduction-zone earthquakes (D = 50-140 km). The maximum observed magnitude of subduction earthquakes is Mw = 6.0, while the maximum magnitude of crustal earthquakes is Mw = 6.6. The sources of crustal earthquakes are faults related to the Isparta Angle and Cyprus Arc tectonic structures. A new earthquake catalogue for Antalya, with a unified moment magnitude scale, has been prepared, and the seismicity of the area around Antalya city has been evaluated by defining the 'a' and 'b' parameters of the Gutenberg-Richter recurrence relationship. The standard Cornell-McGuire method has been used for the hazard computation, utilizing the CRISIS2007 software. The attenuation relationship proposed by Chiou and Youngs (2008) has been used for the 0-50 km earthquakes, and that of Youngs et al. (1997) for the deep subduction earthquakes. Finally, a seismic hazard map of peak horizontal acceleration on a uniform site condition of firm rock (average shear wave velocity of about 1130 m/s), at a hazard level of 10% probability of exceedance in 50 years, has been prepared.
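
The Gutenberg-Richter recurrence relationship has the form log₁₀ N(M) = a − bM, where N(M) is the annual rate of events of magnitude ≥ M. A least-squares estimate of the 'a' and 'b' parameters from a synthetic catalogue (real PSHA practice adds declustering and completeness corrections):

```python
import numpy as np

def gutenberg_richter(mags, years, m_min=4.0, dm=0.2):
    """Least-squares fit of log10 N(>=M) = a - b*M to a catalogue,
    where N is the annual rate of events at or above magnitude M."""
    mags = np.asarray(mags)
    bins = np.arange(m_min, mags.max(), dm)
    n_cum = np.array([(mags >= m).sum() for m in bins]) / years
    mask = n_cum > 0
    slope, intercept = np.polyfit(bins[mask], np.log10(n_cum[mask]), 1)
    return intercept, -slope              # a, b

# Hypothetical catalogue: exponential magnitudes above M4 imply b = 1.
rng = np.random.default_rng(1)
mags = 4.0 + rng.exponential(1 / np.log(10), 2000)
a, b = gutenberg_richter(mags, years=60)
print(round(a, 2), round(b, 2))           # b comes out close to 1.0
```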

Keywords: Antalya, peak ground acceleration, seismic hazard assessment, subduction

Procedia PDF Downloads 352
208 Experiences of Timing Analysis of Parallel Embedded Software

Authors: Muhammad Waqar Aziz, Syed Abdul Baqi Shah

Abstract:

The execution time analysis is fundamental to the successful design and execution of real-time embedded software. In such analysis, the Worst-Case Execution Time (WCET) of a program is a key measure, on the basis of which system tasks are scheduled. The WCET analysis of embedded software is also needed for system understanding and to guarantee its behavior. WCET analysis can be performed statically (without executing the program) or dynamically (through measurement). Traditionally, research on WCET analysis assumes sequential code running on single-core platforms. However, as computation is steadily moving towards a combination of parallel programs and multi-core hardware, new challenges in WCET analysis need to be addressed. In this article, we report our experiences of performing the WCET analysis of Parallel Embedded Software (PES) running on a multi-core platform. The primary purpose was to investigate how WCET estimates of PES can be computed statically, and how they can be derived dynamically. Our experiences, as reported in this article, include the challenges we faced, suggestions for addressing these challenges, and the workarounds that were developed. This article also provides observations on the benefits and drawbacks of deriving WCET estimates using the said methods and provides useful recommendations for further research in this area.

Keywords: embedded software, worst-case execution-time analysis, static flow analysis, measurement-based analysis, parallel computing

Procedia PDF Downloads 300
207 A Decision Support System to Detect the Lumbar Disc Disease on the Basis of Clinical MRI

Authors: Yavuz Unal, Kemal Polat, H. Erdinc Kocer

Abstract:

In this study, a decision support system comprising three stages is proposed to detect disc abnormalities of the lumbar region. In the first stage, feature extraction, T2-weighted sagittal and axial Magnetic Resonance Images (MRI) were taken from 55 people, and 27 appearance and shape features were acquired from both sagittal and transverse images. In the second stage, feature weighting, the k-means clustering based feature weighting (KMCBFW) method proposed by Gunes et al. was applied. Finally, in the third stage, classification, classifier algorithms including the multi-layer perceptron (MLP neural network), support vector machine (SVM), naïve Bayes, and decision tree were used to classify whether the subject has lumbar disc disease or not. In order to test the performance of the proposed method, the classification accuracy (%), sensitivity, specificity, precision, recall, f-measure, kappa value, and computation time were used. The best hybrid model for detecting lumbar disc disease, based on both sagittal and axial MR images, is the combination of k-means clustering based feature weighting and the decision tree.

Keywords: lumbar disc abnormality, lumbar MRI, lumbar spine, hybrid models, hybrid features, k-means clustering based feature weighting

Procedia PDF Downloads 499
206 Stability Enhancement of a Large-Scale Power System Using Power System Stabilizer Based on Adaptive Neuro Fuzzy Inference System

Authors: Agung Budi Muljono, I Made Ginarsa, I Made Ari Nrartha

Abstract:

A large-scale power system (LSPS) consists of two or more sub-systems connected by interconnecting transmission lines. The loading pattern of an LSPS changes from time to time and varies depending on consumer needs, and serious instability problems appear in an LSPS due to load fluctuations at all buses. An adaptive neuro-fuzzy inference system (ANFIS)-based power system stabilizer (PSS) is presented to address the stability problem and to enhance the stability of an LSPS. ANFIS control is chosen because it is computationally more efficient than Mamdani fuzzy control. Simulation results show that the presented PSS is able to maintain stability by decreasing the peak overshoot to −2.56 × 10⁻⁵ pu for the rotor speed deviation Δω₂₋₃, with a settling time of 3.78 s for the local-mode oscillation. Furthermore, the presented PSS improves the peak overshoot and settling time of Δω₃₋₉ to −0.868 × 10⁻⁵ pu and 3.50 s for the inter-area oscillation.

Keywords: ANFIS, large-scale, power system, PSS, stability enhancement

Procedia PDF Downloads 278
205 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible. Stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2ⁿ smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm that we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles that of the longest edge bisection algorithm, there are several notable differences as well.
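
A small recursive sketch of the largest angle bisection step: find the vertex carrying the largest angle and cut along its angle bisector, which meets the opposite side at the point dividing it in the ratio of the adjacent side lengths.

```python
import numpy as np

def bisect_largest_angle(tri):
    """Split a triangle along the bisector of its largest angle.
    tri: (3, 2) array of vertices. Returns the two child triangles."""
    tri = np.asarray(tri, dtype=float)
    angles = []
    for i in range(3):
        u = tri[(i + 1) % 3] - tri[i]
        v = tri[(i + 2) % 3] - tri[i]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cos, -1, 1)))
    i = int(np.argmax(angles))                 # vertex with the largest angle
    a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
    # The bisector from a meets bc at d with |bd| : |dc| = |ab| : |ac|.
    ab, ac = np.linalg.norm(b - a), np.linalg.norm(c - a)
    d = (ac * b + ab * c) / (ab + ac)
    return [np.array([a, b, d]), np.array([a, d, c])]

def refine(tris, steps):
    for _ in range(steps):
        tris = [child for t in tris for child in bisect_largest_angle(t)]
    return tris

mesh = refine([np.array([[0, 0], [1, 0], [0, 1]])], steps=3)
print(len(mesh))   # 8 triangles after 3 iterations
```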

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 370
204 Simulation of Improving the Efficiency of a Fire-Tube Steam Boiler

Authors: Roudane Mohamed

Abstract:

In this study we are interested in improving the efficiency of a 4.5 t/h steam boiler and minimizing the flue gas discharge temperature by adding a counter-flow heat exchanger at the boiler outlet of the energy system. The mathematical approach to the problem is based on the heat transfer equations for convection and conduction. These equations have been chosen because of their extensive use in a wide range of applications. Software was developed for solving the governing equations and estimating the thermal characteristics of the boiler through a study of the thermal characteristics of the heat exchanger by both the LMTD and NTU methods. Subsequently, an analysis of the thermal performance of the steam boiler was carried out by studying the influence of different operating parameters on heat flux densities, temperatures, exchanged power and performance. The study showed that the behavior of the boiler is largely influenced by these parameters. In the first regime (P = 3.5 bar), the boiler efficiency improved significantly, from 93.03% to 99.43%, at rates of 6.47% and 4.5%; for the maximum speed, the change is less significant, of the order of 1.06%. The results obtained in this study are of great interest to industrial utilities equipped with smoke-tube boilers, where air preheating can be used to determine the actual gas temperature, increase the heat exchanged, and minimize the flue gas discharge temperature. This work could also serve as a computation model in the design process.
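
The exchanger analysis referred to above rests on the LMTD and effectiveness-NTU relations for a counter-flow exchanger; a worked example with illustrative temperatures, not the boiler's measured data:

```python
import math

def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-flow exchanger
    (assumes the two terminal temperature differences are unequal)."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    return (dt1 - dt2) / math.log(dt1 / dt2)

def effectiveness_counterflow(ntu, cr):
    """Effectiveness-NTU relation for counter-flow, cr = Cmin/Cmax < 1."""
    return (1 - math.exp(-ntu * (1 - cr))) / \
           (1 - cr * math.exp(-ntu * (1 - cr)))

# Flue gas cooled from 250 to 150 degC while air heats from 20 to 90 degC.
print(round(lmtd_counterflow(250, 150, 20, 90), 1))    # ~144.5 degC
print(round(effectiveness_counterflow(ntu=1.5, cr=0.7), 3))  # ~0.654
```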

Keywords: numerical simulation, efficiency, fire tube, heat exchanger, convection and conduction

Procedia PDF Downloads 197
202 Investigation on a Wave-Powered Electrical Generator Consisting of a Geared Motor-Generator Housed by a Double-Cone Rolling on Concentric Circular Rails

Authors: Barenten Suciu

Abstract:

An electrical generator able to harness energy from water waves, designed as a double-cone geared motor-generator (DCGMG), is proposed and theoretically investigated. Similar to the differential gear mechanism used in the transmission system of automobile wheels, an angular speed differential is created between the cones rolling on two concentric circular rails. Water waves acting on the floating DCGMG produce this speed differential, and a gear-box amplifies it to gain sufficient torque for power generation. A model that allows computation of the speed differential, torque, and power of the DCGMG is suggested. The influence of various parameters regarding the construction of the DCGMG, as well as the contact between the double-cone and the rails, on the electro-mechanical output is emphasized. The results obtained indicate that the generated electrical power can be increased by augmenting the mass of the double-cone, the span of the rails, the apex angle of the cones, the friction between the cones and the rails, the amplification factor of the gear-box, and the efficiency of the motor-generator. Such findings are useful for formulating a design methodology for the proposed wave-powered generator.

Keywords: amplification of angular speed differential, circular concentric rails, double-cone, wave-powered electrical generator

Procedia PDF Downloads 131
202 Testing a Flexible Manufacturing System Facility Production Capacity through Discrete Event Simulation: Automotive Case Study

Authors: Justyna Rybicka, Ashutosh Tiwari, Shane Enticott

Abstract:

In the age of automation and computation aiding manufacturing, it is clear that manufacturing systems have become more complex than ever before. Although technological advances provide the capability to gain more value with fewer resources, the manufacturing capabilities available to organisations are sometimes difficult to utilise fully. Flexible manufacturing systems (FMS) provide a unique capability to manufacturing organisations where there is a need for product range diversification, delivering line efficiency through production flexibility. This is very valuable in trend-driven production set-ups or niche-volume production requirements. Although an FMS provides flexible and efficient facilities, its optimal set-up is key to achieving production performance. As many variables are interlinked due to the flexibility provided by the FMS, analytical calculations are not always sufficient to predict the FMS' performance. Simulation modelling is capable of capturing the complexity and constraints associated with an FMS. This paper demonstrates how discrete event simulation (DES) can address complexity in an FMS to optimise production line performance. A case study of an automotive FMS is presented. The DES model demonstrates different configuration options depending on the prioritised objective: utilisation or throughput. Additionally, this paper provides insight into the impact of system set-up constraints on FMS performance and demonstrates the exploration of the optimal production set-up.
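
A toy discrete event simulation of an FMS cell with SimPy; the machine counts, cycle times and arrival rate are invented, but the model shows how DES exposes the utilisation/throughput trade-off described above:

```python
import simpy

CYCLE_TIME = {"mill": 6.0, "drill": 4.0}       # minutes per part (invented)

def part(env, machines, done):
    """A part queues for each machine in turn, then records completion."""
    for op in ("mill", "drill"):
        with machines[op].request() as req:
            yield req                           # wait for a free machine
            yield env.timeout(CYCLE_TIME[op])   # hold it while processing
    done.append(env.now)

def source(env, machines, done, interarrival=3.0):
    while True:
        env.process(part(env, machines, done))
        yield env.timeout(interarrival)

env = simpy.Environment()
machines = {"mill": simpy.Resource(env, capacity=2),
            "drill": simpy.Resource(env, capacity=1)}
done = []
env.process(source(env, machines, done))
env.run(until=480)                              # one 8-hour shift
print(f"throughput: {len(done)} parts per shift")  # drill is the bottleneck
```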

Keywords: discrete event simulation, flexible manufacturing system, capacity performance, automotive

Procedia PDF Downloads 308
201 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation

Authors: Arian Hosseini, Mahmudul Hasan

Abstract:

To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy.

Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing

Procedia PDF Downloads 24
200 Three-Dimensional Unsteady Natural Convection and Entropy Generation in an Inclined Cubical Trapezoidal Cavity Subjected to Uniformly Heated Bottom Wall

Authors: Farshid Fathinia

Abstract:

Numerical computation of unsteady laminar three-dimensional natural convection and entropy generation in an inclined cubical trapezoidal air-filled cavity is performed for the first time in this work. The vertical right and left sidewalls of the cavity are maintained at constant cold temperatures. The lower wall is subjected to a constant hot temperature, while the upper one is considered insulated. Computations are performed for Rayleigh numbers in the range 10³ ≤ Ra ≤ 10⁵, while the trapezoidal cavity inclination angle is varied as 0° ≤ ϕ ≤ 180°. The Prandtl number is kept constant at Pr = 0.71. The second law of thermodynamics is applied to obtain the thermodynamic losses inside the cavity due to both heat transfer and fluid friction irreversibilities. The variation of the local and average Nusselt numbers is presented and discussed, while streamlines, isotherms and entropy contours are presented in both two- and three-dimensional patterns. The results show that when the Rayleigh number increases, the flow patterns change, especially in the three-dimensional results, and the flow circulation increases. The effect of the inclination angle on the total entropy generation becomes insignificant when the Rayleigh number is low. Moreover, as the Rayleigh number increases, the average Nusselt number increases.

Keywords: transient natural convection, trapezoidal cavity, three-dimensional flow, entropy generation, second law

Procedia PDF Downloads 328