Search results for: Software Estimation
1657 Determining the Best Fitting Distributions for Minimum Flows of Streams in Gediz Basin
Authors: Naci Büyükkaracığan
Abstract:
Today, the need for water sources is swiftly increasing due to population growth. At the same time, it is known that some regions will face water shortage and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends and the prediction of droughts and short-term flows, is of great importance. Selecting the most appropriate probability distribution is important for describing the low-flow statistics in studies related to drought analysis. As in many basins in Turkey, the Gediz River basin will be considerably affected by drought, which will decrease the amount of usable water. The aim of this study is to derive appropriate probability distributions for frequency analysis of annual minimum flows at 6 gauging stations of the Gediz Basin. After applying 10 different probability distributions, six different parameter estimation methods and three goodness-of-fit tests, the Pearson 3 and generalized extreme value distributions were found to give optimal results.
Keywords: Gediz Basin, goodness-of-fit tests, Minimum flows, probability distribution.
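To illustrate the kind of fitting and testing this abstract describes, the sketch below fits the two winning candidates with SciPy's pearson3 and genextreme distributions and compares them with a Kolmogorov-Smirnov test. It is only an illustration on synthetic data, not the authors' procedure, which also covers several parameter estimation methods:

```python
import numpy as np
from scipy import stats

# Hypothetical annual minimum flows (m^3/s) standing in for a gauging station record.
rng = np.random.default_rng(0)
flows = stats.pearson3.rvs(skew=0.8, loc=5.0, scale=1.5, size=60, random_state=rng)

for name, dist in [("Pearson 3", stats.pearson3), ("GEV", stats.genextreme)]:
    params = dist.fit(flows)                          # maximum-likelihood fit
    ks = stats.kstest(flows, dist.cdf, args=params)   # goodness-of-fit test
    print(f"{name}: params={np.round(params, 3)}, KS p-value={ks.pvalue:.3f}")
```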
1656 Analysis of a TBM Tunneling Effect on Surface Subsidence: A Case Study from Tehran, Iran
Authors: A. R. Salimi, M. Esmaeili, B. Salehi
Abstract:
The development and extension of large cities has induced a need for shallow tunnels in the soft ground of built-up areas. Estimation of the ground settlement caused by tunnel excavation is an important engineering problem. In this paper, prediction of the surface subsidence caused by tunneling in one section of line seven of the Tehran subway is considered. On the basis of the studied geotechnical conditions of the region, a tunnel with a length of 26.9 km has been excavated by a mechanized method using an EPB-TBM with a diameter of 9.14 m. Settlement is estimated utilizing both analytical and numerical (finite element) methods. The numerical method shows that the value of settlement in this section is 5 cm, while the analytical results (Bobet and Loganathan-Poulos) are 5.29 and 12.36 cm, respectively. According to the results of this study, due to saturation of this section, there is good agreement between the Bobet and numerical methods. Therefore, the tunneling process in this section needs special consolidation measures and a support system before the passage of the tunnel boring machine.
Keywords: TBM, Subsidence, Numerical Method, Analytical Method.
1655 Analytical Investigation of Sediment Formation and Transport in the Vicinity of the Water Intake Structures - A Case Study of the Dez Diversion Weir in Greater Dezful
Authors: M. Karavanmasjedi, N. Hedayat, A. Rohani, H. Shirin
Abstract:
The sedimentation process resulting from soil erosion in the water basin, especially in arid and semi-arid regions where vegetation cover on the upstream mountain slopes is poor, contributes to sediment formation. The consequences of sedimentation not only cause considerable change in the morphology of the river and its hydraulic characteristics but also pose a major challenge for the operation and maintenance of the canal network, which depends on water flow to meet the stakeholders' requirements. For this reason, mathematical modeling can be used to simulate the factors affecting scouring, sediment transport and settling along the waterways. This is particularly important behind reservoirs, where it enables the operators to estimate the useful life of these hydraulic structures. The aim of this paper is to simulate the sedimentation and erosion at the eastern and western water intake structures of the Dez Diversion weir using the GSTARS-3 software, in order to estimate the sedimentation, investigate ways to optimize the process and minimize operational problems. Results indicated that the coarser sediment grains tended to settle at the furthest point upstream of the diversion weir. The reason for this is the construction of the phantom bridge and the outstanding rocks just upstream of the structure; these have reduced the momentum energy required to push the sediment loads, making it possible for them to settle wherever the river regime allows. Results further indicated a trend in sediment size whereby the grains become smaller as the focus of study shifts downstream, and vice versa. It was also found that the findings of GSTARS-3 were in close agreement with the observed data. This suggests that the software is a powerful analytical tool which can be applied in river engineering projects with minimal cost and relatively accurate results.
Keywords: Erosion, sedimentation, Dez Diversion weir, GSTARS-3
1654 Multiple Moving Talker Tracking by Integration of Two Successive Algorithms
Authors: Kenji Suyama, Masahiro Oshida, Noboru Owada
Abstract:
In this paper, the estimation accuracy of multiple moving talker tracking using a microphone array is improved. The tracking is achieved by an adaptive method in which two algorithms are integrated, namely, the PAST (Projection Approximation Subspace Tracking) algorithm and the IPLS (Interior Point Least Square) algorithm. When a talker begins to speak again after a silent period, an appropriate feasible region for the evaluation function of the IPLS algorithm might not be set, and the tracking then fails due to the incorrect updating. Therefore, if an increase in the number of active talkers is detected, the feasible region must be reset. A low-cost realization is required for high-speed tracking, while a high-accuracy realization is desired for precise tracking. In this paper, the directions roughly estimated using the delay-and-sum array method are used for the resetting. Several results of experiments performed in an actual room environment show the effectiveness of the proposed method.
Keywords: moving talkers tracking, microphone array, signal subspace
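For context, the rough direction estimate used for the resetting can be obtained with a plain delay-and-sum scan over candidate angles. The sketch below assumes a uniform linear array and a single narrowband far-field source; it is a generic illustration, not the authors' implementation:

```python
import numpy as np

def delay_and_sum_doa(X, mic_spacing, f0, c=343.0):
    """Return the angle (degrees) maximizing delay-and-sum output power.

    X: (num_mics, num_samples) complex narrowband snapshots at frequency f0 (Hz).
    """
    num_mics = X.shape[0]
    angles = np.deg2rad(np.arange(-90, 91))
    powers = []
    for theta in angles:
        # Inter-microphone delays for a plane wave arriving from direction theta.
        delays = np.arange(num_mics) * mic_spacing * np.sin(theta) / c
        steering = np.exp(-2j * np.pi * f0 * delays)
        y = steering.conj() @ X            # align the channels and sum
        powers.append(np.mean(np.abs(y) ** 2))
    return float(np.rad2deg(angles[int(np.argmax(powers))]))

# Usage (hypothetical): delay_and_sum_doa(snapshots, mic_spacing=0.05, f0=1000.0)
```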
1653 Transformer Top-Oil Temperature Modeling and Simulation
Authors: T. C. B. N. Assunção, J. L. Silvino, P. Resende
Abstract:
The winding hot-spot temperature is one of the most critical parameters affecting the useful life of power transformers. The winding hot-spot temperature can be calculated as a function of the top-oil temperature, which can be estimated using the ambient temperature and measured transformer loading data. This paper proposes the estimation of the top-oil temperature by a method based on the Least Squares Support Vector Machines approach. The estimated top-oil temperature is compared with measured data of a power transformer in operation. The results are also compared with methods based on the IEEE Standard C57.91-1995/2000 and Artificial Neural Networks. It is shown that the Least Squares Support Vector Machines approach presents better performance than the methods based on the IEEE Standard C57.91-1995/2000 and artificial neural networks.
Keywords: Artificial Neural Networks, Hot-spot Temperature, Least Squares Support Vector, Top-oil Temperature.
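Least Squares SVM regression amounts to solving a ridge-regularized linear system in kernel space. As an illustrative stand-in (not the authors' code), scikit-learn's KernelRidge, which shares the same least-squares formulation, can map ambient temperature and load to top-oil temperature; all data below are hypothetical:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical hourly records: [ambient temperature (deg C), per-unit load].
X = np.array([[25.0, 0.6], [28.0, 0.8], [30.0, 1.0], [22.0, 0.5], [27.0, 0.9]])
y = np.array([48.0, 55.0, 63.0, 44.0, 58.0])   # measured top-oil temperature (deg C)

# RBF kernel ridge regression: the same least-squares-in-feature-space idea as LS-SVM.
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X, y)
print(model.predict(np.array([[26.0, 0.7]])))  # estimated top-oil temperature
```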
1652 Investigation on Mesh Sensitivity of a Transient Model for Nozzle Clogging
Authors: H. Barati, M. Wu, A. Kharicha, A. Ludwig
Abstract:
A transient model for nozzle clogging has been developed and successfully validated against a laboratory experiment. Key steps of clogging are considered: transport of particles by the turbulent flow towards the nozzle wall; interactions between the fluid flow and the nozzle wall, and the adhesion of particles on the wall; and the growth of the clog layer and its interaction with the flow. The aim of the current paper is to investigate the mesh (size and type) sensitivity of the model in both two and three dimensions. It is found that the algorithm for clog growth alone, excluding the flow effect, is insensitive to the mesh type and size, but the calculation including flow becomes sensitive to the mesh quality. The use of 2D meshes leads to overestimation of the clog growth because the 3D nature of the flow in the boundary layer cannot be properly resolved by a 2D calculation. 3D simulation with a tetrahedral mesh can also lead to an erroneous estimate of the clog growth. A mesh-independent result can be achieved with a hexahedral mesh, or at least with triangular prisms (inflation layers) for near-wall regions.
Keywords: Clogging, nozzle, numerical model, simulation.
1651 Modified Hybrid Genetic Algorithm-Based Artificial Neural Network Application on Wall Shear Stress Prediction
Authors: Zohreh Sheikh Khozani, Wan Hanna Melini Wan Mohtar, Mojtaba Porhemmat
Abstract:
Prediction of wall shear stress in a rectangular channel with non-homogeneous roughness distribution was studied. Estimation of shear stress is an important subject in hydraulic engineering, since it affects the flow structure directly. In this study, the Genetic Algorithm Artificial (GAA) neural network is introduced as a hybrid methodology combining the Artificial Neural Network (ANN) with a modified Genetic Algorithm (GA). This GAA method was employed to predict the wall shear stress. Various input combinations and transfer functions were considered to find the most appropriate GAA model. The results show that the proposed GAA method could predict the wall shear stress of open channels with high accuracy, with a Root Mean Square Error (RMSE) of 0.064 on the test dataset. Thus, using GAA provides an accurate and practical simple-to-use equation.
Keywords: Artificial neural network, genetic algorithm, genetic programming, rectangular channel, shear stress.
1650 Long-Range Dependence of Financial Time Series Data
Authors: Chatchai Pesee
Abstract:
This paper examines the long-range dependence, or long memory, of financial time series using exchange rate data and the fractional Brownian motion (fBm). The principle of the spectral density function in Section II is used to find the range of the Hurst parameter (H) of the fBm. If 0 < H < 1/2, the process has short-range dependence (SRD), while it exhibits long memory, or long-range dependence (LRD), if 1/2 < H < 1. The curve of the exchange rate data is an fBm because of the specific value of the Hurst parameter (H). Furthermore, definitions of the fBm, long-range dependence and self-similarity are also reviewed in Section II. Our results in Section III indicate that there exists long memory, or long-range dependence (LRD), in the exchange rate data. Long-range dependence of the exchange rate data and estimation of the Hurst parameter (H) are discussed in Section IV, while conclusions are drawn in Section V.
Keywords: Fractional Brownian motion, long-range dependence, memory, short-range dependence.
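One standard spectral-density route to H (a sketch, not necessarily the paper's exact estimator): for fractional Gaussian noise, i.e. the increments of fBm, the spectral density behaves like S(f) proportional to |f|^(1-2H) near zero frequency, so a log-log regression of the periodogram on frequency recovers H from the slope. With synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
increments = rng.standard_normal(4096)   # stand-in for exchange-rate increments (true H = 1/2)

# Periodogram of the increment series (fractional Gaussian noise if the path is fBm).
spec = np.abs(np.fft.rfft(increments)) ** 2
freq = np.fft.rfftfreq(increments.size)

# Low-frequency log-log regression: slope beta ~ 1 - 2H  =>  H = (1 - beta) / 2.
lo = slice(1, increments.size // 16)
beta = np.polyfit(np.log(freq[lo]), np.log(spec[lo]), 1)[0]
print("Estimated H:", (1.0 - beta) / 2.0)    # near 0.5 for white-noise increments
```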
1649 Precombining Adaptive LMMSE Detection for DS-CDMA Systems in Time Varying Channels: Non Blind and Blind Approaches
Authors: M. D. Kokate, T. R. Sontakke, P. W. Wani
Abstract:
This paper deals with an adaptive multiuser detector for direct-sequence code division multiple access (DS-CDMA) systems. A modified receiver, the precombining LMMSE detector, is considered in a time-varying channel environment. The detector is updated according to two criteria: mean square error (MSE) estimation and the MOE optimization technique. The adaptive implementation issues of these two schemes are quite different. The MSE criterion updates the filter weights by minimizing the error between the data vector and the adaptive vector. The MOE criterion, together with a canonical representation of the detector, results in a constrained optimization problem. Even though the canonical representation is very complicated under time-varying channels, it is analyzed under the assumption of an average power profile for the multipath replicas of the user of interest. The performance of both schemes is studied under practical SNR conditions. Results show that for poor SNR, the MSE precombining LMMSE detector is better than the blind precombining LMMSE detector, but for higher SNR, the MOE scheme outperforms it.
1648 An Attempt to Predict the Performances of a Rocket Thrust Chamber
Authors: A. Benarous, D. Karmed, R. Haoui, A. Liazid
Abstract:
The process for predicting the ballistic properties of a liquid rocket engine is based on the quantitative estimation of deviations from idealized performance. To this end, an equilibrium chemistry procedure is first developed and implemented in a Fortran routine. The thermodynamic formulation allows for the calculation of the theoretical performances of a rocket thrust chamber. In a second step, a computational fluid dynamics analysis of the turbulent reactive flow within the chamber is performed using a finite volume approach. The obtained values for the "quasi-real" performances account for both turbulent mixing and chemistry-turbulence coupling. In the present work, emphasis is placed on the combustion efficiency, for which the deviation is mainly due to radial gradients of static temperature and mixture ratio. Numerical values of the characteristic velocity are successfully compared with results from an industry-used code. The results are also confronted with the experimental data of a laboratory-scale rocket engine.
Keywords: JANAF methodology, Liquid rocket engine, Mascotte test-rig, Theoretical performances.
1647 Intelligent Process and Model Applied for E-Learning Systems
Authors: Mafawez Alharbi, Mahdi Jemmali
Abstract:
E-learning is a developing area, especially in education, and can provide several benefits to learners. An intelligent system that collects all the components satisfying user preferences is therefore important. This research presents an approach that is capable of personalizing e-information and giving users what they need according to their preferences. The proposal builds up knowledge from successive evaluations made by the user and, in addition, can learn from the user's habits. Finally, we present a walk-through to demonstrate how the intelligent process works.
Keywords: Artificial intelligence, architecture, e-learning, software engineering, processing.
1646 Contribution of On-Site and Off-Site Processes to Greenhouse Gas (GHG) Emissions by Wastewater Treatment Plants
Authors: Laleh Yerushalmi, Fariborz Haghighat, Maziar Bani Shahabadi
Abstract:
The estimation of the overall on-site and off-site greenhouse gas (GHG) emissions of wastewater treatment plants revealed that in anaerobic and hybrid treatment systems greater emissions result from off-site processes than from on-site processes, whereas in aerobic treatment systems, on-site processes make the larger contribution to the overall GHG emissions. The total GHG emissions were estimated to be 1.6, 3.3 and 3.8 kg CO2-e/kg BOD in the aerobic, anaerobic and hybrid treatment systems, respectively. In the aerobic treatment system without the recovery and use of the generated biogas, the off-site GHG emissions were 0.65 kg CO2-e/kg BOD, accounting for 40.2% of the overall GHG emissions. This value changed to 2.3 and 2.6 kg CO2-e/kg BOD, accounting for 69.9% and 68.1% of the overall GHG emissions in the anaerobic and hybrid treatment systems, respectively. The increased off-site GHG emissions in the anaerobic and hybrid treatment systems are mainly due to material usage and energy demand in these systems. The anaerobic digester can contribute up to 100%, 55% and 60% of the overall energy needs of plants in the aerobic, anaerobic and hybrid treatment systems, respectively.
Keywords: On-site and off-site greenhouse gas (GHG) emissions, wastewater treatment plants, biogas recovery
1645 The Effectiveness of Synthesizing A-Pillar Structures in Passenger Cars
Authors: Chris Phan, Yong Seok Park
Abstract:
The Toyota Camry is one of the best-selling cars in America. It is economical, reliable and, most importantly, safe. These attributes made the Camry a trustworthy choice for buyers seeking a dependable vehicle. However, a new finding called the Camry's safety into question. Since 1997, the Camry had received a "good" rating on its moderate overlap front crash test from the Insurance Institute for Highway Safety (IIHS). In 2012, the IIHS introduced a frontal small overlap crash test into its overall evaluation of vehicle occupant safety. The 2012 Camry received a "poor" rating on this new test, while the 2015 Camry redeemed itself with a "good" rating once again. This study aims to find a possible solution that Toyota implemented to reduce the severity of a frontal small overlap crash in the Camry during a mid-cycle update. The purpose of this study is to analyze and evaluate the performance of various A-pillar shapes as energy-absorbing structures in improving passenger safety in a frontal crash. First, the A-pillar structures of the 2012 and 2015 Camry were modeled using CAD software, namely SolidWorks. Then, a crash test simulation using ANSYS software was applied to the A-pillars to analyze the behavior of the structures under similar conditions. Finally, the results were compared to safety values of cabin intrusion to determine the crashworthiness of both A-pillar structures by measuring total deformation. This study suggests that Toyota may have improved the shape of the A-pillar in the 2015 Camry in order to receive a "good" rating from the IIHS safety evaluation once again. These findings can possibly be used to increase safety performance in future vehicles and decrease passenger injury or fatality.
Keywords: A-pillar, crashworthiness, design synthesis, finite element analysis.
1644 Optimal Convolutive Filters for Real-Time Detection and Arrival Time Estimation of Transient Signals
Authors: Michal Natora, Felix Franke, Klaus Obermayer
Abstract:
Linear convolutive filters are fast to calculate and to apply, and are thus often used for real-time processing of continuous data streams. In the case of transient signals, a filter not only has to detect the presence of a specific waveform but also to estimate its arrival time. In this study, a measure is presented which indicates the performance of detectors in achieving both of these tasks simultaneously. Furthermore, a new sub-class of linear filters within the class of filters that minimize the quadratic response is proposed. The proposed filters are more flexible than the existing ones, such as the adaptive matched filter or the minimum power distortionless response beamformer, and prove to be superior with respect to that measure in certain settings. Simulations of a real-time scenario confirm the advantage of these filters as well as the usefulness of the performance measure.
Keywords: Adaptive matched filter, minimum variance distortionless response, beamforming, Capon beamformer, linear filters, performance measure.
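To make the detection-plus-timing task concrete, here is a generic matched-filter sketch (a baseline, not the authors' proposed sub-class): correlating the stream with the known waveform and thresholding the output yields both a detection decision and an arrival-time estimate. Waveform and threshold are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
template = np.hanning(32)                  # assumed known transient waveform
stream = rng.standard_normal(1000) * 0.2   # noisy continuous data stream
true_arrival = 400
stream[true_arrival:true_arrival + 32] += template

# Matched filtering = sliding correlation with the template.
output = np.correlate(stream, template, mode="valid")
peak = int(np.argmax(output))
if output[peak] > 3.0 * np.std(output):    # crude detection threshold
    print("Detected transient, estimated arrival index:", peak)
```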
1643 Semi-automatic Background Detection in Microscopic Images
Authors: Alessandro Bevilacqua, Alessandro Gherardi, Ludovico Carozza, Filippo Piccinini
Abstract:
Recent years have seen an increasing use of image analysis techniques in the field of biomedical imaging, in particular in microscopic imaging. The basic step of most of these techniques relies on a background image free of objects of interest, whether cells or histological samples, to perform further analysis such as segmentation or mosaicing. Commonly, this image consists of an empty field acquired in advance. However, acquiring an empty field is often not feasible; the empty field may differ from the background region of the sample actually being studied, because of the interaction with the organic matter; and it can be expensive, for instance in the case of live cell analyses. We propose a non-parametric and general-purpose approach in which the background is built automatically from a sequence of images that may contain objects of interest. The amount of object-free area in each image only affects the overall speed with which the background is obtained. Experiments with different kinds of microscopic images prove the effectiveness of our approach.
Keywords: Microscopy, flat field correction, background estimation, image segmentation.
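A common nonparametric baseline in this spirit (a sketch, not the authors' exact algorithm) builds the background as the per-pixel temporal median of the image stack, so pixels covered by objects in only a minority of frames converge to the background value:

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median over a sequence of grayscale frames.

    frames: iterable of 2D arrays of identical shape, possibly containing objects.
    """
    stack = np.stack(list(frames), axis=0).astype(np.float64)
    return np.median(stack, axis=0)

# Usage (hypothetical): flat-field correction with the estimated background.
# corrected = image / np.maximum(estimate_background(frames), 1e-6)
```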
1642 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. In particular, a linear system with a single input and single output can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. In addition, methods using the normal equation and second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model ability. Lastly, experimental results proved the effectiveness of the second-order gradient descent algorithm, and indicated that optimization with the normal equation was the most suitable for linear dynamic models.
Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.
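For the linear case, the normal-equation solution mentioned above is closed-form. A minimal sketch, with a hypothetical lagged-input design matrix standing in for the measurement model:

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(500)               # hypothetical system input
y = 0.7 * u + 0.2 * np.roll(u, 1)          # hypothetical linear SISO response
y = y + 0.01 * rng.standard_normal(500)    # Gaussian measurement noise

# Design matrix of current and one-step-delayed inputs (an FIR model of order 2);
# drop the first sample, where np.roll wraps around.
X = np.column_stack([u, np.roll(u, 1)])[1:]
t = y[1:]

# Normal equation: theta = (X^T X)^{-1} X^T t, solved without an explicit inverse.
theta = np.linalg.solve(X.T @ X, X.T @ t)
print(theta)   # approximately [0.7, 0.2]
```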
1641 Combining Color and Layout Features for the Identification of Low-resolution Documents
Authors: Ardhendu Behera, Denis Lalanne, Rolf Ingold
Abstract:
This paper proposes a method combining color and layout features for identifying documents captured with low-resolution handheld devices. On the one hand, the color density surface of the document image is estimated and represented by an equivalent ellipse; on the other hand, the shallow layout structure of the document is computed and represented hierarchically. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures in order to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remaining search space. Our experiments consider slide documents, which are often captured using handheld devices.
Keywords: Document color modeling, document visual signature, kernel density estimation, document identification.
1640 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency
Authors: Fanqiang Kong, Chending Bian
Abstract:
In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint sparsity is the first property of the abundances: it assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers present in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
Keywords: Hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation.
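Within such a variable-splitting scheme, the nuclear-norm subproblem has a well-known closed-form proximal step, singular value thresholding. The generic sketch below shows only that building block, not the authors' full algorithm; the test matrix is synthetic:

```python
import numpy as np

def svt(A, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the singular values.

    This is the low-rank-promoting step inside an augmented Lagrangian / ADMM loop.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Example: shrinking a noisy abundance matrix toward low rank (hypothetical data).
rng = np.random.default_rng(4)
A = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 100))  # rank-5 signal
print(np.linalg.matrix_rank(svt(A + 0.1 * rng.standard_normal(A.shape), 5.0)))
```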
1639 Security Risk Analysis Based on the Policy Formalization and the Modeling of Big Systems
Authors: Luc Cessieux, French Navy, Adrien Derock, DCNS/IMATH
Abstract:
Security risk models have been successful in estimating the likelihood of attack for simple security threats. However, modeling a complex system and its security risk remains a challenge. Many methods have been proposed to face this problem, but they are often difficult to manipulate and not all-embracing enough, so they are not as popular as they should be with administrators and decision makers. We propose in this paper a new purpose-built tool for modeling big systems. The software takes into account attack threats and security strength.
Keywords: Security, risk management, threat, modeling.
1638 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry
Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin
Abstract:
Flammability analysis of extruded polystyrene (XPS) has become crucial due to its utilization as an insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS samples from the local market, a red and a grey one, were obtained from the results of thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed at the same heating rates. From the experiments, it was discovered that the red XPS released more heat than the grey XPS, and both materials showed two mass loss stages. Consequently, the kinetic parameters for the red XPS were higher than those for the grey XPS. A comparative evaluation of the activation energies from MCC and TG showed an insignificant degree of deviation, signifying equivalent apparent activation energies from both methods. However, different activation energy profiles, resulting from different chemical pathways, were observed when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
Keywords: Flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis.
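Both isoconversional methods reduce to straight-line fits across heating rates at a fixed conversion: Kissinger-Akahira-Sunose regresses ln(beta/T^2) against 1/T (slope -Ea/R), while Flynn-Wall-Ozawa, via Doyle's approximation, regresses ln(beta) against 1/T (slope approximately -1.052 Ea/R). A sketch with hypothetical temperatures at one conversion level:

```python
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
beta = np.array([5.0, 10.0, 20.0])          # heating rates, K/min
T = np.array([650.0, 662.0, 675.0])         # hypothetical T at fixed conversion, K

# Kissinger-Akahira-Sunose: ln(beta/T^2) = C - Ea/(R*T)
slope_kas = np.polyfit(1.0 / T, np.log(beta / T**2), 1)[0]
Ea_kas = -slope_kas * R

# Flynn-Wall-Ozawa (Doyle approximation): ln(beta) = C' - 1.052*Ea/(R*T)
slope_fwo = np.polyfit(1.0 / T, np.log(beta), 1)[0]
Ea_fwo = -slope_fwo * R / 1.052

print(f"Ea (KAS): {Ea_kas/1000:.0f} kJ/mol, Ea (FWO): {Ea_fwo/1000:.0f} kJ/mol")
```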
1637 Spatio-Temporal Analysis and Mapping of Malaria in Thailand
Authors: Krisada Lekdee, Sunee Sammatat, Nittaya Boonsit
Abstract:
This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs sampling MCMC. A conditional autoregressive (CAR) model is assumed to represent the spatial effects, and the temporal correlation is presented through the covariance matrix of the random effects. The quarterly malaria data have been extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand; the factors considered are rainfall and temperature. The results show that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The top 5 highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model performs better than a GLMM with spatial effects but without temporal terms.
Keywords: Bayesian method, generalized linear mixed model (GLMM), malaria, spatial effects, temporal correlation.
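For reference, a proper CAR prior on area effects has Gaussian form with precision matrix Q = tau * (D - rho * W), where W is the binary adjacency matrix of the regions and D its row-sum diagonal. A sketch with a hypothetical four-province adjacency, not the paper's actual map:

```python
import numpy as np

# Hypothetical adjacency of 4 provinces (symmetric, zero diagonal).
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
tau, rho = 2.0, 0.9                        # precision and spatial-dependence parameters

Q = tau * (D - rho * W)                    # proper CAR precision matrix
phi = np.random.default_rng(5).multivariate_normal(
    mean=np.zeros(4), cov=np.linalg.inv(Q))   # one draw of spatial random effects
print(phi)
```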
1636 Deployment of a Biocompatible International Space Station into Geostationary Orbit
Authors: Tim Falk, Chris Chatwin
Abstract:
This study explores the possibility of a space station that will occupy a geostationary equatorial orbit (GEO) and create artificial gravity using centripetal acceleration. The concept of the station is to create a habitable, safe environment that can increase the possibility of space tourism by reducing the wide variety of hazards associated with space exploration. The ability to control the intensity of the artificial gravity through Hall-effect thrusters will allow experiments to be carried out at different levels of artificial gravity. A feasible prototype model was built to convey the concept and to enable cost estimation. The SpaceX Falcon Heavy rocket, with a 26,700 kg payload to GEO, was selected to take the 675 tonne spacecraft into orbit; space station construction will require up to 30 launches, which would be reduced to 5 launches when the SpaceX BFR becomes available. The estimated total cost of implementing the Sussex Biocompatible International Space Station (BISS) is approximately $47.039 billion, which is very attractive when compared to the cost of the International Space Station, which cost $150 billion.
Keywords: Artificial gravity, biocompatible, geostationary orbit, space station.
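The artificial-gravity sizing follows from the centripetal relation a = omega^2 * r. As a quick worked check (the 50 m radius below is an assumed illustration, not a figure from the paper), the spin rate needed for 1 g is:

```python
import math

r = 50.0                  # assumed rotation radius in metres (illustrative only)
g = 9.81                  # target artificial gravity, m/s^2

omega = math.sqrt(g / r)                      # a = omega^2 * r  =>  omega = sqrt(a / r)
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"{omega:.3f} rad/s  ({rpm:.1f} rpm) for 1 g at r = {r:.0f} m")
```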
1635 Application of Generalized Autoregressive Score Model to Stock Returns
Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke
Abstract:
The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t, where the mechanism used to update the parameters over time is the scaled score of the likelihood function. The results revealed a high persistence of time variation, as the location parameter was higher, and the skewness parameter implied departure of the scale parameter from normality, with the unconditional parameter at 1.5. The results also revealed persistence of leptokurtic behaviour in the stock returns, implying that the returns are heavy-tailed. Prior to model estimation, the White Neural Network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research, specifically to model the existence of time-varying parameters with a more detailed model that captures the heavy-tailed distribution of the series and computes the risk measures associated with the returns.
Keywords: Generalized autoregressive score model, stock returns, time-varying.
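The driving mechanism described above is the GAS recursion f_{t+1} = omega + A*s_t + B*f_t, where s_t is the scaled score of the conditional density; for a Gaussian location model with unit variance the score reduces to the prediction error. A minimal sketch with hypothetical coefficients and synthetic returns:

```python
import numpy as np

def gas_location_filter(y, omega=0.0, A=0.1, B=0.9):
    """GAS(1,1) time-varying location under a unit-variance Gaussian density.

    The scaled score s_t = y_t - f_t drives the update f_{t+1} = omega + A*s_t + B*f_t.
    """
    f = np.zeros(len(y))
    for t in range(len(y) - 1):
        s = y[t] - f[t]                    # scaled score for the Gaussian location
        f[t + 1] = omega + A * s + B * f[t]
    return f

# Hypothetical returns with a slowly drifting mean.
rng = np.random.default_rng(6)
returns = 0.05 * np.sin(np.linspace(0, 6, 300)) + 0.1 * rng.standard_normal(300)
print(gas_location_filter(returns)[-5:])   # filtered location estimates
```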
1634 The Estimation of Human Vital Signs Complexity
Authors: L. Bikulciene, E. Venskaityte, G. Jarusevicius
Abstract:
Nonstationary and nonlinear signals generated by living complex systems defy traditional mechanistic approaches based on homeostasis. Our previous studies have shown that evaluating the interactions of physiological signals using special analysis methods is suitable for the observation of physiological processes. We demonstrate the possibility of using a deep physiological model, based on the interpretation of changes in the human body's functional states, combined with an analytical method based on matrix theory for physiological signal analysis, applied to high-risk cardiac patients. It is shown that the evaluation of cardiac signal interactions reveals functional changes peculiar to each individual at the onset of the hemodynamic restoration procedure. Therefore, we suggest that the assessment of alterations in the functional state of the body after patients undergo surgery can be complemented by the data received from the suggested approach of evaluating the interactions of the functional variables.
Keywords: Cardiac diseases, Complex systems theory, ECG analysis, matrix analysis.
1633 Nodal Load Profiles Estimation for Time Series Load Flow Using Independent Component Analysis
Authors: Mashitah Mohd Hussain, Salleh Serwan, Zuhaina Hj Zakaria
Abstract:
This paper presents a method to estimate the load profile in multiple power flow solutions for every minute over 24 hours a day. A method to calculate multiple solutions of a nonlinear profile is introduced. The Power System Simulation/Engineering (PSS®E) software and Python have been used to solve the load power flow. The results of these power flow solutions have been used to estimate the load profiles at each load bus using Independent Component Analysis (ICA), without any knowledge of the parameters and network topology of the systems. The proposed algorithm is tested with the IEEE 69-bus test system, representing the distribution part, and the ICA method has been programmed in MATLAB R2012b. Simulation results and estimation errors are discussed in this paper.
Keywords: Electrical Distribution System, Power Flow Solution, Distribution Network, Independent Component Analysis, Newton Raphson, Power System Simulation for Engineering.
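As a rough illustration of the separation step (a sketch, not the authors' MATLAB implementation), scikit-learn's FastICA can recover independent source profiles from mixed bus measurements; the mixing below is entirely synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0, 24, 1440)               # one sample per minute over 24 hours

# Two hypothetical independent load patterns (daily cycle + evening peak).
sources = np.column_stack([np.sin(2 * np.pi * t / 24),
                           np.exp(-((t - 19) ** 2) / 4)])
mixed = sources @ rng.uniform(0.5, 1.5, size=(2, 3))   # observed bus measurements

ica = FastICA(n_components=2, random_state=0)
estimated_profiles = ica.fit_transform(mixed)          # recovered load profiles
print(estimated_profiles.shape)                        # (1440, 2)
```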
1632 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal, as it aids the understanding of the information in the signal. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform which improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility measure (STOI), and the results obtained are compared with universal Discrete Wavelet Transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
Keywords: Discrete Wavelet Transform, speech intelligibility, STOI, standard deviation.
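For orientation, the fixed-level universal-threshold DWT baseline that the paper compares against (not the proposed per-frame level selection) can be sketched with PyWavelets:

```python
import numpy as np
import pywt

def dwt_denoise(frame, wavelet="db4", level=4):
    """Universal-threshold soft shrinkage on the detail coefficients of one frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    # Noise scale estimated from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(frame)))       # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(frame)]

# Usage with a hypothetical noisy frame:
rng = np.random.default_rng(8)
noisy = np.sin(np.linspace(0, 20, 512)) + 0.3 * rng.standard_normal(512)
clean_estimate = dwt_denoise(noisy)
```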
1631 Comparison of Hough Transform and Mean Shift Algorithm for Estimation of the Orientation Angle of Industrial Data Matrix Codes
Authors: Ion-Cosmin Dita, Vasile Gui, Franz Quint, Marius Otesteanu
Abstract:
In the automatic manufacturing and assembly of mechanical, electrical and electronic parts, one needs to reliably identify the position of components and to extract the information carried by these components. Data Matrix Codes (DMC) are nowadays established in many areas of industrial manufacturing thanks to their concentration of information in small spaces. In today's usually order-related industry, where increased tracing requirements prevail, they offer further advantages over other identification systems. This underlines in an impressive way the necessity of a robust code reading system for detecting DMC on components in factories. This paper compares two methods for estimating the orientation angle of Data Matrix Codes: one based on the Hough Transform and the other based on the Mean Shift Algorithm. We concentrate on Data Matrix Codes in an industrial environment, punched, milled, lasered or etched on different materials in arbitrary orientation.
Keywords: Industrial data matrix code, Hough transform, mean shift.
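The Hough-based angle estimate can be sketched in a few lines with OpenCV (a generic sketch, not the paper's tuned pipeline): edge-detect the code region, collect Hough lines, and take the dominant line angle as the DMC orientation. The file name is hypothetical:

```python
import numpy as np
import cv2

image = cv2.imread("dmc_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(image, 50, 150)

# Standard Hough transform: each detected line is (rho, theta), theta in [0, pi).
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)
if lines is not None:
    thetas = lines[:, 0, 1]                      # line normal angles in radians
    # Dominant orientation via the peak of the angle histogram.
    hist, bin_edges = np.histogram(thetas, bins=180, range=(0, np.pi))
    dominant = bin_edges[np.argmax(hist)]
    print("Estimated DMC orientation: %.1f deg" % np.degrees(dominant))
```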
1630 Unscented Transformation for Estimating the Lyapunov Exponents of Chaotic Time Series Corrupted by Random Noise
Authors: K. Kamalanand, P. Mannar Jawahar
Abstract:
Many systems in the natural world exhibit chaos or nonlinear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals corrupted by random noise. In this work, a method for the estimation of Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. In this paper, the objective of the work, the proposed methodology and the validation results are discussed in detail.
Keywords: Lyapunov exponents, unscented transformation, chaos theory, neural networks.
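For a known noise-free map, the largest Lyapunov exponent has a direct estimate as the average log-derivative along the orbit, which is the ground truth such validation relies on. For the logistic map x_{t+1} = r*x_t*(1 - x_t) (a baseline sketch, not the proposed unscented-transformation method):

```python
import numpy as np

def logistic_lyapunov(r=4.0, x0=0.3, n=100000, burn_in=1000):
    """Largest Lyapunov exponent of the logistic map: mean of log|f'(x)| on the orbit."""
    x = x0
    total = 0.0
    for i in range(n + burn_in):
        if i >= burn_in:
            total += np.log(abs(r * (1.0 - 2.0 * x)))   # |f'(x)| = |r(1 - 2x)|
        x = r * x * (1.0 - x)
    return total / n

print(logistic_lyapunov())   # approaches ln 2 ~ 0.693 for r = 4
```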
1629 Estimating Shortest Circuit Path Length Complexity
Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake
Abstract:
When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of the shortest path length for a larger number of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and related computer-aided design tools that use binary decision diagrams.
Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation
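The modeling task reduces to regressing complexity on (number of variables, number of minterms). A toy stand-in with a small feed-forward network, using assumed training pairs rather than the ISCAS data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical (num_variables, num_minterms) -> shortest-path-length samples.
rng = np.random.default_rng(9)
X = rng.integers(4, 40, size=(200, 2)).astype(float)
y = np.log2(X[:, 1] + 1) + 0.1 * X[:, 0]       # assumed smooth trend, for illustration

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[24.0, 300.0]]))           # predicted path-length complexity
```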
1628 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon
Authors: Haniye Dehestani, Yadollah Ordokhani
Abstract:
In this work, we present an efficient approach for solving variable-order time-fractional partial differential equations, based on Legendre and Laguerre polynomials. First, we introduce the pseudo-operational matrices of integer and variable fractional order of integration by using some properties of the Riemann-Liouville fractional integral. These are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An estimation of the error is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. A comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.
Keywords: Collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration.