Search results for: Immersed Boundary Method
6401 Feature Weighting and Selection - A Novel Genetic Evolutionary Approach
Authors: Serkawt Khola
Abstract:
A feature weighting and selection method is proposed which uses the structure of a weightless neuron and exploits the principles that govern the operation of genetic algorithms and evolution. Features are coded onto chromosomes in a novel way that allows weighting information about the features to be inferred directly from the gene values. The proposed method addresses several problems associated with feature selection and weighting algorithms, while offering advantages in speed, simplicity, and suitability for real-time systems.
Keywords: Feature weighting, genetic algorithm, pattern recognition, weightless neuron.
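The abstract does not spell out the encoding; a minimal sketch of the general idea, assuming integer genes that double as feature weights (a gene value of 0 drops the feature) and a toy fitness against a hypothetical target weighting, might look like this:

```python
import random

random.seed(7)

N_FEATURES, POP, GENS = 6, 30, 40
TARGET = [3, 0, 2, 0, 1, 0]   # hypothetical "ideal" weight profile (toy ground truth)

def fitness(chrom):
    # Toy objective: negative squared distance to the hidden target weighting.
    return -sum((g - t) ** 2 for g, t in zip(chrom, TARGET))

def evolve():
    pop = [[random.randint(0, 3) for _ in range(N_FEATURES)] for _ in range(POP)]
    history = []
    for _ in range(GENS):
        best = max(pop, key=fitness)
        history.append(fitness(best))
        nxt = [best[:]]                      # elitism: carry the best chromosome over
        while len(nxt) < POP:
            p1 = max(random.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, N_FEATURES)
            child = p1[:cut] + p2[cut:]      # one-point crossover
            if random.random() < 0.2:        # mutation: re-draw one gene/weight
                child[random.randrange(N_FEATURES)] = random.randint(0, 3)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness), history

best, history = evolve()
```

Because of elitism, the best fitness never decreases from one generation to the next; the surviving gene values are read off directly as feature weights, which is the point the abstract makes about the encoding.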
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1856
6400 Evaluation of Mixed-Mode Stress Intensity Factor by Digital Image Correlation and Intelligent Hybrid Method
Authors: K. Machida, H. Yamada
Abstract:
Displacement measurement was conducted on compact normal and shear specimens made of a homogeneous acrylic material subjected to mixed-mode loading, using digital image correlation. The intelligent hybrid method proposed by Nishioka et al. was applied to the stress-strain analysis near the crack tip. The accuracy of the stress intensity factor at the free surface was discussed from the viewpoint of both the experiment and 3-D finite element analysis. The surface images before and after deformation were captured by a CMOS camera, and we developed a system that enables real-time stress analysis based on digital image correlation and inverse problem analysis. Since most of the processing time of this system was spent on displacement analysis, we focused on speeding up this step. For a cracked body, it is also possible to evaluate fracture mechanics parameters such as the J-integral, the strain energy release rate, and the mixed-mode stress intensity factor. The 9-point elliptic paraboloid approximation could not resolve submicron-order displacements with high accuracy; the accuracy of the displacement analysis was improved considerably by introducing the Newton-Raphson method, which accounts for the deformation of a subset. The stress intensity factor was evaluated with an error of less than 1%.
Keywords: Digital image correlation, mixed mode, Newton-Raphson method, stress intensity factor.
6399 Study of Electro-Optical Properties of ZnS Nanoparticles Prepared by Colloidal Particles Method
Authors: A. Rahdar, V. Arbabi, H. Ghanbari
Abstract:
ZnS nanoparticles of different sizes have been synthesized using a colloidal particles method. The ZnS nanoparticles were prepared with a capping agent (mercaptoethanol) and then characterized using X-ray diffraction (XRD) and UV-Vis spectroscopy. The particle size calculated from the XRD patterns lies in the range 1.85-2.44 nm. Absorption spectra were obtained with a UV-Vis spectrophotometer to find the optical band gap, and the obtained values were found to lie in the range 3.83-4.59 eV. It was also found that the energy band gap increases with the molar concentration of the capping agent solution.
Keywords: ZnS, nanoparticle, X-ray.
6398 Numerical Optimization of Pin-Fin Heat Sink with Forced Cooling
Authors: Y. T. Yang, H. S. Peng, H. T. Hsu
Abstract:
This study presents a numerical simulation of an optimum pin-fin heat sink with air impingement cooling, using the Taguchi method. An L9(3^4) orthogonal array is selected as the experimental plan for the four design parameters, each at three levels. The governing equations are discretized using the control-volume-based finite-difference method with a power-law scheme on a non-uniform staggered grid. The coupling of the velocity and pressure terms of the momentum equations is solved with the SIMPLEC algorithm, and the k-ε two-equation turbulence model is employed to describe the turbulent behavior. The parameters studied include the fin height H (35-45 mm), the inter-fin spacings a, b, and c (2-6.4 mm), and the Reynolds number (Re = 10000-25000). The objective of this study is to examine the effects of the fin spacings and fin height on the thermal resistance and to find the optimum combination using the Taguchi method. We found that in the optimum design the fin spacings gradually widen from the center to the edge of the heat sink, and that taller fins perform better. The optimum combination is H3a1b2c3. In addition, the parameters ranked by importance are a, H, c, and b.
Keywords: Heat sink, Optimum, Electronics cooling, CFD.
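The Taguchi main-effect (range) analysis described above can be sketched as follows. The L9(3^4) array is the standard one; the response model and factor dominance are invented for illustration, not taken from the paper:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors (A-D) at 3 levels.
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]

# Hypothetical responses (e.g. thermal resistance) dominated by factor A.
y = [10 * a + 1 * b + 0.5 * c + 0.1 * d for a, b, c, d in L9]

def rank_factors(array, resp):
    """Rank factors by the range of their per-level mean responses."""
    names = "ABCD"
    ranges = {}
    for f, name in enumerate(names):
        # Each level appears in exactly 3 of the 9 balanced runs.
        means = [sum(r for row, r in zip(array, resp) if row[f] == lvl) / 3
                 for lvl in range(3)]
        ranges[name] = max(means) - min(means)   # main-effect range
    return sorted(names, key=lambda n: -ranges[n]), ranges

order, ranges = rank_factors(L9, y)
```

Because the array is balanced, the other factors average out within each level, so the ranking recovers the dominant factor; the paper applies the same logic to H, a, b, c with CFD-computed responses.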
6397 Fuzzy Logic Approach to Robust Regression Models of Uncertain Medical Categories
Authors: Arkady Bolotin
Abstract:
Dichotomization of the outcome by a single cut-off point is an important part of various medical studies. Usually the relationship between the resulting dichotomized dependent variable and explanatory variables is analyzed with linear regression, probit regression or logistic regression. However, in many real-life situations the cut-off point dividing the outcome into two groups is unknown and can be specified only approximately, i.e. surrounded by some (small) uncertainty. For the regression model to have any practical meaning, it must be robust to this uncertainty. In this paper, we show that neither the beta in the linear regression model nor its significance level is robust to small variations in the dichotomization cut-off point. As an alternative robust approach to the problem of uncertain medical categories, we propose to use the linear regression model with a fuzzy membership function as the dependent variable. This fuzzy membership function expresses the degree to which the value of the underlying (continuous) outcome falls below or above the dichotomization cut-off point. We demonstrate that the linear regression model with the fuzzy dependent variable can be insensitive to uncertainty in the cut-off point location, and present modeling results from a real study of low hemoglobin levels in infants. We systematically test the robustness of the binomial regression model and of the linear regression model with the fuzzy dependent variable by changing the boundary of the category Anemia, and show that the behavior of the latter model persists over a fairly wide interval.
Keywords: Categorization, Uncertain medical categories, Binomial regression model, Fuzzy dependent variable, Robustness.
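A minimal sketch of the core idea: replace the hard 0/1 label with a fuzzy membership degree around the cut-off. The logistic membership shape, the cut-off value, and the band width below are assumptions for illustration, not the paper's exact function:

```python
import math

CUTOFF = 11.0      # hypothetical hemoglobin cut-off (g/dL) for "Anemia"
FUZZINESS = 0.5    # assumed width of the uncertain band around the cut-off

def membership(hb):
    """Degree to which a hemoglobin value falls below the cut-off (1 = clearly anemic)."""
    return 1.0 / (1.0 + math.exp((hb - CUTOFF) / FUZZINESS))

def ols_slope_intercept(x, y):
    """Simple-regression OLS fit, used here with the fuzzy membership as y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return b, my - b * mx
```

Regressing `membership(hb)` instead of the hard indicator means a small shift in `CUTOFF` changes the dependent variable smoothly rather than flipping labels near the boundary, which is the robustness the abstract describes.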
6396 Hybrid Finite Element Analysis of Expansion Joints for Piping Systems in Aircraft Engine External Configurations and Nuclear Power Plants
Authors: Dong Wook Lee
Abstract:
This paper presents a method to analyze the stiffness of an expansion joint with structural support, using a hybrid method combining computational and analytical approaches. Many expansion joints found in the tubes and ducts of mechanical structures are designed to absorb the thermal expansion mismatch between structural members and to deal with misalignments introduced during assembly and manufacturing. One important design consideration is the system's vibrational characteristics. We calculate the stiffness as a characterization parameter for structural joint systems using a combined Finite Element Analysis (FEA) and analytical method, and apply the method to two sample applications: external configurations of aircraft engines and nuclear power plant structures.
Keywords: Expansion joint, expansion joint stiffness, Finite Element Analysis, FEA, nuclear power plants, aircraft engine external configurations.
6395 Study of Qualitative and Quantitative Metric for Pixel Factor Mapping and Extended Pixel Mapping Method
Authors: Indradip Banerjee, Souvik Bhattacharyya, Gautam Sanyal
Abstract:
In this paper, an approach is presented to investigate the performance of Pixel Factor Mapping (PFM) and the Extended Pixel Mapping Method (extended PMM) through qualitative and quantitative analysis. These methods are tested against a number of well-known image similarity metrics and statistical distribution techniques. PFM has been evaluated in both the spatial and frequency domains, and the extended PMM in the spatial domain, on a large set of images available on the internet.
Keywords: Qualitative, quantitative, PFM, extended PMM.
6394 On the Construction of m-Sequences via Primitive Polynomials with a Fast Identification Method
Authors: Abhijit Mitra
Abstract:
The paper provides an in-depth tutorial on the mathematical construction of maximal-length sequences (m-sequences) via primitive polynomials, and on how to map them onto shift-register implementations. It is equally important to check whether a polynomial is primitive, so as to obtain proper m-sequences. A fast method to identify primitive polynomials over binary fields is proposed whose complexity is considerably lower than that of the standard procedures for the same purpose.
Keywords: Finite field, irreducible polynomial, primitive polynomial, maximal length sequence, additive shift register, multiplicative shift register.
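The shift-register construction can be made concrete with a Fibonacci LFSR. The brute-force primitivity check below is only practical for small degrees and stands in for, rather than reproduces, the paper's fast identification method:

```python
def lfsr_states(nbits, taps, seed=1):
    """Iterate a Fibonacci LFSR (right-shift form) for 2**nbits - 1 steps.
    `taps` are the bit positions XORed into the feedback bit."""
    state, states = seed, []
    for _ in range((1 << nbits) - 1):
        states.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return states, state   # visited states, and the register after a full period

def is_primitive(nbits, taps):
    # Brute-force check (fine for small degrees): the feedback polynomial is
    # primitive iff the register cycles through all 2**n - 1 nonzero states
    # and returns to the seed only after the maximal period.
    states, final = lfsr_states(nbits, taps)
    return final == 1 and len(set(states)) == (1 << nbits) - 1

# Degree-4 primitive feedback (x^4 + x + 1 up to the reciprocal convention):
states, _ = lfsr_states(4, (0, 1))
bits = [s & 1 for s in states]   # the m-sequence itself, period 15
```

The output exhibits the classic m-sequence balance property: in one period of length 2^n - 1 there are 2^(n-1) ones and 2^(n-1) - 1 zeros.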
6393 Some Exact Solutions of the (2+1)-Dimensional Breaking Soliton Equation using the Three-wave Method
Authors: Mohammad Taghi Darvishi, Mohammad Najafi
Abstract:
This paper considers the (2+1)-dimensional breaking soliton equation in its bilinear form. Some exact solutions to this equation are explicitly derived using the three-wave method with the assistance of Maple. The approach is simple and straightforward.
Keywords: Soliton solution, computerized symbolic computation, Painlevé analysis, (2+1)-dimensional breaking soliton equation, Hirota's bilinear form.
6392 3D Model Retrieval based on Normal Vector Interpolation Method
Authors: Ami Kim, Oubong Gwun, Juwhan Song
Abstract:
In this paper, we propose the distribution of mesh normal vector directions as a feature descriptor of a 3D model, since the normal vectors capture the overall shape of a model well. The distribution of normal vectors is sampled in proportion to each polygon's area, so that faces with smaller surface area contribute less to the feature descriptor, which enhances retrieval performance. Analysis using ANMRR indicates an improvement of approximately 12.4%-34.7% over the existing method.
Keywords: Interpolated normal vector, feature descriptor, 3D model retrieval.
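The area-proportional sampling of normals can be sketched as follows; binning the normals' z-component into a histogram is an illustrative simplification, not the paper's exact descriptor or its interpolation step:

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def face_normal_and_area(a, b, c):
    """Unit normal and area of a triangle given as three 3D points."""
    u = tuple(bi - ai for ai, bi in zip(a, b))
    v = tuple(ci - ai for ai, ci in zip(a, c))
    n = cross(u, v)
    norm = sum(x * x for x in n) ** 0.5
    return tuple(x / norm for x in n), 0.5 * norm

def descriptor(triangles, nbins=4):
    # Histogram of the normals' z-component, weighted by face area, so that
    # large faces contribute proportionally more, as in the abstract.
    hist, total = [0.0] * nbins, 0.0
    for tri in triangles:
        n, area = face_normal_and_area(*tri)
        z = min(max(n[2], -1.0), 1.0)
        idx = min(int((z + 1.0) / 2.0 * nbins), nbins - 1)
        hist[idx] += area
        total += area
    return [h / total for h in hist]
```

Normalizing by total area makes the descriptor comparable across models of different scale and tessellation density.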
6391 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern
Authors: Rupesh K. Gopal, Saroj K. Meher
Abstract:
In this report we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to similar usage behaviour (e.g., average number of local calls per week). For the customers in each group, we develop a probabilistic model of their usage and use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. We then determine thresholds by calculating the acceptable change within a group. MLE is applied to the data in the test period to estimate the parameters of the calling behaviour, and these parameters are compared against the thresholds; any deviation beyond a threshold raises an alarm. This method has the advantage of identifying local anomalies, as compared to techniques that identify global anomalies. The method is tested on 90 days of study data and 10 days of test data from telecom customers. For medium to large deviations in the test window, the method identifies 90% of anomalous usage with a false alarm rate below 1%.
Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule-based systems.
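A minimal sketch of the per-group model-and-threshold rule, assuming weekly call counts are modeled as Poisson (the abstract does not name the distribution; the MLE of a Poisson rate is simply the sample mean):

```python
import math

def fit_group(study_counts):
    """MLE of a Poisson rate for one usage group: the sample mean."""
    return sum(study_counts) / len(study_counts)

def is_anomalous(test_count, lam, k=3.0):
    # Rule: flag if the test-period count deviates from the group rate by more
    # than k Poisson standard deviations (sqrt(lambda)); k sets the acceptable
    # change within the group.
    return abs(test_count - lam) > k * math.sqrt(lam)

# Hypothetical weekly local-call counts from the study period for one group:
study = [10, 12, 11, 9, 10, 8, 12, 11, 9, 10]
lam = fit_group(study)
```

Because the threshold is derived per group, a count that is anomalous for light callers can still be normal for a heavy-usage group, which is the "local anomaly" property the abstract emphasizes.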
6390 An ensemble of Weighted Support Vector Machines for Ordinal Regression
Authors: Willem Waegeman, Luc Boullart
Abstract:
Instead of traditional (nominal) classification, we investigate ordinal classification, or ranking. An enhanced method based on an ensemble of Support Vector Machines (SVMs) is proposed, in which each binary classifier is trained with specific weights for each object in the training data set. Experiments on benchmark datasets and synthetic data indicate that the performance of our approach is comparable to state-of-the-art kernel methods for ordinal regression. The ensemble method, which is straightforward to implement, provides a very good sensitivity-specificity trade-off for the highest and lowest ranks.
Keywords: Ordinal regression, support vector machines, ensemble learning.
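The reduction of ranking to an ensemble of binary classifiers can be illustrated with the threshold scheme below ("is the rank at least k?"). A one-dimensional decision stump stands in for the weighted SVMs to keep the sketch self-contained; per-object weights enter through `w`:

```python
def fit_stump(x, y01, w=None):
    """Best 1-D threshold classifier: predict 1 when x > s.
    `w` gives per-object weights, as in the abstract's weighted base learners."""
    w = w or [1.0] * len(x)
    xs = sorted(set(x))
    cands = [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[0] - 1, xs[-1] + 1]

    def weighted_error(s):
        return sum(wi for xi, yi, wi in zip(x, y01, w) if (xi > s) != bool(yi))

    return min(cands, key=weighted_error)

def fit_ordinal(x, y, n_ranks):
    # One binary task "rank >= k" per threshold k = 1 .. n_ranks - 1.
    return [fit_stump(x, [int(yi >= k) for yi in y]) for k in range(1, n_ranks)]

def predict(x_new, stumps):
    # Predicted rank = number of "rank >= k" classifiers that fire.
    return sum(int(x_new > s) for s in stumps)

stumps = fit_ordinal([1, 2, 3, 4, 5, 6, 7, 8, 9],
                     [0, 0, 0, 1, 1, 1, 2, 2, 2], n_ranks=3)
```

Each binary task sees all objects, so the ensemble exploits the ordering information that a plain one-vs-rest nominal classifier would discard.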
6389 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem
Authors: Markandeya Kunchi
Abstract:
Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes primary, as new products, features, and services are launched very frequently, and the company suffers losses if an unusable, inefficient product is put on the market and rejected by customers. This paper examines why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, and technology constraints of a typical e-commerce company. Qualitative interviews were conducted with product managers and designers to find out the structure, project planning, product management method, and role of the design team in a mid-level company. The paper addresses the usual apprehensions of such companies about inculcating UT within the team, identifying factors like limited monetary resources, the lack of a usability expert, narrow timelines, and a lack of understanding from higher management as some of the primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has severe repercussions: very little team involvement, high cost, misinterpretation of the findings, elongated timelines, and a lack of empathy towards the customer. The shortfalls of having no in-house UT process, or of conducting UT through vendors, are bad user experiences for customers interacting with the product and badly designed products that are neither useful nor usable. As a result, companies see declining conversion rates in apps and websites, high bounce rates, and increased uninstall rates. Thus, there was a need for a leaner UT system that could solve these issues for the company. This paper focuses on optimizing the UT process with a collaborative method.
The degree of optimization and the structure of the collaborative method are the highlights of this paper. In the collaborative method of UT, the centralised design team of the company takes responsibility for conducting and analysing the UT. The UT is usually formative: designers take the findings into account and use them in the ideation process. The success of the collaborative method is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging the various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting smooth in-house UT with users. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across different lines of business. These include improved interaction between stakeholders and the design team, greater empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and more effective and efficient design solutions. The future scope of collaborative UT is to make the method leaner by reducing the number of days needed to complete an entire project, from planning between teams to publishing the UT report.
Keywords: Usability testing, collaborative method, e-commerce, product management method.
6388 Problems of Boolean Reasoning Based Biclustering Parallelization
Authors: Marcin Michalak
Abstract:
Biclustering is a form of two-dimensional data analysis. For several years it has been possible to express this task in terms of Boolean reasoning, for processing continuous, discrete and binary data. The mathematical background of this approach (the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria) is a strong advantage of the method. Unfortunately, the core of the method has quite high computational complexity. The paper presents the basics of the Boolean reasoning approach to biclustering, and in this context raises the problems of parallelizing the computation.
Keywords: Boolean reasoning, biclustering, parallelization, prime implicant.
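What "exact and inclusion-maximal biclusters" means, and why the core is expensive, can be seen from a brute-force baseline over a binary matrix (exponential in the number of columns; the Boolean-reasoning machinery itself is not reproduced here):

```python
from itertools import combinations

def maximal_biclusters(M):
    """All inclusion-maximal all-ones biclusters of a binary matrix.

    Brute force over column subsets: exponential cost, which illustrates why
    the exact core is expensive and why its parallelization matters."""
    n_rows, n_cols = len(M), len(M[0])
    found = []
    for r in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), r):
            rows = frozenset(i for i in range(n_rows)
                             if all(M[i][j] for j in cols))
            if rows:
                found.append((rows, frozenset(cols)))
    # keep only pairs not dominated by a bicluster with superset rows AND columns
    maximal = [b for b in found
               if not any(b != o and b[0] <= o[0] and b[1] <= o[1] for o in found)]
    return set(maximal)

M = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
res = maximal_biclusters(M)
```

On this 3x3 example the four inclusion-maximal all-ones blocks overlap each other, which is exactly what distinguishes biclustering from ordinary row or column clustering.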
6387 Iron Recovery from Red Mud as Zero-Valent Iron Metal Powder Using Direct Electrochemical Reduction Method
Authors: Franky Michael Hamonangan Siagian, Affan Maulana, Himawan Tri Bayu Murti Petrus, Panut Mulyono, Widi Astuti
Abstract:
In this study, the feasibility of the direct electrowinning method for producing zero-valent iron from red mud was investigated. The red mud sample came from the Tayan mine, Indonesia, and is rich in hematite (Fe2O3). Before electrolysis, the samples were characterized by various analytical techniques (ICP-AES, SEM, XRD) to determine their chemical composition and mineralogy. Direct electrowinning of red mud suspended in NaOH was performed at low temperatures, ranging from 30-110 °C. Current density and temperature were varied to determine the optimum operating conditions for the direct electrowinning process. Cathode deposits and residues in the electrochemical cells were analyzed using XRD, XRF, and SEM to determine the chemical composition and current recovery. With red mud, the low-temperature electrolysis can reach 11.8% recovery at a current density of 796 A/m². The moderate performance of the process with red mud was attributed to the troublesome adsorption of red mud particles on the cathode, which makes the reduction far less efficient than with pure hematite.
Keywords: Alumina, electrochemical reduction, iron production, red mud.
6386 Metal Berthelot Tubes with Windows for Observing Cavitation under Static Negative Pressure
Authors: K. Hiro, Y. Imai, T. Sasayama
Abstract:
Cavitation under static negative pressure is not well understood. The Berthelot method of generating such negative pressure can be a means to study cavitation inception. In this study, metal Berthelot tubes with built-in observation windows are newly developed and checked as to whether high static negative pressure is generated. The negative pressure in a tube sealed with a pair consisting of a corundum plate and an aluminum gasket increased with temperature cycles, a trend similar to that reported before.
Keywords: Berthelot method, negative pressure, cavitation.
6385 A New Approach for Effect Evaluation of Sediment Management
Authors: Jazaul Ikhsan, Masaharu Fujita
Abstract:
Safety, the river environment, and sediment utilization are the three target elements of sediment management. A change in one element caused by sediment management may affect the other two, and the priority among the three elements depends on the stakeholders. It is therefore necessary to develop a method to evaluate the effect of sediment management on each element, together with an integrated method to evaluate the socio-economic effect. In this study, taking the Mount Merapi basin as the investigation field, such a method was developed for an active volcanic basin. An integrated evaluation method for sediment management was discussed from a socio-economic point of view covering safety, environment, and sediment utilization, and a case study of sediment management was evaluated by means of this method. To evaluate the effect of sediment management, several parameters on safety, utilization, and environment were introduced. From the utilization point of view, job opportunity, additional income for local people, and tax income to the local government were used to evaluate the effectiveness of sediment management. The risk degree of river infrastructure was used to describe the effect of sediment management on the safety aspect. To evaluate the effects of sediment management on the environment, the mean diameter of the grain size distribution of the riverbed surface was used. On the coordinate system formed by these elements, the direction of change in basin condition under sediment management can be predicted, so that the most preferable sediment management can be decided. The results indicate that the studied cases of sediment management tend to have negative impacts on sediment utilization, but positive impacts on safety and environmental condition. Evaluation from the socio-economic point of view shows that the case study of sediment management reduces job opportunity and additional income for inhabitants as well as tax income for the government. Therefore, it is necessary to adopt an additional policy for creating job opportunities for inhabitants to support these sediment management measures.
Keywords: Merapi, sediment, management, evaluation
6384 Graded Orientation of the Linear Polymers
Authors: Levan Nadareishvili, Roland Bakuradze, Barbara Kilosanidze, Nona Topuridze, Liana Sharashidze, Ineza Pavlenishvili
Abstract:
Some regularities of the formation of a new structural state of thermoplastic polymers, the gradually oriented (stretched) state (GOS), are discussed. Transition into the GOS is realized by graded oriented stretching: either by the action of an inhomogeneous mechanical field on isotropic linear polymers, or by zone stretching implemented on a standard tensile-testing machine using a specially designed zone stretching device (ZSD). Both technical approaches (especially the zone stretching method) allow control of such quantitative parameters of gradually oriented polymers as the range of change in relative elongation/orientation degree, the length over which this change occurs, and its profile (linear, hyperbolic, parabolic, logarithmic, etc.). The possibility of obtaining functionally graded materials (FGMs) by the graded orientation method is briefly discussed. The uniaxial graded stretching method should be considered an effective technological solution for creating polymer materials with a predetermined gradient of physical properties.
Keywords: Controlled graded stretching, gradually oriented state, linear polymers, zone stretching device.
6383 Unsteady MHD Flow of an Incompressible Elastico-Viscous Fluid in a Tube of Spherical Cross Section on a Porous Boundary
Authors: Sanjay Baburao Kulkarni
Abstract:
An exact solution for the unsteady MHD flow of an elastico-viscous fluid through a porous medium in a tube of spherical cross section, under the influence of a magnetic field and a constant pressure gradient, is obtained in this paper. Initially, the flow is generated by a constant pressure gradient. After the steady state is attained, the pressure gradient is suddenly withdrawn, and the resulting fluid motion in the tube is investigated, taking into account the porosity factor and the magnetic parameter of the bounding surface. The problem is solved in two stages: the first is the steady motion in the tube under a constant pressure gradient; the second concerns the unsteady motion. The problem is solved using the separation of variables technique. The results are expressed in terms of a non-dimensional porosity parameter (K), a magnetic parameter (m) and an elastico-viscosity parameter (β), which depends on the non-Newtonian coefficient. The flow parameters are found to be identical with the Newtonian case as the elastico-viscosity and magnetic parameters tend to zero and the porosity tends to infinity. The elastico-viscosity parameter, the porosity parameter and the magnetic parameter of the bounding surface are seen to have a significant effect on the velocity.
Keywords: Elastico-viscous fluid, Porous media, Second order fluids, Spherical cross-section, Magnetic parameter.
6382 Laban Movement Analysis Using Kinect
Authors: Ran Bernstein, Tal Shafir, Rachelle Tsachor, Karen Studd, Assaf Schuster
Abstract:
Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data might be significantly leveraged if the Laban qualities could be recognized automatically. This paper presents an automated method for recognizing Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft's Kinect V2 sensor.
Keywords: Laban Movement Analysis, Kinect, machine learning.
6381 Comparative Analysis of Two Modeling Approaches for Optimizing Plate Heat Exchangers
Authors: Fábio A. S. Mota, Mauro A. S. S. Ravagnani, E. P. Carvalho
Abstract:
In the present paper the design of plate heat exchangers is formulated as an optimization problem considering two mathematical models. The number of plates is the objective function to be minimized, with some configuration parameters considered implicitly. Screening is the optimization method used to solve the problem: thermal and hydraulic constraints are verified, non-viable solutions are discarded, and the method searches for convergence to the optimum, if it exists. A case study is presented to test the applicability of the developed algorithm. The results are consistent with the literature.
Keywords: Plate heat exchanger, optimization, modeling, simulation.
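The screening strategy described above, walking the discrete design space in increasing order of the objective and discarding non-viable designs, reduces to a short loop; the constraint models below are toy stand-ins, not the paper's thermal-hydraulic correlations:

```python
def screen_plates(n_min, n_max, is_feasible):
    """Screening: test candidate plate counts in increasing order of the
    objective and return the first design passing all constraints."""
    for n in range(n_min, n_max + 1):
        if is_feasible(n):
            return n
    return None   # no viable configuration in the range

# Hypothetical constraints: enough heat-transfer area for the duty, and a
# pressure drop that shrinks with more parallel channels (toy models only).
def feasible(n, duty_area=7.5, area_per_plate=0.5, dp_limit=50.0):
    area_ok = n * area_per_plate >= duty_area   # thermal constraint
    dp_ok = 400.0 / n <= dp_limit               # hydraulic constraint
    return area_ok and dp_ok
```

Because candidates are visited in increasing order of the objective, the first feasible design is also the optimum, so no further search is needed once one passes.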
6380 Modified Naïve Bayes Based Prediction Modeling for Crop Yield Prediction
Authors: Kefaya Qaddoum
Abstract:
Most greenhouse growers desire a predictable amount of yield in order to accurately meet market requirements. The purpose of this paper is to build a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, using an L1 penalty, is capable of clearing out redundant predictors, and a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors, and the method is tested on a WSG data set of tomato yields, where there are many more predictors than data points and predicting the weekly yield is the goal. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN), and is shown to perform fairly well.
Keywords: Tomato yields prediction, naive Bayes, redundancy
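For context, the plain Gaussian naive Bayes baseline that the paper modifies can be sketched as follows (the L1/LARS regularization itself is not reproduced here):

```python
import math

def fit_gnb(X, y):
    """Per-class prior plus mean/variance of each feature (Gaussian naive Bayes)."""
    model = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        stats = []
        for j in range(len(X[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9  # smoothing
            stats.append((mu, var))
        model[c] = (len(rows) / len(y), stats)
    return model

def predict_gnb(model, x):
    def loglik(c):
        prior, stats = model[c]
        ll = math.log(prior)
        for xi, (mu, var) in zip(x, stats):
            # log density of a Gaussian, features assumed independent given the class
            ll += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        return ll
    return max(model, key=loglik)
```

A redundant predictor adds the same log-likelihood contribution to every class twice over, which biases the posterior; that is the weakness the paper's L1-penalized construction is designed to remove.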
6379 Modeling and Visualizing Seismic Wave Propagation in Elastic Medium Using Multi-Dimension Wave Digital Filtering Approach
Authors: Jason Chien-Hsun Tseng, Nguyen Dong-Thai Dao, Chong-Ching Chang
Abstract:
A novel PDE solver using the multidimensional wave digital filtering (MDWDF) technique to obtain the solution of a 2D seismic wave system is presented. In essence, the continuous physical system, represented by a linear Kirchhoff circuit, is transformed into an equivalent discrete dynamic system implemented by an MDWDF circuit. This amounts to numerically approximating the differential equations that describe the elements of a multidimensional passive electronic circuit by grid-based difference equations, implemented via the so-called state quantities within the passive MDWDF circuit, so that the digital model can track the wave field on a dense 3D grid of points. Details of how to transform the continuous system into the desired discrete passive system are addressed. In addition, initial and boundary conditions are properly embedded into the MDWDF circuit in terms of state quantities. Graphical results clearly demonstrate physical effects of seismic wave (P-wave and S-wave) propagation, including radiation, reflection, and refraction from and across hard boundaries. A comparison between the MDWDF technique and the finite difference time domain (FDTD) approach is also made in terms of computational efficiency.
Keywords: Seismic wave propagation, multi-dimension wave digital filters, partial differential equations.
6378 Estimating of the Renewal Function with Heavy-tailed Claims
Authors: Rassoul Abdelaziz
Abstract:
We develop a new estimator of the renewal function for heavy-tailed claim amounts. Our approach is based on the peaks-over-threshold method, estimating the tail of the distribution with a generalized Pareto distribution. The asymptotic normality of an appropriately centered and normalized estimator is established, and its performance is illustrated in a simulation study.
Keywords: Renewal function, peaks-over-threshold, POT method, extreme values, generalized Pareto distribution, heavy-tailed distribution.
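The peaks-over-threshold building block can be sketched as follows, using a method-of-moments GPD fit for simplicity (the paper's estimator and its asymptotics are not reproduced). For the GPD, mean = sigma/(1-xi) and variance = sigma^2/((1-xi)^2 (1-2 xi)), which invert to the moment estimators below:

```python
import math

def fit_gpd_mom(excesses):
    """Method-of-moments fit of a generalized Pareto distribution to the
    excesses over a chosen threshold."""
    n = len(excesses)
    m = sum(excesses) / n
    s2 = sum((e - m) ** 2 for e in excesses) / (n - 1)
    r = m * m / s2
    xi = 0.5 * (1.0 - r)          # shape
    sigma = 0.5 * m * (1.0 + r)   # scale
    return xi, sigma

def tail_prob(x, u, xi, sigma, n_exceed, n_total):
    """POT estimate of P(X > x) for x above the threshold u."""
    if abs(xi) < 1e-12:                      # xi -> 0 limit: exponential tail
        return (n_exceed / n_total) * math.exp(-(x - u) / sigma)
    z = 1.0 + xi * (x - u) / sigma
    if z <= 0.0:                             # beyond the finite endpoint (xi < 0)
        return 0.0
    return (n_exceed / n_total) * z ** (-1.0 / xi)

# Deterministic check: quantiles of an exponential with scale 2 (a GPD with
# xi = 0), so the fitted shape should come out near zero.
excesses = [-2.0 * math.log(1 - (i + 0.5) / 200) for i in range(200)]
xi, sigma = fit_gpd_mom(excesses)
```

The fitted tail then replaces the empirical tail of the claim distribution wherever the renewal function needs probabilities beyond the observed data.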
6377 Delaunay Triangulations Efficiency for Conduction-Convection Problems
Authors: Bashar Albaalbaki, Roger E. Khayat
Abstract:
This work is a comparative study of the effect of Delaunay triangulation algorithms on the discretization error for conduction-convection conservation problems. A structured triangulation and many unstructured Delaunay triangulations using three popular algorithms for node placement strategies are used. The numerical method employed is the vertex-centered finite volume method. It is found that when the computational domain can be meshed using a structured triangulation, the discretization error is lower for structured triangulations than for unstructured ones only at low Peclet numbers, i.e. when conduction is dominant. However, as the Peclet number is increased and convection becomes more significant, the unstructured triangulations reduce the discretization error. Also, no statistical correlation between triangulation angle extremums and the discretization error is found using 200 samples of randomly generated Delaunay and non-Delaunay triangulations. Thus, the angle extremums cannot serve as an indicator of the discretization error on their own and need to be combined with other triangulation quality measures, which is the subject of further studies.
Keywords: Conduction-convection problems, Delaunay triangulation, discretization error, finite volume method.
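The triangulation angle extrema that the study correlates against discretization error are easy to compute. The sketch below builds the kind of structured triangulation the abstract mentions (each grid cell split along a diagonal) and measures its angle extrema via the law of cosines; the helper names are illustrative, not from the paper.

```python
import numpy as np

def triangle_angles(p0, p1, p2):
    """Interior angles (radians) of a triangle via the law of cosines."""
    a = np.linalg.norm(p1 - p2)  # side opposite p0
    b = np.linalg.norm(p0 - p2)  # side opposite p1
    c = np.linalg.norm(p0 - p1)  # side opposite p2
    A = np.arccos((b * b + c * c - a * a) / (2 * b * c))
    B = np.arccos((a * a + c * c - b * b) / (2 * a * c))
    return A, B, np.pi - A - B

# structured triangulation of the unit square: each grid cell is
# split along its diagonal into two right triangles
n = 4
xs, ys = np.meshgrid(np.linspace(0, 1, n + 1), np.linspace(0, 1, n + 1))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
tris = []
for j in range(n):
    for i in range(n):
        k = j * (n + 1) + i
        tris.append((k, k + 1, k + n + 1))
        tris.append((k + 1, k + n + 2, k + n + 1))

angles = np.array([triangle_angles(*pts[list(t)]) for t in tris])
amin, amax = np.degrees(angles.min()), np.degrees(angles.max())
# for this mesh every triangle is a 45-45-90 right triangle
```

For unstructured Delaunay meshes the same per-triangle computation yields the extrema whose (lack of) correlation with discretization error the study reports.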
Dispersed Error Control based on Error Filter Design for Improving Halftone Image Quality
Authors: Sang-Chul Kim, Sung-Il Chien
Abstract:
The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous grayscale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradient of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
Keywords: Artifact suppression, edge enhancement, error diffusion method, halftone image.
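For reference, the baseline error diffusion the abstract builds on can be sketched with the classic Floyd-Steinberg weights. This shows only the standard method; the paper's contribution (edge-enhancing filter and noise-perturbed weights) is not reproduced here.

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic error-diffusion halftoning: quantize each pixel to 0/255
    and diffuse the quantization error to the not-yet-visited
    neighbours with the Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

gray = np.full((32, 32), 128, dtype=np.uint8)  # mid-gray test patch
halftone = floyd_steinberg(gray)               # binary, mean preserved
```

On a flat mid-gray patch the fixed weights produce exactly the regular textures ("worms") that the proposed adaptive noise in the weights is designed to break up.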
Improving Teacher Professionalism through Certification Program: An Indonesian Case Study
Authors: Triyanto
Abstract:
The Government of Indonesia has run a certification program to enhance the professionalism of teachers, using portfolio assessment. This research discusses the effectiveness of the certification program in enhancing teacher professionalism in Indonesia. The portfolio assessment method has drawbacks: certified teachers do not show significant performance improvement. Therefore, the government has replaced the portfolio assessment method with education and training for teachers.
Keywords: Professionalism, teacher, certification, Indonesia.
Single Image Defogging Method Using Variational Approach for Edge-Preserving Regularization
Authors: Wan-Hyun Cho, In-Seop Na, Seong-Chae Seo, Sang-Kyoon Kim, Soon-Young Park
Abstract:
In this paper, we propose a variational approach to the single-image defogging problem. To infer the atmospheric veil, we define a new functional that satisfies an edge-preserving regularization property. Using the fundamental lemma of the calculus of variations, we derive the Euler-Lagrange equation for the atmospheric veil, whose solution extremizes the functional. This equation is solved by a gradient descent method with a time parameter. Having obtained the estimated atmospheric veil, we restore the image using the inferred veil. Finally, we improve the contrast of the restored image with various histogram equalization methods. The experimental results show that the proposed method achieves rather good defogging results.
Keywords: Image defogging, Image restoration, Atmospheric veil, Transmission, Variational approach, Euler-Lagrange equation, Image enhancement.
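The gradient descent solution of an edge-preserving Euler-Lagrange equation can be illustrated with a generic smoothed total-variation functional. This is an assumption-laden sketch: the paper's exact functional is not given in the abstract, so the energy below, E(v) = 0.5·||v - w||² + lam·Σ sqrt(|∇v|² + eps²), and the function name `denoise_veil` are stand-ins.

```python
import numpy as np

def denoise_veil(w, lam=0.1, tau=0.1, eps=1e-3, iters=200):
    """Explicit gradient descent (time-stepping) on a smoothed
    total-variation energy, a generic edge-preserving regularizer
    standing in for the paper's functional.  tau plays the role of
    the time parameter mentioned in the abstract."""
    v = w.copy()
    for _ in range(iters):
        # forward differences of v (zero beyond the boundary)
        gx = np.diff(v, axis=1, append=v[:, -1:])
        gy = np.diff(v, axis=0, append=v[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / mag, gy / mag
        # divergence of the normalized gradient field (adjoint of diff)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Euler-Lagrange gradient: fidelity term minus lam * divergence
        v -= tau * ((v - w) - lam * div)
    return v

rng = np.random.default_rng(1)
noisy = rng.standard_normal((24, 24))     # toy "rough" veil estimate
veil = denoise_veil(noisy, lam=0.2)       # smoother, edges less penalized
```

Because the diffusivity 1/mag shrinks where the gradient is large, strong edges in the veil are smoothed much less than flat regions, which is the edge-preserving behaviour the functional is designed for.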
Effect of Non Uniformity Factors and Assignment Factors on Errors in Charge Simulation Method with Point Charge Model
Authors: Gururaj S. Punekar, N. K. Kishore, H. S. Y. Shastry
Abstract:
The Charge Simulation Method (CSM) is one of the most widely used numerical field computation techniques in High Voltage (HV) engineering. High-voltage fields of varying non-uniformity are encountered in practice. Since CSM programs are case specific, the simulation accuracy depends heavily on the user's (programmer's) experience. This is an effort to understand CSM errors and evolve guidelines for setting up accurate CSM models, relating non-uniformities to assignment factors. The results are for the six-point-charge model of the sphere-plane gap geometry. Using a genetic algorithm (GA) as a tool, optimum assignment factors at different non-uniformity factors for this model have been evaluated and analyzed. It is shown that symmetrically placed six-point-charge models can be good enough to set up CSM programs with potential errors of less than 0.1% when the field non-uniformity factor is greater than 2.64 (field utilization factor less than 52.76%).
Keywords: Assignment factor, Charge Simulation Method, High Voltage, Numerical field computation, Non uniformity factor, Simulation errors.
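The core CSM machinery, placing fictitious charges at a fraction (the assignment factor) of the electrode radius and solving for their magnitudes so that contour points on the electrode surface sit at the applied potential, can be sketched as follows. This toy isolated-sphere version omits the grounded plane (and hence the image charges) of the paper's sphere-plane gap; the function names are illustrative.

```python
import numpy as np

def csm_sphere(radius=1.0, v_electrode=1.0, assignment=0.5, n=6):
    """Toy charge simulation: n fictitious point charges placed inside
    a sphere at r = assignment * radius, with contour points on the
    surface along the same radial directions.  Solving P q = V yields
    charges that reproduce the electrode potential at the contour
    points."""
    ang = np.linspace(0.0, np.pi, n)  # polar angles in a meridian plane
    chg = assignment * radius * np.stack([np.sin(ang), np.cos(ang)], 1)
    cnt = radius * np.stack([np.sin(ang), np.cos(ang)], 1)
    # potential coefficients P_ij = 1/|r_i - s_j|; the physical factor
    # 1/(4*pi*eps0) cancels once the charges are solved for, so we
    # work in normalized units
    P = 1.0 / np.linalg.norm(cnt[:, None, :] - chg[None, :, :], axis=2)
    q = np.linalg.solve(P, np.full(n, v_electrode))
    return chg, q

def potential(points, chg, q):
    """Superposition of the point-charge potentials at given points."""
    d = np.linalg.norm(points[:, None, :] - chg[None, :, :], axis=2)
    return (q / d).sum(axis=1)

chg, q = csm_sphere()
ang = np.linspace(0.0, np.pi, 6)
surface = np.stack([np.sin(ang), np.cos(ang)], 1)
pot = potential(surface, chg, q)   # equals v_electrode at contour points
```

The simulation error the paper studies is then measured at *check* points between the contour points, where the superposed potential deviates slightly from the electrode potential; the assignment factor controls how large that deviation is.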
Signal Reconstruction Using Cepstrum of Higher Order Statistics
Authors: Adnan Al-Smadi, Mahmoud Smadi
Abstract:
This paper presents an algorithm for reconstructing the phase and magnitude responses of a system's impulse response when only the output data are available. The system is driven by a zero-mean, independent, identically distributed (i.i.d.) non-Gaussian sequence that is not observed. The additive noise is assumed to be Gaussian. This is an important and essential problem in many practical applications across science and engineering, such as biomedical, seismic, and speech signal processing. The method is based on evaluating the bicepstrum of the third-order statistics of the observed output data. Simulation results demonstrating the performance of the method are presented.
Keywords: Cepstrum, bicepstrum, third-order statistics.
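The second-order cepstrum underlying the method is simple to compute and already separates convolutive structure such as echoes. This sketch shows only that simpler real cepstrum (which discards phase); the paper's bicepstrum, built from the third-order statistics, is what recovers the phase response as well.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.
    A convolution in time becomes an addition in the cepstral
    (quefrency) domain, so an echo shows up as a peak at its delay."""
    X = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(X) + 1e-12)).real

rng = np.random.default_rng(0)
s = rng.standard_normal(1024)       # unobserved i.i.d. driving sequence
delay, a = 50, 0.8                  # echo delay (samples) and gain
echo = np.zeros_like(s)
echo[delay:] = a * s[:-delay]
c = real_cepstrum(s + echo)         # peak near quefrency = delay
```

The echo contributes log|1 + a e^{-i omega d}| to the log spectrum, whose cepstrum has rahmonic peaks at multiples of the delay d, which is why the peak location identifies the echo.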